Direct Rendering

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
Pype.Clicker
Member
Posts: 5964
Joined: Wed Oct 18, 2006 2:31 am
Location: In a galaxy, far, far away
Contact:

Direct Rendering

Post by Pype.Clicker »

Just an idea that hit /dev/brain when I was at IWAN'05 ...

In Clicker, I want to avoid the "in-kernel widget library", but I also want to avoid having things like GTK or Qt, where the user program actually does all the dirty work across a not-that-fast IPC mechanism.

One of the ideas i'd like to try is to have plugins for the video server (a dedicated program owning the graphic hardware resources) that can parse 'higher-level' descriptions such as HTML documents, XML dialog boxes, SVG or PDF(?) graphics but also a specific language for describing how things like menus, text captures, etc. should work, that can be interpreted or JIT-compiled on the server side for quick response of "usual" events.

But what can we do about programs like MPlayer that have to update a whole picture every 1/24th of a second? We still can't let them write directly to the screen buffer ... and we cannot expose just a part of the screen ...

No, but we can export to their address space a portion of offscreen memory, making sure that no one else is going to write there, and that if the program writes 'beyond' the window it should write to, it will still not screw up the display.

Once the new frame is rendered offscreen, the program can inform the video server, which will then use hardware-accelerated blitting to put the frame onscreen ^_^
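That map-then-blit handshake could be sketched roughly like this. Everything here is invented for illustration (the `offscreen_buf` layout, `client_present`, `server_blit`), and the IPC notification is simulated with a flag in shared memory rather than a real message:

```c
#include <stdint.h>
#include <string.h>

#define FRAME_W 320
#define FRAME_H 240

/* Hypothetical shared region: the server maps this into the client,
 * so the client can only ever touch its own offscreen pixels. */
typedef struct {
    uint32_t pixels[FRAME_W * FRAME_H]; /* client-visible offscreen memory */
    int      frame_ready;               /* set by client, cleared by server */
} offscreen_buf;

/* Client side: render a full frame offscreen, then signal the server
 * (in a real system this would be an IPC message, not a flag). */
void client_present(offscreen_buf *buf, const uint32_t *frame)
{
    memcpy(buf->pixels, frame, sizeof buf->pixels);
    buf->frame_ready = 1;
}

/* Server side: on "frame ready", copy the offscreen buffer into the
 * window's on-screen rectangle. A memcpy stands in for the
 * hardware-accelerated blit. Returns 1 if a frame was presented. */
int server_blit(offscreen_buf *buf, uint32_t *screen)
{
    if (!buf->frame_ready)
        return 0;
    memcpy(screen, buf->pixels, sizeof buf->pixels);
    buf->frame_ready = 0;
    return 1;
}
```

The point of the design is that the client never sees screen memory at all: even a misbehaving client can only corrupt its own offscreen region.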

(i'm probably not the first one to think about it, but i thought you might like to hear it)
CopperMan

Re:Direct Rendering

Post by CopperMan »

I think frames of, e.g., a movie should be rendered to an overlay (which is also offscreen and can be mapped into the user process's address space), so bit-blitting isn't needed.

Re:Direct Rendering

Post by Pype.Clicker »

as overlays? ...

You mean telling the graphics card "here's my main screen, and here's the overlay screen; combine both of them and give me the display"?

If that's the case, how can you prevent the overlay screen from actually covering the whole display?
distantvoices
Member
Posts: 1600
Joined: Wed Oct 18, 2006 11:59 am
Location: Vienna/Austria
Contact:

Re:Direct Rendering

Post by distantvoices »

I'd have the applications share an offscreen buffer with the GUI service for drawing custom controls which aren't included in the GUI service's standard control set (which is intended to be rather minimalistic - menus, tabs, buttons, text line edits and other small things).

These controls are completely drawn by the client - by the use of library functions. The window looper fetches the clip set for the window in question from the server and then starts the default drawing function - if any is registered. If none is registered, it's up to the application to issue either sync or sync_area to have the server update the affected region. Nothing is committed to video memory before that.
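A minimal sketch of that clipping discipline, assuming a shared backbuffer and a server-supplied clip list. The `rect` layout and the `plot` helper are invented names, not the actual protocol:

```c
#include <stdint.h>

/* A clip rectangle handed out by the server. */
typedef struct { int x, y, w, h; } rect;

/* Does (px,py) fall inside any of the server-granted clip rectangles? */
int clipped_ok(const rect *clips, int nclips, int px, int py)
{
    for (int i = 0; i < nclips; i++) {
        const rect *r = &clips[i];
        if (px >= r->x && px < r->x + r->w &&
            py >= r->y && py < r->y + r->h)
            return 1;
    }
    return 0;
}

/* Client-side plot into the shared backbuffer: writes outside the clip
 * set are silently dropped, so a buggy client can't scribble over
 * pixels that belong to other windows. */
void plot(uint32_t *backbuf, int stride, const rect *clips, int nclips,
          int x, int y, uint32_t color)
{
    if (clipped_ok(clips, nclips, x, y))
        backbuf[y * stride + x] = color;
}
```

After drawing, the client would issue the sync or sync_area request described above; only then does the server copy the affected region to video memory.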

Also, I consider HTML rendering something to be done client-side - in the infamous custom control, with a dedicated control set - ah, I know, it sounds atrocious. I'm gonna rethink this. For now I'm busy with cascaded full adders (they are genius, for they can handle two's complement without problems), multiplexers, demultiplexers, set/reset flip-flops, level-triggered flip-flops, edge-triggered flip-flops ... and blah. I'm not in a mind ready for deeeeeeep osdeving currently. *gg* Oh, and I'm waiting for some parts for my new watercooling stuff. I'm crazy, yeah.

stay safe.

PS: this stems from talking out of the needle box, eh?
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image

Re:Direct Rendering

Post by Pype.Clicker »

Oh, probably I should have made it more obvious that I was talking about off-screen VRAM ... not just plain memory (which would mean there was no real point in starting a thread for it).
distantvoices wrote: I consider html rendering something to be done client side - in the infamous custom control, with a dedicated control set - ah, I know, it sounds atrocious.
Well, I'm not thinking of a full-fledged implementation of HTML on the server side, but it would be great if you could just send the server text with alignment, font selection, color hints, table structure (so that you can align stuff) and picture support ...

It would also be quite interesting to be able to tell the video server: "this is a 'sensitive' area; report mouse over/out events or clicks that strike here."
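On the server side, that "sensitive area" idea could look roughly like the hit test below. The `sensitive_area` structure and `route_click` are invented names for illustration:

```c
/* A rectangle a client has registered interest in, plus who owns it. */
typedef struct { int x, y, w, h; int client_id; } sensitive_area;

/* Server-side hit test: which client (if any) should hear about a
 * click at (mx,my)? Returns -1 when no registered area is hit, so
 * uninteresting events never cross the IPC boundary at all. */
int route_click(const sensitive_area *areas, int n, int mx, int my)
{
    for (int i = n - 1; i >= 0; i--) {  /* later-registered area wins */
        const sensitive_area *a = &areas[i];
        if (mx >= a->x && mx < a->x + a->w &&
            my >= a->y && my < a->y + a->h)
            return a->client_id;
    }
    return -1;
}
```

The pay-off is exactly what the thread is after: the server filters the event stream, and clients only wake up for events they asked about.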

Re:Direct Rendering

Post by distantvoices »

That's memory to be accessed via rather slow PCI-to-VRAM transfers, eh? Well, I reckon it's quick enough for bulk memmoves once everything is done, but what about drawing lines and the like?

This sensitive region might just as well be the active window receiving mouse-move events. If there's no handler registered - no bounce, no play, isn't it?

Ah, what joy ...
JoeKayzA

Re:Direct Rendering

Post by JoeKayzA »

Pype.Clicker wrote: One of the ideas i'd like to try is to have plugins for the video server (a dedicated program owning the graphic hardware resources) that can parse 'higher-level' descriptions such as HTML documents, XML dialog boxes, SVG or PDF(?) graphics but also a specific language for describing how things like menus, text captures, etc. should work, that can be interpreted or JIT-compiled on the server side for quick response of "usual" events.
I had a similar idea for my graphics service, however 'limited' mostly to scalable vector graphics. The idea was to ensure that every drawn control or form should be scalable independently from the chosen screen resolution. (well, that's what vector graphics are about ;) )

About the extension mechanism: it sounds very interesting, especially useful when, for example, your terminal and the app server are connected only through a low-speed or high-latency network. The 'problem' that I see here is just this: when you give clients the possibility to draw directly to offscreen buffers, how should that work remotely (at reasonable speed)? This is not really a rant; I just thought it could be useful to separate the lower-level video server (which deals only with display output, and maybe understands vector graphics commands too) from a higher-level 'application GUI server' (which also deals with reacting to input directly, like mouseover hover effects and the like). Just an idea...
Pype.Clicker wrote: Once the new frame is rendered offscreen, the program can inform the video server that will then use hardware-accelerated blitting to put the frame onscreen ^_^

(i'm probably not the first one to think about it, but i thought you might like to hear it)
AFAIK, this is quite how X.org's composite-extension works.

cheers Joe
nick8325
Member
Posts: 200
Joined: Wed Oct 18, 2006 5:49 am

Re:Direct Rendering

Post by nick8325 »

Pype.Clicker wrote: Just an idea that hit /dev/brain when i was at IWAN'05 ...

In Clicker, i want to avoid the "in-kernel widget library", but i also want to avoid to have things like GTK or Qt where the user program actually does all the dirty job across a not-that-fast IPC mechanism.

One of the ideas i'd like to try is to have plugins for the video server (a dedicated program owning the graphic hardware resources) that can parse 'higher-level' descriptions such as HTML documents, XML dialog boxes, SVG or PDF(?) graphics but also a specific language for describing how things like menus, text captures, etc. should work, that can be interpreted or JIT-compiled on the server side for quick response of "usual" events.
Mac OS X uses something a bit similar to this, called Display PDF. As far as I understand it (read: take with a pinch of salt), programs pass bits of PDF (normally generated by a library, I suppose) to the display process, which uses those to render things when it needs to.

I can't seem to find any good documentation for that, though. It's based on Display PostScript, which NeXTSTEP used. The interesting thing about PostScript is that, despite being originally intended for printers, it's a full programming language (stack-based, like Forth). There's a good amount of information, including links to online books, at Wikipedia. It's been a while since I looked at it, so I can't remember exactly how DPS worked, but you could certainly do a lot of work in the server.

With something like that, you could do quite a few cool things - for example, the application (or maybe the system) could define a toggle button. The toggle button would respond to mouse clicks by toggling its state, within the server - rather than the server sending the client a mouse click event, the client sending it off to a GUI library, the library toggling the state and asking to redraw on the server.

You could perhaps even put the whole GUI in the server, and have the server and client only talk to each other for important things (e.g. 'user has asked to open a file' and not 'user has moved the mouse into the window').
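That server-resident toggle button could be sketched as below. All names are invented; the point is just that the state flip and redraw happen inside the server, and only an optional high-level notification goes back to the client:

```c
/* A widget whose state lives in the server, not the client. */
typedef struct {
    int state;          /* 0 = off, 1 = on; owned by the server */
    int notify_client;  /* does the client care about flips at all? */
} toggle_button;

/* Called by the server's input loop when a click hits the button.
 * The server flips the state and redraws the widget locally; no GUI
 * library round-trip through the client is needed. Returns 1 when a
 * (hypothetical) "state changed" message should be queued to the
 * client - the only traffic the design generates. */
int toggle_on_click(toggle_button *b)
{
    b->state = !b->state;
    /* ... server redraws the widget from its own description here ... */
    return b->notify_client;
}
```

Compare that with the conventional path the post describes: click event to client, client to GUI library, library back to server for the redraw - three crossings instead of at most one.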
Exabyte256

Re:Direct Rendering

Post by Exabyte256 »

Here's something that you should check out: it's an open-source direct media layer for Linux, Windows and everything else. It works great too; I've tried some applications made with it. Looking at the source should give some ideas on how this graphics stuff works.

http://www.libsdl.org/

Re:Direct Rendering

Post by Pype.Clicker »

beyond infinity wrote: that's memory to be accessed via rather slow pci_to_vram things, eh? Well, I reckon it's quick enough for bulk memmoves once everything is done, but how about drawing lines and sorta?
You'll notice that 'map an offscreen buffer locally' is not the only proposal: it's one of the options we have, and it should perfectly fit applications such as parallax-scrolling games, windowed movie rendering, and every other application where you have a full new frame to render every time.

If you're dealing with line drawing (e.g. rendering a vector drawing), then you'll more likely use an on-server renderer that updates the on-screen content directly, driven by 'commands' sent through a stream.
beyond infinity wrote: this sensitive region can as well be the active window receiving mousemove events. If there's no handler registered - no bounce no play, isn't it?
Yes, indeed, we could also send all the information for the display as well as the substructure of the text (e.g. where the 'active subwindows' corresponding to hyperlinks are). However, I feel like it requires unneeded synchronization between the actual rendering plugin and the application that builds the 'subwindows' list, which could lead to incoherence between what's displayed and what's reactive as soon as there's a 1-pixel error in the line height, for instance.
Exabyte256 wrote: Here's something that you should check out, it's an opensource direct media layer for Linux, Windows and everything else.
Could you be more specific about what should be checked out? I mean, I know of SDL and what it's good for, but that's just an API, right? What does it tell us about what's going on underneath?
nick8325 wrote: It's been a while since I looked at it, so I can't remember exactly how DPS worked, but you could certainly do a lot of work in the server.
Yes, there's certainly a good load of inspiration from DPS in the "Y-window" idea ... However, I don't think a PostScript-based system would be able to render, e.g., a view of a tree list. Considering how hard it is with X Window to get good performance from highly reactive widgets (menu flickering still happens too often, imho), and how poorly suited to over-network display things like the-small-animation-that-says-you're-waiting are, I think on-server execution of client-submitted code can be quite helpful (just like JavaScript is helpful to the Web, actually ;)
nick8325 wrote: You could perhaps even put the whole GUI in the server, and have the server and client only talk to each other for important things (e.g. 'user has asked to open a file' and not 'user has moved the mouse into the window').
Yes, that would be roughly the idea: send the description of the menubar, receive a 'file menu clicked' event, send back the description of the menu's content and wait until an action of that menu is selected, then receive the "File_Save_Request" message.

However, sending commands for rendering the menu with item 1, then item 2, then item 3 highlighted while the mouse is rolling over the menu is - imvho - a waste of time.
JoeKayzA wrote: The 'problem' that I see here is just this: When you give the possibility for clients to draw directly to offscreen buffers, how should that work remotely?
Several options could be envisaged, and they can all be implemented by a "proxy/stub" code that hooks the "offscreen buffer updated" message:
- just inform the application that, since it's running over a remote connection, there's simply no option of doing what it wants. E.g. if you're trying to run a DivX player over ADSL ... hmm, expect dialog boxes.
- use on-the-fly MPEG compression between the previously sent frame and the new frame, then decode the MPEG stream on the 'rendering' side.
- or send each frame 'raw' over the network connection (and pray for good display quality).
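The three options could hang off a single dispatch in that proxy/stub hook. The enums and the `pick_transport` function are invented for illustration; a real proxy would of course classify the link by measured bandwidth and latency:

```c
/* How the client and the display are connected (invented taxonomy). */
typedef enum { LINK_LOCAL, LINK_LAN, LINK_SLOW } link_kind;

/* What to do with each "offscreen buffer updated" event. */
typedef enum { XFER_REFUSE, XFER_COMPRESSED, XFER_RAW } xfer_mode;

/* Proxy-side policy mirroring the three options above:
 * - local display: ship raw frames (option 3);
 * - LAN: inter-frame compression, MPEG-style (option 2);
 * - slow link: tell the app it can't run remotely (option 1). */
xfer_mode pick_transport(link_kind link)
{
    switch (link) {
    case LINK_LOCAL: return XFER_RAW;
    case LINK_LAN:   return XFER_COMPRESSED;
    default:         return XFER_REFUSE;
    }
}
```

Because the decision lives in the proxy/stub, the application itself never needs to know which transport was picked - it just renders frames and signals "buffer updated" as usual.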

Re:Direct Rendering

Post by distantvoices »

I have a slight feeling of being misunderstood here.

Maybe I talk too much rubbish around the edges?

See, I don't think it is a good idea to keep a backbuffer PER window. Keep one global backbuffer and have the client draw there (send him clips - that's protocol-relevant stuff; we don't want things not belonging to the client to be overdrawn by the client, do we?). Use the same backbuffer to draw server-side stuff. Server-side stuff is of course drawn prior to client-side stuff - if the server has a higher priority than the client, that is.
Pype.Clicker wrote: Yes, indeed, we could also send all the information for the display as well as the substructure of the text (e.g. where the 'active subwindows' corresponding to hyperlinks are) ... however, I feel like it requires unneeded synchronization between the actual rendering plugin and the application that builds the 'subwindows' list, which could lead to incoherence between what's displayed and what's reactive as soon as there's a 1-pixel error in the line height, for instance.
Events are dealt out by the server, which has total control over the windows. The server is responsible for keeping everything in plain order. Straightforward and simple. BTW: do you mean controls by the term "subwindow"?

Hm. At least that's how I'm doing this stuff, and till now it seems to work well - not like a charm, but well.

Your mileage may of course vary.

Stay safe & have a good day :-)
JoeKayzA

Re:Direct Rendering

Post by JoeKayzA »

Pype.Clicker wrote: ..., I think on-server execution of client-submitted code can be quite helpful (just like JavaScript is helpful to the Web, actually ;)
I would even say that this is mandatory as soon as you want to process, say, mouseover highlights or expanding/collapsing tree views directly in the server. So what did you have in mind? A real scripting language (which would need a full interpreter then...)? I'd rather go with something like Flash here (which is, effectively, the same, but it's just bytecode instead of cleartext - saves bandwidth). In both cases you'd need full runtime support on the server, and this was one of the reasons why I meant to split the plain 'display server', which deals only with output - nothing else - from a full-blown 'GUI server', which also handles input and can process events internally, as long as they don't need the client's attention.
Pype.Clicker wrote: - use on-the-fly MPEG compression between previously sent frame and new frame, then decode MPEG on the 'rendering' side.
Well then you could also send the whole video stream directly over the net and render it on the server entirely. A question arises: What do I need a media player application for, then? :D

cheers Joe

Re:Direct Rendering

Post by Pype.Clicker »

JoeKayzA wrote: So what did you have in mind? A real scripting language (which would need a full interpreter then...)? I'd rather go with something like Flash here (which is, effectively, the same, but it's just bytecode instead of cleartext - saves bandwidth).
Honestly, I'm more the bytecode type of guy. I've been busy designing a virtual processor for my research activities recently, and once you know how to handle it, you can come up with a quite efficient and compact design. Here the VPU will need to deal with things like structured data, so it may be a bit more complicated, but we can certainly keep it no larger than 16 or 32K.

I wouldn't go for pure Flash, mainly because of Macromedia licensing issues.

JoeKayzA wrote: In both cases you'd need full runtime support on the server, and this was one of the reasons why I meant to split the plain 'display server', which deals only with output - nothing else - from a full-blown 'GUI server', which also handles input and can process events internally, as long as they don't need the client's attention.
Okay, I better understand your point now - at least if "to split" means "to make a distinction between", and not "to implement in two different servers". Nothing but the GUI server is going to need the display server, so it'd be a bit odd to have them separated in code, even if they're different beasts design-wise.
JoeKayzA wrote: Well then you could also send the whole video stream directly over the net and render it on the server entirely.
Pretty useless, ain't it? Unless you want to use a powerful machine on your LAN to decode DivX and watch it on a poor desktop station that only has support for MPEG ;)
Nah, seriously, that was why I also had point 1 (tell the user the application isn't suited to remote execution).
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re:Direct Rendering

Post by Candy »

I agree on the bytecode-interpreting virtual server, although I have no idea what sort of instructions you would want to have.

My guess:

- blit line horz (x,new-x,y,array of bits)
- blit line vert (x,y,new-y,array of bits)
- blit square (x,y,w,h,array of bits)
- fill line horz (x,new-x,y,array of bits)
- fill line vert (x,y,new-y,array of bits)
- fill square (x,y,w,h)
- move block with 6x imm (source x/y, w/h, target x/y)
- alpha mix (x,y,w,h,alpha amount,array of bits)

Those are the ones I can see as immediately required (the alpha would be optional I think). Any stuff I've forgotten?
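One possible (entirely invented) encoding of that instruction list as C enums, with a helper giving the number of 32-bit immediates per opcode. The opcode numbering and operand layout are illustration only; a real design would also carry the pixel arrays for the blit and alpha ops:

```c
/* Candy's proposed instruction set, numbered arbitrarily. */
enum opcode {
    OP_BLIT_HLINE,  /* x, new-x, y          + pixel data */
    OP_BLIT_VLINE,  /* x, y, new-y          + pixel data */
    OP_BLIT_RECT,   /* x, y, w, h           + pixel data */
    OP_FILL_HLINE,  /* x, new-x, y, color               */
    OP_FILL_VLINE,  /* x, y, new-y, color               */
    OP_FILL_RECT,   /* x, y, w, h, color                */
    OP_MOVE_BLOCK,  /* src x/y, w/h, dst x/y (6 imm)    */
    OP_ALPHA_MIX,   /* x, y, w, h, alpha    + pixel data */
};

/* How many 32-bit immediates follow each opcode in this sketch -
 * what a server-side decoder would need to step through a stream. */
int operand_count(enum opcode op)
{
    switch (op) {
    case OP_BLIT_HLINE: case OP_BLIT_VLINE: return 3;
    case OP_BLIT_RECT:                      return 4;
    case OP_FILL_HLINE: case OP_FILL_VLINE: return 4;
    case OP_FILL_RECT:  case OP_ALPHA_MIX:  return 5;
    case OP_MOVE_BLOCK:                     return 6;
    }
    return -1;
}
```

A fixed operand count per opcode keeps the decoder a trivial loop, which matters if the server is going to JIT-compile or interpret these streams on every redraw.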

Re:Direct Rendering

Post by Pype.Clicker »

Bezier curves?

and probably anything you could find in any vector-graphics library (like SVG or PostScript) ... plus invoking native plugins such as font renderers or libpng (nah, we don't want to decode PNG bytes using bytecode)

my idea was to have the _graphic context_ as an object the bytecode can manipulate (in order to change the current color and stroke while keeping the font context, then popping the modifications to revert to the previous situation)

You probably will need instructions to tell the server 'report this event towards the application', in addition (of course) to the usual ALU stuff. Data submitted by the application (such as the content of the treelist you want to operate on) should be kept in a 'databuffer', and we'll probably need operations such as 'locate dictionary entry <key> in the databuffer' or 'locate the item i step(s) further in the current list' ...
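That push/pop graphic context is essentially PostScript's gsave/grestore. A sketch with invented field names and a fixed stack depth:

```c
#include <stdint.h>

/* The attributes a drawing operation consults (invented fields). */
typedef struct {
    uint32_t color;
    int      stroke_width;
    int      font_id;
} gfx_context;

/* A small stack of contexts; stack[top] is the current one. */
typedef struct {
    gfx_context stack[16];
    int         top;
} gfx_state;

/* Push a copy of the current context, so later changes (new color,
 * new stroke) can be reverted while the rest carries over. */
int gc_push(gfx_state *s)
{
    if (s->top + 1 >= 16) return -1;  /* stack full */
    s->stack[s->top + 1] = s->stack[s->top];
    s->top++;
    return 0;
}

/* Pop back to the previous context, discarding local modifications. */
int gc_pop(gfx_state *s)
{
    if (s->top == 0) return -1;  /* nothing to pop */
    s->top--;
    return 0;
}
```

Because push copies the whole context, the bytecode can change just the color while the font selection is inherited for free - the behaviour Pype describes above.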

sketching that on my wiki ...