
Window Manager specifics

Posted: Thu Mar 27, 2008 7:56 am
by lukem95
I'm starting to write my window manager, and I can draw windows etc. with ease, but when it comes to the data inside the windows, it's a different story.

My idea is:

Each window has a data area allocated to it, sized to the dimensions of the window. This data area holds everything the window will print to screen (including the border etc.). I memcpy this data area into my double buffer, which then refreshes the relevant parts (those that need updating) at each screen refresh.

The components (textboxes, scrollbars etc.) are each defined as a separate "component window" and also have a data area defined by their dimensions. This is memcpy'd into the parent window's data area at every refresh.
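Roughly, something like this (just a sketch in C - all the names are invented):

/* Sketch of the scheme above: every window (and component) owns a pixel
   buffer sized to its dimensions, and composition is a plain memcpy per
   row. Assumes a component lies fully inside its parent. */
#include <stdint.h>
#include <string.h>

struct window {
    int x, y, w, h;          /* position within the parent, and size   */
    uint32_t *data;          /* w * h pixels, border and content       */
    struct window *child;    /* first component ("component window")   */
    struct window *next;     /* next sibling component                 */
};

/* Copy a child's data area into its parent's data area, row by row. */
void compose(struct window *parent, struct window *c)
{
    for (int row = 0; row < c->h; row++)
        memcpy(parent->data + (c->y + row) * parent->w + c->x,
               c->data + row * c->w,
               c->w * sizeof(uint32_t));
}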

I will deal with message passing later, but does this seem wasteful/economical/slow/fast/good/poor?

Critique would be nice :)

Posted: Thu Mar 27, 2008 8:19 am
by zaleschiemilgabriel
Hey'ya! It's your old friend, the troll...
I'd say it's wasteful and slow to use separate buffers for each window. Why not paint everything straight into the back-buffer? Just keep the coordinates and dimensions of each window somewhere and use them for clipping inside your drawing functions. If an app wants its own memory buffer, you should facilitate that too, but since you already have a back-buffer for the whole screen, adding an extra one would only add another memcpy call... The fewer memcpys, the faster your code. The way you describe it, everything would get slower and slower every time you add a new window.
If you design a very good and solid clipping mechanism, you could even eliminate the need for a back-buffer.
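As a sketch of what I mean (names invented), every drawing function would clip against the window's rectangle and write straight into the back-buffer - no per-window buffer at all:

/* Sketch of the clipping approach: a window is just its screen
   rectangle, used to clip every draw into the shared back-buffer. */
#include <stdint.h>

extern uint32_t *backbuffer;
extern int screen_w, screen_h;

struct rect { int x, y, w, h; };   /* a window is just its rectangle */

void fill_rect(const struct rect *win, int x, int y, int w, int h,
               uint32_t color)
{
    /* Clip the request to the window, then the window to the screen. */
    int x0 = x < 0 ? 0 : x, y0 = y < 0 ? 0 : y;
    int x1 = (x + w > win->w) ? win->w : x + w;
    int y1 = (y + h > win->h) ? win->h : y + h;

    for (int j = y0; j < y1; j++)
        for (int i = x0; i < x1; i++) {
            int sx = win->x + i, sy = win->y + j;
            if (sx >= 0 && sx < screen_w && sy >= 0 && sy < screen_h)
                backbuffer[sy * screen_w + sx] = color;
        }
}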

Posted: Thu Mar 27, 2008 8:27 am
by Korona
Giving each component its own buffer is a good idea. The components can draw their content into their private buffers. Later these buffers are copied to the buffer of the parent window, and that buffer is copied to the framebuffer. Just make sure the window buffers are only updated when necessary, e.g. if a character gets written to a textbox, you don't want to update the whole box, only a small part of it. Your solution should be pretty fast, as you will only need a small number of redraws and updates.
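As a sketch (all names invented), each buffer could carry a dirty rectangle so that only the changed region is ever copied upward:

/* Sketch of the "only update what changed" idea: each buffer keeps a
   dirty rectangle, and only that region is copied to the parent. */
#include <stdint.h>
#include <string.h>

struct buf {
    int w, h;
    uint32_t *pixels;
    int dx, dy, dw, dh;      /* dirty rectangle; dw == 0 means clean */
};

/* Writing one character only dirties that character's cell.
   (Simplest form: keep the last rect; a real version would union
   successive dirty rects.) */
void mark_dirty(struct buf *b, int x, int y, int w, int h)
{
    b->dx = x; b->dy = y; b->dw = w; b->dh = h;
}

/* Copy only the dirty part of a child into its parent at (ox, oy). */
void flush_dirty(struct buf *parent, struct buf *c, int ox, int oy)
{
    if (c->dw == 0)
        return;                                   /* nothing changed */
    for (int row = 0; row < c->dh; row++)
        memcpy(parent->pixels + (oy + c->dy + row) * parent->w
                              + ox + c->dx,
               c->pixels + (c->dy + row) * c->w + c->dx,
               c->dw * sizeof(uint32_t));
    c->dw = 0;                                    /* mark clean      */
}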

Posted: Thu Mar 27, 2008 8:28 am
by JamesM
zaleschiemilgabriel wrote:If you design a very good and solid clipping mechanism, you could even eliminate the need for a back-buffer.
Without double buffering, artifacts would appear on the screen.

And, while lukem's method requires more data movement on the whole, one window movement would not require another window (which may now have become unobscured) to be forcibly redrawn - the appropriate area could be bitblitted straight from that window owner's backbuffer.

Both have their merits but I personally think that the number of memcpys may just kill your refresh rate if you continue with the recursive design (windows -> widgets -> subwidgets etc).

Possibly just one data area per window as a compromise?
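To illustrate the blit I mentioned above - repairing a newly exposed area straight from the saved window buffer is just a rectangle copy, something like this sketch (all names invented):

/* Sketch of repairing a newly exposed region from the owner's saved
   window buffer, rather than asking the app to redraw it. */
#include <stdint.h>
#include <string.h>

/* Blit the exposed rectangle (ex, ey, ew, eh), given in window-local
   coordinates, from the window's buffer into the screen back-buffer. */
void repair_exposed(uint32_t *backbuffer, int screen_w,
                    const uint32_t *win_buf, int win_w,
                    int win_x, int win_y,          /* window position */
                    int ex, int ey, int ew, int eh)
{
    for (int row = 0; row < eh; row++)
        memcpy(backbuffer + (win_y + ey + row) * screen_w + win_x + ex,
               win_buf + (ey + row) * win_w + ex,
               ew * sizeof(uint32_t));
}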

Posted: Thu Mar 27, 2008 8:47 am
by lukem95
One data area per window seems reasonable; it's good to see my idea is well received.

Just gotta get to implementing it now (oh joy)

Posted: Thu Mar 27, 2008 8:48 am
by zaleschiemilgabriel
I believe Linux systems use the buffering approach, while Windows uses the clipping approach. I've always thought that the graphics functions in Windows were more flexible and faster than those in Linux (assuming they both use the same degree of hardware acceleration). Also, if you had hardware acceleration, the artifacts problem would be gone.
As for buffering, I've always assumed that moving data from system memory to video memory is slower than system-to-system moves. Given that system memory frequencies are becoming comparable to processor frequencies nowadays, system-to-system moves are almost negligible, but the overhead of adding more than one buffer is arguable. For the moment, processors are still way faster than system memory, so doing a trillion comparisons to decide whether to plot a pixel to the buffer is much faster than copying a thousand (maybe even a hundred) window buffers to the back-buffer.
Say what you want, but the buffer approach doesn't sound good to me.

Posted: Thu Mar 27, 2008 8:51 am
by lukem95
But with the one-buffer-per-window method, I only need to redraw the one window being moved, which means fewer memory transfers to and from the video card's LFB.

Posted: Thu Mar 27, 2008 8:57 am
by JamesM
zaleschiemilgabriel wrote:Also, if you had hardware acceleration, the artifacts problem would be gone.
No, it wouldn't. Artifacts when using single buffering occur because the graphics hardware renders a frame while you are half way through updating it - it causes shearing and other such effects. Double buffering is the de facto way to avoid it, and is also used in 3D applications.
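To make the mechanism concrete, a minimal double-buffered present step looks roughly like this (assuming a 32bpp linear framebuffer; all names invented):

/* Minimal double-buffered present: the whole frame is composed
   off-screen first, then copied to the framebuffer in one go (ideally
   synchronized to the vertical retrace), so the scanout hardware never
   sees a half-drawn frame. */
#include <stdint.h>
#include <string.h>

#define WIDTH  1024
#define HEIGHT 768

static uint32_t backbuffer[WIDTH * HEIGHT];   /* drawn into by the WM   */
extern volatile uint32_t *lfb;                /* the card's framebuffer */

void present(void)
{
    memcpy((void *)lfb, backbuffer, sizeof(backbuffer));
}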
zaleschiemilgabriel wrote:I've always thought that the graphics functions in Windows were more flexible and faster than those in Linux
How exactly do you define flexibility in this context? I'm having difficulty understanding what you mean. It's also very possible that the Linux box you were testing on didn't have full 2D hardware acceleration. Acceleration is an extremely hacky, kludgy and dodgy area in GNU/Linux.

Posted: Thu Mar 27, 2008 9:00 am
by zaleschiemilgabriel
I'm just saying there's a difference between storing 1024x768 pixels in memory and storing just 4 coordinates of a rectangle (at 32 bpp, a full-screen buffer is 1024x768x4 bytes = 3 MB per window). The more windows you have, the more memory you will need. If your memory manager uses disk swapping, eventually part of those buffers will be swapped to disk, and then more speed problems arise...

Posted: Thu Mar 27, 2008 9:09 am
by lukem95
That is a valid point, I didn't fully understand before.

It would mean the WM would be far more complex to implement. I'll consider it when I decide to upgrade; I'm looking for something that works with a degree of stability right now, and as my OS is fairly lacking feature-wise, memory usage isn't really a problem at the moment.

Posted: Thu Mar 27, 2008 9:11 am
by zaleschiemilgabriel
JamesM wrote:No, it wouldn't. Artifacts when using single buffering occur because the graphics hardware renders a frame while you are half way through updating it - it causes shearing and other such effects. Double buffering is the de facto way to avoid it, and is also used in 3D applications.
If all drawing functions (bitblt, line, rectangle, fill etc.) are handled by hardware, you probably won't have to care about the horizontal refresh. That means all your so-called "buffers" would also be stored in video memory before they are used.
JamesM wrote:How exactly do you define flexibility in this context? I'm having difficulty understanding what you mean. It's also very possible that the linux box you were testing on didn't have full 2d hardware acceleration. Acceleration is an extremely hacky, kludgy and dodgy area in GNU/Linux.
Nope, same box for both OS's with the official video drivers from NVIDIA.

My idea of software acceleration is to create something like a PlotPixel function that takes an array of rectangles, does some comparisons to see whether the pixel is clipped in or out (depends on how you see the glass :)), and only plots the pixel if it passes the test. For lots of overlapping windows this is the fastest method I can imagine.
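As a sketch (names invented), that PlotPixel would look something like this:

/* The pixel is only written if it falls inside one of the caller's
   clip rectangles; otherwise it is silently discarded. */
#include <stdint.h>

typedef struct { int x0, y0, x1, y1; } Rect;   /* inclusive bounds */

extern uint32_t *backbuffer;
extern int screen_width;

void PlotPixel(int x, int y, uint32_t color,
               const Rect *clip, int nclip)
{
    for (int i = 0; i < nclip; i++) {
        if (x >= clip[i].x0 && x <= clip[i].x1 &&
            y >= clip[i].y0 && y <= clip[i].y1) {
            backbuffer[y * screen_width + x] = color;
            return;   /* inside the visible region: plot once, stop */
        }
    }
    /* outside every rectangle: clipped out, nothing drawn */
}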

Posted: Thu Mar 27, 2008 9:38 am
by JamesM
zaleschiemilgabriel wrote:That means all your so-called "buffers" would also be stored in video memory before they are used.
Why are they "so-called"? And why the need for the quotation marks?

I didn't say *where* the backbuffer would be kept - probably in video memory. The point is that without a backbuffer you *do* get artifacts. Disable double buffering on the next 3D game you run (you can do this manually if you have an nVidia card, not sure about ATI, by going to the adapter properties in Windows) and see what you get.

Re: Window Manager specifics

Posted: Thu Mar 27, 2008 9:57 am
by Brendan
Hi,
lukem95 wrote:Critique would be nice :)
My opinion on video driver interfaces (and GUI interfaces) hasn't changed - the client (e.g. the application and/or GUI) should send a script that describes how the video data should be constructed by the server (e.g. the video driver). That way hardware acceleration isn't impossible, you're transferring relatively small scripts rather than relatively large bitmaps, and you only need to optimize one set of drawing functions (in the video driver) rather than many sets of drawing functions (in each application).

Basically, the client (where an application is the GUI's client, and a GUI is the video driver's client) shouldn't have access to any kind of video buffer at all. ;)
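As a rough illustration only (the command set and structures here are invented), such a "script" could be as simple as a list of drawing commands:

/* The client builds a small list of commands and sends it to the video
   driver, which is the only code that ever touches pixels. */
#include <stdint.h>

enum cmd_op { CMD_FILL_RECT, CMD_DRAW_TEXT, CMD_END };

struct cmd {
    enum cmd_op op;
    int x, y, w, h;
    uint32_t color;
    const char *text;     /* only used by CMD_DRAW_TEXT */
};

/* Client side: describe the window instead of rendering it. */
static const struct cmd window_script[] = {
    { CMD_FILL_RECT, 0, 0, 300, 200, 0xCCCCCC, 0 },        /* body  */
    { CMD_FILL_RECT, 0, 0, 300, 20,  0x000080, 0 },        /* title */
    { CMD_DRAW_TEXT, 4, 4, 0,   0,   0xFFFFFF, "My App" },
    { CMD_END,       0, 0, 0,   0,   0,        0 },
};

/* Server side (video driver): walks the script with its own, possibly
   hardware-accelerated, drawing functions. */
void run_script(const struct cmd *s)
{
    for (; s->op != CMD_END; s++) {
        switch (s->op) {
        case CMD_FILL_RECT: /* driver's own (maybe accelerated) fill */ break;
        case CMD_DRAW_TEXT: /* driver's own text renderer            */ break;
        default: break;
        }
    }
}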


Cheers,

Brendan

Posted: Thu Mar 27, 2008 10:35 am
by zaleschiemilgabriel
I totally agree with Brendan! I just had a brainstorm about using the Mozilla XML User Interface Language as a GUI interface for an OS. :P

JamesM, even with double buffering enabled in 3D games, you could still get artifacts (although barely noticeable) if Vertical Sync isn't enabled.
But with VBE framebuffers I don't think you have much control over that option. The one thing that you can and should optimize is system memory access.
The reason they use buffers in 3D games is that they are a convenient means of representing textures. Also, triple-buffering in games is usually slower than double-buffering.
I agree that double buffering is a good technique to be used in video games, but maybe not so much in an OS interface.
P.S.: Sorry about the quotes and everything else. My native language is highly metaphorical. Just ignore the quotes. :D I mostly use them when I mean to say something like "fill in the dots". So by "buffers" I meant "video/virtual/memory/texture/whatever buffers" - basically "virtual buffers". Confused? :D

Posted: Thu Mar 27, 2008 10:45 am
by inx
Brendan wrote:the client (e.g. the application and/or GUI) should send a script that describes how the video data should be constructed by the server (e.g. the video driver)
IIRC, this is what NeXTSTEP did with Display PostScript, and that turned out quite well. 1120×832 was the standard resolution, and it was quite responsive on a 25 MHz 68030.