Window Manager specifics
I'm starting to write my window manager, and I can draw windows etc. with perfect ease, but when it comes to the data inside the windows, it's a different story.
My idea is:
Each window has a data area allocated to it that is the dimensions of the window in size. This data area holds everything the window will print to screen (including the border etc.). I memcpy this data area into my double buffer, which then refreshes the relevant parts (those that need updating) at each screen refresh.
The components (textboxes, scrollbars etc.) are each defined as a separate "component window" and also have a data area defined by their dimensions. This is memcpy'd into the parent window's data area at every refresh.
I will deal with message passing later, but does this seem wasteful/economical, slow/fast, good/poor?
Critique would be nice
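To make the design concrete, here is a minimal C sketch of the per-window buffer scheme described above; the struct layout and function names are my own invention for illustration, not lukem95's actual code:

```c
/* Sketch of the per-window buffer design: each window owns a pixel
 * buffer of its own dimensions (border included), and composition
 * memcpy's it row by row into the back buffer at the window's
 * position. All names here are illustrative. */
#include <stdint.h>
#include <string.h>

typedef struct {
    int x, y, w, h;      /* position and size on screen        */
    uint32_t *pixels;    /* w*h pixels, includes the border etc. */
} window_t;

/* Copy a window's data area into the back buffer, one row at a time
 * (a single memcpy won't do, because the two strides differ). */
static void blit_window(uint32_t *backbuf, int screen_w,
                        const window_t *win)
{
    for (int row = 0; row < win->h; row++) {
        memcpy(backbuf + (win->y + row) * screen_w + win->x,
               win->pixels + row * win->w,
               win->w * sizeof(uint32_t));
    }
}
```

Note that the copy is per-row because the window buffer and the back buffer have different widths; this is the cost the replies below are arguing about.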
- zaleschiemilgabriel
- Member
- Posts: 232
- Joined: Mon Feb 04, 2008 3:58 am
Hey'ya! It's your old friend, the troll...
I'd say it's wasteful and slow to use separate buffers for each window. Why not paint everything straight into the back-buffer? Just keep the coordinates and dimensions of each window somewhere and use them for clipping inside your drawing functions. If an app wants to have its own memory buffer, you should facilitate that too, but since you already have a back-buffer for the whole screen, adding an extra one would only add another memcpy call... The fewer memcpys, the faster your code. The way you describe it, everything would get slower and slower every time you add a new window.
If you design a very good and solid clipping mechanism, you could even eliminate the need for a back-buffer.
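The clipping approach suggested above could be sketched like this, assuming a simple rectangle-intersection test (all names are invented for illustration):

```c
/* Hypothetical sketch of the clipping approach: draw straight into
 * the back buffer, clipping each primitive against the window's
 * rectangle instead of keeping a buffer per window. */
#include <stdint.h>

typedef struct { int x, y, w, h; } rect_t;

/* Intersect two rectangles; returns 0 if they don't overlap. */
static int rect_intersect(rect_t a, rect_t b, rect_t *out)
{
    int x0 = a.x > b.x ? a.x : b.x;
    int y0 = a.y > b.y ? a.y : b.y;
    int x1 = (a.x + a.w < b.x + b.w) ? a.x + a.w : b.x + b.w;
    int y1 = (a.y + a.h < b.y + b.h) ? a.y + a.h : b.y + b.h;
    if (x1 <= x0 || y1 <= y0)
        return 0;
    out->x = x0; out->y = y0; out->w = x1 - x0; out->h = y1 - y0;
    return 1;
}

/* Fill a rectangle directly in the back buffer, clipped to `clip`
 * (typically the window's on-screen rectangle). */
static void fill_rect_clipped(uint32_t *backbuf, int screen_w,
                              rect_t r, rect_t clip, uint32_t color)
{
    rect_t c;
    if (!rect_intersect(r, clip, &c))
        return;
    for (int y = c.y; y < c.y + c.h; y++)
        for (int x = c.x; x < c.x + c.w; x++)
            backbuf[y * screen_w + x] = color;
}
```

Only the window's rectangle needs storing per window, which is the memory argument made later in the thread.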
Giving each component its own buffer is a good idea. The components can draw their content into their private buffers. Later these buffers are copied to the buffer of the parent window, and that buffer is copied to the framebuffer. Just make sure the window buffers are only updated when necessary: e.g. if a character gets written to a textbox, you don't want to update the whole box, but only a small part of it. Your solution should be pretty fast, as you will only need a small number of redraws and updates.
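One way to realise "only update the part that changed" is a dirty rectangle per component: the parent copies just that region out of the component's private buffer. The struct and field names below are assumptions for illustration:

```c
/* Each component tracks a dirty rectangle in its private buffer;
 * flushing copies only that region into the parent window's buffer.
 * All names are illustrative. */
#include <stdint.h>
#include <string.h>

typedef struct {
    int x, y, w, h;       /* position/size inside the parent       */
    uint32_t *pixels;     /* private w*h buffer                    */
    int dirty;            /* is an update pending?                 */
    int dx, dy, dw, dh;   /* dirty rectangle, component-local      */
} component_t;

/* Copy only the component's dirty rectangle into the parent buffer. */
static void flush_component(uint32_t *parent, int parent_w,
                            component_t *c)
{
    if (!c->dirty)
        return;
    for (int row = 0; row < c->dh; row++) {
        memcpy(parent + (c->y + c->dy + row) * parent_w + c->x + c->dx,
               c->pixels + (c->dy + row) * c->w + c->dx,
               c->dw * sizeof(uint32_t));
    }
    c->dirty = 0;
}
```

With this, typing a character in a textbox only copies a few pixels' worth of rows, not the whole component.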
zaleschiemilgabriel wrote: If you design a very good and solid clipping mechanism, you could even eliminate the need for a back-buffer.
Without double buffering, artifacts would appear on the screen.
And, while lukem95's method requires more data movement on the whole, moving one window would not require another window (which may now have become unobscured) to be forcibly redrawn - the appropriate area could be bitblitted straight from that window owner's back-buffer.
Both have their merits but I personally think that the number of memcpys may just kill your refresh rate if you continue with the recursive design (windows -> widgets -> subwidgets etc).
Possibly just one data area per window as a compromise?
- zaleschiemilgabriel
I believe Linux systems use the buffering approach, while Windows uses the clipping approach. I've always thought that the graphics functions in Windows were more flexible and faster than those in Linux (assuming they both use the same degree of hardware acceleration). Also, if you had hardware acceleration, the artifacts problem would be gone.
With buffering, I've always assumed that moving data from system memory to video memory is slower than system-to-system moves. Given that nowadays system memory frequencies are becoming comparable to processor frequencies, system-to-system moves are almost negligible, but the overhead of adding more than one buffer is arguable. For the moment, processors are still way faster than system memory, so doing a trillion comparisons to decide whether to plot a pixel to the buffer is much faster than copying a thousand (maybe even a hundred) window buffers to the back-buffer.
Say what you want, but the buffer approach doesn't sound good to me.
zaleschiemilgabriel wrote: Also, if you had hardware acceleration, the artifacts problem would be gone.
No, it wouldn't. Artifacts when using single buffering occur because the graphics hardware renders a frame while you are halfway through updating it - it causes shearing and other such effects. Double buffering is the de facto way to avoid it, and is also used in 3D applications.
zaleschiemilgabriel wrote: I've always thought that the graphics functions in Windows were more flexible and faster than those in Linux
How exactly do you define flexibility in this context? I'm having difficulty understanding what you mean. It's also very possible that the Linux box you were testing on didn't have full 2D hardware acceleration. Acceleration is an extremely hacky, kludgy and dodgy area in GNU/Linux.
- zaleschiemilgabriel
I'm just saying there's a difference between storing 1024x768 pixels in memory and storing just the 4 coordinates of a rectangle. The more windows you have, the more memory you will need. If your memory manager uses disk swapping, eventually part of those buffers will be swapped to disk, and then more speed problems arise...
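The arithmetic behind that point is worth spelling out; the helper below uses illustrative numbers (1024x768 at 32 bpp) and invented names:

```c
/* Bytes needed to store a window's pixels at a given bits-per-pixel,
 * versus the handful of bytes needed to store just its rectangle. */
typedef struct { int x, y, w, h; } winrect_t;

static unsigned long window_buffer_bytes(int w, int h, int bpp)
{
    return (unsigned long)w * h * (bpp / 8);
}
/* window_buffer_bytes(1024, 768, 32) is 3,145,728 bytes (~3 MiB) per
 * full-screen window, while sizeof(winrect_t) is only 16 bytes or so. */
```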
That is a valid point; I didn't fully understand before.
It would mean that the WM would be far more complex to implement. I'll consider it when I decide to upgrade; I'm looking for something that works with a degree of stability right now, and as my OS is fairly lacking feature-wise, memory usage isn't really a problem at the moment.
- zaleschiemilgabriel
JamesM wrote: No, it wouldn't. Artifacts when using single buffering occur because the graphics hardware renders a frame while you are halfway through updating it - it causes shearing and other such effects. Double buffering is the de facto way to avoid it, and is also used in 3D applications.
If all drawing functions (bitblt, line, rectangle, fill etc.) are handled by hardware, you probably won't have to care about the horizontal refresh. That means all your so-called "buffers" would also be stored in video memory before they are used.
JamesM wrote: How exactly do you define flexibility in this context? I'm having difficulty understanding what you mean. It's also very possible that the Linux box you were testing on didn't have full 2D hardware acceleration. Acceleration is an extremely hacky, kludgy and dodgy area in GNU/Linux.
Nope, same box for both OSes, with the official video drivers from NVIDIA.
My idea of software acceleration is to create something like a PlotPixel function that takes in an array of rectangles and does some comparisons to see whether the pixel is clipped in/out, and only plots the pixel if it passes the test (depends on how you see the glass). For lots of overlapping windows this is the fastest method I can imagine.
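A minimal version of that PlotPixel idea might look like this; the names and the flat list of visible rectangles are assumptions made for the sketch:

```c
/* Plot a pixel only if it falls inside one of the caller-supplied
 * visible rectangles (e.g. the unobscured parts of a window).
 * All names here are illustrative. */
#include <stdint.h>

typedef struct { int x, y, w, h; } rect_t;

static int point_in_rect(int x, int y, rect_t r)
{
    return x >= r.x && x < r.x + r.w &&
           y >= r.y && y < r.y + r.h;
}

/* Plot (x, y) only if it lies inside one of the `n` visible rects. */
static void plot_pixel(uint32_t *fb, int pitch, int x, int y,
                       uint32_t color, const rect_t *visible, int n)
{
    for (int i = 0; i < n; i++) {
        if (point_in_rect(x, y, visible[i])) {
            fb[y * pitch + x] = color;
            return;
        }
    }
}
```

In practice a real implementation would clip whole spans or rectangles rather than testing every pixel, but the per-pixel form matches the description above.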
zaleschiemilgabriel wrote: That means all your so-called "buffers" would also be stored in video memory before they are used.
Why are they "so-called"? And why the need for the quotation marks?
I didn't say *where* the back-buffer would be kept - probably in video memory. The point is that without a back-buffer you *do* get artifacts. Disable double buffering in the next 3D game you run (you can do this manually if you have an nVidia card, not sure about ATI, by going to the adapter properties in Windows) and see what you get.
Re: Window Manager specifics
Hi,
lukem95 wrote: Critique would be nice
My opinion on video driver interfaces (and GUI interfaces) hasn't changed - the client (e.g. the application and/or GUI) should send a script that describes how the video data should be constructed by the server (e.g. the video driver), so that hardware acceleration isn't impossible, so that you're transferring relatively small scripts rather than relatively large bitmaps, and so that you only need to optimize one set of drawing functions in the video driver rather than many sets of drawing functions (one in each application).
Basically, the client (where an application is the GUI's client, and a GUI is the video driver's client) shouldn't have access to any kind of video buffer at all.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
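A sketch of the "send a script" idea: instead of pixels, the client hands the video driver a small list of drawing commands, and the driver executes them (possibly hardware-accelerated). The command set and struct layout here are invented for illustration, not Brendan's actual design:

```c
/* A tiny display-list protocol: clients build arrays of commands,
 * the driver interprets them. All names are illustrative. */
#include <stdint.h>

typedef enum { CMD_FILL_RECT, CMD_DRAW_LINE, CMD_BLIT_GLYPH } cmd_op_t;

typedef struct {
    cmd_op_t op;
    int x, y, w, h;
    uint32_t color;
} draw_cmd_t;

/* Driver-side interpreter: walk the script and render each command.
 * A few dozen bytes of commands can replace megabytes of pixel data. */
static void run_script(uint32_t *fb, int pitch,
                       const draw_cmd_t *script, int count)
{
    for (int i = 0; i < count; i++) {
        const draw_cmd_t *c = &script[i];
        switch (c->op) {
        case CMD_FILL_RECT:
            for (int y = c->y; y < c->y + c->h; y++)
                for (int x = c->x; x < c->x + c->w; x++)
                    fb[y * pitch + x] = c->color;
            break;
        default:
            /* CMD_DRAW_LINE, CMD_BLIT_GLYPH etc. omitted in this sketch */
            break;
        }
    }
}
```

Because only the driver touches pixels, it is free to replace the interpreter's loops with accelerated hardware operations.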
- zaleschiemilgabriel
I totally agree with Brendan! I just had a brainstorm about using the Mozilla XML User Interface Language as a GUI interface for an OS.
JamesM, even with double buffering enabled in 3D games, you could still get artifacts (although barely noticeable) if Vertical Sync isn't enabled.
But with VBE framebuffers I don't think you have much control over that option. The one thing that you can and should optimize is system memory access.
The reason they use buffers in 3D games is because they are convenient means of representing textures. Also, triple-buffering in games is usually slower than double-buffering.
I agree that double buffering is a good technique to be used in video games, but maybe not so much in an OS interface.
P.S.: Sorry about the quotes and everything else. My native language is highly metaphorical. Just ignore the quotes. I mostly use them when I mean to say something like "fill in the dots". So by "buffers" I meant "video/virtual/memory/texture/whatever buffers" - basically "virtual buffers". Confused?
Brendan wrote: the client (e.g. the application and/or GUI) should send a script that describes how the video data should be constructed by the server (e.g. the video driver)
IIRC, this is what NeXTSTEP did with Display PostScript, and that turned out quite well. 1120×832 was the standard resolution, and it was quite responsive on a 25MHz 68030.