zaleschiemilgabriel wrote:
JamesM, even with double buffering enabled in 3D games, you could still get artifacts (although barely noticeable) if Vertical Sync isn't enabled. But with VBE framebuffers I don't think you have much control over that option. The one thing that you can and should optimize is system memory access. The reason they use buffers in 3D games is because they are convenient means of representing textures. Also, triple-buffering in games is usually slower than double-buffering. I agree that double buffering is a good technique to be used in video games, but maybe not so much in an OS interface.

Well, coming from a graphics/game programming background (my first 3D app was on a 386 25 MHz box), and having written a GUI before, I would like to comment on the back-buffer comments.

Firstly, you can easily wait for vsync in any mode, provided it's a VGA-compatible card. The biggest thing with a back-buffer is that you can be rendering to it while the screen is still drawing from the original buffer, so it reduces tearing, especially when you wait for vsync.

The idea of triple buffering (and why it works *in theory*) is this: the screen is waiting on a vsync before it will copy the back-buffer, and in the meantime neither the CPU nor the GPU is doing anything. To alleviate this, you create a third buffer, which lets the application continue rendering the next frame before the current frame has even been copied over. So, say the first frame comes in at about 63 Hz while the next comes in at 59 Hz. If your vsync is locked at 60 Hz and you are waiting for it after each frame, you hit your 60 Hz on the first frame and drop to 30 Hz on the next. With triple buffering, the extra time not used by the first frame is spent on the second, and it doesn't miss the next refresh. As you can see, in theory this works: the GPU is never waiting and is always running. Another reason is that copying the back-buffer to the front typically requires a lock of sorts on both buffers, and while both buffers are locked nothing can happen to either one. Triple buffering to the rescue again: work continues while the buffers are moved/copied (this is true even when you aren't waiting on vsync).

Now, let's get out of theory a bit and jump back to reality, and why it isn't always as good as it first seems. Limited graphics memory and high resolutions mean you are sacrificing video RAM for buffer space, so a large 3D game that runs out of graphics memory starts using system RAM, and we all know that system -> PCI/AGP/PCI-X transfers are much slower than video -> video. So in order for triple buffering to be faster, the speed gained by keeping the GPU busy during the locks (or the waits for vsync) must outweigh the extra time required to transfer data over the bus. The higher the resolution (and texture resolutions), the less chance of that happening, and triple buffering ends up slowing your game down.
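For reference, here is a minimal sketch of the kind of polled vsync wait described above, together with a simple double-buffered present for VGA mode 13h. It assumes a freestanding x86 environment with GCC-style inline assembly, an identity-mapped framebuffer at 0xA0000, and the mode 13h resolution; the back_buffer and present() names are just illustrative, not taken from any particular codebase.

[code]
#include <stdint.h>
#include <string.h>

/* Read a byte from an x86 I/O port (GCC inline assembly). */
static inline uint8_t inb(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* VGA Input Status Register #1: bit 3 is set while the vertical retrace
   is in progress. This is the register you poll on any VGA-compatible card. */
#define VGA_INPUT_STATUS_1 0x3DA
#define VGA_VRETRACE       0x08

void wait_for_vsync(void)
{
    /* If a retrace is already under way, let it finish first, so we always
       catch the very start of the next retrace period. */
    while (inb(VGA_INPUT_STATUS_1) & VGA_VRETRACE)
        ;
    while (!(inb(VGA_INPUT_STATUS_1) & VGA_VRETRACE))
        ;
}

/* Double-buffered present for mode 13h (320x200, 8 bits per pixel):
   everything is drawn into back_buffer in system RAM, then copied to
   video memory right at the start of the retrace to avoid tearing. */
static uint8_t back_buffer[320 * 200];

void present(void)
{
    wait_for_vsync();
    memcpy((uint8_t *)0xA0000, back_buffer, sizeof back_buffer);
}
[/code]

Triple buffering would add a second off-screen buffer so that rendering of the next frame can carry on while a finished frame sits waiting for the retrace/copy, which is exactly the trade-off weighed above.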
Anyways, after that long explanation that nobody read: I believe there should be an option to grab the window/frame's actual memory for those who want to do things fast - their own rendering software, a special effect, their own control types, etc. There are ups and downs to each method, but a direct video buffer for a window is required in some cases; for example, what if I wanted to port Doom over to your OS and run it in a window?

How you go about that is up to you. Typically I would let the graphics driver handle bit-blits and give it scripts, but also have a way to dynamically create a buffer/sprite and update the window's/widget's image with it: Doom would write to this buffer, then the program would call an update function to copy it into the video memory's buffer for the window, and the graphics driver would apply its special effects and render the window. Scripts tend to be smaller, but I still think you need a method of attaching a 'buffer' to a window/widget. The cool thing about this is that all the special effects still take place, you can draw widgets over the game/video stream, and if you're really bored, you can set it up as a shared buffer so you can display parts of the video/game in multiple windows (I've seen this under Linux, which is why I bring it up - pretty useless, but neat nonetheless).
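To make that concrete, here is a very rough sketch of what such a window-buffer interface could look like from the application's side. Every name in it (window_buffer_t, gui_create_window_buffer, gui_update_window, and the Doom-style loop) is hypothetical and made up for illustration; the point is only the flow described above: the program draws straight into a buffer it owns, then asks the GUI/driver to blit and composite it.

[code]
#include <stdint.h>

/* Hypothetical pixel buffer handed out by the GUI for one window/widget. */
typedef struct {
    uint32_t *pixels;   /* 32-bit ARGB pixels the application writes into */
    int       width;
    int       height;
    int       pitch;    /* pixels per row; may be larger than width       */
} window_buffer_t;

/* Hypothetical GUI calls (declarations only; in this sketch they would be
   provided by the window server / graphics driver). */
window_buffer_t *gui_create_window_buffer(int window_id, int w, int h);
void             gui_update_window(int window_id, window_buffer_t *buf);

/* A ported game (Doom, say) rendering one frame into its buffer. */
static void game_draw_frame(window_buffer_t *buf)
{
    for (int y = 0; y < buf->height; y++)
        for (int x = 0; x < buf->width; x++)
            buf->pixels[y * buf->pitch + x] = 0xFF000000u | (uint32_t)(x ^ y); /* placeholder pattern */
}

void game_main_loop(int window_id)
{
    /* The game asks for a direct buffer the size of its screen... */
    window_buffer_t *buf = gui_create_window_buffer(window_id, 320, 200);
    if (!buf)
        return;

    for (;;) {
        /* ...writes a frame straight into it... */
        game_draw_frame(buf);
        /* ...and tells the GUI the contents changed. The driver then blits
           it into the window, applies any special effects, and can still
           draw widgets on top or even share the buffer between windows. */
        gui_update_window(window_id, buf);
    }
}
[/code]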