
Posted: Thu Mar 27, 2008 11:19 am
by Ready4Dis
zaleschiemilgabriel wrote:JamesM, even with double buffering enabled in 3D games, you could still get artifacts (although barely noticeable) if Vertical Sync isn't enabled.
But with VBE framebuffers I don't think you have much control over that option. The one thing that you can and should optimize is system memory access.
The reason they use buffers in 3D games is because they are convenient means of representing textures. Also, triple-buffering in games is usually slower than double-buffering.
I agree that double buffering is a good technique to be used in video games, but maybe not so much in an OS interface.
Well, coming from a graphics/game programming background (my first 3D app was on a 386 25MHz box), and having written a GUI before, I would like to comment on the back-buffer comments. Firstly, you can easily wait for vsync in any mode, provided it's a VGA-compatible card. The biggest thing with a back-buffer is that you can be rendering to it while the screen is still displaying the original buffer, which reduces tearing, especially when you wait for vsync.

The idea of triple buffering (and why it works *in theory*) is this: with double buffering, the application waits on a vsync before the back-buffer can be copied, and during that wait neither the CPU nor the GPU is doing anything. To alleviate this, a third buffer is created, which lets the application start rendering the next frame before the current one has even been copied over. Say the first frame renders at about 63Hz and the next at 59Hz: if your vsync is locked at 60Hz and you wait for it after each frame, you hit 60Hz on the first frame and drop to 30Hz on the next. With triple buffering, the time left over from the first frame is spent on the second, so you don't miss the next refresh. As you can see, in theory the GPU is never waiting and always running. Another reason is that copying the back-buffer to the front typically requires a lock of sorts on both buffers, and while both are locked nothing can happen to either one; triple buffering to the rescue again, as work continues while the buffers are moved/copied (this is true even when you aren't waiting on vsync).

Now, let's get out of theory a bit and jump back to reality, and why it isn't always as good as it first seems. Limited graphics memory and high resolutions mean you are sacrificing video RAM for buffer space, so a large 3D game that runs out of graphics memory starts using system RAM, and we all know that system -> PCI/AGP/PCI-X transfers are much slower than video -> video. So for triple buffering to be faster, the time gained by keeping the GPU busy during the locks (or waits for vsync) must outweigh the extra time required to transfer data over the bus. The higher the resolution (and texture resolutions), the smaller the chance of that, and the more it slows your game down.
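
For reference, a minimal sketch of the "wait for vsync" part, assuming a VGA-compatible card and a hypothetical inb() port-read helper running in ring 0:

Code:
#include <stdint.h>

#define VGA_INPUT_STATUS_1 0x3DA   /* input status register #1 */
#define VRETRACE_BIT       0x08    /* bit 3: vertical retrace in progress */

/* Hypothetical port-read helper; most hobby kernels have something similar. */
static inline uint8_t inb(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

static void wait_for_vsync(void)
{
    /* If a retrace is already in progress, wait for it to finish... */
    while (inb(VGA_INPUT_STATUS_1) & VRETRACE_BIT)
        ;
    /* ...then wait for the start of the next retrace; copy/flip the
       back-buffer as soon as this returns. */
    while (!(inb(VGA_INPUT_STATUS_1) & VRETRACE_BIT))
        ;
}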

Anyways, after that long explanation that nobody read, I believe there should be an option to grab the window/frame's actual memory for those who want to be able to do things fast: your own rendering software, a special effect, your own control types, etc. There are ups and downs to each method, but a direct video buffer for a window is required in some cases; for example, what if I wanted to port Doom over to your OS and run it in a window? How you go about that is up to you. Typically I would let the graphics driver handle bit-blits and give it scripts, but have a way to dynamically create a buffer/sprite and update the window/widget's image with it: Doom would write to this buffer, the program would call an update function to copy it to the window's buffer in video memory, and the graphics driver would then apply its special effects and render the window. Scripts tend to be smaller, but I still think you need a method for a 'buffer' for the window/widget. The cool thing about this is that all special effects still take place, you can draw widgets over the game/video stream, and if you're really bored, you can set it up as a shared buffer so you can display parts of the video/game in multiple windows (I've seen this under Linux, which is why I bring it up; pretty useless, but neat nonetheless).
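
To make that concrete, here is a rough sketch of what such a per-window buffer interface could look like; every name in it (gui_create_buffer, gui_update_widget, and so on) is hypothetical, not an existing API:

Code:
#include <stdint.h>

typedef struct {
    uint32_t *pixels;   /* client-writable backing store, 32bpp */
    int       width;
    int       height;
} gui_buffer_t;

/* Ask the GUI/driver for a buffer the application may draw into directly. */
gui_buffer_t *gui_create_buffer(int width, int height);

/* Tell the GUI the buffer changed; it blits the dirty rectangle into the
   window's area of video memory and re-applies any special effects. */
void gui_update_widget(int widget_handle, gui_buffer_t *buf,
                       int x, int y, int w, int h);

/* A ported game like Doom would then present a frame roughly like this,
   converting its 8bpp frame through a palette: */
void present_frame(int widget_handle, gui_buffer_t *buf,
                   const uint8_t *frame, const uint32_t *palette)
{
    for (int i = 0; i < buf->width * buf->height; i++)
        buf->pixels[i] = palette[frame[i]];
    gui_update_widget(widget_handle, buf, 0, 0, buf->width, buf->height);
}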

Posted: Tue Apr 01, 2008 6:21 pm
by iammisc
I'm going for the XWindows approach, which is that when a window needs to be drawn (and only when it needs to be drawn), the server sends a paint message to the client, which paints its window by drawing a bitmap or using drawing primitives.

Posted: Wed Apr 02, 2008 12:42 pm
by lukem95
yeah but that way you have to repaint the whole window

Posted: Wed Apr 02, 2008 1:45 pm
by t0xic
lukem, just followed the link in your sig and saw what progress you've made! Congratz on getting a GUI running. I'm still stuck in text/mode 13h :cry:

Posted: Wed Apr 02, 2008 4:20 pm
by iammisc
@lukem95: no you don't have to repaint the whole window. The X11 server can tell the client to only draw a certain region.
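
For reference, this is roughly how an X11 client does it with Xlib: each Expose event carries the damaged rectangle, so only that region gets redrawn (illustrative only, error handling omitted):

Code:
#include <X11/Xlib.h>

void event_loop(Display *dpy, Window win, GC gc)
{
    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == Expose) {
            XExposeEvent *e = &ev.xexpose;
            /* Repaint only the damaged rectangle, not the whole window. */
            XFillRectangle(dpy, win, gc, e->x, e->y, e->width, e->height);
        }
    }
}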

Posted: Thu Apr 03, 2008 2:02 pm
by lukem95
Oh right, I suppose that makes sense. I'm looking into bitmaps for my server too, as it may increase speed.

And @ t0xic... thank you for the comment, it's still very much a work in progress, but I'm slowly getting there.

Re: Window Manager specifics

Posted: Thu Apr 03, 2008 10:51 pm
by Crazed123
Brendan wrote:Hi,
lukem95 wrote:Critique would be nice :)
My opinion on video driver interfaces (and GUI interfaces) hasn't changed - the client (e.g. the application and/or GUI) should send a script that describes how the video data should be constructed by the server (e.g. the video driver), so that hardware acceleration isn't impossible, so that you're transferring relatively small scripts rather than relatively large bitmaps, and so you only need to optimize one set of drawing functions in the video driver rather than many sets of drawing functions (in each application).

Basically, the client (where an application is the GUI's client, and a GUI is the video driver's client) shouldn't have access to any kind of video buffer at all. ;)


Cheers,

Brendan
QFT. At this point in graphics technology we shouldn't be sending bitmaps at all to the graphics driver unless absolutely necessary (like textures and icons), because at least right now graphics computation is extremely cheap compared to relatively expensive memory space.
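
A minimal sketch of what such a script might look like on the wire; the command set and struct layout here are made up for illustration:

Code:
#include <stdint.h>

enum draw_op {
    OP_FILL_RECT,
    OP_DRAW_LINE,
    OP_DRAW_TEXT,
    OP_BLIT_TEXTURE   /* bitmaps only where unavoidable, e.g. icons */
};

struct draw_cmd {
    uint8_t  op;
    int16_t  x0, y0, x1, y1;
    uint32_t color;
    uint32_t resource;   /* texture/font/string handle owned by the driver */
};

/* "Clear the window and draw a labelled button" is a few dozen bytes,
   rather than a full-window bitmap: */
static const struct draw_cmd window_script[] = {
    { OP_FILL_RECT,  0,  0, 640, 480, 0x00C0C0C0, 0 },
    { OP_FILL_RECT, 10, 10, 110,  40, 0x00FFFFFF, 0 },
    { OP_DRAW_LINE, 10, 40, 110,  40, 0x00000000, 0 },
    { OP_DRAW_TEXT, 20, 18,   0,   0, 0x00000000, 42 /* string handle */ }
};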

Over the summer I plan to experiment in either language creation (to create that safer systems-programming language I dreamt of) or I'll write a direct renderer for this, preferably using OpenGL 2.0 as the substrate API.

Now finally, we can note that through the use of rotation and translation (possibly scaling, too) we can easily make the "recursive windows draw each other recursively" approach work extremely well in such a system. Building complex transformations is just a matter of composing functions. Each sub-window (including sub-windows of the root window) has a transformation to apply to it. To compose the final path/picture:

(compose (trans0 win0) (trans1 win1) (trans2 win2) ...)

With those numbers Z-ordered so that higher-numbered windows are closer to the camera.
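
In C terms, that composition is just multiplying 2D affine transforms down the window tree before drawing back-to-front; a sketch (names illustrative):

Code:
typedef struct {
    float m[6];   /* 2x3 affine: x' = m0*x + m1*y + m2, y' = m3*x + m4*y + m5 */
} xform_t;

typedef struct window {
    xform_t        local;      /* transform relative to the parent */
    struct window *children;   /* kept in Z-order, back to front */
    struct window *next;
} window_t;

static xform_t xform_compose(const xform_t *p, const xform_t *c)
{
    xform_t r;
    r.m[0] = p->m[0]*c->m[0] + p->m[1]*c->m[3];
    r.m[1] = p->m[0]*c->m[1] + p->m[1]*c->m[4];
    r.m[2] = p->m[0]*c->m[2] + p->m[1]*c->m[5] + p->m[2];
    r.m[3] = p->m[3]*c->m[0] + p->m[4]*c->m[3];
    r.m[4] = p->m[3]*c->m[1] + p->m[4]*c->m[4];
    r.m[5] = p->m[3]*c->m[2] + p->m[4]*c->m[5] + p->m[5];
    return r;
}

/* Drawn back to front, so later (higher-numbered) windows end up on top. */
void draw_window(const window_t *w, xform_t parent)
{
    xform_t final = xform_compose(&parent, &w->local);
    /* ...emit w's drawing script with 'final' applied... */
    for (const window_t *c = w->children; c != NULL; c = c->next)
        draw_window(c, final);
}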

Posted: Fri Apr 04, 2008 7:14 am
by lukem95
Please bear in mind that this WM is going to be improved on.

And also, surely in order to use GPUs to their full extent, we must implement drivers for the specific chipset? Something that people very rarely have in hobby OSes.

Right now my design is so:

The WM is a driver loaded by the kernel, which gets passed information about the video card (LFB, modes, etc.) via a structure (which is filled in by my bootstrap).

Applications send a "redraw" message to the WM when something is changed on the window. This is implemented in the API, and as such is built into every program.

The WM is a high-priority task, and is woken when a redraw event arrives from the active task (non-active tasks' messages are handled when those tasks are woken); it then redraws only the changed area and goes back to sleep.

The active window is drawn directly into the LFB. This is so it can be moved easily: the window's data is memcpy'd into the LFB, and the area it previously covered is restored by memcpy'ing that region from the double buffer. The active window is the only window allowed to move position, as I am not allowing non-focused windows to be resized or moved in my API (unless someone has a reason why this would be needed).

When a window is selected, the mouse (or keyboard) driver sends a message to the WM telling it which window now has focus, and this window becomes the "active window".
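
A minimal sketch of the "move the active window" step described above, assuming a 32bpp LFB, a same-sized double buffer holding the background, and a pitch given in pixels (all names illustrative):

Code:
#include <stdint.h>
#include <string.h>

void move_active_window(uint32_t *lfb, const uint32_t *backbuf,
                        const uint32_t *win_pixels, int pitch,
                        int old_x, int old_y, int new_x, int new_y,
                        int win_w, int win_h)
{
    /* Restore the area the window used to cover from the double buffer... */
    for (int row = 0; row < win_h; row++)
        memcpy(&lfb[(old_y + row) * pitch + old_x],
               &backbuf[(old_y + row) * pitch + old_x],
               win_w * sizeof(uint32_t));

    /* ...then copy the window's own pixels to its new position in the LFB. */
    for (int row = 0; row < win_h; row++)
        memcpy(&lfb[(new_y + row) * pitch + new_x],
               &win_pixels[row * win_w],
               win_w * sizeof(uint32_t));
}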

Posted: Fri Apr 04, 2008 9:04 am
by z180
There is plenty of example code around for screen drivers, but you need to interface it with your own code. I will use only a VGA driver which supports multiple 4-bit and 8-bit modes to begin with.

Posted: Fri Apr 04, 2008 9:25 am
by lukem95
you know 8 bit can only support 256 colours? and a max resolution of (iirc) 320x400 (or something silly).

And as for existing code, I plan to port OpenGL once I have a stable WM (and have fixed my heap problems). From OpenGL a lot of routes are open, including SDL (with games such as Doom, Quake I and II, and applications such as MP3 players, etc.).

I thought about porting an early version of X, but if I'm honest, I would prefer to implement my own WM, where the only restriction is getting OpenGL to work with it.

Re: Window Manager specifics

Posted: Fri Apr 04, 2008 9:15 pm
by bewing
Crazed123 wrote: At this point in graphics technology we shouldn't be sending bitmaps at all to the graphics driver unless absolutely necessary (like textures and icons), because at least right now graphics computation is extremely cheap compared to relatively expensive memory space.
Does the hardware usually provide a spline fitter primitive? And is it typical to use that technique for drawing fonts? If not, then I'd think that almost anyone would also have to draw every glyph of "text" to the screen as bitmaps, too. Which means half the screen is always bitmaps.

Posted: Mon Apr 07, 2008 12:59 am
by zaleschiemilgabriel
IIRC splines are rendered as lines...

Posted: Thu Apr 10, 2008 4:01 am
by jal
One observation not already made (if I read correctly) is that when using 3D acceleration, each window must have its own buffer in order to do all those nice things like the OS X 'zoom out' function, Vista's preview task bar, Compiz's wobbling windows, etc. Having a back-buffer for each window is surely the way to go (even if you do not plan on 3D acceleration very soon). All this 'update only the affected rectangle' stuff, come on, that's Windows 3.0 we're talking about. I mean, go ahead if you like, but don't claim it's the best way to do it.


JAL

Posted: Thu Apr 10, 2008 12:51 pm
by zaleschiemilgabriel
With 3D hardware support those buffers are in video memory, so it's fast. With system memory - bad idea. That's what I was trying to say. Go with buffers if you plan on using 3d hardware. And I think it is also possible to use VESA VBE to access offscreen video memory. Never tried it myself.

Posted: Thu Apr 10, 2008 3:48 pm
by bewing
zaleschiemilgabriel wrote:With 3D hardware support those buffers are in video memory, so it's fast. With system memory - bad idea.
Ouch. I see. Knowing the way hardware gets designed, that sounds all too likely. However, that kinda means that it's best to use both methods -- if it's possible that they can be abstracted far enough to look the same to an app.