Graphics driver functions
Which functions must I implement in graphics drivers?
Hi,
This is a problem I had recently. My OS just uses VESA graphics modes as linear frame buffers, so why shouldn't the window manager just write directly to graphics memory? What I have done is implement a fairly basic graphics driver which does the following:
Code: Select all
Initialise();
Terminate();
GetProp(index, *);
SetProp(index, val);

These are basic utility functions which will be present in all my drivers. Examples of the properties provided are "BitsPerPixel", "Resolution", "RedFieldSize", "RedFieldPosition", etc., which allow the window manager / console to provide apps with the correct types of buffers.
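For what it's worth, here is a rough sketch of how a client might use those properties to build pixels in the right format for the current mode. The property indices and the exact GetProp() signature are invented for illustration - the real interface is whatever the driver happens to define:

Code: Select all
/* Hypothetical property indices -- placeholders, not the driver's real ones. */
#define PROP_RED_FIELD_SIZE    1
#define PROP_RED_FIELD_POS     2
#define PROP_GREEN_FIELD_SIZE  3
#define PROP_GREEN_FIELD_POS   4
#define PROP_BLUE_FIELD_SIZE   5
#define PROP_BLUE_FIELD_POS    6

/* Assumed to write the property into *value and return 0 on success. */
unsigned GetProp(unsigned index, unsigned *value);

/* Pack an 8-bit-per-channel colour into whatever pixel format the mode uses. */
unsigned pack_pixel(unsigned r, unsigned g, unsigned b)
{
    unsigned rs, rp, gs, gp, bs, bp;

    GetProp(PROP_RED_FIELD_SIZE, &rs);   GetProp(PROP_RED_FIELD_POS, &rp);
    GetProp(PROP_GREEN_FIELD_SIZE, &gs); GetProp(PROP_GREEN_FIELD_POS, &gp);
    GetProp(PROP_BLUE_FIELD_SIZE, &bs);  GetProp(PROP_BLUE_FIELD_POS, &bp);

    /* Drop the low bits of each 8-bit channel to fit its field, then shift
       the result into position (e.g. 5:6:5 or 8:8:8). */
    return ((r >> (8 - rs)) << rp) |
           ((g >> (8 - gs)) << gp) |
           ((b >> (8 - bs)) << bp);
}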
Code: Select all
Flip(void *displaybuffer);

This function allows me to refresh the entire screen from a display buffer. For example, each virtual console in my console manager has an entire screen's worth of display buffer. The current buffer is simply 'Flip'ped whenever it is switched to.
Code: Select all
FlipRect(void *displaybuffer, unsigned x, unsigned y, unsigned width, unsigned height);
The same idea, but for a smaller area of the screen - the general principle is that you should always update as little of the screen as possible.

I will be adding more, but this is as far as I have got so far. The advantage of having these functions built into your graphics driver instead of your window manager is that the graphics driver can *always* use the best copy method available. If you have hardware acceleration, use that; if not, you can use MMX/SSE/SSE2 etc.
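As a concrete (if naive) example, the plain fallback path for FlipRect could look something like the sketch below. The vesa_mode struct and its fields are made-up names standing in for whatever state Initialise() records about the current mode; an accelerated driver would replace the memcpy() loop with its own blit:

Code: Select all
#include <string.h>

/* Hypothetical mode state, assumed to be filled in by Initialise(). */
struct vesa_mode {
    unsigned char *lfb;    /* mapped linear frame buffer           */
    unsigned       pitch;  /* bytes per scanline in video memory   */
    unsigned       width;  /* pixels per scanline                  */
    unsigned       bpp;    /* bytes (not bits) per pixel           */
};
static struct vesa_mode mode;

/* Copy one rectangle from a full-screen display buffer to the frame buffer. */
void FlipRect(void *displaybuffer, unsigned x, unsigned y,
              unsigned width, unsigned height)
{
    const unsigned char *src = (const unsigned char *)displaybuffer
                             + (y * mode.width + x) * mode.bpp;
    unsigned char *dst = mode.lfb + y * mode.pitch + x * mode.bpp;
    unsigned row;

    for (row = 0; row < height; row++) {
        memcpy(dst, src, width * mode.bpp);
        src += mode.width * mode.bpp;   /* display buffer is tightly packed */
        dst += mode.pitch;              /* video memory may have padding    */
    }
}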
Note that I am not in any way attempting driver interface compatibility with other OSes - you will probably find better methods elsewhere, but HTH for a start!
Cheers,
Adam
Hi,
AJ wrote: This is a problem I had recently. My OS just uses VESA graphics modes as linear frame buffers, so why shouldn't the window manager just write directly to graphics memory?

Because you'll end up with a heap of code that writes directly to a frame buffer, and you'll "accidentally" make assumptions about frame buffers in your design and specifications that most of your code will end up relying on, which will eventually make it extremely difficult for any video driver to use hardware acceleration. In this case, while writing the first device driver that supports hardware acceleration you'll also be rewriting most of your GUI code and half of the application code that uses the GUI. This is "bad", as (IMHO) it should be possible for someone you've never met to download specifications from your web site and implement a video driver for your OS (with full hardware acceleration, and without rewriting any GUI/application code).
What I'd want is for applications to build a list of graphics commands that describe how to build their window. The GUI gets these lists from applications and pre-parses them (mostly just adjusting the co-ordinates for the commands, but also adding commands for window decoration - borders, title bar, etc), then combines the lists of commands from the applications and its own list of commands (task bar, desktop, etc) into a single larger list. This list is sent to the video driver, and the video driver does all of the drawing and displays its result.
For example, an application might send a list of commands like this to the GUI:
Code: Select all
Draw rectangle from (0, 0) to (50, 80) using colour 0x1234
Load picture "/foo/bar.jpg" and display it from (50, 0) to (100, 80)
Put words "Hello World" from (20, 20) to (80, 40)
Then, the GUI might get the list from the application and build its own list:

Code: Select all
Load picture "/home/fred/background.jpg" and display it at from (0, 0) to (640, 480)
Draw rectangle from (200, 80) to (300,100) using colour 0x3210
Put words "Application" from (205, 85) to (295, 95)
Draw rectangle from (200, 100) to (250,180) using colour 0x1234
Load picture "/foo/bar.jpg" and display it at from (250, 100) to (300, 180)
Put words "Hello World" from (220, 120) to (280, 140)
Draw rectangle from (0, 440) to (600,480) using colour 0x3456
Put words "START" from (10, 450) to (100, 470)
In addition to these lists of commands, the video driver should allow graphics data to be pre-loaded. For example (continuing the example above), the GUI might ask the video driver to pre-load the files "/home/fred/background.jpg" and "/foo/bar.jpg" long before they're actually needed.
It might also be good to have something like "textures", where the GUI could send a list of commands that describe a texture (e.g. an application's window) and then send a second list of commands that reference that texture (e.g. tell the video driver to draw the pre-built application's window somewhere). That way these textures could be re-used for many different frames of video rather than rebuilt each time; they could also be used for smaller things like icons, and for 3D textures (like most 3D games use). This would also make it simple to have a rotating cube of application windows (for example), or to do the Vista-style "alt_tab" thing.
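Sticking with the same made-up encoding, pre-loading and textures could simply be a few more record types - again, this is only a sketch of the shape of the interface, not a real one:

Code: Select all
/* Extra record types for the pre-loading and texture ideas, reusing the
   invented cmd_header from the sketch above.                                */
struct cmd_preload_picture { struct cmd_header h; char path[256]; };

struct cmd_define_texture {
    struct cmd_header h;
    unsigned          texture_id;  /* handle chosen by the GUI               */
    unsigned          cmd_count;   /* how many following records describe    */
                                   /* the texture's contents                 */
};

struct cmd_draw_texture {
    struct cmd_header h;           /* where to place the cached texture      */
    unsigned          texture_id;
};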
Also, the mouse pointer should have separate controls - perhaps one function to set the mouse pointer's position, one to enable/disable the mouse pointer, and another to change the mouse pointer's appearance.
Lastly, there are two different types of power management - one to tell the video card that the user hasn't done anything for X minutes (which is only used for video and maybe sound), and another type that's used for all device drivers (sleep, standby, shutdown, etc).
BTW, to add OpenGL compatibility to something like this, all you'd need to do is write a suitable library that builds the list of commands to send to the GUI. You wouldn't need to change anything in the GUI or in the video driver to add OpenGL support. However, there is one problem...
OpenGL lets applications draw a pretend frame of video, where "things" are given a number (instead of a colour or texture), and where the application can ask for the number associated with the "thing" at a certain screen co-ordinate. This is mostly used to find out which 3D object is at a specific 2D co-ordinate.
For example, if the mouse is at screen co-ordinate (123, 234) and the user presses the mouse button, then an application might draw a pretend frame of video consisting of two dwarves and a horse. The application might give all the polygons used for the first dwarf the number 1, all the polygons for the second dwarf the number 2, and all the polygons used for the horse the number 3. Then the application asks OpenGL what is at screen co-ordinate (123, 234), and OpenGL might tell it that "object number 2" is at that co-ordinate. This is how the application finds out that the user clicked on the second dwarf.
The problem is that this tells the application which object is at a certain position on the screen, but not where within that object the click landed. This is OK for games, etc (where you only care which object was clicked), but isn't that useful for application windows (where you might care exactly which pixel was clicked within the object/texture).
For the general case (arbitrary planes in 3D space), trying to find the object, the texture and the co-ordinates within the texture from 2D screen co-ordinates is complicated, as you need to reverse all the calculations that were involved in rendering the 3D planes onto the 2D screen. I'd probably do something similar to OpenGL for the general case, and then extend it so that if the texture (in 3D space) happens to be parallel with the screen and not rotated, it does return the co-ordinates within the texture. That way an application could give icons, buttons, etc "object numbers" and find out which object was clicked with the mouse (even when the application's window is rotated/scaled in 3D space), but the usual "the mouse clicked at (x, y) in your window" behaviour still works for ordinary GUI windows.
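An interface for that extended "what is under this pixel?" query might look roughly like the sketch below. None of these names exist anywhere; it is just meant to show the shape of the idea:

Code: Select all
/* Hypothetical hit-test query for the scheme described above.               */
struct pick_result {
    unsigned object_id;  /* 0 = nothing was hit                              */
    int      has_texel;  /* non-zero if (u, v) are meaningful, i.e. the hit  */
                         /* surface was parallel to the screen and unrotated */
    unsigned u, v;       /* co-ordinates of the hit within the texture       */
};

/* Assumed to return 0 on success and fill in *out. */
int PickAt(unsigned screen_x, unsigned screen_y, struct pick_result *out);

void on_click(unsigned mx, unsigned my)
{
    struct pick_result hit;

    if (PickAt(mx, my, &hit) == 0 && hit.object_id != 0) {
        /* hit.object_id says which object was clicked; for an ordinary GUI
           window, hit.u / hit.v also say which pixel within it was hit.     */
    }
}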
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Take a look at SGOS - some of the comments are in Chinese, but most are in English.
It has a really good-looking GUI without masses of code to trawl through. The code is also nicely laid out, with a file for each Win32 drawing function (well, what's been done so far, anyway) and the generic ones.
http://www.sgos.org.cn/