
Video driver design question

Posted: Fri Jan 27, 2006 4:59 pm
by proxy
I have developed some basic code which switches video modes and all that, and I have a dinky little box for a mouse cursor which changes colours when I click the buttons. It's a good test of the correctness of the code, but it has no real design.

So I want to design a good driver system, and the thing that is really holding me up is video. I have no idea how I want to present the driver to the rest of the system.

I would love for my video server to be a user-space application. How should it draw graphics? Should my video driver itself provide primitives like drawPixel, drawLine, drawRect, etc.? I feel that any access to these would be slow.

So now I am thinking, well, I'll just make my video driver able to switch modes and return basic information, such as where the LFB (linear frame buffer) is in memory. My current idea is to have a system call which would let an app ask to have a block of memory mapped into its address space, so the process would do something like the following:

(these syscalls may become ioctls in the future, if I ever have them)

execute syscall: enumerate_video_modes
pick a mode
execute syscall: switch_video_mode
execute syscall: get_linear_frame_buffer
execute syscall: map_memory (passing the LFB address and where I would like it in my process)

From this point on, the application would have direct access to video memory, which would be nice and fast...
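
A rough sketch of the whole sequence in C; the syscall wrappers and the struct here are hypothetical names just to illustrate the flow, not a real API:

[tt]#include <stdint.h>
#include <stddef.h>

struct video_mode {
    uint32_t id;
    uint32_t width, height, bpp;
};

/* hypothetical syscall wrappers */
extern int enumerate_video_modes(struct video_mode *modes, size_t max);
extern int switch_video_mode(uint32_t mode_id);
extern int get_linear_frame_buffer(uintptr_t *phys, size_t *size);
extern int map_memory(uintptr_t phys, void *where, size_t size);

static void *setup_video(void)
{
    struct video_mode modes[32];
    int count = enumerate_video_modes(modes, 32);
    if (count <= 0)
        return NULL;

    if (switch_video_mode(modes[0].id) != 0)   /* just pick the first mode */
        return NULL;

    uintptr_t lfb;
    size_t size;
    if (get_linear_frame_buffer(&lfb, &size) != 0)
        return NULL;

    void *fb = (void *)0x40000000;             /* arbitrary spot in our space */
    if (map_memory(lfb, fb, size) != 0)
        return NULL;

    return fb;   /* direct access to video memory from here on */
}[/tt]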

But this seems like an awful security flaw, since an application could just map memory and write to it. We could put limitations on this, such as an application may not map memory owned by another process; fine, that's better, but now there is a "fake GUI" attack that can happen if some malicious application requests the memory first.

So what do you guys think? Anyone implemented any good systems for user mode graphics?

proxy

Re:Video driver design question

Posted: Fri Jan 27, 2006 5:45 pm
by Senaus
Do you have an 'init' process which loads all the servers? If so, I would have the init process take the access rights from the video driver and pass them on to the GUI.

Re:Video driver design question

Posted: Fri Jan 27, 2006 6:17 pm
by Brendan
Hi,
proxy wrote:I would love for my video server to be a user-space application. How should it draw graphics? Should my video driver itself provide primitives like drawPixel, drawLine, drawRect, etc.? I feel that any access to these would be slow.
IMHO a (modern) video driver interface needs to allow applications to make use of hardware acceleration. For example, an application could create a "script" of operations (including UTF strings for selected fonts, textured polygons, fog/darkness, lighting, etc.) and send it to the video driver. The video driver would process the script using as much hardware acceleration as possible (falling back to a software renderer where hardware acceleration isn't supported).

The video driver would also need to keep state. For example, the application might tell it to load some graphics data (textures, icons, etc) in one script, and then send more scripts that reference this graphics data. In this case, the video driver should try to store the graphics data in the video card's "off screen memory" to enable faster blitting.
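
As a rough sketch (every name and structure here is invented purely for illustration), such a "script" could just be an array of tagged commands that the driver walks through:

[tt]#include <stdint.h>

enum vid_op {
    VID_DRAW_PIXEL,      /* single pixel (only sensible in batches)  */
    VID_DRAW_RECT,       /* solid rectangle                          */
    VID_UPLOAD_BITMAP,   /* store pixel data, get back a handle      */
    VID_DRAW_BITMAP,     /* blit a previously uploaded bitmap        */
    VID_DRAW_TEXT        /* draw a UTF-8 string with a loaded font   */
};

struct vid_cmd {
    enum vid_op op;
    uint32_t handle;               /* bitmap/font handle, where relevant */
    uint32_t colour;
    int32_t  x1, y1, x2, y2;
    const void *data;              /* pixel data or UTF-8 text, or NULL  */
    uint32_t data_len;
};

/* One system call submits the whole batch; the driver decides,
   command by command, what the hardware can accelerate. */
extern int vid_submit(const struct vid_cmd *cmds, uint32_t count);[/tt]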

Consider a GUI. It'd "upload" the graphics data for each window into the video driver, and then tell the video driver where on the screen each one should be drawn. This can make things like moving windows, minimizing/maximizing them or "alt+tab" extremely fast if the video card has hardware accelerated video bit-blits.

The other thing that might be worth considering is using video mode-independent functions. For example, the video driver would use "virtual co-ordinates" from 0x00000000 to 0xFFFFFFFF instead of from 0 to 639 (or 0 to 799, or 0 to 1023), and it'd also use a virtual pixel format. The video driver might have a function to draw a line:

void drawLine(vColour, vWidth, vStartX, vStartY, vEndX, vEndY);

And the application could draw a blue line down the middle of the screen with:

drawLine(0x0000FF, 0x000100, 0x80000000, 0x00000000, 0x80000000, 0xFFFFFFFF);

That way the application can draw anything it likes without caring what video mode is being used, and the video driver could change video modes without telling the application (which allows the video driver to auto-select the best mode for all applications and avoid switching video modes when the user does "alt+tab").
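
Inside the video driver, converting a virtual coordinate to the current mode is cheap, one multiply and shift per point. For example (assuming 32-bit virtual coordinates as above):

[tt]#include <stdint.h>

/* Map a 32-bit virtual coordinate (0x00000000..0xFFFFFFFF) onto the
   current mode's resolution, e.g. 0x80000000 -> 320 in a 640-wide mode. */
static inline uint32_t virt_to_pixel(uint32_t v, uint32_t resolution)
{
    return (uint32_t)(((uint64_t)v * resolution) >> 32);
}[/tt]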

I'd also recommend having a look at something like OpenGL to see what functions it provides...


Cheers,

Brendan

Re:Video driver design question

Posted: Fri Jan 27, 2006 9:50 pm
by proxy
Brendan,

Thanks for the response; you have definitely given me a lot to think about in my design.

Being able to send batch jobs to the video driver is definitely something to consider, especially once I get into more advanced things like hardware acceleration. I definitely plan to have some sort of abstraction over any hardware acceleration that the card may support, which may very well be as you described.

But what if they just want to plot some pixels? It seems somewhat slow to do a system call/ioctl just to plot ONE pixel...

I do like the idea of having a sort of virtual resolution that is implicitly scaled to the native one; it kinda makes the whole system almost vector-graphics based. But once again, I have concerns about the abstraction imposing performance hits.

Anyway, onto your suggestion, just to make sure I understand it well.

You are basically suggesting I do all the primitives in kernel land in the video driver, then be able to load/store graphical resources, which I assume will have some way of being referenced (resource #/handle/etc). Then the application can send a bunch of commands whenever it wants to have an item rendered/moved/etc.

I can see this working pretty well, and I will certainly look into the API that OpenGL provides; I have seen some OpenGL code before. If I remember correctly it is something like this (just pseudo code representing the coding pattern...):

opengl_start();
update_objects();
opengl_end();
opengl_render();
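
Something like that, anyway; if I remember the immediate-mode style right, the real thing is closer to:

[tt]/* Classic OpenGL 1.x immediate mode: vertices go between glBegin and
   glEnd, and glFlush pushes the queued commands out for rendering. */
glClear(GL_COLOR_BUFFER_BIT);
glBegin(GL_TRIANGLES);
    glColor3f(0.0f, 0.0f, 1.0f);     /* blue */
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
glEnd();
glFlush();[/tt]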

Did I follow what you meant, or no? Also, if I did, what do I do if I want to provide direct video memory access to an app, DirectX style?

proxy

Re:Video driver design question

Posted: Fri Jan 27, 2006 10:47 pm
by Phugoid
One thing you might find interesting in OpenGL is display lists and family. The user essentially draws his object, but the OpenGL library "remembers" the sequence of drawing commands - making a display list. When the object actually needs to be rendered to the screen, the user merely asks OpenGL to draw the display list. The idea there is that the actual data and commands needed to draw the list may be stored anywhere - kernel space, video memory, etc. This greatly speeds up repeated drawing.
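
In OpenGL terms it looks like this (these are the standard display-list calls):

[tt]/* Record drawing commands into a display list once... */
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);        /* compile, don't draw yet */
    glBegin(GL_QUADS);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.5f,  0.5f);
        glVertex2f(-0.5f,  0.5f);
    glEnd();
glEndList();

/* ...then replay it cheaply every frame. The implementation is free to
   keep the compiled list wherever it likes (kernel space, video memory). */
glCallList(list);[/tt]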

Re:Video driver design question

Posted: Fri Jan 27, 2006 11:15 pm
by Brendan
Hi,
proxy wrote:But what if they just want to plot some pixels? It seems somewhat slow to do a system call/ioctl just to plot ONE pixel...
That's why you send the scripts instead - if you want to plot 10 pixels it's only one system call rather than 10 of them. Usually an application draws more than one thing at a time (I can't think of anything that would draw one pixel by itself).
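
For example, with a command format like the hypothetical vid_cmd/vid_submit sketch earlier in this thread, ten pixels become one submission:

[tt]/* Batch ten pixel plots into a single system call. */
struct vid_cmd batch[10] = { 0 };
for (int i = 0; i < 10; i++) {
    batch[i].op     = VID_DRAW_PIXEL;
    batch[i].colour = 0xFF0000;          /* red */
    batch[i].x1     = 100 + i;
    batch[i].y1     = 100;
}
vid_submit(batch, 10);                   /* one syscall, ten pixels */[/tt]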
proxy wrote:I do like the idea of having a sort of virtual resolution that is implicitly scaled to the native one; it kinda makes the whole system almost vector-graphics based. But once again, I have concerns about the abstraction imposing performance hits.
For 3D polygons it wouldn't make any difference, as they are scaled anyway. For "solid fill" drawing primitives like circles, ellipses, squares, rectangles, etc. you'd only need to scale a few points (e.g. scaling the corners of a square before drawing it). For fonts, most systems scale them anyway.

The big performance problem would be bitmaps. The video driver could keep a scaled copy of each bitmap, so that it doesn't need to be rescaled if it's drawn at the same size twice. For something like movies, the application would just tell the video driver which movie to play (and where), rather than sending one frame at a time.

The video driver could also auto-compensate (drop back to a lower resolution, stop doing anti-aliasing, use simpler shading, skip the shadows, use lower detail textures, etc). Of course this would work both ways (improving the picture quality if there's heaps of spare CPU time). IMHO this could be good - instead of applications/games saying "must have a 3 GHz CPU and 1024 * 768 video or better" they'd just say "better graphics with better hardware".

It'd also be different. For example, you wouldn't draw one pixel because it'd be invisible after it's scaled down to the actual screen resolution (instead you'd draw a small square). The same happens to lines (you'd need to specify how thick the line is).

I guess there's nothing to say you can't support virtual video and normal video - the application would tell the video driver what "mode" it wants to operate in.
proxy wrote:You are basically suggesting I do all the primitives in kernel land in the video driver, then be able to load/store graphical resources, which I assume will have some way of being referenced (resource #/handle/etc). Then the application can send a bunch of commands whenever it wants to have an item rendered/moved/etc.
Yes, something like:

[tt] upload "chicken.bmp" as BMP1
upload "file.ico" as BMP2
upload "exit_button.bmp" as BMP3
upload "crazy.font" as FNT1

draw box in white with top left at (0,0) and bottom right at (122,132)
draw rectangle in blue with top left at (1,1) and bottom right at (121,131)
draw BMP1 with top left at (2,3) and bottom right at (120,130)
draw BMP2 with top left at (20,22) and bottom right at (52,54)
draw BMP2 with top left at (99,90) and bottom right at (119,128)
write "Funky Chicken!" with FNT1 with top left at (5,5) and bottom right at (20,127)[/tt]

The script itself could also be a file on disk if nothing is dynamically generated (e.g. the application just tells the video driver "execute file foobar.vscpt").
proxy wrote:I can see this working pretty well, and I will certainly look into the API that OpenGL provides; I have seen some OpenGL code before. If I remember correctly it is something like this (just pseudo code representing the coding pattern...):
I'm not very familiar with OpenGL (not sure how its functions are called, etc). The idea was to take a look at it to find out what kinds of operations are normally supported (3D polygons, fog/darkness, clipping, bitmaps, sprites, etc).


Cheers,

Brendan

Re:Video driver design question

Posted: Mon Feb 06, 2006 3:17 pm
by giszo
Hi!

I thought about starting a new thread for my question, but it's close to this thread so I'm asking here...

I'm also thinking about and designing the GUI system for my OS, and I have a little demo application that implements all the features I thought of. The GUI system is really simple: a window has a buffer that contains the pixels. A window also has a mask that contains information about the visible part(s) of the window. The important part of my question is that the window currently uses pixel drawing, and I'd like to ask how I should use some HW-accelerated drawing like fillrect / drawline in this system?
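
Roughly, each window looks something like this (the field names are just for illustration):

[tt]#include <stdint.h>

/* Sketch of the window described above. */
struct window {
    int      x, y;            /* position on screen              */
    int      width, height;
    uint32_t *pixels;         /* width * height pixel buffer     */
    uint8_t  *mask;           /* one byte per pixel: visible?    */
};[/tt]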

Thx, giszo

Re:Video driver design question

Posted: Tue Feb 07, 2006 1:02 am
by Brendan
Hi,
giszo wrote:I'm also thinking about and designing the GUI system for my OS, and I have a little demo application that implements all the features I thought of. The GUI system is really simple: a window has a buffer that contains the pixels. A window also has a mask that contains information about the visible part(s) of the window. The important part of my question is that the window currently uses pixel drawing, and I'd like to ask how I should use some HW-accelerated drawing like fillrect / drawline in this system?
The main problem with using hardware acceleration is that you need to tell the video driver what to draw (so it can ask the hardware to do the work, if the hardware supports it).

For your current system, you could use hardware accelerated bit blits to transfer pixel data into the video card's display memory. You could also transfer each window's pixel data into off-screen memory so that the video card can transfer it onto the screen anywhere you like (which is faster because the computer's memory and PCI bus don't get in the way, but only if the same pixel data is re-used). I assume you'd also be able to use the hardware mouse cursor.

That alone can make a GUI perform a fair bit better, but it's a small part of what a modern video card is capable of. For anything more than this you'd need to redesign the software interface to your video driver. For a simple example, the GUI should be able to draw a menu or a window's border & title using hardware accelerated rectangles, lines, etc.
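
For example, with a command-based interface like the hypothetical vid_cmd/vid_submit sketch from earlier in the thread, a window frame becomes a couple of accelerated rectangles:

[tt]/* Draw a window frame with accelerated rectangle fills. */
struct vid_cmd frame[2] = {
    { .op = VID_DRAW_RECT, .colour = 0xC0C0C0,     /* grey border */
      .x1 = 0, .y1 = 0, .x2 = 200, .y2 = 150 },
    { .op = VID_DRAW_RECT, .colour = 0x0000AA,     /* title bar   */
      .x1 = 2, .y1 = 2, .x2 = 198, .y2 = 20 },
};
vid_submit(frame, 2);[/tt]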

For a more realistic example, an application should be able to say "draw a strangely shaped 3D polygon using the texture data I sent earlier, but make it partially opaque (so 50% of the background colours show through) except where the texture is transparent, and do lighting and shading effects according to the light sources I specified earlier". This would be sent to the GUI, and the GUI would adjust the parameters so that the polygon is positioned correctly on the screen and clipped to the edge of the window. Then the GUI would send it to the video driver. Hopefully the video driver would send it to the video card, where it would be drawn by hardware (but unfortunately the driver may need to do some or all of it in software).

AFAIK the video hardware itself often consists of a "command buffer" and some other stuff. For older video cards the command buffer might only be one command deep (you put the command and its arguments into the video card, then say "go"), and video operations are done one at a time. For modern video cards the command buffer might hold many commands, so that you can put 32 commands into it (for example) and get an IRQ when they are all done. In this case the IRQ handler could put more commands in, so that a 3D scene can be drawn by filling the command buffer several times (which allows the CPU to do other things while the video card is doing the drawing).
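
A sketch of that last arrangement; the registers and layout here are completely made up (every real card has its own scheme):

[tt]#include <stdint.h>

#define FIFO_SLOTS 32

/* Hypothetical MMIO registers, set up when the card's BARs are mapped. */
volatile uint32_t *fifo_slot;   /* write a command word here          */
volatile uint32_t *fifo_free;   /* free slots remaining in the FIFO   */

static const uint32_t *pending; /* commands still waiting to go in    */
static uint32_t pending_left;

/* Top up the FIFO with as many queued commands as will fit. */
static void fifo_refill(void)
{
    while (pending_left > 0 && *fifo_free > 0) {
        *fifo_slot = *pending++;
        pending_left--;
    }
}

/* IRQ handler: the card interrupts when its FIFO drains, so we refill
   it here; the CPU is free to do other work in between interrupts. */
void video_irq_handler(void)
{
    fifo_refill();
}[/tt]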

For a rough idea of the sort of things needed, have a look at the list of functions supported by OpenGL. I would assume that the OpenGL library builds a "script" of operations to be done, and then when glFinish is called it sends this script to the video driver (or GUI) and waits for it to be completed. I'm not an OpenGL expert (I could be wrong), but this is how I'd implement it (except for the blocking-until-completion part).

The other thing that can be a major pain is font drawing. Let the first line of text be Times New Roman in italics with underline (size = 8, colour = green), let the second line of text be an Arabic font (size = 12, colour = blue, and don't forget Arabic is written from right to left) and let the third line of text be Japanese in bold (size = 14, colour = orange). Once this text is drawn, use it as texture data on a curved surface (concave) at the top left of the window that is angled such that the start is at (20, 20, 40), the middle is at (70, 55, 5) and the end is at (120, 90, 60).


Cheers,

Brendan