Video Drivers
I need to know how Linux handles video drivers.
How do applications talk to video drivers? Is there a standard way, or is each driver unique?
AFAIK Linux has no video drivers; the video drivers are part of X, where they are handled in user space. Take a look at the XOrg and XFree86 projects.
Systems and Computer Engineering Researcher
"Do you pine for the nice days of Minix-1.1, when men were men and wrote their own device drivers?" -- Linus Torvalds
http://sce.carleton.ca/~maslan
Re: Video Drivers
Hi,
MarkOS wrote: I need to know how Linux handles video drivers.
I'm not a Linux expert, but AFAIK Linux (the kernel) does almost nothing - it allows an untrusted CPL=3 process to have full access to I/O ports and (some or all?) of the physical address space, and then hopes that the untrusted CPL=3 process will "do the right thing(tm)"...
The CPL=3 untrusted process should be an X server, which includes the video driver, some networking stuff, and some other GUI stuff (and hopefully no bugs that trash the system).
Of course this isn't the only option - an application can include its own video drivers (and its own bugs that hopefully don't trash everything, or possibly malicious code intended to trash everything). This is normally done using a library (e.g. SVGAlib).
IMHO it's probably the worst (least secure, least fault tolerant) "video driver" design possible, but AFAIK other *nix clones (FreeBSD, Solaris, etc) do the same thing. Also AFAIK, there are (were?) attempts to change this and put the video driver(s) in the "monolithic" kernel where they belong - I'm not sure if any of these attempts have made any progress though.
MarkOS wrote: How do applications talk to video drivers? Is there a standard way, or is each driver unique?
Applications talk to the X server via networking (TCP/IP sockets?), using the "X Window System core protocol". I'd assume most applications use some sort of library for this (e.g. Xlib, Xaw, Motif, GTK+, or Qt).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- Brynet-Inc
Re: Video Drivers
Brendan wrote: Applications talk to the X server via networking (TCP/IP sockets?)
Unix sockets... but there is a TCP/IP component for network sessions.
You can prevent Xorg from listening on any TCP ports: startx -- -nolisten tcp.
Re: Video Drivers
Brendan wrote: IMHO it's probably the worst (least secure, least fault tolerant) "video driver" design possible, but AFAIK other *nix clones (FreeBSD, Solaris, etc) do the same thing. Also AFAIK, there are (were?) attempts to change this and put the video driver(s) in the "monolithic" kernel where they belong - I'm not sure if any of these attempts have made any progress though.
What is a better "video driver" design?
Re: Video Drivers
Hi,
MarkOS wrote: What is a better "video driver" design?
That depends. Mostly any video driver design is better than Linux's "we have no video driver design, so figure it out yourself".
For a true monolithic kernel, I'd put (at least) basic functionality in the kernel itself so that normal applications (and GUIs) don't need access to I/O ports and memory areas (and then prevent normal applications from accessing I/O ports and memory areas).
For a true micro-kernel, I'd put the video driver in its own address space and only let it access the video card's I/O ports (not all I/O ports) and only let it access the video card's physical address space regions (not all of the physical address space).
Of course this is more about protecting the system from untrusted code.
For the video driver itself I'd prefer an interface where the video driver accepts some sort of script that describes the frame (and then generates the frame from the script and displays it, including some way to upload textures to the video driver, and possibly including caching parts of the data being displayed to improve performance for subsequent frames). I don't like raw framebuffer access as it makes 2D/3D acceleration impossible (does anyone really want to pay $500 or more for an extremely powerful video card and then get the same performance as a $30 video card due to OS design flaws?).
Note that the code for the X server was (recently?) split into modules, where the "back-end" can be customised to talk to an OS's native video driver (instead of containing its own video driver with direct video card access). IIRC there are already back-ends that talk to Windows (DirectX?), OpenGL, etc. It's the poor design of Linux (and other *nix clones) that causes the poor security, not the X server itself (which only does what it has to do to work on the OS).
Also note that Linux seems to be moving to DRI/DRM, where "DRM" is "Direct Rendering Manager" - a kernel module for talking directly to the video hardware that removes the need for untrusted code to have direct hardware access (among other things).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
I think maybe I could write drivers for each card, and applications will talk with these drivers using IOCTLs.
For example, there will be an IOCTL for drawing triangles, an IOCTL for drawing rectangles, et cetera. So the video driver can use hardware acceleration, and there isn't any security issue, because this is a real driver.
Is this a good way?
(However each vendor could write his own driver)
Hi,
MarkOS wrote: I think maybe I could write drivers for each card, and applications will talk with these drivers using IOCTLs.
For example, there will be an IOCTL for drawing triangles, an IOCTL for drawing rectangles, et cetera. So the video driver can use hardware acceleration, and there isn't any security issue, because this is a real driver.
Is this a good way?
Yes.
However...
Imagine something like a modern 3D game trying to display 10000 textured polygons per frame, and then consider how much overhead one IOCTL per polygon would involve. At 50 frames per second it adds up to 500000 IOCTLs per second, and if each IOCTL costs 200 cycles then it's 100000000 cycles per second (or about 10% of a 1 GHz CPU's time consumed by IOCTL overhead alone).
That's why I suggested "application sends list of commands" - e.g. one IOCTL to send the list of commands, rather than one IOCTL per command.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Brendan wrote:
Hi,
MarkOS wrote: I think maybe I could write drivers for each card, and applications will talk with these drivers using IOCTLs.
For example, there will be an IOCTL for drawing triangles, an IOCTL for drawing rectangles, et cetera. So the video driver can use hardware acceleration, and there isn't any security issue, because this is a real driver.
Is this a good way?
Yes.
However...
Imagine something like a modern 3D game trying to display 10000 textured polygons per frame, and then consider how much overhead one IOCTL per polygon would involve. At 50 frames per second it adds up to 500000 IOCTLs per second, and if each IOCTL costs 200 cycles then it's 100000000 cycles per second (or about 10% of a 1 GHz CPU's time consumed by IOCTL overhead alone).
That's why I suggested "application sends list of commands" - e.g. one IOCTL to send the list of commands, rather than one IOCTL per command.
Cheers,
Brendan
It's a good idea. I can work on this.
Thank you
Whoever said Linux has no video drivers is just dead wrong. Linux does; however, its driver set is not as rich as Xorg's. The Linux framebuffer is a pure kernel approach to graphics. That said, even the Xorg drivers usually have a kernel part, and kernel support is required for the Direct Rendering Infrastructure (IIRC).