Hi,
frizzz wrote: I think Brendan isn't right (I hope it) - you won't need years to code. There are just some register addresses and bit-meanings you need to know.
Whether Brendan is right or not depends on what you want the video driver to be capable of. Obviously generic VBE 3.0 support is going to be much easier than creating decent drivers that support all video modes with 2D and 3D acceleration using direct IO. It also depends on how many video cards you support.
Most video manufacturers produce a chipset, and then enhance the original chipset for later versions. Therefore creating a decent device driver for the original video card is going to be much harder than changing that video driver to suit the later versions of the card.
Let's say for the original version it takes you 2 months, and 2 weeks to modify that for each further version of the card. You'd need drivers for:
- ATI
- Chips & Technologies
- Cirrus Logic
- IIT
- Matrox
- NCR
- Nvidia
- Oak
- Paradise
- S3 Incorporated
- SiS
- Trident
- Tseng Labs
- Video7
- Weitek
That's 15 "chipsets" that I can remember, which works out to around 30 months. On top of that, each of these chipsets has its variations; if you assume an average of 10 variations per chipset, then at 2 weeks each that's another 75 months.
On top of this you'll probably want a generic VGA driver and a generic VBE 3 driver (so people can use something while you're trying to catch up to the manufacturers' latest designs). Allow another 2 months for these.
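To give a rough idea of what the generic VBE driver ends up working with, here's a minimal sketch of the VBE mode information block. The field offsets come from the VBE specification, but the struct name is made up and I'm assuming the bootloader (or a real mode/v8086 helper) has already called "int 0x10" with AX = 0x4F01 to fill it in:

    #include <stdint.h>

    struct vbe_mode_info {
        uint16_t mode_attributes;       /* offset 0x00 */
        uint8_t  window_fields[14];     /* offsets 0x02-0x0F: banked-mode windowing */
        uint16_t bytes_per_scanline;    /* offset 0x10 */
        uint16_t x_resolution;          /* offset 0x12 */
        uint16_t y_resolution;          /* offset 0x14 */
        uint8_t  char_cell_and_planes[3];
        uint8_t  bits_per_pixel;        /* offset 0x19 */
        uint8_t  banks_and_masks[14];   /* memory model, colour masks, etc. */
        uint32_t phys_base_ptr;         /* offset 0x28: linear frame buffer address */
        uint8_t  reserved[212];         /* pad to the spec's 256 bytes */
    } __attribute__((packed));

The generic driver just finds a mode with the resolution and depth it wants, points the OS at phys_base_ptr, and does everything else (blitting, lines, etc.) in software.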
Further, you're going to need to design the software interface to the video drivers and document it well. You're also going to need code to do some things in software when the video card doesn't do them in hardware (e.g. if someone wants 3D polygons on an old card, it still works because it's done in software). Add another 3 months (or more) for this.
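Just for illustration (the names and fields here are made up, not from any real OS), that interface might end up looking something like a table of function pointers, where anything the card can't accelerate is left NULL and the OS falls back to its software routines:

    #include <stdint.h>

    struct video_mode {
        uint32_t width, height;
        uint32_t bits_per_pixel;
    };

    struct video_driver {
        const char *name;

        int  (*set_mode)(const struct video_mode *mode);

        /* NULL means "not accelerated - use the software fallback" */
        void (*blit)(uint32_t dest_x, uint32_t dest_y,
                     uint32_t src_x, uint32_t src_y,
                     uint32_t width, uint32_t height);
        void (*draw_line)(int32_t x0, int32_t y0,
                          int32_t x1, int32_t y1, uint32_t colour);
        void (*fill_triangle)(const int32_t vertexes[3][3], uint32_t colour);
        /* ...hardware cursor, polygons, textures, etc. follow the same pattern */
    };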
Ok, that all adds up (30 + 75 + 2 + 3) to around 110 months - call it 10 years. Unfortunately this doesn't really allow time for finding all the documentation you'll need, acquiring video cards to test drivers on, or fixing all the bug reports that people will (hopefully) send in.
There are also things that can be done to improve this. If you don't need to work or attend school or university, then you can probably halve this. You could also skip the older video cards, which could halve it again.
One possibility would be to write the generic video code (standard VGA and VBE 3) and a few decent video drivers, and then ask for volunteers to write the others. This is where your well-documented device driver interface is needed - people would use your documentation, plus the source code for the decent video drivers you've done, as a guide for figuring out their own drivers.
Another possibility would be to write poor quality code. For example, if you didn't bother with 3D it'd save heaps of time. Unfortunately this would also make it less likely that volunteers would be interested in helping, so I'm not too sure whether it would save time in the end or not.
Something that may be a huge help is supporting computers with multiple video cards (i.e. dual display). This would let you use one stable video driver/monitor to see debugging information, etc. from the video driver you're working on. Otherwise, if the video driver you're working on results in a black screen or trashed video, it can be very difficult to figure out what's going wrong.
frizzz wrote: But you need to see that everything the vendors could have done needs to act as fast as possible. Thus it needs to be simple!
The well-hidden things are special registers and commands for video purposes (i.e. ATI All-in-Wonder...), but you could omit them (or disassemble the driver binaries).
This is obviously wrong!
For example, to make things simple (and faster) for software, video cards could have a single IO port that software uses to set a new video mode - e.g. "out 0x1234, 0x13" could set 320*200*256 mode. This would make the hardware more complicated though, so no manufacturer will ever do it. Instead you need to set around 100 different registers (or more) to set a video mode properly.
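In practice mode setting looks more like this rough sketch - each VGA controller is programmed through an index/data port pair, and the actual register tables (dozens of values per mode, plus whatever extended registers the specific card adds) are omitted here; outb() is just the usual port-output helper:

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t value)
    {
        __asm__ __volatile__("outb %0, %1" : : "a"(value), "Nd"(port));
    }

    static void write_seq(uint8_t index, uint8_t value)
    {
        outb(0x3C4, index);     /* sequencer: select register... */
        outb(0x3C5, value);     /* ...then write its value */
    }

    static void write_crtc(uint8_t index, uint8_t value)
    {
        outb(0x3D4, index);     /* CRT controller index */
        outb(0x3D5, value);     /* CRT controller data */
    }

    /* Setting 320*200*256 means writing the miscellaneous output register
       (port 0x3C2), then every sequencer, CRTC, graphics controller and
       attribute controller register - roughly 60 writes for plain VGA,
       before you even touch a card's own extended registers. */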
Instead of making the software interface simple, manufacturers try to get the best performance for the least cost. This often means doing some things in hardware where performance matters, and pushing things back to software where performance doesn't matter so much (or where it's too costly to do in hardware, or where the card doesn't support the feature at all).
If you're talking about 2D and 3D acceleration (line drawing, hardware mouse cursors, bit-blits, polygons, fog, z-buffers, volumetric smoke, textures, lighting, shadows, bump mapping, etc.) then it's probably going to be more complicated than you expect. For example, your video interface might use signed 32 bit integers for x, y and z co-ordinates where (0,0,0) is the centre of the screen, while the video card might expect 64 bit floating point x, y and 1/z where (0,0,0) is the top left corner of the screen. You might have to do hidden surface removal or clipping in software, or perhaps the video card only supports 3D triangles while your video interface uses polygons that need to be split into multiple triangles. Even for something like a 2D line you might need "(x, y), step and length" or "starting (x,y) and ending (x,y)", and then there are things like patterns, colour and/or source data, and mixing modes.
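As a rough example of the sort of translation the driver ends up doing (the formats on both sides are made up purely for illustration, and I'm ignoring whether the y axis needs flipping):

    #include <stdint.h>

    struct os_vertex   { int32_t x, y, z; };         /* origin at centre of screen */
    struct card_vertex { double x, y, one_over_z; }; /* origin at top left, 1/z */

    static struct card_vertex convert_vertex(struct os_vertex v,
                                             int32_t screen_width,
                                             int32_t screen_height)
    {
        struct card_vertex out;

        /* shift the origin from the centre to the top left corner */
        out.x = (double)v.x + screen_width  / 2.0;
        out.y = (double)v.y + screen_height / 2.0;

        /* the card wants 1/z rather than z (avoid dividing by zero) */
        out.one_over_z = (v.z != 0) ? 1.0 / (double)v.z : 0.0;
        return out;
    }

    /* Splitting a convex polygon into triangles (a "triangle fan") would be
       another piece of this translation layer: triangles (0,1,2), (0,2,3),
       (0,3,4), and so on. */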
Cheers,
Brendan