Graphics driver interface design
Posted: Sun Dec 09, 2007 7:42 pm
So... I wrote a minimal VGA 320x200x8 (RRRGGGBB truecolor) driver yesterday. While I'll probably have to figure out how to get VBE modes or something similar as fallback modes when no specialized driver exists, this driver has a dual purpose: it's also the generic driver on which to base other drivers.
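For those unfamiliar with the format, RRRGGGBB means all three channels packed into a single byte: three bits of red, three of green, two of blue. A minimal sketch of the packing (the helper name is my own, not part of the driver):

Code: Select all
// Sketch: pack 8-bit-per-channel RGB into one RRRGGGBB byte
// by keeping only the top bits of each channel.
static unsigned char pack_rrrgggbb(unsigned char r, unsigned char g, unsigned char b)
{
    return (r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6);
}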
I thought I'd post about it here, in order to open a discussion. The current driver interface looks like this:
Code: Select all
typedef struct vga_display_S vga_display;
struct vga_display_S {
    // mode data: width/height for clipping, p/d for access..
    // to plot a pixel at (x,y), write data[x*d + y*p] = value
    struct {
        unsigned short w; // width
        unsigned short h; // height
        unsigned short p; // pitch = number of bytes in memory between lines
        unsigned short d; // depth = number of bytes per pixel
        struct { char r,g,b,a; } bits;  // number of bits for each channel
        struct { char r,g,b,a; } shift; // how many bits each channel is shifted
    } mode;

    void * buffer; // linear framebuffer address (default: 0xA0000)
    void * driver; // driver internal data, always 0 in default implementation

    // lock() the driver before accessing 'mode' or 'buffer' directly
    // unlock() the driver before calling any other driver functions
    //
    // The implementation need not be recursive, and lock should be blocking.
    //
    // A client should not rely on these, and it's fine to provide no-ops
    // where hardware doesn't mandate locking.
    //
    // The default implementation uses no-ops.
    //
    void (*lock)(vga_display *);
    void (*unlock)(vga_display *);

    // setmode() sets a given graphics mode
    // FIXME: need a protocol for detection/selection of available modes
    //
    // The default implementation only supports mode=0 (320x200 with RRRGGGBB)
    //
    // Returns 0 on success, non-zero on failure.
    int (*setmode)(vga_display *, int mode);

    // flip() updates the screen when the driver is using hardware double
    // buffering or software emulation of a linear framebuffer.
    //
    // Software double-buffering should NOT be done at driver level.
    //
    // The default implementation is a no-op.
    //
    void (*flip)(vga_display *);

    // blit() - copies an image from one address to another
    //
    // Implementations that provide acceleration should detect whether
    // either of the pointers points to video memory in order to choose
    // the correct strategy.
    //
    // When source data contains alpha values, those are expected to be
    // copied as-is to the destination (no alpha blending is done).
    //
    // Notice that the client is expected to do clipping.
    //
    // Returns 0 on success, non-zero on failure.
    //
    // The default implementation does a manual byte-by-byte copy.
    //
    int (*blit)(vga_display *,
                void * src_data, unsigned src_pitch,
                void * dest_data, unsigned dest_pitch,
                unsigned width, unsigned height);
};

vga_display * vga_init();
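To give an idea of how a client is meant to drive this, here's a sketch of a putpixel built on the interface above (the helper is mine, not part of the interface, and it assumes little-endian pixel storage):

Code: Select all
// Sketch: plot one pixel through the interface above.
void putpixel(vga_display * disp, int x, int y,
              unsigned char r, unsigned char g, unsigned char b)
{
    unsigned long pixel;
    unsigned char * p;
    unsigned i;

    // build the pixel value from the mode's channel layout,
    // keeping the top bits.r/g/b bits of each 8-bit channel
    pixel = ((unsigned long)(r >> (8 - disp->mode.bits.r)) << disp->mode.shift.r)
          | ((unsigned long)(g >> (8 - disp->mode.bits.g)) << disp->mode.shift.g)
          | ((unsigned long)(b >> (8 - disp->mode.bits.b)) << disp->mode.shift.b);

    disp->lock(disp);  // lock before touching 'mode' or 'buffer' directly
    p = (unsigned char *)disp->buffer + y * disp->mode.p + x * disp->mode.d;
    for (i = 0; i < disp->mode.d; i++)  // write d bytes, low byte first
        p[i] = (unsigned char)(pixel >> (8 * i));
    disp->unlock(disp);
}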
Ok, the interface is a bit longish with all the comments, but you get the idea. The idea is that any driver's init will call vga_init() and then override the setmode pointer, and other pointers as well if necessary or desired (see the sketch below); the base driver happens to work such that this is enough.

The driver model I have in mind is designed for truecolor only (bit depth can vary, but I have no intention of supporting palettes). The above is still lacking mode enumeration, not to mention alpha-blits and device-independent bitmaps (which should probably be taken into account at the driver level, as they'll likely be worth accelerating).
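Here's what that override pattern could look like for a driver built on the base; a sketch only, with svga_init and svga_setmode as made-up names for a hypothetical accelerated driver:

Code: Select all
// Sketch: a hypothetical driver reusing the base implementation.
static int (*base_setmode)(vga_display *, int);

static int svga_setmode(vga_display * disp, int mode)
{
    if (mode == 0)
        return base_setmode(disp, mode);  // defer to the base 320x200 mode
    // ... program the hardware for other modes here ...
    return -1;  // failure: nothing else implemented in this sketch
}

vga_display * svga_init()
{
    vga_display * disp = vga_init();  // set up the generic base driver first
    base_setmode = disp->setmode;     // remember the base implementation
    disp->setmode = svga_setmode;     // override with our own
    // override other pointers (blit, flip, ...) if desired
    return disp;
}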
Anyway, I have two things in mind here... first of all, does the above look sensible to those (if any) who've written real accelerated drivers? Also, if somebody else has a generic graphics driver model already in place, does it look similar? If not, how does it work?
Finally, if we find that there are several people with similar ideas about how graphics drivers should work, would it be a totally stupid idea to attempt to design a common interface, in order to share code?