My current 'interface design' is nothing like what I'd like it to be in the future, as I really want to get the basics working before focusing fully on an interface that is friendly or even looks good.
My current design thought is: code to handle the drawing, and a task that goes through the buffers copying whatever needs to be rewritten into the frame buffer.
The 'graphics code' is called by apps when they need to draw something, unless they write directly into a buffer. This would be the equivalent of GDI/DirectX/OpenGL.
The 'graphics task' is called by the task switching mechanism to perform the physical act of updating the screen.
Both of these are combined into a single 'Graphical User Task'.
The 'graphics code', when called by apps, marks the buffer as needing an update. If nothing needs updating, the blitter can 'skip the blit', saving time (the app sends no message and the blit isn't done). When a buffer is created it can be set to 'always update', allowing applications that write directly to the buffer to do so at the fastest speed they can (subject to the display's refresh rate).
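The dirty-flag idea can be sketched roughly like this (a minimal sketch in C; the struct fields and function names are invented for illustration, not my actual code):

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical buffer descriptor -- names are illustrative only. */
typedef struct {
    unsigned char *pixels;
    int width, height;
    bool needs_update;   /* set by drawing calls, cleared after a blit  */
    bool always_update;  /* for apps that write directly to the buffer */
} gfx_buffer;

/* Every drawing call in the 'graphics code' ends by marking the buffer. */
void gfx_fill_rect(gfx_buffer *buf, int x, int y, int w, int h,
                   unsigned char colour)
{
    for (int row = y; row < y + h; row++)
        memset(buf->pixels + row * buf->width + x, colour, w);
    buf->needs_update = true;   /* the blitter now knows not to skip this one */
}

/* The blitter checks the flags and 'skips the blit' when there is nothing to do. */
bool buffer_wants_blit(const gfx_buffer *buf)
{
    return buf->always_update || buf->needs_update;
}
```

The app never sends a refresh message; drawing through the API is itself the notification.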
The 'graphics task' should really be split in two: one part building some kind of display list and copying data into the double buffer, and a second part, called from an interrupt (based on a refresh interrupt), copying the 'double buffer' into the real frame buffer at 'X' Hz.
@distantvoices:
Why is this more complicated than having to wait until the GUI service wakes up to get a refreshed screen?
Although I can understand how the 'update now please' message system works, I'm lazy. This box of tricks should be doing the work for me. It's really, really hard to write an extra three lines of code to ask the GUI to refresh the window when, with an extra three lines in the 'graphics code' to set a flag in a buffer, I'll never have to do it at all. If a screen area doesn't need to be updated because it can't be seen, it will never be updated - just the bits around it. If a buffer takes up the entire screen area and never gets written to, the 'graphics task' simply checks the list and yields(), using no CPU time and wasting no memory bandwidth.
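The 'check the list and yield' pass might look something like this (all names here are illustrative stand-ins for the real kernel calls; blit_to_frame_buffer and yield are stubbed with counters so the sketch is self-contained):

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins, not the real kernel API. */
typedef struct gfx_buffer {
    struct gfx_buffer *next;
    bool needs_update;
    bool always_update;
} gfx_buffer;

static int blits_done;   /* counters in place of the real blit/yield */
static int yields_done;

static void blit_to_frame_buffer(gfx_buffer *buf) { (void)buf; blits_done++; }
static void yield(void)                           { yields_done++; }

/* One pass of the 'graphics task': walk the buffer list, blit only what
 * has been marked, and hand the CPU straight back if nothing needs doing. */
static void graphics_task_pass(gfx_buffer *list)
{
    bool did_work = false;
    for (gfx_buffer *b = list; b != NULL; b = b->next) {
        if (b->needs_update || b->always_update) {
            blit_to_frame_buffer(b);
            b->needs_update = false;  /* blitted; skip it next pass */
            did_work = true;
        }
    }
    if (!did_work)
        yield();  /* nothing dirty: no CPU time, no memory bandwidth */
}
```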
It really doesn't matter at this stage that there isn't a split between 'graphics code' and 'video driver'. After all, every time I install a new video driver on Windows I also have to reinstall DirectX so that it can figure out what it can and can't accelerate. Lumping it all together means a single file guaranteed to work, and as long as the data structures in the buffers are compatible it should ease upgrading (hopefully on the fly).
There is a client/server relationship going on (one that can be bypassed if necessary), but my driver design thoughts came from the 'interrupt'/'scheduled work' type of thing.
Do what you can now (interrupt/function call/graphics API/drawing/moving/filling/copying)
and
Delay what can be delayed (buffer combining/screen updating/scheduled work).
Don't worry about those IT qualifications. I work as a debt collector.
@Brendan too:
How would the GUT know anything has changed? Would it waste CPU time refreshing the video every N ms when nothing has changed?
Even if you solve this problem, you wouldn't know whether the client has finished updating its buffer or has only done some of the changes. This would lead to ugly shearing and tearing effects. For a simple example, imagine the client draws a large green rectangle in one frame, then changes the colour of the rectangle to blue for the next frame - the user might not see the green rectangle at all, or they might see a rectangle that's half green and half blue. Now consider an animated ball that bounces around, where the ball might be seen in two places at once in one frame and be partially invisible in the next.
The problems I'm seeing, I think, come down to the fact that I don't have that separate refresh task for 'direct buffer writes', so things get lost unless I draw using the 'graphics code' - which, ironically, acts like the 'please refresh' message service distantvoices and Candy were mentioning. The fact that the 'graphics task' doesn't have double buffering causes the horrible shearing / dual-colour effect mentioned; I've seen it happen when working with software sprites. The buffering is definitely needed, and whenever I have the time to sit down and add the code I will.
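A minimal sketch of the double-buffering fix, assuming the client signals when a whole frame is finished (the buffer size, names, and 'frame complete' flag are all invented for illustration):

```c
#include <stdbool.h>
#include <string.h>

#define SCREEN_BYTES (320 * 200)   /* illustrative mode-13h-sized screen */

/* The client draws into back_buffer, then flips a 'frame complete' flag;
 * the refresh interrupt only copies whole frames, so the user never sees
 * a half-green/half-blue rectangle. */
static unsigned char back_buffer[SCREEN_BYTES];
static unsigned char frame_buffer[SCREEN_BYTES];  /* stands in for video RAM */
static volatile bool frame_complete;

/* Called by the client after it has finished ALL drawing for this frame. */
void end_frame(void)
{
    frame_complete = true;
}

/* Called from the refresh interrupt at the display's rate. */
void refresh_interrupt(void)
{
    if (!frame_complete)
        return;                    /* mid-draw: keep showing the old frame */
    memcpy(frame_buffer, back_buffer, SCREEN_BYTES);
    frame_complete = false;
}
```

If the interrupt fires while the client is mid-draw, the previous complete frame stays on screen instead of a torn one.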
As for hardware acceleration, I intend to write only for the computers I have (including the emulated ones). At the moment everything is software, and as mentioned the only acceleration usable is 2D. That will be fine for now, but there's nothing to prevent the addition of a 3D display list parser to the 'graphics code'. I've recently been trawling through 'Black Art of 3D Game Programming' by André LaMothe; it's based in DOS, but it develops a software 3D graphics pipeline. It shouldn't be too hard to add some code to parse 'vector buffers' so the graphics task can work on those when it has the time. But again, that's all in the future.
Another problem at the moment is that everything is tacked onto the kernel. Once I get the floppy driver fully working, I have to get the graphics task off disk and into memory properly. At that point it will not only be running at user privilege but also in user memory (below the 0xC0000000 mark). Lots more bug-finding fun.
I will keep an open mind about other methods, but for the moment I intend to persist with the current design just to see if it will do what I want it to. If it all collapses in a heap, I'll consider it a learning experience and rip it out. If it does work, I'll happily post a screenshot in one of those 'what does your OS look like' threads that turn up every 6 months or so.