Hi,
mystran wrote:Brendan wrote:The other idea is to have a "resolution independent" video driver interface, e.g. where co-ordinates are specified as floating point values ranging from (-1, -1) to (1, 1). Then the video driver can automatically select and adjust the video mode using some sort of intelligent algorithm (e.g. based on the average time taken to generate previous frames) and/or the video mode could be directly controlled by the user, without applications, etc. knowing or caring which video mode is being used.
Come on. 120 DPI (or so) on a typical display is hardly enough to start playing the "pixels aren't important" game. Try editing bitmapped images on a TFT screen in a non-native resolution and you'll see what I mean. And unless the real world turns into vector graphics, there remain valid reasons for sampled snapshots of the same (also known as 'photos').
Let's make some assumptions first....
Let's assume you've got a font engine that can generate variable-sized fonts. For example, you tell the font engine to convert the string "foo" into bitmap data that is N pixels high and M pixels wide, and it returns an N * M mask that says how transparent each pixel is. Something like the FreeType project's font engine would probably work nicely.
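To make that assumption concrete, here's a minimal sketch of the kind of interface being assumed, in C. The names (`glyph_mask_t`, `render_glyph`) are hypothetical, and the "engine" here is just nearest-neighbour scaling of a fixed 8 * 8 bitmap glyph - a real engine like FreeType would rasterise outlines and return proper anti-aliased coverage:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical result of asking the font engine for a glyph at a
 * requested size: a height * width array of coverage values, where
 * 0 = fully transparent and 255 = fully opaque. */
typedef struct {
    int height, width;
    uint8_t *coverage;   /* height * width bytes, row-major */
} glyph_mask_t;

/* Toy stand-in for the font engine: scale a fixed 8x8 1-bpp glyph
 * bitmap up or down to the requested size with nearest-neighbour
 * sampling.  "rows" holds one byte per glyph row, MSB = leftmost. */
glyph_mask_t render_glyph(const uint8_t rows[8], int height, int width)
{
    glyph_mask_t m = { height, width, malloc((size_t)height * width) };
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int sy = y * 8 / height;          /* nearest source row    */
            int sx = x * 8 / width;           /* nearest source column */
            int on = (rows[sy] >> (7 - sx)) & 1;
            m.coverage[y * width + x] = on ? 255 : 0;
        }
    }
    return m;
}
```

The important property is the shape of the result: the driver only ever sees an opacity mask, so the same interface works whether the engine scales bitmaps or rasterises vector outlines.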
Let's also assume that *somewhere* you've got code to load bitmap data from a file on disk, scale the bitmap data and convert from one pixel format to another (all web browsers have code to do this).
Lastly, assume that *somewhere* you've got code to draw rectangles (including lines), and code to draw font data from the font engine.
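A rough sketch of the scaling and pixel format conversion code being assumed here - nearest-neighbour scaling of a 32-bit image, plus an RGB888-to-RGB565 conversion as one example of a format change; the function names are made up for illustration:

```c
#include <stdint.h>

/* Nearest-neighbour rescale of a 32-bit 0x00RRGGBB image.
 * src is sw * sh pixels; dst must have room for dw * dh pixels. */
void scale_bitmap(const uint32_t *src, int sw, int sh,
                  uint32_t *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++)
            dst[y * dw + x] = src[(y * sh / dh) * sw + (x * sw / dw)];
}

/* Convert one 0x00RRGGBB pixel to 16-bit RGB565 - the sort of
 * conversion needed when the screen isn't running in the bitmap's
 * native pixel format. */
uint16_t rgb888_to_rgb565(uint32_t p)
{
    return (uint16_t)(((p >> 8) & 0xF800)    /* top 5 bits of red   */
                    | ((p >> 5) & 0x07E0)    /* top 6 bits of green */
                    | ((p >> 3) & 0x001F));  /* top 5 bits of blue  */
}
```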
Now (except for the font engine, which should be a separate service IMHO) put all of this code into the video driver (instead of reproducing most of it in each application) and let applications tell the video driver what to do using scripts.
For example, a simple application might tell the video driver:
- draw a blue rectangle from (-1, -1) to (1, -0.9)
- draw a white rectangle from (-1, -0.9) to (1, 1)
- draw the string "foo" from (-1, -1) to (-0.8, -0.9) using the font "bar"
- draw the bitmap "/myApp/myIcon.bmp" from (-1, 0) to (-0.8, 0.2)
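One way a script like that could be encoded is as a plain command list. Everything below - the `cmd_t` struct, its field layout, the command names - is a hypothetical sketch of such an encoding, not a fixed format:

```c
#include <stdint.h>

/* Hypothetical encoding of the script an application sends to the
 * video driver.  All coordinates are in the virtual -1.0..1.0 space;
 * only the driver knows the real screen resolution. */
typedef enum { CMD_RECT, CMD_TEXT, CMD_BITMAP } cmd_type_t;

typedef struct {
    cmd_type_t type;
    float x0, y0, x1, y1;      /* bounding box in virtual coordinates */
    uint32_t colour;           /* CMD_RECT: fill colour (0x00RRGGBB)  */
    const char *text, *font;   /* CMD_TEXT: string and font name      */
    const char *path;          /* CMD_BITMAP: file to load            */
} cmd_t;

/* The example script from the post, expressed as data: */
static const cmd_t example_script[] = {
    { CMD_RECT,   -1.0f, -1.0f,  1.0f, -0.9f, 0x0000FF },             /* blue bar   */
    { CMD_RECT,   -1.0f, -0.9f,  1.0f,  1.0f, 0xFFFFFF },             /* white body */
    { CMD_TEXT,   -1.0f, -1.0f, -0.8f, -0.9f, 0, "foo", "bar" },
    { CMD_BITMAP, -1.0f,  0.0f, -0.8f,  0.2f, 0, 0, 0, "/myApp/myIcon.bmp" },
};
```

Because the list contains no pixel coordinates and no pixel data, the driver is free to replay it at any resolution and cache the result.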
The video driver would convert the virtual coordinates into screen coordinates, draw the rectangles, ask the font engine to generate font data for "foo" and draw the resulting font data, load the bitmap from disk, scale the bitmap, convert the bitmap's pixel data and draw the bitmap.
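Converting virtual coordinates to screen coordinates is a one-line mapping. A sketch, assuming the virtual range -1.0..1.0 maps onto pixels 0..size-1 (the function name is made up):

```c
/* Map a virtual coordinate in -1.0..1.0 onto a pixel coordinate in
 * 0..size-1.  The driver applies this with whatever resolution it
 * actually chose, so applications never see pixels. */
int virtual_to_screen(float v, int size)
{
    int p = (int)((v + 1.0f) * 0.5f * (float)size);
    if (p < 0) p = 0;
    if (p >= size) p = size - 1;   /* clamp v == 1.0 to the last pixel */
    return p;
}
```

If the user switches from 800 * 600 to 1600 * 1200, the same script simply replays through this mapping with the new size.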
For a modern video card some of this can be done by the graphics accelerator - the rectangles, and the bitmap scaling and colour conversion.
A good video driver would also cache the processed bitmap data so that it doesn't need to be loaded, scaled or converted to a different pixel format next time. Hopefully the preprocessed data would be cached in video memory so the hardware can do a "video to video" blit, instead of using the CPU to do it, and instead of caching the data in RAM (where the PCI bus is a bottleneck for "RAM to video" blits).
The same goes for the font data - maybe that can be cached in video memory too, and perhaps the video card's hardware can do alpha blending while it blits from video to video (most video cards can do this).
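The per-channel alpha blend being described is the standard `src * alpha + dst * (1 - alpha)` formula. A sketch of what the blitter (or a software fallback) does for each colour channel, in 8-bit integer form with rounding:

```c
#include <stdint.h>

/* Blend one 8-bit colour channel: a = 0 keeps dst untouched,
 * a = 255 gives pure src.  The "+ 127" rounds the division.
 * Applying this with the glyph's coverage mask as "a" draws
 * anti-aliased text over whatever is already on screen. */
uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t a)
{
    return (uint8_t)((src * a + dst * (255 - a) + 127) / 255);
}
```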
Of course even though this system is capable of handling most applications, even though (with basic hardware acceleration) it'd perform much better than a "framebuffer" interface, and even though you haven't got the same code for graphics operations duplicated in each application, it's still a very primitive interface...
What I'd want is for the video driver to accept scripts that describe a "canvas", where the script for a canvas can include other canvases, and where the video driver can cache the graphics data for any completed canvas (and the script used to generate that canvas).
For example, imagine a GUI with 3 windows open. Each application would send a script describing its main canvas (and any canvases included by the application's main canvas) to the GUI. The GUI would include the main canvases from these applications (and some of its own canvases for things like the task bar, etc.) in a script that describes the GUI's main canvas, and then send this large script (containing a hierarchy of canvas descriptions) to the video driver. The video driver might look at the scripts for all these canvases and determine which canvases have changed, then get the data for unchanged canvases from its cache and only rebuild canvases that did change.
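Detecting that a resubmitted canvas script is unchanged could be as simple as hashing the script text and comparing it against the cached hash. A sketch using FNV-1a as the hash - the cache layout here is hypothetical, and a real driver would also store a handle to the rendered pixels in video memory (and might memcmp the scripts to rule out hash collisions):

```c
#include <stdint.h>

/* One cached canvas: the hash of the script that produced it.
 * (A real entry would also hold the rendered pixels' location.) */
typedef struct {
    uint32_t script_hash;
    int      valid;
} canvas_cache_t;

/* FNV-1a: a simple, well-known string hash - enough to notice that
 * a resubmitted script is byte-identical to the previous frame's. */
uint32_t hash_script(const char *script)
{
    uint32_t h = 2166136261u;
    while (*script) {
        h ^= (uint8_t)*script++;
        h *= 16777619u;
    }
    return h;
}

/* Returns 1 if the cached canvas can be reused as-is, or 0 if it
 * must be rebuilt (recording the new hash for the next frame). */
int canvas_is_unchanged(canvas_cache_t *c, const char *script)
{
    uint32_t h = hash_script(script);
    if (c->valid && c->script_hash == h)
        return 1;
    c->script_hash = h;
    c->valid = 1;
    return 0;
}
```

With one such entry per canvas in the hierarchy, a frame where only one window changed rebuilds only that window's canvas and reuses everything else.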
Of course I'd also want to add 3D to this - e.g. have "containers", allow canvases to be mapped onto polygons in containers, and allow containers to be projected onto canvases. Then there are other graphics operations I'd want to add eventually (circles, curves, fog, lighting, shadow, etc.), and support for 3D monitors (e.g. layered LCD and stereoscopic).
After all this, I'm going to have windows that bounce back when you click on them as if they're on springs, and desktop icons that can spin in 3D space like they're dangling from thin wires attached to the top of the monitor. You'll be able to rotate a window and look at it from the side and it'll have thickness, and the raised buttons from the application will still look raised. If you close a window it might spin into the distance and disappear over a horizon (while never losing its content until it vanishes).
I want weather themes. I want to see snow fall inside the monitor and build up on top of windows and icons, and I want to see icicles form on the bottom of the windows. When I click on a window I want it to cause vibrations that shake some of the snow and icicles loose so that I can watch them fall to the bottom of the screen, bouncing off other windows and icons and knocking more snow and ice free, until all the snow and ice reaches the bottom of the screen and slowly melts away.
I want to be able to have windows and icons at strange angles everywhere, and then I want to fly the mouse pointer like a glider between them. I want to skim across the surface of a status bar, pulling up at the last second before crashing into the raised window border, then quickly flip over the window's edge and land peacefully on the back of the window.
When I get bored I want to swap the mouse pointer for a bat and have a steel ball that bounces around and crashes into windows, making them tilt and rotate and collide with other windows. And perhaps, occasionally, a window will shatter sending shards in all directions.
I want a sun that slowly shifts from the left to the top to the right during the day. You'd never actually see the sun but you'd know where it is by the shadows from windows, icons, buttons and the mouse pointer. People would know it's time to go home from work when the mouse pointer starts casting a long shadow stretching across the desktop towards the left of the screen. At night there can be a soft ambient light and windows and the mouse pointer can have their own faint internal glow so that moving a window causes subtle changes to the shadows cast by other windows onto the desktop.
Maybe this is all just a dream, but it's what I want my video drivers to be capable of eventually.
So, what do you want your video drivers to be capable of eventually? A flat 2D framebuffer with no way to make use of hardware acceleration, no way to do any 3D graphics and no way to handle 3D monitors? That's nice, but even the Commodore 64 added sprites to that...
Cheers,
Brendan