Brendan wrote:
The important thing is that if/when "true 3D" monitor support is added, no normal software (widgets, applications, games, GUIs) will care and all normal software will still work perfectly on "true 3D" displays without any change at all. I won't have to redesign the video interface that normal software uses and then modify every piece of existing software (which is exactly what existing systems using "too low level" video interfaces will be forced to do in this case).

Rusky wrote:
This is no different from existing game engines, GUI toolkits, etc. adding support for new hardware, except that in your solution you have to reimplement the entire graphics stack for every device, and in the current solution you only need to tweak a handful of libraries, and that can happen without the support of the OS.

For significant changes (e.g. when "true 3D" arrives) it's impossible for existing "too low level" APIs to support it; which means it's impossible for game engines to support it, impossible for games to be modified to support it, and impossible for old games to work without these impossible modifications. In both cases (existing APIs or mine) it will require new video cards (for the new connectors/signals/whatever that "true 3D" requires) and therefore new video drivers are required (and this is completely unavoidable).
I have no idea how you've managed to conclude that "impossible" is easier than "required anyway".
Now consider what happens when everything is using this (widgets, text editors, calculators, spreadsheets, GUIs, ...) and not just games, and all of the software that has ever been written for the OS has to be updated (because it's not something that only affects new video drivers that don't exist yet).
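To make the difference in abstraction level concrete, here's a rough sketch in C of the two shapes of interface. None of these functions or types exist anywhere (the names are invented purely for illustration); the point is only that with the second shape, a new kind of display changes the code behind the calls and not the callers:

Code:
/* Hypothetical sketch only -- not a real interface, just an illustration
 * of the difference in abstraction level. */
#include <stdint.h>

/* "Too low level": the application hands the OS finished pixels, so it must
 * know the resolution, pixel format, monitor layout, etc. Any new kind of
 * display (stereoscopic, "true 3D", ...) breaks every caller. */
void blit_pixels(void *framebuffer, int x, int y, int w, int h,
                 const uint32_t *argb_pixels);

/* Higher level: the application describes what it wants in a
 * device-independent way (meshes, textures, lights, a camera) and the
 * video driver decides how to turn that into whatever the display needs. */
typedef int handle_t;                 /* opaque handles owned by the driver */

handle_t create_mesh(const float *vertices, int vertex_count,
                     const int *indices, int index_count);
handle_t create_texture(const char *utf8_name);
void     place_object(handle_t mesh, handle_t texture,
                      const float transform[16]);  /* position in the scene */
void     place_light(float x, float y, float z, float r, float g, float b);
void     place_camera(const float transform[16]);

/* The application never says "pixel", "monitor" or "resolution"; if a
 * "true 3D" display turns up, only the driver behind these calls changes. */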
Brendan wrote:
There should be no need for software to know or care if the video system uses multiple monitors or not, or 2D or stereoscopic or "true 3D" or VR helmet; because the OS provides adequate abstractions that hide these "irrelevant to applications" details.

Rusky wrote:
This is, again, already the case with existing libraries. You just have the irrational idea that anything not in the OS is a hassle to use, or is The Wrong Choice (tm) because developers have to "reach for it" somehow. Libraries are already completely adequate.

You think all existing games are going to work seamlessly when (e.g.) you unplug your single monitor and replace it with a 3*3 grid of monitors?
In theory, it might be possible for libraries to hide the fact that the "too low level" API was designed by short-sighted/incompetent fools. In practice, for most games I can't even "alt+tab" to a blank desktop and back without the video getting trashed.
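As an illustration of what "the OS provides adequate abstractions" could mean in practice, here's a small hypothetical sketch (made-up names and structures, not taken from any real driver) of how a video driver could split one application-visible camera view across a 3*3 grid of monitors without the application knowing or caring:

Code:
/* Hypothetical driver-side sketch: feeding a 3*3 grid of monitors from one
 * application-visible "scene + camera", with no application changes. */
#include <stdio.h>

#define GRID_COLS 3
#define GRID_ROWS 3

typedef struct { double x, y, w, h; } viewport_t;  /* fraction of full view */

int main(void)
{
    /* The application described one scene and one camera; the video driver
     * splits the camera's full field of view into one viewport per monitor. */
    for (int row = 0; row < GRID_ROWS; row++) {
        for (int col = 0; col < GRID_COLS; col++) {
            viewport_t vp = {
                .x = (double)col / GRID_COLS,
                .y = (double)row / GRID_ROWS,
                .w = 1.0 / GRID_COLS,
                .h = 1.0 / GRID_ROWS,
            };
            /* render_scene_for_monitor(monitor[row][col], &vp); */
            printf("monitor %d,%d renders viewport (%.2f, %.2f, %.2f, %.2f)\n",
                   col, row, vp.x, vp.y, vp.w, vp.h);
        }
    }
    return 0;
}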
Brendan wrote:
I hope you're right (and hope that my OS will be the first and only OS that's capable of doing this, and that when I need to promote my OS I'll be able to laugh at how crippled and crappy other OSs are in comparison for at least 5+ years while other OSs redesign their short-sighted interfaces).

Rusky wrote:
You can't have it both ways - either it has been done and is thus a valid argument for its feasibility, or it hasn't been done because current hardware can't do it for latency-sensitive applications.

I can have it both ways - it has been "partially done" enough to show that it's perfectly feasible (despite the fact that, for the research I saw, they were doing incredibly stupid things like redrawing everything every frame and sending pixel data across the network, likely because they were relying on an existing crippled "too low level" interface); and existing OSs aren't able to do it well enough to make it worth doing for OSs intended for a single computer (especially when they're already cursed by an existing "too low level" interface).
Of course it's also possible to look at this from a different perspective: supporting GPUs is not practical, and generating high-quality/high-resolution graphics in real time from a renderer running on a single modern Intel chip is not practical either; and using distributed rendering is far more practical than either of these options.
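For some rough numbers on why sending pixel data across the network is the wrong way to do distributed rendering: the pixel figures below are plain arithmetic, while the scene-description sizes are assumptions I've picked purely for illustration:

Code:
/* Back-of-envelope comparison (made-up scene sizes, but the pixel numbers
 * are just arithmetic): shipping raw pixels every frame versus describing
 * the scene once and sending small per-frame updates. */
#include <stdio.h>

int main(void)
{
    const double width = 1920, height = 1080, bytes_per_pixel = 4, fps = 60;
    const double raw = width * height * bytes_per_pixel * fps;  /* bytes/s */

    const double scene_once   = 20.0 * 1024 * 1024; /* assume ~20 MiB of meshes/textures   */
    const double per_frame    = 16.0 * 1024;        /* assume ~16 KiB of "object moved" data */
    const double desc_per_sec = per_frame * fps;    /* steady-state bytes/s                  */

    printf("raw pixels:        %.1f MiB/s, every second, forever\n",
           raw / (1024 * 1024));
    printf("scene description: %.1f MiB once, then %.2f MiB/s of updates\n",
           scene_once / (1024 * 1024), desc_per_sec / (1024 * 1024));
    return 0;
}

Even if my assumed scene sizes are off by an order of magnitude, describing the scene and letting the receiving end render it is still a few orders of magnitude cheaper than shipping finished frames.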
Cheers,
Brendan