Cross-hardware consistency is a fantastic goal, but there's no point in discussing how to implement it if we all just encourage whatever implementation happens to be proposed whether it sounds like it will work or not. I am perhaps too confrontational, but in the end I do think Brendan's ideas are interesting or I wouldn't be here.
Brendan wrote:- software flexibility (the ability to "mix & match" processes to do powerful things)
The current winner in this category uses the low-level interface of "streams of bytes, often interpreted as text." It's nowhere near the flexibility ceiling, but I think it proves you don't need the OS itself to understand the camera, meshes, and materials just to add icicle effects to your windows.
Brendan wrote:- hardware compatibility - e.g. software being able to use "2D flat" displays, and VR helmets, and "true 3D" displays, without knowing or caring what the display is and without needing every application/game to be specially modified just to support something like Oculus Rift
The Oculus Rift could not have been developed on your OS (its creators needed to experiment with shaders directly), nor do I think your OS will be capable of driving it with distributed rendering for a long time (the extreme latency would cause motion sickness). I do think this is a good case for building cameras into a standard API, but more as a protocol than as a shared implementation - that way, applications following the camera protocol would be compatible with weird displays without sacrificing control over how they render.
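To make the protocol-vs-implementation distinction concrete, here's a rough sketch of what I mean (all names and fields are hypothetical, not from any real API): the application describes its camera once, and the display driver decides how many views to actually render - one for a flat monitor, two offset ones for a stereo HMD.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical camera description an application hands to the display
 * driver. The driver, not the application, turns this into pixels. */
typedef struct {
    float position[3];    /* world-space eye position */
    float forward[3];     /* view direction (unit vector) */
    float up[3];          /* up vector (unit vector) */
    float vertical_fov;   /* radians; driver may override for an HMD */
    float near_plane, far_plane;
} CameraDesc;

/* What the driver knows about its display: 1 view for a flat screen,
 * 2 for a stereo HMD, possibly more for exotic hardware. */
typedef struct {
    int view_count;
    float eye_separation; /* metres; 0 for mono displays */
} DisplayCaps;

/* Example driver-side step: derive per-eye cameras from the single
 * application camera. A real driver would also apply per-eye
 * projection and lens distortion; this only offsets the positions. */
static void split_stereo(const CameraDesc *cam, const DisplayCaps *caps,
                         CameraDesc out[2]) {
    /* right = up x forward (handedness details glossed over) */
    float rx = cam->up[1] * cam->forward[2] - cam->up[2] * cam->forward[1];
    float ry = cam->up[2] * cam->forward[0] - cam->up[0] * cam->forward[2];
    float rz = cam->up[0] * cam->forward[1] - cam->up[1] * cam->forward[0];
    float half = caps->eye_separation * 0.5f;
    for (int eye = 0; eye < 2; eye++) {
        out[eye] = *cam;
        float s = (eye == 0) ? -half : half;
        out[eye].position[0] += s * rx;
        out[eye].position[1] += s * ry;
        out[eye].position[2] += s * rz;
    }
}
```

The point is that the application never branches on the display type; the driver applies whatever view-splitting its hardware needs, while the application keeps full control of what the camera does.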
Brendan wrote:- the ability for a negligible/irrelevant minority of artists to mimic the look of other mediums that were caused by technical limitations of those mediums and weren't desired for those mediums (and only became "desired by some" due to nostalgia).
This is a sad misrepresentation of artists and of the mediums they use. Technical limitations are not something to be eliminated forever (though enabling their elimination is good), nor are they desired only out of nostalgia. Intentionally making a game look a certain way is a major part of controlling its impact on people, which is the essence of what art is. Forcing people into one particular set of (non-)limitations means you limit the kinds of art they can make on your platform.
Brendan wrote:As a user, I also want consistently-behaved applications, rather than games that crash due to compatibility problems and/or unstable drivers (that has always, and will always, plague systems like Windows and Linux because of "far too low level" graphics APIs).
This, on the other hand, is a sad misrepresentation of existing systems. Intentionally ignoring existing platforms' strengths does not help you compete with them.
Windows is actually incredibly consistent in supporting old applications until the hardware no longer can (at which point there are emulators like DOSBox). This is definitely not done the way you want, but that doesn't diminish the accomplishment.
New APIs like Vulkan and DX12 - lower-level than current ones, which are already lower-level than you want - promise to improve stability by reducing the amount of code duplicated between drivers. This is coming from the people who actually work on the drivers and on the software that has to run on top of them; just because you don't like their methods doesn't mean stability won't improve.
Brendan wrote:It'd be trivial to (e.g.) have a "create object as (n*m*o) grid of sub-objects" functionality in the video driver's interface and allow "sub objects" to be referenced by indexing (which would be helpful for both blocks and voxels). It'd also be relatively easy for the video driver to store these in "run length encoded" format (or whatever format it likes).
The slightly trickier part is doing hidden surface removal on "container objects" so that internal surfaces are ignored during rendering. This is trivial for the "grid of blocks" case (e.g. just remove adjoining faces for solid blocks). For "collection of arbitrary shaped sub-objects" it's a bit harder.
At this point you've added an awful lot of code, almost completely specific to games-with-grids-of-blocks, which now has to be duplicated across all video drivers. Combine that with all the other optimizations you'd need to add for other types of things games want to do, and keep adding it every time a new game idea comes out, and now your video driver is massive.
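And to be fair to the quote, the "trivial" grid case really is short - here's a rough sketch of the adjoining-face cull (my own illustration, not Brendan's code), which counts only faces where a solid block borders air or the edge of the grid:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative adjoining-face removal for a small fixed-size grid of
 * blocks. grid[x][y][z] != 0 means "solid". A face is visible only if
 * the neighbouring cell in that direction is air (or outside the grid). */
#define N 4

static bool solid(const unsigned char g[N][N][N], int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
        return false;                   /* outside the grid counts as air */
    return g[x][y][z] != 0;
}

static int visible_faces(const unsigned char g[N][N][N]) {
    static const int dir[6][3] = {
        {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}
    };
    int count = 0;
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            for (int z = 0; z < N; z++) {
                if (!solid(g, x, y, z)) continue;
                for (int d = 0; d < 6; d++)
                    if (!solid(g, x + dir[d][0], y + dir[d][1], z + dir[d][2]))
                        count++;        /* face borders air: keep it */
            }
    return count;
}
```

A lone block exposes 6 faces; two adjacent solid blocks expose 10, because the two touching faces are culled. That's maybe thirty lines - but it's thirty lines for exactly one kind of game, living in every driver, and the "collection of arbitrary shaped sub-objects" case would be far more.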
It would probably make sense to deduplicate helper code like that into a shared wrapper (or several, so applications can talk to only the ones they want), at which point you've created a game engine on top of a lower-level (relatively speaking) API.
Brendan wrote:Physics is a completely different topic (where I'll want a generic "physics engine" service that uses momentum and collision detection to predict where objects will be in the future).
The point of my particle system example is that sometimes you don't want a full-blown physics engine- doing that for particles would be insane. Particles have very simple movements, they don't interact, and there are very large numbers of them. You could of course just use a parallel-computation API, but then you've introduced a bandwidth problem (transferring the particles at all is unnecessary) in the name of scaling, which particle systems don't need.
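To illustrate how little a particle system actually needs (a minimal sketch, not any particular engine's code): each particle just integrates its own velocity, with no collisions and no inter-particle forces, so every update is independent and there is nothing for a generic physics service to add.

```c
#include <assert.h>
#include <math.h>

typedef struct { float pos[3], vel[3]; } Particle;

/* Advance every particle by dt seconds under constant gravity.
 * No collision detection, no inter-particle interaction - each update
 * is independent, which is exactly why shipping the particles off to a
 * generic physics service (or across a network) buys nothing and costs
 * bandwidth. */
static void step_particles(Particle *p, int n, float dt, float gravity) {
    for (int i = 0; i < n; i++) {
        p[i].vel[1] += gravity * dt;      /* accelerate downward */
        p[i].pos[0] += p[i].vel[0] * dt;  /* simple Euler integration */
        p[i].pos[1] += p[i].vel[1] * dt;
        p[i].pos[2] += p[i].vel[2] * dt;
    }
}
```

That inner loop is the whole "physics" - momentum-and-collision machinery would be pure overhead here.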
MessiahAndrw wrote:Even if Brendan's only allowed realistic perspective projection, I'm sure there's a way around it if you want to make a cartoony 2D game
But not, according to Brendan, a way to make a cartoony 3D game.
MessiahAndrw wrote:For example, a video game artist would probably love being able to fine-tune the colours in this scene, knowing they'd show up nearly exactly the same on everyone's screen:
As I understand it, this somewhat contradicts Brendan's claims about scaling the game between old and new hardware - the base colors used may remain the same, but a 3D game definitely wouldn't look the same. In fact, his distributed rendering system would produce inconsistencies even between different runs of the same game, depending on how much hardware happens to be available at the time.