Hi,
Ready4Dis wrote:a) If the video driver manages a dependency graph of "things" (where each thing might be a flat 2D texture, but might be a texture with bumps, or a voxel space, or a point cloud, or a set of meshes and textures, or ....); then nothing prevents you from saying (e.g.) "This application's window is a point cloud and not a flat 2D texture".
Why is the video driver managing a dependency graph? The video driver should only be worried about doing what it's told. It gets vertex, color, texture coord buffers, as well as others (for bump, parallax, etc. mapping, and vertex/pixel shader inputs), and is then told to render triangles using specific textures into a buffer.
It's not that simple. E.g.:
- Video driver is told to render triangles using specific textures to create "texture #4"; then
- Video driver is told to render triangles using more specific textures (including "texture #4") to create "texture #9"; then
- Video driver is told to render triangles using more specific textures (including "texture #4" and "texture #9") to create "texture #33"; then
....
It's like makefiles. You don't just have a single massive list of commands that rebuilds all of the pieces and then builds/links the final executable every single time a project is built; because that's extremely inefficient and incredibly stupid. In the same way, you don't just have a single massive list of commands that renders the entire screen from scratch every frame. Instead you cache intermediate stuff (object files, or textures, or whatever) and keep track of dependencies; to ensure that you only do whatever needs to be done (and don't waste massive amounts of time rebuilding/rendering things for no sane reason at all).
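As a rough sketch of what "make, but for textures" might look like inside the video driver (all names here - node_t, render_node, etc. - are hypothetical, just for illustration):

Code:
/* Makefile-style dependency tracking in the video driver (sketch). */

#define MAX_DEPS 8

typedef struct node {
    struct node *deps[MAX_DEPS]; /* the "sub-things" this thing is built from */
    int dep_count;
    int dirty;                   /* set when this node's data/commands change */
    void *cached;                /* cached result, e.g. a rendered texture */
} node_t;

void *render_node(node_t *n);    /* hypothetical "tell GPU to render" step */

/* Like make: only rebuild a target if it, or something it depends on,
 * changed; otherwise reuse the cached result. Dirty flags are cleared
 * once per frame, after the whole graph has been brought up to date. */
void *get_texture(node_t *n)
{
    for (int i = 0; i < n->dep_count; i++) {
        get_texture(n->deps[i]);
        if (n->deps[i]->dirty) {
            n->dirty = 1;        /* a dependency changed, so we must too */
        }
    }
    if (n->dirty) {
        n->cached = render_node(n);
    }
    return n->cached;
}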
Ready4Dis wrote:It doesn't know anything about the graph, the application stores the graph of what it needs to render (regardless of what any other app is doing or using).
No. The graph is a system wide thing, where different parts of the graph come from different processes (applications, GUIs, whatever). There is very little practical difference between (e.g.) generating a "rear view mirror" texture and slapping that texture into a larger "player's view from the car's driver seat" texture, and generating a "racing car game" texture and slapping that texture into a larger "GUI's desktop" texture. The only practical difference is who "owns" (has permission to modify) which nodes in the global dependency graph (where "node ownership" is irrelevant for rendering).
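A minimal sketch of how "node ownership" might look (reusing the hypothetical node_t from the sketch above; ownership is checked on modification and ignored during rendering):

Code:
#include <sys/types.h>           /* pid_t */

typedef struct {
    node_t node;
    pid_t owner;                 /* the only process allowed to modify this node */
} owned_node_t;

void apply_update(node_t *n, const void *new_data);  /* hypothetical */

/* Called when a process asks the driver to change one of the graph's nodes. */
int driver_modify_node(owned_node_t *n, pid_t caller, const void *new_data)
{
    if (caller != n->owner) {
        return -1;               /* permission denied - not the owner */
    }
    apply_update(&n->node, new_data);
    n->node.dirty = 1;           /* anything built from this node re-renders */
    return 0;
}

/* Note: the rendering walk (get_texture) never looks at "owner" at all. */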
Ready4Dis wrote:An indoor environment would require a completely different structure than an outdoor environment, which is completely different than handling clickable buttons on an app.
You provide functionality; and that functionality can be used in multiple different ways by different pieces of software for very different reasons. You provide "render to texture" functionality; someone uses it for dynamically generated distant impostors, someone else uses it for portal rendering and someone else uses it for slapping window contents onto a GUI's screen. As far as the video driver cares it's all the same.
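A sketch of the idea, with one hypothetical driver entry point; from the driver's point of view these are identical requests:

Code:
/* All names hypothetical. One generic operation: build a texture from
 * child textures plus a list of textured triangles. */
struct triangle_list;            /* vertices + texture coords, details omitted */

node_t *render_to_texture(node_t *children[], int count,
                          const struct triangle_list *tris);

void examples(node_t *scenery[], int n_scenery, const struct triangle_list *quad,
              node_t *windows[], int n_windows, const struct triangle_list *layout)
{
    /* A game: dynamically generated distant impostor. */
    node_t *impostor = render_to_texture(scenery, n_scenery, quad);

    /* A GUI: slap window contents onto the desktop - same call, same driver. */
    node_t *desktop = render_to_texture(windows, n_windows, layout);

    (void)impostor; (void)desktop;
}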
Ready4Dis wrote:After each application renders its own buffer...
Applications should not render their own buffer.
Ready4Dis wrote:...the compositor's job is to put them on screen in the correct location. The application itself doesn't determine where it is located on screen (on the desktop, in a task manager, in a task bar, etc), it only worries about rendering to its window.
In the same way that the code that asks the video driver to render the "rear view mirror" texture doesn't need to know or care what the final texture might be used for, or where or how that texture might be inserted into a racing car game's "player's view from the car's driver seat". It's the exact same functionality.
Ready4Dis wrote:b) The entire video system involves generating "things" by combining "sub-things" in some way; and because this "combine sub-things" functionality is ubiquitous and used throughout the entire system it's silly to consider "compositor" as a distinct/separate piece that is somehow important in any way at all. It would be more accurate to say that the system is not crap and therefore has no need for a "compositor".
I think we're talking about slightly different levels. When I say graphics API, all I mean is an API like OpenGL. This doesn't have any sort of dependency graph; that's a higher-level thing, unless you're talking about not using a graphics API but having an entire graphics engine instead?
Take a look at things like VBOs, render to texture, portal rendering, dynamically generated distant impostors, etc. Basically (assuming the game developer isn't incompetent) their final scene is the combination of many smaller pieces which can be updated independently and have dependencies. The only difference is that OpenGL doesn't track dependencies between pieces, which forces game developers to do it themselves. In other words, the dependency graph is there; it's just that it's typically explicitly written into a game's code because OpenGL sucks.
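For example, with plain OpenGL the game has to hand-write that tracking itself. A rough sketch (FBO setup, extension loading and the drawing functions are assumed):

Code:
#include <GL/gl.h>

extern GLuint mirror_fbo;        /* framebuffer object, set up elsewhere */
void draw_mirror_view(void);     /* game's own code (assumed) */
void draw_main_scene(void);      /* samples the cached mirror texture (assumed) */

static int mirror_dirty = 1;     /* the game's hand-written "dependency graph" */

void draw_frame(void)
{
    if (mirror_dirty) {
        /* Re-render the mirror texture only when the game *knows* it changed;
         * OpenGL has no idea that the main scene depends on it. */
        glBindFramebuffer(GL_FRAMEBUFFER, mirror_fbo);
        draw_mirror_view();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        mirror_dirty = 0;
    }
    draw_main_scene();
}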
Ready4Dis wrote:For 3D games, most things don't change every frame and there are significant opportunities (and techniques) to recycle data to improve performance; including VBOs, distant impostors, portal rendering, overlays (for menu system and/or HUD), etc.
While the entire scene doesn't always change, typically what is on screen does, and if you are rendering what's on screen, you are going to have to keep doing so. If a portal for an indoor rendering engine isn't visible, you don't render anything behind it. But that isn't the job of the graphics API; that's the game or graphics engine (which is typically part of the game itself, not the OS). A lot of buffers will not change (vertices stay in the same place and are animated using vertex shaders for the most part nowadays), but why does the graphics API care? Again, the application knows when that buffer needs changing.
A process (e.g. GUI) may not know when a texture it uses (that belongs to a completely different/unrelated process - e.g. an application) has changed. When the application tells the video driver to change its texture, the video driver knows the texture was changed even though the GUI doesn't.
Now imagine a widget tells the video driver "use data from the file "/foo/bar/hello.texture" for this texture"; the widget uses that texture in another texture; which is then included in an application (running as a separate process); and that application is embedded into a second application; and the second application's texture is included into the GUI's texture. Then imagine the user modifies the file "/foo/bar/hello.texture" on disk, and the VFS sends a notification to the video driver (causing the screen to be updated without any of the 4 separate processes involved doing anything at all).
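A sketch of what the driver might do when that notification arrives (names hypothetical; get_texture() is the lazy walk sketched earlier):

Code:
node_t *find_file_backed_texture(const char *path);  /* hypothetical lookup */
void reload_from_file(node_t *n, const char *path);  /* hypothetical */
extern node_t *screen_root;      /* root of the global graph (the screen) */

/* VFS tells the video driver a file changed; none of the four processes
 * are involved at all. */
void on_vfs_notification(const char *path)   /* e.g. "/foo/bar/hello.texture" */
{
    node_t *n = find_file_backed_texture(path);
    if (n == NULL) {
        return;                  /* no texture is backed by this file */
    }
    reload_from_file(n, path);
    n->dirty = 1;                /* widget, app, second app and GUI textures
                                  * above it all re-render on the next walk */
    get_texture(screen_root);    /* update the screen via the dirty chain */
}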
Ready4Dis wrote:There are a lot of techniques for rendering 3d games, but I don't suspect many of them should be in the graphics api, otherwise you are trying to build an all inclusive game engine inside of your graphics api as part of your OS.
Recycling "include child object/s within parent object" support that has to exist (and currently does exist) within the video driver/GPU does not mean that it suddenly becomes a game engine (complete with collision detection, physics, sound, scripting, AI, etc).
Cheers,
Brendan