Hi,
Rusky wrote:Brendan wrote:For movies it's just going to be an application telling the video driver "Play this file" (but I haven't thought about or designed the file format for this yet).
I'm not talking about pre-recorded videos, I'm talking about things like Pixar's render farms that are actually generating those videos.
After my video system has spent many years evolving, gaining features and gaining rendering quality, and after my OS has support for GPGPU (which would be very beneficial for things that involve massive amounts of processing that have very little to do with the OS's real-time "fixed at 60 frames per second with variable quality" video system), I can't see why my system wouldn't be usable for render farms that generate movies.
Rusky wrote:Brendan wrote:Games use custom shaders because the graphics API sucks. It's not necessary. It might be "desirable" in some cases, but that doesn't mean it's worth the pain.
It's absolutely necessary. Custom shaders are just as important as custom CPU-side programs. You would call me insane if I proposed only giving access to the CPU through pre-existing code to process pre-existing file formats.
From my perspective, your suggestion is that I:
- Destroy the ability to distribute "video rendering load" across multiple machines; despite the fact that distributing load across multiple machines is the primary goal of the project.
- Destroy the ability to support multiple cooperating renderers within a single computer (e.g. one renderer using an AMD video card's GPU, one renderer using an Intel on-board GPU and one renderer using CPUs, all working together to generate graphics for a single monitor).
- Destroy the ability to implement a real-time "fixed frame rate with variable quality" video system; which is something I think is necessary to get acceptable latency from software rendering and therefore necessary for the initial stages of the OS (before native video drivers capable of utilising GPU and/or GPGPU exist); and also something I think is "extremely beneficial" after native video drivers exist.
- Design and implement some form of "shader language", and implement developer tools (IDE, compiler, etc) for it, and also implement some sort of "shader emulator" in software so that video can work before native video drivers capable of utilising GPU and/or GPGPU exist; as if my plans aren't ambitious enough already.
- Significantly increase the work necessary for anyone to create a GUI, game, application or widget (it's all 3D).
- Significantly increase the chance of compatibility problems between GUIs, games, applications and widgets (as they all fight to use "shaders" in completely different ways).
- Significantly increase the chance that new software won't work on old systems.
- Significantly increase the chance that old software won't be any better when running on newer/more capable systems.
- Significantly increase the difficulty of supporting multiple monitors (where the same scene is displayed across all monitors).
- Significantly increase the difficulty of recovering gracefully when applications crash (because, who knows what they were doing via. the low level graphics interface at the time or what state the video card might have been left in). Note: This is something that Microsoft has failed to do for about 20 years now.
- Significantly increase the difficulty of true multi-tasking (e.g. having two 3D games running in windows at the same time). Note: This is also something that Microsoft has failed to do for about 20 years now.
- Completely destroy all advantages that make my OS better than existing OSs and able to compete with existing OSs for the video system.
In return for making these massive sacrifices that destroy almost everything I'm trying to achieve, this will:
- Make it possible for irrelevant retards to generate graphics that completely ignore the medium and use a different (e.g. "drawn with crayons") artistic style for an insignificant number of games.
Rusky wrote:Brendan wrote:There's never a need to synchronise user input and physics with the frame rate.
There is absolutely a need to synchronize user input and physics with the frame rate. There needs to be a consistent lag time between input and its processing, physics needs to happen at a consistent rate to get consistent results, and rendering needs to happen at a consistent rate to avoid jitter/tearing. Of course these rates don't have to be the same, but they do have to be synchronized. Many games already separate these rates and process input as fast as is reasonable, video at the monitor refresh rate, and physics at a lower, fixed rate for simulation consistency.
There does not need to be a consistent lag time between input and its processing; the lag can be "as soon as possible" rather than "always slow enough to cover the worst case, so that the lag is constant". (Note that by getting user input once per frame, the lag between input and its processing varies depending on how long until the main loop gets back to checking for user input again, so it isn't constant anyway.)
Things like physics do need to be running at a constant rate and this needs to be synchronised with "wall clock time"; but the granularity of the time keeping can be extremely precise (e.g. nanosecond precision) and should not be crippled (e.g. 1/60th of a second precision). The video system (e.g. renderer, etc) needs to be synchronised with the monitor's vertical sync; but it just uses data from the game to determine what is where at any point in time (as I've described previously) and needn't be synchronised with the game in any other way.
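To illustrate (as a very rough sketch only - none of this is designed yet, and every name below is invented): the game's physics stamps its state with wall clock time, and the video system works out "what is where" at the exact time of the vertical sync from that data, with no other synchronisation between the two.

Code:
/* Rough sketch of decoupling physics from rendering. Physics advances game
   state at its own fixed rate against wall-clock time (nanosecond precision);
   the renderer asks "where is this object at time T?" whenever a vertical
   sync occurs. object_t, physics_step() and object_position_at() are all
   invented names, not part of any actual design. */

#include <stdint.h>

typedef struct {
    double pos_x, pos_y, pos_z;      /* position at time "timestamp_ns" */
    double vel_x, vel_y, vel_z;      /* velocity, used to extrapolate */
    uint64_t timestamp_ns;           /* wall-clock time the state refers to */
} object_t;

/* Physics: runs at a fixed rate, stamping each update with nanosecond time */
void physics_step(object_t *obj, uint64_t now_ns, double dt_seconds) {
    obj->pos_x += obj->vel_x * dt_seconds;
    obj->pos_y += obj->vel_y * dt_seconds;
    obj->pos_z += obj->vel_z * dt_seconds;
    obj->timestamp_ns = now_ns;
}

/* Renderer: called on vertical sync; works out where the object is at the
   vsync time purely from the last state the game provided */
void object_position_at(const object_t *obj, uint64_t vsync_ns,
                        double *x, double *y, double *z) {
    double dt = (double)(vsync_ns - obj->timestamp_ns) / 1e9;
    *x = obj->pos_x + obj->vel_x * dt;
    *y = obj->pos_y + obj->vel_y * dt;
    *z = obj->pos_z + obj->vel_z * dt;
}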
Rusky wrote:Brendan wrote:(and are the reason why gamers want "500 frames per second" just to get user input polled more frequently; and the reason why most games fail to use multiple threads effectively).
These gamers are wrong. Correctly-written games take into account all the input received since the last frame whether they're running at 15, 20, 30, or 60 frames per second. Doing it any faster than the video framerate is literally impossible to perceive and anyone who claims to do so is experiencing the placebo effect.
I wasn't talking about correctly written games. I was talking about incorrectly written games that have a "do { get_user_input(); update_AI(); do_physics(); update_screen(); }" main loop.
The difference is extremely easy to perceive in some cases (and not easy to perceive in others). For example; imagine you're playing a game where you've got a rifle and have to shoot a flying duck; the duck is 20 pixels wide on the screen and is flying across the screen from left to right at a speed of 100 pixels per frame (due to "persistence of vision" it looks like the duck is moving but it's actually teleporting from one place on the screen to the next 60 times per second). To hit the duck you need to pull the trigger at exactly the right time - 1/60th of a second too early or 1/60th of a second too late and you miss. If user input is checked once per frame, then it becomes impossible to pull the trigger at the right time.
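To make the arithmetic concrete (a rough sketch only, with invented names): at 60 frames per second the duck covers 6000 pixels per second, so a 20 pixel wide duck gives a hit window of roughly 3.3 ms - far narrower than the 16.7 ms granularity you get from polling input once per frame. If input events carry timestamps instead, the game can still work out exactly where the duck was when the trigger was pulled:

Code:
/* Hypothetical sketch of the duck example. duck_was_hit() is an invented
   name; the point is that the input event's own timestamp (not the time the
   main loop noticed the event) is what decides whether the shot hit. */

#include <stdint.h>

#define DUCK_SPEED_PX_PER_SEC  6000.0   /* 100 pixels per frame at 60 Hz */
#define DUCK_WIDTH_PX          20.0

/* Returns nonzero if a shot fired at "trigger_time_ns" hits a duck that was
   at duck_x0 at time t0_ns and is flying left to right. */
int duck_was_hit(double crosshair_x, double duck_x0,
                 uint64_t t0_ns, uint64_t trigger_time_ns) {
    double dt = (double)(trigger_time_ns - t0_ns) / 1e9;
    double duck_x = duck_x0 + DUCK_SPEED_PX_PER_SEC * dt;
    return (crosshair_x >= duck_x) && (crosshair_x <= duck_x + DUCK_WIDTH_PX);
}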
Rusky wrote:Brendan wrote:By shifting the renderer into the video driver the renderer gets far more control over what is loaded into VRAM than ever before.
The video driver doesn't have the correct information to utilize that control. It doesn't know the optimal format for the data nor which data is likely to be used next; a renderer that's part of a game does.
Um, what? The video driver knows exactly what needs to be drawn, knows exactly what is in VRAM and what isn't, and knows all the details of the hardware; but "lacks" some missing piece of information that you've failed to mention (like, what the objects it's displaying smell like)?
When it's time to render a scene; the format the data is in is whatever format the video driver felt like converting the data into after it loaded the data from files. The data in files (textures, meshes, whatever) may or may not be optimal, but "file format standardisation" is far more important.
Yes; for some things the application may want to provide the data itself (rather than telling the video driver which file contains the data); but that changes nothing - the video driver still converts the data into whatever format it feels like when the data is uploaded to the video driver (and not when a rendering is being done), and "message protocol standardisation" is far more important.
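As a rough illustration of "convert at upload time, not at render time" (the internal format below is completely made up - the real formats don't exist yet):

Code:
/* Sketch of the idea that format conversion happens once, when data is
   uploaded to the video driver; not while a frame is being rendered. The
   driver receives a texture in the standardised format and converts it into
   whatever internal layout suits the hardware it happens to be driving.
   std_texel_t, hw_texel_t and driver_convert_texture() are invented names. */

#include <stddef.h>
#include <stdint.h>

/* Standardised format: "XYZ reflected" for each texel */
typedef struct {
    float reflect_x, reflect_y, reflect_z;   /* assumed to be in the range 0.0 to 1.0 */
} std_texel_t;

/* One possible hardware-specific internal format: packed 10:10:10 fixed point */
typedef struct {
    uint32_t packed;
} hw_texel_t;

/* Called once at upload time; the result is cached and reused for every frame
   that needs this texture afterwards. */
void driver_convert_texture(const std_texel_t *in, hw_texel_t *out, size_t count) {
    for (size_t i = 0; i < count; i++) {
        uint32_t x = (uint32_t)(in[i].reflect_x * 1023.0f) & 0x3FF;
        uint32_t y = (uint32_t)(in[i].reflect_y * 1023.0f) & 0x3FF;
        uint32_t z = (uint32_t)(in[i].reflect_z * 1023.0f) & 0x3FF;
        out[i].packed = (x << 20) | (y << 10) | z;
    }
}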
Rusky wrote:Brendan wrote:Games like (e.g.) Minecraft only need to tell the video driver when a block is placed or removed. Not only is this far simpler for the game but allows the video driver to optimise in ways "generic code for wildly different video hardware" can never hope to do.
...
when games specify what they want (via. a description of their scene) instead of how they want things done, there's no way to scale games up to new hardware without the games specifying what they want??
...
The game just tells the video driver the volume of the liquid, its colour/transparency and how reflective the surface is. Things like "rippling" will be done elsewhere via. control points (part of physics, not rendering).
So you're going to include support specifically for voxel-based games in your API?
For raw data formats, as far as I can tell at this stage, I will want:
- Textures with just "XYZ reflected" for each texel
- Textures with "XYZ reflected" and "XYZ passing through" for each texel
- Textures with "XYZ reflected", and either "surface normal" or "bump height" for each texel
- Textures with "XYZ reflected" and "XYZ passing through"; and either "surface normal" or "bump height" for each texel
- Simple/static meshes with vertices and polygons, where polygons have either "fixed colour", "shaded/interpolated colour for each vertex" or "texture and (u, v) within texture for each vertex"
- Complex/deformable meshes which have control points ("skeleton") and vertices that are affected by those control points (with the same stuff for polygons, etc as simple meshes)
- Cuboids with a 3D grid of "XYZ reflected" for each voxel
- Cuboids with a 3D grid of "object reference" for each coord
- Something to describe a light source (XYZ amplitudes emitted, direction of light emitted, radius of light source)
- Static collections of objects with a list of object references and their location relative to the origin of the collection and their "front" and "top" vectors; that may also include light sources. Note: this is mostly a container object - e.g. so you can have a "car" object that includes "car body", "wheel", "seat" and "engine" sub-objects.
I'm sure there's something I've overlooked, but I'm also sure I'll have a much better idea when I actually start designing the video system.
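Purely to make that list a little more concrete, here's what a few of those raw formats might look like as structures (every name and field is a guess at this stage, for the reasons above):

Code:
/* Rough C sketches of a few of the raw data formats listed above. Nothing
   here is final; the real formats haven't been designed yet. */

#include <stdint.h>

/* Texture with just "XYZ reflected" for each texel */
typedef struct {
    uint32_t width, height;
    float   *xyz_reflected;          /* 3 floats per texel */
} texture_reflect_t;

/* A light source */
typedef struct {
    float amplitude_x, amplitude_y, amplitude_z;   /* XYZ amplitudes emitted */
    float dir_x, dir_y, dir_z;                     /* direction of emitted light */
    float radius;                                  /* radius of the light source */
} light_source_t;

/* One entry in a static collection of objects (the "container" object,
   e.g. a "car" made of "car body", "wheel", "seat" and "engine" sub-objects) */
typedef struct {
    uint32_t object_id;                 /* reference to a previously defined object */
    float pos_x, pos_y, pos_z;          /* location relative to the collection's origin */
    float front_x, front_y, front_z;    /* "front" vector */
    float top_x, top_y, top_z;          /* "top" vector */
} collection_entry_t;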
Rusky wrote:Your standardized scene description format cannot possibly anticipate all the types of information games will want to specify in the future, nor can it possibly provide an optimal format for all the different things games care about specifying today.
Nobody can possibly anticipate all possible things that might or might not be needed in the future when designing anything for any purpose. The only thing that's reasonable to expect is to combine research and foresight in the hope of finding the best solution at the time.
Rusky wrote:Take my water example and explain how that will work- will you design a "Concise Way to Describe Bodies of Water" in all possible situations so that games can scale from "transparent textured plane" to "ripples and reflections using a shader" that takes into account all the possible ways games might want to do that, especially when there's not enough bandwidth to do ripples as a physics process so it must be done in the shader? How will this solution work when you have to specify a "Concise Way to Describe Piles of Dirt" and a "Concise Way to Describe Clouds" and a "Concise Way to Describe Plants" and a "Concise Way to Describe Alien Creatures" and a "Concise Way to Describe Stars" and a "Concise Way to Describe Cars" and a "Concise Way to Describe Spaceships" so that detail can automatically be added when new hardware is available?
No, I won't design "a concise way to describe bodies of water" because that's not general enough. Instead, I will design "a concise way to describe volumes with uniform characteristics" that will be used for things that are solid (e.g. a bar of metal), opaque (e.g. orange juice), emit light (e.g. lava), have a reflective surface (e.g. water) or do not (e.g. fog). In fact I will have two - one using "simple meshes" and one using "control points". These volumes will act as objects. For example, you might have a huge cuboid using a simple mesh and a uniform "water like material" for an entire ocean; and a small "one meter square and 200 mm high" volume/object using control points and the same uniform "water like material". You might then duplicate that small volume/object (with its control points) a thousand times over the top of the ocean, so that with a grid of 10*10 control points across the top of that small volume/object you can create waves across a massive ocean by only changing 100 control points once per second.
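As a rough sketch of that ocean example (the structure, the function and the commented-out message call are all invented for illustration):

Code:
/* Sketch of the ocean example: one small "water like" volume with a 10*10
   grid of control points across its top, instanced many times over a huge
   static ocean cuboid. Waves are made by updating the 100 control points
   once per second; every instance follows automatically. */

#include <math.h>

#define GRID 10

typedef struct {
    float height[GRID][GRID];   /* top-surface control points of the small volume */
} water_patch_t;

/* Game-side update, run once per second: change 100 control points and send
   them to the video driver; nothing else about the ocean needs to be resent. */
void update_waves(water_patch_t *patch, double time_seconds) {
    for (int i = 0; i < GRID; i++) {
        for (int j = 0; j < GRID; j++) {
            patch->height[i][j] =
                0.05f * (float)sin(time_seconds + i * 0.6) *
                        (float)cos(time_seconds + j * 0.6);
        }
    }
    /* send_control_points(video_driver, WATER_PATCH_ID, patch);  -- hypothetical */
}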
I will not design "a concise way to describe piles of dirt" (or clouds, or plants, or aliens, or stars, or cars, or whatever else) because it's completely unnecessary as all of these things can be described in a far more general way.
Rusky wrote:On the scaling-up side, we differ in our goals. I care about artistic integrity, and think games and movies should look the way their designers intended them, and that they should have the freedom to control what that look is, whether it's purposefully-blocky, shaded and outlined like a cartoon, or whatever more-subtle variation on reality they care to come up with. Thus, the only thing old games should do on new hardware is run faster and with higher resolution.
You're right, we do have different goals. I don't care about "artistic integrity"; I care about things like complexity and compatibility, and the ability to make full use of all hardware available (in an entire group of computers, even for hardware that didn't exist when software was written) to get the best possible result for the end user (and not "OMG my eyes are bleeding" retro stuff).
Rusky wrote:You don't care about any of that and think the video driver should redesign every game's look on every hardware update for the sake of "realism." I think this should only apply to applications that want it, like CAD tools or scientific imaging tools, that can use a specific library for that purpose, which will only need to be updated once when a new generation of hardware comes out, not again for every driver.
I think you may be getting confused here.
This is a realistic rendering of a realistic scene:
And this is a realistic rendering of an unrealistic scene:
Both of these use realistic rendering; and both of these are examples of what I want.
However...
This is an unrealistic rendering of an unrealistic scene (which I don't want, but I can't actually prevent a game from generating a 2D texture and having a realistically rendered scene with that single texture, ambient lighting and nothing else):
Note: I tried to find an example for "unrealistic rendering of realistic scene" and failed - the closest that I did find was only "extremely bad realistic rendering of realistic scene".
If you use google images to search for "computer aided design screenshot" you'll see a lot of pictures. Some of these pictures use realistic rendering anyway, some use wire frame, and some use "2D lines on 2D plane". For wire frame, you can just connect the source vertices with thin black tubes and it'd be mostly the same (a little less efficient, but not enough to care about). For "2D lines on 2D plane", software can just create a texture and say where it wants that texture. There were also some pictures showing realistic rendering on top of a flat surface, where that flat surface was a "2D lines on 2D plane" diagram. Finally, there were also a few (much less common) pictures where wire frame was super-imposed on top of a realistic rendering; for these you'd need to use the renderer recursively (e.g. generate a texture from "vertices connected by tubes with transparent background", then do "realistic rendering, with previous texture in foreground"). None of these cases is a significant problem (a bit less efficient in some cases, but not enough to care about).
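As a sketch of that last "wireframe superimposed on realistic rendering" case (every function name below is invented - the real messaging protocol doesn't exist yet):

Code:
/* Two-pass recursive rendering sketch; all of these functions are
   hypothetical and only exist to show the sequence of requests. */
extern int  begin_offscreen_scene(int width, int height);  /* returns a texture handle */
extern void describe_wireframe_as_tubes(int target_texture);
extern void end_offscreen_scene(int texture);
extern void begin_scene(void);
extern void describe_model(void);
extern void place_foreground_texture(int texture);
extern void end_scene(void);

void draw_cad_view(void)
{
    /* Pass 1: connect the source vertices with thin black tubes and render
       them to an off-screen texture with a transparent background */
    int wire_tex = begin_offscreen_scene(1920, 1080);
    describe_wireframe_as_tubes(wire_tex);
    end_offscreen_scene(wire_tex);

    /* Pass 2: the normal realistic rendering of the model, with the wireframe
       texture placed in the foreground */
    begin_scene();
    describe_model();
    place_foreground_texture(wire_tex);
    end_scene();
}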
Rusky wrote:Brendan wrote:Often impostors aren't "good enough" and do need to be redrawn, but it's far less often than "every object redrawn every frame".
Please note that this is not something I invented. "Static impostors" have been used in games for a very long time, and (some) game developers are doing auto-generated/dynamic impostors with OpenGL/DirectX.
Imposters are a trick to improve performance when necessary, not the primary way things are drawn in games today. Parallelizing a game across a network by converting everything to imposters is not an improvement, it's throwing everything in the toilet for the sake of forcing your idea to work.
The point is, in the worst case (which happens a lot) the camera will be moving and rotating and changing the perspective on everything visible, so you need to be prepared to handle this case even if it doesn't happen all the time. And if your render state is strewn across the network, you have absolutely no hope of handling this and games will just have imposter artifacts and lowered quality all the time.
I am prepared to handle that case. That case is the sort of thing that my "fixed frame rate, variable quality" approach is designed for - if things are changing so rapidly that the video system can't do everything as "high quality" in the time available, then quality is reduced (lower resolution rendering that gets upscaled, fewer lights taken into account, "solid colour" used instead of textures, a "far clipping plane" much closer to the camera, impostors that are recycled when they're "less good enough", etc). The absolute worst case is that the video driver isn't able to draw anything at all in the time available before a frame is shown, and the scene changes so much between frames that this happens for many frames; in that case the video driver reduces quality all the way down to "user sees nothing more than a screen full of grey". Of course this absolute worst case is something that I hope nobody will ever see in practice.
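As a very rough sketch of that idea (the quality levels and thresholds below are invented for illustration, not part of any actual design):

Code:
/* Simplified sketch of "fixed frame rate, variable quality": given how much
   of the frame's time budget is left, the video driver picks a quality level
   rather than missing the deadline. pick_quality() and the levels are made up. */

#include <stdint.h>

typedef enum {
    Q_FULL,          /* full resolution, textures, all lights, distant far clipping plane */
    Q_REDUCED,       /* lower resolution upscaled, fewer lights */
    Q_MINIMAL,       /* solid colours instead of textures, near clipping plane, recycled impostors */
    Q_NOTHING        /* absolute worst case: nothing drawn, screen stays grey */
} quality_t;

quality_t pick_quality(uint64_t ns_left_in_frame, uint64_t ns_estimated_full_render) {
    if (ns_left_in_frame >= ns_estimated_full_render)
        return Q_FULL;
    if (ns_left_in_frame >= ns_estimated_full_render / 2)
        return Q_REDUCED;
    if (ns_left_in_frame >= ns_estimated_full_render / 8)
        return Q_MINIMAL;
    return Q_NOTHING;
}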
Cheers,
Brendan