
Re: Concise Way to Describe Colour Spaces

Posted: Wed Jul 29, 2015 5:21 am
by embryo2
Brendan wrote:Now imagine that some of the computers in between are doing fancy tricks (e.g. maybe some sort of intelligent proxy cache) and aren't just shoving TCP/IP packets around but are actually inspecting and modifying the packets. In this case the client and server can't use whatever protocol they like; and have to use a protocol that the computers in the middle can understand.
In this case none of the participants can use whatever protocol they like. It means almost all OS components have to understand the protocol, even though the goals of those components are very different; so you end up with one protocol serving many goals. As was written above, HTML went through a series of serious revisions and finally found itself in a position where it was largely replaced by JavaScript working on the DOM. It means your system could end up with a lot of ugly code (JavaScript) that performs the same tasks for all clients, but was written by different people in very different ways.

And even worse, because you still have no scene description (like the DOM or HTML), you can't start writing any of your system components. When you finally define the description and have the first component written, one of the next components could easily expose deficiencies in the description, and then you have to change not only the description but also every component already written. And this interdependence is everywhere, because you haven't just bound the UI components to the description, you've also folded the system-wide messaging protocol into it. So when your messaging protocol has to change, the whole OS with all its components has to change. That's a great case of a "good" design.

Separation of responsibilities and encapsulation should be your friends. The words about a flexible chain of processing are great, but the implementation often smashes the nice picture with its face against some hard place.
Brendan wrote:I mostly only care about creating the best OS I can; and if people don't understand what I'm doing it's unfortunate but doesn't really effect my long term plans.
OK, I misunderstood your frustration in the previous post.

Re: Concise Way to Describe Colour Spaces

Posted: Wed Jul 29, 2015 9:22 am
by Rusky
Brendan wrote:We've been through this already - lower level just pointlessly complicates everything and makes the system far less flexible (rather than confining the complications in the video driver); and there are multiple examples of similar things working perfectly fine over a network.
It's a tradeoff between application flexibility (as opposed to system flexibility) and API simplicity (not flexibility). You want a simple API so that it's easier to mix and match services, but that's still possible with a low-level API, by layering wrapper libraries/services and more standard protocols on top of it. The application flexibility you would gain is very much worth it, at least to me - applications are what people use the computer for in the first place; the OS is just a way to facilitate that. But as we've already seen, you don't care if you restrict the features applications can provide, as long as you're convinced it's necessary in order to magically interchange old and new hardware.

For example, as a user, I would much rather have a variety of art styles in my games, consistently-behaved applications, and the possibility of innovation from application developers, than the ability to seamlessly add icicle effects to my windows (not that that's even a necessary tradeoff). :roll:
Brendan wrote:Erm. Minecraft has never and will never use voxels. It uses "blocks" (textured cubes) arranged in a regular grid.
Perhaps I used the word "voxel" incorrectly, but regardless, the optimization I described is still very much applicable. Minecraft blocks are always aligned to a grid, and a generic format for storing sub-object positions would not be able to take advantage of that by e.g. storing a compressed matrix of game-specific block types rather than a list of block positions. Further, multiplayer can and must handle much greater latency than the renderer, which needs to show an added or removed block in the very next frame once the player clicks.

And even if you don't like the Minecraft example in particular (it seems to be succeeding just fine with a horribly inefficient codebase on a horribly inefficient platform), the principle applies more strongly to more ambitious games and other rendering and simulation techniques, like my particle system example.

Re: Concise Way to Describe Colour Spaces

Posted: Wed Jul 29, 2015 11:54 am
by Brendan
Hi,
Rusky wrote:
Brendan wrote:We've been through this already - lower level just pointlessly complicates everything and makes the system far less flexible (rather than confining the complications in the video driver); and there are multiple examples of similar things working perfectly fine over a network.
It's a tradeoff between application flexibility (as opposed to system flexibility) and API simplicity (not flexibility). You want a simple API so that it's easier to mix and match services, but that's still possible with a low-level API, by layering wrapper libraries/services and more standard protocols on top of it. The application flexibility you would gain is very much worth it, at least to me - applications are what people use the computer for in the first place; the OS is just a way to facilitate that. But as we've already seen, you don't care if you restrict the features applications can provide, as long as you're convinced it's necessary in order to magically interchange old and new hardware.
I'd say it's more about:
  • software flexibility (the ability to "mix & match" processes to do powerful things)
  • software compatibility (both forward and backward compatibility, between normal processes and other normal processes and between normal processes and video drivers)
  • hardware compatibility - e.g. software being able to use "2D flat" displays, and VR helmets, and "true 3D" displays, without knowing or caring what the display is and without needing every application/game to be specially modified just to support something like Oculus Rift
  • practicality - e.g. no hobbyist OS developer has ever managed to write a "shader compiler" or support shaders for any video card; and even large/established OSs like Linux struggle to get it working and stable for longer than 3 minutes despite support from large companies that produce the video cards
  • marketing - e.g. the ability to highlight advantages (because it has advantages, because it's not the same as existing systems), while not mentioning the disadvantages
..versus:
  • the ability for a negligible/irrelevant minority of artists to mimic the look of other mediums, a look that was caused by technical limitations of those mediums and wasn't desired for those mediums (and only became "desired by some" due to nostalgia).
Rusky wrote:For example, as a user, I would much rather have a variety of art styles in my games, consistently-behaved applications, and the possibility of innovation from application developers, than the ability to seamlessly add icicle effects to my windows (not that that's even a necessary tradeoff). :roll:
As a user, I also want consistently-behaved applications, rather than games that crash due to compatibility problems and/or unstable drivers (that has always, and will always, plague systems like Windows and Linux because of "far too low level" graphics APIs).
Rusky wrote:
Brendan wrote:Erm. Minecraft has never and will never use voxels. It uses "blocks" (textured cubes) arranged in a regular grid.
Perhaps I used the word "voxel" incorrectly, but regardless, the optimization I described is still very much applicable. Minecraft blocks are always aligned to a grid, and a generic format for storing sub-object positions would not be able to take advantage of that by e.g. storing a compressed matrix of game-specific block types rather than a list of block positions.
It'd be trivial to (e.g.) have a "create object as (n*m*o) grid of sub-objects" functionality in the video driver's interface and allow "sub objects" to be referenced by indexing (which would be helpful for both blocks and voxels). It'd also be relatively easy for the video driver to store these in "run length encoded" format (or whatever format it likes).

The slightly trickier part is doing hidden surface removal on "container objects" so that internal surfaces are ignored during rendering. This is trivial for the "grid of blocks" case (e.g. just remove adjoining faces for solid blocks). For "collection of arbitrary shaped sub-objects" it's a bit harder.
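As a rough sketch of how both ideas could fit together (all names and structures below are my own invention for illustration, not anything from Brendan's actual interface), the driver could store the grid itself and cull internal faces by checking neighbouring cells:

Code:
/* Hypothetical sketch of a "create object as (n*m*o) grid of sub-objects"
 * interface plus the trivial internal-face culling for solid blocks.
 * Nothing here comes from a real driver; it only illustrates the idea. */
#include <stdint.h>
#include <stdlib.h>

typedef uint16_t block_id;          /* 0 = empty; anything else is a solid sub-object */

typedef struct {
    uint32_t n, m, o;               /* grid dimensions */
    block_id *cells;                /* n*m*o entries; the driver may keep these RLE-compressed */
} grid_object;

/* "create object as (n*m*o) grid of sub-objects" */
grid_object *video_create_grid(uint32_t n, uint32_t m, uint32_t o)
{
    grid_object *g = malloc(sizeof *g);
    g->n = n; g->m = m; g->o = o;
    g->cells = calloc((size_t)n * m * o, sizeof(block_id));
    return g;
}

/* Sub-objects are referenced by index, so callers never send explicit positions. */
static size_t cell_index(const grid_object *g, uint32_t x, uint32_t y, uint32_t z)
{
    return ((size_t)z * g->m + y) * g->n + x;
}

void video_set_block(grid_object *g, uint32_t x, uint32_t y, uint32_t z, block_id b)
{
    g->cells[cell_index(g, x, y, z)] = b;
}

/* "Just remove adjoining faces for solid blocks": a face is rendered only if
 * the neighbouring cell is empty or lies outside the grid. */
int face_visible(const grid_object *g, uint32_t x, uint32_t y, uint32_t z,
                 int dx, int dy, int dz)
{
    int64_t nx = (int64_t)x + dx, ny = (int64_t)y + dy, nz = (int64_t)z + dz;
    if (nx < 0 || ny < 0 || nz < 0 || nx >= g->n || ny >= g->m || nz >= g->o)
        return 1;                                         /* boundary face */
    return g->cells[cell_index(g, (uint32_t)nx, (uint32_t)ny, (uint32_t)nz)] == 0;
}

The "run length encoded" storage mentioned above would live behind video_create_grid and video_set_block; the interface itself wouldn't need to change.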

Note that the kernel will do generic message compression, but only to improve network bandwidth usage (for all other cases it's faster not to bother with compressing/decompressing message data, even if a message is huge); and for files, compression and decompression can be done automatically (via my "file format converter" feature). Mostly, there'd be no need for a game like Minecraft to bother compressing/decompressing anything.

However; I can't help thinking that these are easy answers to the wrong problem. Should I be worried that a significant number of programmers lack problem solving skills; and that by making my video system work differently it'll create problems that these people won't be able to solve?
Rusky wrote:Further, multiplayer can and must handle much greater latency than the renderer, which needs to show an added or removed block in the very next frame once the player clicks.
You might assume that when one player adds/removes a block, all players need to see the change within about 50 ms; however, this isn't necessarily true.

If you have a 250 ms "placing the block" animation then you know when the block will be placed 250 ms in advance, and can do the animation faster to hide any latency issues and then place the block at exactly the right time despite network latency.
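To make that concrete (the numbers and the function below are mine, purely to illustrate the trick described above): the player placing the block announces it at the start of the animation, and each remote machine shortens its own animation by however late the announcement arrived, so the block lands everywhere at roughly the same moment.

Code:
/* Hypothetical latency-hiding sketch: the initiating machine plays the full
 * 250 ms "placing the block" animation; a remote machine that learns about
 * the placement latency_ms later plays a faster animation so it still
 * finishes at the agreed time. */
#include <stdint.h>

#define PLACE_ANIM_MS 250u   /* nominal animation length, assumed */

uint32_t local_anim_duration_ms(uint32_t latency_ms)
{
    if (latency_ms >= PLACE_ANIM_MS)
        return 0;                        /* message arrived too late: place the block immediately */
    return PLACE_ANIM_MS - latency_ms;   /* shorter animation hides the network latency */
}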
Rusky wrote:And even if you don't like the Minecraft example in particular (it seems to be succeeding just fine with a horribly inefficient codebase on a horribly inefficient platform), the principle applies more strongly to more ambitious games and other rendering and simulation techniques, like my particle system example.
Physics is a completely different topic (where I'll want a generic "physics engine" service that uses momentum and collision detection to predict where objects will be in the future).


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Wed Jul 29, 2015 2:41 pm
by AndrewAPrice
Brendan's rendering system sounds awesome. We should all encourage innovation and applaud the people that want to try something different.

Many compositing window managers and 2D games use 3D APIs (OpenGL/Direct3D); they just use orthographic projection, which effectively discards depth so the output is displayed as if it were a flat 2D plane.

Even if Brendan's only allowed realistic perspective projection, I'm sure there's a way around it if you want to make a cartoony 2D game:
- Most APIs have some kind of 'render-to-texture' functionality (used for mirrors and in-game monitor/television screens); render your game to this texture then display it in front of the user (even if the user is using a traditional monitor or an Oculus Rift.)
- He talks about playing videos through his rendering system, so he's providing some kind of pixel-pushing mechanism that video decoders can use. Perhaps they can make a custom 2D renderer that pushes pixels to the screen.
- He talks about GUI widgets and images embedded into documents. If there's a mechanism to draw these 2D elements, then there's likely a mechanism to draw 2D sprites (which are just images.)

Anyway, if we return to the original topic of describing colour spaces (regardless of whether we're using a 3D API, a ray tracer, pixel pushing, etc.) then I fail to see why Brendan's representation couldn't represent unrealistic scenes.

For example, a video game artist would probably love being able to fine-tune the colours in this scene, knowing they'd show up nearly exactly the same on everyone's screen:
[image]

Likewise, if we wanted to draw this, we could output their exact NTSC colours:
[image]

I think haters are going to keep hating until Brendan releases a design document or a demo showing them wrong. :)

Re: Concise Way to Describe Colour Spaces

Posted: Wed Jul 29, 2015 6:45 pm
by Rusky
Cross-hardware consistency is a fantastic goal, but there's no point in discussing how to implement it if we all just encourage whatever implementation happens to be proposed whether it sounds like it will work or not. I am perhaps too confrontational, but in the end I do think Brendan's ideas are interesting or I wouldn't be here.
Brendan wrote:
  • software flexibility (the ability to "mix & match" processes to do powerful things)
The current winner in this category uses the low level interface of "streams of bytes, often interpreted as text." It's nowhere near the flexibility ceiling, but I think it proves that you don't need the OS itself to understand the camera, meshes, and materials just to add icicle effects to your windows.
Brendan wrote:
  • hardware compatibility - e.g. software being able to use "2D flat" displays, and VR helmets, and "true 3D" displays, without knowing or caring what the display is and without needing every application/game to be specially modified just to support something like Oculus Rift
The Oculus Rift could not have been developed on your OS (because they needed to experiment with shaders directly) nor do I think it will be capable of distributed rendering for a long time (because of extreme latency issues causing motion sickness). I do think this is a good case for building cameras into a standard API, but more as a protocol than as a shared implementation- that way applications following the camera protocol would be compatible with weird displays without sacrificing control over how.
Brendan wrote:
  • the ability for a negligible/irrelevant minority of artists to mimic the look of other mediums, a look that was caused by technical limitations of those mediums and wasn't desired for those mediums (and only became "desired by some" due to nostalgia).
This is a sad misrepresentation of artists and of the mediums they use. Technical limitations are not something to be eliminated forever (though enabling their elimination is good), nor are they only desired due to nostalgia. Intentionally making a game look a certain way is a major part of controlling its impact on people, which is the essence of what art is. Forcing people into one particular set of (non-)limitations means you limit the kinds of art they can make on your platform.
Brendan wrote:As a user, I also want consistently-behaved applications, rather than games that crash due to compatibility problems and/or unstable drivers (that has always, and will always, plague systems like Windows and Linux because of "far too low level" graphics APIs).
This, on the other hand, is a sad misrepresentation of existing systems. Intentionally ignoring existing platforms' strengths does not help you compete with them.

Windows is actually incredibly consistent in supporting old applications until the hardware no longer supports them (at which point there are emulators like DOSBox). This is definitely not done in the way you want, but that doesn't diminish the fact of its accomplishment.

New APIs like Vulkan and DX12, which are lower-level than current ones, which are already lower-level than you want, promise to improve stability by reducing the amount of code duplicated between drivers. This is coming from the people who actually work on the drivers and the software that has to run on top of it- just because you don't like their methods doesn't mean it won't increase stability.
Brendan wrote:It'd be trivial to (e.g.) have a "create object as (n*m*o) grid of sub-objects" functionality in the video driver's interface and allow "sub objects" to be referenced by indexing (which would be helpful for both blocks and voxels). It'd also be relatively easy for the video driver to store these in "run length encoded" format (or whatever format it likes).

The slightly trickier part is doing hidden surface removal on "container objects" so that internal surfaces are ignored during rendering. This is trivial for the "grid of blocks" case (e.g. just remove adjoining faces for solid blocks). For "collection of arbitrary shaped sub-objects" it's a bit harder.
At this point you've added an awful lot of code, almost completely specific to games-with-grids-of-blocks, which now has to be duplicated across all video drivers. Combine that with all the other optimizations you'd need to add for other types of things games want to do, and keep adding it every time a new game idea comes out, and now your video driver is massive.

It would probably make sense to deduplicate helper code like that into a shared wrapper (or several, so applications can talk to only the ones they want), at which point you've created a game engine on top of a lower-level (relatively speaking) API.
Brendan wrote:Physics is a completely different topic (where I'll want a generic "physics engine" service that uses momentum and collision detection to predict where objects will be in the future).
The point of my particle system example is that sometimes you don't want a full-blown physics engine- doing that for particles would be insane. Particles have very simple movements, they don't interact, and there are very large numbers of them. You could of course just use a parallel-computation API, but then you've introduced a bandwidth problem (transferring the particles at all is unnecessary) in the name of scaling, which particle systems don't need.
MessiahAndrw wrote:Even if Brendan's only allowed realistic perspective projection, I'm sure there's a way around it if you want to make a cartoony 2D game
But not, according to Brendan, a way to make a cartoony 3D game.
MessiahAndrw wrote:For example, a video game artist would probably love being able to fine-tune the colours in this scene, knowing they'd show up nearly exactly the same on everyone's screen:
As I understand it, this is somewhat contrary to Brendan's claims of scaling the game between old and new hardware- the base colors used may remain the same, but a 3D game definitely wouldn't look the same. In fact, his distributed rendering system would lead to inconsistencies just between different runs of the game when there is more or less hardware available at the time.

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 30, 2015 4:25 am
by Brendan
Hi,
MessiahAndrw wrote:Even if Brendan's only allowed realistic perspective projection, I'm sure there's a way around it if you want to make a cartoony 2D game:
- Most APIs have some kind of 'render-to-texture' functionality (used for mirrors and in-game monitor/television screens); render your game to this texture then display it in front of the user (even if the user is using a traditional monitor or an Oculus Rift.)
- He talks about playing videos through his rendering system, so he's providing some kind of pixel-pushing mechanism that video decoders can use. Perhaps they can make a custom 2D renderer that pushes pixels to the screen.
- He talks about GUI widgets and images embedded into documents. If there's a mechanism to draw these 2D elements, then there's likely a mechanism to draw 2D sprites (which are just images.)
To be clear; for all 2D games (including those with parallax scrolling, etc) there's no problem at all (just create the textures/sprites/background to look however you want, set the ambient lighting to 1.0 and have no other lighting). It's 3D games where the artistic style is applied later in the graphics pipeline (e.g. cel shading) that wouldn't be possible unless all video drivers and renderers support that artistic style (which means it's not necessarily impossible, but extremely difficult to enumerate all the different styles and add support for all of them in everything).

For playing videos/movies, I haven't decided on the file format that "movie data" will be in, but it's extremely likely that it's going to mirror the design of the video API. Essentially; processes (e.g. applications, 3D games, GUIs) send "commands" to the video driver to describe what to render, and a movie contains these same commands stored in a file. The idea is to make it relatively easy to (e.g.) record a video of a process while its data is being displayed with very bad quality rendering on a single slow computer in "320*200 with 256 colours" video mode, and then (later) play that recording on a group of 16 extremely powerful computers connected to four 3840*2160 "stereoscopic 3D" displays and get extremely high quality rendering. Of course it won't be this simple (the recorder will need to store data for things like "source textures" rather than just the commands, and have "sync points" so you can skip backward/forward efficiently, and will want to optimise where possible by doing things like hidden surface culling); and I know I'm going to have problems converting data from (e.g.) web cameras and legacy file formats (MPEG) into this format efficiently (without doing "one texture per frame").
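Purely as an illustration of that idea (the real file format doesn't exist yet, so every name below is invented), such a recording could be little more than a tagged stream of the same rendering commands, interleaved with the referenced source textures and periodic sync points for seeking:

Code:
/* Hypothetical on-disk record layout for "movie = recorded command stream".
 * Invented for illustration only; not the actual format. */
#include <stdint.h>

enum record_type {
    REC_COMMAND = 1,   /* one of the ordinary "describe what to render" commands */
    REC_TEXTURE = 2,   /* source texture data that later commands refer to */
    REC_SYNC    = 3    /* full scene snapshot, so playback can seek to this point */
};

typedef struct {
    uint64_t timestamp_us;   /* when to apply this record during playback */
    uint32_t type;           /* one of enum record_type */
    uint32_t length;         /* bytes of payload that follow this header */
    /* uint8_t payload[length]; */
} movie_record;

Because the records are device-independent commands rather than pixels, the same file could be replayed at "320*200 with 256 colours" or at 3840*2160 stereoscopic quality, exactly as described above.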


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 30, 2015 5:35 am
by Brendan
Hi,
Rusky wrote:
Brendan wrote:
  • software flexibility (the ability to "mix & match" processes to do powerful things)
The current winner in this category uses the low level interface of "streams of bytes, often interpreted as text." It's nowhere near the flexibility ceiling, but I think it proves that you don't need the OS itself to understand the camera, meshes, and materials just to add icicle effects to your windows.
You need everything that works with the data to understand it (e.g. utilities like "more" and "wc", and the VFS and virtual terminal/s in Unix; and GUIs, debuggers, video recorders/players, video drivers, "virtual terminal layer" and processes that do special effects for my OS). Some of these pieces are typically considered parts of the OS, and some aren't.
Rusky wrote:
Brendan wrote:
  • hardware compatibility - e.g. software being able to use "2D flat" displays, and VR helmets, and "true 3D" displays, without knowing or caring what the display is and without needing every application/game to be specially modified just to support something like Oculus Rift
The Oculus Rift could not have been developed on your OS (because they needed to experiment with shaders directly) nor do I think it will be capable of distributed rendering for a long time (because of extreme latency issues causing motion sickness). I do think this is a good case for building cameras into a standard API, but more as a protocol than as a shared implementation- that way applications following the camera protocol would be compatible with weird displays without sacrificing control over how.
Oculus Rift could have been developed on my OS just by creating a suitable "display device description" file; which is much simpler than diddling with shaders. Note: This isn't strictly true as it only covers "video output" and doesn't include the headset's head tracking/motion sensing stuff; but that applies to both "shaders" and my system equally.
Rusky wrote:
Brendan wrote:
  • the ability for a negligible/irrelevant minority of artists to mimic the look of other mediums, a look that was caused by technical limitations of those mediums and wasn't desired for those mediums (and only became "desired by some" due to nostalgia).
This is a sad misrepresentation of artists and of the mediums they use. Technical limitations are not something to be eliminated forever (though enabling their elimination is good), nor are they only desired due to nostalgia. Intentionally making a game look a certain way is a major part of controlling its impact on people, which is the essence of what art is. Forcing people into one particular set of (non-)limitations means you limit the kinds of art they can make on your platform.
For everything; there are advantages and disadvantages of different approaches. What matters most is whether the advantages justify the disadvantages. For my video system, no support for different artistic styles is a disadvantage, but all of the advantages more than justify that disadvantage (and even just one of those advantages, how practical it is for me to implement the tools and drivers, is enough to justify the disadvantage all by itself).
Rusky wrote:
Brendan wrote:As a user, I also want consistently-behaved applications, rather than games that crash due to compatibility problems and/or unstable drivers (that has always, and will always, plague systems like Windows and Linux because of "far too low level" graphics APIs).
This, on the other hand, is a sad misrepresentation of existing systems. Intentionally ignoring existing platforms' strengths does not help you compete with them.

Windows is actually incredibly consistent in supporting old applications until the hardware no longer supports them (at which point there are emulators like DOSBox). This is definitely not done in the way you want, but that doesn't diminish the fact of its accomplishment.
Windows has inconsistent results just trying to cope with new games designed for new hardware. For example, recently on OSnews there was an article about Windows 10 forcing updates and a tool to block updates, where a lot of the comments were from people worried about video drivers being updated/replaced with buggy drivers.
Rusky wrote:New APIs like Vulkan and DX12, which are lower-level than current ones, which are already lower-level than you want, promise to improve stability by reducing the amount of code duplicated between drivers. This is coming from the people who actually work on the drivers and the software that has to run on top of it- just because you don't like their methods doesn't mean it won't increase stability.
No, this is coming from marketing/PR people who can and will say anything; including "promising" more stability because they know that it's one of the things consumers frequently have problems with. We won't know how stable/unstable it actually is until it's been in use for 12+ months.
Rusky wrote:
Brendan wrote:It'd be trivial to (e.g.) have a "create object as (n*m*o) grid of sub-objects" functionality in the video driver's interface and allow "sub objects" to be referenced by indexing (which would be helpful for both blocks and voxels). It'd also be relatively easy for the video driver to store these in "run length encoded" format (or whatever format it likes).

The slightly trickier part is doing hidden surface removal on "container objects" so that internal surfaces are ignored during rendering. This is trivial for the "grid of blocks" case (e.g. just remove adjoining faces for solid blocks). For "collection of arbitrary shaped sub-objects" it's a bit harder.
At this point you've added an awful lot of code, almost completely specific to games-with-grids-of-blocks, which now has to be duplicated across all video drivers. Combine that with all the other optimizations you'd need to add for other types of things games want to do, and keep adding it every time a new game idea comes out, and now your video driver is massive.

It would probably make sense to deduplicate helper code like that into a shared wrapper (or several, so applications can talk to only the ones they want), at which point you've created a game engine on top of a lower-level (relatively speaking) API.
It adds up to about 5 "base object" types, about 5 "container object" types, about 10 "texture" types, and about 2 "light source" types. The number of different types is responsible for a relatively tiny amount of complexity, and the majority of the complexity is unrelated (and caused by minimising data passed via messages, rendering triangles, etc).

Also note that the design already allows "external renderers" (and the initial implementation will be a software renderer using CPUs that's not built into any video driver); and this includes multiple external renderers being used in parallel with the video driver's own (for distributed processing), and one external renderer being used instead of the video driver's. However; all of this is behind the video interface/abstraction (where no widget, application, game or GUI has to care about it).
Rusky wrote:
Brendan wrote:Physics is a completely different topic (where I'll want a generic "physics engine" service that uses momentum and collision detection to predict where objects will be in the future).
The point of my particle system example is that sometimes you don't want a full-blown physics engine- doing that for particles would be insane. Particles have very simple movements, they don't interact, and there are very large numbers of them. You could of course just use a parallel-computation API, but then you've introduced a bandwidth problem (transferring the particles at all is unnecessary) in the name of scaling, which particle systems don't need.
If each particle has a current position and current trajectory; these values only need to be updated when the trajectory changes. With 10000 particles it might cost 32 KiB per second. Of course I would want to use a full blown physics engine and have (e.g.) particles bouncing off of walls, being affected by gravity, etc; partly because I need it for other things and partly because I can.
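For what it's worth, the 32 KiB per second figure is in the right ballpark under assumptions of my own (the original post doesn't state them): if an update carries a particle ID plus new position and trajectory (around 32 bytes) and roughly 10% of the 10000 particles change trajectory in any given second, the arithmetic comes out just under 32 KiB/s.

Code:
/* Back-of-envelope check of the "32 KiB per second" claim, using assumed
 * numbers (update size and trajectory-change rate are guesses). */
#include <stdio.h>

int main(void)
{
    const int    particles        = 10000;
    const int    bytes_per_update = 32;    /* id + position + trajectory, assumed */
    const double change_rate      = 0.10;  /* fraction changing trajectory per second, assumed */

    double bytes_per_sec = particles * change_rate * bytes_per_update;
    printf("%.2f KiB/s\n", bytes_per_sec / 1024.0);   /* prints 31.25 KiB/s */
    return 0;
}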


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 30, 2015 9:53 am
by AndrewAPrice
Brendan - do you think the operating system is the appropriate place for doing 3D scene management?

I ask this because game developers rarely deal directly with OpenGL or Direct3D. Instead, they use an engine like Unity 3D, Unreal Engine, Ogre 3D, or Irrlicht. Instead of writing boilerplate code to deal with creating a window and initializing shaders, they're arranging objects in a scene, applying materials (like cloth or metal) to meshes, placing lights and audio emitters, and writing their game logic in a high level language (C#, Javascript, UnrealScript).

In Unity, you just have to change the build target, and within minutes your game is running on smartphones, web browsers, and desktops. It largely works out of the box (with obvious exceptions, like designing your game purely around keyboard input and then trying to play it on a smartphone with only a touchscreen), and it can dynamically adapt the level of detail and resolution to the platform you're running on.

What I'm getting at is that most of these problems are solved, but at the engine level and not the operating system level. A few that haven't been solved, which I think is what you're trying to get at:

1. Streaming/saving the output image in a way that's resolution independent (a vector video format?) Games often have the ability to share recordings, or join into multiplayer games as a spectator, but there's currently no efficient vector video format that'll let you stream a 3D scene without everybody having a copy of the program running on their system.
2. Distributed real-time rendering over a network (I know it's solved for off-line rendering, but I'm not so sure about real-time applications.)
3. Universally describing colour so that it's identical on every device. We should move beyond using RGB for designing user interfaces and storing images.

I think you're reading too much into (1); it sounds like you're effectively implementing an entire rendering engine in your operating system, when what you're really looking for is a practical solution where you can have a 3D program running on another server, and be able to log in, control it, and view its output with as little bandwidth as possible. Honestly, I think it would be easier to implement this as a second project - a rendering engine that has the ability to connect to a running instance of a program on another device, but draw it locally.

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 30, 2015 11:01 am
by Brendan
Hi,
MessiahAndrw wrote:Brendan - do you think the operating system is the appropriate place for doing 3D scene management?
Yes.

More specifically I think an OS should include:
  • Device drivers that abstract the underlying details of the hardware (e.g. AHCI driver, NE2000 driver, SoundBlaster driver, Radeon driver)
  • Higher level layers that provide convenience and flexibility (e.g. file system and VFS, networking stack, sound system, video system)
MessiahAndrw wrote:I ask this because game developers rarely deal directly with OpenGL or Direct3D. Instead, they use an engine like Unity 3D, Unreal Engine, Ogre 3D, or Irrlicht. Instead of writing boilerplate code to deal with creating a window and initializing shaders, they're arranging objects in a scene, applying materials (like cloth or metal) to meshes, placing lights and audio emitters, and writing their game logic in a high level language (C#, Javascript, UnrealScript).
Imagine if an OS didn't provide file systems (or a network stack, or sound system), and developers were all working around this by using third party "file engines" (or "networking engines" or "sound engines") to make up for the OS's failure to provide functionality that applications need. This should sound very silly to everyone.

Now imagine if an OS didn't provide any video system, where developers are all working around this by using third party "game engines" to make up for the OS's failure to provide required functionality. Why doesn't this sound very silly to everyone?

Basically; to me the fact that game developers rarely deal directly with OpenGL and Direct3D directly, and instead go out of their way to find a higher level/more convenient interface (provided by game engines) is proof that existing OSs are missing an essential "video system" layer.
MessiahAndrw wrote:What I'm getting at is that most of these problems are solved, but at the engine level and not the operating system level. A few that haven't been solved, which I think is what you're trying to get at:

1. Streaming/saving the output image in a way that's resolution independent (a vector video format?) Games often have the ability to share recordings, or join into multiplayer games as a spectator, but there's currently no efficient vector video format that'll let you stream a 3D scene without everybody having a copy of the program running on their system.
Not just resolution independent; but "output device independent", which includes:
  • Resolution independent
  • Colour space independent
  • Device size and shape independent (e.g. tiny/huge, 2D/stereoscopic/3D, flat/curved)
  • Number of output devices independent (e.g. one screen, or a massive 10*10 wall of individual screens, or something very different like a printer or a file)
MessiahAndrw wrote:2. Distributed real-time rendering over a network (I know it's solved for off-line rendering, but I'm not so sure about real-time applications.)
It's been done for real-time rendering too. However, most OSs aren't distributed OSs and are primarily designed for (and primarily used for) "stand alone single computer"; which means that it's harder to do distributed rendering (as the interfaces aren't designed for it) and there's no demand for it (as few users are using multiple computers).

The other thing I'm trying to achieve is a single consistent protocol used for everything; rather than one used for 3D (games) and a completely different interface for 2D (office apps); and rather than one for communication between widget and application, another for communication between application and GUI, etc. Of course this is just an extension of "device independence" - e.g. a widget's "output device" may be an application, an application's "output device" may be a GUI, and a GUI's "output device" might be a video card (but in all those cases "may be" doesn't imply that it actually is and the "output device" could be anything).
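A sketch of what "one protocol at every layer" might look like (the structure and names below are invented for illustration): every layer emits the same kind of command messages to whatever its "output device" happens to be, so a recorder, a GUI, or a remote renderer can be slotted in without the sender noticing.

Code:
/* Hypothetical single command protocol used at every layer: a widget sends
 * these to its application, the application sends the same kind of messages
 * (plus its own content) to the GUI, and the GUI sends them to the video
 * driver.  None of the senders know what their "output device" really is. */
#include <stdint.h>

enum vid_op {
    VID_CREATE_OBJECT,
    VID_SET_TEXTURE,
    VID_MOVE_OBJECT,
    VID_SET_LIGHT
};

typedef struct {
    uint32_t op;          /* one of enum vid_op */
    uint32_t object_id;   /* sender-local handle for the object involved */
    uint32_t length;      /* bytes of op-specific data that follow */
    /* uint8_t data[length]; */
} vid_command;

/* Every layer exposes the same entry point, so layers can be stacked,
 * replaced, or redirected (e.g. into a recording) transparently. */
typedef void (*output_device_fn)(const vid_command *cmd, const void *data);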
MessiahAndrw wrote:I think you're reading too much into (1); it sounds like you're effectively implementing an entire rendering engine in your operating system, when what you're really looking for is a practical solution where you can have a 3D program running on another server, and be able to log in, control it, and view its output with as little bandwidth as possible. Honestly, I think it would be easier to implement this as a second project - a rendering engine that has the ability to connect to a running instance of a program on another device, but draw it locally.
They could be considered separate "sub-projects"; but at the end of the day I have to design the protocol and also design and implement a software renderer before any process (application, game, GUI) running on the OS can display anything.


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 30, 2015 12:53 pm
by AndrewAPrice
I actually like what you're doing, and my questions are to help me find my own answers too.
Brendan wrote:Imagine if an OS didn't provide file systems (or a network stack, or sound system), and developers were all working around this by using third party "file engines" (or "networking engines" or "sound engines") to make up for the OS's failure to provide functionality that applications need. This should sound very silly to everyone.

Now imagine if an OS didn't provide any video system, where developers are all working around this by using third party "game engines" to make up for the OS's failure to provide required functionality. Why doesn't this sound very silly to everyone?

Basically; to me the fact that game developers rarely deal directly with OpenGL and Direct3D directly, and instead go out of their way to find a higher level/more convenient interface (provided by game engines) is proof that existing OSs are missing an essential "video system" layer.
This is true. Most environments (game engines, event-based GUI systems, language-based frameworks e.g. Java on Android) provide a full blown application framework that hides the event loop and forces you to use their messaging framework, input framework, drawing framework, etc. And for the most part it works. There are very few corner cases where it doesn't work, and it's not because you can't build the same kind of application with that framework, but because you're trying to adapt one framework to run on top of another (for example, trying to port a C++ program to Android involves writing a lot of Java wrapping code.)

I'm not sure if this is 100% correct, but I think the Wii U only supports the Unity engine as their development platform. So they're pretty much saying that every game developed for their system must be built with this one game engine using their IDE, compiler, scene manager, material system, physics system - everything! It works because Unity is incredibly flexible and powerful, and virtually all styles of games can be built in it - realistic, cartoony, 2D, 3D. I actually think you might be interested in playing with Unity (the IDE runs on Mac and Windows) to get an idea of how their rendering system works. For 99% of cases, you can use the default scene manager, but they provide a low-level graphics API (GL - 'Graphics Library') for the few use cases where you need to manually describe what is being drawn for a particular effect (I did this for my water gun effect in Drenched.) In Nullify, the entire map is dynamically drawn from a 3D array in memory, in a way that makes it look like an endlessly wrapping world. I got Nullify running on my Oculus Rift DK2 in 10 minutes. I'm trying to tell you that Unity is really an incredibly powerful engine. It's even possible to communicate with code written in C++/Java/etc for when you want to reuse libraries.

I'm not trying to sell you on using Unity, but I find it relevant to this discussion, because if Nintendo can come out and say "all games for the Wii U must be developed in Unity", why can't we, in the same fashion, provide a Unity-style environment and engine (but one that might be apt for not only games, but also plain old desktop applications) and say all programs must be developed using this IDE and engine? Would it really be much different to Microsoft saying all Windows Metro applications must be developed in Visual Studio using .Net?

Now, if everything was built on this one engine, every application in our operating system would have a consistent way of representing scenes (both 2D and 3D), textures, models, sound resources, etc. It would then be easy for a viewer (an Oculus Rift headset, your local window manager, or a remote computer) to connect to any running application and say "send me your graphics and audio resources and the currently loaded scene", and after that, you only have to send lightweight messages ("play animation #12 on asset #4823, add button saying 'Click Here' to screen") to keep all copies of the scene in sync. The viewer can then decide on the level of detail, resolution, etc. This would be efficient for most use cases.
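To make "lightweight messages" concrete (my own illustration; no existing engine is being quoted here): once the assets and the scene have been transferred, keeping a viewer in sync only needs tiny records along these lines.

Code:
/* Hypothetical scene-sync messages sent after the initial asset transfer.
 * Each one is a handful of bytes, e.g. "play animation #12 on asset #4823". */
#include <stdint.h>

enum sync_op {
    SYNC_PLAY_ANIMATION = 1,   /* param = animation number */
    SYNC_ADD_WIDGET     = 2,   /* param = id of a previously transferred string/widget */
    SYNC_REMOVE_OBJECT  = 3
};

typedef struct {
    uint16_t op;        /* one of enum sync_op */
    uint32_t asset_id;  /* which already-loaded asset/scene node this refers to */
    uint32_t param;     /* op-specific parameter */
} sync_message;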

There are going to be some use cases where the programmer is going to try to achieve some unique effect by manually drawing polygons or pixels every frame, which you're then trying to stream over the network. In moderation, this might not be bad (e.g. you're just using it to draw the occasional water gun effect), but the developer should heed the warning that this is an inefficient way to do things, especially if they intend to stream this over a network. But, there are cases when this could be useful (they're trying to decode a unique image format), and often it would only be done once and loaded into a texture, and you're good.

This would be the approach I would want to take. With a fancy batteries-included Unity-style IDE and build system for my operating system. :) Thoughts?

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 30, 2015 2:52 pm
by Brendan
Hi,
MessiahAndrw wrote:I'm not trying to sell you on using Unity, but I find it relevant to this discussion, because if Nintendo can come out and say "all games for the Wii U must be developed in Unity", why can't we, in the same fashion, provide a Unity-style environment and engine (but one that might be apt for not only games, but also plain old desktop applications) and say all programs must be developed using this IDE and engine? Would it really be much different to Microsoft saying all Windows Metro applications must be developed in Visual Studio using .Net?
I definitely do want something like this (and will/should research Unity), but at this stage it's too far into the future for me.

Mostly I need to get an initial IDE and compiler working for my language (which will involve implementing an initial renderer, video driver, etc before I can even start). Once that's done I can start porting everything to my language. After that's done I'll be looking into improving everything; including doing "IDE version 2", and worrying about things like a debugger and data visualisation, and integrating it with other things (e.g. 3D modeller, scene editor, etc).


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 30, 2015 4:11 pm
by Rusky
Brendan wrote:For everything; there are advantages and disadvantages of different approaches. What matters most is whether the advantages justify the disadvantages. For my video system, no support for different artistic styles is a disadvantage, but all of the advantages more than justify that disadvantage (and even just one of those advantages, how practical it is for me to implement the tools and drivers, is enough to justify the disadvantage all by itself).
Indeed. My point is that whether something is an advantage or a disadvantage is relative, not absolute. Making it more practical to develop is a trade-off against making it more useful for game developers. That's not always a bad thing, but since your goals include things that are useful for artists, like consistent color display, it seems a little odd to me.

For example, these are the sorts of things I'm seeing that don't really help artists/developers out much:
Brendan wrote:Oculus Rift could have been developed on my OS just by creating a suitable "display device description" file; which is much simpler than diddling with shaders.
No, Oculus Rift requires special transformations to work with the lenses it uses, so somebody has to "diddle with shaders" to develop it.

"Display device description" files are great, but would they have been flexible enough for VR if you had designed them years before it came out? Will they be flexible enough to handle new things like holographic displays that require even more drastic changes to the renderer? How about the next display technology after that, which is currently only one of many long-shot experiments?
Brendan wrote:No, this is coming from marketing/PR people who can and will say anything; including "promising" more stability because they know that it's one of the things consumers frequently have problems with. We won't know how stable/unstable it actually is until it's been in use for 12+ months.
Nope. It's coming from the actual individuals who write the actual code used in actual drivers and in actual games, and who are thus responsible for fixing actual instability bugs.
Brendan wrote:If each particle has a current position and current trajectory; these values only need to be updated when the trajectory changes. With 10000 particles it might cost 32 KiB per second. Of course I would want to use a full blown physics engine and have (e.g.) particles bouncing off of walls, being affected by gravity, etc; partly because I need it for other things and partly because I can.
GPU-side particle systems that don't need to have particles interact with anything much (because that's typically what particle systems are) cost 0 KiB per second.
Brendan wrote:Now imagine if an OS didn't provide any video system, where developers are all working around this by using third party "game engines" to make up for the OS's failure to provide required functionality. Why doesn't this sound very silly to everyone?
Because unlike file systems and network stacks, game engines have no need to interoperate with other applications- they just simply produce outputs from inputs. On the other hand, there's no need for most applications to mess with how the file system works, for features or optimization, while there is plenty of need for various trade-offs and feature development between game engines.
Brendan wrote:It's been done for real-time rendering too.
No, it has not. There are distributed renderers, and there are real-time renderers, and there are even distributed "real-time" renderers for massive amounts of static data, but there are no such things as distributed real-time renderers that can handle game-levels of interactivity.

---
MessiahAndrw wrote:I'm not sure if this is 100% correct, but I think the Wii U only supports the Unity engine as their development platform.
This is false, the Wii U has its own SDK completely unrelated to Unity.

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 30, 2015 6:03 pm
by AndrewAPrice
Rusky wrote:
MessiahAndrw wrote:I'm not sure if this is 100% correct, but I think the Wii U only supports the Unity engine as their development platform.
This is false, the Wii U has its own SDK completely unrelated to Unity.
Thanks for verifying that. I know Nintendo have been promoting Unity. (https://wiiu-developers.nintendo.com/, http://www.gamespot.com/articles/ninten ... 0-6418487/)

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 2:06 am
by Brendan
Hi,
Rusky wrote:
Brendan wrote:For everything; there are advantages and disadvantages of different approaches. What matters most is whether the advantages justify the disadvantages. For my video system, no support for different artistic styles is a disadvantage, but all of the advantages more than justify that disadvantage (and even just one of those advantages, how practical it is for me to implement the tools and drivers, is enough to justify the disadvantage all by itself).
Indeed. My point is that whether something is an advantage or a disadvantage is relative, not absolute. Making it more practical to develop is a trade-off against making it more useful for game developers. That's not always a bad thing, but since your goals include things that are useful for artists, like consistent color display, it seems a little odd to me.
Consistent colours is just part of device independence (the ability for software to ask for a colour without caring if the device is an NTSC television or RGB monitor or CMYK printer or..).
Rusky wrote:
Brendan wrote:Oculus Rift could have been developed on my OS just by creating a suitable "display device description" file; which is much simpler than diddling with shaders.
No, Oculus Rift requires special transformations to work with the lenses it uses, so somebody has to "diddle with shaders" to develop it.

"Display device description" files are great, but would they have been flexible enough for VR if you had designed them years before it came out? Will they be flexible enough to handle new things like holographic displays that require even more drastic changes to the renderer? How about the next display technology after that, which is currently only one of many long-shot experiments?
Support for Oculus Rift's display mostly involves 2 things: stereoscopy and support for "non-flat" displays. When creating a new file format I research existing stuff, and stereoscopy has been in EDID specifications for as long as I can remember. I started looking into "non-flat" displays because of ancient "convex curved bubble" CRT displays and saw both the more modern "concave curved" screens and things like projectors being used for parabolic surfaces. Basically, it's likely I would've ended up with both stereoscopy and support for "non-flat" displays in my monitor descriptions even if I'd never heard of any VR helmet.

For "true 3D" displays, my monitor descriptions can't support them (yet). More specifically, I can support various monitor shapes and the techniques that might be used (e.g. 3D grid, spinning plane, polar co-ord + depth); but the monitor descriptions include information about supported video signals/timing and 3D displays can't work for "rows of horizontal scan lines" and will require something very different, and I can't support the details for "some unknowable future video signals/timing scheme".

What I have done is made the monitor descriptions extensible (e.g. where things like the list of video mode timings the display supports use an "entry length; entry type; entry data" format and I can add new types). What this means is that I can add support for "some unknowable future video signals/timing scheme" (and 3D displays in general) in the future when it's possible to do so; and video drivers for existing video cards can just skip all video mode timings that use "entry types" they don't understand (and the video card can't support anyway).
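A sketch of how a driver might walk such a description (the "entry length; entry type; entry data" layout is the wording above; the code and type numbers are invented): unknown entry types are skipped by their length field, which is exactly what makes the format forward compatible.

Code:
/* Hypothetical parser for an "entry length; entry type; entry data" monitor
 * description.  A driver that doesn't recognise an entry type simply skips
 * it, so descriptions can gain new timing schemes without breaking old drivers. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define ENTRY_MODE_TIMING_SCANLINE 1   /* conventional "rows of horizontal scan lines" */
/* future entry types (e.g. true 3D signalling) would be given new numbers */

struct entry_header {
    uint16_t length;   /* total entry size in bytes, including this header */
    uint16_t type;
};

void parse_monitor_description(const uint8_t *desc, size_t size)
{
    size_t off = 0;
    while (off + sizeof(struct entry_header) <= size) {
        struct entry_header h;
        memcpy(&h, desc + off, sizeof h);
        if (h.length < sizeof h || off + h.length > size)
            break;                                    /* malformed entry: stop parsing */
        switch (h.type) {
        case ENTRY_MODE_TIMING_SCANLINE:
            /* handle_scanline_timing(desc + off + sizeof h, h.length - sizeof h); */
            break;
        default:
            break;                                    /* unknown type: skip it entirely */
        }
        off += h.length;
    }
}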

The important thing is that if/when "true 3D" monitor support is added, no normal software (widgets, applications, games, GUIs) will care and all normal software will still work perfectly on "true 3D" displays without any change at all. I won't have to redesign the video interface that normal software uses and then modify every piece of existing software (which is exactly what existing systems using "too low level" video interfaces will be forced to do in this case).
Rusky wrote:
Brendan wrote:No, this is coming from marketing/PR people who can and will say anything; including "promising" more stability because they know that it's one of the things consumers frequently have problems with. We won't know how stable/unstable it actually is until it's been in use for 12+ months.
Nope. It's coming from the actual individuals who write the actual code used in actual drivers and in actual games, and who are thus responsible for fixing actual instability bugs.
Do you honestly think it makes any difference? Tell me the last time you heard a programmer say "I wrote this new code, and it's just as bad or worse than the old code it replaces".

It might even be more stable today. Today it's still new, there's "only" (already) been one addition to the spec, it's only supported by a small number of video drivers and only used by a relatively small number of games. Wait for 12+ months until there's more additions to the spec, more different video drivers trying to support it and more games using it.
Rusky wrote:
Brendan wrote:If each particle has a current position and current trajectory; these values only need to be updated when the trajectory changes. With 10000 particles it might cost 32 KiB per second. Of course I would want to use a full blown physics engine and have (e.g.) particles bouncing off of walls, being affected by gravity, etc; partly because I need it for other things and partly because I can.
GPU-side particle systems that don't need to have particles interact with anything much (because that's typically what particle systems are) cost 0 KiB per second.
In that case, everything in my video system also costs "0 KiB per second" and the video driver just uses magic to know when particles (and other objects) are created, where they are, what they look like and where they're moving.
Rusky wrote:
Brendan wrote:Now imagine if an OS didn't provide any video system, where developers are all working around this by using third party "game engines" to make up for the OS's failure to provide required functionality. Why doesn't this sound very silly to everyone?
Because unlike file systems and network stacks, game engines have no need to interoperate with other applications- they just simply produce outputs from inputs. On the other hand, there's no need for most applications to mess with how the file system works, for features or optimization, while there is plenty of need for various trade-offs and feature development between game engines.
There should be no need for software to know or care if the storage system uses RAID or not, or AHCI or USB or iSCSI; because the OS provides adequate abstractions that hide these "irrelevant to applications" details.

There should be no need for software to know or care if the networking system uses "bridged" network adapters or not, or if it uses optical fibre or copper or radio waves; because the OS provides adequate abstractions that hide these "irrelevant to applications" details.

There should be no need for software to know or care if the video system uses multiple monitors or not, or 2D or stereoscopic or "true 3D" or VR helmet; because the OS provides adequate abstractions that hide these "irrelevant to applications" details.
Rusky wrote:
Brendan wrote:It's been done for real-time rendering too.
No, it has not. There are distributed renderers, and there are real-time renderers, and there are even distributed "real-time" renderers for massive amounts of static data, but there are no such things as distributed real-time renderers that can handle game-levels of interactivity.
I hope you're right (and hope that my OS will be the first and only OS that's capable of doing this, and that when I need to promote my OS I'll be able to laugh at how crippled and crappy other OSs are in comparison for at least 5+ years while other OSs redesign their short-sighted interfaces).


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 4:47 am
by Rusky
Brendan wrote:The important thing is that if/when "true 3D" monitor support is added, no normal software (widgets, applications, games, GUIs) will care and all normal software will still work perfectly on "true 3D" displays without any change at all. I won't have to redesign the video interface that normal software uses and then modify every piece of existing software (which is exactly what existing systems using "too low level" video interfaces will be forced to do in this case).
This is no different from existing game engines, GUI toolkits, etc. adding support for new hardware, except that in your solution you have to reimplement the entire graphics stack for every device, while in the current solution you only need to tweak a handful of libraries, and that can happen without the support of the OS.
Brendan wrote:There should be no need for software to know or care if the video system uses multiple monitors or not, or 2D or stereoscopic or "true 3D" or VR helmet; because the OS provides adequate abstractions that hide these "irrelevant to applications" details.
This is, again, already the case with existing libraries. You just have the irrational idea that anything not in the OS is a hassle to use, or is The Wrong Choice (tm) because developers have to "reach for it" somehow. Libraries are already completely adequate.
Brendan wrote:I hope you're right (and hope that my OS will be the first and only OS that's capable of doing this, and that when I need to promote my OS I'll be able to laugh at how crippled and crappy other OSs are in comparison for at least 5+ years while other OSs redesign their short-sighted interfaces).
You can't have it both ways- either it has been done and is thus a valid argument for its feasibility, or it hasn't been done because current hardware can't do it for latency-sensitive applications.