
Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 7:19 am
by Brendan
Hi,
Rusky wrote:
Brendan wrote:The important thing is that if/when "true 3D" monitor support is added, no normal software (widgets, applications, games, GUIs) will care and all normal software will still work perfectly on "true 3D" displays without any change at all. I won't have to redesign the video interface that normal software uses and then modify every piece of existing software (which is exactly what existing systems using "too low level" video interfaces will be forced to do in this case).
This is no different from existing game engines, GUI toolkits, etc. adding support for new hardware, except that in your solution you have to reimplement the entire graphics stack for every device, and in the current solution you only need to tweak a handful of libraries, and that can happen without the support of the OS.
For significant changes (e.g. when "true 3D" arrives) it's impossible for existing "too low level" APIs to support it; which means it's impossible for game engines to support it, impossible for games to be modified to support it, and impossible for old games to work without these impossible modifications. In both cases (existing APIs or mine) it will require new video cards (for the new connectors/signals/whatever that "true 3D" requires) and therefore new video drivers are required (and this is completely unavoidable).

I have no idea how you've managed to conclude that "impossible" is easier than "required anyway".

Now consider what happens when everything is using this (widgets, text editors, calculators, spreadsheets, GUIs, ...) and not just games, and all of the software that has ever been written for the OS has to be updated (because it's not something that only affects new video drivers that don't exist yet).
Rusky wrote:
Brendan wrote:There should be no need for software to know or care if the video system uses multiple monitors or not, or 2D or stereoscopic or "true 3D" or VR helmet; because the OS provides adequate abstractions that hide these "irrelevant to applications" details.
This is, again, already the case with existing libraries. You just have the irrational idea that anything not in the OS is a hassle to use, or is The Wrong Choice (tm) because developers have to "reach for it" somehow. Libraries are already completely adequate.
You think all existing games are going to work seamlessly when (e.g.) you unplug your single monitor and replace it with a 3*3 grid of monitors?

In theory, it might be possible for libraries to hide the fact that the "too low level" API was designed by short-sighted/incompetent fools. In practice, for most games I can't even "alt+tab" to a blank desktop and back without the video getting trashed.
Rusky wrote:
Brendan wrote:I hope you're right (and hope that my OS will be the first and only OS that's capable of doing this, and that when I need to promote my OS I'll be able to laugh at how crippled and crappy other OSs are in comparison for at least 5+ years while other OSs redesign their short-sighted interfaces).
You can't have it both ways- either it has been done and is thus a valid argument for its feasibility, or it hasn't been done because current hardware can't do it for latency-sensitive applications.
I can have it both ways - it has been "partially done" enough to show that it's perfectly feasible (despite the fact that, for the research I saw, they were doing incredibly stupid things like redrawing everything every frame and sending pixel data across the network, likely because they were relying on an existing crippled "too low level" interface); and OSs aren't able to do it well enough to make it worth doing for OSs intended for a "single computer" (especially when they're already cursed by an existing "too low level" interface).

Of course it's also possible to look at this from a different perspective: supporting GPUs is not practical, and generating high quality/high resolution graphics in real-time from a renderer running on a single modern Intel chip is not practical either; and using distributed rendering is far more practical than either of these options.


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 9:11 am
by AndrewAPrice
Rusky wrote:This is no different from existing game engines, GUI toolkits, etc. adding support for new hardware, except that in your solution you have to reimplement the entire graphics stack for every device, and in the current solution you only need to tweak a handful of libraries, and that can happen without the support of the OS.
I don't think that's what he's suggesting. There are likely to be many components that make up the graphics stack:

- Loading resources (reading file formats, etc.)
- Camera (projection algorithms, occlusion culling)
- Managing resources (textures, models)
- Scene management
- Animation
- Rasterization
- Colour mapping

What I'm imagining is that the OS will come with some kind of CIE XYZ -> RGB conversion algorithm, and if I plug in my monitor (and the OS doesn't detect it), then in the worst case I just have to provide something like an ICC profile file (or use sRGB) and we're done. But if said device accepts something funny like RGBY, then it might provide its own CIE XYZ -> RGBY conversion algorithm. Likewise, if some video card has a hardware accelerated way of doing so, it could plug in its own CIE XYZ -> RGB conversion algorithm. There might be a default CIE XYZ -> CMYK conversion algorithm that you can load a colour profile for your printer into, or it could be a homemade plotter where you provide a custom CIE XYZ -> Sharpie pen colour algorithm.
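For what it's worth, that fallback stage is small; here's a minimal sketch of a CIE XYZ -> sRGB conversion using the standard sRGB matrix and transfer curve (the struct and function names are just for illustration, not part of any real interface):

Code:
#include <algorithm>
#include <cmath>

struct RGB { float r, g, b; };  // encoded sRGB, each component 0..1

// Standard sRGB transfer curve (linear light -> encoded value).
static float srgb_encode(float c) {
    c = std::clamp(c, 0.0f, 1.0f);
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

// Default CIE XYZ (D65 white, Y = 1.0 = display white) -> sRGB conversion.
// A monitor with an ICC profile, an RGBY display or a printer driver could
// replace this stage with its own matrix/mapping without applications noticing.
RGB xyz_to_srgb(float X, float Y, float Z) {
    float r =  3.2406f * X - 1.5372f * Y - 0.4986f * Z;
    float g = -0.9689f * X + 1.8758f * Y + 0.0415f * Z;
    float b =  0.0557f * X - 0.2040f * Y + 1.0570f * Z;
    return { srgb_encode(r), srgb_encode(g), srgb_encode(b) };
}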

The camera code that handles the projection/field of view/etc. would be the same in 99% of use cases, but if you plug in an Oculus Rift, it might want to override those functions with its own (to handle the fact that the camera can rotate and move as the user moves their head), and insert its distortion algorithm after or as part of the colour mapping stage.

Animation (as in bone animation on 3D meshes) will mostly be done in software, unless the hardware provides some accelerated function for it, and if so, the driver can override the animation stage with its own hardware animation algorithm.
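To make the "driver injects its own stage" idea concrete, here's a rough sketch of what such a pipeline table might look like. Every name here is hypothetical (this is not a real interface); the point is only that the OS supplies defaults and a driver overwrites individual slots:

Code:
#include <functional>

struct Scene;        // whatever the OS uses to describe "what to draw"
struct FrameBuffer;  // intermediate buffer in a device-independent colour space

// Hypothetical per-output pipeline: the OS fills every slot with its generic
// software implementation; a driver replaces only the slots its hardware
// accelerates, or that its output device needs done differently.
struct PipelineStages {
    std::function<void(Scene&)> project;                       // camera/projection (a Rift driver would override this)
    std::function<void(Scene&)> animate;                       // bone animation (a GPU driver might override this)
    std::function<void(const Scene&, FrameBuffer&)> rasterize; // generic software rasteriser by default
    std::function<void(FrameBuffer&)> map_colours;             // CIE XYZ -> the device's colour space
};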

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 9:36 am
by AndrewAPrice
In my mind, I'm seeing exactly how a system like Brendan is describing could be implemented at the OS level. I don't think Rusky is necessarily disagreeing with the idea, I just think he isn't seeing how it would work yet.
Brendan wrote:You think all existing games are going to work seamlessly when (e.g.) you unplug your single monitor and replace it with a 3*3 grid of monitors?

In theory, it might be possible for libraries to hide the fact that the "too low level" API was designed by short-sighted/incompetent fools. In practice, for most games I can't even "alt+tab" to a blank desktop and back without the video getting trashed.
I'm with you on that. Modern game engines automatically reload resources when they are lost (such as ALT+TAB between full screen applications, changing the monitor resolution, etc.), but many games that are directly built on Direct3D/OpenGL can't even handle resizing the window (you know what I mean - those games that say "Restart the game for these settings to be applied.") This is pretty much boilerplate code that every game/application should handle correctly and in the same way, and so what is the reason for not moving this to the OS level?
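For comparison, this is roughly the "device lost" boilerplate a Direct3D 9 era game had to carry by hand; the two Release/Recreate helpers stand in for whatever the game allocated in D3DPOOL_DEFAULT (their names are mine, not part of any API):

Code:
#include <d3d9.h>

// Hypothetical helpers: free and rebuild everything the game put in D3DPOOL_DEFAULT.
void ReleaseDefaultPoolResources();
void RecreateDefaultPoolResources();

// Called once per frame before drawing; returns true if it is safe to render.
bool HandleDeviceLoss(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS& pp) {
    HRESULT hr = device->TestCooperativeLevel();
    if (hr == D3D_OK)
        return true;                    // device is fine
    if (hr == D3DERR_DEVICELOST)
        return false;                   // still lost (e.g. alt+tabbed away); skip this frame
    if (hr == D3DERR_DEVICENOTRESET) {
        ReleaseDefaultPoolResources();  // default-pool resources must be freed before Reset()
        if (SUCCEEDED(device->Reset(&pp)))
            RecreateDefaultPoolResources();
    }
    return false;
}

Every application built directly on Direct3D 9 ended up with some variant of this (and had to get the ordering right), which is exactly the kind of boilerplate that could live in one place instead.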

I was bringing up Unity earlier because it handles stuff every game engine should handle (scene management and occlusion culling, adjusting quality for performance, shadows and lighting, animations, an advanced material system, camera projection, networking, a full physics system), while also letting you get low-level enough to draw objects dynamically or do your own scene management if you want. Someone even wrote a ray tracer. You can ignore the physics system and just use it for detecting collisions while writing your own movement code, or you can ignore it altogether and do it all yourself. It's incredibly flexible, and I haven't encountered a use case where I've found using Unity to be a burden - photo-realistic games, cartoony games, scientific visualization.

@Everyone not Brendan: So what is wrong with moving all of that boilerplate stuff into the operating system itself? This doesn't mean that every device driver has to reimplement the entire graphics stack, no, it means that device drivers can inject hardware accelerated functions or special functions (like their own colour conversion algorithm) where it's appropriate.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 12:20 pm
by embryo2
MessiahAndrw wrote:@Everyone not Brendan: So what is wrong with moving all of that boilerplate stuff into the operating system itself?
It denies us diversity. Linux was a choice made for diversity; the Mac was too. But you can dream about a new Windows, with you personally as the new Bill Gates. It's an exciting dream, isn't it? It's really cool to decide how people will do things. The only problem is that there can only be a few Bills, while the candidates number in the millions.

A system should be diverse, extensible and configurable, regardless of any Bill Gates.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 12:49 pm
by Rusky
MessiahAndrw wrote:I don't think that's what he's suggesting. There are likely to be many components that make up the graphics stack:
So far Brendan has not contradicted my characterizations of his system as reimplementing everything in every video driver, and has justified it with phrasing like "in my system, writing video drivers will be hard but writing games will be easy." Further, he has denied that he will use libraries of any kind whatsoever, and denied that he will split the video driver into multiple services. If you're correct, I'd like to see how exactly he plans to do this.
Brendan wrote:For significant changes (e.g. when "true 3D" arrives) it's impossible for existing "too low level" APIs to support it
New devices need new drivers, and some drivers need new interfaces. The point is to make the drivers as minimal as possible to make writing them easier. Then, you make the higher-level interfaces support them both the same way the block device system supports local disks, RAID, RAM disks, and networked storage.

This amounts to the same type of work as your system, but with much less duplication. It also allows everything to continue to work, the same way existing game engines can support both OpenGL and DirectX (though that split is unfortunate because they're not really that different). At that point, the decision of whether to "bless" one game engine as "part of the OS" doesn't matter nearly as much.
Brendan wrote:You think all existing games are going to work seamlessly when (e.g.) you unplug your single monitor and replace it with a 3*3 grid of monitors? ... In practice, for most games I can't even "alt+tab" to a blank desktop and back without the video getting trashed.
These problems are the fault of existing implementations, not of low-level interfaces in general. You could even provide a multi-monitor wrapper that provides the same low-level interface, not that I'd want to run a game on multiple monitors.
MessiahAndrw wrote:@Everyone not Brendan: So what is wrong with moving all of that boilerplate stuff into the operating system itself? This doesn't mean that every device driver has to reimplement the entire graphics stack, no, it means that device drivers can inject hardware accelerated functions or special functions (like their own colour conversion algorithm) where it's appropriate.
The main things that bother me are 1) Brendan seems to want to reimplement this all for every driver (maybe wrong), and more importantly 2) Brendan doesn't seem to want it to be possible to bypass the "boilerplate" when you need to for convenience/performance/capability, for the dubious goal of dynamically modifying games' rendering "quality" on different hardware. Injecting hardware acceleration is already possible without baking it into the OS (see the current situation with video codecs and hardware acceleration).

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 1:08 pm
by AndrewAPrice
embryo2 wrote:
MessiahAndrw wrote:@Everyone not Brendan: So what is wrong with moving all of that boilerplate stuff into the operating system itself?
It denies us diversity. Linux was a choice made for diversity; the Mac was too. But you can dream about a new Windows, with you personally as the new Bill Gates. It's an exciting dream, isn't it? It's really cool to decide how people will do things. The only problem is that there can only be a few Bills, while the candidates number in the millions.

A system should be diverse, extensible and configurable, regardless of any Bill Gates.
What the heck... We're talking about the design of our hobby OSs here.

At some point in any system, design choices were made, even in a system that millions of people use. Otherwise nothing would ever get done in the world. The best thing to do is to come up with use cases where your system won't work and try to design it to minimize those.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 1:22 pm
by AndrewAPrice
Rusky wrote:
MessiahAndrw wrote:@Everyone not Brendan: So what is wrong with moving all of that boilerplate stuff into the operating system itself? This doesn't mean that every device driver has to reimplement the entire graphics stack, no, it means that device drivers can inject hardware accelerated functions or special functions (like their own colour conversion algorithm) where it's appropriate.
The main things that bother me are 1) Brendan seems to want to reimplement this all for every driver (maybe wrong), and more importantly 2) Brendan doesn't seem to want it to be possible to bypass the "boilerplate" when you need to for convenience/performance/capability, for the dubious goal of dynamically modifying games' rendering "quality" on different hardware. Injecting hardware acceleration is already possible without baking it into the OS (see the current situation with video codecs and hardware acceleration).
I agree with you there, Rusky. Within reason, your system needs a way to at least plug in your own components (or override certain parts of the existing components) when the standard components that come with the system don't work for your needs out of the box.

As an example, my game Nullify was made in Unity, where the world is represented as an infinitely wrapping 3D grid. I had to do my own camera matrix trickery to give it the illusion that the world seamlessly wraps forever. The default scene manager works for the vast majority of use cases, and mine was an uncommon use case, but the engine was flexible enough to not stop me from doing what I wanted.

It was still easier for me to use Unity, because I still wanted the rest of the features (graphical effects, audio, input, UI, platform portability, etc.) - I didn't want to throw out all of that because of one little thing. You should never be cornered into a situation where you have to throw the baby out with the bathwater, just because one feature isn't what you want.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 2:13 pm
by Brendan
Hi,
Rusky wrote:
MessiahAndrw wrote:I don't think that's what he's suggesting. There are likely to be many components that make up the graphics stack:
So far Brendan has not contradicted my characterizations of his system as reimplementing everything in every video driver, and has justified it with phrasing like "in my system, writing video drivers will be hard but writing games will be easy." Further, he has denied that he will use libraries of any kind whatsoever, and denied that he will split the video driver into multiple services. If you're correct, I'd like to see how exactly he plans to do this.
I'm fairly sure I mentioned somewhere that (at least initially) video drivers would just use the OS's "software renderer service" (an external process) instead of having their own. Of course if/when GPUs are involved you wouldn't want to use a generic software renderer for all rendering and would want a renderer designed for that GPU.

More specifically; I imagine native drivers being implemented in stages (a rough sketch of the resulting driver interface follows the list), where developers take the "generic frame buffer" driver (that has no renderer of its own) and:
  • add support for video mode switching and release it (without implementing a renderer)
  • then add support for other stuff ("VRAM as swap space", proper support for vertical sync and page flipping, etc) and release it (without implementing a renderer)
  • then add support for bit-blits and more other stuff and release it (without implementing a renderer)
  • then add support for doing the "XYZ to monitor's colour space" conversions on the GPU and release it (without implementing a renderer)
  • then add a renderer capable of using the GPU and release it
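As a rough sketch (all names are hypothetical, purely to illustrate the staging): the driver could register a capability table where each later stage fills in another entry, and anything left empty falls back to the generic code or the shared software renderer service.

Code:
#include <cstdint>

struct VideoMode { uint32_t width, height, bits_per_primary; };

// Hypothetical table a native video driver registers with the OS. A stage-1
// driver provides only set_mode; each later release fills in more entries,
// and any null entry is handled generically (e.g. rendering falls back to
// the OS's software renderer service).
struct VideoDriverOps {
    bool (*set_mode)(const VideoMode&);                    // stage 1: mode switching
    bool (*wait_vsync_and_flip)();                         // stage 2: vertical sync + page flipping
    bool (*blit)(uint32_t src_offset, uint32_t dst_offset,
                 uint32_t width, uint32_t height);         // stage 3: hardware bit-blits
    bool (*convert_to_device_colours)(void* frame);        // stage 4: "XYZ to monitor" conversion on the GPU
    bool (*render_scene)(const void* scene, void* frame);  // stage 5: full GPU renderer
};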
Rusky wrote:
Brendan wrote:For significant changes (e.g. when "true 3D" arrives) it's impossible for existing "too low level" APIs to support it
New devices need new drivers, and some drivers need new interfaces. The point is to make the drivers as minimal as possible to make writing them easier. Then, you make the higher-level interfaces support them both the same way the block device system supports local disks, RAID, RAM disks, and networked storage.
"As minimal as possible" means as minimal as possible to cover all storage devices, or as minimal as possible to cover all networking devices, or as minimal as possible to cover all video devices. I do not want "more minimal than possible", and I do not want multiple different messaging protocols (e.g. one for 2D displays, one for "true 3D" displays, one for "fixed function pipeline and software rendering", one for "shaders", etc).
Rusky wrote:
Brendan wrote:You think all existing games are going to work seamlessly when (e.g.) you unplug your single monitor and replace it with a 3*3 grid of monitors? ... In practice, for most games I can't even "alt+tab" to a blank desktop and back without the video getting trashed.
These problems are the fault of existing implementations, not of low-level interfaces in general. You could even provide a multi-monitor wrapper that provides the same low-level interface, not that I'd want to run a game on multiple monitors.
Sure, and in the same way millions of integer overflow bugs are the fault of millions of programmers, and current programming languages that make it extremely hard to detect (or prevent) integer overflows aren't the problem at all.

If a significant number of existing implementations get it wrong (and have been getting it wrong for about 15 years now), that means it's unacceptably difficult for professional game developers to get it right (and extremely inappropriate for people who are not professional game developers and are just writing simple text editors, widgets, etc; who I will be forcing to use the same interface).


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 2:28 pm
by Brendan
Hi,
MessiahAndrw wrote:
Rusky wrote:
MessiahAndrw wrote:@Everyone not Brendan: So what is wrong with moving all of that boilerplate stuff into the operating system itself? This doesn't mean that every device driver has to reimplement the entire graphics stack, no, it means that device drivers can inject hardware accelerated functions or special functions (like their own colour conversion algorithm) where it's appropriate.
The main things that bother me are 1) Brendan seems to want to reimplement this all for every driver (maybe wrong), and more importantly 2) Brendan doesn't seem to want it to be possible to bypass the "boilerplate" when you need to for convenience/performance/capability, for the dubious goal of dynamically modifying games' rendering "quality" on different hardware. Injecting hardware acceleration is already possible without baking it into the OS (see the current situation with video codecs and hardware acceleration).
I agree with you there, Rusky. Within reason, your system needs a way to at least plug in your own components (or override certain parts of the existing components) when the standard components that come with the system don't work for your needs out of the box.

As an example, my game Nullify was made in Unity, where the world is represented as an infinitely wrapping 3D grid. I had to do my own camera matrix trickery to give it the illusion that the world seamlessly wraps forever. The default scene manager works for the vast majority of use cases, and mine was an uncommon use case, but the engine was flexible enough to not stop me from doing what I wanted.
I don't think you should under-estimate people's ability to find alternative ways to achieve the same result.

For example; to create a wrapping world; maybe you can just define the world as an object, and then create a "9*9 grid container object" (or whatever size is necessary to cover the worst case view distance) and insert the same "world object" at each spot in the grid (and ensure the camera is always within the centre object of the grid).
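A minimal sketch of that workaround (purely illustrative; it assumes the world wraps in x and z with period world_size): wrap the camera into the centre copy, then hand the renderer the same world object once per grid offset.

Code:
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// radius 1 gives a 3*3 grid, radius 4 gives the 9*9 grid mentioned above.
std::vector<Vec3> wrapping_world_offsets(Vec3& camera, float world_size, int radius) {
    // Keep the camera inside the centre copy of the world.
    camera.x = std::fmod(std::fmod(camera.x, world_size) + world_size, world_size);
    camera.z = std::fmod(std::fmod(camera.z, world_size) + world_size, world_size);

    std::vector<Vec3> offsets;
    for (int i = -radius; i <= radius; ++i)
        for (int j = -radius; j <= radius; ++j)
            offsets.push_back({ i * world_size, 0.0f, j * world_size });
    return offsets;  // insert/draw the same world object once per offset
}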


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 31, 2015 7:16 pm
by Rusky
Brendan wrote:More specifically; I imagine native drivers being implemented in stages, where developers take the "generic frame buffer" driver (that has no renderer of its own) and ...
By "take the 'generic frame buffer' driver and [add stuff]," do you mean "copy and paste the 'generic frame buffer' driver and edit it," or do you mean "write a new system and delegate generic functionality to the generic driver"?

By "add a renderer capable of using the GPU" do you mean "completely rewrite the entire renderer, including transformations, projections, a scene graph, animation, and hardware control" or do you mean "implement the hardware pipeline so existing code for transformations, projections, a scene graph, and animation can be reused"?

Because no matter how many hardware-specific optimizations you add, there will always be a significant amount of functionality shared between large numbers of hardware configurations, and there will always be a relatively-small-but-significant amount of code that needs to bypass this shared code.
Brendan wrote:If a significant number of existing implementations get it wrong (and have been getting it wrong for about 15 years now), that means it's unacceptably difficult for professional game developers to get it right.
It's not the large number of game engines that all got it wrong- it's the small number (i.e. 2) of low-level interfaces that all got it wrong.
Brendan wrote:(and extremely inappropriate for people who are not professional game developers and are just writing simple text editors, widgets, etc; who I will be forcing to use the same interface)
So... no GUI-specific libraries? Every application has to recreate its own widgets?
Brendan wrote:I don't think you should under-estimate people's ability to find alternative ways to achieve the same result.

For example; to create a wrapping world; maybe you can just define the world as an object, and then create a "9*9 grid container object" (or whatever size is necessary to cover the worst case view distance) and insert the same "world object" at each spot in the grid (and ensure the camera is always within the centre object of the grid).
Certainly workarounds like this are possible, but at this point you're no longer making things easier for game developers - you're creating mountains of hassles (and inefficiency that can't be fixed by your renderer) where a lower-level interface would leave only molehills.

Re: Concise Way to Describe Colour Spaces

Posted: Sat Aug 01, 2015 2:01 am
by Brendan
Hi,
Rusky wrote:
Brendan wrote:More specifically; I imagine native drivers being implemented in stages, where developers take the "generic frame buffer" driver (that has no renderer of its own) and ...
By "take the 'generic frame buffer' driver and [add stuff]," do you mean "copy and paste the 'generic frame buffer' driver and edit it," or do you mean "write a new system and delegate generic functionality to the generic driver"?

By "add a renderer capable of using the GPU" do you mean "completely rewrite the entire renderer, including transformations, projections, a scene graph, animation, and hardware control" or do you mean "implement the hardware pipeline so existing code for transformations, projections, a scene graph, and animation can be reused"?
I mean copy the existing driver's code and modify it (in stages) in whatever way is necessary to ensure it's as optimal as possible for the specific card's unique characteristics.
Rusky wrote:Because no matter how many hardware-specific optimizations you add, there will always be a significant amount of functionality shared between large numbers of hardware configurations, and there will always be a relatively-small-but-significant amount of code that needs to bypass this shared code.
You're over-exaggerating (trying to pretend the differences between different generations and different manufacturers are mostly superficial, and that (e.g.) Intel's adapters are 99% compatible with NVidia's at the hardware level and are capable of executing the exact same "GPU machine code").

If different video cards are similar enough you'd just support both with the same driver (e.g. one driver for all "ATI R9" video cards). There will be some code duplication, but it's stupid to avoid all code duplication (e.g. have a library that adds 2 integers because integer addition is duplicated everywhere), and having some code duplication is better than having dependencies that cripple "whole process optimisation" and make it impossible for anyone to take responsibility for the stability and security of the work they've supplied.
Rusky wrote:
Brendan wrote:If a significant number of existing implementations get it wrong (and have been getting it wrong for about 15 years now), that means it's unacceptably difficult for professional game developers to get it right.
It's not the large number of game engines that all got it wrong- it's the small number (i.e. 2) of low-level interfaces that all got it wrong.
No; I've been seeing the same/similar problems for 15 years across many versions of DirectX, many video drivers and many games. This includes the "alt+tab causes black screen" problem, the "alt+tab causes double mouse pointer" problem, the "semi-transparent texture in foreground makes semi-transparent textures in background invisible" problem, the "game crashed and Windows put a dialog box on desktop that's impossible to see or interact with because DirectX is still showing crashed game" problem, and the "game crashed and the only way to get the OS to respond is to reboot" problem.
Rusky wrote:
Brendan wrote:(and extremely inappropriate for people who are not professional game developers and are just writing simple text editors, widgets, etc; who I will be forcing to use the same interface)
So... no GUI-specific libraries? Every application has to recreate its own widgets?
Widgets are implemented in a "widget service" that communicates with the application (mostly using the exact same messaging protocol that GUIs use to talk to video drivers); which means that if you feel like having a full screen widget (with no application and no GUI) then that's fine (widget can talk directly to video driver); and if you feel like pretending a GUI is a widget in your application's toolbar then that's fine too; because it's all using the same messaging protocol. Note: "fine" doesn't necessarily mean it makes sense, only that it'll work how you'd expect without compatibility problems.
Rusky wrote:
Brendan wrote:I don't think you should under-estimate people's ability to find alternative ways to achieve the same result.

For example; to create a wrapping world; maybe you can just define the world as an object, and then create a "9*9 grid container object" (or whatever size is necessary to cover the worst case view distance) and insert the same "world object" at each spot in the grid (and ensure the camera is always within the centre object of the grid).
Certainly workarounds like this are possible, but at this point you're no longer making things easier for game developers - you're creating mountains of hassles (and inefficiency that can't be fixed by your renderer) where a lower-level interface would leave only molehills.
Show me some code that draws one blue triangle spinning slowly in 3D on a black background; that uses DirectX alone (and not a massive game engine and/or a bunch of libraries, unless you want to show all the code for the game engine and libraries too); which:
  • works correctly for "flat 2D screen" and stereoscopic
  • works correctly when stretched across multiple monitors connected to completely different video cards (e.g. one AMD and one Intel)
  • will balance "rendering load" across all GPUs and CPUs in the same computer
  • works in a window and full screen
  • uses "shaders" if present but works on ancient "fixed function pipeline" too
  • avoids all the problems I've been seeing for 15 years (mentioned above) in all cases
  • sends a screenshot of the triangle to a printer when you press the F6 key
  • starts/stops recording the triangle on disk as a movie file when you press the F7 key
  • can be used as a widget in an application's toolbar
  • can replace the default Windows' GUI
My guess is that you're not going to achieve this with less than a million lines of code; while for my "mountains of hassles" system an application that does all of this is going to be around 100 lines of code.


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Sat Aug 01, 2015 2:35 am
by bluemoon
Brendan wrote:Mostly I'm thinking of using 32-bits per primary, where the normal range is 0 to 65535 (and 0xFFFFFFFF is about 65 thousand times brighter than monitors can display). This gives me 16-bits per primary after doing the "auto-iris" thing, which I think will be enough to cover the extra bits needed to compensate for gamma and imaginary colours (but is small enough to use lookup tables for gamma).
I have not read the whole thread, but have you considered using (or supporting) floating point for color representation? (Ultra) high quality rendering already makes use of 32-bit fp as the internal representation. The obvious benefit is that the "color range" does not change, i.e. it remains 0-1, when you increase the precision to, say, 48 bits.

Re: Concise Way to Describe Colour Spaces

Posted: Sat Aug 01, 2015 4:01 am
by Brendan
Hi,
bluemoon wrote:
Brendan wrote:Mostly I'm thinking of using 32-bits per primary, where the normal range is 0 to 65535 (and 0xFFFFFFFF is about 65 thousand times brighter than monitors can display). This gives me 16-bits per primary after doing the "auto-iris" thing, which I think will be enough to cover the extra bits needed to compensate for gamma and imaginary colours (but is small enough to use lookup tables for gamma).
I have not read the whole thread, but have you considered using (or supporting) floating point for color representation? (Ultra) high quality rendering already makes use of 32-bit fp as the internal representation. The obvious benefit is that the "color range" does not change, i.e. it remains 0-1, when you increase the precision to, say, 48 bits.
At the moment I mostly only care about getting the interfaces between pieces right, and implementing crappy code for 80486SX CPUs (the OS's absolute minimum requirements on 80x86) where there isn't any FPU.

Eventually the initial software renderer will be replaced by a second software renderer (written in my language and compiled by my tools) that can/will use things like FPU/MMX/SSE/AVX (if the target CPU supports it). However, even in that case I'm not sure using floating point would help - compared to fixed point/integers using the same number of bits for values within a specific range, floating point gives worse precision over the majority of the range (everything except small values).
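To make the fixed-point side of that concrete, here's a sketch of the "auto-iris" step described earlier, assuming exposure is applied as a simple per-frame right shift (the shift is my assumption, not a stated part of the design): everything stays integer, and the result is small enough to index a 65536-entry gamma lookup table.

Code:
#include <algorithm>
#include <cstdint>

// Each primary is 32-bit fixed point where 65535 means "full monitor brightness"
// and larger values are brighter-than-displayable. A per-frame exposure shift
// (picked from the brightest value in the scene) darkens the whole frame into
// the displayable 16-bit range, ready for the gamma lookup table.
uint16_t auto_iris(uint32_t primary, unsigned exposure_shift) {
    uint32_t scaled = primary >> exposure_shift;
    return static_cast<uint16_t>(std::min<uint32_t>(scaled, 65535u));
}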


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Sat Aug 01, 2015 1:37 pm
by Rusky
Brendan wrote:I mean copy the existing driver's code and modify it (in stages) in whatever way is necessary to ensure it's as optimal as possible for the specific card's unique characteristics.
...
You're over-exaggerating (trying to pretend the differences between different generations and different manufacturers are mostly superficial, and that (e.g.) Intel's adapters are 99% compatible with NVidia's at the hardware level and are capable of executing the exact same "GPU machine code").

If different video cards are similar enough you'd just support both with the same driver (e.g. one driver for all "ATI R9" video cards). There will be some code duplication, but it's stupid to avoid all code duplication (e.g. have a library that adds 2 integers because integer addition is duplicated everywhere), and having some code duplication is better than having dependencies that cripple "whole process optimisation" and make it impossible for anyone to take responsibility for the stability and security of the work they've supplied.
You misunderstand- I'm not exaggerating anything, just describing existing code. Of course Intel and NVidia are drastically different and of course you can use the same driver for all cards in a series, this is already done.

But you don't just want your drivers to handle the hardware interface itself, you want them to include a full rendering engine on top, and that is where you are introducing massive amounts of pointless duplication. There may well be hardware-specific optimizations that your system enables, but the vast majority of the renderer doesn't have to do anything differently, and even when one driver can optimize that doesn't mean there won't be others that can share generic code.

There is no reason you can't share that common code, either. You could move it into a wrapper service/library that speaks a lower-level API to the driver. Or, you could move it into a helper service/library that the video driver talks to, if you don't like having a lower-level API even internally to the OS. Either way, the fact that you have a generic renderer to copy from in the first place should be evidence enough that you would benefit from something like this.
Brendan wrote:No; I've been seeing the same/similar problems for 15 years across many versions of DirectX, many video drivers and many games. This includes the "alt+tab causes black screen" problem, the "alt+tab causes double mouse pointer" problem, the "semi-transparent texture in foreground makes semi-transparent textures in background invisible" problem, the "game crashed and Windows put a dialog box on desktop that's impossible to see or interact with because DirectX is still showing crashed game" problem, and the "game crashed and the only way to get the OS to respond is to reboot" problem.
Again, these problems are ultimately the fault of the particular interface design used by DirectX and OpenGL. A low-level interface does not need to lose the graphics context state on alt+tab, nor does a low-level interface need to prevent the OS from taking control to show a dialog box. If DirectX/OpenGL handled those situations better, all of those problems would go away with no changes to the applications. As far as I'm aware, it would even be a backwards compatible change, and applications would just never be aware that they had been alt+tabbed from.
Brendan wrote:Show me some code that draws one blue triangle spinning slowly in 3D on a black background; that uses DirectX alone (and not a massive game engine and/or a bunch of libraries, unless you want to show all the code for the game engine and libraries too); which:
  • works correctly for "flat 2D screen" and stereoscopic
  • works correctly when stretched across multiple monitors connected to completely different video cards (e.g. one AMD and one Intel)
  • will balance "rendering load" across all GPUs and CPUs in the same computer
  • works in a window and full screen
  • uses "shaders" if present but works on ancient "fixed function pipeline" too
  • avoids all the problems I've been seeing for 15 years (mentioned above) in all cases
  • sends a screenshot of the triangle to a printer when you press the F6 key
  • starts/stops recording the triangle on disk as a movie file when you press the F7 key
  • can be used as a widget in an application's toolbar
  • can replace the default Windows' GUI
My guess is that you're not going to achieve this with less than a million lines of code; while for my "mountains of hassles" system an application that does all of this is going to be around 100 lines of code.
You're comparing apples to oranges. If you're going to include engine/library LoC, then you'd better also include all the mostly-duplicated LoC for the higher-level parts of all your video drivers. Further, if you're going to include requirements like "can be used as a widget" or "can replace the default GUI" then you'd better not compare to DirectX, but rather to my just-as-hypothetical-as-yours system that uses a better low-level interface.

In this hypothetical system of mine, the application itself takes about the same amount of code as yours, but the drivers and engine take much less. The differences:
  • It uses a library instead of the higher-level part of the video driver, and can thus bypass parts of the renderer.
  • The library talks to an API that exposes virtual displays through a shader-based rasterization pipeline, with the ability to manage VRAM and build command buffers in parallel.
  • The API also provides information about the display, which could be a window, a full monitor, several monitors, VR goggles, a stereoscopic display, a holographic display, etc. This allows the library or application to take advantage of this knowledge, but does not require its use:
    • VR can be handled automatically with a post-processing effect much like a compositing window manager's.
    • Stereoscopy can be handled automatically by modifying the vertex shader's output in clip space, before perspective division (has been done by NVidia 3D Vision; see the sketch after this list).
    • Holography can be handled automatically by replacing perspective division with a set of orthographic x-shears and tweaking the pixel shader's semantics, much like supersampling (has been done here), as long as the API requests some camera information (minimally projection type and near/far plane) like I mentioned earlier.
  • The concept of virtual displays doesn't make the actual API look any different, it just works differently under the covers across virtual displays when allocating buffers, distributing shaders to compute resources, etc. This would be partially implemented in the API, a layer above the drivers themselves.
  • The API would not lose virtual displays' graphics contexts, transparently fixing all your alt+tab problems.
  • Screenshots and video recording are provided by the OS video system, by recording the output buffers from the application.
  • Applications can be used as widgets or as a replacement GUI by giving them the proper virtual display when running them.
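As promised above, a sketch of the clip-space stereoscopy trick (roughly what NVidia 3D Vision does): each eye's image shifts x by an amount proportional to depth (w), so geometry at the convergence distance stays put while nearer and farther geometry separates. "separation" and "convergence" would be per-display parameters supplied by the driver; the names are mine.

Code:
struct Clip { float x, y, z, w; };  // clip-space position from the vertex shader

// Perspective division by w happens afterwards, unchanged.
Clip stereo_offset(Clip pos, float eye /* -1 = left, +1 = right */,
                   float separation, float convergence) {
    pos.x += eye * separation * (pos.w - convergence);
    return pos;
}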
Now that we've established what exactly low-level and high-level APIs can and can't do, we can discuss their actual pros and cons:
  • Your API can more easily change rendering techniques generically to "improve" games, which you see as an advantage but I see as a disadvantage.
  • Your API can run more applications on fixed-function hardware, which you see as an advantage but I see as useless, especially by the time either of these hypothetical systems actually exist.
  • My API allows rendering code to be shared between drivers as well as bypassed by applications, which I see as an advantage but you see as unnecessary and harmful to compatibility with fixed-function hardware.

Re: Concise Way to Describe Colour Spaces

Posted: Sat Aug 01, 2015 3:13 pm
by Brendan
Hi,
Rusky wrote:
Brendan wrote:I mean copy the existing driver's code and modify it (in stages) in whatever way is necessary to ensure it's as optimal as possible for the specific card's unique characteristics.
...
You're over-exaggerating (trying to pretend the differences between different generations and different manufacturers are mostly superficial, and that (e.g.) Intel's adapters are 99% compatible with NVidia's at the hardware level and are capable of executing the exact same "GPU machine code").

If different video cards are similar enough you'd just support both with the same driver (e.g. one driver for all "ATI R9" video cards). There will be some code duplication, but it's stupid to avoid all code duplication (e.g. have a library that adds 2 integers because integer addition is duplicated everywhere), and having some code duplication is better than having dependencies that cripple "whole process optimisation" and make it impossible for anyone to take responsibility for the stability and security of the work they've supplied.
You misunderstand- I'm not exaggerating anything, just describing existing code. Of course Intel and NVidia are drastically different and of course you can use the same driver for all cards in a series, this is already done.

But you don't just want your drivers to handle the hardware interface itself, you want them to include a full rendering engine on top, and that is where you are introducing massive amounts of pointless duplication. There may well be hardware-specific optimizations that your system enables, but the vast majority of the renderer doesn't have to do anything differently, and even when one driver can optimize that doesn't mean there won't be others that can share generic code.
I would've assumed the entire point of implementing a renderer within a video driver would be to make use of the GPU for rendering, which implies using that specific GPU's "GPU machine language" for the renderer.

There is (potentially, in theory) an alternative though: add support for the specific video card's "GPU machine language" to my "portable byte code to native" compiler, then compile a generic renderer with that specific GPU as target. To be perfectly honest, I'm not too sure how this would work, and even if I were planning to do this I'd implement "something" (e.g. a renderer) for the GPU first, just to make sure I know enough about it to write a compiler back-end.
Rusky wrote:There is no reason you can't share that common code, either. You could move it into a wrapper service/library that speaks a lower-level API to the driver. Or, you could move it into a helper service/library that the video driver talks to, if you don't like having a lower-level API even internally to the OS. Either way, the fact that you have a generic renderer to copy from in the first place should be evidence enough that you would benefit from something like this.
I think you're assuming that some sort of "portable byte code to GPU specific machine code" compiler exists. It doesn't. Someone would have to write it first.
Rusky wrote:
Brendan wrote:No; I've been seeing the same/similar problems for 15 years across many versions of DirectX, many video drivers and many games. This includes the "alt+tab causes black screen" problem, the "alt+tab causes double mouse pointer" problem, the "semi-transparent texture in foreground makes semi-transparent textures in background invisible" problem, the "game crashed and Windows put a dialog box on desktop that's impossible to see or interact with because DirectX is still showing crashed game" problem, and the "game crashed and the only way to get the OS to respond is to reboot" problem.
Again, these problems are ultimately the fault of the particular interface design used by DirectX and OpenGL. A low-level interface does not need to lose the graphics context state on alt+tab, nor does a low-level interface need to prevent the OS from taking control to show a dialog box. If DirectX/OpenGL handled those situations better, all of those problems would go away with no changes to the applications. As far as I'm aware, it would even be a backwards compatible change, and applications would just never be aware that they had been alt+tabbed from.
Maybe - I'm no expert on the DirectX or modern OpenGL APIs.
Rusky wrote:
Brendan wrote:Show me some code that draws one blue triangle spinning slowly in 3D on a black background; that uses DirectX alone (and not a massive game engine and/or a bunch of libraries, unless you want to show all the code for the game engine and libraries too); which:
  • works correctly for "flat 2D screen" and stereoscopic
  • works correctly when stretched across multiple monitors connected to completely different video cards (e.g. one AMD and one Intel)
  • will balance "rendering load" across all GPUs and CPUs in the same computer
  • works in a window and full screen
  • uses "shaders" if present but works on ancient "fixed function pipeline" too
  • avoids all the problems I've been seeing for 15 years (mentioned above) in all cases
  • sends a screenshot of the triangle to a printer when you press the F6 key
  • starts/stops recording the triangle on disk as a movie file when you press the F7 key
  • can be used as a widget in an application's toolbar
  • can replace the default Windows' GUI
My guess is that you're not going to achieve this with less than a million lines of code; while for my "mountains of hassles" system an application that does all of this is going to be around 100 lines of code.
You're comparing apples to oranges. If you're going to include engine/library LoC, then you'd better also include all the mostly-duplicated LoC for the higher-level parts of all your video drivers. Further, if you're going to include requirements like "can be used as a widget" or "can replace the default GUI" then you'd better not compare to DirectX, but rather to my just-as-hypothetical-as-yours system that uses a better low-level interface.
I'm comparing "everything above the driver's abstraction" in both cases.
Rusky wrote:In this hypothetical system of mine, the application itself takes about the same amount of code as yours. The differences:
  • It uses a library instead of the higher-level part of the video driver, and can thus bypass parts of the renderer.
I have no reason to read anything beyond this. I refuse to support libraries for any reason whatsoever.

However...
Rusky wrote:Now that we've established what exactly low-level and high-level APIs can and can't do, we can discuss their actual pros and cons:
  • Your API can more easily change rendering techniques generically to "improve" games, which you see as an advantage but I see as a disadvantage.
  • Your API can run more applications on fixed-function hardware, which you see as an advantage but I see as useless, especially by the time either of these hypothetical systems actually exist.
  • My API allows rendering code to be shared as well as bypassed by applications, which I see as an advantage but you see as unnecessary and harmful to compatibility with fixed-function hardware.
And for more disadvantages:
  • If "whole process optimisation" is involved:
    • You've added a massive amount of bloat to every single process
    • You've forced a developer to wager their reputation on the quality of the library's code
    • It's not possible to upgrade/replace it later
  • If "whole process optimisation" is not involved:
    • Performance sucks due to poor optimisation
    • You've forced a developer to wager their reputation on the quality of the library's code, and that library is no longer under the developer's control
  • You've assumed processes only ever send their video, and forgotten that normal processes have to understand the video sent by arbitrary processes, and either destroyed a massive amount of flexibility or added a massive amount of hassle for all processes that have to understand the "too low level" video sent by their children.
Note: For my OS, every developer gets a key, and every process they write is digitally signed with the developer's key. By signing their code they take full responsibility for that process. If one of their processes does anything malicious the developer's key is revoked, and the OS refuses to execute any of the code they've ever written. There are no excuses.


Cheers,

Brendan