Concise Way to Describe Colour Spaces

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to check whether your question is answered in the wiki first! When in doubt, post here.
Owen
Member
Posts: 1700
Joined: Fri Jun 13, 2008 3:21 pm
Location: Cambridge, United Kingdom

Re: Concise Way to Describe Colour Spaces

Post by Owen »

Brendan wrote:Hi,
Owen wrote:
Brendan wrote:When it's time to render a scene; the format the data is in is whatever format the video driver felt like converting the data into after it loaded the data from files. The data in files (textures, meshes, whatever) may or may not be optimal, but "file format standardisation" is far more important.

Yes; for some things the application may want to provide the data itself (rather than telling the video driver which file contains the data); but that changes nothing - the video driver still converts the data into whatever format it feels like when the data is uploaded to the video driver (and not when a rendering is being done), and "message protocol standardisation" is far more important.
How does your driver know whether a texture is a suitable target for texture compression using one of DXT1, DXT3, DXT5, ETC1, ETC2, PVRTC or ASTC? Note: each of these texture formats is lossy and therefore has its own tradeoffs, and some of them will make certain things look terrible. On the other hand, if you don't use them, you'll throw away a bunch of performance (yes, reading from compressed textures is faster than uncompressed) and at a minimum quadruple your memory bandwidth and the size of every texture (from 8bpp to 32bpp).
Why make one static choice when you're able to make choices dynamically (e.g. depending on the specific video card's capabilities, and whether the texture is being used for something too far away for details to make any difference or for something very close where details are very important, and "usage history" from previous frames, and estimates of how much memory bandwidth you've got to spare, etc)?
And how are you making that dynamic choice without even knowing that the texture compresses reasonably? (Especially considering that your average texture takes around a couple of seconds of CPU time to compress, so you can't exactly try each one and then use metrics to pick)
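For concreteness, here's a minimal sketch (in C, with every name and threshold invented for illustration) of how a driver could probe compressibility cheaply: instead of compressing the whole texture, sample a handful of 4x4 blocks, quantise each onto a DXT1-style two-endpoint colour line, and measure the error. A real encoder fits endpoints far better than this min/max hack; the point is that a sampled quality estimate costs microseconds where a full compression pass costs seconds.

Code: Select all

#include <stdint.h>
#include <stdlib.h>

typedef struct { uint8_t r, g, b, a; } Pixel;

/* Mean squared error of one 4x4 block quantised to 4 colours on the
   min->max colour line - roughly what DXT1 does per block. */
static double block_mse(const Pixel *tex, int width, int bx, int by)
{
    Pixel lo = {255, 255, 255, 255}, hi = {0, 0, 0, 0};
    double err = 0.0;

    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++) {
            const Pixel *p = &tex[(by + y) * width + (bx + x)];
            if (p->r < lo.r) lo.r = p->r;
            if (p->r > hi.r) hi.r = p->r;
            if (p->g < lo.g) lo.g = p->g;
            if (p->g > hi.g) hi.g = p->g;
            if (p->b < lo.b) lo.b = p->b;
            if (p->b > hi.b) hi.b = p->b;
        }
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++) {
            const Pixel *p = &tex[(by + y) * width + (bx + x)];
            double best = 1e30;
            for (int i = 0; i < 4; i++) {  /* 4 palette entries on the line */
                double t  = i / 3.0;
                double dr = p->r - (lo.r + t * (hi.r - lo.r));
                double dg = p->g - (lo.g + t * (hi.g - lo.g));
                double db = p->b - (lo.b + t * (hi.b - lo.b));
                double d  = dr * dr + dg * dg + db * db;
                if (d < best) best = d;
            }
            err += best;
        }
    return err / 16.0;
}

/* Probe 'samples' random blocks; assumes width/height are multiples of 4.
   The 64.0 threshold (about 8 grey levels of error) is a made-up tuning
   constant, not a real driver's policy. */
int looks_compressible(const Pixel *tex, int width, int height, int samples)
{
    double total = 0.0;
    for (int i = 0; i < samples; i++) {
        int bx = (rand() % (width  / 4)) * 4;
        int by = (rand() % (height / 4)) * 4;
        total += block_mse(tex, width, bx, by);
    }
    return (total / samples) < 64.0;
}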
Brendan wrote:
Owen wrote:So, if you're abstracting anything and your driver really has no idea (because let's face it, your driver doesn't contain a human eye which can accurately identify compression aberrations), let's consider the hypothetical future situation that your OS is running on my mobile phone. You just quadrupled the size of every texture and massively dropped the texture cache residency. Also, because of this, the memory bus is running at 100% utilization and burning 1W, or 1/3rd of SoC power (yes, that really is how much power the memory interface on a SoC will burn), so now the CPU and GPU are required to downclock due to TDP limitations.
If you're running a distributed OS then it's because you've got a network of multiple computers and not a single device (and ironically; about half of the research into "distributed real time rendering" is specifically for mobile phones because they're too under-powered and have to offload some rendering to a remote machine just to get acceptable results). With this in mind I'd be very tempted to just use the mobile phone as a dumb terminal and run its GUI, apps and/or games plus most of its rendering on a server.

Of course if there isn't a suitable computer on the network (e.g. you took the mobile phone with you to lunch and you're out of wifi range) then the video driver will reduce quality to sustain frame rate and you'll probably just be seeing fuzzy grey blobs lurching around the screen for anything that changes often.
I'm pretty sure that in any situation where I'd be playing a game on my phone, I'd not be in range of any of my own machines to use as a render server (because if I was, why am I using my phone?), and given that I'm away from my network I'd rather you didn't use my precious few megabytes of mobile data sending video data back and forth (at unacceptably high latency, anyway).

Of course, my phone is currently capable of much more than lurching gray blobs.
Brendan wrote:
Owen wrote:Is that a desirable situation?
It's definitely more desirable than forcing mountains of hassles onto every application developer and forcing compatibility problems on users. Of course this doesn't mean it's a desirable situation, it's just a "least worst" situation.

To put this into perspective; think about a "Hello World" application. For plain text only, a "Hello World" application is about 3 lines of C. For 2D it gets more complicated and it becomes something like (for Windows) the ~80 lines of code shown here. For 3D with OpenGL 1.0 the complexity involved becomes massive (starting with implementing a font engine(!)) and you end up relying on libraries to cope; and for modern OpenGL (with shaders) the complexity involved doesn't get any better. Now imagine what it's going to be like for software developers (especially beginners attempting to learn) when everything is 3D from the smallest little widget all the way up.
Of course, everything is 3D from the smallest little widget up on modern platforms. Have you ever looked at how the Android or iOS UIs are implemented? On top of OpenGL (er, probably Metal in the iOS case these days). Does the developer have to know about this? Not really.

On the other hand, implementing a UI with a generalized 3D engine sounds like hell (and indeed every 3D engine I've ever worked with comes with a specialized UI library or gives you the option of several to plug in).
Brendan wrote:
Owen wrote:Is your driver capable of figuring out the correct order to draw things for a given scene? (note: there is no optimum for every scene. If I'm drawing a race track, I want to draw sequential track segments in closest-to-furthest track order for maximum occlusion. If I'm drawing a corridor shooter, then I want to use the visibility information I pre-baked at map compile time)

Does your driver have any clue whether it can pull tricks like screen-space ambient occlusion, which have no basis in reality but subjectively work? Can it pull tricks like screen-space reflection where suitable to mostly simulate reflections without having to redraw the scene twice?
Most existing renderers use "draw everything with Z buffer tests". I'm not. I'm using "overlapping triangle depth culling" where nothing is drawn until all the occlusion, lighting, etc is done and there is no Z buffer. Because it's different to existing stuff it has different advantages and different disadvantages; and requires different methods.
Your methodology sounds not too different from tile-based rendering.
Brendan wrote: The "overlapping triangles" stuff is done in closest-to-furthest order because nothing else makes sense. Lighting is done very differently and screen-space ambient occlusion isn't necessary.

Reflection is an unsolved problem - I haven't found an acceptable method yet (only "fast" methods that are dodgy/fragile/broken and give unacceptable/incorrect results, and "correct" methods that are too expensive unless you're able to throw multiple GPUs at it). My plan here is to use an expensive method that gives correct results, but is affected by the "fixed frame rate, variable quality" rule (where quality is reduced if/when there isn't enough time).
You may call tricks like screen-space reflection "broken." I'd say from experience that it's subjectively much better than no reflection at all (of course in some situations it would be broken and you'd not want it - so in those situations of course you'd want to let the artist turn it off).
Brendan wrote:Of course all of this is on the "lower level" side of the video interface's abstraction; and you can have 5 completely different renderers all (internally) using 5 completely different methods, that are all cooperating to generate (different parts of) the same frame. The exact details of any specific renderer are mostly irrelevant for OS design or where GUI/game/applications are concerned. It's not like existing systems where you're intimately tied to a specific renderer with specific techniques; where if you want to change to a radically different renderer (e.g. "cast rays from light sources") you're completely screwed and have to rewrite every game that's ever been written.
I'd be quite interested to see a picture of this scene which is 1/3rd ray cast, 1/3rd ray traced and 1/3rd rasterized...
Brendan wrote:Of course (sadly), it's far too easy for people who have never invented anything themselves (who only ever repeat ideas that someone else invented) to ignore massive advantages of anything different and only focus on the disadvantages that are far less important.
Of course (sadly), it's far too easy for people who have never actually implemented realtime systems themselves (who only ever theorize new ideas and never implement them) to ignore the massive practical advantages of existing systems and only focus on the disadvantages.
SpyderTL
Member
Posts: 1074
Joined: Sun Sep 19, 2010 10:05 pm

Re: Concise Way to Describe Colour Spaces

Post by SpyderTL »

you can't record the sound of a dog barking with MIDI, and can't change an instrument from guitar to flute in an existing MP3.
I think this is what Rusky is getting at, essentially. By choosing the "MIDI" approach, you are trading flexibility for simplicity on the application side. If done right, this should allow you to swap out renderers to improve performance and/or visual quality in the future. The down side is that the application will be limited to whatever "messages" are available when the application is built.

If you are the only person writing applications, this is no big deal. But you may find yourself bombarded with requests to add new features for specific applications, if you go this route. I guess this would be a good thing, in a way...
Project: OZone
Source: GitHub
Current Task: LIB/OBJ file support
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Owen wrote:
Brendan wrote:Why make one static choice when you're able to make choices dynamically (e.g. depending on the specific video card's capabilities, and whether the texture is being used for something too far away for details to make any difference or for something very close where details are very important, and "usage history" from previous frames, and estimates of how much memory bandwidth you've got to spare, etc)?
And how are you making that dynamic choice without even knowing that the texture compresses reasonably? (Especially considering that your average texture takes around a couple of seconds of CPU time to compress, so you can't exactly try each one and then use metrics to pick)
I guess that in 10+ years time (when I actually have a reason to care), assuming video cards are still using some sort of texture compression, I'll just have to take "compression time" and "probability it'll help" into account when deciding if/when to compress.
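A sketch of what that decision might reduce to; every input, and the "exchange rate" constant, is a hypothetical estimate the driver would have to maintain itself:

Code: Select all

/* Made-up cost/benefit test for "if/when to compress" - not a real policy. */
#define BYTES_PER_CPU_SECOND (64.0 * 1024 * 1024)  /* arbitrary exchange rate */

int should_compress(double seconds_to_compress,   /* estimated CPU cost      */
                    double p_quality_acceptable,  /* e.g. from a block probe */
                    double bytes_saved_per_frame, /* bandwidth saved         */
                    double expected_frames_used)  /* from usage history      */
{
    double benefit = p_quality_acceptable * bytes_saved_per_frame
                   * expected_frames_used;
    return benefit > seconds_to_compress * BYTES_PER_CPU_SECOND;
}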
Owen wrote:
Brendan wrote:If you're running a distributed OS then it's because you've got a network of multiple computers and not a single device (and ironically; about half of the research into "distributed real time rendering" is specifically for mobile phones because they're too under-powered and have to offload some rendering to a remote machine just to get acceptable results). With this in mind I'd be very tempted to just use the mobile phone as a dumb terminal and run its GUI, apps and/or games plus most of its rendering on a server.

Of course if there isn't a suitable computer on the network (e.g. you took the mobile phone with you to lunch and you're out of wifi range) then the video driver will reduce quality to sustain frame rate and you'll probably just be seeing fuzzy grey blobs lurching around the screen for anything that changes often.
I'm pretty sure that in any situation where I'd be playing a game on my phone, I'd not be in range of any of my own machines to use as a render server (because if I was, why am I using my phone?), and given that I'm away from my network I'd rather you didn't use my precious few megabytes of mobile data sending video data back and forth (at unacceptably high latency, anyway).
I guess that in 20+ years time (after the OS has significant market share for 80x86 laptop/desktop/server and I have to start looking into completely different markets like smartphones just so the OS can continue expanding) I'll have to do a little research into what the best approach might be for whatever hardware smartphones are using then.
Owen wrote:Of course, my phone is currently capable of much more than lurching gray blobs.
I really have no idea what smartphones are currently capable of - I've never owned or used one.
Owen wrote:
Brendan wrote:It's definitely more desirable than forcing mountains of hassles onto every application developer and forcing compatibility problems on users. Of course this doesn't mean it's a desirable situation, it's just a "least worst" situation.

To put this into perspective; think about a "Hello World" application. For plain text only, a "Hello World" application is about 3 lines of C. For 2D it gets more complicated and it becomes something like (for Windows) the ~80 lines of code shown here. For 3D with OpenGL 1.0 the complexity involved becomes massive (starting with implementing a font engine(!)) and you end up relying on libraries to cope; and for modern OpenGL (with shaders) the complexity involved doesn't get any better. Now imagine what it's going to be like for software developers (especially beginners attempting to learn) when everything is 3D from the smallest little widget all the way up.
Of course, everything is 3D from the smallest little widget up on modern platforms. Have you ever looked at how the Android or iOS UIs are implemented? On top of OpenGL (er, probably Metal in the iOS case these days). Does the developer have to know about this? Not really.
Show me a picture of an application's window taken from the left/right side, so I can see if it's just a 2D plane or actually has raised buttons, recessed text boxes, etc. My guess is that (from the side) the application's window will look like a vertical line and nothing more because it's not 3D at all (e.g. just 2D with "baked on shading" technology from 2 decades ago).

Alternatively; put a light source next to the window and show me how the shadows from buttons, etc. change as the light is moved around.
Owen wrote:On the other hand, implementing a UI with a generalized 3D engine sounds like hell (and indeed every 3D engine I've ever worked with comes with a specialized UI library or gives you the option of several to plug in).
There's probably a direct relationship between "complexity of 3D API" and "difficulty in implementing 3D application's user interface".
Owen wrote:
Brendan wrote:
Owen wrote:Is your driver capable of figuring out the correct order to draw things for a given scene? (note: there is no optimum for every scene. If I'm drawing a race track, I want to draw sequential track segments in closest-to-furthest track order for maximum occlusion. If I'm drawing a corridor shooter, then I want to use the visibility information I pre-baked at map compile time)

Does your driver have any clue whether it can pull tricks like screen-space ambient occlusion, which have no basis in reality but subjectively work? Can it pull tricks like screen-space reflection where suitable to mostly simulate reflections without having to redraw the scene twice?
Most existing renderers use "draw everything with Z buffer tests". I'm not. I'm using "overlapping triangle depth culling" where nothing is drawn until all the occlusion, lighting, etc is done and there is no Z buffer. Because it's different to existing stuff it has different advantages and different disadvantages; and requires different methods.
Your methodology sounds not too different from tile-based rendering.
The basic idea is to take a scene (described as a set of potentially overlapping triangles) and find a set of triangles where every point on the screen is within one (and only one) triangle (by splitting triangles and discarding any overlapping "sub-triangles"). Then you process the scene a second time from the perspective of a light source and do similar "triangle overlap" stuff to determine which of the triangles get hit by the light (e.g. splitting triangles that are "half in shadow and half in light" and storing "light at each vertex"). You do that for every light source.

The end result is a set of triangles where every point on the screen is within one triangle; where the light hitting every triangle is known. From this you can interpolate texture coords and light intensities while doing rasterization; which gives you "horizontal line segments" (with a starting and ending "texture coord + light intensities").

Of course this is just the basic idea and doesn't work on its own (e.g. anything with transparency has to be handled separately), and is also extremely expensive (much worse than "O(n*n)" for "n triangles").

The full thing is much more complicated; but I think I've solved the majority of the problems; and I think I can get the equivalent of "infinite super-sampling" out of it, and perfect "focal blur", and perfect motion blur, and perfect dynamic lighting/shadows.
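To make that data flow concrete, here's a skeleton in C. Every name and type below is invented for illustration - it's one reading of the description above, not Brendan's actual code:

Code: Select all

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3  pos[3];        /* screen-space vertices                  */
    float u[3], v[3];    /* texture coordinates per vertex         */
    float light[3];      /* accumulated light intensity per vertex */
} Tri;

typedef struct { Tri *tris; int count; } TriSet;

/* Pass 1: split overlapping triangles and discard the hidden pieces,
   so every screen point lies inside exactly one triangle (no Z buffer). */
TriSet resolve_occlusion(TriSet scene);

/* Pass 2, once per light source: repeat the overlap test from the
   light's point of view, splitting triangles that are half in shadow
   and adding the light reaching each surviving vertex to light[]. */
void accumulate_light(TriSet *visible, Vec3 light_pos);

/* Pass 3: scan-convert. Occlusion is already resolved, so each
   scanline of each triangle becomes one span with interpolated
   endpoints and no per-pixel depth test. */
typedef struct {
    float x_start, x_end;          /* fractional endpoints           */
    float u0, v0, l0, u1, v1, l1;  /* texcoord + light at either end */
} Span;

void rasterise(const TriSet *final_set, Span *out, int max_spans);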
Owen wrote:
Brendan wrote:Reflection is an unsolved problem - I haven't found an acceptable method yet (only "fast" methods that are dodgy/fragile/broken and give unacceptable/incorrect results, and "correct" methods that are too expensive unless you're able to throw multiple GPUs at it). My plan here is to use an expensive method that gives correct results, but is affected by the "fixed frame rate, variable quality" rule (where quality is reduced if/when there isn't enough time).
You may call tricks like screen-space reflection "broken." I'd say from experience that it's subjectively much better than no reflection at all (of course in some situations it would be broken and you'd not want it - so in those situations of course you'd want to let the artist turn it off).
There isn't any reason I can't do it (and no reason I can't have a "reflectivity value" for each polygon/triangle that artists can set to zero to get no reflections). I'd just prefer to find a way to do it right (which would still need a "reflectivity value" for each polygon/triangle).
Owen wrote:
Brendan wrote:Of course (sadly), it's far too easy for people who have never invented anything themselves (who only ever repeat ideas that someone else invented) to ignore massive advantages of anything different and only focus on the disadvantages that are far less important.
Of course (sadly), it's far too easy for people who have never actually implemented realtime systems themselves (who only ever theorize new ideas and never implement them) to ignore the massive practical advantages of existing systems and only focus on the disadvantages.
You don't need to build a car to know that treating it like a horse and shoving hay into the carburetor is a bad idea (even though it's a good idea for existing horses).

How many GPU drivers, shader language compilers and game engines have you written?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:It's definitely more desirable than forcing mountains of hassles onto every application developer and forcing compatibility problems on users.
...
you end up relying on libraries to cope
Relying on libraries is a perfectly good way to avoid "forcing mountains of hassles onto every application developer," and libraries in fact allow you to do everything you want without forcing it on applications that don't want it.
Sweeping dirt under a rug doesn't make it go away. By shifting complexity into a library it's still there, and now you've got to learn "random library API of the week" on top of that.
Rusky wrote:More blustering that shows you have no clue how to get good performance and quality out of a renderer.
Given your track record I consider that a compliment. Thanks!
Rusky wrote:
Brendan wrote:When you want to use old hardware (e.g. "fixed function pipeline")
In other words, never. The most recent fixed-function hardware is at least 10 years old, and even low-end CPUs include powerful programmable graphics hardware. By the time your OS is doing anything with graphics, supporting fixed-function hardware will be as ludicrous an idea as supporting the 8086.
Yes, technology changes, and by the time I even consider implementing the native GPU support that is required for your brain dead rantings those GPUs will probably be obsolete too (and I'll be extremely glad that I didn't design an API around the way GPUs happen to work today).
Rusky wrote:
Brendan wrote:New games that take advantage of the new features will not work on old systems and old games will not take advantage of new features.
Let me repeat this again. These are not desirable features, because developers need their applications to behave consistently across hardware so they can test them. Magically upgrading old games is both impossible to do in a way the developers would want and impossible to do without your API's behavior becoming so inconsistent as to be unusable. Magically downgrading new games is more possible, and in fact is already done to the extent that it is at all reasonable, by letting players adjust quality settings (of course it could be automatic).
Sure; you can't convert a web page designed for 1024*768 into Braille or sound (via speech synthesiser) because it's a higher level representation of the document; and you most definitely can't add fancy graphical effects to a game that was never designed for it.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan wrote:It is possible to cover the majority of (e.g.) new rendering algorithms because the interface is high level. To understand this imagine it's a "calculator service" where applications send a request like "calculate 5*4". In this case it doesn't matter much how the "calculator service" represents numbers internally, and doesn't matter much if it does "result = 5*4" or "result = 5+5+5+5" or "temp = 5+5; result = temp + temp;" because these details are hidden behind the high level abstraction and don't affect the application using the service. The algorithms that the "calculator service" uses can be completely changed to something radically different and nobody will know or care.
I recommend splitting extensibility into two pieces - a system level and an application level. At the application level there could be a few system variants available. One variant is exactly what you are going to implement, while the others are alternatives that use lower-level gates into the OS internals. To control application behavior it is enough to provide the user with only your variant, but the user should have a choice of other systems with other rules; that is where the other variants come into play. The system level is still higher than the OS level, but it extends the OS at the lowest possible level. It's not a new OS, because it still uses OS services - it's like different Linux distributions, where all security and compatibility issues are entirely the responsibility of the distribution vendor. Failing to provide low-level extensibility (if your OS succeeds) will lead to people hacking the source code and creating something completely unacceptable to you. So it is better to start caring about system-level extensibility now, so the eventual winner emerges in a world of competing distributions that at least share the system level you control.

But if you expect that only your vision will lead the game, then even application-level extensibility and configurability is useless and can be omitted (with all the bad consequences that you consider unimportant).
Brendan wrote:A GUI, games and applications only send "description of scene" (although it's more complicated than that - e.g. asking for data to be pre-loaded, then setting up an initial scene, then just describing "what changed when" after that). The GUI, games and applications don't use any renderer themselves. Their "description of scene" is sent to the video driver/s, and each video driver is responsible for ensuring it's rendered and displayed on time (where "ensuring it's rendered" means that the video driver ensures that one or more "renderer services" does the rendering; where a native video driver with GPU support may provide its own built-in "renderer service" for its own use and for other drivers to use).
An application provides a "bytecode", a driver sends it to a rendering engine (a compiler?) and receives raw bytes with a color for every pixel; then the driver sends the colors to the hardware. Here the compiler part is hidden from the application developer, but the "bytecode" is perfectly visible and can seriously limit a developer. For example, HTML has had many updates since its first appearance, then it was extended with CSS, and finally JavaScript was introduced to make the "scene description" acceptable for the real world. So, what do you think the scene description should look like?
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:It is possible to cover the majority of (e.g.) new rendering algorithms because the interface is high level. To understand this imagine it's a "calculator service" where applications send a request like "calculate 5*4". In this case it doesn't matter much how the "calculator service" represents numbers internally, and doesn't matter much if it does "result = 5*4" or "result = 5+5+5+5" or "temp = 5+5; result = temp + temp;" because these details are hidden behind the high level abstraction and don't affect the application using the service. The algorithms that the "calculator service" uses can be completely changed to something radically different and nobody will know or care.
I recommend splitting extensibility into two pieces - a system level and an application level. At the application level there could be a few system variants available. One variant is exactly what you are going to implement, while the others are alternatives that use lower-level gates into the OS internals. To control application behavior it is enough to provide the user with only your variant, but the user should have a choice of other systems with other rules; that is where the other variants come into play. The system level is still higher than the OS level, but it extends the OS at the lowest possible level. It's not a new OS, because it still uses OS services - it's like different Linux distributions, where all security and compatibility issues are entirely the responsibility of the distribution vendor. Failing to provide low-level extensibility (if your OS succeeds) will lead to people hacking the source code and creating something completely unacceptable to you. So it is better to start caring about system-level extensibility now, so the eventual winner emerges in a world of competing distributions that at least share the system level you control.

But if you expect that only your vision will lead the game, then even application-level extensibility and configurability is useless and can be omitted (with all the bad consequences that you consider unimportant).
People can either choose to accept standardised APIs, messaging protocols and file formats that all ensure interoperability; or they can choose not to use the OS. I will be doing everything I possibly can to prevent "random extension/library/framework/public service/programming language of the week". The project is not open source and there won't be multiple distributions; and it is all digitally signed with 2048-bit keys, so (unless I've got serious security flaws) there shouldn't be any hacking of any kind.
embryo2 wrote:
Brendan wrote:A GUI, games and applications only send "description of scene" (although it's more complicated than that - e.g. asking for data to be pre-loaded, then setting up an initial scene, then just describing "what changed when" after that). The GUI, games and applications don't use any renderer themselves. Their "description of scene" is sent to the video driver/s, and each video driver is responsible for ensuring it's rendered and displayed on time (where "ensuring it's rendered" means that the video driver ensures that one or more "renderer services" does the rendering; where a native video driver with GPU support may provide its own built-in "renderer service" for its own use and for other drivers to use).
An application provides a "bytecode", a driver sends it to a rendering engine (a compiler?) and receives raw bytes with a color for every pixel; then the driver sends the colors to the hardware. Here the compiler part is hidden from the application developer, but the "bytecode" is perfectly visible and can seriously limit a developer. For example, HTML has had many updates since its first appearance, then it was extended with CSS, and finally JavaScript was introduced to make the "scene description" acceptable for the real world. So, what do you think the scene description should look like?
Messages for video and sound go from child to parent, to grandparent, ..., all the way to drivers. Messages for keyboard and mouse follow the same paths but go in the opposite direction (from drivers to driver's child, to driver's grandchild, ...). At each step along these paths messages may be forwarded "as is"; but they can also be modified.

Imagine an application is describing a fairly typical "square" (cubic) window. Maybe the GUI knows the application is minimised and discards half the messages instead of forwarding them. Maybe the GUI modifies the application's messages so that the "square" window ends up being displayed as a rotating sphere. Maybe the GUI is using a "deep winter" theme, and adds little icicles on the under-side of any raised edges in the application's window; and when you click a button in the application's toolbar a little icicle breaks off, and falls, and shatters when it hits the application's status bar at the bottom. Maybe the application is running inside a debugger, and the debugger modifies the application's messages to create an "exploded view" of the application's user interface.

With one standard messaging protocol it shouldn't be too hard to do these kinds of things, because it's not too hard for a piece of software (e.g. a GUI) to understand the contents of messages that it's "forwarding". For multiple competing messaging protocols the difficulty is multiplied. With arbitrary extensions it's effectively "infinite difficulty" (impossible).
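As an illustration of "forward as-is or modify" (with an entirely hypothetical message layout - the real protocol isn't shown in this thread):

Code: Select all

#include <stdint.h>

enum { MSG_SET_CAMERA, MSG_UPDATE_MESH, MSG_SET_LIGHT /* ... */ };

typedef struct {
    uint32_t op;         /* one standard vocabulary everything understands */
    uint32_t source;     /* which child the message came from              */
    uint32_t size;       /* payload bytes                                  */
    uint8_t  payload[];  /* op-specific data                               */
} VideoMsg;

typedef void (*Forward)(const VideoMsg *msg);

/* One node in the child -> parent -> ... -> driver chain. */
void gui_filter(const VideoMsg *msg, int window_minimised, Forward next)
{
    if (window_minimised && msg->op == MSG_UPDATE_MESH)
        return;          /* minimised: discard scene updates entirely */

    /* A theme or debugger could rewrite msg->payload here (icicles,
       marshmallow squashing, exploded views) precisely because it
       understands 'op' - a dumb pipe couldn't. */
    next(msg);
}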


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

Brendan wrote:The full thing is much more complicated; but I think I've solved the majority of the problems; and I think I can get the equivalent of "infinite super-sampling" out of it, and perfect "focal blur", and perfect motion blur, and perfect dynamic lighting/shadows.
Would be genuinely interested in more details and/or previous similar systems, perhaps in another thread.
Brendan wrote:How many GPU drivers, shader language compilers and game engines have you written?
Can't speak for Owen, but I have used and worked on several game engines, which entails heavily using (if not writing) drivers and shader compilers. As far as I know, the last game you worked on was in something like C64 BASIC. You certainly don't seem to have much grasp on what modern game engines actually do, as you keep referring to it all as "mountains of hassles" for game developers, which it is not - your API sounds exactly like existing game engines, but one that magically shifts under developers' feet far more than existing drivers do, and without the workaround of a common game engine library to handle the shifting.
Brendan wrote:By shifting complexity into a library it's still there, and now you've got to learn "random library API of the week" on top of that.
Just because you can make/use a new library doesn't mean you have to every week, or that it's even a good idea. Granted, this sort of thing is very much a problem in fields like client-side web programming, but I attribute that to the domain rather than the libraries - the "game engine ecosystem" is much more stable.
Brendan wrote:Yes, technology changes, and by the time I even consider implementing the native GPU support that is required for your brain dead rantings those GPUs will probably be obsolete too (and I'll be extremely glad that I didn't design an API around the way GPUs happen to work today).
Unlikely. GPUs have gotten more general-purpose over time, not less. Much like executable formats and system call interfaces have become more stable since the days of DOS when they had to be extremely hardware-specific for performance reasons, GPU APIs and shader languages have also become more generic and uniform. Any changes to GPU interfaces will be similar to changes to CPU interfaces- backwards-compatible additions to their capabilities that can be automatically taken advantage of by new compilers.
Brendan wrote:Sure; you can't convert a web page designed for 1024*768 into Braille or sound (via. speech synthesiser) because it's a higher level representation of the document; and you most definitely can't add fancy graphical effects to a game that was never designed for it.
Way to demonstrate my point. Mods are always designed for one specific game, not included in a driver update that magically improves all your games.

Again, low-level interfaces do not prevent you from upgrading things transparently to applications- libraries allow you to do exactly the same thing, and much more flexibly, when it's reasonable.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi
Rusky wrote:
Brendan wrote:The full thing is much more complicated; but I think I've solved the majority of the problems; and I think I can get the equivalent of "infinite super-sampling" out of it, and perfect "focal blur", and perfect motion blur, and perfect dynamic lighting/shadows.
Would be genuinely interested in more details and/or previous similar systems, perhaps in another thread.
While writing a response to Owen's previous questions about lighting, reflection, etc; I started writing out a detailed list of steps my renderer would take - how the data is arranged and split up, sorted, processed, etc. It grew to about 2 pages and only covered the first 75% of the renderer, and I started having second thoughts. It was never posted - instead; I deleted it and replaced it with a single sentence ("I'm using "overlapping triangle depth culling" where nothing is drawn until all the occlusion, lighting, etc is done and there is no Z buffer.").

The thing is, researching new ideas is hard. You look like you're washing the dishes, but you're thinking about alternative ways to do network routing. You're out at a restaurant with your family and someone's telling you what they did on their holiday, but you're just nodding and saying generic babble like "OK" and "cool" when there's a gap in the conversation - you're not listening, you're thinking about ways to do array bounds checking at compile time. You're just playing a computer game and relaxing, but you're not - in the back of your mind you're thinking about collision detection. You spend about half of every day doing "passive research" like this.

Occasionally; an idea reaches the point where "passive research" isn't enough - there'll be some annoying detail that you're not sure about. You put everything else aside just to build a prototype to convince yourself that an idea that's been rolling around your head is plausible (or equally likely, to discover a problem with the idea you can't solve yet). Maybe this costs you a few days; but maybe it takes 6 months.

If you do this for long enough it starts to affect everything you do. You'll be writing boot code and get to the part about setting a default video mode. The average person would just get the EDID from the monitor and use it to decide which video mode to use; and it'll work, and they'll be happy, and they'll move on to implementing the next thing. You can't. You've got these ideas that have formed in the back of your mind over the last 2 years. You spend weeks researching everything, and wondering how future "true 3D" displays would need to be described, and trying to figure out why a digital video interface like HDMI would care about "horizontal sync width" timing.

It doesn't just cost time though. If you spent that time (e.g.) building bird houses out of wood, you can point at the bird houses you've made and say "this is what I did". People can see the results of your efforts and understand what you've achieved. Ideas aren't like that. You can't take a photo of "concept for highly scalable scheduling algorithm" and show your brother-in-law next time he asks what you've been doing. He doesn't understand how computers work, and you can't explain it. He thinks you sacrificed $$ per hour working as an electrician just to play computer games. But it's not just people that don't know anything about OSs; it's programmers and even other OS developers too. Some kid will come along and start writing their OS, and within 6 months they'll have a working shell or something, and they'll look at your project and see "nothing".

I've been researching new ideas for 20 years now. The end result is a reasonable collection of ideas that (if/when implemented at the same time) have the potential to make my OS better than anything that currently exists in multiple areas. This "potential to be better than anything that exists" is the reward for all the work, and all the sacrifices. But none of it is patented, and if you say too much you risk losing your reward, so you only show the tip of the iceberg and don't put too much information in one place.
Rusky wrote:
Brendan wrote:How many GPU drivers, shader language compilers and game engines have you written?
Can't speak for Owen, but I have used and worked on several game engines, which entails heavily using (if not writing) drivers and shader compilers. As far as I know, the last game you worked on was in something like C64 BASIC. You certainly don't seem to have much grasp on what modern game engines actually do, as you keep referring to it all as "mountains of hassles" for game developers, which it is not - your API sounds exactly like existing game engines, but one that magically shifts under developers' feet far more than existing drivers do, and without the workaround of a common game engine library to handle the shifting.
When I first saw id Software's "Doom" (in the 1990s) I was impressed/amazed. I wrote a very crappy renderer using similar principles (cast 320 rays, see what wall they hit, scale the vertical strip) to play with the idea. Then I started looking into ray casting and started a prototype that was abandoned before it came close to working, because I didn't think I'd ever be able to get acceptable frame rates.

Later (about 15 years ago) I wanted an excuse to practice using 80x86's FPU; and I had a unique idea involving the rasteriser that I wanted to try out (keeping the "start and end" of horizontal line segments as floating point to get perfect anti-aliasing in the horizontal direction). I implemented the renderer in 16-bit assembly as a DOS program. It worked very well; so I decided to re-implement it in C as a 32-bit Windows program (using Allegro to get a frame buffer, etc). For this version I added a few things (including doing 2 rows of pixels per screen line and averaging the results to get some vertical anti-aliasing to go with the far higher quality horizontal anti-aliasing it was doing). This renderer worked very well too, and I started thinking about turning it into an actual game; but the 3D models I was using to test it (a few space ships) were created by hand (with diagrams on paper and a bunch of calculations to determine vertex coords) and were entered into the program as hard-coded data structures. I decided I couldn't be bothered with writing an application to create 3D models in a sane way and went back to OS development.
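The fractional-endpoint trick is simple enough to sketch. This hypothetical version hands a coverage value to a caller-supplied plot function and leaves the framebuffer blending out:

Code: Select all

#include <math.h>

void draw_span_aa(float x0, float x1, int y,
                  void (*plot)(int x, int y, float coverage))
{
    int first = (int)floorf(x0);
    int last  = (int)ceilf(x1) - 1;

    for (int x = first; x <= last; x++) {
        /* How much of pixel [x, x+1) does the span [x0, x1) cover? */
        float left  = (x0 > (float)x)       ? x0 : (float)x;
        float right = (x1 < (float)(x + 1)) ? x1 : (float)(x + 1);
        if (right > left)
            plot(x, y, right - left);   /* coverage in (0, 1] */
    }
}

Edge pixels get partial coverage instead of a hard cutoff, which is where the "perfect anti-aliasing in the horizontal direction" comes from; drawing 2 rows per screen line and averaging, as described above, approximates the same thing vertically.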

Later (about 5 years ago) I figured out a way to do time travel in a multi-player game; mostly involving keeping a log of "history" that determines the state of the game world in the past/present/future; where any change a player causes in one time affects the "log of history" and therefore affects the game world of all players seeing future times. It was an interesting idea, so I started a prototype. I ended up with a server (for Linux) and client (for Windows). The client was in C, using SDL and OpenGL. The client would download resources (textures, fonts) from the server while the game was running, and had a working menu system (and its own little font engine) implemented with OpenGL, and some ugly 3D terrain. Then I started trying to figure out how to get the client to support 2 monitors and gave up.

Finally (about 2 years ago) I decided it'd be fun to do something like Minecraft in Java with LWJGL. I was wrong, it wasn't fun, and even simple things (like trying to determine how many CPUs the system has) were painfully frustrating. This lasted about 3 weeks before I remembered how much Java and OpenGL both suck.

Of course I am not a game developer. I just do these things for research and/or to take a break from OS development.
Rusky wrote:
Brendan wrote:By shifting complexity into a library it's still there, and now you've got to learn "random library API of the week" on top of that.
Just because you can make/use a new library doesn't mean you have to every week, or that it's even a good idea. Granted, this sort of thing is very much a problem in fields like client-side web programming, but I attribute that to the domain rather than the libraries - the "game engine ecosystem" is much more stable.
I'm sure we've talked about libraries before - libraries have advantages and disadvantages; and for my OS I'm not interested and never will be, for multiple reasons.

In theory I could have "wrapper services" (where video driver has lower level interface and wrappers provide higher level interfaces), but that destroys all the benefits of having all the different pieces (widgets, apps, games, GUIs, etc) all using the same standard messaging protocol that everything understands, and therefore destroys the flexibility of the system as whole.
Rusky wrote:
Brendan wrote:Yes, technology changes, and by the time I even consider implementing the native GPU support that is required for your brain dead rantings those GPUs will probably be obsolete too (and I'll be extremely glad that I didn't design an API around the way GPUs happen to work today).
Unlikely. GPUs have gotten more general-purpose over time, not less. Much like executable formats and system call interfaces have become more stable since the days of DOS when they had to be extremely hardware-specific for performance reasons, GPU APIs and shader languages have also become more generic and uniform. Any changes to GPU interfaces will be similar to changes to CPU interfaces- backwards-compatible additions to their capabilities that can be automatically taken advantage of by new compilers.
For hardware there's a compromise between "efficiency for one specific job" and "flexibility". Rendering got shifted from CPU to hardware accelerators to improve efficiency, then the hardware accelerators got replaced with "graphics processors" to regain the flexibility; now GPUs are getting more "CPU like" and CPUs are getting more "GPU like". Within 10 years I wouldn't necessarily be too surprised if we go full circle and end up back where we started (where "very generic/flexible" CPUs do it all). You only need to look at Xeon Phi ("many core 80x86 with AVX-512") and what Intel has been doing with SIMD in recent generations to realise this is at least a possibility.
Rusky wrote:
Brendan wrote:Sure; you can't convert a web page designed for 1024*768 into Braille or sound (via speech synthesiser) because it's a higher level representation of the document; and you most definitely can't add fancy graphical effects to a game that was never designed for it.
Way to demonstrate my point. Mods are always designed for one specific game, not included in a driver update that magically improves all your games.
Yes; a shader mod for one specific game won't work for other games because there isn't a standard representation for data that's used by all games. This problem doesn't apply to what I'm proposing (where there is a standard representation of data used by all games) and is therefore irrelevant.

Now let's try "1 + 1 = 2". By changing the software that the GPU executes (regardless of whether that software came from a game or from the video driver), and ensuring that games do use a standard representation for data, improving the video driver can improve the quality of graphics in all games.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Concise Way to Describe Colour Spaces

Post by Antti »

Brendan wrote:If you spent that time (e.g.) building bird houses out of wood, you can point at the bird houses you've made and say "this is what I did". People can see the results of your efforts and understand what you've achieved. Ideas aren't like that.
This is not necessarily true, unless we concentrate on this particular quote and forget the context. It seems that you have written many prototypes, and those were probably good for their intended purpose, but you seem to be less experienced when it comes to finalizing something. Of course, we have to take into account that the prototypes could have been "better than average anyway", but they were written just for their intended purpose and nothing else. I am almost sure that with relatively little effort some of those prototypes could have been finalized, and you would have an impressive bird house collection.

It is a kind of valid strategy to concentrate on researching and prototyping, but there will always be something missing - especially when there is a vision of having something finalized eventually. Doom was mentioned. It is hard for me to believe that id Software would have succeeded in writing Doom without first having finalized games on their résumé. Even if we looked at this from the research viewpoint, there would be a lot of researching and prototyping to be done at the product-finalizing stage itself. What I am saying is that you can have all of this: more thorough researching & prototyping, and the bird house collection as a side product.

Besides, pre-Doom games are fun to play.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan wrote:Messages for video and sound go from child to parent, to grandparent, ..., all the way to drivers. Messages for keyboard and mouse follow the same paths but go in the opposite direction (from drivers to driver's child, to driver's grandchild, ...). At each step along these paths messages may be forwarded "as is"; but they can also be modified.
You confuse the messaging protocol with the data representation. For your system to be able to use your messages it should understand them. A protocol is usually concerned with operations like "give it" or "take it", where "it" is the data that should be understood by the recipient (but not by the protocol). A recipient is usually concerned with the data, and shouldn't need to understand the protocol that delivers it.

So, in your system I see a mess of a protocol with a very wide array of possible data, whether it's data for video, from the keyboard, the hard disk or whatever. It means that you still have no scene description (or do you seriously think that the combination of the protocol with every possible piece of system data is a scene description?).
Brendan wrote:You can't take a photo of "concept for highly scalable scheduling algorithm" and show your brother-in-law next time he asks what you've been doing. He doesn't understand how computers work, and you can't explain it.
You can create a demonstration. If you are so inclined towards 3D, then the demonstration can look much more attractive than a bird house. But the problem here is your ability to finalize your ideas. It's better to start making simple, but finished, demonstrations instead of keeping on dreaming about the best OS ever. When there are some finished products, the chances that the world will see the best OS ever are greatly increased.
My previous account (embryo) was accidentally deleted, so I had no choice but to use something new. But maybe it was a good lesson about software reliability :)
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:Messages for video and sound go from child to parent, to grandparent, ..., all the way to drivers. Messages for keyboard and mouse follow the same paths but go in the opposite direction (from drivers to driver's child, to driver's grandchild, ...). At each step along these paths messages may be forwarded "as is"; but they can also be modified.
You confuse the messaging protocol with the data representation. For your system to be able to use your messages it should understand them. A protocol is usually concerned with operations like "give it" or "take it", where "it" is the data that should be understood by the recipient (but not by the protocol). A recipient is usually concerned with the data, and shouldn't need to understand the protocol that delivers it.
Imagine you've got a client on one side of the Internet and a server on the other side of the Internet. All the computers in between are just shoving TCP/IP packets around; and there's no reason for them to understand the data in those packets. In this case the client and server can use whatever protocol they like and it wouldn't make any difference to those computers in between.

Now imagine that some of the computers in between are doing fancy tricks (e.g. maybe some sort of intelligent proxy cache) and aren't just shoving TCP/IP packets around but are actually inspecting and modifying the packets. In this case the client and server can't use whatever protocol they like; and have to use a protocol that the computers in the middle can understand.

The first way is more flexible than the second because the protocol can be "anything" (as long as the client and server use the same protocol). However, the second way is more flexible than the first because the computers in the middle have the ability to do powerful things and aren't limited to being "dumb pipes". Basically it's 2 different types of flexibility that are mutually exclusive.

For my video system, I want the second type of flexibility. If there's a widget talking to an application that's talking to a GUI that's talking to a process that records everything as a movie to be played back later; and the application is modifying the widget's data to make the widget look shiny, and the GUI is modifying the application's data to make the app look like a marshmallow that changes shape when it's squashed, then that's all good. And if none of this is happening and the messages are just being forwarded "as is", well, that's good too.
embryo2 wrote:
Brendan wrote:You can't take a photo of "concept for highly scalable scheduling algorithm" and show your brother-in-law next time he asks what you've been doing. He doesn't understand how computers work, and you can't explain it.
You can create a demonstration. If you are so inclined towards 3D, then the demonstration can look much more attractive than the bird's nest. But the problem here is your ability to finalize your idea. It's better to start making simple, but finished, demonstrations, instead of keeping dreaming about the best ever OS. When there would be some finished products, then the chances for the world to see the best ever OS are greatly increased.
I mostly only care about creating the best OS I can; and if people don't understand what I'm doing it's unfortunate, but doesn't really affect my long term plans. Turning code that has outlived its usefulness into usable/tangible things would help people understand what I'm doing, but the time it'd take to do this is time that could've been spent creating the best OS I can.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

Brendan wrote:I've been researching new ideas for 20 years now. The end result is a reasonable collection of ideas that (if/when implemented at the same time) have the potential to make my OS better than anything that currently exists in multiple areas. This "potential to be better than anything that exists" is the reward for all the work, and all the sacrifices. But none of it is patented, and if you say too much you risk losing your reward, so you only show the tip of the iceberg and don't put too much information in one place.
That's unfortunate. It's possible many of these ideas will never see the light of day, and I personally don't think you really need a patent to benefit from them (or that software patents in general should exist). Certainly nobody will benefit from them if they're never shared or implemented. It's up to you, though- I guess consider this some encouragement to build your renderer instead. :)
Brendan wrote:Finally (about 2 years ago) I decided it'd be fun to do something like Minecraft in Java with LWJGL. I was wrong, it wasn't fun, and even simple things (like trying to determine how many CPUs the system has) were painfully frustrating. This lasted about 3 weeks before I remembered how much Java and OpenGL both suck.
Fortunately, by "game engine" nobody means "Java and plain OpenGL" (especially not together, ugh). By "game engine" I mean things like "Unreal Engine" or "Unity 3D" or "GameMaker: Studio," which are infinitely less of a hassle and provide a very similar level of functionality to your API.
Brendan wrote:In theory I could have "wrapper services" (where video driver has lower level interface and wrappers provide higher level interfaces), but that destroys all the benefits of having all the different pieces (widgets, apps, games, GUIs, etc) all using the same standard messaging protocol that everything understands, and therefore destroys the flexibility of the system as whole.
There's no reason all the wrapper services can't speak the same protocols, just like all the many OSes speak TCP/IP. Of course wrapper services also allow for new (versions of) protocols to be created and used alongside the old, if/when necessary.
Brendan wrote:For hardware there's a compromise between "efficiency for one specific job" and "flexibility". Rendering got shifted from CPU to hardware accelerators to improve efficiency, then the hardware accelerators got replaced with "graphics processors" to regain the flexibility; now GPUs are getting more "CPU like" and CPUs are getting more "GPU like". Within 10 years I wouldn't necessarily be too surprised if we go full circle and end up back where we started (where "very generic/flexible" CPUs do it all). You only need to look at Xeon Phi ("many core 80x86 with AVX-512") and what Intel has been doing with SIMD in recent generations to realise this is at least a possibility.
Indeed. At which point, a really good low-level graphics API would still work great as a way to write cross-architecture, massively-parallel code. Although the possibility means you may want to name things in a less graphics-centric way, or maybe provide the graphics-centric functionality a layer above a lower-level "compute shader" API.
Brendan wrote:Yes; a shader mod for one specific game won't work for other games because there isn't a standard representation for data that's used by all games. This problem doesn't apply to what I'm proposing (where there is a standard representation of data used by all games) and is therefore irrelevant.

Now let's try "1 + 1 = 2". By changing the software that the GPU executes (regardless of whether that software came from a game or from the video driver), and ensuring that games do use a standard representation for data, improving the video driver can improve the quality of graphics in all games.
Let's think more specifically about how you would apply the moral equivalent of a Minecraft mod to all games. Shaders could be done if you standardized the input formats for meshes (trivial) and materials (not too hard either, and modern game engines already seem to be doing this by unifying most of their shaders for Physically Based Rendering). This is essentially your proposal. However, this excludes any other shader styles, which apparently everyone but you cares about. It also excludes any optimizations like Minecraft using tessellation to minimize bandwidth usage, or GPU-side particle systems.
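
For what it's worth, a standardised "metallic/roughness" material along those lines could be tiny. Here's a hypothetical sketch in C - the parameter set is the common PBR workflow, but the struct layout and names are invented for illustration:

Code: Select all

#include <stdint.h>

/* Hypothetical standardised PBR material description.  The
 * parameters (albedo, metallic, roughness, emissive) are the usual
 * "metallic/roughness" set; the layout and names are invented. */
typedef struct {
    float    albedo[3];     /* base colour, linear RGB, 0.0..1.0       */
    float    metallic;      /* 0.0 = dielectric, 1.0 = metal           */
    float    roughness;     /* 0.0 = mirror, 1.0 = fully diffuse       */
    float    emissive[3];   /* self-illumination, linear RGB           */
    uint32_t albedo_tex;    /* handle of optional albedo texture, or 0 */
    uint32_t normal_tex;    /* handle of optional normal map, or 0     */
} material_desc;

Because every game would describe surfaces with the same small parameter set, a driver-side "shader mod" would only have to understand one material model instead of one per game.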

Textures and meshes, on the other hand, are completely impossible, since they'll always be stuck at the level of detail they were specified at, and are highly game-specific. This could be mitigated using vector-based formats, but that's not always optimal or even possible.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:I've been researching new ideas for 20 years now. The end result is a reasonable collection of ideas that (if/when implemented at the same time) have the potential to make my OS better than anything that currently exists in multiple areas. This "potential to be better than anything that exists" is the reward for all the work, and all the sacrifices. But none of it is patented, and if you say too much you risk losing your reward, so you only show the tip of the iceberg and don't put too much information in one place.
That's unfortunate. It's possible many of these ideas will never see the light of day, and I personally don't think you really need a patent to benefit from them (or that software patents in general should exist). Certainly nobody will benefit from them if they're never shared or implemented. It's up to you, though - I guess consider this some encouragement to build your renderer instead. :)
All my ideas will either be replaced with something better, or implemented (and eventually released). If I've found something better there's no point keeping the details for the old idea private, and when the OS is released there's no point keeping the internal details hidden.
Rusky wrote:
Brendan wrote:In theory I could have "wrapper services" (where the video driver has a lower level interface and wrappers provide higher level interfaces), but that destroys all the benefits of having all the different pieces (widgets, apps, games, GUIs, etc) using the same standard messaging protocol that everything understands, and therefore destroys the flexibility of the system as a whole.
There's no reason all the wrapper services can't speak the same protocols, just like all the many OSes speak TCP/IP. Of course wrapper services also allow for new (versions of) protocols to be created and used alongside the old, if/when necessary.
Imagine you've got 4 separate processes communicating, like this (where the "=>" represents the standard message protocol):

Code: Select all

(widget) => (application1) => (VT_layer) => (driver)
And then the user decides to shift the application (while the application is running) from "full screen" to "window in a GUI" and it becomes this:

Code: Select all

(widget) => (application1) => (GUI) => (VT_layer) => (driver)
Then the user decides they want everything to look like it's made out of ice, so (while everything is running) they stick an "ice effect" process in there and it becomes this:

Code: Select all

(widget) => (application1) => (GUI) => (ice_effect) => (VT_layer) => (driver)
Then the user decides they also want to record it as a movie, so they put a splitter in there and it becomes:

Code: Select all

(widget) => (application1) => (GUI) => (ice_effect) => (splitter) => (VT_layer) => (driver)
                                                                  => (recorder) -> VFS
But then they decide they want to see "without ice" and "with ice" at the same time, so they do this (with a second GUI so they can see "with ice" and "without ice" in different windows):

Code: Select all

... => (GUI1) =>  (splitter) => (ice_effect) => (GUI2) => (splitter) => (VT_layer) => (driver)
                             =================>                      => (recorder) -> VFS
Now think about the amount of data being shuffled around between processes. Would you rather there be many KiB of higher level data ("move object A to position X") being shifted around (potentially over network connections), or would you rather have many MiB of low level (e.g. vertex/pixel) data being shifted around (potentially over network connections)? Also think about how you'd implement that "ice effect" process - would you rather be processing low level data or higher level data?
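
To put rough numbers on that, here's a sketch in C of what the high-level version of "move object A to position X" might cost, versus re-sending geometry (the message name and layout are invented for illustration):

Code: Select all

#include <stdint.h>

/* Hypothetical high-level message: "move object A to position X".
 * The message type and field layout are invented for illustration. */
typedef struct {
    uint32_t msg_type;     /* e.g. MSG_MOVE_OBJECT                  */
    uint32_t object_id;    /* handle the video driver already knows */
    float    pos[3];       /* new position                          */
    float    quat[4];      /* new orientation (quaternion)          */
} msg_move_object;         /* 36 bytes per moved object             */

/* The low-level alternative is re-sending the object's vertex data:
 * at 32 bytes per vertex (position + normal + UV), a modest
 * 10,000-vertex mesh costs ~313 KiB per update - nearly four
 * orders of magnitude more than the 36-byte message above. */

The "ice effect" process has the same asymmetry: rewriting a handful of material/lighting descriptions is far simpler than intercepting and re-shading megabytes of vertex and pixel data.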

Next, imagine if you allowed "wrapper service" processes to be inserted "wherever" in the middle of all of this; and instead of having one standard protocol to deal with you've got multiple protocols, plus the additional communication between the "needed processes" and all the "wrapper processes".

Finally, see if you can figure out what happens when the user unplugs their standard 2D LCD monitor and plugs in something like Oculus Rift instead (hint: none of the widgets/applications/GUIs need to know or care).
Rusky wrote:
Brendan wrote:Yes; a shader mod for one specific game won't work for other games because there isn't a standard representation for data that's used by all games. This problem doesn't apply to what I'm proposing (where there is a standard representation of data used by all games) and is therefore irrelevant.

Now let's try "1 + 1 = 2". By changing the software that the GPU executes (regardless of whether that software came from a game or from the video driver), and ensuring that games do use a standard representation for data, improving the video driver can improve the quality of graphics in all games.
Let's think more specifically about how you would apply the moral equivalent of a Minecraft mod to all games. Shaders could be done if you standardized the input formats for meshes (trivial) and materials (not too hard either, and modern game engines already seem to be doing this by unifying most of their shaders for Physically Based Rendering). This is essentially your proposal. However, this excludes any other shader styles, which apparently everyone but you cares about.
Yes.
Rusky wrote:It also excludes any optimizations like Minecraft using tessellation to minimize bandwidth usage, or GPU-side particle systems.
For tessellation, I assume you just mean the standardised format for meshes allows a mesh to say "these polygon edges are smooth/curved" and the video driver can break it into as many or as few triangles as it feels like, using whatever method it wants. For GPU-side particle systems I don't know if you mean graphics (e.g. "simple object/sphere with location and direction per particle") or physics (which has nothing to do with graphics/rendering).
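
As a sketch of that first interpretation (all names and the layout are invented; the point is only that the mesh marks smoothness and the driver decides how finely to tessellate):

Code: Select all

#include <stdint.h>

/* Hypothetical edge record in a standardised mesh format.  The mesh
 * only *marks* which edges are smooth; the driver decides how many
 * triangles to generate (per device, per frame, per distance from
 * the camera).  Names and layout are invented for illustration. */
typedef struct {
    uint32_t vertex_a;     /* index of first endpoint                 */
    uint32_t vertex_b;     /* index of second endpoint                */
    uint8_t  is_smooth;    /* 1 = subdivide as a curve, 0 = hard edge */
} mesh_edge;

/* Driver side: a distant object might get subdivision level 0 (no
 * extra triangles at all) while a close-up object gets level 4
 * (16 segments per smooth edge) - the mesh data never changes. */
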
Rusky wrote:Textures and meshes, on the other hand, are completely impossible, since they'll always be stuck at the level of detail they were specified at, and are highly game-specific. This could be mitigated using vector-based formats, but that's not always optimal or even possible.
Improving the video driver/renderer won't improve meshes or normal textures (it only improves how they're rendered); and their formats would be standardised, not "game specific" at all. For generated textures ("render to texture"), improving the renderer will improve the texture.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Rusky
Member
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

Brendan wrote:Would you rather there be many KiB of higher level data ("move object A to position X") being shifted around (potentially over network connections), or would you rather have many MiB of low level (e.g. vertex/pixel) data being shifted around (potentially over network connections)? Also think about how you'd implement that "ice effect" process - would you rather be processing low level data or higher level data?
I'm not proposing that every application use a low-level API exclusively - all your examples still work with e.g. the game using a low-level API that the GUI knows how to forward efficiently, just like existing OSes do, with the exception of trying to run a game over a network, which wouldn't work anyway.
Brendan wrote:For tessellation, I assume you just mean the standardised format for meshes allows a mesh to say "these polygon edges are smooth/curved" and the video driver can break it into as many or as few triangles as it feels like, using whatever method it wants. For GPU-side particle systems I don't know if you mean graphics (e.g. "simple object/sphere with location and direction per particle") or physics (which has nothing to do with graphics/rendering).
For tessellation in the context of Minecraft, I mean "the game sends only which voxels are filled, and the GPU generates the actual cubes' geometry, for massive bandwidth savings." For GPU-side particle systems I mean "a combination of relatively simple object rendering combined with physics in the same GPU program for zero bandwidth usage."
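
Concretely, that first trick is cheap to describe. A sketch, assuming a 16x16x16 chunk section with one block-type byte per cell (the sizes and layout are assumptions for illustration):

Code: Select all

#include <stdint.h>

/* Hypothetical "filled cells only" upload for one 16x16x16 chunk
 * section: one block-type byte per cell is ~4 KiB no matter how many
 * cubes are visible, and the GPU expands filled cells into cube
 * geometry itself.  Layout is an assumption for illustration. */
typedef struct {
    int32_t section_x, section_y, section_z;  /* position in world      */
    uint8_t block_type[16 * 16 * 16];         /* 0 = air, else block id */
} chunk_section_upload;                       /* ~4 KiB                 */

/* Versus sending expanded geometry: 24 vertices per cube at 32 bytes
 * each is 768 bytes per cube, so a section with ~2,000 solid blocks
 * is ~1.5 MiB - several hundred times larger. */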
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Would you rather there be many KiB of higher level data ("move object A to position X") being shifted around (potentially over network connections), or would you rather have many MiB of low level (e.g. vertex/pixel) data being shifted around (potentially over network connections)? Also think about how you'd implement that "ice effect" process - would you rather be processing low level data or higher level data?
I'm not proposing that every application use a low-level API exclusively - all your examples still work with e.g. the game using a low-level API that the GUI knows how to forward efficiently, just like existing OSes do, with the exception of trying to run a game over a network, which wouldn't work anyway.
We've been through this already - a lower level interface just pointlessly complicates everything and makes the system far less flexible (rather than confining the complications to the video driver); and there are multiple examples of similar things working perfectly fine over a network.
Rusky wrote:
Brendan wrote:For tessellation, I assume you just mean the standardised format for meshes allows a mesh to say "these polygon edges are smooth/curved" and the video driver can break it into as many or as few triangles as it feels like, using whatever method it wants. For GPU-side particle systems I don't know if you mean graphics (e.g. "simple object/sphere with location and direction per particle") or physics (which has nothing to do with graphics/rendering).
For tessellation in the context of Minecraft, I mean "the game sends only which voxels are filled, and the GPU generates the actual cubes' geometry, for massive bandwidth savings." For GPU-side particle systems I mean "a combination of relatively simple object rendering combined with physics in the same GPU program for zero bandwidth usage."
Erm. Minecraft has never and will never use voxels. It uses "blocks" (textured cubes) arranged in a regular grid.

For my system you'd create about 200 smaller objects (where each one is a cube representing one type of block); then you'd create larger objects ("chunks") by telling the video driver which of those smaller objects are where within the larger object. When block/s are added or removed you just tell the video driver what changed in which larger object/s.

Please note that Minecraft is "client/server", and when block/s are added or removed the server tells the client which block/s were added or removed, often over relatively slow Internet connections; and this "which block/s changed" information is quite similar to the information you'd be sending from game to video driver over much faster local networking.
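
As a sketch, that message traffic might look like this (the message names and layouts are invented for illustration; the point is that world changes are tiny deltas):

Code: Select all

#include <stdint.h>

/* Hypothetical messages for the scheme described above; all names
 * and layouts are invented for illustration. */

/* Sent once per block type (about 200 of them): "object N is a cube
 * with this mesh and this material". */
typedef struct {
    uint32_t msg_type;      /* e.g. MSG_DEFINE_OBJECT  */
    uint32_t object_id;     /* 1..200                  */
    uint32_t mesh_id;       /* the shared cube mesh    */
    uint32_t material_id;   /* per-block-type material */
} msg_define_block;

/* Sent whenever the world changes: "in chunk C, the cell at (x,y,z)
 * is now block B".  12 bytes per changed block - about the same as
 * what the Minecraft server already sends its clients. */
typedef struct {
    uint32_t msg_type;      /* e.g. MSG_SET_CELL        */
    uint32_t chunk_id;
    uint8_t  x, y, z;       /* cell within the chunk    */
    uint8_t  block_id;      /* 0 = air (block removed)  */
} msg_set_cell;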


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Post Reply