Concise Way to Describe Colour Spaces

Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

Brendan wrote:I think you're assuming that some sort of "portable byte code to GPU specific machine code" compiler exists. It doesn't. Someone would have to write it first.
There are actually already multiple portable GPU bytecodes- this is how shaders are defined. DirectX's HLSL has a bytecode, and Vulkan's SPIR-V is a bytecode designed to be a target for GLSL, HLSL, or whatever other language people want to write shader compilers for. GLSL is sort of an exception in that the driver accepts the shader source rather than a bytecode, but the principle is the same.
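The "portable bytecode" idea boils down to one blob that every driver lowers to its own ISA. A minimal sketch in Python, with the opcodes and backend names entirely invented for illustration (real SPIR-V is a binary format, not tuples):

```python
# Toy illustration of "one portable shader bytecode, many backends":
# the same portable blob is lowered differently per GPU driver.
# Opcodes and backend names are invented for illustration.

PORTABLE = [("mul", "r0", "r1"), ("add", "r0", "r2")]

def lower(bytecode, backend):
    """Each driver translates the same portable ops to its own ISA."""
    mnemonics = {
        "gpu_a": {"mul": "FMUL", "add": "FADD"},
        "gpu_b": {"mul": "v_mul_f32", "add": "v_add_f32"},
    }[backend]
    return [(mnemonics[op], *args) for op, *args in bytecode]

isa_a = lower(PORTABLE, "gpu_a")  # what vendor A's driver emits
isa_b = lower(PORTABLE, "gpu_b")  # what vendor B's driver emits
```

The application ships only `PORTABLE`; which lowering runs is the driver's business.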
Brendan wrote:I'm comparing "everything above the driver's abstraction" in both cases.
...which is exactly what "apples to oranges" means here, because you've moved the boundary of the driver in one case but not the other, and then ignored the fact that the boundary is actually irrelevant because you can still reuse the rendering engine anyway.
Brendan wrote:I have no reason to read anything beyond this. I refuse to support libraries for any reason whatsoever.
Please, feel free to replace "library" with "service" and read the rest. It makes no difference whatsoever to the API design, so your ignoring it is rather childish.
Brendan wrote:
  • If "whole process optimisation" is involved:
    • You've added a massive amount of bloat to every single process
    • You've forced a developer to wager their reputation on the quality of the library's code
    • It's not possible to upgrade/replace it later
  • If "whole process optimisation" is not involved:
    • Performance sucks due to poor optimisation
    • You've forced a developer to wager their reputation on the quality of the library's code, and that library is no longer under the developer's control
All false, because you've confused the concept of a low level API with the concept of a crappy library implementation, which (again) makes no difference whatsoever to the API design. And even if you were to use libraries, you would have several choices to remedy these problems:

Dynamic linking is the closest to services: The whole-program optimizer has the same amount of information and thus creates the same amount of "bloat" and "poor optimization;" The developer's reputation is "wagered" on something out of their control (yes, services do this too); Later replacement is possible.

Of course, you already want a portable bytecode, which is an even better solution: The whole-program optimizer can control exactly how much of the library to inline, to avoid bloat while improving performance beyond what's possible with services; Later replacement is even easier because distributed code is portable.
Brendan wrote:
  • You've assumed processes only ever send their video, and forgotten that normal processes have to understand the video sent by arbitrary processes, and either destroyed a massive amount of flexibility or added a massive amount of hassle for all processes that have to understand the "too low level" video sent by their children.
The only flexibility I've destroyed is the flexibility for the driver to be inconsistent, which I already mentioned as an advantage for you but a disadvantage for me. The only things you would possibly want to do to a game's video at a higher level than framebuffers are, to me, idiotic novelties. Even the idea of streaming higher-level data across a network to play a game remotely is much better served with game-specific protocols. The times you would want to manipulate things at a higher level are e.g. GUIs, which you already want a more specific protocol for than meshes/lights/camera.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:I think you're assuming that some sort of "portable byte code to GPU specific machine code" compiler exists. It doesn't. Someone would have to write it first.
There are actually already multiple portable GPU bytecodes- this is how shaders are defined. DirectX's HLSL has a bytecode, and Vulkan's SPIR-V is a bytecode designed to be a target for GLSL, HLSL, or whatever other language people want to write shader compilers for. GLSL is sort of an exception in that the driver accepts the shader source rather than a bytecode, but the principle is the same.
There are no "portable byte code to GPU specific machine code" compilers for my OS. I have to implement my software renderer (and some other stuff); then implement my IDE and "source to portable byte-code" compiler; then implement a "portable byte-code to native 80x86" compiler; then use my language and tool-chain to write "portable byte-code to GPU specific" compilers.
Rusky wrote:
Brendan wrote:I have no reason to read anything beyond this. I refuse to support libraries for any reason whatsoever.
Please, feel free to replace "library" with "service" and read the rest. It makes no difference whatsoever to the API design, so your ignoring it is rather childish.
Are you sure you want to do "(service->widget->service) => (service->app->service) => (service->GUI->service) -> ..." where each "->" and "=>" represents communication between separate processes (potentially running on separate computers), and each "=>" is the standard messaging protocol for video (that's necessary for flexibility/interoperability)? Why? Are you trying to triple communication costs while also trying to make it much harder for app to understand widget, and for GUI to understand app?
Rusky wrote:
Brendan wrote:
  • If "whole process optimisation" is involved:
    • You've added a massive amount of bloat to every single process
    • You've forced a developer to wager their reputation on the quality of the library's code
    • It's not possible to upgrade/replace it later
  • If "whole process optimisation" is not involved:
    • Performance sucks due to poor optimisation
    • You've forced a developer to wager their reputation on the quality of the library's code, and that library is no longer under the developer's control
All false, because you've confused the concept of a low level API with the concept of a crappy library implementation, which (again) makes no difference whatsoever to the API design. And even if you were to use libraries, you would have several choices to remedy these problems:

Dynamic linking is the closest to services: The whole-program optimizer has the same amount of information and thus creates the same amount of "bloat" and "poor optimization;" The developer's reputation is "wagered" on something out of their control (yes, services do this too); Later replacement is possible.
This just tells me that you don't understand what services are at some very fundamental level. If an FTP client is communicating with an FTP server on a different computer; would you say the FTP server is "close to" a dynamic library being used by the FTP client? Should someone that wrote an FTP client worry that the FTP server's code is beyond their control?
Rusky wrote:Of course, you already want a portable bytecode, which is an even better solution: The whole-program optimizer can control exactly how much of the library to inline, to avoid bloat while improving performance beyond what's possible with services; Later replacement is even easier because distributed code is portable.
The "portable byte code to native" compiler works as a file format converter. One file in, one file out. There's no way for it to handle dependencies, or to detect that a library has been replaced with malicious code before that code gets compiled into the native executable.
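The "one file in, one file out" model can be sketched as a pure function over file contents; everything below (the opcode table, the fake "native" encoding) is invented for illustration:

```python
# Toy sketch of a "portable byte code to native" compiler that acts as a
# pure file-format converter: it sees one input blob and produces one
# output blob, with no view of where the input came from.
# The opcodes and "native" encoding are invented for illustration.

TOY_OPS = {0x01: "ADD", 0x02: "SUB", 0x03: "RET"}

def compile_portable_to_native(portable: bytes) -> bytes:
    """Translate each portable opcode to a fake 'native' two-byte form.

    Because the converter only ever sees this one blob, it cannot tell
    whether a dependency it was linked against has since been replaced.
    """
    native = bytearray(b"NATV")          # fake native-executable magic
    for op in portable:
        if op not in TOY_OPS:
            raise ValueError(f"unknown opcode {op:#x}")
        native += bytes([0xF0, op])      # fake prefix + original opcode
    return bytes(native)

out = compile_portable_to_native(bytes([0x01, 0x02, 0x03]))
```

Note what the function's signature rules out: there is no parameter through which dependency metadata could even arrive.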
Rusky wrote:
Brendan wrote:
  • You've assumed processes only ever send their video, and forgotten that normal processes have to understand the video sent by arbitrary processes, and either destroyed a massive amount of flexibility or added a massive amount of hassle for all processes that have to understand the "too low level" video sent by their children.
The only flexibility I've destroyed is the flexibility for the driver to be inconsistent, which I already mentioned as an advantage for you but a disadvantage for me. The only things you would possibly want to do to a game's video at a higher level than framebuffers are, to me, idiotic novelties. Even the idea of streaming higher-level data across a network to play a game remotely is much better served with game-specific protocols. The times you would want to manipulate things at a higher level are e.g. GUIs, which you already want a more specific protocol for than meshes/lights/camera.
Sure; and in the same way Unix shouldn't support "ls | more" because the flexibility of pipes can only be used for idiotic novelties; "web apps" should be implemented as libraries because networking is too slow for remote applications; and games should use a low level interface except when they're running in a window because GUIs need something even higher level than my high level interface.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

From your wording I assumed you were claiming that no portable GPU bytecode existed at all, but yes- for your OS you would need to create your own.

I don't see why you're complaining about communication costs. Why are you sending video messages from the widget to the app? Why are you surrounding everything with a service on both sides? Why do you think it would help anything scale to run widgets on a separate machine from both the app and its GUI?

I also understand services fine. FTP servers provide a very different service from a renderer. FTP explicitly represents a remote location to store and retrieve data. A renderer does nothing of the sort - it just acts as a data sink for the application, which hopes the data will result in output to the user, and that is exactly what a renderer in a dynamic library would be. FTP clients and servers do need to worry about security in case of connections with malicious hosts, on both ends.
Brendan wrote:Sure; and in the same way Unix shouldn't support "ls | more" because the flexibility of pipes can only be used for idiotic novelties; "web apps" should be implemented as libraries because networking is too slow for remote applications; and games should use a low level interface except when they're running in a window because GUIs need something even higher level than my high level interface.
Please enlighten us: provide a less idiotic use of intercepting application rendering at the level of meshes/materials/cameras than adding icicles to the window. Otherwise all we have to go on is your unsubstantiated claims that higher level graphics interfaces are somehow "more flexible," because everything you've come up with is either already possible with a low-level interface, or is a tradeoff with some other feature that you can't do.
Antti
Member
Posts: 923
Joined: Thu Jul 05, 2012 5:12 am
Location: Finland

Re: Concise Way to Describe Colour Spaces

Post by Antti »

Brendan wrote:Note: For my OS, every developer gets a key, and every process they write is digitally signed with the developers key. By signing their code they take full responsibility for that process. If one of their processes does anything malicious the developers key is revoked, and the OS refuses to execute any of the code they've ever written. There are no excuses.
This policy may be a little too strict. What if there were a good software company writing applications (definitely non-malware), but for some reason the company is acquired by another company that likes to add malware to its products? The obvious solution is to say that a new key is required. However, it may be difficult to identify when a new key is required if everything is technically the same. This "no excuses" policy could be used against itself, i.e. to destroy a whole history of good products.

That is why "no excuses" should be reconsidered.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:I don't see why you're complaining about communication costs.
I'm complaining about pointless communication costs that achieve nothing (other than allowing the sender to use a higher level protocol when the standard protocol between pieces is a different "too low level" protocol that nothing wanted).
Rusky wrote:Why are you surrounding everything with a service on both sides?
I surrounded everything with a service on both sides because it's the only way your "service" idea makes any sense at all. Basically:
  • Process #1 sends its video using "high level non-standard protocol #1" to service #1
  • Service #1 converts it into "standard protocol required for interoperability" and sends that to service #2
  • Service #2 converts "standard protocol" into "high level non-standard protocol #2" and sends it to process #2
  • Process #2 sends its video (which includes process #1's video) using "high level non-standard protocol #2" to service #2
  • Service #2 converts it into "standard protocol" and sends that to service #3
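The cost of that chain can be made concrete by counting hops and conversions. This is a toy model of the bullet list above (all node and protocol names invented), not any real messaging API:

```python
# Toy model of the service-wrapped pipeline: every arrow is an
# inter-process hop, and every protocol change is a conversion step.
# Node and protocol names are invented for illustration.

def count_costs(chain):
    """Given (node, protocol_spoken) pairs, return (hops, conversions)."""
    hops = len(chain) - 1
    conversions = sum(
        1 for (_, a), (_, b) in zip(chain, chain[1:]) if a != b
    )
    return hops, conversions

# Direct design: each process speaks the one standard protocol.
direct = [("widget", "std"), ("app", "std"), ("gui", "std"), ("driver", "std")]

# Service-wrapped design: a converter service sits on both sides of every
# process, re-encoding between non-standard and standard protocols.
wrapped = [("widget", "hl1"), ("svc1", "std"), ("svc2", "hl2"),
           ("app", "hl2"), ("svc2", "std"), ("svc3", "hl3"),
           ("gui", "hl3"), ("svc3", "std"), ("driver", "std")]

direct_cost = count_costs(direct)    # (3 hops, 0 conversions)
wrapped_cost = count_costs(wrapped)  # (8 hops, 5 conversions)
```

The wrapped chain pays roughly triple the hops before a frame ever reaches the driver, which is the "pointless communication cost" at issue.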
Rusky wrote:Why are you sending video messages from the widget to the app?
Widget sends its video to its parent (the app) and the parent modifies it if necessary and "embeds" the widgets video into its own video output however it wants. This means that if (e.g.) you want to tell a GUI to talk to a different video driver you only have to tell the GUI to talk to a different video driver (because communication between GUI and video driver includes all video from GUI's children, all GUI's children's children and so on) and don't have to tell 50 different things scattered everywhere (the GUI and all its children and all its children's children) to all change from one video driver to another (with race conditions when widgets switch before GUI does).
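The "only the root talks to a driver" property can be sketched as a tree: children hand their video to their parent, and the single driver reference lives at the root, so retargeting is one change with no race. The classes and the string "video format" below are invented for illustration:

```python
# Sketch of "children send video to their parent, only the root talks to
# a driver": switching drivers is one change at the root, with no race
# between a widget and its GUI. All names are invented for illustration.

class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

    def render(self):
        # The parent embeds (and could modify) each child's video.
        inner = "".join(c.render() for c in self.children)
        return f"<{self.name}>{inner}</{self.name}>"

class Gui(Node):
    def __init__(self, name, driver, children=()):
        super().__init__(name, children)
        self.driver = driver           # the ONLY driver reference anywhere

    def present(self):
        return f"{self.driver}:{self.render()}"

gui = Gui("gui", "driver-A", [Node("app", [Node("widget")])])
frame1 = gui.present()
gui.driver = "driver-B"                # one change retargets everything
frame2 = gui.present()
```

Because the widget and app never name a driver, there is nothing in them to update (or to race) when the GUI switches.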
Rusky wrote:Why do you think it would help anything scale to run widgets on a separate machine from both the app and its GUI?
There is no difference between widgets, applications, games, GUIs, etc. They are all just processes that communicate using the same "user interface" messaging protocol. That "user interface" messaging protocol is designed for a distributed OS - to allow any process using it; whether it's a widget, app, game, GUI or whatever; to be running on any computer to distribute the load across many computers; which is the entire point of having a distributed system in the first place. Whether or not it makes sense in any specific situation is a decision that's made dynamically by the OS (and not by me, or you, or by processes) where this decision takes into account current load and communication costs. Because the widget only talks to the app and typically doesn't use too much CPU or memory it's likely the OS will put it on the same computer as the app; but if that computer is overloaded and there's a high speed network connection to another computer that's doing almost nothing then the OS won't put the widgets on the same "already overloaded" computer as the app.
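That placement decision can be sketched as a simple scoring heuristic; the numbers and the scoring formula below are invented for illustration (a real scheduler would weigh far more than two terms):

```python
# Toy version of the placement decision described above: the OS, not the
# developer, picks a machine for a new process based on current load and
# link cost. Numbers and the scoring formula are invented for illustration.

def place(process_cost, machine_load, link_cost):
    """Pick the machine with the lowest (load + communication) score."""
    def score(m):
        return machine_load[m] + process_cost + link_cost[m]
    return min(machine_load, key=score)

link_cost = {"local": 0.0, "remote": 0.4}   # cost of talking to the app

# Normally a cheap widget lands next to the app...
near = place(0.05, {"local": 0.2, "remote": 0.1}, link_cost)   # "local"

# ...but an overloaded machine pushes it across the network.
far = place(0.05, {"local": 0.9, "remote": 0.1}, link_cost)    # "remote"
```

The widget itself never participates in the choice; only the inputs to `score` change between the two calls.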
Rusky wrote:I also understand services fine. FTP servers provide a very different service from a renderer. FTP explicitly represents a remote location to store and retrieve data. A renderer does nothing of the sort- it just acts as a data sink for the application that it hopes will result in output to the user, which is exactly what a render in a dynamic library would be. FTP clients and servers do need to worry about security in case of connections with malicious hosts, on both ends.
Yes; FTP is designed for and used for a different purpose; but in both cases (services and FTP servers) there's strong isolation between separate processes, unlike the "zero isolation" between a process and any libraries it uses; and this isolation makes it far easier to determine which piece was at fault or is malicious. With libraries it's almost impossible to determine whether the library was to blame without examining the source code for both the process and all its libraries.
Rusky wrote:
Brendan wrote:Sure; and in the same way Unix shouldn't support "ls | more" because the flexibility of pipes can only be used for idiotic novelties; "web apps" should be implemented as libraries because networking is too slow for remote applications; and games should use a low level interface except when they're running in a window because GUIs need something even higher level than my high level interface.
Please enlighten us: provide a less idiotic use of intercepting application rendering at the level of meshes/materials/cameras than adding icicles to the window. Otherwise all we have to go on is your unsubstantiated claims that higher level graphics interfaces are somehow "more flexible," because everything you've come up with is either already possible with a low-level interface, or is a tradeoff with some other feature that you can't do.
Open a word processor and type some stuff, then "drag and drop" any file (of any type) into the document. The end result is an application (e.g. bitmap image editor, a movie player, a CAD program, whatever) embedded into another application (the word processor). Can you guess how this will work? Can you guess how the word processor will hide the child process' toolbars, menus, etc. when the user isn't editing the embedded file; or deform or modify the other application's video for the sake of "artistic style"?

Of course it doesn't matter how many examples I provide (and it'd be foolish to assume I can foresee all of the things people might use the flexibility for); because every single time I provide any example you just add "yet another layer of hassles" on top of your massive library of bloat just to pretend it's possible, and you'll always fail to see any difference between a clean/elegant solution and the "tower of work-arounds" you've ended up with.


Cheers,

Brendan
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

You're the one who inserted a bunch of non-standard protocols into the example. Nothing about my proposed design requires non-standard protocols, especially if you're already willing to move the renderer into the OS (as you are), because you can just standardize the high level protocol.

My main point is that, while you usually use a high level API, you also occasionally need a low-level one, and that the benefits (sharing the code for the high level API; consistent behavior across hardware; the ability to replace parts of the high level API) outweigh the drawbacks (difficulty targeting deprecated hardware; having to deal with two protocols).

Your word processor example can be done without adding anything, let alone your mythical "hassle," to this system. GUI widgets already use a widget protocol, and the main application can use whatever protocol it likes, because all the word processor needs besides GUI control is to give the application a virtual display that outputs to the provided section of the document.

What you see as workarounds, I see as far more elegant. I see your system as a wall barring developers from doing what they need, and thus requiring far more workarounds than my design ever does.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:You're the one who inserted a bunch of non-standard protocols into the example. Nothing about my proposed design requires non-standard protocols, especially if you're already willing to move the renderer into the OS (as you are), because you can just standardize the high level protocol.
From my point of view...

I say I plan to use a standard high level protocol for everything. You suggested it'd be better to use a low level protocol instead; and I pointed out multiple cases where that's bad. You suggested hiding support for those cases in a library; and I mention libraries are bad and won't be supported. You suggested using services instead of libraries; and I pointed out that services are isolated processes and this would increase communication costs. Now you're saying "you can just standardize the high level protocol", which is what I was originally planning.
Rusky wrote:My main point is that, while you usually use a high level API, you also occasionally need a low-level one, and that the benefits (sharing the code for the high level API; consistent behavior across hardware; the ability to replace parts of the high level API) outweigh the drawbacks (difficulty targeting deprecated hardware; having to deal with two protocols).
This is "all processes communicate with a low level protocol" in disguise; where "sharing the code for the high level API" means either cut&paste or libraries or services.

Note that there are many significant drawbacks. It's not just deprecated hardware, it's future hardware too; it's the practicality of implementing it before we all die of old age; it's the fact that I want to push it through "not necessarily awesome" network connections and need to minimise bandwidth (by using "higher level"); it's the fact that (eventually) I'll be trying to attract volunteers when the OS is using "far slower than GPUs" software rendering; and it's the fact that "no better than existing crud" doesn't make a good marketing slogan.
Rusky wrote:Your word processor example can be done without adding anything, let along your mythical "hassle," to this system. GUI widgets already use a widget protocol, and the main application can use whatever protocol it likes because all the word processor needs besides GUI control is to give the application a virtual display that outputs to the provided section of the document.
You're still suffering from the "incapable of imagining anything Windows doesn't do" curse.

You embed a bitmap editor into a word processor document, and the word processor hides the bitmap editor's menus, etc. without the bitmap editor having any idea that it's running inside another app (and without any extra "hide/show the menus" signalling that every application has to support). Then you install an application for designing/decorating lounge rooms, and embed the word processor document in a virtual lounge room, and the "lounge room designer" application adds a picture frame and reflective surface so it looks like that document is in a picture frame hanging on the wall of the mock-up of your lounge room. The word processor has no idea that it's running inside another app.

Then you install a "movie editor" application and connect the "lounge room designer" app to it and start splicing pictures of your lounge room into a promotional video you're creating (with fancy effects when changing from one video source to another), and you set that up to play the video you're creating in a loop. Then you start designing a 3D game and you've got a little shop in your virtual world with 20 different TVs all lined up along a wall, and decide to pump the video from that "movie editor" application onto those 20 different virtual TVs in your 3D game, and then you add additional graphics to it (e.g. "20% off" so it looks like a sticker that's stuck to the TV screen). While you're building the game you can zoom in on one of those 20 virtual TVs and use the bitmap editor that's inside the word processor that's inside the lounge designer that's inside the movie editor; and when you're happy with it you tell the movie editor to save the movie and tell the game to use the resulting file instead. Then you test the game and get to the part where the boss monster is trashing the little shop in your virtual world, and you like it, so you press the "print screen" key on your keyboard and email the resulting file to a friend.

Your friend likes it too, so he sends it to his 3D printer, and in the "print properties" dialog box he selects the boss monster and tells it to discard all the background, creating a little plastic statue of your boss monster.


Cheers,

Brendan
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

I never suggested removing the high-level API, or even making it non-standard. I've never even really cared whether it was in a library or a service, I just tend to use the word "library" for "shared functionality." From the start, my point has been that there needs to be a low-level API to give applications more control.

As you've imagined up drawbacks in having a low-level API, I've shown how such a system can still support future hardware, why it takes less code to implement, that free-form distributed rendering is a bad idea for games, and how a low-level API can be an improvement over existing systems. You've just ignored it all because you're suffering from the "incapable of imagining anything a high-level API can't do" curse.

By compiling existing shaders to use x86 SIMD instructions, the LLVMPipe driver can run compositing window managers that previously required hardware acceleration (and even some low-power games). Speaking of compositing: embedding applications in each other seamlessly, including fancy effects and overlays, has been done for a very long time, and hiding menus requires GUI-level information in the protocol regardless.

In fact, the only part of your convoluted example that hasn't already been done is the part where "print screen" actually means "save a 3D model," and even that has already been done, just without replacing the "print screen" functionality, because 3D models are not a good default when you really want an image.

Note that most of this has never been done in one place, which is the real benefit a system like yours would provide- color handling, file formats, good APIs as protocols, etc.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:By compiling existing shaders to use x86 SIMD instructions, the LLVMPipe driver can run compositing window managers that previously required hardware acceleration (and even some low-power games). Speaking of compositing: embedding applications in each other seamlessly, including fancy effects and overlays, has been done for a very long time, and hiding menus requires GUI-level information in the protocol regardless.
Embedding applications into each other has been done, but hasn't really been done seamlessly. The only case I've seen is "embedded application is flat plane" where the parent isn't able to change anything within that 2D plane. For what I'm talking about, one process can create a "3D terrain with trees" scene, the next process can add "ground fog" in between the objects in that 3D scene, the next process can add lighting, the next process can replace the original process' grass texture, and so on.

Note that each object that responds to mouse clicks has an ID, and when the user clicks a mouse button something asks the video driver to report the ID for the object at "screen co-ords where mouse pointer happened to be". Hiding menus means finding objects within a 3D scene that consist of "short string of text" sub-objects that have IDs needed for mouse click, and discarding the object (and its sub-objects). Basically; it's just an object filter that works on simple heuristics; in the same way that "keep boss monster and discard background" (or even "keep background and discard boss monster that was occluding background") is just an object filter.
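The ID-based picking and the "hide the menus" filter can be sketched together; the scene structure, rectangle-based hit test, and the 16-character threshold below are all invented for illustration (a real driver would pick against the rendered 3D scene, not flat rects):

```python
# Sketch of ID-based picking plus heuristic object filtering: the driver
# maps a click position to an object ID, and "hide the menus" is a filter
# that drops objects made of short text sub-objects.
# Scene structure and thresholds are invented for illustration.

class Obj:
    def __init__(self, oid, rect=None, text=None, children=()):
        self.oid, self.rect, self.text = oid, rect, text
        self.children = list(children)

def pick(objs, x, y):
    """Return the ID of the object whose rect contains (x, y), if any."""
    for o in objs:
        if o.rect:
            x0, y0, x1, y1 = o.rect
            if x0 <= x < x1 and y0 <= y < y1:
                return o.oid
    return None

def looks_like_menu(o):
    # Heuristic: has children, all of which are short strings of text.
    return bool(o.children) and all(
        c.text is not None and len(c.text) < 16 for c in o.children
    )

def hide_menus(objs):
    return [o for o in objs if not looks_like_menu(o)]

scene = [
    Obj("canvas", rect=(0, 0, 800, 600)),
    Obj("menubar", children=[Obj("m1", text="File"), Obj("m2", text="Edit")]),
]
```

The "keep boss monster, discard background" case would be the same `hide_menus` shape with a different predicate.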
Rusky wrote:In fact, the only part of your convoluted example that hasn't already been done is the part where "print screen" actually means "save a 3D model," and even that has already been done, just without replacing the "print screen" functionality, because 3D models are not a good default when you really want an image.
Yes; for my OS "print screen" won't literally print the screen (e.g. save pixel data from the screen) and will store details for the higher level stuff that was used to render the scene instead (partly so that it's entirely device independent, and partly so that it can be rendered with a "not real-time" renderer in extremely high detail). Applications won't have to do anything to support this either.
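The "capture the description, not the pixels" idea can be sketched as serialising the scene graph; the JSON format and field names below are invented for illustration:

```python
# Sketch of "print screen" capturing the scene description rather than
# pixels: the saved file is device-independent and can be re-rendered
# later at any quality. The scene format is invented for illustration.

import json

def print_screen(scene, camera):
    """Serialise the high-level scene instead of reading back pixels."""
    return json.dumps({"camera": camera, "objects": scene}, sort_keys=True)

scene = [
    {"id": "boss_monster", "mesh": "boss.mesh", "pos": [0, 0, 5]},
    {"id": "shop", "mesh": "shop.mesh", "pos": [0, 0, 0]},
]
snapshot = print_screen(scene, {"pos": [0, 1, -3], "fov": 90})

# A later tool can still filter objects (e.g. keep only the boss monster
# for 3D printing) because the structure survived the capture.
kept = [o for o in json.loads(snapshot)["objects"]
        if o["id"] == "boss_monster"]
```

A pixel screenshot would make the `kept` filter impossible; that is the whole argument for capturing at this level.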

Note: for simplicity, (in some places but not others) I've been pretending that GUIs talk to the video driver. This isn't true, GUIs talk to a "virtual terminal layer" and this "virtual terminal layer" is the only thing that talks directly to (video, sound, keyboard, mouse) drivers. The "print screen" key will be handled by this "virtual terminal layer" (in addition to a few other things - e.g. "control+alt+delete", switching between different virtual terminals/GUIs, etc).
Rusky wrote:Note that most of this has never been done in one place, which is the real benefit a system like yours would provide- color handling, file formats, good APIs as protocols, etc.
Hopefully ;)


Cheers,

Brendan
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

Brendan wrote:Embedding applications into each other has been done, but hasn't really been done seamlessly. The only case I've seen is "embedded application is flat plane" where the parent isn't able to change anything within that 2D plane. For what I'm talking about, one process can create a "3D terrain with trees" scene, the next process can add "ground fog" in between the objects in that 3D scene, the next process can add lighting, the next process can replace the original process' grass texture, and so on.
None of your earlier examples suggested you wanted to do anything to high-level scenes besides render them at different levels of quality. I don't think it's a great idea to restrict what games can do just so you can modify them at that level on the fly, although it would be possible even for games that bypassed parts of the high level API, and it has been done in e.g. game console emulators.
Brendan wrote:Note that each object that responds to mouse clicks has an ID, and when the user clicks a mouse button something asks the video driver to report the ID for the object at "screen co-ords where mouse pointer happened to be". Hiding menus means finding objects within a 3D scene that consist of "short string of text" sub-objects that have IDs needed for mouse click, and discarding the object (and its sub-objects). Basically; it's just an object filter that works on simple heuristics; in the same way that "keep boss monster and discard background" (or even "keep background and discard boss monster that was occluding background") is just an object filter.
So you do include GUI information in the protocol - which objects are clickable and have "short strings of text" (I can think of plenty of ways this heuristic would backfire, and I suspect you'll want to tag objects as part of a GUI anyway in the end). Think also about distortion effects, either in-world or from post-processing (or are you going to ban all those, too?); suddenly you have to decide on a per-object basis whether to ignore the distortion effect (for most in-world operations) or map the mouse's position through it (for more GUI-like behavior).
Brendan wrote:Yes; for my OS "print screen" won't literally print the screen (e.g. save pixel data from the screen) and will store details for the higher level stuff that was used to render the scene instead (partly so that it's entirely device independent, and partly so that it can be rendered with a "not real-time" renderer in extremely high detail). Applications won't have to do anything to support this either.
This would be a great feature to support, much like current graphics debuggers. It does, again, mean that any rendering techniques that aren't expressible in the high-level protocol wouldn't be fully supported, but I think partially supporting those techniques (i.e. only at the level of detail the application provides) is better than not supporting them at all.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:Embedding applications into each other has been done, but hasn't really been done seamlessly. The only case I've seen is "embedded application is flat plane" where the parent isn't able to change anything within that 2D plane. For what I'm talking about, one process can create a "3D terrain with trees" scene, the next process can add "ground fog" in between the objects in that 3D scene, the next process can add lighting, the next process can replace the original process' grass texture, and so on.
None of your earlier examples suggested you wanted to do anything to high-level scenes besides render them at different levels of quality. I don't think it's a great idea to restrict what games can do just so you can modify them at that level on the fly, although it would be possible even for games that bypassed parts of the high level API, and it has been done in e.g. game console emulators.
I want games to be built on this - e.g. where you have a process for each thing or each type of thing; and a "world process" that manages all the "thing processes" and handles global state, the user interface, etc. (where the "world process" combines the video from all its children).

More specifically; I (literally) want people to be able to create a "chicken process" that simulates a chicken (its AI, animation, appearance, sounds); where someone might create the chicken simply because they felt like it, and might have it running around their GUI (outside of any game). I want the OS to have part of the file system dedicated to an "asset archive" to store all of these (where the chicken might be "/assets/animals/chickens/Rusky's_chicken" in the file system); where a game developer can use any of the assets to construct a game and only has to (e.g.) put the chicken in a world, without needing to create the graphics, AI, etc. themselves; and where the same chicken can be used in 10 games made by 10 different people; and where replacing that chicken with something better replaces it in all games.

Of course at this stage it's just a concept; and there's a large number of details that would need to be sorted out - e.g. physics, transformations (e.g. "live chicken (animal) -> raw chicken (ingredient) -> roast chicken (food)"), and meta-data (to allow games to dynamically select suitable objects that match certain criteria).
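Since it's just a concept, a minimal sketch may help. Here's one hypothetical way the "asset archive plus metadata" idea could work (all paths, fields, and names below are invented for illustration - the post doesn't specify any of this): assets live at file-system paths and carry metadata, so a game can select a suitable object by criteria instead of hard-coding one asset.

```python
# Hypothetical sketch of the "asset archive" idea: assets live at paths like
# /assets/animals/chickens/Rusky's_chicken and carry metadata, so a game can
# dynamically select objects matching certain criteria. All metadata fields
# here are invented for illustration.

ASSETS = {
    "/assets/animals/chickens/Rusky's_chicken": {
        "kind": "animal", "species": "chicken", "tags": {"farm", "bird"}},
    "/assets/animals/cows/generic_cow": {
        "kind": "animal", "species": "cow", "tags": {"farm"}},
    "/assets/plants/oak_tree": {
        "kind": "plant", "species": "oak", "tags": {"forest"}},
}

def find_assets(kind=None, tags=()):
    """Return asset paths whose metadata matches all given criteria."""
    results = []
    for path, meta in ASSETS.items():
        if kind is not None and meta["kind"] != kind:
            continue
        if not set(tags) <= meta["tags"]:  # every requested tag must match
            continue
        results.append(path)
    return sorted(results)

# A game asking for "any farm animal" gets both the chicken and the cow;
# replacing the chicken asset later would upgrade every game that uses it.
print(find_assets(kind="animal", tags={"farm"}))
```

Note that this only covers selection; the physics and transformation questions ("live chicken -> raw chicken -> roast chicken") are a separate, harder problem.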
Rusky wrote:
Brendan wrote:Note that each object that responds to mouse clicks has an ID, and when the user clicks a mouse button something asks the video driver to report the ID for the object at "screen co-ords where mouse pointer happened to be". Hiding menus means finding objects within a 3D scene that consist of "short string of text" sub-objects that have IDs needed for mouse click, and discarding the object (and its sub-objects). Basically; it's just an object filter that works on simple heuristics; in the same way that "keep boss monster and discard background" (or even "keep background and discard boss monster that was occluding background") is just an object filter.
So you do include GUI information in the protocol - which objects are clickable and have "short strings of text" (I can think of plenty of ways this heuristic would backfire, and I suspect you'll want to tag objects as part of a GUI anyway in the end). Think also about distortion effects, either in-world or from post-processing (or are you going to ban all those, too?); suddenly you have to decide on a per-object basis whether to ignore the distortion effect (for most in-world operations) or map the mouse's position through it (for more GUI-like behavior).
I don't really consider that information "GUI information"; mostly because I stole the idea from 3D OpenGL games (where it's called "picking"). The basic idea (in case anyone doesn't already know) is to render the scene to a tiny (1-pixel) view port, but instead of using colours and textures you use "object IDs". The nice thing about this is that it copes with any transformations/mutations/distortions that are applied during rendering.
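The picking idea can be sketched without a GPU at all. The following is a minimal software version (not OpenGL - just an illustration of the same principle, with invented object shapes): render each object's integer ID into a buffer instead of its colours, then resolve a mouse click by reading the ID at the cursor position. Because the IDs go through the same "rendering" path as the pixels, occlusion is handled for free.

```python
# Minimal software sketch of "picking" via an ID buffer. Each object is
# drawn into the buffer using its integer ID instead of a colour; a mouse
# click is resolved by reading back the ID under the cursor. Objects here
# are axis-aligned rectangles (object_id, x, y, w, h) purely for
# illustration; a real renderer would rasterise actual geometry.

WIDTH, HEIGHT = 8, 8

def render_id_buffer(objects, width=WIDTH, height=HEIGHT):
    """Rasterise object IDs into a 2D buffer; 0 means 'no object'.
    Objects are drawn in order, so later objects occlude earlier ones,
    just as a depth-sorted renderer would."""
    buf = [[0] * width for _ in range(height)]
    for obj_id, x, y, w, h in objects:
        for py in range(max(0, y), min(height, y + h)):
            for px in range(max(0, x), min(width, x + w)):
                buf[py][px] = obj_id
    return buf

def pick(buf, mouse_x, mouse_y):
    """Return the ID of the object under the cursor (0 if none)."""
    return buf[mouse_y][mouse_x]

# "Background" (ID 1) covering everything; "boss monster" (ID 2) in front.
scene = [(1, 0, 0, 8, 8), (2, 3, 3, 2, 2)]
buf = render_id_buffer(scene)
print(pick(buf, 4, 4))  # → 2 (the monster occludes the background here)
print(pick(buf, 0, 0))  # → 1 (background)
```

In real OpenGL-style picking the ID buffer is produced by a second render pass (or a tiny viewport under the cursor), which is why arbitrary transformations and distortions are handled automatically: whatever the main pass does to the pixels, the picking pass does to the IDs.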


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Rusky
Member
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

Brendan wrote:I (literally) want people to be able to create a "chicken process" that simulates a chicken (its AI, animation, appearance, sounds);
I can honestly say that as an application and game developer, I would never use this, nor would any other competent developer.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:I (literally) want people to be able to create a "chicken process" that simulates a chicken (its AI, animation, appearance, sounds);
I can honestly say that as an application and game developer, I would never use this, nor would any other competent developer.
I didn't realise there were any competent game developers. They're all crippling their games due to limited time and/or limited resources; and the only reason they can get away with it is that consumers are used to things like loading screens, and time freezing everywhere in the world except where the player is, and max. limits on the number of entities, and NPCs that keep bumping into each other, and things (plants, animals, people) that don't grow for ages and then suddenly become mature in an instant (or never age at all), and "medical miracles" where wounds just disappear, and "worlds" that are limited to the size of a small town, and space ships that can fly through planets unharmed if they're moving fast enough, and dilapidated wooden shacks that can withstand the blast of a rocket propelled grenade, and flame-throwers being used in book shops where nothing catches fire.

To be perfectly honest; I'm not even sure I want games (plural). I just want one game.

For this game, the player is a white sphere (no arms or legs, no body, no face) and starts in a square room that only has 2 things in it - a portal, and a little computer/controller on the wall next to the portal that doesn't seem to do anything. Through the portal you can see a world, and if you go through the portal you end up in that world and the portal disappears behind you. That world will be a perfectly flat brown surface with a plain blue sky, and nothing else (no hills, no buildings, no plants, no people, no weather, no day, no night, nothing). The player will have a button they can press to return to the square portal room, and this is the only thing they're able to do.

It will be the only game for the OS, and it will be the most boring game that has ever existed (but it'll support extensions).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
User avatar
Rusky
Member
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

Brendan wrote:time freezing everywhere in the world except where the player is, and max. limits on the number of entities, and NPCs that keep bumping into each other, and things (plants, animals, people) that don't grow for ages and then suddenly become mature in an instant (or never age at all), and "medical miracles" where wounds just disappear, and "worlds" that are limited to the size of a small town, and space ships that can fly through planets unharmed if they're moving fast enough, and dilapidated wooden shacks that can withstand the blast of a rocket propelled grenade, and flame-throwers being used in book shops where nothing catches fire.
These are not actually problems. Focusing on them is to miss the point of games.
User avatar
Brendan
Member
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
Rusky wrote:
Brendan wrote:time freezing everywhere in the world except where the player is, and max. limits on the number of entities, and NPCs that keep bumping into each other, and things (plants, animals, people) that don't grow for ages and then suddenly become mature in an instant (or never age at all), and "medical miracles" where wounds just disappear, and "worlds" that are limited to the size of a small town, and space ships that can fly through planets unharmed if they're moving fast enough, and dilapidated wooden shacks that can withstand the blast of a rocket propelled grenade, and flame-throwers being used in book shops where nothing catches fire.
These are not actually problems. Focusing on them is to miss the point of games.
These are problems. They're all things that break immersion, or cause the player to get annoyed/disappointed.

They're all things that game developers should be trying to do better (even for "single computer"); but do game developers even try? No. Most games are just the same tired old crud from 10 years ago with all the same mistakes (just with slightly better graphics than last time). It's like the entire industry reached "average" and gave up.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Post Reply