Concise Way to Describe Colour Spaces

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

And a question about color representation. Maybe somebody can clear up the picture with colors a bit? Colors are represented with "color space" attributes, while in reality (physically) a color is a set of wavelengths of electromagnetic emission. So why not represent colors as a set of wavelengths? Is it less intuitive? Yes, but as a universal color description, hidden behind the software that uses colors, such a representation makes sense. And its advantage is its generality: it allows us to represent any imaginable color in any situation, whether it's a monitor's pixels or a printed piece of paper under ambient light of known wavelength(s). Note: for printed colors there will be a need for the light's characteristics, but traditional color spaces have the same need.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

embryo2 wrote:What is the difference between Rust's destructors and Java's catch clause?
The difference is Rust destructors only have the object they're destructing in scope, and they do not allow the program to continue execution. Java catch clauses have all the potentially-corrupted state in scope and allow execution to continue afterwards. Rust eliminates the possibility of accessing partially-modified state when an exception is thrown in the middle of a critical section, while Java does not.
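A minimal sketch of that difference (the Transaction type and the panic are made up; catch_unwind is only there so the example runs to completion):

```rust
use std::panic;

struct Transaction {
    committed: bool,
}

impl Drop for Transaction {
    // The destructor sees only `self`; it can't reach the caller's other
    // locals, and execution doesn't resume in the interrupted scope.
    fn drop(&mut self) {
        if !self.committed {
            println!("rolling back uncommitted transaction");
        }
    }
}

fn transfer() {
    let _tx = Transaction { committed: false };
    panic!("failure mid-operation"); // unwinding still runs the destructor
}

fn main() {
    let _ = panic::catch_unwind(transfer);
}
```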
embryo2 wrote:In Java it is considered a bad practice to add "throws Exception" because it's too general and hides many possible variants of application behavior.
The fact that it's bad practice proves that it would be better just to have a language where it's impossible to write "throws Exception" (or its equivalent, RuntimeException-without-a-catch-clause) in the first place.

At this point, your argument has become full of complete falsehoods:
embryo2 wrote:In case of exception handlers the situation is more intuitive than in case of destructors, because the developer sees the problem (exception) and has all means to work accordingly.
Wrong. The developer only sees the (non-checked) exception at runtime when it's triggered, or if they remember to read the documentation. With in-band errors like Result, the developer gets a compiler error if they forget to handle them.
embryo2 wrote:So, it looks like a complete analogy to the one catch per one exception type in Java. The difference is only in the form of a textual representation.
Absolutely false. The analog in Java would be one catch per expression, not per exception type.
embryo2 wrote:There's no need for Result in Java. When we work with the code in the try clause we just assume that everything is ok, but in the separate section of the catch clause we pay attention to the cases when something is not ok. In the case of your example with Rust the separate section is still required, but wasn't shown. So, in Java we have a clear separation of concerns (cases when everything is ok and cases when something is bad) while in Rust we see just the case when everything is ok, while losing the case when something goes wrong.
False. Rust programs must always handle both cases, while Java has the option of leaving out the catch clause and thus not handling the error case. The difference is that Rust's error handling is better-structured than Java's, so common types of error handling (like using a default value, or aborting the program) can be moved into functions (like unwrap_or) to be reused.
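For example (unwrap_or and expect are real Result methods; read_port is a made-up function):

```rust
// In-band errors: the failure case is part of the return type.
fn read_port(name: &str) -> Result<u16, String> {
    name.parse::<u16>().map_err(|e| e.to_string())
}

fn main() {
    // Common handling strategies become reusable one-liners:
    let port = read_port("8080").unwrap_or(80);        // fall back to a default
    let or_die = read_port("8080").expect("bad port"); // abort with a message
    println!("{} {}", port, or_die);
}
```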
embryo2 wrote:So, why not to represent colors as a set of wave lengths?
Because most colors we see are actually combinations of several wavelengths, this would make the color representation variably-sized.
AndrewAPrice
Member
Posts: 2303
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Concise Way to Describe Colour Spaces

Post by AndrewAPrice »

Brendan wrote:I've done some checking; and as far as I know you can do the exact same operations on XYZ colours.

For example; if you add the 2 sRGB colours [0.2, 0.4, 0.6] and [0.7, 0.3, 0.1] together you get [0.9, 0.7, 0.7]. If you convert the colours from RGB into XYZ you get the XYZ colours [0.333784, 0.371900, 0.621726] and [0.414036, 0.370634, 0.144322], adding them together as XYZ gives you [0.74782, 0.742534, 0.766048], and converting from XYZ back into sRGB gives [0.900000, 0.699999, 0.700000].

For another example, for the sRGB colour [0.2, 0.4, 0.6] if you take 25% of it you get the result [0.05, 0.1, 0.15]. If you convert the colour from RGB into XYZ you get the XYZ colour [0.333784, 0.371900, 0.621726], taking 25% of that gives you [0.083446, 0.092975, 0.1554315], and converting from XYZ back into sRGB gives [0.05, 0.1, 0.15].

Basically, the result is identical for both cases, regardless of whether you do the operation on sRGB colours or with XYZ colours.
I did not know that! Thank you for teaching me something new!
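For anyone who wants to check the arithmetic, here's a quick sketch (assuming the standard D65 linear sRGB<->XYZ matrices with gamma ignored, which is what the quoted numbers use):

```rust
// Linear sRGB -> XYZ (D65) and its inverse.
const RGB_TO_XYZ: [[f64; 3]; 3] = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
];
const XYZ_TO_RGB: [[f64; 3]; 3] = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
];

fn mul(m: &[[f64; 3]; 3], v: [f64; 3]) -> [f64; 3] {
    let mut out = [0.0; 3];
    for r in 0..3 {
        out[r] = m[r][0] * v[0] + m[r][1] * v[1] + m[r][2] * v[2];
    }
    out
}

fn main() {
    let (a, b) = ([0.2, 0.4, 0.6], [0.7, 0.3, 0.1]);
    let (xa, xb) = (mul(&RGB_TO_XYZ, a), mul(&RGB_TO_XYZ, b));
    // The transform is linear, so adding in XYZ and converting back
    // matches adding in sRGB directly.
    let sum = [xa[0] + xb[0], xa[1] + xb[1], xa[2] + xb[2]];
    println!("{:?}", mul(&XYZ_TO_RGB, sum)); // ~[0.9, 0.7, 0.7]
}
```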
My OS is Perception.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:And a question about color representation. Maybe somebody can clear up the picture with colors a bit? Colors are represented with "color space" attributes, while in reality (physically) a color is a set of wavelengths of electromagnetic emission. So why not represent colors as a set of wavelengths? Is it less intuitive? Yes, but as a universal color description, hidden behind the software that uses colors, such a representation makes sense. And its advantage is its generality: it allows us to represent any imaginable color in any situation, whether it's a monitor's pixels or a printed piece of paper under ambient light of known wavelength(s). Note: for printed colors there will be a need for the light's characteristics, but traditional color spaces have the same need.
Representing colour as one wavelength (plus amplitude) wouldn't work. For example, it can't represent white, or any colour that is a mixture of blue and red.

Representing colour as a spectrum (multiple wavelengths, each with their own amplitude) would be far more realistic than anything any software currently does, and would allow the renderer to correctly support things that existing software can't (e.g. dispersion, fluorescence). However, it would be extremely expensive - there are an infinite number of wavelengths, so each colour would have to be represented by an "infinite" array of amplitudes. Also, don't forget that when light hits a surface, some wavelengths are absorbed, some pass through, and some wavelengths are reflected back. For each surface you need to store 2 sets of wavelengths (e.g. one for how much light at each wavelength is reflected back and one for how much of each wavelength passes through; where anything that wasn't reflected back or passed through must've been absorbed). This means that for each light source you've got an infinite array, and for each surface (e.g. every "texel" in every texture) you've got 2 infinite arrays.

Obviously infinite arrays aren't very practical; so you have to reduce the number of wavelengths in the spectrum. For example, instead of "infinite number of wavelengths" you could only have 10000 wavelengths (and light could be represented by an array of 10000 amplitudes). The more wavelengths you have the more expensive it is for rendering, and reducing the number of wavelengths reduces the visible colours it can represent. The minimum number of wavelengths you can use is 3 (e.g. one wavelength for red, one for green and one for blue); which "works" but means that about half the colours can't be represented.

Alternatively, instead of having an array of (3 or more) amplitudes you could store "wavelength+amplitude" pairs. For some colours you'd only need one "wavelength+amplitude", for some you'd need 2 of them, and for most you'd need 3 of these "wavelength+amplitude" pairs. This would be able to represent all visible colours. However, it would only work for light. For surfaces (e.g. how much of which frequencies are reflected) it becomes very complicated (e.g. do you have an array of N frequencies and interpolate?).

The XYZ system is like the "3 amplitudes for 3 fixed wavelengths" system (similar to RGB), where the only real difference is that those 3 wavelengths are fictitious (physically impossible) and do cover all possible colours.
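As a rough sketch of the "fixed set of wavelength samples" idea (the sample count and ranges here are arbitrary choices, not anything standard):

```rust
const SAMPLES: usize = 16; // e.g. 400nm..775nm in 25nm steps; 3 is the minimum

/// Light as power at each sampled wavelength.
#[derive(Clone, Copy)]
struct Spectrum([f32; SAMPLES]);

/// A surface stores how much of each wavelength it reflects and how much
/// passes through; whatever is left over was absorbed.
struct Surface {
    reflectance: Spectrum,   // 0.0..=1.0 per sample
    transmittance: Spectrum, // 0.0..=1.0 per sample
}

impl Surface {
    fn reflect(&self, light: &Spectrum) -> Spectrum {
        let mut out = [0.0; SAMPLES];
        for i in 0..SAMPLES {
            out[i] = light.0[i] * self.reflectance.0[i];
        }
        Spectrum(out)
    }
}
```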


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Rusky wrote:Rust destructors only have the object they're destructing in scope, and they do not allow the program to continue execution. Java catch clauses have all the potentially-corrupted state in scope and allow execution to continue afterwards. Rust eliminates the possibility of accessing partially-modified state when an exception is thrown in the middle of a critical section, while Java does not.
Object-wide scope is not enough for many possible complex states. If the state includes just two objects of the same type, it already leads to complexity the destructor can't handle easily. But what if there are more than two objects, and many of them of different types? So Java allows us to see the whole situation with as many objects as required, while Rust hides the complexity and confuses developers.

The ability to access partially modified state is very useful when we are going to implement a rollback mechanism. And Rust's design, which denies such an opportunity, pales in comparison with Java.

Also, Java gives a developer more flexibility in defining application behavior. The developer can throw a runtime exception in cases when it's safe (no state corruption) and eliminate the need for try-catch-finally blocks. But when the situation requires it, the developer can insist on the safe way of dealing with exceptions by declaring checked exceptions in the throws clause.

If a developer makes a mistake and doesn't define the exception propagation strategy carefully then yes, the application can behave badly. But a mistake in a destructor is more dangerous for beginners, because they see only a limited part of the whole situation. And even if they somehow manage to see all the required objects with their complex relationships, some developers will still write a truncated version of the destructor (or even an empty one), in the same way that beginners in Java write dangerous exception clauses, just to get the task done as quickly as possible.
Rusky wrote:
embryo2 wrote:In Java it is considered a bad practice to add "throws Exception" because it's too general and hides many possible variants of application behavior.
The fact that it's bad practice proves that it would be better just to have a language where it's impossible to write "throws Exception" (or its equivalent, RuntimeException-without-a-catch-clause) in the first place.
Rust doesn't prevent a developer from writing an empty destructor. So skills are required in both cases (Java and Rust), and there is no reason to blame Java more than Rust. We have tools that allow us to write code of exactly the same quality, but in a different manner. And both tools allow us to make the same mistakes, but in different places. So for a developer who already has good experience with one tool, switching between such similar tools is pointless.
Rusky wrote:
embryo2 wrote:In case of exception handlers the situation is more intuitive than in case of destructors, because the developer sees the problem (exception) and has all means to work accordingly.
Wrong. The developer only sees the (non-checked) exception at runtime when it's triggered, or if they remember to read the documentation. The developer sees and handles in-band errors like Result as a compiler error if they forget them.
Rust is in no way better, because it allows exceptions that kill the execution thread without any chance to catch them. Here I can repeat: the developer only sees the (non-checked) exception at runtime when it's triggered, or if they remember to read the documentation.
Rusky wrote:Rust programs must always handle both cases, while Java has the option of leaving out the catch clause and thus not handling the error case. The difference is that Rust's error handling is better-structured than Java's, so common types of error handling (like using a default value, or aborting the program) can be moved into functions (like unwrap_or) to be reused.
The phrase "better-structured" can be applied arbitrarily to any situation a person wants. It's about look and feel, colors and tastes. For you the color is nice, but for me it doesn't look as fascinating as it does to you.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

Brendan wrote:Representing colour as one wavelength (plus amplitude) wouldn't work. For example, it can't represent white, or any colour that is a mixture of blue and red.
The main disadvantage here is the variable length of the wavelength array. But with a length of 3 it is already possible to show all colors from the sRGB space. If we represent the amplitude (for predefined wavelengths) with one byte and use one 32-bit word, then it is possible to have 4 wavelengths and to show additional colors not included in the sRGB space. It is possible to allocate four bits for the array length and wavelength sequence index without significant loss of amplitude precision (4 amplitudes by 7 bits).
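A sketch of that packing, as I understand it (the exact field layout is my own guess):

```rust
/// One 32-bit word: a 4-bit header (array length / wavelength sequence
/// index) in the top bits, then four 7-bit amplitudes.
fn pack(header: u8, amps: [u8; 4]) -> u32 {
    let mut word = ((header & 0xF) as u32) << 28;
    for (i, &a) in amps.iter().enumerate() {
        word |= ((a & 0x7F) as u32) << (7 * i);
    }
    word
}

fn unpack(word: u32) -> (u8, [u8; 4]) {
    let header = (word >> 28) as u8;
    let mut amps = [0u8; 4];
    for i in 0..4 {
        amps[i] = ((word >> (7 * i)) & 0x7F) as u8;
    }
    (header, amps)
}
```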
Brendan wrote:Also, don't forget that when light hits a surface, some wavelengths are absorbed, some pass through, and some wavelengths are reflected back. For each surface you need to store 2 sets of wavelengths (e.g. one for how much light at each wavelength is reflected back and one for how much of each wavelength passes through; where anything that wasn't reflected back or passed through must've been absorbed).
Actually we often need only the reflected fraction of the light hitting the surface, but sometimes the transmitted fraction is also important. Fluorescence can be represented as a sum of reflection and emission. So for a complete description of a material we need a few additional characteristics like a reflection percentage curve, a diffuse reflection factor, a dispersion angle and so on. But in the modelling of 3D scenes many such factors are already used extensively and do not look like something extraordinary. Also it is possible to simplify the reflection curve and compose it of a few linear parts (or even one constant). And once we have all those characteristics it becomes possible to calculate the actual color a human sees for every printed piece, or for everything "printed" with a 3D printer, given the characteristics of the ambient light (here we can use some predefined sets like sunlight, glow-lamp, fluorescent tube and so on). Storage requirements here can be very limited - one 64-bit word, for example - and even 128 bits for an accurate material description with all its features like fluorescence et al. doesn't seem too big to work with.

But with a standard representation of 3 components from a color space it is impossible to calculate anything about printed colors without direct measurements under some predefined illumination conditions.
Brendan wrote:Obviously infinite arrays aren't very practical; so you have to reduce the number of wavelengths in the spectrum.
Of course. But why in the hell do we need those 10000 wavelengths? Just 1 to 4 with 7-bit amplitudes, or even 1 to 8 with a 64-bit word.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
embryo2 wrote:
Brendan wrote:You're using a very loose definition of "faults". To me, a failure is when software doesn't do something it should or does do something it shouldn't (e.g. crash).
The "something" here is the loose part. Your approach is based on your understanding of the "something", sometime your understanding is limited to "file not found" error, but in other cases it flies away and suggests an OS which automatically heals user's misbehavior. So, the intrusion of the OS in application operations can vary from non existent to creating very high load on all your 100 computers. But what if a user just doesn't want all this high load mess?
If a file doesn't exist and the OS returns a "file not found" error, then that's correct behaviour and not a failure, and because it's not a failure it's impossible for fault tolerance to avoid it. If the file does exist but the OS returns a "file not found" error, then that would be a failure, and maybe it would be worthwhile having a redundant set of three "VFS services" (and a redundant set of 3 "native file system services", and possibly 3 physical hard disks).

If the user doesn't want the extra load caused by redundancy, then they wouldn't have enabled it in the first place.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:Ok, if you insist then here is another version - imagine one page of code with 20 nested ifs and 100 pages of code with prints only. Both variants can have the same number of bugs despite the code size.
So you're saying that for a fair comparison (same quality control, same code maturity, same code size); complex code (e.g. the code in a high performance VM that does JIT compiling and optimisation) is more likely to have bugs and security vulnerabilities than simpler code (e.g. the code you find in typical desktop applications)?
Your bracketed examples are skewed towards your vision and distort my vision of a fair comparison. It is the whole system that should be compared with another system, but you are trying to compare randomly chosen parts of different systems.
I'm only pointing out that people who think VMs actually help rely on flawed logic - they assume it's impossible for programmers to write software without bugs while at the same time they assume that programmers can write a VM without bugs.
embryo2 wrote:
Brendan wrote:
embryo2 wrote:You have no need for downloading anything or even for the computers and your LAN, just because the cloud computing already has it available online (yes, you need one PC and some kind of internet connection, but that's all you need).
How do I plug 100 keyboards and 100 monitors into this "cloud"? When the cloud tells me networking is a bottleneck, can I just install some extra network cards in my cloud?
I hope that your goal is not to plug 100 of something into another something; most probably your goal is to have some job done. The job is perfectly done by the cloud without any networking bottlenecks. You just tell the cloud to do something usual, and the cloud does it and returns some result, even if the result should be shown to 100 people.
The goal is to allow a collection of 1 or more computers to behave like a single system; for the purpose of maximising performance and/or minimising hardware costs. For a simple example, instead of having an office for 20 people with one computer per user plus a server (21 computers) where most computers are idle most of the time, you could have an office for 20 people with 4 people per computer and no server that's faster (due to resource sharing and effective utilisation of resources) and a lot cheaper (5 computers and not 21).

"Cloud" (paying for remote processing) is fine when you've got a massive temporary job. For a massive permanent job it's stupid (cheaper to buy the hardware yourself instead of paying to use someone else's and paying for internet costs); and for thousands of tiny separate little jobs (e.g. where the latency is far more important than the processing) it's stupid. Very few people ever have a massive temporary job that's too big to do it on their own hardware in a reasonable amount of time; which means that "cloud" is stupid for almost everything. Of course even for the very rare cases where cloud might make sense in theory, you have to trust someone else with your data and (for sensitive data) it might not make sense in practice due to lack of trust.

Finally, for the extremely rare cases where "cloud" isn't stupid; there's no reason why someone using my OS wouldn't be able to rent some servers on the internet, put my OS on those servers, and make those servers part of their cluster.
embryo2 wrote:The goal of efficiently using all available resources was always on the radar of all system designers. But the cost of distributing a job across all available devices is considered too high for it to be implemented. However, you can try to beat the heavily funded designers of the world's top corporations. And I do not want to tell you that it is impossible. It is possible, but I doubt it is possible within an acceptable time frame for one developer. So in the end you will have an incomplete system and be many years behind, but you will still be able to claim that the goal is achieved in the form you see it.
For normal software (e.g. "oversized monolithic blob" applications using procedural programming) the cost of distributing it across multiple machines is too high (and quite frankly, most of the victims of the "oversized monolithic blob using procedural programming" approach are struggling just to use 2 or more CPUs in the same computer). To be effective, the "oversized monolithic blob using procedural programming" stupidity has to be discarded and replaced with something suitable, like "shared nothing entities that communicate". The "shared nothing entities that communicate" approach is already used for almost everything involving 2 or more computers.

For time frame, we've already covered this - I know it's going to take ages and that there's only a very small chance that it'll succeed eventually; but that's far better than wasting less time on a worthless "same as everything else" project that's guaranteed to fail.
embryo2 wrote:
Brendan wrote:I also want to do things like let users send running applications to each other (e.g. you open a word processor document, write half of a letter, then send the application "as is" to another user so they can finish writing the letter);
It's just about sending a snapshot of a document. MS Word saves such snapshots so that the user has the ability to restore an old variant, so you need to add network support to MS Word and your goal is accomplished.
Fred is using an application. That application communicates with Fred's GUI (sending video and sound to the GUI, and receiving keyboard, mouse from the GUI). Fred tells his GUI to send the application to Jane's GUI. Now the application communicates with Jane's GUI. No process was stopped or started, nothing was written to disk.
embryo2 wrote:
Brendan wrote:and to have multi-user applications (where 10 programmers working on the project can all use the same IDE at the same time).
I just can't imagine what 10 developers can do in the same IDE. Interfere with and interrupt each other? The world has had solutions for team work for decades (code repositories, for example), so why should there be another solution? What is it better for?
There's 2 different types of teamwork. The first involves splitting a task into sub-tasks and getting team members to do separate sub-tasks in isolation (and merge the results). This is what existing solutions provide; except those existing solutions are typically poorly integrated with the IDE and toolchain, and could be done with less hassle/configuration/maintenance.

The second type of teamwork is more like pair programming. Imagine a small group of developers that have headsets and are constantly talking/listening to each other and watching each other type and correcting each other's typos while they're typing. In this case team members do not work in isolation, but work on the same task at the same time. For programming; there are existing solutions for pair programming (including collaborative real-time editors), but they're not widespread.

The goal would be to provide both types of teamwork in a way that allows people to use either type of teamwork whenever they want however they want (including "ad hoc" where no prior arrangements are made); and seamlessly integrate it all into the IDE so that it's a single consistent system rather than an ugly and inflexible collection of separate tools.
embryo2 wrote:
Brendan wrote:I also want all of this to be as close as possible to "zero configuration". For example, if you buy 20 new computers with no OS on them at all; you should be able to plug them into your network and do nothing else; where those 20 new computers boot from network, automatically become part of the cluster, and automatically start doing work.
I see it as viable in the case of a new available computer that is connected to the cluster. But in the case of a new cluster there should be many user-defined actions, because the OS just doesn't know the goal of the final cluster. So the configuration task is always required for a new setup, but can be avoided for additional computers. If you look at such cases separately you can see more ways of creating a convenient OS.
In general, creating a cluster will be a little bit like installing a normal OS (Windows, Linux) onto one computer, and then adding new computers to the cluster. However; it's not that simple because there's some major security concerns involved - e.g. you don't want to allow any random/unauthorised computer to become part of the cluster and start sharing data with a potentially malicious attacker (and there's going to be an "OS generator" that digitally signs boot code and kernels using the user's own keys, that creates an OS installation disk for the cluster before the OS can be installed on anything).
embryo2 wrote:
Brendan wrote:
embryo2 wrote:It can use something like pattern matching for detection of "non-standard" mouse behaviour. But it's not a trivial task.
For existing mouse hardware, I very much doubt that this is possible without an unacceptable number of false positives and/or false negatives (and if its too unreliable it's just going to annoy people instead of being useful).
I think the false results will be on par with the time-based maintenance person call. And if we remember those erratic movements of a dirty mouse, then it becomes obvious that they differ a lot from a standard mouse movement pattern. We can measure the distance between two adjacent mouse events, and for a dirty mouse the distance is almost always too big.
I'm sceptical; but if it can work reliably then mouse drivers can use this (instead of, or in addition to, the "distance" method).
embryo2 wrote:
Brendan wrote:There would also be a way for normal users to add a "trouble ticket" to the system; for whatever the OS can't automatically detect (even if it's "my office chair broke!").

I really don't think it'd be hard to design a maintenance tool that suits almost everyone; and if people don't like my tool then they're able to suggest improvements, or replace any or all of the "pieces that communicate" with their own alternative piece/s;
Improvement suggestions have been used by humanity for thousands of years. But there's still no "maintenance tool that suits almost everyone", because it's not a software problem.
Why don't you think a maintenance tool that suits almost everyone is possible? Can you think of anything people might want in a maintenance tool that is impossible?


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Rusky
Member
Posts: 792
Joined: Wed Jan 06, 2010 7:07 pm

Re: Concise Way to Describe Colour Spaces

Post by Rusky »

embryo2 wrote:Object-wide scope is not enough for many possible complex states.
False. Object scope is, in fact, whatever you put in an object. Representing an operation's state in a separate struct rather than scattered across the stack is useful for other reasons as well; iterators are a prime example of this.
embryo2 wrote:Rust doesn't prevent a developer from writing an empty destructor.
It doesn't need to, because library types like locks, vectors, files, etc. already have destructors which will be called regardless of whether their containing object has a destructor, or whether their containing stack frame has a finally block. The only time you need to write your own destructor is if you are wrapping manual resource management, which is rare and already requires you to know what you're doing.
embryo2 wrote:Rust is in no way better, because it allows exceptions that kill the execution thread without any chance to catch them.
False. It only does this for unrecoverable errors, not errors that occur in normal operation like network cables being unplugged. Normal, recoverable errors use things like "Result" or "Option" which I already described. This is objectively better-structured because it gets rid of the out-of-band side channel which is exceptions.

Exceptions, no matter what nonsense you make up about them, still have these bad properties:
  • The decision of how to handle them can be ignored, and has a bad default
  • When they are ignored, they allow resources and partially-modified state to leak when it should be encapsulated
  • When you do handle them, it's verbose and repetitive, which discourages people from handling them correctly
On the other hand, in-band error handling with "Result" still solves these exact problems:
  • The only way to get the result you want is by checking for and handling the error
  • Checking for and then explicitly ignoring an error does not leak anything out of the function; to propagate it, you must explicitly return the error.
  • Because they are in-band, error handling strategies can be factored out into functions and macros
  • In-band error handling is less verbose when handling errors, and more verbose when ignoring them, so it encourages people to do things correctly
In the end, the better solution is the one that defaults to making the right choice, or if there is no good default, forces the developer to make the choice. It makes common right choices short and easy to express, and uncommon/usually-bad choices harder. Exceptions do the opposite.
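A small sketch of those properties in practice (read_config is a made-up function; File::open, the `?` operator and unwrap_or_else are real):

```rust
use std::fs::File;
use std::io::{self, Read};

// The error is part of the type; a caller can't get at the String
// without deciding what to do about the io::Error.
fn read_config(path: &str) -> Result<String, io::Error> {
    let mut text = String::new();
    File::open(path)?.read_to_string(&mut text)?; // `?` propagates explicitly
    Ok(text)
}

fn main() {
    // Ignoring the error is visible and deliberate, not a silent default:
    let cfg = read_config("app.conf").unwrap_or_else(|_| String::new());
    println!("{} bytes of config", cfg.len());
}
```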
AndrewAPrice
Member
Posts: 2303
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Concise Way to Describe Colour Spaces

Post by AndrewAPrice »

Thinking about wavelengths, completely realistic reproduction of the light waves may not always be desirable. You expect the image to be dimmer on a smart phone in bed, compared to viewing it on a laptop on a lawn on a bright sunny day. Similarly, you might want to switch between an image of the night sky and a bright sunny object without your eyes having to adjust first. So I think it's useful (and a desirable feature) to separate colour into chromaticity and luminance (which ends up pretty much reproducing the original light waves, but at a lower intensity).
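One standard way to do that split is CIE xyY, which keeps chromaticity (x, y) separate from luminance (Y); a minimal sketch:

```rust
/// Split an XYZ colour into chromaticity (x, y) and luminance Y.
/// (True black, where X+Y+Z == 0, would need special-casing.)
fn xyz_to_xyy(x: f64, y: f64, z: f64) -> (f64, f64, f64) {
    let sum = x + y + z;
    (x / sum, y / sum, y)
}

/// Rebuild XYZ from xyY, e.g. after scaling the luminance for a dim display.
fn xyy_to_xyz(cx: f64, cy: f64, lum: f64) -> (f64, f64, f64) {
    (lum * cx / cy, lum, lum * (1.0 - cx - cy) / cy)
}
```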

Also - why is everyone getting off topic? Please keep this thread about colour theory, or start a new thread!
My OS is Perception.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
MessiahAndrw wrote:Thinking about wavelengths, completely realistic reproduction of the light waves may not always be desirable. You expect the image to be dimmer on a smart phone in bed, compared to viewing it on a laptop on a lawn on a bright sunny day. Similarly, you might want to switch between an image of the night sky and a bright sunny object without your eyes having to adjust first. So I think it's useful (and a desirable feature) to separate colour into chromaticity and luminance (which ends up pretty much reproducing the original light waves, but at a lower intensity).
You're right - I've been thinking about colour/hue, and neglected to take luminance into account.

I think I need to do more research. Mostly it looks like there are several variables:
  • The max. luminance of each display that the user is using
  • Ambient light sensors
  • Battery state/power management
  • The original luminance of the scene
And these variables would influence the "HDR auto-iris" stuff; which would then influence the pixel data (when there's no software adjustable back-light control) or the software adjustable back-light control (if it exists).


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
embryo2
Member
Posts: 397
Joined: Wed Jun 03, 2015 5:03 am

Re: Concise Way to Describe Colour Spaces

Post by embryo2 »

MessiahAndrw wrote:Thinking about wavelengths, completely realistic reproduction of the light waves may not always be desirable. You expect the image to be dimmer on a smart phone in bed, compared to viewing it on a laptop on a lawn on a bright sunny day. Similarly, you might want to switch between an image of the night sky and a bright sunny object without your eyes having to adjust first. So I think it's useful (and a desirable feature) to separate colour into chromaticity and luminance (which ends up pretty much reproducing the original light waves, but at a lower intensity).
The image should be as realistic as possible when we speak about its internal representation. But when we show the image it is possible to transform its brightness in some suitable manner (multiplying by a factor, for example).
MessiahAndrw wrote:Also - why is everyone getting off topic? Please keep this thread about colour theory, or start a new thread!
Ok, here (to Brendan) and here (to Rusky) are the answers to the off-topic subjects.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability :)
AndrewAPrice
Member
Posts: 2303
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Concise Way to Describe Colour Spaces

Post by AndrewAPrice »

Brendan wrote:You're right - I've been thinking about colour/hue, and neglected to take luminance into account.

I think I need to do more research. Mostly it looks like there are several variables:
  • The max. luminance of each display that the user is using
  • Ambient light sensors
  • Battery state/power management
  • The original luminance of the scene
And these variables would influence the "HDR auto-iris" stuff; which would then influence the pixel data (when there's no software adjustable back-light control) or the software adjustable back-light control (if it exists).
I think we're getting somewhere here. It seems like a good idea to separate color and luminance (but keep relative luminance within the image realistic), because a basic user can understand that the brightness on their display is turned down, therefore the image is darker. But, it should be easy and intuitive to calibrate two devices to the same brightness.

Now we have a model that reproduces colour and is device-luminance independent, but is this still satisfactory?

How do we handle colours that are outside of what a monitor can reproduce? Should all dark colours saturate to black? If the image is too dark, and you simply increase the brightness, what if the image becomes too bright and all of your bright colours saturate to white?
[attachment: levels.png - diagram of the display's limited luminance range within the full scale]
We can shift the green window up and down by playing with the monitor's brightness, but we can't expand it (due to technical limitations, the monitor has a limited range). There are times this could become a problem:
  • I'm watching a movie, and the range is too large for my display to handle so half the screen is saturated to black, so I'd rather the colours be washed out but see all of the detail.
  • I'm a security guard, and I'm more concerned with seeing detail in the image than realistic colour representation.
  • I'm a scientist dealing with heat maps and it would be useful to tell between different shades of red, but the reds I'm trying to distinguish between are both outside of my display's range and show up as the same colour.
Last edited by AndrewAPrice on Mon Jul 20, 2015 1:18 pm, edited 1 time in total.
My OS is Perception.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
MessiahAndrw wrote:
Brendan wrote:You're right - I've been thinking about colour/hue, and neglected to take luminance into account.

I think I need to do more research. Mostly it looks like there are several variables:
  • The max. luminance of each display that the user is using
  • Ambient light sensors
  • Battery state/power management
  • The original luminance of the scene
And these variables would influence the "HDR auto-iris" stuff; which would then influence the pixel data (when there's no software adjustable back-light control) or the software adjustable back-light control (if it exists).
I think we're getting somewhere here. It seems like a good idea to separate color and luminance (but keep relative luminance within the image realistic), because a basic user can understand that the brightness on their display is turned down, therefore the image is darker. But, it should be easy and intuitive to calibrate two devices to the same brightness.

Now we have a model that reproduces colour and is device-luminance independent, but is this still satisfactory?
It's not satisfactory - I have to take luminance into account.

The basic idea would be to generate "HDR XYZ" where luminance ranges from "pitch black" all the way to "brighter than the sun". This would be fed into an "auto-iris" stage that brings the luminance back to a range from "pitch black" to "as bright as the monitor's max. luminance"; where if you look at something very bright everything is saturated (white) initially, but the "auto-iris" adjusts over (a small amount of) time making the image less white until the image is normal (and the reverse - when you go from "bright scene" to "dark scene" the image looks too bright and becomes normal over time). Basically; calculate the average luminance for a frame and convert that to a scaling factor with "normal_scaling_factor = monitor_max_luminance/scene_average_luminance"; but then do "actual_scaling_factor = previous_scaling_factor * 0.75 + normal_scaling_factor * 0.25" and multiply the pixel values (with saturation) by "actual_scaling_factor" to get the "scaled for monitor XYZ" values. Note: The formula for "actual_scaling_factor" here is only an example, doesn't take into account the time between frames, and would need fine tuning.

For multiple monitors (to keep them equal), the "monitor_max_luminance" in that "normal_scaling_factor = monitor_max_luminance/scene_average_luminance" formula would have to be the max. luminance that all monitors can handle (or "min( max_luminance_of_monitor1, max_luminance_of_monitor2, ...)").

For monitors with different black levels, I did some research and it's scary/messy. For example, for CRTs light from "not black" pixels bounces around and makes the black pixels less black; and some LCD displays have dynamic contrast that modifies the back light (where black pixels in a darker scene are blacker than the same black pixels in a lighter scene). Mostly, adjusting black levels properly would be considerably complicated (and I suspect black levels are typically drowned out by the ambient light in a room anyway) so I'm going to pretend that "black" is always true black on all monitors.
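Putting those formulas into code, something like this (using the example 0.75/0.25 blend from above; ignoring frame timing, as noted, and with my own framing as a struct):

```rust
struct AutoIris {
    scale: f64, // previous frame's scaling factor
}

impl AutoIris {
    /// One frame of the "auto-iris": blend towards the ideal scaling
    /// factor, then apply it to every XYZ value with saturation.
    /// For multiple monitors, `monitor_max` is the min() of their maxima.
    fn adjust(&mut self, monitor_max: f64, scene_average: f64, pixels: &mut [[f64; 3]]) {
        let normal = monitor_max / scene_average;
        self.scale = self.scale * 0.75 + normal * 0.25;
        for pixel in pixels.iter_mut() {
            for channel in pixel.iter_mut() {
                *channel = (*channel * self.scale).min(monitor_max); // saturate
            }
        }
    }
}
```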
MessiahAndrw wrote:There are times this could become a problem:
  • I'm watching a movie, and the range is too large for my display to handle so half the screen is saturated to black, so I'd rather the colours be washed out but see all of the detail.
  • I'm a security guard, and I'm more concerned with seeing detail in the image than realistic colour representation.
  • I'm a scientist dealing with heat maps and it would be useful to tell between different shades of red, but the reds I'm trying to distinguish between are both outside of my display's range and show up as the same colour.
For watching a movie and for the security guard, the "auto-iris" would hopefully fix the problem. For the scientist, I think the problem here is the source of the data (e.g. the infra-red camera) and not the video/display.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
AndrewAPrice
Member
Posts: 2303
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Concise Way to Describe Colour Spaces

Post by AndrewAPrice »

Brendan wrote:The basic idea would be to generate "HDR XYZ" where luminance ranges from "pitch black" all the way to "brighter than the sun". This would be fed into an "auto-iris" stage that brings the luminance back to a range from "pitch black" to "as bright as the monitor's max. luminance"; where if you look at something very bright everything is saturated (white) initially, but the "auto-iris" adjusts over (a small amount of) time making the image less white until the image is normal (and the reverse - when you go from "bright scene" to "dark scene" the image looks too bright and becomes normal over time). Basically; calculate the average luminance for a frame and convert that to a scaling factor with "normal_scaling_factor = monitor_max_luminance/scene_average_luminance"; but then do "actual_scaling_factor = previous_scaling_factor * 0.75 + normal_scaling_factor * 0.25" and multiply the pixel values (with saturation) by "actual_scaling_factor" to get the "scaled for monitor XYZ" values. Note: The formula for "actual_scaling_factor" here is only an example, doesn't take into account the time between frames, and would need fine tuning.
That's an interesting approach. Similar to how many 3D video games automatically adapt the camera to the on-screen exposure (usually in the pixel shader).

Let's say I am working on a document, then I multitask by opening an image editor (and load an image taken during the day time, for example) on the same screen but in a different window. Will the entire screen (including my document in the other window) darken to display the image? Will just the image editor darken to fit the image on the screen? Ideally, the "auto-iris" would only adjust the canvas the image is being drawn in. What about overlays and pop ups? If I'm scrolling through a website with images of the night sky and day sky, will my screen be darkening and brightening all over the place?

I don't mean to be adding problems - these are actually issues I'm thinking about too!

I think you're going to have to expose some of this programmatically to the application. Perhaps the desktop environment should be set to a fixed exposure setting, then you have design guidelines stating how general purpose graphics should render their GUI (regarding exposure and brightness levels), while software that actually interacts with images and videos might have their own ways to set the exposure levels (like a maximum black and white level for a canvas).
My OS is Perception.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Concise Way to Describe Colour Spaces

Post by Brendan »

Hi,
MessiahAndrw wrote:
Brendan wrote:The basic idea would be to generate "HDR XYZ" where luminance ranges from "pitch black" all the way to "brighter than the sun". This would be fed into an "auto-iris" stage that brings the luminance back to a range from "pitch black" to "as bright as the monitor's max. luminance"; where if you look at something very bright everything is saturated (white) initially, but the "auto-iris" adjusts over (a small amount of) time making the image less white until the image is normal (and the reverse - when you go from "bright scene" to "dark scene" the image looks too bright and becomes normal over time). Basically; calculate the average luminance for a frame and convert that to a scaling factor with "normal_scaling_factor = monitor_max_luminance/scene_average_luminance"; but then do "actual_scaling_factor = previous_scaling_factor * 0.75 + normal_scaling_factor * 0.25" and multiply the pixel values (with saturation) by "actual_scaling_factor" to get the "scaled for monitor XYZ" values. Note: The formula for "actual_scaling_factor" here is only an example, doesn't take into account the time between frames, and would need fine tuning.
That's an interesting approach. Similar to how many 3D video games automatically adapt the camera to the on-screen exposure (usually in the pixel shader).

Let's say I am working on a document, then I multitask by opening an image editor (and load an image taken during the day time, for example) on the same screen but in a different window. Will the entire screen (including my document in the other window) darken to display the image? Will just the image editor darken to fit the image on the screen? Ideally, the "auto-iris" would only adjust the canvas the image is being drawn in. What about overlays and pop ups? If I'm scrolling through a website with images of the night sky and day sky, will my screen be darkening and brightening all over the place?
Mostly; yes, it will behave the same way that your eyes do.

For example, if you go outside on a bright sunny day your eyes adjust to "bright"; when you come back indoors (where the ambient light is a lot less bright) it takes a second or 2 for your eyes to adjust; and if you're outside on a dark night for a while before someone turns on a bright light, you get blinded briefly while your eyes adjust.

The basic idea is to mimic the behaviour of your eyes to create the illusion that the monitor supports a massive range of light levels, even though the monitor is very limited and can't.

Of course most applications will use normal lighting (approximately equivalent to looking at paper with the ambient light levels in an office); so switching between normal applications won't change much. For normal pictures, it'd be a little like printing the picture on paper and looking at it with the ambient light levels in an office (e.g. if a picture of the sun is on the screen it won't make everything else go dark). However, for 3D games and special pictures (e.g. screenshots of 3D games) it will be different - if you have a document in one window and you're looking at the sun in another window, expect the document to fade to black within a second or 2 because of the intensity of the sun.
MessiahAndrw wrote:I think you're going to have to expose some of this programmatically to the application. Perhaps the desktop environment should be set to a fixed exposure setting, then you have design guidelines stating how general purpose graphics should render their GUI (regarding exposure and brightness levels), while software that actually interacts with images and videos might have their own ways to set the exposure levels (like a maximum black and white level for a canvas).
Normal applications aren't light sources; they only control how much of the light that hits the application is reflected, absorbed or passed through.

If you're editing a document and the window is mostly white ("all light reflected") but the GUI is using a green light, then the application looks green. If the GUI is using a white light and you have a "transparent red" window open ("blue and green absorbed, red passes through"), then everything in that window's shadow will be lit with red light and everything that isn't in the application's shadow will be lit with white light.
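A toy version of that model (the names and the 3-channel simplification are mine):

```rust
/// What you see under a window is the GUI's light filtered by the
/// surface, channel by channel.
fn lit_colour(light: [f32; 3], transmit: [f32; 3]) -> [f32; 3] {
    [light[0] * transmit[0], light[1] * transmit[1], light[2] * transmit[2]]
}

fn main() {
    let white_paper = [1.0, 1.0, 1.0]; // reflects everything
    let red_filter = [1.0, 0.0, 0.0];  // absorbs blue and green
    println!("{:?}", lit_colour([0.0, 1.0, 0.0], white_paper)); // green light -> looks green
    println!("{:?}", lit_colour([1.0, 1.0, 1.0], red_filter)); // white light -> red shadow
}
```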


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.