Concise Way to Describe Colour Spaces
Re: Concise Way to Describe Colour Spaces
I want to stress the user's ability to adjust monitor parameters manually. Auto-iris sounds good, but in my experience there are situations where only manual configuration is acceptable. When working with a document it is convenient to set the document's background to some brightness and color that isn't irritating, and the overall monitor brightness should be dimmer than usual, because after long work the eyes get weary. But when I work with images I prefer to set the brightness and contrast much higher than for document processing; it lets me see more details and widens the black-white range, so the sun shines more like it does in real life. Usually I make such adjustments with the monitor's controls, but maybe there is a way to do it programmatically (I'm not sure whether such extended control is supported by the monitor's hardware).
So, all the auto-iris stuff can be useful only if it can be turned off (at least in some situations) and replaced with preconfigured presets.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability
Re: Concise Way to Describe Colour Spaces
Hi,
embryo2 wrote:I want to stress the user's ability to adjust monitor parameters manually. Auto-iris sounds good, but in my experience there are situations where only manual configuration is acceptable. When working with a document it is convenient to set the document's background to some brightness and color that isn't irritating, and the overall monitor brightness should be dimmer than usual, because after long work the eyes get weary. But when I work with images I prefer to set the brightness and contrast much higher than for document processing; it lets me see more details and widens the black-white range, so the sun shines more like it does in real life. Usually I make such adjustments with the monitor's controls, but maybe there is a way to do it programmatically (I'm not sure whether such extended control is supported by the monitor's hardware).
From this I assume:
- Each time you "alt+tab" between a picture and a document you adjust your monitor
- If you have 2 windows on the same screen where one is a document and the other is a picture, you get all confused
- If a document contains a picture, you get all confused
- If you print a document on paper and take a photo of it, then view that photo on your computer you get all confused
- You spend more time figuring out how to adjust the monitor and adjusting the monitor than you spend actually using the computer
Note: I do try to take accessibility into account (and do think these things need to be taken care of "globally" in the video driver and not "locally" in every application), and to be perfectly honest I don't think I've been trying hard enough in this area. There are a few things (e.g. nyctalopia and photophobia) that I've previously failed to consider.
What I do know is that very bright displays with inadequate/dim ambient lighting can be annoying; and very dim displays with very bright ambient lighting can be hard to read. These are both examples of "ambient light and/or monitor is bad". This implies that I'd want to match the monitor's brightness to the ambient lighting; which can only be done automatically in software if the hardware has an ambient light sensor. If there is no ambient light sensor, then maybe I can provide an "ambient light" control that the user can use manually.
In theory; what I want is to render a 3D scene using (an approximation of) the physical behaviour of light to obtain "high dynamic range XYZ" (e.g. XYZ where luminosity ranges from "black" to "brighter than the sun"). The contents of the scene (surfaces, light sources, etc) are beyond the scope of the video driver - the video driver does not care in any way, and it does not matter to the video driver if (e.g.) the surfaces are documents/office applications with soft/ambient lighting (for typical office work) or if (e.g.) the surfaces are a boss monster in a cave and the lighting is harsh spotlights (in a 3D game).
Once the scene has been rendered and I have a "HDR XYZ" source image; then it goes through multiple post-processing stages. These stages can be split into 2 categories. The first category is "accessibility" ("sound visualisation" overlay for deafness, hue adjustment for colour blindness, flash limiting for photosensitive epilepsy, and now also luminosity adjustment of dark scenes for nyctalopia and luminosity adjustment of bright scenes for photophobia). The second category is compensating for hardware limitations (luminosity adjustment to compensate for any mismatch between monitor brightness and ambient lighting, colour space conversion to compensate for limited gamut, anti-aliasing to compensate for limited resolution, "auto-iris" to compensate for limited range of brightness, dithering to compensate for the hardware's limited colour depth). In theory, each of these stages would be done in a suitable order, one after the other.
Of course in practice the stages need to be combined for the sake of performance - e.g. a single pass that adjusts luminosity of the pixel data (and/or backlight if possible); that takes into account all of the things that affect luminosity and not just one.
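As a rough sketch of what that combined pass could look like (the factor names here are hypothetical, and a real pass would use nonlinear tone mapping rather than a single linear scale):
Code: Select all
#include <stddef.h>

/* One pixel of the "HDR XYZ" intermediate image; Y carries luminosity. */
typedef struct { float X, Y, Z; } XyzPixel;

/* Fold every luminosity-affecting stage into one per-pixel multiply; each
 * factor would be computed once per frame by its own stage (ambient light
 * matching, nyctalopia/photophobia adjustment, auto-iris, ...). */
void combined_luminosity_pass(XyzPixel *frame, size_t pixels,
                              float ambient_factor,
                              float accessibility_factor,
                              float auto_iris_factor)
{
    float scale = ambient_factor * accessibility_factor * auto_iris_factor;

    for (size_t i = 0; i < pixels; i++) {
        frame[i].X *= scale;
        frame[i].Y *= scale;
        frame[i].Z *= scale;
    }
}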
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Concise Way to Describe Colour Spaces
Brendan wrote:From this I assume:
- Each time you "alt+tab" between a picture and a document you adjust your monitor
- If you have 2 windows on the same screen where one is a document and the other is a picture, you get all confused
- If a document contains a picture, you get all confused
- If you print a document on paper and take a photo of it, then view that photo on your computer you get all confused
- You spend more time figuring out how to adjust the monitor and adjusting the monitor than you spend actually using the computer
It's simple - I just don't use multitasking when I do something serious. So all your assumptions apply to an imaginary person who constantly jumps from one task to another and ultimately accomplishes nothing useful.
Brendan wrote:If you have to manually adjust the monitor for any reason then something is wrong. Maybe the software is bad, maybe the ambient light is bad, maybe the monitor is bad, maybe your eyes are bad, maybe it's a combination of several things. I don't know.
The bad thing is simple - the tasks are very different. If you want to repair mechanical watches you need a lot of light, and if you just want to scratch your back there could be no light at all. But if you can propose a way of generalizing all tasks in a simple manner, then I will use only your OS without any doubt.
Brendan wrote:Note: I do try to take accessibility into account (and do think these things need to be taken care of "globally" in the video driver and not "locally" in every application), and to be perfectly honest I don't think I've been trying hard enough in this area. There are a few things (e.g. nyctalopia and photophobia) that I've previously failed to consider.
Yes, accessibility is another example of a "not so good for everything" situation.
Brendan wrote:What I do know is that very bright displays with inadequate/dim ambient lighting can be annoying; and very dim displays with very bright ambient lighting can be hard to read. These are both examples of "ambient light and/or monitor is bad". This implies that I'd want to match the monitor's brightness to the ambient lighting; which can only be done automatically in software if the hardware has an ambient light sensor. If there is no ambient light sensor, then maybe I can provide an "ambient light" control that the user can use manually.
The ambient light sensor is a good thing, but even without any change in the lighting conditions it is important to have some means of adjusting contrast/brightness to the current task (or just to the current person's mood). So the scene adjustment shouldn't be automatic only. And even the automatic part should be configurable so that it isn't annoying, adjusting whatever it wants in a manner it thinks is good for everybody - like Ford's choice of car color: you can have any color as long as it's black. Such imposition of your will is the first thing to avoid, of course, if you want the OS to be attractive.
Brendan wrote:Once the scene has been rendered and I have a "HDR XYZ" source image; then it goes through multiple post-processing stages.
I agree with the general intermediate representation idea, but I still have doubts about the color spaces. With wavelengths and amplitudes it is possible to determine the light's characteristics reliably, while a human-adjusted color space is a derivative of the measured light characteristics and can differ between people. Color perception can be modeled as a simple transform applied to the wavelengths, but a transform from one perception-based metric to another perception-based metric looks like a less reliable means of color processing. However, I understand that the knowledge base for color spaces is rich and much more accessible than wavelength-based color representation, so working with wavelengths can be error prone where human perception is concerned. But clear and reliable measurements and comparisons are possible only with a representation that is as close to the physics as possible.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability
Re: Concise Way to Describe Colour Spaces
embryo2 wrote:Color perception can be modeled as a simple transform applied to the wavelengths, but a transform from one perception-based metric to another perception-based metric looks like a less reliable means of color processing.
The transform is simple, but you need to keep track of many wavelengths - possibly hundreds, maybe even thousands - in order to get a useful approximation of reality for one single color. The perception-based metrics are only approximations, but they are useful because they greatly reduce the amount of data you must store and process.
embryo2 wrote:However, I understand that the knowledge base for color spaces is rich and much more accessible than wavelength-based color representation, so working with wavelengths can be error prone where human perception is concerned.
I think wavelengths are pretty intuitive, especially compared to XYZ. The math behind adding and subtracting colors in a wavelength representation is very nearly trivial, so there's much less room for error than in any other representation. It's easy to take the absorption spectrum of an object and calculate the reflected spectrum under different illuminants; it is much more difficult to take an XYZ coordinate of an object lit by D65 and determine the correct XYZ coordinate for the same object lit by F4.
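As a minimal sketch of that spectral arithmetic (the band count is arbitrary, and the color matching values are only coarse approximations of the real CIE 1931 tables):
Code: Select all
#define BANDS 8   /* e.g. 400..750nm in 50nm steps; a real simulation uses far more */

/* Reflected spectrum: a per-band multiply of illuminant power by reflectance. */
void reflect(const double illum[BANDS], const double refl[BANDS], double out[BANDS])
{
    for (int i = 0; i < BANDS; i++)
        out[i] = illum[i] * refl[i];
}

/* Collapse a spectrum to XYZ by weighting each band with the color matching
 * functions and summing; the values below are coarse approximations of the
 * CIE 1931 tables, down-sampled to the 8 bands above. */
void spectrum_to_xyz(const double s[BANDS], double *X, double *Y, double *Z)
{
    static const double xbar[BANDS] = {0.014, 0.336, 0.005, 0.433, 1.062, 0.284, 0.011, 0.000};
    static const double ybar[BANDS] = {0.000, 0.038, 0.323, 0.995, 0.631, 0.107, 0.004, 0.000};
    static const double zbar[BANDS] = {0.068, 1.772, 0.272, 0.009, 0.001, 0.000, 0.000, 0.000};

    *X = *Y = *Z = 0.0;
    for (int i = 0; i < BANDS; i++) {
        *X += s[i] * xbar[i];
        *Y += s[i] * ybar[i];
        *Z += s[i] * zbar[i];
    }
}
Changing the illuminant (D65 vs F4) just means swapping the illum[] array while refl[] stays the same - which is exactly the information that's lost once everything is collapsed to a single XYZ coordinate.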
I think the hardest part about working with wavelengths would most likely be converting from a normal colorspace to wavelengths. How would you do something like this? There is more than one correct answer here!
Re: Concise Way to Describe Colour Spaces
Hi,
embryo2 wrote:Brendan wrote:From this I assume:
- Each time you "alt+tab" between a picture and a document you adjust your monitor
- If you have 2 windows on the same screen where one is a document and the other is a picture, you get all confused
- If a document contains a picture, you get all confused
- If you print a document on paper and take a photo of it, then view that photo on your computer you get all confused
- You spend more time figuring out how to adjust the monitor and adjusting the monitor than you spend actually using the computer
It's simple - I just don't use multitasking when I do something serious. So all your assumptions apply to an imaginary person who constantly jumps from one task to another and ultimately accomplishes nothing useful.
I'd estimate that at least 50% of the documents I see have pictures embedded in them (especially HTML documents on the web).
embryo2 wrote:Brendan wrote:If you have to manually adjust the monitor for any reason then something is wrong. Maybe the software is bad, maybe the ambient light is bad, maybe the monitor is bad, maybe your eyes are bad, maybe it's a combination of several things. I don't know.
The bad thing is simple - the tasks are very different. If you want to repair mechanical watches you need a lot of light, and if you just want to scratch your back there could be no light at all. But if you can propose a way of generalizing all tasks in a simple manner, then I will use only your OS without any doubt.
If you repair watches you need magnification, and don't need more light than someone working in an office.
More importantly, regardless of which task you're doing you don't change the laws of physics that determine how light behaves and you don't change biology (the way cones in your eyes work or how the mind processes that data) either. These things are what the video system (from renderer through post processing stages to monitor) are attempting to approximate; and because physics/biology doesn't change when you're doing different tasks the video system should not change when you're doing different tasks either. What does change when you're doing different tasks is the scene (light sources and the surfaces that reflect/absorb light), which means that the source data for the video system changes but the video system does not.
In other words, if you're someone with an obsessive compulsive disorder that's constantly diddling with monitor settings when they're using a standard/crappy OS; then you're going to be someone with an obsessive compulsive disorder that's constantly diddling with light sources in the GUI on my OS.
embryo2 wrote:Brendan wrote:Note: I do try to take accessibility into account (and do think these things need to be taken care of "globally" in the video driver and not "locally" in every application), and to be perfectly honest I don't think I've been trying hard enough in this area. There are a few things (e.g. nyctalopia and photophobia) that I've previously failed to consider.
Yes, accessibility is another example of a "not so good for everything" situation.
Accessibility is an example of "biology is different". Accessibility is also the only reason to deviate from "video system mimics reality" that I'm willing to accept.
embryo2 wrote:Brendan wrote:What I do know is that very bright displays with inadequate/dim ambient lighting can be annoying; and very dim displays with very bright ambient lighting can be hard to read. These are both examples of "ambient light and/or monitor is bad". This implies that I'd want to match the monitor's brightness to the ambient lighting; which can only be done automatically in software if the hardware has an ambient light sensor. If there is no ambient light sensor, then maybe I can provide an "ambient light" control that the user can use manually.
The ambient light sensor is a good thing, but even without any change in the lighting conditions it is important to have some means of adjusting contrast/brightness to the current task (or just to the current person's mood). So the scene adjustment shouldn't be automatic only. And even the automatic part should be configurable so that it isn't annoying, adjusting whatever it wants in a manner it thinks is good for everybody - like Ford's choice of car color: you can have any color as long as it's black. Such imposition of your will is the first thing to avoid, of course, if you want the OS to be attractive.
I think you're grossly over-exaggerating how often people adjust the monitor's settings (brightness, contrast, etc); and I think that when people do adjust their monitor it's because typical/standard software doesn't do any/many of the things I'm planning.
embryo2 wrote:Brendan wrote:Once the scene has been rendered and I have a "HDR XYZ" source image; then it goes through multiple post-processing stages.
I agree with the general intermediate representation idea, but I still have doubts about the color spaces. With wavelengths and amplitudes it is possible to determine the light's characteristics reliably, while a human-adjusted color space is a derivative of the measured light characteristics and can differ between people. Color perception can be modeled as a simple transform applied to the wavelengths, but a transform from one perception-based metric to another perception-based metric looks like a less reliable means of color processing. However, I understand that the knowledge base for color spaces is rich and much more accessible than wavelength-based color representation, so working with wavelengths can be error prone where human perception is concerned. But clear and reliable measurements and comparisons are possible only with a representation that is as close to the physics as possible.
Imagine you have 2 light sources, where one produces white light and another produces cyan light, and light from both light sources hits a surface. Show me:
- the data structure you'd use to represent light
- the data structure you'd use to represent a surface
- the formulas necessary to determine how much light hitting the surface is reflected back by the surface
Code: Select all
/* The light sources and the surface are all XYZ triples; each component of
   the surface says what fraction of that component is reflected. */
reflectedLight1X = light1X * surfaceX;
reflectedLight1Y = light1Y * surfaceY;
reflectedLight1Z = light1Z * surfaceZ;
reflectedLight2X = light2X * surfaceX;
reflectedLight2Y = light2Y * surfaceY;
reflectedLight2Z = light2Z * surfaceZ;
/* Light behaves linearly, so the two reflected contributions simply add. */
resultX = reflectedLight1X + reflectedLight2X;
resultY = reflectedLight1Y + reflectedLight2Y;
resultZ = reflectedLight1Z + reflectedLight2Z;
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Concise Way to Describe Colour Spaces
Brendan wrote:Imagine you have 2 light sources, where one produces white light and another produces cyan light, and light from both light sources hits a surface. Show me:
- the data structure you'd use to represent light
- the data structure you'd use to represent a surface
- the formulas necessary to determine how much light hitting the surface is reflected back by the surface
For a spectrum simulation, I might choose to represent light as an array of 100 values (amplitude for 100 different wavelengths of light) and a surface as another array of 100 values (how much of each wavelength is reflected). The formulas look exactly the same as yours, just with a lot more components.
To save on multiplies, you can sum the light sources instead of the reflections.
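A sketch of that (the function name is hypothetical):
Code: Select all
#define N 100   /* number of sampled wavelengths */

/* Two light sources hitting one surface: sum the sources first, then apply
 * the surface's per-wavelength reflectance once (N multiplies instead of 2*N,
 * because (light1 + light2) * surface == light1 * surface + light2 * surface). */
void shade(const double light1[N], const double light2[N],
           const double surface[N], double result[N])
{
    for (int i = 0; i < N; i++)
        result[i] = (light1[i] + light2[i]) * surface[i];
}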
Re: Concise Way to Describe Colour Spaces
Brendan wrote:embryo2 wrote:... even without any change in the lighting conditions it is important to have some means of adjusting contrast/brightness to the current task (or just to the current person's mood). So the scene adjustment shouldn't be automatic only. And even the automatic part should be configurable so that it isn't annoying ...
I think you're grossly over-exaggerating how often people adjust the monitor's settings (brightness, contrast, etc); and I think that when people do adjust their monitor it's because typical/standard software doesn't do any/many of the things I'm planning.
Just wanted to hop in and point out something. In my normal computer use, I often have a video (youtube/netflix/whatever - with highly variable lighting from the videos, obviously) up on my second monitor while working with code (generally a white background with black/colored text) on my main monitor.
If my main monitor is constantly adjusting "lighting" because of changing scenes on my second screen I would get very distracted and/or annoyed very quickly.
- Monk
Re: Concise Way to Describe Colour Spaces
brendan wrote:If you repair watches you need magnification, and don't need more light than someone working in an office.
I know from practice that that's not true for precision mechanics, and it isn't true in the general case either. You need significantly more light than the ambient level so that you can see the light source's reflection highlight the details you need to see. Just like a magnifying glass doesn't help in finding watermarks in paper (instead you hold it against a lamp), or in locating where the chip is in an RFID card (you hold it at an angle under a light source until you see the lamp's reflection forming the outline of a square).
Basically, if your premise were right, these devices wouldn't have any business case.
Re: Concise Way to Describe Colour Spaces
Hi,
Combuster wrote:brendan wrote:If you repair watches you need magnification, and don't need more light than someone working in an office.
I know from practice that that's not true for precision mechanics, and it isn't true in the general case either. You need significantly more light than the ambient level so that you can see the light source's reflection highlight the details you need to see. Just like a magnifying glass doesn't help in finding watermarks in paper (instead you hold it against a lamp), or in locating where the chip is in an RFID card (you hold it at an angle under a light source until you see the lamp's reflection forming the outline of a square).
Basically, if your premise were right, these devices wouldn't have any business case.
When you magnify something your eye gets less light. For example, instead of seeing a 20*20 area (and getting the light reflected off of a 20*20 area) you might only see a 2*2 area that's been magnified up to 20*20 (and you get light reflected off of a 2*2 area, which is 100 times less light).
You still want the same amount of light entering your eye; and to achieve that you need to compensate for the reduction caused by magnification.
Basically; (in general) regardless of what work you're doing you want the same amount of light entering your eye, so (for computer use) regardless of what work you're doing you want the same amount of light leaving the monitor.
Note: Typical software doesn't model magnification correctly - it "zooms in" without reducing the light levels, and therefore doesn't need to compensate for a reduction in light that never happened.
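As a sketch of that compensation (hypothetical function name):
Code: Select all
/* Light reaching the eye scales with the viewed area, so a linear
 * magnification of m leaves 1/(m*m) of the light; e.g. 2*2 magnified up to
 * 20*20 is m = 10, giving the 100 times reduction from the example above. */
double magnified_light_scale(double m)
{
    return 1.0 / (m * m);
}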
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Concise Way to Describe Colour Spaces
Octocontrabass wrote:embryo2 wrote:Color perception can be modeled as a simple transform applied to the wavelengths, but a transform from one perception-based metric to another perception-based metric looks like a less reliable means of color processing.
The transform is simple, but you need to keep track of many wavelengths - possibly hundreds, maybe even thousands - in order to get a useful approximation of reality for one single color. The perception-based metrics are only approximations, but they are useful because they greatly reduce the amount of data you must store and process.
A fluorescent lamp has a narrow band of wavelengths and still its lighting is enough to see all useful colors. It means there doesn't need to be a lot of wavelengths in the spectrum. It is also an approximation if we compare it with sunlight, but the approximation is very close to the capabilities of LED monitors, because every pixel there generates a combination of only three narrow bands of wavelengths. So, at least three bands are enough to show us the colors from the sRGB color space. If we add a few more bands then we can describe any visible color in a way that existing hardware is still incapable of representing. And the variations in the hardware's light representation can be captured with an extended number of wavelength bands, like 4 or even up to 8. A 64-bit word is enough to represent up to 8 amplitudes of fixed wavelengths from one of 32 wavelength sets. Here each amplitude can be represented by just 7 bits, and the extra byte can hold the wavelength array length and set index information.
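A sketch of one possible packing (the exact field layout is my choice; the count fits in 3 bits if it's stored as count-1):
Code: Select all
#include <stdint.h>

/* 8 x 7-bit amplitudes in the low 56 bits; the top byte holds a 3-bit band
 * count (stored as count-1, so 1..8 fits) and a 5-bit index selecting one of
 * the 32 predefined wavelength sets. */
typedef uint64_t PackedSpectrum;

PackedSpectrum pack_spectrum(const uint8_t amp[8], unsigned count, unsigned set)
{
    PackedSpectrum p = ((uint64_t)((count - 1) & 0x7) << 61)
                     | ((uint64_t)(set & 0x1F) << 56);
    for (unsigned i = 0; i < 8; i++)
        p |= (uint64_t)(amp[i] & 0x7F) << (7 * i);
    return p;
}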
Octocontrabass wrote:embryo2 wrote:However, I understand that the knowledge base for color spaces is rich and much more accessible than wavelength-based color representation, so working with wavelengths can be error prone where human perception is concerned.
I think wavelengths are pretty intuitive, especially compared to XYZ. The math behind adding and subtracting colors in a wavelength representation is very nearly trivial, so there's much less room for error than in any other representation.
Here I meant that the human perception of light is not as intuitive as the wavelength representation. So the mistakes would be related to the perception, not to the representation.
Octocontrabass wrote:I think the hardest part about working with wavelengths would most likely be converting from a normal colorspace to wavelengths. How would you do something like this? There is more than one correct answer here!
Yes, it's another error prone part. I suppose the conversion can be implemented in stages: first the representation of a color in a color space is converted to the hardware's pixel emission, and second the pixel's emission can be converted to a wavelength band. The hardware here, of course, is something generic rather than a particular device.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability
Re: Concise Way to Describe Colour Spaces
Brendan wrote:I'd estimate that at least 50% of the documents I see have pictures embedded in them (especially HTML documents on the web).
Even if your estimates are correct, they still miss the point, because "working with images" is something like mechanical watch repair, while "working with documents" (including the internet) is something like walking around a room without hitting anything.
Brendan wrote:More importantly, regardless of which task you're doing you don't change the laws of physics that determine how light behaves and you don't change biology (the way cones in your eyes work or how the mind processes that data) either. These things are what the video system (from renderer through post processing stages to monitor) are attempting to approximate; and because physics/biology doesn't change when you're doing different tasks the video system should not change when you're doing different tasks either. What does change when you're doing different tasks is the scene (light sources and the surfaces that reflect/absorb light), which means that the source data for the video system changes but the video system does not.
In other words, if you're someone with an obsessive compulsive disorder that's constantly diddling with monitor settings when they're using a standard/crappy OS; then you're going to be someone with an obsessive compulsive disorder that's constantly diddling with light sources in the GUI on my OS.
Ok, I see - other systems that deny fixes to some communities because those communities are too small are doing exactly what you plan to do. And as usual, justification can always be found.
Brendan wrote:I think you're grossly over-exaggerating how often people adjust the monitor's settings (brightness, contrast, etc); and I think that when people do adjust their monitor it's because typical/standard software doesn't do any/many of the things I'm planning.
I think you are grossly over-exaggerating how often your system will be helpful to all the people on Earth, because your plans are based on your perception only.
Brendan wrote:Imagine you have 2 light sources, where one produces white light and another produces cyan light, and light from both light sources hits a surface. Show me:
- the data structure you'd use to represent light
- the data structure you'd use to represent a surface
- the formulas necessary to determine how much light hitting the surface is reflected back by the surface
The data structure is a variable length array of bytes representing from 1 to 8 fixed wavelength amplitudes.
The surface should also be represented by a variable length array, but the meaning of its bytes will be different. For the reflection it is often enough to have just 3 bytes, each representing the reflected fraction for one wavelength band.
My previous account (embryo) was accidentally deleted, so I have no choice but to use something new. But maybe it was a good lesson about software reliability
Re: Concise Way to Describe Colour Spaces
embryo2 wrote:A fluorescent lamp has a narrow band of wavelengths and still its lighting is enough to see all useful colors. It means there doesn't need to be a lot of wavelengths in the spectrum.
Fluorescent lighting visibly skews colors because of the "holes" in the spectrum. If you ignore these holes, your model won't be able to simulate the difference between fluorescent light and daylight.
embryo2 wrote:It is also an approximation if we compare it with sunlight, but the approximation is very close to the capabilities of LED monitors, because every pixel there generates a combination of only three narrow bands of wavelengths.
LED monitors typically have a narrow band of wavelengths for blue, and a wide band of wavelengths that is shared between green and red. As with fluorescent light, holes in the spectrum skew the apparent colors of objects lit by an LED monitor.
embryo2 wrote:So, at least three bands are enough to show us the colors from the sRGB color space. If we add a few more bands then we can describe any visible color in a way that existing hardware is still incapable of representing.
The point of simulating a whole spectrum is to take into account color addition and subtraction effects that can't be simulated any other way. If you just want to represent all visible colors, the XYZ colorspace is already more than capable of handling that.
Re: Concise Way to Describe Colour Spaces
Hi,
embryo2 wrote:Brendan wrote:I'd estimate that at least 50% of the documents I see have pictures embedded in them (especially HTML documents on the web).
Even if your estimates are correct, they still miss the point, because "working with images" is something like mechanical watch repair, while "working with documents" (including the internet) is something like walking around a room without hitting anything.
Most (all?) existing image editors have "zoom" for people who want to work on very fine details, and I'll probably have a "generic zoom" that works with all applications.
embryo2 wrote:Brendan wrote:More importantly, regardless of which task you're doing you don't change the laws of physics that determine how light behaves and you don't change biology (the way cones in your eyes work or how the mind processes that data) either. These things are what the video system (from renderer through post processing stages to monitor) are attempting to approximate; and because physics/biology doesn't change when you're doing different tasks the video system should not change when you're doing different tasks either. What does change when you're doing different tasks is the scene (light sources and the surfaces that reflect/absorb light), which means that the source data for the video system changes but the video system does not.
In other words, if you're someone with an obsessive compulsive disorder that's constantly diddling with monitor settings when they're using a standard/crappy OS; then you're going to be someone with an obsessive compulsive disorder that's constantly diddling with light sources in the GUI on my OS.
Ok, I see - other systems that deny fixes to some communities because those communities are too small are doing exactly what you plan to do. And as usual, justification can always be found.
Think of it like this. A "100% correct video system" models reality perfectly, but isn't achievable in practice due to both the overhead of processing that would be involved and the limitations of hardware. I'm trying to get "as close to 100% correct as practically possible"; while most existing video systems don't even try (causing applications to look fake/wrong and causing 3D games to implement everything that the video system should've provided but didn't, and possibly even causing the need for some end users to adjust monitor settings).
Now; if I believed that some end users will need to constantly change monitor settings for my "as close to 100% correct as practically possible" video system; what exactly are you suggesting my video system should do about it? The only possibility I see is to ignore it - e.g. the OS tries its hardest to get "as close to 100% correct as practically possible" and if the user does change monitor settings and makes what they see "less correct than possible" then there's nothing the OS can or should do about it.
embryo2 wrote:Brendan wrote:I think you're grossly over-exaggerating how often people adjust the monitor's settings (brightness, contrast, etc); and I think that when people do adjust their monitor it's because typical/standard software doesn't do any/many of the things I'm planning.
I think you are grossly over-exaggerating how often your system will be helpful to all the people on Earth, because your plans are based on your perception only.
I think that people in general have difficulty imagining a system that differs from existing software (and has different advantages and different disadvantages); and this leads to a tendency for people to focus on "perceived disadvantages" (that may or may not exist) while failing to take into account that even if there are some disadvantages the advantages may be more important anyway.
embryo2 wrote:Brendan wrote:Imagine you have 2 light sources, where one produces white light and another produces cyan light, and light from both light sources hits a surface. Show me:
- the data structure you'd use to represent light
- the data structure you'd use to represent a surface
- the formulas necessary to determine how much light hitting the surface is reflected back by the surface
The data structure is a variable length array of bytes representing from 1 to 8 fixed wavelength amplitudes.
The surface should also be represented by a variable length array, but the meaning of its bytes will be different. For the reflection it is often enough to have just 3 bytes, each representing the reflected fraction for one wavelength band.
The video system will need to do millions of "texel lookups", and using variable sized texels makes that far more expensive; and I suspect you failed to provide the formulas I requested (which are also performed millions of times per frame) because they're very complicated and very expensive. On top of that, I don't see what the advantage is meant to be (given that XYZ is able to represent all visible colours anyway).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Concise Way to Describe Colour Spaces
Brendan wrote:most existing video systems don't even try (causing applications to look fake/wrong and causing 3D games to implement everything that the video system should've provided but didn't
What are you talking about here? 3D games have very good reasons to implement their own rendering engines, separate from other applications and from other games.
Re: Concise Way to Describe Colour Spaces
Hi,
Rusky wrote:Brendan wrote:most existing video systems don't even try (causing applications to look fake/wrong and causing 3D games to implement everything that the video system should've provided but didn't
What are you talking about here? 3D games have very good reasons to implement their own rendering engines, separate from other applications and from other games.
The number of games that have their own rendering engine is "almost zero". Instead they either create a rendering engine and re-use it for many games, or use an existing rendering engine from somewhere else, because each game doesn't need its own unique rendering engine at all.
For simple examples; here's a list of games using the Unity engine, and here's a list of games using the Unreal engine, and here's a list of games using the CryEngine.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.