Standardized Representation of Colour
Standardized Representation of Colour
Hi,
For my OS I want all graphics data to use a standardized representation for colours; where colours in this standardized representation are converted into a device's representation by the corresponding device driver (video card driver, printer driver, scanner driver, etc).
Some systems of representing colours aren't able to represent all colours. For example, the RGB colours (or standardized "sRGB") used by monitors and video cards are inadequate to describe all colours. Wikipedia has a good picture describing this inadequacy (from the Wikipedia page on sRGB), where all colours outside the triangle can't be displayed correctly:
One thing to understand here is that even though a colour can't be displayed, colours are still mixed for things like alpha blending and anti-aliasing, and colours that can't be displayed may be mixed into colours that can be displayed. For example, consider this (256 * 128) picture:
If this is scaled down to an 8 * 4 picture (and then blown up again so you can see individual pixels) it becomes:
If the green band in the first picture was a colour that a monitor can't display, then a monitor would still be able to display all colours in the scaled down image perfectly. However, if the green band was converted into a colour that the monitor can display before the scaling is done, then the resulting scaled down image (which could have been displayed perfectly) would be wrong (e.g. the anti-aliased pixels would have the wrong hue).
Also, certain devices (e.g. digital cameras) have a far larger range of luminance than the human eye can handle. This is mostly so that software can adjust exposure after the picture has been taken (e.g. so that dark pictures can be made lighter without losing detail). Therefore I want to be able to represent a similar range of luminance (e.g. from absolute black to the brightness of the sun). This is also important for things like specular highlights in 3D models.
My current thinking is to use something like HSV (Hue, Saturation and "Value"), where "Value" (brightness) is defined in lux and has a much larger range (e.g. from 0 lux to 65536 lux).
The main problem here is defining "hue". One idea I had was to use the equivalent wavelength of monochromatic light (e.g. 460 nm for blue, 520 nm for green, 700 nm for red, etc) for some colours, and the ratio of blue/red for other colours (as there is no equivalent monochromatic light for purples). For example (with hue encoded as 16-bit unsigned integers), 0x0000 could represent 460 nm light (blue), 0x2AAB could represent 495 nm light (cyan), 0x5555 could represent 520 nm light (green), 0xAAAA could represent 700 nm light (red); then 0xD555 could represent an equal mixture of 460 nm light and 700 nm light (magenta) and 0xEAAA could represent a mixture where the 460 nm light is stronger than the 700 nm light (purple). This is good in that it's able to represent all visible hues, but I don't know how to combine colours - it's not a linear relationship; for example, 50% 500 nm light combined with 50% 600 nm light does not produce 550 nm light (I'd guess the correct result would be closer to 570 nm).
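As a rough C sketch of that encoding (assuming piecewise-linear interpolation between the example anchor points above - the interpolation scheme itself is just a guess):

#include <stdint.h>

/* Map a monochromatic wavelength (in nm) to a 16-bit hue code, using the
 * anchor points from the example: 460nm = 0x0000, 495nm = 0x2AAB,
 * 520nm = 0x5555, 700nm = 0xAAAA. */
uint16_t hue_from_wavelength(double nm)
{
    static const struct { double nm; uint16_t code; } anchor[] = {
        { 460.0, 0x0000 }, { 495.0, 0x2AAB }, { 520.0, 0x5555 }, { 700.0, 0xAAAA }
    };
    if (nm <= anchor[0].nm) return anchor[0].code;
    if (nm >= anchor[3].nm) return anchor[3].code;
    for (int i = 0; i < 3; i++) {
        if (nm <= anchor[i + 1].nm) {
            double t = (nm - anchor[i].nm) / (anchor[i + 1].nm - anchor[i].nm);
            return (uint16_t)(anchor[i].code + t * (anchor[i + 1].code - anchor[i].code) + 0.5);
        }
    }
    return anchor[3].code;   /* not reached */
}

/* Hue codes above 0xAAAA are the "purple line": the blue/red mixing ratio,
 * where 0.0 = pure 700nm (red) and 1.0 = pure 460nm (blue); 0.5 gives 0xD555
 * and 0.75 gives 0xEAAA, matching the examples above. */
uint16_t hue_from_purple_ratio(double blue_fraction)
{
    return (uint16_t)(0xAAAA + blue_fraction * (0xFFFF - 0xAAAA) + 0.5);
}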
Does anyone have any ideas or suggestions?
Thanks,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Standardized Representation of Colour
Mathematically and physically, it is impossible to do it in anything but a cheap-and-dirty way, or to model the full reality. You have to look at the concept of spectral analysis. If you take some input light and divide it into a spectrum ... how do you convert that spectrum into data? You have to list the intensity of every single wavelength. There is no other way. Each wavelength intensity is an independent variable -- or "dimension". If you want to divide visible light from 400 to 699nm into 1nm-wide quanta -- you have a 300 dimensional array. Period.
If you don't want a 300 dimensional array, then you have to use some pseudo-physical (biometric?) algorithm to do a severe data reduction. The graph you posted is a typical color response of a typical human nervous system, averaged over several thousand humans. It has no physical basis whatsoever, and the standard deviation of response for any individual human is large. i.e. it is really cheap, and really dirty. If you base ANYTHING you do on that graph, it is automatically really cheap and really dirty.
Brendan wrote: Also, certain devices (e.g. digital cameras) have a far larger range of luminance than the human eye can handle.
This is absolutely false. The human eye can easily detect 10 photons per second. This is orders of magnitude better than the very best photodiode can detect. It takes a photomultiplier tube to achieve the response an eye can achieve trivially. All CCDs in all cameras have a linear response to luminosity by the nature of the process. Eyes have a logarithmic response. Therefore an eye can perceive luminosity contrasts that are many orders of magnitude larger than any camera can handle. And it is luminosity and hue contrasts that are important in an image -- not the absolute magnitudes.
So, I would say that the only rational answer to your concept is to store all pixels as full spectra, with logarithmic intensities for each wavelength.
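To make the storage cost concrete, that proposal amounts to something like the following (the 8 bits per band is just an assumption) - i.e. at least 300 bytes per pixel:

#include <stdint.h>

#define SPECTRUM_FIRST_NM 400
#define SPECTRUM_BANDS    300   /* 400..699nm in 1nm-wide bands */

typedef struct {
    int8_t log_intensity[SPECTRUM_BANDS];   /* logarithmic intensity, one per band */
} spectral_pixel_t;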
- NickJohnson
- Member
- Posts: 1249
- Joined: Tue Mar 24, 2009 8:11 pm
- Location: Sunnyvale, California
Re: Standardized Representation of Colour
bewing wrote: Mathematically and physically, it is impossible to do it in anything but a cheap-and-dirty way, or to model the full reality. You have to look at the concept of spectral analysis. If you take some input light and divide it into a spectrum ... how do you convert that spectrum into data? You have to list the intensity of every single wavelength. There is no other way. Each wavelength intensity is an independent variable -- or "dimension". If you want to divide visible light from 400 to 699nm into 1nm-wide quanta -- you have a 300 dimensional array. Period.
If you don't want a 300 dimensional array, then you have to use some pseudo-physical (biometric?) algorithm to do a severe data reduction. The graph you posted is a typical color response of a typical human nervous system, averaged over several thousand humans. It has no physical basis whatsoever, and the standard deviation of response for any individual human is large. i.e. it is really cheap, and really dirty. If you base ANYTHING you do on that graph, it is automatically really cheap and really dirty.
Why would this make a 300 dimensional array? You're only representing one point on that array at once. Wouldn't it just be 300 independent variables? That's still a lot, but if you limit yourself to fewer wavelengths, you can definitely implement it (it would be really slow, though).
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: Standardized Representation of Colour
Any objections to just recording the stimulation factors for each of the rod/cone types? That'd be complete and efficient storage for all human observers.
- NickJohnson
- Member
- Posts: 1249
- Joined: Tue Mar 24, 2009 8:11 pm
- Location: Sunnyvale, California
Re: Standardized Representation of Colour
Combuster wrote: Any objections to just recording the stimulation factors for each of the rod/cone types? That'd be complete and efficient storage for all human observers.
If you think about it, that *is* RGB. Humans are trichromats (except for colorblind people, who are dichromats), so we only perceive the brightness of three different wavelengths, i.e. red, green, blue. I think you can express any color humans can see with only three values. That's why we have three primary colors, and our monitors can only express amounts of red, green, and blue.
Edit: Wavelengths that are between two of our three colors are simply received as both of those colors. Therefore, there are many colors RGB cannot produce, but we cannot distinguish those colors from combinations of colors that can be produced with RGB, because we are human. That's why the area in that graph outside the triangle looks the same as the stuff within it.
Re: Standardized Representation of Colour
Combuster wrote: Any objections to just recording the stimulation factors for each of the rod/cone types? That'd be complete and efficient storage for all human observers.
Except that humans are extremely variable when it comes to their stimulation factors. So you are back to a semi-subjective selection of an "average human". And, as said above, that is what sRGB is.
NickJohnson wrote: That's why the area in that graph outside the triangle looks the same as the stuff within it.
Heh. Ah, no. The graph is being displayed on an sRGB monitor. So every color on it everywhere is a color that an sRGB display can display. So they all look like colors that an sRGB monitor can display, because they ARE. And the same goes for printing such a graph in a book. Then every color on it is a color that can be displayed in CMYK. There is no practical way to display a graph of "colors that cannot be displayed". You just have to imagine what they are.
- Masterkiller
- Member
- Posts: 153
- Joined: Sat May 05, 2007 6:20 pm
Re: Standardized Representation of Colour
One factor that you cannot be sure of is how far the user is from his monitor. If the user sees the "whole" picture, he is far enough away to take the false colours of the alpha channel in the anti-aliasing as part of the vector line. There are also other factors like brightness, contrast and colour levels, and the user can really mess up the monitor's colours. So you really cannot trust that if your video driver outputs a wavelength of, say, 540nm, the output wavelength of the pixel will actually be 540nm - it could come out as 460nm, 600nm or any colour. The monitor does not send its settings to the video card. So you cannot show a green colour, tell the user "that is 540nm", and expect it to be true. So checking whether the monitor can or cannot display something cannot be done by the video card (also, a CRT monitor displays one set of colours while an LCD displays another, and LCDs change their colours with the viewing angle). You can provide this feature, but you should not worry about anti-aliasing colours. Just pre-buffer the final bitmap before writing it to video memory.
ALCA OS: Project temporarily suspended!
Current state: real-mode kernel-FS reader...
-
- Member
- Posts: 204
- Joined: Thu Apr 12, 2007 8:15 am
- Location: Michigan
Re: Standardized Representation of Colour
Combuster wrote: Any objections to just recording the stimulation factors for each of the rod/cone types?
Yes, human vision is much more complex than that. Still in the eye, lateral inhibition enhances contrast before an action potential is transferred from rods and cones to the optic nerve. Furthermore, the brain does a substantial amount of work to allow you to see "correct" colors in spite of the signals your eyes send as the photochemicals become depleted (or just to fill in color for peripheral vision).
Some people are offended by the verifiable truth; such people tend to remain blissfully unencumbered by fact.
If you are one of these people, my posts may cause considerable discomfort. Read at your own risk.
Re: Standardized Representation of Colour
Hi,
bewing wrote: Mathematically and physically, it is impossible to do it in anything but a cheap-and-dirty way, or to model the full reality.
I should've been more specific - I'm interested in the perceived colour, rather than each frequency of light that makes up that perceived colour. Due to the way the human eye works there's a huge range of light that all looks the same (e.g. if you combine blue monochromatic light and green monochromatic light, then it'll look the same as a cyan monochromatic light; and there's an infinite number of combinations of frequencies that will all look identical to that same cyan monochromatic light).
bewing wrote: This is absolutely false. The human eye can easily detect 10 photons per second. This is orders of magnitude better than the very best photodiode can detect. It takes a photomultiplier tube to achieve the response an eye can achieve trivially. All CCDs in all cameras have a linear response to luminosity by the nature of the process. Eyes have a logarithmic response. Therefore an eye can perceive luminosity contrasts that are many orders of magnitude larger than any camera can handle. And it is luminosity and hue contrasts that are important in an image -- not the absolute magnitudes.
You're neglecting a few things here. First is the effect of the eye's iris (a human eye can detect very low levels of light, and handle very high levels of light, but can't do both at the same time). Also, at very low levels of light the eye's cones don't work and the rods take over (with very dim lighting you can't see colour).
You're also forgetting exposure times - search for "long exposure photography"...
Note: For brightness I'm thinking of a simplified floating point format (e.g. "0 * 2^0" to "65536 * 2^255" millionths of a lux).
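As a sketch of what I mean (the 16-bit mantissa / 8-bit exponent split is just an assumption based on that range):

#include <stdint.h>
#include <math.h>

/* Brightness stored as "mantissa * 2^exponent" millionths of a lux. */
typedef struct {
    uint16_t mantissa;   /* 0..65535 */
    uint8_t  exponent;   /* 0..255   */
} brightness_t;

double brightness_to_lux(brightness_t b)
{
    return ldexp((double)b.mantissa, b.exponent) / 1000000.0;
}

brightness_t lux_to_brightness(double lux)
{
    brightness_t b = { 0, 0 };
    double m = lux * 1000000.0;
    while (m > 65535.0 && b.exponent < 255) {   /* pick the smallest exponent that fits */
        m /= 2.0;
        b.exponent++;
    }
    if (m > 65535.0) m = 65535.0;               /* saturate if it still doesn't fit */
    if (m < 0.0)     m = 0.0;
    b.mantissa = (uint16_t)(m + 0.5);
    return b;
}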
NickJohnson wrote: If you think about it, that *is* RGB. Humans are trichromats (except for colorblind people, who are dichromats), so we only perceive the brightness of three different wavelengths, i.e. red, green, blue. I think you can express any color humans can see with only three values. That's why we have three primary colors, and our monitors can only express amounts of red, green, and blue.
Humans have cones that respond differently to different wavelengths of light (but none of these cones are set to a specific frequency). There's a good picture describing this on Wikipedia's spectral sensitivity page. It's definitely not RGB - for example, there aren't any wavelengths of monochromatic light that stimulate the M (medium wavelength) cones that won't also stimulate at least one of the other cone types (and the rods).
In addition, humans have "rods". Under normal conditions the rods are saturated and the cones do all the work; and under low light situations the rods aren't saturated and add their data to the (lack of) information from the cones. This means that a person can be colour-blind (one or more types of cones don't work), but a person can also have "night blindness" (where all cone types work but rods don't, resulting in a person being virtually blind in very dim light).
NickJohnson wrote: Edit: Wavelengths that are between two of our three colors are simply received as both of those colors. Therefore, there are many colors RGB cannot produce, but we cannot distinguish those colors from combinations of colors that can be produced with RGB, because we are human. That's why the area in that graph outside the triangle looks the same as the stuff within it.
No - there actually are colours that RGB can't reproduce. Every point on the CIE chart (that I included in my previous post) is meant to be a different colour. If you choose any 3 colours on this chart, then any colour that is within the triangle formed by the 3 colours you chose can be represented by different amounts of those 3 colours; but all colours outside of this triangle can't be represented by different amounts of those 3 colours. For example, if you choose yellow, cyan and blue, then you'd be able to represent green, but wouldn't be able to represent red. For RGB (or "sRGB", which is a standardized RGB) all they did was choose 3 colours - they would've wanted a red, a green and a blue (as these are obvious choices), but they were also probably restricted to wavelengths that could be produced by suitable phosphors (or perhaps restricted by a desire to match CRT monitors that were around at the time). Otherwise I assume they would've chosen a better red, a better green and a better blue; but even with the best possible red, green and blue you still can't combine them to get all colours.
Also note that for the CIE chart (from my previous post), you can draw a line from white (labeled as "D65" for this diagram) to the edge, and all of the colours on that line will be the same hue (with different saturation). From this you can tell that the colours outside of the "RGB triangle" would be "less white" versions of the colours inside the triangle - e.g. greener shades of green and "cyaner" shades of cyan.
Combuster wrote: Any objections to just recording the stimulation factors for each of the rod/cone types? That'd be complete and efficient storage for all human observers.
That would work. For example, I'd be able to have 3 floating point values (from 0.0 to 1.0) for each type of cone, plus a "brightness" value that ranges from 0.0 to 100000.0 (where the range of brightness from 0.0 to 1.0 represents a rod's normal range).
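As a sketch, that representation could be as simple as (field names are just illustrative):

typedef struct {
    float l_cone;       /* 0.0 .. 1.0 - long wavelength cone stimulation   */
    float m_cone;       /* 0.0 .. 1.0 - medium wavelength cone stimulation */
    float s_cone;       /* 0.0 .. 1.0 - short wavelength cone stimulation  */
    float brightness;   /* 0.0 .. 100000.0; 0.0 .. 1.0 is a rod's normal range */
} colour_t;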
However, I'm not entirely sure how I'd convert between this representation and RGB (I'll end up converting into RGB very often, and I'll need to convert RGB into this representation fairly often too).
I guess that's mostly where I'm getting lost - the mathematics necessary to convert between representations, and the mathematics necessary to combine colours in the standardized representation.
Thanks,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
-
- Member
- Posts: 204
- Joined: Thu Apr 12, 2007 8:15 am
- Location: Michigan
Re: Standardized Representation of Colour
Brendan wrote: Every point on the CIE chart (that I included in my previous post) is meant to be a different colour. If you choose any 3 colours on this chart, then any colour that is within the triangle formed by the 3 colours you chose can be represented by different amounts of those 3 colours; but all colours outside of this triangle can't be represented by different amounts of those 3 colours. For example, if you choose yellow, cyan and blue, then you'd be able to represent green, but wouldn't be able to represent red. For RGB (or "sRGB", which is a standardized RGB) all they did was choose 3 colours - they would've wanted a red, a green and a blue (as these are obvious choices), but they were also probably restricted to wavelengths that could be produced by suitable phosphors (or perhaps restricted by a desire to match CRT monitors that were around at the time). Otherwise I assume they would've chosen a better red, a better green and a better blue; but even with the best possible red, green and blue you still can't combine them to get all colours.
Also note that for the CIE chart (from my previous post), you can draw a line from white (labeled as "D65" for this diagram) to the edge, and all of the colours on that line will be the same hue (with different saturation). From this you can tell that the colours outside of the "RGB triangle" would be "less white" versions of the colours inside the triangle - e.g. greener shades of green and "cyaner" shades of cyan.
I'm not sure if this will be helpful to you, but after considering this my thoughts are that you could pick multiple triangles from the graph in such a way that they cover as much area as possible. Then, an encoded colour would have four fields: three colour intensities for the three selected colours of a triangle, and a selector field which selects exactly which triangle.
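As a sketch of that encoding (the field widths are just an assumption to make it fit in 32 bits):

#include <stdint.h>

typedef struct {
    uint32_t triangle  : 2;    /* selector: which predefined primary triangle */
    uint32_t primary_a : 10;   /* intensity of that triangle's first primary  */
    uint32_t primary_b : 10;   /* intensity of that triangle's second primary */
    uint32_t primary_c : 10;   /* intensity of that triangle's third primary  */
} encoded_colour_t;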
Some people are offended by the verifiable truth; such people tend to remain blissfully unencumbered by fact.
If you are one of these people, my posts may cause considerable discomfort. Read at your own risk.
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: Standardized Representation of Colour
Brendan wrote: However, I'm not entirely sure how I'd convert between this representation and RGB (I'll end up converting into RGB very often, and I'll need to convert RGB into this representation fairly often too).
Hmm. Given that R, G and B are (close to) zero-width frequency spikes, you get the following (frequencies are IIRC, but you get the idea):
red_cone = sensitivityR(650nm) * rgb.r + sensitivityR(450nm) * rgb.g + sensitivityR(400nm) * rgb.b
green_cone = sensitivityG(400nm) * rgb.r + ...
blue_cone = ....
If you'd like to disagree with that assumption, you can compute the integral over the frequency domain:
sensitivity_r_to_g = ∫ sensitivityG(x) * r_spectrum(x) dx
which again results in a constant.
In any case, the conversion is mathematically equivalent to:
(Rc,Gc,Bc) = sensitivity_matrix * (Rm,Gm,Bm)
Which means the process is invertible by matrix inversion:
(Rm,Gm,Bm) = inv(sensitivity_matrix) * (Rc,Gc,Bc)
Now, you'd only need to precompute sensitivity_matrix and inv(sensitivity_matrix) at compile time, and the rest is a triplet of dot products (totalling 9 muls, 6 adds) and 3 clamps per color conversion. Optionally you could compute the pair of matrices for each display (or printer) depending on their specific spectral properties.
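As a very rough C sketch of that matrix approach (the matrix values below are placeholders, not measured cone sensitivities - in practice you'd precompute the pair of matrices per device):

typedef struct { double v[3]; } triplet_t;   /* used for both RGB and cone (L,M,S) triplets */

/* Placeholder cone-sensitivity matrix: row = cone type (L, M, S),
 * column = display primary (R, G, B). */
static const double sensitivity_matrix[3][3] = {
    { 0.60, 0.35, 0.02 },
    { 0.30, 0.60, 0.08 },
    { 0.02, 0.10, 0.90 }
};

/* y = m * x */
static triplet_t mat_mul(const double m[3][3], triplet_t x)
{
    triplet_t y;
    for (int i = 0; i < 3; i++)
        y.v[i] = m[i][0] * x.v[0] + m[i][1] * x.v[1] + m[i][2] * x.v[2];
    return y;
}

/* Invert a 3x3 matrix by cofactors; returns 0 if it is (near-)singular. */
static int mat_invert(const double m[3][3], double out[3][3])
{
    double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
               - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
               + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    if (det > -1e-12 && det < 1e-12)
        return 0;
    out[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det;
    out[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) / det;
    out[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det;
    out[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) / det;
    out[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det;
    out[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) / det;
    out[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det;
    out[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) / det;
    out[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det;
    return 1;
}

/* cone = sensitivity_matrix * rgb */
triplet_t rgb_to_cone(triplet_t rgb)
{
    return mat_mul(sensitivity_matrix, rgb);
}

/* rgb = inv(sensitivity_matrix) * cone, then the "3 clamps" mentioned above */
triplet_t cone_to_rgb(triplet_t cone)
{
    double inverse[3][3];
    triplet_t rgb = cone;               /* fallback if inversion fails */
    if (mat_invert(sensitivity_matrix, inverse)) {
        rgb = mat_mul(inverse, cone);
        for (int i = 0; i < 3; i++) {
            if (rgb.v[i] < 0.0) rgb.v[i] = 0.0;
            if (rgb.v[i] > 1.0) rgb.v[i] = 1.0;
        }
    }
    return rgb;
}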
- gravaera
- Member
- Posts: 737
- Joined: Tue Jun 02, 2009 4:35 pm
- Location: Supporting the cause: Use \tabs to indent code. NOT \x20 spaces.
Re: Standardized Representation of Colour
NickJohnson wrote: If you think about it, that *is* RGB. Humans are trichromats (except for colorblind people, who are dichromats), so we only perceive the brightness of three different wavelengths, i.e. red, green, blue. I think you can express any color humans can see with only three values. That's why we have three primary colors, and our monitors can only express amounts of red, green, and blue.
Edit: Wavelengths that are between two of our three colors are simply received as both of those colors. Therefore, there are many colors RGB cannot produce, but we cannot distinguish those colors from combinations of colors that can be produced with RGB, because we are human. That's why the area in that graph outside the triangle looks the same as the stuff within it.
I was wondering why those unrepresented colours looked pretty much like the colours already in the triangle. If this is the case, then (I'm just going by what NickJohnson said) I'm afraid it might be a better idea to use your time coding something that will make a noticeable change to your OS.
Many of today's games, with hi-def colour and 3D effects at the extreme, use the RGB palette, and we think it looks fantastic. There's no real reason to move out of it if it already expresses every colour we are physically capable of seeing.
Although, when I read NickJohnson's post, it made me look around to see if there's some area that has an undefined colour that I think I'm seeing, but really can't see. Interesting fact, NickJohnson.
17:56 < sortie> Paging is called paging because you need to draw it on pages in your notebook to succeed at it.
- AndrewAPrice
- Member
- Posts: 2299
- Joined: Mon Jun 05, 2006 11:00 pm
- Location: USA (and Australia)
Re: Standardized Representation of Colour
As far as RGB goes, you can over-project, resulting in a loss of storage efficiency, and you bring in the concept of imaginary numbers.
Some video games, as well as video codecs and image editors, do use alternative colour spaces, and it's fairly inexpensive to convert to RGB after the very last post-processing effect inside a pixel shader. While RGB is good for storing/displaying colour, it's not so good for colour manipulation.
For example, in RGB if you wanted to simulate illuminating an object with a yellow light, you would normally multiply it by (r: 1, g: 1, b: 0). However, in nature you'd expect the light that is bounced off to be a shade of yellow; but instead of returning a shade of yellow, red objects will return red, and green objects will return green.
Back when I didn't fully understand alternative colour spaces (I knew they existed but thought RGB did everything I wanted) I tried to develop an independent solution. I represented the RGB colours as 3D vectors, calculated the angle and axis it takes to rotate that light's vector so it is a unit vector, applied that rotation to the surface's colour, then used the unit axis's value as a scalar representing how much of that colour is reflected. The above model worked for coloured lights, but white lights should reflect multiple colours, so to solve this I added a 4th colour component for 'saturation' (which I stored in the alpha channel): if saturation was 0, the above calculated scalar was multiplied by the surface colour (effectively just affecting brightness), if it was 1, it returned the blended colour, and I interpolated between the two. Then I realised that, to save space, I could reconstruct the colour vector with 2 values, one representing length (brightness) and one representing angle (hue). Then after a bit of research I discovered the HSL/HSV colour space was effectively that, but with a lot less calculating.
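For reference, the usual RGB-to-HSV conversion is only a handful of operations (hue in degrees, saturation and value in 0..1):

#include <math.h>

typedef struct { double h, s, v; } hsv_t;

hsv_t rgb_to_hsv(double r, double g, double b)
{
    hsv_t out;
    double max = r > g ? (r > b ? r : b) : (g > b ? g : b);
    double min = r < g ? (r < b ? r : b) : (g < b ? g : b);
    double delta = max - min;

    out.v = max;                                /* "value" = brightness */
    out.s = (max > 0.0) ? delta / max : 0.0;    /* saturation */

    if (delta == 0.0)
        out.h = 0.0;                            /* achromatic: hue is undefined */
    else if (max == r)
        out.h = 60.0 * fmod((g - b) / delta, 6.0);
    else if (max == g)
        out.h = 60.0 * ((b - r) / delta + 2.0);
    else
        out.h = 60.0 * ((r - g) / delta + 4.0);
    if (out.h < 0.0)
        out.h += 360.0;
    return out;
}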
However, now that I'm much wiser I prefer the LAB colour space. LAB can't be directly converted to RGB because RGB is device dependent. This is where monitor drivers come in (another thing I thought was pointless until I discovered their use), since one of their functions is usually to describe how colour spaces map onto them. A lot of people don't bother with this, and it's usually not a problem unless you're a colourphile or an artist.
I'd also like to mention that most programs output RGB, which is sent directly to your monitor as such - in which case, monitor drivers won't make any difference. It used to be limited to the realm of high-end Photoshop graphics, but now cameras and scanners (as well as some digital video cameras) are embedding their colour maps into image files as metadata, so it's becoming increasingly common.
Windows Vista uses CIECAM02 as its standard for high-precision imaging. Wikipedia describes it as having 6 dimensions controlling "brightness (luminance), lightness, colorfulness, chroma, saturation, and hue". Windows Vista implements it using several types of profiles that work together to produce the final output (Device Profiles, Viewing Condition Profiles, Gamut Mapping Profiles).
My OS is Perception.
Re: Standardized Representation of Colour
MessiahAndrw wrote: However, in nature you'd expect the light that is bounced off to be a shade of yellow; but instead of returning a shade of yellow, red objects will return red, and green objects will return green.
Not entirely true. Plastics will reflect the color of light that they are illuminated with. Metals reflect their own color. That is how the eye distinguishes a "metallic" sheen from something that looks like cheap plastic.
- Owen
- Member
- Posts: 1700
- Joined: Fri Jun 13, 2008 3:21 pm
- Location: Cambridge, United Kingdom
- Contact:
Re: Standardized Representation of Colour
Brendan: Why not choose 3 colours off the chart such that the entire chart is within the triangle, on the understanding that, yes, this will waste bits? (I'm assuming choosing colours off the chart is possible.)
I'd probably make it 16-bit half precision floats - 0.0 to 1.0 is sRGB, and when displaying on an sRGB monitor, clamp to that. Minifloats have size and range advantages (useful for HDR imagery - games use them for HDR intermediates for that reason), and modern graphics cards handle them natively (I assume that software rendering is a secondary target here - whatever alternative colour representation you choose, someone's gonna complain about performance vs RGB).
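As a sketch of that "clamp when displaying" step (using the standard sRGB transfer curve; the function name is just illustrative, and values above 1.0 - the HDR part - simply clamp):

#include <math.h>
#include <stdint.h>

/* Convert one linear-light channel value (stored e.g. as a half float) to an
 * 8-bit sRGB value for display. Values outside 0.0..1.0 are clamped. */
uint8_t linear_to_srgb8(double c)
{
    if (c < 0.0) c = 0.0;
    if (c > 1.0) c = 1.0;
    double s = (c <= 0.0031308) ? 12.92 * c
                                : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
    return (uint8_t)(s * 255.0 + 0.5);
}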