Hi,
MessiahAndrw wrote:My wife is a graphic designer and she deals with the Pantone Matching System for printed media. She has one of these monitor calibrator things (http://www.xrite.com/i1display-pro) that measures the wavelengths output by the monitor to create a monitor profile, so what she sees on the printed output matches what her monitor shows. The main problem is that monitors are backlit, whereas printed media isn't and the colour on printed media depends on ambient light. In the end, you end up with 3 colour profiles - for the input media (the camera), the workstation monitor, and the printer. Since you're just dealing with screens it's a simpler problem, and X-Rite (the company that now owns Pantone) might have some kind of gadget for measuring your monitor output.
The fact that your wife needs to do this calibration in the first place (and the fact that colours aren't automatically the same on all devices wherever possible) is appalling. It represents an unacceptable failure in the IT industry.
For monitors, EDID provides the information needed to characterise the monitor's colour profile and convert from a known/standardised colour space to whatever the monitor's colour space happens to be, and it has provided this information since it was first designed (about 15 years ago). For cameras, scanners and printers, I haven't researched it properly; but I do know that for USB cameras the USB Video Class specification has a "Color Matching Descriptor", and that HP's "Printer Control Language" has a lot of support for colour management (multiple colour spaces including device independent colour spaces, ICC profiles, etc).
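To give an idea of what EDID actually provides: the base EDID block stores the monitor's red, green and blue primaries and its white point as 10-bit CIE xy chromaticity coordinates, packed into bytes 0x19 to 0x22. Here's a minimal sketch of extracting them (checksum verification and error handling omitted); these 4 points, plus the gamma value stored elsewhere in the block, are what you'd use to build an "XYZ to monitor" conversion matrix:

```c
#include <stdio.h>
#include <stdint.h>

/* Extract one 10-bit CIE xy chromaticity coordinate from the base EDID
   block. "hi" is the byte holding the 8 high bits (bytes 0x1B..0x22);
   "lo_byte"/"lo_shift" select the 2 low bits packed into 0x19 and 0x1A. */
static double edid_coord(const uint8_t *edid, int hi, int lo_byte, int lo_shift)
{
    unsigned value = ((unsigned)edid[hi] << 2) | ((edid[lo_byte] >> lo_shift) & 0x3);
    return value / 1024.0;   /* 10-bit fraction: 0.0 .. ~0.999 */
}

/* Print the monitor's primaries and white point as CIE xy coordinates */
void print_edid_chromaticity(const uint8_t edid[128])
{
    printf("red   x=%.4f y=%.4f\n",
           edid_coord(edid, 0x1B, 0x19, 6), edid_coord(edid, 0x1C, 0x19, 4));
    printf("green x=%.4f y=%.4f\n",
           edid_coord(edid, 0x1D, 0x19, 2), edid_coord(edid, 0x1E, 0x19, 0));
    printf("blue  x=%.4f y=%.4f\n",
           edid_coord(edid, 0x1F, 0x1A, 6), edid_coord(edid, 0x20, 0x1A, 4));
    printf("white x=%.4f y=%.4f\n",
           edid_coord(edid, 0x21, 0x1A, 2), edid_coord(edid, 0x22, 0x1A, 0));
}
```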
Basically, as far as I can tell, hardware is not to blame for the unacceptable failure in the IT industry and it's almost entirely software's fault.
MessiahAndrw wrote:My concerns:
- A large portion of the CIE XYZ colour space is imaginary, in contrast with RGB where every possible colour value creates a unique colour (HSV also suffers this problem - a V of 0 is output as black, regardless of H and S.)
For a system that uses 3 primary colours there are only 2 possibilities:
- it is able to represent all colours that humans can see and there is some wasted space for imaginary colours.
- it's unacceptable as a device independent colour space (incapable of representing all colours that humans can see).
For the only acceptable case, at least 2 of the primary colours must be imaginary, and the amount of space wasted by imaginary colours can be reduced by using 3 imaginary primaries. If you carefully choose 3 imaginary primaries to minimise the amount of space wasted by imaginary colours, then you'd end up with something very similar to CIE XYZ.
In other words, while CIE XYZ does waste some space for imaginary colours, this is impossible to avoid (while remaining acceptable as a device independent colour space), and the amount of wasted space is very close to the minimum possible.
MessiahAndrw wrote:- The blending algorithms are somewhat more difficult in this colour space. Imagine you're writing a 3D game and two light sources are shining on the same wall; you can easily do final colour = (wall colour * light a) + (wall colour * light b) in RGB. Transparency is final colour = (source a * (1 - alpha)) + (source b * alpha).
RGB is crippled and completely unacceptable for 3D rendering. It fails for "additive colour" (e.g. you can have 2 or more light sources using colours that RGB is unable to represent, where combining those light sources results in a colour that RGB can represent); and it also fails for "subtractive colour" (e.g. you can have a light source using a colour that RGB is unable to represent, where that light passes through a filter and becomes a colour that RGB can represent).
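To make the additive case concrete, here's a small sketch (the XYZ values below are made up purely for illustration): light A is outside the sRGB gamut (its red component comes out negative), light B is inside, and their sum is inside. A renderer that clamped A's negative red to zero before adding would get the wrong answer for a perfectly displayable result:

```c
#include <stdio.h>

/* Standard XYZ to linear sRGB matrix (D65 white point) */
static const double XYZ_TO_SRGB[3][3] = {
    {  3.2406, -1.5372, -0.4986 },
    { -0.9689,  1.8758,  0.0415 },
    {  0.0557, -0.2040,  1.0570 },
};

static void xyz_to_srgb(const double xyz[3], double rgb[3])
{
    for (int i = 0; i < 3; i++) {
        rgb[i] = XYZ_TO_SRGB[i][0] * xyz[0]
               + XYZ_TO_SRGB[i][1] * xyz[1]
               + XYZ_TO_SRGB[i][2] * xyz[2];
    }
}

int main(void)
{
    /* Made-up light sources, in CIE XYZ */
    const double light_a[3] = { 0.10, 0.30, 0.05 };   /* saturated greenish */
    const double light_b[3] = { 0.25, 0.15, 0.10 };
    double sum[3], rgb[3];

    for (int i = 0; i < 3; i++) sum[i] = light_a[i] + light_b[i];

    xyz_to_srgb(light_a, rgb);
    printf("A   -> R=%+.3f G=%+.3f B=%+.3f (R < 0: outside sRGB)\n", rgb[0], rgb[1], rgb[2]);
    xyz_to_srgb(light_b, rgb);
    printf("B   -> R=%+.3f G=%+.3f B=%+.3f\n", rgb[0], rgb[1], rgb[2]);
    xyz_to_srgb(sum, rgb);
    printf("A+B -> R=%+.3f G=%+.3f B=%+.3f (all within 0..1)\n", rgb[0], rgb[1], rgb[2]);
    return 0;
}
```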
However, it really doesn't matter what the renderer does or which colour space/s it uses - I'd still need some way to convert "renderer colour space" into "device specific colour space", which means I need some way to describe this colour space conversion. The nice thing is that most colour space conversions use matrices, and these conversion matrices can be multiplied. For example, if the monitor's description provided an "XYZ to monitor colour space" conversion matrix, and if the renderer felt like using sRGB and had an "sRGB to XYZ" conversion matrix, then both conversion matrices can be multiplied together to create an "sRGB to monitor colour space" conversion matrix that the renderer can use to convert its colour space into whatever the monitor uses, without anything ever actually being converted to XYZ.
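Here's a sketch of that concatenation (assuming, purely for illustration, that the monitor happens to be an sRGB monitor, so the hypothetical "XYZ to monitor" matrix below is just the standard XYZ to linear sRGB matrix; a real one would be derived from the monitor's EDID chromaticities):

```c
#include <stdio.h>

/* out = a * b, for 3x3 matrices */
static void mat3_mul(const double a[3][3], const double b[3][3], double out[3][3])
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            out[r][c] = a[r][0]*b[0][c] + a[r][1]*b[1][c] + a[r][2]*b[2][c];
}

int main(void)
{
    /* Standard linear sRGB to CIE XYZ matrix (D65 white point) */
    const double srgb_to_xyz[3][3] = {
        { 0.4124, 0.3576, 0.1805 },
        { 0.2126, 0.7152, 0.0722 },
        { 0.0193, 0.1192, 0.9505 },
    };
    /* Hypothetical "XYZ to monitor" matrix (here: the standard XYZ to
       linear sRGB matrix, i.e. the monitor is assumed to be sRGB) */
    const double xyz_to_monitor[3][3] = {
        {  3.2406, -1.5372, -0.4986 },
        { -0.9689,  1.8758,  0.0415 },
        {  0.0557, -0.2040,  1.0570 },
    };

    /* Concatenate once, up front. Note the order: the matrix that is
       applied first (sRGB to XYZ) goes on the right. */
    double srgb_to_monitor[3][3];
    mat3_mul(xyz_to_monitor, srgb_to_xyz, srgb_to_monitor);

    /* For this example srgb_to_monitor comes out as (approximately) the
       identity matrix, since the two matrices are inverses of each
       other; with a real monitor matrix it wouldn't be. Either way,
       nothing is ever actually converted to XYZ. */
    for (int r = 0; r < 3; r++)
        printf("%+.4f %+.4f %+.4f\n", srgb_to_monitor[r][0],
               srgb_to_monitor[r][1], srgb_to_monitor[r][2]);
    return 0;
}
```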
MessiahAndrw wrote:- Performance. If you're watching a video or playing a real time 3D game, can you convert 1920*1080*60 pixels per second? I'd imagine for an 8-bit video game, you could simplify the problem by generating a lookup table for all 256 colours - but what if the screen is shared across multiple monitors? (A video wall or a dual-monitor workstation.)
Converting XYZ to one of the RGB colour spaces costs the same as converting one RGB space to another RGB space (it's a single matrix multiplication in both cases).
For the total cost of the simple/brute force approach, a matrix multiplication is 9 multiplications and 6 additions per pixel (15 floating point operations). For 1920*1200*60 you'd be looking at 138,240,000 pixels per second, or 2.0736 billion floating point operations per second (about 2.07 GFLOPS). Modern Intel CPUs range from about 10 GFLOPS to a few hundred GFLOPS, which means even the slowest would be able to handle about 5 times that load (and most could handle far more).
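A back-of-envelope check of that arithmetic:

```c
#include <stdio.h>

int main(void)
{
    /* 3x3 matrix * 3-vector: 9 multiplications + 6 additions per pixel */
    long flops_per_pixel = 9 + 6;
    long pixels_per_second = 1920L * 1200 * 60;   /* 138,240,000 */

    printf("%.4f GFLOPS needed\n",
           (double)(pixels_per_second * flops_per_pixel) / 1e9);  /* 2.0736 */
    return 0;
}
```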
However, please note that the simple/brute force approach isn't necessarily the approach I'd use. I have theories... Mostly, the results from rasterisation are naturally horizontal line segments, where each line segment is 1 pixel tall and any width (up to the full width of the screen), and each line segment has a starting colour and an ending colour (where the ending colour for one line segment is the next line segment's starting colour). It should be possible to keep the data in this "line segment" format during subsequent processing stages (HDR, accessibility and conversion to device specific colour space), so that the number of colour conversions depends on the number of line segments and not the number of pixels. If the average line segment width is larger than one pixel (which is likely), the overhead of colour space conversion is less than it would've been for the simple/brute force approach.
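A minimal sketch of that idea (all names here are hypothetical). Because the conversion is a linear matrix (ignoring any non-linear gamma step, which would have to happen afterwards), converting just the two endpoint colours and interpolating in the device colour space gives the same result as converting every pixel individually:

```c
#include <stddef.h>

/* One rasterised span: 1 pixel tall, any width, with a colour at each
   end (linearly interpolated across the span) */
typedef struct {
    int x, y, width;
    double start[3];   /* colour at left edge, in renderer colour space */
    double end[3];     /* colour at right edge */
} Segment;

/* 3x3 colour space conversion (renderer -> monitor) */
static void convert(const double m[3][3], const double in[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[i][0]*in[0] + m[i][1]*in[1] + m[i][2]*in[2];
}

/* Convert only the segment's endpoints (2 conversions per segment, no
   matter how wide it is), then interpolate in the device colour space */
void draw_segment(const double m[3][3], const Segment *s,
                  double (*framebuffer)[3], int screen_width)
{
    double a[3], b[3];
    convert(m, s->start, a);
    convert(m, s->end, b);
    for (int i = 0; i < s->width; i++) {
        double t = (s->width > 1) ? (double)i / (s->width - 1) : 0.0;
        double *px = framebuffer[(size_t)s->y * screen_width + s->x + i];
        for (int c = 0; c < 3; c++)
            px[c] = a[c] + (b[c] - a[c]) * t;
    }
}
```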
Finally, don't forget that I will be planning "fixed frame rate, variable quality" (and not the typical "fixed quality, variable frame rate" approach). What this means is that if the video driver doesn't think it'll be able to do 1920*1200 pixels before the frame's deadline, it can reduce the resolution (e.g. do colour space conversion for 960*1200 pixels instead) and upscale afterwards. Then, if the next frame is the same, it can do a full 1920*1200 pixel colour space conversion of the previous frame's data and re-display it at the higher quality, before the user has had time to notice that the initial frame was lower quality. Basically, for 1920*1200 at 60 frames per second it doesn't have to be able to process 1920*1200*60 pixels per second.
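As a sketch of how the driver might make that decision (the names and the cost model are made up; a real driver would estimate its throughput by measuring previous frames):

```c
/* Decide how many pixels wide to render this frame, given the time
   remaining before the frame's deadline */
int choose_render_width(double seconds_left, double seconds_per_pixel,
                        int full_width, int height)
{
    int width = full_width;

    /* Halve the horizontal resolution until the estimated cost fits
       within the budget (the result is upscaled to full_width after) */
    while (width > full_width / 8 &&
           (double)width * height * seconds_per_pixel > seconds_left) {
        width /= 2;
    }
    return width;
}
```

E.g. at 60 frames per second the budget is about 16.7 ms minus whatever the rest of the pipeline needs; if only half the pixels can be converted in the time left, this falls back to 960*1200 and upscales.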
Essentially what I'm saying here is that the question isn't whether the CPU is able to process 1920*1200*60 pixels per second; the real question is whether I'm able to design and optimise the graphics pipeline to give "good enough quality" when every frame is different (e.g. 3D games), and to reach "max. possible quality" before the user has time to notice lower quality frames when most frames aren't different (e.g. desktop apps).
MessiahAndrw wrote:There have also been RGB colour space standards (like sRGB, Adobe RGB), and many monitors and HDTVs are sRGB compliant, and I thought the theory was that a colour on any two sRGB screens should be identical. My monitor has an sRGB mode, but it also lets me change the brightness/contrast/colour temperature (which obviously alters the output colour) while claiming to be in sRGB mode, so that throws out that theory.
For 2 displays that both claim to be sRGB and are both in their default configuration, in theory a colour should be the same on both; but in practice it's unwise to trust marketing departments, and there's no guarantee that the colours actually will be the same.
Also, take a look at the sRGB gamut (from the Wikipedia page):
That larger horseshoe shape represents all the colours that humans can see, and that pathetic little triangle represents how inadequate and crippled sRGB actually is in comparison.
Cheers,
Brendan