
Concise Way to Describe Colour Spaces

Posted: Wed Jul 01, 2015 10:13 am
by Brendan
Hi,

I'm (still) designing a native file format for my OS to describe monitors. I want to generate graphics in a device independent colour space (most likely, it'll be the CIE XYZ colour space with D65 as the white point, with no gamma); and then convert the device independent colours into whatever the monitor happens to use; and mostly need a way to parameterise this conversion.

For convenience (so I can "auto-convert" the monitor's EDID into my file format if/when the OS doesn't have a suitable file for the monitor); where possible, the data I use to describe the monitor's colour space will need to be derived from the values that the monitor's EDID provides. EDID provides:
  • The (x, y) chromaticity coords (based on the CIE 1931 2° Chromaticity Chart) for each of the 3 primary colours (e.g. red, green, blue)
  • The (x, y) chromaticity coord (based on the CIE 1931 2° Chromaticity Chart) for the monitor's default white point. Note: I will be assuming the monitor is always using its default white point.
  • The monitor's gamma information (either an exponent or an "sRGB" flag)
The conversion process involves 3 steps:
  • Correct the white point (chromatic adaptation); which mostly involves a matrix multiplication
  • Converting from one colour space to another; which mostly involves a matrix multiplication again
  • Doing gamma encoding to adjust the linear/"no gamma" values into whatever the monitor wants; which mostly involves exponentiation (e.g. "out = in**2.2") but gets slightly trickier for sRGB.
Note that the first and second steps can be combined. Essentially, you pre-multiply the chromatic adaptation matrix and the "XYZ to whatever" matrix together to find a "combined matrix"; and instead of doing 2 matrix multiplications per pixel (both steps one after the other) you only do one matrix multiplication with the "combined matrix" (both steps at the same time).
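The steps above can be sketched as follows. The matrices here are identity placeholders (the real values would come from the monitor's EDID), and sRGB's piecewise encoding is shown because it's the "slightly trickier" case mentioned:

```python
# Sketch of the "combined matrix" idea: pre-multiply the chromatic
# adaptation matrix and the "XYZ to whatever" matrix once, then do a
# single matrix multiply per pixel.  Matrices here are placeholders.

def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    """Apply a 3x3 matrix to a 3-component colour."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def srgb_encode(c):
    """sRGB gamma encoding (linear [0,1] -> encoded [0,1]); a plain
    "out = in ** (1/2.2)" would replace this for non-sRGB monitors."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
chromatic_adaptation = IDENTITY   # placeholder: derived from white points
xyz_to_monitor = IDENTITY         # placeholder: derived from primaries
combined = mat_mul(xyz_to_monitor, chromatic_adaptation)

# Per pixel: one matrix multiply, then gamma encoding per channel.
xyz_pixel = [0.25, 0.5, 0.75]
encoded = [srgb_encode(c) for c in mat_vec(combined, xyz_pixel)]
```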

Typically a monitor only has 3 channels, one for each primary colour (e.g. red, green, blue). I want to be able to handle any number of channels from 1 channel (e.g. monochrome) to 4 channels. For 4 channels, it could be "red, green, yellow, blue" (note: apparently there are some recent TVs that use this system instead of RGB), or "red, green, blue, white", or "cyan, magenta, yellow, black", or it could be RGB where the extra channel is used for alpha/transparency (which I assume is necessary for "augmented reality" displays like Microsoft's HoloLens), or anything else. Mostly this is about future proofing (and not so much about typical hardware we use today).

Because the source colour space will be standardised for the OS; the correct chromatic adaptation matrix for the monitor and the correct "XYZ to whatever" matrix for the monitor can be derived from EDID's information and then combined. Essentially, for 3 channels/primaries my display information only needs a single combined matrix (and the gamma value). For 4 channels/primaries, I assume I can extrapolate from the matrix maths.

Basically; for each channel I'll have an "X multiplier", "Y multiplier" and "Z multiplier" to convert the source XYZ pixel colour into the monitor's colour space; and I assume this will work fine for all displays with 1 to 4 primary colours.

In addition, I'd have an enumeration of "special channel" types. This would include alpha/transparency; but also black and white, because the conversion from (e.g.) RGB to RGBW or from CMY to CMYK involves finding the minimum of the original 3 primaries and can't be done with matrix multiplication.
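As a sketch of those "special channel" conversions (this min-based extraction is one common heuristic, not the only possible algorithm):

```python
# Extract the component shared by all three primaries into the extra
# channel.  The min() is exactly why this can't be expressed as a
# matrix multiplication.

def rgb_to_rgbw(r, g, b):
    """Move the common grey component of RGB into a white channel."""
    w = min(r, g, b)
    return (r - w, g - w, b - w, w)

def cmy_to_cmyk(c, m, y):
    """Under-colour removal: move the common component into black (K)."""
    k = min(c, m, y)
    return (c - k, m - k, y - k, k)
```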

Questions!

Are there any cases where "channel type [primary colour, transparency, black, white], X multiplier, Y multiplier, Z multiplier" (for up to 4 channels) isn't adequate, or any other reason why this might not be a good idea?

For the chromatic adaptation transform; there are multiple alternatives (XYZ scaling, Bradford, Von Kries, Sharp CAT, CMCCAT2000). Does anyone know which is likely to give the most accurate results? For example, assume I have 2 monitors side-by-side where one uses one white point and another uses a different white point; which chromatic adaptation transform is most likely to make colours look the same on both monitors?

Does anyone know how to convert EDID's "(x, y) chromaticity coord for default whitepoint" into any of the chromatic adaptation transform matrices? Note: I think I've found an answer to this question on this web page.
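For what it's worth, the commonly documented recipe (Bradford, as popularised on Bruce Lindbloom's site) can be sketched like this; the Bradford matrix constants are standard, everything else is illustrative:

```python
# Build a Bradford chromatic adaptation matrix from two (x, y) white
# points: CAT = B^-1 * diag(dst_cone / src_cone) * B.

BRADFORD = [[ 0.8951,  0.2664, -0.1614],
            [-0.7502,  1.7135,  0.0367],
            [ 0.0389, -0.0685,  1.0296]]

def xy_to_xyz(x, y):
    """Convert an (x, y) chromaticity to XYZ with Y = 1."""
    return [x / y, 1.0, (1.0 - x - y) / y]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def inv3(m):
    """Invert a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    return [[(e*i - f*h) / det, (c*h - b*i) / det, (b*f - c*e) / det],
            [(f*g - d*i) / det, (a*i - c*g) / det, (c*d - a*f) / det],
            [(d*h - e*g) / det, (b*g - a*h) / det, (a*e - b*d) / det]]

def bradford_cat(src_xy, dst_xy):
    src = mat_vec(BRADFORD, xy_to_xyz(*src_xy))   # source cone response
    dst = mat_vec(BRADFORD, xy_to_xyz(*dst_xy))   # destination cone response
    scale = [[dst[0] / src[0], 0.0, 0.0],
             [0.0, dst[1] / src[1], 0.0],
             [0.0, 0.0, dst[2] / src[2]]]
    return mat_mul(inv3(BRADFORD), mat_mul(scale, BRADFORD))

D65 = (0.3127, 0.3290)
D50 = (0.3457, 0.3585)
cat = bradford_cat(D65, D50)   # adapts D65-relative XYZ to D50
```

Applying `cat` to a D65-relative XYZ colour gives its D50-relative equivalent; the same construction works for whatever white point the EDID reports. Swapping `BRADFORD` for the identity matrix degenerates to plain XYZ scaling.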

Does anyone know how to convert EDID's "(x, y) chromaticity coord for primary colours" into an "XYZ to whatever" transformation matrix? I can find plenty of pre-computed matrices for specific cases (e.g. "XYZ to xyY", "XYZ to sRGB", etc), but haven't found anything describing how these are created. Note: I think I've found an answer to this question on this web page.
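The standard construction (again following Lindbloom's derivation) is: put the primaries' XYZ values in the columns of a matrix, scale the columns so the white point maps to RGB = (1, 1, 1), then invert. A sketch, using sRGB's primaries as test data:

```python
# Build "RGB to XYZ" from the primaries' (x, y) chromaticities plus the
# white point, then invert it to get the "XYZ to RGB" matrix.

def xy_to_xyz(x, y):
    """Convert an (x, y) chromaticity to XYZ with Y = 1."""
    return [x / y, 1.0, (1.0 - x - y) / y]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def inv3(m):
    """Invert a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    return [[(e*i - f*h) / det, (c*h - b*i) / det, (b*f - c*e) / det],
            [(f*g - d*i) / det, (a*i - c*g) / det, (c*d - a*f) / det],
            [(d*h - e*g) / det, (b*g - a*h) / det, (a*e - b*d) / det]]

def xyz_to_rgb_matrix(r_xy, g_xy, b_xy, w_xy):
    cols = [xy_to_xyz(*p) for p in (r_xy, g_xy, b_xy)]
    m = [[cols[j][i] for j in range(3)] for i in range(3)]  # primaries as columns
    s = mat_vec(inv3(m), xy_to_xyz(*w_xy))  # scale so white -> (1, 1, 1)
    rgb_to_xyz = [[m[i][j] * s[j] for j in range(3)] for i in range(3)]
    return inv3(rgb_to_xyz)

# Sanity check with sRGB's primaries and D65 white point; this should
# reproduce the well-known sRGB matrix (first row ~ 3.2406, -1.5372, -0.4986).
m = xyz_to_rgb_matrix((0.640, 0.330), (0.300, 0.600), (0.150, 0.060),
                      (0.3127, 0.3290))
```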

Is there anything else I've overlooked or falsely assumed or messed up?


Thanks,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Wed Jul 01, 2015 11:28 pm
by linguofreak
AFAIK, displays that use RGBY (or any number of primaries beyond three) just do so internally: The display still takes RGB input.

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 02, 2015 12:24 am
by Brendan
Hi,
linguofreak wrote:AFAIK, displays that use RGBY (or any number of primaries beyond three) just do so internally: The display still takes RGB input.
Normally, yes.

However, display manufacturers are shifting from "restrictive" analogue signals (e.g. VGA) to digital signals (e.g. HDMI) where it's easy to change what the digital data represents (e.g. HDMI has already added xvYCC, YCbCr, Rec. 2020 and "2D+depth"). Given that there are monitors (internally) using RGBY, and given that it's relatively easy for HDMI to add support for more different representations of colour (e.g. without changing physical cables/plugs/sockets or bandwidth), I wouldn't want to exclude the possibility of RGBY support being added to HDMI at some point in the future.


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Thu Jul 02, 2015 9:50 am
by Antti
The design direction is definitely right. However, I am a little bit worried about the device independent color space if you store an "X", "Y", and "Z". Would there be a huge unusable volume, i.e. imaginary colors? By using a big gamut, it is possible to keep all combinations in a non-imaginary color space.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 2:16 am
by Brendan
Hi,
Antti wrote:The design direction is definitely right. However, I am a little bit worried about the device independent color space if you store an "X", "Y", and "Z". Would there be a huge unusable volume, i.e. imaginary colors? By using a big gamut, it is possible to keep all combinations in a non-imaginary color space.
For XYZ, I'd guess that about a third of the colours are imaginary. However, I don't think it's possible to reduce this while still using 3 primaries (without shifting to 4 or more primaries); and even if it could be reduced a little it's possibly more important to use something standard. Of course in practice, maybe about half of the (real and imaginary) colours that can be represented by XYZ can't be displayed on a standard sRGB monitor. To compensate for this, I'll just use a few more bits per pixel (e.g. 9 bits per primary instead of 8 bits).

However; (for darker colours) "linear/no gamma" needs more bits per pixel than "with gamma"; and I'll be using more bits to compensate for that too. On top of that, I'll be generating HDR data (and doing an "auto-iris" thing) which means I'll want to be able to store colours that are thousands of times brighter than any monitor can display; which is going to cost even more bits per pixel.

Mostly I'm thinking of using 32-bits per primary, where the normal range is 0 to 65535 (and 0xFFFFFFFF is about 65 thousand times brighter than monitors can display). This gives me 16-bits per primary after doing the "auto-iris" thing, which I think will be enough to cover the extra bits needed to compensate for gamma and imaginary colours (but is small enough to use lookup tables for gamma).

More specifically; starting with "96-bit XYZ HDR pixels" I'm planning to:
  • find the brightest point and adjust my "iris value" slightly (iris doesn't respond immediately and should take several frames to adjust to changes). This requires cooperation between displays.
  • scale everything by the "iris value" and clamp the results to the 0 to 65535 range to get "48-bit XYZ pixels"
  • do any accessibility modifications (hue shifting for colour blind users and "flash limiting" for users with photosensitive epilepsy). This is mostly for convenience - so that software developers don't need to know or care if (e.g.) the user is colour blind or not (which is something software developers typically forget to consider anyway). Video drivers enable/disable these modifications based on information from each user's profile, which they get from the OS when the user logs in.
  • multiply pixels by the monitor's "XYZ to whatever monitor wants" matrix and clip any out-of-range results to what the monitor can handle ("absolute colorimetric intent"). Note: "absolute colorimetric intent" is probably bad for single-monitor (in most cases you'd probably want "relative colorimetric intent"); but it means that each monitor can be handled independently in the "2 or more monitors with different colour spaces" case and I don't need to invent a complex global consensus scheme (to avoid the "same colour becomes different on different monitors" problem).
  • use a gamma lookup table to convert "linear/no gamma" values into "with gamma" values. Note: If dithering is enabled this will give 16 bits per primary for the next step; and if dithering is disabled this will give the final values to send to the monitor
  • (optional, for low colour depth cases only) use dithering to convert 16-bit per primary into N-bit per primary (where N depends on the video mode and the monitor's capabilities, whichever is lower - e.g. if it's an 8-bit per primary video mode but the monitor can only display 6 bits per primary, then N = 6)
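As a compressed sketch of steps 2, 4 and 5 above (the names, the plain 2.2 gamma, and the identity matrix are placeholders, not a real driver):

```python
# Per-pixel path: iris-scale and clamp to 16 bits per component, apply
# the monitor's combined matrix with clipping, then gamma-encode via a
# lookup table.

GAMMA = 2.2   # placeholder; the real value comes from the monitor's EDID

# Gamma lookup table: 16-bit linear in -> 16-bit encoded out.
gamma_lut = [round(((i / 65535.0) ** (1.0 / GAMMA)) * 65535)
             for i in range(65536)]

def clamp16(v):
    """Clip to the 0..65535 range ("absolute colorimetric" style clipping)."""
    return 0 if v < 0 else 65535 if v > 65535 else int(v)

def process_pixel(xyz_hdr, iris, combined):
    # Step 2: scale by the "iris value", clamp to get "48-bit XYZ".
    xyz = [clamp16(c * iris) for c in xyz_hdr]
    # Step 4: XYZ -> monitor colour space, one combined matrix per pixel.
    chans = [clamp16(sum(combined[i][k] * xyz[k] for k in range(3)))
             for i in range(3)]
    # Step 5: linear -> "with gamma" via the lookup table.
    return [gamma_lut[c] for c in chans]

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
out = process_pixel([130000.0, 131070.0, 0.0], 0.5, IDENTITY)
```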

Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 2:55 am
by Antti
Your research on this topic is, just, ... , impressive. Heh - it took me a while to do the proper background research to understand the topic. It should not be too hard to tackle this criticism. After all, I am currently struggling with 1-bit colors in my boot loader...

The main problem might be that your solution of using XYZ pixels is too obvious, i.e. it would already be in use if it were superior. The storage size of a few extra bytes cannot be the biggest drawback. Perhaps it is inconvenient to do any image processing on XYZ values and a "work copy gamut" is needed for doing anything useful? It would be a huge advantage to have a usable device independent color space if all your applications rendered graphics into that space (without caring where it is displayed). In the end you might end up doing conversions all the time without any significant advantages?

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 4:58 am
by Brendan
Hi,
Antti wrote:Your research on this topic is, just, ... , impressive. Heh - it took me a while to do the proper background research to understand the topic. It should not be too hard to tackle this criticism. After all, I am currently struggling with 1-bit colors in my boot loader...

The main problem might be that your solution of using XYZ pixels is too obvious, i.e. it would already be in use if it were superior.
Most OSs do have support for some of this (e.g. ColorSync (Apple), Windows Color System (Microsoft), Linux colour management). However most OSs can't say "all applications and all file formats must use CIE XYZ from now on" to make it seamless. Instead it ends up being messy "arbitrary application colour space -> intermediate device independent colour space -> device dependent colour space" conversions typically based on (e.g.) ICC profiles; and because it's messy most software (e.g. everything except applications that deal with professional publishing and image processing) ignores it; and because it's normally ignored (as far as I can tell) most programmers don't even realise it exists (e.g. they just assume everything is sRGB, even if they don't realise they're making that assumption and even when the assumption isn't true).

Because my project is "burn the world NIH" (where all software, all APIs, all file formats, etc. are redesigned from scratch) I can say "for my OS, everything uses CIE XYZ" and avoid all the mess. ;)
Antti wrote:The storage size of a few extra bytes cannot be the biggest drawback. Perhaps it is inconvenient to do any image processing on XYZ values and a "work copy gamut" is needed for doing anything useful? It would be a huge advantage to have a usable device independent color space if all your applications rendered graphics into that space (without caring where it is displayed). In the end you might end up doing conversions all the time without any significant advantages?
As far as applications are concerned; processing XYZ values is no different to processing RGB (it's only messy when applications need to convert between them, which is something my applications won't need to do); and "linear/no gamma" makes it easier to get correct results (a lot of existing software tends to ignore gamma and get wrong results because they don't want to do the extra work needed to get right results). It should be much more convenient, especially when software is dealing with very different devices (e.g. sending the exact same graphics to both video driver and printer driver).

For drawbacks; I suspect that the biggest drawback will be either the amount of CPU time consumed or RAM bandwidth limits. Memory consumption (for 96-bit per pixel) is going to be (e.g.) about 36 MiB for 1920*1600 and about 9 MiB for 1024*768; which isn't that much for modern computers (it's mostly only a problem for old machines with less than 512 MiB of RAM).

Also note that I plan to do "fixed frame rate, variable quality" rather than "variable frame rate, fixed quality". What I mean here is that the video driver will estimate how much processing it can afford to do before the frame is displayed, and if it doesn't think it can do it fast enough it'll reduce quality to reduce processing time (e.g. use 960 * 800 in earlier steps and upscale to 1920 * 1600 before the final steps; skip the final dithering step when there isn't enough time, etc). Of course this will be done throughout the entire graphics pipeline (including all the rendering, etc) and not just in the "XYZ HDR to monitor" steps at the end of the pipeline. Part of the idea behind this is that humans take a while to notice image quality; which means that for normal applications (where screen contents don't change every frame) the video driver can do a lower quality initial frame and then (before the user has time to notice that the initial frame wasn't as good) re-process the frame's data and replace the initial frame with a higher quality subsequent frame; and it means that for fast moving graphics (e.g. 3D games) the user doesn't see each frame for long enough to really notice the image quality (until they pause the game or stand still).

Note: There's been research into how long it takes people to notice things; which suggests there's 2 groups. Some things (motion, flicker, edges, depth) are in the "fast group" and can be noticed within about 16 ms (e.g. something flickering faster becomes a blur); while other things (fine details, exact colours, word recognition) are in the "slow group" and take about 5 times longer to notice.


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 5:36 am
by Antti
Brendan wrote:As far as applications are concerned; processing XYZ values is no different to processing RGB
If this is true, and I really hope it is, there are no problems. It just intuitively feels like increasing and decreasing XYZ values produces "unpredictable and non-linear" results that are not so easy to handle. Compare that to "increase the RED channel", where you know what happens.
Brendan wrote:the data I use to describe the monitor's colour space will need to be derived from the values that the monitor's EDID provides
In general, the EDID seems to be an interesting set of data structures. I have not really used it for anything but I got a few values from it just for informational purposes (i.e. just to "show that I knew it exists"). Have you analyzed an unbiased sample of monitors just to see what kind of chromaticity coordinates there usually are? Hopefully this information from the EDID is actually used somewhere, so that manufacturers have paid attention to it and it is not filled with "standard tuples" just to comply with the structure standard?

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 6:26 am
by Antti
Brendan wrote:Because my project is "burn the world NIH" (where all software, all APIs, all file formats, etc. are redesigned from scratch) I can say "for my OS, everything uses CIE XYZ" and avoid all the mess.
I guess it is safe to assume that normal rules do not apply, e.g. "just a hobby, won't be big and professional". With this in mind, everything must just fall into place. Looks good so far.

The only thing on this particular topic (in my opinion) that needs a strong argument in favor of it is the usability of CIE XYZ in image processing. I assume that there are a lot of image processing algorithms (not just software but theory) that may disagree with the nature of CIE XYZ. Also, please note that I am wrong most of the time, but I am almost sure this concern will arise sooner or later.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 6:27 am
by embryo2
Brendan wrote:I want to generate graphics in a device independent colour space
It's an interesting exercise, but what about the rest of the world? The rest of the world mostly uses the sRGB space, so your system will always be required to convert all images into your format. Next it will be required to display them using a different color space (so they will most probably look different than on any other system), then to save the image and track when it leaves for another system so it can be converted back to sRGB (otherwise another system won't understand it).

And even more: your system should acquaint the user with the area of color spaces, because it is hardly possible to track each copy of an image and convert it automatically to sRGB when it leaves for another system, so the user must inform your system about the need to convert the image. If the user sees the importance of color spaces he can tolerate such an annoying need for conversion, but most users have no idea about color spaces, and that is a problem.

However, within some small world of only your-OS-powered computers it is possible to get something interesting. But even in such a world it is still impossible to get a uniform representation of an image on different monitors because of the physical limitations of different monitors' panels; only some "close enough" representation is possible. There are also extensions into the areas of printing, scanning, and other "colored" areas, so there is a need for a uniform representation across all those areas, but a printed image is highly dependent on the light it is exposed to, so that is another limitation of the "very general" approach. And maybe it is worth defining exactly what you would miss by just using the most popular sRGB space. Would it be the difference of a picture on two monitors attached to a computer under your OS? And that's all? Is it worth the effort required?

Brendan, it's a really nice attempt to redefine the world, but once you get into the gory details you may find that the existing solutions are not as bad as you had thought. So, it is highly recommended to define the difference between your solution and the solutions the world already has. Next you can assess the effort required and the effect produced. And remember the underestimation of effort that every human makes when first meeting a new area (you can see it when a beginner osdever claims he is ready to create the best OS). However, after meeting the new area your knowledge base will definitely be extended, for your future good.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 8:49 am
by Brendan
Hi,
Antti wrote:In general, the EDID seems to be an interesting set of data structures. I have not really used it for anything but I got a few values from it just for informational purposes (i.e. just to "show that I knew it exists").
I've used EDID before, but mostly only for video mode timings (e.g. to attempt to determine if the monitor supports various VBE modes). I haven't used EDID's colour values before (but did a lot of research into colour spaces and wrote code to convert between CIE XYZ and sRGB when I was experimenting with the "everything is triangles, no textures" idea a few years ago).
Antti wrote:Have you analyzed an unbiased sample of monitors just to see what kind of chromaticity coordinates there usually are? Hopefully this information from the EDID is actually used somewhere, so that manufacturers have paid attention to it and it is not filled with "standard tuples" just to comply with the structure standard?
I haven't examined monitors, but I'd suspect most modern monitors (especially commodity consumer stuff) either use sRGB or a flavour of RGB that's quite close to sRGB.

I'd still want to ensure my file format is able to cope with obscure cases (including old CRTs and existing high-end/expensive "wide gamut" monitors) and future hardware that doesn't exist yet. Mostly; I want to avoid the "good enough for now; oops, it needs a revision; oops, needs another revision; ...; oh my it's a mess of extensions!" problem that seems to have become standard practice (or at least minimise the chance of future revisions being necessary). ;)


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 9:29 am
by Brendan
Hi,
embryo2 wrote:
Brendan wrote:I want to generate graphics in a device independent colour space
It's an interesting exercise, but what about the rest of the world? The rest of the world mostly uses the sRGB space, so your system will always be required to convert all images into your format. Next it will be required to display them using a different color space (so they will most probably look different than on any other system), then to save the image and track when it leaves for another system so it can be converted back to sRGB (otherwise another system won't understand it).
For devices (e.g. camera, scanner) the device drivers have to convert to CIE XYZ.

For legacy file formats my OS design uses "file format converters" to auto-convert files into my OS's native file formats. For example, if an application opens "test.jpg" (or "test.bmp" or "test.png" or ....) then the VFS finds a file format converter that converts JPG into my native graphics file format without the application knowing or caring; and the application itself never sees anything other than my native graphics file format.

For exporting data to other systems, quite frankly I don't consider this my problem. If Windows/Linux/OSX doesn't support my file formats, then that's Microsoft/GNU/Apple's problem.
embryo2 wrote:And even more: your system should acquaint the user with the area of color spaces, because it is hardly possible to track each copy of an image and convert it automatically to sRGB when it leaves for another system, so the user must inform your system about the need to convert the image. If the user sees the importance of color spaces he can tolerate such an annoying need for conversion, but most users have no idea about color spaces, and that is a problem.
No. If normal programmers (excluding device driver writers) have to deal with the hassles of colour space conversion then I've failed. If users have to deal with the hassles of colour space conversion then it's far worse than just failure.
embryo2 wrote:However, within some small world of only your-OS-powered computers it is possible to get something interesting. But even in such a world it is still impossible to get a uniform representation of an image on different monitors because of the physical limitations of different monitors' panels; only some "close enough" representation is possible. There are also extensions into the areas of printing, scanning, and other "colored" areas, so there is a need for a uniform representation across all those areas, but a printed image is highly dependent on the light it is exposed to, so that is another limitation of the "very general" approach. And maybe it is worth defining exactly what you would miss by just using the most popular sRGB space. Would it be the difference of a picture on two monitors attached to a computer under your OS? And that's all? Is it worth the effort required?
So, your suggestion is to fail without even attempting to succeed?


Cheers,

Brendan

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 9:54 am
by linguofreak
Brendan wrote:
For legacy file formats my OS design uses "file format converters" to auto-convert files into my OS's native file formats. For example, if an application opens "test.jpg" (or "test.bmp" or "test.png" or ....) then the VFS finds a file format converter that converts JPG into my native graphics file format without the application knowing or caring; and the application itself never sees anything other than my native graphics file format.

For exporting data to other systems, quite frankly I don't consider this my problem. If Windows/Linux/OSX doesn't support my file formats, then that's Microsoft/GNU/Apple's problem.
Given that you auto-convert into your native format, exporting data is your problem. Otherwise any image file that touches your system becomes locked in to being viewable on your system only until MS/GNU/Apple decide to support your format (good luck). So a user plugs a flash drive filled with their family pictures into a computer running your system and suddenly finds that their pictures are now all in some strange format that isn't viewable on Windows. This will not endear them to your system. It's best to leave choice of image format up to the user. If you don't, then you really are going to have to make provisions for exporting to other systems, otherwise your users will hate you.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 10:24 am
by Antti
Brendan wrote:but did a lot of research into colour spaces and wrote code to convert between CIE XYZ and sRGB when I was experimenting with the "everything is triangles, no textures" idea a few years ago
This research phase puts more trust in your idea. I am sure that during those conversions you got an idea of the potential problems (if any). Maybe my concerns about imaginary color problems are really the same thing without the word "color" in between, i.e. imaginary problems.
Brendan wrote:I want to avoid the "good enough for now; oops, it needs a revision; oops, needs another revision; ...; oh my it's a mess of extensions!" problem that seems to have become standard practice (or at least minimise the chance of future revisions being necessary)
This would be an excellent topic in itself. Perhaps the problem is just that people try to make it good enough? What if they tried to make it "bad enough" on purpose? Yes, this may sound ridiculous but I am serious. Making it "bad enough" is a guarantee that there will be another revision (backward compatible if possible). There will be much more data and experience available when creating the next revision so it could be the "perfect enough" version, i.e. likely no need for future revisions. The trick is to make the "bad enough" version actually good (the correct description could be "highly potential") but for some reason not "good enough" so that it could end up being the final version.

A little bit too deep but I guess you understand the point I tried to make.

Re: Concise Way to Describe Colour Spaces

Posted: Fri Jul 03, 2015 2:01 pm
by Brendan
Hi,
linguofreak wrote:
Brendan wrote:For legacy file formats my OS design uses "file format converters" to auto-convert files into my OS's native file formats. For example, if an application opens "test.jpg" (or "test.bmp" or "test.png" or ....) then the VFS finds a file format converter that converts JPG into my native graphics file format without the application knowing or caring; and the application itself never sees anything other than my native graphics file format.

For exporting data to other systems, quite frankly I don't consider this my problem. If Windows/Linux/OSX doesn't support my file formats, then that's Microsoft/GNU/Apple's problem.
Given that you auto-convert into your native format, exporting data is your problem. Otherwise any image file that touches your system becomes locked in to being viewable on your system only until MS/GNU/Apple decide to support your format (good luck). So a user plugs a flash drive filled with their family pictures into a computer running your system and suddenly finds that their pictures are now all in some strange format that isn't viewable on Windows. This will not endear them to your system. It's best to leave choice of image format up to the user. If you don't, then you really are going to have to make provisions for exporting to other systems, otherwise your users will hate you.
If a user plugs a flash drive filled with their family pictures into a computer running my OS; the original files won't be deleted or modified (unless the user deletes them). If the user uses applications on my OS to create a new file then that new file will be saved in my file format (but the old file will still be there).

If the user prefers a world of bloated puke where all applications need to support many different file formats for the same purpose (e.g. PCX, GIF, BMP, TIFF, JPG, ....) and each of those has multiple different "sub-formats" internally, and basic things like email attachments don't work because the sender used TIFF and the receiver has software that supports 12 different graphics file formats but doesn't support TIFF; then they can use a crappy OS instead of my OS. If the user wants an OS where everything "just works" with no bloat and the absolute minimum of end user hassle, then (eventually, if I succeed) they'll try my OS and won't ever want to go back.
Antti wrote:
Brendan wrote:I want to avoid the "good enough for now; oops, it needs a revision; oops, needs another revision; ...; oh my it's a mess of extensions!" problem that seems to have become standard practice (or at least minimise the chance of future revisions being necessary)
This would be an excellent topic in itself. Perhaps the problem is just that people try to make it good enough to begin with? What if they tried to make it "bad enough" on purpose? Yes, this may sound ridiculous but I am serious. Making it "bad enough" is a guarantee that there will be another revision (backward compatible if possible). There will be much more data and experience available when creating the next revision so it could be the "perfect enough" version, i.e. likely no need for future revisions. The trick is to make the "bad enough" version actually good (the correct description could be "highly potential") but for some reason not "good enough" so that it could end up being the final version.
For most things we've got several decades of "bad enough" attempts to look at already. We can/should go directly to "perfect enough, until/unless there's a significant and unforeseeable change in hardware capabilities".


Cheers,

Brendan