I haven't done much research into the most effective "which hues to shift where" for each of the different types of colour blindness. However...

DavidCooper wrote:
If you're dealing with red/green colourblindness, shifting red to orange will merely provide a brighter version of the same colour, as will shifting orange to yellow, and shifting yellow towards green will provide a dimmer version of the same colour again; and given that the brightness of reds and greens can vary too, that provides no useful way whatsoever to distinguish between any of these colours, other than that the brightest ones that can be displayed must be yellow (with the red and green pixels turned up full bright).

If the user can't see red, it gets shifted to orange; but orange (which they could already see) *also* gets shifted towards yellow, and yellow gets shifted a little towards green. Essentially, the colours they can see get "compressed" to make room for the colours they couldn't see.
To understand what I meant, here's a picture:
A monitor only has 3 wavelengths of light. These are at roughly 450 nm (blue), 550 nm (green) and 605 nm (red).
For someone who has no long wavelength receptors (one of the two different types of red-green colour blindness), this means that the monitor's blue (450 nm) is fine, the monitor's green (550 nm) is fine, and the monitor's red (605 nm) ends up being perceived as dark green and is therefore completely useless.
Now imagine a transformation like:
    blue = original_blue * 0.70 + original_green * 0.30
    green = original_green * 0.40 + original_red * 0.45
    red = original_red
Now imagine a transformation like:
    blue = original_blue * 0.50 + original_green * 0.50
    green = original_red
    red = original_red
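Assuming 8-bit RGB channels, the two transformations above could be sketched like this (the function names are mine; clamping to 255 is my addition in case the weights are tuned upwards later):

```python
def shift_for_weak_red(r, g, b):
    """First transformation: compress the hues the user can see
    (green/yellow/orange) to make room for shifted reds."""
    new_b = min(255, int(b * 0.70 + g * 0.30))
    new_g = min(255, int(g * 0.40 + r * 0.45))
    new_r = r
    return new_r, new_g, new_b

def shift_for_no_red(r, g, b):
    """Second transformation: for someone with no long wavelength
    receptors, copy the red information into the green channel."""
    new_b = min(255, int(b * 0.50 + g * 0.50))
    new_g = r
    new_r = r
    return new_r, new_g, new_b

# With the second transformation, pure red becomes yellow on the
# monitor, so its green component is visible to the user:
print(shift_for_no_red(255, 0, 0))  # (255, 255, 0)
```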
DavidCooper wrote:
This means that if you're going to shift all the colours into a range where they can be distinguished by a red/green colourblind person, you are working with a very narrow range of colours from green=yellow=orange=red through cyan=grey=magenta to blue, and you're either going to have to stuff all the green shades down into the cyan=grey=magenta range while keeping the reds in the green=yellow=orange=red range, or stuff all the red shades down into the cyan=grey=magenta range while keeping the greens in the green=yellow=orange=red range. While you do that, the blue to cyan=grey=magenta range has to be shoved further towards the blue. The result will be far inferior to normal vision, but it will work considerably better than a greyscale display, and they can be perfectly usable - I grew up watching a black & white TV and it was rare that you'd notice that it wasn't in colour. Given that colourblind people see the real world in a colour-depleted form anyway, they aren't losing anything when they use a computer that they don't lose all the time normally; but because software is capable of helping them in a way that the outside world can't, software should offer to help, particularly in cases where two colours are used to distinguish between two radically different things and those colours look identical to some users. The simplest way to guard against that is to run all your software in greyscale so that you can be sure it functions well with all kinds of colourblindness.

You're writing a spreadsheet application. You've decided that the icon/logo for the application will be a 3D model of an abacus. You are responsible for choosing the colour of the abacus' beads and the colour of its frame. Your boss wants the beads and frame to be different colours (and likes bright colours).
The 3D model will be displayed on the desktop, and there are 1000 colour blind users all with completely different coloured background images. It will also be displayed in the application's help system where the background of all "help pages" depends on the user's "GUI theme" and there are 1000 colour blind users all with completely different "GUI themes".
Now tell me: what are you testing with your "greyscale" test to ensure that all users can see the abacus' beads and frame properly?
This is the problem with "let the application developer figure it out". It simply can't work.
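For reference, the "greyscale" check amounts to comparing brightness values. A minimal sketch, assuming the Rec. 601 luma weights (my choice of formula - the posts don't specify one):

```python
def luma(r, g, b):
    """Approximate perceived brightness of an RGB colour
    using the Rec. 601 weights."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# A saturated red and a medium green land on almost the same grey,
# so in greyscale they become hard to tell apart:
red_luma = luma(255, 0, 0)    # ~76.2
green_luma = luma(0, 130, 0)  # ~76.3
```

Note that even if two fixed colours pass such a check against each other, it says nothing about their contrast against a background the developer never sees - which is the point of the scenario above.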
DavidCooper wrote:
I should also add that if a red-green colourblind person wears a cyan filter in front of one eye and a magenta one in front of the other, (s)he can distinguish between all colours, so on the odd occasions where it really matters, that is a practical solution, and it works for the outside world too.

I don't know what the best way to handle that would be (no hue shift, or a special hue shift designed for "red-green colour blind with cyan and magenta filters", or "let's give them anaglyph 3D and maybe they'll forget they're colour blind").
DavidCooper wrote:
On the issue of displaying sound visually, what I'd try is displaying it at the edges of the screen using colours to give a guide to stereo separation (in addition to showing the left channel in the left margin and the right channel in the right margin). High sounds would be displayed higher up those margins and low sounds lower, while the volumes for each pitch component of the sound would be shown using brightness.

Human hearing can tell the difference between front/back (and for home theatre/high-end gaming I'd assume surround sound speakers), but not above/below. For this reason I was thinking more like a circular disk, a bit like this (but transparent), super-imposed over the entire bottom half of the screen:
The sounds in the 3D virtual world would be mapped to 2D coords on the disk (with the centre representing the camera).
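A minimal sketch of that mapping (function name, disk radius and range are my assumptions): translate the sound's horizontal position into camera-relative coordinates, rotate by the camera's yaw, and scale the distance into the disk, clamping far-away sounds to the rim.

```python
import math

def sound_to_disk(sound_x, sound_z, cam_x, cam_z, cam_angle,
                  disk_radius=100.0, max_range=50.0):
    """Map a sound source's horizontal world position (x/z) to 2D
    coordinates on the disk, with the camera at the disk's centre.
    cam_angle is the camera's yaw in radians."""
    # Translate into camera-relative coordinates
    dx = sound_x - cam_x
    dz = sound_z - cam_z
    # Rotate so the camera's facing direction points "up" on the disk
    rx = dx * math.cos(-cam_angle) - dz * math.sin(-cam_angle)
    rz = dx * math.sin(-cam_angle) + dz * math.cos(-cam_angle)
    # Scale distance into the disk; anything beyond max_range sits on the rim
    dist = math.hypot(rx, rz)
    if dist == 0.0:
        return 0.0, 0.0
    scale = disk_radius * min(dist, max_range) / (max_range * dist)
    return rx * scale, rz * scale

# A sound directly in front of an unrotated camera lands on the
# disk's "up" axis, halfway to the rim:
print(sound_to_disk(0.0, 25.0, 0.0, 0.0, 0.0))  # (0.0, 50.0)
```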
I think ianhamilton was right that pitch itself doesn't mean much to deaf users (but wrong in that information which can be determined from the pitch can still mean things to deaf users). For a simple example, something like music could be represented as recognisable notes or musical instruments, but would be confusing if presented as raw pitch.
Of course the details would take some research and trial and error.
DavidCooper wrote:
This would probably fall far short of showing enough detail for people to understand speech from it, but they might well be able to distinguish between a wide range of sounds and be able to tell the difference between different people's voices.

Speech would be converted to readable text.
DavidCooper wrote:
For blind users, I want to see visual displays converted to sound. This is already being done with some software which allows blind people to see some things through sound, but it is far from being done in the best possible way. What you want is something that uses stereo and pitch to indicate screen location, and my idea would be to use long sounds to represent objects covering a wide area and short ones for ones filling only a few pixels. This could indicate where the different paragraphs and tools are located on the screen, and more detail would be provided where the cursor is located. It could also provide more information where the user's eyes are looking, and that's a reason why this is particularly urgent - children who are born blind don't learn to direct their eyes towards things, so they just wander about aimlessly, but if they were brought up using the right software from the start (with eye-tracking capability), they would be able to use their eyes as a powerful input device to computers. The longer we fail to provide this for them, the more generations of blind children will miss out on this and not develop that extremely important tool. I hope Ian Hamilton is informed about this by his friend, because he may be best placed to pass the idea on and make it happen sooner. This is all stuff I plan to do with my own OS, but the ideas are potentially too important not to share given that it's going to take me a long time to get it all done, and particularly as it isn't my top priority.

I'm going to need time to think about this. My first thought was "why would a blind user have a screen anyway?"; but then I realised the blind user could "look" anywhere without restrictions caused by physical monitor dimensions. I'm not sure it'd be fast though - e.g. lots of "trial and error" to initially find the right place/s on a previously unknown surface, where anything that changes the position of anything (scrolling, turning pages, cut & paste, etc) would send you right back to "trial and error".
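The stereo/pitch/duration mapping described in the quote could be sketched roughly like this (all parameter ranges here are my guesses, not anything from the posts):

```python
def region_to_sound(x, y, width, height, screen_w=1920, screen_h=1080):
    """Map a rectangular screen region to sound parameters:
    - horizontal position -> stereo pan (-1.0 left .. +1.0 right)
    - vertical position   -> pitch (higher on screen = higher pitch)
    - region area         -> duration (bigger object = longer sound)"""
    centre_x = x + width / 2
    centre_y = y + height / 2
    pan = 2.0 * centre_x / screen_w - 1.0
    pitch_hz = 200.0 + 1800.0 * (1.0 - centre_y / screen_h)
    area_fraction = (width * height) / (screen_w * screen_h)
    duration_ms = 50.0 + 950.0 * area_fraction
    return pan, pitch_hz, duration_ms

# A full-width paragraph near the top of the screen: centred pan,
# high pitch, relatively long sound
print(region_to_sound(0, 0, 1920, 100))
```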
I tend to think onlyonemac was right about tree structures being the most efficient for navigation. For something like (e.g.) Intel's manual I'd want to be able to skip from one chapter heading to the next, then move down a level and do the same for section headings, then paragraphs, then sentences, then individual words. Programming languages are abstract syntax trees. File systems are trees. Menus are trees. My bookmarks are mostly a jumbled linear list because I'm too lazy to sort them into categories properly, and (as a sighted person) it's a pain in the neck trying to find anything.
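That navigation model can be sketched with a simple node type (the names here are mine): skip between siblings at one level, or descend a level for finer detail.

```python
class Node:
    """A document element: chapter, section, paragraph, sentence, or word."""
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []

def siblings(parent):
    """List the headings at the current level (skip chapter to chapter)."""
    return [child.text for child in parent.children]

def descend(node, *indices):
    """Move down the tree: chapter -> section -> paragraph -> ..."""
    for i in indices:
        node = node.children[i]
    return node

manual = Node("Intel manual", [
    Node("Chapter 1", [Node("Section 1.1"), Node("Section 1.2")]),
    Node("Chapter 2", [Node("Section 2.1")]),
])

print(siblings(manual))            # ['Chapter 1', 'Chapter 2']
print(descend(manual, 0, 1).text)  # Section 1.2
```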
Cheers,
Brendan