Dodgy EDIDs (was: What does your OS look like?)
- Schol-R-LEA
- Member
- Posts: 1925
- Joined: Fri Oct 27, 2006 9:42 am
- Location: Athens, GA, USA
Re: Dodgy EDIDs (was: What does your OS look like?)
The thing is, as Brendan said, ASCII and its derivatives (e.g., UTF-8) are still conventions, no matter how ubiquitous they might be; the reasons why (for example) the glyph 'A' is encoded as a binary signal that happens to match a certain 7-bit encoding for the integer 65 (decimal) are entirely historical, and intimately tied to the specific technology of teletypewriters circa 1960. Why are there 33 control characters, despite the fact that most of them are ignored today? Because there were around 30 commonly supported teleprinter operations in use at the time, and using a 5-bit encoding for most of them (DEL being the odd man out) was convenient. Why does the encoding have exactly those letter-forms commonly used in English, but not those of any other languages? Because the technology was developed in the US, and mostly used in the US and UK for over a decade. Why does the entire list of capitalized letters come before the list of lowercase letters, rather than interleaving with them? Because previous 6-bit teleprinters had no lowercase printing support, and because it was convenient to have the two lists in sequence so that a single bit could act as a shift between them.
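As a quick illustration of just how mechanical the convention is (a trivial C snippet using nothing beyond the numbers already mentioned above):
Code:
#include <stdio.h>

int main(void)
{
    /* 'A' really is just the integer 65 (0x41). */
    printf("'A' = %d (0x%02X)\n", 'A', 'A');

    /* The upper-case and lower-case blocks are 32 entries apart, so a
     * letter's case differs in exactly one bit (0x20); toggling that
     * bit is the cheap "shift" trick referred to above. */
    printf("'A' ^ 0x20 = '%c'\n", 'A' ^ 0x20);   /* prints 'a' */
    printf("'g' & ~0x20 = '%c'\n", 'g' & ~0x20); /* prints 'G' */
    return 0;
}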
The point is, 'plain text' is something of a mirage, one that exists for historical reasons, and one which has caused a lot of trouble over the years. While ASCII has been a good thing for the most part, as it provided a common basis for text communication with which to replace the proprietary encodings (e.g., EBCDIC) that preceded it, its limitations have been obvious for decades, and the fact that in 2016 most voice synthesis software still handles it better than other things is a bad thing, not something to propagate further. IMAO, you should be up in arms over this failure to improve accessibility software, not defending it.
Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF
Ordo OS Project
Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.
-
- Member
- Posts: 1146
- Joined: Sat Mar 01, 2014 2:59 pm
Re: Dodgy EDIDs (was: What does your OS look like?)
I think typing is faster than speaking, as is keyboard navigation vs voice navigation. I can get around my computer faster as a blind person than I ever did as a sighted person, because I hammer on the tab key, the arrow keys, the space bar, and the return key and can hear everything as I navigate through it. As I type, I can hear the letters and words that I am typing and navigate around them with the arrow keys, then press tab to move to another control on the page (if I'm typing on a webpage, for example). Trying to do all of this through voice commands would be insane, even if the speech recognition were perfect.
Brendan wrote:Remember how I said applications would be split into a minimum of 2 or more processes (front-end and back-end), where you can have multiple front-ends and multiple back-ends? The audio interface would just be a different "front-end". You've probably guessed this would also be used by blind people (even when they aren't driving a car).
Using speech output is fine, but restricting blind users to voice commands because your interface isn't keyboard-friendly is very limiting for those users (and for sighted "power users", who often prefer the keyboard because of its efficiency over the mouse, as I did before I went blind). By "non-keyboard-friendly interface", I mean things such as those "intelligent" programming editors with features like auto-completion, code folding, syntax highlighting, and whatever other fancy features you intend to build into your graphical programming environment. Unless you're planning a radical redesign of such concepts and making it all keyboard-friendly (which is more important in some ways than providing textual output, as long as your graphical interface elements can be translated into text for blind users), I expect to find that the pop-up menus (which are now essential, rather than a convenience, due to your "forward-thinking design") won't be read out or navigable with the keyboard, that your code folding controls (or whatever equivalent concept you've got) can't be accessed with the keyboard (or would interfere with either the screenreader or the editing of the code), and that your syntax highlighting (which is again now an essential feature, as a way of cutting down on the number of keywords and punctuation characters required) will either not be translated to text or will be translated to text in a rather long-winded way that's difficult to use.
I'm not having a dig at your design concepts here, as I think many of them (except the lack of manual configuration) are very interesting and could be a great success. Just think carefully about accessibility for blind users from the outset, because while your design has the potential for great non-visual accessibility support, that support has to be in mind from the start, as it is with the other forms of accessibility that you have mentioned; adding accessibility for blind users later on never works well, especially for highly-graphical interfaces, which I gather yours is. And believe me, blind users will be pissed off at being forced to use an inferior and inefficient voice-command-only interface - while that interface may be sufficient for "everyday" tasks (if designed *very* well), it's never going to be sufficient for "advanced" or "technical" tasks, and remember that just because someone's blind doesn't mean they can't be an advanced computer user.
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.
Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
Re: Dodgy EDIDs (was: What does your OS look like?)
Hi,
Yes, and it should also be possible to use the keyboard for input and speech for output.
onlyonemac wrote:I think typing is faster than speaking, as is keyboard navigation vs voice navigation. I can get around my computer faster as a blind person than I ever did as a sighted person, because I hammer on the tab key, the arrow keys, the space bar, and the return key and can hear everything as I navigate through it. As I type, I can hear the letters and words that I am typing and navigate around them with the arrow keys, then press tab to move to another control on the page (if I'm typing on a webpage, for example). Trying to do all of this through voice commands would be insane, even if the speech recognition were perfect.
Brendan wrote:Remember how I said applications would be split into a minimum of 2 or more processes (front-end and back-end), where you can have multiple front-ends and multiple back-ends? The audio interface would just be a different "front-end". You've probably guessed this would also be used by blind people (even when they aren't driving a car).
The problem is that you can't use a keyboard while driving a car, or jogging, or riding a bicycle. It's a problem because programmers don't want to spend a lot of time on something that is only useful to a small minority of users ("good intentions" vs. "cold hard economics"). Sure, if you ask them they'll say they care deeply about accessibility, but they'll secretly be cursing about the hassle and it'll remain on their "TODO" list forever. I have to prevent that by either making it so the typical application programmer doesn't have to do anything at all (e.g. hue shifting for colour blind people), or by making it useful to the majority of users (and not just a minority of users).
And that's the "clever" part. For everything I mentioned, application programmers do nothing for accessibility at all and continue to only care about the majority of users; and people that need accessibility features win because everything provides it.
I'm curious... As a blind person, (for output) are you pissed off that the screen reader you're using has been retro-fitted to applications that were probably completely designed and implemented without a single thought towards non-visual accessibility?
onlyonemac wrote:I'm not having a dig at your design concepts here, as I think many of them (except the lack of manual configuration) are very interesting and could be a great success. Just think carefully about accessibility for blind users from the outset, because while your design has the potential for great non-visual accessibility support, that support has to be in mind from the start, as it is with the other forms of accessibility that you have mentioned; adding accessibility for blind users later on never works well, especially for highly-graphical interfaces, which I gather yours is. And believe me, blind users will be pissed off at being forced to use an inferior and inefficient voice-command-only interface - while that interface may be sufficient for "everyday" tasks (if designed *very* well), it's never going to be sufficient for "advanced" or "technical" tasks, and remember that just because someone's blind doesn't mean they can't be an advanced computer user.
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
-
- Member
- Posts: 1146
- Joined: Sat Mar 01, 2014 2:59 pm
Re: Dodgy EDIDs (was: What does your OS look like?)
Yes, of course that pisses me off, and that's why I'm cautioning you to make sure you think about non-visual accessibility in whatever application development "toolkit" you package with/integrate into your OS, so that applications can be made accessible transparently, by the OS itself.
Brendan wrote:I'm curious... As a blind person, (for output) are you pissed off that the screen reader you're using has been retro-fitted to applications that were probably completely designed and implemented without a single thought towards non-visual accessibility?
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.
Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
Re: Dodgy EDIDs (was: What does your OS look like?)
I don't think anyone's defending ASCII/UTF-8 as a whole, just its advantages - we shouldn't throw out the baby with the bathwater here. The exact origin of a standard is not nearly as important as the fact that we have one that's as widely supported and as high a common denominator as it is. Moving to binary-formats-for-everything without taking those into consideration would cost us, and those costs are worth considering.
Schol-R-LEA wrote:The thing is, as Brendan said, ASCII and its derivatives (e.g., UTF-8) are still conventions, no matter how ubiquitous they might be; the reasons why (for example) the glyph 'A' is encoded as a binary signal that happens to match a certain 7-bit encoding for the integer 65 (decimal) are entirely historical, and intimately tied to the specific technology of teletypewriters circa 1960.
...
While ASCII has been a good thing for the most part, as it provided a common basis for text communication with which to replace the proprietary encodings (e.g., EBCDIC) that preceded it, its limitations have been obvious for decades, and the fact that in 2016 most voice synthesis software still handles it better than other things is a bad thing, not something to propagate further. IMAO, you should be up in arms over this failure to improve accessibility software, not defending it.
I'm interested in what sorts of graphical code editors could work well with screenreaders. For example, I've been thinking about using a table layout, with columns for branches of control flow and rows for stages of dataflow (the advantages being automatic case coverage by the editor inserting and removing columns, less micro-management of if/switch placement, and more straightforward moving and sharing of computations between branches). It sounds like good keyboard control and maybe tweaking what the screen reader says might help? It is a lot less linear than text or even a form.
onlyonemac wrote:Plain-text is still the most accessible for blind users, especially when programming. While I appreciate the value of a graphical programming editor, my experience is that they never work properly with screenreaders and I always have to revert to plain-text. Unless you can develop some radical new way of making a graphical application accessible to a user who can only listen to/read in Braille textual information, then I imagine that your OS would be a PITA to use for someone like me.
-
- Member
- Posts: 1146
- Joined: Sat Mar 01, 2014 2:59 pm
Re: Dodgy EDIDs (was: What does your OS look like?)
Don't even think about tables for accessibility - they suck. If you want to make an accessible graphical code editor, use something with a tree structure. If you implement your tree properly, with the right widgets, it'll actually make life easier for a blind user, not harder. Traditional code folding editors don't work well with screenreaders because there are no keyboard shortcuts to expand/collapse the current code block, and the screenreader doesn't read out anything to tell me if the block is expanded or collapsed; the folding controls are completely separate from the text editing area as far as implementation goes.
Rusky wrote:I'm interested in what sorts of graphical code editors could work well with screenreaders. For example, I've been thinking about using a table layout, with columns for branches of control flow and rows for stages of dataflow (the advantages being automatic case coverage by the editor inserting and removing columns, less micro-management of if/switch placement, and more straightforward moving and sharing of computations between branches). It sounds like good keyboard control and maybe tweaking what the screen reader says might help? It is a lot less linear than text or even a form.
onlyonemac wrote:Plain-text is still the most accessible for blind users, especially when programming. While I appreciate the value of a graphical programming editor, my experience is that they never work properly with screenreaders and I always have to revert to plain-text. Unless you can develop some radical new way of making a graphical application accessible to a user who can only listen to/read in Braille textual information, then I imagine that your OS would be a PITA to use for someone like me.
In short, current graphical code editors offer pretty much no more functionality than plain-text editors for screenreader users, and are often less accessible due to their complex interfaces and custom widgets. Having said that, however, I believe that the concept of a graphical code editor could work for a screenreader user, but it would have to be done very carefully. I fear that Brendan, in the design of the programming environment for his OS, will offer a graphical code editor as the only option, with no plain-text alternative, and that consequently the accessibility will be poor - even if you build accessibility functionality into your application development framework, that usually doesn't carry over well to advanced interfaces such as graphical editors. I'm not saying that plain-text is the best that's possible for screenreader users, but it's the best that there currently is, and I expect it to remain the best there is for a long time because graphical interfaces simply can't be "converted" to an accessible format in a reliable, completely natural, completely user-friendly way; the only real option is to implement two completely separate interfaces - one for sighted users and one for screenreader users. Brendan's hinting that that should be possible with his OS, but how well the "screenreader interface" will actually integrate with the screenreader is another question, if all of his interfaces are constrained to being built around the same (graphical) framework.
Heck, even Microsoft Word's a pain to use and I'd far rather write everything in a markup language (like HTML) and convert it later, except that that's not an option.
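To make the "tree structure with the right widgets" point concrete, here's a rough, purely hypothetical sketch (the names and numbers are invented for illustration) of a code-editor tree node that keeps its folding state with the node and can hand a screenreader a proper announcement, including whether the block is expanded or collapsed:
Code:
#include <stdio.h>

/* Hypothetical node in a tree-structured code editor. */
struct code_node {
    const char *label;   /* e.g. "function parse_edid" or a line of code */
    int collapsed;       /* folding state lives with the node itself     */
    int child_count;     /* how many nested blocks/lines are hidden      */
};

/* Build the text a screenreader should announce when the node gets
 * focus, so the user always hears the folding state and the size of
 * what is hidden - the information that's missing from today's code
 * folding editors. */
static void announce(const struct code_node *n, char *out, size_t len)
{
    snprintf(out, len, "%s, %s, %d children", n->label,
             n->collapsed ? "collapsed" : "expanded", n->child_count);
}

/* A keyboard shortcut handler would simply toggle and re-announce. */
static void toggle_fold(struct code_node *n)
{
    n->collapsed = !n->collapsed;
}

int main(void)
{
    struct code_node n = { "function parse_edid", 1, 14 };
    char msg[128];

    announce(&n, msg, sizeof msg);
    printf("%s\n", msg);   /* "function parse_edid, collapsed, 14 children" */

    toggle_fold(&n);
    announce(&n, msg, sizeof msg);
    printf("%s\n", msg);   /* "function parse_edid, expanded, 14 children" */
    return 0;
}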
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.
Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
Re: Dodgy EDIDs (was: What does your OS look like?)
Hi,
This is why I'd want application "front-ends" that are designed specifically for an audible user interface.
onlyonemac wrote:Don't even think about tables for accessibility - they suck. If you want to make an accessible graphical code editor, use something with a tree structure. If you implement your tree properly, with the right widgets, it'll actually make life easier for a blind user, not harder. Traditional code folding editors don't work well with screenreaders because there are no keyboard shortcuts to expand/collapse the current code block, and the screenreader doesn't read out anything to tell me if the block is expanded or collapsed; the folding controls are completely separate from the text editing area as far as implementation goes.
Rusky wrote:I'm interested in what sorts of graphical code editors could work well with screenreaders. For example, I've been thinking about using a table layout, with columns for branches of control flow and rows for stages of dataflow (the advantages being automatic case coverage by the editor inserting and removing columns, less micro-management of if/switch placement, and more straightforward moving and sharing of computations between branches). It sounds like good keyboard control and maybe tweaking what the screen reader says might help? It is a lot less linear than text or even a form.
In short, current graphical code editors offer pretty much no more functionality than plain-text editors for screenreader users, and are often less accessible due to their complex interfaces and custom widgets. Having said that, however, I believe that the concept of a graphical code editor could work for a screenreader user, but it would have to be done very carefully. I fear that Brendan, in the design of the programming environment for his OS, will offer a graphical code editor as the only option, with no plain-text alternative, and that consequently the accessibility will be poor - even if you build accessibility functionality into your application development framework, that usually doesn't carry over well to advanced interfaces such as graphical editors. I'm not saying that plain-text is the best that's possible for screenreader users, but it's the best that there currently is, and I expect it to remain the best there is for a long time because graphical interfaces simply can't be "converted" to an accessible format in a reliable, completely natural, completely user-friendly way; the only real option is to implement two completely separate interfaces - one for sighted users and one for screenreader users. Brendan's hinting that that should be possible with his OS, but how well the "screenreader interface" will actually integrate with the screenreader is another question, if all of his interfaces are constrained to being built around the same (graphical) framework.
Heck, even Microsoft Word's a pain to use and I'd far rather write everything in a markup language (like HTML) and convert it later, except that that's not an option.
A tree structure would be easy to implement for source code, syntax highlighting can be done by changing the pitch and/or other parameters that affect the speech synthesiser, confirmations (when the IDE finishes saving the project, finishes compiling the project, etc.) can be done with digitised sounds. You could define keys (and their corresponding voice commands) specifically for it, including maybe having user-defined marker points (to allow people to set a marker and "teleport" directly to that marker later). When the user moves up or down within something you could have a quiet "ding" sound where the pitch tells them where they are within the data (high pitch for close to start of the data, low pitch for close to end of the data), where how quickly the pitch changes as they move around gives them some idea of the total size (if they're on the third line of six lines, or the third line of several thousand lines).
Of course these are just a few random (untested) ideas, and I haven't done any research into audible user interfaces yet. The point I'm trying to make here is that there's probably a large number of things that can be done to make it much more usable than screen readers ever will be.
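As a purely illustrative (and untested) sketch of the "pitch tells you where you are" idea - the frequency range is an arbitrary assumption, not a design decision:
Code:
#include <stdio.h>

/* Map a cursor position onto the frequency of a short "ding", so the
 * start of the data sounds high and the end sounds low.  The frequency
 * limits are arbitrary. */
static double position_to_pitch(int line, int total_lines)
{
    const double high_hz = 1200.0;  /* near the start of the data */
    const double low_hz  = 300.0;   /* near the end of the data   */

    if (total_lines <= 1)
        return high_hz;
    return high_hz - ((double)line / (total_lines - 1)) * (high_hz - low_hz);
}

int main(void)
{
    /* Third line of six vs. third line of several thousand (0-based):
     * the pitch barely moves in the second case, which is the hint
     * about the total size. */
    printf("line 3 of 6:    %.1f Hz\n", position_to_pitch(2, 6));
    printf("line 4 of 6:    %.1f Hz\n", position_to_pitch(3, 6));
    printf("line 3 of 5000: %.1f Hz\n", position_to_pitch(2, 5000));
    printf("line 4 of 5000: %.1f Hz\n", position_to_pitch(3, 5000));
    return 0;
}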
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Dodgy EDIDs (was: What does your OS look like?)
removed
Last edited by removed on Mon Feb 15, 2016 5:29 pm, edited 1 time in total.
Re: Dodgy EDIDs (was: What does your OS look like?)
Hi,
The idea for this is based on research done by others and the relevant requirements/guidelines established for television broadcasts, movies, gaming, etc (and the implementation would similarly rely on the existing research). While I'm fine with people saying multiple researchers and multiple industries are all wrong, at a minimum I'd want some reason why everyone is wrong and preferably some description of how it can be done better. Otherwise the only option I have is to dismiss the claim as nonsense.
ianhamilton wrote:Flash limiting doesn't save developers from having to think about epilepsy, there are other triggers. But it can be useful. It does need to be an option that users can turn off if they need to though.
As far as developers doing anything, that's mostly impossible. For an example, a user is browsing the Internet and sees a picture of a white picket fence with a little puppy in front of it. They shift the GUI's camera up close to the browser's window to get a good look at the little puppy. Then the user rotates the camera to the left. White picket, gap between pickets, white picket, ...., seizure. Who do you blame? It's not the web page or browser's fault - it's a static image with no flashing at all. It's not the GUI's fault - it's just slowly rotating the camera.
The option would be part of the user's profile. One user logs in and it gets enabled. A different user logs in and it's not enabled. However, for office scenarios (where people can see each other's screens) I'm thinking more like "if any user is sensitive, enable it for all users".
I doubt you've thought about this enough. It shouldn't be hard to analyse a sound and determine various characteristics ("pitch, attack, decay, sustain, release"), then use those characteristics to find a suitable icon representing the cause. It won't exactly identify the cause, but that's not the point. It only needs to alert the user to the location of the sound and provide some indication of the sound's general characteristics.
ianhamilton wrote:For deafness, what's important isn't so much knowing what the pitch of the sound is, what's important is knowing what the sound was supposed to be communicating, which obviously can't be determined in an automated way.
Pitch is obviously a meaningless concept for people who are born deaf too.
Note that pitch indicates an object's resonant frequency which tells you something about the object's size and material (drop a metal spoon on hard tiles then drop a large metal pot and I guarantee you'll hear a difference in pitch). I'm fairly sure deaf people know the difference between small things and large things.
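As a rough (untested) sketch of the kind of analysis I mean - peak level, attack time and a crude pitch estimate from zero crossings; the mono float buffer format and the thresholds are assumptions for illustration only:
Code:
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Crude characteristics of a mono sound buffer: peak amplitude, attack
 * time (time taken to reach 90% of the peak) and an approximate pitch
 * from the zero-crossing rate.  Real analysis would be smarter; this
 * only shows that "some indication of the sound's character" is cheap
 * to compute. */
struct sound_traits {
    double peak;        /* 0.0 .. 1.0                    */
    double attack_ms;   /* time to reach 90% of the peak */
    double pitch_hz;    /* rough fundamental estimate    */
};

static struct sound_traits analyse(const float *s, int n, int rate)
{
    struct sound_traits t = { 0.0, 0.0, 0.0 };
    int i, crossings = 0, attack_index = 0;

    for (i = 0; i < n; i++)
        if (fabsf(s[i]) > t.peak)
            t.peak = fabsf(s[i]);
    for (i = 0; i < n; i++)
        if (fabsf(s[i]) >= 0.9 * t.peak) { attack_index = i; break; }
    for (i = 1; i < n; i++)
        if ((s[i - 1] < 0.0f) != (s[i] < 0.0f))
            crossings++;

    t.attack_ms = 1000.0 * attack_index / rate;
    t.pitch_hz = (crossings / 2.0) * rate / n;   /* two crossings per cycle */
    return t;
}

int main(void)
{
    /* Synthesise a 440 Hz test tone so the example is self-contained. */
    const int rate = 8000, n = 8000;
    float *buf = malloc(n * sizeof *buf);
    if (!buf)
        return 1;
    for (int i = 0; i < n; i++)
        buf[i] = 0.8f * sinf(2.0f * 3.14159265f * 440.0f * i / rate);

    struct sound_traits t = analyse(buf, n, rate);
    printf("peak %.2f, attack %.1f ms, pitch ~%.0f Hz\n",
           t.peak, t.attack_ms, t.pitch_hz);
    free(buf);
    return 0;
}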
Um, what?
ianhamilton wrote:Hue shifting however, don't go near this with a barge pole. It not only doesn't work, it actually makes things worse. It creates new issues in things that were perfectly accessible with it turned off, and makes devs think they no longer have to consider it when they 100% do.
Ultimately, if you can only see in X number of colours and some software uses X+5 colours, it doesn't matter which direction you shift in, you're still going to be short. And on top of that, if you for example shift green towards blue instead, you've successfully made it easier for proto/deuteranopes to tell green and red apart, but you've at the same time made it harder to tell blue and green apart. And of course there's achromatopsia to consider too.
If a user's profile says that specific user can't tell green and red apart, then that specific user would get hue shifting designed for people that can't tell red and green apart. If a user's profile says that specific user can't tell blue and green apart, then that specific user would get hue shifting designed for people that can't tell blue and green apart.
I'm not sure how anyone could have assumed that a user with one type of colour blindness would be given hue shifting intended for a different kind of colour blindness.
I think you mean "in addition" and not "instead" here - I'd want to show developers what it would look like to a colour blind person after daltonizing (and not show them what it would look like without daltonizing).
ianhamilton wrote:Instead what could be helpful might be a colour blindness simulator, to use as a development tool.
ianhamilton wrote:I may have got the wrong end of the stick with the mention of different front ends, which sounds like something that developers would need to implement themselves. That's unlikely to happen. What you really want is for as much as possible of it to just work out of the box, in which case dev overhead is cut way back to predominantly tweaking and labeling.
There are only 3 options:
- Developers are expected to do nothing, and the OS provides a workaround that sucks badly (screen reader). This is the "solution" that existing OSes provide. However, in my opinion it's a solution to the "need to comply with disability laws with the least effort" problem and has nothing at all to do with actually providing a usable user interface.
- Developers are expected to do something specifically for blind people alone; and because it's a lot of work for a minority of users they never bother, or don't try to do anything that does it well. This is worse than the "solution" existing OSes use.
- Developers are expected to do something for the majority of users (people driving cars, jogging, riding bicycles, whatever), and it just happens to be vastly superior for blind users by "cunning accident". This is the solution I'm going for.
There's 2 different things that are easy to confuse because they're both "3D". The first is 3D applications (where the camera is static), where the 3D is used for things like lighting/shadows. This isn't very different to "2D windows with lame baked-on 3D look" that most OSes have been using for about 20 years (other than the lighting/shadows being dynamic and working properly when one window casts a shadow over other windows, and for people using stereoscopic/3D display technology where things like raised buttons actually will look raised). This doesn't have anything to do with simulation sickness.
ianhamilton wrote:Something else worth keeping a keen eye on is simulation sickness. Again I may have got the wrong end of the stick, but if you're talking about 3D in the way I think you are, you need to start looking at things like having an appropriate FOV angle for the screen size and viewing distance and allowing it to be customised, and options to reduce/turn off movement, in the way that iOS had to.
The other thing is having windows/applications floating in a virtual space combined with the ability to move the camera (like you would in a first person 3D game). This can cause simulation sickness. However, it's not like I'll be forcing the user to move the camera around, and not like they won't be able to do the old fashioned "alt+tab between windows" thing to make the camera jump directly to a specific application.
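For reference, the "appropriate FOV for the screen size and viewing distance" point in the quote is just trigonometry; a small sketch with made-up example numbers:
Code:
#include <math.h>
#include <stdio.h>

/* The field of view at which the virtual camera matches what the
 * physical screen actually covers of the user's vision:
 *   fov = 2 * atan((screen_width / 2) / viewing_distance)
 * Rendering with a much wider FOV than this is one classic cause of
 * simulation sickness. */
static double natural_fov_degrees(double screen_width, double viewing_distance)
{
    const double pi = 3.14159265358979;
    return 2.0 * atan((screen_width / 2.0) / viewing_distance) * 180.0 / pi;
}

int main(void)
{
    /* A 60 cm wide monitor viewed from 70 cm, vs. a 160 cm TV from 3 m. */
    printf("desktop monitor: %.1f degrees\n", natural_fov_degrees(0.60, 0.70));
    printf("living-room TV:  %.1f degrees\n", natural_fov_degrees(1.60, 3.00));
    return 0;
}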
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
-
- Member
- Posts: 1146
- Joined: Sat Mar 01, 2014 2:59 pm
Re: Dodgy EDIDs (was: What does your OS look like?)
Those ideas sound good, but how many developers are going to implement a separate audio-based front-end for their application? Even today, developers never bother to accessibility-test their software, and a lot never do their bit to make software accessible: they leave images unlabelled (especially images on buttons, which is such a problem in Android that the Android screenreader allows the user to label the buttons themselves, if they can figure out what the buttons are for), they don't provide keyboard shortcuts for enough features, or they don't make the shortcuts configurable, so the shortcuts conflict with the screenreader's own keyboard controls. Thus you need to have a "fallback" screenreader system like existing OSes do that will "convert" the graphical interface into a text/audio-based interface for blind users (and don't claim that having such a "fallback" will make application developers lazy - they'll be lazy anyway).
Brendan wrote:This is why I'd want application "front-ends" that are designed specifically for an audible user interface.
A tree structure would be easy to implement for source code, syntax highlighting can be done by changing the pitch and/or other parameters that affect the speech synthesiser, confirmations (when the IDE finishes saving the project, finishes compiling the project, etc.) can be done with digitised sounds. You could define keys (and their corresponding voice commands) specifically for it, including maybe having user-defined marker points (to allow people to set a marker and "teleport" directly to that marker later). When the user moves up or down within something you could have a quiet "ding" sound where the pitch tells them where they are within the data (high pitch for close to start of the data, low pitch for close to end of the data), where how quickly the pitch changes as they move around gives them some idea of the total size (if they're on the third line of six lines, or the third line of several thousand lines).
Of course these are just a few random (untested) ideas, and I haven't done any research into audible user interfaces yet. The point I'm trying to make here is that there's probably a large number of things that can be done to make it much more usable than screen readers ever will be.
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.
Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
Re: Dodgy EDIDs (was: What does your OS look like?)
Hi,
How many developers are going to spend their time writing video-based front-ends for existing back-ends that other developers wrote? How many developers are going to spend their time writing audio-based front-ends for existing back-ends? How many developers are going to spend their time writing back-ends for existing front-ends? I can't really answer these questions.
onlyonemac wrote:Those ideas sound good, but how many developers are going to implement a separate audio-based front-end for their application?
How many developers are going to spend their time writing a whole application (a "front-end plus back-end pair")? This is a question I can answer: none.
Mostly, everything is split into 4 groups of developers, where each group has completely different skills and completely different responsibilities. One group that is responsible for designing and maintaining the relevant standards for messaging protocols and file formats (with good skills in research, design and technical writing) who don't actually implement anything. One group that's responsible for back-ends (with good skills in dealing with concurrency/synchronisation, fault tolerance, standards compliance, etc.). One group that's responsible for video-based front-ends (with good skills in 3D graphics, user experience design and artwork). One group that's responsible for audio-based front-ends (with good skills in 3D sound protocols and a completely different type of user experience design). Each of these groups doesn't even need to know that the other groups exist.
Note that this is similar to the way the Internet works - one group creating standards for networking protocols and file formats (like NTP, FTP, HTTP and HTML). Multiple groups writing servers that comply with those standards. Multiple groups writing clients that comply with those standards. Any client ("front-end") can connect to any server ("back-end"). Nobody does an entire "standards plus server plus client" set.
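As a purely hypothetical illustration of that split (this isn't from any actual standard of mine), the first group might publish message definitions like the following, which a video front-end would draw and an audio front-end would speak, with neither knowing the other exists:
Code:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical fragment of a "word processor back-end" messaging
 * standard.  The back-end only emits events like these; a video
 * front-end draws them, an audio front-end speaks them, and neither
 * needs to know the other exists. */
enum doc_event_type {
    DOC_PARAGRAPH_CHANGED = 1,   /* text of one paragraph was replaced */
    DOC_CURSOR_MOVED      = 2,   /* another front-end moved its caret  */
    DOC_SAVE_COMPLETED    = 3,   /* confirmation, no payload           */
};

struct doc_event {
    uint32_t type;          /* one of enum doc_event_type         */
    uint32_t paragraph;     /* which paragraph the event concerns */
    uint32_t offset;        /* character offset within it         */
    uint32_t text_length;   /* bytes of UTF-8 payload that follow */
};

int main(void)
{
    /* A back-end announcing that it finished saving; each front-end
     * decides whether that becomes a status-bar flash or a digitised
     * sound. */
    struct doc_event ev = { DOC_SAVE_COMPLETED, 0, 0, 0 };
    printf("event type %u\n", (unsigned)ev.type);
    return 0;
}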
No. If I provided a "fallback" screen reader system then it'd be much, much easier for developers to decide they couldn't be bothered, and people that want the audio-based front-ends would end up screwed. While I'm happy when people end up with barely usable puke on existing operating systems like Windows, it's not even close to acceptable for my OS.
onlyonemac wrote:Even today, developers never bother to accessibility-test their software, and a lot never do their bit to make software accessible: they leave images unlabelled (especially images on buttons, which is such a problem in Android that the Android screenreader allows the user to label the buttons themselves, if they can figure out what the buttons are for), they don't provide keyboard shortcuts for enough features, or they don't make the shortcuts configurable, so the shortcuts conflict with the screenreader's own keyboard controls. Thus you need to have a "fallback" screenreader system like existing OSes do that will "convert" the graphical interface into a text/audio-based interface for blind users (and don't claim that having such a "fallback" will make application developers lazy - they'll be lazy anyway).
I need to make sure audio-based front-ends aren't just something that's only used by a minority; so that they do exist for everyone (and not just blind people).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Dodgy EDIDs (was: What does your OS look like?)
removed
Last edited by removed on Mon Feb 15, 2016 5:29 pm, edited 2 times in total.
-
- Member
- Posts: 1146
- Joined: Sat Mar 01, 2014 2:59 pm
Re: Dodgy EDIDs (was: What does your OS look like?)
Believe me, developers aren't going to write audio front-ends when they can't even be bothered to accessibility-test their software. They'll write a visual front-end + back-end pair as an "application" and distribute it like that, and nobody's going to write an audio front-end for it separately, and blind people will be left without access to those applications because you didn't bother to put in some provision for screenreading functionality. Developers will only write audio front-ends when they can see value in it for the majority of users (such as a web search app like Google today) and furthermore those audio front-ends will probably accept voice input only, no keyboard control; they're not going to write an "accessible" (as in, with proper keyboard control support) audio front-end for a programming IDE or even a word processor because the majority of users aren't going to use it (or again, if they do, it will be voice-based interaction for simple tasks e.g. "read the second paragraph of the document to me" rather than a proper keyboard-and-audio-based way of interacting with all of the back-end's features). In short, what works well as a "convenient" audio interface for sighted users doesn't work well as a "fully-featured and powerful" interface for blind users like a graphical interface does for sighted users.
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.
Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
-
- Member
- Posts: 1146
- Joined: Sat Mar 01, 2014 2:59 pm
Re: Dodgy EDIDs (was: What does your OS look like?)
You're right, screenreaders don't suck *in and of themselves*. What does suck is trying to use a lot of applications with screenreaders, because the application developers didn't make their application work properly with screenreaders (e.g. they implemented a custom widget and didn't add accessibility hooks, or they created a toolbar of buttons with icons on them but didn't add labels, or there's a focus-grabbing popup menu (or sometimes a popup menu with no way to move the keyboard focus into it), and other things like that). What I've realised is that it's not possible to have application developers write an application while completely ignoring blind users, and then write a screenreader (or an accessibility API in the operating system) that can make every application accessible; it requires some participation on the part of the application developers.
ianhamilton wrote:Screen readers do not 'suck badly'. Have a chat with some blind users about their experiences with them. And your opinion isn't quite right. They were developed specifically to provide a usable user interface, years before any relevant accessibility laws came about (1986).
I don't know what your reasons are for thinking that they suck, but there are common misconceptions amongst non-users, for example thinking that basic robotic-sounding synthesised voices are low quality. This is an intentional usability feature specific to blind users, as the voices can be sped up to an extremely high rate without losing the ability to discern syllables. So if you're someone who uses a screen reader day in, day out (i.e. you're blind) you gradually ramp up the speed and end up with a drastically faster experience, and therefore a drastically less frustrating experience.
When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.
Syntax checkup:
Wrong: OS's, IRQ's, zero'ing
Right: OSes, IRQs, zeroing
Re: Dodgy EDIDs (was: What does your OS look like?)
Hi,
I simply can't understand how anyone can think a screen reader can ever be ideal. It's not ideal, it's a compromise. A user interface designed specifically for audio can get much much closer to ideal (and if for some extremely unlikely cases it can't be better than a screen reader, then it can still match the experience of a screen reader and won't be worse).
Note that this isn't limited to "audio based user interface" and "video based user interface". Maybe there's a word-processor and someone writes a front end that's better for "small screen, big thumbs" (smartphones) and someone else writes a front-end that's better for "large screen, precise mouse" (desktop/laptop). Maybe there's 10 different front-ends with different features and different use cases, and they all use the same back-end. Maybe there's 10 users, all editing the same document at the same time, and all using different front-ends.
Note that it won't be a case of "the GUI" (like Windows where it's built in); but it will be more like Linux where the GUI is just another set of cooperating processes. When a user logs in the OS determines which GUI to start from the user's profile (although not specifically a GUI either - it may be an application instead). I'll provide one GUI that works how I want it to work; but I'll have no real control over the design of third party GUIs.
For other reasons (mostly networking lag in distributed systems); I'm also insisting on "prediction to hide latencies" for everything between user input devices to display devices.
"Sum of absolute differences" covers the first 3 cases (although if I remember right it's meant to be weighted slightly so red matters more). There's nothing I can do about the last case.ianhamilton wrote:Here's a list of common triggers. These are the same triggers that the Harding test uses, which is the industry standard tool used by the BBC, Ubisoft etc:
Any sequence of flashing* images that lasts for more than 5 seconds
More than three flashes* in a single second, covering 25%+ of the screen
Moving repeated patterns** or uniform text***, covering 25%+ of the screen
Static repeated patterns** or uniform text***, covering 40%+ of the screen
-------------
* an instantaneous high change in brightness/contrast (including fast cuts), or to/from the colour red
** more than five evenly spaced high contrast repeated stripes – rows or columns such as grids and checkerboards, that may be composed of smaller regular elements such as polkadots
*** more than five lines of text formatted as capital letters only, with not much spacing between letters, and line spacing the same height as the lines themselves, effectively turning it into high contrast evenly alternating rows
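As a minimal (untested) sketch of that "sum of absolute differences" screening, assuming 8-bit RGB frames - the per-pixel threshold and the exact red weighting are invented for illustration:
Code:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Fraction of pixels whose weighted difference between two consecutive
 * RGB frames exceeds a threshold.  If that fraction stays above 0.25
 * at more than roughly 3 Hz, the compositor would start limiting the
 * output.  Red is weighted more heavily, per the note above. */
static double changed_fraction(const uint8_t *prev, const uint8_t *cur,
                               int width, int height)
{
    const int per_pixel_threshold = 96;   /* arbitrary illustrative value */
    long flagged = 0;
    long pixels = (long)width * height;

    for (long i = 0; i < pixels; i++) {
        int dr = abs(cur[i * 3 + 0] - prev[i * 3 + 0]);
        int dg = abs(cur[i * 3 + 1] - prev[i * 3 + 1]);
        int db = abs(cur[i * 3 + 2] - prev[i * 3 + 2]);
        if (2 * dr + dg + db > per_pixel_threshold)   /* red weighted x2 */
            flagged++;
    }
    return (double)flagged / (double)pixels;
}

int main(void)
{
    /* Two tiny 2x2 test frames: one pixel flips from black to red. */
    uint8_t a[12] = {0};
    uint8_t b[12] = {0};
    b[0] = 255;   /* red channel of pixel 0 */

    printf("%.2f of the screen changed\n", changed_fraction(a, b, 2, 2));
    return 0;
}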
Yes; and I never said it'd be "100% epilepsy safe". It's about risk minimisation, not guarantees.
ianhamilton wrote:You can't ever make anything 'epilepsy safe', but by concentrating on flashing alone you're leaving out some important common triggers.
Ultimately (for this specific part of the conversation) it's about expecting developers to prevent unforeseeable situations caused by the "unfortunate interaction" of 2 or more completely separate pieces of software that are both perfectly fine on their own.
ianhamilton wrote:That's an extreme and unlikely example, and also one that is unlikely to result in seizure.
Brendan wrote:As far as developers doing anything, that's mostly impossible. For an example, a user is browsing the Internet and sees a picture of a white picket fence with a little puppy in front of it. They shift the GUI's camera up close to the browser's window to get a good look at the little puppy. Then the user rotates the camera to the left. White picket, gap between pickets, white picket, ...., seizure. Who do you blame? It's not the web page or browser's fault - it's a static image with no flashing at all. It's not the GUI's fault - it's just slowly rotating the camera.
More likely examples are a full screen flashing advert - that was caused by developer of the ad, not the developer of the browser. Or a tiled scrolling background in a piece of software - that was caused by the developer of the software.
Ultimately though it is not about blame, or about eliminating seizures. It is about doing as much as is reasonably possible to avoid injuring people. Your flash reduction will help, but it is not the whole story, developers still need to comply with accessibility guidelines.
The number of developers that have read any accessibility guideline is probably less than 20%. The number that actually follow these guidelines is probably less than 2%. They are ineffective.
Yes, they're completely different, and people that can hear learn to recognise the different sounds from different applications, and people that can't hear will learn to recognise the different icons caused by different applications.
ianhamilton wrote:Strong language! I've thought about it plenty enough.
Brendan wrote:I doubt you've thought about this enough. It shouldn't be hard to analyse a sound and determine various characteristics ("pitch, attack, decay, sustain, release"), then use those characteristics to find a suitable icon representing the cause. It won't exactly identify the cause, but that's not the point. It only needs to alert the user to the location of the sound and provide some indication of the sound's general characteristics.
Note that pitch indicates an object's resonant frequency which tells you something about the object's size and material (drop a metal spoon on hard tiles then drop a large metal pot and I guarantee you'll hear a difference in pitch). I'm fairly sure deaf people know the difference between small things and large things.
You cannot judge what the purpose of a sound is from its characteristics. Firstly, they bear no relation to size and material, as they are digital. Secondly, the sounds are completely different from app to app. Have a listen to the sounds in Facebook, Skype and What's App - completely different.
Note that this was primarily intended for 3D games (where deaf people have an unfair disadvantage); where there is a relationship between the sounds and the size and material of (virtual) objects that caused them. This relationship works both ways - e.g. it wouldn't be that hard to auto-generate realistic digitised sound from collisions/impact, and the size, material and force involved.
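As a toy sketch of the "generate the sound from size, material and force" direction (the constants are invented, and a damped sine wave is only a very rough approximation of a real impact):
Code:
#include <math.h>
#include <stdio.h>

/* Toy impact sound: a damped sinusoid.  Larger objects get a lower
 * resonant frequency (the spoon vs. pot example), harder materials
 * ring for longer, and more force means a louder hit.  The constants
 * are invented for illustration only. */
static void synthesise_impact(float *out, int samples, int rate,
                              double size_m, double hardness, double force)
{
    const double pi = 3.14159265358979;
    double freq  = 400.0 / size_m;          /* bigger object -> lower pitch   */
    double decay = 0.05 + 0.3 * hardness;   /* harder material -> longer ring */
    double amp   = force > 1.0 ? 1.0 : force;

    for (int i = 0; i < samples; i++) {
        double t = (double)i / rate;
        out[i] = (float)(amp * exp(-t / decay) * sin(2.0 * pi * freq * t));
    }
}

int main(void)
{
    float spoon[4000], pot[4000];

    synthesise_impact(spoon, 4000, 8000, 0.20, 0.9, 0.3);  /* small, hard, light tap */
    synthesise_impact(pot,   4000, 8000, 0.80, 0.9, 0.8);  /* large, hard, heavier   */

    printf("spoon rings at ~%.0f Hz, pot at ~%.0f Hz\n", 400.0 / 0.20, 400.0 / 0.80);
    return 0;
}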
Forget about the icons and assume it's just a meaningless red dot. That alone is enough to warn a deaf user (playing a 3D game) that something is approaching quickly from behind. The icons just give them some sort of hint about whether it's a huge grisly monster stomping and snorting and banging into trash cans, or if it's just a harmless fly. I don't care what the cause is because I can figure out enough for "some sort of hint" from the sound alone.
ianhamilton wrote:You can't provide an icon indicating the cause unless you precisely know the cause. Providing an icon that's incorrect is worse than not providing an icon, telling someone something is happening when in fact something completely different is happening is extremely unhelpful.
The user sees an icon towards the bottom left of the screen, realises that's where their messaging app is and figures out the sound must've come from the messaging app. The user sees the exact same icon to the right of the screen, realises that's where the telephony app is, and figures out the sound must've come from the telephony app.
ianhamilton wrote:For example indicating that there has been an incoming message. The user sees that icon, and decides to reply to it later as they're busy writing an email. They finish the mail, go to check it later and find out that it wasn't a message at all, it was an incoming call, and they've missed it. Cue rage, directed at you.
You have a poor understanding of hue shifting. If the user can't see red it gets shifted to orange, but orange (that they could've seen) *also* gets shifted towards yellow, and yellow gets shifted a little towards green. Essentially the colours they can see get "compressed" to make room for the colours they couldn't see. It doesn't cause colours to match when they otherwise wouldn't (unless 2 colours are so close together that it's bad for everyone anyway). What it does mean is that (especially if it's excessive) things on the screen end up using colours that don't match reality - a banana looks slightly green, an orange looks a bit yellow.
ianhamilton wrote:Unfortunately colourblindness isn't that simple, if it was it would have ceased to be an issue a long time ago.
Brendan wrote:If a user's profile says that specific user can't tell green and red apart, then that specific user would get hue shifting designed for people that can't tell red and green apart. If a user's profile says that specific user can't tell blue and green apart, then that specific user would get hue shifting designed for people that can't tell blue and green apart.
I'm not sure how anyone could have assumed that a user with one type of colour blindness would be given hue shifting intended for a different kind of colour blindness.
I think you might have misunderstood what I was saying: Hue shifting involves shifting hues. E.g. if you can't tell red and green apart, you shift those colours into a different hues. For example you turn unsaturated red into highly saturated orangey red.
However, if your software already used an orangey red, you then end up with a problem. The resulting inability to distinguish red and bright orange isn't because the user started out with some difficulty between those colours, it's because the shift has made them closer. And if your software didn't use green to start with, all of that is being caused for no good reason at all, inflicting colour perception issues in an attempt to fix colour perception issues that did not even exist.
The laws of physics don't apply to computer programmers.
ianhamilton wrote:You can't just compress a wide range of colours into a narrow range of colours. It isn't physically possible.
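To show what "compressing the colours they can see to make room for the colours they couldn't" might mean in code, here's a crude sketch that remaps hue angles; the ranges are invented and this is not a real daltonisation algorithm:
Code:
#include <stdio.h>

/* Crude "hue compression": hues from 0 to 120 degrees (red..green) are
 * squeezed into 30..120 so that pure red ends up orange, orange moves
 * towards yellow, yellow moves a little towards green, and everything
 * from green onwards is left alone.  A real daltonisation algorithm
 * works per deficiency in a perceptual colour space; this is only an
 * illustration of the idea. */
static double compress_hue(double hue_degrees)
{
    if (hue_degrees < 0.0)
        hue_degrees = 0.0;
    if (hue_degrees < 120.0)
        return 30.0 + hue_degrees * (90.0 / 120.0);   /* 0..120 -> 30..120 */
    return hue_degrees;
}

int main(void)
{
    printf("red    (0)   -> %.1f (orange-ish)\n", compress_hue(0.0));
    printf("orange (30)  -> %.1f (towards yellow)\n", compress_hue(30.0));
    printf("yellow (60)  -> %.1f (towards green)\n", compress_hue(60.0));
    printf("green  (120) -> %.1f (unchanged)\n", compress_hue(120.0));
    return 0;
}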
While differences in shape (a tick or cross) would definitely help in some situations, it's not possible in others and doesn't negate the benefits of daltonisation for the majority of colour blind users.
ianhamilton wrote:Telling developers that they don't need to worry about colourblindness on the basis of daltonising also had a devastating effect on people with achromatopsia. That's when you see no colour at all. It is rare, but I've still managed to meet three people who have it.
So, let's take the example of a form, with valid fields marked green, and invalid marked red. Some daltonisation and you can make them more distinct, but where does that leave people who are achromatic? Even if you changed the brightness, what cultural meaning do different shades of grey hold?
Instead, the obvious solution is for the developer to use a tick and a cross as well as the colour. The extra reinforcement means it is more usable for all users, particularly so for people who are colourblind, and enormously so for people with achromatopsia and other conditions that affect colour perception, such as cataracts or sickle cell.
So, it lies with developers. Developers primarily need to design without using colour alone, then check using a simulator to identify any contrast issues (e.g. red text on a brown background).
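That contrast check is easy to automate, too. As a rough illustration, here is the standard WCAG 2.x contrast-ratio calculation in C (the formula is WCAG's own; the colours in the final comment are just example values), which would flag combinations like red text on a brown background.

Code: Select all
#include <math.h>

/* WCAG 2.x relative luminance of an sRGB colour (components 0..255). */
static double srgb_channel(double c)
{
    c /= 255.0;
    return (c <= 0.03928) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

static double relative_luminance(int r, int g, int b)
{
    return 0.2126 * srgb_channel(r)
         + 0.7152 * srgb_channel(g)
         + 0.0722 * srgb_channel(b);
}

/* Contrast ratio between two colours, as defined by WCAG: (L1+0.05)/(L2+0.05)
   with L1 the lighter of the two. WCAG asks for at least 4.5:1 for normal
   body text. */
static double contrast_ratio(int r1, int g1, int b1, int r2, int g2, int b2)
{
    double la = relative_luminance(r1, g1, b1);
    double lb = relative_luminance(r2, g2, b2);
    double hi = (la > lb) ? la : lb;
    double lo = (la > lb) ? lb : la;
    return (hi + 0.05) / (lo + 0.05);
}

/* Example: pure red text (255,0,0) on a brown background (139,69,19) comes
   out at roughly 1.8:1 - well under 4.5:1 - so a checker would flag it. */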
We live in completely different worlds. I live in an imperfect world where developers rarely do what they should. You live in a perfect world where all developers follow accessibility guidelines rigorously and do use colourblind simulators, and graphics from multiple independent sources are never combined. While "do effectively nothing and then blame application developers" sounds much easier and might work fine for your perfect world, it's completely useless for my imperfect world.

ianhamilton wrote: Which leads on to..
No, I most definitely did not mean that!

Brendan wrote: I think you mean "in addition" and not "instead" here - I'd want to show developers what it would look like to a colour blind person after daltonizing (and not show them what it would look like without daltonizing).
Colourblind simulators are a common design tool. You simulate what it looks like to have the various common types at full strength, and that allows you to identify any areas in your software that will be problematic, correct them, and validate the changes. Many designers and developers already do this.
However, colourblind simulators are external pieces of software, downloaded by people who already know about them. Aside from a few other bits of software (like Photoshop), there is only one development environment I'm aware of in any industry that includes its own built-in simulator - Unreal Engine. And in there, it is hidden several layers deep in menus.
Yes, I can (and probably will) do that; and it will make a small amount of difference, but not enough.

ianhamilton wrote: So that's a real opportunity for you. If you can implement a simulator (quick easy job, using Color Oracle's excellent open source algorithms) and put it up front and centre in your development environment, then it is no longer just a useful tool, it is a powerful awareness raising device.
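For the record, the per-pixel part of such a simulator really is a small job. Here's a sketch in C; the protanopia matrix below is one commonly circulated approximation (it is not Color Oracle's published algorithm), so treat the exact coefficients as illustrative only.

Code: Select all
/* Every pixel is multiplied by a 3x3 matrix that collapses the colour axis
   the simulated user cannot see. Coefficients here are a widely circulated
   protanopia approximation - check them against a proper source. */
struct rgb { float r, g, b; };           /* components in 0.0 .. 1.0 */

static const float protanopia[3][3] = {
    { 0.567f, 0.433f, 0.000f },
    { 0.558f, 0.442f, 0.000f },
    { 0.000f, 0.242f, 0.758f },
};

static struct rgb simulate(struct rgb in, const float m[3][3])
{
    struct rgb out;
    out.r = m[0][0] * in.r + m[0][1] * in.g + m[0][2] * in.b;
    out.g = m[1][0] * in.r + m[1][1] * in.g + m[1][2] * in.b;
    out.b = m[2][0] * in.r + m[2][1] * in.g + m[2][2] * in.b;
    return out;
}

/* In a compositor this would run as a full-screen post-processing step
   (ideally on the GPU), toggled from the development environment so it is
   "front and centre" rather than buried several layers deep in menus. */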
You mean, a chat like this:

ianhamilton wrote: Screen readers do not 'suck badly'. Have a chat with some blind users about their experiences with them.

Brendan wrote: There are only 3 options:

Note that I (as a sighted person) want this for me. If I could be writing this reply and/or browsing the web using an audio interface, and at the same time also be playing a full screen 3D game using a video interface, then I'd get a whole lot more done.
- Developers are expected to do nothing, and the OS provides a workaround that sucks badly (a screen reader). This is the "solution" that existing OSs provide. However, in my opinion it's a solution to the "need to comply with disability laws with the least effort" problem and has nothing at all to do with actually providing a usable user interface.
- Developers are expected to do something specifically for blind people alone; and because it's a lot of work for a minority of users they never bother, or don't try to do anything that does it well. This is worse than the "solution" existing OSs use.
- Developers are expected to do something for the majority of users (people driving cars, jogging, riding bicycles, whatever), and it just happens to be vastly superior for blind users by "cunning accident". This is the solution I'm going for.
Note that good and bad are relative. Someone who hasn't had any other option has nothing to compare a screen reader to.

onlyonemac wrote: Yes, of course that pisses me off, and that's why I'm cautioning you to make sure you think about non-visual accessibility in whatever application development "toolkit" you package with/integrate into your OS so that applications can be made accessible transparently, by the OS itself.

Brendan wrote: I'm curious... As a blind person, (for output) are you pissed off that the screen reader you're using has been retro-fitted to applications that were probably completely designed and implemented without a single thought towards non-visual accessibility?
So you're saying they were developed specifically to avoid the hassle of bothering with anything better, years before they became a convenient way to comply with relevant accessibility laws without bothering to do anything better?

ianhamilton wrote: And your opinion isn't quite right. They were developed specifically to provide a usable user interface, years before any relevant accessibility laws came about (1986).
About a week ago my father was complaining about a bank's web site, saying how it's a pain in the neck because there's so little information on each screen and "why can't it just display a list of transactions like my other bank". I explained to him that one bank's web site is designed for mobile users with small screens (and that's why it sucks for his larger laptop screen), and the other bank's web site is designed for desktop/laptop (and would suck for small smartphones). The fact is that both web sites sucked - neither provided different pages designed for the 2 different use cases, with suitable navigation and content for each specific case. Of course both are based on HTML, which is designed for "separation of content and presentation" specifically so pages can adapt to things like different screen sizes, and "small screen" vs. "big screen" isn't radically different (nowhere near as different as "video vs. audio").

ianhamilton wrote: I don't know what your reasons are for thinking that they suck, but there are common misconceptions amongst non-users, for example thinking that basic robotic sounding synthesised voices are low quality. This is an intentional usability feature specific to blind users, as the voices can be sped up to an extremely high rate without losing the ability to discern syllables. So if you're someone who uses a screen reader day in day out (i.e. you're blind) you gradually ramp up the speed and end up with a drastically faster experience, and therefore a drastically less frustrating experience.
I simply can't understand how anyone can think a screen reader can ever be ideal. It's not ideal; it's a compromise. A user interface designed specifically for audio can get much, much closer to ideal (and if in some extremely unlikely cases it can't be better than a screen reader, then it can still match the experience of a screen reader and won't be worse).
This is where things get messy. Developers don't write applications for my OS. They write processes that cooperate (via open standards). What might look like a single application to the end user may be a "front end", a spell checker service, an arbitrary precision maths service, a "back-end" and a file format converter; where all 5 of these separate processes were designed and written by completely different/unrelated developers, possibly working for competing companies. Anyone can grab the specifications/standards and write a new piece whenever they like, without anyone's permission and without anyone's source code. If an application has no audio based front end, anyone can add one.

ianhamilton wrote: The three options aren't quite right either. But for now let's just say they were. Would 100% of your developers go for option 3? I don't know, but I would expect the answer to be no.
In which case, what happens to the remainder when they're contacted by a blind user or, worse, get sued (e.g. if the OS is ever used in anything involving government, falling under Section 508)? Or again under Section 508 if someone is trying to pitch for business, but finds out they're going to be scuppered because the client is legally required to choose an accessible bidder. How much work would it be to get it up to scratch?
Note that this isn't limited to "audio based user interface" and "video based user interface". Maybe there's a word-processor and someone writes a front end that's better for "small screen, big thumbs" (smartphones) and someone else writes a front-end that's better for "large screen, precise mouse" (desktop/laptop). Maybe there's 10 different front-ends with different features and different use cases, and they all use the same back-end. Maybe there's 10 users, all editing the same document at the same time, and all using different front-ends.
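To make that split a little more concrete, here's a rough sketch of the kind of front-end-agnostic messages it implies. None of these names come from a real protocol; they're only meant to show that the back-end deals in meaning rather than pixels or audio, which is what lets wildly different front-ends share it.

Code: Select all
#include <stdint.h>

/* Hypothetical message header - not from any real protocol. The payload is
   always "meaning" (text, structured content, events), never raw pixels or
   raw audio, so no particular kind of front-end is privileged. */
enum msg_type {
    MSG_USER_ACTION,      /* front-end -> back-end: "the user did X"     */
    MSG_CONTENT_CHANGED,  /* back-end -> front-end: "this part changed"  */
    MSG_NOTIFY,           /* back-end -> front-end: "something happened" */
};

struct message {
    uint32_t type;        /* one of enum msg_type                        */
    uint32_t target_id;   /* which document/control/region it refers to  */
    uint32_t length;      /* bytes of payload that follow                */
    /* payload follows */
};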
More likely is that the design decisions that benefit blind users (like the robotic sounding synthesised voice) also benefit everyone else.

ianhamilton wrote: So, the options. Option 1 simply doesn't exist. Screen reader compatibility requires work by developers. However that work is greatly reduced by so much of it being handled at a system level. Option 2 does exist, and is used too. Primarily by developers who are creating something experiential rather than functional, such as an audio game.
The issue here really is about how much work is involved for the developer.
So what you really want is a combination of options 1 and 3. For developers to want to do it for reasons other than just accessibility, but also for it to be an out-of-the-box system that just requires some minor developer input (adding textual equivalents, defining tab order, etc.). If you can do that, you'll be on to a winner.
You do need to bear in mind that the needs of a blind user and those of someone using a screenless device will differ in some ways though, for example the speed thing above, or something as simple as how the device is held.
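As a rough illustration of that "minor developer input" (using entirely hypothetical structure and field names), the per-control metadata could be as small as a textual equivalent plus a tab-order index, with the toolkit doing the rest.

Code: Select all
/* Hypothetical per-control metadata - names invented for illustration. */
struct control_desc {
    const char *label;        /* textual equivalent, e.g. "Send message" */
    const char *long_hint;    /* optional longer description             */
    int         tab_order;    /* position in keyboard/audio navigation   */
};

static const struct control_desc send_button = {
    .label     = "Send message",
    .long_hint = "Sends the message you have typed to the selected contact",
    .tab_order = 3,
};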
I can't think of a reason why the OS would want to move the camera itself (unless it's a result of user input).

ianhamilton wrote: No confusion here, it's the latter that I'm referring to. Actually a bigger issue than the user being able to move the camera around is when you move it for them. Simulation sickness is a mismatch between what your brain expects to see and what it sees, so if the movement is outside of the user's control, i.e. less predictable, it's worse. Transitions are a biggie, and any kind of transitions, not just 3D ones.

Brendan wrote: The other thing is having windows/applications floating in a virtual space combined with the ability to move the camera (like you would in a first person 3D game). This can cause simulation sickness. However, it's not like I'll be forcing the user to move the camera around, and not like they won't be able to do the old fashioned "alt+tab between windows" thing to make the camera jump directly to a specific application.
I don't have any other plans. Either you move the camera around yourself, or you alt+tab. "Fancy" transitions look cool for about 3 minutes and remain an annoyance thereafter (mostly because of the time it takes waiting impatiently for them to complete).

ianhamilton wrote: This is really worth a read, it covers some of the changes that Apple made following the horrendous time that so many people had with iOS7:
http://www.theguardian.com/technology/2 ... n-sickness
I don't know what your plans are for how either the 3D or the transitions will work. But you don't have to just leave people alt-tabbing; there are design considerations that you can make to reduce nausea, and allow some people to use the camera who wouldn't otherwise be able to.
Note that it won't be a case of "the GUI" (like Windows where it's built in), but will be more like Linux where the GUI is just another set of cooperating processes. When a user logs in, the OS determines which GUI to start from the user's profile (although it's not necessarily a GUI either - it may be an application instead). I'll provide one GUI that works how I want it to work; but I'll have no real control over the design of third party GUIs.
For other reasons (mostly spreading a virtual world across multiple monitors), I'm already insisting that the angle from the camera to any point in the virtual world matches (as closely as is practically possible) the angle from the eye to that same point on the screen.

ianhamilton wrote: That iOS link above is full of good ones. Also the freely available official documentation for Oculus Rift; they ploughed vast amounts of money into research into simulation sickness, and a great deal of what they're addressing for VR is just as helpful for regular 3D camera work too:
https://developer.oculus.com/documentat ... _sickness/
The FOV angle really is a biggie for simulation sickness. There's a great two part tutorial video here explaining exactly why and showing how to do it well. It's from a gaming course, but equally relevant to any other virtual camera work:
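Coming back to the angle-matching constraint above: if the OS knows the physical width of the monitor (e.g. from its EDID) and roughly how far away the user sits (which would have to come from the user's profile; the figures below are only assumed examples), the camera's field of view follows directly, and it ends up far narrower than the 90+ degree defaults many games use.

Code: Select all
#include <math.h>

/* Horizontal field of view that makes "angle on screen" match "angle in the
   virtual world", given the monitor's physical width and the viewing
   distance. Both inputs are assumptions the OS would need from EDID data
   and/or the user's profile. */
static double camera_fov_radians(double screen_width_metres,
                                 double viewing_distance_metres)
{
    return 2.0 * atan((screen_width_metres / 2.0) / viewing_distance_metres);
}

/* Example: a 0.53 m wide monitor viewed from 0.7 m gives a FOV of roughly
   41.5 degrees - much narrower than the wide defaults many games use, which
   is part of why those defaults can make people feel ill. */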
For other reasons (mostly networking lag in distributed systems), I'm also insisting on "prediction to hide latencies" for everything between user input devices and display devices.
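As a sketch of what "prediction to hide latencies" might mean for something as simple as a mouse pointer (names and units are illustrative, not from any real interface):

Code: Select all
/* Instead of drawing the last pointer position that arrived (which may be
   tens of milliseconds old on a distributed system), extrapolate from the
   last known position and velocity by the measured latency. The same idea
   applies to anything else that moves smoothly. */
struct pointer_state {
    double x, y;          /* last reported position (pixels)               */
    double vx, vy;        /* estimated velocity (pixels per second)        */
    double timestamp;     /* when that report was generated (seconds)      */
};

static void predict_pointer(const struct pointer_state *last, double now,
                            double *px, double *py)
{
    double dt = now - last->timestamp;   /* includes network + display lag */
    *px = last->x + last->vx * dt;
    *py = last->y + last->vy * dt;
}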
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.