onlyonemac wrote:
Something like office software could be a problem: formatting text, moving elements around on the page, and so on are all tasks which are difficult to do with an audio interface, but which, if blind people were unable to do them, would leave them very frustrated at being unable to produce documents that look as professional as those produced by a sighted person.

That must be particularly hard for people who were born blind, but I can easily understand how people who have gone blind, and who can still visualise what documents look like to sighted people, will be determined to make use of that ability rather than being forced to bring in a sighted person to arrange everything for them. It's a good example: I can see straight away how low on the priority list this functionality would be for any application writer, so the tools to handle it either need to be standard parts of the operating system or an addition that behaves as if it were part of the operating system. Any of that functionality which is also useful to sighted users would be best made a standard part of the operating system rather than an optional extra. Functionality of no use to sighted users could perhaps stay in a separate package which blind users would download and install, but keeping it separate is only worthwhile if there's a lot of functionality in that package, and I very much doubt there would be enough of it to justify not building it all directly into the OS.
onlyonemac wrote:
My screenreader will read out text formatting (such as the font name, size, colour, whether the text is bold/italic/underlined, and whatever else I've configured it to read out)...

I would want to set it up to read in different accents to indicate the font, to change the pitch of the voice to indicate text size, and perhaps to change the mood of the voice to represent associations with different colours. That would make it less irritating when there are lots of changes, and faster to process. I would also want it to be able to spell out the formatting explicitly though, and I can't see any reason why any of that would only be of use to blind people. If I'm writing something while walking in the countryside, I could get the formatting right at the time of writing instead of having to leave it until I get home.
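To make that concrete, here's a rough sketch in C of how formatting could be mapped onto voice properties. Every name and mapping rule here is invented for illustration - a real version would drive whatever speech synthesizer the OS provides:

Code:
/* Hypothetical sketch: map text formatting onto voice properties so
 * changes are heard rather than spelled out on every run of text. */
#include <stdio.h>

typedef struct {
    const char *font;     /* e.g. "Times", "Courier" */
    int         size_pt;  /* point size */
    int         bold;
} TextFormat;

typedef struct {
    int voice_id;   /* different "accents" for different fonts */
    int pitch;      /* 0-100: higher pitch for larger text */
    int emphasis;   /* bold text gets stronger emphasis */
} SpeechParams;

SpeechParams format_to_speech(const TextFormat *f)
{
    SpeechParams p;
    p.voice_id = f->font[0] % 4;    /* crude font -> voice selection */
    p.pitch    = 40 + f->size_pt;   /* bigger text, higher pitch */
    p.emphasis = f->bold ? 2 : 0;   /* colour -> mood would slot in here */
    return p;
}

int main(void)
{
    TextFormat heading = { "Times", 24, 1 };
    SpeechParams p = format_to_speech(&heading);
    printf("voice %d, pitch %d, emphasis %d\n",
           p.voice_id, p.pitch, p.emphasis);
    return 0;
}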
onlyonemac wrote:
...and when I'm producing a presentation in PowerPoint it reads out the size and position of the elements on the slide as I move them around (and I think there's also a key combination to read them out on demand, but I never use that feature), so I can estimate where something is on the slide and get an idea of how the presentation will look.

And that's the kind of task that a sound screen would handle, giving you something close to a visual representation of the layout of the content. I am not blind, but I want to use a sound screen, and that's what motivates me to think about developing one. Again, it would be useful to have it state the exact locations and positions of things so that they can be adjusted with precision. A lot of GUI software doesn't let you line things up perfectly, because it's hard to get it right by eye even when you can see that it's wrong - you want actual numbers for the locations, and you want to be able to adjust those numbers to move things around.
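Something along these lines (names again invented for illustration): every element carries exact coordinates which the interface can both announce and set numerically, so two things can be lined up by copying a number rather than by eye or ear:

Code:
/* Hypothetical sketch: announce and adjust element positions as
 * exact numbers instead of dragging them into rough alignment. */
#include <stdio.h>

typedef struct { const char *name; int x, y, w, h; } Element;

void announce(const Element *e)
{
    /* A sound screen would send this to the speech synthesizer. */
    printf("%s: left %d, top %d, width %d, height %d\n",
           e->name, e->x, e->y, e->w, e->h);
}

void move_to(Element *e, int x, int y)   /* move to an exact position */
{
    e->x = x;
    e->y = y;
    announce(e);
}

void align_left(Element *e, const Element *ref)
{
    move_to(e, ref->x, e->y);   /* perfect alignment: copy the number */
}

int main(void)
{
    Element title = { "title", 103,  40, 400,  60 };
    Element body  = { "body",  100, 120, 400, 300 };
    align_left(&title, &body);  /* title is now at exactly x = 100 */
    return 0;
}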
onlyonemac wrote:
Sighted users aren't going to want to fuss with listening to descriptions of formatting and trying to guess where the elements on the slides are, so chances are audio interface developers are going to leave those features out of the audio interface, thus putting blind users at a disadvantage.

Some sighted users will want all of that, and the best way to get it is to make sure it's made a part of the operating system. Once a need for a control is recognised, it should be added. The problem comes from operating system developers having other priorities which they want to deal with first, while someone writing software to open up the machine to blind people puts a higher priority on providing that special functionality; the better answer is for them to join the team developing the operating system and make sure everything's done right.
onlyonemac wrote:
That's why, even if a separate audio interface is provided, having built-in support for something like a screenreader designed to work with the graphical interface when the audio interface is too restrictive moves the responsibility off of the application developers (who have repeatedly proven themselves not to care about software accessibility) to the screenreader developers (whose main focus is software accessibility).

Rather than having a screenreader designed to operate a GUI that's specifically designed for sighted people, you'd be better off with a universal user interface which can handle input from all devices and make the way they're used as flexible as possible. If you want the machine to tell you in words anything it normally tells you visually, it should be possible to instruct it to do just that. In the same way, if it normally tells you something by making a noise, it should be possible to instruct it to display something visual instead. You are right that application developers should not be left to deal with these issues, but screenreader software designed to operate through a GUI is always going to be following the GUI (and may be limited by it in many ways) instead of taking its own lead and acting on things directly.
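A rough sketch of what I mean, with invented names: applications emit abstract messages, and the OS routes each one to whichever output channels the user has switched on, so speech and graphics are peers rather than one being bolted onto the other:

Code:
/* Hypothetical sketch: one abstract notification, routed by the OS to
 * every output channel the user has enabled. */
#include <stdio.h>

typedef enum { OUT_VISUAL = 1, OUT_SPEECH = 2, OUT_SOUND = 4 } Channel;

static unsigned active_channels = OUT_VISUAL | OUT_SPEECH;

void notify(const char *text)
{
    if (active_channels & OUT_VISUAL)
        printf("[display] %s\n", text);   /* draw it on screen */
    if (active_channels & OUT_SPEECH)
        printf("[speak]   %s\n", text);   /* say it aloud */
    if (active_channels & OUT_SOUND)
        printf("[beep]\n");               /* signal it with a noise */
}

int main(void)
{
    notify("Battery low");        /* shown and spoken */
    active_channels = OUT_VISUAL;
    notify("Download finished");  /* shown only */
    return 0;
}

The application never knows or cares which channels are live - that decision belongs to the user and the OS, not to each program.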
The real issue is bad user interface design, and the best way to fix it is to work directly on improving that rather than bolting an audio converter onto a primitive GUI (which is what all GUIs currently are). We need people to stop thinking of a GUI as an isolated unit: it should just be part of a bigger user interface with many overlaps in functionality between all the different input and output devices. With a GUI, you're often forced to go through a long chain of badly thought-out menus to get simple things done, and why would you want to go through that same chain of garbage using sound commands instead of just naming the thing you want and going there directly?

A proper speech user interface would take you straight to where you want to go, and if you have one of those, the graphical user interface should be integrated with it so that it too can take that direct path - both need to be designed as parts of the same universal user interface so that each can offer the same functionality as standard wherever possible. That is the direction to go in, rather than keeping different user interfaces in separate compartments and then bolting disability software packages onto them to provide the limited functionality of one interface through another. I think the way forward is to unify the whole lot as one box of tricks which doesn't perpetuate unnecessary divides that restrict functionality.
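If it helps to picture how speech could take that direct path, here's a last rough sketch (invented names again): one command table shared by every interface, so a spoken name, a menu click or a shortcut all resolve to the same entry:

Code:
/* Hypothetical sketch: speech, menus and keyboard all index into the
 * same command table, so speech can jump straight to a destination
 * instead of replaying a chain of menus. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;     /* what the user says or clicks */
    void (*run)(void);    /* the action itself */
} Command;

static void open_printer_setup(void) { printf("printer setup opened\n"); }
static void insert_table(void)       { printf("table inserted\n"); }

static const Command commands[] = {
    { "printer setup", open_printer_setup },
    { "insert table",  insert_table       },
};

int invoke(const char *name)   /* shared by every input device */
{
    for (size_t i = 0; i < sizeof commands / sizeof *commands; i++) {
        if (strcmp(commands[i].name, name) == 0) {
            commands[i].run();
            return 1;
        }
    }
    return 0;   /* unknown command */
}

int main(void)
{
    invoke("printer setup");   /* straight there, no menu chain */
    return 0;
}

The point is that the command table, not any particular menu system, is the real interface; every input device is just another way of indexing into it.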