Hi,
onlyonemac wrote:Brendan wrote:For my OS and applications designed for my OS, assume you write "O" (and wait once).
Justify that. Explain how that's going to work, without ending up with conflicts once we've got more than 26 commands (remember that every command needs a shortcut letter, for your blind users who aren't being given a proper interface). (Also note the difference between keyboard shortcuts e.g. "ctrl-s" and the underlined letters in menus e.g. "alt-f, s".)
If there are more than 26 frequently used commands, the application is probably a hideous mess that needs redesigning; and for rarely used commands nobody is going to remember the shortcuts anyway, so the user will end up using other things (main menu, context sensitive menu, toolbar, whatever) regardless.
onlyonemac wrote:Brendan wrote:Of course what I've described is only one of the many possible ways that a handwriting recognition system could be implemented (and is just something that I "invented" without much research into the most effective way of designing a handwriting recognition system); and changing the way the handwriting recognition system works would make absolutely no difference to the way front-ends work.
Yet when I "invented" an equally plausible way for the system to be implemented you discarded it without any consideration?
There are many different ways that a handwriting recognition system could be designed which work with a "front end doesn't care what the input device is" system; and also many different ways that a handwriting recognition system could be designed which don't. The existence of the latter doesn't prove the former is impossible or undesirable.
onlyonemac wrote:Brendan wrote:From this you decide that I'd be forcing users to always use the menu/toolbar/toolbox for frequently used things, and preventing them from using the commands (as keyboard shortcuts or underlined characters)?
No, the toolbars and toolboxes are for mouse (and other pointing device) users; the keyboard shortcuts and underlined characters are for keyboard users (and, in your fantasy world, handwriting recognition users as well).
No. The menus, toolbars, toolboxes and whatever are for all input devices and for discoverability (to provide a way for the user to know what the application's commands are).
onlyonemac wrote:Brendan wrote:Um, what? We've taken what is essentially a graphics tablet and added handwriting recognition software to that hardware to create a handwriting recognition system; and now you're trying to tell me that the graphics tablet shouldn't be used as a graphics tablet, because a graphics tablet is a completely separate class of input device to a graphics tablet?
It doesn't matter what the underlying hardware is, we're talking about handwriting recognition systems here. If the actual hardware is a graphics tablet and it can also work as a graphics tablet and we're using it as a graphics tablet then that's a separate input device to when it's being used with handwriting recognition software and being treated as a handwriting recognition input device.
I am amazed at your ability to stretch reality into a twisted pile of nonsense. My keyboard is 2 completely separate input devices, one that does capital letters (unless I hold down the shift key) and another that does lower case letters (unless I hold down the shift key). It's not one device with 2 modes at all.
onlyonemac wrote:Brendan wrote:Combuster got confused and thought we were talking about differences between different input devices from the user's perspective (of which there are many) and not talking about differences between different input devices from the front-end's perspective (of which there may be none).
He didn't get confused; you did. He's talking about the dexterity of control over the devices in question and how those influence the way that they are used.
It might influence which devices the user chooses to purchase/use; but it makes no difference to any application's front-end.
onlyonemac wrote:If you really insist on taking this approach, why not try this:
Divide input devices into categories. I see three main categories:
- pointing devices (mice, touchscreens, trackpads, trackballs, analogue joysticks, etc.) - anything that can input an absolute or relative location or movement along two or more axes, plus a select action
- text input devices (keyboards, voice recognition, handwriting recognition, OCR, etc.) - anything that can input text characters
- directional navigation devices (4-way keypads, digital joysticks, switch-access devices, etc.) - anything that's got controls to navigate in two or more directions and a control to select things
There may be additional categories needed (for example, a scanner and a camera could fall into a separate category, but if they're used for OCR or recognising sign language then they're text input, and if they're used for navigation through hand gestures then they're either pointing devices or directional navigation devices depending on the system used), or there may be another way entirely to classify devices; but the point is that we're grouping devices into groups with similar characteristics. Note also that devices may fall into more than one category depending on the mode used - for example, a keyboard can be used as either a text input device or a directional navigation device, and as you suggested the handwriting recognition system's hardware can also be used as a pointing device (although then it no longer falls into the category of a handwriting recognition system and isn't a text input device; it's a separate device entirely).
OK; but note that:
- A directional navigation device can be used to emulate a pointing device (just with less control over the speed of movement)
- Any pointing device (and therefore any directional navigation device), in conjunction with an "on screen virtual keyboard", can emulate a text input device (and this is extremely common - e.g. smartphones, tablets)
- Any text input device can have a method or mode where it emulates a pointing device or directional navigation device (even if this means using words like "up" to represent movement)
- An input device emulating a different category may be worse than a device intended for that category; but it's better than nothing when no other device exists.
- The combinations above imply that all categories of input devices are able to emulate all other categories of input devices; which means that all input devices are able to generate all events to send to a front-end.
- Each category of input devices has a small set of events intended for its native "no emulation" use. These events can be combined into a super-set of events covering all input devices.
- The super-set events would have categories too - e.g. "pointing device events", "commands", and "literal characters/text"; but (partly because all input devices are able to generate all events anyway) there's no reason for a front-end to care what the input device/s actually are, or (when there are multiple different input devices being used simultaneously) which input device sent an event.
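To make that concrete, here's a rough sketch (in C) of what such a super-set of events might look like - all of the names are invented purely for illustration, not a definitive design:

Code:
/* Hypothetical super-set of input events; every input device (native or
   emulating another category) only ever produces these. */
typedef enum {
    EVENT_POINTER_MOVE,     /* "pointing device events" */
    EVENT_POINTER_SELECT,
    EVENT_COMMAND,          /* "commands" */
    EVENT_TEXT              /* "literal characters/text" */
} event_type;

typedef struct {
    event_type type;
    union {
        struct { int x, y; } pointer;   /* position for pointer events */
        unsigned command;               /* which command (e.g. "open") */
        unsigned code_point;            /* Unicode code point for text */
    } data;
} input_event;

A front-end only ever sees these events; nothing in them says (or needs to say) which device they came from.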
onlyonemac wrote:Then you can design your interface system so that it can work with any number of devices from any number of categories. So for example the word processor takes text from a text input device and can also take keyboard shortcuts with a modifier or commands written on a handwriting recognition device with a "modifier" gesture. But it also has menus and toolbars for use with a pointing device and an on-screen keyboard (although few users would actually use this seriously), and the menus can also be navigated using a directional navigation device and again there can be an on-screen keyboard for entering text.
You've just described a word-processor using a single front-end for all input devices; that has no need to care if (e.g.) text came from a keyboard or a pointing device (using an on screen keyboard) or handwriting recognition system; and has no reason to care if the commands came from keyboard shortcuts or were handwritten or were entered via menus; and has no reason to care if the menus are being used from keyboard or mouse or joystick or handwriting system in "touchpad mode".
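To illustrate, a sketch of what that single front-end's event handling could look like (building on the hypothetical input_event above, and assuming helpers like insert_char() exist - none of this is a real API):

Code:
/* Hypothetical word-processor front-end; note that nothing here knows or
   cares which physical device generated the event. */
void frontend_handle_event(input_event *event) {
    switch (event->type) {
        case EVENT_TEXT:            /* keyboard, on-screen keyboard, handwriting, speech, ... */
            insert_char(event->data.code_point);
            break;
        case EVENT_COMMAND:         /* shortcut, underlined letter, menu, handwritten command, ... */
            do_command(event->data.command);
            break;
        case EVENT_POINTER_MOVE:    /* mouse, touchpad, cursor keys, handwriting system in "touchpad mode", ... */
            move_pointer(event->data.pointer.x, event->data.pointer.y);
            break;
        case EVENT_POINTER_SELECT:
            select_at(event->data.pointer.x, event->data.pointer.y);
            break;
    }
}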
onlyonemac wrote:The graphics editor allows the user to draw with a pointing device and provides toolboxes for frequently-used tools and a menubar for all the available options, but users can also draw with a directional navigation device (there are many ways this could be done - you could have a path-based system whereby the user can enter and modify nodes in a drawing path before confirming the path to have it applied, or you could have a plotting-style system for straight lines, or whatever else that allows mostly freeform shapes to be drawn with a directional device) and again they could navigate the menus with the directional device; using a text input device for a graphics editor could prove a little tricky and I'd like to see you suggest some way of doing this that isn't just using a handwriting recognition system as a pointing device.
You've just described a graphics editor using a single front-end for all input devices; that has no reason to care if the commands came from keyboard shortcuts or were handwritten or were entered via menus; and has no reason to care if the menus are being used from keyboard or mouse or joystick or handwriting system in "touchpad mode"; and has no reason to care if the pointer is being controlled by a mouse or touchpad or cursor keys or anything else.
For speech recognition you could use coordinates (e.g. the user says "coords 123, 456") to emulate a pointing device. If you deliberately cripple a handwriting system to make it far less efficient, by denying the "touch pad" capabilities that must exist by definition (to allow the user to enter handwriting), then it could also use the same coordinate system to emulate a pointing device (e.g. the user writes "coords 123, 456").
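As a rough sketch of how simple that emulation could be (again using the hypothetical input_event from above - the recogniser's text output is just converted into the same pointer event a mouse would generate):

Code:
#include <stdio.h>

/* Hypothetical: convert recognised text like "coords 123, 456" into a
   pointer event; returns 0 if the text wasn't a coordinate command (so
   it can be passed through as literal text instead). */
int text_to_pointer_event(const char *recognised_text, input_event *event) {
    int x, y;
    if (sscanf(recognised_text, "coords %d, %d", &x, &y) == 2) {
        event->type = EVENT_POINTER_MOVE;
        event->data.pointer.x = x;
        event->data.pointer.y = y;
        return 1;
    }
    return 0;
}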
onlyonemac wrote:As we can see, once we group input devices into categories with similar characteristics it is easy to create interfaces that are optimised for the characteristics of each group of input devices and allow users to get the full benefit from their particular input device (whether they're using that input device out of choice or necessity is irrelevant); it just requires the application/frontend to be aware of what input device is used and present an appropriate interface.
Except you described the same interface for all input devices anyway; and failed to provide any reason why the application/front-end would care if (e.g.) text came from a keyboard or a virtual keyboard/mouse, or....
Cheers,
Brendan