Hey guys, I'm working on getting keyboard input, having some trouble with it, and was wondering if you could give me some advice.
Currently, I have the interrupts set up and they successfully recognize whenever a key is pressed or released. However, I'm having some trouble interpreting the scan codes. I found a website http://www.barcodeman.com/altek/mule/scandoc.php with several charts listing the scan codes for a 102-key keyboard.
I could manually create two character arrays, one for the ASCII letters and one for the keyboard scan codes, and then refer one to the other, but this seems like a lot of work that should be avoidable.
I found code on OSdev http://wiki.osdev.org/Keyboard_Input listing the following code as a reference to the letters, but it gives me a compiler error:
char lowercase[256] = {0x1E:'a'};
char uppercase[256] = {0x1E:'A'};
How do the rest of you all handle keyboard inputs? Or do I just need to bite the bullet and hand-write out all the scan codes?
Keyboard input
Hexciting: An open source hex editor for the command line.
https://sourceforge.net/projects/hexciting/
Re: Keyboard input
samoz wrote:I found code on OSdev http://wiki.osdev.org/Keyboard_Input listing the following code as a reference to the letters, but it gives me a compiler error:
char lowercase[256] = {0x1E:'a'};
char uppercase[256] = {0x1E:'A'};
How do the rest of you all handle keyboard inputs? Or do I just need to bite the bullet and hand-write out all the scan codes?
I think this means that the 31st entry in the array (index 0x1E) should be set to the codes of the letters 'a' and 'A' respectively.
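As for the compiler error itself: that wiki snippet looks like informal shorthand for C99 designated initializers, whose standard syntax uses square brackets rather than a colon. A minimal sketch, with only a few set-1 make codes filled in:

```c
/* C99 designated initializers: [index] = value, not index:value.
   0x1E, 0x30, 0x2E are the set-1 "make" codes for A, B, C;
   all other entries default to 0. */
static const char lowercase[256] = {
    [0x1E] = 'a', [0x30] = 'b', [0x2E] = 'c', /* ...and so on... */
};
static const char uppercase[256] = {
    [0x1E] = 'A', [0x30] = 'B', [0x2E] = 'C', /* ...and so on... */
};
```

With this form, `lowercase[0x1E]` gives 'a' directly, and scan codes you haven't mapped come back as 0.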
As for the rest of your question, keyboard handling is implemented in the following way in my OS. There are two types of drivers for processing key presses, stacked one on top of the other. The lower driver is a keyboard class driver: it accepts scan codes from the keyboard and translates them to virtual keys (the translation is implemented as a finite state machine). The higher drivers are the layout drivers, which convert the virtual-key packets read from the class driver into Unicode characters or other keyboard-related events (for example, backspace is converted to an event). Separating the two tasks (reading scan codes and interpreting them as keys) lets me switch keyboard layouts on the fly, as it is supposed to work...
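In case it helps, here's a very rough sketch of what the class-driver state machine could look like. All the names (vk_event_t and so on) are made up for illustration, and only the 0xE0 prefix and the break bit are handled; a real driver would also deal with 0xE1 (Pause) and similar oddities:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: scan codes in, virtual-key events out. */
typedef struct { uint8_t vkey; bool released; } vk_event_t;

enum { STATE_NORMAL, STATE_E0 };     /* 0xE0 starts an extended sequence */
static int state = STATE_NORMAL;

/* Returns true when a complete event has been decoded into *ev. */
bool translate_scancode(uint8_t code, vk_event_t *ev)
{
    if (code == 0xE0) {              /* prefix byte: wait for the next code */
        state = STATE_E0;
        return false;
    }
    ev->released = (code & 0x80) != 0;   /* bit 7 set = key release */
    uint8_t base = code & 0x7F;
    /* Extended keys get their own virtual-key range in this sketch. */
    ev->vkey = (state == STATE_E0) ? (uint8_t)(0x80 | base) : base;
    state = STATE_NORMAL;
    return true;
}
```

The layout drivers would then only ever see vk_event_t packets, never raw scan codes, which is what makes the on-the-fly layout switch possible.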
- Combuster
- Member
- Posts: 9301
- Joined: Wed Oct 18, 2006 3:45 am
- Libera.chat IRC: [com]buster
- Location: On the balcony, where I can actually keep 1½m distance
- Contact:
Re: Keyboard input
At some point you *will* need to have a scancode-character map somewhere. The easiest way out is indeed to write out the map, but you could probably use some perl script to convert existing tables.
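Once that map exists, the lookup side of the IRQ1 handler reduces to something like this sketch (the helper name is made up, and break codes are simply dropped):

```c
#include <stdint.h>

/* Pure helper for an IRQ1 handler: look up the ASCII character for a
   set-1 scan code. Returns 0 for break codes (bit 7 set) and for
   entries the map doesn't cover. */
char scancode_to_ascii(uint8_t code, const char map[256])
{
    if (code & 0x80)    /* "break" (key release) code: ignore it */
        return 0;
    return map[code];
}
```

The handler would read the scan code from port 0x60, pass it through this, and discard the zeros.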
Re: Keyboard input
You also need to ask yourself what character set and encoding you'll be using. ISO-8859-1 seems to be the default choice for most users, as it's popular and is a standard 1 byte per character encoding. But I'd use Unicode for any decent OS, especially as internal representation as it saves you a lot of headaches later. For encoding there are then several options: UTF-32 is fast and simple as it has no multibyte characters, but uses 4 bytes per char regardless. UTF-8 is slower because it makes use of multibyte sequences (which need to be parsed) but it wastes little space especially for ASCII text. UTF-16 seems to be a happy medium. Personally, I wouldn't really care about some extra wasted space, as speed is more important in a proper OS, so I'd go for UTF-32.
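To make the trade-off concrete, here's an illustrative encoder for the UTF-8 side; the multibyte parsing mentioned above is essentially this logic in reverse. It's a sketch, not a hardened implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Encode one Unicode code point as UTF-8. Returns the number of bytes
   written (1-4), or 0 for invalid input (surrogates, > U+10FFFF). */
size_t utf8_encode(uint32_t cp, uint8_t out[4])
{
    if (cp < 0x80) {                        /* ASCII: 1 byte, unchanged */
        out[0] = (uint8_t)cp;
        return 1;
    } else if (cp < 0x800) {                /* 2-byte sequence */
        out[0] = (uint8_t)(0xC0 | (cp >> 6));
        out[1] = (uint8_t)(0x80 | (cp & 0x3F));
        return 2;
    } else if (cp < 0x10000) {              /* 3-byte sequence */
        if (cp >= 0xD800 && cp <= 0xDFFF)   /* surrogates are invalid */
            return 0;
        out[0] = (uint8_t)(0xE0 | (cp >> 12));
        out[1] = (uint8_t)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (uint8_t)(0x80 | (cp & 0x3F));
        return 3;
    } else if (cp <= 0x10FFFF) {            /* 4-byte sequence */
        out[0] = (uint8_t)(0xF0 | (cp >> 18));
        out[1] = (uint8_t)(0x80 | ((cp >> 12) & 0x3F));
        out[2] = (uint8_t)(0x80 | ((cp >> 6) & 0x3F));
        out[3] = (uint8_t)(0x80 | (cp & 0x3F));
        return 4;
    }
    return 0;
}
```

With UTF-32 there's no encoder at all, just a flat array of uint32_t, which is the simplicity being paid for with the extra space.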
- Troy Martin
- Member
- Posts: 1686
- Joined: Fri Apr 18, 2008 4:40 pm
- Location: Langley, Vancouver, BC, Canada
- Contact:
Re: Keyboard input
Woot for perl!
I can't really help you much, my kernel is real mode asm.