
Is this a proper way of reading the KBC?

Posted: Fri Nov 05, 2010 10:18 pm
by ~
I have been trying to test some things, such as enabling the PS/2 mouse, and thus have been working with the keyboard controller. There are some commands that, according to the wiki, may or may not return an ACK:

http://wiki.osdev.org/Mouse_Input#Set_C ... able_IRQ12
Set Compaq Status/Enable IRQ12

.....
Then send command byte 0x60 ("Set Compaq Status") to port 0x64, followed by the modified Status byte to port 0x60. This might generate a 0xFA ACK byte from the keyboard.
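Just for reference, this is roughly how I read that sequence, sketched in plain C (inb()/outb() stand in for port-I/O helpers, and the busy-wait loops are deliberately unbounded here, which is exactly the timing problem described below):

Code:

// Rough sketch of the "Set Compaq Status / Enable IRQ12" sequence from the wiki.
// inb()/outb() are assumed freestanding port helpers; the wait loops are
// unbounded and would need the kind of timeout discussed below.
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ __volatile__("inb %1,%0" : "=a"(v) : "Nd"(p)); return v; }
static inline void outb(unsigned short p, unsigned char v)
{ __asm__ __volatile__("outb %0,%1" : : "a"(v), "Nd"(p)); }

static void wait_input_empty(void) { while (inb(0x64) & 0x02) ; }   // status bit 1: input buffer full
static void wait_output_full(void) { while (!(inb(0x64) & 0x01)) ; } // status bit 0: output buffer full

void ps2_enable_irq12(void)
{
    unsigned char cmdbyte;

    wait_input_empty();
    outb(0x64, 0x20);                 // "Get Compaq Status" / read command byte
    wait_output_full();
    cmdbyte = inb(0x60);

    cmdbyte |= 0x02;                  // set bit 1: enable IRQ12
    cmdbyte &= (unsigned char)~0x20;  // clear bit 5: re-enable the mouse clock

    wait_input_empty();
    outb(0x64, 0x60);                 // "Set Compaq Status" / write command byte
    wait_input_empty();
    outb(0x60, cmdbyte);

    if (inb(0x64) & 0x01)             // some controllers may leave a 0xFA ACK here
        (void)inb(0x60);
}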
The other problem I have is that I need a loop that waits neither too long nor too short a time. I couldn't do that reliably across machines at 500 MHz, 2 GHz, 2.9 GHz, etc. using LOOP instructions, so I had to write the function below.

It uses the RTC without interrupts. In the worst case it waits 1-2 seconds (one second is what's intended), which happens when it is asked for more bytes than the keyboard controller actually has to offer. Otherwise it is very fast.

It checks whether the keyboard controller is offering data while the RTC switches between "updating" and "updated". It reads each byte as soon as it's available and finishes as soon as the expected number of bytes (passed as a parameter by the programmer) has actually been retrieved.

Then it returns the number of bytes actually read.

But that particular problem still remains. During initialization it may not matter much to wait 1-2 extra seconds, but more than likely there is a better, faster and safer, potentially bug-free way.

Maybe using the PIT would give better time precision and less noticeable delays, but then the problem becomes how to use it without an IRQ on every tick, and depending on IDT setup and PIT reprogramming just for PS/2 initialization would look too bulky.
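(For reference, the PIT's channel 2 can apparently be run in one-shot mode and polled through port 0x61 without touching the IDT at all; a rough sketch, assuming the conventional port 0x61 bit assignments:)

Code:

// Sketch: busy-wait roughly 1 ms per iteration using PIT channel 2 in mode 0,
// polled through port 0x61 (bit 0 = channel 2 gate, bit 1 = speaker data,
// bit 5 = channel 2 output).  No IRQ0, no IDT.  inb()/outb() as usual.
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ __volatile__("inb %1,%0" : "=a"(v) : "Nd"(p)); return v; }
static inline void outb(unsigned short p, unsigned char v)
{ __asm__ __volatile__("outb %0,%1" : : "a"(v), "Nd"(p)); }

void pit_delay_ms(unsigned ms)
{
    while (ms--) {
        unsigned char port61 = inb(0x61);
        outb(0x61, (unsigned char)((port61 & ~0x02) | 0x01)); // gate on, speaker off

        outb(0x43, 0xB0);                 // channel 2, lobyte/hibyte, mode 0, binary
        outb(0x42, 1193 & 0xFF);          // ~1 ms at 1193182 Hz
        outb(0x42, 1193 >> 8);

        while (!(inb(0x61) & 0x20))       // OUT2 goes high when the count expires
            ;

        outb(0x61, port61);               // restore the original gate/speaker bits
    }
}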

What do you think about this method?


Of course I only plan to use it for gathering data bytes during initialization; once everything is set up I will gather the data using the IRQs.

Code:

while(seconds_count!=0)
{

1. Disable interrupts

1.1. Check for a keyboard data byte; retrieve and account for it if present. End the loop if the expected number of bytes has been read.

2. Check if the CMOS is making an update
3. If not, do a NOP, then go to Step 1

4. Check for a keyboard data byte; retrieve and account for it if present. End the loop if the expected number of bytes has been read.

4.1. Check if the CMOS is making an update again
5. If so, do a NOP, then go to Step 4

6. Check for a keyboard data byte; retrieve and account for it if present. End the loop if the expected number of bytes has been read.

7. seconds_count--;
}

Code:

// Inputs:
// EDI==Keyboard controller data destination
// CL==Number of (potential) seconds to wait
// CH==Number of bytes to expect from KBC
//
//Outputs:
//EBX or BL==Number of retrieved bytes
///
/*$EBX */function kbc_MULTI_readDataPort(/*$WIDEDI buffer destination, $CL num_seconds_to_wait, $CH num_expected_data_bytes*/)
{
 push $WIDEAX;
 push $WIDECX;
 push $WIDEDI;
 push $WIDEDX;

 $WIDEBX=0;  //Set the number of retrieved bytes to 0

 $DX=0x60; //keyboard data port {BYTE}


while($CL!=0)  //As long as seconds count is not reached
{
 .Step1:
 //Disable Interrupts
 ///
  asm cli


 .Step2:
  //Check for KBC data:
  ///
    asm in al,0x64   ;//Port 0x64 is the Status
    $AL&=1;
    if($AL==1) //Data is waiting to be read
    {
      asm insb      //Read it from port DX (0x60) into [EDI], advancing EDI
      $WIDEBX++;    //Count one more byte
      if($CH==$BL)goto .myends;  //If the expected number of bytes
                                 //are already read, end waiting, finish!
    }


 //Check if the CMOS is making an update
 ///
  $AL=0x0A;
  asm out 0x70,al
  asm in al,0x71
  $AL&=10000000b;


 .Step3:
 //If not, do a NOP, then go to Step1
 ///
  if($AL==00000000b)
  {
   asm cli
   asm nop

     goto .Step1;
  }


 .Step4:
  //Check for KBC data:
  ///
    asm in al,0x64   ;//Port 0x64 is the Status
    $AL&=1;
    if($AL==1) //Data is waiting to be read
    {
      asm insb      //Read it
      $WIDEBX++;    //Count one more byte
      if($CH==$BL)goto .myends;  //If the expected number of bytes
                                 //are already read, end waiting, finish!
    }


 //Check if the CMOS is making an update again
 ///
  $AL=0x0A;
  asm out 0x70,al
  asm in al,0x71
  $AL&=10000000b;


 .Step5:
 //If so, do a NOP, disable interrupts again,
 //then go to Step4
 ///
  if($AL==10000000b)
  {
   asm cli
   asm nop
   asm cli

   goto .Step4;
  }


 .Step6:
 //If not, read (all) the registers as quickly as you can:
 ///
  asm cli


  //Check for KBC data:
  ///
    asm in al,0x64   ;//Port 0x64 is the Status
    $AL&=1;
    if($AL==1) //Data is waiting to be read
    {
      asm insb      //Read it
      $WIDEBX++;    //Count one more byte
      if($CH==$BL)goto .myends;  //If the expected number of bytes
                                 //are already read, end waiting, finish!
    }



 //Count down seconds
  $CL--;
}


 .myends:

 pop $WIDEDX;
 pop $WIDEDI;
 pop $WIDECX;
 pop $WIDEAX;
}

Re: Is this a proper way of reading the KBC?

Posted: Fri Nov 05, 2010 11:55 pm
by Brendan
Hi,
~ wrote:What do you think about this method?
I think you're seriously lacking any/all abstractions.

To begin with, you should have a generic timer service that can be used for small delays, longer delays and time-outs. The "keyboard controller chip" driver should not mess with the RTC directly, but instead should use the API provided by the generic timer service.
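Something along these lines, for example (the names and units are invented here just to show the shape of it; the back-end could be the PIT, HPET, local APIC timer, or whatever else is available):

Code:

/* Hypothetical generic timer API - the keyboard controller driver only ever
 * sees this, never the RTC/PIT/HPET registers behind it. */
typedef unsigned long long ticks_t;        /* monotonic ticks since boot */

ticks_t timer_now(void);                   /* current tick count */
ticks_t timer_ms_to_ticks(unsigned ms);    /* convert milliseconds to ticks */
void    timer_delay_ms(unsigned ms);       /* short blocking delay */

/* Typical time-out pattern a "keyboard controller chip" driver would use: */
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ __volatile__("inb %1,%0" : "=a"(v) : "Nd"(p)); return v; }

int kbc_wait_for_data(unsigned timeout_ms)
{
    ticks_t deadline = timer_now() + timer_ms_to_ticks(timeout_ms);

    while (timer_now() < deadline) {
        if (inb(0x64) & 0x01)              /* output buffer full: a byte is waiting */
            return 1;
    }
    return 0;                              /* timed out */
}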

The "keyboard controller chip" is mostly a chip that supports one or 2 serial ports. You should separate the code that deals with the "keyboard controller chip" from the code that handles any device that is connected to it. For example, you might have a "keyboard controller chip" driver that provides a "send_byte()/get_byte()" interface (which doesn't know or care what the bytes sent/received actually are), and a separate PS/2 mouse driver that uses the "send_byte()/get_byte()" interface (and doesn't know or care about any of the lower level details, like which I/O ports the "keyboard controller chip" driver is using). It doesn't matter too much if these logically separate pieces of code are physically separated (e.g. contained in separate binaries) or not (e.g. all just pieces of a larger "controller and all possible PS/2 devices" driver, or maybe all implemented as pieces of a much larger monolithic kernel), as long as there's logical separation.

Also, disabling interrupts and never re-enabling them (which is what your code seems to do) is a very bad idea. You probably shouldn't leave interrupts disabled for more than about 10 normal instructions. One I/O port read or write to an old/slow legacy device (like the RTC) is worth about 500 normal instructions.

Finally, polling is stupid. The RTC has an "update IRQ" so you don't need to continually poll the "update in progress" flag; and the "keyboard controller" chip has IRQ1 (for "PS/2 port #0 needs attention") and IRQ12 (for "PS/2 port #1 needs attention").
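For example, the RTC's update-ended interrupt can be enabled through Status Register B (bit 4, delivered on IRQ8); roughly:

Code:

/* Sketch: enable the RTC "update ended" interrupt (IRQ8) instead of polling.
 * Assumes the usual CMOS ports (0x70 index with NMI-disable in bit 7, 0x71 data). */
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ __volatile__("inb %1,%0" : "=a"(v) : "Nd"(p)); return v; }
static inline void outb(unsigned short p, unsigned char v)
{ __asm__ __volatile__("outb %0,%1" : : "a"(v), "Nd"(p)); }

void rtc_enable_update_irq(void)
{
    unsigned char regb;

    outb(0x70, 0x8B);           /* select Status Register B (NMI disabled) */
    regb = inb(0x71);
    outb(0x70, 0x8B);           /* re-select before writing */
    outb(0x71, regb | 0x10);    /* bit 4: update-ended interrupt enable */
}

/* The IRQ8 handler must read Status Register C (index 0x0C) each time,
 * or the RTC won't raise any further interrupts. */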


Cheers,

Brendan

Re: Is this a proper way of reading the KBC?

Posted: Sat Nov 06, 2010 9:55 pm
by ~
Brendan wrote:Also, disabling interrupts and never re-enabling them (which is what your code seems to do) is a very bad idea. You probably shouldn't leave interrupts disabled for more than about 10 normal instructions. One I/O port read or write to an old/slow legacy device (like the RTC) is worth about 500 normal instructions.

Finally, polling is stupid. The RTC has an "update IRQ" so you don't need to continually poll the "update in progress" flag; and the "keyboard controller" chip has IRQ1 (for "PS/2 port #0 needs attention") and IRQ12 (for "PS/2 port #1 needs attention").
Yes, I know how bad it is to disable interrupts for too long and to poll in an interrupt-disabled loop. I would use better timing, and I already use IRQs in my own test "kernels" for the time when the PS/2 devices are sending user data.

Brendan wrote:I think you're seriously lacking any/all abstractions.

To begin with, you should have a generic timer service that can be used for small delays, longer delays and time-outs. The "keyboard controller chip" driver should not mess with the RTC directly, but instead should use the API provided by the generic timer service.
What I am currently intending is basically a set of short snippets for each keyboard controller command, mouse command, keyboard command and magic sequence, which can be called from the DOS, FreeDOS, etc. command line in a way that won't lock the keyboard. In this situation I think that interrupts, keyboard usability and timeouts leave little choice besides disabling interrupts, polling and carrying out full command sequences, and the only way I found to keep the timing simple, "dependency-free" and constant across processor speeds was to use the RTC polling method exclusively for the time during which ACKs and other data are expected in response to the commands.

The intention is to properly analyze and document the correct sequences for handling each of those commands, and their results. But having too many dependencies at this point would not make the actual concepts very clear.

Then I would have keyboard controller functions like

kbc_waitForDataForUs
kbc_waitUntilCommandable
kbc_readDataPort
kbc_sendCommand

And these would be called from the PS/2 mouse/keyboard drivers, which in turn would have their own lists of command-handling functions, etc., specific to the mouse/keyboard.
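A rough sketch of what I have in mind for those four (timer_now() and timer_ms_to_ticks() are placeholders for whatever generic timer service ends up providing the time-outs; inb()/outb() are the usual port helpers):

Code:

typedef unsigned long long ticks_t;
extern ticks_t timer_now(void);
extern ticks_t timer_ms_to_ticks(unsigned ms);

static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ __volatile__("inb %1,%0" : "=a"(v) : "Nd"(p)); return v; }
static inline void outb(unsigned short p, unsigned char v)
{ __asm__ __volatile__("outb %0,%1" : : "a"(v), "Nd"(p)); }

// Wait for the output buffer to hold a byte for us (status bit 0), or time out.
int kbc_waitForDataForUs(unsigned timeout_ms)
{
    ticks_t deadline = timer_now() + timer_ms_to_ticks(timeout_ms);
    while (timer_now() < deadline)
        if (inb(0x64) & 0x01)
            return 1;
    return 0;
}

// Wait for the input buffer to drain (status bit 1 clear) so a command can be sent.
int kbc_waitUntilCommandable(unsigned timeout_ms)
{
    ticks_t deadline = timer_now() + timer_ms_to_ticks(timeout_ms);
    while (timer_now() < deadline)
        if (!(inb(0x64) & 0x02))
            return 1;
    return 0;
}

int kbc_readDataPort(unsigned char *byte, unsigned timeout_ms)
{
    if (!kbc_waitForDataForUs(timeout_ms))
        return 0;
    *byte = inb(0x60);
    return 1;
}

int kbc_sendCommand(unsigned char cmd, unsigned timeout_ms)
{
    if (!kbc_waitUntilCommandable(timeout_ms))
        return 0;
    outb(0x64, cmd);
    return 1;
}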

But is there some better way for these simple tests?

Brendan wrote:The "keyboard controller chip" is mostly a chip that supports one or 2 serial ports. You should separate the code that deals with the "keyboard controller chip" from the code that handles any device that is connected to it. For example, you might have a "keyboard controller chip" driver that provides a "send_byte()/get_byte()" interface (which doesn't know or care what the bytes sent/received actually are), and a separate PS/2 mouse driver that uses the "send_byte()/get_byte()" interface (and doesn't know or care about any of the lower level details, like which I/O ports the "keyboard controller chip" driver is using). It doesn't matter too much if these logically separate pieces of code are physically separated (e.g. contained in separate binaries) or not (e.g. all just pieces of a larger "controller and all possible PS/2 devices" driver, or maybe all implemented as pieces of a much larger monolithic kernel), as long as there's logical separation.
Yes, I understand that the keyboard and the mouse are devices separate from the KBC itself, but that they share the same keyboard controller through different IRQs, and I have seen in many places that it is necessary to enable/disable the keyboard/mouse to avoid interference from user data when sending commands and reading the ACK and results.

I think I understand that this sort of separation would help in making a more stable system that, in this case, could handle these three things without much problem regardless of the state of the others (mouse or keyboard not connected, etc.).

In short, would this method of gradually adding more elements, services and APIs prove to have scalability problems when these "dummy" routines are replaced? I find it more logical to handle these elements gradually, with better methods as you say; trying to add better time handling and abstraction all at once, without first analyzing the behavior of the controller across CPU speeds, seems very difficult to do reliably and in a clearly documentable way.

And if I should focus more on something like a defined HAL, what should I read to learn more about it? Not even the book recommendations, the OS Dev Wiki or Wikipedia have very much information. Is there some general operating-system, Windows or Linux book that clearly defines it, and what should I keep in mind when trying to define one that is maintainable and actually worth the intended portability?

The results returned by Google seem to be mostly about the UNIX "HAL" project, and in general they are just an application of the concepts, while the actual theory behind them seems largely missing.

What about "Hardware-dependent Software: Principles and Practice by Wolfgang Ecker, Wolfgang Müller, and Rainer Dömer"?

And this reference:

http://ometer.com/hardware.html

Re: Is this a proper way of reading the KBC?

Posted: Sun Nov 07, 2010 12:59 am
by Brendan
Hi,
~ wrote:
Brendan wrote:I think you're seriously lacking any/all abstractions.

To begin with, you should have a generic timer service that can be used for small delays, longer delays and time-outs. The "keyboard controller chip" driver should not mess with the RTC directly, but instead should use the API provided by the generic timer service.
What I am currently intending is basically a set of short snippets for each keyboard controller command, mouse command, keyboard command and magic sequence, which can be called from the DOS, FreeDOS, etc. command line in a way that won't lock the keyboard. In this situation I think that interrupts, keyboard usability and timeouts leave little choice besides disabling interrupts, polling and carrying out full command sequences, and the only way I found to keep the timing simple, "dependency-free" and constant across processor speeds was to use the RTC polling method exclusively for the time during which ACKs and other data are expected in response to the commands.
During initialisation (where you can't rely on IRQ1 or IRQ12) use the PIT and IRQ0 (or HPET or a local APIC timer) to update a "ticks since" counter regularly, and use that counter to check for time-outs (while polling with interrupts enabled, and maybe doing HLT between polls to help minimise unnecessary heat).
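In other words, something like this (it assumes the PIT has already been programmed to a known rate, e.g. 1000 Hz, and that IRQ0 is routed to the handler below):

Code:

/* Sketch of the "ticks since boot" approach.  The IRQ0 handler only bumps a
 * counter; the poll loop runs with interrupts enabled and HLTs between checks. */
static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ __volatile__("inb %1,%0" : "=a"(v) : "Nd"(p)); return v; }

static volatile unsigned long ticks_since_boot;

void pit_irq0_handler(void)               /* called from the IRQ0 stub, which sends the EOI */
{
    ticks_since_boot++;
}

int kbc_poll_with_timeout(unsigned timeout_ticks)
{
    unsigned long deadline = ticks_since_boot + timeout_ticks;

    while ((long)(deadline - ticks_since_boot) > 0) {
        if (inb(0x64) & 0x01)
            return 1;                     /* a byte is waiting at port 0x60 */
        __asm__ __volatile__("hlt");      /* sleep until the next interrupt */
    }
    return 0;                             /* timed out */
}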
~ wrote:The intention is to properly analyze and document the correct sequences for handling each of those commands, and their results. But having too many dependencies at this point would not make the actual concepts very clear.
I fail to see the difference between depending on direct RTC access and depending on a cleaner/simpler abstraction that doesn't clutter up your keyboard controller code with irrelevant low level details for a completely different chip (the RTC). In both cases you depend on something to measure time, so some sort of dependency is unavoidable.

If you just want to test the keyboard controller (and don't want to implement anything to do with timers if you can avoid it), then you could do your testing in real mode and use the BIOS "ticks since midnight" as the basis for your time-outs.
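For example, the BIOS keeps a tick counter at 0040:006Ch that its INT 08h handler increments about 18.2 times per second; that's enough for crude time-outs. A sketch, assuming the counter can be read directly (adjust the addressing to your environment):

Code:

/* Crude real-mode time-out using the BIOS "ticks since midnight" counter at
 * 0040:006Ch.  Granularity is ~55 ms; midnight rollover is ignored here. */
#define BIOS_TICKS (*(volatile unsigned long *)0x046C)

static inline unsigned char inb(unsigned short p)
{ unsigned char v; __asm__ __volatile__("inb %1,%0" : "=a"(v) : "Nd"(p)); return v; }

int kbc_wait_for_data_bios(unsigned long timeout_ticks)   /* 18 ticks ~= 1 second */
{
    unsigned long start = BIOS_TICKS;

    for (;;) {
        if (inb(0x64) & 0x01)
            return 1;                                     /* byte waiting at port 0x60 */
        if (BIOS_TICKS - start >= timeout_ticks)
            return 0;                                     /* timed out */
    }
}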
~ wrote:
Brendan wrote:The "keyboard controller chip" is mostly a chip that supports one or 2 serial ports. You should separate the code that deals with the "keyboard controller chip" from the code that handles any device that is connected to it. For example, you might have a "keyboard controller chip" driver that provides a "send_byte()/get_byte()" interface (which doesn't know or care what the bytes sent/received actually are), and a separate PS/2 mouse driver that uses the "send_byte()/get_byte()" interface (and doesn't know or care about any of the lower level details, like which I/O ports the "keyboard controller chip" driver is using). It doesn't matter too much if these logically separate pieces of code are physically separated (e.g. contained in separate binaries) or not (e.g. all just pieces of a larger "controller and all possible PS/2 devices" driver, or maybe all implemented as pieces of a much larger monolithic kernel), as long as there's logical separation.
Yes, I understand that the keyboard and the mouse are devices separate from the KBC itself, but that they share the same keyboard controller through different IRQs, and I have seen in many places that it is necessary to enable/disable the keyboard/mouse to avoid interference from user data when sending commands and reading the ACK and results.

I think I understand that this sort of separation would help in making a more stable system that, in this case, could handle these three things without much problem regardless of the state of the others (mouse or keyboard not connected, etc.).
This sort of separation helps with maintaining the code, stability and flexibility. For example, you'd be able to implement a device driver for a PS/2 bar code scanner, a PS/2 touchpad or a PS/2 touchscreen without touching any of the code in the keyboard controller chip's driver; or add support for HPET without touching any of the code that depends on a timer.
~ wrote:In short, would this method of gradually adding more elements, services and APIs prove to have scalability problems when these "dummy" routines are replaced? I find it more logical to handle these elements gradually, with better methods as you say; trying to add better time handling and abstraction all at once, without first analyzing the behavior of the controller across CPU speeds, seems very difficult to do reliably and in a clearly documentable way.

And if I should focus more on something like a defined HAL, what should I read to learn more about it? Not even the book recommendations, the OS Dev Wiki or Wikipedia have very much information. Is there some general operating-system, Windows or Linux book that clearly defines it, and what should I keep in mind when trying to define one that is maintainable and actually worth the intended portability?
Sometimes portability makes sense (e.g. USB device drivers, as lots of completely different architectures support USB), but sometimes it doesn't (e.g. PS/2 device drivers, as only 80x86 has ever supported PS/2 devices).

I probably wouldn't focus on a HAL, but instead I'd focus on managing a hierarchical tree of devices. The driver for a parent node is responsible for detecting child devices, inserting them into the "device tree" and starting suitable drivers for those child devices. For example, you might have a driver for a PCI hub, which detects a PCI->LPC bridge and starts a driver for it. The PCI->LPC bridge driver might detect a PS/2 controller and start a driver for it. The PS/2 controller driver might detect a pair of keyboards (one in each PS/2 port) and start keyboard drivers for them. This starts to become important when you start looking at power management and hot-plug support (if a device is put to sleep or removed, then any/all of that device's children become inaccessible).
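As a very rough idea of the data structure involved (all names invented):

Code:

/* One possible representation of the device tree (purely illustrative). */
struct driver;

struct device {
    const char    *name;       /* e.g. "PCI->LPC bridge", "PS/2 controller", "keyboard" */
    struct device *parent;     /* NULL for the root of the tree */
    struct device *children;   /* first child */
    struct device *sibling;    /* next device sharing the same parent */
    struct driver *driver;     /* driver bound to this device, if any */
    int            present;    /* cleared when the device sleeps or is unplugged;
                                  all of its children become inaccessible with it */
};

struct driver {
    const char *name;
    int (*enumerate_children)(struct device *dev); /* detect children, insert them into
                                                       the tree, start their drivers */
    int (*suspend)(struct device *dev);
    int (*resume)(struct device *dev);
};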


Cheers,

Brendan