Hi,
jammmie999 wrote:Sorry to hijack, but is that the best way to keep track of time will the PIT keep timing that accurately, especially when disabling and re-enabling interrupts.
In general, you should never need to disable IRQs for more than about 10 instructions anyway. The only case where this isn't really possible is during software task switches, where it might be as bad as 200 instructions. For a 33 MHz CPU (assuming 3 cycles per instruction) that might work out to a worst case of about 19 us of "jitter" caused by extra IRQ latency (caused by disabling IRQs).
For timing, there's 2 different things - keeping track of real time, and measuring durations (e.g. how long until a sleeping task should wake up, how long until the currently running task has used all the time it was given, how long until a network connection should timeout, etc).
For keeping track of real time, you don't want any IRQ to be involved at all. Instead you want to read a counter (like the CPU's TSC, the HPET main counter or the ACPI counter); and then have something like NTP (Network Time Protocol), and/or maybe the RTC's update IRQ to keep it synchronised. The important thing is getting good precision (e.g. nanoseconds rather than milliseconds) without the overhead of thousands of pointless IRQs constantly interrupting the CPU.
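For example, a minimal sketch of the "read a counter" approach using the TSC might look like this (assuming an invariant TSC, and assuming tsc_at_boot, tsc_hz and the initial "nanoseconds since 1970" value get set by your own boot code - those names are just placeholders):

#include <stdint.h>

static uint64_t tsc_at_boot;            /* TSC value sampled during boot */
static uint64_t tsc_hz;                 /* TSC frequency, calibrated during boot */
static uint64_t ns_since_1970_at_boot;  /* from the RTC (or NTP), once, during boot */

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* Current real time - no IRQs involved at all */
uint64_t get_ns_since_1970(void)
{
    uint64_t elapsed_ticks = rdtsc() - tsc_at_boot;

    /* Note: a real OS would use 128-bit maths (or a pre-computed
       "nanoseconds per tick" fixed point value) so this multiplication
       can't overflow */
    return ns_since_1970_at_boot + (elapsed_ticks * 1000000000ULL) / tsc_hz;
}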
For measuring durations, you want dynamically programmed delay/s. For example, if the next sleeping task to wake up is meant to wake up in 12345 us, then you'd set the PIT or local APIC timer or HPET to generate an IRQ in 12345 us (or as close to that as you can get). For a 1000 Hz PIT, you'd have 12 pointless IRQs followed by an IRQ that's about 655 us too late because you had to round 12345 us up to a whole number of ticks. Basically you want to use the local APIC timer, HPET or the PIT, and you want it in "one shot" mode and not "fixed frequency" mode.
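As a rough sketch (assuming outb() is your usual "outb(port, value)" port I/O helper, and assuming the delay fits in the PIT's 16-bit count, i.e. is less than about 54.9 ms), arming the PIT in "one shot" mode (mode 0) might look like this:

#include <stdint.h>

#define PIT_CH0_DATA  0x40
#define PIT_COMMAND   0x43

void pit_arm_one_shot(uint32_t delay_us)
{
    /* The PIT runs at 1193182 Hz, so one tick is about 838 ns */
    uint32_t count = ((uint64_t)delay_us * 1193182ULL) / 1000000ULL;

    if (count > 0xFFFF) count = 0xFFFF;   /* clamp to the 16-bit maximum (~54.9 ms) */
    if (count == 0) count = 1;

    outb(PIT_COMMAND, 0x30);                 /* channel 0, lobyte/hibyte access, mode 0 */
    outb(PIT_CH0_DATA, count & 0xFF);        /* low byte of the count */
    outb(PIT_CH0_DATA, (count >> 8) & 0xFF); /* high byte; counting starts here */
}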
Note: For very accurate delays (e.g. for device drivers, etc), you can set the timer's IRQ to occur just before the delay expires and then poll something like the TSC until the exact time the delay should expire. This approach can give you a "nano_sleep()" that is accurate to within 1 ns on modern hardware. In this case, the more accurate the timer's IRQ is, the less CPU time you waste polling the TSC (e.g. you don't want to poll for up to 1 ms when you could poll for up to 1 us instead).
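A rough sketch of that idea, where set_timer_irq_in_ns() and block_until_timer_irq() are hypothetical wrappers around whichever timer you ended up using, and get_ns_since_1970() is the counter-based time function from above:

/* Slack so the timer IRQ arrives a little before the delay expires;
   tune this to match how accurate your timer's IRQ actually is */
#define TIMER_SLACK_NS  50000ULL

void nano_sleep(uint64_t delay_ns)
{
    uint64_t expiry = get_ns_since_1970() + delay_ns;

    /* Sleep through most of the delay using the timer IRQ */
    if (delay_ns > TIMER_SLACK_NS) {
        set_timer_irq_in_ns(delay_ns - TIMER_SLACK_NS);
        block_until_timer_irq();
    }

    /* Poll the counter (e.g. TSC) for the last little bit */
    while (get_ns_since_1970() < expiry) {
        __asm__ __volatile__("pause");
    }
}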
Now; for modern computers you're likely to have a usable TSC, plus local APIC timer and HPET; so getting close to 1 ns precision for everything (with no pointless IRQs/overhead) should be reasonably easy.
For old hardware (if you've only got the PIT and RTC to work with), you can set the PIT to "one shot" mode and get 838 ns precision for delays, and read the PIT's counter to get the current time (where "current time = time when the count was last set + (count that was set - current count) * 838 ns") with 838 ns precision. The problem with this is that there's a lot of extra overhead reading/writing IO ports; especially for reading the count in the default "low byte then high byte" mode, because if you read at the wrong time the low byte can wrap (e.g. so instead of reading 0x1200 or 0x11FF for the count you actually read 0x1100 - the low byte from before the wrap combined with the high byte from after it), and to avoid that you need to send a "latch" command first (so reading the count becomes an IO port write followed by 2 IO port reads).
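A minimal sketch of reading the count safely (again assuming inb()/outb() are your usual port I/O helpers, and assuming the caller deals with locking/IRQ disabling around the port accesses):

#include <stdint.h>

#define PIT_CH0_DATA  0x40
#define PIT_COMMAND   0x43

uint16_t pit_read_count(void)
{
    outb(PIT_COMMAND, 0x00);           /* "latch count" command for channel 0 */
    uint8_t lo = inb(PIT_CH0_DATA);    /* low byte of the latched count */
    uint8_t hi = inb(PIT_CH0_DATA);    /* high byte of the latched count */
    return ((uint16_t)hi << 8) | lo;
}

/* "current time = time when the count was last set
                   + (count that was set - current count) * 838 ns" */
uint64_t pit_current_time_ns(uint64_t time_count_was_set_ns, uint16_t count_that_was_set)
{
    uint16_t elapsed_ticks = count_that_was_set - pit_read_count();
    return time_count_was_set_ns + (uint64_t)elapsed_ticks * 838ULL;
}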
To reduce the overhead of setting and getting the count, you can set the PIT to "low byte only" mode or "high byte only" mode. For "low byte only" mode the maximum delay would be 256 * 838 ns = about 215 us, which is too little (e.g. a 1234 us delay would become five ~215 us delays followed by a ~161 us delay, and you'd have to set the count and send EOI to the PIC chip 6 times). For "high byte only" mode the maximum delay is about 55 ms (much better) and you'd get about 215 us precision out of it; which is a more reasonable compromise between precision and overhead (especially for old hardware that doesn't have better timers).
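A rough sketch of the "high byte only" approach (one command write during initialisation, then a single I/O port write to arm each delay; outb() assumed as before):

#include <stdint.h>

#define PIT_CH0_DATA  0x40
#define PIT_COMMAND   0x43

void pit_init_high_byte_only(void)
{
    outb(PIT_COMMAND, 0x20);   /* channel 0, "high byte only" access, mode 0 */
}

void pit_arm_one_shot_coarse(uint32_t delay_us)
{
    /* Each unit of the high byte is 256 ticks = 256 * 838 ns, about 215 us */
    uint32_t high = ((uint64_t)delay_us * 1193182ULL) / (256ULL * 1000000ULL);

    if (high > 0xFF) high = 0xFF;   /* clamp to about 54.7 ms */
    if (high == 0) high = 1;

    outb(PIT_CH0_DATA, high);       /* one write arms the next delay */
}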
Basically what I'm saying here is that a good OS wouldn't just set the PIT to 1000 Hz and use that for everything; but would determine which timers are available and use what it can get in ways that improve precision and reduce overhead.
jammmie999 wrote:I ask the RTC for the time each time the PIT interrupt fires. This doesn't seem to slow anything down, and it would only take a couple of nano seconds to read from the RTC?. Or would it be better to artificially increment time using the PIT?
You should only read the time and date from the RTC once during boot and keep track of the time yourself after that. For example, (for a bad/simple OS) during boot you might read the RTC's time and date and use it to set a "nanoseconds since 1970, UTC" variable, then after that you might use the RTC's "update IRQ" to add 1000000000 to that "nanoseconds since 1970, UTC" variable each second. Of course if you were doing that it'd be trivial to use the RTC's "periodic IRQ" instead, and add 250000000 to your "nanoseconds since" variable four times per second, or add 1953125 to it 512 times per second.
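A minimal sketch of that "bad/simple OS" approach, where rtc_read_ns_since_1970() is a hypothetical helper that reads the RTC's date/time and converts it:

#include <stdint.h>

static volatile uint64_t ns_since_1970;   /* "nanoseconds since 1970, UTC" */

void time_init_during_boot(void)
{
    ns_since_1970 = rtc_read_ns_since_1970();   /* read the RTC once, during boot */
}

/* RTC update IRQ - fires once per second */
void rtc_update_irq_handler(void)
{
    ns_since_1970 += 1000000000ULL;
}

/* Or, using the RTC's periodic IRQ at 512 Hz instead */
void rtc_periodic_irq_handler(void)
{
    ns_since_1970 += 1953125ULL;   /* 1000000000 / 512 */
}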
More importantly, later on you'd be able to replace that variable with something much more precise, like the HPET main counter (e.g. "nanoseconds since 1970 = nanoseconds_since_1970_from_RTC_at_boot + (HPET_current_count - HPET_count_at_boot) * 1000000000 / HPET_frequency"); without causing problems for any code that asks the OS for the current time, and without forcing applications to have some sort of awkward scaling factor to compensate for poor design.
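A minimal sketch of that replacement - the same get_ns_since_1970() interface as before, just backed by the HPET instead of a per-second variable (assuming hpet_read_main_counter() and hpet_ticks_per_second come from your own HPET driver, set up during boot):

#include <stdint.h>

static uint64_t ns_since_1970_from_RTC_at_boot;
static uint64_t HPET_count_at_boot;
static uint64_t hpet_ticks_per_second;   /* derived from the HPET's counter period */

uint64_t get_ns_since_1970(void)
{
    uint64_t elapsed = hpet_read_main_counter() - HPET_count_at_boot;

    /* Again, a real OS would use 128-bit maths (or pre-computed fixed point)
       so the multiplication can't overflow */
    return ns_since_1970_from_RTC_at_boot +
           (elapsed * 1000000000ULL) / hpet_ticks_per_second;
}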
The only other case where it's OK to read the time and date from the RTC is when the computer is coming out of a deep power saving state (e.g. where almost everything except RAM was turned off) and you've lost track of time because your normal timer IRQ/s were disabled to save power.
Cheers,
Brendan