
RTC and microseconds

Posted: Fri Aug 28, 2009 4:36 am
by finarfin
Hi,
I'm trying to read the time from the RTC (using Linux as the guest OS). With gettimeofday() I can also read microseconds, so now I'm wondering if I can do the same using the RTC.

Is it possible, or no?

Thank you

P.S. Sorry for my English :D

Re: RTC and microseconds

Posted: Fri Aug 28, 2009 6:56 am
by Brendan
Hi,
finarfin wrote:I'm trying to read the time from the RTC (using Linux as the guest OS). With gettimeofday() I can also read microseconds, so now I'm wondering if I can do the same using the RTC.

Is it possible, or no?
Mostly no.

If you read the RTC time and date fields, the best you can get is seconds. The RTC can also be configured to generate a "periodic interrupt", which can (in theory) be set up for any "power of 2 interrupts per second" between 2 Hz and 8192 Hz (e.g. every 500 ms, every 250 ms, every 125 ms, ..., every 244.140625 us, every 122.0703125 us). In practice some chipsets can't handle the higher frequencies - 1024 Hz should work on all chipsets, but going faster than that decreases the chance that it'll work on all chipsets, and I wouldn't recommend using 4096 Hz or 8192 Hz.
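The rate selection works via the low 4 bits of the RTC's Status Register A (the divider chain runs at 32768 Hz). A small sketch of the rate-to-frequency math - the function name is mine, and I've ignored the atypical rates below 3:

```c
#include <stdint.h>

/* The RTC's periodic interrupt frequency is selected by the low 4 bits
 * ("rate") of Status Register A.  For rates 3..15 the output frequency
 * is 32768 >> (rate - 1):
 *   rate  3 -> 8192 Hz,  rate  6 -> 1024 Hz,  rate 15 -> 2 Hz.
 * A real driver would write this rate via ports 0x70/0x71, set bit 6 of
 * Status Register B to enable the IRQ, and read Status Register C in the
 * IRQ8 handler so further interrupts keep arriving. */
static uint32_t rtc_rate_to_hz(uint8_t rate)
{
    if (rate < 3 || rate > 15)
        return 0;               /* treat out-of-range rates as "off" here */
    return 32768u >> (rate - 1);
}
```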

Note: Part of the problem is that accessing I/O ports for old ISA devices can be very slow, and for the RTC periodic interrupt you need to access 2 of these slow I/O ports for every IRQ.

For more IRQs per second you could use the PIT (although I wouldn't use that for more than about 5000 IRQs per second), or the local APIC timer or HPET. However, even for a timer like the local APIC timer (which probably can handle the frequency you'd need), with 1 million IRQs per second (e.g. one every microsecond) you'd be looking at some serious overhead. For example, if it takes 500 cycles to handle the timer's IRQ on average, then a 1 GHz CPU would spend 50% of its time just handling the timer IRQs.
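That overhead estimate is just arithmetic; spelled out as a helper (my own function, purely illustrative):

```c
#include <stdint.h>

/* Percentage of CPU time consumed by a periodic timer IRQ: irq_hz
 * interrupts per second, each costing cycles_per_irq cycles, on a CPU
 * running at cpu_hz cycles per second. */
static uint64_t irq_overhead_percent(uint64_t irq_hz,
                                     uint64_t cycles_per_irq,
                                     uint64_t cpu_hz)
{
    return irq_hz * cycles_per_irq * 100 / cpu_hz;
}
```

For example, 1000000 IRQs/second at 500 cycles each on a 1 GHz CPU gives 50%, while a sane 1000 IRQs/second costs a negligible 0.05%.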

The only sane option is to not rely on an IRQ. There are a few ways this can be done.

You could configure a timer to generate a reasonable number of IRQs per second, then (when you need to) read the timer's remaining count. For example, you could set up the PIT chip to generate 1000 IRQs per second and use these IRQs to keep track of (for example) milliseconds since boot, and when you need a more precise time you can read the PIT timer's count and do "time = ms_since_boot + (max_PIT_count - current_PIT_count) * time_per_PIT_tick". For the PIT, in theory this method can give you up to about 1 us accuracy. You can do the same with the local APIC timer and get closer to 1 ns accuracy. The problem here (especially for the PIT) is the overhead of reading the timer's remaining count - if you do this every time anyone calls "gettimeofday()" it's going to become a nightmare. There's also a problem with some chipsets where reading the remaining count causes drift. For local APIC timers this method is far more practical.
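The interpolation step might look like this in C (names and the nanosecond result are my choices; a real driver would also need to latch the PIT count with the counter-latch command and cope with the count wrapping mid-read):

```c
#include <stdint.h>

#define PIT_HZ 1193182ull       /* PIT input clock, ~1.193182 MHz */

/* Interpolated time in nanoseconds: a millisecond counter maintained by
 * the IRQ handler, refined with the PIT channel's current (downward
 * counting) count.  "reload" is the value the PIT was programmed with
 * (~1193 for 1000 IRQs per second). */
static uint64_t pit_time_ns(uint64_t ms_since_boot,
                            uint16_t reload, uint16_t current)
{
    uint64_t elapsed_ticks = reload - current;  /* ticks into this ms */
    return ms_since_boot * 1000000ull
         + elapsed_ticks * 1000000000ull / PIT_HZ;
}
```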

The other option is to use a counter rather than a timer. HPET is probably the best example here - find out how often its main counter register is incremented, then do "current_time = HPET_count * K + time_at_boot". Another alternative would be using the CPU's time stamp counter (e.g. RDTSC). This has the highest precision and least overhead, but it's also the most complex method to get right, because the time stamp counter can be affected by sleep states and power management, and the time stamp counters for different CPUs aren't kept in sync.
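The "count * K" conversion can be sketched as below. HPET reports its tick period in femtoseconds in its capabilities register; the 69841279 fs value in the test corresponds to a ~14.318 MHz HPET and is just an example:

```c
#include <stdint.h>

/* Convert a free-running counter to nanoseconds since the counter was
 * zeroed.  period_fs is the counter's tick period in femtoseconds
 * (1 ns = 1000000 fs); dividing last keeps precision, but the multiply
 * can overflow for very large counts - a real kernel would use 128-bit
 * intermediate math or a precomputed fixed-point multiplier. */
static uint64_t counter_to_ns(uint64_t count, uint64_t period_fs)
{
    return count * period_fs / 1000000ull;
}
```

Adding "time_at_boot" to the result (as in the formula above) turns this into wall-clock time.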

Also note that the local APIC timer, HPET and the CPU's time stamp counter are not supported on all computers.

Mostly what I'd suggest is having 3 different functions. The first function always returns the time in milliseconds (1 ms precision and 1 ms accuracy).

The second function returns an extremely precise representation of the current time with 1 ms or better accuracy - for example, it could return something like picoseconds, "1 / 2^64 ths" of a second, or "1 / 2^32 ths" of a millisecond, but the accuracy of the value returned may be a lot worse than its precision (e.g. if the best time source the kernel can use measures nanoseconds, then you can convert this to picoseconds and have picosecond precision with nanosecond accuracy). You'd also want a "get_high_precision_timer_accuracy()" function. During boot you'd detect which timers are present/supported by hardware, and you'd use the best timer that you can to implement this.
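One way to implement the "1 / 2^32 ths of a millisecond" representation is 32.32 fixed point - whole milliseconds in the high 32 bits, fraction in the low 32 bits. A sketch (my own function name; note the shift overflows for inputs of roughly 4.3 seconds or more, so a real kernel would split the value first):

```c
#include <stdint.h>

/* Convert nanoseconds to 32.32 fixed-point milliseconds, i.e.
 * "1 / 2^32 ths of a millisecond".  One unit is ~0.23 ps, so the
 * representation's precision comfortably exceeds any real time source's
 * accuracy.  (ns << 32) overflows for ns >= 2^32; split whole ms out
 * first for larger values. */
static uint64_t ns_to_fixed_ms(uint64_t ns)
{
    return (ns << 32) / 1000000ull;
}
```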

Finally, I'd have a third function which returns the time in milliseconds (1 ms precision and 1 ms accuracy) and a "number of times this function was called this millisecond" count. The reason for this is that sometimes you need a timestamp that is guaranteed to be unique but doesn't need to be very accurate. An example of this is file modification times, where some utilities need to know if a file is older or newer than another file but don't care exactly how much older or newer (e.g. "make" and backup utilities).
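A minimal sketch of that third function, assuming a millisecond counter updated by the timer IRQ - the packing (20 bits for the per-millisecond count) and all names are my own, and a real kernel would protect the statics with a lock or atomics:

```c
#include <stdint.h>

/* In a real kernel this would be incremented by the timer IRQ handler;
 * here it's a plain variable for illustration. */
static uint64_t ms_since_boot = 0;

static uint64_t last_ms = 0;
static uint32_t calls_this_ms = 0;

/* Returns a timestamp that is unique across calls: current milliseconds
 * in the high bits, plus a counter of calls made during that same
 * millisecond in the low 20 bits (so up to 2^20 calls/ms before the
 * guarantee breaks). */
static uint64_t unique_timestamp(void)
{
    if (ms_since_boot != last_ms) {
        last_ms = ms_since_boot;
        calls_this_ms = 0;
    }
    return (ms_since_boot << 20) | calls_this_ms++;
}
```

Two timestamps taken in the same millisecond still compare as older/newer correctly, which is all that utilities like "make" need.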


Cheers,

Brendan