Started reading the CMOS RTC in order to provide timestamps and noticed that I'm not tracking time very accurately (3~5 seconds slower per hour). I currently re-read CMOS on each hour mark, so the effect is quite visible: the clock jumps forward 3~5 seconds at the end of each hour.
I'm using the LAPIC timer; it has the ARAT bit, and after numerous attempts to calibrate, I'm pretty sure its native rate is 1 GHz, strange as that may be (it matches neither the CPU's clock rate nor the QPI bus speed).
The way I do calibration is:
1. Set up the PIT to run at 1 kHz (using 1193 as the divider) and set up the LAPIC timer at a rate low enough (e.g. divider 32, initial count 10000) that I can comfortably service interrupts from both; each interrupt handler increments its own counter.
2. Wait ~1 second, i.e. until the PIT counter reaches 1000.
3. CLI and use the ratio between the two counters to determine the LAPIC timer's rate.
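In C, the whole procedure is roughly this (a minimal sketch; the counter names are made up, and the two handlers are assumed to increment them):

Code:
#include <stdint.h>

volatile uint64_t pit_ticks;    /* incremented by the 1 kHz PIT handler */
volatile uint64_t lapic_ticks;  /* incremented by the LAPIC timer handler */

#define LAPIC_DIVIDER  32
#define LAPIC_INITIAL  10000
#define PIT_HZ         1000     /* 1193182 / 1193 is ~1000.15 Hz */

uint64_t calibrate_lapic_hz(void)
{
    pit_ticks = lapic_ticks = 0;
    asm volatile("sti");
    while (pit_ticks < 1000)            /* step 2: wait ~1 s */
        asm volatile("pause");
    asm volatile("cli");                /* step 3 */

    /* Each LAPIC interrupt stands for DIVIDER * INITIAL input clocks;
       each PIT tick stands for 1/PIT_HZ seconds. */
    return lapic_ticks * LAPIC_DIVIDER * LAPIC_INITIAL * PIT_HZ / pit_ticks;
}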
But this calibration seems too fragile:
Although the native rate is 1000000000, one missed interrupt during calibration knocks the measured rate down to 999680000, and one extra makes it 1000320000 (each LAPIC interrupt is worth divider × initial count = 32 × 10000 = 320000 ticks). And I can't run the interrupts much faster than this to reduce the weight of a single missed/extra interrupt.
It's also difficult to calibrate over a longer period: these counters can easily roll over if I simply read them later, and during boot there are many stretches (probably more going forward) where interrupts are disabled, so leaving the two timer interrupts running and hoping for the best doesn't seem like a good idea either.
The issue might also be:
Maybe I'm not tracking time correctly. I count ticks in the normal timer handler (which also drives many other events, like context switches or printing a clock in a corner of the screen at a lower frequency), and if this handler misses any ticks (for example due to CLI elsewhere), there's no good way to tell that ticks were missed.
Any suggestions to improve the calibration/tracking?
[solved] Calibrate timer/track time with better accuracy
Re: Calibrate timer/track time with better accuracy
According to the documentation, the LAPIC timer clock runs at a fixed ratio to the TSC clock, and you can enumerate that ratio with CPUID leaf 15h. So here's an idea: set up the PIT/HPET to fire an interrupt in 100 ms; read the TSC now and read the TSC again 100 ms from now. The difference between the two is how far the TSC ticks in 100 ms. Times ten and you have the TSC frequency. Now you only need to divide that by the value you get from CPUID 15h and you have the LAPIC timer frequency. The CPU barely gets involved, and afterwards you won't need the PIT or HPET at all anymore.
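A rough sketch of the measurement (a single 16-bit PIT countdown tops out around 55 ms, so this just counts 100 ticks of a 1 kHz PIT interrupt instead of a one-shot; rdtsc() and the tick counter are assumptions, and serialization is skipped since it's close enough for calibration):

Code:
#include <stdint.h>

extern volatile uint64_t pit_ticks;   /* 1 kHz tick counter from a PIT handler */

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

uint64_t measure_tsc_hz(void)
{
    uint64_t t = pit_ticks;
    while (pit_ticks == t) ;              /* align to a tick edge */
    uint64_t tsc0 = rdtsc();
    uint64_t start = pit_ticks;
    while (pit_ticks < start + 100) ;     /* wait ~100 ms (100 PIT ticks) */
    uint64_t tsc1 = rdtsc();
    return (tsc1 - tsc0) * 10;            /* ticks per 100 ms -> per second */
}

On CPUs where the LAPIC timer runs on the core crystal clock, the LAPIC timer frequency then falls out as tsc_hz * EAX / EBX from CPUID 15h (EBX/EAX being the TSC-to-crystal ratio).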
Carpe diem!
Re: Calibrate timer/track time with better accuracy
I don't think using interrupts is a good way to provide an exact timer. Instead, you should read the PIT (or better, the HPET if it's available) and add the difference from the last read. The APIC timer is better used as an interval timer (if it's available). It's also an issue to decide which time base to use. I use the PIT's 1.193 MHz frequency as the time base, but it might be better to use 1 GHz or similar.
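For the PIT that means latching and reading the channel 0 counter directly rather than counting its interrupts; a sketch (assumes channel 0 was programmed with lobyte/hibyte access):

Code:
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    asm volatile("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    asm volatile("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

uint16_t pit_read_count(void)
{
    outb(0x43, 0x00);                 /* latch the channel 0 count */
    uint8_t lo = inb(0x40);
    uint8_t hi = inb(0x40);
    return ((uint16_t)hi << 8) | lo;  /* counts down at 1.193182 MHz */
}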
To synchronize with real time is a bit complex. The CMOS has very poor resolution and it's not simple to synchronize with. Instead, what I do is read it on startup and set the initial time to the CMOS time. Then I can adjust real time from things like NTP, which has far better resolution. The system time, which is managed from the PIT or HPET, always increases and never jumps back and forth, which is a requirement for using it to measure time. I have a constant that I add to system time to get real time, and this constant can be changed by synchronization with CMOS, NTP or another time source.
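In code, the offset scheme is just this (illustrative names, units being whatever time base you picked):

Code:
#include <stdint.h>

static uint64_t system_time;      /* monotonic, advanced from PIT/HPET deltas */
static int64_t  realtime_offset;  /* real time minus system time */

uint64_t get_real_time(void)
{
    return system_time + (uint64_t)realtime_offset;
}

/* Called at boot with the CMOS time, or later with an NTP result;
   only the offset moves, system_time itself never jumps. */
void sync_real_time(uint64_t external_time)
{
    realtime_offset = (int64_t)(external_time - system_time);
}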
Re: Calibrate timer/track time with better accuracy
Thanks guys! Great ideas about reading a time counter in the system rather than only keeping count of interrupts!
It might be a long time before I can put the TSC ratio idea into (regular) practice though, as qemu doesn't seem to pass through a valid leaf 15h: leaf 0h claims that 15h is supported, but all registers of 15h read out as zeros.
Also noticed that the issue is more in my tracking than in the initial calibration. If a user space program that mostly just calls syscalls repeatedly (including some heavyweight ones) runs for the whole hour, the clock ends up 5 s slower. But if a user space program with a more 'normal' mix of syscalls and user space work (initializing and comparing large buffers as part of the file system test) runs for the whole hour, the clock actually tracks quite well (it usually ends up about 1 second faster than the host, probably due to the 1 lost interrupt during calibration making it think the timer is slower than it really is).
I guess the trick is to read some hardware counter in the normal timer handler, fast enough that it can't roll over more than once between two reads.
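Something along these lines, assuming the LAPIC current-count register (which counts down from the initial count and reloads) and a handler that runs at least once per reload period:

Code:
#include <stdint.h>

#define LAPIC_INITIAL 10000   /* value in the initial-count register */

static uint32_t last_count = LAPIC_INITIAL;
static uint64_t total_ticks;

/* Called from the timer handler with the LAPIC current-count register. */
void accumulate_ticks(uint32_t current)
{
    if (current <= last_count)
        total_ticks += last_count - current;                   /* no reload */
    else
        total_ticks += last_count + (LAPIC_INITIAL - current); /* reloaded once */
    last_count = current;
}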
Now on to the new quest of figuring out the rate of such a counter without CPUID 15h.
Re: Calibrate timer/track time with better accuracy
nullplan wrote:Set up the PIT/HPET to fire an interrupt in 100 ms; read the TSC now and read the TSC again 100 ms from now. The difference between the two is how far the TSC ticks in 100 ms. Times ten and you have the TSC frequency.
Thanks again for this idea!
After switching from interrupt counting to reading the TSC, I tried an overnight high-load run without the hourly sync with CMOS. It only drifted ~1 second slower compared to the host.
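In case it helps anyone, the tracking side now boils down to converting a TSC delta into time, something like this (names illustrative, tsc_hz from the calibration above):

Code:
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

extern uint64_t tsc_hz;     /* measured once at boot, e.g. by the 100 ms run */
static uint64_t boot_tsc;   /* rdtsc() sampled once at boot */

uint64_t uptime_ns(void)
{
    uint64_t delta = rdtsc() - boot_tsc;
    /* whole seconds and remainder separately, so the multiply can't overflow */
    return (delta / tsc_hz) * 1000000000ull
         + (delta % tsc_hz) * 1000000000ull / tsc_hz;
}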
So the problem is solved now.