Hi,
Dandee Yuyo wrote: If we set the PIT counter to 11931 (about 1193181 / 100) as in the egos example then we really tick 100.00684485793311541362836308775 times per second. If we count seconds with integers we have an error of 0.6844%.
If we set the PIT counter to 9861 (exactly 1193181 / 121), we tick 121.00006753878916945543048372376 times per second, with an error of 0.0067%
Am I wrong?
That depends on your code.
If your code expects 100 IRQs per second (or 121 IRQs per second) then there will be some error in your code. If your code expects 99.99846351547658956 IRQs per second (with divisor set to 11932) then there will still be an extremely small amount of error that can easily be completely ignored.
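(That figure is just the PIT's base frequency divided by the divisor: (3579545 / 3) / 11932 = 1193181.666... / 11932 ≈ 99.998464 IRQs per second.)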
The main problem is that you're using integers to represent real numbers. Why?
I use fixed point maths instead. The timer IRQ ends up doing something like:
Code:
mov eax,[periodLow]          ;eax = fractional ms added per IRQ (in units of 1/2^32 ms)
mov ebx,[periodHigh]         ;ebx = whole ms added per IRQ
add [timerFractions],eax     ;add to the fractional part of the current time
adc [timerTick],ebx          ;add to the whole milliseconds, plus any carry from the fractions
In this case:
Code:
periodHigh = int( 1000 * count / (3579545/3) )
periodLow = int( 2^32 * 1000 * count / (3579545/3) ) & 0xFFFFFFFF
For example, if the count is 11932 then "periodHigh = 10" (10 ms) and "periodLow = 0x000A11D5". If you do things like this, then your software might lose one second every million years.
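If it helps, here's a rough C sketch of the same idea (the names "set_period" and "timer_irq" are just mine, and doing the set-up maths with a 64-bit intermediate is only one way to do it):

Code:
#include <stdint.h>

/* 32.32 fixed point: whole milliseconds in the high 32 bits,
   fractions of a millisecond (units of 1/2^32 ms) in the low 32 bits. */
static uint32_t periodHigh;                 /* whole ms added per IRQ */
static uint32_t periodLow;                  /* fractional ms added per IRQ */
static volatile uint32_t timerTick;         /* whole ms of elapsed time */
static volatile uint32_t timerFractions;    /* fractional ms of elapsed time */

/* Work out periodHigh/periodLow for a given PIT divisor ("count").
   Multiply by 3 before dividing by 3579545 so nothing is lost to the
   non-integer base frequency of 3579545/3 Hz.                         */
void set_period(uint32_t count)
{
    uint64_t period = (((uint64_t)count * 1000u) << 32) * 3u / 3579545u;

    periodHigh = (uint32_t)(period >> 32);          /* 10 when count = 11932 */
    periodLow  = (uint32_t)(period & 0xFFFFFFFFu);  /* 0x000A11D5 when count = 11932 */
}

/* The timer IRQ's bookkeeping (the same as the assembly above). */
void timer_irq(void)
{
    uint64_t now = ((uint64_t)timerTick << 32) | timerFractions;

    now += ((uint64_t)periodHigh << 32) | periodLow;
    timerTick      = (uint32_t)(now >> 32);
    timerFractions = (uint32_t)now;
}

Calling "set_period(11932)" gives the "10 ms" and "0x000A11D5" values mentioned above.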
However, there's also error in the hardware itself - the PIT chip's base frequency is affected by the quality of the oscillator, heat, dust, etc. Unless you adjust for this, there's no point in being extremely accurate in software. For example, the calculations for "periodHigh" and "periodLow" might just be initial calculations, and the OS might adjust them based on feedback from a time server or some other (more accurate) source.
Of course accuracy isn't the same as precision. Imagine you're measuring how long something takes, and you do "time_taken = end_time - start_time". If an IRQ occurs every 10 ms, then you could set "start_time" just before an IRQ occurs and you could set "end_time" just after an IRQ occurs. In this case you could get "time_taken = 80 - 60 = 20 ms" when in reality it only took 10.1 ms. This is called quantization. Each individual reading can be stale by up to (but less than) the time between IRQs, so the result can be wrong by up to (but less than) one full period in either direction - a total uncertainty of double the time between IRQs. If there's an IRQ every 10.0 ms, the measurement can be off by up to (but less than) 10.0 ms either way.
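Here's a trivial C illustration of that quantization effect (the numbers match the example above; "tick_time" is just a stand-in for a clock that only advances once per IRQ):

Code:
#include <stdio.h>
#include <stdint.h>

/* A clock that only advances in 10 ms steps (one step per timer IRQ). */
static uint32_t tick_time(double real_time_ms)
{
    return (uint32_t)(real_time_ms / 10.0) * 10;
}

int main(void)
{
    double real_start = 69.9;               /* just before the IRQ at t = 70 ms */
    double real_end   = real_start + 10.1;  /* the work really took 10.1 ms */

    uint32_t start_time = tick_time(real_start);   /* reads 60 */
    uint32_t end_time   = tick_time(real_end);     /* reads 80 */

    printf("measured %u ms, actual %.1f ms\n",
           end_time - start_time, real_end - real_start);
    return 0;
}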
To improve precision you'd want to reduce the time between IRQs (or increase the timer frequency). Of course to improve performance you'd want to increase the time between IRQs (or reduce the timer frequency). The amount of overhead the timer causes depends on the timer frequency and the CPU speed. For example, a 1000 Hz timer might use 1% of CPU time on a 25 MHz 80486 (and you might want to use a slower timer frequency to reduce overhead), but a 1000 Hz timer might use 0.01% of CPU time on a 2.5 GHz CPU (and you might want to use a faster timer frequency to improve precision).
During boot, you might want to test CPU speed and then dynamically calculate the timer frequency to get the best compromise between precision and overhead for the computer (e.g. less overhead on slow computers and better precision on faster computers).
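A rough sketch of that kind of calculation might look like this - the cost-per-IRQ figure and the "0.1% of CPU time" target are numbers I've made up for illustration:

Code:
#include <stdint.h>

#define MIN_FREQUENCY     20       /* Hz - never slower than this            */
#define MAX_FREQUENCY     8000     /* Hz - never faster than this            */
#define CYCLES_PER_IRQ    2500     /* assumed cost of one timer IRQ          */
#define TARGET_OVERHEAD   1000     /* aim for 1/1000th of CPU time (0.1%)    */

/* Pick a timer frequency so that IRQ overhead is roughly a fixed fraction
   of CPU time: slower CPUs get less overhead, faster CPUs get better
   precision. E.g. a 25 MHz CPU -> 20 Hz (clamped), a 2.5 GHz CPU -> 1000 Hz. */
uint32_t choose_timer_frequency(uint64_t cpu_hz)
{
    uint64_t frequency = cpu_hz / (CYCLES_PER_IRQ * TARGET_OVERHEAD);

    if (frequency < MIN_FREQUENCY) frequency = MIN_FREQUENCY;
    if (frequency > MAX_FREQUENCY) frequency = MAX_FREQUENCY;
    return (uint32_t)frequency;
}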
Of course CPUs don't run at a fixed speed anymore - if you do dynamically calculate the timer frequency during boot, then you might want to recalculate the timer frequency when CPU speeds change (e.g. when a CPU gets hot and starts using thermal throttling).
Changing the timer frequency can be done accurately if it's done right. To avoid problems caused by the quantization effect, you need to adjust the timer frequency when a timer IRQ occurs. For example:
Code:
mov eax,[periodLow]          ;add this tick's period to the current time,
mov ebx,[periodHigh]         ; exactly as before
add [timerFractions],eax
adc [timerTick],ebx
mov ax,0                     ;ax = 0 ("no new divisor requested")
xchg [newCount],ax           ;atomically fetch any requested divisor and clear the request
test ax,ax                   ;was a new divisor requested?
je .done                     ; no - nothing else to do
call changeTimer             ;Change the timer divisor, and recalculate periodLow and periodHigh
.done:
In this case, if the kernel wants to change the timer frequency it just sets "newCount" and the change will be postponed until it can be done accurately.
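For completeness, here's a rough C sketch of both halves of that hand-off ("request_timer_frequency", "outb" and "set_period" are my own names; the PIT command byte 0x34 selects channel 0, lobyte/hibyte access, mode 2):

Code:
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(value), "Nd"(port));
}

extern void set_period(uint32_t count);   /* recalculates periodHigh/periodLow (see earlier sketch) */

static volatile uint16_t newCount = 0;    /* 0 = no change requested */

/* Called from anywhere in the kernel: only requests the change.
   (The IRQ handler's "xchg" fetches and clears this atomically.) */
void request_timer_frequency(uint16_t count)
{
    newCount = count;
}

/* Called from the timer IRQ handler (the "call changeTimer" above), so the
   new divisor always takes effect on a tick boundary. */
void changeTimer(uint16_t count)
{
    outb(0x43, 0x34);                     /* PIT channel 0, lobyte/hibyte, mode 2 (rate generator) */
    outb(0x40, (uint8_t)(count & 0xFF));  /* low byte of the new divisor */
    outb(0x40, (uint8_t)(count >> 8));    /* high byte of the new divisor */
    set_period(count);                    /* recalculate periodHigh and periodLow */
}

Because the reprogramming happens inside the IRQ handler, the period added on each tick always matches the divisor that produced that tick.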
Cheers,
Brendan