I've set the PIT up in the same way as described on the Programmable_Interval_Timer wiki page. It seems to be losing about a second every 2 minutes. I've heard the PIT drifts, but this seems far too fast. Is this normal?
Initially I thought it was how I'm tracking seconds, but after trying a few methods I'm not sure anymore. I calculate the timer frequency and period as follows, and then either accumulate the ticks passed and divide by the frequency, or accumulate the milliseconds passed. Both numbers are kept in 32.32 fixed-point representation for accuracy.
static const int FREQUENCY = 1193182; // Really, it is 1193181.6666... Hz

void PIT::Initialize(int frequency)
{
    uint32_t divisor = (frequency > 0) ? FREQUENCY / frequency : 0;

    // Valid range for divisor is 16 bits (0 is interpreted as 65536)
    if (divisor > 0xFFFF) divisor = 0;   // Cap at 18.2 Hz
    else if (divisor < 1) divisor = 1;   // Cap at 1193182 Hz

    io_out_8(PIT_COMMAND, PIT_INIT_TIMER);
    io_out_8(PIT_CHANNEL0, divisor & 0xFF);
    io_out_8(PIT_CHANNEL0, divisor >> 8);

    m_divisor = divisor ? divisor : 0x10000;
}
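For illustration, the 32.32 fixed-point millisecond bookkeeping described above might look like this (a minimal sketch; `ms_per_tick_fp` and the accumulator are illustrative names, not taken from the actual code):

```cpp
#include <cstdint>

static const uint64_t PIT_HZ = 1193182; // nominal PIT input clock

// Milliseconds per tick for a given divisor, in 32.32 fixed point
// (upper 32 bits: whole milliseconds, lower 32 bits: fraction).
// divisor * 1000 << 32 fits in 64 bits for any divisor <= 65536.
uint64_t ms_per_tick_fp(uint32_t divisor)
{
    return (uint64_t(divisor) * 1000u << 32) / PIT_HZ;
}

// The tick handler accumulates; whole milliseconds elapsed are acc >> 32.
uint64_t g_acc_ms_fp = 0;
void on_tick(uint64_t ms_per_tick) { g_acc_ms_fp += ms_per_tick; }
```

With a divisor of 1193 (~1000 Hz), each tick is about 0.99985 ms, so after 1000 ticks the whole-millisecond part is 999, not 1000; the fractional bits are what keep that sub-millisecond remainder from being lost.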
What frequency are you using?
I'm doing the same thing to check for reload_value overflow. That, and I've been using frequencies far from the limits: I've tried 1000, 5000, and 50000 Hz. The drift seems to get slightly worse at higher frequencies, which is why I initially thought it was an accuracy issue.
Last edited by 23scurtu on Thu Jan 21, 2021 8:06 pm, edited 1 time in total.
You cannot do it like that even on real hardware. There will always be interrupt latency, which varies with how long you disable interrupts (among other things), and this will cause lost ticks. A better method is to keep real time with a free-running clock that you read regularly, and then implement timers with another mechanism. Combining the two in one timer results in poor precision and drift.
Mmm, if you are using the timer to track time, you shouldn't be using interrupts to reset the timer on each cycle. If you do, you will indeed lose time because of interrupt latency.
What I do is use the PIT in "rate generator" mode, meaning that it will automatically reset the count over and over. When I want to know the time, I read the counter.
Of course, if you don't read the counter at least once per cycle (which is very short with the PIT), you will start losing time again. Having an interrupt here can help to detect this (but is no guarantee).
This works better with other timers like the ACPI PM one where the cycle is longer.
Last edited by kzinti on Thu Jan 21, 2021 9:22 pm, edited 1 time in total.
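As a concrete sketch of the latch-and-read sequence described here (the port-I/O helpers below are user-space stubs that return a canned count of 1193 so the snippet is self-contained; a real kernel would issue actual in/out instructions in ring 0):

```cpp
#include <cstdint>

// Stubs standing in for real port I/O; a kernel would use inb/outb here.
static uint8_t fake_latch[2] = {0xA9, 0x04};   // canned count 0x04A9 = 1193
static void io_out_8(uint16_t, uint8_t) {}
static uint8_t io_in_8(uint16_t)
{
    static int i = 0;
    return fake_latch[i++ & 1];
}

// Latch channel 0's current count, then read it low byte first.
uint16_t pit_read_count()
{
    io_out_8(0x43, 0x00);            // counter latch command, channel 0
    uint8_t lo = io_in_8(0x40);
    uint8_t hi = io_in_8(0x40);
    return uint16_t(hi) << 8 | lo;
}
```

In rate generator mode the counter counts down from the reload value, so (reload - count) gives the number of ticks elapsed within the current cycle.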
kzinti wrote:Mmm, if you are using the timer to track time, you shouldn't be using interrupts to reset the timer on each cycle. If you do, you will indeed lose time because of interrupt latency.
What I do is use the PIT in "rate generator" mode, meaning that it will automatically reset the count over and over. When I want to know the time, I read the counter.
Of course, if you don't read the counter at least once per cycle (which is very short with the PIT), you will start losing time again. Having an interrupt here can help to detect this (but is no guarantee).
This works better with other timers like the ACPI PM one where the cycle is longer.
rdos wrote:You cannot do it like that even on real hardware. There will always be interrupt latency which differs based on how much you disable interrupts (among other things), and this will cause loss of tics. A better method is to keep real-time with a free-running clock that you read regularly and then implement timers with another method. Combining these results in poor precision and drift.
I'm not resetting the timer on each cycle; I'm using the PIT in "rate generator" mode as well (set by calling outb(0x43, 0x34)). Losing ticks due to disabling interrupts makes sense, and could explain why really high frequencies cause more drift. I guess I'll have to keep the PIT frequency low (around 100 Hz) and, as kzinti mentioned, read the PIT counter after every interrupt (or whenever requested) to keep track of time. Although, I've heard that reading I/O ports can be slow; couldn't this cause a missed tick if I take longer than a tick period to read the PIT? Eventually I will move to the HPET for timekeeping, but I wanted to implement and understand the PIT first.
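The bookkeeping this implies (count interrupts for whole reload cycles, latch the counter for the sub-cycle part) is pure arithmetic; a sketch with illustrative names, not anyone's actual code:

```cpp
#include <cstdint>

// Total elapsed PIT input-clock cycles, given the number of completed
// reload periods (counted by the IRQ handler), the reload value, and a
// freshly latched count. The counter counts down from 'reload'.
uint64_t pit_elapsed_cycles(uint64_t irqs, uint32_t reload, uint16_t count)
{
    return irqs * reload + (reload - count);
}

// Convert PIT cycles (nominal 1193182 Hz) to nanoseconds.
// Note: the multiply overflows uint64_t after roughly 4.3 hours of
// cycles; a real kernel should widen to 128 bits or split the
// conversion into whole and fractional seconds.
uint64_t pit_cycles_to_ns(uint64_t cycles)
{
    return cycles * 1000000000ull / 1193182ull;
}
```

For example, with a reload value of 1193, two interrupts seen, and a latched count of 593, the elapsed time is 2 * 1193 + 600 = 2986 input-clock cycles.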
kzinti wrote:I/O port access used to be very slow, I don't know that this is true anymore, especially not on an emulator...
Port I/O enforces stronger serialization than uncached memory (MMIO), so yes, it's still true outside of emulators. (Inside emulators, port I/O and MMIO will probably be the same speed.)
I'd like to see if the PIT drifts so much with actual hardware instead of QEMU's emulation.
Accurate timekeeping seems out of the question in an emulator like Bochs or QEMU's pure-emulation (TCG) mode, as they are themselves user-space apps, and we all know how much control those have over their own time on the CPU.
The results are better under a VMM/hypervisor (for QEMU, that is KVM), from what I saw.
But even with KVM, QEMU doesn't pass through the TSC/core-crystal clock ratio in CPUID leaf 0x15, or the invariant TSC bit in CPUID leaf 0x80000007. So the excellent method outlined by nullplan in my earlier question (use the PIT or HPET to calibrate the TSC, and then each core can do its own accurate timekeeping from that) is not guaranteed to work. To be sure, I used some live CDs to read these bits as well and got the same results.
There were old discussions about TSC patch sets for QEMU from years ago, but I'm not sure they ever went in.
Perhaps someone more familiar with VMX could point out whether it is by design that a VMM/hypervisor doesn't have complete control over the TSC on x86 CPUs?
Or, if anyone gets their QEMU to show the TSC-related bits, please share the settings/options.
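For what it's worth, the bit in question can be checked from software with GCC/Clang's <cpuid.h> helper; a sketch (x86-only, and `has_invariant_tsc` is an illustrative name):

```cpp
#include <cpuid.h>   // GCC/Clang intrinsic wrapper; x86 only

// Returns true if CPUID reports the invariant TSC flag
// (CPUID.80000007H:EDX bit 8), i.e. the TSC ticks at a constant rate
// regardless of power-state transitions.
bool has_invariant_tsc()
{
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    // Confirm the extended leaf exists before querying it.
    if (!__get_cpuid(0x80000000u, &eax, &ebx, &ecx, &edx) || eax < 0x80000007u)
        return false;
    __get_cpuid(0x80000007u, &eax, &ebx, &ecx, &edx);
    return (edx >> 8) & 1;
}
```

Under QEMU this returns false unless the guest CPU model actually exposes the bit.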
xeyes wrote:Accurate timekeeping seems out of the question in an emulator like Bochs or QEMU's pure-emulation (TCG) mode, as they are themselves user-space apps, and we all know how much control those have over their own time on the CPU.
The results are better under a VMM/hypervisor (for QEMU, that is KVM), from what I saw.
But even with KVM, QEMU doesn't pass through the TSC/core-crystal clock ratio in CPUID leaf 0x15, or the invariant TSC bit in CPUID leaf 0x80000007. So the excellent method outlined by nullplan in my earlier question (use the PIT or HPET to calibrate the TSC, and then each core can do its own accurate timekeeping from that) is not guaranteed to work. To be sure, I used some live CDs to read these bits as well and got the same results.
There were old discussions about TSC patch sets for QEMU from years ago, but I'm not sure they ever went in.
Perhaps someone more familiar with VMX could point out whether it is by design that a VMM/hypervisor doesn't have complete control over the TSC on x86 CPUs?
Or, if anyone gets their QEMU to show the TSC-related bits, please share the settings/options.
Answering my own question: below is the option to turn on the invariant TSC bit in QEMU.
-cpu host,migratable=no,+invtsc
Shows the importance of adhering to our motto of "RTFM"