LAPIC timer interrupts going slower and faster with load
Posted: Sun Jan 26, 2025 7:12 am
I haven't posted here in a long time - nice to see that so many of you are still around.
I've been working a lot on my system recently and made an interesting observation on x86 with the LAPIC timer. I've programmed it to trigger a timer interrupt for preemptive multitasking every 1 ms. To calibrate it, I use the PIT to sleep for 10 ms and then calculate the settings for the periodic timer from the LAPIC ticks counted in that interval. This works fine so far, under usual conditions.
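For context, here's a simplified sketch of what my calibration does. pit_sleep_ms, lapic_read/lapic_write and the vector number are placeholders for my own helpers; the register offsets are the standard xAPIC ones from the Intel SDM:

Code:
#include <stdint.h>

#define LAPIC_TIMER_DIV     0x3E0           // Divide Configuration Register
#define LAPIC_TIMER_INIT    0x380           // Initial Count Register
#define LAPIC_TIMER_CUR     0x390           // Current Count Register
#define LAPIC_LVT_TIMER     0x320           // LVT Timer Register
#define LVT_TIMER_PERIODIC  (1u << 17)
#define TIMER_VECTOR        0x40            // placeholder vector choice

extern void     pit_sleep_ms(uint32_t ms);              // assumed helper
extern void     lapic_write(uint32_t reg, uint32_t v);  // assumed helper
extern uint32_t lapic_read(uint32_t reg);               // assumed helper

void lapic_timer_calibrate(void)
{
    lapic_write(LAPIC_TIMER_DIV, 0x3);           // divide by 16
    lapic_write(LAPIC_TIMER_INIT, 0xFFFFFFFFu);  // start counting down from max

    pit_sleep_ms(10);                            // PIT as the reference clock

    uint32_t elapsed      = 0xFFFFFFFFu - lapic_read(LAPIC_TIMER_CUR);
    uint32_t ticks_per_ms = elapsed / 10;

    // Periodic mode: fires TIMER_VECTOR roughly every 1 ms from here on.
    lapic_write(LAPIC_LVT_TIMER, TIMER_VECTOR | LVT_TIMER_PERIODIC);
    lapic_write(LAPIC_TIMER_INIT, ticks_per_ms);
}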
Now I'm also using that interrupt to track time: every time it fires, I increment a per-processor millisecond counter - I know, not optimal. I've noticed that while a process puts the CPU under high load, time runs "slower", i.e. fewer of the timer interrupts get through. But once that process finishes, time runs "faster" for a moment, as if all the interrupts had queued up and were now being processed. I'm testing all of this in VirtualBox, by the way.
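The handler itself is basically just this (again simplified; smp_cpu_id, lapic_eoi and scheduler_preempt stand in for my own helpers). Since the counter is only incremented once per interrupt, it stays accurate only if no tick is ever lost:

Code:
#include <stdint.h>

#define MAX_CPUS 64

static volatile uint64_t cpu_millis[MAX_CPUS];  // one counter per CPU

extern uint32_t smp_cpu_id(void);        // assumed helper
extern void     lapic_eoi(void);         // assumed helper
extern void     scheduler_preempt(void); // assumed helper

void lapic_timer_handler(void)
{
    cpu_millis[smp_cpu_id()]++;   // 1 tick == 1 ms, per the calibration
    lapic_eoi();                  // acknowledge so the next tick can be delivered
    scheduler_preempt();          // preemptive multitasking hook
}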
My first suspicion was that under high load the CPU frequency might increase, and with it the APIC timer frequency, but wouldn't that make the interrupt fire more often rather than less? It also wouldn't explain the "queueing" effect.
Or could the reason rather be that interrupts are disabled for too long, causing the timer interrupts to queue up? What speaks against this is that the same effect occurs even when the process is just spinning in a loop and not really making any syscalls.
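If I understand the hardware right, the LAPIC holds at most one pending instance of a vector in the IRR while interrupts are masked, so further ticks for that vector should be dropped rather than queued - which would explain "slow" time, but not a long catch-up burst afterwards. A quick check for a stuck pending tick, reusing the helpers and TIMER_VECTOR from the calibration sketch above:

Code:
static int lapic_timer_pending(void)
{
    // IRR is 8 x 32-bit registers at offsets 0x200..0x270; one bit per vector.
    uint32_t reg = 0x200 + (TIMER_VECTOR / 32) * 0x10;
    return (lapic_read(reg) >> (TIMER_VECTOR % 32)) & 1;
}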
I'm a bit lost here and wonder if any of you have seen something similar. Do I need to recalibrate the APIC timer regularly? Is some power management mechanism interfering?
Best regards
Max