Brendan wrote:
And meaning that your broken code could attempt 1.193 million IRQs per second under the right conditions and screw things up (setting PIT count to 1 is never a good idea).

rdos wrote:
If somebody is stupid enough to start a new timer each tic, yes.

The problem isn't one thread running on one CPU that asks for time delays that are too close together. The problem is thousands of threads running in hundreds of processes on tens of CPUs, all wanting time delays that happen to result in "not enough time between IRQs". That leads to an IRQ being missed and the new count never being set, and thousands of threads locking up because they're waiting for something that will never happen; followed by a bug report for "unpredictable lock up under load" that's impossible to debug.
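For illustration, here's a minimal sketch (plain C with a hypothetical outb() helper, not code from either OS under discussion) of the usual defence when the PIT is used in one-shot mode: clamp the programmed count to a floor, so that no delay request, however short, can schedule the next IRQ closer than the worst-case time needed to handle the previous one and set the next count.

Code:
#include <stdint.h>

#define PIT_HZ        1193182ULL  /* nominal PIT input clock in Hz */
#define PIT_MIN_COUNT 64          /* assumed safety floor (~54 us); tune per machine */
#define PIT_MAX_COUNT 65535       /* longer delays need more than one IRQ */

/* Hypothetical port I/O helper; any OS has its own equivalent. */
extern void outb(uint16_t port, uint8_t value);

/* Program PIT channel 0 in mode 0 ("interrupt on terminal count") for a delay
 * given in nanoseconds; returns the count that was actually programmed. */
static uint32_t pit_set_oneshot_ns(uint64_t delay_ns)
{
    uint64_t count = (delay_ns * PIT_HZ) / 1000000000ULL;

    if (count < PIT_MIN_COUNT) count = PIT_MIN_COUNT;  /* never program "too soon" */
    if (count > PIT_MAX_COUNT) count = PIT_MAX_COUNT;

    outb(0x43, 0x30);                        /* channel 0, lobyte/hibyte, mode 0 */
    outb(0x40, (uint8_t)(count & 0xFF));
    outb(0x40, (uint8_t)(count >> 8));
    return (uint32_t)count;
}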
Brendan wrote:
In a good OS (possibly not your OS) IRQs are rarely disabled for more than about 100 cycles. On an old slow 500 MHz CPU that works out to about 200 ns of "jitter" caused by interrupt latency. The minimum precision of the PIT is about 838 ns which is 4 times higher than "jitter" caused by interrupt latency. Interrupt latency is negligible in a good OS (when running on CPU/s that were made in this century).

rdos wrote:
I'm pretty sure that interrupt latencies in RDOS on new CPUs is several orders lower than the PIT tic, which means that a sustained rate of 1.193 million PIT interrupts would be possible. Late PIT ISRs is only an issue on older CPUs.

A sustained rate of 1.193 million PIT interrupts per second should be theoretically possible. Unfortunately, everything I've read says that such high frequencies aren't sustainable in practice. I'm not sure if it works on some chipsets and not others, or works on none of them. If I was planning to attempt high frequencies with the PIT, I'd test it on a range of computers to determine the maximum frequency the computers I have can handle, and then halve it just in case.
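For reference, the arithmetic behind those figures: the PIT divides a nominal 1,193,182 Hz input clock by the programmed count, so a count of 1 corresponds to the ~1.193 million IRQs per second (and ~838 ns period) being argued about, and every achievable rate is a whole-number division of that clock. A standalone C sketch (nothing OS-specific) of how a requested rate maps to a count and to the rate you actually get:

Code:
#include <stdio.h>
#include <stdint.h>

#define PIT_HZ 1193182.0  /* nominal PIT input clock */

int main(void)
{
    double targets[] = { 100.0, 1000.0, 20000.0, 1193182.0 };

    for (int i = 0; i < 4; i++) {
        /* Round to the nearest divisor; on real hardware a count of 0 means 65536. */
        uint32_t count = (uint32_t)(PIT_HZ / targets[i] + 0.5);
        if (count < 1) count = 1;
        if (count > 65536) count = 65536;
        printf("want %10.0f Hz -> count %5u -> get %12.3f Hz (period %8.3f us)\n",
               targets[i], count, PIT_HZ / count, 1e6 * count / PIT_HZ);
    }
    return 0;
}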
Brendan wrote:
Unless you're using HPET's main counter or TSC (where there's no downside), there's no point increasing overhead (to increase effective precision, not the "storage precision") for real time when nobody cares about that extra precision anyway.

rdos wrote:
I could give you some examples when it is useful. If you log network traffic, you could also log the real-time marks with us precision. This is useful since you both want to see the real-time when things happened, and the time interval between packets. It is not important if real-time is accurate down to us, but it is important that the difference between packets is accurate down to us.

Sounds like you only need to read real time once for that (e.g. "Log started on 20/11/2011 at 12:34") and then use elapsed time for everything else. It'd be easier to calculate the difference between packets if you don't need to worry about different us/seconds/minutes wrapping around back to zero.
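A small sketch of that approach (shown with POSIX calls purely for illustration, not either OS's real API): read the wall clock exactly once for the log header, then timestamp every packet from a monotonic counter, so per-packet differences never involve calendar fields or clock steps.

Code:
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t mono_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);   /* elapsed time; never steps or wraps */
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    time_t now = time(NULL);
    uint64_t start = mono_ns(), prev = start;

    printf("Log started on %s", ctime(&now));     /* real time read once, here */

    for (int packet = 0; packet < 3; packet++) {  /* stand-in for "a packet arrived" */
        uint64_t t = mono_ns();
        printf("packet %d: +%llu us since start, +%llu us since previous\n",
               packet,
               (unsigned long long)((t - start) / 1000),
               (unsigned long long)((t - prev) / 1000));
        prev = t;
    }
    return 0;
}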
rdos wrote:
Another example is synchronization with NTP-servers, that have much better resolution than seconds or milliseconds. In order to take advantage of NTP, you need far better precision than milliseconds.

Wikipedia says "NTP can usually maintain time to within tens of milliseconds over the public Internet,[1] and can achieve 1 millisecond accuracy in local area networks under ideal conditions.". You'd need much better precision than milliseconds for fine granularity drift adjustment (e.g. the "drift adjustment in 0.000000195156391 ns increments" from the example code I posted earlier), but not for real time itself.
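To make that concrete, here's a rough sketch (my own C illustration, not the earlier example code) of why drift adjustment wants very fine granularity: elapsed/real time is advanced in fixed-point units, and an NTP-measured offset is spread across many ticks as a tiny signed per-tick correction instead of stepping the clock.

Code:
#include <stdint.h>

/* All values in 32.32 fixed point nanoseconds: high 32 bits are whole ns,
 * low 32 bits are fractions of a nanosecond. Names are this sketch's own. */
static volatile uint64_t current_time_fp;  /* time since some epoch       */
static uint64_t tick_length_fp;            /* nominal length of one tick  */
static int64_t  drift_per_tick_fp;         /* signed correction per tick  */

/* Called from the timer IRQ handler: advance by "tick length + correction". */
void timer_tick(void)
{
    current_time_fp += tick_length_fp + (uint64_t)drift_per_tick_fp;
}

/* Called after an NTP exchange: spread offset_ns over the next `ticks` ticks.
 * The smallest representable correction is 2^-32 ns per tick, so even
 * sub-nanosecond-per-tick slew rates fit. (Beware: the multiply overflows for
 * offsets beyond roughly +/-2 seconds; a real implementation would handle that.) */
void ntp_set_drift(int64_t offset_ns, uint64_t ticks)
{
    drift_per_tick_fp = (offset_ns * 4294967296LL) / (int64_t)ticks;  /* scale to 2^-32 ns */
}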
rdos wrote:
The only overhead that is needed to synchronize real time with elapsed time is two RTC ints per second. That would adjust for the drift, and doesn't cost any significant overhead even on a 386 processor.

Brendan wrote:
I'm talking about the overhead of keeping track of real time; and you're talking about the overhead of synchronising real time with elapsed time.

rdos wrote:
Because you suggested to keep track of real time with an ISR.

Here is how I read real time (no overhead or IRQs involved):

Code:
get_time    PROC far
        GetSystemTime
        add     eax,cs:time_diff
        adc     edx,cs:time_diff+4
        retf32
get_time    ENDP

Erm.
Here's the source code for the best kernel that anyone could ever possibly write:
Code:
someUnknownMacro
ret
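For completeness, the pattern rdos is describing (a wall-clock base plus an elapsed-time offset) looks roughly like this in C, with made-up names; every interesting cost and precision question lives inside whatever "read elapsed time since boot" turns out to do (RDTSC, HPET main counter, a PIT count behind a lock, ...), which is exactly the part the assembly above doesn't show.

Code:
#include <stdint.h>

/* Made-up names for illustration only. */
extern uint64_t read_elapsed_ns(void);          /* monotonic ns since boot */
extern volatile uint64_t real_time_at_boot_ns;  /* wall clock at boot, adjusted
                                                   elsewhere (e.g. by RTC/NTP) */

/* "Real time = wall clock at boot + elapsed time since boot": no IRQ is needed
 * at read time, but the call is only as cheap and precise as read_elapsed_ns(). */
uint64_t get_real_time_ns(void)
{
    return real_time_at_boot_ns + read_elapsed_ns();
}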
Cheers,
Brendan