Peter_Vigren wrote:
Curufir... The method you present here is intriguing... indeed. And you seem to have given it a lot of thought and research (e.g. the question about SpeedStep technology shows that)... which is impressive. Very impressive, I might add...
Wish I could say I came up with it in isolation, but most of the ideas came from reading articles on low-level hardware, realising that the principles they described could be used to solve this problem, and then trying to tie them together into some kind of cohesive strategy. Although I haven't seen anyone else doing things this way, I'd imagine someone has tried it before.
However, the downside is that the operating system will be limited to processors with the RDTSC instruction... Is it only Intel that implements it so far?
AFAIK it's supported on AMD processors from the K6 upwards (possibly the K5, but I've yet to get that confirmed). The Celeron etc. will need some more research on my part.
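
For what it's worth, the usual way to confirm RDTSC support at runtime is to check the TSC feature bit that CPUID reports. This is just a minimal sketch of that check, assuming a processor that supports CPUID in the first place and GCC-style inline assembly, not code from the OS being discussed:

Code:
    #include <stdint.h>

    /* Run CPUID leaf 1 and return the EDX feature flags. */
    static inline uint32_t cpuid_leaf1_edx(void)
    {
        uint32_t eax = 1, ebx, ecx, edx;
        __asm__ __volatile__("cpuid"
                             : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
        return edx;
    }

    /* CPUID.01h:EDX bit 4 is the TSC flag on both Intel and AMD parts. */
    int has_rdtsc(void)
    {
        return (cpuid_leaf1_edx() >> 4) & 1;
    }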
By the way... what will you use to switch tasks? Still IRQ0 or something else?
The APIC timer if there is one present; if not, the normal PIC/PIT combination will be used. This reduces the range of systems I'm coding for a bit further. Using the APIC timer chops out the delay from port operations on the PIC.
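
Detecting whether a local APIC is even there to provide that timer can be done the same way, via another CPUID feature bit. A rough sketch under the same assumptions (CPUID available, GCC-style inline asm); real code would also want to check the enable state in the IA32_APIC_BASE MSR before relying on it:

Code:
    #include <stdint.h>

    /* CPUID.01h:EDX bit 9 is the on-chip local APIC flag.  Sketch only;
       production code should also read the IA32_APIC_BASE MSR to verify
       the APIC has not been disabled. */
    int has_local_apic(void)
    {
        uint32_t eax = 1, ebx, ecx, edx;
        __asm__ __volatile__("cpuid"
                             : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
        return (edx >> 9) & 1;
    }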
Curufir wrote:
I have accurate readings of processor speed for all processors on the system (For free) (...)
Explain.
Calibration is performed at boot time to establish the TSC->seconds ratio and mark the base time. Since the TSC increments at the same speed as the processor, finding the processor speed in Hz is trivial once you know this ratio. Processor speed does vary a little over time, but not enough to introduce any huge inaccuracies; if it becomes a burden I'll recalibrate every few minutes or so against the RTC.
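
To illustrate the boot-time calibration idea (a sketch of the general technique, not Curufir's actual code): read the TSC, burn a delay of known length against some reference timer, read the TSC again, and divide. The reference_delay callback and delay_ms parameter below are placeholders for whatever fixed-length wait the boot code has available (PIT, RTC, ...):

Code:
    #include <stdint.h>

    /* Read the 64-bit Time Stamp Counter. */
    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Calibrate the TSC against a reference delay of known length. */
    uint64_t calibrate_tsc_hz(void (*reference_delay)(void), uint32_t delay_ms)
    {
        uint64_t start = read_tsc();
        reference_delay();               /* waits exactly delay_ms milliseconds */
        uint64_t end = read_tsc();

        /* ticks per millisecond, scaled to ticks per second == CPU Hz */
        return (end - start) * 1000ULL / delay_ms;
    }

Once that ratio is known, the current time is just the base time plus (current TSC - boot TSC) divided by the calibrated frequency, and the processor speed in Hz falls out of the same number for free.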
***
Now I know this seems like a "round the houses" way of approaching the problem when you consider modern processor speeds, but consider this:
On my little 800 MHz Celeron, Linux runs at around 500 timer interrupts a second.
Now let's assume they are just keeping a counter in memory (they don't, AFAIK, but this is just for example's sake).
A simple increment to a memory location takes around 3 cycles, so in the very best scenario I'd be losing about 1.5k cycles per second just to maintain a system clock, and that's without counting the extra loss from forced cache misses etc. It doesn't sound like much, and it isn't, but that's all time the processor could be spending on something else. Personally I think the saving is worth the complexity; others might not think so. (Bear in mind that what I'm talking about here for a uniprocessor is effectively losing 1 second's processing for every 6 days of runtime, so it's not the end of the world.)
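
Just to make that arithmetic explicit (the 3-cycles-per-increment figure is the post's assumption, not a measurement), a throwaway check:

Code:
    #include <stdio.h>

    /* Back-of-the-envelope check of the figures above. */
    int main(void)
    {
        const double cpu_hz         = 800e6;   /* 800 MHz Celeron             */
        const double ticks_per_sec  = 500.0;   /* timer interrupts per second */
        const double cycles_per_inc = 3.0;     /* assumed cost of inc [mem]   */

        double lost_per_sec = ticks_per_sec * cycles_per_inc;     /* 1500       */
        double secs_per_lost_cpu_sec = cpu_hz / lost_per_sec;     /* ~533,333 s */

        printf("cycles lost per second: %.0f\n", lost_per_sec);
        printf("days per lost CPU-second: %.1f\n",
               secs_per_lost_cpu_sec / 86400.0);                  /* ~6.2 days  */
        return 0;
    }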
Guess I'm just not a believer in the "worse is better" school of philosophy, which is why I'll probably still be coding the OS when I'm 60.