Compute time and day...

mystran

Re:Compute time and day...

Post by mystran »

Curufir wrote: Nope, what I'm saying is that you'd have to recalibrate the base time (Not the TSC -> Sec ratio) every time you come back from an "HLT" state. You already have to do this if you use the counter mechanism because the counter won't be updated whilst the processor is powered down, so there is no net loss.
HLT stops the processor, not the whole system, and the processor still responds to interrupts (coming back online) unless they are disabled,
so it will wake on the PIT interrupt and the counter update will happen normally. There's nothing extra to do.

HLT is basically the same thing as while(1); but consumes less power. The CPU just sits waiting for some interrupt to happen. Actually, you still need the PIT or some such interrupt to get you out of the HLT state once in a while to check whether some timer has completed, so you can't really save the PIT interrupts anyway...

The only difference between the HLT state and an endless nop-loop is that some processors stop their internal counter while in the HLT state, which means you either have to keep them 100% busy with a normal nop-loop or you lose any real benefit you get from the internal counter.
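
To make the comparison concrete, a minimal sketch of such an idle loop (GCC-style inline assembly; just an illustration, not anyone's actual kernel code):

    /* Equivalent to while(1); but the CPU sleeps until the next
       unmasked interrupt instead of burning power in a spin loop. */
    void idle_task(void)
    {
        for (;;)
            __asm__ volatile ("hlt");   /* resumes after each interrupt */
    }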

I'm happy to lose a few µsec on each PIT interrupt if my laptop's battery lasts twice as long. There are factors other than speed, you know...
Curufir

Re:Compute time and day...

Post by Curufir »

mystran wrote: HLT stops the processor, not the whole system, and the processor still responds to interrupts (coming back online) unless they are disabled,
so it will wake on the PIT interrupt and the counter update will happen normally. There's nothing extra to do.
...
Snip. I know what HLT does...
...
I'm happy to lose a few µsec on each PIT interrupt if my laptop's battery lasts twice as long. There are factors other than speed, you know...
I'm not talking about removing the idle process. There will still be an idle process and it will still use HLT; the difference between my idea of an idle process and yours appears to be that I am quite happy to mask off the timer completely and allow the processor to return from the HLT state solely via signals other than the timer.

So instead of the processor waking up on the timer interrupt (a couple hundred times a second for a normal OS) and returning to the idle process with nothing to do, thereby wasting battery life powering up/down, I halt the processor completely until an external interrupt (e.g. a keyboard IRQ) occurs, at which point it is woken up. At that point the basetime is reset and everything runs along as usual. So my scheme actually gives longer battery life, because the processor is only running when there is work for it to do and otherwise exists in a powered-down state.
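
A hypothetical sketch of that scheme (the PIC mask port 0x21 and the IRQ0 bit are the standard ones; the TSC-based rebasing and the helper names are illustrative, not Curufir's actual code):

    #include <stdint.h>

    static uint64_t basetime_tsc;       /* TSC value at the last rebase */

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;
        __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    void idle_until_event(void)
    {
        outb(0x21, inb(0x21) | 0x01);   /* mask IRQ0 (the PIT) on the master PIC */
        __asm__ volatile ("sti; hlt");  /* sleep until some other interrupt fires */
        basetime_tsc = read_tsc();      /* reset the basetime on wakeup */
        outb(0x21, inb(0x21) & ~0x01);  /* unmask IRQ0 again if it's wanted */
    }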
mystran

Re:Compute time and day...

Post by mystran »

Curufir wrote: So instead of the processor waking up on the timer interrupt (a couple hundred times a second for a normal OS) and returning to the idle process with nothing to do, thereby wasting battery life powering up/down, I halt the processor completely until an external interrupt (e.g. a keyboard IRQ) occurs, at which point it is woken up. At that point the basetime is reset and everything runs along as usual. So my scheme actually gives longer battery life, because the processor is only running when there is work for it to do and otherwise exists in a powered-down state.
Ok, this is fine then... except how do you handle things like a process calling sleep(42)? Do you enable the PIT temporarily to handle that?
Curufir

Re:Compute time and day...

Post by Curufir »

mystran wrote: Ok, this is fine then... except how do you handle things like a process calling sleep(42)? Do you enable the PIT temporarily to handle that?
Hehe, you've got me there; this is one of the problems I'm still working on :).

My initial thoughts on this basically followed these lines:
  • Sleeping tasks are arranged in a list ordered by the time stamp at which they are due to awaken.
  • If no process is capable of running, the idle task is called.
  • If there is a sleeping task (i.e. the list isn't empty) when the idle task is called, don't disable the PIT; instead load it with the required value so that it wakes the CPU in time to run the topmost sleeping task in the list (see the sketch below). If another interrupt wakes the CPU before the PIT, it doesn't matter.
  • If there is no sleeping task, disable the PIT before entering HLT mode.
  • Upon returning from an HLT state, reset the basetime for the CPU and unmask the PIT if required.
Now my problem lies in doing all that elegantly. I can do it in a fairly brutish fashion, but doing it in a good way is proving tricky. E.g. if there is no sleeping task, then however the CPU is woken up, the PIT must be unmasked, which isn't exactly a speedy operation and uses the ports.
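
For the third point, a rough sketch of the one-shot PIT load (mode 0 fires a single interrupt on terminal count; 'ticks' is 1..65535 in units of the PIT's 1193182Hz input clock; the outb helper is the usual one, not code from this thread):

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    /* One shot instead of periodic: channel 0 counts down once and
       raises IRQ0 on terminal count, timed for the earliest sleeper. */
    void pit_oneshot(uint16_t ticks)
    {
        outb(0x43, 0x30);           /* channel 0, lobyte/hibyte access, mode 0 */
        outb(0x40, ticks & 0xFF);   /* count, low byte */
        outb(0x40, ticks >> 8);     /* count, high byte */
    }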
Peter_Vigren

Re:Compute time and day...

Post by Peter_Vigren »

Curufir... The method you present here is intriguing... indeed. And you seem to have given it a lot of thought and research (e.g. the question about SpeedStep technology shows that)... which is impressive. Very impressive, I might add...

However, the downside is that the operating system will be limited to processors with the RDTSC instruction... Is it only Intel that implements it so far?

By the way... what will you use to switch tasks? Still IRQ0 or something else?
Curufir wrote: I have accurate readings of processor speed for all processors on the system (For free) (...)
Explain.
Curufir

Re:Compute time and day...

Post by Curufir »

Peter_Vigren wrote: Curufir... The method you present here is intriguing... indeed. And you seem to have given it a lot of thought and research (e.g. the question about SpeedStep technology shows that)... which is impressive. Very impressive, I might add...
Wish I could say I came up with it in isolation, but most of the ideas came from reading articles on low-level hardware, figuring out that the principles they showed could be used to solve this problem, and then trying to tie them together into some kind of cohesive strategy. Although I haven't seen anyone else doing things this way, I'd imagine someone has tried it before.
However, the downside is that the operating system will be limited to processors with the RDTSC instruction... Is it only Intel that implements it so far?
AFAIK it's supported on AMDs from the K6 upwards (possibly the K5, but I've yet to get that confirmed). Celerons etc. will need some more research on my part.
By the way... what will you use to switch tasks? Still IRQ0 or something else?
The APIC timer if there is one present; if not, the normal PIC/PIT combination will be used. This reduces the range of systems I'm coding for a bit further. Using the APIC timer chops out the delay from using port operations on the PIC.
Curufir wrote: I have accurate readings of processor speed for all processors on the system (For free) (...)
Explain.
Calibration is performed at boot time to form the TSC->seconds ratio and mark the basetime. Since the TSC increments at the same speed as the processor, finding the processor speed in Hz is trivial once you know this ratio. Processor speed does vary a little over time, but not enough to introduce any huge inaccuracies; if it becomes a burden I'll recalibrate every few minutes or so against the RTC.
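
For illustration, one way such a boot-time calibration could look, counting TSC ticks across one RTC second (assumes interrupts are still off and simply polls the CMOS seconds register; a sketch, not Curufir's actual code):

    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;
        __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    static uint8_t cmos_seconds(void)
    {
        outb(0x70, 0x00);           /* CMOS register 0 = RTC seconds */
        return inb(0x71);
    }

    /* TSC ticks per second == CPU clock in Hz, so the TSC->seconds
       ratio and the processor speed come from the same measurement. */
    uint64_t calibrate_tsc_hz(void)
    {
        uint8_t s = cmos_seconds();
        while (cmos_seconds() == s) ;   /* sync to a seconds boundary */
        uint64_t start = read_tsc();
        s = cmos_seconds();
        while (cmos_seconds() == s) ;   /* wait exactly one RTC second */
        return read_tsc() - start;      /* elapsed TSC ticks = Hz */
    }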

***

Now I know this seems like a "round the houses" way of approaching the problem when you consider modern processor speeds, but consider this:

On my little 800MHz Celeron, Linux runs at around 500 timer interrupts a second.

Now let's assume they are just keeping a counter in memory (They don't afaik, but this is just for example's sake).

A simple increment of a memory location takes around 3 cycles, so in the very best scenario I'd be losing 1.5k cycles per second just to maintain a system clock, and that's not taking into account extra loss from forced cache misses etc. It doesn't sound like much, and it isn't, but that's all time the processor could be spending doing something else. Personally I think the saving is worth the complexity; others might not think so. (Bear in mind that what I'm talking about here for a uniprocessor is effectively losing 1 second's processing for every 6 days' runtime; it's not the end of the world :)).
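
Spelling out the arithmetic behind those figures:

    500 interrupts/s x 3 cycles  =  1,500 cycles/s
    1,500 / 800,000,000          ~  1/533,000 of the CPU
    6 days = 518,400 s           ->  518,400 / 533,000 ~ 0.97 s lost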

Guess I'm just not a believer in the "worse is better" school of philosophy, which is why I'll probably still be coding the OS when I'm 60 :).
distantvoices

Re:Compute time and day...

Post by distantvoices »

Regarding the Sleep() call, with which a thread can put itself into a sleeping state:

I do it this way: upon receipt of the sleep call, the system task takes the issuing process out of the queue of running processes, puts the "time to sleep" value into the "sleeping" field of its structure, and stuffs it into the "sleeping" queue. The timer interrupt 0 handler keeps a counter; when this counter reaches, say, 2000, it issues a message to the timer task, waking it up to do the bookkeeping work on the sleeping-process queue. The timer task walks through the queue, counts the sleeping values down towards zero, and if a process is due to wake up, it stuffs that process back into the running queue.

As you can see, it is a soft-timer method, since I don't care much about exact timing when a process is put to sleep for a certain time. Maybe I'll have to pay a price later on for this philosophy. I put more design emphasis on the message-passing mechanism of my microkernel.
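
To make that bookkeeping concrete, a hypothetical sketch of the sweep (task_t, the queue layout and the names are illustrative, not the actual kernel's structures):

    #include <stddef.h>

    typedef struct task {
        struct task *next;
        unsigned     sleeping;      /* ticks left until wakeup */
    } task_t;

    static task_t *sleep_queue;     /* processes put to sleep */
    static task_t *run_queue;       /* processes ready to run */

    /* Run by the timer task each time the IRQ0 counter hits 2000:
       count the sleeping values down and move any process that is
       due back onto the running queue. */
    void timer_task_sweep(unsigned elapsed_ticks)
    {
        task_t **pp = &sleep_queue;
        while (*pp != NULL) {
            task_t *t = *pp;
            if (t->sleeping <= elapsed_ticks) {
                *pp = t->next;          /* unlink from the sleeping queue */
                t->sleeping = 0;
                t->next = run_queue;    /* back into the running queue */
                run_queue = t;
            } else {
                t->sleeping -= elapsed_ticks;
                pp = &t->next;
            }
        }
    }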

Hope I haven't put my butt in the thistles with that.
Peter_Vigren

Re:Compute time and day...

Post by Peter_Vigren »

Curufir wrote: Wish I could say I came up with it in isolation, but most of the ideas came from reading articles on low-level hardware, figuring out that the principles they showed could be used to solve this problem, and then trying to tie them together into some kind of cohesive strategy.
Intriguing. Are any of those articles on the net? If so, where?
Curufir wrote: AFAIK it's supported on AMDs from the K6 upwards (possibly the K5, but I've yet to get that confirmed). Celerons etc. will need some more research on my part.
How did you check that? I would really appreciate it if someone could point me to a page or something that compares different processors and reports compatibility.
Curufir wrote: The APIC timer if there is one present; if not, the normal PIC/PIT combination will be used. This reduces the range of systems I'm coding for a bit further. Using the APIC timer chops out the delay from using port operations on the PIC.
Which are the port operations you are referring to? The initialization?
Curufir wrote: I have accurate readings of processor speed for all processors on the system (For free) (...)
My inquiry was mostly done because I didn't really get the "(For free)" part of the quote...
Curufir wrote: (Bear in mind that what I'm talking about here for a uniprocessor is effectively losing 1 second's processing for every 6 days' runtime; it's not the end of the world :)).
I realize that it would not be that much of a loss not to use your approach... but it is intriguing... By discussing different approaches we train ourselves to think in different ways. Even if I probably won't use this approach, it is still very intriguing, and because of this discussion I have actually begun thinking about implementing something similar (for when no process is available to run)... It's safe to say that I wouldn't even be thinking about anything like that if we hadn't begun the discussion...


Anyway, does anyone know how Linux keeps track of system time? Or any other system, for that matter...
Curufir

Re:Compute time and day...

Post by Curufir »

Peter_Vigren wrote: Intriguing. Are any of those articles on the net? If so, where?
Yup, there are plenty of docs on the net; unfortunately most of them describe principles rather than the actual hardware manipulation.

Here's a decent one for APIC which shows a little sample C code:
http://osdev.berlios.de/pic.html

The RDTSC instruction is documented all over the place, including Intel's docs.
How did you check that? I would really appreciate it if someone could point me to a page or something that compares different processors and reports compatibility.
Basically I looked around in the documentation and on the web. Since Linux makes the APIC timer available to assist with low-latency processes, there's actually quite a bit of information lying around on which systems do and don't work. Plus there's always the processor manufacturers' docs.
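
One way around the model lists, assuming CPUID itself is present (486 and later), is to test the feature bit directly rather than trust a table:

    #include <stdint.h>

    /* CPUID leaf 1 returns feature flags in EDX; bit 4 is TSC,
       i.e. whether RDTSC and the counter behind it exist. */
    int has_rdtsc(void)
    {
        uint32_t eax, ebx, ecx, edx;
        __asm__ volatile ("cpuid"
                          : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                          : "a"(1));
        return (edx >> 4) & 1;
    }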
Which are the port operations you are referring to? The initialization?
I still have to finish my research; docs on actually manipulating the APIC, as opposed to describing what it does, are thin on the ground. What I've got so far suggests that the APIC timer can be manipulated through mapped memory instead of ports. What this means, at least the way I'm reading it, is that I can remove the EOI and PIT port operations that would otherwise be needed. For those keeping score, that's around 28 extra cycles saved on every timer interrupt (the equivalent of saving 1 second every 14 hours at 500 switches/second; as a side note, if you have a 1GHz computer switching 500 times/second and somehow manage to save 1000 cycles/switch, you buy around 2 seconds/hour of extra app runtime. 1000 cycles is a large amount to aim for, but you can see how losing the odd cycle here and there starts to add up), which on x86 is enough to get most of the context switch for free compared to a PIC/PIT combo. Seeing as nothing, and I mean nothing, gets called more frequently in an OS than the scheduler, this is the point where a little optimisation can get a lot of reward.

Of course this all depends on me actually reading the docs correctly, and Intel isn't exactly being straightforward in the explanation, so ATM I'm not 100% certain I can work it like this. Would be neat if I can.
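
If that reading is right, the acknowledgement would be something like this (a sketch assuming the local APIC sits at its default physical base 0xFEE00000 and that the region is mapped in):

    #include <stdint.h>

    #define LAPIC_BASE 0xFEE00000u      /* default local APIC base */
    #define LAPIC_EOI  0xB0u            /* end-of-interrupt register offset */

    /* Acknowledge the APIC timer interrupt with a single memory
       write; no PIC port I/O and no PIT reload involved. */
    static inline void lapic_eoi(void)
    {
        *(volatile uint32_t *)(LAPIC_BASE + LAPIC_EOI) = 0;
    }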
I realize that it would not be that much of a loss not to use your approach... but it is intriguing... By discussing different approaches we train ourselves to think in different ways. Even if I probably won't use this approach, it is still very intriguing, and because of this discussion I have actually begun thinking about implementing something similar (for when no process is available to run)... It's safe to say that I wouldn't even be thinking about anything like that if we hadn't begun the discussion...
Glad to provoke a new line of thought :).