Compute time and day...
Hello. How do you compute time of day and current day? For now, I just compute time of day and I do it in the timer interrupt. Every second, a second counter is incremented and checked to see if it's 60. If it is, that counter is zeroed and the minute counter is incremented, etc...
How do you do it? *curious*
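In code, what I do at the moment looks roughly like this (HZ and the names are just for illustration; this assumes the PIT has been programmed to fire HZ times per second):
Code:
#define HZ 100  /* assumed PIT rate, illustrative */

static unsigned ticks;
static int second, minute, hour;

void timer_isr(void)
{
    if (++ticks < HZ)           /* not a full second yet */
        return;
    ticks = 0;

    if (++second == 60) {
        second = 0;
        if (++minute == 60) {
            minute = 0;
            if (++hour == 24)
                hour = 0;       /* would also advance the day/date here */
        }
    }
}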
Re:Compute time and day...
slacker wrote: cmos contains time, day, etc...
Indeed, but as Pype so nicely pointed out for me, it takes quite some time to read the time directly from CMOS. Therefore it is preferable that the time is computed by the OS itself... using the PIT...
Re:Compute time and day...
You might want to keep track of the time yourself, and then get the CMOS time once in a while to check that you still have the right time.
I mean, missing PIT interrupts can happen, the PIT might not be completely accurate, and given the base frequency of the PIT it's a pain to compute time from that, since you can't have it send you EXACTLY 100 ticks per second, or 60 per second, or something.
Of course it's not important if you get the time from a more reliable source at certain intervals. You can keep your time accurate enough with the PIT, but if you rely only on the PIT for long periods, the time will (likely) be wrong after letting the system run long enough..
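To see why you can't get EXACTLY 100 ticks per second, here's the arithmetic as a quick sketch (the 100 Hz target and the drift figure are just an example):
Code:
#include <stdio.h>

/* The PIT divides a 1193182 Hz base clock by a whole-number divisor,
 * so a target of exactly 100 Hz is unreachable. */
int main(void)
{
    const double pit_base = 1193182.0;
    unsigned divisor = (unsigned)(pit_base / 100.0);   /* 11931 */
    double actual_hz = pit_base / divisor;             /* ~100.0069 Hz */
    double drift_per_day = (actual_hz / 100.0 - 1.0) * 86400.0;

    printf("actual rate: %.4f Hz, drift: ~%.1f s/day\n",
           actual_hz, drift_per_day);
    return 0;
}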
Re:Compute time and day...
Peter_Vigren wrote: Hello. How do you compute time of day and current day? For now, I just compute time of day and I do it in the timer interrupt. Every second, a second counter is incremented and checked to see if it's 60. If it is, that counter is zeroed and the minute counter is incremented, etc...
How do you do it? *curious*
At system boot time you can get the time/date/day from the CMOS and store it somewhere. You can use your clock interrupt to increment a counter of "ticks", and once this accumulates to a second, update the seconds counter. You can then provide system calls to get the time in seconds since boot, or get the time of day/date by adding it to the value you obtained at boot time from the CMOS, or provide a system call that polls the clock chip briefly to achieve millisecond resolution. Of course, tasks should not just sit there calling this system call in a tight loop... ;D - supposing you want a variety of interesting time/date functions.
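A rough sketch of the boot-time read (standard CMOS register layout; this assumes the clock is in BCD mode, which is the usual default - a real implementation should check status register 0x0B):
Code:
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ __volatile__("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static uint8_t cmos_read(uint8_t reg)
{
    outb(0x70, reg);                 /* select CMOS register */
    return inb(0x71);                /* read its value       */
}

static int bcd(uint8_t v) { return (v & 0x0F) + (v >> 4) * 10; }

void rtc_boot_time(int *h, int *m, int *s)
{
    while (cmos_read(0x0A) & 0x80)   /* wait out update-in-progress */
        ;
    *s = bcd(cmos_read(0x00));       /* seconds */
    *m = bcd(cmos_read(0x02));       /* minutes */
    *h = bcd(cmos_read(0x04));       /* hours   */
}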
I highly recommend reading the source code for Minix, Linux, and the BSDs - there are also some good real-time systems like MicroC OS II (there is a book about this RTOS which details the source code and includes a CD-ROM).
If you want to read kernel sources I recommend starting with Minix. It is easy to get lost in detail with the others. MicroC OS II, which is at http://www.ucos-ii.com, is a great little kernel to learn from (get the book with all kernel sources commented, rather than just the ports on the website).
Re:Compute time and day...
Actually I'm headed a slightly different way with this.
I figure all you actually need to know is the speed of the computer (in Hz) and a boot time. Since the time stamp counter increments at the same rate as the processor clock cycle (unless something odd is going on), this provides a very accurate value for uptime, and if you know uptime and boot time you know the current time. Some calibration (at the same point you calibrate processor speed) would be needed during the bootup phase to get an accurate time stamp counter tick -> seconds ratio, but beyond that it should be fine.
The bonuses of this are:
a) Reading the time stamp counter is much faster than reading the RTC.
b) You don't have to keep a counter in the kernel, only 2 fixed values, so there is less overhead (especially with respect to the timer interrupt).
Of course it only works on processors that support the RDTSC instruction (Pentium and above, AFAIK).
Now this gives you other things to play with as well.
Let's say you set things up so that every thousand or so timer interrupts (losing the no-counter bonus) you recalibrate the TSC -> seconds variable. After a few times through this routine you have a very precise clock and a very precise estimate of the speed the processor is running at.
This level of accuracy can bring about all kinds of savings when you start talking about apps that require very accurate timings, e.g. scientific measurements. With a processor running at 300MHz (not exactly a high rate for modern CPUs) you're talking about accuracy on the order of nanoseconds, which is far better than the RTC or any kernel-counter-based system can achieve.
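A minimal sketch of the scheme (GCC-style inline asm; boot_tsc, tsc_per_sec and boot_time would be filled in by the boot-time calibration I described, and the names are just illustrative):
Code:
#include <stdint.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

static uint64_t boot_tsc;      /* TSC value captured at boot        */
static uint64_t tsc_per_sec;   /* calibrated TSC ticks per second   */
static uint32_t boot_time;     /* wall-clock seconds read from CMOS */

uint32_t current_time(void)
{
    uint64_t uptime_sec = (rdtsc() - boot_tsc) / tsc_per_sec;
    return boot_time + (uint32_t)uptime_sec;
}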
Re:Compute time and day...
Curufir wrote: Actually I'm headed a slightly different way with this.
I figure all you actually need to know is the speed of the computer (in Hz) and a boot time. Since the time stamp counter increments at the same rate as the processor clock cycle (unless something odd is going on), this provides a very accurate value for uptime, and if you know uptime and boot time you know the current time.
AFAIK there is a known "something odd" there... namely, many mobile processors in laptops don't increment the counter when they are idle, that is, in the "HLT" state. This is done to save power.
Re:Compute time and day...
mystran wrote: AFAIK there is a known "something odd" there... namely, many mobile processors in laptops don't increment the counter when they are idle, that is, in the "HLT" state. This is done to save power.
So you recalibrate coming back from a "HLT" state and on a regular basis during normal operation. During powerdown, by definition, they aren't looking at the clock and no processes are relying on it.
Re:Compute time and day...
just read the cmos time whenever you need it... it's probably only .00000001 slower than if you calc it yourself, and in an age of 3GHz processors it won't really make a difference
btw i heard that one day processors will be faster than the brain so you won't be able to tell the difference between a 200000GHz and a 30000000000GHz processor.
Re:Compute time and day...
Curufir wrote: So you recalibrate coming back from a "HLT" state and on a regular basis during normal operation. During powerdown, by definition, they aren't looking at the clock and no processes are relying on it.
Hmmh.. You realize that unless you want to keep the CPU 100% busy at all times (which is bad enough on desktops, wasting power and raising CPU temperature, and awful on laptops) you should execute HLT every time you have nothing to schedule, which is a lot of the time, since HLT is the way to make the CPU idle..
I personally wouldn't mind my desktop being 100% busy all the time, since it's properly cooled, but not all are. On a laptop, it's just unacceptable.
So you have to resynchronize every time your processor switches from idle to busy, which is on every interrupt (unless you were busy already), which means it adds to your interrupt latency..
Further, the laptops that absolutely need the HLT instruction in their idle loop are exactly the same systems where the cycle counter is not a reliable method for time calculations..
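The idle loop in question is just something like this (sti only takes effect after the following instruction, so an interrupt can't slip in between the sti and the hlt):
Code:
/* Halt until the next interrupt instead of spinning. */
void idle_task(void)
{
    for (;;)
        __asm__ __volatile__("sti; hlt");
}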
Given the average amount of work done in a timer interrupt, I'd say that incrementing one variable by one is not so bad. If you are going to run any scheduler code to provide pre-emptive multitasking, that is going to take so much more time that the difference is not important. Even checking whether the time quantum of the currently running thread is used up takes about the same amount of time, since in both cases most of the time is wasted on cache misses.
Re:Compute time and day...
slacker wrote: just read the cmos time whenever you need it... it's probably only .00000001 slower than if you calc it yourself, and in an age of 3GHz processors it won't really make a difference
btw i heard that one day processors will be faster than the brain so you won't be able to tell the difference between a 200000GHz and a 30000000000GHz processor.
Ehm... The CMOS and the processor are two completely different things. The speed of the processor doesn't affect the speed of the CMOS.
And I doubt there will be any modifications to the speed of the CMOS... Just compare the FDC... It is as slow as it was before... And that one is not affected by the speed of the processor either...
Therefore the calculation is WAY faster because, as you point out, processors get faster and faster...
Re:Compute time and day...
Curufir wrote: So you recalibrate coming back from a "HLT" state and on a regular basis during normal operation. During powerdown, by definition, they aren't looking at the clock and no processes are relying on it.
I think Intel has something called SpeedStep which makes the speed of the processor vary A LOT from time to time... It would be almost impossible to compensate for that...
Re:Compute time and day...
slacker wrote: just read the cmos time whenever you need it... it's probably only .00000001 slower than if you calc it yourself, and in an age of 3GHz processors it won't really make a difference
It's precisely this line of thinking that has left us with operating systems and applications that lag even on 1GHz+ processors.
Let's just take a look at this one teeny, tiny, simple problem in terms of a multi-processor machine.
Traditional:
Maintaining a counter in memory and verifying it every x minutes against the RTC.
a) Updating the counter means accessing the memory bus, which in turn creates contention on the memory bus and forces a cache miss in any processor trying to read the counter after the update (assuming it would have had a cache hit in the first place).
b) By accessing the RTC you tie up the I/O bus, which in turn plays merry hell with any other processor wishing to perform port operations.
Yours:
Now, in the traditional case, b) is not a problem because it happens so infrequently; with your mechanism it happens constantly (i.e. every time some app requires the system time: file operations, the system clock, etc).
Mine:
There is no counter to provide bus contention; each processor maintains its own TSC -> seconds ratio, which will be in that processor's cache if it has been used recently. There's no I/O contention because once those ratios have stabilised there's no further need to read the RTC (except when coming back from a "HLT" state). I also have accurate readings of processor speed for all processors on the system (for free), so I can alter my scheduler to try to schedule calculation-heavy processes on the fast processors and I/O-heavy processes on the slower ones (assuming they aren't identical).
See the difference? Instead of tying up global system resources, my scheme ties up only local resources (sketched below). The only minor downside is that, taken down to nanosecond accuracy, two different processors on the same system would give two different system times, but I can live with that and can get around it by deciding what level of accuracy to use when returning times.
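Concretely, something like this per-processor arrangement (reusing the rdtsc() helper from my earlier sketch; MAX_CPUS and the names are illustrative):
Code:
#include <stdint.h>

#define MAX_CPUS 8

struct cpu_clock {
    uint64_t boot_tsc;      /* TSC value at the last calibration */
    uint64_t tsc_per_sec;   /* this CPU's measured clock rate    */
    uint32_t base_time;     /* RTC time at the last calibration  */
};

static struct cpu_clock cpu_clock[MAX_CPUS];

/* Each CPU reads only its own entry, so no shared counter and
 * no I/O port is touched on the fast path. */
uint32_t cpu_current_time(int cpu)
{
    struct cpu_clock *c = &cpu_clock[cpu];
    return c->base_time +
           (uint32_t)((rdtsc() - c->boot_tsc) / c->tsc_per_sec);
}
In practice you'd also pad each entry to its own cache line, so that two processors never share a line and the locality argument holds.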
The point here is that it's a simple problem which seemingly has a very simple solution, but things just never get simple with OS dev. It might only be a few cycles being lost to inefficiency, but those few cycles are constant; we can't get them back... ever. The fewer resources an OS consumes (I/O, RAM, processor time), the more are available to the applications users are actually running. I'm not talking about optimising everything to the nth degree, but I am talking about being aware of where system resources are being used needlessly.
Relying on Moore's law to mask poor design is the worst kind of sloppy thinking.
Re:Compute time and day...
Hmmh.. You realize that unless you want to keep the CPU 100% busy at all times (which is bad enough on desktops, wasting power and raising CPU temperature, and awful on laptops) you should execute HLT every time you have nothing to schedule, which is a lot of the time, since HLT is the way to make the CPU idle..
Nope, what I'm saying is that you'd have to recalibrate the base time (not the TSC -> sec ratio) every time you come back from an "HLT" state. You already have to do this if you use the counter mechanism, because the counter won't be updated whilst the processor is powered down, so there is no net loss.
I think Intel has something called SpeedStep which makes the speed of the processor vary A LOT from time to time... It would be almost impossible to compensate for that...
SpeedStep lets you operate the processor in 2 modes depending on the power supply it is connected to. It works by dropping the processor to a stable new clock frequency. Now, with my method, the clock is calibrated at boot time (this will represent the clock speed of the processor running on whichever power source you boot on), then it's just a case of recalibrating should the power source ever change. The processor only ever has two speeds, one for low-power sources and one for high-power sources. Calibrating around them isn't a problem.
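The rebase step itself is cheap, something like this (reusing struct cpu_clock and rdtsc() from my earlier sketches; read_rtc_seconds() is an assumed helper for whatever trusted time source you use):
Code:
#include <stdint.h>

extern uint32_t read_rtc_seconds(void);

/* Run this when waking from HLT or when the power source changes. */
void clock_rebase(struct cpu_clock *c)
{
    c->base_time = read_rtc_seconds();  /* re-anchor wall-clock time */
    c->boot_tsc  = rdtsc();             /* new TSC reference point   */
    /* after a SpeedStep transition, tsc_per_sec must also be
     * re-measured (e.g. against the PIT), since the core clock
     * frequency itself has changed */
}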
Re:Compute time and day...
Curufir wrote: Relying on Moore's law to mask poor design is the worst kind of sloppy thinking.
Indeed.