
System Time problem (clock goes fast, could I use CMOS?)

Posted: Sat Apr 28, 2007 3:43 pm
by Mikor
Hi, I am completely new to OS development, and am making one because (a) it seemed like a good idea at the time, and (b) it's a handy way to learn C.
Anyways, on to the problem...

I am currently increasing the system clock by 1 second every 18 ticks. HOWEVER, there are about 18.2 ticks per second, so the clock gains time. Is there a way for me to read the time from the CMOS, and use that, or do I have to find a way to make my current function more accurate?

Edit: The timer is installed on IRQ 0

Posted: Sat Apr 28, 2007 4:30 pm
by Kevin McGuire
You could try keeping the current time in thousandths of a second, and adding to it every 18 ticks:

curTime = curTime + 989;

Since 18/18.2 = 0.98901098~

So we would lose at least 0.00001098 of a second every 18 ticks - a drift of roughly 0.0000111 seconds per real second - which means the current time is incorrect by slightly more than one second after about 90,000 seconds (roughly 25 hours).

When displaying it, just drop the three least significant base-ten digits, so that something like 172989 is really 172 seconds.
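In C, that scheme might be sketched like this (a hypothetical illustration with invented names, not code from any particular kernel):

```c
/* Sketch of the thousandths-of-a-second scheme above (invented names). */
static unsigned int curTime = 0; /* current time, in thousandths of a second */
static unsigned int ticks = 0;   /* ticks since curTime was last bumped */

/* Call this once per IRQ 0 (which fires about 18.2 times per second). */
void on_tick(void)
{
    ++ticks;
    if (ticks == 18) {   /* 18 ticks are 18/18.2 = 0.989 seconds */
        ticks = 0;
        curTime += 989;
    }
}

unsigned int whole_seconds(void)
{
    return curTime / 1000; /* drop the three least significant digits */
}
```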

PS: Someone who knows what they are doing needs to help you better!

Posted: Sat Apr 28, 2007 5:00 pm
by Kevin McGuire
I just thought about it: since 18.2 ticks per second means exactly 91 ticks every 5 seconds, you could count 91 ticks and then increment a seconds counter by five.

Code: Select all

unsigned int tickMaster = 0; /* whole seconds elapsed, exact */
unsigned int tb = 0;         /* microseconds since the last 5-second boundary */
void tick_handler(void){
	static unsigned int ta = 0; /* ticks since the last 5-second boundary */
	++ta;
	tb = tb + 54945;            /* one tick is 1/18.2 s = 54945 microseconds */
	if(ta == 91){               /* 91 ticks = exactly 5 seconds */
		tickMaster += 5;
		ta = 0;
		tb = 0;
	}
	return;
}
So tickMaster retains an accurate elapsed time in seconds:
tickMaster = current_seconds_elapsed;

Meanwhile tb serves to retain a slightly inaccurate sub-interval time that can be added to current_seconds_elapsed, after you drop the six least significant digits of tb (converting microseconds to whole seconds).

This would allow the time to have low latency, yet still retain high accuracy.

Posted: Sat Apr 28, 2007 6:51 pm
by mathematician
You could multiply by 182 and divide by 10. There is no reason not to read the RTC if you want to; it is fairly well documented on the web and elsewhere. It uses I/O ports 70h (index register) and 71h (data register).
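As a rough sketch of talking to those ports (the `inb`/`outb` helpers and the BCD conversion are my own illustration of the usual approach, not code from this thread; it assumes the RTC is in its default BCD mode):

```c
#include <stdint.h>

/* Minimal x86 port-I/O helpers (GCC-style inline assembly). */
static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(value), "Nd"(port));
}
static inline uint8_t inb(uint16_t port)
{
    uint8_t value;
    __asm__ __volatile__("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* Read a CMOS/RTC register: write the index to port 70h, read data from 71h. */
static uint8_t cmos_read(uint8_t reg)
{
    outb(0x70, reg);
    return inb(0x71);
}

/* The RTC stores values as BCD by default: two decimal digits per byte. */
static uint8_t bcd_to_binary(uint8_t bcd)
{
    return (uint8_t)((bcd >> 4) * 10 + (bcd & 0x0F));
}

int rtc_seconds(void)
{
    /* Wait until the "update in progress" flag (bit 7 of register 0Ah)
     * is clear, so we don't read mid-update. */
    while (cmos_read(0x0A) & 0x80)
        ;
    return bcd_to_binary(cmos_read(0x00)); /* register 0 holds the seconds */
}
```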

Re: System Time problem (clock goes fast, could I use CMOS?)

Posted: Sat Apr 28, 2007 11:02 pm
by Brendan
Hi,
Mikor wrote:I am currently increasing the system clock by 1 second every 18 ticks. HOWEVER, there are about 18.2 ticks per second, so the clock gains time. Is there a way for me to read the time from the CMOS, and use that, or do I have to find a way to make my current function more accurate?
My favourite method is to use overflow:

Code: Select all

    add dword [fractions_of_a_second], 0xE10E10E
    adc dword [seconds], 0
In this case, the value 0xE10E10E makes the 32-bit "fractions_of_a_second" overflow once per second, which sets the carry flag and causes "seconds" to be incremented once per second.

This isn't quite right though...

The PIT runs with a base frequency of 1.19318166666 MHz, or more accurately "(3579544 / 3) Hz". The BIOS uses a divisor of 65536, which means IRQ 0 fires at 18.20650227864583333 Hz. To work out the value above you do "2^32 / frequency", which gives the value 0xE0F97D5.

Of course you don't have to use the PIT timer count that the BIOS used, and it's probably a good idea not to - using a smaller PIT count means you get a faster IRQ 0 frequency, and can measure the time more precisely. It also increases overhead a little, but that shouldn't matter much unless you set the PIT count too low. The maths above is the same though:

amount_to_add = (2^32 * 3 * PIT_count) / 3579544

I've done this at run-time before - the OS worked out how fast the CPU is, then decided what PIT count to use, then calculated the amount to add to the "fractions_of_a_second" counter and set up the PIT. This means reduced overhead on slow CPUs and more precise timing on faster CPUs.
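That run-time calculation can be sketched as follows (my own illustration; it uses a 64-bit intermediate so the multiply can't overflow, and keeps the "(3579544 / 3)" base-clock figure used above):

```c
#include <stdint.h>

/* For a given PIT reload count, compute how much to add to a 32-bit
 * "fractions of a second" counter on every IRQ 0, so that the counter
 * overflows (carrying into the seconds counter) exactly once per second:
 *
 *   amount_to_add = 2^32 / IRQ_frequency = (2^32 * 3 * pit_count) / 3579544
 */
uint32_t fraction_increment(uint32_t pit_count)
{
    /* 64-bit intermediate: even 3 * 65536 * 2^32 fits comfortably. */
    uint64_t numerator = ((uint64_t)3 * pit_count) << 32;
    return (uint32_t)(numerator / 3579544u);
}
```

With the BIOS default count of 65536 this reproduces the 0xE0F97D5 value above; a smaller count (faster IRQ 0) shrinks the increment proportionally.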

Note: for "precision" imagine what happens if software reads your time counter just before the IRQ fires and the counter is updated. If the counter is only increased once per second, then it can be wrong by almost a full second.

The CMOS/RTC could be used too. It has a periodic interrupt on IRQ 8 that can be set up for any "power of 2" frequency from 2 Hz to 8192 Hz ("2, 4, 8, 16, 32, ..., 4096, 8192"). There's also an update interrupt which triggers IRQ 8 once per second.

You could also read the time directly from the CMOS/RTC "time of day" registers. This is slow (there's a lot of overhead involved) and it also lacks precision.


Cheers,

Brendan

Re: System Time problem (clock goes fast, could I use CMOS?)

Posted: Sun Apr 29, 2007 3:52 am
by Mikor
Brendan wrote:

Code: Select all

    add dword [fractions_of_a_second], 0xE10E10E
    adc dword [seconds], 0
I've stared at it for a while, googled (and found nothing useful), and I can't figure out how to put that into DJGPP inline assembly...

Here is my current function:

Code: Select all

void timer_handler(struct regs *r){
    /* Increment our 'tick count' */
    timer_ticks++;

    /* If the current ticks, minus the ticks when the time was set, is a multiple of 18... */
    if((timer_ticks - timer_set) % 18 == 0){
        time_sec++;
        updateTime(1);
    }

    /* Check if this is an 18th tick */
    if(timer_ticks % 18 == 0){
        runtime_sec++;
        updateTime(2);
    }
}
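For what it's worth, one way to express that add/adc pair in GCC-style (and hence DJGPP) inline assembly is sketched below; the wrapper function and variable names are placeholders of mine, not anything from Brendan's kernel:

```c
#include <stdint.h>

static uint32_t fractions_of_a_second = 0;
static uint32_t seconds = 0;

/* Brendan's add/adc pair: add the per-tick increment to the fraction
 * counter, then add the resulting carry flag into the seconds counter. */
static void tick(uint32_t increment)
{
    __asm__ __volatile__(
        "addl %2, %0\n\t"  /* fractions += increment (sets CF on overflow) */
        "adcl $0, %1"      /* seconds += carry */
        : "+m"(fractions_of_a_second), "+m"(seconds)
        : "r"(increment)
        : "cc");
}
```

Plain C works just as well: after `fractions_of_a_second += increment;`, the addition wrapped around exactly when the new value is smaller than `increment`, so `if (fractions_of_a_second < increment) seconds++;` has the same effect without any assembly.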

Posted: Sun Apr 29, 2007 4:14 am
by mystran
Actually, even when you get your PIT to tick as often as you want (I use 10.008 ms, approximately, which is what you get when you aim for 10 ms), you'll find that while the theoretical value will probably be fine for stuff like scheduling, for tracking real time you'll need something more flexible, because nothing says the PIT is completely accurate.

So unless you do something to prevent it, you'll drift over longer periods anyway. The solution I use for this is to keep a variable with the current estimate of microseconds per tick, and on every tick update the time by that many microseconds. Full seconds are moved to another counter, which is initialized from the RTC. The "usecs_per_tick" value is initialized with the theoretical figure (for me that's 10008), but in order to combat drift it's adjustable, even at runtime. Say you want to sync to NTP: usecs_per_tick can be adjusted to a slightly too-large value to catch up with NTP time, or a too-small value to let NTP catch up with you, and then calibrated to the correct value against NTP to minimize local drift.

There are some variations, but while you should initialize the PIT to whatever you need (it's not even a good idea to rely on 18.2 Hz initially, as that might not always hold), you still need more sophisticated tracking for real time in any case. The fact is, you won't know the exact speed of your local PIT before you synchronize with some known-good external clock (of course, one can sanity-check against the local RTC in order to prevent large drifts if the PIT is much too fast or slow).
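A minimal sketch of that scheme (all names invented here; the microseconds-per-tick estimate starts at the theoretical 10008 and can be nudged at runtime to chase an external reference such as NTP):

```c
#include <stdint.h>

static uint32_t usecs_per_tick = 10008; /* current estimate, adjustable */
static uint32_t usec_accum = 0;         /* microseconds within the current second */
static uint64_t wall_seconds = 0;       /* whole seconds; initialize from the RTC */

/* Called from the timer IRQ on every tick. */
void clock_tick(void)
{
    usec_accum += usecs_per_tick;
    while (usec_accum >= 1000000u) { /* move full seconds to the big counter */
        usec_accum -= 1000000u;
        wall_seconds++;
    }
}

/* Nudge the rate: run slightly fast (positive delta) or slow (negative)
 * until this clock agrees with the external reference, then settle on
 * the calibrated value to minimize drift. */
void clock_adjust(int32_t usec_delta)
{
    usecs_per_tick = (uint32_t)((int32_t)usecs_per_tick + usec_delta);
}
```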