avoiding timer counter overflow

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
mr. xsism

avoiding timer counter overflow

Post by mr. xsism »

When you have a timer, it increments a tick counter so often per second. Eventually the counter gets pretty high in value. How would I avoid going over the 32-bit limit of the variable?

Regards,
mr. xsism
Pype.Clicker
Member
Posts: 5964
Joined: Wed Oct 18, 2006 2:31 am
Location: In a galaxy, far, far away

Re:avoiding timer counter overflow

Post by Pype.Clicker »

Everything depends on what your timer is used for, but basically this is done by having a cascade of counters: one counting milliseconds since the last second, another counting seconds since system boot ...

Note that if you opt for "increment the date and time of day every minute" rather than "add seconds-since-boot to the date and time at boot", you'll run into fewer problems with the total uptime wrapping ... (which only happens once every 136 years anyway :-p)
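A minimal sketch of such a cascade, assuming a hypothetical 100 Hz tick rate and handler name:

```c
#include <stdint.h>

#define TICKS_PER_SECOND 100u   /* assumed PIT rate */

static uint32_t tick_in_second = 0;     /* wraps every second */
static uint32_t seconds_since_boot = 0; /* wraps only after ~136 years */

/* called from the timer IRQ handler */
void timer_tick(void)
{
    if (++tick_in_second >= TICKS_PER_SECOND) {
        tick_in_second = 0;
        seconds_since_boot++;
    }
}
```

Each stage only carries into the next when it wraps, so no single counter ever has to grow without bound.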
Tim

Re:avoiding timer counter overflow

Post by Tim »

...and if you use a 64-bit counter, you won't have to worry about wrapping around for a VERY long time. NT uses 1/10th of a microsecond as its timebase, which wraps around every 58,494 years.
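That figure can be sanity-checked with a few lines (2^64 ticks of 100 ns each, counted in 365-day years):

```c
/* years until a 64-bit counter of 100 ns ticks wraps around */
double wrap_years(void)
{
    double ticks = 18446744073709551616.0;   /* 2^64, exact as a double */
    double seconds = ticks * 1e-7;           /* one tick = 100 ns */
    return seconds / (365.0 * 24 * 60 * 60); /* approx. 58,494 */
}
```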
mr. xsism

Re:avoiding timer counter overflow

Post by mr. xsism »

How would I use a 64-bit variable in C? I could probably use a couple of different counters too, to divide the load.

Thanks...

Regards,
mr. xsism
Tim

Re:avoiding timer counter overflow

Post by Tim »

There isn't a standard way of doing it before C99. gcc supports [tt]long long[/tt] (and [tt]unsigned long long[/tt]). VC++ supports [tt]__int64[/tt] (and [tt]unsigned __int64[/tt]). Incidentally, C99 defines 'large' (64-bit on x86) integers the same way as gcc.
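Since C99 you can also spell these with the exact-width types from <stdint.h>, which map onto the same underlying types:

```c
#include <stdint.h>

static uint64_t tick_count = 0; /* 64-bit: wraps only after 2^64 increments */

void on_tick(void)
{
    tick_count++; /* compiles to ADD/ADC on 32-bit x86, no library needed */
}
```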
mr. xsism

Re:avoiding timer counter overflow

Post by mr. xsism »

OK, I tried 64-bit with 'unsigned long long', but when I link it together, gcc gives me this:

Code: Select all

pit.o(.text+0x107):pit.c: undefined reference to `___udivdi3'
pit.o(.text+0x115):pit.c: undefined reference to `___umoddi3'
pit.o(.text+0x13a):pit.c: undefined reference to `___udivdi3'
pit.o(.text+0x148):pit.c: undefined reference to `___umoddi3'
pit.o(.text+0x17a):pit.c: undefined reference to `___udivdi3'
pit.o(.text+0x188):pit.c: undefined reference to `___umoddi3'
pit.o(.text+0x1ba):pit.c: undefined reference to `___udivdi3'
pit.o(.text+0x1c8):pit.c: undefined reference to `___umoddi3'

What is that? I see that it's divide and modulus, which I do use in the code. I guess 64-bit ops aren't standard?

Regards,
mr. xsism
mr. xsism

real time

Post by mr. xsism »

Another thing: how would I show the real time? I already read the static real time at boot, but the problem is adding the new ticks onto that static time without wasting cycles recomputing it.

I've tried thinking of ideas, but they have all proved inefficient so far. The real problem is this: I have a struct called real_time that holds the time and date. real_time.secs + getsecs() <<-- that adds the static seconds to the uptime seconds to make the current time. But say the static value is 52 seconds and the uptime is now 16 seconds: adding the two gives 68 seconds, which is over 60. Look at this code:

Code: Select all

dword getticks()
{
  return ticks;
}

dword getsecs()
{
  return ticks/100%60;
}

dword getmins()
{
  return ticks/100/60%60;
}

dword gethours()
{
  return ticks/100/60/60%24;
}


void showtime(time_st time)
{
  char buffer[10];
  int tmp;

  print("day of week",0,18,0x02);                      //day of week
  print(days[time.weekday+(getdays()%7)],15,17,0x05);

  itoa(time.monthday+getdays(),16,buffer);             //day of month
  print("day of month",0,19,0x02);
  print(buffer,15,18,0x03);

  print("month's name",0,20,0x02);                     //month's name
  print(months[time.month],15,19,0x03);
  itoa(time.month+getmonths(),16,buffer);              //month
  print("month",0,20,0x02);
  print(buffer,15,20,0x03);

  itoa(time.year+getyears(),16,buffer);                //year
  print("year",0,21,0x02);
  print(buffer,15,21,0x03);

  //nevermind that code ^ Look at bottom code only

  itoa(time.hour+gethours(),16,buffer);                //hour
  print("hour",0,22,0x02);
  print(buffer,15,22,0x01);

  itoa(time.min+getmins(),16,buffer);                  //mins
  print("min",0,23,0x02);
  print(buffer,15,23,0x01);

  itoa(time.sec+getsecs(),16,buffer);                  //secs
  print("sec",0,24,0x02);
  print(buffer,15,24,0x01);
}


'ticks' starts out at 0 and then increments 100 times a second. Would I need to check whether the seconds go over 60? What about minutes and all that? Do you see what I mean? I hope so, because I'm stuck.
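One way out is to propagate the carry field by field after adding: seconds overflow into minutes, minutes into hours. A sketch with a hypothetical struct (day carry omitted):

```c
struct hms { unsigned sec, min, hour; };

/* add uptime fields onto the boot-time fields, carrying as needed */
struct hms add_time(struct hms boot, unsigned up_sec, unsigned up_min,
                    unsigned up_hour)
{
    struct hms t;
    t.sec  = boot.sec  + up_sec;
    t.min  = boot.min  + up_min  + t.sec / 60; /* carry secs -> mins */
    t.hour = boot.hour + up_hour + t.min / 60; /* carry mins -> hours */
    t.sec %= 60;
    t.min %= 60;
    t.hour %= 24;
    return t;
}
```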


Regards,
mr. xsism

Re:avoiding timer counter overflow

Post by Pype.Clicker »

mr. xsism wrote: ok, I tried 64-bit with 'unsigned long long', but when I link it together, gcc gives me this:

The CPU doesn't support 64-bit division and multiplication natively; it needs a library routine to do them. 64-bit additions and subtractions are compiled to ADD/ADC and SUB/SBB pairs, and therefore need no library.
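Those __udivdi3/__umoddi3 helpers live in libgcc, so one fix is simply to link with -lgcc. Another is to keep the 64-bit counter for uptime but derive the clock fields from a 32-bit counter that wraps daily, so gcc never has to emit a 64-bit division — a sketch assuming a 100 Hz tick:

```c
#include <stdint.h>

#define HZ 100u /* assumed tick rate */

static uint64_t ticks64 = 0;     /* never divided; ++ needs no libgcc */
static uint32_t ticks_today = 0; /* wraps daily; fits easily in 32 bits */

void pit_tick(void)
{
    ticks64++;
    if (++ticks_today >= 24u * 60u * 60u * HZ)
        ticks_today = 0;
}

/* all division below is 32-bit, so no __udivdi3 is referenced */
uint32_t get_secs(void)  { return ticks_today / HZ % 60u; }
uint32_t get_mins(void)  { return ticks_today / HZ / 60u % 60u; }
uint32_t get_hours(void) { return ticks_today / HZ / 3600u; }
```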

mr. xsism wrote: I've tried thinking of ideas, but they all proved inefficient so far. The real prob is this: I have a struct called real_time that holds the time and date. real_time.secs + getsecs() <<-- that adds the static seconds to the uptime seconds to make a current time, but say the static is 52 seconds and the uptime is now 16 seconds. I add those 2 together and it makes 68 seconds, which is over 60.

As I said above, you're better off updating your "real time" every time a minute has gone by (for instance) than trying to recompute it from the boot-time value on every read.

Code: Select all

unsigned long long uptime; // total ticks since boot, for whatever needs it
unsigned int ticks;        // ticks within the current second
unsigned int seconds;      // seconds within the current minute

struct TimeVal realtime;

void IRQhandler(void)
{
    uptime++;
    ticks++; if (ticks < TICKS_PER_SECOND) return;
    ticks = 0; seconds++;
    if (seconds < 60) return;
    seconds = 0;
    update_realtime(&realtime); /* C structs have no methods; plain function */
}
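The once-a-minute realtime update could look like this in C (hypothetical TimeVal layout; month and year carry omitted):

```c
#include <stdint.h>

struct TimeVal {
    uint8_t  min, hour;
    uint16_t day; /* running day count */
};

/* called once a minute: propagate the carry up through the fields */
void update_realtime(struct TimeVal *t)
{
    if (++t->min < 60) return;
    t->min = 0;
    if (++t->hour < 24) return;
    t->hour = 0;
    t->day++;
}
```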