Posted: Thu Jan 04, 2007 3:05 am
by Solar
Even if you were to measure in nanoseconds, a 64-bit value would cover 600 years...
Posted: Thu Jan 04, 2007 3:24 am
by Candy
Solar wrote:Even if you were to measure in nanoseconds, a 64-bit value would cover 600 years...
I'm making a sort of valiant attempt at making all variables large enough to be used for nearly all instances of the problem they're solving - not just those I can see. Times should stretch from creation to armageddon, times for the computer should stretch at least from boot to hardware fail / shutdown.
So I've chosen two value types: 128-bit for generic time and 64-bit for CPU time. The first is split half in seconds and half in subseconds; the second is measured in 1/65536ths of a second. The second is used for timers etc.; the first is used to return realtime values etc.
The only targets that need an actual conversion are Windows and DOS (using division and multiplication).
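A minimal sketch of what that pair of types might look like in C. The field and type names here are my own invention for illustration, not anything Candy specified, and the 64/64 split of the 128-bit value is my reading of "half in seconds, half in subseconds":

```c
#include <stdint.h>

/* Generic time: 128 bits, half whole seconds, half subseconds.
   With 64 fractional bits one tick is 2^-64 s (far below a
   femtosecond), and 64 whole-second bits span ~584 billion years. */
typedef struct {
    uint64_t seconds;     /* whole seconds since the epoch */
    uint64_t subseconds;  /* fraction of a second, in 2^-64 s units */
} generic_time_t;

/* CPU time: 64 bits counted in 1/65536ths of a second. */
typedef uint64_t cpu_time_t;

/* Widen the low-resolution CPU time to the generic format. */
static generic_time_t cpu_to_generic(cpu_time_t t)
{
    generic_time_t g;
    g.seconds    = t >> 16;            /* whole seconds */
    g.subseconds = (t & 0xFFFF) << 48; /* 1/65536 s units -> 2^-64 s units */
    return g;
}
```

Converting the other way (and to Windows or DOS formats) would be the division-and-multiplication step mentioned above.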
Posted: Thu Jan 04, 2007 3:47 am
by Solar
Nice approach... but... do you really require a precision in the nanosecond range for historical dates?
Usually I am from the same school of thought (make it big enough right from the beginning). But usually exact-time and calendar are two variable types that seldom if ever have to be mixed. For example, do you intend to keep 128-bit / nanosecond precision timestamps for all your files?
Posted: Thu Jan 04, 2007 4:03 am
by Brendan
Hi,
Solar wrote:Nice approach... but... do you really require a precision in the nanosecond range for historical dates?
Usually I am from the same school of thought (make it big enough right from the beginning). But usually exact-time and calendar are two variable types that seldom if ever have to be mixed. For example, do you intend to keep 128-bit / nanosecond precision timestamps for all your files?
I personally like the idea of a time format that is capable of storing all times for all purposes (i.e. with a range suitable for historical dates and accuracy suitable for very small time delays and perhaps instruction timing).
Also, I'm not convinced that Candy isn't planning more than he's stated....
For example, it'd be possible to have a 32-bit seconds counter, a 32-bit sub-second counter, and a 64-bit access count.
The idea of the access count is to give increasing values for requests for the current time, such that no 2 times are the same and all values returned are in order.
For example, on a normal OS with a 100 ms timer IRQ, if 2 threads request the current time within that 100 ms timer period, then they'd both get the same time value. With the "access count" they'd both get different values (for e.g. 0x1234567800000001 and 0x1234567800000002).
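A sketch of how such an access count might work. All names here are invented for illustration, and a real kernel would make the increment atomic and reset the counter when the timer IRQ advances the stamp:

```c
#include <stdint.h>

/* 32-bit seconds + 32-bit subseconds packed in one 64-bit stamp,
   plus a 64-bit access count, per Brendan's layout. */
typedef struct {
    uint64_t stamp;        /* seconds:subseconds, set by the timer IRQ */
    uint64_t access_count; /* bumped on every read of the clock */
} unique_time_t;

static uint64_t current_stamp;  /* updated by the timer interrupt */
static uint64_t access_counter; /* a real kernel would zero this when
                                   current_stamp changes */

/* Return a time value that is unique and strictly increasing, even
   when many callers read the clock inside one timer tick. */
static unique_time_t get_unique_time(void)
{
    unique_time_t t;
    t.stamp = current_stamp;
    t.access_count = ++access_counter; /* atomic increment in practice */
    return t;
}
```

Two reads inside the same 100 ms tick then share a stamp but differ in the counter, which is exactly the ordering property described above.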
I remember a few posts Candy made previously...
Cheers,
Brendan
Posted: Thu Jan 04, 2007 4:50 am
by Ready4Dis
Yeah, that is a very good point... even a 64-bit number, if you are using milliseconds, would last you 584.5 million years. If you're using seconds, multiply that by 1000. If you're using clock cycles, that just seems silly; however, those could add up very quickly. If you're using it for a timer or whatever, it would probably time out before anything else does, but who knows as CPUs get faster and faster. I'd rather use a size too big than find out later it's too small (Y2K bug, anybody?).
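Those figures are easy to verify. A small helper of my own, just for checking the arithmetic (using a Julian year of 31,557,600 seconds):

```c
/* How many years a 64-bit counter lasts at a given tick rate.
   Assumes a Julian year of 31,557,600 seconds. */
static double years_covered_by_u64(double ticks_per_second)
{
    const double range = 18446744073709551616.0; /* 2^64 */
    return range / ticks_per_second / 31557600.0;
}
```

At 1000 ticks/s (milliseconds) this gives about 584.5 million years, and at 10^9 ticks/s (nanoseconds) about 584 years, matching both figures quoted in the thread.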
Posted: Thu Jan 04, 2007 5:07 am
by m
Brendan wrote:I personally like the idea of a time format that is capable of storing all times for all purposes (i.e. with a range suitable for historical dates and accuracy suitable for very small time delays and perhaps instruction timing).
Err... one thing I'm confused about. You wrote "a time format"... How can only one format suit several types of time accuracy?
(I understand that as: a large time type containing different fields for different uses. If that's what you originally thought of, it might waste storage space. A more efficient way might be to design a set of time types with fewer bits, processed in different ways. If I've misread that, just ignore this paragraph.)
The idea of the access count is to give increasing values for requests for the current time, such that no 2 times are the same and all values returned are in order.
For example, on a normal OS with a 100 ms timer IRQ, if 2 threads request the current time within that 100 ms timer period, then they'd both get the same time value. With the "access count" they'd both get different values (for e.g. 0x1234567800000001 and 0x1234567800000002).
So you're making a kind of micro-scheduling, a micro-level ordering, as a solution to synchronization?
Posted: Thu Jan 04, 2007 5:12 am
by Candy
Brendan wrote:Hi,
Solar wrote:Nice approach... but... do you really require a precision in the nanosecond range for historical dates?
Usually I am from the same school of thought (make it big enough right from the beginning). But usually exact-time and calendar are two variable types that seldom if ever have to be mixed. For example, do you intend to keep 128-bit / nanosecond precision timestamps for all your files?
I personally like the idea of a time format that is capable of storing all times for all purposes (i.e. with a range suitable for historical dates and accuracy suitable for very small time delays and perhaps instruction timing).
Also, I'm not convinced that Candy isn't planning more than he's stated....
Not nice to read my mind like that.
I was hoping for a time format with uniform, time-indicative precision, so I couldn't use a floating-point number, which is more precise around a given moment and less precise further into the future. I also wanted it to stretch from the start to the end of the world, with a precision of at least a femtosecond (so you could use it for hardware simulation, which takes the femtosecond as its basic unit). A femtosecond is 1/10^15 of a second, so this is fine-grained enough. It also leaves room for an access counter that doesn't overflow excessively fast and doesn't have a noticeable effect on the actual measured time.
It's not necessary or interesting to see at which sub-femtosecond instant a file was created. It is interesting to see in which order files were created.
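To put a number on that "room for an access counter" claim: assuming the 64/64 seconds/subseconds split from earlier in the thread (my reading, not a stated spec), every fractional bit whose weight is below 10^-15 s is sub-femtosecond, and those low bits are free to hold the counter. A small check of my own:

```c
#include <math.h>

/* Weight in seconds of fractional bit i of a 64-bit subsecond
   field (i = 0 is the least significant bit). */
static double frac_bit_weight(int i)
{
    return ldexp(1.0, i - 64); /* 2^(i - 64) */
}

/* Count the low bits that fall below femtosecond resolution and
   are therefore free to hold an access counter. */
static int sub_femtosecond_bits(void)
{
    int n = 0;
    while (frac_bit_weight(n) < 1e-15)
        n++;
    return n;
}
```

That comes out to 15 bits, i.e. about 32768 clock reads per tick before the counter would even register at femtosecond resolution.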
Posted: Thu Jan 04, 2007 5:50 am
by Brendan
Hi,
Candy wrote:Brendan wrote:Also, I'm not convinced that Candy isn't planning more than he's stated....
Not nice to read my mind like that.
You are entirely correct, and have my apologies.
It's rare for me to hear of an idea that I'd consider new, useful and nonobvious to "someone of ordinary skill in the art".
BTW, has this idea been published and/or used publicly somewhere that I'm not aware of? If not, would your previous posts count as "published" from the US legal system's perspective (IIRC, in the US there's a one year window after an idea is published)?
Lastly, would you mind if I intended to also use the "access count" idea?
Cheers,
Brendan
Posted: Thu Jan 04, 2007 7:17 am
by Candy
Brendan wrote:
You are entirely correct, and have my apologies.
Was meant to be a joke, it's ok.
It's rare for me to hear of an idea that I'd consider new, useful and nonobvious to "someone of ordinary skill in the art".
Thanks
BTW, has this idea been published and/or used publicly somewhere that I'm not aware of? If not, would your previous posts count as "published" from the US legal system's perspective (IIRC, in the US there's a one year window after an idea is published)?
I would say that this forum is backed up often enough by search engines to count as prior art, at least, that's why I've been dumping most of my ideas here.
Lastly, would you mind if I intended to also use the "access count" idea?
All the stuff I make during my free time is published in the public domain, including ideas and so on. Patents are your own problem, I disregard them.
Posted: Sat Jan 06, 2007 12:37 pm
by dave
While having the ability to maintain femtosecond (10^-15) resolution may be nice,
a 3 GHz processor at best would have 333-picosecond (10^-12) resolution, and that's assuming all your processor did was keep setting your variable with updates. Given that you would probably like to run some code as well, the best accuracy you are likely to get is nanoseconds (10^-9). Even if your processor were amazing and could run every single instruction in one clock cycle, 10 instructions would take 3.3 nanoseconds to run. And if you're using C/C++ as your programming choice, the pushing and popping of arguments could easily account for those 10 instructions.
So, while it is a nice idea to have this wonderful accuracy, the underlying hardware will not give you the precision you are trying to obtain.
Dave
Posted: Sat Jan 06, 2007 2:51 pm
by Solar
{deleted, too stupid to tell page 1 of the thread from page 2}
Posted: Sat Jan 06, 2007 4:24 pm
by Combuster
dave wrote:So, while it is a nice idea to have this wonderful accuracy, the underlying hardware will not give you the precision you are trying to obtain.
Not yet..............
Posted: Sat Jan 06, 2007 4:32 pm
by dave
My point was not about whether more precision is wanted. It was simply that you cannot get femtosecond accuracy, not even 1-nanosecond accuracy, on a modern CPU. The logic gates involved in bringing you all this wonderful technology have delays in the pico- to femtosecond range. So my point is that even if you wanted femtosecond precision, you could not get it out of a CPU.
Also, computers are not getting faster; clock-speed-wise they are getting slower.
Combuster, by the time you get that speed, Intel probably will not exist.
Posted: Sun Jan 07, 2007 1:29 pm
by Candy
dave wrote:While having the ability to maintain femtosecond (10^-15) resolution may be nice,
a 3 GHz processor at best would have 333-picosecond (10^-12) resolution, and that's assuming all your processor did was keep setting your variable with updates. Given that you would probably like to run some code as well, the best accuracy you are likely to get is nanoseconds (10^-9). Even if your processor were amazing and could run every single instruction in one clock cycle, 10 instructions would take 3.3 nanoseconds to run. And if you're using C/C++ as your programming choice, the pushing and popping of arguments could easily account for those 10 instructions.
So, while it is a nice idea to have this wonderful accuracy, the underlying hardware will not give you the precision you are trying to obtain.
Dave
When updating the time, record the TSC value. Also record the delay, in cycles, of requesting the time through the kernel interface and returning it from the kernel (including the addition of this variable). When requesting the time, read the TSC, subtract the original TSC value, divide by the processor frequency to convert cycles to time, add the result to the original time, add the measured call offset too, and return the value. Nanosecond accuracy.
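A sketch of that scheme in C. All the names here are invented for illustration; a real implementation would read the counter with rdtsc and use 128-bit or fixed-point math, since the 64-bit multiply below overflows once delta exceeds about 2^64 / 10^9 cycles (roughly six seconds at 3 GHz):

```c
#include <stdint.h>

static uint64_t base_time_ns;     /* wall time recorded at the last update */
static uint64_t base_tsc;         /* TSC value recorded at that same moment */
static uint64_t tsc_per_second;   /* measured processor frequency */
static uint64_t call_overhead_ns; /* measured cost of this call itself */

/* Interpolate the current time from the cycles elapsed since the
   last clock update, compensating for the call's own overhead. */
static uint64_t current_time_ns(uint64_t tsc_now)
{
    uint64_t delta = tsc_now - base_tsc;        /* cycles since update */
    return base_time_ns
         + (delta * 1000000000ull) / tsc_per_second /* cycles -> ns */
         + call_overhead_ns;                    /* the measured offset */
}
```

For example, 3000 cycles after an update on a 3 GHz part, this returns the base time plus 1000 ns plus the overhead correction.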
Posted: Sun Jan 07, 2007 3:07 pm
by dave
I'm not saying you cannot get nanosecond accuracy, but we're talking tens to hundreds of nanoseconds as what you are realistically able to get. So you're asking for on the order of six magnitudes (10^6) greater precision, which just is not going to happen with any modern hardware. Femtosecond accuracy is just not possible, nor is picosecond; and nanosecond, while possible, is not truly precise, because you're guessing how long the processor took to do something based on typical data. All I'm saying is that femtosecond accuracy for, say, simulations should be handled by the application, because it is truly application-specific and you would truly be simulating the time no matter what.