Is there consensus on a 128-bit type?
I'm making a sort of valiant attempt at making all variables large enough to be used for nearly all instances of the problem they're solving, not just those I can see. Generic times should stretch from creation to armageddon; times for the computer should stretch at least from boot to hardware failure / shutdown.
Solar wrote: Even if you would measure in nanoseconds, a 64-bit value would cover 600 years...
So, I've chosen two value types: 128-bit for generic time and 64-bit for CPU time, the first measured half in seconds and half in subseconds, the second measured in 1/65536ths of a second. The second is used for timers etc.; the first is used to return realtime values etc.
The only systems for which you need an actual conversion are Windows and DOS (using division and multiplication).
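For illustration, here is a minimal C sketch of the two value types and the kind of multiply-and-divide conversion involved. The type and function names are mine, and the exact field split is an assumption based on the description above, not Candy's actual layout:

```c
#include <stdint.h>

/* 128-bit "generic time": half seconds, half subseconds, as described
   above (field widths assumed). */
typedef struct {
    uint64_t seconds;    /* whole seconds */
    uint64_t fraction;   /* subseconds, in units of 1/2^64 s */
} generic_time_t;

/* The 64-bit CPU time counts 1/65536ths of a second; converting to a
   millisecond-based OS (Windows/DOS style) is a multiply and divide:
   1 tick = 1000/65536 ms. */
static uint64_t ticks_to_ms(uint64_t ticks)
{
    return ticks * 1000u / 65536u;  /* can overflow for huge counts */
}

static uint64_t ms_to_ticks(uint64_t ms)
{
    return ms * 65536u / 1000u;
}
```

Dividing or multiplying by 65536 is a plain shift, which is part of what makes a power-of-two subsecond unit attractive.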
Nice approach... but... do you really require a precision in the nanosecond range for historical dates?
Usually I am from the same school of thought (make it big enough right from the beginning). But usually exact-time and calendar are two variable types that seldom if ever have to be mixed. For example, do you intend to keep 128-bit / nanosecond precision timestamps for all your files?
Every good solution is obvious once you've found it.
Hi,
I personally like the idea of a time format that is capable of storing all times for all purposes (i.e. with a range suitable for historical dates and accuracy suitable for very small time delays and perhaps instruction timing).
Solar wrote: Nice approach... but... do you really require a precision in the nanosecond range for historical dates?
Usually I am from the same school of thought (make it big enough right from the beginning). But usually exact-time and calendar are two variable types that seldom if ever have to be mixed. For example, do you intend to keep 128-bit / nanosecond precision timestamps for all your files?
Also, I'm not convinced that Candy isn't planning more than he's stated....
For example, it'd be possible to have a 32-bit seconds counter, a 32-bit sub-second counter, and a 64-bit access count.
The idea of the access count is to give increasing values for requests for the current time, such that no 2 times are the same and all values returned are in order.
For example, on a normal OS with a 100 ms timer IRQ, if 2 threads request the current time within that 100 ms timer period, then they'd both get the same time value. With the "access count" they'd both get different values (for e.g. 0x1234567800000001 and 0x1234567800000002).
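A minimal sketch of that access-count scheme in C (the names are illustrative, and a real kernel would have to do the read-and-increment atomically):

```c
#include <stdint.h>

/* Coarse time in the high 32 bits, per-tick access counter in the low
   32 bits. The timer IRQ updates current_seconds and resets
   access_count; every reader bumps the counter, so two reads within
   one tick still get distinct, strictly increasing values. */
static uint32_t current_seconds;  /* updated by the timer interrupt */
static uint32_t access_count;     /* reset on each timer tick */

static uint64_t get_unique_time(void)
{
    /* a real kernel would use an atomic fetch-and-add here */
    return ((uint64_t)current_seconds << 32) | access_count++;
}
```

With current_seconds at 0x12345678, consecutive calls return 0x1234567800000001 then 0x1234567800000002, matching the example above.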
I remember a few posts Candy made previously...
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Yeah, that is a very good point... even a 64-bit number, if you are counting milliseconds, would last you 584.5 million years. If you're using seconds, multiply that by 1000. Using clock cycles just seems silly; however, those could add up very quickly. If you're using it for a timer or whatever, it would probably time out before anything overflows, but who knows as CPUs get faster and faster. Rather use a size too big than find out later it's too small (Y2K bug, anybody?).
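The 584.5-million-year figure checks out; a quick sanity calculation (just dividing the ranges out, using Julian years):

```c
/* 2^64 milliseconds expressed in years: roughly 584.5 million. */
static double ms_range_in_years(void)
{
    double ms = 18446744073709551616.0;               /* 2^64 */
    double seconds_per_year = 365.25 * 24.0 * 3600.0; /* Julian year */
    return (ms / 1000.0) / seconds_per_year;
}
```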
Err... One thing I'm confused about. You wrote "a time format"... How can only one format suit several types of time accuracy?
Brendan wrote: I personally like the idea of a time format that is capable of storing all times for all purposes (i.e. with a range suitable for historical dates and accuracy suitable for very small time delays and perhaps instruction timing).
(I understand that as: a large time type containing different fields for different uses. If it's just what you originally thought of, it might waste storage space. A more efficient way might be to design a set of time types with fewer bits, processed in different ways. If I've mistaken that, just ignore this paragraph.)
Micro-scheduling in micro-order as a solution to synchronization?
Brendan wrote: The idea of the access count is to give increasing values for requests for the current time, such that no 2 times are the same and all values returned are in order.
For example, on a normal OS with a 100 ms timer IRQ, if 2 threads request the current time within that 100 ms timer period, then they'd both get the same time value. With the "access count" they'd both get different values (for e.g. 0x1234567800000001 and 0x1234567800000002).
Last edited by m on Thu Jan 04, 2007 5:23 am, edited 1 time in total.
Not nice to read my mind like that.
Brendan wrote: Hi,
I personally like the idea of a time format that is capable of storing all times for all purposes (i.e. with a range suitable for historical dates and accuracy suitable for very small time delays and perhaps instruction timing).
Solar wrote: Nice approach... but... do you really require a precision in the nanosecond range for historical dates?
Usually I am from the same school of thought (make it big enough right from the beginning). But usually exact-time and calendar are two variable types that seldom if ever have to be mixed. For example, do you intend to keep 128-bit / nanosecond precision timestamps for all your files?
Also, I'm not convinced that Candy isn't planning more than he's stated....
I was hoping for a time format that was specific and time-indicative, so I couldn't use a floating point number that's more specific at one moment and less specific in the future. I also wanted it to stretch from start to end of world, with a precision of at least a femtosecond (so you could use it for hardware simulation, which takes the femtosecond as its basic unit). A femtosecond is 1/10^15th of a second, so this is precise enough. It also leaves room for an access counter that doesn't overflow excessively fast and doesn't have a noticeable effect on the actual measured time.
It's not necessary or interesting to see which file was created at which sub-femtosecond interval. It is interesting to see in which order they were created.
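A quick check of the bit budget this implies (my arithmetic, assuming the 128-bit value carries a 64-bit binary fraction field): roughly 50 fraction bits already resolve single femtoseconds, which leaves the bits below them free for exactly this kind of access counter:

```c
#include <stdint.h>

#define FEMTOSECONDS_PER_SECOND 1000000000000000ull  /* 10^15 */

/* Can a binary fraction of the given width distinguish individual
   femtoseconds? It can once 2^bits >= 10^15. */
static int resolves_femtoseconds(unsigned fraction_bits)
{
    return (1ull << fraction_bits) >= FEMTOSECONDS_PER_SECOND;
}
```

resolves_femtoseconds(50) holds while resolves_femtoseconds(49) does not, so 50 of the 64 fraction bits cover femtosecond resolution and the lowest 14 remain for a counter that never disturbs the measured time by as much as a femtosecond.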
Hi,
You are entirely correct, and have my apologies..
Candy wrote: Not nice to read my mind like that.
Brendan wrote: Also, I'm not convinced that Candy isn't planning more than he's stated....
It's rare for me to hear of an idea that I'd consider new, useful and nonobvious to "someone of ordinary skill in the art".
BTW, has this idea been published and/or used publicly somewhere that I'm not aware of? If not, would your previous posts count as "published" from the US legal system's perspective (IIRC, in the US there's a one year window after an idea is published)?
Lastly, would you mind if I also used the "access count" idea?
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Was meant to be a joke, it's ok.
Brendan wrote: You are entirely correct, and have my apologies..
Thanks.
It's rare for me to hear of an idea that I'd consider new, useful and nonobvious to "someone of ordinary skill in the art".
I would say that this forum is backed up often enough by search engines to count as prior art; at least, that's why I've been dumping most of my ideas here.
BTW, has this idea been published and/or used publicly somewhere that I'm not aware of? If not, would your previous posts count as "published" from the US legal system's perspective (IIRC, in the US there's a one year window after an idea is published)?
All the stuff I make during my free time is published in the public domain, including ideas and so on. Patents are your own problem; I disregard them.
Lastly, would you mind if I intended to also use the "access count" idea?
While having the ability to maintain femtosecond (10^-15) precision may be nice:
A 3 GHz processor at best gives you 333 picosecond (10^-12) resolution, and that's if all your processor did was keep setting your variable with updates. Given that you would probably like to run some code, the best accuracy you are likely to get is nanoseconds (10^-9). Even if your processor were amazing and could run every single instruction in one clock cycle, 10 instructions would take 3.3 nanoseconds to run. And if you're using C/C++ as your programming language, the pushing and popping of arguments could easily account for those 10 instructions.
So, while it is a nice idea to have this wonderful accuracy, the underlying hardware will not give you the precision you are trying to obtain.
Dave
{deleted, too stupid to tell page 1 of the thread from page 2}
Last edited by Solar on Sun Jan 07, 2007 7:34 am, edited 1 time in total.
Every good solution is obvious once you've found it.
My point was nothing about whether more precision was wanted. It was simply that you cannot get femtosecond accuracy, not even 1 nanosecond accuracy, on a modern CPU. The logic gates involved in bringing you all this wonderful technology have delays in the pico- to femtosecond range. So my point is that even if you wanted femtosecond precision, you could not get it out of a CPU.
Also, computers are not getting faster; clock-speed-wise they are getting slower.
Combuster, by the time you get that speed, Intel probably will not exist.
When updating the time, record the TSC value. Also record the delay, in cycles, of requesting the time through the kernel interface and returning it from the kernel (including the addition of this variable). When requesting the time, read the TSC, subtract the original TSC value, multiply by the time slice, divide by the processor frequency, add the result to the original time, add the offset too, and return the value. Nanosecond accuracy.
dave wrote: While having the ability to maintain femtosecond (10^-15) precision may be nice.
A 3 GHz processor at best gives you 333 picosecond (10^-12) resolution, and that's if all your processor did was keep setting your variable with updates. Given that you would probably like to run some code, the best accuracy you are likely to get is nanoseconds (10^-9). Even if your processor were amazing and could run every single instruction in one clock cycle, 10 instructions would take 3.3 nanoseconds to run. And if you're using C/C++ as your programming language, the pushing and popping of arguments could easily account for those 10 instructions.
So, while it is a nice idea to have this wonderful accuracy, the underlying hardware will not give you the precision you are trying to obtain.
Dave
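The TSC interpolation described above can be sketched as follows (the variable names are mine, and a real implementation would use 128-bit intermediates rather than risk the overflow in the multiply):

```c
#include <stdint.h>

/* Snapshot taken by the timer interrupt: wall time plus the TSC value
   at that moment, along with the measured CPU frequency and the
   measured cost of the time-request path itself. */
static uint64_t snap_time_ns;   /* wall time at the last timer tick */
static uint64_t snap_tsc;       /* TSC at the last timer tick */
static uint64_t cpu_hz;         /* processor frequency */
static uint64_t call_cost_ns;   /* cost of the request/return path */

/* Interpolate the current time from the cycles elapsed since the last
   snapshot: cycles / cpu_hz seconds = cycles * 10^9 / cpu_hz ns. */
static uint64_t current_time_ns(uint64_t tsc_now)
{
    uint64_t cycles = tsc_now - snap_tsc;
    return snap_time_ns + cycles * 1000000000ull / cpu_hz + call_cost_ns;
}
```

On a hypothetical 1 GHz part, 1000 elapsed cycles adds exactly 1000 ns to the snapshot time, plus the fixed call-cost offset.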
I'm not saying you cannot get nanosecond accuracy, but we're talking tens to hundreds of nanoseconds being what you are actually able to get. So taking that into account, you're asking for roughly 6 orders of magnitude (10^6) greater precision, which just is not going to happen with any modern hardware. Femtosecond accuracy is just not possible, nor is picosecond; and nanosecond accuracy, while possible, is not truly precise, because you're guessing how long the processor took to do something based on typical data. All I'm saying is that femtosecond accuracy, for say simulations, should be handled by the application, because it is truly application-specific and you would truly be simulating the time no matter what.