rdos wrote:
Why do programmers think the world started in 1970 or 2000? My 64-bit time with microsecond resolution starts at year 0, not 1970. I can interpret negative values as B.C. It will work long enough into the future that I don't need to worry about it.
The year 0 is just as arbitrary as 1970 or any other number. The world didn't start in the year 0 either.
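Either way, the choice of epoch only changes an additive constant. A minimal sketch of converting between the two conventions, assuming "year 0" means the proleptic Gregorian calendar (719,528 days before 1970-01-01); the function names are just for illustration:

[code]
#include <stdint.h>

/* Seconds from 0000-01-01T00:00:00 to 1970-01-01T00:00:00 in the
 * proleptic Gregorian calendar: 719528 days * 86400 s/day. */
#define YEAR0_TO_UNIX_SECONDS INT64_C(62167219200)

/* Microseconds since year 0 -> Unix seconds.  Division truncates toward
 * zero, so B.C. values not on a whole second land one second off. */
static int64_t year0_us_to_unix_sec(int64_t us_since_year0)
{
    return us_since_year0 / 1000000 - YEAR0_TO_UNIX_SECONDS;
}

/* Unix seconds -> microseconds since year 0. */
static int64_t unix_sec_to_year0_us(int64_t unix_sec)
{
    return (unix_sec + YEAR0_TO_UNIX_SECONDS) * 1000000;
}
[/code]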
nullplan wrote:
Why do people limit themselves to these small sizes? Modern UNIX variants use a 64-bit timestamp counting seconds since 1970, and another machine word for the sub-second part, down to nanoseconds. That will overflow long after the sun has expanded into a red giant.
I use a 128-bit nanosecond count for "system" time but a 64-bit nanosecond count for "monotonic" time, since I don't expect any single machine to run 24/7 for over 500 years.
For anything where microsecond precision is enough I still use 64-bit numbers, since I don't care what happens 100,000+ years in the future. Anything I make most likely won't still be in use by then anyway.
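As a rough sanity check of those limits (assuming a mean Gregorian year of 365.2425 days):

[code]
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const double secs_per_year = 365.2425 * 86400.0; /* mean Gregorian year */

    /* Years until each counter wraps. */
    printf("signed 64-bit nanoseconds:   ~%.0f years\n",
           (double)INT64_MAX / 1e9 / secs_per_year);   /* ~292      */
    printf("unsigned 64-bit nanoseconds: ~%.0f years\n",
           (double)UINT64_MAX / 1e9 / secs_per_year);  /* ~584      */
    printf("signed 64-bit microseconds:  ~%.0f years\n",
           (double)INT64_MAX / 1e6 / secs_per_year);   /* ~292,000  */
    return 0;
}
[/code]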
What I do wonder is why timespec separates seconds and nanoseconds instead of using a single 128-bit integer (or two 64-bit integers treated as one value). The split representation seems slightly more complicated for little (if any?) gain.
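The split form does map directly onto APIs that deal in whole seconds, while a flat count is easier to compare and subtract; converting between the two is one multiply and one divmod. A sketch, assuming the compiler's __int128 extension (GCC and Clang provide it):

[code]
#include <stdint.h>
#include <time.h>

typedef __int128 ns128_t;

/* timespec -> single nanosecond count. */
static ns128_t timespec_to_ns(struct timespec ts)
{
    return (ns128_t)ts.tv_sec * 1000000000 + ts.tv_nsec;
}

/* Single nanosecond count -> timespec.  For negative values the
 * remainder would have to be normalized into [0, 999999999];
 * that is skipped here for brevity. */
static struct timespec ns_to_timespec(ns128_t ns)
{
    struct timespec ts;
    ts.tv_sec  = (time_t)(ns / 1000000000);
    ts.tv_nsec = (long)(ns % 1000000000);
    return ts;
}
[/code]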
azblue wrote:
Another idea I had was using a double-precision floating-point value. As it moves further from its epoch it sacrifices precision for range. 285,000 years after its epoch it still manages millisecond precision -- not too bad! And it continues to maintain 1-second precision even 285 million years after its epoch -- not super precise, but good enough to keep track of time.
One-second precision is terrible for any application that needs sub-second timing. And if one-second precision is sufficient, it would make more sense to go straight for a plain integer count of seconds, which keeps the same precision over its entire range.
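You can see the degradation directly by printing the spacing between adjacent doubles at various distances from the epoch, assuming the value counts seconds:

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Offsets from the epoch, in seconds. */
    const double t[] = {
        1.0,                          /* 1 second            */
        55.0  * 365.2425 * 86400.0,   /* ~55 years           */
        285e3 * 365.2425 * 86400.0,   /* ~285,000 years      */
        285e6 * 365.2425 * 86400.0    /* ~285 million years  */
    };

    for (int i = 0; i < 4; i++) {
        /* Distance to the next representable double = achievable precision. */
        printf("t = %.3e s  ->  precision = %.3e s\n",
               t[i], nextafter(t[i], INFINITY) - t[i]);
    }
    return 0;
}
[/code]

That prints roughly 2e-16 s near the epoch, a couple of hundred nanoseconds at 55 years, about 2 ms at 285,000 years, and 1 s at 285 million years, which matches the numbers above.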