Future time epoch and floating point time stamps

azblue
Member
Posts: 147
Joined: Sat Feb 27, 2010 8:55 pm

Future time epoch and floating point time stamps

Post by azblue »

Years ago I was thinking about how to represent time on my OS. I wanted to combine date, time, and high-precision time all under one roof, so my initial thought was a 64-bit unsigned integer counting nanoseconds since midnight on January 1st, 2000. That has a period of just over 584 years, so after 2584 a new standard would be needed.

Now, obviously, no matter how you record time there comes a point when it rolls over. Nevertheless, I started thinking of ways to extend the life of a time stamp, and one idea I came up with that I believe is somewhat unique is setting the time epoch in the future. If the value returned is a signed integer counting the nanoseconds "since" Jan 1st, 2300, then current readings would give a negative number. After 2300 it can be returned as an unsigned integer, allowing it to continue counting until 2884 before a new standard is needed. You cannot compare readings across the entire ~900-year range -- readings after ~2592 cannot be compared to readings before 2300 -- but, so long as you're not comparing readings very far apart, it does allow a very long range without the need for a new standard.
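A minimal sketch of the idea in C -- the epoch constant and values below are made up purely for illustration:

Code: Select all

#include <stdint.h>
#include <stdio.h>

/* Hypothetical epoch: nanoseconds counted relative to 2300-01-01.
 * Before 2300 the stamp is read as a signed value (negative);
 * after 2300 the same bits can be read as unsigned, stretching the
 * usable range out to roughly 2884. */
#define NS_PER_YEAR 31556952000000000ull  /* average Gregorian year */

int main(void)
{
    int64_t now_signed = -(int64_t)(275 * NS_PER_YEAR); /* roughly the year 2025 */
    uint64_t same_bits = (uint64_t)now_signed;          /* unsigned view of the same stamp */

    printf("signed view:   %lld ns relative to 2300\n", (long long)now_signed);
    printf("unsigned view: %llu ns (only meaningful after 2300)\n",
           (unsigned long long)same_bits);

    /* Comparisons stay valid as long as both stamps are interpreted the
     * same way; mixing a pre-2300 (signed) reading with a post-2592
     * (unsigned-only) reading is exactly the case ruled out above. */
    return 0;
}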

Another idea I had was using a double precision floating point value. As it moves beyond its time epoch it sacrifices precision for larger numbers. 285,000 years after its time epoch it still manages millisecond precision -- not too bad! And it continues to maintain 1-second precision even 285 million years after its time epoch -- not super precise, but good enough to keep track of time.
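A quick way to sanity-check those figures -- this just measures the spacing between adjacent doubles at a given offset from the epoch, assuming the double counts seconds:

Code: Select all

#include <math.h>
#include <stdio.h>

/* Spacing between representable doubles near t seconds from the epoch. */
static double spacing(double t)
{
    return nextafter(t, INFINITY) - t;
}

int main(void)
{
    double year = 31556952.0;  /* average Gregorian year in seconds */
    printf("285,000 years out:     ~%g s between stamps\n", spacing(285000.0 * year));
    printf("285 million years out: ~%g s between stamps\n", spacing(285e6 * year));
    return 0;
}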
nullplan
Member
Posts: 1779
Joined: Wed Aug 30, 2017 8:24 am

Re: Future time epoch and floating point time stamps

Post by nullplan »

Why do people limit themselves to these small sizes? Modern UNIX variants use a 64-bit timestamp counting seconds since 1970, and another machine word for the sub-second part, down to nanoseconds. That will overflow around the time the sun starts expanding to a brown giant.

So yeah, just use a 96 bit number, like a normal person. :wink:
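For reference, the split layout described above looks roughly like this -- a sketch, not any particular OS's actual definition:

Code: Select all

#include <stdint.h>

/* 64-bit seconds since 1970 plus a 32-bit sub-second part in nanoseconds:
 * the "96 bit" layout mentioned above. */
struct split_time {
    int64_t  sec;   /* seconds since 1970-01-01, signed */
    uint32_t nsec;  /* 0..999,999,999 */
};

/* Compare two stamps; assumes both are normalized (nsec < 1e9). */
static int split_time_cmp(struct split_time a, struct split_time b)
{
    if (a.sec != b.sec)
        return a.sec < b.sec ? -1 : 1;
    if (a.nsec != b.nsec)
        return a.nsec < b.nsec ? -1 : 1;
    return 0;
}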
Carpe diem!
rdos
Member
Posts: 3288
Joined: Wed Oct 01, 2008 1:55 pm

Re: Future time epoch and floating point time stamps

Post by rdos »

Why do programmers think the world started in 1970 or 2000? My 64-bit time with microsecond resolution starts at year 0, not 1970. I can interpret negative values as B.C. It will work long enough into the future that I don't need to worry about it.
Demindiro
Member
Posts: 96
Joined: Fri Jun 11, 2021 6:02 am
Libera.chat IRC: demindiro
Location: Belgium

Re: Future time epoch and floating point time stamps

Post by Demindiro »

rdos wrote:Why do programmers think the world started in 1970 or 2000? My 64-bit time with microsecond resolution starts at year 0, not 1970. I can interpret negative values as B.C. It will work long enough into the future that I don't need to worry about it.
The year 0 is just as arbitrary as 1970 or any other number. The world didn't start in the year 0 either.
nullplan wrote: Why do people limit themselves to these small sizes? Modern UNIX variants use a 64-bit timestamp counting seconds since 1970, and another machine word for the sub-second part, down to nanoseconds. That will overflow around the time the sun starts expanding to a brown giant.
I use a 128 bit number in nanoseconds for "system" time but a 64 bit number in nanoseconds for "monotonic" time as I don't expect any single machine to run 24/7 for over 500 years.

For anything that is fine with microsecond precision I still use 64 bit numbers as I don't care what happens 100 000+ years in the future. Anything I made most likely won't be in use by then anyways.

What I do wonder is why timespec separates seconds and nanoseconds instead of using a 128-bit integer (or two 64-bit integers). The former seems slightly more complicated for little (if any?) gain.
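For what it's worth, the conversion the split format forces you to write is short but easy to get wrong -- a sketch using the GCC/Clang __int128 extension, since standard C has no 128-bit integer type:

Code: Select all

#include <time.h>

/* struct timespec <-> a single 128-bit nanosecond count since the epoch. */
static __int128 timespec_to_ns(struct timespec ts)
{
    return (__int128)ts.tv_sec * 1000000000 + ts.tv_nsec;
}

static struct timespec ns_to_timespec(__int128 ns)
{
    /* Note: a negative count would need the remainder folded back
     * into the 0..999,999,999 range. */
    struct timespec ts;
    ts.tv_sec  = (time_t)(ns / 1000000000);
    ts.tv_nsec = (long)(ns % 1000000000);
    return ts;
}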
azblue wrote: Another idea I had was using a double precision floating point value. As it moves beyond its time epoch it sacrifices precision for larger numbers. 285,000 years after its time epoch it still manages millisecond precision -- not too bad! And it continues to maintain 1-second precision even 285 million years after its time epoch -- not super precise, but good enough to keep track of time.
1-second precision is terrible for any application that requires higher precision. If 1-second precision is sufficient then it would make more sense to go straight for an integer representing seconds.
My OS is Norost B (website, Github, sourcehut)
My filesystem is NRFS (Github, sourcehut)
rdos
Member
Posts: 3288
Joined: Wed Oct 01, 2008 1:55 pm

Re: Future time epoch and floating point time stamps

Post by rdos »

Demindiro wrote:
rdos wrote:Why do programmers think the world started in 1970 or 2000? My 64-bit time with microsecond resolution starts at year 0, not 1970. I can interpret negative values as B.C. It will work long enough into the future that I don't need to worry about it.
The year 0 is just as arbitrary as 1970 or any other number. The world didn't start in the year 0 either.
Time is relative, but if you use 1970 as the base and unsigned numbers, then you cannot represent anything before 1970. At least my timekeeping can handle a few thousand years back in time and a thousand years forward.

I don't think an OS can keep nanosecond precision on events, but most modern hardware can handle microseconds pretty well.
nullplan
Member
Posts: 1779
Joined: Wed Aug 30, 2017 8:24 am

Re: Future time epoch and floating point time stamps

Post by nullplan »

rdos wrote:Time is relative, but if you use 1970 as the base and unsigned numbers, then you cannot represent anything before 1970.
Have you ever heard of signed integers? The original UNIX format (with a signed 32-bit number of seconds starting in 1970) can represent all times from 1902 or so until 2038. The current one (with a signed 64-bit integer) can cover all of human history. Actually, if I've not miscalculated, it can handle the entire history of the universe from beginning to likely end.
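The ranges are easy to check with rough arithmetic (ignoring leap seconds and calendar details):

Code: Select all

#include <stdio.h>

int main(void)
{
    double year = 31556952.0;            /* average Gregorian year in seconds */
    double s32  = 2147483648.0;          /* 2^31 seconds */
    double s64  = 9223372036854775808.0; /* 2^63 seconds */

    printf("signed 32-bit: +/- %.1f years around 1970\n", s32 / year);
    printf("signed 64-bit: +/- %.1f billion years around 1970\n", s64 / year / 1e9);
    return 0;
}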
rdos wrote:I don't think an OS can keep nanosecond precision on events, but most modern hardware can handle microseconds pretty well.
Another reason for the 96-bit split timestamp: To separate the precise part (seconds) from the imprecise (nanoseconds). Originally, the idea was that no hardware would ever actually get to nanosecond precision, but you can normalize whatever counter the hardware provides into nanoseconds without loss of precision.

Of course, a counter with a 1 ns period has a frequency of 1 GHz, which is very much achievable with modern electronics.
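Normalizing whatever the hardware hands you is just a scale by the counter frequency. A sketch, using the GCC/Clang __int128 extension so the multiply cannot overflow:

Code: Select all

#include <stdint.h>

/* Convert a raw counter value into nanoseconds, given the counter
 * frequency in Hz. The result is truncated to whole nanoseconds, which
 * loses nothing from the counter itself as long as it is no finer than
 * 1 ns (freq_hz <= 1 GHz). */
static uint64_t ticks_to_ns(uint64_t ticks, uint64_t freq_hz)
{
    return (uint64_t)((unsigned __int128)ticks * 1000000000u / freq_hz);
}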
Carpe diem!
rdos
Member
Posts: 3288
Joined: Wed Oct 01, 2008 1:55 pm

Re: Future time epoch and floating point time stamps

Post by rdos »

nullplan wrote:
rdos wrote:Time is relative, but if you use 1970 as the base and unsigned numbers, then you cannot represent anything before 1970.
Have you ever heard of signed integers? The original UNIX format (with a signed 32-bit number of seconds starting in 1970) can represent all times from 1902 or so until 2038.
A smarter usage of it is to consider everything to be after 1970, and then you can regard "negative numbers" as after 2038 instead.
nullplan wrote:The current one (with a signed 64-bit integer) can cover all of human history. Actually, if I've not miscalculated, it can handle the entire history of the universe from beginning to likely end.
Extending it to 64 bits while still keeping second resolution isn't very smart either. You get poor resolution, and the higher 32 bits are not used for anything significant.
nullplan wrote:
rdos wrote:I don't think an OS can keep nanosecond precision on events, but most modern hardware can handle microseconds pretty well.
Another reason for the 96-bit split timestamp: To separate the precise part (seconds) from the imprecise (nanoseconds). Originally, the idea was that no hardware would ever actually get to nanosecond precision, but you can normalize whatever counter the hardware provides into nanoseconds without loss of precision.

Of course, a counter with a 1 ns period has a frequency of 1 GHz, which is very much achievable with modern electronics.
I think this is only achievable for dedicated hardware. An OS cannot schedule threads so they achieve timing with ns resolution, unless it lets the thread run completely uninterrupted. Microsecond resolution should be achievable, at least with higher-than-normal priority.
JAAman
Member
Posts: 879
Joined: Wed Oct 27, 2004 11:00 pm
Location: WA

Re: Future time epoch and floating point time stamps

Post by JAAman »

rdos wrote:Why do programmers think the world started in 1970 or 2000? My 64-bit time with microsecond resolution starts at year 0, not 1970. I can interpret negative values as B.C.
there never was a year 0: year 1 BC was followed by year 1 AD

it does make date math with history a little inconvenient
linguofreak
Member
Posts: 510
Joined: Wed Mar 09, 2011 3:55 am

Re: Future time epoch and floating point time stamps

Post by linguofreak »

JAAman wrote:
rdos wrote:Why do programmers think the world started in 1970 or 2000? My 64-bit time with microsecond resolution starts at year 0, not 1970. I can interpret negative values as B.C.
there never was a year 0: year 1 BC was followed by year 1 AD

it does make date math with history a little inconvenient
Actually, astronomers use the AD epoch with a year zero instead of 1 BC, and negative years before that. ISO 8601 also specifies signed years and a year zero, though dates prior to the introduction of the Gregorian calendar are only supposed to be used by prior agreement of the communicating parties.
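The mapping is just an off-by-one, e.g.:

Code: Select all

/* Convert an "n BC" year to astronomical (ISO 8601) numbering:
 * 1 BC -> 0, 2 BC -> -1, and so on. AD years are used as-is. */
static int bc_to_astronomical(int bc_year)
{
    return 1 - bc_year;
}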

"There never was a year zero" does have some truth to it, though. Generally, it's people that come later that define a given date as a calendar epoch. The people alive at the time are generally using some other calendar, so there may very well have never been a year zero that was called year zero by anyone alive at the time.
linguofreak
Member
Posts: 510
Joined: Wed Mar 09, 2011 3:55 am

Re: Future time epoch and floating point time stamps

Post by linguofreak »

With a 128-bit seconds part and a 128-bit fractional seconds part, you can get *almost* all the way down to the Planck time for resolution, and not suffer a rollover until all the stars that will ever exist are dead. 256 bits should be enough for anybody. If you ever have a clock with resolution on the order of the Planck time, another 16 bits on the fractional second part will get you all the resolution that you'll ever need.
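Rough numbers behind that, as a back-of-the-envelope check:

Code: Select all

#include <math.h>
#include <stdio.h>

int main(void)
{
    double planck   = 5.39e-44;          /* Planck time in seconds (approx.) */
    double frac_lsb = ldexp(1.0, -128);  /* smallest step of a 128-bit fraction */
    double max_sec  = ldexp(1.0, 127);   /* largest signed 128-bit second count */
    double year     = 31556952.0;

    printf("128-bit fraction step: %.3g s (%.1f bits short of Planck time)\n",
           frac_lsb, log2(frac_lsb / planck));
    printf("128-bit seconds range: about %.2g years\n", max_sec / year);
    return 0;
}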
rdos
Member
Posts: 3288
Joined: Wed Oct 01, 2008 1:55 pm

Re: Future time epoch and floating point time stamps

Post by rdos »

linguofreak wrote:With a 128-bit seconds part and a 128-bit fractional seconds part, you can get *almost* all the way down to the Planck time for resolution, and not suffer a rollover until all the stars that will ever exist are dead. 256 bits should be enough for anybody. If you ever have a clock with resolution on the order of the Planck time, another 16 bits on the fractional second part will get you all the resolution that you'll ever need.
It needs to be practical too. This is a bit like extending "Julian time" from 32 bits to 64 bits. Lots of the bits will be unused at the high end, and at the low end they are meaningless, since such precision is not attainable by a real-world OS.
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Future time epoch and floating point time stamps

Post by AndrewAPrice »

I'm building a microkernel and I save a 64-bit counter of microseconds since the kernel started, which is good enough for me. I like that it's a plain old 64-bit number; since I'm targeting 64-bit architectures, it's super fast and I don't need to do any large-number math.

Services can use whatever representation they want. I'll probably go with the 64-bit Unix epoch since it will require the fewest conversions.
My OS is Perception.
nullplan
Member
Posts: 1779
Joined: Wed Aug 30, 2017 8:24 am

Re: Future time epoch and floating point time stamps

Post by nullplan »

rdos wrote:A smarter usage of it is to consider everything to be after 1970, and then you can regard "negative numbers" as after 2038 instead.
But... then you get exactly the other problem you mentioned.
rdos wrote:Extending it to 64 bits while still keeping second resolution isn't very smart either. You get poor resolution, and the higher 32 bits are not used for anything significant.
That was never the point. The point was always compatibility. time_t used to be a signed 32-bit type. Any change to it will break something. Change the signedness and you can no longer represent historical times that may still be important. Change the size and you are breaking a lot of other stuff. But fundamentally, since both things will break existing software, the UNIX community as a whole went the way that solves the problem completely.

The high 32 bits will become significant in 2038, when they cease being a sign extension. Sure, the problem could have been pushed back with a change of signedness, but then it would just reappear in 2100 or so. With the change to a 64-bit type (which is the smallest sensible type larger than 32 bits), the problem is solved forever.

And resolution wasn't an issue. If you want resolution, you don't use time_t, since that is defined with second-precision. You use struct timespec, which is defined with an extra part that has nanoseconds.
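For reference, the POSIX shape is roughly this (illustrative copy; the real definition lives in <time.h>):

Code: Select all

#include <time.h>

struct timespec_shape {  /* same layout as struct timespec */
    time_t tv_sec;       /* whole seconds */
    long   tv_nsec;      /* nanoseconds, 0..999,999,999 */
};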
Carpe diem!
SeaLiteral
Posts: 19
Joined: Wed Sep 27, 2017 1:44 pm

Re: Future time epoch and floating point time stamps

Post by SeaLiteral »

When I get to implementing EXT-2, I might consider using the OS-specific values to extend the range. Those only exist per inode, so for things like "last mount time" I'll just have to consider the times there ambiguous. But I can think of at least three features I want to add before I add a filesystem, and I think the first filesystem I add will probably be USTAR (which seems to use 36-bit timestamps; since those are whole seconds, that gives it a millennium or so before it rolls over, which I hope is enough time for me to implement another filesystem).

But in the long run I kinda want to try rolling my own filesystem, just like I kinda want to try rolling my own bootloader sometime. I figured using a few existing things until I get my own replacements working was a good idea, though. I just have to hope I've implemented EXT-2 before 2038, so I'm not stuck making disk images with tools that can't encode the times at which I'm actually using them.

[Edit: I do wonder if computers in the future might have more precise timers making nanosecond timestamps accurate and possibly even justifying having such timestamps in a filesystem. Then again, I don't see a need to implement UTF-64 at this time.]
rdos
Member
Posts: 3288
Joined: Wed Oct 01, 2008 1:55 pm

Re: Future time epoch and floating point time stamps

Post by rdos »

nullplan wrote: And resolution wasn't an issue. If you want resolution, you don't use time_t, since that is defined with second-precision. You use struct timespec, which is defined with an extra part that has nanoseconds.
I don't use any of them. I use my own time&date routines.

Also, I don't see why the divide should be at seconds. It only creates a lot of problems. In my time format, the least significant 32 bits are fractions of an hour, while the most significant part is the number of hours. This means the resolution is a bit better than a microsecond, and it can represent periods 3600 times longer than time_t can handle. This encoding simply uses all the space in a 64-bit number effectively, and provides decent resolution and long enough time spans.
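A sketch of that layout and its resolution -- the helper names are made up for illustration:

Code: Select all

#include <stdint.h>

/* 64-bit timestamp: high 32 bits = hours, low 32 bits = fraction of an hour.
 * One fraction step is 3600 / 2^32 s, roughly 0.84 microseconds. */
static uint64_t make_stamp(uint32_t hours, double seconds_into_hour)
{
    /* seconds_into_hour must be in [0, 3600) */
    uint32_t frac = (uint32_t)(seconds_into_hour / 3600.0 * 4294967296.0);
    return ((uint64_t)hours << 32) | frac;
}

static double stamp_to_seconds(uint64_t stamp)
{
    return (double)(stamp >> 32) * 3600.0
         + (double)(uint32_t)stamp * (3600.0 / 4294967296.0);
}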