Hi,
LtG wrote:Brendan wrote:For UEFI's "setTime()" I really don't know what best practice is. On one hand; it'd be nice for an OS to use (e.g.) NTP to update UEFI's time source if necessary. On the other hand; it's a resource shared by all installed OS and no OS should assume it has permission to modify any "shared by all OSs" resource (even if requested by that specific OS's "admin" as there's no guarantee the person is more than just "anonymous guest" for other OSs on the computer); the firmware can (should?) use NTP to update UEFI's time keeping itself so that no OS has to; and if an OS is using NTP anyway then it has little reason to care about UEFI's time.
What do you mean "firmware can"? From a purely technical perspective (that yes, of course it could be done), or from a practical perspective (that the firmware is already doing significant network operations on its own without any supervision)? I know there's already some anti-theft and other things out there, but I certainly don't want the firmware to start utilizing the network on its own; it has no realistic way of knowing any of the consequences (including financial costs involved with using the network - I think Apple made a mistake with that on iPhones while roaming).
For the practical perspective, it depends too much on the scenario - a desktop machine connected to company LAN (e.g. with a local NTP server and corresponding DHCP option) is very different to a laptop (e.g. with intermittent networking). In any case, if the OS itself is using NTP anyway, then I'd be more worried about the time it'd take for firmware to use NTP too and less worried about any network bandwidth costs if/when the user enables the firmware's "use NTP to correct system time" option.
LtG wrote:However, if an OS utilizes NTP that doesn't mean the RTC time is useless. Consider the RTC having drifted until it's 15 minutes fast: you turn on your laptop without network, and now the OS has incorrect time. Or the RTC is 1h fast while you're in a foreign country (which makes network access unlikely initially) and relying on the laptop's timezone correction, making it quite possible that you won't notice the 1h mistake immediately... So I would like the RTC to be maintained. UEFI of course should have fixed this by giving the time some semantics (I haven't read the specs, so maybe they did?), such as following UTC with no DST and other stupidity, or something else; but so long as it had semantics, everybody would know if/when/how to update it.
UEFI gives you date and time, plus a "time zone offset from GMT", plus a flag for whether the time should be affected by daylight savings, plus a flag to indicate if UEFI thinks it's currently in daylight savings or not. The problem here is that the OS is responsible for adjusting UEFI's clock for daylight savings if it hasn't already been adjusted; and this is impossible to do properly when the only information you have is "time zone offset from GMT" (you can't figure out which time zone it actually is, or even whether it's in the northern or southern hemisphere); and it can't work for the "travelling laptop" case either.
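For reference, the structure UEFI hands you (paraphrased from the UEFI spec's GetTime()/SetTime() definition; the comments are mine) looks like this:

Code:
typedef struct {
    UINT16 Year;        // 1900..9999
    UINT8  Month;       // 1..12
    UINT8  Day;         // 1..31
    UINT8  Hour;        // 0..23
    UINT8  Minute;      // 0..59
    UINT8  Second;      // 0..59
    UINT8  Pad1;
    UINT32 Nanosecond;  // 0..999999999
    INT16  TimeZone;    // offset in minutes from UTC, or 2047 for "unspecified"
    UINT8  Daylight;    // EFI_TIME_ADJUST_DAYLIGHT (0x01), EFI_TIME_IN_DAYLIGHT (0x02)
    UINT8  Pad2;
} EFI_TIME;

Note that nothing in there tells you which time zone the offset actually belongs to - which is exactly the problem described above.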
Note that I'd like to see "firmware always set to UTC with no exceptions", as this avoids all time zone and daylight savings problems. For laptops I'd also like to see GPS hardware built in by default, so the OS can get time and location and auto-adjust time zone, etc.
LtG wrote:Brendan wrote:First, let's define some things. Let's define "drift" as how well the timer measures extremely long durations (e.g. a timer that can measure an entire year within +/- 1 second has extremely low drift); and let's define "precision" as the granularity of a timer's measurements.
Now let's define "accuracy" as how well the timer can measure short durations. Accuracy is a combination of both drift and granularity. For example, if you have a time source that has extremely low drift and extremely low precision (e.g. only measures seconds) then you aren't going to be able to use it to accurately measure 1234 nanoseconds. For another example, if you have a time source that has extremely good precision but very bad drift, then you aren't going to be able to use it to accurately measure 4 weeks.
The RTC is supposed to be the local time source with the least drift (because all other local time sources completely lose track of time when the computer is turned off). However, the RTC also has relatively poor precision - by setting the RTC periodic IRQ to 8000 Hz you can get 125 us precision (with extremely high "IRQs per second" overhead). In comparison, the CPU's Time Stamp Counter has relatively poor drift, but is the most precise time source there is (typically better than 1 nanosecond). HPET is somewhere in between, with reasonable drift and precision of typically 100 nanoseconds.
This is where the essential problem is - they're all a compromise between drift and precision, and nothing has both good drift and good precision, so nothing has good accuracy.
The solution is to combine time sources. By using one time source to keep a second time source synchronized, you can get the drift from one time source and the precision from the other time source. This means that the best solution would be to use RTC (best for drift) to keep the CPU's Time Stamp Counter (best for precision) synchronized; so that you're combining "best drift with best precision" to get the maximum possible accuracy.
This same thinking applies to NTP, which has even better drift and even worse precision. This leads to the "triple layer" solution (that is common practice); where NTP is used to keep RTC synchronized, and RTC is used to keep something else (e.g. TSC) synchronized.
I know the difference between accuracy and precision, did I switch them accidentally?
Everyone defines them differently, which tends to cause lots of confusion. That's why I've explicitly defined exactly what I mean.
LtG wrote:In any case, my point was exactly that: is there a difference in accuracy between RTC, HPET, TSC, etc? I was just curious if you knew how much that difference would be.
For drift, typically RTC and HPET are similar and "least worst". Because TSC is typically derived from "CPU bus clock" (and there's no real need for its clock source to be "low drift") the TSC can have very bad drift.
For precision, you'll never find anything more precise than the TSC.
For accuracy, because it's a compromise between drift and precision everything is bad; but HPET is probably the "least worst" (likely reasonable drift with precision that's not as good as TSC but not as bad as RTC). Of course you can't/shouldn't assume HPET exists.
LtG wrote:For TSC there's also the technical problem of figuring out which kind of TSC it is (not sure if the info is always reliable); but assuming the TSC does not vary with CPU clock, is the problem with TSC not knowing its exact frequency, or does it just fluctuate too much to be accurate?
There's multiple different problems here. The first is determining if the CPU's TSC is fixed frequency; which is no different to determining if the CPU supports any other feature (test a CPUID flag - for "invariant TSC" it's CPUID 0x80000007, EDX bit 8 - and fix up any/all cases where CPUs have errata that cause them to misreport).
The second problem is figuring out which frequency it's running at initially; which involves calibrating it with some other timer during boot (e.g. RTC, PIT, HPET, ...).
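As a rough sketch of that calibration (assuming ring 0, and assuming outb()/inb() port I/O helpers that aren't part of standard C), using PIT channel 2 because it can be gated and polled via port 0x61 without needing an IRQ:

Code:
#include <stdint.h>

extern void outb(uint16_t port, uint8_t value);   /* your kernel's port I/O */
extern uint8_t inb(uint16_t port);

static uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* Estimate the TSC frequency in Hz by timing one full PIT countdown */
uint64_t calibrate_tsc_hz(void)
{
    outb(0x61, (inb(0x61) & ~0x02) | 0x01);   /* gate 2 on, speaker off */
    outb(0x43, 0xB0);                         /* channel 2, lo/hi byte, mode 0 */
    outb(0x42, 0xFF);                         /* reload = 0xFFFF, */
    outb(0x42, 0xFF);                         /*  ~54.9 ms at 1193182 Hz */

    uint64_t start = rdtsc();
    while ((inb(0x61) & 0x20) == 0) {}        /* wait for OUT2 to go high */
    uint64_t end = rdtsc();

    return (end - start) * 1193182ULL / 65535ULL;
}

In practice you'd repeat this a few times and discard outliers (an SMI arriving in the middle of the countdown will stretch the measurement).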
The third problem (even with "TSC is fixed frequency") is drift. This is why you need to regularly synchronise TSC (for each CPU) with some other time source.
The fourth problem is that (even with "TSC is fixed frequency") on some CPUs the TSC stops ticking when the CPU enters certain (deep) sleep states; which means you need to synchronise with some other time source when you take the CPU out of a sleep state.
The fifth problem (even with "TSC is fixed frequency") is that different CPUs can have different TSC values. The correct way to solve this is to have a separate "TSC offset" and "TSC multiplier" for each CPU; so that the kernel can do "this CPU's current TSC value * this CPU's TSC multiplier + this CPU's TSC offset" and arrive at the same answer on any CPU. Note that if the CPU's TSC doesn't run at a fixed frequency (and is affected by power management) the OS can adjust the CPU's "TSC multiplier" when it changes CPU speed (and if/when it receives the local APIC's "thermal monitor interrupt").
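A minimal sketch of that per-CPU calculation (the names here are mine, not from any real kernel; 32.32 fixed point is just one reasonable choice):

Code:
#include <stdint.h>

struct per_cpu_time {
    uint64_t tsc_mult;    /* nanoseconds per TSC tick, 32.32 fixed point */
    uint64_t tsc_offset;  /* correction so every CPU reports the same time */
};

/* "this CPU's current TSC value * this CPU's TSC multiplier
    + this CPU's TSC offset" */
static inline uint64_t system_time_ns(const struct per_cpu_time *cpu,
                                      uint64_t tsc)
{
    return (uint64_t)(((__uint128_t)tsc * cpu->tsc_mult) >> 32)
           + cpu->tsc_offset;
}

When the OS changes a CPU's speed it'd recalculate that CPU's tsc_mult (and adjust tsc_offset at the same time, so the reported time doesn't jump at the instant of the change).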
Also note that Intel have been warning against allowing user-space software to access the TSC directly ever since it was introduced (in Pentium CPUs) for security reasons (the precision it provides leaves an OS wide open for timing side-channel attacks); so you should disable the ability to access TSC at CPL=3 (there's a "Time Stamp Disable" flag in CR4 for this). This also means that the kernel is free to do its TSC adjustments without worrying about user-space getting confused; and that (if it wants to) the kernel can emulate the "RDTSC" instruction in the general protection fault exception handler (possibly making RDTSC return some other time, like "nanoseconds of CPU time given to this thread since it was started").
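Setting that flag is trivial at boot; a sketch (64-bit, ring 0):

Code:
#include <stdint.h>

/* Set CR4.TSD (bit 2) so RDTSC/RDTSCP at CPL=3 raise a general protection
   fault; the #GP handler can then emulate RDTSC however the kernel likes */
static inline void disable_user_rdtsc(void)
{
    uint64_t cr4;
    __asm__ __volatile__("mov %%cr4, %0" : "=r"(cr4));
    cr4 |= 1ULL << 2;   /* CR4.TSD, "Time Stamp Disable" */
    __asm__ __volatile__("mov %0, %%cr4" : : "r"(cr4) : "memory");
}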
LtG wrote:My point being that TSC would probably be the optimal source (is any of the other ones faster to query?) if it doesn't fluctuate much (at all). Even if its exact frequency is not known, NTP could possibly be used to find it out accurately enough that the TSC could be used more accurately than the RTC, which really isn't all that accurate. I decided to test my laptop this week; I'll let the clock run for a week with NTP disabled. So far it's been around 16h since I disabled NTP and it seems to be about 2s fast; based on that it would be 21s fast in just one week, which I do find to be too much.
The other thing that hasn't been mentioned yet is "other capabilities" (e.g. if the timer can be setup for a periodic IRQ, and if the timer can be setup for "one-shot IRQ on terminal count"). For example, you can't (at least not easily) setup the RTC for "one-shot IRQ on terminal count". This is important for implementing things like "sleep()", scheduling, networking time-outs, etc.
For older CPUs it's impossible to get TSC to generate an IRQ; which means you need yet another timer (local APIC timer, HPET, PIT). For newer CPUs, Intel added a "TSC deadline mode" to the local APIC timer so that you can use the TSC for "one-shot IRQ on terminal count" (which makes it extremely awesome, if it's synchronised well).
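As a sketch of that (assuming the xAPIC's registers are already mapped at lapic, a wrmsr() helper exists, and you've checked CPUID 0x01, ECX bit 24 to confirm TSC-deadline support):

Code:
#include <stdint.h>

#define IA32_TSC_DEADLINE       0x6E0       /* MSR holding the deadline */
#define LVT_TIMER               0x320       /* local APIC LVT timer register */
#define LVT_TIMER_TSC_DEADLINE  (2U << 17)  /* timer mode = TSC-deadline */

extern volatile uint32_t *lapic;            /* mapped local APIC registers */
extern void wrmsr(uint32_t msr, uint64_t value);

/* One-shot "IRQ on vector when TSC >= deadline"; writing 0 disarms it */
void arm_timer_at(uint64_t deadline_tsc, uint8_t vector)
{
    lapic[LVT_TIMER / 4] = LVT_TIMER_TSC_DEADLINE | vector;
    wrmsr(IA32_TSC_DEADLINE, deadline_tsc);
}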
LtG wrote:Brendan wrote:The first problem with the RTC is figuring out which time it keeps - local time (for which time zone?), or UTC, or TAI, or something else? Historically (for MS-DOS and "inherited" by Windows) the RTC kept track of local time, and because of this it screws up daylight savings twice per year unless there's "one and only one" OS adjusting it for daylight savings.
As I mentioned earlier, I think that for the time your OS runs it could calculate the difference; that way the RTC would be more accurate (how much depends on how much time is spent powered on vs. off).
And if it is set to local time and is affected by daylight savings, you have no way to know if some other OS adjusted it for daylight savings or not; so even if the OS keeps track of a "difference between RTC and current time", your OS's timing will still get messed up twice per year.
LtG wrote:So I think the only sane thing to do is to make it a system setting, so the admin can decide if it's ok to update the RTC or not.
Ideally this should be a firmware setting set by "system owner" and not set by the admin of any OS.
Imagine you have 5 family members; and one shared computer that has no hard drives (that is configured by "system owner" to boot from USB). Each family member has their own USB flash stick with their OS installed on it. Now you have 5 different "admins". Which admin decides if it's ok to update the RTC or not on which OS?
LtG wrote:I certainly don't want to live in a world where the RTC can't be updated (even though most people only use one OS) and should/must be allowed to drift aimlessly. Realistically, when taking a new computer into use, I'll set the time if it's not already correct, but I don't want to have to boot to BIOS/UEFI to set it..
I just run an NTP server on my LAN, and tell all the other computers to get their time from my local NTP server.
Cheers,
Brendan