Why does this get the time when assembled as an EXE program but not work when I include it in my kernel code?

Code: Select all
mov ah, 0x2C    ; DOS function 2Ch: get the system time
int 0x21

Is this function getting the time from the OS, so that it will not work in a kernel since there is no OS to get the time from? If yes, how could I get the time? I'm working with NASM.
Mohamed007 wrote:
Why does this get the time when assembled as an EXE program but not work when I include it in my kernel code?
Code: Select all
mov ah, 0x2C
int 0x21
Is this function getting the time from the OS, so that it will not work in a kernel since there is no OS to get the time from? If yes, how could I get the time? I'm working with NASM.

I think that int 0x21 is a DOS call.
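For reference, a minimal sketch of that call as it behaves under DOS (the register meanings are the documented DOS API for function 2Ch), and why it can't work inside a kernel:

Code: Select all
; NASM, 16-bit, running under DOS
mov ah, 0x2C        ; DOS function 2Ch = "get system time"
int 0x21            ; on return: CH = hour, CL = minutes,
                    ;            DH = seconds, DL = 1/100 seconds
; In your own kernel nothing is listening behind INT 21h (the vector
; points at your own handler, or at nothing), so there is nobody to
; put the time in those registers - you must read the clock hardware.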
zenzizenzicube wrote:
I think that int 0x21 is a DOS call.

Oh, you want to get the time/date, right? You need to implement it yourself; maybe look at the wiki page for CMOS.
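As a starting point, here's a minimal NASM sketch of the kind of CMOS access the wiki page describes: the RTC sits behind I/O ports 0x70 (register select) and 0x71 (data). It assumes the common defaults of BCD values and a 24-hour clock (status register B says which formats are in use); a real implementation should also read until two passes agree, since the clock can update between reads:

Code: Select all
read_rtc:
.wait:
    mov al, 0x0A        ; status register A
    out 0x70, al
    in  al, 0x71
    test al, 0x80       ; bit 7 = "update in progress"
    jnz .wait           ; wait until the RTC isn't mid-update

    mov al, 0x00        ; register 0x00 = seconds
    out 0x70, al
    in  al, 0x71
    mov [second], al

    mov al, 0x02        ; register 0x02 = minutes
    out 0x70, al
    in  al, 0x71
    mov [minute], al

    mov al, 0x04        ; register 0x04 = hours
    out 0x70, al
    in  al, 0x71
    mov [hour], al
    ret

second: db 0
minute: db 0
hour:   db 0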
octacone wrote:
CMOS is not the best idea, once you get networking set up you should use that.

You're right in saying CMOS is not the best idea, but in the beginning stages it's easy enough to use, and it can even replace the PIT, as the CMOS RTC can generate IRQs at equal time intervals as well (although this doesn't work on some hardware, including my laptop). The CMOS should be used just for the time, and otherwise the OS should avoid touching it, because one write to an invalid register can destroy your BIOS settings.
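For the "CMOS can generate IRQs at equal time intervals" part, a sketch of enabling the RTC's periodic interrupt (IRQ 8, which defaults to 1024 Hz; the 0x80 bit in the index value keeps NMI disabled while the registers are being touched):

Code: Select all
    cli
    mov al, 0x8B        ; select status register B (0x0B), NMI disabled
    out 0x70, al
    in  al, 0x71        ; read the current value
    mov bl, al
    mov al, 0x8B        ; re-select; reading 0x71 resets the index
    out 0x70, al
    mov al, bl
    or  al, 0x40        ; bit 6 = enable the periodic interrupt
    out 0x71, al
    sti

    ; In the IRQ 8 handler, status register C must be read,
    ; or the RTC will never fire another interrupt:
    mov al, 0x0C
    out 0x70, al
    in  al, 0x71        ; contents can be discarded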
Brendan wrote:
For UEFI there's a "getTime()" function (that returns the time including nanoseconds, the date, the timezone and the daylight savings offset).
Typically you'd want to get the time once during boot and convert it to UTC, then use a timer (e.g. the RTC periodic IRQ) to keep your "current time in UTC" current, either directly or indirectly (by using a faster/more precise timer like the CPU's TSC, HPET or local APIC timer, and then using the slower/more accurate timer to avoid drift). Once that's done, NTP can be used to synchronise the entire (local) thing with an external source.

I'm assuming getTime() is a run-time service? I haven't touched UEFI yet. Otherwise, how do you update the time?
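As a rough illustration of the "keep your current time in UTC current" approach in the quote above - a sketch assuming (purely for illustration) a 1024 Hz RTC periodic IRQ and a 32-bit kernel; the variable names are made up for the example:

Code: Select all
NS_PER_TICK equ 976563          ; ~1/1024 of a second, in nanoseconds

boot_time_ns:  dq 0             ; UTC at boot, read once from RTC/firmware
since_boot_ns: dq 0             ; advanced by every timer tick

rtc_irq_handler:
    add dword [since_boot_ns], NS_PER_TICK   ; 64-bit add in 32-bit code
    adc dword [since_boot_ns + 4], 0
    ; ... acknowledge the RTC (read register 0x0C) and send the EOI ...
    iret

; "current time in UTC" = boot_time_ns + since_boot_ns; the rounding
; error in NS_PER_TICK is one reason to re-sync against a slower but
; more accurate source instead of trusting the tick count alone.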
LtG wrote:
I'm assuming getTime() is a run-time service? I haven't touched UEFI yet. Otherwise, how do you update the time?

It is a run-time service; but "getTime()" should never be used after boot anyway (because that means your OS has a "firmware dependency" and isn't keeping track of time internally using a better method).
LtG wrote:
Do you have any details on the actual accuracy of the different timers/clocks? Is the RTC really that much more accurate, to warrant the extra effort of using it beyond the initial time? I tried quickly searching for the accuracy of the RTC on PCs, and it seems to be a few seconds per day, though worst cases up to a minute (well, even worse, but those can be considered malfunctioning).

First, let's define some things. Let's define "drift" as how well the timer measures extremely long durations (e.g. a timer that can measure an entire year within +/- 1 second has extremely low drift); and let's define "precision" as the granularity of a timer's measurements.
LtG wrote:
As an aside, generally speaking you should not change the time in the RTC, so as to not mess up other OSs on the system, though I think adjusting it for accuracy (drift) is ok.

The first problem with the RTC is figuring out which time it keeps - local time (for which time zone?), or UTC, or TAI, or something else? Historically (for MS-DOS, and "inherited" by Windows) the RTC kept track of local time, and because of this it screws up daylight savings twice per year unless there's "one and only one" OS adjusting it for daylight savings.
Brendan wrote:
Hi,
It is a run-time service; but "getTime()" should never be used after boot anyway (because that means your OS has a "firmware dependency" and isn't keeping track of time internally using a better method).

Agreed, I intended getTime()'s counterpart, setTime().
Brendan wrote:
For UEFI's "setTime()" I really don't know what best practice is. On one hand, it'd be nice for an OS to use (e.g.) NTP to update UEFI's time source if necessary. On the other hand, it's a resource shared by all installed OSs, and no OS should assume it has permission to modify any "shared by all OSs" resource (even if requested by that specific OS's "admin", as there's no guarantee the person is more than just "anonymous guest" for other OSs on the computer); the firmware can (should?) use NTP to update UEFI's timekeeping itself so that no OS has to; and if an OS is using NTP anyway then it has little reason to care about UEFI's time.

What do you mean "firmware can"? Do you mean, from a purely technical perspective, that yes, of course it could be done; or, from a practical perspective, that the firmware is already doing significant network operations on its own without any supervision? I know there's already some anti-theft and other things out there, but I certainly don't want the firmware to start using the network on its own; it has no realistic way of knowing the consequences (including the financial costs of using the network - I think Apple made a mistake with that on the iPhone while roaming).
Brendan wrote:
First, let's define some things. Let's define "drift" as how well the timer measures extremely long durations (e.g. a timer that can measure an entire year within +/- 1 second has extremely low drift); and let's define "precision" as the granularity of a timer's measurements.
Now let's define "accuracy" as how well the timer can measure short durations. Accuracy is a combination of both drift and granularity. For example, if you have a time source that has extremely low drift and extremely low precision (e.g. one that only measures seconds) then you aren't going to be able to use it to accurately measure 1234 nanoseconds. For another example, if you have a time source that has extremely good precision but very bad drift, then you aren't going to be able to use it to accurately measure 4 weeks.
The RTC is supposed to be the local time source with the least drift (because all other local time sources completely lose track of time when the computer is turned off). However, the RTC also has relatively poor precision - by setting the RTC periodic IRQ to 8000 Hz you can get 125 us precision (with extremely high "IRQs per second" overhead). In comparison, the CPU's Time Stamp Counter has relatively poor drift, but is the most precise time source there is (typically better than 1 nanosecond). The HPET is somewhere in between, with reasonable drift and a precision of typically 100 nanoseconds.
This is where the essential problem is - they're all a compromise between drift and precision, and nothing has both good drift and good precision, so nothing has good accuracy.
The solution is to combine time sources. By using one time source to keep a second time source synchronized, you can get the drift from one time source and the precision from the other. This means that the best solution would be to use the RTC (best for drift) to keep the CPU's Time Stamp Counter (best for precision) synchronized, so that you're combining "best drift with best precision" to get the maximum possible accuracy.
This same thinking applies to NTP, which has even better drift and even worse precision. This leads to the "triple layer" solution (which is common practice), where NTP is used to keep the RTC synchronized, and the RTC is used to keep something else (e.g. the TSC) synchronized.

I know the difference between accuracy and precision - did I switch them accidentally?
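As a concrete sketch of the "use the RTC to keep the TSC synchronized" layer quoted above: measure how many TSC ticks fit in one RTC second. This polls the CMOS seconds register for simplicity; a real implementation would use the RTC's update-ended interrupt instead, and would re-measure periodically to correct for drift:

Code: Select all
; Returns EDX:EAX = TSC ticks per RTC second (NASM, 32-bit).
calibrate_tsc:
    call wait_rtc_second    ; align to the start of an RTC second
    rdtsc                   ; EDX:EAX = TSC "start" value
    mov [tsc_lo], eax
    mov [tsc_hi], edx
    call wait_rtc_second    ; let exactly one RTC second pass
    rdtsc
    sub eax, [tsc_lo]       ; EDX:EAX = end - start
    sbb edx, [tsc_hi]       ;         = ticks per second
    ret

wait_rtc_second:            ; spin until the CMOS seconds value changes
    mov al, 0x00            ; register 0x00 = seconds
    out 0x70, al
    in  al, 0x71
    mov ah, al              ; remember the current seconds value
.poll:
    mov al, 0x00
    out 0x70, al
    in  al, 0x71
    cmp al, ah
    je  .poll               ; unchanged -> still the same second
    ret

tsc_lo: dd 0
tsc_hi: dd 0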
Brendan wrote:
The first problem with the RTC is figuring out which time it keeps - local time (for which time zone?), or UTC, or TAI, or something else? [...]

As I mentioned earlier, I think that for the time your OS runs it could calculate the difference; that way the RTC would be more accurate. How much depends on how much of the time the machine is powered on vs. off.
LtG wrote:
What do you mean "firmware can"? Do you mean, from a purely technical perspective, that yes, of course it could be done; or, from a practical perspective, that the firmware is already doing significant network operations on its own without any supervision? [...]

For the practical perspective, it depends too much on the scenario - a desktop machine connected to a company LAN (e.g. with a local NTP server and the corresponding DHCP option) is very different to a laptop (e.g. with intermittent networking). In any case, if the OS itself is using NTP anyway, then I'd be more worried about the time it'd take for the firmware to use NTP too, and less worried about any network bandwidth costs, if/when the user enables the firmware's "use NTP to correct system time" option.
LtG wrote:
However, if an OS utilizes NTP it doesn't mean the RTC time is useless. Consider the RTC after a time being 15 minutes fast, and you turning on your laptop without network - now the OS has the incorrect time. Or the RTC being 1h fast and you being in a foreign country (which makes network unlikely initially) and relying on the timezone correction on the laptop, making it quite possible that you won't notice the 1h mistake immediately... So I would like the RTC to be maintained. UEFI of course should have fixed this by giving the time some semantics (I haven't read the specs, so maybe they did?), such as following (one of the) UTC with no DST and other stupidity, or something else; but so long as it had semantics, everybody would know if/when/how to update it.

UEFI gives you the date and time, plus a "time zone offset from GMT", a flag for whether the time should be affected by daylight savings or not, plus a flag to indicate whether UEFI thinks it's currently in daylight savings or not. The problem here is that the OS is responsible for adjusting UEFI's clock for daylight savings if it hasn't already been adjusted; and this is impossible to do properly when the only information you have is "time zone offset from GMT" (you can't figure out which time zone it actually is, or even whether it's the northern or southern hemisphere); and it can't work for a "travelling laptop" either.
LtG wrote:
I know the difference between accuracy and precision - did I switch them accidentally?

Everyone defines them differently, which tends to cause lots of confusion. That's why I've explicitly defined exactly what I mean.
LtG wrote:
In any case, my point was exactly that: is there a difference in accuracy between the RTC, HPET, TSC, etc.? I was just curious if you knew how much that difference would be.

For drift, typically the RTC and HPET are similar and "least worst". Because the TSC is typically derived from the "CPU bus clock" (and there's no real need for its clock source to be "low drift"), the TSC can have very bad drift.
LtG wrote:
For the TSC there are also the technical problems of figuring out which kind of TSC it is (not sure if the info is always reliable), but assuming the TSC does not vary with the CPU clock, is the problem with the TSC not knowing its exact frequency, or does it just fluctuate too much to be accurate?

There are multiple different problems here. The first is determining whether the CPU's TSC is fixed-frequency, which is no different to determining whether the CPU supports any other feature (test a CPUID flag, and fix up any/all cases where CPUs have errata that cause them to misreport).
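For the "test a CPUID flag" part: the invariant-TSC flag is reported in CPUID leaf 0x80000007, EDX bit 8 (documented by both Intel and AMD). A sketch:

Code: Select all
check_invariant_tsc:
    mov eax, 0x80000000
    cpuid
    cmp eax, 0x80000007     ; is extended leaf 0x80000007 supported?
    jb  .not_invariant
    mov eax, 0x80000007
    cpuid
    test edx, 1 << 8        ; EDX bit 8 = invariant TSC
    jz  .not_invariant
    mov eax, 1              ; TSC ticks at a constant rate
    ret                     ; regardless of P-/C-/T-states
.not_invariant:
    xor eax, eax            ; TSC rate may vary with the CPU clock
    ret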
LtG wrote:
My point being that the TSC would probably be the optimal source (is any of the other ones faster to query?) if it doesn't fluctuate much (at all), even if its exact frequency is not known exactly; NTP could possibly be used to find out its rate accurately enough that it could be used more accurately than the RTC, which really isn't all that accurate. I decided to test my laptop this week: I'll let the clock run for a week with NTP disabled. So far it's been around 16h since I disabled NTP, and it seems to be about 2s fast; based on that it would be 21s fast in just one week, which I do find to be too much.

The other thing that hasn't been mentioned yet is "other capabilities" (e.g. whether the timer can be set up for a periodic IRQ, and whether it can be set up for a "one-shot IRQ on terminal count"). For example, you can't (at least not easily) set up the RTC for "one-shot IRQ on terminal count". This is important for implementing things like "sleep()", scheduling, networking time-outs, etc.
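For contrast, a timer that does support it: the PIT's mode 0 is literally named "interrupt on terminal count". A sketch that arms channel 0 for a single IRQ 0 roughly 10 ms later (the PIT's input clock is 1193182 Hz):

Code: Select all
    mov al, 0x30        ; channel 0, lobyte/hibyte access, mode 0, binary
    out 0x43, al        ; PIT command port
    mov ax, 11932       ; 1193182 Hz * 0.010 s ~= 11932 ticks
    out 0x40, al        ; low byte of the count
    mov al, ah
    out 0x40, al        ; high byte; countdown starts,
                        ; one IRQ 0 fires when it reaches zero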
LtG wrote:
As I mentioned earlier, I think that for the time your OS runs it could calculate the difference [...]

And if it is set to local time and is affected by daylight savings, you have no way to know whether some other OS adjusted it for daylight savings or not; so even if the OS keeps track of a "difference between RTC and current time", your OS's timing will still get messed up twice per year.
LtG wrote:
So I think the only sane thing to do is to make it a system setting, so the admin can decide if it's ok to update the RTC or not.

Ideally this should be a firmware setting set by the "system owner", and not set by the admin of any OS.
LtG wrote:
I certainly don't want to live in a world where the RTC can't be updated (even though most people only use one OS) and should/must be allowed to drift aimlessly. Realistically, when taking a new computer into use, I'll set the time if it's not already correct, but I don't want to have to boot into the BIOS/UEFI to set it.

I just run an NTP server on my LAN, and tell all the other computers to get their time from my local NTP server.