get computer ticks

nicola
Member
Posts: 32
Joined: Mon May 16, 2011 2:05 pm
Location: hanoi

get computer ticks

Post by nicola »

Do you guys know how to get computer ticks (like the milliseconds of the system time) in assembly, on a PC?

I found the CMOS article in the OSDev wiki, but it only shows how to get the year, month, day, hours, minutes and seconds.

I need this 'milliseconds' value because I want to make a delay for my keyboard input from port 60h, and also for measuring how effective my algorithms are.
I'm using AMD Sempron 140 Single core 2.7GHz
Nessphoro
Member
Posts: 308
Joined: Sat Apr 30, 2011 12:50 am

Re: get computer ticks

Post by Nessphoro »

Set up the PIT and set the frequency to 1000 Hz; you will receive an IRQ every millisecond, which will allow you to increment an internal kernel timer.
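Something like this minimal sketch (C rather than assembly; outb() is a hypothetical port-write helper, and the IDT/IRQ0 stub that calls pit_irq_handler() is assumed to exist):

#include <stdint.h>

static volatile uint64_t ticks;         /* incremented once per millisecond */

/* Hypothetical port-output helper (GCC-style inline assembly). */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
}

void pit_set_frequency(uint32_t hz)
{
    uint32_t divisor = 1193182 / hz;    /* the PIT input clock is ~1.193182 MHz */
    outb(0x43, 0x36);                   /* channel 0, lobyte/hibyte, mode 3 (square wave) */
    outb(0x40, divisor & 0xFF);         /* divisor low byte */
    outb(0x40, (divisor >> 8) & 0xFF);  /* divisor high byte */
}

/* Called from the IRQ0 interrupt stub. */
void pit_irq_handler(void)
{
    ticks++;                            /* one tick = 1 ms at 1000 Hz */
    outb(0x20, 0x20);                   /* EOI to the master PIC */
}

Call pit_set_frequency(1000) during boot and ticks becomes your millisecond counter.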
Last edited by Nessphoro on Thu May 26, 2011 12:27 pm, edited 1 time in total.
nicola
Member
Posts: 32
Joined: Mon May 16, 2011 2:05 pm
Location: hanoi

Re: get computer ticks

Post by nicola »

I know that in the old days Pascal used dummy instructions in a 'for' loop to burn CPU time.

Setting up the PIT at 1 GHz would lower performance significantly, I think; imagine if a CPU is 3 GHz, this would take 33% of the CPU time.
Any other solution?
I'm using AMD Sempron 140 Single core 2.7GHz
Nessphoro
Member
Posts: 308
Joined: Sat Apr 30, 2011 12:50 am

Re: get computer ticks

Post by Nessphoro »

1000 Hz is not 1 GHz :)
nicola
Member
Posts: 32
Joined: Mon May 16, 2011 2:05 pm
Location: hanoi

Re: get computer ticks

Post by nicola »

Nessphoro wrote:1000 Hz is not 1 GHz :)
Oh, oops, YES, 1 kHz.
I'm using AMD Sempron 140 Single core 2.7GHz
Nessphoro
Member
Posts: 308
Joined: Sat Apr 30, 2011 12:50 am

Re: get computer ticks

Post by Nessphoro »

So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?
nicola
Member
Posts: 32
Joined: Mon May 16, 2011 2:05 pm
Location: hanoi

Re: get computer ticks

Post by nicola »

Nessphoro wrote:So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?
Yeah, it might be about that amount.
But by the way, do you have any assembly sample of setting up the PIC (IRQs?) for the keyboard? :roll:
(especially in long mode)
I'm using AMD Sempron 140 Single core 2.7GHz
Nessphoro
Member
Posts: 308
Joined: Sat Apr 30, 2011 12:50 am

Re: get computer ticks

Post by Nessphoro »

The PIC and IRQs are two different things, and neither is directly tied to the keyboard as such; in a sense, when you press a key, an IRQ is sent down the line to your kernel. The PIT, on the other hand, generates IRQs at a specified interval.
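As a rough sketch of how the pieces fit together (C again, not long-mode assembly; outb()/inb() are hypothetical port-I/O helpers and the IDT entries/interrupt stubs are assumed to be set up elsewhere):

#include <stdint.h>

/* Hypothetical port-input helper, the counterpart to outb(). */
static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ __volatile__("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Remap the two 8259 PICs so IRQs 0-15 land on vectors 32-47
   (out of the way of the CPU exceptions), then unmask the
   timer and keyboard lines. */
void pic_remap(void)
{
    outb(0x20, 0x11);  /* ICW1: begin initialisation (master) */
    outb(0xA0, 0x11);  /* ICW1: begin initialisation (slave) */
    outb(0x21, 0x20);  /* ICW2: master IRQs 0-7 -> vectors 32-39 */
    outb(0xA1, 0x28);  /* ICW2: slave IRQs 8-15 -> vectors 40-47 */
    outb(0x21, 0x04);  /* ICW3: slave attached to master line 2 */
    outb(0xA1, 0x02);  /* ICW3: slave cascade identity */
    outb(0x21, 0x01);  /* ICW4: 8086 mode (master) */
    outb(0xA1, 0x01);  /* ICW4: 8086 mode (slave) */
    outb(0x21, 0xFC);  /* mask all but IRQ0 (timer) and IRQ1 (keyboard) */
    outb(0xA1, 0xFF);  /* mask every slave line */
}

/* Called from the IRQ1 interrupt stub. */
void keyboard_irq_handler(void)
{
    uint8_t scancode = inb(0x60);  /* read the scancode from port 60h */
    (void)scancode;                /* ...translate/buffer it here... */
    outb(0x20, 0x20);              /* EOI to the master PIC */
}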
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: get computer ticks

Post by Brendan »

Hi,
nicola wrote:
Nessphoro wrote:So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?
Yeah, it might be about that amount.
Might be closer to "IRQ_cycles * timer_frequency / CPU_frequency".

"IRQ_cycles" itself could be broken into 2 parts - the time taken to access any hardware (e.g. sending the EOI to the PIC chip), and the interrupt handler's cycles (pipeline flush, etc). The first part takes the same "fixed" amount of time regardless of how fast the CPU is.

That gives "overhead = (CPU_frequency * fixed_time + cycles) * timer_frequency / CPU_frequency":
  • If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz CPU a 1 kHz timer ends up being about "(3000000000 * 0.0000001 + 123) * 1000 / 3000000000 = 423 / 3000000 = 0.000141 = 0.0141% overhead".
  • If sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 3 GHz CPU a 1 kHz timer ends up being about "(3000000000 * 0.000001 + 123) * 1000 / 3000000000 = 3123 / 3000000 = 0.001041 = 0.1041% overhead".
  • If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 25 MHz CPU a 1 kHz timer ends up being about "(25000000 * 0.0000001 + 123) * 1000 / 25000000 = 125.5 / 25000 = 0.00502 = 0.502% overhead".
  • If sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 25 MHz CPU a 1 kHz timer ends up being about "(25000000 * 0.000001 + 123) * 1000 / 25000000 = 148 / 25000 = 0.00592 = 0.592% overhead".
What if you want to maximise the timer tick precision (or timer frequency) while ensuring the estimated overhead never exceeds 0.1%?

Rearranging the formula gives "timer_frequency = overhead * CPU_frequency / (CPU_frequency * fixed_time + cycles)":
  • For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz CPU you get "0.001 * 3000000000 / (3000000000 * 0.0000001 + 123) = 3000000 / 423 = 7092 Hz".
  • For 0.1% overhead (0.001), if sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 3 GHz CPU you get "0.001 * 3000000000 / (3000000000 * 0.000001 + 123) = 3000000 / 3123 = 961 Hz".
  • For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 25 MHz CPU you get "0.001 * 25000000 / (25000000 * 0.0000001 + 123) = 25000 / 125.5 = 199 Hz".
  • For 0.1% overhead (0.001), if sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 25 MHz CPU you get "0.001 * 25000000 / (25000000 * 0.000001 + 123) = 25000 / 148 = 169 Hz".
Of course that's only for single-CPU. For multi-CPU the overhead would be spread across different CPUs, so "overhead = (CPU_frequency * fixed_time + cycles) * timer_frequency / CPU_frequency / CPUs" and "timer_frequency = overhead * CPUs * CPU_frequency / (CPU_frequency * fixed_time + cycles)".

If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz quad-core CPU a 1 kHz timer ends up being about "(3000000000 * 0.0000001 + 123) * 1000 / 3000000000 / 4 = 423 / 12000000 = 0.00003525 = 0.003525% overhead".

For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz quad-core CPU you get "0.001 * 4 * 3000000000 / (3000000000 * 0.0000001 + 123) = 12000000 / 423 = 28368 Hz".

Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot, and then set up the timer to suit (the timer frequency could be anything from about 100 Hz to 200 kHz). :D
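If you want to play with the numbers yourself, here's a throwaway sketch of the two formulas above (hosted C; all parameter names are mine):

#include <stdio.h>

/* overhead = (cpu_hz * fixed_time + cycles) * timer_hz / cpu_hz / cpus */
static double overhead(double cpu_hz, double fixed_time, double cycles,
                       double timer_hz, double cpus)
{
    return (cpu_hz * fixed_time + cycles) * timer_hz / cpu_hz / cpus;
}

/* timer_hz = budget * cpus * cpu_hz / (cpu_hz * fixed_time + cycles) */
static double max_timer_hz(double cpu_hz, double fixed_time, double cycles,
                           double budget, double cpus)
{
    return budget * cpus * cpu_hz / (cpu_hz * fixed_time + cycles);
}

int main(void)
{
    /* 3 GHz CPU, 100 ns APIC EOI, 123 cycles, 1 kHz timer -> ~0.0141% */
    printf("overhead: %.4f%%\n", 100.0 * overhead(3e9, 100e-9, 123, 1000, 1));

    /* highest timer frequency that stays within a 0.1% budget -> ~7092 Hz */
    printf("max timer: %.0f Hz\n", max_timer_hz(3e9, 100e-9, 123, 0.001, 1));
    return 0;
}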


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: get computer ticks

Post by rdos »

Setting up an OS to be able to measure real time and to support timing of fast events is non-trivial. Since the APIC timer/TSC are so much more efficient (and have higher precision), you'd want to use them when they are available, falling back to the PIT only when necessary. It gets even more complex in an SMP setup with many cores that should keep time synchronized. The RTC is a good source for synchronization, but it lacks precision.
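For reference, polling the TSC is just the rdtsc instruction; a minimal sketch (GCC-style inline assembly):

#include <stdint.h>

/* Read the time-stamp counter (cycles since reset).  Note that rdtsc
   can execute out of order; serialise with cpuid, or use rdtscp, if
   exact ordering around the measured code matters. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}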
rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: get computer ticks

Post by rdos »

Brendan wrote:Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot, and then set up the timer to suit (the timer frequency could be anything from about 100 Hz to 200 kHz). :D
You can do better than that. For really critical applications it would be possible to achieve hundreds of MHz by using a dedicated core for timing. Eliminating IRQs and doing nothing but busy-polling the TSC could probably achieve sub-nanosecond resolution on a modern CPU.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: get computer ticks

Post by Brendan »

Hi,
rdos wrote:
Brendan wrote:Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot, and then set up the timer to suit (the timer frequency could be anything from about 100 Hz to 200 kHz). :D
You can do better than that. For really critical applications it would be possible to achieve hundreds of MHz by using a dedicated core for timing. Eliminating IRQs and doing nothing but busy-polling the TSC could probably achieve sub-nanosecond resolution on a modern CPU.
You can do better than that. The kernel could use "sleep()" to get as close as it can to the desired time while running other tasks (if necessary), then switch to the task that needed the precise delay and busy-loop waiting for TSC, local APIC count, HPET counter or PIT count to reach the exact time before returning to user space. You shouldn't need to waste an entire CPU at all (which would be hard to justify, as the need for such precise timing is rare).

The thing to notice here is that a more precise "sleep()" timer means more time running other tasks and less time busy-waiting, and better efficiency.
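A sketch of the idea (sleep_ms(), rdtsc() and the boot-time tsc_per_ms calibration are hypothetical kernel pieces, just to show the shape of it):

#include <stdint.h>

extern uint64_t tsc_per_ms;          /* TSC ticks per millisecond, calibrated at boot */
extern uint64_t rdtsc(void);         /* see the earlier sketch */
extern void sleep_ms(uint64_t ms);   /* coarse, scheduler-friendly sleep */

/* Delay until an absolute TSC deadline: sleep while plenty of time
   remains (so other tasks get the CPU), then busy-wait the last stretch. */
void precise_delay_until(uint64_t deadline_tsc)
{
    while (rdtsc() + 2 * tsc_per_ms < deadline_tsc)
        sleep_ms(1);                    /* coarse phase: run other tasks */

    while (rdtsc() < deadline_tsc)
        __asm__ __volatile__("pause");  /* fine phase: spin on the TSC */
}

The more precise the coarse sleep is, the smaller that final spin becomes.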


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
rdos
Member
Posts: 3310
Joined: Wed Oct 01, 2008 1:55 pm

Re: get computer ticks

Post by rdos »

Brendan wrote:You can do better than that. The kernel could use "sleep()" to get as close as it can to the desired time while running other tasks (if necessary), then switch to the task that needed the precise delay and busy-loop waiting for TSC, local APIC count, HPET counter or PIT count to reach the exact time before returning to user space. You shouldn't need to waste an entire CPU at all (which would be hard to justify, as the need for such precise timing is rare).

The thing to notice here is that a more precise "sleep()" timer means more time running other tasks and less time busy-waiting, and better efficiency.
Yes, with a decent sleep() there is the possibility of pretty good accuracy, but it still won't beat the dedicated-core design. This is because when threads run under the scheduler they can be interrupted by ISRs and by the scheduler itself (higher-priority threads), and the OS's interrupt latency sets a limit on the accuracy of sleep().