
get computer ticks

Posted: Thu May 26, 2011 11:51 am
by nicola
Do you guys know how to get computer ticks (like the milliseconds of the system time)
in assembly (PC)?

I found the CMOS article on the OSDev wiki,
but it only shows how to get the year, month, day, hours, minutes and seconds.

I need this millisecond value because I want to make a delay for my keyboard input
from port 60h, and also for measuring how effective my algorithms are.

Re: get computer ticks

Posted: Thu May 26, 2011 12:03 pm
by Nessphoro
Set up the PIT and set its frequency to 1000 Hz; you will receive an IRQ every millisecond, which will allow you to increment an internal kernel timer.

Re: get computer ticks

Posted: Thu May 26, 2011 12:27 pm
by nicola
I know old Pascal compilers used dummy instructions in a 'for' loop
to burn CPU time as a delay.

Setting up the timer at 1 GHz would lower performance significantly, I think;
imagine if a CPU is 3 GHz, this would take 33% of the CPU time.
Any other solution?

Re: get computer ticks

Posted: Thu May 26, 2011 12:28 pm
by Nessphoro
1000 Hz is not 1 GHz :)

Re: get computer ticks

Posted: Thu May 26, 2011 12:36 pm
by nicola
Nessphoro wrote:1000 Hz is not 1 GHz :)
oh, oops, YES, 1 kHz

Re: get computer ticks

Posted: Thu May 26, 2011 2:42 pm
by Nessphoro
So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?

Re: get computer ticks

Posted: Thu May 26, 2011 2:45 pm
by nicola
Nessphoro wrote:So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?
yeah, it might be about that amount,
but by the way, do you have any assembly sample of setting up the PIC/IRQs for the keyboard? :roll:
(especially in long mode)

Re: get computer ticks

Posted: Thu May 26, 2011 3:14 pm
by Nessphoro
The PIC and IRQs are two different things, and neither is specific to the keyboard; but in a sense, when you press a key, an IRQ is sent down the line to your kernel. The PIT, on the other hand, generates IRQs at a specified interval.

Re: get computer ticks

Posted: Thu May 26, 2011 5:36 pm
by Brendan
Hi,
nicola wrote:
Nessphoro wrote:So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?
yeah, it might be about that amount,
Might be closer to "IRQ_cycles * timer_frequency / CPU_frequency".

"IRQ_cycles" itself could be broken into 2 parts - the time taken to access any hardware (e.g. sending the EOI to the PIC chip), and the interrupt handler's cycles (pipeline flush, etc). The first part takes the same "fixed" amount of time regardless of how fast the CPU is.

That gives "overhead = (CPU_frequency * fixed_time + cycles) * timer_frequency / CPU_frequency":
  • If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz CPU a 1 kHz timer ends up being about "(3000000000 * 0.0000001 + 123) * 1000 / 3000000000 = 423 / 3000000 = 0.000141 = 0.0141% overhead".
  • If sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 3 GHz CPU a 1 kHz timer ends up being about "(3000000000 * 0.000001 + 123) * 1000 / 3000000000 = 3123 / 3000000 = 0.001041 = 0.1041% overhead".
  • If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 25 MHz CPU a 1 kHz timer ends up being about "(25000000 * 0.0000001 + 123) * 1000 / 25000000 = 125.5 / 25000 = 0.00502 = 0.502% overhead".
  • If sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 25 MHz CPU a 1 kHz timer ends up being about "(25000000 * 0.000001 + 123) * 1000 / 25000000 = 148 / 25000 = 0.00592 = 0.592% overhead".
What if you want to maximise the timer tick precision (or timer frequency) while ensuring estimated overhead never exceeds 0.1%?

Rearranging the formula gives "timer_frequency = overhead * CPU_frequency / (CPU_frequency * fixed_time + cycles)":
  • For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz CPU you get "0.001 * 3000000000 / (3000000000 * 0.0000001 + 123) = 3000000 / 423 = 7092 Hz".
  • For 0.1% overhead (0.001), if sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 3 GHz CPU you get "0.001 * 3000000000 / (3000000000 * 0.000001 + 123) = 3000000 / 3123 = 961 Hz".
  • For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 25 MHz CPU you get "0.001 * 25000000 / (25000000 * 0.0000001 + 123) = 25000 / 125.5 = 199 Hz".
  • For 0.1% overhead (0.001), if sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 25 MHz CPU you get "0.001 * 25000000 / (25000000 * 0.000001 + 123) = 25000 / 148 = 169 Hz".
Of course that's only for single-CPU. For multi-CPU the overhead would be spread across different CPUs, so "overhead = (CPU_frequency * fixed_time + cycles) * timer_frequency / CPU_frequency / CPUs" and "timer_frequency = overhead * CPUs * CPU_frequency / (CPU_frequency * fixed_time + cycles)".

If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz quad-core CPU a 1 kHz timer ends up being about "(3000000000 * 0.0000001 + 123) * 1000 / 3000000000 / 4 = 423 / 12000000 = 0.00003525 = 0.003525% overhead".

For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz quad-core CPU you get "0.001 * 4 * 3000000000 / (3000000000 * 0.0000001 + 123) = 12000000 / 423 = 28368 Hz".

Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot, and then set the timer up to suit (the timer frequency could be anything from about 100 Hz to 200 kHz). :D


Cheers,

Brendan

Re: get computer ticks

Posted: Fri May 27, 2011 12:59 am
by rdos
Setting up an OS to measure real time and to support timing of fast events is non-trivial. Since the APIC timer and the TSC are so much more efficient (and have higher precision), you'd want to use them when they are available, defaulting to the PIT only when necessary. It gets even more complex in an SMP setup with many cores that should keep time synchronized. The RTC clock is a good source for synchronization, but it lacks precision.

Re: get computer ticks

Posted: Fri May 27, 2011 1:03 am
by rdos
Brendan wrote:Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot, and then set the timer up to suit (the timer frequency could be anything from about 100 Hz to 200 kHz). :D
You can do better than that. For really time-critical applications it would be possible to achieve timer resolutions in the 100s of MHz by using a dedicated core for timing. Eliminating IRQs and just busy-polling the TSC could probably achieve sub-nanosecond resolution on a modern CPU.

Re: get computer ticks

Posted: Fri May 27, 2011 1:42 am
by Brendan
Hi,
rdos wrote:
Brendan wrote:Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot, and then set the timer up to suit (the timer frequency could be anything from about 100 Hz to 200 kHz). :D
You can do better than that. For really time-critical applications it would be possible to achieve timer resolutions in the 100s of MHz by using a dedicated core for timing. Eliminating IRQs and just busy-polling the TSC could probably achieve sub-nanosecond resolution on a modern CPU.
You can do better than that. The kernel could use "sleep()" to get as close as it can to the desired time while running other tasks (if necessary), then switch to the task that needed the precise delay and busy-loop waiting for TSC, local APIC count, HPET counter or PIT count to reach the exact time before returning to user space. You shouldn't need to waste an entire CPU at all (which would be hard to justify, as the need for such precise timing is rare).

The thing to notice here is that a more precise "sleep()" timer means more time running other tasks and less time busy-waiting, and better efficiency.


Cheers,

Brendan

Re: get computer ticks

Posted: Fri May 27, 2011 5:08 am
by rdos
Brendan wrote:You can do better than that. The kernel could use "sleep()" to get as close as it can to the desired time while running other tasks (if necessary), then switch to the task that needed the precise delay and busy-loop waiting for TSC, local APIC count, HPET counter or PIT count to reach the exact time before returning to user space. You shouldn't need to waste an entire CPU at all (which would be hard to justify, as the need for such precise timing is rare).

The thing to notice here is that a more precise "sleep()" timer means more time running other tasks and less time busy-waiting, and better efficiency.
Yes, with a decent sleep() there is the possibility of pretty good accuracy, but it still won't beat the dedicated-core design. This is because when threads run under the scheduler, they can be interrupted by ISRs and by the scheduler (higher-priority threads), and the interrupt latency of the OS limits the accuracy of sleep().