get computer ticks
Do you guys know how to get computer ticks (like milliseconds of system time)
in assembly on a PC?
I found the CMOS article on the OSDev wiki,
but it only shows how to get the year, month, day, hours, minutes and seconds.
I need this 'milliseconds' value because I want to make a delay for my keyboard input
from port 60h, and also for measuring how effective my algorithms are.
I'm using AMD Sempron 140 Single core 2.7GHz
Re: get computer ticks
Set up the PIT and set the frequency to 1000 Hz; you will receive an IRQ every millisecond, which will allow you to increment an internal kernel timer.
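Roughly, that looks like this (a minimal NASM sketch, assuming ring 0 with port I/O access and that IRQ0 already points at your handler; 'timer_isr' and 'tick_count' are just placeholder names):
Code:
    ; program PIT channel 0 for ~1000 Hz (the PIT input clock is 1193182 Hz)
    mov al, 0x36            ; channel 0, lobyte/hibyte access, mode 3 (square wave)
    out 0x43, al            ; PIT command port
    mov ax, 1193            ; 1193182 / 1193 ~= 1000.15 Hz
    out 0x40, al            ; divisor low byte
    mov al, ah
    out 0x40, al            ; divisor high byte

    ; IRQ0 handler: one tick per interrupt (~1 ms at 1000 Hz)
timer_isr:
    push rax
    inc qword [tick_count]  ; 64-bit millisecond counter maintained by the kernel
    mov al, 0x20
    out 0x20, al            ; EOI to the master PIC
    pop rax
    iretq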
Re: get computer ticks
I know old Pascal used to do it with dummy instructions to burn CPU time
in a 'for' loop.
Setting up the PIT at 1 GHz would lower performance significantly, I think;
imagine if a CPU is 3 GHz, this would take 33% of the CPU time.
Any other solution?
I'm using AMD Sempron 140 Single core 2.7GHz
Re: get computer ticks
1000 Hz is not 1 GHz
Re: get computer ticks
Nessphoro wrote: 1000 Hz is not 1 GHz
Oh, oops, yes, 1 kHz.
I'm using AMD Sempron 140 Single core 2.7GHz
Re: get computer ticks
So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?
Re: get computer ticks
Nessphoro wrote: So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?
Yeah, it might be about that much.
But by the way, do you have any assembly sample of setting up the PIC (IRQ?) for the keyboard?
(especially in long mode)
I'm using AMD Sempron 140 Single core 2.7GHz
Re: get computer ticks
The PIC and an IRQ are two different things, and neither is directly the keyboard; but in a sense, when you press a key, an IRQ is sent down the line to your kernel. The PIT, on the other hand, generates IRQs at a specified interval.
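If it helps, here is a minimal long-mode sketch of a keyboard handler (assuming the legacy PICs are remapped and IRQ1's IDT entry points at this routine; 'last_scancode' is just a placeholder variable):
Code:
keyboard_isr:
    push rax
    in  al, 0x60             ; read the scancode from the keyboard controller data port
    mov [last_scancode], al  ; stash it (translate/queue it however you like)
    mov al, 0x20
    out 0x20, al             ; EOI to the master PIC (IRQ1 sits on the master)
    pop rax
    iretq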
Re: get computer ticks
Hi,
"IRQ_cycles" itself could be broken into 2 parts - the time taken to access any hardware (e.g. sending the EOI to the PIC chip), and the interrupt handler's cycles (pipeline flush, etc). The first part takes the same "fixed" amount of time regardless of how fast the CPU is.
That gives "overhead = (CPU_frequency * fixed_time + cycles) * timer_frequency / CPU_frequency":
Rearranging the formula gives "timer_frequency = overhead * CPU_frequency / (CPU_frequency * fixed_time + cycles)":
If sending the EOI to the APIC costs 100 ns and the everything else costs 123 cycles, then for a 3 GHz quad-core CPU a 1 KHz timer ends up being about "(3000000000 * 0.0000001 + 123) * 1000 / 3000000000 / 4 = 423 / 12000000 = 0.00003525 = 0.003525% overhead".
For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and the everything else costs 123 cycles, then for a 3 GHz quad-core CPU you get "0.001 * 4 * 3000000000 / (3000000000 * 0.0000001 + 123) = 12000000 / 423 = 28368 Hz".
Basically, to get a good compromise between overhead and timer precision, you want to determine number of CPUs, CPU speed and APIC/PIC type during boot and then setup the timer to suit (and timer frequency could be anything from about 100 Hz to 200 KHz).
Cheers,
Brendan
Nessphoro wrote: So on your 3 GHz PC you'd be using about, what, 0.0033% of the CPU?
nicola wrote: Yeah, it might be about that much.
Might be closer to "IRQ_cycles * timer_frequency / CPU_frequency".
"IRQ_cycles" itself could be broken into 2 parts - the time taken to access any hardware (e.g. sending the EOI to the PIC chip), and the interrupt handler's cycles (pipeline flush, etc). The first part takes the same "fixed" amount of time regardless of how fast the CPU is.
That gives "overhead = (CPU_frequency * fixed_time + cycles) * timer_frequency / CPU_frequency":
- If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz CPU a 1 kHz timer ends up being about "(3000000000 * 0.0000001 + 123) * 1000 / 3000000000 = 423 / 3000000 = 0.000141 = 0.0141% overhead".
- If sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 3 GHz CPU a 1 kHz timer ends up being about "(3000000000 * 0.000001 + 123) * 1000 / 3000000000 = 3123 / 3000000 = 0.001041 = 0.1041% overhead".
- If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 25 MHz CPU a 1 kHz timer ends up being about "(25000000 * 0.0000001 + 123) * 1000 / 25000000 = 125.5 / 25000 = 0.00502 = 0.502% overhead".
- If sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 25 MHz CPU a 1 kHz timer ends up being about "(25000000 * 0.000001 + 123) * 1000 / 25000000 = 148 / 25000 = 0.00592 = 0.592% overhead".
Rearranging the formula gives "timer_frequency = overhead * CPU_frequency / (CPU_frequency * fixed_time + cycles)":
- For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz CPU you get "0.001 * 3000000000 / (3000000000 * 0.0000001 + 123) = 3000000 / 423 = 7092 Hz".
- For 0.1% overhead (0.001), if sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 3 GHz CPU you get "0.001 * 3000000000 / (3000000000 * 0.000001 + 123) = 3000000 / 3123 = 961 Hz".
- For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 25 MHz CPU you get "0.001 * 25000000 / (25000000 * 0.0000001 + 123) = 25000 / 125.5 = 199 Hz".
- For 0.1% overhead (0.001), if sending the EOI to the PIC costs 1 us and everything else costs 123 cycles, then for a 25 MHz CPU you get "0.001 * 25000000 / (25000000 * 0.000001 + 123) = 25000 / 148 = 169 Hz".
If sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz quad-core CPU a 1 kHz timer ends up being about "(3000000000 * 0.0000001 + 123) * 1000 / 3000000000 / 4 = 423 / 12000000 = 0.00003525 = 0.003525% overhead".
For 0.1% overhead (0.001), if sending the EOI to the APIC costs 100 ns and everything else costs 123 cycles, then for a 3 GHz quad-core CPU you get "0.001 * 4 * 3000000000 / (3000000000 * 0.0000001 + 123) = 12000000 / 423 = 28368 Hz".
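The quad-core examples just fold the number of CPUs into the same two formulas; written out generally (my notation, not from the post, with n = number of CPUs, f = CPU_frequency, t = fixed_time, c = cycles):

\mathrm{overhead} = \frac{(f \cdot t + c)\, f_{\mathrm{timer}}}{n \cdot f}, \qquad f_{\mathrm{timer}} = \frac{\mathrm{overhead} \cdot n \cdot f}{f \cdot t + c}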
Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot and then set up the timer to suit (and the timer frequency could be anything from about 100 Hz to 200 kHz).
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: get computer ticks
Setting up an OS to measure real time and to support timing of fast events is non-trivial. Since the local APIC timer and the TSC are so much more efficient (and have higher precision), you'd want to use them when they are available, and fall back to the PIT only when necessary. It gets even more complex in an SMP setup with many cores that should keep their time synchronized. The RTC is a good source for synchronization, but it lacks precision.
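For the 'calculating algorithm effectiveness' part of the original question, the TSC alone is often enough; a rough NASM sketch (long mode; RDTSC is not serializing, so for tight measurements you would normally add CPUID or LFENCE around it):
Code:
    rdtsc                    ; EDX:EAX = current timestamp counter
    shl rdx, 32
    or  rax, rdx
    mov r8, rax              ; start count

    ; ... code being measured ...

    rdtsc
    shl rdx, 32
    or  rax, rdx
    sub rax, r8              ; RAX = elapsed cycles (divide by the CPU clock rate for seconds)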
Re: get computer ticks
Brendan wrote: Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot and then set up the timer to suit (and the timer frequency could be anything from about 100 Hz to 200 kHz).
You can do better than that. For really critical applications it would be possible to achieve hundreds of MHz by using a dedicated core for timing. Eliminating IRQs and only busy-polling the TSC could probably achieve sub-nanosecond resolution on a modern CPU.
Re: get computer ticks
Hi,
Brendan wrote: Basically, to get a good compromise between overhead and timer precision, you want to determine the number of CPUs, the CPU speed and the APIC/PIC type during boot and then set up the timer to suit (and the timer frequency could be anything from about 100 Hz to 200 kHz).
rdos wrote: You can do better than that. For really critical applications it would be possible to achieve hundreds of MHz by using a dedicated core for timing. Eliminating IRQs and only busy-polling the TSC could probably achieve sub-nanosecond resolution on a modern CPU.
You can do better than that. The kernel could use "sleep()" to get as close as it can to the desired time while running other tasks (if necessary), then switch to the task that needed the precise delay and busy-loop waiting for the TSC, local APIC count, HPET counter or PIT count to reach the exact time before returning to user space. You shouldn't need to waste an entire CPU at all (which would be hard to justify, as the need for such precise timing is rare).
The thing to notice here is that a more precise "sleep()" timer means more time running other tasks and less time busy-waiting, and better efficiency.
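The final busy-wait stage could be as simple as this (a sketch; 'deadline_tsc' is a placeholder for a TSC value the kernel computed from the requested wake-up time):
Code:
spin_until_deadline:
    rdtsc
    shl rdx, 32
    or  rax, rdx
    cmp rax, [deadline_tsc]  ; 64-bit TSC deadline prepared by the kernel
    jb  spin_until_deadline  ; keep spinning until the deadline has passed
    ret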
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: get computer ticks
Brendan wrote: You can do better than that. The kernel could use "sleep()" to get as close as it can to the desired time while running other tasks (if necessary), then switch to the task that needed the precise delay and busy-loop waiting for the TSC, local APIC count, HPET counter or PIT count to reach the exact time before returning to user space. You shouldn't need to waste an entire CPU at all (which would be hard to justify, as the need for such precise timing is rare).
The thing to notice here is that a more precise "sleep()" timer means more time running other tasks and less time busy-waiting, and better efficiency.
Yes, with a decent sleep() there is the possibility of pretty good accuracy, but it still won't beat the dedicated-core design. This is because when threads run under the scheduler they can be interrupted by ISRs and by the scheduler (higher-priority threads), and the interrupt latency of the OS sets a limit on the accuracy of sleep().