I was wondering: what would be the advantage of having a fully poll-driven OS? No interrupts of any sort.
Would it really be worth 'downgrading' an OS if it was only doing a handful of things, like, say, network support?
Yes, I am considering this as a useful option for my OS.
I only use 3 things (for the most part), and two of those are network cards. The other is a timer (which I don't even really need).
Would it be wise to switch over to only handling network operations?
I think it would speed things up a bit, but I am probably wrong.
Fully poll-driven OS?
Website: https://joscor.com
Re: Fully poll-driven OS?
Hi,
01000101 wrote: I was wondering: what would be the advantage of having a fully poll-driven OS? No interrupts of any sort.
The advantage would be latency - for e.g. when a transfer completes you'd know almost immediately, and wouldn't need to wait for IRQ handling overhead (trace cache flush, saving some registers, loading some registers, sending EOI, etc).
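To give a feel for that overhead, here's a rough freestanding-C sketch of the per-interrupt path that polling skips (classic 8259 PIC assumed; nic_service_rx_ring is just a hypothetical placeholder for the real work):

#include <stdint.h>

extern void nic_service_rx_ring(void);   /* hypothetical: the actual work */

static inline void outb(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

/* Called from an assembly stub that has already saved the registers. */
void nic_irq_handler(void)
{
    nic_service_rx_ring();
    outb(0x20, 0x20);   /* EOI to the master 8259 PIC */
}
/* The stub then restores the registers and executes IRET. With polling,
   none of this save/EOI/restore path sits between the hardware event and
   the code that handles it. */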
01000101 wrote: Would it really be worth 'downgrading' an OS if it was only doing a handful of things, like, say, network support?
That depends. For e.g. for a special purpose OS on an embedded system where the number of things being done is equal to the number of CPUs, you could get rid of IRQs and have no scheduler at all and get better latency.
Of course if you're using polling and trying to do more than one thing (per CPU) at a time, then your latency will depend on the scheduler (e.g. how long until a task gets CPU time again and checks whatever it's polling). In this case latency will be much much worse than using IRQs.
Also, for 80x86 no IRQs means nothing to take the CPU out of a sleep state (e.g. the HLT instruction) and more power consumption/heat while doing nothing (except waiting for something to happen).
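To make that concrete, here's a minimal sketch (GCC-style inline asm assumed) of the two idle strategies:

/* Pure polling: the CPU never sleeps, so it runs flat out even when
   there's nothing to do. */
static void idle_polling(void)
{
    for (;;) {
        /* check devices here */
    }
}

/* IRQ-driven: enable interrupts and halt; the CPU sits in a low-power
   state until the next interrupt wakes it. With no IRQs at all there's
   nothing to wake it, which is why a poll-only design can't use HLT. */
static void idle_hlt(void)
{
    for (;;) {
        __asm__ volatile ("sti; hlt");
    }
}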
Lastly, 80x86 hardware is designed to tolerate some latency (e.g. NICs with ring buffers and "almost empty" or "almost full" IRQs, instead of "empty" and "full" IRQs), so you don't really gain much by sacrificing a CPU for reduced latency.
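As a purely illustrative sketch of the "almost full" idea (the field names and thresholds here are made up; real NICs implement this in hardware):

#include <stdint.h>

#define RX_RING_SIZE   256   /* must be a power of two for the mask below */
#define RX_ALMOST_FULL 192   /* raise the IRQ well before the ring overflows */

struct rx_ring {
    uint32_t head;   /* next slot the NIC will fill */
    uint32_t tail;   /* next slot the driver will drain */
};

static uint32_t rx_ring_used(const struct rx_ring *r)
{
    return (r->head - r->tail) & (RX_RING_SIZE - 1);
}

/* The NIC asserts its IRQ when the ring is *almost* full, so the driver
   still has (RX_RING_SIZE - RX_ALMOST_FULL) slots of slack before packets
   get dropped - that slack is the latency the hardware tolerates. */
static int rx_almost_full(const struct rx_ring *r)
{
    return rx_ring_used(r) >= RX_ALMOST_FULL;
}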
In general, polling might be beneficial in special situations, but mostly it's used by lazy programmers who couldn't be bothered to do things properly...
However, one of those "special situations" is very high speed NICs, but in this case you'd only use polling when necessary (e.g. switch over to "polling mode" once a certain load is reached, and switch back to "IRQ mode" if the load reduces) to avoid wasting CPU time when there's no reason to.
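A rough sketch of that hybrid scheme (the same idea as Linux's NAPI; all the nic_* and schedule_poll functions below are hypothetical driver hooks):

#define POLL_BUDGET 64   /* max packets drained per poll pass */

extern void nic_disable_rx_irq(void);
extern void nic_enable_rx_irq(void);
extern int  nic_rx_pending(void);
extern void handle_packet(void);
extern void schedule_poll(void);   /* ask the kernel to call nic_poll() soon */

/* RX interrupt handler: under load, mask the IRQ and switch to polling. */
void nic_rx_irq(void)
{
    nic_disable_rx_irq();
    schedule_poll();
}

/* Called repeatedly (e.g. from a worker thread) while in polling mode. */
void nic_poll(void)
{
    int work = 0;

    while (nic_rx_pending() && work < POLL_BUDGET) {
        handle_packet();
        work++;
    }

    if (work < POLL_BUDGET) {
        nic_enable_rx_irq();   /* load has dropped: back to IRQ mode */
    } else {
        schedule_poll();       /* still busy: keep polling */
    }
}

The budget keeps a busy NIC from monopolising the CPU; the exact number is just a placeholder.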
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Thanks.
I have a quick related question: if a statement such as for(;;); is executed, is the CPU still highly active, or is it smart enough to reduce its power consumption on its own?
Website: https://joscor.com
- piranha
Well, when I have a for(;;); statement, it uses 100% of the CPU (that's what the CPU monitor says), but for(;;)asm("hlt"); uses 0%.
-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
In real-time systems, polling is often used instead of interrupt-driven device control because it's deterministic.
If you disable interrupts, you can determine the maximum execution time of various tasks with strict upper bounds. With interrupts, there is always the possibility that a task will be preempted many times, throwing off your upper bound calculations.
So, interrupts are most efficient (for CPU utilization, and the power use mentioned above), but polling can be more responsive (depends on how many devices you're polling in the main loop) and deterministic.
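For example, a deterministic main loop for a setup like the one described earlier (two NICs and a timer) might be no more than this - poll_nic0/poll_nic1/poll_timer are hypothetical, each assumed to have a known worst-case execution time:

extern void poll_nic0(void);
extern void poll_nic1(void);
extern void poll_timer(void);

void main_loop(void)
{
    for (;;) {
        poll_nic0();    /* bounded by t_nic0_max */
        poll_nic1();    /* bounded by t_nic1_max */
        poll_timer();   /* bounded by t_timer_max */
        /* Worst-case loop period = t_nic0_max + t_nic1_max + t_timer_max;
           nothing can preempt the loop, so the bound actually holds. */
    }
}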
A polling-driven system would presumably require little or no synchronization (in a single-processor environment), and the complexity and binary size savings may be suited to the machine/device.
Like Brendan said, polling is often used in cases of laziness or just testing some device routines, but there are times when it's appropriate. Odds are these aren't in general-purpose operating systems, but are more likely a part of an embedded or real-time system.