Fully poll-driven OS?

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
01000101
Member
Posts: 1599
Joined: Fri Jun 22, 2007 12:47 pm

Fully poll-driven OS?

Post by 01000101 »

I was wondering: what would be the advantage of a fully poll-driven OS, with no interrupts of any sort?

Would it really be worth 'downgrading' an OS if it only did a handful of things, like, say, network support? :wink:

Yes, I am considering this as a useful option for my OS.
I only use three devices (for the most part): two of them are network cards, and the other is a timer (which I don't even really need).

Would it be wise to switch over to handling only network operations, by polling?

I think it would speed things up a bit, but I am probably wrong.
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re: Fully poll-driven OS?

Post by Brendan »

Hi,
01000101 wrote:I was wondering: what would be the advantage of a fully poll-driven OS, with no interrupts of any sort?
The advantage would be latency: when a transfer completes, you'd know almost immediately, and wouldn't need to pay the IRQ handling overhead (trace cache flush, saving some registers, loading some registers, sending the EOI, etc.).
01000101 wrote:Would it really be worth 'downgrading' an OS if it only did a handful of things, like, say, network support? :wink:
That depends. For example, for a special-purpose OS on an embedded system where the number of things being done equals the number of CPUs, you could get rid of IRQs, have no scheduler at all, and get better latency.

Of course, if you're using polling and trying to do more than one thing (per CPU) at a time, then your latency will depend on the scheduler (e.g. how long until a task gets CPU time again and checks whatever it's polling). In this case latency will be much, much worse than with IRQs.

Also, for 80x86, no IRQs means there's nothing to take the CPU out of a sleep state (e.g. the HLT instruction), so you get more power consumption and heat while doing nothing (except waiting for something to happen).

Lastly, 80x86 hardware is designed to tolerate some latency (e.g. NICs with ring buffers and "almost empty" or "almost full" IRQs, instead of "empty" and "full" IRQs), so you don't really gain much by sacrificing a CPU for reduced latency.
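To illustrate the "almost full" idea, here's a minimal sketch of an RX descriptor ring with a watermark. The ring size, watermark value, and all names are invented for illustration; the point is that the driver gets notified while slots are still free for in-flight frames, which is what lets the hardware tolerate some IRQ latency.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical RX descriptor ring with a watermark: the "almost full"
 * interrupt fires while there is still room for frames in flight. */
#define RING_SIZE      256   /* must be a power of two for the modulo below */
#define ALMOST_FULL_AT 224   /* raise the IRQ with 32 slots still free */

struct rx_ring {
    size_t head;   /* next slot the NIC will fill */
    size_t tail;   /* next slot the driver will drain */
};

static size_t ring_used(const struct rx_ring *r)
{
    /* Unsigned wrap-around makes this correct even when head < tail. */
    return (r->head - r->tail) % RING_SIZE;
}

/* True when the hardware would assert the "almost full" interrupt. */
static bool ring_almost_full(const struct rx_ring *r)
{
    return ring_used(r) >= ALMOST_FULL_AT;
}
```

Because the watermark leaves slack in the ring, the driver can afford the usual IRQ entry/exit overhead without dropping frames.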

In general, polling might be beneficial in special situations, but mostly it's used by lazy programmers who couldn't be bothered to do things properly... ;)

However, one of those "special situations" is very high speed NICs, but in this case you'd only use polling when necessary (e.g. switch over to "polling mode" once a certain load is reached, and switch back to "IRQ mode" if the load reduces) to avoid wasting CPU time when there's no reason to.
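That switch-over logic can be sketched as a small state machine with hysteresis (two thresholds, so the driver doesn't flap between modes under a steady load). The thresholds and all names here are made up for illustration:

```c
#include <stdbool.h>

/* Hybrid IRQ/polling scheme: stay in IRQ mode under light load, switch to
 * polling under heavy load, and fall back again when traffic dies down. */
enum nic_mode { MODE_IRQ, MODE_POLL };

#define ENTER_POLL_THRESHOLD 1000  /* packets per tick: start polling */
#define EXIT_POLL_THRESHOLD   100  /* packets per tick: re-enable IRQs */

static enum nic_mode update_mode(enum nic_mode mode, unsigned pkts_this_tick)
{
    if (mode == MODE_IRQ && pkts_this_tick > ENTER_POLL_THRESHOLD)
        return MODE_POLL;   /* a real driver would mask the NIC's IRQ here */
    if (mode == MODE_POLL && pkts_this_tick < EXIT_POLL_THRESHOLD)
        return MODE_IRQ;    /* ...and unmask it here */
    return mode;
}
```

The gap between the two thresholds is deliberate: with a single threshold, a load hovering near it would cause constant mode switching.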


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
01000101
Member
Posts: 1599
Joined: Fri Jun 22, 2007 12:47 pm

Post by 01000101 »

Thanks.

I have a quick related question: if a statement such as for(;;); is executed, is the CPU still fully active? Or is it smart enough to reduce its power consumption on its own?
piranha
Member
Posts: 1391
Joined: Thu Dec 21, 2006 7:42 pm
Location: Unknown. Momentum is pretty certain, however.

Post by piranha »

Well, when I have a for(;;); statement, it uses 100% of the CPU (that's what the CPU monitor says), but for(;;) asm("hlt"); uses 0%.

-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
01000101
Member
Posts: 1599
Joined: Fri Jun 22, 2007 12:47 pm

Post by 01000101 »

OK, thanks.
speal
Member
Posts: 43
Joined: Wed Mar 07, 2007 10:09 am
Location: Minneapolis, Minnesota

Post by speal »

In real-time systems, polling is often used instead of interrupt-driven device control because it's deterministic.

If you disable interrupts, you can determine the maximum execution time of various tasks with strict upper bounds. With interrupts, there is always the possibility that a task will be preempted many times, throwing off your upper-bound calculations.

So, interrupts are most efficient (for CPU utilization, and the power use mentioned above), but polling can be more responsive (depending on how many devices you're polling in the main loop) and deterministic.
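The determinism argument can be made concrete: with interrupts disabled, the worst-case latency for servicing any one device is just the cost of one full trip around the poll loop, i.e. the sum of every device's poll time. The per-device costs below are invented for illustration:

```c
/* Worst-case poll latency with interrupts disabled: a device's request can
 * wait at most one full loop iteration, so the bound is the sum of all
 * per-device poll costs. These costs are hypothetical example values. */
#define NUM_DEVICES 3

static const unsigned poll_cost_us[NUM_DEVICES] = { 5, 12, 8 };

static unsigned worst_case_latency_us(void)
{
    unsigned total = 0;
    for (int i = 0; i < NUM_DEVICES; i++)
        total += poll_cost_us[i];
    return total;   /* a strict, easily verified upper bound */
}
```

With preemption, by contrast, the bound also depends on how often and how long each interrupt handler can run, which is much harder to pin down.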

A polling-driven system would presumably require little or no synchronization (in a single-processor environment), and the savings in complexity and binary size may suit the machine or device.
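A minimal single-CPU poll loop makes the "no synchronization" point clear: one handler per device, called in turn, with nothing ever preempting the loop, so the handlers need no locks. The device names and handlers below are placeholders (the counters stand in for real device work):

```c
#include <stddef.h>

/* Minimal single-CPU poll loop: handlers run to completion, one at a
 * time, so shared state needs no locking. Counters stand in for work. */
static unsigned nic0_polls, nic1_polls, timer_polls;

static void poll_nic0(void)  { nic0_polls++;  /* would drain the RX ring */ }
static void poll_nic1(void)  { nic1_polls++;  /* would drain the RX ring */ }
static void poll_timer(void) { timer_polls++; /* would check a deadline  */ }

typedef void (*poll_fn)(void);
static const poll_fn devices[] = { poll_nic0, poll_nic1, poll_timer };

/* One trip around the loop: every device gets polled exactly once. */
static void poll_once(void)
{
    for (size_t i = 0; i < sizeof devices / sizeof devices[0]; i++)
        devices[i]();
}

static void main_loop(void)
{
    for (;;)
        poll_once();
}
```

Adding a device is just another entry in the table; the worst-case loop time grows by that device's poll cost and nothing else changes.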

Like Brendan said, polling is often used in cases of laziness or just testing some device routines, but there are times when it's appropriate. Odds are these aren't in general-purpose operating systems, but are more likely a part of an embedded or real-time system.