Hi,
eddyb wrote: I think these loops are occupying the processor and it never take attention at irqs and stuff...
I'm right?
I doubt it.
There are/were a few CPUs with "errata" that make them behave strangely in some situations involving tight loops. This includes Cyrix 6x86 CPUs (the CPU can completely lock up - see "the coma bug") and some Pentium 4 CPUs with hyper-threading (where one logical CPU doing a tight loop that includes an instruction with a LOCK prefix can hog CPU resources and prevent the other logical CPU from doing anything).
However, it's extremely unlikely that a CPU erratum is your problem. It's much, much more likely that your IRQ isn't firing, that you've disabled IRQs (e.g. with the CLI instruction), that you failed to send an EOI, that you didn't tell the compiler that "timer_ticks" is volatile, or something else.
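For what it's worth, here's a minimal sketch of the last two items, assuming a PIC-driven timer on IRQ0 and an IRQ0 assembly stub (not shown) that calls the C handler - adjust the names to whatever your kernel uses:

Code:
    #include <stdint.h>

    /* Must be volatile, or the compiler may keep it in a register and the
       polling loop will never notice the IRQ handler changing it. */
    volatile uint32_t timer_ticks = 0;

    static inline void outb(uint16_t port, uint8_t value)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
    }

    /* Called from your IRQ0 stub (hypothetical name). */
    void timer_irq_handler(void)
    {
        timer_ticks++;
        outb(0x20, 0x20);   /* EOI to the master PIC, or IRQ0 never fires again */
    }

    /* The busy-wait in question - it only works if IRQs are enabled ("sti")
       and the handler above actually runs. */
    void wait_ticks(uint32_t ticks)
    {
        uint32_t end = timer_ticks + ticks;
        while (timer_ticks < end) {
            __asm__ volatile ("hlt");   /* idle until the next IRQ instead of spinning */
        }
    }

(The "hlt" only stops the CPU from spinning flat out - as far as the scheduler is concerned the task is still hogging its time slice, which is the design flaw below.)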
eddyb wrote: Exists a solution for this? is there any type of loop that isn't blocking all processes?
You could find your bug (check through my "quick list of likely problems" above) and fix it. However, that only fixes the bug in your implementation, and doesn't fix the design flaw (wasting heaps of CPU time polling).
To fix the design flaw you'd want to tell the scheduler "don't give this task any CPU time until <foo> happens", and then when "<foo>" does happen you want to tell the scheduler "Hey, you can give this task CPU time again now". How you do this depends on how your scheduler works.
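In the abstract, those two operations might look something like this - all the names here (block_task(), unblock_task(), the run queue helpers, etc.) are made up, and the real code depends entirely on your scheduler's data structures:

Code:
    #include <stdint.h>

    /* Hypothetical task structure - the real fields depend on your scheduler. */
    enum task_state { TASK_RUNNING, TASK_BLOCKED };

    struct task {
        enum task_state state;
        struct task *next;
        uint32_t wake_tick;          /* used by the timer's sleep list (next sketch) */
        /* ...registers, kernel stack, etc... */
    };

    /* Assumed to exist elsewhere in the kernel. */
    void remove_from_run_queue(struct task *task);
    void add_to_run_queue(struct task *task);
    void schedule(void);

    /* "Don't give this task any CPU time until <foo> happens." */
    void block_task(struct task *task)
    {
        task->state = TASK_BLOCKED;
        remove_from_run_queue(task);
        schedule();                  /* switch to some other runnable task */
    }

    /* "<foo> happened - you can give this task CPU time again now." */
    void unblock_task(struct task *task)
    {
        task->state = TASK_RUNNING;
        add_to_run_queue(task);
    }

With that in place, "waiting for a tick count" is just one user of block_task()/unblock_task() - waiting for disk I/O, IPC, etc. can use the same pair.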
For example, if your scheduler has a list of running tasks, then when the task wants to delay for a while you'd remove the task from the scheduler's list of running tasks, put it on the timer's list of tasks waiting to wake up, and then do a task switch to something else. The timer IRQ would increment the "timer_ticks" variable and then check its list of tasks waiting to wake up. For each task that should wake up, the timer would remove it from its list and put it back on the scheduler's list of running tasks.
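Continuing the sketch above (same file, same made-up names, and ignoring the locking or IRQ-disabling you'd need around the sleep list once IRQs can interrupt sleep_until()), it might look something like:

Code:
    /* Builds on the previous sketch. */
    extern struct task *current_task;          /* assumed "currently running task" global */
    extern volatile uint32_t timer_ticks;

    static struct task *sleep_list = NULL;     /* tasks waiting for the timer */

    /* Task side: "don't run me again until tick <wake_tick>". */
    void sleep_until(uint32_t wake_tick)
    {
        struct task *task = current_task;

        task->wake_tick = wake_tick;
        task->next = sleep_list;               /* onto the timer's waiting list */
        sleep_list = task;
        block_task(task);                      /* off the run queue, switch away */
    }

    /* Timer IRQ side - call this from the IRQ0 stub (before the EOI),
       in place of the bare "timer_ticks++" from the first sketch. */
    void timer_tick(void)
    {
        struct task **link = &sleep_list;

        timer_ticks++;

        while (*link != NULL) {
            struct task *task = *link;
            if (task->wake_tick <= timer_ticks) {
                *link = task->next;            /* remove from the waiting list... */
                unblock_task(task);            /* ...and back onto the run queue */
            } else {
                link = &task->next;
            }
        }
    }

Keeping that list sorted by wake_tick would let the IRQ handler stop at the first task that isn't due yet, instead of walking the whole list every tick.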
Cheers,
Brendan