The main idea is to minimize CPU idle time during critical sections when another thread holds a lock (i.e. avoid busy waits; in particular, some code must work under the assumption that there is no scheduler, so it can't just yield() instead).
Basically, I intend to use CLI/STI (interrupt disable/enable) pairs as the primary locking mechanism, so I can run critical code without the scheduler interfering.
I'll make these basic assumptions:
- critical code has a finite execution time
- all threads will get a fair share of CPU time
As far as I can tell this has the mutual exclusion property: while IF is cleared, no other thread can be scheduled, so no other thread can be inside critical code.
No starvation: a thread can always enter the critical section, basically because whenever a thread is scheduled, IF must be set, so the critical section can be entered immediately.
And progress: a thread always enters the critical section when it reaches it (there's no check to fail anyway), and by the finite-execution-time assumption it always leaves it.
The most interesting question to ask here: how much work can and should I put in those blocks while keeping interrupt latency negligible?
Second question: on MP systems I'll probably have to busy wait to ensure synchronisation between processors. On the other hand, you KNOW the other CPU is currently executing critical code and must exit soon, so the delay is at most the critical section's execution time times (number of processors - 1). What do you think?
Fire away your comments, remarks and ideas. All creative thinking welcome (or just a nod of approval so I know I'm not building an inherently flawed kernel).
