Hi,
pini wrote:
What technique are you all using for this ?
I have 2 different methods for locking re-entrancy locks (which are all spinlocks). The first method locks the re-entrancy lock and prevents thread switches, e.g.:
%macro LOCK_THREADS 2
        inc dword [gs:CPULISTentryStruct.schedulerLocks]
%ifdef KERNSUPPORT_MP
        lock bts dword %1,%2            ;Try to acquire lock
        jnc %%locked                    ;Continue if lock acquired
%%wait:
        pause
        lock bts dword %1,%2            ;Try to acquire lock
        jc %%wait                       ;Keep trying if still locked
%%locked:
%endif
%endmacro
%macro UNLOCK_THREADS 2
%ifdef KERNSUPPORT_MP
        lock btr dword %1,%2            ;Unlock the lock
%endif
        pushfd
        cli
        sub dword [gs:CPULISTentryStruct.schedulerLocks],1
        ja %%l1
        cmp byte [gs:CPULISTentryStruct.switchThreadsFlag],0
        je %%l1
        mov byte [gs:CPULISTentryStruct.switchThreadsFlag],0
        call switchThreads
%%l1:
        popfd
%endmacro
On MP computers this only stops the scheduler for the current CPU. The only reason this is done is to reduce the time between acquiring the lock and releasing it (i.e. to prevent thread switches while the lock is held, so that the lock is released sooner, which reduces lock contention). By itself it does not make it safe to modify the scheduler's data structures (these have their own locks, which must be locked first). If any code attempts to switch threads while one or more locks are held, the thread switch is delayed until all locks have been released.
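For readers less comfortable with NASM macros, the same idea can be sketched in C11. This is a single-CPU model only: the structure, the field names and switch_threads() stand in for my per-CPU [gs:CPULISTentryStruct...] fields, atomic_flag plays the role of "lock bts", and the pushfd/cli window around the final check is not modelled:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-in for the per-CPU CPULISTentryStruct fields;
   a real kernel would reach these through [gs:...]. */
static struct {
    int scheduler_locks;        /* nesting count bumped by LOCK_THREADS */
    bool switch_threads_flag;   /* a thread switch was requested while locked */
} cpu;

/* Two different re-entrancy locks (the macros take the lock's address
   and bit number as parameters; each data structure has its own bit). */
static atomic_flag lock_a = ATOMIC_FLAG_INIT, lock_b = ATOMIC_FLAG_INIT;

static int switches_done;       /* counts calls to our fake switchThreads */
static void switch_threads(void) { switches_done++; }

/* LOCK_THREADS: defer preemption first, then spin until the bit is ours. */
static void lock_threads(atomic_flag *bit)
{
    cpu.scheduler_locks++;
    while (atomic_flag_test_and_set_explicit(bit, memory_order_acquire))
        ;   /* the asm version executes PAUSE in this loop */
}

/* UNLOCK_THREADS: clear the bit, then perform any thread switch that was
   postponed while one or more locks were held. */
static void unlock_threads(atomic_flag *bit)
{
    atomic_flag_clear_explicit(bit, memory_order_release);
    if (--cpu.scheduler_locks == 0 && cpu.switch_threads_flag) {
        cpu.switch_threads_flag = false;
        switch_threads();
    }
}
```

Note how nesting works: the scheduler-locks counter is shared per CPU, while each lock has its own bit, so a deferred switch only runs when the outermost unlock brings the counter back to zero.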
The second type of lock does the same as the first but in addition prevents IRQs from being handled:
%macro LOCK_IRQS 2
        inc dword [gs:CPULISTentryStruct.schedulerLocks]
        inc dword [gs:CPULISTentryStruct.IRQlocks]
%ifdef KERNSUPPORT_MP
        LAPICwrite LAPICtaskPrioriy,0xF0
        lock bts dword %1,%2            ;Try to acquire lock
        jnc %%locked                    ;Continue if lock acquired
%%wait:
        pause
        lock bts dword %1,%2            ;Try to acquire lock
        jc %%wait                       ;Keep trying if still locked
%%locked:
%endif
%endmacro
%macro UNLOCK_IRQS 2
%ifdef KERNSUPPORT_MP
        lock btr dword %1,%2            ;Unlock the lock
%endif
        sub dword [gs:CPULISTentryStruct.IRQlocks],1
        ja %%l1
        call retryIRQs
%ifdef KERNSUPPORT_MP
        LAPICwrite LAPICtaskPrioriy,0x00
%endif
%%l1:
        pushfd
        cli
        sub dword [gs:CPULISTentryStruct.schedulerLocks],1
        ja %%l2
        cmp byte [gs:CPULISTentryStruct.switchThreadsFlag],0
        je %%l2
        mov byte [gs:CPULISTentryStruct.switchThreadsFlag],0
        call switchThreads
%%l2:
        popfd
%endmacro
If an IRQ occurs when one or more "IRQ locks" have been acquired, then the IRQ will be put onto a queue. The IRQ queue is handled as soon as all "IRQ locks" have been released.
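The queue-and-retry behaviour can be modelled in a few lines of C. All the names here (irq_arrived, retry_irqs, the fixed-size queue) are hypothetical stand-ins, and real IRQ delivery is of course asynchronous rather than a function call:

```c
#include <assert.h>

#define IRQ_QUEUE_MAX 32

static int irq_locks;                 /* nesting count of "IRQ locks" held */
static int irq_queue[IRQ_QUEUE_MAX];  /* IRQs that arrived while locked */
static int irq_queue_len;
static int handled[IRQ_QUEUE_MAX];    /* record of IRQs actually handled */
static int handled_len;

static void handle_irq(int irq) { handled[handled_len++] = irq; }

/* An IRQ arrives: handle it now, or queue it if any "IRQ lock" is held. */
static void irq_arrived(int irq)
{
    if (irq_locks > 0)
        irq_queue[irq_queue_len++] = irq;
    else
        handle_irq(irq);
}

/* retryIRQs: handle everything that was queued while the locks were held. */
static void retry_irqs(void)
{
    for (int i = 0; i < irq_queue_len; i++)
        handle_irq(irq_queue[i]);
    irq_queue_len = 0;
}

static void lock_irqs(void)   { irq_locks++; }
static void unlock_irqs(void) { if (--irq_locks == 0) retry_irqs(); }
```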
Either of these methods can be used to lock and unlock any re-entrancy lock (as long as the "unlock" matches the "lock"). In general the first method ("LOCK_THREADS") is used by default, while the second method ("LOCK_IRQS") is only used to protect code that might be used within an IRQ handler.
My OS uses "extremely fine-grained" locking to severely reduce lock contention - the chance of a thread spinning while it waits for a lock to be released (so that it can be acquired by the spinning thread) is very small, even when there's 256 CPUs using all the locks. Currently there's:
- one lock used to prevent threads from being spawned or terminated
- up to 513 separate locks for physical memory management
- 4 locks per CPU for scheduler run queues
- a separate lock for each message queue (2 message queues per thread - user & kernel)
- 2 separate locks for my "IPI calls"
- a lock for each process to protect its part of linear memory
- 3 different locks to protect different areas within the kernel's part of the address space (which will increase to about 10 by the time I'm done)
Almost everything outside of the micro-kernel uses messaging to serialize access to data structures, so that no re-entrancy locks are needed. This works because each thread owns a part of the address space that no other thread can access. The exception to this rule is data structures stored in "process space", which will need some form of re-entrancy locking - I haven't implemented anything for this yet, but when I do it won't be spinlocks (it'll do a thread switch to whoever holds the lock instead of spinning).
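A very rough sketch of what that non-spinning lock could look like, assuming a hypothetical switch_to_thread() scheduler hook (everything here is speculative; the single-threaded model below pretends the holder finishes its critical section and releases the lock before control comes back):

```c
#include <assert.h>

/* Hypothetical lock that donates the CPU to the holder instead of spinning.
   owner is -1 when the lock is free, otherwise the holder's thread ID. */
static struct { int owner; } the_lock = { -1 };
static int donations;   /* how often we switched instead of spinning */

/* In a real kernel this would context-switch to thread t and return when
   the caller is next scheduled; this model simulates the holder having
   released the lock in the meantime. */
static void switch_to_thread(int t)
{
    (void)t;
    donations++;
    the_lock.owner = -1;   /* simulated release by the holder */
}

static void yield_lock_acquire(int me)
{
    while (the_lock.owner != -1)
        switch_to_thread(the_lock.owner);   /* run the holder, don't spin */
    the_lock.owner = me;
}

static void yield_lock_release(void)
{
    the_lock.owner = -1;
}
```

The point of the design is that contended CPU time goes to the thread that can actually make progress (the lock holder), rather than being burned in a pause loop.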
Cheers,
Brendan