Types of spinlocks (I need a new type!)
Posted: Thu Dec 15, 2011 4:38 am
My typical spinlocks look like the RequestSpinLock/ReleaseSpinLock pair in the first code block below.
These work between cores, and also between baseline code and IRQs on a specific core, since they disable interrupts. The sections of code protected by this type of spinlock must be short in order not to kill interrupt performance. For obvious reasons, they are not nestable, since taking or releasing the inner spinlock would enable interrupts while the outer one is still held, which is not allowed: if interrupts are enabled while a spinlock is held, deadlocks with IRQs could occur.
This kind of spinlock cannot protect the scheduler as it switches threads: first, because the scheduler takes other spinlocks (and nesting is not allowed), and second, because the thread-switch code is too long.
The scheduler instead has a within-core protection mechanism (and keeps local lists of available threads).
The first set of locks (LockCore/UnlockCore below) may only be called from baseline code (not from IRQs or other asynchronous callbacks such as timers).
The second set of locks (TryLockCore/TryUnlockCore below) is called by ISRs (it is automatically part of the ISR entry/exit code) and when timers fire.
The scheduler locks ensure that the current thread cannot be scheduled away from until the lock is returned to its default state (ps_nesting = -1). Within these sections of code interrupts remain enabled, and spinlocks can be taken. The lock works well for blocking threads as long as the thread is put on the block-list after its state has been saved. If it is put on a block-list earlier than that, another core can pick the thread up before its state has been saved, so it can malfunction on multicore even though it works on singlecore.
The Signal/WaitForSignal synchronization primitive is one case where the thread must be placed on the block-list before its state is saved. This is because the pair must guarantee that WaitForSignal exits regardless of whether Signal is issued before, during or after the WaitForSignal call. On singlecore this pair works well, but it seems to occasionally malfunction on multicore.
What is needed to protect Signal/WaitForSignal is a spinlock on the thread involved. However, the interrupt-disabling spinlock above cannot be used (the protected code is too long, and it takes other spinlocks). What would be needed is a spinlock that does not disable interrupts, and which therefore cannot tolerate reentrancy from IRQs. However, the environment inside a scheduler-locked section is such that this spinlock can be placed in sections that are guaranteed to be non-IRQ (baseline code), and thus it should never deadlock with IRQs on the same core.
It could look like the RequestThreadLock/ReleaseThreadLock pair in the last code block below.
How are others doing this, or is it simply not allowed to wake up threads from ISRs?
Code: Select all
RequestSpinLock Proc near
reqSpin:
    mov ax,ds:timer_spinlock    ; read the lock word without locking the bus
    or ax,ax
    je reqGet                   ; looks free? try to take it
    ;
    sti                         ; spin with interrupts enabled so IRQs are not blocked
    pause
    jmp reqSpin
reqGet:
    cli                         ; take the lock with interrupts disabled
    inc ax                      ; ax = 1
    xchg ax,ds:timer_spinlock   ; atomically set the lock word
    or ax,ax
    jne reqSpin                 ; another core got it first, go back to spinning
    ret
RequestSpinLock Endp

ReleaseSpinLock Proc near
    mov ds:timer_spinlock,0     ; release the lock
    sti                         ; and re-enable interrupts
    ret
ReleaseSpinLock Endp
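For readers who think better in C, here is a minimal sketch of the same pattern, using GCC __atomic builtins and hypothetical cli()/sti() wrappers; the function names are mine, timer_spinlock is the same variable as above. It only illustrates the logic of RequestSpinLock/ReleaseSpinLock, it is not a drop-in replacement.
Code: Select all
#include <stdint.h>
#include <immintrin.h>                      /* _mm_pause() */

/* Hypothetical wrappers; in the assembly these are the bare instructions. */
static inline void cli(void) { __asm__ volatile("cli" ::: "memory"); }
static inline void sti(void) { __asm__ volatile("sti" ::: "memory"); }

static volatile uint32_t timer_spinlock;    /* 0 = free, nonzero = taken */

void request_spinlock(void)
{
    for (;;) {
        /* Spin with interrupts enabled so IRQs on this core are not blocked
           while we wait (the sti/pause loop in the assembly). */
        while (timer_spinlock != 0) {
            sti();
            _mm_pause();
        }
        /* Looks free: disable interrupts and try to take it atomically. */
        cli();
        if (__atomic_exchange_n(&timer_spinlock, 1, __ATOMIC_ACQUIRE) == 0)
            return;                         /* got it, interrupts stay off */
        /* Another core beat us to it: go back to spinning. */
    }
}

void release_spinlock(void)
{
    __atomic_store_n(&timer_spinlock, 0, __ATOMIC_RELEASE);
    sti();                                  /* critical section is over */
}
The two scheduler-lock pairs described above follow next: LockCore/UnlockCore for baseline code, then TryLockCore/TryUnlockCore for ISRs and timers.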
Code: Select all
LockCore Proc near
    push word ptr core_data_sel
    pop fs                          ; fs -> per-core scheduler data
    add fs:ps_nesting,1             ; ps_nesting is -1 when unlocked; carry = we took it
    jc lcDone
    ;
    CrashGate                       ; already locked, nesting not allowed here, so panic!
lcDone:
    mov fs,fs:ps_sel
    ret
LockCore Endp

UnlockCore Proc near
    push ax
ucRetry:
    cli
    sub fs:ps_nesting,1             ; carry = count is back at -1 (lock released)
    jc ucNestOk
    ;
    CrashGate                       ; nesting count is wrong, so panic!
ucNestOk:
    test fs:ps_flags,PS_FLAG_TIMER  ; did a timer expire while the scheduler was locked?
    jnz ucSwap
    ;
    mov ax,fs:ps_wakeup_list        ; check if something was woken while the scheduler was locked
    or ax,ax
    jz ucDone
ucSwap:
    add fs:ps_nesting,1             ; relock before switching threads
    jnc ucRetry
    ;
    sti
    push OFFSET ucDone              ; resume here once this thread runs again
    call SaveLockedThread           ; save state and schedule
    jmp ContinueCurrentThread
ucDone:
    sti
    pop ax
    ret
UnlockCore Endp
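The LockCore/UnlockCore pair is easier to follow with the ps_nesting convention spelled out (-1 = unlocked, as mentioned above). Below is a rough C rendering, a sketch only: this_core(), panic() and reschedule() are stand-ins I made up for the fs: access, CrashGate, and SaveLockedThread/ContinueCurrentThread.
Code: Select all
#include <stddef.h>

/* Hypothetical per-core scheduler data; ps_nesting, ps_flags and
   ps_wakeup_list mirror the fields used in the assembly above. */
struct per_core {
    volatile int      ps_nesting;     /* -1 = scheduler unlocked, >= 0 = locked */
    volatile unsigned ps_flags;       /* PS_FLAG_TIMER: a timer expired         */
    volatile void    *ps_wakeup_list; /* threads woken while we were locked     */
};

#define PS_FLAG_TIMER 0x01

extern struct per_core *this_core(void);  /* stand-in for the fs: access        */
extern void panic(const char *msg);       /* stand-in for CrashGate             */
extern void reschedule(void);             /* stand-in for SaveLockedThread and
                                             ContinueCurrentThread              */

void lock_core(void)
{
    /* Baseline-only and not nestable: the count must go from -1 to 0. */
    if (++this_core()->ps_nesting != 0)
        panic("LockCore while already locked");
}

void unlock_core(void)
{
    struct per_core *pc = this_core();

    /* Interrupts are disabled here in the assembly version (cli). */
    if (--pc->ps_nesting != -1)
        panic("unbalanced UnlockCore");

    /* If a timer expired or threads were woken while the scheduler was
       locked, relock and let the scheduler switch threads; the switch code
       is assumed to put ps_nesting back to -1 before this thread resumes. */
    if ((pc->ps_flags & PS_FLAG_TIMER) || pc->ps_wakeup_list != NULL) {
        pc->ps_nesting++;                 /* relock */
        reschedule();
    }
    /* sti in the assembly version. */
}
The ISR-side variants come next; they differ in that nesting is allowed (an ISR may interrupt code that already holds the lock), so TryLockCore never panics and TryUnlockCore only reschedules when the count actually drops back to -1.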
Code: Select all
TryLockCore Proc near
    push word ptr core_data_sel
    pop fs                          ; fs -> per-core scheduler data
    add fs:ps_nesting,1             ; nestable: just bump the count, never panic
    mov fs,fs:ps_sel
    ret
TryLockCore Endp

TryUnlockCore Proc near
    push ax
tucRetry:
    cli
    sub fs:ps_nesting,1             ; no carry = still nested, leave scheduling alone
    jnc tucDone
    ;
    mov ax,fs:ps_curr_thread        ; no current thread, nothing to switch away from
    or ax,ax
    jz tucDone
    ;
    test fs:ps_flags,PS_FLAG_TIMER  ; did a timer expire while the scheduler was locked?
    jnz tucSwap
    ;
    mov ax,fs:ps_wakeup_list        ; check if something was woken while the scheduler was locked
    or ax,ax
    jz tucDone
tucSwap:
    add fs:ps_nesting,1             ; relock before switching threads
    jnc tucRetry
    ;
    sti
    push OFFSET tucDone             ; resume here once this thread runs again
    call SaveLockedThread           ; save state and schedule
    jmp ContinueCurrentThread
tucDone:
    sti
    pop ax
    ret
TryUnlockCore Endp
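As described above, this second set is wired into the ISR entry/exit path, so an interrupt handler never switches threads directly; it only records the wakeup and lets TryUnlockCore decide on exit. A sketch of what that looks like, with made-up names (isr_common, handle_irq, queue_wakeup) around the two real entry points:
Code: Select all
/* C stand-ins for the assembly entry points above. */
extern void try_lock_core(void);        /* TryLockCore: nestable, never panics   */
extern void try_unlock_core(void);      /* TryUnlockCore: switches threads only
                                           when the count drops back to -1 and a
                                           wakeup or timer is pending            */

/* Hypothetical helpers, not from the original post. */
extern int  handle_irq(int irq);        /* device handler, returns a woken
                                           thread id or -1                       */
extern void queue_wakeup(int thread);   /* put a thread on ps_wakeup_list        */

void isr_common(int irq)
{
    try_lock_core();                    /* part of the ISR entry code            */

    int woken = handle_irq(irq);
    if (woken >= 0)
        queue_wakeup(woken);            /* defer the wakeup, do not switch here  */

    try_unlock_core();                  /* part of the ISR exit code: may switch */
}
Last comes the proposed per-thread lock (RequestThreadLock/ReleaseThreadLock) from the question above.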
Code: Select all
RequestThreadLock Proc near
reqSpin:
    mov ax,es:p_spinlock            ; read the thread's lock word (es -> thread block)
    or ax,ax
    je reqGet                       ; looks free? try to take it
    ;
    pause                           ; spin without touching the interrupt flag
    jmp reqSpin
reqGet:
    inc ax                          ; ax = 1
    xchg ax,es:p_spinlock           ; atomically set the lock word
    or ax,ax
    jne reqSpin                     ; another core got it first, keep spinning
    ret
RequestThreadLock Endp

ReleaseThreadLock Proc near
    mov es:p_spinlock,0             ; release; this lock never disables interrupts
    ret
ReleaseThreadLock Endp
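To make the before/during/after guarantee concrete, here is one way the per-thread lock above could be used by a Signal/WaitForSignal pair. This is only a sketch of the idea in the post, with made-up helpers (current_thread, make_runnable, block_current_thread) and the lock rewritten with GCC builtins; where exactly the lock has to be dropped relative to the state save on multicore is precisely the open question being asked.
Code: Select all
/* Hypothetical thread control block; p_spinlock mirrors the field used by
   RequestThreadLock above, the rest is made up for illustration. */
struct thread {
    volatile int p_spinlock;    /* taken by RequestThreadLock, never disables IRQs */
    volatile int signalled;     /* set by Signal, consumed by WaitForSignal        */
};

/* C stand-ins for RequestThreadLock/ReleaseThreadLock. */
static void request_thread_lock(struct thread *t)
{
    while (__atomic_exchange_n(&t->p_spinlock, 1, __ATOMIC_ACQUIRE) != 0)
        __builtin_ia32_pause();
}
static void release_thread_lock(struct thread *t)
{
    __atomic_store_n(&t->p_spinlock, 0, __ATOMIC_RELEASE);
}

/* Assumed helpers, not from the original post. */
extern struct thread *current_thread(void);
extern void lock_core(void);                 /* LockCore above                     */
extern void unlock_core(void);               /* UnlockCore above                   */
extern void make_runnable(struct thread *t); /* move t to the wakeup list if it is
                                                on the block-list, else do nothing */
extern void block_current_thread(void);      /* put the caller on the block-list   */

void Signal(struct thread *t)
{
    /* Must only be called from baseline (non-IRQ) code, per the constraint
       above, or it could deadlock with a lock holder on the same core. */
    request_thread_lock(t);
    if (!t->signalled) {
        t->signalled = 1;
        make_runnable(t);       /* no effect if t has not blocked yet            */
    }
    release_thread_lock(t);
}

void WaitForSignal(void)
{
    struct thread *self = current_thread();

    lock_core();                /* stay on this core; interrupts remain enabled  */
    request_thread_lock(self);
    if (self->signalled) {
        self->signalled = 0;    /* Signal came first: keep running               */
        release_thread_lock(self);
        unlock_core();
        return;
    }
    block_current_thread();     /* on the block-list before the state is saved   */
    release_thread_lock(self);
    unlock_core();              /* the scheduler switches us away here; the gap
                                   between this release and the state save is the
                                   multicore problem described above              */
}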