
Spinlocks on a UP

Posted: Thu Sep 04, 2008 11:27 am
by salil_bhagurkar
Hi..

I've spent a lot of time digging through the Linux source to understand whether spinlocks are required on a uniprocessor system. In my opinion, a preemptible OS needs some kind of synchronization in kernel mode so that shared resources don't get corrupted. For example, we could use simple spinlocks to protect maybe the memory allocator's structures: when one process comes in and accesses the tables, it acquires the spinlock; when another process comes along and accesses them, it loops on the spinlock trying to acquire it. But on a uniprocessor system the spinning is a total waste of time, and could in fact be replaced by having the task that fails to acquire the lock yield the processor.

Alternatively, a semaphore could be used, which would involve putting the new process to sleep. That's what I've worked out so far, but I'm still somehow unsatisfied with my conclusions: I don't really understand when exactly a spinlock should be used and when a semaphore. Performance-wise, on a uniprocessor system, a spinlock would be a bit faster (just spinning) if the other process releases it within a short time; that would save the scheduler the overhead of putting the new process to sleep. But then again, the context switch from the current process to the new process is also an overhead...

Basically, I just need to organize this stuff properly in my mind, since I'm getting confused about it...

Re: Spinlocks on a UP

Posted: Thu Sep 04, 2008 3:28 pm
by Colonel Kernel
Spinlocks are unnecessary on a uniprocessor. All you need to guarantee mutual exclusion is to disable interrupts. This is true at the lowest level of abstraction, anyway. Depending on how complex your kernel is, you may want to implement semaphores for in-kernel use, but they will be implemented in terms of something lower-level. That would be disabling interrupts on a UP, or spinlocks or compare-and-swap primitives otherwise.

Re: Spinlocks on a UP

Posted: Tue Sep 09, 2008 8:13 am
by dc0d32
Research shows that spin times are considerably low on an averagely loaded UP system. Go with spinlocks if you can make sure the non-reentrant critical section is skinny.
It entirely depends on what you will use your system for. If it's something like Apache, then semaphores with wait queues would make more sense.

Also take care to plan ahead should you go for MP later.

Re: Spinlocks on a UP

Posted: Tue Sep 09, 2008 10:42 am
by bewing
Designing for MP optimization is the hard part. You may have seen me ranting before about how awful spinlocks are, and how lockless algorithms are so much better. But if you can guarantee that your code will only ever be run on a UP, then spinlocks are perfectly fine -- as prashant says, they are quite fast, in fact. The big problem with spinlocks only happens when you have multiple CPUs with separate caches -- it is strictly a cache invalidation problem on MP systems.
Of course, on a singletasking UP, a spinlock should sleep/yield, rather than spin. It actually makes a lot of sense to write code for both methods (lock-free and spinlocks) and only use the spinlocks when you have just one CPU (multiple cores on one CPU are OK too, and then actual spinning is acceptable).

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 1:09 am
by Colonel Kernel
First, sorry for the late reply... I've been AFK for a few weeks in Maui. :)
bewing wrote:Of course, on a singletasking UP, a spinlock should sleep/yield, rather than spin.
IMO, if it doesn't spin, it's not a spinlock, but a mutex or semaphore. Spinning on a UP leads to instant deadlock, which is why I think a lot of people get confused when spinlocks are discussed in the context of UP. Let's straighten out the terminology before giving advice...

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 1:31 am
by bewing
I dunno why I put the word "singletasking" in there. I must have been sleepy. :-k
But on a multitasking UP system, I'm not sure I entirely agree. The spinlock algorithm is a fairly specific one -- and whether a process has a sleep() in between polling attempts is pretty irrelevant.

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 1:50 am
by Colonel Kernel
bewing wrote:I dunno why I put the word "singletasking" in there. I must have been sleepy. :-k
But on a multitasking UP system, I'm not sure I entirely agree. The spinlock algorithm is a fairly specific one -- and whether a process has a sleep() in between polling attempts is pretty irrelevant.
Hmmm... We're talking past each other here.

To me, a spinlock is a lock that is acquired by having the CPU do an atomic test-and-set of a global variable in a loop. AFAIK this is done with interrupts disabled because the lock should protect against re-entrance of the critical section from the same CPU, as well as from other CPUs. If you do this on a UP system, you've just hung the system because no other CPU exists to change the variable that is being tested by the spinning CPU.

To put it another way, I don't think spinlocks allow for the possibility of the spinning thread to block for any reason whatsoever. They just keep spinning until the lock becomes available, by the very definition of what a "spinlock" is.

What you're talking about just sounds like polling in general, which can be done at any level of abstraction.

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 8:12 am
by bewing
Yes, we are. :wink: Hope you enjoyed Maui. :D

It seems to me that the OP is talking about having a resource on a UP system, and perhaps being interested in using a spinlock-type algorithm to allocate that resource between threads. His question has nothing to do with multiple-CPU systems. My comments are aimed at the concept of an "atomic test and set", followed by a "polling" test loop if the lock fails to acquire. And I think saying that spinlocks only apply in an MP situation is far too narrow. It is a locking algorithm that can be used in any situation where you can have asynchronous accesses to any shared resource, at any degree of abstraction.

If the lock fails to acquire, that already means you have multiple concurrent allocation attempts on the same lock, and there is never any guarantee that allocator A is going to be the next one to acquire the lock. The next agent to get the lock is just the one who happens to test the global variable first, once it becomes free the next time. So there is no reason not to sleep or block during the polling loop. It just reduces your chances of being "next".

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 9:22 am
by Joshw
Does this sound right?

If a spinlock is an atomic test-and-set in a tight loop, as follows:

Code: Select all

    mov eax,1
    lea edx,[var]
spinlock:
    lock xchg [edx],eax  ; the lock prefix is redundant: xchg with memory is always atomic
    test eax,eax         ; eax now holds the old value; nonzero means the lock was held
    jnz spinlock
then the only practical applications for a spinlock are:
- Waiting for a thread on another processor to release the spinlock (mov [var], 0).
- Waiting for an interrupt to release the spinlock.
The latter seems impractical to me in a multitasking system, because the spinning thread is just sitting there wasting time that could be given to other processes. It doesn't even seem correct for a singletasking system, because the interrupt handler and the main code could end up waiting on each other to release the spinlock (correct me if that's impossible; I'm too tired to work it out). Even in a multitasking system, where the process holding the spinlock will eventually be scheduled again, it still seems wasteful to me.

That being said, I think that a thread on a UP system should yield its timeslice during a spinlock if another thread has acquired it. Something like this...

Code: Select all

spinlock:
    mov eax,1
    lock xchg [var],eax  ; the lock prefix is redundant: xchg with memory is always atomic
    test eax,eax         ; old value in eax; zero means we acquired the lock
    jz spinlock_end
    push spinlock        ; push the retry address, so thread_yield's ret...
    jmp thread_yield     ; ...comes back to spinlock (i.e. call thread_yield, then loop)
spinlock_end:
And that should take care of mutual exclusion. The atomic operation is necessary even on a UP system, because the thread could be preempted between the instruction that tests and the instruction that sets. Would it be practical to use extra code to decide exactly how to behave in a spinlock? For example:

Code: Select all

// pseudocode
extern int UP; // whether this is a uniprocessor system

typedef struct
{
    int value;
    int processor;
} Spinlock;

Spinlock s;

void acquire(Spinlock *s)
{
    if (UP)
    {
        // test_and_set returns the old value: nonzero means someone else
        // holds the lock, so give up the timeslice and retry
        while (test_and_set(&s->value)) thread_yield();
    }
    else
    {
        int p = get_processor();
        while (test_and_set(&s->value))
        {
            // If the holder is on the same processor, spinning can't help:
            // yield this timeslice so the holder can run and release the lock
            if (p == s->processor) thread_yield();
        }
        s->processor = p;
    }
}

void release(Spinlock *s)
{
    s->value = 0; // a real implementation needs a release barrier here
}
This seems like the best way to me -- what do you think?

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 12:38 pm
by Brendan
Hi,

Spinlocks should only be used for fast critical sections - situations where it's better for performance to spin instead of doing thread switches. For example, if it costs 1000 cycles to do one thread switch, then switching to another thread and then switching back when the lock becomes free will cost 2000 cycles of overhead, so if the critical section is less than 2000 cycles it'd probably be better to spin (as the overhead of spinning is less than the overhead of thread switches). This also depends on how likely lock contention is (but to be perfectly honest, if lock contention is likely you've probably got some more important design problems to worry about).

So, what happens if you acquire the lock and a thread switch occurs before you release it? In this case it's a huge performance problem - other threads that are waiting for the spinlock are expecting it to be free again "soon", and aren't expecting to wait for any number of other threads to use the CPU first. Worse, the scheduler could switch from the thread that can free the lock to a thread that's waiting for the lock to be free and then spend an entire time slice spinning. This is all fairly silly - it's much better to disable or postpone thread switches until the thread isn't holding any spinlocks, *especially* for kernel code (where spinlocks are the most useful and you don't need to worry about something hogging the CPU by never freeing a dummy lock). For user-level code, I don't think spinlocks are a good idea at all - better to use a mutex, futex, semaphore or something else.

Also, you can't use normal spinlocks in interrupt handlers, because it's virtually impossible to avoid deadlocks (where the IRQ handler needs the lock but the thread it interrupted already acquired it). There are ways to avoid this with locking code specifically designed for the situation (e.g. if the IRQ handler can't get the lock immediately, set a callback and do an IRET, and call the callback when whoever holds the lock frees it).
Joshw wrote:That being said, I think that a thread on a UP system should yield its timeslice during a spinlock if another thread has acquired it.
As soon as you decide to yield (or do anything else to deliberately cause a thread switch) your "spinlock" stops being a spinlock (and starts being a mutex or something). If you're an OS developer designing mutex or semaphore code, then design the scheduler so that a thread that's waiting for a lock doesn't get any CPU time unless the lock it's waiting for is free (and don't just use yield, because that way you end up with lots of pointless thread switches).


Cheers,

Brendan

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 12:50 pm
by Colonel Kernel
bewing wrote:And I think saying that spinlocks only apply in a MP situation is far too narrow.
No, it's a natural consequence of the generally accepted definition of what it means to "spin". You're confusing "polling" and "spinning". Brendan explained it the best I think (as usual):
Brendan wrote:As soon as you decide to yield (or do anything else to deliberately cause a thread switch) your "spinlock" stops being a spinlock (and starts being a mutex or something).

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 12:53 pm
by Joshw
Brendan wrote:If you're an OS developer designing mutex or semaphore code, then design the scheduler so that a thread that's waiting for a lock doesn't get any CPU time unless the lock it's waiting for is free (and don't just use yield, because that way you end up with lots of pointless thread switches).
Hey that's a good idea!

Re: Spinlocks on a UP

Posted: Thu Sep 18, 2008 3:16 pm
by bewing
Colonel Kernel wrote: You're confusing "polling" and "spinning".
No, polling is entirely passive. Once you do an atomic test-and-set, it's a locking algorithm -- it's not polling anymore.

Re: Spinlocks on a UP

Posted: Wed Oct 08, 2008 1:38 am
by mystran
bewing wrote:
Colonel Kernel wrote: You're confusing "polling" and "spinning".
No, polling is entirely passive. Once you do an atomic test-and-set, it's a locking algorithm -- it's not polling anymore.
Uhm?

Polling is the act of reading some state repeatedly until the state is what you want it to be. That makes it a pretty "active" process to me.

A spinlock is a form of lock based on busy-loop polling.

You can put a timed wait into your polling loop, or you can busy-loop, but in both cases you do something repeatedly.

Typically, what is called a "mutex" is another type of lock, functionally similar to a spinlock, but different in that rather than polling a state variable, you put the thread into a sleep state and wake it up when another thread unlocks the mutex. But since the word "mutex" comes from "mutual exclusion", you could just as well call spinlocks "polling mutexes".

Modern UNIX does have a rather confusingly named poll() system call, which doesn't actually poll anything; rather, it frees you from having to poll the state of sockets by notifying you when the state is what you want. A better name for it would be avoidpoll() or unpoll(), but in UNIX not everything is named logically.

edit: Oh, and I forgot to say: there's no reason whatsoever to ever use spinlocks on uniprocessors. Seriously.