
Mutexes and "fairness" - one task hogs the mutex

Posted: Mon Nov 05, 2012 7:06 am
by exscape
I've recently implemented mutexes - or at least I hope I have - in my kernel. I'm testing it in part by running two copies of this task/process:

Code:

static mutex_t *test_mutex = NULL;
static void mutex_test(void *data, uint32 length) {
    if (test_mutex == NULL)
        test_mutex = mutex_create();

    while (true) {
        mutex_lock(test_mutex);
        printk("locked mutex in pid %d @ %u\n", getpid(), gettickcount());
        sleep(500); // milliseconds
        mutex_unlock(test_mutex);
        printk("unlocked mutex in pid %d @ %u\n", getpid(), gettickcount());
    }
}
Here's the mutex implementation:

Code:

void mutex_lock(mutex_t *mutex) {
    assert(mutex != NULL);
    uint8 success = 0;

    while (success == 0) {
        asm volatile("LOCK BTSL $0, %[mutex];"
                           "SETNCB %[success];" // sets success to 1 if the mutex was locked; 0 otherwise
                     :
                      [mutex]"=m"(mutex->mutex),
                      [success]"=m"(success)
                      : : "cc", "memory");

        if (success) {
            mutex->owner = (task_t *)current_task;
        }
        else {
            if (task_switching)
                asm volatile("int $0x7e"); // yield this time slice
        }
    }
}

void mutex_unlock(mutex_t *mutex) {
    assert(mutex != NULL);
    assert(mutex->mutex != 0); // mutex is locked
    assert(mutex->owner == current_task);
    mutex->mutex = 0;
    mutex->owner = NULL;
}
The problem I'm having is that the first process locks the mutex, and essentially holds it forever. Later processes barely have any chance of locking the mutex, since the only chance they'll ever have is when the scheduler switches away from task #1 BETWEEN mutex_unlock() and the mutex_lock() executed soon thereafter.
When a second task does manage to grab the mutex, clearly, THAT task starts hogging it instead.

Is this something to be expected, or should a good OS solve this? And, more importantly, how?
Also, if I'm doing something else wrong, please point that out. :)

Re: Mutexes and "fairness" - one task hogs the mutex

Posted: Mon Nov 05, 2012 7:19 am
by bluemoon
A fairer mutex may respect waiting threads by using a queue (see #1); however, the test itself does not reflect reality, since it does nothing but race for the mutex, which is bad usage to begin with.

#1: The implementation of the queue depends on your design; in some cases you may abuse the process/thread list so that the next awoken thread acquires the lock.


As a side note, in your code the two testing threads may each create their own mutex (both can see test_mutex == NULL before either assigns it); you could solve that by allowing static initialization.
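For instance, if mutex_t is just a lock word plus an owner pointer (as in the code above), something along these lines would let both test threads share one statically initialized mutex and avoid the create-on-first-use race entirely; MUTEX_INITIALIZER is a name you'd have to add yourself:

Code:

#define MUTEX_INITIALIZER { 0, NULL }   /* lock word clear, no owner */

/* one shared mutex, usable before any task runs; the test task would
   then lock &test_mutex instead of calling mutex_create() */
static mutex_t test_mutex = MUTEX_INITIALIZER;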

Re: Mutexes and "fairness" - one task hogs the mutex

Posted: Mon Nov 05, 2012 7:28 am
by Brendan
Hi,

The normal way to fix fairness issues is to implement a queue. If a task attempts to acquire a mutex and fails it is put on the end of the queue; and when a task unlocks the mutex it checks if there is anyone waiting for it and passes ownership to the first task on the queue.
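For illustration, a minimal sketch of that hand-off in C might look like the code below. All of the helper names (wait_queue_t, wait_queue_append, wait_queue_pop, block_current_task, unblock_task, spin_lock_irqsave/spin_unlock_irqrestore) are stand-ins for whatever primitives your kernel actually has, and block_current_task() is assumed to atomically drop the guard lock, sleep, and re-take the lock when the task is woken:

Code:

typedef struct fair_mutex {
    uint8 locked;            /* 1 while some task owns the mutex    */
    task_t *owner;
    wait_queue_t waiters;    /* FIFO of tasks blocked on this mutex */
    spinlock_t guard;        /* protects the fields above           */
} fair_mutex_t;

void fair_mutex_lock(fair_mutex_t *m) {
    uint32 flags = spin_lock_irqsave(&m->guard);
    if (!m->locked) {
        m->locked = 1;
        m->owner = (task_t *)current_task;
    } else {
        /* failed to acquire: go to the end of the queue and sleep */
        wait_queue_append(&m->waiters, (task_t *)current_task);
        block_current_task(&m->guard);
        /* when we return here, the unlocker has already made us the owner */
    }
    spin_unlock_irqrestore(&m->guard, flags);
}

void fair_mutex_unlock(fair_mutex_t *m) {
    uint32 flags = spin_lock_irqsave(&m->guard);
    task_t *next = wait_queue_pop(&m->waiters);   /* oldest waiter, if any */
    if (next) {
        m->owner = next;      /* ownership passes directly; the lock never looks free */
        unblock_task(next);
    } else {
        m->locked = 0;
        m->owner = NULL;
    }
    spin_unlock_irqrestore(&m->guard, flags);
}

Because ownership is handed straight to the oldest waiter, a task that immediately re-locks after unlocking goes to the back of the queue instead of winning every time.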

A queue is not strictly necessary. If you understand Lamport's bakery algorithm you'll probably realise that you only really need 2 counters. The "2 counter" approach is fine for spinlocks (and is probably what Linux uses, for example). For mutexes (rather than spinlocks) everything gets messy because of the desire to keep as much as possible in user-space (to avoid CPL=3 <-> CPL=0 switching) and the scheduler's involvement (task switches, race conditions, etc); and the "2 counter" approach might not work as well as (for example) a lockless queue.
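As an aside, the "2 counter" idea for a plain spinlock is essentially a ticket lock; a rough sketch using GCC's __sync builtins (and the kernel's uint32 typedef) would be:

Code:

typedef struct {
    volatile uint32 next_ticket;   /* taken by each arriving task       */
    volatile uint32 now_serving;   /* advanced by the task that unlocks */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *lock) {
    /* atomically take the next ticket number */
    uint32 my_ticket = __sync_fetch_and_add(&lock->next_ticket, 1);
    /* spin until it is our turn; tasks acquire strictly in arrival order */
    while (lock->now_serving != my_ticket)
        asm volatile("pause");
}

static void ticket_unlock(ticket_lock_t *lock) {
    __sync_synchronize();   /* make the critical section's writes visible first */
    lock->now_serving++;    /* the next ticket holder may proceed */
}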


Cheers,

Brendan