Re: Priority Inversions... The curse of multitasking.
Posted: Tue Nov 10, 2020 7:49 pm
by bloodline
linguofreak wrote:
bloodline wrote:
So I did ponder a semaphore system, which would put the task back into the waiting list, only to be scheduled back into when it received the signal that the lock was free. But since a task switch in my system is just a simple stack swap, keeping the task in the ready list has less overhead than the bookkeeping needed to keep track of which task is waiting for which lock, and then alter the task state depending upon the conditions. But this is definitely something I would like to add in future for higher level resources!
You could do something like giving each task a function pointer for a callback "isWaiting()".
For the scheduler, the isWaiting() pointer is used as follows: When a new task is created, the pointer is initialized as a null pointer. When the scheduler selects a task to run, it checks to see if the isWaiting() pointer is null. If it is, it schedules the task. If it is not, it calls the function on the other end of the pointer. If the pointer is non-null when the function returns, the scheduler rejects the task as a candidate to run and moves on down the list of ready tasks. If it is now null, the scheduler runs the task.
For the task, the isWaiting() pointer is used like this: When the task needs to wait on a lock, it sets the isWaiting() pointer to point to a function that tests that lock. When the function is called, it performs the test, and if another task still holds the lock, it does nothing and returns. If the lock is free, the function nulls the isWaiting() pointer and returns.
I do like this idea! I think I would need a separate list of “lockWaiting” tasks which the scheduler would scan every OS quantum, executing each callback and then moving any tasks which are now unblocked to the ready list. This would probably have a similar overhead to my existing locking system, but with the advantage that task priority is no longer an issue.
I’ll think about this some more, but I do really like the idea!
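The callback scheme described above can be sketched in plain C++. Everything here (Task, Lock, pick_next, the self-clearing predicate) is a hypothetical illustration of linguofreak's idea, not code from bloodline's actual kernel:

```cpp
#include <cassert>
#include <vector>

struct Lock;

struct Task {
    // Null when runnable; otherwise points to a predicate that clears
    // itself (returns true and nulls the pointer) once the lock is free.
    bool (*is_waiting)(Task&) = nullptr;
    Lock* wanted = nullptr;   // the lock this task is blocked on, if any
};

struct Lock {
    Task* owner = nullptr;    // nullptr means the lock is free
};

// The test the task installs before yielding: if the lock is now free,
// take it, clear is_waiting, and report "runnable".
static bool lock_test(Task& t) {
    if (t.wanted->owner == nullptr) {
        t.wanted->owner = &t;
        t.is_waiting = nullptr;
        t.wanted = nullptr;
        return true;
    }
    return false;  // still blocked
}

// Called by a task that wants a lock.
void wait_on(Task& t, Lock& l) {
    if (l.owner == nullptr) { l.owner = &t; return; }  // fast path: take it
    t.wanted = &l;
    t.is_waiting = lock_test;
}

// Scheduler: walk the ready list; a task with a non-null isWaiting()
// pointer only runs if its callback clears itself.
Task* pick_next(std::vector<Task*>& ready) {
    for (Task* t : ready) {
        if (t->is_waiting == nullptr || t->is_waiting(*t))
            return t;
    }
    return nullptr;  // everyone is blocked
}
```

The blocked task stays in the ready list, matching bloodline's cheap-task-switch design; the only extra cost is one indirect call per blocked candidate at selection time.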
Re: Priority Inversions... The curse of multitasking.
Posted: Wed Nov 11, 2020 3:40 am
by xenos
Instead of using GCC builtins, you might want to stick to the C/C++ standard (depends on which one you use) and use the corresponding functions from the Atomic operations library:
http://en.cppreference.com/w/c/atomic
http://en.cppreference.com/w/cpp/atomic
Re: Priority Inversions... The curse of multitasking.
Posted: Wed Nov 11, 2020 4:14 am
by bloodline
I need to use freestanding libraries; I'm not sure how these will function without the C/C++ runtime.
Re: Priority Inversions... The curse of multitasking.
Posted: Wed Nov 11, 2020 11:36 am
by nullplan
bloodline wrote:
I need to use freestanding libraries; I'm not sure how these will function without the C/C++ runtime.
The C versions won't. stdatomic.h is not a header required to be present in freestanding implementations. However, the C++ header <atomic> is required to be present. Now I don't know if GCC can be made to provide a freestanding C++ implementation, or if you need other stuff as well.
Re: Priority Inversions... The curse of multitasking.
Posted: Wed Nov 11, 2020 3:28 pm
by xenos
bloodline wrote:
I need to use freestanding libraries; I'm not sure how these will function without the C/C++ runtime.
Actually I use C++ <atomic> in my kernel, which also uses the freestanding implementation, and I use GCC for that:
https://github.com/xenos1984/NOS/blob/m ... SpinLock.h
https://github.com/xenos1984/NOS/blob/m ... inLock.cpp
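For readers without the links handy: a minimal spinlock built only on <atomic> might look like the sketch below. This is a generic illustration of the technique, not the code behind the SpinLock files linked above:

```cpp
#include <atomic>
#include <cassert>

// Spinlock using only <atomic>, which the C++ standard requires even in
// freestanding implementations (unlike C's stdatomic.h).
class SpinLock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock() {
        // acquire ordering: reads/writes inside the critical section
        // cannot be reordered before the lock is taken
        while (flag.test_and_set(std::memory_order_acquire)) {
            // spin; on x86 a "pause" hint would typically go here
        }
    }
    bool try_lock() {
        return !flag.test_and_set(std::memory_order_acquire);
    }
    void unlock() {
        flag.clear(std::memory_order_release);
    }
};
```

std::atomic_flag is the one atomic type guaranteed to be lock-free, which makes it the safe choice for a kernel that cannot fall back on library-provided locking.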
Re: Priority Inversions... The curse of multitasking.
Posted: Sun Nov 15, 2020 10:45 pm
by kzinti
My freestanding cross compilers are missing all the C++ headers, including "atomic".
I am not building libc or libstdc++ when building the compiler, but I do get the "freestanding headers" for C.
What is the trick? The googles are not helping tonight.
Here is what I currently use to build GCC:
Code:
./configure --target=i686-elf --prefix=$HOME/opt/cross --without-headers --with-gnu-as --with-gnu-ld --enable-languages=c,c++ --with-arch=i686
make all-gcc
make all-target-libgcc CFLAGS_FOR_TARGET="-g -O2"
make install-gcc
make install-target-libgcc
Re: Priority Inversions... The curse of multitasking.
Posted: Mon Nov 16, 2020 2:14 am
by xenos
kzinti wrote:
My freestanding cross compilers are missing all the C++ headers, including "atomic".
I am not building libc or libstdc++ when building the compiler, but I do get the "freestanding headers" for C.
What is the trick? The googles are not helping tonight.
That's indeed a rather tricky issue, since GCC does not support building a freestanding bare-metal C++ toolchain very well. I ran across this problem quite a while ago:
http://forum.osdev.org/viewtopic.php?f=8&t=23947
In the end my workaround is to build newlib so that it can be used to build a hosted libstdc++ (even though that will not be used later), and thus get the freestanding C++ headers as well:
https://github.com/xenos1984/cross-tool ... olchain.sh
I haven't looked into this for a while to figure out whether there is a better way now, which allows obtaining only the freestanding headers without building a full hosted libstdc++.
Re: Priority Inversions... The curse of multitasking.
Posted: Mon Nov 16, 2020 4:26 pm
by kzinti
Thanks for the pointers... I might or might not give it a shot. I am happy with __sync_xxx() for now... but obviously that is GCC-specific.
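For comparison, a lock built on the __sync_xxx() builtins mentioned above might look like this sketch (GCC/Clang-specific; SyncLock is an illustrative name, and the newer __atomic builtins or <atomic> are the more portable route):

```cpp
#include <cassert>

// GCC's legacy __sync builtins need no headers at all, which is why they
// are convenient in freestanding code despite being compiler-specific.
struct SyncLock {
    volatile int held = 0;

    void lock() {
        // __sync_lock_test_and_set returns the previous value and
        // implies an acquire barrier
        while (__sync_lock_test_and_set(&held, 1))
            while (held) { /* spin on the plain read, not the RMW */ }
    }
    bool try_lock() {
        return __sync_lock_test_and_set(&held, 1) == 0;
    }
    void unlock() {
        __sync_lock_release(&held);  // release barrier, stores 0
    }
};
```

The test-and-test-and-set inner loop keeps the waiting CPU reading its cached copy instead of hammering the bus with atomic exchanges.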
Re: Priority Inversions... The curse of multitasking.
Posted: Mon Nov 16, 2020 6:51 pm
by kzinti
This worked like a charm. Thanks again!
Re: Priority Inversions... The curse of multitasking.
Posted: Tue Nov 17, 2020 9:53 am
by bloodline
bloodline wrote:
I’ll think about this some more, but I do really like the idea!
So I've thought more about this idea.
This is the new design. It's still just on paper at the moment (I'm working on something else right now).
1. The kernel will maintain a list of locks.
2. When a resource is locked by a task, the opaque lock object will record that lock.
3. If another task tries to obtain a lock on that object, the task is moved to the taskWait list, and the lock it wishes to obtain is added to the kernel's lockList with a record of the requesting task.
4. On every reschedule, the lockList is scanned and each lock tested; if a lock is now free, the associated task is moved back into the ready list, the resource is locked for the new task, and the lock is removed from the lockList.
This list is strictly first come, first served.
This has a higher rescheduling overhead, but appears more elegant than my current priority-boosting solution.
When I implement it, I will no doubt find that it is flawed.
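The four steps above can be sketched as follows. Task, Lock, LockRequest, and the three lists are hypothetical stand-ins for the kernel structures being described, not actual code from the project:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Task {};

struct Lock {
    Task* owner = nullptr;           // step 2: the lock records its holder
};

struct LockRequest {                 // one entry in the kernel's lockList
    Lock* lock;
    Task* waiter;                    // the task parked in taskWait
};

struct Kernel {
    std::vector<Task*> ready;
    std::vector<Task*> taskWait;
    std::vector<LockRequest> lockList;   // step 1, kept in FIFO order

    // Step 3: a contended acquire parks the task and records the request.
    void acquire(Task* t, Lock* l) {
        if (l->owner == nullptr) { l->owner = t; return; }  // uncontended
        ready.erase(std::remove(ready.begin(), ready.end(), t), ready.end());
        taskWait.push_back(t);
        lockList.push_back({l, t});              // first come, first served
    }

    // Step 4: run on every reschedule.
    void scan_locks() {
        for (auto it = lockList.begin(); it != lockList.end(); ) {
            if (it->lock->owner == nullptr) {
                it->lock->owner = it->waiter;    // lock the resource for it
                taskWait.erase(std::remove(taskWait.begin(), taskWait.end(),
                                           it->waiter), taskWait.end());
                ready.push_back(it->waiter);     // back to the ready list
                it = lockList.erase(it);
            } else {
                ++it;
            }
        }
    }
};
```

Because the scan walks the list in insertion order and hands the lock over inside the scan, a released lock always goes to the earliest waiter, which is what makes the scheme strictly first come, first served.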
Re: Priority Inversions... The curse of multitasking.
Posted: Tue Nov 17, 2020 11:37 pm
by linguofreak
bloodline wrote:
4. On every reschedule, the lockList is scanned and each lock tested; if a lock is now free, the associated task is moved back into the ready list, the resource is locked for the new task, and the lock is removed from the lockList.
Well, the big thing with my idea is that we only check for a lock if a task would otherwise be scheduled. So: run the scheduler, select a task, make a last-minute check to see if the task is waiting on a lock; if not, run the task; if so, continue looking for runnable tasks.
Re: Priority Inversions... The curse of multitasking.
Posted: Wed Nov 18, 2020 1:55 am
by rdos
This seems pretty backward. Tasks waiting for locks are NOT runnable, so it should never be possible to schedule them, and the scheduler should not check them at all. It's typically asynchronous events that make these tasks runnable, and it's when those events happen that the tasks should be made runnable again, possibly invoking the scheduler to check whether the newly woken task should run immediately.
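The event-driven scheme described here can be sketched as a per-lock wait queue, where the release itself is what makes the next waiter runnable. All names are illustrative, not rdos's actual code:

```cpp
#include <cassert>
#include <vector>

struct Task {};

struct Kernel {
    std::vector<Task*> ready;        // only runnable tasks ever live here
};

struct Lock {
    Task* owner = nullptr;
    std::vector<Task*> waiters;      // per-lock wait queue, FIFO

    // A contended acquire just queues the caller. (In a real kernel the
    // current task is not on the ready list while running, so nothing
    // needs removing here.)
    void acquire(Task* t) {
        if (owner == nullptr) { owner = t; return; }
        waiters.push_back(t);        // t is simply never in k.ready
    }

    // The release is the asynchronous event that unblocks the waiter:
    // no scheduler-side polling is ever needed.
    void release(Kernel& k) {
        if (waiters.empty()) { owner = nullptr; return; }
        owner = waiters.front();     // hand the lock straight over
        waiters.erase(waiters.begin());
        k.ready.push_back(owner);    // made runnable here, at the event
    }
};
```

The trade-off versus the scan-based design above is bookkeeping on the release path instead of work on every reschedule; blocked tasks cost the scheduler nothing.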
Re: Priority Inversions... The curse of multitasking.
Posted: Wed Nov 18, 2020 8:43 am
by bloodline
rdos wrote:
This seems pretty backward. Tasks waiting for locks are NOT runnable, so it should never be possible to schedule them, and the scheduler should not check them at all. It's typically asynchronous events that make these tasks runnable, and it's when those events happen that the tasks should be made runnable again, possibly invoking the scheduler to check whether the newly woken task should run immediately.
It's not clear as I haven't really given details of how my multitasking works, but currently if a task wants a resource which another task has locked, it uses its scheduled time slice to check if the lock is free; if not, it immediately relinquishes its time slice before its quantum ends, and it will do this until the resource is free. A task switch has a really low overhead in my system since everything runs in a single address space (it's only slightly more costly than a couple of regular function calls, with a stack swap).
The problem with this approach is that if the locking task is lower priority than the one which wants the resource, then the higher-priority task will take all the CPU from the low-priority task, which will never be able to free the lock.
The current fix is to temporarily adjust the priority of the lock-waiting task to the same priority as the locking task. This works fine, but feels inelegant.
I plan to implement a new locking system which takes lock-waiting tasks out of the ready list; the scheduler (when it runs) will check the resource for them, only returning them to the ready list once the resource is free.
Re: Priority Inversions... The curse of multitasking.
Posted: Wed Nov 18, 2020 10:21 am
by reapersms
bloodline wrote:
The current fix is to temporarily adjust the priority of the lock-waiting task to the same priority as the locking task. This works fine, but feels inelegant.
Usually, you want to go the other direction, and pull the locking task priority up to the lockwaiting priority while it holds the important lock. Otherwise, the high priority task will end up blocked by intermediate priority tasks that pop up in addition to the low priority one holding the lock it wants.
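This is the classic priority-inheritance fix for priority inversion. A minimal sketch, with illustrative names (base_priority, ceiling) rather than anyone's actual kernel fields:

```cpp
#include <algorithm>
#include <cassert>

struct Task {
    int base_priority;
    int priority;                    // effective priority, possibly boosted
    explicit Task(int p) : base_priority(p), priority(p) {}
};

struct Lock {
    Task* owner = nullptr;
    int ceiling = 0;                 // highest priority among current waiters

    void acquire(Task* t) {
        if (owner == nullptr) { owner = t; return; }
        // A higher-priority waiter boosts the holder, so that
        // intermediate-priority tasks can no longer starve it.
        ceiling = std::max(ceiling, t->priority);
        owner->priority = std::max(owner->priority, ceiling);
        // (the waiter would now block; omitted in this sketch)
    }

    void release() {
        owner->priority = owner->base_priority;   // undo the boost
        owner = nullptr;
        ceiling = 0;
        // (the highest-priority waiter would be woken here)
    }
};
```

Boosting the holder rather than the waiter is exactly the direction reapersms describes: the low-priority holder briefly runs at the waiter's priority, finishes its critical section, and drops back on release.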
Re: Priority Inversions... The curse of multitasking.
Posted: Wed Nov 18, 2020 2:18 pm
by bloodline
reapersms wrote:
bloodline wrote:
The current fix is to temporarily adjust the priority of the lock-waiting task to the same priority as the locking task. This works fine, but feels inelegant.
Usually, you want to go the other direction, and pull the locking task priority up to the lockwaiting priority while it holds the important lock. Otherwise, the high priority task will end up blocked by intermediate priority tasks that pop up in addition to the low priority one holding the lock it wants.
That’s a really good point... perhaps I should do that instead? Have the locking task's priority boosted to the level of the highest lock-waiting task... that does seem very sensible.
I need to run through a few thought experiments, as my architecture relies on the idea that a high-priority task will always preempt a lower one... I realise this basic design feature is going to cause me issues when I move to SMP. This is the problem of using a kernel I designed for a microcontroller on a desktop chip.