
Scheduler run queue access on SMP

Posted: Mon Dec 14, 2020 3:32 pm
by nexos
Hello,
I am redesigning my scheduler model in favor of a better system. Currently, I am a bit confused about one thing: every CPU, IMO, should have lockless access to its own run queue. But what if a thread on a different CPU tries to unblock a thread on that CPU? It then must access another CPU's run queue! I have a few solutions that I would like suggestions on:

Make that CPU send an IPI to the CPU it is trying to unblock the thread on. That IPI handler would, with interrupts disabled, add the thread to the run queue. The only issue is that IPIs have a large overhead.

Bite the bullet and make run queues accessed under locks. That seems inefficient and bug-prone, however.

Any other ideas?
nexos

Re: Scheduler run queue access on SMP

Posted: Wed Dec 16, 2020 5:16 am
by OSwhatever
I'm not sure if I understand the issue correctly. Locking isn't that evil, especially in the kernel. Another CPU can push a thread onto the queue for another CPU regardless of what algorithm you use. You can either use a lockless queue or a locked queue. Locking in the kernel is easy, as you just need a spinlock and to disable interrupts for a short time. List operations are short enough not to cause too much trouble.

Implementing the same in userspace is more complicated as you don't have the same opportunity to disable interrupts.
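A minimal sketch of the locked-queue approach in C: take a spinlock with interrupts disabled so the critical section is short and cannot be preempted. All names are illustrative, and the interrupt disable/restore helpers are stubbed out (a real x86 kernel would use cli/sti or save/restore flags):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical thread and run-queue types -- illustrative only. */
struct thread {
    struct thread *next;
};

struct run_queue {
    atomic_flag lock;    /* spinlock protecting the list */
    struct thread *head; /* singly linked ready list */
};

/* Stubs: a real kernel would disable/restore local interrupts here. */
static void irq_disable(void) {}
static void irq_restore(void) {}

/* Enqueue a thread on a (possibly remote) CPU's run queue under a
 * spinlock, with local interrupts disabled for the duration. */
static void rq_push(struct run_queue *rq, struct thread *t)
{
    irq_disable();
    while (atomic_flag_test_and_set_explicit(&rq->lock,
                                             memory_order_acquire))
        ; /* spin until the lock is free */
    t->next = rq->head;
    rq->head = t;
    atomic_flag_clear_explicit(&rq->lock, memory_order_release);
    irq_restore();
}
```

The critical section is just two pointer writes, which is the "short enough" list operation described above.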

Re: Scheduler run queue access on SMP

Posted: Wed Dec 16, 2020 10:57 am
by Korona
In general this is a hard problem and there is no perfect solution. Your observations are correct: either you ask other CPUs to migrate threads via IPIs (high overhead), you go lockless, or you accept the possibility that one CPU can stall another for a few hundred nanoseconds. Lockless queues are quite possible, but everything becomes a lot harder if you're not doing round-robin. For example, a CFS-style scheduler needs a heap or balanced tree data structure, and doing that in a lockless way is really hard (at least if it should also be fast).

Re: Scheduler run queue access on SMP

Posted: Wed Dec 16, 2020 2:22 pm
by rdos
nexos wrote:Hello,
I am redesigning my scheduler model in favor of a better system. Currently, I am a bit confused about one thing: every CPU, IMO, should have lockless access to its own run queue. But what if a thread on a different CPU tries to unblock a thread on that CPU? It then must access another CPU's run queue! I have a few solutions that I would like suggestions on:

Make that CPU send an IPI to the CPU it is trying to unblock the thread on. That IPI handler would, with interrupts disabled, add the thread to the run queue. The only issue is that IPIs have a large overhead.

Bite the bullet and make run queues accessed under locks. That seems inefficient and bug-prone, however.

Any other ideas?
nexos
I use a temporary queue (rather, a 256-entry array) for threads that have become active, either from IRQs or from another core. The implementation is lock-free and so doesn't need a spinlock, which makes the scheduler simpler. The core owning the temporary queue will empty it and put the threads in its local scheduler queue. When a core activates a thread on another core, it will send an IPI to the target core after it has added the thread to its temporary queue.
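A simplified sketch of such a per-CPU temporary queue, as I understand the description: producers reserve slots with an atomic increment, and only the owning core drains. All names are made up, and the wrap-around/overflow handling a real kernel would need is omitted:

```c
#include <stdatomic.h>
#include <stddef.h>

#define TEMP_QUEUE_SIZE 256 /* array size taken from the post */

struct thread; /* opaque; only pointers are stored */

/* Per-CPU temporary queue: many producers (other cores, IRQ handlers),
 * one consumer (the owning core). Illustrative sketch only. */
struct temp_queue {
    _Atomic size_t tail; /* next free slot (producers) */
    size_t head;         /* next slot to drain (owner only) */
    struct thread *_Atomic slots[TEMP_QUEUE_SIZE];
};

/* Producer side: any core may call this, then send an IPI to the
 * owning core so it drains the queue. */
static int temp_queue_push(struct temp_queue *q, struct thread *t)
{
    size_t idx = atomic_fetch_add(&q->tail, 1);
    if (idx >= TEMP_QUEUE_SIZE)
        return -1; /* full -- a real kernel must handle this */
    atomic_store_explicit(&q->slots[idx], t, memory_order_release);
    return 0;
}

/* Consumer side: only the owning core calls this, moving each thread
 * into its local run queue. Returns NULL when the queue is drained. */
static struct thread *temp_queue_pop(struct temp_queue *q)
{
    if (q->head >= atomic_load(&q->tail) || q->head >= TEMP_QUEUE_SIZE)
        return NULL;
    /* The slot may still be NULL if a producer reserved it but has not
     * stored yet; a real implementation would wait or retry. */
    struct thread *t = atomic_exchange(&q->slots[q->head], NULL);
    if (t)
        q->head++;
    return t;
}
```

Since only the owner advances `head`, the drain side needs no synchronization beyond the acquire of `tail`.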

Re: Scheduler run queue access on SMP

Posted: Wed Dec 23, 2020 7:20 am
by nexos
I think what I'm going to do is use spinlocks on the queues, but assign one CPU queue per proximity domain. Each queue will be allocated in its domain's heap area, which keeps accesses NUMA-local. Every domain will have one CPU nominated to handle scheduling. Note that every CPU queue will have a maximum of 4 CPUs using it, to reduce contention. CPUs outside this group will have to send an IPI to work with the queue. Btw, @Korona, what solution do you use in managarm?
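A rough sketch of what that layout might look like; all names and the membership check are illustrative assumptions, not a concrete design:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define CPUS_PER_DOMAIN 4 /* contention cap from the post */

/* One locked queue per proximity (NUMA) domain, shared by at most
 * four CPUs; the struct itself would live in the domain's local heap.
 * CPUs outside the domain send an IPI instead of taking the lock. */
struct domain_queue {
    atomic_flag lock;              /* spinlock for member CPUs */
    int cpu_ids[CPUS_PER_DOMAIN];  /* CPUs allowed to lock directly */
    int ncpus;                     /* how many of cpu_ids are valid */
};

/* Decide whether a CPU may take the queue lock directly, or must
 * fall back to sending an IPI to a CPU inside the domain. */
static bool can_lock_directly(const struct domain_queue *dq, int cpu)
{
    for (int i = 0; i < dq->ncpus; i++)
        if (dq->cpu_ids[i] == cpu)
            return true;
    return false;
}
```

This keeps lock contention bounded by the domain size while still allowing remote wake-ups through the IPI path.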