vinnie wrote:If you implement a scheduler per core, do you run a separate process and stack for each?
For stacks, a common model in desktop OSes is to have one kernel stack per task, plus a few dedicated exception stacks per CPU. The exception stacks are for the critical exceptions Double Fault, Machine Check, and Non-Maskable Interrupt, because those can arrive at any time, including halfway through a stack switch. In practice they get used very rarely.
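To make that concrete, here is a minimal sketch of how per-CPU exception stacks might be wired up on x86-64 through the Interrupt Stack Table; the structure and names are illustrative, not any particular kernel's code:

```c
/* Hypothetical sketch: dedicated per-CPU exception stacks on x86-64
 * via the Interrupt Stack Table (IST). All names are illustrative. */
#include <stdint.h>

#define MAX_CPUS   8      /* illustrative */
#define STACK_SIZE 4096

/* One set of emergency stacks per CPU, 16-byte aligned per the ABI. */
struct percpu_exc_stacks {
    uint8_t double_fault[STACK_SIZE]  __attribute__((aligned(16)));
    uint8_t machine_check[STACK_SIZE] __attribute__((aligned(16)));
    uint8_t nmi[STACK_SIZE]           __attribute__((aligned(16)));
};

/* Truncated 64-bit TSS: only the fields relevant here. */
struct tss64 {
    uint32_t reserved0;
    uint64_t rsp[3];     /* stacks used on privilege-level changes */
    uint64_t reserved1;
    uint64_t ist[7];     /* IST1..IST7: stacks forced by the IDT entry */
    /* ... remaining fields omitted ... */
} __attribute__((packed));

static struct percpu_exc_stacks exc_stacks[MAX_CPUS];
static struct tss64 tss[MAX_CPUS];

void setup_exception_stacks(int cpu)
{
    /* IST slots hold the initial RSP; stacks grow down, so point at the top. */
    tss[cpu].ist[0] = (uint64_t)&exc_stacks[cpu].double_fault[STACK_SIZE];
    tss[cpu].ist[1] = (uint64_t)&exc_stacks[cpu].machine_check[STACK_SIZE];
    tss[cpu].ist[2] = (uint64_t)&exc_stacks[cpu].nmi[STACK_SIZE];
    /* The IDT entries for #DF, #MC and NMI would then select IST1, IST2,
     * IST3, so the CPU lands on a known-good stack no matter what RSP held. */
}
```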
In any case, this model supports quite a simple way of doing multi-tasking, where each task is just suspended on its own stack, in whatever state it wants. And switching tasks just means switching stacks.
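The switch itself can be tiny. A sketch of that style, assuming a task control block that only records where the kernel stack was left (the register save/restore would live in a few lines of assembly, summarized in the comment):

```c
/* Hypothetical sketch of "switching tasks means switching stacks".
 * Each task's full state lives on its own kernel stack; the TCB only
 * has to remember where that stack was left. Names are illustrative. */
struct task {
    void *saved_sp;      /* kernel stack pointer at suspension */
    /* ... scheduling state, address space, etc. ... */
};

/* switch_stacks() would be a few lines of assembly: push the callee-saved
 * registers, store RSP into *old_sp, load RSP from new_sp, pop, return.
 * The return then resumes whatever the new task was doing when it called in. */
extern void switch_stacks(void **old_sp, void *new_sp);

void schedule(struct task *prev, struct task *next)
{
    if (prev != next)
        switch_stacks(&prev->saved_sp, next->saved_sp);
    /* Execution continues here only when 'prev' is rescheduled later. */
}
```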
The only alternative I have ever heard of is to have one kernel stack per CPU. In that model you have to enumerate every reason a task might be blocked, and switching tasks means reconstructing what the new task was doing before it was switched out. Also possible, but a bit harder.
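A sketch of what that tends to look like: since a blocked task has no private stack holding its state, the kernel must record explicitly why it stopped and how to resume it, often as an enumerated reason plus a continuation. All the names here are made up for illustration:

```c
/* Hypothetical sketch of the one-kernel-stack-per-CPU model. Blocked
 * state is captured explicitly instead of sitting frozen on a stack. */
#include <stdint.h>

enum block_reason {
    BLOCKED_ON_IPC_RECEIVE,
    BLOCKED_ON_PAGE_FAULT,
    BLOCKED_ON_TIMER,
    /* ...every possible wait state must be enumerated... */
};

struct task {
    enum block_reason why;
    void (*continuation)(struct task *self); /* how to pick up where we left off */
    uintptr_t arg;                           /* whatever the continuation needs */
};

/* Resuming a task means re-deriving its context rather than popping a stack. */
void resume(struct task *t)
{
    t->continuation(t);   /* runs on the CPU's single kernel stack */
}
```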
But separate CPUs must always operate on separate stacks. Having two CPUs share a stack can only lead to disaster.
vinnie wrote:I always wrote pre-emptive OSes for MCUs such that the Kernel sits on top of the scheduler and the Kernel is a process itself inside the scheduler. Is this a good way of doing it on modern CPUs?
In that case we have very different understandings of what a kernel is. To me, a kernel is a collection of functions. It does contain the system initialization code, yes, but that ends at some point; after that, the important parts are the system calls and the interrupt handlers. Such a collection of functions cannot work as a coherent process, because it does not follow a single thread of execution.
Sure, a microkernel might move a lot of the interrupt processing into user-space processes, but all that means is that the kernel still takes the interrupt and then activates another process in response to it. The kernel has to handle all the interrupts, at least at the CPU level.
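A sketch of that division of labor: the kernel's interrupt entry does the unavoidable CPU-level work and then just wakes the user-space driver registered for the line. The masking/EOI/unblock primitives here are invented for illustration, not any real microkernel's API:

```c
/* Hypothetical sketch of microkernel interrupt delivery: the kernel must
 * still take the interrupt at the CPU level, but its handler only quiets
 * the line and wakes the user-space driver thread registered for it. */
#define MAX_IRQS 64                       /* illustrative */

struct task;                              /* opaque here */
static struct task *irq_handler_task[MAX_IRQS];

void mask_irq(int irq);                   /* platform-specific, assumed */
void send_eoi(int irq);                   /* acknowledge to the controller */
void unblock(struct task *t);             /* scheduler primitive, assumed */

void kernel_irq_entry(int irq)
{
    mask_irq(irq);        /* the driver unmasks after servicing the device */
    send_eoi(irq);        /* let the controller deliver other interrupts */
    struct task *drv = irq_handler_task[irq];
    if (drv)
        unblock(drv);     /* actual device handling happens in user space */
}
```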