Up until now, I haven't really worried about this design, because all of the system calls are relatively fast (drivers are out of the kernel) and unlikely to slow things down much. But from this quote by Brendan:
> One kernel stack that's used by everything would mean that only one thread can be in the kernel at a time, which has serious performance implications. For example, you might have 16 CPUs where one of the CPUs is running kernel code, and a second CPU might get an IRQ. In this case, does the second CPU need to wait until the first CPU leaves the kernel before it can use the kernel stack?
>
> Maybe you need one kernel stack per CPU, or one kernel stack per process, or one kernel stack per thread? One kernel stack per thread is much easier to do, especially if your kernel can be pre-empted.

I'm now afraid I may eventually incur the wrath of "serious performance implications". Should I be worried, or is this a minor problem for me? I don't really plan to run my OS on servers, but I would also like to be as future-proof as possible while keeping things reasonably simple. And how exactly would you implement one stack per task if you have multiple task-switch points?
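For reference, here is roughly what I imagine one kernel stack per thread would look like on x86: the scheduler updates TSS.esp0 on every task switch so the next interrupt or system call from user mode lands on that thread's own stack. The struct layout and names below are just illustrative, not from my actual code:

    #include <stdint.h>

    /* Hypothetical per-thread control block; names are illustrative. */
    struct thread {
        uint32_t kernel_stack_top;  /* top of this thread's own kernel stack */
        /* ... saved registers, address-space pointer, etc. ... */
    };

    /* Start of the x86 TSS; esp0/ss0 are what the CPU loads on a
     * ring 3 -> ring 0 transition. Remaining fields omitted. */
    struct tss {
        uint32_t prev_tss;
        uint32_t esp0;
        uint32_t ss0;
        /* ... */
    };

    extern struct tss kernel_tss;   /* the single TSS loaded with ltr */

    /* Called from the scheduler on every task switch: the next interrupt
     * or system call from user mode then uses the new thread's stack. */
    void set_kernel_stack(struct thread *next)
    {
        kernel_tss.esp0 = next->kernel_stack_top;
    }

Is it really just that, or am I missing something about the multiple switch points?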
Also, is there any way to figure out exactly how much time a system call takes to perform? From the wiki, it seems like the RTC needs interrupts enabled, so it would be impossible to use it for this. Is there some sort of Bochs feature that could do this?
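The only alternative I can think of is RDTSC, which works with interrupts disabled. Something like this, where the int 0x80 / eax=0 syscall ABI is just a placeholder for whatever the kernel actually uses:

    #include <stdint.h>

    /* Read the time-stamp counter; available on Pentium and later,
     * and doesn't need interrupts at all. */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Measure one system call in TSC ticks; int 0x80 with eax=0 is a
     * placeholder ABI, not my real one. */
    uint64_t time_one_syscall(void)
    {
        uint64_t start = rdtsc();
        __asm__ __volatile__("int $0x80" : : "a"(0) : "memory");
        return rdtsc() - start;
    }

Of course that only gives ticks rather than wall-clock time, and I don't know how faithfully Bochs maps TSC ticks to simulated time, so I'd still prefer a proper Bochs feature if one exists.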