Separate kernel stack for each task?
Posted: Wed Feb 27, 2008 3:31 pm
by Wave
Is it normal to have one kernel stack for each task or just one in the whole system?
Posted: Wed Feb 27, 2008 3:47 pm
by Colonel Kernel
Different OSes use different approaches. I guess "normal" in this case depends on your goals.
For example, the NT kernel has a separate kernel stack for each thread. NT is not a microkernel -- lots of code runs in kernel space, including many device drivers, so to keep this code pre-emptible, each thread running in kernel space has its own stack. The drawback is the extra memory usage.
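A minimal sketch of the per-thread kernel stack approach, assuming a hypothetical TCB layout and an assembly switch_stacks() helper (the names are made up for illustration, not NT's actual structures):

#include <stdint.h>

struct thread {
    uintptr_t kstack_top;   /* top of this thread's private kernel stack    */
    uintptr_t ksp;          /* saved kernel stack pointer while not running */
    /* ... priority, address space, wait state, ...                         */
};

/* Assumed to exist in assembly: pushes the current register state onto the
 * running thread's kernel stack, stores the resulting stack pointer in
 * *old_ksp, then loads new_ksp and pops the next thread's registers.       */
extern void switch_stacks(uintptr_t *old_ksp, uintptr_t new_ksp);

void preempt_to(struct thread *cur, struct thread *next)
{
    /* Each thread's kernel-mode context lives on its own kernel stack, so
     * it can later be resumed exactly where it was pre-empted.             */
    switch_stacks(&cur->ksp, next->ksp);
}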
In contrast, QNX Neutrino has a single kernel stack for each CPU. It is a microkernel and tries to minimize the amount of time spent in kernel mode. However, it is also pre-emptible, just in a different way than NT. When a thread running in kernel-mode in NT is pre-empted, its context is saved to its kernel stack. When it is later resumed, it is resumed at exactly the point at which it was pre-empted earlier. In QNX, when a thread is pre-empted while running in kernel space, its context record is modified to trigger the same syscall it was running at the time of pre-emption, and the kernel stack is forcibly taken away from it. In other words, the thread re-starts the same syscall the next time it is scheduled. Some forward progress is lost, but it keeps the microkernel smaller and faster and decreases scheduling latency, which is important for a real-time OS (which is what QNX is).
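And a rough sketch of the restart-the-syscall idea (illustrative only, not QNX's actual code): the saved user-mode context is rewound so the thread re-issues the trap instruction the next time it runs, and the per-CPU kernel stack is handed to whoever pre-empted it.

#include <stdint.h>

#define TRAP_INSN_LEN 2         /* assumed length of the syscall trap, e.g. int 0x80 */

struct user_context {
    uintptr_t ip;               /* user-mode instruction pointer saved at kernel entry */
    uintptr_t sp;
    /* ... general-purpose registers ...                                               */
};

void preempt_during_syscall(struct user_context *uc)
{
    /* Point the thread back at the trap instruction so the whole syscall is
     * restarted when it is next scheduled.                                            */
    uc->ip -= TRAP_INSN_LEN;

    /* Nothing from the half-finished call is preserved; the kernel must have
     * left its data structures consistent, which is one reason such a kernel
     * keeps its non-restartable sections very short. The single per-CPU
     * kernel stack is then reused by the next thread to enter the kernel.             */
}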
Depending on your goals, you can do it either way. I think a lot of the people on this forum probably chose to use a kernel stack for each thread, but that's just a guess...
Posted: Wed Feb 27, 2008 4:01 pm
by jerryleecooper
In my kernel each thread has its own kernel stack, but the stack is only used to save register values -- you can't preempt an interrupt, for example, and there are no preemptible syscalls. The normal way is the hard way: put the process that called the syscall to sleep and put the call in a queue. If the call was for a BIOS interrupt, for example, raise an event for it; the bioscall thread sees there's an item in the queue and creates a v86 process, and when that process terminates, it wakes the process that called the syscall. That's the way it works in my kernel -- not preemptible, but not too far from it.
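Something like this, perhaps (a sketch of the queued-BIOS-call scheme; the queue, event, and v86 helpers are invented names for illustration):

#include <stdint.h>

struct thread;                          /* defined elsewhere in the kernel        */
struct event;
extern struct event bioscall_event;
extern void signal_event(struct event *ev);
extern void wait_event(struct event *ev);
extern void sleep_thread(struct thread *t);
extern void wake_thread(struct thread *t);
extern void run_v86_bios_call(uint8_t int_no, void *regs); /* returns when the v86 process exits */

struct bios_request {
    uint8_t  int_no;                    /* which BIOS interrupt to invoke         */
    void    *regs;                      /* register image for the v86 task        */
    struct thread *caller;              /* who to wake when the call completes    */
    struct bios_request *next;
};

static struct bios_request *bioscall_queue;  /* LIFO here for brevity; real code wants a FIFO plus locking */

/* Syscall side: queue the request, then put the calling process to sleep. */
void sys_bios_call(struct thread *caller, struct bios_request *req)
{
    req->caller = caller;
    req->next = bioscall_queue;
    bioscall_queue = req;
    signal_event(&bioscall_event);      /* tell the bioscall thread there is work    */
    sleep_thread(caller);               /* caller blocks until the v86 task is done  */
}

/* Bioscall thread: pop a request, run it as a v86 process, wake the caller. */
void bioscall_thread(void)
{
    for (;;) {
        wait_event(&bioscall_event);
        struct bios_request *req = bioscall_queue;
        bioscall_queue = req->next;
        run_v86_bios_call(req->int_no, req->regs);
        wake_thread(req->caller);
    }
}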
Posted: Thu Feb 28, 2008 2:30 am
by JoeKayzA
My original design was to use one kernel stack per CPU, shared by all userspace threads. This stack would then be used to execute ISR and (entry) syscall code -- all non-preemptible code -- in the context of a userspace thread. All of the real work, however, would be done by dedicated kernel threads, each with its own private kernel stack (executing preemptible code). The idea was to use a lightweight messaging system in kernel space for communication between kernel and userspace threads.
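In code, the entry path might have looked roughly like this (purely illustrative; the message structure and helpers are made-up names):

#include <stdint.h>

struct thread;                              /* userspace thread, defined elsewhere  */
struct kthread;                             /* kernel worker with its own stack     */

struct kmsg {
    int       op;                           /* requested operation                  */
    uintptr_t args[4];
    struct thread *sender;                  /* userspace thread waiting for a reply */
};

extern struct kthread *worker_for(int op);
extern void enqueue_message(struct kthread *kt, const struct kmsg *msg); /* copies msg */
extern void block_until_reply(struct thread *t);

/* Runs non-preemptibly on the shared per-CPU entry stack; nothing on that
 * stack needs to survive once the sender has been put to sleep. */
void syscall_entry(struct thread *sender, int op, const uintptr_t args[4])
{
    struct kmsg msg = { .op = op, .sender = sender };
    for (int i = 0; i < 4; i++)
        msg.args[i] = args[i];

    enqueue_message(worker_for(op), &msg);  /* the worker does the real work on its
                                               own private, preemptible stack       */
    block_until_reply(sender);              /* sender sleeps; the per-CPU stack is
                                               free for the next kernel entry       */
}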
I have since lowered my goals though, and right now I'm heading for the more traditional way -- one kernel stack per thread, with most syscall and driver code executing directly in the context of the interrupted (or calling) thread.
Posted: Fri Feb 29, 2008 1:37 am
by bewing
In my OS, each core has a scheduler; each scheduler has a stack; each scheduler has its own job table. Only core #0 can run the kernel, and when core #0 is in "kernel mode" it uses the scheduler #0 stack.
User threads have a tiny bit of space assigned as part of their job table entry as the place to push/pop their registers on a task switch. That is the only thing that space is used for -- and it is the only "kernel stack" that each process has.
Interrupts can be nested, but they make sure they only preempt user processes -- not kernel mode.
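Roughly, in code (a sketch only -- the field names and the x86-style privilege check are illustrative, not my actual implementation):

#include <stdint.h>

#define SAVE_SLOTS 16                   /* just enough room for one pushed register frame */

struct job_entry {
    uint32_t state;                     /* runnable, blocked, ...                          */
    uint32_t save_area[SAVE_SLOTS];     /* registers pushed here on a task switch; this
                                           small buffer is the task's entire "kernel
                                           stack"                                          */
    /* ... the rest of the job table entry ...                                             */
};

struct irq_frame {
    uint32_t eip, cs, eflags;           /* pushed by the CPU on interrupt entry            */
};

extern void handle_device(void);
extern void maybe_reschedule(void);

void irq_handler(struct irq_frame *frame)
{
    int from_user = (frame->cs & 3) == 3;   /* RPL 3 => a user process was interrupted     */

    handle_device();                        /* the actual device work (may itself be
                                               interrupted by a higher-priority IRQ)       */

    /* Only user processes get pre-empted; if core #0 was in kernel mode, just
     * return to it untouched.                                                             */
    if (from_user)
        maybe_reschedule();
}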