Hi,
DevNoteHQ wrote: Brendan wrote: It depends on how you want to do kernel stacks.
Thanks!
Can you tell me what to do when the stack is full?
Most people assume that if a kernel stack becomes full then the kernel has bugs (e.g. got stuck in an "infinite recursion" loop), so they do some kind of "kernel panic" and halt the machine. In this case you'd have to (try to) make sure kernel stacks are large enough to begin with.
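If you want to detect the overflow reliably (so you can panic with a useful message instead of silently corrupting whatever is below the stack), one common trick is to leave an unmapped "guard page" just below each kernel stack, so the first push past the end causes a page fault at a recognisable address. A rough sketch; alloc_virtual_range(), alloc_phys_page(), map_page(), is_in_stack_guard_page() and panic() are hypothetical helper names, not from any real kernel:

Code:
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE     4096
#define KSTACK_SIZE   (4 * PAGE_SIZE)
#define PAGE_WRITABLE 1

/* Hypothetical helpers: */
void *alloc_virtual_range(size_t bytes);
void *alloc_phys_page(void);
void  map_page(void *virt, void *phys, int flags);
int   is_in_stack_guard_page(uintptr_t addr);
void  panic(const char *fmt, ...);

/* Allocate a kernel stack with an unmapped guard page at its lowest address. */
void *alloc_kernel_stack(void)
{
    uint8_t *base = alloc_virtual_range(KSTACK_SIZE + PAGE_SIZE);

    /* Map the stack pages; deliberately leave base..base+PAGE_SIZE unmapped. */
    for (size_t off = PAGE_SIZE; off < KSTACK_SIZE + PAGE_SIZE; off += PAGE_SIZE)
        map_page(base + off, alloc_phys_page(), PAGE_WRITABLE);

    return base + PAGE_SIZE + KSTACK_SIZE;   /* initial stack top (grows down) */
}

void page_fault_handler(uintptr_t fault_addr)
{
    /* A fault inside a guard page means a kernel stack overflowed. */
    if (is_in_stack_guard_page(fault_addr))
        panic("kernel stack overflow near %p", (void *)fault_addr);

    /* ... normal page fault handling ... */
}

Note that on 80x86 the page fault handler itself needs a known-good stack to run on - if the CPU can't push the exception frame you get a double fault instead of a page fault, so the double fault handler needs its own stack too.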
Unfortunately "how large is large enough?" can be hard to determine; especially if you support nested IRQs, and especially if it's a monolithic kernel where third-party device drivers can install their own IRQ handlers. Note that some OSs have special/extra "interrupt handler stacks" so that the worst case that normal kernel stacks have to handle is much smaller (which can save a lot of RAM if you've using a kernel stack for each user thread, and also makes it easier to figure out how large a kernel stack needs to be).
DevNoteHQ wrote: Do I just use stack traces to detect if the stack is full and create a new one somewhere?
"Dynamically growing kernel stacks" ends up being excessively complicated and/or error prone.
DevNoteHQ wrote: And are heap and stack for a process/thread usually created by the kernel? Because the kernel reserves some space for the stack itself. And is the heap for the kernel initialized the same way as the stack for the kernel (="resb HEAP_SIZE")? Or do you usually initialize the heap on top of the stack (=on top of the kernel.elf) when the kernel is already running?
For processes, my kernel gives the process a virtual address space and lets the process (or the run-time for whatever language that process was written in) do whatever it feels like with the virtual address space it was given. My kernel only provides a way for the process to set/change the "virtual area type" for areas of the process' virtual address space (e.g. if a process tells my kernel it wants the area from 0x00000000 to 0x12345000 in its virtual address space to be changed to the "not used" virtual area type, then my kernel does what it's told).
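For illustration only (the names and types below are invented for this post, not my kernel's actual API), an interface in that spirit might look like:

Code:
/* Hypothetical "virtual area type" interface; names/types are illustrative. */
enum virt_area_type {
    AREA_NOT_USED,      /* unmapped; any access faults */
    AREA_NORMAL_RAM,    /* demand-allocated read/write pages */
    AREA_READ_ONLY      /* e.g. shared code */
};

/* Hypothetical kernel call: set [start, end) of the calling process's
   virtual address space to the given type; returns 0 on success. */
int set_virtual_area_type(void *start, void *end, enum virt_area_type type);

/* The example from above: mark 0x00000000..0x12345000 as "not used". */
void example(void)
{
    set_virtual_area_type((void *)0x00000000, (void *)0x12345000, AREA_NOT_USED);
}

The point is that policy (where heaps and stacks go, how they grow) belongs to the process's run-time, and the kernel only provides the mechanism.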
For kernels, I'd start by drawing a map of how you feel like using kernel space - maybe one area for kernel code, maybe one area for the "recursive page table mapping" trick, maybe one area set aside for memory mapped devices, etc. If the kernel has one or more heaps, then you'd add space for those too. How you tell the compiler about this memory map is up to you - I'm lazy so I typically just use the preprocessor (e.g. "#define MESSAGE_QUEUE_AREA_ADDRESS 0xD800000").
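For example, a memory map recorded with the preprocessor might look like this (all the addresses and area names below are invented, for a "kernel at 3 GiB" layout):

Code:
/* Kernel space memory map (illustrative addresses only) */
#define KERNEL_CODE_AREA_ADDRESS      0xC0000000  /* kernel code and data */
#define KERNEL_HEAP_AREA_ADDRESS      0xD0000000  /* kernel heap */
#define KERNEL_HEAP_AREA_SIZE         0x04000000  /* 64 MiB */
#define MMIO_AREA_ADDRESS             0xE0000000  /* memory mapped devices */
#define RECURSIVE_PAGING_AREA_ADDRESS 0xFFC00000  /* recursive page table mapping */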
DevNoteHQ wrote: And what use could swapgs have? The only usage of the GS register that I saw was to save CPU-specific information (TSS address, current thread, ...) in the kernel-mode register. What could I put into the user-mode register? Or is the user-mode GS usually empty to prevent processes from reading CPU-specific information?
Usually a kernel has various pieces of code that want to find CPU specific information (e.g. which task the current CPU is running, how long until the current CPU needs to do a task switch, how much load the current CPU is under, whether the current CPU is in a special state, etc). To find this information quickly on 80x86, a lot of kernels use a segment register, where the segment's base points to that CPU's information. Unfortunately, nothing prevents user-space code from changing the segment register that the kernel uses, so the kernel has to guard against this possibility. SWAPGS is what AMD provided for this purpose - the kernel uses SWAPGS to make sure GS is set correctly (e.g. just after the CPU switches from CPL=3 to CPL=0), then uses GS to find CPU specific information.
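As a rough sketch of how the pieces fit together (SWAPGS and GCC's "__seg_gs" address space are real; the struct layout, offsets and symbol names below are invented for this post):

Code:
#include <stdint.h>

/* Per-CPU data block; at boot each CPU's IA32_KERNEL_GS_BASE MSR is set to
   point at its own copy, so SWAPGS at kernel entry makes GS point here. */
struct cpu_data {
    struct cpu_data *self;      /* %gs:0  - normal pointer to this block */
    void *current_thread;       /* %gs:8  - task this CPU is running */
    uint64_t user_rsp_scratch;  /* %gs:16 - user RSP saved at syscall entry */
    uint64_t kernel_stack_top;  /* %gs:24 - this CPU's kernel stack */
};

/* Skeleton of a SYSCALL entry stub (GCC top-level asm; the %gs offsets
   match the struct above). SWAPGS swaps GS.base with IA32_KERNEL_GS_BASE. */
__asm__(
    ".global syscall_entry\n"
    "syscall_entry:\n"
    "    swapgs\n"               /* swap in the kernel's GS base */
    "    movq %rsp, %gs:16\n"    /* stash the user RSP */
    "    movq %gs:24, %rsp\n"    /* switch to this CPU's kernel stack */
    "    # ... save registers, call the C handler, restore registers ...\n"
    "    movq %gs:16, %rsp\n"    /* restore the user RSP */
    "    swapgs\n"               /* swap the user GS base back */
    "    sysretq\n"
);

/* With GCC 6+, C code can read the per-CPU block through the __seg_gs
   address space; this compiles to a single "mov %gs:8, %rax". */
#define this_cpu ((__seg_gs struct cpu_data *)0)

static inline void *get_current_thread(void)
{
    return this_cpu->current_thread;
}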
User code shouldn't have any reason to touch (or modify) GS, and shouldn't be able to access the kernel's data (including the kernel's CPU specific data that GS is used for). For thread local storage, threads can use a different segment register (e.g. FS).
Cheers,
Brendan