Hi,
prasoc wrote:At the moment, I haven't implemented any userspace, so everything is based in the kernel (monolithic)
One (kernel) stack per core does seem a lot simpler, but I would like to have multiple kernel driver threads to keep the screen refreshed, and other updating mechanisms, all kept away from the user.
It is currently a lot more of a monolithic kernel than I would like, but maybe in the future I will work on a microkernel, it definitely seems a lot cleaner in theory, but I don't have the necessary knowledge to implement one yet.
Ok; in that case I have to assume "one kernel stack per thread".
prasoc wrote:In practical code, I now save the registers in the IRQ1 scheduling function (with C, nothing fancy - just a memcpy to the thread's state) but now I have created a function which restores the register from a struct pointer in ASM. However, I am having difficulties with the fact that it is being called inside an interrupt, and I'm trashing registers that the function which calls it needs (ebp, esp)
The words "IRQ1 scheduling function" don't make any sense (note that IRQ1 is normally the keyboard interrupt, not the timer).
At the start of the IRQ handler you need to save any registers that the handler is going to use (to avoid trashing the state of whatever you interrupted), and at the end of the IRQ handler you need to restore the registers you saved; but (for "one kernel stack per thread") this has nothing to do with task switching or scheduling at all.
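For example, a minimal stub (NASM syntax; this is a hypothetical sketch assuming a flat kernel-only setup with no privilege level change, cdecl, and a made-up C handler called my_irq_handler() that sends the EOI itself):

Code:
; Save only what the C handler may clobber - for cdecl that's
; EAX, ECX and EDX (the compiler preserves EBX/ESI/EDI/EBP itself)
irq_stub:
    push eax
    push ecx
    push edx
    call my_irq_handler     ; plain C function; sends EOI to the PIC
    pop edx
    pop ecx
    pop eax
    iretd                   ; back to whatever was interrupted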
prasoc wrote:I guess my question is, *where* do I set these registers?
You save the registers in the same place you'd save them for every function call - on the stack.
prasoc wrote:They are necessary for my scheduler to perform it's task! Seems like a chicken-and-egg problem. Especially with eip, surely it would instantly jump out of the interrupt and straight into the thread's execution? Do I need to do any cleanup?
No, it has nothing to do with your scheduler.
To write a scheduler (for "one kernel stack per thread"):
a) Write a "switch to task" kernel function that saves any registers that the calling convention you're using says must be "preserved by callee"; then saves the old task's stack pointer somewhere, then loads the new task's stack pointer from somewhere, then restores the "preserved by callee" registers and returns like a normal function would. This should be in 100% assembly (because you're doing "strange" things with the stack).
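For example, a minimal sketch (NASM; assuming cdecl, and assuming a hypothetical thread structure where the saved ESP lives at offset 0 - adjust to whatever your layout actually is):

Code:
; void switch_to_task(thread_t *old, thread_t *new);
global switch_to_task
switch_to_task:
    push ebx                ; save cdecl's "preserved by callee" registers
    push esi
    push edi
    push ebp

    mov eax, [esp+20]       ; eax = old thread's structure (first argument)
    mov [eax], esp          ; save old task's stack pointer at offset 0

    mov eax, [esp+24]       ; eax = new thread's structure (second argument)
    mov esp, [eax]          ; load new task's stack pointer

    pop ebp                 ; restore the new task's callee-saved registers
    pop edi
    pop esi
    pop ebx
    ret                     ; "returns" to wherever the new task left off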
b) Write some code to create the management data for the initial task - the thread of execution that the kernel has been using since the computer booted (e.g. a "thread data structure" or something, which is where the task's stack pointer gets saved during a task switch).
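For example (a hypothetical C sketch - every name here is made up, and the only hard requirement is that the saved ESP sits wherever your assembly expects to find it):

Code:
typedef struct thread {
    unsigned int esp;       /* saved kernel stack pointer; offset 0 to match the asm above */
    struct thread *next;    /* for the scheduler's list, later */
    int id;
} thread_t;

thread_t boot_thread;       /* the thread the kernel booted on */
thread_t *current_thread;

void init_multitasking(void) {
    /* The boot thread is already running on its own stack, so its esp
       field only gets filled in when it's first switched away from */
    boot_thread.id = 0;
    boot_thread.next = &boot_thread;    /* circular list of one */
    current_thread = &boot_thread;
}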
c) Write some code to create a new task, which includes allocating a stack and making sure that the new stack contains whatever the "switch to task" function pops off the stack, in the correct order (this is also why you want that "switch to task" function to be in 100% assembly - so you can guarantee the stack layout is known and won't change).
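A hypothetical sketch of that, reusing the thread_t above (kmalloc() is assumed to be whatever your kernel heap provides):

Code:
#define STACK_SIZE 4096

extern void *kmalloc(unsigned int bytes);   /* assumed to exist */

thread_t *create_thread(void (*entry)(void), int id) {
    thread_t *t = kmalloc(sizeof(thread_t));
    unsigned int *stack_top =
        (unsigned int *)((char *)kmalloc(STACK_SIZE) + STACK_SIZE);

    /* Must mirror switch_to_task exactly: it pops EBP, EDI, ESI, EBX
       and then does "ret", so build the new stack in reverse */
    *--stack_top = (unsigned int)entry;     /* eaten by "ret" - becomes EIP */
    *--stack_top = 0;                       /* ebx */
    *--stack_top = 0;                       /* esi */
    *--stack_top = 0;                       /* edi */
    *--stack_top = 0;                       /* ebp */

    t->esp = (unsigned int)stack_top;
    t->id = id;
    return t;
}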
d) Test everything above by spawning a second task, where the first task does "switch to task #2" and the second task does "switch to task #1", so that you end up with a huge number of task switches.
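Something like this (hypothetical names again):

Code:
extern void switch_to_task(thread_t *old, thread_t *new);

static thread_t *task2;

void second_task(void) {
    for (;;)
        switch_to_task(task2, &boot_thread);    /* "switch to task #1" */
}

void test_task_switching(void) {
    task2 = create_thread(second_task, 1);
    for (;;)
        switch_to_task(&boot_thread, task2);    /* "switch to task #2" */
}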
e) Implement a "find task to switch to, then switch to it" function. This will involve having some kind of structure to keep track of tasks (if this is your first scheduler, I'd suggest using a simple linked list for a "round robin" scheduler initially, just to get some experience with it - that initial "round robin" scheduler can be replaced by something that doesn't suck later). In any case, this function would find a task to switch to and then call the "switch to task" kernel function you wrote earlier.
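For the simple "circular linked list" version the whole thing can be tiny (hypothetical sketch, using the thread_t from earlier):

Code:
void schedule(void) {
    thread_t *old = current_thread;
    thread_t *next = old->next;     /* round robin: just take the next one */

    if (next == old)
        return;                     /* nothing else to run */

    current_thread = next;
    switch_to_task(old, next);
}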
f) Test your "find task to switch to, then switch to it" function with the same pair of kernel threads you used for testing earlier; just by making both tasks call your "find task to switch to, then switch to it" function (instead of switching directly to each other). When that works, you should be able to spawn 123 more threads and test that too.
g) Write some kind of "block_task(reason)" and "unblock_task(taskID)" functions. Whenever a task has to wait for something, the kernel calls "block_task(reason)", which removes the task from whatever data structure/s the scheduler is using to keep track of tasks and then calls your "find task to switch to, then switch to it" function; which means that the task doesn't get any CPU time anymore. Whenever something happens that a task was waiting for, the kernel calls "unblock_task(taskID)", which adds the task back into whatever data structure/s the scheduler is using to keep track of tasks, which means the task starts running again. Note that (for more advanced schedulers) "unblock_task(taskID)" might also check if the task that was unblocked is higher priority than the currently running task, and call your "switch to task" kernel function if it is - this allows higher priority tasks to respond "immediately" when (e.g.) the user presses a key, or a network packet arrives, or whatever. Also note that (for almost all OSs under almost all conditions) the majority of task switches are caused by tasks blocking and unblocking.
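As a rough sketch (the ready-list helpers and find_thread() are assumed, "state" would be a new field in thread_t, and the reason codes are made up):

Code:
enum { READY, WAITING_FOR_MSG, SLEEPING };      /* made-up reason codes */

void block_task(int reason) {
    current_thread->state = reason;
    remove_from_ready_list(current_thread);     /* assumed helper */
    schedule();             /* won't return until someone unblocks us */
}

void unblock_task(int task_id) {
    thread_t *t = find_thread(task_id);         /* assumed helper */
    t->state = READY;
    add_to_ready_list(t);                       /* assumed helper */
    /* A smarter scheduler would pre-empt here if t is higher
       priority than current_thread */
}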
h) Test the "block_task(reason)" and "unblock_task(taskID)" functions by having a kernel thread call "block_task()" and then getting a different kernel thread to wake it back up using "unblock_task(taskID)".
i) Write code to do "sleep()", which puts the task onto some kind of "tasks to wake up, and when" list and then calls "block_task(SLEEPING);". You will need some kind of timer IRQ that checks that list and calls "unblock_task(taskID)" to wake tasks up when their time arrives. Test this too.
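A hypothetical sketch, assuming the timer IRQ increments a global "ticks" counter once per millisecond, plus two new thread_t fields (wake_time and next_sleeping):

Code:
volatile unsigned int ticks;    /* incremented by the timer IRQ */
static thread_t *sleep_list;

void sleep(unsigned int ms) {
    current_thread->wake_time = ticks + ms;
    current_thread->next_sleeping = sleep_list;     /* push onto the list */
    sleep_list = current_thread;
    block_task(SLEEPING);
}

/* Called from the timer IRQ handler on every tick */
void wake_sleepers(void) {
    thread_t *t;
    for (t = sleep_list; t != NULL; t = t->next_sleeping) {
        if (ticks >= t->wake_time)
            unblock_task(t->id);    /* removal from sleep_list elided here */
    }
}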
j) Consider writing some kind of IPC (messages, pipes, whatever). This will end up using the "block_task(reason)" and "unblock_task(taskID)" functions too.
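For example, a deliberately dumb "one message slot per thread" sketch, just to show where blocking plugs in (real IPC wants a proper queue, plus locking once you have multiple CPUs; mailbox and has_message would be new thread_t fields):

Code:
void send_message(int dest_id, unsigned int msg) {
    thread_t *dest = find_thread(dest_id);
    dest->mailbox = msg;
    dest->has_message = 1;
    if (dest->state == WAITING_FOR_MSG)
        unblock_task(dest_id);
}

unsigned int get_message(void) {
    while (!current_thread->has_message)
        block_task(WAITING_FOR_MSG);    /* sleep until send_message() runs */
    current_thread->has_message = 0;
    return current_thread->mailbox;
}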
k) Modify the code a little to set some kind of time-out during task switches; and modify your timer IRQ so that when that time-out expires it calls the "find task to switch to, then switch to it" function. This is mostly just a little hack to prevent CPU hogs.
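E.g. (hypothetical, with a made-up 50-tick time slice):

Code:
#define TIME_SLICE 50               /* ticks; tune to taste */
volatile int time_slice_remaining;

/* Call this whenever a new task is chosen (e.g. from schedule()) */
void reset_time_slice(void) {
    time_slice_remaining = TIME_SLICE;
}

/* The timer IRQ handler, tying the last few steps together */
void timer_tick(void) {
    ticks++;
    wake_sleepers();
    if (--time_slice_remaining <= 0)
        schedule();                 /* pre-empt the CPU hog */
}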
Cheers,
Brendan