
Building multithreading at kernel level

Posted: Sat Apr 11, 2020 12:23 pm
by Thpertic
Hi all,
I'll start off by saying I'm kind of a newbie here, in the sense that I have built something, but it's the first time for everything.
Now, I'm following the list in the wiki to choose which parts to implement, and in which order (https://wiki.osdev.org/Creating_an_Operating_System). I'm at the point of "Multithreaded Kernel", but there isn't a page to start from, so I'm lost.

Do you have any suggestions or directions on where to go?
Thanks all.

My OS: https://github.com/thpertic/LostOS

Re: Building multithreading at kernel level

Posted: Sun Apr 12, 2020 7:03 am
by nullplan
Well, now that interrupts are working, you should be able to create multiple kernel threads. At this point, having multiple tasks is as easy as it will ever get, since all threads share address space. So you can focus on the structures that make your scheduler go. You can figure out when to do task switching, how to do it, how to select the next task, what a task even is, etc. You may even be able to figure out how to control multiple CPUs, which will come in handy once you add user-space tasks.
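For concreteness, a minimal task structure could look something like this. This is only a sketch; the field names are illustrative and not taken from any particular kernel:

```c
/* Hypothetical minimal task structure. A real one would also carry
 * arch-specific state (saved FPU area, page tables) and accounting data. */
enum task_state { TASK_RUNNABLE, TASK_BLOCKED, TASK_DEAD };

struct task {
    void *kstack;          /* saved kernel stack pointer while switched out */
    enum task_state state; /* what the scheduler may do with this task */
    unsigned flags;        /* kernel-internal behavior flags (e.g. FPU used) */
    struct task *next;     /* link in the scheduler's run queue */
};
```

The scheduler then only needs to walk the run queue and switch to any task whose state permits it.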

This whole thing can speed up your system initialization, since you can run the initialization tasks in parallel (but be sure to add the necessary locking: only one task at a time can access the PCI configuration ports, for instance, unless you have MMIO config space). And while exploring the PCI bus, you will find multiple devices you might want to initialize in parallel, for instance USB host controllers.
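To illustrate the locking point: the legacy PCI config mechanism shares one address port (0xCF8) and one data port (0xCFC) between all CPUs, so a write/read pair must be atomic with respect to other tasks. A sketch with a C11 spinlock follows; the function names are made up, and the actual port I/O is elided (replaced by a placeholder) so the snippet stays self-contained:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical spinlock guarding legacy PCI config space: the 0xCF8/0xCFC
 * port pair is shared global state, so only one task may use it at a time. */
static atomic_flag pci_cfg_lock = ATOMIC_FLAG_INIT;

static void pci_lock(void)
{
    while (atomic_flag_test_and_set_explicit(&pci_cfg_lock, memory_order_acquire))
        ; /* spin until the flag was previously clear */
}

static void pci_unlock(void)
{
    atomic_flag_clear_explicit(&pci_cfg_lock, memory_order_release);
}

uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    /* Standard config-address encoding: enable bit, bus, device, function. */
    uint32_t addr = 0x80000000u | (uint32_t)bus << 16 | (uint32_t)dev << 11
                  | (uint32_t)fn << 8 | (off & 0xFCu);
    pci_lock();
    /* Real code: outl(addr, 0xCF8); value = inl(0xCFC); */
    uint32_t value = addr; /* placeholder so the sketch runs in user space */
    pci_unlock();
    return value;
}
```

With MMIO config space (PCIe ECAM), each function's config registers have their own addresses, so this particular lock becomes unnecessary.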

If you need some pointers: I would implement something similar to Linux's task flags scheme, where each task has flags that change internal kernel behavior. When the timer interrupt detects that a task has run out of its time slice, it simply sets a "timeout" flag on the task. That flag is then acted upon in the return path to user space or to the idle task: instead of returning directly, the kernel first schedules in another task, and only then returns.
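The return-path check described above might be sketched like this. Everything here is hypothetical (the flag name, the accessors, and the stubbed-out schedule() exist only for illustration):

```c
/* Hypothetical "reschedule" flag checked on every return to user space. */
#define TASK_NEED_RESCHED 0x1u

static unsigned demo_flags;  /* stands in for the current task's flag word */
static unsigned current_flags(void) { return demo_flags; }
static void clear_flag(unsigned f)  { demo_flags &= ~f; }

static int schedule_calls;   /* counts invocations for the demonstration */
static void schedule(void)   { schedule_calls++; } /* stub for the sketch */

/* Called on every return to user space (or to the idle task): if the timer
 * interrupt flagged this task, switch away before actually returning. */
void ret_to_user_check(void)
{
    if (current_flags() & TASK_NEED_RESCHED) {
        clear_flag(TASK_NEED_RESCHED);
        schedule();
    }
}
```

The point of this design is that the interrupt handler itself stays short: it only sets a flag, and the expensive context switch happens on a well-defined code path.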

Here is how I did the multi-tasking support. First, "schedule()": that function picks the next runnable task and calls the arch code to switch to it.

Code:

void schedule(void) {
  /* Pick the next runnable task; only switch if it differs from the current one. */
  struct task *tsk = pick_next_task();
  if (tsk != this_cpu_read(current))
    arch_schedule(tsk);
}
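pick_next_task() is not shown; a minimal round-robin version over a circular run list might look like the following. This is a sketch only, with the task structure trimmed to the two fields it needs, and no handling of priorities or task states beyond a runnable bit:

```c
#include <stddef.h>

/* Trimmed-down task for the sketch: just a circular list link and a flag. */
struct task { struct task *next; int runnable; };

static struct task *current_task; /* the task we are switching away from */

/* Round-robin: start at the task after the current one and take the first
 * runnable task found; fall back to the current task if nothing else is. */
struct task *pick_next_task(void)
{
    struct task *t = current_task->next;
    while (!t->runnable && t != current_task)
        t = t->next;
    current_task = t;
    return t;
}
```

A real picker would also skip condemned and blocked tasks, and likely keep per-CPU run queues once multiple CPUs are involved.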
The architecture-specific part prepares the switch, but calls assembly code for the actual stack switch:

Code:

void arch_schedule(struct task *new) {
  struct task *this = this_cpu_read(current);
  if (this->flags & TASK_FPU)
    arch_asm_save_fpu(this->arch.fxsave_area); /* save FPU state only if this task used it */
  arch_asm_schedule(&this->arch.kstack, new->arch.kstack);
  /* Execution resumes here once this task is scheduled back in. */
  this_cpu_write(current, this);
  if (this->flags & TASK_FPU)
    arch_asm_load_fpu(this->arch.fxsave_area);
  if (!(this->flags & TASK_KTHREAD))
    arch_asm_load_vm(this->vm->arch.cr3); /* kernel threads keep whatever address space is loaded */
}
And finally the actual assembly:

Code:

arch_asm_schedule:
/* rdi - pointer to pointer to current task's stack
 * rsi - pointer to new task's stack
 */
  /* Save the callee-saved registers on the old task's stack. */
  pushq %rbx
  pushq %rbp
  pushq %r12
  pushq %r13
  pushq %r14
  pushq %r15
  /* Switch stacks: store the old stack pointer, load the new one. */
  movq %rsp, (%rdi)
  movq %rsi, %rsp
  /* Restore the new task's callee-saved registers and return into it. */
  popq %r15
  popq %r14
  popq %r13
  popq %r12
  popq %rbp
  popq %rbx
  ret
The assumption is that the original task we came from will at some point be scheduled again. From the caller's point of view, nothing happens during the call to arch_asm_schedule(), but because the stack is switched, another task can run in that time. If the stack is ever switched back to the current task, all the preserved registers will be restored and all is right with the world again.

For creating a new task: I just have to write things into the kernel stack that will be picked up by arch_asm_schedule(). So I just allocate seven words on the new kernel stack and set them so that good things will happen:

Code:

int arch_init_ktask(struct task *tsk, void (*routine)(void*), void* arg)
{
  uintptr_t ks = (uintptr_t)tsk; /* kernel stack begins below task structure */
  ks &= -16ul; /* align the stack down to 16 bytes */
  uint64_t *kstack = (void*)ks;
  kstack -= 7;
  kstack[0] = (uint64_t)tsk;     /* popped into %r15 */
  kstack[1] = (uint64_t)routine; /* popped into %r14 */
  kstack[2] = (uint64_t)arg;     /* popped into %r13 */
  kstack[6] = (uint64_t)arch_asm_start_kthread; /* return address for the final ret */
  tsk->arch.kstack = kstack;
  return 0;
}
And, of course:

Code:

arch_asm_start_kthread:
  movq %r15, %gs:CPU_CURRENT /* save current task in CPU-local data */
  movq %r13, %rdi /* move argument to argument register */
  xorl %ebp, %ebp /* clear RBP to indicate root frame */
  callq *%r14 /* call the thread routine */
  call arch_asm_exit_kthread
  /* arch_asm_exit_kthread never returns, so this should be dead code */
  movq $strExitReturned, %rdi
  call panic
1:
  cli
  hlt
  jmp 1b

arch_asm_exit_kthread:
  movq %gs:CPU_CURRENT, %rdi
  lock orl $TASK_CONDEMNED, TASK_FLAGS(%rdi) /* mark the task as dead */
1:
  call schedule /* never schedules this task back in */
  jmp 1b
And TASK_CONDEMNED tells the scheduler to never, ever, schedule in that task again, and to clean it up at the next convenience. I clean up condemned tasks in the idle loop, which might be suboptimal in situations with lots of task turnaround, and therefore I also allow the task allocator to revive one of the condemned tasks. It is a bit tricky to get the locks correct on this, though.
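The idle-loop cleanup could be sketched as follows. The flag value, list layout, and free_task() are all invented for the illustration (free_task() here just counts, where a real kernel would release the task's stack and memory), and the sketch deliberately ignores the locking that a real multi-CPU version needs:

```c
#include <stddef.h>

#define TASK_CONDEMNED 0x100u /* hypothetical flag value */

struct task { unsigned flags; struct task *next; };

static struct task *all_tasks; /* singly linked list of every task */
static int freed_count;        /* stands in for actually freeing memory */

static void free_task(struct task *t) { (void)t; freed_count++; }

/* Called from the idle loop when nothing else is runnable: unlink and
 * release every task that arch_asm_exit_kthread has condemned. */
void reap_condemned(void)
{
    struct task **pp = &all_tasks;
    while (*pp) {
        struct task *t = *pp;
        if (t->flags & TASK_CONDEMNED) {
            *pp = t->next; /* unlink first, then release */
            free_task(t);
        } else {
            pp = &t->next;
        }
    }
}
```

The pointer-to-pointer walk avoids special-casing the list head, which keeps the unlink step to a single assignment.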

I hope I've given you enough inspiration to get started.

Re: Building multithreading at kernel level

Posted: Sun Apr 12, 2020 12:46 pm
by Thpertic
Thank you so much!
I thought that multithreading meant creating threads within processes, but as I can see it also covers creating other kernel tasks.

Thank you anyway, I’m going to take your help and start.