Trying to wrap my head around processes
Posted: Wed Aug 24, 2016 2:39 pm
I've been fooling around with writing an OS, having read MikeOS, and also Nick Blundell's paper on how to write your own OS. I've also read (well, perused) various books on OS design and internals. But I'm struggling with the idea of processes, or rather their implementation.
I understand the concepts (at least, how they're typically implemented in Unix) but not how one might code an OS that can interrupt and schedule processes.
Let's say
- we have a single-processor system to keep things simple. I want two processes to be running in a multiprogramming environment.
- Obviously, only one process is truly executing at any given moment, but the OS can simulate switching back and forth so it appears to the user that both are running.
- Let's leave out any process queue optimization, priorities, etc. and just say we're going to round-robin the processes with some arbitrary time for each to run.
The OS decides the program at 100000 should execute, so it JMPs there. It marches through the addresses: 100001, 100002, etc. Let's say it gets to 101000 and the OS decides it's time to switch to the other process.
THAT is what I don't get. How is that done? The processor in my mind is sequentially moving through the addresses, jmping/adding/moving/etc. How can an OS (which is just another program) jump in and say "hold up, I'm going to freeze your register states and then switch to memory address 200000"?
If this were cooperative multitasking, I could understand a process "yielding" by having a JMP back to some location where task-switching opcodes execute. But this is preemptive.
It's almost like there needs to be a separate computer that runs "above" the regular computer to direct it. Are we into kernel protection rings here? But even there, I'm not seeing how that would work in assembler.
I'm sure there's a concept, paper, chapter, or book I'm missing... can anyone point me at a breadcrumb trail? Thanks.