Process management

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
richie

Re:Process management

Post by richie »

Hello!
To Pype.Clicker and Perica Senjak:
It's much simpler: there is no need for the shifting. The processor first looks at bits 0 and 1 to get the privilege level. If that check passes, the processor looks at bit 2, the table indicator. If it is set, the descriptor will be found in the Local Descriptor Table; otherwise it is in the Global Descriptor Table.
Then the processor reads the whole 16 bits but ignores bits 0, 1 and 2 (it treats them as if they were zeroed). These 16 bits are the byte offset into the GDT.
Of course the result is the same as with the shifting, but it is simpler: just ignore bits 0, 1 and 2 and you have the byte offset.

As it seems there are a lot of fans of stack task switching here, I want to mention the main problem with it: the scheduler uses the user stack, and therefore a stack overflow might occur while task switching. This would cause that task to crash.
Even worse would be the case where there is no room left in the data segment (I assume that the data and stack segment are one and the same). Consider a very simple program which has no need for a stack, so the value of esp can be 0. This would lead to a General Protection Fault whenever a timer interrupt occurs.
There is also a security problem: when a task gets control again, everything seems as before. But there was a change: with an esp value of 488, the eip address can be found at address 484, and all the register values are still in memory. This information is nothing new for the process, since a process is allowed to know its own register values. But if the scheduler does something unusual, like copying values from one process to another or dealing with information that should only be known by the kernel, the process can read that information out of the stack (or rather, the memory where the stack was during scheduling).
Because of all these problems Intel (the idea is much older) introduced the TSS. With this structure, user processes and the scheduler are separated.

There are a lot of methods for dealing with the problems that can occur with stack task switching, but keep in mind that a user program might do something unusual (like setting esp to zero), or the user program might gain access to secret information.
Pype.Clicker
Member
Posts: 5964
Joined: Wed Oct 18, 2006 2:31 am
Location: In a galaxy, far, far away

Re:Process management

Post by Pype.Clicker »

richie, who told you we were switching user-level stacks?
The registers are of course pushed to the kernel-level stack, which is the one active when an interrupt is raised or when a system call is made ...

remember: if the processor is in user mode and an interrupt occurs:
  • the IDT entry is looked up and the processor sees it should transfer execution to OSCODE_SEGMENT (at DPL0: only DPL0 segments are allowed for interrupt gates).
  • as the target DPL != the current privilege level (read from the selector in CS), a hardware stack switch occurs: the processor sets SS <- TSS.SS0 and ESP <- TSS.ESP0 and pushes information about the old stack (old_ss, old_esp) on that new stack. (Data segment registers like ds and es are not pushed by the CPU; the handler saves those itself.)
  • only then does it push on the kernel stack the old code position (eflags with IF=1, old_cs with CPL>0, and old_eip).
  • once all this is done, it clears the interrupt flag and transfers execution to the interrupt handler (by loading CS = IDT[vector].selector and EIP = IDT[vector].offset).
If we later have a call to scheduler() which then calls stack_switch(), we push the current values of the hardware registers on the kernel stack, then store ss and esp into from_task->ss and from_task->esp before reloading them with to_task->ss and to_task->esp.

Once the stack has been switched this way (we assume every kernel thread uses the same CS and DS segments), we pop the registers (thus with the state of the "to" task, as we're on its stack), including eip, which resumes the operation the "to" task was doing before it was preempted the last time, and the pops continue until we IRET from the kernel back to the new task's user-level code.

None of the problems you mention can arise if the kernel stack's size is set up correctly. No spying is possible, as user-level processes can't access kernel stacks (neither their own nor other tasks').

Note that if the "from" and "to" tasks aren't in the same address space, we must issue an address space change (reloading CR3) before we start popping values from the "to" stack.
This also implies that the code for the scheduler, the task structures and the stack_switch() code must exist in every address space at the same logical address. Kernel-level stacks (one per task) can be bound to their own address space, though :)
richie

Re:Process management

Post by richie »

But if you use the kernel stack, you first have to switch to the kernel stack. The only way of switching to the kernel stack without modifying any registers of the process is via the TSS.
I thought of a solution without a TSS, like it is done on an 8086 in real mode (for example in the first version of Minix). If you use the kernel stack you wouldn't get these problems.
But if you use the TSS for getting a new SS and ESP when an interrupt occurs (like in the second MINIX version), you are effectively using hardware task switching (between two processes: kernel and user). And a task switch is much less efficient: the hardware first has to save all registers, and then all the registers are saved again on the kernel stack.

Or do you use some other trick for switching to the kernel stack without losing the current user process's SS and ESP values?
Pype.Clicker

Re:Process management

Post by Pype.Clicker »

check the updated response ...