Help Understanding TSS's

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt post here.
enigma
Posts: 18
Joined: Tue Dec 25, 2007 10:52 am

Re: Help Understanding TSS's

Post by enigma »

New problem: despite whatever I do to try and switch tasks, nothing seems to work. I've tried linked lists, queues, simple structures with all the registers, and nothing seems to work. I know that when an interrupt fires, the CPU saves the current state of the machine (though not the general purpose registers or segments) and then calls the interrupt handler. So I figured the easiest way to switch tasks was to implement it in the PIT code (set to execute the scheduler once per second as a temporary test to see how it's working): right before calling the scheduler the CPU has naturally saved everything it was doing, so I add a few more pushes onto the stack, then just switch ESP to wherever the next task needs it. Right now no other task except for the main kernel is executing; however, with every implementation of a scheduling structure the machine gets lost somewhere (not a triple fault; it simply executes up to a point and then stops). Is this even the "correct" concept of how I should be doing software task switching?
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Re: Help Understanding TSS's

Post by Brendan »

Hi,
enigma wrote:New problem: despite whatever I do to try and switch tasks, nothing seems to work. I've tried linked lists, queues, simple structures with all the registers, and nothing seems to work. I know that when an interrupt fires, the CPU saves the current state of the machine (though not the general purpose registers or segments) and then calls the interrupt handler. So I figured the easiest way to switch tasks was to implement it in the PIT code (set to execute the scheduler once per second as a temporary test to see how it's working): right before calling the scheduler the CPU has naturally saved everything it was doing, so I add a few more pushes onto the stack, then just switch ESP to wherever the next task needs it. Right now no other task except for the main kernel is executing; however, with every implementation of a scheduling structure the machine gets lost somewhere (not a triple fault; it simply executes up to a point and then stops).
Make sure that (in your timer IRQ handler) you send an EOI to the PIC chip *before* you do any task switch; and that EFLAGS is part of the state saved and restored (so that the "interrupts disabled" flag is restored and you don't end up waiting for an IRQ that's disabled).
enigma wrote:Is this even the "correct" concept of how I should be doing software task switching?
Different people do things differently, but IMHO it's not the correct way to implement task switching...

People that use the "PIT IRQ does the task switches" method often end up screwed later on when a task needs to block (e.g. wait for disk I/O). Sometimes they even get the CPU to do absolutely nothing until the next timer IRQ occurs, which can lead to wasting almost all of the CPU's time when all of the running tasks are I/O bound; that's relatively common when you look at things like compilers (constantly waiting for file I/O) and networking stuff (constantly waiting to send or receive network packets). It can also completely kill performance for "event driven" software (including most GUIs, which become "sluggish" due to latency, and just about everything else if the OS is a message based OS).
Brendan wrote:Implement code to do one task switch (e.g. "switchToTask(newTask)"), then build on it...
After you've written a "switchToTask(newTask)" function, you'd implement a "switchToNextTask(void)" function which selects the best task to run next and calls the "switchToTask(newTask)" function.

The timer IRQ would call the "switchToNextTask(void)" function. When a task blocks it'd call the "switchToNextTask(void)" function. When something happens that unblocks a higher priority task you'd call the "switchToTask(newTask)" function directly (so that when a high priority task is unblocked it gets CPU time immediately by preempting lower priority tasks, and doesn't need to wait until the next timer IRQ occurs).

Mostly, IMHO to begin with you want that "switchToTask(newTask)" function and a pair of kernel threads, where each kernel thread uses the "switchToTask(newTask)" function to do a task switch to the other kernel thread (so you end up constantly doing task switches between the 2 kernel threads). This is a very good way to test the "switchToTask(newTask)" function - most of the scheduler and no IRQs are needed.

Then you'd implement the "switchToNextTask(void)" function and still keep using kernel threads, where each kernel thread uses the "switchToNextTask(void)" function to do a task switch to some other kernel thread (so you're still constantly doing task switches between the kernel threads). Now you could try lots more kernel threads, and maybe give each kernel thread a different priority. This is a very good way to test the "switchToNextTask(void)" function - no IRQs needed.

The last thing you'd do is remove the calls to the "switchToNextTask(void)" function from the kernel threads and call it from the PIT timer IRQ handler instead; by the time you do that you'd know that most of it works.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
enigma
Posts: 18
Joined: Tue Dec 25, 2007 10:52 am

Re: Help Understanding TSS's

Post by enigma »

Brendan wrote:Hi,

Different people do things differently, but IMHO it's not the correct way to implement task switching...

People that use the "PIT IRQ does the task switches" method often end up screwed later on when a task needs to block (e.g. wait for disk I/O). Sometimes they even get the CPU to do absolutely nothing until the next timer IRQ occurs, which can lead to wasting almost all of the CPU's time when all of the running tasks are I/O bound; that's relatively common when you look at things like compilers (constantly waiting for file I/O) and networking stuff (constantly waiting to send or receive network packets). It can also completely kill performance for "event driven" software (including most GUIs, which become "sluggish" due to latency, and just about everything else if the OS is a message based OS).
Brendan wrote:Implement code to do one task switch (e.g. "switchToTask(newTask)"), then build on it...
After you've written a "switchToTask(newTask)" function, you'd implement a "switchToNextTask(void)" function which selects the best task to run next and calls the "switchToTask(newTask)" function.

The timer IRQ would call the "switchToNextTask(void)" function. When a task blocks it'd call the "switchToNextTask(void)" function. When something happens that unblocks a higher priority task you'd call the "switchToTask(newTask)" function directly (so that when a high priority task is unblocked it gets CPU time immediately by preempting lower priority tasks, and doesn't need to wait until the next timer IRQ occurs).

Mostly, IMHO to begin with you want that "switchToTask(newTask)" function and a pair of kernel threads, where each kernel thread uses the "switchToTask(newTask)" function to do a task switch to the other kernel thread (so you end up constantly doing task switches between the 2 kernel threads). This is a very good way to test the "switchToTask(newTask)" function - most of the scheduler and no IRQs are needed.

Then you'd implement the "switchToNextTask(void)" function and still keep using kernel threads, where each kernel thread uses the "switchToNextTask(void)" function to do a task switch to some other kernel thread (so you're still constantly doing task switches between the kernel threads). Now you could try lots more kernel threads, and maybe give each kernel thread a different priority. This is a very good way to test the "switchToNextTask(void)" function - no IRQs needed.

The last thing you'd do is remove the calls to the "switchToNextTask(void)" function from the kernel threads and call it from the PIT timer IRQ handler instead; by the time you do that you'd know that most of it works.
Thanks a lot Brendan for all of that information. I think I have a reasonable task switching system, but the only issue is actually creating a task. What I've been doing is using my malloc() to allocate the new task's stack, then passing that and the entry function to a task creation function. It all works fine and dandy (the main kernel task isn't processed this way, so as not to lose the procedural/minimally object-oriented structure that is set up), except when I try to add new tasks. The function (written in external asm) goes something like this:

Code: Select all

new_temp_esp dd 0               ; reserve a space for the new stack pointer (not in regs)
new_temp_eip dd 0               ; reserve the return address (again, not in regs)

[global new_regs]
new_regs:
   pop dword [new_temp_eip]     ; move the current return address to the area above
   pop dword [new_temp_esp]     ; do the same for the new task's stack pointer
   push dword [new_temp_esp]    ; push it back to maintain stack congruity
   push dword [new_temp_eip]    ; do the same for the return address
   xchg esp, [new_temp_esp]     ; swap the current stack out with the new task's
   xor eax, eax                 ; zero the scratch registers for the pushad
   mov ebx, eax                 ; (esi/edi/ebp keep the caller's values, since
   mov ecx, eax                 ; the caller expects them preserved)
   mov edx, eax
   pushad
   pushfd                       ; push eflags (note: captures the *current* IF state)
   mov eax, esp                 ; move the new stack pointer into eax
   xchg esp, [new_temp_esp]     ; restore the old stack
   ret                          ; return the new stack pointer
I haven't implemented the actual entry point and what-not into the function yet, because when the switch_to_task(new_task) function is called I get either a triple fault or a GPF. That function just pushes all that it needs to, switches the stack to the new task, and pops all the new stuff. However, when the first pop happens, the machine fails. How can I fix this?
mystran
Member
Posts: 670
Joined: Thu Mar 08, 2007 11:08 am

Re: Help Understanding TSS's

Post by mystran »

enigma wrote:Just curious, since I'm going with software based multitasking, is it even necessary to have a TSS in the GDT? I'm pretty sure that Linux uses 4 total:
Kernel Code
Kernel Data
User Code
User Data

Should I even use TSS's at all, or is it one of those must-haves?
Linux uses 4 segments, plus one TSS per CPU (for the basic x86 kernel at least). You need one TSS per CPU if you want to do it the (simple) Linux way and have the kernel mapped the same in each address space, because that's the only way to have separate kernel stacks for each CPU. You only need to fill in a couple of fields in the TSS, and esp0 is the only value that you actually ever have to modify (set it to whatever stack you want on the next user->kernel switch). Linux also fills in the iomap for processes that have some (but not all) I/O ports allowed (it does this lazily; IIRC you can disable the iomap by changing the TSS length, so there's no point filling it in each time). For a blanket all-or-nothing policy you can use IOPL instead.

It also used to use some segments for thread-local data; I'm not sure whether that was in the GDT or the LDT, or whether they're used anymore.

The basic logic is that you can't use the userspace stack in the kernel, because you can't rely on the userspace application having a valid stack (and having a kernel stack that is userspace-writable is not such a great idea anyway), so you need something (the TSS) to provide the CPU with a new stack address when it switches to the kernel.
The real problem with goto is not with the control transfer, but with environments. Properly tail-recursive closures get both right.