I have always had issues with Task Switching... My last kernel... "kernel2" crashed randomly.
I have read the mini-tutorial from Sig-ops: http://www.acm.uiuc.edu/sigops/roll_your_own/5.a.html and I really like where it's going.
Let me preface this by stating my "long-term" goals for my OS:
1.) Will always run on 386+ (x86 compat hardware)
2.) SMP is not on the design specs at all
3.) Performance is important, but I would choose stability, code cleanness, and my own personal understanding over it. Meaning if the OS is slower than it could be... I am OK with that.
The project is solely for my fun and benefit.
Anyway back to task switching...
What I want to do is use TSS-based hardware task switching like shown in the tutorial, combined with preemptive scheduling. When the timer fires, we save the current state, then load the next TSS up and continue... Some people say it's slower, and some don't. Like I said, speed is secondary for me.
Using TSS-based task switching comes with a limitation of something like ~8000 tasks maximum at any given time. Now I seriously doubt that would ever be reached in my OS.
Is there a way around this limitation... perhaps with the LDT?
I was also thinking about offloading certain tasks' TSSs from the GDT when they are waiting, blocked, or at idle priority, then moving them back when they become ready. I understand that this would have additional overhead, with both speed and memory penalties... again, that's a second-place thing to me...
It is more rewarding to actually see things working than to have them work "fast" but unreliably (not because software task switching is unreliable... but because I want to try something different).
Is anyone actually implementing their preemptive task switching via TSS?
Thanks,
Rich
P.S. I don't mean to come off snotty or arrogant at all. I am doing this to learn and have fun. I couldn't care less about performance and SMP... maybe if this kernel, "kernel3", ever matures, it will be rewritten as "kernel4" with SMP and performance in mind.
Need Some advice about H/W TSS Task Switching.
Re: Need Some advice about H/W TSS Task Switching.
astrocrep wrote:I have always had issues with Task Switching... My last kernel... "kernel2" crashed randomly.

That's a worry - if it happened after you added multitasking, that generally points to one of:
1) Stack creep
2) General stack mangling
3) Buggy memory management
In case you hadn't already decided this, I would be very cautious about copying and pasting any of your old code into your newer kernel until you have resolved the problem.
astrocrep wrote:1.) Will always run on 386+ (x86 compat hardware)
2.) SMP is not on the design specs at all
3.) Performance is important, but I would choose stability, code cleanness, and my own personal understanding over it. Meaning if the OS is slower than it could be... I am OK with that.

Sounds like you have thought about this a lot. Just one pointer here - think about actually designing all your module interfaces rather than just letting them evolve. If you have clean, well-thought-out interfaces, there's no reason why you can't extend the implementation later.
astrocrep wrote:Using TSS based task switching comes with the limitation of something like ~8000 tasks maximum at any given time. Now I seriously doubt that would ever be reached in my OS.
Is there a way around this limitation... perhaps with the LDT?

No - other than copying data, which you mention later and which, as you point out, is slow.
astrocrep wrote:Is anyone actually implementing their preemptive task switching via TSS?

No - I'm implementing a 64-bit OS, which has to use software task switching. This is when I normally launch into my tirade about the cons of hardware multitasking, but as I said earlier, you have obviously thought about this; if that's what you want to do and you are aware of the theoretical limitations, fine.
I would really continue to investigate "kernel2" and why it crashed 'randomly' - it could be that you have done something wrong which would crash a multi-TSS-based system too, and you don't want to make that the basis of your new OS. Also, if you fix that kernel, you will have a direct comparison between hardware and software multitasking and can then choose which you want to go for - don't go with hardware MT just because you have stumbled with software MT before. If you need any help, I'm sure the forum will try to help with debugging your old kernel (i.e. they probably won't actually do the debugging for you).
astrocrep wrote:P.S. I don't mean to come off snotty or arrogant at all.

Not at all! After all - it is your OS.
Cheers,
Adam
Re: Need Some advice about H/W TSS Task Switching.
Hi,
astrocrep wrote:Using TSS based task switching comes with the limitation of something like ~8000 tasks maximum at any given time. Now I seriously doubt that would ever be reached in my OS.
Is there a way around this limitation... perhaps with the LDT?

For hardware task switching there's an "unlimited" number of TSSs (they're just structures in memory), but a limited number of "TSS descriptors", because each TSS descriptor must be in the GDT (e.g. you can't put a TSS descriptor in an LDT).
However (IIRC), during a task switch the CPU only actually needs one TSS descriptor in the GDT. The CPU caches the address of the currently running task's TSS, and only needs to know where to find the new task's TSS. This means you can use one GDT entry for millions(?) of TSSs, just by changing the address of the TSS in the TSS descriptor before doing a task switch.
Note: This doesn't include TSS descriptors that you use for special purposes. For example, if you use a "task gate" for the double fault exception handler then the double fault exception handler's TSS descriptor would also need to be in the GDT at all times.
astrocrep wrote:Is anyone actually implementing their preemptive task switching via TSS?

I've implemented hardware task switching twice in the past - once with a maximum of 8187 tasks, and once with an "unlimited" number of tasks. I haven't used hardware task switching for many, many years though - it's knowledge I learnt for no reason (I haven't used anything I learnt about hardware task switching for any of my OSs since)...
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Thanks guys for the advice and info, I seriously appreciate the time spent responding.
I am still in the design part (the paper planning) of kernel3. kernel3 will not be a cut-and-paste of kernel2, but kernel2 will be used as a learning tool to see what works, what doesn't, and how I can improve it.
With the help of Brendan (and many others) I have a very strong physical MM and virtual MM with full paging support. I will be reusing / reimplementing that part of the kernel... however, the big change will come in the form of C++ support. I am 90% sure I will be coding kernel3 in C++, as opposed to kernel2's C. My build environment will also be Visual Studio 2005 w/ Visual SourceSafe, compiling with Cygwin GCC, although if memory serves me correctly, I will need to build a cross-compiler.
kernel2's task switching fell on its face when it came to system calls, waiting and sleeping a process, and something else (I cannot remember what).
Either way, what I like about using H/W TSS is that I have the nested flag and the backlink selector. However, I think I can reproduce / emulate that in S/W too.
My understanding of S/W task switching is as follows:
Scenario 1:
Process A is running along...
[TIMER INTERRUPT]
Push all regs onto Stack
Call the Scheduler and pass the stack pointer, and the value 0 *
{Scheduler}
Is it time to change?
Save stack to currentprocess struct.
Get new process struct
Load new process struct stack.
exit scheduler
iret
Process B is running along...
Scenario 2:
Process A is running along...
Process A requests resource that is locked
Gets next request ticket, and yields
[SYSTEM CALL yield() {yield is int 0x80, ax 0x12} ]
So the process calls the yield function, which sets AX = 0x12 and fires int 0x80
[SYSTEM INTERRUPT]
{System Interrupt Handler}
Calls System Call C handler,
Does Select on AX
if AX = 0x12 then it should push all regs onto the stack
Call the scheduler, passing the stack pointer and the value 1 *
Scheduler does its magic, loads a new stack pointer,
eventually runs down to an iret.
Process B is now running along.
* = Passing 0 to the scheduler tells it the current process has finished its timeslice. Passing 1 says it gave up its timeslice and instead will not be buried at the end of the run queue, but placed further up the line.
So when the timer interrupt fires again and the scheduler wants to load back the yielded thread, it will continue running right after the interrupt call, correct?
So am I close to being correct on S/W task switching or way off base?
Thanks,
Rich
Hi,
astrocrep wrote:Scenario 1:
Process A is running along...
[TIMER INTERRUPT]
Push all regs onto Stack
Call the Scheduler and pass the stack pointer, and the value 0 *
{Scheduler}
Is it time to change?
Save stack to currentprocess struct.
Get new process struct
Load new process struct stack.
exit scheduler
iret
Process B is running along...

This seems perfect to me, with one exception: nowhere in there do you change address spaces (write to CR3). That should be done just before loading the new process stack (pointer).
astrocrep wrote:Scenario 2:
Process A is running along...
Process A requests resource that is locked
Gets next request ticket, and yields
[SYSTEM CALL yield() {yield is int 0x80, ax 0x12} ]
So the process calls the yield function, which sets AX = 0x12 and fires int 0x80
[SYSTEM INTERRUPT]
{System Interrupt Handler}
Calls System Call C handler,
Does Select on AX
if AX = 0x12 then it should push all regs onto the stack
Call the scheduler, passing the stack pointer and the value 1 *
Scheduler does its magic, loads a new stack pointer,
eventually runs down to an iret.
Process B is now running along.

This one I see a few more problems with:
1) How does Process A request a resource (and discover that it is locked) without a system call?
2)
astrocrep wrote:if AX = 0x12 then it should push all regs onto the stack

Why is this necessary? Surely your interrupt service routine that fired when the int $0x80 was executed will have pushed all registers anyway?
Just a few thoughts. Otherwise looks good!
JamesM
JamesM,
Thank you for your comments. I was unclear and also a bit short-sighted in my examples. I wasn't even considering threads external to the kernel, and I SHOULD BE!!! I do have a successful mutex implementation with tickets to support resource locking and blocking.
My issue is in the wrappers for both the timer AND the sys-int. However, as I am doing a rewrite... I am currently not at that point in development.
Also, JamesM, your tutorials rock! I read in another thread you were thinking about starting a Multitasking tut... I sure would love to see that!
Thanks again,
Rich