Multitasking: Should the kernel have its own running task?
This might sound like a dumb question, but I've just gotten into multitasking (I thought WAY too much into it, and it got really confusing for a while) and I was wondering: do most OSes have a running kernel process, or does the kernel run strictly from system calls and things of that nature? If the kernel does not need a constantly executing task, what should be loaded after the kernel? (Granted, I have not arrived at loading executables yet, but it seems like a valid question.)
I'm new here, and am still learning the design part of this whole thing. Haha.
Re: Multitasking: Should the kernel have its own running task?
You should at least have a process that runs the HLT instruction at the lowest priority, so that when all processes are sleeping and have given up their time slices, the CPU isn't kept busy switching tasks.
Since HLT requires ring 0, this is usually run in the kernel process, along with housekeeping activities.
- piranha
- Member
- Posts: 1391
- Joined: Thu Dec 21, 2006 7:42 pm
- Location: Unknown. Momentum is pretty certain, however.
- Contact:
Re: Multitasking: Should the kernel have its own running task?
It depends on how you design your scheduler and task system. For me, the kernel does have its own task (running in ring0, obviously) that does the following (a rough sketch of such a loop follows the list):
- Finishes cleaning up after tasks that have exited (they can't clear out everything that is their own)
- Reaps sections of the kmalloc memory that have been completely freed.
- Cleans up some data structures that are alloc'd by the kernel at bootup
- And (in the future) performs swapping
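A rough sketch of the general shape of such a loop; every helper name here (reap_exited_tasks, kmalloc_reclaim, work_pending, schedule) is a placeholder, not code from this thread:
Code: Select all
/* Kernel housekeeping task: runs in ring0, cleans up after everything else,
   and hands the CPU back as soon as there is nothing left to do. */
void reap_exited_tasks(void);   /* placeholders -- supply your own */
void kmalloc_reclaim(void);
int  work_pending(void);
void schedule(void);

void kernel_task(void)
{
    for (;;) {
        reap_exited_tasks();    /* free what a dying task couldn't free itself */
        kmalloc_reclaim();      /* return completely-freed kmalloc sections */
        if (!work_pending())
            schedule();         /* nothing to do: yield to the next runnable task */
    }
}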
-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
Re: Multitasking: Should the kernel have its own running task?
bluemoon wrote:
You should at least have a process that runs the HLT instruction at the lowest priority, so that when all processes are sleeping and have given up their time slices, the CPU isn't kept busy switching tasks.
That makes sense. So something like this?
Code: Select all
void busy_task(void)
{
    /* Halt until the next interrupt; the timer IRQ will wake the CPU and
       the scheduler can then pick another task. */
    while(1)
        asm volatile ("hlt");
}
Code: Select all
global busy_task:function
busy_task:
    hlt             ; sleep until the next interrupt
    jmp busy_task
piranha wrote:
It depends on how you design your scheduler and task system. For me, the kernel does have its own task (running in ring0, obviously) that does:
* Finishes cleaning up after tasks that have exited (they can't clear out everything that is their own)
* Reaps sections of the kmalloc memory that have been completely freed.
* Cleans up some data structures that are alloc'd by the kernel at bootup
* And (in the future) performs swapping
But really, the most important feature is that if all the tasks sleep, the scheduler has a place to go to (the kernel task) so that it doesn't hang up inside the scheduler. I would recommend that the kernel have its own task; you'll find uses for it.
Oh, I see. e.g. you can't hold your own funeral.
So, how should the kernel task (if it is going to do more than just take up extra time and space) keep running constantly without taking up too much CPU? At this point, I don't have a great scheduler. It doesn't have "timeslices"; it just switches tasks when IRQ0 fires (~100 Hz is what I have it at now). So a task running in a constant loop is going to suck up a lot of processing power compared to a process that actually does something in that time. (I know that at this point I don't have a bunch of processes, but I just don't want to find out later that it isn't working right :/ lol)
Re: Multitasking: Should the kernel have its own running task?
Caleb1994 wrote:
So, how should the kernel task (if it is going to do more than just take up extra time and space) keep running constantly without taking up too much CPU?
Each process can have a different priority. The simplest way is to set the PIT to a higher frequency, say 1000 Hz. When the scheduler switches to a process, it gives it a time-slice counter, which is decremented each time the PIT fires. Once the count reaches zero, or the process gives up its time slice (by setting the count directly to zero), the scheduler picks the next task.
Furthermore, you can monitor processor usage and decide whether to HLT or to give up the time slice and just invoke the next task.
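To make the counter idea concrete, here is a minimal sketch assuming a 1000 Hz PIT; ticks_left, pick_next_task and switch_to are hypothetical names standing in for whatever your scheduler and context-switch code provide:
Code: Select all
#define TIMESLICE_TICKS 10                /* 10 ms per slice at a 1000 Hz PIT */

struct task;                              /* opaque here */
struct task *pick_next_task(void);        /* assumed: choose the next runnable task */
void switch_to(struct task *next);        /* assumed: perform the context switch */

static volatile unsigned ticks_left;      /* ticks remaining for the current task */

/* Called from the PIT (IRQ0) handler, once per tick. */
void scheduler_tick(void)
{
    if (ticks_left > 0 && --ticks_left > 0)
        return;                           /* slice not exhausted yet */
    ticks_left = TIMESLICE_TICKS;         /* start a fresh slice */
    switch_to(pick_next_task());          /* hand the CPU to the next task */
}

/* A task that wants to give up the rest of its slice just zeroes the counter. */
void yield_timeslice(void)
{
    ticks_left = 0;                       /* the next tick will reschedule */
}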
- piranha
- Member
- Posts: 1391
- Joined: Thu Dec 21, 2006 7:42 pm
- Location: Unknown. Momentum is pretty certain, however.
- Contact:
Re: Multitasking: Should the kernel have its own running task?
I just have the kernel task sleep if it's not needed. Really, it invokes the scheduler when it has nothing to do, and it jumps to the next task.
Other than that, if it's the only task that can run, then it'll eat up CPU time, but it doesn't matter at that point 'cause...well, no other tasks can run.
-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
Re: Multitasking: Should the kernel have its own running task?
bluemoon wrote:
Each process can have a different priority. The simplest way is to set the PIT to a higher frequency, say 1000 Hz. When the scheduler switches to a process, it gives it a time-slice counter, which is decremented each time the PIT fires. Once the count reaches zero, or the process gives up its time slice (by setting the count directly to zero), the scheduler picks the next task.
Okay, I have quickly whipped together some basic code for my scheduler (like I said, it was VERY basic before. It wasn't really a scheduler at all, actually, more like a task switcher.) I read up on different algorithms and decided on this one: http://wiki.osdev.org/Scheduling_Algori ... ound_Robin (it's not too complicated, but at the same time it still allows for priorities). What do you think of this setup? I don't want to get too far if someone who has done this before thinks it's going to be bad.
task.h:
Code: Select all
/*! Defines all the information needed by the scheduler about a
    certain task.
*/
typedef struct _task task_t;
typedef struct _runqueue run_queue_t;

struct _task
{
    u32 pid;              // Process ID
    u32 esp0;             // For the TSS, so that each process has its own kernel stack
    registers_t* regs;    // Saved register state
    pdir* dir;            // Page directory
    program_t* program;   // A structure defining some higher-level attributes
    run_queue_t* queue;   // Specifies which run queue this task is a part of
    task_t* next;         // Next task
};

/*! Defines the run queue of a priority (run_queue_t is typedef'd above) */
struct _runqueue
{
    task_t* ready;        // Tasks ready to be run
    task_t* done;         // Tasks waiting to be renewed (timeslice is exhausted)
    u32 ready_count;      // Number of tasks in the ready queue
};

/*! Defines all the information needed by the scheduler in a nice structure */
typedef struct _scheduler
{
    run_queue_t prior[SCHED_MAX_PRIOR+1]; // Run queues for the different priorities
    u32 timeslice;        // The number of ticks left in the current timeslice
    task_t* current;      // The current task being executed
} scheduler_t;
Code: Select all
void task_switch(registers_t* regs)
{
    if( task_prev_handler ) task_prev_handler(regs); // This should be the timer handler

    // Is this timeslice finished?
    if( sched.timeslice != 0 ){
        --sched.timeslice; // No, decrement the counter and return
        return;
    }

    // Save the current context, and remove the task from the ready queue
    task_t* task = sched.current;
    task->queue->ready = task->next;               // Set the next ready task
    task->next = task->queue->done;                // Shift the done queue down
    task->queue->done = task;                      // Set the next done task
    task->queue->ready_count--;                    // Decrement the ready count for that queue
    memcpy(task->regs, regs, sizeof(registers_t)); // Save the current register state
    sched.current = 0;

    // Check the priorities, starting with the highest, for ready tasks
    u32 i;
    for(i=0; i <= SCHED_MAX_PRIOR; ++i){
        if( sched.prior[i].ready_count ){
            sched.current = sched.prior[i].ready;  // Set the current task as the top of this ready queue
            break;
        } else if( sched.prior[i].done ) swap_queue(&sched.prior[i--]); // If there are tasks in the done queue, swap the queues and recheck
    }

    // NOTE: this relies on some task always being ready (e.g. an idle/kernel task);
    // otherwise sched.current is still 0 here.
    memcpy(regs, sched.current->regs, sizeof(registers_t)); // Grab the saved registers
    set_kernel_stack(sched.current->esp0);  // Grab the new task's kernel stack pointer (this does not affect the current stack)
    vmm_switch_dir(sched.current->dir);     // Switch to the new task's page directory
    sched.timeslice = SCHED_TIMESLICE(SCHED_TIMESLICE_LENGTH);
}
Code: Select all
scheduler_t sched;

#define SCHED_MAX_PRIOR 3          // We have 4 priorities: zero is the highest and 3 is the lowest.
#define SCHED_TIMESLICE_LENGTH 35  // The recommended length is between 20 and 50 ms. I figured 35 was a good number.
#define SCHED_TIMESLICE(length) (((length) * TIMER_TICKS_PER_SECOND) / 1000) // How many ticks are in a timeslice (multiply before dividing so integer math doesn't truncate to zero; mostly to make things look nice, should be optimized out...)
piranha wrote:
I just have the kernel task sleep if it's not needed. Really, it invokes the scheduler when it has nothing to do, and it jumps to the next task.
Other than that, if it's the only task that can run, then it'll eat up CPU time, but it doesn't matter at that point 'cause...well, no other tasks can run.
-JL
Sounds like what I was thinking. Sorry if these questions seem really basic. I want a firm grasp of what I want to (and need to) do before I get waist deep in garbage code :O
NOTE:
I'm not asking you to fix errors in my code, just for an opinion on my design. Just thought I'd put that out there for people who don't like it when someone just asks the forum to fix their code (which is very annoying, so I understand completely).
- OSwhatever
- Member
- Posts: 595
- Joined: Mon Jul 05, 2010 4:15 pm
Re: Multitasking: Should the kernel have its own running task?
You can make the kernel its own task if you think it's beneficial for you. It can help you, since it is a container for resources associated with the kernel; however, it becomes somewhat like a bastard since not all rules apply to it. Right now I don't have it, but I've thought about it, since there are several resources that could be handled by a process container and its methods.
When it comes to the idle process or thread, they are quite simple to implement in the beginning. The other option is to call the wait-for-interrupt instruction in the scheduler itself. Then you don't need a task switch to the idle thread; you camp in the thread that was last running. Avoiding this task switch can be beneficial, as you can sleep longer. This method is, however, a little bit trickier to implement. Otherwise, there's nothing wrong with an idle thread either, as it does the job as well.
Re: Multitasking: Should the kernel have its own running task?
OSwhatever wrote:
You can make the kernel its own task if you think it's beneficial for you. It can help you, since it is a container for resources associated with the kernel; however, it becomes somewhat like a bastard since not all rules apply to it.
Yes, that was my worry. I knew that there would be some restrictions placed on tasks, so I wasn't sure how the kernel task would fit in, since it is "all powerful". Ha.
OSwhatever wrote:
When it comes to the idle process or thread, they are quite simple to implement in the beginning. The other option is to call the wait-for-interrupt instruction in the scheduler itself. Then you don't need a task switch to the idle thread; you camp in the thread that was last running.
In my current setup, waiting in the scheduler wouldn't work. My scheduler's task-switch function is one of the IRQ0 handlers, which means the EOI would not be sent to the PIC while I was sleeping, and therefore no more IRQs would fire. That doesn't sound good...
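For what it's worth, halting inside the timer path can be made to work if the handler acknowledges the interrupt before sleeping. A minimal sketch, assuming the classic 8259 PIC with the timer on IRQ0 (outb here is just an illustrative port-write helper, not code from this thread):
Code: Select all
/* Write one byte to an I/O port. */
static inline void outb(unsigned short port, unsigned char val)
{
    asm volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Called from the scheduler when no task is runnable. */
void idle_here(void)
{
    outb(0x20, 0x20);            /* send EOI to the master PIC so IRQ0 keeps firing */
    asm volatile ("sti; hlt");   /* enable interrupts, then sleep until the next one */
    /* the next timer tick wakes the CPU up right here */
}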
- piranha
- Member
- Posts: 1391
- Joined: Thu Dec 21, 2006 7:42 pm
- Location: Unknown. Momentum is pretty certain, however.
- Contact:
Re: Multitasking: Should the kernel have its own running task?
Caleb1994 wrote:
Yes, that was my worry. I knew that there would be some restrictions placed on tasks, so I wasn't sure how the kernel task would fit in, since it is "all powerful". Ha.
Well, the thing with the kernel task is that, while not all rules apply to it (e.g., it runs in ring0), you have complete control over the code that is inside it. Eventually, the other processes will probably be running executable files, and so they need the protection, but the kernel task is in the kernel, and you write its code. The fact that the rules are different for the kernel task is not important (or detrimental).
-JL
SeaOS: Adding VT-x, networking, and ARM support
dbittman on IRC, @danielbittman on twitter
https://dbittman.github.io
Re: Multitasking: Should the kernel have its own running task?
piranha wrote:
Well, the thing with the kernel task is that, while not all rules apply to it (e.g., it runs in ring0), you have complete control over the code that is inside it.
True. I hadn't thought of it that way.
On a side note, I realized something. The design I posted will not work, because when a run_queue is exhausted (all tasks in that queue have run their timeslices) the queue is swapped and rechecked for ready tasks. This means that if we have two tasks in, let's say, priority 0 (the highest), A and B, and one in priority 3 (the lowest), C, then C would not execute until A and B have finished, since every time A and B use up their timeslices they are made ready again, and they have a higher priority.
So I guess I will only swap the queues when all tasks in all priorities have finished their timeslices. This can be done with a counter, since I know the number of priorities. Simple fix, but I figured I needed to post that for anyone else who might read this later. If someone has a better solution, let me know. This is just what is "rolling off my fingertips" lol
Actually, now that I think of it: each time a task finishes its timeslice, you move that task into the done queue and out of the ready queue. You then check if the ready queue is empty; if so, we don't need to check this queue in the loop that chooses the next task, and because we aren't going to check it again until all queues have been exhausted, we can swap it now. This will reduce the overhead of swapping queues, because we do it on the fly instead of all at once. Then, instead of a counter, when the current ready queue is empty and we are the last queue, we swap the queue and start back at the top queue! This all works because we are executing tasks in priority order, so when a queue is finished, we know that we will not touch it again until all other queues below it are finished, and we also know that all queues ahead of it are already finished, since they have a higher priority!
Of course, that all just popped into my head, so it sounds good on paper, but I'm not sure how well it will actually work.
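A compact sketch of that on-the-fly swap, reusing the task_t/run_queue_t types from the header posted above; swap_queue is assumed to exchange a queue's ready and done lists and reset ready_count:
Code: Select all
void swap_queue(run_queue_t* q);    /* assumed: exchange ready/done, reset ready_count */

/* Retire the current task: move it from its queue's ready list onto the done
   list, and swap the queue immediately if that empties the ready list. */
void retire_current(scheduler_t* s)
{
    task_t* task = s->current;
    run_queue_t* q = task->queue;

    q->ready = task->next;          /* unlink from the ready list */
    task->next = q->done;           /* push onto the done list */
    q->done = task;
    q->ready_count--;

    if (q->ready_count == 0)        /* this level is exhausted for the round... */
        swap_queue(q);              /* ...so refill it now instead of in a later sweep */
}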
Re: Multitasking: Should the kernel have its own running task?
Caleb1994 wrote:
On a side note, I realized something. The design I posted will not work, because when a run_queue is exhausted (all tasks in that queue have run their timeslices) the queue is swapped and rechecked for ready tasks. This means that if we have two tasks in, let's say, priority 0 (the highest), A and B, and one in priority 3 (the lowest), C, then C would not execute until A and B have finished, since every time A and B use up their timeslices they are made ready again, and they have a higher priority.
So I guess I will only swap the queues when all tasks in all priorities have finished their timeslices. This can be done with a counter, since I know the number of priorities. Simple fix, but I figured I needed to post that for anyone else who might read this later. If someone has a better solution, let me know. This is just what is "rolling off my fingertips" lol
If you do as you described and run the high-prio tasks and then not run those tasks again until you've run the low-prio tasks, you're effectively running everything at the same priority level. I think you're forgetting that tasks generally block and temporarily disappear from the ready list. So your low-priority tasks will get a chance to run when the higher-priority tasks are blocked. Simple realtime schedulers generally work this way.
More clever and complex schedulers penalise non-realtime tasks that use their entire timeslice by temporarily lowering their priority and sometimes promote (temporarily) the priority of low priority tasks if the scheduler judges that the 'system' would benefit.
I don't think you need a 'done' queue.
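To make the penalty idea concrete, here is a toy decay rule; the priority and base_priority fields are assumed additions (they are not in the task structure posted above), and a larger number means a lower priority, matching SCHED_MAX_PRIOR:
Code: Select all
struct sched_info {
    unsigned priority;        /* current effective priority, 0 = highest */
    unsigned base_priority;   /* the priority the task was created with */
};

/* Called whenever a task leaves the CPU. */
void adjust_priority(struct sched_info* t, int used_whole_slice, unsigned max_prior)
{
    if (used_whole_slice) {
        if (t->priority < max_prior)
            t->priority++;                 /* CPU hog: demote one level for now */
    } else if (t->priority > t->base_priority) {
        t->priority--;                     /* blocked early: drift back toward its base */
    }
}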
If a trainstation is where trains stop, what is a workstation ?
Re: Multitasking: Should the kernel have its own running task?
OSwhatever wrote:
You can make the kernel its own task if you think it's beneficial for you. It can help you, since it is a container for resources associated with the kernel; however, it becomes somewhat like a bastard since not all rules apply to it.
Good thought. I use a kernel process called the primary task. It has many functions, such as being a container for kernel threads, a garbage collector for some data resources, the parent process for orphan processes, the manager for processes/users, and so on.
I use a 4-level run queue. Each level corresponds to a specific priority class: idle, normal, real-time, and system real-time. The system idle thread is a non-waiting thread placed on the idle level that does something like this:
while (TRUE) { if (next==SYSTEM_IDLE) halt(); else yield(); }
The system real-time thread is a waiting thread placed on the system real-time level that can take control from all other threads when it needs to. It is a fully cooperative thread.
If you have seen bad English in my words, tell me what's wrong, please.
- gravaera
- Member
- Posts: 737
- Joined: Tue Jun 02, 2009 4:35 pm
- Location: Supporting the cause: Use \tabs to indent code. NOT \x20 spaces.
Re: Multitasking: Should the kernel have its own running task?
Yo:
Well, I'd like to propose that in the "modern" kernel, the kernel never "acts", so to speak, of its own volition. The kernel only responds to requests. This implicitly makes it so that when there is no service being requested from userspace, the system is mostly able to go into a power-saving state, since the kernel is only there to service applications.
Using that model, a kernel does not have a "main()", or a "main thread", or a thread that runs and represents "the kernel being run". For any sequence that the kernel must run which was not prompted first by userspace (for example, bootstrapping, or cache flushing in the background, or heap compacting) the kernel should spawn a thread for that action, and have it killed when it is no longer needed. Such a thread should have a low priority so that the kernel by its very nature prioritizes allowing the user to get work done.
Bootstrapping is done in its own thread, which is killed when the boot sequence is over. At that point, assuming your kernel isn't complex enough to require compacting of its heap in the background, or say, flushing of a cache of fast-allocate pages or some other background "housecleaning", your kernel should "do" nothing. After bootstrap, the userspace environment is loaded (whatever that means for your kernel), and from that point onward, the kernel switches into a "reactive" mode.
Things such as an "idle" thread to be run on CPUs with nothing in the run queue aren't the same as a "kernel task" that represents "the kernel" doing something. An idle thread isn't something that will be run naturally: it will only be run when a particular CPU can't find anything else to run, and an idle thread would, when run on a CPU, usually cause that CPU to wait a certain period after which, if there is nothing for it to do, it will place the CPU into a sleep state.
So in theory, a kernel would have a "bootup thread", a "shutdown thread", and any number of "maintenance" threads which are loosely scheduled when nothing else needs to be done. But it has no "process main thread" which is scheduled regularly along with user threads.
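As a sketch of that shape (every name here is hypothetical, purely to show the structure: boot work runs in a thread that exits, and after that the kernel is left purely reactive):
Code: Select all
/* Hypothetical kernel-thread API and init routines -- illustration only. */
void kthread_spawn(void (*entry)(void));
void kthread_exit(void);
void init_memory(void);
void init_drivers(void);
void load_userspace(void);

/* Runs once at boot, then disappears. */
static void bootstrap_thread(void)
{
    init_memory();
    init_drivers();
    load_userspace();
    kthread_exit();      /* no kernel "main loop" remains after this */
}

void kernel_entry(void)
{
    kthread_spawn(bootstrap_thread);
    /* ...enter the scheduler; from here on the kernel only reacts to
       interrupts and system calls. */
}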
--All the best,
gravaera
17:56 < sortie> Paging is called paging because you need to draw it on pages in your notebook to succeed at it.
Re: Multitasking: Should the kernel have its own running task?
By having a good sleep mechanism, you don't need to regularly create/destroy maintenance threads.