Synchronization by IPC
Hi, after a longer time.
I now have a problem with synchronization of processes in my OS.
Currently, when a process sends a message via IPC, it can choose whether it wants to just send the message, or to send it and be frozen until the receiver of that message reads it.
So this is only basic sync, because when the message is read, the process that sent it is simply woken onto the list of processes awaiting execution - but it lands at the end of that list, so several other processes can execute after the "receiver" before the "sender" gets a chance to run. This is more of a CPU-saving trick than process sync.
Then I started to rewrite the receive function so that the woken "sender" is put directly after the "receiver" (which is running) in the list of processes to run. That would guarantee that the "sender" executes right after the "receiver" that read its message - but it also carries a danger.
If process A sends a message to B and goes to sleep, and then B, after reading the mail from A and doing some work, does the same, they will run one after the other all the time (A->B->A->B etc.) and the other waiting processes will starve :/.
Do you folks have a better idea for syncing processes?
- carbonBased
I generally consider synchronization and IPC two different things.
In my OS, IPC is carried out via message queues. When process A sends a message, it continues to execute until its timeslice is finished, or it voluntarily gives up control. Why put a process on the waiting list when it doesn't need to be? Sending a message is not a blocking function.
Receiving a message, however, can be a blocking function. If process B is waiting for a message in a queue, it will be put on the waiting list until a message is available for it. If a message is immediately available, it never enters the waiting list.
Synchronization would be carried out through mutexes and semaphores and is completely independent of IPC. The only connection is that mutex and semaphore handles may often be passed as part of the contents of a message sent to another thread. For example, if I created two threads that were both going to control a piece of hardware, I might create a mutex in one thread, to synchronize access to this hardware, and send a handle/pointer to this mutex to the second thread. In this way, both threads could access the hardware in a synchronized manner.
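Something like this sketch illustrates the idea (all the calls and names here are hypothetical placeholders, not a real API):
Code:

typedef int handle_t;

/* Hypothetical kernel API, for illustration only. */
handle_t mutex_create(void);
void     mutex_lock(handle_t m);
void     mutex_unlock(handle_t m);
void     send_message(int dest, const void *buf, int len);
void     receive_message(void *buf, int len);
void     program_hardware(void);

#define THREAD_2 2   /* made-up thread id */

void thread_1(void)
{
    handle_t hw_mutex = mutex_create();                  /* create mutex  */
    send_message(THREAD_2, &hw_mutex, sizeof hw_mutex);  /* mail handle   */

    mutex_lock(hw_mutex);                 /* only one thread touches the  */
    program_hardware();                   /* hardware at a time           */
    mutex_unlock(hw_mutex);
}

void thread_2(void)
{
    handle_t hw_mutex;
    receive_message(&hw_mutex, sizeof hw_mutex);         /* get the handle */

    mutex_lock(hw_mutex);
    program_hardware();
    mutex_unlock(hw_mutex);
}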
--Jeff
You misunderstand me. A process can send a message and afterwards keep running normally, or, if IT wants, it can be frozen after sending (there are two different syscalls for that):
SEND - send a message and continue working
SEND_WAIT - send a message and wait until it is read
I have the same for receiving messages; a process can call:
RECEIVE - receive a message if there is one, or return an error number
RECEIVE_WAIT - receive a message if there is one, or freeze the process until a message arrives.
So I can synchronize processes with IPC messages, but it won't be real-time synchronization ;/. I think synchronization by mutexes isn't RT sync either.
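Roughly, the kernel side of these calls looks like this (a simplified sketch - the types and helpers such as enqueue_message(), wake() and schedule() are placeholder names, not my real code):
Code:

enum { READY, BLOCKED_SENDING, BLOCKED_RECEIVING };
enum { OK = 0, ERR_NO_MESSAGE = -1 };

typedef struct tcb { int state; /* + queues, registers, ... */ } tcb_t;
typedef struct msg { tcb_t *sender; /* + payload */ } msg_t;

extern tcb_t *current;               /* the running task */

int sys_send(tcb_t *dst, msg_t *m)               /* SEND */
{
    enqueue_message(dst, m);
    if (dst->state == BLOCKED_RECEIVING)
        wake(dst);                               /* complete a RECEIVE_WAIT */
    return OK;                                   /* never blocks            */
}

int sys_send_wait(tcb_t *dst, msg_t *m)          /* SEND_WAIT */
{
    sys_send(dst, m);
    current->state = BLOCKED_SENDING;            /* freeze until it is read */
    schedule();                                  /* woken by sys_receive()  */
    return OK;
}

int sys_receive(msg_t *out, int wait)            /* RECEIVE / RECEIVE_WAIT */
{
    while (!dequeue_message(current, out)) {
        if (!wait)
            return ERR_NO_MESSAGE;               /* RECEIVE: error if empty */
        current->state = BLOCKED_RECEIVING;      /* RECEIVE_WAIT: sleep     */
        schedule();
    }
    if (out->sender->state == BLOCKED_SENDING)
        wake(out->sender);                       /* unfreeze a SEND_WAIT    */
    return OK;
}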
- carbonBased
Hmm, okay, fair enough. You could definitely use these to sync threads. I would still consider this IPC vs. synchronization, but I think that's simply my own semantics, as they obviously do synchronize as well.
As for mutexes and messages being real-time... I don't see why they can't be (!?). It's all in your multitasking scheme, really. How you pre-empt tasks and bring them back into the ready list, based on their states (which are affected by your IPC and other synchronization methods), is more relevant, I think.
--Jeff
Hmm... in my scheduling algorithm a process is switched 100 times per second (100 Hz now, maybe 1000 in the future). I'm using an RR1 algorithm, with exceptions when a process goes to sleep via IPC or a delay. So if the sender is resumed at the end of the list, it can be executed right after the currently executing process (if that was the old tail of the list) or, in the worst case, after all the processes ahead of it on the list have executed. So it is far from RT sync, I think ;/.
How could I change that so the delays between executing two synced processes would be smaller? I now have the idea that resuming the "sender" process in second place after the "receiver" on the list might be optimal -> between reading a message and resuming the sender there would be only one task, and there wouldn't be a situation where two synced processes block the rest.
I think I need to read more about real-time bounds etc. For the moment I will stay with resuming the woken sender at the end of the list.
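For reference, the two wake-up policies differ only in where the woken task is linked in (a sketch on a simple singly linked run list; the names are placeholders):
Code:

typedef struct tcb { struct tcb *next; /* ... */ } tcb_t;

extern tcb_t *current, *run_tail;    /* running task, tail of the run list */

void wake_to_tail(tcb_t *t)          /* current policy: fair, high latency */
{
    t->next = NULL;
    run_tail->next = t;
    run_tail = t;
}

void wake_behind_current(tcb_t *t)   /* low latency, can starve the rest */
{
    t->next = current->next;
    current->next = t;
}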
- carbonBased
To be considered hard real-time, you must be able to guarantee a certain maximum latency, among other things. I don't recall, offhand, what this is.
Most of this (I believe) will be obtained by changing your scheduling algorithm. Technically, a (basic!) round-robin scheduler can switch tasks faster than any other scheduler, as all it does is advance to the ->next pointer. However, to be hard real-time, I believe you also have to support a minimum set of priorities (does anybody know the requirements for calling your OS hard real-time?), which will slow an RR scheduler down quite a bit.
You'll want to develop an O(1) scheduler that's fast.
You'll also want a way to force a task switch. A lot of people just interrupt a few thousand times a second and switch tasks. This is a nice and easy way to multitask, but if a hardware interrupt occurs in the middle of a task's quantum, you probably shouldn't wait till the next scheduling opportunity to service it. You'll need to switch to the handling task immediately.
I would apply the same ideas to synchronization. If a task is waiting on a message, and a message becomes available for it, don't wait for the next scheduling opportunity to switch to that task; switch to it immediately (assuming it has a higher priority than what's currently running).
In a real-time scheduler, your concept of "position" in the list isn't entirely applicable, as priority has to be taken into account. The next task in line is always the task with the highest priority that's in the ready state.
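In code, the idea is roughly this (just a sketch; it assumes a larger number means a higher priority, and all the names are placeholders):
Code:

typedef struct tcb { int priority; /* ... */ } tcb_t;
typedef struct msg msg_t;

extern tcb_t *current;   /* enqueue_message(), make_ready() and schedule()
                            are assumed kernel helpers                     */

/* Deliver a message and preempt at once if the woken task outranks the
 * running one, instead of waiting for the next timer tick. */
void deliver(tcb_t *waiter, msg_t *m)
{
    enqueue_message(waiter, m);
    make_ready(waiter);                        /* waiting -> ready list */
    if (waiter->priority > current->priority)
        schedule();                            /* switch immediately    */
}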
--Jeff
mrkaktus wrote: How could I change that so the delays between executing two synced processes would be smaller?
No matter which method you use: reducing the delay between two IPC tasks will always be unfair to all other apps. By decreasing the overhead you steal time from tasks that would normally have run if scheduling had continued as planned. With equality and fairness being among the most basic requirements of any scheduler, you should really think twice before implementing such a hack/feature.
A much better idea for a desktop system would be to include support for priority levels. As important tasks could then run at a high priority level, communication between them would be more or less instantaneous. For the lower-priority tasks IPC might be a bit slower, but that shouldn't be a problem, as they aren't interactive anyway.
carbonBased wrote: To be considered hard real-time, you must be able to guarantee a certain maximum latency, among other things. I don't recall, offhand, what this is.
In any (periodic) real-time system, tasks must be able to finish a fixed amount of work within a certain time limit. An example would be a DVD player that has to decode a frame every 40 ms to ensure smooth playback. Hard real-time systems fail fatally if a deadline is missed (f.ex. the robot falls down the stairs and explodes), while soft real-time systems can continue, although the quality of their service may suffer (f.ex. jitter or dropped frames).
Apart from that, all services should execute in a predictable amount of time. A kernel that returns after 15 µs most of the time, but sometimes takes as long as 80 ms, makes it impossible for the tasks not to miss their deadlines sooner or later.
carbonBased wrote: Technically, a (basic!) round-robin scheduler can switch tasks faster than any other scheduler, as all it does is advance to the ->next pointer. However, to be hard real-time, I believe you also have to support a minimum set of priorities.
Round-robin was never built to ensure that a task will always run within its deadline. If the scheduler is not aware of these time constraints, it will be especially hard to add/remove tasks. With priorities the problem only gets worse, as they make the whole scheduler quite unpredictable and arbitrary.
If you're really serious about building a real-time system, you should use a specialized algorithm like RMS or EDF. You should find some basic information about this topic on Wikipedia, and I could also explain some details if you still have questions (mind that I'm by no means a real-time expert, though).
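Just to give a taste of how simple the core idea of EDF is (a naive sketch only, not a complete scheduler - the types and field names are made up):
Code:

typedef struct tcb { struct tcb *next; unsigned long deadline; } tcb_t;

/* Core of EDF: always run the ready task whose absolute deadline comes
 * first. The linear scan is the naive version; a real implementation
 * would keep the ready list sorted or use a heap. */
tcb_t *edf_pick(tcb_t *ready_list)
{
    tcb_t *best = ready_list;
    for (tcb_t *t = ready_list->next; t != NULL; t = t->next)
        if (t->deadline < best->deadline)
            best = t;
    return best;
}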
If, however, real-time is just another buzzword that you think might look nice on your project's feature list, you should in my opinion refrain from trying to implement it. It's a huge topic of its own, and there are no almost-real-time systems: either your whole kernel is designed for it, or real-time tasks won't be able to run.
carbonBased wrote: You'll want to develop an O(1) scheduler that's fast.
Real-time systems don't have to be extremely fast; that's just a common misconception. In fact they can be pretty slow, as long as their performance is always that bad and thus predictable. Of course the system does have to be fast enough to be able to meet the deadlines, but that should normally not be the main problem.
regards,
gaf
- carbonBased
gaf wrote: No matter which method you use: reducing the delay between two IPC tasks will always be unfair to all other apps. By decreasing the overhead you steal time from tasks that would normally have run if scheduling had continued as planned. With equality and fairness being among the most basic requirements of any scheduler, you should really think twice before implementing such a hack/feature.
I entirely agree... if this is the intention, I don't believe it's worthwhile. From a priority-based standpoint, however, if a thread with high priority is waiting on a queue, and that queue receives a message, it should be woken and executed as quickly as possible.
gaf wrote: A much better idea for a desktop system would be to include support for priority levels. As important tasks could then run at a high priority level, communication between them would be more or less instantaneous. For the lower-priority tasks IPC might be a bit slower, but that shouldn't be a problem, as they aren't interactive anyway.
Indeed. I believe I mistook the original poster, as I thought the intent was simply to decrease latency in the scheduler in a general/scalable way. I also originally assumed priorities were already implemented, which of course is not the case.
I second the idea that a priority-based system should be implemented. I, personally, think all schedulers should support a form of priority-based scheduling. Even something basic like 8 levels of priority.
gaf wrote: If, however, real-time is just another buzzword that you think might look nice on your project's feature list, you should in my opinion refrain from trying to implement it. It's a huge topic of its own, and there are no almost-real-time systems: either your whole kernel is designed for it, or real-time tasks won't be able to run.
I think this is the case a lot of the time... real-time, in my opinion, generally implies embedded. Given that most OSes on this list are being written for x86 processors, this seems counter-intuitive. Not that an x86 can't be an embedded processor... but they're certainly not the most popular (at least not that I've seen...).
That being said, though, if you think you can handle it, I think writing a real-time OS could be quite enjoyable (and incredibly painful, at times). But gaf's right... do it because you want to, not because the word's tossed around a lot lately.
gaf wrote: Real-time systems don't have to be extremely fast; that's just a common misconception. In fact they can be pretty slow, as long as their performance is always that bad and thus predictable. Of course the system does have to be fast enough to be able to meet the deadlines, but that should normally not be the main problem.
The O(1) is probably the more relevant portion of my quote above ("You'll want to develop an O(1) scheduler that's fast"). Meaning that you'll want a scheduling system that scales well with multiple threads. It would be advantageous if the scheduler took N cycles all the time, regardless of the number of threads currently running. If the scheduler takes more and more time as threads are added to the system, then it becomes more difficult to obtain accurate timing at the application level (which might be controlling a very time-sensitive device).
Although, as mentioned, if the degradation is predictable, this might be considered a lesser issue. I, personally, believe an O(1) scheduler is a good idea to have in any OS... that doesn't necessarily mean the scheduler algorithm is incredibly fast, but that it's not dependent on the number of threads it's scheduling.
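A common way to get that property is one ready queue per priority level, plus a bitmap recording which queues are non-empty; picking the next task is then a find-first-set operation, independent of the thread count. A rough sketch (32 levels assumed, all names made up):
Code:

#include <stdint.h>

#define LEVELS 32

struct tcb;                                /* opaque here                    */
extern struct tcb *ready_queue[LEVELS];    /* heads of the per-level queues  */
extern uint32_t    ready_bitmap;           /* bit n set => queue n non-empty */
extern struct tcb *idle_task;

/* O(1) pick: bit 31 is taken as the highest priority level. */
struct tcb *pick_next(void)
{
    if (ready_bitmap == 0)
        return idle_task;                          /* nothing is ready   */
    int level = 31 - __builtin_clz(ready_bitmap);  /* highest set bit    */
    return ready_queue[level];                     /* head of that queue */
}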
--Jeff
OK, I'm back after a break (I have an exam to pass).
I was thinking about the real-time thing because I want to improve my kernel at every step, in every way, not because everybody is talking about it.
And I have to admit that, up to this moment, I haven't had priorities in my task scheduling algorithm. I was thinking about it, but I don't know how to decide which task should have its level upgraded or degraded (what decides it).
Since my last post I have rewritten all the IPC twice to gain time (decrease the cycles needed to send/receive a message). For now I have removed the function that freezes the sender until the message is delivered, because at the moment there is no way to avoid the process-blocking situation.
So I think I will follow your advice and first implement priorities, and then I will try to bring my OS closer to soft real-time.
carbonBased wrote: From a priority-based standpoint, however, if a thread with high priority is waiting on a queue, and that queue receives a message, it should be woken and executed as quickly as possible.
Yep, if a message unblocks a thread, the scheduler should update its tables and switch to the highest-priority task available. Whether that really means that the kernel jumps to the just-reactivated task depends on the priority this task was assigned by its creator. It's thus a transparent decision that allows the task's parent to indirectly decide what to do in such situations.
A kernel that tries to reduce the time between two communicating tasks implicitly enforces a policy that is not even visible to the user: it always prefers tasks involved in IPC, although other tasks at the same priority level may also be waiting to run. This behaviour might be reasonable in most cases, but I'm sure that there are always situations where a different policy would provide a much more efficient solution (f.ex. there's no hurry in delivering the message if it's only an "ACK: please stand by.."). If the policy is part of the scheduler, you however can't alter it at run-time, even if you know that it's not optimal for your setup.
An abstraction should thus always expose decisions to its users. The ideal is a well-defined and yet simple mechanism that bases its decisions on the parameters provided (-> separation of mechanism and policy).
carbonBased wrote: I second the idea that a priority-based system should be implemented. I, personally, think all schedulers should support a form of priority-based scheduling. Even something basic like 8 levels of priority.
At least in an interactive system they're more or less indispensable to reduce system latency. Priorities can however be implemented in very different ways..
carbonBased wrote: Real-time, in my opinion, generally implies embedded.
For hard real-time systems this is probably true; soft real-time could however also make sense on a normal desktop system. As multimedia applications (audio, video, games) process their data at a constant rate, they might also benefit from a time-aware scheduler. This of course doesn't mean that a traditional round-robin system couldn't do the job: I just believe that the results should always be better with a more specialized algorithm..
carbonBased wrote: The O(1) is probably the more relevant portion of my quote above. Meaning that you'll want a scheduling system that scales well with multiple threads. It would be advantageous if the scheduler took N cycles all the time, regardless of the number of threads currently running.
If it's only about the switch itself, even my (still imaginary) scheduler should execute in a constant amount of time. It just iterates to the next node on the "active" list to get the task's TCB, and thus its registers as well as the context pointer. The main work has to be done when a task blocks/unblocks or a process either gets started or killed: the scheduler then edits the active list by incorporating all kinds of data that must mostly be obtained from lists or trees. Making this portion execute in O(1) is much less trivial, and I actually have some doubts whether it's really worth all the trouble, given that the number of threads is normally not that high. Even on my bloated KDE installation I still only have 67 right now - real-time systems probably tend to have even fewer.
carbonBased wrote: If the scheduler takes more and more time as threads are added to the system, then it becomes more difficult to obtain accurate timing at the application level (which might be controlling a very time-sensitive device).
Well, I guess that these devices would provide something similar to double-buffering to avoid such problems. Otherwise even something as unpredictable as TLB faults or random IRQs could delay the timing.
mrkaktus wrote: And I have to admit that, up to this moment, I haven't had priorities in my task scheduling algorithm. I was thinking about it, but I don't know how to decide which task should have its level upgraded or degraded (what decides it).
The higher the priority of a task is, the smaller its latency will be. A very simple solution would be to give a priority of 0 (highest) to all drivers and lower priorities to all regular applications. An interactive task could for example run at level 1, while a batch application that spends most of its time calculating could be at level 2. With such a setup, interactive tasks (like the shell) may interrupt all batch tasks, and interrupts always get served immediately. It's of course possible to extend the design to something more sophisticated..
One thing you should keep an eye on is that your low-priority tasks don't starve eventually. If there are too many high-priority tasks, it may happen that the low-priority ones hardly ever get the chance to run. It might thus be necessary to rebalance the system every now and then: tasks that have been waiting for too long might then be assigned a higher priority level, and/or the priority of frequently running tasks may get reduced over time.
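A simple form of such rebalancing is aging; roughly like this (the threshold and field names are invented for the example):
Code:

/* Aging sketch: on every rebalance pass, a task that has waited too
 * long moves one level up (0 being the highest priority, as above). */

#define AGING_THRESHOLD 100        /* ticks a task may wait before a boost */

struct tcb { struct tcb *next; int priority; int wait_ticks; };

void rebalance(struct tcb *ready_list)
{
    for (struct tcb *t = ready_list; t != NULL; t = t->next) {
        t->wait_ticks++;
        if (t->wait_ticks > AGING_THRESHOLD && t->priority > 0) {
            t->priority--;         /* one step towards level 0 */
            t->wait_ticks = 0;
        }
    }
}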
regards,
gaf
gaf wrote: A very simple solution would be to give a priority of 0 (highest) to all drivers and lower priorities to all regular applications. An interactive task could for example run at level 1, while a batch application that spends most of its time calculating could be at level 2.
It could be interesting, but the design of my OS is specific. I have a microkernel that provides support for execution objects (I call them that). An EO can be anything it wants to be: it can provide some functionality to other EOs, and then you can call it a driver or a server, but it can also just work on its own, and then you can call it simply a process. There is no division into drivers and apps, because everything works in the same virtual machine at DPL3 (so the main power of ACDOS is IPC). For that reason I think the only way to raise/lower the priority of some EO is to study its needs - how often it goes to sleep, how much it uses system resources, etc.
- carbonBased
mrkaktus wrote: ...For that reason I think the only way to raise/lower the priority of some EO is to study its needs - how often it goes to sleep, how much it uses system resources, etc.
I don't know if that's entirely true... certain EOs acting as a driver/server should probably have a higher priority, despite their sleep time or activity.
For example, if you value a very snappy interface -- one that reacts quickly to user events -- then your keyboard, mouse and other input device drivers should have a high priority. The fact that they're sleeping a lot of the time is irrelevant, as they simply will not gain focus while sleeping -- they would be on the blocked/waiting list.
Other drivers could have a lot of activity -- a disk driver, perhaps. It could, however, be advantageous for your disk driver (at least when writing) to have *low* priority, and to buffer all requests. In this case, the disk only gets written to when the system is near idle (ie, no high-priority tasks are blocking on disk writes).
In this case, I think each of your EOs should have a priority associated with them. Playing around with various levels for your servers will change the characteristics of your OS, and how the user interacts with it.
--Jeff
carbonBased wrote: It could, however, be advantageous for your disk driver (at least when writing) to have *low* priority, and to buffer all requests. In this case, the disk only gets written to when the system is near idle (ie, no high-priority tasks are blocking on disk writes).
Keeping the disk spinning is probably much more important than saving a couple of CPU cycles. As HDs are a million times slower than the processor, it already takes long enough to get the data, so you don't want to waste any time scheduling the request. By giving the disk driver a high priority you make sure that the I/O operations are distributed more equally, allowing the maximum bandwidth to be used.
mrkaktus wrote: I agree, so for now I need to develop a mechanism for privileging EOs in the kernel that will be based on tracking their activity (interrupts / maybe % of sleep / giving back control / IPC activity).
Tracking is not the way to go, as it's not only error-prone but also time-consuming. What your kernel really needs is some kind of capability system that allows privileged tasks to perform restricted actions that are not available to all users. For an example, just think of a system that assigns a second priority level to each task ("max_priority"). This level is not used by the task itself, but rather defines the maximum priority for its child tasks:
Code:

/* Kernel-side check: a child may neither run at a higher priority than
 * its creator, nor be handed a wider "max_priority" capability.
 * Convention assumed by these comparisons: a larger value means a
 * higher priority. */
bool CreateTask(image_t task_image, int priority, int max_priority)
{
    if (priority > current_tcb->priority)
        return false;                  /* child would outrank its creator */
    if (max_priority > current_tcb->max_priority)
        return false;                  /* capability may not be widened   */

    tcb_t *new_tcb = AllocTCB(task_image);  /* internal allocator, not
                                               this syscall itself        */
    new_tcb->priority = priority;
    new_tcb->max_priority = max_priority;
    return true;
}
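For example (hypothetical values): a parent running with priority = 5 and max_priority = 5 could successfully call CreateTask(child_image, 3, 3), while CreateTask(child_image, 7, 7) would be rejected, as the child would outrank its creator.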
regards,
gaf