In my quest for simple and clear semantics for my microkernel, multi-server OS, I've come to a new problem. It has to do with the concept of signals, which I've decided to separate from (CPU) exceptions.
Now, my logic goes like this: Signals should be delivered to mailboxes, whether those belong to threads or to objects. It doesn't really matter which, since one can redirect signals from an object to a thread that has indicated willingness to handle them. It's easy enough to poll the mailbox once in a while, which can be implemented safely in pretty much any language. But there's a problem: I was going to use synchronous IPC (RPC) calls.
So, if I have a thread waiting for an IPC call to finish, I have two options. Either I deliver the signal, interrupting the IPC operation, which might lead to consistency problems with the other end of the connection and various undefined states, or I just wait for completion, which might take a long time and runs into the all too well-known trouble of thread cancellation.
So it seems the only reasonable solution is to do asynchronous IPC instead, and have finished IPC calls drop a signal (or event) into a mailbox. That way, if the thread gets some other signal, it can either handle it, or notify the other end of the IPC that it no longer cares about the call, if the service supports that. There's no need for the kernel to know when to interrupt and when not to, since interrupting the calling thread doesn't disturb the IPC process.
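To make that concrete, here is a minimal C sketch of such a mailbox (every name and structure here is invented, purely for illustration): IPC completions and signals land in the same ring buffer, and the thread drains it whenever it likes, so nothing ever has to interrupt the IPC exchange itself.

```c
#include <assert.h>
#include <stddef.h>

enum ev_type { EV_NONE, EV_IPC_DONE, EV_SIGNAL };

struct event { enum ev_type type; int payload; };

#define MBOX_CAP 16
struct mailbox {
    struct event q[MBOX_CAP];
    size_t head, tail;              /* simple ring buffer */
};

/* The kernel (or another thread) drops a completion or a signal here. */
static int mbox_post(struct mailbox *mb, struct event ev)
{
    size_t next = (mb->tail + 1) % MBOX_CAP;
    if (next == mb->head)
        return -1;                  /* full: sender must retry or drop */
    mb->q[mb->tail] = ev;
    mb->tail = next;
    return 0;
}

/* Non-blocking poll: EV_NONE means the mailbox is empty. */
static struct event mbox_poll(struct mailbox *mb)
{
    struct event ev = { EV_NONE, 0 };
    if (mb->head == mb->tail)
        return ev;
    ev = mb->q[mb->head];
    mb->head = (mb->head + 1) % MBOX_CAP;
    return ev;
}
```

The thread that issued an IPC call just keeps draining the mailbox; if a signal turns up before the completion, it can be handled (or the call cancelled) without the kernel ever touching the in-flight IPC.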
This means more or less going to a fully event-based model, which makes sequential programming languages like C a pain to use... but might it be worth it?
Now, I've seen people claim that Windows uses asynchronous I/O because it suits a multi-threaded environment better than synchronous I/O does. I'm finally starting to see the reasons behind it.
Makes me wonder if the whole idea of doing multi-threading POSIX-style is flawed. It seems like UNIX is good for combining small programs by scripting and piping, and that's about it; the monolithic kernel, half a million special cases, and good luck seem to be the only things that make it work at all.
Or is it just a problem with my head...
Threads and synchronous IPC: non-sense?
Re:Threads and synchronous IPC: non-sense?
Ok, I take back a little, since normal UNIX I/O is actually buffered, not purely synchronous... and you can use it in non-blocking mode. I don't know if this actually makes things simpler, but I admit it makes the system work, at least for piping things.
What a horrible rant that post of mine was. Maybe I'm just trying to make things too perfect. Maybe I should either allow for some "undefined" behaviour, or drop some flexibility. I don't really know.
From the beginning, I was going to implement synchronous IPC calls. Now it seems that if I want to provide processes with predictable, clearly defined semantics that make it possible to write fail-safe code, I have to use asynchronous IPC instead, which makes programming harder in general, at least without heavy use of some kind of signals/slots mechanism.
???
Re:Threads and synchronous IPC: non-sense?
What a wonderful thing a rant is...
Don't worry about piping. It is merely stuffing some data into a buffer and notifying the counterpart as soon as the buffer is full or EOF is reached, so that it fetches the buffer's contents, and then notifying the other one to write stuff into the now-empty buffer as long as EOF hasn't been reached yet. So they ARE synchronizing each other.
Mark: a process can only read from a full buffer, and who else but the process writing to the pipe is to notify it about that?
Signals are, in my opinion, something for asynchronous communication of IMPORTANT events to a process (SIGHUP, SIGKILL, etc.). The signals trigger the execution of signal handlers belonging to the process.
Ah... and see what? Operating system programming IS event-driven programming, never forget that. Interrupts, exceptions and system call traps are the events that keep the kernel and the processes running and keep the CPU turning; this is most valid for microkernel design.
Stay safe and... feel free to just rant away. *ggg* In Vienna we sometimes call this "raunzen" or "granteln".
Re:Threads and synchronous IPC: non-sense?
Ah, my point about event-driven programming was that unless one uses a special user-level library that fakes something else, it's all going to be event-driven at user level too.
For example, you won't have a read call that blocks. The best abstraction you are going to get is a loop after that read() call that processes events until it receives a "read completed" event and then moves on to other things.
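Such a loop might look roughly like this in C (a sketch only: next_event() and handle_other() are invented placeholders, and a canned event array stands in for the real event source):

```c
#include <assert.h>
#include <stddef.h>

enum ev { EV_TIMER, EV_NET, EV_READ_DONE };

/* A canned event stream stands in for the real kernel mailbox. */
static const enum ev stream[] = { EV_TIMER, EV_NET, EV_READ_DONE };
static size_t cursor;
static int others_handled;

static enum ev next_event(void) { return stream[cursor++]; }

static void handle_other(enum ev e)
{
    (void)e;                /* a real program would dispatch it properly */
    others_handled++;
}

/* "Blocking" read on top of an event loop: dispatch everything that is
   not ours until the read-completed event arrives, then return. */
static int read_sync(void)
{
    for (;;) {
        enum ev e = next_event();
        if (e == EV_READ_DONE)
            return 0;       /* our data is ready: resume the caller */
        handle_other(e);    /* somebody else's event: process and go on */
    }
}
```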
Now, the main reason I ever started OS development was that I became frustrated trying to write applications with the UNIX API. Maybe it's my shortcoming, but I just feel there are far too many things you simply can't handle reliably, especially if you'd like to do multi-threaded programming. You can't live with synchronous I/O, since then there's no way to shut down threads reliably. You can't even (in any sane way) have two threads, one reading a socket and the other writing it, without adding another level of abstraction just to provide asynchronous I/O. And that still doesn't solve the thread-cancellation problem; I mean, you can't just rely on POSIX cancellation points, since you can't do any clean shutdown then. So you have to add more levels of indirection. Sure you can, but is it worth it? And the whole thing breaks the very moment someone sends one signal into the mess.
I'd want an operating system that has a clearly specified set of primitives on top of which it is possible to build reliable programs. Programs that can (at least in principle) be proven to work, no matter what unfortunate events happen.
Ok, now that I've been thinking about asynchronous IPC, it seems I could actually build a nice actor model on top of it. That means a lot of threads, and a lot of overhead, but far less than would be required to implement the same thing on something like UNIX. Windows might be a better platform, but there you can't really rely on any single library function actually working, so you can prove nothing about your programs.
Now, don't get me wrong. I don't care if my system ever becomes useful, but if it does, it will be rock solid, fundamentally thought out to endure anything it might ever face. Well, at least short of hardware failures. It's hard to do anything on x86 if the CPU happens to burn out, except, perhaps, provide a transactional filesystem, so that at least a consistent system state can be restored later.
This is why I'm putting so much attention on little details like what happens if a thread gets a signal while it's in the middle of an IPC call.
There's no way to do it safely, so unfortunately, it won't be done. To provide the same functionality in a safe way, an asynchronous system has to be used instead. It won't be perfect, but it's better than nothing, and at least the semantics can be specified in a perfectly clear way.
-
- Member
- Posts: 1600
- Joined: Wed Oct 18, 2006 11:59 am
- Location: Vienna/Austria
- Contact:
Re:Threads and synchronous IPC: non-sense?
Hm... Don't you think it makes SENSE to block a *thread* that wants something from the disk until the data is available?
In a multithreaded environment, I think it is easy to spawn off a thread of execution which takes care of the issued read/write call while the rest of the program does whatever is due (more often than not, nothing). The thread responsible for the read call then goes and fetches the data. That's it.
The point is: someone *has* to wait for a notification at a certain point. There is no need to have it loop around. Just block it with a "receive" and, upon wake-up, have it check its mailbox again.
And since the read call *has* to deliver a result, the re-checking of the mailbox for the "ready/error" notification from the fs-server has to be done in the library function for "read". It is just "receive".
I have to stress it again: interrupts, system calls and exceptions are events. Period. Why bother with "abstraction" of something "natural" on a computer? When it comes to GUI programming, you have to deal with events again. I'd recommend working WITH them, not around them just to avoid some *blocking*. Believe me, sometimes it's worth the time to wait for something. *gg*
Your point about POSIX signals and threads: a UNIX kernel knows only about processes; most of the time it's the process structure it deals with. I don't know exactly, but doesn't each thread's struct have the same PID as the process it belongs to? One doesn't see the threads in ps or top, so it is no wonder. When you send SIGKILL to a process, all its threads receive that signal too, as it is PID-related? I don't know the right words.
Given this, I understand your dilemma. But tell me, would it make sense to send SIGKILL to a process and leave all its threads alive? You have to build a set of functions and structures to manipulate such stuff on a per-thread basis to solve this problem.
Regarding socket read/write: no, no, no, I do not agree with this rant of yours regarding two threads reading/writing one socket. It HAS to be synchronized, because if not, before long there'd be a whole mess of crap instead of data.
Regarding a signal to a thread in the middle of an IPC call: put it into a "signal pending" field and have the signal handlers deal with it later. First the IPC call *has* to be carried out. You as the developer have to lay out a policy concerning these issues.
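A tiny C sketch of that "signal pending" idea (all names are invented, and a simple counter stands in for actually running handlers): a signal arriving mid-IPC is only recorded, and the queued signals are flushed once the call completes.

```c
#include <assert.h>

struct thread {
    int in_ipc;          /* nonzero while an IPC call is in flight */
    unsigned pending;    /* bitmask of deferred signals */
    int handled;         /* counts handler invocations (stand-in) */
};

/* A signal that arrives mid-IPC is only recorded, never acted on. */
static void deliver_signal(struct thread *t, unsigned sigbit)
{
    if (t->in_ipc) {
        t->pending |= sigbit;       /* defer until the call completes */
        return;
    }
    t->handled++;                   /* idle: run the handler right away */
}

/* When the IPC call finishes, flush everything that queued up meanwhile. */
static void ipc_finish(struct thread *t)
{
    t->in_ipc = 0;
    while (t->pending) {
        t->pending &= t->pending - 1;   /* clear lowest set bit */
        t->handled++;
    }
}
```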
stay safe
PS: I started OS dev out of pure curiosity. I've learned some basic theory about it in my studies.
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Re:Threads and synchronous IPC: non-sense?
beyond infinity wrote: Hm... Don't you think it makes SENSE to block a *thread* that wants something from the disk until the data is available?
Of course. My point was that with "synchronous" IPC you can usually expect to get the result from the disk as the first thing, or not at all.
In a multithreaded environment, I think it is easy to spawn off a thread of execution...
Yeah, sure. That's fine as long as you have one central authority. But if you don't know who's on the other end, you can't really expect it to return at all, which means you can't guarantee termination, unless you have some way to cancel the thread reliably.
And if you have a nice distributed object model, you might want to notify the service handling your request that it should quit as well, which means that you need some way to notify a thread that it should quit, without stopping what it's doing. You want to interrupt it, yes, but you want to make it possible to resume the original operation. Hence, asynchronous IPC.
The point is: someone *has* to wait for a notification at a certain point...
That's what mailboxes are for. But who says you can't have all kinds of messages in your mailbox, possibly out of order, from various sources? It's easy to do at kernel level, but it makes application development in C a pain.
Of course there must be a semaphore so nobody's going to poll anything in a busy loop, but you still need some kind of event-dispatch loop, since you could get other messages first.
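Such a receive, blocking instead of busy-polling and setting aside out-of-order messages for the dispatch loop, might look roughly like this (a sketch only, using POSIX condition variables as the blocking primitive; every structure and name here is invented):

```c
#include <assert.h>
#include <pthread.h>

#define CAP 16
struct mailbox {
    pthread_mutex_t lock;
    pthread_cond_t nonempty;     /* the "semaphore" the waiter sleeps on */
    int msgs[CAP];
    int n;
    int deferred[CAP];           /* out-of-order messages, kept for later */
    int nd;
};

static void mbox_send(struct mailbox *mb, int msg)
{
    pthread_mutex_lock(&mb->lock);
    mb->msgs[mb->n++] = msg;
    pthread_cond_signal(&mb->nonempty);
    pthread_mutex_unlock(&mb->lock);
}

/* Sleep until the wanted message arrives; anything else is set aside
   for the dispatch loop to handle once we are done. */
static int mbox_wait_for(struct mailbox *mb, int want)
{
    pthread_mutex_lock(&mb->lock);
    for (;;) {
        while (mb->n == 0)
            pthread_cond_wait(&mb->nonempty, &mb->lock);
        int msg = mb->msgs[--mb->n];
        if (msg == want) {
            pthread_mutex_unlock(&mb->lock);
            return msg;
        }
        mb->deferred[mb->nd++] = msg;
    }
}
```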
And since the read call *has* to deliver a result...
Sure, it's possible to have the event-dispatch loop in a library function that understands what the read() is supposed to do, and then lets the calling thread think it's doing normal synchronous I/O.
That works as long as the library can specifically support the type of object you are accessing, and provides wrappers for all possible methods with varying semantics, which makes it not one bit simpler than traditional UNIX.
I have to stress it again: interrupts, system calls and exceptions are events. Period. Why bother with "abstraction" of something "natural" on a computer?
I wasn't trying to avoid blocking, just blocking indefinitely without making it possible for other threads to notify the blocking thread that it should consider stopping what it is doing and clean up after itself.
Now, if other threads can send messages to the same mailbox the blocking thread is waiting on to receive its return/error code, that solves the problem, but it makes programming hard.
But it's hard anyway...
I don't really care about system calls, though. They are just overhead really, a necessary evil.
I don't know exactly, but doesn't each thread's struct have the same PID as the process it belongs to?
Actually, on Linux every thread IS a process. They each DO have a PID, and they are shown in the "ps" listing as individual processes. Signals are always delivered by PID.
This doesn't exactly make signals easy to use in a multi-threaded application, even if you only wanted it to work on Linux. Other systems have other issues.
Given this, I understand your dilemma. But tell me, would it make sense to send SIGKILL to a process and leave all its threads alive?
I know, and it makes me feel uneasy. I'll probably allow forced KILL on a per-process basis, since it's sometimes necessary to kill a process, and it's easier to clean up a process as a whole than to clean up individual threads.
Regarding socket read/write: no, no, no, I do not agree with this rant of yours regarding two threads reading/writing one socket.
NO NO NO... my point was to read with one thread and write with another. Very useful for things like IRC. =)
Regarding a signal to a thread in the middle of an IPC call: put it into a "signal pending" field and have the signal handlers deal with it later. First the IPC call *has* to be carried out.
Not every IPC call *has* to be carried out. It's not necessary to always wait for input from a socket, as the user might have already hit the cancel button and you'd throw the data away anyway.
PS: I started OS dev out of pure curiosity. I've learned some basic theory about it in my studies.
Ok, I have to admit curiosity plays a role in my development too. I mean, that's the reason I started programming in the first place, and that's the reason I've spent hour after hour in front of the computer.
P.S. Sorry for such a long post.
Re:Threads and synchronous IPC: non-sense?
Regarding knowing from whom to receive a message: it is possible to tell the message dispatcher from which other process one wants to have a message first. Upon receipt, the message dispatcher checks whether a certain process's message has to be delivered first, and then it picks the message out of the mailbox and delivers it.
Regarding threads/sockets: I am still not convinced. At the very low level, these two *have* to sync, because: can you *write* to a socket whilst another one reads from it? Any layer of abstraction might hide this fact from the user, but the OS developer has to bear it in mind.
Regarding IPC calls and thread termination: Ok, on this point you are right. I hadn't considered it.
stay safe
edit: bad English is no excuse for letting recognized faults linger around... *lol*
- Pype.Clicker
- Member
- Posts: 5964
- Joined: Wed Oct 18, 2006 2:31 am
- Location: In a galaxy, far, far away
- Contact:
Re:Threads and synchronous IPC: non-sense?
<coin val="2 cents">
Imho, there are situations where the thread should block and others where it shouldn't, just as there are situations where the thread's blocking should be interrupted and others where the "interrupting" event should be deferred.
In my mind, this translates into queues of events (i.e. one event can only be delivered and processed when the previous ones are completed) with different privilege levels (i.e. you, as the system designer, assign priorities to events, and only a more important event can interrupt a less important one).
If a thread is waiting for an I/O completion, it should still be able to respond to "KILL" events. However, it makes no sense to interrupt a thread that is rendering a window (upon the window manager's request) just to give it additional GUI requests, though it does make sense to be able to deliver it a "TIMEDOUT" event.
</coin>
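The priority rule above could be sketched like this in C (a minimal illustration with invented names; the actual delivery machinery is omitted): an arriving event preempts only if its priority is strictly higher than the one currently being processed, otherwise it queues.

```c
#include <assert.h>

#define MAXPRIO 8

struct evqueue {
    int running_prio;       /* priority being processed, -1 when idle */
    int deferred[MAXPRIO];  /* per-priority count of queued events */
};

/* Returns 1 if the event may interrupt right now, 0 if it is queued. */
static int event_arrives(struct evqueue *q, int prio)
{
    if (q->running_prio < 0 || prio > q->running_prio) {
        q->running_prio = prio;     /* more important: preempt or start */
        return 1;
    }
    q->deferred[prio]++;            /* wait until current work completes */
    return 0;
}
```

So a "TIMEDOUT" or "KILL" event, assigned a high priority, interrupts a blocked thread, while ordinary GUI requests at lower priority simply queue behind the rendering work.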
Re:Threads and synchronous IPC: non-sense?
<pick_up (two_cents);>
I remember a thread where we discussed this issue... queues of messages belonging to threads.
It is up to the developer to set up a corresponding policy applicable to the whole system: when to block a thread and when not.
See: in a GUI environment it makes sense to have some thread send a message and carry on with its work (receiving input, calculating something, parsing mouse packets, etc.) whilst another thread has to block because it needs feedback (return values).
(Ha, I hear some cats sharpening claws and squealing.) But no, pals, don't frown or be disgruntled now: it still remains a question of system policy. This is a thing the system designer sets up.
In such cases, setting up a portfolio of situations or a use-case diagram on a sheet of paper can be of help.
</pickup();> *gg*