My idea for IPC
My idea for IPC in my OS is for each process to have a 100-byte area where the message currently being processed is stored. The first byte is a status byte holding one of these values: 0x00 (ready to receive), 0x01 (processing a message), or 0x80 (lockout). When a process wants to send a message to another process, it loads the PID of the target into eax and a pointer to the message in memory into ebx, then makes a system call. The kernel checks the status byte of the target process: if the status is 0x00, the kernel copies the message from the sender into the receiver's memory; if the status is 0x01, the message is moved into a buffer instead; if the status is 0x80, that means the message buffer is full, so the message is discarded and the sender is informed that the send failed. On the next context switch to a task, the next message in the buffer is copied into the message area if the status byte is 0x00. If a process does not want to wait, it can copy all the queued messages into its own memory and then handle them all at once.
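Here is a minimal C sketch of the kernel-side send path described above; `struct process`, `find_process()` and `queue_message()` are assumed helpers for illustration, not a worked-out kernel API:

```c
/* Minimal sketch of the kernel-side send path. struct process,
   find_process() and queue_message() are assumed helpers. */
#include <stdint.h>
#include <string.h>

#define MSG_AREA_SIZE 100       /* 1 status byte + 99 payload bytes */

#define STATUS_READY      0x00  /* ready to receive              */
#define STATUS_PROCESSING 0x01  /* busy with the current message */
#define STATUS_LOCKOUT    0x80  /* backlog full, reject sends    */

struct process {
    uint8_t msg_area[MSG_AREA_SIZE];
    /* ... scheduler state, message backlog, etc. ... */
};

struct process *find_process(uint32_t pid);             /* assumed */
int queue_message(struct process *p, const void *msg);  /* assumed */

/* Entered from the syscall handler with eax = PID, ebx = message. */
int sys_send_message(uint32_t target_pid, const void *msg)
{
    struct process *target = find_process(target_pid);
    if (!target)
        return -1;

    switch (target->msg_area[0]) {          /* the status byte */
    case STATUS_READY:
        /* receiver is idle: copy straight into its message area
           (a real kernel copies across address spaces here) */
        memcpy(target->msg_area + 1, msg, MSG_AREA_SIZE - 1);
        target->msg_area[0] = STATUS_PROCESSING;
        return 0;
    case STATUS_PROCESSING:
        /* receiver is busy: park the message in its backlog */
        return queue_message(target, msg);
    default:
        /* STATUS_LOCKOUT: backlog full, report failure to sender */
        return -1;
    }
}
```

The sender gets the failure back as the system call's return value, so it can decide whether to retry later.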
Questions? Comments?
- einsteinjunior
I would suggest using message boxes with limited-length queues. That way you can cap a message box at, say, 1024 messages, while it only uses the memory that the queued messages actually take up.
It also means you can stop a rogue process from exhausting resources by sending thousands of messages: after the 1024th, it can't send any more.
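A hypothetical sketch of such a bounded mailbox, where the cap is enforced at send time and storage grows only with the messages actually queued (all names are illustrative):

```c
/* Hypothetical bounded mailbox: a hard cap on queued messages,
   but memory is only allocated for messages actually queued. */
#include <stdlib.h>
#include <string.h>

#define MBOX_CAP 1024   /* maximum queued messages per mailbox */
#define MSG_SIZE 100

struct msg_node {
    struct msg_node *next;
    unsigned char data[MSG_SIZE];
};

struct mailbox {
    struct msg_node *head, *tail;
    unsigned count;
};

/* Returns 0 on success, -1 once the mailbox is at its cap. */
int mbox_send(struct mailbox *mb, const void *msg)
{
    if (mb->count >= MBOX_CAP)
        return -1;          /* a rogue sender hits the wall here */

    struct msg_node *n = malloc(sizeof *n);
    if (!n)
        return -1;
    memcpy(n->data, msg, MSG_SIZE);
    n->next = NULL;

    if (mb->tail)
        mb->tail->next = n;
    else
        mb->head = n;
    mb->tail = n;
    mb->count++;
    return 0;
}
```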
- einsteinjunior
How about smaller messages?
Much of my IPC code assumes a power-of-2 message size. I don't have my OS on this computer so I can't check the details, but I use fixed-size messages plus a per-process bitmap indexed by the value of the first byte of each message. If code wants to see whether a message of type 12 was sent, it looks at the bitmap: if bit 12 is 1, a message of that type is on the queue.
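A small C sketch of that bitmap scheme (sizes and names are assumptions, since the original code isn't at hand):

```c
/* Per-process bitmap of pending message types. The first byte of
   each fixed-size message is its type (0-255), so 256 bits suffice. */
#include <stdint.h>

typedef struct {
    uint8_t pending[32];   /* 256 bits, one per message type */
    /* ... the fixed-size message queue itself ... */
} proc_msgs_t;

/* Set the bit for a message's type when it is queued. */
static void mark_pending(proc_msgs_t *p, uint8_t type)
{
    p->pending[type >> 3] |= (uint8_t)(1u << (type & 7));
}

/* Test whether a message of the given type is on the queue. */
static int is_pending(const proc_msgs_t *p, uint8_t type)
{
    return (p->pending[type >> 3] >> (type & 7)) & 1;
}
```

Checking for message type 12 is then just `is_pending(&proc, 12)` instead of walking the whole queue.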
- einsteinjunior
Hi,
There is a trade-off between processing a large number of messages and allowing a large size for the message box. If the mailbox is too small you will lose some messages from the queue, and if the queue is too large you will waste a lot of address space.
What about message-intensive programs? Say a window manager passes mouse events to a program via IPC... far more than 1024 messages would be sent to something like a media player over the course of an hour.
What about having one message slot of a slightly longer length that is overwritten with the next message once the original has been read? This would need some sort of waiting queue for each program (for maybe 15 messages), as each message would need to be delivered only after the previous one had been read.
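A rough sketch of that single-slot idea, with a 15-entry waiting queue that refills the slot only after the reader acknowledges (sizes and names are guesses at the design, not anyone's actual code):

```c
/* Rough sketch: one current-message slot that is overwritten only
   after it has been read, backed by a small FIFO waiting queue.
   Initialise with current_read = 1 so the first deliver lands. */
#include <string.h>

#define MSG_SIZE   128  /* the "slightly longer" message (a guess) */
#define WAIT_SLOTS 15   /* waiting queue depth from the post       */

struct slot_mbox {
    unsigned char current[MSG_SIZE];
    int current_read;                        /* 1 once consumed */
    unsigned char wait[WAIT_SLOTS][MSG_SIZE];
    int wait_head, wait_count;
};

/* Sender side: overwrite the slot if it was read, else queue. */
int mbox_deliver(struct slot_mbox *mb, const void *msg)
{
    if (mb->current_read) {
        memcpy(mb->current, msg, MSG_SIZE);
        mb->current_read = 0;
        return 0;
    }
    if (mb->wait_count == WAIT_SLOTS)
        return -1;                           /* waiting queue full */
    int tail = (mb->wait_head + mb->wait_count) % WAIT_SLOTS;
    memcpy(mb->wait[tail], msg, MSG_SIZE);
    mb->wait_count++;
    return 0;
}

/* Reader side: acknowledge the current message; the next queued
   message (if any) is promoted into the slot. */
void mbox_ack(struct slot_mbox *mb)
{
    if (mb->wait_count > 0) {
        memcpy(mb->current, mb->wait[mb->wait_head], MSG_SIZE);
        mb->wait_head = (mb->wait_head + 1) % WAIT_SLOTS;
        mb->wait_count--;
        mb->current_read = 0;                /* fresh message ready */
    } else {
        mb->current_read = 1;                /* slot free again */
    }
}
```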
Yes, but if you really want to use fixed-size packages you can split big packages into smaller ones. There are other approaches to IPC that don't suffer from this problem; e.g., you could use pipes and let the driver block until the pipe's internal buffer is empty, send some bytes, then wait until the buffer is empty again.
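Splitting a large payload into fixed-size packages might look like the following; `send_package()` stands in for whatever fixed-size send primitive the kernel provides (an assumption):

```c
/* Illustrative only: split a large buffer into fixed-size packages.
   send_package() is a stand-in for the kernel's send primitive. */
#include <stddef.h>
#include <string.h>

#define PKG_SIZE 64

int send_package(unsigned pid, const unsigned char pkg[PKG_SIZE]); /* assumed */

int send_large(unsigned pid, const unsigned char *buf, size_t len)
{
    unsigned char pkg[PKG_SIZE];

    while (len > 0) {
        size_t n = len < PKG_SIZE ? len : PKG_SIZE;
        memset(pkg, 0, PKG_SIZE);         /* zero-pad the last one */
        memcpy(pkg, buf, n);
        if (send_package(pid, pkg) != 0)
            return -1;                    /* or block and retry */
        buf += n;
        len -= n;
    }
    return 0;
}
```

A real protocol would also need a small header carrying the total length, so the receiver knows where the payload ends.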
My microkernel's IPC scheme is extremely simple: the kernel transfers buffers between processes only when processes ask for it. When a process wants to receive a message, the message is copied directly from the sender's address space to the receiver's. The kernel can also do an indirect copy, but then messages cannot be larger than 64 bytes.
For situations where this does not suffice, processes can also open message queues (mqueues), which act like a blocking FIFO: one process sends, the other receives. Finally, for situations where speed is a concern, the kernel provides shared memory. Mqueues are a kind of interprocess jack of all trades: they can be used to send messages, but also for synchronization. Using mqueues in conjunction with shared memory makes for a very speedy IPC system; I use this method for communication between FS servers and the VFS. I use the first method I described for processes that only need to do IPC with one other process, and mqueues when a process wants to be able to talk to more than one process.
Just thought I'd give an example of another microkernel's IPC system. You, however, are free to do whatever you choose.
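For illustration, the mqueue-plus-shared-memory combination generally looks like this; the `shm_map`/`mq_send`/`mq_recv` names are hypothetical stand-ins, not this kernel's actual API:

```c
/* The general pattern only: bulk data travels through a shared
   region, small queue messages carry the notifications. All three
   calls below are hypothetical, not this kernel's real API. */
#include <stddef.h>

void *shm_map(unsigned peer_pid, size_t len);        /* assumed */
int   mq_send(int mq, const void *msg, size_t len);  /* assumed */
int   mq_recv(int mq, void *msg, size_t len);        /* assumed */

struct request { size_t offset, len; };  /* "data is ready" note */

/* e.g. a VFS asking an FS server for 512 bytes */
const char *fs_read(int mq, unsigned fs_pid)
{
    char *shared = shm_map(fs_pid, 4096);    /* bulk channel, once */

    struct request req = { .offset = 0, .len = 512 };
    mq_send(mq, &req, sizeof req);           /* small request out  */
    mq_recv(mq, &req, sizeof req);           /* blocks until ready */

    return shared + req.offset;  /* payload is in the shared page */
}
```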
Well, for my I/O message passing, I use a circular buffer. The driver adds messages, and the process/kernel reads them as they become available. The two ends don't interfere with each other (one can be adding a new message while the other reads the current one), which makes multithreading/multitasking much simpler.

The same can be done safely with a single 4K page, since that is the minimum granularity at which pages can be protected from other processes; you can't share anything smaller without letting processes mess with each other. Each process gets a 4K page, split into however big the sections you require (typically a process has more incoming than outgoing traffic, a driver has more outgoing, or at worst it's 50/50). Say you split it 2K/2K for incoming/outgoing; the process and the kernel both map the page. Both halves are ring buffers, so they are non-interfering. You can even store a size field to allow variable-sized messages: make sure the message fits in the remaining space, write the message, and update the tail pointer. The process sees head != tail, reads the size, grabs the message, and advances head by the message size. This means you don't have a fixed message size, just a fixed-size buffer (although even that can be changed).

If you were transferring large chunks for a disk driver (as mentioned above), you would simply send a message asking to share a page with the driver (in my case, my drivers live in kernel memory, and all processes have kernel memory mapped in as privileged pages, so there's no need to share unless it's with another process). The procedure would go like this: process 1 asks the kernel to allocate some memory, then tells it that it wants to share it with process 2. Process 2 then gets a page mapped to the same physical address, so both can read/write the same physical memory directly (it doesn't matter what the virtual address is in each process's address space). They keep using messages to coordinate use of the buffer: once the buffer is established, process 1 tells process 2 (the disk driver) to read a sector into the buffer, process 2 reads the sector into the buffer and notifies process 1 that it's complete, and process 1 reads the data from the buffer.
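For illustration, one half of that shared 4K page could be a single-producer/single-consumer ring like the sketch below, with size-prefixed variable-length messages (names are illustrative, and real code would need memory barriers between the data copy and the index update):

```c
/* Illustrative SPSC ring in one 2K half of a shared 4K page.
   Indices run freely and are masked on access (RING_SIZE is a
   power of 2). Real code needs memory barriers around the
   index updates. */
#include <stdint.h>

#define RING_SIZE 2048

struct ring {
    volatile uint32_t head;     /* advanced only by the reader */
    volatile uint32_t tail;     /* advanced only by the writer */
    uint8_t data[RING_SIZE];
};

/* Writer: check space, write a 2-byte size prefix plus payload,
   then publish by advancing tail. */
int ring_write(struct ring *r, const uint8_t *msg, uint16_t len)
{
    uint32_t free_bytes = RING_SIZE - (r->tail - r->head);
    if ((uint32_t)len + 2 > free_bytes)
        return -1;                     /* would overrun the reader */

    uint32_t t = r->tail;
    r->data[t++ & (RING_SIZE - 1)] = len & 0xff;    /* size, low  */
    r->data[t++ & (RING_SIZE - 1)] = len >> 8;      /* size, high */
    for (uint16_t i = 0; i < len; i++)
        r->data[t++ & (RING_SIZE - 1)] = msg[i];
    r->tail = t;                 /* reader now sees head != tail */
    return 0;
}

/* Reader: if head != tail, read the size, copy the payload out,
   then advance head past the whole message. */
int ring_read(struct ring *r, uint8_t *out, uint16_t max)
{
    if (r->head == r->tail)
        return -1;                     /* ring is empty */

    uint32_t h = r->head;
    uint16_t len = r->data[h++ & (RING_SIZE - 1)];
    len |= (uint16_t)r->data[h++ & (RING_SIZE - 1)] << 8;
    if (len > max)
        return -1;     /* caller's buffer too small; leave it */
    for (uint16_t i = 0; i < len; i++)
        out[i] = r->data[h++ & (RING_SIZE - 1)];
    r->head = h;
    return (int)len;
}
```

Because only the writer advances `tail` and only the reader advances `head`, the two sides never write the same field, which is what lets them run simultaneously without locks.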