
deadmutex wrote:
I want to design a very slim microkernel which won't contain a memory allocator. I want the kernel's message-passing system to simply transfer messages from one message queue to the other. Since the message queues are in the users' address spaces, the kernel shouldn't have to worry about allocating memory. However, when I looked into this design further, I ran into some concurrency issues: the receiver may or may not be manipulating its message queue when the kernel is adding a message. If the receiver is manipulating its queue, then the kernel might leave the queue in an inconsistent state after a message transfer. Also, on the sender's side, the sender may queue up its messages and have them sent after it is preempted. The problem is that the sender may also be manipulating its queue when the kernel preempts it. To me, there seems to be a potential for data inconsistencies by doing it this way. Does anyone have ideas on how to solve these issues?

The easiest way I can see is to have a flag for each item in the queue saying whether a message is in progress there, along with its originator (great for IPC as well). Of course, there is always the chance that the kernel will preempt you between finding the next free slot and setting this bit, so maybe add a flag saying that you are editing the message queue before searching for the free slot, in essence locking it. Possibly you could have the kernel add messages to the queue from the other direction, so the two don't interfere with each other. Another possibility, instead of a flag to lock the queue, is to record the number of messages you want to allocate; then, if the kernel does preempt you, it knows how many slots to skip without interfering. The kernel can likewise add the number of messages it is going to use and return, in case another process needs to send its own messages before the whole batch is allocated. I would really have to know how you plan on storing this queue and telling where the open slots are, etc., to get a better idea of what is going on, though.
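The per-slot "in progress" flag idea discussed in this thread can be sketched with a C11 atomic compare-exchange, which collapses "find a free slot" and "mark it taken" into one indivisible step so preemption in between is harmless. This is only an illustrative sketch; the names (`msg_slot`, `claim_slot`, `QUEUE_SLOTS`) are made up, not from any real kernel.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define QUEUE_SLOTS 16

typedef struct {
    atomic_bool in_use;      /* set while a message occupies or claims the slot */
    int         originator;  /* sender id, handy for IPC bookkeeping */
    char        payload[64];
} msg_slot;

static msg_slot queue[QUEUE_SLOTS];

/* Atomically claim the first free slot; returns its index, or -1 if full.
   The compare-exchange makes the claim atomic, so a preemption between
   finding the slot and marking it cannot hand the same slot to anyone else. */
int claim_slot(int originator) {
    for (int i = 0; i < QUEUE_SLOTS; i++) {
        bool expected = false;
        if (atomic_compare_exchange_strong(&queue[i].in_use, &expected, true)) {
            queue[i].originator = originator;
            return i;
        }
    }
    return -1; /* queue full */
}
```

Whether this is cheaper than the "reserve a count of slots" scheme depends on how often the queue is contended; the CAS loop scans from the front every time.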
I thought about using an array of fixed-sized messages as the queue. The user will specify the location of the queue, the maximum number of messages, etc. to the kernel; this is how the kernel can transfer messages. The "ready/used" bit can also function as a "present" bit: if the 'present' bit is set, then the entry is a valid message; if not, then that entry can be regarded as free.
Only question I have is this: how will the user application know when it receives a message? Will it be notified by the kernel to check its messages, or will it constantly poll a status to see if any messages have arrived?

The receiver can use a system call (wait_for_msg) that causes it to block until it receives a message; if the queue still contains unread messages, the call returns immediately.
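The semantics of a wait_for_msg call like the one described can be simulated in user space with a condition variable: return at once if messages are already queued, otherwise sleep until the kernel delivers one. This is only a sketch of the blocking behavior under those assumed semantics; a real kernel would put the thread to sleep in the scheduler rather than use pthreads, and the names here are hypothetical.

```c
#include <pthread.h>

struct mailbox {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    int             msg_count;   /* messages currently queued */
};

/* Receiver: block until at least one message is present.
   Returns immediately if the mailbox already holds messages. */
void wait_for_msg(struct mailbox *mb) {
    pthread_mutex_lock(&mb->lock);
    while (mb->msg_count == 0)
        pthread_cond_wait(&mb->nonempty, &mb->lock);
    pthread_mutex_unlock(&mb->lock);
}

/* "Kernel" side: deliver a message and wake the receiver. */
void notify_msg(struct mailbox *mb) {
    pthread_mutex_lock(&mb->lock);
    mb->msg_count++;
    pthread_cond_signal(&mb->nonempty);
    pthread_mutex_unlock(&mb->lock);
}
```

Consuming the message (decrementing the count and reading the queue slot) is left to a separate receive step, matching the thread's split between waiting and reading.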
OK, so a fixed-size array, etc. Will there be one queue for incoming and one for outgoing? Can processes send each other messages directly, or do they send the kernel a message to send another process a message (much safer, but slower)? How will the kernel know when a message is waiting: will the user notify it, or will it constantly poll?

If there is only communication between the kernel and a single process (no direct talking between processes), and you have two queues (one incoming, one outgoing), then you don't even have to store a present bit. Simply store the current location in the queue and the last slot that has a message; then you don't have to traverse the queue to check for messages, because you always know which is the next message and which is the last. If you have some sort of notification function on both sides, you can do away with polling completely: each time a side is notified it just updates a message-count variable, so you know how many messages you have left to process, you never miss one, and you avoid polling and traversing the array checking bits entirely. If the kernel is adding a message while the user process is reading a message, they won't interfere, since the kernel never updates the "current message" variable and the user app never updates the "last message" variable, so there is no chance of them stepping on each other (and the exact opposite for the other direction, of course!).
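The two-index scheme above is a classic single-producer / single-consumer ring buffer: the kernel only ever advances the write index ("last") and the user process only ever advances the read index ("current"), so neither side touches the other's variable and no lock or present bit is needed. A minimal sketch, with illustrative names (on real SMP hardware you would also need memory barriers between the copy and the index update):

```c
#include <string.h>

#define RING_SLOTS 8          /* power of two keeps the index math simple */
#define MSG_SIZE   32

struct ring {
    volatile unsigned current;   /* next slot to read  (consumer-owned) */
    volatile unsigned last;      /* next slot to write (producer-owned) */
    char slots[RING_SLOTS][MSG_SIZE];
};

/* Producer (kernel): returns 0 on success, -1 if the ring is full.
   Only ever writes `last`, never `current`. */
int ring_put(struct ring *r, const char *msg) {
    if (r->last - r->current == RING_SLOTS)
        return -1;                          /* full */
    memcpy(r->slots[r->last % RING_SLOTS], msg, MSG_SIZE);
    r->last++;                              /* publish after the copy */
    return 0;
}

/* Consumer (user process): returns 0 on success, -1 if empty.
   Only ever writes `current`, never `last`. */
int ring_get(struct ring *r, char *out) {
    if (r->current == r->last)
        return -1;                          /* empty */
    memcpy(out, r->slots[r->current % RING_SLOTS], MSG_SIZE);
    r->current++;
    return 0;
}
```

Because each index has exactly one writer, a preemption at any point leaves the ring consistent: the reader either sees the new message or it doesn't, but never a torn state.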
In my design, each thread has the ability to register a queue for outgoing messages with the kernel. A thread may also own a set of mailboxes from which messages are received; each mailbox must be associated with a message queue. When it is time to do a context switch, the kernel will send any messages that are in the old thread's send queue (if any). The user also has the option of disabling the kernel's auto-send and/or issuing a system call to manually send messages. The messages in my system are small; large message transfers will be performed using shared memory.
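The register-then-flush-on-context-switch design might be sketched as below. Everything here is hypothetical (the structure names, the `auto_send` flag, and the `transfer` primitive are invented for illustration); the point is only that the scheduler drains the outgoing queue of the thread being switched away from, unless auto-send has been disabled.

```c
#define SEND_SLOTS 8
#define MSG_SIZE   32

struct send_queue {
    unsigned pending;                 /* messages waiting to be sent */
    char     slots[SEND_SLOTS][MSG_SIZE];
};

struct thread_ctl {
    struct send_queue *sendq;      /* registered by the thread, or NULL */
    int                auto_send;  /* 1 = flush on context switch */
};

/* Hypothetical "deliver one message" primitive, stubbed here to
   count how many messages were sent. */
static int messages_sent;
static void transfer(const char *msg) { (void)msg; messages_sent++; }

/* Called by the scheduler when switching away from `old`:
   drain the registered send queue unless auto-send is off. */
void on_context_switch(struct thread_ctl *old) {
    if (old->sendq && old->auto_send) {
        for (unsigned i = 0; i < old->sendq->pending; i++)
            transfer(old->sendq->slots[i]);
        old->sendq->pending = 0;
    }
}
```

A manual-send system call would simply invoke the same drain path directly, and disabling `auto_send` leaves the queue untouched until the thread asks for a flush.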