Owen wrote: Don't try to invent "one grand unified solution" if it doesn't make sense. As I'd do it:
Small IPCs might go through a memory-mapped ring buffer shared between the two ends of the communication channel (with the kernel providing the ability to "prod" or "wake up" the other end).
The shared-memory ring buffer is great: it is a zero-copy solution, and allocation can in some cases be done lock-free. However, it requires mapping at least one page for each open channel. The VFS, for example, is likely to have many clients, hundreds even, which consumes a lot of virtual address space and physical memory. Each program is also likely to use several other services besides the VFS, so the number of mapped channel pages must become quite large. Or am I exaggerating the problem? Do you see this as a problem, or is the extra memory worth it?
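For concreteness, here is a minimal sketch of what such a per-channel page could look like. The names (ipc_ring, ring_put) and the exact layout are assumptions for illustration only; the point is that one 4 KiB page per direction holds the two offsets plus the payload, and the kernel is only needed to wake the other side.

/* Minimal sketch, not from the thread: one 4 KiB page mapped into both
 * processes, holding two offsets plus the message bytes. */
#include <stdatomic.h>
#include <stdint.h>

#define RING_DATA_SIZE 4088u   /* 4096 minus two 32-bit offsets (assumed layout) */

struct ipc_ring {
    _Atomic uint32_t head;           /* producer's write offset into data[] */
    _Atomic uint32_t tail;           /* consumer's read offset into data[]  */
    uint8_t data[RING_DATA_SIZE];    /* message bytes, wrapping around      */
};

/* Producer side: copy len bytes into the ring, or fail if it is full.
 * Nothing crosses the kernel; the consumer reads straight out of the
 * same page. The consumer side is symmetric. */
static int ring_put(struct ipc_ring *r, const uint8_t *msg, uint32_t len)
{
    uint32_t head  = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail  = atomic_load_explicit(&r->tail, memory_order_acquire);
    uint32_t space = (tail + RING_DATA_SIZE - head - 1u) % RING_DATA_SIZE;

    if (len > space)
        return -1;                   /* ring full: block or retry */

    for (uint32_t i = 0; i < len; i++)
        r->data[(head + i) % RING_DATA_SIZE] = msg[i];

    atomic_store_explicit(&r->head, (head + len) % RING_DATA_SIZE,
                          memory_order_release);
    return 0;                        /* now ask the kernel to prod the peer */
}

On the memory question: with this layout each open channel costs a page per direction, which is where the 8 MB figure for 1024 clients in the reply below comes from (1024 × 2 × 4 KiB).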
Owen wrote: Assuming a page per direction, 1024 clients is... 8 MB of virtual address space. That's assuming one connection per client process and a lot of processes, or many connections per client process and fewer processes.
Actually, if you're going for synchronous IPC, you can share that page between both directions.
If you are worried about locking on the process side and keep the file descriptor table "unified" across multiple connections on the VFS side, then you can have one connection per thread (good for multithreaded, I/O-intensive apps).
The cost of a thread is likely to be >> one 4 kB buffer.
Actually, it's not that bad when it comes to memory consumption compared to the benefits. I do have a security concern, though: the metadata of the shared allocator will itself be shared, meaning that a client process can corrupt the service's metadata and make it crash. With the VFS, for example, that is not behaviour you want; if one client process misbehaves, it shouldn't bring down the whole VFS. So what can you do here to prevent this?
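On the corruption worry: if the only shared metadata really is the pair of offsets (as the reply quoted below points out), the service can treat them as untrusted input and refuse to act on nonsense values. A rough sketch of the consumer side, reusing the hypothetical ipc_ring layout from the earlier sketch:

/* Sketch only, reusing struct ipc_ring / RING_DATA_SIZE from above: the
 * service never trusts offsets read out of the shared page. A corrupt
 * client can garble its own messages but cannot make the service index
 * outside the ring. */
static int ring_get(struct ipc_ring *r, uint8_t *out, uint32_t max_len)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);

    /* Validate before use: both offsets must lie inside the data area. */
    if (head >= RING_DATA_SIZE || tail >= RING_DATA_SIZE)
        return -1;                       /* corrupt channel: drop or close it */

    uint32_t avail = (head + RING_DATA_SIZE - tail) % RING_DATA_SIZE;
    if (avail > max_len)
        avail = max_len;                 /* never overrun the caller's buffer */

    for (uint32_t i = 0; i < avail; i++)
        out[i] = r->data[(tail + i) % RING_DATA_SIZE];

    atomic_store_explicit(&r->tail, (tail + avail) % RING_DATA_SIZE,
                          memory_order_release);
    return (int)avail;                   /* number of bytes consumed */
}

The worst a misbehaving client can then do is garble its own channel, which the service can simply tear down.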
Owen wrote: What shared allocator? It's a ring buffer; the metadata is two variables (head offset, tail offset). Validating those is easy.
If you're going for synchronous RPC-style IPC, then just splat the to-be-sent data into the page starting at offset zero, and the server can splat back the to-be-returned data in the same page.
A ring buffer would not work for me; I would need an allocator that can allocate and free arbitrarily sized chunks at any time. Right now I use a kernel-assisted allocator, but it would be lovely to move that to user space and also avoid copying messages across the process border. However, I must also ensure that processes cannot kill other processes by corrupting shared data.
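One possible direction for that last requirement (a design sketch under assumed names, not something proposed in the thread): keep all of the allocator's bookkeeping in the service's private memory, pass only (offset, length) descriptors through the shared region, and validate every descriptor against the region's bounds before touching the payload. Then a client can corrupt its own payload bytes, but never the allocator state the service depends on.

/* Design sketch with assumed names: payload lives in the shared region,
 * bookkeeping lives in service-private memory, and every descriptor the
 * client hands over is bounds-checked before it is dereferenced. */
#include <stddef.h>
#include <stdint.h>

struct shm_region {
    uint8_t *base;     /* where this client's region is mapped in the service */
    size_t   size;     /* size of that mapping in bytes                       */
};

struct msg_desc {
    uint32_t offset;   /* where the client claims the message starts */
    uint32_t length;   /* how long the client claims it is           */
};

/* Return a pointer into the shared region only if the descriptor lies
 * entirely inside it; otherwise reject the message. */
static const uint8_t *validate_desc(const struct shm_region *shm,
                                    struct msg_desc d)
{
    if (d.length == 0 || d.length > shm->size)
        return NULL;
    if (d.offset > shm->size - d.length)    /* overflow-safe bounds check */
        return NULL;
    return shm->base + d.offset;
}

The remaining caveat is that the client can still change the payload bytes while the service is reading them, so anything security-sensitive has to be copied or parsed defensively, but corruption stays confined to that one client's channel.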