Standardized IPC protocol

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
Jezze
Member
Posts: 395
Joined: Thu Jul 26, 2007 1:53 am
Libera.chat IRC: jfu
Contact:

Re: Standardized IPC protocol

Post by Jezze »

I wanted to chime in with my approach. Like bzt, I have asynchronous messaging only. Each task has a mailbox it reads messages from, and it blocks if the mailbox is empty. So far, pretty standard stuff. Like bzt, I do synchronous messaging by sending a message and waiting for a response. What is a bit unusual is that even though the main flow of the task is blocked waiting for a response, if any other message arrives that it wasn't specifically waiting for, the task can still run off and take care of it. So a synchronous block doesn't interfere with asynchronous events. This is useful because you can, for example, wait for a timer interrupt every 5 seconds but still instantly handle a key being pressed.
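A minimal single-process sketch of that scheme (the Message and Mailbox types here are hypothetical stand-ins, not Fudge's actual API): while waitFor() blocks on one topic, any other message that arrives is dispatched to its registered handler, so the synchronous wait doesn't starve asynchronous events.

```cpp
#include <deque>
#include <functional>
#include <map>
#include <string>

// Hypothetical message type: 'topic' identifies the kind of event.
struct Message {
    std::string topic;
    int payload;
};

// A single task's mailbox. Handlers registered here still run even while
// the task is blocked inside waitFor() on some other topic.
class Mailbox {
public:
    void deliver(Message m) { queue.push_back(std::move(m)); }

    void onMessage(const std::string& topic,
                   std::function<void(const Message&)> handler) {
        handlers[topic] = std::move(handler);
    }

    // "Synchronous" wait: block until a message with `topic` arrives,
    // but dispatch any other message to its handler in the meantime.
    Message waitFor(const std::string& topic) {
        for (;;) {
            if (queue.empty()) {
                // A real kernel would block the task here; this sketch
                // assumes the messages have already been delivered.
                continue;
            }
            Message m = queue.front();
            queue.pop_front();
            if (m.topic == topic) return m;          // the awaited reply
            auto it = handlers.find(m.topic);        // an unrelated event
            if (it != handlers.end()) it->second(m); // handle it anyway
        }
    }

private:
    std::deque<Message> queue;
    std::map<std::string, std::function<void(const Message&)>> handlers;
};
```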
Fudge - Simplicity, clarity and speed.
http://github.com/Jezze/fudge/
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

Finally got a running implementation of Permebufs.

Here's an example of a .permebuf file.

I lost a lot of motivation because it was a ton of work writing all the generic classes (PermebufArrayOfStrings, PermebufArrayOfOneofs, PermebufArrayOf<T>, PermebufListOf<T>, PermebufBytes, PermebufString, etc.) and the code generation.

Here's the first sample code of it all coming together:

Code:

	Permebuf<permebuf::perception::test::HelloWorld> hello_world;

	hello_world->SetBool1(true);
	hello_world->SetBool5(true);
	hello_world->SetBool9(true);
	hello_world->SetName("testy name");

	std::cout << "Bool1: " << hello_world->GetBool1() << std::endl;
	std::cout << "Bool5: " << hello_world->GetBool5() << std::endl;
	std::cout << "Bool9: " << hello_world->GetBool9() << std::endl;
	std::cout << "Name: " << *hello_world->GetName() << std::endl;
Permebuf<T> is the root object that owns the allocated memory. The underlying message is page aligned. My intention is that, to minimize copying during IPC, you'd be able to populate a hefty message and then 'gift' those pages to the receiving process. The vision I have for the RPC API is something like this:

Code:

Permebuf<MyServicesRequest> request;
// ... populate request ...
Permebuf<MyServicesResponse> response = some_service->MyFunction(std::move(request));
// `request` is invalid now, because the underlying buffer was sent to `some_service`.
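The move-only ownership in that API can be sketched like this (this Permebuf is a toy stand-in for the real generated class, with the page alignment and actual page donation elided): taking the buffer by value forces the caller to std::move() it, which mirrors gifting the pages away.

```cpp
#include <memory>
#include <utility>

// A stand-in for the real Permebuf<T>: a move-only owner of the
// message's backing memory (page alignment omitted for brevity).
template <typename T>
class Permebuf {
public:
    Permebuf() : data(std::make_unique<T>()) {}
    Permebuf(Permebuf&&) noexcept = default;
    Permebuf& operator=(Permebuf&&) noexcept = default;
    Permebuf(const Permebuf&) = delete;            // no accidental copies
    Permebuf& operator=(const Permebuf&) = delete;

    T* operator->() { return data.get(); }
    bool valid() const { return data != nullptr; } // false after a move
private:
    std::unique_ptr<T> data;
};

struct MyServicesRequest { int x = 0; };

// A hypothetical service call: because Permebuf is move-only, the caller
// must surrender ownership of the buffer to invoke it.
int MyFunction(Permebuf<MyServicesRequest> request) {
    return request->x * 2;
}
```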
My OS is Perception.
eekee
Member
Posts: 872
Joined: Mon May 22, 2017 5:56 am
Location: Kerbin
Discord: eekee
Contact:

Re: Standardized IPC protocol

Post by eekee »

AndrewAPrice wrote:I lost a lot of motivation because it was a ton of work writing all the generic classes: PermebufArrayOfStrings, PermebufArrayOfOneofs, PermebufArrayOf<T>, PermebufListOf<T>, PermebufBytes, PermebufString, etc.. and the code generation.
I'm sure! Good going.

I admit I wasn't going to take type definition to that level. Rather, as I'm working at the language level, my arrays etc. would include type data and... I guess I was thinking of run-time type checking. :) But static typing does help catch bugs. It might be possible to mechanically derive a data structure from code and compare it with a specification similar to a permebuf file. That's something I'd like to explore, but it seems a long way off right now.
Kaph — a modular OS intended to be easy and fun to administer and code for.
"May wisdom, fun, and the greater good shine forth in all your work." — Leo Brodie
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

I'm expanding my Permebuf definition to include 'services' and 'mini-messages'. Mini-messages are like messages, but their size can be statically evaluated (which limits what you can do with them; e.g. they can't hold arrays) and they must be <= 32 bytes. The idea is that if a service's method takes a mini-message rather than a message, the IPC will happen in registers without heavy memory copying, but if a standard message is sent, we donate the memory pages the Permebuf occupies to the receiving process.

e.g. From my MouseListener service

Code:

// A service that can listen to mouse events.
service MouseListener {
	// The mouse has moved. This message is sent when the listener has held the
	// mouse captive.
	minimessage OnMouseMoveRequest {
		// How far the mouse moved horizontally.
		DeltaX : float32 = 1;

		// How far the mouse moved vertically.
		DeltaY : float32 = 2;
	}
	OnMouseMove : OnMouseMoveRequest = 0;

	// The mouse has scrolled. This message is sent either when the listener
	// has held the mouse captive, or the cursor is over the window.
	minimessage OnMouseScrollRequest {
		// How far the mouse scrolled.
		Delta : float32 = 1;
	}
	OnMouseScroll : OnMouseScrollRequest = 1;
	.....
}
Then the mouse driver can have a service such as:

Code:

service MouseDriver {
	// Specifies the service that we should send mouse events to.
	minimessage SetMouseListenerRequest {
		// The service we should send mouse events to.
		NewListener : MouseListener = 1;	
	}
	SetMouseListener : SetMouseListenerRequest = 1;
}
The plan is for the MouseListener to be the window manager, but if a program (like a first person shooter video game) wants to capture the mouse, the window manager can pass that program's MouseListener to the MouseDriver, and the MouseDriver process can send messages directly to the program that's listening.

The above examples only show messages without a response type, but I'm going to support:

Code:

<method name> : <request type> -> <response type> = <method number>;
I'm also going to support 'streams' that open a channel that can send multiple consecutive messages until the channel is closed.
My OS is Perception.
moonchild
Member
Posts: 73
Joined: Wed Apr 01, 2020 4:59 pm
Libera.chat IRC: moon-child

Re: Standardized IPC protocol

Post by moonchild »

AndrewAPrice wrote:I'm intrigued by the idea of implementing RPCs as fat function calls - when a process registers an RPC, it registers the address of the entrypoint, and when you issue an RPC you stay in the same thread, but the function call barrier changes the address space (and preserves registers.) The disadvantage of this method is that all RPC handlers must be thread-safe (although, the worst case scenario is that you lock the same mutex at the start of all of your handlers and your program is effectively single threaded.)

But, it becomes apparent that there are times we don't want to block the caller, e.g. the mouse driver notifying the program that's in focus that the mouse has moved shouldn't be synchronous, otherwise we risk a userland program blocking the mouse driver. It might be useful to have a mechanism that's send-and-forget. So, I think it would be useful to have two IPC mechanisms
In general, it's better to have fewer primitives and compose them. If all you have are async messages, you can easily implement a (synchronous) message handling queue within a given application. And you can put that code into your standard library/whatever so it's the same amount of friction. Why should the kernel have to deal with it? The whole point of microkernels is to shift complexity into userspace.
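As a sketch of that composition (the Msg type and the async send/recv primitives here are hypothetical, not any particular kernel's API): a synchronous call is just an async send followed by a receive loop that sets aside every message that isn't the awaited reply, all in library code rather than the kernel.

```cpp
#include <cstdint>
#include <deque>

// Hypothetical async messaging: every message carries an id, and a
// reply echoes the id of the request it answers.
struct Msg {
    std::uint64_t id;       // this message's id
    std::uint64_t replyTo;  // 0, or the id of the request being answered
    int value;
};

// Simulated channel; a real kernel would move these between processes.
static std::deque<Msg> inbox;
static std::deque<Msg> deferred; // unrelated messages, handled later

static void asyncSend(const Msg& m) { (void)m; /* kernel send elided */ }
static Msg asyncRecv() {
    Msg m = inbox.front();
    inbox.pop_front();
    return m;
}

// A synchronous call built purely from async send/recv: send the request,
// then loop until the matching reply arrives, queueing everything else
// for the application's normal event loop.
int syncCall(std::uint64_t requestId, int value) {
    asyncSend(Msg{requestId, 0, value});
    for (;;) {
        Msg m = asyncRecv();
        if (m.replyTo == requestId) return m.value;
        deferred.push_back(m); // not our reply; defer it
    }
}
```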
nexos
Member
Posts: 1078
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: Standardized IPC protocol

Post by nexos »

In my microkernel, I have completely forgone async messaging. The problem with async messages is the same problem Mach had with its messaging. The approach I plan on using is like what @AndrewAPrice mentioned. When a message is sent, it will work as follows:

1) User calls sendmsg(), passing a memory address containing the message data, the message ID (and other metadata), and the thread ID of the destination.

2) Kernel acquires a semaphore, which counts how many threads are waiting to send a message to this thread.

3) Kernel obtains the physical address of the message data, then alters the PTEs and PDEs of the destination thread to map that data at a virtual address. In my kernel, memory management is outside of the kernel (save for the MMU, a kernel-use-only physical memory allocator, and a kernel object pool), so the region from 0x0 - 0x1000 is unmapped, 0x1000 - 0x2000 contains the TCB of the current thread, and 0x3000 - 0x300000 contains a message area where message data goes. This does limit the size of a message, especially if there are multiple messages at once in a process, but it was the best solution I came up with :D .

4) The kernel then temporarily boosts the priority of the receiving thread so that it preempts the sender, and then initiates a context switch to that thread.

5) The receiver thread handles the message.

6) The receiver thread calls ackmsg(), which wakes the thread waiting for the message to be processed.

7) The sender wakes up and goes about its business.
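The handshake in steps 1-7 can be sketched as a user-space simulation (Channel, sendmsg, and recvAndAck are illustrative stand-ins; the PTE remapping and priority boost are kernel work and are elided, with a condition variable standing in for the sender's blocking wait):

```cpp
#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Simulated kernel state for one sender/receiver pair.
struct Channel {
    std::mutex m;
    std::condition_variable cv;
    bool delivered = false; // message "mapped" into the receiver
    bool acked = false;     // receiver called ackmsg()
    std::string data;
};

// Steps 1-4: deliver the message, then block until the receiver acks.
void sendmsg(Channel& ch, const std::string& msg) {
    std::unique_lock<std::mutex> lock(ch.m);
    ch.data = msg;
    ch.delivered = true;
    ch.cv.notify_all();                         // hand off to the receiver
    ch.cv.wait(lock, [&] { return ch.acked; }); // sender sleeps until step 6
}

// Steps 5-7: handle the message, ack it, and wake the sender.
std::string recvAndAck(Channel& ch, std::vector<std::string>& log) {
    std::unique_lock<std::mutex> lock(ch.m);
    ch.cv.wait(lock, [&] { return ch.delivered; });
    log.push_back("handled:" + ch.data); // step 5: handle the message
    ch.acked = true;                     // step 6: ackmsg()
    ch.cv.notify_all();                  // step 7: sender wakes up
    return ch.data;
}
```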

Please point out any kinks in this system :D
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
andrew_w
Posts: 19
Joined: Wed May 07, 2008 5:06 am

Re: Standardized IPC protocol

Post by andrew_w »

For UX/RT I am using the seL4 microkernel, which implements synchronous register-based IPC optimized for RPC-type call-return semantics. On top of the kernel IPC I am going to implement a minimalist transport layer that provides several variations on Unix-type read()/write() APIs. Along with the traditional read()/write() that always copy, there will also be variants that expose the non-reserved message registers directly, as well as ones that use a shared buffer. All three APIs will be interchangeable: if the receiving process reads the message with a different API than the one used to send it, the message will be copied into the provided private buffer, the message registers, or the shared buffer as necessary. Support for returning error statuses and optionally preserving message boundaries (for certain types of special files) will be present, but other than that the transport layer will be untyped and unstructured, unlike most other IPC transport layers for microkernel OSes. Structured RPC with marshalling will be provided as an optional presentation layer on top of the transport layer. Since APIs that deal in bulk unstructured data are common, I see no good reason to impose the overhead of structured RPC with marshalling on them.
Developer of UX/RT, a QNX/Plan 9-like OS
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

@andrew_w What do you mean by unstructured data? Something like JSON?

I like how simple it is in JavaScript/Node.js that you can send and store arbitrary JSON in files and across networks. But I don't know if I could get over the serialization+deserialization cost for every message.

One of my motivations for creating Permebuf is to have a way to describe data that is essentially as simple as writing data into a struct (except the whole message is page aligned so I can send the memory pages to another process.)
My OS is Perception.
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

I've been investing a lot of time into my Permebuf-to-C++ code generator. It's still not done: I have a stream syntax that opens a two-way channel between the caller and the sender, but I'm probably not going to write the code generation for that until I actually need it. There's a lot of super-tedious work.

The generated service code is coming along nicely.

Assuming you define a Permebuf service called MyService, the generated class will have functions such as:

Code:

static std::optional<MyService> MaybeFindFirstInstance();
static void ForEachInstance(const std::function<void(MyService)>&);
static MessageId NotifyOnNewInstances(const std::function<void(MyService)>&);
static void StopNotifyingOnNewInstances(MessageId);
void NotifyOnDestruction(const std::function<void(MyService)>&);
Once you have an instance, you can call its methods: e.g. MyFunction is invoked via CallMyFunction, with both sync and async versions (the async versions take a callback function).

There is a generated subclass called MyService::Server. To implement the service, you inherit from this class and override HandleMyFunction; when you create an instance of it, it will automatically register with the kernel and other processes will be able to find it.
My OS is Perception.
andrew_w
Posts: 19
Joined: Wed May 07, 2008 5:06 am

Re: Standardized IPC protocol

Post by andrew_w »

AndrewAPrice wrote:@andrew_w What do you mean by unstructured data? Something like JSON?
I'm talking about things like (Unix-style) disk filesystems and TCP/UDP sockets, where the API transfers a single blob of data at a time and doesn't care about the structure of the data. Since such APIs are ubiquitous in Unix-like systems, I think it is completely pointless to have any support for typed data or multiple arguments in the IPC transport layer of a microkernel-based Unix-like system. Instead, the transport layer should implement only read()/write()-type APIs, rather than implementing read()/write() on top of a structured RPC transport layer (which seems to be the more common approach for some reason).

As long as the transport layer provides support for message-boundary-preserving semantics it is easy enough to implement an optional structured RPC presentation layer on top of it for services that need one (under UX/RT a lot of APIs will be implemented using collections of simple files rather than structured RPC, much like Plan 9, although there will still be some servers that use RPC).
Developer of UX/RT, a QNX/Plan 9-like OS
nexos
Member
Posts: 1078
Joined: Tue Feb 18, 2020 3:29 pm
Libera.chat IRC: nexos

Re: Standardized IPC protocol

Post by nexos »

One question I've had about RPC is whether the receiver blocks to receive RPC calls, or whether calls are delivered asynchronously with something like signals or APCs. I'd assume it could be either way.

Also, IMO, it would be better to have typing and high-level RPC abstractions implemented in a user-space library. That way, the client and server get more flexibility in their communication method.
"How did you do this?"
"It's very simple — you read the protocol and write the code." - Bill Joy
Projects: NexNix | libnex | nnpkg
moonchild
Member
Posts: 73
Joined: Wed Apr 01, 2020 4:59 pm
Libera.chat IRC: moon-child

Re: Standardized IPC protocol

Post by moonchild »

nexos wrote:One question I've had about RPC is whether or not the receiver blocks to receive RPC calls, or are they implemented asynchronously with something like signals or APC? I'd assume it could be either way.
You can fairly easily implement synchronous calls on top of async ones, but not the other way around (unless you spawn a thread for every call, which is ... ...)

So I think it's better to make async messages the primitive.
Gigasoft
Member
Posts: 855
Joined: Sat Nov 21, 2009 5:11 pm

Re: Standardized IPC protocol

Post by Gigasoft »

Asynchronous completion could mean a number of different things. It could be based on an event, a queue or an APC. It makes sense to implement synchronous calls on top of event based completion, but not on APCs. APCs are an awkward and probably very uncommonly preferred way to get notified of incoming RPC calls. However, the option should be there, for completeness.

Mapping sender pages into the receiver's address space for each message is costly compared to a simple memory copy if the messages are small. On Windows, it is up to the caller: the sender can pass along shared memory buffers, which will be mapped into the receiver's address space, and object handles can also be passed.
andrew_w
Posts: 19
Joined: Wed May 07, 2008 5:06 am

Re: Standardized IPC protocol

Post by andrew_w »

moonchild wrote:You can fairly easily implement synchronous calls on top of async ones, but not the other way around (unless you spawn a thread for every call, which is ... ...)

So I think it's better to make async messages the primitive.
The issue with making asynchronous messaging the primitive and building synchronous messaging on top of it is that it would make QNX/L4-style direct process switching (where the kernel does an immediate context switch from the sending process to the receiving process without going through the scheduler queue) more difficult. Having to wait for the scheduler to decide to schedule the server adds significant overhead for client processes. Direct process switching brings the overhead of calling a server process in a microkernel system close to that of making a system call in a monolithic kernel: basically the only extra overhead is the context switches, and that can be minimized by avoiding excessive vertical modularization of subsystems.
Developer of UX/RT, a QNX/Plan 9-like OS
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Re: Standardized IPC protocol

Post by AndrewAPrice »

Gigasoft wrote:Asynchronous completion could mean a number of different things. It could be based on an event, a queue or an APC. It makes sense to implement synchronous calls on top of event based completion, but not on APCs. APCs are an awkward and probably very uncommonly preferred way to get notified of incoming RPC calls. However, the option should be there, for completeness.
I haven't looked into APCs. What are some use cases for them? Would the kernel or other userland code cause an APC to execute?

For me, I have an event loop, and messages directed to a particular service get dispatched to a response handler. My C++ code passes the handler two things: the request and a response object (if applicable). One could implement the handler to push these objects onto a queue to be processed by a thread pool, or even create a thread and pass them in as parameters, if desired.
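That dispatch model can be sketched roughly like this (EventLoop, Request, Response, and Handler are hypothetical names for illustration, not the actual generated code): a handler receives the request and a response object, and is free to respond inline or hand the pair off to a worker.

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical request/response pair handed to a handler by the event loop.
struct Request { std::string body; };
struct Response { std::string body; };

using Handler = std::function<void(const Request&, Response&)>;

// Minimal event-loop dispatch: messages directed at a service are routed
// to its registered handler. A handler could just as well push the pair
// onto a work queue for a thread pool instead of responding inline.
class EventLoop {
public:
    void registerService(const std::string& name, Handler h) {
        handlers[name] = std::move(h);
    }
    Response dispatch(const std::string& service, const Request& req) {
        Response resp;
        handlers.at(service)(req, resp); // an unknown service would throw
        return resp;
    }
private:
    std::map<std::string, Handler> handlers;
};
```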
My OS is Perception.