Kevin wrote:
Did you forget error handling here? Or are you assuming that you never need to check for errors, because the next function would automatically fail, like read() when passed a -1 file descriptor?
You could of course add some kind of conditional execution here. And you'll probably soon find uses for loops (handling short reads, maybe). Eventually it might turn out that what you just started to write is a VM.
Yes, I should have mentioned that I was pretty much assuming that if one of the scheduled operations fails, any other operation that depends on it ends up being cancelled, with a distinct error code to indicate the cancellation (you usually want to ignore cancelled operations anyway). That works because dependent operations are not executed until the operation(s) they depend on have actually completed: if any of those operations fail, the dependent operations are simply never executed, and the process is informed that they have been cancelled.

Sik wrote:
To be fair, he did say it wasn't well defined yet. My first thought was that the chain would immediately terminate the moment a call fails. That seems like the obvious approach, especially since errors would be isolated to that chain and wouldn't affect other chains.
On a less serious note, you could probably go as far as writing a Lisp interpreter for the system calls, but I think people would both hate and love you for that :').
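To make the cancellation semantics concrete, here is a minimal user-space sketch of the behaviour I have in mind (the struct and helper names are made up for illustration and are not part of any actual interface): each operation may depend on an earlier one, and once an operation fails, everything that depends on it is completed with a "cancelled" status instead of being run.

    /* User-space model of the intended behaviour, with made-up types:
     * each operation may depend on an earlier one; once an operation
     * fails, its dependents are never run and are completed with a
     * distinct "cancelled" status instead. */
    #include <stdio.h>

    enum op_status { OP_PENDING, OP_DONE, OP_FAILED, OP_CANCELLED };

    struct op {
        const char *name;
        int depends_on;              /* index of prerequisite, or -1 for none */
        int (*run)(void);            /* 0 on success, negative on error */
        enum op_status status;
    };

    static int op_ok(void)   { return 0; }
    static int op_fail(void) { return -1; }

    /* Execute a chain in submission order, cancelling dependents of any
     * operation that did not complete successfully. */
    static void run_chain(struct op *ops, int n)
    {
        for (int i = 0; i < n; i++) {
            struct op *o = &ops[i];
            if (o->depends_on >= 0 && ops[o->depends_on].status != OP_DONE) {
                o->status = OP_CANCELLED;   /* prerequisite failed or was cancelled */
                continue;
            }
            o->status = (o->run() == 0) ? OP_DONE : OP_FAILED;
        }
    }

    int main(void)
    {
        struct op ops[] = {
            { "open",  -1, op_ok,   OP_PENDING },
            { "read",   0, op_fail, OP_PENDING },  /* fails...           */
            { "write",  1, op_ok,   OP_PENDING },  /* ...so this cancels */
        };
        static const char *status_name[] = { "pending", "done", "failed", "cancelled" };

        run_chain(ops, 3);
        for (int i = 0; i < 3; i++)
            printf("%s -> %s\n", ops[i].name, status_name[ops[i].status]);
        return 0;
    }

In this sketch the chain as a whole stops doing useful work as soon as one step fails, which matches the "terminate the chain" reading, while still reporting a completion (with a cancelled status) for every submitted operation.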
I always thought the point was that the kernel isn't aware of coroutines/green threads; otherwise they would just be normal threads scheduled by the kernel. The trade-off is mostly that green threads are a lot cheaper to create, because they don't have to be backed by kernel resources, whereas normal threads can be assigned to any processor core by the kernel and don't have to share a single time slice. One of the big problems with green threads, however, is that you really want some kind of asynchronous interface, because if an operation blocks, it blocks not just the current green thread but every green thread in that process. With an asynchronous system call interface, the userland scheduler could simply submit the system call, switch to the next green thread, and wait for any of the pending operations to complete in the meantime.

Kevin wrote:
Yes, coroutines are nice. And no, you don't need your asynchronous syscall interface if the kernel understands them. You already save and restore the register state when processing a syscall, so doing a context switch here comes for free. Even with a synchronous syscall interface, the kernel can just queue the syscall and switch to a different thread/coroutine until the operation has completed. The important part here is that the kernel is working asynchronously internally, but if you want the userspace to feel synchronous, there's little reason to change the traditional syscall interface. Essentially you get something that feels like blocking syscalls, except that they block only a single coroutine instead of the whole thread.
However, the use of green threads was just a suggestion. I definitely like the programming model offered by languages like Erlang, and while that language has some weak points (e.g. string handling), it has certainly proven to be scalable. I am also quite sure there are many other programming languages and libraries that would benefit from something like this (e.g. Go and libmill/libdill). Part of the reason to implement asynchronous system calls is that they are very flexible: you could implement an existing (partially) synchronous interface on top of them for compatibility, you could implement something like coroutines/green threads on top, or you could use the asynchronous interface directly. Another part of the reason is that they can help save some of the cost of switching between the process and the kernel, or even between processes (in the case of a microkernel), as I outlined in my earlier comment.
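As a rough illustration of the "compatibility layer on top" idea, here is a sketch using a made-up submission/completion pair (async_submit/async_wait, not any real kernel API). The "kernel" side is stubbed out in user space so the example actually compiles and runs; a real kernel would queue the request and complete it asynchronously instead of doing the work inside async_wait().

    /* Made-up submission/completion interface (async_submit/async_wait);
     * the "kernel" side is stubbed in user space for illustration only. */
    #include <stdio.h>
    #include <unistd.h>

    enum async_opcode { ASYNC_READ };

    struct async_op {
        enum async_opcode opcode;
        int    fd;
        void  *buf;
        size_t len;
        long   result;               /* filled in on completion */
    };

    /* --- stubbed "kernel" side (illustration only) --------------------- */
    static struct async_op *pending; /* one outstanding operation at a time */

    static long async_submit(struct async_op *op)
    {
        pending = op;                /* a real kernel would queue this */
        return 0;                    /* ticket identifying the operation */
    }

    static long async_wait(long ticket)
    {
        (void)ticket;
        if (pending->opcode == ASYNC_READ)
            pending->result = read(pending->fd, pending->buf, pending->len);
        return pending->result;
    }

    /* --- compatibility wrapper: feels like read(2) --------------------- */
    static long compat_read(int fd, void *buf, size_t len)
    {
        struct async_op op = { ASYNC_READ, fd, buf, len, 0 };
        long ticket = async_submit(&op);
        return (ticket < 0) ? -1 : async_wait(ticket);
    }

    int main(void)
    {
        char buf[64];
        long n = compat_read(STDIN_FILENO, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("got %ld bytes: %s", n, buf);
        }
        return 0;
    }

A green-thread runtime would do the opposite of compat_read: submit the operation, mark the green thread as waiting on that ticket, and switch to another runnable green thread until the completion arrives.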
Yours sincerely,
Stephan.