Opinions on my event system
Posted: Sun Oct 30, 2011 8:08 pm
by Jezze
Hi guys!
I wanted to write a few lines about the new event system I just implemented, to see what you think about it. Before venturing any further I could use some opinions.
It works like this: every syscall triggers an event. A program in user space can attach listeners to these events, each containing a callback to one of its functions that it wants to run when the event occurs.
This is what it looks like:
Code: Select all
#include <fudge.h>

void explain()
{
    /* runs whenever the attached event fires */
    file_write_format(FILE_STDOUT, "You executed a program it seems...\n");
    call_detach(0x03);  /* stop listening */
    call_exit();
}

void main(int argc, char *argv[])
{
    call_attach(0x03, explain);  /* 0x03 = the exec syscall */
    call_wait();  /* stay resident until an event fires */
}
The code works like this: when the program is started from the shell, it makes a syscall called attach that associates a callback with a number. The number 0x03 in this case means it listens to the syscall called exec. Then it makes a syscall I called wait, which is like exit except it doesn't deallocate the memory and just leaves the program in a waiting state.
After wait we are back at the shell again. Now I start another program, and just as I press enter, the text "You executed..." appears before the normal output of the other program, because the callback was triggered. The callback detaches the event and exits, but it could just as well run call_wait() again and keep printing that text before every program I start.
This event system is very basic at the moment and I'll have to think more about what needs to be included.
EDIT: Added an example output for clarity
Code: Select all
fudge:/$ date
Date: 2011-10-31 2:27:41
fudge:/$ event1
fudge:/$ date
You executed a program it seems...
Date: 2011-10-31 2:27:41
fudge:/$
Where event1 is the exact program you see at the top of this post.
Re: Opinions on my event system
Posted: Mon Oct 31, 2011 3:31 am
by ACcurrent
Nice! I can see how that works for GUI-based systems and daemons. An event-based kernel. Cool: less RAM usage, as processes can sleep until an event is triggered.
Re: Opinions on my event system
Posted: Mon Oct 31, 2011 8:11 am
by bluemoon
It seems you're bringing back the idea of TSR + service hooks from the DOS days. Did I understand correctly?
Re: Opinions on my event system
Posted: Mon Oct 31, 2011 8:50 am
by Jezze
ACcurrent: Yeah, that was what I was thinking: that while a program waits it won't consume any resources.
Berkus: I hadn't thought of it as being similar to signals, but yes, I think you are right.
Bluemoon: Wow, I didn't know DOS had that. Yes, it seems like exactly the same thinking. Now I need to read up on it. Found the Wikipedia article.
Re: Opinions on my event system
Posted: Mon Oct 31, 2011 9:56 am
by bluemoon
On a modern multi-tasking OS with swap memory, a "TSR" or daemon is usually done by setting a process's priority to idle (so the process still gets executed when the system is idle) and letting its unused memory pages be swapped out naturally. The idea of a "never-execute" priority that only gets priority on events sounds interesting for mobile platforms, just to save battery life.
Re: Opinions on my event system
Posted: Mon Oct 31, 2011 10:29 am
by xenos
bluemoon wrote: the idea of a "never-execute" priority that only gets priority on events sounds interesting for mobile platforms, just to save battery life.
Wouldn't this "never-execute" priority be equivalent to blocking the process, and getting priority on an event would be equivalent to unblocking the process when an event occurs and maybe adjusting its priority?
Re: Opinions on my event system
Posted: Mon Oct 31, 2011 10:47 am
by NickJohnson
@Jezze: how exactly does the kernel call the event handler? Does it interrupt whatever the process is currently doing (like a signal) or does it spawn a new thread to handle the event?
My system does the latter in a similar fashion to yours, where a driver/daemon process kills its main thread but stays resident, until a message/event spawns a thread on its handler, and performance is quite good even though this mechanism is used for absolutely all IPC. It seems like a signal-style thing would be simpler and faster, but probably won't scale if a lot of events are being sent.
Re: Opinions on my event system
Posted: Mon Oct 31, 2011 4:15 pm
by Jezze
As the design stands at the moment, it interrupts the program that made the syscall, which in this case would be the shell. Actually, you could say the shell already interrupted itself by making a syscall, so we are already in kernel mode and there is basically no overhead like there would be with signals. Then it just swaps in the program that listened on the current syscall (in other words, event1). When event1's callback is complete, it swaps the shell back in and continues as if nothing happened. By swap I just mean it changes the page directory and some registers (eip, esp, ebp).
I think your idea about spawning a new thread would scale a lot better if there are many listeners active on many syscalls, but I won't be using it because my goal is to maximize the predictability, or determinism, of my OS, which means I don't schedule stuff internally. I can only speculate on how many listeners there would be on average in a big system (think a system the size of Linux). I imagine there could be quite many, in which case my approach would suck.
It is good to know the performance is promising. I'm especially worried about adding listeners on interrupts, in particular the timer interrupt, which would let user programs work as schedulers; that is the next step.
Re: Opinions on my event system
Posted: Mon Oct 31, 2011 7:46 pm
by Brendan
Hi,
My first concern would be security. Can malicious code attach listeners to monitor the activity of others, and how much data (in addition to timing info) about other software's syscalls can malicious code obtain? Can malicious code use it to do things like "selective denial of service"? Even something relatively benign (like adding a 500 ms delay when a competitor's product uses a syscall) might be a problem.
My next concern would be concurrency. For multi-threaded software (of reasonable complexity), things like your callbacks are extremely hard to use: in general, things like mutexes/semaphores/locks are used to ensure orderly access to data, but the event handler can't assume that the thread it interrupted wasn't already holding a mutex/lock, and therefore can't assume that an attempt to acquire a mutex/semaphore/lock won't lead to deadlock (e.g. event handler waiting to acquire a lock that can't be released until the event handler returns to whatever it interrupted). To get around that problem the event handler must either be so simple that it can use lockfree/blockfree algorithms; or spawn a thread (and then have no way of gracefully handling "failed to spawn" errors).
If the kernel automatically spawns a new thread for each event, then I'd be worried about performance and error handling. The overhead of creating a thread and then destroying it, plus the overhead of task switching, can vary (how fast is "fast enough"?), and tends to get worse as an OS gains features (things like support for FPU/MMX/SSE/AVX, support for measuring CPU time/cycles used by each thread, support for "per-thread" debugging and performance monitoring, etc. all add to the overhead of thread creation, destruction and task switching). You'd also have the same problem with software being unable to gracefully handle errors when the kernel is unable to spawn the thread (e.g. out of memory, out of space, out of "thread IDs", etc.).
Finally, I'd be worried about scheduling and thread priorities. For example, if a high priority thread calls a syscall but 100 low priority threads have installed listeners, does that make it impossible for the scheduler to ensure important (high priority) work is done before unimportant (low priority) work? Can things like priority inversion be prevented?
Note: "event queues" would solve almost all of these problems - when an event occurs, you push it onto a FIFO queue, and the receiver takes events from the FIFO queue it wants to.
Cheers,
Brendan
Re: Opinions on my event system
Posted: Tue Nov 01, 2011 11:08 am
by Jezze
Brendan wrote: My first concern would be security. Can malicious code attach listeners to monitor the activity of others, and how much data (in addition to timing info) about other software's syscalls can malicious code obtain? Can malicious code use it to do things like "selective denial of service"? Even something relatively benign (like adding a 500 ms delay when a competitor's product uses a syscall) might be a problem.
Yeah, I need to be very careful when I decide what data a program can get from an event. At the moment I've gone with nothing, but that will probably change. I think I might go with a solution where the only processes it can listen to are the ones that are child processes of its own. For listening on interrupts, the only processes that can listen will be the ones that have the right privileges, i.e. running as root.
Brendan wrote: My next concern would be concurrency. For multi-threaded software (of reasonable complexity), things like your callbacks are extremely hard to use: in general, things like mutexes/semaphores/locks are used to ensure orderly access to data, but the event handler can't assume that the thread it interrupted wasn't already holding a mutex/lock, and therefore can't assume that an attempt to acquire a mutex/semaphore/lock won't lead to deadlock (e.g. event handler waiting to acquire a lock that can't be released until the event handler returns to whatever it interrupted). To get around that problem the event handler must either be so simple that it can use lockfree/blockfree algorithms; or spawn a thread (and then have no way of gracefully handling "failed to spawn" errors).
If the kernel automatically spawns a new thread for each event, then I'd be worried about performance and error handling. The overhead of creating a thread and then destroying it, plus the overhead of task switching, can vary (how fast is "fast enough"?), and tends to get worse as an OS gains features (things like support for FPU/MMX/SSE/AVX, support for measuring CPU time/cycles used by each thread, support for "per-thread" debugging and performance monitoring, etc. all add to the overhead of thread creation, destruction and task switching). You'd also have the same problem with software being unable to gracefully handle errors when the kernel is unable to spawn the thread (e.g. out of memory, out of space, out of "thread IDs", etc.).
Brendan wrote: Finally, I'd be worried about scheduling and thread priorities. For example, if a high priority thread calls a syscall but 100 low priority threads have installed listeners, does that make it impossible for the scheduler to ensure important (high priority) work is done before unimportant (low priority) work? Can things like priority inversion be prevented?
Note: "event queues" would solve almost all of these problems: when an event occurs, you push it onto a FIFO queue, and the receiver takes events from the FIFO queue when it wants to.
Some of these problems are solved by the fact that I don't have a scheduler, so not all of them are applicable here. I try to stay away from locks as much as possible, but I don't think I can live totally without them, so I'll have to see what problematic cases appear later on.
Thanks for the great feedback!
Re: Opinions on my event system
Posted: Sun Nov 13, 2011 9:01 pm
by ACcurrent
Some people should look at capability-based OSes like EROS.