Hi All,
Having just started our OS course at uni, my interest has been renewed and I've been thinking about various things (apologies if the following ideas are trash or if I'm effectively talking about common knowledge, but it has been some time!).
I want to design a microkernel, so obviously a major worry is the speed of IPC. The basic design is that each task acts as a server for whatever device/service it provides, except that the kernel takes care of the listening part (i.e. an initial thread for each task sets up that task's data structures etc., and then the kernel manages the worker threads, trying to keep alive/kill threads in whatever way it deems most efficient - another part I need to figure out!). The kernel obviously needs to know what each task is listening on (which could include IRQs etc.), and other tasks also need to know which task to contact for disk access etc., so this will all be set up in some config file. These allocations will be protected: i.e. if the config file says that the "fat32" task should use "disk0", it will be unable to use "disk1" even if it tried.
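For illustration, such a config file might look something like this (the format and all names are entirely made up):

    # Hypothetical format: which server provides/uses which resources
    server ata      provides disk0 disk1   irq 14 15
    server fat32    uses disk0
    server keyboard irq 1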
When a task requests one of these other tasks, the kernel will add it to a queue, then allocate it the next available worker thread for the specified task (there'll also be settings for the maximum number of threads per task etc.). When this thread has been allocated, a shared memory region will also be set up between the two threads (either of a fixed size, or of a size specified in the request). The idea is that all communication is done through this memory region (user-space libraries will be provided so that apps which don't need or want to implement their own use of it can do this easily). Originally I was hoping that this might lead to an incredibly small/simple set of kernel syscalls:
- Memory Allocation / Freeing (i.e. mem_alloc, mem_realloc, mem_free)
- Task Requesting, Enlarging the Shared Space and Ending a Request* (task_request, task_enlarge, task_end)
*Ending a request will result in the freeing of the shared memory in both threads, and the "requestED" worker thread being marked for termination or allocation to another request. There is no need for a call so that a thread can "die" independently, as it will always be the "requestED" thread of something (even if it's just an IRQ).
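To make that concrete, here is a rough sketch of what the interface might look like in C (all names, types and signatures are hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    typedef int32_t task_id;  /* a server, as named in the config file */
    typedef int32_t req_id;   /* a handle for one outstanding request */

    /* Memory allocation / freeing */
    void *mem_alloc(size_t bytes);
    void *mem_realloc(void *ptr, size_t bytes);
    void  mem_free(void *ptr);

    /* Queue a request for a worker thread of the given server; the kernel
       maps a shared region of shared_bytes into both address spaces and
       returns its address on the caller's side in *shared. */
    req_id task_request(task_id server, size_t shared_bytes, void **shared);

    /* Grow the shared region of an existing request. */
    int task_enlarge(req_id req, size_t new_bytes);

    /* End the request: unmap the shared region in both threads and mark
       the worker thread for termination or reuse by another request. */
    int task_end(req_id req);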
However, I then started to wonder whether it would be useful to have a syscall by which the requestING thread could share another piece of memory (which might already contain data), and realised that it basically comes down to whether it's quicker to copy data into the existing shared memory (no syscalls required), or to make a few syscalls and avoid the copy.
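The two options might look like this, where req_share is the hypothetical extra syscall and req_id is as in the sketch above:

    #include <stddef.h>
    #include <string.h>

    #define SMALL_MSG 4096  /* crossover point - would need measuring */

    /* Hypothetical extra syscall: map an existing buffer into the worker's
       address space instead of copying into the shared region. */
    int req_share(req_id req, void *buf, size_t len);

    void send(req_id req, void *shared, void *local_buf, size_t len) {
        if (len <= SMALL_MSG)
            memcpy(shared, local_buf, len);  /* no syscall; cost grows with len */
        else
            req_share(req, local_buf, len);  /* one syscall; roughly fixed cost */
    }

As a rule of thumb, for small messages the plain copy usually wins: a syscall plus the page-table/TLB manipulation has a fixed overhead regardless of size, while copying a few hundred bytes is cheap. Remapping only pays off for large buffers, and the exact crossover point would need measuring on real hardware.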
Sorry for the length of the post, but could people please comment on the general design, and also on the relative speed of syscalls versus memory copying.
Thanks in advance,
Pete
Re: General MicroKernel Design / IPC Relative Speeds
Some ideas from a largely practical as opposed to theoretical point of view:
Pete wrote:
so this will all be set up in some config file

How do I (a task, the kernel, whatever) read the config file? If it's stored on disk, how do I get to it? (Bear in mind that I have to use the config file to look up the ATA driver ID...) The point is also valid for an in-memory config, because I still need to find the VFS driver to traverse the filesystem graph.
Pete wrote:
These allocations will be protected: i.e. if the config file says that the "fat32" task should use "disk0", it will be unable to use "disk1" even if it tried.

How does the kernel know what "disk0" is? I would assume the same ATA server serves both disk0 and disk1 (master and slave, for example), so in order for the kernel to apply protection and differentiate between the two, it needs some knowledge of what the server is supposed to be doing and what interface it uses (at a high level: how many disks it has, whether they are partitioned, etc.).
Pete wrote:
(i.e. an initial thread for each task sets up that task's data structures etc., and then the kernel manages the worker threads, trying to keep alive/kill threads in whatever way it deems most efficient - another part I need to figure out!) [...] When a task requests one of these other tasks, the kernel will add it to a queue, then allocate it the next available worker thread for the specified task (there'll also be settings for the maximum number of threads per task etc.).

Seems like a large overhead; I would have thought that the most efficient method might be to not multithread each server at all. Why does each request require exclusivity over a thread?
Pete wrote:
Originally I was hoping that this might lead to an incredibly small/simple set of kernel syscalls:
- Memory Allocation / Freeing (i.e. mem_alloc, mem_realloc, mem_free)
- Task Requesting, Enlarging the Shared Space and Ending a Request (task_request, task_enlarge, task_end)

How do I:
* Create/register a new server?
* fork?
* exec?
* get my current process ID (pid)?
I would have thought you'd need more syscalls as a minimal set than you've specified.
Just some thoughts
JamesM
Thanks for the comments.
The plan for loading the config/servers etc. was to have this done by some nice bootloader! Clearly even this will require some kind of interpretation by the kernel, but the way GRUB simply loads modules to memory addresses means that this is fairly trivial.
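For instance, with GRUB legacy the config and servers could all be listed as modules in menu.lst (paths illustrative); GRUB loads each one into memory and hands the kernel their load addresses via the multiboot information structure, so no disk driver is needed to find them:

    title  MyOS
    kernel /boot/kernel.bin
    module /boot/config
    module /boot/ata_server
    module /boot/fat32_server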
I hadn't really got to thinking about how this configuration could be changed at runtime, or how servers could be registered/removed - I'd mainly just been thinking about what the standard running design might be. So any ideas would be appreciated! (Obviously this is potentially a major security risk.)
The kernel wouldn't need to know the difference between various servers. For things such as ATA, a single task might provide two servers, which would be treated as separate by the kernel. Alternatively, two instances of the same server could be created for multiple instances of the device (this would be the "nicer" way, but I realise it might not be the most efficient for some devices). I'll try to draw up a diagram later which shows the structure, as I think that might explain it better (or make me realise the flaw!).
Each request has exclusivity over a thread for the duration of the request; that thread is not then terminated, but reused. I believe this is how most single-threaded servers work (correct me if I'm wrong) - the only difference is that the kernel controls the request part. (I'm not sure whether this means that each thread will be of the form "while (info = get_request()) { ... }", or whether it will simply be "void handle_request(info) { ... }" with the kernel handling putting the info in the right place etc. - see the sketch below.)
Information about the location of the shared memory, the requesting thread, the pid etc. would be given in the "info" structure mentioned above. I'm not quite sure about fork/exec; I'm half wondering whether the way I'm trying to do things would mean that they could be excluded - but I won't claim that yet, as it's bound to be wrong.
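In C the two forms might look roughly like this (the request_info layout and all names are hypothetical; req_id and task_end are as in the earlier sketch):

    #include <stddef.h>
    #include <stdint.h>

    /* What the kernel might hand to a worker for each request. */
    typedef struct {
        req_id   req;           /* handle to pass to task_end */
        int32_t  pid;           /* the requesting task */
        void    *shared;        /* the shared memory region */
        size_t   shared_bytes;  /* its current size */
    } request_info;

    void handle(request_info *info);  /* server-specific work */
    request_info *get_request(void);  /* blocks until a request arrives */

    /* Form 1: the worker pulls requests itself in a loop. */
    void worker_loop(void) {
        request_info *info;
        while ((info = get_request()) != NULL) {
            handle(info);
            task_end(info->req);
        }
    }

    /* Form 2: the kernel runs the loop and calls an entry point per
       request, having already put the info in the right place. */
    void handle_request(request_info *info) {
        handle(info);
    }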
Definitely stuff for me to think about!
Pete