There is not much disagreement
Agreed. Note that I'm not a believer in microkernels as a goal; the goal should be decided per key component.

Quote:
The reasons for a microkernel are:
- The kernel is small and hence has fewer bugs
I'm not sure whether this is a good reason for it. Even a monolithic kernel can be modularized.
I do believe in independent modules (independent meaning they can be restarted without affecting the others) and safe message passing. Half the time, though, microkernels make the kernel small yet still leak object data and APIs, or put things in user space just for the sake of the kernel being small.

Above I'm just stating the common belief, and one of the key arguments for microkernels: that less code = fewer bugs. Personally I think better code, even if it means more lines, = fewer bugs. I.e., I don't think Singularity with its 100K-line TCB will have 12 times as many bugs as the Minix 3 kernel (8K lines, though Minix doesn't count all the OS-exposed services, since they're not in the "kernel"). It could be argued that Singularity is not technically a microkernel, since device drivers and apps run at ring 0 and even share an address space with the kernel, but they achieve the same goals.
Agreed for services in general, and by independent I also meant to imply that they can't trash each other. However, I'm only questioning the scheduler and MM.

Quote:
- Apps and services are isolated in independent user processes, and services can be restarted (since they are independent).
This is the same fallacy you mentioned before: it is not just about the possibility of restarting a service, but also about shielding the system from a bad service. Crashing is the ultimate demise, but before that it can easily wreak havoc on the rest of your system.
I don't understand this. If one of them crashes, the other processes can't get their threads scheduled or their memory mapped in on a context switch, so does shielding against them matter? Though it's dubious, you could even argue that these critical components overwriting something is good, if they can then continue running rather than faulting.

Quote:
For the MM and scheduler, these do not hold true.
You are setting up a straw man; I never claimed they did (although theoretically it could be possible, for super-robust systems, e.g. by keeping their data in a known location). It's indeed not possible to restart the MM or scheduler (without effectively rebooting the system), but by shielding them in user space they can at least not trash user processes when they fail (perhaps causing data corruption on disk, etc.).
And I agree it may be possible for one of these to be restarted, but it's a hard task. When that is achieved, the component should be moved out of the kernel, but not before.
Anyway, I'm not saying one way is better than the other (that would take a more detailed design doc to argue the pros and cons), just that this should be considered carefully for key kernel components like the MM and scheduler, rather than going nanokernel for the sake of it. My base criterion would be: if the crash of a service is fatal, then it takes a very good case for it not to be in the kernel (obviously crashes of drivers, video, file systems, network, etc. are non-fatal). Having the scheduler and MM in the kernel also follows KISS.
Regards,
Ben