Rusky, just to be clear, my only concern in this thread is that the OP has chosen to use a kernel API to read and write IO ports. I'm just saying that it achieves nothing. My example about the blob is just to point out that using a kernel API to do port IO does not protect against a malicious blob any better than any other method (and has all the disadvantages that have been mentioned).

Rusky wrote: That's missing the point. You have to trust some code to be your keyboard driver, no matter what kernel architecture you use. The benefit of a microkernel is that you don't have to trust the blob with anything else.

gerryg400 wrote: When the blob says "I'm a keyboard driver, give me access to the IO ports related to the keyboard" how do you know to trust the blob?
Monolithic vs. Microkernel hardware boot drivers
If a trainstation is where trains stop, what is a workstation ?
Re: Monolithic vs. Microkernel hardware boot drivers
Hmm. This may fall under Rule #0 (There are exceptions to every rule)... I will think on this.

gerryg400 wrote: Here we disagree. And I think that's okay. Remember we are talking about a microkernel here. Device drivers are not part of the kernel and so userspace and userspace libraries by design have more responsibility in this area than in a system with a monolithic kernel.

tjmonk15 wrote: Any OS that requires Userspace to deal with portability is a failure in this day and age (IMHO, IL of some sort is the way to go, compiled at install time). At the very least, it should be as easy as compiling multiple binaries from one codebase without any "#ifdef"s for arch(s).
This I do have an issue with. A Kernel API is no worse than using a GPF handler (and is probably the same). And more importantly, in terms of a "malicious" blob, if the user decides to install it, the OS should oblige. The OS should however be able to guarantee it won't crash everything else. Beyond that, the disadvantages you have mentioned seem to either be non-issues/irrelevant to "this" discussion (Trust), or are made up (Time requirements, i.e. I've seen minimum times required everywhere, never seen a maximum time).

gerryg400 wrote: ... my only concern in this thread is that the OP has chosen to use a kernel API to read and write IO ports. I'm just saying that it achieves nothing. My example about the blob is just to point out that using a kernel API to do port IO does not protect against a malicious blob any better than any other method (and has all the disadvantages that have been mentioned).
- Monk
Re: Monolithic vs. Microkernel hardware boot drivers
For the security model in the kernel that I'm thinking about using, one could argue that it is the other way around - "easier" isn't a term that has a definition here. Yes, I could set up TSS/IOPBs for every running process and then allow driver processes to manually leverage in/out, but I'm looking at running through the kernel because it's a simple-enough abstraction: run the I/O call through a system call structure where the kernel has the ability to determine whether or not the requesting process has the need or privilege to access that resource.

gerryg400 wrote: His full quote began "It's intuitive. It'll probably be easier to plumb the driver-hardware I/O through the kernel at first, and later change or rewrite those services later on." I was just pointing out that it's not easier.
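As a very rough sketch of what the kernel side of such a call could look like (the names port_grant and sys_port_out8 are invented for illustration; they aren't part of any existing kernel):

Code:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical record of the port ranges the kernel has granted a process. */
struct port_grant {
    uint16_t base;
    uint16_t count;
};

struct process {
    struct port_grant *grants;
    size_t             grant_count;
};

static bool may_access_port(const struct process *p, uint16_t port)
{
    for (size_t i = 0; i < p->grant_count; i++) {
        if (port >= p->grants[i].base &&
            port - p->grants[i].base < p->grants[i].count)
            return true;
    }
    return false;
}

/* Sketch of the system-call entry point for a byte-wide port write. */
long sys_port_out8(struct process *caller, uint16_t port, uint8_t value)
{
    if (!may_access_port(caller, port))
        return -1;                              /* not granted to this driver */

    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
    return 0;
}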
Could it have timing/performance side-effects or issues that are insurmountable and that require a system redesign? Sure. But at least by that point I'll have hard evidence that the existing system does not work and I'll then have the ability to look at that existing system and see if it can be redeemed or whether it needs to be changed entirely.
I threw that out as a simple means with which to check if a module was indeed valid. When I realized that cryptographic signing and verification of driver modules is essentially DRM 101, I dropped the notion. Hence the reason I reapproached the subject from the notion of a central manifest and driver permission groups/class codes.

gerryg400 wrote: It was the OP who offered that he is going to use cryptographic hashes to protect his kernel API. I never considered such a thing but since he is using them I thought that the kernel API was not necessary.
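Purely as an illustration of the manifest idea (the layout and field names here are hypothetical, not an existing format), a manifest entry might record what a driver claims and what it may be granted:

Code:
#include <stdint.h>

/* Hypothetical entry in a central driver manifest, consulted before the
 * kernel/device manager grants a driver any hardware resources. */
struct driver_manifest_entry {
    char     name[32];        /* e.g. "ps2.kbd"                          */
    uint8_t  pci_class;       /* device class the driver claims (if any) */
    uint8_t  pci_subclass;
    uint16_t io_port_base;    /* port range it may be granted            */
    uint16_t io_port_count;
    uint8_t  irq_line;        /* IRQ it may register for                 */
    uint32_t flags;           /* e.g. MAY_USE_DMA, MAY_MAP_MMIO          */
};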
Re: Monolithic vs. Microkernel hardware boot drivers
Agreed. I wouldn't use a GPF handler either. I would choose one of two things depending on how fine-grained I needed control to be. For fine-grained control I would use an IO bitmap for device drivers to control access to IO space. I don't believe there are any disadvantages to this scheme at all compared to a kernel API. It has the same complexity, same logging possibilities, and same protection level, but is significantly faster.

tjmonk15 wrote: This I do have an issue with. A Kernel API is no worse than using a GPF handler (and is probably the same). And more importantly, in terms of a "malicious" blob, if the user decides to install it, the OS should oblige. The OS should however be able to guarantee it won't crash everything else. Beyond that, the disadvantages you have mentioned seem to either be non-issues/irrelevant to "this" discussion (Trust), or are made up (Time requirements, i.e. I've seen minimum times required everywhere, never seen a maximum time).
If I felt that fine control wasn't needed I would simply set the IOPL of the driver process to 3.
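For reference, a sketch of the IOPB variant (assuming a 32-bit TSS as laid out in the Intel manuals; each driver process would need its own bitmap, or the bitmap would be rewritten on task switch; the helper names are made up):

Code:
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* 32-bit TSS followed by an I/O permission bitmap: one bit per port
 * (a set bit means "access denied"), plus the trailing 0xFF byte the
 * architecture requires. */
struct tss_with_iopb {
    uint32_t fields[25];      /* architectural TSS fields (100 bytes)  */
    uint16_t trap;            /* T flag, offset 0x64                   */
    uint16_t iopb_offset;     /* I/O map base address, offset 0x66     */
    uint8_t  iopb[65536 / 8 + 1];
} __attribute__((packed));

static struct tss_with_iopb tss;

void iopb_init(void)
{
    memset(tss.iopb, 0xFF, sizeof(tss.iopb));   /* deny every port by default */
    tss.iopb_offset = (uint16_t)offsetof(struct tss_with_iopb, iopb);
}

/* Clear the bits for ports [base, base+count) so this task may use them. */
void iopb_grant(uint16_t base, uint16_t count)
{
    for (uint32_t port = base; port < (uint32_t)base + count; port++)
        tss.iopb[port / 8] &= (uint8_t)~(1u << (port % 8));
}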
If a trainstation is where trains stop, what is a workstation ?
Re: Monolithic vs. Microkernel hardware boot drivers
How would you log a specific driver accessing a specific port using only an IO bitmap? (Hint: you can't.)

gerryg400 wrote: Agreed. I wouldn't use a GPF handler either. I would choose one of two things depending on how fine-grained I needed control to be. For fine-grained control I would use an IO bitmap for device drivers to control access to IO space. I don't believe there are any disadvantages to this scheme at all compared to a kernel API. It has the same complexity, same logging possibilities, same protection level but is significantly faster. If I felt that fine control wasn't needed I would simply set the IOPL of the driver process to 3.

tjmonk15 wrote: ...
- Monk
Re: Monolithic vs. Microkernel hardware boot drivers
Considering the primary purpose of this thread was to ask how core, required boot-level drivers are loaded or handled by microkernels, I'd say that purpose has been fulfilled.
For any of you who are interested, tomorrow morning I will come up with some new ideas as to how to tackle the issues at hand (the driver-hardware interface) after having done some reading in the research regarding the subject. As this thread has shown, it's an involved-enough conversation (and that's putting it mildly) that the discussion probably deserves its own thread.
To cap my end of this discussion, though, I will say something that I think has been overlooked (or, at least, rarely acknowledged in this thread) - this is my project. I haven't had a working kernel yet, and I'm ready to openly admit that. On top of this, I'm intending to write this kernel in a language I'm just starting to get a grasp of (Ada 2012), so it will be a long time (years) before any of these ideas, arguments or revelations come to pass. These ideas could work badly. There's even a good chance that, if I don't engineer the other parts in the right manner, the driver-hardware interface simply won't work, regardless of how well or badly it is constructed.
At this point, I'm not prioritizing a high degree of efficiency - I'm prioritizing something that works well enough to demonstrate that it has the fundamentals of a microkernel operating system. If at that point, I realize that it sucks and needs radical improvement, well, I'll go from there. Is a kernel API probably the "best" or the "easiest" way of doing things? As I said before, probably not. But I've always been of the camp that before you can make something work well, you must make it work.
Re: Monolithic vs. Microkernel hardware boot drivers
You would be able to log disallowed port accesses by handling GPFs. To log the allowed port accesses would require disallowing access while logging is turned on, which is less than desirable but no worse than doing a kernel call; and if it stops the driver working, you have the option of turning off logging.

tjmonk15 wrote: How would you log a specific driver accessing a specific port using only an IO bitmap? (Hint: you can't.)

gerryg400 wrote: Agreed. I wouldn't use a GPF handler either. I would choose one of two things depending on how fine-grained I needed control to be. For fine-grained control I would use an IO bitmap for device drivers to control access to IO space. I don't believe there are any disadvantages to this scheme at all compared to a kernel API. It has the same complexity, same logging possibilities, same protection level but is significantly faster. If I felt that fine control wasn't needed I would simply set the IOPL of the driver process to 3.

tjmonk15 wrote: ...
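A rough sketch of that #GP-based logging (heavily simplified: only the one-byte IN/OUT opcodes are decoded, the faulting code page is assumed to be readable from the handler, and the structure/function names are invented):

Code:
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical saved-register frame handed to the fault handler. */
struct trap_frame {
    uint32_t eax, edx;
    uint32_t eip;
    /* ... */
};

void log_port_access(uint32_t eip, uint16_t port, bool is_write);

/* Heavily simplified #GP handler: only the one-byte "IN AL/EAX, DX" and
 * "OUT DX, AL/EAX" forms are recognised.  A real handler would also have
 * to deal with operand-size prefixes, INS/OUTS and the immediate-port
 * encodings, and would either emulate the access (if it is merely being
 * logged) or terminate the offending driver (if it is actually denied). */
void gpf_handler(struct trap_frame *tf)
{
    const uint8_t *insn = (const uint8_t *)tf->eip;
    uint16_t port = (uint16_t)tf->edx;

    switch (insn[0]) {
    case 0xEC: case 0xED:                       /* IN  AL/EAX, DX */
        log_port_access(tf->eip, port, false);
        break;
    case 0xEE: case 0xEF:                       /* OUT DX, AL/EAX */
        log_port_access(tf->eip, port, true);
        break;
    default:
        return;                                 /* not a port access at all */
    }

    tf->eip += 1;                               /* skip the trapped instruction */
}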
If a trainstation is where trains stop, what is a workstation ?
Re: Monolithic vs. Microkernel hardware boot drivers
Basically, you have to handle GPFs (or use a Kernel API) to achieve that? Seems like extra work to do one of those + all the work of using a TSS per thread/process... Seems like a major disadvantage to me.

gerryg400 wrote: You would be able to log disallowed port accesses by handling GPFs. To log the allowed port accesses would require disallowing access while logging is turned on, which is less than desirable but no worse than doing a kernel call; and if it stops the driver working, you have the option of turning off logging.

tjmonk15 wrote: ...
- Monk
Re: Monolithic vs. Microkernel hardware boot drivers
physecfed, few people are actually developing microkernels so I did not wish to discourage you in any way. I hope you post more micro-kernel questions and ideas as you come up with them. I hope you didn't get the feeling I was trying to put you off. Good luck with your project.

physecfed wrote: Considering the primary purpose of this thread was to ask how core, required boot-level drivers are loaded or handled by microkernels, I'd say that purpose has been fulfilled.
For any of you who are interested, tomorrow morning I will come up with some new ideas as to how to tackle the issues at hand (the driver-hardware interface) after having done some reading in the research regarding the subject. As this thread has shown, it's an involved-enough conversation (and that's putting it mildly) that the discussion probably deserves its own thread.
To cap my end of this discussion, though, I will say something that I think has been overlooked (or, at least, rarely acknowledged in this thread) - this is my project. I haven't had a working kernel yet, and I'm ready to openly admit that. On top of this, I'm intending to write this kernel in a language I'm just starting to get a grasp of (Ada 2012), so it will be a long time (years) before any of these ideas, arguments or revelations come to pass. These ideas could work badly. There's even a good chance that if I don't engineer the other parts in the right manner that the driver-hardware interface simply won't work, regardless of how good or bad it was constructed.
At this point, I'm not prioritizing a high degree of efficiency - I'm prioritizing something that works well enough to demonstrate that it has the fundamentals of a microkernel operating system. If at that point, I realize that it sucks and needs radical improvement, well, I'll go from there. Is a kernel API probably the "best" or the "easiest" way of doing things? As I said before, probably not. But I've always been of the camp that before you can make something work well, you must make it work.
If a trainstation is where trains stop, what is a workstation ?
Re: Monolithic vs. Microkernel hardware boot drivers
The GPF handling code and the system call code would be essentially identical in complexity, other than a handful of lines of code to recognise the IO instruction and log something. If an API is used one still needs some sort of data structure to store which ports are available to each process. There's really no difference in the amount of work needed.

tjmonk15 wrote: Basically, you have to handle GPFs (or use a Kernel API) to achieve that? Seems like extra work to do one of those + all the work of using a TSS per thread/process... Seems like a major disadvantage to me.

gerryg400 wrote: You would be able to log disallowed port accesses by handling GPFs. To log the allowed port accesses would require disallowing access while logging is turned on, which is less than desirable but no worse than doing a kernel call; and if it stops the driver working, you have the option of turning off logging.

tjmonk15 wrote: ...
If a trainstation is where trains stop, what is a workstation ?
Re: Monolithic vs. Microkernel hardware boot drivers
Right, you can do your preferred way + one of your non-preferred ways, or you can just do one of your non-preferred ways. Seems an obvious choice to me.

gerryg400 wrote: The GPF handling code and the system call code would be essentially identical in complexity other than a handful of lines of code to recognise the IO instruction and log something. If an API is used one still needs some sort of data structure to store which ports are available to each process. There's really no difference in the amount of work needed.

tjmonk15 wrote: ...
- Monk
Re: Monolithic vs. Microkernel hardware boot drivers
Hi,
Brendan wrote: A micro-kernel sacrifices some performance (due to IPC costs), and the only reason to do this is to avoid the need for "trusted drivers". You should be able to download a "binary blob" device driver via a Chinese peer to peer file sharing site (that was uploaded by a guy who calls himself "PawninU"), and use that driver on a server for an international bank without worrying about anything (other than the driver crashing or not working with the device) because you're using a micro-kernel.

gerryg400 wrote: When the blob says "I'm a keyboard driver, give me access to the IO ports related to the keyboard" how do you know to trust the blob?

You should be able to trust that the blob:
- Can't touch any kernel data
- Can't touch any IO ports, memory mapped areas, etc; that belong to other devices
- Can't touch any data that belongs to any other process (drivers, applications, etc)
- (Optionally/hopefully) Can't use networking
Note: Technically, keyboard driver would communicate with "PS/2 controller" driver, and keyboard driver itself wouldn't be given access to any IO ports. If "PS/2 controller driver" is dodgy it'd be able to interfere with data going to/from all PS/2 device drivers (keyboard, mouse, touchpad, bar-code scanner, etc).
My mistake - I thought you meant "harder than setting IOPL to 3 and not keeping track of which process can access which IO ports at all" (e.g. just an "all IO ports or no IO ports" flag for each process).

gerryg400 wrote: Of course, linked lists, bitmaps, dynamic memory management and the like are trivial. But so is setting up the IO bitmap. The point is that this method saves no complexity over the other methods.

Brendan wrote: That information is trivial to obtain from PCI configuration space BARs. Of course I'd have a "device manager" process that does device enumeration, etc (and starts drivers, and tells kernel which driver should be allowed to use which resources).
It's also trivial to add a few fields to whatever the kernel uses as a "process data structure".
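For what it's worth, pulling a BAR out of PCI configuration space via the legacy 0xCF8/0xCFC mechanism is only a few lines (a sketch; the inl/outl wrappers and function names are illustrative):

Code:
#include <stdint.h>

#define PCI_CONFIG_ADDRESS 0xCF8
#define PCI_CONFIG_DATA    0xCFC

static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

static uint32_t pci_config_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
{
    uint32_t addr = (1u << 31)                    /* enable bit */
                  | ((uint32_t)bus << 16)
                  | ((uint32_t)(dev & 0x1F) << 11)
                  | ((uint32_t)(fn  & 0x07) << 8)
                  | (off & 0xFC);
    outl(PCI_CONFIG_ADDRESS, addr);
    return inl(PCI_CONFIG_DATA);
}

/* BARn lives at offset 0x10 + 4*n in the device's configuration header. */
static uint32_t pci_read_bar(uint8_t bus, uint8_t dev, uint8_t fn, unsigned n)
{
    return pci_config_read32(bus, dev, fn, (uint8_t)(0x10 + 4 * n));
}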
gerryg400 wrote: And if IO is required to be done while processing an interrupt message (prior to sending the EOI at the end of processing), wouldn't there be a concern about the extra time taken to make all the system calls required to process the interrupt under this scheme? I'm really thinking that this method is probably unworkable.

Timing is all variable anyway, due to things like different computers being faster/slower, interrupts, task switching, power management, SMI/SMM, etc. There are no devices that have timing requirements so strict that they cause problems.

Typically; the kernel's interrupt handler sends "IRQ occurred" messages out, causing (extremely high priority) "driver thread/s" to be unblocked, which ("immediately") preempt any (lower priority) threads that were running, causing a task switch; then the thread/s examine the message they received, check if their device caused the interrupt and handle it if necessary, and call an "I finished with that IRQ" kernel function (and then block waiting for a message). While all this is going on interrupts are enabled and only lower priority IRQs (as determined by the interrupt controller's "IRQ priority" scheme) are blocked. The overhead of driver thread/s using a kernel function to access IO ports (within its interrupt handling) is negligible compared to the rest of it.
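A sketch of what such a driver thread's loop could look like, with placeholder message and kernel-call names (none of these are real APIs):

Code:
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical microkernel IPC/IRQ primitives (placeholder names). */
struct message { int type; uint32_t irq; /* ... */ };
#define MSG_IRQ_OCCURRED 1

void kern_wait_message(struct message *msg);   /* block until a message arrives */
void kern_irq_handled(uint32_t irq);           /* "I finished with that IRQ"    */
bool device_caused_irq(void);                  /* read the device's status reg  */
void device_service_irq(void);                 /* do the actual device work     */

void driver_irq_thread(void)
{
    struct message msg;

    for (;;) {
        kern_wait_message(&msg);               /* sleeps until the kernel sends one */

        if (msg.type == MSG_IRQ_OCCURRED) {
            if (device_caused_irq())           /* the IRQ line may be shared        */
                device_service_irq();
            kern_irq_handled(msg.irq);         /* lets the kernel send the EOI      */
        }
    }
}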
Note that my kernels typically have support for "batch kernel functions". Essentially; instead of doing one kernel function at a time (and paying the "CPL=3 -> CPL=0 -> CPL=3" switching cost for each function) you can build a list and ask kernel to process the list (and only pay the "CPL=3 -> CPL=0 -> CPL=3" switching cost once for the entire list). This means that (e.g.) a driver can ask kernel to write to an IO port, then send a message, then do the "I finished with the IRQ" and then do the "get message/block waiting for message"; all in a single "do this list" kernel function.
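And a sketch of what a "batch kernel functions" interface of that kind could look like (entirely illustrative; the opcodes and kern_submit_batch are invented):

Code:
#include <stddef.h>
#include <stdint.h>

/* One entry in a hypothetical "do this list" submitted with one kernel call. */
enum batch_op { OP_PORT_OUT8, OP_SEND_MSG, OP_IRQ_HANDLED, OP_WAIT_MSG };

struct batch_entry {
    enum batch_op op;
    uint64_t      arg0;      /* e.g. port, destination, IRQ number */
    uint64_t      arg1;      /* e.g. value, message pointer        */
};

long kern_submit_batch(const struct batch_entry *list, size_t count);

/* Example: acknowledge the device, notify a client, finish the IRQ and
 * block for the next message - one CPL3 -> CPL0 -> CPL3 round trip in total. */
void finish_irq_batched(uint16_t ack_port, uint8_t ack_value,
                        uint64_t client, uint64_t msg_ptr, uint32_t irq)
{
    struct batch_entry list[] = {
        { OP_PORT_OUT8,   ack_port, ack_value },
        { OP_SEND_MSG,    client,   msg_ptr   },
        { OP_IRQ_HANDLED, irq,      0         },
        { OP_WAIT_MSG,    0,        0         },
    };
    kern_submit_batch(list, sizeof(list) / sizeof(list[0]));
}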
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Re: Monolithic vs. Microkernel hardware boot drivers
gerryg400 wrote: physecfed, few people are actually developing microkernels so I did not wish to discourage you in any way. I hope you post more micro-kernel questions and ideas as you come up with them. I hope you didn't get the feeling I was trying to put you off. Good luck with your project.

That's fair, and in this new thread I'll try to lay my ideas out with a more solid foundation to provide a better and clearer ground for discussion. In concession, I am somewhat worried about the timing - the current way I have it laid out is:
Code:
User Application/Other Process ---(syscall)---> Kernel ---(IPC)---> Driver ---(syscall)---> Kernel ---(I/O Access)---> Hardware
For what it's worth, I like the fact that I have people around who are willing to ask the tough questions with a tough stance - it forces me to not just have a perspective, but to be able to defend it as well. At the end of the line with whatever sort of kernel results from this, I'll want to say that I made the decisions I did because I had reasons and viewpoints to support them, and conversations like this build that. I wasn't too sure even coming into this question that a microkernel was my end objective, but after some of the discussions here it's the clear pursuit for me because I see potential (and some interesting challenges) in it.
Re: Monolithic vs. Microkernel hardware boot drivers
physecfed wrote: One of the main purposes of this new thread that I'm going to work on tomorrow is to ask the question of whether or not that can be made more efficient (i.e. cutting out parts of the pipeline while leaving the kernel-mediated access in place) or whether or not it should be abandoned. I'm also going to look into resources on other microkernels (MINIX, L4, Mach, etc.) to see how they approach the issue.

I wouldn't worry about performance too much (after I criticised your idea for its speed). Try to design so that parts can be easily replaced and refactored as you build your system. Don't get too wedded to a particular idea about things.
One problem with microkernels is that you need to build a fair pile of stuff before you are in a position to test your ideas. I'd encourage you to do just that. Start building and see how it goes.
If I get some time later in the week I'll replace all the in and out instructions with system calls in my OS and see what happens.
If a trainstation is where trains stop, what is a workstation ?
Re: Monolithic vs. Microkernel hardware boot drivers
Hi,
physecfed wrote: Considering the primary purpose of this thread was to ask how core, required boot-level drivers are loaded or handled by microkernels, I'd say that purpose has been fulfilled.

One thing that I don't think has been fully fulfilled is..
For monolithic kernels the kernel is a "very significant" part of booting; and a lot of people (and tutorials, etc) naturally make a "kernel is very significant" assumption.
For a micro-kernel, this isn't true - the kernel itself is more like a small (but special) "pthreads" library. A huge amount of stuff happens, then you start the little "library" (kernel), then a huge amount of stuff continues. The kernel is relatively insignificant - a tiny 64 KiB piece dwarfed by tens of MiB of "everything else that's needed to get from start of boot loader all the way to user login".
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.