Modules Tutorial....?
Posted: Sun Dec 02, 2007 2:16 pm
Does anyone here know a good tutorial for Module Loading/Unloading etc...?
I already searched the web.
-JL
The Place to Start for Operating System Developers
http://f.osdev.org/
Quote: I will stand by the statement that module loading is almost always a "Bad Idea".

Any arguments to support this statement? Both Windows and Linux support it and they seem to have no intention of removing it, so it can't be that much of a design error.
piranha wrote: Does anyone here know a good tutorial

The problem is, tutorials don't teach you to figure things out on your own. And in the context of OS development, tutorials are rare and limited to the basic steps. Loading modules is highly kernel-specific, so do not count on the existence of any tutorial, let alone one that applies to your kernel.
piranha wrote: Does anyone here know a good tutorial for Module Loading/Unloading etc...?
I already searched the web.
-JL

Don't look at tutorials. They're only to start you off.
Quote: Any arguments to support this statement? Both Windows and Linux support it and they seem to have no intention of removing it, so it can't be that much of a design error.

Countless hours of banging my head on my desk dealing with module unloading in Linux have led me to a 'bad gut feeling' about this. The real points have been made countless times in micro versus monolithic kernel debates. I guess a microkernel is really doing 'modules' at heart anyway. Anyway, I don't see much need in opening that can of worms again; it's been gone into again and again.
Fate wrote:
Quote: Any arguments to support this statement? Both Windows and Linux support it and they seem to have no intention of removing it, so it can't be that much of a design error.
Countless hours of banging my head on my desk dealing with module unloading in Linux have led me to a 'bad gut feeling' about this. The real points have been made countless times in micro versus monolithic kernel debates. I guess a microkernel is really doing 'modules' at heart anyway. Anyway, I don't see much need in opening that can of worms again; it's been gone into again and again.

You missed the main, important reason why monolithic kernels use modules in the first place:
I fall in the microkernel line of thought, and the 'module' method seems like adding more complication to a monolithic kernel design for practical benefits that just don't matter for a hobby design. (Unless some aspect of that design REQUIRES modules for its neato factor.) The real benefits of modules (and the reason Linux and NT use them) are greatly diminished in a hobby OS:
1. The abstraction gained will likely be lost by the fact the developer has complete knowledge of the rest of the kernel.
2. That same abstraction could be gained by designing your kernel appropriately.
3. You probably aren't going to be running this thing on any range of hardware (other than your computer), and chances are near 0 that another developer will code up a driver.
Micro or monolithic kernel design, you need to get to the point of task switching and running programs before thinking about either - and at that point, you wouldn't need a tutorial.
Fate wrote: Countless hours of banging my head on my desk dealing with module unloading in Linux have led me to a 'bad gut feeling' about this.

That is module unloading, and a specific instance of it. That it is Linux does not mean it can't be done better.
Quote: The real points have been made countless times in micro versus monolithic kernel debates. I guess a microkernel is really doing 'modules' at heart anyway. Anyway, I don't see much need in opening that can of worms again; it's been gone into again and again.

The key difference is that drivers are loaded in either kernel space or user space. Monolithic kernels are faster; microkernels are more secure. Given that the OP wanted module loading, I guess he already made the choice.
Quote: I fall in the microkernel line of thought, and the 'module' method seems like adding more complication to a monolithic kernel design for practical benefits that just don't matter for a hobby design.

Even hobbyists can be perfectionists. Having the perfect design can be more satisfying than having a working kernel. The Eleanore Semaphore archetype isn't too rare at all.
Quote: The real benefits of modules (and the reason Linux and NT use them) are greatly diminished in a hobby OS:
1. The abstraction gained will likely be lost by the fact the developer has complete knowledge of the rest of the kernel.

That doesn't mean that the writer adheres to his own design principles - you have coders and good coders. Not writing kluges is a key feature of the latter.
Quote: 2. That same abstraction could be gained by designing your kernel appropriately.

That's not the point of modules - the point is keeping the unnecessary stuff out of the main kernel.
Quote: 3. You probably aren't going to be running this thing on any range of hardware (other than your computer), and chances are near 0 that another developer will code up a driver.

Again, you're making a huge guesstimate about the OP's ambitions. Still, retrofitting is worse than having it from the start. Even if you only want to support both your desktop and your laptop, module loading can already be handy.
Quote: Micro or monolithic kernel design, you need to get to the point of task switching and running programs before thinking about either - and at that point, you wouldn't need a tutorial.

Before you can load your programs you must have some location to load them from, and if you want something you can show off, you need at least a video driver. For either you need drivers, for which you need to have made the micro/monolith decision.
Quote: The paging argument completely misses the point, because that can be accomplished with a monolithic driver without "modules" - it's a monolithic versus micro argument.
Quote: And the ability to do that would need to be accomplished before loadable modules in either a micro or monolithic kernel.

You seem to be confusing what a microkernel is: microkernels do not have 'modules' at all. All their 'drivers' are user-space processes, and as such there is no 'loading' or 'unloading', merely fork()/exec() and kill(). The argument seems to be about having the ability to dynamically load/unload modules as opposed to statically compiling them into the kernel. And make no mistake, "loading modules once at boot" is a complete waste of time. You might as well have everything statically #ifdef'd into the kernel and recompile when you want a new 'module'.
Quote: ...passing through "my first program" as a binary image to the kernel.

That's what an initial ramdisk is for.
Quote: I don't know why you mentioned retrofitting either. Using Linux as an example, it's perfectly possible to design subsystems such that they are compiled into the kernel, but can be removed entirely or loaded/unloaded at runtime.

Maybe you misunderstand the context in which Combuster used the term 'retrofitting'. He was implying that your assumption that "you aren't going to be running this thing on any range of hardware" may prove to be false, and so you would end up trying to retrofit a module loading/unloading architecture onto an architecture with all drivers statically compiled in. Bad Idea (tm).
Quote: And, as a matter of opinion, module unloading is, was, and remains a bad idea. Microkernels at least give a minimum of protection from nasty hardware scenarios...

Yeh wha'? Those two sentences are completely disjoint! You can have a monolithic kernel without module unloading, so why mention microkernels?
Quote: You load your module, which creates an interrupt handler, and turns power on to component A. That causes an effect in component B. You unload your handler, but the system state change in component B remains, and that state change will require handling somehow. Maybe this is a documented thing, maybe not - but now you need to add code in component A to clean up component B (which is a different module), or add code in component B to clean up for A... The whole thing gets messy quick.

I don't understand this. Here you're implying that 'components' are actual physical devices, but then you say "clean up component B (which is a different module)" - but I thought B wasn't a module, but a device?
Fate wrote: I'm not really one for argument, and this is a matter of opinion. Given that kernel "modules" can often be termed drivers, kernel extensions, etc., my thoughts here were specifically about Linux-style modules, loadable and unloadable during runtime, after boot. I don't have a problem with "load once at boot" modules in a monolithic kernel design.

Good that we got that straightened out.
Quote: The paging argument completely misses the point, because that can be accomplished with a monolithic driver without "modules" - it's a monolithic versus micro argument. Depending on your architecture, you could even do a microkernel approach and avoid that as well.

Life isn't binary - true monoliths have all drivers in the first loaded binary. Microkernels have every driver separate, and loaded into userspace. The hybrid form commonly used is the modular kernel: drivers in kernel space, but loaded dynamically. You could decide to put more in the kernel proper, or you could delegate parts to userspace, striking a balance between the microkernel and the monolithic kernel. Modular kernels are popular because they are faster than microkernels, while not needing a rebuild every time you plug in a new PCI card (and not becoming bloated).
Quote: The sentiment that loading/unloading a kernel module is the same as a program plugin is the main reason I would advocate against this. Loading and unloading kernel code in the same address space is a very different animal than user space. You might use the same methods, but the complications are completely different.

Kernel code and user code are different, and in the same way, so are kernel modules and userland plugins. To make kernel modules a bad thing, you just assumed that writing kernel code must be a bad thing. Again, we are NOT discussing a microkernel.
Quote: I have about as much faith in kernel modules being stable as I do in microkernels outperforming monolithic kernels - it can and has happened, but don't hold your breath...

Micro vs mono. Off topic, and a big "No Duh".
Quote: Driver unloading can have more than a few complications. The fact is, the Linux kernel has some really awesome developers, paid and working full time, and to say that someone in their basement can do better is unrealistic at best.

Linux originally didn't support modules - it is a feature added later. Home developers have the advantage of putting this in from the beginning. Besides, the basement geek only needs two or three drivers to control his computer, as opposed to the hundreds that were written for Linux.
Quote: Here's a scenario

Dependencies are indeed a thing to take care of. But once again, this isn't module-specific, or even kernel-specific. The same could happen in a microkernel. Or in userspace. Try writing a mod for Unreal Tournament.
Quote: On a final note, being at the point of task switching and running programs does not imply that you have any sort of driver framework. I'd actually recommend using a smart boot loader, and passing through "my first program" as a binary image to the kernel. It doesn't matter where you load it, just that you can get it loaded and execute code there. And the ability to do that would need to be accomplished before loadable modules in either a micro or monolithic kernel. If you don't have the ability to do that, then modules are impossible anyway, unless they are part of the kernel, in which case, they aren't modules....

You didn't read the remark - even if you could run programs in userspace, you couldn't see that it was doing so without having a driver. You can start drivers without task switching or the like; then you'd at least have something to see what you are doing.
Quote: I don't know why you mentioned retrofitting either. Using Linux as an example, it's perfectly possible to design subsystems such that they are compiled into the kernel, but can be removed entirely or loaded/unloaded at runtime. My recommendation is to do the design up front, but save the loading/unloading until enough of the system is built up around it.

That isn't retrofitting. That is delaying something until you have the base layer to build it on. What I meant is that when module loading was added to the Linux kernel, a huge number of things needed to be changed, and you will likely find traces of the two conflicting design concepts: drivers must be built into the kernel, versus allowing modules. If you assume that drivers are contained in the binary and later change that, you'll suffer the consequences.
Quote: And, as a matter of opinion, module unloading is, was, and remains a bad idea. Microkernels at least give a minimum of protection from nasty hardware scenarios...

Once more, this isn't the place for micro vs monolith arguments.
Combuster wrote: BTW, there's enough proof of the opposite if you would just take the time to look for it (Brendan, Candy, Colonel Kernel, and most likely quite a few more who haven't shown off their programming mastery yet)

Well, module loading and to a limited extent unloading is on my list of things to do in the near future as well... just a few things before it, namely: