To all of you microkernel people, I'm working on one too and I'd like to have some sort of rudimentary hardware abstraction layer presented to the userspace stuff. Does anybody here have a HAL in their OS and if so, how did you go about implementing it and what exactly does it do?
It's conspicuously absent from the wiki, so maybe this thread could also serve as the basis for a wiki page on HALs.
µkernels and HALs
Re:µkernels and HALs
My OS design had basically a single layer of abstraction for hardware. The drivers would directly deal with what they had to control (as they're the ones that know how to) and programs would talk to the drivers through one of two standard ways:
- A standard driver interface (jumping via my own call tables) exposing requests such as read/write/status etc etc
- A system service interface (via the IPC system) to a system service provided by the driver; this gives the driver essentially automatic request queueing and the like
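A call-table driver interface of the first kind could be sketched roughly as follows. This is a minimal sketch, not the poster's actual design: the status codes, struct layout, and the RAM-backed example driver are all invented for illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative status codes -- names invented, not from the post. */
enum drv_status { DRV_OK = 0, DRV_BUSY = -1, DRV_ERR = -2 };

/* One call table per driver: read/write/status, as the post describes. */
struct driver_ops {
    int (*read)(void *dev, uint64_t off, void *buf, size_t len);
    int (*write)(void *dev, uint64_t off, const void *buf, size_t len);
    int (*status)(void *dev);
};

/* A trivial RAM-backed driver showing how the table is filled in. */
static uint8_t ram_disk[512];

static int ram_read(void *dev, uint64_t off, void *buf, size_t len) {
    (void)dev;
    if (off + len > sizeof ram_disk) return DRV_ERR;
    for (size_t i = 0; i < len; i++)
        ((uint8_t *)buf)[i] = ram_disk[off + i];
    return DRV_OK;
}

static int ram_write(void *dev, uint64_t off, const void *buf, size_t len) {
    (void)dev;
    if (off + len > sizeof ram_disk) return DRV_ERR;
    for (size_t i = 0; i < len; i++)
        ram_disk[off + i] = ((const uint8_t *)buf)[i];
    return DRV_OK;
}

static int ram_status(void *dev) { (void)dev; return DRV_OK; }

/* The table the kernel would jump through on behalf of a program. */
static const struct driver_ops ram_ops = { ram_read, ram_write, ram_status };
```

A program would then issue a request by indexing the table for the target device and calling through it, which is what "jumping via my own call tables" amounts to.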
- Colonel Kernel
- Member
- Posts: 1437
- Joined: Tue Oct 17, 2006 6:06 pm
- Location: Vancouver, BC, Canada
- Contact:
Re:µkernels and HALs
My uKernel is going to have system calls to have a thread wait on particular interrupts (interrupts will be a special case of IPC), to allocate particular regions of physical memory (for DMA and memory-mapped devices), for timing-related things (not a low-level interface to the PIT or anything -- rather things like sleep(), etc.), and possibly also calls for reserving I/O port ranges (if I decide to use the I/O permission bitmap in my kernel).
I wouldn't call it a HAL per se...
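The I/O-port reservation call mentioned above could track its ranges with something as simple as this. A sketch only: the fixed table size, the function name, and the first-fit policy are all assumptions for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy bookkeeping behind a "reserve I/O port range" system call.
 * All names here are invented for illustration. */
#define MAX_RESERVATIONS 16

struct port_range { uint16_t base; uint16_t count; bool used; };
static struct port_range reservations[MAX_RESERVATIONS];

/* Reserve [base, base+count) if it overlaps no existing reservation. */
static bool reserve_io_ports(uint16_t base, uint16_t count) {
    for (int i = 0; i < MAX_RESERVATIONS; i++) {
        if (!reservations[i].used) continue;
        uint32_t a0 = reservations[i].base;
        uint32_t a1 = a0 + reservations[i].count;
        if (base < a1 && (uint32_t)base + count > a0)
            return false;                       /* overlaps an owner */
    }
    for (int i = 0; i < MAX_RESERVATIONS; i++) {
        if (!reservations[i].used) {
            reservations[i] = (struct port_range){ base, count, true };
            return true;
        }
    }
    return false;                               /* table full */
}
```

On a successful reservation the kernel would also clear the corresponding bits in the caller's I/O permission bitmap so the driver can use `in`/`out` directly from ring 3.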
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
- Pype.Clicker
- Member
- Posts: 5964
- Joined: Wed Oct 18, 2006 2:31 am
- Location: In a galaxy, far, far away
- Contact:
Re:µkernels and HALs
I shall admit that while I'm for lean kernels, I'm still perplexed by the "pure" microkernel approach to drivers.
For instance:
[*] ATA has two disks per channel, that is, two different devices (and potentially two drivers) sharing the same I/O registers;
[*] PCI DMA basically means that by writing to some device-specific area, you enable the device to access any area of physical memory;
[*] small delays are an important part of driver programming, but in userspace you cannot prevent the kernel from interrupting you while you're waiting out a small delay;
[*] interrupts are omnipresent. Shared interrupts are even worse: if each device on the IRQ line has to be probed to tell whether it is the interrupt source or not, how do you handle that with "send message to X on interrupt"?
[*] VM86 hooks to change the video mode in the "default" video driver ...
For all these things, there's (imvho) no clean way to get things solved by a simple "expose only the I/O resources of device X to the driver handling X" policy...
So I'd rather go for a solution where drivers are split in two parts:
[*] a "core" driver, running at DPL0, which abstracts architecture constraints (e.g. it will have to do virt_to_phys address translations) and deals with direct I/O
[*] a "service", running at DPL3, that makes use of core drivers to provide higher-level abstractions such as partitions, network connections, etc.
However, to be handy, the "core driver" interface needs to be domain-specific. E.g. for a CD-ROM I would offer the regular read/write block interface plus a "send_scsi_command" that can deliver any kind of command packet for eject/refresh/play-audio and the like...
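Such a CD-ROM core-driver interface could be sketched like this. The struct layout and names are invented, but the packet format in the example is the real SCSI START STOP UNIT command (opcode 0x1B), whose LoEj bit in byte 4 ejects the medium when the Start bit is clear:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of a domain-specific "core driver" interface for a CD-ROM:
 * the usual block read plus a raw send_scsi_command escape hatch.
 * The cdrom_* names are invented for illustration. */
struct cdrom_core {
    int (*read_block)(uint32_t lba, void *buf);          /* 2048-byte blocks */
    int (*send_scsi_command)(const uint8_t *packet, size_t len);
};

static int tray_open = 0;

/* Stub block read for the sketch: returns zeroed sectors. */
static int cd_read_block(uint32_t lba, void *buf) {
    (void)lba;
    memset(buf, 0, 2048);
    return 0;
}

/* Toy packet handler: only understands START STOP UNIT (0x1B), whose
 * LoEj bit (byte 4, bit 1) ejects the medium when Start (bit 0) is clear. */
static int cd_send_scsi(const uint8_t *packet, size_t len) {
    if (len < 6) return -1;
    if (packet[0] == 0x1B) {                  /* START STOP UNIT */
        tray_open = (packet[4] & 0x02) && !(packet[4] & 0x01);
        return 0;
    }
    return -1;                                /* unsupported command */
}

static const struct cdrom_core cd0 = { cd_read_block, cd_send_scsi };
```

The DPL3 service never has to know about eject, refresh, or play-audio individually; it just forwards whatever packet the user asked for through the single escape hatch.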
Re:µkernels and HALs
Pype, your solution is very much a better description of how I intended my system; the only difference is that I don't require a service (even though the vast majority of drivers would implement one anyway).
-
- Member
- Posts: 1600
- Joined: Wed Oct 18, 2006 11:59 am
- Location: Vienna/Austria
- Contact:
Re:µkernels and HALs
Hm. I'm all for microkernels, but I'd have the device drivers at a location where they can act quickly and without too many constraints: at ring 0, and with higher priority than services or user tasks.
I would have block devices expose methods like "init", "read", "write" and "generalinterfaceforhwcontrol".
"generalinterfaceforhwcontrol" exists because there are things a floppy doesn't implement but a CD-ROM does: eject, for example. I didn't want to clutter my messaging with a dozen system call message types just for CD-ROM eject. The driver knows these parameters; the FS service just passes them along to the correct device. So if I eject device 1, I say "eject cdrom1" and the door of the first CD-ROM opens.
Pype has put it straight. You are well advised to hide device driver peculiarities away from processes by offering abstraction layers: an FS service, a GUI service, a NET service, for example.
One more piece of advice, and it is good advice and free, you will not have to pay for it *rofl*: have some global device database which you can query for devices. Something in the vein of Clicker's KDS, I reckon. It *will* give you more bang for the buck.
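In miniature, such a global device database might look something like this; all device names, classes and fields are invented for illustration:

```c
#include <string.h>
#include <stddef.h>

/* Toy global device database in the spirit of the "query for devices"
 * advice.  Names and classes are invented for illustration. */
struct device {
    const char *name;       /* e.g. "cdrom0" */
    const char *dev_class;  /* e.g. "block", "net" */
};

static const struct device devdb[] = {
    { "hda",    "block" },
    { "cdrom0", "block" },
    { "ne2k0",  "net"   },
};

/* Return the nth device of a given class, or NULL if there is none. */
static const struct device *dev_query(const char *dev_class, int nth) {
    for (size_t i = 0; i < sizeof devdb / sizeof devdb[0]; i++)
        if (strcmp(devdb[i].dev_class, dev_class) == 0 && nth-- == 0)
            return &devdb[i];
    return NULL;
}
```

The FS service would query for "block" devices at startup instead of hard-coding them, which is most of the bang for the buck.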
(And as we speak of the net service: it is the biggest break with my own paradigm. I've included the NIC drivers in the net service's process image to save the address-space switching and system call overhead of getting hold of packets. I could have included the whole net service in the kernel as well, but that would have been too cruel a thing to do.)
stay safe
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Re:µkernels and HALs
Hi,
Pype.Clicker wrote: [*] ATA has two disks per channel, that is, 2 different devices (and potentially drivers) sharing the same I/O registers ;
Either one device driver controlling both ATA devices, or three device drivers (one for the controller and one for each disk, where the disk drivers talk to the controller driver instead of to the I/O ports).
Pype.Clicker wrote: [*] PCI DMA basically means by writing in some device-specific area, you enable the device to access any area of physical memory;
I'm not sure how this can be hard to figure out, unless you're talking about the possibility of a protection violation (and even if you do nothing here, a possibility of a protection violation via DMA is better than a possibility of a protection violation for everything the device driver does).
Pype.Clicker wrote: [*] small delays are an important part of driver programming, but in userspace, you cannot prevent the kernel to interrupt you when you're waiting for a small delay
Can you actually think of a device where an exact small delay is required? More often, you need a minimum of X ns rather than exactly X ns, and a delay that is much longer is fine (except for rare performance implications). I've never had a problem using high-priority threads and "nanosleep()".
Pype.Clicker wrote: [*] interrupts are omnipresent. Even worse is shared interrupt: if each device on the irq line has to probe the device to tell if he's the interrupt source or not -- how do you handle that with "send message to X on interupt" ...
There are two methods here. You could use "send message to X, Y and Z". Alternatively, when X does "EOI(status)" you could check whether the IRQ was handled, and if it wasn't, do "send message to Y".
Pype.Clicker wrote: [*] VM86 hooks to change video mode in "default" video driver ...
VM86 is only ever a temporary solution anyway. I refuse to use V86; instead I set a default video mode during boot and use it as a frame buffer (no acceleration, no video mode changes, etc.) until a decent/device-specific video driver is started (if ever).
Pype.Clicker wrote: For all these things, there's (imvho) no clean way to get things solved by simply "expose only I/O resources of device x to driver handling x" policy...
I don't see any problem.
Pype.Clicker wrote: So i'd rather go for a solution where drivers are split in two part:
[*] a "core" driver, running at DPL0, which would be abstracting architecture contraints (e.g. it will have to do virt_to_phys address translations) and dealing with direct I/Os
[*] a "service", running at DPL3 that makes use of core drivers to provide higher-level abstractions such as partitions, network connections, etc.
For me, I can't think of a clean way to prevent malicious (or even just buggy) device drivers from trashing anything they like. Does your OS automatically scan PCI buses and look for device drivers to suit the devices it finds? If I write a "trojan" device driver that wipes all your hard drives (or perhaps sends all keypresses to my IP address), will your OS automatically load and run it? To prevent this, will you need a large/expensive "driver signing" program like Microsoft's, or will you have open source drivers (where it's impossible to comply with the NDAs some hardware manufacturers want)? Can you think of a clean way to solve this?
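The chained approach to shared interrupts mentioned above (notify the first driver, and at EOI pass the IRQ on only if it was not handled) could be sketched as follows, with plain function calls standing in for the IPC messages; all names are invented for illustration:

```c
#include <stdbool.h>

/* Sketch of chained shared-IRQ delivery: notify drivers on an IRQ line
 * one at a time; at EOI(status), if the current driver did not handle
 * the interrupt, pass the message on to the next driver on the line. */
#define MAX_SHARED 4

struct irq_line {
    bool (*handler[MAX_SHARED])(void);  /* stand-ins for "send message to X" */
    int count;                          /* drivers registered on this line */
    int current;                        /* driver currently probing */
};

/* Kernel side: an IRQ fires; deliver it to the first driver. */
static void irq_arrived(struct irq_line *line) {
    line->current = 0;
    if (line->count > 0)
        (void)line->handler[0]();       /* driver will later call EOI */
}

/* Kernel side: a driver's EOI(handled).  If unhandled, try the next
 * driver on the line.  Returns true once some driver claimed the IRQ. */
static bool irq_eoi(struct irq_line *line, bool handled) {
    if (handled) return true;
    if (++line->current < line->count)
        return irq_eoi(line, line->handler[line->current]());
    return false;                       /* spurious: nobody claimed it */
}

/* Two example drivers sharing a line. */
static bool nic_isr(void)  { return false; }  /* "not my interrupt" */
static bool disk_isr(void) { return true;  }  /* handled */
```

Drivers that are not the source just answer "not mine" at EOI, so only the devices actually sharing the line pay the probing cost.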
Of course I'm not saying that the monolithic approach (or the hybrid approach suggested) is bad - only that there's a trade-off between performance and protection.
The only form of protection violation a "pure" microkernel approach can't protect against is PCI bus masters (ISA DMA is easy to protect). I must admit I haven't found an adequate way around this problem...
Cheers,
Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
- Colonel Kernel
Re:µkernels and HALs
Brendan wrote: The only form of protection violation a "pure" microkernel approach can't protect against is PCI bus masters (ISA DMA is easy to protect). I must admit I haven't found an adequate way around this problem...
AFAIK, nobody has. It's ironic that so much effort is put into the MMU but none into making DMA respect virtual address translation. For example, DMA is the only hole in Singularity's otherwise MMU-less protection scheme.
Re:µkernels and HALs
The design I have right now is a µkernel based on the L4Ka::Pistachio codebase... great IPC performance, and while it might have been neater to write everything myself, I wanted to use the L4 API anyway... but that's irrelevant.
The design is what I like to call a "three-layer model," as in, it's not just kernel-space and user-space. At the highest priority is the microkernel itself with its memory allocator and IPC handler. On top of that, with a lower priority than the microkernel but a higher priority than anything in userspace, are the device drivers and a few other essential tasks.
In effect, it's like a monolithic kernel that itself contains a microkernel and a lot of servers... I'm not sure I explained it very well though.
Basically, drivers are protected but not as much as userspace processes, and the hardware interface that is actually presented to userspace is standardized. That's my definition of a HAL, and that's what I'm planning to implement... please don't kill me if that's a bad idea