Portability research

Questions about which tools to use, bugs, the best way to implement a function, etc. should go here. Don't forget to see if your question is answered in the wiki first! When in doubt, post here.
paulbarker

Portability research

Post by paulbarker »

Thinking about the current "Portability" thread, I've had an idea. In order to keep that thread on topic I've started a new thread, but moderators should feel free to merge the two if you think that's better.

Instead of the usual portability methods (generic kernel code with separate folders for arch-dependent parts), where the generic code is in charge and calls methods of the arch-dependent code, I'm wondering whether the reverse of this has been tried before.

By using an object-oriented language (I'm not a fan of C++ but I need to investigate language choice) it would be possible to write a set of generic kernel classes. For each platform, a kernel would be written from scratch, which *could* (does not have to, but it would need a very good reason not to) use the generic classes. By deriving a class (say phys_x86) from a generic base class (phys) and replacing only the methods needed, a lot of work could be saved while still allowing maximum choice of implementation to the platform-dependent parts.

The kernel would be a microkernel, and would be defined only by its interface with the outside world. Additional functions may be exported by the kernel for any particular platform, as long as they do not interfere with the normal interface (e.g. provide DMA support where applicable).

I have only just thought of this and need to spend some more time on it. I am going to do my own research into this, but I think it's a good direction to take my kernel if no one has done it before. I'm more interested in writing an "academic" kernel than a general-purpose OS. A lot of the principles I have discussed previously about a component-based kernel could be adapted to this design.

Any input on this would be greatly appreciated. I'm guessing it must have been done before, or the ideas must have at least been discussed before, but if not then I think it's certainly worth looking at. I'm not looking for people to work with me on this yet, but I might, depending on how things turn out.

As usual I'm just throwing ideas around and won't be offended if anyone thinks this is a bad idea.

Thanks,
Paul Barker
dc0d32

Re:Portability research

Post by dc0d32 »

That is what I am actually thinking of :D
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada

Re:Portability research

Post by Colonel Kernel »

I've thought about that too, but I arrived at a slightly different idea. What I observe is that dynamic polymorphism is usually overkill in a microkernel. The only cases I can think of where you need a different implementation of something "on-the-fly" are machine-specific parts rather than architecture-specific parts (i.e. loading a different "HAL" at boot time depending on the peculiarities of your motherboard or particular CPU model).

For the really fundamental differences between architectures, I think static polymorphism is enough -- i.e., templates. You would have generic template classes or functions that are portable, and they would take architecture-specific "traits classes" as template parameters. So instead of your phys_x86 deriving from phys, you'd have instead phys<phys_x86_impl> or phys<phys_ia64_impl> or phys<phys_ppc_impl>, etc. This is just another way to organize things beyond the usual separate-folders-and-header-files scheme. I'm not sure if it really offers any big advantages.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
paulbarker

Re:Portability research

Post by paulbarker »

The HAL is the complete opposite of what I am aiming at - the generic kernel calls HAL methods rather than the other way round (probably not 100% true, but it's a case of which part is in control).

I don't intend to do much dynamic polymorphism - it's still useful when there are five ways of handling a timer and the user should be able to choose, but it's not my main aim here.

The problem with templates is that the generic code is still in charge. I know that will work; it's been done in a million different ways by most OSes ever written, for portability. I'm trying to work out if the opposite is worth trying - if the platform-dependent code is in charge.

I'm now quoting from the other thread so it may be time for a merge but here goes:

Brendan said:
For my OS the abstraction layer will be the micro-kernel's API. Porting the OS will mean writing an entirely new micro-kernel and boot code, and then recompiling everything else.
This is what I am thinking of, but with a generic framework of classes to help writing each port *if* you want to use them. I also want to avoid it being an all-or-none choice. I'll follow this up with some more ideas and research if people are interested.
Colonel Kernel

Re:Portability research

Post by Colonel Kernel »

paulbarker wrote:The problem with templates is that the generic code is still in charge.
Why is this a problem?
paulbarker wrote:I'm trying to work out if the opposite is worth trying - if the platform-dependent code is in charge.
In a sense, it always is. The kernel is invoked by a system call or interrupt, and that trap-handling code is always architecture-specific. It's all a matter of control flow, and the control flow always starts there (or from an architecture-specific bootloader). It's then a choice as to whether the kernel should call into generic code or not. Eventually that generic code will need architecture-specific services (HAL-ish things), so it will call back out again. I think this is just the inherent nature of kernel control flow -- I see no reason to fight it.
paulbarker

Re:Portability research

Post by paulbarker »

I'll try to clarify what I'm thinking of here. I want to treat the generic code as a set of optional library functions, and the kernel will be defined purely by its interface to the rest of the system. The main advantage of hierarchical classes in this context is that the platform-dependent function can have the same name as the generic function it replaces, without needing an #ifdef around every generic function.

If the virtual memory manager uses a particular interface to the physical memory manager, then I see two choices. Either the function names of the generic PMM implementation are used in the VMM, or another set of function names is used which must always be provided by the platform-dependent code, even if they are just #define'd to the generic versions. I would much prefer the first choice, but this would require some object-oriented features to allow the platform-dependent code to wrap the generic code.

Rather than the job of a HAL-like layer being to abstract the hardware for the generic code, I want the job of the generic code to be to help the developer write a version of the kernel for a new architecture.

My last point is that generic code (as normally used) can often feel like a straitjacket when trying to port a kernel to a platform which the original developer never dreamed of. The generic code may make assumptions which are invalid for the new platform, and so if the generic code is the authority (it is in charge, it is what is called "the kernel") then the new platform cannot be supported.

I hope this makes a bit more sense. It's more the perception of the generic code that I am thinking of changing. So, would you say this is a worthwhile experiment or a waste of time?
Colonel Kernel

Re:Portability research

Post by Colonel Kernel »

I think you might be fighting an uphill battle. The problem is that the architecture-dependent stuff can be almost the same in every kernel design, but the generic stuff can vary wildly. For example, the VMM of a microkernel can be considerably simpler than the VMM of a monolithic kernel. In writing the generic code, you'd have to make many design decisions about how to abstract the system resources (threads, address spaces, etc.), how to represent and track those resources with data structures, and how to let apps manipulate those resources via system calls.

Another way to put it is that if you abstracted the HAL-ish stuff as usual, but then made the other architecture-dependent code "in charge" (i.e. -- implementing the kernel designer's "vision" of how things are supposed to work), you aren't really left with anything useful in between.

I think it's possible to design a particular kernel to be portable (that's what I'm doing), but IMO it's difficult if not impossible to create an SDK that helps portability and doesn't foist a boatload of unwanted assumptions on the OS designer (see OSKit for example). My $0.02.