Hi,
In general, mixing assembly and another language (e.g. C) can result in inefficiencies. A simple example of this is IRQ handling - the C compiler can't do it alone, so you end up with assembly language "stubs" which save all the registers, call the C routine, then restore all the registers. This results in the same registers being saved and restored twice, plus extra function calling overhead.
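To illustrate (a rough sketch only - this assumes 32-bit x86 with a PIC timer IRQ, and the names irq0_stub and timer_handler are made up):

    /* What the assembly "stub" does (NASM-style, 32-bit x86):
     *
     *   irq0_stub:
     *       pusha                 ; save all general purpose registers
     *       call timer_handler    ; call into C
     *       popa                  ; restore them all again
     *       iret
     *
     * The compiler then generates its own prologue/epilogue for the C handler
     * below, saving/restoring any callee-saved registers it uses, so some
     * registers end up being saved and restored twice. */
    void timer_handler(void)
    {
        /* acknowledge the PIC, update the tick count, etc */
    }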
Mixing languages is also messy and harder to maintain. It's not just separate C and assembly files; there are also the complications of inline assembly (specifying input and output parameters correctly, and telling the compiler what the code trashes). This also tends to make the OS "compiler specific" - you couldn't write all the inline assembly for GCC and expect it to compile properly on a different compiler.
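For example (a minimal sketch of GCC's extended asm on x86 - the helper names are made up, and none of it would build on a compiler without GCC-style inline assembly):

    /* inputs only: write a byte to an I/O port */
    static inline void outb(unsigned short port, unsigned char value)
    {
        __asm__ __volatile__("outb %0, %1"
                             :                            /* no outputs */
                             : "a"(value), "Nd"(port));   /* inputs: AL and DX */
    }

    /* an output: read CR0 into whichever register the compiler likes */
    static inline unsigned long read_cr0(void)
    {
        unsigned long value;
        __asm__ __volatile__("mov %%cr0, %0" : "=r"(value));
        return value;
    }

    /* a clobber: tell the compiler memory may have changed behind its back */
    static inline void invlpg(void *virt)
    {
        __asm__ __volatile__("invlpg (%0)" : : "r"(virt) : "memory");
    }

Get the constraints or clobbers wrong and the compiler will quietly optimise around the bug, which is a big part of why this gets messy and compiler specific.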
For a microkernel, a large portion of it is platform dependent and requires assembly language, because all of the "parts" that can be done in pure C (without any assembly) are shifted out of the kernel.
Because of this, most micro-kernels tend to have a platform-specific micro-kernel with portable "everything else". To port the kernel to another architecture you write a new kernel and recompile everything else. Some people say this approach allows the micro-kernel to take much better advantage of the platform's features and can result in a better system (e.g. L4 and its "small address spaces"). To be honest this is only partially true, as the same features can usually be taken advantage of with C (and inline assembly) if you're willing to do the necessary acrobatics.
There are also other alternatives - for example, creating a "hardware abstraction layer" to hide all the platform specific things and then building the kernel on top of that. This means the kernel can be pure C and you don't end up with the "mixed language mess", but also means you end up with the worst inefficiencies, as the kernel ends up being written for the lowest common set of features (either that or you end up with a lot of messy "if HAL supports X then FOO else BAR" code).
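To give a rough idea (purely hypothetical - all of these names are invented), the HAL might present the portable kernel with something like:

    /* hal.h - everything platform specific hides behind these */
    void hal_init(void);
    void hal_map_page(void *phys, void *virt, unsigned flags);
    void hal_switch_address_space(void *address_space);
    void hal_mask_irq(int irq);
    void hal_unmask_irq(int irq);
    int  hal_feature_supported(int feature);

Each port supplies its own implementation of these and the kernel itself never touches anything platform specific directly - and that last function is where the "if HAL supports X then FOO else BAR" mess creeps in.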
In all cases you've got platform specific code and portable code. The difference is where the platform specific code meets the portable code (the boundary between portable and non-portable). For a monolithic kernel the boundary is often scattered everywhere and hidden with conditional compiling and macros. For a micro-kernel the boundary is between the kernel and everything else. For the "hardware abstraction layer" idea the boundary is between the HAL and the kernel.
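For a monolithic kernel that scattered boundary tends to look like this sort of thing (a made-up fragment), repeated throughout otherwise portable code:

    void flush_tlb_entry(void *virt)
    {
    #if defined(ARCH_X86)
        __asm__ __volatile__("invlpg (%0)" : : "r"(virt) : "memory");
    #elif defined(ARCH_ARM)
        /* ...whatever the ARM port needs for TLB maintenance... */
    #else
        (void)virt;   /* unsupported architecture, or handled somewhere else */
    #endif
    }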
Getting back to the topic: before choosing languages, etc, I'd decide where the boundary between platform-specific and portable code is.
I'd recommend something like this:
- Decide what sort of kernel you want and where the non-portable/portable boundary is
- Decide which initial architecture to implement the OS on
- Choose which languages to use
- Decide which tools to use
- Learn as much as possible about the initial architecture
- Learn as much as possible about the languages and tools
- Learn how similar existing OSs do things, and/or general techniques for things like memory management, scheduling, IPC, etc
- Write down the goals of the OS - a short paragraph (e.g. "this OS is intended for small embedded systems", "the OS is for educational purposes", "this OS is intended for both desktop computers and servers", etc).
- Write a list of features that you want and don't want (multi-tasking, multi-user, SMP, NUMA, real-time, real mode/protected mode/long mode, headless, diskless, etc)
- Try to find an existing open source OS/kernel that is very similar to what you want (there's no point re-inventing the wheel if you can get an existing wheel and paint it).
- Determine how you're going to deal with "scope creep" and the dependencies between parts - if you decide to add an extra feature later on, how much of the existing code will it break? You'd be surprised how much can break when you decide to add something that sounds simple, like internationalization or SMP support. The best idea is to stick to the list of features you wrote above, but this isn't always possible (e.g. CPU manufacturers might add something like 64-bit support or hardware virtualisation and mess up your plans).
- Design the boot code and the kernel (work out how each part will work). This should be a minimum of one page for the boot code, the memory management, the scheduler, etc., plus an "overview" of how the parts fit together, and can include formal specifications for the interfaces between parts.
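As an example of the sort of interface specification I mean (names and details invented - yours would differ), the written design for the physical memory manager might boil down to something like:

    /* pmm.h - physical memory manager
     *   pmm_init:       called once by the boot code, given the firmware's memory map
     *   pmm_alloc_page: returns the physical address of a free 4 KiB page, or 0 if none are left
     *   pmm_free_page:  returns a page to the free pool
     */
    void          pmm_init(void *memory_map, unsigned entries);
    unsigned long pmm_alloc_page(void);
    void          pmm_free_page(unsigned long phys);

The one-page description would also say who may call these, what happens before paging is enabled, how it behaves with SMP, and so on - the prototypes alone aren't the specification.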
In general, this sounds like a lot of work, but for each month you spend doing the above you'll probably save a year or more later on.
Most people want to start writing code straight away. If you want to actually achieve something, it's the worst thing you can do. I should know - so far I've spent over 10 years working on my OS, and the current version consists of some boot code only (I haven't even started any of the kernel code for the new version). Of course I'm not the sort of person who compromises, and (IMHO) the code I have done is better than any OS I've seen. In general, if you don't care about the quality of your OS you can probably ignore most of the above...
Cheers,
Brendan