From my dictionary
Kernel: Central or essential part; part within hard shell of nut or stone fruit.
Now the more cynical people on the board would go with the first part of that definition, but I'm a romantic, so I'll go with the second part. A kernel is the root, the wellspring, the source of all good things about your OS. The mightiest oak can grow from the smallest seed (Zen OS design, cool).
Back to reality. There are a few different types of kernel design around (I'll go into them later), but essentially what a kernel does is place a layer between the hardware itself and the application software written above it, mediating access to the hardware and its resources in a controlled, well-understood manner. The idea of an OS is to remove the low-level hardware programming requirements from application programmers and provide them with much easier methods of controlling the machine (Some of you may not agree with that, but IMO a GUI is not part of an actual OS).
So your kernel will do such things as loading programs, dynamically linking those programs (if required), allocating memory, scheduling processes, handling inter-process communication (IPC), handling input/output to devices, etc. If you are designing for multiple processors then you would also handle that in the kernel. The idea is to remove all these complex problems from the application software.
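To make the scheduling part of that list concrete, here's a minimal sketch of round-robin scheduling in C. It's a simulation, not real kernel code: the "tasks" are just function pointers, and all the names (schedule_tick, task_fn, etc.) are my own invention for illustration.

```c
#include <assert.h>

/* Hypothetical task table: each "task" is just a function pointer here.
 * A real kernel would save/restore register state per task instead. */
#define MAX_TASKS 3

typedef void (*task_fn)(void);

static int run_count[MAX_TASKS];

static void task0(void) { run_count[0]++; }
static void task1(void) { run_count[1]++; }
static void task2(void) { run_count[2]++; }

static task_fn tasks[MAX_TASKS] = { task0, task1, task2 };
static int current = 0;

/* One scheduler "tick": run the current task, then advance round-robin.
 * In a real kernel a timer interrupt would drive this. */
static void schedule_tick(void)
{
    tasks[current]();
    current = (current + 1) % MAX_TASKS;
}
```

After six ticks each task has run exactly twice; that fairness is the whole point of round-robin.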
I suppose in very general terms you could think of the kernel as a controlling program, or the main loop of a program, and applications as sub-programs that get called at scheduled intervals. E.g. your entire kernel could consist of printing "Hello World" on the screen; of course, it could also be as complex as real-time control of a nuclear power plant. How far to go is up to you.
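That "Hello World" kernel is roughly what it looks like on x86, where printing means writing character/attribute pairs into the VGA text buffer. This sketch simulates the buffer with an ordinary array so it can run in user space; on real hardware you'd point at physical address 0xB8000 instead. Function names here are mine, not any standard API.

```c
/* On real x86 hardware this would be:
 *   volatile unsigned short *vga_buffer = (unsigned short *)0xB8000;
 * Here we simulate the 80x25 text buffer with a plain array. */
static unsigned short vga_buffer[80 * 25];

static void vga_putc(int pos, char c)
{
    /* High byte is the colour attribute (0x07 = light grey on black),
     * low byte is the ASCII character. */
    vga_buffer[pos] = (unsigned short)((0x07 << 8) | (unsigned char)c);
}

/* The entire "kernel": print a message and stop. */
static void kernel_main(void)
{
    const char *msg = "Hello World";
    for (int i = 0; msg[i] != '\0'; i++)
        vga_putc(i, msg[i]);
}
```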
Some basic designs (Hopefully someone can correct what's wrong here; I'm shaky on it myself):
Monolithic:
All the drivers, scheduler, IPC control, etc. are part of a single large program operating in kernel space (Ring 0 on x86 systems), usually with the kernel as one process. This gives you a speed boost because you don't require complex IPC code, but cuts down on flexibility. Windows and Linux (Yeah, I know about modules and I'm not buying it) are examples of monolithic kernels.
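The speed boost comes from the fact that in a monolithic kernel a "driver" is just another function linked into the kernel image, so using it is an ordinary function call. A toy sketch, with a fake in-memory disk standing in for hardware (all names are illustrative, not a real kernel API):

```c
#include <string.h>

/* Pretend backing store for a "disk driver". */
static char fake_disk[64];

/* The driver: compiled straight into the kernel, no process boundary. */
static void disk_write(int offset, const char *data, int len)
{
    memcpy(fake_disk + offset, data, len);
}

/* Another kernel subsystem (say, a filesystem) invokes the driver
 * directly -- one function call, zero IPC overhead. */
static void fs_save(const char *data)
{
    disk_write(0, data, (int)strlen(data));
}
```

The flexibility cost is the flip side of the same coin: a buggy driver is in the same address space as everything else and can scribble over the whole kernel.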
Micro-kernel:
This is where the kernel itself mostly consists of a message-passing system that provides access to a number of service processes (e.g. video services) that run in either kernel or user mode. You get a far more flexible system with micro-kernels, but pay a price in speed because of the IPC needed to use the service processes. I think Mach is an example of this.
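To show the contrast with the direct calls of a monolithic design, here's a toy message-passing core in C: the "kernel" knows nothing about video, it only routes request/reply messages to whatever service registered itself. Every name and the message layout are made up for illustration; real microkernel IPC (Mach ports, etc.) is far more involved.

```c
#include <string.h>

/* A fixed-size message: the only thing the kernel understands. */
typedef struct {
    int  type;
    char payload[32];
} message;

typedef void (*service_fn)(const message *req, message *reply);

/* One registered service slot (e.g. a "video service"). */
static service_fn video_service;

static void register_service(service_fn fn) { video_service = fn; }

/* The kernel's whole job here: route the request, return the reply.
 * This indirection is where the IPC cost comes from. */
static int send_message(const message *req, message *reply)
{
    if (video_service == 0)
        return -1;              /* no such service registered */
    video_service(req, reply);
    return 0;
}

/* A trivial service that just echoes the request back. */
static void echo_service(const message *req, message *reply)
{
    reply->type = req->type;
    strcpy(reply->payload, req->payload);
}
```

Swapping in a different service is just a different register_service call, which is exactly the flexibility micro-kernel fans are paying that IPC tax for.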
Exo-kernel:
Very odd, very weird; any examples you're likely to come across are probably research projects at a uni. Essentially an exo-kernel provides a few basic structures on top of the hardware and allows running apps to supersede these structures with their own OS modules. You can apparently get a huge speed increase using an exo-kernel, but to do so you need an app that has OS modules written with the app in mind instead of using the defaults in the OS. This (IMHO) places part of the low-level programming back into the realm of the application programmer, which may or may not be a good thing.
So where does that get you? A kernel is just a program; it can be as simple or complex as you like. In terms of an OS, the kernel is the program(s) that manage all the other programs in a controlled fashion.
Hope that helps, if not hopefully someone else can fill in the blanks/correct my errors.
BTW, if you're just starting out I heartily recommend a monolithic kernel designed for a single processor; it's a good place to begin, and things just start getting weirder and weirder after that ;D.
Curufir