The return of the monotasking OS?

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
User avatar
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!
Contact:

Post by Brendan »

Hi,
pcmattman wrote:The curse of backward compatibility... Imagine if we did actually step forward, all of a sudden almost everyone would have to upgrade their systems because the software would not support their hardware.
It would be possible for CPU manufacturers to start producing "long mode only" CPUs. In this case firmware would need to be changed (e.g. to EFI) and OS boot code would need to change, but most of the code in 64-bit OSs wouldn't need any changes, and (32-bit and 64-bit) applications wouldn't know the difference.

There are other legacy things that could also be discarded without applications caring - PIC, ISA DMA, PIT, gate A20, PS/2 keyboards/mice, etc. If these things were removed, most OSs would only need minor changes.

The problem is that OSs would need to support older/current systems and the newer systems - in the short term (until current computers become obsolete), simplifying the hardware makes operating systems more complicated (not less complicated).

It'd also mean old (16-bit/DOS) software wouldn't work, but this isn't much of a problem for most people - as long as their current 32-bit and 64-bit software still works, they'd be mostly happy.


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Avarok
Member
Posts: 102
Joined: Thu Aug 30, 2007 9:09 pm

Post by Avarok »

I would personally only write an OS for long mode at this point. That's what my system has. Why should I support obsolete things I don't need?

Why does the machine still start in 16-bit mode, just so a bootloader can put you in 32-bit mode again - all because we're afraid of breaking an interface 28 years after it was developed?

How long do we keep supporting this ****? <-- that being the question
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
- C. A. R. Hoare
User avatar
AJ
Member
Posts: 2646
Joined: Sun Oct 22, 2006 7:01 am
Location: Devon, UK
Contact:

Post by AJ »

I agree absolutely (despite writing a 32 bit OS at present - I only have an oldish PC to test on and don't just want to rely on Bochs).

Sometimes, when a new processor is released, you *have* to upgrade your mobo, memory, graphics card (thinking AGP->PCI-E) etc... Why not just go that bit further the next time this happens and upgrade the architecture as a whole? And *surely* there must be some way of including a little instruction that identifies the CPU as the 'new type' without breaking an old system - thus allowing an OS to have both types of bootloader?

Cheers,
Adam
User avatar
Dex
Member
Posts: 1444
Joined: Fri Jan 27, 2006 12:00 am
Contact:

Post by Dex »

Some people forget that what may seem old may come back :shock: .
If you try to keep up with the big boys, you will lose :cry: .

Example: soon there will be millions of these: http://en.wikipedia.org/wiki/ASUS_Eee_PC
http://crave.cnet.co.uk/laptops/0,39029 ... 655,00.htm
http://event.asus.com/eeepc/
A gold mine for hobby OS devs like us. I have mine on order, do you? :wink:
Crazed123
Member
Posts: 248
Joined: Thu Oct 21, 2004 11:00 pm

Post by Crazed123 »

I think they tried a clean break with the Itanium architecture. Nobody bit.
User avatar
JackScott
Member
Posts: 1031
Joined: Thu Dec 21, 2006 3:03 am
Location: Hobart, Australia
Contact:

Post by JackScott »

They didn't bite, for two reasons:
  • The x86 architecture still worked perfectly as far as users were concerned, so they didn't upgrade.
  • The x86-64 architecture had just been built by AMD, and developers knew how to develop for that already, as an extension to what they already knew.
Basically, Itanium offered nothing greatly superior to what was already available. So they lost.
User avatar
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Post by Colonel Kernel »

Yayyak wrote:The x86-64 architecture had just been built by AMD, and developers knew how to develop for that already, as an extension to what they already knew.
Itanium pre-dated x86-64 by a year or two I think... The problems with Itanium have more to do with the lackluster performance of the first generation, the higher cost, and the need for sophisticated compiler optimizations to really take advantage of the architecture. A lot of developers don't have the time to spend doing the kind of profile-guided optimizations necessary to make their code really scream on the Itanium.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
Avarok
Member
Posts: 102
Joined: Thu Aug 30, 2007 9:09 pm

Post by Avarok »

Itanium bit because:

1) The performance sucked.

2) It was, for all intents and purposes, a single-language platform: you couldn't effectively write anything in assembler and get efficiency comparable to their closed-source compiler, and since they only offered a C++ compiler, I couldn't possibly port over my Fantabulastic compiler and run that. It took the GCC community ages to catch up.

3) It cost twice as much as other processors when it was released.

4) Intel didn't publish instruction set specs until it was already released, so we had no time to study.

5) AMD came out with a chip that performed better and allowed graft-loving hippies to keep their old poo.

~~~~

I wish Intel hadn't done us ugly on that.
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
- C. A. R. Hoare
User avatar
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Post by Schol-R-LEA »

Colonel Kernel wrote:
Yayyak wrote:The x86-64 architecture had just been built by AMD, and developers knew how to develop for that already, as an extension to what they already knew.
Itanium pre-dated x86-64 by a year or two I think...
Rather more, if you count development time. Intel and HP began design work on the 'Merced' processor (which eventually led to the Itanium) in 1994. The Itanium itself went into design in 1999, and was released in 2001 and produced for a year before being abandoned. The AMD Opteron was announced shortly afterwards, but didn't hit the shelves until the end of 2003 or beginning of 2004, so the 'year or two' is about right in that regard. By that time the 'Itanic' had already sunk, though the Itanium 2 came out in early 2003, I think it was, to absolutely no interest whatsoever.
User avatar
Schol-R-LEA
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Post by Schol-R-LEA »

Getting back to the original topic, I should point out that there have been systems like that before, most notably VM/CMS on the late-1970s IBM mainframes; it consisted of a virtualizer named VM/370 and a terminal shell called the Conversational Monitor System. VM/370 provided an abstract machine (the code ran on the real CPU, but the peripherals were virtual), which allowed IBM to strip CMS down to the bare essentials, with very simplified drivers and no real memory management or protection - all of that was in the underlying virtualizer. Also, VM/370 could run other IBM mainframe systems such as OS/360 or MVT in virtual sessions as well as CMS, for backwards compatibility; there was even a Unixoid system that ran under it, if memory serves. AFAIK, there wasn't much in the way of IPC; each virtualized system thought it was alone in the machine.

A similar system for PCs came out in the late 1980s called PC/VM386, which allowed you to virtualize several MS-DOS systems (or even Quarterdeck and Windows, if memory serves) at once - something that was otherwise nearly unknown at the time.

Conversely, exo-kernels such as L4 (EDIT: I was incorrect here; L4 is a microkernel design, not an exo-kernel) are basically just virtualizers as well, designed not to run operating systems but rather individual applications. The idea is to eliminate the overhead of operating system abstractions by letting the apps run more or less on bare hardware. They generally provide multiplexing and memory protection and nothing else; each app would be responsible for its own drivers, memory management, etc., the idea being that you could tailor such things to each application for maximum efficiency and use shared libraries for the non-critical sections. While they have gained a lot of attention in theoretical circles over the past 20 years, they don't seem to be taking over the real world any time soon.
Last edited by Schol-R-LEA on Sun Sep 30, 2007 11:53 pm, edited 2 times in total.
Crazed123
Member
Posts: 248
Joined: Thu Oct 21, 2004 11:00 pm

Post by Crazed123 »

Actually, the reason most microkernels haven't caught on is that they don't do what you said.

They are basically virtualizers, but they're designed to run operating-system personalities rather than individual applications. Now, the problem is that they don't actually provide any real advantages to OS personalities that can't also be gained from full-blown virtualization, which is easier to do right and easier to code to.

The exceptions are places where you absolutely need certain stability guarantees, like in an embedded QNX system. There, microkernels excel.
User avatar
Schol-R-LEA
Member
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Post by Schol-R-LEA »

I was discussing exo-kernels, actually, not microkernels; they are radically different concepts. A microkernel design is intended to be at a higher level of abstraction than a monolithic kernel, by providing a set of standard protocols by which new components can be added to the system, and by removing many of the modules which would otherwise be part of the kernel and making them user processes instead. Exo-kernels, conversely, are intended to remove abstractions, by providing no services except basic memory protection and multiplexing of the hardware - everything else is done in userland libraries and/or application-specific low-level code, even memory management, IPC, and process/thread control. The whole reason for calling it an exo-kernel is that it doesn't really have a kernel at all in the usual sense - the 'kernel' services are outside of the 'system', invisible to the users and even to the programs, which are given the illusion of running alone on bare metal.

More succinctly, a microkernel works by making an operating system more extensible and simplifying the kernel part of the system; an exo-kernel bypasses the need for an operating system entirely.

It is incorrect to describe a microkernel as a virtualizer; microkernels provide many other services than that, and conversely, not all microkernels virtualize (e.g., the original 8088 versions of Minix and QNX). The issue of 'os personalities' is orthogonal to the monolithic vs micro vs exo-kernel issue entirely; only a handful of systems provide such services, and some of those which do are monolithic systems.

There are dozens of microkernel systems in wide use, starting with the current versions of Windows (which does indeed use it for OS personalities, primarily, though on a level that is essentially transparent to the user - the 'personalities' part is entirely in the support for executable formats and system services, not user interface) and MacOS (which uses the Mach kernel, just as the NeXT did; you'll often hear it incorrectly described as being a FreeBSD kernel, but this is a misunderstanding stemming from the fact that Mach was initially a derivative of the old BSD 4.2 system in the 1980s). Needless to say, just how 'micro' a microkernel system has to be is a relative thing :roll:

There are no commercial exo-kernel 'systems' in existence, AFAIK; all current exo-kernel systems are experimental designs, and the only really important one of those was ExOS (apparently, I was incorrect in describing L4 as an exo-kernel; it is a microkernel, according to the designers, and from what I see looking it up I find that this is the case).
User avatar
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Post by Colonel Kernel »

Schol-R-LEA wrote:There are dozens of microkernel systems in wide use, starting with the current versions of Windows (which does indeed use it for OS personalities, primarily, though on a level that is essentially transparent to the user - the 'personalities' part is entirely in the support for executable formats and system services, not user interface) and MacOS (which uses the Mach kernel, just as the NeXT did; you'll often hear it incorrectly described as being a FreeBSD kernel, but this is a misunderstanding stemming from the fact the Mach was initially a derivative of the old BSD 4.2 system in the 1980s).
Neither Windows nor Mac OS X are microkernels. At best you could call them "hybrids" or "macrokernels". Most of the OS services in Windows are provided by the kernel. The user-mode server processes don't do much except keep track of user sessions and help to launch other processes. In the move from NT 3.51 to NT 4.0, the GUI sub-system was moved out of csrss.exe into win32k.sys, which resides in the kernel. The "kernel" portion of Win32 itself is just a thin veneer on the NT kernel system calls, making "Windows" by far the more dominant "personality" of the system -- the POSIX and MS-DOS sub-systems (and the old OS/2 subsystem) effectively map their respective OS APIs to the "kernel" portion of Win32.

Mac OS X has a microkernel in it, but I wouldn't exactly call xnu itself a microkernel. Maybe a better way to put it is this -- if you take all the OS services that are supposed to run in separate processes (file systems, device drivers, etc.), stuff them all in a single process, then make that process run in kernel mode, is the architecture any longer a "microkernel" architecture? IMO, no.
Needless to say, just how 'micro' a microkernel system has to be is a relative thing :roll:
It's never really been about being "micro". The size of a microkernel is a side-effect of the way it is designed, it is not the reason for the designation in the first place. IMO the definition of microkernel is very straightforward: It is an OS architecture where the process abstraction is a primitive upon which nearly all other OS services are built. In Windows, nearly all OS services are implemented in the NT kernel -- the same kernel where processes themselves are implemented. In xnu, some OS services (like low-level thread management) are provided by Mach, but most of the rest are provided by the BSD layer and IOKit, which mostly wrap Mach abstractions with their own (e.g. -- UNIX processes add things like file descriptors to Mach tasks), and communicate directly without IPC. Again, the OS services themselves sit alongside processes, not on top of them.

Whether L4 is a microkernel or exokernel seems to be a point of some confusion... I would say it is definitely a microkernel that could perhaps be used as the basis for an exokernel, given its tendency towards offloading policy decisions to user space.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
User avatar
Schol-R-LEA
Member
Member
Posts: 1925
Joined: Fri Oct 27, 2006 9:42 am
Location: Athens, GA, USA

Post by Schol-R-LEA »

At this point, it really is coming down to a Layne's Law/Dictionary Flame issue - and for what it is worth, I actually agree that 'microkernel' doesn't really describe them, not in the classic sense. Still, they are described as such not only in the corporate press puffery, but in a number of textbooks and published papers as well... obviously, someone other than their respective marketing depts. feels that they fit the description.

I suppose that the view in some places is that a system which offloads anything into user space is a micro, without regard to whether it is a general abstraction of the system - in which case even VMS would count as a microkernel. Or perhaps it's felt that if the creators call it a microkernel, then a microkernel it is. I digress. Suffice it to say, I retract my description of them as micros, though I expect that, in true Layne's Law fashion, the moment I do so someone will pop up and insist that they are microkernels, and do their best to justify it. You just can't win.

Part of the matter is the question of what constitutes a critical kernel service, and what can be safely and efficiently abstracted out to user space. It's not as straightforward a question as it seems. The exo-kernel answer, for example, is that almost nothing should be kernel-space, while the traditional monolithic approach is that all system services - loadable or not - are part of the kernel. The safety issue is another perennial debate - while classical OS theory states that anything important should be protected in kernel space, part of the goal of a microkernel is to reduce the number of things that can cause a kernel panic; by moving them into user processes, they make the services just another daemon, which can be killed and restarted without risking the critical process and memory services.

Perhaps what is needed is a layering of spaces - a kernel space, a user space, and a 'service space' which isn't part of the kernel but does have additional protections beyond those of user processes - but that probably complicates issues unnecessarily, as the kernel/user distinction has proven to be more than adequate. In any case it would mean yet another level of mode switching, which is already used as an argument against micros. Adding yet another abstraction, which is almost just like a user process but not the same, simply pollutes the existing abstractions and system interface... and makes the question of what goes in which sort of space even more problematic. Hmmn, never mind.

As for the 'micro' jab, I was being facetious. Several of the prominent microkernel systems - Mach being the usual example - have been very large indeed, while still qualifying as microkernels. Conversely, some monolithic kernels - especially older ones, when hardware constraints were much tighter - have been quite small. As you state, the term 'microkernel' reflects abstractions and policies, not size.
User avatar
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Post by AndrewAPrice »

Schol-R-LEA wrote:I was discussing exo-kernels
Monotasking exokernel? :?

What is the threshold before a kernel is considered a microkernel? Placing servers and drivers into their own threads? Placing servers and drivers into their own memory? Running servers and drivers in user mode?
My OS is Perception.