Microkernel Design Info

Pype.Clicker
Member
Posts: 5964
Joined: Wed Oct 18, 2006 2:31 am
Location: In a galaxy, far, far away

Re:Microkernel Design Info

Post by Pype.Clicker »

I've been crawling through some L3 design/implementation whitepapers and it's surprising to see things like "for efficiency, microkernel XYZ has a single shared-by-all-processes message-swapping area". That means the servers are unable to receive messages safely: nothing guarantees the message was deposited by the process that claims to have sent it, and the message may be altered by the sender while the receiver is reading it (possibly between the consistency check and execution?).
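To make the race concrete, here's a rough sketch (hypothetical code, nothing from L3 itself): if the receiver validates the message while it still sits in the shared area, the sender can rewrite it between the check and the use; copying into private memory first at least closes that window.

#include <stddef.h>
#include <string.h>

/* Hypothetical sketch - not L3's actual code.  shared_slot is a message
 * area mapped writable into every process; process() stands in for
 * whatever the server does with a validated request. */
typedef struct {
    int    sender_pid;          /* claimed sender - nothing enforces this      */
    size_t len;
    char   data[256];
} message_t;

extern message_t *shared_slot;  /* still writable by the sender at all times   */
void process(const char *buf, size_t len);

/* UNSAFE: check-then-use on memory the sender can still modify. */
void handle_request_unsafe(void)
{
    if (shared_slot->len <= sizeof shared_slot->data) {   /* consistency check */
        /* ... the sender may rewrite len/data right here ...                  */
        process(shared_slot->data, shared_slot->len);     /* execution         */
    }
}

/* SAFER: snapshot into receiver-private memory, then validate the copy.
 * (This still doesn't prove who deposited the message - that needs help
 * from the kernel, e.g. per-sender buffers or kernel-copied messages.)  */
void handle_request_safe(void)
{
    message_t local;
    memcpy(&local, shared_slot, sizeof local);
    if (local.len <= sizeof local.data)
        process(local.data, local.len);
}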
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re:Microkernel Design Info

Post by Brendan »

Hi,
kiran wrote: But what is the problem with Amoeba? Isn't it really a distributed operating system? I thought it was a true distributed OS, unlike Mach, which is an extension of Unix.
You're right - Amoeba is technically a distributed operating system. It consists of four distinct classes of computers: workstations (essentially X terminals), processing machines, file servers and directory servers. For simplicity let's call this a "cluster" of computers. Basically it's not too different from a normal *nix file server being used, via a networked file system, by another (diskless) *nix computer that runs all the applications and displays them on remote (diskless) X terminals.

The only thing Amoeba workstations do is drive the X display, so most of the CPU power and any storage space/devices on these computers is wasted. The processing machines do all of the "heavy computing" and run all of the apps (the X clients). The file servers contain file data without any directory information. The directory servers don't store any file data, just information on where the data is. All communication is done via RPC (Remote Procedure Calls), where the client sends a message to the "server thread" and is blocked until the server thread sends a return message.

When an application wants to load an icon to be displayed on a user's screen, it sends an RPC message over the network to the directory server and blocks until it gets a reply, then sends another RPC/network message to the file server (more blocking until that reply arrives). Once it has this data it processes it and sends the updated video data over the network to the user's workstation. It's a slow mess of RPC and network traffic.
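As a rough sketch of what that costs in code (the names below - rpc_trans(), file_location, render_icon() and the server ids - are invented for illustration, not Amoeba's actual API), the application blocks three separate times before one icon reaches the screen:

#include <stddef.h>
#include <string.h>

/* Hypothetical blocking RPC: send a request to a server and sleep
 * until its reply arrives. */
typedef struct { int file_server; long object; } file_location;

int    rpc_trans(int server, const void *req, size_t req_len,
                 void *reply, size_t reply_max);
size_t render_icon(const char *icon, char *pixels, size_t max);

enum { DIRECTORY_SERVER = 1 };

void show_icon(const char *path, int workstation)
{
    file_location loc;
    static char icon[4096], pixels[65536];

    /* 1. Block until the directory server says where the file lives.   */
    rpc_trans(DIRECTORY_SERVER, path, strlen(path) + 1, &loc, sizeof loc);

    /* 2. Block again while that file server returns the file contents. */
    rpc_trans(loc.file_server, &loc, sizeof loc, icon, sizeof icon);

    /* 3. Render, then block a third time pushing the pixels over the
     *    network to the user's workstation.                            */
    size_t n = render_icon(icon, pixels, sizeof pixels);
    rpc_trans(workstation, pixels, n, NULL, 0);
}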

To make this sort of system work with reasonable performance you'd need some high speed networking (not too expensive), some workstations (not too expensive), some processing machines with heaps of CPU power (expensive) and some good file servers (expensive). I'd estimate that a cluster that performs as well as 10 PCs running *nix would cost roughly 5 times as much. And once you've got your cluster working, what if you want to edit a text file for five minutes on the weekend? You'd need a minimum of 3 computers running just for that.

Wouldn't it make sense if the processing computers were capable of using local storage space to cache files (reducing network traffic and other delays)? How about if the workstations did some processing (and file caching) too?
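For example (purely illustrative helpers, not part of Amoeba), a read path with a local cache only touches the network on a miss:

#include <stddef.h>
#include <sys/types.h>

/* Hypothetical helpers - they just show the shape of "try the local copy
 * first, hit the network only on a miss". */
ssize_t local_cache_lookup(const char *name, void *buf, size_t max);
void    local_cache_store (const char *name, const void *buf, size_t len);
ssize_t fetch_from_file_server(const char *name, void *buf, size_t max);

ssize_t read_file(const char *name, void *buf, size_t max)
{
    ssize_t n = local_cache_lookup(name, buf, max);  /* hit: no network traffic */
    if (n >= 0)
        return n;

    n = fetch_from_file_server(name, buf, max);      /* miss: one network trip  */
    if (n >= 0)
        local_cache_store(name, buf, (size_t)n);     /* keep it close for later */
    return n;
}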

What I'm basically saying is: drop the client/server thinking and instead make everything asynchronous peer-to-peer. Make every computer in the network act as a combined workstation, processing machine, file server and directory server. In this way all of the cluster's resources can be fully utilized, data can be cached close to where it is used most often, networking delays can be dramatically reduced, and the bang-per-buck factor would be much better.
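Sketching the difference with made-up primitives: instead of sleeping inside a blocking RPC, the requester fires the message off, keeps doing useful work, and handles whatever replies or events arrive from any peer. The price is that clients are written as message loops or state machines instead of simply waiting for an answer.

#include <stddef.h>

/* Hypothetical non-blocking primitives and tags, for illustration only. */
enum { TAG_READ_REQ, TAG_READ_REPLY, TAG_KEY_EVENT };

void msg_send(int peer, int tag, const void *data, size_t len);
int  msg_poll(int *tag, void *data, size_t max);   /* < 0 when nothing is waiting */
int  peer_with_file(const char *name);
void draw_icon(const void *pixels);
void handle_key(const void *event);
void do_other_work(void);

void main_loop(void)
{
    /* Fire the request off and keep going - no thread sits blocked on it. */
    msg_send(peer_with_file("icon.png"), TAG_READ_REQ, "icon.png", 9);

    for (;;) {
        int tag;
        static char buf[65536];
        if (msg_poll(&tag, buf, sizeof buf) >= 0) {
            switch (tag) {
            case TAG_READ_REPLY: draw_icon(buf);  break;   /* reply from any peer */
            case TAG_KEY_EVENT:  handle_key(buf); break;
            }
        }
        do_other_work();   /* useful work continues between messages */
    }
}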

Because a single PC would be capable of handling all functions, you could also have a cluster of one computer, which would be (almost) the same as a single PC running a normal/non-distributed OS. A standard PC can handle 2 video cards, 2 PS/2 keyboards and 2 serial mice, so a single computer could handle 2 separate users without much hassle. Because the available resources could be fully utilized, a collection of 5 standard/cheap Pentium II machines with 2 users per computer could perform better than a $10,000 server running *nix (with 10 dumb terminals or X clients), or a $20,000 collection of computers running Amoeba.

Companies that already have suitable equipment for Amoeba would be relatively rare, and I doubt home users would even consider it. With the peer-to-peer model a standard PC could be used instead, and there are millions of offices that already use a collection of PCs (with perhaps one or two general servers thrown in).

IMHO Tanenbaum just doesn't seem to see much further than Unix (which dates back to around 1969)...


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re:Microkernel Design Info

Post by Candy »

IMO, Amoeba was developed to make use of future chips, where the processing power per chip didn't increase but the number of chips did. That way you can make two processing/file boxes and have a number of cheap, low-end boxes as X terminals. He didn't foresee processing power and storage becoming so cheap that the split would be pointless.

Consider using four 16-bit, NES-class machines for the workstation side, as clients. Then it makes sense not to run the work there, but somewhere else.
Pype.Clicker
Member
Posts: 5964
Joined: Wed Oct 18, 2006 2:31 am
Location: In a galaxy, far, far away

Re:Microkernel Design Info

Post by Pype.Clicker »

Agree with Candy. Amoeba's design dates back to 1994. At that time, *many* universities were still using diskless X stations connected to an expensive, medium-sized departmental dual-processor machine.

Now we're at a point where each machine has so many resources that it exports them to p2p communities (think of file swappers, SETI@home, etc.).
Brendan
Member
Posts: 8561
Joined: Sat Jan 15, 2005 12:00 am
Location: At his keyboard!

Re:Microkernel Design Info

Post by Brendan »

Hi,
Pype.Clicker wrote: Agree with Candy. Amoeba's design dates back to 1994. At that time, *many* universities were still using diskless X stations connected to an expensive, medium-sized departmental dual-processor machine.

Now we're at a point where each machine has so many resources that it exports them to p2p communities (think of file swappers, SETI@home, etc.).
I was considering Amoeba as an operating system for the real world rather than as an operating system for an isolated environment like a university. In the late 1990s I attended a university myself where dumb terminals were being used as telnet clients to teach students about "modern computing" (even though Windows 95, 80486 and Pentium PCs were in wide use everywhere else), so perhaps I should've given it more thought.

Then again, perhaps I was right the first time, considering that the same kinds of computers are used for the file servers, workstations and pool processors. Taken from "The Amoeba Distributed Operating System" by Andrew S. Tanenbaum & Gregory J. Sharp, which can be found at http://www.cs.vu.nl/pub/amoeba/Intro.pdf:
Minimum configuration for 386/486/Pentium systems:
File server: 16 MB RAM, a 300 MB disk, 3.5" floppy drive, Ethernet card, VGA card, keyboard, monitor, mouse.
Workstation: 8 MB RAM, Ethernet card, VGA card, keyboard, monitor, mouse.
Pool processor: 4 MB RAM, 3.5" floppy drive, Ethernet card.
Supported Ethernet cards: SMC/WD 8013, NE 2100, NE2000, 3Com 503

Minimum configuration for a SPARCstation system:
File server: 16 MB RAM, a 300 MB disk, a SCSI tape drive.
Workstation: 8MB RAM, monitor, keyboard, mouse.
Pool processor: 8 MB RAM.

Minimum configuration for a Sun 3/60 system:
File server: exactly 12 MB RAM, a 300 MB disk, a QIC-24 tape drive.
Workstation: 4 MB RAM, monochrome monitor, keyboard, mouse.
Pool processor: 4 MB RAM.
Sun 3/50s can also be used for pool processors and workstations.
And then this from the very beginning of the same document:
Roughly speaking, we can divide the history of modern computing into the
following eras:

1970s: Timesharing (1 computer with many users)
1980s: Personal computing (1 computer per user)
1990s: Parallel computing (many computers per user)

Until about 1980, computers were huge, expensive, and located in computer centers.
Most organizations had a single large machine.

In the 1980s, prices came down to the point where each user could have his or her
own personal computer or workstation. These machines were often networked together,
so that users could do remote logins on other people's computers or share files in various
(often ad hoc) ways.
As you can see, Tanenbaum himself seems to think networks of PCs were common well before 1994...

I still find other aspects of Amoeba's design rather lacking: namely RPC; the isolation between processing machines (pool processors), file servers and directory servers; and raw disk IO built into the "micro-kernel". It also seems too much like "Unix split into chunks" rather than anything innovative to me. If I took the ketchup/sauce out of a hotdog and put it on one plate, put the meat on another plate and the remaining bun on a third plate, could I call it a 3-course meal?
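For contrast, the usual microkernel answer to "raw disk IO in the kernel" is a disk driver that runs as an ordinary user-space server and is reached purely by message passing. A rough sketch with invented IPC and ATA helper names (no particular kernel's API); a crashed or buggy driver then only takes down its own process, at the cost of an IPC round trip per request:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical IPC and ATA helpers; the point is only that the driver is
 * an ordinary user-space server and the kernel just moves messages. */
enum { DISK_READ, DISK_WRITE };
typedef struct { int op; uint64_t lba; uint32_t count; char data[4096]; } disk_msg;

int  ipc_receive(int port, void *msg, size_t max);          /* returns client id */
void ipc_reply  (int client, const void *msg, size_t len);
void ata_read_sectors (uint64_t lba, uint32_t count, void *buf);
void ata_write_sectors(uint64_t lba, uint32_t count, const void *buf);

#define DISK_PORT 7

void disk_server(void)
{
    disk_msg m;
    for (;;) {
        int client = ipc_receive(DISK_PORT, &m, sizeof m);   /* block for work   */
        switch (m.op) {
        case DISK_READ:  ata_read_sectors (m.lba, m.count, m.data); break;
        case DISK_WRITE: ata_write_sectors(m.lba, m.count, m.data); break;
        }
        ipc_reply(client, &m, sizeof m);                     /* result goes back */
    }
}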


Cheers,

Brendan
For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.
kiran

Re:Microkernel Design Info

Post by kiran »

hello

Well, I also don't think anyone would use Amoeba at home with three PCs. But why not these situations:

A wide user base, like a community of poor people, using the processing power of the processor pool the way they use electricity. If cheap dumb X terminals can be supplied to all the people in a village, they can connect to the Amoeba system and do their work. If the government pays for the processor pool, the file servers and the network equipment, won't it be much cheaper for the government than giving each user a PC? And won't it be cheaper for the villagers, since they only have to pay for the dumb terminal?

Can't it also replace expensive supercomputers for carrying out huge and complex calculations? The processor pool would be less expensive than a supercomputer.

And regarding the design, I think the heterogeneous processor pool is a very good idea. Was this idea first introduced by Amoeba, or was it borrowed from somewhere else?

Kiran
Curufir

Re:Microkernel Design Info

Post by Curufir »

kiran wrote: A wide user base, like a community of poor people, using the processing power of the processor pool the way they use electricity.
That's a very old idea. Look up information on Multics sometime.

Problem is that the concept was thought of when decent processing power was VERY expensive. Nowadays we have really stupid levels of processing power on a home PC or laptop.

For anything but scientific, industrial graphics, and possibly server work there is very little reason for a person to need more processing power than they already have with their desktop computer. I'd go so far as to say that modern processors are ALREADY too powerful, and that manufacturers should start looking at ways to reduce power consumption rather than chasing after extra cycles.
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re:Microkernel Design Info

Post by Candy »

Curufir wrote: That's a very old idea. Look up information on Multics sometime.
Multics was one mainframe with a number of dumb terminals. Not exactly the same thing.
Problem is that the concept was thought of when decent processing power was VERY expensive. Nowadays we have really stupid levels of processing power on a home PC or laptop.
The word you probably mean is stupendous. Stupid would mean low in this case, which isn't true. Stupendous means unimaginable up to a short while ago, stopping most people in their thinking-tracks.
For anything but scientific, industrial graphics, and possibly server work there is very little reason for a person to need more processing power than they already have with their desktop computer. I'd go so far as to say that modern processors are ALREADY too powerful, and that manufacturers should start looking at ways to reduce power consumption rather than chasing after extra cycles.
Why do you think people don't buy new computers, and why am I still using my old Duron 1200? I don't need more memory or more speed, only more hard disk space at the moment. My computer, like lots of others (Windows boxes not counted, they eat it all alive of course), just can't put all that power to any good use. All you can do with it is be happy that you've got it, and not spend more than the most budget line you can get.

There is NO reason in this decade to go above 2 GHz.
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada

Re:Microkernel Design Info

Post by Colonel Kernel »

There is NO reason in this decade to go above 2 GHz.
Except Doom 3.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
Curufir

Re:Microkernel Design Info

Post by Curufir »

The word you probably mean is stupendous. Stupid would mean low in this case, which isn't true.
Nope. I actually meant stupid, as in ridiculous.

I can't think of a more succinct way to describe Grandma needing a 2 GHz+ computer to keep in touch with her sewing circle.
Dreamsmith

Re:Microkernel Design Info

Post by Dreamsmith »

Sadly, it still takes your average 3GHz P4 longer to boot than it took my 1MHz Apple IIc to boot, launch AppleWorks, let me type in a short memo, print it, and shutdown. And I didn't have to install a comprehensive anti-virus package or face wading through a dozen popups to do it.

Processors today aren't overpowered. They're still too slow for computers running today's operating systems to get anywhere close to the responsiveness computers had in the '80s. My jaw would drop off and roll around the floor if I saw a computer today that felt half as fast as my old 8-bit Apple IIc did. I estimate we'd need about a 30-40 GHz processor to match it, though. And by the time we get processors that fast, Bill will have "innovated" new ways to bog the system down more than ever before.

By projecting current trends into the future, taking Moore's Law into account, I estimate that by 2010 it'll take five hours to type that memo. By 2020, it'll just be easier to walk from New York to Los Angeles and tell whoever it was what you wanted them to know, rather than taking the time to type and send the memo. By the 22nd century, computers will be operated like the fictional Deep Thought: you'll turn one on, and generations later your descendants type the memo in, assuming anyone still cares.

By then, open source zealots, retro-computing fans, aficionados of people like Chuck Moore, and other assorted riff-raff will have computers that can do anything faster than you can even think about asking them to do it, but everyone will think they're nuts for using them, because they won't be able to load popular software like Windows THX2105 Home and Garden Edition or AOL 276.0 on them, and are thus too difficult for anyone but the evolved trans-humans to use.

And that's if we're lucky... ::)
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re:Microkernel Design Info

Post by Solar »

Dreamsmith wrote: Sadly, it still takes your average 3GHz P4 longer to boot than it took my 1MHz Apple IIc to boot, launch AppleWorks, let me type in a short memo, print it, and shutdown. And I didn't have to install a comprehensive anti-virus package or face wading through a dozen popups to do it.

Processors today aren't overpowered. They're still too slow for computers running today's operating systems to get anywhere close to the responsiveness computers had in the '80s.
It's the operating systems and the system builders that are to blame, not the CPUs.

I'm reading / writing this on a 206 MHz StrongARM (HP Jornada), which "boots" in <1 sec., brings up Internet Explorer in <1 sec., and which I can "power down" in <1 sec.

That's the primary reason I started to get interested in writing my own OS. Time and again, and in a thousand places throughout a system (no matter whether Windows or Linux), developers claim that they needn't worry about the cycles wasted by their approach, because systems are soooooo fast today and just get faster... and then they add yet another "feature" that bogs down the system.

Embedded systems show how it should be done, and I seriously cannot think of a reason why I should pay over 100 euros for a high-end CPU - not so that my apps run faster, but only so that the OS / application developers can be lazy in their designs.

Hardware has become too cheap, that's the problem. Wherever hardware is scarce, it becomes obvious that the sluggishness of modern systems is home-grown.
Every good solution is obvious once you've found it.
distantvoices
Member
Posts: 1600
Joined: Wed Oct 18, 2006 11:59 am
Location: Vienna/Austria

Re:Microkernel Design Info

Post by distantvoices »

@Solar: well spoken.

Take a powerful CPU. Give it a huge, bloated operating system and huge, bloated apps, and you will experience a slowdown compared to what was possible running MacOS 7.5 on a PowerPC 7100/80: that's power, I say - a lean system and lean applications. Just compare it to MacOS 9, which needs almost two or three times as long as MacOS 7.5 to boot up and offers basically the same functionality: the Mac's cooperative multitasking.

I wish the time would come back when developers thought of good software in the following terms: have it boot quickly, have it operate seamlessly and fluently, have it offer the user the *required* options to adjust things. Less is more, as we say in Unix land, isn't it so? *gg*

Your HP Jornada is the best example of lean development, Solar. Such a thing is seldom found in the PC world, where processing power is considered an excuse for sloppy development.

Regarding the topic: what about having a separate process in the microkernel environment which comes alive upon a crash and offers various debugging and stack-inspection tools, and possibly notifies some connected fall-back machine, either via RS-232 or TCP/IP, to take over all the work? At least this is possible with Novell NetWare, as far as I know.
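Something along those lines might look like the sketch below - all the names (fault_report, ipc_wait_for_fault(), notify_fallback_host(), etc.) are invented for illustration, nothing NetWare- or kernel-specific: a watchdog server sleeps until the kernel reports a faulted task, offers inspection, then tells a standby machine to take over.

#include <stddef.h>

/* Everything here is invented for illustration: fault_report, the IPC and
 * dump helpers, and the fallback notification are not a real API. */
typedef struct { int task; void *fault_addr; void *stack_ptr; } fault_report;

void ipc_wait_for_fault(fault_report *r);      /* kernel wakes us on any crash */
void dump_registers(const fault_report *r);
void dump_stack(int task, void *sp);
int  notify_fallback_host(const char *addr, const fault_report *r);  /* TCP/IP */
void serial_send(const void *data, size_t len);                      /* RS-232 */

void crash_server(void)
{
    fault_report r;
    for (;;) {
        ipc_wait_for_fault(&r);

        dump_registers(&r);                    /* hand the user inspection tools */
        dump_stack(r.task, r.stack_ptr);

        /* Tell a standby machine to take over; fall back to the serial line. */
        if (!notify_fallback_host("10.0.0.2", &r))
            serial_send(&r, sizeof r);
    }
}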
... the osdever formerly known as beyond infinity ...
BlueillusionOS iso image
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada

Re:Microkernel Design Info

Post by Colonel Kernel »

I'm not sure that the performance problems we suffer today are because of wasted CPU cycles. I think it's a lot more likely the result of the widening performance gap between the CPU, RAM, and hard drives. One thing that's definitely true of modern PC (in the "personal" not "x86" sense) software is that it's really memory-hungry. Then you have stupid design decisions from the likes of MS like making most of the OS pageable so that its responsiveness evaporates as soon as a memory-hogging app starts running roughshod over the rest of the system. I've been running various flavours of Windows for a long time, and getting more RAM has solved 90% of my performance problems. The rest are all caused by hard drives that are too slow to keep up (or maybe really crappy drivers that hang the rest of the system while they're busy). Of course, it doesn't help when the OS itself is a memory hog.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany

Re:Microkernel Design Info

Post by Solar »

Still, it's not the hardware that's to blame, but the OS design.

I have Win2k running on a 1.2 GHz Athlon, enjoying 256 MByte RAM and residing on a U2W SCSI hard drive. There's hardly ever any swapping, since all the stuff I use fits comfortably in there.

Still, my WinCE machine (206 MHz StrongARM and 32 MByte RAM) boots / starts applications faster, and my late Amiga (50 MHz 68060 with 32 MByte RAM) beat the living daylights out of my Win2k tower in just about everything but MP3 encoding and GCC compilation times.

The better the hardware, the less return-on-MHz you get, because everybody - mainboard designer, OS designer, app designer - thinks he's entitled to enjoy the added hardware capabilities. Some of 'em even think they're doing the user a favour, because now everything is animated and automagically guessing your every whim...

I believe that's a seriously flawed thought. I am the user. I paid for that hardware, and I am the only one who should enjoy its benefits - not in terms of eye candy, but of snappy response, short compilation / encoding times, and flawless multimedia.

But eye candy sells, as Microsoft and SuSE know all too well.
Every good solution is obvious once you've found it.