
The return of the monotasking OS?

Posted: Tue Sep 25, 2007 1:50 am
by JamesM
I was just pondering on virtualisation (the field I'm working in atm). The main reason systems are virtualised is underuse of processing resources. In the server world this stems from the fact that most admins like to run only one big app on each server - in case the apps interfere with each other, or somehow manage to crash the machine between them.

Now, I was thinking: isn't this negating the entire point of a multitasking OS? The OS is designed to run multiple processes without (hopefully) them destroying each other. So, as I see it, either admins are running very badly designed user processes, or the operating system itself is not doing its job properly.

Are we on a degenerate road where we'll see the return of an (almost) monotasking OS - one which runs one major process (the app) and several minor ones (bash over ssh, for example; services could be implemented as modules)?

I'd be interested in opinions!

JamesM

Re: The return of the monotasking OS?

Posted: Tue Sep 25, 2007 2:15 am
by AJ
JamesM wrote: Are we on a degenerate road where we'll see the return of an (almost) monotasking OS?
Personally, I doubt it. The focus for processor developers at the moment seems to be 'how do I fit as many cores as possible onto a single piece of silicon?'. I don't think that fits in with monotasking (even if that single task is multithreaded). I am no expert in servers, but I think that the desktop PC will very much start taking better advantage of the existing multitasking/threading facilities available - perhaps the focus will be on making task switches happen more quickly and smoothly?

Cheers,
Adam

Posted: Tue Sep 25, 2007 2:19 am
by JamesM
AJ wrote: ...but I think that the desktop PC will very much start taking better advantage of the existing multitasking/threading facilities available
While I heartily agree with you with respect to desktop PCs, my point was more directed towards (and is more valid for) server setups.

JamesM

Posted: Tue Sep 25, 2007 7:23 am
by Colonel Kernel
Even on a server with only one app running, there could still be many threads running. For example, imagine a web app being used by 100 people at the same time. Even though there may not be any hardware parallelism being used (if the VM is assigned to a single core for example), the use of threads makes it easier to model the natural concurrency of the app. Otherwise you'd have to implement the server as an event-driven state machine, which is a difficult way to structure some apps. Most real-time OSes still support multitasking for much the same reason.
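
Roughly, the difference looks like this - just a sketch, where handle_request() and accept_next_client() are made-up stubs standing in for the real app logic and the socket plumbing:

Code:
/* Thread-per-connection model: each client gets its own thread, so its
   state lives naturally on that thread's stack instead of in an explicit
   event-driven state machine. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void handle_request(int fd)        /* stand-in for the real app logic */
{
    printf("handling client %d\n", fd);
    sleep(1);                             /* blocking calls are fine here */
}

static int accept_next_client(void)       /* stand-in for accept() etc. */
{
    static int next = 0;
    sleep(1);
    return next++;
}

static void *worker(void *arg)
{
    handle_request((int)(long)arg);
    return NULL;
}

int main(void)
{
    for (;;) {                            /* one thread per concurrent user */
        pthread_t t;
        int fd = accept_next_client();
        pthread_create(&t, NULL, worker, (void *)(long)fd);
        pthread_detach(t);
    }
}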

Posted: Tue Sep 25, 2007 7:33 am
by Brynet-Inc
I just learnt the Mars rovers are running a commercial real-time Unix-like OS 8)

Don't ruin my day by mentioning monotasking systems :cry:

Posted: Tue Sep 25, 2007 3:56 pm
by Avarok
In my opinion, this does suggest we're moving towards virtual machine engines as operating systems, because the old style didn't work well enough.

Xen for example, or VMWare.

I think ultimately, even the desktop will move in this direction. Thing is, neither Xen nor VMWare is doing it "right".

Posted: Tue Sep 25, 2007 6:49 pm
by Crazed123
Not really, IMO. In the end, someone writing a word processor doesn't want to deal with reading and writing hardware ports, so they'd rather run on top of an operating system than a virtual machine.

But there was that research paper about a recursive virtual machine OS that let each VM define its own architecture...

Posted: Tue Sep 25, 2007 6:58 pm
by JackScott
Crazed123 wrote:Not really, IMO. In the end, someone writing a word processor doesn't want to deal with reading and writing hardware ports, so they'd rather run on top of an operating system than a virtual machine.
One solution to that is a library that does the I/O. But I don't think a word processor fits the discussion anyway.

Really, I think we're going to be with multitasking for a long time yet. We have all these cores, and the number is just going to grow and grow. If you start daydreaming too much, there are a couple of interesting ideas that could happen (in my dreams, anyway).

We could have processors tailored to do a single task. You could buy a word processor chip, for example. This is how it used to be in the olden days, with those single-use devices. Choosing a program would be like choosing a chip and plugging it into the I/O and storage. This idea would work best for server processes like HTTP and email servers.

I'm not saying the idea is a good one. :P

Posted: Tue Sep 25, 2007 9:39 pm
by AndrewAPrice
The word processing example gave me an idea: a custom virtual machine could have specific instructions to make it easier for a low-level programmer to access I/O, the dictionary, etc. These 'instructions' could be stored in the binary until run-time, at which point the OS would convert them to native instructions inside a 'safe' environment. I was getting all excited until I realised that's exactly what .NET and Java are doing. :(
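
For the record, the rough shape I had in mind was something like this - the opcodes and the dictionary call are invented purely for illustration, and a real runtime (like .NET/Java) would translate to native code instead of dispatching in a loop:

Code:
/* Toy sketch: the binary carries abstract "instructions" and the runtime
   maps them to native behaviour inside a controlled environment. */
#include <stdio.h>

enum vm_op { OP_PRINT, OP_SPELLCHECK, OP_HALT };     /* hypothetical opcodes */

static const char *os_dictionary_lookup(const char *w)
{
    return w;                  /* stub for an imagined OS dictionary service */
}

static void run(const enum vm_op *code, const char *word)
{
    for (;; code++) {
        switch (*code) {
        case OP_PRINT:      printf("print: %s\n", word);                       break;
        case OP_SPELLCHECK: printf("spell: %s\n", os_dictionary_lookup(word)); break;
        case OP_HALT:       return;
        }
    }
}

int main(void)
{
    enum vm_op program[] = { OP_SPELLCHECK, OP_PRINT, OP_HALT };
    run(program, "virtualisation");
    return 0;
}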

Posted: Wed Sep 26, 2007 1:27 am
by JamesM
MessiahAndrw wrote: The word processing example gave me an idea: a custom virtual machine could have specific instructions to make it easier for a low-level programmer to access I/O, the dictionary, etc. These 'instructions' could be stored in the binary until run-time, at which point the OS would convert them to native instructions inside a 'safe' environment. I was getting all excited until I realised that's exactly what .NET and Java are doing. :(
Also the "System-on-a-chip-on-an-FPGA" chip I talked about in earlier posts would fit that criteria - being able to morph itself to suit whatever application is using it.

But to return to my original point - yes, programs such as web/database servers need multithreading. But isn't that pretty much all they need? Is there any real need for memory protection in such a system, when the main program is multithreaded and thus shares memory anyway?

Posted: Wed Sep 26, 2007 1:36 am
by AJ
What happens when someone finds an exploit, though? If you are running with no memory protection in a ring 0-type environment, anything which can execute unwanted code is now able to do whatever it likes.

If you have some kind of memory protection, an exploit can still bring down the server app, but the underlying OS gets a chance to do something about it (even if that just means sending an error report to the server admin) - at best, the underlying OS may be able to restart the app.
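
Something as simple as this only works because the fault is contained to one address space (a Unix-flavoured sketch; run_server_app() just stands in for exec'ing the real server binary):

Code:
/* Minimal supervisor sketch: fork the server, and if memory protection
   kills it (e.g. SIGSEGV), log the fact and start it again. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void run_server_app(void)          /* stand-in for the real server */
{
    volatile int *p = NULL;
    *p = 42;                              /* a wild write; the MMU stops it */
}

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            run_server_app();             /* child: becomes the server */
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            fprintf(stderr, "server killed by signal %d, restarting\n",
                    WTERMSIG(status));    /* the damage stayed in its address space */
        sleep(1);                         /* don't restart in a tight loop */
    }
}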

Cheers,
Adam

Posted: Wed Sep 26, 2007 1:57 am
by Dex
Multi-tasking came about because processors were very expensive; now they are not, so we will move towards giving each program a core or two.
But the problem at the moment with multi-core chips is that the cores still share too many resources. Once people click that the way to go is more self-contained cores, like GPUs are, it will take off.
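
You can already get part of the way there today - for example, on Linux something like this (only a rough sketch; the core number is picked arbitrarily) pins a program to its own core:

Code:
/* Pin the current process to one CPU with sched_setaffinity(). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);                                   /* confine us to core 1 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)   /* 0 = this process */
        perror("sched_setaffinity");

    /* ...run the "one big app" here, now living on its own core... */
    return 0;
}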

Posted: Wed Sep 26, 2007 2:23 am
by AJ
Do you mean something like asymmetric multiprocessing? Or SMP where each core has a dedicated function?

Looks like it may be the games industry (sticking with your GPU example) that pushes this area forward, with PPUs (PhysX), talk of AI units (APUs? :) ) and now Creative building more advanced processors into their cards with X-Fi.

These trends do seem to go in circles - I wonder if we are going back to needing an additional maths coprocessor and off-chip cache memory :)

Posted: Wed Sep 26, 2007 6:34 am
by Avarok
My favorite specialized processor was the RPU (ray-tracing processor) that was supposed to be able to handle 1600x1200 at a frequency of 200 MHz and do every last pixel on the screen with 3 light bounces.

I thought that was a pretty cool place to start, since it works in parallel with the GPU and CPU. I don't think a word processor has any needs that warrant an outside chip.

I'm also waiting for an AMD64 board with LinuxBIOS support. The first one out will be bought by me.

--------------------------------------

I always tend to want to strip away the ugliness - the PIT and PIC should be abandoned since the APIC has been out for over a decade, PCI for PCIe, IDE/PATA for AHCI/SATA, the modem for Ethernet, and the legacy BIOS for LinuxBIOS. Not because it's Linux, but because it loads a 32-bit ELF rather than a 16-bit boot sector. Don't you think it's time we stepped forward?

Posted: Wed Sep 26, 2007 11:42 pm
by pcmattman
Avarok wrote:Don't you think it's time we stepped forward?
The curse of backward compatibility... Imagine if we did actually step forward: all of a sudden, almost everyone would have to upgrade their systems because the software would no longer support their hardware.

Just because it's an easy way out doesn't make it the right thing to do.