The return of the monotasking OS?

Discussions on more advanced topics such as monolithic vs micro-kernels, transactional memory models, and paging vs segmentation should go here. Use this forum to expand and improve the wiki!
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

The return of the monotasking OS?

Post by JamesM »

I was just pondering virtualisation (the field I'm working in at the moment). The main reason systems are virtualised is underuse of processing resources. In the server world this stems from the fact that most admins like to run only one big app on each server - in case the apps interfere with each other, or somehow manage to crash the machine between them.

Now, I was thinking: isn't this negating the entire point of a multitasking OS? The OS is designed to run multiple processes, (hopefully) without them destroying each other. So, as I see it, either admins are running very badly designed user processes, or the operating system itself is not doing its job properly.

Are we on a degenerate road where we'll see the return of an (almost) monotasking OS? One which runs one major process (the app) and several minor ones (bash over ssh, for example; services could be implemented as modules)?

I'd be interested in opinions!

JamesM
AJ
Member
Posts: 2646
Joined: Sun Oct 22, 2006 7:01 am
Location: Devon, UK
Contact:

Re: The return of the monotasking OS?

Post by AJ »

JamesM wrote: Are we on a degenerate road where we'll see the return of an (almost) monotasking OS?
Personally, I doubt it. The focus for processor developers at the moment seems to be 'how do I fit as many cores as possible on to a single piece of silicon?'. I don't think that fits in with monotasking (even if that single task is multithreaded). I am no expert in servers, but I think that the desktop PC will very much start taking better advantage of the existing multitasking/threading facilities available - perhaps the focus will be on making task switches happen more quickly and smoothly?

Cheers,
Adam
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Post by JamesM »

AJ wrote: But I think that the desktop PC will very much start taking better advantage of the existing multitasking/threading facilities available
While I heartily agree with you with respect to desktop PCs, my point was more directed towards (and more valid for) server setups.

JamesM
Colonel Kernel
Member
Posts: 1437
Joined: Tue Oct 17, 2006 6:06 pm
Location: Vancouver, BC, Canada
Contact:

Post by Colonel Kernel »

Even on a server with only one app running, there could still be many threads running. For example, imagine a web app being used by 100 people at the same time. Even though there may not be any hardware parallelism being used (if the VM is assigned to a single core for example), the use of threads makes it easier to model the natural concurrency of the app. Otherwise you'd have to implement the server as an event-driven state machine, which is a difficult way to structure some apps. Most real-time OSes still support multitasking for much the same reason.
Top three reasons why my OS project died:
  1. Too much overtime at work
  2. Got married
  3. My brain got stuck in an infinite loop while trying to design the memory manager
Don't let this happen to you!
Brynet-Inc
Member
Posts: 2426
Joined: Tue Oct 17, 2006 9:29 pm
Libera.chat IRC: brynet
Location: Canada
Contact:

Post by Brynet-Inc »

I just learnt the Mars rovers are running a commercial real-time Unix-like OS 8)

Don't ruin my day by mentioning monotasking systems :cry:
Twitter: @canadianbryan. Award by smcerm, I stole it. Original was larger.
Avarok
Member
Posts: 102
Joined: Thu Aug 30, 2007 9:09 pm

Post by Avarok »

In my opinion, this does suggest we're moving towards virtual machine monitors as operating systems, because the old style didn't work well enough.

Xen for example, or VMWare.

I think ultimately, even the desktop will move in this direction. Thing is, neither Xen nor VMWare is doing it "right".
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
- C. A. R. Hoare
Crazed123
Member
Posts: 248
Joined: Thu Oct 21, 2004 11:00 pm

Post by Crazed123 »

Not really, IMO. In the end, someone writing a word processor doesn't want to deal with reading and writing hardware ports, so they'd rather run on top of an operating system than a virtual machine.

But there was that research paper about a recursive virtual machine OS that let each VM define its own architecture...
JackScott
Member
Posts: 1031
Joined: Thu Dec 21, 2006 3:03 am
Location: Hobart, Australia
Contact:

Post by JackScott »

Crazed123 wrote:Not really, IMO. In the end, someone writing a word processor doesn't want to deal with reading and writing hardware ports, so they'd rather run on top of an operating system than a virtual machine.
One solution to that is a library that does the I/O. But I don't think a word processor fits the discussion anyway.

Really, I think we'll be stuck with multitasking for a long time yet. We have all these cores, and the count is just going to grow. If you start daydreaming too much, there are a couple of interesting ideas that could happen (in my dreams, anyway).

We could have processors tailored to do a single task. You could buy a word processor chip, for example. This is how it used to be in the olden days, with those single-use devices. Choosing a program would be like choosing a chip and plugging it into the I/O and storage. This idea would work best for server processes like HTTP and email servers.

I'm not saying the idea is a good one. :P
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Post by AndrewAPrice »

The word processing example gave me an idea of how a custom virtual machine might have specific instructions to make it easier for a low-level programmer to access I/O, the dictionary, etc. These 'instructions' could be stored in the binary until run time, at which point the OS would convert them to native instructions inside a 'safe' environment. I was getting all excited until I realised that's exactly what .NET and Java are doing. :(
My OS is Perception.
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Post by JamesM »

MessiahAndrw wrote:The word processing example gave me an idea of how a custom virtual machine might have specific instructions to make it easier for a low-level programmer to access I/O, the dictionary, etc. These 'instructions' could be stored in the binary until run time, at which point the OS would convert them to native instructions inside a 'safe' environment. I was getting all excited until I realised that's exactly what .NET and Java are doing. :(
Also the "System-on-a-chip-on-an-FPGA" chip I talked about in earlier posts would fit those criteria - being able to morph itself to suit whatever application is using it.

But to return to my original point - yes, programs such as web/database servers need multithreading. But isn't that pretty much all they need? Is there any real need for memory protection in such a system, when the main program is multithreaded and thus shares memory anyway?
AJ
Member
Posts: 2646
Joined: Sun Oct 22, 2006 7:01 am
Location: Devon, UK
Contact:

Post by AJ »

What happens when someone finds an exploit, though? If you are running with no memory protection in a ring 0-type environment, anything which can execute unwanted code is now able to do whatever it likes.

If you have some kind of memory protection, an exploit can still bring down the server app, but the underlying OS gets a chance to do something about it (even if this just involves an error report to the server admin) - at best, the underlying OS may be able to restart the app.

Cheers,
Adam
Dex
Member
Posts: 1444
Joined: Fri Jan 27, 2006 12:00 am
Contact:

Post by Dex »

Multitasking came about because processors were very expensive; now they are not, so we will move towards giving each program a core or two.
But the problem at the moment with multiple cores is that they still share too many resources. Once people click that the way to go is more self-contained cores, like GPUs are, it will take off.
AJ
Member
Posts: 2646
Joined: Sun Oct 22, 2006 7:01 am
Location: Devon, UK
Contact:

Post by AJ »

Do you mean like asymmetric multiprocessing? Or SMP but each core has a dedicated function?

Looks like it may be the games industry (sticking with your GPU example) that pushes this area forward, with PPUs (PhysX), talk of AI units (APUs? :) ) and now Creative building more advanced processors into their cards with X-Fi.

These trends do seem to go in circles - I wonder if we are going back to needing an additional maths coprocessor and off-chip cache memory :)
Avarok
Member
Posts: 102
Joined: Thu Aug 30, 2007 9:09 pm

Post by Avarok »

My favorite specialized processor was the RPU (ray-tracing processor) that was supposed to be able to handle 1600x1200 at a frequency of 200MHz and do every last pixel on the screen for 3 light bounces.

I thought that was a pretty cool place to start off, since it works in parallel with the GPU and CPU. I don't think a word processor has any needs that warrant an outside chip.

I'm also waiting for a LinuxBIOS-supporting AMD64 board. The first one out will be bought by me.

--------------------------------------

I always tend to want to strip away the ugliness - the PIT and PIC should be abandoned as the APIC has been out for over a decade, PCI for PCIe, IDE/PATA for AHCI/SATA, the modem for Ethernet, and the legacy BIOS for LinuxBIOS. Not because it's Linux, but because it loads a 32-bit ELF rather than a 16-bit boot sector. Don't you think it's time we stepped forward?
pcmattman
Member
Posts: 2566
Joined: Sun Jan 14, 2007 9:15 pm
Libera.chat IRC: miselin
Location: Sydney, Australia (I come from a land down under!)
Contact:

Post by pcmattman »

Avarok wrote:Don't you think it's time we stepped forward?
The curse of backward compatibility... Imagine if we did actually step forward: all of a sudden almost everyone would have to upgrade their systems, because the software would no longer support their hardware.

Just because it's an easy way out doesn't make it the right thing to do.