The return of the monotasking OS?
I was just pondering virtualisation (the field I'm working in at the moment). The main reason systems are virtualised is underuse of processing resources. In the server world this stems from the fact that most admins like to run only one big app on each server, in case multiple apps interfere with each other or somehow manage to crash the machine together.
Now, I was thinking: isn't this negating the entire point of a multitasking OS? The OS is designed to run multiple processes that (hopefully) don't destroy each other. So, as I see it, either admins are running very badly designed user processes, or the operating system itself is not doing its job properly.
Are we on a degenerate road where we'll see the return of an (almost) monotasking OS, which runs one major process (the app) and several minor ones (bash over ssh, for example; services could be implemented as modules)?
I'd be interested in opinions!
JamesM
Re: The return of the monotasking OS?
JamesM wrote: Are we on a degenerate road where we'll see the return of an (almost) monotasking OS?
Personally, I doubt it. The focus for processor developers at the moment seems to be 'how do I fit as many cores as possible onto a single piece of silicon?'. I don't think that fits in with monotasking (even if that single task is multithreaded). I am no expert in servers, but I think the desktop PC will increasingly take better advantage of the existing multitasking/threading facilities available - perhaps the focus will be on making task switches happen more quickly and smoothly?
Cheers,
Adam
Colonel Kernel
Even on a server with only one app running, there could still be many threads running. For example, imagine a web app being used by 100 people at the same time. Even though there may not be any hardware parallelism being used (if the VM is assigned to a single core for example), the use of threads makes it easier to model the natural concurrency of the app. Otherwise you'd have to implement the server as an event-driven state machine, which is a difficult way to structure some apps. Most real-time OSes still support multitasking for much the same reason.
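To make the contrast concrete, here is a rough sketch of the thread-per-client style (my illustration, not from the post above): each simulated client gets its own thread running straight-line blocking code, where an event-driven design would split the same logic into callbacks and hand-kept per-connection state. It assumes POSIX threads; the handler and client count are invented.

```c
/* Thread-per-client sketch: compile with `cc -pthread clients.c` */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_CLIENTS 4              /* stand-in for "100 people at once" */

static void *handle_client(void *arg)
{
    long id = (long)arg;

    /* Straight-line logic: read request, "work", write reply. */
    printf("client %ld: reading request\n", id);
    sleep(1);                      /* pretend to block on I/O */
    printf("client %ld: sending reply\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_CLIENTS];

    for (long i = 0; i < NUM_CLIENTS; i++)
        pthread_create(&threads[i], NULL, handle_client, (void *)i);

    for (int i = 0; i < NUM_CLIENTS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```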
Top three reasons why my OS project died:
- Too much overtime at work
- Got married
- My brain got stuck in an infinite loop while trying to design the memory manager
Brynet-Inc
In my opinion, this does suggest we're moving towards virtual engines as Operating Systems, because the old style didn't work well enough.
Xen for example, or VMWare.
I think ultimately, even the desktop will move in this direction. Thing is, neither Xen nor VMWare is doing it "right".
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
- C. A. R. Hoare
Not really, IMO. In the end, someone writing a word processor doesn't want to deal with reading and writing hardware ports, so they'd rather run on top of an operating system than a virtual machine.
But there was that research paper about a recursive virtual machine OS that let each VM define its own architecture...
Crazed123 wrote: Not really, IMO. In the end, someone writing a word processor doesn't want to deal with reading and writing hardware ports, so they'd rather run on top of an operating system than a virtual machine.
One solution to that is a library that does the I/O. But I don't think a word processor fits the discussion anyway.
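As a rough illustration of the 'library that does the I/O' idea, here is a hypothetical sketch: the application calls a small library routine instead of poking hardware ports itself. It is x86-specific, assumes a context where port access is legal (ring 0, or user space after ioperm()), and the function names and the COM1 example are my own invention.

```c
#include <stdint.h>

/* The part the library hides: raw port access. */
static inline void port_write8(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}

static inline uint8_t port_read8(uint16_t port)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
    return value;
}

/* The part the application sees: "print a character", not
 * "poke the UART data register at 0x3F8". */
void serial_put_char(char c)
{
    while ((port_read8(0x3F8 + 5) & 0x20) == 0)
        ;                          /* wait for the transmit buffer to empty */
    port_write8(0x3F8, (uint8_t)c);
}
```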
Really, I think we're going to be with multitasking for a long time yet. We have all these cores, and the count is just going to keep growing. If you start daydreaming too much, there are a couple of interesting ideas that could happen (in my dreams, anyway).
We could have processors tailored to do a single task. You could buy a word processor chip, for example. This is how it used to be in the olden days, with those single-use devices. Choosing a program would be like choosing a chip and plugging it into the I/O and storage. This idea would work best for server processes like HTTP and email servers.
I'm not saying the idea is a good one.
AndrewAPrice
The word processing example gave me an idea of how a custom virtual machine might have specific instructions to make it easier for a low-level programmer to access I/O, the dictionary, etc. These 'instructions' could be stored in the binary until run time, at which point the OS would convert them to native instructions inside a 'safe' environment. I was getting all excited until I realised that's exactly what .NET and Java are doing.
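A toy sketch of that idea (the opcodes and the tiny interpreter below are invented for illustration, not how .NET or Java actually work): the 'custom instructions' are just bytecodes, and the host runs them with bounds checks, i.e. inside a 'safe' environment, rather than letting the program execute raw native code.

```c
#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const uint8_t *code, size_t len)
{
    int32_t stack[32];
    size_t sp = 0, pc = 0;

    while (pc < len) {
        switch (code[pc++]) {
        case OP_PUSH: {
            if (pc >= len)
                return;            /* truncated program */
            int32_t v = code[pc++];
            if (sp < 32)           /* the "safe" part: bounds checks */
                stack[sp++] = v;
            break;
        }
        case OP_ADD:
            if (sp >= 2) { stack[sp - 2] += stack[sp - 1]; sp--; }
            break;
        case OP_PRINT:
            if (sp >= 1) printf("%d\n", stack[sp - 1]);
            break;
        case OP_HALT:
            return;
        }
    }
}

int main(void)
{
    /* 2 + 3, computed and printed by the interpreter, never as native code */
    const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program, sizeof program);
    return 0;
}
```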
My OS is Perception.
MessiahAndrw wrote: The word processing example gave me an idea of how a custom virtual machine might have specific instructions to make it easier for a low-level programmer to access I/O, the dictionary, etc. These 'instructions' could be stored in the binary until run time, at which point the OS would convert them to native instructions inside a 'safe' environment. I was getting all excited until I realised that's exactly what .NET and Java are doing.
Also, the "system-on-a-chip-on-an-FPGA" chip I talked about in earlier posts would fit that criterion - being able to morph itself to suit whatever application is using it.
But to return to my original point - yes, programs such as web/database servers need multithreading. But isn't that pretty much all they need? Is there any real need for memory protection in such a system, when the main program is multithreaded and thus shares its memory anyway?
What happens when someone finds an exploit, though? If you are running with no memory protection in a ring 0-type environment, anything that can execute unwanted code is now able to do whatever it likes.
If you have some kind of memory protection, an exploit can still bring down the server app, but the underlying OS gets a chance to do something about it (even if that just involves an error report to the server admin) - at best, the underlying OS may be able to restart the app.
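As a rough sketch of that last point (POSIX only; "./server" is a placeholder path I made up), a tiny supervisor can sit above the server and restart it whenever it dies abnormally, e.g. after the memory-protection hardware kills it with SIGSEGV:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            execl("./server", "server", (char *)NULL);
            _exit(127);            /* exec failed */
        }

        int status;
        waitpid(pid, &status, 0);

        if (WIFSIGNALED(status))
            fprintf(stderr, "server killed by signal %d, restarting\n",
                    WTERMSIG(status));
        else if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            break;                 /* clean shutdown: stop supervising */
        else
            fprintf(stderr, "server exited with status %d, restarting\n",
                    WEXITSTATUS(status));

        sleep(1);                  /* avoid a tight restart loop */
    }
    return 0;
}
```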
Cheers,
Adam
Multitasking came about because processors were very expensive; now they are not, so we will move towards giving each program a core or two.
But the problem at the moment with multiple cores is that they still share too many resources. Once they click that the way to go is more self-contained cores, like GPUs are, it will take off.
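A small version of "give each program a core" is already possible today: on Linux a process can pin itself to a particular core with sched_setaffinity(). A minimal sketch (the choice of core 1 is arbitrary, and the program's real work is left out):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);              /* run this process on core 1 only */

    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pid %d pinned to core 1\n", (int)getpid());
    /* ... the program's actual work would go here ... */
    return 0;
}
```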
Do you mean like asymmetric multiprocessing? Or SMP but each core has a dedicated function?
Looks like it may be the games industry (sticking with your GPU example) that pushes this area forward, with PPUs (PhysX), talk of AI units (APUs?), and now Creative building more advanced processors into their cards with X-Fi.
These trends do seem to go in circles - I wonder if we are going back to needing an additional maths coprocessor and off-chip cache memory.
My favorite specialized processor was the RPU (ray-tracing processor) that was supposed to be able to handle 1600x1200 at a frequency of 200 MHz and do every last pixel on the screen with 3 light bounces.
I thought that was a pretty cool place to start, since it works in parallel with the GPU and CPU. I don't think a word processor has any needs that warrant an outside chip.
I'm also waiting for a LinuxBIOS-supporting AMD64 board. The first one out will be bought by me.
--------------------------------------
I always tend to want to strip away the ugliness - the PIT and PIC should be abandoned since the APIC has been out for over a decade, PCI dropped for PCIe, IDE/PATA for AHCI/SATA, the modem for Ethernet, and the legacy BIOS for LinuxBIOS. Not because it's Linux, but because it loads a 32-bit ELF rather than a 16-bit boot sector. Don't you think it's time we stepped forward?
Avarok wrote: Don't you think it's time we stepped forward?
The curse of backward compatibility... Imagine if we did actually step forward: all of a sudden, almost everyone would have to upgrade their systems because the software would not support their hardware.
Just because it's an easy way out doesn't make it the right thing to do.