Casm wrote:How can swapping to hard drive work without an MMU?
Almost exactly the same way: every memory access in managed code already goes through compiler-emitted checks, so the runtime itself can notice that data isn't resident and fetch it from disk; no page-fault hardware is required.
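A minimal sketch of the idea in C (every name here is invented for illustration; a real runtime would inline the residency check and use smarter eviction policies):

Code: Select all
#include <stdio.h>
#include <stdlib.h>

/* Sketch: software swapping in a managed runtime. Object handles index
 * a runtime-owned table, and the compiler emits a residency check
 * before every access, so a "page fault" is just an if-statement. */
typedef struct {
    void  *data;      /* NULL while the object is swapped out   */
    long   disk_off;  /* offset of the object in the swap file  */
    size_t size;
} obj_entry;

static obj_entry obj_table[1024];
static FILE *swap_file;           /* opened at startup (omitted) */

/* Fault an object back in from the swap file. */
static void *swap_in(long disk_off, size_t size)
{
    void *buf = malloc(size);
    fseek(swap_file, disk_off, SEEK_SET);
    fread(buf, 1, size, swap_file);
    return buf;
}

/* The compiler inserts a call like this before dereferencing a handle. */
void *resolve(int handle)
{
    obj_entry *e = &obj_table[handle];
    if (e->data == NULL)                   /* software "page fault" */
        e->data = swap_in(e->disk_off, e->size);
    return e->data;
}

Whether the unit of swapping is a whole object or a fixed-size block is just policy; the point is that the fault path needs no hardware support.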
Casm wrote:How will an expandable stack work?
This would possibly be a bit more expensive to implement in software alone. However, see my comments on COW, below.
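For what it's worth, here's a sketch of how a compiler-assisted growable stack could look, roughly in the spirit of Go's early copying stacks (ensure_stack and the gstack layout are made up for this example):

Code: Select all
#include <stdlib.h>
#include <string.h>

/* Sketch: a growable stack without an MMU. The compiler emits a probe
 * in every function prologue; all names are invented for illustration. */
typedef struct {
    char  *base;   /* lowest address of the stack buffer */
    char  *top;    /* current stack pointer (grows down) */
    size_t size;
} gstack;

/* Called from each prologue with the frame size the function needs. */
void ensure_stack(gstack *s, size_t frame_size)
{
    if ((size_t)(s->top - s->base) >= frame_size)
        return;                            /* enough room: fast path */

    /* Slow path: grow the buffer and move the live region over. */
    size_t used    = s->size - (size_t)(s->top - s->base);
    size_t newsize = s->size * 2;
    while (newsize - used < frame_size)
        newsize *= 2;
    char *newbase = malloc(newsize);
    memcpy(newbase + newsize - used, s->top, used);
    /* A real runtime would also rewrite any pointers into the old
     * stack here; easy for managed code, which knows where they are. */
    free(s->base);
    s->base = newbase;
    s->top  = newbase + newsize - used;
    s->size = newsize;
}

The probe in the common case is a compare and a branch, which is the "a bit more expensive" part; the copy only happens on growth.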
Casm wrote:Managed code can only be as stable as the virtual machine it runs on, and since that must itself run directly on the hardware I don't see the point. You are just giving yourself (or somebody else) an extra layer of code to write.
The bugs you speak of also exist in CPUs. The problem is that bugs in CPUs are much more expensive to fix than bugs in a software layer.
Casm wrote:How does having an operating system, which runs an operating system, which runs applications, manage to be an improvement upon an operating system which runs applications directly?
That's not how it works at all. There's one OS and one compiler, which translates bytecode to native code, so that only verified, trusted code ever runs. You can do that with AOT compilation (e.g., translate applications at install time) or with JIT compilation (keeping translation caches so you don't repeat the work the next time you run the program). You probably shouldn't do it with interpretation if we're talking about a general-purpose OS.
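As a rough sketch of the translation-cache idea (hash_bytes and jit_compile are placeholders, and /cache is an arbitrary path):

Code: Select all
#include <stdio.h>

/* Sketch of a translation cache: before compiling a bytecode module,
 * look for native code produced on an earlier run. hash_bytes() and
 * jit_compile() stand in for real implementations. */
extern unsigned long hash_bytes(const void *p, size_t n);
extern void jit_compile(const void *bytecode, size_t n, const char *out);

/* Returns the path of the native image, compiling only on a miss. */
const char *translate(const void *bytecode, size_t n)
{
    static char path[64];                /* not reentrant; a sketch   */
    snprintf(path, sizeof path, "/cache/%lx.bin", hash_bytes(bytecode, n));

    FILE *f = fopen(path, "rb");
    if (f) { fclose(f); return path; }   /* hit: reuse earlier work   */

    jit_compile(bytecode, n, path);      /* miss: translate just once */
    return path;
}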
Solar wrote:Tossing the idea of unmanaged code because it's no longer "vogue" or "state-of-the-art" (I challenge the latter) certainly feels a bit hasty here.
Whenever someone makes a prediction, everyone tries to imagine whether it would make sense for that prediction to come true the very next day. That's not how it works... Things are left behind gradually, over a considerable period of time (see DOS, then the Win16 API, and by now we've pretty much left the BIOS behind too; all of which means we're basically ready to drop real mode from newer x86 CPUs as well, since new PCs aren't expected to run legacy code anyway).
Farok wrote:For running managed code (and I'm not speaking about managed code compiled to native code; there's no real point in doing that) you'll always need an abstraction layer written in native code, call it the virtual machine.
So, to counter the prediction of having managed CPUs, you talk about how these CPUs would need a VM layer?
Farok wrote:because you can easily reserve space in virtual memory for things to grow continuously (for example the stack, just as someone already mentioned in this thread)
There's no reason we couldn't still have a stripped-down MMU (or whatever you want to call this hardware device) that only assists with things like COW and growing stacks, so that software doesn't have to track these things on every single access.
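To illustrate, here is a hypothetical software half of COW under such a device; all the hardware has to provide is a per-page write-protect bit and a trap (every name below is invented):

Code: Select all
#include <string.h>

#define PAGE_SIZE 4096

/* Sketch: copy-on-write with a stripped-down MMU that only offers a
 * write-protect bit per page and a fault trap; the policy lives here. */
typedef struct frame {
    char data[PAGE_SIZE];
    int  refcount;               /* mappings sharing this frame */
} frame;

typedef struct {
    frame *f;
    int    cow;                  /* shared; copy on first write */
} page_desc;

extern frame *alloc_frame(void);
extern void   set_writable(page_desc *p);  /* clears the protect bit */

/* Called by the trap stub when the device reports a blocked write. */
void cow_fault(page_desc *p)
{
    if (p->cow && p->f->refcount > 1) {
        frame *copy = alloc_frame();       /* private copy for writer */
        memcpy(copy->data, p->f->data, PAGE_SIZE);
        copy->refcount = 1;
        p->f->refcount--;
        p->f = copy;
    }
    p->cow = 0;
    set_writable(p);                       /* retry the faulting write */
}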
Brendan wrote:There are 3 reasons to use an MMU:
- Protection
- Massive performance/efficiency improvements (e.g. copy on write, memory mapped files, etc)
- Breaking hardware limits (e.g. 2 processes using 3 GiB each, on a CPU that only supports 32-bit addresses and only has 1 GiB of RAM)
Let's tackle one thing at a time. I already know you agree with me that the way to scalability is message-passing rather than shared memory. With managed code, your IPC can be as fast as passing a reference (which is lightning fast), at least within the same NUMA domain. So the foundation of your whole distributed system is as good as it can be. Once you go outside your NUMA domain, there are other latency problems to worry about under either scheme anyway.
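In case it isn't obvious why that's cheap, a sketch (locking omitted; the ownership-transfer guarantee would come from the verifier, not from this code):

Code: Select all
#include <stddef.h>

/* Sketch: reference-passing IPC. "Sending" a message hands the
 * receiver a pointer; nothing is copied, however large the payload. */
typedef struct msg { struct msg *next; /* payload follows */ } msg;

typedef struct { msg *head, *tail; } mailbox;

void send(mailbox *mb, msg *m)             /* O(1) for any size */
{
    m->next = NULL;
    if (mb->tail) mb->tail->next = m;
    else          mb->head = m;
    mb->tail = m;
    /* In the managed language the sender's reference is dead past
     * this point; the verifier rejects any further use of it. */
}

msg *receive(mailbox *mb)
{
    msg *m = mb->head;
    if (m) {
        mb->head = m->next;
        if (mb->head == NULL) mb->tail = NULL;
    }
    return m;
}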
Next, the hardware limit. The advantage you're describing is a bigger address space that's accessible to the programmer in a convenient way. But remember, the bigger address space is an illusion: it requires swapping. And swapping can perfectly well be implemented in a managed system too; in fact, similar technologies are all over the place (overlays in old DOS programs, DLLs in Windows programs, etc.), except of course their goal is different.

What about convenience? That doesn't go away either. It doesn't matter what the underlying address space looks like; you're working with the semantics of your managed language, which may implement swapping transparently. It's similar to Java, where you never really see an address space, and so never mourn for a bigger virtual one. Better yet, if your program was developed with a 32-bit address space in mind and you later move it to a platform that offers a 64-bit address space, it can take advantage of the extra memory without any modification.
Brendan wrote:
There are 2 reasons to use managed code:
- Protection
- Vendor lock-in (e.g. preventing people from using alternative "untrusted" toolchains so you can sell more copies of your compiler)
The vendor lock-in thing is simply not true. The system can work with binaries that come from any toolchain as long as the output is valid bytecode. For example, there's more than one Java compiler that produces Java bytecode.
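That's the whole point of verification: the loader checks the bytecode itself, not its origin. A toy version (the opcodes and their stack effects are invented, and a real verifier would also check types, operands, and branch targets rather than just net stack depth):

Code: Select all
#include <stddef.h>

enum { OP_PUSH, OP_ADD, OP_RET, OP_COUNT };

/* net stack effect of each opcode (operands omitted for brevity) */
static const int effect[OP_COUNT] = { +1, -1, -1 };

/* Returns 1 if the bytecode is well-formed, whoever produced it. */
int verify(const unsigned char *code, size_t n)
{
    int depth = 0;
    for (size_t i = 0; i < n; i++) {
        if (code[i] >= OP_COUNT) return 0;   /* unknown opcode  */
        depth += effect[code[i]];
        if (depth < 0) return 0;             /* stack underflow */
    }
    return 1;    /* safe to hand to the AOT/JIT compiler */
}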
Brendan wrote:Sadly, the "natural progress" in the I.T. industry is for the same ideas to resurface and die again in a cyclic manner.
You're not being specific enough. Sometimes old, abandoned ideas re-emerge because they make sense with new technologies; e.g., embedded systems went through pretty much the same transitions that microcomputers did, except coming from the opposite direction.
Brendan wrote:As far as I can tell, for managed code the current "wave of hype" was started in 2007/2008 by the Singularity project; the wave peaked in about 2010, and the wave is currently collapsing. It'll take a few years before almost everyone stops caring about managed code, and maybe another 15 to 20 years before the next wave of hype starts again.
You're ignoring the fact that most application code out there is already managed, and the trend is only growing. That wasn't true in the past, and it's exactly why it has become interesting to look at operating systems from a new perspective.