Yes! I agree whole-heartedly. However, if you deploy applications compiled from such languages to bare machine code, how do you go about verifying that they haven't been tampered with? And how can you run them on different platforms without re-compiling...? If such languages must be interpreted, then they're not that useful for applications where performance is important.
You've never heard of code signing...? Code signatures verify that the program hasn't been tampered with; if the signature is missing or invalid, then it isn't authentic. As for re-compiling, it shouldn't be that difficult to simply run GCC again with different target parameters. The only difference here is the size of the distribution, which is really only significant if it has to be downloaded (provided it'll still fit on one CD).
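To sketch the tamper-detection idea: real code signing uses a public/private key pair and a certificate chain, but the check itself boils down to "recompute a MAC/signature over the binary and compare". Here's a toy version where an HMAC key stands in for the signing infrastructure (everything here is illustrative, not a real signing scheme):

```python
import hashlib
import hmac

def sign(binary: bytes, key: bytes) -> bytes:
    """Produce a signature over the program image (HMAC stands in for RSA/ECDSA)."""
    return hmac.new(key, binary, hashlib.sha256).digest()

def verify(binary: bytes, signature: bytes, key: bytes) -> bool:
    """Reject the binary if the signature is missing or doesn't match."""
    if not signature:
        return False  # missing signature: not authentic
    return hmac.compare_digest(sign(binary, key), signature)

key = b"vendor-signing-key"              # hypothetical vendor key
program = b"\x7fELF...machine code..."   # hypothetical binary image
sig = sign(program, key)

assert verify(program, sig, key)                # untouched binary: authentic
assert not verify(program + b"\x90", sig, key)  # one patched byte: rejected
assert not verify(program, b"", key)            # stripped signature: rejected
```

The point is that tampering with even a single byte of the machine code invalidates the signature, so native binaries are just as verifiable as managed ones.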
I didn't claim that it was; just that it allows for more granular and robust memory protection, especially in the presence of shared libraries.
This depends: you could just remove the global data section and be done with it, or you could facilitate something similar using a special pipe (write a read command, write the variable ID number, then read the variable back; this way random memory corruption can't damage the library's state).
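The pipe protocol above can be sketched as follows: the library's state lives behind a pair of pipes, and clients only ever exchange messages with it, never touch its memory. The command byte, variable IDs, and wire format here are all made up for illustration:

```python
import os
import struct
import threading

CMD_READ = 1  # hypothetical command byte: "send me variable <id>"

def serve(state, req_read_fd, resp_write_fd):
    """Library-side loop: owns the state, answers read requests over the pipe."""
    while True:
        header = os.read(req_read_fd, 2)
        if len(header) < 2:
            break  # client closed its end
        cmd, var_id = struct.unpack("BB", header)
        if cmd == CMD_READ:
            os.write(resp_write_fd, struct.pack("i", state[var_id]))

state = {0: 42, 1: -7}           # the "global data" now private to the server
req_r, req_w = os.pipe()         # client -> library
resp_r, resp_w = os.pipe()       # library -> client
threading.Thread(target=serve, args=(state, req_r, resp_w), daemon=True).start()

def read_var(var_id: int) -> int:
    """Client-side stub: write read command + variable ID, read the value back."""
    os.write(req_w, struct.pack("BB", CMD_READ, var_id))
    return struct.unpack("i", os.read(resp_r, 4))[0]

print(read_var(0))
print(read_var(1))
```

Since the client holds no pointer into the library's data, a wild write in the client corrupts only its own copy of a value, not the shared state.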
Intermediate representation of instructions (bytecode, IL, whatever)
I don't see how this changes anything; I could distribute .py files. Sure, that's the full source code, but it's still "intermediate" in the sense that it isn't machine code. The only difference bytecode makes is the speed at which it is interpreted; other than that there is no real difference, since it still gets translated into machine code anyway. By that logic you could also call a normal binary a "pre-interpreted program".
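This point can be demonstrated in CPython itself: compiling source to bytecode up front and handing the interpreter raw source both end up executing the same bytecode, so distributing .pyc instead of .py changes only when the compile step happens:

```python
import dis

source = "def square(x):\n    return x * x\n"

# Pre-compile to bytecode, as a .pyc distribution effectively would:
code_object = compile(source, "<module>", "exec")

ns_bytecode = {}
exec(code_object, ns_bytecode)     # run the pre-compiled bytecode

ns_source = {}
exec(source, ns_source)            # hand over raw source; CPython compiles
                                   # it to the same bytecode on the fly

assert ns_bytecode["square"](6) == ns_source["square"](6) == 36

# The "intermediate representation" in either case:
dis.dis(ns_bytecode["square"])
```

Either way the interpreter is doing the same work at call time; the bytecode just skips re-parsing the source.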
A JIT compiler (or interpreter, if JIT compilation isn't feasible... e.g. on memory-constrained devices) to make everything run
Again, I don't see the necessity. The performance gain from compiling at runtime, rather than installing an appropriate machine-code build, should be minimal apart from distribution size. Or just distribute the source (which is effectively what you're doing with .Net/Java anyway, since they can be decompiled, let alone disassembled).
Metadata that describes all the types referenced by the code to facilitate code verification, GC, reflection, etc.
This doesn't verify anything other than that it has a matching ABI (it can all be faked if you have no way to verify who wrote it); .Net and Java both come back to code signing for authenticity. Type safety is more granular than machine code, but machine code does have facilities for this sort of thing (_Function@4: 4 bytes of parameters pushed on the stack; _ZN9Namespace8FunctionEPcz: takes a character pointer and a variable-length argument list). Going to the effort of making the mangled function names lie is pointless, and code signatures can prevent any malicious action anyway.
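To show how much type information those mangled names actually carry, here are two deliberately tiny decoders: one for the stdcall `@N` suffix and one just barely capable of handling the Itanium-style mangling of `Namespace::Function(char*, ...)`. Real demanglers (`c++filt`, `UnDecorateSymbolName`) implement the full schemes; this is only a sketch:

```python
def stdcall_stack_bytes(symbol: str) -> int:
    """In stdcall decoration, _Name@N encodes N bytes of parameters on the stack."""
    _name, _, size = symbol.partition("@")
    return int(size)

def demangle(symbol: str) -> str:
    """Very partial Itanium ABI decoder: nested names plus char* / '...' args."""
    assert symbol.startswith("_ZN")
    i, parts = 3, []
    while symbol[i] != "E":          # length-prefixed name components
        n = 0
        while symbol[i].isdigit():
            n = n * 10 + int(symbol[i])
            i += 1
        parts.append(symbol[i:i + n])
        i += n
    i += 1                           # skip the 'E' closing the nested name
    args = []
    while i < len(symbol):           # parameter type codes
        if symbol[i:i + 2] == "Pc":  # P = pointer to, c = char
            args.append("char*")
            i += 2
        elif symbol[i] == "z":       # z = ellipsis (varargs)
            args.append("...")
            i += 1
        else:
            args.append("?")
            i += 1
    return "::".join(parts) + "(" + ", ".join(args) + ")"

print(stdcall_stack_bytes("_Function@4"))
print(demangle("_ZN9Namespace8FunctionEPcz"))
```

So the calling convention, stack usage, and parameter types are all recoverable from the symbol; what mangling can't tell you is who produced the binary, which is exactly where code signing comes in.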
To clarify my point: using Java/.Net doesn't (necessarily) make you any worse a programmer, but virtual machines risk a lack-of-knowledge syndrome. If new programmers go straight into VMs without ever having worked with machine code directly, then when the "elite" programmers who wrote the VMs begin to retire, who is going to maintain the VMs themselves? The programmers raised on the VM won't have a clue how the stuff beneath it works, and they'll be stuck.
Using the right tool is a good thing; what started me off was the comment that "running any code outside a VM is a bad idea". If you can't run code outside a VM, then the flaw is in your OS design, not in the code being run. Taking advantage of the hardware directly, if that is what the programmer deemed appropriate for a given situation, should definitely be permitted.