OK, this is getting away from the real topic (since, as has already been stated, the name 'Bytecode Alliance' is not wholly descriptive), but I will say that I am aware of some of the touted security advantages, even if I don't necessarily think they are as significant as they are often made out to be. But then, I am wary of anything that smacks of marketing rather than technology, and my primary objections to the claims are that they are often overstated and misleading, and that they distract attention from the need for security to be an active process rather than just a part of the software environment (it is the claims themselves that do this, I mean, not necessarily the actual use of bytecode for that purpose).
- The ability to incorporate encryption, signatures, and other verification markers into the 'executable' in a way which would be impractical for a casual attacker, or even a determined one under time or budget constraints, to adequately spoof.
- The ability to run in a purely interpretive mode, isolating/sandboxing the execution and thus limiting its ability to interfere with other processes.
- The ability to have all memory accesses automatically bounds-checked at runtime.
- The ability to force all code being run or compiled to be statically checked before execution for certain classes of exploits.
- The ability to inspect each instruction for validity before execution (see the sketch after this list).
- The use of an extended virtual instruction set which incorporates complex security-related instructions at the simulated 'machine' level.
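To make the second and fifth points concrete, here is a minimal sketch of a purely interpretive loop that validates each instruction before executing it. This is not any real VM's dispatch loop; the opcodes and limits are made up for illustration:

    #include <stdint.h>
    #include <stdlib.h>

    enum { OP_PUSH, OP_ADD, OP_HALT, OP_COUNT };

    static void run(const uint8_t *code, size_t len) {
        int32_t stack[64];
        size_t sp = 0, pc = 0;
        while (pc < len) {
            uint8_t op = code[pc++];
            if (op >= OP_COUNT)     /* inspect for validity before execution */
                abort();
            switch (op) {
            case OP_PUSH:
                if (pc >= len || sp >= 64)  /* operand and stack bounds */
                    abort();
                stack[sp++] = (int32_t)code[pc++];
                break;
            case OP_ADD:
                if (sp < 2)                 /* stack underflow check */
                    abort();
                stack[sp - 2] += stack[sp - 1];
                sp--;
                break;
            case OP_HALT:
                return;
            }
        }
    }

Every operation the guest program performs passes through checks like these, which is where both the sandboxing and the interpreter's runtime overhead come from.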
Now, here's the thing: there are other ways to get all of these advantages while still running in native code. The main argument for bytecode - or, more generally (and more apropos of my own intended OS design), for any system which separates source compilation from code generation in a way that only permits JIT code generation - over most of the more ad-hoc approaches is that these protections come as part and parcel of the AOT/JIT approach, combined with other advantages regarding portability and code compactness.
As I said earlier, prior to Microsoft's push for .NET, the majority of interest in p-code was in its portability and code density (there had been talk of security earlier with the Java JVM, though you must recall that Sun was widely criticized for creating what many called a 'virus propagation platform', which was the main reason Sun made a point about it at all). This was, however, often seen as counterbalanced - and then some - by the runtime overhead of the interpreter, which is why Sun quickly moved to a JIT bytecode compiler rather than strict interpretation. Microsoft, having seen what happened with Java, and having their own reasons to go with a bytecode (mostly related to making Windows applications less reliant on x86, at a time - before AMD64 came about - when it looked as if that platform was going to falter soon), applied JIT from the first in .NET as well.
But therein lies the rub: as soon as you translate the program from the non-native bytecode to the native instruction set, many of those advantages are lost, or at least limited. This means that the JIT either has to be limited to what static analysis can prove about the runtime behavior, or else it needs to insert all of those runtime checks directly into the generated native code (or some combination of the two).
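To illustrate what 'inserting the checks directly into the executable code' means in practice, here is a sketch in C of the kind of guard a JIT must emit inline whenever it cannot prove an access safe statically. The type and names are illustrative, not taken from any particular VM:

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        int32_t *data;
        size_t   len;
    } Array;

    /* What the interpreter did implicitly on every access, the JIT must
       now emit as actual native instructions (a compare and a branch)
       ahead of the load itself. */
    static int32_t load_checked(const Array *a, size_t idx) {
        if (idx >= a->len)
            abort();    /* or raise the VM's out-of-bounds exception */
        return a->data[idx];
    }

The cost of the check is now paid in native code on every access, unless the JIT's static analysis can hoist or eliminate it - which is exactly the tension described above.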
That was always Brendan's argument against this approach, as he was of the belief that runtime checks were unnecessary and wasteful - though his ideas of what did or did not constitute a runtime check were at odds with those of most others. He was designing his language and compiler with the idea that they would reject a priori any code which couldn't be absolutely verified statically, which would require the programmer to manually insert bounds checking and so forth; he argued that this would not constitute a 'run-time check', but rather 'normal control flow', on the grounds that the compiler obliged the coder to insert the checks manually rather than doing so automatically itself (this sort of hair-splitting was part of why I didn't want to quote that thread earlier, as was the fact that I needed to spend pages of posts dragging an explanation out of him as to what he saw as constituting a 'run-time check'). He also intended to place hard limits on several things, including requiring all types to have defined ranges, with the behavior on overflow or underflow defined for each type (similar to how Ada handles numeric types) - the default being that any code which could overflow would cause a compiler error (which is rather different from the exception-based approach most other languages with such requirements use).
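As I understand his position, the distinction in practice looks something like the following sketch (in C, purely for illustration; his language looked nothing like this, and the type and names are made up). The compiler would reject the unguarded addition outright, since it can't prove the result stays in range, and the programmer would be forced to write the guard as ordinary control flow:

    #include <stdint.h>

    /* Notionally a ranged type, say Percent = 0 .. 100. */
    typedef uint8_t Percent;
    #define PERCENT_MAX 100

    /* 'a + b' alone would be a compile-time error, since the compiler
       cannot verify the sum stays within the range; the guard below is
       the programmer-written 'normal control flow' he had in mind. */
    static int percent_add(Percent a, Percent b, Percent *out) {
        if (a > PERCENT_MAX - b)
            return -1;          /* defined out-of-range behavior */
        *out = (Percent)(a + b);
        return 0;
    }

Whether that guard is a 'run-time check' or not is, of course, exactly the hair that was being split.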
You do gain some Gentoo-esque ability to tune the generated code to the specific system, though in practice the JITs being used are desultory at best in doing so. The performance tradeoffs are neither simple nor consistent across different programs and platforms (or even across different implementations of the same platform; the optimizations suited to an Intel CPU can be quite different from those suited to an AMD CPU, or even to a different CPU from Intel themselves).
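For what that host-specific tuning amounts to, here is a minimal sketch of the sort of dispatch a code generator can do when it knows the actual machine: query the CPU once and pick a code path. It uses GCC/Clang's __builtin_cpu_supports on x86; the vectorized variant is a stand-in rather than real AVX code:

    #include <stddef.h>

    static float sum_scalar(const float *v, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    /* Stand-in for an AVX2-tuned variant a JIT might generate instead. */
    static float sum_vector(const float *v, size_t n) {
        return sum_scalar(v, n);
    }

    typedef float (*sum_fn)(const float *, size_t);

    static sum_fn select_sum(void) {
        /* GCC/Clang builtin; checks CPUID for the named feature. */
        return __builtin_cpu_supports("avx2") ? sum_vector : sum_scalar;
    }

A JIT gets this for free in principle, since it always knows the machine it is generating code for; in practice, as I said, most of them barely exploit it.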
I will mention one web page that had come up in the earlier discussion which might be relevant to Love4Boobies' post, however:
"Abstract Interpretation in a Nutshell" by Patrick Cousot.