Hi,
embryo2 wrote:Brendan wrote:For managed environments, initially overhead and number of bugs caused by the environment will both be bad, and (in theory) as the JIT and/or compiler improves the overhead and number of bugs caused will reduce; but the overhead can never be zero and performance can't be as good as unmanaged.
Why unmanaged has better performance? Because it doesn't check dangerous code output. So, you trade danger for speed. I prefer reverse trade.
In general; unmanaged has better performance because it doesn't need to check for anything dangerous (because hardware already checks that for "free").
For some specific cases (and not "in general"), where things like hand-optimised assembly are beneficial, managed environments are a worse performance problem.
For other specific cases managed environments are far worse for performance. An example of this is run-time generated deterministic finite automata (used by utilities like "grep"); where to really achieve maximum performance you want to generate a jump table and code for each state, and where the code generated for each state does something a bit like "state00: goto state00jumpTable[buffer[pos++]];".
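To make the dispatch pattern concrete, here's a minimal sketch in C using the GCC/Clang "labels as values" extension (computed goto). It's a hand-written two-state DFA that accepts strings of ASCII digits; a real "grep"-style engine would generate one table and one block of code per state at run time, which is exactly what a managed environment that forbids arbitrary code generation can't do. The `match_digits` function is a hypothetical example, not from the thread:

```c
#include <assert.h>

/* Two-state DFA: "accept" (all bytes so far were digits) and "reject"
 * (absorbing). Each byte costs one table load and one indirect jump;
 * a run-time code generator would emit one such table per DFA state. */
static int match_digits(const char *buffer, int len)
{
    void *jump_table[256];   /* per-state jump table (one state shown) */
    int pos = 0;

    /* Build the "accept" state's table: a digit stays in "accept",
     * anything else falls into "reject". */
    for (int c = 0; c < 256; c++)
        jump_table[c] = (c >= '0' && c <= '9') ? &&state_accept
                                               : &&state_reject;

state_accept:
    if (pos == len) return 1;            /* consumed all bytes as digits */
    goto *jump_table[(unsigned char)buffer[pos++]];

state_reject:
    return 0;
}
```

Note that `&&label` and `goto *ptr` are a GCC/Clang extension, not standard C; that's part of the point - this style of dispatch lives below what most managed languages will let you express.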
For other specific cases the performance problems caused by managed environments become severe. An example of this is things like Qemu; where (assuming there's no hardware virtualisation - e.g. emulating ARM on an 80x86 host) you can kiss goodbye to acceptable performance as soon as you hear the word "managed".
embryo2 wrote:Brendan wrote:Note that managed environments don't avoid application crashes - it's "application crashed due to exception detected by software/managed environment" instead of "application crashed due to exception detected by hardware". The number of application crashes can only be decreased by detecting bugs at compile time instead of detecting them at run-time.
It was discussed and shown that managed environment allows 99% of an application functionality to be useful after an exception is thrown. While unmanaged crashes without any other option. So, 100% of functionality is unavailable in case of unmanaged and 1% of functionality is unavailable in case of managed.
Wrong. For unmanaged, code that failed to do simple things correctly can still attempt to do much more complicated recovery (which in my opinion is completely insane to begin with) by using things like exceptions and/or signals that were originally generated from hardware detection. In the same way; a managed environment can be sane and terminate the process without giving incompetent programmers the chance to turn their failure into a massive disaster.
Basically; it's a "language design" thing that has nothing to do with managed vs. unmanaged at all.
embryo2 wrote:Brendan wrote:For what AlexHully was proposing (unmanaged, where "100%" of bugs are detected at compile time)
Now it's impossible, so it's only theoretical possibility somewhere in the future.
It's as possible as managed environments - both have the same "security vs. performance vs. complexity" compromise.
embryo2 wrote:Brendan wrote:a managed environment can only detect bugs caused by the "ahead of time" compiler while introducing additional bugs in the managed environment itself. The bugs in the compiler are likely to be approximately equal to the bugs in the managed environment itself; so the end result is more overhead with no difference in number of bugs.
Bugs in the environment are less important. They are found quicker and more efforts are targeting such kind of bugs. And most bugs in JRE, for example, are of the same kind as is the case in any unmanaged library. For example it is the bugs related to SSL or SSO or web server functionality or other library functions. It's just logical errors in library functions. It's not a consequence of being managed or affected by managed (or unmanaged). It's kind of bugs every developer will have in any language and environment. But bugs attributed to the environment are very rare.
I think you missed the point here. Where "100%" of bugs are detected at compile time, a managed environment is a pointless pile of bloat and you still need hardware protection to protect from bugs in the managed environment.
embryo2 wrote:So, we have the situation when complexity of the environment doesn't produce more viable bugs. And because you stress the bugs related to the environment I should decline such objection.
Now we have the situation where a fool is ignoring facts. Bugs in the environment are extremely important (far more important than any bug in any one application that will only ever affect that one application); and "most bugs in JRE" is not the same as "all bugs in JRE".
embryo2 wrote:Brendan wrote:Hardware protection will never be obsolete because hardware can do the checks in parallel with normal execution, making it effectively free.
If hardware does something it means there are some algorithms implemented and the hardware just invokes the algorithms. Why should we move algorithms toward hardware level? Is the hardware implementation of algorithms easier? No. Does hardware algorithm implementation consume more silicon and power? Yes. So, why not to implement these algorithms in software and make hardware simpler and cheaper (in terms of total costs of ownership, including unit price and power consumption)?
What you're saying is as stupid as something like:
It can be proven that a Turing complete CPU can be implemented with a single instruction and nothing else; so anything more than one instruction and nothing else just consumes more power.
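For reference, the classic proof of that claim is SUBLEQ ("subtract and branch if less than or equal to zero"), a one-instruction machine. A minimal interpreter sketch (the `run_subleq` helper and memory layout are illustrative, not from the thread):

```c
#include <assert.h>

/* SUBLEQ: the whole instruction set is one instruction.
 * Each instruction is three words a, b, c:
 *   mem[b] -= mem[a]; if (mem[b] <= 0) jump to c; else continue.
 * A negative target address halts the machine. */
static void run_subleq(int *mem, int pc)
{
    while (pc >= 0) {
        int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];                  /* the single instruction */
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
}
```

Example program: memory `{3, 4, -1, 5, 0}` is one instruction (`a=3, b=4, c=-1`) followed by two data words; it computes `mem[4] = 0 - 5 = -5` and halts, i.e. it negates `mem[3]` into `mem[4]`. Turing complete, and a perfect demonstration of why "fewest instructions" has nothing to do with efficiency.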
The fact is that for a lot of things software is a lot less efficient and consumes far more power than implementing it in hardware would. Note that this is also why Intel is planning to add FPGAs to Xeon chips, and why Intel has already added FPGAs to some embedded CPUs - so that (for specialised applications) people can program the hardware/FPGA to get efficiency improvements that software can't achieve.
For silicon alone you're right; but silicon is cheap and no sane person cares. CPU manufacturers like Intel are literally sitting around trying to think of ways to use more transistors to improve performance (especially single-threaded performance). They're not going to say "We've got a budget of 1 billion transistors, so let's only use half of them so that software is slower and we consume maximum power for 3 times as long!".
embryo2 wrote:Brendan wrote:For performance, the overhead of managed environments can never be "effectively free" and will always be worse.
Yes, if we will be willing to trade danger for speed. It is possible to write a program for personal use that is used only in one way and doesn't expose itself to the network or other security vulnerable environments. Then we can disable all checks and get some speed increase (not very big, in fact). But what prevents us from ordering the managed environment to produce such "unchecked" version of a code and run it in isolated space? So, if we want to live dangerously we can do it even with managed environment. But if we do not want to be in danger, then we just allow the environment to guard us against many threats.
Yes. Please note that I've been trying to help you understand that it's possible to run managed languages in an unmanaged environment, or run an unmanaged language in a managed environment, for a relatively long time now. It's nice to see I've at least partially succeeded.
embryo2 wrote:Brendan wrote:You're conflating unrelated issues. You're saying that higher level languages and/or huge fat libraries (where programmers do less) reduce bugs; and that a higher level language with a huge fat library that is unmanaged is better than a lower level language with no libraries that is managed.
You misunderstood me. Fat libraries are the same for managed and for unmanaged. The different thing here is the environment. But environment allows us to make less bugs because it frees us from doing tedious work. So we can have less bugs even in fat libraries. And while the environment also can have bugs it always will have less critical bugs just because such bugs affect so many users and the attention paid to such bugs is too large. As a result we can have a robust environment and fat libraries with less bugs. Add to it security and reliability of managed approach. In the end number of bugs in unmanaged will prevail over the number of bugs in managed.
Managed environments (e.g. valgrind, Bochs) do make debugging easier; but that only matters when you're debugging, and doesn't help when you're writing code, or when the end user is using it; and it's still inferior to detecting bugs at compile time (sooner is always better than later).
Note that my idea of "ideal" is an unmanaged language; where the IDE checks for bugs while you type and has a built-in managed environment (e.g. a source level interpreter) for debugging purposes; and where the compiler detects as many bugs as possible when the code is compiled.
Cheers,
Brendan