Owen wrote:
Love4Boobies wrote:
Indeed. Too bad there's nothing worse than shared memory. Well, okay, register passing, you've got me there.
Your argument against shared memory is?
I was hoping this wouldn't turn into a shared memory vs. message-passing discussion, but here goes...
There was quite some fuss some years ago when people were trying to figure out what the one true way of doing IPC is. The main problem with shared memory is that it doesn't scale - you have to keep using locking mechanisms to properly synchronize data accesses, and even then you might not be able to enforce their correct use. It gets even worse on MP systems. Another problem is that it is difficult for the programmer to get right (except... see below). Last but not least, it doesn't synergize with networking (see below).
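To make the locking point concrete, here's a minimal userland sketch (plain POSIX threads, nothing OS-specific; the counter and iteration count are made up for illustration): every access to the shared data has to be funneled through a lock, and that lock is exactly the serialization point that stops scaling once you add more cores.

Code:
/* Two threads bump a shared counter. Without the mutex, the
 * read-modify-write races and the final count is unpredictable. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* drop this pair and the result is garbage */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* 2000000 only because of the lock */
    return 0;
}

Keep the lock and every core queues up on the same cache line; remove it and you get silent corruption. Either way you lose, which is the scalability argument in a nutshell.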
Tuple spaces, transactional memory - they're all very nice and easy (yes, I went there, heh), but none of these scale.
I find it even worse as a distributed paradigm, and I'm not alone. Here's what the Plan 9 people had to say about it as early as 1992:
Rob Pike, Ken Thompson, Dave Presotto and Phil Winterbottom wrote:
This one baffles us: distributed shared memory is a lousy model for building systems, yet everyone seems to be doing it. (Try to find a PhD this year on a different topic.)
Most high-end multiprocessor operating systems implement message passing today.
Clicky!
Owen wrote:
JITs produce better code than AOT compilers for branchy code where the developer hasn't profiled it properly. If you take the code, run it through a profiler, then inform the compiler accordingly (For example, use GCC's __builtin_expect to tell it whether a branch is regularly taken or not), it can generate very good code; in fact, in this case, GCC is able to smoke most JITs. Of course, it is extra work for the programmer, but if your software is CPU intensive, then it should be done. And, of course, much math intensive code is not branchy...
It's not really the same thing, but ok. That is something that will give static optimization a good boost.
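For anyone who hasn't used it, this is roughly what the hint looks like - the likely/unlikely macro names are just the usual convention, not something GCC itself provides:

Code:
/* A minimal sketch of branch hinting with __builtin_expect. */
#include <stddef.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

long sum_buffer(const int *buf, size_t len)
{
    long sum = 0;

    if (unlikely(buf == NULL))   /* error path, almost never taken */
        return -1;

    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

The !!(x) normalizes the condition to 0 or 1; GCC then lays out the unlikely branch away from the hot path, which is exactly the information a JIT would otherwise gather at run time.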
However, some optimizations - and even crucial parts of a compiler, such as register scheduling - are rather expensive processes, and JITs have to use less CPU-intensive methods of assigning registers.
That is indeed true. The LLVM folks have found an algorithm that uses puzzle solving to get near-optimal register allocation in real time. I read that paper some time ago; it's quite possible that someone has come up with something even better by now.
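Just to illustrate what "less CPU-intensive" tends to mean in practice, here's a deliberately dumbed-down, greedy single-pass allocator in the spirit of linear scan (real linear scan keeps an active list and a spill heuristic; the intervals and register count here are made up):

Code:
/* Toy single-pass register allocation over pre-sorted live intervals. */
#include <stdio.h>

#define NUM_REGS 2
#define NUM_VARS 4

struct interval { const char *name; int start, end; int reg; /* -1 = spilled */ };

int main(void)
{
    /* Live intervals, already sorted by start point. */
    struct interval iv[NUM_VARS] = {
        { "a", 0, 6, -1 }, { "b", 1, 3, -1 }, { "c", 2, 8, -1 }, { "d", 4, 7, -1 },
    };
    int reg_free_at[NUM_REGS] = { 0 };   /* point after which each register is free */

    for (int i = 0; i < NUM_VARS; i++) {
        /* Pick any register whose previous interval has already ended. */
        for (int r = 0; r < NUM_REGS; r++) {
            if (reg_free_at[r] <= iv[i].start) {
                iv[i].reg = r;
                reg_free_at[r] = iv[i].end;
                break;
            }
        }
        if (iv[i].reg < 0)
            printf("%s: spilled to memory\n", iv[i].name);
        else
            printf("%s: assigned r%d\n", iv[i].name, iv[i].reg);
    }
    return 0;
}

One pass over the sorted intervals, no interference graph, no puzzle solving - which is why JITs like this family of algorithms even though the resulting allocation is worse.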
The other thing that all compilers are poor at - but JITs in particular - is vectorization. Most languages don't have the vector intrinsics provided to C programmers by processor vendors, and both AOT and JIT compilers are horrid at autovectorization.
Lolwut?
However, even if you turn the optimizer right up on your vector code, the hand assembler can always beat the compiler. Always.
The main problem is overhead. The other problem is that compilers aren't perfect - hand-written assembly can always tune performance or decrease size. Perhaps some day we will figure out how to make compilers that can always find the best way to generate code (if humans can do it, then so can computers - it's just a matter of "how?").
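Since intrinsics came up: this is roughly the contrast, sketched for x86/SSE (I'm assuming n is a multiple of 4 and the buffer is 16-byte aligned just to keep the sketch short):

Code:
/* The same reduction written as the plain loop the autovectorizer has
 * to recognize, and written by hand with SSE intrinsics. */
#include <xmmintrin.h>

/* What you feed the autovectorizer and hope for the best. */
float sum_scalar(const float *a, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Hand-vectorized version: n multiple of 4, 'a' 16-byte aligned. */
float sum_sse(const float *a, int n)
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4)
        acc = _mm_add_ps(acc, _mm_load_ps(&a[i]));

    float tmp[4];
    _mm_storeu_ps(tmp, acc);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}

The scalar version is what you hand the autovectorizer and hope; the intrinsics version is what you write when you stop hoping - and a language without intrinsics doesn't even give you that option.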
Regarding your statement: we usually use a lot of mid- and high-level languages and find their performance acceptable. If the benefits outweigh the overhead, we should go for it.
Colonel Kernel wrote:
Managed OS != JIT. For example, Singularity uses AOT compilation.
Indeed, that's true - it's just that this discussion seems to lean towards the (non-)benefits of the JIT approach. In fact, JIT is what people usually use today, but it hasn't been explored much as far as managed OSes are concerned (although there is some interest in it).