Re: Functional Programming (was Pitfalls of lacking a MMU)
Posted: Fri Feb 24, 2012 8:36 pm
Java & outperforming... in the same sentence. I've seen it all.
The Place to Start for Operating System Developers
https://f.osdev.org/
childOfTechno wrote:
> It's one thing to know how, another to understand why. Functional programming theory is not "academic wank," it's central to some of our deepest insights about formal systems. It's called the Church-Turing thesis for a reason. Let's break it down: Brendan appears to understand Alan Turing's theoretical contribution to computing but completely derides the brilliant insight offered by Alonzo Church. Turing certainly thought the lambda calculus was valuable; I'm not surprised that Rusky (in particular) became irritated by his attitude.

Are you kidding me? Did you even read the "demeaning and insulting" posts aimed towards him, one of our most knowledgeable and valued members? I think this is enough to get anybody irritated:
Quote:
> Might be good to learn some computing theory.

Indeed, it might be good. I've always found theory to be quite useful.
Quote:
> First, bear in mind that I'm comparing apples to apples, Java code compiled down to machine code vs C++ code compiled down to machine code (ignore the JVM).

Java & outperforming... in the same sentence. I've seen it all.
Code:
template <class any_t>
void array_fill(any_t * const ptr, const size_t size, const any_t value)
{
    for (size_t i = 0; i < size; i++) ptr[i] = value;
}
childOfTechno wrote:
> It's clear that Brendan doesn't understand at all what functional programming is about (nor do a lot of people in this thread, to be honest). Functional programming languages treat functions as first-class objects, meaning functions can take functions as arguments and return functions. Brendan seems to think this is equivalent to C function pointers; it is not. A function pointer is a reference to a function, not a function. You can't compose a function in C. He refuses to even try to understand this point. Functional programming is about much more than just functions without side effects. He does need to learn some computing theory.

So if you do not agree that functional programming is the greatest, you need "to learn some computing theory"? That is absurd.
childOfTechno wrote:
> You clearly did not read my last post. Just In Time compilation (and the overhead implied) does not apply if you are compiling a Java program down to machine code; just a plain ordinary executable file. You'll find that if you compile a Java program with GCJ to run natively, and statically link it, it'll be about the same size as a statically linked C++ program that does the same thing, and run at least as fast.

Eh, Java can never outperform anything. Ever. I don't know about optimization for random code snippets, but unless your codebase is like 1-2 GB of source (outweighing the libs and random bloat), it won't matter. Java is slow. It always has been. I'm not sure if this is because of the libs, the JIT, or whatever. It just eats CPU and RAM for breakfast, so whether or not it optimizes a for-loop better doesn't matter in the long run.
childOfTechno wrote:
> I'm going to have a rant ...

Me too.
childOfTechno wrote:
> Anyway, this thread is dead Jim ...

Well, it was...
childOfTechno wrote:
> It's amazing how the uninformed can be so sure of themselves

Ignorance is bliss...
childOfTechno wrote:
> principle #1:
> "Being able to prove things about a program has performance advantages."

Not really. For example, how would proving that a program performs poorly have performance advantages?
childOfTechno wrote:
> Code generated by a good Java compiler (for instance) will often outperform code for the same algorithm generated by an equally good C++ compiler. The simple fact is that it is a lot harder for the compiler to reason about a C/C++ program, which limits the optimisations it can do (primarily because of pointers). By the way, I'm talking about compiling Java straight to machine code, not bytecode.

If we are assuming both compilers are equally good (which I doubt, because I have yet to see a good Java-to-native compiler, whereas good compilers for C++ have been around for years) and we are comparing Java to C++, we still have two unknowns here: the actual performance of the compiled programs, and the programming skill of the two programmers (even if it's the same person, it's not likely that they are equally good with both languages). If we assume equal programming skill for each program, C++ will outperform Java in the vast majority of situations. This includes processing power, file size, RAM usage, etc. Java loses on file size automatically because of the Java runtime. And I don't know why, but every Java program just eats RAM for breakfast.
childOfTechno wrote:
> an example:
> In C++ "const" doesn't really mean "const". The compiler can't rely on a pointer/reference to a const variable not having its constness cast away at some point: the assumption that when you pass a reference to a local const variable to some arbitrary function its value won't change simply doesn't hold (although it should, and you should assume that the compiler thinks this way). In more strict languages const (or whatever its equivalent is) means "bloody well immutable," so the compiler absolutely knows that no implicitly called function will modify the value of a variable referenced as such, and can optimise accordingly.

This would only happen for two reasons: either an inexperienced or inattentive programmer failing to use their language properly (and why would we worry about programmers like that?), or someone intentionally breaking this contract with the compiler, which generally shouldn't be done, though I'm sure there are instances where it's desired. Either way, the compiler DOES assume that the variable is constant, and optimizes it as such. If that breaks your code, well, you're doing it wrong.
childOfTechno wrote:
> principle #2:
> "Being able to prove things about programs is sometimes absolutely required."

That statement is so vague, it actually means nothing. When defining a principle, you generally want to be as explicit as possible. "Things" and "sometimes" are not explicit.
childOfTechno wrote:
> The simple fact is: C/C++ is a pretty poor language if you want to formally prove anything about your code (automatically or otherwise). This is why programs that absolutely have to work (fly-by-wire systems, control systems for nuclear power plants, etc.) are almost never written in C/C++. Really, just about ANY language is better for formal verification methods!

This may be true, but programmers don't generally have to prove anything. They think, plan, implement, test, test, and test some more to give a high likelihood of success. The larger the project or code base, the truer this statement becomes. And I will tell you, these systems are either written in ASM or in very simple, custom languages that are specific to the task at hand. Java has never been, and will never be, used for these systems (since you seem to be a Java advocate here).
childOfTechno wrote:
> A collection of functions with no side effects is amenable to mathematical analysis; ideally you can absolutely say how it will behave for ALL possible input values, not just those you think to test. Just because "it seems to work fine" is usually good enough for your daily coding needs doesn't mean that it's anywhere NEAR good enough for life critical systems.

I'm not sure where "life critical systems" came into this discussion, but it's kind of a moot point. Given your temperament, I doubt you will ever work on said systems, and I certainly know I won't either. You also say "ideally you can...", and yes, ideally this would be true. But nothing is ever ideal, so this is also a moot point. Even ignoring that, it is very possible to describe how a function will behave for all inputs in C++. I won't bother giving an example, since it's impossible to prove a negative.
childOfTechno wrote:
> It's one thing to know how, another to understand why. Functional programming theory is not "academic wank," it's central to some of our deepest insights about formal systems. It's called the Church-Turing thesis for a reason. Let's break it down: Brendan appears to understand Alan Turing's theoretical contribution to computing but completely derides the brilliant insight offered by Alonzo Church. Turing certainly thought the lambda calculus was valuable; I'm not surprised that Rusky (in particular) became irritated by his attitude.

I find it interesting that you call Turing's contribution "theoretical" and Church's a "brilliant insight". Especially from my point of view (which I give no guarantees of being correct, or the most well informed): I have heard of Turing plenty of times, and yet never heard mention of Church...
childOfTechno wrote:
> The parse tree for a Java program has a lot more constraints on it than a parse tree for a C++ program; the space in which the object (the parse tree) is defined is richer, it has more structure. There are simply more axioms that can be applied that preserve the algorithm the tree expresses; ergo, the compiler can do a much more exhaustive search for possible optimisations.

More constraints means more accurate, but also more restrictive. Java has to have a richer tree just to match C++ in terms of actual applied optimizations. So without some factual numbers, this argument is moot.
childOfTechno wrote:
> This is why, with modern C++ compilers, it's so very important to write const correct code and avoid trying to manually optimise things that the compiler will deal with. It's better to avoid pointer arithmetic.

Why is it bad that the compiler assumes the programmer knows what they are doing? And why should pointer arithmetic be avoided? Pointer arithmetic, especially when it takes into account domain knowledge that only the programmer has, can speed up computations immensely.
childOfTechno wrote:
> It also pays to consider the statistics involved. The probability that a given line of code will be executed is not uniform; indeed it is generally a long-tailed distribution. That is, there is a small amount of code that is executed very frequently (the inner parts of loops) and a large amount that is executed infrequently (outer control and dispatch code).

This is why static analysis is so important. And unfortunately, to date, partially manual static analysis is really required to see a worthwhile improvement.
childOfTechno wrote:
> This is why run-time bounds checking is nowhere near as expensive as old school C/C++ programmers assume; in 99% of cases the compiler can shunt the bounds check outside of any loops, and with dependent type analysis can often remove it entirely.

Except that "old school C/C++ programmers" who make this assumption also keep code that requires run-time checking outside of loops whenever possible. And don't forget that just because code is not called frequently right now does not mean that it never will be, especially once a well-meaning but inexperienced future intern gets at it.
childOfTechno wrote:
> Functional programming languages treat functions as first class objects, meaning functions can take functions as arguments and return functions.

So .NET is a functional programming language? (See System.Reflection.MethodInfo and similar classes.) And how is a function pointer not treating functions as first-class objects? Because it doesn't have type info? So what? Just make a struct that holds that info (although I don't know why you would ever need it...), or use typed function pointers (which should be all the type info you need).
childOfTechno wrote:
> You clearly did not read my last post. Just In Time compilation (and the overhead implied) does not apply if you are compiling a Java program down to machine code; just a plain ordinary executable file. You'll find that if you compile a Java program with GCJ to run natively, and statically link it, it'll be about the same size as a statically linked C++ program that does the same thing, and run at least as fast.

JIT is just part of the reason Java is slow. Compiling Java like this, you either lose a lot of the features that Java programmers take for granted, or still incur overhead everywhere from run-time checks; that's just the only way to support some of the higher-level features Java uses.
childOfTechno wrote:
> ...incredibly personal attack...

Seeing as you have 7 posts, and are attacking a mod like that (or anyone on this forum, for that matter), I doubt you will be here for very long...
Combuster wrote:
> It's good to know we have a new troll in the house.

Just doin' my job. This thread was already a cesspit of vitriol when I arrived; when in Rome. I'm actually a very nice person.
Code:
#include <stdio.h>

void foo(void) {
    for (int i = 0; i < 10; i++) {
        printf("%d\n", i);
    }
}
Code:
#include <stdio.h>

void foo2(int i);

void foo1(void) {
    foo2(0);
}

void foo2(int i) {
    printf("%d\n", i);
    i++;
    if (i < 10) foo2(i);
}
Code:
foo1() {
    foo2(0);
}

foo2(int i) {
    printf("%d\n", i);
    i++;
    @(i): i < 10
}
Code:
int lcs(char *a, char *b)
{
    // If one or the other is empty,
    // their lcs is the empty string,
    // which has length 0
    if (*a == 0 || *b == 0)
        return 0;
    // If they match, the lcs is the matching
    // character plus the lcs of the
    // remaining strings, so length
    // 1 + lcs of the remaining strings
    if (*a == *b)
        return 1 + lcs(a + 1, b + 1);
    // What is the length if we ignore the
    // first character of a?
    int ignA = lcs(a + 1, b);
    // What is the length if we ignore the
    // first character of b?
    int ignB = lcs(a, b + 1);
    // Return the maximum of the previous two
    if (ignA > ignB)
        return ignA;
    else
        return ignB;
}
Code:
-- Empty strings don't have a common subsequence with any string
lcs [] _ = 0
lcs _ [] = 0
-- If the strings share a head, the length is 1 + the length of the
-- lcs of their tails; otherwise, drop whichever head gives the
-- higher result (patterns may not repeat a variable, so the
-- equality test moves into a guard)
lcs (a:as) (b:bs)
  | a == b    = 1 + lcs as bs
  | otherwise = max (lcs (a:as) bs) (lcs as (b:bs))
brendan wrote:
> (elaborate joke that doubles as bait)

bubach's hypothesis wrote:
> Maybe not so great in the sense of keeping peace on the forum.

davidv1992 wrote:
> (bite)

Academics would say that qualifies as being empirically proven.