
last call optimization

Posted: Sat Sep 02, 2006 5:16 am
by B.E
I've been looking around the internet and I came across this. How does it work, and how do you implement it in a language like C/C++?

Re:last call optimization

Posted: Sun Sep 03, 2006 7:52 am
by Cheery
what the last call optimization does is simply:

{ address call ret } turns into: { address jump }

With near calls and jumps, this simply means that if the last instruction is a call and you are about to ret, you change the opcode of that last instruction, which turns it into a jump, and leave the ret out. (Though a return instruction could still be put there, it would never be reached, so it wouldn't make much difference.)

Re:last call optimization

Posted: Wed Sep 06, 2006 2:23 am
by Schol-R-LEA
First off, you have to understand that with normal function calls, each call requires its own local function environment (also called an activation record), which is used to a) preserve the calling function's state, and b) hold the local variables for the function itself. In C++, the caller pushes the arguments for the function onto the stack, followed by the address immediately after the function call, so that processing can return to the caller. The called function then pushes the registers it uses (so they can be restored later) and its own local variables. Needless to say, there is a fair amount of overhead in this, especially if there are several recursive calls in succession.

Tail call optimization is a method for reducing the overhead of certain kinds of recursive function calls by automatically converting them into iterative loops; it is considered a form of procedure integration (a category of optimizations which also includes function inlining). The basic premise is that if the call is the last action taken by the function, and the value the calling function returns is for all intents and purposes the return value of the call it makes, then the calling environment (i.e., the stack activation record) doesn't need to be saved and can thus be 'recycled' for the following call. The compiler can therefore replace the function call with a simple jump back to the beginning of the function body, with the values of the old invocation overwritten by those of the new one. Most modern optimizing compilers, including gcc, will perform this basic form of TCO.

Tail call elimination is not restricted to recursion. In principle, any time one function calls another function as its last action before returning, the activation record of the old function can be replaced by that of the new one, and the function call converted to a jump. This is a key technique in using Continuation Passing Style. However, in C++ this is rarely feasible, because most functions have activation records of different sizes; unless the called function's activation record is the same size as the caller's, a certain amount of pointer juggling is required before the jump to the new function, which would in many cases wipe out any advantage gained from the TCO itself.

Also, if the function performing the tail call is a target for some form of extra-functional continuation (e.g., [tt]setjmp()[/tt]), then it is possible that overwriting the stack frame could trash the continuation. This is part of the reason why TCO (and CPS) are generally associated with functional programming languages like Scheme or Haskell, and also why even those compilers which do eliminate tail recursion often don't do general TCO.

It's easy to test whether a given compiler is performing TRE: simply compile and run a small program that has an unbounded tail call, that is, one which loops infinitely. If the compiler doesn't perform tail-recursion elimination, the program will eventually get a stack overflow (though this can take a while on virtual memory systems; you can usually tell before the program actually crashes that it's eating up all the available memory for stack as the system grinds to a halt. Needless to say, if you try this, don't run it on a system whose stability is paramount, and be prepared to do a hard reset). General TCO is a bit trickier to test for, but a pair of mutually recursive functions should do the trick:

Code: Select all

void bar(void);

/* foo and bar tail-call each other forever.  With general TCO the
 * calls become jumps and this runs in constant stack space; without
 * it, the stack eventually overflows. */
void foo(void)
{
  bar();
}

void bar(void)
{
  foo();
}

int main(void)
{
  foo();
  return 0;
}

References:
WardsWiki: Tail Call Optimization
Wikipedia: Tail Recursion
What the heck is a tail call
Scheme and Its Implementation: Tail Recursion
Tail Call Optimization
Tail Call Optimization in tcl
Tackling C++ Tail Calls
Tail calls in C++
GCC Optimizations
Compilation of Functional Programming Languages using GCC
Hidden complexities of tail-call/tail-recursion optimization
CONS Should Not CONS Its Arguments, Part II: Cheney on the M.T.A.
Revised[sup]5[/sup] Report on the Algorithmic Language Scheme, sect. 3.5: Proper tail recursion

Re:last call optimization

Posted: Thu Sep 07, 2006 3:02 am
by B.E
Thanks Schol-R-LEA and Cheery.

I used to hate using recursion because of the extra overhead in calling and returning from a function (making and destroying stack frames). After doing some research I found that the compiler can remove this overhead and use only a constant amount of stack. This means there is little chance of stack overflow. I also hated Java for this reason; I found that before 2002 the JVM did not support such optimizations. Since then, Sun has added this feature to the JVM.