Darkpony wrote:
This semester in my algorithms and data structures class we had the old backtracking recursion assignment.
Which one? There are several widely used examples of recursive backtracking in such courses; from that sentence alone, I would probably have guessed that you meant Eight Queens, but given the description after that:
Darkpony wrote:
I found all the paths from a random start point and end point with an obstacle on a simulated 2d array board. The prof recorded the times it took to compute the boards and I placed in the middle.
I am guessing you mean some variant of a
Maze Path problem (whether on a simply-connected maze or not), or some general
pathfinding problem.
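For reference, here is a minimal sketch of the sort of recursive backtracking path search I am guessing at. The board size, the obstacle marker, and the movement rules (four-directional, no revisiting) are my own assumptions, not necessarily what your assignment specified:

Code:
#include <stdio.h>

#define N 8            /* board dimension - an assumption */
#define OBSTACLE 1     /* cell value marking a blocked square */

static int board[N][N];   /* 0 = open, 1 = obstacle */
static int visited[N][N]; /* squares on the current path */

/* Recursively count every path from (r,c) to (er,ec), moving one
 * square up/down/left/right and never revisiting a square. */
static long count_paths(int r, int c, int er, int ec)
{
    if (r < 0 || r >= N || c < 0 || c >= N) return 0;   /* off the board */
    if (board[r][c] == OBSTACLE || visited[r][c]) return 0;
    if (r == er && c == ec) return 1;                   /* reached the goal */

    visited[r][c] = 1;                  /* mark: square is on the path */
    long total = count_paths(r + 1, c, er, ec)
               + count_paths(r - 1, c, er, ec)
               + count_paths(r, c + 1, er, ec)
               + count_paths(r, c - 1, er, ec);
    visited[r][c] = 0;                  /* backtrack: free it for other paths */
    return total;
}

int main(void)
{
    board[3][3] = OBSTACLE;             /* one blocked square, arbitrarily placed */
    printf("%ld\n", count_paths(0, 0, N - 1, N - 1));
    return 0;
}

Note that the number of simple paths grows explosively with the board size, which is exactly why the point about algorithmic complexity below matters far more than raw instruction speed.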
Darkpony wrote:
I was wondering if I could improve my time by programming bare metal.
Probably not, though without a lot more information about the previous implementation and your assembly language skills, I couldn't hazard a guess.
The first thing to realize is that the speed of most problems fitting that description is dominated by algorithmic time complexity, not implementation efficiency. No matter how tight your code is, an O(n log n) algorithm (I am assuming here that the course covered Big-O notation) is going to be faster than an O(n^2) algorithm for any n above the crossover point set by the marginal implementation costs of the two algorithms. By Marginal Implementation Cost in Time (MICT), I mean the time a given implementation spends on a single pass - that is to say, its unit operational time. The running time of an implementation is roughly proportional to the product of its MICT and the algorithmic complexity applied to the number of elements operated on. This means that, as a rule, for n below some point the simpler implementation is faster, but above that point the more efficient algorithm performs better even if it has a significantly higher MICT.
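As a concrete illustration with made-up numbers: suppose the O(n^2) implementation has a MICT of 1 time unit and the O(n log n) implementation has a MICT of 10 units. This toy program finds the crossover point under those assumed costs (it reports a crossover near n = 59):

Code:
#include <stdio.h>
#include <math.h>

/* Toy comparison of two cost models; the per-pass costs (MICTs)
 * below are invented purely for illustration. */
int main(void)
{
    const double mict_fast = 10.0;  /* assumed cost per step, O(n log n) version */
    const double mict_slow = 1.0;   /* assumed cost per step, O(n^2) version */

    for (int n = 2; n <= 200; n++) {
        double t_fast = mict_fast * n * log2((double)n);
        double t_slow = mict_slow * (double)n * n;
        if (t_fast < t_slow) {
            printf("crossover near n = %d\n", n);
            break;
        }
    }
    return 0;
}

Below that n, the "worse" algorithm with the cheaper inner loop actually wins; above it, no amount of hand-tuning saves the O(n^2) version.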
For a given implementation, the MICT in turn depends on how good the running code is - both on how good a programmer you are, and on how good the source code translator's generated code is. With assembly, the latter depends almost entirely on the former, while for a compiled program, it depends on how good the compiler's optimization is. Now, while it is true that a capable assembly programmer can - in principle - always outperform a compiler, in practice few assembly programmers are skilled enough to beat even a modestly optimizing compiler (such as GCC or Visual C++), and the larger the program, the harder it is to sustain that level of excellence. A really highly optimizing compiler can work with a lot more data points than a programmer can keep in their head, and grind over them a lot faster than the programmer could; but again, that level of optimization is a black art, so most compilers don't go to the extremes of ones like, say, the Intel C compiler or the Stalin Scheme compiler. As a rule, the results mirror the pattern described above: for a really short program, hand-coding often wins, but the longer and more complex the program gets, the more things favor the compiler, even with no real optimizations applied.
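To make the compiler-optimization point a bit more concrete, here is one small, well-known transformation - strength reduction - expressed in C. The function names are mine, and the "reduced" version just shows, back at the source level, what a decent compiler typically does on its own:

Code:
#include <stdio.h>

/* What the programmer writes: */
static unsigned scale_by_8(unsigned x)
{
    return x * 8u;
}

/* Roughly what an optimizing compiler emits, written back out as C:
 * the multiply has been strength-reduced to a left shift. */
static unsigned scale_by_8_reduced(unsigned x)
{
    return x << 3;
}

int main(void)
{
    printf("%u %u\n", scale_by_8(5), scale_by_8_reduced(5));
    return 0;
}

An assembly programmer has to spot and apply that sort of thing by hand, everywhere, every time; the compiler does it mechanically across the whole program.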