Monday, January 28, 2013

On calculating Fibonacci numbers in C

A few months ago Evan Miller wrote an essay called The Mathematical Hacker. While it's an interesting post, he makes a mistake when he gives the "proper way" to calculate the Fibonacci numbers.

The essay claims that you shouldn't use the tail-recursive method you would learn in a CS class to compute the Fibonacci numbers because, as any mathematician knows, an exact analytical solution exists. His C example looks like:
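(Something close to this, anyway; I'm reproducing it from memory, so the exact integer types and rounding call may differ from the essay's.)

#include <math.h>

/* Binet's formula: F(n) = (phi^n - psi^n) / sqrt(5), where
   phi = 0.5 + 0.5*sqrt(5) and psi = 0.5 - 0.5*sqrt(5) */
unsigned long int fib(unsigned int n) {
    return (unsigned long int)round(
        (pow(0.5 + 0.5 * sqrt(5.0), n) - pow(0.5 - 0.5 * sqrt(5.0), n))
        / sqrt(5.0));
}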
But there are actually a few more optimizations I picked up while studying linear recurrence sequences that I thought I'd share. The first drops the time almost by half:
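(Again a sketch with my own names; the point is the single pow call.)

#include <math.h>

/* Same formula with the vanishing psi^n term dropped:
   one pow() call instead of two */
unsigned long int fib(unsigned int n) {
    return (unsigned long int)round(pow(0.5 + 0.5 * sqrt(5.0), n) / sqrt(5.0));
}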
The reason this works is that the term we dropped (the pow(0.5 - 0.5 * sqrt(5.0), n) / sqrt(5.0) part) always has a magnitude less than 0.5: since |0.5 - 0.5 * sqrt(5.0)| ≈ 0.618 is less than 1, the term never exceeds 1/sqrt(5.0) ≈ 0.447. So you get the same integer just by rounding what's left.

Note: I benchmarked these after compiling with gcc fibonacci.c -O2 -o fibonacci

Using this benchmark I get 12006 ms vs. 7123 ms, and the validation numbers match at 0x6744.

But there's yet another optimization:
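(A sketch rather than the verbatim source; the helper's name, fib_helper, is the one readers mention in the comments below, but the exact signature is my guess.)

/* With -O2, gcc compiles the self-call into a loop */
unsigned long int fib_helper(unsigned long int a, unsigned long int b, unsigned int n) {
    return n == 0 ? a : fib_helper(b, a + b, n - 1);
}

unsigned long int fib(unsigned int n) {
    return fib_helper(0, 1, n);
}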
That's right, we can do even better by using the tail-call recursion method dismissed in the essay. Now we get a time of 2937 ms.

The observant among you will notice that what my benchmark does is just recalculate the first 40 Fibonacci numbers over and over again, summing them and taking the last 4 hex digits of the sum for validation. (It's not just for validation. We also do this because if you compile with gcc -O2 and don't do anything with the output, gcc is smart enough to skip the whole computation. We need -O2 so gcc will recognize the tail call.)
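The inner loop looks roughly like this (a sketch: f stands for whichever implementation is being timed, and ITERATIONS is a stand-in for whatever count the real harness uses):

unsigned long int r = 0;
unsigned long int i;
for (i = 0; i < ITERATIONS; i++) {
    r = (r + f(i % 40)) % 0x10000;  /* keep only the last 4 hex digits */
}
printf("0x%04lx\n", r);             /* using r is what keeps -O2 honest */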

You could cry foul right now. After all, the reason the analytic approach is slower is pow, and pow becomes relatively cheaper as the exponent grows, since its cost doesn't depend on n while the recursion's does.

Alright, fair enough. Let's run the test again, except we'll sum the first 90 Fibonacci numbers instead (not much point going much past 90, since F(93) is the largest Fibonacci number that fits in an unsigned long int). So we update the code to r = (r + f(i % 90)) % 0x10000;

Now we get 7795 ms for the recursive solution, and 12840 ms and 7640 ms for the two analytical solutions. I ran the benchmark a few times and the one-pow analytical method is consistently faster, but I think a 2% difference has to be within the gcc optimization margin of error.

But there's something else to notice. For the two analytic solutions the validation number is 0x2644, but for the recursive solution it is 0x2f9c. Two against one, right? Well, votes don't count in math or in dictatorships.

What happened is that at the 71st Fibonacci number both analytical solutions lose precision. This is because C doesn't check what we're trying to do; it does what we tell it to do. And we told it to take a floating-point approximation of an irrational number, with only the precision a double has, and raise it to a power.
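If you want to watch it fail yourself, here's a minimal check (my own sketch, not part of the benchmark source; link with -lm on Linux):

#include <math.h>
#include <stdio.h>

int main(void) {
    unsigned long int a = 0, b = 1, t, analytic;
    unsigned int n;
    for (n = 0; n < 71; n++) { t = a; a = b; b = t + b; }  /* a = F(71), computed exactly */
    analytic = (unsigned long int)round(pow(0.5 + 0.5 * sqrt(5.0), 71) / sqrt(5.0));
    printf("exact:    %lu\n", a);        /* 308061521170129 */
    printf("analytic: %lu\n", analytic); /* close, but not equal */
    return 0;
}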

I do want to stop here a moment and say I'm not pointing out this error as a gotcha moment or as evidence that Evan Miller is poor at math. I think How Not To Run An A/B Test is an incredibly important essay and should be understood by anyone who is using A/B test results. Also if you are doing statistics on a Mac you probably should have bought Wizard by now.

However, I do think this mistake illustrates an important lesson. If we tell programmers that the more math (or should that be Math?) they use, the better programmers they are, we are setting up unnecessary disasters.

One reason is that virtually no programmer spends the majority of their time doing things that look like Math; most spend 99.5% of their time on things that don't. If a programmer takes this message to heart, they are going to spend a lot of time feeling like they aren't a true programmer, which is silly and bad for its own sake.

The other issue is that a focus on better programming looking like Math can be a major distraction, and it can lead to really silly time-wasting debates (e.g. https://github.com/twitter/bijection/issues/41).

But most dreadfully, if we tell programmers that they should give more weight to the more mathematical solutions, they will often not choose the best solution for their context. I've certainly not given the best solution for finding Fibonacci numbers in all contexts. Heck, I'd bet you could get better results on my own benchmark by using memoization (for the record, there's a memoization technique for both the recursive and the analytical solution, but it's easier with the recursive one).
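For the recursive side, the memoization I have in mind is just a cache of everything computed so far (a sketch of mine, not code from the benchmark repo):

/* Requires n <= 93, the largest index whose value fits in 64 bits */
unsigned long int fib_memo(unsigned int n) {
    static unsigned long int cache[94] = {0, 1};  /* F(0), F(1); rest zeroed */
    static unsigned int filled = 1;               /* highest index computed so far */
    while (filled < n) {
        filled++;
        cache[filled] = cache[filled - 1] + cache[filled - 2];
    }
    return cache[n];
}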

My solution wouldn't be that all programmers learn more Math. My solution would be to be part of a group where different people know different things, and to take advantage of that knowledge base by not being embarrassed to ask questions. I have a number of friends who send me emails from time to time with just the subject "math question." And I constantly send emails with the subject "programming question," "statistics question," "writing question," or even "math question." I find it works really well for me.

So no, I don't think every programmer needs to be taught more Math. Except for linear algebra, of course. Everyone should learn more linear algebra.

(You can download the full source for these examples from GitHub.)

8 comments:

j2kun said...

I don't think you gave enough credit to either programmers or mathematicians here. A true programmer's solution would be to precompute all 90 numbers and store them in an array, since you're fixing the problem size. As you said, the analytical solution really shows its worth asymptotically. I'd be more interested to see a comparison using a data structure for unbounded integer arithmetic, and scale the fibonacci numbers up to, say, the thousandth or ten thousandth. I would bet the analytical solution holds its weight much better in that situation, in particular because instead of doing twenty thousand additions you're only doing a constant number of operations and exponentiation.

Nick Craig-Wood said...

Is there anything wrong with the iterative approach? It seems to me to be easier to understand and it is a bit quicker than both the approaches you've discussed.

(apologies for formatting)

unsigned long int fib_iterative(unsigned long int n) {
    unsigned long int curr = 0, prev = 1, temp, i;
    for (i = 0; i < n; i++) {
        temp = curr;
        curr += prev;
        prev = temp;
    }
    return curr;
}

Ben Tilly said...

First of all, if you want an exact answer for large n, neither the analytic nor the naive recursive answer is best. The analytic has floating-point problems, and the naive recursive is too slow. Instead you can take advantage of recursive identities. See http://www.plover.com/misc/math/fibonacci for an exposition of this technique.

Secondly Evan Miller's article is by no means the last word on A/B testing. See http://elem.com/~btilly/ab-testing-multiple-looks/part1-rigorous.html for the first installment of a reply that explains some of the trade-offs that we need to accept to produce useful statistics in a business setting.

steven said...

@nick Nothing wrong with the iterative method to me. Someone from Hacker News took your comment and made a branch that adds it as a version: https://github.com/snoble/benchmarking_fibonacci/tree/WithIterartiveSolution

@j2kun I really dislike the term "true programmer" and I think it's that attitude in general that motivates this post. Different contexts call for different solutions. I just took the data structure I was given from the original essay and pointed out that the approach that essay dismisses actually has advantages in a fairly common context.

@btilly I've seen your A/B testing post before and found it quite interesting. The plover post looks quite interesting too. Thanks for sharing.

Unknown said...

Next time I'm required to compute Fibonacci numbers in C, I'll be well equipped! (Sarcasm :P)

Unknown said...

The "classic" addition using recursion for calculating fibonacci series may be faster than the golden ratio approximation method you outline in your blog post. Mainly because division and square root operations on a processor have a much higher CPU cost (in cycles) than an addition operation.

It would be more interesting to see a benchmark of the two methods compared while summing an n-length series. What may be "mathematically pretty" does not always translate to a "better" implementation on a computer architecture.

Metrics would truly prove whether or not your proposed method is better (which I don't think it is computationally).

SAM

steven said...

@SAM Unsurprisingly, calculating Fibonacci numbers is a problem that has been deeply investigated. My post doesn't come close to covering the depth of this material, nor does it try. From a complexity perspective the analytic solution is superior, but there are even better solutions still. One nice one uses the analytic identity but only uses integer math.
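To sketch what I mean (my illustration, not code from any of the linked material): the doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2 give F(n) in O(log n) integer operations, with no floating point anywhere:

/* Computes F(n) into *fn and F(n+1) into *fn1 */
void fib_pair(unsigned int n, unsigned long int *fn, unsigned long int *fn1) {
    unsigned long int a, b, c, d;
    if (n == 0) { *fn = 0; *fn1 = 1; return; }
    fib_pair(n / 2, &a, &b);       /* a = F(k), b = F(k+1), k = n/2 */
    c = a * (2 * b - a);           /* F(2k)   */
    d = a * a + b * b;             /* F(2k+1) */
    if (n % 2 == 0) { *fn = c; *fn1 = d; }
    else            { *fn = d; *fn1 = c + d; }
}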

The point I was trying to communicate was that context matters. In the context of solving for a long int in C the analytic approach simply fails when you care about getting the right integer.

But there are other questions of context to ask. As @j2kun points out, if you cared simply about speed you would just precompute. As I tried to hint at in my post, if you are solving iteratively, memoization matters: it's a lot faster to solve the 70th Fibonacci number if you kept the 68th and 69th numbers around from a previous run. But if your program doesn't stay around for multiple runs, this isn't helpful.

I'm surprised no one has talked about the value of a more readable solution yet. In some contexts this is the most valuable attribute.

Interestingly, a commenter on Hacker News asked how @Nick's solution would perform comparatively. People started responding with how gcc optimizes tail-call recursion and how the theory works. I suggested he just fork the source and run the benchmark, and I'm glad he did.

Anonymous said...

Hi Steven, I came across your post while I was searching for The Mathematical Hacker on Google. It's a really impressive article you have written. I never expected the kind of recursive solution (fib_helper) you posted. You focused on software, and I really think you made 2 pretty valuable points that every programmer needs to understand:

1) There are multiple solutions for a problem but the correct one is the one that fits the context, the current environment.

2) Readability: something most programmers do not take much into account. Knuth created Literate Programming just because of this.


I was quite impressed after reading Evan Miller's article "Don't Kill Math", where he states that math is actually a good vehicle for understanding nature. Computer programming and understanding how nature works are two different domains (engineering and medicine are different, for example). I think he should have mentioned this concept more clearly in "The Mathematical Hacker".