The essay claims that you shouldn't use the tail-recursive method you'd learn in a CS class to compute the Fibonacci numbers because, as any mathematician knows, an exact analytical solution exists. The author's C example looks like:
But there are actually a few more optimizations I picked up while studying linear recurrence sequences that I thought I'd share. The first drops the time almost by half:
This works because the dropped term (the pow(0.5 - 0.5 × sqrt(5.0), n)/sqrt(5.0) part) always has a magnitude less than 0.5, so the round alone gives you the same result.
Note: I benchmarked these by compiling with
gcc fibonacci.c -O2 -o fibonacci
Using this benchmark I get 12006 ms vs. 7123 ms, and the validation number matches at 0x6744.
But there's yet another optimization:
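A sketch of the tail-recursive method, with my own names (the accumulator-passing style; gcc at -O2 compiles the tail call down to a loop):

```c
/* Tail-recursive Fibonacci: the accumulators a and b carry the last
   two values, so the recursive call is in tail position and gcc -O2
   can turn it into a jump rather than a real call. */
static unsigned long long fib_tail(unsigned n, unsigned long long a,
                                   unsigned long long b) {
    if (n == 0)
        return a;
    return fib_tail(n - 1, b, a + b);
}

unsigned long long fib_rec(unsigned n) {
    return fib_tail(n, 0, 1);
}
```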
That's right, we can do even better by using the tail-call recursion method dismissed in the essay. Now we get a time of 2937 ms.
The observant among you will notice that my benchmark just recalculates the first 40 Fibonacci numbers over and over while summing them and taking the last four hex digits of the sum for validation. (It's not just for validation. We also do this because if you compile with -O2 and don't do anything with the output, gcc is smart enough to skip the whole computation. And we need -O2 so gcc will recognize the tail call.)
You could call foul on me right now. After all, the reason the analytic approach is slower is pow, and pow becomes relatively more efficient as the exponents grow.
Alright, fair enough. Let's run the test again, except we'll sum the first 90 Fibonacci numbers instead (there's not much point going much past 90, since the 93rd Fibonacci number is the largest that fits in a 64-bit unsigned long int). So we update the code to
r = (r + f(i % 90)) % 0x10000;
Now we get 7795 ms for the recursive solution and 12840 ms and 7640 ms for the two analytical solutions. I ran the benchmark a few times and the recursive method is consistently faster, but I think a 2% edge has to be within the margin of error of gcc's optimizations.
But there's something else to notice. For the two analytic solutions the validation number is 0x2644 but for the recursive solution it is 0x2f9c. Two against one right? Well, votes don't count in math and dictatorships.
What happened is that at the 71st Fibonacci number both analytical solutions lost precision. This is because C doesn't check what we're trying to do; it does what we tell it to do. And we told it to take a floating-point approximation of an irrational number, with only the precision a double has, and raise it to a power.
I do want to stop here a moment and say I'm not pointing out this error as a gotcha moment or as evidence that Evan Miller is poor at math. I think How Not To Run An A/B Test is an incredibly important essay and should be understood by anyone who is using A/B test results. Also if you are doing statistics on a Mac you probably should have bought Wizard by now.
However, I do think this mistake illustrates an important lesson. If we tell programmers that the more math (or should that be Math) they use the better programmer they are, we are setting up unnecessary disasters.
One reason is that virtually no programmer spends the majority of their time doing things that look like Math. Most spend 99.5% of their time doing things that don't look like Math. If a programmer takes this message to heart, then they're going to spend a lot of time feeling like they aren't a true programmer, which is silly and bad for its own sake.
The other issue is that a focus on better programming looking like Math can be a major distraction, and it can lead to some really silly time-wasting debates (e.g. https://github.com/twitter/bijection/issues/41).
But most dreadfully, if we tell programmers that they should give more weight to the more mathematical solutions, they will often not choose the best solution for their context. I've certainly not given the best solution for finding Fibonacci numbers for all contexts. Heck, I'd bet you could get better results on my own benchmark by using memoization (for the record, there's a memoization technique for both the recursive and the analytical solution, but it's easier with the recursive one).
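A sketch of what memoization could look like here (my own variant, not from any of the benchmarked listings): precompute the whole table once, since only 94 Fibonacci numbers fit in 64 bits anyway.

```c
/* Memoized Fibonacci (a sketch): fill a table of all the values that
   fit in 64 bits once, then every later call is a plain lookup. */
enum { FIB_TABLE_SIZE = 94 };  /* F(93) is the largest Fibonacci
                                  number that fits in 64 bits */

unsigned long long fib_memo(unsigned n) {
    static unsigned long long table[FIB_TABLE_SIZE];
    static int filled = 0;
    if (!filled) {
        table[0] = 0;
        table[1] = 1;
        for (unsigned i = 2; i < FIB_TABLE_SIZE; i++)
            table[i] = table[i - 1] + table[i - 2];
        filled = 1;
    }
    return table[n];
}
```

For a benchmark that recomputes the same values millions of times, this turns every call after the first pass into an array read.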
My solution wouldn't be that all programmers learn more Math. My solution would be that it is good to be part of a group where different people know different things. And we should take advantage of this knowledge base by not being embarrassed to ask questions. I have a number of friends who send me emails from time to time that just have the subject "math question." And I regularly send emails with the subject "programming question," "statistics question," "writing question," or even "math question." I find it works really well for me.
So no, I don't think every programmer needs to be taught more Math. Except for linear algebra of course. Everyone should learn more linear algebra.
(You can download the full source for these examples from GitHub.)