Comments on "A mathematician at risk: On calculating Fibonacci numbers in C" by Steven H. Noble

arnuld (http://lispmachine.wordpress.com/), 2014-11-27:

Hi Steven, I came across your post while searching for The Mathematical Hacker on Google. A really impressive article; I never expected the kind of recursive solution (fib_helper) you posted. You focused on software, and I think you made two valuable points that every programmer needs to understand:

1) There are multiple solutions to a problem, but the correct one is the one that fits the context, the current environment.

2) Readability, which most programmers do not take much into account. Knuth created Literate Programming precisely because of this.

I was quite impressed after reading Evan Miller's article "Don't Kill Math", where he states that math is actually a good vehicle for understanding nature. Computer programming and understanding how nature works are two different domains (engineering and medicine are different, for example). I think he should have made this point more clearly in "The Mathematical Hacker".

Steven H. Noble, 2013-01-30:

@SAM Unsurprisingly, calculating Fibonacci numbers is a problem that has been investigated in depth. My post doesn't come close to covering that material, nor does it try. From a complexity perspective the analytic solution is superior, but there are better solutions still.
One nice one uses the analytic identity but with integer math only.

The point I was trying to communicate was that context matters. In the context of solving for a long int in C, the analytic approach simply fails when you care about getting the right integer.

But there are other questions of context to ask. As @j2kun points out, if you cared only about speed you would just precompute. As I tried to hint at in my post, if you are solving iteratively, memoization matters: it's a lot faster to find the 70th Fibonacci number if you kept the 68th and 69th around from a previous run. But if your program doesn't stay around for multiple runs, that isn't helpful.

I'm surprised no one has talked about the value of a more readable solution yet. In some contexts that is the most valuable attribute.

Interestingly, a commenter on Hacker News asked how @Nick's solution would have performed by comparison. People started responding about how gcc optimizes tail-call recursion and how the theory works. I suggested he just fork the source and run the benchmark, and I'm glad he did.

SAM, 2013-01-30:

The "classic" recursive addition for calculating the Fibonacci series may be faster than the golden-ratio approximation method you outline in your blog post, mainly because division and square-root operations cost far more CPU cycles than an addition.

It would be more interesting to see a benchmark of the two methods when summing a series of length n.
What may be "mathematically pretty" does not always translate to a "better" implementation on a given computer architecture.

Metrics would truly prove whether or not your proposed method is better (which I don't think it is, computationally).

SAM

Aaron Snoswell, 2013-01-29:

Next time I'm required to compute Fibonacci numbers in C, I'll be well equipped! (Sarcasm :P)

Steven H. Noble, 2013-01-29:

@nick Nothing wrong with the iterative method, in my view. Someone from Hacker News took your comment and made a branch that adds it as a version: https://github.com/snoble/benchmarking_fibonacci/tree/WithIterartiveSolution

@j2kun I really dislike the term "true programmer", and I think it's that attitude in general that motivated this post. Different contexts call for different solutions. I just took the data structure I was given from the original essay and pointed out that the approach that essay dismisses actually has advantages in a fairly common context.

@btilly I've seen your A/B testing post before and found it quite interesting. The plover post looks interesting too. Thanks for sharing.

btilly, 2013-01-28:

First of all, if you want an exact answer for large n, neither the analytic nor the naive recursive approach is best.
The analytic solution has floating-point problems, and the naive recursive one is too slow. Instead, you can take advantage of recursive identities. See http://www.plover.com/misc/math/fibonacci for an exposition of this technique.

Secondly, Evan Miller's article is by no means the last word on A/B testing. See http://elem.com/~btilly/ab-testing-multiple-looks/part1-rigorous.html for the first installment of a reply that explains some of the trade-offs we need to accept to produce useful statistics in a business setting.

btilly

Nick Craig-Wood, 2013-01-28:

Is there anything wrong with the iterative approach? It seems to me easier to understand, and it is a bit quicker than both of the approaches you've discussed.

(apologies for formatting)

unsigned long int fib_iterative(unsigned long int n) {
    unsigned long int curr = 0, prev = 1, temp, i;
    for (i = 0; i < n; i++) {
        temp = curr;
        curr += prev;
        prev = temp;
    }
    return curr;
}

j2kun, 2013-01-28:

I don't think you gave enough credit to either programmers or mathematicians here. A true programmer's solution would be to precompute all 90 numbers and store them in an array, since you're fixing the problem size. As you said, the analytic solution really shows its worth asymptotically. I'd be more interested to see a comparison using a data structure for unbounded integer arithmetic, scaling the Fibonacci numbers up to, say, the thousandth or ten-thousandth.
I would bet the analytic solution holds its weight much better in that situation, in particular because instead of doing twenty thousand additions you're only doing a constant number of operations plus an exponentiation.

j2kun