Computing massive Fibonacci numbers using matrix exponentiation in ℤ[√5], FFT-accelerated large integer multiplication with Apple's Accelerate framework, and real-time logarithmic visualization.
Beyond Naive Recursion
The Fibonacci sequence is every computer science student's first encounter with algorithmic complexity. The naive recursive implementation has exponential time complexity: computing the 50th Fibonacci number takes on the order of minutes because the same subproblems are solved tens of billions of times.
The iterative fix gets us to linear time: maintain two variables, update in a loop. But what if we want F(10,000,000)? Even O(n) becomes impractical when n is in the millions.
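To make the contrast concrete, here is a minimal sketch of both approaches in Python (function names are mine):

```python
def fib_naive(n):
    # exponential time: the same subproblems are recomputed over and over
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_iter(n):
    # linear time: carry the last two values through a loop
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

fib_iter(50)   # → 12586269025, instantly; fib_naive(50) would run for a very long time
```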
Matrix Representation
Fibonacci has a beautiful matrix formulation. The recurrence F(n) = F(n-1) + F(n-2) can be written as multiplication by the 2x2 matrix [[1, 1], [1, 0]], whose nth power contains F(n+1), F(n), and F(n-1) as its entries. Computing F(n) becomes computing the nth power of a 2x2 matrix.
Matrix exponentiation by squaring gives us O(log n) matrix multiplications. To compute M^1000, we don't multiply M by itself 999 times—we compute M^2, then M^4, then M^8, building up powers of two and combining them.
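A sketch of exponentiation by squaring on the Fibonacci Q-matrix, relying on Python's arbitrary-precision integers (names are illustrative):

```python
def mat_mul(x, y):
    # 2x2 matrix product over exact integers
    return [[x[0][0]*y[0][0] + x[0][1]*y[1][0], x[0][0]*y[0][1] + x[0][1]*y[1][1]],
            [x[1][0]*y[0][0] + x[1][1]*y[1][0], x[1][0]*y[0][1] + x[1][1]*y[1][1]]]

def fib_matrix(n):
    # exponentiation by squaring: O(log n) matrix multiplications
    result = [[1, 0], [0, 1]]   # identity
    base = [[1, 1], [1, 0]]     # Q-matrix; Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]         # F(n)

fib_matrix(10)   # → 55
```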
But there's a subtlety. Each matrix multiplication involves arithmetic on the Fibonacci numbers themselves. F(10,000,000) has over two million digits. Multiplying two million-digit numbers using grade-school multiplication is O(d^2)—still quadratic in the digit count.
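Binet's formula makes that growth easy to estimate without computing F(n). This sketch uses the standard digit-count estimate n·log10(phi) - log10(sqrt(5)) + 1 (function name is mine):

```python
import math

def fib_digits(n):
    # decimal digit count of F(n), estimated from Binet's formula (exact for n >= 1)
    phi = (1 + math.sqrt(5)) / 2
    return int(n * math.log10(phi) - math.log10(math.sqrt(5))) + 1

fib_digits(10_000_000)   # → 2089877, i.e. over two million digits
```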
The Ring Z[sqrt(5)]
Here's where abstract algebra offers an elegant shortcut. Instead of matrices, we can work in the algebraic ring Z[sqrt(5)]: numbers of the form a + b·sqrt(5) where a and b are integers.
Why this particular ring? Because the golden ratio phi = (1 + sqrt(5))/2 lives there (strictly, in the slightly larger ring Z[phi], whose elements are (a + b·sqrt(5))/2 with a and b of equal parity). Binet's formula expresses F(n) directly in terms of phi^n. Computing phi^n by repeated squaring requires only O(log n) ring multiplications, and each multiplication is just four integer multiplications and two additions.
The algebraic structure simplifies the bookkeeping. Instead of tracking four matrix entries, we track two ring coefficients. The mathematics remains the same—we're still exponentiating by squaring—but the representation is more natural.
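One way to realize this in code is to track phi^n = (a + b·sqrt(5))/2 as the integer pair (a, b); then a is the Lucas number L(n) and b is F(n). A sketch assuming that representation (the halvings are exact because a and b always share parity):

```python
def fib_ring(n):
    # represent (a + b*sqrt5)/2 as the pair (a, b); invariant: (a, b) = (L(k), F(k))
    a, b = 2, 0      # phi^0 = 1 = (2 + 0*sqrt5)/2
    pa, pb = 1, 1    # phi   =     (1 + 1*sqrt5)/2
    while n:
        if n & 1:
            # ((a + b√5)/2)·((pa + pb√5)/2) = ((a·pa + 5·b·pb) + (a·pb + b·pa)√5)/4
            a, b = (a*pa + 5*b*pb) // 2, (a*pb + b*pa) // 2
        # square the running power of phi
        pa, pb = (pa*pa + 5*pb*pb) // 2, pa*pb
        n >>= 1
    return b         # phi^n = (L(n) + F(n)*sqrt5)/2, so b = F(n)

fib_ring(10)   # → 55
```

Two ring coefficients instead of four matrix entries, exactly as described above.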
The FFT Connection
We've reduced to O(log n) ring multiplications. Each involves multiplying integers that grow to millions of digits. This is where the Fast Fourier Transform enters.
The key insight: polynomial multiplication in the coefficient domain is convolution, which is O(n^2). But convolution in the frequency domain is pointwise multiplication, which is O(n). The FFT converts between domains in O(n log n).
Numbers are just polynomials evaluated at the base. The integer 1234 is 1·10^3 + 2·10^2 + 3·10 + 4, a polynomial in powers of 10. Multiplying large integers is equivalent to multiplying polynomials, which is equivalent to convolution, which becomes pointwise multiplication after the FFT.
This reduces integer multiplication from O(d^2) to O(d log d). For million-digit numbers, that's the difference between intractable and fast.
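A sketch of the whole pipeline, using NumPy's FFT rather than Accelerate, with a base of 10^4 per limb so intermediate products stay well within double precision:

```python
import numpy as np

BASE = 10**4  # small limbs keep FFT rounding errors well below 0.5

def to_limbs(v):
    # split an integer into base-10^4 "digits", least significant first
    limbs = []
    while v:
        limbs.append(v % BASE)
        v //= BASE
    return limbs or [0]

def fft_mul(x, y):
    a, b = to_limbs(x), to_limbs(y)
    n = 1
    while n < len(a) + len(b):
        n <<= 1
    # convolution in the limb domain = pointwise product in the frequency domain
    fa, fb = np.fft.rfft(a, n), np.fft.rfft(b, n)
    coeffs = np.rint(np.fft.irfft(fa * fb, n)).astype(np.int64)
    # carry propagation, then reassemble the integer
    value, carry = 0, 0
    for i, c in enumerate(coeffs):
        carry += int(c)
        value += (carry % BASE) * BASE**i
        carry //= BASE
    return value

fft_mul(123456789, 987654321)   # → 121932631112635269
```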
Numerical Considerations
FFT-based multiplication has practical complications. We're converting integers to floating-point for the transform, which introduces rounding errors. The trick is using a small enough base (say, 10^4 per "digit") that intermediate products don't exceed floating-point precision.
There's also the carry propagation step. After pointwise multiplication and inverse FFT, we have a polynomial with potentially large coefficients. These must be normalized by propagating carries—similar to the final step in grade-school multiplication, but operating on the entire result at once.
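In isolation, normalization looks like this toy example in base 10, applied to the convolution of the digit polynomials of 1234 and 56 (helper name is mine):

```python
def normalize(coeffs, base=10):
    # carry propagation: turn raw convolution coefficients into proper digits
    digits, carry = [], 0
    for c in coeffs:
        carry += c
        digits.append(carry % base)
        carry //= base
    while carry:
        digits.append(carry % base)
        carry //= base
    return sum(d * base**i for i, d in enumerate(digits))

# coefficients of (4 + 3x + 2x^2 + x^3)(6 + 5x), i.e. the convolution for 1234 × 56
normalize([24, 38, 27, 16, 5])   # → 69104
```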
Modern implementations use the Number Theoretic Transform (NTT) instead, which works in modular arithmetic and avoids floating-point entirely. The theory is the same—DFT over a finite field instead of complex numbers—but the implementation is cleaner.
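A compact NTT sketch over the commonly used FFT-friendly prime 998244353 (primitive root 3); the butterfly structure is the same as the complex FFT, but every operation is exact modular arithmetic:

```python
MOD, G = 998244353, 3   # 119 * 2^23 + 1, primitive root 3

def ntt(a, invert=False):
    a = a[:]
    n = len(a)          # n must be a power of two
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # iterative Cooley-Tukey butterflies over the finite field
    length = 2
    while length <= n:
        w = pow(G, (MOD - 1) // length, MOD)
        if invert:
            w = pow(w, MOD - 2, MOD)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % MOD
                a[k], a[k + length // 2] = (u + v) % MOD, (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        inv_n = pow(n, MOD - 2, MOD)
        a = [x * inv_n % MOD for x in a]
    return a

def convolve(a, b):
    # polynomial product mod 998244353 via forward NTT, pointwise multiply, inverse NTT
    n = 1
    while n < len(a) + len(b) - 1:
        n <<= 1
    fa = ntt(a + [0] * (n - len(a)))
    fb = ntt(b + [0] * (n - len(b)))
    return ntt([x * y % MOD for x, y in zip(fa, fb)], invert=True)[:len(a) + len(b) - 1]

convolve([1, 2], [3, 4])   # → [3, 10, 8], i.e. (1 + 2x)(3 + 4x)
```

No floating point anywhere, so no rounding analysis is needed; coefficients must simply stay below the modulus.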
The Complexity Picture
Putting it together: O(log n) ring multiplications, each costing O(d log d) where d is the digit count, and F(n) has roughly O(n) digits (about 0.209·n decimal digits, by Binet's formula). Charging the full O(n log n) to each of the O(log n) multiplications gives an upper bound of O(n log^2 n); since operand sizes roughly double at each squaring step, the final multiplication dominates and a sharper bound is O(n log n). Either way, effectively linear up to logarithmic factors.
Compare to naive iteration: O(n) additions of O(n)-digit numbers gives O(n^2). Matrix exponentiation with naive multiplication: O(log n) multiplications of O(n)-digit numbers at O(n^2) per multiplication gives O(n^2 log n). FFT-accelerated ring exponentiation beats both decisively.
Mathematical Themes
What I find compelling about this problem is how it connects disparate areas of mathematics. Number theory gives us the ring structure. Complex analysis underlies the FFT. Algebra provides the exponentiation framework. Signal processing suggests the convolution perspective.
Each field contributes a piece of the puzzle. The final algorithm is greater than the sum of its parts: not just faster, but more illuminating. You see the Fibonacci sequence not as a curiosity but as a special case of a linear recurrence, not as arithmetic but as algebra.
The question "how do we compute F(n)?" becomes "what mathematical structures make this computation tractable?" That reframing is the real lesson. The algorithm is a corollary.