The Math Behind
Infinite Zoom
How perturbation theory, rescaling tricks, and self-similarity enable 10^10,000,000+ magnification at 120 FPS.
The Precision Wall
IEEE 754 doubles carry 52 explicit mantissa bits (53 counting the implicit leading bit), roughly 15-16 significant decimal digits. Every 3.32× zoom costs one more bit of precision.
At 10^16 zoom, you've exhausted every bit. The image dissolves into noise. Subtracting nearly-equal coordinates produces garbage.
This is the precision wall. Every fractal explorer hits it.
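You can watch the wall with ordinary doubles. An illustrative Python sketch (values chosen for the demo, not taken from any renderer):

```python
import math

# One decimal digit of zoom costs log2(10) ≈ 3.32 bits of mantissa.
print(math.log2(10))

# Two pixel centres 1e-17 apart near x = 0.3: the offset is below the
# resolution of a double (half an ulp of 0.3 is about 2.8e-17), so it is lost.
a = 0.3 + 1e-17
b = 0.3
print(a - b)  # exactly 0.0: the two pixels have collapsed onto each other
```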
Perturbation Theory
Instead of computing each pixel independently, we compute one reference orbit Zₙ at high precision. Every other pixel is expressed as a small perturbation δ from that reference.
The key insight: δ stays small even as Z grows large. We only need high precision for one orbit—the rest fit in standard floats.
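The bookkeeping is easy to check numerically. A minimal Python sketch, with doubles standing in for the high-precision reference and a hypothetical parameter c, iterating the recurrence δₙ₊₁ = 2·Zₙ·δₙ + δₙ² alongside a directly computed pixel orbit:

```python
# Reference orbit Z_n in ordinary doubles (standing in for high precision),
# plus one pixel at z_0 = Z_0 + delta_0, iterated under the same c.
c = -0.4 + 0.3j       # hypothetical parameter with a bounded orbit
Z = 0.0 + 0.0j        # reference
z = 1e-9 + 0.0j       # nearby pixel
delta = z - Z

for _ in range(25):
    delta = 2 * Z * delta + delta * delta  # perturbation recurrence
    Z = Z * Z + c                          # reference orbit
    z = z * z + c                          # direct iteration, for comparison

print(abs(delta))            # still tiny
print(abs((Z + delta) - z))  # the perturbation reconstructs the true orbit
```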
z="text-sky">n</sub> = Z="text-sky">n</sub> + δ="text-sky">n</sub> δ="text-sky">n</sub>+1 = 2·Z="text-sky">n</sub>·δ="text-sky">n</sub> + δ="text-sky">n</sub>²The Rescaling Trick
At zoom 10^50, δ₀ is around 10^-50. Standard floats underflow at 10^-38. We can't even represent the starting point.
The Problem
Initial perturbations underflow before iteration even begins.
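Both the underflow and the 2^k rescue are easy to verify. A Python sketch using `struct` to round-trip through 32-bit storage (the exponent 166 is an assumed value, roughly log₂ 10⁵⁰):

```python
import struct

def to_f32(x):
    # Round-trip a value through 32-bit float storage
    return struct.unpack('f', struct.pack('f', x))[0]

delta0 = 1e-50
print(to_f32(delta0))  # 0.0: flushed to zero, the pixel's offset is gone

k = 166  # roughly log2(10**50)
print(to_f32(delta0 * 2.0**k))  # ~0.935: safely back in normal float range
```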
1// At zoom "text-rose">10^"text-rose">502"text-accent">let delta0 = "text-rose">1e-50 // Underflows!3// "text-sky">Float min is ~"text-rose">1e-38The Solution
Multiply by 2^k to bring δ back into normal range.
1// Rescale to normal range2"text-accent">let k = ceil(log2(zoom)) + "text-rose">503"text-accent">let deltaPrime = delta0 * pow("text-rose">2, k)4// deltaPrime ≈ "text-rose">1.0 ✓The Beautiful Part
After rescaling, the quadratic term (δ′ₙ)²/2^k becomes negligible. The recurrence simplifies to a linear formula:
δ'="text-sky">n</sub>+1 = 2·Z="text-sky">n</sub>·δ'="text-sky">n</sub>A single complex multiply per iteration. The GPU loves this.
BLA: Skipping Iterations
Even with perturbation theory, iterating 100,000 times per pixel is expensive. BLA (Bivariate Linear Approximation) collapses multiple iterations into one.
For small δ, many iterations act as a single linear transformation. We precompute skip coefficients and apply them when δ is small enough.
Result: 100,000 iterations → under 1,000. A 100× speedup from math alone.
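The core composition step can be sketched in a few lines. Ignoring the quadratic term, a run of consecutive linear steps collapses into one multiplier A = ∏ 2Zₙ, precomputable from the reference orbit alone. A full BLA also tracks the range of δ for which the approximation is valid, which this hypothetical Python sketch omits:

```python
# Reference orbit (hypothetical parameter)
c = -0.4 + 0.3j
orbit = []
Z = c
for _ in range(16):
    orbit.append(Z)
    Z = Z * Z + c

# Precompute one skip coefficient for a 16-iteration block
A = 1.0 + 0.0j
for Zn in orbit:
    A *= 2 * Zn            # compose 16 linear steps into one multiplier

# Apply it in a single multiply, and compare against stepping through
delta = 1e-12 + 0.0j
stepwise = delta
for Zn in orbit:
    stepwise = 2 * Zn * stepwise

print(abs(A * delta - stepwise))  # agree up to rounding error
```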
Putting It Together
1"text-accent">func iterateRescaled(2 delta0: "text-sky">Complex,3 referenceOrbit: ["text-sky">Complex],4 k: "text-sky">Int5) -> "text-sky">Int {6 "text-accent">var deltaPrime = delta0 * pow("text-rose">2.0, "text-sky">Double(k))7 8 "text-accent">for n "text-accent">in "text-rose">0..<referenceOrbit.count {9 // Linear perturbation step10 deltaPrime = "text-rose">2 * referenceOrbit[n] * deltaPrime11 12 // Escape check with scale correction13 "text-accent">let Zn = referenceOrbit[n]14 "text-accent">let correction = "text-rose">2 * dot(Zn, deltaPrime) * pow("text-rose">2.0, "text-sky">Double(-k))15 16 "text-accent">if Zn.magnitudeSquared + correction > "text-rose">4.0 {17 "text-accent">return n18 }19 }20 "text-accent">return maxIterations21}Self-Similarity Wrapping
Perturbation handles zoom to ~10^300. Beyond that, even the reference orbit computation becomes intractable.
But near repelling periodic points, the fractal contains exact copies of itself. When we've zoomed by factor |λ| (the multiplier), the structure repeats.
Reset coordinates and continue. Chain precision layers to infinity.
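For intuition about the multiplier: a fixed point of z² + c solves z² + c = z, and its multiplier is λ = 2z. A hedged Python sketch with a hypothetical c:

```python
import cmath

# Fixed points of z^2 + c solve z^2 + c = z; the multiplier there is 2z.
c = -0.4 + 0.3j                          # hypothetical parameter
z_rep = (1 + cmath.sqrt(1 - 4 * c)) / 2  # the repelling fixed point
lam = 2 * z_rep                          # multiplier λ

print(abs(z_rep * z_rep + c - z_rep))  # ~0: genuinely fixed
print(abs(lam))                        # > 1: repelling, the zoom-repeat factor
```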
Why Apple Silicon
Unified memory eliminates CPU-GPU data transfer. Reference orbits computed on CPU are instantly available to GPU shaders—zero copy, zero latency.
The M4 Max delivers 546 GB/s memory bandwidth. Our 1.6 MB reference orbit transfers in ~3 microseconds.
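That figure is straightforward arithmetic on the numbers above:

```python
orbit_bytes = 1.6e6   # 1.6 MB reference orbit
bandwidth = 546e9     # 546 GB/s (M4 Max, per the text)
print(orbit_bytes / bandwidth)  # ≈ 2.9e-6 seconds, i.e. ~3 microseconds
```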
Metal Compute
Complex multiplication maps perfectly to SIMD operations. Each thread processes one pixel independently.
1// "text-sky">Complex multiplication: (a+bi)(c+di)2"text-accent">inline "text-sky">float2 cmul("text-sky">float2 a, "text-sky">float2 b) {3 "text-accent">return "text-sky">float2(4 a.x * b.x - a.y * b.y,5 a.x * b.y + a.y * b.x6 );7}8 9// "text-sky">Complex square: z² = (x²-y²) + 2xyi10"text-accent">inline "text-sky">float2 csqr("text-sky">float2 z) {11 "text-accent">return "text-sky">float2(12 z.x * z.x - z.y * z.y,13 "text-rose">2.0 * z.x * z.y14 );15}16 17"text-accent">kernel "text-accent">void perturbedIterate(18 "text-accent">device "text-sky">float2* deltas [[buffer("text-rose">0)]],19 "text-accent">constant "text-sky">float2* orbit [[buffer("text-rose">1)]],20 "text-accent">device "text-sky">uint* results [[buffer("text-rose">2)]],21 "text-sky">uint id [[thread_position_in_grid]]22) {23 "text-sky">float2 delta = deltas[id];24 25 "text-accent">for ("text-sky">uint n = "text-rose">0; n < maxIter; n++) {26 delta = cmul("text-sky">float2("text-rose">2.0, "text-rose">0.0), cmul(orbit[n], delta));27 28 "text-accent">if (dot(delta, delta) > "text-rose">4.0) {29 results[id] = n;30 "text-accent">return;31 }32 }33 results[id] = maxIter;34}The Result
10^10,000,000 zoom would need 33,200,000 bits of mantissa using naive arithmetic. Our implementation does it with 32-bit floats and clever coordinate transformations.
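The bit count is just log₂(10) bits per decimal digit of magnification:

```python
import math

digits = 10_000_000            # zoom is 10^10,000,000
bits = digits * math.log2(10)  # mantissa bits naive arithmetic would need
print(bits)                    # ≈ 3.32e7, the 33,200,000 bits quoted above
```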
The precision wall is real. We didn't break it. We found a door through.
Sources & Further Reading
- Douady & Hubbard, Orsay Notes (1984-85)
- Milnor, Dynamics in One Complex Variable
- Tan Lei, Self-similarity theorem (1990)
- Claude Heiland-Allen, mathr.co.uk
- Kalles Fraktaler (reference implementation)
- Philip Turner, metal-float64