Pi in the Mandelbrot set and computational limits. Part 2: An elusive pi appears!
In part 1 we met the Mandelbrot set:
*
*
* ***
********** *
* ************
*************
***********
******* *
* * ** ** ********************* ***
***** ******************************* *
******************************************* ******
***********************************************
*************************************************
* ******************************************************
*******************************************************
******************************************************* **
*************************************************************
** * * ************************************************************
***** * **** * ************************************************************
****************** ************************************************************** *
********************** *************************************************************
************************* **************************************************************
*************************** ************************************************************
* * ****************************************************************************************
**********************************************************************************************
****************************************************************************************************************************
**********************************************************************************************
* * ****************************************************************************************
*************************** ************************************************************
************************* **************************************************************
********************** *************************************************************
****************** ************************************************************** *
***** * **** * ************************************************************
** * * ************************************************************
*************************************************************
******************************************************* **
*******************************************************
* ******************************************************
*************************************************
***********************************************
******************************************* ******
***** ******************************* *
* * ** ** ********************* ***
******* *
***********
*************
* ************
********** *
* ***
*
*
It’s a bit blurry looking at it through the console, but even so, it’s clear the boundary of the Mandelbrot set isn’t a simple shape. And its complexity isn’t fixed at what we see here. Unlike zooming into a bitmap image, as we magnify some portion of the Mandelbrot set boundary, we never find its “pixels”. There are no indivisible, fundamental parts, no atoms, that are ultimately revealed given a sufficiently powerful microscope. There’s just always more complexity. Including eerie echoes of the whole set that keep popping up ever deeper within it.
Let’s write some code to see one of these mini-Mandelbrots.
If you didn’t code along with part 1, you can get an Xcode project with everything you need at https://github.com/toddthomas/mandelplot0. It’s a command line project standing in for a playground, so we can use the Swift Numerics package to work with complex numbers. It’s not a nice command line program—in a future post I’ll show how to make one of those using Swift Argument Parser. For now, we’re just typing code into main.swift, then hitting cmd-R to build and run and see the results.
Add this to the end of main.swift:
extension ComplexPlotPoints {
init(
centeredOn center: SomeComplex,
width: RealType,
height: RealType,
horizontalPointCount: Int,
verticalPointCount: Int
) {
self.horizontalPointCount = horizontalPointCount
self.verticalPointCount = verticalPointCount
let halfWidth = width / RealType(2)
let halfHeight = height / RealType(2)
self.upperLeft = SomeComplex(center.real - halfWidth, center.imaginary + halfHeight)
self.lowerRight = SomeComplex(center.real + halfWidth, center.imaginary - halfHeight)
verticalStep = (upperLeft.imaginary - lowerRight.imaginary) / RealType(verticalPointCount - 1)
}
}
This initializer lets us choose the center point, width, and height of the plot we’d like to make. We can zoom in to an interesting point near the set boundary by making several plots which have the same center, but ever-smaller width and height.
Let’s also wrap up our Mandelbrot-plotting logic into a handy function.
func mandelPlot<RealType>(over points: ComplexPlotPoints<RealType>, maxIterations: Int) -> [[Character]] {
points.map { row in
row.map { point in
plotValue(for: IteratedFuncValues(f: f(c: point), iterations: maxIterations).bounded)
}
}
}
Now let’s zoom in! We’ll choose the point \(c \approx -0.1586 - 1.0330i\). This is below the bottommost bulb in the plot above, an area which appears to be blank. But as we zoom in, we’ll see a cluster of set points there, connected to the main body by a thin, forking stream, which takes on an increasingly familiar shape.
for size in [2.5, 1.0, 0.35, 0.2, 0.1, 0.0325] {
let centeredMandelPoints = ComplexPlotPoints(centeredOn: Complex(-0.15861105466703304, -1.0329522841161523), width: size, height: size, horizontalPointCount: 110, verticalPointCount: 55)
let centeredMandelPlot = mandelPlot(over: centeredMandelPoints, maxIterations: 30)
print("zoom and enhance...")
for row in centeredMandelPlot {
print(String(row))
}
}
Build and run.
*************** ***************************************************
** ***************** *************************************************
******************** *************************************************
* **********************************************************************
**************************************************************************
*************************************************************************
*************************************************************************
**********************************************************************
******************** **************************************************
* *************** ************************************************ *
************* * *************************************************
* *************************************************
*************************************************
********************************************
********************************************
* **************************************
***************************************
**** *************************** *****
* ************************
* ** ***** *
*********
**********
**********
* ** *
*
*************************************************************************************************
*********************************************************************************** ************
* ********** ********************************************************************* ***********
********* ************************************************************** ** * ****
********* * ** ********************************************************** *
* ** ** **** ************************************************* ** *****
** ** **** *** ******************************** *** * **
* * * ** * ************* * ** *
* * ******************** *
**********************
***************************
************************
* *************************
* ***********************
************************
*********************
** * ************* * *
* * * ******
********
*********
*
*
*
**
* ********************* * *
* ** *********************
* * ********************
********************* *
***************** *** *
*** ** ******** *
* * ***********
** * ***
* *
**
*** *
**
*
**
* *
*
** **
*
*
*
***
*****
*****
*
*
*
* *
***
*
* * *
**
* * ** *
*** ***** *
*
** * *
** *
***
* * *
*
*
* * *
*******
*******
********
****
*
* * *
* *
***** *
********
* **
* * *
*
*
*
**
*
**
* *
* *
*
*
*** * ** * * *
*********
************
* **************
***************
*****************
* ****************
* ****************
******* * *
*
*
*
*
That looks familiar…
* **
*
**** *
** * *
**
* *
* *
*
** *
* * *
* * *
* * * * * * ** * **
* *********** * * ****** * * * *
******************************
******************************* * * *
******************************** * *
******************************** * *
* ************************************ *
********************************** **
*************************************
*************************************
* * ******************************************
* * ***************************************** *
****************************************** *
******************************************** *
********************************************
***********************************************
****************************************************
* **************************************************
* * *************************************************** * *** *
****************************************************
**************************************************
**************************************************
*************************************************
**************************************************
************************ ** ****** *********
************************ * ***
* ******** * ** ****** * *
* ***** * * **
****
* * *
* * *
* *
* * *
* *
* *
*
*
*
There’s a rotated version of our insectoid friend here! And there are many, many more to be found as you zoom in near the borders of the Mandelbrot set.
Do these mini-Mandelbrots really belong to the set, or are they just bugs in our plotting code? Let’s look at the sequence of function values for the center point of our zoomed plots.
let zoomPoint = Complex(-0.15861105466703304, -1.0329522841161523)
var zoomPointSeq = IteratedFuncValues(f: f(c: zoomPoint), iterations: 30)
print(zoomPointSeq.bounded)
// > true
print(zoomPointSeq.suffix(10).map(\.length).map(\.description).joined(separator: "\n"))
// > 1.0448770445654016
// > 1.392467105447006
// > 1.0256243556944444
// > 0.018151298490781582
// > 1.0448714135545816
// > 1.3924511828173076
// > 1.0255814379986634
// > 0.018140162937738652
// > 1.044874357549279
// > 1.3924576705430893
None of these values’ lengths is greater than \(2\), and there seems to be a repeating pattern of roughly
\[\sim1.0449, \sim1.3925, \sim1.0256, \sim0.0182, ...\]
Let’s try ten times more iterations to see if it settles down to repeat the same values, at least to the precision of a Double.
zoomPointSeq.iterations = 300
print(zoomPointSeq.suffix(10).map(\.length).map(\.description).joined(separator: "\n"))
// > 1.0255949530922306
// > 0.01814854124765523
// > 1.044873390224647
// > 1.3924558668754305
// > 1.0255949530922306
// > 0.018148541247655997
// > 1.044873390224647
// > 1.3924558668754308
// > 1.0255949530922306
// > 0.01814854124765523
Still no repeating pattern of four values, as it seemed like there might be, but there is a repeating pattern of eight.
let lastSixteen = zoomPointSeq.suffix(16).map(\.length)
print(lastSixteen[0..<8])
print(lastSixteen[8..<16])
// > [1.044873390224647, 1.3924558668754305, 1.0255949530922306, 0.018148541247655997, 1.044873390224647, 1.3924558668754308, 1.0255949530922306, 0.01814854124765523]
// > [1.044873390224647, 1.3924558668754305, 1.0255949530922306, 0.018148541247655997, 1.044873390224647, 1.3924558668754308, 1.0255949530922306, 0.01814854124765523]
The mini-Mandelbrot we plotted isn’t an artifact of a bug. It’s made out of genuine Mandelbrot set points.
We’ve taken plotting in the console about as far as it can go. In future posts I’d like to show how to make a high-resolution color Mandelbrot plotter, but until then, check out this great open-source one. For example, here’s a razor-sharp, colorful version of the penultimate frame of our crude console zoom.^{1} You can clearly see another mini-Mandelbrot in the upper right corner, where our console plot had only an intriguing blob. And there are at least two more micro-Mandelbrots along the forked stream connecting back to the main body of the set.
I encourage you to explore the set further. For example, check out seahorse valley, in the gap between the insectoid’s head and body: dozens of mini-Mandelbrots line the walls!
Finding treasure at the bottom of seahorse valley
The rich complexity of the Mandelbrot set border has yielded surprising discoveries. Dave Boll found one down at the bottom of seahorse valley. He was “trying to show that the ‘neck’ of the mandelbrot set at (-.75,0) is actually of zero thickness.”^{2}
As Boll mentions, the “neck” between the head and the body is at \(-\frac{3}{4} + 0i\) in the complex plane, which is the same as \(-\frac{3}{4}\) on the real number line. We know from our experiments in part 1 that
\[f(z) = z^2 + c\]
is bounded for \(c = -\frac{3}{4}\). Boll was proposing that even infinitesimally above and below that point in the complex plane, \(f\) is not bounded. In other words, he thought
\[c = -\frac{3}{4} \pm \epsilon i\]
with even the tiniest \(\epsilon\) you can imagine, causes \(f\) to blow up.
To demonstrate this, he wrote a program—much like we’ve been doing.^{3} He found something mind-blowing, and we can too.
We’ll test the boundedness of points in a vertical line dropping straight down seahorse valley to the neck of the insectoid. We’ll start relatively high above, at \(-\frac{3}{4} + i\) (i.e., \(\epsilon = 1\)), then decrease \(\epsilon\) by several orders of magnitude, checking the boundedness of \(f\) at each step.
How many iterations should we use? We’ve seen that as we zoom in close to the border, it can take hundreds or thousands of iterations to determine boundedness. Let’s try a thousand.
extension IteratedFuncValues where Number: Lengthy {
var boundednessDescription: String {
bounded ? "bounded" : "blows up"
}
}
let neck = Complex(-0.75, 0)
let epsilons = [1.0, 0.1, 0.01, 0.001, 0.0001, 1e-5, 1e-6, 1e-7]
let cees = epsilons.map {
neck + Complex(0, $0)
}
for c in cees {
let description = IteratedFuncValues(f: f(c: c), iterations: 1000).boundednessDescription
print("c = \(c): \(description)")
}
// > c = (-0.75, 1.0): blows up
// > c = (-0.75, 0.1): blows up
// > c = (-0.75, 0.01): blows up
// > c = (-0.75, 0.001): bounded
// > c = (-0.75, 0.0001): bounded
// > c = (-0.75, 1e-05): bounded
// > c = (-0.75, 1e-06): bounded
// > c = (-0.75, 1e-07): bounded
This is not what Boll expected to see. It looks like \(f\) blows up for the first three values of \(c\), but then is bounded as we get closer to the neck point. Perhaps, as we get closer, the sequences are increasing ever more slowly, and take more than a thousand iterations to exceed \(2\). Let’s write some code to check that theory. We’ll add some computed properties to IteratedFuncValues, and a convenient function which uses them, so that it’s easy to print out how many iterations it takes for \(f\) to blow up.
extension Lengthy {
var lengthGreaterThanTwo: Bool {
!lengthTwoOrLess
}
}
extension IteratedFuncValues where Number: Lengthy {
var blowupIndex: Int? {
if let (iterations, _) = enumerated().first(where: { $1.lengthGreaterThanTwo }) {
return iterations + 1 // `enumerated()` yields zero-based indices.
} else {
return nil
}
}
var behaviorDescription: String {
if let blowupPoint = blowupIndex {
return "blows up after \(blowupPoint) iterations"
} else {
return "bounded after \(iterations) iterations"
}
}
}
func printBehavior<Number: Lengthy>(for c: Number, iterations: Int, terminator: String = "\n") {
let behavior = IteratedFuncValues(f: f(c: c), iterations: iterations).behaviorDescription
print("c = \(c): \(behavior)", terminator: terminator)
}
for c in cees {
printBehavior(for: c, iterations: 1000)
}
// > c = (-0.75, 1.0): blows up after 3 iterations
// > c = (-0.75, 0.1): blows up after 33 iterations
// > c = (-0.75, 0.01): blows up after 315 iterations
// > c = (-0.75, 0.001): bounded after 1000 iterations
// > c = (-0.75, 0.0001): bounded after 1000 iterations
// > c = (-0.75, 1e-05): bounded after 1000 iterations
// > c = (-0.75, 1e-06): bounded after 1000 iterations
// > c = (-0.75, 1e-07): bounded after 1000 iterations
It seems to be taking roughly ten times as many iterations for \(f\) to blow up each time \(\epsilon\) is decreased by an order of magnitude. And each new iteration count is slightly larger than \(3\) times a power of ten. In fact the iteration count looks to be related to \(\epsilon\) by
\[iterations \approx 3 \times \frac{1}{\epsilon}\]
If the pattern holds, the next iteration count after \(315\) will be something larger than \(3,000\), the one after that something larger than \(30,000\), and so on. Let’s codify this hunch into a function, and test it. BinaryFloatingPoint is the protocol that provides initializers that take floating-point values as arguments. We need them to convert the literal 3.3 to whatever concrete floating-point type T is.
func enoughIterations<T: BinaryFloatingPoint>(for epsilon: T) -> Int {
Int((T(3.3) / epsilon).rounded(.up))
}
for (e, c) in epsilons.map({ ($0, neck + Complex(0, $0)) }) {
printBehavior(for: c, iterations: enoughIterations(for: e))
}
// > c = (-0.75, 1.0): blows up after 3 iterations
// > c = (-0.75, 0.1): blows up after 33 iterations
// > c = (-0.75, 0.01): blows up after 315 iterations
// > c = (-0.75, 0.001): blows up after 3143 iterations
// > c = (-0.75, 0.0001): blows up after 31417 iterations
// > c = (-0.75, 1e-05): blows up after 314160 iterations
// > c = (-0.75, 1e-06): blows up after 3141593 iterations
// > c = (-0.75, 1e-07): blows up after 31415927 iterations
Cool! They all eventually blew up, just as Boll theorized. The pattern held: each subsequent iteration count is just about ten times larger, as each \(\epsilon\) is one-tenth smaller. And you probably noticed that each takes about ten times longer to compute as well. As we go on, feel free to comment out previous code that takes a long time to run, like
//for (e, c) in epsilons.map({ ($0, neck + Complex(0, $0)) }) {
// printBehavior(for: c, iterations: enoughIterations(for: e))
//}
But wait a minute…there’s another pattern here! Let’s look at \(iterations \times \epsilon\) for each value of \(\epsilon\).
let 🤯: [Double] = epsilons.map { e in
let iterations = IteratedFuncValues(
f: f(c: neck + Complex(0, e)),
iterations: enoughIterations(for: e)
).blowupIndex! // Forced unwrap is safe here: we already know `f` blows up for all these `c`.
return Double(iterations) * e
}
print(🤯.map(\.description).joined(separator: "\n"))
// > 3.0
// > 3.3000000000000003
// > 3.15
// > 3.1430000000000002
// > 3.1417
// > 3.1416000000000004
// > 3.141593
// > 3.1415927
print(Double.pi)
// > 3.141592653589793
\(\pi\) has emerged from the Mandelbrot set. How very strange and wonderful!
We inadvertently computed seven digits of \(\pi\). Can we get eight? The iterations for \(\epsilon = 10^{-7}\) took a noticeable amount of time. Let’s print how long each set takes so we can estimate when to expect the result for \(\epsilon = 10^{-8}\).
func benchmark(block: () -> ()) -> CFAbsoluteTime {
let start = CFAbsoluteTimeGetCurrent()
block()
let end = CFAbsoluteTimeGetCurrent()
return end - start
}
func printBenchmarkedBehavior<Number: Lengthy>(for c: Number, iterations: Int) {
let time = benchmark {
printBehavior(for: c, iterations: iterations, terminator: "")
}
print(" (in \(time) secs)")
}
let moreEpsilons = epsilons + [1e-8]
for (e, c) in moreEpsilons.map({ ($0, neck + Complex(0, $0)) }) {
printBenchmarkedBehavior(for: c, iterations: enoughIterations(for: e))
}
// > c = (-0.75, 1.0): blows up after 3 iterations (in 1.990795135498047e-05 secs)
// > c = (-0.75, 0.1): blows up after 33 iterations (in 2.7894973754882812e-05 secs)
// > c = (-0.75, 0.01): blows up after 315 iterations (in 0.00022304058074951172 secs)
// > c = (-0.75, 0.001): blows up after 3143 iterations (in 0.0019290447235107422 secs)
// > c = (-0.75, 0.0001): blows up after 31417 iterations (in 0.01805102825164795 secs)
// > c = (-0.75, 1e-05): blows up after 314160 iterations (in 0.1806180477142334 secs)
// > c = (-0.75, 1e-06): blows up after 3141593 iterations (in 1.7576509714126587 secs)
// > c = (-0.75, 1e-07): blows up after 31415927 iterations (in 17.330674052238464 secs)
// > c = (-0.75, 1e-08): bounded after 330000000 iterations (in 182.17747807502747 secs)
The iterations for \(\epsilon = 10^{-8}\) took about ten times longer than for \(\epsilon = 10^{-7}\), as we expected. But we didn’t get an eighth digit of \(\pi\). Instead, as surprisingly as it appeared, \(\pi\) has vanished. We performed more than enough iterations, but \(f\) didn’t blow up. What’s going on?
Let’s peek at what’s happening when the iteration count is around \(\pi\).
let aLittlePastPi = 314159268
let closeToNeckValues = IteratedFuncValues(
f: f(c: neck + Complex(0, 1e-8)),
iterations: aLittlePastPi
)
for (i, value) in closeToNeckValues.enumerated().suffix(7) {
print("iteration \(i + 1): \(value) = \(value.length)")
}
// > iteration 314159262: (-0.9143789121143432, -0.25162107125167865) = 0.9483680500824836
// > iteration 314159263: (0.022775631421567466, 0.4601540127923111) = 0.46071731558029977
// > iteration 314159264: (-0.9612229861022153, 0.020960606385025837) = 0.9614514943725906
// > iteration 314159265: (0.17351028199123164, -0.040295623319855395) = 0.178127918124623
// > iteration 314159266: (-0.721517919302059, -0.013983399930481118) = 0.7216534094339111
// > iteration 314159267: (-0.22960742759964325, 0.020178557245218588) = 0.23049239679743014
// > iteration 314159268: (-0.6976876033635732, -0.009266283243493567) = 0.697749135357655
The lengths are all much less than \(2\), so \(f\) doesn’t seem to be in danger of blowing up around \(314159265\) iterations.
Maybe this is just how it is. Why should we expect to be able to accurately compute more and more digits of \(\pi\) in this completely unexpected way? To have even its first seven digits appear as a side effect of iterating the decidedly non-transcendental function
\[f(z) = z^2 - \frac{3}{4} + \epsilon i\]
for increasingly small \(\epsilon\) is mind-boggling enough. Perhaps whatever’s happening here mathematically (which, to be clear, I don’t know) causes the iteration counts to approach, but then recede from, the digits of \(\pi\). Perhaps we should just appreciate this fleeting moment, like a chance to see a rare bird blown off course into a suburban back yard.
Nice. Those lovely seven digits of \(\pi\), arising so unexpectedly from the astounding complexity at the border of the Mandelbrot set. Just take a moment to savor them.
Or, maybe we can dare to hope for more. Though nothing seems obviously wrong with the function values we examined, do we know what we should expect to see after over three hundred million iterations? Let’s start from the beginning and look for any evidence of the computations going awry. We can compute some iterations by hand so we know what to expect.
\[\begin{align} f(0 + 0i) &= (0 + 0i)^2 + (-0.75 + 10^{-8}i) = -0.75 + 10^{-8}i \\ f(-0.75 + 10^{-8}i) &= (-0.75 + 10^{-8}i)^2 - 0.75 + 10^{-8}i \\ &= 0.5625 - 2 \times 0.75 \times 10^{-8}i + 10^{-16}i^2 - 0.75 + 10^{-8}i \\ &= 0.5625 - 0.75 - 10^{-16} - 1.5\times10^{-8}i + 10^{-8}i \\ &= -0.1875 - 10^{-16} - 0.5\times10^{-8}i \\ &= -0.1875000000000001 - 5\times10^{-9}i \end{align}\]
Whew, more tedious FOILing. Let’s compare our handiwork with the computer’s.
print(closeToNeckValues.prefix(2).map(\.description).joined(separator: "\n"))
// > (-0.75, 1e-08)
// > (-0.1875000000000001, -5.000000000000002e-09)
Interesting: on the second iteration, there’s a little error in the imaginary part: a \(2\) in the fifteenth decimal place.
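We can reproduce that wobble without any of our Mandelbrot machinery. Here’s a sketch of the second iteration using plain Doubles, mirroring the component-wise arithmetic a complex multiply performs (the exact value of the last digit can depend on the order of operations, so treat the printed tails as illustrative):

```swift
// Square c = -0.75 + 1e-8i component-wise, then add c back.
let (a, b) = (-0.75, 1e-8)
let real = a * a - b * b + a   // with exact arithmetic: -0.1875 - 1e-16
let imag = a * b + b * a + b   // with exact arithmetic: -0.5e-8
print(real)
print(imag)
// `real` lands a hair away from -0.1875: the tiny 1e-16 term can't
// survive addition to 0.5625 unchanged in Double's ~16 digits.
```

The point is that the result differs from the exact answer only in the last representable digit, which is precisely the kind of error we saw above.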
We’ve been making the computer multiply and add and subtract floating-point numbers a lot, but we haven’t really talked about how they work and what limitations they might have. Or maybe you’re a floating-point expert, and you’ve been telling me about these things for a while now. Unfortunately, in this very inferior sort of pair programming we’re doing, I can’t hear anything you’re saying.
I could use a review of floating-point basics. If you are a floating-point expert, feel free to skip ahead. Or follow along and let me know what I get wrong!
In fact, let me be completely honest with you. Many of the floating-point details we’re about to “review”, I learned just now, while writing this. So please allow me to cement these learnings by reciting them to you.
Even though Int and Double each use 64 bits to store a value, Double values can take on much larger magnitudes, as well as have a fractional part, which Ints can’t handle at all. Clearly they are using those bits differently!
Int, and other integer types like UInt8 and Int32, store every binary digit (every binary digit, so literally every “bit”!) of the numbers they represent. Int is one of the signed integer types that devote their most significant bit to record whether a value is positive or negative, so it has 63 bits remaining to record the magnitude. Consequently, the largest magnitude an Int can store is
\[2^{63} = 9{,}223{,}372{,}036{,}854{,}775{,}808\]
That’s actually the magnitude of the least negative Int. Zero steals one of the possible positive values, because it’s convenient^{4} for zero to be represented by a string of all 0s, and for negative values to have the sign bit set to 1. So the greatest positive number that fits in an Int is
\[2^{63} - 1 = 9{,}223{,}372{,}036{,}854{,}775{,}807\]
Let’s test this in Swift. Create a new playground (cmd-opt-shift-N)—we don’t need any of the code we’ve written for drawing the Mandelbrot set or inadvertently computing \(\pi\).
print(Int.max)
// > 9223372036854775807
print(Int.min)
// > -9223372036854775808
Because an Int records every one of those 63 binary digits, it can also represent every integer between max and min exactly. This is great for precise counts, where every digit matters—for example, when counting the number of iterations it takes for \(f\) to blow up, to check if its digits are digits of \(\pi\)!
However, we’ve seen that we can easily compute values that are much greater than \(2^{63} - 1\). Iteratively squaring even small integers gets us there real fast^{5}.
let squares = sequence(first: 2) { $0 * $0 }
print(
squares.prefix(6).enumerated().map {
"\($1) = 2^\(pow(2,$0))"
}.joined(separator: "\n")
)
print("\(Int.max) = `Int.max`")
// > 2 = 2^1
// > 4 = 2^2
// > 16 = 2^4
// > 256 = 2^8
// > 65536 = 2^16
// > 4294967296 = 2^32
// > 9223372036854775807 = `Int.max`
We can only iteratively square \(2\) five times before we exceed the capacity of an Int. By the way, we’re using the Standard Library’s sequence(first:next:) function to generate this Sequence of function results, since we don’t need the boundedness-checking capabilities we added to IteratedFuncValues.
So, it would be nice to be able to work with larger numbers. But we don’t have unlimited memory, or unlimited transistors in our CPUs. Using 128- or 256-bit integer types would require more of both resources.
Besides, when numbers get really big or really small, we often don’t care about—or can’t accurately measure—every digit. In many cases it’s enough to know that some quantity is “9 quintillion” or “9.22 quintillion” or “9.22 quintillionths”.
For example, our best estimate of the distance to Proxima Centauri, the nearest star to our sun, is \(1.3012 \pm 0.0003\) parsecs^{6}, or about \(4.0152 \times 10^{16}\) meters. That distance in meters is a seventeen-digit number, of which we’re pretty confident we know the first three. And that’s an astounding accomplishment of humanity, requiring thousands of years of advancement in theory and practice, including the invention of rocketry, orbital spacecraft, and computers!
Scientific notation was invented for cases like these. If we know several digits of a decimal number with many leading or trailing zeros, we can rescale the number such that there’s one digit to the left of the decimal point, and multiply it by a power of ten to get the decimal point back where it needs to be. Instead of memorizing and saying and typing all the digits of Int.max, it’s more efficient, and in many cases just as effective, to remember, speak, and type something like
\[9.22 \times 10^{18}\]
The compactness of this form is exactly how floating-point numbers achieve their efficient representation of very large, very small, and fractional values. Okay, there’s one difference: since they live in a computer, they use a binary number and a power of two.
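Swift actually exposes this “binary scientific notation” directly: every floating-point value has a significand and an exponent, and the value is the sign times significand times \(2^{exponent}\). A quick sketch:

```swift
// 6.0 in binary scientific notation is 1.5 × 2^2.
let x = 6.0
print(x.significand)   // 1.5
print(x.exponent)      // 2
// Rebuilding the value from its parts gives back exactly 6.0:
let rebuilt = Double(sign: x.sign, exponent: x.exponent, significand: x.significand)
print(rebuilt == x)    // true
```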
As always, in practice it’s slightly more complicated. A Swift Double implements the 64-bit floating-point format defined by the IEEE 754 standard. That means it encodes a real number \(R\) in the form
\[R = (-1)^S \times 1.F \times 2^{E - 1023}\]
where \(S\) is a sign bit, \(E\) is an 11-bit exponent, and \(F\) is a 52-bit significand. The bits are laid out in that order as well: the most significant bit is the sign bit, followed by the exponent bits, followed by the significand bits.
So it is like scientific notation using a power of two, except for two weird differences you may have noticed in the equation.
First, the value stored in the 11-bit exponent field is the actual exponent plus \(1023\). This converts the signed actual exponent to an unsigned value. Since the significand is also unsigned, that means hardware or software can determine if one Double is larger or smaller than another the same way as for signed integers.
Second, the 52 bits of the significand are (usually^{7}) considered to be the fractional part of a binary number whose integer part is \(1\). Just like any decimal number, any binary number can be rescaled to have only its most significant digit to the left of the decimal point. Unlike a decimal number, the most significant binary digit will always be 1. Thus, for a binary number normalized in this way, the leading bit can be assumed and therefore discarded, giving the significand an extra bit of precision for free.
Let’s try building a few Doubles by hand.^{8}
Powers of two are easiest: all that’s needed is the exponent, since the significand will be just the assumed 1.
For example, Int.min is \(-2^{63}\). It’s negative, so we set the sign bit to 1. We set the exponent to \(63 + 1023 = 1086\) or 0b10000111110, all significand bits to 0 (again, the 1. is assumed), and we’re done. Let’s check our work.
let intMin = Double(
sign: .minus,
exponentBitPattern: 0b10000111110,
significandBitPattern: 0b0
)
print(String(format: "%.18e", intMin))
// > -9.223372036854775808e+18
Note that all nineteen digits are present and correct. This is a special case for powers of two. A Double typically has \(\log_{10}(2^{53}) \approx 16\) decimal digits of precision.
Fractional powers of two are easy as well. For \(2^{-2} = \frac{1}{4}\), we set the exponent to \(-2 + 1023 = 1021\) or 0b01111111101.
let oneQuarter = Double(
sign: .plus,
exponentBitPattern: 0b01111111101,
significandBitPattern: 0b0
)
print(String(format: "%.18f", oneQuarter))
// > 0.250000000000000000
We could change the format specifier to print as many decimal places as we like, and we’d see zeros all the way to the end.
What about our compact approximation of Int.max: \(9.22 \times 10^{18}\)? First, we find the nearest power of two less than the value we’re encoding. We know \(2^{63}\) is larger, because we obtained our approximation by zeroing out a bunch of digits of \(2^{63}\). So we’ll use \(2^{62}\) as our power of two, which we accomplish by setting the exponent field to \(62 + 1023 = 1085\) or 0b10000111101. The significand will be
\[1.999268806063270176309742964804172515869140625\]
exactly. I computed it in Mathematica using
f = 9220000000000000000/2^62; SetPrecision[f, Length[RealDigits[f][[1]]]]
That’s a bit unwieldy, but it’s the best way I could find to say, “give me the exact decimal expansion of this fraction”, given that Mathematica is very reluctant to put a decimal point in an exact number. There are no decimal points in the numerator or denominator of the quotient assigned to f, so it has infinite precision.
Precision[f]
// > \[Infinity]
The precision is then set to the length of the digits in the exact real number corresponding to the quotient.
So we need the binary form of
\[1.999268806063270176309742964804172515869140625\]except the leading \(1\) is assumed, so we really need the binary form of
\[0.999268806063270176309742964804172515869140625\]I used Mathematica for this, too.
BaseForm[0.999268806063270176309742964804172515869140625`46., 2]
// > 0.11111111110100000001010010011001111101000110100000000000000000000000\
0000000000000000000000000000000000000000000000000000000000000000000000\
00000000000000
That says, “Give me the base two form of this number that has 46 digits of precision.” There are a lot of trailing zeros; we want the first 52 bits after the binary point.
0b1111111111010000000101001001100111110100011010000000
Does Swift agree?
let nineTwoTwoQuintillion = Double(
sign: .plus,
exponentBitPattern: 0b10000111101,
significandBitPattern: 0b1111111111010000000101001001100111110100011010000000
)
print(nineTwoTwoQuintillion)
// > 9.22e+18
Success! We were able to represent this number exactly, since the required significand fit within 52 bits.
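We can also see from first principles why the significand fit: strip the trailing binary zeros from \(9.22 \times 10^{18}\), and the odd part that remains needs well under 53 bits. A quick check in Python:

```python
n = 9_220_000_000_000_000_000

# Strip trailing binary zeros; the odd part that remains is all
# the significand precision the value actually needs.
odd = n
while odd % 2 == 0:
    odd //= 2
print(odd.bit_length())  # 46 -- comfortably under the 53 bits available

# So the conversion to a double is exact.
assert float(n) == n
```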
But what if we wanted to encode all nineteen digits of Int.max
, a number that’s not a power of two: \(2^{63} - 1\)? Let’s see what happens.
As before, \(2^{62}\) is the nearest power of two less than our goal. We can guess the significand is going to be a tiny bit less than two. Indeed, using Mathematica again, we find that it’s
\[\frac{2^{63} - 1}{2^{62}} = 1.99999999999999999978315956550289911319850943982601165771484375\]a number with 63 decimal digits! In binary it’s
1.11111111111111111111111111111111111111111111111111111111111111
which, intriguingly, is 63 1s. As usual, we get the leading digit for free, but we’re still going to have to chop off ten bits to fit this into a Double’s significand.
let almostIntMax = Double(
sign: .plus,
exponentBitPattern: 0b10000111101,
significandBitPattern: 0b1111111111111111111111111111111111111111111111111111
)
print(String(format: "%.18e", almostIntMax))
// > 9.223372036854774784e+18
We’re forcing the printing of 18 decimal places, but those last four digits are not to be trusted. Omitting the least significant 10 bits of the significand has resulted in an error of
\[\frac{9223372036854775807 - 9223372036854774784}{9223372036854775807} \approx 0.00000000000001\%\]To be sure, this error is tiny. The number we got is \(99.9999999999999889\%\) of what we wanted. Fifteen nines in a row!
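The error arithmetic is easy to replicate with exact integers, for instance in Python:

```python
int_max = 2**63 - 1                 # 9223372036854775807
almost = 9_223_372_036_854_774_784  # the value our truncated Double holds

error = int_max - almost
print(error)              # 1023: exactly the ten dropped one-bits
relative = error / int_max
print(f"{relative:.3e}")  # about 1.109e-16, i.e. roughly 0.00000000000001%
```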
But what if we wanted to know the result of iteratively squaring Int.max
, say, five times? Using Double
, we get
let big = sequence(first: almostIntMax, next: { $0 * $0 }).prefix(6)
print(big.map(\.description).joined(separator: "\n"))
// > 9.223372036854775e+18
// > 8.50705917302346e+37
// > 7.237005577332259e+75
// > 5.2374249726338223e+151
// > 2.7430620343968395e+303
// > inf
It turns out we can’t square it five times, because that exceeds even Double
’s capacity, as indicated by inf
. But the result of the fourth squaring is about \(2.7430620343968395\times10^{303}\). How close is that to the correct answer? Here’s the same sequence computed with Mathematica’s arbitrary precision arithmetic.
NestList[(#^2) &, 2^63 - 1, 5]
// > {9223372036854775807, 85070591730234615847396907784232501249, \
// > 7237005577332262210834635695349653859421902880380109739573089701262786\
// > 560001, 52374249726338269874783614880766155793371912454611337886264179\
// > 5775320003709193437647875410417339251040259100355670327509228458427426\
// > 40075519159173120001, \
// > 2743062034396844336869514018464698837952741034352782431735406935422555\
// > 2356596046115747958004859021025898780638553812209802474141496520796438\
// > 9913801754802787377183151320139822670075302546549761535660459702314933\
// > 6546797754176993249443973844794089529533475153606348844332504619566761\
// > 300314793168852746240001, \
// > 7524389324549354423906826059867985026620849363015297363538931775726563\
// > 4489223606993528230559344470229026462223411229385590016314551139111119\
// > 3598683416252988848958916649615315608296195059513020594051541882549273\
// > 7142009870703723094729573833219186725893664705796908922536873946693338\
// > 9620256771289132902698104528630370957208333219923065902668290636805707\
// > 2716357501330788792249322470295126850048013087226847583241398330127359\
// > 6593659987073481698566971666199551326463654838858806614357333644155075\
// > 1000540061190861768897550425218309479374392873122528373786047835048867\
// > 91100277728803500511128107182376171843092480001}
NestList
is pretty much identical to the Swift Standard Library’s sequence(first:next:)
. The first argument is a pure function that squares whatever is passed to it, the second argument is the initial value to pass to the squaring function, and the third argument is the number of times to iterate. Just like sequence
, the first element in the resulting list is the initial value unmodified.
With arbitrary precision arithmetic, we get a result for the fifth iteration, which is a 607-digit number! The result for the fourth iteration is a 304-digit number like we computed in Swift with Double, but it has a lot fewer zeros!
With 53bit precision, we got
\[2743062034396839500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\]With arbitrary precision,
\[2743062034396844336869514018464698837952741034352782431735406935422555235659604611574795800485902102589878063855381220980247414149652079643899138017548027873771831513201398226700753025465497615356604597023149336546797754176993249443973844794089529533475153606348844332504619566761300314793168852746240001\]How accurate is the Double
version?
SetPrecision[
274306203439683950000000000000000000000000000000000000000000000000000\
0000000000000000000000000000000000000000000000000000000000000000000000\
0000000000000000000000000000000000000000000000000000000000000000000000\
0000000000000000000000000000000000000000000000000000000000000000000000\
0000000000000000000000000/
27430620343968443368695140184646988379527410343527824317354069354225\
5523565960461157479580048590210258987806385538122098024741414965207964\
3899138017548027873771831513201398226700753025465497615356604597023149\
3365467977541769932494439738447940895295334751536063488443325046195667\
61300314793168852746240001, 20]
// > 0.9999999999999982367
We’ve traded our fifteen nines for a mere fourteen!
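You don’t need Mathematica to reproduce this amplification. Here’s a sketch in Python using `fractions.Fraction` for the exact side; both tracks start from the same truncated value, so every bit of disagreement comes from rounding during the four squarings:

```python
from fractions import Fraction

start = 9_223_372_036_854_774_784   # almostIntMax; exactly representable as a double
exact = Fraction(start)             # arbitrary precision: never rounds
approx = float(start)               # exact conversion, so no initial error

for _ in range(4):
    exact *= exact
    approx *= approx                # rounds to 53 significant bits each time

# Fraction(approx) is exact, so this ratio is the true accumulated error.
ratio = Fraction(approx) / exact
print(float(ratio))
```

The printed ratio is within a whisker of 1, but not equal to it: each rounded squaring leaves a residue that the next squaring doubles.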
Now we see what might be happening after more than three hundred million iterations of
\[f(z) = z^2 + (-0.75 + 10^{-8}i)\]In each iteration, we’re computing a complex multiplication and addition to obtain a new value for the function. And then we’re checking the length of that complex value to see if it’s larger than \(2\).
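That inner loop is compact enough to sketch in full. Here it is in Python with ordinary double-precision complex numbers (the function name is mine, the count convention may differ from printBenchmarkedBehavior by an iteration, and for the smallest \(\epsilon\) values Python’s 53-bit doubles will drift from the Float80 counts):

```python
def blow_up_count(c, max_iterations):
    """Iterate f(z) = z^2 + c from z = 0, counting until |z| exceeds 2."""
    z = 0j
    for i in range(1, max_iterations + 1):
        z = z * z + c       # one complex multiplication and one addition
        if abs(z) > 2:      # the length check: two multiplies, an add, a sqrt
            return i
    return None             # bounded, as far as this budget can tell

print(blow_up_count(complex(-0.75, 1e-4), 10**6))  # about 31417, approximating 10000*pi
```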
Swift Numerics is open-source, so we can look at the implementation of those complex arithmetic methods to get an idea of how many floating-point operations they perform.
Addition and multiplication are defined in https://github.com/apple/swift-numerics/blob/master/Sources/ComplexModule/Arithmetic.swift.
extension Complex: AdditiveArithmetic {
  public static func +(z: Complex, w: Complex) -> Complex {
    return Complex(z.x + w.x, z.y + w.y)
  }
}
extension Complex: AlgebraicField {
  public static func *(z: Complex, w: Complex) -> Complex {
    return Complex(z.x*w.x - z.y*w.y, z.x*w.y + z.y*w.x)
  }
}
I count four floatingpoint multiplications and four additions total.
length
is defined in https://github.com/apple/swift-numerics/blob/master/Sources/ComplexModule/Complex.swift.
extension Complex {
  public var length: RealType {
    let naive = lengthSquared
    guard naive.isNormal else { return carefulLength }
    return .sqrt(naive)
  }

  public var lengthSquared: RealType {
    x*x + y*y
  }
}
Okay, this is more complicated. There’s some logic to compute the length differently if the initial attempt gives a denormalized value. Let’s ignore that. lengthSquared
has two multiplications and one addition, and then there’s sqrt
. We could check how that’s implemented because it’s also open-source, but we’re just trying to get a ballpark understanding. We’re at six multiplications and five additions already. Even without sqrt
, that’s more than ten floating-point operations per iteration, so iterating over three hundred million times requires more than three billion floating-point operations. Maybe a lot more!
If only there were a floating-point type with more precision than Double
, so the rounding errors we saw would be pushed a few bits further to the right. We might be able to get accurate enough results after more than three hundred million iterations to reveal another digit of \(\pi\).
And there is! Swift has an 80-bit floating-point type named Float80
. It works just like Double
, except it devotes 63 bits to the significand. That’s enough to store Int.max
exactly, for example. With the assumed leading bit, that’s \(\log_{10}(2^{64}) \approx 19\) decimal digits of precision. It also increases the size of the exponent field to 15 bits, and uses \(16383\) as the bias.
Let’s verify. We’ll set the significand to all 62 of the bits we computed last time, plus a trailing 0
to fill out the field. The exponent will be \(62 + 16383 = 16445\) or 0b100000000111101
.
let intMax = Float80(
sign: .plus,
exponentBitPattern: 0b100000000111101,
significandBitPattern: 0b111111111111111111111111111111111111111111111111111111111111110
)
print(intMax)
// > 9223372036854775807.0
Excellent! All nineteen digits are correct.
Now for the real test. Can we get an eighth digit of \(\pi\)? Let’s return to our Xcode command-line project and find out. Add the following code to the end of main.swift, then build and run.
typealias Complex80 = ComplexModule.Complex<Float80>
let neck80 = Complex80(-0.75, 0)
let epsilons80 = [Float80(1.0), Float80(0.1), Float80(0.01), Float80(0.001), Float80(0.0001), Float80(1e-5), Float80(1e-6), Float80(1e-7), Float80(1e-8)]
for (e, c) in epsilons80.map({ ($0, neck80 + Complex80(0, $0)) }) {
  printBenchmarkedBehavior(for: c, iterations: enoughIterations(for: e))
}
// > c = (-0.75, 1.0): blows up after 3 iterations (in 8.606910705566406e-05 secs)
// > c = (-0.75, 0.1): blows up after 33 iterations (in 3.1113624572753906e-05 secs)
// > c = (-0.75, 0.01): blows up after 315 iterations (in 0.0002219676971435547 secs)
// > c = (-0.75, 0.001): blows up after 3143 iterations (in 0.0019540786743164062 secs)
// > c = (-0.75, 0.0001): blows up after 31417 iterations (in 0.01812911033630371 secs)
// > c = (-0.75, 1e-05): blows up after 314160 iterations (in 0.1822800636291504 secs)
// > c = (-0.75, 1e-06): blows up after 3141593 iterations (in 1.7093499898910522 secs)
// > c = (-0.75, 1e-07): blows up after 31415927 iterations (in 17.033315062522888 secs)
// > c = (-0.75, 1e-08): blows up after 314159266 iterations (in 173.35339200496674 secs)
print(Float80.pi)
// > 3.1415926535897932385
Eight digits of \(\pi\) in a mere three minutes. With more precision, the pattern continues.
Let’s take a look at these eleven extra bits in action. First we’ll compare the values from the first two iterations, forcing the Complex<Double>
values to the same level of precision and formatting as the Complex<Float80>
values.
print(
closeToNeckValues
.prefix(2)
.map {
String(format: "(%.20f, %.19e)", $0.real, $0.imaginary)
}
.joined(separator: "\n")
)
// > (-0.75000000000000000000, 1.0000000000000000209e-08)
// > (-0.18750000000000011102, -5.0000000000000017590e-09)
Here we see that Double
isn’t able to represent 1e-8
exactly. In fact it’s not possible for floating-point types to represent any of the negative powers of ten exactly, so there will always be rounding error for these values. Try it in Swift like we’ve been doing, or at Float Toy or float.exposed.
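Python’s decimal module gives another way to see this: constructing a Decimal from a float reveals the exact binary value the double actually stores.

```python
from decimal import Decimal

# The nearest double to 1e-8 overshoots slightly; Decimal shows the
# exact stored value, whose tail matches the ...209e-08 printed above.
print(Decimal(1e-8))
```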
How does Float80
fare?
let closeToNeckValues80 = IteratedFuncValues(
f: f(c: neck80 + Complex80(0, 10e-9)),
iterations: aLittlePastPi
)
print(
closeToNeckValues80
.prefix(2)
.map(\.description)
.joined(separator: "\n")
)
// > (-0.75, 1e-08)
// > (-0.18750000000000010002, -5.0000000000000000004e-09)
We see in the value from the second iteration that Float80
pushes the rounding errors several places to the right. That means that several billion more-accurate floating-point operations later, \(f\) blows up at iteration \(314159266\), and we’ve computed the eighth digit of \(\pi\).
for (i, value) in closeToNeckValues80.enumerated().suffix(7) {
print("iteration \(i + 1): \(value) = \(value.length)")
}
// > iteration 314159262: (1.0116863356929538323, 0.43891660137812484814) = 1.1027950964676800822
// > iteration 314159263: (0.08086145886251232403, 0.88809186624608003554) = 0.89176551762322537383
// > iteration 314159264: (1.5321685873630715391, 0.14362481781717839593) = 1.5388855280348723417
// > iteration 314159265: (1.5769124918091324986, 0.44011485845044946176) = 1.6371786999141378199
// > iteration 314159266: (1.5429519181948282034, 1.3880452262426437917) = 2.0754204807595235866
// > iteration 314159267: (0.29596892823389236505, 4.2833740787445230593) = 4.293587207096230689
// > iteration 314159268: (19.009695891980572701, 2.5354912806217051388) = 19.178040930702274312
But does Float80
have enough accuracy to get the ninth digit? Execute this code and take a half-hour break, or just read on.
printBenchmarkedBehavior(for: neck80 + Complex80(0, 1e-9), iterations: enoughIterations(for: 1e-9))
// > c = (-0.75, 1e-09): blows up after 3141592656 iterations (in 1704.3665910959244 secs)
Yes it does—and with a minute and a half to spare.
At the risk of seeming ungrateful for an unexpected gift of the universe, it has to be said that this surprising method is not the most efficient way to compute \(\pi\). Nevertheless, are you wondering how much further we can go? I certainly was, so I typed this in, hit cmd-R, and went to bed.
printBenchmarkedBehavior(for: neck80 + Complex80(0, 1e-10), iterations: enoughIterations(for: 1e-10))
When I woke up the next morning, here’s what was waiting for me.
// > c = (-0.75, 1e-10): bounded after 33000000000 iterations (in 18266.33482694626 secs)
After five hours of computation, once again, \(\pi\) has slipped out of our grasp.
Now we’ve run into two computational limits. One is a hard limit that we’ve broken through before: we don’t have enough precision in our floating-point type to go further. Unfortunately, Swift doesn’t currently have a higher-precision type to offer us.
The other limit is more pliable: our time and patience. We have to wait for a lot of floating-point operations. It took five hours to obtain the ninth digit. We can sleep while the FLOPs are FLOPing, but we’re dealing with an exponential algorithm: the next digit will require fifty hours! I don’t feel like waiting another two days to publish this, but maybe I’ll try it some time. Maybe you’ll try it. Tell me how it goes!
Is there a faster algorithm? Of course, there are many, if we’re just interested in computing digits of \(\pi\). Our interests are more esoteric: is there a faster algorithm which mysteriously emerges from the complexity at the border of the Mandelbrot set to inadvertently compute digits of \(\pi\)?
In fact, there is! In an excellent Numberphile video^{9}, Dr. Holly Krieger demonstrates another appearance of \(\pi\) in counting the iterations for
\[f(z) = z^2 + \frac{1}{4} + \epsilon\]for increasingly small real \(\epsilon\).^{10} \(c = \frac{1}{4}\) is precisely at the cusp (or, less formally, the “butt”) of the main cardioid of the Mandelbrot set. Since the computations involve only real numbers, there is only one multiplication and one addition per iteration—a significant savings.
Let’s see what we can do with this.
extension Float80: Lengthy {}
let cusp80 = Float80(0.25)
var moreEpsilons80 = epsilons80 + [Float80(1e-9), Float80(1e-10)]
for (e, c) in moreEpsilons80.map({ ($0, cusp80 + Float80($0)) }) {
printBenchmarkedBehavior(for: c, iterations: enoughIterations(for: e))
}
// > c = 1.25: blows up after 2 iterations (in 6.99758529663086e-05 secs)
// > c = 0.35: blows up after 8 iterations (in 9.059906005859375e-06 secs)
// > c = 0.26: blows up after 30 iterations (in 1.2993812561035156e-05 secs)
// > c = 0.251: blows up after 97 iterations (in 2.9087066650390625e-05 secs)
// > c = 0.2501: blows up after 312 iterations (in 8.392333984375e-05 secs)
// > c = 0.25001: blows up after 991 iterations (in 0.0002510547637939453 secs)
// > c = 0.250001: blows up after 3140 iterations (in 0.0007920265197753906 secs)
// > c = 0.2500001: blows up after 9933 iterations (in 0.00268399715423584 secs)
// > c = 0.25000001: blows up after 31414 iterations (in 0.007649064064025879 secs)
// > c = 0.250000001: blows up after 99344 iterations (in 0.0235140323638916 secs)
// > c = 0.2500000001: blows up after 314157 iterations (in 0.07716691493988037 secs)
Interesting: the same \(\epsilon\) values work differently in this scenario. The iteration counts that approximate the digits of \(\pi\) are interleaved with counts that look increasingly like \(9.9\) times powers of ten. That makes sense if the count here grows like \(\pi / \sqrt{\epsilon}\) rather than \(\pi / \epsilon\): odd negative powers of ten give \(\pi \sqrt{10} \approx 9.93\) times a power of ten. Let’s eliminate the \(\epsilon\) values that correspond with those, and also get rid of \(\epsilon = 1.0\), because it blows up on the second iteration, polluting our nice sequence of \(\pi\) approximations. It turns out we want the negative even powers of ten.
var cuspEpsilons80 = [
  Float80(1e-2),
  Float80(1e-4),
  Float80(1e-6),
  Float80(1e-8),
  Float80(1e-10),
  Float80(1e-12),
  Float80(1e-14),
  Float80(1e-16)
]
for (e, c) in cuspEpsilons80.map({ ($0, cusp80 + Float80($0)) }) {
printBenchmarkedBehavior(for: c, iterations: enoughIterations(for: e))
}
// > c = 0.26: blows up after 30 iterations (in 1.5974044799804688e-05 secs)
// > c = 0.2501: blows up after 312 iterations (in 0.00011599063873291016 secs)
// > c = 0.250001: blows up after 3140 iterations (in 0.0007550716400146484 secs)
// > c = 0.25000001: blows up after 31414 iterations (in 0.007490992546081543 secs)
// > c = 0.2500000001: blows up after 314157 iterations (in 0.07494890689849854 secs)
// > c = 0.250000000001: blows up after 3141591 iterations (in 0.7695610523223877 secs)
// > c = 0.25000000000001: blows up after 31415920 iterations (in 7.364894986152649 secs)
// > c = 0.2500000000000001: blows up after 314174320 iterations (in 73.83197391033173 secs)
That’s better. As before, each sequence of iterations is taking ten times longer to compute another digit of \(\pi\), but we’re getting the same number of digits in about seven-seventeenths, or two-fifths, the time it took for the complex computations.
Except \(\pi\) has eluded us again! After an estimate of \(31415920\), accurate to seven digits, we get \(314174320\), with only four accurate digits. As with \(c = -\frac{3}{4} + \epsilon i\), floating-point rounding errors artificially slowed the rate of increase of the sequence, but this time it blew up before the maximum number of iterations was exceeded.
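For reference, the whole real-axis loop fits in a few lines. Here’s a sketch in Python (function name mine; Python’s floats are 53-bit doubles, so just like Double they go astray at the smallest \(\epsilon\) values, but at moderate \(\epsilon\) the counts line up with the table above):

```python
def cusp_blow_up_count(eps, max_iterations):
    """Iterate x -> x^2 + (1/4 + eps) on the real line until x exceeds 2."""
    c = 0.25 + eps
    x = 0.0
    for i in range(1, max_iterations + 1):
        x = x * x + c   # one multiplication and one addition per iteration
        if x > 2:
            return i
    return None

for eps in (1e-2, 1e-4, 1e-6):
    print(eps, cusp_blow_up_count(eps, 10**5))  # counts near pi/sqrt(eps)
```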
We have a faster algorithm, but we need yet more precision. IEEE 754-2008 specifies a 128-bit (quadruple-precision!) floating-point format that devotes 112 bits to the significand’s fraction. That’s a decimal precision of \(\log_{10}(2^{113}) \approx 34\)! Swift doesn’t yet support this, but GCC provides libquadmath, a C library which does.
Here’s a not-very-idiomatic C++ program we can compile against libquadmath to see if we can break our record of nine very inefficiently computed digits of \(\pi\).
#include <chrono>
#include <iomanip>
#include <iostream>
#include <quadmath.h>
__float128 m(__float128 z, __float128 c) {
    return z*z + c;
}

long test_membership(__float128 c) {
    const long max_iterations = 314160000000; // Enough to approximate π to 12 digits.
    long i = 1;
    __float128 z = 0.0Q;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    while (i < max_iterations) {
        z = m(z, c);
        if (z > 2.0Q) {
            break;
        }
        else {
            ++i;
        }
    }
    std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();
    std::cout << "(" << std::chrono::duration_cast<std::chrono::microseconds>(end - start).count() << " µs) ";
    return i;
}

void print_result_for(__float128 c) {
    char rendered_c[32] = {};
    quadmath_snprintf(rendered_c, sizeof(rendered_c), "%.22Qf", c);
    std::cout << rendered_c << ": " << test_membership(c) << "\n";
}
int main(int argc, const char * argv[]) {
    print_result_for(1.25Q);
    print_result_for(0.26Q);
    print_result_for(0.2501Q);
    print_result_for(0.250001Q);
    print_result_for(0.25000001Q);
    print_result_for(0.2500000001Q);
    print_result_for(0.250000000001Q);
    print_result_for(0.25000000000001Q);
    print_result_for(0.2500000000000001Q);
    print_result_for(0.250000000000000001Q);
    print_result_for(0.25000000000000000001Q);
    print_result_for(0.2500000000000000000001Q);
    return 0;
}
I ran this on a modest Linode instance running Ubuntu 18.04 and gcc 7.5.0.
tt@galileo:~/src/mtest++$ cat /proc/cpuinfo | grep "model name"
model name : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
tt@galileo:~/src/mtest++$ cat /proc/meminfo | grep "MemTotal"
MemTotal: 987940 kB
tt@galileo:~/src/mtest++$ lsb_release -d
Description: Ubuntu 18.04.4 LTS
tt@galileo:~/src/mtest++$ gcc --version | grep gcc
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
It took more than 6 hours. (Sorry, other users of my shared instance!)
tt@galileo:~/src/mtest++$ g++ -Wall -o mtest++128 mtest++128.cpp -lquadmath
tt@galileo:~/src/mtest++$ ./mtest++128
1.2500000000000000000000: (2 µs) 2
0.2600000000000000000000: (3 µs) 30
0.2501000000000000000000: (23 µs) 312
0.2500010000000000000000: (227 µs) 3140
0.2500000100000000000000: (4840 µs) 31414
0.2500000001000000000000: (43322 µs) 314157
0.2500000000010000000000: (230520 µs) 3141591
0.2500000000000100000000: (2361880 µs) 31415925
0.2500000000000001000000: (23647129 µs) 314159263
0.2500000000000000010000: (235801964 µs) 3141592652
0.2500000000000000000100: (2220785973 µs) 31415926534
0.2500000000000000000001: (21597067475 µs) 314159265357
With a faster algorithm (such as it is) and much more floating-point precision, we can very inefficiently compute eleven digits of \(\pi\). I haven’t yet tried running for over 60 hours to see whether this implementation can go further. If not, we may have run into computational limits that are intractable with today’s technology. IEEE 754-2008 specifies a 256-bit (octuple-precision!) floating-point format, but it has no known hardware implementation. And though there are faster processors than the one I used, they’re not ten times, or even twice, as fast.
But never mind all that. We’ve been able to join Dave Boll in a wonderful experience: the moment of unexpected discovery. We’ve even explored a little further^{11}, thanks to almost thirty years of technological advancement.
Mathematicians have figured out why \(\pi\) emerges from the Mandelbrot set in these places, and that’s another beautiful thing in itself. There’s a paper by Aaron Klebanoff and another video by Dr. Holly Krieger that I look forward to studying. But I want to ponder it more on my own before I learn the answer.
Besides, I’m still relishing the mystery of how a mist of thousands, millions, billions of simple multiplications and additions suddenly clears, revealing a glimpse of the endless string of evervarying digits which is somehow imprinted in the rotating machinery of the universe.

If you wondered how I chose such a precise point to zoom in on, https://mandelbrot.ophir.dev is how. ↩

I first learned of Boll’s discovery in Gary William Flake’s wonderful book The Computational Beauty of Nature (ISBN 0262062003), page 125. You can still read a post to the Usenet newsgroup sci.math in which Boll announced his discovery. Beware: there are spoilers in there! ↩

Though it was spring of 1991, so Boll wasn’t using Swift. And his computations must have taken a lot longer! At that time the fastest available PC and workstation CPUs were the 33 MHz 80486, 33 or maybe 40 MHz 68040, 40 MHz SPARC, and 33 or maybe 40 MHz MIPS R3000. My personal computer then (my first) was a Macintosh Classic with an 8 MHz 68000 that had no hardware floating-point support—I shudder to think how long these computations would take on that beloved but sluggish machine. I’m currently using a 2018 15” MacBook Pro with an Intel i7-8750H that can run a single core at up to 4.1 GHz. ↩

Also note that the more we square, the closer we get to doubling the number of significant digits each time. Iterated squaring will require a lot of bits if we care about accuracy! ↩

As published in the second release of data from the amazing Gaia space observatory, which is building the greatest star chart humanity has ever known. ↩

If you find that you enjoy this, you’ll love Float Toy and float.exposed. ↩

Every Numberphile video about the Mandelbrot set I’ve seen has been illuminating. This one has amazing visualizations of iterations of \(f\) in the complex plane. Complex multiplication is rotation! ↩

In a 1992 post to alt.fractals, Boll listed iteration counts for \(c = -\frac{3}{4} + \epsilon i\) up to \(\epsilon = 10^{-7}\) (31415928), and for \(c = \frac{1}{4} + \epsilon\) up to \(\epsilon = 10^{-12}\) (3141625). But note that the latter count has suffered from floating-point rounding error, as our count for \(10^{-12}\) using
Float80
is 3141591. He doesn’t give programming details in that post, but in his 1991 post to sci.math he says he used “double precision”. ↩