# Can you trust floating-point arithmetic on Apple Silicon?

In the last five months, we’ve read endless benchmarks run on M1 Macs, and a great deal about their speed. This article asks the other essential question: how accurate are they? Specifically, how does ARM floating-point arithmetic compare with that on Intel processors?

If speed benchmarks seem a bit geeky, floating-point arithmetic might appear as dull as ditchwater. But it’s very important, as so much in macOS relies on the processor’s floating-point instructions working perfectly. Way back in the days of crude colour displays, screen graphics were computed using integers; after all, that’s what a display pixel was. For a long time now, integers have been replaced by floating-point numbers, so every calculation for the display relies on floating-point arithmetic.

One good way of testing the ARM CPU’s floating-point accuracy against that of Intel processors is to look at some well-known calculations which are generally performed incorrectly, yielding results which vary in their errors. I have chosen three from the Handbook of Floating-Point Arithmetic (see reference), in which current Intel processors don’t return the result obtained by exact calculation. This may seem perverse, but looking through a vast number of correctly-performed calculations tells you far less. By their errors you shall know them!

## The Muller-Kahan Test (Muller 1.3.2.1)

This calculates a sequence which seems to converge to an incorrect limit, when compared against exact calculations. Swift code is:
```swift
let max = 21
var u: Double = 2.0
var v: Double = -4.0
var w: Double
for _ in 3...max {
    w = 111.0 - 1130.0/v + 3000.0/(v*u)
    u = v
    v = w
}
```

The value of v should, using exact arithmetic, converge on 6 as the number of iterations (max) tends towards infinity. On otherwise accurate processors, rounding errors occur even in early iterations, and the sequence converges on 100. On Intel processors, when max = 21, the final value of v is 99.8985692661829, and that’s exactly the same when run on an M1.
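The behaviour becomes clearer once you notice that the recurrence has three fixed points, 5, 6 and 100, and 100 is the one which attracts: the exact starting values keep the sequence on the path to 6, but the slightest rounding error introduces a component of the 100 solution, which then grows to dominate. A minimal sketch checking those three fixed points (the function name `fixedPointResidual` is my own, for illustration):

```swift
// Residual of x as a fixed point of the Muller-Kahan recurrence,
// i.e. f(x) - x, where f(x) = 111 - 1130/x + 3000/x².
func fixedPointResidual(_ x: Double) -> Double {
    (111.0 - 1130.0/x + 3000.0/(x*x)) - x
}

// Each of 5, 6 and 100 satisfies the recurrence to within rounding error.
for x in [5.0, 6.0, 100.0] {
    print(x, fixedPointResidual(x))   // residuals all vanishingly small
}
```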

## The Chaotic Bank (Muller 1.3.2.2)

This is phrased as a story in which a man goes to a bank, which promises him that, if he deposits exactly \$(e – 1) in an account, then at the end of each year they will multiply the balance by the age of the account in years and deduct \$1 as their fee: so it’s doubled at the end of the second year, tripled at the end of the third, until it’s multiplied by 25 in the 25th and final year. Swift code is:
```swift
var account: Double = 1.71828182845904523536028747135
for i in 1...25 {
    account = (Double(i) * account) - 1.0
}
```

Exact arithmetic shows that the amount in the account tends to 0. However, if the initial estimate of (e – 1) is slightly below the actual value, the result tends to minus infinity; if that initial estimate is slightly above it, the result tends to positive infinity. On Intel processors, the final value of account is 1201807247.4104486, and again that’s exactly the same when run on an M1.
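That sensitivity is easy to demonstrate: any error in the starting balance is amplified by a factor of 25! ≈ 1.55 × 10²⁵ over the 25 years, so nudging the deposit down by a single ulp is enough to flip the sign of the final balance. A minimal sketch of that, wrapping the same recurrence in a helper function (`finalBalance` is my own name):

```swift
// Run the chaotic bank recurrence from a given starting balance.
func finalBalance(from start: Double) -> Double {
    var account = start
    for i in 1...25 {
        account = (Double(i) * account) - 1.0
    }
    return account
}

let e1: Double = 1.71828182845904523536028747135  // nearest Double to e - 1
print(finalBalance(from: e1))           // about 1.2e9, as above
print(finalBalance(from: e1.nextDown))  // one ulp lower: large and negative
```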

## Rump’s Function (Muller 1.3.2.3)

Rump designed a function in 1988 which has continued to return incorrect results since he first ran it on an IBM S/370 computer. The numbers have been carefully chosen here to be exactly representable in binary floating-point arithmetic with a precision of more than 17 bits. Swift code is:
```swift
let a: Double = 77617.0
let b: Double = 33096.0
let b2: Double = b*b
let b4: Double = b2*b2
let b6: Double = b4*b2
let b8: Double = b4*b4
let a2: Double = a*a
let firstexpr: Double = (11.0*a2*b2) - b6 - (121.0*b4) - 2.0
let f = (333.75*b6) + (a2*firstexpr) + (5.5*b8) + (a/(2.0*b))
```

On Rump’s IBM S/370, this returned 1.172603… in single, double and extended precision, although the exact result is -0.827396…. On a modern Intel processor using Doubles, this returns -1.1805916207174113e+21, and that’s exactly the same when run on an M1.
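The exact result is easy to verify, because these values of a and b were chosen so that the polynomial part of the expression, 333.75b⁶ + a²(11a²b² − b⁶ − 121b⁴ − 2) + 5.5b⁸, is exactly −2 in exact arithmetic; the whole function therefore collapses to a/(2b) − 2, which Double handles without trouble. A minimal sketch of that check:

```swift
let a: Double = 77617.0
let b: Double = 33096.0

// With Rump's values the polynomial terms cancel to exactly -2 in exact
// arithmetic, so the true result reduces to a/(2b) - 2 ≈ -0.827396.
let exact = a / (2.0 * b) - 2.0
print(exact)
```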

## Conclusion

On each of these three unusual test cases, the erroneous results returned by the M1 using double precision arithmetic are identical to those returned by current Intel processors, specifically a 3.2 GHz 8-Core Intel Xeon W. These results imply that floating-point arithmetic on the two processors should be almost (if not completely) identical, even when it doesn’t match the results of exact calculation.

## Reference

Muller, Brunie et al. (2018) Handbook of Floating-Point Arithmetic, 2nd ed, Birkhäuser, ISBN 978-3-319-76525-9.