Last Week on My Mac: Swallowing a fly

Computing has for me been an endless succession of rabbit holes, and just at the moment I’m going deeper all the time. It all started with wanting accurate benchmarks for SSD performance and my app Stibium, and now I find myself in linear programming and matrices, which is why you haven’t had any of the promised updates to Stibium yet. Let me make my excuses.

When I was developing Stibium, I came across speed measurements which were anomalous, to say the least, and were dominating the results returned by some other disk benchmarking apps. These were so fast – from 9 GB/s upwards – that the choice was stark: either the M1 Mac’s hardware was breaking the laws of physics, or the figures were spurious, and reflected transfer in memory, not with an SSD. With the help of others, including Ric Ford of MacInTouch in particular, we came to recognise these as outliers which could and did cause havoc with benchmarks.

The engineering answer might have been to identify the outliers, remove them, and carry on without them. Although pragmatic, that begs the question of how you identify an outlier in the first place, and the only real answer is that it seems higher (or in other cases lower) than the other results. This goes against the grain of most benchmarking, which is directed at demonstrating how much faster and better your new system is than your competitor's. Coming up with unbiased and objective rules for classifying outliers is therefore extremely contentious.

So I turned to statistics for a solution, in what’s known as robust linear regression. Conventional linear regression – fitting a straight line to a series of points by the method of least squares – is quick and simple, but too easily swayed by outliers. By using a robust method, the influence of those outliers would be reduced, and the line fitted more closely to where most of the points are.
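To see why least squares is so easily swayed, here's a minimal sketch in plain Python (illustrative only, not Stibium's code): five points lying exactly on y = 2x, plus one spurious high reading of the kind those anomalous results produce.

```python
def least_squares(points):
    """Ordinary least-squares fit: returns (gradient, intercept)."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    # Gradient is covariance(x, y) divided by variance(x)
    num = sum((x - mean_x) * (y - mean_y) for x, y in points)
    den = sum((x - mean_x) ** 2 for x, _ in points)
    gradient = num / den
    return gradient, mean_y - gradient * mean_x

# Five points lying exactly on y = 2x ...
clean = [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
# ... plus one spurious, far-too-fast reading
tainted = clean + [(6, 60)]
print(least_squares(clean))    # gradient 2.0 exactly
print(least_squares(tainted))  # gradient dragged far above 2
```

A single bad point more than quadruples the fitted gradient, which is exactly how those 9 GB/s readings came to dominate benchmark results.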

The current version of Stibium does just that, using an ingenious method devised by Theil and Sen. It takes every possible pair of points, works out the equation of the line joining each pair, then calculates the median gradient of all those lines, and offers that as the gradient of the line which best fits all the points. And it works well in the face of outliers.
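The method described above fits into a few lines of plain Python (a sketch for illustration, not Stibium's own code): gather the gradient of the line through every pair of points, then take the median.

```python
from itertools import combinations
from statistics import median

def theil_sen(points):
    """Theil-Sen estimator: the median of the gradients of all pairwise lines."""
    gradients = [(y2 - y1) / (x2 - x1)
                 for (x1, y1), (x2, y2) in combinations(points, 2)
                 if x1 != x2]
    gradient = median(gradients)
    # A matching robust intercept: the median of y - gradient * x
    intercept = median(y - gradient * x for x, y in points)
    return gradient, intercept

# Four points on y = 2x, and one wild outlier
points = [(1, 2), (2, 4), (3, 6), (4, 8), (5, 60)]
print(theil_sen(points))  # gradient stays at 2.0 despite the outlier
```

Because only the pairs involving the outlier produce wayward gradients, they end up in the tail of the distribution and the median ignores them, whereas least squares would average them in.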

If you thought that was the end of this rabbit hole, it’s only the start.

Theil-Sen regression isn’t the only game in town, not by a long way. More popular and arguably far superior is a group of techniques known as quantile regression. For some reason, those who like Theil-Sen and other robust methods don’t seem to talk with those who are into quantile regression, or vice versa. I haven’t been able to find any direct comparisons, which seems odd: they start from the same place and follow different routes to a similar goal, but like rival bus companies, each pretends the other doesn’t exist.

Instead of analysing all the possible lines, quantile regression uses linear programming to home in on the single best-fitting line directly. You may have vague recollections of linear programming from graphical examples in the distant past. It’s quite an old method which manipulates matrices – those two- and more-dimensional arrays of numbers which seem to have been designed to make serious maths a minority sport. You may be surprised to learn that macOS features superb support for matrix maths, in what Apple terms its Accelerate libraries.
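Quantile regression minimises the so-called pinball (or check) loss rather than squared error; because that makes the problem a linear program, an optimal line always passes through at least two of the data points (an LP optimum sits at a vertex). For a small sample that allows a deliberately naive sketch in plain Python, with no LP solver and no Accelerate, simply checking every pairwise line; a real implementation would use the simplex or interior-point method instead.

```python
from itertools import combinations

def pinball(residual, tau):
    """Pinball (check) loss: asymmetric absolute error for quantile tau."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

def quantile_regression(points, tau=0.5):
    """Brute-force quantile regression: try the line through every pair of
    points, keeping the one with the smallest total pinball loss."""
    best = None
    for (x1, y1), (x2, y2) in combinations(points, 2):
        if x1 == x2:
            continue
        gradient = (y2 - y1) / (x2 - x1)
        intercept = y1 - gradient * x1
        loss = sum(pinball(y - (gradient * x + intercept), tau)
                   for x, y in points)
        if best is None or loss < best[0]:
            best = (loss, gradient, intercept)
    return best[1], best[2]

points = [(1, 2), (2, 4), (3, 6), (4, 8), (5, 60)]
print(quantile_regression(points, 0.5))  # median regression shrugs off the outlier
```

With tau at 0.5 this is median (least absolute deviations) regression, which like Theil-Sen recovers the gradient of 2 despite the outlier; other values of tau fit other quantiles of the data.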

More than a generation ago, when I was crunching numbers heavily in Classic Mac OS, the predecessors of Accelerate were known as SANE, the Standard Apple Numerics Environment. I still have a hardbound copy of the second edition of its reference manual, published by Addison-Wesley in 1988.

Working with matrices is understandably more complicated, so my next task is to get my head around the strengths and limitations of the Accelerate libraries, and work out how I can best use them to support linear programming. Then I can implement quantile regression, compare it with Theil-Sen regression, and decide which I should use in Stibium, to ensure that its results are as accurate as possible and not dominated by those outliers which were the start of all these problems.

Or to put it into song,
There was an old lady who swallowed a cow;
I don’t know how she swallowed a cow!
She swallowed the cow to catch the goat,
She swallowed the goat to catch the dog,
She swallowed the dog to catch the cat,
She swallowed the cat to catch the bird,
She swallowed the bird to catch the spider
That wriggled and jiggled and tickled inside her!
She swallowed the spider to catch the fly;
I don’t know why she swallowed a fly – Perhaps she’ll die!


The trick, I believe, is to stop short of the horse.