Measuring performance of different SSDs using Stibium 1.0b3

Encouraged by my initial results from a very early beta-test version of my utility to measure the performance of storage systems, Stibium, I have been progressing its development and looking at whether its results make sense.

Beta 3 now uses Mach absolute time rather than the system clock, so it should be more accurate when measuring small time differences. There is an important point when using Mach ‘ticks’ which can get you into trouble if you don’t apply the correct timebase scaling to convert from ticks to time units: on Intel processors the scaling factor is 1, so it is often ignored, but that’s very different when running native on an ARM processor.
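As a minimal sketch of that scaling (illustrative only, not Stibium’s actual code), the timebase supplied by mach_timebase_info must be applied before ticks mean anything in seconds:

```swift
import Darwin

// mach_absolute_time() returns ticks; scale by the timebase
// (numer/denom) to get nanoseconds. On Intel Macs the timebase is
// 1/1, so omitting the scaling happens to work; on Apple Silicon
// it is not 1/1, and unscaled results are simply wrong.
var timebase = mach_timebase_info_data_t()
mach_timebase_info(&timebase)

let start = mach_absolute_time()
// ... perform the operation being timed ...
let end = mach_absolute_time()

let elapsedTicks = end - start
let elapsedNs = elapsedTicks * UInt64(timebase.numer) / UInt64(timebase.denom)
let elapsedSeconds = Double(elapsedNs) / 1e9
```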

Anyone wanting to try out Stibium will also welcome the fact that this new beta now has a simple styled-text Help page. Included within that is the Swift code which I am using to perform read and write tests.
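To give a flavour of what such a timed test involves, here is a hedged sketch of a single whole-file read measurement. This is my illustration under assumed names, not the code from the Help page:

```swift
import Foundation

// Illustrative sketch of a single timed read test; timedRead is an
// assumed name, not Stibium's API. Returns bytes per second.
func timedRead(of url: URL) throws -> Double {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)

    let start = mach_absolute_time()
    let data = try Data(contentsOf: url)   // single whole-file read
    let end = mach_absolute_time()

    let ns = (end - start) * UInt64(timebase.numer) / UInt64(timebase.denom)
    let seconds = Double(ns) / 1e9
    return Double(data.count) / seconds
}
```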

Comparing different storage types

Although benchmarks for internal SSD storage in different Macs may be a matter of discussion, there’s good consensus on the performance of external storage. I therefore set out to measure single read and write transfer rates on 1 GB files on a range of different storage types I have to hand.

On my T2-equipped iMac Pro, internal SSD performance on 1 GB files was consistently good, but slower than on my M1 Mac mini. The T2 read at 2.1 GB/s, against the M1’s 3.5 GB/s, and the T2 wrote at 2.7 GB/s against the M1’s 3.4 GB/s.

When accessing an external SATA 6.0 Gb/s SSD over USB-C, the M1 was significantly slower when writing. The T2 read at 476 MB/s and wrote at 511 MB/s, which is of the order I had expected. The M1, though, wrote at only 392 MB/s (I don’t yet have a reliable read speed for the M1).

Putting the same class of SSD into an OWC ThunderBay 4 TB3 enclosure improved performance on the T2 iMac Pro, with a read speed of 569 MB/s and a write speed of 539 MB/s.

My Samsung X5 500 GB SSD running over Thunderbolt 3 delivered value for money, with a read speed of 2.1 GB/s on the T2, and write speeds of 2.3 GB/s (T2) and 2.1 GB/s (M1). Again, these are of the order which I have come to expect from other benchmark apps.

I therefore conclude that whatever Stibium is measuring is closely correlated with the generally accepted performance of a range of SSD media. That’s an important step forward.

Caching
From the outset, I’ve recognised the threat posed to performance testing by macOS’s obsession with caching everything it can. In my first set of tests, I steered clear of that by performing 19 other write and read operations between writing each test file and reading it back. Determining what actions are necessary to ensure that caching doesn’t invalidate results is clearly an important matter.

From these and previous tests, it appears that my code doesn’t run into problems with write caching. Although repeated writes can become slightly faster, they seem consistent through each batch of tests. The problem is with read caching: macOS, having just written a file, keeps it in memory, so when it comes to read that file back, it fetches it not from storage but from memory.

I ran into this in this series of tests, as my M1 Mac mini doesn’t have many large files which I could readily read between writing the test file and reading it back. Inadequate flushing returned consistent read rates of around 8 GB/s irrespective of the storage being accessed. Although that may seem slow for memory access, I suspect there’s a bit more going on. In any case, a read speed of 8 GB/s from an external SATA SSD is clearly erroneous.

In practice, this means that Stibium will be able to perform consecutive write tests, but their matching reads will have to be carefully spaced so as to avoid such caching problems. Unless, of course, someone knows how to temporarily, on a per-app basis, disable read caching for such relatively high-level access.
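One possible workaround operates per file rather than per app: the F_NOCACHE flag, set with fcntl, asks the kernel not to cache I/O on a given file descriptor. This is an assumption on my part about its suitability here, and data already cached from an earlier write may still be served from memory:

```swift
import Foundation

// Sketch of a read that requests the cache be bypassed.
// uncachedRead is an assumed name; F_NOCACHE applies only to this
// file descriptor, and pages already cached by a previous write
// may still be returned from memory.
func uncachedRead(from url: URL) throws -> Data {
    let handle = try FileHandle(forReadingFrom: url)
    defer { handle.closeFile() }
    if fcntl(handle.fileDescriptor, F_NOCACHE, 1) == -1 {
        perror("fcntl F_NOCACHE")    // report but continue
    }
    return handle.readDataToEndOfFile()
}
```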

Latency
Now that Stibium can measure time intervals as small as a few Mach ticks (on an M1, down to a couple of microseconds), I took the opportunity to look at the shortest times achieved in tests on the M1. For reading, those occur with file sizes of 10 KB, with an implied latency of 0.000017 seconds, giving a read speed of 588 MB/s on such small files. For writing, the file size is the same, 10 KB, but the latency is longer, at 0.0005 seconds, giving a speed of just 20 MB/s. At larger file sizes, this latency quickly becomes small compared with the transfer time.
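Those figures follow directly from speed = size ÷ time, taking 10 KB as 10,000 bytes and 1 MB as 1,000,000 bytes:

```swift
// Arithmetic behind the small-file figures above.
let fileSize = 10_000.0                       // bytes (10 KB)
let readTime = 0.000017                       // seconds
let readSpeed = fileSize / readTime / 1e6     // about 588 MB/s
let writeTime = 0.0005                        // seconds
let writeSpeed = fileSize / writeTime / 1e6   // 20 MB/s
```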

This demonstrates that there’s no point in trying to use Stibium for such small files, where results reflect the overhead rather than transfer performance.

Design aims

My next step is to bring together a series of write and read tests into a sequence, to see whether Stibium can avoid falling foul of caching and deliver batch results. Initially, I aim to do this programmatically, until I’m confident that I understand what’s feasible.

I’d like Stibium to let the user devise their own tests, specifying the operations to be performed and file sizes, probably in a property list, so that you can run the tests which are most relevant to your application. Once I understand the rules under which these need to be applied, this should improve the relevance of results.

I’m always open to your suggestions, and to constructive criticism.

Stibium version 1.0b3 is now available from here: stibium10b3