In an Intel Mac with a T2 chip, it's that chip which appears to act as the disk controller for the internal SSD, and it performs on-the-fly encryption and decryption as well. The T2 was released in 2017, and is based on the A10 SoC used in the iPhone 7 from 2016. The M1 chip in Apple Silicon Macs was released three years later, and is more advanced than the A14 Bionic used in the iPhone 12 from 2020. Although Apple has provided only limited details, the M1 includes the Secure Enclave, and it's likely that its 'Fabric' section contains the disk controller for the internal SSD. Comparing internal SSD performance between T2 and M1 models, you might therefore expect improved performance in the latter.
I’ve now completed my first extensive SSD benchmark testing on my iMac Pro, with its T2 chip, and my M1 Mac mini. I found significantly increased read speeds, but not write speeds, on the M1, and evidence that both systems attain very high read speeds over a similar range of file sizes. Details of how I performed these tests are given in the Appendix at the end. In short, I got my app Stibium to write 140 test files ranging from 10 KB to 2 GB in size, timing each write. I then restarted the Mac, and Stibium read each of those 140 files, again timing each read.
As in some of my early work with Stibium, I start by looking at the relationship between the size of each file read or written and the time taken. With ten measurements at each file size and the precision of Mach timing, the best way to do this is to fit a regression line to that relationship. This produces straight lines which fit the data points closely.
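To illustrate the method, here's a minimal sketch of how a regression line recovers a transfer speed: the gradient of a fit of time against bytes is seconds per byte, so its reciprocal is the speed in bytes per second. The measurements below are synthetic, constructed to mimic the iMac Pro's write figures, and are not the real test data.

```python
# Estimate transfer speed by regressing elapsed time against file size.
# The 'measurements' below are synthetic, for illustration only.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, gradient)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical measurements: bytes written, and seconds taken,
# constructed around a 2.9 GB/s rate with a 0.0006 s intercept.
sizes = [1e7, 1e8, 5e8, 1e9, 2e9]
times = [0.0006 + s / 2.9e9 for s in sizes]

intercept, gradient = linear_fit(sizes, times)
speed_gb_s = 1 / gradient / 1e9   # gradient is seconds per byte
print(f"intercept ≈ {intercept:.4f} s, speed ≈ {speed_gb_s:.1f} GB/s")
```

Because the intercept absorbs any fixed per-file overhead, the gradient gives a cleaner estimate of sustained throughput than simply averaging size divided by time for each file.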
For the iMac Pro, the Y intercepts are very close to 0 seconds (around 0.0006 for writing, and 0.00004 for reading), and the gradients give an overall write speed of 2.9 GB/s, and read speed of 2.2 GB/s. Those compare with median values of 2.7 and 2.2 GB/s respectively.
The M1 Mac mini results also regressed well to linear relationships, with a small number of outliers. Y intercepts are both slightly below 0 seconds, and the overall write speed calculated from the gradient is 2.6 GB/s (median 2.6), and the read speed 2.8 GB/s (median 2.9).
I therefore conclude that, over file sizes between 10 KB and 2 GB, both SSDs are fast: the iMac Pro is slightly faster at writing (2.9 against 2.6 GB/s), while the M1 is significantly faster at reading (2.8 against 2.2 GB/s).
Looking at the results from the read tests on both SSDs, some were exceptionally high: the maximum for the iMac Pro was 5.9 GB/s, and for the M1 an almost incredible 10.8 GB/s.
Transfer rates by file size
Using linear regression to estimate transfer rates is unusual in disk benchmarking, although regression is widely used to tackle similar problems in many other fields. To look at exceptionally high transfer rates, I revert to a method more commonly used in disk benchmarking: plotting transfer rate against the amount of data transferred. Because of the range of file sizes, and the importance of smaller ones, I here set the X axis to a logarithmic scale, which spreads the sizes more evenly across the range.
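A sketch of how the data for such a plot can be prepared: each measurement yields a rate of size divided by time, the rates are grouped by file size, and log10 of the size serves as the X coordinate. The numbers below are invented for illustration, not taken from the tests.

```python
import math
from collections import defaultdict

# Hypothetical (size_bytes, seconds) measurements; not the article's data.
results = [
    (10_000, 0.00005), (10_000, 0.00006),
    (1_000_000, 0.00020), (1_000_000, 0.00018),
    (2_000_000_000, 0.9), (2_000_000_000, 0.95),
]

# Transfer rate for each measurement, in GB/s, grouped by file size.
by_size = defaultdict(list)
for size, secs in results:
    by_size[size].append(size / secs / 1e9)

for size in sorted(by_size):
    rates = by_size[size]
    # log10(size) is the X coordinate on a logarithmic axis,
    # which spreads the sizes evenly across the plot.
    print(f"size 10^{math.log10(size):.1f} B: "
          f"min {min(rates):.2f}, max {max(rates):.2f} GB/s")
```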
On the iMac Pro, write speeds (in red) generally increase with file size until they reach a maximum at about 50 MB, beyond which they change little. Read speeds also start low and become quite constant, but at file sizes of 100 KB to 1 MB there is very high scatter, with the majority of results higher than 3.0 GB/s and reaching as much as 5.9 GB/s, more than double the speeds attained between 100 MB and 2 GB.
Results for the M1 Mac mini follow much the same pattern, although its read and write speeds are closer to one another, and the maximum read speeds are even higher, reaching 10.8 GB/s. Six of the ten measurements at 500 KB exceed 6 GB/s, as do five of the ten at 1 MB.
The pattern of these very high speeds is almost identical between the T2 and the M1. Having ruled out the possibility that these files were being read from caches in memory rather than from the SSD, I can only conclude that these are real transfer rates, and represent accelerated reads within a specific size range. This gives credence to earlier ATTO benchmarks, which showed similarly high read speeds from 12 MB upwards.
Why do these high speeds appear? I’ve heard of a ‘burst’ mode for writing to SSDs, but haven’t come across a read burst mode. Alignment effects could conceivably produce this, but it would be odd for them to affect only a discrete band of file sizes. Note too that these files are all sized in ‘decimal’ bytes (1 KB = 1,000 bytes), not binary bytes (1 KiB = 1,024 bytes); as alignment usually works in binary sizes, none of the file sizes in these tests should align well.
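That alignment argument is easy to check. A quick sketch, assuming a typical 4 KiB (4,096-byte) page or block size, confirms that none of the decimal file sizes used here falls on a binary boundary:

```python
# Check whether 'decimal' file sizes fall on 4 KiB (4096-byte) boundaries.
# 4096 is an assumed typical page/block size, not a documented figure
# for these SSDs.
PAGE = 4096
decimal_sizes = [10_000, 100_000, 500_000, 1_000_000, 2_000_000_000]

for size in decimal_sizes:
    rem = size % PAGE
    status = "aligned" if rem == 0 else f"{rem} bytes past a page boundary"
    print(f"{size:>13,} bytes: {status}")
```

Every one of these sizes leaves a non-zero remainder, so if alignment were the cause, it would have to act despite the misalignment, not because of good alignment.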
Have you any ideas to explain this unusual performance?
Appendix – methods
The methods used were intended to provide a consistent environment during normal running, one comparable between the two Macs, and to eliminate any risk of files being cached in memory, which would give spurious results.
My app Stibium 1.0b4 was used, running as a Universal App. Source code of the critical sections is provided in the app’s Help page. Writing uses the Swift function Data.write(to:), and reading NSData.init(contentsOf:options:) with the option NSData.ReadingOptions.uncached. Timing is performed using mach_absolute_time(), with appropriate timebase correction.
The sequence used on each Mac was:
1. Restart the Mac and allow a couple of minutes for it to initialise and settle.
2. Ensure the test folder is on the internal SSD, in ~/Documents, and empty.
3. Open Stibium, and configure it to write 10 repeats with 0x41 bytes throughout each file, with the Verbose option on.
4. When no backups are being made, click the Write… button and select the test folder as the destination.
5. Once complete, copy the output to a text document for analysis in Numbers and DataGraph.
6. Restart the Mac and allow a couple of minutes as before.
7. Ensure the test folder is on the internal SSD, in ~/Documents, and contains the 140 files written earlier.
8. Repeat steps 3-5 above, with the No Cache option ticked, clicking the Read… button instead of Write….
The write test writes a series of 14 file sizes increasing from 10 KB to 2 GB, cycling through them ten times to produce 140 files. No file copying is involved: each file is named and written from scratch. The read test reads all 140 files from the test folder in a sequence determined by the file system, not in the order in which they were written or named. Although that isn’t random, it’s far from sequential.
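The write phase can be sketched in outline. This Python sketch only illustrates the procedure, and is not Stibium's Swift implementation; the sizes, file names and folder used here are placeholders, and the real tests use 14 sizes spanning 10 KB to 2 GB.

```python
import os
import tempfile
import time

def write_test(folder, sizes, repeats=10):
    """Write `repeats` files at each size, filled with 0x41 bytes,
    timing each write. Returns a list of (size, seconds) tuples."""
    timings = []
    n = 0
    for _ in range(repeats):
        for size in sizes:
            data = bytes([0x41]) * size          # file content is all 0x41
            path = os.path.join(folder, f"test-{n}.dat")
            n += 1
            t0 = time.perf_counter()
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())             # ensure data reaches the disk
            timings.append((size, time.perf_counter() - t0))
    return timings

# Tiny illustrative run with placeholder sizes.
with tempfile.TemporaryDirectory() as d:
    results = write_test(d, sizes=[10_000, 100_000], repeats=2)
    print(len(results), "files written")
```

Note the fsync call: without forcing the data to the disk, small writes can complete from the write cache and report unrealistically high speeds, which is exactly the artefact the methods here are designed to avoid.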