Last Week on My Mac: Has Time Machine slowed?

Before Catalina 10.15.3, Time Machine seemed to work well, at least when backing up to local storage. For years, I’d kept my backups on a Promise Pegasus RAID with its four spinning hard disks. When I eventually replaced them with an OWC Thunderbay 4 full of costly SSDs, those backups got pleasantly quicker, at least until I updated my iMac Pro to Catalina 10.15.3 on 28 January 2020.

Something broke then, and full Time Machine backups suddenly took an age. I discovered the offender after looking in the log: the hidden .DocumentRevisions-V100 folder, containing the macOS versioning system database, was bringing each backup to a standstill, sometimes copying only one tiny item a second. I reported this bug, but it was never fixed in Catalina; the fix only arrived with Big Sur, by which time Time Machine was already backing up to APFS volumes.

The fix adopted by Apple was essentially the same as the one I had already proposed and implemented myself: add .DocumentRevisions-V100 folders to Time Machine’s exclusion list. Since then, others have encountered similar problems elsewhere: many have noticed it in their Photos libraries, in Apple’s Xcode app, and in strange out-of-the-way folders. The only common factor is that, when backing up folders containing seriously large numbers of very small files, some of which may be hard links, the rate of copying falls to a ridiculously low level.

For those backing up to local storage, this may be tolerable, but it hurts most those backing up to network storage, such as a NAS. Last April I documented this as a reproducible problem when backing up Xcode, and have since advised those using Time Machine for their backups, whether to HFS+ or APFS, to exclude items like Xcode, and any folders containing very large numbers of small files.
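
If you want to set such an exclusion yourself, the simplest route is tmutil addexclusion in Terminal; the same can also be done in code, using the long-standing CSBackupSetItemExcluded() call in CoreServices. The Swift sketch below is only an illustration, and its path is hypothetical, so substitute whichever folder is dragging your backups down.

import Foundation
import CoreServices

// Illustration only: this path is hypothetical, so substitute the folder
// you want Time Machine to skip.
let folder = URL(fileURLWithPath: "/Users/me/LotsOfSmallFiles")

// Passing false for excludeByPath sets a 'sticky' exclusion stored as
// metadata on the item itself, the equivalent of tmutil addexclusion
// without -p. Path-based exclusions (true) normally require root privileges.
let status = CSBackupSetItemExcluded(folder as CFURL, true, false)

if status == 0 {    // 0 is noErr
    // Check that the exclusion has taken effect.
    let excluded = CSBackupIsItemExcluded(folder as CFURL, nil)
    print("Excluded from Time Machine backups: \(excluded)")
} else {
    print("Failed to set exclusion, OSStatus \(status)")
}

You can confirm the result from the command line with tmutil isexcluded.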

At the time, we had much debate over whether this was primarily an issue with SMB, which was perhaps something of a red herring. By Monterey 12.1, SMB performance appears to have improved considerably, but there are still users with local or network storage whose backups turn pedestrian whenever backupd hits a folder with very large numbers of small files.

Looking back, before 10.15.3 Time Machine never seemed to have problems copying Xcode, or the .DocumentRevisions-V100 folder. Exclude those, and anything like them, from backups now, and it performs well, even to a NAS via SMB. I’ve recently been benchmarking a range of different NAS systems, and the fastest transfer rate for a first full backup reached 35 MB/s, with a second backup hitting 37 MB/s, over 1 Gb/s Ethernet to hard disk. But for those tests I knew I had to exclude Xcode, or they would have returned dismal results.

Monterey introduced a new hidden feature in Time Machine: before making its first backup to a new backup set, backupd runs a speed test on the destination. You can inspect the results using several of my utilities, such as T2M2, Mints, Ulbow or Consolation 3. Typical entries might be:
Checking destination IO performance at "/Volumes/External2TBssd 1"
Wrote 1 50 MB file at 240.05 MB/s to "/Volumes/External2TBssd 1" in 0.208 seconds
Concurrently wrote 500 4 KB files at 10.85 MB/s to "/Volumes/External2TBssd 1" in 0.189 seconds

It’s unclear what Time Machine does with those results, or why it should perform the second test using many small files, unless, perhaps, Apple already knows there’s a problem.
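
Out of interest, it’s simple to run a similar test yourself. The Swift sketch below writes one 50 MB file, then 500 files of 4 KB each concurrently, and reports throughput in the same terms. The file counts and sizes are taken from those log entries; everything else, including the destination path, is my assumption rather than Apple’s code.

import Foundation
import Dispatch

// Rough imitation of backupd's destination IO test, based only on the log
// entries above: one 50 MB file, then 500 files of 4 KB written concurrently.
// The destination path is an assumption; point it at your own backup volume.
let destination = URL(fileURLWithPath: "/Volumes/External2TBssd 1", isDirectory: true)
let fm = FileManager.default

// Single 50 MB file of zeroes.
let bigFile = destination.appendingPathComponent("speedtest-large.dat")
let bigData = Data(count: 50 * 1_000_000)
var start = Date()
try? bigData.write(to: bigFile)
var elapsed = Date().timeIntervalSince(start)
print(String(format: "Wrote 1 50 MB file at %.2f MB/s in %.3f seconds",
             50.0 / elapsed, elapsed))

// 500 files of 4 KB each, written concurrently.
let smallData = Data(count: 4 * 1_000)
start = Date()
DispatchQueue.concurrentPerform(iterations: 500) { i in
    let url = destination.appendingPathComponent("speedtest-small-\(i).dat")
    try? smallData.write(to: url)
}
elapsed = Date().timeIntervalSince(start)
// 500 × 4 KB = 2 MB in total.
print(String(format: "Concurrently wrote 500 4 KB files at %.2f MB/s in %.3f seconds",
             2.0 / elapsed, elapsed))

// Remove the test files afterwards.
try? fm.removeItem(at: bigFile)
for i in 0..<500 {
    try? fm.removeItem(at: destination.appendingPathComponent("speedtest-small-\(i).dat"))
}

Comparing the two rates gives a feel for just how much more expensive many small files are to copy than one large one, which is exactly where Time Machine’s problem seems to lie.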

As it’s very nearly two years since I first noticed this bug, and all I can see are workarounds that leave some users with interminably slow backups, it’s surely time for Apple to come clean and explain what’s going wrong, and when it will be fixed. For many, the benefits of being able to back up to APFS storage are now being outweighed by dismally poor performance.