Does removing I/O throttling make backups faster?

One of the most common complaints about Time Machine backups has been performance. When Time Machine to APFS (TMA) was introduced in Big Sur, we all hoped it would reduce the time required to make large backups, in particular the first full backup to a new backup store. Although TMA has realised some of that promise, other backup utilities can still deliver better performance.

With the advent of M1 Macs, this problem could extend more widely too. The major determinant appears to be I/O policy, under which TMA’s access to storage is restricted to ensure that other processes are given priority. That restriction is imposed by the IOPOL_THROTTLE policy, which could also be imposed more generally on threads run at the minimum Quality of Service (QoS). This article considers whether backups made by TMA, and other threads run at background QoS, are slowed by the IOPOL_THROTTLE policy.

Documentation

The most detailed documentation of I/O policy is in the man page for the getiopolicy_np() call in the Standard C Library, which was revised in 2019 with important consequences for TMA and more.

The current version of the man page explains that this “can mean either the I/O policy for I/Os to local disks or to remote volumes. I/Os to local disks are I/Os sent to the media without going through a network, including I/Os to internal and external hard drives, optical media in internal and external drives, flash drives, floppy disks, ram disks, and mounted disk images which reside on these media. I/Os to remote volumes are I/Os that require network activity to complete the operation. This is currently only supported for remote volumes mounted by SMB or AFP.”

The previous version of this man page, from 2006, explicitly excluded “remote volumes mounted through networks (AFP, SMB, NFS, etc) or disk images residing on remote volumes.”

Five different policies are supported:

  • IOPOL_IMPORTANT, the default, where I/O is critical to system responsiveness.
  • IOPOL_STANDARD, which may be delayed slightly to allow IOPOL_IMPORTANT to complete quickly.
  • IOPOL_UTILITY, for brief background threads which may be throttled to prevent impact on higher policy levels.
  • IOPOL_THROTTLE, for “long-running I/O intensive work, such as backups, search indexing, or file synchronization”, which will be throttled to prevent impact on higher policy levels.
  • IOPOL_PASSIVE, for mounting files from disk images and the like, intended more for server-type situations, so that lower policy levels aren’t slowed by them.
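
These policies can also be read and set in code. Here’s a minimal sketch in C, assuming only the getiopolicy_np() and setiopolicy_np() calls declared in sys/resource.h, which reports the current process’s disk I/O policy and then demotes it to IOPOL_THROTTLE:

/* iopolicy.c: report this process's disk I/O policy, then demote it.
   Build: clang -o iopolicy iopolicy.c */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* Read the current disk I/O policy for the whole process. */
    int policy = getiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_PROCESS);
    if (policy == -1) { perror("getiopolicy_np"); return 1; }
    printf("current policy: %d (IOPOL_THROTTLE is %d)\n", policy, IOPOL_THROTTLE);

    /* Demote the process to the backup/indexing tier described above. */
    if (setiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_PROCESS, IOPOL_THROTTLE) == -1) {
        perror("setiopolicy_np");
        return 1;
    }
    printf("policy now: %d\n", getiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_PROCESS));
    return 0;
}

A single thread can be targeted instead by passing IOPOL_SCOPE_THREAD as the second argument.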

Modifying I/O policy

Although the command tool taskpolicy can launch tasks with the IOPOL_THROTTLE policy removed, it’s unable to change that policy on background processes such as backupd which are already running, nor is it capable of rescheduling processes run at low QoS. However, sysctl can modify kernel states, which include global I/O policy. Some years ago, it was discovered that the user can globally disable IOPOL_THROTTLE with the command (run as root)
sudo sysctl debug.lowpri_throttle_enabled=0
although that doesn’t persist across restarts, and isn’t documented in the man page for sysctl; normal throttling can be restored by setting the same variable back to 1. For convenience, this is provided as an option in St. Clair Software’s App Tamer, to “Accelerate Time Machine backups”, for those who’d rather avoid the command line.
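
The same kernel state can be read and written programmatically through sysctlbyname(). Here’s a hedged sketch in C: debug.lowpri_throttle_enabled is undocumented, changing it requires root, and it may change or disappear in a future macOS.

/* throttle_toggle.c: read the global I/O throttle state, then disable it.
   Build: clang -o throttle_toggle throttle_toggle.c
   Must be run as root for the write to succeed. */
#include <stdio.h>
#include <sys/sysctl.h>

int main(void) {
    int enabled = 0;
    size_t len = sizeof(enabled);
    /* debug.lowpri_throttle_enabled is an undocumented kernel variable. */
    if (sysctlbyname("debug.lowpri_throttle_enabled", &enabled, &len, NULL, 0) != 0) {
        perror("sysctlbyname (read)");
        return 1;
    }
    printf("low-priority throttling enabled: %d\n", enabled);

    int off = 0;  /* 0 disables IOPOL_THROTTLE globally; 1 restores it */
    if (sysctlbyname("debug.lowpri_throttle_enabled", NULL, NULL, &off, sizeof(off)) != 0) {
        perror("sysctlbyname (write)");  /* fails unless run as root */
        return 1;
    }
    return 0;
}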

To assess the effect of the IOPOL_THROTTLE policy, standardised TMA backups were run with and without the policy in force, on an M1 Pro MacBook Pro backing up to an external USB-C SATA SSD under macOS 12.2.1. As it’s unclear whether the policy extends to other threads run at the lowest QoS setting, the performance of compression and decompression of a 10 GB test file on the internal SSD was also measured, using my free utility Cormorant, one of the few apps which can perform such storage-intensive tasks at selectable QoS levels.
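
One direct way to probe that question is to ask a thread running at background QoS which I/O policy it reports. This minimal sketch, assuming only libdispatch and the getiopolicy_np() call described above, runs a block at QOS_CLASS_BACKGROUND and prints the thread-scope disk I/O policy:

/* qos_iopolicy.c: report the disk I/O policy seen at background QoS.
   Build: clang -o qos_iopolicy qos_iopolicy.c */
#include <stdio.h>
#include <dispatch/dispatch.h>
#include <sys/resource.h>

int main(void) {
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    /* Run at the minimum QoS class, as Cormorant does at its lowest setting. */
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0), ^{
        int policy = getiopolicy_np(IOPOL_TYPE_DISK, IOPOL_SCOPE_THREAD);
        /* Note: a result of IOPOL_DEFAULT wouldn't rule out QoS-based
           throttling being applied separately inside the kernel. */
        printf("background-QoS thread policy: %d (IOPOL_THROTTLE is %d)\n",
               policy, IOPOL_THROTTLE);
        dispatch_semaphore_signal(done);
    });

    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    return 0;
}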

Time Machine backups

Before each TMA backup starts, backupd performs two checks of disk performance, writing one 50 MB file and 500 files of 4 KB each to the backup volume. In these checks, disabling the IOPOL_THROTTLE policy had little effect, leaving transfer rates slightly slower:

  • policy on – single file 204 MB/s, 500 files 18.3 MB/s;
  • policy off – single file 200 MB/s, 500 files 17.5 MB/s.

However, turning the policy off brought a huge increase in transfer rates during the backup itself:

  • policy on – copying phase only 193 MB/s, overall backup 160 MB/s;
  • policy off – copying phase only 332 MB/s, overall backup 276 MB/s.

Those rates were measured on backups requiring the copying of more than 10 GB, in 3590 and 267 files respectively.

Minimum QoS threads

Results from Cormorant weren’t as impressive. Compression of a standard 10 GB test file took 60.3 s with the policy on, and 55.6 s with it off. Decompression was almost identical, at 8.2 s with policy on, and 8.3 s with it off. This isn’t convincing evidence that IOPOL_THROTTLE is applied by default to threads run at minimum QoS, and further work needs to be done to clarify whether that’s true.

Global disadvantages

The big disadvantage of using kernel states to disable the IOPOL_THROTTLE policy completely is that this can only be applied globally, which includes I/O for Spotlight indexing, file synchronisation, and other background tasks. While it could be used to great effect to accelerate a large first full backup, leaving the policy disabled in normal circumstances could readily lead to adverse side-effects, in which indexing threads swamp important user storage access.

Conclusions

  • Disabling the IOPOL_THROTTLE policy can lead to great improvements in the performance of TMA backups.
  • Those improvements should be seen when backing up to local storage, and could also apply to network backups, for example over SMB to a NAS.
  • Effects on other threads run at minimum QoS aren’t as convincing, so it’s not possible to know whether they too are run with the IOPOL_THROTTLE policy.
  • Because I/O policy can only be controlled globally, side-effects could be serious, so this is best reserved for special situations such as performing large first full backups, when user processes can be kept to a minimum.
  • With faster I/O and larger backups, the user needs finer control over I/O throttling than is currently supported in macOS. This is most important on M1 Macs, where TMA remains disappointingly slow in spite of the high performance of Apple Silicon.