Scheduled activities: 2 Running the activity

In the previous article, I explained how Duet Activity Scheduler (DAS) schedules background activities, then calls on Centralized Task Scheduling (CTS) to run them. This article looks at how CTS works, continuing the example of an automatic timed backup by Time Machine.

False completion

The starting point is the request from DAS for CTS to proceed with the activity.
28:38.564 CTS Initiating XPC Activity: (0x7f7882b0db70)

CTS then dispatches the activity, reporting in the log when it starts to run.
28:38.564 CTS _xpc_activity_dispatch: beginning dispatch, activity name, seqno 0
28:38.565 CTS Running XPC Activity (PID 331): (0x7f7882b0db70)

Almost immediately, CTS records that the XPC activity has completed; DAS, it appears, is still catching up.
28:38.565 CTS Completed XPC Activity: (0x7f7882b0db70)
28:38.565 DAS STARTING activity <private>!
28:38.565 DAS Activity has been running for 4.986921946207683e-07 minutes

The elapsed time given there is a mere 0.00003 seconds!

DAS reports the activity complete, noting its priority (QoS) of 30, which appears high, falling between the two highest defined QoS values, userInitiated and userInteractive (see below). CTS then submits that activity to DAS for rescheduling an hour later.
28:38.566 DAS COMPLETED at priority 30 <private>!
28:38.566 DAS NO LONGER RUNNING ...Tasks running in group [] are 4!
28:38.566 CTS Rescheduling XPC Activity: (0x7f7882b0db70)
28:38.568 DAS Submitted Activity: at priority 30 with interval 3600 (Sat Jan 21 22:28:37 2023 - Sat Jan 21 22:28:37 2023)

As far as DAS and CTS are concerned, the activity was run at the correct time, and now that it’s complete, it won’t be run again for another hour. This may appear confusing, as no backup has been performed yet. However, it’s one way to ensure that the next scheduled backup occurs in an hour’s time, rather than creeping forward by however long the backup itself takes.

Running the activity

Time Machine’s backupd doesn’t start making that backup for another 30 seconds, when salient log entries include:
29:08.681 TM Starting automatic backup
29:09.864 TM Backing up 1 volumes to ThunderBay2 (/dev/disk9s2,6): /Volumes/ThunderBay2
29:09.864 TM Checking destination IO performance at "/Volumes/ThunderBay2"
29:09.988 TM Wrote 1 50 MB file at 404.40 MB/s to "/Volumes/ThunderBay2" in 0.124 seconds
29:10.042 TM Concurrently wrote 500 4 KB files at 69.64 MB/s to "/Volumes/ThunderBay2" in 0.029 seconds
29:13.821 TM Finished copying from volume "External1"
29:16.158 TM Completed backup: 2023-01-21-212915.backup

The actual backup in this case took almost another 8 seconds to complete, which is very brief. Early in the sequence, Time Machine now runs disk write performance checks, here recording quite good performance from the destination SATA SSD used to store backups. What isn’t recorded is the QoS set for backup threads, another major determinant of performance. If we’re to believe the launch daemon configuration file, the scheduled XPC service is given a Priority (QoS) of ‘utility’.

Quality of Service

Thread priority is set by an assigned Quality of Service (QoS), chosen from the macOS standard list:

  • QoS 9 (binary 001001), named background and intended for threads performing maintenance, which don’t need to be run with any higher priority.
  • QoS 17 (binary 010001), utility, for tasks the user doesn’t track actively.
  • QoS 25 (binary 011001), userInitiated, for tasks that the user needs to complete to be able to use the app.
  • QoS 33 (binary 100001), userInteractive, for user-interactive tasks, such as handling events and the app’s interface.

There’s also a ‘default’ QoS of 21 (binary 010101), falling between utility and userInitiated, and an unspecified value.

Most launch daemon and similar configuration files set system XPC services to a QoS of utility. From testing with user threads at different QoS, those with a QoS of background are essentially confined to running on the E cores, while those of higher QoS can be run on P or E cores, as macOS wishes. If the prevailing QoS for these XPC services is to be believed, then their threads would be distributed across both core types. But that isn’t what happens.
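As an illustration, the scheduling criteria in such a configuration file typically look something like the fragment below. The key names follow the usual com.apple.xpc.activity conventions; the activity label and interval here are invented for the sketch, not taken from any real daemon.

```xml
<key>LaunchEvents</key>
<dict>
    <key>com.apple.xpc.activity</key>
    <dict>
        <!-- hypothetical activity label, for illustration only -->
        <key>com.example.backup-activity</key>
        <dict>
            <key>Interval</key>
            <integer>3600</integer>
            <key>Repeating</key>
            <true/>
            <key>Priority</key>
            <string>Utility</string>
        </dict>
    </dict>
</dict>
```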

Core allocation

Although Activity Monitor’s CPU History window only shows averages rather than detail, the period soon after logging into an Apple silicon Mac is dominated by high load on the E cores, not the P cores.


It’s this period in which processes such as mds, mds_stores, mdworker, biomesyncd, triald and backupd are all very active, and taking significant CPU %, largely but not exclusively on E cores as shown in CPU History. But if all those are run at a QoS of utility, how does that happen?

The answer is that the priorities set in their configuration files don’t determine those of their worker threads, which are generally set to background. This ensures that Apple silicon Macs remain fully responsive to user processes from the moment you’ve logged in, as the brunt of macOS service activity is borne by its E cores.


The easiest way to watch all this happening is with the DAS Scheduling log window in my free utility Mints. Activity Monitor gives a general, averaged picture of core load, but details are best obtained from the command line tool powermetrics.