When something doesn’t change for a long time we tend to get complacent and cut corners. For many years, the precision Mach clock in Macs has ticked away in nanoseconds, and we’ve come to assume that it always will. Because the
machTimestamp written to each entry in the unified log has always been in nanoseconds, it’s easy to assume that it always will be, and to calculate time intervals accordingly.
Apple has warned us that its new Apple Silicon Macs won’t be the same, and continuing to make the assumption that Mach clock ticks occur once every nanosecond will prove seriously wrong. If any of your code uses
mach_absolute_time, or anything derived from it, including the
machTimestamp field in unified log extracts, now is the time to put your clock right.
As I have explained previously, Macs contain several different clocks and views of the time. Local reference clocks and timers which already work in proper time units such as seconds aren’t affected by this change. It’s hardware clock ticks, Mach precision time, which changes. On Intel processors, that tick has been one nanosecond long, so working out a time interval has been all too easy. Get the tick number (an unsigned 64-bit integer) at the first moment, get it at the second, take the first from the second and your answer is already in nanoseconds.
Apple’s only documentation that I can find is Technical Q&A QA1398, which dates from fifteen years ago and has now been archived. Sadly the CoreServices conversion functions it refers to have long since been removed from macOS. Maybe with this change, Apple might like to restore them?
What we should have been doing was applying a conversion factor to the difference in clock ticks, to convert from ticks to nanoseconds. Something like:
var info = mach_timebase_info()
mach_timebase_info(&info)
((secondTicks - firstTicks) * UInt64(info.numer)) / UInt64(info.denom)
returns the time interval in nanoseconds. The call to mach_timebase_info(&info) is essential: without it, numer and denom remain zero.
For macOS running on Intel processors, the numerator and denominator of
mach_timebase_info are both 1, so the conversion has often been omitted. Big Sur running on Apple Silicon Macs won’t be the same, though: the correction may well be large, and could vary from model to model. If you use any code which assumes that Mach ticks occur at nanosecond intervals, now is the time to incorporate this correction. Some ideas which may work for languages other than Swift and Objective-C are given in this handy cross-platform compilation.
This also affects entries written to the log. Every entry in the macOS unified log, including Signposts, features two separate records of the time. One is in the recognisable format of a
timestamp, such as
which resolves down to one microsecond (0.000001 of a second). The other is a direct reading of the Mach absolute time and termed the
machTimestamp, which is given in system ticks as a large positive integer like
If you store or process log records, the
machTimestamp from Apple Silicon Macs will be very different from that of Intel Macs. If you calculate time intervals from them, you’ll need to correct those using the
mach_timebase_info as above.
Unlike processor instructions, which Rosetta 2 automatically translates from Intel to ARM code, Mach tick values aren’t converted. If you run an Intel app which uses Mach ticks and fails to correct those values (or values derived from them, such as time intervals), all those times will be incorrect on Apple Silicon Macs. I know that this affects timing values in my own RouteMap, which calculates time intervals between Signpost entries in the unified log. I’ll be releasing a new version of RouteMap tomorrow to address this.
So don’t be surprised if running older apps on Apple Silicon systems sometimes results in strangeness in time, and time intervals. This is least likely in apps which were developed when PowerPC systems were still around, and in those which share code with an iOS version. If you find an app that is affected by this, there’s nothing the user can do other than report it to its developer as a bug.
I have a feeling that a few apps running on Apple Silicon will get this spectacularly wrong.