Apple’s announcement of its first Macs to feature higher-performance variants of its M2 chip series was uncharacteristically low-key. While early reports of their benchmarks are encouraging, better benchmark scores don’t always translate into better real-world performance. To interpret those scores, we need to consider the CPU cores and what they will deliver to our apps. Throughout this article, I’ll consider only the CPU cores, and ignore improvements in memory, the GPU and other hardware.
Apple silicon CPU cores are grouped in fours, each forming a cluster that runs at the same frequency and shares cache. M1 Pro and Max chips are unusual in having only half a cluster of Efficiency (E) cores, whose frequency is managed differently from the full cluster in a base M1 chip. This ensures that background tasks complete no more slowly on their two E cores than they would on a base M1 chip with twice that number.
Increasing the total number of cores for the M2 Pro and Max shouldn’t have been a difficult design choice. Adding more Performance (P) cores would have required a third cluster, greatly increased energy usage and heat production, and resulted in a larger and more expensive chip. E cores are smaller, more frugal in their energy use, and produce less heat. Adding two E cores was probably the least Apple could do to improve the performance of the M2 Pro/Max over those M1 variants.
The effect of this doubling of E cores on total CPU throughput is also limited. In M1 chips, an E core delivers around a third of the throughput of a P core, and it’s likely that will hold true for M2 chips. Thus, the base M2 provides the equivalent of around 5.2 P cores, and the M2 Pro/Max a total of around 9.2 P cores. Equivalent figures for the M1 variants are 5.2 and 8.6, so the capability gap between the base and Pro/Max chips has widened slightly.
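That arithmetic is easy to check. Here's a quick back-of-the-envelope calculation, taking the "around a third" estimate as an assumed ratio of 0.3, which reproduces the figures above:

```python
# Estimate total CPU throughput in "P-core equivalents", assuming each
# E core delivers roughly 0.3 of a P core's throughput (the "around a
# third" figure for M1, assumed here to carry over to M2).

E_CORE_RATIO = 0.3  # assumed E-core throughput relative to one P core

def p_core_equivalent(p_cores: int, e_cores: int) -> float:
    """Total throughput expressed in P-core equivalents."""
    return p_cores + e_cores * E_CORE_RATIO

chips = {
    "M1":         (4, 4),  # 4 P + 4 E cores
    "M1 Pro/Max": (8, 2),  # half an E cluster
    "M2":         (4, 4),
    "M2 Pro/Max": (8, 4),  # E cores doubled
}

for name, (p, e) in chips.items():
    print(f"{name}: ~{p_core_equivalent(p, e):.1f} P-core equivalents")
```

Note how little of the M2 Pro/Max's gain over its M1 predecessor (8.6 to 9.2) comes from the extra E cores, on this estimate.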
In practice, the more important question for the user is how much benefit they will see from those two additional E cores, and that depends on how macOS schedules threads. If what you do is comfortably handled by eight P cores, those extra E cores aren’t going to have much impact on overall performance. The additional cores will only make a noticeable difference when there are background tasks exceeding the capacity of two E cores, or threads of higher Quality of Service (QoS) that can usefully run on the extra cores. These are subtleties that you simply can’t read from traditional benchmarks.
Apple here has the great advantage that it now owns the whole show. macOS can and does allocate threads to different types of core according to whatever strategy Apple determines, and the QoS assigned by the developer. There’s no hardware Thread Director doing its own thing, and strategies can be changed with a macOS update. This should get the best use out of the cores available, but doesn’t guarantee that every app will run significantly faster on an M2 Pro/Max than on its M1 predecessor.
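As a rough illustration of that scheduling point, and nothing more, here's a toy model of QoS-aware thread placement. Apple's real policy is private, far more sophisticated, and can change with any macOS update; this sketch only captures the idea that background-QoS threads are confined to E cores while higher-QoS threads take P cores first and spill over:

```python
# A deliberately simplified sketch of QoS-aware thread placement.
# This is NOT how macOS actually schedules threads; it only illustrates
# the broad policy: background work is confined to E cores, while
# higher-QoS work prefers P cores and falls back to free E cores.

from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    qos: str  # "background", "utility", "userInitiated", "userInteractive"

def place(threads, p_cores=8, e_cores=4):
    """Return a mapping of thread name -> core type under the toy policy."""
    placement = {}
    free_p, free_e = p_cores, e_cores
    for t in threads:
        if t.qos == "background":
            # Background threads run only on E cores, queueing if all are busy.
            if free_e > 0:
                placement[t.name] = "E"
                free_e -= 1
            else:
                placement[t.name] = "queued"
        elif free_p > 0:
            placement[t.name] = "P"
            free_p -= 1
        elif free_e > 0:
            # Higher-QoS work can spill onto an idle E core.
            placement[t.name] = "E"
            free_e -= 1
        else:
            placement[t.name] = "queued"
    return placement
```

Under this model, with an M1 Pro/Max-style pair of E cores a third background thread has to queue, while with four E cores it runs immediately: exactly the kind of difference that only shows up under background load, not in a typical benchmark run.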
This could all change if macOS 14 gains more heavyweight background services that are likely to be scheduled on E cores. For those, an extra couple of cores could make a big difference.
The more interesting question is how Apple will configure the M2 Ultra. As the M1 Ultra is essentially a pair of M1 Pro/Max chips working as Siamese twins, the current chip has four E cores and 16 P cores. Will the M2 Ultra have one or two clusters of E cores? As it’s destined for desktop systems, where energy efficiency and heat matter less, it currently looks doubtful that eight E cores could be gainfully employed for the user, or deliver real-world performance improvements over the M1 Ultra.
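The same back-of-the-envelope estimate can be extended to Ultra configurations, again assuming an E core delivers around 0.3 of a P core's throughput; the eight-E-core M2 Ultra here is purely hypothetical:

```python
# Throughput in P-core equivalents for Ultra configurations, assuming
# each E core delivers ~0.3 of a P core. The two-E-cluster M2 Ultra is
# hypothetical, not an announced product.

E_CORE_RATIO = 0.3

def p_core_equivalent(p_cores, e_cores):
    return p_cores + e_cores * E_CORE_RATIO

m1_ultra    = p_core_equivalent(16, 4)  # shipping M1 Ultra: 16 P + 4 E
m2_ultra_4e = p_core_equivalent(16, 4)  # hypothetical M2 Ultra, one E cluster
m2_ultra_8e = p_core_equivalent(16, 8)  # hypothetical M2 Ultra, two E clusters

gain = (m2_ultra_8e - m2_ultra_4e) / m2_ultra_4e
print(f"M1 Ultra ~{m1_ultra:.1f}; a second E cluster adds only ~{gain:.0%}")
```

On this estimate, doubling the E clusters would add only around 7% to total throughput, and then only when every E core was kept busy, which supports the doubt expressed above.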
If you’re still running Intel Macs and have been waiting to switch to Apple silicon, these new M2 models are compelling. For those wanting a faster and more capable Mac mini, the decision should be even easier. If you already have M1 Macs, the choice is more nuanced. Until you can be confident that you’ll see a worthwhile improvement, you’d be wiser to wait and compare your current model with its replacement: look then at application benchmarks rather than benchmark applications.