Apple’s M2 chip uses a newer version of the CPU cores’ instruction set than the M1, increasing its capability and how well it should cope with future apps and versions of macOS.
M1 CPUs support ARMv8.5-A, which lacks the bfloat16 floating-point format now widely used in AI. That’s likely to put them at a disadvantage.
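For a rough idea of what bfloat16 is (my own aside, not from the article): it keeps only the sign, the 8-bit exponent and the top 7 mantissa bits of a 32-bit float, and newer versions of the instruction set handle it in hardware. The hypothetical functions below sketch that truncation in software.

```swift
// Hypothetical software conversion, for illustration only: bfloat16 keeps
// the sign, the 8-bit exponent and the top 7 mantissa bits of a 32-bit
// IEEE-754 Float. CPUs with hardware bfloat16 support do this natively.
func floatToBFloat16(_ x: Float) -> UInt16 {
    let bits = x.bitPattern                       // raw 32-bit pattern
    // round to nearest, ties to even, then keep the upper 16 bits
    let rounding: UInt32 = ((bits >> 16) & 1) &+ 0x7FFF
    return UInt16(truncatingIfNeeded: (bits &+ rounding) >> 16)
}

func bfloat16ToFloat(_ b: UInt16) -> Float {
    Float(bitPattern: UInt32(b) << 16)            // widen with 16 zero bits
}

let original: Float = 3.14159265
let roundTripped = bfloat16ToFloat(floatToBFloat16(original))
print(original, "->", roundTripped)               // 3.1415927 -> 3.140625
```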
M3 chips widen the gap between Pro and Max variants, and change the relative performance of P and E cores to make M3 CPUs more versatile.
The M1 cycle took 16 months from the base chip to the Ultra; that shortened to 12 months for the M2. As the first Mac Studios with the M2 Ultra were being prepared for shipping, the M3 cycle started.
Coping with 64-bit code, APFS, the change of CPU architecture, the SSV, System Settings, Recovery Mode, and how to get the best from migration and sharing in iCloud.
A comparison of two Intel and two Apple silicon Macs running vector and matrix functions from Apple’s Accelerate library. Was that new M3 worth the money?
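The tests themselves aren’t reproduced in that summary; as a hedged sketch of the general kind of Accelerate calls such comparisons time, the following uses vDSP_dotpr for the vector side and cblas_sgemm for the matrix side, with sizes and values chosen arbitrarily here.

```swift
import Accelerate

// Vector side: dot product of two large Float arrays with vDSP.
let n = 1_000_000
let a = [Float](repeating: 1.5, count: n)
let b = [Float](repeating: 2.0, count: n)
var dot: Float = 0
vDSP_dotpr(a, 1, b, 1, &dot, vDSP_Length(n))
print("dot =", dot)                          // 1.5 × 2.0 × 1,000,000 = 3,000,000

// Matrix side: C = A × B with single-precision BLAS GEMM.
let dim = 512
let A = [Float](repeating: 1.0, count: dim * dim)
let B = [Float](repeating: 1.0, count: dim * dim)
var C = [Float](repeating: 0.0, count: dim * dim)
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            Int32(dim), Int32(dim), Int32(dim),
            1.0, A, Int32(dim), B, Int32(dim),
            0.0, &C, Int32(dim))
print("C[0] =", C[0])                        // each entry sums 512 ones = 512.0
```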
Getting the best performance and energy efficiency from Apple silicon takes more than good hardware: both vary greatly with how apps are coded, as shown here.
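One example of such a coding choice (my illustration, not the article’s code) is the Quality of Service given to work, which on Apple silicon steers macOS towards P or E cores and so trades speed against energy.

```swift
import Foundation
import Dispatch

// Minimal sketch: the QoS a developer assigns largely decides whether work
// is eligible for P cores or is confined to E cores on Apple silicon.
func busyWork(label: String) {
    let start = Date()
    var x = 0.0
    for i in 1...50_000_000 { x += sin(Double(i)) }   // arbitrary CPU-bound load
    print(label, "finished in", Date().timeIntervalSince(start), "s (result \(x))")
}

let group = DispatchGroup()

// .userInitiated work may be scheduled on P cores
DispatchQueue.global(qos: .userInitiated).async(group: group) {
    busyWork(label: "userInitiated")
}

// .background work is normally confined to E cores
DispatchQueue.global(qos: .background).async(group: group) {
    busyWork(label: "background")
}

group.wait()
```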
How do you compare an undocumented, if not secret, co-processor? By running tests that draw very high power and can produce strange patterns of core allocation. So how does the M3 Pro fare?
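In outline, such a test is a tight loop of Accelerate calls, believed to be dispatched to the matrix co-processor, run long enough to draw sustained power while core allocation is watched in powermetrics or Activity Monitor. A minimal sketch of that style of load, with sizes and repeat counts chosen arbitrarily here:

```swift
import Accelerate
import Foundation

// Sustained load: repeat a vector dot product many times and time it.
let n = 10_000
let a = [Float](repeating: 1.0, count: n)
let b = [Float](repeating: 2.0, count: n)
var result: Float = 0

let start = Date()
for _ in 0..<1_000_000 {
    vDSP_dotpr(a, 1, b, 1, &result, vDSP_Length(n))
}
let elapsed = Date().timeIntervalSince(start)
print("1e6 dot products of \(n) floats in \(elapsed) s, last result \(result)")
```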
Comparisons with M1 variants, energy use of the M3 Pro against the Max, virtualisation, Game Mode, vector processing and matrix co-processing, all in summary.
Assessing throughput using tests of fast Fourier transforms and sparse Cholesky factorisation from the Accelerate library. Is there an AMX in there?
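As a rough sketch of the FFT side only (my own code with an arbitrary signal, not the article’s benchmark), a single forward transform with vDSP looks like this; a throughput test repeats it many times on fresh data, and the sparse Cholesky side would use Accelerate’s Sparse Solvers (SparseFactor with SparseFactorizationCholesky), omitted here for brevity.

```swift
import Accelerate
import Foundation

// Forward complex FFT of 2^14 = 16,384 points using vDSP's classic API.
let log2n: vDSP_Length = 14
let n = 1 << log2n

guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else {
    fatalError("couldn't create FFT setup")
}

var real = (0..<n).map { Float(sin(Double($0) * 0.1)) }   // arbitrary test signal
var imag = [Float](repeating: 0, count: n)

let start = Date()
real.withUnsafeMutableBufferPointer { rp in
    imag.withUnsafeMutableBufferPointer { ip in
        var split = DSPSplitComplex(realp: rp.baseAddress!, imagp: ip.baseAddress!)
        vDSP_fft_zip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
    }
}
print("forward FFT of \(n) points took \(Date().timeIntervalSince(start)) s")

vDSP_destroy_fftsetup(setup)
```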
