One of the major new hardware features of Apple Silicon Macs, including those launched on 10 November, is that they use “unified memory”. This article looks briefly at what this means, its consequences, and where the M1 and its successors are taking hardware design.
Previous hardware architectures have largely been modular. There’s the processor and memory; as memory management has become more complex, the two have grown more intertwined, with a separate or integrated memory management unit (MMU). Then there are peripherals like disks, which have their own interface modules, and the display is driven by a separate graphics card.
This has some advantages. It’s eminently upgradeable by the user: when you can afford a better graphics card, you can replace the existing one with a faster GPU and more graphics memory. You can also buy a basic version of a model with little main memory, and upgrade that when you can afford it. It has also allowed graphics cards to use smaller amounts of faster, more expensive memory accessible only to the GPU. Main memory isn’t shared between the CPU and GPU, although ironically more basic designs, such as laptops with simpler integrated graphics, may use main memory rather than the faster chips of a dedicated graphics card.
There are problems with this architecture, though. GPUs are now used for far more than driving the display, and their computing potential for specific types of numeric and other processing is in demand. So long as CPUs and GPUs each use their own local memory, simply moving data between those memories becomes an unwanted overhead. If you’d like to read a more technical account of some of the issues which have brought unified memory to Nvidia GPUs, you’ll enjoy Michael Wolfe’s article on the subject.
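That copying overhead can be pictured with a rough analogy in Python. No real GPU is involved here: `to_device`, `from_device` and `gpu_scale` are hypothetical stand-ins for a discrete card’s transfer and compute steps, not any actual GPU API.

```python
# A schematic sketch of the discrete-memory model, where the GPU has its
# own local memory. Every round trip duplicates the data: once on the way
# to the card, once on the way back. No real GPU API is used here.

def to_device(host_buf: bytearray) -> bytearray:
    """Stand-in for a host-to-device transfer: a full copy."""
    return bytearray(host_buf)

def from_device(device_buf: bytearray) -> bytearray:
    """Stand-in for a device-to-host transfer: another full copy."""
    return bytearray(device_buf)

def gpu_scale(device_buf: bytearray, factor: int) -> None:
    """Stand-in for a GPU kernel that scales each byte in place."""
    for i in range(len(device_buf)):
        device_buf[i] = (device_buf[i] * factor) % 256

host = bytearray(range(16))     # data prepared by the CPU
device = to_device(host)        # copy 1: into "GPU memory"
gpu_scale(device, 2)            # compute on the card
result = from_device(device)    # copy 2: back to "main memory"
# The original host buffer is untouched; two full copies were needed.
```

The point of the sketch is the two explicit copies wrapped around the compute step: the data exists in duplicate, and the transfers cost time even when the computation itself is fast.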
At the same time, chip design has changed, with far tighter integration of what have been separate chips into a System on a Chip (SoC), a field in which Apple is one of the leaders, largely as a result of its hardware development for iPhones and iPads. SoCs can run faster, use less power, and stay cooler, and when built in large quantities are considerably cheaper – all advantages which Apple has used in its products, but only now in Macs.
In this new model, CPU cores and GPUs access the same memory. When data being processed by the CPU needs to be manipulated by the GPU, it stays where it is. That unified memory is as fast to access as dedicated GPU memory, and completely flexible. When you want to connect a high-resolution display, you’re not limited by the memory tied to the GPU, but by the total memory available. Imagine the graphics capability of 64 or even 128 GB of unified memory.
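By the same rough analogy, Python’s `memoryview` gives a zero-copy view of a buffer, which is one way to picture the unified model: the “GPU” works on exactly the bytes the “CPU” allocated, with no transfer step. Again, `gpu_scale` is a hypothetical stand-in, not a real GPU API.

```python
# A rough Python analogy for unified memory: memoryview exposes the same
# underlying bytes without copying, so the "GPU" mutates exactly the
# buffer the "CPU" allocated.

def gpu_scale(view: memoryview, factor: int) -> None:
    """Stand-in for a GPU kernel operating directly on shared memory."""
    for i in range(len(view)):
        view[i] = (view[i] * factor) % 256

shared = bytearray(range(16))    # one allocation, visible to both sides
gpu_view = memoryview(shared)    # zero-copy view, not a duplicate
gpu_scale(gpu_view, 2)           # the "GPU" works in place

# No transfer step: the CPU sees the results in the same buffer.
```

Compare this with the discrete model’s two copies: here the data never moves, which is the whole attraction of unified memory for workloads that bounce between CPU and GPU.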
If demands made on unified memory are more variable, and could require that a high proportion of physical memory is used for graphics and the display, this might result in increased use of virtual memory, and CPU cycles lost to paging. That was certainly a problem when paging out to rotating disks, but with modern high-speed SSDs the effects on performance are minimised. After all, an internal SSD is, in effect, just larger and slightly slower-access memory. That requires tighter integration of internal storage too.
Apple’s M1 Macs are its first convergence of these features: sophisticated SoCs which tightly integrate CPU cores and GPUs, fast access to unified memory, and tightly-integrated storage on an SSD. Together they offer unrivalled versatility: what Apple positions as relatively low-end systems can turn their hand and speed to some of the most demanding tasks, while remaining cool, consuming little power, and being relatively inexpensive to manufacture in volume.
They break many of the concepts that we have come to accept. You may have needed 32 GB of upgradeable memory in the past, and internal storage which you can replace as your Music library grows. Now all the main components – CPU, GPU, memory and internal storage – are tightly integrated and interdependent. There are, and will remain, some hardware options, but you don’t need to specify a kit any more: M1 Macs come ready-built.