Explains how Huffman coding works, how it's used in compressed files, and how other techniques, including LZ, LZW and LZFSE, differ.
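As a minimal sketch of the core idea (not any particular implementation), Huffman coding assigns shorter bit strings to more frequent symbols by repeatedly merging the two least frequent nodes:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a dict mapping each symbol to its Huffman bit string."""
    freq = Counter(text)
    if len(freq) == 1:                      # degenerate case: one symbol only
        return {next(iter(freq)): "0"}
    # Heap entries: [frequency, tiebreak, list of [symbol, code] pairs]
    heap = [[f, i, [[s, ""]]] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)            # least frequent subtree
        hi = heapq.heappop(heap)            # next least frequent
        for pair in lo[2]:
            pair[1] = "0" + pair[1]         # left branch
        for pair in hi[2]:
            pair[1] = "1" + pair[1]         # right branch
        heapq.heappush(heap, [lo[0] + hi[0], count, lo[2] + hi[2]])
        count += 1
    return {s: code for s, code in heap[0][2]}

def encode(text, codes):
    return "".join(codes[c] for c in text)

def decode(bits, codes):
    # Huffman codes are prefix-free, so greedy matching is unambiguous
    rev = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return "".join(out)
```

For `"abracadabra"`, the most frequent symbol `a` receives the shortest code, and encoding followed by decoding recovers the original text exactly.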
Storage has to be reliable, efficient and resilient. However, efficiency and resilience oppose one another. What’s the best solution? New file formats, CRC in the file system, or what?
Lossy and lossless compression explained, with examples and an outline of how JPEG compression is performed.
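The distinction can be shown in a few lines of Python. A lossless codec (zlib here, standing in for any of them) round-trips exactly; a lossy scheme discards information, sketched below by coarse quantisation of sample values, loosely analogous to JPEG's quantisation of DCT coefficients:

```python
import zlib

# Lossless: every byte survives the round trip
data = b"lossless compression preserves every byte " * 4
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Lossy sketch: quantise 8-bit samples to 16 levels; the exact
# original values are unrecoverable, only approximations remain
samples = [3, 17, 200, 33]
quantised = [(s // 16) * 16 for s in samples]
```

After quantisation, `samples` can no longer be reconstructed from `quantised`, which is precisely the trade lossy formats make for smaller files.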
Now has three presets to control speed and energy consumption of compression and decompression, together with complete custom control.
How to work out how many threads and which cores are needed to achieve a compression rate of up to 1.7 GB/s, and how to estimate power and energy.
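The energy estimate reduces to simple arithmetic: time is size divided by throughput, and energy is average power multiplied by time. A sketch with hypothetical figures (only the 1.7 GB/s rate comes from the article; the size and power are illustrative assumptions):

```python
# Hypothetical figures for illustration only, not measured values
size_gb   = 10.0    # assumed amount of data to compress
rate_gbps = 1.7     # throughput in GB/s, as cited above
power_w   = 15.0    # assumed average package power while compressing

seconds  = size_gb / rate_gbps   # ≈ 5.9 s
energy_j = power_w * seconds     # energy in joules
print(f"{seconds:.1f} s, {energy_j:.0f} J")
```

Substituting measured power draw for the assumed 15 W turns this into a usable estimate for any workload size.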
Provided in Archive Utility and three command-line tools, its LZFSE compression even supports APFS special files like clones and sparse files.
If apps set the Quality of Service that determines how macOS allocates their threads to the different processor cores in an M1 chip, how can we have any control?
By segregating macOS background tasks on Efficiency cores, M1 Macs can run user apps unfettered on their Performance cores. And that feels really fast.
How the M1’s asymmetric cores can run background tasks more efficiently, or deliver high performance, according to Quality of Service.
How can you compress and decompress small files faster? In AppleArchive, just send more of them for processing at the same time.
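AppleArchive itself is a Swift/C framework, but the batching principle carries over to any compressor. A hedged Python sketch, using zlib and a thread pool purely to illustrate submitting many small buffers for processing at the same time:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for many small files: 64 buffers of 4 KB each
blobs = [bytes([i % 256]) * 4096 for i in range(64)]

def pack(blob):
    # zlib releases the GIL while compressing, so threads overlap usefully
    return zlib.compress(blob, level=6)

# Submit the whole batch at once rather than one buffer at a time
with ThreadPoolExecutor(max_workers=8) as pool:
    packed = list(pool.map(pack, blobs))
```

Each buffer still round-trips independently, so the batch can be unpacked in any order or in parallel as well.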