Since its introduction, APFS has never checked the integrity of the file data stored on it. Would that be a good idea, or should macOS switch to the ZFS file system instead?
This can now be incorporated into your own scripts to check the integrity of important files automatically.
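A minimal sketch of that kind of scripted check, using the `shasum` tool built into macOS; the directory path and manifest name here are illustrative assumptions, not part of the original article:

```shell
#!/bin/sh
# Record SHA-256 digests for the files in a directory, then verify
# them later to detect silent corruption. Paths are assumptions.

ARCHIVE_DIR="${1:-.}"          # directory whose files we want to protect
DIGESTS="checksums.sha256"     # manifest holding the recorded digests

record() {
    # Hash every file (except the manifest itself) and save the digests.
    find "$ARCHIVE_DIR" -type f ! -name "$DIGESTS" \
        -exec shasum -a 256 {} + > "$DIGESTS"
}

verify() {
    # Re-hash each file and compare against the manifest; shasum
    # reports FAILED for any file whose contents have changed.
    shasum -a 256 -c "$DIGESTS"
}

case "${2:-verify}" in
    record) record ;;
    verify) verify ;;
esac
```

Run it once with `record` after archiving, then periodically with `verify`; any line reporting `FAILED` marks a file whose data no longer matches its recorded digest.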
Can you use error-correcting codes to repair very large files, of around 20 GB or more?
What difference does it make when a file has a 512-byte block corrupted rather than a single byte? Results from image formats, and recovery using ECC.
For all their compactness and ease of access, are our files going to prove less durable than a clay tablet recording a commercial transaction over 4,000 years ago?
Tests bring some surprises, with encrypted sparse bundles proving resilient to small amounts of corruption.
Looks at plain text, CSV, XML, JSON, RTF, RTFD, .docx, .xlsx, and PDF. Which should you trust with your important documents in archives?
Which format – alongside Camera Raw – should you store archived images in: JPEG, PNG, TIFF or Apple’s new HEIC?
Simply having ECC enabled doesn’t mean that damaged files can be recovered. It appears promising, but needs careful real-world evaluation.
We assume a permanence we know to be impossible, and pretend that simply making ‘safe’ copies of important documents will preserve them for the future.