Integers of different sizes and endianness; floating-point numbers with a radix of 2, which can suffer rounding and cancellation errors and produce NaNs; and bfloat16 for AI.
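For example, a few lines of Swift can demonstrate each of those hazards; this is just an illustrative sketch, not code from the article:

```swift
import Foundation

// Sketch of the hazards above, using Swift's own numeric types.
print(0.1 + 0.2 == 0.3)            // false: 0.1 has no exact binary representation
print((1.0 + 1e-15) - 1.0)         // 1.1102230246251565e-15: cancellation amplifies
                                   // the rounding error left over from the sum
print(Double.nan == Double.nan)    // false: NaN compares unequal even to itself

let n = UInt16(0x1234)
print(String(format: "%04X", n.byteSwapped))  // 3412: same bytes, other endianness
```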
The Motorola 68000 CPU had no floating-point instructions, so Apple introduced SANE, then went on to the PowerPC's Velocity Engine, the Accelerate framework, and more.
Did you know that, in 64-bit double floating-point format, the number pi is 4009 21FB 5444 2D18? Now you can discover that, and convert back from hexadecimal to ordinary decimal, using Mints.
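You can verify that bit pattern in a couple of lines of Swift, using the standard library's bitPattern property; a minimal sketch:

```swift
import Foundation

// Print the IEEE 754 bit pattern of Double.pi as hexadecimal,
// then rebuild the Double from that same pattern.
let bits = Double.pi.bitPattern
print(String(format: "%016llX", bits))            // 400921FB54442D18

let restored = Double(bitPattern: 0x400921FB54442D18)
print(restored)                                   // 3.141592653589793
```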
A new version fixes a bug and adds a new window for exploring floating-point number formats, as demonstrated here. And a surprise from Apple.
Few acts can excite an audience as much as the plate-spinner darting between crockery threatening to wobble out […]
Apple’s M2 chip implements a newer version of the instruction set in its CPU cores. That increases its capability, and thus how well it should cope with future apps and macOS, compared with the M1.
M1 CPUs implement ARMv8.5A, which lacks the new bfloat16 floating-point format now widely used in AI. That’s likely to put them at a disadvantage.
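To check any given Mac at run time, macOS publishes per-feature flags through sysctl; this sketch assumes the documented hw.optional.arm.FEAT_BF16 key on Apple silicon:

```swift
import Darwin

// Ask sysctl for the FEAT_BF16 flag that macOS publishes on
// Apple silicon; it reads 1 where the CPU cores implement bfloat16.
func hasBFloat16() -> Bool {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    let err = sysctlbyname("hw.optional.arm.FEAT_BF16", &value, &size, nil, 0)
    return err == 0 && value == 1
}

print(hasBFloat16() ? "bfloat16 supported" : "bfloat16 not supported")
```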
Some apps and other code don’t appear to run faster on M1 chips, and some even run more slowly. Could that be because they aren’t using the best acceleration for vectors and matrices?
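One way to use that acceleration is through Accelerate's vDSP, which replaces scalar loops with vectorised calls; a minimal sketch:

```swift
import Accelerate

// Elementwise multiplication with vDSP instead of a scalar loop,
// letting Accelerate choose the best SIMD path for the host CPU.
let a = [Double](repeating: 1.5, count: 1_000)
let b = [Double](repeating: 2.0, count: 1_000)

let product = vDSP.multiply(a, b)   // [Double], each element a[i] * b[i]
print(product.first!)               // 3.0
```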
Many processors, including those implementing ARM64, have instructions to perform fused multiply-add operations. Do they deliver reduced error and better performance?
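Here's a small Swift sketch of the error reduction: the standard addingProduct(_:_:) method performs a fused multiply-add with a single rounding, so it can recover the error that a plain multiplication discards:

```swift
// a and b are chosen so that their exact product, 1 - 2^-60,
// can't be represented as a Double and must round up to exactly 1.0.
let a = 1.0 + 0x1p-30
let b = 1.0 - 0x1p-30

let naive = a * b                            // rounds to 1.0
let residual = (-naive).addingProduct(a, b)  // fused: a*b - naive, one rounding
print(naive)                                 // 1.0
print(residual)                              // -8.673617379884035e-19, i.e. -2^-60:
                                             // the rounding error the fused op recovers
```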
A new tailor-made log view shows what’s happened in interactions with the App Store, and can give clues as to the cause of delays and failures.
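Similar information can also be read straight from the unified log in code. This sketch uses the OSLogStore API; the com.apple.appstored subsystem name is an assumption, and reading the local store requires root or a special entitlement:

```swift
import Foundation
import OSLog

// Pull the last hour of App Store-related entries from the unified log.
// The subsystem name below is an assumption, not taken from the article.
do {
    let store = try OSLogStore.local()
    let start = store.position(date: Date().addingTimeInterval(-3600))
    for entry in try store.getEntries(at: start) {
        if let log = entry as? OSLogEntryLog,
           log.subsystem == "com.apple.appstored" {
            print(log.date, log.composedMessage)
        }
    }
} catch {
    print("Couldn't read the log store: \(error)")
}
```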
