Which types of CPU core are most active during Visual Look Up? How do their frequencies and active residencies change? How demanding is it?
If Visual Look Up is so easy and low-power for Apple silicon Macs, maybe Tahoe’s new Foundation Models will prove more challenging, and wake up the neural engine.
A single image was processed on an M4 Pro, with content analysis, object recognition and look-up, while powermetrics and log entries recorded what happened. How much power and energy did that use?
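For anyone wanting to repeat that kind of measurement, here's a minimal sketch that drives Apple's powermetrics tool from Swift and prints its CPU report. It has to be run as root, and the sampler, interval and sample count shown are just reasonable choices, not those used for the article's figures.

```swift
import Foundation

// Run powermetrics for a short burst while triggering Visual Look Up,
// then print its CPU report (cluster frequencies, active residencies
// and CPU power). powermetrics requires root privileges.
let powermetrics = Process()
powermetrics.executableURL = URL(fileURLWithPath: "/usr/bin/powermetrics")
powermetrics.arguments = [
    "--samplers", "cpu_power",  // per-cluster frequency, residency and power
    "-i", "1000",               // one sample every 1000 ms
    "-n", "10"                  // ten samples, then exit
]

let pipe = Pipe()
powermetrics.standardOutput = pipe

try powermetrics.run()

// Read until the tool exits and closes the pipe, then print the report.
let data = pipe.fileHandleForReading.readDataToEndOfFile()
powermetrics.waitUntilExit()
if let report = String(data: data, encoding: .utf8) {
    print(report)
}
```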
You can disable Live Text in Language & Region settings, but what exactly does that do? Does it also block Visual Look Up, or connections to Apple?
We’re in the midst of a drought in the UK. We’ve all been recommended to delete old emails and photos to economise on water use. Is that wise?
How to find images containing objects recognised by Visual Look Up, and text recognised by Live Text, using Spotlight, mdfind, or code in an app.
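As an illustration of the in-app approach, this sketch uses NSMetadataQuery to ask Spotlight for local images whose recognised text contains a given word. It assumes that text found by Live Text is indexed under kMDItemTextContent; the search term "invoice" is only an example.

```swift
import Foundation

// Ask Spotlight for local images whose indexed text content contains "invoice".
// A rough mdfind equivalent is:
//   mdfind "kMDItemContentTypeTree == 'public.image' && kMDItemTextContent == '*invoice*'cd"
let query = NSMetadataQuery()
query.predicate = NSPredicate(
    format: "kMDItemContentTypeTree == 'public.image' && kMDItemTextContent CONTAINS[cd] %@",
    "invoice"
)
query.searchScopes = [NSMetadataQueryLocalComputerScope]

var token: NSObjectProtocol?
token = NotificationCenter.default.addObserver(
    forName: .NSMetadataQueryDidFinishGathering,
    object: query,
    queue: .main
) { _ in
    query.stop()
    // Print the path of each matching image.
    for case let item as NSMetadataItem in query.results {
        if let path = item.value(forAttribute: NSMetadataItemPathKey) as? String {
            print(path)
        }
    }
    if let token { NotificationCenter.default.removeObserver(token) }
    exit(0)
}

guard query.start() else {
    fatalError("Spotlight query failed to start")
}
// NSMetadataQuery delivers its results asynchronously, so keep the run loop alive.
RunLoop.main.run()
```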
How text within images is recognised as Live Text, and objects of interest are classified, using VisionKit, mediaanalysisd, Espresso, the ANE and more.
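The private pipeline behind mediaanalysisd can't be called directly, but Vision's public requests perform the same kind of text recognition and classification, and run on the ANE or GPU where they can. The sketch below is only an illustration using an example image path and an arbitrary confidence threshold, not the code macOS itself runs.

```swift
import Foundation
import Vision

// Recognise text and classify the contents of a single image using the
// public Vision framework, the nearest equivalent to what Live Text and
// Visual Look Up do behind the scenes.
let url = URL(fileURLWithPath: "/tmp/example.jpeg")  // example path only

let textRequest = VNRecognizeTextRequest()
textRequest.recognitionLevel = .accurate        // the slower, neural path

let classifyRequest = VNClassifyImageRequest()  // scene and object labels

let handler = VNImageRequestHandler(url: url)
try handler.perform([textRequest, classifyRequest])

// Text found in the image, much as Live Text would present it.
for observation in textRequest.results ?? [] {
    if let best = observation.topCandidates(1).first {
        print("text:", best.string)
    }
}

// Classification labels of the kind used to feed Spotlight's indexes.
for observation in (classifyRequest.results ?? []).prefix(10)
    where observation.confidence > 0.3 {
    print("label:", observation.identifier, observation.confidence)
}
```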
macOS gained Live Text and Visual Look Up when Apple declared its intent to check our images. Both feed Spotlight’s indexes, but in practice they appear unreliable and highly variable.
Does Sonoma analyse and classify your images? What content does it obtain from them? How can you discover which search terms it recognises? And could this be used to detect CSAM?
A thorough look at Live Text, and why it might need to connect to Apple’s servers. Could it be sending image identifiers or text extracted from your local images?
