Visual Look Up also recognises flowers, landmarks and pets, as well as well-known paintings. Here’s how it recognises them, and how Live Text differs.
Visual Look Up
The first phase analyses the image, detecting and classifying any objects within it. When the user clicks on the white dot, the process completes with a search for the best match.
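That two-phase flow can be sketched in outline. This is an illustrative analogy only, not Apple’s implementation: the function names, labels and lookup table are all hypothetical stand-ins for the on-device classifier and the remote match search.

```python
# Illustrative two-phase look-up pipeline, loosely analogous to
# Visual Look Up. All names and data here are hypothetical.

def classify_objects(image_path):
    # Phase 1: analyse the image and return candidate regions with
    # coarse class labels. Faked with a fixed result for demonstration.
    return [{"region": (10, 10, 200, 200),
             "label": "painting",
             "confidence": 0.92}]

def search_best_match(candidate):
    # Phase 2: runs only when the user clicks the white dot; the
    # candidate is matched against a knowledge source. Faked here
    # as a small lookup table.
    knowledge = {"painting": "Mona Lisa (Leonardo da Vinci)"}
    return knowledge.get(candidate["label"], "no match")

candidates = classify_objects("photo.jpg")   # phase 1: immediate
best = search_best_match(candidates[0])      # phase 2: on user click
print(best)
```

The point of the split is that classification can run locally and cheaply on every image, while the more expensive match search is deferred until the user actually asks for it.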
Not only does this version of Mints extract information from the log detailing what happens during Visual Look Up, but it includes its own browser window to perform look-ups in.
Remember Apple’s failed attempt to detect CSAM in images? Would that have been similar to the way that Visual Look Up works? Is this the thin end of the wedge?
You might have been using Visual Look Up for a few months now, or could still be unable to get it to work. Why some features aren’t available everywhere, or on all supported Macs.
A promising start for a new feature which could, with a little improvement, become a uniquely powerful tool.
By any measure, Monterey’s 12.3 update is very substantial, introducing major new features like Universal Control and Spatial […]
This should be one of the most transformative features we’ve seen in a recent version of macOS. Its OCR is excellent, and it uses the recognised text to link to relevant knowledge.
