For some images, Visual Look Up fails so completely that it’s not even offered. Could this be exploited as a way of blocking image recognition?
VisionKit
Visual Look Up also recognises flowers, landmarks, pets and well-known paintings. Here’s how it does those, and how Live Text differs.
The first phase analyses, classifies and detects objects within the image. When the user clicks on the white dot, the second phase completes the process with a search for the best match.
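That first phase is exposed in VisionKit’s public API. The sketch below (Swift, macOS 13 or later; the function name is my own) runs only the analysis and classification step, then checks whether any Visual Look Up results were found — which is what determines whether the white dot is offered at all; the search for the best match only happens later, when the user clicks that dot.

```swift
import AppKit
import VisionKit

// A minimal sketch, assuming macOS 13 or later.
// ImageAnalyzer performs the first phase only: analysis and classification.
@MainActor
func canOfferVisualLookUp(for image: NSImage) async -> Bool {
    guard ImageAnalyzer.isSupported else { return false }
    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration([.visualLookUp])
    do {
        let analysis = try await analyzer.analyze(image,
                                                  orientation: .up,
                                                  configuration: configuration)
        // If analysis finds nothing, the white dot is never offered.
        return analysis.hasResults(for: .visualLookUp)
    } catch {
        return false
    }
}
```

In an app, the returned `ImageAnalysis` would normally be handed to an `ImageAnalysisOverlayView`, which draws the white dot and handles the user’s click.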
Not only does this version of Mints extract information from the log detailing what happens during Visual Look Up, but it also includes its own browser window for performing look-ups.
Remember Apple’s failed attempt to detect CSAM in images? Would that have been similar to the way that Visual Look Up works? Is this the thin end of the wedge?
A promising start for a new feature that could, with a little improvement, become a uniquely powerful tool.