For some images, Visual Look Up fails so completely that it’s not even offered. Could this be exploited as a way of blocking image recognition?
NeuralHash
Visual Look Up also recognises flowers, landmarks, pets and well-known paintings. Here’s how it does so, and how Live Text differs.
In the first phase, the image is analysed to detect and classify any objects within it. When the user clicks on the white dot, the second phase completes the process with a search for the best match.
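The two-phase structure described above can be sketched in code: an eager first phase that detects and classifies, and a deferred second phase that only searches for a match when the user asks for it. This is purely an illustrative sketch; the class, function and data names below are assumptions, not Apple’s internal API.

```python
# Hypothetical two-phase lookup pipeline, mirroring the description above:
# phase 1 (detect/classify) runs when the image is analysed; phase 2 (the
# match search) runs only when the user clicks the white dot. All names
# here are illustrative, not Apple's actual implementation.

class LookupPipeline:
    def __init__(self, classifier, index):
        self.classifier = classifier   # phase-1 detector/classifier
        self.index = index             # phase-2 search index, keyed by label

    def analyse(self, image):
        """Phase 1: detect and classify objects, returning detections."""
        return self.classifier(image)  # e.g. [("flower", bounding_box), ...]

    def on_click(self, detection):
        """Phase 2: only now search the index for the best match."""
        label, _bbox = detection
        candidates = self.index.get(label, [])
        return max(candidates, key=lambda c: c["score"], default=None)

# Toy usage with stub data standing in for the real models and database:
classifier = lambda img: [("flower", (10, 10, 50, 50))]
index = {"flower": [{"name": "rose", "score": 0.91},
                    {"name": "tulip", "score": 0.83}]}

pipeline = LookupPipeline(classifier, index)
detections = pipeline.analyse("image-bytes")
print(pipeline.on_click(detections[0]))   # highest-scoring candidate
```

Splitting the work this way keeps the expensive search off the critical path: nothing is looked up remotely until the user signals interest.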
Apple’s controversial proposals to check photos for CSAM content before they’re uploaded to iCloud depend on image matching, as explained here.
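To make the idea of image matching concrete, here is a toy perceptual hash of the "average hash" family: downscale, threshold against the mean, and compare hashes by Hamming distance, so near-duplicate images land close together. This is a deliberately simple sketch for illustration only; Apple’s NeuralHash instead derives its hash from a neural network’s embedding of the image, and every name below is an assumption.

```python
# Toy "average hash" for near-duplicate image matching. NOT NeuralHash:
# NeuralHash hashes a neural-network embedding, while this simply averages
# pixel blocks. Images are plain 2D lists of grayscale values (0-255).

def average_hash(pixels, size=8):
    """Downscale to size x size by block averaging, then emit one bit per
    cell: 1 if the cell is brighter than the overall mean, else 0."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(size):
        for j in range(size):
            # average the pixel block that maps onto cell (i, j)
            r0, r1 = i * h // size, (i + 1) * h // size
            c0, c1 = j * w // size, (j + 1) * w // size
            block = [pixels[r][c] for r in range(r0, r1)
                                  for c in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A slight brightness shift leaves the hash unchanged, while inverting the
# image flips every bit, so matching tolerates small edits but not
# different content.
img = [[r * 16 + c for c in range(16)] for r in range(16)]
noisy = [[v + 1 for v in row] for row in img]        # small global change
inverted = [[255 - v for v in row] for row in img]   # very different image

print(hamming(average_hash(img), average_hash(noisy)))     # → 0
print(hamming(average_hash(img), average_hash(inverted)))  # → 64
```

The point of a scheme like this, and of NeuralHash, is that matching works on hashes rather than the images themselves; it also shows why an image that defeats the hashing stage would never reach the matching stage at all.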