I’ve seen a few comments here and elsewhere from users who don’t think that Monterey has anything to attract them. In this article, I try to explain why one feature – Live Text – should prove one of the most important new features in any recent version of macOS.
For as long as I’ve been using computers (since around 1977), text has been text, and pictures have been pictures. Once you put your text inside an image, however well you can read it, it’s no longer extractable, although there have been some notable exceptions. Apple’s Classic Mac OS PICT format could retain text without it being assimilated into an image. PostScript and its derivatives EPSF and PDF separate text and other graphic objects, but as we’ve seen, extracting text from them often only ends in tears.
If you’ve wanted to extract the text from within an image, whether a scan, photograph or fine art, you’ve been able to use optical character recognition (OCR) software. That’s designed to work with scanned pages: throw it an image with a painted inscription such as Murillo’s self-portrait (below) and it simply doesn’t know what to look for.
Live Text is much more versatile. If you’re running a beta-release of Monterey, go to this page and scroll down to that self-portrait, or use the copy above. Click on the image to open it at full window size, then click again to zoom to full size. Hover your pointer over the inscription at the foot and it should shortly change into an I-beam text selector. You can then select the whole of that inscription and drag and drop its text into a TextEdit document.
Select just the name Murillo and bring up the contextual menu (Control-click, etc.). From that menu, select Look Up “Murillo” to see a brief biography of the artist. Select Siri Knowledge and you’ll get a fuller biography and a link to Wikipedia’s lengthy account.
OCR in Live Text is better than any that I’ve used. On PNG images of rather blurry printed text, it’s nearly perfect over page after page. It’ll also tackle a lot of text which conventional OCR won’t touch, like handwriting and inscriptions such as that in Murillo’s painting. When the going gets really tough, my experienced brain still does much better, but that’s also far more labour-intensive.
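Apple doesn’t expose Live Text itself to developers, but the same on-device text recognition has been available through the Vision framework since Catalina, and it gives a feel for what’s happening under the hood. This is a minimal sketch, not a reconstruction of Live Text; the image path is a placeholder for your own file.

```swift
import Foundation
import Vision

// Placeholder path: point this at your own scanned or photographed image.
let imageURL = URL(fileURLWithPath: "/path/to/scan.png")

// Build a text-recognition request; .accurate favours quality over speed,
// which is the trade-off Live Text appears to make for static images.
let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    for observation in observations {
        // Each observation offers ranked candidate strings with a confidence score.
        if let best = observation.topCandidates(1).first {
            print("\(best.string)  (confidence: \(best.confidence))")
        }
    }
}
request.recognitionLevel = .accurate
request.usesLanguageCorrection = true

// Run the request against the image on disk.
let handler = VNImageRequestHandler(url: imageURL, options: [:])
try handler.perform([request])
```

Even this short sketch illustrates why Live Text copes with degraded text: recognition returns ranked candidates with confidence scores rather than a single guess, and language correction cleans up near-misses.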
Add Visual Look Up – which we used to find information about Murillo in the demo above – and the whole is greater than the sum of its parts. Select Princeton, New Jersey in an image of a scientific paper, and you’ll be offered information about that location, and can even find it in Maps. Live Text isn’t just about recovering text from images, but connecting that text with knowledge.
The most important feature in Live Text isn’t its excellent OCR, or its link with Visual Look Up, both of which could have been done in a specialist app, and for all I know may already have been. It’s that this works in Safari, Preview and other everyday tools which we use in macOS. In Monterey there are no longer the age-old barriers between text and pictures. See a word in a photo and you can immediately use that as a springboard to more information without touching your keyboard.
When Apple originally provided full details of what’s coming in Monterey, Live Text was listed as only being available on M1 Macs. Those using current beta-releases of Monterey on Intel Macs report that Live Text works there too. It might not be as swift as with the M1’s Neural Engine, but it still seems every bit as good.
Now if there was some way to train Live Text to recognise and read signatures on paintings…