triald is all about Machine Learning

I’ve recently drawn attention to a new service introduced in Big Sur which can, at times, consume high CPU and disk space: Trial, seen through its background service triald and its frequent appearance in the log. I’m very grateful to all those who have provided information to help discover what this is doing and why. I think I’m much closer to understanding what’s going on.

While Apple now documents precious little about macOS internals, it does provide developers with tantalising glimpses inside. In this case, I refer to its documentation on Core ML, a suite of frameworks to support Machine Learning (ML). The way this works is that a developer creates a model based on a set of training data, such as a large collection of images or words. The model is made using an ML algorithm or method, most commonly neural networks these days, and is then built into an app which might sort images into categories, recognise objects within them, or analyse text. Domains supported by Core ML and its tools include:

  • Images (Vision)
  • Text (Natural Language)
  • Speech (conversion of audio to text)
  • Audio analysis
  • Numbers (numeric analysis and prediction).

Once the user has installed an app using Core ML on their Mac (or device, as this is also supported by iOS and iPadOS), its developer has two ways to refine and change the model: Core ML can be used on-device with that user’s data, or the developer can update the model remotely, outside of a conventional app update.

Apple explains how third-party developers can get their apps to download and compile models at runtime instead of bundling them within the app. It suggests this could be a good way to reduce the size of the app on the App Store, to pick the ‘right’ models for a particular user, or simply to update the model.

These models, compiled on-device by Core ML, are then stored in “a temporary location”, but Apple recommends using permanent storage in a folder within Application Support.
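Apple’s documented pattern for this can be sketched in Swift roughly as follows. The file paths here are invented for illustration; in a real app the `.mlmodel` file would first be fetched from the developer’s own server, for instance with `URLSession`:

```swift
import CoreML
import Foundation

// Hypothetical path to a model file the app has already downloaded.
let downloadedModelURL = URL(fileURLWithPath: "/tmp/Example.mlmodel")

do {
    // Core ML compiles the model on-device and returns a URL to the
    // compiled .mlmodelc bundle in a temporary location.
    let compiledURL = try MLModel.compileModel(at: downloadedModelURL)

    // Apple recommends moving the compiled model to permanent storage,
    // typically a folder within Application Support.
    let appSupport = try FileManager.default.url(
        for: .applicationSupportDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true)
    let permanentURL = appSupport.appendingPathComponent(compiledURL.lastPathComponent)
    _ = try FileManager.default.replaceItemAt(permanentURL, withItemAt: compiledURL)

    // The model can then be loaded from its permanent location.
    let model = try MLModel(contentsOf: permanentURL)
    _ = model
} catch {
    print("Model compilation or installation failed: \(error)")
}
```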

There are fuller details for deploying Model Collections, which Apple specifically touts as sending “models to users’ devices without submitting an app update”. When a developer does this, “the operating system on each user’s device automatically downloads the model collection from the deployment in the background.” While the developer can notify the user of the deployment, Apple doesn’t appear to recommend that.
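In a third-party app, accessing a deployed Model Collection is a matter of asking the system for the collection by identifier, with the OS handling the background download. A minimal sketch, in which the collection identifier and model name are hypothetical:

```swift
import CoreML

// "ImageClassifiers" stands in for an identifier the developer would
// register in Apple's Core ML Model Deployment dashboard.
_ = MLModelCollection.beginAccessing(identifier: "ImageClassifiers") { result in
    switch result {
    case .success(let collection):
        // Entries map model names to deployed models, which the operating
        // system has already downloaded in the background.
        if let entry = collection.entries["Classifier"] {
            MLModel.load(contentsOf: entry.modelURL,
                         configuration: MLModelConfiguration()) { loadResult in
                // Use the loaded model here, or handle the error.
                _ = loadResult
            }
        }
    case .failure(let error):
        print("Model collection unavailable: \(error)")
    }
}
```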

Currently, the major supplier of apps using Core ML is Apple. Trial, and possibly a sibling going under the name of Biome, which also has its own root directory in ~/Library, appears to be Apple’s system for deploying new and updated models for use in Siri, Photos image analysis and recognition, Visual Look Up, Live Text, and other features in Big Sur and Monterey.

Security is clearly a concern here. These late-deployed models don’t pass through any App Store approval process, nor the malware checks involved in notarization. They aren’t executable code in the normal sense, but it’s unclear whether malicious code of any form could be embedded within them. There seems little to stop a malware developer from selling or giving away an innocuous app through the App Store and using deployment of models to distribute malicious content under the radar.

I have seen both XProtect and MRT run at the same time as triald activity in the log, and it’s likely that Apple has anticipated some security concerns over this mechanism. However, that only helps against malware that can be detected by Apple’s tools, and on-device processing by Core ML could even be exploited to help avoid detection.

More to the point is that deployment of changes to an app, whether it’s built into macOS or created by a third-party developer, takes place without the user even being informed, let alone being given the option to decline them. In this sense, parts of macOS are now automatically updated regardless of your settings in Software Update.

It remains strange that Apple should use terms like trial and experiment, if these are just model updates. Despite what some may think, ML doesn’t normally proceed by running large-scale trials or experiments across user systems. The normal sequence of events runs:

  • developers build libraries of training and test data
  • the ML model is built using the training data to train the chosen ML algorithm
  • that model is validated against test data (a step often omitted, it seems)
  • the model is deployed to user systems.

Training, sometimes known as learning but never trialling or experimenting, is normally the most demanding step, as it involves both questions (raw data) and answers (what the algorithm is supposed to detect or predict). Once that’s complete, the algorithm should perform very well against samples of that training data. That’s why validation testing is so important, as it assesses the algorithm’s performance on unseen data, comparing its results against those obtained by some gold standard method.
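On macOS, that sequence maps onto Apple’s Create ML framework roughly as follows. The CSV file and its column names are invented for illustration:

```swift
import CreateML
import Foundation

do {
    // Load a labelled dataset; "data.csv" and its "label" column are
    // hypothetical stand-ins for a real training library.
    let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "data.csv"))

    // Split into training and test sets before any training takes place.
    let (trainingData, testData) = data.randomSplit(by: 0.8, seed: 5)

    // Training: fit the chosen algorithm to the training data, the most
    // demanding step, as it pairs raw data with known answers.
    let classifier = try MLClassifier(trainingData: trainingData,
                                      targetColumn: "label")

    // Validation: assess performance on data the model has never seen.
    let evaluation = classifier.evaluation(on: testData)
    print("Classification error on unseen data: \(evaluation.classificationError)")

    // Deployment: write the model out for bundling or later deployment.
    try classifier.write(to: URL(fileURLWithPath: "Classifier.mlmodel"))
} catch {
    print("Training failed: \(error)")
}
```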

Although Apple rightly considers that on-device training is valuable, it’s demanding and normally impossible to validate. I don’t recall any on-device training being used by Apple’s software. It’s easily recognised, as it requires the user to assess whether the algorithm has obtained the ‘correct’ answer. For example, in object recognition the user would need to confirm whether the algorithm has recognised the object(s) correctly by marking the results. So when using Visual Look Up, the user would have to be able to tell the app whether its answer was correct, a requirement for either training or testing phases.

I don’t think for a moment that Apple is actually using our Macs and devices to augment its training and testing datasets. Doing so without informing users would open it to severe criticism. How Trial came to be so named, and why it runs ‘experiments’, I’ll never fathom.