AI and robots: a better understanding

Much of our loosest language and most meme-driven mush emanates from journalists.

Over the last few months, they have been peddling two opposing themes: how artificial intelligence is making everything work better, and how robots are a huge threat to us. AI is machine translation, Siri, and all sorts of wonderful new computer features, and is supergood. Robots are cold, ruthless, out of control, underhand, and dangerously bad. In journalism, there seems to be no interest in the gradual, the progressive, or the middling.

Artificial intelligence is rather less popular among those who have worked in the field, as the term is about as crisp and clear as mountain mist. There is still no generally accepted definition of human intelligence, and ongoing debate over whether the concept of animal intelligence is even meaningful; how anyone can start referring to a machine as intelligent is therefore a complete mystery. To the outsider, many of the things that computers (in the broadest sense) do appear to mimic or even outperform the human mind. In truth, most of these use clever but fixed analytical solutions, embodied in algorithms and implemented in programming code.

No matter how sophisticated and ‘clever’ the algorithm or code, fixed and unchanging code bears little resemblance to human thought. The distinctive property of the human mind is its ability to adapt and learn, and what most researchers consider closest to that is machine learning: systems which change or adapt in some way to improve their own performance.

A classic model for this is a software system which is ‘trained’ on an input dataset to predict (or classify, or whatever its purpose is), so that it performs better when fed fresh data. Handwriting recognition systems and spam filters do this: you provide them with examples, tell the system what is right and what is wrong, and it steadily improves until its accuracy approaches your own.
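If that sounds mysterious, it need not be. Here is a toy sketch in Python of that train-then-predict pattern; the two-feature ‘messages’ and the nearest-centroid rule are purely illustrative assumptions of mine, not how any real product works.

```python
# A toy sketch of the train-then-predict pattern: 'train' on labelled
# examples, then classify fresh data. Features and rule are illustrative.

def train(examples):
    """Average the feature vectors seen for each label (a centroid)."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(model, features):
    """Pick the label whose centroid lies nearest the fresh input."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], features))
    return min(model, key=sq_dist)

# Training examples: (features, label) pairs...
model = train([([0.9, 0.1], "spam"), ([0.8, 0.2], "spam"),
               ([0.1, 0.9], "ham"), ([0.2, 0.8], "ham")])
# ...and the trained system handles fresh data it never saw:
print(predict(model, [0.7, 0.3]))   # -> spam
```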

Those two examples of machine learning illustrate well both its successes and its failures.

Most of us now use spam filters in our mail clients, and once they have been taught what is and is not spam, they perform so well that we barely have to correct them. They do not normally use any sophisticated form of machine learning: most of the best are based on Bayesian probability, first described by the Reverend Thomas Bayes in a posthumous paper read to the Royal Society in 1763.
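The arithmetic involved is humble enough to fit in a few lines. The sketch below applies Bayes’ rule, with the usual ‘naive’ independence simplification, to toy corpora; the tiny word lists are illustrative assumptions of mine, though the add-one smoothing and log-probability bookkeeping are the standard tricks.

```python
from collections import Counter
from math import log

# A toy naive Bayes spam filter. The corpora and tokenisation are
# illustrative; the Bayes rule arithmetic is the real thing.
spam_docs = ["win money now", "cheap money offer", "win offer now"]
ham_docs = ["meeting schedule today", "project meeting notes", "lunch today"]

def word_counts(docs):
    return Counter(word for doc in docs for word in doc.split())

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_posterior(message, counts, total, prior):
    # P(class | words) is proportional to P(class) times the product of
    # P(word | class); add-one smoothing stops unseen words zeroing it.
    score = log(prior)
    for word in message.split():
        score += log((counts[word] + 1) / (total + len(vocab)))
    return score

def classify(message):
    spam_score = log_posterior(message, spam_counts, spam_total, 0.5)
    ham_score = log_posterior(message, ham_counts, ham_total, 0.5)
    return "spam" if spam_score > ham_score else "ham"

print(classify("cheap money win"))      # -> spam
print(classify("project lunch today"))  # -> ham
```

Correcting the filter’s mistakes simply adds the misjudged message to the right pile of counts, which is why these filters improve the more you use them.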

Handwriting recognition has been, to be blunt, a dismal failure, despite huge sums of money, vast numbers of brilliant minds, and several innovative devices (like Apple’s Newton) being thrown at it. If you do not want to input text using a keyboard, you are better off using speech or optical character recognition, although neither of those has exactly set the world on fire either.

Many of the techniques used in machine learning are based on analogies. Popular at the moment are neural networks, which are modelled on highly simplified nerve cells like those found in central nervous systems. The first of these, the perceptron, was thought to be of enormous value and importance until 1969, when Minsky and Papert demonstrated that a single layer could not even learn a function as simple as exclusive-or; it was then hurriedly abandoned. Modern neural networks should be much more powerful and capable, but there is always the danger of repeating the errors of the perceptron.
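A perceptron is simple enough to reconstruct in a few lines. The sketch below (my own toy version, not Rosenblatt’s hardware) learns the linearly separable AND function without difficulty, but no amount of training will let a single layer separate XOR, which is essentially the limitation exposed in 1969.

```python
# A single-layer perceptron. It learns the linearly separable AND
# function, but no choice of weights can ever separate XOR.

def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                 # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    w, b = train_perceptron(data)
    print(name, [predict(w, b, x1, x2) for (x1, x2), _ in data])
    # AND converges to [0, 0, 0, 1]; XOR never matches its targets.
```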

My favourite machine learning technology is genetic programming, devised by John R Koza in the 1980s as a way of harnessing evolutionary processes. During the 1990s it became a popular tool in investment and other financial trades, and it has spawned many other techniques based on evolutionary biology. If you prefer metallurgical models, there is simulated annealing, devised around 1983, among others.
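Simulated annealing is particularly pleasing to sketch: let a search accept occasional bad moves while it is ‘hot’, then cool it so that it settles into a deep minimum, just as slowly cooled metal settles into a low-energy state. The bumpy objective function and the cooling schedule below are illustrative choices of my own.

```python
import math
import random

def objective(x):
    # A bumpy curve with local minima; the global minimum is near x = -1.3.
    return x * x + 10 * math.sin(x)

def anneal(start=8.0, temp=10.0, cooling=0.995, steps=5000):
    x, best = start, start
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature falls.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if objective(x) < objective(best):
            best = x
        temp *= cooling
    return best

random.seed(1)
x = anneal()
print(round(x, 2), round(objective(x), 2))
```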

In today’s flush of enthusiasm for artificial intelligence, it is being credited with much that involves no intelligence at all. A recent promotion by Intel, for example, claimed that artificial intelligence is now important in weather forecasting. Although some specialised forecasting systems do now feature some machine learning, the most successful systems for routine weather forecasts (such as the ECMWF model) are based on huge and computationally complex physical models. They take input data, but they do not learn in any meaningful sense. They have more in common with the engineering modelling systems used to design structures such as bridges, and are quite unlike any human form of intelligence.
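The distinction is easy to see in code. A physical model simply steps fixed equations forward from its input data; nothing in it learns. The one-dimensional heat diffusion below is a stand-in of mine, many orders of magnitude simpler than anything the ECMWF runs, but the character is the same.

```python
# A physical model in miniature: fixed equations stepped forward from
# input data, with nothing learned along the way.

def diffuse(temps, alpha=0.1, steps=20):
    """Explicit finite-difference stepping of dT/dt = alpha * d2T/dx2."""
    t = list(temps)
    for _ in range(steps):
        t = [t[i] + alpha * (t[i - 1] - 2 * t[i] + t[i + 1])
             if 0 < i < len(t) - 1 else t[i]   # fixed boundary values
             for i in range(len(t))]
    return t

# 'Input data' (an initial temperature field), but no training:
initial = [0.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0]
print([round(x, 1) for x in diffuse(initial)])  # the heat spreads out
```

Run it twice on the same input and you get the same output; no amount of data makes it any better at its job, which is exactly the point.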

This love affair with machine intelligence becomes all the more puzzling when you compare it with the bad press that robotics has been getting, and with the similarly sloppy, if not downright incorrect, uses of the term robot. These peaked this summer, when a gunman in Dallas was killed by explosives delivered to him by a police remote-controlled bomb-disposal ‘wheelbarrow’, which was widely termed a robot.

We are all familiar with radio-controlled cars and aircraft, and know that they are not robots. To be a robot requires some degree of autonomy: not only is the device mobile, but its own systems determine where it goes and what it does. The bomb wheelbarrow in question is descended from the first such remotely-controlled devices, developed in 1971 for the British Army to defuse terrorist bombs in Northern Ireland.

Many reports included the statement that the robot used was not autonomous, but was controlled by police officers throughout. Yet even the manufacturers and the police keep referring to these remotely-operated vehicles as robots. They are no more robots than the radio-controlled cars which are still tucked away in odd corners of our house.

For some reason, though, we are being encouraged to have deep fears about robots, even devices which do not have any sort of autonomy. When they are deployed on a battlefield, we are now supposed to consider them as threatening as chemical or biological weapons (which of course have no autonomy, nor any discrimination between good and bad guys, military or civilian). So artificial intelligence is only good when it is (actually not) forecasting the weather, dealing with our spam, or chatting to us through an iPhone. And human intelligence cannot be trusted when it is operating a vehicle remotely.

The snag with painting the world in black and white, as so many seem to want to do, is that things overlap, and get mislabelled.