Artificial intelligence (AI) is not what the name describes at all. We can barely define human intelligence, let alone create our own version. So, what does the term really mean? And what about the numerous other terms that get thrown around? Is ‘machine learning’ a synonym for AI or a distinct type of AI? Let’s try to demystify some of the terms.
- Artificial intelligence (AI)
Computers are ideal for tasks requiring lots of fast computations, such as playing chess, and they have been good at them for decades. Then there are things that humans frequently do badly, such as driving cars or writing Mrs. Brown’s Boys. Computers will take a while to learn how to do them but will one day do a better job. We think of these tasks as requiring human intelligence and so, when computers do them, we call them ‘artificial intelligence’.
What computers lack is ‘general intelligence’ – they can’t switch from the task they are programmed for, such as playing chess, to one they aren’t, like driving a car. They even struggle to walk across a room if you put a chair in the way. Humans usually manage that without what we would describe as intelligence.
- Machine learning (ML)
Not all AI is machine learning, but all machine learning is a kind of AI. Think of the difference this way: a chess-playing AI is programmed with the rules of chess and considers every potential move before moving a piece. It doesn’t learn, because everything it needs – the rules and a way of weighing up moves – is built in from the start. In machine learning, by contrast, the AI is ‘trained’ by being fed data. For example, if you want a computer to diagnose cancer, you could feed it lots of scans and tell it which ones showed cancerous tumours. It would start to work out the tell-tale signs of cancer and refine its model each time you added more data.
However, a computer doesn’t learn like a human. No matter how much data it absorbs, it doesn’t develop what we would call understanding; it just accumulates data points. A human, with general intelligence, might look at thousands of cancer scans and suddenly be struck with inspiration for a new cancer treatment. A computer cannot make such a leap.
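The ‘refine its model with each new example’ idea can be shown in miniature. This is a deliberately toy sketch – a nearest-centroid classifier over two made-up features (tumour size and brightness), nothing like a real medical imaging system, which would learn from millions of pixels – but the principle is the same: the rule comes from labelled data, not from a programmer writing it by hand.

```python
# Toy sketch of "training by example": a nearest-centroid classifier.
# The features (size, brightness) are invented for illustration.

def train(examples):
    """examples: list of ((size, brightness), label) pairs."""
    sums = {}  # label -> [sum_size, sum_brightness, count]
    for (size, brightness), label in examples:
        s = sums.setdefault(label, [0.0, 0.0, 0])
        s[0] += size
        s[1] += brightness
        s[2] += 1
    # The "model" is just the average point (centroid) for each label;
    # adding more scans shifts the centroids, refining the model.
    return {lbl: (s[0] / s[2], s[1] / s[2]) for lbl, s in sums.items()}

def predict(model, point):
    """Return the label whose centroid is nearest to the point."""
    return min(model, key=lambda lbl: (model[lbl][0] - point[0]) ** 2
                                    + (model[lbl][1] - point[1]) ** 2)

scans = [((8.0, 0.9), "cancer"), ((7.5, 0.8), "cancer"),
         ((2.0, 0.2), "clear"),  ((1.5, 0.3), "clear")]
model = train(scans)
print(predict(model, (7.0, 0.7)))  # a large, bright blob -> "cancer"
```

Feed it more labelled scans and the centroids move; the program gets better without anyone rewriting its rules.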
- Reinforcement learning
Go, the ancient Chinese game, is far more complex than chess. That’s why AlphaGo, Google DeepMind’s Go-playing AI and the first computer program to beat a human world champion, had to rely on machine learning – there are simply too many potential moves to check them all. It used a kind of machine learning called ‘reinforcement learning’, in which the computer began with the basic rules, then played itself over and over again, gradually reinforcing the strategies that won.
You might think spending billions on a computer that plays a game by itself is odd, even for Google. But cracking problems like this could help develop AI that one day figures out problems for which humans have no framework. Such as, ‘What is Kanye West on about?’
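Learning by self-play can be sketched at toy scale. The example below – a simple tabular learner, nowhere near the scale or sophistication of a real Go system – teaches itself a one-pile stone game (take 1 or 2 stones; whoever takes the last stone wins) knowing only the legal moves, by playing thousands of games against itself and nudging up the value of moves that led to wins.

```python
import random

# Toy self-play learner for one-pile Nim: players alternately take 1 or
# 2 stones; whoever takes the last stone wins. The program starts
# knowing only the legal moves and learns a strategy by self-play.

random.seed(0)
Q = {}  # (stones_left, move) -> estimated value for the player to move

def best_move(stones, explore=0.0):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)  # occasionally try something new
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def play_and_learn(stones=7, episodes=5000):
    for _ in range(episodes):
        history, n = [], stones
        while n > 0:
            m = best_move(n, explore=0.3)
            history.append((n, m))
            n -= m
        # The last mover won. Walk backwards through the game,
        # rewarding the winner's moves and penalising the loser's.
        reward = 1.0
        for state_move in reversed(history):
            old = Q.get(state_move, 0.0)
            Q[state_move] = old + 0.1 * (reward - old)
            reward = -reward  # players alternate move by move

play_and_learn()
print(best_move(2))  # with 2 stones left, it learns to take both and win
```

Nobody told it that taking the last two stones wins; it reinforced that move because games ending that way kept coming up as victories.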
- Neural networks
A neural network is a machine learning system inspired by the workings of neurons in biological brains. In 1958, Frank Rosenblatt, funded by the US Navy, built the Perceptron, the first neural network machine; the Navy claimed it would “be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” As you might have guessed from the lack of Perceptrons around, they were a little over-optimistic. Of the six capabilities predicted for the Navy’s machine, computers are still working on most of them.
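A single perceptron is simple enough to fit in a few lines: weighted inputs, a threshold, and an error-correcting update. The sketch below teaches one the logical AND of two inputs – which, to be fair to the Navy, is some distance short of walking and talking.

```python
# A single perceptron: weighted inputs, a threshold, and a simple
# error-correcting weight update. Here it learns logical AND.

def train_perceptron(data, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias (the threshold)
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # 0 if right; +/-1 if wrong
            w[0] += lr * err * x1        # nudge weights toward the
            w[1] += lr * err * x2        # correct answer
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), _ in AND:
    print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
# (0,0)->0, (0,1)->0, (1,0)->0, (1,1)->1
```

A lone perceptron can only draw a single straight dividing line through its inputs, which is why the interesting things happen when you start wiring many of them together.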
- Deep learning
Deep learning refers to the layering of neural networks: information is processed by one layer, then passed to a deeper one, then another, and so on. The power isn’t just scale – each layer builds on the output of the one before, picking out progressively more abstract patterns (edges, then shapes, then faces, say), which lets these networks make sense of far messier data. The result is that we can tackle truly profound human problems. For example, Netflix uses deep learning to determine what you should watch next.
OK, perhaps that isn’t the most profound human problem, but it certainly feels like it on a boring Friday night in when the only thing on telly is Mrs. Brown’s Boys, doesn’t it?
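The layering idea itself can be shown in miniature. In the sketch below the weights are hand-picked rather than learned, purely to show the structure: each layer transforms its input and hands the result to the next, and two stacked layers compute XOR (‘one or the other but not both’) – a function a single perceptron famously cannot.

```python
# Layering in miniature: each layer transforms its input and passes the
# result to the next. With hand-picked (not learned) weights, two
# stacked layers compute XOR, which a single perceptron cannot.

def layer(weights, biases, inputs):
    """One dense layer of threshold units."""
    return [1 if sum(w * x for w, x in zip(row, inputs)) + b > 0 else 0
            for row, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Hidden layer: one unit fires for OR, one for AND.
    hidden = layer([[1, 1], [1, 1]], [-0.5, -1.5], [x1, x2])
    # Output layer combines them: OR and not AND, i.e. XOR.
    return layer([[1, -1]], [-0.5], hidden)[0]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

Real deep networks learn their weights from data and stack dozens or hundreds of such layers – but the pass-it-deeper structure is exactly this.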