
Dr. Mark K. Smith, CEO

‘Artificial Intelligence is an oxymoron. It doesn’t make sense. It’s like saying ‘artificial love’ or ‘artificial god’.’

First published on FirstCapital.co.uk

The machines will not take over the world. There, I’ve said it. I do not believe computers will ever become sentient. They won’t dream, fall in love or express the emotions of a new parent holding their child for the first time. They won’t compose The Lark Ascending, nor will they conjure melancholy and pathos while painting the Fighting Temeraire being towed to its watery grave…

Good. I’m glad to have got that off my chest. I do, however, love technology. I really, really love technology. So, it was with some pleasure that I spoke at a conference recently on the Côte d’Azur on AI and Machine Learning.

Here’s my problem: I love technology but know its limitations. This was wonderfully illustrated by Google’s demonstration of Duplex, a quite remarkable development. But then read why it works, and you find the answer in this seemingly innocuous sentence: “The technology is directed towards completing specific tasks, such as scheduling certain types of appointments.”

Aha! That is the nub of the issue; they have closed the question down so the answers are containable and predictable. It’s good, but it ain’t Skynet.

I have a PhD in the Natural Sciences. It’s one of the humbler disciplines because it acknowledges that there is absolutely loads we don’t know. We biologist types reckon there are about 10 million species on Earth – eight million on land and two million in the sea. Get this: 86 per cent of all species on land and 91 per cent of those in the seas have yet to be discovered, described and catalogued.

We don’t exactly know how water gets to the top of a Giant Sequoia, either, and we absolutely, completely don’t have any idea how humans actually think.

Artificial Intelligence is an oxymoron. It doesn’t make sense. It’s like saying ‘artificial love’ or ‘artificial god’. BUT here’s where the field gets really interesting: how do you weigh an ox?

Well, a British scientist called Francis Galton noticed something really interesting. Ask 100 people at a country fair how much an ox weighs and, almost certainly, no one will get it exactly right. However, if you add up all their answers and divide by the number of people – in other words, take an average – then you get a figure that is very, very close to the ox’s true weight.
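The effect is easy to demonstrate for yourself. Here is a minimal simulation of Galton’s fair-goers: the true weight and the size of each person’s guessing error are assumptions made up for the illustration, but the pattern holds for almost any values you pick.

```python
import random

random.seed(42)  # fixed seed so the demonstration is repeatable

TRUE_WEIGHT = 543  # assumed true weight of the ox, in kg

# 100 fair-goers each guess, with a substantial individual error
guesses = [TRUE_WEIGHT + random.gauss(0, 60) for _ in range(100)]

# The crowd's answer: add up every guess and divide by the number of people
crowd_estimate = sum(guesses) / len(guesses)

# Compare the typical individual's error with the crowd's error
individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

print(f"typical individual error: {individual_error:.1f} kg")
print(f"crowd average error:      {crowd_error:.1f} kg")
```

The individual errors are large, but because they scatter on both sides of the truth they largely cancel when averaged, so the crowd’s estimate lands far closer than almost any single guess.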

This phenomenon became known as the Wisdom of Crowds. And that takes you to the heart of what is happening in the world of AI. Machines beat us at chess and Go because they are better at processing data than we are. When you let them play against themselves, they get smarter still.

If you show a computer, say, 100,000 photos of cancerous tumors, with all those that were correctly diagnosed, then, with some smart kids programming the AI, the computer will be better than the human. That’s inevitable. It doesn’t mean it’s clever though – just better than we are at remembering.

Machine Learning, which is a subset of AI, even though the terms are often used interchangeably, can take advantage of crowd wisdom too. Let’s say that instead of weighing an ox you wanted to understand messages received from customers wanting to change an appointment. (Yes Google, we’ve been doing this for ages.) First, you ask the crowd what they think each message actually meant. You ask loads of people, loads of times, and then – bingo – the machines eventually outperform the humans.
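The first step of that process can be sketched in a few lines. This is not our system, just a toy illustration: the messages, the annotators’ answers and the label names are all invented for the example. Several people read each customer message, and the most common answer becomes the training label the machine learns from.

```python
from collections import Counter

# Hypothetical crowd annotations: three people read each customer
# message and say what they think it meant
crowd_labels = {
    "yes that works": ["confirm", "confirm", "confirm"],
    "no can't make it": ["cancel", "cancel", "reschedule"],
    "can we do friday instead": ["reschedule", "reschedule", "confirm"],
}

def majority_label(labels):
    # Take the crowd's most common answer as the training label
    return Counter(labels).most_common(1)[0][0]

# The resulting message -> intent pairs are what a classifier trains on
training_data = {msg: majority_label(lbls) for msg, lbls in crowd_labels.items()}
print(training_data)
```

Every new message that comes in can be labelled the same way and added to the pot, which is why the machine keeps improving as the volume grows.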

This is interesting because it’s a key area for our business. We mimic outbound call centers by arranging things like appointments, outages or bill shock communications – that sort of thing. Just like the Google example, our questions have relatively few possible replies. Mostly, we get a ‘thanks for letting us know’ or a ‘yes’ or a ‘no’ reply, but roughly one-fifth of the time we get more, different, words. In that world, we can quickly outperform humans in replying and every new message we receive can go into the ox weighing pot, if you get my drift.

Now that’s exciting – it’s practical, it’s not making a machine sentient and sending it back in time to kill John Connor Esq., it’s just making a customer’s journey better, making the company delivering a service look good and saving money too. Nice.

And this is where AI will take us – getting machines to give better experiences by not wasting humans’ time, or getting the machines to do things that are, in truth, beyond us (like holding 100,000 images in your head).

Did you know the average call center churn rate is 25 per cent and it costs up to £9,000 to train a replacement? So, how about giving machines the crap comms tasks that humans clearly don’t like and give the humans more time to spend on customers with issues too complex for the machines? That’s called customer care and that’s why deep down I really, really do love technology after all.
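To see what those two figures add up to, here is the back-of-the-envelope sum for a call centre of assumed size; only the 25 per cent churn rate and the up-to-£9,000 training cost come from the text, the headcount is invented for the illustration.

```python
AGENTS = 200           # assumed headcount, for illustration only
CHURN_RATE = 0.25      # 25 per cent annual churn (from the article)
COST_PER_HIRE = 9_000  # up to £9,000 to train each replacement (from the article)

# Replacements needed per year, and what training them could cost
leavers_per_year = AGENTS * CHURN_RATE
annual_training_cost = leavers_per_year * COST_PER_HIRE

print(f"{leavers_per_year:.0f} leavers a year -> up to £{annual_training_cost:,.0f} in training costs")
```

For a 200-seat centre that is up to £450,000 a year spent just replacing people, which is the money you free up by handing the dull messages to the machines.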