‘When are humans best and when is it better to hand off to a computer?’
Life is full of decisions. To shave or not to shave; to shower or to bathe; to tea or to coffee; yogurt or croissant? And that's just within the first 15 minutes of the day. There are even irritating micro-decisions like milk or no milk, or micro-micro-decisions like low fat, high fat, soy, oat or hemp (don't do hemp, it's truly vile). So, when I'm asked about the future of communication and the impact of AI on customers I'm actually quite relieved, because I've got this giant laboratory…
My latest quandary is this – when are humans best and when is it better to hand off to a computer? Or is it best to start with a computer, take a stab at programmable empathy and see how it goes? Well, it’s a brave new world out there.
Here’s a funny story. I was with a bank last week. They have a bot. Now, I don’t like bots – I think they miss the point. Communication should be both ways; from a brand to their customers and, obviously, back again. Bots, these semi-intelligent, searchable FAQ services, exist to stop you reaching a human. The sector has even coined the phrase ‘containment rates’ to measure bot success, as if customers are a kind of virus.
Bots assume that the customer starts the interaction and, usually, when a customer summons the energy to interact with a brand it’s because something has gone wrong. The order is late, lost, broken, or whatever. In that situation, the customer typically wants a human, not a bot. And they certainly don’t want to be ‘contained’.
Anyway, back to the bank and that funny story. The bot made its own decision. That’s good. Scary, but good. It’s AI in action. When it read a message that it considered a joke, the bot decided to reply: “You’re trying to be funny, right?” Clever, huh? Well, not entirely. That ‘right’ at the end can be interpreted in different ways.
According to my exhaustive survey of three American friends, an American reader would think nothing of it. Here in the UK, however, it can sound aggressive, like an invitation to fisticuffs. It depends how it’s said, of course, and on the accent, location and so on, but it’s open to interpretation. In some contexts, it’s friendly. In others, anything but. For example, if you’re in the pub and someone says, “That’s my drink, right?”, then they might be checking with you or they could be making an aggressive assertion. Tone is everything.
Give the bot one point for initiative but no points for empathy. Leave the jokes to the humans, right?
But how do we know when the machine is best and when the human is better? Often, it’s related to stressful situations vs non-stressful ones. Let’s illustrate that with three real case studies:
The appointment: your washing machine has broken and you are quite cross, so what is the experience you want regarding the repair? Well, you mostly just want to know about progress. In this case, machine works best – it can learn which channel you like, tell you what’s happening, let you know when it’s going to happen, tell you if anything is stopping it happening, and then ask you if it all went well. Nice short messages, written or spoken, at a frequency that the machine learns is best. And using words that the machine works out are nicest. So that’s all machine – and everyone is happy. What’s more, the machine can prove that by asking a nice survey question at the optimum moment.
The car crash: it’s a cold and frosty morning, you hit the bend at your usual 30mph but physics decides that today – the water being now frozen solid – you will actually go straight on. Into a ditch. Whoops. No-one’s hurt, but you’re a wee bit stressed. Now, human next or bot? Mmmmm, I wonder? OK, so it’s easy: human. Today, and for a few years yet, empathy in voice (sometimes called prosody – the patterns of stress and intonation in a language) will come only from us humans, not a computer. Once that calming chat is over and done with, the machines can kick in to arrange the appointment with you, your car, a hook, a chain and a ditch. The pattern goes: Human – Machine – Human.
A bank loan: if you’re of a certain age and adept at getting into debt, loans are not that scary. Yet those aliens we call ‘millennials’ are rather worried about debt. They start with a machine – filling in a form that runs their data through a clever algorithm that tells them if they can borrow what they’ve asked for. Then they want to talk to a human about getting properly saddled with debt. After that, it’s a boring process of confirmation and document verification which machines are best at. So, this example goes: Machine – Human – Machine.
So that’s clear then? Machines need to be used where they are best and people when empathy is needed. Alas, if only it were that simple! This is where truly smart algorithms offer more. If your training data is massive and your questions simple, then it is possible to get better and better performance. You might go from 90 per cent to 95 per cent, to 99.9 per cent confidence in its performance. Then you need to decide when you can let the algorithm loose on the world.
Let’s say your service is available only between 9am and 5pm, five days a week. A contact at 5.01pm on a Friday is ignored until Monday if a human is needed to reply. That’s no good, so maybe you’ll decide that 95 per cent confidence is preferable to a zero per cent response because you don’t have the people.
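That trade-off is, at heart, a small routing rule. Here is a minimal sketch of it in Python – the function names, the 95 per cent threshold and the 9-to-5 staffing window are assumptions for illustration, not a real system: if a human is on shift, hand over; if not, let a sufficiently confident bot reply rather than leaving the customer ignored until Monday.

```python
from datetime import datetime

# Assumed threshold: out of hours, a 95% confident bot beats no reply at all.
CONFIDENCE_THRESHOLD = 0.95

def humans_available(now: datetime) -> bool:
    """Hypothetical staffing window: 9am-5pm, Monday to Friday."""
    return now.weekday() < 5 and 9 <= now.hour < 17

def route(confidence: float, now: datetime) -> str:
    """Prefer a human when one is available; otherwise accept a
    confident bot answer instead of a zero per cent response."""
    if humans_available(now):
        return "human"
    if confidence >= CONFIDENCE_THRESHOLD:
        return "bot"
    return "queue_for_monday"

# The 5.01pm-on-a-Friday contact from the example (17 May 2024 is a Friday):
friday_evening = datetime(2024, 5, 17, 17, 1)
print(route(0.97, friday_evening))  # bot
print(route(0.80, friday_evening))  # queue_for_monday
```

The interesting design choice is not the code but the threshold: where you set it encodes how much an unanswered customer costs you versus an occasionally wrong bot.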
It’s trade-offs like these that we are seeing all the time. The overall message for me is simple: machines and humans work best together and I’m right, right?