Make 2020 the year of Explainable AI
by Mark Smith

It’s odd, isn’t it: some technologies provide such a useful service that exactly how they work is irrelevant. A car, a train, a plane, even a phone – most people have no idea how they actually work, but they’re very happy that they do, and all is fine.

However, when technology starts to reach Turing Test magic (where a computer starts to appear human-like), people start to worry. Add Data Protection issues to that unease, and AI really, really started to worry corporates in 2018, when GDPR (the European Union’s General Data Protection Regulation) came into force and people’s ‘right to explanation’ kicked in.

That’s why I think 2020 is the year of Explainable AI (XAI). 

XAI refers to methods and techniques in the application of Artificial Intelligence (AI) such that the results of the solution can be understood by human ‘experts’. It contrasts with the concept of the ‘black box’ in machine learning, where even a model’s designers cannot explain why the AI arrived at a specific decision (for a light-hearted read, try this blog on the challenges of explaining the unexplainable).
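For the more hands-on reader, here’s a minimal sketch of what ‘peeking inside the black box’ can look like in practice. It uses Python with scikit-learn’s permutation importance on made-up data – my own illustration of one common technique, not anything the XAI field mandates:

```python
# A minimal sketch of one common XAI technique: permutation importance.
# Train a "black box" (a random forest), then measure how much accuracy drops
# when each input feature is shuffled - a rough, human-readable answer to
# "which inputs actually drove the decisions?" (Synthetic data, for illustration.)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

An output along the lines of ‘feature_2 did most of the work’ won’t satisfy a regulator on its own, but it is a long way from pure magic.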

What is Explainable AI?

Now, the only problem with XAI is that word ‘experts’… It is not unreasonable that explainability should come via people who know lots of smart things – but XAI could, as all my old school reports suggested, do better. Why not try to make it clear to everyone why a computer made the decision that it did?

Take some examples: we all understand why car insurance is more expensive for 18-year-olds, right? Especially if their genes have a Y in them rather than a pair of X’s. Likewise, when you’re earning a smaller paycheck, you’re likely to have a lower credit limit on your credit cards than when you’re earning a wee bit more later on. So far, so simple, so fair – but let’s unpick the algorithm that went all a bit sexist recently and gave women with excellent salaries, who held joint bank accounts with their male partners, a lower credit score than their beloved. Funny? No. Not at all funny.

In this case, it’s likely that the data set used was biased in some way – either it was properly broken or, more likely, the algorithms simply reflected a societal problem: that women (on average) earn less than men (on average). Please don’t shoot the messenger – The Guardian explains why here.
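If you want to see how easily that kind of skew shows up, here’s a tiny, hypothetical sanity check in Python with pandas – the column names and figures are invented purely for illustration, not taken from the real case:

```python
# A tiny, hypothetical bias check: compare the average credit limit a model
# hands out to each group. Data and column names are invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "gender":       ["F", "M", "F", "M", "F", "M", "F", "M"],
    "salary":       [95000, 95000, 60000, 60000, 120000, 120000, 45000, 45000],
    "credit_limit": [8000, 15000, 5000, 9000, 11000, 20000, 3000, 6000],
})

# Identical salaries, very different limits: the sort of gap that should raise
# questions about the training data long before customers notice it.
print(decisions.groupby("gender")[["salary", "credit_limit"]].mean())
```

It’s a crude check, but it is exactly the kind of thing that should run – and be explainable – before a credit model ever goes live.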

It is, however, my contention that XAI should not be an optional feature but should be built in from the ground up. 

Explainable AI in context

The explainability of a decision is a non-trivial task. Take self-driving cars: the sheer complexity of autonomous vehicles means that, nanosecond by nanosecond, the computer is processing millions of permutations. Its decisions are then damned hard to explain. But nevertheless, we must try.

Here’s a really good example: a Native American, a Hispanic, and a Caucasian walk into a hospital (what a terrible joke this could become…) and they all have diabetes (really, absolutely no comedy potential here at all). In a data purist sense, they should each be treated differently, because depending on their age and race (and probably gender as well, what with women often being a damn sight smarter than men – see extra X’s above), they will be readmitted to the hospital at very different rates (on average).

So the doctor or nurse could choose not to explain why their advice is slightly different, or they could show them this rather lovely dashboard built by some smart people from the Digital Society School (from, well, the world…) and say – ‘look, the data tells us this – it’s not biased, it’s truth’:

https://medium.com/digitalsocietyschool/why-computer-says-no-explainable-ai-and-other-ways-to-collaborate-with-the-black-box-8d06c1169f8e
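I don’t know precisely what sits behind that dashboard, but a stripped-down version of the idea – show each patient which factors nudged their own readmission risk up or down – might look something like this in Python (a plain logistic regression on synthetic data; the features and weights are illustrative, not clinical advice):

```python
# An illustrative "why is my advice different?" explanation: for a linear model,
# each feature's contribution to a patient's readmission-risk score is simply
# coefficient * feature value, which is easy to show on a dashboard.
# Synthetic, standardised data - not real clinical figures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["age", "prior_admissions", "hba1c"]
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.8, 1.2, 0.5]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

patient = X[0]
contributions = model.coef_[0] * patient
print(f"risk score: {model.decision_function([patient])[0]:+.2f}")
for name, contrib in zip(feature_names, contributions):
    print(f"  {name}: {contrib:+.2f}")
```

The point isn’t the model – it’s that each patient gets their own, legible reasons rather than a shrug and a ‘computer says so’.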

When Explainable AI is not needed 

Now there are also lots of times when the decision that has been taken is so advantageous that no-one cares just how the magic occurred. Imagine you want a new TV – you just want to find the best price. How many clicks are needed? Well, not many – and through the search process, helpful AI is busy sifting through the world to get you the best deal. And if you’re canny enough, you might even deliberately abandon your cart (online, not in the actual shop – that’s rude), knowing that a smart piece of AI might offer you a wee bit more off later...

Happy with that? Of course you are. Need to know how it was done? You don’t really care, do you? On the other hand, when it’s costing us, we get very cross. For example, someone crashes their car and their premium jumps – oh, the blood boils then, huh? It’s so unfair. Well, no it isn’t – it’s actually the very embodiment of fair. That person cost more than they paid in, so their premium rises so that those of us who do not crash pay less – and in fact, their crash has probably just put up my ruddy premium by 0.00001c… so pack it in and drive more carefully.

So when it works in people’s favour, they don’t care; when it works against them, they care very much. But there are also subtler cases where people need to be protected – like with a driverless car, or when a person of colour is in a court that uses biased data to decide a verdict. Then people really need a) to care and b) someone to look out for them – that is why XAI is so important.

How Explainable AI will change customer experiences and jobs

Apocalyptic predictions of the end of humanity aside, it seems clear that AI is an enhancing technology rather than a wholesale human replacement. People expect good customer service – and the immediacy of social media and the broad similarity of many services is making that more and more the case. The services of a bank, an insurer, broadband mongers, cell-phone purveyors, even somebody’s doctor are all pretty much the same irrespective of the provider. So it is when stuff happens that customer service must be perfect – a delivery, a repair, an outage, a bill shock, an appointment, a renewal, a claim, etc. etc. etc. If those events are handled poorly, people forget the good that happened and only recall the bad – and with the ease with which the fickle consumer can switch, the churn event is frictionless and, in many ways, entirely predictable.

Worse, the open availability of price comparison sites means the consumer benefits from cheaper prices – but on the other hand, customer service can often be compromised as profitability is reduced. Just recently in the UK, the water regulator imposed the toughest crackdown that water companies have ever seen by ordering a price reduction. If things are made cheaper and cheaper, service with real live humans simply becomes too expensive. 

The solution is clearly automation. It’s already happening: quicker, smarter computers behaving in empathetic ways – learning the best time, best words, best tone of voice, best language, and best channel to engage with people – can deliver perfection in customer service. Well, near perfection. And the times when they fail, or when humans are simply better? Just let humans do what they do best and display the millions of years of communication evolution that make them better than the best computer will ever be.

Celebrate the future, it’s already here.
