Cheers to BEERS - staying focussed on Explainable AI
by Mark Smith

It’s too easy to pay lip service to explainable AI. Unless you bake XAI into your systems, you could be storing up trouble for later. This blog explains how we do that through our guiding principles.

I’ve written before about how ethics impacted the early part of my career, when a terribly difficult decision was foisted upon me by the presence of something called LD50 data in a short technical book about a chemical called phosgene. Basically, the data had been collected from (mostly fatal) experiments on actual people in 1940s Nazi Germany, and the decision was: delete or use? I chose use – so that in a tiny, tiny way those lives were not destroyed for nothing at all.

I also mused that my company is at the other end of that badness spectrum – we just try to let people know what’s happening in as proactive a way as possible and make their lives a little easier as a consequence. But it would be wrong to suggest a company that looks to automate communication has no moral jeopardy, so here are a few of our ethical dilemmas and how we address them. It also allows me to introduce our BEERS principles.

Automation: this one doesn’t trouble me much. If you accept that ‘today’ is the most advanced humanity has ever been, then why, after years and years of automation, are we all still working? Odd, huh? In my own country we have an unemployment rate that some economists argue is, in effect, zero, and yet automation is all around us. So it seems to me that there will always be a happy co-existence between technology and people. Basically, the Singularity (a technologically-created cognitive capacity far beyond what’s possible for humans) is for Hollywood. Given we don’t know what intelligence actually is, making an artificial version of it is just ill-thought-through arrogance. So in my world, making communication better and smarter takes a tedious task that humans hate doing (cold calling to confirm appointments, say) and lets them do the things that make them happy – speaking to other humans and helping them out.

Deep insight: this one is a more interesting dilemma. Gartner suggest that by 2023, 40% of customer service cases will be predictable[1]. Wrong. We reckon it’s probably 50%, and it’s now, not in 2023 – Gartner are just hedging their bets! We have done this several times: by taking a look at vast data sets of customer communications and applying some fancy maths, we could see very interesting trends in propensity to respond, which we use to converse with people using the right message at the right time. Now, this sits on a spectrum which ranges from ‘do nothing’ to Cambridge Analytica (a British political consulting firm which combined data mining, data brokerage, and data analysis with strategic communication during elections, and then lost the plot). What broke Cambridge Analytica (quite deservedly, by the way) was not the idea of mining data and personalising comms; it was that they took advantage of a Facebook app used by a small percentage of Facebook users to reach the networks of each of those users, thereby worming their way into data from people who had never given permission for it to be shared. So with this one we are absolutely rigid about being open and transparent in what we are doing and how, and so long as the customer gets the right message at the right time and is made happy, all seems well.
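As a concrete (and heavily simplified) illustration of that ‘fancy maths’, here is a minimal Python sketch of a propensity-to-respond model. The features – channel, send hour, days since last contact – and the data are entirely hypothetical, and the real models are richer; but the shape is the same: fit on historical outcomes, then score candidate contacts to find the right message at the right time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical contact history: [is_sms, send_hour, days_since_last_contact]
X = np.array([
    [1, 9, 1],   # SMS at 9am, last contacted a day ago
    [0, 9, 3],   # email at 9am
    [1, 14, 2],
    [0, 14, 7],
    [1, 19, 1],
    [0, 19, 5],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the customer replied

model = LogisticRegression().fit(X, y)

# Score candidate (channel, hour) options for one customer and pick the best
candidates = np.array([[1, 9, 1], [1, 19, 1], [0, 9, 1]])
scores = model.predict_proba(candidates)[:, 1]
best = candidates[scores.argmax()]
print(f"best contact: is_sms={best[0]}, hour={best[1]}, propensity={scores.max():.2f}")
```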

Explain yourself: this is the most interesting. XAI is the new buzzword in town – it means Explainable AI, and refers to techniques in artificial intelligence whose results can be trusted and easily understood by humans. It contrasts with the ‘black box’ concept in machine learning, where even a model’s designers cannot explain why it arrived at a specific decision. Now, I am not saying this is easy – I read an article recently which argued that you don’t dissect a sniffer dog just because it detected drugs at an airport; you just accept it’s a very complicated biochemical and cognitive process which dogs are damned good at. Maybe… and it is ruddy hard to unpick the decisions that very complex algorithms make. But does that mean we should just pat people on the head and say, ‘there, there, it’s really very complicated, you should just trust us’? Well sorry, I don’t trust social media, I don’t trust search engines, I don’t trust the phone in my pocket, and I don’t read all of the thousands of words I tick to accept just so I can use ‘free stuff’ – so trust is broken, and companies cannot be arrogant and ask you to trust them. We just don’t, and shouldn’t. Whose ethics should I buy into, anyway?

Here in ContactEngine Towers, our guiding principle is XAI – made easier, I admit, by the fact that our algorithms are dealing with relatively simple, short conversations, and are not trying to drive a car hands-free.
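To show why short, simple conversations keep explanation tractable, here is another hedged sketch, reusing the hypothetical model above. With a linear model, each feature’s contribution to one decision is simply its coefficient times the feature’s value – so you can say what drove a decision without having to dissect the sniffer dog.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["is_sms", "send_hour", "days_since_last_contact"]

# Same hypothetical training data as the earlier sketch
X = np.array([[1, 9, 1], [0, 9, 3], [1, 14, 2],
              [0, 14, 7], [1, 19, 1], [0, 19, 5]])
y = np.array([1, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

# One decision to explain: an SMS sent at 9am, a day after the last contact
x = np.array([1, 9, 1])

# For a linear model, each feature's pull on the decision is simply
# coefficient * value; the sign says which way it pushed.
contributions = model.coef_[0] * x
for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    verb = "raised" if c > 0 else "lowered"
    print(f"{name} {verb} the response score by {abs(c):.2f}")
```

The point of the sketch is the ‘what, not necessarily how’ distinction below: the printout names the factors behind a single decision without pretending to recount every step of how the model was trained.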

Now back to beers. So, in order to make it simple to remember, we approved with our own AI Board (made up of some of the best minds in the UK on AI matters – see here) a set of principles that goes like this:

  • Beneficial: Must benefit each of ContactEngine, our clients, and their customers
  • Ethical: We do not compromise our ethics, and everything we do must pass the ‘red face’ test
  • Explainable: We need to be able to explain what our AI is doing, or has done, but not necessarily (exactly) how
  • Relevant: We only apply AI where it is relevant to do so, avoiding unnecessary over-complications
  • Secure: Data security is vital, with data privacy and confidentiality protected at all times

So cheers to BEERS, I say – and now, whose ethics do I actually like? I reckon Socrates is a decent start (the Greek geezer, not the soccer player. Though, that said…).

 

_______________________________________________________________________________________

[1] https://www.gartner.com/document/3895585 (requires subscription)
