A bear walks out of a pub... the problem with AI
by Mark Smith

Let’s imagine for a moment that you’ve gone to a fancy-dress party dressed as a bear. It’s not impossible - I’ve done a passable impersonation myself (you should have seen the look on Leonardo DiCaprio’s face...). Anyway, it’s hot, what with all the fur and that, so you need some fresh air and you wander outside.

Boom.

You suddenly find yourself 15 feet in the air because an autonomous vehicle has decided that killing a bear is a much more acceptable outcome than running down Little Red Riding Hood (it’s fancy dress, silly, keep up).

So, what’s this got to do with AI? Well, the almost entirely made-up example above (apart from the bit about the author in the bear suit) illustrates a massive problem with AI: it’s not at all easy to say why Little Red Riding Hood lived and Baloo the Bear died. Why not? Well, it’s very, very hard to explain.

And that gets to the crux of the matter. AI, for all its puff and wind, is just ultra-hard maths, and layers of it as well. Now if you consider that keeping score at darts (treble 17, anyone?) is beyond most ‘ordinary’ people, then explaining very hard maths is, well, hard. But it has to be done. And the doing of it is called XAI – Explainable Artificial Intelligence.

The BBC captured it nicely with a mildly amusing defence of Google from one Professor A. Moore (now there’s a Nominative Determinism gag just waiting to be made…), their lead on Cloud AI. Hilariously, Prof Moore suggested the solution to explaining hard maths was, in fact, ‘really cool fancy maths’. Mmmm, oh dear, I see where this is going...

So here’s my tongue-in-cheek translation of that interview:

‘That was before my time…’ Now this is just never a good start. For a company that holds more data on citizens than maybe anyone else in the world, time - past and present - is kinda irrelevant.

‘Google's AI principles say that they're not going to be working on offensive weapons systems.’ Oh, you’ve got to enjoy the use of that adjective... This defence was first put forward in 1962 with the glorious doctrine of MAD (Mutually Assured Destruction). Under MAD, each side has enough nuclear weaponry to destroy the other. But you have to build the nuclear missile first to make sure you’re covered. It’s not offensive though, is it? Mmmm.

Next, a personal favourite - anyone who watched The Capture from the BBC, where a poor soldier was properly set up by deep-fake videos, will admire this approach. I call it the budgie strategy: ‘look at this mirror, not that mirror’. ‘I don't want to talk about any specific contracts. But for example, Google is actively helping out with a question of "deepfake" detection, which is this new fear that artificially constructed videos or images might become so realistic that they actually cause societal problems. And so we're partnering with a major government agency in the United States to help deal with that potential.’

Jack Bauer one assumes?

And finally, a chink of light, maybe even truth: ‘...massive internal arguments’. But sadly it’s followed up by the disingenuous newspeak of ‘going to disagree and having leadership commit’…

So, Google, I share your pain. But my life is made a little easier: I automate conversations. We ask a single question (spoken or written) and invite a response. Because the range of intents that come back is somewhat reduced – meaning if I ask you a question about football you are unlikely to reply mentioning pineapples – the hard maths we call an algorithm (built by our clever maths-sorts) can reply in a human-like way.
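
To make the football-and-pineapples point concrete, here’s a toy Python sketch. The intent names, keyword lists and explain() helper are entirely made up for illustration (not lifted from our actual system), but they show why a small set of possible intents makes the ‘why did it say that?’ question answerable:

# Toy illustration, not production code: when the space of possible
# intents is small, explaining why an answer came back is tractable.
# The intent names and keyword lists below are entirely hypothetical.
FOOTBALL_INTENTS = {
    "fixture_query": {"fixture", "playing", "kick-off", "when"},
    "score_query":   {"score", "result", "won", "lost"},
    "ticket_query":  {"ticket", "seat", "price", "buy"},
}

def classify(utterance):
    """Score each intent by counting keyword hits; return the winner and all scores."""
    words = set(utterance.lower().replace("?", "").split())
    scores = {intent: len(words & keywords)
              for intent, keywords in FOOTBALL_INTENTS.items()}
    best = max(scores, key=scores.get)
    return best, scores

def explain(utterance):
    """A human-readable trace: which words triggered the chosen intent."""
    best, scores = classify(utterance)
    words = set(utterance.lower().replace("?", "").split())
    hits = sorted(words & FOOTBALL_INTENTS[best])
    return f"Chose '{best}' because {hits} matched it (all scores: {scores})."

print(explain("When are we playing and how much is a ticket?"))

Contrast that with the bear/Hood decision, where the ‘scores’ live inside millions of learned weights rather than a readable keyword table; that is the gap XAI has to close.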

Making XAI work when the conversation is not too long and the consequences are not so life-threatening is easier than the bear/Hood nexus. But, dear reader, be under no illusion: in a world where privacy and protection are the buzzwords, an XAI that amounts to patting us on the head and saying ‘sorry, Dave was in the Baloo suit, it’s just too hard to explain what happened’ is both patronising and offensive (see what I did there?).

In another post I’ll explore just how boringly technical XAI will have to be – it’s hard enough that we’ve begun sponsoring some PhDs to actually craft the future and make sense of this really cool, super fancy maths.
