24 May Why our conversational AI is explainable – it’s all about ethics
I used to have a problem with ethics. Thing is, I was trained as a biologist and our glorious leader was one beardy Victorian geezer called Charles. He had this idea. It was a really good idea. It was called On the Origin of Species. Now the underlying principle of evolution is the survival of the fittest. You're too slow, you get eaten; you can't climb, you get eaten; you can't light a fire, you get eaten, and so on and so forth. But if you run fast, climb quick and rub sticks together real fast, you get to get all the girls or boys. Bingo, you survive – well, your genes do anyway. No ethics, see; nature, red in tooth and claw.
So years ago I embarked on a very short and rather underwhelming scientific career. I worked on endophytic fungi of a tree called Picea sitchensis, a magnificent spruce which can live for 500 years (unless the fungi decide to go postal) and is the state tree of Alaska. So I collected lots of specimens, mashed them up, poisoned them with cyanide, fired rays at them and counted how many of them there were. No ethics, see.
So far, so simple. I didn’t really have to think too much about the ethical position of evolution or fungi or plants, I was just an unbiased observer.
Then I went into scientific publishing, and a small book – called a Monograph – was being reprinted, all about a chemical called Phosgene. Now Phosgene may not be something you've heard of before – it's mostly used to make plastics and pesticides, and it's not naturally occurring. But you will recall the horror of the First World War and know that gassing soldiers was considered acceptable (by both sides). Well, it was Phosgene that killed the most.
Now here’s another thing biologists do. We calculate LD50s, which is longhand for Lethal Dose 50% – the amount of a substance required (usually per unit of body weight) to kill 50% of a test population. We use LD50s to work out how poisonous things are. Why? So we can make judgements on how safe chemicals are to store or to transport. But the thing is, during the Second World War, the evil that was the Nazi party took it upon themselves to carry out LD50 experiments on humans. Serious. Very, very serious.
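As an aside on the arithmetic itself: the crudest way to estimate an LD50 from lab dose-response data is to interpolate between the two doses that bracket 50% mortality. The sketch below uses invented data and that simple interpolation – real toxicology uses proper probit or logit curve fits, so treat this as an illustration of the definition, nothing more.

```python
# Rough LD50 estimate by linear interpolation between the two doses
# that bracket 50% mortality. Data and doses here are invented.

def estimate_ld50(dose_response):
    """dose_response: list of (dose, mortality_fraction) pairs, sorted by dose."""
    for (d_lo, m_lo), (d_hi, m_hi) in zip(dose_response, dose_response[1:]):
        if m_lo <= 0.5 <= m_hi:
            # Dose at which mortality crosses 50%, assuming a straight line
            # between the two bracketing observations.
            return d_lo + (0.5 - m_lo) * (d_hi - d_lo) / (m_hi - m_lo)
    raise ValueError("50% mortality is not bracketed by the data")

# Hypothetical test: dose in mg/kg vs fraction of the test population killed.
data = [(1, 0.0), (2, 0.1), (4, 0.3), (8, 0.7), (16, 1.0)]
ld50 = estimate_ld50(data)  # interpolates between the 4 and 8 mg/kg points
```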
So I had to make a choice – we could have just ignored the appendix which listed the LD50 data, we could have deleted it or we could explain why it was included. Now that is a very, very hard decision to make, and I was 25. It is obviously entirely unacceptable that the experiments were undertaken at all. We cannot imagine how the people felt who were the subject of such breath-taking levels of inhumanity. So the easy choice would have been to delete it, pretend it never happened.
I didn’t do that.
Instead we explained how the data had been collected and that we felt that by using the data we could help to save the lives of others, even though lives were so cruelly ended by the evil experiment. It wasn’t about money – the monograph series was very low volume and the publisher was a charity. It was a very, very serious ethical choice.
So I became really, really ethical really early on.
Now how does that manifest itself at ContactEngine? We really don’t do anything harmful. We help customers to know stuff, we help companies to be better at sharing stuff with customers, and we make the world ever so slightly less stressed. BUT there is an ethical challenge: AI.
We use our own Machine Learning algorithm – called ALAN (Advanced Language ANalysis) – to understand the replies from our clients’ customers and extend the conversation further through smart automation – we call it Outbound Conversational AI. Thing is though, most AI isn’t really explainable – the maths is so hard, so complex, that mere mortals could not understand why the car went ‘Human? Bear? Human? Bear? Oh hell, let’s run over the Human wearing the Bear costume’. But the decisions we make are not anywhere near as complex – so hand on heart we can go for Explainable AI – and visualise why we made the decisions we made. And that’s ethical, and it makes me happy. And shortly I will follow this blog up with one that explains the explainable and why it is so important.
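To give a feel for what “explainable” means in practice, here is a toy sketch of intent classification where every decision can be traced back to individual word weights. The intents, words and weights are all invented for illustration – this is emphatically not ALAN’s actual algorithm, just the general idea of a model simple enough to show its working.

```python
# Toy explainable intent classifier: a transparent linear scorer.
# Every weight is visible, so the "why" of a decision is just the list
# of words that contributed to the winning score. All values invented.

WEIGHTS = {
    "confirm": {"yes": 2.0, "ok": 1.5, "confirmed": 2.5},
    "reschedule": {"change": 2.0, "another": 1.2, "reschedule": 2.5},
}

def classify(reply):
    words = reply.lower().split()
    scores, explanations = {}, {}
    for intent, weights in WEIGHTS.items():
        # Each matched word's weight is its contribution to the score.
        contributions = {w: weights[w] for w in words if w in weights}
        scores[intent] = sum(contributions.values())
        explanations[intent] = contributions
    best = max(scores, key=scores.get)
    return best, explanations[best]

intent, why = classify("Yes that's confirmed thanks")
# `why` lists exactly which words drove the decision and by how much
```

The point is that the explanation falls straight out of the model: no post-hoc approximation is needed, because the decision procedure is legible in the first place.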