It may be counter-intuitive to think that people might prefer to speak with a computer over a human being. And it might seem even less likely that this was understood as early as 1966, decades before many had even heard of artificial intelligence. But early experiments with AI revealed some illuminating human behaviours that are central to understanding our preference for non-judgemental entities in personal matters.
Mathematician Alan Turing — after his stint during World War Two at the UK codebreaking centre at Bletchley Park — published a paper in 1950 discussing the creation of thinking machines, or artificial intelligence (AI). In the same paper, Turing devised a test — originally called the imitation game — to determine whether a machine was ‘thinking’. If the machine could hold a conversation with a human, and that conversation was indistinguishable from one between two humans, then it qualified.
In an office at MIT in Cambridge, Massachusetts — 3,000 miles from Turing’s Cambridge — and sixteen years after his paper was published, Joseph Weizenbaum — another WWII veteran with a mathematics background — created a program that could beat that test.1
Frustratingly for Weizenbaum, he had been trying to demonstrate the opposite – “that the communication between man and machine was superficial”.2
Weizenbaum spent two years at MIT creating a natural language processing program called Eliza – a 1960s chatbot, if you will – that interacted with humans using realistic dialogue mimicking therapists trained in Rogerian – or person-centred – psychotherapy techniques. Test subjects would sit at a typewriter, read questions or statements from Eliza, and type responses, as if in a real conversation with another human being.
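To give a flavour of the technique, here is a minimal, purely illustrative sketch of Eliza-style dialogue – not Weizenbaum’s original script, and the rules and phrasings below are invented for demonstration. The core idea is simple pattern matching: each rule pairs a keyword pattern with response templates, and the user’s own words are reflected back as a question in the Rogerian style, with first- and second-person pronouns swapped.

```python
import random
import re

# Pronoun swaps so a reflected fragment reads naturally:
# "my job" -> "your job", "i am" -> "you are", and so on.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "are": "am"}

# Illustrative rules: a regex keyword pattern and response templates.
# {0} is filled with the reflected captured fragment.
RULES = [
    (r"i need (.*)",  ["Why do you need {0}?",
                       "Would it really help you to get {0}?"]),
    (r"i am (.*)",    ["How long have you been {0}?",
                       "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)",         ["Please tell me more.",
                       "How does that make you feel?"]),  # fallback
]

def reflect(fragment):
    """Swap pronouns word by word: 'my job' -> 'your job'."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    """Return the first matching rule's template, filled with the
    reflected fragment of the user's statement."""
    cleaned = statement.lower().strip(". !?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need a holiday"))
```

The striking point is how little machinery is involved: there is no understanding here at all, only keyword matching and pronoun substitution – the “mechanical parody” Weizenbaum described – yet the reflected questions are enough to keep a conversation feeling attentive.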
Initial expectations that interacting with a computer would feel mechanical, cold, or inhuman were not borne out – instead, Eliza seemed to open people up like never before.
It was as if they had just been waiting for someone, or something, to ask.
One of the first test subjects was Weizenbaum’s own secretary. Fully aware that she was communicating with a computer created and coded by him, she began typing responses to Eliza as Weizenbaum monitored the interactions over her shoulder. Two or three exchanges into the test, she turned to him and said, “Would you mind leaving the room, please?”
Weizenbaum had not designed Eliza to be an effective tool for therapy, but test subjects reported exactly that – unsettling news for Weizenbaum, who was disturbed that the mechanical parody of therapy he had created appeared to be helping people.
It turns out that, as in psychotherapy, people like to feel they are not being judged when talking about sensitive subjects. Weizenbaum had parroted Rogerian psychotherapy techniques, adopting the idea that subjects achieve the best outcomes when they feel empathy, unconditional positive regard and congruence from their therapist – in this case, Eliza.
What Weizenbaum had not anticipated was how valuable the non-human aspect of the interaction with Eliza would prove. The therapist should not judge. But the computer cannot. And it was precisely because subjects were not speaking to a human that they could open up without fear of judgement.
Stress, Debt and AI
The anecdotal conclusions drawn from the Eliza experiment were decades ahead of any formal research. But it appears that Weizenbaum’s secretary was not so different from the rest of us – her reaction mirrors the way most of us feel about AI today.
A recent global survey of over 12,000 workers found that 82% preferred talking to AI rather than to humans about matters of mental health. More than two-thirds even preferred discussing workplace stress and anxiety with AI rather than with their manager.
The World Health Organisation found that global depression and anxiety levels rose by 25% during the first year of the COVID-19 pandemic. With energy bills soaring and no immediate solution to cost-of-living increases, the coming months are likely to bring more stress and anxiety, especially for those who may struggle to pay their debts.
But for people who are ready to settle their debts, yet for some reason just haven’t got around to it, AI technology is proving itself effective.
A recent case study using ContactEngine AI technology revealed a significant uptick in payments, and a reduction in days taken to pay, simply by using AI software for one week.
Why? For the same reason Weizenbaum was asked to leave the room.
Eliza may have been a failure for Weizenbaum, but concealed within the experiment was an insight he had not anticipated – the value of non-human interaction.
The therapist should not judge. But the computer cannot.