“Would you mind leaving the room please?”
by Henrietta Hurll

It may be counter-intuitive to think that people might prefer to speak with a computer over a human being. And it might seem even less likely that this was understood as early as 1966, decades before many had even heard of artificial intelligence. But early experiments with AI revealed some illuminating human behaviours that are central to understanding our preference for non-judgemental entities in personal matters.

Mathematician Alan Turing, after his stint during World War Two at the UK codebreaking centre at Bletchley Park, published a paper in 1950 discussing the creation of thinking machines, or artificial intelligence (AI). In the same paper, Turing devised a test, originally called the imitation game, to determine whether a machine was ‘thinking’: if the machine could hold a conversation with a human, and that conversation was indistinguishable from two humans speaking, it qualified.

In an office at MIT in Cambridge, Massachusetts, 3,000 miles from Turing, and sixteen years after his paper was published, Joseph Weizenbaum, another WWII veteran with a mathematics background, created a program that would beat that test.1

Annoyingly for Weizenbaum, though, he had set out to demonstrate the opposite: “that the communication between man and machine was superficial”.2

Weizenbaum spent two years at MIT creating a natural language processing program called Eliza, a 1960s chatbot if you will, that interacted with humans using realistic dialogue mimicking the techniques of Rogerian (person-centred) psychotherapy. Test subjects would sit at a typewriter, read questions or statements from Eliza, and type responses, as if in a real conversation with another human being.
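Eliza’s Rogerian mimicry rested on simple keyword matching and pronoun reflection rather than any real understanding. As a rough illustration only (this is not Weizenbaum’s actual DOCTOR script; the patterns, reflections, and canned responses below are invented for the sketch), the technique can be expressed in a few lines of Python:

```python
import re

# Minimal first-person-to-second-person swaps, in the spirit of
# ELIZA's transformations; the real script was far larger.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Hypothetical pattern/response rules: a regex captures part of the
# user's input, which is reflected back inside a therapist-style prompt.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when nothing matches

def reflect(fragment: str) -> str:
    """Swap pronouns so the echoed fragment reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's response, reflecting the capture."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT
```

With these rules, “I feel anxious about my job” becomes “Why do you feel anxious about your job?”: the program has no model of anxiety or jobs, only a captured string with its pronouns flipped, which is exactly the superficiality Weizenbaum hoped to expose.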

The initial expectation, that interacting with a computer would naturally feel mechanical, cold, or inhuman, was not borne out. Instead, Eliza seemed to open people up like never before.

It was as if they had just been waiting for someone, or something, to ask.

One of the first test subjects was Weizenbaum’s own secretary. Fully aware that she was communicating with a computer he had created and coded, she began typing responses to Eliza while Weizenbaum monitored the interaction over her shoulder. Two or three exchanges into the test, she turned to him and said, “Would you mind leaving the room please?”

Weizenbaum had not designed Eliza to be an effective tool for therapy, but test subjects reported exactly that: terrible news for Weizenbaum, who was unsettled that the mechanical parody of therapy he had created appeared to be helping people.

It turns out that, as with psychotherapy, people like to feel they are not being judged when talking about sensitive subjects. Weizenbaum had parroted Rogerian psychotherapy techniques, built on the idea that subjects get the best outcomes when they feel empathy, unconditional positive regard, and congruence from their therapist, in this case, Eliza.

What Weizenbaum had not anticipated was how valuable the non-human aspect of the interaction with Eliza was. A therapist should not judge; a computer cannot. And it was precisely because test subjects were not speaking to a human that they could open up without fear of judgement.
