In my previous post on linguistic interactions between computer-managed communications and people, I discussed the interesting phenomenon in human-computer interaction (HCI) whereby humans can sometimes be unaware that they are communicating with a computer – an automated system – rather than a person. As I explained, this phenomenon is clear in some of the responses we receive to our AI-driven conversations: people write long-winded messages that include irrelevant extra information, personal details, and pleasantries. In other words – small talk.
This is very forgivable, of course – why should everyone know that we now have technology sophisticated enough to carry on a conversation without the involvement of a human on one side of it? People may be unaware of the way big companies use communications, or they may not be expecting it, or they may be unused to technology. All absolutely fair enough.
But! This lack of understanding or awareness certainly doesn't apply to everyone. Lots of people – the vast majority, in fact – are perfectly aware they're being contacted by an AI system. And yet – lots of these people still don't stick to the prescribed responses. They still give extra information, like expressions of frustration or uncertainty or gratitude. This makes things more difficult for them, as it interrupts their journey and could delay their service, and more difficult for us, as it could mean they're routed to a human agent unnecessarily.
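To make the routing problem concrete, here is a minimal, purely illustrative sketch – the response lists, function names, and routing labels are all hypothetical, not ContactEngine's actual system. It shows the basic tension: a conversation expects one of a few prescribed responses, and anything chattier risks being escalated to a human agent.

```python
# Hypothetical sketch of the routing decision described above.
# PRESCRIBED and PLEASANTRIES are invented examples, not real config.

PRESCRIBED = {"yes", "no", "confirm", "reschedule"}

PLEASANTRIES = {"please", "thanks", "thank", "you", "hi", "hello",
                "morning", "afternoon", "regards", "cheers"}

def route(reply: str) -> str:
    """Return 'automated' if the reply reduces to a prescribed response,
    else 'human' – the unnecessary escalation the post describes."""
    words = [w.strip(".,!?").lower() for w in reply.split()]
    # Ignore small talk and see what content is left.
    content = [w for w in words if w and w not in PLEASANTRIES]
    if len(content) == 1 and content[0] in PRESCRIBED:
        return "automated"
    return "human"
```

Under this toy scheme, "Yes please, thank you!" still routes automatically once the pleasantries are stripped, while a genuinely chatty reply does not – which is exactly why handling small talk gracefully matters.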
So why do people respond like this, even when they must know they’re interacting with AI? Logically, there’s every reason to believe that, if we were only more aware that there wasn’t a human on the other side of the conversation, we would moderate our language accordingly.
But this isn't necessarily true. Even when humans are perfectly aware they're interacting with a machine, there's still something strange going on. The fact is that we humans tend to treat computers as though they, too, are humans. We attribute emotions to them; we treat them as though they can feel and react. We're polite to them. In one study, subjects were asked to rate the efficiency of a computer twice: first answering the questions on that same computer, then answering them on a different computer. People were more likely to be complimentary when the computer was asking about itself, and more critical when the computer was asking about another device. People seem to assume that the computer will be offended, as a human might be, if you are rude about it to its face.
We love to anthropomorphise objects – and now we’re doing it with computers too[i]
We also don't like it when computers break social rules, like principles of politeness. You have to say hello and goodbye at the beginning and end of an interaction, for example, if you don't want to be rude or weird. Another study describes a moment when, during a computer game, a character suddenly disappeared because of a glitch. People were unsettled; they interpreted this as anger or displeasure on the part of the character. Otherwise, it would have said goodbye before it went!
People like to be flattered and don’t like to be criticised – and this is true even when the flattery or criticism is coming from a computer. Even if the flattery is unfounded, and they know it, people still like it and feel more positively towards the machine than otherwise. I find it absolutely wild that despite knowing with our logical brains that it’s just meaningless 1s and 0s coming from a machine, we still fall for these compliments! And it’s such a lovely and positive thing to fall for! Humans are so sweet.
‘But this is all still new’, you might say, to rationalise this behaviour. In our centuries-long human history – millennia-long, even – of conducting trade, and making arrangements, and communicating, we have been using computers and AI for about two seconds. People just aren’t used to it yet. And because we aren’t quite familiar with these human-machine interactions, we react in a human way because that’s all we know how to do.
This may well be true – but then again, it might not. This is all occurring on a subconscious level, regardless of how aware we are that we're interacting with a computer. In another study, when people were asked whether they treated computers like humans, they mostly said no – before going on to do exactly that in the next part of the study. Even people who worked with computers all the time and were very familiar with them – web developers, programmers – and who might therefore seem the least likely group to behave this way, still did.
The studies referenced are taken from The Media Equation[ii], the book (turned theory) that gives a brilliant, fascinating look at how people respond to computers. It was published in 1996 – more than 20 years ago! – and HCI will certainly have changed a great deal in that time. As Thurlow (2003) notes,
‘Although something of a cliché, it is necessary to acknowledge the speed with which these communication technologies are changing and how academic research in this area slides towards obsolescence before it even gets going.’[iii]
However, as mentioned before, even judging only by how customers respond to ContactEngine conversations, it's clear that the Media Equation effect is still very much present. Customers are polite in their responses; they say please and thank you; they give 'like' or 'love' reactions to messages (a feature of iMessage); they bless the sender and wish it a good day. Some of these customers must assume their replies will be read by a human – but I'm sure that others, consciously or not, still simply respond to machine messages as they would human ones.
At the end of my previous post I asked, how can we tailor our conversations just right so that people know they aren’t talking to a human and therefore give us the responses we want? But when we take the Media Equation into account, it becomes more complicated. Maybe people already know they aren’t talking to a human! And they still don’t want to reply with terse, one-word answers, because it just feels rude. Perhaps instead the question we should be asking is, how can we phrase our conversations so that people are comfortable replying to them? And moreover, how can we continue to advance our AI so that ultimately it can handle even the most chatty, pleasantry-filled, small-talk-y, human responses?
[ii] Reeves, Byron & Clifford Nass (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. University of Chicago Press.
[iii] Thurlow, Crispin (2003). ‘Generation Txt? The sociolinguistics of young people's text-messaging’, Discourse Analysis Online.