
Three reasons not to make conversational AI more 'human'


Whenever conversational AI is discussed in marketing circles these days, you will likely hear a common buzz phrase: “how do we make the experience more human?”. On the surface this seems hard to argue with – when our world is transforming before us, how could making technology more human not be a good thing? But trying to make a machine experience the same as a human one can lead to problems. There are times when making the experience more machine-like has clear advantages.

The uncanny valley

The term uncanny valley refers to the sense of unease humans feel when machines resemble humans closely enough to be confusing, but not convincingly enough to be indistinguishable. It usually refers to physical robots, but it applies to AI and conversational interfaces too.

Research consistently shows that humans want to know when they’re interacting with a machine rather than a human. Mindshare’s Humanity in the Machine report (2016) claims 75% of people would be annoyed if they were tricked into believing they were talking to a human. Covert conversational experiences – where the user is unaware that they are talking to a machine – are currently being used in parts of customer service journeys with reasonable success. Apple, for example, will pass you between machine and human as part of their customer service without you ever fully knowing – you may suspect, but you can’t be exactly sure who you’re talking to. If your software is good enough you might just get away with this – for a while at least – but sooner or later your tech will break and that curtain of illusion will come crashing down. The user will then realise they have been duped. People talk to a person in a completely different way to a machine: they write more words, choose their wording more sensitively and phrase things more thoughtfully. The realisation that they were talking to a machine all along is not a pleasant one.

This effect will get worse before it gets better. Google Duplex shows just how blurred the lines between AI and human experiences have become. Increasingly, we will not know who we are talking to – machine or human – across a range of experiences, and this will cause tension and discomfort. I believe we will look back on Duplex as a mistake that went too far. Tricking people into believing they are talking to a human is smart from a technological perspective, but dumb from a psychological one. Being transparent that you’re talking to a machine – an overt experience – is, in my view, far less risky and, as I will go on to explain, more beneficial for both parties.

Anthropomorphism

The ironic thing is that machines don’t have to behave like humans for us to connect with them emotionally. As humans we project human characteristics onto objects all the time. This is called anthropomorphism, and it means we can build human-like relationships with things that don’t think, behave or look like a human. The best example of this – if you’re a Star Wars fan – is R2-D2 and C-3PO. The two droids are both much-loved characters in the franchise, but without a doubt R2-D2 is the more popular. This is despite the fact that C-3PO looks like a human, talks like a human and acts like a human. We share far fewer human traits with R2-D2, yet we love him more.

If you’re not a Star Wars fan, there are plenty of other examples in the real world. Military bomb disposal experts have talked about the emotional bond they feel with their robots and the distress they experience when one is harmed. In Canada, David Smith and Frauke Zeller created hitchBOT – a hitchhiking robot that kindly drivers picked up and carried around the country. When hitchBOT was destroyed by vandals, its online following was visibly shocked and saddened. This is all despite the fact that it was nothing more than a collection of wires and parts. You may even have experienced this yourself – do you feel an emotional bond with your car? Have you even named it, perhaps?

That makes sense for hardware, but the same applies to software as well. One of the most successful early uses of chatbots has been in mental health. A good example is Woebot – a bot that is openly a robot, yet keeps in contact with you and provides emotional support to those suffering from a variety of mental health issues. It’s astonishing how quickly a very real emotional bond can form between patient and bot.

These are all telling examples that to create emotionally engaging experiences, we don’t have to stick to human traits. Instead, we should think about how to create the best possible relationship between user and technology – which may not mean creating a human layer that mirrors exactly what the user would get from a real human.

An experience that exceeds what a human can do

That brings me to my third point. Humans and machines are good at different things. If we make machines act and behave like humans, we miss out on what machines are best at.

Part of this problem stems from why businesses are so interested in AI and technology in the first place. Too often they see conversational AI as a cost-efficient way of replacing human staff. Why employ thousands of call centre employees if one AI can cover the lot? In that case your ambition is to use technology to match what humans can do. In reality, your ambition should be to use technology to exceed what humans can do. Here’s the crux: if your AI is pretending to be a human, it is limited to what a human can do.

Businesses are tempted by the cost-cutting benefits of conversational AI


For example, using your customer data, a machine can know exactly who you are even before your first interaction. It can adapt how it interacts with you, respond immediately and retrieve the answers to thousands of questions in milliseconds. It can only do that, however, if it is clearly a machine. To make it more human, you would have to intentionally make that experience worse.

Humans and machines think differently and are therefore good at different things. In medicine, for example, machines are better at diagnosing rare conditions because they can scan millions of data points for similar patterns. Humans can’t do that, but they can spot intuitive details that machines might miss. The same is true in banking and many other industries.

We shouldn’t be designing tech-driven conversational experiences that behave like humans; we should design experiences that behave like machines, with all the benefits machines can bring. Of course, if you use humans to fill the gaps that machines can’t reach, even better – but we shouldn’t hide the fact that the user is interacting with a machine. We’re not using it to save money, we’re using it to create a better experience for the customer. That experience is not more human, it is more machine.

The answer is not always to make AI experiences more human. There are significant benefits to interacting with machines – not just practical ones like faster responses and a more connected experience, but psychological ones as well. Research shows that people appreciate the subservient nature of interacting with machines. With a machine you can be more direct, more blunt, without having to worry about what it will think of you. With a human, everything we say is a balance between how it will make the other person feel and what they will think of us. A machine experience can therefore be not only more efficient and connected, but also more emotionally satisfying.
