The uncanny valley: Do you want chatbots to have a face?

Bálint Dercsényi • 7 min read

What is the uncanny valley?

Imagine you rank all the robots, AI agents, and chatbots you’ve ever interacted with based on their similarity to humans.

At the bottom of this list, you might have your robot vacuum cleaner, because it doesn’t look or behave like a human, cannot speak to or text you, and doesn’t even have a name. At the top of the list, you might put something like Baby X, a virtual infant developed by Soul Machines, which interacts like a real baby and even mimics what happens in an infant’s brain as it learns.

Now that you have your ranking, imagine rating all these robots by how much you like them. You probably love your robot vacuum cleaner, since it’s a simple and handy tool. As you move along your list towards more human-like machines, right before you reach the cute and super advanced Baby X, you’ll find some imperfect robots and AI agents, such as CB2. These attempt to mimic human attributes but fail to do so, and that imperfection makes them unsettling and, well, uncanny.

This is what researchers call the uncanny valley: a region filled with imperfect, weirdly human-like robots, where affinity takes a nosedive.

How do we know the uncanny valley exists?

Masahiro Mori was the first to describe the uncanny valley, in a 1970 article (Mori et al., 2012). A couple of decades passed before researchers started to understand why, and in which contexts, it exists.

Ciechanowski and colleagues conducted an experiment in 2019 in which they asked participants to interact with a virtual agent. Some people chatted with a simple chatbot that provided text responses; for others, an avatar with a human face read the text aloud. As you’d probably guess, people liked the former, simpler chatbot much better. The researchers didn’t just rely on participants’ self-reports: they also found that those interacting with the avatar had higher heart rates and frowned more – an indication of arousal and negative emotions.

Researchers Song and Shin found similar results in a later study (2022). When they enhanced the human-likeness of a virtual chatbot on a laptop retailer’s website, users reported a feeling of eeriness, lower trust in the chatbot, and even reduced intentions to buy a laptop.

The uncanny valley is, therefore, not just a fun concept to discuss at nerdy dinner parties. It directly affects sales, which brings us to the practical considerations.

Practical take-aways

If giving too many human attributes to virtual agents can undermine our trust in them, companies should consider how to optimise these tools’ usability without tapping into the uncanny valley. The following key takeaways will help you hit the sweet spot:

  1. Make it clear to your customers that they’re interacting with something non-human: introduce your chatbot as a chatbot, and explain its purpose. This will help to manage expectations, as well as avoid confusion and unpleasant surprises.

  2. Be wary of artificial faces. Unnatural facial elements can be a deal-breaker, even if they’re hard to spot. Quite often, we hate a strange human-like face, and we don’t quite know why.

  3. Familiarity helps. Humans love things they’re already familiar with, so using well-known names, communication styles and appearances can make a huge difference. If your customers have already got to know your virtual agent, don’t change its attributes too often.

Conclusions

It’s tempting to put a huge emphasis on the functionalities of a product, and try to build the fanciest chatbot or social robot on the market. But what’s the point of advanced solutions if your customers are freaked out by your virtual agent, and leave the website filled with distrust?

As companies continue to replace human agents with robots, and tools like ChatGPT become a standard part of our everyday lives, these issues become more and more relevant. The design and development of robots and AI agents will need to have a pronounced focus on the psychology behind our affinity towards them.

References

Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the Uncanny Valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548. https://doi.org/10.1016/j.future.2018.01.05

Creating virtual humans: The future of AI [Video]. (2018, September 19). YouTube. https://www.youtube.com/watch?v=PHQhCiVLRpE

Child-robot [CB2] learns from experience and interaction with humans [Video]. (2018, November 22). YouTube.

Mori, M., MacDorman, K., & Kageki, N. (2012). The uncanny valley. IEEE Robotics & Automation Magazine, 19(2), 98–100. https://doi.org/10.1109/mra.2012.2192811

Song, S. W., & Shin, M. (2022). Uncanny valley effects on chatbot trust, purchase intention, and adoption intention in the context of e-commerce: The moderating role of avatar familiarity. International Journal of Human–Computer Interaction, 1–16. https://doi.org/10.1080/10447318.2022.2121038
