An Uncanny Can of Worms

The Uncanny Valley (IEEE Spectrum, June 2012)

Masahiro Mori coined the term “the uncanny valley” in 1970 to describe an observed phenomenon: human-like robots (or animated mannequins) provoke discomfort, rejection, and even revulsion as they become increasingly human-like. An articulated (but disembodied) talking face or a moving wrist and hand “just feels wrong”, even when we rationally appreciate that these are not actual “human parts” but manufactured robotic components. One theory is cognitive dissonance: when something is close to what we know or expect yet conflicts with it, two cognitions (or mental models) collide, and some (irrational) rationalization goes on to explain the conflict away, often with a strong element of rejecting the uncanny experience. Zombies in video games look the way they should: eek, it’s inhuman, run and get a big gun. But friendly human avatars in games and other virtuality applications often seem wrong or broken, and so we choose not to be their friends. Something as small as a slightly unusual eye-white shape revealed by an eye movement can totally destroy the “suspended disbelief” that those changing pixels on screen really are a character I aspire to help; it only takes the smallest mixed signal. And so many artificial human-like avatars and assistants are intentionally designed not to approach human appearance and behavior too closely, but to engage in human social interaction without risking being mistaken for a fake human.
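
Mori’s idea is usually drawn as a curve of emotional affinity against human likeness, rising steadily until it plunges into a valley just short of “fully human”. As a visual aid only, here is a minimal Python sketch of that curve; the function and its numbers are invented for illustration and are not taken from Mori’s paper or any measured study.

```python
# Illustrative sketch of Mori's uncanny valley curve.
# The shape is qualitative: a gentle upward trend in affinity,
# minus a narrow Gaussian "valley" near full human likeness.
# All numbers here are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)  # 0 = industrial robot, 1 = healthy human

# Affinity rises with likeness, then dips sharply around ~0.8 likeness.
affinity = likeness - 1.6 * np.exp(-((likeness - 0.8) ** 2) / (2 * 0.05 ** 2))

plt.plot(likeness, affinity)
plt.axhline(0.0, color="gray", linewidth=0.5)  # below this line: revulsion
plt.xlabel("human likeness")
plt.ylabel("affinity (shinwakan)")
plt.title("Illustrative uncanny valley (not measured data)")
plt.show()
```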

Keepon by BeatBots

Cartoon and animal aesthetics avoid the conflict in cognition. But what is lost by not crossing the difficult uncanny valley? I’m not sure, but I am sure it’s a large can of worms we’ve opened here. Robots such as Keepon use a minimal set of actions and easily stimulate a few human projections of emotion. So perhaps there is a spectrum of affordances of social intercourse as physical avatars become increasingly human-like. Likewise, Apple’s Siri (in 2012) is an amazing piece of human-like technology for voice interaction with mobile information systems, but it rapidly and inadvertently projects a conceited and arrogant personality because of its non-voice deficiencies. (My favorite Siri response was “I don’t know who your wife is, and I don’t know who you are either.” Siri doesn’t seem to understand the ownership semantics of a device-human relationship, though some might suggest that I don’t understand the ownership semantics of being an Apple product user.)

And so what can we do about this? The only answer we seem to have is to be vigilant: be aware that the cognitive-dissonance problem can and will arise, and check and test if and when we trigger it so we can redesign around it. Or else, be aware that we are losing human social affordances in our non-human device aesthetics, and test that we preserve the ones we need and don’t stimulate the ones that would get in the way, while knowing that we are mostly blind to the full range of affordances in play. For now, that seems to be all we can do. But some day we will have a better count of the number of worms in this can, and then everything might change. (Or should I have used the early-bird metaphor?)
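
In that spirit of vigilance, here is a minimal sketch of what “check and test” might look like in practice: gather comfort ratings for several avatar variants and flag any variant whose mean rating dips well below the rest. The variant names, the ratings, and the one-point threshold are all hypothetical.

```python
# A minimal sketch of testing for an uncanny-valley dip across avatar variants.
# Variant names and ratings are hypothetical, not real study data.
from statistics import mean

ratings = {
    "cartoon": [4.2, 4.5, 4.1, 4.4],
    "stylized": [4.0, 4.3, 3.9, 4.2],
    "near_human": [2.1, 1.8, 2.4, 2.0],  # suspect dip: possible uncanny response
    "photoreal": [3.8, 4.1, 3.6, 3.9],
}

# Mean comfort rating per variant, and the overall mean across variants.
means = {variant: mean(rs) for variant, rs in ratings.items()}
overall = mean(means.values())

# Flag any variant that falls more than one point below the overall mean
# (an arbitrary threshold chosen here purely for illustration).
for variant, m in means.items():
    flag = "  <-- investigate: possible uncanny-valley response" if m < overall - 1.0 else ""
    print(f"{variant:>10}: {m:.2f}{flag}")
```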

This entry was posted in cognition, emotion expression, research.

2 Responses to An Uncanny Can of Worms

  1. rod says:

    And the Economist (with some scientific foundation from the University of North Carolina in Cognition):

    http://www.economist.com/node/21559316
