There’s a theory called ‘The Uncanny Valley’ regarding humans’ emotional response to human-like robots. From the Wikipedia entry:
The Uncanny Valley is a hypothesis about robotics concerning the emotional response of humans to robots and other non-human entities. It was introduced by Japanese roboticist Masahiro Mori in 1970 […]
Mori’s hypothesis states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes strongly repulsive. However, as the appearance and motion continue to become less distinguishable from a human being’s, the emotional response becomes positive once more and approaches human-human empathy levels.
This area of repulsive response aroused by a robot with appearance and motion between a “barely-human” and “fully human” entity is called the Uncanny Valley. The name captures the idea that a robot which is “almost human” will seem overly “strange” to a human being and thus will fail to evoke the requisite empathetic response required for productive human-robot interaction.
While most of us don’t interact with human-like robots frequently enough to accept or reject this theory, many of us have seen a movie like The Polar Express or Final Fantasy: The Spirits Within, which use realistic – as opposed to cartoonish – computer-generated human characters. Although the filmmakers take great care to make the characters’ expressions and movements replicate those of real human actors, many viewers find these almost-but-not-quite-human characters to be unsettling or even creepy.
The problem is that our minds have a model of how humans should behave and the pseudo-humans, whether robotic or computer-generated images, don’t quite fit this model, producing a sense of unease – in other words, we know that something’s not right – even if we can’t precisely articulate what’s wrong.
Why don’t we feel a similar sense of unease when we watch a cartoon like The Simpsons, where the characters are even further away from our concept of humanness? Because in the cartoon environment, we accept that the characters are not really human at all – they’re cartoon characters and are self-consistent within their animated environment. Conversely, it would be jarring if a real human entered the frame and interacted with the Simpsons, because eighteen years of Simpsons cartoons and eighty years of cartoons in general have conditioned us not to expect this [Footnote 1].
There’s a lesson here for software designers, and one that I’ve talked about recently – we must ensure that we design our applications to remain consistent with the environment in which our software runs. In more concrete terms: a Windows application should look and feel like a Windows application, a Mac application should look and feel like a Mac application, and a web application should look and feel like a web application.
Obvious, you say? I’d agree that software designers and developers generally observe this rule except in the midst of a technological paradigm shift. During periods of rapid innovation and exploration, it’s tempting and more acceptable to violate the expectations of a particular environment. I know this is a sweeping and abstract claim, so let me back it up with a few examples.
Does anyone remember Active Desktop? When Bill Gates realized that the web was a big deal, he directed all of Microsoft to web-enable all Microsoft software products. Active Desktop was a feature that made the Windows desktop look like a web page and allowed users to initiate the default action on a file or folder via a hyperlink-like single-click rather than the traditional double-click. One of the problems with Active Desktop was that it broke all of users’ expectations about interacting with files and folders. Changing from the double-click to single-click model subtly changed other interactions, like drag and drop, select, and rename. The only reason I remember this feature is because so many non-technical friends at Penn State asked me to help them turn it off.
Another game-changing technology of the 1990s was the Java platform. Java’s attraction was that the language’s syntax looked and felt a lot like C and C++ (which many programmers knew) but it was (in theory) ‘write once, run anywhere’ – in other words, multiplatform. Although Java took hold on the server-side, it never took off on the desktop as many predicted it would. Why didn’t it take off on the desktop? My own experience with using Java GUI apps of the late 1990s was that they were slow and they looked and behaved weirdly vs. standard Windows (or Mac or Linux) applications. That’s because they weren’t true Windows/Mac/Linux apps. They were Java Swing apps which emulated Windows/Mac/Linux apps. Despite the herculean efforts of the Swing designers and implementers, they couldn’t escape the Uncanny Valley of emulated user interfaces.
Eclipse and SWT took a different approach to Java-based desktop apps [Footnote 2]. Rather than emulating native desktop widgets, SWT favors direct delegation to native desktop widgets [Footnote 3], resulting in applications that look like Windows/Mac/Linux applications rather than Java Swing applications. The downside of this design decision is that SWT widget developers must manually port a new widget to each supported desktop environment. This development-time and maintenance pain point only serves to emphasize how important the Eclipse/SWT designers judged native look and feel to be.
Just like Windows/Mac/Linux apps have a native look and feel, so too do browser-based applications. The native widgets of the web are the standard HTML elements – hyperlinks, tables, buttons, text inputs, select boxes, and colored spans and divs. We’ve had the tools to create richer web applications ever since pre-standards DOMs and Javascript 1.0, but it’s only been the combination of DOM (semi-)standardization, XHR de-facto standardization, emerging libraries, and exemplary next-gen apps like Google Suggest and Gmail that have led a non-trivial segment of the software community to attempt richer web UIs, which I believe we’re now lumping under the banner of ‘Ajax’ (or is it ‘RIA’?). Like the web and Java before it, the availability of Ajax technology is causing some developers to diverge from the native look and feel of the web in favor of a user interface style I call “desktop app in a web browser”. For an example of this style of Ajax app, take a few minutes and view this Flash demo of the Zimbra collaboration suite.
To me, Zimbra doesn’t in any way resemble my mental model of a web application; it resembles Microsoft Outlook [Footnote 4]. On the other hand, Gmail, which is also an Ajax-based email application, almost exactly matches my mental model of how a web application should look and feel (screenshots). Do I prefer the Gmail look and feel over the Zimbra look and feel? Yes. Why? Because over the past twelve years, my mind has developed a very specific model of how a web application should look and feel, and because Gmail aligns to this model, I can immediately use it and it feels natural to me. Gmail uses Ajax to accelerate common operations (e.g. email address auto-complete) and to enable data transfer sans jarring page refresh (e.g. refresh Inbox contents) but its core look and feel remains very similar to that of a traditional web page. In my view, this is not a shortcoming; it’s a smart design decision.
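To make the “Ajax as accelerator” idea concrete, here is a minimal, hypothetical sketch (the function, element, and data names are my own, not taken from Gmail or Zimbra): the dynamic part only fetches data, while the rendering sticks to the web’s native widgets – a table of plain hyperlinks – rather than emulated desktop chrome.

```javascript
// Hypothetical sketch: refresh an inbox pane in the Gmail spirit.
// Ajax handles the data transfer; presentation stays native HTML.
function renderInbox(messages) {
  // Build markup from the web's native widgets: a table of links.
  var rows = messages.map(function (m) {
    return '<tr><td><a href="/mail/' + m.id + '">' + m.subject +
           '</a></td><td>' + m.from + '</td></tr>';
  });
  return '<table>' + rows.join('') + '</table>';
}

// In a browser, an XMLHttpRequest callback would inject the markup:
//   document.getElementById('inbox').innerHTML = renderInbox(data);
// The page keeps its traditional web look; only the refresh is 'Ajax'.
```

The design point is that the user still sees ordinary links and tables that behave exactly as links and tables always have; the Ajax machinery is invisible plumbing, not a new widget vocabulary to learn.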
So I’d recommend that if you’re considering or actively building Ajax/RIA applications, you should consider the Uncanny Valley of user interface design and recognize that when you build a “desktop in the web browser”-style application, you’re violating users’ unwritten expectations of how a web application should look and behave. This choice may have significant negative impact on learnability, pleasantness of use, and adoption. The fact that you can create web applications that resemble desktop applications does not imply that you should; it only means that you have one more option and subsequent set of trade-offs to consider when making design decisions.
[Footnote 1] Who Framed Roger Rabbit is a notable exception.
[Footnote 2] I work for the IBM group (Eclipse/Jazz) that created SWT, so I may be biased.
[Footnote 3] Though SWT favors delegation to native platform widgets, it sometimes uses emulated widgets if the particular platform doesn’t provide an acceptable native widget. This helps it get around the ‘least-common denominator’ problem of AWT.
[Footnote 4] I’m being a bit unfair to Zimbra here because there’s a scenario where its Outlook-like L&F really shines. If I were a CIO looking to migrate off of Exchange/Outlook to a cheaper multiplatform alternative, Zimbra would be very attractive because it is functionally consistent with Outlook, so I’d expect that Outlook users could transition to Zimbra fairly quickly.
i really like this look and feel. not sure if your fonts are big enough though. 😉
The irony is that the Swing version of Azureus looks more like a native app than the SWT one. Just look at how the icons are disabled (greyed out) in the SWT version and that should tell you everything.
Interesting: the Uncanny Valley is not much different from how I relate to most people, on the basic level of a “model of how humans behave.”
I am a +55yo, with much street/institution time, I sent my first email [name5678@*.*.*.*] in 1984. Yep, I have a few socialization problems. I am a little shy (confused with aloofness), and feel guilty easily.
[Familiar] – If I do not know you, consider you not a possible threat, and you present (for me) a polite and respectful person, then I will get along with you for as long as we interact. If you’re a likable insane/dysfunctional, or “Herr/Frau Genitaler Kopf”, then I chuckle often. So, “The model of how humans should behave is sensibly valid.”
[KNOWN] – If I know you, you are a known, then I am required (my personality) to protect you or harm/remove you. There is no gray area, I know what I will do (status contingent), and I am very comfortable in this interactive environment. So, “The model of how humans behave is correct.”
[Adelo] – If you are neither a “Familiar nor KNOWN” person, I leave quickly or protect family and friends. There is no gray area, I know, I will leave if possible, and, I know, I am dealing with a psychopath. So, “The model of how humans should behave is violated.”
I agree, but not for the same reasons; “We must ensure that we design our applications {products} to remain consistent with the environment in which our software runs {products are used}.”
I think maybe: (1) We interact, because we are familiar with, confident and/or comfortable with a product/person. (2) We interact, because we know, enjoy, and/or care strongly about what we are doing. (3) We disengage/detach, because the unknowns (adelophobia) are very significant and threatening.
The robotics/software… products are familiar, intuitive, and useful; then known-desired/wanted or known-rejected/messy; the adelophobia point occurs when the unknowns are overwhelming. A robot that is “KNOWN” we can (emotionally) impart trust and a sense of personal security beyond just basic creature comforts; we may consider this level of robot a friend (like a pet) or a sort of extended family member. When we think for a moment that a robot could be an agenda-puppet that may harm us, family, or friends, then we do our best to control/limit the interactions. Same as you said, but a little different.
“Reality is self-induced hallucination.” (%~§) oh21
[…] Bill Higgins’ The Uncanny Valley of User Interface Design. (via […]
So that’s why I don’t like Firefox. It tries its best to emulate my GNOME desktop, but there’s lots of little things that just aren’t quite right, and it makes it look fragile.
[…] Higgins on ‘uncanny valley,’ The Simpsons and “desktop app in a web browser.” Via Signals vs. […]
[…] the Uncanny Valley of user interface design (Bill Higgins) I haven’t really thought of RIA’s this way before. The point isn’t really that RIA’s shouldn’t be done, but they should fit the mental model of the user. And this mental model has been shaped by normal webpages, does the RIA then fit this model? (tags: ajax web2.0 usability interaction webdesign ai) […]
I think this is the same reason why Linux Desktops Distros should not try to emulate a Windows Desktop.
[…] 25th, 2007 · No Comments Bill Higgins has written an excellent post titled, ‘the Uncanny Valley of user interface design‘. It discusses the application of the Uncanny Valley theory to AJAX web […]
[…] discussion as to why web applications should not try too hard to look like a desktop application by Bill Higgins (via Signal vs. […]
[…] The Uncanny Valley of User Interface Design Posted by stevepepple Filed in readings, Design […]
Almost human is one thing. That’s pretty much who we are.
Almost a Windows app is not even close to being the same phenomenon. That’s just a mental paradigm, not a biological one, and those change and evolve all the time.
As long as a change is an improvement, it’ll fly. Flash has already proven that.
[…] Higgins has an interesting discussion about the convergence of desktop and web applications. The basic idea he suggests is that bad […]
To address the specific example of Zimbra vs Gmail, I think that the problem is not that Gmail is inherently better because it *looks* like a web app, but that Zimbra is poorer because it tries to emulate a rather poor and bloated user interface.
That poor and bloated Outlook-ish interface design, however, may be easier to understand for anyone who has used any iteration of Outlook, Eudora or Thunderbird.
I strongly believe that User-Interface Design is meant to make things easier to use for the user. And very often, the user is himself emulating what he actually knows.
I think that explains pretty well why my girlfriend has trouble understanding the “conversations” concept in Gmail. This feature, as brilliant as it may be, sometimes leads to a lot of confusion for the average user.
[…] web applications should not look like […]
Great post and thread. The Uncanny Valley theory also interests me in its possible relation to the notion of the Gothic (e.g. early films like “Golem”) … and how early robot horror films were the new vampires / frankenstein etc. I guess the Terminator series is the most well-known modern example.
In 1999 I worked on web enabling the email application we used at the company where I worked. I wanted it to be familiar to the users so I took a lot of time to make it look like their desktop version. What happened is that users felt that the web version should act just like the desktop version. This led to confusion and complaints because it didn’t “work right”. When I changed the interface to be more “web like” the project became hugely successful and was eventually labeled as one of the best productivity tools created at the company.
I personally find myself turned off by many Flash applications because they don’t work as expected. In the past, Java applications have just been slow, ugly and haven’t worked as expected. This isn’t to say there shouldn’t be innovation. Google suggest doesn’t work like the web, but I haven’t found someone yet who doesn’t like it.
BTW, I like the large font because it allows me to sit back and read your site in comfort.
First, thank you Bill for finally explaining why I walked out of Who Framed Roger Rabbit in 1988… it ultimately lead to a rather painful breakup ;-). It also got me thinking philosophically, which is a rarity these days.
I find it interesting that as a designer who appreciates novel and innovative approaches, it’s apparently ineluctable human nature to grow comfortable with the familiar. Perhaps as we age the disappointments of technology for its own sake accumulate and we begin to grow intolerant of what seems to be a declining signal-to-noise ratio in the deluge of experiences out there.
The first time I saw the Flash mastery of Joshua Davis I was truly inspired, and yet, when given the choice, I routinely choose the HTML site variant over Flash. As many comments have pointed out, the innovation question is an obvious one with no easy answer.
It seems to me that finding a balance between experimentation and codified communication is at the core of what it means to be a designer. What’s appropriate? In the right environment the 90s-typographic excess of designers like David Carson felt edgy and fresh. Used indiscriminately, on detail-heavy annual reports, it was a disaster.
We need to push the envelope, but ultimately keep in mind the envelope has to be read by someone or it never gets delivered. If you have the resources, hire a user researcher, he/she will keep your design honest.
[…] our applications to remain consistent with the environment in which our software runs” and a similar quote from Bill Higgins: “a Windows application should look and feel like a Windows application, a Mac application […]
I have observed the following in the last 25 years of application design:
1. Users are most comfortable with the computer interaction they first learned. If they first used applications on an IBM 3270, they are uncomfortable with any other type of application, e.g. Windows-based applications.
2. Applications in the past educated the user on how they should interact. Older users are very uncomfortable in learning another interaction model. For example once a person learns Copy and Paste they will not discover drag and drop because they already have learned how to do that operation. Or they learn that double clicking on an icon will cause an application to appear. So they constantly double click on web page icons.
Our goal as user interface designers is to create a rich environment that encourages curiosity and directs the user to try multiple interaction styles.
The hope is that the younger generation is curious and will try something new because we have given them such a rich environment to experiment in.
This is where the highly interactive Ajax applications are important. The browser technology with DHTML/Ajax can be used to create that right environment for the user to experiment in.
I saw a baby robot on BBC World today. It rolled around, moved its eyeballs, had rubber skin and blinked with its rubber eyelids. Needless to say, it was disgusting.
“Native Application” or “Native Environment” — every time I read either of those phrases they remind me of Word 2.0, Lotus 1-2-3, and Paradox 4.
That may very well be the problem that we’re having migrating philosophically to and accepting the paradigm “Web Application”.
I refer back to post #45 by Robert Barth: “Good design is good design, regardless of the environment.”
I think that we all know that none of us has ever used MS Word for anything but relatively simple document creation … not coming close to taking advantage of its full set of features. But that sets up an interesting conflict and an expectation that, because of its capability, it is a good or great interface. Bull-puckie!!!
Along those same lines, Windows’ emulation of the desktop in the study at my house does not necessarily dictate the best way to interact with the files and folders on a computer … but we’ve gotten used to it, and it again sets a particular expectation.
Now our problem is further confused by categorizing application types by native capability versus “native expectation” that can be just as suspect as the Uncanny Valley.
I think the bottom line is that the “native environment” is the box sitting in front of us (desktop, laptop, camera phone, thumb drive, iPod). The challenge is understanding the user needs coupled with the content/function being delivered. We are so used to the questionable foundations provided by early attempts that we expect bloat and unnecessary complexity, much to our own demise and dissatisfaction with the technology, whatever it might be.
That’s the challenge … sticking to the task at hand, focusing on the solution to the problem … form follows function, less is more … those solutions have been around far longer than any of us and will continue to be long after we’re gone.
Robotics? … it would be weird sitting next to something close to human but, what problem is a humanoid in my house solving?
[…] for reverence rather than manipulation. Bill Higgins wrote a post on his blog a while ago called the Uncanny Valley of user interface design that considers a UI design metaphor that parallels on in robotics design. In short the theory […]
Nice thoughts, but somehow you only drew half of the consequences. You seem to forget that there are two sides of that Valley for UI as well.
Trying and failing to mimic something surely bothers the users. So one solution, as you said, is not to fail at mimicking, i.e. to go the SWT/native way for Java. However, as with robots and cartoons, there must be the other side of the Valley, where the software is different enough. And that must be the reason that there are so many successful applications not looking native at all.
So a Swing app can look compelling enough; it just has to be consistent, and different enough from Windows. (GTK-themed Swing apps look really cool and native.)
[…] is good and i’m all for more interactive interfaces, but not at the expense of usability. This article advocates against recreating the desktop in the web-browser ,suggesting that in doing this […]
So, about the model in our brain that reflects human styles of behavior, I want to ask a question – may we suppose that the insane are indeed normal people, and we see them as suffering from mental illness just because of this model?
[…] developing the Jazz Web UI technology led to my UI design opinions that I captured in my “Uncanny Valley of UI design” […]
[…] as I had hoped. My first session explored the limits we’re bumping into with Ajax, especially user interface challenges, nontrivial client-server data communication problems, and the fallacies of distributed computing […]
[…] have one more option and subsequent set of trade-offs to consider when making design decisions. Bill Higgins :: the Uncanny Valley of user interface design gruss, Sven __________________ "There are two major products that come out of Berkeley: […]
Nice article: good points well put.
A quick note about your preface: I believe the phrase you wanted was “without further ado” not “without further adieu” (ie. without more bustle, not without additional goodbyes).
I hate to make much ado about nothing, but the Internet is fighting such a hard battle against my mother tongue that an occasional salvo back seems not unwarranted.
Apart from that, applications, unlike humans, have no objective form? You can do whatever you want with them. There is no mold that applications are stamped from?
@Carrington: I changed ‘adieu’ to ‘ado’. Thanks very much.
[…] sure how I missed this back in ‘07 but many thanks to my former student Ryan Cannon for pointing this out to me. I […]
This is exactly why I don’t like MobileMe’s web application. They tried to make it just like a Desktop app and it just doesn’t feel right at all. Thanks for the excellent post, it makes perfect sense and I hope web application developers remember this as they’re building their apps.
This post is a subset of a more powerful idea: inertia for change. There are many, many applications that behave and look nothing like the native interface and are a pleasure to use and very successful. Applications like Propellerheads Reason, Softimage, Maya, Adobe Photoshop, Adobe Lightroom, etc.
How do you explain that? Well, the secret is that you *can* introduce strange new behavior, as long as the user hasn’t seen it before (or at least, has not incorporated another method/behavior). Otherwise, it would seem alien to the user and he or she would have to make a real effort to overcome the inertia for change. This also explains why “doing what he/she already knows” works.
(Found this article thanks to Daring Fireball)
I think what most people are forgetting about the “uncanny valley” is the gentle slope before you hit the chasm. In animation and robotics, it’s the progression from “this reminds me of X”, sloping through “this is like X”, up to “hey cool, this is almost like X”, and then suddenly “uh, this is creepily too much like X”.
The uncanny valley here is different, if it is indeed a valley. Users don’t feel creeped out so much as they bring their performance expectations with them, and consider the imitator to be “broken” or “crippled”. The uncanny valley itself would only kick in when the user feels the browser app is trying TOO MUCH to be like a local OS app. I think a better example would be how Google Desktop Search creeped a lot of people out by blurring the border between searching the web and searching local files, as it was essentially a local application, but it behaved like a web app.
So the real challenge isn’t imitation or not, it’s the level of imitation involved. This restriction is, I feel, a positive force on innovation. Instead of trying to copy 100%, interfaces should try to hit the sweet spot of exploiting both familiarity with other tools and interfaces yet also embracing the unique characteristics of its native environment.
[…] Higgins talks about the danger of web designers trying to create online applications look and act too much like their re…. This is an interesting point, but regarding an issue that is still very much in flux. Web design […]
[…] Bill Higgins :: the Uncanny Valley of user interface design – […]
[…] Bill Higgins :: the Uncanny Valley of user interface design "I’d recommend that if you’re considering or actively building Ajax/RIA applications, you should consider the Uncanny Valley of user interface design and recognize that when you build a “desktop in the web browser”-style application, you’re violating users’ unwritten expectations of how a web application should look and behave. This choice may have significant negative impact on learnability, pleasantness of use, and adoption." Yes. (tags: interaction design web ux usability aesthetics billhiggins ) […]
[…] Bill Higgins :: the Uncanny Valley of user interface design We’ve had the tools to create richer web applications ever since pre-standards DOMs and Javascript 1.0, but it’s only been the combination of DOM (semi-)standardization, XHR de-facto standardization, emerging libraries, and exemplary next-gen apps like Google Suggest and Gmail that have led to a non-trivial segment of the software community to attempt richer web UIs which I believe we’re now lumping under the banner of ‘Ajax’ (or is it ‘RIA’?). […]
[…] valley, a notion that begun with robotics but has been extended to software design. Bill Higgins espouses on the software aspects of the uncanny valley when he says […]
I don’t think one can cite Active Desktop and the horribly kludgy Java attempts at mimicking native UI widgets as adequate cause not to deviate from ‘expected norms’. Let’s face it – the problem with these implementations is not that they changed the game, but that they did so very poorly.
I’m also amused when anyone goes on about UX design based on paradigms that have hardly baked for more than a single generation. I have reasonable doubt my daughter is going to think of a web application as even being a ‘web application’, let alone something that should resemble today’s gmail garbage (I’m hoping she won’t even be shackled to a mouse).
There is nothing wrong with applications for the web, desktop or otherwise that deviate from expectation or blur boundaries, provided they do so purposefully and with good design. But really it is time we stopped worrying about context and focused mainly on purpose.
@alinear Great points. I think I agree with your last paragraph and this is a flaw in my argument.
[…] from now, but for the present trying to erase the line between the two types is what leads us to “The Uncanny Valley of User Interface Design”, as outlined in Bill Higgins’ excellent post. In a nutshell: web apps should conform to a […]
“I went to the Uncanny Valley and all I got were funny looks”
😉
http://www.cafepress.com/Amicus.397866354
Just because it violates YOUR mental model, Bill, doesn’t mean it does so for others.
Not everyone has the 12 years of baggage on the Web that you have, and more and more people are expecting rich desktop functionality in their Web apps.
In fact, if this were the case, we’d never have evolved from HTML 1.0 would we?
The reason Gmail is successful is because it is Google.
Google Wave, written with their RIA toolkit GWT, looks very much like a desktop application and nothing like your outdated Web model…
[…] familiar with the Uncanny Valley that is the gap between desktop interfaces and accurately recreated, desktop-lik…. But there’s an Uncanny Valley between real web interfaces and mock web interfaces, which […]
[…] few years ago Bill Higgins posted, the Uncanny Valley of user interface design, in which he states We must ensure that we design our applications to remain consistent with the […]