Andrew Shebanow of Adobe recently wrote an interesting blog entry with the unfortunate title of “The Death of UI Consistency”. A few excerpts:

What I’m really talking about here is how the goal of complete UI consistency is a quest for the grail, a quest for a goal that can never be reached.

The reason I think [that RIAs bringing web conventions to the desktop is a good thing] is that it lets us move the conversation away from the discussion of conformance with a mythical ideal and towards a discussion of what a usable UI should be.

I’ve been thinking about UI consistency quite a bit recently. Although Andrew’s on the right track, I think he clouds the issue by arguing that “the goal of complete UI consistency is a quest for the grail”. I personally don’t know anyone who has argued for complete UI consistency; indeed, my recent experience, especially with Ajax-style web applications, has been that many designers don’t consider UI consistency enough. But before going further, I think it’s important to consider what it means to provide UI consistency.

First, it’s important to remember that consistency is relative. While we can measure certain UI characteristics, like background color or width, in absolute terms (e.g. ‘white’ or ‘1400 pixels’), we can only measure consistency relative to established visual and behavioral conventions. These conventions vary by platform: in a Windows application you expect to see a close button (‘x’) in the upper right-hand corner of each window; on a web site you expect clicking a hyperlink to take you to a new page. And because there are no universal UI conventions, there’s no such thing as absolute consistency; there is only consistency vis-à-vis platform conventions.

I believe Andrew is observing that as rich client and web technologies converge, so too do their UI conventions, and sometimes these conventions conflict with one another. John Gruber complained that the Adobe CS3 close box didn’t follow the Mac convention; Andrew posits that this is because CS3 follows neither Mac conventions nor Windows conventions; it follows the conventions of the Adobe platform.

It’s all well and good to say that you’re creating a new platform and that your platform introduces new UI conventions. But the fact is that users do have certain expectations about how UIs should look and behave, and when you violate those expectations by breaking conventions, you’d better be confident that the benefits outweigh the potential pain you’ll cause users.

So how should we decide whether to follow established UI conventions or to attempt something new and different? To answer this question, it’s important to first understand the value of following conventions as well as the costs and benefits of violating conventions.

Observing established UI conventions has two main benefits:

  • You reduce your application’s learning curve, because users can (subconsciously) apply experience gained elsewhere on the platform. For example, when you see blue underlined text on a web page, no one needs to explain that you can click it.
  • Your app is more pleasant to use or, more accurately, less unpleasant to use. Consider Gruber’s comment, “God, that just looks so wrong”. Have you ever felt that way when using a Swing application that tried to emulate a native Windows or Macintosh look and feel but didn’t quite succeed?

To quote my former colleague Don Ferguson, “different is hard”. Different can also feel awkward. As you interact with a class of apps over time, your mind builds up subconscious expectations about how apps of that class should look and behave. When an app violates its platform’s conventions, it often becomes harder to use and sometimes just plain annoying. For instance, have you ever used a web site whose hyperlinks weren’t underlined and weren’t blue? Not pleasant. All that said, it might seem we should always observe UI conventions, but that’s not the case either.

UI conventions are not the laws of physics. They represent previous human design decisions that became the norm either because they were very useful (the hyperlink) or simply because they became entrenched (the ‘File’ menu). Either way, a smart (or lucky) designer can invent a new mechanism that violates existing conventions yet overcomes the barrier to entry through sheer usefulness. But it’s a high bar: a new UI mechanism must not simply be better than the convention it replaces; it must be so much better that its value outweighs the added learning curve and strangeness. A good example of a UI innovation that succeeded is the ‘web text box auto-complete dropdown’ pattern that we see in web applications like Google Suggest, del.icio.us, and Google Maps. Many smart people considered this behavior strange and novel when they first encountered it; these days we barely notice it, though we certainly appreciate its usefulness. In other words, it’s on its way to becoming a new convention.
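To make the pattern concrete, here’s a minimal sketch of the auto-complete dropdown in TypeScript against the browser DOM. This is an illustration, not how Google Suggest actually works: the `#search` box, the `#suggestions` list, and the `GET /suggest?q=<prefix>` endpoint returning a JSON array of strings are all hypothetical.

```typescript
// Minimal sketch of the auto-complete dropdown pattern.
// Hypothetical markup: <input id="search"> and <ul id="suggestions">.
// Hypothetical API: GET /suggest?q=<prefix> returns a JSON array of strings.

// Fire `fn` only after the user pauses typing for `waitMs` milliseconds,
// so we don't send a request on every single keystroke.
function debounce(fn: (...args: unknown[]) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: unknown[]) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

const input = document.querySelector<HTMLInputElement>('#search')!;
const dropdown = document.querySelector<HTMLUListElement>('#suggestions')!;

async function showSuggestions(prefix: string): Promise<void> {
  if (prefix === '') {
    dropdown.replaceChildren(); // empty box: hide the dropdown
    return;
  }
  const response = await fetch(`/suggest?q=${encodeURIComponent(prefix)}`);
  const suggestions: string[] = await response.json();
  // Note: a production version would also discard out-of-order responses.
  dropdown.replaceChildren(
    ...suggestions.map((text) => {
      const item = document.createElement('li');
      item.textContent = text;
      // Clicking a suggestion fills the text box, as users now expect.
      item.addEventListener('click', () => {
        input.value = text;
        dropdown.replaceChildren();
      });
      return item;
    }),
  );
}

input.addEventListener('input', debounce(() => {
  void showSuggestions(input.value);
}, 200));
```

The debounce is a common design choice in this pattern: it avoids firing a request on every keystroke, which both reduces server load and keeps the dropdown from flickering as the user types.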

So I believe that designers should observe established UI conventions except when they decide that violating those conventions provides enough value to outweigh the costs. In practice, many designers don’t consciously think about observing or breaking conventions; they just do what feels right. And you know what? Sometimes they succeed, and their arbitrary design choices become new conventions. But a design that violates conventions without understanding the trade-offs runs the risk of feeling just plain arbitrary.