Andrew Shebanow of Adobe recently wrote an interesting blog entry with the unfortunate title of “The Death of UI Consistency”. A few excerpts:
What I’m really talking about here is how the goal of complete UI consistency is a quest for the grail, a quest for a goal that can never be reached.
…
The reason I think [that RIAs bringing web conventions to the desktop is a good thing] is that it lets us move the conversation away from the discussion of conformance with a mythical ideal and towards a discussion of what a usable UI should be.
I’ve been thinking about UI consistency quite a bit recently. Although Andrew’s on the right track, I think he clouds the issue by arguing that “the goal of complete UI consistency is a quest for the grail”. I personally don’t know anyone who has argued for complete UI consistency; indeed my recent experience, especially with Ajax-style web applications, has been that many designers don’t consider UI consistency enough. But before going further, I think it’s important to consider what it means to provide UI consistency.
First it’s important to remember that consistency is relative. While we can measure certain UI characteristics, like background color or width, in absolute terms (e.g. ‘white’ or ‘1400 pixels’), we can only measure consistency relative to established visual and behavioral conventions. These conventions vary by platform – for example, in a Windows application you expect to see a closing ‘x’ in the upper right-hand corner of the window; on a web site you expect clicking a hyperlink to take you to a new page. So because there are no universal UI conventions, there’s no such thing as absolute consistency; there is only consistency vis-à-vis platform conventions.
I believe Andrew is observing that as rich client and web technologies converge, so too do their UI conventions, and sometimes these conventions conflict with one another. John Gruber complained that the Adobe CS3 close box didn’t follow the Mac convention; Andrew posits that this is because CS3 tries to follow neither Mac conventions nor Windows conventions – it follows the conventions of the Adobe platform.
It’s all well and good to say that you’re creating a new platform and that your new platform introduces new UI conventions, but the fact is that users do have certain expectations about how UIs should look and behave, and when you violate these expectations by not following conventions, you’d better be confident that the benefits outweigh the potential pain you’ll cause users.
So how should we decide whether to follow established UI conventions or to attempt something new and different? To answer this question, it’s important to first understand the value of following conventions as well as the costs and benefits of violating conventions.
Observing established UI conventions has two main benefits:
- You reduce your application’s learning curve because the user can (subconsciously) leverage previous experience within your application. For example, when you see blue underlined text on a web page, no one needs to explain that you can click it.
- Your app is more pleasant to use or, more accurately, less unpleasant to use; observe Gruber’s comment, “God, that just looks so wrong” – have you ever felt that way when using a Swing application that was trying to emulate a native Windows or Macintosh look and feel but not quite succeeding?
To quote my former colleague Don Ferguson, “different is hard”. Different can also feel awkward. As you interact with a class of apps over time, your mind builds up subconscious expectations about how apps of that class should look and behave. When an app violates its platform conventions, it often becomes harder to use and sometimes just plain annoying. For instance, have you ever used a web site that decided its hyperlinks shouldn’t be underlined and shouldn’t be blue? Not pleasant. All this being said, it seems like we should always observe UI conventions, but this is not the case either.
UI conventions are not the laws of physics. They represent previous human design decisions that became the norm either because they were very useful (the hyperlink) or just because they became entrenched (the ‘File’ menu). Either way, it is possible for a smart or lucky designer to invent a new mechanism that violates existing conventions yet overcomes the barriers to entry because of its usefulness. But it’s a high bar: a new UI mechanism must not simply be better than the convention it replaces; it must be significantly better, such that its value offsets the added learning curve and initial strangeness. A good example of a UI innovation that succeeded is the ‘web text box auto-complete dropdown’ pattern that we see in web applications like Google Suggest, del.icio.us, and Google Maps. Many smart people considered this behavior strange and novel when they first discovered it; these days we don’t really notice it, though we certainly appreciate its usefulness. In other words, it’s on its way to becoming a new convention.
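To make the pattern concrete, here is a minimal sketch of how such an auto-complete dropdown is typically wired up, written in TypeScript against modern browser APIs. The `/suggest` endpoint and element ids are hypothetical, and this shows the general technique rather than any of those sites’ actual implementations:

```typescript
// A minimal sketch of the auto-complete pattern: debounce keystrokes,
// fetch suggestions for the current prefix, and render them under the box.
// The element ids and the /suggest endpoint are made up for illustration.

const input = document.getElementById("search") as HTMLInputElement;
const list = document.getElementById("suggestions") as HTMLUListElement;

let timer: number | undefined;

// Placeholder for whatever backend call supplies the suggestions.
async function fetchSuggestions(prefix: string): Promise<string[]> {
  const res = await fetch(`/suggest?q=${encodeURIComponent(prefix)}`);
  return res.json();
}

input.addEventListener("input", () => {
  // Debounce so we don't issue a request on every single keystroke.
  window.clearTimeout(timer);
  timer = window.setTimeout(async () => {
    const suggestions = await fetchSuggestions(input.value);
    list.innerHTML = "";
    for (const s of suggestions) {
      const item = document.createElement("li");
      item.textContent = s;
      // Clicking a suggestion fills in the text box, as users now expect.
      item.addEventListener("click", () => {
        input.value = s;
        list.innerHTML = "";
      });
      list.appendChild(item);
    }
  }, 200);
});
```

The debounce delay and the click-to-fill behavior are exactly the details users have internalized; once they are in place, the control stops feeling novel.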
So I believe that designers should observe established UI conventions except when they decide that violating said conventions provides enough value to outweigh the costs. In practice, many designers don’t really think about observing or breaking conventions; they just do what feels right. And you know what? Sometimes they succeed, and their arbitrary design choices become new conventions. But a design that violates conventions without understanding the trade-offs runs the risk of feeling just plain arbitrary.
Good post on an interesting topic.
Couple thoughts:
– There is often a tradeoff between shortening the learning curve and putting power in the hands of experts. I’ve been very impressed recently with the Adobe Lightroom UI, which breaks a lot of conventions (and it is cross-platform, which can make convention-following even more difficult). It is a complex app that feels approachable to a novice like myself (though it requires investment), yet powerful enough for those who spend a lot of time working within it.
– The web has had a minimum of conventions for quite a while, and has been hampered by the lack of basic UI building blocks (tree controls, etc.), which in part led to a lot of experimentation. But because of the lack of adopted conventions (like color palettes and File / Edit / View menus), the web has a chance to redefine some metaphors that are outdated. For example, one of my favorite features of Gmail is its auto-save feature, which is still not a widely employed technique. (IMHO webapps that collect lengthy information should auto-save, even if only to present a “draft” version later, so as not to lose lots of work – see the first sketch after this list.)
– As you mention, there are many newly forming web conventions, like a linked logo in the upper left that takes one “home” (even my parents expect this one). Or links that can be opened in a new tab/window, which some sites break by using JS where they should not (see the second sketch below). There is still an open frontier to make the web a far more usable and enjoyable place than it is today!
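Picking up the auto-save suggestion above, here is a rough TypeScript sketch of the technique. The `/drafts` endpoint and element id are made up, and a real app would add error handling and conflict checks:

```typescript
// A rough sketch of Gmail-style auto-save: periodically persist a draft of
// a long form so the user's work survives a crash or accidental navigation.
// The /drafts endpoint and element id are hypothetical.

const editor = document.getElementById("message-body") as HTMLTextAreaElement;

let lastSaved = "";

async function saveDraft(): Promise<void> {
  const text = editor.value;
  if (text === lastSaved) return; // nothing changed; skip the round trip
  await fetch("/drafts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ body: text }),
  });
  lastSaved = text;
}

// Save at a fixed interval rather than on every keystroke,
// trading a little staleness for far fewer requests.
window.setInterval(saveDraft, 30_000);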
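And on the point about links that should be real links, a small sketch contrasting the JS-only anti-pattern with the convention-respecting anchor (the element id and URLs are invented for illustration):

```typescript
// The anti-pattern the comment above describes: a "link" wired up purely
// through a click handler. Middle-click, ctrl+click, and "open in new tab"
// all silently break because there is no real href for the browser to use.
const fakeLink = document.getElementById("report-link")!;
fakeLink.addEventListener("click", () => {
  window.location.href = "/reports/2007";
});

// The convention-respecting alternative: a real anchor element, so the
// browser's own affordances (new tab, status bar, bookmarking) keep working.
const realLink = document.createElement("a");
realLink.href = "/reports/2007";
realLink.textContent = "2007 reports";
document.body.appendChild(realLink);
```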
I agree with you, Bill. My thoughts on Shebanow’s post here: http://redmonk.com/jgovernor/2007/04/12/ui-consistency-death-but-i-want-plain-text/
Good points, Bill. Perhaps you’re planning on touching on this in a future post, but one of the most aggravating things I’ve had to deal with in UI work is getting an accurate model of the end user’s capabilities and expectations. I have sat through way too many arguments between uber-geeks about the “easiest” way to do something in a UI, and of course neither of them was anywhere remotely close to what our end users wanted. Geeks frequently forget that what’s easy for them can be very, very difficult for the average user to understand.
Glen
Good post. FWIW, the point I was trying to make wasn’t that UI consistency is not worth working towards. Rather, I was trying to say that the goal is to have “cognitive consistency” with the user’s expectations rather than requiring everyone to work from the same design rulebook. Thus it’s OK to tweak the way your UI works if users still “get it”, but on the other hand it’s bad to change your UI if it forces users to stop and think when performing an action. The web already gets this right.
[…] a lesson here for software designers, and one that I’ve talked about recently – we must ensure that we design our applications to remain consistent with the environment in which […]
I enjoyed your thoughts and refinement of Shebanow’s post. After considering your point, I find that linguistics applies to the problem.
The structure and conventions of application design are perhaps arbitrary, outdated, and insufficient, but they are the closest thing we have to a useful, basic user language.
That said, I look forward to the grammar of application and web design evolving. As people use their desktops and the web for novel tasks, corresponding design solutions become important.
This is a good synopsis of a prevalent issue in web and console application development. I find that the value of UI consistency is often lost to whatever solution is least difficult to implement – much like patching holes in a leaky ship rather than bringing her to dock.
I have found that it is good, from the planning stage through execution, to run those solutions through an architectural and UI checklist. It is equally important to ensure that UI decisions do not violate the design structure. It is all about balance.
The loudest point I take from this is to treat the UI as part of the delivery to the customer. A co-worker of mine, named Steve, claims, “The UI IS the application!” He is right… who ultimately uses the software we design?
[…] The tools available to us developers have evolved much in recent years, allowing us to create richer interfaces and interactions. With power comes responsibility – we need to apply discretion when using advanced techniques and tools, as to not confuse users. Breaking interface conventions by using new technologies where they are not needed is a mistake. […]