This article reviews UI patterns based on the simple command button and adds some background information.
Cooper refers to these controls as imperative because they result in immediate action. Imperative controls relate to their objects as verbs relate to nouns in natural language.
The Basic Command Button
The most basic of all controls is the command button. It is derived from a design that has evolved over the past decades — it is 3-dimensional and it visually responds to being clicked.
The figure above shows how command buttons have evolved since 1977, comparing Apple’s and Microsoft’s operating systems.
Why command buttons are 3-dimensional
Buttons can be easily identified by their 3-dimensional appearance — they have the visual affordance of an imperative control. Donald Norman adopted the term affordance, introduced by James J. Gibson, and translated it for use in human-computer interaction in The Design of Everyday Things.
Norman states that a psychology of causality surrounds our daily usage of things. Put simply: clues about how things work come from their visible structures — this is exactly what affordance is.
Restrictions on touch screens
Resistive touch screens cannot emulate a “mouse-over” state; therefore, a button’s appearance cannot change on hover. Furthermore, there is no default button that can be activated by pressing the Enter key, as on mouse-and-keyboard systems.
On touch screens, the only way to indicate a button’s importance is to alter its visual design and/or size.
A command button with slightly extended functionality is the hold-and-activate button: it activates a function for as long as it is pressed, e.g., when making a public announcement. The button can display several states to indicate whether the connection is open or has failed. It makes sense to add an icon to this button to indicate its special functionality.
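The hold-and-activate behavior can be sketched as a small state machine. This is a minimal illustration, not an implementation from the article; the state and event names are assumptions made for this example.

```typescript
// Hypothetical states of a hold-and-activate button (e.g., for a PA system).
type HoldButtonState = "idle" | "connecting" | "active" | "failed";
type HoldButtonEvent = "press" | "release" | "connectionOpened" | "connectionFailed";

function nextState(state: HoldButtonState, event: HoldButtonEvent): HoldButtonState {
  switch (event) {
    case "press":
      // Pressing the button starts opening the connection.
      return state === "idle" ? "connecting" : state;
    case "connectionOpened":
      return state === "connecting" ? "active" : state;
    case "connectionFailed":
      return state === "connecting" || state === "active" ? "failed" : state;
    case "release":
      // Releasing the button always ends the activation.
      return "idle";
  }
}
```

Each state would be rendered with a distinct visual display, so the user can see at a glance whether the connection is open or has failed while the button is held.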
Why buttons alter their appearance
Ben Shneiderman originally described the principle of direct manipulation as a combination of visual representation of screen objects, manipulation of these objects, and feedback on user actions. While Donald Norman has reservations because of the sometimes steep learning curve that direct interaction requires, it has become the dominant interaction style today. Interaction is even more direct on touch screens: no intermediaries such as mice and pointers are required; objects can be touched directly on the screen.
Cooper uses the term pliancy for controls that can be manipulated on screen and stresses the importance of communicating a control’s pliancy to the user. This can be done in various ways:
- static object hinting: The control has a distinct look
- dynamic visual hinting: The control changes its appearance when the mouse or finger hovers over it
- pliant response hinting: The control shows, with a distinct visual state, that it is being pressed but not yet released
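The three kinds of hinting can be sketched as a mapping from pointer state to visual state. The `PointerState` shape and the CSS class names below are assumptions for illustration only.

```typescript
// Illustrative mapping of Cooper's hinting kinds to hypothetical CSS classes.
interface PointerState {
  hovering: boolean; // mouse pointer (or finger, where supported) is over the control
  pressing: boolean; // control is pressed but not yet released
}

function visualHint(p: PointerState): string {
  // Pliant response hinting takes precedence over the hover state.
  if (p.pressing) return "button button--pressed";
  // Dynamic visual hinting: the control changes its appearance on hover.
  if (p.hovering) return "button button--hover";
  // Static object hinting: the control's distinct base look is always present.
  return "button";
}
```

On touch screens only the first and last cases apply, since there is no hover state to drive dynamic visual hinting.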
Restrictions on touch screens
This pattern should not be used on single-touch screens, because the button may be held down for a long period of time. Any additional touch while the button is held would, depending on the touch-screen manufacturer, be interpolated with the button’s position, deactivating the function assigned to the hold-and-activate button. In the worst case, another function could be triggered inadvertently.
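The interpolation problem can be made concrete with a small sketch. The linear midpoint used here is an assumption about how a single-touch controller blends two contacts; actual hardware behavior varies by manufacturer.

```typescript
// Hypothetical model of a single-touch controller that sees two simultaneous
// contacts and reports roughly their midpoint instead of either real position.
interface Point { x: number; y: number; }

function reportedTouch(a: Point, b: Point): Point {
  // Assumption: the controller interpolates the two positions linearly.
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}

// A finger holding the hold-and-activate button at (100, 500) plus a second
// touch at (300, 100) yields a reported point that lies on neither control:
// reportedTouch({x: 100, y: 500}, {x: 300, y: 100}) → {x: 200, y: 300}
```

The reported point leaves the held button, so the associated function is deactivated, and if the midpoint happens to land on another control, that control fires instead.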
The hold-and-activate button extends the pliant response properties of the basic command button: when pressed, it can display more than one state. To indicate its special functionality to the user, it should be made to look different, i.e., its static object hinting should be adapted.