Despite the great potential of medical information technologies, their
implementation and integration into medical practice have often proved to be
more difficult than was anticipated.
Much research has focused on a range of technical issues in the implementation
of these systems, including computer communication and networking, physical
input devices (e.g., mouse and keyboard), and the development of software
standards for controlled medical vocabularies.
However, research has only recently begun to investigate the cognitive and
social dimensions of physicians' encounters with computer-based technologies.
Human-computer interaction is a science of design that seeks to
understand and support human beings interacting with technology. In
our research, we are principally interested in characterizing the usability
and learnability of medical information technologies.
“Usability” refers to the capacity of a technology to be used
easily and effectively by a range of users (e.g., health care workers), given
specified training and user support, to perform a range of tasks (e.g.,
diagnosis and patient management) within a specified range of settings (e.g.,
clinics, offices, and hospital wards).
“Learnability” refers to the ease with which a user can attain
certain levels of competency. Training is an essential (and often overlooked)
ingredient in promoting the effective use of technologies. We have observed
that training is too often narrowly focused on attaining basic competency in
the use of a system (e.g., basic commands, navigation, etc.). Although this is
an essential aspect of the learning process, there is a need to tailor
training toward developing specific cognitive skills that will lead to more
productive use. For example, a medical student can learn to do a Medline
search relatively easily. However, it is more difficult to develop effective
search strategies that can maximize the yield of relevant literature and
minimize extraneous costs. The development of expertise is clearly predicated
on substantial experience. However, advanced training can significantly
accelerate the learning process.
Our laboratory is actively engaged in research evaluating medical record
systems and computer-based learning environments.
The objectives are twofold: to contribute to a process of iterative design in
the development of more effective systems, and to continue to refine a
theoretic and methodologic framework for the cognitive use of medical
technologies. We employ two classes of usability techniques: usability
inspection methods and usability testing. Usability inspection methods are a
set of analytic techniques for characterizing the usability-related aspects of
the interface. These methods are typically employed by an analyst or
experimenter working with the system being tested. Usability testing involves
observing end users employing a system to perform representative tasks (for
example, recording a patient history in a computer-based patient record
system). We have developed a set of cognitively based video analytic
techniques for characterizing subjects' behavior. These methodologies have
been employed in a range of tasks and settings. In this section, we illustrate
the use of the usability inspection method known as the cognitive walk-through
(CW). We have adapted this design evaluation methodology to study the usability
and learnability of medical information systems.
The purpose of a walk-through is to evaluate the process by which users
perform a task and the ease with which they can do this. The CW methodology
involves identifying sequences of actions and goals needed to accomplish a
specific task. More specifically, the primary aims of the CW procedure are to
determine whether the user's background knowledge and the cues
provided by the interface are sufficient to construct the goal structure
necessary to generate the action sequence required to perform a task and to
identify potential usability problems. To do this, an experimenter/analyst
performs a task simulation, “stepping through” the sequence of
actions necessary to achieve a goal. The principal assumption underlying this
method is that a given task has a particular generic goal-action structure
(basically, the ways in which a user's objectives can be translated into
particular actions). This analysis also provides us with substantial insight
into the cognitive demands of a task. For example, tasks that require the user
to execute lengthy sequences of actions or require movement between different
screens make heavier demands on a user's working memory.
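The goal-action structure that the walk-through assumes can be made concrete
as a small tree of steps. The sketch below is an illustrative assumption, not
part of the CW method itself: it models goals, subgoals, actions, and screen
transitions as nodes and tallies them, since lengthy action sequences and
frequent screen changes are the demands the analysis is meant to expose.

```python
# Illustrative sketch of a goal-action hierarchy for a cognitive walk-through.
# The Step type and the example task are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class Step:
    kind: str                      # "goal", "subgoal", "action", or "screen"
    label: str
    children: list = field(default_factory=list)

def tally(step, counts=None):
    """Count goals, subgoals, actions, and screen transitions in a task model."""
    if counts is None:
        counts = {"goal": 0, "subgoal": 0, "action": 0, "screen": 0}
    counts[step.kind] += 1
    for child in step.children:
        tally(child, counts)
    return counts

task = Step("goal", "Find review articles", [
    Step("subgoal", "Do MEDLINE search", [
        Step("action", "Open browser"),
        Step("screen", "Database list"),
        Step("action", "Select MEDLINE"),
    ]),
])

print(tally(task))  # {'goal': 1, 'subgoal': 1, 'action': 2, 'screen': 1}
```

A tally of this kind gives a rough, comparable index of the working-memory
load a task imposes across alternative interface designs.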
The walk-through is an example of a strong theory-based approach with clear
practical implications. This approach draws on theories of problem solving,
skill acquisition, and human-computer interaction. The end product of an
analysis is a set of cognitive models that can both inform theories of human
performance and have obvious consequences for design and implementation. We
illustrate below the CW procedure in the context of a Medline search. This is
an activity with which most readers have some familiarity and which is
sufficiently complex to illustrate the CW methodology. In the following
scenario, the top-level goal is to perform a database search to locate
pertinent review articles about the relationship between diabetes and
pregnancy. The following outline illustrates a goal-action sequence for
accessing the MEDLINE database from the Ovid bibliographic information
service.
Goal: Find recent review articles related to pregnancy and diabetes.
Subgoal: Do MEDLINE Search.
Subgoal: Access Ovid Browser and Query System.
Action: Open Browser.
System Response: Lists available databases (e.g., PsychINFO, MEDLINE).
Subgoal: Open MEDLINE database (1993-1997).
Actions: Scroll down and select MEDLINE.
Action: Press <Enter>.
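The access sequence above can be encoded directly as a nested structure and
checked mechanically. The tuple encoding below is an illustrative assumption,
not part of the CW method; system responses are omitted for brevity.

```python
# The MEDLINE access outline, encoded as nested (kind, label, children) tuples.
# This encoding is illustrative only; the labels follow the outline above.
outline = ("goal", "Find recent review articles on pregnancy and diabetes", [
    ("subgoal", "Do MEDLINE search", [
        ("subgoal", "Access Ovid browser and query system", [
            ("action", "Open browser", []),
            ("subgoal", "Open MEDLINE database (1993-1997)", [
                ("action", "Scroll down and select MEDLINE", []),
                ("action", "Press <Enter>", []),
            ]),
        ]),
    ]),
])

def count(node, kind):
    """Count nodes of a given kind in the goal-action tree."""
    k, _, children = node
    return (k == kind) + sum(count(c, kind) for c in children)

print(count(outline, "subgoal"), count(outline, "action"))  # 3 3
```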
In the preceding sequence, there are three subgoals and three actions
needed to access MEDLINE. The CW characterizes the (hypothetic) goals and
subgoals of the user, related actions, system responses, and potential
problems. Subgoals reflect inferences needed to connect a higher-level goal to
specific actions. The actions arise from the user's intentions but are
critically shaped by system responses. In the following characterization of a
goal-action sequence, the next goal is to do a keyword search for articles
related to diabetes.
Goal: Find articles related to diabetes.
Subgoal: Do a keyword search on diabetes.
Action: Enter key combination <Control> <R>.
System Response: “Enter a word or phrase to be searched in
titles and abstracts.”
Actions: Type in diabetes and press <Enter>.
System Response: Returns 22,998 entries.
The same sequence is then repeated for the term “pregnancy.”
Once these goals have been accomplished, the two sets of results must be
integrated to find those entries that correspond to both diabetes and
pregnancy. This part of the walk-through is illustrated by the following
sequence.
Goal: Merge list of entries.
Potential Problem: Subject must map the term “combine” to the goal of
merging the two sets.
Subgoal: Combine data sets (diabetes and pregnancy).
Action: Enter key combination <Control> <N>.
System Response: Screen with two sets and instructions. “Use
the spacebar to select at least two sets to combine and then press
<Enter>.”
Actions: Scroll to diabetes and press spacebar.
Actions: Scroll to pregnancy and press spacebar.
Action: Press <Enter>.
System Response: Combine sets screen: Choose Boolean connective
“and” or “or.”
Potential Problem: Choice of connectives.
Actions: Select “and” and press <Enter>.
System Response: Returns 870 entries.
Potential Problem: List is too extensive.
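The combine step is, at bottom, Boolean set algebra over retrieval sets:
“and” is intersection, “or” is union. The entry identifiers below are made up
for illustration and do not correspond to actual MEDLINE records.

```python
# Combining two retrieval sets with Boolean connectives.
# The entry IDs are hypothetical, for illustration only.
diabetes  = {101, 102, 103, 104}
pregnancy = {103, 104, 105}

combined_and = diabetes & pregnancy   # entries indexed under both terms
combined_or  = diabetes | pregnancy   # entries indexed under either term

print(sorted(combined_and))  # [103, 104]
print(len(combined_or))      # 5
```

The choice of connective matters: “or” here would enlarge rather than narrow
the result set, which is one reason the walk-through flags it as a potential
problem.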
The top-level goal of the entire search necessitates several actions, and
there are a number of potential problems of which only a few are indicated
here. The problems may pertain to the surface characteristics of the interface
(e.g., clarity of dialogue elements) or may be of a more conceptual nature
(e.g., mapping terms to actions). The system returns 870 entries, which would
make the task of finding relevant articles too cumbersome. The user must then
find a way to narrow the search space. The final goal relates to limiting a
search to articles in English, studies of human subjects, and review
articles.
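Applying limits of this kind amounts to filtering the result set on record
attributes. The record fields and values below are hypothetical and do not
reflect Ovid's actual schema.

```python
# Narrowing a search by applying limits, modeled as attribute filtering.
# Records and field names are hypothetical, for illustration only.
records = [
    {"id": 1, "language": "English", "subject": "human",  "type": "review"},
    {"id": 2, "language": "French",  "subject": "human",  "type": "review"},
    {"id": 3, "language": "English", "subject": "animal", "type": "trial"},
]

limits = {"language": "English", "subject": "human", "type": "review"}

narrowed = [r for r in records
            if all(r[field] == value for field, value in limits.items())]
print([r["id"] for r in narrowed])  # [1]
```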
A MEDLINE search is a task of minimal-to-moderate complexity. (Current
e-mail programs are systems of minimal complexity, whereas most computer-based
patient record systems are of substantial complexity.) Our analysis indicated
that this complete task requires 22 actions and involves 12 goals and subgoals
and nine transitions between screens. A first-time MEDLINE user may be
frustrated by the complexity of the interface and the sequence of actions
necessary to accomplish a goal. In addition, the transitions between screens
will invariably cause navigational problems for some users. After using the
system a few times, however, the user is likely to develop sufficient facility
to achieve a range of basic goals without too much difficulty.
The walk-through can serve a number of purposes, including contributing to
the iterative software design process and aiding in the development of
instructional materials. We have also used this method to develop a coding
scheme to analyze end users' performance of a task. The
walk-through reveals a subset of potential user problems but is most effective
when used in conjunction with video-based usability testing involving
representative end users. This approach can also yield valuable information
about the efficiency of various procedures (e.g., the number of actions needed
to search a database); the prior knowledge needed to draw various inferences
from the system's behavior; the consistency of tasks supported by a system
(most tasks should have similar goal-action hierarchies); and the transparency
of system feedback (responses to users' actions). Video-based usability
testing can also contribute to effective training by characterizing productive
strategies and by making transparent the various affordances of the system
(e.g., undocumented shortcuts) as well as the constraints.
We have used the CW technique to characterize the learnability of
multimedia instructional software and, more recently, have applied this
technique in the study of various kinds of computer-based patient record
systems. These systems represent immensely complex interactive environments
that make numerous conceptual as well as perceptual and motor demands on the
user. There is a critical need to study cognitive aspects of the interface and
its effects on both advanced and novice users. In addition, these systems
greatly affect information gathering strategies and problem representation. As
these systems proliferate, they are likely to have a substantial impact on the
way medical students as well as novice physicians learn clinical medicine. We
are currently engaged in an effort to understand and delineate the cognitive
dimensions of physicians' interactions with computer-based patient record
systems.
Computer-based systems do not merely facilitate or enhance the performance
of a given task; they also have an enduring impact on the mastery of related
tasks. Salomon et al.,
considering the effects of technology on intellectual performance, introduce a
useful distinction between the effects with technology and the effects of
technology. The former are concerned with the changes in
performance displayed by users while equipped with the technology. For
example, when using an effective medical information system, physicians should
be able to gather information more systematically and represent this
information in a more structured manner. In this capacity, the medical
information technologies may alleviate some of the burden on the physicians'
working memory and permit them to focus on higher-order thinking skills, such
as hypothesis generation and evaluation. The phrase “effects of technology”
refers to lasting changes in general cognitive capacities (knowledge and
skills) as a consequence of interaction with a technology. For example,
extensive use of information technologies may result in enduring changes in
diagnostic and therapeutic reasoning even in the absence of the system. This
suggests that medical information systems and decision-support technologies
may have ancillary positive consequences but may also induce complacency and
certain dependencies on systems. In our view, effective training can serve to
maximize the positive consequences and minimize the counterproductive ones.
The focus of much research in human-computer interaction has been on
understanding the solitary individual who uses a computer or workstation and
deriving guidelines for design based on this understanding. Although we view
this work as important, there is clearly a need to understand the social,
contextualized, and distributed nature of work in health care settings. This
issue is discussed in the next section.