J Biomed Inform. Author manuscript; available in PMC 2009 June 1.
Published in final edited form as:
PMCID: PMC2459228

Translational Cognition for Decision Support in Critical Care Environments: A Review

Vimla L. Patel, PhD, DSc,1,2 Jiajie Zhang, PhD,3 Nicole A. Yoskowitz, MA,2 Robert Green, MD, MPH,4 and Osman R. Sayan, MD4


The dynamic and distributed work environment in critical care requires a high level of collaboration among clinical team members and a sophisticated task coordination system to deliver safe, timely and effective care. A complex cognitive system underlies the decision-making process in such cooperative workplaces. This methodological review paper addresses the issues of translating cognitive research to clinical practice, with a specific focus on decision-making in critical care and the role of information and communication technology in aiding such decisions. Examples are drawn from studies of critical care in our own research laboratories. Critical care, in this paper, includes both intensive (inpatient) and emergency (outpatient) care. We define translational cognition as research on basic and applied cognitive issues that contributes to our understanding of how information is stored, retrieved and used for problem-solving and decision-making. The methods and findings are discussed in the context of constraints on decision-making in complex real-world environments and their implications for the design and evaluation of decision support tools for critical care health providers.

Keywords: translational cognition, distributed cognition, critical care, intensive care, emergency triage, clinical workflow, technological design, medical errors, decision support, cognitive task analysis, ethnographic analysis, naturalistic decision-making


Several researchers have proposed that the healthcare system can be characterized as a complex system. In a series of articles in the British Medical Journal, Wilson, Plsek and colleagues introduced complexity science to the general medical literature [1], suggesting applications for healthcare organization [2] and clinical practice [3]. Smith argues that an emergency department is a “paradigmatic complex system” [4]. This argument rests on the unpredictability of both patients’ clinical conditions and clinicians’ work patterns, the vast decision space and incomplete evidence that complicate clinical decision-making, and the inherent unpredictability of the system as a whole. This methodological review paper focuses on the cognitive dimensions of the complex critical care environment [5] and their implications for decision support. Other, non-cognitive dimensions, although important, are not explicitly covered in this paper.

It is well known that errors increase as a function of complexity. The phrase “error in evolution” denotes the progression of a series of small errors towards a cumulative adverse event. Carlson and Doyle argue that complexity confers behavior that is “robust yet fragile” [6], resulting in a system that is tolerant of common perturbations but vulnerable to the failure of certain individual components. Rasmussen’s “error margins” refer to the limits of the cognitive capacity of a given system [7–9]. When these margins are approached, for example when operating at maximum productivity, the system is driven toward the boundaries of safety.

Theoretical and methodological notions from the field of distributed cognition (discussed in Section 3.2) and concepts from the complex systems literature are pertinent to the study of the Intensive Care Unit (ICU) and Emergency Department (ED) as complex cognitive systems. There are four major properties of complex adaptive systems: aggregation, non-linearity, flow, and diversity [10]. In the ICU, clinicians aggregate in response to changing patient priorities, for example in the event of an emergency resuscitation. The similarity between aggregate members is cognitive: they are united temporarily by a shared goal where “the local rules of interaction change as the system evolves or develops” (p. xii) [11], making it a nonlinear situation. Local interactions between clinician-agents and artifacts are fundamental to collaborative decision-making. Furthermore, with each decision-action event, the patient is perturbed, providing real-time feedback and prompting re-evaluation of “correct” decisions with unintended consequences. ICU decisions emerge from the flow of information between clinical team members. This team is diverse: it includes attending physicians, residents, nurses, and pharmacists, each with a particular expertise and perspective. Information also flows between artifacts and between artifacts and human agents, such as patient monitors, clinical images, medical records, and clinicians. Research studies need to address the factors that push decision-makers toward unsafe boundaries and then make the decision processes at these crucial points explicit to decision-makers. Providing decision support in such an environment will rest upon findings from such cognitive studies.

Decision support in the clinical environment is defined as “advice and guidance offered by information and communication technology to aid the problem solving and decision making of health care providers” (p. 6) [12]. Computer-based decision support can be seen as the use of information technology to bring relevant knowledge to bear on decisions made for the well-being of the patient.

Computer-based decision support tools are intended to help practitioners avoid errors, ensure quality and improve efficiency in healthcare. Yet, there appears to be resistance to the use of such systems. How well a system is accepted by practitioners depends on the degree to which it supports them in achieving their immediate goals.

In this paper, we review cognitive methodologies for the study of medical cognition and their applications to critical care settings to understand the nature of clinical decision-making and use these findings to inform the design of computer-based decision support.


Translational research is typically defined as the transfer of knowledge from one domain to another. Translational clinical research is specifically about the translation of basic biomedical research findings from bench to bedside. We define translational cognition as the translation of research on basic and applied cognitive issues to the understanding of medical cognition and the evaluation, design, and implementation of decision support tools for healthcare. The translation can involve general principles, generic methodologies, and specific research findings.

In this paper, we include both intensive care (inpatient) and emergency care (outpatient) in our definition of critical care. We focus on the following aspects of translational cognition: the application of general cognitive principles to healthcare domains, and more specifically, how findings from studies of cognition and decision-making in critical care environments can be used to develop decision support tools. To address these issues, we will first discuss why we need to study real world decision-making in order to understand how people make decisions under various constraints. We then describe the framework of distributed cognition, which provides a language, a frame of reference, and a perspective for cognitive studies of critical care. The theory of cognitive load, which lends itself to decision-making research, is subsequently introduced. We continue with a review of the cognitive foundation of medical errors, including how the nature of the critical care environment places a cognitive overload on clinicians, thus increasing chances for error, and a cognitive taxonomy for categorizing errors, such that we can outline systematic, principled methods for design of improved medical error reporting systems for the purpose of providing decision support. We then provide an overview of the methods and techniques used to study cognition by other researchers and by our research teams. In the second half of this paper, we discuss specific examples from our studies in critical care that provide support for the value of cognitive methods in understanding decision-making in these environments. These examples are for illustrative purposes only; they are not exhaustive. We have selected our own examples because of our familiarity with them and our access to the details of these studies.
Subsequently, we integrate our findings from the critical care studies and make recommendations for developing decision support tools, including information technology interventions, to improve quality and safety in critical care.


There are two frameworks that provide the foundation for our research on cognition and errors in critical care: naturalistic decision-making and distributed cognition. We argue that a naturalistic approach to understanding decision-making in medical settings, such as critical care, is necessary for the eventual development of decision support tools for error prevention and management. In conjunction with this naturalistic approach, we conceptualize the critical care environment in terms of the framework of distributed cognition, which is crucial for the understanding, identification, and management of the various factors responsible for the quality and safety of critical care.

3.1. Medical Decision-Making: Naturalistic and Classical Models

A large body of empirical research on medical decision-making has accumulated over the past half century. Much of this work has been conducted within the classical paradigm of decision-making. However, it became increasingly apparent that there are several weaknesses to this paradigm that undermine the value of conclusions of this body of research for the development of effective decision-support technologies for healthcare settings. Thus, it is essential to develop a broader-based and more valid foundation of “basic science” decision research for the study of medical and other decision-making as it occurs in naturalistic settings. This has led to the emergence of the naturalistic decision-making (NDM) approach [5, 13–15].

In contrast to the classical decision-making (CDM) paradigm, the NDM approach focuses on developing in-depth, ecologically valid, descriptive models of decision-making performance, which necessitate the use of a wide range of qualitative (and quantitative) methodologies. Whereas CDM studies are usually controlled laboratory studies, NDM research is conducted in real-world settings, where multiple cognitive, social, affective and environmental factors influence decisions and behavior. In such settings, decision strategies are needed that adapt to the constraints of the particular situation, which include stress, time pressure, and risk, among other factors. These strategies may be the product of individuals or teams. Research has moved towards investigating team interaction and performance, as communication and collaboration in the medical environment is critical for successful continuity of the daily workflow.

Because technology mediates clinical performance, decision support technologies need to be conceptualized in the context of actual practice. Towards this end, there is a need for a deeper understanding of clinical performance in real-world settings (under sub-optimal conditions) by both novices and experts, the effects of technology propagating through the different layers of an organization, and the adaptiveness of health professionals to an increasingly technologically-mediated world. Therefore, acquired knowledge of decision-making in complex, real-world environments, based on a cognitive framework, is necessary for designing and implementing technologies that can facilitate decision processes in real-world clinical settings, such as in critical care. In conjunction with this naturalistic approach, we conceptualize the critical care environment in terms of the framework of distributed cognition, which is described in the next section.

3.2. Distributed Cognition

Distributed cognition is a theoretical development of the distributed systems approach, originally conceptualized by Hutchins and colleagues and later expanded by others [16–23]. It has previously been applied to the study of cognitive systems underlying task performance on naval vessels [16] and in the airplane cockpit [18]. It is a scientific discipline that is concerned with how cognitive activity is distributed across internal human minds, external cognitive artifacts, and groups of people, and how it is distributed across space and time (see Figure 1) [16, 18, 22–30]. In this view, people’s intelligent behavior results from interactions with external cognitive artifacts and with other people, and people’s activities in concrete situations are guided, constrained, and to some extent, determined by the physical, cultural, social, and historical contexts in which they are situated [31, 32], as in a natural working environment. The unit of analysis is a distributed cognitive system composed of a group of people interacting with external cognitive artifacts (e.g., the cockpit of a commercial airplane, the emergency department in a hospital, or an air force squadron unit). In general terms, we describe the components of a distributed cognitive system as internal and external representations. Internal representations are the knowledge and structure in individuals’ minds; external representations are the knowledge and structure in the external environment [23].

Figure 1
Schematic drawing of the conceptual framework of distributed cognition, which focuses on how information, knowledge, and processes are distributed between individual minds and external artifacts, among team members, across space, and across time and how ...

The following describes how cognition is distributed between an individual mind and an external artifact and between individual minds. There are a wide variety of complex tasks that require the processing of information that is distributed across internal minds and external artifacts. External artifacts are defined as objects (e.g., light switches), symbols (e.g., writing), tools (e.g., slide rule, abacus, calculator, computer), and other entities that change, support, or modify human cognitive behavior. It is the interwoven processing of internal and external information that generates much of a person’s intelligence. For example, let us consider multiplying 965 by 273 using paper and pencil. The internal representations are the meanings of individual symbols (e.g., the numerical value of the arbitrary symbol “5” is five), the addition and multiplication tables, and arithmetic procedures, which have to be retrieved from memory. The external representations are the shapes and positions of the symbols and the spatial relations of partial products, which can be perceptually inspected from the environment. To perform this task, people need to process the information perceived from external representations and the information retrieved from internal representations in an interwoven, integrative, and dynamic manner. Zhang & Norman [23] developed a framework of distributed representations to account for the behavior in these types of distributed cognitive tasks. One important aspect emphasized by distributed cognition research is that external representations are more than inputs and stimuli to the internal mind. External representations have many non-trivial properties. For many tasks, external representations are intrinsic components, without which the tasks either cease to exist or completely change in nature.
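The paper-and-pencil multiplication task can also be rendered as a short computational sketch that separates the two kinds of representation: the single-digit products retrieved from memory (internal) and the positional layout of shifted partial products (external). This sketch is purely illustrative and not drawn from the studies reviewed here.

```python
# Illustrative sketch: paper-and-pencil multiplication as a distributed task.
# "Internal" representation: the memorized single-digit multiplication table.
# "External" representation: the written column of positionally shifted
# partial products, inspected perceptually rather than held in memory.

TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}  # memorized facts

def long_multiply(x: int, y: int) -> int:
    partials = []  # the external record of partial products on the page
    for place, digit_char in enumerate(reversed(str(y))):
        digit = int(digit_char)
        # Each digit-by-digit product is an internal memory retrieval...
        row = sum(TIMES_TABLE[(int(xc), digit)] * 10**i
                  for i, xc in enumerate(reversed(str(x))))
        # ...while the shift of each row is encoded externally by its position.
        partials.append(row * 10**place)
    return sum(partials)

print(long_multiply(965, 273))  # 263445, the worked example from the text
```

Neither representation alone suffices: deleting the table removes the retrieved facts, and deleting the positional shifts changes the task entirely, which is the sense in which external representations are intrinsic components of the task.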

Cognition can also be distributed across a group of individuals. There are two different views of how this occurs. The reductionist view considers that the cognitive properties of a group can be entirely determined by the properties of individuals. In this view, to understand group behavior, all we need is to understand the properties of individuals. In contrast, the interactionist view considers that the interactions among the individuals can produce emergent group properties that cannot be reduced to the properties of the individuals. In this view, to study group behavior we need to examine not only the properties of individuals but also the interactions among the individuals. Examples of emergent group properties include group affect [33], collective efficacy [34], shared mental models, and transactive memory systems [35].

Hollnagel and Woods [36, 37] recently offered a systematic account of distributed cognition in what they called joint cognitive systems. They consider people and technology as a joint cognitive system for work. Technology and automation, they found, do not necessarily lead to simplification of work. Rather, they introduce more complexity and adaptation. In other words, when a new technology is introduced, people adapt their strategies and artifacts to work around difficulties and accomplish their goals as responsible agents.

In our research, we find that the concept of distributed cognition is very valuable for accounting for activities in critical care, where activities are distributed between people and artifacts, across members of groups, and across space and time [16, 24]. Therefore, the quality of patient care is measured as a function of how well the whole system operates (the interaction of individuals, teams, information systems, and the critical care environment). In this framework, medical errors are viewed as inevitable but cognitively useful phenomena that cannot be totally eliminated. They are products of the distributed cognitive activities in distributed systems that are grounded in complex physical, social, and cultural environments. In order to manage errors during clinical decision-making, it is critical to understand how decisions are made and what underlying cognitive mechanisms are used to process information during interactions with patients, colleagues, and technology in these systems. Albolino’s work on sensemaking (social understanding) in the intensive care unit [38, 39], a high-tempo and highly uncertain environment, is related to this area. Results from this work have shown that clinicians in the intensive care unit balance their work between collaborative “sensemaking” episodes and routine work activities in order to organize future courses of action.

3.3. Cognitive Factors in Critical Care

People make use of adaptive strategies to perceive, interpret, organize and communicate information, but their actions are constrained by the functional characteristics of the system and the constraints of the environment [7]. Reasoning and interpretation of information are influenced by cultural expectations, for example, by the assumptions of responsibility attributed to specific professional roles (e.g., physicians, nurses, clerks), and by the limitations that human attention and memory place on cognitive processing [5]. In critical care, the complexity of performing even routine tasks is increased by the constraints of time, by insufficient or unavailable information, by stress, and by frequent and unpredictable interruptions [40]. Tasks are often completed in a non-linear progression, as equipment and people move around and need to be located, break-in tasks need to be attended to as they arise [41], and staff need to temporarily step away from a task due to interruptions [42]. Work in such a highly interruption-driven environment puts extraneous demands on the cognitive resources of each clinician.

As shown by France and colleagues [43], temporary interruptions appear to be a major source of inefficiency in emergency care, and likely a major threat to patient safety, as in other similarly demanding environments such as aviation [42]. In a study comparing clinician workflow in an emergency department with that in a primary care clinic, researchers found that emergency physicians were interrupted at a much higher rate (9.7 times an hour) than primary care physicians (3.9 times an hour), and that emergency physicians were involved more often in simultaneous care of multiple patients [41]. Although interruptions are necessary and important to maintain awareness of the continuously changing working environment, inappropriate management of interruptions can have a detrimental effect on performance, efficiency and error rate.

Coiera and others have studied how communication patterns contribute to the interrupt-driven environment of the ED [44–47]. Results from these studies indicate that healthcare providers’ preference for synchronous communication to obtain information contributes to the number of interruptions experienced by doctors and nurses. For example, in an analysis of nearly 20 hours of observation data for doctors and nurses working in an Australian ED, one study found that 35.5% of the communication events were interruptions [47]. This resulted in a rate of 14.8 interruptions per person per hour. In a study of doctors and nurses employed in a British district general hospital, it was reported that communication behaviors contributed to an interruptive workplace [46]. The researchers reported that while the medical staff received multiple paging interruptions, they generated twice as many outgoing calls. The medical staff also experienced interruptions through face-to-face contacts. Two studies specifically investigated paging as a source of interruption for clinicians working in a hospital, finding that a significant number of interruptions were a result of pagers [48, 49].

Earlier in this paper, we stated that the goal of the distributed cognition approach to workflow, collaboration, reasoning, and human-computer interaction research is to understand the shifting meaning of information in the context of the environment [31] and to explain how it is transformed as it propagates through the system and circulates among collaborating agents. This insight may guide the selection of appropriate technological interventions for specific problems and avoid adding an undesirable level of complexity to an already difficult process. For example, pager interruptions are a major issue in healthcare and there is an urgent need to manage them [49]; paging exemplifies an information delivery system that needs to be better integrated into clinical work. The most effective interventions will likely center on three approaches: the use of technology to automate information flow, the elimination of unnecessary interruptions, and the development of optimal means of communication to manage unavoidable interruptions.

Recently, France and colleagues [43] conducted a study on the effects of implementing an electronic whiteboard in the ED on physician work, communication and workload. Results showed that physicians in this study performed more tasks and were interrupted less frequently with the introduction of the electronic whiteboard than physicians in previous studies in conventional EDs without such technology. In addition, only 9% of the interruptions affected direct patient care. Although the presence of the electronic whiteboard increased work and communication efficiency, interruptions continued to occur, suggesting the need for more comprehensive interventions, not solely limited to the introduction of information technology into the environment. It is also possible that some minimal interruptions are necessary for “efficiency”, since critical care personnel provide patient care to more than one patient at a time in this environment. Xiao and colleagues [50, 51] have also conducted work on cognitive artifacts (i.e., cognitive properties of a whiteboard) and its effects on collaboration in a Level I trauma center operating room unit. For example, the public display whiteboard was used as an efficient tool for supporting collaborative work and for inventing new ways of representing information, using the magnetic objects on the board. Such tangible aspects of highly collaborative healthcare work have profound implications for research and development of information and communication technology despite the tendency to model work as flow of abstract data items (see also [52]).

Based on findings from studies on interruptions and communication in critical care, Coiera and colleagues recommend focusing on support for better communication practices between clinicians as a way to increase the quality and safety of patient care [53, 54]. In fact, they emphasize that understanding communication patterns will improve our understanding of how decision support systems should be designed to support effective communication [55]. In other words, the human factors involved in information exchange and interaction are fundamental to designing adequate support systems for work in critical care.

Technology alters the way individuals and groups collaborate and work. It may increase, enhance, or speed up performance [56] and reorganize task completion strategies. Its impact on the use of knowledge and reasoning has been evident, for example, in studies of electronic health record systems [57] and web-based patient tools for health management [58].

Many currently available healthcare information systems are not sufficiently sophisticated to operate effectively in highly complex environments and fail to provide adequate support to clinician users [59]. Complex information systems, in combination with stressful, high-velocity work environments, may add to the extraneous cognitive demand and create ample opportunities for error [60]. Highly complex system interfaces, for example the text-laden, dense and cluttered screens of many information systems, considerably raise the level of cognitive workload and add to the number of cognitive tasks required to monitor and manage a computer-driven work environment [61, 62]. A particularly detrimental aspect of cognitive overload in clinical work is the diversion of attention away from the main medical task. A physician whose attention is constantly shifting and who needs to mentally integrate data from disparate displays may not be able to formulate a complete and coherent picture of the current state of the system [63]. However, a systematic and robust conceptual understanding, or situational awareness, is necessary to recognize unusual or abnormal system states signifying a possible failure. It is therefore necessary that systems present perceptual cues that do not require conscious effort to extract meaning from screen objects but instead support integrative views and perceptual judgments [59]. The increasing versatility and complexity of clinical information systems also require users to develop a high level of skill that can be acquired only through hours of training and extended work experience. Such an effort is often unrealistic to demand from clinicians whose time is scarce and expensive [64]. Most users of clinical information systems therefore never achieve a high level of proficiency. Cursory training and only a vague familiarity with a new system leave users to rely on opportunistic learning during actual clinical work, which may result not only in delays but ultimately in medical errors [65].

Careful design may help to minimize routine tasks by automating them and by displaying context-relevant information in formats that require minimal further interpretation or mental manipulation for immediate, direct use [66, 67]. For example, studies in aviation and power plant management have shown that intuitive monitors can improve detection, control, and prediction of future system states [68] and control staff can therefore avoid errors by making more accurate decisions. In medicine, clinical performance can be improved when displays are consistent with the user’s clinical processes and mental models [69]. Clinicians will then be able to conserve their attentional resources and focus fully on higher-order mental activity, such as clinical reasoning, strategy and treatment planning, and devote more time to unusual or non-routine cases [70].

3.4. The Cognitive Foundation for Medical Errors

In order to understand medical errors, we need to categorize them along different dimensions. Most medical error taxonomies [71–79] are based on clinical, administrative, and other non-cognitive dimensions. They are mostly useful for documenting errors but not for explaining, managing, and preventing them. Medical error is largely a cognitive phenomenon caused by many cognitive as well as non-cognitive factors. In order to understand the cognitive mechanisms underlying various errors, we need to categorize the errors along cognitively meaningful dimensions. To address the need for a cognitive framework specifically developed for medical errors, Zhang, Patel, Johnson, & Shortliffe [80] developed a cognitive taxonomy based on Reason’s theoretical framework of human error and Norman’s “Action Theory” [81, 82].

Cognitive factors are important for understanding medical errors at various levels of the healthcare system hierarchy. At the level of the individual, cognitive factors of individuals (e.g., knowledge, attention, memory, perception, action, reasoning, decision-making, etc.) play a critical role [83]. At the next level, errors can occur due to interactions between an individual and technology. This is an issue of human-computer interaction, where cognitive properties of interactions between human and technology affect and sometimes determine human behavior and task complexity [16–23, 81, 84, 85]. For example, poor design of the control displays of infusion pumps could lead to serious medication errors. At the next level, errors can be attributed to the social dynamics of interactions between groups of people interacting with complex technology in a distributed cognitive system. For example, errors can emerge in many scenarios such as the failure of coordination and communication between overnight and daytime nurses who must achieve mutual understanding about the state of a patient for whom they both care. At the next few levels up, errors can be attributed to factors of organizational structures (e.g., coordination, communications, standardization of work process), institutional functions (e.g., policies and guidelines), and national regulations. At these higher levels, cognitive factors also play important roles in the forms of organizational memory [86], decision-making [87], problem-solving [56, 88] and communication [89]. For example, at the organizational and institutional levels, the high-urgency nature of the decision-making environments, such as in intensive care, makes them vulnerable to multiple kinds of errors [90].

Many errors in healthcare are systemic institutional errors caused by problems that are not due to any individual or team of individuals, but rather are caused by some fault in a system. This category may include problems with technological systems [91], the physical design of the workspace, or the use of institutionally sanctioned, but faulty protocols. Although the properties at various levels can be to some extent studied independently, a cognitive foundation for the system is essential for a comprehensive and in-depth understanding of medical errors.

Figure 2 shows the cognitive taxonomy developed by Zhang et al [80] by integrating Reason’s taxonomy and Norman’s action theory, where errors are divided into slips and mistakes, which are further divided at two more levels. An example of an execution slip is when a nurse intends to decrease a value using the decrement function, but pushes the down arrow key (which moves to the next field) instead of the minus key. An example of an evaluation slip is when a user presses the start button on an infusion pump, after which the pump indicates that it has started infusing, so the user assumes the patient is receiving the drug; however, the user had forgotten to open the clamp on the hose, so no drug was being delivered to the patient.

Figure 2
A cognitive taxonomy of medical errors, human errors during an action sequence (from Zhang, Patel, Johnson, & Shortliffe, 2004 [80]; Reprinted with permission from Copyright Elsevier Limited 2004). There are two types of medical errors: slips ...

This cognitive taxonomy can cover the major types of medical errors because a medical error is a human error in an action, and any action goes through the seven stages of the action cycle described by Norman [82]: establishing the goal, forming intentions, specifying the action sequence, executing the actions, perceiving outcomes, interpreting outcomes, and evaluating the outcomes against the goal. Most reasoning and decision-making errors in medicine fall under the category of mistakes in the taxonomy; they are due to incorrect or incomplete knowledge, or to other factors. The taxonomy also provides preliminary analyses of the underlying cognitive mechanisms for each category of error and recommendations for intervention strategies. More recently, Malhotra and colleagues [92] extended this model to include communication between multiple healthcare providers.
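The top split of the taxonomy can be illustrated with a minimal classification sketch. The slip/mistake and execution/evaluation distinctions follow the taxonomy described above; the helper function and its rule are our own illustrative simplification, not the authors' implementation:

```python
# Minimal sketch of the top levels of a slips-vs-mistakes error taxonomy.
# Hypothetical simplification for exposition, not the published model.

def classify_error(intention_correct: bool, stage: str) -> str:
    """Classify an error by intention and by the failed stage of the action cycle.

    intention_correct -- whether the clinician formed the right goal/intention
    stage -- the stage that failed: "execution" or "evaluation"
    """
    if stage not in ("execution", "evaluation"):
        raise ValueError("stage must be 'execution' or 'evaluation'")
    if intention_correct:
        # Right intention, wrong performance or wrong reading of the outcome: a slip.
        return f"{stage} slip"
    # Wrong intention, e.g., from incorrect or incomplete knowledge: a mistake.
    return f"{stage} mistake"

# The two infusion-pump examples from the text:
print(classify_error(True, "execution"))   # nurse presses the wrong key
print(classify_error(True, "evaluation"))  # clamp left closed, pump "looks" started
```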

Recently, as a departure from traditional approaches to human error, Hollnagel, Woods, and Leveson [93] proposed a Resilience Engineering approach to medical error. One of their key points is that past efforts to improve system safety have commonly been based on hindsight. Resilience engineering proposes a new vocabulary, and with it a new way of thinking about safety. They argue that people are usually resilient in adapting to different situations, and that research should therefore focus on the processes and support systems that allow individuals to succeed and avoid error.

3.5. Cognitive Methods

3.5.1. Modeling the Clinical Workflow

Ethnography, as a research method, is commonly used in sociology and anthropology to acquire detailed accounts of a particular environment, the people involved, and individuals’ interactions within the environment. Though traditional ethnography and cognitive methodology are drawn from different disciplines, and thus have different goals, they can be integrated into an innovative and more effective technique for the study of cognition. Cognitive ethnography (CE) emerged from the adaptation and modification of three of the ten principles of prototypical ethnography outlined by Ball and Ormerod [94]: (1) the principle of “intensity” is replaced with “specificity” of data collection; (2) “independence,” which requires that the researcher hold no existing theories, goals, or beliefs prior to observation, is replaced with “purposive” techniques involving specific research goals and theoretical interests; and (3) “personalization,” which requires researchers to record their thoughts and feelings about the observations, is replaced with “verifiability” (validation of the results across various settings and triangulation across observers). The purpose of this adaptation is to constrain the amount of data to be analyzed, with a more specific goal in mind.

Ethnographic methods used in our studies in critical care include shadowing representative physicians and nurses and audio- or video-recording all of their interactions with each other, as well as collecting think-aloud protocols while they perform identified tasks (such as medical rounds). They also include taking notes on non-verbal cues and interactions while passively observing the clinical workflow, the performance of routine and non-routine tasks, and the nature of communication between clinicians. Think-aloud tasks are used to capture an individual’s thoughts and reasoning processes during problem-solving and decision-making, as these processes unfold [95, 96]. The collected data also provide information about the style and content of verbal interactions among all members of the team, as well as individual reasoning processes. In addition, the dynamics of interaction during weekly meetings are recorded, with the aim of identifying the role of communication in decision-making. Semi-structured interviews with physicians, nurses and other clinical staff are also conducted to inquire into the nature of the interactions observed and to examine error-prone situations that may have occurred during each session.

Data are analyzed to represent the workflow of the critical care environment, an analysis that emphasizes the importance of representation in how information is encoded for making decisions. Observation and interview data are used to build individual pieces of the workflow, depending on the individual and the activity concerned. For example, Malhotra and colleagues [92] identified seven key generic activities in the patient care process (i.e., re-orientation and preliminary planning at the beginning of the workday, goal formulation, goal execution, transfers, admissions, re-assessment, and evening sign-out), called critical zones, for dividing up the workflow. Next, these individual pieces are integrated according to the critical zones to develop a generalizable cognitive model of the workflow (see Figure 3), which can be used to identify, characterize and predict medical errors in the ICU.

Figure 3
The cognitive workflow model for inpatient care (from Malhotra et al., 2007 [92]; Reprinted with permission from Copyright Elsevier Limited 2007). The workflow moves in a counter-clockwise fashion, with the sun on the lower left hand corner indicating ...

There are four levels of abstraction at which the model can be interpreted. The first and top level is the model as a whole. The second level comprises the three groupings of critical zones (CZs), shown with different background colors (yellow, blue, and grey). At this level, we can see indications of where and when medical errors are likely to occur if an individual is multitasking between different CZs and is cognitively overloaded. Within these groupings are the seven individual CZs (green boxes) and the adjacent blue boxes, the activities that co-occur in these CZs, which make up the third level. Here, the interactions between the different members of the healthcare team, as well as the flow of information, are evident. The fourth level is the individual level, although it is not shown in Figure 3. At this level, we can follow an individual team member and identify his or her dependencies, as well as the outcomes of activities of knowledge acquisition, information processing, task execution, and communication. Depending on the team member, we may predict which part of the workflow may break down because of a faltering dependency. This workflow model was created as a simplified template that can be modified for use in other critical care settings to identify weaknesses and the potential for errors in the clinical workflow. Such data can guide the development and implementation of information and communication technologies targeted to support each of these areas of weakness.
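The seven critical zones listed above can be encoded directly, together with a toy check for the overload condition the model highlights (a clinician multitasking across different CZs). The zone names come from the text; the flagging rule is our own illustrative simplification, not part of the published model:

```python
# The seven "critical zones" (CZs) of the ICU workflow model, with a toy
# flag for multitasking across zones. The risk rule is a hypothetical
# simplification for exposition.

CRITICAL_ZONES = [
    "re-orientation and preliminary planning",
    "goal formulation",
    "goal execution",
    "transfers",
    "admissions",
    "re-assessment",
    "evening sign-out",
]

def multitasking_risk(active_zones):
    """Flag cognitive-overload risk when concurrent work spans more than one CZ."""
    unknown = set(active_zones) - set(CRITICAL_ZONES)
    if unknown:
        raise ValueError(f"unknown zones: {unknown}")
    return len(set(active_zones)) > 1

print(multitasking_risk(["goal execution"]))                # False
print(multitasking_risk(["goal execution", "admissions"]))  # True
```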

3.5.2. User and Cognitive Task Analysis

Cognitive Task Analysis (CTA) is a core methodology used in cognitive science and engineering in both laboratory and real-world settings [97]. Using CTA, individuals’ performance can be studied by examining the quality and quantity of domain-specific knowledge required for a task, the information-processing demands of a task, the effectiveness of available technology-based support to perform the task, and decision points [98]. CTA can be used to characterize tasks that place cognitive demands on the individual (e.g., diagnostic reasoning) and require similar reasoning strategies because of a shared underlying structure, and this can be generalized to clinicians across clinical settings [5, 99].

There are different types of task analysis that can be used depending on what is being analyzed and the purpose of the analysis. Hierarchical task analysis is the basic analysis for any task, in which high level tasks are broken down into their constituent subtasks and operations. This process is useful for the understanding and design of user interfaces. Action cycle analysis is based on the 7-stage model of human action by Norman [82] (i.e., establishing the goal, forming intentions, specifying the action sequence, executing the actions, perceiving outcomes, interpreting outcomes, and evaluating the outcomes against the goal). This method is used to analyze key subtasks that are critical for the usability of any device. Important to this analysis is the identification of the points where the action cycle can break down, which are primarily at the interface of execution and evaluation of the task. The execution is influenced by the difference between the goals and intentions of the user and the actions enabled by the system. The evaluation is influenced by the degree to which the user can perceive and interpret the state of the system and determine how well the user’s expectations have been met (e.g., feedback). After identification, ways to improve the interface can be suggested through changes in system design and education of users.
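A hierarchical task analysis, as described above, can be represented as a simple tree whose leaves are the bottom-level operations. The sketch below uses hypothetical, triage-flavored task names; the tree structure, not the content, is the point:

```python
# Minimal hierarchical task analysis (HTA) sketch: a task decomposed into
# subtasks and operations. The task names are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list = field(default_factory=list)

    def leaf_operations(self):
        """Return the bottom-level operations, depth-first."""
        if not self.subtasks:
            return [self.name]
        ops = []
        for sub in self.subtasks:
            ops.extend(sub.leaf_operations())
        return ops

triage = Task("Triage patient", [
    Task("Assess complaint", [Task("Interview patient"), Task("Review chart")]),
    Task("Take vital signs"),
    Task("Assign treatment area", [Task("Check nurse workload"), Task("Record assignment")]),
])

print(triage.leaf_operations())
```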

GOMS analysis (Goals, Operators, Methods, and Selection Rules) is a keystroke-level computational model that attempts to predict performance times for error-free expert performance of tasks by summing up the time for key-stroking, pointing, homing, drawing, thinking, and waiting for the system to respond [100, 101]. It is useful for the analysis of tasks that have complex goal-subgoal structures. It is also useful for modeling task performance levels of alternative designs without actually implementing the designs. One end product of the task analysis is the identification of the ideal task structure for good performance, interactions among procedures, and the information flow of the task. Another end product, which is more important, is a taxonomy of tasks. For example, in a task taxonomy based on the types of information processing, there are information tasks for retrieval, encoding, transformation, calculation, and comparison, as well as other information tasks.
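The keystroke-level prediction described above is literally a sum of operator times. The sketch below uses the commonly cited operator estimates from the keystroke-level model literature; the order-entry keystroke sequence and the 0.5 s system response time are hypothetical:

```python
# Keystroke-level GOMS sketch: predict error-free expert task time by summing
# operator times. Operator values are commonly cited estimates (seconds);
# the task sequence below is a hypothetical example.

OPERATOR_TIME = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point with a pointing device
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation ("thinking")
}

def klm_time(sequence, system_response=0.0):
    """Sum operator times for a sequence like ['M', 'P', 'K', ...]."""
    return sum(OPERATOR_TIME[op] for op in sequence) + system_response

# Hypothetical fragment: think, point to a dose field, home to keyboard, type two digits.
task = ["M", "P", "H", "K", "K"]
print(round(klm_time(task, system_response=0.5), 2), "seconds predicted")
```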

3.5.3. Ontology Approach to Medical Errors

The traditional taxonomies of medical errors as isolated constructs do not show much utility in the understanding, explanation, management, and reduction of medical errors. The major weakness in the traditional taxonomies is that the concepts in one taxonomy are isolated, and not semantically linked and integrated with the concepts in other taxonomies.

One new approach to medical error is to use ontology engineering tools to develop a meta-taxonomy of medical errors that integrates taxonomies created for different purposes. As a whole, such a meta-taxonomy has much more utility for categorizing, explaining, and managing errors. Along this line of thinking, we have developed a comprehensive medical error ontology to serve as a standard representation for medical error concepts gleaned from various existing published taxonomies [102, 103]. Eight candidate taxonomies were selected from the published literature and merged to create a reference ontology consisting of 12 multi-dimensional axes that encompass the major aspects of a medical error event. A general ontology of medical errors is crucial for the following reasons: (1) to provide formal definitions and coverage of the entire range of concepts and relationships about medical errors; (2) to resolve present difficulties in pooling medical error information from varied data sources and classifications; (3) to enable analysis, interpretation, understanding and sharing of medical errors within a single, standard framework; (4) to enable identification of strategies for preventing medical errors; and (5) to provide systematic, principled methods for the design of improved medical error reporting systems.
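The core operation behind such a merge, taking the union of value sets axis by axis across source taxonomies, can be sketched in a few lines. The paper merges eight taxonomies into 12 axes; the axis and value names below are hypothetical stand-ins, not the published ontology:

```python
# Sketch of merging error taxonomies into a multi-axial reference ontology.
# Axis/value names are hypothetical; only the merge operation is the point.

def merge_taxonomies(taxonomies):
    """Union value sets axis-by-axis across the source taxonomies."""
    merged = {}
    for tax in taxonomies:
        for axis, values in tax.items():
            merged.setdefault(axis, set()).update(values)
    return merged

tax_a = {"error_type": {"slip", "mistake"}, "setting": {"ICU"}}
tax_b = {"error_type": {"near miss"}, "contributing_factor": {"interruption"}}

ontology = merge_taxonomies([tax_a, tax_b])
print(sorted(ontology["error_type"]))  # ['mistake', 'near miss', 'slip']
```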


The theoretical and methodological frameworks of naturalistic decision-making and distributed cognition described earlier provide a foundation for research in critical care. In this section, we describe how research studies in critical care support the value of using cognitive theory and methods for understanding decision-making and errors in critical care environments, and how they carry implications for the design and implementation of decision support at the time and place it is needed. The following three themes are elaborated with specific examples from our studies: (1) the clinical workflow in the ED and ICU, with an emphasis on cognitive overload and team decision-making; (2) expert-novice differences in the comprehension of psychiatric ED medical records; and (3) a cognitive analysis of a provider order entry interface and medication support in the ICU.

4.1. The Clinical Workflow

There are several factors that contribute to inefficiency and complexity in the clinical workflow, namely, multitasking, shift changes and handoffs, and interruptions. Such factors provide more opportunities for error, and place a higher cognitive load on each individual clinician. There are several steps in the patient care process through critical care, from triage to registration to the main emergency department to an inpatient intensive care unit. In this section, we use examples from our studies of different areas in the ED to describe the clinical workflow factors that contribute to cognitive overload of clinicians, which result in inefficiencies and delays in the clinical workflow and in patient care. Next, we describe how team interactions affect decision-making and error with examples from our studies of the workflow of the ICU.

4.1.1. Cognitive Overload in the Emergency Department

Using the cognitive methods of data collection and analysis described earlier (Section 3.5), several studies were conducted in various areas of the Emergency Department (ED) at a major medical center in New York City and at another major hospital on the gulf coast of the United States, with the aim of identifying problem areas and developing technological interventions.

The triage process

Studies of the workflow in ED Triage were aimed at identifying the task and information flow and the use of information resources throughout the patient care process. Figure 4 shows the model that was constructed based on a cognitive task analysis (see Section 3.5.2) of triage workflow observations, as well as information from questionnaires and interviews with key clinical personnel [104]. This model shows the actions, agents (clinical team members), information resources available and their interactions during a typical patient encounter in triage. The description of agent abbreviations and associated tasks as used in the figure and text is in Table 1. The series of tasks (boxes in Figure 4) follow a chronological order from a patient’s entrance into the ED to either discharge or admission to the hospital.

Figure 4
Task flow and information resource sharing model of patient care in the ED (based on Horsky et al., 2006 [104], a presentation given at the American Medical Informatics Association Annual Conference). The task flow begins at the top of the figure and ...
Table 1
Agent and task description for ED Triage [104]

Upon entering the ED, patients proceed through a pre-triage and triage process. Pre-triage (tasks 1–3) occurs when the patient enters the ED and gives the pre-triage nurse (PTN) his or her chief complaint. Here, the PTN begins a paper chart (task 1) for the patient, as there is no access to the electronic medical record system at this point. Depending on the severity of the case (task 2), the PTN may have a triage nurse (TN) evaluate the patient immediately or may initiate a chart and place it in a queue for the TNs to process in time order (task 3). The next step is the triage of the patient (tasks 4–7), where minimal electronic decision support is utilized. The TN assesses the patient’s complaint (task 4), examines the patient’s vital signs (task 5) and conducts point-of-care tests such as peak expiratory flow volume, finger-stick blood sugar or hemoglobin assessment, or an EKG, depending on the patient’s complaint and medical history (task 6). Then, the TN uses the electronic tracking system, “e-track”, to assign the patient to a treatment area of the ED and to a district nurse (DN), depending on the current workload of each nurse (task 7).

After the TN completes the triage process and records the patient’s assignment on the chart, the patient is taken to the assigned area in the ED or told to wait in the waiting room, ideally by the Emergency Room Technician (ERT) (task 8). The ERT, or TN, locates the DN (task 9) and gives the DN a brief verbal report about the patient (task 10). The TN then gives the patient’s chart to the registration department (task 11), which is the last step in the triage process. Finally, the TN returns to the triage area and repeats this process with the next patient. The overall process requires the TN to physically move to various areas in the ED. The workflow continues into the Registration area (task 12) and then into the main ED, where patients are assessed by the DN and a physician, diagnosed, and treatment is initiated resulting in either discharge or admission to an inpatient unit (tasks 13–16).

Results of the analysis of these data by Horsky and colleagues [104] show significant delays in the triage process that may carry over into the main ED, increasing inefficiency and opportunities for error in patient care decisions. One main reason for delay and inefficient workflow is the coexistence of three different electronic information systems and one circulating paper-based chart, which requires clinicians to access, aggregate and cross-match patient information across the systems.
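One way to gauge the magnitude of such delays is to compare a category's mean encounter time against the reference (typical) encounter time. Using the figures from this study (8 min 38 s for typical encounters, 17 min 47 s for the longest delay category), the excess per encounter works out as follows:

```python
# Back-of-envelope estimate of per-encounter triage delay: excess of a
# category mean over the reference (typical) encounter time.

def to_seconds(minutes, seconds):
    return minutes * 60 + seconds

reference = to_seconds(8, 38)         # mean time, typical "reference" encounters
patient_related = to_seconds(17, 47)  # mean time, longest delay category

excess = patient_related - reference
print(f"estimated delay: {excess // 60} min {excess % 60} s per encounter")
```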

All patient encounters were categorized according to the main reason for delay in the triage encounter. Twenty percent of encounters were classified as typical, as they did not include any events uncharacteristic of the triage task, and were therefore used as a reference for estimating triage delay. The mean time of triage encounters in this Reference category was 8 minutes and 38 seconds. Five types of events were found to prolong triage (Interpreting, Workflow, Locating, Extra Tasks and Patient-related) and one type to shorten it (Fewer Tasks). For example, 18% of encounters included delays associated with obtaining an interpreter, such as repeated paging and long waits. In addition, 14% of encounters included delays due to difficulty tracking the clinical personnel and equipment needed to assess the patient. The longest average encounter time (17 minutes, 47 seconds) occurred in the 9% of triage cases where the delay was due to patient-related medical reasoning and consultations about acuity level, or to determining the institutional policy for treating patients who were intoxicated or presented with psychiatric symptoms. When the categories were aggregated into Reference, Workflow, and Patient-related groups, an estimated 23% of nurse contact time with patients was delay, and most of that delay (79%) was workflow-related.

The registration process

Due to the time-pressured and urgent nature of patient care in the ED, clinical personnel tend to use shortcuts where possible in order to decrease the time to patient care by the physician. Hakimzada et al. [105] traced four cases of patient-misidentification errors back to ED Registration; these were ultimately due to the tendency of registration staff to use workarounds and shortcuts during times of high patient volume.

Workflow in the main Emergency Department

The process of patient care continues into the main ED. An ethnographic study of the clinical workflow in an Adult ED [106] identified several key tasks in the workflow that contribute to cognitive overload on the clinicians. These tasks include shift changes and handoffs, multitasking, interruptions, and documentation. For example, there was an interruption every 9 minutes on average for the attending physicians and every 14 minutes for the residents, making the communication process more difficult and cognitively taxing for the clinicians. Observed sources of interruption in the ED included other patients, other staff, telephones and pagers. The data indicate a higher frequency of interruptions that are resolved quickly by the attending physician, whereas interruptions caused by the ED residents, while less frequent, have a much longer duration. One additional observation was the consistent need expressed by clinicians for a computer-based tracking system to help them monitor and locate patients throughout the ED. During the period of observation, a tracking system was implemented in the ED; although it resolved some of the problems, it fell short of expectations due to its inability to communicate with other information systems within the ED.

A high prevalence of interruptions was also documented in another study, focusing on ED nurses working in a Level One Trauma Center at a different location [107, 108]. According to Brixey and colleagues, an interruption is defined as “a break in the performance of a human activity initiated by a source internal or external to the recipient with occurrence situated within the context of a setting or location. This break results in the suspension of an initial task to perform an unplanned task with the assumption that the initial task will be resumed” (p. E38) [109]. Brixey et al. [107] categorized ED interruptions and activities using the HyMCIA (Hybrid Method to Categorize Interruptions and Activities) method through the collection of ethnographic data. Analysis of the observations resulted in a taxonomy of interruptions, a static representation of the phenomenon. Based on this taxonomy, a timeline of activities and interruptions was constructed, which served to place the discontinuities in workflow caused by interruptions into context.

In the Brixey et al. study, nurses were observed to receive slightly more interruptions per hour than physicians (an average of 12 vs. 10). In addition, physicians were most frequently the initiators of interruptions (63% of the time). Interruptions in the workflow were initiated by people, pagers, and telephones, as well as by the physical environment when supplies were not available. After an interruption, physicians and nurses usually returned to the original, interrupted activity rather than leaving it unfinished. The efficient return to interrupted activities can be supported by information technologies such as memory aids, which would decrease the cognitive burden on clinicians and facilitate patient care decisions that are delayed by interruptions.

The development of a graphical representation of the clinical workflow in the ED (see Figure 4) helped in identifying the problems in communication flow, bottlenecks and repetitive tasks in the ED process of care [104]. Figure 5 is a graphical representation of the observed task flow, communication patterns and patient tracking (top portion of figure) and the proposed technological changes (lower portion of figure), which are aimed at improving the clinical workflow.

Figure 5
Observed system configuration with current information technology which is both paper- and electronic- based (depicted in upper portion of the figure) and suggested system configuration with the introduction of new information technology where the system ...

The data suggest that the use of a solely electronic medical record system, including electronic patient tracking, can facilitate the management of patient information and patient care decisions within the limited time frame. An integrated, connected support system would reduce the repeated manual copying of information at various stages of the care process, and eliminate the need to physically locate nurses, interpreters, and patients in the busy ED, which only increases delays in care. As Horsky and colleagues [104] suggest, in order to ensure interoperability, specific tasks, information sharing and decision support may require different modalities of communication delivered by different technologies. Implementing an integrated system with real-time updates of patient information would make the registration desk redundant. For example, during pre-triage, patients are asked to present identification from which personal data are hand-copied onto a paper form. The PTN could instead have a workstation networked to the hospital EMR and initiate integrated paperless charting and tracking by searching for an existing record, so that returning patients would have their history, allergies and other pertinent data ready for the triage nurse. Patients could be issued a bracelet with their name and encoded basic data (e.g., an RFID [radio-frequency identification] tag) for quick identification later in the process. Asynchronous, less interruptive means of communication, differentiated by urgency and priority, could replace the current pattern, which relies mostly on verbal or personal contact. These support measures would increase efficiency, create a better division of tasks between the nurses, and decrease the load on the triage nurses’ memory during patient assignment.

In summary, these recommendations allow patient information to be managed with less time spent on clinical tasks, increasing the efficiency of the ED process from triage to admission or discharge. It should be noted that when implementing new technology, there needs to be close, careful and ongoing monitoring of the process, as new challenges and problems may be introduced.

4.1.2. Distributed Cognitive Workflow of the Psychiatric Emergency Department

The psychiatric emergency department (Psych ED) functions similarly to the general ED; however, there are several characteristics unique to this environment. Cohen and colleagues [110] used the framework of distributed cognition to develop a model of the clinical workflow in the Psych ED.

Findings from this study show that cognitive work is distributed across both agents (individual clinicians) and artifacts. Figure 6 gives a graphical representation of this distribution, showing the various members of the multidisciplinary clinical team, their tasks, including information-gathering tasks (e.g., taking a patient history) and action-execution tasks (e.g., administering a medication), and the artifacts used for recording patient information (e.g., whiteboard, clinical notes) [110]. Although this distribution is functional, analysis has revealed several latent flaws in the system related to the underlying distribution of cognition across teams, time, space and artifacts. Errors and near misses derived from the observation and interview data were interpreted in relation to the level of the distribution at which they occurred (see Table 2 for examples of latent flaws identified in this study) [110].

Figure 6
Representation of the distribution of cognitive work in the Psychiatric ED across clinical team members and the resulting artifacts (from Cohen et al, 2006 [110]; Reprinted with permission from Copyright Elsevier Limited 2006). Multiple markers next to ...
Table 2
Examples of latent flaws identified in the Psychiatric ED (from observation, interview and shadowing data) [110]

The analysis of verbal protocols was used to characterize decisions made in the Psych ED and to determine what information content was used to support them. The data suggest that systems designed to support decision-making during these crucial periods of “near misses” must consider the factors that push decision-makers toward error boundaries, and how those factors can be monitored.

4.1.3. Decision-making and Team Interactions in the Intensive Care Unit

The intensive care unit (ICU) is another dynamic and complex environment, with high stakes for patient safety and minimal room for error. Patient rounds are one of the most important activities that occur in the ICU, as individual patients are visited and evaluated by the team of clinicians. Patel, Kaufman and Magder [90] investigated the collaborative decision-making and team interactions in a medical ICU, focusing on observations and recordings of morning patient rounds and related information from one patient’s charts. During morning rounds, team members give patient reports that are then discussed as a group for the evaluation of patients’ status, previous decisions and actions made and for the planning of next actions to take.

Analysis showed that the rounds are characterized by three phases [90]. The first phase involves a report from the overnight resident. The report is used to describe to the team the patient’s condition during the previous 24 hours, including critical decisions that were made and actions that were taken. The team then critiques and evaluates these decisions as to their efficacy and appropriateness for stabilizing the patient. The second phase involves a report from the overnight nurse. This report includes an assessment of the patient’s situation specifically regarding the vital signs and symptoms related to fluid balance and food intake, as well as the psychological status of the patient. Then, the resident makes suggestions as to the collection of more information in order to decide on the next course of action. At this point, there is a shift change, with another resident replacing the overnight resident. This phase ends with the expert’s (attending physician) evaluation of the patient’s status to make the next round of decisions for the ICU. The third phase involves a dialogue between all team members, where gaps in information are filled, further information is requested, and sensitivities regarding specific issues are discussed and resolved. In this phase, the pharmacist and nutritionist evaluate the patient’s medication and dietary requirements, respectively. The expert physician concludes with a summary of the actions to be taken during the subsequent 24 hour period, and all team members are updated on the patient’s status and are made aware of their individual responsibilities.

Throughout this process, the expert physician manages the flow of information so that there is a reduction in the cognitive complexity and effort for the team, with an increase in cognitive complexity for himself, as he integrates the multiple pieces of information about the patient, using basic science concepts as needed. The expert relies on the team for maintaining the shared knowledge in a distributed working memory, and for analysis of patient data, which is primarily done by the resident. This balanced process of team-individual decision-making and data synthesis works to make the patient care process efficient.

Patel, Kaufman and Magder [90] also conducted a dialogue analysis of the morning rounds over three days, in terms of episodes corresponding to topics of discussion. The analysis included identifying the number of propositions (concepts) and clinical findings (concepts useful or relevant for making decisions) for the three morning rounds. The results show several patterns of interaction that change over time, shifting from a focus on the patient’s condition on the first day in the ICU (generating 60 findings), to the effects of medication and adjustment decisions on the second day, and to longer-term therapeutic management issues, resulting in patient discharge on the third day. On the first day, 78% of the 185 concepts used in the discussion reflected distinct or new information. The concepts raised by the resident became the basis of most of the subsequent discussion, and were repeatedly reviewed and evaluated. On the second day, 65% of the 145 concepts used in the dialogue reflected new information not previously discussed. As the patient’s condition was quickly changing, there were more focused communication exchanges, suggestions and decisions made during discussions. By the end of the second day, the team had acquired substantial shared knowledge about the patient, which was reflected in the deeper level of analysis and synthesis of information. On the third day, the attending physician (expert) planned to discharge the patient. Accordingly, there were far fewer exchanges between team members, limited mainly to summarizing the information from the patient record and the expert’s advice on future management of the patient’s condition. Likewise, only 76 concepts in total were used in the discussion, of which 61% were new information.
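The day-by-day figures above imply roughly the following counts of new concepts per round (the published percentages are rounded, so the derived counts are estimates):

```python
# Approximate counts of new concepts per day, derived from the reported
# totals and new-information percentages. Derived counts are estimates,
# since the published percentages are rounded.

rounds = [("day 1", 185, 0.78), ("day 2", 145, 0.65), ("day 3", 76, 0.61)]

new_concepts = {day: round(total * frac_new) for day, total, frac_new in rounds}
print(new_concepts)
```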

Information used during rounds spanned various levels of granularity, from basic medical science to pathophysiology to clinical information. In addition, nurses, residents and attending clinicians processed information at different levels in providing patient care. Knowledge-based decision support during rounds will need to take these differences into account and be able to deliver information “just in time” during practice.
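One way to read this design implication is as role-sensitive filtering of information granularity. The sketch below is only a schematic under invented assumptions: the role-to-granularity mapping and the item format are hypothetical, not derived from the study.

```python
# HYPOTHETICAL mapping from clinical role to the granularity levels
# of information that "just in time" support might surface
ROLE_GRANULARITY = {
    "nurse": {"clinical"},
    "resident": {"clinical", "pathophysiology"},
    "attending": {"clinical", "pathophysiology", "basic_science"},
}

def filter_for_role(items, role):
    """Keep only the items whose granularity level is assumed
    relevant to the given role."""
    levels = ROLE_GRANULARITY[role]
    return [item for item in items if item["level"] in levels]

# invented example items at different levels of granularity
items = [
    {"level": "clinical", "text": "BP trending down over 2 h"},
    {"level": "pathophysiology", "text": "suspected distributive shock"},
    {"level": "basic_science", "text": "baroreceptor reflex physiology"},
]
```

A real system would of course need empirically validated mappings rather than this fixed table.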

4.2. Understanding Medical Records for Patient Care Decisions: Expert-Novice Differences

There is evidence that poorly designed electronic medical records (EMRs) may decrease productivity and increase errors in clinical practice [111, 112]. Sharda and colleagues [113] investigated the effects of expertise on clinicians’ comprehension of psychiatric narratives, using cognitive methods to determine design criteria for EMRs. Data were collected using think-aloud protocols (see Section 3.5.1) from expert and novice psychiatrists as they read clinical narratives based on real discharge summaries. The transcribed protocols were then analyzed using propositional analysis, a natural language representational method [114–116], together with semantic analysis (see [116] for an extensive review of this methodology). Results showed that novices (2nd-year psychiatry residents) (1) were less able to distinguish relevant from irrelevant information in the EMR despite recalling similar quantities of information, and (2) made less accurate inferences than the expert psychiatrists [113]. In addition, experts were more precise than non-experts both in their use of language and in the accuracy of the inferences drawn. On occasion, non-expert subjects reached correct conclusions, but for the wrong reasons [113].

However, when the discharge cases were restructured, the novice subjects were able to draw more inferences from relevant material. The authors note that this has implications for the design of EMR interfaces. Such interfaces have been shown to affect knowledge organization and reasoning [57], and as such can be considered cognitive artifacts [27]. Paper records can also be considered cognitive artifacts; however, because of their dynamic nature, electronic medical records have the potential to present information in a manner that shapes human cognitive performance. The results show that organizing electronic data as structured text helps novices reduce the cognitive load of sifting through massive narrative data and guides them toward the relevant data. This can be done relatively efficiently for purposes of screening and managing patients in emergency care.

4.3. Cognitive Analysis of Provider Order Entry Interfaces for Medication Support

Computer-assisted provider order entry (CPOE) is a support tool designed to expedite medication ordering. The structure of an order entry system needs to account for physicians’ interactions with the system, with the aim of reducing the cognitive demands on the individual and thereby facilitating decision-making. Horsky and colleagues [70] developed a methodology for characterizing the cognitive demands of a medical information system, based on the distributed resources model, an approach that describes the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction (see Section 2.2 and [16–23]). An expert “walkthrough” evaluation of a commercial order entry system was conducted, followed by a simulated clinical ordering task performed by seven clinicians. This type of analysis involves usability inspection of data collected from tasks performed by the experts, including recordings of the visual contents of the computer monitors along with the experts’ verbalizations [117]. Using both verbal and visual data facilitates the identification and characterization of users’ interaction strategies with the order entry system. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The distributed resources model also provided insight into clinicians’ interactive strategies and patterns of associated errors.
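The resource-distribution idea can be illustrated with a small sketch. Everything here is a hypothetical simplification, not the published method: the step names are invented, and a binary internal/external coding stands in for the richer dimensions of the distributed resources model.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    resource: str  # "external": information visible in the interface;
                   # "internal": information the user must recall

def cognitive_load_profile(steps):
    """Summarize how the knowledge needed for a task is distributed
    between the interface (external) and the user's memory (internal).
    A higher internal share suggests heavier cognitive demands."""
    counts = {"external": 0, "internal": 0}
    for step in steps:
        counts[step.resource] += 1
    return {kind: n / len(steps) for kind, n in counts.items()}

# hypothetical walkthrough of a simplified ordering task
walkthrough = [
    Step("select patient from census list", "external"),
    Step("recall the target drug's order name", "internal"),
    Step("pick dose from the displayed menu", "external"),
    Step("remember which screen holds infusion rates", "internal"),
]
profile = cognitive_load_profile(walkthrough)
```

Comparing such profiles across interface designs is one way to make "unnecessarily heavy cognitive demands" concrete and comparable.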

A further series of studies explored the relationship between computerized provider order entry (CPOE) systems and medical error [70, 111]. A novel approach to error analysis was used to interpret a dosing error related to computer-based ordering of potassium chloride (KCl) [111]. The sequence of events leading to this error was reconstructed chronologically from disparate sources, including usage logs, interviews and usability inspection. Errors were identified in several aspects of the drug ordering process, including system usability difficulties, user training problems and suboptimal clinical system safeguards. Results of the analysis were used to formulate specific recommendations for interface layout and functionality, suggesting new user alerts, changes to user training, and attention to the error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.
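The chronological-reconstruction step can be sketched as merging timestamped event fragments from different sources into a single ordered record. The source names and events below are invented for illustration; they are not from the published case analysis.

```python
from datetime import datetime

def merge_timeline(*sources):
    """Merge timestamped events from disparate sources into one
    chronological record. Each source is a list of
    (timestamp, source_name, description) tuples."""
    events = [event for source in sources for event in source]
    return sorted(events, key=lambda event: event[0])

# hypothetical fragments from two of the kinds of sources described above
usage_log = [(datetime(2005, 1, 3, 9, 15), "usage log",
              "order entry screen opened")]
interview = [(datetime(2005, 1, 3, 9, 5), "interview",
              "verbal order reported by nurse")]
timeline = merge_timeline(usage_log, interview)
```

In practice the hard problems are clock alignment across systems and the imprecision of recalled times from interviews, which a plain sort does not address.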


The characterization of work and information flow in each clinical context provides a foundation for developing and implementing decision support in critical care aimed at improving the efficiency of the clinical workflow through a redistribution of cognitive tasks and better communication and collaboration between clinicians. Improved workflow and team interaction will serve to reduce and prevent errors and thus increase patient safety in a complex, dynamic and time-pressured environment. For example, the generalizable cognitive model of the clinical workflow developed by Malhotra and colleagues [92] can inform the development and implementation of decision support tools, such as cognitive aids and other technological information systems, that are responsive to the nature of the clinical workflow. However, in contrast to the popular goal of achieving flawless performance through error-free systems, the results of these studies have implications for developing adaptive systems that anticipate errors, respond to them, or substitute less serious errors, allowing intervention before an adverse event occurs.

In each of the critical care environments (ICU, ED and Psych ED), we have identified bottlenecks in the workflow and systemic flaws that leave the system vulnerable to error. These include the loss of information at shift-change, inefficient patient tracking and cognitive overload as a consequence of multi-tasking and frequent interruptions. Furthermore, as part of our distributed cognitive analysis of each environment, we have characterized interactions between human and technological agents (or the lack thereof) that underlie the process of patient care. The errors and problems we identified in these critical care settings are likely to occur within similar systems at other hospitals. Increasingly complex systems of care delivery require comprehensive analyses of human actions and errors to inform design changes that emphasize clarity of communication and the implementation of technology that supports specific user tasks.

In this methodological review paper, we have covered cognitive methodologies and their applications in translating the findings of cognitive research into implications for efficient, effective and safe decision support in critical care settings. We illustrated these methodologies with examples drawn mostly from our own research. The focus was on translational cognition: moving from general cognitive principles and methods to their applications in healthcare domains, including the nature of problem-solving and decision-making, and distributed team cognition. These cognitive studies address where and when limitations of human memory, problem-solving and decision-making strategies, as well as communication failures, arise, both individually and in teamwork, and how they could be circumvented using specific information and communication decision support tools.


Support from the US National Library of Medicine grant (R01 LM07894) to Vimla Patel is gratefully acknowledged.


1Reprinted from Journal of Biomedical Informatics, Vol 37, Zhang J, Patel VL, Johnson TR, Shortliffe EH, A cognitive taxonomy of medical errors, pp. 193–204, Copyright Elsevier Limited 2004.

2Reprinted from Journal of Biomedical Informatics, Vol 40, Malhotra S, Jordan D, Shortliffe E, Patel VL. Workflow modeling in critical care: Piecing together your own puzzle, pp. 81–92, Copyright Elsevier Limited 2007.

3Reprinted from Artificial Intelligence in Medicine, Vol 37, Cohen T, Blatter B, Almeida C, Shortliffe E, Patel V, Distributed cognition in the Psychiatric Emergency Department: A cognitive blueprint of a collaboration in context, pp. 73–83, Copyright Elsevier Limited 2006.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


1. Plsek PE, Greenhalgh T. Complexity science: The challenge of complexity in health care. BMJ. 2001;323:625–8.
2. Plsek PE, Wilson T. Complexity, leadership, and management in healthcare organisations. BMJ. 2001;323:746–9.
3. Wilson T, Holt T, Greenhalgh T. Complexity science: complexity and clinical care. BMJ. 2001;323:685–8.
4. The emergency department as a complex system. Available at
5. Patel VL, Kaufman DR, Arocha JF. Emerging paradigms of cognition in medical decision-making. J Biomed Inform. 2002;35:52–75.
6. Carlson JM, Doyle J. Complexity and robustness. Proc Natl Acad Sci USA. 2002;99:2538–45.
7. Rasmussen J. Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland; 1986.
8. Rasmussen J. Risk management in a dynamic society: a modelling problem. Safety Science. 1997;27:183–213.
9. Cook RI, Rasmussen J. “Going solid”: a model of system dynamics and consequences for patient safety. Qual Saf Health Care. 2005;14:130–134.
10. Holland J. Hidden order: How adaptation builds complexity. Reading, MA: Addison-Wesley; 1995.
11. Levin S. Ecosystems and the biosphere as complex adaptive systems. Ecosystems. 1998;1:431–436.
12. Greenes RA, editor. Clinical Decision Support: The Road Ahead. Elsevier- Academic Press; 2007.
13. Salas E, Klein GA. Linking expertise and naturalistic decision making. Mahwah, NJ: Lawrence Erlbaum Associates; 2001.
14. Klein GA, Orasanu J, Calderwood R. Decision making in action: Models and methods. Westport, US: Ablex Publishing; 1993.
15. Klein GA, Calderwood R. Decision models: Some lessons from the field. IEEE Trans Syst Man Cybern B Cybern. 1991;21:1018–1026.
16. Hutchins E. Cognition in the Wild. Cambridge, MA: MIT Press; 1995.
17. Flor NV, Hutchins EL. Analyzing distributed cognition in software teams: A case study of team programming during perfective software maintenance. In: Joenemann-Belliveau J, Moher TG, Robertson SP, editors. Empirical Studies of programmers. Ablex Publishing; 1992.
18. Hutchins E. How a cockpit remembers its speed. Cognitive Science. 1995;19:265–288.
19. Hutchins E, Norman DA. Distributed cognition in aviation: a concept paper for NASA (Contract No. NCC 2–591) Department of Cognitive Science, University of California; San Diego: 1988.
20. Norman DA. Cognition in the head and in the world: An introduction to the special issue on situated action. Cognitive Science. 1993;17:1–6.
21. Norman DA. Things that make us smart. Reading, MA: Addision-Wesley; 1993.
22. Zhang J. A distributed representation approach to group problem solving. Journal of American Society of Information Science. 1998;49:801–809.
23. Zhang J, Norman DA. Representations in distributed cognitive tasks. Cognitive Science. 1994;18:87–122.
24. Salomon G. No distribution without individuals’ cognition: a dynamic interactional view. In: Salomon G, editor. Distributed cognition: Psychological and educational considerations. Cambridge, MA: Cambridge University Press; 1997. pp. 111–138.
25. Zhang J. The nature of external representations in problem solving. Cognitive Science. 1997;21:179–217.
26. Zhang J. Distributed representation as a principle for the analysis of cockpit information displays. Int J Aviat Psychol. 1997;7:105–121.
27. Norman DA. Cognitive artifacts. In: Carroll JM, editor. Designing interaction: Psychology at the human-computer interface. New York: Cambridge University Press; 1991.
28. Patel VL. Individual to collaborative cognition: a paradigm shift? Artif Intell Med. 1998;12:93–6.
29. Patel VL, Arocha JF. The nature of constraints on collaborative decision-making in health care settings. In: Salas E, Klein G, editors. Linking expertise and naturalistic decision-making. Mahwah, NJ: Lawrence Erlbaum Associates; 2000. pp. 78–91.
30. Wright PC, Fields RE, Harrison MD. Analyzing human–computer interaction as distributed cognition: the resources model. Hum Comput Interact. 2000;15:1–41.
31. Suchman LA. Plans and situated action: The problem of human-machine communication. Cambridge: Cambridge University Press; 1985.
32. Clancey WJ. Situated cognition: On human knowledge and computer representations. Hillsdale, NJ: Erlbaum; 1997.
33. George JM. Personality, affect, and behavior in groups. J Appl Psychol. 1990;75:107–116.
34. Bandura A. Social foundations of thoughts and actions: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall; 1986.
35. Wegner DM. Transactive memory: A contemporary analysis of the group mind. In: Mullen B, Goethals GR, editors. Theories of group behavior. New York: Springer-Verlag; 1987.
36. Woods DD, Hollnagel E. Joint Cognitive Systems: Patterns in Cognitive Systems Engineering. Boca Raton, FL: CRC Press/Taylor & Francis Group; 2006.
37. Hollnagel E, Woods DD. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Boca Raton, FL: CRC Press/Taylor & Francis Group; 2006.
38. Albolino S, Cook RI, O’Conor M. Sensemaking, safety, and cooperative work in the intensive care unit. Cognition, Technology & Work. 2007;9:131–137.
39. Albolino S, Cook R. Making sense of risks: a field study in an intensive care unit. In: Tartaglia R, Bagnara S, Bellandi T, Albolino S, editors. Healthcare Systems Ergonomics and Patient Safety. London: Taylor & Francis; 2005. pp. 208–214.
40. Alvarez G, Coiera E. Interruptive communication patterns in the intensive care unit ward round. Int J Med Inform. 2005;74:791–796.
41. Chisholm CD, Collison EK, Nelson DR, Cordell WH. Emergency department workplace interruptions: are emergency physicians “interrupt-driven” and “multitasking”? Acad Emerg Med. 2000;7:1239–1243.
42. Randell R. Medicine and aviation: a review of the comparison. Methods Inf Med. 2003;42:433–436.
43. France DJ, Levin S, Hemphill R, Chen K, Rickard D, Makowski R, Jones I, Aronsky D. Emergency physicians’ behaviors and workload in the presence of an electronic whiteboard. Int J Med Inform. 2005;74:827–837.
44. Coiera E. Clinical Communication - A New Informatics Paradigm. Proc AMIA Annu Fall Symp. 1996:17–21.
45. Coiera E, Jayasuria RA, Hardy J, Bannan A, Thorpe M. Communication loads on clinical staff in the emergency department. MJA. 2002;176:415–418.
46. Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;7132:673–676.
47. Spencer R, Logan P. Proceedings HIC. Melbourne: 2002. Role-based communication patterns within an emergency department setting.
48. Harvey R, Jarrett P, Peltekian KM. Patterns of paging medical interns during night calls at two teaching hospitals. Can Med Assoc J. 1994;151:307–11.
49. Blum NJ, Lieu TA. Interrupted care. The effects of paging on pediatric resident activities. Am J Dis Child. 1992;146:806–808.
50. Xiao Y, Lasome C, Moss J, Mackenzie CF, Farsi S. Cognitive properties of a whiteboard. In: Prinz W, Jarke M, Rogers Y, Schmidt K, Wulf V, editors. Proceedings of the Seventh European Conference on Computer-Supported Cooperative Work. Bonn: Kluwer Academic Publishers; 2001. pp. 16–20.
51. Xiao Y, Schenkel S, Faraj S, Mackenzie CF, Moss J. What whiteboards in a trauma center operating suite can teach us about emergency department communication. Ann Emerg Med. 2007;50:387–95.
52. Wears R, Perry S, Wilson S. Emergency department status boards: User-evolved artefacts for inter- and intra-group coordination. Cognition, Technology and Work. 2007;9:163–170.
53. Coiera E. When conversation is better than computation. J Am Med Inform Assoc. 2000;7:277–286.
54. Parker J, Coiera E. Improving clinical communication: a view from psychology. J Am Med Inform Assoc. 2000;7:453–461.
55. Toussaint PJ, Coiera E. Supporting communication in health care. Int J Med Inf. 2005;74:779–781.
56. Patel VL, Arocha JF, Zhang J. Thinking and reasoning in medicine. In: Holyoak K, editor. Thinking and reasoning. MA: Cambridge University Press; 2005. pp. 727–750.
57. Patel VL, Kushniruk AW, Yang S, Yale JF. Impact of a computerized patient record system on medical data collection, organization and reasoning. J Am Med Inform Assoc. 2000;7:569–585.
58. Gawande AA, Bates DW. The use of information technology in improving medical performance. Part III. Patient-support tools. Med Gen Med. 2000;2:E12.
59. Vicente KJ. Ecological interface design: Progress and challenges. Hum Factors. 2002;44:62–78.
60. Vicente KJ, Moray N, Lee JD, Rasmussen J. Evaluation of a Rankine cycle display for nuclear power plant monitoring and diagnosis. Hum Factors. 1996;38:506–521.
61. Wilson JR, Corlett EN. Evaluation of human work: a practical ergonomics methodology. Bristol, PA: Taylor & Francis; 1995.
62. Bates DW, Cohen M, Leape LL, Overhage JM, Shabot MM, Sheridan T. Reducing the frequency of errors in medicine using information technology. J Am Med Inform Assoc. 2001;8:299–308.
63. Cook RI, Woods DD. Operating at the ‘sharp end’: The complexity of human error. In: Bogner S, editor. Human Error in Medicine. Mahwah, NJ: Erlbaum; 1994. pp. 255–310.
64. Kuperman G, Bobb A, Payne T, Avery AJ, Gandhi TK, Burns G, Classen DC, Bates DW. Medication-related clinical decision support in computerized provider order entry systems: A review. J Am Med Inform Assoc. 2007;14:29–40.
65. Weir CR, Hicken B, Nebeker J, Campo R, Drews F, LeBar B. Cognitive task analysis of information management strategies in a computerized provider order entry environment. J Am Med Inform Assoc. 2007;14:65–75.
66. Cook RI, Potter SS, Woods DD, McDonald JS. Evaluating the human engineering of microprocessor-controlled operating room devices. J Clin Monit. 1991;7:217–226.
67. Gray WD. The nature and processing of errors in interactive behavior. Cognitive Science. 2000;24:205–248.
68. Adams MJ, Tenney YJ, Pew RW. Situation awareness and the cognitive management of complex systems. Hum Factors. 1995;37:85–104.
69. Wachter SB, Agutter J, Syroid N, Drews F, Weinger MB, Westenskow D. The employment of an iterative design process to develop a pulmonary graphical display. J Am Med Inform Assoc. 2003;10:363–372.
70. Horsky J, Kaufman DR, Oppenheim MI, Patel VL. A framework for analyzing the cognitive complexity of computer-assisted clinical ordering. J Biomed Inform. 2003;36:4–22.
71. MEDWATCH. The FDA Safety Information and Adverse Event Reporting Program. Available at
72. Dovey SM, Meyers DS, Phillips RL, Jr, Green LA, Fryer GE, Galliher JM, Kappus J, Grob P. A preliminary taxonomy of medical errors in family practice. Qual Saf Health Care. 2002;11:233–8.
73. Battles JB, Shea CE. A system of analyzing medical errors to improve GME curricula and programs. Acad Med. 2001;76:125–33.
74. Runciman WB, Helps SC, Sexton EJ, Malpass A. A classification for incidents and accidents in the health-care system. J Qual Clin Pract. 1998;18:199–211.
75. Benner P, Sheets V, Uris P, Malloch K, Schwed K, Jamison D. Individual, practice, and system causes of errors in nursing: a taxonomy. J Nurs Adm. 2002;32:509–23.
76. Chang A, Schyve PM, Croteau RJ, O’Leary DS, Loeb JM. The JCAHO patient safety event taxonomy: a standardized terminology and classification schema for near misses and adverse events. Int J Qual Health Care. 2005;17:95–105.
77. Elder NC, Dovey SM. Classification of medical errors and preventable adverse events in primary care: A synthesis of the literature. J Fam Pract. 2002;51:927–932.
78. Sentinel Event Statistics. [March 31, 2005]. Available at
79. Taxonomy of Medication Errors. Available at
80. Zhang J, Patel VL, Johnson TR, Shortliffe EH. A cognitive taxonomy of medical errors. J Biomed Inform. 2004;37:193–204.
81. Norman DA. The psychology of everyday things. New York, NY: Basic Books; 1988.
82. Norman DA, Draper SW. User centered system design: new perspectives on human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates; 1986.
83. Reason J. Human Error. Cambridge, England: Cambridge University Press; 1990.
84. Zhang J, Norman DA. Representations in distributed cognitive tasks. Cognitive Science. 1994;18:87–122.
85. Zhang J, Patel VL. Distributed cognition, representation, and affordance. Cognition & Pragmatics. 2006;14:333–341.
86. Ackerman MS, Halverson CA. Organizational memory as objects, processes, and trajectories: An examination of organizational memory in use. Comput Support Coop Work. 2004;13:155–189.
87. Nunamaker J, Dennis A, Valacich J, Vogel D, George J. Electronic meeting systems to support group work. Commun ACM. 1991;34:40–61.
88. Patel V, Arocha J, Kaufman D. Diagnostic reasoning and medical expertise. The Psychology of Learning and Motivation: Advances in Research and Theory. 1994;31:187–252.
89. Te’eni D. Review: a cognitive-affective model of organizational communication for designing IT. MIS Quarterly. 2001;25:251–312.
90. Patel VL, Kaufman DR, Magder SA. The acquisition of medical expertise in complex dynamic environments. In: Ericsson A, editor. The road to excellence: the acquisition of expert performance in the arts and sciences, sports, and games. Mahwah, N.J: Lawrence Erlbaum Associates; 1996. p. 369.
91. Patel VL, Kaufman DR, Allen VG, Shortliffe EH, Cimino JJ, Greenes RA. Toward a framework for computer-mediated collaborative design in medical informatics. Methods Inf Med. 1999;38:158–76.
92. Malhotra S, Jordan D, Shortliffe E, Patel VL. Workflow modeling in critical care: Piecing together your own puzzle. J Biomed Inform. 2007;40:81–92.
93. Hollnagel E, Woods D, Levenson N. Resilience Engineering: Concepts and Precepts. Aldershot, Hampshire, England: Ashgate; 2006.
94. Ball LJ, Ormerod TC. Putting ethnography to work: the case for a cognitive ethnography of design. Int J Hum Comput Stud. 2000;53:147–168.
95. Ericsson KA, Simon HA. Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press; 1993.
96. Shortliffe E, Patel V. Generation and formulation of knowledge: Human-intensive techniques. In: Greenes RA, editor. Clinical Decision Support: The Road Ahead. Elsevier-Academic Press; 2007. pp. 207–226.
97. Schraagen JM, Chipman SF, Shalin VL. Cognitive task analysis. Mahwah, NJ: Lawrence Erlbaum Associates; 2000.
98. Patel VL, Arocha JF, Kaufman DR. A primer on aspects of cognition for medical informatics. J Am Med Inform Assoc. 2001;8:324–343.
99. Patel VL, Kaufman DR. Cognitive science and biomedical informatics. In: Shortliffe EH, Cimino JJ, editors. Biomedical informatics: Computer applications in health care and biomedicine. 3rd ed. New York: Springer-Verlag; 2006. pp. 133–185.
100. Card SK, Moran TP, Newell A. The psychology of human-computer interaction. Hillsdale, NJ: Erlbaum; 1983.
101. Kieras DE. GOMS models for task analysis. In: Diaper D, Stanton N, editors. Handbook of task analysis for human-computer interaction. London: Lawrence Erlbaum Associates; 2003. pp. 83–116.
102. Mokkarala P, Brixey JJ, Johnson TR, Patel VL, Zhang J, Turley JP. Development of comprehensive medical error ontology. AHRQ: Advances in Patient Safety: New Directions and Alternative Approaches. Invited paper, under review.
103. Gong Y, Zhu M, Li J, Turley JP, Zhang J. Communication ontology for medical errors. Proceedings of MedInfo. 2007
104. Horsky J, Gutnik L, Patel VL. Technology for emergency care: Cognitive and workflow considerations. Proc AMIA Annu Fall Symp. 2006:344–8.
105. Hakimzada AF, Green RA, Sayan OR, Zhang J, Patel VL. The nature and occurrence of registration errors in the emergency department. Int J Med Inform. 2007. doi: 10.1016/j.ijmedinf.2007.04.011.
106. Laxmisan A, Hakimzada F, Sayan OR, Green RA, Zhang J, Patel VL. The multitasking clinician: Decision-making and cognitive demand during and after team handoffs in emergency care. Int J Med Inform. 2007;40:801–11.
107. Brixey JJ, Robinson DJ, Johnson CW, Johnson TR, Turley JP, Patel VL, Zhang J. Towards a hybrid method to categorize interruptions and activities in healthcare. Int J Med Inform. 2007;40:812–20.
108. Brixey JJ, Tang Z, Robinson DJ, Johnson CW, Johnson TR, Turley JP, Patel VL, Zhang J. Interruptions in a level one trauma center: A case study. Int J Med Inform. 2007. doi: 10.1016/j.ijmedinf.2007.04.006.
109. Brixey JJ, Robinson DJ, Johnson CW, Johnson TR, Turley JP, Zhang J. A concept analysis of the phenomenon interruption. Adv Nurs Sci. 2007;30:E26–E42.
110. Cohen T, Blatter B, Almeida C, Shortliffe E, Patel V. Distributed cognition in the Psychiatric Emergency Department: A cognitive blueprint of a collaboration in context. Artif Intell Med. 2006;37:73–83.
111. Horsky J, Kuperman GJ, Patel VL. Comprehensive analysis of a medication dosing error related to CPOE. J Am Med Inform Assoc. 2005;12:377–82.
112. Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc. 2004;11:104–112.
113. Sharda P, Das AK, Cohen TA, Patel V. Customizing clinical narratives for the electronic medical record interface using cognitive methods. Int J Med Inform. 2006;75:346–368.
114. Kintsch W. The representation of meaning in memory. Hillsdale, NJ: Lawrence Erlbaum; 1974.
115. van Dijk T, Kintsch W. Strategies of discourse comprehension. New York: Academic Press; 1983.
116. Arocha JF, Wang D, Patel VL. Identifying reasoning strategies in medical decision making: a methodological guide. J Biomed Inform. 2005;38:154–71.
117. Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform. 2004;37:56–76.