This report provides an evaluation of the National Library of Medicine–sponsored Woods Hole Medical Informatics (WHMI) course and the extent to which the objectives of the program are achieved. Two studies were conducted to examine the participants' perceptions of both the short-term (spring 2002) and the long-term influences (1993 through 2002) on knowledge, skills, and behavior. Data were collected through questionnaires, semistructured telephone interviews, and participant observation to provide both quantitative and qualitative assessment. The participants of the spring 2002 course considered the course an excellent opportunity to increase their knowledge and understanding of the field of medical informatics and to meet and interact with other professionals in the field to establish future collaborations. Past participants remained highly satisfied with their experience at Woods Hole and its influence on their professional careers and their involvement in a broad range of activities related to medical informatics. This group reported a deeper knowledge and understanding of medical informatics, increased networking with other professionals, and greater confidence and motivation to work in the field. Many of the participants felt, and showed evidence of, becoming effective agents of change in medical informatics at their institutions, which is one of the objectives of the program.
As a multidisciplinary field, medical informatics draws on a range of disciplines, such as computer science, information science, and the social and cognitive sciences. The cognitive sciences can provide important insights into the nature of the processes involved in human–computer interaction and help improve the design of medical information systems by providing insight into the roles that knowledge, memory, and strategies play in a variety of cognitive activities. In this paper, the authors survey literature on aspects of medical cognition and provide a set of claims that they consider to be important in medical informatics.
Objective: To assess the effects of a computer-based patient record system on human cognition. Computer-based patient record systems can be considered "cognitive artifacts," which shape the way in which health care workers obtain, organize, and reason with knowledge.
Design: Study 1 compared physicians' organization of clinical information in paper-based and computer-based patient records in a diabetes clinic. Study 2 extended the first study to include analysis of doctor–patient–computer interactions, which were recorded on video in their entirety. In Study 3, physicians' interactions with computer-based records were followed through interviews and automatic logging of cases entered in the computer-based patient record.
Results: Results indicate that exposure to the computer-based patient record was associated with changes in physicians' information-gathering and reasoning strategies. Differences were found in the content and organization of information: paper records had a narrative structure, whereas the computer-based records were organized into discrete items of information. These differences in knowledge organization affected data-gathering strategies, with the structure of the computer-based patient record system shaping the nature of the doctor–patient dialogue.
Conclusion: Technology has a profound influence in shaping cognitive behavior, and the potential effects of cognition on technology design need to be explored.
Recent developments in medical informatics research have afforded possibilities for great advances in health care delivery. These exciting opportunities also present formidable challenges to the implementation and integration of technologies in the workplace. As in most domains, there is a gulf between technologic artifacts and end users. Since medical practice is a human endeavor, there is a need for bridging disciplines to enable clinicians to benefit from rapid technologic advances. This in turn necessitates a broadening of disciplinary boundaries to consider cognitive and social factors pertaining to the design and use of technology. The authors argue for a place of prominence for cognitive science. Cognitive science provides a framework for the analysis and modeling of complex human performance and has considerable applicability to a range of issues in informatics. Its methods have been employed to illuminate different facets of design and implementation. This approach has also yielded insights into the mechanisms and processes involved in collaborative design. Cognitive scientific methods and theories are illustrated in the context of two examples that examine human-computer interaction in medical contexts and computer-mediated collaborative processes. The framework outlined in this paper can be used to refine the process of iterative design, end-user training, and productive practice.
Because scientific research is guided by concerns for uncovering “fundamental truths,” its time frame differs from that of design, development, and practice, which are driven by immediate needs for practical solutions. In medicine, however, as in other disciplines, basic scientists, developers, and practitioners are being called on increasingly to forge new alliances and work toward common goals. The authors propose that medical informatics be construed as a local science of design. A local science seeks to explain aspects of a domain rather than derive a set of unifying principles. Design is concerned with the creation, implementation, and adaptation of artifacts in a range of settings. The authors explore the implications of this point of view and endeavor to characterize the nature of informatics research, the relationship between theory and practice, and issues of scientific validity and generalizability. They argue for a more pluralistic approach to medical informatics in building a cumulative body of knowledge.
Objective: An evaluation of the cognitive processes used in the translation of a clinical guideline from text into an encoded form so that it can be shared among medical institutions.
Design: A comparative study at three sites regarding the generation of individual and collaborative representations of a guideline for the management of encephalopathy using the GuideLine Interchange Format (GLIF) developed by members of the InterMed Collaboratory.
Measurements: Using theories and methods of cognitive science, the study involves a detailed analysis of the cognitive processes used in generating representations in GLIF. The resulting process-outcome measures are used to compare subjects with various types of computer science or clinical expertise and from different institutions.
Results: Consistent with prior studies of text comprehension and expertise, the variability in strategies was found to depend on the degree of prior experience and knowledge of the domain. Differing both in content and structure, the representations developed by physicians were found to have additional information and organization not explicitly stated in the guidelines, reflecting the physicians' understanding of the underlying pathophysiology. The computer scientists developed more literal representations of the guideline; additions were mostly limited to specifications mandated by the logic of GLIF itself. Collaboration between physicians and computer scientists resulted in consistent representations that were more than the sum of the separate parts, in that both domain-specific knowledge of medicine and generic knowledge of guideline structure were incorporated.
Conclusion: Because of the variable construction of guideline representations, understanding the processes and limitations involved in their generation is important in developing strategies to construct shared representations that are both accurate and efficient. The encoded guidelines developed by teams that include both clinicians and experts in computer-based representations are preferable to those developed by individuals of either type working alone.
To present a framework for combining implicit knowledge acquisition from multiple experts with machine learning and to evaluate this framework in the context of anemia alerts.
Materials and Methods
Five internal medicine residents reviewed 18 anemia alerts while ‘talking aloud’. They identified features that were reviewed by two or more physicians to determine the appropriate alert level, etiology, and treatment recommendation. Based on these features, data were extracted from 100 randomly selected anemia cases for a training set and an additional 82 cases for a test set. Two staff internists assigned an alert level, etiology, and treatment recommendation before and after reviewing the entire electronic medical record. The training set of 118 cases (100 plus 18) and the test set of 82 cases were explored using the RIDOR and JRip algorithms.
The feature set was sufficient to assess 93% of anemia cases (intraclass correlations for alert level before and after review of the records by internists 1 and 2 were 0.92 and 0.95, respectively). High-precision classifiers were constructed to identify low-level alerts (precision P=0.87, recall R=0.40), iron deficiency (P=1.00, R=0.73), and anemia associated with kidney disease (P=0.87, R=0.77).
It was possible to identify low-level alerts and several conditions commonly associated with chronic anemia. This approach may reduce the number of clinically unimportant alerts. The study was limited to anemia alerts. Furthermore, clinicians were aware of the study hypotheses, potentially biasing their evaluation.
Implicit knowledge acquisition, collaborative filtering and machine learning were combined automatically to induce clinically meaningful and precise decision rules.
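The precision and recall figures reported above have a simple operational meaning. As a minimal sketch (using invented labels, not the anemia-alert study's data), they can be computed per class from paired gold-standard and predicted alert labels:

```python
# Illustrative only: per-class precision and recall from paired
# gold-standard and predicted labels. The labels below are invented
# examples, not results from the anemia-alert study.

def precision_recall(gold, pred, target):
    # true positives: predicted target and actually target
    tp = sum(1 for g, p in zip(gold, pred) if p == target and g == target)
    # false positives: predicted target but actually something else
    fp = sum(1 for g, p in zip(gold, pred) if p == target and g != target)
    # false negatives: actually target but predicted something else
    fn = sum(1 for g, p in zip(gold, pred) if p != target and g == target)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

gold = ["low", "low", "iron", "kidney", "low", "iron"]
pred = ["low", "iron", "iron", "kidney", "low", "iron"]
p, r = precision_recall(gold, pred, "low")  # precision 1.0, recall 2/3
```

High precision with moderate recall, as in the classifiers above, means the rules rarely mislabel a case as (say) iron deficiency, at the cost of missing some true cases.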
Clinical (L01.700.508.300.190); clinical research informatics; cognitive study (including experiments emphasizing verbal protocol analysis and usability); computer-assisted (L01.700.508.100); computerized (L01.700.508.300.695); decision making; decision support systems; expert systems (L01.700.568.110.065.190); human–computer interaction and human-centered computing; improving the education and skills training of health professionals; information retrieval; information storage and retrieval (text and images); measuring/improving patient safety and reducing medical errors; medical records systems; qualitative/ethnographic field study; reminder systems (L01.700.508.300.790)
Critical care environments are complex and dynamic. To adapt to such environments, clinicians may be required to alter their workflows, resulting in deviations from standard procedures. In this work, deviations from standards in trauma critical care are studied. Thirty trauma cases were observed in a Level 1 trauma center. Tracked activities were compared to the Advanced Trauma Life Support standard to determine (i) whether deviations had occurred, (ii) the type of deviations, and (iii) whether deviations were initiated by individuals or collaboratively by the team. Results show that expert clinicians deviated to innovate, while the deviations of novices resulted mostly in error. Experts’ well-developed knowledge allows for flexibility and adaptiveness in dealing with standards, resulting in innovative deviations while minimizing errors. Providing an informatics solution in such a setting would mean that standard protocols would have to be flexible enough to “learn” from new knowledge, yet provide strong support for trainees.
Successful handoffs ensure smooth, efficient, and safe patient care transitions. Tools and systems designed for the standardization of clinician handoffs often focus on the communication activity during transitions, with limited support for preparatory activities such as information seeking and organization. We designed and evaluated a Handoff Intervention Tool (HAND-IT) based on a checklist-inspired, body-system format allowing structured information organization, and a problem-case narrative format allowing temporal description of patient care events. Based on a pre-post prospective study using a multi-method analysis, we evaluated the effectiveness of HAND-IT as a documentation tool. We found that the use of HAND-IT led to fewer transition breakdowns, greater tool resilience, and likely better learning outcomes for less-experienced clinicians when compared to the current tool. We discuss the implications of our results for improving patient safety with a continuity-of-care-based approach.
Federal legislation (Health Information Technology for Economic and Clinical Health (HITECH) Act) has provided funds to support an unprecedented increase in health information technology (HIT) adoption for healthcare provider organizations and professionals throughout the U.S. While recognizing the promise that widespread HIT adoption and meaningful use can bring to efforts to improve the quality, safety, and efficiency of healthcare, the American Medical Informatics Association devoted its 2009 Annual Health Policy Meeting to consideration of unanticipated consequences that could result with the increased implementation of HIT. Conference participants focused on possible unintended and unanticipated, as well as undesirable, consequences of HIT implementation. They employed an input–output model to guide discussion on occurrence of these consequences in four domains: technical, human/cognitive, organizational, and fiscal/policy and regulation. The authors outline the conference's recommendations: (1) an enhanced research agenda to guide study into the causes, manifestations, and mitigation of unintended consequences resulting from HIT implementations; (2) creation of a framework to promote sharing of HIT implementation experiences and the development of best practices that minimize unintended consequences; and (3) recognition of the key role of the Federal Government in providing leadership and oversight in analyzing the effects of HIT-related implementations and policies.
Biomedical researchers often work with massive, detailed and heterogeneous datasets. These datasets raise new challenges of information organization and management for scientific interpretation, as they demand much of the researchers’ time and attention. The current study investigated the nature of the problems that researchers face when dealing with such data. Four major problems identified with existing biomedical scientific information management methods were related to data organization, data sharing, collaboration, and publications. Therefore, there is a compelling need to develop an efficient and user-friendly information management system to handle the biomedical research data. This study evaluated the implementation of an information management system, which was introduced as part of the collaborative research to increase scientific productivity in a research laboratory. Laboratory members seemed to exhibit frustration during the implementation process. However, empirical findings revealed that they gained new knowledge and completed specified tasks while working together with the new system. Hence, researchers are urged to persist and persevere when dealing with any new technology, including an information management system in a research laboratory environment.
biomedical data; bioscience; information management; implementation; collaboration
Handoffs have been recognized as a major healthcare challenge primarily due to the breakdowns in communication that occur during transitions in care. Consequently, they are characterized as being “remarkably haphazard”. To investigate the information breakdowns in group handoff communication, we conducted a study at a large academic hospital in Texas. We used multifaceted qualitative methods such as observations, shadowing of care providers and their work activities, audio-recording of handoffs, and care provider interviews to examine the handoff communication workflow, with particular emphasis on investigating the sources of information breakdowns. Using a mixed inductive-deductive analysis approach, we identified two critical sources of information breakdowns: a lack of standardization in handoff communication events and the unsuccessful completion of pre-turnover coordination activities. We propose strategic solutions that can help mitigate handoff communication breakdowns.
The pervasiveness of reasoning errors in emergency care (EC) is commonly acknowledged in clinical research. Much of this work has focused on diagnostic errors; yet, in EC, providing a specific diagnosis is generally secondary to managing the patient. To gain insights into non-diagnostic, treatment-related errors, we presented EC residents with computer-based case simulations and recorded their actions and verbalized thoughts. Nearly all participants diagnosed both study cases correctly yet made a variety of patient management errors, some with serious consequences. More substantial errors could be classified as stemming from incorrect patient status and treatment inferences. These EC reasoning errors are discussed within the framework of underlying cognitive processes.
This paper is based on a panel discussion held at the Artificial Intelligence in Medicine Europe (AIME) conference in Amsterdam, The Netherlands, in July 2007. It had been more than 15 years since Edward Shortliffe gave a talk at AIME in which he characterized artificial intelligence (AI) in medicine as being in its “adolescence” (Shortliffe EH. The adolescence of AI in medicine: Will the field come of age in the ‘90s? Artificial Intelligence in Medicine 1993; 5:93–106). In this article, the discussants reflect on medical AI research during the subsequent years and attempt to characterize the maturity and influence that has been achieved to date. Participants focus on their personal areas of expertise, ranging from clinical decision making, reasoning under uncertainty, and knowledge representation to systems integration, translational bioinformatics, and cognitive issues in both the modeling of expertise and the creation of acceptable systems.
Prior research has questioned the effectiveness of existing methods to identify individuals at high risk for contracting and transmitting the Human Immunodeficiency Virus (HIV) and other sexually transmitted diseases (STDs). Thus, new approaches are needed to provide these individuals with risk-reduction strategies. We review our research on young adults’ sexual decision making, using theories and methods from social and cognitive sciences. Four patterns of condom use and associated levels of risks and beliefs were identified. These patterns suggest value in targeting intervention strategies to individuals at different levels of risk. Findings also imply that the monogamous population may be at higher risk for infection than they realize. Primary-care physicians are the first line of contact for many individuals in the health care system, and may be in the best position to screen for at-risk individuals. Given time demands and other barriers, easy-to-use evidence-based guidelines for such screening are needed. We propose such guidelines for primary-care physicians to use in identifying an individual’s risk, from which custom-tailored intervention strategies can be developed.
youth; HIV/AIDS; decision-making; patterns of reasoning; risky sexual behavior; screening; education
The dynamic and distributed work environment in critical care requires a high level of collaboration among clinical team members and a sophisticated task coordination system to deliver safe, timely and effective care. A complex cognitive system underlies the decision-making process in such cooperative workplaces. This methodological review paper addresses the issues of translating cognitive research to clinical practice with a specific focus on decision-making in critical care, and the role of information and communication technology to aid in such decisions. Examples are drawn from studies of critical care in our own research laboratories. Critical care, in this paper, includes both intensive (inpatient) and emergency (outpatient) care. We define translational cognition as the research on basic and applied cognitive issues that contribute to our understanding of how information is stored, retrieved and used for problem-solving and decision-making. The methods and findings are discussed in the context of constraints on decision-making in real world complex environments and implications for supporting the design and evaluation of decision support tools for critical care health providers.
translational cognition; distributed cognition; critical care; intensive care; emergency triage; clinical workflow; technological design; medical errors; decision support; cognitive task analysis; ethnographic analysis; naturalistic decision-making
The emergency department has been characterized as interrupt-driven. Government agencies and patient safety organizations recognize that interruptions contribute to medical errors. The purpose of this study was to observe, record, and contextualize activities and interruptions experienced by physicians and Registered Nurses (RNs) working in a Level One Trauma Center.
A case study that relied on an ethnographic study design using the shadowing method.
A convenience sample of physicians and RNs, each with at least six months of experience in the Emergency Department (ED), were asked to participate. In these kinds of detailed qualitative investigations, it is quite common to have a small sample size.
Approval was obtained from institutional ethics committees prior to initiating the study. Community consent was obtained from the ED staff through in-service education.
All observations were made in the trauma section of the ED of a tertiary teaching hospital. The hospital is situated in a major medical center in the Gulf Coast region of the United States of America (USA).
Five attending ED physicians were observed for a total of 29 hours, 31 minutes. Eight RNs were shadowed for a total of 40 hours, 9 minutes. Interruptions and activities were categorized using the Hybrid Method to Categorize Interruptions and Activities (HyMCIA). Registered nurses received slightly more interruptions per hour than physicians. People, pagers, and telephones were identified as mediums through which interruptions were delivered. The physical environment was found to contribute to interruptions in workflow because of physical design and when supplies were not available. Physicians and RNs usually returned to the original, interrupted activity more often than leaving the activity unfinished.
This research provides an enhanced understanding of interruptions in workflow in the ED, the identification of work constraints, and the need to develop interventions to manage interruptions. It is crucial that interruptions be delivered in such a way that there is minimal negative impact on performance. The significance and importance of the interruption must always be weighed against the negative impact that it has on smooth, efficient workflow.
Interruption; Distraction; Emergency Medicine
Research into the nature and occurrence of medical errors has shown that these often result from a combination of factors that lead to the breakdown of workflow. Nowhere is this more critical than in the emergency department (ED), where the focus of clinical decision making is on the timely evaluation and stabilization of patients. This paper reports on the nature of errors and their implications for patient safety in an adult ED, using methods of ethnographic observation, interviews, and think-aloud protocols. Data were analyzed using modified “grounded theory,” which refers to a theory developed inductively from a body of data. Analysis revealed four classes of misidentification errors, ranging from multiple medical record numbers and wrong patient identification or address to, in one case, the switching of one patient’s identification information with another’s. Further analysis traced the root of these errors to ED registration.
These results indicate that the nature of errors in the emergency department is complex and multi-layered, resulting from an intertwined web of activity in which the stress of the work environment, high patient volume, and the tendency to adopt shortcuts play significant roles. The need for information technology (IT) solutions to these problems, as well as the impact of alternative policies, is discussed.
ED registration; medical errors; misidentification; workarounds; shortcuts; distributed cognition; emergency care
Critical care environments are inherently complex and dynamic. Assessment of workflow in such environments is not trivial. While existing approaches for workflow analysis such as ethnographic observation and interviewing provide contextualized information about the overall workflow, they are limited in their ability to capture the workflow from all perspectives. This paper presents a tool for automated activity recognition that can provide an additional point of view. Using data captured by Radio Frequency Identification (RFID) tags and Hidden Markov Models (HMMs), key activities in the environment can be modeled and recognized. The proposed method leverages activity recognition systems to provide a snapshot of workflow in critical care environments. The activities representing the workflow can be extracted and replayed using virtual reality environments for further analysis.
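The core idea of HMM-based activity recognition can be sketched as follows. The states, observations, and probabilities below are invented for illustration and are not the paper's trained models; Viterbi decoding maps a sequence of noisy tag-location readings to the most likely sequence of hidden activities:

```python
# Toy HMM activity recognition: infer hidden clinical activities from
# noisy location observations via Viterbi decoding. All states,
# observations, and probabilities are hypothetical.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best state path ending in s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][ps] * trans_p[ps][s] * emit_p[s][obs[t]], ps)
                for ps in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # trace back from the most probable final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ["intubation", "charting"]
start_p = {"intubation": 0.5, "charting": 0.5}
trans_p = {"intubation": {"intubation": 0.7, "charting": 0.3},
           "charting": {"intubation": 0.3, "charting": 0.7}}
emit_p = {"intubation": {"bedside": 0.9, "desk": 0.1},
          "charting": {"bedside": 0.2, "desk": 0.8}}
activities = viterbi(["bedside", "bedside", "desk"],
                     states, start_p, trans_p, emit_p)
# → ["intubation", "intubation", "charting"]
```

In practice the transition and emission probabilities would be learned from annotated sensor traces rather than specified by hand.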
The overall aim of this study is to evaluate the usability of the U.S. military electronic health record (EHR) system AHLTA using a systematic work-centered evaluation framework, UFuRT (User, Functional, Representational, and Task analysis). This paper, focusing on the Functional Analysis (FA) of AHLTA, explores operationalizable methods for studying the functions supported by user interfaces. A system hierarchy was created to map and uniquely identify all items on the interfaces. These items were then classified independently by two evaluators as Operations or Objects. Operations were further classified as either Domain or Overhead functions. With acceptable inter-rater agreement, of the 1,996 items in the interfaces, 61% were Operations, around one fourth of which were Overhead functions. Overhead functions are hypothesized to be targets for redesign to improve usability.
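Inter-rater agreement between two evaluators classifying interface items is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (with invented ratings, not the AHLTA data, and not necessarily the statistic the study used):

```python
# Illustrative Cohen's kappa for two raters classifying interface
# items as "operation" vs "object". The ratings are invented examples.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement expected from each rater's marginal label frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    labels = set(c1) | set(c2)
    expected = sum((c1[l] / n) * (c2[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

r1 = ["operation"] * 6 + ["object"] * 4
r2 = ["operation"] * 5 + ["object"] * 5  # disagrees on one item
kappa = cohens_kappa(r1, r2)  # → 0.8
```

A kappa near 1 indicates agreement well beyond chance; values around 0.6 to 0.8 are conventionally read as substantial agreement.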
Medical informatics; functional analysis; user interface; Electronic Health Record (EHR); UFuRT; usability evaluation; work-centeredness
Despite a body of research on teams in other fields, relatively little is known about measuring teamwork in healthcare. The aim of this study is to characterize the qualitative dimensions of team performance during cardiac resuscitation that result in good and bad outcomes. We studied each team’s adherence to the Advanced Cardiac Life Support (ACLS) protocol for ventricular fibrillation/tachycardia and identified team behaviors during simulated critical events that affected their performance. The process was captured by a task checklist developed for this purpose and a validated teamwork coding system. Results suggest that deviation from the sequence suggested by the ACLS protocol had no impact on the outcome, as the successful team deviated more from this sequence than the unsuccessful team. It is not deviation from the protocol per se that appears to be important; rather, how the leadership flexibly adapts to situational changes through deviations is the crucial factor in team competency.
Biomedical researchers often have to work on massive, detailed, and heterogeneous datasets that raise new challenges of information management. This study reports an investigation into the nature of the problems faced by the researchers in two bioscience test laboratories when dealing with their data management applications. Data were collected using ethnographic observations, questionnaires, and semi-structured interviews. The major problems identified in working with these systems were related to data organization, publications, and collaboration. The interoperability standards were analyzed using a C4I framework at the level of connection, communication, consolidation, and collaboration. Such an analysis was found to be useful in judging the capabilities of data management systems at different levels of technological competency. While collaboration and system interoperability are the “must have” attributes of these biomedical scientific laboratory information management applications, usability and human interoperability are the other design concerns that must also be addressed for easy use and implementation.
Interruptions are known to have a negative impact on activity performance. Understanding how an interruption contributes to human error is limited because there is no standard method for analyzing and classifying interruptions. Qualitative data are typically analyzed by either a deductive or an inductive method. Both methods have limitations. In this paper, a hybrid method was developed that integrates deductive and inductive methods for the categorization of activities and interruptions recorded during an ethnographic study of physicians and registered nurses in a Level One Trauma Center. Understanding the effects of interruptions is important for designing and evaluating informatics tools in particular and for improving healthcare quality and patient safety in general.
The hybrid method was developed using a deductive a priori classification framework with the provision of adding new categories discovered inductively in the data. The inductive process utilized line-by-line coding and constant comparison as stated in Grounded Theory.
The categories of activities and interruptions were organized into a three-tiered hierarchy of activity. Validity and reliability of the categories were tested by categorizing a medical error case external to the study. No new categories of interruptions were identified during analysis of the medical error case.
Findings from this study provide evidence that the hybrid model of categorization is more complete than either a deductive or an inductive method alone. The hybrid method developed in this study provides the methodical support for understanding, analyzing, and managing interruptions and workflow.
Interruption; Activity; Qualitative Methods; Categorization; Emergency Department
Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. Because perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the averting of potential error and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice.
Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED.
Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis.
The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts.
This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.