In 1999, the Institute of Medicine1 published “To Err Is Human: Building a Safer Health System,” emphasizing the prevalence of preventable medical errors in American health care and the role of systems, processes, and conditions in ensuring (or undermining) safety. As 50–65% of inpatient adverse events are experienced by surgical patients2, and 75% of these occur intra-operatively3, the operating room (OR) is a high-impact area for safety improvements.
Traditionally, surgical vulnerability has been measured in terms of preoperative risk – patient and procedural risk factors4, surgeon volume5, institutional volume6 – while safety has been defined by the absence of postoperative morbidity and/or mortality. The intra-operative phase of care, despite its obvious relevance to the field of surgical safety and rich potential as a data source, has been largely neglected. Because it is understudied, there are many gaps in our knowledge of the intra-operative factors that contribute to or detract from patient safety, and, as a result, few evidence-based guidelines or interventions exist to support hospitals or their providers in the OR7–10.
Surgery is an inherently hazardous work domain requiring high reliability. Safe operations result from the successful coordination of individuals and teams of diverse training and experience levels, working within complex hospital systems, under constraints imposed by time, uncertainty, and health status. Human factors engineering, focusing on “the interaction among humans and other elements of a system…physical, cognitive, organizational, environmental, and other,”11 has been deployed and is responsible for safety and reliability advances in other, similarly high-risk industries, such as aeronautics or nuclear reactor control. Addressing the etiology of error at all levels – individual, team, and system – human factors analysis is an ideal tool for the study of safety in the OR.
In his 1990 treatise, “Human Error,” James Reason12 describes his Swiss cheese model of error (Figure 1). In it, the system is represented by a stack of Swiss cheese slices, each analogous to a protective layer in the system, with holes symbolizing the potential for failure at each step in the process. Because the holes in Swiss cheese (the vulnerabilities of a system) are not continuous throughout a stack (the system), most problems are stopped at one layer or another, before they culminate in a larger, more consequential error. For a catastrophic failure to occur, the holes must align at every level.
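The intuition behind the Swiss cheese model can be made concrete with a toy probability calculation: if each defensive layer fails independently with some small probability, a hazard reaches the patient only when every layer fails at once, so the overall risk is the product of the per-layer probabilities. The numbers below are purely hypothetical, chosen for illustration; they do not come from any study cited here.

```python
# Toy illustration of the Swiss cheese model: a hazard becomes a catastrophic
# failure only if it passes through the "hole" in every defensive layer.
# Per-layer hole probabilities are hypothetical, for illustration only.
from math import prod

def p_catastrophe(hole_probs):
    """Probability that independent defensive layers all fail simultaneously."""
    return prod(hole_probs)

# Four hypothetical layers (e.g., protocol, count, imaging, final check)
layers = [0.1, 0.05, 0.2, 0.1]
print(p_catastrophe(layers))  # product of per-layer hole probabilities (1e-4)
```

The sketch also shows why removing a single layer matters: dropping the 0.05 layer raises the product twenty-fold, which mirrors the model's claim that redundancy, not the perfection of any one barrier, produces safety.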
As per Reason, these holes may be of two types: active and latent. Active errors are those that are traditionally invoked during discussions about adverse events: readily apparent, they are committed by a human at “the sharp end,” at the point of care. A retained foreign body, for example, represents an active error: the failure to remove an instrument at the end of an operation. However, humans do not make these errors in isolation; they are predisposed towards them by latent conditions at “the blunt end,” in the system. Leaving an instrument in the patient is not the act of an individual surgeon; it is one precipitated by existing flaws in the organizational design of the entire process – the cumbersome and error-prone nature of the counting protocol, for example13–14.
In recent years, human factors experts have begun to view the human as the hero, rather than the source, of error. Indeed, in his follow-up book, “The Human Contribution,” Reason15 cautions against “an excessive reliance on system measures,” as it is individuals that constitute a system's last line of defense against error. With the uniquely human ability to anticipate and adapt to changing circumstances, people are capable of recovering problems that have managed to propagate through even the most thoughtfully designed systems. This heroism, however, has its limits. Citing Carthey's16 observational study of arterial switch operations, in which an increased risk of death was demonstrated with higher numbers of minor events regardless of compensation, Reason proposes a knotted rubber band model of system resilience (Figure 2). In it, the system is analogous to a rubber band, with a knot in the middle to represent current operating conditions. To maintain safety, the knot must stay within a narrow operating zone; stretch applied in one direction by dangerous perturbations in the system must be counteracted by compensatory corrections in the opposite direction. With a rising number of perturbations and corrections, the system becomes distorted beyond its capacity to respond.
The characterization of intra-operative human and system factors that impact safety has thus far been limited. Among available methodologies, the most widely utilized is the retrospective reconstruction of intra-operative events – root or common cause analysis17–18, for example, or the analysis of malpractice claims data3, 19–20. While such research has been informative about the specific factors that may lead to adverse outcomes, it is susceptible to bias21. Such post-hoc analyses suffer from inaccurate or incomplete recall; without a contemporaneous record, it is difficult to capture all of the mechanisms that have culminated in error. Furthermore, focusing research efforts on the negative effects of care selects for only part of all the available data; information regarding events that are averted or compensated – processes that would be highly instructive in understanding safety in the OR – is lost.
Prospective data collection in the OR is thus needed to completely describe the intra-operative delivery of care. Field observations have been described by several groups14, 16, 22–33, but have yet to be broadly applied; most of these studies are restricted to small case series at single institutions, for a number of reasons. First, human factors engineering is a relatively new field to medicine, and few people with experience in both disciplines (or multidisciplinary collaborations) exist. Access to the OR may be difficult to attain due to an under-recognition of intra-operative safety problems, as well as cultural mores regarding provider privacy in the workplace. Those who are successful in gaining entry are likely to encounter additional cognitive barriers to the complete transcription of intra-operative events: it can be very difficult to completely observe multiple simultaneous conversations or incidents, to link all downstream occurrences to all earlier preconditions, and to recall everything after the operation has ended. Moreover, as only a few extra people may unobtrusively be present in any OR at one time, the comprehension of ongoing events may only be as complete as the knowledge base and/or memory of the observers; consultation with domain experts for clarification purposes may only be realized retrospectively. Nevertheless, the vast majority of evidence about human factors in the OR has been generated using live field observations and will be reviewed.
By circumventing many of the aforementioned methodological limitations, video-based analyses34–35 hold great potential for furthering the study of safety in the OR. Video may be recorded prospectively, but reviewed retrospectively and repeatedly, until all events are fully understood and the connections between them are completely deciphered; as such, it eliminates many of the issues surrounding observer recall and subjectivity. Additionally, it may serve as an educational tool – a mechanism for providing targeted feedback to individuals, teams, and organizational leaders. However, video poses its own challenges. While it is theoretically indiscriminate in its capture, it may still generate incomplete data, depending on the technological capacity and/or functionality of the audiovisual equipment. Additionally, providers may be reluctant to be video-recorded due to fears that the recordings will be used during performance evaluations36 or in courts of law35. Such concerns may be addressed, at least in part, by carefully constructing research protocols with multiple layers of protection for study subjects, including restricted data access, scheduled data destruction, and acquisition of a Certificate of Confidentiality37. We will review the evidence generated using video in the OR.
For the purposes of this discussion, we will divide human factors into those pertaining to humans and those corresponding to the system. Human attributes are relevant to performance both individually and within a team, and include such qualities as communication, coordination, cooperation, leadership, and vigilance. System features circumscribe the environment in which humans work – the equipment they use, the structure of the larger organization in which they work, or the policies that govern them, for example. Several examples from the literature will be detailed below. Please note that these examples are intended to be illustrative only; they are not exhaustive lists of human factors.
As active errors were originally conceived by Reason, human cognitive limitations were the most proximal causal factor. Our own imperfect behavior, it was thought, makes us prone to failures at all stages of performance: planning (mistakes – flawed intentions), memory storage (lapses – omissions), and task execution (slips – failure to act as planned). Although humans are now viewed with increasing positivity – as agents of recovery, rather than sources of erraticism – their abilities are still subject to limitations. While we are recognized for our ability to compensate, this capacity diminishes with progressive perturbations in the system. The most competent nurse can slip when he's counting, and is even more likely to do so if he is simultaneously juggling the surgeons' requests for new instruments and the anesthesiologists' need for blood products, coordinating with the pre-operative and post-operative units, and answering the resident's pager. In the past, we have countered human limitations with increased standardization, theorizing that these additional barriers to atypical behavior would protect against failure. Requiring X-rays or automating the count procedure with bar-coding7 or radiofrequency identification technology38 decreases reliance on the error-prone manual count. However, recent human factors data indicate that flexibility is needed in the system to permit heroes to maneuver39; one must remain cognizant of the fact that even well-intended protocols (like the manual count) run the risk of inadvertently disabling providers13. An appropriate balance between minimizing human slips, lapses, and mistakes and maximizing human heroic potential must be maintained.
Communication is one of the most studied and most critical human factors in medicine. Root cause analyses of sentinel events reported to the Joint Commission on Accreditation of Healthcare Organizations between 2004 and mid-2011 implicate faulty communication in 56% of operative or post-operative complications, 63% of retained foreign bodies, and 68% of wrong patient, wrong site, or wrong procedure cases40. The importance of communication is further supported by reviews of surgical malpractice claims: Griffen41 attributes 22% of complications to miscommunication, making it the most pervasive behavioral problem of all he investigated, while Greenberg19 found that 30% of all communication breakdowns occur in the OR. Lingard42 estimated that 31% of all procedurally-relevant communications in the OR fail; of these, 36% have tangible effects, such as inefficiency, delay, resource waste, or procedural error.
Miscommunication has multiple etiologies, and therefore is best addressed with a multi-pronged approach. As we mentioned above, standardization may help in certain, selected scenarios – protocols may serve as memory aids, for example, ensuring that all salient points are covered in a discussion. After standardized communication was integrated into handoffs at Northwestern University43, surgical residents' perceptions of the accuracy, completeness, and clarity during the transfer of care improved significantly. After implementing the Situation, Background, Assessment, and Recommendation model of communication into their surgical curriculum, the Mount Sinai School of Medicine44 demonstrated a decrease in order entry errors. The University of Washington's45 computerized sign-out system allowed residents to spend more time with patients during pre-rounds and halved the number of patients missed on rounds, while improving resident ratings of continuity of care and workload.
Checklists work analogously in the OR, reminding providers to do the things which are relevant to almost every operation, but also have another important function. Unlike surgical inpatient teams, the OR team is multidisciplinary; the individuals that comprise it are more likely to differ in their understandings of the situation at hand. The checklist compels them towards the establishment of a shared mental model that enables each team member to better anticipate and plan his/her own role. Multi-national studies have demonstrated its impact on patient morbidity and mortality, as well as provider attitudes regarding safety9, 46.
However, standardized protocols cannot help with the majority of communication in the OR – that which occurs spontaneously, in response to continuously evolving events. In such situations, communication is best accomplished ad hoc – with flexibility for individuals to speak up about arising threats to safety as they see fit. To achieve safety, a level of candidness, and hence a sense of team, is needed; the OR must be an environment in which each team member recognizes and is comfortable in his/her role as an equal contributor – a form of checks and balances for his/her colleagues and the system. The importance of such teamwork in the OR setting to patient outcomes is well-established. Across 44 Veterans Affairs Medical Centers and 8 academic hospitals, OR team members who reported higher levels of positive communication and collaboration with attending and resident physicians on the surgical service were found to have lower risk-adjusted morbidity rates47. Mazzocco30 demonstrated increased odds of complications or death when intra-operative information sharing – a communication behavior that she distinguishes from briefing and which incorporates “mutual respect” and “appropriate[ness]” – was observed to be low. In Catchpole's26 observational study of laparoscopic cholecystectomies and carotid endarterectomies, higher leadership and management skills scores for surgeons and nurses correlated with shorter operating times and lower rates of procedural problems and errors outside the operating field, respectively.
As these studies show, there is a high degree of variability surrounding teamwork in the OR30, 47. Indeed, even within a single OR team, the perception of it may differ, depending on the discipline of the reporting party48. Compared to anesthesiologists and nurses, surgeons seem to overestimate the communication and teamwork in the room49–51. These disparities may be the result of the traditional vertical hierarchy in surgery. Surgeons, at its top, are simply not the ones who feel constrained by it, and thus are less likely to recognize the value of a flattened one (i.e. in the open communication or shared decision-making that it would promote)51. Likewise, while nurses, anesthesiologists, and surgeons are equally capable of recognizing tension in the OR, they disagree on the responsibility for creating and resolving it52.
Although certainly more amorphous a target than a successful handoff or briefing, teamwork is indeed amenable to intervention. After introducing a Team Training curriculum, Northwestern University53 reduced their observable intra-operative communication failure rate from 0.7/hour to 0.3/hour. At the University of Oxford54, a non-technical skills course decreased operative technical errors and non-operative procedural errors during laparoscopic cholecystectomies. The Veterans Health Administration, having implemented a medical team training program for OR personnel in its facilities on a rolling basis, documented a decline in risk-adjusted surgical mortality rate that was 50% greater in the trained hospitals than in the untrained ones55. Such interventions represent adaptations of Crew Resource Management (CRM), a training module developed in aviation to educate cockpit crews about communication (e.g. assertiveness, briefing/debriefing), error management (e.g. the recognition of “red flag” situations), and teamwork (e.g. cross checking, interpersonal skills, shared mental models, conflict resolution, and flat hierarchies)54, 56–58, and thus far have been conducted in a one-time fashion. Despite their apparent success, the investigators of these studies note the limitations of a single intervention; because the adoption of CRM techniques represents a significant cultural and professional shift in medicine, continuous training and feedback are needed54, 56.
Several instruments have been developed for measuring teamwork in the OR, and may be considered for assessments of baseline needs, as well as post-intervention change and sustainability over time. The Observational Teamwork Assessment for Surgery (OTAS) consists of a teamwork-related task checklist (patient tasks, equipment/provisions tasks, and communication tasks) and a global rating scale for teamwork-related behaviors (communication, coordination, leadership, monitoring, and cooperation). While its developers report good inter-observer reliability and content validity59, they have also described a learning curve for using it, which may limit its reproducibility by other groups60. The Oxford Non-Technical Skills System (NOTECHS) rates each OR subteam (anesthesiology, nursing, surgery) on 4 dimensions: leadership & management, teamwork & cooperation, problem-solving & decision-making, and situation awareness. Its developers, too, have demonstrated reliability and validity, as well as correlation with OTAS32, but it also has not found widespread use outside of its home institution.
In an operation, the system may refer to the physical environment of that particular OR or the policies, practices, and organizational structure of the department or hospital. It may also implicate the professional culture or values of an institution as a whole, or that of the discipline of surgery.
As a human factor that describes the system, equipment has face validity for most surgeons; it is easy for us to appreciate the value in having functional, well-designed equipment available at the appropriate times. Healey61 documented 64 instances of unavailable or non-functional equipment in 35 of 50 observed general surgical procedures, and of all of the intra-operative distractions and/or interruptions he noted, these contributed the most to interference with the case. In 31 cardiac operations, Wiegmann24 mapped 11% of surgical flow disruptions (“deviations from the natural process of an operation”) to difficulties with equipment or technology.
Perhaps less readily understood, but no less critical, are the organizational processes of the OR. For example, in a traditional OR system, the surgeon provides his/her own estimated case durations, a practice which introduces a great deal of subjectivity as well as variability, hindering attempts to match OR capacity to usage. Without accurate approximations of case length, the appropriate allocation of human and equipment resources is difficult. In our own study62, we observed an instance in which more oncology cases were simultaneously booked than the number of oncology kits available; the team expended extra time and effort to obtain the necessary instruments in a piecemeal fashion, and the case was delayed. At the Mayo Clinic63, the development of a surgeon-specific procedural database to provide estimates for case duration based upon historical and prospective moving averages was among several initiatives that led to increased OR efficiency and financial performance.
Similarly, in a traditional OR system, the surgeon specifies the contents of his/her own instrument kits. With this practice, thousands of case-cards or pick-tickets may result, consuming a significant amount of nursing and central processing time and effort. After nursing leadership and surgical faculty collaborated to consolidate and streamline the instrument trays at the University of Alabama64, tray and case cart errors decreased. Because circulators spent less time making phone calls to central sterile and flash-sterilizing instruments, their ability to attend to the case increased.
To our knowledge, there are no instruments that assess the system in isolation. The Safety Attitudes Questionnaire65 and the OR Management Attitudes Questionnaire49 ask respondents to rate teamwork, as well as describe the organizational climate towards safety. Wiegmann's24 instrument includes a teamwork category, and Healey's61 contains several measures of communication and “procedural” interference, as well as the environment. The Disruptions in Surgery Index (DiSI) also incorporates individual and team factors in its list of potential disruptions, and is essentially a survey, rather than an observational tool, querying respondents about the perceived impact on themselves and their team members66.
We captured one particularly illustrative example on video in our own observational study62. During a procedure, an alarm began to sound, yet its source was unclear; the alert did not indicate to which piece of equipment it belonged, or whether it represented equipment malfunction or patient endangerment. As the surgeons continued to operate, the nurses and anesthesiologists became absorbed in its investigation. After several minutes, the circulator took charge of the situation, sent the anesthesiologists back to the head of the bed, and called biomedical engineering for help. Eventually, a failed warmer was determined to be the source, and a new one was brought.
Because nothing adverse happened, this incident would likely have been dismissed, rather than reported. However, it provides us with useful information. An analogous case in the aviation industry, in which recovery was not achieved, warns us against ignoring such data. In 1972, the landing gear indicator light on Eastern Airlines Flight 401 failed to illuminate. While the crew preoccupied itself with troubleshooting the issue, the autopilot became deactivated and the flight crashed67. The tragedy of the event lies in the fact that the landing gear could have been lowered manually; 101 people died as a result of a burned-out lightbulb (equipment which was unimportant to the flight) and a failure of the team to maintain situational awareness (vigilance) and to manage their human resources (delegate the responsibilities for troubleshooting and flying to separate people).
Our case displayed similar equipment problems (malfunction and poor design), as well as an initial failure of the team to maintain situational awareness and to manage their human resources. However, the smooth engagement of the team – leadership from the circulator and cooperation from the anesthesiologists and the biomedical engineers – led to a quick recovery, and may have prevented the occurrence of a more catastrophic event.
In the OR, as in other high acuity, high reliability work environments, human and system factors interact to impact safety. The complex interrelations and interdependencies between people, resources, information, and technology must be more clearly delineated if successful interventions are to be developed. We emphasize that “success” must not be determined narrowly; given the intricate interconnections between various human factors in the OR and the importance of flexibility to system resilience, all consequences must be evaluated. Emerging techniques for conducting such research hold much promise in the advancement of our understanding of intra-operative safety.
Despite our knowledge that the majority of surgical adverse events occur in the operating room, our understanding of the intraoperative phase of care is incomplete; most studies measure surgical safety in terms of preoperative risk or postoperative morbidity and mortality. Because of the OR's complexity, human factors engineering provides an ideal methodology for studies of intraoperative safety. In this article, we review models of error and resilience as delineated by human factors experts, correlating them to OR performance. We then outline existing methodologies for studying intraoperative safety, focusing on video-based observational research. Finally, we detail specific human and system factors that have been examined in the OR.
The authors have nothing to disclose.