In their landmark 1999 report, To Err is Human: Building a Safer Health System, the Institute of Medicine estimated that avoidable medical errors contribute to 44,000–98,000 deaths and more than a million injuries annually in United States hospitals.1 In response to these disturbing data, accreditation bodies, payers, non-profit organizations, governments, and hospitals launched major initiatives and invested considerable resources to improve patient safety.2–3 Assessing the impact of these patient safety initiatives requires generally accepted, rigorous, standardized, and practical measures of adverse events.4–5
A number of approaches to measuring adverse event rates have been used, including voluntary reports (“incident” or “occurrence” reports), mining of administrative databases (most notably the Agency for Healthcare Research and Quality’s [AHRQ] Patient Safety Indicators), the two-stage review process used in the Harvard Medical Practice Study, and the Institute for Healthcare Improvement’s (IHI) “trigger tool” approach.6–7 Each of these methods has advantages and limitations (Table). By identifying clues that direct chart reviewers to the portions of a patient’s hospitalization most likely to contain an adverse event, the trigger tool approach provides an efficient variation on retrospective chart review and overcomes many of the limitations of other methods.7–11 A brief discussion of each of these approaches is worthwhile, because the adverse event rates identified differ dramatically depending on the technique used to identify and measure harm.
The best-known strategy for identifying and measuring patient safety events in US hospitals is the occurrence (“incident”) report submitted by caregivers. Although these data are relatively easy and inexpensive to obtain, evidence suggests that occurrence reports are underutilized12–14 and identify only 2%–8% of all adverse events in the inpatient setting.7,9,10,12 This underutilization reflects the fact that occurrence reports are voluntary, time intensive, far more likely to be completed by nurses than by physicians,15 and frequently perceived by staff to result in punitive action.12 Although they provide important clues to process flaws, occurrence reports generally capture near misses and sentinel events and rarely reflect the full spectrum of adverse events.16–18
Approaches to measuring patient safety using administrative data sets are appealing, as these data are routinely available, inexpensive to obtain, and immediately comparable across sites. However, administrative data sets, which are the source of the adverse event rates identified by AHRQ’s Patient Safety Indicators,19 are highly susceptible to variation in coding practices, and the harms they aim to capture are easily hidden in the medical record. As a result, present approaches to identifying adverse events from administrative data sets have limited sensitivity and specificity and should probably be used only to help hospitals prioritize chart review and improvement initiatives.7,20–21
The Harvard Medical Practice Study used retrospective chart review to uncover adverse events.22 Another influential study identified adverse events using a combination of “voluntary and verbally solicited reports from house officers, nurses, and pharmacists; and by medication order sheet, medication administration record, and chart review of all hospitalized patients.”17 Several other significant safety studies used similar methods. The most frequently cited adult studies using a retrospective methodology22–23 revealed adverse event rates of 3.7 and 2.9 per 100 admissions, respectively. This identification strategy suffers from several problems: inconsistency in defining adverse events; poor, incomplete, confusing, or conflicting entries in the medical record; and resource intensiveness. The methodology was valuable in the early days of the patient safety field, highlighting the major safety risks present in inpatient health care settings, but it has largely been replaced by the more efficient and more sensitive trigger tool method described below.7
The trigger tool methodology has emerged as the premier approach to adverse event detection.7,24–25 Triggers, defined as “occurrences, prompts, or flags found on review of the medical record that ‘trigger’ further investigation to determine the presence or absence of an adverse event,”26 have been shown to identify adverse events more efficiently than any other published detection method.7,9–10,12–13,25–26 Recent studies using the IHI Global Trigger Tool27 have identified harm rates in adults in US hospitals of 49 per 100 admissions7 (33% of patients), 36 per 100 admissions (28% of patients) in Medicare patients,25 and 25 per 100 admissions (18% of patients) across North Carolina.24 Between 44% and 63% of these adverse events were judged preventable. Examples of triggers include abnormal laboratory results such as a rising creatinine, prescriptions for antidote medications such as naloxone, and other medical record–based hints that an adverse event might have occurred, prompting a more thorough review of the chart.23 The IHI adult Global Trigger Tool,27 the best studied of the published trigger tools, consistently demonstrates compelling operating characteristics, including excellent inter- and intra-rater reliability, very good to excellent sensitivity, and excellent specificity when compared with the gold standard of detailed expert chart review.7,11,18
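To make the trigger concept concrete, the following minimal Python sketch screens a simplified chart record for two of the triggers mentioned above. It is purely illustrative: the record format, the trigger rules, and the creatinine threshold are hypothetical simplifications for exposition, not the actual criteria of the IHI Global Trigger Tool.

```python
# Illustrative sketch of trigger-based chart screening. The record layout,
# rules, and thresholds are hypothetical, not the IHI Global Trigger Tool's
# actual criteria.

def rising_creatinine(record, ratio=2.0):
    """Trigger: serum creatinine rose to >= `ratio` times the first value."""
    values = record.get("creatinine_mg_dl", [])
    return len(values) >= 2 and max(values) >= ratio * values[0]

def naloxone_ordered(record):
    """Trigger: the opioid antidote naloxone appears among the medications."""
    return any(med.lower() == "naloxone" for med in record.get("medications", []))

TRIGGERS = [
    ("rising creatinine", rising_creatinine),
    ("naloxone ordered", naloxone_ordered),
]

def screen_chart(record):
    """Return the names of all triggers found in a chart record."""
    return [name for name, test in TRIGGERS if test(record)]

if __name__ == "__main__":
    chart = {
        "creatinine_mg_dl": [0.9, 1.3, 2.0],   # rising across the stay
        "medications": ["metformin", "Naloxone"],
    }
    print("Triggers found:", screen_chart(chart))
```

Note that a positive screen is only a prompt for focused review; as the definition above emphasizes, a reviewer must still examine the chart to determine whether an adverse event actually occurred.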
A 2011 study by Classen and colleagues highlighted the relative test characteristics of the various adverse event detection methods.7 The authors reviewed 795 closed medical records from 3 large academic medical centers and found that the IHI Global Trigger Tool identified 354 of the 393 adverse events (90%) detected by expert chart review, while the AHRQ Patient Safety Indicators (derived from an algorithm applied to administrative data) identified 35 adverse events (9%), and occurrence reports identified only 4 adverse events (1%). Other studies have demonstrated similar findings.9,10,13,28
In summary, rates of harm in US hospitals remain unacceptably high, with little evidence of significant improvement since To Err is Human was published in 1999.4,7,24–25 One major reason for these persistently high rates has been the lack of an accepted, rigorous, standardized, and practical approach to measuring and tracking adverse events over time. The IHI Global Trigger Tool, along with other more patient population–specific trigger tools, was developed to provide a practical and reliable measurement approach for tracking rates of harm over time7,24–25,27 at the local, regional, and national levels. Though not perfect, trigger tools have better operating characteristics than other measurement approaches and detect significantly more adverse events than occurrence reports, administrative database–derived harm rates, and concurrent or retrospective chart review.29 Efforts are now underway to automate the IHI adult Global Trigger Tool and to construct and automate a pediatric global trigger tool. Once these two automated global trigger tools are validated, it seems likely that the Centers for Medicare & Medicaid Services (CMS) will require hospitals to report “all cause” harm rates, and perhaps report such results publicly or tie them to reimbursement. Other public and private insurers are sure to follow. These will be important next steps in moving US hospitals toward the real work at hand: reliably improving the safety of patients in our health care system.