To examine the association between hospital self-reported compliance with the National Quality Forum patient safety practices and trauma outcomes in a nationally representative sample of level I and level II trauma centers.
Retrospective cohort study using the Nationwide Inpatient Sample.
Level I and level II trauma centers.
Multivariate logistic regression models were estimated to examine the association between clinical outcomes (in-hospital mortality and hospital-associated infections) and the National Quality Forum patient safety practices. We controlled for patient demographic characteristics, injury severity, mechanism of injury, comorbidities, and hospital characteristics.
The total score on the Leapfrog Safe Practices Survey was not associated with either mortality (adjusted odds ratio [aOR], 0.92; 95% confidence interval [CI], 0.79–1.06) or hospital-associated infections (1.03; 0.82–1.29). Full implementation of computerized physician order entry was not associated with reduced mortality (aOR, 1.03; 95% CI, 0.75–1.42) or with a lower risk of hospital-associated infections (0.94; 0.57–1.56). Full implementation of intensive care unit physician staffing was also not predictive of mortality (aOR, 1.13; 95% CI, 0.90–1.28) or of hospital-associated infections (1.04; 0.76–1.42).
In this nationally representative sample of level I and level II trauma centers, we were unable to detect evidence that hospitals reporting better compliance with the National Quality Forum patient safety practices had lower mortality or a lower incidence of hospital-associated infections.
There is marked variability in mortality outcomes across trauma centers.1 Trauma patients admitted to high-mortality hospitals have a 70% higher risk of dying compared with trauma patients who are admitted to average hospitals.2 This variability in trauma center outcomes presents an opportunity to explore the association between specific processes of care and outcomes. Performance measurement is potentially a transformative tool for turning hospitals into learning laboratories to identify and then to implement best practices in order to improve population outcomes.3,4 National initiatives to measure and promote the adoption of evidence-based measures provide an opportunity to assess the impact of these best practices on trauma outcomes in the real world.
In particular, the Leapfrog Group evaluates hospital adherence to a set of Safe Practices for Better Healthcare, endorsed by the National Quality Forum (NQF) as part of a national strategy to improve patient safety and health care quality. The Leapfrog Group was established in response to the Institute of Medicine report,5 To Err Is Human, by a coalition of large businesses seeking to redesign health care in the United States. The list of NQF safe practices includes computerized physician order entry (CPOE), intensivist staffing of intensive care units (ICUs), measures to ensure the adequacy and competence of the nursing workforce and the nonnursing direct care workforce, use of prevention measures for ventilator-associated pneumonia and central venous catheter bloodstream infections, and teamwork training.
There is a substantial body of literature6–10 supporting many of the safety practices included in the Leapfrog Safe Practices Survey (SPS). However, prior work has not established a strong association between hospital self-reported compliance with the Leapfrog process measures and mortality outcomes. Jha and colleagues11 reported lower in-hospital mortality rates for acute myocardial infarction but not for congestive heart failure or for pneumonia in hospitals that had started to implement CPOE and ICU physician staffing. A recent study12 based on the Nationwide Inpatient Sample did not demonstrate lower in-hospital mortality rates in hospitals with higher scores on the SPS. This study, however, was criticized by the NQF for focusing on mortality alone as opposed to other outcome measures, such as hospital-associated infection (HAI) rates, which may be more responsive to improved compliance with the NQF patient safety practices.13 In addition, by examining only the association between mortality and a composite safety score, this study may have missed potentially important associations between individual safety practices and mortality.
The association between trauma center certification and improved outcomes is well established.14,15 However, with the exception of ICU staffing16 and the Surgical Care Improvement Project infection control measures,17 there is little information as to whether specific patient safety practices are associated with improved trauma outcomes. Our goal in this study was to examine the association between hospital self-reported compliance with the NQF patient safety practices and both (1) in-hospital mortality and (2) HAIs in a nationally representative sample of level I and level II trauma centers. To address the possibility that an analysis based only on the aggregate score on the patient safety survey may hide the effect of specific individual safety practices, we separately examined the association between trauma outcomes and the composite score on the SPS and between trauma outcomes and each of the individual patient safety practices.
This analysis was conducted using data from the 2006 Healthcare Cost and Utilization Project Nationwide Inpatient Sample. The Nationwide Inpatient Sample is a 20% stratified sample of patients from nonfederal hospitals and is the largest all-payer inpatient database of US patients.18 The Nationwide Inpatient Sample includes information on patient demographic characteristics, admission source, ICD-9-CM diagnostic and injury codes, Agency for Healthcare Research and Quality comorbidity measures,19 in-hospital mortality, hospital characteristics, and hospital identifiers. The 2007 Leapfrog Hospital Quality and SPS data include information on the 30 individual NQF safe practices and a weighted composite safety score based on 27 safe practices (Table 1); ICU staffing, CPOE, and evidence-based referrals are not included in the composite scores. The American Hospital Association Annual Survey was used to obtain information on trauma center status. The University of Rochester School of Medicine Institutional Review Board (Rochester, New York) approved this study after expedited review.
The study sample consisted of patients admitted with a principal ICD-9-CM diagnosis of trauma (800–959.9), after excluding patients with burns (940–949) or unspecified injuries (959–959.9) and patients with the following isolated injuries: late effects of injury (905–909.9), superficial injuries (910–924.9), or foreign bodies (930–939.0). We excluded patients with missing demographic information on age, sex, or outcome (n=1162), those with missing E codes (external cause-of-injury codes) (n=8847), those with nontraumatic mechanisms (n=4504), those who were transferred out (n=1377), and those in hospitals that did not participate in the SPS (n=52 345). The final study cohort consisted of 42 417 patients in 58 level I and level II hospitals.
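The unconditional diagnosis-range exclusions above can be expressed as a simple screen on the principal diagnosis code. The sketch below is an illustrative reconstruction (the function name and string-based code handling are our own, not taken from the study); it omits the record-level exclusions for isolated late effects, superficial injuries, and foreign bodies, which require examining all of a patient's diagnoses rather than the principal code alone.

```python
def is_eligible_trauma_dx(code: str) -> bool:
    """Screen a principal ICD-9-CM diagnosis code against the cohort
    definition: trauma (800-959.9), excluding burns (940-949.x) and
    unspecified injuries (959-959.9)."""
    try:
        dx = float(code)          # e.g. "805.2" -> 805.2; "E812.0" fails
    except ValueError:
        return False              # E codes / V codes are not principal trauma codes
    if not 800.0 <= dx <= 959.9:
        return False              # outside the trauma range
    if 940.0 <= dx < 950.0:
        return False              # burns
    if 959.0 <= dx <= 959.9:
        return False              # injury, unspecified
    return True
```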
In this study, a patient was considered to have an HAI if he or she was coded as having (1) sepsis, (2) pneumonia, (3) a Staphylococcus infection, or (4) Clostridium difficile–associated disease. We used previously published criteria20–25 for identifying HAIs using ICD-9-CM codes (Table 2). Cases identified using these algorithms were assumed to represent HAIs because it is not likely that patients admitted with traumatic injuries would have preexisting infections.
The outcome variables of interest were in-hospital mortality and the occurrence of any HAI. Separate patient-level multivariate logistic regression models were estimated to examine the association between each of the outcome variables and (1) total score on the SPS, (2) ICU physician staffing, (3) CPOE, and (4) scores on the individual components of the SPS. We controlled for patient demographic characteristics (age and sex), injury severity, mechanism of injury, and comorbidities. We also controlled for hospital characteristics: teaching status, hospital ownership, and geographic region. Injury severity was coded using empirically derived estimates of injury severity based on the previously validated Trauma Mortality Prediction Model.2,26 The Agency for Healthcare Research and Quality comorbidity algorithm was used to code patient comorbidities.19 Fractional polynomial analysis was used to obtain the optimal specification for age.27 Robust variance estimators were used because observations for patients treated at the same trauma center may be correlated.28 Analyses using HAIs as the outcome variable were limited to patients with length of stay greater than 3 days because we assumed that patients who died or were discharged within 3 days would not have been hospitalized long enough to have developed an HAI.
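The modeling approach described above, a patient-level logistic regression with variance estimates that are robust to within-hospital correlation, can be sketched with a numpy-only implementation. This is a minimal illustration on synthetic data: the variable names, the data-generating process, and the Newton-Raphson fitting routine are our own and do not correspond to the study data or the Stata commands actually used.

```python
import numpy as np

def fit_logit_cluster(X, y, groups, iters=25):
    """Maximum-likelihood logistic regression (Newton-Raphson) with
    cluster-robust 'sandwich' standard errors, clustering on `groups`
    (here, a hospital identifier)."""
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    H_inv = np.linalg.inv(X.T @ (X * (p * (1.0 - p))[:, None]))
    score = X * (y - p)[:, None]             # per-patient score vectors
    meat = sum(np.outer(s, s) for s in
               (score[groups == g].sum(axis=0) for g in np.unique(groups)))
    se = np.sqrt(np.diag(H_inv @ meat @ H_inv))
    return beta, se

# synthetic example: 5000 patients clustered in 50 hospitals
rng = np.random.default_rng(1)
n = 5000
hospital = rng.integers(0, 50, n)            # cluster identifier
age_scaled = rng.normal(0.0, 1.0, n)         # e.g. a standardized covariate
X = np.column_stack([np.ones(n), age_scaled])
true_logit = -3.0 + 0.4 * age_scaled
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

beta, se = fit_logit_cluster(X, y, hospital)
print("aOR per unit:", np.exp(beta[1]))      # 95% CI: exp(beta[1] +/- 1.96 * se[1])
```

Exponentiating a coefficient and its confidence limits yields the adjusted odds ratios and CIs of the form reported in the Results.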
All statistical analyses were performed using Stata/SE/MP, version 11.0 (StataCorp, College Station, Texas). The performance of the logistic regression models was assessed using measures of discrimination (C statistic) and calibration (the Hosmer-Lemeshow statistic).
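Both model-performance measures named above can be computed directly from observed outcomes and predicted probabilities. The sketch below is our own minimal implementation, demonstrated on synthetic, well-calibrated predictions: the C statistic is the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event, and the Hosmer-Lemeshow statistic is a chi-square comparison of observed and expected event counts across risk-ordered groups.

```python
import numpy as np

def c_statistic(y, p):
    """C statistic (area under the ROC curve): probability that a random
    event outranks a random non-event, counting ties as half."""
    pos, neg = p[y == 1], p[y == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

def hosmer_lemeshow(y, p, g=10):
    """Hosmer-Lemeshow statistic: sort by predicted risk, split into g
    groups, and compare observed vs expected event counts."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    chi2 = 0.0
    for idx in np.array_split(np.arange(len(y)), g):
        obs, exp, n_g = y[idx].sum(), p[idx].sum(), len(idx)
        pbar = exp / n_g
        chi2 += (obs - exp) ** 2 / (n_g * pbar * (1.0 - pbar))
    return chi2

# demo on synthetic predictions that are calibrated by construction
rng = np.random.default_rng(7)
p = rng.uniform(0.01, 0.3, 2000)
y = rng.binomial(1, p)
c, hl = c_statistic(y, p), hosmer_lemeshow(y, p)
```

As the article notes, the Hosmer-Lemeshow statistic grows mechanically with sample size, so an "acceptable" value must be judged relative to n.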
Hospital characteristics are displayed in Table 3. Approximately half of the trauma centers were teaching hospitals, most were nonprofit, and the hospitals were nearly equally distributed across geographic regions. Compared with the trauma centers not in the study sample, the hospitals in the study were similar in size (299 vs 273 beds), were as likely to be teaching hospitals, and were more likely to be for-profit (Table 3). Most of the patients were male (59.1%), and the median age of the cohort was 47 years. The most frequent mechanism of injury was blunt trauma (47.5%), followed by low fall (18.2%) and motor vehicle accident (17.2%) (Table 4). The overall mortality for the patient sample was 3.15%.
The unadjusted mortality for hospitals in the lowest-scoring SPS quartile was 2.83% compared with 2.80% for hospitals in the highest quartile. Similarly, hospitals in the lowest SPS quartile had an unadjusted HAI rate of 3.12% compared with 3.10% for hospitals in the highest quartile. Tests for linear trend across SPS quartiles in unadjusted mortality (P=.69) and unadjusted HAI rate (P=.97) were not significant (Table 4).
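The article does not specify which trend test was used; one standard choice for testing linearity of proportions across ordered groups such as SPS quartiles is the Cochran-Armitage test, sketched here purely for illustration (the function and its default integer scores are our own assumptions).

```python
import numpy as np
from math import erf, sqrt

def trend_test(events, totals, scores=None):
    """Cochran-Armitage test for linear trend in proportions across
    ordered groups (e.g. SPS quartiles). Returns (z, two-sided P)."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    x = np.arange(1.0, len(events) + 1.0) if scores is None else np.asarray(scores, float)
    N = totals.sum()
    pbar = events.sum() / N
    T = (x * events).sum() - pbar * (x * totals).sum()
    var = pbar * (1.0 - pbar) * ((x**2 * totals).sum() - (x * totals).sum()**2 / N)
    z = T / sqrt(var)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p

# a strong upward trend vs a flat one (made-up counts)
z_up, p_up = trend_test([10, 20, 30, 40], [100, 100, 100, 100])
z_flat, p_flat = trend_test([25, 25, 25, 25], [100, 100, 100, 100])
```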
After controlling for potential patient- and hospital-level confounders, the total score on the SPS was not associated with either mortality (adjusted odds ratio [aOR], 0.92; 95% confidence interval [CI], 0.79–1.06) or HAI (1.03; 0.81–1.29; P=.25) (Figure 1 and Figure 2). Neither of the 2 patient safety practices not included in the SPS composite score—CPOE or ICU physician staffing—was associated with clinical outcome. Full implementation of CPOE was not associated with mortality (aOR, 1.03; 95% CI, 0.75–1.42; P = .86) or with a lower risk of HAI (0.94; 0.57–1.56; P=.82). Full implementation of ICU physician staffing was also not predictive of mortality (aOR, 1.13; 95% CI, 0.90–1.28; P=.30) or of HAI (1.04; 0.76–1.42; P=.81).
Only one of the individual patient safety practices in the SPS was associated with mortality. Disclosure of adverse events was predictive of lower mortality (aOR, 0.87; 95% CI, 0.80–0.95; P < .001). Pressure ulcer prevention (aOR, 1.35; 95% CI, 1.11–1.66; P=.003), prevention of wrong site surgery (1.17; 1.02–1.33; P=.02), and prevention of myocardial infarction (1.11; 1.01–1.22; P=.03) were associated with a greater likelihood of HAIs. Prevention of anticoagulation adverse events was associated with a lower risk of HAI (aOR, 0.89; 95% CI, 0.81–0.97; P=.009). Prevention of central line–associated bloodstream infection was not associated with mortality (aOR, 0.93; 95% CI, 0.84–1.03; P=.18) or with HAI (1.00; 0.83–1.20; P > .99). Prevention of ventilator-associated pneumonia was also not predictive of mortality (aOR, 1.00; 95% CI, 0.88–1.13; P=.96) or of HAI (1.04; 0.84–1.30; P=.70).
Finally, we also evaluated the association between participation in the SPS and outcome. We found that survey participation was not associated with lower mortality (aOR, 1.03; 95% CI, 0.86–1.23; P=.76) or with a lower risk of HAI (0.87; 0.68–1.10; P=.24).
Each of the models exhibited acceptable discrimination. The C statistic was 0.92 for the mortality models and 0.78 for the HAI models. The Hosmer-Lemeshow statistic ranged from 17.9 to 23.6 for the mortality models and from 12.5 to 30.7 for the HAI models. Model calibration was acceptable given the large size of the data sets and the well-recognized sensitivity of the Hosmer-Lemeshow statistic to sample size.29
Traditionally, physicians and hospitals have been entrusted with ensuring that patients receive high-quality care. However, physicians and hospitals were not responsible for demonstrating “that they were achieving acceptable levels of performance.”30(p21) It was simply assumed that they did so. With the release of the highly publicized Institute of Medicine report describing safety problems in US hospitals, the public has lost some of its confidence in the ability of organized medicine to regulate itself and to ensure high-quality care.30 According to the Institute of Medicine, “[In] its current form, habits, and environment, American health care is incapable of providing the public with the quality health care it expects and deserves.”31(p43) This faith has been replaced by a mandate to publicly report health care outcomes for hospitals and physicians and to incentivize quality using the power of federal funding to drive “value-based purchasing.”32 Hospital performance measures, such as the Centers for Medicare & Medicaid Services Hospital Compare report card and the SPS, are the centerpiece of efforts by the public and private sectors to increase transparency and accountability. However, the real value of performance measurement is to create an opportunity to transform hospitals into “learning laboratories” to examine the real-life benefit of compliance with recommended practices in order to discover “true” best practices and then to use this information to improve quality of care.
In this study of a nationally representative sample of level I and level II trauma centers, we did not find evidence that implementation of CPOE or ICU physician staffing was associated with a lower risk of mortality or of HAIs. Similarly, a hospital’s total score on the Safe Practices Survey was not predictive of a lower risk of mortality or of HAI. We did find, however, that a hospital disclosure policy for informing patients and families of systems failures or human errors leading to unanticipated outcomes was associated with lower mortality.
Two previous studies have examined whether Leapfrog performance measures are associated with better outcomes. In one study, Jha and colleagues11 found that CPOE and ICU physician staffing are associated with lower mortality in patients with acute myocardial infarction but not in patients with congestive heart failure or pneumonia. The main limitation of that study is that it used only the Elixhauser comorbidity algorithm to construct the risk-adjusted mortalities and may not have adequately adjusted for severity of disease. The Elixhauser algorithm was designed to adjust for comorbidities and not for severity of disease, and it includes a diagnosis related group screen that specifically excludes conditions that are related to the admission diagnosis related group.19 As a result of the diagnosis related group screen, cardiac conditions would not be included in the risk-adjustment models used to calculate risk-adjusted mortalities for patients admitted with acute myocardial infarction or congestive heart failure.
A second study, conducted by Kernisan and colleagues,12 did not find that the total score on the SPS was predictive of lower in-hospital mortality in a nationally representative sample of US hospitals. This study was criticized for reporting only the association between the survey score and inpatient mortality and not examining a potentially more relevant outcome measure, such as HAI.13 A second criticism was that only some of the safe practices are expected to directly affect mortality.13 Thus, examining only the association between the aggregate score and mortality may mask significant correlations between individual safe practices and outcome.
Other studies have shown a modest correlation between process measures, such as the use of aspirin and β-blockers in patients with acute myocardial infarction, and inpatient mortality.33–36 It is not clear why, with the exception of disclosure, none of the NQF safety practices measured in the SPS were associated with improved outcomes. Our negative findings may reflect the limitations of the SPS itself and not of the NQF safe practices. The extent to which a hospital’s self-reported compliance with NQF safe practices correlates with the actual adoption of safe practices is unknown.12 For process measures that reflect adherence to best practices in caring for individual patients (eg, the use of protocols to reduce ventilator-associated pneumonias), performance measures reflecting actual compliance (eg, percentage of patients receiving recommended care) are likely to be more accurate than a survey completed by the hospital chief executive. For structural measures that are best reported at the hospital level, such as ICU physician staffing and the use of CPOE, data auditing may be required to confirm data accuracy.
Aside from problems with the accuracy of the survey itself, the lack of correlation between self-reported adherence to the NQF safety practices and outcome may reflect selection bias. Some of the lower-performing hospitals may be more likely to adopt the NQF safety practices to improve their outcomes, potentially masking the benefit of adherence to best practices in a cross-sectional study such as ours. Further research using a longitudinal study design is necessary to determine whether improved performance on the SPS is associated with better outcomes over time.
Our study has some significant limitations. The most important limitation is that our ability to understand the effect of specific best practices on trauma outcomes is limited by the quality of the data in the SPS. Thus, our finding that ICU physician staffing, CPOE, and protocols to reduce the incidence of central line–associated bloodstream infections are not associated with improved trauma outcomes must be interpreted with caution.
A second limitation is that it is likely that HAIs are undercoded in administrative data. Previous work20 has validated the accuracy of administrative data for identifying cases of sepsis. Administrative data have high specificity for the identification of pneumonia and a sensitivity of approximately 50%.21 Although administrative data have been used to identify Staphylococcus infections and C difficile–associated disease, the extent of undercoding of these HAIs is not known.22,23 The undercoding of HAIs in our study may reduce the likelihood of detecting a significant correlation between patient safety practices and the incidence of HAIs.
A third limitation is that we were not able to include physiology information in our multivariate models. However, we controlled for injury severity using the Trauma Mortality Prediction Model. This injury severity model has been previously validated and shown to have excellent statistical performance,2,26 thus minimizing the possibility of omitted variable bias.
Finally, it is not possible to determine with certainty whether a case identified as an HAI represents a true complication or was instead present on admission. Despite the limitations of using administrative data, this study provides important insights on the potential effect of safety practices in trauma patients.
In summary, we were unable to detect evidence that trauma outcomes are improved in hospitals with better self-reported performance on the Leapfrog Hospital Survey. These findings do not, however, mean that adherence to the NQF patient safety practices does not result in improved outcomes. It is possible that self-reported measures of hospitalwide adherence to the NQF safety practices may not reliably capture actual clinical practice. Consideration should be given to adding those NQF patient safety practices considered most relevant to trauma outcomes to the American College of Surgeons Committee on Trauma Verification Program. Similarly, some of the process measures based on patient safety practices could become part of the required data collection in the American College of Surgeons Trauma Quality Improvement Program recently created to benchmark trauma care. In this way, actual compliance with the NQF patient safety practices could be more accurately measured. Using these data, we could then identify a core group of safety “best practices,” based on the NQF patient safety practices, for trauma care and could use this information to improve patient outcomes in trauma centers.
Funding/Support: This project was supported by grant R01 HS 16737 from the Agency for Healthcare Research and Quality and grant R01 NR01 0107 (Prevention of Nosocomial Infections and Cost-Effectiveness) from the National Institutes of Health.
Financial Disclosure: None reported.
Disclaimer: The views presented in this article are those of the authors and may not reflect those of the Agency for Healthcare Research and Quality or the National Institutes of Health.
Author Contributions: Dr Glance had full access to the data and takes responsibility for the accuracy of the data analysis. Study concept and design: Glance, Dick, and Osler. Acquisition of data: Glance. Analysis and interpretation of data: Glance, Dick, Osler, Meredith, Stone, Li, and Mukamel. Drafting of the manuscript: Glance and Dick. Critical revision of the manuscript for important intellectual content: Glance, Dick, Osler, Meredith, Stone, Li, and Mukamel. Statistical analysis: Glance, Dick, Osler, Li, and Mukamel. Obtained funding: Glance, Meredith, and Mukamel. Administrative, technical, and material support: Glance. Study supervision: Glance.