Traditionally, physicians and hospitals have been entrusted with ensuring that patients receive high-quality care. However, physicians and hospitals were not responsible for demonstrating “that they were achieving acceptable levels of performance.”30(p21) It was simply assumed that they did so. With the release of the highly publicized Institute of Medicine report describing safety problems in US hospitals, the public has lost some of its confidence in the ability of organized medicine to regulate itself and to ensure high-quality care.30
According to the Institute of Medicine, “[In] its current form, habits, and environment, American health care is incapable of providing the public with the quality health care it expects and deserves.”31(p43) This faith has been replaced by a mandate to publicly report health care outcomes for hospitals and physicians and to incentivize quality using the power of federal funding to drive “value-based purchasing.”32
Hospital performance measures, such as the Centers for Medicare & Medicaid Services Hospital Compare report card and the SPS, are the centerpiece of efforts by the public and private sectors to increase transparency and accountability. However, the real value of performance measurement is that it creates an opportunity to transform hospitals into “learning laboratories”: to examine the real-life benefit of compliance with recommended practices, to discover “true” best practices, and then to use this information to improve the quality of care.
In this study of a nationally representative sample of level I and level II trauma centers, we did not find evidence that implementation of CPOE or ICU physician staffing is associated with a lower risk of mortality or of HAIs. Similarly, a hospital’s total score on the SPS was not predictive of a lower risk of mortality or of HAI. We did find, however, that a hospital disclosure policy for informing patients and families of systems failures or human errors leading to unanticipated outcomes is associated with lower mortality.
Two previous studies have examined whether Leapfrog performance measures are associated with better outcomes. In one study, Jha and colleagues11 found that CPOE and ICU physician staffing are associated with lower mortality in patients with acute myocardial infarction but not in patients with congestive heart failure or pneumonia. The main limitation of that study is that it used only the Elixhauser comorbidity algorithm to construct risk-adjusted mortality rates and therefore may not have adequately adjusted for severity of disease. The Elixhauser algorithm was designed to adjust for comorbidities, not for severity of disease, and it includes a diagnosis related group screen that specifically excludes conditions related to the admission diagnosis related group.19
As a result of this screen, cardiac conditions would not be included in the risk-adjustment models used to calculate risk-adjusted mortality rates for patients admitted with acute myocardial infarction or congestive heart failure.
A second study, conducted by Kernisan and colleagues,12 did not find that the total score on the SPS was predictive of lower in-hospital mortality in a nationally representative sample of US hospitals. That study was criticized for reporting only the association between the survey score and inpatient mortality and not examining a potentially more relevant outcome measure, such as HAI.13
A second criticism was that only some of the safe practices are expected to directly affect mortality.13
Thus, examining only the association between the aggregate score and mortality may mask significant correlations between individual safe practices and outcome.
Other studies have shown a modest correlation between process measures, such as the use of aspirin and β-blockers in patients with acute myocardial infarction, and inpatient mortality.33–36
It is not clear why, with the exception of disclosure, none of the NQF safe practices measured in the SPS was associated with improved outcomes. Our negative findings may reflect limitations of the SPS itself rather than of the NQF safe practices. The extent to which a hospital’s self-reported compliance with the NQF safe practices correlates with actual adoption of those practices is unknown.12
For process measures that reflect adherence to best practices in caring for individual patients (eg, the use of protocols to reduce ventilator-associated pneumonias), performance measures reflecting actual compliance (eg, percentage of patients receiving recommended care) are likely to be more accurate than a survey completed by the hospital chief executive. For structural measures that are best reported at the hospital level, such as ICU physician staffing and the use of CPOE, data auditing may be required to confirm data accuracy.
Aside from problems with the accuracy of the survey itself, the lack of correlation between self-reported adherence to the NQF safe practices and outcome may reflect selection bias. Lower-performing hospitals may be more likely to adopt the NQF safe practices in an effort to improve their outcomes, potentially masking the benefit of adherence to best practices in a cross-sectional study such as ours. Further research using a longitudinal design is needed to determine whether improved performance on the SPS is associated with better outcomes over time.
Our study has some significant limitations. The most important limitation is that our ability to understand the effect of specific best practices on trauma outcomes is limited by the quality of the data in the SPS. Thus, our finding that ICU physician staffing, CPOE, and protocols to reduce the incidence of central line–associated bloodstream infections are not associated with improved trauma outcomes must be interpreted with caution.
A second limitation is that HAIs are likely undercoded in administrative data. Previous work20 has validated the accuracy of administrative data for identifying cases of sepsis. Administrative data have high specificity for the identification of pneumonia but a sensitivity of only approximately 50%.21
Although administrative data have been used to identify Staphylococcus infections and C difficile–associated disease, the extent of undercoding of these HAIs is not known.22,23
The undercoding of HAIs in our study may reduce the likelihood of detecting a significant correlation between patient safety practices and the incidence of HAIs.
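As a simple illustration of this point (our sketch, not an analysis from the study): suppose a hospital’s true HAI rate is $p$ and the coding sensitivity is $s \approx 0.5$, with near-perfect specificity. The observed rate and the observed difference between two hospitals are then approximately

$$
\hat{p}_{\mathrm{obs}} \approx s\,p, \qquad
\hat{p}_{1,\mathrm{obs}} - \hat{p}_{2,\mathrm{obs}} \approx s\,(p_1 - p_2).
$$

Absolute differences between hospitals are thus shrunk by the factor $s$, and the smaller number of detected events reduces statistical power, even though relative measures such as $p_1/p_2$ are roughly preserved when the undercoding is nondifferential across hospitals.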
A third limitation is that we were not able to include physiologic information in our multivariable models. However, we controlled for injury severity using the Trauma Mortality Prediction Model, which has been previously validated and shown to have excellent statistical performance,2,26 thus minimizing the possibility of omitted variable bias.
Finally, it is not possible to determine with certainty whether a case identified as an HAI represents a true complication or a condition present on admission. Despite the limitations of administrative data, this study provides important insights into the potential effect of safety practices in trauma patients.
In summary, we were unable to detect evidence that trauma outcomes are better in hospitals with higher self-reported performance on the Leapfrog Hospital Survey. These findings do not, however, mean that adherence to the NQF patient safety practices does not improve outcomes. Self-reported measures of hospitalwide adherence to the NQF safe practices may not reliably capture actual clinical practice. Consideration should be given to adding the NQF patient safety practices considered most relevant to trauma outcomes to the American College of Surgeons Committee on Trauma Verification Program. Similarly, some of the process measures based on these safety practices could become part of the required data collection in the American College of Surgeons Trauma Quality Improvement Program, recently created to benchmark trauma care. In this way, actual compliance with the NQF patient safety practices could be measured more accurately. Using these data, we could then identify a core group of safety “best practices” for trauma care, based on the NQF patient safety practices, and use this information to improve patient outcomes in trauma centers.