Our analysis of pediatric mortality data from 42 children's hospitals revealed the following: (1) the O/E mortality rate ratio for any given hospital is surrounded by a substantial zone of statistical imprecision (which is even greater for smaller hospitals; data not shown); (2) hospital rankings based on O/E ratios have a large degree of imprecision regarding where any particular hospital (except for those at the very top or bottom of the rankings) should truly fall in the rank ordering; and (3) with the use of methods that accounted for those uncertainties, only 2 (4.8%) of the 42 hospitals could be identified as potential outliers with higher or lower mortality rates.
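The imprecision around a single hospital's O/E ratio can be illustrated with a standard approximation (not the method used in this study): if the observed death count is treated as Poisson, Byar's approximation gives an approximate 95% confidence interval for the O/E (standardized mortality) ratio. The counts below are hypothetical.

```python
import math

def oe_ratio_ci(observed, expected, z=1.96):
    """O/E mortality ratio with an approximate 95% CI, treating the
    observed count as Poisson (Byar's approximation). Illustrative only."""
    ratio = observed / expected
    o = observed
    lower = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3 / expected
    o1 = o + 1
    upper = o1 * (1 - 1 / (9 * o1) + z / (3 * math.sqrt(o1))) ** 3 / expected
    return ratio, lower, upper

# Two hypothetical hospitals with the same O/E ratio of 1.25:
# a larger one (50 observed vs 40 expected deaths) ...
big = oe_ratio_ci(50, 40)
# ... and a smaller one (5 observed vs 4 expected deaths).
small = oe_ratio_ci(5, 4)
```

At the larger hospital the interval is roughly (0.93, 1.65); at the smaller one, roughly (0.40, 2.92). Both intervals span 1.0, so neither hospital can be distinguished from expected performance, and the smaller hospital's zone of imprecision is far wider, consistent with point (1) above.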
Our findings should be interpreted with several caveats in mind. First, our sample includes only freestanding children's hospitals, which may exhibit less variation in mortality than other hospitals; thus, our findings may not generalize to all hospitals. Second, we could not identify cases in which pediatric patients were admitted to hospitals with the specific purpose of receiving end-of-life care; although the number of such cases is small, quality-of-care ranking systems should take great care not to penalize hospitals for providing this type of care.23
Finally, in the absence of widely accepted statistical methods to compare the performance of a given hospital against the expected range of performance of a group of comparison hospitals (and not simply against the overall mean of that sample), we used bootstrap resampling techniques; although we consider this approach to be appropriate and analytically robust, more methodologic research is warranted.
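The general idea behind bootstrap-based ranking uncertainty can be sketched as follows. This is a simplified parametric variant (resampling each hospital's observed death count as Poisson and re-ranking on each replicate), not the authors' exact procedure, and the hospital counts are hypothetical.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplicative method; adequate for the modest counts here.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def rank_intervals(observed, expected, n_boot=2000, seed=42):
    """For each hospital, return a 95% bootstrap interval for its rank
    (1 = lowest O/E ratio) across n_boot resampled data sets."""
    rng = random.Random(seed)
    n = len(observed)
    ranks = [[] for _ in range(n)]
    for _ in range(n_boot):
        # Resample each observed count, recompute O/E ratios, and re-rank.
        ratios = [poisson(o, rng) / e for o, e in zip(observed, expected)]
        order = sorted(range(n), key=lambda i: ratios[i])
        for rank, i in enumerate(order, start=1):
            ranks[i].append(rank)
    intervals = []
    for r in ranks:
        r.sort()
        lo = r[int(0.025 * n_boot)]
        hi = r[int(0.975 * n_boot) - 1]
        intervals.append((lo, hi))
    return intervals
```

With hospitals whose true O/E ratios are close together, the resulting rank intervals are wide and heavily overlapping, which is the pattern described in point (2): only hospitals at the extremes occupy stable positions in the ordering.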
Despite these caveats, our findings should sound a note of caution in a health care environment in which many have advocated public reporting of quality measures and patient outcomes, both to provide patients and health plans with appropriate information for selecting “the best” providers and health care institutions and to stimulate quality improvement efforts by providers fearing market loss. Increasingly, public reports comparing and often rank-ordering hospitals are published by magazines, private companies, the Department of Health and Human Services, and even hospitals themselves on their Web sites. Hospitals face mounting pressure to release data, including mortality rates, for use in such reports and are concerned about how such rankings shape their public image.
Evidence suggests that publicly releasing performance data stimulates quality improvement initiatives at the hospital level but does not consistently lead to direct improvements in quality of care.24
Consumers and purchasers rarely seek out performance data, and many do not understand or trust it.25
Furthermore, public performance reporting may have unintended consequences, such as leading physicians or hospitals to avoid sicker patients in an attempt to improve their quality rankings.26,27
A limited number of studies have concluded that the publication of performance data was associated with improved health outcomes.25
For example, New York State has mandated public reporting of risk-adjusted mortality rates for adult coronary intervention procedures for more than 1 decade and, during that period, unadjusted rates of death after such procedures have decreased significantly.8
Additional potential benefits of public reporting include accelerating the adoption of “best practices” and establishing data sets for critical outcomes research.8,28
At first glance, a hospital's unadjusted mortality rate may seem to be a straightforward measure of how well that hospital cares for patients, and rates adjusted for patient case mix and severity of illness may be thought to offer an even more reliable basis for comparisons. As demonstrated, however, rankings based on adjusted mortality measures are statistically uncertain and are liable to overinterpretation; the vast majority of children's hospitals in this study exist in a zone of essentially indistinguishable mortality rates. Therefore, although the use of this particular quality-of-care indicator either within or across hospitals may prompt quality improvement efforts, the data seem to be an inexact guide; metaphorically, hospital mortality rates and rankings may supply a stiff wind but a poor compass.