The purpose of publicly reporting outcomes measures is to illuminate the quality of care provided to patients across the country and, in particular, to examine care through the lens that is most meaningful to patients: patient outcomes. Our report, paralleling the 2010 release, reveals substantial variation in risk-standardized outcomes for patients admitted with AMI and HF and persistently high rates of these outcomes, particularly readmission, with distinct regional patterns to this variation. Documentation of this variation provides important evidence of continued room for improvement in care. Furthermore, we have demonstrated that hospitals’ performance is not static. Indeed, a number of hospitals have improved their performance such that they no longer have significantly higher RSMRs or RSRRs than the national rate; others have moved from rates that were not significantly different from the national norms to performance classified as significantly better than expected; and still others’ relative performance has declined.
Reducing high rates of readmission is a national priority, with policy efforts being initiated to reward better performance. In the recent health reform bill, the Patient Protection and Affordable Care Act, there is specific language linking readmission measurement to payment.14
Although the details are not yet clear, this policy direction indicates that there will be financial incentives for hospitals and their clinicians to focus on improving performance in this area. The increasing attention to measurement has been accompanied by a rapid increase in research on how to reduce readmission rates, with studies suggesting that reductions in readmissions of 15–20% are possible at many hospitals.15–16
Such reductions could lead to fewer disruptions for patients, many of whom currently experience an additional hospitalization soon after discharge, and would likely contribute to lower overall costs of care.
The updated measures presented on Hospital Compare this spring are based on data from hospitalizations from July 1, 2006 through June 30, 2009. Two-thirds of this cohort (the first two years) overlaps with the data presented last year. For this reason, we do not expect to see dramatic changes from last year’s release. Furthermore, no changes resulting from public reporting of RSRRs would yet be visible in this year’s rates, because public reporting of RSRRs began in July 2009 and the current data extend only through June 2009. Nonetheless, there are a few interesting findings when comparing this year’s results with those presented in last year’s report.1
First, the median RSMR for AMI has decreased modestly (from 16.6% in 2009 to 16.0% in 2010). Trends in improving AMI mortality have been noted in a number of recent publications17–18
and our report suggests that this improvement may have continued through the first half of 2009. Second, the regional patterns seen in last year’s results are broadly similar to the patterns shown in this report, but there are distinct HRRs where notable changes have occurred. For example, Odessa, TX, has moved from the middle quintile to the lowest quintile for AMI RSRR, while Charlottesville, VA, has moved from the middle quintile to the highest quintile for HF RSRR.
The other change from last year’s reported numbers is a small reduction in both the volume of cases and the number of hospitals with rates significantly different from the U.S. national rate. Compared with last year, there has been a small decrease in the total number of AMI and HF hospitalizations in the cohort, as well as a decrease in the median number of cases seen by each hospital (for AMI RSMR, median cases decreased from 53 in 2009 to 48 in 2010; for HF RSMR, 143 to 131; for AMI RSRR, 36 to 32; and for HF RSRR, 168 to 153). These small decreases are not due to any change in the approach to identifying eligible hospitalizations or in the inclusion/exclusion criteria. This minor decrease in case numbers could indicate small changes in the incidence of hospitalizations for these conditions, a shift in how hospitals assign primary diagnosis codes, or changing patterns in where patients obtain care. Associated with these volume changes is a reduction in the number of hospitals classified as having performance significantly different from the national rate; fewer hospitals are better than, and fewer worse than, the national rates. Smaller case volume may pull the risk-standardized rates of hospitals toward the middle of the performance distribution.19
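The pull of small case volumes toward the middle of the distribution can be illustrated with a brief sketch. This is not the estimation procedure used for the reported measures (which relies on hierarchical logistic regression models); it is a simplified empirical-Bayes-style weighted average, with an assumed `prior_strength` pseudo-count, showing why a hospital with few cases is shrunk toward the national rate while a high-volume hospital is not.

```python
def shrunken_rate(observed_events, n_cases, national_rate, prior_strength=100):
    """Simplified shrinkage estimate (illustrative only, not the CMS method):
    a weighted average of the hospital's raw rate and the national rate,
    where the hospital's own data get more weight as case volume grows.
    prior_strength is an assumed pseudo-count, not a published parameter."""
    raw_rate = observed_events / n_cases
    weight = n_cases / (n_cases + prior_strength)
    return weight * raw_rate + (1 - weight) * national_rate

national = 0.20  # illustrative national readmission rate

# A small hospital with a 50% raw rate (15 of 30) is pulled strongly
# toward the national rate...
small = shrunken_rate(observed_events=15, n_cases=30, national_rate=national)

# ...while a large hospital with the same 50% raw rate (500 of 1000)
# keeps an estimate close to its own data.
large = shrunken_rate(observed_events=500, n_cases=1000, national_rate=national)
```

Under this toy model, the small hospital's estimate lands near the national rate despite an extreme raw rate, which mirrors why shrinking cohort sizes can move fewer hospitals into the "significantly different" categories.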
There are several limitations to consider in this report. First, the reported rates reflect only the experience of FFS Medicare patients and cannot necessarily support quality inferences for other patients. Second, although we have used a robust risk-adjustment approach, we cannot be sure that differences between hospitals in RSMRs and RSRRs are due purely to quality differences; there may also be other sources of variation, such as differences in coding practices. However, the measures have been validated against chart-based models, thereby minimizing the likelihood that coding differences are the main source of variation. Third, the two-year overlap between this year’s and last year’s data limits interpretation of trends. Finally, each year, as part of measure maintenance, we re-examine the methodology used to estimate RSMRs and RSRRs and have made minor refinements to improve the measures and incorporate any changes in coding.12
However, no changes made in this year’s maintenance would be expected to affect these results in a substantive way.
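As context for the risk-adjustment discussion above, the general shape of a risk-standardized rate can be sketched as follows. This is a deliberately simplified rendering of the published measure methodology, not the exact implementation: a hospital's "predicted" outcome count is computed using its own estimated hospital effect, its "expected" count uses the average hospital effect for the same case mix, and their ratio is scaled by the national unadjusted rate. The function name and the numbers are illustrative assumptions.

```python
def risk_standardized_rate(predicted, expected, national_rate):
    """Illustrative sketch of a risk-standardized rate (RSMR/RSRR):
    the ratio of predicted to expected outcome counts, scaled by the
    national unadjusted rate. The hierarchical models that produce
    'predicted' and 'expected' are omitted here."""
    return (predicted / expected) * national_rate

# A hospital predicted to have more events than expected for its case
# mix receives a rate above the national rate; fewer than expected
# would place it below.
rate = risk_standardized_rate(predicted=12.0, expected=10.0, national_rate=0.16)
```

Because both counts are adjusted for the same patient characteristics, the ratio isolates the hospital-level contribution, which is what allows comparison across hospitals with very different case mixes.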
Examination of the most recent outcomes measurement of the nation’s hospitals reveals continued variation in the quality of care provided to patients with AMI and HF. This year’s publicly reported measure update supports the need for continued efforts to reduce rates of rehospitalization and mortality after AMI and HF and provides evidence that such improvements are possible.