Patient outcomes provide a critical perspective on quality of care. The Centers for Medicare and Medicaid Services (CMS) publicly reports 30-day risk-standardized mortality rates (RSMRs) and risk-standardized readmission rates (RSRRs) for patients hospitalized with acute myocardial infarction (AMI) and heart failure (HF). We provide a national perspective on hospital performance for the 2010 release of these measures.
The RSMRs and RSRRs are calculated from Medicare claims data for fee-for-service Medicare beneficiaries, 65 years or older, hospitalized with AMI or HF between July 1, 2006 and June 30, 2009. The rates are calculated using hierarchical logistic modeling to account for patient clustering, and are risk-adjusted for age, sex, and patient comorbidities. The median RSMR for AMI was 16.0% and for HF was 10.8%. Both measures had a wide range of hospital performance, with an absolute difference between hospitals at the 5th versus 95th percentile of 5.2% for AMI and 5.0% for HF. The median RSRR for AMI was 19.9%, and for HF was 24.5% (5th–95th percentile difference: 3.9% for AMI, 6.7% for HF). Distinct regional patterns were evident for both measures and both conditions.
High RSRRs persist for AMI and HF and clinically meaningful variation exists for RSMRs and RSRRs for both conditions. Our results suggest continued opportunities for improvement in patient outcomes for HF and AMI.
The federal government has identified cardiovascular conditions as a priority area for the public reporting of hospital-based outcomes measures. In June 2007, the Centers for Medicare & Medicaid Services (CMS) began publicly reporting 30-day risk-standardized mortality rates (RSMRs) for patients hospitalized with acute myocardial infarction (AMI) and heart failure (HF) for the nation’s hospitals. Last year CMS expanded the reporting of outcomes measures by publicly reporting 30-day risk-standardized readmission rates (RSRRs) for these conditions.1–2
Outcomes measures extend performance assessment beyond the long-standing efforts to characterize hospital quality by measuring delivery of processes of care. The process measures focus on subsets of patients who are ideal candidates for specific strategies and assess important, yet limited, aspects of the care of these patients.3 Many difficult-to-measure processes of care can have an important influence on a patient’s clinical course. The outcomes measures can provide a broader assessment of the net effectiveness of care and convey information about the overall quality of care provided at an institution.4 Prior work has documented considerable variation in 30-day RSMRs and RSRRs among the nation’s hospitals, suggesting that there may be substantial opportunities for improvement.1, 5–7 The variation suggests that many adverse events could be averted if performance moved toward what is now being achieved by the top institutions in the country.
In this report we update information about the performance of the nation’s hospitals on the publicly reported outcomes measures for AMI and HF based on the 2010 release. This report complements the CMS reports for individual hospitals on the Hospital Compare Web site and updates a similar report from last year.1–2 Our objectives are to: 1) provide summary information on the publicly reported measures and the range of hospital performance, 2) display the geographic variation in the rates, and 3) report on changes in the number of hospitals that are significantly better or worse than the U.S. national rate. We report on data used to produce the publicly reported outcomes measures, representing hospitalizations from July 1, 2006 through June 30, 2009. CMS also reports 30-day risk-standardized mortality and readmission for pneumonia; the 2010 reporting results for those measures are described elsewhere.8
The cohorts for all 4 measures consist of hospitalizations for fee-for-service (FFS) Medicare patients who are ≥65 years old and who have been enrolled in FFS Medicare for the 12 months before the index hospitalization (the hospital admission being measured for the outcome). The cohorts include discharges that occurred during the three-year period from July 1, 2006 to June 30, 2009. The cohorts are defined by identifying hospitalizations for each condition based on patients’ principal discharge diagnosis. The defining diagnoses for each condition and details of the inclusion and exclusion criteria for the cohorts are available in a number of publicly available technical reports and published studies.5–7, 9–12 For patients with multiple hospitalizations for the same condition during a single year, one randomly selected hospitalization per year per eligible patient is included for the mortality measures. For the readmission measures a patient may have multiple index admissions, but admissions within 30 days of discharge from an index admission are not considered additional index admissions.
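The index-admission rule for the readmission measures can be sketched in a few lines. This is an illustrative simplification with hypothetical day-numbered stays, not the CMS production logic; the function name and data layout are assumptions:

```python
# Illustrative sketch (not CMS production code) of the readmission
# index-admission rule: an admission that begins within 30 days of
# discharge from a prior index admission counts as a potential
# readmission outcome, not as an additional index admission.

def select_index_admissions(stays):
    """stays: list of (admit_day, discharge_day) tuples, sorted by admit_day."""
    index_stays = []
    last_index_discharge = None
    for admit, discharge in stays:
        if last_index_discharge is not None and admit - last_index_discharge <= 30:
            # Within a prior index stay's 30-day window: treated as a
            # readmission, not as a new index admission.
            continue
        index_stays.append((admit, discharge))
        last_index_discharge = discharge
    return index_stays

# Hypothetical stays: the day-15 admission falls 10 days after the day-5
# discharge, so only the first and third stays qualify as index admissions.
print(select_index_admissions([(0, 5), (15, 20), (60, 65)]))
# → [(0, 5), (60, 65)]
```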
The outcomes measures use CMS administrative claims data and enrollment data. Index admissions and readmissions are identified from inpatient claims. The measures use inpatient, outpatient department, and physician claims from the year prior to the index admission to identify patient risk factors.
For the mortality measures we count death from any cause, in any setting, within 30 days of admission for the index hospitalization as an outcome. We ascertain date of death from the CMS enrollment file or the discharge status on the inpatient claim. For readmission we count re-hospitalization for any cause to any acute care hospital within 30 days of discharge from the index hospitalization as an outcome. For the mortality measures we attribute the outcome to the hospital to which a patient is initially admitted, even if the patient is subsequently transferred. For the readmission measures we attribute the outcome to the hospital that ultimately discharges a patient to a non-acute care setting (e.g., home, skilled nursing facility). For the AMI readmission measure we do not count planned admissions for percutaneous coronary interventions or coronary artery bypass grafting as readmissions.10 Readmissions to observation status or to non-acute units such as rehabilitation are also not counted.
Hierarchical logistic regression models are used to estimate hospital-level 30-day all-cause RSRRs and RSMRs for each condition. In brief, the approach takes into account the hierarchical structure of the data to account for patient clustering within hospitals. Each model includes age, sex, and selected clinical covariates, as well as a hospital-specific random-effects intercept. Comorbidities from the index admission that could represent complications of care are not included in the risk adjustment unless they are also present in the 12 months prior to admission. (See technical reports for a list of comorbidities that are included as risk adjustment variables.)9–11 The RSMR or RSRR is calculated as the ratio of the number of “predicted” outcomes to the number of “expected” outcomes (death or readmission), multiplied by the national unadjusted rate of the given outcome. For each hospital, the “numerator” of the ratio is the number of deaths/readmissions within 30 days predicted on the basis of the hospital’s performance with its observed case mix, and the “denominator” is the number of deaths/readmissions expected on the basis of performance of the nation’s “average” hospital with this hospital’s case mix.
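The ratio calculation can be illustrated numerically. This is a minimal sketch with hypothetical counts; in practice the predicted and expected totals come from the fitted hierarchical models:

```python
# Minimal sketch of the risk-standardized rate calculation:
# RSMR (or RSRR) = ("predicted" / "expected") * national unadjusted rate.
# The counts below are hypothetical, not from the CMS models.

def risk_standardized_rate(predicted, expected, national_rate):
    """predicted: outcomes predicted from this hospital's own performance
    (including its random-effects intercept) with its observed case mix;
    expected: outcomes expected if the "average" hospital treated the
    same case mix."""
    return (predicted / expected) * national_rate

# A hospital predicted to have 22 deaths where the average hospital would
# be expected to have 20, against a 16% national unadjusted rate:
print(f"{risk_standardized_rate(22.0, 20.0, 0.16):.1%}")
# → 17.6%  (a ratio above 1 yields a rate above the national rate)
```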
To describe the geographic distribution of the risk-standardized measures we identified the Hospital Referral Region (HRR) for each hospital based on the definition of HRRs produced by the Dartmouth Atlas of Health Care project.13
For the distribution of the RSMRs and RSRRs and the presentation of RSMRs and RSRRs at the HRR level (geographic distribution), we calculated the mean and percentiles by weighting each hospital’s value by the inverse of the variance of the hospital’s estimated rate, where the variance is calculated from the bootstrap distribution. Hospitals with larger sample sizes, and therefore more precise estimates, carry more weight in the average. To determine whether a hospital or HRR is significantly different than the national rate, we assessed whether the hospital’s or HRR’s 95% interval estimate for the RSMR/RSRR overlapped with the national crude mortality or readmission rate. For hospitals, this information is used to categorize hospitals on Hospital Compare as “better than the U.S. national rate,” “worse than the U.S. national rate,” or “no different than the U.S. national rate.” We calculated the number of hospitals that changed category from last year’s publicly reported data to this year’s. For hospitals with fewer than 25 cases in the 3-year period, no category (or rate) is reported on Hospital Compare, and they are thus excluded from our analysis of changes in hospital categorizations. (Hospitals with fewer than 25 cases are, however, included in the distributions and HRR RSMRs and RSRRs reported below.)
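The inverse-variance weighting used for these summaries can be sketched as follows, with hypothetical rates and bootstrap variances (not the production code):

```python
# Sketch of an inverse-variance weighted mean: each hospital's rate is
# weighted by 1/variance, so more precisely estimated (typically
# higher-volume) hospitals contribute more. Inputs are hypothetical.

def inverse_variance_mean(rates, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

# One high-volume hospital (small bootstrap variance) and two low-volume
# hospitals: the weighted mean sits close to the precise estimate.
rates = [0.16, 0.20, 0.12]
variances = [0.0001, 0.01, 0.01]
print(round(inverse_variance_mean(rates, variances), 4))
# → 0.16
```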
All analyses were done with SAS version 9.1 (SAS Institute Inc, Cary, NC). We created the HRR maps using ArcGIS version 9.3 (ESRI, Redlands, California). This work was approved by the Yale University Human Investigation Committee.
For AMI the number of eligible admissions was about 550,000 over the 3-year period (Tables 1 and 2). For the HF mortality measure there were over 1 million eligible admissions in the 3-year period. For HF readmission there was a slightly greater number of included hospitalizations because a given patient can have more than one eligible index admission evaluated for readmission. The measures include data from approximately 4,500 hospitals, and the median number of cases over the 3 years for each hospital for the mortality measures was 48 for AMI and 131 for HF. For the readmission measures the median number of admissions was 32 for AMI and 153 for HF.
The median RSMR for AMI was 16.0% (range 10.3%–24.6%) and there was an absolute difference of 5.2% in RSMRs for hospitals between the 5th and 95th percentiles. The median RSMR for HF was 10.8% (range 6.6%–18.2%) and there was an absolute difference of 5.0% in RSMRs for hospitals between the 5th and 95th percentiles (Table 1, Figures 1 and 2).
The median RSRR for AMI was 19.9%. The distribution of RSRRs for AMI is narrower than for RSMRs (range 15.3%–26.3%) and there was an absolute difference of 3.9% between hospitals in the 5th compared with the 95th percentile. For HF RSRRs the median rate was 24.5% and the range was 17.3%–32.4%. There was a 6.7% difference in RSRRs for HF between hospitals in the 5th and 95th percentile (Table 1, Figures 3 and 4).
As shown in Figure 5, the areas with the highest AMI RSMRs are clustered in the southern United States, with a few additional high AMI RSMR regions found in isolated HRRs in upper New York and Vermont, Michigan, Wisconsin, and western states. For HF (Figure 6), similarly, there are clusters of high RSMRs in the southern United States, but fewer such regions than for AMI. There are more HRRs in the highest quintile for average HF RSMR in the Midwest and western states as compared with AMI. HRRs with lower RSMRs (better performance) are found predominantly in the Northeast and Midwest for both AMI and HF, with scattered additional areas of low RSMR for HF in western states. Tables 2 and 3 list those HRRs that have significantly higher or lower rates than the national average for HRRs.
Figures 7 and 8 show the regional patterns for the readmission rates. For both AMI and HF, the areas with the highest RSRRs are almost exclusively in the eastern, southeastern, and midwestern states; very few areas in the western states are in the highest quintile for average RSRR, except for a small pocket of high AMI RSRRs in southern Nevada, California, and Arizona. HRRs with lower RSRRs are found predominantly in western states.
For all 4 measures a portion of hospitals changed categories from last year’s reporting of the measures to this year’s report. For AMI RSMR, 29 hospitals that had been categorized as no different than the U.S. national rate last year are now categorized as better than the national rate. Nearly half (65/131) of hospitals that had been better than the U.S. national rate in the 2009 reporting are now no different; the rest remain better than the national rate. Overall there was a small decrease in the number of hospitals that were significantly different than the U.S. national rate: 36 fewer hospitals are better than the U.S. national rate in the 2010 data compared with 2009, and 9 fewer hospitals are categorized as worse than the national rate.
For HF RSMRs, 61 hospitals have moved from being no different to better than the national rate, and 75 of 138 that had been categorized as better than the U.S. national rate last year are now no different. Similarly, there are fewer hospitals that are worse than the U.S. national rate in 2010 compared with 2009.
For the RSRRs the pattern was very similar, with a small decrease in the number of hospitals in both the “better than” and “worse than” categories. For AMI RSRR, 7 fewer hospitals are better than and 7 fewer hospitals are worse than the U.S. national rate compared with last year’s results. For HF, 33 fewer hospitals (of 180) are better than the U.S. national rate and 40 fewer hospitals (of 233) are worse than the U.S. national rate compared with last year. No hospitals moved from being significantly worse than the U.S. national rate to being significantly better, or vice versa, for either measure. For all 4 measures there was a 2–3% increase in the number of hospitals with fewer than 25 cases.
The purpose of publicly reporting outcomes measures is to illuminate the quality of care provided to patients across the country and, particularly, to examine care through the lens that is most meaningful to patients: patient outcomes. Our report, paralleling the 2010 release, reveals substantial variation in risk-standardized outcomes for patients admitted with AMI and HF and persistently high rates of these outcomes, particularly readmission, with distinct regional patterns to this variation. Documentation of the variation in risk-standardized outcomes provides important evidence of continued room for improvement in care. Furthermore, we have demonstrated that hospitals’ performance is not static. Indeed, a number of hospitals have improved their performance such that they no longer have significantly higher RSMRs or RSRRs than the national rate; others have moved from rates that were not significantly different from the national norms to significantly better than expected performance; and still others’ relative performance has declined.
Reducing high rates of readmission is a national priority, with policy efforts being initiated to reward better performance. In the recent health reform bill, the Patient Protection and Affordable Care Act, there is specific language linking readmission measurement to payment.14 Although the details are not yet clear, this policy direction indicates that there will be financial incentives for hospitals and their clinicians to focus on improving performance in this area. The increasing attention to measurement has been accompanied by a rapid increase in research on how to reduce readmission rates, with studies suggesting that reductions in readmissions of 15–20% are possible at many hospitals.15–16 Such reductions could lead to fewer disruptions for patients, many of whom are currently experiencing an additional hospitalization soon after hospital discharge, and will likely contribute to lower overall costs of care.
The updated measures presented on Hospital Compare this spring are based on data from hospitalizations from July 1, 2006 through June 30, 2009. Two thirds of this cohort (the first two years) overlaps with the data presented last year. For this reason we do not expect to see dramatic changes from last year’s release. Furthermore, no changes as a result of public reporting of RSRRs would yet be visible in this year’s rates, since public reporting of RSRRs began in July 2009 and the current data extend only through June 2009. Nonetheless, there are a few interesting findings when comparing this year’s results with those presented in last year’s report.1 First, the median RSMR for AMI has decreased modestly (from 16.6% in 2009 to 16.0% in 2010). Trends of improving AMI mortality have been noted in a number of recent publications,17–18 and our report suggests that this improvement may have continued through the first half of 2009. Second, the regional patterns seen in last year’s results are broadly similar to the patterns shown in this report, but there are distinct HRRs where notable changes have occurred. For example, Odessa, TX has gone from the middle quintile to the lowest quintile for AMI RSRR, while Charlottesville, VA has moved from the middle quintile to the highest quintile for HF RSRR.
The other change from last year’s reported numbers is a small reduction both in the volume of cases and in the number of hospitals with rates significantly different than the U.S. national rate. Compared with last year there has been a small decrease in the total number of AMI and HF hospitalizations in the cohort, as well as a decrease in the median number of cases seen by each hospital (for AMI RSMR, median cases decreased from 53 in 2009 to 48 in 2010; for HF RSMR, 143 to 131; for AMI RSRR, 36 to 32; and for HF RSRR, 168 to 153). These small decreases are not due to any change in the approach to identifying eligible hospitalizations or the inclusion/exclusion criteria. This minor decrease in case numbers could indicate small changes in the incidence of hospitalizations for these conditions, a shift in how hospitals assign principal diagnosis codes, or changing patterns in where patients obtain care. Associated with these volume changes is also a reduction in the number of hospitals that are classified as having significantly different performance than the national rate; fewer hospitals are better than and fewer are worse than the national rates. Smaller case volume may pull the risk-standardized rates of hospitals toward the middle of the performance distribution.19
There are several limitations to consider in this report. First, the reported rates reflect only the experience of FFS Medicare patients and cannot necessarily support quality inferences for other patients. Second, although we have used a robust risk-adjustment approach, we cannot be sure that differences between hospitals in RSMRs and RSRRs are purely due to quality differences; there may also be other sources of variation, such as differences in coding practices. However, the measures have been validated against chart-based models, minimizing the likelihood that coding differences are the main source of variation. Third, the two-year overlap between this year’s and last year’s data limits interpretation of trends. Finally, each year, as a part of measure maintenance, we re-examine the methodology used to estimate RSMRs and RSRRs and have made minor refinements to improve the measures and incorporate any changes in coding.12 However, no changes made in this year’s maintenance would be expected to affect these results in a substantive way.
Examination of the most recent outcomes measurement of the nation’s hospitals reveals continued variation in the quality of care provided to patients with AMI and HF. This year’s publicly-reported measure update supports the need for continued efforts to reduce rates of rehospitalization and mortality after AMI and HF and provides evidence that such improvements are possible.
We thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research Inc for data and analytic support and gratefully acknowledge the analytic support of Changqin Wang and Jinghong Gao at YNHHS/Yale CORE.
The analyses upon which this publication is based were performed under Contract Number HHSM-500-2008-0025I (0001), entitled “Measure & Instrument Development and Support (MIDS)-Development and Re-evaluation of the CMS Hospital Outcomes and Efficiency Measures,” and Contract Number HHSM-500-2008-00020I (0001), entitled “Production and Implementation of Hospital Outcome and Efficiency Measures,” funded by CMS, Department of Health and Human Services. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services. The authors assume full responsibility for the accuracy and completeness of the ideas presented.
Dr. Ross is currently supported by the National Institute on Aging (K08 AG032886) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award.
Dr. Bernheim, Jacqueline Grady, Zhenqiu Lin, Yun Wang, Yongfei Wang, Shantal V. Savage, Kanchana R. Bhat, Elizabeth Drye, and Harlan Krumholz all work under contract with CMS to develop and maintain performance measures. Dr. Merrill works under contract with CMS to produce and implement the outcomes measures. Dr. Han and Dr. Rapp are employed by CMS.