National attention has increasingly focused on readmission as a target for quality improvement. We present the development and validation of a model approved by the National Quality Forum and used by the Centers for Medicare & Medicaid Services for hospital-level public reporting of risk-standardized readmission rates for patients discharged from the hospital after an acute myocardial infarction.
We developed a hierarchical logistic regression model to calculate hospital risk-standardized 30-day all-cause readmission rates for patients hospitalized with acute myocardial infarction. The model was derived using Medicare claims data for a 2006 cohort and validated using claims and medical record data. The unadjusted readmission rate was 18.9%. The final model included 31 variables and had discrimination ranging from an observed 30-day readmission rate of 8% in the lowest predicted decile to 32% in the highest decile and a C statistic of 0.63. The 25th and 75th percentiles of the risk-standardized readmission rates across 3890 hospitals were 18.6% and 19.1%, with fifth and 95th percentiles of 18.0% and 19.9%, respectively. The odds of all-cause readmission for a hospital 1 SD above average were 1.35 times those of a hospital 1 SD below average. Hospital-level adjusted readmission rates developed using the claims model were similar to rates produced for the same cohort using a medical record model (correlation, 0.98; median difference, 0.02 percentage points).
This claims-based model of hospital risk-standardized readmission rates for patients with acute myocardial infarction produces estimates that are excellent surrogates for those produced from a medical record model.
Acute myocardial infarction (AMI) is a high-risk condition and a common cause of admission to hospitals in the United States. Traditional assessments of hospital performance in the care of patients with AMI have focused on short-term mortality rates, which have significantly improved over the past decade.1 However, there is increasing interest in readmission rates as an indicator of the quality of hospital care and the transition from inpatient to outpatient status.2,3 Although heart failure is the cardiovascular condition that has received the most attention for readmission, patients with an AMI are also frequently readmitted within 30 days of discharge.4 The Medicare Payment Advisory Commission called for the development of readmission measures and highlighted AMI as 1 of 7 conditions that account for nearly 30% of potentially preventable readmissions in the 15-day window after initial hospital discharge.2 The healthcare reform bill, recently signed into law, establishes the Hospital Readmission Reduction Program and pilot payment reforms that will create financial incentives for hospitals to reduce readmissions.5 However, few studies have focused on this risk, and until recently, little attention has been directed toward hospital performance in this area. A recent review indicated that there were no published models developed to compare hospital performance.4
We present a model, endorsed by the National Quality Forum, to estimate hospital-specific readmission rates for Medicare patients hospitalized with an AMI. We developed and validated the model using Medicare administrative claims data and tested whether estimates from the claims model could serve as a surrogate for the results that could be obtained from a comprehensive medical record model. The approach incorporated key attributes for publicly reported outcomes measures as described in an American Heart Association scientific statement.6 The model adds another cardiovascular condition to the previously published heart failure 30-day readmission measure that is also publicly reported.7 This model is now being used by the Centers for Medicare & Medicaid Services (CMS) to publicly report 30-day readmission rates for hospitals in the United States.
For administrative model development, we used Medicare claims data from 2006, including Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files. Part A inpatient data refer to claims paid for Medicare inpatient hospital care, skilled nursing facility care, some home health agency services, and hospice care. Hospital outpatient refers to Medicare claims paid for the facility component of surgical or diagnostic procedures, emergency department care, and other noninpatient services performed in a hospital outpatient department or ambulatory surgical/diagnostic center. Part B data refer to Medicare claims for the services of physicians (regardless of setting) and other outpatient care, services, and supplies. For purposes of this project, Part B services include only face-to-face encounters between a care provider and patient. Thus, laboratory tests, medical supplies, or other ambulatory services were excluded.
For administrative model validation, we used medical record data from the Cooperative Cardiovascular Project (CCP) initiative and the corresponding administrative data. The CCP initiative included >200 000 admissions to nongovernmental, acute care hospitals in the United States and Puerto Rico from 1994 to 1995.8-9 The CCP database includes information about patient history, presentation, examination findings, and laboratory and test results.
The patients in CCP were matched to the Medicare enrollment database to determine survival and, where applicable, the date of death. Corresponding medical records were abstracted by 2 clinical data abstraction centers (DynKePRO [York, PA] and FMAS Corporation [Rockville, MD]), and the clinical data were used to confirm the diagnosis of AMI.
To qualify for inclusion in this study cohort, patients with AMI must have had a principal hospital discharge diagnosis of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code 410.xx, excluding those with 410.x2 (AMI, subsequent episode of care) at the discharging hospital. We defined an index admission as an admission in which we evaluated the subsequent 30 days after discharge for a readmission. A single patient may have >1 index admission in the study year. We excluded patients aged <65 years, those who died during their index hospitalization, and those admitted and discharged on the same day. If a patient had ≥1 AMI admission within 30 days of discharge from an index AMI admission, we did not consider the additional AMI admissions as index admissions; they were considered as readmissions. Thus, any AMI admission was either an index admission or a readmission but not both. We considered admissions with transfer as a single episode of care, with the codes to define the variables in the model derived from the discharging hospital. The readmission was always attributed to the discharging hospital. To maximize our ability to risk adjust and to identify readmissions, we restricted the cohort to patients enrolled in fee-for-service Medicare Parts A and B for 12 months before their AMI hospitalization who continued in the fee-for-service plan for at least 30 days after discharge.
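The index-versus-readmission rule above can be sketched as follows. This is an illustrative simplification in Python, not the measure's actual implementation; the function name and input format are ours, and it covers only the rule that an AMI admission within 30 days of an index discharge counts as a readmission, never as a new index admission.

```python
from datetime import date

def classify_admissions(stays):
    """Label a single patient's AMI stays, in admission order, as index
    admissions or readmissions. `stays` is a sorted list of
    (admit_date, discharge_date) tuples."""
    labels = []
    last_index_discharge = None
    for admit, discharge in stays:
        if (last_index_discharge is not None
                and (admit - last_index_discharge).days <= 30):
            # within 30 days of an index discharge: a readmission,
            # and it does not open a new 30-day window
            labels.append("readmission")
        else:
            labels.append("index")
            last_index_discharge = discharge
    return labels
```

Under this rule, an AMI admission more than 30 days after the most recent index discharge starts a new episode, even if it falls within 30 days of an intervening readmission.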
The outcome was 30-day all-cause readmission, defined as the occurrence of at least 1 hospitalization in any acute care hospital in the United States for any cause within 30 days of discharge after an index hospitalization. We identified readmissions from the hospital claims data submitted to CMS. We excluded hospitalizations that were likely related to an elective revascularization that was planned at the time of the index admission (eg, to perform percutaneous coronary intervention [PCI] on another vessel or another location in the same vessel or to perform coronary artery bypass graft [CABG] surgery after AMI). Because it is difficult to identify these readmissions with certainty, we only counted readmissions with nonelective PCI or CABG. To operationalize this strategy, we excluded readmissions with a PCI or CABG procedure that followed an index AMI hospitalization, except for readmissions with the following principal discharge diagnoses that are not consistent with an elective readmission: heart failure, AMI, unstable angina, arrhythmia, and cardiac arrest (ie, readmissions with these diagnoses and a PCI or CABG procedure were counted as readmissions). We include the codes that we considered to be associated with nonelective readmissions in the online-only Data Supplement.
These so-called “staged procedures” may be medically indicated in some patients; however, there are currently no definitive clinical guidelines that address this practice, which varies across providers and hospitals. Additional analyses conducted to quantify the number of readmissions that were excluded showed that in 2006, the 30-day readmission rate with PCI or CABG was only 2.8% (5708/200 750) and among these, 81.4% (4648/5708) did not have any of the 5 predefined discharge diagnoses. Of 4171 hospitals, 2750 (65.9%) had no readmission for PCI or CABG without the predefined cardiac diagnosis codes, 1361 (32.6%) had 1 to 10 such cases, and 60 (1.4%) had >10.
We developed candidate variables for the readmission model from the claims codes using the data sources described. To assemble clinically coherent codes into candidate variables, we used the publicly available CMS Hierarchical Condition Categories to group codes into 189 Condition Categories (CCs).10-11 We used the April 2008 version of the ICD-9-CM to assign codes to CCs using a map maintained by CMS and posted at www.qualitynet.org. We defined a CC as present for a given patient if it was coded in any of the hospital inpatient, outpatient, or physician claims data sources in the prior 12 months, including the index admission. A physician team identified candidate variables and differentiated CC variables that, when coded as secondary diagnosis codes during the index hospitalization, could represent either comorbid conditions on admission or complications of care (eg, urinary tract infection). To avoid including potential complications as comorbidities, we did not code these as potential risk factors if they appeared only as secondary diagnosis codes for the index hospitalization and not on claims in the prior year.
For derivation, we used a randomly selected half of the 2006 cohort for the development sample. To inform variable selection, we performed a modified approach to stepwise logistic regression. The developmental data set was used to create 500 bootstrap samples. For each sample, we ran a logistic stepwise regression, with both backward and forward selection, that included the 103 candidate variables. The P value to enter was set at 0.001 and the P value to exit at 0.001. The results were summarized to show the percentage of times that each candidate variable was significantly associated with readmission (at the P<0.001 level) in each of the 500 repeated samples (eg, 70% would mean that the candidate variable was found significant at P<0.001 in 70% of the 500 samples). We also assessed the direction and magnitude of the regression coefficients for clinical sensibility.
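The bootstrap tallying step can be sketched as below. This is a hedged illustration: the paper ran full backward/forward stepwise logistic regression in each bootstrap sample, whereas the `univariate_screen` stand-in here is a deliberately simplified selection rule, and all function names are ours.

```python
import numpy as np

def selection_frequencies(X, y, select_fn, n_boot=500, seed=0):
    """Tally how often each candidate variable is selected across
    bootstrap samples. `select_fn(Xb, yb)` returns a boolean mask over
    columns; in the paper this role was played by backward/forward
    stepwise logistic regression at P<0.001."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    counts = np.zeros(k)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample rows with replacement
        counts += select_fn(X[idx], y[idx])
    return counts / n_boot

def univariate_screen(Xb, yb, z_cut=3.29):  # |z| > 3.29 ~ two-sided P < 0.001
    """Toy stand-in for stepwise selection: flag binary variables whose
    readmission-rate difference (present vs absent) exceeds a z threshold."""
    flags = []
    for j in range(Xb.shape[1]):
        present, absent = yb[Xb[:, j] == 1], yb[Xb[:, j] == 0]
        if len(present) == 0 or len(absent) == 0:
            flags.append(False)
            continue
        p = yb.mean()
        se = np.sqrt(p * (1 - p) * (1 / len(present) + 1 / len(absent)))
        flags.append(abs(present.mean() - absent.mean()) > z_cut * se)
    return np.array(flags)
```

A variable selected in, say, 70% of the 500 samples would receive a frequency of 0.70, which is then compared against the cutoff described below.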
The clinician team reviewed these results and, with the exception of obesity, which was excluded because of a lack of clinical coherence, retained all other risk-adjustment variables above a 70% cutoff (18 variables) because these demonstrated a relatively strong association with readmission and were clinically relevant. Variables selected in <70% of the bootstrap samples were also included in the final model if they (1) were markers for end of life/frailty (decubitus ulcer; dementia and senility; protein-calorie malnutrition; hemiplegia, paraplegia, paralysis, and functional disability; metastatic cancer and acute leukemia; stroke); (2) were on the same clinical spectrum as a variable above the 70% cutoff and were clinically important for patients with AMI (cerebrovascular disease, acute coronary syndromes, angina pectoris, prior AMI, history of PCI or CABG, asthma); or (3) came from hospitals that may have a disproportionate number of patients with a certain condition (cancer).
We conducted analyses of model performance using a generalized linear model with a logit link function. To assess model performance at the patient level, we calculated the receiver operating characteristic area under the curve (AUC), explained variation as measured by the generalized R2 statistic, and the observed average readmission rates in the lowest and highest deciles based on predicted readmission probabilities.12 We also compared performance with a null model, a model that adjusted for age and sex only, and a model that included all of the candidate variables.
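The two patient-level discrimination summaries used here, the area under the curve and the observed rates by predicted decile, can be computed as sketched below. This is an illustrative Python implementation (names ours), using the rank-sum identity for the AUC rather than the paper's SAS procedures.

```python
import numpy as np

def auc_mann_whitney(y, p):
    """AUC via the rank-sum (Mann-Whitney) identity; assumes no ties in p."""
    order = np.argsort(p)
    ranks = np.empty(len(p))
    ranks[order] = np.arange(1, len(p) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def decile_rates(y, p, n_deciles=10):
    """Observed outcome rate within each decile of predicted risk."""
    edges = np.quantile(p, np.linspace(0, 1, n_deciles + 1))
    idx = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, n_deciles - 1)
    return np.array([y[idx == d].mean() for d in range(n_deciles)])
```

The spread between `decile_rates(...)[0]` and `decile_rates(...)[-1]` corresponds to the 8% versus 32% contrast reported for the derivation sample.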
Given the clustering of admissions within hospitals and that hospitals were our unit of inference, we estimated risk-standardized readmission rates using hierarchical generalized linear models.13 This modeling strategy accounts for within-hospital correlation of the observed readmissions and reflects our assumption that after adjusting for patient risk and sampling variability, the remaining variation is due to hospital quality.
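In symbols, a random-intercept logistic model of this kind can be written as follows (the notation is ours, not the paper's):

```latex
% Y_{ij} = 1 if patient j at hospital i is readmitted within 30 days;
% x_{ij} = that patient's risk-adjustment covariates.
\operatorname{logit}\Pr(Y_{ij}=1) = \alpha_i + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta},
\qquad \alpha_i \sim N(\mu, \tau^{2})
```

The hospital-specific intercepts \(\alpha_i\) capture the within-hospital correlation, and their variance \(\tau^2\) reflects between-hospital variation after patient-level adjustment.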
We next calculated risk-standardized hospital-specific readmission rates. These rates were obtained as the ratio of the number of predicted to expected readmissions multiplied by the national unadjusted rate. The predicted number of readmissions in each hospital was estimated using its own patient mix and its own hospital-specific intercept. The expected number of readmissions in each hospital was estimated using the hospital’s own patient mix and the average hospital-specific intercept based on all hospitals in our sample. This calculation is a form of indirect standardization.
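The predicted-over-expected calculation just described can be sketched as below. This is an illustrative Python version (names and inputs ours) of the indirect standardization, applied to one hospital at a time.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def risk_standardized_rate(X, beta, alpha_hospital, alpha_average, national_rate):
    """Indirect standardization for one hospital: predicted readmissions
    (hospital's own intercept) divided by expected readmissions (average
    intercept), scaled by the national unadjusted rate.
    X: covariate matrix for this hospital's index admissions."""
    predicted = sigmoid(alpha_hospital + X @ beta).sum()
    expected = sigmoid(alpha_average + X @ beta).sum()
    return predicted / expected * national_rate
```

A hospital whose intercept equals the average intercept gets exactly the national unadjusted rate; an above-average intercept yields a rate above it.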
We compared the model’s performance in the derivation sample with its performance in the remaining half of the 2006 claims data and the full year of 2005 claims data. The model was recalibrated in each validation set. We calculated indices that quantify overfitting for each validation data set, each time calculating a risk score using the regression estimates from our derivation model. For each subject in the validation data set, the risk score was calculated as the sum of the product of the individual’s covariates with the estimated regression coefficients obtained from the derivation model. We next estimated a logistic regression model using the observed readmission outcome for each individual in the validation sample as a function of an intercept and the risk score. The overfitting statistics corresponded to the estimated intercept and slope of the new model; an intercept far from 0 and a slope far from 1 indicate overfitting. We also examined whether model performance varied for the following important subgroups of patients: older age, sex, race/ethnicity, and urban/rural hospital.
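The calibration regression behind these overfitting indices can be sketched as below: fit the validation outcome on an intercept and the derivation-model risk score, then inspect the two coefficients. This is an illustrative Newton-Raphson fit in Python (names ours), not the paper's SAS code.

```python
import numpy as np

def fit_logistic(y, score, iters=25):
    """Logistic regression of outcome on a single risk score via
    Newton-Raphson. Returns (intercept, slope): an intercept near 0 and
    a slope near 1 suggest little overfitting of the derivation model."""
    X = np.column_stack([np.ones_like(score), score])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        grad = X.T @ (y - p)                       # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])  # observed information
        b = b + np.linalg.solve(hess, grad)
    return b[0], b[1]
```

Applied to a validation set in which the derivation risk score is well calibrated, the fitted slope should sit close to 1 and the intercept close to 0.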
We developed a separate medical record model of readmission risk using CCP data. We also linked the patients in the CCP cohort to their Medicare claims data, including claims from 1 year before the index hospitalization, so that we could calculate the risk-standardized hospital readmission rates in this cohort separately using medical record and claims data models. We conducted this analysis at the hospital level. To examine the relationship between the risk-standardized rates obtained from medical record and administrative data models, we estimated a linear regression model describing the association between the 2 rates, weighting each hospital by the number of index hospitalizations, and calculated the intercept and the slope of this equation. A slope close to 1 and an intercept close to 0 would provide evidence that the risk-standardized hospital readmission rates from the medical record and claims models were similar. We also calculated the difference between the hospital risk-standardized readmission rates from the 2 models.
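The volume-weighted comparison of the two sets of hospital rates can be sketched as below; this is an illustrative weighted least-squares fit in Python (names ours), not the paper's actual analysis code.

```python
import numpy as np

def weighted_line(x, y, w):
    """Weighted least-squares fit y ~ a + b*x; returns (a, b).
    x, y: risk-standardized rates from the two models for each hospital;
    w: number of index hospitalizations at each hospital."""
    xm = np.average(x, weights=w)
    ym = np.average(y, weights=w)
    b = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
    a = ym - b * xm
    return a, b
```

If the two models produced identical hospital rates, the fit would return a slope of 1 and an intercept of 0, the benchmark against which the observed 0.939 and 0.011 are judged.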
Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc; Cary, NC) statistical software. We estimated the hierarchical models using the GLIMMIX procedure in SAS. The Human Investigation Committee at the Yale University School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication. The authors had full access to the data and take responsibility for their integrity. All authors have read and agreed to the manuscript as written.
The initial sample contained 279 152 hospitalization records of Medicare beneficiaries. The final sample included 200 750 index AMI hospitalizations from 4171 hospitals after applying exclusions for age <65 years (9.7% of initial sample), in-hospital death (9.2%), transfer to another acute care facility (8.4%), and same-day discharge (1.7%) (Table 1). The derivation sample consisted of 100 465 hospitalizations in 3890 hospitals with at least 1 hospitalization, with an unadjusted 30-day readmission rate of 18.9%. The top principal discharge diagnoses for these readmissions were heart failure (17%), AMI (7%), coronary atherosclerosis (4%), pneumonia (3%), and acute renal failure (3%). The observed readmission rate ranged from 0% to 100% across these hospitals with 25th, 50th, and 75th percentiles of 0%, 17.4%, and 27.0%, respectively. The median annual volume of Medicare AMI hospitalizations was 15 (25th, 75th percentile, 5, 59). In 7.4% of admissions, the patient died within the 30 days postdischarge. In 5.2% of admissions, the patient died without being readmitted, and in 2.2% of admissions, the patient was readmitted and died within the 30 days postdischarge.
The claims model included 31 variables (2 demographic, 10 cardiovascular, and 19 comorbidity) (Table 2). The mean age of the cohort was 78.7 years, with 50.5% women and 11.3% nonwhite patients. The mean observed readmission rate in the derivation data set ranged from 8% in the lowest decile of predicted readmission rates to 32% in the highest predicted decile, an absolute difference of 24 percentage points. The AUC was 0.63, compared with 0.50 for a null model, 0.54 for a model with age and sex only, and 0.63 for a model that included all the candidate variables.
Figure 1 shows the distributions of the risk-standardized 30-day readmission rates. The fifth percentile was 18.0%, and the 95th percentile was 19.9%. The 25th and 75th percentiles were 18.6% and 19.1%, respectively. The mean risk-standardized rate was 18.9%. The odds of all-cause readmission for a hospital that was 1 SD above average were 1.35 times those of a hospital that was 1 SD below average.
In the remaining 50% of AMI readmissions from 2006 and all the 2005 data, the regression coefficients and SEs were similar to those for the derivation data set. The performance was also similar in the validation data sets (Table 3).
Initial CCP data contained 212 744 hospitalization records of Medicare beneficiaries. The final CCP validation sample included 130 944 hospitalizations from 4178 hospitals. The crude 30-day readmission rate was 19.96%.
In 5.2% of admissions, the patient died within the 30 days postdischarge. In 3.0% of admissions, the patient died without being readmitted, and in 2.1% of admissions, the patient was readmitted and died within the 30 days postdischarge. The readmission rate was 41.1% among the 5.2% of admissions in which the patient died within 30 days compared with 18.8% for admissions in which patients survived 30 days.
The medical record comparison model included 45 variables (Table 4). The AUC was 0.58, and the observed readmission rate ranged from 13% in the lowest predicted decile to 29% in the highest (Table 5). In the same cohort of 130 944 hospitalizations, the administrative model had an AUC of 0.59 and observed readmission rates ranging from 13% in the lowest predicted decile to 31% in the highest predicted decile.
The estimated hospital-specific standardized readmission rates derived from each model are shown in Figure 2. The slope of the weighted regression line between chart- and claims-based hospital readmission rates was 0.939 (SE, 0.0005), and the intercept was 0.011 (SE, 0.0001). The correlation coefficient of the standardized readmission rates from the 2 models was 0.98 (SE, 0.0006). The Spearman rank correlation coefficient was 0.9835. The median difference between the models in the hospital-specific risk-standardized readmission rates was 0.02 percentage points (25th percentile, −0.10; 75th percentile, 0.13; 10th percentile, −0.31; 90th percentile, 0.28).
The model shows comparable discrimination by patient age, sex, and race/ethnicity as well as by hospital urban/rural status (Figure 3). Compared with the overall AUC of 0.63, the AUCs are comparable when the sample is restricted to men (0.64), women (0.61), white patients (0.64), nonwhite patients (0.63), rural hospitals (0.57), nonrural hospitals (0.63), patients aged ≥85 years (0.56), and patients aged <85 years (0.65).
We present a hierarchical logistic regression model for 30-day readmission after AMI hospitalization that is based on administrative data and is suitable for public reporting. The model is a strong surrogate for a similar model based on medical record data. The approach uses a group of >15 000 ICD-9-CM codes that are in the public domain and yield clinically coherent variables. The model does not adjust for variables that may represent complications rather than comorbidities. There is a standardized period of follow-up, and the study sample is appropriately defined. The statistical approach takes into account the clustering of patients within hospitals and differences in sample size across hospitals.
AMI was chosen because it is among the most common principal hospital discharge diagnoses given for Medicare beneficiaries; in 2005, it was the fourth most expensive condition billed to Medicare.14 Readmission rates following discharge for AMI are high; for example, rates of all-cause readmission at 30 days have been found to range from 11.3%15 to 28.1%.16
Readmission rates are influenced by the quality of inpatient and outpatient care, the availability and use of effective disease management programs, and the bed capacity of the local healthcare system. Some of the variation in readmissions may be attributable to delivery system characteristics.17 Additionally, interventions during and after a hospitalization can be effective in reducing readmission rates in geriatric populations generally18-20 and for patients with AMI specifically.21-24 Moreover, such interventions can be cost saving.19-21 Tracking readmissions also emphasizes improvement in care transitions and care coordination. Although discharge planning is required by Medicare as a condition of hospital participation, transitional care focuses more broadly on “hand-offs” of care from one setting to another and may have implications for quality and costs.25
The patient-level discrimination and the explained variation of the model are consistent with those observed in the development of the heart failure 30-day all-cause readmission measure, which was recently approved by the National Quality Forum.26 The model performs as expected, given that the risk of readmission likely depends much more on the quality of care and on system characteristics than on patient severity and comorbidity. Readiness for discharge, appropriate medications, and a proper transition to the outpatient setting may be even more important for readmission than for mortality. Results of intervention studies underscore this potential.21-24
Our approach to risk adjustment is tailored to and appropriate for a publicly reported outcome measure. Adjusting for patient characteristics improved model performance: the C statistic of 0.63 is higher than that of a model with age and sex only (0.54) and similar to that of a model with all candidate variables. However, we excluded covariates for which we would not want to adjust in a quality measure, such as potential complications, certain patient demographics (eg, race, socioeconomic status), and patient admission path and discharge disposition (eg, admitted from or discharged to a skilled nursing facility). These characteristics may be associated with readmission and thus could increase the model's ability to predict patient readmissions. However, these variables may be related to quality or supply factors that should not be included in an adjustment that seeks to control for patient clinical characteristics while illuminating important quality differences. For example, if hospitals with a higher share of a certain ethnic group have higher readmission rates, then including ethnic group in the model will attenuate this difference and obscure differences that are important to identify. Although a high C statistic usually is desirable, an extremely high C statistic for a model including only patient characteristics would imply that the outcome is largely determined by patients and that physicians and hospitals do not matter. We believe that much of the unexplained variation derives from the latent variable of quality. Hospitals and medical communities have had no incentive to focus on improving the transition from inpatient to outpatient status. Another explanation is the design of the model, in which we adjust for factors that are present or known at admission but do not include any factors that describe events in the hospital, as these could be related to quality. As a result, the predictors and the start of follow-up are separated in time.
The model has some notable features. First, we use all-cause readmission because, from the patient perspective, readmission for any cause is a key concern; moreover, limiting the measure to AMI-related readmissions could make it susceptible to gaming. We recognize that it would be preferable to report preventable readmissions, but that approach is not advisable given that there are no codes that would definitively identify a readmission as preventable. Determining whether a readmission was preventable might require a root-cause analysis with a thorough evaluation of medical records and interviews with clinicians. In addition, if certain codes were considered to signal a nonpreventable readmission, there would be an incentive to use these codes preferentially and perhaps influence the measurement of quality. Consequently, we considered all-cause readmission the best alternative.
Another notable feature is our attempt to exclude staged revascularization procedures, which represent planned admissions for an interventional procedure after an initial procedure during the index hospitalization. There is debate about the indications for these hospitalizations, but we opted to exclude them because some experts consider them an extension of the index admission. The measure does not currently exclude readmissions with other procedures that could have been planned, such as pacemaker insertion. Future iterations of the measure may expand the number of procedures considered as potentially planned.
The agreement between the estimates from the claims model and those from the medical record model suggests that despite the known limitations of administrative codes, the proposed model can stand in place of a model with more detailed clinical information for hospital-level profiling. The AUC and the explained variation of the model are modest, but the purpose is to profile hospital performance based on patient status on admission, not to predict outcomes for individual patients. Important considerations for profiling relate to the degree of between-hospital variation and the amount of information each hospital provides. After adjustment for patient presenting factors, the odds of readmission within 30 days are 35% higher for a patient discharged alive from a hospital 1 SD below average quality than from a hospital 1 SD above average quality. The differences in risk-standardized rates among hospitals are substantial.
The approach has several limitations. First, the approach to calculating risk-standardized readmission rates has been validated only with Medicare data; however, about 60% of patients hospitalized with an AMI are aged ≥65 years. Additionally, we were unable to test the model in a Medicare managed care population, for which data are not currently available. In addition, we did not use time-to-event analysis because of the difficulty of fitting such a model to a large national data set; fortunately, there is no strong relationship between mortality and readmission. Finally, our approach focuses on 30-day readmission and not death. If a patient died within 30 days postdischarge without a readmission, we coded the outcome as no readmission, recognizing that this counts such a death as a no-event outcome.
In conclusion, this article presents a model to produce hospital-specific risk-standardized estimates of 30-day readmission rates after discharge for an AMI. This model is being used to publicly report the variation in readmission rates among hospitals across the United States.
We thank Debra Chromik, Angela Merrill, Maureen O’Brien, Geoffrey Schreiner, and Dima Turkmani. CMS reviewed and approved the use of its data for this work and approved submission of the manuscript.
Sources of Funding The analyses upon which this publication is based were performed under contract number HHSM-500–2005-CO001C, entitled “Utilization and Quality Control Quality Improvement Organization for the State (Commonwealth) of Colorado,” funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services nor does the mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.
Dr Krumholz is supported by grants R01 HL081153–03 and U01 HL105270–01 from the National Heart, Lung, and Blood Institute; grant R01 HS016929–02 from the Agency for Healthcare Research and Quality and the United Health Foundation; and grant 20070407 from The Commonwealth Fund.
Disclosures Dr Krumholz reports that he is a consultant to UnitedHealthcare. Dr Normand reports that she is funded by the Massachusetts Department of Public Health to monitor the quality of care following cardiac surgery or PCI. The other authors report no conflicts.