JAMA. Author manuscript; available in PMC Mar 20, 2013.
Published in final edited form as:
PMCID: PMC3603349
NIHMSID: NIHMS429222
Risk Prediction Models for Hospital Readmission: A Systematic Review
Devan Kansagara, MD,1,2,3 Honora Englander, MD,3 Amanda Salanitro, MD, MS, MSPH,4,5 David Kagen, MD,2,3 Cecelia Theobald, MD,5 Michele Freeman, MPH,1 and Sunil Kripalani, MD, MSc5
1VA Evidence-based Synthesis Program, Veterans Affairs Medical Center, Portland, Oregon
2General Internal Medicine, Veterans Affairs Medical Center, Portland, Oregon
3Department of Internal Medicine, Oregon Health & Science University, Portland, Oregon
4Geriatric Research, Education and Clinical Center, VA Tennessee Valley Healthcare System
5Section of Hospital Medicine, Vanderbilt University, Nashville, TN
Corresponding author: Devan Kansagara, MD, Portland Veterans Affairs Medical Center, Mailcode: RD71, 3710 SW US Veterans Hospital Rd, Portland, OR 97239. Phone: (503) 220-8262 ext. 51838. Fax: (503) 721-1461. kansagar@ohsu.edu
Context
Predicting hospital readmission risk is of great interest to identify which patients would benefit most from care transition interventions, as well as to risk-standardize readmission rates for purposes of hospital comparison.
Objective
To summarize validated readmission risk prediction models, describe their performance, and assess suitability for clinical or administrative use.
Data Sources
MEDLINE, CINAHL, and Cochrane Library through March 2011, EMBASE through August 2011, and hand search of reference lists.
Study Selection
Dual review to identify English language studies of prediction models tested with medical patients, with both derivation and validation cohorts.
Data Extraction
Data were extracted on the population, setting, sample size, follow-up interval, readmission rate, model discrimination and calibration, type of data used, and timing of data collection.
Results
Of 7,843 citations reviewed, 30 studies of 26 unique models met criteria. The most common outcome used was 30-day readmission; only one model specifically addressed preventable readmissions. Fourteen models relying on retrospective administrative data could potentially be used for standardization of readmission risk and hospital comparisons; of these, nine were tested in large US populations and had poor discriminative ability (c-statistics 0.55 – 0.65). Seven models could potentially be used to identify high-risk patients for intervention early during a hospitalization (c-statistics 0.56 – 0.72), and five could be used at hospital discharge (c-statistics 0.68 – 0.83). Six studies compared different models in the same population and two of these found that functional and social variables improved model discrimination. Though most models incorporated medical comorbidity and prior utilization variables, few examined variables associated with overall health and function, illness severity, or social determinants of health.
Conclusions
Most current readmission risk prediction models, whether designed for comparative or clinical purposes, perform poorly. Though in certain settings such models may prove useful, efforts to improve their performance are needed as use becomes more widespread.
An increasing body of literature attempts to describe and validate hospital readmission risk prediction tools. Interest in such models has grown for two reasons. First, transitional care interventions may reduce readmissions among chronically ill adults.1-3 Readmission risk assessment could be used to help target the delivery of these resource-intensive interventions to the patients at greatest risk. Ideally, models designed for this purpose would provide clinically relevant stratification of readmission risk and give information early enough during the hospitalization to trigger a transitional care intervention, many of which involve discharge planning and begin well before hospital discharge. Second, there is interest in using readmission rates as a quality metric. Recently, the Centers for Medicare & Medicaid Services (CMS) began using readmission rate as a publicly reported metric, with plans to lower reimbursement to hospitals with excess risk-standardized readmission rates.4 Valid risk adjustment methods are required for calculation of risk-standardized readmission rates which could, in turn, be used for hospital comparison, public reporting, and reimbursement determinations. Models designed for these purposes should have good predictive ability; be deployable in large populations; use reliable data that can be easily obtained; and use variables that are clinically related to, and validated in, the populations in which use is intended.5
This systematic review was performed to synthesize the available literature on validated readmission risk prediction models, describe their performance, and assess their suitability for clinical or administrative use.
Data sources and searches
We searched Ovid MEDLINE, CINAHL, and the Cochrane Library (Central Trial Registry, Systematic Reviews, and Abstracts of Reviews of Effectiveness) from database inception through March 2011, and EMBASE through August 2011, for English-language studies of readmission risk prediction models in medical populations. All citations were imported into an electronic database (EndNote X2, Thomson Reuters, New York, NY). Appendix A provides the search strategies in detail.
Study selection
Seven investigators reviewed the citations and abstracts identified from electronic literature searches. Full-text articles of potentially relevant references were retrieved for further review. Each article was independently assessed by two reviewers using the eligibility criteria shown in Appendix B. Eligible articles were published in English and evaluated the ability of statistical models to predict hospital readmission risk. Because a set of predictive factors derived in only one population may lack validity and applicability,6 we included only studies of models that were tested in both a derivation and validation cohort, even if these results were presented in separate papers. We did not pre-specify the method of validation, nor did we exclude studies in which the derivation and validation cohorts were drawn from the same population (i.e., split-half validation). We did not limit studies by diagnosis within medical populations, but we excluded studies focused on psychiatric, surgical, and pediatric populations, as factors contributing to readmission risk might be considerably different in these patient groups. Finally, we excluded studies from developing nations as these were unlikely to provide directly applicable results.
Data extraction and quality assessment
From each study, we abstracted the following: population characteristics, setting, number of subjects in the derivation and validation cohorts, utilization outcome, readmission rate, range of readmission rates according to predicted risk, and model discrimination. To facilitate a high-level comparison of predictor variables, we grouped final model variables into one of six categories (medical comorbidity, mental health comorbidity, illness severity, prior utilization, overall health and function, and sociodemographic/social determinants of health).7
To characterize the practical utility of each model, two reviewers abstracted from each study the type of data used and the timing of data collection. Disagreements between reviewers about these classifications were resolved through group discussion. Data type consisted of administrative, primary (e.g., survey, chart review), or both. Regarding timing, we classified a model as using real-time data if the variables would be available on or shortly after index hospital admission, and as using retrospective data if the variables would not be available early during a hospitalization. For example, a model using prior healthcare utilization and data from patient surveys conducted early during a hospitalization would be classified as using real-time data, while a model using index hospital length of stay or index hospital discharge diagnostic codes would be classified as using retrospective data. Because of coding delays, models relying on administrative codes from index hospital admission were considered retrospective.
We report the c-statistic, with 95% confidence interval when available, to describe model discrimination. The c-statistic, which is equivalent to the area under the receiver operating characteristic curve, is the proportion of times the model correctly discriminates between a pair of high- and low-risk individuals.8 A c-statistic of 0.5 indicates the model performs no better than chance; a c-statistic of 0.7 to 0.8 indicates modest or acceptable discriminative ability; and a c-statistic greater than 0.8 indicates good discriminative ability.9, 10 If the c-statistic was not reported, we abstracted other operational statistics, such as sensitivity, specificity, and predictive values, for representative risk score cut-offs when available. Model calibration is the degree to which predicted rates are similar to those observed in the population. To describe model calibration we report the range of observed readmission rates from the predicted lowest- to highest-risk groupings.
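As a concrete illustration of these two performance measures, the sketch below implements the pairwise-concordance definition of the c-statistic and the risk-group calibration check described above. This is a minimal didactic example, not code from the review; the function names and data are hypothetical.

```python
def c_statistic(scores, outcomes):
    """Pairwise concordance: among all (readmitted, not-readmitted) patient
    pairs, the fraction in which the readmitted patient received the higher
    predicted risk score. Ties count as half, per the usual AUC convention."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))


def observed_rates_by_risk_group(scores, outcomes, n_groups=4):
    """Calibration check: rank patients by predicted risk, split them into
    equal-sized groups, and report the observed readmission rate in each.
    A model with useful discrimination shows a steep gradient from the
    lowest- to the highest-risk group."""
    ranked = sorted(zip(scores, outcomes))
    size = len(ranked) // n_groups
    groups = [ranked[i * size:(i + 1) * size] for i in range(n_groups)]
    return [sum(y for _, y in g) / len(g) for g in groups]
```

For example, a model that assigns every readmitted patient a higher score than every non-readmitted patient yields a c-statistic of 1.0, while a model that assigns identical scores to all patients yields 0.5 (chance performance).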
To guide our methodologic assessment of included studies, we adapted elements – including cohort definition, follow-up, adequacy of prognostic and outcome variable measurement, and validation method – from a prognosis study quality tool and clinical decision rule assessment tool (Appendix C).6, 11
Data synthesis
The included studies were too heterogeneous to permit meta-analysis. Therefore, we qualitatively synthesized results, focusing on model discrimination, the populations in which the model has been tested, practical aspects of model implementation, and the types of variables included in each model.
From 7,843 titles and abstracts, 286 articles were selected for full-text review (Figure available as online supplement). Of these, 30 studies of 26 unique models across a broad variety of settings and patient populations met our inclusion criteria (Table 1). Most (N=23) studies were based on US healthcare data. The remainder were from Australia (2 studies), England (2), Ireland (1), Switzerland (1), or Canada (1). Fourteen studies included only patients at least 65 years of age. Of these, seven relied solely on Medicare administrative data. Four studies used VA data.
Table 1
Characteristics of validated readmission risk prediction models
Total sample size ranged from just 173 patients to more than 2.7 million. The outcome of 30-day readmission was reported most commonly, though some models chose other follow-up intervals ranging from 14 days to 4 years. Among 21 studies reporting a c-statistic, values ranged from 0.55 – 0.83 (Table 1), but only six studies reported a c-statistic above 0.70 indicating modest discriminative ability. Performance was similar between studies using split-sample validation methods (n=21, c-statistic range 0.59-0.75), and those that used external validation methods (n=9, c-statistic range 0.53-0.83). Among models that analyzed the relationship between risk categories and actual readmission rates, a substantial gradient in readmission rate was present between patients at the lowest vs. the highest risk level. For example, among six models using 30-day readmission as an outcome, the lowest and highest risk groups differed by 20.4 to 34.5 percentage points in their actual readmission rates.
Models relying on retrospective administrative data
Fourteen models were based on retrospective administrative data and could potentially be used for hospital comparison purposes (Table 1). Most of these included medical comorbidity and prior utilization variables, but few considered mental health, functional status and social determinant variables (Table 2). The three models with c-statistics ≥ 0.70 were developed and tested in large European or Australian cohorts. One examined the risk of two or more unplanned readmissions for all hospitalized patients in England, including pediatric and obstetric patients, for one calendar year.12 A Swiss study of potentially preventable readmissions is described in greater detail below.13 An Australian model incorporating over 100 medical comorbidities and administrative social determinant variables performed at a modest level in asthma patients, but poorly in myocardial infarction patients.14
Table 2
Variables considered by studies in evaluating the risk of readmission
The nine large population-based or multicenter US studies generally had poor discriminative ability (c-statistics 0.55 – 0.65). The CMS used a methodologically rigorous process to create three models for congestive heart failure, acute myocardial infarction, and pneumonia admissions based on Hierarchical Condition Categories, which are groups of related comorbidities.15-17 All three models showed relatively poor ability to predict 30-day all-cause readmissions (c-statistics 0.61, 0.63, and 0.63, respectively). A recent study evaluating the CMS heart failure model and an older heart failure model found that both fared similarly (c-statistics 0.59 and 0.61, respectively).18, 19 The other four US models have limited generalizability: one captured readmissions to one medical center only,20 and the others were developed over two decades ago.21-23
Models using real-time administrative data
Three administrative data-based models were designed to identify high-risk patients in real-time to potentially facilitate targeted interventions. A model with modest discriminative ability (c-statistic 0.72, 95% CI 0.70-0.75) examined 30-day heart failure readmissions in a single urban US health system with a large socioeconomically disadvantaged population.24 It incorporated variables from an automated electronic medical record system, including numerous social factors such as number of address changes, census tract socioeconomic status, history of cocaine use, and marital status. The only study focused specifically on Medicaid enrollees used a 0 to 100 risk score for 12-month readmissions and found patient cost profiles varied widely with risk score.25 Finally, a British model used prior utilization and comorbidity data, and also controlled for observed to expected readmission rates for the admission hospital, but predictive ability remained modest (c-statistic 0.69).26
Models incorporating primary data collection
Nine models incorporated survey or chart review data and could potentially be used for clinical intervention purposes, though five used data unlikely to be available early during a hospitalization. The best performing of these used administrative comorbidity and prior utilization data (c-statistic 0.77) along with functional status data (c-statistic 0.83) from the Medicare Beneficiaries Survey to predict a composite outcome of readmissions and nursing home transfers.27 The survey was not routinely administered during index hospitalization and it is unclear to what extent the use of retrospective survey data affects the predictive ability of the model. Similarly, a medical record study in Ireland retrospectively applied a nine-item questionnaire, including items such as discharge polypharmacy, and performed modestly well (c-statistic 0.70).28 A simple Canadian model used medical comorbidities up through index hospital discharge along with index hospital length of stay and prior utilization (c-statistic 0.68, 95%CI 0.65-0.71).29 Increasing scores on another four-item model of medical comorbidities, prior utilization and discharge creatinine were associated with increasing readmission rates in heart failure patients.30
Four models incorporated primary data collected in real-time. Only two of these models have been tested in contemporary populations, the others having been conducted more than two decades ago. One survey-based model developed at six academic hospitals included social determinant, comorbidity, utilization, and self-rated health variables, but had poor predictive ability (c-statistic 0.61).31 The Probability of Repeated Admissions (PRA) is a simple eight-item survey tool developed in older Medicare beneficiaries, but it also had poor predictive ability across several studies (c-statistic 0.56–0.61, 95% CI 0.44-0.67).32-34
Use of variables
A comparison of the types of variables considered for, and included in, the final models can provide some information about the contribution of different types of variables to readmission risk prediction (Table 2). Nearly all studies included medical comorbidity data and many included prior utilization variables, usually prior hospitalizations. Basic sociodemographic variables such as age and gender were considered by most studies but, in many instances, these variables did not contribute enough to be included in the final model. Table 2 also highlights important gaps in model development: few studies considered variables associated with illness severity, overall health and function, and social determinants of health.
Six studies that compared the performance of different models within the same population offer further insights about the incremental value of different types of variables (Table 3). Amarasingham and colleagues found that an automated electronic medical record-based model incorporating sociodemographic factors such as drug use and housing discontinuities was more predictive than comorbidity-based models.24 Coleman and colleagues found the inclusion of variables such as functional status from survey data improved model performance slightly compared to the use of utilization and comorbidity-based administrative data alone (c-statistics 0.83 vs 0.77).27
Table 3
Studies that compared models within a population
Other comparative studies found little difference among models. Clinical data, such as laboratory and physiologic variables, from medical records or registries did not enhance performance of claims-only CMS models.15-17, 28 A US study of older patients found that an intricate ICD-9 code based disease complexity system added very little discriminative ability to a poorly performing Health Care Financing Administration model.23 A large Swiss study of potentially preventable readmission risk compared a very simple non-clinical model, a Charlson comorbidity-based model, and a more complex hierarchical diagnosis and procedures based model called SQLape, finding only slight differences among them (c-statistics 0.67, 0.69, and 0.72, respectively).13 Finally, Allaudeen and colleagues found internal medicine interns using a gestalt approach predicted readmissions with a similar poor level of ability as an older, established survey-based model (PRA) in a small, single center cohort.34
Potentially preventable readmissions
Only one model attempted to explicitly define and identify potentially preventable readmissions.35 Investigators conducted a systematic medical record review to define potentially preventable readmissions and develop an administrative data-based algorithm. A subsequent publication (described above) compared the performance of three models in predicting readmissions according to their algorithm.13
In this systematic review, we found 26 readmission risk prediction models of medical patients tested in a variety of settings and populations. Several are being applied currently in clinical, research or policy arenas. Half the models were largely designed to facilitate calculation of risk-standardized readmission rates for hospital comparison purposes. The other half were clinical models that could be used to identify high-risk patients for whom a transitional care intervention might be appropriate. Most models in both categories have poor predictive ability.
Readmission risk prediction remains a poorly understood and complex endeavor. Indeed, models of patient level factors such as medical comorbidities, basic demographic data, and clinical variables are much better able to predict mortality than readmission risk.18, 24, 29 Broader social, environmental, and medical factors such as access to care, social support, substance abuse, and functional status contribute to readmission risk in some models, but the utility of such factors has not been widely studied.
It is likely that hospital and health system-level factors, which are not present in current readmission risk models, contribute to risk.36 For instance, the timeliness of post-discharge follow-up, coordination of care with the primary care physician, and quality of medication reconciliation may be associated with readmission risk.37, 38 The supply of hospital beds may independently contribute to higher readmission rates.39 Finally, the quality of inpatient care could also contribute to risk,40 though the evidence is mixed.41 Though the inclusion of such hospital-level factors would conceivably improve the predictive ability of models, it would be inappropriate to include them in models that are used for risk-standardization purposes. Doing so would adjust hospital readmission rates for the very deficits in quality and efficiency that hospital comparison efforts seek to reveal, and which could be targets for quality improvement interventions.
Public reporting and financial penalties for hospitals with high 30-day readmission rates are spurring organizations to innovate and implement quality improvement programs.42, 43 Nevertheless, the poor discriminative ability of most of the administrative models we examined raises concerns about the ability to standardize risk across hospitals in order to fairly compare hospital performance. Until risk prediction and risk adjustment become more accurate, it seems inappropriate to compare hospitals in this way and reimburse (or penalize) them on the basis of risk-standardized readmission rates. Others have reached similar conclusions,44 and have also expressed concern that such financial penalties could exacerbate health disparities by penalizing hospitals with fewer resources.45 Still others have argued that readmission rate is an incomplete accountability measure that fails to consider “the real outcomes of interest – health, quality of life, and value.”46
Use of readmission rates as a quality metric assumes that readmissions are related to poor quality care and are potentially preventable. However, the preventability of readmissions remains unclear and understudied. We found only one validated prediction model that explicitly examined potentially preventable readmissions as an outcome, and it found only about one-quarter of readmissions were clearly preventable.13 A recent systematic review of 34 studies found wide variation in the percentage of readmissions considered preventable; estimates ranged from 5% to 79%, with a median of 27%.47 More work is needed to develop readmission risk prediction models with an outcome of preventable readmissions. This could not only improve risk-standardization efforts, but also allow hospitals to better focus limited clinical resources in readmission avoidance programs.
As with models that are used for risk-standardization, readmission risk models that are intended for clinical use also have certain requirements and limitations. Clinical models would ideally provide data prior to discharge, discriminate high- from low-risk patients, and would be adapted to the settings and populations in which they are to be used. Very few models met all these criteria, and only one of these – a single-center study – had acceptable discriminative ability.24 As with the risk-adjustment models, most of the models developed for clinical purposes had poor predictive ability, though notable exceptions suggest the addition of social or functional variables may improve overall performance.24, 27
The best choice of model may depend on setting and the population being studied. The success of some models in certain populations and the lack of success of others suggest the patient-level factors associated with readmission risk may differ according to the population studied. For example, while medical comorbidities may account for a large proportion of risk in some populations, social determinants may disproportionately influence risk in socioeconomically disadvantaged populations. Our review finds, though, that very few models have incorporated such variables.
Even though the overall predictive ability of the clinical models was poor, we did find that high- and low-risk scores were associated with a clinically meaningful gradient of readmission rates. This is important given resource constraints and the need to selectively apply potentially costly care transition interventions. Even limited ability to identify a proportion of patients at risk for future high-cost utilization can increase the cost-effectiveness of such programs.26, 48
Of note, very few models incorporated clinically actionable data that could be used to triage patients to different types of interventions. For example, marginally housed patients, or those struggling with substance abuse, might require unique discharge services. Relatively simple, practical models that use real-time clinically actionable data, such as the Project BOOST model, have been created, but their performance has not yet been rigorously validated.49
Our review concurs with and adds to the findings of several other reviews that found deficiencies in the predictive abilities of risk prediction models. One recent review limited to US studies examined general risk factors for preventable readmissions, but did not search explicitly for validated models, and many of the included studies suffered from poor study design.50 The authors suggest that, in general, measures of poor health such as comorbidity burden, prior utilization, and increasing age were associated with readmissions. Two other reviews focused on specific diagnoses and found very few readmission risk models for heart failure,44 COPD,51 or myocardial infarction.52
Our review has certain limitations. We included studies outside the United States, given that portions of US health care may resemble other countries' health systems, but applicability of models from other countries to the US may still be limited. Our classifications of data types, data collection timing, and the intended use of each model, are subject to interpretation, but we attempted to mitigate subjectivity by using a dual-review and consensus process. Finally, few studies directly compared models within the same population, and summary statistics such as the c-statistic should not be used to directly compare models across different populations.
Additional research is needed to assess the true preventability of readmissions in US health systems. Given the broad variety of factors that may contribute to preventable readmission risk, models that include factors obtained through medical record review or patient report may be valuable. Innovations to collect broader variable types for inclusion in administrative data sets should be considered. Future studies should assess the relative contributions of different types of patient data (e.g., psychosocial factors) to readmission risk prediction by comparing the performance of models with and without these variables in a given population. These models should ideally be based on population-specific conceptual frameworks of risk. Implementation of risk stratification models and their effect on work flow and resource prioritization should be assessed in a broad variety of hospital settings. Also, given that many models have limited predictive ability and may require some investment of time and cost to implement, future studies should further evaluate the relative value of clinician gestalt compared to predictive models in assessing readmission risk.
In summary, readmission risk prediction is a complex endeavor with many inherent limitations. Most models created to date, whether for hospital comparison or clinical purposes, have poor predictive ability. Though in certain settings such models may prove useful, better approaches are needed to assess hospital performance in discharging patients, as well as to identify patients at greater risk of avoidable readmission.
Figure 1
Risk Prediction Models for Hospital Readmission - Literature Flow
Acknowledgments
The authors wish to thank Rose Relevo, MLS, MS, AHIP, research librarian at Oregon Health and Science University, for constructing and deploying the search strategy, as well as Tomiye Akagi, BA, administrative assistant at the Portland VAMC. We also thank Ed Vasilevskis, MD; Frank Harrell, PhD; Art Wheeler, MD; and Italo Biaggioni, MD of Vanderbilt University for critically reviewing a draft of this manuscript. Dr. Wheeler was compensated by the Vanderbilt CTSA (UL1 RR024976 from NCRR/NIH). This report is based on research conducted by the Evidence-based Synthesis Program (ESP) Center located at the Portland VA Medical Center, Portland OR funded by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development. The research was also supported in part by Vanderbilt CTSA grant 1 UL1 RR024975 from the National Center for Research Resources, National Institutes of Health. As the principal investigator of the project, Dr. Devan Kansagara had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The findings and conclusions in this report are those of the authors who are responsible for its contents; the findings and conclusions do not necessarily represent the views of the Department of Veterans Affairs or the United States government. Therefore, no statement in this article should be construed as an official position of the Department of Veterans Affairs. No authors have any affiliations or financial involvement (e.g., employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties) that conflict with materials presented in the report.
1. Jack BW, Chetty VK, Anthony D, et al. A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med. 2009 Feb 3;150(3):178–187. [PMC free article] [PubMed]
2. Coleman EA, Parry C, Chalmers S, et al. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006 Sep 25;166(17):1822–1828. [PubMed]
3. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow-up of hospitalized elders: a randomized clinical trial. JAMA. 1999 Feb 17;281(7):613–620. [PubMed]
4. QualityNet. [Accessed 5/28/2011];Readmission Measures Overview - Publicly reporting risk-standardized, 30-day readmission measures for AMI, HF and PN. http://www.qualitynet.org/dcs/ContentServer?cid=1219069855273&pagename=QnetPublic%2FPage%2FQnetTier2&c=Page.
5. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006 Jan 24;113(3):456–462. [PubMed]
6. McGinn TG, Guyatt GH, Wyer PC, Naylor CD, Stiell IG, Richardson WS. Users' guides to the medical literature: XXII: how to use articles about clinical decision rules. Evidence-Based Medicine Working Group. JAMA. 2000 Jul 5;284(1):79–84. [PubMed]
7. Centers for Disease Control and Prevention. Establishing a Holistic Framework to Reduce Inequities in HIV, Viral Hepatitis, STDs, and Tuberculosis in the United States. Atlanta (GA): U.S. Department of Health and Human Services, Centers for Disease Control and Prevention; Oct, 2010. The report is available at: www.cdc.gov/socialdeterminants.
8. Iezzoni LI, editor. Risk adjustment for measuring health care outcomes. 3rd. Chicago, IL: Health Administration Press; 2003.
9. Schneeweiss S, Seeger JD, Maclure M, Wang PS, Avorn J, Glynn RJ. Performance of comorbidity scores to control for confounding in epidemiologic studies using claims data. Am J Epidemiol. 2001 Nov 1;154(9):854–864. [PubMed]
10. Ohman EM, Granger CB, Harrington RA, Lee KL. Risk stratification and therapeutic decision making in acute coronary syndromes. Jama. 2000 Aug 16;284(7):876–878. [PubMed]
11. Hayden JA, Cote P, Bombardier C. Evaluation of the quality of prognosis studies in systematic reviews. Ann Intern Med. 2006 Mar 21;144(6):427–437. [PubMed]
12. Bottle A, Aylin P, Majeed A, Bottle A, Aylin P, Majeed A. Identifying patients at high risk of emergency hospital admissions: a logistic regression analysis. J R Soc Med. 2006 Aug;99(8):406–414. [PMC free article] [PubMed]
13. Halfon P, Eggli Y, Pretre-Rohrbach I, Meylan D, Marazzi A, Burnand B. Validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. Med Care. 2006 Nov;44(11):972–981. [PubMed]
14. Holman CDAJ, Preen DB, Baynham NJ, Finn JC, Semmens JB. A multipurpose comorbidity scoring system performed better than the Charlson index. J Clin Epidemiol. 2005 Oct;58(10):1006–1014. [PubMed]
15. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Heart Failure Readmission Measure: Methodology. Report prepared for Centers for Medicare & Medicaid Services. 2008
16. Krumholz HM, Normand ST, Keenan PS, et al. Hospital 30-Day Acute Myocardial Infarction Readmission Measure: Methodology. A report prepared for the Centers for Medicare & Medicaid Services. 2008
17. Krumholz HM, Normand ST, Keenan PS, et al. Hospital 30-Day Pneumonia Readmission Risk Measure: Methodology. A report prepared for the Centers for Medicare & Medicaid Services. 2008
18. Hammill BG, Curtis LH, Fonarow GC, et al. Incremental value of clinical data beyond claims data in predicting 30-day outcomes after heart failure hospitalization. Circulation: Cardiovascular Quality and Outcomes. 2011;4(1):60–67. [PubMed]
19. Philbin EF, DiSalvo TG. Prediction of hospital readmission for heart failure: development of a simple risk score based on administrative data. J Am Coll Cardiol. 1999 May;33(6):1560–1566. [PubMed]
20. Silverstein MD, Qin H, Mercer SQ, Fong J, Haydar Z. Risk factors for 30-day hospital readmission in patients <GT> or = 65 years of age. Baylor University Medical Center Proceedings. 2008;21(4):363–372. [PMC free article] [PubMed]
21. Thomas JW. Does risk-adjusted readmission rate provide valid information on hospital quality? Inquiry. 1996;33(3):258–270. [PubMed]
22. Anderson GF, Steinberg EP. Predicting hospital readmissions in the Medicare population. Inquiry. 1985;22(3):251–258. [PubMed]
23. Naessens JM, Leibson CL, Krishan I, Ballard DJ. Contribution of a measure of disease complexity (COMPLEX) to prediction of outcome and charges among hospitalized patients. Mayo Clin Proc. 1992 Dec;67(12):1140–1149. [PubMed]
24. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010 Nov;48(11):981–988. [PubMed]
25. Billings J, Mijanovich T, Billings J, Mijanovich T. Improving the management of care for high-cost Medicaid patients. Health Aff (Millwood) 2007 Nov-Dec;26(6):1643–1654. [PubMed]
26. Billings J, Dixon J, Mijanovich T, Wennberg D. Case finding for patients at risk of readmission to hospital: development of algorithm to identify high risk patients. Bmj. 2006 Aug 12;333(7563):327. [PMC free article] [PubMed]
27. Coleman EA, Min SJ, Chomiak A, et al. Posthospital care transitions: patterns, complications, and risk identification. Health Serv Res. 2004 Oct;39(5):1449–1465. [PMC free article] [PubMed]
28. Morrissey EFR, McElnay JC, Scott M, McConnell BJ. Influence of drugs, demographics and medical history on hospital readmission of elderly patients: A predictive model. Clinical Drug Investigation. 2003;23(2):119–128.
29. van Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. Cmaj. 2010 Apr 6;182(6):551–557. [PMC free article] [PubMed]
30. Krumholz HM, Chen YT, Wang Y, Vaccarino V, Radford MJ, Horwitz RI. Predictors of readmission among elderly survivors of admission with heart failure. Am Heart J. 2000 Jan;139(1 Pt 1):72–77. [PubMed]
31. Hasan O, Meltzer DO, Shaykevich SA, et al. Hospital readmission in general medicine patients: a prediction model. J Gen Intern Med. 2009;25(3):211–219. [PMC free article] [PubMed]
32. Boult C, Dowd B, McCaffrey D, Boult L, Hernandez R, Krulewitch H. Screening elders for risk of hospital admission. J Am Geriatr Soc. 1993 Aug;41(8):811–817. [PubMed]
33. Novotny NL, Anderson MA. Prediction of early readmission in medical inpatients using the Probability of Repeated Admission instrument. Nurs Res. 2008 Nov-Dec;57(6):406–415. [PubMed]
34. Allaudeen N, Schnipper JL, Orav EJ, et al. Inability of providers to predict unplanned readmissions. J Gen Intern Med. 2011 Jul;26(7):771–776. [PMC free article] [PubMed]
35. Halfon P, Eggli Y, van Melle G, Chevalier J, Wasserfallen JB, Burnand B. Measuring potentially avoidable hospital readmissions. J Clin Epidemiol. 2002 Jun;55(6):573–587. [PubMed]
36. Oddone EZ, Weinberger M, Horner M, et al. Classifying general medicine readmissions. Are they preventable? Veterans Affairs Cooperative Studies in Health Services Group on Primary Care and Hospital Readmissions. J Gen Intern Med. 1996 Oct;11(10):597–607. [PubMed]
37. Hernandez AF, Greiner MA, Fonarow GC, et al. Relationship between early physician follow-up and 30-day readmission among Medicare beneficiaries hospitalized for heart failure. Jama. 2010 May 5;303(17):1716–1722. [PubMed]
38. Kripalani S, Jackson AT, Schnipper JL, et al. Promoting effective transitions of care at hospital discharge: a review of key issues for hospitalists. Journal of hospital medicine (Online) 2007 Sep;2(5):314–323. [PubMed]
39. Fisher E, Goodman D, Skinner J, Bronner K. Health Care Spending, Quality, and Outcomes - More Isn't Always Better. The Dartmouth Institute for Health Policy & Clinical Practice. 2009
40. Ashton CM, Wray NP. A conceptual framework for the study of early readmission as an indicator of quality of care. Soc Sci Med. 1996 Dec;43(11):1533–1541. [PubMed]
41. Weissman JS, Ayanian JZ, Chasan-Taber S, Sherwood MJ, Roth C, Epstein AM. Hospital readmissions and quality of care. Med Care. 1999 May;37(5):490–501. [PubMed]
42. Fung CH, Lim YW, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008 Jan 15;148(2):111–123. [PubMed]
43. The Care Transitions Quality Improvement Organization Support Center (QIOSC) [Accessed 6/1/11]; http://www.cfmc.org/caretransitions.
44. Ross JS, Mulvey GK, Stauffer B, et al. Statistical models and patient predictors of readmission for heart failure: a systematic review. Arch Intern Med. 2008 Jul 14;168(13):1371–1386. [PubMed]
45. Joynt KE, Jha AK. Who has higher readmission rates for heart failure, and why? Implications for efforts to improve care using financial incentives. Circulation: Cardiovascular Quality and Outcomes. 2011;4(1):53–59. [PMC free article] [PubMed]
46. Axon RN, Williams MV, Axon RN, Williams MV. Hospital readmission as an accountability measure. Jama. 2011 Feb 2;305(5):504–505. [PubMed]
47. van Walraven C, Bennett C, Jennings A, et al. Proportion of hospital readmissions deemed avoidable: a systematic review. Cmaj. 2011 Apr 19;183(7):E391–402. [PMC free article] [PubMed]
48. Mukamel DB, Chou CC, Zimmer JG, Rothenberg BM. The effect of accurate patient screening on the cost-effectiveness of case management programs. Gerontologist. 1997 Dec;37(6):777–784. [PubMed]
49. [Accessed 5/28/11];Society of Hospital Medicine Project Boost Better Outcomes for Older Adults through Safe Transitions Tool for Addressing Risk: A Geriatric Evaluation for Transitions. http://www.hospitalmedicine.org/ResourceRoomRedesign/RR_CareTransitions/PDFs/TARGET_screen_v22.pdf.
50. Vest JR, Gamm LD, Oxford BA, et al. Determinants of preventable readmissions in the United States: a systematic review. Implement Sci. 2010;5:88. [PMC free article] [PubMed]
51. Bahadori K, FitzGerald JM. Risk factors of hospitalization and readmission of patients with COPD exacerbation--systematic review. Int J Chron Obstruct Pulmon Dis. 2007;2(3):241–251. [PMC free article] [PubMed]
52. Desai MM, Stauffer BD, Feringa HHH, Schreiner GC. Statistical models and patient predictors of readmission for acute myocardial infarction: a systematic review. Circulation. 2009 Sep;2(5):500–507. Cardiovascular Quality & Outcomes. [PubMed]
53. Holloway JJ, Medendorp SV, Bromberg J. Risk factors for early readmission among veterans. Health Serv Res. 1990 Apr;25(1 Pt 2):213–237. [PMC free article] [PubMed]
54. Howell S, Coory M, Martin J, et al. Using routine inpatient data to identify patients at risk of hospital readmission. BMC Health Serv Res. 2009;9:96. [PMC free article] [PubMed]
55. Smith DM, Norton JA, McDonald CJ. Nonelective readmissions of medical patients. J Chronic Dis. 1985;38(3):213–224. [PubMed]
56. Smith DM, Weinberger M, Katz BP, Moore PS. Postdischarge care and readmissions. Med Care. 1988 Jul;26(7):699–708. [PubMed]
57. Smith DM, Katz BP, Huster GA, Fitzgerald JF, Martin DK, Freedman JA. Risk factors for nonelective hospital readmissions. J Gen Intern Med. 1996 Dec;11(12):762–764. [PubMed]
58. Burns R, Nichols LO. Factors predicting readmission of older general medicine patients. J Gen Intern Med. 1991 Sep-Oct;6(5):389–393. [PubMed]
59. Evans RL, Hendricks RD, Lawrence KV, Bishop DS. Identifying factors associated with health care use: a hospital-based risk screening index. Soc Sci Med. 1988;27(9):947–954. [PubMed]
60. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373–383. [PubMed]
61. Eggli Y. [Prévision des coûts hospitaliers fondés sur le profil des patients] Hospital costs prevision grounded on case-mix. Chardonne, Switzerland: SQLape sàrl. 2005
62. Tabak YP, Johannes RS, Silber JH, Tabak YP, Johannes RS, Silber JH. Using automated clinical data for risk adjustment: development and validation of six disease-specific mortality predictive models for pay-for-performance. Med Care. 2007 Aug;45(8):789–805. [PubMed]
63. Bowen OR, Roper WL. Region IX: American Samoa, Arizona, Guam, Hawaii, Nevada Publication No HCFA 00651. Washington, DC: US Government Printing Office; 1988. Medicare Hospital Mortality Information 1987.