J Hosp Med. Author manuscript; available in PMC 2017 March 24.
Published in final edited form as: J Hosp Med. Published online 2016 February 29. doi: 10.1002/jhm.2568
PMCID: PMC5365027
NIHMSID: NIHMS850943

Predicting All-Cause Readmissions Using Electronic Health Record Data From the Entire Hospitalization: Model Development and Comparison

Oanh Kieu Nguyen, MD, MAS,1,2,* Anil N. Makam, MD, MAS,1,2 Christopher Clark, MPA,3 Song Zhang, PhD,4 Bin Xie, PhD,3 Ferdinand Velasco, MD,5 Ruben Amarasingham, MD, MBA,1,2,3 and Ethan A. Halm, MD, MPH1,2

Abstract

Background

Incorporating clinical information from the full hospital course may improve prediction of 30-day readmissions.

Objective

To develop an all-cause readmissions risk-prediction model incorporating electronic health record (EHR) data from the full hospital stay, and to compare “full-stay” model performance to a “first day” and 2 other validated models, LACE (includes Length of stay, Acute [non-elective] admission status, Charlson Comorbidity Index, and Emergency department visits in the past year), and HOSPITAL (includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type [nonelective], number of Admissions in the past year, and Length of stay).

Design

Observational cohort study.

Subjects

All medicine discharges between November 2009 and October 2010 from 6 hospitals in North Texas, including safety net, teaching, and nonteaching sites.

Measures

Thirty-day nonelective readmissions were ascertained from 75 regional hospitals.

Results

Among 32,922 admissions (validation = 16,430), 12.7% were readmitted. In addition to many first-day factors, we identified hospital-acquired Clostridium difficile infection (adjusted odds ratio [AOR]: 2.03, 95% confidence interval [CI]: 1.18-3.48), vital sign instability on discharge (AOR: 1.25, 95% CI: 1.15-1.36), hyponatremia on discharge (AOR: 1.34, 95% CI: 1.18-1.51), and length of stay (AOR: 1.06, 95% CI: 1.04-1.07) as significant predictors. The full-stay model had better discrimination than the other models, though the improvement was modest (C statistic 0.69 vs 0.64-0.67). It was also modestly better in identifying patients at highest risk for readmission (likelihood ratio +2.4 vs 1.8-2.1) and in reclassifying individuals (net reclassification index 0.02-0.06).

Conclusions

Incorporating clinically granular EHR data from the full hospital stay modestly improves prediction of 30-day readmissions. Given limited improvement in prediction despite incorporation of data on hospital complications, clinical instabilities, and trajectory, our findings suggest that many factors influencing readmissions remain unaccounted for. Further improvements in readmission models will likely require accounting for psychosocial and behavioral factors not currently captured by EHRs.

Unplanned hospital readmissions are frequent, costly, and potentially avoidable.1,2 Due to major federal financial readmissions penalties targeting excessive 30-day readmissions, there is increasing attention to implementing hospital-initiated interventions to reduce readmissions.3,4 However, universal enrollment of all hospitalized patients into such programs may be too resource intensive for many hospitals.5 To optimize efficiency and effectiveness, interventions should be targeted to individuals most likely to benefit.6,7 However, existing readmission risk-prediction models have achieved only modest discrimination, have largely used administrative claims data not available until months after discharge, or are limited to only a subset of patients with Medicare or a specific clinical condition.8–14 These limitations have precluded accurate identification of high-risk individuals in an all-payer general medical inpatient population to provide actionable information for intervention prior to discharge.

Approaches using electronic health record (EHR) data could allow early identification of high-risk patients during the index hospitalization to enable initiation of interventions prior to discharge. To date, such strategies have relied largely on EHR data from the day of admission.15,16 However, given that variation in 30-day readmission rates is thought to reflect the quality of in-hospital care, incorporating EHR data from the entire hospital stay to reflect hospital care processes and clinical trajectory may more accurately identify at-risk patients.17–20 Improved accuracy in risk prediction would help better target intervention efforts in the immediate postdischarge period, an interval characterized by heightened vulnerability for adverse events.21

To help hospitals target transitional care interventions more effectively to high-risk individuals prior to discharge, we derived and validated a readmissions risk-prediction model incorporating EHR data from the entire course of the index hospitalization, which we termed the “full-stay” EHR model. We also compared the full-stay EHR model performance to our group's previously derived prediction model based on EHR data on the day of admission, termed the “first-day” EHR model, as well as to 2 other validated readmission models similarly intended to yield near real-time risk predictions prior to or shortly after hospital discharge.9,10,15

Methods

Study Design, Population, and Data Sources

We conducted an observational cohort study using EHR data from 6 hospitals in the Dallas–Fort Worth metroplex between November 1, 2009 and October 30, 2010, all using the same EHR system (Epic Systems Corp., Verona, WI). One site was a university-affiliated safety net hospital; the remaining 5 sites were teaching and nonteaching community sites.

We included consecutive hospitalizations among adults ≥18 years old discharged alive from any medicine inpatient service. For individuals with multiple hospitalizations during the study period, we included only the first hospitalization. We excluded individuals who died during the index hospitalization, were transferred to another acute care facility, left against medical advice, or who died outside of the hospital within 30 days of discharge. For model derivation, we randomly split the sample into separate derivation (50%) and validation cohorts (50%).
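The random 50/50 split into derivation and validation cohorts can be sketched as follows. This is a minimal illustration, not the study's actual code; the record identifiers and the fixed seed are assumptions for the example, and the published cohort sizes (16,492 vs. 16,430) indicate the actual split was not exactly equal.

```python
import random

def split_cohort(record_ids, seed=0):
    """Randomly split index hospitalizations into derivation (50%)
    and validation (50%) cohorts, as described in the Methods."""
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = list(record_ids)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# 32,922 index hospitalizations, split in half
derivation, validation = split_cohort(range(32922))
```

Splitting once at the patient level (after keeping only each individual's first hospitalization) avoids the same patient appearing in both cohorts, which would make validation performance look optimistic.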

Outcomes

The primary outcome was 30-day hospital readmission, defined as a nonelective hospitalization within 30 days of discharge to any of 75 acute care hospitals within a 100-mile radius of Dallas, ascertained from an all-payer regional hospitalization database. Nonelective hospitalizations included all hospitalizations classified as emergency, urgent, or trauma, and excluded those classified as elective, as per the Centers for Medicare and Medicaid Services Claim Inpatient Admission Type Code definitions.

Predictor Variables for the Full-Stay EHR Model

The full-stay EHR model was iteratively developed from our group's previously derived and validated risk-prediction model using EHR data available on admission (first-day EHR model).15 For the full-stay EHR model, we included all predictor variables included in our published first-day EHR model as candidate risk factors. Based on prior literature, we additionally expanded candidate predictors available on admission to include marital status (proxy for social isolation) and socioeconomic disadvantage (percent poverty, unemployment, median income, and educational attainment by zip code of residence as proxy measures of the social and built environment).22–27 We also expanded the ascertainment of prior hospitalizations to include admissions at both the index hospital and any of 75 acute care hospitals, using the same all-payer regional hospitalization database used to ascertain 30-day readmissions.

Candidate predictors from the remainder of the hospital stay (ie, following the first 24 hours of admission) were included if they were: (1) available in the EHR of all participating hospitals, (2) routinely collected or available at the time of hospital discharge, and (3) plausible predictors of adverse outcomes based on prior literature and clinical expertise. These included length of stay, in-hospital complications, transfer to an intensive or coronary care unit, blood transfusions, vital sign instabilities within 24 hours of discharge, select laboratory values at time of discharge, and disposition status. We also assessed trajectories of vital signs and selected laboratory values (defined as changes in these measures from admission to discharge).

Statistical Analysis

Model Derivation

Univariate relationships between readmission and each of the candidate predictors were assessed in the derivation cohort using a prespecified significance threshold of P ≤ 0.05. We included all factors from our previously derived and validated first-day EHR model as candidate predictors.15 Continuous laboratory and vital sign values at the time of discharge were categorized based on clinically meaningful cutoffs; predictors with missing values were assumed to be normal (<1% missing for each variable). Significant univariate candidate variables were entered in a multivariate logistic regression model using stepwise backward selection with a prespecified significance threshold of P ≤ 0.05. We performed several sensitivity analyses to confirm the robustness of our model. First, we alternately derived the full-stay model using stepwise forward selection. Second, we “forced in” all significant variables from our first-day EHR model, and entered the candidate variables from the remainder of the hospital stay using both stepwise backward and forward selection separately. Third, prespecified interactions between variables were evaluated for inclusion. Though final predictors varied slightly between the different approaches, discrimination of each model was similar to the model derived using our primary analytic approach (C statistics ± 0.01, data not shown).

Model Validation

We assessed model discrimination and calibration of the derived full-stay EHR model using the validation cohort. Model discrimination was estimated by the C statistic. The C statistic represents the probability that, given 2 hospitalized individuals (1 who was readmitted and the other who was not), the model will predict a higher risk for the readmitted patient than for the nonreadmitted patient. Model calibration was assessed by comparing predicted to observed probabilities of readmission by quintiles of risk, and with the Hosmer-Lemeshow goodness-of-fit test.
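The pairwise definition of the C statistic given above can be computed directly. This is a simple quadratic-time sketch for illustration; in practice rank-based formulas (or ROC-curve routines) are used for cohorts of this size.

```python
def c_statistic(predicted_risks, readmitted):
    """Probability that a randomly chosen readmitted patient has a
    higher predicted risk than a randomly chosen nonreadmitted
    patient; ties count as half a concordant pair."""
    pos = [r for r, y in zip(predicted_risks, readmitted) if y == 1]
    neg = [r for r, y in zip(predicted_risks, readmitted) if y == 0]
    concordant = sum(
        1.0 if rp > rn else 0.5 if rp == rn else 0.0
        for rp in pos for rn in neg
    )
    return concordant / (len(pos) * len(neg))

# A model that ranks every readmitted patient above every
# nonreadmitted patient has C = 1.0; random guessing gives 0.5.
print(c_statistic([0.9, 0.7, 0.3, 0.1], [1, 1, 0, 0]))  # → 1.0
```

A C statistic of 0.69, as reported below for the full-stay model, means the model ranks the readmitted patient higher in about 69% of such pairs.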

Comparison to Existing Models

We compared the full-stay EHR model performance to 3 previously published models: our group's first-day EHR model, and the LACE (includes Length of stay, Acute (nonelective) admission status, Charlson Comorbidity Index, and Emergency department visits in the past year) and HOSPITAL (includes Hemoglobin at discharge, discharge from Oncology service, Sodium level at discharge, Procedure during index hospitalization, Index hospitalization Type (nonelective), number of Admissions in the past year, and Length of stay) models, which were both derived to predict 30-day readmissions among general medical inpatients and were intended to help clinicians identify high-risk patients to target for discharge interventions.9,10,15 We assessed each model's performance in our validation cohort, calculating the C statistic, integrated discrimination index (IDI), and net reclassification index (NRI) compared to the full-stay model. IDI is a summary measure of both discrimination and reclassification, where more positive values suggest improvement in model performance in both these domains compared to a reference model.28 The NRI is defined as the sum of the net proportions of correctly reclassified persons with and without the event of interest.29 The theoretical range of values is −2 to 2, with more positive values indicating improved net reclassification compared to a reference model.
Here, we calculated a category-based NRI to evaluate the performance of models in correctly classifying individuals with and without readmissions into the highest readmission risk quintile versus the lowest 4 risk quintiles compared to the full-stay EHR model.29 This prespecified cutoff is relevant for hospitals interested in identifying the highest-risk individuals for targeted intervention.6 Because some hospitals may be able to target a greater number of individuals for intervention, we performed a sensitivity analysis by assessing category-based NRI for reclassification into the top 2 risk quintiles versus the lowest 3 risk quintiles and found no meaningful difference in our results (data not shown). Finally, we qualitatively assessed calibration of comparator models in our validation cohort by comparing predicted probability to observed probability of readmission by quintiles of risk for each model. We conducted all analyses using Stata 12.1 (StataCorp, College Station, TX). This study was approved by the UT Southwestern Medical Center institutional review board.
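A two-category NRI of this kind can be sketched as follows. This is an illustrative implementation only, not the study's code; the boolean flags mark membership in the top risk quintile under each model, and the function compares a new model against a reference model.

```python
def category_nri(new_high, ref_high, readmitted):
    """Category-based net reclassification index with a single risk
    cutoff (True = classified into the top risk quintile).

    NRI = (net proportion of readmitted patients reclassified upward)
        + (net proportion of nonreadmitted patients reclassified downward),
    so the theoretical range is -2 to 2 and positive values favor
    the new model over the reference model."""
    events = [(n, r) for n, r, y in zip(new_high, ref_high, readmitted) if y]
    nonevents = [(n, r) for n, r, y in zip(new_high, ref_high, readmitted) if not y]
    up_events = sum(1 for n, r in events if n and not r)
    down_events = sum(1 for n, r in events if r and not n)
    up_nonevents = sum(1 for n, r in nonevents if n and not r)
    down_nonevents = sum(1 for n, r in nonevents if r and not n)
    return ((up_events - down_events) / len(events)
            + (down_nonevents - up_nonevents) / len(nonevents))
```

For example, if the new model moves one readmitted patient up into the top quintile and one nonreadmitted patient down out of it, both components are positive and the NRI exceeds zero.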

Results

Overall, 32,922 index hospitalizations were included in our study cohort; 12.7% resulted in a 30-day readmission (see Supporting Figure 1 in the online version of this article). Individuals had a mean age of 62 years and had diverse race/ethnicity and primary insurance status; half were female (Table 1). The study sample was randomly split into a derivation cohort (50%, n = 16,492) and validation cohort (50%, n = 16,430). Individuals in the derivation cohort with a 30-day readmission had markedly different socioeconomic and clinical characteristics compared to those not readmitted (Table 1).

Table 1
Baseline Characteristics and Candidate Variables for Risk-Prediction Model

Derivation and Validation of the Full-Stay EHR Model for 30-Day Readmission

Our final model included 24 independent variables, including demographic characteristics, utilization history, clinical factors from the first day of admission, and clinical factors from the remainder of the hospital stay (Table 2). The strongest independent predictor of readmission was hospital-acquired Clostridium difficile infection (adjusted odds ratio [AOR]: 2.03, 95% confidence interval [CI] 1.18-3.48); other hospital-acquired complications including pressure ulcers and venous thromboembolism were also significant predictors. Though having Medicaid was associated with increased odds of readmission (AOR: 1.55, 95% CI: 1.31-1.83), other zip code–level measures of socioeconomic disadvantage were not predictive and were not included in the final model. Being discharged to hospice was associated with markedly lower odds of readmission (AOR: 0.23, 95% CI: 0.13-0.40).

Table 2
Final Full-Stay EHR Model Predicting 30-Day Readmissions (Derivation Cohort, N = 16,492)

In our validation cohort, the full-stay EHR model had fair discrimination, with a C statistic of 0.69 (95% CI: 0.68-0.70) (Table 3). The full-stay EHR model was well calibrated across all quintiles of risk, with slight overestimation of predicted risk in the lowest and highest quintiles (Figure 1a) (see Supporting Table 5 in the online version of this article). It also effectively stratified individuals across a broad range of predicted readmission risk from 4.1% in the lowest decile to 36.5% in the highest decile (Table 3).
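Calibration by quintile of risk, as assessed above, amounts to comparing the mean predicted risk with the observed readmission rate within each quintile. A minimal sketch, with illustrative inputs rather than the study's data:

```python
def calibration_table(predicted_risks, readmitted, groups=5):
    """Sort patients by predicted risk, split into equal-sized groups
    (quintiles by default), and return (mean predicted risk,
    observed event rate) per group; close agreement between the two
    columns indicates good calibration."""
    paired = sorted(zip(predicted_risks, readmitted))
    size = len(paired) // groups
    table = []
    for g in range(groups):
        # last group absorbs any remainder from uneven division
        chunk = paired[g * size:] if g == groups - 1 else paired[g * size:(g + 1) * size]
        mean_pred = sum(p for p, _ in chunk) / len(chunk)
        observed = sum(y for _, y in chunk) / len(chunk)
        table.append((mean_pred, observed))
    return table
```

Overestimation in a quintile, as noted for the lowest and highest quintiles here, shows up as a mean predicted risk above the observed rate in that row.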

Fig. 1
Comparison of the calibration of different readmission models. Calibration graphs for full-stay (a), first-day (b), LACE (c), and HOSPITAL (d) models in the validation cohort. Each graph shows predicted probability compared to observed probability of readmission by quintiles of risk.
Table 3
Comparison of the Discrimination and Reclassification of Different Readmission Models*

Comparing the Performance of the Full-Stay EHR Model to Other Models

The full-stay EHR model had better discrimination compared to the first-day EHR model and the LACE and HOSPITAL models, though the magnitude of improvement was modest (Table 3). The full-stay EHR model also stratified individuals across a broader range of readmission risk, and was better able to discriminate and classify those in the highest quintile of risk from those in the lowest 4 quintiles of risk compared to other models as assessed by the IDI and NRI (Table 3) (see Supporting Tables 1–4 and Supporting Figure 2 in the online version of this article). In terms of model calibration, both the first-day EHR and LACE models were also well calibrated, whereas the HOSPITAL model was less robust (Figure 1).

The diagnostic accuracy of the full-stay EHR model in correctly predicting those in the highest quintile of risk was better than that of the first-day, LACE, and HOSPITAL models, though overall improvements in the sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios were also modest (see Supporting Table 6 in the online version of this article).

Discussion

In this study, we used clinically detailed EHR data from the entire hospitalization on 32,922 individuals treated in 6 diverse hospitals to develop an all-payer, multicondition readmission risk-prediction model. To our knowledge, this is the first 30-day hospital readmission risk-prediction model to use a comprehensive set of factors from EHR data from the entire hospital stay. Prior EHR-based models have focused exclusively on data available on or prior to the first day of admission, which account for clinical severity on admission but do not account for factors uncovered during the inpatient stay that influence the chance of a postdischarge adverse outcome.15,30 We specifically assessed the prognostic impact of a comprehensive set of factors from the entire index hospitalization, including hospital-acquired complications, clinical trajectory, and stability on discharge in predicting hospital readmissions. Our full-stay EHR model had statistically better discrimination, calibration, and diagnostic accuracy than our existing all-cause first-day EHR model15 and 2 previously published readmissions models that included more limited information from hospitalization (such as length of stay).9,10 However, although the more complicated full-stay EHR model was statistically better than previously published models, we were surprised that the predictive performance was only modestly improved despite the inclusion of many additional clinically relevant prognostic factors.

Taken together, our study has several important implications. First, the added complexity and resource intensity of implementing a full-stay EHR model yields only modestly improved readmission risk prediction. Thus, hospitals and healthcare systems interested in targeting their highest-risk individuals for interventions to reduce 30-day readmission should consider doing so within the first day of hospital admission. Our group's previously derived and validated first-day EHR model, which used data only from the first day of admission, qualitatively performed nearly as well as the full-stay EHR model.15 Additionally, a recent study using only preadmission EHR data to predict 30-day readmissions also achieved similar discrimination and diagnostic accuracy as our full-stay model.30

Second, the field of readmissions risk-prediction modeling may be reaching the maximum achievable model performance using data that are currently available in the EHR. Our limited ability to accurately predict all-cause 30-day readmission risk may reflect the influence of currently unmeasured patient, system, and community factors on readmissions.31–33 Due to the constraints of data collected in the EHR, we were unable to include several patient-level clinical characteristics associated with hospital readmission, including self-perceived health status, functional impairment, and cognition.33–36 However, given their modest effect sizes (ORs ranging from 1.06 to 2.10), adequately measuring and including these risk factors in our model may not meaningfully improve model performance and diagnostic accuracy. Further, many social and behavioral patient-level factors are also not consistently available in EHR data. Though we explored the role of several neighborhood-level socioeconomic measures, including prevalence of poverty, median income, education, and unemployment, we found that none were significantly associated with 30-day readmissions. These particular measures may have been inadequate to characterize individual-level social and behavioral factors, as several previous studies have demonstrated that patient-level factors such as social support, substance abuse, and medication and visit adherence can influence readmission risk in heart failure and pneumonia.11,16,22,25 This underscores the need for more standardized routine collection of data across functional, social, and behavioral domains in clinical settings, as recently championed by the Institute of Medicine.11,37 Integrating data from outside the EHR on postdischarge health behaviors, self-management, follow-up care, recovery, and home environment may be another important but untapped strategy for further improving prediction of readmissions.25,38

Third, a multicondition readmission risk-prediction model may be a less effective strategy than more customized disease-specific models for selected conditions associated with high 30-day readmission rates. Our group's previously derived and internally validated models for heart failure and human immunodeficiency virus had superior discrimination compared to our full-stay EHR model (C statistic of 0.72 for each).11,13 However, given differences in the included population and time periods studied, a head-to-head comparison of these different strategies is needed to assess differences in model performance and utility.

Our study had several strengths. To our knowledge, this is the first study to rigorously measure the additive influence of in-hospital complications, clinical trajectory, and stability on discharge on the risk of 30-day hospital readmission. Additionally, our study included a large, diverse study population that included all payers, all ages of adults, a mix of community, academic, and safety net hospitals, and individuals from a broad array of racial/ethnic and socioeconomic backgrounds.

Our results should be interpreted in light of several limitations. First, though we sought to represent a diverse group of hospitals, all study sites were located within north Texas and generalizability to other regions is uncertain. Second, our ascertainment of prior hospitalizations and readmissions was more inclusive than what could be typically accomplished in real time using only EHR data from a single clinical site. We performed a sensitivity analysis using only prior utilization data available within the EHR from the index hospital with no meaningful difference in our findings (data not shown). Additionally, a recent study found that 30-day readmissions occur at the index hospital for over 75% of events, suggesting that 30-day readmissions are fairly comprehensively captured even with only single-site data.39 Third, we were not able to include data on outpatient visits before or after the index hospitalization, which may influence the risk of readmission.1,40

In conclusion, incorporating clinically granular EHR data from the entire course of hospitalization modestly improves prediction of 30-day readmissions compared to models that only include information from the first 24 hours of hospital admission or models that use far fewer variables. However, given the limited improvement in prediction, our findings suggest that from the practical perspective of implementing real-time models to identify those at highest risk for readmission, it may not be worth the added complexity of waiting until the end of a hospitalization to leverage additional data on hospital complications, and the trajectory of laboratory and vital sign values currently available in the EHR. Further improvement in prediction of readmissions will likely require accounting for psychosocial, functional, behavioral, and postdischarge factors not currently present in the inpatient EHR.

Acknowledgments

This work was supported by the Agency for Healthcare Research and Quality–funded UT Southwestern Center for Patient-Centered Outcomes Research (1R24HS022418-01) and the Commonwealth Foundation (#20100323). Drs. Nguyen and Makam received funding from the UT Southwestern KL2 Scholars Program (NIH/NCATS KL2 TR001103). Dr. Halm was also supported in part by NIH/NCATS U54 RFA-TR-12-006. The study sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript.

Footnotes

Additional Supporting Information may be found in the online version of this article.

This study was presented at the Society of Hospital Medicine 2015 Annual Meeting in National Harbor, Maryland, and the Society of General Internal Medicine 2015 Annual Meeting in Toronto, Canada.

Disclosures: The authors have no conflicts of interest to disclose.

References

1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418–1428. [PubMed]
2. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402. [PMC free article] [PubMed]
3. Rennke S, Nguyen OK, Shoeb MH, Magan Y, Wachter RM, Ranji SR. Hospital-initiated transitional care interventions as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):433–440. [PubMed]
4. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528. [PubMed]
5. Rennke S, Shoeb MH, Nguyen OK, Magan Y, Wachter RM, Ranji SR. Interventions to Improve Care Transitions at Hospital Discharge. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
6. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real-time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998–1005. [PMC free article] [PubMed]
7. Amarasingham R, Patzer RE, Huesch M, Nguyen NQ, Xie B. Implementing electronic health care predictive analytics: considerations and challenges. Health Aff (Millwood) 2014;33(7):1148–1154. [PubMed]
8. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698. [PMC free article] [PubMed]
9. van Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551–557. [PMC free article] [PubMed]
10. Donze J, Aujesky D, Williams D, Schnipper JL. Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632–638. [PubMed]
11. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988. [PubMed]
12. Singal AG, Rahimi RS, Clark C, et al. An automated model using electronic medical record data identifies patients with cirrhosis at high risk for readmission. Clin Gastroenterol Hepatol. 2013;11(10):1335–1341.e1331. [PMC free article] [PubMed]
13. Nijhawan AE, Clark C, Kaplan R, Moore B, Halm EA, Amarasingham R. An electronic medical record-based model to predict 30-day risk of readmission and death among HIV-infected inpatients. J Acquir Immune Defic Syndr. 2012;61(3):349–358. [PubMed]
14. Horwitz LI, Partovian C, Lin Z, et al. Development and use of an administrative claims measure for profiling hospital-wide performance on 30-day unplanned readmission. Ann Intern Med. 2014;161(10 suppl):S66–S75. [PMC free article] [PubMed]
15. Amarasingham R, Velasco F, Xie B, et al. Electronic medical record-based multicondition models to predict the risk of 30 day readmission or death among adult medicine patients: validation and comparison to existing models. BMC Med Inform Decis Mak. 2015;15(1):39. [PMC free article] [PubMed]
16. Watson AJ, O'Rourke J, Jethwani K, et al. Linking electronic health record-extracted psychosocial data in real-time to risk of readmission for heart failure. Psychosomatics. 2011;52(4):319–327. [PMC free article] [PubMed]
17. Ashton CM, Wray NP. A conceptual framework for the study of early readmission as an indicator of quality of care. Soc Sci Med. 1996;43(11):1533–1541. [PubMed]
18. Dharmarajan K, Hsieh AF, Lin Z, et al. Hospital readmission performance and patterns of readmission: retrospective cohort study of Medicare admissions. BMJ. 2013;347:f6571. [PubMed]
19. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting more performance from performance measurement. N Engl J Med. 2014;371(23):2145–2147. [PubMed]
20. Bradley EH, Sipsma H, Horwitz LI, et al. Hospital strategy uptake and reductions in unplanned readmission rates for patients with heart failure: a prospective study. J Gen Intern Med. 2015;30(5):605–611. [PMC free article] [PubMed]
21. Krumholz HM. Post-hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100–102. [PMC free article] [PubMed]
22. Calvillo-King L, Arnold D, Eubank KJ, et al. Impact of social factors on risk of readmission or mortality in pneumonia and heart failure: systematic review. J Gen Intern Med. 2013;28(2):269–282. [PMC free article] [PubMed]
23. Keyhani S, Myers LJ, Cheng E, Hebert P, Williams LS, Bravata DM. Effect of clinical and social risk factors on hospital profiling for stroke readmission: a cohort study. Ann Intern Med. 2014;161(11):775–784. [PubMed]
24. Kind AJ, Jencks S, Brock J, et al. Neighborhood socioeconomic disadvantage and 30-day rehospitalization: a retrospective cohort study. Ann Intern Med. 2014;161(11):765–774. [PMC free article] [PubMed]
25. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Postdischarge environmental and socioeconomic factors and the likelihood of early hospital readmission among community-dwelling Medicare beneficiaries. Gerontologist. 2008;48(4):495–504. [PubMed]
26. Hu J, Gonsahn MD, Nerenz DR. Socioeconomic status and readmissions: evidence from an urban teaching hospital. Health Aff (Millwood) 2014;33(5):778–785. [PubMed]
27. Nagasako EM, Reidhead M, Waterman B, Dunagan WC. Adding socioeconomic data to hospital readmissions calculations may produce more useful results. Health Aff (Millwood) 2014;33(5):786–791. [PMC free article] [PubMed]
28. Pencina MJ, D'Agostino RB, Sr, D'Agostino RB, Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med. 2008;27(2):157–172. discussion 207–212. [PubMed]
29. Leening MJ, Vedder MM, Witteman JC, Pencina MJ, Steyerberg EW. Net reclassification improvement: computation, interpretation, and controversies: a literature review and clinician's guide. Ann Intern Med. 2014;160(2):122–131. [PubMed]
30. Shadmi E, Flaks-Manov N, Hoshen M, Goldman O, Bitterman H, Balicer RD. Predicting 30-day readmissions with preadmission electronic health record data. Med Care. 2015;53(3):283–289. [PubMed]
31. Kangovi S, Grande D. Hospital readmissions—not just a measure of quality. JAMA. 2011;306(16):1796–1797. [PubMed]
32. Joynt KE, Jha AK. Thirty-day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369. [PubMed]
33. Greysen SR, Stijacic Cenzer I, Auerbach AD, Covinsky KE. Functional impairment and hospital readmission in medicare seniors. JAMA Intern Med. 2015;175(4):559–565. [PMC free article] [PubMed]
34. Holloway JJ, Thomas JW, Shapiro L. Clinical and sociodemographic risk factors for readmission of Medicare beneficiaries. Health Care Financ Rev. 1988;10(1):27–36. [PMC free article] [PubMed]
35. Patel A, Parikh R, Howell EH, Hsich E, Landers SH, Gorodeski EZ. Mini-cog performance: novel marker of post discharge risk among patients hospitalized for heart failure. Circ Heart Fail. 2015;8(1):8–16. [PubMed]
36. Hoyer EH, Needham DM, Atanelov L, Knox B, Friedman M, Brotman DJ. Association of impaired functional status at hospital discharge and subsequent rehospitalization. J Hosp Med. 2014;9(5):277–282. [PMC free article] [PubMed]
37. Adler NE, Stead WW. Patients in context—EHR capture of social and behavioral determinants of health. N Engl J Med. 2015;372(8):698–701. [PubMed]
38. Nguyen OK, Chan CV, Makam A, Stieglitz H, Amarasingham R. Envisioning a social-health information exchange as a platform to support a patient-centered medical neighborhood: a feasibility study. J Gen Intern Med. 2015;30(1):60–67. [PMC free article] [PubMed]
39. Henke RM, Karaca Z, Lin H, Wier LM, Marder W, Wong HS. Patient factors contributing to variation in same-hospital readmission rate. Med Care Res Rev. 2015;72(3):338–358. [PubMed]
40. Weinberger M, Oddone EZ, Henderson WG. Does increased access to primary care reduce hospital readmissions? Veterans Affairs Cooperative Study Group on Primary Care and Hospital Readmission. N Engl J Med. 1996;334(22):1441–1447. [PubMed]