Objective. To develop and prospectively validate a risk-adjustment tool in acute asthma.
Data Sources. Data were obtained from two large studies on acute asthma, the Multicenter Airway Research Collaboration (MARC) and the National Emergency Department Safety Study (NEDSS) cohorts. Both studies involved >60 emergency departments (EDs) and were performed during 1996–2001 and 2003–2006, respectively. Both included patients aged 18–54 years presenting to the ED with acute asthma.
Study Design. Retrospective cohort studies.
Data Collection. Clinical information was obtained from medical record review. The risk index was derived in the MARC cohort and then was prospectively validated in the NEDSS cohort.
Principal Findings. There were 3,515 patients in the derivation cohort and 3,986 in the validation cohort. The risk index included nine variables (age, sex, current smoker, ever admitted for asthma, ever intubated for asthma, duration of symptoms, respiratory rate, peak expiratory flow, and number of beta-agonist treatments) and showed satisfactory discrimination (area under the receiver operating characteristic curve, 0.75) and calibration (p=.30 for Hosmer–Lemeshow test) when applied to the validation cohort.
Conclusions. We developed and validated a novel risk-adjustment tool in acute asthma. This tool can be used for health care provider profiling to identify outliers for quality improvement purposes.
Risk adjustment is an important method in health services research, particularly when profiling provider performance and adjusting capitation-based payment (Iezzoni et al. 1998; Majeed, Bindman, and Weiner 2001a, b; Blumenthal et al. 2005). A number of risk-adjustment tools have been developed in cardiology (Krumholz et al. 1999; Hall et al. 2007), trauma (Reiter et al. 2004), and critical care (Zimmerman et al. 2006) for profiling hospital performance. Risk-adjustment tools for acute respiratory disorders, such as acute asthma, are very limited. Acute asthma is a common medical problem, accounting for approximately 2 million emergency department (ED) visits and 500,000 hospitalizations each year (Moorman et al. 2007). Despite its importance, only a few risk indices or scoring systems for acute asthma are available in the literature (Rodrigo and Rodrigo 1997, 1998; Cham et al. 2002; Gorelick et al. 2004; Kelly, Kerr, and Powell 2004). These indices, however, are designed to risk-stratify asthmatic patients in clinical practice and include subtle physical findings that are infrequently documented in the medical record. As a result, these indices are not well suited for risk adjustment in health services research.
Accordingly, we developed and prospectively validated a risk-adjustment tool for acute asthma using data from two large multicenter cohort studies. To demonstrate the use of this risk-adjustment tool, we chose hospital admission as a potentially important outcome measure and profiled admission practices across the EDs. Hospitalization is an important outcome in asthma because it represents a large portion of the expenditures for asthma care, with an estimated $4.7 billion spent each year (NHLBI 2007). The decision to admit, however, can vary from hospital to hospital (Morris and Munasinghe 1994; Ansari et al. 2003; Lougheed et al. 2006). With risk adjustment, differences in patient mix across hospitals can be taken into account such that hospitals caring for sicker patients are not unfairly penalized for their higher admission rates. Moreover, by minimizing the differences in patient mix, practice profiling can identify hospital outliers with unexplained variations in admission practices, and these unexplained variations can then be investigated further.
The MARC is a division of the Emergency Medicine Network (EMNet, http://www.emnet-usa.org). Details of the study design and data collection have been published previously (Banerji et al. 2006). The MARC database combines data from four observational cohort studies performed during 1996–2001. Using a standardized protocol, investigators at 76 U.S. EDs provided 24 hour/day coverage for a median of 2 weeks. Inclusion criteria were physician diagnosis of acute asthma, age 18–54, and the ability to give informed consent. Repeat visits by individual subjects were excluded. Patients' demographics, asthma history, and details of their current exacerbation were obtained by ED interview. Data on ED management and disposition were obtained using medical chart review. For those who did not complete the ED interview (missed by investigators, refused, or other reasons), their medical records were reviewed to capture full data on demographics, ED presentation, ED course, as well as limited information on asthma history. Because each of the interviewed subjects also had data collected from their medical records, the MARC database represents all eligible patients presenting to the ED during the study periods. For the current analysis, we focused on the variables taken from medical records.
The NEDSS is a large, multicenter study designed to characterize organizational- and clinician-related factors associated with the occurrence of errors in EDs. Details of the study design and data collection have been published previously (Sullivan et al. 2007). In brief, NEDSS was also coordinated by EMNet and recruited EDs by directly inviting sites affiliated with EMNet; EDs not yet affiliated with EMNet were invited through postings on emergency medicine listservs and presentations at national emergency medicine meetings. Three clinical conditions were selected and examined in the NEDSS: acute myocardial infarction, dislocations, and acute asthma. The current analysis examined the asthma component (Tsai et al. 2009). Using a standardized data abstraction tool, trained research personnel at 63 U.S. EDs abstracted data from randomly selected ED visits for acute asthma during 2003–2006. The visits were identified by using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes 493.xx. Inclusion criteria were age 14–54 years and a history of asthma before the index visit. The following visits were excluded: repeat visits; transfer visits; patient visits with a history of chronic obstructive pulmonary disease, emphysema, or chronic bronchitis; or visits not prompted, in large part, by asthma exacerbation. Similar to MARC, data abstraction focused on baseline patient characteristics, past asthma history, ED presentation, management, and disposition. One hospital's institutional review board prohibited documentation of date of birth and other dates, so the risk index could not be calculated for that site. We therefore dropped this site from the NEDSS cohort, leaving 62 EDs in the NEDSS analysis.
Peak expiratory flow (PEF) was recorded in L/min and expressed as the absolute value; percent-predicted values are not presented because patient height was not recorded. Severity of acute asthma was classified according to the initial PEF as follows: mild, 300+ for women, 400+ for men; moderate, 200–299 for women, 250–399 for men; severe, 120–199 for women, 150–249 for men; and very severe, <120 for women, <150 for men. The absolute PEF cutoffs represented approximately 70, 40, and 25 percent predicted, respectively, for a typical adult woman and man (Radeos and Camargo 2004).
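The sex-specific PEF cutoffs above amount to a simple lookup. A minimal sketch (the function name and signature are illustrative, not from the study):

```python
def pef_severity(pef_lmin: float, sex: str) -> str:
    """Classify acute asthma severity from the initial PEF (L/min),
    using the sex-specific absolute cutoffs described in the text
    (roughly 70, 40, and 25 percent predicted for a typical adult)."""
    # Lower bounds for (mild, moderate, severe); below the last bound is very severe.
    bounds = {"F": (300, 200, 120), "M": (400, 250, 150)}
    mild, moderate, severe = bounds[sex.upper()[0]]
    if pef_lmin >= mild:
        return "mild"
    if pef_lmin >= moderate:
        return "moderate"
    if pef_lmin >= severe:
        return "severe"
    return "very severe"
```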
The outcome measure was hospital admission, which was defined as admission to an inpatient unit, observation unit, or intensive care unit. We chose hospital admission as the most relevant severity measure because mortality is very rare in acute asthma.
All analyses were performed using Stata 10.0 (StataCorp, College Station, TX). Summary statistics are presented as proportions (with 95 percent confidence intervals [CI]), means (with standard deviations), or medians (with interquartile ranges). All p values are two-sided, with p<.05 considered statistically significant.
Multivariable logistic regression was used to develop the risk-adjustment tool for hospital admission from the MARC database. Model variables had to be readily available in the medical record and were selected a priori based on the review of the medical literature (Rodrigo and Rodrigo 1997, 1998; Emerman et al. 1999; Kelly, Powell, and Kerr 2002; Weber et al. 2002; Kelly, Kerr, and Powell 2004) and clinical experience. The variable domains included the following: demographics, chronic asthma-related factors, ED presentation and severity, and ED course. To determine the functional form used for continuous predictors, we grouped the predictor into bins of equal width and checked if log odds of admission increased or decreased linearly. If the linearity assumption did not hold, dummy coded categorical variables were generated to characterize the dose–response relationship. Variables with missing data were dummy coded using the missing indicator method (Miettinen 1985). This method of modeling missing data assumes data are missing at random. The performance of the model was evaluated by discrimination and calibration. The discriminatory power was quantified by determining the area under the receiver operating characteristic (ROC) curve. The calibration was measured by comparing predicted versus observed admissions in each decile of admission probability using the Hosmer–Lemeshow goodness-of-fit test (Hosmer and Lemeshow 2000). All odds ratios (ORs) are presented with 95 percent CI. After development of the risk index in the MARC cohort, the regression coefficients were retained and prospectively validated in the NEDSS cohort. The performance of the index, including discrimination and calibration, was re-evaluated.
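The missing-indicator method referenced above can be sketched as follows: each missing value is replaced by a constant, and a 0/1 indicator flags which observations were missing; both the filled covariate and the indicator then enter the regression. A minimal illustration (names are hypothetical, not the authors' Stata code):

```python
def missing_indicator(values, fill=0.0):
    """Dummy-code missing data with the missing-indicator method:
    replace each missing value (None) with a constant and return a
    parallel 0/1 indicator marking which observations were missing.
    Both columns would then be entered into the logistic model."""
    filled = [fill if v is None else v for v in values]
    indicator = [1 if v is None else 0 for v in values]
    return filled, indicator
```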
Because PEF measurements were not available for all patients, and because modeling missing data requires assumptions, we reduced the number of covariates in the model by omitting PEF and repeated the analyses using this reduced model.
The risk-adjustment tool can be used for profiling many severity-related outcome measures, such as hospital admissions, ED length of stay, and ED costs. In this paper, we demonstrated profiling admission practices. There are at least two analytic approaches for practice profiling (DeLong et al. 1997). The first one is the ready-made approach from the simple logistic regression model (Ivanov, Tu, and Naylor 1999). This method uses the validated beta coefficients as weights and applies them to individual patient data to obtain expected probabilities (p) of admission:

logit(p) = β_0 + β_1X_1 + β_2X_2 + … + β_nX_n (model 1)
These individual probabilities are then averaged at the hospital level to give each hospital's expected admission rate. Finally, risk-adjusted admission rates for each hospital are calculated by dividing the hospital's actual admission rates by its expected value and then multiplying that by the hospital-wide average.
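The ready-made approach above can be sketched with toy inputs (the function names are illustrative): the validated coefficients give each patient a predicted probability, the per-hospital mean of those probabilities is the expected rate, and indirect standardization rescales the observed rate.

```python
import math

def predicted_admission_prob(intercept, coefs, x):
    """Expected probability of admission for one patient, applying the
    validated beta coefficients as fixed weights:
    p = 1 / (1 + exp(-(b0 + b1*x1 + ... + bn*xn)))."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def risk_adjusted_rate(observed_rate, patient_probs, overall_rate):
    """Indirect standardization: the hospital's expected rate is the mean
    predicted probability over its patients; the risk-adjusted rate is
    (observed / expected) * hospital-wide average."""
    expected = sum(patient_probs) / len(patient_probs)
    return observed_rate / expected * overall_rate
```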
A more sophisticated approach is to use hierarchical modeling, which takes into account the potential for clustered observations within hospitals (Krumholz et al. 2006a; Tsai 2009), as demonstrated in this paper. We performed a random intercept, two-level hierarchical logistic regression model. This model included the fixed effects of the patient-level covariates comprising the risk index, plus a hospital-level random intercept (Rabe-Hesketh and Skrondal 2005). This model used the same variables in the index but re-estimated the beta coefficients according to the study population and hierarchical model specification; for example, the beta coefficients in model 2 are different from those in model 1:

logit(p_ij) = β_0 + β_1X_1ij + … + β_nX_nij + τ_j (model 2)
where p_ij denotes the probability of admission for the ith patient treated at the jth hospital; X_1ij through X_nij denote patient characteristics. The above model incorporates a normally distributed hospital random effect (τ_j). This hospital-specific random effect is the logarithm of the OR of admission at the given hospital compared with a hospital with an average admission rate in the study population, after adjusting for patient mix. The patients treated at hospitals with positive random effects have greater odds of admission than patients treated at a hospital with an average admission rate.
For each hospital, an estimate of that hospital's random effects was computed, as was its standard error. To identify the outliers in admission practice, hospitals were ranked by their point estimates of random effects. Ninety-five percent CIs were plotted around the point estimates (aka caterpillar plot). Those hospitals whose 95 percent CI lay entirely above zero were classified as having significantly higher-than-average admission rates, while those hospitals whose 95 percent CI lay entirely below zero were classified as having significantly lower-than-average admission rates (DeLong et al. 1997). The impact of risk adjustment on hospital rankings was assessed by changes in tertile ranking before and after risk adjustment, as well as a weighted κ coefficient of agreement.
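The outlier rule described above (a 95 percent CI lying entirely above or below zero) can be sketched as follows, assuming the point estimates and standard errors of the hospital random effects are available from the fitted model; the names are illustrative:

```python
def classify_hospitals(effects):
    """Flag admission-practice outliers from hospital random effects
    (log-odds scale). `effects` maps hospital id -> (estimate, SE).
    A hospital is 'high' when its 95% CI lies entirely above zero,
    'low' when entirely below zero, and 'average' otherwise."""
    z = 1.96  # normal quantile for a two-sided 95% CI
    labels = {}
    for hosp, (est, se) in effects.items():
        lo, hi = est - z * se, est + z * se
        if lo > 0:
            labels[hosp] = "high"
        elif hi < 0:
            labels[hosp] = "low"
        else:
            labels[hosp] = "average"
    return labels
```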
If one is interested in hospital characteristics associated with admission rather than identifying the hospital outliers, one could enter hospital characteristics as fixed-effect parameters into model 2 (i.e., H_1j through H_kj as in model 3).
A key informant survey was distributed at each site in NEDSS to collect data on ED characteristics. We included the following eight ED characteristics and refit the hierarchical model as follows: number of beds in the ED, annual visit volume, geographic regions (Northeast, South, Midwest, and West), affiliation with an emergency medicine residency program, number of ED physicians and ED nurses, the presence of a daily hospital “interdepartmental bed conference,” and whether ED attending physicians had admitting privileges.
There were 3,515 patients with acute asthma in the MARC derivation cohort and 3,986 in the NEDSS validation cohort. The patients in the derivation and validation cohorts were quite similar (Table 1). The median age was between 30 and 40 years for both cohorts, and there were more women than men in both cohorts. More patients in the MARC cohort had a history of admission or intubation for asthma, compared with the NEDSS cohort. Acute ED presentation was similar in both cohorts, with the majority of patients being classified as having moderate-to-severe asthma according to the initial PEF. Admission rates were 21 and 19 percent in the derivation and validation cohorts, respectively.
The model from the derivation cohort comprised nine variables, including demographics, chronic asthma-related factors, acuity at ED presentation, and initial ED treatments (Table 2). Female sex, prior history of hospital admission for asthma, higher respiratory rate and lower PEF at ED presentation, and more intensive β-agonist treatments were independently associated with an increased risk of hospital admission. In contrast, shorter duration of symptoms was associated with a decreased risk of admission. The area under the ROC curve for the model from the derivation cohort was 0.75; the model fit was satisfactory (p=.39 for Hosmer–Lemeshow test) (Table 3).
When the derivation model was applied to the validation cohort, it maintained satisfactory discriminatory ability (area under the ROC, 0.75) and calibration (p=.30 for Hosmer–Lemeshow test) (Table 3). The satisfactory calibration was evident in the plot of observed versus predicted probabilities of admission. In all deciles of admission probability, the predicted probabilities of admission were fairly consistent with actual risks of admission (Figure 1).
When omitting PEF from the MARC model, the discriminatory ability of the index was slightly attenuated (area under the ROC, 0.73), while the calibration was maintained (p=.41 for Hosmer–Lemeshow test) (Table 3). The reduced eight-variable model still performed satisfactorily when applied to the validation cohort (Table 3).
Results for identifying hospitals as outliers in admission practices are shown in Figure S1. After adjusting for patient mix, nine hospitals were identified as having significantly lower admission rates, while 18 hospitals were identified as having significantly higher admission rates in the NEDSS sample.
Hospitals were ranked according to the random effects obtained from the hierarchical model. After adjusting for patient mix, there were significant changes in the tertile rankings, with all the changes occurring between adjacent categories (Table S1). The κ coefficient showed only moderate agreement between hospital rankings before and after risk adjustment (unweighted κ, 0.47; linearly weighted κ, 0.60).
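As an illustration of the agreement statistic used for the tertile rankings, a linearly weighted κ can be computed with the standard disagreement-weight formula; this is a sketch on toy data, not the authors' Stata computation:

```python
from collections import Counter

def linear_weighted_kappa(before, after, k=3):
    """Linearly weighted kappa for agreement between two category
    assignments coded 0..k-1 (here, hospital tertiles before and after
    risk adjustment). Disagreement weight is |i - j|, so
    kappa = 1 - observed weighted disagreement / chance-expected one."""
    n = len(before)
    observed = sum(abs(b - a) for b, a in zip(before, after)) / n
    pb, pa = Counter(before), Counter(after)
    expected = sum(
        (pb[i] / n) * (pa[j] / n) * abs(i - j)
        for i in range(k) for j in range(k)
    )
    return 1.0 - observed / expected
```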
Further inclusion of ED characteristics in the hierarchical model revealed that only the number of ED beds was independently, positively associated with hospitalization (OR per 1-bed increase, 1.05; 95 percent CI, 1.01–1.09), after adjusting for patient mix.
Using data from two large cohorts, we developed and prospectively validated a risk-adjustment tool for acute asthma. We also demonstrated that this tool can be used for profiling admission practices across hospitals. Given its validity, we believe that this tool may have broader uses, particularly in monitoring and reporting performance of hospitals and health care providers, as well as in reimbursement control.
A number of studies have proposed hospital admission as a proxy for severity of illness in acute care settings and have developed risk adjustment models using admission as an outcome measure (Chamberlain et al. 2004; Gorelick et al. 2007). These models, however, are “generic” in nature and have not been validated in disease-specific conditions, such as acute asthma. A few asthma-specific risk indices or scoring systems are available (Rodrigo and Rodrigo 1997, 1998; Cham et al. 2002; Gorelick et al. 2004; Kelly, Kerr, and Powell 2004). However, as mentioned before, these tools either utilize repeated measurements of lung function (Rodrigo and Rodrigo 1998; Kelly, Kerr, and Powell 2004) or incorporate subtle physical findings (e.g., accessory muscle use) (Rodrigo and Rodrigo 1997; Cham et al. 2002; Gorelick et al. 2004), both of which are infrequently documented in the medical record.
Risk-adjustment models should be developed and validated in different samples to assess robustness because external validation is the true test of a predictive model (Harrell, Lee, and Mark 1996; Krumholz et al. 2006a). Although the NEDSS patients seemed to be less ill compared with the MARC patients, the risk index retained satisfactory discrimination and calibration when applied to the NEDSS data. The stability of the model over time supports the validity of the nine variables in the index. It is possible that a simpler risk-adjustment tool based on administrative data will be developed in the future, and this medical record–based model may be used to validate the administrative claims model, as health services researchers have done in heart failure and acute myocardial infarction (Krumholz et al. 2006b, c).
We have shown that the risk index can be incorporated into the hierarchical model for benchmarking admission practices across hospitals. By inspecting the “caterpillar plot,” significant deviations from the average should prompt review of the medical practices (utilization management), especially in the hospitals with the highest deviations from the reference. For those hospitals that potentially overadmit patients, payment for unnecessary services may be denied to avoid a waste of inpatient resources. For those hospitals that potentially fail to admit patients when necessary, physicians' re-education and feedback on their practice patterns may be needed to minimize adverse events among patients discharged from the ED.
Because the results of performance ranking (i.e., report card) have profound effects on hospitals and health care providers (Shahian et al. 2005), it is critically important that the risk-adjustment tool is updated, transparent, and accountable, and that the statistical methodology for profiling is appropriate (Tsai 2009). Some studies have shown that using hierarchical models may avoid false outlier classification and may result in more accurate estimates of provider performance (Shahian et al. 2001, 2005). With the use of our validated risk index and the hierarchical model, provider profiling for acute asthma would be more credible.
This study has some potential limitations. First, unlike risk-adjustment tools derived from administrative data, the risk index requires medical record abstraction. Although abstraction captures more clinical information than administrative data, it can be costly. However, with the advances in information technology, electronic medical records may provide a more efficient way to capture the information needed for this index. Second, we used hospitalization as an outcome measure to demonstrate the utility of the risk-adjustment tool. The decision to admit, however, is influenced by many other factors in addition to disease severity, such as patient preference and the availability of hospital beds (Wennberg 2002). These unwarranted variations in practice would require a closer inspection of medical records to determine the appropriateness of admission decisions. In this context, the risk-adjustment tool helps identify outliers to mitigate the burden associated with full utilization review. Moreover, this risk-adjustment tool can be used to look at other severity-related outcomes, such as costs and length of stay. Third, this risk index was not designed for risk stratification in clinical practice. Rather, it is intended to be applied to groups of patients at the hospital or provider level for the purposes of risk adjustment. Finally, the EDs that composed our samples are predominantly urban, academically affiliated hospitals. The applicability of this index to other institutions will require additional studies.
In summary, we developed and prospectively validated a novel risk-adjustment tool in acute asthma. The tool can be used for profiling practices among health care providers and to identify outliers for the purposes of quality improvement or reimbursement control. For policymakers, validated risk-adjustment tools and appropriate statistical methodology increase the likelihood of correct inferences and sound policies. For health care providers, receiving regular feedback on practices should help improve decision making and achieve a more cost-effective practice.
Joint Acknowledgment/Disclosure Statement: This investigator-initiated study was supported by a research grant from Critical Therapeutics (Lexington, MA), but this funder had no role in data collection, statistical analysis, preparation of the manuscript, or decision to publish. The underlying studies were supported by unrestricted grants from GlaxoSmithKline (Research Triangle Park, NC) and R01 HS-13099 from the Agency for Healthcare Research and Quality (Rockville, MD). Dr. Camargo was also funded by grant HL084401 (Bethesda, MD). The authors thank the participating investigators for their ongoing dedication to emergency medicine and patient safety research (full list in the online supplemental material).
Disclosures: Dr. Camargo has received financial support from a variety of groups for participation in conferences, consulting, and medical research; recent industry sponsors with an interest in asthma were AstraZeneca, Critical Therapeutics, Dey, Genentech, GSK, Merck, Novartis, Respironics, and Schering-Plough. Other authors have no conflicts of interest to disclose.
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
Appendix SA2: Full Acknowledgments.
Figure S1. Profiling Hospital Admissions. ED-level random intercepts (log of the OR for admission) and their 95 percent confidence intervals, obtained by the hierarchical model, are used to identify the outliers of hospital admission practices. The black circles represent the hospital outliers that have significantly lower or higher admission practices than an average hospital, while the white circles represent the hospitals that have admission practices not significantly different from an average hospital in the study population.
Table S1. Comparison of the Impact of Risk Adjustment on the Hospital Rankings of Admission Practices in the Validation Cohort.
Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.