Compare characteristics and outcomes of patients hospitalized in specialty cardiac and general hospitals for acute myocardial infarction (AMI) and coronary artery bypass grafting (CABG).
2000–2005 all-payor administrative data from Arizona, California, Texas, and Wisconsin.
We identified patients admitted to specialty and competing general hospitals with AMI or CABG and compared patient demographics, comorbidity, and risk-standardized mortality in specialty and general hospitals.
Specialty hospitals admitted a lower proportion of women and blacks and treated patients with less comorbid illness than general hospitals. Unadjusted in-hospital AMI mortality for Medicare enrollees in specialty and general hospitals was 6.1 and 10.1 percent (p<.0001) and for non-Medicare enrollees was 2.8 and 4.0 percent (p=.04). Unadjusted in-hospital CABG mortality for Medicare enrollees in specialty and general hospitals was 3.2 and 4.7 percent (p<.01) and for non-Medicare enrollees was 1.1 and 1.8 percent (p=.02). After adjusting for patient characteristics and hospital volume, risk-standardized in-hospital mortality for all AMI patients was 2.7 percent for specialty hospitals and 4.1 percent for general hospitals (p<.001) and for CABG was 1.5 percent for specialty hospitals and 2.0 percent for general hospitals (p=.07).
In-hospital mortality in specialty hospitals was lower than in general hospitals for AMI but similar for CABG. Our results suggest that specialty hospitals may offer significantly better outcomes for AMI but not CABG.
The emergence of physician-owned specialty cardiac hospitals has generated widespread controversy (Iglehart 2005; Berenson, Bodenheimer, and Pham 2006). Critics allege that these hospitals preferentially select low-risk, well-insured patients for admission without demonstrating any improvement in risk-adjusted outcomes. Moreover, opponents contend that by cherry-picking low-risk and better-insured patients, specialty hospitals severely damage the financial viability of competing full-service hospitals (Kahn 2006). Supporters counter that by focusing capital and personnel on a circumscribed population of patients, specialty hospitals will deliver better outcomes at lower cost than general hospitals (Herzlinger 2004).
Despite an often contentious debate involving providers, regulators, and policy makers (2005; Barro, Huckman, and Kessler 2006; Wilson 2006), empirical data assessing outcomes in specialty and general cardiac hospitals remain limited. Available data from a number of recent analyses suggest that risk-adjusted rates of adverse outcomes may be 10–20 percent lower for specialty hospitals as compared with general hospitals (Dobson 2003; 2005; Cram, Rosenthal, and Vaughan-Sarrazin 2005; Nallamothu et al. 2007). However, an important limitation of prior studies has been their exclusive reliance on Medicare administrative data, making it uncertain whether the improved outcomes observed among Medicare beneficiaries can be extrapolated to other patient populations.
The overarching goal of our study was to compare mortality in specialty cardiac and competing general hospitals for adults admitted with acute myocardial infarction (AMI) or undergoing coronary artery bypass graft (CABG) surgery using all-payor State Inpatient Data (SID) from the Agency for Healthcare Research and Quality (AHRQ). Our objective was to examine whether the improved outcomes observed in specialty hospitals in prior studies of Medicare enrollees would also be observed in our analysis of the non-Medicare population.
We used SID data for years 2000–2004 from Texas, 2000–2005 from California and Arizona, and 2005 from Wisconsin to identify all patients age 20 years or older hospitalized with AMI on the basis of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code 410.X or undergoing CABG (ICD-9-CM procedure codes 36.10–36.19). We selected these four states for analysis because they have a large number of physician-owned specialty hospitals (2005) and submit data to the Agency for Healthcare Research and Quality (AHRQ) as part of the Healthcare Cost and Utilization Project (HCUP). The SID include detailed administrative data derived from the UB-92 hospital discharge abstract for all patients hospitalized in participating states. Key elements include the following: patient demographics; admitting hospital; primary and secondary diagnoses and procedures, as captured by ICD-9-CM codes; the diagnosis-related group (DRG); admission source (e.g., emergency department, transfer from another hospital); admission and discharge dates; patient's primary insurance (categorized as Medicare, private insurance, Medicaid, self-pay, and other); type of insurance (fee-for-service or HMO); and disposition at the time of hospital discharge (e.g., transfer to another acute care hospital, deceased).
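The cohort-selection rule above can be sketched in code. This is an illustrative reconstruction, not the authors' actual programs: the record layout (`age`, `primary_dx`, `procedures`) is a hypothetical stand-in for UB-92 discharge-abstract fields, and ICD-9-CM codes are assumed to be stored without decimal points (e.g., "41071" for 410.71, "3612" for 36.12), as is common in discharge data.

```python
def is_ami(primary_dx: str) -> bool:
    """AMI defined as a primary ICD-9-CM diagnosis of 410.x."""
    return primary_dx.startswith("410")

def is_cabg(procedures: list) -> bool:
    """CABG defined as any ICD-9-CM procedure code 36.10-36.19
    (stored without the decimal, so all begin with '361')."""
    return any(p.startswith("361") for p in procedures)

def build_cohorts(discharges):
    """Split discharge abstracts into AMI and CABG cohorts,
    restricted to patients age 20 years or older. A record can
    appear in both cohorts (AMI patient who underwent CABG)."""
    ami, cabg = [], []
    for d in discharges:
        if d["age"] < 20:
            continue
        if is_ami(d["primary_dx"]):
            ami.append(d)
        if is_cabg(d["procedures"]):
            cabg.append(d)
    return ami, cabg
```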
After identifying all patients admitted with AMI or for CABG as described above, we excluded all patients hospitalized after transfer from another acute care hospital. Transfer patients were excluded because the SID data for the states in our study lack unique patient identifiers, making it impossible to track patients transferred between hospitals; including transfer patients could therefore have double-counted patients at both the transferring and receiving hospitals. This resulted in the exclusion of 5,998 AMI patients admitted to specialty hospitals (50.3 percent of specialty hospital AMI admissions) and 2,824 CABG patients admitted to specialty hospitals (25.6 percent of specialty hospital CABG admissions), as well as 7,059 AMI patients admitted to general hospitals (12.7 percent of general hospital AMI admissions) and 4,202 CABG patients admitted to general hospitals (10.5 percent of general hospital CABG admissions).
We stratified the AMI and CABG patients into those insured by Medicare (either fee-for-service or Medicare managed care) and those not insured by Medicare (i.e., private insurance, Medicaid, or uninsured/other). We identified comorbid conditions using the ICD-9-CM code algorithms defined by Elixhauser et al. (1998). Unlike the full Elixhauser algorithm, which does not count cardiac-related secondary diagnoses as comorbid conditions for patients with cardiac disease, we considered all secondary diagnoses as potential comorbid conditions. Additional clinical risk factors included variables that have been examined in prior studies assessing cardiovascular outcomes (Elixhauser et al. 1998; Rosenthal, Vaughan, and Hannan 2003; Popescu, Vaughan-Sarrazin, and Rosenthal 2007), including a history of previous CABG surgery or PCI, use of an intraaortic balloon pump (IABP) during the admission, and, for AMI patients only, use of mechanical ventilation and receipt of revascularization during the hospitalization. AMI location was coded as anterolateral (ICD-9 codes 410.0, 410.1, and 410.5), inferoposterior (410.2, 410.3, 410.4, and 410.6), subendocardial (410.7), and other (410.8 and 410.9). We calculated AMI and CABG volume for each hospital by summing total AMI (and CABG) admissions to each facility.
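The AMI location groupings above reduce to a lookup on the fourth digit of the 410.x code. A minimal hypothetical sketch (not the authors' code), again assuming codes are stored without the decimal point:

```python
# Fourth digit of a 410.x ICD-9-CM code -> infarct location,
# per the groupings described in the text.
AMI_LOCATION = {
    "0": "anterolateral", "1": "anterolateral", "5": "anterolateral",
    "2": "inferoposterior", "3": "inferoposterior",
    "4": "inferoposterior", "6": "inferoposterior",
    "7": "subendocardial",
    "8": "other", "9": "other",
}

def ami_location(dx: str) -> str:
    """Map a 410.x diagnosis code (e.g. '41071' for 410.71)
    to an infarct-location category."""
    if not dx.startswith("410") or len(dx) < 4:
        return "other"
    return AMI_LOCATION.get(dx[3], "other")
```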
We identified physician-owned specialty cardiac hospitals and competing general hospitals using a method that we and others have used previously (Cram, Rosenthal, and Vaughan-Sarrazin 2005; Nallamothu et al. 2007). Briefly, using the SID we created a measure of cardiac specialization for each hospital: the proportion of all admissions to each hospital in 2003 that were categorized as Major Diagnostic Category (MDC) 5 (Diseases of the Circulatory System). Next, we reviewed the websites and SID data for each of the 20 hospitals with the highest degree of cardiac specialization in each state and excluded all hospitals that provided obstetric care and/or general pediatric services (N=50), as such facilities do not fit the definition of specialty cardiac hospitals. This resulted in the identification of 10 specialty cardiac hospitals, whose status as physician-owned specialty hospitals was verified by review of hospital websites and comparison with a published list of physician-owned specialty hospitals developed by the Centers for Medicare and Medicaid Services (CMS) (2005).
Each specialty hospital was assigned to a unique hospital market (hospital referral region [HRR]) based on algorithms defined by the Dartmouth Atlas of Health Care (The Dartmouth Institute 2000). We identified all general hospitals located in the same HRR as one or more specialty hospitals. We then excluded all general hospitals that lacked cardiac revascularization capability defined as performing fewer than five PCIs and five CABGs during the most recent year of data we had available (130 hospitals that admitted 23,052 AMI and 584 CABG patients); we chose to exclude hospitals without revascularization because such small hospitals would not represent an appropriate comparator for specialty hospitals. Thus, our final comparison group of general hospitals with revascularization capability consisted of 58 hospitals for AMI and 58 hospitals for CABG. In total, across the 10 HRRs included in our study between 61 and 94 percent of AMI patients were admitted to study hospitals while between 99.8 and 100 percent of CABG patients were admitted to study hospitals. All analyses described in subsequent sections of the manuscript were conducted separately for the AMI and CABG cohorts.
For the AMI and CABG cohorts, analyses were conducted for three different patient cohorts: Medicare enrollees alone, non-Medicare enrollees alone, and all patients combined. We used bivariate methods to compare the demographic characteristics, comorbidity, and clinical risk factors of patients admitted to specialty and general hospitals in each of the three cohorts.
We used three-level hierarchical logistic regression models to calculate risk-standardized mortality rates for specialty and general hospitals for each of the three AMI and CABG cohorts defined above. These multilevel models account for the fact that patients are nested within hospitals, which are in turn nested within HRRs. Three separate models were estimated for each patient cohort with progressive risk adjustment: (1) unadjusted models; (2) models adjusted for patient demographics, comorbidity, and clinical risk factors; and (3) models adjusted for all patient characteristics plus hospital AMI or CABG volume. For these models, the dependent variable was an indicator of whether a given patient died during his or her hospitalization. Independent variables were selected from a wide array of patient demographic characteristics, comorbid illnesses, and hospital-level variables that have been shown to be associated with outcomes for AMI and CABG in prior studies using administrative data. Most candidate variables were selected by a stepwise model selection procedure and retained in the models if p-values were <.15; a limited number of key variables (e.g., patient age, gender) were included in the models irrespective of their statistical association with the study endpoint. In the risk adjustment models, age was expressed as five categories (<50, 50–59, 60–69, 70–79, and 80 years and older) entered as indicator variables, with age <50 years as the referent. Race was expressed using indicator variables for patients identified as black, Hispanic, Asian, Native American, and other, with white race as the referent. The full models are available as Appendices 1 and 2. Risk-standardized mortality rates for specialty and general hospitals were estimated by the direct method of standardization using the characteristics of the average patient across the entire sample (Curtin and Klein 1995).
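The direct-standardization step can be illustrated with a deliberately simplified sketch: apply one hospital group's fitted logistic model to the covariate profile of every patient in the combined sample and average the predicted probabilities. The study's actual models were three-level hierarchical models fit in SAS; the flat logistic form and function names below are illustrative assumptions only.

```python
import math

def predict_prob(coefs, intercept, x):
    """Logistic prediction: P(death) = 1 / (1 + exp(-(b0 + b.x)))."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def risk_standardized_rate(coefs, intercept, covariates):
    """Direct standardization: average the predicted mortality
    from one group's fitted model over the covariate profiles of
    the entire (combined) patient sample, so that both hospital
    groups are evaluated against the same case mix."""
    preds = [predict_prob(coefs, intercept, x) for x in covariates]
    return sum(preds) / len(preds)
```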
To assess the robustness of our findings, we conducted an array of sensitivity analyses. First, we repeated the comparison of specialty and general hospitals using an alternative, more expansive definition of competing general hospitals: all hospitals performing at least five PCIs and five CABGs in 2004 (for TX) or 2005 (for CA, AZ, and WI), irrespective of whether the hospital was located within an HRR containing a specialty hospital. Second, we repeated our analyses without excluding patients admitted after transfer from another acute care hospital. Third, given prior reports suggesting that specialty hospitals inappropriately transfer critically ill patients to avoid in-hospital deaths, we examined the potential impact of interhospital transfers on our results in two ways. In our primary analyses, patients transferred from the admitting hospital to another acute care hospital were counted as discharged alive; in an alternative analysis, we counted such transfers as an adverse outcome by creating a composite study endpoint of in-hospital death or transfer to another acute care hospital, thus penalizing hospitals that might transfer excess patients in an effort to avoid in-hospital deaths.
C-statistics were calculated for each model, and 95 percent confidence intervals were generated using bootstrap methods by resampling the study population with 5,000 replications. All p-values are two-tailed, with p-values <.05 deemed statistically significant. All statistical analyses were performed using SAS 9.1.3 (SAS Institute Inc., Cary, NC). This project was approved by the University of Iowa Institutional Review Board.
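A percentile bootstrap of the kind described can be sketched as follows. The function and parameter names are illustrative, and the paper's actual SAS implementation may differ (e.g., bootstrapping model-based risk-standardized rates rather than a crude mortality rate, as done here for simplicity):

```python
import random

def bootstrap_ci(outcomes, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for an in-hospital mortality rate.

    Resample patients (0/1 death indicators) with replacement,
    recompute the rate on each replicate, and take the alpha/2 and
    1 - alpha/2 percentiles of the replicate distribution.
    """
    rng = random.Random(seed)
    n = len(outcomes)
    rates = sorted(
        sum(rng.choice(outcomes) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = rates[int((alpha / 2) * n_boot)]
    hi = rates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```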
The 10 specialty hospitals admitted 5,271 AMI patients and 7,520 CABG patients, while the 58 competing general hospitals admitted 47,138 AMI patients and 34,945 CABG patients (Tables 1 and 2). Among the AMI cohort, both Medicare and non-Medicare patients admitted to specialty cardiac hospitals were less likely to be women and less likely to be black as compared with patients with similar insurance admitted to general hospitals. In addition, patients admitted to specialty hospitals with AMI had lower rates of most, but not all, comorbid conditions, including diabetes with complications and renal failure. AMI patients admitted to specialty hospitals were more likely to require mechanical ventilation and more likely to have had previous revascularization (either PCI or CABG), but those with Medicare were less likely to receive an IABP. Likewise, both Medicare and non-Medicare patients undergoing CABG in specialty hospitals were less likely to be women, less likely to be black, had lower rates of many comorbid conditions, and were more likely to have received prior revascularization than patients treated in general hospitals (Table 2).
Median annual AMI volume was 303 for specialty hospitals (mean 312) and 223 for general hospitals (mean 286) (p=.87). A smaller percentage of AMI patients admitted to specialty hospitals (0.9 percent) than to general hospitals (1.7 percent) were transferred to another acute care hospital (p<.001). Annual CABG volume was higher for specialty hospitals (median 296; mean 308) than for general hospitals (median 165; mean 205) (p=.07). A similar percentage of CABG patients admitted to specialty hospitals (1.1 percent) and general hospitals (1.1 percent) were transferred to another acute care hospital (p=.83).
Unadjusted in-hospital AMI mortality was lower for specialty hospitals than for general hospitals both for Medicare enrollees (6.1 vs. 10.1 percent; p<.0001) and for non-Medicare enrollees (2.8 vs. 4.0 percent; p=.04). Risk-standardized mortality for AMI patients was significantly lower for specialty hospitals than general hospitals both in unadjusted analyses and after adjustment for patient demographics and comorbidity; mortality remained lower after further adjustment for AMI volume (Table 3).
Unadjusted in-hospital CABG mortality was lower in specialty hospitals than in general hospitals both for patients with Medicare (3.2 vs. 4.7 percent; p<.01) and those without (1.1 vs. 1.8 percent; p=.02). Risk-standardized mortality for CABG patients was also significantly lower for specialty hospitals as compared with general hospitals in both unadjusted analyses and analyses that adjusted for patient demographics and comorbidity (Table 4). However, after additional adjustment for hospital CABG volume, the improved outcomes in specialty hospitals were no longer statistically significant.
Results were similar when outcomes in specialty hospitals were compared with those of all general hospitals in their state, rather than only general hospitals located within the same HRR: risk-standardized AMI mortality was 2.7 percent in specialty hospitals (95 percent CI 2.2–3.5 percent) and 4.1 percent in general hospitals (95 percent CI 3.7–4.6 percent) (p<.001), and risk-standardized CABG mortality was 1.5 percent in specialty hospitals (95 percent CI 1.2–1.9 percent) and 2.0 percent in general hospitals (95 percent CI 1.7–2.3 percent) (p=.07). Results were also similar when we considered a composite outcome of in-hospital death or transfer to another acute care hospital.
In an analysis of all-payor administrative data, we found evidence of improved outcomes in specialty hospitals for AMI but statistically similar outcomes for CABG. Our findings were consistent among Medicare and non-Medicare populations, when specialty hospitals were compared with general hospitals within the same market and within the same state, and when patients who required transfer to another acute care hospital were counted as adverse outcomes. These results provide important validation of prior studies that relied exclusively on Medicare data.
A number of our findings merit further discussion. First, the finding that specialty hospitals appeared to have statistically significantly better outcomes for AMI but similar outcomes for CABG requires explanation in light of prior studies. Nallamothu and colleagues, using Medicare administrative data, found that 30-day risk-adjusted AMI mortality was a statistically significant 10 percent lower in specialty hospitals than in general hospitals, mirroring the results of the current study (Nallamothu et al. 2007). Cram, Rosenthal, and Vaughan-Sarrazin (2005) found that CABG mortality was a statistically significant 15 percent lower in specialty cardiac hospitals than in general hospitals. Using slightly different methods, the current study revealed a 33 percent relative reduction in in-hospital AMI mortality in specialty hospitals as compared with general hospitals (2.8 vs. 4.2 percent) and a 20 percent relative reduction in CABG mortality (1.6 vs. 2.0 percent). While the improved outcomes for specialty hospitals were statistically significant for AMI but not for CABG in this analysis, we believe that the consistent advantage observed among specialty hospitals should be acknowledged and is important for patients, physicians, and policy makers. Our findings extend the results of prior studies by using an alternative data source that includes younger patients not insured by Medicare. We also found that the advantage of specialty hospitals persisted when the comparison group was extended from general hospitals within the same HRR to a broader group including all competing general hospitals within the state; this is a small but important expansion of prior studies, which exclusively compared outcomes of specialty hospitals with those of general hospitals located within the same health care markets.
In interpreting our results, it is important to consider our definitions of specialty and general hospitals. We identified specialty cardiac hospitals using a methodology that has been used previously by both government regulators and researchers: identifying physician-owned hospitals for which an extremely high proportion of total admissions are cardiac in nature and that do not provide general care, including pediatric and obstetrical services (2003a, b). While this definition clearly identifies a well-circumscribed group of hospitals, there are potential shortcomings to defining hospital specialization in such a rigid manner. In reality, hospital specialization might best be viewed as a continuum, measured as the proportion of a hospital's total admissions that fall within a given group of diseases, rather than as a dichotomous measure (Hwang et al. 2007). There is growing concern that increasing competition between hospitals and tightening financial margins are encouraging many general hospitals to focus on better-reimbursed service lines, including cardiac care, orthopedics, and oncology, with uncertain consequences (Pham et al. 2004; Berenson, Bodenheimer, and Pham 2006). To be included in our study, a cardiac hospital was required to have a unique billing number for purposes of reimbursement by payors. Thus, free-standing cardiac hospitals located on large medical campuses, as well as cardiac hospitals located within a larger general hospital, were not considered specialty hospitals for purposes of our study.
Our definition of general hospitals also merits brief mention. We defined competing general hospitals as all general hospitals performing at least five PCI and five CABG procedures per year, excluding general hospitals that did not provide revascularization. Because specialty cardiac hospitals, by definition, provide comprehensive revascularization services, it seems appropriate to limit our comparison to general hospitals providing a similar scope of service. However, by eliminating general hospitals that do not perform revascularization, we likely excluded the smallest (and highest-mortality) general hospitals from our analysis; had these hospitals been included, the in-hospital mortality advantage seen in specialty hospitals for AMI would likely have been even larger.
It is also important to consider the strengths and limitations of administrative data in general and the SID data more specifically. While there is a long history of using administrative data to assess patient outcomes and quality of care (Hannan et al. 1997; Iezzoni 1997), there are legitimate concerns about such a strategy, most notably the use of ICD-9-CM codes to measure clinical status. For example, ICD-9-CM codes may not capture laboratory values or physical findings that have prognostic value, and there are concerns about errors in ICD-9-CM coding relative to patients' medical records (Hsia et al. 1988; Waterstraat, Barlow, and Newman 1990). Studies that have formally investigated the accuracy of ICD-9-CM codes have found varying levels of agreement between patient medical records and administrative data (Fisher et al. 1992; Hannan et al. 1992, 1997; Green and Wintfeld 1993; Baron et al. 1994). While agreement is generally excellent for major procedures, it may be poorer for some comorbid conditions, leading some to recommend caution when using administrative data to make inferences about quality of care or treatment outcomes. In a recent study, Krumholz et al. (2006a, b) examined the agreement between hospital quality for AMI and CHF assessed using MedPAR administrative data and clinical data and determined that, while clinical data are superior, administrative data provide essentially similar results. Nevertheless, improving administrative data is a priority, particularly through the inclusion of a present-on-admission (POA) indicator, which would distinguish preexisting conditions from complications arising during the hospital stay (Parker et al. 2006; Glance et al. 2008). For example, Glance et al. (2008) evaluated conditions defined by the Elixhauser algorithm and determined that paralysis and coagulopathy are misclassified as preexisting conditions more than 20 percent of the time when they actually develop during the course of an AMI admission. Accordingly, we conducted additional analyses in which these variables were excluded as potential comorbid conditions, and our conclusions were virtually identical.
There are a number of limitations to our study that merit brief mention. First, our study focused on patients admitted to specialty and general hospitals with AMI or for CABG in four states; thus, our findings should be generalized to other hospitals and diagnoses with caution, and additional investigations are needed to examine the consistency of findings across the SID and Medicare administrative data. Second, our analyses were somewhat limited by the idiosyncrasies of the SID data. For example, the structure of the SID differs significantly by state; some states report patient age as a continuous variable, while others report age as a categorical variable to prevent patient identification. In addition, the SID do not assign unique patient identifiers, making it impossible to track patients over time to examine mortality after discharge, readmissions, or transfers to other hospitals. Despite these limitations, the SID offer a tremendous resource for answering questions that cannot be addressed with Medicare data alone.
In summary, our results provide further evidence that physician-owned specialty cardiac hospitals deliver modest improvements in patient outcomes as compared with general hospitals. Specialty cardiac hospitals may represent a good choice for patients seeking hospitals that deliver high-quality care.
Joint Acknowledgment/Disclosure Statement: Dr. Vaughan-Sarrazin is a research scientist at the Iowa City VA Medical Center, which is funded through the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service. Dr. Popescu is supported by a K08 career development award (HL-095930) from the NHLBI at the NIH. Dr. Cram is supported by a K23 career development award (RR01997201) from the NCRR at the NIH and the Robert Wood Johnson Physician Faculty Scholars Program. This work is also funded by R01 HL085347-01A1 from NHLBI at the NIH.
Disclaimers: The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs. The funding sources had no role in the analyses or drafting of this manuscript.
Additional supporting information may be found in the online version of this article:
Appendix SA1: Author Matrix.
Appendix S1: AMI Patients.
Appendix S2: CABG Patients.
Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.