In the setting of acute ST-segment elevation myocardial infarction, reperfusion therapy with emergent primary percutaneous coronary intervention (PCI) significantly reduces mortality. It is unknown whether a hospital’s performance on the Centers for Medicare & Medicaid Services (CMS) quality metric for time from patient arrival to angioplasty is associated with its overall hospital acute myocardial infarction (AMI) mortality rate.
The objective of this study was to evaluate whether hospitals with higher performance on the time-to-PCI quality measure are more likely to achieve lower mortality for patients admitted for any type of AMI.
Using merged 2006 data from the National Inpatient Sample, the American Hospital Association annual survey, and CMS Hospital Compare quality indicator data, we examined 69,101 admissions with an ICD-9 coded principal diagnosis of AMI in the 116 hospitals that reported more than 24 emergent primary PCI admissions in that year. Hospitals were categorized into quartiles according to percentage of admissions in 2006 that achieved the primary PCI timeliness threshold (time-to-PCI quality measure). Using a random effects logistic regression model of inpatient mortality, we examined the significance of the hospital time-to-PCI quality measure after adjustment for other hospital and individual patient sociodemographic and clinical characteristics.
The unadjusted inpatient AMI mortality rate at the 27 top quartile hospitals was 4.3%, compared to 5.1% at the 32 bottom quartile (worst performing) hospitals. The risk-adjusted odds ratio of inpatient death was 0.83 (95% CI = 0.72 to 0.95), or 17% lower odds of inpatient death, among patients admitted to hospitals in the top quartile for the time-to-PCI quality measure as compared to patients admitted to hospitals in the bottom quartile.
Hospitals in the highest and second highest quartiles of the time-to-PCI quality measure had significantly lower overall AMI mortality rates than lowest quartile hospitals. Although only a minority of all patients with AMI receive an emergent primary PCI, hospitals that performed this procedure in a more timely fashion also had a significantly lower mortality rate for all of their patients admitted with AMI. The time-to-PCI quality measure in 2006 was thus a potentially important proxy measure for overall AMI quality of care.
Timely reperfusion therapy with primary percutaneous coronary intervention (PCI) significantly reduces mortality for patients with ST-segment elevation myocardial infarction (STEMI). The shorter the time from symptom onset to treatment, the greater the survival benefit.1–4 Time to primary PCI (time-to-PCI quality measure) has been adopted as a quality measure by the U.S. Centers for Medicare & Medicaid Services (CMS). CMS, as well as other health care organizations, participates in the Hospital Quality Alliance, a large-scale public-private collaboration that makes performance information on all acute care non-federal hospitals accessible to the public, payers, and providers of care.5,6 These performance measures evaluate hospital quality on many processes of care, including primary PCI for certain patients presenting with STEMI. This comparative quality information is available to the public through the CMS Website, Hospital Compare (http://www.hospitalcompare.hhs.gov).
While there is strong agreement about the importance of early reperfusion, there is debate about whether achieving the precise cardiac performance measures affects mortality.7–9 For example, if cardiac intervention is not performed within 90 minutes for patients with STEMI, the hospital fails the measure, even if reperfusion is achieved minutes later and the patient has a good outcome. Furthermore, patients with STEMI constitute a minority (42% during the study period) of all acute myocardial infarction (AMI) admissions,10 so it is not clear whether hospitals with superior time-to-PCI quality measure performance achieve better overall mortality for all patients with AMI presenting to their institution.
In this study, we sought to determine whether higher performance on the time-to-PCI quality measure was associated with lower risk-adjusted inpatient mortality for all patients diagnosed with AMI, not just those with STEMI. We examined this outcome in all AMI admissions for two reasons. First, it is not possible to validly differentiate patients with STEMI from non-STEMI (NSTEMI) patients in hospital administrative data using International Classification of Diseases, Ninth Revision (ICD-9) codes. Second, our goal was to evaluate whether superior primary PCI quality measure scores serve as a proxy for multiple aspects of AMI quality of care that can produce superior outcomes for all patients with AMI. Hospitals that achieve high performance on PCI quality indicators may be more likely to employ high quality practices overall, which would reduce mortality for all AMI patients, not just those with STEMI.
All 2006 data were derived from three sources. Hospital characteristics data came from the American Hospital Association (AHA) annual survey. We used the time-to-PCI quality measure available from the CMS web site, Hospital Compare, as our measure of performance. Patient level administrative discharge data came from the Nationwide Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP). We used the AHA hospital identification number to link the three datasets, and then performed a cross-sectional analysis of inpatient mortality for AMI between January 1, 2006 and December 31, 2006. The time-to-PCI quality measure was compared with hospital risk-adjusted mortality rates. The Northwestern University Institutional Review Board found this study exempt from informed consent and full human subjects committee review.
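The three-way linkage described above can be sketched in miniature. This is a hypothetical illustration only: the field name `aha_id` and the row layout are assumptions for the sketch, not the actual NIS, AHA survey, or Hospital Compare schemas.

```python
def link_hospital_data(nis_rows, aha_rows, compare_rows):
    """Join patient-level NIS admission rows to AHA survey and Hospital
    Compare hospital records via the shared AHA hospital identification
    number (here the hypothetical key 'aha_id')."""
    aha_by_id = {h["aha_id"]: h for h in aha_rows}
    compare_by_id = {h["aha_id"]: h for h in compare_rows}
    linked = []
    for row in nis_rows:
        hid = row.get("aha_id")
        # Admissions whose hospital is missing from either source are
        # dropped, mirroring the study's exclusion of unlinkable records.
        if hid in aha_by_id and hid in compare_by_id:
            linked.append({**row, **aha_by_id[hid], **compare_by_id[hid]})
    return linked
```

The same inner join on the AHA identifier underlies the cross-sectional analysis file: only admissions that can be matched to both a hospital characteristics record and a time-to-PCI score survive.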
The NIS is the largest publicly available all-payer inpatient care database, with discharge data from 1,045 hospitals located in 38 states in 2006 (the latest data available at the inception of the study).11 These data include ICD-9 coded primary and secondary diagnoses; primary and secondary procedures; admission and discharge status; demographic information such as sex, age, race and ethnicity, and median income for zip code divided into quartiles; expected payment source; total charges; length of stay; and hospital region, teaching status, ownership type, and bed size, categorized as small, medium, or large. HCUP represents a federal-state-industry partnership sponsored by the Agency for Healthcare Research and Quality (AHRQ). Additional detail on the NIS can be found on the HCUP website (http://www.hcup-us.ahrq.gov/).
We linked the NIS data with data from the 2006 AHA annual survey to obtain information about hospital characteristics. The AHA annual survey is mailed to all hospitals in the United States and has an annual response rate of approximately 85%. The survey provides a cross-sectional view of the hospital industry, including size, ownership, geographic location, services provided, and Council of Teaching Hospitals and Health Systems membership. In 2006, the survey included information on 4,554 non-federal, general hospitals.
We used the 2006 time-to-PCI quality measure available from Hospital Compare as our measure of performance. CMS publicly reports performance scores for over 4,000 acute care hospitals for care provided from January 1, 2006 through December 31, 2006. In 2006, CMS had data on 19 quality measures. Ten process measures were known as the “starter set” because the Medicare Prescription Drug, Improvement, and Modernization Act provided financial incentives for reporting them. We focused on one of these measures, AMI-8a, the proportion of admissions with primary PCI received within 120 minutes of hospital arrival. On July 1, 2006, CMS lowered this threshold from less than 120 minutes to less than 90 minutes per ACC/AHA guidelines.12 This change did not bias comparisons among hospitals or patients because it applied uniformly to all hospitals included in the datasets. For each hospital, a score of 0% to 100% was reported to CMS, representing the proportion of eligible patients with STEMI who met the PCI timeliness threshold. The numerator was all patients with STEMI or new-onset left bundle branch block (LBBB) on electrocardiogram (ECG) whose time from hospital arrival to primary PCI was less than 120 minutes (90 minutes after July 1, 2006). The denominator was all patients with STEMI or new-onset LBBB on ECG who received primary PCI regardless of time. The indicator inclusion and exclusion criteria can be found at: http://www.cms.hhs.gov/apps/QMIS/measure_details.asp?id=596
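The numerator/denominator logic of the AMI-8a score can be written out as a short sketch. The field names (`received_primary_pci`, `arrival_to_pci_minutes`) are assumptions for illustration, not the measure's actual data elements.

```python
def ami_8a_score(admissions, threshold_minutes=90):
    """Sketch of the AMI-8a computation: among eligible STEMI / new-LBBB
    patients who received primary PCI, the percentage treated within the
    timeliness threshold (120 min before July 1, 2006; 90 min after)."""
    # Denominator: eligible patients who received primary PCI at any time.
    eligible = [a for a in admissions if a["received_primary_pci"]]
    if not eligible:
        return None  # no score is reportable without eligible cases
    # Numerator: those whose arrival-to-PCI time beat the threshold.
    timely = [a for a in eligible
              if a["arrival_to_pci_minutes"] < threshold_minutes]
    return 100.0 * len(timely) / len(eligible)
```

Note that patients who never receive primary PCI fall out of both numerator and denominator, which is why the measure captures timeliness among treated patients rather than the decision to treat.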
Figure 1 summarizes our selection criteria for inclusion of hospitals in the analysis. The study sample was limited to patients 18 years or older with an ICD-9 principal diagnosis of 410.00–410.91. From all patients admitted with an AMI principal diagnosis, 60,460 admissions (30% of all NIS AMI admissions) were excluded because of missing hospital identification in the NIS from states that only report de-identified hospitals. Also excluded were 46,134 admissions (23% of all NIS AMI admissions) from hospitals that did not submit Hospital Compare time-to-PCI quality measure scores to CMS. Finally, hospitals that submitted fewer than 25 primary emergent PCI cases in 2006 were excluded, removing an additional 24,321 cases (12% of NIS AMI admissions). Low-volume emergent PCI hospitals were excluded because the CMS Hospital Compare website states that fewer than 25 primary PCI cases are too few to reliably assess how well a hospital is performing.
As shown in Table 1, hospitals in the inclusion group, totaling 116 institutions, were more likely to be urban, larger in bed size, and teaching hospitals as compared to hospitals excluded from the analysis sample. Excluded NIS hospitals included those with an AHA identifier but fewer than 25 emergent PCI cases (57 hospitals), as well as hospitals missing either an AHA identifier or time-to-PCI quality measure data (826 hospitals).
In order to compare the association of the time-to-PCI quality measure with overall AMI mortality, the 116 hospitals meeting inclusion criteria were divided into quartiles based on quality indicator performance (proportion of relevant admissions meeting the timeliness threshold). The time-to-PCI measure was evaluated both as a continuous variable and as quartiles. Although the continuous variable was statistically significant, we present data in quartiles for ease of interpretation of odds ratios (ORs).
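The quartile assignment amounts to ranking hospitals by score and cutting the ranking into four equal groups. A minimal sketch, with the caveat that the study does not describe its tie-handling, so the convention below (ties broken by sort order) is an assumption:

```python
def assign_quartiles(scores):
    """Rank hospitals by time-to-PCI score and split into four groups:
    quartile 1 = lowest (worst) scores, quartile 4 = highest (best).
    `scores` maps a hospital identifier to its percentage score."""
    ranked = sorted(scores, key=scores.get)  # ascending by score
    n = len(ranked)
    # Integer arithmetic puts roughly n/4 hospitals in each quartile.
    return {hospital: 1 + (4 * i) // n for i, hospital in enumerate(ranked)}
```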
We used ICD-9 secondary diagnosis codes to indicate the presence of up to 30 chronic comorbidities likely to have been present on admission, using the Elixhauser comorbidity adjustment scheme.13 This technique is now widely used and supported by AHRQ; it accounts for significant mortality risk differences associated with coded comorbidities. Because a count of Elixhauser comorbidities was a very strong, monotonic predictor of inpatient MI mortality in our study, we present data for a patient comorbidity count, coded for each patient as zero through five, with five representing five or more comorbidities (the most severe category).14 Additional patient level variables included age (four categories), sex, race and ethnicity (five categories), and median zip code household income for each patient (four quartiles). Hospital level variables included bed size (three categories), rural versus urban location, region of country (four census categories), teaching status (member of the Council of Teaching Hospitals), volume of primary PCIs done per hospital (four quartiles), and volume of patients with AMI per hospital (four quartiles).
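The comorbidity variable described above collapses the 30 Elixhauser indicators into a capped count. A one-line sketch (the indicator flags themselves would come from the Elixhauser ICD-9 mapping, which is not reproduced here):

```python
def comorbidity_count_category(elixhauser_flags):
    """Collapse a patient's Elixhauser comorbidity indicators (booleans,
    one per condition) into the analysis variable used in the study:
    an integer 0-5, where 5 means 'five or more comorbidities'."""
    return min(5, sum(1 for flag in elixhauser_flags if flag))
```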
Univariate differences in inpatient mortality were compared with chi-square tests. A generalized estimating equations logistic regression model with an exchangeable working correlation matrix was used to account for clustering within hospitals.15 We tested whether there were significant differences in the likelihood of inpatient death between hospitals in the four performance quartiles on the time-to-PCI quality measure after controlling for hospital structural characteristics, individual patient sociodemographics, and clinical characteristics. We present ORs with 95% confidence intervals (CIs). All analyses were conducted using Stata version 10.0 (StataCorp, College Station, TX).
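For the univariate comparisons, the Pearson chi-square statistic on a 2x2 table (died/survived by hospital group) has a closed form. A minimal sketch with hypothetical counts; the GEE model itself is omitted here since it requires an iterative fitting procedure:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    e.g. a = deaths and b = survivors in one quartile, c and d in another.
    Compare the statistic to the chi-square distribution with 1 df."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```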
As shown in Table 1, the AMI inpatient mortality rate for the 116 NIS hospitals included in the analysis sample was 4.7%, comparable to that of the 57 hospitals reporting fewer than 25 primary PCI admissions (4.9%, p = 0.12) and that of hospitals excluded for missing identifiers (4.9%, p = 0.10).
As shown in Table 2, the 27 top quartile hospitals for the 2006 time-to-PCI quality measure had PCI timeliness scores ranging from 80% to 95% of their qualified AMI admissions. In contrast, the bottom quartile included 32 hospitals with time-to-PCI quality measure scores below 60%.
Overall inpatient AMI mortality rates for all AMI admissions rose across successively lower-performing quartiles of the time-to-PCI quality measure. Patients admitted to top quartile hospitals had an unadjusted inpatient mortality rate of 4.3%, while patients admitted to bottom quartile hospitals had an inpatient mortality rate of 5.1% (p < 0.01).
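The two unadjusted rates imply an unadjusted odds ratio that can be checked directly, since the odds of an event with probability p are p / (1 − p):

```python
def odds(p):
    """Convert a proportion (probability) to odds."""
    return p / (1.0 - p)

# Unadjusted odds ratio implied by the two mortality rates above:
# 4.3% in top quartile hospitals vs. 5.1% in bottom quartile hospitals.
unadjusted_or = odds(0.043) / odds(0.051)
```

This works out to roughly 0.84, close to the risk-adjusted odds ratio of 0.83 reported in the regression results, suggesting that case-mix adjustment shifted the estimate only slightly.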
Figure 2 presents a scatter plot of each hospital’s 2006 time-to-PCI quality measure score plotted against each hospital’s 2006 overall unadjusted AMI mortality rate. The clustering of hospitals in the lower right corner identifies hospitals with high time-to-PCI quality measure scores and low overall unadjusted AMI mortality rates.
Table 3 presents bivariate comparisons and multivariate logistic regression analyses of the association of all independent variables with the likelihood of inpatient death for the 69,101 patients with AMI. The risk of inpatient death increased dramatically with age and, when compared to the youngest patients, was highest among patients aged 84 years and older (OR 11.05, 95% CI = 8.44 to 14.47; p < 0.01). In univariate analyses, males had a lower inpatient AMI mortality rate than females (4.2% vs. 5.3%, p < 0.01); however, after adjustment for age and other factors, there was no significant sex difference in the odds of inpatient mortality (OR 0.93, 95% CI = 0.87 to 1.01; p = 0.06). The risk of death differed by race and ethnicity, with African American or black patients having a lower risk of inpatient death as compared to white patients (OR 0.66, 95% CI = 0.56 to 0.78; p < 0.01).
Patients with an increasing count of Elixhauser comorbidities had sharply increasing mortality rates, from 1.9% of those with no comorbidity codes to 7.3% with five or more comorbidities. Patients did not have differences in mortality across zip code quartiles at the univariate level of analysis, but after adjustment for other risk factors, the patients coming from zip codes in the wealthiest quartile had significantly lower adjusted odds of death than patients from the poorest quartile (OR 0.86, 95% CI = 0.77 to 0.97; p = 0.01). After adjustment, there were no significant mortality differences in the hospital characteristics evaluated, including location, region, bed size, teaching status, volume of primary PCIs done per hospital, and volume of patients with AMI per hospital.
Controlling for very significant independent patient level predictors of mortality, patients admitted to hospitals that scored in the top quartile on the time-to-PCI quality measure had 17% lower odds of inpatient death than patients admitted to the lowest quartile hospitals (OR 0.83, 95% CI = 0.72 to 0.95; p < 0.01). Patients admitted to the second highest quartile of PCI quality hospitals also had significantly lower adjusted odds of death, 13% lower (OR 0.87, 95% CI = 0.77 to 0.99; p = 0.04), as compared to patients admitted to bottom quartile hospitals. The time-to-PCI quality score was also significant (p < 0.01) when scores were entered as a continuous variable rather than quartiles.
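The "percent lower odds" phrasing used above is simply (1 − OR) × 100, a reading that holds only for odds ratios below 1:

```python
def percent_lower_odds(odds_ratio):
    """Express an odds ratio below 1 as a whole-number percent
    reduction in the odds of the outcome."""
    return round((1.0 - odds_ratio) * 100)
```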
Overall inpatient AMI mortality rates for all AMI admissions increased monotonically across successively lower-performing quartiles of the time-to-PCI quality measure. The top two performing quartiles of hospitals on the time-to-PCI quality measure each had a significantly lower overall inpatient AMI mortality rate than the bottom performing quartile.
The time-to-PCI quality measure combines two different specialties (emergency medicine and interventional cardiology) and requires high performance by many hospital personnel and systems in order to successfully achieve this goal. When patients arrive at the door of the ED with chest pain, they must be rushed to an area where an ECG can be done. This ECG must be performed quickly and accurately, and then an emergency physician must promptly evaluate it. This physician must recognize the STEMI and contact the interventional cardiology team. Simultaneously, essential patient care tasks must be carried out to stabilize and treat the patient, as well as communicate with the patient and family. The patient must be moved quickly to the cardiac catheterization suite, where the procedure is carried out and the coronary vessel opened.
A delay in any part of the process can mean that the time goal is not met. Even environmental barriers, such as an excessive number of patients in the ED, can cause delays. The more crowded the ED, the longer it takes to get an ECG for patients with chest discomfort and the more likely that a delay will result in an adverse outcome for patients presenting with chest pain.16 The time-to-PCI quality measure can be negatively affected by many sub-systems within the ED, within interventional cardiology, and even within the larger hospital. It is our belief that because this measure needs the resources, commitment, alignment, focused effort, and synergy of many physicians, nurses, and administrators, the hospitals that score well in this measure are most likely to also optimally treat all patients with AMI.
This is the first study to our knowledge that analyzes the relationship of the time-to-PCI quality measure with overall AMI inpatient mortality. There are several previous examinations of the relationship between other Hospital Compare quality measures and risk-adjusted mortality of AMI at the hospital level.7–9 Two studies examined relationships between individual Hospital Compare measures and risk-adjusted mortality rates of AMI; however, neither included the time-to-PCI quality measure.7,9 One previous study evaluated time-to-reperfusion processes, but its analysis combined the 30-minute fibrinolytic therapy measure with the time-to-PCI quality measure.8 Our results add to the literature on the inverse correlation between Hospital Compare quality measures and mortality.
Similar to previous studies,17,18 our study shows that no hospital characteristic, such as the number of inpatient beds, presence of resident house-staff, or region of the country, was associated with a significant increase or decrease in mortality after risk adjustment. This study does not explore why hospitals of similar structure (bed size, teaching status, and region) can have very different time-to-PCI performance and therefore different mortality rates for their patients presenting with STEMI. Hospital-specific strategies that decrease the time from ED arrival to vessel opening have been described.19 Although this information was known in 2006, there was still significant variation in the Hospital Compare performance measure. Perhaps the system and personnel issues that limit success in achieving the STEMI quality metric also affect the treatment of patients with NSTEMI.
Hospitals that are successful at achieving the time-to-PCI measure also have significantly better outcomes for all patients with AMI; however, it is still uncertain why this occurs. We believe it is probable that hospitals with shorter time-to-PCI times also do well on other evidence-based quality measures, such as aspirin and beta-blocker administration within 24 hours, as well as other best practices that are currently not reported to Hospital Compare, such as use of clopidogrel and platelet glycoprotein IIb/IIIa inhibitor agents for patients with NSTEMI. Another explanation could involve systems quality management: as discussed above, performing well on the time-to-PCI measure depends on a complex hospital system. Further research is needed to determine exactly which characteristics of hospitals that achieve the time-to-PCI measure contribute to a hospital’s and an ED’s ability to achieve better outcomes.
The hospitals in the NIS are a stratified sample intended to be representative of all U.S. hospitals.20 Our analysis was limited to one measure from the CMS Hospital Compare quality indicator list. The sample in this study represented only the subgroup of U.S. hospitals that performed at least 25 emergent primary PCIs for AMI patients in 2006. Because some states did not report hospital identification to the NIS, 30% of patients with a principal diagnosis of AMI had to be excluded from the study; however, because these states were distributed throughout the U.S., their exclusion should not have biased the remaining sample. As seen in Table 1, there are important differences between the generally large urban teaching hospitals in our sample and smaller institutions or those without catheterization labs or open heart surgery facilities. Restricting the analysis to hospitals with at least 25 emergent PCI cases was done because this is the threshold CMS uses: fewer than 25 primary PCI cases annually are too few to reliably rank hospital performance.21
Administrative data that are based on ICD-9 coding do not contain detailed clinical data for risk adjustment on many key items (e.g. prior MI, laboratory values at admission). Detailed clinical data may have shown that poorer performing hospitals did indeed have more severely ill patients at admission; however, an equally likely possibility is that use of clinically detailed risk adjustment would increase the gap between observed and expected mortality rates at high and low performing hospitals.
Other large databases, such as the Medicare Provider Analysis and Review, can be linked to death certificate data to obtain 30-day or longer mortality, which would indicate a much higher case fatality rate. However, limiting our study to inpatient mortality in the NIS database has the advantage of better revealing the more immediate effect of ED care on mortality outcomes.
Hospitals with the highest performance on the time-to-PCI quality measure had significantly lower overall mortality for both STEMI and NSTEMI patients compared to hospitals with the lowest performance on time-to-PCI. The time-to-PCI quality measure alone is an important proxy measure for overall AMI quality of care.
Funding: Dr. Khare was funded by a grant from the Agency for Healthcare Research and Quality (F32 HS17876-01) for a study to evaluate interventions to reduce mortality of myocardial infarctions. Dr. Courtney has received a research grant from National Heart, Lung, and Blood Institute (5K23HL077404-04).
Presentations: The Agency for Healthcare Research and Quality National Conference in Bethesda, MD in September, 2009; and the Midwest SAEM Regional Conference, Ann Arbor, MI, September 2009.
Conflicts of Interest: None declared by the authors.