Reduced mortality and increased use of acute stroke therapies are two expected benefits of primary stroke centers.2
Nevertheless, limited empirical evidence supports the benefits of stroke centers, particularly with respect to outcome-based quality measures.9
In this large observational study, we found that patients admitted to stroke centers were more likely to receive thrombolytic therapy and less likely to die when compared to patients admitted to non-designated hospitals. This survival benefit was sustained for up to one year after stroke occurrence and was independent of patient and hospital characteristics. Importantly, the lower mortality at designated stroke centers was specific to stroke and was not found for other acute life-threatening conditions, which suggests that the mortality benefit was related to stroke center designation, rather than to overall quality improvement efforts at designated stroke centers. Collectively, our study provides evidence that the implementation and establishment of a BAC-recommended stroke system of care was associated with improved outcomes for patients with acute ischemic stroke.
Previous evaluations of stroke center quality performance have primarily focused on process measures with limited information on patient outcomes.5–8
To date, only one study in Finland has reported lower 1-year stroke case-fatality associated with stroke centers.26
Our study extends these findings, as systems of stroke care in the U.S. may differ substantially from those of other national healthcare systems (especially those with universal health coverage). In addition, we were able to report both short-term and 1-year mortality outcomes. Finally, we were able to demonstrate that the lower mortality at designated stroke centers was specific to stroke.
Importantly, geographic patterns of stroke triage are likely to be non-random. Designated stroke centers and non-designated hospitals may treat different groups of patients in terms of demographics and disease severity. For instance, it is possible that EMS may systematically transport more severely-ill patients to stroke centers,4
which is consistent with our finding of a greater IV-adjusted mortality difference (in absolute value) compared to the unadjusted difference (e.g. 2.5% vs. 2.4% for 30-day all-cause mortality). Moreover, prior studies have reported that stroke centers are more likely to admit patients with hemorrhagic strokes, which are associated with higher mortality as compared to strokes with an ischemic etiology.7,8
Indeed, we found a similar pattern, in which nearly 60% (4,193/7,243) of hemorrhagic stroke patients in New York were admitted to a stroke center during our study period. In the absence of randomized controlled trials, controlling for treatment patterns is often difficult and assessments of mortality outcomes may be biased given the presence of treatment selection. Our analysis sought to address these concerns by using an instrumental variable analysis to control for the selection bias (both measured and unmeasured) inherent in observational studies. After adjusting for patient and hospital characteristics and the potential for unmeasured selection bias with the instrumental variable analysis, we found that admission to a stroke center for an acute ischemic stroke was associated with a 2.5% absolute reduction in 30-day all-cause mortality.
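The logic of the instrumental variable adjustment can be illustrated with a small simulation. The sketch below is a simplified linear two-stage least squares example on synthetic data; the variable names, effect sizes, and data-generating process are illustrative assumptions only, not the study's actual model, which additionally adjusted for patient and hospital characteristics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: 'u' is unmeasured stroke severity driving both triage and death.
u = rng.normal(size=n)                       # unmeasured confounder
diff_dist = rng.normal(size=n)               # instrument: differential distance
# Treatment (stroke center admission): nearer centers and sicker patients more likely.
treat = ((-0.8 * diff_dist + 0.5 * u + rng.normal(size=n)) > 0).astype(float)
# Outcome (30-day death): true treatment effect set to -0.025 (2.5% absolute reduction).
y = 0.10 - 0.025 * treat + 0.03 * u + rng.normal(scale=0.05, size=n)

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive (unadjusted) estimate: biased toward zero because sicker patients
# are preferentially triaged to stroke centers.
naive = ols(np.column_stack([np.ones(n), treat]), y)[1]

# Two-stage least squares: stage 1 predicts treatment from the instrument;
# stage 2 regresses the outcome on the predicted treatment.
Z = np.column_stack([np.ones(n), diff_dist])
treat_hat = Z @ ols(Z, treat)
iv = ols(np.column_stack([np.ones(n), treat_hat]), y)[1]

print(f"naive estimate: {naive:+.4f}")
print(f"2SLS estimate:  {iv:+.4f}  (true effect set to -0.0250)")
```

Because the instrument is unrelated to the unmeasured severity, the 2SLS estimate recovers the true effect, while the naive estimate is attenuated. This mirrors the pattern noted above, in which the IV-adjusted mortality difference was larger in absolute value than the unadjusted difference.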
The BAC recommendations serve as the cornerstone for the establishment of primary stroke centers. Previous studies have shown reduced mortality among patients who were treated by neurologists or who received organized care in a stroke care unit.27–29
Although we cannot determine which individual components of the BAC criteria for stroke center designation were most important for the lower mortality observed in this study, it is likely that the BAC criteria cannot be examined as isolated units. Rather, the 11 core criteria, taken together, establish the infrastructure and define a paradigm for optimizing care for acute ischemic stroke. By emphasizing an integrated and organized system of care involving EMS, hospital emergency departments, acute stroke teams, stroke units, and neuro-imaging services, the BAC criteria facilitate rapid transportation, evaluation, and treatment. Moreover, the availability of stroke protocols standardizes acute stroke care and minimizes protocol violations. These efforts are further enhanced by the BAC criteria’s emphasis on surveillance of outcomes, quality initiatives, and continuing educational programs. Of equal importance, improved performance measures may also influence downstream care and outcomes. Studies have found associations between process-of-care performance measures and mortality outcomes for patients with cardiovascular disease and stroke.30,31
It is certainly possible that improved guideline-based treatment, more frequent thrombolytic therapy, enhanced secondary prevention and risk factor management, early rehabilitation, and patient education programs may also be responsible for the lower mortality rates among patients treated at stroke centers. However, these efforts may have no appreciable short-term or immediate life-saving effect, which is consistent with our findings of a minimal mortality difference at day 1 and similar readmission rates at 30 days, compared with a greater survival benefit at the end of 1-year follow-up. Collectively, it is likely that the combination of these efforts elevates the structure and process of stroke care and subsequently leads to improved patient outcomes.
Since stroke center certification is voluntary, it is possible that hospitals were already committed to quality improvement and would have achieved these results regardless of designation. A recent evaluation of the Joint Commission Certified Primary Stroke Centers found that certified hospitals had better outcomes than non-certified hospitals even before the certification program began.32
Based on our data, we cannot definitively establish whether the designation program resulted in reduced mortality or whether higher-quality hospitals were simply more likely to seek designation. However, this concern is largely mitigated by our specificity analysis, in which we examined mortality rates at designated and non-designated hospitals for 2 other life-threatening conditions—GI hemorrhage and AMI. Hospitals committed to quality improvement prior to stroke center designation would be expected to demonstrate lower mortality for these other medical conditions as well as for stroke. Nevertheless, the lower mortality observed in this study was specific to stroke, suggesting that it cannot be explained simply by hospital-wide quality improvement efforts at hospitals that received stroke designation.
Our study should be interpreted in the context of the following limitations. First, the SPARCS database did not include information on stroke severity. The differences in mortality may therefore reflect patient case-mix rather than variation in the quality of acute stroke care. That said, such selection bias would more likely work against stroke centers, which tend to receive more severely ill patients, than in their favor.4
Second, we were unable to assess other performance measures and outcomes—such as eligibility for and contraindications to thrombolytic therapy, thrombolysis-related hemorrhage, quality of life, and neurological and functional status at discharge—because these measures were not collected in the SPARCS database; nor were we able to assess cause-specific mortality. Nonetheless, our study was able to report on the relationship between stroke center admission and all-cause mortality—an outcome that has not been routinely reported. Third, while our sensitivity and specificity analyses suggest that the lower mortality associated with stroke center admission may be due to the implementation of the BAC criteria as part of stroke center designation, other quality improvement initiatives (e.g., the AHA Get With The Guidelines–Stroke program), economic incentives from pay-for-performance, and public reporting could also affect stroke care and outcomes. Fourth, our study only included data from New York. The generalizability of our findings to other states and agencies certifying stroke centers remains to be established. Fifth, we were unable to assess acute treatments other than thrombolytic therapy, including the use of life-sustaining interventions and end-of-life care, which may affect short-term or intermediate survival.33
Sixth, many hospitals were transitioning to stroke center status during our study period. Defining a stroke center based on designation status on the admission date may therefore have underestimated the mortality differences associated with stroke center admission. Nonetheless, our conservative approach still demonstrated a lower risk of death associated with stroke centers. Finally, the instrumental variable approach assumes that differential distance has no independent effect on patient outcomes except through its impact on the likelihood of receiving treatment at a designated stroke center. By its nature, this assumption cannot be verified directly. However, it would generally be satisfied if a patient's residence is not associated with stroke severity, which appears reasonable. Moreover, differential distance has been widely and successfully used as an instrument to control for selection bias in a variety of clinical settings.16–20
In conclusion, we found that admission to designated stroke centers in New York State was associated with a lower risk of death for patients with an acute ischemic stroke. The lower mortality in designated stroke centers was specific to stroke. Our findings suggest that a rigorous process of designating stroke centers has the potential to improve the quality of stroke care and reduce stroke mortality.