In this study of 1426 patients in 14 EDs, we found, in general, no association between either process or timeliness measures and outcome measures for ED management of acute asthma in children. This differs from the findings of the previously cited adult EMNet study,8 as well as 2 of the 4 previous studies of this issue in children.3,4,9,10
The process measures used in the adult study were the same 5 level A recommendations as in our study; that study revealed that 100% guideline concordance (in 12 level A and B measures) was associated with a 46% lower admission rate.8
Among studies including children, 3 single-site pre-post studies evaluated the impact of use of an asthma guideline on the quality of ED asthma care. A US study revealed mixed findings in outcome measures: decreased admissions but no change in revisits.4
An Australian study revealed decreased admissions and revisits.3
An all-ages Canadian study revealed no change in admissions and revisits.10
A fourth, multi-institutional, Canadian study revealed that the presence of an asthma order sheet, but not the presence of an asthma guideline, was associated with a decreased revisit rate.9
Our negative study is the first multicenter study of the association between guideline-concordant ED care and asthma outcomes in children and the first to use patient-level process measures, rather than the institution-level presence of a guideline, as the predictor of outcomes.
The sole significant process-outcome association found in adjusted models was that between timely albuterol administration and the risk of admission. This finding likely represents insufficient adjustment for confounding by severity because patients with more severe asthma are both more likely to have timely albuterol and more likely to be hospitalized, as also reported in previous MARC studies.19,20
The contrast between our negative study and the positive process-outcome association found by its closest counterpart, the adult EMNet study,8 may reflect 2 confounders.21
First, the 2 studies differed in how they adjusted for patient-level factors, including severity. In our study, we used a 12-point acute severity scale, whereas the authors of the adult EMNet study used the peak expiratory flow absolute value, resulting in a higher percentage of patients categorized as severe in the EMNet study (38% vs 9% in our study) despite a lower admission rate (18% vs 24% in our study, a rate comparable to nationally representative findings1). Because the eligible population definitions for 2 of the 5 level A process measures included acute asthma severity, this may have biased our findings toward the null. The severity assessment used in the adult EMNet study, peak expiratory flow, is not reliably measured among children.22
Second, the 2 studies differed somewhat in measure definition. As an example, the adult EMNet study's time cutoff for initial β-agonist was 15 minutes compared with 60 minutes in our study (28% vs 92% concordant, respectively). Having fewer patients with nonconcordant care may have limited our study's power to detect an association between concordance and outcomes.
The other issue raised by our negative study is one of perspective. For a disease process such as asthma, where outcome measures are related to subjective physician behavior (admission), are not proximate to ED care (relapse, ongoing symptoms), or are exceedingly rare (mortality), our negative study raises the question of the value of using outcome measures to validate or justify the use of process measures. Perhaps process measures are the more sensitive indicator of real variations in quality and can be used without showing a link to outcome measures.21
We regard this argument with caution in that it disregards a central tenet in the field of quality measures: validity of process measures is demonstrated when variations in the attribute they measure lead to differences in outcome and vice versa.14
The importance of careful examination of the process-outcome measure association is illustrated by the adverse consequences of the Joint Commission’s “time to first antibiotic dose” process measure for ED patients with community-acquired pneumonia. Introduction of this performance measure resulted in overuse of antibiotics and no change in the relevant outcome, mortality.23
Thus, we conclude from our negative study that further exploration of the process-outcome link in the quality of ED asthma care is needed, as well as further consideration of appropriate process and outcome measures, before implementing process measures as performance metrics.
Our study has some potential limitations. First, the study derived some measures from chart review, so data quality depended on the accuracy of clinical charting. However, previous studies revealed that the rates of ED assessments and treatments for asthma by retrospective chart abstraction were similar to those achieved by direct observation, with κ coefficients of 0.6 to 0.9.24
Second, we studied only the initial processes of asthma care; several studies revealed that data from the time of ED disposition, rather than from arrival, are more predictive of outcomes.18,20,25
Third, our secondary hypothesis used the outcome of admission, a heterogeneous decision based on the clinical opinion of individual providers. The subjectivity of this secondary outcome is why we selected the composite outcome, successful discharge, as our primary outcome.18
Fourth, the use of admission to define pulmonary index cut-points may have biased our findings because admission was also a study outcome. As noted above, when compared with the adult EMNet study, the current study's methods yielded a smaller proportion of exacerbations categorized as severe. It is not clear how this would bias our findings. Fifth, the study's use of data from noncontinuous time periods may have introduced spectrum bias because the precipitating factors and incidence of acute asthma exacerbations change seasonally, although it is likewise not clear how this would bias our findings. Sixth, this study was a secondary analysis of existing data, and it is possible that the available sample size was insufficient to detect true differences in the primary and secondary outcomes (type II error). Finally, the EDs that compose the study sample were predominantly urban, academic EDs, which may make our results less generalizable to rural or suburban, nonacademic EDs.