In this postimplementation study of a CDSS to enhance asthma management in a pediatric pulmonary clinic, we found that computerized CDSS assessments matched expert clinician review for 80% of control assessments, 66% of severity assessments, and 39% of step recommendations. Practicing pediatric pulmonologists failed to follow guideline recommendations strictly in 8% of return visits and 18% of new visits.
The reasons providers and the computerized CDSS disagreed were quite different for assessments of control versus severity. The majority of control variances were caused by providers attributing asthma-like symptoms such as cough to other conditions such as allergic rhinitis, gastroesophageal reflux, or acute upper-respiratory infection. Since the computerized CDSS was designed always to treat these symptoms as asthma-related, it tended to assess patients as less well controlled than providers thought they were. By contrast, many disagreements about asthma severity arose because many of the ‘new’ patients arriving for subspecialty consultation had already been diagnosed and treated. According to the EPR-3 guidelines, severity assessment in these patients should take existing medications into consideration. This was not feasible for the CDSS and therefore led to additional errors.
Analysis of variances between computerized CDSS assessments and clinician assessments can provide insights into areas for improvement in decision-support design. Our taxonomy of reasons for discrepancies between provider and computerized CDSS assessments identified some obvious gaps in the design of the CDSS that could be quickly remedied to improve its assessment capabilities. For instance, modifying impairment questions to ensure they are asthma-specific (ie, changing ‘any cough symptoms’ to ‘any asthma-related cough symptoms’) would immediately eliminate half the disagreements about control. It is notable that these flaws existed in the system, despite integral involvement by practicing pediatric pulmonologists from the start of the design phase, as recommended by informatics experts.28
Our study thus demonstrates the critical importance of carefully analyzing the reasons for practicing clinician disagreements with decision support in the postimplementation period in order to improve design and effectiveness. Unfortunately, although the value of postimplementation audits is well recognized,29
published reports of computerized CDSS interventions typically lack such information.
Our analysis of disagreements between the computerized CDSS and clinician assessments also gave us insight into clinical care provision. About a third of disagreements about severity or control were attributable to provider errors. Other studies have shown that computerized CDSSs may be more accurate than clinicians.22
Although guidelines may legitimately be disregarded in certain clinical contexts, it is notable that those who were most often at variance with the computerized CDSS were the least clinically experienced providers: pulmonary fellows. Consequently, we believe it likely that these events represent true deviations from appropriate care, demonstrating that even clinical experts may derive some benefit from computerized CDSSs. Furthermore, it is impossible to determine how many assessments were altered in real-time by providers who noted the computerized CDSS recommendations and may therefore have improved their care.
Lastly, analysis of disagreements between computerized CDSS and clinicians also gives us an insight into CDSS capabilities. Most computerized CDSSs provide one-step alerts or guidance based on relatively simple rules (ie, suggest pharmacotherapy for LDL above goal if diabetes is in the problem list).21
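The contrast between such one-step alerts and the multi-input assessment performed by our system can be made concrete. The function below is a hypothetical sketch of a simple rule of the kind described above; the function name, threshold, and message text are illustrative and not drawn from any cited system:

```python
def lipid_alert(problem_list, ldl_mg_dl, ldl_goal=100):
    """Hypothetical one-step alert: suggest pharmacotherapy when LDL
    exceeds goal in a patient with diabetes on the problem list."""
    if "diabetes" in problem_list and ldl_mg_dl > ldl_goal:
        return "Consider pharmacotherapy: LDL above goal in patient with diabetes."
    # No alert fires when the single trigger condition is not met.
    return None
```

A rule of this form consults only two inputs and emits a fixed suggestion, whereas the asthma CDSS must integrate many impairment and risk inputs into an acuity assessment before tailoring a regimen.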
These are typically modestly effective.32
The computerized CDSS in this study, by contrast, was designed to perform much more cognitively rich work—to determine, based on a large variety of inputs, a patient's acuity of illness, and then, based on that assessment, to suggest a tailored treatment regimen. Systematic reviews show that CDSSs are much less successful for this type of activity than for simpler activities such as preventive care or drug dosing.20
In this context, the relatively high accuracy of the computerized CDSS asthma control assessment (80%) is a notable achievement, although the system did not perform as well for new patients, and even 80% accuracy may not be sufficiently high for widespread use or acceptance.
The strengths of this study lie in our ability to determine precisely what information providers were using to determine asthma severity and control, and therefore to pinpoint reasons for disagreement with a computerized CDSS following guideline protocols. However, this study does have some limitations. We relied on chart notes to determine practitioners' clinical reasoning. We do not know if providers chose to discount certain information, if they preferentially weighted certain information in a different fashion than the guidelines suggest, or if they inadvertently checked an incorrect box on the clinical decision support screen. Furthermore, in most cases we were not able to assess whether these clinical experts deliberately disagreed with the guidelines, or were simply not aware of the guideline-recommended practice. However, all cases in which a provider did not follow guidelines were reviewed by an expert clinician to determine whether the difference was appropriate (an ambiguous guideline or a reasonable disagreement) or a lapse in recommended care. We do not know how often providers changed their assessments to become concordant with the guidelines once they viewed the CDSS assessments. We did not assess patient outcomes. Finally, we performed this study in a specialty clinic, and the results may not be generalizable to a primary care setting. It is possible that the rate of disagreement would be lower in a setting in which the clinicians were not content experts.
We have used the results of this analysis to substantively alter the next iteration of the decision-support system, which is aimed at primary care physicians. We have changed the wording of the cough assessment to clarify that it refers only to asthma-related cough. We have added specific questions about adherence, inhaler technique, and environmental controls. Unless these are all recorded as appropriate, the computerized CDSS will not provide advice about treatment step. This eliminates the problem of the computerized CDSS recommending a higher step of therapy for patients who would probably respond to existing therapy if their inhaler technique were improved. For patients who are not well-controlled, we have added additional questions about alternative diagnoses and psychosocial factors. Finally, we have improved the computerized CDSS so that it can now identify existing treatments and recommend management for return patients as well as new patients. We expect these alterations to reduce computerized CDSS errors and increase the CDSS's face validity and utility for practitioners.
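The gating behavior described above can be summarized schematically. The sketch below is our own illustration of the revised design, assuming boolean inputs for the three new questions and a caller-supplied routine for computing the guideline step; the names are hypothetical:

```python
def step_advice(adherence_ok, technique_ok, environment_ok, compute_step):
    """Hypothetical gating logic: withhold treatment-step advice unless
    adherence, inhaler technique, and environmental controls are all
    recorded as appropriate."""
    if not (adherence_ok and technique_ok and environment_ok):
        # Address basics first rather than escalating therapy.
        return None
    return compute_step()
```

Under this design, a patient with poor inhaler technique receives no step recommendation, avoiding an inappropriate escalation of therapy.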
In conclusion, we found that a CDSS designed to assess and manage pediatric asthma patients in a pediatric pulmonology practice performed well for return visits, with providers both entering data appropriately and agreeing with most of its assessments. We further found that 8% of return visits and 18% of new visits to an academic pediatric pulmonology practice did not conform to guideline-based practice, suggesting that even expert clinicians may benefit from clinical decision support. Finally, examining cases in which pulmonologists did not agree with the computerized CDSS proved to be a valuable method both of identifying guideline-deviant care and of improving the CDSS itself. This is an evaluative step that should be undertaken after implementation of complex decision-support systems.