We studied a new set of actionable electronic reminders designed to improve the quality of primary care and found no significant improvement in testing rates for diabetes care, breast cancer screening, or osteoporosis screening. Unfortunately, many intervention physicians reported not using, and even not being aware of, the new features of the electronic health record.
In prior reviews of electronic clinical decision support systems, reminders prompting clinicians to improve guideline adherence have often improved clinician performance.9
However, these studies almost universally compared the effect of electronic reminders to no reminders. In our study, we measured the incremental effect of linking reminders to the electronic order entry system, with both intervention and control sites receiving similar basic information from the electronic reminders.
Compared with informational reminders, the implementation of actionable reminders presented the additional challenge of facilitating efficient test ordering across different ordering systems. Limited interfaces between our reminders and the laboratory and radiology order systems were considerable hurdles for clinician adoption. This lack of 'efficiency gain' for ordering tests likely explains the unexpectedly lower rate of mammography testing when clinicians recorded mammography-related structured responses. When ordering mammography, clinicians likely went directly to the radiology order entry system rather than taking the extra steps of clicking through the reminders. We believe that most clinicians used the actionable reminders for informational purposes and to document reasons why tests were not performed. This theory is supported by the similar relationship between structured responses and mammogram performance seen in the control clinics, which had purely informational reminders.
To explore other reasons for low usage of the actionable reminders, we held informal follow-up discussions with clinicians at the intervention sites. In addition to the workflow barriers described above, concerns arose regarding the content of the reminders. Clinicians felt that a few alerts contained inaccuracies that led to needless extra work. Although our review of the reminders' positive predictive value indicated that the rate of inaccuracy was not excessive, the perception of inaccurate reminders in some cases may have led some clinicians to abandon the reminders altogether.
Our experience provides quantitative support for the theoretical frameworks identifying barriers to adoption of clinical decision support systems. During our evaluation, several practical lessons emerged. First, our ordering workflow using the reminders needed to be more efficient. Second, our intervention likely would have benefited from obtaining data on usage of the system earlier in the process, allowing us to address the underlying issues in a more timely fashion. Third, despite our training efforts, a significant subset of clinicians in the intervention group was not fully aware of the capabilities of the system and required retraining. Finally, usage of the reminders might have improved with increased communication with clinicians regarding perceived inaccuracies, which may have strengthened their confidence in the system. These lessons are consistent with a recent review suggesting that effective evaluation strategies can provide vital information to improve the implementation of these systems.20
We conducted an evaluation of the implementation of a health-information-technology-based quality-improvement initiative that included collection of clinical data as well as an assessment of physician use patterns. However, our study does have some limitations. Our intervention group comprised clinicians who volunteered to use the reminders, leading to potential selection bias. However, we would have expected this bias to favor a positive effect of the intervention, because these clinicians were more likely to respond to the reminders. In addition, we were unable to directly measure whether the actionable reminders were clicked, so we used the recording of structured responses as a proxy for reminder usage. While we believe that the vast majority of these responses were entered via the reminders, it is possible to enter structured responses outside the context of the reminders. Finally, the generalizability of our results may be limited by the fact that the intervention occurred within a single healthcare system and was performed using an internally developed electronic health record.