We sought to identify differences in the alert-management strategies of providers with high versus low rates of timely follow-up of critical diagnostic imaging alerts, with particular attention to their use of computerized alert-management tools. Using techniques from cognitive task analysis, we found that providers of both types were relatively unaware of important alert-management features and used workarounds (such as handwritten notes as reminders) to process alerts. Although more providers in the untimely than in the timely group were aware of the notifications feature, more providers from the untimely group also reported manually scanning the alert list and heuristically processing alerts according to their judgment of clinical priority. Overall, we found a lack of standardization in the management of critical diagnostic test alerts.
The results of this study are consistent with research and theory on cognitive workload and attention. Managing large numbers of alerts (over 50/day average in our sample) is cognitively complex because making a decision about any given alert requires processing many variables at the same time (eg, criticality, urgency, date received, whether the alert is informational or requires action).14
Related research suggests that decision-makers (such as providers trying to prioritize alerts) cannot cognitively evaluate more than about four to five variables at a time in making their decisions.23
Thus, busy providers who are short on time, and who rely on heuristics combining multiple clinical and time-based criteria to decide which alerts to address first, will have difficulty managing a large volume of alerts.
Though managing alerts overall requires a high level of reasoning due to the complexity of interaction with the clinical data, deciding which alerts to address first requires far less complex reasoning. Providers compile, rather than analyze, information to make this kind of decision;14
this type of compiling activity could be offloaded to some degree to a computer, thus saving precious cognitive resources for the more complicated clinical analysis that only the provider can accomplish. Alert-management features such as those we studied provide an algorithmic, systematic means of prioritizing and filtering alerts, reducing cognitive workload and allowing the provider to devote more cognitive resources to following up on an alert rather than to deciding which alert needs attention first. This could explain why providers who manually prioritized alerts were more likely to be in the untimely group than providers who relied on the systematic use of alert-management tools.
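To make the distinction concrete, the "compiling" step the untimely group performed by hand (scanning the list and weighing criticality, urgency, date, and actionability alert by alert) can be expressed as a simple, deterministic sort. The sketch below is purely illustrative: the `Alert` fields and the `prioritize` function are our own hypothetical names, not CPRS's actual schema or implementation.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical alert record; field names are illustrative, not CPRS's schema.
@dataclass
class Alert:
    patient: str
    criticality: int   # 1 = life-threatening ... 3 = informational
    actionable: bool   # True if the alert requires provider action
    received: date

def prioritize(alerts):
    """Sort alerts so the most critical, actionable, and oldest surface first.

    This offloads the 'compiling' step to the computer: the provider no
    longer decides *which* alert to open, only *how* to act on it.
    """
    return sorted(
        alerts,
        key=lambda a: (a.criticality, not a.actionable, a.received),
    )

inbox = [
    Alert("A", criticality=3, actionable=False, received=date(2011, 5, 2)),
    Alert("B", criticality=1, actionable=True,  received=date(2011, 5, 3)),
    Alert("C", criticality=1, actionable=True,  received=date(2011, 5, 1)),
]
ordered = prioritize(inbox)
# Critical, actionable alerts rise to the top, oldest first: C, B, A
```

A fixed sort like this replaces a four-variable mental judgment with a glance at the top of the list, which is the cognitive saving the text describes.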
In addition to cognitive workload, other explanations for the observed results, such as “clinical inertia”24
and cognitive style, are also possible. In our previous work,6
we found that lack of timely follow-up was minimal when the radiologist notified the provider by phone, which at the study site is required only when test results are life-threatening or require an emergent intervention. This “inertia” has been described in other settings and is associated with a lack of action on results whose implications are less immediately consequential.
Cognitive style (a concept increasingly studied in the technology acceptance literature) has also been posited as a potential source of variation in technology usage. Chakraborty and colleagues25
found that people with innovative cognitive styles are more likely than people with adaptive styles to perceive a new technology (such as alert-management features) as useful, and are thus more likely to use it. However, a contemporary study26
found that after accounting for computer anxiety, self-efficacy, and gender (variables common in early adoption and use models), cognitive style accounts for no significant incremental variance, though personality characteristics do. Thus, the evidence is mixed, and more studies are needed before firm conclusions can be drawn from the cognitive style literature.
Our findings highlight the importance of usability, user knowledge, and adequate provider training for the utilization and integration of EMRs into clinical work. A significant portion of participants were unaware of the alert-management features we studied, and none were aware of all of them; providers often employed workarounds, most commonly handwritten notes and reminders, to help manage their alerts. In our ongoing work, we have demonstrated several of the features discussed in this paper to participants and found that many were unaware that CPRS had these functionalities, greeting them as “new” tools. Even the most perfectly designed user interface is useless if the user is unaware of its features or does not know how to use them properly.
An excellent example from our study is the notifications feature. This feature displays a list of all available event types (eg, notifications that certain lab results were completed, that a patient was seen by a specialty clinic, etc) for which an alert can be generated; users can then select the types of events of which they want to be notified. At the study site, 10 types of notifications are mandatory (ie, the user cannot opt out of receiving them); on average, providers have approximately 15 types of notifications turned on, though we found some providers with as many as 50. When a new user account is created, only the institutionally determined “mandatory” alert notifications are turned on by default. However, providers often want additional alerts about clinical events they consider important. If the user is unaware of the notifications feature and does not change the default settings, then he/she may be missing important information affecting patient safety; conversely, if the user is not selective about what is most relevant to him/her, he/she could inadvertently increase his/her overall alert volume and decrease the signal-to-noise ratio, making it more difficult to address alerts in a timely fashion. This could partially explain the difference between groups in knowledge of the notifications feature, and why providers in the untimely group had 28% greater alert volume.
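The subscription logic described above (a mandatory set that is always on and cannot be opted out of, plus user-selected additions) can be sketched in a few lines. The type names and the `effective_subscriptions` helper below are hypothetical illustrations, not CPRS's actual notification identifiers or code.

```python
# Hypothetical model of the notifications feature described above:
# mandatory types are always on; everything else is strictly opt-in.
MANDATORY = {"abnormal_lab", "critical_imaging"}  # illustrative names only
ALL_TYPES = MANDATORY | {"clinic_visit", "admission", "med_renewal"}

def effective_subscriptions(user_opt_ins):
    """Return the notification types a user will actually receive.

    A new account starts with only the mandatory set; additional types
    must be explicitly selected, and mandatory types cannot be removed.
    """
    return MANDATORY | (set(user_opt_ins) & ALL_TYPES)

# A user who never changes the defaults receives only mandatory alerts:
default_subs = effective_subscriptions([])

# A selective user adds just the clinical events he/she considers important;
# each addition raises alert volume, so unselective opt-ins add noise:
selective_subs = effective_subscriptions(["clinic_visit"])
```

The union with `MANDATORY` in the return statement is what makes opting out of mandatory alerts impossible, mirroring the site policy described in the text.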
User knowledge of, and targeted training on, EMR features may also help reduce the observed follow-up variation. Many institutions (including our study site) provide a single, cursory training session on the basic features of their EMR; this training usually occurs during new employee orientation, when providers are cognitively overloaded with a myriad of other logistical details. Decades of training and skill-acquisition research suggest the need to activate multiple sensory modalities during training, accommodate trainee learning styles, distribute practice over time, and provide opportunities to practice newly learned skills on the job.27
Thus, institutions should consider strengthening their EMR training programs to include features such as audiovisual or computerized media and periodic refresher trainings targeted to the needs of the individual. Further, research also demonstrates that pretraining conditions, such as the availability of protected training time, trainee readiness, and strategic framing of the training's purpose, can have a considerable impact on training effectiveness.29
Training could also be improved by involving clinical “super users” from the institution in the training process. EMR training sessions framed to help providers view the EMR as a natural part of their clinical work (eg, a tool as basic as a stethoscope) rather than as a bureaucratic barrier are likely to have a greater impact, and thus improve the way physicians manage alerts (and perhaps other EMR components). Though the study site offered several training tools (an initial instructor-led workshop, web-based training, and super users in the form of clinical applications coordinators), barriers such as the lack of protected time for training and improper framing of the training tools offered may have significantly reduced their effectiveness.
Finally, this research highlights that clinicians are using the alert system as a clinical task-management system even though it was not designed as such, and it therefore has significant limitations in this role. Regardless, clinical information system designers should devote far more resources to building in methods for capturing user actions along with the clinical context in which they take place. Such data would enable clinical system designers, along with researchers, to reconstruct user actions and thereby better understand what is and is not working as these complex systems are used for routine clinical activities. This increased understanding should greatly facilitate system redesign and thus allow future systems to better meet clinicians' needs.
This work contains two important limitations. First, the study was conducted at a single VA medical center, which limits generalizability. However, the basic features of CPRS (including which alert-management features are available) are consistent across VAMCs throughout the USA. Thus, the alert-management strategies observed at this VAMC are likely observable at other VAMCs. Although our findings may not generalize to EMRs in the private sector, users in the private sector can benefit from the implications of this work.
Second, with a sample size of 28, it is difficult to detect statistically reliable differences between the timely and untimely follow-up groups. A larger sample size may have allowed better detection of some of the more subtle effects, such as differences in exactly which alert-management tools each group used. Because of its labor intensiveness, the cognitive task analysis technique used for this research does not lend itself to large-scale data collection. However, in exchange for that labor intensiveness, the technique offers a depth of information not obtainable from a survey or structured simulation. Indeed, one of the most interesting findings (that providers in the untimely group manually prioritize alerts whereas timely providers do not) was not an a priori hypothesis but emerged from the data. Further, significant findings despite a small sample size are often considered stronger evidence of an effect than significant findings based on a large sample, where small effects are easy to find.31
Finally, we were unable to ascertain individual differences among providers outside of provider type, such as differences in cognitive style, personality characteristics, or usage habits. Future studies taking these factors into account could clarify some of the mixed findings currently in the literature.