This study compared two approaches to infobutton links (i.e., topic links and nonspecific links) regarding information seeking effort and success rate. To our knowledge, this is the first study to make this comparison.
The study presents evidence supporting the alternative hypothesis that session length was shorter in the intervention group. In fact, users spent 17% less time on their information seeking efforts when provided with links that attempt to lead them to content that more specifically addresses their questions. On the one hand, this difference may be deemed an important finding, given that lack of time and of seamless access are among the main barriers to the use of online health information resources at the point of care 1,5
and one of the key motivations of infobuttons is to reduce these barriers. 14,17,18
On the other hand, it may be argued that the absolute session length difference between the control and intervention groups was not large enough to produce a clinically significant impact.
In theory, the idea behind topic links is that these links lead clinicians to content subsections that are likely to be more closely related to the clinician's question, reducing exposure to unnecessary information and minimizing cognitive effort. Cognitive psychology theories support the notion that information systems should minimize cognitive overload, providing the minimal amount of information to support decisions. This view has been well elaborated in a study by Weir et al., who stated that “any computerized information system needs to be designed to support strategies that allow for ‘fast and frugal’ decision-making. These strategies are often used by decision-makers in the real world. They include multiple methods for rapidly identifying the minimally sufficient data to support a decision, attempts to efficiently balance accuracy versus speed, and creating personal information displays that support recognition-primed actions.” 29
Therefore, topic links may have reduced clinicians' cognitive workload when compared to nonspecific links. However, the present study did not aim to demonstrate such an impact. Future studies, with a stronger qualitative focus, may be necessary to address this question more appropriately.
A more pronounced difference might have been observed if, instead of fixing the default topic to “adult dose,” a more accurate prediction method had been employed to determine the topic that is most likely to match the clinician's information need in a given context. In the intervention group, users frequently had to override the default topic, a task that demands additional cognitive effort and time. This overhead can be minimized if the default topic is accurately predicted. In a previous study, we demonstrated that accurate prediction models can be derived from data on previous infobutton sessions using machine learning methods, suggesting this to be a sound alternative for further improvement of infobuttons. 24,25
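As a minimal sketch of the kind of context-based default-topic prediction discussed above: the cited work derives models with machine learning methods over real session logs, whereas this toy example simply selects the most frequently chosen topic per context. All field names, contexts, and topics are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical prior infobutton sessions: the context of each request
# (user role, drug class) and the topic the clinician ultimately chose.
sessions = [
    {"role": "nurse", "drug_class": "anticoagulant", "topic": "adult dose"},
    {"role": "nurse", "drug_class": "anticoagulant", "topic": "adult dose"},
    {"role": "pharmacist", "drug_class": "anticoagulant", "topic": "interactions"},
    {"role": "pharmacist", "drug_class": "anticoagulant", "topic": "interactions"},
    {"role": "nurse", "drug_class": "antibiotic", "topic": "adverse effects"},
]

def train(sessions):
    """Count topic choices per (role, drug_class) context."""
    counts = defaultdict(Counter)
    for s in sessions:
        counts[(s["role"], s["drug_class"])][s["topic"]] += 1
    return counts

def predict_default_topic(counts, role, drug_class, fallback="adult dose"):
    """Return the most frequently chosen topic for this context,
    falling back to a fixed default for unseen contexts."""
    ctx = counts.get((role, drug_class))
    return ctx.most_common(1)[0][0] if ctx else fallback

model = train(sessions)
print(predict_default_topic(model, "pharmacist", "anticoagulant"))  # interactions
print(predict_default_topic(model, "physician", "antibiotic"))      # adult dose (fallback)
```

A frequency baseline like this already captures the intuition that different contexts favor different topics; a production model would add richer context features and a learning algorithm.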
No statistically significant difference was found in infobutton usage. According to the technology acceptance model, perceived usefulness and perceived ease of use are the main determinants of actual use of a technology. 26
Therefore, it is possible that users did not perceive a relative advantage of topic links over nonspecific links in terms of usefulness and ease of use. Another potential explanation is that the study subjects, who were all frequent users, already used infobuttons whenever a need was perceived. Finally, a significant difference might have been detected if the study had been conducted for a longer period of time, giving users sufficient experience with the improved functionality to affect their perceived usefulness and ease of use. The fact that almost half of the subjects were enrolled during the course of the study, and therefore had even less time to interact with topic links, may have contributed to this.
There was no significant difference in terms of information seeking success rate and impact. This may be explained by the fact that the baseline success rate and impact were already very high, leaving little room for improvement, at least within the context of medication order entry. Other infobutton studies have also shown high session success rates, supporting this hypothesis. 17,18
The analysis of unsuccessful sessions raised two important points. First, users tended to quit their search effort very quickly, without looking at the different topics and information resources that were offered. This behavior was even more pronounced than that observed with general purpose search engines, where the majority of users perform up to two queries and view up to two web sites when searching for content. 27
The difference may be explained by the busier nature of patient care settings and highlights the importance of information resources providing clinicians with content that rapidly addresses their questions. 5
Second, a high percentage of the reported unsuccessful sessions was due to issues related to the use of inappropriate drug codes in infobutton requests, reflecting incomplete content indexing. Once identified, these issues were all fixed, so that, in the future, similar requests will successfully retrieve content. Therefore, these issues underscore the importance of continuous monitoring of infobutton sessions as a good knowledge management practice. 28
Continuous monitoring allows problems with code mapping, content indexing, lack of content, and changes to the information resource APIs to be identified and fixed in the Infobutton Manager infrastructure or by the content provider. For example, reports of unsuccessful content requests could be shared with content providers so they can optimize content indexing or decide how to allocate content development resources.
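The kind of continuous session monitoring described above can be approximated with a simple log analysis that flags drug codes whose requests repeatedly retrieve no content. The log format, field names, and failure threshold below are all hypothetical, chosen only to illustrate the idea.

```python
from collections import Counter

# Hypothetical session log entries: the drug code sent in each infobutton
# request and the number of content items the resource returned.
log = [
    {"drug_code": "RX:1234", "results": 5},
    {"drug_code": "RX:9999", "results": 0},
    {"drug_code": "RX:9999", "results": 0},
    {"drug_code": "RX:1234", "results": 3},
    {"drug_code": "RX:5555", "results": 0},
]

def failing_codes(log, min_failures=2):
    """Flag drug codes whose requests repeatedly retrieve no content,
    a likely sign of a code mapping or content indexing problem."""
    failures = Counter(e["drug_code"] for e in log if e["results"] == 0)
    return [code for code, n in failures.items() if n >= min_failures]

print(failing_codes(log))  # ['RX:9999']
```

A report built from such flagged codes is the sort of artifact that could be shared with content providers to guide indexing fixes or content development.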
A number of studies have assessed the effectiveness of information resources at the point of care. 30
However, differences in methodology, study population, clinical setting, goals of the information retrieval tool, and types of information needs make comparisons among these studies and the present one difficult and sometimes inappropriate. Therefore, such comparisons should be interpreted with caution, given the lack of a uniform methodology.
In our study, clinicians reported that they were able to find answers to their questions in 88.5% of the infobutton sessions and improved their decisions in 36.3% of these sessions. In a study conducted by Magrabi et al., physicians reported that their questions were completely or partially answered in 73% of the information seeking sessions conducted via Quick Clinical, an online evidence system. 31
Yet, physicians in the Quick Clinical study used the tool for a variety of clinical questions, including types of questions that were probably more complex. In contrast, the present study focused solely on medication-related questions.
A mixed method study conducted by Pluye et al. revealed that residents perceived that information accessed via a tool called InfoRetriever was relevant to their search objectives in 85.9% of the searches. 32
However, the concept of “relevance” in the Pluye et al. study did not have the same meaning as “success” in the present study, since the former does not necessarily indicate that a question was actually answered.
A systematic review of studies that evaluated the impact of clinical information retrieval technology on physicians found that 20% to 82% of searches produced some type of positive impact on physicians. Yet, according to the authors of the review, the higher estimates were subject to recall bias, which tends to overestimate impact. Therefore, the most plausible estimates ranged from 20% to 39%. 30
In the present study, a positive impact was found in 63.7% of the infobutton sessions. Since the post-session survey was prompted immediately after the conclusion of infobutton sessions, recall bias was not likely to be an important factor in our results.
Few studies have evaluated applications that enable context-sensitive access to information resources within EMR systems. 33
In a study conducted by Maviglia et al., users were able to meet their medication-related information needs in 84% of the sessions and enhanced their decisions in 15%. 18
However, users in the present study were likely to be more experienced with infobuttons than those in the former study. While the former study included all users of a medication order entry system, our study selected a subpopulation of frequent infobutton users. In addition, the present study was conducted five years after the initial release of infobuttons, while the former study was initiated when infobuttons were first released at their institution.
The present study found a median session duration of 39.3 seconds overall and 35.5 seconds in the intervention group. Users in the study conducted by Maviglia et al. spent 25 seconds looking for information. 18
However, the session time measurement method differed between the two studies. Session duration in the present study was measured from the infobutton click event to the content window close event, so the entire session was captured. Maviglia et al., on the other hand, stopped the timer when the last page viewed in a session was opened. The authors of that study estimated that users may have spent approximately 21 seconds on this last page. Therefore, their total session time can be estimated at roughly 45 seconds, which is very similar to the median session length in our control group. Interestingly, the infobuttons in the Maviglia et al. study did not provide topic links functionality; had they done so, it is conceivable that session lengths similar to those observed in the intervention group of the present study would have been found.
Cimino conducted a survey to assess the overall user satisfaction and effectiveness of infobuttons at Columbia University. Among the respondents, 74% indicated that infobuttons produced a positive effect on patient care decisions and 20% reported positive impact on patient care. 17
Differences in methodology preclude a more direct comparison between the present and Cimino's results. However, both studies provide evidence that supports the usefulness of infobuttons.
In a randomized trial, Rosenbloom et al. compared two user interfaces that enabled context-sensitive access to educational material in a computerized provider order entry (CPOE) system. 33
Control subjects could access educational content from a menu in the standard CPOE interface. Intervention subjects had access to an improved version, in which links to educational material were visibly highlighted in the interface, analogously to an automobile dashboard. The usage rate of the improved version was approximately 9 times higher than that of the control version, illustrating the importance of user interface design in the on-demand delivery of decision support. In our study, the improved version of infobuttons (i.e., topic links) was not associated with increased usage. Notably, our improved version focused on promoting quicker access to relevant information rather than on drawing attention to the availability of infobutton links. On the other hand, the results reported by Rosenbloom et al. suggest that improved visibility may hold the key to increasing the usage of infobuttons.
This study has several limitations. First, since the study subjects were selected from a pool of frequent, and probably enthusiastic, infobutton users, the session success rate and outcome may have been overestimated relative to the larger group of all infobutton users. 34
Therefore, the high session success rate and outcome observed in this study cannot be generalized to low frequency infobutton users. Nevertheless, the authors believe that the current study results should generalize to high-frequency, medication-related infobutton users in other institutions. This belief is based in part on the observation that the current study results are comparable with the success rates reported in a similar study that included an entire EMR user population. 18
In addition, due to the randomization procedure, it is unlikely that the subject selection strategy biased the comparison between the control and intervention groups regarding session success rate and outcome.
Second, as previously discussed, the session length difference between the control and intervention groups, though statistically significant, was modest in absolute terms. Our study was not able to determine whether this statistical difference was also associated with a clinically significant impact.
Third, only one information resource that provided access to topic-specific medication content was available at our institution. Therefore, it is possible that variations in information resource implementations, such as user interface, content depth, and content coverage, could lead to different results. Further studies are necessary to investigate whether specific characteristics of an information resource influence the effectiveness of infobuttons.
Fourth, this study was limited to the context of medication order entry and, consequently, to questions related to medications. Medication questions, such as those related to dosing, drug interactions, and pregnancy category, may have relatively straightforward and objective answers. On the other hand, questions related to the diagnosis, etiology, or prognosis of diseases may entail more extensive and subjective answers. Therefore, the session length, success rate, and outcome reported in the present study cannot be generalized to other EMR contexts, such as problem list review or laboratory results review.
Fifth, infobutton session success rate and outcome were measured based on clinicians' self-assessment, i.e., their own perceptions and judgments, which may not always be correct. Westbrook et al., for example, found that many clinicians placed confidence in information that actually led them to incorrect answers. 35
Therefore, it is not possible to determine whether a self-reported “decision enhancement” was in fact associated with a better clinical decision. Further investigation is required to assess this possibility objectively. A mixed-method approach, including techniques such as critical incidents and journey mapping, combined with log data analysis, is a promising strategy to overcome this limitation. 32,36
Last, a potential drawback of the proposed approach is that topic links may hide pieces of information that, though not actively sought by the clinician, might still be useful, reducing opportunities for serendipitous learning. Yet, the short session lengths combined with high success rates in both the control and intervention groups suggest that infobuttons are being used to answer very specific questions in situations that leave very little time for examining content that does not directly answer those questions.
As previously noted, this study focused on a single information resource and on medication order entry infobuttons. Future studies are necessary to assess whether similar results can be observed with different information resources and in different EMR contexts. Another potential area of future research is the development and evaluation of methods that can more accurately predict the information needs that arise in a given context, as well as the resources that are most likely to fulfill those needs.