Many clinical information retrieval services have been developed to facilitate searching for the current best evidence for clinical decisions. DiCenso, Bayley, and Haynes24
have described a ‘6S’ hierarchy of evidence-based information services. The hierarchy follows the evolution of information processing from: (1) original studies (the lowest level); (2) synopses of original studies (evidence-based abstraction journals); (3) syntheses (reviews); (4) synopses of syntheses (eg, DARE, healthevidence.ca); (5) summaries (evidence-based online textbooks); and (6) systems (the highest level—computerized clinical decision support). Familiarity with the resources available at the various levels of the hierarchy can expedite searching. Evidence-based resources near the top of the 6S hierarchy have already filtered and appraised the primary literature, distilling a large quantity of information into a smaller body of quality evidence.24
Articles in Medline, at the lowest level of the hierarchy, are still important in clinical care. Tools such as the Clinical Queries search filters assist by limiting retrieval to higher-quality articles, with the intent of reducing the number of articles a clinician needs to screen out and increasing the number likely to provide an answer based on sound research. This study shows that these filters can assist clinicians in their quest for high-quality, clinically relevant articles.
For treatment questions, the primary outcome of retrieval of relevant articles did not differ between Clinical Queries and unfiltered PubMed. The filtered searches did, however, return more articles from the core internal medicine journal subset and more higher-quality articles that were selected as relevant and that passed scientific criteria; these articles provide clinicians with better answers to their questions. In addition, because the core journals are widely held, clinicians will likely have a higher probability of obtaining the article in full-text form, since many institutions subscribe to these journals. Few differences in retrieval of relevant articles were found between the main PubMed page and the Clinical Queries page, and no differences in search satisfaction, although the non-significant differences favored the Clinical Queries searches.
For the diagnosis searches, the filtered results returned more relevant articles, with the first relevant article appearing higher in the retrieved list, which would allow searchers to get answers more quickly and reduce the time spent screening results. Clinical Queries also returned significantly fewer articles, which is often preferable because physicians can determine more quickly whether they have found an answer to their question or need to continue searching.
The aim of the Clinical Queries filters is to optimize search results by increasing the sensitivity, specificity, and precision of searches, with the goal of returning more articles that are on target and fewer that are off target. The treatment filter uses the terms (randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] AND controlled[Title/Abstract] AND trial[Title/Abstract])), which increase the yield of higher-quality trials. The current study supports this: the filter returned a greater proportion of higher-quality studies than searching without it. The perceived clinical relevance of the retrieved articles, however, was not affected by the filter.
Similarly, for the diagnostic studies, the Clinical Queries filter limits results with the term (specificity[Title/Abstract]). The filter returned more relevant articles but did not affect participant satisfaction, presumably because participants focused on content relevance rather than methodologic quality (which they were not asked to judge).
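To make the mechanics concrete, the sketch below shows one way these filters could be appended to a clinician's content terms and submitted to PubMed programmatically. Only the filter strings are quoted from the text above; the use of Biopython's Entrez module, the e-mail address, and the sample clinical question are illustrative assumptions, not part of the study.

    from Bio import Entrez

    Entrez.email = "searcher@example.org"  # hypothetical address; NCBI requires one

    # Narrow Clinical Queries filters quoted above
    TREATMENT_FILTER = ("(randomized controlled trial[Publication Type] OR "
                        "(randomized[Title/Abstract] AND controlled[Title/Abstract] "
                        "AND trial[Title/Abstract]))")
    DIAGNOSIS_FILTER = "(specificity[Title/Abstract])"

    def clinical_queries_search(content_terms, filter_string, retmax=20):
        """Combine content terms with a Clinical Queries filter and search PubMed."""
        query = "(" + content_terms + ") AND " + filter_string
        handle = Entrez.esearch(db="pubmed", term=query, retmax=retmax)
        record = Entrez.read(handle)
        handle.close()
        return record["Count"], record["IdList"]

    # Hypothetical treatment question: inhaled corticosteroids for asthma exacerbations
    count, pmids = clinical_queries_search(
        "asthma AND inhaled corticosteroids AND exacerbation", TREATMENT_FILTER)
    print(count, pmids[:5])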
Future research includes testing the robustness of the Clinical Queries filters against current database content, since the filters were derived in 2000. The performance of PubMed with and without Clinical Queries in helping clinicians arrive at ‘correct’ answers to clinical questions will also be assessed.
Searching databases for research relevant to a physician's question is not an easy task. A number of strategies can help improve search retrieval, such as using Boolean operators, controlled vocabulary, and the Participants, Intervention, Control, Outcome (PICO) format of question analysis; a sketch of a PICO-structured search follows below. In this study, the approaches to searching varied greatly among participants. Some very sophisticated users included MeSH terms and truncation; others copied the question directly into the search box. The methodological terms used by participants had varying levels of impact on the operating characteristics of their searches.
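As an illustration only (the clinical question, synonyms, and MeSH headings below are hypothetical and not drawn from the study questions), a PICO breakdown can be turned into a structured PubMed query by ORing synonyms within each concept and ANDing the concepts together:

    # Hypothetical question: does warfarin reduce stroke in adults with atrial fibrillation?
    pico = {
        "patient":      '("atrial fibrillation"[MeSH Terms] OR atrial fibrillation[Title/Abstract])',
        "intervention": '(warfarin[MeSH Terms] OR anticoagul*[Title/Abstract])',  # truncation picks up word variants
        "comparison":   '(aspirin[Title/Abstract] OR placebo[Title/Abstract])',
        "outcome":      '(stroke[MeSH Terms] OR stroke[Title/Abstract])',
    }

    # Synonyms within a concept are ORed above; the concepts themselves are ANDed.
    query = " AND ".join(pico.values())
    print(query)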
The precision of searches, the proportion of retrieved articles that are on target, is one of the most important measures for busy clinicians; they want an answer, and they want it quickly and easily. The filtered searches showed some improvement, returning fewer articles and more that were on target, but retrieval still relied heavily on the content terms submitted.
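In standard information-retrieval terms (the notation here is ours, but it matches the definition of precision above and the sensitivity mentioned earlier):

\[
\text{precision} = \frac{\text{relevant articles retrieved}}{\text{total articles retrieved}}, \qquad
\text{sensitivity (recall)} = \frac{\text{relevant articles retrieved}}{\text{total relevant articles in the database}}
\]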
Strengths and limitations
One of the strengths of this study is that the participants were blinded to where their search terms were being sent: they were unaware that the difference between PubMed and the Clinical Queries filters was being tested. Further, the patterns in the results were similar for the provided questions and the participants' own questions, indicating that the findings can be extrapolated beyond the study sample of questions and participants.
The use of standardized questions gives some control over the search terms used. Generating good standardized questions is quite challenging; because the questions were based on systematic review topics, care was taken that a question did not replicate words in the review title, to reduce the probability of the systematic review being the first article retrieved and skewing the participants' search results.
Limitations of this study included the constraints put on physician searching and the lack of access to full-text articles for relevance assessment. Searching is generally an iterative process; once the results of a search are presented, searchers usually refine their search to increase the applicability of the articles retrieved. Participants were able to adapt their search only once, and only if their initial search returned no articles in one of the interfaces; some expressed frustration at this limit. Participants did not have access to the full text of the articles, although they did have the option to view the abstract. Relevance assessments were therefore based not on the whole study but only on the title and, optionally, the abstract. Only one clinician requested the full text of an article.
The study used the more ‘specific’ Clinical Queries search strategies, which minimize the retrieval of ‘off target’ articles. Performance would likely differ if the ‘sensitive’ search filters had been used; these filters maximize the retrieval of high-quality studies (at the expense of somewhat lower specificity).