Readers may use articles without permission of copyright owners, as long as the author and MLA are acknowledged and the use is educational and not for profit.
The review evaluated studies of electronic database search strategies designed to retrieve adverse effects data for systematic reviews.
Studies of adverse effects were located in ten databases as well as by checking references, hand-searching, searching citations, and contacting experts. Two reviewers screened the retrieved records for potentially relevant papers.
Five thousand three hundred thirteen citations were retrieved, yielding 20 studies designed to develop or evaluate adverse effect filters, of which 3 met the inclusion criteria. All 3 studies identified highly sensitive search strategies capable of retrieving over 95% of relevant records. However, 1 study did not evaluate precision, while the level of precision in the other 2 studies ranged from 0.8% to 2.8%. Methodological issues in these papers included the relatively small number of records, the absence of a validation set of records for testing, and limited evaluation of precision.
The results indicate the difficulty of achieving highly sensitive searches for information on adverse effects with a reasonable level of precision. Researchers who intend to locate studies on adverse effects should allow for the amount of resources and time required to conduct a highly sensitive search.
For patients, clinicians, and other decision makers to make informed, balanced decisions, they need appropriate information on both the intended benefits and undesirable consequences of an intervention. However, sufficient evidence-based information on the frequency or magnitude of adverse effects is currently lacking. Long lists of potential adverse effects may be all that can be found, with little or no information available as to the magnitude of these effects or the probability of their occurrence [1–3]. One potential solution to this problem would be to incorporate data on adverse effects into systematic reviews. Systematic reviews are one of the most powerful and reliable tools to estimate the magnitude of effects and the probability of their occurrence [4–10].
Searching databases as part of a systematic review can be a difficult and time-consuming process and usually requires the skills of an information specialist or experienced searcher. Search strategies need to be devised that balance sensitivity (the ability to identify as many relevant articles as possible) with precision (the ability to exclude as many irrelevant articles as possible). In recent years, research has been undertaken to improve this process by developing search filters or search hedges [11–14]. A search filter is a predefined combination of search terms designed to retrieve information on a particular topic. The filter may be created and evaluated in various ways. For example, search terms in a filter may be subjectively derived by contacting experts in literature searching or the topic area. Terms may be objectively derived using word frequency analysis or statistical analysis of a set of relevant records, and then the best combination of terms can be identified by measuring how many relevant and irrelevant records are retrieved using various combinations. Alternatively, statistical techniques such as logistic regression can be used to suggest the best combination of search terms. Once a search filter has been developed, it can then be tested against a validation set of records (a different set of relevant records).
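To make this evaluation process concrete, the sketch below (with invented record IDs, terms, and relevance judgments, not data from any study discussed here) measures the sensitivity and precision of every OR-combination of candidate terms against a small reference set of relevant records:

```python
from itertools import combinations

# Hypothetical data: the candidate terms appearing in each record,
# and which records are known to be relevant (the "reference set").
records = {
    1: {"adverse", "toxicity"},
    2: {"adverse"},
    3: {"safety"},
    4: {"toxicity", "safety"},
    5: set(),          # a relevant record containing none of the candidate terms
    6: {"safety"},
    7: {"adverse", "safety"},
}
relevant = {1, 2, 4, 5}
terms = ["adverse", "toxicity", "safety"]

def evaluate(term_combo):
    """Retrieve every record containing at least one term (an OR search)
    and return (sensitivity, precision) against the reference set."""
    retrieved = {rid for rid, words in records.items() if words & set(term_combo)}
    hits = retrieved & relevant
    sensitivity = len(hits) / len(relevant)
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return sensitivity, precision

# Try every non-empty combination of terms and report the trade-off.
for r in range(1, len(terms) + 1):
    for combo in combinations(terms, r):
        sens, prec = evaluate(combo)
        print(combo, f"sensitivity={sens:.2f} precision={prec:.2f}")
```

Note how record 5, which contains none of the candidate terms, caps the achievable sensitivity below 100% no matter which combination is chosen, illustrating why highly sensitive filters tend to require broad, imprecise terms.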
Methodological search filters have been developed for various study designs and have proved to be particularly useful for effectiveness studies [12, 14–18]. For example, the Cochrane Collaboration uses a highly sensitive search strategy, recently updated, for identifying reports of randomized trials [14,16]. In PubMed, the Clinical Queries feature allows searchers to filter articles according to etiology, diagnosis, prognosis, therapy, or clinical prediction guides. These filters have been developed via research using objective statistical analysis at McMaster University and are revised periodically.
Although the same basic principles may be applied to systematic reviews of adverse effects as to reviews of effectiveness, specific procedures may be needed as the retrieval of information on adverse effects poses particular challenges [3,10,20]. Difficulties arise when searching for adverse effects because searches may sometimes need to go beyond randomized controlled trials and there may be numerous adverse outcomes of interest, some of which may not be well defined or specified prior to the search. Moreover, adverse effects may be poorly reported, inadequately indexed, and inconsistently described.
Although systematic reviews incorporating adverse effects have become increasingly important, there is little guidance on what constitutes the best search strategy. The development of a search filter to identify information on adverse effects would be particularly useful given the problems of searching for studies on adverse effects. This research aims to systematically review methodological studies that report on the development and evaluation of search filters designed to identify articles with information on adverse effects resulting from any health care intervention.
The systematic review was conducted by two independent reviewers who retrieved potentially relevant articles and extracted data. The two reviewers then resolved discrepancies and reached a consensus on the final results.
The reviewers anticipated that much of the literature in this newly developing area of search filters for information on adverse effects would be identified by searching beyond MEDLINE and EMBASE and that much of the relevant research would not be published as peer-reviewed journal articles. For example, a previous systematic review on a similar methodological topic identified only thirteen of the thirty included papers through searching MEDLINE and EMBASE. A range of bibliographic databases was therefore searched in the current study (Table 1). These databases were carefully selected to allow the identification of reports, dissertations, and gray literature in addition to journal articles. Hand-searching of key journals in librarianship, drug safety, and research methodology was carried out to identify articles either not indexed in electronic databases or not easily identifiable in electronic databases. These journals were selected by consultation with experts and from the authors' own knowledge of relevant literature.
Unpublished material was also sought by hand-searches of conference proceedings, by scans of evidence-based websites, and through discussion with experts involved in the Cochrane Adverse Effects Methods Group. Conference proceedings and web sources were selected on the basis of their coverage of systematic review methodology. In addition, the bibliographies of any eligible articles identified were checked for additional references. Citation searches were carried out for all eligible articles using ISI Web of Knowledge.
Searching for methodological papers can prove very difficult in databases other than those in which methods papers are specifically labeled as methodology papers, such as the Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE). To develop search strategies in databases such as MEDLINE and EMBASE, a pragmatic approach was used to keep the results set manageable (Appendix, online). Terms were often limited to the title field only, and some potentially relevant text-words and indexing terms that retrieved thousands or tens of thousands of irrelevant records were omitted. All the search strategies were checked by a second experienced information scientist.
No date or language restrictions were applied to the searches. Although logistical constraints meant that non-English publications were not included in the data extraction or quality assessment, the reviewers thought an estimation of the size of the non-English literature was useful.
A research study located with the above strategy was considered eligible for inclusion in the data extraction and quality assessment portion of this review if one of its main objectives was the development and/or evaluation of a search filter or filters that could generally be used for retrieving articles with data on the adverse effects of any health care intervention (resulting from drugs, surgery, etc.) from an electronic database. Eligible research studies were required to give at least one measure of performance, such as the sensitivity and recall or precision and specificity of the filters. Studies were excluded if they:
Information was extracted about the database and interface for which the search filter was devised, the type of interventions, type of adverse effects, and methods used to create or test the search filter, such as the source and size of the reference set of relevant records or the validation set of records. Primary outcomes of interest were measures of sensitivity and recall (proportion of relevant articles retrieved by the filter), precision (number of relevant articles divided by the total number of studies retrieved with that filter), accuracy (proportion of all articles that are correctly classified by the filter), numbers needed to screen and read (inverse of precision), or specificity (proportion of irrelevant articles that were not identified by the filter). Any search strategies recommended by the authors were also noted.
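As a minimal illustration of how these outcome measures relate to one another, the function below computes each of them from the four cells of a retrieval table; the counts in the example call are invented for illustration and are not taken from the included studies:

```python
def filter_performance(tp, fp, fn, tn):
    """Performance measures for a search filter, given:
    tp = relevant records retrieved by the filter
    fp = irrelevant records retrieved
    fn = relevant records missed
    tn = irrelevant records correctly not retrieved
    """
    precision = tp / (tp + fp)
    return {
        "sensitivity": tp / (tp + fn),            # a.k.a. recall
        "precision": precision,
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "number_needed_to_read": 1 / precision,   # records screened per relevant record
    }

# Invented example: 97 of 100 relevant records retrieved (97% sensitivity)
# at the cost of 9,603 irrelevant records (1% precision, so roughly
# 100 records must be read for each relevant one found).
print(filter_performance(tp=97, fp=9603, fn=3, tn=90297))
```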
The methodological quality of the included studies was assessed using published criteria adapted specifically for this review [22,23]. The included studies were assessed using the following questions:
Searches were originally undertaken in September 2007 and retrieved 4,609 records. Update searches were subsequently performed in August 2008 and retrieved an additional 704 records. Twenty of these articles were studies that attempted to develop filters for retrieving adverse effects.
Three studies met the inclusion criteria for this review [24–29]: two were published as full papers, and one was a conference presentation [24,25] (Table 2). Although studies were not excluded from this review according to the type of health care intervention, all three of the studies that met the inclusion criteria aimed to maximize the sensitivity of search strategies to identify papers on the adverse effects of drug interventions.
One study evaluated search strategies for a named specific adverse effect (breast cancer with oral contraceptives) [27–29], whereas the other two studies aimed to develop search strategies to capture all or all serious adverse effects for a particular group of drugs [24–26]. All three studies looked at search strategies for MEDLINE, and one study included search strategies in EMBASE.
Seventeen studies were excluded from this review: eight contained no evaluation of the search strategies for adverse effects data that they proposed [30–37]; three were designed to identify sensitive search strategies for causation or etiological studies [38–40]; two did not suggest any filters but undertook co-word analysis [41,42]; three were not in English [43–45]; and one evaluated search strategies to retrieve systematic reviews of adverse effects, rather than primary data.
One study was only published as a conference abstract [24,25], and although the slides of this presentation were available, the level of detail in the reporting of the methods was not comparable to that of published research papers.
The number of relevant records in the reference sets varied considerably. The largest study was based on several hundred records and had a validation set of records. The authors stated that the precision of the searches could not be measured with their particular study design [24,25]. Furthermore, this study was not published in a journal, and full details of the methods were not presented. The other two studies tested their search strategies on smaller numbers of relevant records (fifty-eight and eighty-four records) and did not test the search strategies on a validation set of records (another set of relevant records) (Table 3) [26–29]. None of the studies reported confidence intervals for the point estimates of sensitivity.
Although each study used a number of sources to identify their reference set of records, it was possible that the original search strategies (despite searching a range of sources) failed to retrieve a substantial number of relevant records. The original search strategies might then have biased the results obtained from evaluating the search filters. For instance, if the reference set is obtained using the term “adverse” (among others), then this term is more likely to retrieve articles in the reference set and so is more likely to have a higher sensitivity when tested on that reference set. In this way, evaluation of search filters is often in danger of becoming self-fulfilling. Each study was also limited to a particular class of drugs, limiting the generalizability of the results. The derivation of the search terms was not described for one study, whilst the other two studies used terms either derived from relevant records in a systematic review or used in previous studies.
All three studies were able to create highly sensitive search strategies, achieving between 97.0% and 100.0% sensitivity (Table 4). However, the results of the two studies that also measured precision indicated that this high sensitivity was achieved with very poor precision (between 0.8% and 2.8%) [26–29]. At this level of precision, between 36 and 125 records need to be screened to retrieve each additional article on adverse effects, which may be unmanageable, given that full-text checking is often necessary.
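Since the number needed to screen is simply the inverse of precision, these screening burdens follow from a line of arithmetic (using the precision bounds of 0.8% and 2.8% reported in the abstract):

```python
# Number needed to screen = 1 / precision: the number of retrieved
# records that must be checked for each relevant article found.
for precision in (0.028, 0.008):
    print(f"precision {precision:.1%}: screen ~{1 / precision:.0f} records per relevant article")
```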
The search strategy with the highest sensitivity in Wieland et al. [27–29] did not contain any text-words for the intervention (oral contraceptives), as searching specifically for the intervention would have missed nine of the relevant citations. However, as the authors acknowledge, any search strategy that excludes terms for the interventions is likely to lead to unmanageably large results.
The studies by Badgett et al. [24,25] and Golder et al. both indicate the value of using floating subheadings (subheadings not attached to any indexing terms) for highly sensitive searches in MEDLINE. Badgett et al. [24,25] suggested using the subheadings “adverse effects,” “complications,” “poisoning,” and “drug effects,” whereas Golder et al. recommended using “adverse effects,” “complications,” and “drug effects.” Golder et al. was the only study to attempt to develop a search filter for EMBASE, and the suggested search strategy for EMBASE in that study did not differ substantially from the suggested search strategy in MEDLINE other than in the use of subheadings. While the MEDLINE search strategy indicated the value of floating subheadings, the results of the EMBASE search strategy suggested that using subheadings attached to the named drug intervention (for example, vigabatrin/adverse drug reaction or vigabatrin/drug toxicity) performed better.
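As a hedged illustration, a floating-subheading search along these lines might be entered in the Ovid MEDLINE interface as follows (assuming Ovid syntax, where .fs. searches a subheading unattached to any index term, and ae, co, and de are the standard two-letter codes for “adverse effects,” “complications,” and “drug effects”):

```
1  ae.fs.       floating subheading: adverse effects
2  co.fs.       floating subheading: complications
3  de.fs.       floating subheading: drug effects
4  or/1-3       combine the subheading searches with OR
```

This fragment only illustrates the syntax; the strategies actually evaluated appear in the cited studies.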
Badgett et al. [24,25] and Wieland et al. [27–29] both included study designs in their filter, and Golder et al. and Wieland et al. [27–29] both included specified known adverse effects. Text-words such as “adverse effects,” “side effect,” and “adverse reaction” were only included in the filter by Golder et al.
The complete search strategy for identifying papers in a systematic review on adverse effects will depend on the inclusion criteria for the review. For example, if the inclusion criteria are limited to particular study designs, the search strategy may need to reflect this. Similarly, search strategies may need to be adapted in reviews designed to establish whether an association exists between an intervention and a suspected adverse effect, to assess the frequency of a known adverse effect, or to review the overall safety profile of an intervention. Depending on the question to be addressed, searches can also be restricted to specific adverse effects, as in the case of Wieland et al. [27–29], or conducted using a generic search filter for all adverse effects, as in the case of Golder et al. or Badgett et al. [24,25]. The results here indicate that creating a highly sensitive search strategy with an acceptable level of precision is difficult, irrespective of whether the focus is on a specific named adverse effect or a broad search for any (unspecified) potential adverse effects.
Two of the studies included in this review recommended search strategies for adverse effects using not only adverse effects terms, but also study design terms. Adverse effects terms alone, therefore, might not be sufficient to identify papers with information on adverse effects. A study by Derry et al. identified particular problems in using terms for adverse effects alone to create highly sensitive search strategies. They studied 107 trials that reported adverse effects data and assessed how many papers were indexed with relevant terms for adverse effects in MEDLINE and EMBASE and how many titles or abstracts contained “adverse effects” or related terms. They found that a combined search covering the two databases using both index and text-word terms for adverse effects would have retrieved only 82 of 107 (77%) trials. Other studies have also indicated the problems of searching on terms for adverse effects in the title and abstract [49–51]. One study found that, of the adverse effects literature from one database, 64% (of 3,040 studies) contained adverse effects terms in the title, whilst two more recent studies found that adverse effects were mentioned in only 53% (130/243) and 63% (328/521) of abstracts of journal articles [50,51].
It is often assumed that search strategies should contain terms for the intervention under investigation. While this is probably true for clinical trials, the situation is different for observational studies that are focused on identifying the etiology or multiple risk factors behind a particular adverse outcome (e.g., the risk factors for breast cancer). Here, Wieland et al. [27–29] found that not all studies of adverse effects contained terms for one of the suspected drugs (oral contraceptives) in the bibliographic details. However, other studies have indicated that searching on drug terms in the title might be an effective method for searching for adverse effects, identifying 99% of papers. It should be noted, however, that the study in question is now over 30 years old.
Some guidance on searching for adverse effects is currently available [30–37], although it is difficult to ascertain how evidence based this guidance is. Much of this guidance, however, has tended to emphasize the usefulness of subheadings (such as “adverse effects” or “drug toxicity”) in MEDLINE and EMBASE [30, 31, 33–37]. The results from Golder et al. and Badgett et al. [24,25] do suggest that subheadings are useful.
Although all three of the included studies aimed to create search filters that maximized sensitivity, the suggested terms and approaches may be adapted (with appropriate limits, such as specific adverse effects terms or subheadings) for searches for adverse effects in clinical settings where the busy clinician would prefer greater precision. However, the problems of poor indexing and inconsistent terminology will also be an issue for all searchers, whether they aim to maximize sensitivity or precision.
This systematic review has a number of limitations. First, the difficulty in searching electronic databases for methodology papers might mean that some relevant studies were missed. However, hand-searching, reference checking, and citation searching were included to overcome this limitation to some extent. Second, there was a potential for bias, particularly as the author of one of the included studies was also an author of this systematic review. To reduce any potential bias, data extraction and analysis were performed in duplicate, with the second reviewer working independently. Full consensus was reached between the two reviewers.
This review highlights the problems of achieving a balance between sensitivity and precision when searching for information on adverse effects and the lack of research in this area. Although high sensitivity can be achieved, this is likely to be associated with poor precision. Authors of systematic reviews may therefore need to create pragmatic search strategies for adverse effects to keep the results set manageable and, at the same time, need to supplement searches of electronic databases with other means of identifying papers such as checking references, contacting industry, and searching citations.
The limitations of the case studies identified by this review and the large number of other search strategies that have been proposed but not yet empirically tested [30–37] suggest that still further research is needed to develop clear evidence-based guidance as to the most efficient means of searching for information on adverse effects.
No conflicts of interest have been declared.
This research was undertaken by Su Golder as part of a Medical Research Council (MRC) fellowship. The views expressed in this presentation are those of the authors and not necessarily those of the MRC.
We thank Jane Burch at the Centre for Reviews and Dissemination (CRD) for initial screening of the Endnote Library and Lindsey Myers at CRD for checking the search strategies.
A supplemental appendix is available with the online version of this journal.