Venous thromboembolism (VTE) is a common preventable cause of mortality in hospitalized medical patients. Despite rigorous randomized trials generating strong recommendations for anticoagulant use to prevent VTE, nearly 40% of medical patients receive inappropriate thromboprophylaxis. Knowledge-translation strategies are needed to bridge this gap.
We conducted a 16-week pilot cluster randomized controlled trial (RCT) to determine the proportion of medical patients who were appropriately managed for thromboprophylaxis (according to the American College of Chest Physicians guidelines) within 24 hours of admission, through the use of a multicomponent knowledge-translation intervention. Our primary goal was to determine the feasibility of conducting this study on a larger scale. The intervention comprised clinician education, a paper-based VTE risk assessment algorithm, preprinted physicians’ orders, and audit and feedback sessions. Medical wards at six hospitals (representing clusters) in Ontario, Canada were included; three were randomized to the multicomponent intervention and three to usual care (i.e., no active strategies for thromboprophylaxis in place). Blinding was not used.
A total of 2,611 patients (1,154 in the intervention and 1,457 in the control group) were eligible and included in the analysis. This multicomponent intervention did not lead to a significant difference in appropriate VTE prophylaxis rates between intervention and control hospitals (appropriate management rate odds ratio = 0.80; 95% confidence interval: 0.50, 1.28; p = 0.36; intra-class correlation coefficient: 0.022), and thus was not considered feasible. Major barriers to effective knowledge translation were poor attendance by clinical staff at education and feedback sessions, difficulty locating preprinted orders, and lack of involvement by clinical and administrative leaders. We identified several factors that may increase uptake of a VTE prophylaxis strategy, including local champions, support from clinical and administrative leaders, mandatory use, and a simple, clinically relevant risk assessment tool.
Hospitals allocated to our multicomponent intervention did not have a higher rate of medical inpatients appropriately managed for thromboprophylaxis than did hospitals that were not allocated to this strategy.
Thromboprophylaxis; Medical patients; Anticoagulants; Venous thromboembolism; Cluster randomization; Standard orders
Scientific knowledge is constantly changing. The flow of new information requires frequent re-evaluation of available research results. Clinical practice guidelines (CPGs) are not exempt from this phenomenon and need to be kept up to date to maintain the validity of their recommendations. The objective of our review is to systematically identify, describe, and assess strategies for monitoring and updating CPGs.
Study design and setting
We conducted a systematic review of studies evaluating one or more methods of updating (with or without monitoring) CPGs or recommendations. We searched MEDLINE (PubMed) and The Cochrane Methodology Register (The Cochrane Library) from 1966 to June 2012. Additionally, we hand-searched reference lists of the included studies and the Guidelines International Network book of abstracts. If necessary, we contacted study authors to obtain additional information.
We included a total of eight studies. Four evaluated whether CPGs were out of date, three updated CPGs, and one continuously monitored and updated CPGs. The most thoroughly reported phase of the process was the identification of new evidence. In contrast to studies updating guidelines, studies evaluating whether CPGs were out of date applied restricted searches. Only one study compared a restricted versus an exhaustive search, suggesting that a restricted search is sufficient to assess recommendations’ validity. One study analyzed the survival time of CPGs and suggested that these should be reassessed every three years.
There is limited evidence about the optimal strategies for monitoring and updating clinical practice guidelines. A restricted search is likely to be sufficient to monitor new evidence and assess the need to update; however, more information is needed about the timing and type of search. Only the exhaustive search strategy has been assessed for updating CPGs. The development and evaluation of more efficient strategies are needed to improve the timeliness and reduce the burden of maintaining the validity of CPGs.
Clinical practice guidelines; Diffusion of innovation; Evidence-based medicine; Information storage and retrieval; Methodology; Updating; Implementation science; Dissemination and implementation; Knowledge translation
When searching for renal literature, nephrologists must choose between several different bibliographic databases. We compared the availability of renal clinical studies in six major bibliographic databases.
We gathered 151 renal systematic reviews, which collectively contained 2,195 unique citations referencing primary studies in the form of journal articles, meeting articles, or meeting abstracts published between 1963 and 2008. We searched for each citation in three subscription-free bibliographic databases (PubMed, Google Scholar and Scirus) and three subscription-based databases (EMBASE, Ovid-MEDLINE and ISI Web of Knowledge). For the subscription-free databases, we determined which full-text journal articles were available free of charge via links to the article source.
The proportion of journal articles contained within each of the six databases ranged from 96 to 97%; results were similar for meeting articles. Availability of meeting abstracts was poor, ranging from 0 to 37% (P < 0.01) with ISI Web of Knowledge containing the largest proportion [37%, 95% confidence interval (95% CI) 32–43%]. Among the subscription-free databases, free access to full-text articles was highest in Google Scholar (38% free, 95% CI 36–41%), and was only marginally higher (39%) when all subscription-free databases were searched. After 2000, free access to full-text articles increased to 49%.
Over 99% of renal clinical journal articles are available in at least one major bibliographic database. Subscription-free databases provide free full-text access to almost half of the articles published after the year 2000, which may be of particular interest to clinicians in settings with limited access to subscription-based resources.
bibliographic databases; content coverage; evidence-based medicine; information storage and retrieval; literature searching; renal informatics
Studies published in general and specialty medical journals have the potential to improve emergency medicine (EM) practice, but there can be delayed awareness of this evidence because emergency physicians (EPs) are unlikely to read most of these journals. Also, not all published studies are intended for or ready for clinical practice application. The authors developed “Best Evidence in Emergency Medicine” (BEEM) to ameliorate these problems by searching for, identifying, appraising, and translating potentially practice-changing studies for EPs. An initial step in the BEEM process is the BEEM rater scale, a novel tool for EPs to collectively evaluate the relative clinical relevance of EM-related studies found in more than 120 journals. The BEEM rater process was designed to serve as a clinical relevance filter to identify those studies with the greatest potential to affect EM practice. Therefore, only those studies identified by BEEM raters as having the highest clinical relevance are selected for the subsequent critical appraisal process and, if found methodologically sound, are promoted as the best evidence in EM.
The primary objective was to measure inter-rater reliability (IRR) of the BEEM rater scale. Secondary objectives were to determine the minimum number of EP raters needed for the BEEM rater scale to achieve acceptable reliability and to compare performance of the scale against a previously published evidence rating system, the McMaster Online Rating of Evidence (MORE), in an EP population.
The authors electronically distributed the title, conclusion, and a PubMed link for 23 recently published studies related to EM to a volunteer group of 134 EPs. The volunteers answered two demographic questions and rated the articles using one of two randomly assigned seven-point Likert scales, the BEEM rater scale (n = 68) or the MORE scale (n = 66), over two separate administrations. The IRR of each scale was measured using generalizability theory.
The IRR of the BEEM rater scale ranged from 0.90 (95% confidence interval [CI] = 0.86 to 0.93) to 0.92 (95% CI = 0.89 to 0.94) across administrations. Decision studies showed a minimum of 12 raters is required for acceptable reliability of the BEEM rater scale. The IRR of the MORE scale was 0.82 to 0.84.
The BEEM rater scale is a highly reliable, single-question tool with which a small number of EPs can collectively rate the relative clinical relevance to EM of recently published studies from a variety of medical journals. It compares favorably with the MORE system because it achieves a high IRR despite requiring raters to read only each article’s title and conclusion.
Clinical Queries filters were developed to improve the retrieval of high-quality studies in searches on clinical matters. The study objective was to determine the yield of relevant citations and physician satisfaction while searching for diagnostic and treatment studies using the Clinical Queries page of PubMed compared with searching PubMed without these filters.
Materials and methods
Forty practicing physicians, presented with standardized treatment and diagnosis questions and one question of their choosing, entered search terms which were processed in a random, blinded fashion through PubMed alone and PubMed Clinical Queries. Participants rated search retrievals for applicability to the question at hand and satisfaction.
For treatment, the primary outcome of retrieval of relevant articles was not significantly different between the groups, but a higher proportion of articles from the Clinical Queries searches met methodologic criteria (p=0.049), and more articles were published in core internal medicine journals (p=0.056). For diagnosis, the filtered results returned more relevant articles (p=0.031) and fewer irrelevant articles (overall retrieval less, p=0.023); participants needed to screen fewer articles before arriving at the first relevant citation (p<0.05). Relevance was also influenced by content terms used by participants in searching. Participants varied greatly in their search performance.
Clinical Queries filtered searches returned more high-quality studies, though the retrieval of relevant articles differed significantly between the groups only for diagnosis questions.
Retrieving clinically important research studies from Medline is a challenging task for physicians. Methodological search filters can improve search retrieval.
Health information science; knowledge translation; information storage and retrieval; PubMed; search engine; databases as topic; medical informatics; health; evidence-based medicine; information retrieval; informatics education; library science
This project engages patients and physicians in the development of Decision Boxes, short clinical topic summaries covering medical questions that have no single best answer. Decision Boxes aim to prepare the clinician to communicate the risks and benefits of the available options to the patient so they can make an informed decision together.
Seven researchers (including four practicing family physicians) selected 10 clinical topics relevant to primary care practice through a Delphi survey. We then developed two one-page prototypes on two of these topics: prostate cancer screening with the prostate-specific antigen test, and prenatal screening for trisomy 21 with the serum integrated test. We presented the prototypes to purposeful samples of family physicians distributed in two focus groups, and patients distributed in four focus groups. We used the User Experience Honeycomb to explore barriers and facilitators to the communication design used in Decision Boxes. All discussions were transcribed, and three researchers proceeded to thematic content analysis of the transcriptions. The coding scheme was first developed from the Honeycomb’s seven themes (valuable, usable, credible, useful, desirable, accessible, and findable), and included new themes suggested by the data. Prototypes were modified in light of our findings.
Three rounds were necessary for a majority of researchers to select 10 clinical topics. Fifteen physicians and 33 patients participated in the focus groups. Following the analyses, three sections were added to the Decision Boxes: introduction, patient counseling, and references. The information was spread over two pages to try to make the Decision Boxes less busy and improve users’ first impression. To try to improve credibility, we gave more visibility to the research institutions involved in development. A statement on the boxes’ purpose and a flow chart representing the shared decision-making process were added with the intent of clarifying the tool’s purpose. Information about the risks and benefits according to risk levels was added to the Decision Boxes, to try to ease the adaptation of the information to individual patients.
Results will guide the development of the eight remaining Decision Boxes. A future study will evaluate the effect of Decision Boxes on the integration of evidence-based and shared decision making principles in clinical practice.
Evidence-based medicine; User experience; Risk communication; Usability; Patient-centered care; Counselling; Clinical topic summary; Decision support; Knowledge translation; Communication design
It is unknown whether computer-generated, patient-tailored feedback leads to improvements in glycemic control in people with type 2 diabetes.
RESEARCH DESIGN AND METHODS
We recruited people with type 2 diabetes aged ≥40 years with a glycated hemoglobin (A1C) ≥7%, living in Hamilton, Canada, who were enrolled in a community-based program (Diabetes Hamilton) that provided regular evidence-based information and listings of community resources designed to facilitate diabetes self-management. After completing a questionnaire, participants were randomly allocated to either receive or not receive periodic computer-generated, evidence-based feedback on the basis of their questionnaire responses and designed to facilitate improved glycemic control and diabetes self-management. The primary outcome was a change in A1C after 1 year.
A total of 465 participants (50% women, mean age 62 years, and mean A1C 7.83%) were randomly assigned, and 12-month A1C values were available in 96% of all participants, at which time the A1C level had decreased by an absolute amount of 0.24% and 0.15% in the intervention and control groups, respectively. The difference in A1C reduction for the intervention versus control group was 0.09% (95% CI −0.08 to 0.26; P = 0.3). No between-group differences in measures of quality of life, diabetes self-management behaviors, or clinical outcomes were observed.
Providing computer-generated tailored feedback to registrants of a generic, community-based program that supports diabetes self-management does not lead to lower A1C levels or a better quality of life than participation in the community-based program (augmented by periodic A1C testing) alone.
Physicians practicing in ambulatory care are adopting electronic health record (EHR) systems. Governments promote this adoption with financial incentives, some hinged on improvements in care. These systems can improve care but most demonstrations of successful systems come from a few highly computerized academic environments. Those findings may not be generalizable to typical ambulatory settings, where evidence of success is largely anecdotal, with little or no use of rigorous methods. The purpose of our pilot study was to evaluate the impact of a diabetes specific chronic disease management system (CDMS) on recording of information pertinent to guideline-concordant diabetes care and to plan for larger, more conclusive studies.
Using a before–after study design, we analyzed the medical records of approximately 10 patients from each of 3 diabetes specialists (total = 31) who were seen both before and after the implementation of a CDMS. We used a checklist of key clinical data to compare the completeness of information recorded in the CDMS record to both the clinical note sent to the primary care physician based on that same encounter and the clinical note sent to the primary care physician based on the visit that occurred prior to the implementation of the CDMS, accounting for provider effects with Generalized Estimating Equations.
The CDMS record outperformed dictated notes created for the same encounter by a substantial margin. Only 10.1% (95% CI, 7.7% to 12.3%) of the clinically important data were missing from the CDMS chart, compared to 25.8% (95% CI, 20.5% to 31.1%) from the clinical note prepared at the time (p < 0.001) and 26.3% (95% CI, 19.5% to 33.0%) from the clinical note prepared before the CDMS was implemented (p < 0.001). There was no significant difference between dictated notes created for the CDMS-assisted encounter and those created for usual care encounters (absolute mean difference, 0.8%; 95% CI, −8.5% to 6.8%).
The CDMS chart captured information important for the management of diabetes more often than dictated notes created with or without its use but we were unable to detect a difference in completeness between notes dictated in CDMS-associated and usual-care encounters. Our sample of patients and providers was small, and completeness of records may not reflect quality of care.
Tools to enhance physician searches of Medline and other bibliographic databases have potential to improve the application of new knowledge in patient care. This is particularly true for articles about glomerular disease, which are published across multiple disciplines and are often difficult to track down. Our objective was to develop and test search filters for PubMed, Ovid Medline, and Embase that allow physicians to search within a subset of the database to retrieve articles relevant to glomerular disease.
We used a diagnostic test assessment framework with development and validation phases. We read a total of 22,992 full text articles for relevance and assigned them to the development or validation set to define the reference standard. We then used combinations of search terms to develop 997,298 unique glomerular disease filters. Outcome measures for each filter included sensitivity, specificity, precision, and accuracy. We selected optimal sensitive and specific search filters for each database and applied them to the validation set to test performance.
High performance filters achieved at least 93.8% sensitivity and specificity in the development set. Filters optimized for sensitivity reached at least 96.7% sensitivity and filters optimized for specificity reached at least 98.4% specificity. Performance of these filters was consistent in the validation set and similar among all three databases.
PubMed, Ovid Medline, and Embase can be filtered for articles relevant to glomerular disease in a reliable manner. These filters can now be used to facilitate physician searching.
Glomerular diseases; Glomerulopathy; Medical Informatics; Information retrieval; Medline; Embase
This study evaluated search strategies for finding high-quality studies on treatment and systematic reviews in PsycINFO.
Study design and setting:
Sixty-four journals were hand searched at McMaster University. Methodologic criteria were applied to clinically relevant articles to identify “pass” and “fail” articles. A total of 4,985 candidate terms were compiled, yielding 7,463 term combinations for therapy articles and 5,246 for reviews. Candidate search strategy results were compared with the hand searches. The proposed strategies served as “diagnostic tests” for sound studies; the hand searches were the “gold standard.” Sensitivity, specificity, precision, and accuracy were calculated.
233 (32.5%) of 716 treatment articles met criteria for scientific merit, and 58 (11.5%) of 506 review articles met criteria for systematic reviews. For treatment studies, combined terms had a peak sensitivity of 97.9% (specificity 52.2%). Maximum specificity was 97.7% (sensitivity 51.5%). Sensitivity and specificity were each 79% when optimizing both while minimizing their difference. For review articles, combined terms had a peak sensitivity of 81.0% (specificity 54.4%). Maximum specificity was 98.1% (sensitivity 51.7%). Sensitivity and specificity were each 65% when optimizing both while minimizing their difference.
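The four performance measures used above all follow from a 2×2 table that crosses filter retrieval against the hand-search gold standard. A minimal sketch, where only the 233 pass / 483 fail split of the 716 treatment articles comes from the abstract and the individual cell counts are hypothetical illustrations:

```python
# Search-filter performance against a hand-search gold standard.
# tp: sound articles retrieved; fp: non-sound articles retrieved;
# fn: sound articles missed; tn: non-sound articles not retrieved.
def filter_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),                 # share of sound articles retrieved
        "specificity": tn / (tn + fp),                 # share of non-sound articles excluded
        "precision":   tp / (tp + fp),                 # share of retrieved articles that are sound
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical cell counts consistent with 233 sound and 483 non-sound articles.
m = filter_metrics(tp=228, fp=231, fn=5, tn=252)
# With these assumed counts: sensitivity ~0.979, specificity ~0.522
```

This illustrates how a sensitivity-maximizing strategy trades away specificity: retrieving nearly every sound article drags in a large number of non-sound ones.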
Empirically derived search strategies can achieve high sensitivity and specificity for retrieving sound treatment studies and review articles from PsycINFO.
Databases; bibliographic; Psychological literature; Information retrieval
Physicians face challenges when searching PubMed for research evidence, and they may miss relevant articles while retrieving too many nonrelevant articles. We investigated whether the use of search filters in PubMed improves searching by physicians.
We asked a random sample of Canadian nephrologists to answer unique clinical questions derived from 100 systematic reviews of renal therapy. Physicians provided the search terms that they would type into PubMed to locate articles to answer these questions. We entered the physician-provided search terms into PubMed and applied two types of search filters alone or in combination: a methods-based filter designed to identify high-quality studies about treatment (clinical queries “therapy”) and a topic-based filter designed to identify studies with renal content. We evaluated the comprehensiveness (proportion of relevant articles found) and efficiency (ratio of relevant to nonrelevant articles) of the filtered and nonfiltered searches. Primary studies included in the systematic reviews served as the reference standard for relevant articles.
The average physician-provided search terms retrieved 46% of the relevant articles, and only 6% of the retrieved articles were relevant (a ratio of relevant to nonrelevant articles of 1:16). The use of both filters together produced a marked improvement in efficiency, resulting in a ratio of relevant to nonrelevant articles of 1:5 (16 percentage point improvement; 99% confidence interval 9% to 22%; p < 0.003), with no substantive change in comprehensiveness (44% of relevant articles found; p = 0.55).
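The two outcome measures can be made concrete with a short sketch; the counts below are hypothetical, chosen only to reproduce a 1:16 relevant-to-nonrelevant ratio like the unfiltered result reported above:

```python
# Comprehensiveness = share of all relevant articles a search retrieves (recall).
# Efficiency is expressed as a 1:N ratio of relevant to nonrelevant retrievals.
def search_performance(relevant_retrieved, relevant_total, nonrelevant_retrieved):
    comprehensiveness = relevant_retrieved / relevant_total
    nonrelevant_per_relevant = nonrelevant_retrieved / relevant_retrieved
    return comprehensiveness, nonrelevant_per_relevant

# Hypothetical counts: 46 of 100 relevant articles found, 736 nonrelevant hits.
comp, n = search_performance(relevant_retrieved=46,
                             relevant_total=100,
                             nonrelevant_retrieved=736)
# comp = 0.46 (46% of relevant articles found); n = 16.0, i.e. a 1:16 ratio
```

Framed this way, the filters' contribution is to shrink N (screening burden per relevant article) while leaving comprehensiveness essentially unchanged.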
The use of PubMed search filters improves the efficiency of physician searches. Improved search performance may enhance the transfer of research into practice and improve patient care.
Journal impact factor (JIF) is often used as a measure of journal quality. A retrospective cohort study determined the ability of clinical article and journal characteristics, including appraisal measures collected at the time of publication, to predict subsequent JIFs.
Clinical research articles that passed methods quality criteria were included. Each article was rated for relevance and newsworthiness by 3 to 24 physicians from a panel of more than 4,000 practicing clinicians. The 1,267 articles (from 103 journals) were divided 60:40 into a derivation set (760 articles) and a validation set (507 articles), representing 99 and 88 journals, respectively. A multiple regression model was produced to determine the association of 10 journal and article measures with the 2007 JIF.
Four of the 10 measures were significant in the regression model: number of authors, number of databases indexing the journal, proportion of articles passing methods criteria, and mean clinical newsworthiness scores. With the number of disciplines rating the article, the 5 variables accounted for 61% of the variation in JIF (R2 = 0.607, 95% CI 0.444 to 0.706, P<0.001).
For the clinical literature, measures of scientific quality and clinical newsworthiness available at the time of publication can explain about 60% of the variation in JIFs.
Background: Clinical end users of EMBASE have a difficult time retrieving articles that are both scientifically sound and directly relevant to clinical practice. Search filters have been developed to assist end users in increasing the success of their searches. Many filters have been developed for the therapy and review literature in MEDLINE, but little work has been done for EMBASE, and none for studies of prognosis. The objective of this study was to determine how well various methodologic textwords, index terms, and their Boolean combinations retrieve methodologically sound literature on the prognosis of health disorders in EMBASE.
Methods: An analytic survey was conducted, comparing hand searches of 55 journals with retrievals from EMBASE for 4,843 candidate search terms and 8,919 combinations. All articles were rated using purpose and quality indicators, and clinically relevant prognostic articles were categorized as “pass” or “fail” according to explicit criteria for scientific merit. Candidate search strategies were run in EMBASE, the retrievals being compared with the hand search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated.
Results: Of the 1,064 articles about prognosis, 148 (13.9%) met basic criteria for scientific merit. Combinations of search terms reached peak sensitivities of 98.7% with specificity at 50.6%. Compared with best single terms, best multiple terms increased sensitivity for sound studies by 12.2% (absolute increase), while decreasing specificity (absolute decrease 5.1%) when sensitivity was maximized. Combinations of search terms reached peak specificities of 93.4% with sensitivity at 50.7%. Compared with best single terms, best multiple terms increased specificity for sound studies by 7.1% (absolute increase), while decreasing sensitivity (absolute decrease 8.8%) when specificity was maximized.
Conclusion: Empirically derived search strategies combining indexing terms and textwords can achieve high sensitivity or specificity for retrieving sound prognostic studies from EMBASE.
Computerized clinical decision support systems (CCDSSs) are claimed to improve processes and outcomes of primary preventive care (PPC), but their effects, safety, and acceptance must be confirmed. We updated our previous systematic reviews of CCDSSs and integrated a knowledge translation approach in the process. The objective was to review randomized controlled trials (RCTs) assessing the effects of CCDSSs for PPC on process of care, patient outcomes, harms, and costs.
We conducted a decision-maker-researcher partnership systematic review. We searched MEDLINE, EMBASE, Ovid's EBM Reviews Database, Inspec, and other databases, as well as reference lists through January 2010. We contacted authors to confirm data or provide additional information. We included RCTs that assessed the effect of a CCDSS for PPC on process of care and patient outcomes compared to care provided without a CCDSS. A study was considered to have a positive effect (i.e., CCDSS showed improvement) if at least 50% of the relevant study outcomes were statistically significantly positive.
We added 17 new RCTs to our 2005 review for a total of 41 studies. RCT quality improved over time. CCDSSs improved process of care in 25 of 40 (63%) RCTs. Cumulative scientifically strong evidence supports the effectiveness of CCDSSs for screening and management of dyslipidaemia in primary care. There is mixed evidence for effectiveness in screening for cancer and mental health conditions, multiple preventive care activities, vaccination, and other preventive care interventions. Fourteen (34%) trials assessed patient outcomes, and four (29%) reported improvements with the CCDSS. Most trials were not powered to evaluate patient-important outcomes. CCDSS costs and adverse events were reported in only six (15%) and two (5%) trials, respectively. Information on study duration was often missing, limiting our ability to assess sustainability of CCDSS effects.
Evidence supports the effectiveness of CCDSSs for screening and treatment of dyslipidaemia in primary care with less consistent evidence for CCDSSs used in screening for cancer and mental health-related conditions, vaccinations, and other preventive care. CCDSS effects on patient outcomes, safety, costs of care, and provider satisfaction remain poorly supported.
Health-system policy makers need timely access to synthesised research evidence to inform the policy-making process. No efforts to address this need have been evaluated using an experimental quantitative design. We developed an evidence service that draws inputs from Health Systems Evidence, which is a database of policy-relevant systematic reviews. The reviews have been (a) categorised by topic and type of review; (b) coded by the last year searches for studies were conducted and by the countries in which included studies were conducted; (c) rated for quality; and (d) linked to available user-friendly summaries, scientific abstracts, and full-text reports. Our goal is to evaluate whether a "full-serve" evidence service increases the use of synthesized research evidence by policy analysts and advisors in the Ontario Ministry of Health and Long-Term Care (MOHLTC) as compared to a "self-serve" evidence service.
We will conduct a two-arm randomized controlled trial (RCT), along with a follow-up qualitative process study, in order to explore the findings in greater depth. For the RCT, all policy analysts and policy advisors (n = 168) in a single division of the MOHLTC will be invited to participate. Using a stratified randomized design, participants will be randomized to receive either the "full-serve" evidence service (database access, monthly e-mail alerts, and full-text article availability) or the "self-serve" evidence service (database access only). The trial duration will be ten months (two-month baseline period, six-month intervention period, and two-month crossover period). The primary outcome will be the mean number of site visits/month/user between baseline and the end of the intervention period. The secondary outcome will be participants' intention to use research evidence. For the qualitative study, 15 participants from each trial arm (n = 30) will be purposively sampled. One-on-one semi-structured interviews will be conducted by telephone to explore their views about and experiences with the evidence service they received, how helpful it was in their work, why it was helpful (or not), what aspects were most and least helpful and why, and recommendations for next steps.
To our knowledge, this will be the first RCT to evaluate the effects of an evidence service specifically designed to support health-system policy makers in finding and using research evidence.
To support the use of research evidence by community-based organizations (CBOs) we have developed 'Synthesized HIV/AIDS Research Evidence' (SHARE), which is an evidence service for those working in the HIV sector. SHARE consists of several components: an online searchable database of HIV-relevant systematic reviews (retrievable based on a taxonomy of topics related to HIV/AIDS and open text search); periodic email updates; access to user-friendly summaries; and peer relevance assessments. Our objective is to evaluate whether this 'full serve' evidence service increases the use of research evidence by CBOs as compared to a 'self-serve' evidence service.
We will conduct a two-arm randomized controlled trial (RCT), along with a follow-up qualitative process study to explore the findings in greater depth. All CBOs affiliated with the Canadian AIDS Society (n = 120) will be invited to participate and, using a simple randomized design, will be randomized to receive either the 'full-serve' version of SHARE or the 'self-serve' version (a listing of relevant systematic reviews with links to records on PubMed and worksheets that help CBOs find and use research evidence). All management and staff from each organization will be provided access to the version of SHARE that their organization is allocated to. The trial duration will be 10 months (two-month baseline period, six-month intervention period, and two-month crossover period). The primary outcome measure will be the mean number of logins/month/organization (averaged across the number of users from each organization) between baseline and the end of the intervention period. The secondary outcome will be intention to use research evidence, as measured by a survey administered to one key decision maker from each organization. For the qualitative study, one key organizational decision maker from 15 organizations in each trial arm (n = 30) will be purposively sampled. One-on-one semi-structured interviews will be conducted by telephone to explore their views about and experiences with the evidence service they received, how helpful it was in their work, why it was helpful (or not), what aspects were most and least helpful and why, and recommendations for next steps.
To our knowledge, this will be the first RCT to evaluate the effects of an evidence service specifically designed to support CBOs in finding and using research evidence.
This study identified the journals with the highest yield of clinical obesity research articles and surveyed the scatter of such studies across journals. The study exemplifies an approach to establishing a journal collection that is likely to contain most new knowledge about a field.
Design and methods
All original studies cited in 40 systematic reviews about obesity topics (“included studies”) were compiled, and the titles of the journals in which they were published were extracted. The journals were ranked by the number of included studies, and the highest-yielding journals for clinical obesity and the scatter across journal titles were determined. A subset of these journals was created in MEDLINE (PubMed) to test search recall and precision for high-quality studies of obesity treatment (i.e., articles that pass predetermined methodology criteria, including random allocation of participants to comparison groups, assessment of clinical outcomes, and at least 80% follow-up).
Articles in 252 journals were cited in the systematic reviews. The three highest-yielding journals specialized in obesity, but they published only 19.2% of the research, leaving 80.8% scattered across 249 non-obesity journals. The MEDLINE journal subset comprised 241 journals (11 journals were not indexed in MEDLINE) and included 82% of the clinical obesity research articles retrieved by a search for high-quality treatment studies (“recall” of 82%); 11% of the articles retrieved were about clinical obesity care (“precision” of 11%), compared with a precision of 6% for obesity treatment studies in the full MEDLINE database.
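Recall and precision as used here follow the standard set-based definitions. A minimal sketch in Python, with invented article IDs rather than the study's data:

```python
def recall_precision(retrieved, relevant):
    """Recall = fraction of relevant items that were retrieved;
    precision = fraction of retrieved items that are relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    return len(hits) / len(relevant), len(hits) / len(retrieved)

# Hypothetical example: 50 relevant articles exist; a search returns
# 400 articles, 41 of which are relevant.
retrieved = range(400)                               # IDs returned by the search
relevant = list(range(41)) + list(range(900, 909))   # 50 relevant IDs
r, p = recall_precision(retrieved, relevant)         # recall 0.82, precision ~0.10
```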
Obesity journals captured only a small proportion of the literature on clinical obesity care. Those wishing to keep up in this field will need to develop more inclusive strategies than reading these specialty journals. A journal subset based on these findings may be useful when searching large electronic databases to increase search precision.
obesity; health informatics; clinical trials
Applying evidence is one of the most challenging steps of evidence-based clinical practice. Healthcare professionals have difficulty interpreting evidence and translating it for patients. Decision boxes are summaries of the most important benefits and harms of diagnostic, therapeutic, and preventive health interventions, provided to healthcare professionals before they meet the patient. Our hypothesis is that Decision boxes will prepare clinicians to help patients make informed, value-based decisions. By acting as primers, the boxes will enhance the application of evidence-based practices and increase shared decision making during the clinical encounter. The objectives of this study are to provide a framework for developing Decision boxes and testing their value to users.
We will begin by developing Decision box prototypes for 10 clinical conditions or topics based on a review of the research on risk communication. We will present two prototypes to purposeful samples of 16 family physicians distributed in two focus groups, and 32 patients distributed in four focus groups. We will use the User Experience Model framework to explore users' perceptions of the content and format of each prototype. All discussions will be transcribed, and two researchers will independently perform a hybrid deductive/inductive thematic qualitative analysis of the data. The coding scheme will be developed a priori from the User Experience Model's seven themes (valuable, usable, credible, useful, desirable, accessible and findable), and will include new themes suggested by the data (inductive analysis). Key findings will be triangulated using additional publications on the design of tools to improve risk communication. All 10 Decision boxes will be modified in light of our findings.
This study will produce a robust framework for developing and testing Decision boxes that will serve healthcare professionals and patients alike. It is the first step in the development and implementation of a new tool that should facilitate decision making in clinical practice.
Most methodologic search filters developed for use in large electronic databases such as MEDLINE have low precision. One method that has been proposed but not tested for improving precision is NOTing out irrelevant content.
To determine if search filter precision can be improved by NOTing out the text words and index terms assigned to those articles that are retrieved but are off-target.
NOTing out unique terms in off-target articles and testing search filter performance in the Clinical Hedges Database.
Main Outcome Measures:
Sensitivity, specificity, precision and number needed to read (NNR).
For all purpose categories except treatment (i.e., diagnosis, prognosis, and etiology) and for all databases (MEDLINE, EMBASE, CINAHL, and PsycINFO), constructing search filters that NOTed out irrelevant content resulted in substantive improvements in NNR (over four-fold for some purpose categories and databases).
Search filter precision can be improved by NOTing out irrelevant content.
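The NOTing-out idea and the outcome measures can be sketched with invented documents and filter terms; sensitivity, specificity, and precision follow their usual 2×2 definitions, and NNR is the reciprocal of precision:

```python
def evaluate(filter_fn, docs):
    """docs: list of (text, is_relevant) pairs. Returns sensitivity,
    specificity, precision, and number needed to read (1/precision)."""
    tp = fp = tn = fn = 0
    for text, rel in docs:
        hit = filter_fn(text)
        if hit and rel:
            tp += 1
        elif hit:
            fp += 1
        elif rel:
            fn += 1
        else:
            tn += 1
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    return sens, spec, prec, 1 / prec

# Hypothetical base filter, and a version that NOTs out a term found
# only in off-target (retrieved-but-irrelevant) articles.
base = lambda t: "random" in t
notted = lambda t: "random" in t and "survey" not in t

docs = [("random allocation trial", True),
        ("random digit dialing survey", False),
        ("random sample survey of attitudes", False),
        ("cohort study", False)]
```

On this toy corpus, the base filter reads three articles per relevant one (NNR 3), while the NOTed filter keeps the same sensitivity at NNR 1.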
Rather than searching the entire MEDLINE database, clinicians can perform searches on a filtered set of articles where relevant information is more likely to be found. Members of our team previously developed two types of MEDLINE filters. The 'methods' filters help identify clinical research of high methodological merit. The 'content' filters help identify articles in the discipline of renal medicine. We will now test the utility of these filters for physician MEDLINE searching.
When a physician searches MEDLINE, we hypothesize the use of filters will increase the number of relevant articles retrieved (increase 'recall,' also called sensitivity) and decrease the number of non-relevant articles retrieved (increase 'precision,' also called positive predictive value), compared to the performance of a physician's search unaided by filters.
We will survey a random sample of 100 nephrologists in Canada to obtain the MEDLINE search that they would first perform themselves for a focused clinical question. Each question we provide to a nephrologist will be based on the topic of a recently published, well-conducted systematic review. We will examine the performance of a physician's unaided MEDLINE search. We will then apply a total of eight filter combinations to the search (filters used in isolation or in combination). We will calculate the recall and precision of each search. The filter combinations that most improve on unaided physician searches will be identified and characterized.
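As an illustration of the planned comparison (invented citations and filter rules, not the study's actual filters), filters can be modeled as predicates layered on top of a physician's search results:

```python
# Hypothetical filters: a 'methods' filter for methodological merit and
# a 'content' filter for renal medicine; combinations are conjunctions.
methods_filter = lambda c: c["randomized"]
content_filter = lambda c: "renal" in c["topic"]

def apply_filters(results, *filters):
    """Keep only the citations that pass every supplied filter."""
    return [c for c in results if all(f(c) for f in filters)]

# A physician's unaided search might return a mix of citations:
results = [
    {"randomized": True,  "topic": "renal transplant outcomes"},
    {"randomized": True,  "topic": "cardiac surgery outcomes"},
    {"randomized": False, "topic": "renal diet survey"},
]

both = apply_filters(results, methods_filter, content_filter)
```

Recall and precision for each filter combination would then be computed against a gold standard of relevant articles.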
If these filters improve search performance, physicians will be able to search MEDLINE for renal evidence more effectively, in less time, and with less frustration. Additionally, our methodology can be used as a proof of concept for the evaluation of search filters in other disciplines.
The study of implementing research findings into practice is rapidly growing and has acquired many competing names (e.g., dissemination, uptake, utilization, translation) and contributing disciplines. The use of multiple terms across disciplines poses barriers to communication and progress in applying research findings. We sought to establish an inventory of terms describing this field and how often authors used them in a collection of health literature published in 2006.
We refer to this field as knowledge translation (KT). Terms describing aspects of KT and their definitions were collected from the literature, the internet, reports, textbooks, and contact with experts. We compiled a database of KT and other articles by reading 12 healthcare journals representing multiple disciplines. All articles published in these journals in 2006 were categorized as KT or not. The KT articles (all KT) were further categorized, where possible, according to whether they described KT projects or implementations (KT application articles) or presented the theoretical basis, models, tools, methods, or techniques of KT (KT theory articles). Accuracy was checked using duplicate reading. Custom-designed software determined how often KT terms were used in the titles and abstracts of articles categorized as KT.
A total of 2,603 articles were assessed, and 581 were identified as KT articles. Of these, 201 described KT applications, and 153 included KT theory. Of the 100 KT terms collected, 46 were used by the authors in the titles or abstracts of articles categorized as being KT. For all 581 KT articles, eight terms or term variations used by authors were highly discriminating for separating KT and non-KT articles (p < 0.001): implementation, adoption, quality improvement, dissemination, complex intervention (with multiple endings), implementation (within three words of) research, and complex intervention. More KT terms were associated with KT application articles (n = 13) and KT theory articles (n = 18).
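How strongly a term discriminates KT from non-KT articles can be tested with a 2×2 chi-square statistic; a self-contained sketch with invented counts (for one degree of freedom, the p-value reduces to a complementary error function):

```python
from math import erfc, sqrt

def chi2_term_test(a, b, c, d):
    """2x2 chi-square test of whether a term discriminates KT from non-KT.
    a: KT articles containing the term, b: KT without it,
    c: non-KT with it, d: non-KT without it.
    Returns (chi-square statistic, p-value) for 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2))  # chi-square survival function for df = 1
    return chi2, p

# Hypothetical counts: 'implementation' appearing in 120 of 581 KT
# articles versus 40 of 2,022 non-KT articles.
chi2, p = chi2_term_test(120, 461, 40, 1982)
```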
We collected 100 terms describing KT research. Authors used 46 of them in titles and abstracts of KT articles. Of these, approximately half discriminated between KT and non-KT articles. Thus, the need for consolidation and consistent use of fewer terms related to KT research is evident.
Computerized clinical decision support systems are information technology-based systems designed to improve clinical decision-making. As with any healthcare intervention with claims to improve process of care or patient outcomes, decision support systems should be rigorously evaluated before widespread dissemination into clinical practice. Engaging healthcare providers and managers in the review process may facilitate knowledge translation and uptake. The objective of this research was to form a partnership of healthcare providers, managers, and researchers to review randomized controlled trials assessing the effects of computerized decision support for six clinical application areas: primary preventive care, therapeutic drug monitoring and dosing, drug prescribing, chronic disease management, diagnostic test ordering and interpretation, and acute care management; and to identify study characteristics that predict benefit.
The review was undertaken by the Health Information Research Unit, McMaster University, in partnership with Hamilton Health Sciences, the Hamilton, Niagara, Haldimand, and Brant Local Health Integration Network, and pertinent healthcare service teams. Following agreement on information needs and interests with decision-makers, our earlier systematic review was updated by searching Medline, EMBASE, EBM Review databases, and Inspec, and reviewing reference lists through 6 January 2010. Data extraction items were expanded according to input from decision-makers. Authors of primary studies were contacted to confirm data and to provide additional information. Eligible trials were organized according to clinical area of application. We included randomized controlled trials that evaluated the effect on practitioner performance or patient outcomes of patient care provided with a computerized clinical decision support system compared with patient care without such a system.
Data will be summarized using descriptive summary measures, including proportions for categorical variables and means for continuous variables. Univariable and multivariable logistic regression models will be used to investigate associations between outcomes of interest and study specific covariates. When reporting results from individual studies, we will cite the measures of association and p-values reported in the studies. If appropriate for groups of studies with similar features, we will conduct meta-analyses.
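When citing measures of association from individual trials, a common choice is the odds ratio with a Wald confidence interval computed on the log-odds scale; a sketch with invented counts:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a 95% Wald
    confidence interval on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical trial: improved care in 60/100 patients with decision
# support versus 45/100 without it.
or_, lo, hi = odds_ratio_ci(60, 40, 45, 55)
```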
A decision-maker-researcher partnership provides a model for systematic reviews that may foster knowledge translation and uptake.
The growing numbers of topically relevant biomedical publications readily available due to advances in document retrieval methods pose a challenge to clinicians practicing evidence-based medicine. It is increasingly time-consuming to acquire and critically appraise the available evidence. This problem could be addressed in part if methods were available to automatically recognize rigorous studies immediately applicable in a specific clinical situation. We approach the problem of recognizing studies containing usable clinical advice from retrieved topically relevant articles as a binary classification problem. The gold standard used in the development of PubMed clinical query filters forms the basis of our approach. We identify scientifically rigorous studies using supervised machine learning techniques (Naïve Bayes, support vector machine (SVM), and boosting) trained on high-level semantic features, and we combine these methods using an ensemble learning method (stacking). The performance of the learning methods is evaluated using precision, recall, and F1 score, in addition to area under the receiver operating characteristic (ROC) curve (AUC). Using a training set of 10,000 manually annotated MEDLINE citations and a test set of an additional 2,000 citations, we achieve 73.7% precision and 61.5% recall in identifying rigorous, clinically relevant studies with stacking over five feature-classifier combinations, and 82.5% precision and 84.3% recall in recognizing rigorous studies with a treatment focus using stacking over a word + metadata feature vector. Our results demonstrate that a high-quality gold standard and advanced classification methods can help clinicians acquire best evidence from the medical literature.
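One of the base learners mentioned, Naïve Bayes, can be sketched from scratch over bag-of-words counts (toy citations, not the MEDLINE gold standard; the full pipeline also uses SVMs, boosting, and stacking over richer features):

```python
from collections import Counter
from math import log

class NaiveBayes:
    """Multinomial Naive Bayes over word counts with Laplace smoothing."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        def score(c):
            # log prior + smoothed log likelihood of each word
            total = sum(self.word_counts[c].values()) + len(self.vocab)
            s = log(self.class_counts[c])
            for w in text.lower().split():
                s += log((self.word_counts[c][w] + 1) / total)
            return s
        return max(self.classes, key=score)

# Toy citations: methodologically rigorous versus not.
texts = ["randomized controlled trial double blind",
         "randomized trial of treatment allocation concealed",
         "case report of a single patient",
         "narrative review of expert opinion"]
labels = ["rigorous", "rigorous", "not", "not"]
clf = NaiveBayes().fit(texts, labels)
```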
Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies.
The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset.
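The sample-size logic can be sketched with the Wald approximation for a proportion's confidence interval (the sensitivity value below is illustrative, not the study's):

```python
from math import ceil

def n_for_ci_width(p, width, z=1.96):
    """Smallest n such that the Wald 95% CI for a proportion p has total
    width <= `width`, since width = 2 * z * sqrt(p * (1 - p) / n)."""
    return ceil((2 * z / width) ** 2 * p * (1 - p))

# Hypothetical lowest sensitivity of 0.93 with a target width W of 10%:
n = n_for_ci_width(0.93, 0.10)
```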
For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals was adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%).
The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach.
Objective To determine if citation counts at two years could be predicted for clinical articles that pass basic criteria for critical appraisal using data within three weeks of publication from external sources and an online article rating service.
Design Retrospective cohort study.
Setting Online rating service, Canada.
Participants 1274 articles from 105 journals published from January to June 2005, randomly divided into a 60:40 split to provide derivation and validation datasets.
Main outcome measures 20 article and journal features, including ratings of clinical relevance and newsworthiness, routinely collected by the McMaster online rating of evidence system, compared with citation counts at two years.
Results The derivation analysis showed that the regression equation accounted for 60% of the variation (R2=0.60, 95% confidence interval 0.538 to 0.629). This model applied to the validation dataset gave a similar prediction (R2=0.56, 0.476 to 0.596, shrinkage 0.04; shrinkage measures how well the derived equation matches data from the validation dataset). Cited articles in the top half and top third were predicted with 83% and 61% sensitivity and 72% and 82% specificity. Higher citations were predicted by indexing in numerous databases; number of authors; abstraction in synoptic journals; clinical relevance scores; number of cited references; and original, multicentred, and therapy articles from journals with a greater proportion of articles abstracted.
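The derivation/validation logic and the shrinkage statistic can be sketched with ordinary least squares on toy data (a single made-up predictor rather than the 20 article and journal features):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def r_squared(xs, ys, a, b):
    """Proportion of variance in ys explained by the line a + b*x."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical split: fit on the derivation set, then check how much
# R^2 drops on the validation set -- that drop is the shrinkage.
deriv_x, deriv_y = [1, 2, 3, 4, 5, 6], [2, 4, 5, 7, 9, 12]
valid_x, valid_y = [1, 3, 5, 7], [3, 6, 10, 16]
a, b = fit_line(deriv_x, deriv_y)
shrinkage = (r_squared(deriv_x, deriv_y, a, b)
             - r_squared(valid_x, valid_y, a, b))
```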
Conclusion Citation counts can be reliably predicted at two years using data within three weeks of publication.