Community participation is often restricted after stroke due to reduced confidence and outdoor mobility. Australian clinical guidelines recommend that specific evidence-based interventions, such as multiple escorted outdoor journeys, be delivered to target these restrictions. The aim of this study was to describe post-inpatient outdoor mobility and transport training delivered to stroke survivors in New South Wales, Australia, and whether therapy differed according to type, sector or location of service provider.
Using an observational retrospective cohort study design, 24 rehabilitation service providers were audited. Provider types included outpatient (n = 8), day therapy (n = 9), home-based rehabilitation (n = 5) and transitional aged care services (TAC, n = 2). Records of 15 stroke survivors who had received post-hospital rehabilitation were audited per service for wait time, duration, amount of therapy and outdoor-related therapy.
A total of 311 records were audited. Median wait time for post-hospital therapy was 13 days (IQR 5–35). Median duration of therapy was 68 days (IQR 35–109), consisting of 11 sessions (IQR 4–19). Overall, a median of one session (IQR 0–3) was conducted outdoors per person. Outdoor-related therapy was similar across service providers, except that TAC delivered an average of 5.4 more outdoor-related sessions (95% CI 4.4 to 6.4) and 3.5 more outings into public streets (95% CI 2.8 to 4.3) per person, compared to outpatient services.
The majority of service providers in the sample delivered little evidence-based outdoor mobility and travel training per stroke participant, as recommended in national stroke guidelines.
Australian New Zealand Clinical Trials Registry: ACTRN12611000554965.
Physical therapy; Occupational therapy; Physiotherapy; Knowledge translation; Walking
Those planning, managing and working in health systems worldwide routinely need to make decisions regarding strategies to improve health care and promote equity. Systematic reviews of different kinds can be of great help to these decision-makers, providing actionable evidence at every step in the decision-making process. Although there is growing recognition of the importance of systematic reviews both to inform policy decisions and to produce guidance for health systems, a number of important methodological and evidence uptake challenges remain, and better coordination of existing initiatives is needed. The Alliance for Health Policy and Systems Research, housed within the World Health Organization, convened an Advisory Group on Health Systems Research (HSR) Synthesis to bring together different stakeholders interested in HSR synthesis and its use in decision-making processes. We describe the rationale of the Advisory Group and the six areas of its work, and reflect on its role in advancing the field of HSR synthesis. We argue in favour of greater cross-institutional collaborations, as well as capacity strengthening in low- and middle-income countries, to advance the science and practice of health systems research synthesis. We advocate for the integration of quasi-experimental study designs in reviews of the effectiveness of health system interventions and reforms. The Advisory Group also recommends adopting priority-setting approaches for HSR synthesis and increasing the use of findings from systematic reviews in health policy and decision-making.
Evidence synthesis; Health systems research; Health policy; Systematic reviews; Decision-making
Implementation intervention effects can only be fully realised and understood if they are faithfully delivered. However, the evaluation of implementation intervention fidelity is not commonly undertaken. The IMPLEMENT intervention was designed to improve the management of low back pain by general medical practitioners. It consisted of a two-session interactive workshop, including didactic presentations and small group discussions led by trained facilitators. This study aimed to evaluate the fidelity of the IMPLEMENT intervention by assessing: (1) observed facilitator adherence to planned behaviour change techniques (BCTs); (2) comparison of observed and self-reported adherence to planned BCTs; and (3) variation across different facilitators and different BCTs.
The study compared planned with actual delivery, and observed with self-assessed delivery, of BCTs during the IMPLEMENT workshops.
Workshop sessions were audiorecorded and transcribed verbatim. Observed adherence of facilitators to the planned intervention was assessed by analysing the workshop transcripts in terms of BCTs delivered. Self-reported adherence was measured using a checklist completed at the end of each workshop session and was compared with the ‘gold standard’ of observed adherence using sensitivity and specificity analyses.
The overall observed adherence to planned BCTs was 79%, representing moderate-to-high intervention fidelity. There was no significant difference in adherence to BCTs between the facilitators. Sensitivity of self-reported adherence was 95% (95% CI 88 to 98) and specificity was 30% (95% CI 11 to 60).
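The sensitivity/specificity comparison above can be illustrated with a minimal sketch: self-reported checklist entries are scored against the observed (transcript-coded) adherence as the gold standard, and performance is summarised from the resulting 2×2 counts. The counts below are hypothetical, not taken from the study.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical 2x2 counts comparing self-report against observation:
# TP: self-reported and observed; FN: observed but not self-reported;
# TN: neither; FP: self-reported but not observed.
sens, spec = sensitivity_specificity(tp=95, fn=5, tn=3, fp=7)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 95%, 30%
```

A pattern of high sensitivity with low specificity, as reported here, means facilitators rarely failed to tick BCTs they actually delivered, but often ticked BCTs that observers did not detect.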
The findings suggest that the IMPLEMENT intervention was delivered with high levels of adherence to the planned intervention protocol.
Trial registration number
The IMPLEMENT trial was registered in the Australian New Zealand Clinical Trials Registry, ACTRN012606000098538 (http://www.anzctr.org.au/trial_view.aspx?ID=1162).
Methodological guidelines for intervention reporting emphasise describing intervention content in detail. Despite this, systematic reviews of quality improvement (QI) implementation interventions continue to be limited by a lack of clarity and detail regarding the intervention content being evaluated. We aimed to apply the recently developed Behaviour Change Techniques Taxonomy version 1 (BCTTv1) to trials of implementation interventions for managing diabetes to assess the capacity and utility of this taxonomy for characterising active ingredients.
Three psychologists independently coded a random sample of 23 trials of healthcare system, provider- and/or patient-focused implementation interventions from a systematic review that included 142 such studies. Intervention content was coded using the BCTTv1, which describes 93 behaviour change techniques (BCTs) grouped within 16 categories. We supplemented the generic coding instructions within the BCTTv1 with decision rules and examples from this literature.
Less than a quarter of possible BCTs within the BCTTv1 were identified. For implementation interventions targeting providers, the most commonly identified BCTs included the following: adding objects to the environment, prompts/cues, instruction on how to perform the behaviour, credible source, goal setting (outcome), feedback on outcome of behaviour, and social support (practical). For implementation interventions also targeting patients, the most commonly identified BCTs included the following: prompts/cues, instruction on how to perform the behaviour, information about health consequences, restructuring the social environment, adding objects to the environment, social support (practical), and goal setting (behaviour). The BCTTv1 mapped well onto implementation interventions directly targeting clinicians and patients and could also be used to examine the impact of system-level interventions on clinician and patient behaviour.
The BCTTv1 can be used to characterise the active ingredients in trials of implementation interventions and provides specificity of content beyond what is given by broader intervention labels. Identification of BCTs may provide a more helpful means of accumulating knowledge on the content used in trials of implementation interventions, which may help to better inform replication efforts. In addition, prospective use of a behaviour change techniques taxonomy for developing and reporting intervention content would further aid in building a cumulative science of effective implementation interventions.
Electronic supplementary material
The online version of this article (doi:10.1186/s13012-015-0248-7) contains supplementary material, which is available to authorized users.
Behaviour change; Taxonomy; Diabetes; Quality improvement; Techniques; Intervention content
Policymakers, stakeholders and researchers have not been able to find research evidence about health systems using an easily understood taxonomy of topics, know when they have conducted a comprehensive search of the many types of research evidence relevant to them, or rapidly identify decision-relevant information in their search results.
To address these gaps, we developed an approach to building a ‘one-stop shop’ for research evidence about health systems. We developed a taxonomy of health system topics and iteratively refined it by drawing on existing categorization schemes and by using it to categorize progressively larger bundles of research evidence. We identified systematic reviews, systematic review protocols, and review-derived products through searches of Medline, hand searches of several databases indexing systematic reviews, hand searches of journals, and continuous scanning of listservs and websites. We developed an approach to providing ‘added value’ to existing content (e.g., coding systematic reviews according to the countries in which included studies were conducted) and to expanding the types of evidence eligible for inclusion (e.g., economic evaluations and health system descriptions). Lastly, we developed an approach to continuously updating the online one-stop shop in seven supported languages.
The taxonomy is organized by governance, financial, and delivery arrangements and by implementation strategies. The ‘one-stop shop’, called Health Systems Evidence, contains a comprehensive inventory of evidence briefs, overviews of systematic reviews, systematic reviews, systematic review protocols, registered systematic review titles, economic evaluations and costing studies, health reform descriptions and health system descriptions, and many types of added-value coding. It is continuously updated and new content is regularly translated into Arabic, Chinese, English, French, Portuguese, Russian, and Spanish.
Policymakers and stakeholders can now easily access and use a wide variety of types of research evidence about health systems to inform decision-making and advocacy. Researchers and research funding agencies can use Health Systems Evidence to identify gaps in the current stock of research evidence and domains that could benefit from primary research, systematic reviews, and review overviews.
Electronic supplementary material
The online version of this article (doi:10.1186/1478-4505-13-10) contains supplementary material, which is available to authorized users.
A peer-reviewed journal would not survive without the generous time and insightful comments of the reviewers, whose efforts often go unrecognized. Although final decisions are always editorial, they are greatly facilitated by the deeper technical knowledge, scientific insights, understanding of social consequences, and passion that reviewers bring to our deliberations. For these reasons, the Editors-in-Chief and staff of the journal warmly thank the 610 reviewers whose comments helped to shape Trials, for their invaluable assistance with review of manuscripts for the journal in Volume 15 (2014).
Practical solutions are needed to support the appropriate use of available health system resources as countries are continually pressured to ‘do more with less’ in health care. Increasingly, health systems and organizations are exploring the reassessment of possibly obsolete, inefficient, or ineffective health system resources and potentially redirecting funds to those that are more effective and efficient. Such processes are often referred to as ‘disinvestment’. Our objective is to gain further understanding about: 1) whether, how, and under what conditions health systems decide to pursue disinvestment; 2) how health systems have chosen to undertake disinvestment; and 3) how health systems have implemented their disinvestment approach.
We will use a critical interpretive synthesis (CIS) approach to develop a theoretical framework based on insights drawn from a range of relevant sources. We will conduct systematic searches of databases as well as purposive searches to identify literature to fill conceptual gaps that may emerge during our inductive process of synthesis and analysis. Two independent reviewers will assess search results for relevance and conceptually map included references. We will include all empirical and non-empirical articles that focus on disinvestment at a system level. We will then extract key findings from a purposive sample of articles using frameworks related to government agendas, policy development and implementation, and health system contextual factors, and then synthesize and integrate the findings to develop a framework about our core areas of interest. Lastly, we will convene a stakeholder dialogue with Canadian and international policymakers and other stakeholders to solicit targeted feedback about the framework (e.g., by identifying any gaps in the literature that we may want to revisit before finalizing it) and to deliberate about barriers to developing and implementing approaches to disinvestment, strategies to address these barriers, and next steps that could be taken by different constituencies.
Disinvestment is an emerging field and there is a need for evidence to inform the prioritization, development, and implementation of strategies in different contexts. Our CIS and the framework developed through it will support the actions of those involved in the prioritization, development, and implementation of disinvestment initiatives.
Systematic review registration
Electronic supplementary material
The online version of this article (doi:10.1186/2046-4053-3-143) contains supplementary material, which is available to authorized users.
Disinvestment; Critical interpretive synthesis; Health technologies; Health system context; Policy development; Policy implementation; Reassessment; Appropriateness
Mobilizing research evidence for daily decision-making is challenging for health system decision-makers. In a previous qualitative paper, we showed the current mix of supports that Canadian health-care organizations have in place and the ones that are perceived to be helpful to facilitate the use of research evidence in health system decision-making. Factors influencing the implementation of such supports remain poorly described in the literature. Identifying the barriers to and facilitators of different interventions is essential for implementation of effective, context-specific, supports for evidence-informed decision-making (EIDM) in health systems. The purpose of this study was to identify (a) barriers and facilitators to implementing supports for EIDM in Canadian health-care organizations, (b) views about emerging development of supports for EIDM, and (c) views about the priorities to bridge the gaps in the current mix of supports that these organizations have in place.
This qualitative study was conducted in three types of health-care organizations (regional health authorities, hospitals, and primary care practices) in two Canadian provinces (Ontario and Quebec). Fifty-seven in-depth semi-structured telephone interviews were conducted with senior managers, library managers, and knowledge brokers from health-care organizations that have already undertaken strategic initiatives in knowledge translation. The interviews were taped, transcribed, and then analyzed thematically using NVivo 9 qualitative data analysis software.
Limited resources (i.e., money or staff), time constraints, and negative attitudes (or resistance) toward change were the most frequently identified barriers to implementing supports for EIDM. Genuine interest from health system decision-makers, notably their willingness to invest money and resources and to create a knowledge translation culture over time in health-care organizations, was the most frequently identified facilitator to implementing supports for EIDM. The most frequently cited views about emerging development of supports for EIDM were implementing accessible and efficient systems to support the use of research in decision-making (e.g., documentation and reporting tools, communication tools, and decision support tools) and developing and implementing an infrastructure or position where the accountability for encouraging knowledge use lies. The most frequently stated priorities for bridging the gaps in the current mix of supports that these organizations have in place were implementing technical infrastructures to support research use and to ensure access to research evidence and establishing formal or informal ties to researchers and knowledge brokers outside the organization who can assist in EIDM.
These results provide insights on the type of practical implementation imperatives involved in supporting EIDM.
Evidence informed decision-making; Knowledge transfer and exchange; Knowledge translation
The translation of research into practice has been incomplete. Organizational readiness for change (ORC) is a potential facilitator of effective knowledge translation (KT). However, we know little about the best way to assess ORC. Therefore, we sought to systematically review ORC measurement instruments.
We searched for published studies in bibliographic databases (PubMed, Embase, CINAHL, PsycINFO, Web of Science, etc.) up to November 1st, 2012. We included publications that developed ORC measures and/or empirically assessed ORC using an instrument at the organizational level in the health care context. We excluded articles if they did not refer specifically to ORC, did not concern the health care domain or were limited to individual-level change readiness. We focused on identifying the psychometric properties of instruments that were developed to assess readiness in an organization prior to implementing KT interventions in health care. We used the Standards for Educational and Psychological Testing to assess the psychometric properties of identified ORC measurement instruments.
We found 26 eligible instruments described in 39 publications. According to the Standards for Educational and Psychological Testing, 18 (69%) of the 26 measurement instruments presented both validity and reliability criteria. The Texas Christian University ORC (TCU-ORC) scale reported the highest instrument validity, with a score of 4 out of 4. Only one instrument, namely the Modified Texas Christian University Director version (TCU-ORC-D), reported a reliability score of 2 out of 3. No information was provided regarding the reliability and validity of five (19%) instruments.
Our findings indicate that there are few valid and reliable ORC measurement instruments that could be applied to KT in the health care sector. The TCU-ORC instrument presents the best evidence in terms of validity testing. Future studies using this instrument could provide more knowledge on its relevance to diverse clinical contexts.
This protocol builds on the development of a) a framework that identified the various supports (i.e. positions, activities, interventions) that a healthcare organisation or health system can implement for evidence-informed decision-making (EIDM) and b) a qualitative study that showed the current mix of supports that some Canadian healthcare organisations have in place and the ones that are perceived to facilitate the use of research evidence in decision-making. Based on these findings, we developed a web survey to collect cross-sectional data about the specific supports that regional health authorities and hospitals in two Canadian provinces (Ontario and Quebec) have in place to facilitate EIDM.
This paper describes the methods for a cross-sectional web survey among 32 regional health authorities and 253 hospitals in the provinces of Quebec and Ontario (Canada) to collect data on the current mix of organisational supports that these organisations have in place to facilitate evidence-informed decision-making. The data will be obtained through a two-step survey design: a 10-min survey among CEOs to identify key units and individuals in regard to our objectives (step 1) and a 20-min survey among managers of the key units identified in step 1 to collect information about the activities performed by their unit regarding the acquisition, assessment, adaptation and/or dissemination of research evidence in decision-making (step 2). The study will target three types of informants: CEOs, library/documentation centre managers and all other key managers whose unit is involved in the acquisition, assessment, adaptation/packaging and/or dissemination of research evidence in decision-making. We developed an innovative data collection system to increase the likelihood that only the best-informed respondent available answers each survey question. The reporting of the results will be done using descriptive statistics of supports by organisation type and by province.
This study will be the first to collect and report large-scale cross-sectional data on the current mix of supports health system organisations in the two most populous Canadian provinces have in place for evidence-informed decision-making. The study will also provide useful information to researchers on how to collect organisation-level data with reduced risk of self-reporting bias.
Health systems; Knowledge translation; Research evidence; Cross-sectional study
One of the greatest challenges in healthcare is how to best translate research evidence into clinical practice, which includes how to change health-care professionals’ behaviours. A commonly held view is that multifaceted interventions are more effective than single-component interventions. The purpose of this study was to conduct an overview of systematic reviews to evaluate the effectiveness of multifaceted interventions in comparison to single-component interventions in changing health-care professionals’ behaviour in clinical settings.
The Rx for Change database, which consists of quality-appraised systematic reviews of interventions to change health-care professional behaviour, was used to identify systematic reviews for the overview. Dual, independent screening and data extraction were conducted. Included reviews used three different approaches (of varying methodological robustness) to evaluate the effectiveness of multifaceted interventions: (1) effect size/dose-response statistical analyses, (2) direct (non-statistical) comparisons of multifaceted to single interventions and (3) indirect comparisons of multifaceted to single interventions.
Twenty-five reviews were included in the overview. Three reviews provided effect size/dose-response statistical analyses of the effectiveness of multifaceted interventions; no statistical evidence of a relationship between the number of intervention components and the effect size was found. Eight reviews reported direct (non-statistical) comparisons of multifaceted to single-component interventions; four of these reviews found multifaceted interventions to be generally effective compared to single interventions, while the remaining four reviews found that multifaceted interventions had either mixed effects or were generally ineffective compared to single interventions. Twenty-three reviews indirectly compared the effectiveness of multifaceted to single interventions; nine of these also reported either a statistical (dose-response) analysis (N = 2) or a non-statistical direct comparison (N = 7). The majority (N = 15) of reviews reporting indirect comparisons of multifaceted to single interventions showed similar effectiveness for multifaceted and single interventions when compared to controls. Of the remaining eight reviews, six found single interventions to be generally effective while multifaceted interventions had mixed effectiveness.
This overview of systematic reviews offers no compelling evidence that multifaceted interventions are more effective than single-component interventions.
Electronic supplementary material
The online version of this article (doi:10.1186/s13012-014-0152-6) contains supplementary material, which is available to authorized users.
Clinical practice is not always evidence-based and, therefore, may not optimise patient outcomes. Opinion leaders disseminating and implementing ‘best evidence’ is one method that holds promise as a strategy to bridge evidence-practice gaps.
To assess the effectiveness of the use of local opinion leaders in improving professional practice and patient outcomes.
We searched Cochrane EPOC Group Trials Register, the Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, HMIC, Science Citation Index, Social Science Citation Index, ISI Conference Proceedings and World Cat Dissertations up to 5 May 2009. In addition, we searched reference lists of included articles.
Studies eligible for inclusion were randomised controlled trials investigating the effectiveness of using opinion leaders to disseminate evidence-based practice and reporting objective measures of professional performance and/or health outcomes.
Data collection and analysis
Two review authors independently extracted data from each study and assessed its risk of bias. For each trial, we calculated the median risk difference (RD) for compliance with desired practice, adjusting for baseline where data were available. We reported the median adjusted RD for each of the main comparisons.
We included 18 studies involving more than 296 hospitals and 318 PCPs. Fifteen studies (18 comparisons) contributed to the calculations of the median adjusted RD for the main comparisons. The effects of interventions varied across the 63 outcomes, from a 15% decrease to a 72% increase in compliance with desired practice. The median adjusted RDs for the main comparisons were: i) opinion leaders compared to no intervention, +0.09; ii) opinion leaders alone compared to a single intervention, +0.14; iii) opinion leaders with one or more additional intervention(s) compared to the same additional intervention(s) alone, +0.10; iv) opinion leaders as part of multiple interventions compared to no intervention, +0.10. Overall, across all 18 studies, the median adjusted RD was +0.12, representing a 12% absolute increase in compliance in the intervention group.
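The median adjusted risk-difference calculation described above can be sketched as follows: for each comparison, the post-intervention difference in compliance is adjusted by subtracting the baseline difference, and the median is then taken across comparisons. The compliance proportions below are invented for illustration only.

```python
from statistics import median

def adjusted_rd(post_int, post_ctrl, base_int, base_ctrl):
    """Risk difference in compliance, adjusted for baseline imbalance."""
    return (post_int - post_ctrl) - (base_int - base_ctrl)

# Hypothetical comparisons: (post_int, post_ctrl, base_int, base_ctrl)
# as proportions of patients receiving the desired practice.
comparisons = [
    (0.60, 0.45, 0.40, 0.38),
    (0.55, 0.50, 0.30, 0.33),
    (0.58, 0.50, 0.42, 0.40),
]
rds = [adjusted_rd(*c) for c in comparisons]
print(f"median adjusted RD: {median(rds):+.2f}")
```

Adjusting for baseline in this way prevents a chance imbalance in pre-intervention compliance from being counted as an intervention effect.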
Opinion leaders alone or in combination with other interventions may successfully promote evidence-based practice, but effectiveness varies both within and between studies. These results are based on heterogeneous studies differing in terms of type of intervention, setting, and outcomes measured. In most of the studies the role of the opinion leader was not clearly described, and it is therefore not possible to say what the best way is to optimise the effectiveness of opinion leaders.
*Leadership; *Policy Making; Evidence-Based Medicine [*standards]; Information Dissemination; Physician’s Practice Patterns; Process Assessment (Health Care); Professional Practice [*standards]; Randomized Controlled Trials as Topic; Humans
The opportunity to improve care by delivering decision support to clinicians at the point of care represents one of the main incentives for implementing sophisticated clinical information systems. Previous reviews of computer reminder and decision support systems have reported mixed effects, possibly because they did not distinguish point of care computer reminders from e-mail alerts, computer-generated paper reminders, and other modes of delivering ‘computer reminders’.
To evaluate the effects on processes and outcomes of care attributable to on-screen computer reminders delivered to clinicians at the point of care.
We searched the Cochrane EPOC Group Trials register, MEDLINE, EMBASE and CINAHL and CENTRAL to July 2008, and scanned bibliographies from key articles.
Studies of a reminder delivered via a computer system routinely used by clinicians, with a randomised or quasi-randomised design and reporting at least one outcome involving a clinical endpoint or adherence to a recommended process of care.
Data collection and analysis
Two authors independently screened studies for eligibility and abstracted data. For each study, we calculated the median improvement in adherence to target processes of care and also identified the outcome with the largest such improvement. We then calculated the median absolute improvement in process adherence across all studies using both the median outcome from each study and the best outcome.
Twenty-eight studies (reporting a total of thirty-two comparisons) were included. Computer reminders achieved a median improvement in process adherence of 4.2% (interquartile range (IQR): 0.8% to 18.8%) across all reported process outcomes, 3.3% (IQR: 0.5% to 10.6%) for medication ordering, 3.8% (IQR: 0.5% to 6.6%) for vaccinations, and 3.8% (IQR: 0.4% to 16.3%) for test ordering. In a sensitivity analysis using the best outcome from each study, the median improvement was 5.6% (IQR: 2.0% to 19.2%) across all process measures and 6.2% (IQR: 3.0% to 28.0%) across measures of medication ordering.
In the eight comparisons that reported dichotomous clinical endpoints, intervention patients experienced a median absolute improvement of 2.5% (IQR: 1.3% to 4.2%). Blood pressure was the most commonly reported clinical endpoint, with intervention patients experiencing a median reduction in their systolic blood pressure of 1.0 mmHg (IQR: 2.3 mmHg reduction to 2.0 mmHg increase).
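The summary statistics above (a median improvement with an interquartile range across studies) can be sketched with the standard library; the per-study adherence improvements below are hypothetical, not the review's data.

```python
from statistics import quantiles

# Hypothetical per-study improvements in process adherence, in
# percentage points; one value per study.
improvements = [0.4, 0.8, 1.5, 3.3, 4.2, 5.6, 9.7, 18.8]

# method="inclusive" interpolates between observed data points,
# treating the sample as the whole population of included studies.
q1, med, q3 = quantiles(improvements, n=4, method="inclusive")
print(f"median {med:.1f}% (IQR: {q1:.1f}% to {q3:.1f}%)")
```

Reporting the IQR alongside the median, as the review does, conveys the spread of effects across heterogeneous studies without assuming a normal distribution.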
Point of care computer reminders generally achieve small to modest improvements in provider behaviour. A minority of interventions showed larger effects, but no specific reminder or contextual features were significantly associated with effect magnitude. Further research must identify design features and contextual factors consistently associated with larger improvements in provider behaviour if computer reminders are to succeed on more than a trial and error basis.
*Decision Support Systems, Clinical; *Outcome and Process Assessment (Health Care); *Point-of-Care Systems; *Reminder Systems; Decision Making, Computer-Assisted; Humans
The interface between primary and specialist care is a key organisational feature of many health care systems. Patients are referred to specialist care when investigation or therapeutic options are exhausted in primary care and more specialised care is needed. Referral has considerable implications for patients, the health care system and health care costs. There is considerable evidence that referral processes can be improved.
To estimate the effectiveness and efficiency of interventions to change outpatient referral rates or improve outpatient referral appropriateness.
We conducted electronic searches of the Cochrane Effective Practice and Organisation of Care (EPOC) group specialised register (developed through extensive searches of MEDLINE, EMBASE, Healthstar and the Cochrane Library) (February 2002) and the National Research Register. Updated searches were conducted in MEDLINE and the EPOC specialised register up to October 2007.
Randomised controlled trials, controlled clinical trials, controlled before and after studies and interrupted time series of interventions to change or improve outpatient referrals. Participants were primary care physicians. The outcomes were objectively measured provider performance or health outcomes.
Data collection and analysis
A minimum of two reviewers independently extracted data and assessed study quality.
Seventeen studies involving 23 separate comparisons were included. Nine studies (14 comparisons) evaluated professional educational interventions. Ineffective strategies included: passive dissemination of local referral guidelines (two studies), feedback of referral rates (one study) and discussion with an independent medical adviser (one study). Generally effective strategies included dissemination of guidelines with structured referral sheets (four out of five studies) and involvement of consultants in educational activities (two out of three studies). Four studies evaluated organisational interventions (patient management by family physicians compared to general internists, attachment of a physiotherapist to general practices, a new slot system for referrals and requiring a second ‘in-house’ opinion prior to referral), all of which were effective. Four studies (five comparisons) evaluated financial interventions. One study evaluating change from a capitation based to mixed capitation and fee-for-service system and from a fee-for-service to a capitation based system (with an element of risk sharing for secondary care services) observed a reduction in referral rates. Modest reductions in referral rates of uncertain significance were observed following the introduction of the general practice fundholding scheme in the United Kingdom (UK). One study evaluating the effect of providing access to private specialists demonstrated an increase in the proportion of patients referred to specialist services but no overall effect on referral rates.
There are only a limited number of rigorous evaluations on which to base policy. Active local educational interventions involving secondary care specialists and structured referral sheets are the only interventions shown to have an impact on referral rates based on current evidence. The effects of ‘in-house’ second opinion and other intermediate primary care based alternatives to outpatient referral appear promising.
*Medicine [organization & administration; standards]; *Outpatients; *Practice Guidelines as Topic; *Primary Health Care [economics; organization & administration; standards]; *Specialization; Controlled Clinical Trials as Topic; Economics, Medical; Family Practice [economics; organization & administration; standards]; Information Dissemination; Referral and Consultation [economics; organization & administration; *standards]; Humans
To improve quality of care and patient outcomes, health system decision-makers need to identify and implement effective interventions. An increasing number of systematic reviews document the effects of quality improvement programs to assist decision-makers in developing new initiatives. However, limitations in the reporting of primary studies and current meta-analysis methods (including approaches for exploring heterogeneity) reduce the utility of existing syntheses for health system decision-makers. This study will explore the role of innovative meta-analysis approaches and the added value of enriched and updated data for increasing the utility of systematic reviews of complex interventions.
We will use the dataset from our recent systematic review of 142 randomized trials of diabetes quality improvement programs to evaluate novel approaches for exploring heterogeneity. These will include exploratory methods, such as multivariate meta-regression analyses and all-subsets combinatorial meta-analysis. We will then update our systematic review to include new trials and enrich the dataset by surveying authors of all included trials. In doing so, we will explore the impact of variables not reported in previous publications, such as details of study context, on the effectiveness of the intervention. We will use innovative analytical methods on the enriched and updated dataset to identify key success factors in the implementation of quality improvement interventions for diabetes. Decision-makers will be involved throughout to help identify and prioritize variables to be explored and to aid in the interpretation and dissemination of results.
This study will inform future systematic reviews of complex interventions and describe the value of enriching and updating data for exploring heterogeneity in meta-analysis. It will also result in an updated comprehensive systematic review of diabetes quality improvement interventions that will be useful to health system decision-makers in developing interventions to improve outcomes for people with diabetes.
Systematic review registration
PROSPERO registration no. CRD42013005165
Diabetes care; Knowledge translation; Quality improvement interventions; Complex interventions; Health system decision-makers; Systematic review; Meta-analysis; Implementation science; Heterogeneity; Hierarchical modeling
Clinical decision rules (CDRs) can be an effective tool for knowledge translation in emergency medicine, but their implementation is often a challenge. This study examined whether the Theory of Planned Behaviour (TPB) could help explain the inconsistent results between the successful Canadian C-Spine Rule (CCR) implementation study and the unsuccessful Canadian CT Head Rule (CCHR) implementation study. Both rules aim to improve the accuracy and efficiency of emergency department radiography use in clinical contexts that currently exhibit enormous inefficiency. The rules were prospectively derived and validated using the same methodology, demonstrating high sensitivity and reliability. The rules subsequently underwent parallel implementations at 12 Canadian hospitals, yet only the CCR significantly reduced radiography ordering rates, while the CCHR had no significant impact at all. These drastically different outcomes are unlikely to be explained by differences in implementation strategies or in the decision rules themselves.
Physicians at the 12 participating Canadian hospitals were randomized to CCR or CCHR TPB surveys that were administered during the baseline phases of the implementation studies, before any intervention had taken place. The collected baseline survey data were linked to concurrent baseline physician and patient-specific imaging data, and subsequently analyzed using mixed effects linear and logistic models.
A total of 223 of the 378 eligible physicians randomized to a TPB survey completed their assigned baseline survey (CCR: 122 of 181; CCHR: 101 of 197). Attitudes were significantly associated with intention in both settings (CCR: β = 0.40; CCHR: β = 0.30), as were subjective norms (CCR: β = 0.26; CCHR: β = 0.73). Intention was significantly associated with actual image ordering for CCR (OR = 1.79), but not CCHR.
The TPB can be used to better understand processes underlying use of CDRs. TPB constructs were significantly associated with intention to perform both imaging behaviours, but intention was only associated with actual behaviour for CCR, suggesting that constructs outside of the TPB framework may need to be considered when seeking to understand use of CDRs.
Electronic supplementary material
The online version of this article (doi:10.1186/s13012-014-0088-x) contains supplementary material, which is available to authorized users.
Clinical decision rules; Canadian C-Spine Rule; Canadian CT-Head Rule; Theory of planned behaviour; Emergency physicians; Implementation study
Theory-based process evaluations conducted alongside randomized controlled trials provide the opportunity to investigate hypothesized mechanisms of action of interventions, helping to build a cumulative knowledge base and to inform the interpretation of individual trial outcomes. Our objective was to identify the underlying causal mechanisms in a cluster randomized trial of the effectiveness of printed educational materials (PEMs) to increase referral for diabetic retinopathy screening. We hypothesized that the PEMs would increase physicians’ intention to refer patients for retinal screening by strengthening their attitude and subjective norm, but not their perceived behavioral control.
Design: A theory based process evaluation alongside the Ontario Printed Educational Material (OPEM) cluster randomized trial. Postal surveys based on the Theory of Planned Behavior were sent to a random sample of trial participants two months before and six months after they received the intervention. Setting: Family physicians in Ontario, Canada. Participants: 1,512 family physicians (252 per intervention group) from the OPEM trial were invited to participate, and 31.3% (473/1512) responded at time one and time two. The final sample comprised 437 family physicians fully completing questionnaires at both time points. Main outcome measures: Primary: behavioral intention related to referring patient for retinopathy screening; secondary: attitude, subjective norm, perceived behavioral control.
At baseline, family physicians reported positive intention, attitude, subjective norm, and perceived behavioral control to advise patients about retinopathy screening suggesting limited opportunities for improvement in these constructs. There were no significant differences on intention, attitude, subjective norm, and perceived behavioral control following the intervention. Respondents also reported additional physician- and patient-related factors perceived to influence whether patients received retinopathy screening.
Lack of change in the primary and secondary theory-based outcomes provides an explanation for the lack of observed effect of the main OPEM trial. High baseline levels of intention to advise patients to attend retinopathy screening suggest that post-intentional and other factors may explain gaps in care. Process evaluations based on behavioral theory can provide replicable and generalizable insights to aid interpretation of randomized controlled trials of complex interventions to change health professional behavior.
Electronic supplementary material
The online version of this article (doi:10.1186/1748-5908-9-86) contains supplementary material, which is available to authorized users.
Process evaluation; Theory of planned behavior; Printed educational material; Healthcare professional behavior; Behavior change
Evidence of the effectiveness of printed educational messages in narrowing the gap between guideline recommendations and practice is contradictory. Failure to screen for retinopathy exposes primary care patients with diabetes to risk of eye complications. Screening is initiated by referral from family practitioners but adherence to guidelines is suboptimal. We aimed to evaluate the ability of printed educational messages aimed at family doctors to increase retinal screening of primary care patients with diabetes.
Design: Pragmatic 2×3 factorial cluster trial randomized by physician practice, involving 5,048 general practitioners (with 179,833 patients with diabetes). Setting: Ontario family practitioners. Interventions: Reminders (that retinal screening helps prevent diabetes-related vision loss and is covered by provincial health insurance for patients with diabetes) with prompts to encourage screening were mailed to each physician in conjunction with a widely-read professional newsletter. Alternative printed materials formats were an ‘outsert’ (short, directive message stapled to the outside of the newsletter), and/or a two-page, evidence-based article (‘insert’) and a pre-printed sticky note reminder for patients. Main outcome measure: A successful outcome was an eye examination (which includes retinal screening) provided to a patient with diabetes, not screened in the previous 12 months, within 90 days after visiting a family practitioner. Analysis accounted for clustering of doctors within practice groups.
No intervention effect was detected: eye exam rates were 31.6% for patients of control physicians, 31.3% for the insert, 32.8% for the outsert, 32.3% for those who received both, and 31.2% for those who received both plus the patient reminder, with the largest 95% confidence interval around any effect extending from −1.3% to 1.1%.
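As a rough illustration of the scale of these effects, the absolute risk differences between each intervention arm and control can be computed directly from the reported rates. This is a minimal sketch only: the trial's own confidence intervals additionally account for clustering of doctors within practice groups, which this arithmetic does not.

```python
# Eye-exam rates (as proportions) reported in the abstract.
rates = {
    "control": 0.316,
    "insert": 0.313,
    "outsert": 0.328,
    "insert+outsert": 0.323,
    "insert+outsert+patient reminder": 0.312,
}

# Absolute risk difference of each arm versus control, in percentage points.
# Note: the trial's reported CIs adjust for clustering; this sketch does not.
risk_diffs = {
    arm: round((rate - rates["control"]) * 100, 1)
    for arm, rate in rates.items()
    if arm != "control"
}
print(risk_diffs)
```

Even the largest arm-versus-control difference (the outsert, at roughly 1.2 percentage points) sits inside the reported confidence bounds, consistent with the trial's null conclusion.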
This large trial conclusively failed to demonstrate any impact of printed educational messages on screening uptake. Despite their low cost, printed educational messages should not be routinely used in attempting to close evidence-practice gaps relating to diabetic retinopathy screening.
Electronic supplementary material
The online version of this article (doi:10.1186/1748-5908-9-87) contains supplementary material, which is available to authorized users.
Audits of blood transfusion demonstrate around 20% transfusions are outside national recommendations and guidelines. Audit and feedback is a widely used quality improvement intervention but effects on clinical practice are variable, suggesting potential for enhancement. Behavioural theory, theoretical frameworks of behaviour change and behaviour change techniques provide systematic processes to enhance intervention. This study is part of a larger programme of work to promote the uptake of evidence-based transfusion practice.
The objectives of this study are to design two theoretically enhanced audit and feedback interventions, one focused on content and one on delivery, and to investigate their feasibility and acceptability.
Study A (Content): A coding framework based on current evidence regarding audit and feedback, and behaviour change theory and frameworks will be developed and applied as part of a structured content analysis to specify the key components of existing feedback documents. Prototype feedback documents with enhanced content and also a protocol, describing principles for enhancing feedback content, will be developed. Study B (Delivery): Individual semi-structured interviews with healthcare professionals and observations of team meetings in four hospitals will be used to specify, and identify views about, current audit and feedback practice. Interviews will be based on a topic guide developed using the Theoretical Domains Framework and the Consolidated Framework for Implementation Research. Analysis of transcripts based on these frameworks will form the evidence base for developing a protocol describing an enhanced intervention that focuses on feedback delivery. Study C (Feasibility and Acceptability): Enhanced interventions will be piloted in four hospitals. Semi-structured interviews, questionnaires and observations will be used to assess feasibility and acceptability.
This intervention development work reflects the UK Medical Research Council’s guidance on development of complex interventions, which emphasises the importance of a robust theoretical basis for intervention design and recommends systematic assessment of feasibility and acceptability prior to taking interventions to evaluation in a full-scale randomised study. The work-up includes specification of current practice so that, in the trials to be conducted later in this programme, there will be a clear distinction between the control (usual practice) conditions and the interventions to be evaluated.
Electronic supplementary material
The online version of this article (doi:10.1186/s13012-014-0092-1) contains supplementary material, which is available to authorized users.
Audit and feedback; Blood transfusion; Implementation; Health services research; Study protocol; Health professional behaviour change
Mild head injuries commonly present to emergency departments. The challenges facing clinicians in emergency departments include identifying which patients have traumatic brain injury, and which patients can safely be sent home. Traumatic brain injuries may exist with subtle symptoms or signs, but can still lead to adverse outcomes. Despite the existence of several high quality clinical practice guidelines, internationally and in Australia, research shows inconsistent implementation of these recommendations. The aim of this trial is to test the effectiveness of a targeted, theory- and evidence-informed implementation intervention to increase the uptake of three key clinical recommendations regarding the emergency department management of adult patients (18 years of age or older) who present following mild head injuries (concussion), compared with passive dissemination of these recommendations. The primary objective is to establish whether the intervention is effective in increasing the percentage of patients for which appropriate post-traumatic amnesia screening is performed.
The design of this study is a cluster randomised trial. We aim to include 34 Australian 24-hour emergency departments, which will be randomised to an intervention or control group. Control group departments will receive a copy of the most recent Australian evidence-based clinical practice guideline on the acute management of patients with mild head injuries. The intervention group will receive an implementation intervention based on an analysis of influencing factors, which includes local stakeholder meetings, identification of nursing and medical opinion leaders in each site, a train-the-trainer day, and standardised education and interactive workshops delivered by the opinion leaders over a 3-month period. Clinical practice outcomes will be collected retrospectively from medical records by independent chart auditors over the 2-month period following intervention delivery (patient-level outcomes). In consenting hospitals, eligible patients will be recruited for a follow-up telephone interview conducted by trained researchers. A cost-effectiveness analysis and a process evaluation using mixed methods will be conducted. Sample size calculations are based on including an average of 30 patients per department. Outcome assessors will be blinded to group allocation.
Australian New Zealand Clinical Trials Registry ACTRN12612001286831 (date registered 12 December 2012).
Mild traumatic brain injury; Cluster trial; Emergency department
This paper extends the findings of the Cochrane systematic review of audit and feedback on professional practice to explore the estimate of effect over time and to examine whether new trials have added to knowledge regarding how to optimize the effectiveness of audit and feedback.
We searched the Cochrane Central Register of Controlled Trials, MEDLINE, and EMBASE for randomized trials of audit and feedback compared to usual care, with objectively measured outcomes assessing compliance with intended professional practice. Two reviewers independently screened articles and abstracted variables related to the intervention, the context, and trial methodology. The median absolute risk difference in compliance with intended professional practice was determined for each study, and adjusted for baseline performance. The effect size across studies was recalculated as studies were added to the cumulative analysis. Meta-regressions were conducted for studies published up to 2002, 2006, and 2010 in which characteristics of the intervention, the recipients, and trial risk of bias were tested as predictors of effect size.
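The cumulative analysis described above, in which the summary effect is re-estimated each time another study is added in publication order, can be sketched as follows. The data here are toy values in percentage points, not the review's dataset, and the review additionally adjusted each comparison's risk difference for baseline performance.

```python
# Cumulative analysis sketch: re-estimate the summary effect (here, the
# median absolute risk difference in compliance, in percentage points)
# each time another study, ordered by publication year, is added.
from statistics import median

# Hypothetical (year, absolute risk difference in percentage points) pairs.
studies = [(1998, 2), (2000, 7), (2001, 4), (2003, 10), (2005, 5)]

def cumulative_medians(studies):
    """Running median effect size as studies accrue in year order."""
    effects = []
    out = []
    for year, rd in sorted(studies):
        effects.append(rd)
        out.append((year, median(effects)))
    return out

print(cumulative_medians(studies))
```

In the review's actual dataset, this running estimate stopped moving appreciably after 2003, which is what motivates the conclusion that subsequent trials added little new information.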
Of the 140 randomized clinical trials (RCTs) included in the Cochrane review, 98 comparisons from 62 studies met the criteria for inclusion. The cumulative analysis indicated that the effect size became stable in 2003 after 51 comparisons from 30 trials. Cumulative meta-regressions suggested new trials are contributing little further information regarding the impact of common effect modifiers. Feedback appears most effective when: delivered by a supervisor or respected colleague; presented frequently; featuring both specific goals and action-plans; aiming to decrease the targeted behavior; baseline performance is lower; and recipients are non-physicians.
There is substantial evidence that audit and feedback can effectively improve quality of care, but little evidence of progress in the field. There are opportunity costs for patients, providers, and health care systems when investigators test quality improvement interventions that do not build upon, or contribute toward, extant knowledge.
Audit and feedback; Scientific progress; Quality improvement; Systematic review; Cumulative analysis