The persistence of a large quality gap between what we know about how to produce high-quality clinical care and what the public actually receives has prompted interest in developing more effective methods to get evidence into practice. Implementation research aims to supply such methods.
This article proposes a set of recommendations aimed at establishing a common understanding of what implementation research is, and how to foster its development.
We developed the recommendations in the context of a translation research conference hosted by the VA for VA and non-VA health services researchers.
Health care organizations, journals, researchers and academic institutions can use these recommendations to advance the field of implementation science and thus increase the impact of clinical and health services research on the health and health care of the public.
In recent years, research sponsors, policy makers, and the public have increasingly viewed the “quality chasm,” or gap between what we know medical care should deliver based on principles of safety and efficacy and what patients actually receive, as an imperative for change.1 The Institute of Medicine's Clinical Research Roundtable identified the need for research that completes the cycle from basic scientific discovery to clinical research,2 including the need to invest in research on how to move the results of clinical research into clinical and public health practice. The purpose of this paper is to consider how the field of health services research can best facilitate improvements in the consistency, speed, and efficiency with which sound clinical and health care research evidence is implemented over the next several years. The paper is directed at health services research funders, health care organizations that use or want to use health services research, clinicians and clinician educators who participate in health services research, health care journal editors and reviewers, and implementation researchers themselves.
The recommendations in this paper arose from discussions among the participants in a workgroup convened at an implementation research conference hosted by the VA Quality Enhancement Research Initiative (QUERI) in August 2004 (see Acknowledgments). Any errors of interpretation are those of the authors.
We first recommend that the health services research community adopt, through expert panel or similar methods, a common definition of implementation research. As a starting point, we propose the following definition:
Implementation research consists of scientific investigations that support movement of evidence-based, effective health care approaches (e.g., as embodied in guidelines) from the clinical knowledge base into routine use. These investigations form the basis for health care implementation science. Implementation science consists of a body of knowledge on methods to promote the systematic uptake of new or underused scientific findings into the usual activities of regional and national health care and community organizations, including individual practice sites.
To build this body of knowledge, implementation researchers focus on understanding and influencing the process of uptake of scientific findings by applying and developing theories on why health care providers do what they do, and on how to improve their performance. In using the term health care provider, we intend to include both health care professionals and nonprofessionals who carry out health promotion or clinical care activities within either health care or community organizations. This science's ultimate goal is to improve the health of the public through equitable, efficient application of rigorously evaluated scientific knowledge.
We recommend that the health services research community emphasize, in its publications, conferences, and strategic plans, the critical role of implementation research within the broader translation framework developed by the Institute of Medicine.2,3 Any efforts to improve translation will ultimately fail if the final step of actually implementing the research across broad populations is not completed. Yet, scientists have been slow to realize that the methods for achieving this step are often substantially different from those that apply earlier in the pathway, and are currently not well understood.
The Institute of Medicine's framework identifies 2 common “translation blocks” requiring translation research. The first represents impeded movement from basic science discoveries into clinical studies. The second represents impeded progress from clinical study results into health systems and medical practice. Close examination of the diversity of potential research strategies applicable to the second block shows the need for more detail. In Figure 1, we have modified the original conceptualization to identify 3 translation blocks. The second block now identifies the need to translate the results of clinical studies into practice standards, or guidelines. This translation process involves ensuring that the available body of clinical studies has addressed enough of the relevant issues, such as applicability to diverse populations and feasibility of use under routine conditions, to support national consensus on quality standards, such as guidelines. It also addresses the process of creating clinical guidelines or standards, including the conduct of meta-analyses and literature syntheses for this purpose.
The third translation block occurs when the progress of scientific clinical evidence into routine practice stalls despite broad consensus on the validity of the evidence. When standards or guidelines diffuse into routine practice without additional research, no third translation block occurs. Implementation science identifies methods for overcoming the third translation block. In doing so, implementation science supports the validity of the health research enterprise as a whole by transmitting that enterprise's benefits directly to the consumers who ultimately fund it.
Methods for overcoming the third translation block can be conceptualized as quality improvement interventions (QIIs). Quality improvement interventions are policies, programs, or strategies that aim to improve quality of care for clinical or community populations and, thus, put guidelines into practice. Implementation research aims to overcome the third translation block by creating new knowledge about how best to design, implement, and evaluate QIIs. Some of this knowledge comes, for example, from descriptive studies that identify determinants of poor care, some from qualitative studies of the change process or barriers to change, and some from studies that implement and evaluate QIIs.
We recommend that health services research funders promote the development of literature and materials that summarize current implementation science. Implementation science draws from a wide theoretical and empirical base. Because it is scattered throughout journals and books from diverse fields, its core literature is difficult to access. Examples of the types of literature needed include summaries of how to carry out implementation research,4–6 systematic reviews of empirical studies of implementation,7–13 reviews of relevant theoretical constructs,14–19 and literature on methodologic issues relevant to implementation science.20–24 The technical manual category includes accessible works for those contemplating designing their own implementation studies.4–6 The systematic reviews of empirical studies identify the types of interventions known to have an impact, such as audit and feedback and clinical reminders, and the expected magnitude of impact of such interventions.7–13 The reviews of theoretical constructs are important for setting the stage for what theories need to be tested in future studies; they cover diffusion of innovations, psychologic theories of behavior change, and organizational culture.14–19 The methodologic papers deal with particularly thorny issues that are often either neglected or misunderstood, such as how to design and analyze cluster or place randomized trials, how to design studies to specifically test a theoretical construct, and the need to use modeling trials prior to full-blown implementation trials to assure that the interventions are feasible and actually affect the constructs they are predicted to change.20–24
We recommend historical review of the major implementation research initiatives that have been undertaken by funding agencies in the United States, United Kingdom, the Netherlands, and other countries in the past decade and a half, and cross-cutting analysis of studies within current initiatives. Assessing past successes and failures can improve the efficiency of current efforts, and cross-cutting analyses that generate or test hypotheses about implementation across studies within these initiatives can provide new information beyond the results of individual studies. In the United States, major past efforts include the Agency for Healthcare Research and Quality's (AHRQ) patient outcomes research teams (PORTs),25 begun in 1989 and continuing through the 1990s to understand and later to improve quality for a wide variety of conditions; the VA's National Surgical Quality Improvement Program (NSQIP), started in 1991 and continuing to the present, to promote the systematic collection, analysis, and feedback of risk-adjusted surgical data26,27; and the series of studies on depression funded over the last decade by the NIMH Division of Services and Intervention Research.28
These large initial efforts have been succeeded by a second generation of ongoing initiatives that should be rigorously analyzed and evaluated over the next decade. In the United States, this group includes the VA's QUERI, begun in 199929,30; AHRQ's Translating Research Into Practice or TRIP program, begun in 199931; and the Centers for Disease Control's (CDC) Translating Research Into Action for Diabetes (TRIAD), begun in 1999 and including a VA partnership.32–34 Among nongovernmental U.S. funding agencies, the Robert Wood Johnson Foundation's “Pursuing Perfection” initiative, begun in 2002 in partnership with the Institute for Healthcare Improvement (Donald Berwick, MD, MPP, Director), is noteworthy.35 Kaiser Permanente and Group Health Cooperative of Puget Sound are examples of health care organizations that have long histories of funding internal health services research centers.36 The RE-AIM framework37 developed at Kaiser Permanente of Colorado and the Chronic Care Model developed by Group Health researchers in collaboration with the MacColl Institute and the Institute for Healthcare Improvement are examples of new implementation approaches developed by these organizations and applied in the Breakthrough Series.38
We recommend active efforts to foster strategic progression of studies within particular topic areas from clinical science toward full implementation23,30 using implementation science and provider behavior theory.29 The progression should occur along 3 dimensions. First, studies should progress along a continuum spanning clinical guidelines or best practices, measurement of quality and quality variations, tests of QII effectiveness, tests of QII spread, and policy development. Second, studies of QIIs should progress from higher researcher control tests of QII efficacy or effectiveness to lower researcher control tests of QIIs as carried out by clinical and community organizations themselves. Third, studies of QIIs should progress from local studies (e.g., α testing) to regional studies (e.g., β testing) to national studies. As viewed along this third dimension, studies move from a focus on effectiveness to a focus on quality impacts, including business outcomes and performance measures, and from a focus on individuals enrolled in studies to populations. Identifying and assessing progression along these dimensions can enable research funders to identify unneeded, repetitive studies of the same techniques as well as continuing gaps in our knowledge base that need research.9,14
Pursuing active progression along these dimensions will ultimately foster better policy development on a national level. Policies based on systematic testing within the targeted political and organizational contexts they aim to influence will be more practical and successful. Such policies can incorporate detailed information on stakeholder costs and values, and more easily avoid unanticipated negative consequences.
We recommend that clinical guideline developers routinely incorporate implementation research findings into guideline recommendations. While both quality improvement practitioners and researchers have been quick to use clinical guidelines and best practices as a foundation for care improvement, they have not always applied implementation science within this context. For example, if particular care models or change strategies, such as clinical reminders, have been shown to be effective for ensuring higher quality care for a given health problem, guidelines should incorporate the use of clinical reminders into the guideline recommendation.
We recommend that researchers and their funders use existing implementation science to develop policies and information dissemination methods that promote adoption of research findings in routine care. Policies should recognize that research products have different propensities for being adopted outside of research, and should anticipate basic implementation support needs. We discuss below the kinds of challenges that should be addressed by these policies.
At the simplest level, we know that complex QIIs cannot be applied either in future research or in clinical settings without detailed information about what was done. Researchers should therefore be required to document all information and tools necessary for understanding how the product was developed, applied, and evaluated. This information should be publicly available in enough detail to support replication and diffusion, such as on the web.
Guideline-concordant treatment and management strategies can be thought of as products that may or may not diffuse effectively. Greenhalgh et al.14 identified at least 13 different research traditions related to understanding how innovations diffuse. Among these, Rogers'16 theory about which characteristics make an innovation likely to diffuse is one of the most widely used. Innovations with positive diffusion attributes may have sufficient impact on clinical care through routine dissemination activities such as journal publication and commercial marketing, without any additional effort from the research community, while those with negative attributes may require substantial researcher implementation support.
For example, proton pump inhibitors (pills to treat gastroesophageal reflux and ulcers) are effective, easy to prescribe, and affect bothersome symptoms. Pharmaceutical companies also have financial interests in promoting these products. Not surprisingly, proton pump inhibitors have been widely adopted with little implementation support from researchers. On the other hand, the finding that placing infants on their backs reduces sudden infant death syndrome (SIDS) did not diffuse based on journal articles, despite its low cost and simplicity. It contradicted prior habits and beliefs among many parents and pediatricians about sleeping position, and had no commercial market stakeholders. Successful dissemination required the use of social marketing research and methods39 and the involvement of researchers and community partners in a “Back to Sleep” campaign supported by at least 5 partner organizations, including the National Institutes of Health.40 As evidence of the impact of this campaign, participants cite a 70% decrease in prone sleeping between 1992 and 1996, along with a 38% reduction in SIDS mortality.41
Unfortunately, many research findings in need of implementation require significant behavior change, and provide no compelling financial or other advantages to those who must enact the change. For example, studies have repeatedly shown low adherence to hand washing recommendations. Grol et al.5 predicted, based on implementation science, that a QII approach that targets a variety of specific barriers to change at a variety of different levels (professional, team, patient, and organization) will be required to achieve lasting changes in hand-hygiene routines. Multicomponent organizational interventions, such as the hand washing QII envisioned above, are often the most effective means of achieving quality goals,7 but have negative attributes in terms of ease of diffusion. Interestingly, this issue of the Journal also contains the description of a successful QII based on the multicomponent organizational Six Sigma approach to implement hand-hygiene guidelines.42 Research organizations must become proactive in anticipating these implementation needs.
We recommend that the health services research community actively foster clinical/health services research partnerships that focus on implementation through funding such partnerships, through literature on how to make partnerships effective, and through embedding health services researchers within clinical organizations. Policies that provide incentives for participation on both sides of these partnerships should be developed.
Efforts to increase the rapidity of implementation of clinical and health services research findings are most likely to be successful if they involve broadly based organizational partnerships. Purely top-down initiatives based on fiat are rarely successful, unless bottom-up development has already occurred. Bottom-up development often occurs in the context of partnerships. Such partnerships include those between members of the health services research community, such as research funding organizations, academic and educational organizations, and scientific journals. They also include partnerships between the health services research community and nonresearch-centered organizations, such as those concerned with healthcare delivery, community interests, and policy (Fig. 2). As indicated in the figure, communication between and within these groups through, for example, joint conferences, publications, and strategic planning efforts is essential, because each of these types of partners has enormous influence on the progress of implementation.
Table 1 lists the wide variety of partners in the United States necessary (1) to design, fund, carry out, and replicate implementation research and (2) to adopt and sustain these changes in routine practice. The listed clinical governmental agencies that also fund health services research, such as the VA, the Health Resources and Services Administration (HRSA), and the Centers for Medicare & Medicaid Services (CMS), have a particularly strong interest in the application of research findings to improve the health both of their target populations and of the public as a whole.
Finally, clinical organizations should find ways to embed health services researchers as active partners in care delivery. This approach has the potential for changing healthcare organizations into “learning organizations.” The VA, for example, which began embedding health services researchers in diverse sites during the 1980s, used this investment to develop information systems, clinical quality monitors, and primary care and economic models in a bottom-up and top-down model that produced rapid, large quality improvements when nationally implemented during the late 1990s.36,43 Without the decade-long bottom-up development process fostered by embedding, it is unlikely that any top-down strategy could have produced such major change.
Health services researchers and their sponsors should adopt a systematic approach toward promoting the development and use of implementation science. We identify 8 recommendations that, if carried out, would promote the further development of implementation research over the next several years (Table 2). Health services research organizations and stakeholders should consider identifying strategic goals and plans based on these recommendations, and invest the resources necessary to bring the plans into action.
Fostering implementation research will initially cause discomfort among researchers, managers, and policy makers. Increasingly, managers and policy makers are challenged to provide evidence for the validity of their initiatives, and implementation research may encourage such challenges by raising the evidence bar for decision making. Implementation science advocates will need to be sensitive to the practical demands placed on managers and policy makers, and to learn when a research approach will impede action as well as when it will foster improved results. On the research side, pursuing scientific truth into the complexities of clinical and community settings will demand substantial cultural change. To achieve implementation, the relative scientific comfort of clinical trials must ultimately give way to the greater uncertainty of working with organizations and communities. As Greenhalgh comments,
The shifting baseline of context and the multiplicity of confounding variables must be stripped away (“controlled for”) to make the research objective. But herein lies a paradox. Context and “confounders” lie at the very heart of the diffusion, dissemination, and implementation of complex innovations. They are not extraneous to the object of study; they are an integral part of it.14
Implementation science provides the signposts that make translation of research into routine care more efficient and effective. Context becomes a legitimate objective for scientific study, and a legitimate influence on decision making. In the long run, implementation science supports both basic and clinical science by ensuring that research produces observable improvements in the health of the public, who ultimately foot the research bill.
We acknowledge the support of the VA HSR&D Center of Excellence for the Study of Healthcare Provider Behavior, and the VA HSR&D Veterans Evidence-based Research, Dissemination and Implementation Center (VERDICT). We also acknowledge the participants in the working group of the VA State-of-the-Art Conference on Implementation Research that reviewed our paper and provided us with their ideas and commentary, including (alphabetically) Jeroan J. Allison, MD, MP; Anna C. Alt-White, RN, PhD; Caryn Cohen, MS; Joseph Francis, MD, MPH; Allen L. Gifford, MD; Brian Mittman, PhD; Julie J. Mohr, MSPH, PhD; Audrey L. Nelson, PhD; Timothy J. O'Leary, MD, PhD; Marjorie L. Pearson, PhD, MSHS; Gary E. Rosenthal, MD; Theodore Speroff, PhD; Mark L. Willenbring, MD.