J Natl Cancer Inst Monogr. 2012 May; 2012(44): 86–99.
PMCID: PMC3482959

Implementation and Spread of Interventions Into the Multilevel Context of Routine Practice and Policy: Implications for the Cancer Care Continuum


The promise of widespread implementation of efficacious interventions across the cancer continuum into routine practice and policy has yet to be realized. Multilevel influences, such as communities and families surrounding patients or health-care policies and organizations surrounding provider teams, may determine whether effective interventions are successfully implemented. Greater recognition of the importance of these influences in advancing (or hindering) the impact of single-level interventions has motivated the design and testing of multilevel interventions that address them. However, implementing research evidence from single- or multilevel interventions into sustainable routine practice and policy presents substantive challenges. Furthermore, relatively few multilevel interventions have been conducted along the cancer care continuum, and fewer still have been implemented, disseminated, or sustained in practice. The purpose of this chapter is, therefore, to illustrate and examine the concepts underlying the implementation and spread of multilevel interventions into routine practice and policy. We accomplish this goal by using a series of cancer and noncancer examples that have been successfully implemented and, in some cases, spread widely. Key concepts across these examples include the importance of phased implementation, recognizing the need for pilot testing; explicit engagement of key stakeholders within and between each intervention level; visible and consistent leadership and organizational support, including financial and human resources; better understanding of the policy context, fiscal climate, and incentives underlying implementation; explication of handoffs from researchers to accountable individuals within and across levels; ample integration of multilevel theories guiding implementation and evaluation; and strategies for long-term monitoring and sustainability.

Most scientific evidence about improving health and health care stems from single-site and single-level interventions, taking decades to move from clinical trials to new routines at bedsides or clinic offices (1,2). Despite the growing volume of such interventions in the literature, their application in typical practice settings remains stubbornly elusive, rendering the promise of evidence-based practice—widespread implementation of efficacious interventions into routine clinical care—still unrealized (3,4). Also, few interventions address interventions outside health-care settings, limiting potential contributions of community-level interventions to advances in public health (eg, mobile units, neighborhood screening). The heart of the matter, however, is that the evidence itself is insufficient, as single-level interventions have chiefly been tested under highly-controlled and homogenized circumstances, often in academic medical centers or other settings—circumstances unlike those in which most patients obtain their care (5,6). As a result, interventions yielding significant advances under controlled research protocols undergo what has been described as a “voltage drop” when applied to real-world settings (7).

Applying the current state of research evidence to health care (ie, fostering the adoption, implementation, spread, and sustainability of new evidence-based approaches to care) requires explicit attention to the interactions between and among multiple levels of influence surrounding any particular single-level intervention (ie, communities and families surrounding patients; health-care policies and organizations surrounding provider teams) (8,9). Indeed, practice guidelines have increasingly embraced multilevel concepts (eg, tobacco control guidelines incorporate patient-, provider-, and system-level recommendations) (10,11), though rarely based on trials that themselves were multilevel (12,13).

Though seldom reported (14), the contextual influences underlying intervention success (or failure) have been the subject of increasing study, as each contextual layer potentially becomes the target for additional intervention components (15–19). Greater recognition of such influences has motivated the design and testing of multilevel interventions that target them (20,21). However, relatively few multilevel interventions (comprising ≥3 levels) have been conducted along the cancer care continuum, and fewer still have been implemented, spread, or sustained in practice (20).

The purpose of this chapter is to illustrate and examine the concepts underlying the implementation and spread of predominantly single-level interventions into the multilevel context of routine practice and policy using a series of cancer and noncancer examples. The examples span different levels and stages of the care continuum, from community-based primary prevention to screening in diverse clinical practices to treatment implementation and spread in large integrated health-care systems.

Implementation and Spread of Interventions Into the Multilevel Context of Routine Practice and Policy

Efficacy vs Effectiveness: Getting to Implementation

Efficacy studies place primary emphasis on internal validity to maximize the certainty with which claims may be made that the intervention was responsible for the observed differences in outcomes. Effectiveness (and implementation) studies must generalize from efficacy studies, recognizing all the ways in which they lack external validity and particular relevance to the local circumstances in which they would be applied and necessarily adapted (22) (Table 1). Adaptations require setting-specific evaluations as efficacy studies provide no assurance that the adaptations will achieve the same effects in different settings, circumstances, populations, cultures, and political environments (23). These differences account for much of the diminished impact when interventions from efficacy trials are implemented more broadly.

Table 1
Issues of efficacy vs effectiveness related to implementation of interventions into the multilevel context of routine practice and policy*

Implementation and spread are neither direct nor intuitive when patients are selected to reduce complexity, when interventions are tested only in the most favorable environments, when context is factored out, and when researchers work to ensure strict protocol adherence and control (that will not typically be feasible during implementation in other sites/levels). Rather than being entirely controlled by researchers, interventions implemented in real-world settings must involve and engage policymakers, managers, providers, nurses, clerks, and usually patients and their families, as key stakeholders in the new processes underlying implementation at each level. These stakeholders are directly engaged in working to determine how to adapt intervention elements to their practice and routines and within their social norms and settings (ie, their context). Researchers’ capacity to influence such adoption is acutely determined by the nature of the “handoffs” and support they construct through negotiation with the people, places, and circumstances of each environment they seek to improve. Each aspect of change (for stakeholders at each level), therefore, requires consideration of how individuals contribute to (or hinder) implementation. To further spread interventions to achieve a universal and permanent new way of doing business, new organizational units and/or fiscal policies may be required or new legislation enacted (24).

Furthermore, not all contextual factors are modifiable, requiring adaptation that stretches beyond the available evidence base (eg, urbanization, family structure) and commonly beyond investigators’ comfort zones (18). As adaptation extends to less familiar levels in which investigators have less influence, the inevitable drift from the seeming simplicity of the original evidence bases to accommodate increasingly diverse practices and communities complicates virtually everything (5,25). Hawe et al. (26) recommend, instead, starting with an understanding of the community first and studying how phenomena are reproduced in that system, rather than focusing on mimicking processes from the original controlled setting. Either way, it is essential to bridge the gaps between evidence-based practice and practice-based evidence (27).

Theoretical Foundations for Implementation and Spread

Much of the research evaluating implementation of interventions in real-world settings has lacked strong theoretical foundations, thereby ignoring the contributions of different social science disciplines to their design and implementation (28,29). While theoretical frameworks represent an important resource for designing implementation efforts (30–32), no integrative theories have been developed to specifically guide implementation across multiple levels. Designing effective approaches for multiple levels may require a collection of theories addressing behavior and behavior change at each component level; others have recommended a consolidated framework spanning often overlapping theories to help explain implementation in multilevel contexts (33). Related fields offer additional guidance in identifying theories for use in multilevel implementation [eg, patient (31,34), professional (35), and organizational behavior change (15)] and are addressed in other chapters (36,37). Theories in political science and policy studies also are available to help researchers address levels of government and regulatory agencies (30,32). In addition to theories offering detailed depictions of causal relationships, a number of planning frameworks (eg, Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation [PRECEDE]-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development [PROCEED]) and conceptual models (eg, Chronic Care Model) identify broad categories of factors to consider, although many stop short of specifying individual causal relationships and influences of these factors (38,39).

Cancer and Noncancer Examples for Examining Implementation and Spread

To grapple with these issues, we drew on our combined experience with a series of interventions whose implementation and spread spanned different levels (Figure 1 and Table 2). Because the use of multilevel interventions in cancer care is still developing, we also included a noncancer example that has spanned virtually all levels. Use of theory is well reflected in the examples chosen.

Table 2
Description of examples for implementing and spreading interventions into multilevel contexts*
Figure 1
Implementation and spread of interventions into multilevel contexts of routine practice and policy, levels covered by cancer and noncancer examples. CHOICE = Communicating Health Options through Information and Cancer Education (40,41); HVMA Systems = ...

Pool Cool Diffusion Trial

The Pool Cool Diffusion Trial tested a three-level skin cancer prevention program at recreational swimming pools (44). To implement, disseminate, and evaluate the program, the project team had to build effective relationships with professional organizations and recreation sites at national, regional, and local levels. This was achieved by participating in aquatics and recreation conferences, developing career opportunities and encouraging local media coverage of program activities (55,56), and providing resources to conduct the program after research participation concluded (57).

The Pool Cool program drew from social cognitive theory (58), diffusion of innovations theory (59–61), and theories of organizational change (49). These models are complementary, with considerable overlap among them (50,58). The investigators’ intent was not to test a single model but to apply the most promising constructs from each to the problem of skin cancer prevention and program diffusion in aquatics settings.

Improving Systems for Colorectal Cancer (CRC) Screening and Follow-up in Clinical Practices

Three examples focused on CRC screening, one of which also focused on follow-up.

CHOICE (Communicating Health Options Through Information and Cancer Education).

CHOICE combined patient activation through decision aids and brochures among health plan members with academic detailing to prepare practices to facilitate CRC screening for activated patients (40,41). A cluster randomized trial, CHOICE required extensive engagement and partnership development at all levels. The CHOICE intervention relied on social cognitive theory (multiple levels) and the transtheoretical model of change (stages of change) for the decision aid at the patient/member level (51,62).

Improving Systems for CRC Screening at Harvard Vanguard Medical Associates (HVMA).

Sequist et al. designed randomized multilevel systems interventions to assess whether CRC screening could be increased among overdue adults. The study was conducted at HVMA, a large integrated medical group in Eastern Massachusetts. Screening rates were higher for patients who received mailings compared with those who did not, and the effect increased with patients’ age (42). Screening rates were similar among patients whose physicians received electronic reminders and those whose physicians were in the control group. However, reminders tended to increase screening rates among patients with three or more primary care visits over the 15-month intervention. Adenoma detection tended to increase with both patient mailings and physician reminders. With a cost-effectiveness ratio of $94 for each additional patient screened, patient mailings were deemed cost-effective for continued use by the organization (43).

Improving CRC Screening and Follow-up in the Veterans Health Administration (VHA).

Improving CRC screening has been a longstanding national priority in the VHA health-care system, followed by more recent emphasis on managing timely, complete endoscopic follow-up and treatment. These examples span the VHA Colorectal Cancer Care Collaborative (C4) and Veterans Affairs (VA) Colorectal Cancer Quality Monitoring System (54,63), which grew out of QUERI (Quality Enhancement Research Initiative) (64,65).

The HVMA and VHA examples were more explicitly anchored in principles of continuous quality improvement during implementation phases, guided by Plan-Do-Study-Act (PDSA) cycles. Originally proposed by Langley et al. (66), amplified by Berwick (67), and applied in QUERI (64,65), PDSAs have often been used for smaller-scale rapid-cycle improvements. PDSA has also been adopted for broader organizational initiatives to improve quality of care (68). Stone et al. (53) have augmented this approach to guide quality improvement interventions to promote cancer screening services, identifying several key intervention features, including top management support, high visual appeal and clarity, collaboration and teamwork, and theory-based tailoring of interventions based on current needs and barriers. In both HVMA and VHA examples, these insights were primarily used during planning and pilot phases, when study interventions at different levels were refined with input from organizational leaders and pilot testing at one or more health centers or practices.

Best Practices for Comprehensive Tobacco Control Programs

The Office on Smoking and Health of the Centers for Disease Control and Prevention (CDC) examined the experience of several successful statewide tobacco control programs in the early to mid-1990s (particularly California and Massachusetts, with additional lessons drawn from Arizona, Oregon, Florida, and Mississippi in the mid- to late-1990s). They blended lessons from these programs with the evidence-based literature on tobacco control from other sources to produce a widely adopted document titled Best Practices for Comprehensive Tobacco Control Programs (45). A second edition, published in 2007, drew on the growing evidence from other states that had followed the lead of the initial states and on another CDC document, Introduction to Program Evaluation for Comprehensive Tobacco Control Programs (69). On reviewing the evidence of effective comprehensive statewide programs, CDC concluded that no single intervention by itself, other than sharply increased prices on cigarettes through taxation, could account for the significant changes in tobacco consumption found over time. California and Massachusetts, in particular, doubled, tripled, and then quadrupled their rates of decline in tobacco consumption relative to the other 48 states while implementing their comprehensive statewide programs. Less comprehensive programs had successes in specific subpopulations, on specific outcomes, and at specific levels within their states, but none as dramatic as California's or Massachusetts's comprehensive, multilevel programs. This example spans national and state policy changes as an overlay to organizational-level interventions that occurred, for example, in schools, worksites, and restaurants within statewide programs. The overriding theoretical framework for the tobacco control programs was social normative theory, which drove the mass media and smoke-free policy initiatives and which, in turn, undermined the tobacco industry's promotions and the acceptance of smoking in public (70–72).

TIDES (Translating Initiatives in Depression Into Effective Solutions)

The TIDES initiative, which began with a planning phase in 2001 and enrolled its first patients in 2002, used evidence-based quality improvement (EBQI) methods as the basis for redesigning, adapting, and spreading collaborative care models for improving outcomes among primary care patients with depression. Collaborative care models have been shown to be effective and cost-effective based on more than 35 randomized trials and meta-analyses. Also supported by the VA QUERI program, TIDES was a multiregion EBQI effort to adapt and implement the research-based depression collaborative care models to the context of the large national VA health-care system.

The EBQI approach used regional and local iterative meetings to adapt and tailor collaborative care evidence—a multicomponent intervention directed at primary care patients who screen positive for depression—to the VA context. Key intervention features included a depression care manager supervised by a mental health specialist, structured assessment and follow-up of depressed patients, and patient self-management support. Key EBQI features are regional leadership priority setting, a research/clinical partnership with involvement of technical experts, and iterative intervention development with provider- and practice-level feedback on collaborative care intervention performance. The overriding goal of the series of projects that comprised the TIDES initiative was to use regional and local adaptation of the evidence-based care model as the basis for national VHA implementation, which occurred in 2006. The VHA-only SharePoint website, which houses TIDES tools and methods, continues to be accessed about 2000 times per month from all VHA regions across the country, along with an internet site sponsored by the Substance Abuse and Mental Health Services Administration. In addition to continuous quality improvement, TIDES also relied on the Chronic Illness Care model (39) and tenets of social marketing (73).

Lessons Learned About Implementation and Spread of Interventions Into Multilevel Practice and Policy

Table 3 provides a summary of the lessons learned about the implementation and spread of interventions into multilevel practice and policy. Several key themes emerged from our examination of these diverse examples.

Table 3
Lessons learned from examples regarding implementation and spread of interventions into multilevel contexts*

Combinations and Phases of Multilevel Intervention Implementation

Attention to the nature of stakeholders at each level is key to successful implementation of a multilevel intervention, as is a strong understanding of how levels may interact. For example, in CHOICE, academic detailing was designed to prepare providers for patients activated by the decision aid. At HVMA, patient and provider reminders were delivered in parallel. Creating interdependencies also can be beneficial, for example, when local programs received tobacco control funding for mapping to state-level program activities or where local facilities received incentives for achieving compliance with CRC follow-up performance monitors. Determining the quality of the evidence (and continually integrating new evidence) for the interventions being deployed at each level also is important. However, when the evidence is lacking, blending scientific literature with experience from successful programs can be especially useful. Use of social marketing strategies also provided interventional messaging that penetrated multiple levels, though messages often have to be honed for each level's target audience (ie, what rivets the attention of patients likely differs from that of providers or policymakers). Several projects emphasized rapid cycle improvement pilots to test functions and effectiveness of implementation efforts within and across levels. This approach is especially important given the size and complexity of multilevel interventions and the importance of balancing fidelity and flexibility when adapting to local contexts.

Implementation also benefited from staged approaches, beginning with pilot testing within levels at a single practice or community followed by broader implementation as details and needs at each level become clearer (5). Recognition of the time needed for changes to penetrate each level's members’ knowledge and behavior is often underappreciated. For example, many multilevel interventions rely on champions, which requires education/training of the champion and then their peers or constituents (either by the champion or project team) through formal or informal social networks (76).

The direction of implementation—top–down vs bottom–up—also is an important distinction. In the Pool Cool program, the demand for and interest in the program went in different directions at different levels of the intervention. In some regions, motivated leaders at the top sometimes dictated program involvement, whereas in other regions, someone from a “lower level” (eg, a specific pool) was resourceful enough to find other sites and resources to bring the program to the local area. Tobacco control successes clearly moved from local and state levels to the national level for dissemination to other states that could emulate successful states’ practice-based experience, blended with evidence-based practices from controlled trials on specific interventions. TIDES also grew from a bottom–up intervention design guided by regional priorities and later was adopted nationally. Experiences from these programs, as well as others, also point to the importance of comprehensive process evaluations to measure the levers and directions of implementation, as well as the processes used, if any, to promote activity and align interests at different levels.

Partnerships Within and Across Levels

The importance of partnerships within and across levels and between researchers, clinicians, and managers was a clear and consistent theme across the examples, reflecting in large part the reduced control that researchers have over implementation dynamics on each level and the need to hand off intervention activities to nonresearchers—otherwise, it would not be “routine care” (5). To fit local conditions, proactive and intentional adaptations to the environmental and organizational milieu represented by each partnership level (eg, practice tailoring) reduce the risk of failed implementation (77–84). Such partnerships require shared knowledge, trust, and role specification; time spent in relationship- and team-building before, during, and after implementation (with changing roles over time); and continual identification of a growing network of stakeholders who will ultimately maintain and be responsible for the intervention components at their level. Few studies have documented the costs associated with such implementation, with the exception of TIDES, which demonstrated substantial contributed time by implementers and researchers (85).

Strong support from senior leaders is also essential. Policy, community, practice, and other leaders help ensure engagement of members at their respective levels and frequently secure and allocate resources while also encouraging other participants who may need to be involved (eg, engaging gastroenterology and/or radiology specialists in primary care–based efforts to improve CRC screening). Senior leaders also are accountable for implementation and maintenance activities between research team contacts and may play a major role in coalition building. Partnerships with health information technology staff also were considered key, especially in settings with electronic medical records (EMRs).

Implementation Barriers and Facilitators

Consistent with the Institute of Medicine's Crossing the Quality Chasm report (86), our examples point to the importance of organizational supports for implementation. In some scenarios, such supports may be centralized across a large number of sites (eg, computerized decision support in practices with a shared EMR or state-level media campaigns for tobacco control) and may include direct grants, special funding allocations, and/or protected time for quality improvement and training. The degree of leadership control over a particular level may also increase the consistency of implementation, especially when supported by regular feedback of evaluation data. For example, in the HVMA CRC screening intervention, organizational leaders fully endorsed the programs being developed, allowing key quality improvement staff to participate actively in their design and implementation. However, implementation that requires interdisciplinary cooperation may be met with resistance when members at a particular level compete for resources or control or operate in silos where communication and coordination mechanisms may not have been developed. The perceived importance or value of implementation goals must be balanced with competing demands among busy members at any given level (87,88). These kinds of implementation barriers may not be predictable, underscoring the value of planning phases, “pre-work,” and PDSA cycles as integral components of implementation efforts.

Understanding Policy Context, Fiscal Climate, and Performance Incentives

Insofar as all behavior is affected by context, our examples demonstrated the vital importance of understanding the contextual influences surrounding players at each level of implementation. For example, the policy context in Massachusetts during the time of the HVMA CRC screening initiative was a virtual “perfect storm” in favor of implementation, as confirmed in structured interviews with HVMA chief medical officers, another large integrated provider network in the same region, and two regional insurers. The National Committee for Quality Assurance (NCQA) had introduced a new Healthcare Effectiveness and Data Information Set (HEDIS) measure for CRC screening in 2004 (89), with two of Massachusetts's four major insurers having participated in NCQA's field testing of the new measure. Pay-for-performance incentives for CRC screening rates also were being incorporated in some health-plans’ provider contracts, and a statewide quality monitoring program, Massachusetts Health Quality Partners, was preparing to release statewide public reports on medical groups’ CRC screening rates. In other states without this policy context, the same level of adoption and participation might not have been seen.

Similarly, the rapid adoption and implementation of practice-based evidence for tobacco control from California and Massachusetts was accelerated by the Master Settlement Agreement between the states’ attorneys-general and the tobacco industry, which infused large amounts of earmarked funds into state tobacco control budgets. Implementation in settings where the fiscal climate is more difficult requires advance assessment of practice priorities and placement of the intervention among competing demands, in addition to adapting to local constraints.

Determinants of Spread

Few examples of intervention spread are available. Among our examples, the spread of successful tobacco control programs benefited from CDC's best practices document as a touchstone for planning programs at a time when the Master Settlement funds became available from the lawsuit filed against the tobacco industry, making its publication both timely and immediately applicable. Although such timing may occur serendipitously, implementation clearly benefits when advances at different levels of influence co-occur.

In the 4 years since the HVMA CRC screening interventions were originally implemented, the CRC screening rates have continued to rise from 63% to about 85%, which is one of the highest publicly reported rates for any medical group, health-plan, or region in the United States. This high rate was achieved through a strong organizational commitment to CRC screening, an advanced EMR for tracking CRC screening and other preventive services, and an expanded capacity to perform screening colonoscopy (by about 300 procedures per month) at a new HVMA endoscopy center.

Champions can support spread in addition to implementation, for example, through initial practices’ sharing of their experiences and troubleshooting with spread practices. Such person-to-person support, however, may best be accomplished when augmented with tools that facilitate adoption in new locations (eg, tracking tools, compendia of evidence, listservs, resource websites), adaptation to new populations (or subgroups), and measurement and evaluation.

However, one of the keys to implementation and spread based on these examples is the explication of the handoffs of multilevel intervention activities from researchers to accountable individuals within and across levels. When researchers support implementation by offloading certain activities from providers, they are unintentionally creating a nonsustainable situation. Furthermore, when multilevel interventions engage several clinical disciplines and multiple levels of leadership, no single handoff strategy is likely to succeed. Better assessments of usual practice, development of explicit memoranda of understanding (ie, spelling out the details of new roles and responsibilities), and continual management of research–clinical partnerships help alleviate at least some of these issues.

Sustainability: End Game or Myth?

Implementation of current evidence remains painfully slow, and the evidence base itself may not change as fast or as dramatically as often implied. Nonetheless, one of the reasons it is difficult to implement and spread evidence-based practice is that the levels of implementation are often changing. Practices face provider and staff turnover and leadership changes, and the political environment is always evolving. Just as multilevel influences are in perpetual motion, so is the evidence base to support interventions. New trials are completed, whereas observational studies contribute new information to our understanding of the factors involved in patient, provider, or organizational behavior and beyond. It is therefore important to continually scan and integrate new evidence over time: Sustainability may be a myth as there is always new evidence to consider, new people to train, practices opening and closing, communities adapting to new contexts, and state and federal agencies and their priorities changing. Unfortunately, systematic reviews, in their typically exclusive reliance on randomized controlled trials, will not close the information gap in the strategies for implementation, spread, and sustainability.

Based on the examples we reviewed, the best evidence for sustainability is long-term and continual attention to influences within and across all levels, enabled by engagement of people and places with ever-increasing and overlapping spheres of influence (90). Integration of evidence into new national norms, regardless of how such norms are fostered or reinforced (eg, through performance measures, new reimbursement policies, or legislation), is an essential method for sustaining multilevel change, though the path at the national level is complex and circuitous at best.

Methodological Challenges

While a full treatment of the range of study design and other methodological issues rooted in implementation and spread research is beyond the scope of this monograph, Table 2 provides insights into the methodological approaches each example used, as well as the challenges each faced. Key issues span study design complexity; geographic scope; measures and data collection mapped to multiple levels and across multiple waves; and the inherent value of electronic medical record (EMR) systems for supporting evaluation and monitoring.


In this chapter, we used several exemplary studies to illustrate key concepts underlying the implementation and spread of interventions into the multilevel contexts of routine practice and policy. Lessons from these studies provide insights into approaches for handling implementation of interventions and partnerships within and across levels, as well as facilitators and barriers for their implementation, spread, and sustainability.

Advancing implementation will continue to be a challenge for the foreseeable future. Discomfort with the compromises inherent in the naturalistic rollout of intervention activities at multiple levels (in contrast to experimental control focused on reducible variation) slows our ability to meet these challenges. Criticisms of multilevel intervention research are also misguided when based on the contention that it is inherently difficult to discern the relative contributions of each intervention component. Experience from our examples suggests that multilevel interventions produce synergies and complementary effects, which require mixed methods and may benefit from hybrid designs to yield useful information. Furthermore, implementation requires expertise in politics and diplomacy, skills rarely taught in scientific curricula, in addition to flexibility, comfort with uncertainty, and persistence.

Experiences from our examples offer a potential roadmap for improving the design and evaluation of multilevel interventions focused on the cancer care continuum. The methodological challenges will require ongoing investment in interdisciplinary mixed-methods research and evaluation, and greater emphasis on training and educating a growing cadre of investigators and research teams skilled at building and bridging diverse partnerships, without which most implementation will not be systematically studied. Such an investment should pay dividends by increasing the number of levels effectively combined and examined, the quality of the evidence deployed at each level, and, ultimately, the impact on routine practice and population health outcomes.


This work was coordinated by the National Cancer Institute at the National Institutes of Health. EMY's time was supported by the US Department of Veterans Affairs (VA) through the Veterans Health Administration Health Services Research & Development (HSR&D) Service through a Research Career Scientist award (05-195). JZA was supported by the National Cancer Institute (R01 CA112367).


The authors would like to thank and acknowledge the planning group in the Behavioral Research Program at the National Cancer Institute for their review of and input on multiple revisions of this monograph chapter. The authors are also grateful to Thomas D. Sequist and Eric C. Schneider for interviewing clinical leaders regarding the HVMA CRC screening interventions. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the US Department of Veterans Affairs.


1. Bradley EH, Webster TR, Baker D, et al. Translating research into practice: speeding the adoption of innovative health care programs. Issue Brief (Commonw Fund). 2004;724:1–12. [PubMed]
2. Steinbrook R. The potential of human papillomavirus vaccines. N Engl J Med. 2006;354(11):1109–1112. [PubMed]
3. Markman M. Cancer screening: understanding barriers to optimal use of evidence-based strategies. J Womens Health (Larchmt). 2007;16(1):9–10. [PubMed]
4. McKenna H, Ashton S, Keeney S. Barriers to evidence based practice in primary care: a review of the literature. Int J Nurs Stud. 2004;41(4):369–378. [PubMed]
5. Rubenstein LV, Pugh J. Strategies for promoting organizational and practice change by advancing implementation research. J Gen Intern Med. 2006;21(suppl 2):S58–S64. [PMC free article] [PubMed]
6. Van Driel ML, De Sutter AI, Christiaens TCM, DeMaeseneer JM. Quality of care: the need for medical, contextual and policy evidence in primary care. J Eval Clin Pract. 2005;11(5):417–429. [PubMed]
7. Oxman T, Dietrich A, Schulberg H. The depression care manager and mental health specialist as collaborators within primary care. Am J Geriatric Psychiatry. 2003;11(5):507–516. [PubMed]
8. Solberg LI. Improving medical practice: a conceptual framework. Ann Fam Med. 2007;5(3):251–256. [PubMed]
9. Sammer CE, Lykens K, Singh KP. Physician characteristics and the reported effect of evidence-based practice guidelines. Health Serv Res. 2008;43(2):569–581. [PMC free article] [PubMed]
10. Zapka JG, Taplin SH, Solberg LI, Manos MM. A framework for improving the quality of cancer care: the case of breast and cervical cancer screening. Cancer Epidemiol Biomarkers Prev. 2003;12(1):4–13. [PubMed]
11. Fiore MC. U.S. Public Health Service clinical practice guideline: treating tobacco use and dependence. Respir Care. 2000;45:1200–1261. [PubMed]
12. Richard L, Potvin L, Kishchuck N, Prlic H, Green LW. Assessment of the integration of the ecological approach in health promotion programs. Am J Health Promot. 1996;10(4):318–328. [PubMed]
13. Kok G, Gottlieb NH, Commers M, Smerecnik C. The ecological approach in health promotion programs: a decade later. Am J Health Promot. 2008;22(6):437–442. [PubMed]
14. Klesges LM, Dzewaltowski DA, Glasgow RE. Review of external validity reporting in childhood obesity prevention research. Am J Prev Med. 2008;34(3):216–23. [PubMed]
15. Yano EM. Influence of health care organizational factors on implementation research: QUERI Series. Implement Sci. 2008;3(1):29. [PubMed]
16. Bradley F, Wiles R, Kinmonth AL, Mant D, Gantley M, for the SHIP Collaborative Group. Development and evaluation of complex interventions in health services research: case study of the Southampton heart integrated care project (SHIP). BMJ. 1999;318(7185):711–715. [PMC free article] [PubMed]
17. Krein SL, Damschroder LJ, Kowalski CP, Forman J, Hofer TP, Saint S. The influence of organizational context on quality improvement and patient safety efforts in infection prevention: a multi-center qualitative study. Soc Sci Med. 2010;71(9):1692–1701. [PubMed]
18. Litaker D, Tomolo A. Association of contextual factors and breast cancer screening: finding new targets to promote early detection. J Womens Health. 2007;16(1):36–45. [PubMed]
19. Bamberger P. Beyond contextualization: using context theories to narrow the micro-macro gap in management research. Acad Manage J. 2008;51(5):839–846.
20. Taplin SH, Anhang Price R, Edwards HM, et al. Introduction: understanding and influencing multilevel factors across the cancer care continuum. J Natl Cancer Inst Monogr. 2012;(44):2–10. [PMC free article] [PubMed]
21. Zapka J, Taplin SH, Ganz P, Grunfeld E, Sterba K. Multilevel factors affecting quality: examples from the cancer care continuum. J Natl Cancer Inst Monogr. 2012;(44):11–19. [PMC free article] [PubMed]
22. Green LW, Glasgow R. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29(1):126–153. [PubMed]
23. Green LW, Glasgow RE, Atkins D, Stange K. Making evidence from research more relevant, useful, and actionable in policy, program planning, and practice: slips “twixt cup and lip.” Am J Prev Med. 2009;37(6S1):S187–S191. [PubMed]
24. Bodenheimer T. The science of spread: how innovations in care become the norm. California Healthcare Foundation Web site. Published September 2007. Accessed March 15, 2012.
25. Kilbourne AM, Schulberg HC, Post EP, Rollman BL, Belnap BH, Pincus HA. Translating evidence-based depression management services to community-based primary care practices. Milbank Q. 2004;82(4):631–659. [PubMed]
26. Hawe P, Shielle A, Riley T, Gold L. Methods for exploring implementation variation and local context within a cluster randomized community intervention trial. J Epidemiol Community Health. 2004;58(9):788–793. [PMC free article] [PubMed]
27. Green LW. Making research relevant: if it's an evidence-based practice, where's the practice-based evidence? Fam Pract. 2008;25(suppl 1):20–24. [published online ahead of print September 15, 2008] doi: 10.1093/fampra/cmn055. [PubMed]
28. Bhattacharyya O, Reeves S, Garfinkel S, Zwarenstein M. Designing theoretically-informed implementation interventions: fine in theory, but evidence of effectiveness in practice is needed. Implement Sci. 2006;1(1):5. [PMC free article] [PubMed]
29. Slotnick HB, Shershneva MB. Use of theory to interpret elements of change. J Contin Educ Health Prof. 2002;22(4):197–204. [PubMed]
30. Ottoson JM, Green LW. Reconciling concept and context: a theory of implementation. Adv Health Educ Prom. 1987;2:353–382.
31. Green LW, Ottoson JM, Garcia C, Hiatt R. Diffusion theory and knowledge dissemination, utilization and integration. Ann Rev Public Health. 2009;30(1):151–174. [PubMed]
32. Ottoson JM. Knowledge-for-action theories in evaluation: knowledge utilization, diffusion, implementation, transfer, and translation. New Dir Eval. 2009;(124):7–20.
33. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implem Sci. 2009;4(1):50. [PMC free article] [PubMed]
34. Centers for Disease Control and Prevention. Best Practices for Comprehensive Tobacco Control Programs—2007. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health; 2007.
35. Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. on behalf of the “Psychological Theory” Group. Making psychological theory useful for implementing evidence-based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33. [PMC free article] [PubMed]
36. Weiner BJ, Lewis MA, Clauser SB, Stitzenberg KB. In search of synergy: strategies for combining interventions at multiple levels. J Natl Cancer Inst Monogr. 2012;44:34–41. [PMC free article] [PubMed]
37. Stange KC, Breslau ES, Dietrich AJ, Glasgow RE. State-of-the-art and future directions in multilevel interventions across the cancer control continuum. J Natl Cancer Inst Monogr. 2012;44:20–31. [PMC free article] [PubMed]
38. Green LW, Kreuter MW. Health Program Planning. 4th ed. New York, NY: McGraw-Hill; 2005.
39. Wagner EH, Glasgow RE, Davis C, et al. Quality improvement in chronic illness care: a collaborative approach. Jt Comm J Qual Improve. 2001;27(2):63–80. [PubMed]
40. Lewis C, Pignone M, Schild LA, et al. Effectiveness of a patient and practice-level colorectal cancer screening intervention in health plan members: design and baseline findings of the CHOICE Trial. Cancer. 2010;116(7):1164–1173. [PMC free article] [PubMed]
41. Pignone M, Winquist A, Schild LA, et al. Effectiveness of a patient and practice-level colorectal cancer screening intervention in health plan members: the CHOICE Trial [published online ahead of print February 11, 2011]. Cancer. 2011;117(15):3352–3362. doi: 10.1002/cncr.25924. [PMC free article] [PubMed]
42. Sequist TD, Zaslavsky AM, Marshall R, Fletcher RH, Ayanian JZ. Patient and physician reminders to promote colorectal cancer screening: a randomized controlled trial. Arch Intern Med. 2009;169(4):364–371. [PMC free article] [PubMed]
43. Sequist TD, Franz C, Ayanian JZ. Cost-effectiveness of patient mailings to promote colorectal cancer screening. Med Care. 2010;48(6):553–557. [PMC free article] [PubMed]
44. Glanz K, Steffen A, Elliott T, O’Riordan D. Diffusion of an effective skin cancer prevention program: design, theoretical foundations, and first-year implementation. Health Psychol. 2005;24(5):477–487. [PubMed]
45. Centers for Disease Control and Prevention. Best Practices for Comprehensive Tobacco Control Programs—August 1999. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health; 1999. Published August 1999. Accessed April 10, 2011.
46. Rubenstein LV, Chaney EF, Ober S, et al. Using evidence-based quality improvement methods for translating depression collaborative care research into practice. Fam Syst Health. 2010;28(2):91–113. [PubMed]
47. Smith JL, Williams JW, Jr, Owen RR, Rubenstein LV, Chaney E. Developing a national dissemination plan for collaborative care for depression. Implement Sci. 2008;3(1):59. [PMC free article] [PubMed]
48. Chaney EF, Rubenstein LV, Liu CF, et al. Implementing collaborative care for depression treatment in primary care: a cluster randomized evaluation of a quality improvement practice redesign. Implement Sci. 2011;6(1):121. [PMC free article] [PubMed]
49. Steckler A, Goodman RM, Kegler MC. Mobilizing organizations for health enhancement: theories of organizational change. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education: Theory, Research and Practice. 3rd ed. San Francisco, CA: Jossey-Bass; 2002. pp. 335–360.
50. Glanz K, Geller A, Shigaki D, Maddock J, Isnec MR. A randomized trial of skin cancer prevention in aquatics settings: the Pool Cool program. Health Psychol. 2002;21(6):579–587. [PubMed]
51. Glanz K, Bishop DB. The role of behavioral science theory in development and implementation of public health interventions. Annu Rev Public Health. 2010;31(1):399–418. [PubMed]
52. Sequist TD, Zaslavsky AM, Colditz GA, Ayanian JZ. Electronic patient messages to promote colorectal cancer screening: a randomized, controlled trial [published online ahead of print December 13, 2010] Arch Intern Med. 2011;171(17):636–641. doi: 10.1001/archinternmed.2010.467. [PMC free article] [PubMed]
53. Stone EG, Morton SC, Hulscher ME, et al. Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med. 2002;136(9):641–651. [PubMed]
54. Jackson GL, Powell AA, Ordin DL, et al. VA Colorectal Cancer Care Planning Committee Members. Developing and sustaining quality improvement partnerships in the VA: the Colorectal Cancer Care Collaborative. J Gen Intern Med. 2010;25(suppl 1):38–43. [PMC free article] [PubMed]
55. Hall D, Dubruiel N, Elliott T, Glanz K. Linking agents’ activities and communication patterns in a study of the dissemination of an effective skin cancer prevention program. J Public Health Manag Pract. 2009;15(5):409–415. [PMC free article] [PubMed]
56. Escoffery C, Glanz K, Hall D, Elliott T. A multi-method process evaluation for a skin cancer prevention diffusion trial. Eval Health Prof. 2009;32(2):184–203. [PMC free article] [PubMed]
57. Hall DM, Escoffery C, Nehl E, Glanz K. Spontaneous diffusion of an effective skin cancer prevention program through web-based access to program materials. Prev Chronic Dis. 2010;7(6):A125. [PMC free article] [PubMed]
58. Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice Hall; 1986.
59. Monahan JL, Scheirer MA. The role of linking agents in the diffusion of health promotion programs. Health Educ Q. 1988;15(4):417–433. [PubMed]
60. Orlandi MA, Landers C, Weston R, Haley N. Diffusion of health promotion innovations. In: Glanz K, Lewis FM, Rimer BK, editors. Health Behavior and Health Education: Theory, Research and Practice. San Francisco, CA: Jossey-Bass; 1990. pp. 270–286.
61. Rogers EM. Diffusion of Innovations. New York, NY: The Free Press; 2003.
62. Prochaska JM, Prochaska JO, Levesque DA. A transtheoretical approach to changing organizations. Adm Policy Ment Health. 2001;28(4):247–261. [PubMed]
63. Chao HH, Schwartz AR, Hersh J, et al. Improving colorectal cancer screening and care in the Veterans Affairs healthcare system. Clin Colorectal Cancer. 2009;8(1):22–28. [PubMed]
64. Demakis JG, McQueen L, Kizer KW, Feussner JR. Quality Enhancement Research Initiative (QUERI): a collaboration between research and clinical practice. Med Care. 2000;38(6 suppl 1):I17–I25. [PubMed]
65. Rubenstein LV, Mittman BS, Yano EM, Mulrow CD. From understanding health care provider behavior to improving health care: the QUERI framework for quality improvement. Med Care. 2000;38(suppl I):I129–I141. [PubMed]
66. Langley GR, MacLellan AM, Sutherland HJ, Till JE. Effect of nonmedical factors on family physicians’ decisions about referral for consultation. CMAJ. 1992;147(5):659–666. [PMC free article] [PubMed]
67. Berwick DM. Quality comes home. Ann Intern Med. 1996;125(10):839–843. [PubMed]
68. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs health care system on the quality of care. N Engl J Med. 2003;348(22):2218–2227. [PubMed]
69. MacDonald G, Starr G, Schooley M, Yee SL, Klimowski K, Turner K. U.S. Centers for Disease Control and Prevention. Introduction to Program Evaluation for Comprehensive Tobacco Control Programs. Atlanta, GA: Centers for Disease Control and Prevention; 2001. Published November 2001. Accessed April 10, 2011.
70. Bal DG, Lloyd JC, Roeseler A, et al. California as a model. J Clin Oncol. 2001;19(suppl 18):69S–73S. [PubMed]
71. Brownson RC, Koffman DM, Novotny TE, et al. Environmental and policy interventions to control tobacco use and prevent cardiovascular disease. Health Educ Q. 1995;22(4):478–498. [PubMed]
72. Zhang X, Cowling DW, Tang H. The impact of social norm change strategies on smokers’ quitting behaviours. Tob Control. 2010;19(suppl 1):51–55. [PMC free article] [PubMed]
73. Kotler P, Roberto N, Lee N. Social Marketing: Improving the Quality of Life. Thousand Oaks, CA: Sage Publications; 2002.
74. Alexander J, Prabhu Das I, Johnson TP. Time issues in multilevel interventions for cancer treatment and prevention. J Natl Cancer Inst Monogr. 2012;44:42–48. [PMC free article] [PubMed]
75. Flood AB, Fennell ML, Devers KJ. Health reforms as examples of multilevel interventions in cancer care. J Natl Cancer Inst Monogr. 2012;44:80–85. [PMC free article] [PubMed]
76. Curran GM, Thrush CR, Smith JL, Owen RR, Ritchie M, Chadwick D. Implementing research findings into practice using clinical opinion leaders: barriers and lessons learned. Jt Comm J Qual Patient Safety. 2005;31(12):700–707. [PubMed]
77. Kochevar L, Yano EM. Understanding organizational needs and context: beyond performance gaps. J Gen Intern Med. 2006;21(suppl 2):S25–S29. [PMC free article] [PubMed]
78. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005. Organizational context and external influences; pp. 58–66. (FMHI Publication no. 321)
79. Solberg LI. Guideline implementation: what the literature doesn’t tell us. Jt Comm J Qual Improv. 2000;26(9):525–537. [PubMed]
80. Flottorp S, Havelsrud K, Oxman AD. Process evaluation of a cluster randomized trial of tailored interventions to implement guidelines in primary care—why is it so hard to change practice? Fam Pract. 2003;20(3):333–339. [PubMed]
81. Shaw B, Cheater F, Baker R, et al. Tailored interventions to overcome identified barriers to change: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2005;(3):CD005470. [PubMed]
82. Kirsh SR, Lawrence RH, Aron DC. Tailoring an intervention to the context and system redesign related to the intervention: a case study of implementing shared medical appointments for diabetes. Implement Sci. 2008;3(1):34. [PMC free article] [PubMed]
83. Jansen YJ, de Bont A, Foets M, Bruijnzeels M, Bal R. Tailoring intervention procedures to routine primary health care practice; an ethnographic process evaluation. BMC Health Serv Res. 2007;7(1):125. [PMC free article] [PubMed]
84. Murray E, Charles C, Gafni A. Shared decision-making in primary care: tailoring the Charles et al. model to fit the context of general practice. J Gen Intern Med. 2006;62(2):205–211. [PubMed]
85. Liu CF, Rubenstein LV, Kirchner JE, et al. Organizational costs of quality improvement for depression. Health Serv Res. 2009;44(1):225–244. [PMC free article] [PubMed]
86. Berwick DM. A user's manual for the IOM’s ‘Quality Chasm’ report. Health Aff (Millwood). 2002;21(3):80–90. [PubMed]
87. Helfrich CD, Weiner BJ, McKinney MM, Minasian L. Determinants of implementation effectiveness: adapting a framework for complex innovations. Med Care Res Rev. 2007;64(3):279–303. [PubMed]
88. Rye CB, Kimberly JR. The adoption of innovations by provider organizations in health care. Med Care Res Rev. 2007;64(3):235–278. [PubMed]
89. Schneider EC, Nadel MR, Zaslavsky AM, McGlynn EA. Assessment of the scientific soundness of clinical performance measures: a field test of the National Committee for Quality Assurance’s colorectal cancer screening measure. Arch Intern Med. 2008;168(8):876–882. [PubMed]
90. Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629. [PubMed]

Articles from Journal of the National Cancer Institute. Monographs are provided here courtesy of Oxford University Press