The research-practice gap in autism mirrors that in other conditions of childhood. A large body of research shows that interventions for children with psychiatric and developmental disorders are not as effective in communities as they are in research settings and do not sustain over time (Storch and Crisp 2004; Weisz et al. 2005). The lag between the development of evidence-based treatment and its integration into routine practice is estimated to be 20 years (Walker 2004). Developers and advocates of effective interventions have a responsibility to cultivate the conditions that will facilitate successful diffusion.
Diffusion of innovation theory highlights that the study of any intervention is always the study of that intervention in a context. Rather than treating contextual factors as nuisance variables, diffusion of innovation theory suggests that they are critical to the adoption and continued, committed use of the intervention. This idea was discussed in the NIH-sponsored meetings on the state of autism intervention research, where there was consensus that key stakeholders (e.g., families, teachers, clinicians, and administrators) must be involved in developing a research agenda to foster large-scale use of effective treatments (Lord et al. 2005). Despite this consensus, there was disagreement on how to move from efficacy trials (testing the intervention under ideal circumstances) to effectiveness trials (testing the intervention under real-world conditions).
In their excellent review of the current challenges facing the field of autism intervention research, Smith et al. (2007) propose a model for systematically validating and disseminating interventions for autism. This model provides a strong framework for setting the agenda for autism intervention research. However, as researchers in related fields have highlighted, for efficacious interventions to be successfully implemented, the community context must be considered explicitly throughout all phases of research.
These researchers (Glasgow et al. 2003; Schoenwald and Hoagwood 2001; Weisz et al. 2004) suggest that it may be necessary to rethink the current “efficacy-to-effectiveness” sequence, often described as the “stage pipeline” model (Rohrbach et al. 2006). A report of the National Institute of Mental Health’s Advisory Council (National Institute of Mental Health 2001) describes a model of treatment development that attends to service delivery issues at the outset. Similarly, Weisz and colleagues (2004) state, “To create the most robust, practice-ready treatments, the field [should] consider a shift from the traditional model to a model that brings treatments into the crucible of clinical practice early in their development and treats testing in practice settings as a sequential process, not a single final phase” (p. 304). Weisz and colleagues advocate research models that attend to setting characteristics from the start, in the initial pilot and testing phases. They outline steps—from manual development to wide-scale dissemination—that focus on the setting in which the service ultimately will be delivered. This model is designed to accelerate the pace at which interventions are developed, adapted, refined, and implemented in communities (Weisz et al. 2004). Other researchers (Glasgow et al. 2003) have developed frameworks to increase external validity in trials by providing criteria for evaluating interventions on their efficacy and applicability to real-world practice. They also emphasize that participatory research methods should be built into efficacy studies, rather than left for later phases of research.
Successful efforts to adapt interventions to the practice context are likely to be bidirectional. In addition to adapting interventions to improve their fit with the values and capacities of public settings, adaptations within these settings may also be needed to improve practices (Hoagwood and Johnson 2003). Researchers have improved organizational capacity to support interventions by working with key stakeholders to improve their willingness and ability to adopt, implement, and maintain the intervention. The ARC (availability, responsiveness, and continuity) model (Glisson and Schoenwald 2005) and the RE-AIM (reach, efficacy, adoption, implementation, and maintenance) framework (Glasgow et al. 2003) address barriers to the “fit” between social context and intervention by focusing organizational and community efforts on a specific population and problem, building community support for services that target the problem, creating alliances among providers and community stakeholders, encouraging desired provider behaviors, and developing a social context that fosters effective service delivery.
To highlight how these considerations can be incorporated within Smith et al.’s (2007) current model for conducting autism intervention research, we present Smith et al.’s recommendations for each phase of autism intervention research. In the right-hand column, we offer suggestions from dissemination, implementation, and community-based participatory research (Glasgow et al. 2003; Glasgow et al. 2001; Glisson 2007; Glisson and Schoenwald 2005; Israel et al. 2005; Weisz et al. 2004) on how to refine these phases in order to enhance the likelihood of uptake of efficacious autism interventions in the community. We also add a fifth stage, which is not explicitly considered within the scope of the Smith et al. (2007) model. The general principles presented are described in more detail below.
Target Research Towards Issues that are Most Salient to Public Practice
Autism interventions often receive broad media coverage and are adopted before claims of effectiveness have been adequately tested. Researchers increasingly recognize the need to test the validity of these claims quickly, but funding to test these claims is slow compared with the rate of information diffusion through commercial media (Lord 2000). Rapid funding of studies of autism interventions that have attracted significant public interest should be encouraged (Lord 2000). Similarly, intervention research should target ecologically valid outcomes that match the needs of stakeholders.
Enhance Generalizability of Intervention Studies by Including Heterogeneous Samples in More Naturalistic Settings
Most treatment research samples do not reflect the demographics or clinical presentation of the general population. Lord et al. (2005) suggest some strategies for increasing sample diversity. This issue also has been addressed elsewhere (Swanson and Ward 1995; Yancey et al. 2006). For example, Yancey et al. (2006) reviewed 95 studies describing methods of increasing minority enrollment and retention and identified several key recommendations, including: (a) reducing restraints on eligibility; (b) improving communication with potential minority participants to establish mutually beneficial goals and counteract mistrust of scientific research (e.g., by using personal contact rather than mass mailings); (c) facilitating community involvement by hiring outreach workers from the target population and working through community-based organizations, such as churches and schools; and (d) improving retention by providing intensive follow-up, having the same staff over time, and having accessible locations for intervention implementation and data collection, regular telephone reminders, and timely compensation.
Involve Stakeholders in Research from the Protocol Development Stage, and Have Them Assess the Fit of the Prototype to their Needs, Values, and Setting
Hoagwood and Johnson (2003) identify several organizational elements that must be understood to assess intervention fit, such as local policies, staffing, financing, and coordination of services. It is also critical to identify stakeholders’ perceptions of key barriers to diffusion and to address these barriers collaboratively. Participatory decision-making should be used throughout the dissemination and implementation process as a way to foster a strong sense of “team.” Finally, researchers can rely on social validity research to assess the fit of the program to stakeholders’ needs and values (Callahan et al. 2008; Gresham, Cook, Crews, and Kern 2004). Social validity research systematically assesses whether the goals, procedures, and outcomes of specific programs or interventions are acceptable to key stakeholders (Callahan et al. 2008).
Include Formal Data Collection and Comprehensive Follow-Up to Monitor Implementation Fidelity, Child Outcomes and Stakeholder Satisfaction, and Provide Ongoing Consultation
A review of autism intervention studies found that only 18% reported fidelity data (Wheeler et al. 2006). While measuring fidelity is fundamental to any study of intervention, it is particularly important for effectiveness studies, where fidelity is expected to be highly variable. Researchers also should study outcomes that are salient to stakeholders and incorporate measures of stakeholder satisfaction. For example, researchers can collect and report data to key stakeholders, monitor progress in solving problems stakeholders identify, and recommend changes (Glisson and Schoenwald 2005).
Plan for Intervention Maintenance by Providing Information, Training and Tools, and Incrementally Facilitating Community Practitioners’ Independent Use of the Intervention
In the absence of planning and support, community use of evidence-based interventions does not sustain over time (Shediac-Rizkallah and Bone 1998). Researchers should plan for sustainability by examining implementation during the study and determining what supports or modifications are necessary for it to continue after the study ends. Here again, autism intervention researchers can benefit from strategies identified in other fields, including: addressing multiple pathways to sustainability (e.g., policy change), rather than focusing exclusively on training; having organizational—in addition to individual—commitment to ensure stability; establishing program ownership among stakeholders and strengthening champion roles and leadership actions; building and maintaining local expertise; and establishing feasible evaluation strategies to monitor implementation quality and effectiveness (Israel et al. 2006; Johnson et al. 2004; Rohrbach et al. 2006).