Am J Community Psychol. Author manuscript; available in PMC 2010 October 27.
Published in final edited form as:
PMCID: PMC2964843
NIHMSID: NIHMS242468

The Getting To Outcomes Demonstration and Evaluation: An Illustration of the Prevention Support System

Abstract

Communities are increasingly being required by state and federal funders to achieve outcomes and be accountable, yet are often not provided the guidance or the tools needed to successfully meet this challenge. To improve the likelihood of achieving positive outcomes, the Getting To Outcomes (GTO) intervention (manual, training, technical assistance) is designed to provide the necessary guidance and tools, tailored to community needs, in order to build individual capacity and program performance. GTO is an example of a Prevention Support System intervention, which as conceptualized by the Interactive Systems Framework, plays a key role in bridging the gap between prevention science (Prevention Synthesis and Translation System) and prevention practice (Prevention Delivery System). We evaluated the impact of GTO on individual capacity and program performance using survey- and interview-based methods. We tracked the implementation of GTO and gathered user feedback about its utility and acceptability. The evaluation of GTO suggests that it can build individual capacity and program performance and as such demonstrates that the Prevention Support System can successfully fulfill its intended role. Lessons learned from the implementation of GTO relevant to illuminating the framework are discussed.

Evidence-based prevention can successfully address public health problems, such as drug and alcohol abuse, and be cost-effective (Caulkins et al. 1999; NIDA 1997; Spoth et al. 2002); but it needs to be comprehensive and implemented with quality to reap these benefits (Backer 2001). However, communities often face difficulties achieving outcomes using the evidence-based programs that are increasingly being mandated (Cantor et al. 2001; Crosse et al. 2001; CSAP 2000; Ennett et al. 2003; Hallfors et al. 2002; Kreuter et al. 2000; Reuter and Timpane 2001; Roussos and Fawcett 2000; Silvia and Thorne 1997; Yin et al. 1997). The resulting “gap” between research and practice (e.g., Green 2001; Wandersman and Florin 2003) calls for greater collaboration between researchers and practitioners to improve the quality of prevention and outcomes. Such a collaboration is proposed in the Interactive Systems Framework for disseminating and implementing preventive innovations (this issue). In this framework, researchers develop interventions, which are readied for use in the “Prevention Synthesis and Translation System”, and communities put them into practice in the “Prevention Delivery System”. A “Prevention Support System” plays the key role of linking these two systems, facilitating the process of translation within the Research System and implementation within the Delivery System in order to improve outcomes. In this article, we describe Getting To Outcomes (GTO), a quality improvement intervention that illustrates the Prevention Support System conceptualized in the Interactive Systems Framework. We also present preliminary evidence of its impact on the capacity of communities to conduct high quality prevention programming.

Getting To Outcomes Intervention

Getting To Outcomes (GTO) was developed to address the gap between prevention research and practice by building capacity (self-efficacy, attitudes, and behaviors) at the individual and program levels for effective prevention practices (e.g., choosing evidence-based practices; and planning, implementing, evaluating, and sustaining those practices). As such, GTO plays the role envisioned for the Prevention Support System, helping communities and organizations of the Prevention Delivery System to effectively put preventive interventions into place. GTO does this by posing ten questions (see Table 1) that must be addressed in order to obtain positive results, and then providing practitioners with the guidance necessary to answer those questions with quality. Each question is linked to a specific step in the GTO process: six for planning, including the use of evidence-based strategies (steps 1–6); two for process and outcome evaluation (steps 7–8); and two on the use of data to improve and sustain programs (steps 9–10). This GTO process is facilitated by the GTO intervention, which has three components: the GTO manual of text and tools published by the RAND Corporation, Getting to Outcomes 2004: Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation (Chinman et al. 2004); face-to-face training; and on-site technical assistance (TA).

Table 1
The 10 GTO steps, questions and corresponding information in the GTO manual

In addition to offering specific strategies for effective prevention practice as shown in Table 1, the GTO intervention also attempts to work with organizational (e.g., coalition) leadership to modify structures to better support the integration of the practices targeted by GTO into routine operations. The goal is to transform organizations into true learning organizations (Preskill and Torres 1999): ones that emphasize learning about themselves, value continuous quality improvement, and are open to genuine change (Argyris 1999; Argyris and Schön 1978, 1996). Therefore, early stages of the GTO intervention focus on building individual capacity using the manual, trainings, and on-site TA. Later, the GTO intervention works collaboratively with community practitioners to incorporate the GTO process into the way prevention is routinely conducted, helping them to plan for GTO and secure future resources for its use.

Theory of GTO

GTO is an operationalization of Empowerment Evaluation theory, which holds that there is a greater probability of achieving positive results when evaluators collaborate with program implementers and provide them with both the tools and the opportunities to better plan, implement with quality, evaluate outcomes, and develop a continuous quality improvement system themselves (Fetterman and Wandersman 2005). Using this approach, the GTO intervention is designed to facilitate local communities’ use of assessment techniques such as traditional evaluation methods (e.g., Rossi et al. 2004) to pursue outcomes that meet the goals and objectives of funders, communities, and/or known quality standards. GTO then helps communities to continually use their evaluation data to improve, consistent with Continuous Quality Improvement, a technique of Total Quality Management (Deming 1986; Juran 1989). While the Prevention Support System and GTO both have capacity building as a goal, the Interactive Systems Framework does not specifically discuss how the Prevention Support and Prevention Delivery Systems ought to work together. According to GTO, it is useful to use Empowerment Evaluation, which strives to create a collaborative relationship between researchers and practitioners, as a vehicle for capacity building. The theory of GTO is described more fully in Wandersman et al. (2000).

GTO is designed to empower community coalitions to engage in accountability activities that are associated with quality prevention, such as conducting process and outcome evaluation and using the information to inform decision-making. Consistent with social cognitive theories of behavioral change (Ozer and Bandura 1990; Bandura 2004), we hypothesized that participation in GTO would lead to a greater sense of competency and effectiveness (i.e., self-efficacy) in performing GTO-related activities, such as goal setting and evaluation. In addition to a greater sense of self-efficacy, we believed that exposure to GTO through training and TA would lead to more positive attitudes toward GTO-related activities. Finally, given that behaviors are related to one’s attitudes toward those behaviors (Ajzen and Fishbein 1977; Fishbein and Ajzen 1974, 1975), we expected that participation in GTO training and TA would lead to the execution of more GTO-related behaviors.

The GTO Demonstration and Evaluation Design

We had combined goals of evaluating the impact of GTO on individuals’ prevention capacity and programs’ prevention performance, and, at the same time, improving the GTO intervention. We employed elements of a longitudinal, quasi-experimental design to assess the impacts of GTO and elements of the Continuous Quality Improvement Design (CQI-Design; Rapkin and Trickett 2005) to modify and improve it within the participating sites. The power of the CQI-Design is that it utilizes multiple sources of data to indicate performance quality, has local participants and researchers review those data to collaboratively make improvements, and then repeats that cycle with the improved intervention. This hybrid design was used to answer two questions: (1) can a Prevention Support System intervention such as GTO improve the level of individual capacity and program performance of those in the Prevention Delivery System?, and (2) what lessons from the implementation process and midcourse corrections can be applied to future demonstrations of capacity improving interventions like GTO? We hypothesized that both the individuals and programs as a whole assigned to GTO would show greater improvement in capacity and performance than those who were not assigned to the intervention. Both questions are relevant to improving the operations of a Prevention Support System conceptualized in the Interactive Systems Framework. Our approach included conducting three waves of a coalition staff survey to measure individual change in self-efficacy, attitudes, and behaviors associated with GTO; interviewing key informants in GTO and comparison programs to gauge program level change in prevention performance using structured rating scales; monitoring implementation and technical assistance in the GTO programs; obtaining regular feedback from participants; and modifying the intervention itself throughout the implementation period. 
Fig. 1 presents our measures as part of our theoretical model and evaluation strategy, with a timeline of when they were administered.

Fig. 1
The GTO demonstration’s theoretical model and evaluation strategy with timeline of measures

Methods

Study Sites

The study sites were two community-based, substance abuse prevention coalitions: Santa Barbara Fighting Back (SBFB) in Santa Barbara, California; and Lexington/Richland Alcohol and Drug Abuse Council’s (LRADAC) Governor’s Cooperative Agreement for Prevention (G-CAP), headquartered in Columbia, South Carolina. SBFB began in 1991 with an RWJ Foundation grant that was renewed for an additional five years ending in 2002. SBFB has replaced the RWJ support with a variety of local, state, and federal funding and implements 16 adult and adolescent programs that span the prevention-to-treatment continuum. The South Carolina coalition provides similar services, operating five prevention programs at the time of this project. LRADAC began in 1998 and, in 2002, was one of 19 local initiatives in South Carolina that shared funding provided by the federal Center for Substance Abuse Prevention State Incentive Grant program. Unlike the California site, the SC coalition had some prior exposure to GTO during an earlier developmental phase of GTO. Both coalitions are similar in that they have a small number of paid staff supporting a large volunteer base, which is divided into committees based on different community sectors (criminal justice, youth, health, media, etc.).

Selection of Site Programs into Study Conditions

Compatible with a phased intervention development approach (Rothman and Thomas 1994), program selection was purposive and targeted a small number of programs. Selection was based on discussions between the GTO team and the executives and program managers of the coalitions about which programs might most benefit from GTO (e.g., because they had no current evaluation process) and offer good demonstration sites for obtaining feedback on implementation. The choices were consistent with the approach of the CQI-Design and the GTO model, which emphasize ongoing local tailoring, input, and improvement. There were two programs in South Carolina and four in California. The six programs were:

  • GTO 1 (SC): An ongoing district-wide social norms media campaign to educate youth about how, in reality, most youth do not use drugs and alcohol
  • GTO 2 (SC): An ongoing school-based parenting skills education program
  • GTO 3 (CA): An ongoing student assistance program whose campus-based personnel deliver universal and selective prevention programs, coordinate campus wide drug free events and provide one on one counseling and referrals in middle and high schools
  • GTO 4 (CA): An ongoing teen court that diverts first time youth offenders from the criminal justice system and uses a jury of their peers to impose sentences that include peer groups, alcohol and drug education, or community service
  • GTO 5 (CA): An ongoing mentoring program that matches high-risk elementary school children with adult mentors
  • GTO 6 (CA): An evidence-based adolescent treatment program that was new to the coalition, Cannabis Youth Treatment, a time-limited group program designed to prevent further marijuana use

Only one new program was included in the demonstration. GTO 1–4 participated for two years; GTO 5 and 6 participated for only one year. Other programs in each coalition provided natural comparison groups. Two types of comparison groups were created. At the individual level (for the coalition staff-wide survey), the comparison group consisted of all other coalition members who staffed the non-GTO programs.

For the performance ratings made at the program level, four programs (two in each coalition) that were the least connected to the GTO programs were selected. These programs were:

  • Comparison 1 (SC): A curriculum-based program for 5th and 6th grade girls and their parents/caregivers that addresses the need for gender-specific prevention for girls as they transition to middle school
  • Comparison 2 (SC): A curriculum-based program for high school juniors and seniors that addresses the risks posed by their transition to college or the workplace
  • Comparison 3 (CA): An ongoing public awareness and media campaign
  • Comparison 4 (CA): An ongoing employee assistance program

GTO Intervention

Training in the GTO Model

The goals of the initial training sessions were to help the practitioners anticipate and understand how the GTO model operates and how they might use it, to introduce the technical assistance staff (see below), and to distribute and explain the GTO manuals. The manual, Getting to Outcomes 2004: Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation (Chinman et al. 2004), fully explains the 10-step framework and provides tools to assist with many of the planning, implementation, evaluation, and sustainability tasks needed for prevention programming. These tools, mostly Word documents, are contained on a CD-ROM that accompanies each manual so that they can be tailored by either program or TA staff. In this study, the training was conducted with most staff of the GTO programs over a full day at the start of the project and was repeated about a year later. It involved walking coalition staff members through the 10 accountability questions and providing concrete examples of how each question could be successfully addressed with GTO tools from the manual. The training is designed to be interactive and involves discussions of staff’s previous prevention experience. It also includes several exercises that involve practicing the 10 questions using both hypothetical scenarios and scenarios taken from the local community.

Implementation of GTO

After training, we began providing ongoing technical assistance (TA). There were two technical assistants, one for each coalition, who met with the staff of each GTO program weekly for about an hour and a half to two hours and provided an additional two hours of support via phone, email, and meeting follow-up. The two technical assistants were PhD psychologists who had strong backgrounds in research and evaluation and in working with communities. They provided additional information about the GTO model and assistance with how to use each step to guide their respective programs. This included:

  • examining needs data (GTO Step 1)
  • developing realistic goals and objectives (GTO Step 2)
  • ensuring program fit by tailoring programs or initiatives as needed (GTO Step 4)
  • securing the needed resources to ensure high quality implementation (GTO Step 5)
  • planning to ensure all aspects of the program or initiative are included (GTO Step 6)
  • conducting process and outcome evaluations (GTO Steps 7 & 8)
  • using the process and outcome data to improve current and future programs (GTO Step 9)
  • working to sustain effective programs or initiatives (GTO Step 10)

Based on the TA literature (Chinman et al. 2005; Mitchell et al. 2004; O’Donnell et al. 2000; Stevenson et al. 2002), we used a collaborative TA process that had structured steps but was flexible enough to allow the program staff to dictate the focus. This was critical because the ongoing programs, being already underway rather than new, expressed that they had priority needs among the 10 GTO questions. Therefore, the first step was to conduct a TA needs assessment using the GTO Needs Assessment Tool. This tool lists all the tasks associated with high quality prevention, organized by the GTO 10-step framework. Using this tool, the TA and program staff together assessed which of these tasks needed to be completed. For example, most programs were far more interested in evaluation and quality improvement than in goal setting and selecting best practice programs, and the TA accommodated these preferences. Based on the results of that assessment, the TA and program staff devised a plan of action using the GTO Implementation Planning Tool. This second tool prompts the staff and the TA practitioner to consider the current challenges facing the program, decide what steps are needed to address those challenges, determine which tools from the GTO manual would be useful, pinpoint anticipated completion dates for those tools and steps, and designate the person(s) responsible for carrying out the specified actions.
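
As an illustration only (the actual Implementation Planning Tool is a Word document, not software, and the field names below are invented), the information each planning entry captures could be represented as a simple structured record:

```python
# Hypothetical sketch of the kind of record the GTO Implementation Planning
# Tool captures for each program challenge; all names here are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class PlanningEntry:
    challenge: str       # current challenge facing the program
    steps: list          # actions needed to address the challenge
    gto_tools: list      # tools from the GTO manual judged useful
    due: date            # anticipated completion date
    responsible: str     # person(s) responsible for the actions

entry = PlanningEntry(
    challenge="No process evaluation in place",
    steps=["Draft attendance log", "Pilot the log for one month"],
    gto_tools=["Process evaluation tools (GTO Step 7)"],
    due=date(2005, 3, 1),
    responsible="Program director",
)
```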

Monitoring GTO Implementation and Modifying the GTO Intervention

In the interest of obtaining early information about program effectiveness (Reid 1994), we established a process in which coalition staff members not only received feedback to improve their implementation of GTO in their programs, but also provided the GTO staff with feedback about the utility of the GTO model, consistent with the CQI-Design. This bi-directional feedback took place at the meetings of the Community Workgroups. These workgroups were formed at both coalitions with representation from GTO research personnel, top coalition leadership, and the program directors and key staff of the programs receiving the GTO intervention. The Workgroups met approximately quarterly throughout the life of the project for formal feedback discussions. In these meetings, each GTO program gave an update on its progress, often using data from the evaluations it had helped plan and conduct, and common barriers and challenges were identified. With the directors of the coalitions present in those meetings, problem-solving that required budget or other leadership decisions was more likely to be acted upon. We hoped that this meeting would serve as a permanent forum for continuous quality improvement across all the programs, consistent with a learning organization. In addition to the Workgroup meetings, the two TA staff continuously observed program planning, implementation, and evaluation activities and used those observations to guide their feedback to program staff during the weekly TA sessions.

Midcourse Intervention Modifications

The feedback and discussion with users that took place at the Community Workgroup meetings led to several modifications to the GTO intervention. GTO training was revised substantially between the first and second rounds based on feedback from participants, who requested that the training be more engaging, feature more interaction, and match their specific programs’ operations more closely. To make the training more engaging, we collaborated with a coalition staff person to develop a simulation in which trainees role-played hypothetical families using the GTO model to decide whether or not to purchase a new car. To convey basic GTO concepts (i.e., defining the GTO steps) more interactively, we reduced the amount of didactic presentation and used a question-and-answer period to guide participants to learn various GTO concepts. We also added exercises such as completing a logic model using data specific to each program.

Modifications were also made to the GTO manual. For example, additional GTO tools were developed throughout the study based on program needs. These included: a template for creating “desk notes” or a detailed job description that conformed to the GTO model; a monthly report form template for program directors to use to provide program updates to the coalition executives on each of the GTO steps; a timeline for grant reporting that specified the funders, progress report due dates, and information required in the reports for staff; a semester accountability report that allowed the school-based staff to describe delivery of different prevention activities, participation rates, facilitators and barriers to delivery and participation to their manager; and a GTO implementation tool that helped the TA and program staff plan and execute activities specified by the GTO model. Other program-specific worksheets and tools were developed to help specify needs or better coordinate services across the agencies, such as referral forms, call logs, and a list of coalition programs that included contact information along with target population, activities, and referral sources.

TA was also modified, in part by adding two additional GTO programs in the California site after the first year of implementation. As a result, the intensity of TA for each California program was less in the second year compared to the first. In addition, the technical assistants began to work with more program staff due partly to high staff turnover and partly to the high demand for TA from multiple staff. Finally, the technical assistant in the California site helped to start an additional monthly meeting of coalition program directors after year one to provide programs with an opportunity to discuss cross-cutting issues. Coalition leadership wanted both GTO and non-GTO programs to be part of this meeting. This provided an additional opportunity for the organization to begin routinizing GTO practice into management activities and for non-GTO staff to gain GTO self-efficacy, to improve GTO attitudes, and to perform more GTO behaviors.

Data Collection and Measures

Measuring Impact on Individuals’ Prevention Capacity: Coalition Survey

We assessed capacity at the individual coalition staff member level with the “Coalition Survey”. All coalition staff members were initially approached to complete the Coalition Survey at regularly scheduled meetings at baseline (Wave 1), again after one year (Wave 2), and again after two years (Wave 3) of GTO implementation. Non-responders and those not in attendance were followed up through the mail or via mailboxes at the coalition offices.

The survey had three sections. The first section consisted of demographics, including age, race/ethnicity, gender, education, employment status, and length of time belonging to the coalition. The second section contained a GTO Participation Index based on Hall and colleagues’ model of categorizing the degree to which individuals “use” an innovation (Hall and Hord 2001; Hall et al. 1975; Hall and Loucks 1977). In that model, characteristics associated with greater “levels of use” are exposure to training and other educational materials, working collaboratively, and securing resources to support implementation. The Index consists of 6 true/false items (T = 1, F = 0) that assess these key markers of use, including participation in training, reading the materials, planning, discussing the model with colleagues, securing resources, and receiving TA. Researchers coded receipt of TA based on notes from TA meetings. We used the GTO Participation Index to assess implementation of, or exposure to, the intervention and in the capacity outcome analyses described below.
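
To make the scoring concrete, the index is just a count of endorsed true/false markers, yielding scores from 0 to 6. A minimal sketch (item names are invented; the study’s own coding was done in SAS) might look like:

```python
# Hypothetical scoring of the 6-item GTO Participation Index (T = 1, F = 0).
# Item names below are invented for illustration.
ITEMS = ["attended_training", "read_materials", "planning",
         "discussed_with_colleagues", "secured_resources", "received_ta"]

def participation_score(responses):
    """Sum of true/false items; an unanswered item counts as False."""
    return sum(1 for item in ITEMS if responses.get(item, False))

member = {"attended_training": True, "read_materials": True,
          "planning": False, "discussed_with_colleagues": True,
          "secured_resources": False, "received_ta": True}
score = participation_score(member)  # 4 of the 6 markers of use endorsed
```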

The third section assessed prevention capacities, defined as the self-efficacy, attitudes, and behaviors individuals need to perform high quality prevention; these items were developed specifically for this study. The first 23 items in this section address GTO self-efficacy (i.e., perceived competency and effectiveness in conducting such activities as needs assessment, setting goals and objectives, using evidence-based practices, determining fit and capacity, implementing programs, conducting process and outcome evaluations, engaging in continuous quality improvement activities, and sustaining programs) by asking each respondent to self-report how much assistance they would need to complete these tasks, from 1 = a great deal, to 2 = some, to 3 = none. These items were asked only at Wave 3. A single GTO self-efficacy scale was created from these 23 items. Factor analysis using varimax rotation yielded a scree plot for the self-efficacy factors that supported one scale, with the top four factor eigenvalues being 11.8, 1.7, 1.1, and 1.0. All but one item loaded most strongly onto a single factor, and the item that did not load primarily onto this factor had a loading similar to that for a second factor; all loadings were greater than 0.4. The self-efficacy factor accounted for 51% of the explained variation, and the resulting scale had a Cronbach’s alpha of 0.96.
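
For readers less familiar with the internal-consistency statistic reported throughout this section, Cronbach’s alpha compares the sum of the item variances with the variance of the total scale score. A minimal computation on toy data (not the study’s data) is:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total)).
# Toy data only; the study's scales had 23, 10, and 10 items.
def cronbach_alpha(rows):
    """rows: list of respondents, each a list of k item scores."""
    k = len(rows[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four respondents x three items; consistent responding yields a high alpha.
data = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [2, 3, 2]]
alpha = cronbach_alpha(data)  # 0.96 for this toy data
```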

The next 16 items were attitudinal items, with response choices ranging from strongly agree = 1 to strongly disagree = 7. Similar to the other scales, these items are linked to the domains of each GTO step. For example, “Program staff often know whether a strategy is working well without having to do a formal evaluation” was used to assess attitudes toward evaluation (Step 8). Scree plots from a factor analysis of these items supported one factor, which accounted for about 25% of the explained variation. Though four to six factors had eigenvalues greater than 1 across the three waves of data, the additional factors were not easily interpretable and yielded scales with low Cronbach’s alpha values. We therefore developed a single Attitude Index using the 10 items that loaded onto the first factor with loadings greater than 0.4. The Attitude Index ranged from 10–70 (10 items × 7 choices), with higher scores indicating greater endorsement of quality prevention practices; its Cronbach’s alphas were 0.75–0.76 across the three waves.

The final 10 items asked about the frequency with which the GTO behaviors targeted by each of the 10 steps in the GTO model were performed in the past 12 months, ranging from 1 = never to 7 = very often. Within each wave, the 10 items were highly positively intercorrelated, with no correlation falling below .4 and all significant at p < .0001. Factor analysis of these items yielded one factor with an eigenvalue greater than 1 that accounted for 65% of the explained variation. The Behavioral Index developed from all 10 items ranged from 10–70 (10 items × 7 choices), with higher scores indicating greater frequency. The Cronbach’s alphas of the Behavior Index were .94, .93, and .95 at Waves 1, 2, and 3, respectively.

Measuring Impact on Program Level Prevention Performance: GTO-Innovation Configuration Map

We used the GTO-Innovation Configuration Map to assess prevention performance at the program level. Although programs consist of individual people with varying levels of ability, we made performance ratings at the program level since the program operates as a gestalt. Based on the idea that innovations are often implemented differently than intended, “IC Maps” are a framework that can be tailored to evaluate the quality of use of any innovation (Hall and Hord 1987, 2001). In this project, we adapted the IC Map, with participation from the IC Map developer Dr. Gene Hall, to create the GTO-IC Map. The GTO-IC Map has 14 items (called “components”) tied to the ten steps of the GTO model. For GTO steps 7–9, we proposed that multiple components were needed to capture both (1) the mechanics or implementation quality of the step, and (2) the extent to which the data were used in decision-making to improve practice. Each component had seven possible response choices, including the ideal performance of the prevention practices targeted by GTO and six other possible variations ranging from “highly faithful to” to “highly divergent from” what is specified in the GTO model. Each component’s seven response choices are accompanied by descriptions of observable behaviors specific to GTO. For example, for Outcome Evaluation Decision-Making, the highly faithful description was: “Synthesized review and making strategic decisions based upon all outcomes evaluation results for outcome improvement” and the highly divergent description was: “Actions not influenced by empirical data (e.g., use of personal anecdotes for making strategic decisions)”. The steps of the GTO model (e.g., conducting outcome evaluation) are good prevention practices in general; thus, the comparison programs might demonstrate varying levels of performance, making the GTO-IC Map applicable to both intervention and comparison programs.

GTO research staff conducted semi-structured interviews, developed in consultation with Hall, with the program directors of the ten programs and used those interviews to make the GTO-IC Map ratings on each of the 14 components. Coalition staff from each of the GTO and non-GTO programs described above were interviewed at baseline and again after one and two years of GTO implementation. Blinded interviews were not possible, as GTO participants universally mentioned GTO by name in their responses while staff in the comparison group did not. GTO programs 5 and 6 joined the intervention only in the second year, and therefore their first two ratings could both be considered baselines. Comparison program 2 ceased program activities after one year and therefore had only two ratings. Two research staff members made ratings from the same interview in two thirds of the cases across all three waves, and inter-rater reliability (Pearson r) was calculated. The average inter-rater reliability for the 14 components was .74, ranging from .45 to .96.
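
The reliability check is a Pearson correlation between the two raters’ component ratings from the same interview. A sketch with hypothetical 1–7 ratings (not the study’s data) is:

```python
# Pearson r between two raters' GTO-IC Map component ratings (toy ratings).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rater1 = [7, 5, 6, 3, 4, 6, 2]  # hypothetical 1-7 ratings on 7 components
rater2 = [6, 5, 7, 3, 5, 6, 2]
r = pearson_r(rater1, rater2)   # close agreement yields r near 1
```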

Measuring Implementation and Obtaining Feedback on the GTO Intervention

In addition to feedback obtained at the Community Workgroup meetings described above and informal observation of implementation by the technical assistants, we used the TA Monitoring Form to track date, type of TA (in person, phone, e-mail), staff present, and duration of the TA by GTO step. The technical assistant also wrote a short narrative description of the TA provided. This form was completed each time TA was provided by the TA staff person at each coalition.

After two years of GTO implementation, we conducted in-depth qualitative interviews with program directors and key staff of each of the GTO demonstration programs (two programs in SC, four programs in CA) and top executives (two in SC, three in CA) in the Fall and Winter of 2005. These interviews were conducted by an independent interviewer who had not been part of the intervention. The individual interview format was chosen to yield unbiased information about the feasibility and utility of the GTO model. The success of GTO depends a great deal on obtaining local input and support, and therefore our detailed interview protocol included questions about the coalitions’ beliefs about: (1) how GTO as a whole and its three components (manual, training, TA) were helpful, unhelpful, or challenging; (2) what factors, both GTO and non-GTO related, facilitated the use of GTO; and (3) what factors, both GTO and non-GTO related, served as barriers to the use of GTO. The interviews used open-ended (“grand-tour”) questions within each topic area, followed by focused, standard probes, such as verification and compare-and-contrast questions (Becker 1958; O’Brien 1993; Spradley 1979). Interviews were conducted in person and tape-recorded, and detailed summaries were created for each interviewee following well-established procedures for semi-structured interviews (Bernard 2000; Kvale 1996; McCracken 1988).

Data Analyses

Individual Prevention Capacity Change: Coalition Survey

To gauge the level of participation in the GTO process, we tested for differences between the GTO and comparison groups on the activities that made up the GTO Participation Index across the two waves using a Rao-Scott chi-square test in SAS Version 9.1.3 PROC SURVEYFREQ (SAS Institute Inc. 2002). We also tested for differences between the GTO and comparison groups on the GTO Participation Index mean score across the two waves using a t-test.
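The mean-score comparison amounts to a two-sample t-test. A sketch with made-up Participation Index scores; the pooled-variance formula shown here is an assumption and may differ in detail from the SAS defaults used in the study:

```python
import math

def two_sample_t(x, y):
    """Pooled-variance two-sample t statistic for the difference in means."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    se = math.sqrt(pooled * (1 / nx + 1 / ny))
    return (mx - my) / se

gto = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0]    # invented GTO-group index scores
comp = [0.5, 1.0, 0.0, 2.0, 1.5, 0.0]   # invented comparison-group scores
t = two_sample_t(gto, comp)             # positive t: GTO mean is higher
```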

Preliminary Analyses

Because programs were nested within the GTO and non-GTO conditions, we assessed and controlled for significant differences in demographic characteristics between the two groups in all of these models, as identified in Table 2. To do so, we used SAS Version 9.1.3 PROC GLIMMIX (SAS Institute Inc. 2002) to test for bivariate associations between GTO assignment and participant characteristics. In addition, for the secondary regression analyses of GTO exposure conditional on GTO assignment, we controlled for time by including a dummy variable for being in Wave 2 or 3 in the attitudes and behaviors analyses. Missing survey items were accounted for through multiple imputation (Little and Rubin 2002). To account for clustering due to program participation, random effects regression models were fit to five multiply-imputed data sets. Imputations were done using SAS PROC MI, and the results from the five imputed data sets were combined using SAS PROC MIANALYZE to derive an overall result that accounted for the uncertainty due to missing data.
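The combining step implements Rubin’s rules, which PROC MIANALYZE automates: the pooled estimate is the mean of the per-imputation estimates, and the total variance adds the average within-imputation variance to an inflated between-imputation variance. A sketch with illustrative numbers (not study data):

```python
import math

def rubins_rules(estimates, variances):
    """Pool estimates and their squared standard errors from m imputed data
    sets: total variance = within + (1 + 1/m) * between (Rubin 1987)."""
    m = len(estimates)
    q_bar = sum(estimates) / m                              # pooled estimate
    u_bar = sum(variances) / m                              # within variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between variance
    total = u_bar + (1 + 1 / m) * b
    return q_bar, math.sqrt(total)                          # estimate, SE

# e.g., a regression coefficient and its variance from five imputations
est, se = rubins_rules([1.1, 1.3, 1.0, 1.2, 1.4],
                       [0.04, 0.05, 0.04, 0.05, 0.04])
```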

Table 2
Characteristics of study sample

Impact on prevention attitudes and behaviors

In our primary analyses, we fit random effects models using SAS PROC MIXED to the Attitude Index measured at each wave. Random intercept terms were included to adjust for clustering due to the specific GTO and non-GTO programs in which members were working (programs were nested within the GTO versus non-GTO conditions) and for completion of multiple surveys. We included an indicator variable for GTO assignment, a time variable to indicate survey wave, and an interaction term for GTO assignment and time to test whether there was a GTO assignment effect. In our secondary analyses, we also used these models to examine the relationship between the amount of GTO exposure (using the GTO Participation Index) and attitudes, conditional on having been in a GTO program. Since the amount of GTO exposure was reported at Waves 2 and 3, we examined data from only these waves for this secondary attitudes analysis. Using the Behavior Index, we conducted the same primary and secondary analyses as described above for the Attitude Index.

Impact on prevention self-efficacy

We also used random effects models to analyze the self-efficacy scales collected at Wave 3, again accounting for the clustering due to the programs in which members were working. Since the self-efficacy items were only available at Wave 3, we tested the GTO effect by including just the GTO assignment indicator variable as a predictor in the model. Similar to the attitudes and behavior analyses, we also conducted a secondary associational analysis, only for persons in GTO-assigned programs at Wave 3 (n = 68), to examine the relationship between the GTO Participation Index and self-efficacy.

Program Performance Change: GTO-Innovation Configuration Map

Given the small number of programs in the study, we were limited to descriptive analyses. We calculated the mean GTO-IC Map score across all 14 components for both the GTO and non-GTO programs at three points in time (baseline, after one year, and after two years of GTO implementation). We then calculated the percent change for each component and for the overall score, both from baseline to one year and from baseline to two years, for the GTO and non-GTO programs. Comparison program 2 ceased program operations after the first year, and thus its percent change after two years could not be calculated.
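The descriptive change scores reduce to simple percent change relative to baseline; a sketch with invented mean scores (not the study’s actual ratings):

```python
def percent_change(baseline, followup):
    """Percent change in a mean GTO-IC Map score relative to baseline."""
    return 100.0 * (followup - baseline) / baseline

# Invented overall mean scores across the 14 components
gto_change = percent_change(3.0, 4.4)    # e.g., GTO programs, baseline to year 2
comp_change = percent_change(3.2, 3.6)   # e.g., comparison programs
```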

Implementation Monitoring and Intervention Feedback: Technical Assistance Monitoring

We used descriptive statistics to present the amount of TA (in hours) delivered to each of the six programs over the two-year period. For each GTO program, we calculated the total number of hours of TA delivered and the percent of TA time spent on each GTO step. Aggregating across all six GTO programs and treating each GTO step as a case (i.e., n = 10), we calculated a Pearson correlation between the TA hours spent on each GTO step and the amount of change exhibited by that step on the GTO-IC Map from baseline to two years of GTO implementation. We also calculated the same correlation using change after one year of GTO implementation.

Implementation Monitoring and Intervention Feedback: Interviews

After each interview, the interviewer used the recordings to prepare an initial written narrative summarizing the various themes identified in the interview. The summaries were developed by beginning with the interview protocol to establish the general categories of themes (e.g., how the GTO model helped) while allowing additional themes to emerge consistent with grounded theory (Glaser and Strauss 1967; Strauss and Corbin 1990). From the detailed narratives of each interview, a senior graduate student familiar with GTO but not otherwise connected to the project created an initial written narrative across all the interviews, again following the protocol while allowing for new themes to emerge. The GTO team then reviewed the detailed interview summaries and the initial narrative and prepared a final narrative across all the interviews based upon consensus discussion.

Results

First we describe study findings relevant to Research Question #1: Can a Prevention Support System intervention such as GTO improve the level of individual capacity and program-level performance of those in the Prevention Delivery System?

Individual-Level Capacity

Coalition Survey: Sample Characteristics

Characteristics of the two coalitions’ staff assigned to the GTO intervention and comparison programs, as documented by the coalition survey at the three waves, are displayed in Table 2. In both the GTO and comparison programs, respondents were primarily white, middle-aged females with over three years of involvement. Respondents assigned to the comparison condition were less likely to report being a high school graduate than respondents assigned to the GTO condition across all three data collection periods. At the third data collection period only, respondents in the comparison condition had spent more time in the coalition and were less likely to be line staff than respondents in the GTO condition.

Participation in GTO

Table 3 displays the amount of participation in GTO-related activities across the GTO and comparison conditions at the second and third waves of the coalition survey. A total mean score across all of the individual GTO-related activities was computed for each condition for use in further regression analyses (i.e., the GTO Participation Index; see next section). The average score for the GTO group’s Participation Index was low but varied greatly (at Wave 2, M = 2.49, SD = 2.06; at Wave 3, M = 2.22, SD = 2.04). At Wave 2, 44% of those assigned to the intervention condition had moderate to high (score of 3 or higher) participation in the GTO intervention; at Wave 3, 43% did.

Table 3
Participation in the GTO intervention

In addition, there was some contamination of the GTO intervention into the comparison group, which also participated in a small amount of the intervention (at Wave 2, M = .79, SD = 1.20; at Wave 3, M = 1.20, SD = 1.67). Comparing the two groups, a higher percentage of GTO respondents reported participating in GTO training, reading most of the materials, making plans for use, and talking to others at the second wave. At Wave 3, there were fewer differences between the two groups on the individual GTO-related activities, and the GTO Participation Index mean score was no longer statistically different between the GTO and comparison groups. The GTO Participation Index was used in secondary analyses examining the impact of GTO on individual-level capacity, that is, self-efficacy, attitudes, and behaviors.

Primary Analyses

There were 156, 153, and 123 persons who responded to the coalition survey at Waves 1, 2, and 3, respectively, with 268 persons completing a survey at least once. The coalitions experienced a high level of staff turnover. Thus, the response rates were computed as the number of responders out of all coalition staff present: 65% (90% in California, 47% in South Carolina), 52% (71% in California, 38% in South Carolina), and 69% (79% in California, 52% in South Carolina) at Waves 1, 2, and 3, respectively. Because many respondents completed surveys at multiple waves, there were 432 observations gathered from the 268 persons who completed one to three coalition surveys. Compared to those assigned to the comparison condition, staff in a GTO program did not experience a significant increase in their ratings on the self-efficacy scale (p = 0.43), Attitude Index (p = 0.63), or Behavior Index (p = 0.66).

Secondary Analyses

We conducted secondary analyses including only those respondents assigned to the intervention condition (n = 77 at Wave 2, 68 at Wave 3). Responses on the self-efficacy, attitude, and behavior measures were regressed on the GTO participation scores. The findings indicate that more GTO participation was associated with significant (p < .002) increases in ratings on the Attitude and Behavior Indices and a significantly (p = .01) greater self-efficacy score (Table 4). In other words, for each additional level of GTO participation, respondents assigned to the GTO programs experienced an increase of 1.18 and 3.05 units on the Attitude and Behavior Indices, respectively. Similarly, for each additional level of GTO participation, respondents assigned to the GTO programs were 1.98 units higher on the self-efficacy scale at Wave 3.

Table 4
The relationship between GTO participation index and individual prevention capacity among those in GTO-assigned programs

Program-Level Performance

Table 5 displays the program-level performance ratings for each program across the three data collection periods. Similar to the Behavior Index, we combined the ratings for all 14 components into a single program performance score. The ratings indicate that, overall, the GTO programs consistently improved over time compared to the comparison programs. The two programs that had some prior exposure to GTO (i.e., GTO 1 and 2 in South Carolina) had the highest baseline ratings. The two programs that did not receive technical assistance until after the second data collection period (i.e., GTO 5 and 6 in California) did not show increases in the IC Map ratings until the third data collection period. This improvement after one year of the GTO intervention was similar in magnitude to the improvement demonstrated by the other two programs in California (i.e., GTO 3 and 4) after their first year of GTO. We also show each individual GTO-IC Map component over time for the programs averaged together by group (GTO vs. comparison). Comparing just those programs that had a full two years of the GTO intervention, the GTO programs improved almost three times as much as the comparison programs (46% vs. 12%). In particular, the three areas that showed the largest differential improvement over the comparison programs (in % change, GTO vs. comparison) were outcome evaluation decision-making (113 vs. 50), process evaluation mechanics (53 vs. 8), and CQI mechanics (54 vs. 9). The three areas that showed the least improvement were best practices (5 vs. 17), sustain (43 vs. 44), and needs (50 vs. 38).

Table 5
GTO-IC Map ratings of program level performance

Technical Assistance Monitoring

There was variability in how much TA was delivered to each of the programs, depending upon which year they started receiving the TA (i.e., first or second), the coalition staff’s availability, and the tasks on which they desired to focus. Programs received between 78 and 322 hours of TA over the course of the study (1–3 h per week). The teen court (GTO 4 in California) and social norms (GTO 1 in South Carolina) programs received the most hours of TA; the two California programs that were added in the second year (GTO 5 and 6) received the least. Despite the variation in the amount of TA received and the differences among the programs (each of the six GTO programs had different content, histories, and funding sources and amounts), they all showed a similar pattern of technical assistance use. All the programs tended to want the most assistance with planning and with process and outcome evaluation; about three-quarters (73%) of the TA time was spent on these three steps.

Relationship between Technical Assistance and Program Level Performance Changes

Figure 2 displays the relationship between TA hours delivered and change in program-level performance by GTO step, across all six programs that received technical assistance. The correlation between the average percent change in the six GTO programs’ total GTO-IC Map ratings after two years and the number of hours spent on each GTO step was r(10) = .59, p = .07. The figure shows two outliers in the dataset: step 8, outcome evaluation, had the biggest improvement in performance with the highest level of TA, and step 3, best practices, had little TA devoted to it and no change in performance.

Fig. 2
Relationship between TA hours and improvements in program level performance after two years

After removing outcome evaluation and best practices, the correlation for the remaining eight data points was r(8) = .37, p = .36, indicating a weaker relationship between TA hours and program-level performance. The correlation between TA and change after one year was r(10) = .07, p = .84, indicating no strong relationship between TA hours and program performance change over that shorter period.

Below are the study findings relevant to Research Question #2: What lessons from the implementation process and midcourse corrections can be applied to future demonstrations of capacity improving interventions like GTO?

Coalition Staff Interviews

How was GTO as a whole and its three components helpful or not helpful?

After two years of GTO implementation, almost all of the respondents across both sites felt that GTO as a whole was helpful and no one stated that GTO was unhelpful. When asked about changes made in the last two years and what they attributed those changes to, the group as a whole mentioned 75 improvements and attributed over half (47) of them to the GTO process. Specifically, many respondents at both sites felt that GTO helped with planning and organization. This included developing clarity about program goals, deciding on whether new grants were a good fit to pursue, and preparing new grant applications, reports, and presentations for conferences. One respondent indicated, “This model can be very good at helping people think ahead”. Many respondents also stated that GTO helped improve program reporting, which in turn helped improve communication both outside the coalition, i.e., with stakeholders and funders, and within the coalition, i.e., with the members and staff.

Several respondents at both sites stated that GTO helped improve their evaluations including increasing the accountability of both staff and programs by putting more of a focus on outcomes and data collection, building evaluation capacity (e.g., many respondents reported increased understanding of data changes and trends, and how to present results), and helping to track the progress toward goals. For example, one respondent reported that “Everything about the strategy is entered in GTO—[we] use it everyday to know where we stand. It provides a step-by-step plan on what needs to be done. It keeps us on track and highlights our strengths and weaknesses.” Similarly, a staff member from another program stated, “The regular comprehensive and formalized assessment of our programs on a monthly basis and the kind of accountability this raises is a key piece.” Some respondents reported that GTO increased their ability to meet future demands including adapting to changing responsibilities and sustaining funding.

Many staff and executives commented how GTO helped at an organizational level. For example, several executives across both sites mentioned that GTO created consistency in their coalitions, which was helpful given that their programs were so varied. One executive said, “Where GTO has been a benefit for me is that there is now a common framework that the staff has bought into for program planning and evaluation that makes my job easier.” Another executive stated, “[It] helps to create a common language when there are people from different fields coming together. By all using the same model we can be on the same page.” Many also mentioned that GTO helped the organizations as a whole be more accountable and open to change, adopt structures organization-wide that support the collection and reporting of data for program improvement, determine whether new funding opportunities are a good fit for the organization, communicate more effectively, and empower staff to make changes.

While asking about how GTO as a whole was helpful, we also asked respondents about their perceptions of the three components of the GTO model: the manual, training, and TA:

  • Manual: Respondents were mixed in their perceptions of the utility of the GTO manual. Some respondents at both sites felt the manual was a helpful resource and provided some helpful forms. For example, an executive felt the manual answered questions that s/he would normally have to answer (“staff found the manual helpful and I can refer them to that when they have questions or are not sure about something—[this] helps me in terms of time management”). However, many at both sites felt it was difficult to use and boring, and some felt it was “unnecessary”. A number of respondents mentioned that the manual’s large size made it somewhat overwhelming and harder to use.
  • Training: There was a wide range of responses regarding the usefulness of the GTO trainings. Some in each coalition reported that the training helped make the GTO model less intimidating and build uniform acceptance of the GTO model. However, several in California believed that the training was not applied enough, failing to make clear where to start with the GTO process, to connect the training to actual practice, or to integrate practice opportunities into the training day. Others in California stated that the second training, which was significantly revised with assistance from the coalition staff members to include more interactive activities, was much improved in this regard.
  • Technical assistance: All of the respondents from both sites felt the technical assistance they received was helpful. What respondents appreciated the most was that the TA staff helped translate complicated ideas into practice, “taking things that are complex and making them clear” according to one respondent. Several staff valued how the TA staff helped them adapt their plans and evaluations to new situations. Some California respondents noted that in addition to providing concrete guidance on tasks, such as data collection, the TA staff also provided encouragement.

What facilitated using GTO?

Many respondents from both sites reported that the availability of technical assistance was the most important GTO-related facilitator. Somewhat less commonly mentioned GTO-related facilitators included a clear, logical format, availability of a user friendly manual written in everyday language, and standardized tools. Outside of GTO itself, many in both coalitions listed agency support as the primary facilitator. One respondent stated, “without organizational support we wouldn’t have had the training. We wouldn’t have had the monthly meetings. We wouldn’t have had the workgroup meetings. There was definitely the message [from management] that this was something valuable.”

What served as a barrier to using GTO?

The most frequently cited barrier at both sites was a lack of resources, including time, staff, and computers. One staff member stated that there was “just never enough time and never enough money [to do the full GTO process]”. Given this barrier, a major concern for a number of the respondents at both sites was whether the coalition would be able to continue many of the GTO steps without the continuation of the TA. The second most frequently cited barrier was a lack of staff skills, made worse by high staff turnover. Some in California stated that there were a variety of systems-level barriers, including staff resistance, the lack of support from the coalition’s board, and the lack of an appropriate structure. Some in both coalitions also stated that aspects of GTO itself made it more difficult to use. For example, some stated that the manual’s tools were not user-friendly enough, that there were too many steps, and that evaluation concepts were very complex.

Discussion

In this section, we review the results of the GTO Demonstration and Evaluation, discuss how lessons learned about GTO relate to the Prevention Support System, present the study’s limitations, and propose directions for future research.

Impact of GTO on Individual Capacity and Program-Level Performance

The Interactive Systems Framework hypothesizes that the Prevention Support System can help build capacity within the Prevention Delivery System. This article presents preliminary evidence that this is possible. Although analyses by GTO assignment (GTO vs. non-GTO) did not show any significant differences in individual-level capacity after two years, among staff in the GTO intervention group, more participation in the GTO intervention was associated with greater improvements in individual-level prevention capacity over time. Also, the GTO programs showed a marked improvement in program-level performance ratings over time compared to programs assigned to the comparison group. In other words, GTO program staff improved the quality with which they performed a number of prevention activities targeted by GTO, especially in the areas of collecting process and outcome data and using those data to improve programming. This is an important finding given the need for community interventions to demonstrate outcomes. The apparent contradiction between the intent-to-treat analyses at the individual level and the program performance results may be due to the high rate of staff turnover. It is possible that, because of the structures put in place by the GTO intervention (e.g., new evaluations), program performance continued to be enhanced even while individual staff were turning over.

In South Carolina, where there was some prior exposure to earlier versions of GTO, the two programs assigned to the GTO condition began the study at a much higher level of program performance and as a result may have improved somewhat less than the California programs. In California, where there was no prior exposure to GTO before the study began, the programs that received two years of the GTO intervention improved more than twice as much in program performance as those that received only one year of the intervention.

Lessons Learned about GTO Implementation Relevant to the Prevention Support System

Monitoring the implementation of GTO led to several insights into how a Prevention Support System might operate in the future. First, it appears that we need to revisit the role played by the GTO 2004 manual. The data from the GTO Participation Index (about half of those assigned to GTO programs read their GTO manual) and feedback from the interviews show that the manual was not well utilized. Many of those we interviewed stated that, at 400 pages, the manual was simply too long and addressed complicated concepts. Many of these pages contain background information on each of the ten domains targeted by the GTO model, and some respondents reported having difficulty connecting the concepts discussed in the abstract in the manual to their specific situations. Some staff stated that the manual was more useful as a resource document to be consulted periodically than as a text to be read from start to finish. This lesson applies to any type of Prevention Support System intervention and also highlights the limitations of a technology transfer approach that relies only on information dissemination. Materials that are shorter, easy to use, easily tailored to unique circumstances, and supported by technical assistance would likely better engage busy prevention practitioners, leading to greater utilization and presumably better outcomes.

Second, similar to comments about the manual, we learned that the training needed to be more interactive and relevant to the participants’ situations. We received clear feedback that didactic, generic training was not useful. Training that was interactive and fun, and that used real data from the participants’ programs, was much more effective in helping them apply the GTO process to their work. In addition, given the high turnover, it would seem important for future demonstrations of GTO to include more frequent trainings, which would also serve as boosters for existing staff.

Third, the relationship between the amount of onsite technical assistance and improvement in program level performance varied across the GTO steps. For example, all GTO programs either developed a new outcome evaluation or greatly improved their current design, reflected in the many TA hours spent and the associated large gains on the GTO-IC Map rating for outcome evaluation. In contrast, best practices received very little TA attention across all the programs and showed no change in program level performance. It was these two GTO steps that led to such a strong correlation between TA hours and improvement in program level performance. However, the relationship between TA hours and improvement in program level performance for the other GTO steps was less clear. We posit that the different steps within the GTO process vary in their scope and difficulty, requiring differing levels of effort in order to yield the same level of improvement in program level performance. In support of this idea, the data suggest that assisting a program to develop and use goals and objectives at a high level of quality may require much less TA effort than helping to establish a high quality process or outcome evaluation. Therefore, when deciding where to focus the TA effort, GTO or any Prevention Support System intervention will need to strike a balance between the current level of performance in the organization and the resources available to create change.

A fourth lesson was that the coalitions that participated in the GTO intervention were receptive, but the extent to which the coalitions became true learning organizations is unclear. For example, we were able to secure leadership support for the demonstration of the GTO intervention in that meetings organized by the researchers around the GTO intervention were well attended by program staff. However, the GTO intervention also attempted to help program staff initiate activities, such as talking to others, planning, and securing resources to do GTO. Data from responses on the GTO Participation Index indicate fairly good participation in the researcher-initiated activities of trainings, and meeting with TA staff, but less endorsement of the program staff-initiated activities, such as reading the manual, talking to others, and planning to use and secure resources for GTO. Therefore, while there was support for the demonstration of GTO’s components through attendance at researcher-initiated activities, there was less initiated GTO activity by program staff, indicating that during the study period GTO might not have been well routinized in the organization.

There are many possible explanations for these findings. First, the lack of various resources (e.g., time, staffing level, funding, computer hardware and software) made it more challenging to fully implement all of the GTO steps. This is consistent with the conclusions of Livet and Wandersman (2005), who reviewed 68 empirical studies across a wide range of medical and behavioral health fields and found relationships between the availability of resources and the performance of the tasks targeted by GTO’s ten steps.

Second, there were certain tasks associated with evaluation with which most practitioner staff lacked prior experience or training and which they were uncomfortable undertaking in addition to their service provider roles (e.g., finding psychometrically sound outcome measures, conducting statistical analyses). It made more sense to program staff to rely on the TA staff’s expertise with the technical aspects of evaluation than to commit to learning technical evaluation skills. This also made clear that organizations committed to routinizing GTO need to acquire staff with technical evaluation skills in order to sustain it. In one site, GTO and program staff began to advocate for an internal data analyst position as the GTO process continued.

Third, in this demonstration, the GTO manual, training, and TA focused initially on enhancing the capacity to perform the GTO steps rather than on the integration of GTO into the organizational structure. For example, it took about a year before the programs’ newly developed evaluations yielded data. Only then could questions of how to apply the data to program operations be addressed, and little time remained in the demonstration period to do so. The coalitions’ difficulty making organizational changes in part explains why these types of changes (such as performance incentives, revised job descriptions, and accountability mechanisms) were not made. It may require longer than two years to facilitate these types of changes and the coalitions’ complete ownership of the GTO process.

Consistent with the CQI-Design noted above, modifications were made and tools were developed in the second year to make the application of GTO easier and to sustain its practices. For example, the use of a GTO format for desk notes (i.e., job descriptions) and a monthly report form using a GTO template were modifications aimed at helping to routinize the practice of GTO. However, next steps in designing Prevention Support Systems like GTO should pay further attention to what is required to set the stage for and support the eventual organizational change needed to fully integrate GTO into community coalitions’ management and operations (cf. Livet and Wandersman 2005).

The final lesson learned concerns the coalitions’ relationships with their funders and other stakeholders. While funders typically demand outcome data, they rarely engage coalitions in discussions about using evaluation results to improve services, or provide adequate resources over a long enough period to do so (Crusto and Wandersman 2004; The Cornerstone Consulting Group 2002; Wandersman 1999; Yost 1998). Coalitions therefore have a strong disincentive to spend their limited resources critically examining their own programming, as the GTO framework requires.

Moving the Prevention Support System Forward

To successfully integrate processes such as GTO into organizational operations, coalitions like the ones in our study need adequate resources (e.g., funding, appropriate staffing) and the proper incentives, established by the larger funding system, to invest those resources in an infrastructure for continuous quality improvement. As conceptualized in the Interactive Systems Framework, the Prevention Support System may be limited in its ability to address these issues. For example, the GTO Demonstration and Evaluation provided the two coalitions with manuals for all GTO staff, four full-day trainings, and two half-time doctoral-level technical assistants for two years. The coalitions could not have supported these resources without research grant funds, and the gains in capacity would likely not have occurred without this support. It may be necessary for Prevention Support System interventions not only to cover their own costs (as done in this study), but also to provide the supplementary resources communities need to pay for the additional planning and evaluation activities (e.g., data collection and analysis) required for high quality prevention.

Also, coalitions need to be embedded in a larger system in which their funders support continuous quality improvement activities like the ones specified in the GTO framework. This is something the Prevention Support System does not currently address. There are examples of how this could be realized. In 2001, the state of South Carolina received funding from CSAP to provide grants to local prevention coalitions. The state attempted to create a system that built in incentives for local coalitions to practice the GTO processes by a) structuring the application for funding around GTO’s 10 steps, b) requiring grantees to monitor program implementation using common GTO forms, c) requiring grantee reports to be submitted in a common GTO format, d) delivering technical assistance consistent with the GTO approach, and e) providing feedback that grantees could use to improve (Chinman et al. 2001). Another example comes from a private foundation in South Carolina that required its grantees to engage in empowerment evaluation/continuous quality improvement activities as part of their reporting requirements (Keener et al. 2005). In these examples, prevention coalitions were given both the incentives and the resources to examine their own programming more critically.

Limitations and Future Research

While the results are encouraging, certain limitations should be noted. First, the study included a small number of programs that differed in goals and methods. Although this variety suggests that GTO can work with different types of programs, the small number of programs involved makes it difficult to know how generalizable the results are. Also related to generalizability is the question of whether the resources made available through the GTO intervention in this study are feasible to provide across a large number of communities nationwide. More research will be needed to determine the minimum level of resources required to achieve gains in capacity and performance sufficient to improve outcomes in communities.

A second limitation is that we were not able to assess whether the gains in capacity and performance were sustained after the technical assistance ended. Efforts at sustaining these gains were often hampered by high staff turnover, typical of prevention coalitions (Chinman et al. 1996). The two coalitions in this study have addressed sustainability in different ways, although both are using their own funds. With assistance from the GTO team, one coalition has assigned one of its current staff to assume some TA responsibilities; the other continues to support its GTO technical assistant in a reduced capacity. Future research will need to address which types of capacity and performance gains can be easily sustained and which will need ongoing support.

Third, the impact of GTO on individual and program level performance was evaluated with a quasi-experimental design using established programs. Although baseline differences between the GTO and comparison groups were minimal, it is possible that unmeasured characteristics biased the results. This design was appropriate given the early stage of development of the GTO intervention. A next step should be studies that, when possible, use random assignment with larger numbers of similar types of prevention programs to determine the differential effects of GTO on individual capacity and program level performance. Such studies would also need to include new programs (or staff considering which programs to choose) to extend the generalizability of the results shown here beyond established programs.

Fourth, we were not able to directly assess within the timeline of this study whether increased individual capacity and program level performance translated into outcomes such as reduced youth alcohol and drug use or improvements in associated risk factors. The evaluations the program staff conducted did reveal positive outcomes, such as reduced 30-day marijuana use, greater intentions to use positive parenting practices, and more negative attitudes toward alcohol, tobacco, and drug use. What is unclear is whether the use of GTO simply revealed outcomes that were present but previously unmeasured, or actually improved them. Because all of the GTO and comparison programs differed, it was also difficult to judge whether the GTO intervention itself improved alcohol and drug outcomes. Future research on GTO (or other Prevention Support System interventions) would benefit from working with communities implementing the same prevention program, which would allow for the collection and analysis of comparable outcome data.

Fifth, the measures used here (the Coalition Survey’s Attitude, Self-efficacy, and Behavior scales; the GTO IC Map) were developed specifically for this study. While these measures were based on sound theories and we were able to document some psychometric properties (e.g., interrater reliability of the IC Maps, factor loadings and internal consistency of the survey scales), they lack previous empirical support and require further refinement and testing.

Finally, there was significant contamination of the GTO intervention into the comparison group, as shown by the GTO Participation Index data. This was a direct effect of the GTO intervention’s attempt to target the whole organization and make the GTO process the routine way of conducting prevention. Certain changes recommended by the GTO process, such as adding a report to facilitate communication among staff about progress, could not have been kept from the non-GTO programs. This contamination may have negated potential group differences on the individual capacity and program level performance ratings because the comparison group also received some of the intervention. However, it could also be interpreted as a marker of success, in that the GTO intervention was able to integrate at least some aspects of itself into the coalitions’ routine organizational operations. Future studies ought to consider assigning whole coalitions, and all of their programs, to receive GTO or not, to prevent such contamination.

Conclusions

Communities are increasingly being required by state and federal funders to implement evidence-based prevention strategies, yet are often not provided the guidance or the tools needed to successfully meet this challenge. To improve the likelihood of achieving positive outcomes, the GTO intervention is designed to provide the necessary guidance and tools, tailored to community needs, in order to build individual capacity and program level performance. GTO is an example of a Prevention Support System intervention, which, as conceptualized by the Interactive Systems Framework, plays a key role in bridging the gap between prevention science (Prevention Synthesis and Translation System) and prevention practice (Prevention Delivery System). The results of this evaluation suggest that GTO can improve capacity at the individual level and performance at the program level, demonstrating that the Prevention Support System can fulfill its intended role. However, the dwindling resources available to prevention coalitions and the lack of incentives provided by the larger funding system may hamper Prevention Support System interventions. Despite those challenges, this article provides preliminary evidence for the impact of GTO on prevention capacity and performance, along with lessons learned about the relationship between the Prevention Support and Prevention Delivery Systems, supporting the framework’s importance and the need for additional funding and research.

Acknowledgments

The Centers for Disease Control and Prevention provided funding for this paper through support of Participatory Research of an Empowerment Evaluation System (R06/CCR921459). Work on this paper was also supported by The Department of Veterans Affairs Desert Pacific Mental Illness Research, Education, and Clinical Center (MIRECC). We would like to thank Megan Zander-Cotugno and Kirsten Becker at RAND for their assistance with the data collection and management in this project. We would also like to thank all of those at Santa Barbara Fighting Back and LRADAC in Columbia, without whose participation this project would not have been possible. Any opinions expressed are only the authors’, and do not necessarily represent the views of any affiliated institutions.

Contributor Information

Matthew Chinman, RAND Corporation, 1776 Main Street, Santa Monica, CA 90407, USA.

Sarah B. Hunter, RAND Corporation, 1776 Main Street, Santa Monica, CA 90407, USA.

Patricia Ebener, RAND Corporation, 1776 Main Street, Santa Monica, CA 90407, USA.

Susan M. Paddock, RAND Corporation, 1776 Main Street, Santa Monica, CA 90407, USA.

Lindsey Stillman, Department of Psychology, University of South Carolina, Columbia, SC 29208, USA.

Pamela Imm, Department of Psychology, University of South Carolina, Columbia, SC 29208, USA.

Abraham Wandersman, Department of Psychology, University of South Carolina, Columbia, SC 29208, USA.

References

  • Ajzen I, Fishbein M. Attitude-behavior relations: A theoretical analysis and review of empirical research. Psychological Bulletin. 1977;84:888–918.
  • Argyris C. On organizational learning and learning organizations. Malden, MA: Blackwell Business; 1999.
  • Argyris C, Schön D. Organizational learning. Reading, MA: Addison-Wesley; 1978.
  • Argyris C, Schön D. Organizational learning II: Theory, method, and practice. Reading, MA: Addison-Wesley; 1996.
  • Backer TE. Finding the balance: program fidelity and adaptation in substance abuse prevention: a state-of-the-art review. Rockville, MD: Center for Substance Abuse Prevention; 2001.
  • Bandura A. Health promotion by social cognitive means. Health Education & Behavior. 2004;31(2):143–164. [PubMed]
  • Becker HS. Problems of inference and proof in participant observation. American Sociological Review. 1958;23:652–660.
  • Bernard RH. Social research methods: qualitative and quantitative approaches. Thousand Oaks, CA: Sage Publications, Inc; 2000.
  • Cantor D, Crosse S, Hagen CA, Mason MJ, Siler AJ, von Glatz A. A closer look at drug and violence prevention efforts in American schools: report on the study on school violence and prevention. Washington, DC: U.S. Department of Education, Planning and Evaluation Services; 2001.
  • Caulkins JP, Rydell CP, Everingham SS, Chiesa J, Bushway S. An ounce of prevention, a pound of uncertainty: the cost-effectiveness of school-based drug prevention programs. (MR-923-RWJ) Santa Monica, CA: RAND; 1999. Available at: http://www.rand.org/pubs/monograph_reports/MR923/index.html.
  • Chinman M, Anderson C, Imm P, Wandersman A, Goodman RM. The perception of costs and benefits of high active versus low active groups in community coalitions at different stages in coalition development. The Journal of Community Psychology. 1996;24(3):263–274.
  • Chinman M, Early D, Ebener P, Hunter S, Imm P, Jenkins P, et al. Getting To Outcomes: a community-based participatory approach to preventive interventions. Journal of Interprofessional Care. 2004;18(4):441–443. [PubMed]
  • Chinman M, Hannah G, Wandersman A, Ebener P, Hunter SB, Imm P, et al. Developing a community science research agenda for building community capacity for effective preventive interventions. American Journal of Community Psychology. 2005;35(3–4):143–157. [PubMed]
  • Chinman M, Imm P, Wandersman A. Getting to Outcomes 2004: Promoting accountability through methods and tools for planning, implementation, and evaluation. (No. TR-TR101) Santa Monica, CA: RAND; 2004. Available at: http://www.rand.org/publications/TR/TR101/
  • Chinman M, Imm P, Wandersman A, Kaftarian S, Neal J, Pendleton KT, Ringwalt C. Using the Getting To Outcomes (GTO) model in a statewide prevention initiative. Health Promotion Practice. 2001;2:302–309.
  • The Cornerstone Consulting Group. End games: The challenge of sustainability. Baltimore, MD: The Annie E. Casey Foundation; 2002.
  • Crosse S, Burr M, Cantor D, Hagen CA, Hantman I. Report on the study on school violence and prevention. Washington, DC: U.S. Department of Education, Planning and Evaluation Service; 2001. Wide scope, questionable quality: Drug and violence prevention efforts in American schools.
  • Crusto C, Wandersman A. Setting the stage for accountability and program evaluation in community-based grant-making. In: Roberts AR, Yeager K, editors. Handbook of practice-focused research and evaluation. Oxford University Press; 2004.
  • CSAP. Prevention works through community partnerships: findings from SAMHSA/CSAP’s national evaluation. Rockville, MD: Substance Abuse and Mental Health Services Administration, U.S. Department of Health and Human Services; 2000.
  • Deming WE. Out of the crisis. Cambridge, MA: MIT Press; 1986.
  • Ennett ST, Ringwalt CL, Thorne J, Rohrbach LA, Vincus A, Simons-Rudolph A, et al. A comparison of current practice in school-based substance use prevention programs with meta-analysis findings. Prevention Science. 2003;4(1):1–14. [PubMed]
  • Fetterman DM, Wandersman A, editors. Empowerment evaluation principles in practice. New York, NY: Guilford Press; 2005.
  • Fishbein M, Ajzen I. Attitudes toward objects as predictive of single and multiple behavioral criteria. Psychological Review. 1974;81:59–74.
  • Fishbein M, Ajzen I. Beliefs, attitudes, intentions, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley; 1975.
  • Glaser BG, Strauss AL. Awareness contexts and social interaction. American Sociological Review. 1964;29:669–679.
  • Green L. From research to “best practices” in other settings and populations. American Journal of Health Behavior. 2001;25(3):165–178. [PubMed]
  • Hall GE, Hord SM. Change in schools facilitating the process. Albany, NY: State University of New York Press; 1987.
  • Hall GE, Hord SM. Implementing change. Boston, MA: Allyn and Bacon; 2001.
  • Hall GE, Loucks SF. A developmental model for determining whether the treatment is actually implemented. American Educational Research Journal. 1977;14(3):263–276.
  • Hall GE, Loucks SF, Rutherford WL, New love BW. Levels of use of the innovation: a framework for analyzing innovation adoption. Journal of Teacher Education. 1975;26(1):52–56.
  • Hallfors D, Cho H, Livert D, Kadushin C. Fighting back against substance abuse: are community coalitions winning? American Journal of Preventive Medicine. 2002;23(4):237–245. [PubMed]
  • Juran JM. Juran on leadership for quality. New York: Free Press; 1989.
  • Keener DC, Snell-Johns J, Livet M, Wandersman A. Lessons that influenced the current conceptualization of empowerment evaluation. In: Fetterman DM, Wandersman A, editors. Empowerment evaluation principles in practice. New York, NY: Guilford Press; 2005. pp. 73–91.
  • Kreuter MW, Lezin NA, Young LA. Evaluating community-based collaborative mechanisms: implications for practitioners. Health Promotion Practice. 2000;1(1):49–63.
  • Kvale S. InterViews: An introduction to qualitative research interviewing. Thousand Oaks, CA: Sage Publications, Inc; 1996.
  • Little R, Rubin D. Statistical analysis with missing data. 2. New York, NY: John Wiley and Sons; 2002.
  • Livet M, Wandersman A. Organizational functioning: Facilitating effective interventions and increasing the odds of programming success. In: Fetterman DM, Wandersman A, editors. Empowerment evaluation principles in practice. New York: Guilford Press; 2005. pp. 123–154.
  • McCormick LK, Steckler AB, McLeroy KR. Diffusion of innovations in schools: a study of adoption and implementation of school-based tobacco prevention curricula. American Journal of Health Promotion. 1995;9(3):210–219. [PubMed]
  • McCracken GD. The long interview. Newbury Park, CA: Sage Publications, Inc; 1988.
  • Mitchell RE, Stone-Wiggins B, Stevenson JF, Florin P. Cultivating capacity: outcomes of a statewide support system for prevention coalitions. Journal of Prevention & Intervention in the Community. 2004;27(2):67–87.
  • NIDA. Preventing drug use among children and adolescents: A research-based guide. (No. 97–4212) National Institutes of Health; 1997.
  • O’Brien K. Using focus groups to develop health surveys: An example from research on social relationships and AIDS-preventive behavior. Health Education Quarterly. 1993;20(3):361–372. [PubMed]
  • O’Donnell L, Scattergood P, Adler M, Doval AS, Barker M, Kelly JA, et al. The role of technical assistance in the replication of effective HIV interventions. AIDS Education and Prevention. 2000;12(5 Suppl):99–111. [PubMed]
  • Ozer EM, Bandura A. Mechanisms governing empowerment effects: A self-efficacy analysis. Journal of Personality and Social Psychology. 1990;58:472–486. [PubMed]
  • Preskill H, Torres RT. Building capacity for organizational learning through evaluative inquiry. Evaluation. 1999;5(1):42–60.
  • Rapkin B, Trickett EJ. Comprehensive dynamic trial designs for behavioral prevention research with communities: Overcoming inadequacies of the randomized controlled trial paradigm. In: Trickett EJ, Pequegnat W, editors. Community intervention and AIDS. Oxford, UK: Oxford University Press; 2005.
  • Reuter P, Timpane JM. Assessing options for the safe and drug free schools and communities act. (No. MR-1328) Santa Monica, CA: RAND; 2001.
  • Reid WJ. Field testing and data gathering of innovative practice interventions in early development. In: Rothman J, Thomas E, editors. Intervention research: design and development for human service. Binghamton, NY: Haworth Press; 1994. pp. 245–264.
  • Rothman J, Thomas E, editors. Intervention research: design and development for human services. Binghamton, NY: Haworth Press; 1994.
  • Roussos ST, Fawcett SB. A review of collaborative partnerships as a strategy for improving community health. Annual Review of Public Health. 2000;21:369–402. [PubMed]
  • SAS Institute, Inc. SAS 9.1.3 for Windows. Cary, NC: SAS Institute Inc; 2002.
  • Silvia ES, Thorne J. School-based drug prevention programs: a longitudinal study in selected school districts. North Carolina: Research Triangle Institute; 1997.
  • Spoth RL, Guyll M, Day SX. Universal family-focused interventions in alcohol-use disorder prevention: cost-effectiveness and cost-benefit analyses of two interventions. Journal of Studies on Alcohol. 2002;63(2):219–228. [PubMed]
  • Spradley JP. The ethnographic interview. New York, NY: Holt, Rinehart and Winston; 1979.
  • Stevenson JF, Florin P, Mills DS, Andrade M. Building evaluation capacity in human service organizations: a case study. Evaluation and Program Planning. 2002;25(3):233–243.
  • Strauss A, Corbin JM. Basics of qualitative research: grounded theory procedures and techniques. Thousand Oaks, CA: Sage Publications, Inc; 1990.
  • Wandersman A. Results oriented grant making and grant implementation. Orlando, FL: American Evaluation Association Annual Conference; 1999.
  • Wandersman A, Florin P. Community interventions and effective prevention. American Psychologist. 2003;58(6–7):441–448. [PubMed]
  • Wandersman A, Imm P, Chinman M, Kaftarian S. Getting To Outcomes: A results-based approach to accountability. Evaluation and Program Planning. 2000;23:389–395.
  • Yin RK, Kaftarian SJ, Yu P, Jansen MA. Outcomes from CSAP’s community partnership program: findings from the national cross-site evaluation. Evaluation and Program Planning. 1997;20:345–356.
  • Yost J. Empowerment evaluation and results oriented grant making in foundations. Chicago, IL: American Evaluation Association Annual Conference; 1998.