J Subst Abuse Treat. Author manuscript; available in PMC 2010 September 1.
PMCID: PMC2737382
NIHMSID: NIHMS137822

Agency context and tailored training in technology transfer: A pilot evaluation of motivational interviewing training for community counselors

Abstract

Few empirical studies are available to guide best practices for transferring evidence-based treatments to community substance abuse providers. To maximize the learning and maintenance of new clinical skills, this study tested a context-tailored training (CTT) model, which used standardized patient actors in role-plays tailored to agency clinical context, repetitive cycles of practice and feedback, and enhanced organizational support. This study reports the results of a randomized pilot evaluation of CTT for motivational interviewing (MI). Investigators randomly assigned community substance abuse treatment agencies to receive either CTT or a standard two-day MI workshop. The study also evaluated the effects of counselor-level and organizational-level variables on the learning of MI. No between-condition differences were observed in the acquisition and maintenance of MI skills, despite higher reported satisfaction with the more costly context-tailored model. Analyses revealed that counselors with more formal education and less endorsement of a disease model of addiction made the greatest gains in MI skills, irrespective of training condition. Similarly, agencies whose individual counselors viewed their organization as being more open to change and less supportive of autonomy showed greater average staff gains in MI skills, again irrespective of training method. Post-training activities within agencies that supported the ongoing learning and implementation of MI mediated the effects of organizational openness to change. This pilot study suggests that tailored training methods may not produce better outcomes than traditional workshops for the acquisition of evidence-based practice and that efforts to enhance dissemination should focus on characteristics of learners and ongoing organizational support of learning.

1. Introduction

Psychosocial treatments in addictions are continually developed, evaluated, and refined. Yet there are comparatively few scientific studies of effective methods for transferring new treatment approaches to the community of addiction treatment providers. Walters and colleagues (2005) reviewed 17 studies that evaluated aspects of professional education in addictions. Scientific rigor was generally limited, with few studies utilizing control groups or measuring outcomes directly with skills-based assessments. Intensive training workshops, which assemble groups of learners for a short period of time, are a common form of dissemination in current practice.

Given the complexity of both the adoption process and many evidence-based practices (Simpson, 2002), it may not be surprising that workshops alone often result in limited change in practitioner behavior. In particular, workshops lack key elements of models of successful technology transfer: constituent input and tailored training goals (Backer et al., 1995). Workshop training can be only modestly tailored to the work context of trainees, resulting in training that may be removed from the trainees' everyday work experience. Additionally, the trainer sets training goals aimed at general competence, and individualized learning goals are not a specific focus. The limitations of workshop training have been observed not only in addiction treatment (Walters et al., 2005), but in medical education (Davis, 1998) and mental health services as well (Corrigan, Steiner, McCracken, Blaser, & Barr, 2001; Fixsen et al., 2005; Milne, Gorenski, Westerman, Leck, & Keegan, 2000).

Several studies suggest that learning of new treatment skills is possible with broader and more integrated efforts. For example, Morgenstern and colleagues (2001) reported successfully training community substance abuse practitioners in a manual-guided cognitive-behavioral therapy via 35 hours of didactic instruction and hour-for-hour supervision of four training cases. Three studies reported training success using multiple techniques (e.g., training, feedback, role plays, clinical supervision, and positive reinforcement) (Andrzejewski, Kirby, Morral, & Iguchi, 2001; Keller & Galanter, 1999; Sorensen et al., 1988). Sholomskas and colleagues (2005) reported that, compared to a treatment manual alone, community-based clinicians learned cognitive-behavioral therapy interventions more effectively when given a manual, a seminar, and supervised casework.

The process of learning, retention, and ultimate adoption of a new intervention can also be facilitated or impeded by the work context in which the new learning is to be implemented. Dimensions of organizational functioning that appear to support or impede adoption of new practices include organizational climate for innovation and organizational support (Bartholomew, Joe, & Rowan-Szal, 2007; Joe, Broome, Simpson, & Rowan-Szal, 2007; Simpson, Joe, & Rowan-Szal, 2007). Other organizational dimensions that challenge adoption of evidence-based practices include a lack of existing knowledge and skills to assimilate new approaches (Dipboye, 1997; McFarlane, McNary, Dixon, Hornby, & Cimett, 2001), limitations in organizational resources such as counselor or supervisor time (Milne et al., 2000), and low interest of learners if innovations are perceived to reflect administrator interests rather than their own front-line clinical needs (Miller & Rollnick, 1991, 2002; Reid, Everson, & Green, 1999). Complicating this research is a historical disagreement about the best unit of measurement for estimating effects of agency or work settings, with writers distinguishing between psychological and organizational climates (James & Jones, 1974). Psychological climate is conceptualized and measured as individuals' perceptions of the work climate, which can be idiosyncratic and influenced by each person's psychological characteristics. Conversely, organizational climate is a shared or summary perception that groups of people attach to their workplace. Variance in constructs assessed at both psychological and organizational levels likely affects technology transfer (Florin, Giamartino, Kenny, & Wandersman, 1990; Kenny & LaVoie, 1985).

1.1 Motivational Interviewing

There is significant interest in the transfer of motivational interviewing (MI) to community-based treatment. MI is a client-centered and directive style of counseling developed as a method of helping people resolve ambivalence and enhance intrinsic motivation for changing addictive behaviors (Miller & Mount, 2001). Interest in MI led Drs. William Miller and Steve Rollnick to train a cadre of trainers through a Train-the-Trainers program, which initiated the formation of the Motivational Interviewing Network of Trainers (MINT). Despite limited data to guide decision making, the MINT has encouraged development of “best practices” for training. The MINT consensus is that a two-day workshop should be considered as a standard for developing basic competence in MI skills (Rubel, Shepell, Sobell, & Miller, 2000). Workshops shape practitioner behavior via brief didactic presentations, individual and group exercises, trainer/videotape demonstrations of skills, and feedback to learners. The MINT Trainers Manual provides a list of exercises and approaches for teaching the most common MI elements.

To date, only a few evaluations of MI training have been reported. Initial studies documented gains in participant satisfaction and questionnaire measures of MI knowledge and MI-consistent behavior (Miller & Mount, 2001; Saitz, 2000). Two subsequent studies evaluated skills gains after training and over follow-up periods. Both studies showed significant increases in MI skills based on audiotapes of sessions with either patients (Miller & Mount, 2001) or medical actors (Baer et al., 2004), but reductions in skill gains over follow-up periods. Miller and colleagues (Miller, Yahne, Moyers, Martinez, & Pirritano, 2004) completed a much larger study of MI training, recruiting 140 licensed substance abuse professionals in a five-group design comparing a waiting-list control to workshop training alone or a workshop followed by practice feedback, personal coaching, or both. Their data revealed gains in MI skills and a reduction in MI-inconsistent behaviors, further demonstrating that training could transfer to practice samples and could be maintained over a 1-year follow-up period. Notably, coaching and feedback, when added to the workshop, increased the magnitude and retention of gains in MI proficiency. Schoener and colleagues (2006) similarly used ongoing coaching following workshop training to produce significant improvement in MI skills in their small but intensive study of the training of 10 mental health counselors. However, the inclusion of feedback and coaching does not ensure skill gain. Moyers et al. (2008) reported on a training study of 129 behavioral health providers within the Air Force, in which training enrichments in the form of personalized feedback and consultation phone calls did not enhance skill retention at four-month follow-up.

To date, studies of MI training have been conducted with volunteers, likely those most eager to learn MI. With the exception of Miller and Mount (2001), studies of training have not yet included intact clusters of practitioners from single agencies, who are more typically “volunteered” by management to adopt new programs and approaches. Miller et al. (2004) acknowledge their sample to be a group of highly motivated individuals, as reflected by their willingness to volunteer to travel to New Mexico and learn MI. Many studies in the Walters et al. (2005) review were completed in samples of graduate and medical students with presumably high levels of interest. Moyers and colleagues (Moyers et al., 2008) note that initial skill levels of Air Force providers may be an important consideration in skill gain. All of these studies share the limitations noted for workshops: modest tailoring to trainee needs, general training goals, practice removed from the trainees' work setting, and reliance on volunteers.

1.2 Context-based training

Rollnick and colleagues (2002) described a method of training tailored to the unique challenges of a work setting, which they referred to as “context-bound” or SPICE (Simulated Patients In Clinical Encounters) training. Context-bound training does not attempt to teach a single model, but instead targets challenges in the work setting. Training takes place at the individuals' place of work with intact work teams. The trainer facilitates a needs assessment, acknowledges participants' current skills, and builds upon or refines these. Through a process of self-review, clinicians evaluate and denote what areas are to be the foci and goals of training. Trainers and clinicians then collectively develop case scenarios that match the treatment circumstances of the trainees and allow for practice of relevant skill sets. Following a practice now common in medical education (Rollnick, Kinnersley, & Lane, 2001), simulated clients are selected and trained to serve as primary training tools. Trainees encounter simulated clients in their work setting, rather than in a workshop training room. Rather than massed practice over a 2- or 3-day period, shorter training sessions are spaced over several weeks, so that practice can occur between sessions. Learner groups tend to be homogeneous and define shared challenges. The content of training is presented as solutions to everyday problems. Although clearly incorporating important elements of technology transfer, that is, participant input and tailored training goals, Rollnick and colleagues' (2002) SPICE model has not been empirically tested, nor has it been extended to clinical staff who provide substance abuse treatment within treatment agencies.

1.3 The current study

The current study sought to develop and evaluate context-tailored training (CTT), an adaptation of the SPICE model, for MI training of agency personnel who deliver addictions treatment. The training model seeks to improve upon traditional workshop training by taking into account features important for technology transfer: participant input, training goals tailored to everyday work challenges, and practice and feedback with simulated patients. The study sought to experimentally manipulate the counselor training process and examine training (not treatment) outcomes. The primary hypothesis was that CTT would result in better MI skill acquisition and maintenance than would a standard workshop. The primary outcomes were measures of MI skillfulness derived from coding of simulated patient interviews. Given the lack of prior research, our study was conducted with six agencies with the goals of developing the training method, evaluating its feasibility, and estimating its effect on training outcomes. Secondarily, training within community clinical settings affords the opportunity to both assess and support agency-level practices that can sustain the ongoing use of MI skills. Based on literature indicating that context influences the outcome of training and implementation efforts, we hypothesized that the extent of post-training MI-supportive activities in the agency would benefit learning and maintenance of MI skills. The study also explored whether pre-existing psychological and organizational climate influences learning.

2. Method

2.1 Sample Recruitment

The University of Washington Institutional Review Board (IRB) approved all study procedures. This study recruited participants sequentially from six substance abuse treatment facilities that were members of NIDA's Clinical Trials Network, and offered free MI training at each facility, as well as corresponding credit units for continuing education. Agencies received compensation at a flat rate (determined a priori) to offset facilities costs for training. Investigators attended agency staff meetings to present information about the study. Care was taken to avoid involvement of administrators or supervisors in the recruitment process, although a number of supervisors enrolled in the study themselves. These individuals were asked to be supportive of staff decisions to participate but to avoid trying to influence the decision. Interested staff provided informed consent and agreed to complete a series of three training outcome assessments in exchange for personal financial compensation ($30 per assessment). These assessments occurred at baseline (prior to training), post-training (immediately following training), and follow-up (three months after training).

2.2 Measures

Study measures included five general types. The first two were baseline measures used as predictors of outcomes: individual counselor characteristics and psychological and organizational agency climate. Additional assessments targeted training processes, post-training agency practices supportive of learning, and training outcomes.

2.2.1 Individual counselor characteristics (assessed at baseline)

2.2.1.1 Socio-demographics Form

A self-report form included questions about age, gender, ethnicity, prior training and education (e.g., highest degree attained, agency tenure, CDP licensure, prior MI exposure), and perspective on current clinical work (clinical orientation, current duties).

2.2.1.2 Short Understanding of Substance Abuse Scale

(Humphreys, Greenbaum, Noke, & Finney, 1996). This 19-item self-report form assessed beliefs about the origin and treatment of addictive behaviors via ratings on a 5-point Likert-style scale (1 = Strongly Disagree, 5 = Strongly Agree). Subscales include Disease Model, Psychosocial, and Eclectic orientations. Only the Disease Model subscale was used in analyses, as it was the only set of items with acceptable reliability (Cronbach's alpha = .72; alphas were .54 and .32 for the Psychosocial and Eclectic orientation items, respectively).
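The reliability figures above are standard Cronbach's alpha values. As a hedged illustration (the ratings and the helper function below are hypothetical, not study data), the computation can be sketched as:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha. item_scores: one inner list per item,
    one entry per respondent."""
    k = len(item_scores)
    # each respondent's total across items
    totals = [sum(person) for person in zip(*item_scores)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# three items rated by four respondents (hypothetical)
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 2))  # 0.82
```

Items that vary together across respondents push alpha toward 1; items that vary independently pull it toward 0, which is why the weakly related Eclectic items yielded an alpha of only .32.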

2.2.2 Psychological and organizational climate (assessed at baseline)

2.2.2.1 Organizational Readiness for Change (ORC) Scale

(Lehman, Greener, & Simpson, 2002). This self-report form included 60 items that pertain to practitioner perception of resources and organizational climate of their respective agency. Participants rated items on a 5-point Likert-style scale (1 = Strongly Disagree, 5 = Strongly Agree). This abbreviated version of the ORC retained twelve subscales: Staffing Resources, Training Resources, Staff Growth, Staff Efficacy, Staff Influence, Staff Adaptability, Organizational Mission, Organizational Cohesion, Organizational Autonomy, Organizational Communication, Organizational Stress, and Organizational Change.

For analyses, ORC scores were computed to represent both psychological climate (variation within agencies) and organizational climate (variation between agencies) (Hox, 2002). Psychological climate scores for each individual were created by subtracting each person's agency mean from her/his original score. Organizational climate variables were constructed for each agency by calculating the mean score for all its counselors. Intraclass correlation (ICC) coefficients were calculated from individuals' raw scores to assess variability at the psychological and organizational levels. For nine of the twelve ORC variables, ICCs showed that the bulk of variance was between individuals rather than agencies (ICC < .10). For these nine, organizational climate variables were not retained. The other three ORC variables (staff efficacy, organizational autonomy, and organizational change) had larger ICCs (.13, .15, and .22, respectively, all p < .05). This indicated that variance existed at both the individual and agency levels, and so scores representing both psychological climate and organizational climate were analyzed.
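The score construction described above can be sketched as follows (a minimal illustration; the agency names and ratings are hypothetical, not study data). Organizational climate is the agency mean, and psychological climate is each counselor's deviation from that mean:

```python
from statistics import mean

ratings = {                      # hypothetical agency -> counselor ORC scores
    "agency_a": [3.2, 4.1, 3.8],
    "agency_b": [2.5, 2.9, 3.1],
}

# organizational climate: each agency's mean across its counselors
org_climate = {agency: mean(scores) for agency, scores in ratings.items()}

# psychological climate: each counselor's score minus the agency mean
psych_climate = {
    agency: [round(s - org_climate[agency], 2) for s in scores]
    for agency, scores in ratings.items()
}
print(psych_climate["agency_a"])  # [-0.5, 0.4, 0.1]
```

By construction, the psychological-climate deviations sum to roughly zero within each agency, so all between-agency variance is carried by the organizational score, mirroring the within/between partition the ICCs assess.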

2.2.3 Training process measures

2.2.3.1 Training Attendance and Participation (assessed during training)

Trainers recorded practitioner attendance at the conclusion of each training session. There were two day-long training sessions in the workshop condition and five 2- to 3-hour sessions in the CTT condition. In both conditions we computed Percentage of Attendance for each practitioner. For the CTT condition we also derived Percentage of Skills-Practice Completion to reflect the rate at which practitioners completed skills-practice interviews with simulated clients.

2.2.3.2 Training Satisfaction Survey (TSS) (assessed post-training and follow-up)

This self-report form assessed practitioner satisfaction with five aspects of training: format, applicability, trainer characteristics, overall experience, and simulated clients (for CTT agencies only). In each domain, respondents provided 7-point Likert ratings (1 = Very Dissatisfied, 7 = Very Satisfied) to a series of five items, followed by prompts for free-response qualitative reactions and feedback. We computed subscale scores as averages of the five items for each of the five aspects of training. The internal consistency of items on each TSS subscale was strong (Cronbach alphas = .79, .86, .85, .89, and .93, respectively).

2.2.4 Post-training agency practices supportive of learning (assessed at follow-up)

2.2.4.1 Supportive Practices Form

This self-report form assessed practitioner perception of supportive practices at an agency during a specified time period. The form includes eight items, each corresponding to an investigator-identified supportive practice (e.g., creation of a regular MI discussion group), to which practitioners responded with a forced-choice dichotomy (“did or did not occur at my agency”). We tallied affirmative responses to produce a summary score (ranging from 0 to 8) for each individual, and created an organizational climate score for each agency as described above. Internal consistency of the eight items in the current sample was acceptable (Cronbach alpha = .78).

2.2.5 Training outcomes (assessed at baseline, post-training, and follow-up)

2.2.5.1 Standardized Patient (SP) interview

This protocol consisted of a single SP portraying a recently referred client with background characteristics common for agency clientele. SPs provide reliable stimuli for the assessment of MI skills and avoid the problems with missing data that are common when samples of clinical work are requested (Baer et al., 2004). The interview lasted 20 minutes and was audio-recorded. Raters scored recordings using standard global indices and behavior counts from the Motivational Interviewing Treatment Integrity scale, or MITI (Moyers, Martin, Manuel, Hendrickson, & Miller, 2005). The MITI includes global ratings of empathy and MI spirit (7-point Likert scales) and behavioral tallies of: 1) giving information, 2) closed questions, 3) open questions, 4) simple reflections, 5) complex reflections, 6) MI-adherent behavior, and 7) MI non-adherent behavior. The originators of the MITI (Moyers, Martin, Manuel, & Miller, 2004) propose a set of conceptually derived summary indices for measuring elements of MI skillfulness. For the sake of parsimony in the current manuscript, we focus evaluation of skill outcomes from SP interviews on two of these indices: 1) the MI Spirit global rating, and 2) the reflection-to-question ratio (R/Q). MI Spirit was a 7-point (1 = low, 7 = high) rating for the whole session of the degree to which a trainee was “collaborative”, “evoked client ideas”, and “supported client autonomy”. R/Q summed counts of both simple (repeat or limited paraphrase) and complex (using metaphor or an amplification of meaning) reflections and divided this by the sum of both open-ended and closed (asking for a yes-no or numerical answer) questions.
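The R/Q index defined above can be illustrated in a few lines (the dictionary keys are our own labels for the MITI behavior counts, not the instrument's official field names, and the counts are hypothetical):

```python
def reflection_to_question_ratio(counts):
    """R/Q = (simple + complex reflections) / (open + closed questions)."""
    reflections = counts["simple_reflections"] + counts["complex_reflections"]
    questions = counts["open_questions"] + counts["closed_questions"]
    # undefined when a session contains no questions at all
    return reflections / questions if questions else float("nan")

# hypothetical behavior counts from one coded 20-minute SP interview
session = {"simple_reflections": 6, "complex_reflections": 4,
           "open_questions": 5, "closed_questions": 15}
print(reflection_to_question_ratio(session))  # 10 reflections / 20 questions = 0.5
```

Higher values indicate an interview style weighted toward reflective listening rather than questioning, the direction MI training aims to move.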

2.3 Training processes

Six treatment agencies were paired according to size and distance from the University of Washington and randomly assigned in pairs to receive one of two training formats: 1) traditional two-day workshop, or 2) context-tailored training (CTT). Elements common to both training formats were: A) use of two experienced trainers (DR, CD), both licensed psychologists and members of the Motivational Interviewing Network of Trainers (MINT); B) use of traditional components (e.g., didactic presentations, group exercises, role-plays); C) content focus on central components of MI; and D) overall direct training time of approximately 15 hours. Traditional two-day workshops distributed training time over two workdays. In CTT, however, training time was distributed into five 2- to 3-hour group training sessions offered at two-week intervals, between which each participant completed a 10- to 15-minute audio-recorded skills-practice interview with a medical actor. Due to the emphasis on experiential learning in CTT, we allotted time in group training sessions to engage trainees in designing skills-practice interviews to represent common work challenges. We integrated presentation of central MI components into management of these challenging work situations. Skills-practice interviews took place with a medical actor (one not involved in outcome assessments) who provided immediate, process-related feedback. The trainer reviewed recordings of these practices and provided the participant brief written feedback at the next training session. Trainers also encouraged participants to listen to their own practice tapes. Group training sessions also devoted time to discussion of participant experiences and feedback from actors and trainers.

CTT attempted to build agency-level support for use of MI following training. Trainers and the trainee group identified a person or persons (if the agency had distinct treatment units) to act as “MI Champion.” Trainers met at least once with this individual to offer suggestions and consultation regarding how the champion could continue interest and learning of MI after training concluded. No such individuals were identified at standard workshop agencies.

2.4 Training and Oversight of Independent Raters

2.4.1 MITI-Rating

Three student assistants served as raters of MITI-coded interviews. Initial rater training included a MITI workshop facilitated by Terri Moyers, followed by a ten-week period in which raters 1) became acquainted with study procedures, 2) read relevant materials (including MITI manuals), 3) took part in weekly group discussions of measurement issues, 4) scored pilot interview recordings for experiential practice, and 5) resolved scoring discrepancies elicited by practice reviews. Total training time was approximately 40 hours. The tape review process occurred over the ensuing 18 months, during which raters received an hour of weekly group supervision with investigators consisting of consensus review of sample stimuli and real-time discussion of scoring applications. Raters were blinded to all identifiers (e.g., practitioner, agency, timing of assessment) to minimize preconceived notions of training effects, and we periodically evaluated inter- and intra-rater reliability during the review period to detect individual and/or consensual drift.

2.4.2 Rating reliability

Intraclass correlations (ICCs) assessing consistency between each pair of independent raters (i.e., inter-rater reliability) ranged from .43 to .94 (M = .67, SD = .16), and those assessing consistency of each rater over time (i.e., intra-rater reliability) ranged from .53 to .96 (M = .79, SD = .13). Cicchetti (1994) notes acceptable ranges for ICC values (.40-.59 = fair, .60-.74 = good, .75-1.00 = excellent).
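The Cicchetti benchmarks quoted above can be captured in a small lookup helper (the function name and the label "poor" for values below .40 are ours, purely illustrative):

```python
def cicchetti_label(icc):
    """Map an ICC to the Cicchetti (1994) benchmark categories."""
    if icc >= 0.75:
        return "excellent"
    if icc >= 0.60:
        return "good"
    if icc >= 0.40:
        return "fair"
    return "poor"

print(cicchetti_label(0.67))  # mean inter-rater ICC reported above -> "good"
print(cicchetti_label(0.79))  # mean intra-rater ICC reported above -> "excellent"
```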

2.5 Analysis strategy

The two primary outcome measures from SP interviews (MI Spirit and R/Q) were each analyzed separately. Because trainees were nested within agencies we assessed the proportion of total variance on outcomes that could be attributed to agencies. This ranged from .03 to .39, depending on the variable and time-point. Hence, all analyses used Hierarchical Linear Modeling (HLM), as implemented in SAS v9.1.3 PROC MIXED. In addition to estimating effects at both the individual and agency levels, analyses utilized HLM's additional advantages of allowing for the clustering of repeated measures within individuals, the specification of the best fitting covariance structure, and the inclusion of all participants (even those without complete data from all follow-up assessments).
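The proportion of outcome variance attributable to agencies, which motivated the multilevel approach, can be illustrated with a simple sums-of-squares decomposition. This is only a sketch under hypothetical scores; the HLM software estimates the analogous quantity from fitted variance components rather than raw sums of squares:

```python
from statistics import mean

# hypothetical MI Spirit scores clustered by agency
scores = {"agency_a": [3.0, 4.0, 3.5], "agency_b": [2.0, 2.5, 2.4]}

all_scores = [s for group in scores.values() for s in group]
grand_mean = mean(all_scores)

# between-agency sum of squares vs. total sum of squares
between_ss = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in scores.values())
total_ss = sum((s - grand_mean) ** 2 for s in all_scores)

# share of total variance lying between agencies
print(round(between_ss / total_ss, 2))
```

When this share is non-trivial (here the reported range was .03 to .39), treating trainees as independent observations would understate standard errors, which is the rationale for the nested HLM models.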

There were three categories of predictors: experimental condition, psychological climate, and organizational climate. The number of predictors of interest relative to the sample size, as well as the relatively small number of agencies, precluded the inclusion of all three of these categories of variables in a single regression analysis. Hence, we performed three separate regressions for each of the two outcomes. All regression analyses included, and hence controlled for, participant demographic characteristics (education, years of counseling experience), previous MI training (attendance at a prior MI workshop), and personal model for understanding substance abuse. Regression 1 tested the effects of experimental condition (CTT vs. workshop), Regression 2 tested the effects of psychological climate, and Regression 3 tested the effects of organizational climate. In other words, Regression 2 tested whether individuals who rated climate more favorably than others in their agency had, for example, more change in MI Spirit, while Regression 3 tested whether organizations with better overall climate scores compared to other agencies had more change in overall MI Spirit. For reporting parsimony, we then estimated a final regression model for each outcome that controlled for individual characteristics and included the significant findings for that outcome.

The three-level HLM analyses (repeated measures nested in individuals nested in agencies) provided tests of the effects of time, the main effect of each predictor, and the interaction of each predictor by time. The main effect for a predictor reflects baseline differences in the outcome due to that predictor. While no baseline differences in MI skills were expected based on experimental condition, it was likely, and of substantive interest, that other factors might be related to baseline scores on outcomes (e.g., that more educated people might have superior pre-training MI skills). An additional regression analysis evaluated organizational supportive practices in the prediction of learning above and beyond other predictors. This was added subsequent to the primary analyses because supportive activities occurred post-training and may have been influenced by the other predictors, all of which were measured at baseline. Given the timing of its occurrence, the supportive-practices measure was treated as a time-varying predictor; that is, coded as 0 at baseline and post-training for all participants, and then as the number of activities that occurred over the course of the follow-up period.
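The time-varying coding described above can be sketched as follows (a hedged illustration: the long-format row layout, participant IDs, and activity tallies are hypothetical):

```python
def code_supportive_activities(rows, tally_by_id):
    """rows: (participant_id, timepoint) pairs in long format, with
    timepoint in {'baseline', 'post', 'followup'}. Returns the predictor
    value for each row: 0 before follow-up, the 0-8 activity tally at
    follow-up, as in the coding scheme described above."""
    return [tally_by_id[pid] if t == "followup" else 0 for pid, t in rows]

rows = [("p1", "baseline"), ("p1", "post"), ("p1", "followup"),
        ("p2", "baseline"), ("p2", "post"), ("p2", "followup")]
print(code_supportive_activities(rows, {"p1": 5, "p2": 2}))  # [0, 0, 5, 0, 0, 2]
```

Coding the predictor this way lets the model attribute any association between supportive practices and skill scores only to the follow-up assessment, where those practices could plausibly have had an effect.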

3. Results

3.1 Sample Description

One hundred forty-four participants consented to participate in the study and completed the baseline assessment prior to the initial training session at their agency. In total, recruitment represented 72% (144 of 201) of eligible staff (25 of 36 [69%], 27 of 30 [90%], 12 of 16 [75%], 29 of 35 [83%], 35 of 60 [58%], and 16 of 24 [67%], respectively, across the six agencies). Differences in rates of participation did not appear to reflect different levels of interest among agencies. Size, complexity, and scheduling issues at each agency influenced what proportion of staff made themselves available for recruitment. Of the initial 144, nine withdrew prior to conclusion of training processes, with common reasons being competing personal and agency time demands, diminished interest in training content, extended absence due to personal or family illness or vacation, and change of employment and/or residence. At some agencies, non-clinicians (n = 12) participated in the training and the study at the encouragement of agency directors. Excluding these individuals from training evaluation, a sample of 123 practitioners provided a post-training and/or follow-up training outcome assessment.

The sample at baseline reported a mean age of 46.1 years (SD = 11.6). Gender distribution was predominantly female (70%). Reported ethnicity was as follows: 72% Caucasian, 11% Multi-Ethnic, 7% African-American, 3% Native American, 1% Pacific Islander, 1% Asian, 4% Other, and 1% chose not to identify. Years of clinical service ranged up to 38, with a mean of 9.5 years (SD = 8.4). Agency tenure ranged up to 35 years, with a mean of 5.0 years (SD = 5.7). Prior education was reported as the highest degree attained, with 30% having attained a graduate degree (e.g., M.A./M.S.W./M.P.H., Ph.D./Psy.D., M.D.), 24% a bachelor's degree (e.g., B.A./B.S./B.S.W.), 30% an associate's degree (e.g., A.A.), and 15% a high school diploma or equivalent (e.g., G.E.D.). Forty-seven percent of the sample reported having attained licensure as a chemical dependency professional (C.D.P.). Personal recovery status was also reported, with 46% of the sample indicating that they were in recovery themselves, 44% indicating that they were not, and the remaining 10% declining to respond. In terms of prior MI exposure, 11% of the sample reported prior attendance at a two-day workshop, and 50% reported some other prior exposure, such as via self-study methods.

3.2 Retention, participation, and satisfaction

The training outcome sample excluded 5 participants missing data on at least one predictor but included all the remaining 118 participants, including those missing some follow-up data. Of the 118 participants, 109 completed post-training assessments (92.3%) and 104 the three-month follow-up (88.1%); 112 (94.9%) provided data for at least one of the follow-up points. While workshop participants attended an average of 99% of the training, CTT participants attended an average of 84%, a statistically significant difference, F(1, 120) = 17.98, p < .001. CTT participants completed an average of 89% of their four skills practice interviews.

At post-training, satisfaction across conditions was high, and workshop and CTT participants did not differ in satisfaction scores (all ns). On a 7-point scale, counselors reported high post-training satisfaction with format, M(SD) = 5.9(0.9); applicability of training to one's work, M(SD) = 6.0(0.9); satisfaction with the trainers, M(SD) = 6.0(1.0); and overall satisfaction, M(SD) = 6.1(0.9). However, modest differences were observed between conditions when satisfaction was reassessed at the three-month follow-up, with CTT participants reporting greater satisfaction than workshop participants with respect to format, M(SD) = 5.9(0.8) and 5.5(1.1), F(1, 104) = 3.56, p = .06; applicability of training to one's work, M(SD) = 5.9(0.8) and 5.3(1.2), F(1, 104) = 8.69, p < .01; satisfaction with the trainers, M(SD) = 6.0(1.0) and 5.6(1.2), F(1, 104) = 2.98, p = .09; and overall satisfaction, M(SD) = 6.2(0.8) and 5.7(1.2), F(1, 104) = 6.82, p < .01. CTT participants also rated their satisfaction with simulated-patient practice highly at both post-training and the final follow-up, M(SD) = 6.5(0.9) and 6.6(0.6).

3.3 Preliminary analyses

3.3.1 Effect of attendance (CTT only)

For CTT trainees, there was no evidence that the amount of training received affected MI skillfulness. Analyses showed no effect of attendance on change over time in MI Spirit, F(2, 62) = 0.41, ns, or R/Q, F(2, 67) = 2.07, ns.

3.3.2 Baseline associations with experimental condition

The two groups did not differ in MI skillfulness (MI Spirit, F(1, 4) = 0.72, ns, and R/Q, F(1, 4) = 3.56, ns) prior to training.

3.4 Primary Analyses

3.4.1 Effects of experimental condition, demographics, and disease models beliefs (Regression 1)

For MI Spirit, a statistical trend was seen for the main effect of time, representing overall improvement from baseline, F(2, 206) = 2.56, p = .08. On a 7-point scale, MI Spirit scores averaged 3.47(1.23) at baseline, rose to 4.33(1.23) post-training, and averaged 3.95(1.28) at follow-up. Training type was associated with differential change over time, F(2, 204) = 4.31, p < .05, favoring the workshop condition. CTT participants showed improvement from baseline to post-training, M(SD) = 3.49 (1.25) to 4.08 (1.30), and then maintenance at follow-up, M(SD) = 4.11 (1.17). In comparison, workshop participants showed greater baseline to post-training change, M(SD) = 3.45 (1.23) to 4.57 (1.13), and then greater post-training to follow-up reductions, M(SD) = 3.78 (1.39); the end result was three-month scores that were not statistically different from those of CTT participants, F(1, 96) = 0.37, ns (see Figure 1). In sum, although workshop participants increased their MI Spirit scores more from before to immediately after training than did the CTT group, they also showed greater post-training to follow-up reductions, and the two groups ended up no different from each other.

Figure 1
MI Spirit across baseline, post-training and follow-up for CTT vs. workshop

Greater baseline MI Spirit was seen among those with higher education, F(1, 57) = 9.57, p < .01, and lower disease model beliefs, F(1, 68) = 10.70, p < .01. No effect on baseline MI Spirit was seen for prior workshop attendance, F(1, 112) = 0.34, ns, or years of counseling experience, F(1, 103) = 0.31, ns. None of these factors interacted significantly with time: education, F(2, 2.07) = 1.58, ns; disease model beliefs, F(2, 203) = 0.27, ns; workshop attendance, F(2, 208) = 0.07, ns; and counseling experience, F(2, 203) = 1.65, ns. That is, these factors did not appear to influence participants' learning of skills during the study.

In models evaluating R/Q, there was a significant main effect of time, representing overall improvement from baseline, F(2, 150) = 3.56, p < .05. R/Q scores averaged 0.51(0.39) at baseline, improved to 1.24(1.16) at post-training, and were 0.88(0.73) at follow-up. Training type was unrelated to change over time, F(2, 159) = 0.50, ns (see Figure 2); that is, the CTT and workshop groups did not show differential skill acquisition on this measure. As with MI Spirit, higher education was related to higher baseline R/Q scores, F(1, 127) = 12.55, p < .001, but not to change over time, F(2, 167) = 2.36, ns. Higher disease model beliefs predicted lower baseline R/Q, F(1, 129) = 14.56, p < .001, and (unlike with MI Spirit) less R/Q increase over time, F(2, 155) = 4.30, p < .05. This smaller change in R/Q as a function of disease model beliefs was seen from baseline to both follow-ups: post-training, t(160) = 1.96, p < .05; and three-month, t(117) = 2.61, p < .01. Thus, prior to training, trainees who identified more strongly with a traditional disease model of addiction offered fewer reflections relative to questions than did those with less disease model identification, and they also tended to show fewer gains in this skill over time. Prior workshop attendance did not predict baseline R/Q, F(1, 119) = 0.52, ns, or change over time, F(2, 147) = 2.46, ns. Years of counseling experience also did not predict baseline R/Q, F(1, 123) = 1.73, ns, or change in R/Q, F(2, 150) = 0.94, ns.
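Conceptually, the R/Q outcome is simple arithmetic: the number of coded reflections divided by the number of coded questions in an interview. The sketch below illustrates that computation only; the function name and the flat list of behavior codes are hypothetical, and the study's MITI-based coding is more fine-grained than this.

```python
def rq_ratio(utterance_codes):
    """Reflection/question (R/Q) ratio from a list of behavior codes.

    Assumes 'R' marks a coded reflection and 'Q' a coded question;
    other codes are ignored. Illustrative only.
    """
    reflections = sum(1 for c in utterance_codes if c == "R")
    questions = sum(1 for c in utterance_codes if c == "Q")
    if questions == 0:
        # No questions coded: report the reflection count itself
        # rather than dividing by zero.
        return float(reflections)
    return reflections / questions

# Example: 5 reflections to 10 questions gives R/Q = 0.5,
# close to the baseline mean of 0.51 reported above.
print(rq_ratio(["R"] * 5 + ["Q"] * 10))  # → 0.5
```

On this metric, a counselor who asks twice as many questions as they offer reflections scores 0.5; values above 1.0 indicate more reflections than questions.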

Figure 2
Reflection/question ratio across baseline, post-training and follow-up for CTT vs. workshop

3.4.2 Effects of Psychological and Organizational Climate (Regressions 2 and 3)

For parsimony, the results for these models are reported in terms of statistical significance only; the final models provide greater detail on effect sizes. As previously mentioned, Regressions 2 and 3 included all control variables (i.e., education, disease model beliefs) that had been included in Regression 1. Results for control variables remained consistent across analyses.

3.4.2.1 Psychological climate (Regression 2)

There were no effects of psychological climate on baseline MI Spirit or R/Q (all ns). One significant interaction with time was observed, for staff adaptability predicting MI Spirit, F(2, 98) = 3.37, p < .05. The higher a trainee's report of staff adaptability, the lower the rate of positive change from baseline to post-training, t(197) = 2.29, p < .05, and to the three-month follow-up, t(201) = 2.05, p < .05.

3.4.2.2 Organizational climate (Regression 3)

Baseline MI Spirit was not associated with organizational climate measures. Significant differential change in MI Spirit was seen for three organizational climate variables: organizational autonomy, F(2, 105) = 4.67, p < .05; staff efficacy, F(2, 102) = 5.72, p < .01; and organizational change, F(2, 103) = 5.82, p < .01. In all three cases, effects were not significant from baseline to the post-training assessment but were significant from baseline to the three-month assessment. Higher organizational autonomy, t(107) = 2.61, p < .05, and staff efficacy, t(102) = 2.62, p < .05, predicted less positive change in MI Spirit, while higher organizational change predicted more change in MI Spirit, t(107) = 2.61, p < .05. In sum, in agencies where staff perceived higher levels of autonomy and staff efficacy, trainees showed less retention of gains in MI Spirit; in agencies that trainees viewed as open to change, trainees showed greater retention of gains in MI Spirit.

With respect to staff R/Q, no relationships with baseline skills were observed. Significant interactions with time were observed for organizational autonomy, F(2, 102) = 6.46, p < .01, and organizational change, F(2, 101) = 5.21, p < .01. Higher organizational autonomy predicted less positive change from baseline to post-training, t(148) = 3.32, p < .01, and to three months, t(82) = 2.49, p < .05, while higher organizational change predicted more positive change from baseline to post-training, t(149) = 3.10, p < .01, and to three months, t(80) = 1.93, p = .06. Trainees' reflection skills improved less and were retained less in agencies perceived as offering greater autonomy, and they improved more in organizations seen as open to change.

3.4.3 Final models

Table 1 displays standardized regression coefficients and associated significance levels for the final MI Spirit and R/Q models. For completeness, these include the main effects (reflecting effects on baseline levels of the outcome) and interactions with time (reflecting differential change). The models included only significant predictors from regressions, and controlled for individual participant characteristics.

Table 1
Standardized betas (and 95% confidence intervals) from HLM models predicting standardized patient interview measures

For MI Spirit, the final model reproduced effects observed previously, with one exception: staff education, already shown to be related to higher baseline scores, also predicted less improvement from baseline to three months, t(200) = 2.18, p < .05. Trainees with higher education started the project with sessions that reflected more of the spirit of MI, but they then gained less on this skill measure than did those with less education, possibly reflecting a ceiling effect for this measure. Also, the psychological climate variable of staff adaptability was related to greater baseline MI Spirit, t(257) = 2.10, p < .05, but again to reduced change at follow-up. Findings for organizational climate mirrored those above. For R/Q, effects seen in prior models were again observed, with the exception that those who had previously participated in an MI workshop showed enhanced change at follow-up, t(110) = 2.15, p < .05.

3.5 Analysis of Supportive Practices

Variability was observed in the MI-supportive activities counselors reported, M(SD) = 2.56(2.18), with significant differences between agencies, F(5, 94) = 10.18, p < .001. However, this variability was not due to training condition, F(1, 97) = 1.44, ns, despite procedures to encourage such activities within CTT agencies. In fact, two workshop agencies were the most active in supporting MI after training, despite the absence of specific encouragement to do so.

The final models were re-estimated, adding supportive practices by agencies between post-training and the three-month follow-up as a predictor of MI skills assessed at the final follow-up. Similar findings emerged for both outcomes. Supportive activity predicted greater three-month MI Spirit, standardized β = .36, t(66) = 2.46, p < .05, and R/Q, standardized β = .23, t(101) = 1.90, p = .06. Thus, trainees who experienced more MI-supportive activities at their agencies showed higher levels of MI skillfulness at the three-month follow-up. Other predictors remained significantly related to three-month outcomes, with magnitudes similar to those in the Table 1 models, with one exception. The prediction of three-month outcomes by the ORC variable of Organizational Change was smaller (the Table 1 standardized β for MI Spirit changed from .55 to .28, and for R/Q from .28 to .11) and no longer significant, t(189) = 1.28 and t(106) = 0.67 (both ns). This finding raises the possibility of a mediating pathway, in which agencies rated higher for a climate of organizational change engage in more supportive activities, which in turn aid retention of MI learning. Indeed, we noted a trend toward association between ratings of organizational change and supportive activities: mean supportive activities were noticeably higher in the three agencies with the greatest organizational change scores than in the three with the lowest, M(SD) = 3.45(1.13) vs. 1.75(0.47), t(112) = 1.67, p = .09. Using Krull and MacKinnon's (2001) formulas for calculating mediated effects in multilevel modeling, the overall mediated effect of organizational change via supportive activities was standardized β = .27 (SE = .122, p < .05) for MI Spirit and standardized β = .18 (SE = .040, p < .001) for R/Q. This suggests that agencies with an openness to change positively influenced MI skill learning through their use of post-training supportive activities.
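The product-of-coefficients logic behind this mediation test can be sketched as follows. The path estimates below are illustrative placeholders, not the study's actual coefficients; they are chosen only so that their product reproduces the reported MI Spirit mediated effect of .27. The standard error shown is the first-order (Sobel-type) approximation that Krull and MacKinnon's (2001) multilevel formulas generalize.

```python
import math

def mediated_effect(a, se_a, b, se_b):
    """Product-of-coefficients mediated effect and its first-order
    (Sobel-type) standard error.

    a: path from predictor (here, organizational change) to mediator
       (supportive activities), with standard error se_a.
    b: path from mediator to outcome controlling for the predictor,
       with standard error se_b.
    All values here are illustrative, not the study's raw estimates.
    """
    effect = a * b
    se = math.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    z = effect / se
    return effect, se, z

# Hypothetical standardized path estimates chosen so a*b = .27,
# matching the reported mediated effect for MI Spirit:
effect, se, z = mediated_effect(a=0.45, se_a=0.15, b=0.60, se_b=0.18)
print(round(effect, 3), round(se, 3), round(z, 2))  # → 0.27 0.121 2.23
```

A z above roughly 1.96 corresponds to p < .05, consistent with the significance reported for the MI Spirit mediated effect.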

4. Discussion

This study sought to develop and conduct a pilot test of a training model based in technology transfer. It also targeted addiction treatment personnel recruited from within intact agencies. The training and transfer model incorporated participant input, tailored training goals based on work challenges, practice with simulated patients, and organizational support for training implementation. We conducted a preliminary test of this training approach to evaluate acceptability and estimate effects on participants' learning of MI skills. Although the ultimate goal is improvement in client engagement and treatment outcomes, this study sought, as a first step, to determine whether counselor skills could be improved. In addition, we examined the impact of participant and organizational characteristics on learning.

A comparison of changes in MI skillfulness post-training and at the 3-month follow-up between agencies assigned to CTT versus workshop training found few differences in training outcomes. Counselors trained with either model showed significant improvements in MI skills post-training. These skills were generally maintained over the 3-month follow-up period, although with some decrement. For one of two primary outcomes, R/Q from standardized patient interviews, no differences were observed between training groups. Our other primary outcome revealed a small and apparently temporary effect favoring workshop training. The small and time-limited nature of the effect, and the fact that the other outcome analysis did not favor the workshop condition, suggest caution in concluding that the workshop was superior to the CTT model. In any case, our data offer no support for our original hypothesis that better training outcomes can be achieved with the CTT model. Despite a trial with only six agencies, the consistent lack of skill-based benefits from CTT makes it unlikely that this result is a product of the study being underpowered. Furthermore, despite high participation rates and somewhat greater participant satisfaction with the CTT approach, the CTT model was more costly than the workshop to deliver, requiring multiple trainer visits, greater trainer travel time, and repeated tape review, as well as the hiring, training, time, and travel of simulated patients. Thus, we conclude that the data support use of the less costly training model, standard workshop training.

Our data, however, point to other factors that appear important for developing the most effective processes for the adoption of evidence-based practices. In this study, two trainee characteristics were related to MI skillfulness. Trainees with higher levels of education and lower endorsement of disease model beliefs initially had higher levels of MI skills. Albeit with some variation across statistical tests, baseline differences associated with these characteristics remained at post-training and follow-up assessments. In all but one analysis, either no differential learning occurred (differences in MI skills attributable to education or disease model beliefs remained stable over time) or lower initial scores were associated with less learning over time (differences attributable to education or disease model beliefs increased over time).

Prior studies have been unsuccessful in relating individual and demographic differences to training outcomes (Baer et al., 2004; Miller et al., 2004). The findings pertaining to education and disease model endorsement in the current study are likely due to the recruitment of practitioners from intact organizations. Of the demographic factors that can be compared across studies, educational attainment differs most clearly. In the current study, 45% had not completed a bachelor's degree and 30% had master's degrees or higher; in contrast, in Miller et al.'s (2004) sample, 66% reported master's degrees and an additional 19% reported doctoral degrees. Without the involvement of colleagues from the agency, participants in the current study might not otherwise have been interested in or have sought training in MI. Our approach to recruitment may better reflect the real-world workforce, and it reveals factors associated with training that may be targeted in future training programs.

Though one may argue that MI and 12-step treatment are not incompatible (Cloud et al., 2006), it is apparent from these data that those with more traditional disease model training struggled more in learning MI skills. Trainers observed that some counselors had more difficulty “unlearning” old habits, and some had difficulty believing that concepts such as “rolling with resistance” were truly beneficial. These counselors expressed the view that their patients were in a life-or-death struggle that required immediate and direct confrontation. For some clinicians who endorse disease model beliefs, MI was neither “compatible” with current practice nor seen as having “relative advantage,” two factors thought by Rogers (2003) to increase the likelihood of adoption of innovations. It may be desirable to screen out counselors for whom innovations are incompatible. Alternatively, additional and presumably preparatory training may be required to directly address conceptual differences between approaches.

In this study, both the standard workshop and CTT training were delivered at the treatment agency. In both cases, most clinical staff at the agency were eligible to volunteer, training was delivered during the work-day at the agency, and co-trainees included only one's colleagues. Given this, we thought it possible that training response might depend on characteristics of the agency or how those characteristics are perceived by the trainees. Individual perceptions of organizational variables for the most part were not predictors of MI learning. Organizational-level climate factors, however, were significantly predictive of change in MI skills as assessed by blind tape raters. Despite our ability to analyze only six organizations, trainees at organizations with higher perceived levels of organizational change (perceived as encouraging staff to try new things) showed significantly greater increases in MI Spirit and R/Q with training. In contrast, within organizations rated higher on autonomy, MI Spirit and R/Q showed less increase over time. Organizations with higher rated staff efficacy also showed a lower rate of positive change in MI Spirit. Autonomy reflects counselors being left on their own to determine the best way to treat their patients whereas efficacy reflects staff confidence in their skills. Perhaps programs in which decisions are more individually driven and in which staff have a high degree of independence and confidence in their abilities provide less support and encouragement for changing treatment techniques. Although these findings should be considered preliminary given the small number of agencies included in this study, they are consistent with those recently noted by Simpson, Joe, and Rowan-Szal (2007), who reported that organizational climate measures, in particular openness to change, prospectively predicted staff satisfaction with training and self-reported utilization of a program innovation.

In CTT agencies, trainers identified and sought to support a champion or champions (for multiple treatment units) to carry out activities supporting implementation of MI in the agency after training concluded. We found that supportive activities occurred in both CTT and standard workshop agencies, even though we directly attempted (apparently unsuccessfully) to increase such activities only in CTT agencies. At three months, regardless of condition assignment, agencies with more supportive activities had higher levels of both MI Spirit and R/Q. In addition, supportive activities mediated the effect of one aspect of organizational climate, organizational change, on MI skills. Agencies with greater openness to new techniques engaged in more supportive activities, resulting in greater baseline to three-month skill increases. This finding emphasizes the importance of collective readiness and openness to change with regard to a new practice prior to training, and the apparent benefit of supportive practices within agencies for skills maintenance. More generally, these findings are consistent with the notion that organizational climate may impede or support practitioner readiness to adopt new practices (Simpson, 2002; Simpson & Flynn, 2007), as well as with models of learning transfer (Holton & Baldwin, 2000) that emphasize the importance of conditions of the learner and team before, during, and after training to achieve maximum learning and retention.

4.1 Limitations

This study was a developmental effort. We sought to build a training model, evaluate its feasibility, and estimate its effects on skills-based outcomes. It was beyond the scope of the study to evaluate the impact of training on client outcomes. We also did not employ sufficient numbers of agencies to conduct a fully powered test of training efficacy or of organizational-level factors. Effects of organizational-level variables should be interpreted especially cautiously, as one or two agencies may account for the observed effects. Replication of the organizational-level findings with a larger sample of addiction treatment agencies is clearly needed. Other limitations include the fact that the timing of assessments differed across conditions: the post-training assessment took place fewer weeks after the baseline assessment for workshop than for CTT participants, although time from post-training to the three-month assessment was equated. Skills assessment with simulated patients, although reliable in past studies (Baer et al., 2004), may not represent use of MI with actual clients well. Finally, despite appearing different from samples of individual volunteers studied previously, participating programs and staff may not represent the population of addiction treatment programs and staff. All were involved in the National Institute on Drug Abuse Clinical Trials Network (CTN) as Community Treatment Programs and may be more open to adopting research-based practice. However, none of these programs had participated in prior CTN trials using MI, and only three of the six had participated in any CTN trial.

4.2 Future Research and Practical Application

The results of this pilot study with six addiction treatment agencies do not support moving to a larger efficacy test of CTT. Rather, and perhaps paradoxically, our data suggest that massed training as provided in the two-day workshop can produce equivalent outcomes at a lower cost. With regard to identifying best training practices, there appear to be several levels at which one might intervene to achieve more uniform increases in skills and to assure better retention of skills following training. First, trainers should identify individuals whose current approach to treatment is incompatible with the new method and either screen these trainees out or address the compatibility and relative advantages of the approaches. Whether it is advantageous to attempt to train all practitioners to a new approach would depend on one's assessment of the costs of training and evidence that one approach has greater efficacy than the other. Regarding 12 Step and MI approaches, as operationalized in carefully conducted trials, there is at least some evidence of equivalent efficacy (Project MATCH, 1998).

Second, there is a need to prospectively examine organizational characteristics in addiction treatment as related to learning transfer and to determine how to intervene at the program level to increase agency supports for learning. There are models of professional behavior change that place emphasis on the organizational structures needed to support individual learners' attempts to adopt new practices (Fixsen et al., 2005; Holton & Baldwin, 2000; Holton, Bates, & Ruona, 2000; Simpson & Flynn, 2007), but more work is needed to determine whether and how interventions targeting organizational structures influence training outcome. This research will likely need to utilize hybrid longitudinal designs that include such elements as randomized comparisons of different training methods, evaluations of training and organizational process within intact practice settings, and multi-level mediator and outcome measurement (i.e., at the clinic, clinician, and client level). These studies pose design challenges. For example, the number of organizations needed to examine clinic-level variables may exceed the number for which a training “experiment” can feasibly be mounted.

Third, the role of supervised practice in helping counselors learn and retain treatment techniques has been emphasized by others (Miller, Sorenson, Selzer, & Brigham, 2006; Miller et al., 2004). It is noteworthy that a pattern of greater baseline skills followed by less improvement, as was noted for more educated participants, suggests ceiling effects for initial training in MI, at least as measured in this study. A ceiling effect for initial training also offers one explanation for the lack of training differences observed between training models. It is possible that only so much skill gain can be achieved through initial training, and subsequent consolidation and implementation of skills must occur over time. A model involving initial massed training followed by processes to support learning is consistent with data from Miller and colleagues (Miller et al., 2004) and recommended in recent writings about technology transfer in addictions (Miller et al., 2006), albeit not effective in all studies (Moyers et al., 2008). How best to achieve this in an affordable and sustainable way within the addiction treatment system is an important area for future study.

Acknowledgments

This research was supported by NIDA R01 DA016360. The authors thank Jeretta Scott, Amber Wolfe, and Deborah Kelly for their standardized patient and simulated client portrayals; Andrew Slade, David G. Peterson, and Avry Todd for efforts with data coding; and all participating agencies for their active support of this project.


References

  • Andrzejewski ME, Kirby KC, Morral AR, Iguchi MY. Technology transfer through performance management: The effects of graphical feedback and positive reinforcement on drug treatment counselors' behavior. Drug and Alcohol Dependence. 2001;63(2):179–186. [PubMed]
  • Backer TE, David SL, Soucy G. Introduction: Behavioral science knowledge base on technology transfer. In: Backer TE, David SL, Soucy G, editors. Reviewing the behavioral science knowledge base on technology transfer: NIDA Research Monograph No. 155. Rockville, MD: National Institute on Drug Abuse; 1995. [PubMed]
  • Baer JS, Rosengren DR, Dunn C, Wells E, Ogle R, Hartzler B. An evaluation of workshop training in motivational interviewing for addiction and mental health clinicians. Drug and Alcohol Dependence. 2004;73(1):99–106. [PubMed]
  • Bartholomew NG, Joe GW, Rowan-Szal GA. Counselor assessments of training and adoption barriers. Journal of Substance Abuse Treatment. 2007;33(2):193–199. [PMC free article] [PubMed]
  • Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment. 1994;6(4):284–290.
  • Cloud RN, Besel K, Bledsoe L, Golder S, McKiernan P, Patterson D, et al. Adapting motivational interviewing strategies to increase posttreatment 12-step meeting attendance. Alcoholism Treatment Quarterly. 2006;24(3):31–53.
  • Corrigan PW, Steiner L, McCracken SG, Blaser B, Barr M. Strategies for disseminating evidence-based practices to staff who treat people with serious mental illness. Psychiatric Services. 2001;52(12):1598–1606. [PubMed]
  • Davis D. Does CME work? An analysis of the effect of educational activities on physician performance or health care outcomes. International Journal of Psychiatry in Medicine. 1998;28(1):21–39. [PubMed]
  • Dipboye R. Organizational barriers to implementing a rational model of training. In: Quinones MA, Ehrenstein A, editors. Training for a rapidly changing workplace: Applications of psychological research. Washington, D. C.: American Psychological Association; 1997.
  • Fixsen D, Blase K, Naoom S, Crockett B, Frazier Y, Gilbert-Johnson TA, et al. Implementation research: A synthesis of the literature. University of South Florida; 2005.
  • Florin P, Giamartino GA, Kenny DA, Wandersman A. Levels of analysis and effects: Clarifying group influence and climate by separating individual and group effects. Journal of Applied Social Psychology. 1990;20:881–900.
  • Holton EF, Baldwin TT. Making transfer happen: An action perspective on learning transfer systems. In: Holton EFI, Baldwin TT, Naquin SS, editors. Managing and changing learning transfer systems, advances in developing human resources. 4. Vol. 2. 2000. pp. 1–6.
  • Holton EF, Bates RA, Ruona WEA. Development of a generalized learning transfer system inventory. Human Resource Development Quarterly. 2000;11(4):333–360.
  • Humphreys K, Greenbaum MA, Noke JM, Finney JW. Reliability, validity, and normative data for a short version of the Understanding of Alcoholism Scale. Psychology of Addictive Behaviors. 1996;10(1):38–44.
  • James LR, Jones AP. Organizational climate: A review of theory and research. Psychological Bulletin. 1974;81:1096–1112.
  • Joe GW, Broome KM, Simpson DD, Rowan-Szal GA. Counselor perceptions of organizational factors and innovations training experiences. Journal of Substance Abuse Treatment. 2007;33(2):171–182. [PMC free article] [PubMed]
  • Keller DS, Galanter M. Technology transfer of network therapy to community-based addictions counselors. Journal of Substance Abuse Treatment. 1999;16(2):183–189. [PubMed]
  • Kenny DA, LaVoie L. Separating individual and group effects. Journal of Personality and Social Psychology. 1985;48:339–348.
  • Krull JL, MacKinnon DP. Multilevel modeling of individual and group level mediated effects. Multivariate Behavioral Research. 2001;36:249–277.
  • Lehman WEK, Greener JM, Simpson DD. Assessing organizational readiness for change. Journal of Substance Abuse Treatment. 2002;22(4):197–209. [PubMed]
  • McFarlane WR, McNary S, Dixon L, Hornby H, Cimett E. Predictors of dissemination of family psychoeducation in community mental health centers in Maine and Illinois. Psychiatric Services. 2001;52:935–942. [PubMed]
  • Miller WR, Mount KA. A small study of training in motivational interviewing: Does one workshop change clinician and client behavior? Behavioural and Cognitive Psychotherapy. 2001;29:457–471.
  • Miller WR, Rollnick S. Motivational interviewing: Preparing people to change addictive behavior. New York: Guilford Press; 1991.
  • Miller WR, Rollnick S. Motivational interviewing: Preparing people for change. 2nd. New York: Guilford Press; 2002.
  • Miller WR, Sorenson JL, Selzer JA, Brigham GS. Disseminating evidence-based practices in substance abuse treatment: A review with suggestions. Journal of Substance Abuse Treatment. 2006;31:25–39. [PubMed]
  • Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. Journal of Consulting and Clinical Psychology. 2004;72(6):1050–1062. [PubMed]
  • Milne D, Gorenski O, Westerman C, Leck C, Keegan D. What does it take to transfer training? Psychiatric Rehabilitation Skills. 2000;4:259–281.
  • Morgenstern J, Morgan TJ, McCrady BS, Keller DS, Carroll KM. Manual-guided cognitive-behavioral therapy training: A promising method for disseminating empirically supported substance abuse treatments to the practice community. Psychology of Addictive Behaviors. 2001;15(2):83–88. [PubMed]
  • Moyers TB, Manuel JK, Wilson PG, Hendrickson SML, Talcott W, Durand P. A randomized trial investigating training in motivational interviewing for behavioral health providers. Behavioural and Cognitive Psychotherapy. 2008;36:149–162.
  • Moyers TB, Martin T, Manuel JK, Hendrickson SML, Miller WR. Assessing competence in the use of motivational interviewing. Journal of Substance Abuse Treatment. 2005;28:19–26. [PubMed]
  • Moyers TB, Martin T, Manuel JK, Miller WR. Motivational interviewing treatment integrity (MITI) coding system. Albuquerque, NM: University of New Mexico, Center on Alcoholism, Substance Abuse, & Addictions; 2004. http://casaa-0031.unm.edu/
  • Project MATCH Research Group. Matching alcoholism treatments to client heterogeneity: Project MATCH three-year drinking outcomes. Alcoholism: Clinical and Experimental Research. 1998;22:1300–1311. [PubMed]
  • Reid D, Everson J, Green C. A systematic evaluation of preferences identified through person-centered planning for people with profound multiple disabilities. Journal of Applied Behavior Analysis. 1999;32:467–477. [PMC free article] [PubMed]
  • Rogers EM. Diffusion of innovations. 5th. New York: The Free Press; 2003.
  • Rollnick S, Kinnersley P, Lane C. Context-bound training and the spice method: Adaptations and applications. Occasional Paper Series, No. 1; 2001.
  • Rollnick S, Kinnersley P, Butler C. Context-bound communication skills training: Development of a new method. Medical Education. 2002;36:377–383. [PubMed]
  • Rubel E, Shepell W, Sobell L, Miller W. Do continuing education workshops improve participants' skills? Effects of a motivational interviewing workshop on substance-abuse counselors' skills and knowledge. Behavior Therapist. 2000;23(4):73–77.
  • Saitz R, Sullivan LM, Samet JH. Training community-based clinicians in screening and brief intervention for substance abuse problems: Translating evidence into practice. Substance Abuse. 2000;21(1):21–31. [PubMed]
  • Schoener EP, Madeja CL, Henderson MJ, Ondersma SJ, Janisse JJ. Effects of motivational interviewing training on mental health therapist behavior. Drug and Alcohol Dependence. 2006;82:269–275. [PubMed]
  • Sholomskas DE, Syracuse-Siewert G, Rounsaville BJ, Ball SA, Nuro KF, Carroll KM. We don't train in vain: A dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology. 2005;73(1):106–115. [PMC free article] [PubMed]
  • Simpson DD. A conceptual framework for transferring research to practice. Journal of Substance Abuse Treatment. 2002;22:171–182. [PubMed]
  • Simpson DD, Flynn PM. Moving innovations into treatment: A stage-based approach to program change. Journal of Substance Abuse Treatment. 2007;33(2):111–120. [PMC free article] [PubMed]
  • Simpson DD, Joe GW, Rowan-Szal GA. Linking the elements of change: Program and client responses to innovation. Journal of Substance Abuse Treatment. 2007;33(2):201–209. [PMC free article] [PubMed]
  • Sorensen JL, Hall SM, Loeb P, Allen T, Glaser EM, Greenberg PD. Dissemination of a job seekers' workshop to drug treatment programs. Behavior Therapist. 1988;19:143–155.
  • Walters ST, Matson SA, Baer JS, Ziedonis DM. Effectiveness of workshop training of psychosocial treatments in addiction: A systematic review. Journal of Substance Abuse Treatment. 2005;29:283–293. [PubMed]