This empirical review of 55 studies evaluating six therapist training methods found that the methods differ both in how often they have been studied and in their effectiveness. Multiple studies have been conducted on multi-component treatment packages (20), workshops (19), and workshop follow-ups (9). Fewer studies have examined the utility of pyramid (train-the-trainer) models (3), reading (5), and self-directed trainings (7). Not only have multi-component treatment packages been studied most often, they also have most consistently demonstrated positive training outcomes relative to other training methods. Conversely, studies evaluating the utility of reading, self-directed trainings, and workshops have documented that these methods do not routinely produce positive outcomes. Workshop follow-ups help to sustain outcomes. Little is known about the impact of pyramid or train-the-trainer methods.
The literature is limited by a lack of methodological rigor and by multiple “pilot” studies characterized by small sample sizes, limited power, and the absence of comparison groups, random assignment, standardized assessment measures, and routine follow-up assessments. The inclusion of therapists who may not be representative of those providing services in community agencies has also compromised the conclusions that can be drawn in this area of investigation. Few follow-ups have been conducted, and those that have been are generally of short duration. Patient outcomes also are rarely included in studies. As a result, we are unable to evaluate treatment sustainability or the impact of training on patient outcomes. Despite these significant methodological flaws, what follows is a brief summary of key lessons learned from this research, including: a) the level of effectiveness for a variety of training methods, b) factors that appear to influence training outcome, c) methodological concerns, and d) recommendations for therapist training and training research.
Effectiveness of Different Training Methods
To date, the most common way to train community therapists in new treatment approaches like EBTs has been to ask them to read written materials (e.g., treatment manuals) or attend workshops. There is little to no evidence that either of these approaches results in positive, sustained training outcomes (i.e., increases in skill and competence). The most positive result of reading written materials was a slight increase in knowledge. Of the five studies reviewed that examined the utility of reading, none demonstrated significant behavior change or increases in competence. In fact, one study found no differences between those who read versus did not read a treatment manual (Rubel et al., 2000
). In terms of workshop attendance, this review confirms what others (e.g., Davis et al., 1999
; VandeCreek et al., 1990
; Walters, Matson, Baer, & Ziedonis, 2005
) have found: while workshop participants sometimes demonstrate increases in skills and (more often) knowledge, workshops are not sufficient for enabling therapists to master skills (Sholomskas et al., 2005
), maintain skills over time (Baer et al., 2004
; W. R. Miller et al., 2004
), or impact patient outcome (W. R. Miller & Mount, 2001a
).
Additional information is needed on the effect of self-directed training methods on therapist skills. For example, web-based trainings are cost-effective, convenient, and well liked by participants (National Crime Victims Research & Treatment Center, 2007
); however, there is only a small amount of data to support their effectiveness (Dimeff et al., 2009
). One study found a 36.3% increase in knowledge after completing a web-based training (National Crime Victims Research & Treatment Center, 2007
); however, this finding is based on a Type 3 study with significant methodological flaws. Using more rigorous methodology, Dimeff and colleagues (2009
; Type 1) demonstrated increases in knowledge, competence, and adherence at posttraining and 90-day follow-up using a sophisticated online learning method. In contrast, Sholomskas et al. (2005)
, a Type 2 study, found that web-based training was only slightly more effective than reading a treatment manual. There simply is not yet enough evidence to draw a conclusion about the utility of this training technique. Additional information on the interactive nature of the online method and other technologies (e.g., podcasts, archived webinars) will be important to gather given their potential broad application.
Workshop follow-ups that included observation, feedback, consultation, and/or coaching have improved adoption of the innovation (Type 2; Kelly et al., 2000
), retention of proficiency (Type 1; W. R. Miller et al., 2004
), and client outcome (Type 2; Parsons, Reid, & Green, 1993
), compared to workshops alone. Essentially, there does not seem to be a substitute for expert consultation, supervision, and feedback for improving skills and increasing adoption. The challenge is that these methods are resource intensive as they require the availability of expert consultation, clinical supervisors, and therapist time, all of which are costly for community-based mental health agencies. The implementation field needs to determine: a) how to sequence learning activities to be cost-effective without compromising training and treatment outcome, and b) how to use technology more effectively. Participants report liking web-based training (e.g., National Crime Victims Research & Treatment Center, 2007
); perhaps we can capitalize on technology to increase the availability of expert consultation. Additionally, utilizing cost-effective training methods initially might reduce the amount of expert consultation and supervision needed later. Hawkins and Sinha (1998)
found that consultations appeared to be more effective for therapists with a reasonable amount of pre-training knowledge, but this result is tentative given the methodological flaws of this Type 3 study. If these results were replicated, one strategy might be for therapists to complete a web-based training prior to attending a workshop. Once knowledge and skill competency levels were met, the therapist could proceed to participate in conference calls with an expert trainer and therapists from other agencies as a form of group supervision. Afterward, the therapist could receive individual supervision and expert consultation on selected cases. This type of training approach might minimize costs and maximize the potential for skill acquisition by sequencing training activities, imposing competency standards, and utilizing internet technology.
Pyramid or train-the-trainer training methods also have the potential to be time- and cost-effective; however, this method has received the least amount of rigorous examination, limited to only three studies (S. E. Anderson & Youngson, 1990
; Demchak & Browder, 1990
; Shore et al., 1995
). The ultimate question that remains is whether, even if effects are diluted as they cascade from supervisors to therapists, the improvements for consumers are still clinically meaningful. Chamberlain and colleagues currently are conducting a large-scale, prospective study to examine the effects of using a cascading model to transfer components of Multidimensional Treatment Foster Care (NIMH Grant # 060195) from a research setting (the Oregon Social Learning Center) to the foster care system in San Diego. Initially, the original developers of the model will train and supervise staff in San Diego to implement the model. In the second training iteration, the developers will have substantially less involvement. Similarly, Chaffin and colleagues are examining the utility of a cascading model for implementing in-home family preservation/family reunification services (NIMH Grant #001334). Providers from a well-trained model seed program will serve as implementation agents for sequential implementations at other agencies. Studies like these will contribute to a better understanding of the utility of cascading models as a training technique.
The familiar tone of Bickman’s observations (Bickman, 1996
) demonstrating that “more is not always
better” resonates in studies examining the effectiveness of multi-component training packages as a training method. Of the twenty studies in this area, the large majority found positive training outcomes. However, two studies (Bein, Anderson, Strupp, Henry, Schacht et al., 2000
; Brooker & Butterworth, 1993
) studies found that therapists did not achieve even basic competence in the treatment approach after extended (e.g., year-long) training initiatives. One study (Crits-Christoph et al., 1998
) found that only one of three therapies (CBT) demonstrated learning that carried over from training case to training case. This is somewhat disappointing given the substantial resources invested; however, it highlights the need to understand the utility of specific components of these training packages and the ease of training specific approaches.
Additional information is also needed on the treatment approaches in which therapists should be trained. Chorpita and Weisz (e.g., Chorpita, Becker, & Daleiden, 2007
) have focused on comparing the benefits of training therapists in a modular-based treatment versus individual EBTs, which will help to inform this area. As these authors have suggested, training therapists in one conceptual approach may have broader implications and be better received by therapists than training them in multiple EBTs.
Influences on Training Outcome Not Included in this Review
This review focused on the training design component of Transfer of Training Models (Machin, 2002
). Essentially, the focus was on the outcomes of what happens during training; however, the two remaining components of the model, what happens before (therapist characteristics) and after (organizational setting) training, are equally important, as has been highlighted by those implementing EBTs (Chaffin, 2006
). For example, Bruns and colleagues (2008)
maintain that a supportive organizational context and clinical supervisors who are trained to supervise EBTs are critical to the success of EBT implementation.
Therapist characteristics are often mentioned as key factors in treatment implementation and dissemination. After all, the characteristics of those who receive the training and provide the treatment could affect implementation on multiple levels such as treatment competence (Siqueland et al., 2000
) and client outcomes (Vocisano et al., 2004
). Most EBTs have been developed by and for doctoral-level clinical professionals (e.g., clinical psychologists, psychiatrists) within defined theoretical orientations (e.g., behavioral, cognitive-behavioral). In contrast, community mental health centers employ primarily master's-level therapists to provide most of the mental health therapy (Garland, Kruse, & Aarons, 2003
; Weisz, Chu, & Polo, 2004
). Therapists report their theoretical orientation to be “eclectic” (e.g., Addis & Krasnow, 2000
; Kazdin, 2002
; Weersing, Weisz, & Donenberg, 2002
) and that they value the quality of the therapeutic alliance over the use of specific techniques (Shirk & Saiz, 1992
).
Small sample sizes and lack of random assignment hinder our ability to determine the degree to which therapist characteristics are important and which characteristics in particular need to be addressed by trainers. Therapists are a diverse group with different learning histories, training backgrounds, and preferences. Understanding more about how to tailor training to maximize learning outcomes for diverse groups will be an important academic endeavor. Studies that randomly assign therapists to different training conditions could control for characteristics that are common to research studies, such as high motivation and interest in the treatment approach, while examining factors that could be addressed, such as knowledge, caseload size, and supervisor support, each of which has been raised as impacting training results. Examining therapist characteristics seems to be a missed opportunity within the existing research. Much more could be learned if researchers conducted studies of therapists or, at a minimum, included moderator analyses in their existing implementation studies.
Organizational difficulties are commonly cited in discussion sections and conceptual papers as challenges that have to be overcome in order to implement EBT (e.g., Bailey, Burbach, & Lea, 2003
; Fadden, 1997
); however, organizational factors are seldom studied. When they are examined, they are sometimes included only at the end of a study to help account for findings (e.g., post-study interviews; W. R. Miller et al., 2004
; Schoener et al., 2006
). Also missing in this literature are multiple studies on how organizational interventions (Glisson & Schoenwald, 2005
) could be used to enhance implementation successes. This may be an emerging area of study (e.g., Glisson et al., 2008
; Gotham, 2006
; Schoenwald, Chapman et al., 2008
).
Glisson and colleagues (Glisson, Dukes, & Green, 2006
) developed the Availability, Responsiveness, and Continuity (ARC) organizational intervention strategy to improve services in child welfare and juvenile justice systems, which is now being used to support the implementation of Multisystemic Therapy in rural Appalachia (Glisson & Schoenwald, 2005
). Similarly, the Addiction Technology Transfer Center of New England has implemented an organizational change strategy, Science to Service Laboratory, in 54 community-based substance abuse treatment agencies in New England (Squires et al., 2008
) since 2003.
Finding appropriate training and supervision has been cited as a primary barrier in dissemination of EBT (Conner-Smith & Weisz, 2003
; Essock et al., 2003
). Extensive reviews have been completed on the methodological limitations of research on clinical supervision (Ellis, Ladany, Krengel, & Schult, 1996
) as well as the efficacy of supervision in training therapists (Holloway & Neufeldt, 1995
). The links between clinical supervision and therapist efficacy and treatment adherence have rarely been studied (Ellis et al., 1996
; Lambert & Ogles, 1997
), with a few notable exceptions (e.g., Henggeler, Melton, Brondino, Scherer, & Hanley, 1997
; Henggeler, Schoenwald, Liao, Letourneau, & Edwards, 2002
). These existing studies indicate that: a) training supervisors has been shown to facilitate improvements in staff performance, b) supervision increases therapist knowledge and proficiency with complex therapeutic procedures, c) supervisor expertise is positively correlated with therapist treatment adherence, d) supervisor rigidity (over focus on the analytic process and treatment principles) is associated with low therapist adherence, e) supervisor feedback appeared to enhance the maintenance of staff members’ skills, and f) supervisors benefit from receiving specific instruction on how to supervise others in addition to instruction on treatment content. Even fewer studies have examined the relation of therapist performance and client outcome to clinical supervision (Holloway & Neufeldt, 1995
). A better understanding of how supervisors should be trained and included in the implementation process is needed.
Lack of Theory to Drive Implementation Research
This emerging area of research appears to be suffering from a lack of theory-driven studies. Researchers (Glisson & Schoenwald, 2005
; Gotham, 2004
) have highlighted the value of understanding the complex environment of which these training efforts are a part. Despite these recommendations, there remains a lack of systematic investigations tied together by a strong theoretical framework. Perhaps there is value in looking to other disciplines with similar missions to identify potentially relevant theoretical frameworks. For example, the medical field has tried to implement evidence-based practices. The field of behavioral health may benefit from incorporating organizational theories from this work, such as complex adaptive systems from complexity science (R. A. Anderson, Crabtree, Steele, & McDaniel, 2005
; Scott et al., 2005
).
This literature seems to be largely composed of a few significant dissemination/implementation efforts within specific topic areas such as behavioral family therapy for schizophrenia, substance abuse treatments including motivational interviewing, DBT, and behavioral interventions for individuals with developmental disabilities in residential treatment facilities. Considering that studies on these topics dominate a small implementation literature, generalizations across treatment approaches are difficult and questionable. For example, it is unclear whether results from training studies focused on implementing family therapy for schizophrenia are applicable to training studies focused on implementing motivational interviewing. Perhaps the method and dose of training necessary for adequate skill acquisition (competence in a treatment) is specific to each treatment. More intensive treatment approaches, and/or those that require skills significantly different from a therapist's current skill set, may require more intensive training methods or doses than less intensive treatment approaches and/or those that are similar to therapists' existing skill sets.
Alternatively, our observation that the literature is dominated by a few significant dissemination/implementation efforts may be due to the snowball search method employed, in which the reference section of each identified article was reviewed for studies possibly meeting inclusion criteria. To guard against this potential bias, several keywords and databases were used, as indicated in the methods section of this paper. Also, all relevant articles' reference sections were reviewed, even those of articles not included in this review (e.g., reference sections of conceptual papers). Therefore, although plausible, it appears unlikely that the snowball search method biased the selection of studies for inclusion in this review.
Measurement of Too Few Constructs
As previously noted, trainers often seek to improve both knowledge and skill; however, knowledge acquisition appears to be easier to demonstrate and is more commonly assessed than skill acquisition. The few studies that have assessed both knowledge and skill have found that these constructs do not always increase at the same rate, nor do they always correlate positively. Freeman and Morris (1999)
found statistically significant gains on a knowledge measure, but not on a clinical vignette requiring the application of that knowledge. Similarly, Byington et al. (1997)
found improvements in knowledge but not in the application of concepts. Reporting only knowledge can lead to a more optimistic or skewed (Baer et al., 2009
) view of training outcome than is accurate.
Exclusive Reliance on Therapist Self-report
Therapist self-report is commonly used to evaluate response to training; however, studies that have examined the validity of therapist self-report have found that therapists' reports of their own behavior (e.g., clinical proficiency) and of patient improvements were more optimistic than behavioral observations (Gregorie, 1994
; W. R. Miller & Mount, 2001b
; W. R. Miller et al., 2004
). Behavior observation ratings present challenges to studies (e.g., cost, time, sample adequacy), but the poor concordance between therapist and observer ratings suggests that therapist reports may be a supplement to, but not a substitute for, observer ratings (Carroll, Nich, & Rounsaville, 1998
, p. 307). In one study by Carroll et al. (2000)
, 741 sessions were rated by both a therapist and an independent rater. For 71% of those sessions, therapists' ratings were higher (more optimistic) than the independent raters'; 26% of ratings were identical; and 6% of ratings were higher for independent raters than for therapists.
Lack of Rigor in Study Design and Scope
As mentioned previously, multiple methodological flaws limit the conclusions that can be drawn from these studies. There also is significant heterogeneity among therapists, training methods, training protocols, interventions trained, and constructs assessed. All of this variability, combined with a lack of methodological rigor in completed studies, significantly complicates this area of inquiry. While this review sought to organize the literature in a meaningful way by using an established classification system (Nathan & Gorman, 2002
), the categorization of studies should not be treated as sacrosanct. Nathan and Gorman’s classification system is not the only system available for classifying research methodologies (e.g., Bliss, Skinner, Hautau, & Carroli, 2008
); however, it is the most comprehensive and widely disseminated system for rank ordering research methods by degree of scientific rigor. For example, Bliss and colleagues (2008)
describe different research methodologies, but do not rank order them.
We are just beginning to understand how to train community therapists in psychosocial treatment skills. Thus far, some methods appear to be more effective in changing knowledge and skill (e.g., multi-component training packages, feedback, consultation, supervision) than others (e.g., reading a treatment manual, attending workshops). The former methods are notable for their individualized approach, although they also carry other requirements and limitations (e.g., time, cost, intensity). Few studies have directly compared different methods, which may be one of the main directions for further work. One key question is which method most efficiently achieves initial therapist skill acquisition. Perhaps an even more important question is whether ongoing training and consultation (feedback) are necessary to achieve therapist adoption. An ongoing study by Chaffin and colleagues (NIMH Grant #065667) is evaluating the role of ongoing fidelity monitoring in the implementation of an EBT at the state level. Results may help to determine whether this component is essential for maintaining good adherence to a treatment model and, ultimately, improved client outcomes. Similar research might also examine the benefits of different training activities, such as supervisor training or the use of live coaching/consultation.
Complex but important questions originally proposed in the review by Ford (1979)
continue to remain unanswered, including: a) What is the minimal therapist skill proficiency level that could serve as a valid cutting point for predicting success or failure in training? b) Are there certain complex interpersonal skills that underlie treatment approaches that should be considered prerequisites for training? and c) Is there a way to match trainees with a training method to produce better training outcomes? However, even simpler questions remain, such as: d) What educational level (e.g., M.A./M.S., M.S.W., Ph.D.) is necessary to benefit from training? e) What is the impact of therapist training on client outcomes? f) How well do trained skills generalize from training cases to ‘real-world’ clients? g) Is the impact of training transient or long-term? and h) What program/agency or organizational mechanisms/structures/resources are needed to maximize the likelihood of successful therapist acquisition and adoption of a psychosocial treatment? To address some of these unanswered questions, Kolko and colleagues are currently completing a randomized effectiveness trial (NIMH Grant # 074737) to understand the potential benefits of training therapists who are diverse in background (BA vs. MA/MS/MSW) and service setting (in-home, family-based, outpatient) in one EBT for child physical abuse, Alternatives for Families: A Cognitive Behavioral Therapy. This same study will provide information on therapist knowledge, skills, attitudes, and real-world practices, and on the impact of these factors on family outcomes. It also will provide information on supervisor and organizational characteristics that impact implementation over time. Perhaps these efforts, as well as some of those included in this review, reflect a shift toward applying increasingly rigorous methods to the study of psychosocial treatment implementation.
Notably, five of the six studies included in this review that were rated as Type 1 were published after 2004 (Baer et al., 2009
; Dimeff et al., 2009
; Lochman et al., 2009
; W. R. Miller et al., 2004
; Moyers et al., 2008
).
In summary, surprisingly little research has been conducted to evaluate methods for training therapists to implement a broad array of psychotherapy techniques. Clearly, there is a need to develop and test innovative and practical training methods. Research on training methods should move beyond examinations of workshop training toward developing and testing training models that are innovative, practical, and resource effective. Large-scale, methodologically rigorous trials that include representative clinicians, patients, and follow-up assessments are necessary to provide sufficient evidence of effective training methods and materials. Without such trials, the field will continue to try to disseminate evidence-based treatments without evidence-based training strategies.
Ultimately, the current national focus on dissemination requires researchers to examine two issues together: 1) how well can community therapists be trained to effectively retain and implement new psychotherapy skills and knowledge, and 2) does the application of these new skills and knowledge increase positive outcomes for clients when delivered in community settings. Attention to the integration of these complementary objectives will hopefully promote advances in training technologies that can play a significant role in advancing the mental health competencies of community therapists and enhancing the quality of care delivered in everyday practice settings. Ultimately, just as “Evidence-based medicine should be complemented by evidence-based implementation” (Grol, 1997
), so too should evidence-based psychosocial treatments be complemented by evidence-based implementation.