Participants in a novel 3-h workshop on the assessment and management of suicide risk gained knowledge, confidence, and skill in risk formulation based on assessments obtained immediately after the workshop. Participants reported strong confidence that they could implement the skills taught in the workshop in their usual practice setting. While mastery of any clinical skill requires practice and feedback over time, our findings are consistent with previous studies, which indicated that focused continuing education workshops can increase knowledge, confidence, and documentation skills (Gask et al., 2006; McNiel et al., 2008; Oordt et al., 2009). The results of our study extend previous work in clinician education in several important ways.
This study was conducted with a relatively large multidisciplinary sample of clinicians with clinical responsibilities across the spectrum of acute and outpatient care. This diverse sample showed large improvements on all self-report and objective measures, with no significant differences in change scores among participants from acute, ambulatory, and other services. Trainees and staff/faculty clinicians benefited from the program to a similar degree. Participants saw the workshop as highly transferable to the workplace, regardless of their service setting. Together, these findings suggest that education in the core competencies of assessing and managing suicide risk can be accomplished through cross-service education, despite the diversity of clinical demands experienced in ambulatory, acute, and other services. These findings also suggest that a carefully designed curriculum can accommodate the needs of clinicians at various stages of professional development.
The design of the CTL curriculum presents opportunities to advance the state-of-the-art in clinician education. First, the curriculum demonstrated promising methods for teaching about risk assessment and response. The curriculum used visual concept mapping, documentation-driven delivery, and a structured procedure for customizing education to fit the response options available to clinicians in a given setting. Second, the efficiency with which this curriculum achieved fairly dramatic results has obvious practical advantages for clinicians and administrators, and also offers hope to those interested in improving the competence of the mental health workforce. Third, the preworkshop interview and consultation with clinical leaders results in a workshop that is tailored to the local clinical context. The consultation explicitly prompts leaders to use the brief educational experience as a starting point for ongoing education and administrative support for risk assessment. This direct engagement with stakeholders about how to enhance the transfer of learning into practice is novel, and serves as an example of a feasible step that developers can take to link education with implementation (Pisani et al., 2011). Future research is needed to determine what types of ongoing support lead to skill improvements that persist over time and extend to other aspects of patient care.
Constructs measured in this study contribute to the development of an evaluation science in the field of clinical education in managing suicide risk. The objective measurement of documentation skill in this study substantially extends the findings of one previous study that measured documentation improvements in a small homogeneous sample (McNiel et al., 2008). This study is the first to use a validated instrument to assess how participants viewed the match between the education they received about suicide risk and the demands and resources present in their usual practice setting. Participants' assessment of this match has been shown in previous studies to be a critical factor in predicting transfer of training (Cross et al., 2010).
Limitations of the study are important to note. It is limited by the lack of a control group of clinicians who did not receive the educational intervention. Normed “gold standard” measures of suicide-specific knowledge, confidence, and skill are not available, but the measures used in this study were developed with expert input and had adequate psychometric properties. The brief measures of knowledge (13 items) and confidence (9 items) sampled key domains of competence and minimized participant burden, but could have missed important domains because of the limited number of items it was feasible to present. The repeated measures design entails a risk that a portion of the large changes seen at posttest reflects a “practice effect.” Participants in this study were mandated by their employer to attend the educational workshop, which could have influenced self-report ratings and motivation to perform well on posttest measures. We attempted to minimize response bias by directly assuring, and transparently demonstrating to, participants that their identifying information would be kept separate from their responses at every stage of data collection and analysis; however, some risk of bias may remain. Finally, despite marked improvement in all domains measured, posttraining scores in knowledge, risk formulation, and overall quality of documentation leave additional room for improvement. A longer experiential training might have yielded even greater improvements, though this is an empirical question. The knowledge test and documentation coding standards also may have required a higher degree of knowledge and skill than is necessary for competent practice.
Future research can build upon this study in a number of directions. First, observational studies are essential for the next generation of education research. Direct observation of patient care and/or standardized patient simulations would allow us to measure the effect of education on interviewing, crisis management, and safety planning. Second, future studies of CTL and other clinician educational workshops should include long-term follow-up to determine whether improvements achieved through education persist over time. Third, related to long-term skill development, we need implementation research to test strategies such as booster educational sessions, performance feedback, and clinician reminder systems to promote the transfer of skills into clinicians' daily practice and their continued use. Fourth, models for teaching risk assessment need to be compared in terms of their effect on clinician decisions and interactions with patients. For example, do clinicians educated in different models of risk assessment make different judgments about patient lethality or the need for hospitalization? Likewise, future research should investigate which clinician characteristics and attitudes influence the uptake of new skills. Finally, comparative cost-effectiveness data are needed in order to evaluate the potential economic benefit of CTL and other types of suicide-risk intervention education.
In conclusion, CTL is a promising, innovative, and efficient curriculum for educating practicing clinicians to assess and respond to suicide risk. This evaluation demonstrated that a wide range of acute and ambulatory services staff and trainees can benefit from a single well-designed curriculum. The study suggests the need for controlled observational research to test, compare, and enhance the long-term impact of education on clinical practice.