We would like to respond to the comments of Dr. Archambault and colleagues regarding our recent paper, "Getting to Uptake: Do Communities of Practice Support the Implementation of Evidence-Based Practice?" Dr. Archambault is correct in pointing out two limitations of our study and a typographical error.
First, although there is a detailed description of what the practitioners in the community of practice (CoP) group did, there is no description of what the practice-as-usual (PaU) group did to implement the Child and Adolescent Functional Assessment Scale (CAFAS). They suggest that elements of a CoP could have been present in the daily activities of the PaU organizations, and that the lack of a validated measure to assess the presence and intensity of CoP processes prevents us from comparing the mechanisms at play in the two groups.
Apart from questionnaire measurement, we did not track the implementation of the CAFAS tool in the PaU organizations using a process evaluation methodology. Our understanding of how child and youth mental health (CYMH) organizations in Ontario implement the CAFAS tool stems from anecdotal evidence and nine years of cumulative experience with implementing CAFAS. We have come to understand that a proportion of organizations fail to implement CAFAS in a timely or systematic fashion following training; there is rarely an implementation plan per se. Rather, practitioners are sent for training and implementation is expected to be emergent. Moreover, practitioners in CYMH are overburdened with high caseloads that, for a variety of reasons, do not cycle through the system as rapidly as they could. This leaves little time for practice reflection, in a CoP format or otherwise. Also, many CYMH organizations deal with significant staff turnover, which presents a barrier to implementation because organizations must conduct more frequent training for incoming staff.
We did track whether CoP and PaU organizations had CAFAS data to export, and this is reported in Table 2 of our paper. Because the CAFAS is administered as an electronic tool and each administration is automatically recorded in an on-site database, the lack of data exports suggests the tool was not used, and the number of ratings is a strong indicator of use. Repeated requests for CAFAS exports are made to each organization by our data analysts; thus, it is unlikely that these organizations had data but simply failed to export it. In retrospect, it would have been informative to interview key informants in the PaU organizations in order to contextualize their CAFAS use, or lack thereof, over the year of study. Our recommendation for future research would be to capture change and process variables using a mixed methodology.
Second, Dr. Archambault and colleagues point out that the absence of a denominator (the number of cases that could have been assessed with the CAFAS during the year) meant that we could report only an absolute count of CAFAS ratings conducted. We certainly understand this point; however, we are not certain that we could have obtained reliable information on it from organizations. Organizations could have told us how many clients came into service during the implementation phase of the study. However, CAFAS user organizations in Ontario do not conduct a CAFAS assessment on every client who enters service, for several reasons. First, many organizations do not feel they have the staff capacity to rate the CAFAS for each incoming client. Second, there are mandated exceptions to rating the CAFAS for certain types of clients: clients must be between the ages of 6 years and 17 years, 11 months; and clients meeting the following criteria are excluded: (i) children receiving services for which no detailed screening or assessment occurs (e.g., prevention, outreach, parenting education groups, support groups); (ii) children receiving services that are delivered in one to three sessions (e.g., crisis, early intervention, single-session intervention); (iii) children seen at an organization primarily to be redirected appropriately to another organization; and (iv) children receiving service for problems other than psychological, emotional, behavioural, or substance abuse problems (e.g., developmental impairment). Each organization also decides whether to rate the CAFAS for clients with comorbid developmental impairment and mental health problems.
Assuming the organization had someone on staff capable of pulling the specific data from the client information system (and that is, in our experience, a significant assumption), the calculation would still be complex and unattainable: denominator = total number of clients entering service within the date parameters, minus age exceptions, minus client-type exceptions, minus those simply not rated due to a lack of human resource capacity. Organizations do not capture these qualitative variables within their client databases; rather, these distinctions are clinical in nature. In summary, while it would have been ideal to have such a denominator, it would have been unfeasible to arrive at one that was reliable and accurate.
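For illustration only, the calculation would amount to a simple subtraction, as sketched below; the field names and counts are hypothetical, and the point remains that the exception counts are clinical judgments not recorded in client databases, so the inputs themselves are unattainable:

```python
# Hypothetical sketch of the denominator calculation described above.
# All numbers and names are illustrative; in practice the exception
# counts are clinical distinctions that client databases do not record.

def cafas_denominator(total_intakes: int,
                      age_exceptions: int,
                      client_type_exceptions: int,
                      capacity_unrated: int) -> int:
    """Number of clients who could have been assessed with the CAFAS."""
    return (total_intakes
            - age_exceptions
            - client_type_exceptions
            - capacity_unrated)

# Illustrative values only:
print(cafas_denominator(500, 60, 120, 40))  # -> 280
```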
A typographical error appears on page 24, where it is stated that one of the CoP organizations could not rate any of their clients with the CAFAS tool because of technical problems; this is inconsistent with the data reported in Table 2. The sentence should read that it was one of the PaU organizations that experienced technical problems.
We appreciate all of the comments offered by Dr. Archambault and colleagues, and look forward to continuing our program of CoP study within the context of Web 2.0.