Context of the process evaluation: The CALM effectiveness study
A total of 1004 adult primary care patients with one of four anxiety disorders (generalized anxiety disorder, panic disorder, posttraumatic stress disorder, or social anxiety disorder) were recruited from 17 clinics within four national sites (Seattle, Los Angeles, San Diego, and Little Rock). The clinics were purposively selected based on a number of considerations, including clinician interest, space availability, size and diversity of patient population, and insurance mix. Anxiety clinical specialists (ACSs) delivered education, self-activation in the context of promoting medication adherence, and CBT to intervention patients and monitored their symptoms using a web-based system in which they recorded anxiety and depression ratings at each contact. Intervention patients chose CBT (34%), anti-anxiety medications (9%), or both (57%) in a "stepped care" treatment that varied according to clinical need. ACSs were located on-site in each clinic and conducted face-to-face CBT with the assistance of a computerized program. ACSs interacted with an off-site study psychiatrist as needed and communicated clinical recommendations about medications between the study psychiatrist and the patients' primary care providers. Control patients received usual care from their primary care clinician. Anxiety symptoms, functioning, satisfaction with care, and healthcare utilization were assessed at six-month intervals. The salaries of the ACSs were covered by the study, and most facilities received some additional assistance/incentives, such as fees, to cover the use of space and/or small amounts of salary coverage for clinic liaisons/champions who helped facilitate the study.
CALM's innovations included (1) the flexibility to treat any one of four anxiety disorders, co-occurring depression, and/or alcohol abuse; (2) using on-site clinicians (ACSs) to conduct initial assessments; (3) a computer-assisted psychotherapy delivery system; and (4) a web-based system customized for anxiety status tracking. CALM was designed for easy dissemination in a variety of primary care settings.
The 17 clinics can be categorized as members of large health maintenance organizations (HMOs; n = 4), federally qualified community healthcare centers (n = 4), university-affiliated clinics (n = 4), or private clinics, either free-standing or part of a hospital group (n = 5). Some of the private clinics and federally qualified centers also had university affiliations, though they were not located on university campuses. About two-thirds of the clinics were internal medicine practices, and the remainder were family practice. Less than half of the clinics had one or more in-house mental health providers, and those professionals were usually master's-trained clinicians (e.g., social workers, counseling psychologists). About three-fourths of the clinics served some uninsured patients.
The total number of participants in this qualitative key-informant interview study was 61, including 14 ACSs hired and trained by the study to conduct the intervention, 18 primary care physicians (PCPs), 13 primary care nurses, and 16 primary care clinic administrators or other staff members. The ACSs came almost exclusively from nursing or social work backgrounds, and all but two were female. Eleven of the PCPs were internal medicine physicians, and the remaining seven were family practice physicians. Twelve of the physicians were female, and six were male. The clinic nurses were a relatively even mixture of registered nurses, licensed practical nurses, and licensed vocational nurses. All but two were female. The clinic administrators interviewed (n = 8) were the administrative leads for their clinics, and all but three were female. The other "clinic staff" informants were a mixture of front desk clerks, scheduling and administrative assistants, and project coordinators. All but one were female. No other personal or demographic information was collected from the participants. While a substantial majority of the participants were female, we feel that the sample generally reflects the characteristics of the work environment and the gender make-up of the majority of the professions sampled, namely, nursing, clinic administration, and social work.
In terms of recruitment, all of the ACSs were asked to participate in the key informant interviews. Most of the ACSs (n = 9) were interviewed twice, at a midpoint of the intervention's implementation and at the conclusion of the study. Four ACSs were interviewed only at the midpoint (two had moved on to another position by the endpoint of the study). One was interviewed only at the conclusion. Two ACSs refused to participate. The majority of ACSs worked in one clinic at their respective sites, but several worked in more than one (part-time) across the span of the study. The PCPs, nurses, and clinic administrators/staff with moderate-to-strong involvement in the CALM study (as rated by the study coordinators and/or ACSs) were targeted and "oversampled" for participation. The rationale for this was that because the aim of the qualitative implementation study was to uncover barriers and facilitators to implementation and sustainability of the intervention, it was necessary to interview mostly those clinicians and staff with at least moderate firsthand knowledge about the implementation in their clinics. However, clinicians and staff less involved with the CALM intervention (approximately 20% of the sample) were also interviewed to provide balance. The implications of this sampling strategy, both positive and negative, are discussed later in the manuscript. The clinician and staff participants were all interviewed during the final year of the intervention. We did not track approaches to and refusal rates of clinicians and other staff. Some of the potential participants were approached individually, based on their level of participation with the CALM intervention, while others were collectively approached in open calls for participation in meetings or via email. We did not predefine a desired number of participants per category or site; however, we attempted to get at least two employees per clinic to participate, including at least one clinician.
The study protocol, consent forms, and interview guides were reviewed and approved by the Institutional Review Boards at the University of Washington, the University of California at Los Angeles, the University of California at San Diego, and the University of Arkansas for Medical Sciences. After the study was described to potential participants, written informed consent was obtained. Data were collected via key informant interviews by phone. Interview guides were used for each provider/staff group, including ACSs. While each guide elicited some information specific to the provider/staff type, a common core of questions was included in all of the interview guides (see Table ). The questions were decided upon by the study investigators collectively, and the team revised them several times, including after the interviews began.
Core questions for all qualitative interviews
The lead author interviewed the ACS sample at the study's close. One trained interviewer per host site (Seattle, Los Angeles, San Diego, and Little Rock) interviewed the clinicians and staff at that site's clinics. Interviews were recorded and transcribed verbatim. In two instances, the audio recorder malfunctioned and the interviewer's notes were used during coding.
Transcripts were content analyzed through a combination of manual coding of printed transcripts and electronic coding using ATLAS.ti software (ATLAS.ti Scientific Software Development GmbH, Berlin, Germany). Content analysis is a common research method for the subjective interpretation of text through a systematic process of classifying text into categories or themes that represent similar meanings [31]. In the current study, the interview protocol was designed to support both "conventional" (inductive) and "directed" (a priori) content analyses [32]. The protocol included a mixture of "grand-tour" type questions that are common opening questions for inductive analyses (e.g., "Tell me about your role and involvement in the CALM project" and "How was CALM implemented in your clinic and how did it go?") and more specific questions/probes informed by existing conceptual models of implementation [21], which focused on the organizational context (norms and attitudes, routine care procedures, resources), the process of implementation (stage of adoption, variation in implementation), and mechanisms of diffusion (influence of peers/leaders, change agents, incentives). With conventional content analysis, investigators focus on descriptions of phenomena to identify themes and concepts that emerge from reading the interview transcript text without being constricted to a specific theory or conceptual model of behavior (i.e., keeping an open, as opposed to an empty, mind). With directed content analysis, investigators are guided by existing theory or research findings and explore predetermined themes or concepts in their coding. We employed both types of content analysis, with an emphasis on the former.
In terms of coding, the lead author coded the ACS interviews himself, while he and another investigator co-coded the provider/staff interviews. When two coders were involved, they independently coded identical sections of text and compared coding and interpretations. Each subsample of participants' transcripts was coded as a group, in this sequence: ACSs, physicians, nurses, administrators, and staff. This method allowed the coders to focus on the data and emergent themes of one group at a time (as opposed to coding random transcripts), and the sequence allowed the coders to investigate the stakeholders most involved with the CALM intervention first (ACSs), then proceed in descending order of overall stakeholder involvement in implementation.
Informed by the aims of the study and guided by previous examples of similar implementation analyses [25], the transcripts were first coded using the "top-level" codes (or "macro-themes") of barriers/facilitators to implementation and barriers/facilitators to sustainability. Organizing the analyses around these top-level codes [35] improved the "usability" of the information and allowed the data to be blended easily with other reports documenting implementation barriers/facilitators for major mental health initiatives. Top-level codes were used to broadly categorize the data and represent informants' beliefs about which factors hindered or facilitated implementation and/or sustainability of the CALM intervention and how the intervention could be improved. Further subcoding of categories within each top-level code came next. The subcoding step assigned new codes that described the content of the barriers/facilitators reported by the participants (e.g., "provider interest in mental health issues" as a facilitator of implementation). A third coding step classified the individually categorized barriers/facilitators into types, such as "provider attitudes/behaviors as facilitators" and "clinic structure-related barriers." This last step was interpretive, representing the views of the coders. The final list of codes consisted of behaviors, attitudes, personal characteristics, contexts, processes, and policies the informants believed to be associated with the implementation and sustainability of the CALM intervention.
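The three coding steps can be pictured as a simple hierarchy attached to each coded excerpt. The sketch below is purely illustrative and is not the study's actual codebook or software workflow; the field names and the grouping function are hypothetical, and only the quoted code labels come from the text above.

```python
# Illustrative sketch only (not the study's codebook): one coded excerpt
# carries a step-1 top-level code, a step-2 subcode describing the factor's
# content, and a step-3 interpretive type. Field names are hypothetical.
coded_excerpt = {
    "top_level": "barriers/facilitators to implementation",  # step 1: macro-theme
    "subcode": "provider interest in mental health issues",  # step 2: content
    "type": "provider attitudes/behaviors as facilitators",  # step 3: interpretation
}

def group_by_type(excerpts):
    """Group subcodes under their step-3 type, roughly as a coder might
    when assembling the final list of codes."""
    groups = {}
    for e in excerpts:
        groups.setdefault(e["type"], []).append(e["subcode"])
    return groups
```

In this toy representation, `group_by_type([coded_excerpt])` collects the step-2 subcodes under each step-3 type, mirroring how the interpretive third step organized the individually categorized barriers/facilitators.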