Main evaluation findings
The evaluation of EQUIP provided an understanding of quality gaps in schizophrenia care and of how to address related problems. Our findings are summarized in the left column of Table .
Findings in EQUIP and resulting adjustments made in EQUIP-2
EQUIP revealed that solutions to quality problems in schizophrenia differ by treatment domain. For example, challenges to implementing family services proved to be very different from challenges to implementing weight management using wellness groups. Improving family services required assessment of each patient-caregiver relationship, intensive negotiation with patients and caregivers, major care reorganization to accommodate family involvement, and attention to clinician competencies (e.g., knowledge, attitude, and skills). Improving weight and wellness required assessment of the problem in each patient, the establishment of therapeutic groups, involvement of nutrition and recreational services, and help with referrals and follow-ups.
EQUIP also revealed that quality problems can arise from poor clinician competencies [20]. For example, we found clinician competency problems in the use of clozapine. This clozapine competency problem is well established anecdotally, although there is little empirical evidence of it. The main competency problems that we encountered, in at least a subset of clinicians, were: 1) clinicians were not trained in the use of clozapine, or had not used it despite training; 2) clinicians were not credentialed to use clozapine in their settings; 3) clinicians were discouraged by the possibility that having patients on clozapine would necessitate longer clinical visits with more clinical effort; and/or 4) clinicians did not believe clozapine would be helpful. Quality problems can also arise from difficulty in changing psychiatric treatments. In EQUIP, we noted that psychiatrists made minimal use of data showing that their patients had high levels of symptoms and side-effects (Quality Reports), and they also made minimal use of the guidelines that were easily accessible via the MINT "PopUp" (see Table ) that was available on their computer at every clinical encounter [21].
Quality problems such as these can be exacerbated by a general lack of awareness of evidence-based practices, such as approaches to managing increased weight or treatment-refractory psychosis. During the course of implementation, it became apparent to the research team that increasing the intensity of follow-up (e.g., adding clinic visits) for severely ill patients was of limited use: because clinicians typically did not change treatments in response to clinical data, additional visits were unlikely to lead to appropriate changes in treatment for psychosis or medication side-effects.
These persistent quality problems led us to conclude that improving care required creating resources to support clinicians and reorganizing care so that clinicians could easily change their clinical practices. Intensive education and product champions were also needed to work with clinicians and encourage awareness and use of these resources. Care managers and the informatics system did help physicians identify clinical problems in their patients, but these interventions and tools needed reassessment and possibly redesign. For example, we learned through the involvement of the case managers that tools designed for clinicians may not have the same appeal across types of clinicians. We found that psychiatrists and case managers (though the sample was small) had differing perspectives on the value of being provided with clinical data by their computer (e.g., PopUps) during the encounter. This supported the assertion that formative evaluation data must be gathered from multiple perspectives. As Lyons et al. point out, it is essential to examine the perspectives of multiple individuals: the "single-provider focus does not well represent clinical reality as experienced by interdisciplinary teams" [22].
Finally, we learned that improving care within the VA healthcare system (and perhaps other large healthcare organizations) can require high-level organizational involvement. For example, implementing wellness groups or clozapine clinics required active involvement from nutrition and pharmacy, respectively, which were medical center-wide services. Indeed, management of these services sometimes resided at the level of the Veterans Integrated Service Network (VISN; one of the 21 VA regional networks in the United States). Nutrition and pharmacy services did not respond to requests from staff at the level of mental health clinics, and this lack of responsiveness impeded our ability to implement a clozapine clinic or to involve the nutrition department in the wellness programs.
This section describes: 1) the EQUIP-2 project and conceptual framework, 2) the evolution of the EQUIP-2 implementation strategy (i.e., interventions and tools), and 3) the formative evaluation component of EQUIP-2.
As noted above, the next phase of work building toward national roll-out of the EQUIP intervention is EQUIP-2 – a Step 4, Phase 2 multi-site evaluation (see Table ). As the Overview to the Series explains, projects within this phase are considered "clinical trials to further refine and evaluate an improvement/implementation program." These trials involve a small sample of facilities conducting the implementation program under somewhat idealized conditions. Moreover, it is noted [4] that these projects require active research team support and involvement, plus modest real-time refinements to maximize the likelihood of success and to study the process for replication requirements. They employ formative evaluation (to monitor and feed back information regarding implementation, acceptance, and impacts), as well as development and use of formal measurement tools and evaluation methods.
EQUIP-2 is a three-year project, funded in January 2006, aimed at refining our implementation strategy and conducting broad formative evaluation in eight sites across four VISNs, using the implementation approaches adopted by QUERI. As noted above, EQUIP-2 was funded as an SDP [4], which involves a unique set of expectations for addressing what are called "quality gaps" (i.e., the current lack of evidence-based care for schizophrenia, described above).
The project reflects the growth in knowledge, both at the researcher and study reviewer levels, regarding implementation science. More specifically, unlike EQUIP, this study includes a conceptually-driven study of the process of implementation, including the effect of various interventions on patients, clinicians, and organizations, and a more conceptually-based implementation strategy. The early implementation efforts described above also prompted the EQUIP-2 investigators to incorporate and/or strengthen several components of the multi-phasic evaluation described by Stetler and colleagues [2]. These authors recommend: diagnostic analysis of organizational readiness (e.g., using relevant surveys) and interviews regarding attitudes and beliefs; implementation-focused evaluation examining the context where change is taking place; maintenance and optimization of research implementation interventions; and provision of feedback, e.g., regarding progress on targeted goals. They also recommend collecting data from experts, representative clinicians/administrators, and other key informants regarding both pre-implementation barriers and facilitators and post-implementation perceptions of the evidence-based practice and implementation strategy. All of these elements are being utilized in EQUIP-2 and are described further below with regard to the formative evaluation.
Conceptual framework: Simpson Transfer Model & PRECEDE
Though informed by more than one conceptual framework, EQUIP-2 is organized around the Simpson Transfer Model (STM). This model guides the development and refinement of a diversified, flexible menu of tools and interventions to improve schizophrenia care. Incorporating the notion of readiness to change [23] at both the individual and organizational levels, Simpson developed a program change model for transferring research into practice [24]. The STM has provided important conceptual input to many studies in technology transfer [25]. This model involves four action stages: exposure, adoption, implementation, and practice. Exposure is dedicated to introducing and training in the new technology; adoption refers to an intention to try a new technology/innovation through a program leadership decision and subsequent support; implementation refers to exploratory use of the technology/innovation; and practice refers to routine use of the technology/innovation, likely with the help of customization of the technology/innovation at the local level. Crucial to moving from exposure to implementation are personal motivations of staff and resources provided by the institution (e.g., training, leadership), organizational characteristics such as "climate for change" (e.g., staff cohesion, presence of product champions, openness to change), staff attributes (e.g., adaptability, self-efficacy), and characteristics of the innovations themselves (e.g., complexity, benefit, observability).
EQUIP-2 also draws upon the PRECEDE planning model for designing behavior change initiatives [28]. Because the STM does not recommend specific behavior change tools to be used in a knowledge transfer intervention, additional guidance is necessary regarding development of the implementation framework. The PRECEDE acronym stands for "predisposing, reinforcing, and enabling factors in diagnosis and evaluation." PRECEDE stresses the importance of applying multiple interventions to influence the adoption of targeted clinician behaviors. These include: 1) academic detailing and consultation with an opinion leader or clinical expert, which can help predispose clinicians to be willing and able to make the desired changes; 2) patient screening technologies, clinical reminders, and/or other clinical support tools that can enable clinicians to change; and 3) social or economic incentives that can reinforce clinicians' implementation of targeted behaviors.
A key part of the PRECEDE model is the active participation of the target audience in defining the issues and factors that influence targeted behaviors, and in developing and implementing solutions [28]. This participation principle is consistent with the social marketing framework, which emphasizes the importance of understanding a target audience's initial and ongoing perceptions of the innovation in order to facilitate behavior change [29]. Both PRECEDE and social marketing theory state that messages and interventions should be tailored to perceptions in order to influence the desired behavior change.
Taken together, the models and frameworks discussed above suggest that the impact of implementation efforts will be maximized when they: 1) are based on assessments of the needs, barriers, and incentives of targeted end users; 2) are based on an understanding of the local context; 3) involve representatives of diverse stakeholder groups in the planning process; 4) use expert involvement in planning, especially when behaviors to be adopted and/or changed are complex; 5) draw on marketing principles for developing and disseminating intervention tools; and 6) secure support and involvement from top-level management and product champions [31]. Each of these factors is integrated into the STM, which guides the EQUIP-2 strategy and formative evaluation. Table provides an overview of how we will engage in each phase of the STM.
Simpson Transfer Model stages and corresponding activities
Evolution of the schizophrenia implementation strategy
Several modifications were made in EQUIP-2 as a result of the findings and observations in EQUIP. An overview of each type of strategy is provided below; Table (right column) notes the specific changes made in EQUIP-2 based on findings from EQUIP.
Evidence-based clinical/therapeutic practices
EQUIP-2 is more targeted than EQUIP in its approach to strengthening specific evidence-based practices within the care model. EQUIP-2 focuses on quality improvement by assisting staff to implement specific evidence-based practices that have shown strong impacts on outcomes [7]. In addition, since EQUIP's onset, the VA has made a national commitment to implementing "recovery-oriented" practices in schizophrenia, embodied in the President's New Freedom Commission on Mental Health, established in 2002 [34], and the VA's Mental Health Strategic Plan [10]. Thus, EQUIP-2 provides implementation support on evidence-based practices that support recovery. Each VISN involved was asked to choose two evidence-based practices from a list of four practices that EQUIP-2 was prepared to support (Table ). All four VISNs chose the same two targets – wellness and supported employment.
Evidence-based clinical/therapeutic practices that could be supported in EQUIP-2:
Delivery system interventions
During the intervention period, there is a monthly quality meeting at each intervention clinic. This quality meeting is "local": the site PI (principal investigator), quality coordinator, product champion, and clinicians attend. During the meeting, each clinician is given his/her personal "Quality Report." Quality meetings: 1) allow pervasive quality problems to be identified, 2) optimize teamwork by encouraging group problem-solving on patient management problems, and 3) identify resources needed to address care problems. Lastly, high-achieving clinicians (i.e., those who are accomplishing the specific goals of each care target) are discussed and incentives are distributed.
In terms of marketing, all of the sites had an explicit project "kick-off" that signaled the start of the project and promoted a sense of excitement. Educational activities and trainings commenced both at the coordinating center and at the individual clinics.
In order to promote further engagement and collaboration, additional levels of personnel are involved in the project from its inception. Prior to enrollment, we have had monthly planning calls involving clinic staff, regional managers, and medical center leadership. These calls address practical issues regarding study set-up, as well as plans for marketing. Once enrollment begins, we will have monthly Implementation Team calls involving site PIs, site project directors, product champions, VISN-level staff, and the research team. These calls will examine and address implementation issues as they arise and will work toward sustainability of the model. During the course of implementation, we maintain the research nurse position from EQUIP in the form of "Quality Coordinators." In EQUIP, these individuals were reported to make a difference not only to clinicians, by providing additional clinical information about patients, but also to patients, by providing an additional source of support. Further, we engage case managers from the beginning of the project.
We encourage staff to identify the individuals they go to for expertise in the chosen care targets, and we ask those individuals to volunteer as product champions for the project. Product champions identified in this way are asked to participate in monthly Implementation Team calls, as well as in other mechanisms of involvement.
As noted above, this Phase 2 implementation project involves a more formal evaluation component, due to the importance at this stage of program refinement. Below we describe each component of the formative evaluation.
In EQUIP, we observed that organizational climate, staff engagement, and staff structure significantly affected the degree to which the tools presented in the project were effective. This observation is consistent with emerging implementation science, which increasingly recognizes the importance of context. In order to better understand and "diagnose" [2] the organizational climate of the sites, the Simpson Transfer Model organizational readiness measures will be used in EQUIP-2. We also conduct key informant interviews in order to better understand the clinics' preparedness for the intervention.
Each month during implementation, there are Implementation Team meetings, which serve to link intervention sites and the research team from the coordinating center. Here, barriers and facilitators to implementation are identified and discussed, and group problem-solving and any needed reorganization of care are planned and documented. Product champions and other site personnel also report on any informal feedback they have received about problems with the implementation. As implementation continues, this team works toward sustainability of the model. Minutes from these meetings, project managers' field notes, and quality coordinators' logs are analyzed to evaluate implementation throughout the intervention period. In addition, midway through the intervention, the research team conducts semi-structured interviews with clinicians and clinic managers to evaluate the operationalization of the intervention, necessary refinements to the intervention, and areas of desired guidance. In order to reduce burnout, promote and maintain enthusiasm for the project, and optimize successful implementation overall, various interventions are modified if feedback and other formative data indicate that change is necessary.
During the course of the project, in order to monitor progress toward the project's goals, we evaluate the degree to which physicians respond to the patient self-assessments. For example, do they provide the necessary and/or requested referrals to supported employment, and do they refer patients to wellness groups for weight management? We also assess the Quality Reports for other outcome progress. When we find that progress is not being made toward the goals, we work in coordination with the clinics to identify barriers to achieving the goals and strategies for addressing and mitigating those barriers.
At the conclusion of the project, we will conduct semi-structured interviews with the clinicians and clinic managers regarding the usefulness of the EQUIP-2 strategy, their satisfaction with the implementation process, barriers to and facilitators of implementation, and recommendations for future refinements [2]. In order to re-evaluate the delivery system interventions, we will collect quantitative data about the usability of the informatics system. Measures of organizational readiness will be repeated in order to describe changes in organizational climate during the course of the project, as another potential influence on successful implementation. Finally, the extent to which the care model has become "institutionalized" (i.e., the degree to which it has become part of routine clinical practice) will be examined.
For the final interpretive evaluation, we will explore all formative evaluation data in light of our outcome data in order to provide: alternative explanations of results; clarification of the success (or failure) of our implementation effort; and assessment of the potential for reproducibility of our implementation strategy in a broader segment of the VA [4].