This theory-based process analysis identified predisposing, reinforcing, and enabling factors that are important to the success of a multi-component educational intervention. IT played an integral role in all three categories. Two general IT themes emerged from the qualitative analysis and were explored in the quantitative section. First, the use of IT in clinical change processes is both idiosyncratic and ubiquitous across all organizational levels: each clinic required a unique combination of support, re-engineering, education, and customization activities. Second, the qualitative results highlighted that each organization has a unique pattern of integrating IT with QI and clinical work processes, and that this relationship may be a critical factor in implementation success.
The first theme, the idiosyncratic requirements of each clinic and clinician, suggests that it may never be possible to 'standardize', or even directly predict, the implementation process. Providing individualized IT tools at both the clinic and the clinician level may be a critical component of success for any clinical intervention and should be part of any assessment of readiness. The importance of systematically 'diagnosing' a clinical context has been promulgated by a number of health services researchers in the form of readiness assessments.39–41
This perspective is also congruent with recent work highlighting the impact of micro-systems42 and the complexity of workflow modeling.43
Future evolutions of EHRs and IT should make tools available that support clinicians' information management goals and educational learning needs.44
The finding that each organization has a unique pattern of integrating IT with QI and clinical work processes suggests that this relationship may provide an often overlooked causal mechanism for implementation success.34
Easy access to IT departmental resources can significantly affect how well quality and educational programs are developed and implemented, a finding noted by others.52
We identified the concept of IT department 'activation', which refers to both the ease of access to IT resources and the degree of customization available at the clinic level. The mechanisms for accessing IT resources, and the IT/QI department relationship, have not been explicitly explored and could be an area for future work. One way to identify mechanisms is to hypothesize them directly: in their model of Realistic Evaluation, Pawson and Tilley suggest that it is important to identify context–mechanism–outcome configurations.53
From this perspective, the IT/QI relationship may support implementation success through several causal processes. First, it may lead to greater congruence or alignment across the institution, from leadership to staff. Second, it may bring the deep history of change processes into the hands of IT support staff. Third, it may simply bring resources that are hard to come by in strapped institutions (eg, by 'spreading the pain').
These findings do not diminish the role of understanding patterns of adoption at the individual level. Models such as the Technology Acceptance Model54 were developed from older value-expectancy motivational models, such as the Theory of Planned Behavior56 and the Theory of Reasoned Action,57 in which intentions are predicted from attitudes, social norms, values, and outcome expectancies. The Technology Acceptance Model focuses specifically on the usefulness and usability of the technology. Our intervention was not particularly new and, in fact, the implementation team attempted to minimize any significant change in the clinicians' electronic environment. Hence we would expect the variables measured in the Technology Acceptance Model to be less useful in predicting behavior for this intervention.
Several authors have called for a greater clarification of the phenomenon of information technology as a causal variable and for a more theoretical approach to design and implementation.58–61
For example, authors from the RAND Corporation recently published a meta-analysis of the costs and benefits of health information technology,61 noting: 'In summary, we identified no study or collection of studies, outside of those from a handful of HIT leaders, that would allow a reader to make a determination about the generalizable knowledge of the system's reported benefit … This limitation in generalizable knowledge is not simply a matter of study design and internal validity.' (p 4). This work is a call to increase the theoretical grounding of informatics. A strength of this study is its integration of quantitative and qualitative findings. By extending the qualitative findings into an exploratory quantitative analysis, it lays the groundwork for better, more theoretically grounded hypothesis testing in this field.
The idiosyncratic requirements of each clinic meant that supportive activities were applied differently in each setting, making it impossible to disentangle the intensity of support from the level of need. This dilemma lies at the heart of the tension between program evaluation and the need for scientific generalizability. It is also the point made in a recent systematic review of the impact of context on the success of QI activities.62
Although the results may not be entirely surprising, this study illustrates the great need to develop theoretical hypotheses about mechanisms of action before engaging in a study and to follow up with those measures as part of a systematic analysis. Most current theories of implementation are too broad to lead to testable questions. Identifying such questions more specifically, as they arise from formative analyses and planning, can enhance generalizability and help build a science of informatics. These questions can address the conditions that must be triggered to create the needed mechanisms, the possible interactions of factors that might be causally related to outcomes, and examination across all levels, from individual providers to the organization.
The results of this study also provide practical advice to future designers of clinical interventions regarding the combination of institutional and implementation approaches that should be addressed.63
Recent reviews have noted the importance of an integrated educational model, and this study provides a model of the implementation process that integrates multiple levels of change.64
Finally, the new emphasis on comparative effectiveness research requires a deeper understanding of the underlying causal mechanisms of IT interventions.66
This understanding requires a stronger emphasis on theoretical perspectives if a generalizable science is to be developed.67
This study has several limitations. Only three organizations participated, and not all clinics within each institution were approached. Those who participated may differ on some unmeasured dimension from those who did not. The quantitative analysis is exploratory and should not be overinterpreted: random assignment was not performed, and sample sizes varied across groups and in one case were quite small. Because we did not quantify the institutional variables identified in the qualitative portion of our study (IT access and the IT/QI relationship), we cannot test their impact directly; as a result, our conclusions are only suggestive. The investigator team served as the implementation facilitators, so their role is inextricably intertwined with the outcomes. This paper attempted to measure those relationships directly. Finally, the three institutions are all in the same part of the country, which may introduce unknown cultural and population biases.
This study highlights the complex role of IT in the design and implementation of educational and QI projects. Access to IT and the degree of integration between the institutional departments of IT and quality emerged as particularly salient in this study and should be investigated more systematically in future studies. Enhancing tools to bring control of IT interventions to the clinic and the individual clinician may be a necessary component of future success.