In this paper, we present nine practical lessons learned for academic evaluators seeking to measure the effects of health IT in community-based settings. These lessons offer guidance on building high-quality academic–community partnerships; facilitating effective communication; anticipating interactions between implementation and evaluation; and addressing methodological issues that are central to this type of endeavor.
Several of these lessons are similar to findings reported by others, for example, discrepancies between IRBs,17 and adapting evaluations to implementation delays, evolution in implementation plans, and stage of implementation.15 The opportunity to reuse clinical and operational data to support evaluation efforts has also been discussed elsewhere.15 More broadly, our findings are congruent with the core principles of community-based participatory research.4
Nevertheless, the lessons presented here extend the literature by providing very practical guidance on how to operationalize the principles of community-based participatory research for the evaluation of community-based health IT. This is particularly relevant at a time when more studies such as these are being planned nationwide.
Conducting evaluation under a community-based participatory research framework is time-consuming and challenging. It requires considerable investment in building strong relationships, mutual education about priorities, and flexibility. On the other hand, these collaborations helped us develop a rich and nuanced understanding of health IT on the ground, foster commitment and buy-in to novel community-based studies that might otherwise be impossible, and ensure that findings are put into practice immediately to improve health care in the communities in which the research is being conducted.
In our experience, these collaborations also led to more accurate interpretation of study results and generated additional hypotheses to be tested. In general, investigators close to the implementation process may be most likely to understand the reasons behind both positive and negative findings. At the same time, the collaborations preserved distance between the implementers and the evaluators of technology. This distinguishes HITEC projects from previous research on the effectiveness of health IT that has occurred in large academic medical centers,1 where evaluators were either members of those medical centers or participants in the implementation.
In addition to its implications for academic researchers, our experience has several policy implications. First, policy makers seeking to understand the impact of health IT on quality and cost should be aware of the long time horizon associated with the corresponding evaluation and research. Delays in implementation, of course, delay evaluation, and measurable effects on quality and cost cannot be determined immediately after technologies go live.
Second, once community-based implementations are mature enough to measure cost and quality outcomes, claims data aggregation across health plans can provide a powerful method of assessing these outcomes at the community level. However, current solutions to data aggregation require hardware, software, and personnel capabilities that are beyond the reach of most communities and researchers. Third-party vendor solutions are available, but their cost may outstrip the cost of the rest of the evaluation combined. State and federal funds may be needed to support data aggregation centers for research, evaluation, and quality improvement in order to capture these effects of health IT. Eventually, the goal will be to replace claims data with rich clinical data from electronic health records (EHRs) and other sources of electronic data.
Third, while there are federal efforts underway to expand the workforce trained to implement health IT, such as the Office of the National Coordinator program of assistance to university-based training programs (http://healthit.hhs.gov), there appears to be a relative shortage of health services researchers and informaticists trained to evaluate health IT initiatives. The academic–community partnerships described here can serve as a template for how to leverage a group of investigators across many geographically disparate communities.
Two features of our setting limit the generalizability of these lessons. First, we collaborated with highly motivated community organizations that were in relatively advanced stages of health IT implementation. Second, New York State was directly supporting implementation and requiring evaluation. As a result, the lessons learned from these experiences may not be generalizable to all communities.