Translational research has received heightened attention since the publication and initial implementation of the National Institutes of Health (NIH) Roadmap for Medical Research.1,2
Although more focus has centered on breaking down the barriers between basic science and clinical science, several authors have emphasized the importance of removing the barriers between clinical science and translation of discoveries into routine clinical practice and healthcare policy.3,4
Woolf, in particular, has suggested the need for additional resources to support the latter, given the likelihood of greater impact on public health.5
The NIH Clinical and Translational Science Awards (CTSAs)6 partially address this need through the funding of Community Engagement Resources and, in some instances, through supplemental funding to conduct pilot work related to the creation of a national network of community-based research sites.7
Recognition of the importance of studying the real-world implementation of efficacious interventions to address their effectiveness preceded these NIH initiatives. Models such as the Veterans Affairs QUERI8,9 and the RE-AIM framework (Reach, Effectiveness, Adoption, Implementation, and Maintenance)10–13 have been proposed as methods to facilitate translation of research into practice. In addition, federal agencies14 and others15 have called for the conduct of so-called practice-based or pragmatic trials so that the evidence generated through research is more likely to be appropriate for implementation in practice. The number of practice-based research networks (PBRNs) is on the rise, and research conducted in practice settings has been identified as “a crucial scientific step, the blue highway, between the great medical advances of the next 25 years and the millions of Americans who want to live a long and healthy life.”4
Common across these two approaches—using a theoretically based implementation model to apply evidence to practice and generation of evidence from practice-based research—is an increased consideration of external validity in addition to the internal validity of the study design.
Our premise is that, as with clinical science, clinical informatics intervention research suffers from a lack of attention to external validity (i.e., generalizability) in study design, implementation, evaluation, and dissemination. Moreover, this inattention hampers the ability of others to judge whether a clinical informatics intervention with demonstrated efficacy in one setting is a good fit for implementation in their own. The RE-AIM framework addresses these concerns.
The purpose of this model formulation paper is to demonstrate the applicability of the RE-AIM framework to clinical informatics intervention research. First, we discuss the importance of such a framework for planning, implementing, evaluating, and reporting clinical informatics intervention studies. Second, we describe the RE-AIM framework and suggest additional assessment questions for clinical informatics intervention research. Third, we validate the use of the RE-AIM framework, with its extension for clinical informatics intervention research, through two clinical informatics intervention case studies. The first case study focuses on the real-world implementation of a clinical informatics intervention with demonstrated efficacy in randomized controlled trials (RCTs): the Choice (Creating better Health Outcomes by Improving Communication about Patients' Experiences) intervention. In this instance, the dimensions of the RE-AIM framework provide a model for describing the process of implementing Choice in routine care (i.e., translation of evidence into practice). The second case study is an RCT of a personal digital assistant (PDA)-based decision support system (DSS) for guideline-based screening and management of depression, obesity, and smoking cessation. This case study illustrates how the RE-AIM framework can inform the design of an efficacy RCT that generates evidence from practice by capturing essential contextual details typically lacking in RCT design and reporting.