Despite the many advances in scientific research over the last several decades, cutting-edge technologies and therapeutics often take many years to find their way into widespread use. The dissemination and uptake of best practices in clinical care is a separate and sometimes neglected component of research that is essential to improving the population’s health. The diagram from the Institute of Medicine (IOM) report in Figure 1 illustrates the relationships among pre-clinical, clinical, and translational research, covering the spectrum from bench to bedside to the community and into public health policy. Type 2 translational research, sometimes called “Proof in Practice Research,” seeks to maximize the yield of what has been learned at the bench and from carefully controlled clinical trials, and attempts to extend those benefits to a larger population. One aspect of type 2 translational research, sometimes called evidence implementation or implementation science, applies what has been learned about clinical medicine to achieve best practices across providers and health systems and thereby maximize the health of a population.
Implementation research has been defined as “the scientific study of methods to promote the systematic uptake of clinical research findings and other evidence-based practices into routine practice, and hence to improve the quality (effectiveness, reliability, safety, appropriateness, equity, efficiency) of health care. It includes the study of influences on healthcare professional and organisational behaviour.” Healthcare efforts should be directed at implementing strategies that work and allocating limited resources most efficiently. Implementation research is typically considered at the interface between research and quality improvement.
Some may ask whether evidence implementation should be considered research at all. Although potentially still a matter of debate, it seems most prudent to consider any investigation that leads to generalizable knowledge to be research, and as such it should be governed by an Institutional Review Board or ethics committee.
Using osteoporosis as an example disease state, this article reviews both general and specific aspects of evidence implementation focused on the healthcare provider, health systems, and patients to improve quality of care.
Osteoporosis care generally does not occur during hospital episodes (even for most fracture patients), so strategies that reach multiple points of the healthcare system are vital. Because the main process measures of interest in osteoporosis are usually receipt of bone mineral density testing and osteoporosis medications, it is perhaps most intuitive to focus first on the healthcare provider, usually a physician. However, even this step presents challenges. For example, who is the right provider to target? Using the example of glucocorticoid-induced osteoporosis, we might think that the physician prescribing glucocorticoids would be the most important provider to engage to make sure that osteoporosis risk has been considered; the glucocorticoid prescriber should also be the provider most aware of the risks of long-term glucocorticoid therapy. However, if the glucocorticoid prescriber is a specialist, s/he may consider osteoporosis screening and management to be the domain of the primary care physician, who usually takes responsibility for other health screening. Additionally, if the patient sees a specialist (e.g., in endocrinology or rheumatology) who is typically familiar with bone health, then this specialist might be considered the responsible provider. In short, if evidence implementation seeks to intervene with the healthcare provider, even identifying the correct and most appropriate provider for a patient is not a trivial undertaking.
Another challenge is that for many evidence implementation studies, group randomization, rather than patient-level randomization, is most appropriate. Patients are often treated similarly based upon the characteristics and/or behaviors of their treating physician or physician group (e.g., a multi-physician group practice or a health system). Because physicians who practice together are more likely to treat their patients similarly, it often makes the most sense to randomize groups of physicians. It may also not be feasible to implement an intervention at the patient level rather than at the physician or group level. If one does randomize groups of providers, methodological issues must be taken into account in both the design and analysis phases of the study; a well-defined set of statistical methodologies has been developed for group-randomized trials.
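One practical consequence of group randomization is that the required sample size must be inflated to account for the correlation among patients treated by the same physician or group. A minimal sketch of this standard calculation, using the conventional design effect formula, is shown below; the cluster size and intracluster correlation coefficient (ICC) values are illustrative assumptions, not figures from any particular study.

```python
import math

def design_effect(m: float, icc: float) -> float:
    """Variance inflation factor for cluster randomization: 1 + (m - 1) * ICC.
    m   = average number of patients per randomized physician/group,
    icc = intracluster correlation coefficient (similarity of patients
          within the same cluster)."""
    return 1.0 + (m - 1.0) * icc

def inflated_sample_size(n_individual: int, m: float, icc: float) -> int:
    """Patients needed per arm after accounting for clustering,
    given the sample size an individually randomized trial would need."""
    return math.ceil(n_individual * design_effect(m, icc))

# Illustrative numbers only: 17 patients per physician, ICC = 0.0625
# doubles the required sample size (design effect = 2.0).
print(design_effect(17, 0.0625))            # 2.0
print(inflated_sample_size(150, 17, 0.0625))  # 300
```

Even modest ICCs can substantially increase the number of patients (and physicians) needed, which is one reason the design phase of a group-randomized trial deserves careful statistical planning.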
Finally, for any multi-faceted intervention, one may want to disentangle the intervention to determine which components were effective. However, this is often not feasible. Subgroups planned at the design phase may be too small to allow definitive conclusions, and if the question of which component(s) were most effective is considered only during the analysis phase, the resulting post-hoc analyses limit inference.
Examples of Physician-Focused Evidence Implementation in Osteoporosis
As examples of evidence implementation in osteoporosis, investigators at the University of Alabama have conducted a number of physician-focused interventions. One example of an intervention in glucocorticoid osteoporosis is described in Figure 2, which provides an overview of the study design.
We identified patients within a commercial health plan who were long-term glucocorticoid users and then identified the prescriber responsible for the majority of each patient’s glucocorticoid prescriptions. We further used health plan data to obtain information about dual-energy X-ray absorptiometry (DXA) screening rates as well as utilization of prescription osteoporosis medications such as bisphosphonates.
We randomized the physicians to either an intervention or a control arm. The intervention arm had two components. The first was a series of Internet-based, case-based modules made available during the study period. The second was audit and feedback: each physician received a report of the proportion of his or her long-term glucocorticoid users in the health plan who had any form of BMD testing or prescription osteoporosis medications, compared against a metric derived from the top ten percent of the highest-performing physicians with similar patients in the health plan. This metric was called the Achievable Benchmark of Care. Control arm physicians received an unrelated CME module that had nothing to do with osteoporosis. Following the intervention, we evaluated follow-up rates of screening and osteoporosis treatment.
This intervention did not produce the intended quality improvement, and the trial was “negative”: the intervention physicians who received case-based learning and audit and feedback were not more likely to improve osteoporosis screening and treatment. Of interest, a subgroup analysis found that physicians who completed all three modules (about one third of the total) performed approximately 10% better than similarly engaged control physicians. Our research team’s conclusion from this study, and from other studies, is that although the provider may need to be part of an intervention, an exclusively provider-focused intervention is unlikely to yield sufficient improvement in the quality of osteoporosis care.