On February 17, 2009, President Barack Obama signed into law an initiative providing $1.1 billion to support research on the comparative effectiveness of drugs, medical devices, surgical procedures, and other treatments for various conditions. This comparative-effectiveness research (CER) initiative has generated considerable controversy. Industry and free-market advocates have expressed concerns about the role of cost-effectiveness analyses within CER and subsequent governmental intrusion into doctor–patient decisions.
Despite such controversy, the broad consensus is that although the amount of funding the federal government provides for research is already large, the translation of this investment into practice, enabling new laboratory discoveries to reach patients' bedsides, is frustratingly slow. Furthermore, much of the government's research funding goes toward randomized clinical trials that evaluate the efficacy of new drugs, devices, and treatments within highly controlled environments. Doctors are most concerned about the relative benefits and harms of one treatment as compared with another for a particular patient, but randomized trials are seldom designed to answer these types of practical questions.1 Therefore, health policymakers, health insurers, and providers are increasingly interested in the information that could be obtained from studies of the comparative effectiveness of various treatments for specific conditions.
Surprisingly little attention has been paid to what we believe is the most critical question facing CER: Will its results significantly improve the quality and safety of the health care received by the average patient? Policymakers and research funders, such as the National Institutes of Health, often assume that the final steps in the translation of clinical research — the decision to act on new medical evidence and its implementation in routine care — are seamless and automatic, whereas we know that changing the behavior of physicians and patients is difficult.2 Though we agree that the need for CER is clear, many of the assumptions regarding the most important aspect of such research — the ultimate implementation of its findings into health care — have little empirical support.
One encouraging example of the fruits of linking CER with implementation research is the remarkable improvement in the safety and quality of primary percutaneous coronary intervention (PCI) during acute myocardial infarction. Numerous randomized trials in the early 1990s established the clear superiority of primary PCI over fibrinolytic therapy in controlled clinical settings. Subsequent comparative-effectiveness studies examining the characteristics of patients and systems that were associated with the time from presentation to primary PCI showed that delays common in most real-world settings attenuated the benefits of therapy.3 Ten years after the publication of the first efficacy studies, fewer than one third of hospitals were achieving the average time from diagnosis to PCI (90 minutes or less) associated with high-quality care. Recent efforts to understand and ameliorate this quality gap have relied on implementation science, using qualitative and survey research methods to identify hospital-based strategies associated with faster times to primary PCI. Implementation researchers have worked in partnership with health care organizations and professional societies to develop a tool kit to facilitate hospitals' implementation of strategies that reduce these times.3
Given that the experience with PCI is uncommon — research results are rarely implemented into routine health care — it is time to revisit our expectations of what CER will produce. CER is the centerpiece of the Effective Health Care Program of the Agency for Healthcare Research and Quality (AHRQ) (www.effectivehealthcare.ahrq.gov), whose creation was mandated by the Medicare Modernization Act of 2003. Leaders within the AHRQ recently described the need for three tiers of evidence translation: the first (T1) translating basic science into clinical efficacy data, the second (T2) using patient-oriented outcomes and health services research to develop knowledge about clinical effectiveness, and the third (T3) using implementation research for continuous measurement and refinement of treatment implementation.4 (See the table for an overview of these tiers as applied to primary PCI.)
These leaders argue that an earlier emphasis on creating informational tools for organizing and disseminating CER findings has given way to an imperative to develop connections between researchers and practitioners through measurement, experimentation, and dissemination of information about the best ways of delivering effective care.4 Similarly, the Department of Veterans Affairs (VA) has developed the Quality Enhancement Research Initiative (QUERI; www.queri.research.va.gov) for disseminating research findings to VA policymakers, providers, and patients and has created a dedicated resource center, the Center for Implementation Practice and Research Support (CIPRS; www.queri.research.va.gov/ciprs/default.cfm), to accelerate improvement in the quality and performance of VA health care through implementation research and practice.
These efforts suggest that some researchers and policymakers guiding the three tiers of evidence translation have come to understand that a shift is needed from the “science of recommendation to a science of implementation.”2 The creation of a CER initiative focused on developing and disseminating effectiveness reviews is an essential, but not a sufficient, step toward the routine provision of safe, high-quality health care to all Americans. We also need evidence-based methods for discovering and describing how the findings of clinical trials and CER can be efficiently implemented and incorporated into routine practice.5 Harnessing the promise of CER by ensuring the efficient and effective implementation of its findings into practice requires substantial investment and planning that will involve health care providers, patients, and other local stakeholders.
An implementation research and development program designed for this purpose could fulfill three important objectives: it could accelerate the translation of evidence into everyday care, enhance the opportunities for doctors and patients to define value (balancing expected benefits with costs) on the basis of their understanding of local contexts and constraints, and allow providers and patients to communicate with researchers and policymakers about clinically important issues earlier in the research process. In efforts to reduce the average time to primary PCI, for example, hospitals are encouraged to use several core strategies that define high-quality primary PCI care, but they are given substantial latitude in determining the best methods of applying these strategies.3 Local factors such as infrastructure, existing processes, the degree of buy-in by clinicians and administrators, and the costs associated with change may vary widely among hospitals and regions. Stakeholders determine how best to integrate the core strategies into local practice, given the value of the action steps required for implementation. Decisions that are not informed by a recognition of local constraints are likely to be poorly received and ineffective.
As demonstrated by the example of primary PCI, implementation research and development are critical for achieving the objectives of CER. The Federal Coordinating Council for Comparative Effectiveness Research need look no further than the VA's QUERI program, CIPRS, and the AHRQ's John M. Eisenberg Clinical Decisions and Communications Science Center for useful models. These programs are well positioned to achieve the three objectives of implementation research. Above all, the Federal Coordinating Council must remain mindful that the primary goal of CER is to enhance the translation of new medical discoveries into safe and high-quality health care for all Americans. This goal can be achieved only if our renewed investment in CER includes a commitment to implementation research.
The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs or the Agency for Healthcare Research and Quality.
No potential conflict of interest relevant to this article was reported.