The US healthcare system continues to face multiple challenges: unsustainable cost increases, uneven quality of care, and persistent barriers to universal access. Additional pressures are mounting as a result of demographic and other trends: the aging of the US population, which will produce a more complex and costly disease burden in the coming years; the potentially transformative impact of personalized medicine based on individual genomic information; and the movement toward greater involvement of patients and their families in decision making about health issues.
While biomedical research has yielded many new diagnostic and therapeutic options, it is not always clear which options offer “…the right treatment for the right patient at the right time”.1
Efforts to determine “what works” are hardly new in the study of medicine, but the systematic utilization of “evidence-based medicine” (EBM) began in the 1990s, led by a small group of researchers and educators. As defined by its adherents, EBM “…is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients”.2
The American Medical Informatics Association (AMIA) convened a 2008 Health Policy Conference to focus discussions and advance understanding about the potential for informatics-enabled evidence-based care, clinical research, and knowledge management. Conference participants explored the applicability of informatics tools and technologies to improve the evidence base on which providers and patients can draw to diagnose and treat health problems. This paper, based on the conference findings, presents a model of an evidence continuum that is dynamic, collaborative, and enabled by health informatics technologies.
It has been 10 years since the Institute of Medicine (IOM) Committee on the Quality of Health Care in America released its report, To err is human: building a safer health system.3
Although this report and the subsequent IOM report, Crossing the quality chasm: a new health system for the 21st century,4 have generated significant discussion and research, our healthcare system in general, and the entities and enterprises within it, have been slow to generate, transform, and use evidence. Improved efficiency and effectiveness of care rely on the best information being available and readily accessible to health professionals and patients for use in making decisions. A series of complex, underlying processes is required for this to happen via basic, translational, and clinical research: collecting patient data and making it available to researchers and clinicians; organizing the information needed for clinical decision making; creating methods to disseminate the information effectively; and capturing the results of decisions so that this information is available for new analyses and future cycles of improvement.5
The Agency for Healthcare Research and Quality (AHRQ) Effective Health Care program conducts research to provide up-to-date, unbiased evidence on healthcare interventions. Evidence-based practice centers (EPCs) (http://www.ahrq.gov/clinic/epc/) are a key part of the AHRQ program; launched in 1997, the EPCs develop evidence reports and technology assessments on topics relevant to clinical, social science/behavioral, economic, and other healthcare organization and delivery issues. The Cochrane Collaboration (http://www.cochrane.org), an international, independent non-profit organization, is dedicated to producing and disseminating systematic reviews of healthcare interventions; founded in 1993, the Collaboration sets standards for reviews of medical treatments and offers systematic reviews organized by disorder. Research into the comparative effectiveness of different healthcare interventions has been authorized by the American Recovery and Reinvestment Act (ARRA) of 2009, which created a Federal Coordinating Council charged with coordinating research and guiding investments in it.6
One example of more recent efforts is an AHRQ-funded project entitled “Structuring clinical recommendations for use in clinical decision support applications,” which aims to create structured logic statements from clinical recommendations, such as those provided by the US Preventive Services Task Force (USPSTF) and those underlying clinical measures related to “meaningful use” criteria. The goal is to accelerate, in a scalable fashion, the process whereby such clinical recommendations are converted into locally executable clinical decision-support rules.
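The kind of structured logic statement such a project targets can be illustrated with a minimal sketch. The rule below is hypothetical and simplified: the patient attributes, thresholds, and Python representation are illustrative assumptions, not the project's actual format or any official USPSTF logic.

```python
from dataclasses import dataclass


@dataclass
class Patient:
    """Minimal patient record for the illustrative rule (hypothetical fields)."""
    age: int
    years_since_last_screening: float


def colorectal_screening_due(patient: Patient,
                             interval_years: float = 10.0) -> bool:
    """Structured, executable form of a recommendation shaped like
    'screen adults aged 50-75 every 10 years' (illustrative only).

    A narrative guideline becomes two machine-checkable conditions:
    an eligibility criterion and a timing criterion.
    """
    in_age_range = 50 <= patient.age <= 75
    overdue = patient.years_since_last_screening >= interval_years
    return in_age_range and overdue
```

Once a recommendation is expressed this way, a clinical decision-support system can evaluate it automatically against each patient record rather than relying on a clinician to recall the guideline.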
Clearly, the push is now on to rapidly increase knowledge about “what works” in order to improve healthcare and contain costs. However, what counts as evidence to determine “what works” varies among experts. While the term “evidence” is often taken to mean findings established through randomized controlled trials and systematic reviews, the EBM paradigm encompasses other methods of establishing evidence as well.7–10
Examples of these methodologies include observational, cohort, and case–control studies; surveys; qualitative research; and expert opinion, among others.
Many caution against overreliance on any one methodology or approach, because inherent shortcomings can prevent it from meeting the healthcare system's current and future needs for timely generation of evidence. For example, cited limitations of randomized controlled trials include their slow pace, high cost, failure to address many questions of interest to practitioners, underrepresentation of disadvantaged populations, and findings that are difficult to generalize to the broader population.9–14
Further, establishing new evidence about the efficacy and safety of clinical interventions, by any means, is no guarantee that the interventions will be used in actual clinical practice. Practitioners face challenges in staying abreast of new evidence and implementing evidence-based care. While National Institutes of Health (NIH) programs often stress the importance of disseminating translational research, even extensive dissemination of such evidence is rarely sufficient to ensure that it will be put into practice in a real-world clinical setting. Evidence shows that even when a best practice is well known and documented in clinical practice guidelines, it is used in patient care only about 50% of the time.15
In the realm of public policy, policy-makers are besieged with information, yet research results are not easily translated into policy decisions, and interpretation of study findings is sometimes inconclusive and even controversial.16
Even as these challenges are increasingly recognized, adoption of new information and communications technologies (eg, e-prescribing, electronic health records, personal health records) by the healthcare community has been rising slowly but steadily. Experts posit that health information technology (health IT) will be instrumental in answering many of the pressing questions facing the healthcare system and will facilitate efforts to evaluate the effectiveness of healthcare interventions.1
Use of health IT will likely accelerate given the large amounts of money being made available for this purpose through ARRA. While this should lead to a substantial increase in data available to advance evidence creation and improve knowledge building and clinical and preventive care, extracting value from such data repositories remains a challenge for researchers and healthcare practitioners alike.