There is an increased level of activity in the biomedical and health informatics world (e-prescribing, electronic health records, personal health records) that, in the near future, will yield a wealth of available data that we can exploit meaningfully to strengthen knowledge building and evidence creation, and ultimately improve clinical and preventive care. The American Medical Informatics Association (AMIA) 2008 Health Policy Conference was convened to focus and propel discussions about informatics-enabled evidence-based care, clinical research, and knowledge management. Conference participants explored the potential of informatics tools and technologies to improve the evidence base on which providers and patients can draw to diagnose and treat health problems. This paper presents a model of an evidence continuum that is dynamic, collaborative, and powered by health informatics technologies. The conference's findings are described, and recommendations on terminology harmonization, facilitation of the evidence continuum in a “wired” world, development and dissemination of clinical practice guidelines and other knowledge support strategies, and the role of diverse stakeholders in the generation and adoption of evidence are presented.
The US healthcare system continues to face multiple challenges related to unsustainable increases in cost, uneven quality of care, and persistent barriers to universal access. Additional pressures are mounting as a result of demographic and other trends: the ageing of the US population, which will produce a more complex and costly disease burden in the coming years; the potentially transformative impact of personalized medicine based on individual genomic information; and the movement toward greater involvement by patients and their families in decision making about health issues.
While biomedical research has yielded many new diagnostic and therapeutic options, it is not always clear which options offer “…the right treatment for the right patient at the right time”.1 Efforts to determine “what works” are hardly new in the study of medicine, but the systematic utilization of “evidence-based medicine” (EBM) began in the 1990s, led by a small group of researchers and educators. As defined by its adherents, EBM “…is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients”.2 The American Medical Informatics Association (AMIA) convened a 2008 Health Policy Conference to focus discussions and advance understanding about the potential for informatics-enabled evidence-based care, clinical research, and knowledge management. Conference participants explored the applicability of informatics tools and technologies to improve the evidence base on which providers and patients can draw to diagnose and treat health problems. This paper, based on the conference findings, presents a model of an evidence continuum that is dynamic, collaborative, and enabled by health informatics technologies.
It has been 10 years since the Institute of Medicine (IOM) Committee on the Quality of Health Care in America released its report, To err is human: building a safer health system.3 Although this report and the subsequent IOM report, Crossing the quality chasm: a new health system for the 21st century,4 have generated significant discussion and research, our healthcare system in general, and the entities and enterprises within it, have been slow to generate, transform, and use evidence. Improved efficiency and effectiveness of care rely on the best information being available and readily accessible by health professionals and patients to use in making decisions. An underlying series of complex processes is required for this to happen via basic, translational, and clinical research: collecting patient data and making it available to researchers and clinicians; organizing the information that is needed for clinical decision making; creating methods to effectively disseminate the information; and capturing the results of decisions so that this information is available for new analyses and future cycles of improvement.5
The Agency for Healthcare Research and Quality (AHRQ) Effective Health Care program conducts research to provide up-to-date, unbiased evidence on healthcare interventions. Evidence-based practice centers (EPCs) (http://www.ahrq.gov/clinic/epc/) are a key part of the AHRQ program; launched in 1997, the EPCs develop evidence reports and technology assessments on topics relevant to clinical, social science/behavioral, economic, and other healthcare organization and delivery issues. The Cochrane Collaboration (http://www.cochrane.org), an international non-profit independent organization, is dedicated to producing and disseminating systematic reviews of healthcare interventions; founded in 1993, the Collaboration sets standards for reviews of medical treatments and offers systematic reviews of research by disorder. Research into the comparative effectiveness of different healthcare interventions has been authorized by the American Recovery and Reinvestment Act (ARRA) of 2009, with the creation of a Federal Coordinating Council charged with coordinating research and guiding investments in this research.6 One example of more recent efforts is an AHRQ funded project entitled “Structuring clinical recommendations for use in clinical decision support applications” to create structured logic statements from clinical recommendations, such as those provided by the US Preventive Services Task Force (USPSTF) and those underlying clinical measures related to “meaningful use” criteria. The goal is to accelerate, in a scalable fashion, the process whereby such clinical recommendations are converted into locally executable clinical decision-support rules.
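To make the idea of a "locally executable clinical decision-support rule" concrete, the sketch below encodes a hypothetical USPSTF-style screening recommendation as structured if-then logic. The class, field names, and numeric thresholds are illustrative assumptions for this paper, not drawn from any actual guideline specification or decision-support product.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """Minimal patient facts needed by the hypothetical rule."""
    age: int
    sex: str
    pack_years: float        # lifetime smoking exposure
    years_since_quit: float  # 0 if still smoking

def lung_screening_recommended(p: Patient) -> bool:
    """Structured logic distilled from a narrative recommendation:
    adults aged 50-80 with a 20+ pack-year smoking history who
    currently smoke or quit within the past 15 years.
    (Thresholds are illustrative, not an official guideline.)"""
    return (
        50 <= p.age <= 80
        and p.pack_years >= 20
        and p.years_since_quit <= 15
    )

print(lung_screening_recommended(Patient(62, "F", 30, 5)))   # meets criteria
print(lung_screening_recommended(Patient(45, "M", 10, 0)))   # too young, too little exposure
```

Once a recommendation is expressed this way, the same rule can be distributed and executed against local patient records at many sites, which is the scalability the converted-recommendation projects described above are aiming for.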
Clearly, the push is on now to rapidly increase knowledge about “what works” in order to improve healthcare and contain costs. However, what counts as evidence to determine “what works” varies according to different experts. While the term “evidence” is often defined as findings established through randomized controlled trials and systematic reviews, the EBM paradigm encompasses other methods of establishing evidence as well.7–10 Examples of these methodologies include observational, cohort, and case–control studies; surveys; qualitative research; and expert opinion, among others.
Many caution against an overreliance on any one methodology or approach, because inherent shortcomings can prevent it from meeting the healthcare system's current and future needs for the timely generation of evidence. For example, limitations of randomized controlled trials have been described as slow pace, high cost, failure to address many questions of interest to practitioners, lack of inclusion of sizeable numbers of disadvantaged populations in studies, and difficulty in generalizing findings to the general population.9–14 Further, establishing new evidence about the efficacy and safety of clinical interventions through any means is no guarantee that the interventions will be used in actual clinical practice. Practitioners face challenges in staying abreast of new evidence and implementing evidence-based care. While National Institute of Health (NIH) programs often refer to the importance of dissemination of translational research, even extensive dissemination of such evidence is rarely sufficient to assure that it will be put into practice in a real world clinical setting. There is evidence demonstrating that even when a best practice is well known and documented in clinical practice guidelines, it is only used in patient care about 50% of the time.15 In the realm of public policy, policy-makers are besieged with information but results are not easily translated into policy decisions, and interpretation of study findings is sometimes inconclusive and even controversial.16
Even as these challenges are increasingly recognized, there has been a slow but steady rise in adoption of new information and communications technologies (eg, e-prescribing, electronic health records, personal health records) by the healthcare community. Experts posit that health information technology (health IT) will be instrumental in helping to answer many of the pressing questions facing the healthcare system and will facilitate efforts to evaluate the effectiveness of healthcare interventions.1 17 Use of health IT will likely accelerate given the large amounts of money being made available for this purpose through ARRA. While this should lead to a substantial increase in available data that we may be able to use in order to advance evidence creation and improve knowledge building and clinical and preventive care, extracting value from such data repositories is a challenge for researchers as well as for healthcare practitioners.
Recognizing the potential of informatics tools and techniques to contribute to a more robust evidence base, the 2008 AMIA Health Policy Conference took as its theme: “informatics-enabled evidence-based care, clinical research, and knowledge management”. AMIA convened a group of approximately 100 subject matter experts and stakeholders representing a variety of backgrounds, disciplines, and work environments, and posed the following questions to encourage them to think broadly about an approach to an evidence continuum that is dynamic, collaborative, and enabled by informatics:
This paper, drawing on insights gained from the conference, outlines the approach to the evidence continuum conceptualized by participants, discusses ways in which it can be strengthened by informatics, and presents findings and recommendations for related research activities and policy actions.
In 2007, the Institute of Medicine (IOM) called for a new “rapid learning healthcare system” to accelerate the generation of new evidence. This holistic paradigm is characterized by continuous learning and improvement, and the evolution of new approaches to rapidly generate, apply, and evaluate evidence. A key feature of this paradigm is a “culture of shared responsibility” in which stakeholders (researchers, providers, patients) embrace the concept of a healthcare system that “learns”; share an understanding of the nature of evidence and the evolution of new methods to generate it; and work together toward the goal of shared decision making that is informed by the best possible evidence.9
With the current approach to the evidence continuum, the evidence flows in a top-down manner, with evidence generated by researchers, then translated by consensus panels into clinical practice guidelines, which are then disseminated chiefly via medical journals, mailings, and other professional vehicles, with the goal that these guidelines will be adopted by clinical practitioners.
Conference participants discussed the ongoing evolution of the EBM concept to meet current and future healthcare system needs and developed a model that illustrates this evolution. The model is characterized by shared responsibility for evidence generation and information exchange; it is dynamic and collaborative, with continuous feedback about effectiveness that emerges in the course of care.9 Conceptualized as a “…continuum of synthesized information”18 it embraces various research methods, acknowledging that evidence is generated from numerous data sources such as practice-based research; large, aggregated databases; modeling and simulation; emerging technologies (implantable devices, geographic information systems technology); and patient contributions.
In this model, research data generated from multiple sources are integrated and harmonized, stored in registries and other public repositories, and converted to computable, user friendly guidelines along with other useful approaches, which are disseminated through numerous resources: online repositories such as the National Guideline Clearinghouse (http://www.guideline.gov/); email alerts; online courses; social networks and microblogging services such as Twitter; online services focused on medical research (Physicians First Watch, Medscape, DocGuide Weekly); and aggressive efforts such as the American Heart Association's “Get With The Guidelines” program.19 The guidelines and other “knowledge distillations” can then be applied by practitioners, patients, and citizens who use computerized decision support tools based on the evidence. Feedback from the point of care (including patient feedback) is actively sought to determine the comparative effectiveness of treatments, and to identify new research questions and priorities that are then integrated into future research agendas.
This evidence continuum requires participation by many types of stakeholders: academic and pharmaceutical industry researchers, community researchers, healthcare providers, patients, policymakers, consumers and their caregivers, and the health IT industry. Rather than the current linear information flow from bench to bedside, the flow is circular, returning information from the bedside to the bench.20 Technical, policy, and regulatory issues are embedded in this continuum, for example, intellectual property, data stewardship, and data access and use among others.
The model shown in figure 1 outlines the following processes that are characteristic of the evidence continuum conceptualized by conference participants: generating evidence, translating evidence for use in care delivery, disseminating and implementing evidence in the clinical environment, and adopting and assessing effectiveness of evidence-based interventions. The arrows in the illustration flow both ways—with evidence-based findings informing practice, and practice generating feedback by testing findings in real world settings, as well as identifying topics for the research agenda.
Participants agreed that we must improve our ability to generate knowledge and evidence to support prevention, diagnosis, and treatment. A variety of evidence-generation approaches may need to be pursued to resolve the many unknowns and “semi-knowns” routinely faced by practitioners. As noted below, informatics research methods and techniques offer genuine promise in this regard, and further research will be needed to confirm these impressions.
As we generate additional knowledge and evidence to support care delivery, we can anticipate that there will be significantly more information that will become accessible to the healthcare system. In its raw form, this information will be difficult to use effectively: data, knowledge, and evidence collected as part of care management or data mining will be more helpful if translated (aggregated and synthesized) for application in clinical care (eg, clinical care guidelines). A primary consideration in this regard is the need for guidelines to be in computer-understandable form to enable a new generation of systems for retrieving, analyzing, and presenting evidence.21
Effective strategies, techniques, and incentives are needed to disseminate and implement this new knowledge throughout clinical environments. This need was noted at the “2nd Annual National Institutes of Health Conference on the science of dissemination and implementation: building research capacity to bridge the gap from science to service” (2009). Among the goals of this conference was the identification of methods and approaches to promote research and organizational capacity that will advance dissemination and implementation science.22 NIH conference participants recognized that there is a large gap between what we know can maximize the quality of healthcare and what is currently being delivered in practice and that we must not only understand how to create the best interventions, but how to ensure that they are effectively delivered by clinical and community practices.
Organizations of practicing physicians are increasingly promoting activities that foster the use of EBM by members and measurably improve practice performance. For example, a key component of the mission of the AMA-convened Physician Consortium for Performance Improvement (PCPI) is “…identifying and developing evidence-based clinical performance measures and measurement resources that enhance quality of patient care and foster accountability”.23 In 2002, the American Academy of Family Physicians (AAFP) established incentives to increase the use of research evidence in continuing medical education activities; in 2011, the Academy will eliminate the incentives, noting that it is now the norm for independent, certified continuing medical education activities to be based on evidence.24
Adoption refers to the acceptance of an intervention and its integration into practice by providers and patients. Barriers to adoption of EBM include, among others, lack of awareness of evidence; limited applicability of recommendations for patient management; absence of good clinical studies for therapies; organizational culture; cognitive overload; and lack of compensation incentives.25 26 With respect to the last barrier, the Institute of Medicine, in its 2007 report, “Rewarding provider performance: aligning incentives in Medicare”, urged the creation of new payment incentives that support the revamping of health system structures and processes of care with the aim of promoting higher value. Although the report acknowledged that there are gaps in knowledge about the magnitude of incentives that are needed to achieve change while avoiding adverse consequences, it advocated beginning to align payment policies with activities that promote quality.27 In fact, the Centers for Medicare and Medicaid Services (CMS) has initiated pay-for-performance initiatives to encourage improved quality of care in Medicare services. For example, the Medicare Health Care Quality Demonstration includes among its goals “…reducing variations in utilization by appropriate use of evidence-based care and best practice guidelines…”.28
Recently, national attention has increasingly focused on the concept of comparative effectiveness. According to AHRQ, comparative effectiveness is “research that is designed to inform healthcare decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options”.29 The need for ongoing assessment of evidence-based interventions was acknowledged in the ARRA legislation, which includes funding for research to evaluate and compare clinical outcomes, effectiveness, risks, and benefits of two or more medical treatments and services that address a particular medical condition.
In 2001, Bakken, in support of her statement that “…an informatics infrastructure is essential for evidence-based practice”, explained that this infrastructure consisted of five building blocks: “standardized terminologies and structures, digital sources of evidence, standards that facilitate healthcare data exchange among heterogeneous systems, informatics processes that support the acquisition and application of evidence to a specific clinical situation, and informatics competencies”.18 Several examples of ways in which fundamental processes of the evidence continuum described above are being facilitated by health IT are outlined below.
While many experts have noted the potential that mining of large health-related databases offers as a tool to generate evidence, the fact that these sources are both numerous and heterogeneous in structure and content presents challenges as well as opportunities. Data are stored in many locations on multiple systems (eg, regional and national healthcare data repositories, structured databases, legacy systems, and text files behind web forms). Increasing amounts of data are multimedia and high-dimensional (eg, voice, imaging, and continuous biomedical signals). These data, ranging from test results to patient-provided data, have varying degrees of reliability.36 Other challenges are related to database design shortcomings and biases42; difficulties in linking data that lack common patient identification42; medical record fragmentation and lack of tools to explore narrative information in records37 39; and data privacy issues.36 These “silos” must yield to data integration efforts.
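The difficulty of linking records across silos that lack a common patient identifier can be illustrated with a minimal sketch. Production linkage systems use probabilistic (Fellegi-Sunter style) models and far richer normalization; the field names, weights, and threshold below are invented for illustration only.

```python
import re
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and sort name tokens so that
    'Smith, John' and 'JOHN SMITH' compare as near-equal strings."""
    return " ".join(sorted(re.findall(r"[a-z]+", name.lower())))

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted agreement on fuzzy name similarity and exact date of birth.
    Weights (0.6 / 0.4) are arbitrary placeholders for illustration."""
    name_sim = SequenceMatcher(
        None, normalize(rec_a["name"]), normalize(rec_b["name"])
    ).ratio()
    dob_match = 1.0 if rec_a["dob"] == rec_b["dob"] else 0.0
    return 0.6 * name_sim + 0.4 * dob_match

ehr = {"name": "Smith, John Q.", "dob": "1958-03-14"}   # record from an EHR silo
claims = {"name": "JOHN SMITH", "dob": "1958-03-14"}    # record from a claims silo

# A score above some tuned threshold (say 0.8) would flag a probable match.
print(match_score(ehr, claims) > 0.8)
```

Even this toy example shows why linkage quality depends on both normalization choices and field reliability, which is one reason the varying reliability of source data noted above matters for evidence generation.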
Four major findings emerged from the 2008 AMIA Health Policy Conference relating to harmonization of terminology, facilitation of the evidence-based continuum, development and dissemination of clinical practice guidelines, and inclusion of multiple, diverse stakeholders (including patients) in the generation and adoption of evidence. These findings are described below along with related recommendations for further action.
AMIA conference participants noted that there are various approaches to defining and using the terms “evidence-based care/practice” and “translational research”. The term evidence-based practice (or practices; EBP(s)), generally refers to approaches to prevention or treatment that are validated by some form of documented scientific evidence. However, as noted in the introduction to this paper, what the term “evidence” means can vary. Experts have argued for renewed attention to the definition of EBM to reflect the evolution of its scope.49 50
With respect to translational research, there are traditionally two distinct domains: “early-stage” translational research, or the translation of basic science to clinical research, and “late-stage” translational research, or the translation of clinical evidence to health services research. The NIH roadmap describes translational research as bringing new knowledge from “bench to bedside” (http://nihroadmap.nih.gov/clinicalresearch/overview-translational.asp). While this journey from the scientist's laboratory bench to a patient's bedside arguably covers the full spectrum of basic science, clinical practice, health services, and health policy research, emphasis has, in reality, been placed on the translation of basic science evidence into efficacious and safe clinical interventions.
In contrast, AHRQ describes translational research as the bringing of clinical research into practice settings. For example, the AHRQ TRIP-II (Translating Research Into Practice) program describes translational research as aimed at producing “sustainable improvements in clinical outcomes and patient outcomes” by “translating research findings into diverse applied settings”.51 AHRQ's definition suggests that the problem of delivering appropriate healthcare is much more than simple “dissemination” of scientific evidence; the production of clinical guidelines is not enough to assure adoption in clinical practice. Thus, translational research, in this view, is a process of developing, testing, and implementing strategies for health services beyond the artificial environment of a clinical trial.
There are also issues related to the use of terms involved in dissemination and implementation (D&I) research. Dissemination can be viewed as the targeted distribution of information and intervention materials to a specific public health or clinical practice audience. Implementation is the use of strategies to adopt and integrate evidence-based health interventions and change practice patterns within specific settings. A review of the literature on dissemination and knowledge utilization noted that there is little agreement among researchers on common definitions of key terms in this field, such as “dissemination”.52 Rabin et al highlight inconsistencies in the use and meaning of D&I terms and concepts and provide a glossary for D&I terminology in public health and clinical settings.53
In spite of various public and private sector efforts, research is not easily converted into information that can be readily adapted into clinical guidelines or related tools which in turn are not readily translatable or deployed in practice. Moreover, numerous individuals and organizations are working on similar translation initiatives but rarely sharing approaches or moving toward common standards on the form of translation. Some promising efforts have been undertaken to improve communication. The Clinical and Translational Science Award (CTSA) Consortium, a national consortium of medical research institutions funded by the NIH National Center for Research Resources (NCRR), is one example. Within the Consortium, the CTSA Informatics Committee promotes successful implementation of informatics support for CTSA functions through the sharing of knowledge, expertise, and resources (http://www.ctsaweb.org/index.cfm?fuseaction=committee.viewCommittee&com_ID=9). The ARRA legislation includes funds to enable AHRQ to build on its existing collaborative and transparent Effective Health Care program. This program is expected to allow for input from many perspectives into the development of research and implementation of the findings.
There is an increasing role for biomedical and health informatics in supporting all phases of the evidence continuum. Research is needed to study how new and combined sources of data (eg, implantable devices, genetics data); data obtained via remote monitoring and telehealth/telemedicine; and data generated or collected by consumers (eg, personal health records) can contribute to evidence-based practice. Public policies regarding data use and data stewardship need to be reviewed and modified to accommodate the new data sources. For example, it is still not always clear who has the authority to mine records and under what circumstances.
Payment and coverage policies are often not consistent with efforts to establish performance or quality measures based on evidence-based findings. Differences in incentives have engendered a culture clash between the majority of researchers and the practitioners to whom evidence is being pushed; this inhibits evidence adoption. Strategies are needed that will enhance dissemination, adoption, and evaluation of evidence-based care for specific care settings and care recipients including non-academic environments, rural and safety net providers, and special populations who are often under-represented in studies or who may require alternative solutions. In the realm of treatment effectiveness, real-world heterogeneity of patients must be taken into account. Patients differ from one another on a number of levels (genetically, biologically, demographically, geographically, etc) and these differences may impact the effectiveness of treatments for reasons that may be either biological or cultural, or both. In another important area related to effectiveness, attention needs to be focused on the growing body of evidence that demonstrates the pervasiveness of interventions that are excessive, inappropriate, and unscientific.54 For example, the 2009 National Quality Forum Spring Implementation Conference focused on waste and overuse in healthcare (http://www.qualityforum.org/Events/Conferences/Spring_Implementation/2009/Agenda.aspx).
Questions that can help guide approaches to creating a research and policy agenda that facilitates the generation, translation, dissemination, and adoption of evidence-based knowledge include:
Many feel that informatics-enabled clinical practice guidelines and/or standards of care have the potential to contribute significantly to the uptake of evidence-based knowledge. However, it is important to ensure that guidelines are developed for clinical practice areas in which they are most needed and translated effectively into everyday practice by being supportive of the way clinicians think and act. An increased awareness of the processes of guideline development and translation is likely to be valuable in achieving a more realistic set of codes.
Guidelines are frequently developed in isolation and there are many steps in the implementation process that reveal the gaps in the guidelines themselves. It is important to understand what is and what is not executable in guidelines. Assuming that they are evidence-based, effective guideline development ultimately requires specification of how data are obtained and represented. The translation process will be greatly facilitated by growth in standards for how evidence is expressed during patient encounters. In order for guidelines to be widely shared and implemented, they need to be encoded in a standardized form; authoring environments have been created to systematize the ways in which encodeable guidelines are developed.55 56 Feedback loops between guideline developers and translators should help to accelerate the process. The Quality Enhancement Research Initiative (QUERI) of the Veterans Health Administration (VHA) is an example of a collaboration across key elements of a large healthcare system that has enriched the development, implementation, and evaluation of clinical practice guidelines.57 Strategies for broad dissemination as well as local adaptation of guidelines are critical for successful implementation.19 58 Further, and of great importance, links between guideline implementation and outcomes need to be established. Variability of guideline implementation (eg, what was and was not adhered to, what else was done, etc) needs to be captured as well as the influence of implementations on outcomes.
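One way to picture an encodeable guideline, together with the implementation-variability capture called for above, is as declarative data plus an adherence log. The schema below is loosely inspired by standardized guideline authoring formats (eg, GLIF-style task networks) but is invented for illustration; the step identifier, fields, and interval are assumptions, not an actual standard.

```python
import json
from datetime import date

# A single hypothetical guideline step expressed as structured data
# rather than narrative text, so it can be shared and executed locally.
guideline_step = {
    "id": "dm2-a1c-check",
    "condition": {"diagnosis": "type-2-diabetes"},
    "action": "order HbA1c",
    "max_interval_days": 180,   # repeat at least every 6 months (illustrative)
}

def adherent(step: dict, last_done: date, today: date) -> bool:
    """Was the step performed within its recommended interval?"""
    return (today - last_done).days <= step["max_interval_days"]

# An adherence record: what was followed, and why a clinician deviated
# (the kind of variability data the feedback loop needs to capture).
log = {
    "step": guideline_step["id"],
    "adhered": adherent(guideline_step, date(2009, 1, 10), date(2009, 6, 1)),
    "deviation_reason": None,   # populated when clinicians override the step
}
print(json.dumps(log))
```

Structured logs of this kind are what would let guideline developers link implementation variability (what was and was not adhered to, and why) to outcomes, closing the feedback loop described above.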
As the demand for “what works” in healthcare accelerates, new evidence will be produced not only by the “push” of academic centers, but by the pull of clinical need, comparative effectiveness research, and public demand. Achieving success requires sustained, multidisciplinary efforts combining the technical skills of clinicians, informaticians, programmers, health services researchers, epidemiologists, biostatisticians, geneticists, policymakers, and others. For example, cross training in skill sets by practitioners of different disciplines is an important step in ensuring full participation in research innovations that are enabled by health IT. Health professionals must be trained in informatics so that they have a thorough understanding of the assets and limitations of health IT tools and systems that are becoming increasingly prevalent throughout the healthcare system, and are expected to play a major role in enabling the processes underlying the evidence continuum. Professionals working at all points in the evidence continuum will need to understand the ramifications of policy decisions at the local and national level (eg, data use, payment incentives).
We must also recognize changes in the ways that patients perceive their involvement in evidence generation, dissemination, and evaluation processes. The impact of the patient-centered movement on healthcare is broad and deep at numerous points in the evidence continuum. In 2007, the National Working Group on Evidence-Based Care issued a call to action advocating increased involvement of patients and consumers in setting research design priorities and clinical endpoints (eg, quality of life); reviewing evidence and identifying unanswered questions; translating findings for lay populations; and monitoring the impact of findings on patient health, outcomes, and access to care.59 In November 2009, DHHS convened a meeting on consumer health informatics, noting that consumer behavior is an essential contributor to quality improvement in healthcare. The purpose of the meeting was to create a blueprint for improving healthcare quality through enhanced behavioral support for healthcare consumers (http://www.consumerhealthinformatics.org/general.asp).
Rapidly growing social networking and collaborative websites are helping to enable this involvement. Examples of patient-driven activities in the research arena are plentiful: maintenance of a network of rare diseases biological resource centers (Eurodis—EuroBioBank, http://www.eurobiobank.org/); promotion of recruitment into clinical trials and training of activists to participate in trial design and oversight (National Breast Cancer Coalition, http://www.stopbreastcancer.org/); funding of researchers who commit to a collaborative rather than a competitive philosophy (The Life Raft Group, http://www.liferaftgroup.org/research.html); and assistance in pinpointing the identity of a gene for a rare inherited disorder (PXE International, http://www.pxe.org/english/view.asp?x=1).
Achieving desirable levels of healthcare quality that incorporate patient safety and patient-centeredness as well as cost effectiveness requires consistent, systematic, and comprehensive application of available evidence-based knowledge. Thus, the conduct of translational health research has become a vital national enterprise. However, multiple barriers prevent the effective translation of basic science discoveries into clinical and community practice. Many important questions remain unanswered, and the need for evidence to support clinical practice far exceeds the healthcare research enterprise's ability to produce it.
The current approach to the generation and sharing of evidence does not adequately address this need, and AMIA meeting participants agreed that efforts are required to create a more dynamic and collaborative evidence continuum, including new approaches to translate evidence to support clinical care and to study outcomes. They advanced an alternative way to depict the evidence continuum, which we have outlined in this paper. Additional refinement of this concept is invited.
One of the ways to accelerate the generation and adoption of evidence is to engage more practicing community-based clinicians as active participants in the research process. Expanding the pool of researchers in this way will help ground research in the “real world,” where practitioners deal every day with unanswered questions presented by their patients related to disease prevention and treatment. However, this evidence-generation approach, as well as others discussed in this paper, raises new challenges in assessing the quality of new evidence,63 issues in applying the evidence across broad and specialized populations, and other concerns that will require ongoing thoughtful consideration.
New and emerging technologies and advances in the electronic capture of clinical care information present opportunities for powering the evidence continuum. However, the healthcare profession faces numerous challenges in implementing health IT to drive improvement in health outcomes across a broad array of settings. Funding for research and evaluation of efforts to develop and deploy health IT tools is essential to assuring their successful use and integration into health delivery systems.
In a recent commentary on the original 1992 article in JAMA that brought the term “EBM” to the attention of the broader medical community, Montori et al discussed a second fundamental principle of EBM outlined by the EBM Working Group in 2000 (the first principle being the hierarchy of evidence): “…whatever the evidence, value and preference judgments are implicit in every clinical decision. A key implication of this second principle is that clinical decisions, recommendations, and practice guidelines must not only attend to the best available evidence, but also to the values and preferences of the informed patient.”63
In keeping with this tenet as well as with the overarching principles regarding shared decision making and customization of care based on patient choice outlined by the IOM in Crossing the quality chasm,4 this paper has described an enhanced role for patients in the evidence continuum—as generators of data and as informed participants in decision making about the application of evidence-based findings to healthcare. Making this a reality will require extensive education of consumers and patients about the principles of evidence-based research and care and about the value of their personal health information for legitimate biomedical and health services research. It will require widespread acceptance of their participation by researchers and providers, as well as practical techniques to encourage their involvement to the greatest extent possible. And it will require the technical advances and public policies needed to ensure the privacy and security of person-specific data. Communication and information technology each have roles to play in this transformation.
By convening the 2008 conference and disseminating this paper, AMIA has further delineated critical issues related to evidence-based practice and care and informatics. The AMIA Board of Directors reviewed the paper and endorsed its findings, conclusions, and recommendations. AMIA will continue to encourage other organizations to work collaboratively to further this important public discourse. AMIA will forward the paper and its recommendations to DHHS organizations and private sector organizations for their review and consideration.
AMIA would like to acknowledge the input of the many participants and presenters from the 2008 AMIA Health Policy Conference on which this article is based. Doug Fridsma and Suzanne Markel-Fox co-chaired the Meeting Steering Committee. Srini Kalluri, Kraig Kinchen, David Leventhal, Joyce Niland, Phil Payne, and Mark Weiner served as members of the Steering Committee. They were actively involved in and provided valuable input to all aspects of the planning processes.
The authors wish to thank Freda Temple for her careful review and editing of all versions of this manuscript. The authors would also like to thank the following individuals who reviewed earlier versions of this summary of the 2008 AMIA meeting: Doug Fridsma, Kraig Kinchen, and Suzanne Markel-Fox. Additionally, AMIA acknowledges and thanks the organizations that generously supported the 2008 meeting: Amplify Public Affairs, GSK, IMO, Lilly, PercipEnz, and Pfizer.
Competing interests: None.
Provenance and peer review: Not commissioned; externally peer reviewed.