In conjunction with a workshop sponsored by the National Cancer Institute, we identified key “drivers” for accelerating cancer epidemiology across the translational research continuum in the 21st century: emerging technologies, a multilevel approach, knowledge integration, and team science. To map the evolution of these “drivers” and translational phases (T0–T4) over the past decade, we analyzed cancer epidemiology grants funded by the National Cancer Institute and published literature for 2000, 2005, and 2010. For each year, we evaluated the aims of all new/competing grants and the abstracts of randomly selected PubMed articles. Compared with single-institution grants, consortium-based grants were more likely to incorporate contemporary technologies (P = 0.012), engage in multilevel analyses (P = 0.010), and incorporate elements of knowledge integration (P = 0.036). Approximately 74% of the analyzed grants and publications involved discovery (T0) or characterization (T1) research, suggesting a need for more translational (T2–T4) research. Our evaluation indicated limited research on 1) a multilevel approach that incorporates molecular, individual, social, and environmental determinants and 2) knowledge integration that evaluates the robustness of scientific evidence. Cancer epidemiology is at the cusp of a paradigm shift, and the field will need to accelerate the pace of translating scientific discoveries in order to impart population health benefits. While multi-institutional and technology-driven collaboration is happening, concerted efforts to incorporate the other key elements are warranted for the discipline to meet future challenges.
cancer; cancer epidemiology; collaboration; consortia; epidemiologic methods; knowledge integration; translation
Advances in genomics have near-term impact on diagnosis and management of
monogenic disorders. For common complex diseases, the use of genomic information
from multiple loci (polygenic model) is generally not useful for diagnosis and
individual prediction. In principle, the polygenic model could be used along
with other risk factors in stratified population screening to target
interventions. For example, compared with age-based criteria for breast,
colorectal, and prostate cancer screening, adding polygenic risk and family
history holds promise for more efficient screening: earlier start and/or
increased frequency of screening for segments of the population whose
absolute risk exceeds an established screening threshold, and later start
and/or decreased frequency for segments of the population at lower risk.
This approach, while promising, faces formidable challenges in building its
evidence base and in implementation in practice. Currently, it is unclear
whether polygenic risk can contribute enough discrimination to make
stratified screening worthwhile. Empirical data are lacking on
population-based, age-specific absolute risks that combine genetic and
non-genetic factors, on the impact of polygenic risk variants on disease
natural history, and on the comparative balance of benefits and harms of
stratified interventions.
Implementation challenges include difficulties in integrating this
information into the current health-care system in the United States, the
setting of appropriate risk thresholds, and ethical, legal, and social issues. In an era
of direct-to-consumer availability of personal genomic information, the public
health and health-care systems need to prepare for an evidence-based integration
of this information into population screening.
evidence-based medicine; genetics; genomics; polygenic model; public health; risk assessment; screening
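The stratification logic described above can be sketched numerically. The following is a minimal, hypothetical illustration (not from the article): given age-specific absolute risks for an average-risk person, a polygenic relative risk shifts the age at which an individual reaches the absolute risk level that triggers screening under the standard age-based criterion. All function names and numbers are invented for illustration.

```python
# Illustrative sketch of risk-stratified screening start ages.
# All numbers are hypothetical; they do not come from the article.

def stratified_start_age(baseline_risk_by_age, relative_risk, standard_start_age):
    """Return the earliest age at which an individual's absolute risk
    (baseline risk times polygenic relative risk) reaches the absolute
    risk an average-risk person has at the standard screening start age."""
    # The implied screening threshold: average-risk absolute risk at the standard start age
    threshold = baseline_risk_by_age[standard_start_age]
    for age in sorted(baseline_risk_by_age):
        if baseline_risk_by_age[age] * relative_risk >= threshold:
            return age
    return None  # never reaches the threshold in the modeled age range

# Hypothetical 10-year absolute risks (%) by age for an average-risk person
baseline = {40: 0.6, 45: 0.9, 50: 1.3, 55: 1.7, 60: 2.2, 65: 2.6, 70: 3.0}

# A high polygenic-risk stratum (RR = 2.0) crosses the age-50 threshold risk
# earlier; a low-risk stratum (RR = 0.5) crosses it later.
print(stratified_start_age(baseline, 2.0, 50))  # earlier start
print(stratified_start_age(baseline, 0.5, 50))  # later start
```

The same comparison of absolute risk against a fixed threshold underlies the "earlier start and/or increased frequency" versus "later start and/or decreased frequency" trade-off the abstract describes.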
We live in the era of genomics and big data. Evaluating the impact on health of large-scale biological, social, and environmental data is an emerging challenge in the field of epidemiology. In the past 3 years, major discussions and plans for the future of epidemiology, including several recommendations for action to transform the field, have been launched by 2 institutes within the National Institutes of Health. In the present commentary, I briefly explore the themes of these recommendations and their effects on leadership, resources, cohort infrastructure, and training. Ongoing engagement within the epidemiology community is needed to determine how to shape the evolution of the field and what truly matters for changing population health. We also need to assess how to leverage existing epidemiology resources and develop new studies to improve human health. Readers are invited to examine these recommendations, consider others that might be important, and join in the conversation about the future of epidemiology.
big data; epidemiology; funding; genomics; precision medicine; training
Three articles in this issue of Genetics in Medicine describe examples of “knowledge integration,” involving methods for generating and synthesizing rapidly emerging information on health-related genomic technologies and engaging stakeholders around the evidence. Knowledge integration, the central process in translating genomic research, involves three closely related, iterative components: knowledge management, knowledge synthesis, and knowledge translation. Knowledge management is the ongoing process of obtaining, organizing, and displaying evolving evidence. For example, horizon scanning and “infoveillance” use emerging technologies to scan databases, registries, publications, and cyberspace for information on genomic applications. Knowledge synthesis is the process of conducting systematic reviews using a priori rules of evidence. For example, methods including meta-analysis, decision analysis, and modeling can be used to combine information from basic, clinical, and population research. Knowledge translation refers to stakeholder engagement and brokering to influence policy, guidelines and recommendations, as well as the research agenda to close knowledge gaps. The ultrarapid production of information requires adequate public and private resources for knowledge integration to support the evidence-based development of genomic medicine.
evidence-based medicine; genomic medicine; knowledge integration; management; synthesis; translation
Biomedicine is undergoing a revolution driven by high-throughput and connective computing that is transforming medical research and practice. Using oncology as an example, the speed and capacity of genomic sequencing technologies are advancing the utility of individual genetic profiles for anticipating risk and targeting therapeutics. The goal is to enable an era of “P4” medicine that will become increasingly more predictive, personalized, preemptive, and participative over time. This vision hinges on leveraging potentially innovative and disruptive technologies in medicine to accelerate discovery and to reorient clinical practice for patient-centered care. Based on a panel discussion at the Medicine 2.0 conference in Boston with representatives from the National Cancer Institute, Moffitt Cancer Center, and Stanford University School of Medicine, this paper explores how emerging sociotechnical frameworks, informatics platforms, and health-related policy can be used to encourage data liquidity and innovation. This builds on the Institute of Medicine’s vision for a “rapid learning health care system” to enable an open source, population-based approach to cancer prevention and control.
biomedical research; crowdsourcing; health information technology; innovation; precision medicine
State health departments in Michigan, Minnesota, Oregon, and Utah explored the use of genomic information, including family health history, in chronic disease prevention programs. To support these explorations, the Office of Public Health Genomics at the Centers for Disease Control and Prevention provided cooperative agreement funds from 2003 through 2008. The 4 states’ chronic disease programs identified advocates, formed partnerships, and assessed public data; they integrated genomics into existing state plans for genetics and chronic disease prevention; they developed projects focused on prevention of asthma, cancer, cardiovascular disease, diabetes, and other chronic conditions; and they created educational curricula and materials for health workers, policymakers, and the public. Each state’s program was different because of the need to adapt to existing culture, infrastructure, and resources, yet all were able to enhance their chronic disease prevention programs with the use of family health history, a low-tech “genomic tool.” Additional states are drawing on the experience of these 4 states to develop their own approaches.
The term P4 medicine is used to denote an evolving field of medicine that uses systems biology approaches and information technologies to enhance wellness rather than just treat disease. Its four components are predictive, preventive, personalized, and participatory medicine. In the current paper, it is argued that in order to fulfill the promise of P4 medicine, a “fifth P,” the population perspective, must be integrated into each of the other four components. A population perspective integrates predictive medicine into the ecologic model of health; applies principles of population screening to preventive medicine; uses evidence-based practice to personalize medicine; and grounds participatory medicine in the three core functions of public health: assessment, policy development, and assurance. Population sciences (including epidemiology; behavioral, social, and communication sciences; health economics; implementation science; and outcomes research) are needed to show the value of P4 medicine. Balanced strategies that implement both population- and individual-level interventions can best maximize health benefits, minimize harms, and avoid unnecessary health-care costs.
Advances in genomics and related fields promise a new era of personalized medicine in the cancer care continuum. Nevertheless, there are fundamental challenges in integrating genomic medicine into cancer practice. We explore how multilevel research can contribute to implementation of genomic medicine. We first review the rapidly developing scientific discoveries in this field and the paucity of current applications that are ready for implementation in clinical and public health programs. We then define a multidisciplinary translational research agenda for successful integration of genomic medicine into policy and practice and consider challenges for successful implementation. We illustrate the agenda using the example of Lynch syndrome testing in newly diagnosed cases of colorectal cancer and cascade testing in relatives. We synthesize existing information in a framework for future multilevel research for integrating genomic medicine into the cancer care continuum.
The increasing availability of personal genomic tests has led to discussions about the validity and utility of such tests and the balance of benefits and harms. A multidisciplinary workshop was convened by the National Institutes of Health and the Centers for Disease Control and Prevention to review the scientific foundation for using personal genomics in risk assessment and disease prevention and to develop recommendations for targeted research. The clinical validity and utility of personal genomics is a moving target with rapidly developing discoveries but little translation research to close the gap between discoveries and health impact. Workshop participants made recommendations in five domains: (1) developing and applying scientific standards for assessing personal genomic tests; (2) developing and applying a multidisciplinary research agenda, including observational studies and clinical trials to fill knowledge gaps in clinical validity and utility; (3) enhancing credible knowledge synthesis and information dissemination to clinicians and consumers; (4) linking scientific findings to evidence-based recommendations for use of personal genomics; and (5) assessing how the concept of personal utility can affect health benefits, costs, and risks by developing appropriate metrics for evaluation. To fulfill the promise of personal genomics, a rigorous multidisciplinary research agenda is needed.
behavioral sciences; epidemiologic methods; evidence-based medicine; genetics; genetic testing; genomics; medicine; public health
Recent emphasis on translational research (TR) is highlighting the role of epidemiology in translating scientific discoveries into population health impact. The authors present applications of epidemiology in TR through 4 phases designated T1–T4, illustrated by examples from human genomics. In T1, epidemiology explores the role of a basic scientific discovery (e.g., a disease risk factor or biomarker) in developing a “candidate application” for use in practice (e.g., a test used to guide interventions). In T2, epidemiology can help to evaluate the efficacy of a candidate application by using observational studies and randomized controlled trials. In T3, epidemiology can help to assess facilitators and barriers for uptake and implementation of candidate applications in practice. In T4, epidemiology can help to assess the impact of using candidate applications on population health outcomes. Epidemiology also has a leading role in knowledge synthesis, especially using quantitative methods (e.g., meta-analysis). To explore the emergence of TR in epidemiology, the authors compared articles published in selected issues of the Journal in 1999 and 2009. The proportion of articles identified as translational doubled from 16% (11/69) in 1999 to 33% (22/66) in 2009 (P = 0.02). Epidemiology is increasingly recognized as an important component of TR. By quantifying and integrating knowledge across disciplines, epidemiology provides crucial methods and tools for TR.
epidemiology; genomics; medicine; public health; translational research
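The abstract above notes epidemiology's leading role in knowledge synthesis "especially using quantitative methods (e.g., meta-analysis)." As a minimal sketch of what that entails, the following illustrates fixed-effect, inverse-variance meta-analysis of study-level log odds ratios. The data are hypothetical and the functions are invented for illustration; they are not from the article.

```python
# Fixed-effect, inverse-variance meta-analysis sketch.
# Hypothetical data; not from the article.
import math

def fixed_effect_meta(log_ors, ses):
    """Combine log odds ratios, weighting each study by the inverse of
    its variance; return the pooled log OR and its standard error."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies of the same gene-disease association
log_ors = [math.log(1.4), math.log(1.2), math.log(1.6)]
ses = [0.15, 0.10, 0.20]  # standard errors of the log ORs

pooled, se = fixed_effect_meta(log_ors, ses)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
print(f"pooled OR = {math.exp(pooled):.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The pooled estimate is pulled toward the most precise study, and its standard error is smaller than any single study's: the quantitative payoff of synthesizing evidence across studies.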
The recent success of genome-wide association studies in finding susceptibility genes for many common diseases presents tremendous opportunities for epidemiologic studies of environmental risk factors. Analysis of gene-environment interactions, included in only a small fraction of epidemiologic studies until now, will begin to accelerate as investigators integrate analyses of genome-wide variation and environmental factors. Nevertheless, considerable methodological challenges are involved in the design and analysis of gene-environment interaction studies. The authors review these issues in the context of evolving methods for assessing interactions and discuss how the current agnostic approach to interrogating the human genome for genetic risk factors could be extended into a similar approach to gene-environment-wide interaction studies of disease occurrence in human populations.
environment; epidemiologic methods; genetics; genomics
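One basic approach to the interaction analyses discussed above is to test for departure from multiplicative joint effects by comparing the genotype odds ratio among exposed subjects with that among unexposed subjects. The following self-contained sketch uses hypothetical case-control counts and invented helper names; it illustrates the idea only and is not the authors' method.

```python
# Sketch of a multiplicative gene-environment interaction test from
# stratified case-control counts. Counts are hypothetical.
import math

def odds_ratio_with_se(a, b, c, d):
    """OR and SE of log(OR) for a 2x2 table:
    a = carrier cases, b = noncarrier cases,
    c = carrier controls, d = noncarrier controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's formula
    return or_, se

def interaction_z(table_exposed, table_unexposed):
    """Wald z statistic for departure from multiplicative joint effects:
    difference of the two stratum-specific log ORs over its SE."""
    or1, se1 = odds_ratio_with_se(*table_exposed)
    or0, se0 = odds_ratio_with_se(*table_unexposed)
    log_ratio = math.log(or1) - math.log(or0)
    return log_ratio / math.sqrt(se1**2 + se0**2)

# (carrier cases, noncarrier cases, carrier controls, noncarrier controls)
exposed   = (120, 180, 60, 240)  # genotype OR among exposed: about 2.67
unexposed = (80, 220, 70, 230)   # genotype OR among unexposed: about 1.19

z = interaction_z(exposed, unexposed)
print(f"interaction z = {z:.2f}")
```

In a genome-wide setting, the same test would be repeated across variants with stringent multiple-testing correction, which is part of what makes the agnostic gene-environment-wide approach methodologically demanding.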
Genome-wide association studies (GWAS) have led to a rapid increase in available data on common genetic variants and phenotypes and numerous discoveries of new loci associated with susceptibility to common complex diseases. Integrating the evidence from GWAS and candidate gene studies depends on concerted efforts in data production, online publication, database development, and continuously updated data synthesis. Here the authors summarize current experience and challenges on these fronts, which were discussed at a 2008 multidisciplinary workshop sponsored by the Human Genome Epidemiology Network. Comprehensive field synopses that integrate many reported gene-disease associations have been systematically developed for several fields, including Alzheimer's disease, schizophrenia, bladder cancer, coronary heart disease, preterm birth, and DNA repair genes in various cancers. The authors summarize insights from these field synopses and discuss remaining unresolved issues, especially in the light of evidence from GWAS, for which they summarize empirical P-value and effect-size data on 223 discovered associations for binary outcomes (142 with P < 10⁻⁷). They also present a vision of collaboration that builds reliable cumulative evidence for genetic associations with common complex diseases and a transparent, distributed, authoritative knowledge base on genetic variation and human health. As a next step in the evolution of Human Genome Epidemiology reviews, the authors invite investigators to submit field synopses for possible publication in the American Journal of Epidemiology.
association; database; encyclopedias; epidemiologic methods; genome, human; genome-wide association study; genomics; meta-analysis
The authors describe the rationale and initial development of a new collaborative initiative, the Genomic Applications in Practice and Prevention Network. The network, convened by the Centers for Disease Control and Prevention and the National Institutes of Health, includes multiple stakeholders from academia, government, health care, public health, industry, and consumers. The premise of the network is that there is an unaddressed chasm between gene discoveries and demonstration of their clinical validity and utility. This chasm is due to the lack of readily accessible information about the utility of most genomic applications and the lack of the knowledge consumers and providers need to implement what is known. The mission of the network is to accelerate and streamline the effective integration of validated genomic knowledge into the practice of medicine and public health, by empowering and sponsoring research, evaluating research findings, and disseminating high-quality information on candidate genomic applications in practice and prevention. The network will develop a process that links ongoing collection of information on candidate genomic applications to four crucial domains: (1) knowledge synthesis and dissemination for new and existing technologies, and the identification of knowledge gaps; (2) a robust evidence-based recommendation development process; (3) translation research to evaluate validity, utility, and impact in the real world, and how to disseminate and implement recommended genomic applications; and (4) programs to enhance practice, education, and surveillance.
decision support; genomics; information; medicine; network; public health
Validation of early detection cancer biomarkers has proven disappointing: initially promising claims have often not been reproducible in diagnostic samples or have not extended to prediagnostic samples. The previously reported lack of rigorous internal validity (systematic differences between compared groups) and external validity (lack of generalizability beyond compared groups) may be effectively addressed by utilizing blood specimens and data collected within well-conducted cohort studies. Cohort studies with prediagnostic specimens (e.g., blood specimens collected prior to the development of clinical symptoms) and clinical data have recently been used to assess the validity of some early detection biomarkers. With this background, the Division of Cancer Control and Population Sciences (DCCPS) and the Division of Cancer Prevention (DCP) of the National Cancer Institute (NCI) held a joint workshop in August 2013. The goal was to advance early detection cancer research by considering how the infrastructure of cohort studies that already exist or are being developed might be leveraged to include appropriate blood specimens, including prediagnostic specimens, ideally collected at periodic intervals, along with clinical data about symptom status and cancer diagnosis. Three overarching recommendations emerged from the discussions: 1) facilitate sharing of existing specimens and data; 2) encourage collaboration among scientists developing biomarkers and those conducting observational cohort studies or managing health-care systems with cohorts followed over time; and 3) conduct pilot projects that identify and address key logistic and feasibility issues regarding how appropriate specimens and clinical data might be collected at reasonable effort and cost within existing or future cohorts.
Within current oncology practice, several genomic applications are being used to inform treatment decisions with molecularly targeted therapies in breast, lung, and colorectal cancers, melanoma, and other cancers. This commentary introduces a conceptual framework connecting the full spectrum of biomedical research disciplines, including fundamental laboratory research, clinical trials, and observational studies, in the translation of genomic applications into clinical practice. The conceptual framework illustrates the contribution that well-designed observational epidemiologic studies make to the successful translation of these applications and characterizes the role observational epidemiology plays in driving the dynamic, iterative bench-to-bedside and bedside-to-bench translation continuum. We also discuss how the principles of this conceptual model, which emphasizes the integration of multidisciplinary research, can be applied to the evolving paradigm of “precision oncology” focusing on multiplex tumor sequencing, and we identify opportunities for observational studies to contribute to the successful and efficient translation of this paradigm.
translational epidemiology; observational studies; precision medicine; pharmacogenomics