Validation of early detection cancer biomarkers has proven disappointing: initial promising claims have often not been reproducible in diagnostic samples or have not extended to prediagnostic samples. The previously reported lack of rigorous internal validity (systematic differences between compared groups) and external validity (lack of generalizability beyond compared groups) may be effectively addressed by utilizing blood specimens and data collected within well-conducted cohort studies. Cohort studies with prediagnostic specimens (e.g., blood specimens collected prior to the development of clinical symptoms) and clinical data have recently been used to assess the validity of some early detection biomarkers. With this background, the Division of Cancer Control and Population Sciences (DCCPS) and the Division of Cancer Prevention (DCP) of the National Cancer Institute (NCI) held a joint workshop in August 2013. The goal was to advance early detection cancer research by considering how the infrastructure of cohort studies that already exist or are being developed might be leveraged to include appropriate blood specimens, including prediagnostic specimens, ideally collected at periodic intervals, along with clinical data about symptom status and cancer diagnosis. Three overarching recommendations emerged from the discussions: 1) facilitate sharing of existing specimens and data; 2) encourage collaboration among scientists developing biomarkers and those conducting observational cohort studies or managing healthcare systems with cohorts followed over time; and 3) conduct pilot projects that identify and address key logistic and feasibility issues regarding how appropriate specimens and clinical data might be collected at reasonable effort and cost within existing or future cohorts.
Genome and exome sequencing can identify variants unrelated to the primary goal of sequencing. Detecting pathogenic variants associated with an increased risk of a medical disorder enables clinical interventions to improve future health outcomes in patients and their at-risk relatives. The Clinical Genome Resource, or ClinGen, aims to assess clinical actionability of genes and associated disorders as part of a larger effort to build a central resource of information regarding the clinical relevance of genomic variation for use in precision medicine and research.
We developed a practical, standardized protocol to identify available evidence and generate qualitative summary reports of actionability for disorders and associated genes. We applied a semiquantitative metric to score actionability.
We generated summary reports and actionability scores for the 56 genes and associated disorders recommended by the American College of Medical Genetics and Genomics for return as secondary findings from clinical genome-scale sequencing. We also describe the challenges that arose during the development of the protocol that highlight important issues in characterizing actionability across a range of disorders.
The ClinGen framework for actionability assessment will assist research and clinical communities in making clear, efficient, and consistent determinations of actionability based on transparent criteria to guide analysis and reporting of findings from clinical genome-scale sequencing.
clinical actionability; exome sequencing; genome sequencing; incidental findings; secondary findings
In conjunction with a workshop sponsored by the National Cancer Institute, we identified key “drivers” for accelerating cancer epidemiology across the translational research continuum in the 21st century: emerging technologies, a multilevel approach, knowledge integration, and team science. To map the evolution of these “drivers” and translational phases (T0–T4) over the past decade, we analyzed cancer epidemiology grants funded by the National Cancer Institute and published literature for 2000, 2005, and 2010. For each year, we evaluated the aims of all new and competing grants and the abstracts of randomly selected PubMed articles. Compared with single-institution grants, consortium-based grants were more likely to incorporate contemporary technologies (P = 0.012), engage in multilevel analyses (P = 0.010), and incorporate elements of knowledge integration (P = 0.036). Approximately 74% of analyzed grants and publications involved discovery (T0) or characterization (T1) research, suggesting a need for more translational (T2–T4) research. Our evaluation indicated limited research in 1) a multilevel approach that incorporates molecular, individual, social, and environmental determinants and 2) knowledge integration that evaluates the robustness of scientific evidence. Cancer epidemiology is at the cusp of a paradigm shift, and the field will need to accelerate the pace of translating scientific discoveries in order to deliver population health benefits. While multi-institutional and technology-driven collaboration is happening, concerted efforts to incorporate the other key elements are warranted if the discipline is to meet future challenges.
cancer; cancer epidemiology; collaboration; consortia; epidemiologic methods; knowledge integration; translation
There is expanding consensus on the need to modernize the training of cancer epidemiologists to accommodate the rapidly emerging technological advances and the digital age that are transforming the practice of cancer epidemiology. There is also a growing imperative to extend cancer epidemiology research from etiologic studies to applied research with the potential to affect individual and public health. Medical schools and schools of public health are recognizing the need to develop such integrated programs; however, we lack the data to estimate how many current training programs effectively equip epidemiology students with the knowledge and tools to design, conduct, and analyze these increasingly complex studies. There is also a need to develop new mentoring approaches to account for the transdisciplinary, team-science environment that now prevails. With increased dialogue among schools of public health, medical schools, and cancer centers, revised competencies and training programs at the predoctoral, doctoral, and postdoctoral levels must be developed. Continuous collection of data on the impact and outcomes of such programs is also recommended.
cancer epidemiologists; education; large-scale epidemiologic research
Insufficient evidence on the net benefits and harms of genomic tests in real-world settings is a translational barrier for genomic medicine.
Understanding stakeholders’ assessment of the current evidence base for clinical practice and coverage decisions should be a critical step to influence research, policy, and practice.
Twenty-two stakeholders participated in a workshop exploring the evidence on genomic tests for clinical and coverage decision-making. Stakeholders completed a survey prior to and during the meeting. They also discussed whether they would recommend for or against current clinical use of each test.
At baseline, the level of confidence in the clinical validity and clinical utility of each test varied, although the group expressed greater confidence for EGFR mutation and Lynch syndrome (LS) testing than for Oncotype DX. Following the discussion, survey results reflected even less confidence for Oncotype DX and EGFR testing, but not for LS testing. The majority of stakeholders would consider clinical use of all three tests, but under the conditions of additional research or a shared clinical decision-making approach.
Stakeholder engagement in unbiased settings is necessary to understand various perspectives about evidentiary thresholds in genomic medicine. Participants recommended the use of various methods for evidence generation and synthesis.
With the accelerated implementation of genomic medicine, health-care providers will depend heavily on professional guidelines and recommendations. Because genomics affects many diseases across the life span, no single professional group covers the entirety of this rapidly developing field.
To pursue a discussion of the minimal elements needed to develop evidence-based guidelines in genomics, the Centers for Disease Control and Prevention and the National Cancer Institute jointly held a workshop to engage representatives from 35 organizations with interest in genomics (13 of which make recommendations). The workshop explored methods used in evidence synthesis and guideline development and initiated a dialogue to compare these methods and to assess whether they are consistent with the Institute of Medicine report “Clinical Practice Guidelines We Can Trust.”
The participating organizations that develop guidelines or recommendations all had policies to manage guideline development and group membership, and processes to address conflicts of interest. However, there was wide variation in the reliance on external reviews, the regular updating of recommendations, and the use of systematic reviews to assess the strength of scientific evidence.
Ongoing efforts are required to establish criteria for guideline development in genomic medicine as proposed by the Institute of Medicine.
evidence synthesis; genomic medicine; guideline development
Biomedicine is undergoing a revolution driven by high throughput and connective computing that is transforming medical research and practice. Using oncology as an example, the speed and capacity of genomic sequencing technologies is advancing the utility of individual genetic profiles for anticipating risk and targeting therapeutics. The goal is to enable an era of “P4” medicine that will become increasingly more predictive, personalized, preemptive, and participative over time. This vision hinges on leveraging potentially innovative and disruptive technologies in medicine to accelerate discovery and to reorient clinical practice for patient-centered care. Based on a panel discussion at the Medicine 2.0 conference in Boston with representatives from the National Cancer Institute, Moffitt Cancer Center, and Stanford University School of Medicine, this paper explores how emerging sociotechnical frameworks, informatics platforms, and health-related policy can be used to encourage data liquidity and innovation. This builds on the Institute of Medicine’s vision for a “rapid learning health care system” to enable an open source, population-based approach to cancer prevention and control.
biomedical research; crowdsourcing; health information technology; innovation; precision medicine
There is a growing movement to encourage reproducibility and transparency practices in the scientific community, including public access to raw data and protocols, the conduct of replication studies, the systematic integration of evidence in systematic reviews, and the documentation of funding and potential conflicts of interest. In this survey, we assessed the current status of reproducibility and transparency on these indicators in a random sample of 441 biomedical journal articles published in 2000–2014. Only one study provided a full protocol, and none made all raw data directly available. Replication studies were rare (n = 4), and only 16 studies had their data included in a subsequent systematic review or meta-analysis. The majority of studies did not mention anything about funding or conflicts of interest. The percentage of articles with no statement of conflict decreased substantially (from 94.4% in 2000 to 34.6% in 2014), while the percentage of articles reporting statements of conflicts (0% in 2000, 15.4% in 2014) or declaring no conflicts (5.6% in 2000, 50.0% in 2014) increased. Articles published in journals in the clinical medicine category were almost twice as likely as those in other fields to include no information on funding and to have private funding. This study provides baseline data against which to compare future progress on these indicators in the scientific literature.
Examination of recent trends in reproducibility and transparency practices in biomedical research reveals an ongoing lack of access to full datasets and detailed protocols for both clinical and non-clinical studies.
There is increasing interest in the scientific community in whether published research is transparent and reproducible. Lack of replication and lack of transparency decrease the value of research. Several biomedical journals have started to encourage or require authors to submit detailed protocols and full datasets and to disclose information on funding and potential conflicts of interest. In this study, we investigate reproducibility and transparency practices across the full spectrum of biomedical literature published from 2000 to 2014. We identify an ongoing lack of access to full datasets and detailed protocols for both clinical and non-clinical biomedical investigation. We also map the availability of information on funding and conflicts of interest in this literature. The results from this study provide baseline data against which to compare future progress in improving these indicators in the scientific literature. We believe that this information may be essential to sensitize stakeholders in science to the need for improving reproducibility and transparency practices.
The aim of this study was to explore the prevalence and correlates of receiving and sharing high-penetrance cancer genetic test results.
Participants completed the population-based, cross-sectional 2013 Health Information National Trends Survey. We examined sociodemographic characteristics of participants reporting having had BRCA1/2 or Lynch syndrome genetic testing, and sociodemographic and psychosocial correlates of sharing test results with health professionals and family members.
Participants who underwent BRCA1/2 or Lynch syndrome genetic testing (n=77; 2.42% of respondents) were more likely to be female and to have a family or personal cancer history than those not undergoing testing. Approximately three-quarters of participants shared results with health professionals and three-quarters with their family; only 4% did not share results with anyone. Participants who shared results with health professionals reported greater optimism, self-efficacy for health management, and trust in information from their doctors. Participants who shared results with family were more likely to be female and to have a personal cancer history, and had greater self-efficacy for health management, perceived less ambiguity in cancer prevention recommendations, and lower cancer prevention fatalism.
We identified several novel psychosocial correlates of sharing genetic information. Health professionals may use this information to identify patients less likely to share information with at-risk family members.
Genetic testing; communication of test results; hereditary cancer; BRCA1/2; Hereditary Nonpolyposis Colorectal Cancer (HNPCC); Health Information National Trends Survey; Lynch syndrome
Knowledge integration includes knowledge management, synthesis, and translation processes. It aims to maximize the use of collected scientific information and accelerate the translation of discoveries into individual and population health benefits. Accumulated evidence in cancer epidemiology constitutes a large share of the 2.7 million articles on cancer in PubMed. We examine the landscape of knowledge integration in cancer epidemiology. Past approaches have mostly used retrospective efforts of knowledge management and traditional systematic reviews and meta-analyses. Systematic searches identify 2,332 meta-analyses, about half of which are on genetics and epigenetics. Meta-analyses represent between 1 in 89 and 1 in 1,162 published articles across various cancer subfields. Recently, there are more collaborative meta-analyses with individual-level data, including those with prospective collection of measurements [e.g., genotypes in genome-wide association studies (GWAS)]; this may help increase the reliability of inferences in the field. However, most meta-analyses are still done retrospectively with published information. There is also a flurry of candidate gene meta-analyses with spuriously prevalent "positive" results. Prospective design of large research agendas, registration of datasets, and public availability of data and analyses may improve our ability to identify knowledge gaps, maximize and accelerate translational progress or—at a minimum—recognize dead ends in a more timely fashion.
In a time of scientific and technological developments and budgetary constraints, the National Cancer Institute (NCI)’s Provocative Questions (PQ) Project offers a novel funding mechanism for cancer epidemiologists. We review the purposes underlying the PQ Project, present information on the contributions of epidemiologic research to the current PQ portfolio, and outline opportunities that the cancer epidemiology community might capitalize on to advance a research agenda that spans a translational continuum from scientific discoveries to population health impact.
To understand the translational trajectory of genomic tests in cancer screening, diagnosis, prognosis, and treatment, we reviewed tests that have been assessed by recommendation and guideline developers.
For each test, we marked translational milestones by determining when the genomic association with cancer was first discovered and studied in patients, and when a health application for a specified clinical use was successfully demonstrated and approved or cleared by the US Food and Drug Administration. To identify recommendations and guidelines, we reviewed the websites of cancer, genomic, and general guideline developers and professional organizations. We searched the in vitro diagnostics database of the US Food and Drug Administration for information, and we searched PubMed for translational milestones. Milestones were examined against type of recommendation, Food and Drug Administration approval or clearance, disease rarity, and test purpose.
Of the 45 tests we identified, 9 received strong recommendations for their use in clinical settings, 14 received positive but moderate recommendations, and 22 were not currently recommended. For 18 tests, two or more different sources had issued recommendations, with 67% concordance. Only five tests had Food and Drug Administration approval, and an additional five had clearance. The median time from discovery to recommendation statement was 14.7 years.
In general, no associations were found between translational trajectory and recommendation category.
cancer; genomics; genetic testing; recommendations and guidelines; translational research
Three articles in this issue of Genetics in Medicine describe examples of “knowledge integration,” involving methods for generating and synthesizing rapidly emerging information on health-related genomic technologies and engaging stakeholders around the evidence. Knowledge integration, the central process in translating genomic research, involves three closely related, iterative components: knowledge management, knowledge synthesis, and knowledge translation. Knowledge management is the ongoing process of obtaining, organizing, and displaying evolving evidence. For example, horizon scanning and “infoveillance” use emerging technologies to scan databases, registries, publications, and cyberspace for information on genomic applications. Knowledge synthesis is the process of conducting systematic reviews using a priori rules of evidence. For example, methods including meta-analysis, decision analysis, and modeling can be used to combine information from basic, clinical, and population research. Knowledge translation refers to stakeholder engagement and brokering to influence policy, guidelines and recommendations, as well as the research agenda to close knowledge gaps. The ultrarapid production of information requires adequate public and private resources for knowledge integration to support the evidence-based development of genomic medicine.
evidence-based medicine; genomic medicine; knowledge integration; management; synthesis; translation
The National Cancer Institute's (NCI) Surveillance, Epidemiology, and End Results (SEER) registries have been a source of biospecimens for cancer research for decades. Recently, registry-based biospecimen studies have become more practical, with the expansion of electronic networks for pathology and medical record reporting. Formalin-fixed paraffin-embedded specimens are now used for next-generation sequencing and other molecular techniques. These developments create new opportunities for SEER biospecimen research.
We evaluated 31 research articles published during 2005–2013 based on author confirmation that these studies involved linkage of SEER data to biospecimens. Rather than providing an exhaustive review of all possible articles, our intent was to indicate the breadth of research made possible by such a resource. We also summarize responses to a 2012 questionnaire that was broadly distributed to the NCI intra- and extramural biospecimen research community, including responses from 30 investigators who had used SEER biospecimens in their research. The survey was not intended to be a systematic sample, but instead to provide anecdotal insight on the strengths, limitations, and future of SEER biospecimen research. Identified strengths of this research resource include biospecimen availability, cost, and annotation of data, including demographic information, stage, and survival. Shortcomings include limited annotation of clinical attributes, such as detailed chemotherapy history and recurrence, and slow turnaround following biospecimen requests. A review of selected SEER biospecimen articles, investigator feedback, and technological advances reinforced our view that SEER biospecimen resources should be developed. This would advance research on cancer biology, etiology, and personalized therapy.
SEER; Tissue Banks; Public Health Surveillance; Cancer
Over the past two decades, researchers have increasingly used human biospecimens to evaluate hypotheses related to disease risk, outcomes, and treatment. We conducted an analysis of population-science cancer research grants funded by the National Cancer Institute (NCI) to gain a more comprehensive understanding of the biospecimens and common derivatives involved in those studies and to identify opportunities for advancing the field. We analyzed data available for 1,018 extramural, peer-reviewed grants (active as of July 2012) supported by the Division of Cancer Control and Population Sciences (DCCPS), the NCI division that supports cancer control and population-science extramural research grants. Of these, 455 grants were determined to involve biospecimens or derivatives. The most common specimen types were whole blood (51% of grants), serum or plasma (40%), tissue (39%), and the biospecimen derivative DNA (66%). While the use of biospecimens in molecular epidemiology has become common, our analysis shows that their use in behavioral and social research is emerging. Additionally, we found that the majority of grants (63%) used already existing biospecimens. Grants using existing biospecimens incurred lower costs (studies that used existing serum/plasma biospecimens were 4.2 times less expensive) and produced more publications per year (1.4 times) than grants collecting new biospecimens. This analysis serves as a first step toward understanding the types of biospecimen collections supported by NCI DCCPS. There is room to encourage increased use of archived biospecimens and new collections of rarer specimen and cancer types, as well as collections for behavioral and social research. To facilitate these efforts, we are working to better catalogue our funded resources and make those data available to the extramural community.
Next-generation sequencing (NGS) technologies are used to detect somatic mutations in tumors and to study germline variation. Most NGS studies use DNA isolated from whole blood or fresh-frozen tissue. However, formalin-fixed paraffin-embedded (FFPE) tissues are among the most widely available clinical specimens. Their potential utility as a source of DNA for NGS would greatly enhance population-based cancer studies. While preliminary studies suggest FFPE tissue may be used for NGS, the feasibility of using archived FFPE specimens in population-based studies and the effect of storage time on these specimens need to be determined. We conducted a study to determine whether DNA in archived FFPE high-grade ovarian serous adenocarcinomas from the Surveillance, Epidemiology, and End Results (SEER) registries' Residual Tissue Repositories (RTR) was present in sufficient quantity and quality for NGS assays. Fifty-nine FFPE tissues, stored from 3 to 32 years, were obtained from three SEER RTR sites. DNA was extracted, quantified, quality assessed, and subjected to whole-exome sequencing (WES). Following DNA extraction, 58 of 59 specimens (98%) yielded DNA and proceeded to library generation followed by WES. Specimens stored for longer periods had significantly lower coverage of the target region (6% lower per 10 years, 95% CI: 3-10%) and lower average read depth (40x lower per 10 years, 95% CI: 18-60), although WES data of sufficient quality and quantity were obtained for data mining. Overall, 90% (53/59) of specimens provided usable NGS data regardless of storage time. This feasibility study demonstrates that FFPE specimens acquired from SEER registries after varying lengths of storage time and under varying storage conditions are a promising source of DNA for NGS.
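The reported storage-time effect on coverage can be read as a simple linear trend. As an illustrative sketch only (the linear form and the function name are assumptions for exposition, not the authors' actual regression model), the expected relative target-region coverage after a given number of years in storage could be approximated as:

```python
def expected_relative_coverage(years_stored, loss_per_decade=0.06):
    """Linear approximation: target-region coverage is ~6% lower per
    10 years of FFPE storage (point estimate above; 95% CI: 3-10%).
    Returns the expected coverage relative to a freshly fixed block,
    floored at zero."""
    return max(0.0, 1.0 - loss_per_decade * (years_stored / 10.0))
```

Under this rough approximation, a specimen stored 30 years would retain about 82% of the target-region coverage of a newly fixed specimen, consistent with the study's finding that most specimens remained usable across the 3- to 32-year storage range.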
The dizzying pace of genomic discoveries is leading to an increasing number of clinical applications. However, very little translational research beyond the bench-to-bedside phase is ongoing to assess the validity, utility, implementation, and outcomes of such applications. Here we report cross-sectional results of ongoing horizon scanning of translational genomic research conducted between May 16, 2012, and May 15, 2013. Based on a weekly, systematic query of PubMed, we created a curated set of 505 beyond-bench-to-bedside research publications, including 312 original research articles; 123 systematic and other reviews; 38 clinical guidelines, policies, and recommendations; and 32 papers describing tools, decision support, and educational materials. Most papers (62%) addressed a specific genomic test or other health application; almost half of these (n=180) were related to cancer. We estimate that these publications account for 0.5% of reported human genomics and genetics research during the same period. These data provide baseline information to track the evolving knowledge base and gaps in genomic medicine. Continuous horizon scanning is crucial for an evidence-based translation of genomic discoveries into improved health care and disease prevention.
genomic medicine; public health; surveillance; translational research
Candidate gene and genome-wide association studies (GWAS) represent two complementary approaches to uncovering genetic contributions to common diseases. We systematically reviewed the contributions of these approaches to our knowledge of genetic associations with cancer risk by analyzing the data in the Cancer Genome-wide Association and Meta Analyses database (Cancer GAMAdb). The database catalogs studies published since January 1, 2000, by study and cancer type. In all, we found that meta-analyses and pooled analyses of candidate genes reported 349 statistically significant associations and GWAS reported 269, for a total of 577 unique associations. Only 41 (7.1%) associations were reported in both candidate gene meta-analyses and GWAS, usually with similar effect sizes. When considering only noteworthy associations (defined as those with false-positive report probabilities ≤0.2) and accounting for indirect overlap, we found 202 associations, with 27 of those appearing in both meta-analyses and GWAS. Our findings suggest that meta-analyses of well-conducted candidate gene studies may continue to add to our understanding of the genetic associations in the post-GWAS era.
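The "noteworthy" criterion above (false-positive report probability ≤0.2) follows the standard FPRP framework, which weighs the observed significance level against study power and the prior probability that the association is real. A minimal sketch of that calculation (the function name and the example parameter values are illustrative, not drawn from the Cancer GAMAdb analysis):

```python
def fprp(alpha, power, prior):
    """False-positive report probability: the probability that a
    statistically significant association is a false positive, given
    the significance level (alpha), the power to detect the assumed
    effect size, and the prior probability the association is real."""
    return (alpha * (1.0 - prior)) / (alpha * (1.0 - prior) + power * prior)

# A nominally significant result with a very low prior is still likely false:
fprp(alpha=0.05, power=0.8, prior=0.001)  # ~0.98, not noteworthy
# The same result with a strong prior clears the 0.2 noteworthiness bar:
fprp(alpha=0.05, power=0.8, prior=0.5)    # ~0.06, noteworthy
```

This illustrates why relatively few of the 577 reported associations survive the noteworthiness filter: under low prior probabilities, even statistically significant findings are more likely false than true.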
GWAS; candidate gene studies; meta-analysis; cancer
A major promise of genomic research is information that can transform health care and public health through earlier diagnosis, more effective prevention and treatment of disease, and avoidance of drug side effects. Although there is interest in the early adoption of emerging genomic applications in cancer prevention and treatment, there are substantial evidence gaps that are further compounded by the difficulties of designing adequately powered studies to generate this evidence, thus limiting the uptake of these tools into clinical practice. Comparative effectiveness research (CER) is intended to generate evidence on the “real-world” effectiveness of interventions compared with existing standards of care so that informed decisions can be made to improve health care. Capitalizing on funding opportunities from the American Recovery and Reinvestment Act of 2009, the National Cancer Institute funded seven research teams to conduct CER in genomic and precision medicine and sponsored a workshop on CER on May 30, 2012, in Bethesda, Maryland. This report highlights research findings from those research teams, challenges to conducting CER, the barriers to implementation in clinical practice, and research priorities and opportunities in CER in genomic and precision medicine. Workshop participants strongly emphasized the need for conducting CER for promising molecularly targeted therapies, developing and supporting an integrated clinical network for open-access resources, supporting bioinformatics and computer science research, providing training and education programs in CER, and conducting research in economic and decision modeling.
In contemporary oncology practice there is an increasing emphasis on concurrent evaluation of multiple genomic alterations within the biological pathways driving tumorigenesis. At the foundation of this paradigm shift are several commercially available tumor panels that use next-generation sequencing to develop a more complete molecular blueprint of the tumor. Ideally, these would be used to identify clinically actionable variants that can be matched with available molecularly targeted therapy, regardless of the tumor site or histology. Currently, there is little information available on the post-analytic processes unique to the next-generation sequencing platforms used by the companies offering these tests. Additionally, evidence of clinical validity showing an association between the genetic markers curated in these tests and treatment response to approved molecularly targeted therapies is lacking across all solid-tumor types. To date, there are no published data showing improved outcomes when the commercially available tests are used to guide treatment decisions. What distinguishes these tests from other genomic applications used to guide clinical treatment decisions lies in the sequencing platforms used to generate large amounts of genomic data, which have their own issues regarding analytic and clinical validity, necessary precursors to the evaluation of clinical utility. The generation and interpretation of these data will require new evidentiary standards for establishing not only clinical utility, but also analytic and clinical validity, for this emerging paradigm in oncology practice.