Pharmacogenomic tests offer a promising strategy to improve the safety and efficacy of drug treatment. Compelling examples, such as HLA-B*5701 testing to identify patients at risk for abacavir-associated hypersensitivity (1), are already changing clinical care. However, the level of evidence required to establish clinical utility is often the subject of debate. Determining the most efficient and effective pathway to benefit for a given test is therefore both a practical and an ethical concern.
Because of differences in clinical context and test properties, some pharmacogenomic tests are easier to validate than others. When the association between genotype and drug response is sufficiently strong, a test’s value may be based on retrospective or observational data. As an example, the clinical utility of testing for variants in the thiopurine methyltransferase (TPMT) gene was established from retrospective studies, supporting the use of TPMT testing prior to thiopurine treatment in childhood leukemia and other clinical settings (2). Indeed, when observational data suggest a significant clinical benefit, the gold standard of a randomized trial may be unethical – as is arguably the case for KRAS testing prior to cetuximab therapy for colorectal cancer (3). As new and more technically sophisticated pharmacogenomic tests are developed, however, prospective clinical trials may be required to confirm a therapeutic benefit.
This approach is likely to be particularly important for complex biological markers, such as gene expression or other molecular profiles of inherited or acquired genetic change (or both) intended to inform treatment decisions. For tests of this kind, data from prospective, randomized assessments can allow clinicians, patients, and other health care decision-makers (including insurers and guidelines panels) to define with confidence the circumstances under which these new testing paradigms improve outcomes. But there is a caveat: the predictive value of the test must be adequately established before it is used to allocate patients in a clinical trial to different treatment arms – that is, before its potential to improve health outcomes by guiding therapy is assessed. Inadequate knowledge of the test’s capacity to distinguish patient subgroups can lead to false conclusions about test performance and treatment efficacy, resulting in unnecessary risks for trial participants and for patients whose treatment is subsequently based on trial data; scarce research resources are wasted as well. It is perhaps no more than a truism to say that a test’s predictive value should be adequately understood before the test is used in a clinical trial, but for complex molecular tests, this pathway is largely uncharted territory.
Current molecular science is characterized by rapid discovery: scientists can now generate unprecedented amounts of data on genomic variation and gene and protein expression. New biomarkers to guide and improve therapy are certain to emerge from this effort, but effective clinical application involves challenges that are also unprecedented (4). Reliable methods for the analysis of voluminous and complex data are needed, as are new technical standards addressing the management of samples from which dynamic measurements, such as gene expression, will be made.
The significance of these issues is underscored by the recent suspension of three clinical trials that were based on promising reports of a new method of genomic profiling to predict chemotherapy response; some of the initial work on which the trials were based has since been retracted (5, 6). The circumstances that triggered the investigation of these trials, including allegations of falsified credentials, should not distract from the underlying concerns about premature clinical study of pharmacogenomic tests. In the wake of these events, an Institute of Medicine (IOM) committee has been convened. Tellingly, its first task is to define criteria for evaluating genomic and other molecular predictors, to determine test readiness for clinical trials. The criteria defined by the IOM Committee will then be applied to the three suspended cancer genomics clinical trials as a test case (7). A similar effort to define criteria for moving genomic tests forward into translational research is being undertaken by a committee convened by Duke University (the Translational Medicine Quality Framework, G Ginsberg, personal communication). Both of these efforts recognize the need for robust and possibly innovative scientific criteria for the evaluation of tests based on innovative and sophisticated technology.
As the IOM Committee and other expert groups define these criteria, they will need to take into account the high expectations for this new class of biomarkers and acknowledge the financial and intellectual investments of researchers who develop innovative technology, with their potential to generate conflicts of interest. An array of procedures has been developed in recent years to manage and reduce financial conflicts of interest, but non-financial conflicts have received less attention (8, 9). Non-financial conflicts refer to a researcher’s personal interests in the success of her research – not only for the recognition and other career benefits it may bring, but also for its contribution to new knowledge and validation of the researcher’s ideas.
People who develop new knowledge relevant to clinical care are understandably committed to its potential benefits, and have an interest in seeing the benefits established. Such interests are an inherent part of scientific inquiry, and an important driver of research success. However, they have the potential to create inadvertent biases, not unlike financial conflicts of interest (10). The traditional method of blinding the researcher to the case/control status of a participant or research sample is an example of a methodological strategy to reduce such biases.
In this context, it is possible that the validation of new, complex genomic tests might benefit from inclusion of a procedural element: independent review of the data and analyses supporting test performance. While some reviewers with appropriate expertise might also have intellectual conflicts, procedures to ensure a transparent review process could provide assurance that a test has met an appropriate threshold of validation to support its use in high-stakes research decisions, such as the allocation of patients to different arms of a clinical trial or the commitment of public resources to a large-scale study. In addition, the independent analysis could provide useful insight regarding trial design elements that can adequately address uncertainties about the test. The goal would be to insert a step in the scientific process that ensures rigor as tests are moved into clinical trials – not a new regulatory procedure that might stifle innovation.
An additional review procedure of this kind is likely to be necessary only in certain circumstances: when the testing process is complex and the proposed use could have a significant impact on study outcomes. As the TPMT and KRAS examples illustrate (2, 3), many pharmacogenetic tests based on well-defined gene variants can be adequately validated without such external review. However, novel applications of genomics and related molecular tools will involve more complicated measurements and new analytical methods, often incorporating algorithms for data interpretation. These tests have the potential to provide a more comprehensive assessment of genetic change and its impact on disease biology and outcomes, but are also more difficult to interpret. In keeping with these advances, and in appropriate circumstances, a review of original data and analytic procedures – more substantive than the standard peer review of a journal article – may be warranted.
Investigators may find such external review advantageous for several reasons. First, investigators can disclose to research participants that the data serving as the basis for treatment decisions in the new study have been reviewed and validated by an external group, easing any participant concerns regarding conflicts, financial or non-financial. Second, investors and funding agencies may have greater confidence that a study will be a good investment and more likely to succeed. Third, an external review may facilitate publication in peer-reviewed journals. To address concerns about proprietary data, investigators can require confidentiality agreements for reviewers.
Gene expression profiling and other complex molecular measures are opening new opportunities for the field of pharmacogenomics. In this explosive period of discovery and technology development, new strategies for test evaluation are needed and must be carefully considered. When a test is sufficiently complex, independent review of pharmacogenomic data can benefit all parties. Innovations need champions, and the team that developed a test is often best placed to design a trial to assess its use. But participants, funders – and researchers themselves – stand to benefit if those who develop and champion innovative and technically complex tests are willing to submit their data and analytic methods to rigorous independent assessment before moving the research to clinical evaluation.
More fundamentally, achieving the goal of transparency in scientific research requires ongoing consideration of methodology. The more complex the test, the more likely it is that an independent assessment will help to ensure appropriate validation: analogous to blinding, an independent review can guard against unconscious bias in data analysis and interpretation. Other measures to ensure robust translational science may emerge as innovation continues. From a practical perspective, the pathway to benefit is always a work in progress.
This work was supported in part by grants U01GM092676 and R01GM081416 from the National Institutes of Health.
Dr. Haga is a member of the Patient Advisory and Public Policy Board of Generations Health.