This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Several thousand human genome epidemiology association studies are published every year investigating the relationship between common genetic variants and diverse phenotypes. Transparent reporting of study methods and results allows readers to better assess the validity of study findings. Here, we document reporting practices of human genome epidemiology studies.
Articles were randomly selected from a continuously updated database of human genome epidemiology association studies to be representative of genetic epidemiology literature. The main analysis evaluated 315 articles published in 2001–2003. For a comparative update, we evaluated 28 more recent articles published in 2006, focusing on issues that were poorly reported in 2001–2003.
During both time periods, most studies comprised relatively small study populations and examined one or more genetic variants within a single gene. Articles were inconsistent in reporting the data needed to assess selection bias and the methods used to minimize misclassification (of the genotype, outcome, and environmental exposure) or to identify population stratification. Statistical power, the use of unrelated study participants, and the use of replicate samples were reported more often in articles published during 2006 when compared with the earlier sample.
We conclude that many items needed to assess error and bias in human genome epidemiology association studies are not consistently reported. Although some improvements were seen over time, reporting guidelines and online supplemental material may help enhance the transparency of this literature.
Human genome epidemiology (HuGE) is a rapidly emerging scientific field that examines the influence of genomic variation on human health [1-4]. Although a large and rapidly increasing number of studies have investigated the associations between genetic variants and the risks of common diseases through observational epidemiology, few significant associations have been shown to be reproducible in multiple studies [5,6]. Transparent reporting of the study populations, methods of data collection, analytic methods, and study inferences may help readers better identify issues that can affect the reproducibility of genetic association studies. Here, we conduct a detailed evaluation of reporting practices for HuGE association studies.
In 2001, the Human Genome Epidemiology Network (HuGENet) established the HuGE Published Literature database (HuGE Pub Lit), a continually updated, searchable, online database of population-based genetic epidemiology articles. Relevant studies are identified weekly from NCBI PubMed by a genetic epidemiologist who records the study design, genes and diseases of interest, and interacting environmental factors. This information, along with the title, contributing authors, abstract, journal, date of publication, and the unique PubMed Identifier (PMID), is deposited in the HuGE Pub Lit database. As of May 21, 2007, this database included a total of 27,386 articles that examined genotype-phenotype associations (both qualitative and quantitative traits) published in 2,773 journals. Further details regarding the contents of this database have been described previously. To select articles for this analysis, we queried the HuGE Pub Lit database for population-based studies that used observational study designs (i.e., case-control, cohort, and cross-sectional studies) to investigate gene-disease associations, interactions between genetic variants (interlocus or gene-gene interactions), or gene-environment interactions. Family-based linkage studies were not collected systematically in HuGE Pub Lit and, therefore, were not included in this study. In addition, we restricted our analysis to full-text articles because studies presented only as concise summaries (e.g., as letters or abstracts) could have increased the heterogeneity of our sample.
Our evaluation was designed in 2004, and data collection and analyses were conducted in 2004–2007. For the main analysis, we drew a five percent simple random sample (SRS) of articles that were returned by the query described above, published from 2001 to 2003, and curated in HuGE Pub Lit before May 30, 2004 (n = 8,115), yielding a dataset of 406 articles. To provide an updated description of reporting practices and to assess improvements in reporting, we randomly selected (SRS) 40 articles that were published during 2006 from articles that were returned by our database query, added to PubMed in 2006, and curated in HuGE Pub Lit before May 18, 2007 (n = 5,353). After each article was read, 91 from 2001–2003 and 12 from 2006 were excluded from the analysis for the following reasons: not written in English (2001–2003: n = 28, 2006: n = 6), population screening studies (2001–2003: n = 23, 2006: n = 0), clinical trials or pharmacogenomic studies (2001–2003: n = 16, 2006: n = 3), not full-length articles (i.e., letters or abstracts) (2001–2003: n = 11, 2006: n = 0), failed to fulfill the inclusion criteria for HuGE Pub Lit on closer scrutiny (2001–2003: n = 6, 2006: n = 1), family studies (2001–2003: n = 3, 2006: n = 2), studies of genetic tests (2001–2003: n = 2, 2006: n = 0), or meta-analyses (2001–2003: n = 2, 2006: n = 0).
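The sampling step described above (a five percent simple random sample of eligible articles) can be sketched as follows. The PMID pool and the random seed here are purely illustrative stand-ins, not the study's actual data.

```python
import random

def five_percent_srs(pmids, seed=2004):
    """Draw a 5% simple random sample (without replacement) of PMIDs."""
    k = round(len(pmids) * 0.05)
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return rng.sample(pmids, k)

# Hypothetical pool standing in for the 8,115 eligible 2001-2003 articles
pool = list(range(10_000_000, 10_008_115))
sample = five_percent_srs(pool)
print(len(sample))  # 5% of 8,115 -> 406, matching the study's dataset size
```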
Data were abstracted from each original publication in duplicate by two independent data extractors. All discrepancies between the independent extractors were discussed and a consensus was reached.
For the 2001–2003 articles, a standardized abstraction form was developed and piloted for 10 articles; the form was revised according to the results of this pilot study to ensure that the definitions for the collected items were clear and unambiguous. Items on this final form were designed to collect information on the reporting of study design, genotyping method, population stratification, analytical methods (including the analysis of multiple genetic variants and gene-environment interactions), and study inferences. In addition, the final form accommodated different observational study designs, multiple groups of study participants, and the consideration of more than one postulated genetic risk factor as well as additional environmental factors. Articles were coded as potentially misclassifying the disease or environmental exposure status of study participants when the article did not explicitly state that these factors were directly measured for all participants in the study population. When multiple groups of study participants were reported in a study, we recorded the sample size of the largest group for cohort and cross-sectional studies and the largest case and control groups for case-control studies. Items were collected separately for case and control groups for case-control studies and for all study participants regardless of disease status for cohort and cross-sectional studies. For the purpose of this analysis, data collected for case and control groups were combined so that statistics could be calculated for all study participants. Information (e.g., mean or median age and sex distribution) was considered as given for all study participants only if it was provided for all case and control groups. 
Additionally, for case-control studies, we recorded whether cases and controls were described as drawn from the same population according to one or more of the following definitions: 1) geographic region, 2) clinical population, 3) general population (i.e., ethnic group), or whether information on the choice of suitable controls was missing or incomplete.
Fourteen items were assessed in the HuGE articles published in 2006. These included the number of study participants, genes, polymorphisms, and environmental factors assessed in gene-environment interactions. In addition, we selected ten items that were applicable to all study designs and that had been reported in fewer than 50% of the articles published in 2001–2003.
The data analysis was conducted using SAS 9.1.3 (SAS Institute, Cary, NC). Counts and percentages were calculated for the items abstracted from the articles. Comparisons of articles published in 2006 versus those published in 2001–2003 used the Mann-Whitney U test for continuous variables and Fisher's exact test for binary variables.
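As a hedged illustration of these comparisons, the sketch below uses SciPy's implementations of both tests. The contingency-table counts are reconstructed from the reported percentages (12.7 percent of 315 articles is roughly 40; 28.6 percent of 28 is 8), and the sample-size lists are invented for illustration; neither reproduces the study's raw data.

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Fisher's exact test on a binary item (reporting of statistical power),
# with counts approximated from the article's reported percentages.
table = [[8, 28 - 8],       # 2006: reported power / did not
         [40, 315 - 40]]    # 2001-2003: reported power / did not
odds_ratio, p_binary = fisher_exact(table)

# Mann-Whitney U test on a continuous item such as sample size
# (the two lists below are purely illustrative values).
sizes_2006 = [180, 250, 320, 410, 980, 1500]
sizes_2001_2003 = [120, 140, 200, 265, 300, 470]
u_stat, p_continuous = mannwhitneyu(sizes_2006, sizes_2001_2003)

print(round(p_binary, 3), round(p_continuous, 3))
```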
The 315 articles selected for analysis were published in 194 journals and reported on the findings of 227 case-control, 32 cohort, and 56 cross-sectional studies. In addition to population-based studies, three articles also described family-based analyses. Data pertaining exclusively to these family-based analyses were not included in this report. As shown in Table 1, most articles (75.9 percent) reported sample sizes of fewer than 500 study participants; 9.2 percent reported sample sizes greater than or equal to 1,000 (median = 265, interquartile range (IQR) 142–471). Statistical power was reported in 12.7 percent of articles. Multiple study populations (e.g., more than one case or control group) were reported in 25.4 percent of the articles. Most of the studies provided at least some information about the origin (87.9 percent) and the enrollment criteria (97.5 percent) of the study participants. The sex distribution was provided in three-quarters of the articles, whereas the median or mean age of the study participants and a measure of the variation around this value (e.g., IQR or standard deviation) were reported for 65.4 percent and 54.6 percent, respectively. One in six articles explicitly stated that the study participants were unrelated. We estimated that 11.8 percent of studies could have misclassified the outcome of interest.
Seven percent of studies reported that the genotyping results were validated with the use of replicate samples, and an additional 9.8 percent reported that a different method of validation was used (Table 2). A blind evaluation of the genetic test to the outcome (11.1 percent) or of the outcome to the genetic test (3.8 percent) was rarely reported. Few articles reported that any potential participants had been excluded (11.8 percent) or commented on the number of samples that could not be genotyped (15.6 percent).
As shown in Table 3, almost 60 percent of the articles indicated that all study participants were drawn from the same ethnic population, whereas 9.5 percent reported that the study population included more than one ethnic group. Most of these articles (76.7 percent) either stratified by or controlled for ethnicity; however, a few (23.3 percent) pooled ethnic groups together or did not provide clear information on how data from different ethnic groups were analyzed. The use of unlinked genetic markers to assess population stratification was extremely rare (0.6 percent). Among case-control studies (n = 227), two-thirds indicated that the cases and controls were drawn from the same geographic area; one in five indicated that cases and controls were drawn from the same clinical population, and one-quarter indicated that cases and controls were drawn from the same general population. More than one-third of these articles were unclear about the source populations or reported no information at all on this aspect.
Approximately one-half of the articles stated that they examined whether the study populations were in Hardy-Weinberg equilibrium; of these, 6.6 percent reported that the genotype frequencies deviated from those expected under Hardy-Weinberg equilibrium (Table 4). Summary data (e.g., genotype/allele frequencies presented in a tabular format) were reported on all genetic variants of interest for the outcomes in 87 percent of articles. Analysis using alleles (54.6 percent) was less common than analysis using genotypes (85.7 percent). When genotypes were analyzed, a considerable proportion of articles reported on specific genetic comparisons based on dominant or recessive models (20.7 percent); among these studies, 41.1 percent provided a justification for using the selected model. One in ten articles reported corrections for multiple comparisons; most (70.0 percent) used a Bonferroni correction as the method of adjustment. One article reported using both Tukey's and Scheffe's tests to control for multiple comparisons.
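The Hardy-Weinberg equilibrium check mentioned above is typically a one-degree-of-freedom chi-square goodness-of-fit test comparing observed genotype counts with the counts expected from the estimated allele frequencies. The sketch below uses made-up genotype counts for a biallelic variant; it is an illustration of the standard test, not the procedure of any particular study in the sample.

```python
from scipy.stats import chi2

def hwe_chi_square(n_major_hom, n_het, n_minor_hom):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium
    on biallelic genotype counts (AA, Aa, aa)."""
    n = n_major_hom + n_het + n_minor_hom
    p = (2 * n_major_hom + n_het) / (2 * n)   # estimated major-allele frequency
    q = 1 - p
    observed = [n_major_hom, n_het, n_minor_hom]
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # 3 genotype classes - 1 - 1 estimated allele frequency = 1 df
    return stat, chi2.sf(stat, df=1)

stat, p_value = hwe_chi_square(80, 90, 30)  # illustrative counts
print(round(stat, 2), round(p_value, 2))
```

A large p-value here is consistent with the genotype frequencies expected under Hardy-Weinberg equilibrium; a small one can signal genotyping error, population stratification, or a chance deviation.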
Overall, less than 40 percent of the articles discussed the public health, medical, or clinical implications of their findings. Less than one in six articles claimed to be the first to analyze a particular association. For the articles that did not make this claim, 8.6 percent clearly made reference to the first study on the issue. Six percent of articles clearly referenced a systematic review, and 1.9 percent referenced a non-systematic review.
Nearly two-thirds of the articles (n = 201) investigated multiple genetic variants, often in more than one gene (Table 5). These studies varied in their reporting of linkage disequilibrium (22.9 percent), haplotype analysis (21.4 percent), and gene-gene interactions (24.4 percent). When articles reported on interlocus or gene-gene interactions, slightly over half estimated the relative risk of the phenotypic outcome as an odds ratio; only 4.1 percent reported an absolute difference, and none presented an attributable fraction. The remainder did not present a measure of risk. One-half of the studies that reported on interlocus or gene-gene interactions reported that at least one of these interactions was statistically significant.
Gene-environment interactions were discussed in 15.2 percent (n = 48) of the articles (Table 6). Among these articles, 70.8 percent examined one environmental factor, 20.8 percent examined two, and 8.4 percent examined three or more. We estimated that the potential to misclassify the environmental factor was present in as many as three-fourths of the articles. Very few studies (6.2 percent) presented a description of the possible sources of error in the measurement of the environmental factor or reported the use of dose-dependent models. None of the articles indicated whether the assessment of the environmental factor was blinded to genotype or whether the laboratory staff performing the genetic tests were blinded to the environmental factor. Risk was quantified as a risk or odds ratio in slightly more than one-half of these studies; however, none reported absolute differences or attributable fractions. A statistically significant gene-environment interaction was reported in 29.2 percent of the papers.
The number of study participants, genes and polymorphisms analyzed, and environmental factors examined in gene-environment interactions were similar for the two time periods (Table 7). Articles in the 2006 sample tended to use sample sizes of less than 500 (75.0 percent) and report on a single gene (75.0 percent), multiple genetic variants (64.3 percent), and no gene-environment interactions (92.9 percent).
Three of the ten items that were reported in fewer than 50 percent of the articles from 2001–2003 were reported significantly more often in the 2006 articles (Table 7). Studies published in 2006 were more likely to report the available power of the study (2001–2003: 12.7 percent; 2006: 28.6 percent; p = .03), the use of unrelated study participants (2001–2003: 17.8 percent; 2006: 35.7 percent; p = .03), and the validation of genotypic results using duplicate samples (2001–2003: 7.0 percent; 2006: 21.4 percent; p = .02). Nevertheless, every item except for Hardy-Weinberg equilibrium was reported in fewer than half of all articles in the 2006 sample.
Many published claims of gene-disease association have not been replicated when studied in independent samples [5,6]. Suspected causes of this inconsistency include the assessment of statistical significance without accounting for the low prior probability of association, low statistical power, improper selection of participants, measurement error, confounding, and the selective reporting of results in the published literature [1,2,5,6,10-15]. Previous analyses have found that many published articles in genetic epidemiology do not provide sufficient information to evaluate these causes [16-18]. However, the results of these analyses were limited to a specific phenotypic outcome (e.g., sepsis) [16,17] or are outdated. Our analysis provides an updated review of reporting on these key elements in two representative samples of HuGE articles.
Representative of the literature in this field, most of the studies in our samples were small: only about 10 percent of the studies reported sample sizes that exceeded 1,000. Several meta-analyses have found significant differences between the results of small and large genetic association studies [6,19]. Growing evidence suggests that individual genetic variants impart only a modest effect on the risk of developing complex, multifactorial diseases [20-22]. Thus, enrolling many thousands or even tens of thousands of individuals may be required to achieve the necessary power to identify and validate true genetic associations [13,20,23,24].
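The sample-size point above can be illustrated with a standard two-proportion power approximation for comparing allele frequencies between cases and controls. The odds ratio, control allele frequency, and significance level below are illustrative assumptions, not values drawn from the studies reviewed here.

```python
from math import sqrt
from scipy.stats import norm

def allele_test_power(control_freq, odds_ratio, n_alleles_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test comparing
    allele frequencies between cases and controls."""
    # Convert the allelic odds ratio into the implied case allele frequency
    odds = control_freq / (1 - control_freq) * odds_ratio
    case_freq = odds / (1 + odds)
    p_bar = (case_freq + control_freq) / 2
    z_alpha = norm.ppf(1 - alpha / 2)
    num = (abs(case_freq - control_freq) * sqrt(n_alleles_per_group)
           - z_alpha * sqrt(2 * p_bar * (1 - p_bar)))
    den = sqrt(case_freq * (1 - case_freq) + control_freq * (1 - control_freq))
    return norm.cdf(num / den)

# A modest effect (OR = 1.2) at a common allele (30%) is underpowered at
# small sample sizes but well powered at large ones.
for n in (1000, 5000):
    print(n, round(allele_test_power(0.30, 1.2, n), 2))
```

Under these assumptions, power only approaches conventional levels with thousands of alleles per group, which echoes the argument that very large samples are needed to detect modest genetic effects.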
Our ability to assess the potential for selection biases was severely limited in many of the studies we examined. Although most studies provided some qualitative descriptions of the study participants (such as origin and enrollment criteria), reporting was sporadic for even simple descriptors, such as age and sex. Potentially important details, such as the number of exclusions or the number of samples that could not be genotyped, were often omitted.
Misclassification can severely limit study power and bias the results [14,23,25-27]. We determined that about a tenth of studies may have misclassified their phenotypic outcomes and that three-fourths may have misclassified their environmental factors. A small proportion of studies reported measures such as genotyping replicate samples and blinding the research staff [27,28] to help ensure that the genetic data were not misclassified. Although the practice of detecting genotyping errors through tests of Hardy-Weinberg equilibrium is still being debated [27,29-31], approximately half of studies reported HWE test results.
Population stratification may occur when study participants are selected from subpopulations with different prevalences of the phenotypes and genotypes [21,30,32,33]. Although the extent to which population stratification contributes to spurious findings remains debatable [34-36], most of the articles in our sample provided descriptions of the ethnic origin of the study participants, and almost all case-control studies indicated that the cases and controls were drawn from the same population. A few studies reported the use of unlinked markers to provide evidence that population stratification was not an issue in the analysis. Genome-wide association studies provide considerable genetic data to examine and correct for population stratification (e.g., by principal component analysis) [37-39].
As a result of the selective reporting of significant results from multiple analyses and publication bias, the extent of type I error in the published literature may be great [11,14,21,23,26,28,30,40,41]. Although most studies reported results for only a few polymorphisms and environmental factors, it is difficult to determine the number actually tested; only a minority of articles reported using corrections for multiple comparisons, even for the reported associations. Reporting justifications for specific genetic comparisons could suggest that studies were founded on an a priori hypothesis and were not the result of selective reporting. However, less than one-half of the studies that assessed dominant or recessive genetic models provided justifications for the use of these specific comparisons and not others. Among articles that described gene-gene or gene-environment interactions, a substantial proportion reported statistically significant results. However, many of these may be spurious, given the limited power of most studies to identify true associations, let alone interaction effects [20,32]. The high frequency of "positive" results in our study sample could reflect a combination of multiple testing, selective reporting, and publication bias.
The use of common reporting standards could increase the transparency of research methodology, thus helping to identify selective reporting and sources of bias and confounding while allowing for a more complete synthesis of data across consortia or in meta-analyses [1,10,42]. The results of our study were presented at a HuGENet-sponsored workshop, Strengthening the Reporting of Genetic Associations (STREGA). The workshop concluded with an agreement to develop an extension of the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) statement to address some of the specific challenges (e.g., generation of genetic data, population stratification, haplotype inference, HWE, and multiple testing) posed by reporting the results of genetic epidemiology studies. As of this writing, STREGA is being finalized for publication.
By publishing supplementary information online, journals increasingly provide authors the opportunity to present their study methods and results in greater detail than is permitted in print [45,46]. Recently, an increase in such supplements, often used to report additional methods, tables, and figures, was documented for a number of high-impact journals. However, authors and journals need to ensure that this information remains available to readers and is not lost to broken links.
In summary, our results provide evidence that many details needed to assess the validity of study findings are not consistently reported in human genome epidemiology studies, though some improvement has been seen recently. The use of standard reporting guidelines and online supplements could help readers to better judge the scientific evidence. As large-scale genotyping platforms are rapidly introduced in human genome epidemiology, the importance of transparent reporting of the background, epidemiological methods, and population characteristics cannot be overstated, given the challenge of assessing, interpreting, and discussing ever-greater amounts of data.
The authors declare that they have no competing interests.
AY assisted in data collection, performed the analysis, and drafted the manuscript. EE, FKK, and NAP assisted in data collection and the revision of the manuscript. MC identified and indexed relevant articles for the HuGE Published Literature database. MCW and BKL created the standardized abstraction form and assisted in the sampling process and data collection. WY curates the HuGE Published Literature and assisted in the sampling process. MG, JPAI, and MJK designed the study, oversaw the project, and revised the manuscript. All authors read and approved the final manuscript.
The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.