We read with interest the paper by Kuo and Feingold (henceforth K&F), "What's the best statistic for a simple test of genetic association in a case-control study?" [Kuo and Feingold, 2010], in which the authors compared the power of three logistic regression models to detect genetic effects, concluded that "the most commonly used approach to handle covariates—modeling covariate main effects but not interaction—is almost never a good idea," and recommended modeling only the genetic factors, without covariate adjustment, in genome-scanning. We feel that the issue of covariate adjustment in logistic regression models was oversimplified and that the conclusion was unjustified. In this letter, we attempt to explain the observations of K&F using established results from the statistical literature, to confirm the theoretical results in the genetic association setting by mimicking and extending the simulation study of K&F, and to discuss the implications for covariate adjustment in genome-scanning.

The impact of covariate adjustment on the precision of regression coefficient estimators in classic linear models depends on the correlations among the variables; in logistic regression models, by contrast, adjusting for covariates always leads to a loss of precision. Denote by Y a quantitative trait, by G a genotype, and by E a covariate, e.g. an environmental factor. Suppose we fit to data the two linear regression models E(Y|G) = α + β_{1}G and E(Y|G,E) = α′ + β′_{1}G + β′_{2}E. The asymptotic relative precision (ARP) of the estimator β̂′_{1} to β̂_{1} can be derived as

ARP(β̂′_{1} to β̂_{1}) = (1 − ρ²_{GE}) / (1 − ρ²_{YE·G}),  (1)

where ρ_{GE} is the correlation coefficient between G and E, and ρ_{YE·G} is the partial correlation coefficient between Y and E given G. Thus, whether ARP(β̂′_{1} to β̂_{1}) is greater or smaller than one depends on both correlations: that between G and E, and that between Y and E [Robinson and Jewell, 1991]. On the other hand, adjusting for covariates in logistic regression models increases the variances of the coefficient estimators regardless of the correlations between variables. Denote by D ∈ {0,1} affection status. Suppose we fit to data the two logistic regression models (a): logit P(D = 1|G) = α + β_{1}G and (b): logit P(D = 1|G,E) = α′ + β′_{1}G + β′_{2}E. It can be shown that ARP(β̂′_{1} to β̂_{1}) ≤ 1, i.e. formula (1), established in classic linear regression models, breaks down in logistic regression models; moreover, the larger the magnitude of β′_{2}, the poorer the precision of the estimator β̂′_{1} [Wickramaratne and Holford, 1990; Robinson and Jewell, 1991]. The increase in the variance of β̂′_{1} over that of β̂_{1} can lead to a power loss in testing the null hypothesis of no genetic effect when β̂′_{1} is asymptotically equal to β̂_{1}, which explains the results of model 1 (genetic effect only) in K&F: model (a) is more powerful than model (b).

Now, when model (b) underlies the disease susceptibility and G and E are independent, omitting the environmental factor leads to a downward bias in estimating the effect size of the genetic factor, i.e., |β_{1}| < |β′_{1}| [Gail et al., 1984; Neuhaus and Jewell, 1993]. Noting that Var(β̂_{1}) < Var(β̂′_{1}), it is of interest to investigate which model, (a) or (b), is more powerful in testing the null hypothesis of no genetic effect. It has been shown that the asymptotic relative efficiency (ARE) of the two hypothesis tests satisfies ARE(β̂′_{1} to β̂_{1} at β_{1} = 0) ≥ 1 [Robinson and Jewell, 1991; Neuhaus, 1998]. Therefore, in testing the null hypothesis of no genetic effect, omitting disease risk/preventive factors that are independent of the genetic factors always leads to a loss of efficiency, which explains the results of model 2 (genetic and environmental marginal effects only) in K&F: model (b) is more powerful than model (a). Note that K&F did not clearly state which model was more powerful, though their figures showed it was model (b), which we confirmed by mimicking their simulation study as presented below. Furthermore, the magnitude of the loss increases with the strength of the association between the omitted covariate and the disease [Neuhaus, 1998].

We first simulated case-control data with a single-locus disease model and a covariate using the method of K&F. For simplicity, we only simulated an additive genetic model, with the penetrance functions summarized in Table I, in which models 1, 2a, 3a, and 3b are identical to models 1, 2, 3, and 4, respectively, in Supplemental Table 1 of K&F. Model 1 contains a genetic effect only. Models 2a–2c have genetic and environmental marginal effects only; from model 2a to 2c, the effect size of the environmental factor increases. Models 3a and 3b include both marginal and interaction effects of the genetic and environmental factors; from model 3a to 3b, the interaction effect size decreases.
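The pattern predicted for models 2a–2c (G and E independent, no interaction) can be illustrated with a scaled-down Monte Carlo sketch. The code below is our own generic stand-in, not K&F's simulator: `fit_logistic` and `sample_case_control` are hypothetical helper names, and the intercept and effect sizes are illustrative assumptions rather than the penetrances of Table I (the replicate count is also reduced for speed).

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, n_iter=25):
    """Newton-Raphson fit of a logistic regression; returns estimates and Wald SEs."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])      # Fisher information
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta, np.sqrt(np.diag(np.linalg.inv(H)))

def sample_case_control(n_cases, n_controls, alpha, b1, b2, maf=0.3, p_exp=0.5):
    """Rejection-sample a study under model (b): logit P(D=1|G,E) = alpha + b1*G + b2*E,
    with additive genotype G and binary exposure E independent in the population."""
    G = np.empty(0); E = np.empty(0); D = np.empty(0, dtype=int)
    while (D == 1).sum() < n_cases or (D == 0).sum() < n_controls:
        g = rng.binomial(2, maf, 4000)
        e = rng.binomial(1, p_exp, 4000)
        d = rng.binomial(1, 1 / (1 + np.exp(-(alpha + b1 * g + b2 * e))))
        G = np.concatenate([G, g]); E = np.concatenate([E, e]); D = np.concatenate([D, d])
    keep = np.concatenate([np.where(D == 1)[0][:n_cases], np.where(D == 0)[0][:n_controls]])
    return G[keep], E[keep], D[keep]

b_a, b_b, se_a, se_b, rej_a, rej_b = [], [], [], [], [], []
for _ in range(300):                                # 300 replicates (vs. 10,000 in the letter)
    G, E, D = sample_case_control(300, 300, alpha=-2.3, b1=np.log(1.5), b2=np.log(3.0))
    one = np.ones(len(D))
    est, se = fit_logistic(np.column_stack([one, G]), D)      # analysis model (a)
    b_a.append(est[1]); se_a.append(se[1]); rej_a.append(abs(est[1] / se[1]) > 1.96)
    est, se = fit_logistic(np.column_stack([one, G, E]), D)   # analysis model (b)
    b_b.append(est[1]); se_b.append(se[1]); rej_b.append(abs(est[1] / se[1]) > 1.96)

print(f"mean estimate  (a) {np.mean(b_a):.3f}  (b) {np.mean(b_b):.3f}")
print(f"mean SE        (a) {np.mean(se_a):.3f}  (b) {np.mean(se_b):.3f}")
print(f"power          (a) {np.mean(rej_a):.2f}  (b) {np.mean(rej_b):.2f}")
```

On a typical run the unadjusted estimate from model (a) should be attenuated toward zero while carrying the smaller standard error, and the Wald test from model (b) should reject more often, consistent with ARE(β̂′_{1} to β̂_{1} at β_{1} = 0) ≥ 1.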
In all the models, we set the minor allele frequency to 0.3 and the probability of exposure to 0.5, and took the genetic and environmental factors to be independent in the general population. Under each model, we simulated 10,000 replicates with 300 cases and 300 controls in each data set. We analyzed each data set with both models (a) and (b), and compared the coefficient estimates, their standard errors, and the power to detect the genetic effect (Table II). As predicted by ARP(β̂′_{1} to β̂_{1}) ≤ 1 in logistic regression models, the standard error of β̂_{1} was smaller than that of β̂′_{1} in all six genetic models. For model 1, although there was no statistical difference between the estimates β̂′_{1} and β̂_{1}, the analysis model (b) was slightly less powerful than model (a) owing to the decreased precision of β̂′_{1} compared with β̂_{1}. For models 2a–2c, both the magnitude and the standard error of β̂_{1} were smaller than those of β̂′_{1}, and the power of model (b) was greater than that of model (a) in testing for the genetic effect; all these differences became more pronounced as the environmental effects increased from model 2a to 2c. There has been no theoretical study investigating the estimation bias and efficiency with omitted covariates and interaction terms when the underlying true model includes both marginal and interaction effects, and we speculate that the results depend on the relative magnitude and direction of both effects. In models 3a and 3b, the marginal and interaction effects were in the same direction, and the magnitude of β̂_{1} was greater than that of β̂′_{1}. An intuitive explanation is that a larger proportion of the interaction effect is absorbed into the estimated marginal effect β̂_{1} than into β̂′_{1}, resulting in β̂_{1} > β̂′_{1}. The greater coefficient estimate, together with the smaller standard error, led to the greater power of model (a) than model (b) in testing the genetic effect.

We then carried out simulations under the logistic regression model to examine the magnitude of the power loss due to omitting predictive covariates that are independent of the genetic factor. Assuming the true disease model was (b), with α′ chosen such that the disease prevalence equaled 0.15 and β′_{1} chosen such that the odds ratio (OR) per copy of the disease-predisposing allele equaled 1.2, we simulated a diallelic marker with minor allele frequency 0.3 and a binary covariate, independent of the genetic variant, with OR ∈ {1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0} and exposure frequency ∈ {0.2, 0.5}. The parameter settings mimicked those of a typical complex trait as suggested by results of genome-wide association studies. Under each scenario, 10,000 replicates with 1,000 cases and 1,000 controls in each data set were simulated. We analyzed each data set with both models (a) and (b), and compared their power to detect the genetic effect (Table III). Omitting the covariate always led to a power loss, and the magnitude of the loss increased with the effect size of the covariate. When the OR was less than 3.0, the power loss was trivial; however, the loss was substantial when the OR increased to 5.0. For example, if there were a disparity in disease prevalence between males and females, omitting the covariate sex would result in considerable power loss.

The conventional wisdom in classic linear regression is that adjusting for covariates associated with the response variable can improve the precision of estimates by reducing the residual variance [Fisher, 1932]; in logistic regression models, however, covariate adjustment always leads to a loss of precision. Nonetheless, this loss of precision does not always result in a loss of power. When the genetic and risk/preventive environmental factors are independent and have no interaction effect on the disease, it is always more efficient to adjust for the predictive covariates. Note that this conclusion holds not only in logistic regression models but also in the broader class of generalized linear models [Neuhaus, 1998].
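The contrast between the two regression families can also be checked numerically. The sketch below is our own illustration (the effect sizes, sample size, and continuous covariate are assumptions, not parameters from the letter): with G independent of E, formula (1) predicts ARP > 1 for the linear model (since ρ_{GE} = 0), whereas the logistic ARP is bounded above by one.

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 600, 600
b1, b2 = 0.3, 1.2            # illustrative genetic and covariate effects (assumed)

def logit_fit(X, y, n_iter=25):
    """Minimal Newton-Raphson logistic regression; returns coefficient estimates."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

lin_unadj, lin_adj, log_unadj, log_adj = [], [], [], []
for _ in range(reps):
    G = rng.binomial(2, 0.3, n).astype(float)   # additive genotype, MAF 0.3
    E = rng.normal(size=n)                      # covariate independent of G
    X1 = np.column_stack([np.ones(n), G])       # unadjusted design, model (a)
    X2 = np.column_stack([np.ones(n), G, E])    # adjusted design, model (b)
    # quantitative trait: classic linear model
    Y = 0.1 + b1 * G + b2 * E + rng.normal(size=n)
    lin_unadj.append(np.linalg.lstsq(X1, Y, rcond=None)[0][1])
    lin_adj.append(np.linalg.lstsq(X2, Y, rcond=None)[0][1])
    # affection status: logistic model with the same linear predictor
    D = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + b1 * G + b2 * E))))
    log_unadj.append(logit_fit(X1, D)[1])
    log_adj.append(logit_fit(X2, D)[1])

# empirical ARP of the adjusted to the unadjusted estimator: Var(unadj) / Var(adj)
arp_linear = np.var(lin_unadj) / np.var(lin_adj)
arp_logistic = np.var(log_unadj) / np.var(log_adj)
print(f"linear ARP {arp_linear:.2f}, logistic ARP {arp_logistic:.2f}")
```

The linear ratio should come out above one (adjustment removes the residual variance due to E), while the logistic ratio should come out below one, in line with ARP(β̂′_{1} to β̂_{1}) ≤ 1.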
A person's genetic background is determined at birth; thus, in most cases it is not unreasonable to assume that it is independent of his or her subsequent environmental exposures, an assumption that is key to some study designs for investigating gene-environment interaction [Piegorsch et al., 1994; Chatterjee and Carroll, 2005]. If we assume that only a small proportion of genetic variants interact with the known environmental factors on disease susceptibility, then, contrary to the conclusion of K&F, we recommend adjusting for predictive covariates at the genome-scanning stage.