Am J Epidemiol. 2012 February 15;175(4):325–331.
Published online 2012 January 20. doi: 10.1093/aje/kwr316.
PMCID: PMC3271814

Regression Calibration When Foods (Measured With Error) Are the Variables of Interest: Markedly Non-Gaussian Data With Many Zeroes

Abstract

Regression calibration has been described as a means of correcting effects of measurement error for normally distributed dietary variables. When foods are the items of interest, true distributions of intake are often positively skewed, may contain many zeroes, and are usually not described by well-known statistical distributions. The authors considered the validity of regression calibration assumptions where data are non-Gaussian. Such data (including many zeroes) were simulated, and use of the regression calibration algorithm was evaluated. An example used data from Adventist Health Study 2 (2002–2008). In this special situation, a linear calibration model does (as usual) at least approximately correct the parameter that captures the exposure-disease association in the “disease” model. Poor fit in the calibration model does not produce biased calibrated estimates when the “disease” model is linear, and it produces little bias in a nonlinear “disease” model if the model is approximately linear. Poor fit will adversely affect statistical power, but more complex linear calibration models can help here. The authors conclude that non-Gaussian data with many zeroes do not invalidate regression calibration. Irrespective of fit, linear regression calibration in this situation at least approximately corrects bias. More complex linear calibration equations that improve fit may increase power over that of uncalibrated regressions.

Keywords: bias (epidemiology), foods, measurement error, power, regression calibration

Measurement error is a well-recognized problem in nutritional epidemiology (1, 2). Although regression calibration has been described for error correction in dietary analyses, it has rarely been applied to analyses where foods are the variables of interest. Regression calibration as developed in the epidemiologic literature (3) has usually been limited to data that are approximately normally distributed, although this does not appear to be required by the regression calibration algorithm (4).

Thus, it is unclear whether measurement error correction by regression calibration as has been previously described is a satisfactory procedure where foods are the exposures of interest, and indeed there are some special considerations in this setting. Hypotheses about relations of food/food-group consumption with disease are commonly tested in nutritional epidemiology, and there is a clear need for measurement error correction in such regressions. However, the independent variables may have distributions that are quite irregular and far from normal.

Food variables with many zero values are common in most cohorts because of individual likes and dislikes that result in many subjects’ choosing not to eat some foods. For instance, vegetarians eat no meats, and at least one-sixth of Seventh-day Adventists in the Adventist Health Study 2 (AHS-2) cohort (5) eat virtually no soy, but, on average, half of Adventists eat soy at levels typical of Asian populations (6). More than 25% of subjects indicated zero intakes (consumed less than once per month) in questionnaire responses for 21 of 49 foods/food groups evaluated in this study.

Rosner and Gore (7) evaluated the standard approach to regression calibration where data on individual foods (apparently without zero intakes) are transformed to approximate normality. However, such transformations are impossible for some foods or food groups with many zero intakes or irregular distributions.

Other recent work (8–10) dealing with this sort of data has focused on mixture models that use separate models for zero and nonzero data. The method of Kipnis et al. (10) is the most relevant and is described for survey settings in which all study participants have outcome data but also reference dietary data, such as data from repeated recalls, which are rare in existing cohort studies. Optional questionnaire dietary data would increase precision, especially for foods consumed less frequently. This motivates joint likelihood-based analysis of the calibration and main studies to estimate disease risk. The models are often quite statistically complex (8–10) and involve stronger assumptions about the distributions of true exposure than are required for the simpler regression calibration methods that we discuss here.

Regression calibration requires that a calibration equation relating the mean of true diet (T) to questionnaire estimates (Q) be constructed in order to correct bias. When there are many zero values, E(T|Q) will not have normally distributed homoscedastic residuals and may not have a simple linear form. Errors may be positively skewed at values of Q close to the origin as negative errors around E(T|Q) become crowded by the line T = 0. Does this affect estimation and hypothesis-testing in a calibrated “disease” regression?

Although the calibration relation, E(T|Q), is usually modeled as a simple linear function, there are other, more complex linear models that can potentially improve statistical power. Transformations such as logarithms may also improve power by reducing heteroscedasticity. With non-Gaussian data, estimates of variances and confidence intervals will usually require nonparametric techniques such as resampling (11, 12). In this article, we simulate a data set containing many zeroes and a positive skew in the nonzero data in order to explore these ideas. An example using real data from the AHS-2 cohort (5) is shown.

METHODS

Regression calibration requires that an estimator of T given Q be available. Then if f is the distribution function of a disease indicator D and if the nondifferential error assumption is satisfied (i.e., f(D|T) = f(D|Q, T)), when E(T|Q) is used as the exposure variable in the disease regression, the observed coefficient will be very close (except in extreme cases) to an unbiased estimate of that which would have been obtained had T been available and used instead (13). E(T|Q) is estimated by means of a calibration equation which as used below is always a linear calibration approximation.
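As a concrete illustration, the following minimal sketch (Python, with simulated placeholder data; the variable names and the toy generating model are ours, not the paper's) carries out the steps just described: fit a calibration regression in a calibration substudy, predict E(T|Q) for every cohort member, and use that prediction as the exposure in the "disease" regression. A reference measure (r_calib) stands in for T, as discussed later.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# --- toy stand-ins for cohort data (not AHS-2 values) ---
n_main, n_calib = 10_000, 1_000
t_main = rng.gamma(2.0, 2.0, n_main)                                  # unobserved true intake
q_main = np.clip(t_main + rng.normal(0.0, 2.0, n_main), 0.0, None)    # questionnaire, with error
d_main = rng.binomial(1, 1.0 / (1.0 + np.exp(3.0 + 0.2 * t_main)))    # logit P(D=1) = -3 - 0.2*T

t_calib = rng.gamma(2.0, 2.0, n_calib)
q_calib = np.clip(t_calib + rng.normal(0.0, 2.0, n_calib), 0.0, None)
r_calib = t_calib + rng.normal(0.0, 1.0, n_calib)                     # reference measure, E(R|T) = T

# 1) calibration equation: regress the reference measure on Q in the substudy
calib_fit = sm.OLS(r_calib, sm.add_constant(q_calib)).fit()

# 2) predicted E(T|Q) for every cohort member
e_t_given_q = calib_fit.predict(sm.add_constant(q_main))

# 3) calibrated "disease" regression uses E(T|Q) as the exposure
calibrated = sm.Logit(d_main, sm.add_constant(e_t_given_q)).fit(disp=0)
naive = sm.Logit(d_main, sm.add_constant(q_main)).fit(disp=0)
print(naive.params[1], calibrated.params[1])   # calibrated slope is approximately deattenuated
```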

The linear “disease” regressions

Where the true disease regression is g(D) = αT + βT T + εT, the calibrated disease regression is g(D) = αcalib + βcalib E(T|Q) + εcalib, where E(βcalib) = βT if assumptions for regression calibration are satisfied. For the moment, it is assumed that T can in fact be observed in the calibration study. The observed disease regression is g(D) = αS + βSQ + εS.

Transformations of the dietary data

Logarithmic transformations are often useful as a variance-stabilizing tool because heteroscedasticity of the calibration equation increases the variance of estimated coefficients (14). Although it is not essential, we transform both T and Q but, because of the frequent zeroes, we suggest using log(k.X + 1), X = T, Q, where k is chosen so that k.X is close to k.X + 1 for most values of X. Then adding 1 to k.X is a proportionately small increment, and a zero on the original scale remains a zero. T′ and Q′ are defined to indicate nutritional variables that are transformed in this way. In our examples, k = 1.0 was used, as most nonzero values of X were relatively large.
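A minimal sketch of this transformation (the function name is illustrative):

```python
import numpy as np

def log_k_plus_one(x, k=1.0):
    """log(k*x + 1) transformation: zero intakes stay zero, and when k*x is
    large relative to 1 the added 1 is a proportionately small increment."""
    return np.log(k * np.asarray(x, dtype=float) + 1.0)

# example: intakes of 0, 2, and 20 g/day with k = 1.0
# log_k_plus_one([0.0, 2.0, 20.0])  ->  [0.0, 1.099, 3.045]
```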

Impact of poor fit of a calibration model on the bias of a linear calibrated disease regression

We show here that poor fit of a calibration equation still results in a consistent calibrated linear disease regression estimator. Consider a function h(Q′) such that Cov(h(Q′), T′)/Var(h(Q′)) = 1 (i.e., the regression of T′ on h(Q′) has slope 1). By definition, Cov(h(Q′), D)/Var(h(Q′)) is the regression slope of D on h(Q′). This is equal to Cov(h(Q′), αT + βTT′ + εT)/Var(h(Q′)). So long as εT is uncorrelated with T′ and h(Q′), the nondifferential error assumption, this regression slope is βT{Cov(h(Q′), T′)/Var(h(Q′))} = βT × 1 = βT.
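Written out, the chain of equalities in this argument (with a linear "disease" model, g(D) = D) is:

\[
\frac{\operatorname{Cov}\bigl(h(Q'),\,D\bigr)}{\operatorname{Var}\bigl(h(Q')\bigr)}
= \frac{\operatorname{Cov}\bigl(h(Q'),\,\alpha_T + \beta_T T' + \varepsilon_T\bigr)}{\operatorname{Var}\bigl(h(Q')\bigr)}
= \beta_T \,\frac{\operatorname{Cov}\bigl(h(Q'),\,T'\bigr)}{\operatorname{Var}\bigl(h(Q')\bigr)}
= \beta_T \times 1 = \beta_T.
\]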

The calibration equation, h(Q′), can be constructed as any regression function T′ = hC(Q′) + εC = αC + β1Ch1(Q′) + β2Ch2(Q′) + … + βrChr(Q′) + εC, where C indicates the calibration equation and β’s are the corresponding true multiple regression parameters of T′ on h1(Q′) … hr(Q′). This implies (proof by contradiction is very simple) that the regression slope of T′ on hC(Q′) must be 1, and the above proof (see preceding paragraph) applies. Under usual regularity conditions, consistency of the estimate of βT follows so long as the sizes of both the main and calibration studies are allowed to increase (so that the estimates of parameters in h(Q′) and in the regression given h(Q′) approach their expected values).

Although it was not assumed that the ordinary least squares estimate of T′ given hC(Q′) is the same as E(T′|hC(Q′)), a poorly fitting linear calibration equation will still provide an estimate that consistently corrects βS in order to estimate βT when the “disease” regression is linear. One caveat is that this requires a calibration study that is a representative sample of the entire study. If the true model for E(T′|hC(Q′)) is nonlinear in Q′ and if the sample of Q′ is not representative, then the ordinary least squares slope of T′ on hC(Q′) depends upon which part of the curve is sampled.

A general regression calibration framework

From consideration of a Taylor’s theorem expansion (see Web Appendix 1, available on the Journal’s Web site (http://aje.oxfordjournals.org/)), it can be concluded that the results noted above, which apply when the “disease” model is linear, will also approximately apply to nonlinear “disease” models in many common situations encountered in epidemiologic work. These include 1) approximate deattenuation of effect estimates for measurement error correction (3, 13, 15) and 2) the situation where, among other factors, the fit of the calibration model is a determinant of power of the calibrated result (see below).

Effect of the fit of a linear calibration model on the power of a calibrated linear disease model

Define RT2 and RC2 as the multiple correlation coefficients for the correlations between D and T and between T and Q, respectively. Then, by considering noncentrality parameters of chi-squared distributions used to test hypotheses about βT and βcalib (see Web Appendix 2), it is seen that for small RT2, the loss of effective sample size associated with the calibration is approximately inversely related to RC2 but with comparatively greater losses at higher values of RT2.

The power of an uncalibrated univariate result is approximately the same as that of a calibrated analysis when the calibration equation is simple linear (nonpolynomial) in form (16). An immediate conclusion to be drawn from the above results is that if a more complex calibration equation significantly improves the fit of the calibration equation, this calibrated result will have power exceeding that of an uncalibrated analysis.

Examining the fit of the calibration model

The influence of the fit of the calibration study on power is mediated through the value of RC2, which is improved by a better fit. To maximize power, it will be worthwhile to examine calibration equation residuals and possibly add higher-order terms, or use a linear spline or other polynomial models.

One flexible calibration model that may improve fit in regions where many subjects have the same values (e.g., zero) is a partitioned approach:

\[
T' \;=\; \alpha_C + \delta_C H + \sum_{k=1}^{m} \eta_{kC} B_k + \beta_C (1 - H)\, Q' + \varepsilon_C, \tag{1}
\]

where H is an indicator variable taking the value 1 when Q < Qcont and 0 otherwise; where Qcont is the value below which the model includes m nonzero categories (a step function), represented by k = 1, …, m; and where the Bk are the corresponding indicator variables. Thus, exact mean values of T′|Q′ are predicted by αC + δC when Q = 0 and by αC + δC + ηkC when Q falls into category k. Coefficient βC describes the slope between Q′ and T′ when Q ≥ Qcont.

With the model described by equation 1, by definition the mean function fit is exact when Q < Qcont. Thus, in this respect, by potentially providing a “separate” model when Q = 0, it is similar to the 2-part model of Kipnis et al. (10). At higher values of Q, the skewness of residuals is usually small, and we have used the cumres function in R software (R Foundation for Statistical Computing, Vienna, Austria) to evaluate the fit of the model there (17, 18).
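As an illustration, a partitioned design of this kind could be constructed and fitted by ordinary least squares roughly as follows (a sketch only; the cut point, category edges, and variable names are illustrative assumptions, not values from AHS-2):

```python
import numpy as np
import statsmodels.api as sm

def partitioned_design(q_prime, q_cont, category_edges):
    """Columns of equation 1: intercept, H, the category indicators B_1..B_m,
    and (1 - H) * Q', so that beta_C is the slope only where Q' >= q_cont."""
    q_prime = np.asarray(q_prime, dtype=float)
    h = (q_prime < q_cont).astype(float)                       # H
    cats = [((q_prime >= lo) & (q_prime < hi)).astype(float)   # B_k for nonzero low intakes
            for lo, hi in category_edges]
    slope_part = (1.0 - h) * q_prime                           # (1 - H) * Q'
    return np.column_stack([np.ones_like(q_prime), h] + cats + [slope_part])

# hypothetical usage with calibration-study vectors q_calib (questionnaire) and
# r_calib (reference measure), both already on the log(X + 1) scale:
# X = partitioned_design(q_calib, q_cont=0.5, category_edges=[(0.01, 0.25), (0.25, 0.5)])
# coef = sm.OLS(r_calib, X).fit().params   # [alpha_C, delta_C, eta_1C, eta_2C, beta_C]
```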

Implications of non-Gaussian (and zero) data for the assumption that E(R′|T′) = T′

In practice, when developing the calibration equation, T is unobservable, and a surrogate reference measure (R), such as the average of repeated dietary recalls or diaries, is employed. This brings the complication of two further assumptions. The first is that the errors in R′ and Q′ about T′ are independent. This assumption is not addressed further, since nothing different results when dealing with non-Gaussian distributions. The second assumption is that R′ = T′ + e, E(e) = 0, that is, E(R′|T′) = T′. Then the random errors, e, will not bias estimates of βcalib (19).

The association between R′ and T′ has 3 regions of interest: 1) T = 0; 2) an intermediate low-intake region of T; and 3) T is larger. The T = 0 assumption can only be satisfied when both T = 0 and R = 0, since negative values are not possible. Where subjects in fact consume none of a particular food (T = 0), they are most unlikely to then claim that they have eaten that food in a recall or food diary (R). Nevertheless, such claims are possible, and sensitivity analyses may be informative. With an intermediate low-intake region of T, R is usually a discrete measure even if it is the average from a number of days. A subject’s daily intakes will fluctuate around some underlying mean. If E(R|T) = T for a particular recall day, this will also be true for a (possibly weighted) sum across days, to form (for example) an estimate of weekly intake. When T is small, then for some subjects all days of R may have zero values, and estimated weekly intakes will erroneously also appear to be zero. However, these are counterbalanced by values of R from other subjects with the same value of T, where R values are greater than expected. When T is larger, the discrete nature of R is largely masked and the assumption that E(R|T) = T needs no further clarification. In AHS-2, this was when the average from the 6 recalls obtained was sufficiently high that the probability of all recalls being zero was low.

A value of R cannot be an unbiased estimate of T on both the transformed and untransformed scales. However, the approximation may often be sufficiently close. A second-order approximation is E[log(k.R + 1)] ≈ log(k.T + 1) – k2σe2/(2(k.T + 1)2), where σe2 will probably depend on T because of heteroscedasticity. So long as k2σe2 is much smaller than (k.T + 1)2, the assumption that E(R′|T′) = T′ will be approximately satisfied. An R from many replicates will minimize σe2.
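For reference, this follows from a standard second-order (delta-method) expansion, taking R = T + e with E(e|T) = 0 and Var(e|T) = σe2:

\[
\log(kR+1) = \log(kT+1) + \frac{k e}{kT+1} - \frac{k^{2} e^{2}}{2\,(kT+1)^{2}} + O(e^{3}),
\qquad
E\bigl[\log(kR+1)\mid T\bigr] \approx \log(kT+1) - \frac{k^{2}\sigma_e^{2}}{2\,(kT+1)^{2}}.
\]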

Simulated data where T and Q contain many zeroes and where a log(X + 1) (i.e., X′) transformation is used

We simulated a distribution with many zeroes and a positive skew. Details of this simulation can be found in Web Appendix 3. Briefly, this was modeled approximately on soy protein intake in the AHS-2 population, where there were many zero intakes. We also investigated the effect of varying the proportion of zeroes between 25% and 60%. Disease events (D) were generated using a logistic function, conditional on T′, such that the odds ratio for disease comparing T = 7.84 g/day with T = 0 g/day was 0.6.

Populations of size 13,500 were simulated. Evaluation of the mean values and standard errors of calibrated regression coefficients used the results from 1,000 such populations of Q and D for each set of conditions, all conditional on a fixed set of T. A new calibration study of 4,500 subjects was randomly selected from each new population, large enough that the variance contributed by the calibration was small and did not obscure the main results.
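A rough sketch of a simulation of this kind is shown below (the mixture proportions, error model, and logistic intercept are illustrative stand-ins for the specification in Web Appendix 3; only the zero proportion, the target odds ratio, and the population size follow the text):

```python
import numpy as np

rng = np.random.default_rng(2012)
n = 13_500
p_zero = 0.25                                  # 0.40 and 0.60 were also examined

zero = rng.random(n) < p_zero                  # true non-consumers
t = np.where(zero, 0.0, rng.lognormal(mean=1.5, sigma=0.8, size=n))   # positive skew
q = np.where(zero & (rng.random(n) < 0.9), 0.0,                       # most true zeroes
             np.clip(t + rng.normal(0.0, 3.0, n), 0.0, None))         # also report zero

t_prime = np.log(t + 1.0)                      # log(k*T + 1) with k = 1
q_prime = np.log(q + 1.0)

# logistic disease model in T'; slope chosen so that OR(T = 7.84 vs T = 0) = 0.6
beta_t = np.log(0.6) / np.log(7.84 + 1.0)      # approximately -0.234
eta = np.log(0.017 / (1.0 - 0.017)) + beta_t * t_prime   # intercept near observed risk
d = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
```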

Practical estimation of standard errors and confidence intervals for calibrated “disease” regression estimates

When errors are non-Gaussian, a method that bootstraps only within the calibration study has been described by Ferrari et al. (20). This uses the total variance formula

\[
\widehat{\operatorname{Var}}\bigl(\hat\beta_{\mathrm{calib}}\bigr)
\;=\; \frac{1}{J}\sum_{j=1}^{J}\widehat{\operatorname{Var}}\bigl(\hat\beta_{\mathrm{calib},j}\bigr)
\;+\; \operatorname{Var}_{j}\bigl(\hat\beta_{\mathrm{calib},j}\bigr), \tag{2}
\]

where the mean and variance on the right side of equation 2 are calculated using the J calibrated “disease” regression results produced by the J bootstrap calibration samples. It was suggested by Ferrari et al. (20) that J ≈ 300 bootstrap samples provides a stable result. A more computation-intensive algorithm (12) has the advantage of not requiring that calibrated beta coefficients be normally distributed when producing confidence intervals.
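A sketch of this procedure is given below, assuming hypothetical helper functions fit_calibration() and fit_disease_model() that wrap the two regression steps (returning a fitted calibration equation and the calibrated beta with its estimated variance, respectively):

```python
import numpy as np

def bootstrap_calibrated_variance(q_calib, r_calib, q_main, d_main,
                                  fit_calibration, fit_disease_model,
                                  J=300, seed=0):
    """Equation 2: mean of the within-sample variances plus the variance of the
    calibrated betas across J bootstrap resamples of the calibration study."""
    rng = np.random.default_rng(seed)
    n_calib = len(q_calib)
    betas, variances = [], []
    for _ in range(J):
        idx = rng.integers(0, n_calib, n_calib)        # resample the calibration study only
        calib_fit = fit_calibration(q_calib[idx], r_calib[idx])
        e_t_given_q = calib_fit.predict(q_main)        # E(T|Q) for the whole cohort
        beta_j, var_j = fit_disease_model(e_t_given_q, d_main)
        betas.append(beta_j)
        variances.append(var_j)
    return np.mean(variances) + np.var(betas, ddof=1)
```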

The large calibration studies that were used in the simulations resulted in only a trivial contribution from the last term of equation 2. Thus, effects of uncertainty in the calibration equation could be ignored for practical purposes.

Adventist Health Study 2

Real data for further illustration come from AHS-2 (5), a cohort study of 96,000 Seventh-day Adventists living in the United States and Canada (2002–2008). A 130-item food frequency questionnaire was completed by study members. For cereals and meat analogs, there were several pages of commonly consumed commercial products and space for write-in brands.

A sample (n = 1,011) of these subjects comprises a calibration study, which represents the cohort very closely (21). These subjects completed a second food frequency questionnaire and six 24-hour telephone dietary recalls. For each subject, recalls were collected in 2 blocks of 3 days, with the blocks 5–6 months apart. Each block provided a synthetic week, created by appropriate weighting of a Saturday, a Sunday, and a weekday, the days on which data were always collected. The recalls were obtained interactively by telephone using Nutrition Data System software (22), which also produced nutrient values. Validity correlations between food frequency questionnaire estimates and dietary recall estimates are generally relatively high (21). Energy adjustment was not included in this illustration, since using the density method did not improve the validity of data for soy (or other foods/food groups that were tested). Further details about the cohort are provided elsewhere (5).

The estimate of soy intake in the food frequency questionnaire was gathered from questions that included intake of canned soybeans, fermented soy products, and tofu and a 2-page list of commercial soy products and soy milks often used by this population. In addition, there was space for write-ins, which were coded separately. In the AHS-2 data, standard errors of calibrated coefficients were estimated using the method referred to above (20) with 300 bootstrap samples of the calibration data.

RESULTS

Regression calibration in simulated data with many zero values

These simulated data showed the expected means, variances, and correlations. There were approximately 232, 240, and 270 disease events when zeroes represented 25%, 40%, and 60% of the exposure data, respectively, thus corresponding to disease risks of 0.0172, 0.0178, and 0.0200. Disease risk is higher with a greater proportion of zero intakes, consistent with the negative association between risk and exposure.

The simulated calibration equation was markedly nonlinear near the origin (Figure 1), but for most of the range of Q′, it was approximately linear. Thus, a simple linear calibration model applied to these data showed clear evidence of lack of fit, as indicated by cumres plots with highly significant P values.

Figure 1.
Locally weighted scatterplot-smoothed (LOESS) simulated nonlinear relation (mean function) between the “true” dietary variable, T′ (y-axis), and that measured with error, Q′ (x-axis), Adventist Health Study 2, 2002–2008. ...

Several calibration models were explored with each sample: 1) a simple linear model; 2) a linear model with both quadratic and cubic terms in Q′; 3) a linear spline model with a node close to zero; and 4) a partitioned model (equation 1) with m = 0 (i.e., linear except at the zero point in this case), this being the only one of these 4 models where the cumulative residual test indicated a good fit.

When 25% of T values were set to zero (Table 1), the mean of the 1,000 “true” beta coefficients (βT) estimated from simulated data was −0.237. The mean values of the calibrated logistic “disease” model coefficients (βcalib) were relatively close to the true value of −0.234 for all calibration models (Table 1). These values were easily within 2 standard deviations of the estimator for all except the naive (crude) regression, which was seriously biased toward the null. This is consistent with the expectation that lack of fit in a linear calibration model will not create more than trivial biases in typical logistic regressions.

Table 1.
Bias and Precision of Calibrated “Disease” Model Beta Coefficients With Different Calibration Models, Adventist Health Study 2, 2002–2008a

However, the calibration models with better fit produced importantly improved precision and statistical power. This was particularly evident in the z score for the calibrated β (mean bootstrapped β divided by bootstrapped standard error), which was 2.12 for the partitioned and spline models as compared with 1.69 for the simple linear calibration model. As expected, the regression based on T′, rather than a calibrated analysis, had by far the greatest precision (z = 3.12). As also expected, the z scores for the uncalibrated regression and the simple linear calibration model were essentially the same. Nonlinear calibration models with better fit had greater z scores than the uncalibrated model. Simulations with 40% and 60% of T values set to zero produced similar results and relations.

To make the point that the logarithmic transformation is not fundamental, the analysis was also conducted on the same data without transformations (Table 2). Events were generated dependent on T, such that the effect of changing T by a standard deviation of T corresponded to an odds ratio of 0.811, to match the effect size of analyses shown in Table 1. This required that the true beta coefficient was −0.0489.

Table 2.
Results From Regression Calibration Using Simulated Data But With Untransformed Variables, Adventist Health Study 2, 2002–2008a

The z scores, and hence power, of all untransformed analyses were less than those of corresponding analyses with transformation (Table 1), consistent with the expected adverse effect of greater heteroscedasticity. The relative loss of power with calibration was also greater with untransformed data. It is also noted that the relative bias ((βS – βT)/βT ) of an uncorrected analysis, and hence the magnitude of the correction, may depend greatly on the chosen transformation (−0.574 with the transformation and −0.890 without).

We also performed simulations that included the reference method, R. Then analyses were based on Q and R rather than Q and T. The results were essentially unchanged, as expected, and details are omitted for the sake of brevity.

AHS-2 data: soy intake

Table 3 shows the results of calibration using real data from AHS-2, assuming for this purpose that the recall data are true intakes. The distribution of soy protein consumption (16% of subjects consumed <1 g/day, treated as zero) had a strong positive skew and was related to all-cause mortality in 2 separately calibrated proportional hazards regressions. Covariates of both the disease model and the calibration equations were age, race, and gender. The first calibration equation was initially simple linear in form, but after evaluation of the fit of the model, a squared term in log(soy protein + 1) from the questionnaire was added; this slightly improved RC2 from 0.336 to 0.339 and was the model finally used in these analyses. For soy, a partitioned calibration model had a slightly lower RC2 and was not used here.

Table 3.
Calibration Equation and Corrected and Uncorrected Proportional Hazards Regressions of All-Cause Mortality on Intake of Soy Protein, Adventist Health Study 2, 2002–2008a

The difference between corrected and uncorrected beta coefficients in the “disease” model is substantial, corresponding to odds ratios of 0.60 and 0.75, respectively. Although the standard error of the beta coefficient increases with calibration, it does so in approximate proportion to the beta coefficient values. This is because the calibration study is large, and also because the errors in dietary variables are not strongly related to covariates measured without error. The t value for this soy model was a little higher after calibration, possibly reflecting the small improvement of fit in the calibration model due to inclusion of the squared term.

DISCUSSION

These results show that non-Gaussian data that include many zeroes need not greatly complicate regression calibration. If the relation between T′ and Q′ is not well-described by a straight line, a simple linear calibration will still provide appropriate estimates when the “disease” regression is linear, or under the common condition of a relatively linear region of a logistic relation. However, a poorly fitting calibration equation will cause unnecessary loss of power. For instance, if RC2 = 0.25 and RT2 = 0.3, there is marked loss of power due to the calibration (measured in relative sample sizes for equal power). A ratio of sample sizes equal to 5.29 is necessary to obtain the same power as if T were observed. If RC2 = 0.40, this ratio improves to 3.14.
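These ratios are consistent with a relative-sample-size expression of the following form (stated here as inferred from the quoted values; the derivation is in Web Appendix 2):

\[
\frac{n_{\text{calibrated}}}{n_{\text{true exposure}}}
\approx \frac{1 - R_T^{2} R_C^{2}}{R_C^{2}\,\bigl(1 - R_T^{2}\bigr)},
\qquad
\frac{1 - 0.3 \times 0.25}{0.25 \times (1 - 0.3)} \approx 5.29,
\qquad
\frac{1 - 0.3 \times 0.40}{0.40 \times (1 - 0.3)} \approx 3.14.
\]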

Thus, if there is substantial nonlinearity, an important improvement in power may be produced by the relatively small additional effort necessary to find a calibration model with better fit. In the AHS-2 data, we hardly ever found evidence of strong departures from linearity in the calibration equation, and the soy example (Table 3) is typical. However, in other data sets or with other variables this may not be the case.

A 2-part method described by Kipnis et al. (10) was developed for dietary surveys (with associated outcome data). It gives a combined analysis of calibration and main study data but requires that repeated reference measures (i.e., 24-hour recalls) be available for all subjects as the primary dietary measure. Logistic regression is used to model the probability that true exposure is greater than 0, and a Box-Cox transformation and linear regression are then used to model exposures greater than zero. It is possible that investigators in future cohort studies may be able to collect repeated recall data from all subjects, but such data are not available in most existing cohort studies.

Was the calibration worthwhile in the AHS-2 results of Table 3? As expected, power is not greatly changed compared with an uncorrected regression, but the magnitude of effect was markedly changed. This is an important benefit, giving greater interest to the effects of the dietary variable on disease risk. In multivariate regression calibration, we have previously shown that effect size, the sizes of statistical tests (especially in large studies), and power may change markedly depending on the correlations between errors (16), providing an even greater motivation for corrected analyses.

Although the variance in the calibration equation coefficients was effectively removed in analyses of the simulated data by using a large calibration study, the main results presented still hold when the calibration coefficients are less precisely identified. Var(βcalib) can be divided into the 2 parts identified in equation 2. The first part results when the calibration coefficients are known precisely, and the second results from random variation in these estimates.

The log transformation is warranted when values of Q exhibit significant heteroscedasticity or skewness. Thoresen (23) has demonstrated in logistic regression calibration that although heteroscedasticity is not a severe problem, markedly skewed variables (rather than residuals) can produce biased results. However, as noted, transformations can disturb the required approximation that E(R′|T′) = T′, even if it does hold exactly for R and T.

We also demonstrate that transformations can markedly affect the relative bias of uncorrected analyses.

In summary, where the distribution of intake of a food is markedly non-Gaussian, perhaps containing many zeroes, linear regression calibration can safely be applied, with the usual assumptions. If the calibration relation is nonlinear, the calibration population must properly represent the cohort. In this situation, it could also be worth considerable effort to optimize the fit of the calibration equation, since this will improve the power of a calibrated analysis. As usual, when compared with uncorrected analyses, appropriate regression calibration will reduce bias (usually markedly), and where there is a nonlinear calibration relation, it will potentially also improve power.

Supplementary Material

Web Appendices:

Acknowledgments

Author affiliations: Department of Epidemiology and Biostatistics, School of Public Health, Loma Linda University, Loma Linda, California (Gary E. Fraser); and Department of Preventive Medicine, Keck School of Medicine, University of Southern California, Los Angeles, California (Daniel O. Stram).

This study was partly supported by National Institutes of Health grant R01 CA094594.

Conflict of interest: none declared.

Glossary

Abbreviation

AHS-2
Adventist Health Study 2

References

1. Thomas D, Stram D, Dwyer J. Exposure measurement error: influence on exposure-disease relationships and methods of correction. Annu Rev Public Health. 1993;14:69–93.
2. Fraser GE. A search for truth in dietary epidemiology. Am J Clin Nutr. 2003;78(3 suppl):521S–525S.
3. Rosner B, Spiegelman D, Willett WC. Correction of logistic regression relative risk estimates and confidence intervals for measurement error: the case of multiple covariates measured with error. Am J Epidemiol. 1990;132(4):734–745.
4. Carroll RJ, Ruppert D, Stefanski LA, et al. Measurement Error in Nonlinear Models: A Modern Perspective. 2nd ed. Ithaca, NY: CRC Press; 2006. p. 38.
5. Butler TL, Fraser GE, Beeson WL, et al. Cohort profile: The Adventist Health Study-2 (AHS-2). Int J Epidemiol. 2008;37(2):260–265.
6. Jaceldo-Siegl K, Fraser GE, Chan J, et al. Validation of soy protein estimates from a food-frequency questionnaire with repeated 24-h recalls and isoflavonoid excretion in overnight urine in a Western population with a wide range of soy intakes. Am J Clin Nutr. 2008;87(5):1422–1427.
7. Rosner B, Gore R. Measurement error correction in nutritional epidemiology based on individual foods, with application to the relation of diet to breast cancer. Am J Epidemiol. 2001;154(9):827–835.
8. Tooze JA, Midthune D, Dodd KW, et al. A new statistical method for estimating the usual intake of episodically consumed foods with application to their distribution. J Am Diet Assoc. 2006;106(10):1575–1587.
9. Xie H, McHugo G, Sengupta A, et al. A method for analyzing longitudinal outcomes with many zeros. Ment Health Serv Res. 2004;6(4):239–246.
10. Kipnis V, Midthune D, Buckman DW, et al. Modeling data with excess zeros and measurement error: application to evaluating relationships between episodically consumed foods and health outcomes. Biometrics. 2009;65(4):1003–1010.
11. Efron B, Tibshirani R. An Introduction to the Bootstrap. New York, NY: Chapman & Hall, Inc; 1993.
12. Haukka JK. Correction for covariate measurement error in generalized linear models—a bootstrap approach. Biometrics. 1995;51(3):1127–1132.
13. Carroll RJ, Ruppert D, Stefanski LA, et al. Measurement Error in Nonlinear Models: A Modern Perspective. 2nd ed. Ithaca, NY: CRC Press; 2006. pp. 90–95.
14. Draper NR, Smith H. Applied Regression Analysis. New York, NY: John Wiley & Sons, Inc; 1966.
15. Spiegelman D, McDermott A, Rosner B. Regression calibration method for correcting measurement-error bias in nutritional epidemiology. Am J Clin Nutr. 1997;65(4 suppl):1179S–1186S.
16. Fraser GE, Stram DO. Regression calibration in studies with correlated variables measured with error. Am J Epidemiol. 2001;154(9):836–844.
17. Su JQ, Wei LJ. A lack-of-fit test for the mean function in a generalized linear model. J Am Stat Assoc. 1991;86(414):420–426.
18. Lin DY, Wei LJ, Ying Z. Model-checking techniques based on cumulative residuals. Biometrics. 2002;58(1):1–12.
19. Carroll RJ, Ruppert D, Stefanski LA, et al. Measurement Error in Nonlinear Models: A Modern Perspective. 2nd ed. Ithaca, NY: CRC Press; 2006. p. 70.
20. Ferrari P, Day NE, Boshuizen HC, et al. The evaluation of the diet/disease relation in the EPIC study: considerations for the calibration and the disease models. Int J Epidemiol. 2008;37(2):368–378.
21. Jaceldo-Siegl K, Knutsen SF, Sabaté J, et al. Validation of nutrient intake using an FFQ and repeated 24 h recalls in black and white subjects of the Adventist Health Study-2 (AHS-2). Public Health Nutr. 2010;13(6):812–819.
22. Schakel SF, Sievert YA, Buzzard IM. Sources of data for developing and maintaining a nutrient database. J Am Diet Assoc. 1988;88(10):1268–1271.
23. Thoresen M. Correction for measurement error in multiple logistic regression: a simulation study. J Stat Comput Simul. 2006;76(6):475–487.
