In this review, we briefly summarize some of the key developments in nutritional epidemiology and cancer over the past two decades with a focus on the strengths and limitations of study designs and dietary assessment methods. We present the evidence on dietary fat, meat, fiber, antioxidant nutrients, and calcium in relation to carcinogenesis from large cohort studies and randomized clinical trials (RCTs) and refer to the conclusions of the 2007 World Cancer Research Fund/American Institute for Cancer Research summary report. One prominent theme that emerged is the lack of concordance of results from RCTs and observational studies. There are multiple potential reasons for these discrepancies, including differences in study population, dose and timing of the exposure, adherence to an intervention, length of follow-up, and the primary endpoint. Therefore, null findings of RCTs do not necessarily indicate a lack of effect for the tested dietary factors on cancer risk, as some of these nutrients may have chemopreventive effects if given at the right time and in the right dose. It is likely that potential benefits from the diet are due to a combination of food constituents rather than single components acting in isolation. Future efforts need to recognize the integrative nature of dietary exposures and attempt to study nutrients in the larger context of the foods and diets in which they are consumed.
Lifestyle factors, including diet, have long been recognized as potentially important modifiers of cancer risk. Initial hypotheses regarding diet were prompted by large global variations in cancer incidence and mortality rates, along with rapid changes in cancer rates among migrant populations (1–3). In 1981, Doll and Peto estimated that 35% of cancer deaths might be avoidable through changes in diet, although this estimate was imprecise (10–70%) (4). Interest in nutrition and cancer has grown considerably since that time, as evidenced by a rapid proliferation of studies examining nutritional exposures in relation to cancer risk. In this review, we will briefly summarize some of the key developments in nutritional epidemiology and cancer over the past two decades. Nutritional epidemiology can be broadly defined to include energy balance, physical activity, and alcohol, all of which are important when discussing cancer, but we will focus here on selected dietary exposures.
The first studies of diet and cancer evaluated correlations between country-specific cancer rates and food consumption patterns. These ecologic studies utilized aggregate data to identify possible relationships and generate hypotheses. However, ecologic studies are highly susceptible to confounding, where the observed relationship between two factors is actually caused by a third (or more) factor(s). Ecologic studies are also subject to the “ecologic fallacy”, in which associations observed at the aggregate level do not reflect associations at the individual level. For these reasons, results of ecologic studies must be interpreted with caution and should not be used to assert causal associations.
Case-control studies, in which cancer cases are identified over a specific time period along with cancer-free controls that represent the source population from which the cases arose, improve on ecologic studies in that they utilize individual-level data and can account for measured confounders. These studies are also efficient in terms of time and cost, but they are highly susceptible to both selection and recall biases. Recall bias, in which cases may recall their past diet differently in the context of their cancer diagnosis, is problematic because dietary assessment occurs after diagnosis. An additional concern for case-control studies of diet is that cases may have altered their diet prior to diagnosis due to early symptoms of the disease.
A key development in nutritional epidemiology over the past two decades has been the increase in results from large prospective cohort studies. In these studies, a cohort of healthy individuals is assembled and exposures are assessed at baseline, then the cohort is followed over time and cancer cases are identified as they develop. Some studies employ repeated exposure assessments over time, while others rely on a single baseline measure to provide a presumably stable estimate of diet during the follow-up period. The prospective nature of cohort studies precludes the problems of selection and recall bias inherent in case-control studies, but another form of selection bias can occur if losses to follow-up are differential with respect to case status. A disadvantage of cohort studies is that they must be very large and have long follow-up time to accrue sufficient numbers of cancer cases. Although cohort studies collect information on a multitude of exposures and potential confounders, residual confounding remains a concern in any observational study because known confounders may be imperfectly measured and researchers are unable to adjust for unknown or unmeasured confounders. In addition, many dietary factors are consumed together, which can make it difficult or impossible to isolate the effect of one specific factor.
Measurement error is another considerable problem for both case-control and cohort studies of diet. There are three main methods of assessing diet in epidemiologic studies: the food diary, in which participants record what they consume over a prescribed period of time; the dietary recall, in which participants report what they consumed during a preceding period, usually 24 hours; and the food frequency questionnaire (FFQ). The FFQ, a listing of foods for which participants report the usual frequency and amount of consumption, allows for relatively quick and inexpensive data collection on a large scale and is by far the most widely-used instrument in cohort studies. Dietary assessment relies on the ability of individuals to recall a complex collection of exposures and is known to contain measurement error. For prospective cohort studies, this error is nondifferential with respect to disease, meaning the error is similar for those who develop cancer and those who do not. Such nondifferential measurement error usually biases risk estimates towards the null, although the effect can be more complex in the context of multiple exposures (5).
FFQs employed in cohort studies are typically validated against multiple dietary recalls, and correlations generally range between 0.4 and 0.7 (6). However, dietary recalls are also imperfect measures and correlations may be artificially high due to correlated errors (7). A study comparing FFQ and 24-hour recall results to objective biomarkers of energy and protein intake found that attenuation due to measurement error in the FFQ was sufficient to obscure true relative risks (RR) of moderate magnitude (RR = 2.0), although the situation was improved somewhat by adjustment for energy intake (7). Unfortunately, objective reference biomarkers are not available for most dietary exposures, so investigators must rely on imperfect assessment tools. The use of FFQs in studies of diet and cancer has been the subject of considerable debate (8–10), and the possible impact of measurement error, particularly attenuation, needs to be considered when interpreting results of observational studies.
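The attenuation described above can be illustrated with a small calculation. Under the classical additive measurement-error model (a simplifying assumption for illustration, not a result from the studies cited), the observed log relative risk is approximately the true log RR multiplied by the square of the instrument's validity correlation. The function name is ours; the correlations span the FFQ validity range of 0.4 to 0.7 reported above:

```python
import math

def attenuated_rr(true_rr, validity_corr):
    """Approximate observed RR under classical nondifferential
    measurement error: observed log RR ~= lambda * true log RR,
    where lambda = validity_corr ** 2 (regression dilution)."""
    lam = validity_corr ** 2
    return math.exp(lam * math.log(true_rr))

# A true RR of 2.0 seen through instruments spanning the FFQ
# validity range cited above (0.4-0.7):
for rho in (0.4, 0.5, 0.7):
    print(rho, round(attenuated_rr(2.0, rho), 2))
# -> 0.4 1.12
# -> 0.5 1.19
# -> 0.7 1.40
```

Under these simplified assumptions, a true doubling of risk would appear as an observed RR of only about 1.1 to 1.4, consistent with the concern that FFQ measurement error can obscure moderate associations.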
In contrast to observational studies, in experimental studies the investigator controls the assignment of subjects to the treatment or control group. Randomized clinical trials (RCTs) are often considered the “gold standard” when testing hypotheses because selection bias is eliminated and proper randomization theoretically results in treatment and control groups that are similar in all ways except the intervention. Confounding is thus minimized because the groups are similar with respect to both known and unknown confounders (although confounding remains possible in small studies due to chance differences between the groups or in subgroup analyses), and observed associations can be directly attributed to the intervention. However, RCTs are very expensive, and unlike prospective studies that collect a wide range of exposures and outcomes, RCTs typically can only address a small number of primary hypotheses.
A common approach in nutritional epidemiology has been to use results of observational studies to identify nutrients or other dietary factors that modify cancer risk, and then test these factors in a clinical trial. Over the past two decades, there have been numerous instances where the results of nutrition clinical trials did not match those of observational studies, including well-designed large prospective cohort studies. In these cases there are a number of possible reasons for discrepancies and it is important to interpret the evidence in the context of the strengths and limitations of the different study designs. As mentioned previously, associations observed in cohort studies could be due to residual confounding, in which case RCT results likely provide a more accurate picture of the true association. On the other hand, failure to find an association in a clinical trial should not automatically invalidate associations seen in observational studies. An RCT can only test a specific intervention, in a specific population, over a relatively short period of time. Therefore, null results of a trial do not preclude an effect of the intervention at a different dose, in a different population, or over a longer period of follow-up. Cancer is a disease that generally occurs late in life, so studies typically enroll participants who are middle-aged or older to ensure accrual of an adequate number of cases. Intervening at this late period of life may not be effective, as cumulative exposures over time are likely to be important and some cancer risk factors are known to have the greatest influence early in life (11). Observational studies also typically examine older adults, but dietary assessments may reflect cumulative exposures over a longer period of adult life.
A major problem for RCTs that test dietary modification is compliance with the intervention. It is notoriously difficult to achieve lasting dietary change, and a number of RCTs have reported low compliance with dietary interventions (12–14). Low compliance renders the groups more similar than intended and the power of the study to find an effect is reduced. Insufficient differences between the groups can also result from contamination of the control group, which occurs when the control subjects adopt some or all aspects of the intervention. While subjects can be blinded to treatment status in chemoprevention trials with supplements administered as tablets or capsules, blinding is impossible in a study of dietary change. Control subjects will often know they are not receiving the intervention and may make changes on their own in attempts to improve their health. The characteristics of the study population at baseline are also important; participants in intervention trials tend to be health conscious and are likely to have relatively healthy diets. Providing additional amounts of a nutrient may have no effect in a population that already has high intake of that nutrient, whereas supplementing a deficient group might reduce cancer risk. Finally, RCTs by necessity take a reductionist approach, often testing the effect of a single nutrient or dietary factor in isolation. This is likely to be overly simplistic, as effects of diet on cancer risk may be due to combinations of foods and complex interactions between food components (15–17). While observational study results can be plagued by confounding, they also may reveal these potential combined effects and provide clues regarding healthy dietary patterns.
Observational and experimental studies both contribute to our knowledge of nutrition and cancer, and no single study can provide a definitive answer. Rather, it is necessary to carefully review the evidence while considering the strengths and limitations of the individual studies. In this review, we will focus on recent results from large prospective cohort studies and RCTs, and we will refer to the conclusions of the 2007 World Cancer Research Fund/American Institute for Cancer Research summary report “Food, Nutrition, and Physical Activity and the Prevention of Cancer: A Global Perspective” (hereafter referred to as the WCRF/AICR Report) (2), which comprehensively reviewed the epidemiologic evidence for dietary exposures and cancer by site. We will discuss the current evidence for dietary fat, meat, fiber, antioxidant nutrients, and calcium, as these exposures have figured prominently in the literature over the past twenty years.
The hypothesized link between dietary fat and risk of cancer is controversial and has been extensively studied, particularly in relation to breast cancer (18–20). Initial evidence for a dietary fat-breast cancer association came from animal studies (21), ecologic studies (1, 22), and migrant studies (23, 24). Early case-control studies largely supported a positive association (25), although two large studies found no association (26, 27); however, these results may have been influenced by recall and selection bias. Still, in the early 1990s fat intake was believed to increase breast cancer risk and national guidelines suggested reducing intake of all types of fat (28, 29). In the past two decades, a number of large prospective cohort studies examined the fat-breast cancer association with mainly null findings (20). The Nurses’ Health Study found no association between percent energy intake from fat and breast cancer with 20 years of follow-up (30), and a pooled analysis of 7 cohort studies was also null (31). A more recent pooled analysis, with longer follow-up and one additional cohort, reported a similar result for fat subtypes, with a weak positive association for substitution of saturated fat for carbohydrate consumption (32).
Although enthusiasm for the dietary fat hypothesis dimmed with these findings, controversy remained. Some argued the range of fat intake in the study populations was too narrow to detect an effect (18), but others pointed to pooled analyses as sufficiently large to capture extremes of fat intake (19). In addition, the use of FFQs led some to suggest that risk estimates were biased toward the null (18). A cohort study in the United Kingdom and a nested case-control study in the United States both found a stronger positive association between dietary fat and breast cancer from diet records compared to FFQs, seeming to illustrate this point (33, 34). However, others posited that nondifferential measurement error from FFQs was unlikely to explain the lack of association observed in most cohort studies (19, 28).
In the context of this debate, the results of the Women’s Health Initiative (WHI) were published in 2006 (13). The WHI was a randomized clinical trial of 48,835 postmenopausal women, aged 50–79 years, assigned to either dietary intervention or control, with the goals of the dietary intervention being to reduce intake of total fat to 20% of energy and increase consumption of fruits, vegetables and grains. Although after 8.1 years of follow-up there was no statistically significant reduction in invasive breast cancer, there was a suggestive protective effect of the intervention (hazard ratio (HR) = 0.91, 95% confidence interval (CI) = 0.83–1.01). Since most participants did not meet the intervention goals (13, 28), the WHI trial may not have had the power to detect an effect of fat intake due to insufficient differences between the groups or inadequate follow-up time. Interestingly, secondary analyses suggested a lower hazard ratio among women who adhered to the intervention and women with the highest-fat diets at baseline (those with the greatest potential for change with the intervention) (13). However, the non-significant 9% lower risk in the intervention group may be explained by modest weight loss (28) or other elements of the intervention rather than a reduction in fat intake. Despite great investments of money and time, the WHI did not provide a definitive answer regarding the relationship between fat intake and breast cancer.
The 2007 WCRF/AICR Report concluded there was “limited-suggestive” evidence that higher total fat intake increases the risk of breast cancer (2). More recently, the NIH-AARP Diet and Health Study, a large prospective cohort of over 500,000 men and women, found an increased risk of breast cancer for women with a higher percentage of energy from total fat and all fat subtypes, with a stronger association after statistical correction for measurement error (35). The European Prospective Investigation into Cancer and Nutrition (EPIC), which includes participants from 10 European countries with a wide range of fat intake, reported a weak positive association only for saturated fat and breast cancer (36). These recent findings, in conjunction with the WHI results and varying results based on the type of dietary assessment, have prompted some to reassert a modest but real association between fat and breast cancer (5, 37), while others maintain that dietary fat intake in midlife is not a major cause of breast cancer (28). The dietary fat-breast cancer hypothesis exemplifies the complexities and pitfalls involved in nutritional epidemiology, as controversy remains despite extensive study.
There is no strong evidence for a role of dietary fat in relation to other cancers, and for most cancer sites the WCRF/AICR Report did not make any conclusions. The WCRF/AICR Report concluded there was “limited” evidence for total fat increasing risk of lung cancer and “limited-suggestive” evidence for a positive association between foods containing animal fat and colorectal cancer (2). However, an earlier pooled analysis of 8 cohort studies did not find an association for lung cancer (38). In addition, most studies have found little evidence for an association between total fat intake and colorectal cancer (20), and the WCRF/AICR finding could be driven by red meat consumption (20). Interestingly, a recent analysis in the NIH-AARP study reported an increased risk of pancreatic cancer with higher intakes of total and saturated fat, with the strongest association observed for fat of animal origin (39), but other cohort studies have failed to observe these associations (40–42).
Meat has been of interest in relation to carcinogenesis since early ecologic studies observed high correlations between per capita meat consumption and cancer incidence and mortality (1, 43, 44). Since consumption of meat already constitutes approximately 10% of total energy intake in high income countries and is expected to increase globally (2, 45), epidemiologic investigations of meat and cancer have the potential for widespread impact on cancer prevention. Much of the epidemiologic research on meat and cancer has focused on colorectal cancer.
In 2007, the WCRF/AICR Report concluded there was “convincing” evidence for a positive association between red and processed meat intake and colorectal cancer (2). Reflecting mounting evidence from cohort and case-control studies during the past ten years, this conclusion was stronger than that of the 1997 WCRF/AICR Report in which red meat “probably” and processed meat “possibly” increased colorectal cancer risk (46). Several meta-analyses estimated that those with the highest red meat intake had a 28% to 35% increased risk of colorectal cancer compared to those with the lowest intake, while processed meat intake increased risk by 20% to 49% (47–49). Since the release of the 2007 WCRF/AICR Report, an analysis in the NIH-AARP Diet and Health Study found increased risk of colorectal cancer with intakes of red and processed meat (50). While most studies support a role for meat in colorectal cancer, not all findings are consistent. A recent study in the prospective EPIC-Oxford cohort found that although incidence of all cancers combined was lower among vegetarians than meat eaters, colorectal cancer incidence was higher in vegetarians (51).
Since both total energy intake and dietary fat often correlate with meat consumption, associations observed for meat may be partially explained by these related exposures. Recently, in an attempt to better understand the mechanisms underlying relationships between meat and cancer, greater attention has been placed on studying potentially carcinogenic exposures related to the type of meat as well as preparation and processing methods. These meat-related exposures include heterocyclic amines (HCAs), polycyclic aromatic hydrocarbons (PAHs), iron, heme iron, nitrate, nitrite, and N-nitroso compounds (NOCs).
Each of these meat-related exposures has a viable independent mechanism to impact carcinogenesis. HCAs and PAHs are mutagens formed in meats cooked well done at high temperatures (52–58) and produce tumors in animal models at multiple sites (59–66). Both non-heme iron and heme iron can induce oxidative DNA damage by catalyzing the formation of reactive oxygen species (67, 68). Although iron homeostasis is tightly regulated, heme iron, found predominantly in red meat, is more bioavailable than non-heme iron and its absorption is less well-regulated (69–71). Heme iron is associated with increased cytotoxicity of fecal water (72, 73) and the promotion of chemically-induced colorectal cancer in rats (74). Human studies have also found a positive association between heme iron intake and endogenous formation of NOCs within the large intestine (75, 76). NOCs are some of the most powerful chemical carcinogens (77), inducing tumors in multiple organ sites in numerous animal species (78). Nitrate and nitrite, which are added to processed meat, can form NOCs exogenously in meat and endogenously during digestion.
At the time of the 2007 WCRF/AICR Report, studies on meat preparation/cooking methods were sparse and mostly case-control in nature and studies of iron (encompassing all foods containing iron) were limited and generally of poor quality (2). To untangle the relationship between meat and cancer, future epidemiologic investigations will need to consider these more specific exposures. For example, a recent analysis of NIH-AARP Diet and Health Study participants who completed questions on cooking method and doneness indicated that heme iron, several HCAs, and nitrate from processed meat may explain associations between red and processed meat and colorectal cancer (79).
Evidence for a role for meat in other cancers is much less consistent. As of 2007, there was “limited-suggestive” evidence that red meat was related to cancers of the esophagus, lung, pancreas, and endometrium, while processed meat was a “limited-suggestive” risk factor for esophagus, lung, stomach, and prostate cancer (2). An all-cancer analysis in the NIH-AARP Diet and Health Study supported some, but not all, of the WCRF/AICR conclusions, as red meat was positively associated with cancers of the esophagus, liver, and lung and processed meat was associated with lung cancer (50). While red and processed meat intake were also positively associated with pancreatic cancer among men in this same analysis (50), the Netherlands Cohort Study found no evidence of an association with pancreatic cancer (40). In the past two years, there have been several studies of meat and prostate cancer, with a positive association for red and processed meat among men in the NIH-AARP Diet and Health Study with meat cooking method and doneness data (80), a positive association with well or very-well done meat in the Agricultural Health Study (81), and null results from the Multi-Ethnic Cohort (82). Several cohorts have also recently investigated meat and breast cancer, but evidence has been mixed (83–88).
Dietary fiber is present naturally in plant foods, with high amounts in legumes, minimally processed cereals, fruits, and vegetables. Synthetic forms of fiber and fiber isolated from plant cell walls are also increasingly being added to other foods. The majority of epidemiologic investigations of dietary fiber and carcinogenesis have been in relation to colorectal cancer, and there are several potential mechanisms through which dietary fiber could reduce colorectal cancer risk. These include diluting carcinogens/procarcinogens through stool bulking, reducing transit time of feces, binding bile acids, and producing anti-carcinogenic short chain fatty acids (89). While many case-control and ecologic studies observed a protective association for fiber and colorectal cancer (90), a pooled analysis in 2005 of 13 cohorts found that dietary fiber was inversely associated with colorectal cancer in age-adjusted analyses, but the association was attenuated and no longer statistically significant after adjustment for a host of dietary and lifestyle factors (91). These contradictory findings across study designs may again illustrate the potential for recall and selection biases to influence case-control study results. The pooled analysis also raised the importance of confounding, as dietary fiber is often positively correlated with a generally healthy lifestyle and diet.
In the 1990s, based on biologically plausible mechanisms for chemoprevention and support from observational studies, several RCTs investigated the effect of dietary fiber on recurrence of colorectal adenomas (92–96), known precursors of colorectal cancer (97–99). The largest of these studies, the Polyp Prevention Trial (92), randomized 2,079 men and women to a dietary intervention with counseling support or the control group (usual diet with a brochure on healthy eating). Intervention group participants were instructed to follow a diet low in fat (20 percent of total calories) and high in fiber (18 g of dietary fiber per 1000 kcal) and fruits and vegetables (3.5 servings per 1000 kcal). Those in the intervention group reported a nearly 75 percent increase in fiber intake, with a mean intake of 17.4 g per 1000 kcal and a mean difference of 6.9 g of dietary fiber per 1000 kcal between groups at the end of the trial. However, after a median follow-up of 3.05 years, there was no effect of the intervention on rate of adenoma recurrence (92), and even with continued follow-up (up to 8 years after randomization) for a subset of participants there was no evidence of an effect (100). Although other adenoma recurrence RCTs involving fiber were also largely null (93–96), results from these relatively short interventions involving neoplastic precursor lesions do not exclude a role for dietary fiber in the etiology of colorectal carcinogenesis. Issues raised earlier, such as timing and duration of the intervention, as well as compliance, remain important considerations. Large alterations in diet may be necessary to achieve an effect, a point illustrated by a secondary analysis in the Polyp Prevention Trial that found “super compliers” (n=210) who reported meeting the three dietary goals at all four annual visits had a statistically significant 35% reduction in the odds of adenoma recurrence compared to controls (14).
Citing multiple plausible mechanisms and a dose-response relationship observed with total dietary fiber in a meta-analysis of 8 cohort studies (RR = 0.90, 95% CI = 0.84–0.97 per 10g per day), the WCRF/AICR Report concluded there was a “probable” inverse association between foods containing dietary fiber and colorectal cancer, but recognized that this association could be due to residual confounding (2). Since the WCRF/AICR judgment was based on whole foods, the panel also stated that the protective effects attributed to fiber may be due to the low energy density of high-fiber foods (2). More recently, an analysis in the NIH-AARP Diet and Health Study did not observe an association for total dietary fiber with colorectal cancer, yet whole-grain consumption was modestly inversely associated (101). Interestingly, another analysis in the NIH-AARP Diet and Health Study of cancer of the small intestine, an anatomic site which may share risk factors with the large intestine, found protective effects for whole grain foods and fiber from grains, but no effect for total dietary fiber (102). These findings suggest that components in whole grains other than fiber, such as micronutrients, phenols, or phytoestrogens, may be affecting risk. Evidence from other recent cohort studies has been mixed, with an inverse association for total dietary fiber with colon, but not rectal, cancer in Japan (103) and no association between total dietary fiber and colorectal cancer in the Women’s Health Initiative (104). Consideration of measurement error may be important when interpreting the results of cohort studies, as a recent prospective study nested within seven cohort studies in the United Kingdom found a significant protective effect of fiber intake on colorectal cancer when intake was ascertained by food diaries, but no significant association when the same analysis was conducted using intake data from food frequency questionnaires (105). 
Additional research with special consideration of the source of fiber is needed to better understand the role of dietary fiber in colorectal cancer.
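The “per 10 g per day” relative risk from the meta-analysis cited above can be rescaled to other intake differences if one adopts the log-linear dose-response assumption that such meta-analyses use. A minimal sketch (the function name is ours; the 0.90 per 10 g/day figure is the one quoted above):

```python
import math

def scale_rr(rr_per_increment, increment, new_increment):
    """Rescale a log-linear dose-response relative risk to a
    different exposure increment: RR_new = RR ** (new / old)."""
    return math.exp(math.log(rr_per_increment) * new_increment / increment)

# RR = 0.90 per 10 g/day of dietary fiber, rescaled to a
# 25 g/day difference in intake between individuals:
print(round(scale_rr(0.90, 10, 25), 2))  # -> 0.77
```

Under this assumption, a 25 g/day contrast in fiber intake would correspond to roughly a 23% lower relative risk, though the caveats about confounding and measurement error discussed above apply equally to any such extrapolation.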
Epidemiologic evidence for fiber in relation to other cancer sites is less clear. The only other conclusion from the 2007 WCRF/AICR Report in relation to foods containing dietary fiber was a “limited-suggestive” protective effect for esophageal cancer (2), yet most of the evidence comes from case-control studies. Fiber was listed as a “possible” protective factor for pancreas, colorectum, and breast cancer in the 1997 report (46), indicating that evidence from the intervening 10 years did not support the earlier conclusions.
The idea that increased consumption of fruits and vegetables can decrease cancer risk has been a major theme of research on diet and cancer, but initial enthusiasm was based largely on results from case-control studies (11). Findings from prospective cohort studies conducted since the mid-1990s have not supported such a strong role for fruit and vegetable intake (2), suggesting that methodological biases may have influenced the case-control study results, particularly selection bias (11). The 2007 WCRF/AICR Report concluded the evidence did not reach a level of “convincing” for a protective effect of fruits and vegetables on any cancer site, in contrast to the findings of the previous 1997 edition (2, 46). Pooled analyses of cohort studies found no significant associations between fruit and vegetable intake and cancers of the breast, colon, and ovaries, but observed modest inverse associations for kidney and lung cancers (106–110). Residual confounding, particularly by smoking, is a concern in these observational studies, as individuals with healthier lifestyles generally have higher fruit and vegetable intake. For example, an analysis in the NIH-AARP Diet and Health Study found that vegetable intake was associated with a decreased risk of total cancer in men, but the association was no longer evident when restricted to men who never smoked (111). Despite limited evidence from cohort studies, the fruit and vegetable hypothesis has not been cast aside. Individual cohort studies have found associations (112–115), and the 2007 WCRF/AICR Report identified a number of “probable” associations based on the totality of the evidence; for example, increased intakes of non-starchy vegetables and fruits “probably” decrease the risk of mouth, pharynx, larynx, esophagus, and stomach cancers (2).
The hypothesis that fruit and vegetable intake could reduce cancer risk served as a major impetus for efforts to identify specific nutrients responsible for the protective effect. This reductionist chemoprevention approach was another major theme of nutrition research during the past two decades, as numerous putative associations were reported between nutrients and cancer in observational studies. Findings from observational studies, together with supporting evidence from animal and laboratory studies, formed the basis for testing nutrient supplements in RCTs to evaluate chemopreventive effects. Testing of specific nutrients is well-suited to RCTs because participants can be given supplements or matching placebos, enabling effective blinding of the intervention as well as high rates of compliance compared to dietary modification trials. Here we will focus on three antioxidant nutrients for which prominent RCTs were conducted: beta-carotene, vitamin E, and selenium.
Epidemiologic studies suggested that individuals who consumed a high amount of dietary beta-carotene, generally from high fruit and vegetable intake, had a lower risk of cancer, particularly lung cancer (46, 116). This served as the rationale for the Alpha-Tocopherol, Beta-Carotene (ATBC) Cancer Prevention Study, an RCT in which 29,133 male smokers in Finland were randomly assigned to daily supplements of alpha-tocopherol (50 mg), beta-carotene (20 mg), both, or placebo (117). Contrary to the hypothesized decrease in lung cancer risk, the trial reported that after 5–8 years, men receiving beta-carotene supplements actually had a 16% increased lung cancer incidence. Soon after publication of the ATBC Study results, the large Beta-Carotene and Retinol Efficacy Trial (CARET) reported a 39% increase in lung cancer incidence among male smokers who received a daily combination of beta-carotene (30 mg) and retinyl palmitate (25,000 IU) (118). These RCT results served as a reminder of the potential limitations of observational studies and underscored the fact that high doses of bioactive compounds could have unforeseen effects. The results also spurred substantial debate, as has been reviewed elsewhere (119), including questions about the design and conduct of chemoprevention trials. Antioxidants such as beta-carotene were thought to reduce cancer risk by reducing oxidative damage, but it is now recognized that high doses of beta-carotene can result in pro-oxidant effects in some situations (120, 121). While potential harmful effects seem to be limited to heavy smokers (121), subsequent RCTs have confirmed the lack of a protective effect of beta-carotene supplementation on cancer risk (122, 123).
The beta-carotene story is a clear case where observational and experimental studies yielded conflicting results. However, it is important to recognize the different questions being answered by the two study designs. On the one hand, the RCTs indicated that supplementing with supra-physiologic doses of an isolated nutrient did not prevent lung cancer and in fact increased risk in heavy smokers. On the other hand, prospective cohort studies suggested that individuals who consumed the most beta-carotene-rich foods had a modestly lower risk of lung cancer (124). The RCTs clearly indicated that beta-carotene supplements are not recommended for cancer prevention, but did not discount the potential benefit of consuming foods rich in beta-carotene and other carotenoids. The 2007 WCRF/AICR Report, recognizing the complex nature of foods, judged the evidence as “probable” that consumption of foods containing carotenoids protects against mouth, pharynx, larynx, and lung cancers, and that foods containing beta-carotene protect against esophageal cancer (2). However, it is possible that the chemopreventive properties of these foods are not specifically due to their carotenoid content. Based on the evidence from RCTs, the WCRF/AICR Report also concluded there was convincing evidence that high dose beta-carotene supplements increased the risk of lung cancer in smokers (2).
Another micronutrient that has received considerable attention as a possible chemopreventive agent is vitamin E, which, as a scavenger of free radicals, has been reported to prevent DNA damage, lipid peroxidation, and activation of carcinogens (2). Vitamin E may also bolster the body’s defenses against cancer by enhancing the immune response (125). Based on these mechanistic links and modest evidence from epidemiologic studies (2), a number of RCTs examined daily supplementation with synthetic alpha-tocopherol (a form of vitamin E) in relation to cancer risk. As mentioned previously, a group of subjects in the ATBC Study received a daily dose of 50 mg alpha-tocopherol. After a median of 6.1 years of follow-up, there was no effect of alpha-tocopherol on lung cancer incidence, but there was a significant 34% decreased incidence of prostate cancer. These results were intriguing, but prior evidence of an alpha-tocopherol–prostate cancer link was lacking and prostate cancer was not a pre-specified primary endpoint of the ATBC Study, so further studies were needed to confirm the association.
The Physicians’ Health Study II (PHS II), an RCT specifically powered to examine prostate cancer, enrolled over 14,000 U.S. male physicians and assigned them to supplementation with 400 IU of alpha-tocopherol every other day, 500 mg of vitamin C daily, both, or placebo (126). During a mean follow-up of 8.0 years, there was no effect on prostate or total cancer incidence. Differences between the ATBC Study and PHS II include the prevalence of smoking (very low in PHS II) and the dose of vitamin E administered (50 mg daily in ATBC versus 400 IU every other day in PHS II). These differences could account for the disparate results, but it is also possible the ATBC prostate cancer results occurred by chance, which is supported by the lack of a significant association in post-intervention follow-up of the ATBC Study (127). Other large-scale clinical trials such as the Heart Outcomes Prevention Evaluation and Women’s Health Study also did not observe any effects of alpha-tocopherol supplementation on cancer risk (128, 129).
It was hoped that the Selenium and Vitamin E Cancer Prevention Trial (SELECT), the largest cancer chemoprevention trial conducted to date (130), would answer the question of whether increased vitamin E intake decreases prostate cancer risk. SELECT enrolled 35,533 men free of prostate cancer at baseline (based on prostate-specific antigen (PSA) levels and digital rectal examinations), and randomly assigned them to daily supplementation with alpha-tocopherol (400 IU), selenium (200 μg), both, or placebo (130). With 5.5 years of follow-up, there was no evidence of a protective effect of vitamin E on prostate or total cancer, and there was a statistically nonsignificant increased risk of prostate cancer in the group that received only vitamin E. These results, along with other RCT findings to date, indicated that alpha-tocopherol supplementation in mid-to-late life is not effective in preventing prostate cancer.
It is important to note that the chemoprevention trials tested synthetic alpha-tocopherol, one of many forms of vitamin E. Vitamin E exists naturally as numerous tocopherols and tocotrienols with different molecular structures, bioavailabilities, and biological activities (131). Gamma-tocopherol, the most common form of vitamin E in the U.S. diet, has been shown to have greater anti-cancer activity than alpha-tocopherol in laboratory studies (131). It remains possible that vitamin E could show anti-cancer effects in humans if tested in different forms, at different dosages, or in different populations, but the preponderance of epidemiologic evidence does not support a role for vitamin E supplementation in cancer prevention.
In addition to vitamin E, SELECT tested the effects of a daily selenium supplement on prostate cancer incidence. In part, interest in selenium as a chemopreventive agent was spurred by a secondary analysis in the Nutritional Prevention of Cancer (NPC) trial, in which 1,312 patients with a history of skin cancer were randomly assigned to receive either 200 μg/day of selenium or placebo (132). NPC found no effect of selenium on its primary endpoint, recurrence of skin cancer, but those receiving selenium had lower overall cancer mortality and decreased incidence of lung, prostate, and colorectal cancers. Selenium is thought to impact carcinogenesis through multiple pathways, including via antioxidant and anti-inflammatory effects of selenoenzymes (133). In laboratory studies, selenium has anti-carcinogenic effects (133, 134), particularly for prostate cancer (135, 136).
Since assessment of dietary selenium intake is problematic due to large variation in the selenium content of foods, the majority of observational evidence is based on studies of selenium concentrations in serum/plasma (reflecting short-term intake) or nails (reflecting long-term intake) (2). The 2007 WCRF/AICR Report cited consistent evidence from case-control and cohort studies, with a dose-response relationship, in concluding there was “probable” evidence that foods containing selenium and selenium supplements protect against prostate cancer (2). There was also “limited-suggestive” evidence of protective effects for lung, stomach, and colorectal cancers. However, SELECT yielded disappointing results, as there was no effect of selenium supplementation on prostate cancer incidence. There were also no effects on any prespecified secondary endpoints, including overall mortality and lung or colorectal cancers.
There are numerous possible reasons for the failure to detect the hypothesized protective effect of selenium supplementation in SELECT (137–139). Among these is the fact that the SELECT population was replete in selenium at baseline such that selenoprotein activity may have already been optimized (140). Selenium status was considerably lower at baseline in the NPC trial and the protective effect of supplementation was limited to those with the lowest baseline selenium concentrations (141). In addition, the SELECT population had a high prevalence (85%) of prostate cancer screening via annual PSA testing, and the vast majority of cases (99%) were early-stage, localized disease (130). Therefore the trial could not assess the effect of selenium on advanced or fatal disease. Nevertheless, SELECT indicated that in a well-nourished population of middle-aged adults, selenium supplementation is not effective for the prevention of prostate or other cancers.
While chemoprevention trials conducted in well-nourished adults have generally failed to show protective effects of nutrient supplements, there is some evidence that supplementation can decrease cancer risk in undernourished populations. The Linxian General Population Nutrition Intervention Trial (NIT) tested the efficacy of four vitamin-mineral combinations in 29,594 adults in Linxian, China, a region with low intakes of numerous nutrients and some of the world’s highest rates of esophageal and gastric cancers (142). NIT observed that combined supplementation with selenium, vitamin E, and beta-carotene significantly reduced mortality from total and gastric cancers. These beneficial effects remained evident up to 10 years after the intervention and were greater in those who were younger (less than age 55) at the beginning of the intervention (143). The NIT could not isolate effects of the individual antioxidants, and the results may not be generalizable to other populations, but it serves as an important reminder that the effects of nutrient supplements may differ depending on the baseline status of the population and the timing of the intervention.
Dietary calcium has been extensively studied in relation to cancer risk, particularly for colorectal cancer. Experimental evidence supports an antineoplastic role for calcium in the colon (144–146), as intracellular calcium directly influences cell proliferation, differentiation, and apoptosis, and dietary calcium may also prevent damage to the intestinal epithelium by binding bile acids (2, 147). Epidemiologic studies of dietary calcium and dairy food consumption have generally supported a protective effect of calcium intake on colorectal cancer. The WCRF/AICR Report concluded the evidence was “probable” that milk consumption protects against colorectal cancer, with the recognition that most studies of dietary calcium were performed in high-income populations where calcium may serve as a marker for dairy food consumption (2). A pooled analysis of 10 cohort studies found a significantly reduced risk of colorectal cancer for those with the highest intakes of milk (RR = 0.85, 95% CI = 0.78–0.94), dietary calcium (RR = 0.86, 95% CI = 0.78–0.95), and total calcium (RR = 0.78, 95% CI = 0.69–0.88) compared to those with the lowest intakes (148). The WCRF/AICR Report also concluded that the evidence was “probable” that calcium supplements decreased risk of colorectal cancer (2). A number of cohort studies published since the 2007 WCRF/AICR Report have also supported a protective effect of calcium intake (149–152), including the NIH-AARP Diet and Health Study, which observed significant inverse associations with colorectal cancer for dairy foods, dietary calcium, and total calcium (153). In contrast, epidemiologic studies of adenoma incidence or recurrence have been less consistent, with some studies finding modest associations with dietary or supplemental calcium intake and others finding no relationship (154–158).
The relationship between calcium intake and colorectal neoplasia has also been examined in several large RCTs. The Calcium Polyp Prevention Study randomly assigned 930 subjects with a recent history of colorectal adenomas to either 1200 mg elemental calcium or placebo daily. After four years of follow-up, there was a significant 17% reduced risk of adenoma recurrence for the calcium group, and the protective effect persisted up to five years after cessation of supplementation (159, 160). Further analyses suggested the effects were more pronounced for advanced adenomas and that calcium supplementation may act together with high vitamin D status to reduce adenoma risk (161, 162). In the European Cancer Prevention Organisation Intervention Study, which examined adenoma recurrence over 3 years in subjects assigned to either 2000 mg elemental calcium (n = 218) or placebo (n = 221) daily, calcium supplementation resulted in a nonsignificant 34% reduction in risk of adenoma recurrence (96). Part of the Women’s Health Initiative (WHI) examined incident colorectal cancer in over 36,000 post-menopausal women randomly assigned to either 1000 mg elemental calcium and 400 IU vitamin D daily or placebo (163). After an average of 7.0 years of follow-up, no difference in colorectal cancer incidence was observed between the two groups.
The most obvious explanation for the discrepancy between the WHI and earlier RCT results is the different endpoints; although adenomas are known precursors of colorectal cancer often employed as early markers in RCTs, prevention of adenoma recurrence may not equate to prevention of incident colorectal cancer. However, as has been discussed previously for other trial results, there are other factors that complicate interpretation of the WHI results. First, whereas effects on adenoma recurrence can be observed relatively quickly, colorectal cancer has a long latency period and may require longer follow-up to observe preventive effects. Second, WHI participants had high baseline calcium intakes (1151 mg/day), which may have limited the effectiveness of supplementation. Results from prospective cohort studies suggest a threshold effect in which calcium intake reduces colorectal cancer risk up to approximately 1000 mg/day, with no further protection for higher intakes (156). A subgroup analysis within WHI did not observe an effect of supplementation among participants with baseline intakes below 800 mg/day, but these results do not exclude an effect in other populations with lower calcium intakes (156, 163). The WHI results suggest that calcium supplementation may not further reduce colorectal cancer risk in those who already have high calcium intake, but results from observational studies and RCTs of adenoma recurrence support a role for calcium in the prevention of colorectal cancer.
In contrast to the protective effect on colorectal cancer, there is evidence that high calcium intake may increase risk of prostate cancer. Mechanistically, high calcium intake may increase cell proliferation in the prostate by inhibiting the conversion of vitamin D to 1,25-dihydroxyvitamin D3 (164). The WCRF/AICR Report cited substantial evidence from both cohort and case-control studies, consistent with a dose-response relationship, in concluding that diets high in calcium are a “probable” cause of prostate cancer (2). A meta-analysis of eight cohort studies yielded a significant increased risk of all prostate cancer for those with the highest calcium intakes (RR = 1.27, 95% CI = 1.09–1.48), and similar increased risk was observed for advanced/aggressive prostate cancer (2). However, results are not entirely consistent, as the NIH-AARP Diet and Health Study observed no association between prostate cancer and calcium intake, even at high levels (≥ 2000 mg/day) (153). Similarly, no association was found in the Multiethnic Cohort Study, which included over 82,000 men and 4,404 prostate cancer cases (165). Guidelines from the American Cancer Society recommend that men limit calcium intake to less than 1500 mg/day due to the possible increased risk of prostate cancer (166), which coincides with observational evidence that beneficial effects of calcium on colorectal cancer may plateau at 1000 mg/day (156). The WCRF/AICR Report did not make a recommendation for calcium intake (2).
The most prominent theme to emerge from our review of epidemiologic and clinical studies of nutrition and cancer over the past two decades is the lack of concordance of results from RCTs of individual nutrients and observational studies. As we have discussed, there are a number of possible reasons for these discrepancies, including differences in study population, dose and timing of the exposure, adherence to an intervention in an RCT, length of follow-up, or the endpoint being studied. For these reasons, null findings of dietary interventions in the RCTs do not necessarily indicate a lack of effect for the dietary factors on cancer risk. It remains possible that some of these nutrients may have effects if given at the right time and in the right dose.
We now realize that any potential benefits derived from dietary sources are likely to be due to a combination of factors rather than single components acting in isolation. Nutrients most likely act in concert as parts of whole foods, in conjunction with other compounds such as phytochemicals, and effects may be moderated through multiple pathways such as energy balance, insulin response, and inflammation. Future efforts need to recognize the integrative nature of dietary exposures and attempt to study nutrients in the larger context of the foods and diets in which they are consumed. Given the limitations inherent to RCTs, namely their high cost, low compliance with complex dietary or lifestyle interventions, difficulties in testing combinations of nutrients and other bioactive food components in their natural context, and the need to intervene in older subjects to achieve sufficient statistical power, we may need to rely more heavily in the future on high-quality observational evidence, which will require better tools for the assessment of diet and related exposures. RCTs will always have a prominent place in the evidence base due to their ability to eliminate confounding, but for future RCTs of diet we need to develop intermediate biomarkers of effect, so that interventions can be tested more quickly and in fewer subjects of varying ages.
Despite the numerous pitfalls and setbacks chronicled here, there is considerable reason for optimism when viewing the field of nutrition and cancer. While RCTs have not always replicated relationships observed in observational studies, great progress has been made in our understanding of the role of nutrition in relation to a number of cancers. In addition to the relationships discussed in this review, strong evidence has accrued on the effects of energy balance, body fatness, and physical activity as contributors to cancer risk (2). Interventions in these areas are clearly warranted to reduce cancer incidence and mortality as well as that of other adverse health conditions. Relatively consistent evidence has also emerged linking alcohol consumption to increased risk of a number of cancers, including gastrointestinal and breast cancers (2).
As the field progresses, investigators can examine observational data on hundreds of thousands of participants through pooling projects and cohort consortia, yielding wider ranges of exposures and greater statistical power to examine subtle associations, interactions, and rare cancers. Technological advances offer hope for improved exposure assessments, including newly developed tools such as internet-based questionnaires, food records that could be linked to cellular phones, personal digital assistants, or cameras, as well as objective measures of physical activity with advanced accelerometers. There is also a clear need for development of objective biomarkers of dietary intake, as these are crucial both for measuring direct associations with disease and for improving the validity of dietary assessment instruments. While it is currently possible to calibrate self-report dietary data using reference biomarkers for protein and energy, obtaining these biomarkers is prohibitively expensive for large-scale studies and reference biomarkers do not exist for other nutrients. Metabolomics, in which thousands of metabolic compounds are assayed to create a unique metabolic “fingerprint” for each individual, could potentially identify new disease associations and allow identification of novel biomarkers.
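The regression-calibration approach alluded to above, in which a reference biomarker measured in a subsample is used to correct self-reported intake for the whole cohort, can be sketched with a toy simulation. All variable names, sample sizes, and effect sizes below are hypothetical, and real calibration analyses are considerably more elaborate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cohort: T = true protein intake, Q = food-frequency
# questionnaire self-report (systematically attenuated and noisy),
# B = recovery biomarker measured in a small subsample (unbiased for T).
n_cohort, n_sub = 50_000, 500
T = rng.normal(75, 15, n_cohort)                # true intake (g/day)
Q = 0.7 * T + 5 + rng.normal(0, 12, n_cohort)   # under-reporting self-report

sub = rng.choice(n_cohort, n_sub, replace=False)
B = T[sub] + rng.normal(0, 8, n_sub)            # biomarker in subsample only

# Regression calibration: fit biomarker on self-report in the subsample...
b1, b0 = np.polyfit(Q[sub], B, 1)               # slope, intercept
# ...then replace each participant's FFQ value with its calibrated prediction.
Q_cal = b0 + b1 * Q

print(f"mean self-report: {Q.mean():.1f} g/day")
print(f"mean calibrated:  {Q_cal.mean():.1f} g/day (true mean {T.mean():.1f})")
```

The calibrated values recover the cohort's mean intake despite systematic under-reporting, which is why such biomarkers exist only for protein and energy is a genuine limitation: without an unbiased reference measure, the calibration equation cannot be estimated.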
Rapid advancements in genotyping and genetic analyses offer another avenue by which we can improve our understanding of the links between nutrition and cancer. Identification of specific genetic variants and associated pathways linked to cancer are yielding new clues concerning the mechanics of cancer development and the ways in which nutrients might affect these mechanisms. However, large sample sizes with both genotyping and prospective dietary assessments are needed to adequately study gene-diet interactions. This is an area of great promise, as it is likely that the effects of diet on cancer vary depending on an individual’s unique genetic background. Just as there is great hope for “personalized medicine”, we may eventually develop “personalized nutrition”, in which patients could receive nutritional advice tailored to their specific genotype. This concept is far from being a reality, but it remains vitally important to study the interrelated effects of diet and genetics. Genetic studies may also improve our ability to study the effects of nutritional exposures through the concept of Mendelian randomization, where genetic variants that mirror the biologic effects of an environmental exposure, such as diet, can be used as a proxy for that exposure (167, 168). Due to the random assortment of genotype at conception (similar to randomizing exposure in an RCT), it is possible to avoid the typical problems of confounding when studying the effects of the proxy genetic variant, thus providing a more accurate determination of the association between the dietary exposure and cancer.
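The Mendelian randomization logic described above can be illustrated with a small simulation. This is a minimal sketch using invented effect sizes and the simplest possible estimator (the Wald ratio); it is not a representation of any real gene-diet analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical setup: G = genetic variant (instrument), U = unmeasured
# confounder (e.g., overall healthy lifestyle), X = dietary exposure,
# Y = continuous disease-risk score. True causal effect of X on Y is 0.2.
G = rng.binomial(2, 0.3, n)                   # randomly assorted at conception
U = rng.normal(size=n)
X = 0.5 * G + 1.0 * U + rng.normal(size=n)    # exposure driven by G and U
Y = 0.2 * X + 1.0 * U + rng.normal(size=n)    # outcome confounded by U

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x)

naive = slope(X, Y)                  # confounded observational estimate
wald = slope(G, Y) / slope(G, X)     # Mendelian randomization (Wald ratio)

print(f"naive estimate: {naive:.2f}")   # inflated by confounding through U
print(f"MR estimate:    {wald:.2f}")    # close to the true effect of 0.2
```

Because the genotype is independent of the confounder, the ratio of the gene-outcome slope to the gene-exposure slope recovers the causal effect that the direct exposure-outcome regression overstates, mirroring the RCT analogy in the text.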
Finally, the field of nutritional epidemiology has recently been employing approaches to evaluate dietary exposures as part of a collection of interrelated factors rather than isolated nutrients. Food components and combinations of foods may have synergistic effects, and equally important may be the omission of certain foods from the diet. Several statistical techniques can be used to identify dietary patterns that account for overall diet and to classify individuals based on their consumption or avoidance of a range of foods. For example, individuals can be classified as adherers to the “Mediterranean diet”, which is characterized by high consumption of vegetables, fruits, fish, nuts, and olive oil, but infrequent consumption of meat. Conversely, an individual could fall under the “Western” dietary pattern, characterized by high intake of meat, dairy, sugars, and processed foods. Results have been inconsistent thus far, but studies have identified associations between dietary patterns and some cancers (169–172). This more holistic view of the diet may yield clues that have not emerged when only individual components were examined.
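As a rough illustration of the pattern-identification techniques mentioned above, the following sketch applies principal components analysis, one commonly used approach, to simulated food-group intakes. The food groups, loadings, and sample size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical intakes (servings/week) for six food groups; a single latent
# "Western vs. Mediterranean" tendency drives correlated consumption.
foods = ["vegetables", "fruit", "fish", "red_meat", "processed_food", "sugar"]
latent = rng.normal(size=n)
loadings = np.array([-1.0, -0.9, -0.7, 1.0, 0.9, 0.8])   # assumed directions
X = latent[:, None] * loadings + rng.normal(scale=0.8, size=(n, 6))

# Dietary-pattern analysis: standardize intakes, then take the leading
# eigenvector of the correlation matrix as the first "pattern".
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
pattern = eigvecs[:, -1]        # loading of each food group on pattern 1
scores = Z @ pattern            # each individual's pattern score

# Foods with same-signed loadings cluster into one pattern; individuals can
# then be ranked (e.g., into quintiles of the score) for risk analyses.
for food, weight in zip(foods, pattern):
    print(f"{food:15s} {weight:+.2f}")
```

In real analyses, the resulting pattern scores, rather than any single nutrient, are entered into regression models of cancer risk, which is precisely the holistic shift the text describes.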
Research from the past two decades has demonstrated that the role of nutrition in cancer is more complicated than originally thought. While a number of large-scale clinical trials failed to produce hypothesized decreases in cancer risk, it remains probable that nutritional factors play a significant role in cancer development. However, the days of looking for a single nutrient to administer to middle-aged adults to reduce their risk of cancer may be behind us. We have made great strides in our understanding of nutrition and cancer, yet much remains to be clarified and both persistence and creativity are needed to advance the field in the decade to come. For the present, prudent recommendations for cancer prevention based on the 2007 WCRF/AICR Report include: be physically active and avoid weight gain, consume a diet of primarily plant origin, and limit intake of alcohol, red meat and processed meat (2).