Epidemiologic studies of persons exposed to ionizing radiation offer a wealth of information on cancer risks in humans. The Life Span Study cohort of Japanese A-bomb survivors, a large cohort that includes all ages and both sexes with a wide range of well-characterized doses, is the primary resource for estimating carcinogenic risks from low linear energy transfer external exposure. Extensive data on persons exposed for therapeutic or diagnostic medical reasons offer the opportunity to study fractionated exposure, risks at high therapeutic doses, and risks of site-specific cancers in non-Japanese populations. Studies of persons exposed for occupational and environmental reasons allow a direct evaluation of exposure at low doses and dose rates, and also provide information on different types of radiation such as radon and iodine-131. This article summarizes the findings from these studies with emphasis on studies with well-characterized doses.
Epidemiologic studies provide the necessary data for quantifying cancer risks as a function of dose and for setting radiation protection standards. Leukemia and most solid cancers have been linked with radiation. Most solid cancer data are reasonably well described by linear dose-response functions although there may be a downturn in risks at very high doses. Persons exposed early in life have especially high relative risks for many cancers, and radiation-related risk of solid cancers appears to persist throughout life.
Fifty years ago our knowledge of the role of radiation exposure in human cancer was only beginning to emerge. Leukemia had been linked with external radiation exposure in Japanese A-bomb survivors and medically exposed persons, lung cancer linked with radon exposure in underground miners, skin cancer with radiation exposure in radiologists, and bone cancer with radium exposure in luminous dial painters (Doll 1995). It was not known whether other types of cancer might be related to radiation exposure, and there were no data for quantifying cancer risks from radiation exposure in various settings.
Knowledge of cancer risks in humans has advanced tremendously in the last half century. Much of this knowledge has come from studies of A-bomb survivors in Hiroshima and Nagasaki, who have now been followed for more than 50 years. In addition, there is now a wealth of data from studies of people who have been exposed for medical, occupational, or environmental reasons. Importantly, many studies have included extensive efforts to estimate doses for individual subjects, making it possible to quantify risk as a function of dose, to evaluate how age at exposure and gender might modify the dose-response relationship, to examine how the risk changes as subjects are followed over time, and to investigate interactions of radiation and other exposures. Epidemiologic studies thus provide information that is needed for risk assessment and setting radiation protection standards, and also increase our understanding of the carcinogenic process.
This paper summarizes major findings from epidemiologic studies of cancer risks in relation to ionizing radiation exposure. Emphasis is on the most informative studies, which are usually those with a wide range of well-characterized doses.
MacMahon and Trichopoulos (1996) define epidemiology as “the study of the distribution and determinants of disease frequency in human populations.” The major strength of epidemiologic studies is that they provide direct information on health risks in humans. Epidemiologic studies are nearly always observational rather than experimental (in that the investigator cannot control the circumstances of exposure), and thus there is high potential for bias or confounding. Good study design and appropriate data analyses can minimize but not eliminate this potential. Such potential is especially large when studying very small risks such as those likely to be associated with low-dose exposures. Other weaknesses of epidemiologic studies include low statistical power, especially when studying less common site-specific cancers and/or risks from exposure at low doses, and uncertainties in dose estimates, which can distort dose-response relationships. Despite these weaknesses, much of what we know about human cancer risks from radiation exposure is based on epidemiologic studies.
Most of the information on radiation risks in humans has come from cohort and case-control studies. In a cohort study, a defined population is followed forward in time to evaluate the occurrence of health effects such as site-specific cancer incidence or cause-specific mortality. In the more informative studies, data on radiation exposure and other risk factors are collected with the objective of relating risks of various health effects to these variables. Strengths of cohort studies are that it is possible to evaluate both relative and absolute risk (see below) and to evaluate risks of many different health endpoints (e.g. site-specific cancer incidence). A limitation is that it is not always feasible to obtain detailed information on radiation exposure and other risk factors for a large cohort.
In case-control studies, persons with and without a specified disease (cases and controls, respectively) are compared with respect to their level of radiation exposure. Many case-control studies in radiation epidemiology are “nested” in a cohort; that is, a cohort is used as the source of both cases and controls. In a nested case-control study, all cases meeting specified criteria are studied along with a stratified (matched) sample of persons who do not have the disease of interest (controls). The main reason for conducting case-control studies is that the much smaller number of subjects makes it feasible to obtain detailed data on radiation exposure and other variables, and to estimate radiation doses. Even with as few as 2 or 3 controls per case, statistical power for evaluating many hypotheses is nearly as good as if the entire cohort had been studied (Breslow and Day 1980). Drawbacks of case-control studies are that only the relative risk can be estimated and only the diseases for which cases are selected can be studied. If data on exposure and other variables are obtained by interviewing cases and controls, another drawback is that recall may be different for cases than for controls. However, this is not a problem in studies where data on exposure and other variables are obtained from historic records.
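The efficiency of sampling only a few controls per case can be illustrated with the standard asymptotic approximation that a matched design with m controls per case attains roughly m/(m+1) of the statistical efficiency of an unlimited comparison group. The sketch below simply evaluates that approximation; the formula is the only input, and the design itself is hypothetical:

```python
# Approximate relative efficiency of a matched case-control study with
# m controls per case, versus an unlimited pool of controls (the standard
# m / (m + 1) asymptotic approximation; see Breslow and Day 1980).
def relative_efficiency(m: int) -> float:
    """Fraction of full-cohort efficiency achieved with m controls per case."""
    return m / (m + 1)

for m in (1, 2, 3, 4, 10):
    print(f"{m} control(s) per case: ~{relative_efficiency(m):.0%} efficiency")
```

With 3 controls per case the design already achieves about 75% of the efficiency of studying the entire cohort, which is why nested case-control studies sacrifice so little power while making detailed dose reconstruction feasible.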
Whereas cohort and case-control studies are based on individuals, ecological studies are based on grouped data (e.g. by geographical region). Although ecological studies have the practical advantage that they can often be conducted using existing databases, such studies are of value mainly to provide preliminary data and to generate hypotheses (Piantadosi 1994, Greenland and Robins 1994). For this reason, ecological studies are given minimal attention in this paper.
A disease incidence rate is defined as the number of newly diagnosed cases of cancer or other disease per population and time period, for example, the number of newly diagnosed cases per 10,000 person-years. In the simplest situation, the rate in exposed persons, denoted by Re, is compared with the rate in "unexposed" persons, denoted by Ru. The relative risk (RR) is defined as the ratio of these two rates, Re/Ru, and has no units; the excess relative risk (ERR) is defined as Re/Ru − 1. The excess absolute risk (EAR) is defined as the difference of the two rates, Re − Ru, and is expressed in the same units as the rates. The RR (or ERR) is the usual measure for etiologic research, whereas the EAR is useful for estimating the absolute burden or magnitude of a health problem. In case-control studies, it is the odds ratio (OR) that is estimated, which is defined as Re(1 − Ru)/[Ru(1 − Re)]. However, in most studies, the odds ratio closely approximates the relative risk and is often referred to as such (Breslow and Day 1980). The term excess odds ratio (EOR) is also used and is defined as OR − 1.
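These measures can be made concrete with a small numeric example. The rates below are hypothetical, chosen only to show how each quantity is computed:

```python
# Hypothetical rates, for illustration only (not from any study).
Re = 25.0   # incidence rate in exposed persons, per 10,000 person-years
Ru = 10.0   # incidence rate in "unexposed" persons, per 10,000 person-years

RR  = Re / Ru        # relative risk, unitless          -> 2.5
ERR = Re / Ru - 1    # excess relative risk             -> 1.5
EAR = Re - Ru        # excess absolute risk, per 10,000 person-years -> 15.0

# The odds ratio is defined in terms of risks expressed as proportions;
# for a rare disease it closely approximates the relative risk.
pe, pu = 0.0025, 0.0010          # hypothetical risks over the study period
OR = pe * (1 - pu) / (pu * (1 - pe))

print(RR, ERR, EAR, round(OR, 3))
```

Note that the OR (about 2.504 here) is nearly identical to the RR of 2.5, illustrating the rare-disease approximation mentioned above.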
In studies with dose estimates, the focus of this paper, a model that plays a prominent role is one in which the ERR is expressed as a linear function of dose. That is, RR = 1 + βR dose, where βR is the excess relative risk per unit of dose, typically in gray (Gy) or in sieverts (Sv), for example ERR Gy−1. When there is no likelihood of confusion, the ERR Gy−1 is sometimes referred to in this paper as simply the ERR. The use of a linear model can be supported by radiobiological considerations. The shape of the dose-response is often investigated, but few studies have good statistical power for detecting departures from linearity. In cohort studies, but not case-control studies, the EAR can also be evaluated as a linear (or other) function of doses, and is commonly expressed per unit of dose per 10,000 person-years. Both the ERR and EAR may vary by age at exposure and other variables. The EAR is likely to increase strongly with attained age due to the sharp increase in background rates with this variable for most cancers.
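As a sketch of how the ERR per unit dose might be estimated from grouped cohort data, the toy example below fits RR = 1 + β·dose by maximizing a Poisson likelihood with a crude grid search. The dose groups, person-years, counts, and baseline rate are invented for illustration; real analyses use specialized regression software rather than a grid search:

```python
import math

# Toy grouped data (hypothetical, not from the LSS or any other study).
doses = [0.0, 0.5, 1.0, 2.0]       # mean dose per group, Gy
pyr   = [1e5, 1e5, 1e5, 1e5]       # person-years at risk per group
cases = [100, 125, 150, 200]       # observed cancer counts per group
base  = 100 / 1e5                  # assumed baseline rate per person-year

def neg_loglik(beta):
    """Poisson negative log-likelihood for the model RR = 1 + beta * dose."""
    nll = 0.0
    for d, py, y in zip(doses, pyr, cases):
        mu = base * (1 + beta * d) * py   # expected count in this dose group
        nll -= y * math.log(mu) - mu
    return nll

# Crude grid search over beta in [0, 2] for the maximum-likelihood estimate.
beta_hat = min((b / 100 for b in range(0, 201)), key=neg_loglik)
print(f"estimated ERR per Gy: {beta_hat:.2f}")
```

Because the toy counts were constructed to follow RR = 1 + 0.5·dose exactly, the fitted ERR Gy−1 is 0.50; with real data the estimate would carry a confidence interval reflecting Poisson variability.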
A major strength of radiation epidemiology studies is the extensive efforts that have been made to estimate dose. With individual dose estimates, it becomes possible to quantify risk as a function of dose; to investigate the dependency of the dose-response on factors such as dose-rate, age, and sex; and to combine data from different studies in a meaningful way (see below). A special issue of Radiation Research on “Applications of Dosimetry in Radiation Epidemiology” reviews dose estimation methods used in various types of radiation epidemiology studies (Simon et al. 2006a).
Because dose assessment is often complex and may be conducted many years after the exposures occurred, dose estimates are subject to uncertainties, which may be large. Errors that are "random", that is, independent from subject to subject, reduce statistical power for testing dose-response relationships and increase uncertainties in parameter estimates. "Classical" random errors, which can be thought of as error that arises from an imprecise measuring device, attenuate the dose-response and result in underestimation of risk coefficients. By contrast, "Berkson" random errors, which occur when a single mean dose is used for a group of subjects, do not lead to bias in linear risk coefficients. Errors that are "shared", that is, correlated for different subjects, can lead to bias in either direction. Thus, it is important to be aware of the quality of the dosimetry in interpreting results of dose-response analyses. Methods for adjusting for dosimetry errors have been developed and have been applied in some studies (Schafer and Gilbert 2006). However, fully accounting for measurement error in studies with complex dose estimation methods can be a very challenging task, and thus relatively few studies include adjustments for dosimetry errors.
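The contrast between classical and Berkson error can be demonstrated with a small Monte Carlo simulation (all numbers hypothetical). Classical noise added to the measured dose attenuates an ordinary least-squares slope toward zero, whereas assigning each subject a shared group-mean dose (Berkson) leaves the linear slope unbiased:

```python
import random

random.seed(1)
true_beta = 1.0       # true slope of the linear dose-response (hypothetical)
n = 200_000           # simulated subjects

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Classical error: observed dose = true dose + measurement noise.
# Regressing the outcome on the noisy dose attenuates the slope by
# var(true) / (var(true) + var(noise)) = 0.25 / 0.50 = 0.5 here.
true_dose = [random.gauss(1.0, 0.5) for _ in range(n)]
outcome_c = [true_beta * d + random.gauss(0, 0.2) for d in true_dose]
noisy     = [d + random.gauss(0, 0.5) for d in true_dose]
slope_c = ols_slope(noisy, outcome_c)

# Berkson error: every subject in a group is assigned the group mean dose,
# while true individual doses scatter around it; the slope stays unbiased.
assigned  = [random.gauss(1.0, 0.5) for _ in range(n)]
true_d    = [a + random.gauss(0, 0.5) for a in assigned]
outcome_b = [true_beta * d + random.gauss(0, 0.2) for d in true_d]
slope_b = ols_slope(assigned, outcome_b)

print(f"classical: {slope_c:.2f}")   # attenuated, near 0.5
print(f"Berkson:   {slope_b:.2f}")   # unbiased, near 1.0
```

This is the intuition behind the statement above: classical errors bias risk coefficients downward, Berkson errors do not, and shared (correlated) errors, which are not simulated here, can bias results in either direction.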
Because of the wealth of epidemiologic data pertaining to radiation risks, there is often a need to summarize information from more than one study. One way of accomplishing this is to analyze combined data on individual subjects from different studies (pooled analyses), while another is to analyze published results from different studies (meta-analyses). In radiation epidemiology, pooled analyses have been used more extensively than meta-analyses. By applying comparable statistical approaches to each study and setting out results in parallel, a better understanding of differences and similarities can be achieved. Pooled analyses also allow more precise risk estimates than can be obtained from individual studies although it is important to consider heterogeneity among the studies. In addition, they provide the best overall summary of data addressing a particular endpoint or of studies of persons sharing some characteristic (e.g. nuclear workers).
In this section, selected studies of persons exposed to external radiation, radon, and other internal emitters are reviewed, focusing on studies that include individual dose estimates. More recent findings are emphasized, and pooled analyses, in which data from several studies addressing a common issue are combined, are given special attention. It is not possible to describe or even mention every study, and many excellent studies are thus omitted. More comprehensive reviews can be found in reports by the National Research Council (NRC 2006) and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR 2008), and also in Boice (2006), and, for medical studies, Ron (2003).
The Life Span Study (LSS) cohort of survivors of the atomic bombings in Hiroshima and Nagasaki is a major source of information for evaluating health risks from exposure to ionizing radiation and particularly for developing quantitative estimates of risk that form the basis of setting radiation protection standards. The advantages of this population include its large size (87,000 in-city subjects with dose estimates), inclusion of both sexes and all ages, well characterized doses that have been estimated for individual subjects, long-term follow-up, and high quality mortality and cancer incidence data. More than 60% of exposed survivors have doses that are less than 0.1 Sv (the range of greatest interest for radiation protection) whereas survivors with doses in the 0.1 to 4 Sv range greatly increase the precision with which risks can be estimated. In addition, the whole-body exposures received by this cohort in combination with the advantages noted above offer the opportunity of assessing risks for cancers of many specific sites.
A major international effort to reassess and improve survivor dose estimates resulted in the Dosimetry System 2002 (DS02). Preston et al. (2004) analyzed solid cancer and leukemia mortality data for the period 1950-2000 using both DS02 and the older DS86 doses, and found that estimates of risks per Sv were decreased by about 8% by the revision. Table I shows the distribution of the LSS cohort and the number of solid cancer deaths by their DS02 dose category. Recent analyses of the LSS data were corrected for dosimetry uncertainty resulting from errors in location of individual survivors. With the assumption of 35% errors in DS02 dose estimates, this correction led to about a 10% increase in the slopes of the linear dose-response.
Leukemia was the first cancer to be linked with radiation exposure in A-bomb survivors (Folley et al. 1952), and has the highest relative risk of any cancer. A linear-quadratic dose-response fitted both leukemia incidence (Preston et al. 1994) and mortality (Preston et al. 2004) data significantly better than a pure linear relationship. Both the ERR and EAR decrease with increasing age at exposure and with increasing time since exposure; for the EAR, the decline with time since exposure is much stronger for those who were young at exposure (Preston et al. 2004, NRC 2006). Of the 204 deaths from leukemia in survivors with doses of 0.005 Gy or more, 46% were estimated to be related to radiation. Preston et al. found considerable variation in the patterns of risk with age and time among leukemia subtypes.
Until the 1990's, when cancer incidence data for both Hiroshima and Nagasaki became available, mortality data served as the primary source of information on cancer risks. We focus here on a detailed report on solid cancer incidence for the period 1958-1998 (Preston et al. 2007). Analyses in this report included 17,448 first primary cancers occurring in 105,427 cohort members who survived until 1958. Unlike earlier analyses of cancer incidence and mortality, these analyses included about 27,000 persons who were not in Hiroshima or Nagasaki at the time of the bombings. Recent risk assessments by the Committee to Assess Health Risks from Exposure to Low Levels of Ionizing Radiation (BEIR VII) (NRC 2006) and by UNSCEAR (2008) make strong use of cancer incidence data from the LSS cohort.
Solid cancer incidence data were well described by a linear dose-response function over the 0-2 Gy range (Figure 1). A statistically significant dose-response was achieved even when analyses were restricted to doses of 0.15 Gy or less. The ERR Gy−1 for females was nearly twice that for males, reflecting to some extent the higher baseline risks for males. The EAR was also larger for females, but this was not the case when gender-specific cancers (breast, prostate, and reproductive organs) were excluded. Both the ERR and EAR decreased with increasing age at exposure. Although the ERR decreased with increasing attained age, the EAR increased (Figure 2); these different patterns reflect the strong increase in baseline risks with attained age. About 11% of the solid cancers occurring in survivors with doses of 0.005 Gy or more were attributed to radiation (Preston et al. 2007).
Preston et al. (2007) evaluated incident solid cancers of 18 cancer sites as well as the residual category "all other solid cancers". Significant dose-response relationships were found for cancers of the oral cavity, esophagus, stomach, colon, liver, lung, non-melanoma skin, female breast, ovary, bladder, central nervous system, thyroid, and the residual category. In addition, Ron et al. (2005) found such a relationship for male breast cancer based on small numbers. Statistically significant dose-response relationships were also found for five histological types of cancer: squamous cell carcinoma, adenocarcinoma, other epithelial, sarcoma, and other non-epithelial.
Several special studies of site-specific cancers have been conducted. These included either additional information on risk factors other than radiation or special pathological review of cases. Land et al. (1994) conducted a case-control study of breast cancer and found that the risks from radiation and other breast cancer risk factors (age at first pregnancy, number of births, cumulative lactation) could be described by a multiplicative model and that an additive model could be rejected. Pierce et al. (2003) evaluated lung cancer risks, and found that the effects of radiation and smoking could be described by an additive model and were significantly sub-multiplicative. In a study of salivary gland tumors, Land et al. (1996) found significant dose-response relationships for both benign and malignant tumors with especially high estimates of the ERR for mucoepidermoid carcinoma and for Warthin's tumor. Cologne et al. (1999) confirmed the dose-response relationship for liver cancer by analyzing histologically and clinically verified primary liver cancer cases, and found no evidence of differences in the dose-response for hepatocellular carcinomas and cholangiocarcinomas. In a study of skin cancer, Ron et al. (1998) found a significant dose response for basal cell carcinoma (with a suggestion that the dose-response was non-linear), but not for other types of skin cancer such as squamous cell carcinoma. Preston et al. (2002a) investigated tumors of the nervous system and pituitary gland and found a significant dose-response for all tumors with the highest ERR observed for schwannomas.
A-bomb survivors exposed in utero or in early childhood have also been studied. Although there is little evidence of radiation-related cancers occurring under age 15 years (Yoshimoto et al. 1994), Preston et al. (2008) found significant dose-response relationships for cancers occurring at ages 12-55 for both survivors exposed in utero and those exposed in early childhood.
Cancer incidence data from the LSS cohort were used by the BEIR VII Committee (NRC 2006) to estimate lifetime risks from low dose exposure (defined as 0.1 Gy or less) for the US population with estimates depending on sex and age at exposure. On average, the risk of developing cancer as a result of a single exposure of 0.1 Gy was estimated to be about 1%. Estimates for persons exposed at young ages are larger; for example, the lifetime risk for a person exposed at age 10 would be about 2%. The BEIR VII Committee also developed estimates of risks of several site-specific cancers, useful for assessing risks from partial body exposure such as those encountered in medical situations.
Epidemiologic studies of persons exposed to radiation for medical reasons offer the opportunity to study fractionated radiation exposure received over a period of time, risks at high therapeutic doses, low and high linear energy transfer (LET) radiation, and risks of site-specific cancers in non-Japanese populations with different baseline risks, and thus add considerably to what has been learned about radiation risks from the instantaneous whole body exposure received by A-bomb survivors. Most medical exposure results in non-uniform doses to the various organs of the body, and organ-specific doses from therapeutic procedures are often large (as high as 40 or more Gy).
Adults and children treated with radiation for malignant and benign conditions or exposed for diagnostic reasons have been studied. Large cohorts of patients treated with radiotherapy for cervical cancer, breast cancer, Hodgkin lymphoma, testicular cancer, and childhood cancer have been followed for decades (Travis et al. 2008). Although these cohorts usually lack detailed treatment information, they provide information on which site-specific cancers show elevated rates, and also on the temporal patterns of risk and the dependence of risk on sex, age at exposure, and attained age. Importantly, these studies have been the source of subjects for nested case-control studies in which detailed information on treatment is extracted and used as the basis for estimating radiation doses for individual subjects. In the past, radiation was also used to treat a number of benign conditions. Cohort studies of patients treated in adulthood for ankylosing spondylitis, benign breast disease, benign gynecological disease, and peptic ulcer have been carried out as have studies of persons treated in childhood for tinea capitis, enlarged thymus gland, enlarged tonsils, and skin hemangioma (NRC 2006). In many of these studies, doses to several organs were estimated for the entire cohort. Studies of persons exposed to diagnostic radiation have also been conducted, but tend to be limited by low doses and other difficulties (Ron 2003); however, studies of tuberculosis and scoliosis patients receiving multiple exposures have been informative (NRC 2006). Finally, studies of patients exposed to radionuclides such as thorotrast, radium, and iodine-131 have been carried out.
Selected findings from these studies are summarized below, and illustrate what can be learned from studies of medically exposed persons. Radiation-related risks of leukemia, breast cancer, lung cancer, and thyroid cancer are emphasized.
An association of leukemia and medical radiation exposure was first identified in a study of ankylosing spondylitis patients conducted by Court-Brown and Doll (1957) more than fifty years ago. Since then, leukemia has been linked with radiation exposure or found in excess in many studies of medically exposed persons, primarily adults (UNSCEAR 2000). Case-control studies of patients treated for cervical cancer (Boice et al. 1987) and ankylosing spondylitis (Weiss et al. 1995) indicate that risk of leukemia first increases with dose and then decreases, suggesting cell killing at doses exceeding 2 to 4 Gy. Little et al. (1999) conducted a pooled analysis of data from these two studies along with A-bomb survivors, and confirmed this general pattern. However, no downturn in leukemia risk at high doses was observed in patients treated for testicular cancer (Travis et al. 2000) or breast cancer (Curtis et al. 1992). In a study of patients treated for uterine cancer (Curtis et al. 1994), the risk for relatively low dose and dose-rate brachytherapy (mean dose = 1.72 Gy) was similar to that for higher-dose external beam therapy (mean dose = 9.88 Gy), indicating that the risk per unit of dose was much higher for brachytherapy than for external beam therapy. The pattern of risk with age and time since exposure has been found to depend on the leukemia subtype being evaluated (Little et al. 1999, Inskip et al. 1990, Inskip et al. 1993). The evidence for radiation-related leukemia in medically exposed persons is limited primarily to forms other than chronic lymphatic leukemia (CLL) although Richardson et al. (2005) cite several instances of non-significantly elevated rates for CLL in patients treated for benign disease, particularly in patients with long follow-up.
Several case-control studies of diagnostic radiation exposure in utero have reported excess leukemia and other childhood cancers. The earliest (Stewart et al. 1956, Stewart et al. 1958) and largest of these is the Oxford Survey of Childhood Cancers (OSCC), in which a dose-response relationship was demonstrated for all childhood cancers (Bithell 1989, Doll and Wakeford 1997). These studies have been interpreted as indicating that effects might occur at doses as low as 10 to 20 mSv (Doll and Wakeford 1997). Boice and Miller (1999), however, argue that risk estimates from these in utero studies are uncertain and question whether or not there is a radiation effect for cancers other than leukemia. Wakeford and Little (2003) analyzed data on childhood cancers from the OSCC and from Japanese A-bomb survivors who were exposed in utero or as young children and concluded that the data supported a causal explanation but that risk estimates were uncertain.
A dose-response relationship for breast cancer has been demonstrated in numerous medical radiation studies. Preston et al. (2002b) conducted a pooled analysis of breast cancer incidence data from seven medical cohorts of patients treated for benign disease and from the LSS cohort (Table II). A significant dose-response was found for each of the cohorts evaluated. A linear function provided a reasonable description of all the data although there was marginally significant evidence of a downturn at high doses. Data from the LSS, tuberculosis, and enlarged thymus cohorts could be described by models with similar dependencies of the ERR (or EAR) on attained age and age at exposure. The overall magnitude of the EAR did not vary significantly among these cohorts, but the ERR was about 4 times higher for the LSS than for the other cohorts, possibly reflecting the low breast cancer incidence rates in Japan. Recently, a study of scoliosis patients who were monitored with radiography also found evidence of a dose-response for breast cancer even though the mean cumulative dose was only 0.12 Gy (Ronckers et al. 2008).
In addition to the breast cancer studies discussed above, case-control studies of breast cancer in patients treated for malignant disease have been conducted. Studies of contralateral breast cancer have found little evidence of dose-response when data from all patients were analyzed (Storm et al. 1992, Boice et al. 1992, Stovall et al. 2008, Hooning et al. 2007). However, all but the Storm et al. study found significant dose-response relationships for patients exposed at ages under 45 years (under 40 years in the Stovall et al. study). Significant dose-response relationships for breast cancer have also been observed in Hodgkin lymphoma (Van Leeuwen et al. 2003) and childhood cancer (Guibout 2005) patients. One of the more informative studies is an international study of breast cancer in patients diagnosed with Hodgkin lymphoma under age 30 (Travis et al. 2003). The study included 105 cases and 266 matched controls, and a strong radiation dose-response that appeared to be linear over the range 0 to 42 Gy was observed. Travis et al. also found that either treatment with alkylating agent chemotherapy or receiving a dose to the ovaries that exceeded 5 Gy reduced both the overall risk of breast cancer and the slope of the radiation dose-response.
Significant dose-response relationships for lung cancer have been observed in studies of patients treated for ankylosing spondylitis (Weiss et al. 1994), peptic ulcer (Carr et al. 2002), female breast cancer (Inskip et al. 1994), and Hodgkin lymphoma (van Leeuwen et al. 1995). In a large international study with 222 cases and 666 controls (Travis et al. 2002; Gilbert et al. 2003), a clear dose-response was observed. The mean doses in this study exceeded 20 Gy, and a linear dose-response was compatible with the data. The combined effects of radiation dose and therapy with alkylating agents were estimated to be almost exactly additive whereas the combined effect of radiation and smoking was better described by a multiplicative relationship. It is noteworthy that there was no evidence of a lung cancer dose-response in studies of tuberculosis fluoroscopy patients (Howe 1995, Davis et al. 1989); in fact, the dose-response appears to be incompatible with that in the A-bomb survivors. Howe found no evidence of bias from several potential sources and attributed the finding to the fractionated nature of the exposure. However, modification of radiation-induced lung cancer risk by the presence of lung disease (tuberculosis) might contribute.
Radiation-related thyroid cancer has also been studied in many medically exposed populations. Ron et al. (1995) conducted a pooled analysis of thyroid cancer in A-bomb survivors, four cohorts of persons exposed for medical reasons in childhood (under age 15 years), and two case-control studies of persons exposed in adulthood. While there was no evidence for a dose-response in persons exposed in adulthood, a linear dose-response was observed for those exposed in childhood with a strong decrease in the ERR with increasing age at exposure. The results of these analyses have served as the basis for risk estimation for the National Institutes of Health (NIH) Radioepidemiologic tables (NIH 2003) and for the BEIR VII Committee (NRC 2006). More recently, Sigurdson et al. (2005) conducted a thyroid cancer case-control study nested in a cohort of 14,054 5-year survivors from the Childhood Cancer Survivor Study cohort. In contrast to the linear relationships described above for breast and lung cancer in Hodgkin lymphoma survivors, this study found that risks initially increased up to about 15 Gy but then declined as shown in Figure 3. The authors concluded that these findings were consistent with cell-killing at higher doses.
In addition to thyroid cancer, dose-response relationships have also been observed in childhood cancer survivors for secondary leukemia (Hawkins et al. 1992) and bone cancer (Hawkins et al. 1996, Wong et al. 1997). In hereditary retinoblastoma patients, radiation appears to enhance the already elevated risks of sarcomas and other cancers (Wong et al. 1997, Kleinerman et al. 2005).
Little (2001) compared estimates of the ERR Gy−1 from 65 studies of medically exposed persons with estimates based on LSS site-specific cancer incidence (Thompson et al. 1994) and mortality data (Preston et al. 2003). He found that the estimates based on the medical studies were usually lower than those based on the LSS, although in most cases the differences were not statistically significant. He also found that the ratios of LSS and medical estimates were greatest for medical studies with the highest doses, and concluded that cell sterilization largely accounted for the discrepancies. Little noted, however, that dose fractionation and differences in baseline risks might be other contributing factors.
The medical studies described above all involved low-LET external exposure. Studies of patients exposed internally to thorotrast and radium (high-LET) have also been conducted and have been used to estimate liver and bone cancer risks, respectively, from alpha-emitters (NRC 1988, National Council on Radiation Protection and Measurements 2001, UNSCEAR 2008).
Numerous studies have evaluated cancer mortality or incidence in occupationally-exposed persons in the nuclear, medical, mining, and aviation industries (NRC 2006). For estimating risks of external low-LET exposure, the most informative studies are those of nuclear-industry workers for whom individual estimates of doses have been obtained over time with the use of personal dosimeters. Because exposure has been deliberately limited as a protection to the worker, these studies offer a direct assessment of risks resulting from exposure to radiation at low doses and dose rates, and thus provide a check on A-bomb survivor-based estimates that form the basis of radiation protection standards. Beginning in the 1980's, findings were reported from studies of workers in several nuclear facilities that began operations in the 1940's or 1950's in the United States, United Kingdom, and Canada. A major limitation of these studies is the low statistical power and potential for confounding inherent in studying small risks (Gilbert 2001). Not surprisingly, these studies produced a wide range of risk estimates usually with confidence intervals that ranged from no risk to risks several times those predicted by linear extrapolation from the Japanese A-bomb survivors. To increase the precision of risk estimates, combined analyses of data from multiple nuclear worker cohorts were carried out on a national (Gilbert et al. 1993; Carpenter et al. 1994) and international scale (Cardis et al. 1995).
Findings from the two largest and most recent of these studies are presented here (Table III). The 15-country study combined data from nuclear worker studies in 15 countries, many of which were initiated especially for this effort (Cardis et al. 2005a, Cardis et al. 2007). The final dose-response analyses were based on about 400,000 workers with a mean cumulative dose of 19.4 mSv. The dose-response for leukemia excluding CLL was of borderline statistical significance, and the estimated ERR of 1.9 Sv−1 was similar to the estimate of 1.5 Sv−1 based on analyses of male A-bomb survivors exposed between the ages of 20 and 60 years using a linear-quadratic model. The estimated ERR for all cancers excluding leukemia was 0.97 Sv−1, higher than but statistically compatible with the linear estimate of 0.32 Sv−1 based on solid cancers in A-bomb survivors. Muirhead et al. (2009) recently reported on the third analysis of data from the National Registry for Radiation Workers, including about 174,500 workers, many of whom were included in the 15-country study. Statistically significant dose-response relationships were observed for mortality from leukemia excluding CLL (ERR=1.7 Sv−1) and for mortality from all cancers excluding leukemia (ERR= 0.28 Sv−1). These estimates are similar to estimates based on male A-bomb survivors exposed between the ages of 20 and 60 years. Results based on incidence data were very similar to those based on mortality data.
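The worker-versus-LSS comparisons above all rest on a simple linear excess relative risk model. As an illustrative sketch (the function and variable names are my own; the numerical slopes and mean dose are those quoted above), the relative risk at a given dose is:

```python
def relative_risk(dose_sv: float, err_per_sv: float) -> float:
    """Relative risk under a linear excess relative risk (ERR) model:
    RR(D) = 1 + beta * D, where beta is the ERR per Sv."""
    return 1.0 + err_per_sv * dose_sv

# Illustration with values quoted in the text: the 15-country estimate of
# 0.97 per Sv for all cancers excluding leukemia, evaluated at the study's
# mean cumulative dose of 19.4 mSv (0.0194 Sv).
rr_worker = relative_risk(0.0194, 0.97)  # ~1.019, i.e. roughly a 1.9% excess
rr_lss = relative_risk(0.0194, 0.32)     # ~1.006 under the LSS-based slope
```

The small absolute difference between these two relative risks at typical occupational doses is one way to see why individual worker studies have had limited statistical power.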
Data on Chornobyl liquidators, or emergency workers, are another source of information on risks from exposure to protracted low-LET radiation (Table III). Leukemia has received special attention because in A-bomb and medical studies it has been shown to have a much higher relative risk at a given dose than other cancers, and because excess risks have been observed as early as 2 to 5 years following exposure. Two leukemia case-control studies, one nested in a cohort of liquidators in the Ukraine (Romanenko et al. 2008) and the other in cohorts of liquidators from Belarus, the Russian Federation and the Baltic countries (Kesminiene et al. 2008), have recently been carried out. In both studies, bone marrow doses for each subject were estimated using a dosimetry method that combines detailed interview data on worker activities and locations with a database of field exposure measurements. The study in the Ukraine included 71 histologically confirmed leukemias diagnosed in 1986-2000 and 501 age- and residence-matched controls. The mean dose was 76.4 mGy. For the study in Belarus, Russia and the Baltic countries, analyses were based on 70 cases (40 leukemia, 20 NHL, and 10 other) and 287 matched controls. About half of the cases were histologically confirmed, and the median dose was 13 mGy. When all leukemias were analyzed, the dose-response was statistically significant for the Ukraine study (ERR Gy−1=3.4; p=0.01) but not for the Kesminiene study, although the risk estimate was positive (ERR Gy−1=4.8). In both studies, the confidence intervals and ERR Gy−1 estimates were compatible with both linear and linear-quadratic leukemia estimates obtained from A-bomb survivors (Table III). Somewhat surprisingly, the estimated ERRs Gy−1 for CLL were similar to those for non-CLL leukemia in both studies.
Studies of a cohort of 26,000 workers at the Mayak nuclear facility in the Russian Federation offer a unique source of information on protracted exposure to external radiation at high doses and on exposure to plutonium. Annual external doses in the early period of operations (1948-53) often exceeded one Gy, and the mean cumulative dose for workers in the main plants hired before 1973 was about 0.5 Gy, more than an order of magnitude higher than the mean dose in the 15-country study. Significant external dose-response relationships (p < 0.001) have been observed for leukemia, cancers of the lung, liver, and bone (the major sites of plutonium deposition, analyzed as a single category), and for all solid cancers other than lung, liver and bone (Shilnikova et al. 2003). An interesting finding is that for the period 3-5 years after exposure, the ERR Gy−1 for leukemia was estimated to be more than an order of magnitude larger than for the period 5 or more years after exposure. Since the Shilnikova et al. analyses were published, external dose estimates have been substantially improved; analyses incorporating these improvements are underway.
The Mayak worker cohort is the only cohort with adequate statistical power for quantifying dose-response relationships for exposure to plutonium, an alpha emitter. Sokolnikov et al. (2008) quantified this relationship for lung, liver, and bone cancer and investigated the dependence on gender, attained age, and age at first exposure. Statistically significant plutonium dose-response relationships were found for all three types of cancer, and linear functions described the dose-response for lung and liver cancer reasonably well. Plutonium dosimetry is extremely challenging, and despite extensive efforts to improve plutonium dose estimates, dose-response analyses are likely to be biased as a result of dosimetry uncertainties.
The earliest studies of occupational exposure to radiation were those of underground miners exposed to inhaled alpha-emitting radon progeny. Radon had been identified as a carcinogen by the 1950's, and studies of miners in the Colorado plateau and elsewhere were established primarily to provide a basis for regulatory limits for workers in the uranium industry. Starting in the 1980's, concern about risks from residential radon exposure led to interest in using the underground miner studies to estimate these risks. The most comprehensive evaluation of lung cancer risks in underground miners is provided by a combined analysis of data from 11 underground miner cohorts (Lubin et al. 1995, NRC 1999). These data, which included a wide range of exposures and exposure rates, supported linearity of the exposure-response relationship for lung cancer, and indicated that the linear slope increased with decreasing exposure rate or increasing exposure duration (at least at high exposures). Six cohorts with information on smoking allowed comparison of risks in smokers and never-smokers and confirmation of a significant lung cancer exposure-response in never-smokers (64 lung cancer deaths occurred in these subjects). These analyses also indicated that the absolute increase in lung cancer risk attributable to radon progeny was greater in smokers than in never-smokers, but that the combined effect of smoking and exposure to radon progeny was less than multiplicative. The models developed in these analyses were used to estimate that 10% or 14% (depending on which of the two models was used) of lung cancer deaths occurring each year in the U.S. were attributable to indoor radon, with most of these deaths occurring in smokers.
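The sub-multiplicative joint effect of smoking and radon progeny can be made concrete with a hedged sketch. The relative risk values below are hypothetical, chosen only to show how additive and multiplicative joint-effect models bound the possibilities; they are not estimates from the miner analyses:

```python
def joint_rr_multiplicative(rr_a: float, rr_b: float) -> float:
    """Joint relative risk if the two risk factors multiply."""
    return rr_a * rr_b

def joint_rr_additive(rr_a: float, rr_b: float) -> float:
    """Joint relative risk if the two excess relative risks simply add."""
    return rr_a + rr_b - 1.0

# Hypothetical values for illustration only (not estimates from the text):
rr_smoking = 10.0  # smokers vs. never-smokers
rr_radon = 1.5     # a given radon progeny exposure vs. none

high = joint_rr_multiplicative(rr_smoking, rr_radon)  # 15.0
low = joint_rr_additive(rr_smoking, rr_radon)         # 10.5
# A sub-multiplicative interaction, as reported for the miner data,
# would fall between these two bounds.
```

Under either model the absolute excess risk from radon is larger in smokers, which is consistent with the finding that most radon-attributable lung cancer deaths occur in smokers.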
In response to the interest in residential radon exposure noted above, a large number of lung cancer case-control studies were carried out. Analogous to the nuclear worker studies, the objective of the case-control studies was to obtain risk estimates based on a direct assessment of indoor radon exposure for comparison with estimates obtained through extrapolation from data on underground miners exposed at generally higher doses and dose rates. As in the worker studies, a wide range of estimates with wide confidence intervals was obtained from individual studies. This led to two major efforts to pool data: one of 7 North American case-control studies (3662 lung cancer cases; 4966 controls) (Krewski et al. 2005) and the other of 13 European case-control studies (7148 lung cancer cases; 14,208 controls) (Darby et al. 2005). All studies made use of residential histories and alpha-track detectors to estimate exposure for individual subjects. A linear exposure-response function described the data well in both pooled analyses. The estimated excess odds ratio (EOR) was 0.11 per 100 Bq/m3 (95% CI: 0.00, 0.28) in the North American study, and 0.084 (95% CI: 0.03, 0.16) in the European study. However, when analyses of the European data were adjusted for random uncertainties in exposure measurements, the estimate increased to 0.16 (95% CI: 0.05, 0.31). All these estimates are compatible with an EOR per 100 Bq/m3 of 0.12 (95% CI: 0.02, 0.25) that would be predicted by downward extrapolation of the miner data.
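Under the linear excess odds ratio model used in the pooled residential radon analyses, the odds ratio at a given long-term radon concentration follows directly from the quoted slope. A minimal sketch (the function name is my own; the slope is the measurement-error-adjusted European estimate quoted above):

```python
def odds_ratio(radon_bq_m3: float, eor_per_100: float) -> float:
    """Odds ratio under a linear excess odds ratio model:
    OR(x) = 1 + EOR * x / 100, with EOR expressed per 100 Bq/m^3."""
    return 1.0 + eor_per_100 * radon_bq_m3 / 100.0

# At a long-term average concentration of 200 Bq/m^3, using the
# adjusted European slope of 0.16 per 100 Bq/m^3:
odds_ratio(200.0, 0.16)  # ≈ 1.32, a 32% increase in the odds of lung cancer
```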
Although several studies have linked thyroid cancer with exposure to external radiation in childhood, it was not until persons exposed as a result of the Chornobyl accident in 1986 were studied that the link between iodine-131 (I-131) exposure and thyroid cancer was clearly established. The very large number of persons who were exposed as a result of the accident offers an excellent opportunity to quantify thyroid disease risks and to gain a better understanding of factors that might modify risk. Because I-131 concentrates in the thyroid gland, thyroid cancer and other thyroid diseases are the main health effects of concern. An early thyroid cancer case-control study (Astakhova et al. 1998) and several ecological studies (Jacob et al. 1999, UNSCEAR 2000) provided strong evidence that exposure to I-131 increased thyroid cancer risks.
Two informative studies have recently reported findings. Cardis et al. (2005b) conducted a case-control study that included 276 histologically confirmed thyroid cancer cases and 1300 matched controls in Belarus or the Russian Federation who were under age 15 at the time of the accident. Individual dose estimates were based on interview data on location and dietary habits in the time period following the accident. The median dose from all radionuclides was 365 mGy in Belarus and 40 mGy in the Russian Federation. A strong dose-response was observed (p < 0.001), and was well described by a linear function over the range 0 to about 2 Gy with an EOR Gy−1 of 4.5 (95% CI: 1.2, 7.8). In iodine deficient regions, the risk was about three times higher than elsewhere, and administration of potassium iodide as a dietary supplement reduced risk by a factor of three. Tronko et al. (2006) conducted a cohort study of persons who were under age 18 and resided in heavily contaminated areas in Ukraine at the time of the accident. In this study, 13,127 individuals were screened for thyroid disease using ultrasound and palpation, and 45 thyroid cancers were identified and histologically confirmed. Dose estimates were based on radioactivity measurements in individual subjects shortly after the accident and on interview data. The mean dose was 200 mGy for cases and 78 mGy for non-cases. The dose-response could be described by a linear function, and the estimated ERR Gy−1 was 5.25 (95% CI: 1.70, 27.5). The risk estimates from these studies of I-131 exposure are similar to each other and also similar to the estimated ERR Gy−1 of 7.7 from a pooled analysis of thyroid cancer risks of persons exposed externally (Ron et al. 1995). In addition, a study of 2584 mother-child pairs from Ukraine who were in utero at the time of the accident or in the two months following found, based on a small number of cases, a non-significant increased risk of thyroid cancer (Hatch et al. 2008).
A cohort of about 30,000 persons who lived in close proximity to the Techa River is being studied because of exposure to cesium-137 and strontium-90 from radioactive materials released into the river by the Mayak nuclear facility, with the largest releases occurring in the early 1950's. Like the LSS cohort, this cohort includes both sexes and persons exposed at all ages. Extensive efforts have been made to develop and improve dose estimates for each member of the cohort. The mean dose to the stomach was 30 mGy, while the mean dose to the bone surfaces (primarily from strontium) was 300 mGy. Based on 1842 solid cancer deaths (excluding bone), Krestinina et al. (2005) found a clear dose-response for all solid cancer mortality in relation to external dose with an estimated ERR Gy−1 of 0.92 (95% CI: 0.2, 1.7). An increase in the ERR with both increasing age at entry and attained age was suggested, the reverse of findings from the LSS and other cohorts. There was also a strong dose-response for non-CLL leukemia with an estimated ERR Gy−1 of 6.5 (95% CI: 1.8, 24). Risk estimates for both solid cancer and non-CLL leukemia were higher than but statistically compatible with estimates from Japanese A-bomb survivors. However, the authors advise caution in interpreting risk estimates because of uncertainties in the dose estimates. Cancer incidence has also been investigated in the Techa River cohort. Results for solid cancer (Krestinina et al. 2007) and leukemia (Ostroumova et al. 2006) were generally similar to those for cancer mortality, and a dose-response relationship was demonstrated for breast cancer incidence (Ostroumova et al. 2008). Although not as large as the LSS cohort, the Techa River cohort offers important data on both internal and protracted exposure.
In the past 50 years, epidemiology has brought about enormous gains in our knowledge of cancer risks from radiation exposure. Epidemiologic studies have provided the necessary data for quantifying risks from medical, occupational, and environmental exposures and for setting radiation protection standards. Data on Japanese A-bomb survivors are the primary resource for estimating carcinogenic risks from low-LET external exposure, while studies of persons exposed for medical reasons provide important supplementary data, particularly on fractionated exposure at high doses. Most solid cancers have been found to be associated with radiation exposure, but risk estimates are most reliable for leukemia, all solid cancers combined, and cancers of the breast and thyroid. Numerous cohorts of underground miners and case-control studies of persons exposed in residences allow reliable estimation of lung cancer risks from exposure to radon progeny.
For radiation protection purposes, it is exposures at low doses (in the range 1 to 100 mGy) and low dose rates that are of primary interest. Yet risk estimates that form the basis of radiation protection standards are largely derived by linear extrapolation from data on persons exposed at higher doses and dose rates. In most risk assessments of low-LET exposure, linear estimates are reduced by a Dose and Dose Rate Effectiveness Factor (DDREF), usually in the range of 1.5 to 2, because of experimental evidence that risks at low doses and dose rates may be smaller than those predicted by linear extrapolation. However, the magnitude of the DDREF, and whether one is needed at all, remains highly uncertain. The existence of a large number of epidemiologic studies with well-characterized dose estimates has made it possible to investigate the shape of the dose-response. Most data are compatible with a linear dose-response for doses up to at least 2 or 3 Gy, but the possibility of curvature or a small threshold cannot be ruled out. Studies of nuclear workers and others exposed at lower doses and dose rates provide a more direct assessment of risks. Results thus far have generally supported the use of linear estimates obtained from higher dose data, with little indication that such estimates need to be reduced for exposure received at low doses and dose rates. However, imprecise estimates and potential for confounding limit what can be learned from low dose studies. Studies of persons exposed to radionuclides such as iodine-131, strontium, and plutonium also provide direct evidence on the effects of protracted exposure.
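The DDREF adjustment described above amounts to a simple division of the high-dose linear slope. A minimal sketch, using a hypothetical slope chosen only for illustration:

```python
def low_dose_err(linear_err_per_gy: float, ddref: float) -> float:
    """Apply a Dose and Dose Rate Effectiveness Factor (DDREF):
    the linear slope estimated from high-dose, high-dose-rate data
    is divided by the DDREF to estimate risk per Gy at low doses
    and dose rates."""
    return linear_err_per_gy / ddref

# Hypothetical high-dose slope of 0.5 per Gy, with the commonly used
# DDREF range of 1.5 to 2:
low_dose_err(0.5, 1.5)  # ~0.33 per Gy
low_dose_err(0.5, 2.0)  # 0.25 per Gy
```

A DDREF of 1 corresponds to using the linear extrapolation unchanged, which is what the nuclear worker results have so far broadly supported.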
Factors that modify risk have also been investigated. Data from the LSS (Preston et al. 2007) and several medical cohorts indicate that those exposed early in life have especially high excess relative risks for thyroid, breast, and skin cancers and for all solid cancers as a group. Radiation-related risks of solid cancers appear to persist throughout life although the relative risk may decrease as people age. These findings are taken into account in estimating lifetime risks. Interactions of radiation with other exposures such as smoking or chemotherapy have been investigated, but results have not been consistent.
Pooled analyses that combine data from several studies offer a broad overview of studies addressing a common issue and have allowed rigorous comparisons of results from different studies. In some cases, combining data has led to more precise risk estimates and a more powerful assessment of modifiers of risk. For example, pooled analyses of case-control studies of radon exposure in a residential setting clearly established an exposure-response relationship and confirmed that risk estimates based on studies of underground miners were reasonably appropriate.
Further follow-up of many of the cohorts described above should yield additional information on radiation risks especially for those exposed at younger ages. For example, 52% of the LSS cohort included in recent cancer incidence analyses remained alive at the end of 1998; 81% of those who were under age 30 at exposure remained alive (Preston et al. 2007). In the 15-country nuclear worker study, 94% of the workers were still alive at the end of the follow-up (Cardis et al. 2007). Updating of data on childhood cancer survivors will allow evaluation of second cancer risks as these survivors are followed into adulthood.
Dosimetry improvements can also increase the quality of information from current studies. Such improvements are being implemented in the Mayak worker and Techa River cohorts, which will allow more reliable estimates of risks based on the protracted doses in these cohorts. In addition, annual doses have recently been estimated for a cohort of nearly 100,000 radiologic technologists who have also received protracted doses (Simon et al. 2006b). Analyses of cancer incidence data from this cohort have identified excesses of non-CLL leukemia (Linet et al. 2005), breast cancer (Doody et al. 2006), thyroid cancer (Zabel et al. 2006), melanoma (Freedman et al. 2003), and basal cell carcinoma (Yoshinaga et al. 2005) in subjects who worked before 1950 when doses were the highest. The addition of dose estimates will permit evaluation of dose-response relationships in this cohort.
Further analyses of existing data can also be informative. Pooled analyses of data from several studies evaluating a common health endpoint can increase statistical power especially for assessing interactions, and may also help to clarify seeming discrepancies among studies. Increasingly, efforts are being made both to estimate uncertainties in dose estimates, and to take account of uncertainties in dose-response analyses. In studies where this has been done, estimated risk coefficients have often increased, but adjustments have not always greatly modified results (Schafer and Gilbert 2006).
Several new studies are underway and additional studies may be needed. Studies of cancer survivors exposed at high doses can provide important information on interactions and on patterns of risk by age and time, particularly if detailed treatment data are collected. The British Childhood Cancer Survivor Study (BCCSS) includes about 15,000 5-year survivors of childhood cancers. This study, along with the Childhood Cancer Survivor Study in the United States (Robison et al. 2002), allows evaluation of the role of radiation therapy in the risk of second cancer. There is currently little information on second gastrointestinal cancers, but an ongoing case-control study should help to remedy this. Newer medical technologies have led to questions regarding the risk from the radiation exposures involved. To address these questions, studies of children exposed to CT scans are underway, as are studies of radiologic technologists, radiologists and other physicians conducting interventional fluoroscopic procedures. Finally, molecular studies can lead to a better understanding of how genetic susceptibility might modify the radiation dose-response. For example, the WECARE (for Women's Environmental Cancer and Radiation Epidemiology) Study is a breast cancer case-control study designed to evaluate the joint roles of radiation and breast cancer susceptibility genes in developing a second primary breast cancer (Bernstein et al. 2004).
Although the vast amount of information obtained from epidemiology allows reasonably precise estimates of radiation risks, there is more to be learned. Estimates of risks at the very low doses and dose rates that are of interest for radiation protection remain uncertain. There is also need for better information on how radiation risk is modified by other exposures and host characteristics, which can help to identify individuals who are at especially high risk from radiation exposure. Answering these questions is likely to require both additional epidemiologic data and input from other disciplines.
I would like to thank Rochelle Curtis, Maureen Hatch, Kiyohiko Mabuchi, and Elaine Ron for helpful comments on a draft of this manuscript.
Declaration of interest: The author reports no conflicts of interest. The author alone is responsible for the content and writing of the paper.