There is a very significant possibility that terrorists will, in the near future, initiate an event in which large numbers of people are subjected to significant doses of ionizing radiation. Such exposures also remain a very plausible scenario in the course of warfare, and an accident could lead to similar concerns. In any such radiation event, the need for rapid and accurate dose assessment is clear. Even if the exposures are unlikely to result in acute illness, experiences such as the public reaction to the Three Mile Island incident and the biological consequences of the disaster at Chernobyl have vividly illustrated the panic and uncertainty that can occur when populations may have been exposed to unknown doses of ionizing radiation. The panic and disruption are likely to be even greater in the event of the release of a “dirty bomb,” other acts of terrorism, or nuclear warfare, all of which involve potential exposures to unknown amounts of radiation.
The appropriate management of such situations requires a capability to determine the magnitude of the exposure to individuals. While many individuals likely will not have received clinically significant doses of radiation, the concern and probable panic arising from uncertainty about the dose received would be very damaging. It is also, of course, very desirable to identify which individuals have received clinically significant exposures and the magnitude of those exposures. Appropriate procedures could then be initiated immediately for individuals at significant risk, while those without a significant probability of acute effects could be reassured and released without further medical treatment. The determination of dose should be made while the subject is still present: as illustrated by the experience with Hurricane Katrina as well as numerous other events, when major disruptions of infrastructure occur, it becomes extremely difficult to reconnect with individuals after they disperse from the scene.
Recent efforts in the development of radiation-mitigating agents (a prominent theme in the Centers for Medical Countermeasures for Radiation established by NIH) further increase the need for a prompt and accurate assessment of the risk of significant clinical effects from ionizing radiation. Many, if not most, of these agents are likely to be most effective when delivered as soon as possible after the exposure occurs. Effective mitigating agents, like essentially all therapeutic measures, will carry a significant risk of harm as well as potential benefits; administered in the absence of strong evidence of exposure, they run a high risk of doing more harm than good. The mitigating agents are also likely to have a range of doses over which their use is appropriate, so the accuracy of the dose assessment will be important for this purpose as well.
There are several potential methodological approaches to determine the radiation dose to individuals after a radiation incident:
- reconstruction of the dose patterns in the exposed environment
- measurements based on objective biological changes such as radiation-induced changes in chromosomes
- assessments based on clinical signs and symptoms
- measurements based on a physical change associated with the individual
Reconstruction of the pattern of dose distribution would be useful but is unlikely to be feasible in a timely and effective manner. Beyond the technical difficulty of making such an assessment promptly and accurately, several other factors make this approach unlikely to resolve the problem of estimating doses to individuals. Accurately placing an individual within the calculated radiation field will be difficult, especially under the chaotic conditions likely to accompany an incident in which radiation is released. In addition, the public is likely to be highly skeptical of the validity and accuracy of official pronouncements of exposure and risk, especially if the message is that no significant exposures occurred. The credibility problems of official announcements have been well demonstrated in previous incidents in which radiation was released, such as the Chernobyl and Three Mile Island events.
The use of biological markers, such as chromosomal changes in blood cells, to measure exposure has some very attractive features. It has a long history of use in assessing radiation exposures, and the results are based on measurements in the individual, indicating the total effective exposure, especially for the hematopoietic system. There are also some additional well-characterized biological effects of ionizing radiation that occur in the dose range of interest. Unfortunately, these methods have some very significant limitations for an incident in which large numbers of individuals are potentially involved and an immediate assessment is desired. Current methods such as dicentric assays and FISH require that samples be obtained from the individuals (which may be difficult under field conditions) and sent to a laboratory where expert personnel can make the assessment (which severely limits the capacity to process large numbers of samples); they require waiting relatively long periods for the changes to become manifest (resulting in delays of up to several days before results are available); and they then require locating the individuals from whom the samples were taken. Moreover, the quantitative relationship between some of these changes and radiation dose is not fully established.
While the continuing clinical treatment of exposed individuals ultimately will be based largely on the patient's signs and symptoms rather than on measurements alone, signs and symptoms will provide very limited guidance for early determination of radiation dose. This is because clinically manifested biological effects, such as the various radiation syndromes, require considerable time to become evident, except at extraordinarily high doses. While it has been suggested that the time to nausea or vomiting might be a useful guideline, this seems unsuitable amid the confusion and panic likely to occur in an event with the potential for mass exposure. Individuals vary in the degree to which they experience nausea and vomiting, and the presence of other persons with these symptoms greatly affects their incidence. The general confusion and trauma may limit the ability of individuals to remember accurately when they experienced the symptoms. Also, it would be quite feasible for terrorists to include emetic agents within the radiation-releasing device.
Methods based on physical changes in the individual that can be measured immediately, without processing, are an attractive alternative in principle. These would not require the evolution or processing needed for the biologically based methods and would be more readily calibrated. While several different physical methods have been suggested, such as radiation-induced changes that can give rise to luminescence (Godfrey-Smith et al., 1997), currently only one physical method appears to be suitable: electron paramagnetic resonance (EPR, sometimes termed ESR). This approach is based on long-lived radiation-induced paramagnetic species that are stabilized in the matrix of teeth and bones or in the keratin of fingernails, toenails, or hair. Ionizing radiation creates unpaired electron species as a direct result of its interactions with molecules. While most of these react immediately and disappear, some will be stabilized for very long periods of time (up to 10^8 years) if they are generated in an appropriate matrix. The hydroxyapatite component of bone and teeth is an especially effective such matrix. This phenomenon has previously been exploited to make measurements on isolated teeth, but not in vivo.
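The claim that a physical method is "more readily calibrated" rests on the approximately linear growth of the radiation-induced signal with dose that is typical of solid-state dosimeters. The sketch below illustrates such a calibration in Python; the calibration points, signal units, and function names are purely illustrative assumptions, not measured EPR data.

```python
# Illustrative linear calibration of a dosimetry signal.
# Assumes signal amplitude grows linearly with dose, as is typical for
# solid-state dosimeters; all numbers here are invented for illustration.

def fit_calibration(doses, amplitudes):
    """Least-squares fit of amplitude = slope * dose + intercept."""
    n = len(doses)
    mean_x = sum(doses) / n
    mean_y = sum(amplitudes) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(doses, amplitudes))
             / sum((x - mean_x) ** 2 for x in doses))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def estimate_dose(amplitude, slope, intercept):
    """Invert the calibration line to recover dose from a measured amplitude."""
    return (amplitude - intercept) / slope

# Hypothetical calibration points (dose in cGy, signal in arbitrary units)
cal_doses = [0, 100, 200, 400]
cal_amps = [0.10, 0.30, 0.50, 0.90]
slope, intercept = fit_calibration(cal_doses, cal_amps)
```

Once the calibration line is established for the instrument and sample type, a single in-field measurement of signal amplitude can be converted to a dose estimate immediately, which is the practical advantage over assays that require laboratory processing.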
We report here such a method, based on in vivo EPR of teeth, which appears to meet most or all of the requirements for after-the-fact dosimetry applicable to individuals. (Several other papers in this issue report on the potential use of fingernails and toenails.) The characteristics of in vivo EPR dosimetry based on teeth that make it especially appropriate for the problem include:
- Sufficient sensitivity to measure clinically relevant doses
- Provides unambiguous data sufficient to differentiate individuals into the designated dose subclasses
- Applicable to individuals
- Can be measured at any time after the radiation exposure
- Provides the data rapidly, while the subject is still present
- Can operate in a variety of environments
- Can be operated by minimally trained individuals
In assessing the potential value of in vivo EPR dosimetry, it is essential to focus on the real minimum requirements for this type of retrospective dosimetry. At the most basic level, these are to determine reliably whether:
- the individual has received a dose below that which has a significant probability of leading to near term acute symptomatology
- the individual has received a dose that has a significant probability of leading to near term acute symptomatology
- the individual has received a dose above that for which there is a significant probability that therapeutic intervention will not be effective.
It is important to keep in mind what information on exposure dose is needed for effective medical decision-making: a threshold of about 150 cGy and an accuracy of about ±50 to 75 cGy would be more than adequate for this purpose. In vivo EPR dosimetry appears to meet these criteria readily, and would probably still do so even with uncertainties as high as ±100 cGy. This is because knowledge of the quantitative relationship between dose and the clinical response of human subjects to ionizing radiation is limited. It is probably also true that, even with more extensive data, the intrinsic variation in the response of individuals to ionizing radiation is large enough that very precise dose information could not usefully improve the categorization of subjects into action classes.
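As a concrete illustration of how these cutoffs might drive triage, the decision rule below encodes the three-way classification in Python. The 150 cGy threshold and the ±75 cGy uncertainty come from the discussion above; the upper cutoff (here 1000 cGy), the handling of borderline readings, and the function and category names are illustrative assumptions only, not clinical guidance.

```python
# Illustrative triage sketch. Thresholds are assumptions for demonstration:
LOWER_CGY = 150.0   # below this, acute symptoms are unlikely (from the text)
UPPER_CGY = 1000.0  # hypothetical cutoff above which intervention may fail

def triage_category(dose_cgy: float, uncertainty_cgy: float = 75.0) -> str:
    """Classify a measured dose (cGy) into the three action classes.

    Readings within the stated measurement uncertainty of a threshold
    are flagged for follow-up rather than forced into a class.
    """
    if dose_cgy + uncertainty_cgy < LOWER_CGY:
        return "reassure"    # no significant probability of acute effects
    if dose_cgy - uncertainty_cgy > UPPER_CGY:
        return "expectant"   # intervention unlikely to be effective
    if (abs(dose_cgy - LOWER_CGY) <= uncertainty_cgy
            or abs(dose_cgy - UPPER_CGY) <= uncertainty_cgy):
        return "follow-up"   # too close to a cutoff to classify reliably
    return "treat"           # significant probability of acute symptoms
```

The point of the sketch is that, with thresholds this far apart, even a ±75 to 100 cGy measurement uncertainty leaves most readings unambiguously inside one class, consistent with the argument that greater precision would not change the action taken.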
Of course, for some other uses, such as estimating the probability of long-term effects of the exposure such as carcinogenesis, it would be desirable for the sensitivity and accuracy of the technique to be greater than that discussed above. Although not a subject of this paper, there are some potential means to achieve such capabilities with in vivo EPR dosimetry, especially by extending the time of measurement.