Starting with the assumption that a device to detect unplanned radiation exposures is technically superior to current technology, we examine the additional stakeholders and processes that must be considered to move the device from the lab into use. Its intended use is to provide reliable information for triaging people for early treatment of exposure to ionizing radiation that could lead to Acute Radiation Syndrome. The scenario is a major accident or terrorist event that leaves a large number of people potentially exposed, with the resulting need to identify those who should be treated promptly and those who should not. In vivo EPR dosimetry is the exemplar of such a technique.
Three major areas are reviewed: policy considerations, regulatory clearance, and production of the device. Our analysis of policy-making indicates that the current system is very complex, with multiple significant decision-makers who may have conflicting agendas. Adoption of new technologies by policy-makers is further complicated because many sources of expert input already have taken public stances or have reasons to prefer current solutions; e.g., some may have conflicts of interest in approving new devices because they are involved with the development or adoption of competing techniques. Regulatory clearance is complicated by the inability to collect evidence via clinical trials of the device's intended use, but pathways for approval for emergency use are under development by the FDA. The production of the new device could be problematic if the perceived market is too limited, particularly for private manufacturers; for in vivo EPR dosimetry the potential for other uses may be a mitigating factor.
Overall we conclude that technical superiority of a technique does not in itself assure its rapid and effective adoption, even where the need is great and the alternatives are not satisfactory for large populations. Many important steps remain to achieve the goals of approval and adoption for use.
“Build a better mouse-trap and the world will beat a path to your door.” This optimistic sentiment about solving a well-defined problem (usually attributed to Ralph Waldo Emerson) ignores the reality facing would-be inventors who also need to satisfy regulatory and patent laws, build a sound business plan, and hire experts in advertising and marketing as well as a savvy chief executive officer who can fend off giant corporations willing to buy out or undercut their device.
In the spirit of exposing the complexities of moving a scientifically sound device out of the inventor’s lab and into ‘the world’ of Emerson’s credo, this paper discusses some principal factors that could affect the adoption of a technology that meets Emerson’s ‘better mousetrap’ criterion (i.e., technical superiority). However, we focus on devices that seek to solve a very important but novel national problem, i.e., one for which there is little experience and which presents special difficulties in establishing, both scientifically and practically, the capabilities of any device to perform well in real-world situations.
The problem we focus on is the need to effectively triage treatment choices for a large number of people apparently exposed to doses of potentially lethal ionizing radiation under an unforeseen circumstance, such as a terrorist strike or a nuclear accident. The solution for this problem we feature is the use of in vivo Electron Paramagnetic Resonance (EPR) dosimetry to identify people exposed at a level requiring treatment. While we argue that EPR dosimetry is the ‘better mousetrap’, the criteria and problems apply to all techniques intended to solve this particular problem.
Evaluating any device for clinical use, be it for diagnostic or therapeutic purposes, involves much more than understanding its technical qualifications in a laboratory setting. It includes its capabilities in ‘ordinary’ usage, i.e., in the settings for which it is intended and as operated by those who would be expected to use it. There are a variety of concepts that have evolved to describe the dimensions of how well a device performs. The most common include: its efficacy, safety, appropriateness, effectiveness, productivity, efficiency, and cost-effectiveness. Briefly, as applied to a diagnostic device like EPR dosimetry, these typically focus on: its technical capability to detect exposure reliably and safely (i.e., its efficacy and safety—the usual focus of scientific and regulatory evaluation), the capability of being delivered to the right people at the right time (appropriateness), the ability to use the device reliably and validly when operated under usual circumstances, i.e. ‘in the field’ (effectiveness), the ability to process a maximum number of tests and obtain usable results to act on (productivity), the ability to produce the most results for the least cost (efficiency), and its relative ability to produce the best results for the least expense compared to alternative methods or devices (cost-effectiveness) (Flood et al., 2005).
In the circumstances of a nuclear event, other criteria may also apply, such as the device’s ability to be deployed to a region quickly and its ability to be operated effectively by generalist first-responders or operators after little training and with low maintenance (its convenience). The federal plan also emphasizes the need for the device to fit into an emergency plan that must be able to handle a variety of types of events (be scalable), to address numerous simultaneous events or be tailored for specific situations (be modular), and to adapt to new threats and risks (be flexible) (DHS, 2004). Finally, the National Response Plan builds on the assumption that the initial response involves local community level agencies, including tribal communities as appropriate (be accessible to local, state and federal partners) (DHS, 2004).
Even after addressing each of these concepts in evaluating a device, other important questions remain: What should be measured? How do you measure it? And why is it important to evaluate? The answers to these questions are complex for two primary reasons. (1) The measures selected to evaluate a device may address contradictory goals. For example, one device might be better at maximizing the number of people who can be sampled within a given timeframe by one ‘operator’ but may be very slow at providing results from the test that can be acted upon. (2) Stakeholders may have very different goals and perspectives about what is most important to evaluate. Their judgments may derive in part from their differing missions (for example, having regulatory oversight to ensure the safety and efficacy of a device versus being responsible for the public’s health in general or for responding to major disasters, either natural or from acts of war). Additionally, stakeholders’ expectations about the likelihood of an event occurring or being important to their mission will influence which type of measure they argue is most important. For example, if they believe that an event focused on a few hundred people is much more likely than one involving hundreds of thousands, then the ability to process huge numbers of people in a limited timeframe will be seen as less important.
For the remainder of this paper, we assume that the technical efficacy and safety of EPR has been scientifically established in the lab but has not yet passed all the hurdles to receive regulatory approval for its use. We turn next to discuss three major considerations for the adoption and implementation of EPR as the methodology of choice in assessment of individual dose exposure: general policy decisions related to making and implementing plans to measure and respond to exposure, regulatory approval of the technological capabilities (especially by the Food and Drug Administration [FDA]), and manufacturing of the device.
The problem to be solved by a device, from a technical viewpoint, is relatively straightforward: how to detect whether a person has received an exposure to ionizing radiation that is likely to produce serious clinically relevant acute symptoms, ranging from short-term morbidity such as nausea and vomiting to mortality within a few days up to six weeks. (Note: While long term effects such as increased cancer risk or teratogenic effects on fetuses can result from such exposure, we limit our discussion to the need to identify potentially treatable short term effects.)
The criteria for evaluating a new device’s technical capacity for regulatory purposes basically follow scientific standards of performance. For medical diagnostic devices, three basic stages of evaluation apply: the first requires compiling scientific evidence in animals of the basic efficacy of the device; the second establishes safety in healthy human volunteers; and the third examines the device's ability to reliably and validly detect clinically significant levels in exposed individuals.
However, even in this relatively straightforward stage of device development and testing, finding exposed individuals to test the device’s efficacy is problematic. While patients exposed to radiation therapy provide one type of sample, most have not received whole-body radiation nor potentially lethal doses, and most are not frantic for reassurance as would likely occur in a radiation accident (Stein et al., 2004).
The complicating factors for devising and testing a device to solve the problem of concern here arise from the emergency-related and unexpected nature of exposure and its potential to involve a large population with no choice in becoming at-risk and potentially frightened about the event as well as the exposure per se. It of course would be unethical to randomize people to accidental exposure to a potentially lethal dose of radiation in order to test such devices. So how does one apply all of the criteria listed above to test which device(s) should be approved for manufacture and deployment? (We will return to this question when we examine the FDA’s role in approving such devices for use.)
Three general types of scenarios leading to accidental exposure to radiation are usually delineated, differing particularly in their assumptions about how many people were exposed and at what likely level. They are:
As DHS Secretary Michael Chertoff commented recently in remarks at the Brookings Institution (DHS, 2006a), “Everybody’s nightmare scenario is a weapon of mass destruction, radioactive or a nuclear device, coming in through a container...to one of our ports.” To illustrate the potential magnitude of people involved and the consequences, we use estimates from a scenario developed by RAND to study response strategies following a catastrophic terrorist attack. In this scenario, terrorists hide a 10-kiloton nuclear bomb in a container ship headed to the Port of Long Beach, CA. Soon after being unloaded, it explodes. A ground-based explosion under such conditions is not only considered feasible, but produces particularly large amounts of radioactive debris so that fallout would cause much of the destruction. Using strategic decision-making games with leaders from the government and emergency response community, they estimated both short and intermediate consequences. These are the most dramatic outcomes projected during the first 72 hours: “60,000 people might die instantly from the blast itself or quickly thereafter from radiation poisoning. 150,000 more might be exposed to hazardous levels of radioactive water and sediment from the port, requiring emergency medical treatment. The blast and subsequent fires might completely destroy the entire infrastructure and all ships in the Port of Long Beach and the adjoining Port of Los Angeles. 6,000,000 people might try to evacuate the Los Angeles region. 2-3,000,000 people might need relocation because fallout will have contaminated a 500-km2 area. Gasoline supplies might run critically short across the entire region because of the loss of Long Beach’s refineries—responsible for one-third of the gas west of the Rockies” (Meade and Molander, 2006, page xvi).
Accurate and rapid assessment of absorbed dose in individuals following such an incident would be critical for positively identifying both those who may benefit from medical intervention and those who did not receive clinically significant exposures to radiation so that the latter group may be reassured of their status, thereby lessening the burden on the emergency medical system and avoiding unnecessary care.
In summary, in all three scenarios, there is a clear and urgent need for accurate and rapid triage of the affected population in the event of an incident in which there is the potential for significant exposure to radiation.
The purpose of measuring exposure to radiation is to be able to detect the level of radiation absorbed by the body following a radiation incident, for purposes of triaging people into care. Not all exposures will produce clinically significant results. The Centers for Disease Control and Prevention’s (CDC) fact sheet for physicians lists five conditions required to produce Acute Radiation Syndrome: the radiation dose must be large, external (i.e., the source is outside the patient’s body), penetrating (i.e., reaches internal organs), involve the whole body (or nearly all), and be delivered within a short period of time (CDC, 2005). The short-term initial symptoms of radiation exposure can include vomiting, nausea, and, at quite high doses, diarrhea. These may not always occur and can differ in being episodic or in the level of severity and duration, from a few hours to several days. There is often a latent stage, lasting from a few hours to several days, during which individuals exposed to high levels appear to be well, even though they may have suffered severe damage to bone marrow, the gastrointestinal system, or the cardiovascular/central nervous system, with symptoms appearing subsequently. Some levels of exposure are extremely likely to be lethal; others are responsive to treatment or are variable in how individuals respond. In addition to symptoms of radiation exposure, persons exposed to an explosive device may suffer blast or thermal injuries from the explosion or destruction of property, which may themselves be fatal or may worsen responses to the effects of radiation.
The issue for any device measuring exposure is: how can you identify individuals who were exposed (a) at a level too low to require treatment, (b) at a level that should be treated, or (c) at a level too high to benefit from treatment? In the first group, there are individuals who may seek treatment anyway. This group is generally called ‘the worried well’, and one important criterion for success of a diagnostic device will be to reassure people who received low or no dose that they do not need to seek treatment (Diamond, 2003). The other end of the exposure spectrum is also problematic: the group who are too compromised to be appropriate for treatment, especially in a setting with limited resources and providers. The available resources may determine the cut-off level for recommending non-treatment or postponement of treatment of exposed individuals, so that resources are not diverted to the care of those who are unlikely to benefit from treatment.
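The three-way split just described amounts to a simple decision rule. The sketch below is purely illustrative: the dose cutoffs, the function name, and the group labels are assumptions chosen for exposition, not clinical guidance; any real thresholds would be set by policy and by the resources available.

```python
# Illustrative triage by estimated whole-body dose in gray (Gy).
# The cutoffs below are placeholders, not clinical guidance.

LOW_CUTOFF_GY = 2.0    # below this: reassure, no immediate treatment
HIGH_CUTOFF_GY = 10.0  # above this: unlikely to benefit from treatment

def triage(dose_gy: float) -> str:
    """Map a dose estimate to one of the three triage groups."""
    if dose_gy < LOW_CUTOFF_GY:
        return "reassure"    # (a) too low to require treatment
    if dose_gy <= HIGH_CUTOFF_GY:
        return "treat"       # (b) should be treated
    return "expectant"       # (c) too high to benefit, given resources
```

Note that the upper cutoff is the resource-dependent one: in a setting with scarce providers and supplies, it might be lowered, shifting borderline cases out of the treatment group.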
While there are several techniques and technologies that might potentially be adopted as a first-line choice for triage, in vivo EPR dosimetry has several particularly attractive aspects: (1) It provides the results immediately after the measurement, which can take place in the field. Events such as Hurricane Katrina have demonstrated the potential for difficulties in communication between those attempting to manage the situation and the affected population; thus, it is very desirable to be able to obtain results for triage while the individual is still present so that advice on the next steps can be immediately communicated. (2) EPR measurements can provide an estimate of absorbed dose sufficient for clinical decision making within 5 to 10 minutes per individual; the estimate is not sensitive to the time elapsed since exposure; and minimally trained technicians can perform the measurements. (3) There are no unsolvable obstacles to having a large number of devices available that can be rapidly deployed—other than the policy, regulatory and manufacturing considerations discussed below.
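The 5-to-10-minute measurement time in point (2) translates directly into deployment arithmetic. As a back-of-the-envelope sketch (the 150,000 figure borrows from the RAND port scenario above; the 72-hour screening window, continuous round-the-clock operation, and the helper function itself are assumptions for illustration):

```python
import math

def devices_needed(people: int, minutes_per_measurement: float,
                   hours_available: float) -> int:
    """Devices required to screen `people` within the time window,
    assuming continuous, round-the-clock operation of every device."""
    measurements_per_device = (hours_available * 60) / minutes_per_measurement
    return math.ceil(people / measurements_per_device)

# Screening 150,000 potentially exposed people within 72 hours:
optimistic = devices_needed(150_000, 5, 72)     # 5 min per measurement
conservative = devices_needed(150_000, 10, 72)  # 10 min per measurement
print(optimistic, conservative)
```

Under these assumptions the answer is on the order of a few hundred devices (roughly 175 to 350), which is one reason the manufacturing and deployment considerations discussed later matter as much as the measurement physics.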
In contrast, other techniques that also could be used, including some that are already part of the planning process, have distinct limitations for their use in the triage of a large number of subjects:
Policy analysts distinguish between two types of policy-making (Longest, 2006), often employing the jargon, policy vs. Policy. The former is the more general term and refers to any important, far-reaching decisions that broadly influence the development or delivery of medical devices (in this example) irrespective of whether it involves government; decision-makers may include agencies or organizations or individual actors. Note that it is broad enough to include partners or stakeholders outside of government and may include agencies within government whose primary purpose may not include medical devices. The latter term refers to public decisions and specifically would not include internal policies made by private organizations such as whether to purchase a device. While public decisions are especially likely to involve government at all levels and branches, they also can include private organizations, such as accrediting organizations or professional associations or disaster relief non-government organizations. Finally, analysts distinguish policy-making from politics where decision-makers and other stakeholders use power and manipulation of processes to influence the decisions and outcomes of policies, either for their own self-interest or social good. In this paper, we focus principally on public policies and touch on some politics that influence these decisions.
Two basic policy issues are relevant for determining what technologies will be designated for use to carry out triage for an event that has the potential to involve mass casualties. The first is who decides what plans, procedures, protocols and technologies would be potentially available for use in the case of such an event. The second is how those plans, procedures, protocols and technologies are implemented into operational policy.
The first point to understand is that there are multiple organizations that are officially and unofficially involved in the decision-making process regarding the adoption and implementation of plans, procedures, protocols and technologies for emergency response. Because there is no single decision maker, the agencies involved often have potentially competing interests and missions, priorities and expectations about ‘the problems’ to be solved by the devices, prior decisions and levels of commitment to one alternative, and motivation to protect their agency’s autonomy and importance. Moreover, these may evolve over time (including their needs to be responsive to changed parties in power and the currently elected government officials).
All these competing factors and multiple actors play a role in defining the outcomes of policies and procedures. The resulting reality is far different than the expectation of a single voice whose overall goal is the safeguarding of and response to threats to the homeland, as delineated by the Department of Homeland Security in their National Response Plan. For example, Section 312(c) of the Homeland Security Act of 2002 established the Homeland Security Institute as “responsible for identification of instances when common standards and protocols could improve the interoperability and effective utilization of tools developed for field operators and responses” (Homeland Security Institute, 2006). In their “Preparedness & Response” documents the DHS states that, “In the event of a terrorist attack, natural disaster or other large-scale emergency, the Department of Homeland Security (DHS) will provide a coordinated, comprehensive federal response and mount a swift and effective recovery effort. The department assumes primary responsibility for ensuring that emergency response professionals are prepared for any situation” (DHS, 2006b), which might include training on a specific technology, e.g., EPR or others.
Looking at these statements one might infer that DHS is the only federal agency responsible for the adoption, implementation and deployment of technologies for events involving radiation exposure to the population. However, in implementing its mission to coordinate across agencies, DHS outlined in its 2004 National Response Plan (NRP) the plurality of federal agencies potentially involved in an emergency response in the case of a terrorist attack with nuclear devices. The list shows agencies that are sometimes primary and sometimes secondary in responding to incidents involving potential exposures to radiation; some are only involved in planning; some depend on which facilities are involved in the attack (DHS, 2004).
The 2002 NRP contained a slightly simplified list of agencies (DHS, 2002, Appendix Tables 6.1-6.6). Departments included Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Interior, Justice, Labor, State, Transportation, and Veterans Affairs. It also included the Environmental Protection Agency, National Aeronautics and Space Administration, Nuclear Regulatory Commission and General Services Administration. The table names additional agencies with general responsibility for emergency response or anti-terrorist or other security responses, such as the Federal Bureau of Investigation. Finally, the NRP also stated that “tribal, local and state” authorities are responsible for the first line of action in the case of a catastrophic event. This statement in turn can lead to the expectation that the “tribal, local and state” authorities are the ones responsible for the adoption and implementation of plans, procedures, protocols and technologies.
In such a context for policy decisions, promoting the adoption of a particular technology, e.g., EPR, at the federal and local level becomes challenging in terms of the time and resources required to reach each agency. Fortunately, most tribal, local and state governments may choose not to invest the time and resources in developing and creating procedures for these types of events. Instead most rely on the policies, procedures, protocols and technologies developed by many of the federal agencies.
The Centers for Disease Control and Prevention (CDC), the National Council on Radiation Protection & Measurements (NCRP), and the Radiation Emergency Assistance Center/Training Site (REAC/TS) are the most likely candidates for formulating these policies. (Note: none is at a high enough level within the federal bureaucracy to be listed in the NRP above.) Each of these organizations has stated its commitment to providing guidance, training support, and emergency response capabilities for mass casualty events such as a nuclear event.
For example, REAC/TS is a deployable asset of the Department of Energy’s (DoE) National Nuclear Security Administration (NNSA) that provides emergency response capabilities, including training for tribal, local, and state governments. This agency has the authority to make decisions about which technologies, protocols and procedures would be used in the case of a radiation incident, which in turn can affect the localities that look to it for support for emergency preparedness.
One of the main interests of the DoE is in events involving nuclear power plants. Unlike the situation at Chernobyl, an operational failure is not likely to expose a large number of people to significant doses, and that may influence their decisions to select dosimetry technologies that are most appropriate for such situations. On the other hand, generalizing from experience with events such as the Three Mile Island incident, where authorities knew that life-threatening exposures were extremely unlikely, the reaction of the public to that incident suggests that it may be prudent to have a capability to measure exposure of the at-risk population anyway. And, while the probability of terrorists causing a mass radiation release from a nuclear power plant is considered to be quite low, such an event or even an unsuccessful attempt could trigger a public demand for extensive testing of the population.
As another example, one of the main objectives of the NCRP is the formulation and wide dissemination of information, guidance, and recommendations on radiation protection and dosimetry based on consensus of scientific experts. It monitors areas in which development and publication in the literature can make an important contribution to the public interest. In contrast to the first example, their advice for measuring exposure in events involving potential radiation exposure might vary based on multiple factors, such as the size of the exposed locality, e.g., New York City. Their advice would gravitate towards technologies such as EPR that can process a large number of people quickly and accurately, without the need to follow people for several days to either monitor clinical symptoms or wait for lab results. On the other hand, exposure in smaller localities, such as Cape Cod, Massachusetts, might be well handled by the health care system using other technologies, permitting clinicians to treat the onset of actual symptoms rather than the ‘level of exposure’.
In September 2005, in coordination with other federal agencies, two units within the Department of Health and Human Services that are responsible for scientific research involving humans, the National Institutes of Health (NIH) and one of its institutes, the National Institute of Allergy and Infectious Diseases (NIAID), responding to the delegation of authority for doing so, created the Centers for Medical Countermeasures against Radiation (CMCR) with the objective of expanding the medical options to triage, prevent and/or treat radiation-induced injury. A primary focus of these centers is to develop techniques and devices to provide accurate dose assessment in a triage scenario. As part of their stated mission, these centers are focused on product development and integration of technologies into existing organizational structures for emergency response. It is not clear how the developments from these centers will be interfaced with the policy-making that resides in other agencies.
Finally, there are also non-profit organizations that can provide assistance, training, and services for tribal, local, and state authorities. One such organization is the Conference of Radiation Control Program Directors, Inc., whose main objectives include “to provide a common forum for the exchange of information among state and local radiation control programs and to provide a mechanism for states to communicate with the federal government on radiation protection issues”.
While there are advantages to having multiple agencies and perspectives, one can easily imagine circumstances in which each agency provides different guidance to local governments, i.e., involving different policies, procedures, protocols and technologies for handling exposure in nuclear events. In the face of an actual incident, it will be important for everyone to know which policy or procedure to follow. Moreover, even if the advice is not truly contradictory but only apparently so, confusion may ensue when multiple localities are involved and coordination and integration of their response capabilities are required.
Adding to the complexity of the actual decision-making processes are the multiple sources of external expert input available to policy-makers as to what is needed and which approaches are likely to meet these needs. Several different groups, usually involving governmental and non-governmental topic experts (both medical and radiological physics) and representatives of involved agencies meet periodically and episodically, sometimes producing “consensus documents” with varying degrees of official status.
These groups often have overlapping memberships and draw heavily on what has been decided by previous groups, so there is a significant value placed on previous policy positions, e.g., use of cytogenetic assays for dosimetry. The topic has become increasingly popular at scientific meetings where radiation is part of the usual subject matter, e.g., annual meetings of Radiation Research, ASTRO, and Medical Physics. Sometimes the meetings (as illustrated by this special issue) publish consensus documents, with the participants drawn from a combination of the members plus invited attendees who are not so regularly involved in the consensus groups.
Overall, the “expert sources” provide advice backed by a great deal of experience and knowledge. However, the overlapping membership and prior involvement in alternative approaches may predispose these groups toward certain kinds of solutions and a general reluctance to change, adding a barrier to the introduction of new approaches. Paradoxically, this tendency to prefer ‘tried and true’ approaches may be heightened by the development of an important and valuable aspect of preparation for response to a terrorist event: the creation of a strategic stockpile. Once a drug or device has been vetted for inclusion in the stockpile, there is a tremendous investment of money and commitment for that solution. This may further inhibit the addition of approaches that could be viewed as displacing or ‘wasting’ the investments that already have been made, despite the value of having flexibility in responses or improved methods.
There are also organizational factors that could influence decisions on what types of approaches should be chosen (including within governmental agencies). For example, if an organization has limited financial capabilities, policy may be steered towards lower cost approaches, and these considerations may influence decision-making on an unconscious as well as an explicit basis.
There also can be organizationally-based conflicts of interest. Some organizations that are involved in making policy also are involved in the development of specific approaches to the problem. For example, the cytogenetically based assays have been developed or improved by some of the same organizations that are now involved in policy making about which techniques to approve. It seems possible that this will influence the decisions that are put forth.
A different but related important aspect is the role of history of prior decisions. Even if we cannot precisely define the decision-making paths, it is clear that many people and organizations involved in making current decisions have made previous decisions as to the best approaches to use. In the absence of overwhelming evidence that more optimal solutions are available (or perhaps even with such evidence), there may be a tendency to stay with approaches that were developed previously through fairly extensive and wide-ranging processes.
For example, we argue that clinging to the concept of time-to-emesis as a criterion for triage reflects such a tendency. Neither scientific theory nor clinical evidence on the relationship between time to emesis and radiation exposure provides a robust basis for advocating its use for triage. The published data, when replotted and considered statistically, give a standard deviation (750 cGy) that is too large to be useful for triage decision-making—and that does not take into account the additional variation that will be caused by the effects of the chaos and the occurrence of nausea and vomiting in others.
Several other non-technical considerations will have to be resolved by policy makers, such as the cost of adoption and implementation, effects of displacing established technologies, if any, and the current state of development. Another major consideration for policy makers is the emphasis on the “worried well”. For example, existing cytological techniques cannot be applied to large numbers of people. However, because of the worried well, even if an event has a low probability of exposing many people to significant doses, decision makers may feel a strong need to have available a technique that can provide information very rapidly on exposure dose for everyone potentially exposed. If there is concern about the potential for persistent chaotic conditions (as were well illustrated following Hurricane Katrina), this also might lead to an emphasis on approaches that can provide measurements quickly and before the population becomes dispersed.
Closely related to the policy decision of choosing which approaches are available is the question of how to implement these decisions once they are made. There are several basic policy approaches to the adoption and implementation of a technique such as EPR dosimetry for triage. One question is whether the decision is mandated or permissive (i.e., the approach is approved for use if the operational entity at the local or federal level wishes to use it).
Another issue is the level of command for implementation, e.g., the local, regional or national level. As vividly brought out in the response to Katrina, problems arose when there was ambiguity as to whether a federal response could be initiated in the absence of a request from the local authorities (state or city). This ambiguity may also occur in the response to radiation events. As stated in the NRP, the DHS assigns responsibility to the “tribal, local or state” authorities for the first line of response to any catastrophic event, and only when their resources are overwhelmed can they request support from the federal government. Yet other policy statements suggest that the federal government would be the operational authority in particular circumstances, e.g., when the Secretary of the DHS determines that the incident has reached national significance.
There also is a practical component impacting actual responses. When an event occurs, the first responders are almost certainly going to be based locally. Therefore, a policy decision that ‘allows’ the use of EPR dosimetry for triage would be meaningless if responders act on the basis of locally available assets and those assets do not include EPR.
The NRP attempts to delineate the organizational structure to be adopted in the case of a catastrophic emergency. The structure involves a multi-tier hierarchy of officers in charge of coordinating the multiple local and federal agencies that can be called upon to respond to the emergency. However, the competing responsibilities assigned by law to each of these organizations, and their well-intended but uncoordinated initial actions, can create future coordination problems, especially if different agencies have adopted different policies, plans, procedures, and protocols for response to exposures to radiation, as described above.
In summary, there are many complex aspects of policy making that could have a significant effect on the ability of EPR to become an implemented strategy for initial triage after a radiation incident. The complexity derives principally from the multiple levels of decision-making, which are neither well defined nor well linked, as well as from the growing and changing plans, players, and processes. Finally, there is little precedent or experience in responding to a radiation event to guide effective implementation in actual events. So while current policies do not dictate that contradictory, competing, or self-interested decisions will be made, the large number of stakeholders involved and the complexities of implementation suggest these might occur in practice. Consequently, it is not clear that the development of a technology with superior technical capabilities will inevitably result in the world beating a path to your door.
Although regulatory approval for devices used in humans is a subset of government policy-making, we discuss it separately because of its importance in explaining the complexity of going from establishing technical superiority in the lab, and safety and efficacy in clinical trials, to obtaining approved use in humans. We review several pathways (and branches of these paths) currently available to obtain approval from the Food and Drug Administration (FDA) for using a device like in vivo EPR dosimetry in any of the scenarios listed earlier.
First, there is the ‘usual’ pathway for preparing information for and undergoing the approval process for a medical device. The Center for Devices and Radiological Health (CDRH) is the division within the FDA that is charged with approving and monitoring devices involving detection, prevention or treatment of radiation exposures. Most devices reviewed by the FDA a) are designed for use in diagnosing or treating patients for specified disease categories, b) are manufactured and distributed by companies also subject to FDA rules, and c) generally are to be used in health care settings and by health care professionals whose credentials and quality are ‘regulated’ by another set of policies.
The rules and procedures for this path are well-established (FDA, 2003a). Their primary focus is on reviewing the device’s safety (i.e., operational studies conducted with animals and humans that establish that it can be used in normal populations without causing unacceptable risks of harm; see below) and effectiveness1 (i.e., scientifically controlled clinical studies that provide evidence that the device produces better outcomes or superior diagnostic accuracy than would be expected under ‘usual care’ or a placebo comparison).
The FDA requires those petitioning for approval to supply evidence to support being considered for review. The application process includes registering the device and helps determine what type of information is required for approval and the likely length of time between submission and approval. For example, the petitioner needs FDA concurrence when identifying the ‘class’ for which the device qualifies and when seeking to qualify for exemptions. There are three classes, differing by how much risk the device poses for humans and how much evidence from similar, already approved devices can establish its likely safety and effectiveness. Class III, the highest level, requires more data and a more stringent review process. Potential exemptions include whether a simpler, shorter ‘third party review’ by registered expert reviewers could be used (FDA, 2003b) or whether the device qualifies as a “Humanitarian Use Device” (which allows simpler procedures and typically is restricted to situations where very few people are at risk [FDA, 2006]).
In 2002, in recognition that radiation incidents of the type considered here present special challenges in establishing safety and effectiveness, the FDA issued rules to allow drugs intended for treatment of victims of terrorist attacks to be approved on the basis of animal tests alone. “This rule will apply when adequate and well-controlled clinical studies in humans cannot be ethically conducted and field efficacy studies are not feasible,” according to the FDA. To qualify for this exemption, other conditions must also be met, e.g., that “the mechanism of effects of the harmful substance or agent and the protective or remedial mechanism of the drug are ‘reasonably well understood’” (FDA, 2002).
While these rules do not apply to devices, they set a precedent that may be followed as the FDA continues to evolve its process to be more appropriate for responding to terrorist events. Indeed, at a recent meeting on regulatory issues regarding devices to respond to terrorist incidents involving radiation, representatives of the FDA indicated that such modifications were under active consideration (Center for Medical Countermeasures Against Radiation Annual Meeting, Bethesda, MD, June 7 and 8, 2006). Moreover, the FDA recently issued rules intended to reduce the burden of requiring informed consent “to permit the use of investigational in vitro diagnostic devices to identify chemical, biological, radiological, or nuclear agents without informed consent in certain circumstances” (Federal Register, 2006). Again, while these do not apply to in vivo EPR dosimetry, they may impact approval of other devices intended to measure exposure from nuclear events.
Finally, once the details about what is needed for review have been determined, the review process begins. It typically involves internal review and often external experts as well. Parts of the process are open to the public (including the petitioners) and parts are closed. There are two stages of approval: Pre-Market Approval (PMA) and Post-Market Evaluation and Surveillance. PMA is device-specific but generally includes regulatory requirements for Device Listing, Medical Device Reporting, Establishment Registration and Quality System Compliance Inspection. After receiving PMA, the device continues to be evaluated both by scientific evaluations of its use and by reporting requirements for any untoward outcomes or complications apparently related to use of the device. Once approved, general regulatory controls apply, including requirements for registering manufacturers, distributors, packagers and re-labelers and for their compliance with good manufacturing standards and appropriate labeling and marketing.
Again, the FDA is considering a mechanism to bypass this stage under conditions such as a radiation incident. In 2005, the FDA issued policy guidelines, called the Emergency Use Authorization, to allow the emergency use of unapproved devices or drugs in the event of a catastrophic event of national significance (FDA, 2005). Under this policy, the Secretary of Defense, the Secretary of Homeland Security, or the Secretary of Health and Human Services has the authority to declare an emergency if the incident is pertinent to military, domestic, or public health matters, respectively, in which case the FDA can then authorize the use of an unapproved device.
How do these rules potentially apply to the device considered here? First, recall that we are assuming that in vivo EPR dosimetry is technically superior (using the “better mousetrap” criterion) or, at a minimum, is as effective as or better than the alternatives. Below, we provide evidence that it is also safe. However, as outlined above, there are many steps and issues to address before it can receive FDA approval.
One potential solution is first to seek ‘traditional approval’ for its use in non-emergency situations, i.e., not as a measurement to determine the need for acute medical intervention but as a means of post hoc determination of actual exposure. Indeed, one use already under investigation is the measurement of exposure in individuals from an event that occurred long before. Survivors of the nuclear bombs of World War II in Japan are one such group. Technicians of radiation-generating machines who may have received accidental high doses are another. First responders, employees present at the accident, and populations near the accidents at Three Mile Island and Chernobyl are others. To date, past exposure in such individuals has been estimated from their claimed proximity to the event and subsequent clinical signs, which may not be accurate for individuals seeking reassurance or care at this point in time. The determination of actual exposure may still guide their medical care and may help in differential diagnoses where symptoms do not ‘match’ the patients’ claimed exposure. This application may represent an easier path to gain initial approval because it may qualify for the Humanitarian Use Device exception. Once it is approved, seeking approval for other uses, such as for decisions to triage for acute treatment, may be easier because safety and effectiveness will have been established. In particular, it may allow other uses of the same device to enter the approval process as Class I or II devices, which are reviewed by easier and shorter processes than Class III and may be able to use the Third Party Review.
While in vivo devices were not included in the current FDA-allowed exceptions that take into consideration the unusual problems of testing such devices in humans, it may be possible to argue successfully that animal studies should be allowed in lieu of human studies of effectiveness for EPR as well. An appropriate analogy for arguing that such diagnostic devices should concentrate on establishing their technical capability rather than their ‘clinical usefulness in humans’ is the measurement of serum calcium: a device made for this purpose needs to have known sensitivity and accuracy, but does not need to show that it affects medical outcomes.
In the event of an actual incident, even if approval is not complete, EPR could comply with all the required criteria for Emergency Use Authorization: 1) the specified agent in the declaration of emergency has the potential to cause a life-threatening disease; 2) it is reasonable to believe that the product may be effective in diagnosing, treating, or preventing the serious or life-threatening disease or condition; 3) the known and potential benefits outweigh the known and potential risks of the product when used; and 4) there is no adequate, approved, and available alternative to the product (FDA, 2005).
There also are emerging pathways for the use of devices that cannot ethically be tested fully under the conditions under which they will operate. In vivo EPR dosimetry as intended for nuclear incidents discussed here fits into this pathway and in many respects may become the easier path to approval.
Regardless of the approval pathway and the technical accuracy of EPR dosimetry, the device can be finally approved only if it is shown to be safe for use under the specified circumstances. This evidence has already been established. A published study examined the principal sources of safety concern, i.e., those related to the magnetic field and the radio frequency generation of EPR. As criteria for safety, it used levels that have already been approved for use in comparable devices, i.e., Magnetic Resonance Imaging (MRI) for clinical use. Safety was demonstrated for both factors. In particular, the magnetic field of EPR is low and static (0.04 Tesla) in comparison to the much larger magnetic fields generated by MRI machines, and it does not involve rapidly changing gradients. The radio frequency of EPR is about 1200 MHz, which is considered safe by FDA-approved standards. In fact, heat generation at this radio frequency level was shown to be well below the limits established by the FDA (Salikhov et al., 2005).
In addition, EPR has been used for dosimetry on in vitro samples for more than 40 years, including studies on isolated teeth. These studies provide evidence that EPR is a safe procedure for the operators. Moreover, it has been used without safety problems in many in vivo studies, in animals for more than 25 years and in human subjects for more than 10 years, again providing evidence that there have been no untoward events.
Experimental protocols for the use of EPR in vivo in human subjects have been approved by human subject committees at the Uniformed Services University of the Health Sciences (Bethesda, MD), Dartmouth Medical School (Hanover, NH), Ohio State University (Columbus, OH), and the National Cancer Institute (Bethesda, MD). Several studies have already been conducted safely in human subjects at Dartmouth, with specific attention paid to two aspects of MRI that are also pertinent to EPR: local heating and nerve stimulation. No untoward effects were found. More recently, measurements have been made successfully in vivo both on irradiated teeth placed within the mouths of volunteers and in volunteers who are cancer patients whose teeth received radiation doses in the course of radiation therapy for their disease. These recent studies provide further evidence that EPR can be performed safely on human participants.
In summary, the regulatory aspects for the implementation of in vivo EPR dosimetry can indeed be overcome, but the process is not straightforward and will be time consuming. The existing and developing methods for authorizing emergency use or the development of terrorist-related response devices may help overcome some of the roadblocks that may otherwise complicate getting regulatory approval for EPR.
Assuming that the pertinent policy makers have decided that this technique should be part of the initial response and that FDA regulatory approval has been obtained, the story is not complete. Another important set of conditions remains: the development of the capabilities to manufacture field-ready instruments and distribute an adequate number of them to meet the response-needs projected by the policy-makers.
Finding companies willing to undertake the final engineering and manufacture of the instruments will depend on several factors. Assuming the manufacturers are privately owned, the potential for profitability will play a part in the attractiveness of undertaking manufacture of the instruments. Factors such as the potential size and nature of the market will be key, as will the ability of ‘customers’ to buy the devices. Because the principal market for the device is likely to be dominated by governmental agencies at all levels, with varying missions to respond to scenarios of the type discussed here, the number of instruments needed, the number of agencies involved, their respective interests, and their ability to use appropriated funds to purchase the devices will all influence the ultimate availability of EPR dosimetry when it is needed.
Other potential ‘customers’ include civilian users of radiation, especially the nuclear power industry. They too are subject to regulations that may affect their interest in purchasing EPR instruments, e.g., their ability to recoup expenses for terrorist response planning. Other related uses for the dosimeter may increase its appeal to customers (and therefore to manufacturers). For example, such uses include retrospective verification of radiation dose from radiation therapy for quality control, evaluation of potentially high occupational exposures, measurement of extra-terrestrial exposures in astronauts and, if the sensitivity of the method improves sufficiently for epidemiological use, the surveillance of ‘unusual’ environmental exposures to radiation.
Independent of the uses described here, there are some other very attractive uses of in vivo EPR that could be carried out using the instrumental capabilities developed for dosimetry. Manufacturers may be willing to invest in the machines for one use (even as a loss leader) because of their potential future use in clinical care. For example, clinical uses beyond terrorist-related events include assessing the level of oxygen in tissues to enhance therapy of tumors (because response to radiation therapy for cancer appears to depend on the tumor’s current oxygenation) and treatment of ischemic diseases, especially peripheral vascular disease in diabetics (because evidence of poor oxygenation of tissues may trigger treatment decisions) (Swartz et al., 2004).
Nonetheless, the initial evaluation of the potential market by manufacturers is likely to be driven by the use of EPR dosimetry for immediate triage after potential exposure of large numbers of individuals. Potential manufacturers are likely to be attracted by the considerable degree of interest in the devices, as demonstrated by the willingness of the Department of Defense (one of the principal potential users) to fund the development of EPR dosimeters. (For example, federal contracts funding this device have been obtained by our research group from both the Defense Threat Reduction Agency [DTRA] and the Defense Advanced Research Projects Agency [DARPA].)
While a federal willingness to commit funds to the development of EPR is a positive sign, it falls short of ensuring (even if the device completely fulfills the performance criteria specified in the contracts) that the Department of Defense will order the devices. Even if the DoD authorizes the purchase of EPR instruments, the implications for manufacturers are rather different if the initial request is for 10 instruments or for 1,000, and both numbers currently are quite plausible.
A decision by a private company to manufacture deployable versions of the EPR dosimeter would entail a willingness to take the financial risk of making the instruments, and this would be closely related to the potential for achieving appropriate profits. If decision-makers were to view the role of EPR dosimetry as an important part of the regional rapid response teams that have been developed, the size of the market would be modest, and the potential return might be inadequate to warrant proceeding. On the other hand, if a decision were made to deploy a version of the instrument suitable for use in the field by the intended users (probably first responders or other individuals with little or no experience in the operation of EPR spectrometers), then the market would be much larger and more attractive to manufacturers.
One potentially important contributor to the decision to manufacture the dosimeter would be a positive decision by the Project BioShield program to support the use of EPR dosimetry. This program has a mandate to provide materials for response to terrorist-caused incidents (HHS, 2004).
The pathways to acceptance in the Project BioShield program are complex and difficult but, if successful, this would provide a very effective mechanism for obtaining full deployment of EPR dosimetry. It should be noted, however, that the Project BioShield program is not essential for deployment by government.
In summary, while there are many potentially attractive aspects for a company to decide to manufacture the EPR dosimeters, there remain some significant uncertainties as to whether companies will want to manufacture a ‘better mousetrap’.
While there are considerable uncertainties about the details that will influence the deployment of in vivo EPR dosimetry ‘in the real world’, we believe the process will depend strongly on factors beyond merely establishing its technical capabilities. We have identified three general areas, each of which could be the limiting factor: decision-making by responsible policy makers, approval by regulatory processes, and the development of the capability to produce the device in the form and numbers needed for the purpose. While establishing their complexity, we have also sought to illustrate how these complexities can be overcome.
This work was supported by NIH grant U19 AI067733; “In Vivo EPR Dosimetry System for Retrospective Measurement of Clinically Significant Acute Radiation Exposures,” Dept. of Defense # MD A905-02-C-0011 (DTRA) and used the facilities of the “EPR Center for the Study of Viable Systems”, NIH (NIBIB) grant P41 EB002032.
1Note: the FDA refers to this as effectiveness and so we use their term here. In our criteria, this is better described as establishing its efficacy, i.e., under scientifically controlled studies, rather than in ‘usual use’.