Does Human Behavioral Neurotoxicology Research Address Risk Assessment Needs?
Human research in Behavioral Neurotoxicology began, as Behavioral Toxicology, in the 1960s with the early work by Helena Hanninen and others at the Finnish Institute for Occupational Health. Since that time, a large database has accumulated through cross-sectional research associating lower behavioral performance in people chronically exposed to chemicals, relative to the performance of people who are not exposed to those chemicals. This is a virtual database, not a physical database, although Anger and Johnson (1985)
and Anger (1990
) have published summaries of that research that reflect the database at those points in time. One goal of those summaries was to examine the virtual database to determine if consistency was emerging. The conclusion in each case was that there was consistency across studies of the same functional tests in populations exposed to the same chemical, despite differences in cultural or ethnic background and in the language of the tests and instructions. The goal of this paper is to determine whether the data needed for risk assessment are also emerging from that virtual database.
Hazard identification, at least for human research, is typically established by demonstrating that a group that has been exposed to a chemical has lower (adverse) performance on a behavioral test or tests than comparable controls, such as from the same workplace or town. Most early human behavioral neurotoxicology studies (1970s, 1980s) did just that, often without measuring the exposure or an internal biomarker of exposure (Anger, 1990
). This is the background that leads to the question: has recent behavioral research provided the data needed for risk assessment?
Dose-response assessment requires associating measures of external exposure and/or internal dose with behavioral performance, such that performance is graded with regard to exposure or internal dose. Of course, the direction of the association must be that higher exposures are associated with lower (poorer) behavioral performance. Two intensively studied chemicals with occupational or dietary exposure, methylmercury and manganese, were selected for analysis. The research on those chemicals from 1990 to 2007 was identified through Medline searches of that period, and the publications were reviewed to determine whether they met the criterion of supplying dose-dependent evidence of behavioral performance declines for risk assessment. The results of the studies reporting statistically significant differences (i.e., reliable positive effects) are reviewed to determine whether the data needed for risk assessment emerge from that virtual research database. The results are summarized below and the basic information about each publication is listed in the accompanying table. In some cases duplicate reports of the same study were omitted; in other cases the same population was re-analyzed, but the second analysis revealed additional information and was thus included.
Reference, Population Studied, Statistic Used, Sample Size, and Metric of Exposure for Studies of Manganese and Methylmercury
The research on human subjects exposed to mercury has been focused on birth cohorts, singleton births, and community residents who consume fish, particularly indigenous populations in disparate, isolated locations.
Cordier et al. (2002
) collected hair from mothers and found correlations between methylmercury and offspring performance on the Stanford-Binet Copying (visuospatial) test. Similarly, Auger et al. (2005)
reported that hair MeHg correlated with tremor and Yokoo et al. (2003)
found hair MeHg correlated with attention, fine motor performance, and verbal memory, and Weihe et al. (2005
) found correlations between hair MeHg and global neurobehavioral deficits.
Cord blood was collected by Grandjean et al. (2003)
who reported correlations with verbal learning and memory performance scores. In 2006, Debes et al. reported that cord blood, cord tissue, and maternal hair MeHg correlated with finger tapping speed, continuous performance, and cued naming in offspring. Weil et al. (2005)
found blood MeHg correlated with Rey complex figure delayed recall.
Research on manganese has examined behavioral deficits in a wide variety of individuals, including community residents, welders, and workers in shipbuilding, electrical work, ore milling, smelting, ferroalloy processing, foundries, battery manufacturing, and agriculture.
There are several reports of behavioral test declines correlated with increased manganese exposure. Lucchini et al. (1997)
reported declines in finger tapping, symbol digit, digit span, and additions associated with external exposure. Myers et al. (2003)
reported digit-symbol and Luria-N item 1R performance declines associated with external exposure. Park et al. (2006)
found an association between exposure and the Rey-Osterrieth Complex Figure task, a working memory index, Stroop Color Word, and Auditory Consonant Trigrams. Bowler et al. (2007)
later reported a correlation between exposure and tests of executive function, memory, sustained concentration and sequencing, and verbal learning in the same population.
There are also a number of reports of lower performance on behavioral tests associated with increased measures of internal dose. Lucchini et al. (1995)
reported an association between blood manganese and aiming. In 1997, Lucchini followed that up with a report of correlations of blood and urine manganese with finger tapping, symbol digit, digit span, and additions. Mergler et al. (1999)
found a correlation between blood manganese and coordinated movements, learning, and recall. Bowler et al. (2007)
found a correlation between blood manganese and measures of executive function, memory, sustained concentration and sequencing, verbal learning, cognitive flexibility, visuospatial constructional ability, and visual contrast sensitivity.
The case for manganese is strong, certainly stronger than that for mercury. The weakness of the mercury research is that different studies employed different tests or found effects on different tests. Though all correlated with internal exposure in the children or their parents, consistency in findings did not emerge in a compelling way. The manganese literature, on the other hand, reveals consistent functional deficits, with specific tests (symbol digit/digit symbol) correlated with both external and internal measures of exposure. It makes the case clearly that there are dose-related behavioral deficits associated with manganese exposure. Of course, the mercury exposures may have been relatively closer to the no-effect threshold while the manganese exposures were relatively higher above it, though such comparisons are difficult to make. The results with mercury, however, remind the research community to select a core set of tests, as Johnson et al. (1987)
and Iregren and Letz (1991)
suggested so many years ago.
Interpretation of Small Effect Sizes in Neurotoxicological Studies: Characterizing Individual Versus Population Risk
Risk assessors who rely on epidemiological studies in which neurobehavioral functions are the critical endpoints frequently must wrestle with a difficult question: When is a neurotoxicant-associated change in performance large enough to be considered “important” from a public health standpoint? Certainly changes such as a 2–3 point decrease in IQ for a 10 µg/dL increase in blood lead (International Programme on Chemical Safety, 1995
) or a decline of 0.1 SD in test score for a doubling of cord blood mercury level (Grandjean, et al., 2006a
) do not indicate the presence of disease, per se.
From a purely clinical perspective, such exposure-related decrements in performance, which usually fall within the standard error of measurement of the outcome measures, might be considered unimportant because, even among individuals with higher exposures, function generally remains “within normal limits” (Kaufman, 2001
), and minor variations “within normal limits” usually have little or no import with regard to utilization of health care resources. In this view, the primary goal in selecting exposure standards should be to prevent impairments that bring people to medical attention because they meet the diagnostic criteria for a “disease.”
To some extent, the claim that such decrements are trivial represents a failure to appreciate the critical distinction between individual and population risk. Issues germane to this claim are explored in the following sections (see also Bellinger, 2007; Bellinger, 2004).
The first issue pertains to the different metrics used to characterize individual and population risk. Individual risk, which is of primary interest to the clinician (and appropriately so, from a patient’s perspective), captures the likelihood that a specific individual will become a “case” given a particular characteristic such as age, gender, blood lead level, or glycemic index. The appropriate metric, therefore, is the relative risk of disease among individuals in the stratum of the characteristic to which a patient belongs.
Population risk, which is of primary interest to the risk assessor, captures the proportion of cases of a disease within a population that can be attributed to the characteristic of interest. The appropriate metric, therefore, takes into account both the relative risk of individuals within a particular stratum and the proportion of the population that falls into that stratum. One seemingly paradoxical implication of this is that, under certain circumstances, most cases of a disease will arise from the large portion of the population that, at the level of the individual, is at relatively low risk. For example, women older than 44 years of age deliver infants with Down Syndrome at a rate that is approximately 50 times the rate among women less than 30 years of age (34.6/1000 versus 0.7/1000). However, because of the large differences in birth rate across age strata, more than 50% of infants with Down Syndrome are born to women <30 years old and only 2% born to women >44 years (Alberman and Berry, 1979
). A similar relationship holds between serum cholesterol and the risk of coronary heart disease. While an individual with a high level is clearly at increased risk compared to an individual with a low level, 90% of coronary heart disease cases arise among individuals with levels that place them in the middle portion of the population distribution and are “within normal limits” by conventional clinical criteria. The distribution of serum cholesterol levels among cases of coronary heart disease thus differs very little from the distribution of levels among individuals who remain disease-free (Rose, 1985
). Although we will be wrong most of the time in predicting, on the basis of serum cholesterol level, the specific
individuals within a population who will develop coronary heart disease, we can, with greater confidence, predict that, of two randomly selected individuals from that population, the one with a higher serum cholesterol level is more likely to become a case. This is the basis for the usefulness of clinical risk-stratification algorithms. These examples illustrate, however, the importance of distinguishing between factors that determine individual risk and factors that determine population risk.
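The Down Syndrome example above can be reproduced with a short calculation. The per-stratum rates are those quoted in the text; the birth counts are illustrative assumptions chosen only to show the inversion between individual and population risk, not Alberman and Berry's actual birth distribution.

```python
# Individual vs population risk: per-stratum Down Syndrome rates from the
# text, combined with hypothetical (assumed) birth counts per stratum.
rate_young, rate_old = 0.7 / 1000, 34.6 / 1000  # cases per birth
births_young, births_old = 500_000, 2_000       # assumed birth counts

cases_young = births_young * rate_young  # expected cases, mothers < 30
cases_old = births_old * rate_old        # expected cases, mothers > 44

# Individual risk: an older mother's relative risk is large...
print(f"relative risk (>44 vs <30): {rate_old / rate_young:.0f}x")
# ...yet the low-risk stratum contributes far more cases overall,
# because it accounts for far more births.
print(f"expected cases, <30: {cases_young:.0f}; >44: {cases_old:.0f}")
```

The relative risk favors the high-risk stratum by roughly fifty-fold, but under these assumed birth counts the low-risk stratum contributes several times more cases, which is exactly the seeming paradox described in the text.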
A second dimension of the distinction between individual and population risk pertains to the fact that, within a population, the mean value of a health indicator is highly correlated with the proportion of individuals within that population who meet diagnostic criteria for the disease that corresponds to the indicator. Moreover, the slopes of the relationships between indicators and disease prevalences can be rather steep, which means that a small difference between the mean values in two populations can be associated with large differences between populations in case prevalence. These principles have been demonstrated with regard to body mass index and obesity, systolic blood pressure and hypertension, and alcohol intake and alcoholism (Rose and Day, 1990
). An important implication is that a modest shift in the mean value of a health indicator in a particular population might result in a large change in the proportion of that population who meet diagnostic criteria for a disease. One can calculate, for example, that a decrease of 4 mmHg in the mean diastolic blood pressure in a population would result in a halving of the percentage of individuals in that population who have a pressure >90 mmHg, the clinical cut-off defining hypertension. Similarly, the prevalence of obesity (body mass index >30 kg/m2
) could be halved by reducing mean body weight in the population by 2 kg. The validity of this theoretical prediction has been confirmed empirically in prospective studies of changes in risk factor distribution and disease prevalence (Laaser, et al., 2001
; Whittington and Huppert, 1996).
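The blood-pressure calculation above can be sketched with a normal-distribution model. The mean and standard deviation below are assumptions for illustration (Rose and Day worked from empirical distributions), but they show how a 4 mmHg downward shift in the mean roughly halves the fraction above the 90 mmHg cut-off.

```python
from statistics import NormalDist

# Assumed population model: diastolic BP ~ Normal(mean, sd) in mmHg.
# These parameter values are illustrative, not Rose and Day's data.
mean, sd, cutoff = 80.0, 11.0, 90.0  # cutoff defines hypertension

before = 1 - NormalDist(mean, sd).cdf(cutoff)      # prevalence now
after = 1 - NormalDist(mean - 4, sd).cdf(cutoff)   # after 4 mmHg mean shift

print(f"prevalence before shift: {before:.3f}")  # about 0.18
print(f"prevalence after shift:  {after:.3f}")   # about 0.10, roughly half
```

The same arithmetic applies to the obesity example: shift the assumed body-mass distribution down slightly and recompute the mass above 30 kg/m2.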
These observations are relevant to neurobehavioral toxicology because they imply that it is not only the small number of individuals with the highest toxicant biomarker levels that contribute to the total burden of morbidity associated with exposure. An important fraction of the morbidity will be contributed by the much larger proportion of individuals in the population who have levels in the middle
of the distribution. Therefore, to reduce the total morbidity associated with exposure to a neurotoxicant, we need to focus on the “total dose” to the population. This would involve shifting the entire biomarker distribution in the direction of lower risk, in addition to reducing the number of individuals in the “high risk” tail. Several studies demonstrate that a modest neurotoxicant-associated shift in the mean level of performance, in the adverse direction, results in surprisingly large increases in the frequency of children who perform very poorly (e.g., >1 SD below the mean), as well as a surprisingly large reduction in the frequency of children who perform very well (Jacobson and Jacobson, 1996
; McMichael, et al., 1988
; Needleman, et al., 1982).
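The pattern those studies report, a small mean shift producing large changes in both tails, follows from distributional arithmetic. Here is a sketch assuming a normal score distribution; the IQ-style scaling and the quarter-SD shift are assumptions for illustration, not values from the cited studies.

```python
from statistics import NormalDist

# Assumed score distribution (IQ-style scaling) and an adverse
# exposure-associated shift of 0.25 SD in the mean. Illustrative only.
before = NormalDist(100, 15)
after = NormalDist(100 - 0.25 * 15, 15)

low_cut, high_cut = 85, 115  # 1 SD below / above the unexposed mean

# Fraction performing very poorly rises sharply...
print(f"P(score < {low_cut}): {before.cdf(low_cut):.3f} -> {after.cdf(low_cut):.3f}")
# ...and the fraction performing very well falls sharply.
print(f"P(score > {high_cut}): {1 - before.cdf(high_cut):.3f} -> {1 - after.cdf(high_cut):.3f}")
```

Under these assumptions a quarter-SD mean shift increases the low-performing tail by roughly 40% and shrinks the high-performing tail by roughly a third, even though the shift itself looks modest.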
Important caveats limit the generality of the conclusion that reducing the mean biomarker level will produce large benefits at the population level. The magnitude of the benefits depends strongly on the shape of the dose-effect relationship between biomarker level and the risk of disease (Bellinger, 2007
). Benefits will be greatest when the relationship is linear or plateaus only at high biomarker levels that are rare within the population. The benefits will be less certain if the relationship is U-shaped (e.g., as with an essential nutrient, such as manganese, that is neurotoxic in states of both deficiency and excess) or if risk is relatively stable, increasing only at high (and rare) biomarker levels. Under the latter circumstances, a modest reduction in the mean biomarker level might not benefit population health and, in the case of a U-shaped dose-effect relationship, might even harm some individuals by reducing their level into the range in which risk is increased.
In conclusion, it is important to recognize that the challenges encountered in evaluating the import of neurotoxicant-associated neurobehavioral changes are not fundamentally different from those encountered in other areas of chronic disease epidemiology and can be approached using the same principles. The clinical and public health (or population) perspectives are complementary rather than incompatible. Each reflects its practitioner’s concerns, i.e., the health of an individual patient or the health of a population. The small exposure-associated shift in central tendency that is typically observed is less important as a measure of the change that can be expected in the health of an individual within the study sample than as an indicator of what is likely to be happening to the population from which the study sample was drawn and of which it is, hopefully, representative.
That the mean shift observed in a health indicator is within the standard error of measurement (SEM) of the test instrument used is sometimes interpreted as an indication that the change in performance is “in the noise” of measurement and therefore trivial. The concept of the SEM is not germane, however, to an evaluation of the potential impact of the exposure on the population. Under some circumstances, a small shift in the mean indicates that individuals in the population have crossed the not-so-bright line separating “within normal limits” from “disease” and now fulfill diagnostic criteria for “disease.” Cases of frank disease are often relatively rare, however, and due to limitations in statistical power we are generally unable to detect significant increases in disease frequency using the sample sizes typical of epidemiological studies. Were resources available to study 20,000 individuals instead of 200, the principles described here suggest that we would see that a modest exposure-associated shift in the mean score on a neurobehavioral test is accompanied by a significant increase in the prevalence of individuals who meet diagnostic criteria for categorically-defined neurobehavioral disorders, such as ADHD or learning disability. If this result is of interest to one with a clinical perspective, so, too, should be a modest shift in the mean value of the health index that serves as the basis of case definition.
Assessment of Neurobehavioral Effects in Vulnerable Populations: The Example of Pesticide Exposure in Children
The developing nervous system is vulnerable to chemical exposures. In light of the increasing prevalence of developmental disabilities, there is concern about the impact of chemicals on neurodevelopment. Children are exposed to chemicals through the air they breathe, the food they eat and the water they drink (CDC, 2003
). The majority of these chemicals are not evaluated for their potential toxicity, effects on development, or interactive effects with other chemicals, prior to commercial introduction (Goldman and Koduru, 2000
; U.S. Environmental Protection Agency, 1998
). Although the link between high exposures to neurotoxic chemicals and damage to the nervous system is well established, research has also shown a link between low exposures and subclinical deficits. Identifying and characterizing the impact of chemicals on the developing brain allows us to develop programs to prevent exposure. Methods used to evaluate neurobehavioral functioning in vulnerable populations are described using organophosphate pesticide exposure in children as an example.
Organophosphorus pesticides (OPs) are currently the most widely used pesticides in the world, comprising nearly 40 different chemicals registered by the US EPA (www.epa.gov
). About 73 million pounds of OP pesticides were used in the United States in 2001 (70% of all insecticides) (Kiely, 2004
). Children of agricultural workers may have a higher risk of exposure to pesticides compared to the general population because of the close proximity of their homes to the fields where pesticides are applied and because of take-home exposure (Fenske, et al., 2002
; Loewenherz, et al., 1997
). Although pesticide metabolite levels were not significantly increased in spouses and children living on a farm during pesticide applications, children who had contact with or observed the mixing or application had higher metabolite levels than those who were out of the area when pesticides were mixed or applied (Acquavella, et al., 2005
; Alexander, et al., 2007
). While information is available on the impact of acute exposure in adults and of occupational exposure, little information is available on the impact of exposures in children.
Behavioral performance tests have been used to assess workplace exposure and have become the most efficient methods (in terms of cost and time) to screen for adverse effects of neurotoxic exposures in adult workers (Anger, 2003
). The heightened concern over the potential impact of environmental exposures on neurological functioning in children has led to the development of neurobehavioral test batteries for use with children (e.g., Amler and Gibertini, 1996
). Children from all cultures and backgrounds are at risk. However, ethnic minorities and children from low-income families are often at greater risk because of poor nutrition, an impoverished environment, and limited access to medical care. There is a need for reliable, easy-to-administer batteries to assess the effects of neurotoxic exposure in children.
Unlike adult neurobehavioral testing, for which a number of test batteries have been developed (Anger, 2003
), there have been very few attempts to develop specific neurobehavioral batteries for children. The Pediatric Environmental Neurobehavioral Test Battery or PENTB (Amler and Gibertini, 1996
) is a consensus test battery developed to assess possible neurotoxic effects in children living near hazardous waste sites. It combines observational measures and questionnaires for very young children with performance measures introduced for preschool children. Although it was first introduced in 1996, it has not been widely used. Another approach is to adapt tests that have detected neurotoxic effects in adults for similar studies in children. This is an appealing choice because such tests have a proven ability to detect chemical exposure effects and may allow comparisons across ages. Computer-based tests may easily be adapted for use with children (Dahl, et al., 1996
; Otto, et al., 1996
; Rohlman, et al., 2001b).
To evaluate pesticide exposure in a Latino farmworker population in the US, we developed a battery for preschool and school-age children. The goal was to develop a battery for young children that included measures that had demonstrated sensitivity to organophosphate pesticide exposure and, because of the unknown nature of effects of organophosphate pesticides in children, to assess a wide range of neurobehavioral functions. The battery was assembled by combining computerized tests from the Behavioral Assessment and Research System (BARS), performance tests adapted from the Pediatric Environmental Test Battery (PENTB), and a test of recall and recognition (Rohlman, et al., 2000
; Rohlman, et al., 2001b
). The current battery has been validated in several cultures and socioeconomic classes, with only minor modifications needed (Chiodo, et al., 2006
; Eckerman, et al., 2006
; Rohlman, et al., in press
). Performance tests from the BARS have been used in several studies examining pesticide exposure in children and adolescents (Eckerman, et al., 2006
; Rohlman, et al., 2001a
; Rohlman, et al., 2007c
; Rohlman, et al., 2005
). During adolescence, children often work in agriculture and are occupationally exposed to pesticides. The ability to generalize neurobehavioral results using this type of battery is crucial for characterizing the impact of neurotoxic exposure on children from diverse populations.
To assess risk to children it is necessary to associate measures of exposure with adverse outcomes, establishing a dose-response relationship. Studies examining pesticide exposure in children have used a variety of methods to classify exposure, including environmental monitoring (indoor air, dust samples, surface wipes), maternal and child exposure measures (urinary metabolites, acetylcholinesterase level), and pesticide source information (pesticide use, home inventory, proximity to agricultural fields, parents' occupations). Pesticide source information often relies on self-report; the link between these classifications and actual exposure is often unknown, and the amount of exposure in any given population may vary considerably (McCauley, et al., 2006
). These variations in exposure classifications may be responsible for inconsistencies found among studies (Alavanja, et al., 2004
). Several studies have examined pesticide exposure in children; however, the findings vary across the different populations. The children studied range from infants to adolescents, with varying exposures to pesticides, including prenatal exposure, acute poisoning incidents, or chronic exposure across the lifespan (see Dabrowski, et al., 2003
; Eckerman, et al., 2006
; Eskenazi, et al., 2004
; Eskenazi, et al., 2007
; Grandjean, et al., 2006b
; Handal, et al., 2007
; Kofman, et al., 2006
; Perera, et al., 2004
; Rauh, et al., 2006
; Rohlman, et al., 2001a
; Rohlman, et al., 2007b
; Rohlman, et al., 2005
; Ruckart, et al., 2004
; Whyatt, et al., 2004
; Young, et al., 2004
). The methods used to assess development and performance and to measure exposure also varied across studies. Although there are inconsistencies across the studies, the evidence suggests that exposure is associated with performance deficits and with increased reporting of developmental and behavioral problems.
Because a variety of factors can influence the association between neurotoxicant exposure and behavioral outcomes, it is important to include measurement of these factors as part of the study (Jacobson and Jacobson, 2005
). Demographic variables are known to impact performance on neurobehavioral tests in adults and children (Anger, et al., 1997
; Rohlman, et al., 2007a
), but other influences also need to be considered. These include, but are not limited to, exposure to other toxicants, prenatal influences, nutrition, genetic predisposition, and socio-environmental influences throughout the child’s lifetime (Dietrich and Bellinger, 1994
; Jacobson and Jacobson, 2005
; Weiss and Bellinger, 2006
). These variables impact performance on neurobehavioral tests and, when included, are typically treated as covariates or confounders in the studies. However, while this approach is commonly used, it fails to examine the joint contributions of the factors that impact development and influence exposure (Weiss and Bellinger, 2006
). Simply controlling for covariates and confounders in order to examine the impact of a neurotoxicant on performance does not recognize that most adverse effects involve a combination of factors. Although such factors are increasingly included in research, more work is needed to ensure that we accurately measure the child’s early social environment and its interaction with the child’s broader environment and genetic predisposition.
There is also a need for prospective, longitudinal studies to assess developmental effects over time, given the rapid growth and development of the child. Furthermore, functional effects of early exposure may not become apparent until later in life (Jacobson and Jacobson, 2005).
Children can be at higher risk from exposure to some environmental contaminants, and neurobehavioral assessment in children can be a useful tool to assess that risk. Standardized methods are necessary to allow accurate interpretation of the data, but it is also important to consider other modifiers of performance. As pesticides are used around the world, cross-cultural methods will allow comparisons across studies, and replication is important to confirm the validity of findings. Studies from the animal literature provide valuable information for risk assessment because of their ability to control for confounders and to examine mechanisms of toxicity. Studies with children provide information about the cognitive and behavioral endpoints that are impacted by exposure and the doses at which adverse effects are seen (Jacobson and Jacobson, 2005).
Lifetime Exposure to Cumulative Neurotoxicants: How to Define Effective Preventive Strategies to Avoid the Risk of Long-term Effects
Unfortunately, exposure to neurotoxic agents is becoming a more frequent and common event in the workplace and in the general environment. This is mainly due to several factors: the growth of the chemical industry worldwide (RNCOS, 2007), the multiple uses of chemicals in various industries, and the fact that the already large number of neurotoxic substances (Lucchini, et al., 2005
) is constantly increasing with newly generated compounds needed for a rapidly changing market. It is also evident in the field of neurotoxicology that exposure scenarios are evolving quickly, in terms of intensity but also in their spatial and temporal dimensions, leading towards a condition of “lifetime” exposure in the population. This can be particularly dangerous when dealing with substances that, besides acute toxicity, are able to accumulate in the body and cause long-term cumulative effects. Exposure intensity of the best-known neurotoxicants has markedly decreased in workplaces, especially in the developed world, and positive signs of improvement are also showing in developing countries. Awareness programs, such as the UNEP (2005) program on mercury toxicity, have favored the adoption of preventive actions. This improvement is well documented for traditionally known agents such as lead, mercury, manganese, aluminum, toluene, xylene, styrene, acrylamide, ethylene oxide, and others. New substances and compounds are constantly being introduced, and therefore toxicological knowledge of them is always very limited. At the same time, however, exposure is no longer confined to the workplace, but is rapidly expanding to the general environment in many ways. Due to active transport of ultrafine particles by winds and direct emissions into air, water, and soil, exposure levels can also increase at a considerable distance from point sources, as shown by the presence in the Arctic of various contaminants that originated from industrial and agricultural activities (Macdonald, et al., 2000
). Contamination of soil and water by industrial emissions and wastes and by pesticide application favors the entry of neurotoxicants into the food chain through crops, fish, and drinking water. Dietary exposure may therefore represent an additional, indirect pathway for neurotoxicants regardless of the distance from the contamination and exposure sources.
Finally, the temporal dimension is gaining importance in the neurotoxicological context due to the combination of two aspects: a) the constant increase in average life expectancy and working life, and b) the presence of pre- and postnatal exposure. The combination of these conditions is worrying because, in addition to increasing the total duration of exposure through the lifetime, it makes “when” exposure takes place in life a critical parameter alongside the quantitative ones. From the very beginning of embryonic development through infancy, puberty, adulthood, and old age, the human body faces critical stages, and the same exposure dose can cause different neurotoxic effects according to the relative conditions of hypersensitivity. Pre- and postnatal exposure is particularly dangerous in this context, due to the high vulnerability of the developing brain to neurotoxicants (Grandjean, et al., 2007
) whose hazardous properties are largely unknown and underestimated (Grandjean and Landrigan, 2006).
Exposure to neurotoxicants through the lifetime, with the different exposure routes in the first row, the different stages of development in the second row, and the critical conditions in the third row
The concepts above imply that exposure is changing very rapidly. Consideration has traditionally been given to exposure intensity as expressed by the dose concept, starting with Paracelsus and continuing today with modern occupational and environmental toxicology, which has developed consolidated knowledge of environmental measurement and biological monitoring as a suitable basis for exposure assessment. Safe exposure levels identified in risk assessment have been based mainly on current exposure dose and intensity.
Therefore, the changes in exposure scenarios and the importance of cumulative rather than acute or subacute manifestations of neurotoxicity now require a substantial modification of this approach. In fact, according to Haber’s Law
, exposure is the product of intensity and duration, and this product does not change when intensity decreases and duration increases. The final health outcome may therefore remain an uncontrolled risk factor if preventive measures are not undertaken correctly. Inadequately controlled exposure to various neurotoxicants, beginning during fetal life and continuing through infancy, puberty, and adult life with eventual occupational exposure, may well pose a high risk of long-term neurodegenerative effects in the population. Very recent estimates predict an astonishing increase of neurologic diseases in the coming years, and not only among elderly individuals (Albert, 2007
). Consequently, the trend may not only relate to the increasingly aged population per se, but also to the exposure to neurotoxicants. Parkinson Disease is estimated to increase substantially and primarily in those countries with poorly controlled exposure conditions, for example China, India, Brazil, and Bangladesh (Dorsey, et al., 2005
). An important part of this increase may well result from inadequate control of lifetime exposure to neurotoxicants.
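Haber’s Law, as invoked above, can be made concrete with a minimal numerical sketch. All concentrations and durations below are hypothetical, chosen only to show that a lower intensity sustained over a longer duration yields the same cumulative exposure product.

```python
# Haber's Law: cumulative exposure is the product of intensity and duration,
# so the same nominal dose can result from very different exposure scenarios.
# All values below are invented for illustration.

def cumulative_exposure(intensity_ug_m3: float, duration_years: float) -> float:
    """Exposure as intensity x duration (Haber's Law)."""
    return intensity_ug_m3 * duration_years

# High-intensity occupational exposure over a short working period ...
occupational = cumulative_exposure(400.0, 10.0)
# ... versus low-intensity environmental exposure over a lifetime.
environmental = cumulative_exposure(40.0, 100.0)

assert occupational == environmental  # same product, same nominal cumulative dose
```

This is why, as argued above, a declining exposure intensity alone does not guarantee declining risk when duration keeps increasing.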
Unfortunately, regulatory agencies are not addressing these issues adequately: the procedure for defining preventive policies is often too slow, in some cases inadequate (Silbergeld and Weaver, 2007
), and rarely aimed at protection from lifetime exposure. Regulatory bodies should proceed with a more timely agenda in defining protective standards for newly produced compounds, and with more frequent re-evaluation of already defined safe exposure levels. This is made possible today by the constantly growing scientific output in the area of neurotoxicology (Lucchini, et al., 2005
), and to the availability of reliable methods for the assessment of nervous system functions. The REACH program in the EU (REACH, 2006
) offers opportunities for prevention by controlling the emission of newly produced substances with potential neurotoxic properties and by imposing registration and evaluation, with toxicity testing, as conditions for authorizing industrial and commercial use. Unfortunately, testing for neurodevelopmental toxicity is not required by this regulation. Data on the developmental effects of perinatal exposure are lacking for the majority of neurotoxic substances, so uncertainty factors must be used to protect individuals from this type of exposure. This conventional risk-assessment procedure is highly imprecise; it can only be improved by increasing the availability of scientific information, which in turn requires increased resources for research in neurodevelopmental toxicology.
Manganese (Mn) is an example of a neurotoxicant with a well-known cumulative mechanism of toxicity: it tends to accumulate in the brain because of a very slow elimination rate. When exposure exceeds the homeostatic range of this essential element, an overload condition can be established, with possible late effects resulting in an increased risk of Parkinsonism. This has been shown in welders (Racette, et al., 2005
) and in populations with prolonged environmental exposure to industrial emissions and to car traffic where a Mn-based gasoline additive is used (Finkelstein and Jerrett, 2007
; Lucchini, et al., 2007
). There is concern for populations with very high Mn content in drinking water, such as in Bangladesh (Frisbie, et al., 2002
). Exposure to Mn can begin during prenatal life through the mother’s exposure and consequent passage through the placenta. Absorption of high concentrations, exceeding the homeostatic range, can also take place postnatally. Being an essential element, Mn is needed by the organism, and especially by the developing brain, as a constituent of important metalloenzymes such as arginase, glutamine synthetase, pyruvate carboxylase and superoxide dismutase. In order to provide Mn to the developing brain, intestinal absorption of this element is high, whereas the excretion rate is low, due to the incomplete development of the biliary pathway, which is mostly responsible for Mn elimination. High concentrations of Mn in maternal milk and infant formulas can further increase Mn overload, which may then continue during childhood and adulthood through environmental and/or occupational exposure. The main targets of toxicity are the coordination of fine movements, cognitive functions related to memory, and aggressive behavior. These functions can be explored with appropriate neurobehavioral testing (Zoni, et al., 2007
), and dose-effect and dose-response relationships can be assessed for the identification of safe exposure levels. Protection against health effects due to lifetime exposure to these neurotoxicants requires that adequate exposure metrics, able to reflect body burden and represent cumulative exposure, be used in assessing dose-effect and dose-response relationships. These metrics should ideally be obtained with suitable biomarkers, or with cumulative exposure indices that approximate long-term exposure.
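A cumulative exposure index of the kind mentioned above can be sketched as a duration-weighted sum of period-average concentrations. The function and the work history below are hypothetical, shown only to make the metric concrete; real indices are built from measured or reconstructed exposure records.

```python
# Sketch of a cumulative exposure index (CEI): each exposure period's
# average airborne Mn concentration is weighted by its duration in years.
# The work history below is invented for illustration.

def cumulative_exposure_index(history):
    """history: iterable of (mean_concentration_ug_m3, years) pairs.
    Returns the CEI in ug/m3-years."""
    return sum(conc * years for conc, years in history)

history = [(150.0, 3.0), (80.0, 7.0), (25.0, 10.0)]  # hypothetical job periods
cei = cumulative_exposure_index(history)              # 150*3 + 80*7 + 25*10
lifetime_average = cei / sum(years for _, years in history)
```

The same index divided by total duration gives a long-term average concentration, which is one way such metrics can approximate lifetime exposure.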
Health Canada, the Canadian Federal department of health, conducted a new human health risk assessment for inhaled Mn, given the large advances regarding Mn toxicity, toxicokinetics and exposure after a previous assessment in 1994 (Egyed, et al., 2006
). For this purpose, relevant epidemiological studies of both occupationally and environmentally exposed subjects were reviewed in detail. In most occupational studies, geometric mean levels of airborne Mn exposure were below 400 µg/m3, and sub-clinical manifestations of Mn neurotoxicity were investigated. Overall, tests of fine motor skills and hand tremor proved to be the most sensitive endpoints in the detection of neurofunctional decrements due to Mn exposure, while results of tests of cognitive ability and memory were less consistent. The study of the effects of Mn among Italian ferroalloy workers by Lucchini and co-workers (1999)
was identified as the critical study for the derivation of a new air quality guideline for Mn in Canada, due to the extensive information on exposure, health outcomes and confounders collected in that study, as well as the relatively low average exposures measured (geometric mean respirable Mn=25 µg/m3
and geometric mean total Mn=79 µg/m3
). Two exposure metrics, average respirable exposure over the lifetime of the subject and average respirable exposure over the 5 years prior to testing, were calculated by multiplying the available annual average concentration for each worker by the number of exposure years. All health outcomes and the effects of confounders were examined, and the quantitative risk assessment of the raw dataset included both a benchmark concentration analysis and a NOAEL/LOAEL analysis. The two methods have different pros and cons, but using both allowed a comparison of the final results. Different mathematical models were fitted to each dose-response; because of their different shapes, especially in the low-dose region, these fits provided differing estimates of the BMCL05, but all were considered in the risk assessment process. Potential interactive effects between Mn exposure and age, alcohol consumption, iron status and smoking parameters on health endpoints were investigated. BMCL05’s were derived for SPES finger-tapping speed, Luria Nebraska tests of repetitive hand motions, the digit span memory test and a test of mental arithmetic abilities. In addition, a BMCL05 was derived for serum prolactin. Evidence from the literature regarding the increased sensitivity of various subpopulations, including infants, children, the elderly, persons with pre-parkinsonism, persons on parenteral nutrition, and persons with chronic liver disease, was reviewed to guarantee adequate protection. A range of guidance values was derived from the BMCL05’s and will be made public by Health Canada. Uncertainty factors had to be introduced in the derivation of these values to account for uncertainties related to several issues, such as the use of data from an occupational population, the protection of more sensitive subgroups of the population, and the relative imprecision of the cumulative exposure indices in covering the entire lifetime. Nevertheless, this procedure represents an example of a precautionary effort toward better protection from cumulative effects. Community studies on Mn toxicity, including the assessment of cumulative lifetime exposure, will be needed to further reduce the uncertainty level in the risk assessment.
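The benchmark-concentration logic described above can be illustrated with a deliberately simplified sketch: a linear relative dose-response is assumed, the benchmark concentration is the exposure at which the modeled response departs from background by the benchmark response (here 5%), and an overall uncertainty factor is then applied, as in the derivation of guidance values. The model form, slope, and uncertainty factor are all invented for illustration; actual assessments fit several models and use the lower confidence limit (BMCL05) rather than the central estimate.

```python
# Simplified benchmark-concentration sketch. Assumes a linear relative
# dose-response: response(d) = baseline * (1 + slope * d). The benchmark
# concentration (BMC) is the dose at which the response departs from
# baseline by the benchmark response (BMR), e.g. 5%. All parameters are
# hypothetical.

def benchmark_concentration(slope: float, bmr: float = 0.05) -> float:
    """Solve baseline*(1 + slope*d) = baseline*(1 + bmr) for d."""
    return bmr / slope

def guidance_value(bmc: float, uncertainty_factor: float) -> float:
    """Apply an overall uncertainty factor to the benchmark concentration."""
    return bmc / uncertainty_factor

bmc05 = benchmark_concentration(slope=0.001)          # hypothetical slope per ug/m3
guide = guidance_value(bmc05, uncertainty_factor=100.0)
```

Because different fitted models diverge most at low doses, the resulting BMCL05 estimates differ across models, which is why, as noted above, all fits were considered in the assessment.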