
1.  On a Closed-Form Doubly Robust Estimator of the Adjusted Odds Ratio for a Binary Exposure 
American Journal of Epidemiology  2013;177(11):1314-1316.
Epidemiologic studies often aim to estimate the odds ratio for the association between a binary exposure and a binary disease outcome. Because confounding bias is of serious concern in observational studies, investigators typically estimate the adjusted odds ratio in a multivariate logistic regression which conditions on a large number of potential confounders. It is well known that modeling error in specification of the confounders can lead to substantial bias in the adjusted odds ratio for exposure. As a remedy, Tchetgen Tchetgen et al. (Biometrika. 2010;97(1):171–180) recently developed so-called doubly robust estimators of an adjusted odds ratio by carefully combining standard logistic regression with reverse regression analysis, in which exposure is the dependent variable and both the outcome and the confounders are the independent variables. Double robustness implies that only one of the 2 modeling strategies needs to be correct in order to make valid inferences about the odds ratio parameter. In this paper, I aim to introduce this recent methodology into the epidemiologic literature by presenting a simple closed-form doubly robust estimator of the adjusted odds ratio for a binary exposure. A SAS macro (SAS Institute Inc., Cary, North Carolina) is given in an online appendix to facilitate use of the approach in routine epidemiologic practice, and a simulated data example is also provided for the purpose of illustration.
PMCID: PMC3664333  PMID: 23558352
case-control sampling; doubly robust estimator; logistic regression; odds ratio; SAS macro
2.  Alternatives for logistic regression in cross-sectional studies: an empirical comparison of models that directly estimate the prevalence ratio 
Cross-sectional studies with binary outcomes analyzed by logistic regression are frequent in the epidemiological literature. However, the odds ratio can substantially overestimate the prevalence ratio, the measure of choice in these studies. Also, controlling for confounding is not equivalent for the two measures. In this paper we explore alternatives for modeling data of such studies with techniques that directly estimate the prevalence ratio.
We compared Cox regression with constant time at risk, Poisson regression and log-binomial regression against the standard Mantel-Haenszel estimators. Models with robust variance estimators in Cox and Poisson regressions and variance corrected by the scale parameter in Poisson regression were also evaluated.
Three outcomes, from a cross-sectional study carried out in Pelotas, Brazil, with different levels of prevalence were explored: weight-for-age deficit (4%), asthma (31%) and mother in a paid job (52%). Unadjusted Cox/Poisson regression and Poisson regression with scale parameter adjusted by deviance performed worst in terms of interval estimates. Poisson regression with scale parameter adjusted by χ2 showed variable performance depending on the outcome prevalence. Cox/Poisson regression with robust variance, and log-binomial regression performed equally well when the model was correctly specified.
Cox or Poisson regression with robust variance and log-binomial regression provide correct estimates and are a better alternative for the analysis of cross-sectional studies with binary outcomes than logistic regression, since the prevalence ratio is more interpretable and easier to communicate to non-specialists than the odds ratio. However, precautions are needed to avoid estimation problems in specific situations.
PMCID: PMC521200  PMID: 14567763
Cox regression; cross-sectional studies; logistic regression; odds ratio; Poisson regression; prevalence ratio; robust variance; statistical models
3.  Risk of Adverse Pregnancy Outcomes among Women Practicing Poor Sanitation in Rural India: A Population-Based Prospective Cohort Study 
PLoS Medicine  2015;12(7):e1001851.
The importance of maternal sanitation behaviour during pregnancy for birth outcomes remains unclear. Poor sanitation practices can promote infection and induce stress during pregnancy and may contribute to adverse pregnancy outcomes (APOs). We aimed to assess whether poor sanitation practices were associated with increased risk of APOs such as preterm birth and low birth weight in a population-based study in rural India.
Methods and Findings
A prospective cohort of pregnant women (n = 670) in their first trimester of pregnancy was enrolled and followed until birth. Socio-demographic, clinical, and anthropometric factors, along with access to toilets and sanitation practices, were recorded at enrolment (12th week of gestation). A trained community health volunteer conducted home visits to ensure retention in the study and learn about study outcomes during the course of pregnancy. Unadjusted odds ratios (ORs), adjusted odds ratios (AORs), and 95% confidence intervals for APOs were estimated by logistic regression models. Of the 667 women who were retained at the end of the study, 58.2% practiced open defecation and 25.7% experienced APOs, including 130 (19.4%) preterm births, 95 (14.2%) births with low birth weight, 11 (1.7%) spontaneous abortions, and six (0.9%) stillbirths. In unadjusted analyses, open defecation was significantly associated with APOs (OR: 2.53; 95% CI: 1.72–3.71), preterm birth (OR: 2.36; 95% CI: 1.54–3.62), and low birth weight (OR: 2.00; 95% CI: 1.24–3.23). After adjustment for potential confounders such as maternal socio-demographic and clinical factors, open defecation was still significantly associated with increased odds of APOs (AOR: 2.38; 95% CI: 1.49–3.80) and preterm birth (AOR: 2.22; 95% CI: 1.29–3.79) but not low birth weight (AOR: 1.61; 95% CI: 0.94–2.73). The association between APOs and open defecation was independent of poverty and caste. Even though we accounted for several key confounding factors in our estimates, the possibility of residual confounding should not be ruled out. We did not identify specific exposure pathways that led to the outcomes.
This study provides the first evidence, to our knowledge, that poor sanitation is associated with a higher risk of APOs. Additional studies are required to elucidate the socio-behavioural and/or biological basis of this association so that appropriate targeted interventions might be designed to support improved birth outcomes in vulnerable populations. While it is intuitive to expect that caste and poverty are associated with poor sanitation practice driving APOs, and we cannot rule out additional confounders, our results demonstrate that the association of poor sanitation practices (open defecation) with these outcomes is independent of poverty. Our results support the need to assess the mechanisms, both biological and behavioural, by which limited access to improved sanitation leads to APOs.
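Unadjusted ORs of the kind reported above come from a 2×2 comparison of exposure and outcome; the calculation can be sketched as follows (the counts are hypothetical, chosen only so the OR lands near the reported 2.53, and Woolf's log-OR standard error is one standard choice of interval method):

```python
import numpy as np

# Hypothetical 2x2 table (not the study's actual counts):
#                  APO    no APO
a, b = 120, 268    # open defecation
c, d = 41, 238     # latrine access
odds_ratio = (a * d) / (b * c)

# Woolf's method: the log OR is approximately normal with this SE
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))
```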
Pinaki Panigrahi and colleagues examine the association between adverse pregnancy outcomes and sanitation practices in pregnant women in rural India.
Editors' Summary
Pregnancy is usually a happy time for women and their families. But, for some women, pregnancy ends unhappily. Some women lose their baby during early pregnancy (spontaneous abortion or miscarriage) or during late pregnancy (stillbirth). Others have their baby earlier than expected (preterm birth) or have a baby with low birth weight, two outcomes that adversely affect the baby’s survival and long-term health. The burden of adverse pregnancy outcomes (low birth weight, preterm birth, stillbirth, and spontaneous abortion) is substantial across the world but is particularly high in resource-limited settings. More than 60% of all preterm births take place in Asia and sub-Saharan Africa, and in India alone nearly 13 million babies (47% of all births) had a low birth weight in 2010. Many risk factors for adverse pregnancy outcomes have been identified, including infection, diabetes, poor antenatal care, and other socio-economic factors, but a clear causal mechanism for adverse pregnancy outcomes has not been established.
Why Was This Study Done?
One potential risk factor for adverse pregnancy outcomes, particularly in resource-limited settings, is poor sanitation—the inadequate provision of facilities and services for the safe disposal of human urine and feces. The WHO/UNICEF Joint Monitoring Programme for Water Supply and Sanitation estimates that, globally, 1.1 billion people defecate in the open, a practice that can expose individuals to contact with human feces containing infectious organisms and that can contaminate food and water. Poor sanitation might contribute to adverse pregnancy outcomes by promoting infection or by causing stress during pregnancy. Women might, for example, limit their intake of food and water to avoid having to use inadequate toilet facilities, thereby adversely affecting the health of their unborn child. Here, the researchers assess whether poor sanitation practices are associated with an increased risk of adverse pregnancy outcomes by undertaking a population-based prospective study in two rural areas of Odisha state, India. Odisha has a high infant death rate (57 deaths per 1,000 live births), only 18.2% of households have access to an improved latrine (a facility such as a flush toilet that hygienically prevents human contact with human excreta), and 75% of households practice open defecation.
What Did the Researchers Do and Find?
For their study, the researchers enrolled 670 women during the first trimester of their pregnancy. They recorded socio-demographic data (for example, age, level of education, and household assets), clinical data, weight and height, and toilet access and sanitation practices for each woman at enrollment and followed them through pregnancy until birth. Nearly two-thirds of the women practiced open defecation, and a quarter experienced an adverse pregnancy outcome, most commonly a preterm birth and/or having a baby with low birth weight. After adjustment for potential confounding factors (factors that might affect outcomes, such as socio-demographic characteristics), open defecation was significantly associated with adverse pregnancy outcomes (all four adverse outcomes considered together) and with preterm birth, but not with low birth weight (a significant association is one that is unlikely to have happened by chance). Specifically, the adjusted odds ratios (an indicator of the strength of association between an exposure and an outcome; an odds ratio of more than one indicates that an exposure increases the risk of an outcome) of adverse pregnancy outcomes and preterm birth among women practicing open defecation compared with women with access to a latrine were 2.38 and 2.22, respectively. Notably, these associations were independent of poverty, caste, and religion.
What Do These Findings Mean?
These findings indicate that, among women in Odisha, defecation in the open (poor sanitation) during pregnancy is associated with a higher risk of any adverse pregnancy outcome and of preterm birth than the use of a latrine. Counterintuitively, these findings also suggest that the association between open defecation and adverse pregnancy outcomes is not explained by poverty. Although the researchers adjusted for numerous confounding factors in their analysis, the women who defecated in the open may have shared some other unknown characteristic (residual confounding) that was actually responsible for their increased risk of an adverse pregnancy outcome. Further studies are now needed to determine the socio-behavioral and/or biological basis of the association between poor sanitation and adverse pregnancy outcomes. Appropriate public health interventions can then be designed to reduce the burden of adverse pregnancy outcomes among women living in settings where there is limited access to adequate sanitation.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article.
The March of Dimes, a non-profit organization for pregnancy and baby health, provides information on pregnancy loss, preterm birth, and low birth weight
Tommy’s, a UK non-profit organization that funds research into stillbirth, premature birth, and miscarriage, also provides information about adverse pregnancy outcomes
The World Health Organization (WHO) provides information on water, sanitation, and health (in several languages)
The WHO/UNICEF Joint Monitoring Programme for Water Supply and Sanitation monitors progress toward improved global sanitation; its 2014 report on progress in water sanitation is available (in several languages)
The children’s charity UNICEF, which protects the rights of children and young people around the world, provides information on water, sanitation, and health (in several languages)
The Water Supply and Sanitation Collaborative Council and the non-governmental organization Practical Action provide information on approaches and technologies for improving sanitation
A PLOS Medicine Collection on water and sanitation and a Policy Forum by Velleman et al. on improving water, sanitation, and hygiene for maternal and newborn health are available
PMCID: PMC4511257  PMID: 26151447
4.  A case-control study to investigate the risk of leukaemia associated with exposure to benzene in petroleum marketing and distribution workers in the United Kingdom. 
OBJECTIVES: To investigate the risk of leukaemia in workers in the petroleum distribution industry who were exposed to low levels of benzene. METHODS: From the cohort of distribution workers, 91 cases were identified as having leukaemia on either a death certificate or on cancer registration. These cases were compared with controls (four per case) randomly selected from the cohort, who were from the same company as the respective case, matched for age, and alive and under follow up at the time of case occurrence. Work histories were collected for the cases and controls, together with information about the terminals at which they had worked, fuel compositions, and occupational hygiene measurements of benzene. These data were used to derive quantitative estimates of personal exposure to benzene. Odds ratios (OR) were calculated conditional on the matching, to identify those variables in the study which were associated with risk of leukaemia. Examination of the potential effects of confounding and other variables was carried out with conditional logistic regression. Analyses were carried out for all leukaemia and separately for acute lymphoblastic, chronic lymphocytic, acute myeloid and monocytic, and chronic myeloid leukaemias. RESULTS: There was no significant increase in the overall risk of all leukaemias with higher cumulative exposure to benzene or with intensity of exposure, but risk was consistently doubled in subjects employed in the industry for > 10 years. Acute lymphoblastic leukaemia tended to occur in workers employed after 1950, who started work after the age of 30, worked for a short duration, and experienced low cumulative exposure with few peaks. The ORs did not increase with increasing cumulative exposure. The risk of chronic lymphocytic leukaemia seemed to be related most closely to duration of employment and the highest risk occurred in white collar workers with long service. These workers had only background levels of benzene exposure. 
There was no evidence of an association of risk with any exposure variables, and no evidence of an increasing risk with increasing cumulative exposure, mean intensity, or maximum intensity of exposure. The patterns of risk for acute myeloid and monocytic leukaemia were different from those of the lymphoid subgroups, in which duration of employment was the variable most closely related to risk. Risk was increased to an OR of 2.8 (95% confidence interval (95% CI) 0.8 to 9.4) for a cumulative exposure between 4.5 and 45 ppm-years compared with < 0.45 ppm-years. For mean intensity between 0.2 and 0.4 ppm an OR of 2.8 (95% CI 0.9 to 8.5) was found compared with < 0.02 ppm. Risk did not increase with cumulative exposure, maximum intensity, or mean intensity of exposure when treated as continuous variables. Cases of acute myeloid and monocytic leukaemia were more often classified as having peaked exposures than controls, and when variables characterising peaks, particularly daily and weekly peaks, were included in the analysis these tended to dominate the other exposure variables. However, because of the small numbers it is not possible to distinguish the relative influence of peaked and unpeaked exposures on risk of acute myeloid and monocytic leukaemia. There was no evidence of an increased risk of chronic myeloid leukaemia with increases in cumulative exposure, maximum intensity, mean intensity, and duration of employment, either as continuous or categorical variables. Analyses exploring the sensitivity of the results to the source and quality of the work histories showed similar patterns in general. However, no increases in ORs for categories of cumulative exposure were found for acute myeloid and monocytic leukaemia in the data set which included work histories obtained from personnel records still in existence, although numbers were reduced. 
Analyses excluding the last five and 10 years of exposure showed a tendency for ORs to reduce for chronic lymphocytic leukaemia and chronic myeloid leukaemia, and to increase for acute myeloid and monocytic leukaemia. Limitations of the study include uncertainties and gaps in the information collected, and small numbers in subcategories of exposure, which can lead to wide CIs around the risk estimates and poor fit of the mathematical models. CONCLUSIONS: There is no evidence in this study of an association between exposure to benzene and lymphoid leukaemia, either acute or chronic. There is some suggestion of a relation between exposure to benzene and myeloid leukaemia, in particular for acute myeloid and monocytic leukaemia. Cases of this disease tended to have experienced peaked exposures. However, in view of the limitations of the study, doubt remains as to whether the risk of acute myeloid and monocytic leukaemia is increased by cumulative exposures of < 45 ppm-years. Further work is recommended to review the work histories and redefine their quality, to explore the discrepancies between results for categorical and continuous variables, and to develop ranges around the exposure estimates to enable further sensitivity analyses to be carried out.
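The exposure metrics analysed in this study (cumulative ppm-years from a job–exposure matrix, plus mean and maximum intensity) reduce to simple arithmetic over a work history; a sketch with hypothetical job titles and benzene concentrations (none taken from the study):

```python
# Hypothetical job-exposure matrix: mean benzene concentration (ppm) per job
jem = {"terminal operator": 0.2, "tanker driver": 0.4, "clerk": 0.02}

# One worker's history as (job, start year, end year) spells
history = [("terminal operator", 1965, 1975), ("clerk", 1975, 1990)]

# Cumulative exposure in ppm-years: sum of concentration x duration
cumulative_ppm_years = sum(jem[job] * (end - start) for job, start, end in history)
# Mean and maximum intensity over the employed years
years = sum(end - start for _, start, end in history)
mean_intensity = cumulative_ppm_years / years
max_intensity = max(jem[job] for job, _, _ in history)
print(cumulative_ppm_years, mean_intensity, max_intensity)
```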
PMCID: PMC1128678  PMID: 9155776
5.  The Association Between Short Interpregnancy Interval and Preterm Birth in Louisiana: A Comparison of Methods 
Maternal and child health journal  2013;17(5):933-939.
There is growing interest in the application of propensity scores (PS) in epidemiologic studies, especially within the field of reproductive epidemiology. This retrospective cohort study assesses the impact of a short interpregnancy interval (IPI) on preterm birth and compares the results of the conventional logistic regression analysis with analyses utilizing a PS.
The study included 96,378 singleton infants from Louisiana birth certificate data (1995–2007). Five regression models designed for methods comparison are presented.
Ten percent (10.17%) of all births were preterm; 26.83% of births were from a short IPI. The PS-adjusted model produced a more conservative estimate of the exposure variable compared to the conventional logistic regression method (β-coefficient: 0.21 vs. 0.43), as well as a smaller standard error (0.024 vs. 0.028) and odds ratio with 95% confidence interval [1.15 (1.09, 1.20) vs. 1.23 (1.17, 1.30)]. The inclusion of more covariate and interaction terms in the PS did not change the estimates of the exposure variable.
This analysis indicates that PS-adjusted regression may be appropriate for validation of conventional methods in a large dataset with a fairly common outcome. PSs may be beneficial in producing more precise estimates, especially for models with many confounders and effect modifiers and where conventional adjustment with logistic regression is unsatisfactory. Short intervals between pregnancies are associated with preterm birth in this population, according to either technique. Birth spacing is an issue that women have some control over. Educational interventions, including counselling on birth control, should be offered during prenatal visits and following delivery.
PMCID: PMC4407683  PMID: 22791206
birth interval; interpregnancy interval; logistic regression; pregnancy interval; preterm birth; propensity score
6.  Long-Term Exposure to Silica Dust and Risk of Total and Cause-Specific Mortality in Chinese Workers: A Cohort Study 
PLoS Medicine  2012;9(4):e1001206.
A retro-prospective cohort study by Weihong Chen and colleagues provides new estimates for the risk of total and cause-specific mortality due to long-term silica dust exposure among Chinese workers.
Human exposure to silica dust is very common in both working and living environments. However, the potential long-term health effects have not been well established across different exposure situations.
Methods and Findings
We studied 74,040 workers who worked at 29 metal mines and pottery factories in China for 1 y or more between January 1, 1960, and December 31, 1974, with follow-up until December 31, 2003 (median follow-up of 33 y). We estimated the cumulative silica dust exposure (CDE) for each worker by linking work history to a job–exposure matrix. We calculated standardized mortality ratios for underlying causes of death based on Chinese national mortality rates. Hazard ratios (HRs) for selected causes of death associated with CDE were estimated using the Cox proportional hazards model. The population attributable risks were estimated based on the prevalence of workers with silica dust exposure and HRs. The number of deaths attributable to silica dust exposure among Chinese workers was then calculated using the population attributable risk and the national mortality rate. We observed 19,516 deaths during 2,306,428 person-years of follow-up. Mortality from all causes was higher among workers exposed to silica dust than among non-exposed workers (993 versus 551 per 100,000 person-years). We observed significant positive exposure–response relationships between CDE (measured in milligrams/cubic meter–years, i.e., the sum of silica dust concentrations multiplied by the years of silica exposure) and mortality from all causes (HR 1.026, 95% confidence interval 1.023–1.029), respiratory diseases (1.069, 1.064–1.074), respiratory tuberculosis (1.065, 1.059–1.071), and cardiovascular disease (1.031, 1.025–1.036). Significantly elevated standardized mortality ratios were observed for all causes (1.06, 95% confidence interval 1.01–1.11), ischemic heart disease (1.65, 1.35–1.99), and pneumoconiosis (11.01, 7.67–14.95) among workers exposed to respirable silica concentrations equal to or lower than 0.1 mg/m3. After adjustment for potential confounders, including smoking, silica dust exposure accounted for 15.2% of all deaths in this study. 
We estimated that 4.2% of deaths (231,104 cases) among Chinese workers were attributable to silica dust exposure. The limitations of this study included a lack of data on dietary patterns and leisure time physical activity, possible underestimation of silica dust exposure for individuals who worked at the mines/factories before 1950, and a small number of deaths (4.3%) where the cause of death was based on oral reports from relatives.
Long-term silica dust exposure was associated with substantially increased mortality among Chinese workers. The increased risk was observed not only for deaths due to respiratory diseases and lung cancer, but also for deaths due to cardiovascular disease.
Please see later in the article for the Editors' Summary
Editors' Summary
Walk along most sandy beaches and you will be walking on millions of grains of crystalline silica, one of the commonest minerals on earth and a major ingredient in glass and in ceramic glazes. Silica is also used in the manufacture of building materials, in foundry castings, and for sandblasting, and respirable (breathable) crystalline silica particles are produced during quarrying and mining. Unfortunately, silica dust is not innocuous. Several serious diseases are associated with exposure to this dust, including silicosis (a chronic lung disease characterized by scarring and destruction of lung tissue), lung cancer, and pulmonary tuberculosis (a serious lung infection). Moreover, exposure to silica dust increases the risk of death (mortality). Worryingly, recent reports indicate that in the US and Europe, about 1.7 and 3.0 million people, respectively, are occupationally exposed to silica dust, figures that are dwarfed by the more than 23 million workers who are exposed in China. Occupational silica exposure, therefore, represents an important global public health concern.
Why Was This Study Done?
Although the lung-related adverse health effects of exposure to silica dust have been extensively studied, silica-related health effects may not be limited to these diseases. For example, could silica dust particles increase the risk of cardiovascular disease (diseases that affect the heart and circulation)? Other environmental particulates, such as the products of internal combustion engines, are associated with an increased risk of cardiovascular disease, but no one knows if the same is true for silica dust particles. Moreover, although it is clear that high levels of exposure to silica dust are dangerous, little is known about the adverse health effects of lower exposure levels. In this cohort study, the researchers examined the effect of long-term exposure to silica dust on the risk of all cause and cause-specific mortality in a large group (cohort) of Chinese workers.
What Did the Researchers Do and Find?
The researchers estimated the cumulative silica dust exposure for 74,040 workers at 29 metal mines and pottery factories from 1960 to 2003 from individual work histories and more than four million measurements of workplace dust concentrations, and collected health and mortality data for all the workers. Death from all causes was higher among workers exposed to silica dust than among non-exposed workers (993 versus 551 deaths per 100,000 person-years), and there was a positive exposure–response relationship between silica dust exposure and death from all causes, respiratory diseases, respiratory tuberculosis, and cardiovascular disease. For example, the hazard ratio for all cause death was 1.026 for every increase in cumulative silica dust exposure of 1 mg/m3-year; a hazard ratio is the incidence of an event in an exposed group divided by its incidence in an unexposed group. Notably, there was significantly increased mortality from all causes, ischemic heart disease, and silicosis among workers exposed to respirable silica concentrations at or below 0.1 mg/m3, the workplace exposure limit for silica dust set by the US Occupational Safety and Health Administration. For example, the standardized mortality ratio (SMR) for silicosis among people exposed to low levels of silica dust was 11.01; an SMR is the ratio of observed deaths in a cohort to expected deaths calculated from recorded deaths in the general population. Finally, the researchers used their data to estimate that, in 2008, 4.2% of deaths among industrial workers in China (231,104 deaths) were attributable to silica dust exposure.
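The SMR defined above is observed deaths divided by expected deaths, and an exact Poisson confidence interval for it can be computed from chi-square quantiles; a sketch with hypothetical counts (chosen only so the SMR lands near the pneumoconiosis value of 11, not taken from the study's data):

```python
from scipy.stats import chi2

# Hypothetical counts for illustration (not the study's data)
observed = 55     # deaths seen in the cohort
expected = 5.0    # deaths predicted from general-population rates
smr = observed / expected

# Exact 95% CI for a Poisson count, via the chi-square relationship
lo = chi2.ppf(0.025, 2 * observed) / (2 * expected)
hi = chi2.ppf(0.975, 2 * (observed + 1)) / (2 * expected)
print(smr, round(lo, 2), round(hi, 2))
```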
What Do These Findings Mean?
These findings indicate that long-term silica dust exposure is associated with substantially increased mortality among Chinese workers. They confirm that there is an exposure–response relationship between silica dust exposure and a heightened risk of death from respiratory diseases and lung cancer. That is, the risk of death from these diseases increases as exposure to silica dust increases. In addition, they show a significant relationship between silica dust exposure and death from cardiovascular diseases. Importantly, these findings suggest that even levels of silica dust that are considered safe increase the risk of death. The accuracy of these findings may be affected by the accuracy of the silica dust exposure estimates and/or by confounding (other factors shared by the people exposed to silica such as diet may have affected their risk of death). Nevertheless, these findings highlight the need to tighten regulations on workplace dust control in China and elsewhere.
Additional Information
Please access these websites via the online version of this summary.
The American Lung Association provides information on silicosis
The US Centers for Disease Control and Prevention provides information on silica in the workplace, including links to relevant US National Institute for Occupational Health and Safety publications, and information on silicosis and other pneumoconioses
The US Occupational Safety and Health Administration also has detailed information on occupational exposure to crystalline silica
“What does silicosis mean to you” is a video provided by the US Mine Safety and Health Administration that includes personal experiences of silicosis; “Don’t let silica dust you” is a video produced by the Association of Occupational and Environmental Clinics that identifies ways to reduce silica dust exposure in the workplace
The MedlinePlus encyclopedia has a page on silicosis (in English and Spanish)
The International Labour Organization provides information on health surveillance for those exposed to respirable crystalline silica
The World Health Organization has published a report about the health effects of crystalline silica and quartz
PMCID: PMC3328438  PMID: 22529751
7.  Re-evaluation of link between interpregnancy interval and adverse birth outcomes: retrospective cohort study matching two intervals per mother 
Objective To re-evaluate the causal effect of interpregnancy interval on adverse birth outcomes, on the basis that previous studies relying on between mother comparisons may have inadequately adjusted for confounding by maternal risk factors.
Design Retrospective cohort study using conditional logistic regression (matching two intervals per mother so each mother acts as her own control) to model the incidence of adverse birth outcomes as a function of interpregnancy interval; additional unconditional logistic regression with adjustment for confounders enabled comparison with the unmatched design of previous studies.
Setting Perth, Western Australia, 1980-2010.
Participants 40 441 mothers who each delivered three liveborn singleton neonates.
Main outcome measures Preterm birth (<37 weeks), small for gestational age birth (<10th centile of birth weight by sex and gestational age), and low birth weight (<2500 g).
Results Within mother analysis of interpregnancy intervals indicated a much weaker effect of short intervals on the odds of preterm birth and low birth weight compared with estimates generated using a traditional between mother analysis. The traditional unmatched design estimated an adjusted odds ratio for an interpregnancy interval of 0-5 months (relative to the reference category of 18-23 months) of 1.41 (95% confidence interval 1.31 to 1.51) for preterm birth, 1.26 (1.15 to 1.37) for low birth weight, and 0.98 (0.92 to 1.06) for small for gestational age birth. In comparison, the matched design showed a much weaker effect of short interpregnancy interval on preterm birth (odds ratio 1.07, 0.86 to 1.34) and low birth weight (1.03, 0.79 to 1.34), and the effect for small for gestational age birth remained small (1.08, 0.87 to 1.34). Both the unmatched and matched models estimated a high odds of small for gestational age birth and low birth weight for long interpregnancy intervals (longer than 59 months), but the estimated effect of long interpregnancy intervals on the odds of preterm birth was much weaker in the matched model than in the unmatched model.
Conclusion This study questions the causal effect of short interpregnancy intervals on adverse birth outcomes and points to the possibility of unmeasured or inadequately specified maternal factors in previous studies.
PMCID: PMC4137882  PMID: 25056260
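The within-mother matched design above is a special case of conditional logistic regression. For the simplest 1:1 matched design with a binary exposure, the conditional maximum-likelihood odds ratio reduces to the ratio of discordant pairs; concordant pairs (where stable maternal factors are shared) drop out entirely, which is exactly why matching removes time-invariant confounding. A minimal sketch with hypothetical pair counts:

```python
# Conditional (matched-pair) odds ratio for a binary exposure.
# In a 1:1 matched design, concordant pairs carry no information;
# the conditional ML estimate is b/c, the ratio of discordant pairs.
# All counts below are hypothetical, for illustration only.

def matched_pair_or(pairs):
    """pairs: list of (case_exposed, control_exposed) booleans."""
    b = sum(1 for case, ctrl in pairs if case and not ctrl)  # case exposed only
    c = sum(1 for case, ctrl in pairs if ctrl and not case)  # control exposed only
    return b / c

# 30 pairs with only the case exposed, 20 with only the control exposed,
# 50 concordant pairs (both exposed) that contribute nothing.
pairs = [(True, False)] * 30 + [(False, True)] * 20 + [(True, True)] * 50
print(matched_pair_or(pairs))  # 30/20 = 1.5
```

The actual study matched two interpregnancy intervals per mother and used the full conditional likelihood, but the same principle applies: only within-mother discordance in exposure drives the estimate.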
8.  Determining the Probability Distribution and Evaluating Sensitivity and False Positive Rate of a Confounder Detection Method Applied to Logistic Regression 
In epidemiologic studies, researchers are often interested in detecting confounding (when a third variable is associated with the exposure and also affects the outcome, distorting the exposure-outcome association). Confounder detection methods often compare regression coefficients obtained from “crude” models that exclude the possible confounder(s) and “adjusted” models that include them. One such method compares the relative difference in effect estimates to a cutoff of 10%, with differences of at least 10% taken as evidence of confounding.
In this study we derive the asymptotic distribution of the relative change in effect statistic applied to logistic regression and evaluate the sensitivity and false positive rate of the 10% cutoff method using the asymptotic distribution. We then verify the results using simulated data.
When applied to logistic regression models with a dichotomous outcome, exposure, and possible confounder, we found the relative change in effect statistic to have an asymptotic lognormal distribution. For sample sizes of at least 300, when confounding existed, over 80% of models had >10% changes in odds ratios. When the confounder was not associated with the outcome, the false positive rate increased as the strength of the association between the predictor and confounder increased. When the confounder and predictor were independent of one another, false positives were rare (mostly <10%).
Researchers must be aware of high false positive rates when applying change in estimate confounder detection methods to data where the exposure is associated with possible confounder variables.
PMCID: PMC3571096  PMID: 23420565
10% Rule; Variable Selection; Model Building; Sensitivity; False Positive Rate
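The 10% change-in-estimate rule described above can be made concrete with a small numerical sketch. Here the "adjusted" estimate is stood in for by a Mantel-Haenszel stratified odds ratio rather than a fitted logistic model, and all cell counts are hypothetical; the point is only to show the mechanics of comparing crude and adjusted ORs against the 10% cutoff:

```python
# "10% rule": compare the crude OR (confounder ignored) with a
# confounder-adjusted OR; a relative change of >= 10% flags confounding.
# Mantel-Haenszel adjustment is used here as a simple stand-in for a
# multivariable logistic model; counts are hypothetical.

def odds_ratio(a, b, c, d):
    # a = exposed cases, b = exposed controls,
    # c = unexposed cases, d = unexposed controls
    return (a * d) / (b * c)

def mantel_haenszel_or(strata):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

strata = [(10, 100, 10, 100),   # confounder absent:  stratum OR = 1
          (50, 50, 10, 10)]     # confounder present: stratum OR = 1
crude = odds_ratio(*[sum(col) for col in zip(*strata)])
adjusted = mantel_haenszel_or(strata)
rel_change = abs(crude - adjusted) / adjusted

print(round(crude, 2), round(adjusted, 2))  # 2.2 1.0
print(rel_change >= 0.10)                   # True -> confounding flagged
```

Because the confounder is strongly associated with exposure in these toy data, the crude OR (2.2) differs from the adjusted OR (1.0) by far more than 10%, so the rule flags confounding, which is the behavior whose sensitivity and false positive rate the paper evaluates.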
9.  Case-control study of oral contraceptives and risk of thromboembolic stroke: results from International Study on Oral Contraceptives and Health of Young Women. 
BMJ : British Medical Journal  1997;315(7121):1502-1504.
OBJECTIVE: To determine the influence of oral contraceptives (particularly those containing modern progestins) on the risk for ischaemic stroke in women aged 16-44 years. DESIGN: Matched case-control study. SETTING: 16 centres in the United Kingdom, Germany, France, Switzerland, and Austria. SUBJECTS: Cases were 220 women aged 16-44 who had an incident ischaemic stroke. Controls were 775 women (at least one hospital and one community control per case) unaffected by stroke who were matched with the corresponding case for 5 year age band and for hospital or community setting. Information on exposure and confounding variables was collected in a face to face interview. MAIN OUTCOME MEASURES: Odds ratios derived with stratified analysis and unconditional logistic regression to adjust for potential confounding. RESULTS: Adjusted odds ratios (95% confidence intervals) for ischaemic stroke (unmatched analysis) were 4.4 (2.0 to 9.9), 3.4 (2.1 to 5.5), and 3.9 (2.3 to 6.6) for current use of first, second, and third generation oral contraceptives, respectively. The risk ratio for third versus second generation was 1.1 (0.7 to 2.0) and was similar in the United Kingdom and other European countries. The risk estimates were lower if blood pressure was checked before prescription. CONCLUSION: Although there is a small relative risk of occlusive stroke for women of reproductive age who currently use oral contraceptives, the attributable risk is very small because the incidence in this age range is very low. There is no difference between the risk of oral contraceptives of the third and second generation; only first generation oral contraceptives seem to be associated with a higher risk. This small increase in risk may be further reduced by efforts to control cardiovascular risk factors, particularly high blood pressure.
PMCID: PMC2127931  PMID: 9420491
10.  What's the Risk? A Simple Approach for Estimating Adjusted Risk Measures from Nonlinear Models Including Logistic Regression 
Health Services Research  2009;44(1):288-302.
To develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common.
Study Design
Regression risk analysis estimates were compared with internal standards as well as with Mantel–Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR.
Data Collection
Data sets produced using Monte Carlo simulations.
Principal Findings
Regression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases.
Regression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case–control studies, particularly when outcomes are common or effect size is large.
PMCID: PMC2669627  PMID: 18793213
Multiple regression analysis; logistic regression; nonlinear models; odds ratio; relative risk; risk adjustment; risk ratio
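The "widely used (but flawed) equation" the abstract compares against is the commonly cited approximation RR ≈ OR / (1 − p0 + p0·OR), where p0 is the outcome risk in the reference group. The sketch below (hypothetical numbers) only illustrates the underlying issue the paper addresses, namely that OR and RR diverge as outcomes become common; it is not an endorsement of the approximation for adjusted ORs:

```python
# Approximate risk ratio from an odds ratio via the commonly cited
# formula RR ~= OR / (1 - p0 + p0 * OR), where p0 is the outcome risk
# in the reference group. The article above argues this approximation
# is flawed when applied to *adjusted* ORs; shown here only to
# illustrate how the OR overstates the RR for common outcomes.

def rr_from_or(or_, p0):
    return or_ / (1 - p0 + p0 * or_)

for p0 in (0.01, 0.10, 0.30):
    print(p0, round(rr_from_or(2.0, p0), 3))
# With OR = 2.0: RR ~= 1.980 at p0 = 0.01 but only ~1.538 at p0 = 0.30,
# i.e. the rarer the outcome, the closer the OR is to the RR.
```

Regression risk analysis, as described in the abstract, instead estimates adjusted risk measures directly from the fitted nonlinear model, avoiding this back-conversion entirely.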
11.  Keeping children safe at home: protocol for a case–control study of modifiable risk factors for scalds 
Injury Prevention  2014;20(5):e11.
Scalds are one of the most common forms of thermal injury in young children worldwide. Childhood scald injuries, which mostly occur in the home, result in substantial health service use and considerable morbidity and mortality. There is little research on effective interventions to prevent scald injuries in young children.
To determine the associations between a range of modifiable risk factors and medically attended scalds in children under the age of 5 years.
A multicentre case-control study in UK hospitals and minor injury units with parallel home observation to validate parental reported exposures. Cases will be 0–4 years old with a medically attended scald injury which occurred in their home or garden, matched on gender and age with community controls. An additional control group will comprise unmatched hospital controls drawn from children aged 0–4 years attending the same hospitals and minor injury units for other types of injury. Conditional logistic regression will be used for the analysis of cases and matched controls, and unconditional logistic regression for the analysis of cases and unmatched controls to estimate ORs and 95% CI, adjusted and unadjusted for confounding variables.
Main exposure measures
Use of safety equipment and safety practices for scald prevention and scald hazards.
This large case-control study will investigate modifiable risk factors for scald injuries, adjust for potential confounders and validate measures of exposure. Its findings will enhance the evidence base for prevention of scald injuries in young children.
PMCID: PMC4174015  PMID: 24842981
12.  Association between Prenatal Exposure to Antiretroviral Therapy and Birth Defects: An Analysis of the French Perinatal Cohort Study (ANRS CO1/CO11) 
PLoS Medicine  2014;11(4):e1001635.
Jeanne Sibiude and colleagues use the French Perinatal Cohort to estimate the prevalence of birth defects in children born to HIV-infected women receiving antiretroviral therapy during pregnancy.
Please see later in the article for the Editors' Summary
Antiretroviral therapy (ART) has major benefits during pregnancy, both for maternal health and to prevent mother-to-child transmission of HIV. Safety issues, including teratogenic risk, need to be evaluated. We estimated the prevalence of birth defects in children born to HIV-infected women receiving ART during pregnancy, and assessed the independent association of birth defects with each antiretroviral (ARV) drug used.
Methods and Findings
The French Perinatal Cohort prospectively enrolls HIV-infected women delivering in 90 centers throughout France. Children are followed by pediatricians until 2 y of age according to national guidelines.
We included 13,124 live births between 1994 and 2010, among which, 42% (n = 5,388) were exposed to ART in the first trimester of pregnancy. Birth defects were studied using both European Surveillance of Congenital Anomalies (EUROCAT) and Metropolitan Atlanta Congenital Defects Program (MACDP) classifications; associations with ART were evaluated using univariate and multivariate logistic regressions. Correction for multiple comparisons was not performed because the analyses were based on hypotheses emanating from previous findings in the literature and the robustness of the findings of the current study. The prevalence of birth defects was 4.4% (95% CI 4.0%–4.7%), according to the EUROCAT classification. In multivariate analysis adjusting for other ARV drugs, maternal age, geographical origin, intravenous drug use, and type of maternity center, a significant association was found between exposure to zidovudine in the first trimester and congenital heart defects: 2.3% (74/3,267), adjusted odds ratio (AOR) = 2.2 (95% CI 1.3–3.7), p = 0.003, absolute risk difference attributed to zidovudine +1.2% (95% CI +0.5; +1.9%). Didanosine and indinavir were associated with head and neck defects, respectively: 0.5%, AOR = 3.4 (95% CI 1.1–10.4), p = 0.04; 0.9%, AOR = 3.8 (95% CI 1.1–13.8), p = 0.04. We found a significant association between efavirenz and neurological defects (n = 4) using the MACDP classification: AOR = 3.0 (95% CI 1.1–8.5), p = 0.04, absolute risk +0.7% (95% CI +0.07%; +1.3%). But the association was not significant using the less inclusive EUROCAT classification: AOR = 2.1 (95% CI 0.7–5.9), p = 0.16. No association was found between birth defects and lopinavir or ritonavir with a power >85% for an odds ratio of 1.5, nor for nevirapine, tenofovir, stavudine, or abacavir with a power >70%. Limitations of the present study were the absence of data on termination of pregnancy, stillbirths, tobacco and alcohol intake, and concomitant medication.
We found a specific association between in utero exposure to zidovudine and heart defects; the mechanisms need to be elucidated. The association between efavirenz and neurological defects must be interpreted with caution. For the other drugs not associated with birth defects, the results were reassuring. Finally, whatever the impact that some ARV drugs may have on birth defects, it is surpassed by the major role of ART in the successful prevention of mother-to-child transmission of HIV.
Editors' Summary
AIDS and HIV infection are commonly treated with antiretroviral therapy (ART), a combination of individual drugs that work together to prevent the replication of the virus and further spread of the infection. Starting in the 1990s, studies have shown that ART of HIV-infected women can substantially reduce transmission of the virus to the child during pregnancy and birth. Based on these results, ART was subsequently recommended for pregnant women. Since 2004, ART has been standard therapy for pregnant women with HIV/AIDS in high-income countries, and it is now recommended for all HIV-infected women worldwide. Several different antiviral drug combinations have been shown to be effective and are used to prevent mother-to-infant transmission. However, as with any other drugs taken during pregnancy, there is concern that ART can harm the developing fetus.
Why Was This Study Done?
Several previous studies have assessed the risk that ART taken by a pregnant woman might pose to her developing fetus, but the results have been inconsistent. Animal studies suggested an elevated risk for some drugs but not others. While some clinical studies have reported increases in birth defects in children born to mothers on ART, others have shown no such increase.
The discrepancy may be due to differences between the populations included in the studies and the different methods used to diagnose birth defects. Additional large studies are therefore necessary to obtain more and better evidence on the potential harm of individual anti-HIV drugs to children exposed during pregnancy. So in this study, the authors conducted a large cohort study in France to assess the relationship between different antiretroviral drugs and specific birth defects.
What Did the Researchers Do and Find?
The researchers used a large national health database known as the French Perinatal Cohort that contains information on HIV-infected mothers who delivered infants in 90 centers throughout France. Pediatricians follow all children, whatever their HIV status, to two years of age, and health statistics are collected according to national health-care guidelines. Analyzing the records, the researchers estimated the rate at which birth defects occurred in children exposed to antiretroviral drugs during pregnancy.
The researchers included 13,124 children who were born alive between 1994 and 2010 and had been exposed to ART during pregnancy. Children exposed in the first trimester of pregnancy, and those exposed during the second or third trimester, were compared to a control group (children not exposed to the drug during the whole pregnancy). Using two birth defect classification systems (EUROCAT and MACDP—MACDP collects more details on disease classification than EUROCAT), the researchers sought to detect a link between the occurrence of birth defects and exposure to individual antiretroviral drugs.
They found a small increase in the risk for heart defects in children with exposure to zidovudine. They also found an association between efavirenz exposure and a small increase in neurological defects, but only when using the MACDP classification system. The authors found no association between other antiretroviral drugs, including nevirapine (acting similar to efavirenz); tenofovir, stavudine, and abacavir (all three acting similar to zidovudine); and lopinavir and ritonavir (proteinase inhibitors) and any type of birth defect.
What Do These Findings Mean?
These findings show that, overall, the risks of birth defects in children exposed to antiretroviral drugs in utero are small when considering the clear benefit of preventing mother-to-child transmission of HIV. However, where there are safe and effective alternatives, it might be appropriate to avoid use by pregnant women of those drugs that are associated with elevated risks of birth defects.
Worldwide, a large number of children are exposed to zidovudine in utero, and these results suggest (though cannot prove) that these children may be at a slightly higher risk of heart defects. Current World Health Organization (WHO) guidelines for the prevention of mother-to-child transmission no longer recommend zidovudine for first-line therapy.
The implications of the higher rate of neurological birth defects observed in infants exposed to efavirenz in the first trimester are less clear. The EUROCAT classification excludes minor neurological abnormalities without serious medical consequences, and so the WHO guidelines that stress the importance of careful clinical follow-up of children with exposure to efavirenz seem adequate, based on the findings of this study. The study is limited by the lack of data on the use of additional medication and alcohol and tobacco use, which could have a direct impact on fetal development, and by the absence of data on birth defects and antiretroviral drug exposure from low-income countries. However, the findings of this study overall are reassuring and suggest that apart from zidovudine and possibly efavirenz, other antiretroviral drugs are not associated with birth defects, and their use during pregnancy does not pose a risk to the infant.
Additional Information
Please access these websites via the online version of this summary at
This study is further discussed in a PLOS Medicine Perspective by Mofenson and Watts
The World Health Organization has a webpage on mother-to-child transmission of HIV
The US National Institutes of Health provides links to additional information on mother-to-child transmission of HIV
The Elizabeth Glaser Pediatric AIDS Foundation also has a webpage on mother-to-child transmission
The French Perinatal Cohort has a webpage describing the cohort and its main publications (in French, with a summary in English)
PMCID: PMC4004551  PMID: 24781315
13.  A case-control study of malignant and non-malignant respiratory disease among employees of a fiberglass manufacturing facility. II. Exposure assessment. 
A case-control study of malignant and non-malignant respiratory disease among employees of the Owens-Corning Fiberglas Corporation's Newark, Ohio plant was undertaken. The aim was to determine the extent to which exposures to substances in the Newark plant environment, to non-workplace factors, or to a combination may play a part in the risk of mortality from respiratory disease among workers in this plant. A historical environmental reconstruction of the plant was undertaken to characterise the exposure profile for workers in this plant from its beginnings in 1934 to the end of 1987. The exposure profile provided estimates of cumulative exposure to respirable fibres, fine fibres, asbestos, talc, formaldehyde, silica, and asphalt fumes. Employment histories from Owens-Corning Fiberglas provided information on employment characteristics (duration of employment, year of hire, age at first hire) and an interview survey obtained information on demographic characteristics (birthdate, race, education, marital state, parent's ethnic background, and place of birth), lifetime residence, occupational and smoking histories, hobbies, and personal and family medical history. Matched, unadjusted odds ratios (ORs) were used to assess the association between lung cancer or non-malignant respiratory disease and the cumulative exposure history, demographic characteristics, and employment variables. Only the smoking variables and employment characteristics (year of hire and age at first hire) were statistically significant for lung cancer. For non-malignant respiratory disease, only the smoking variables were statistically significant in the univariate analysis. Of the variables entered into a conditional logistic regression model for lung cancer, only smoking (smoked for six months or more v never smoked: OR = 26.17, 95% confidence interval (95% CI) 3.316-206.5) and age at first hire (35 and over v less than 35: OR = 0.244, 95% CI 0.083-0.717) were statistically significant. 
There were, however, increased ORs for year of employment (first hired before 1945 v first hired after 1945: OR = 1.944, 95% CI 0.850-4.445), talc (cumulative exposure >1000 fibres/ml days v never exposed: OR = 1.355, 95% CI 0.407-5.515), and asphalt fumes (cumulative exposure >0.01 mg/m(3) days v never exposed: OR = 1.131, 95% CI 0.468-2.730). For non-malignant respiratory disease, only the smoking variable was significant in the conditional logistic regression analysis (OR = 2.637, 95% CI 1.146-6.069). There were raised ORs for the higher cumulative exposure categories for respirable fibres, asbestos, silica, and asphalt fumes. For both silica and asphalt fumes, ORs were more than double the reference groups for all exposure categories. A limited number of subjects were exposed to fine fibres. The scarcity of cases and controls limits the extent to which analyses for fine fibre may be carried out. Within those limitations, among those who had worked with fine fibre, the unadjusted, unmatched OR for lung cancer was 1.0 (95% CI 0.229-4.373) and for non-malignant respiratory disease, the OR was 1.5 (95% CI 0.336-6.702). The unadjusted OR for lung cancer for exposure to fine fibre was consistent with that for all respirable fibre and does not suggest an association. For non-malignant respiratory disease, the unadjusted OR for fine fibre was opposite in direction from that for all respirable fibres. Within the limitations of the available data on fibre, there is no suggestion that exposure to fine fibre has resulted in an increase in risk of lung cancer. The increased OR for non-malignant respiratory disease is inconclusive. The results indicate that, for this population in this place and time, neither respirable fibres nor any of the substances investigated as part of the plant environment were statistically significant factors for lung cancer risk, although there were increased ORs for exposure to talc and asphalt fumes. 
Smoking is the most important factor in risk for lung cancer in this population. The situation is less clear for non-malignant respiratory disease. Unlike lung cancer, non-malignant respiratory disease represents a constellation of outcomes and not a single well defined end point. Although smoking was the only statistically significant factor for non-malignant respiratory disease in this analysis, the ORs for respirable fibres, asbestos, silica, and asphalt fumes were greater than unity for the highest exposure categories. Although the raised ORs for these substances may represent the results of a random process, they may be suggestive of an increased risk and require further investigation.
PMCID: PMC1012175  PMID: 8398858
14.  Smoking and high-risk mammographic parenchymal patterns: a case-control study 
Breast Cancer Research  1999;2(1):59-63.
Current smoking was strongly and inversely associated with high-risk patterns, after adjustment for concomitant risk factors. Relative to never smokers, current smokers were significantly less likely to have a high-risk pattern. Similar results were obtained when the analysis was confined to postmenopausal women. Past smoking was not related to the mammographic parenchymal patterns. The overall effect in postmenopausal women lost its significance when adjusted for other risk factors for P2/DY patterns that were found to be significant in the present study, although the results are still strongly suggestive. The present data indicate that adjustment for current smoking status is important when evaluating the relationship between mammographic parenchymal pattern and breast cancer risk. They also indicate that smoking is a prominent potential confounder when analyzing effects of other risk factors such as obesity-related variables. It appears that parenchymal patterns may act as an informative biomarker of the effect of cigarette smoking on breast cancer risk.
Overall, epidemiological studies [1,2,3,4] have reported no substantial association between cigarette smoking and the risk of breast cancer. Some studies [5,6,7] reported a significant increase of breast cancer risk among smokers. In recent studies that addressed the association between breast cancer and cigarette smoking, however, there was some suggestion of a decreased risk [8,9,10], especially among current smokers, ranging from approximately 10 to 30% [9,10]. Brunet et al [11] reported that smoking might reduce the risk of breast cancer by 44% in carriers of BRCA1 or BRCA2 gene mutations. Wolfe [12] described four different mammographic patterns created by variations in the relative amounts of fat, epithelial and connective tissue in the breast, designated N1, P1, P2 and DY. Women with either P2 or DY pattern are considered at greater risk for breast cancer than those with N1 or P1 pattern [12,13,14,15]. There are no published studies that assessed the relationship between smoking and mammographic parenchymal patterns.
To evaluate whether mammographic parenchymal patterns as classified by Wolfe, which have been positively associated with breast cancer risk, are affected by smoking. In this case-control study, nested within the European Prospective Investigation on Cancer in Norfolk (EPIC-Norfolk) cohort [16], the association between smoking habits and mammographic parenchymal patterns are examined. The full results will be published elsewhere.
Study subjects were members of the EPIC cohort in Norwich who also attended the prevalence screening round at the Norwich Breast Screening Centre between November 1989 and December 1997, and were free of breast cancer at that screening. Cases were defined as women with a P2/DY Wolfe's mammographic parenchymal pattern on the prevalence screen mammograms. A total of 203 women with P2/DY patterns were identified as cases and were individually matched by date of birth (within 1 year) and date of prevalence screening (within 3 months) with 203 women with N1/P1 patterns who served as control individuals.
Two views, the mediolateral and craniocaudal mammograms, of both breasts were independently reviewed by two of the authors (ES and RW) to determine the Wolfe mammographic parenchymal pattern.
Considerable information on health and lifestyle factors was available from the EPIC Health and Lifestyle Questionnaire [16]. In the present study we examined the subjects' personal history of benign breast diseases, menstrual and reproductive factors, oral contraception and hormone replacement therapy, smoking, and anthropometric information such as body mass index and waist:hip ratio.
Odds ratios (ORs) and their 95% confidence intervals (CIs) were calculated by conditional logistic regression [17], and were adjusted for possible confounding factors.
The characteristics of the cases and controls are presented in Table 1. Cases were leaner than controls. A larger percentage of cases were nulliparous, premenopausal, current hormone replacement therapy users, had a personal history of benign breast diseases, and had had a hysterectomy. A larger proportion of controls had more than three births and were current smokers.
Table 2 shows the unadjusted and adjusted OR estimates for Wolfe's high-risk mammographic parenchymal patterns and smoking in the total study population and in postmenopausal women separately. Current smoking was strongly and inversely associated with high-risk patterns, after adjustment for concomitant risk factors. Relative to never smokers, current smokers were significantly less likely to have a high-risk pattern (OR 0.37, 95% CI 0.14-0.94). Similar results were obtained when the analysis was confined to postmenopausal women. Past smoking was not related to mammographic parenchymal patterns. The overall effect in postmenopausal women lost its significance when adjusted for other risk factors for P2/DY patterns that were found to be significant in the present study, although the results were still strongly suggestive. There was no interaction between cigarette smoking and body mass index.
In the present study we found a strong inverse relationship between current smoking and high-risk mammographic parenchymal patterns of breast tissue as classified by Wolfe [12]. These findings are not completely unprecedented; Greendale et al [18] found a reduced risk of breast density in association with smoking, although the magnitude of the reduction was unclear. The present findings suggest that this reduction is large.
Recent studies [9,10] have suggested that breast cancer risk may be reduced among current smokers. In a multicentre Italian case-control study, Braga et al [10] found that, relative to nonsmokers, current smokers had a reduced risk of breast cancer (OR 0.84, 95% CI 0.7-1.0). These findings were recently supported by Gammon et al [9], who reported that breast cancer risk in younger women (younger than 45 years) may be reduced among current smokers who began smoking at an early age (OR 0.59, 95% CI 0.41-0.85 for age 15 years or younger) and among long-term smokers (OR 0.70, 95% CI 0.52-0.94 for those who had smoked for 21 years or more).
The possible protective effect of smoking might be due to its anti-oestrogenic effect [1,2,19]. Recently there has been renewed interest in the potential effect of smoking on breast cancer risk, and whether individuals may respond differently on the basis of differences in metabolism of bioproducts of smoking [20,21]. Different relationships between smoking and breast cancer risk have been suggested that are dependent on the rapid or slow status of acetylators of aromatic amines [20,21]. More recent studies [22,23], however, do not support these findings.
The present study design minimized the opportunity for bias to influence the findings. Because subjects were unaware of their own case-control status, the possibility of recall bias in reporting smoking status was minimized. Systematic error in the assessment of mammograms was avoided because reading was done without knowledge of the risk factor data. Furthermore, the associations observed are unlikely to be explained by the confounding effect of other known breast cancer risk factors, because we adjusted for these in the analysis. We did not have information on passive smoking status, however, which has recently been reported to be a possible confounder [5,6,21,24].
The present data indicate that adjustment for current smoking status is important when evaluating the relationship between mammographic parenchymal pattern and breast cancer risk. They also indicate smoking as a prominent potential confounder when analyzing effects of other risk factors such as obesity-related variables. It seems that parenchymal patterns may act as an informative biomarker of the effect of cigarette smoking on breast cancer risk.
PMCID: PMC13911  PMID: 11056684
mammography; screening; smoking; Wolfe's parenchymal patterns
15.  Confounder Summary Scores When Comparing the Effects of Multiple Drug Exposures 
Little information is available comparing methods to adjust for confounding when considering multiple drug exposures. We compared three analytic strategies to control for confounding based on measured variables: conventional multivariable, exposure propensity score (EPS) and disease risk score (DRS).
Each method was applied to a dataset (2000–2006) recently used to examine the comparative effectiveness of four drugs. The relative effectiveness of risedronate, nasal calcitonin and raloxifene in preventing nonvertebral fracture, were each compared to alendronate. EPSs were derived both by using multinomial logistic regression (single model EPS) and by three separate logistic regression models (separate model EPS). DRSs were derived and event rates compared using Cox proportional hazard models. DRSs derived among the entire cohort (full cohort DRS) was compared to DRSs derived only among the referent alendronate (unexposed cohort DRS).
Less than 5 percent deviation from the base estimate (conventional multivariable) was observed applying single model EPS, separate model EPS or full cohort DRS. Applying the unexposed cohort DRS when background risk for fracture differed between comparison drug exposure cohorts resulted in −7% to +13% deviation from our base estimate.
With sufficient numbers of exposed and outcomes, either conventional multivariable, EPS or full cohort DRS may be used to adjust for confounding to compare the effects of multiple drug exposures. However, our data also suggest that unexposed cohort DRS may be problematic when background risks differ between referent and exposed groups. Further empirical and simulation studies will help to clarify the generalizability of our findings.
PMCID: PMC2800174  PMID: 19757416
Drug Evaluation; Epidemiology; Pharmaceutical; Epidemiological Methods; Population Studies
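Both summary-score strategies above (EPS and DRS) collapse many measured covariates into a single score, then adjust by stratifying, matching, or weighting on it. A minimal sketch of score-stratified adjustment, using hypothetical precomputed scores and outcomes (a real EPS would be fitted by logistic regression and a real DRS by a Cox or logistic outcome model first):

```python
# Adjustment via a confounder summary score: group subjects into strata
# of the score, compare exposed vs unexposed outcome rates within each
# stratum, and pool the stratum estimates weighted by stratum size.
# Scores, exposures, and outcomes below are hypothetical.

def score_stratified_risk_diff(subjects, n_strata=2):
    """subjects: iterable of (score, exposed, outcome) triples."""
    subjects = sorted(subjects)          # order by summary score
    size = len(subjects) // n_strata
    diffs, weights = [], []
    for i in range(n_strata):
        stratum = subjects[i * size:(i + 1) * size]
        exp = [y for _, e, y in stratum if e]
        unexp = [y for _, e, y in stratum if not e]
        if exp and unexp:                # stratum must contain both groups
            diffs.append(sum(exp) / len(exp) - sum(unexp) / len(unexp))
            weights.append(len(stratum))
    return sum(d * w for d, w in zip(diffs, weights)) / sum(weights)

# Low-score stratum: risks 0.2 (exposed) vs 0.1 (unexposed);
# high-score stratum: risks 0.6 vs 0.5 -> within-stratum difference 0.1.
data = ([(0.1, True, 1)] * 2 + [(0.1, True, 0)] * 8 +
        [(0.1, False, 1)] * 1 + [(0.1, False, 0)] * 9 +
        [(0.9, True, 1)] * 6 + [(0.9, True, 0)] * 4 +
        [(0.9, False, 1)] * 5 + [(0.9, False, 0)] * 5)
print(score_stratified_risk_diff(data))  # ~0.1, the common stratum difference
```

The paper's finding that an "unexposed cohort DRS" can misbehave corresponds, in this picture, to the score itself being estimated from a group whose background risk differs from the comparison groups, so that strata no longer balance the confounders.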
16.  Effectiveness of the Standard WHO Recommended Retreatment Regimen (Category II) for Tuberculosis in Kampala, Uganda: A Prospective Cohort Study 
PLoS Medicine  2011;8(3):e1000427.
Prospective evaluation of the effectiveness of the WHO-recommended standardized retreatment regimen for tuberculosis by Edward Jones-López and colleagues reveals an unacceptable proportion of unsuccessful outcomes.
Each year, 10%–20% of patients with tuberculosis (TB) in low- and middle-income countries present with previously treated TB and are empirically started on a World Health Organization (WHO)-recommended standardized retreatment regimen. The effectiveness of this retreatment regimen has not been systematically evaluated.
Methods and Findings
From July 2003 to January 2007, we enrolled smear-positive, pulmonary TB patients into a prospective cohort to study treatment outcomes and mortality during and after treatment with the standardized retreatment regimen. Median time of follow-up was 21 months (interquartile range 12–33 months). A total of 29/148 (20%) HIV-uninfected and 37/140 (26%) HIV-infected patients had an unsuccessful treatment outcome. In a multiple logistic regression analysis to adjust for confounding, factors associated with an unsuccessful treatment outcome were poor adherence (adjusted odds ratio [aOR] associated with missing half or more of scheduled doses 2.39; 95% confidence interval (CI) 1.10–5.22), HIV infection (2.16; 1.01–4.61), age (aOR for 10-year increase 1.59; 1.13–2.25), and duration of TB symptoms (aOR for 1-month increase 1.12; 1.04–1.20). All patients with multidrug-resistant TB had an unsuccessful treatment outcome. HIV-infected individuals were more likely to die than HIV-uninfected individuals (p<0.0001). Multidrug-resistant TB at enrolment was the only common risk factor for death during follow-up for both HIV-infected (adjusted hazard ratio [aHR] 17.9; 6.0–53.4) and HIV-uninfected (14.7; 4.1–52.2) individuals. Other risk factors for death during follow-up among HIV-infected patients were CD4<50 cells/ml and no antiretroviral treatment (aHR 7.4, compared to patients with CD4≥200; 3.0–18.8) and Karnofsky score <70 (2.1; 1.1–4.1); and among HIV-uninfected patients were poor adherence (missing half or more of doses) (3.5; 1.1–10.6) and duration of TB symptoms (aHR for a 1-month increase 1.9; 1.0–3.5).
The recommended regimen for retreatment TB in Uganda yields an unacceptable proportion of unsuccessful outcomes. There is a need to evaluate new treatment strategies in these patients.
Please see later in the article for the Editors' Summary
Editors' Summary
One-third of the world's population is currently infected with Mycobacterium tuberculosis, the bacterium that causes tuberculosis (TB), and 5%–10% of HIV-uninfected individuals will go on to develop disease and become infectious. The risk of progression from infection to disease is much higher in HIV-infected individuals. If left untreated, each person with active TB may infect 10 to 15 people every year, reinforcing the public health priority of controlling TB through adequate treatment. Patients with a previous history of TB treatment are a major concern for TB programs throughout the world because these patients are at a much higher risk of harboring a form of TB that is resistant to the drugs most frequently used, resulting in poorer treatment outcomes and significantly complicating current management strategies. More than 1 million people in over 90 countries need to be “re-treated” after failing, interrupting, or relapsing from previous TB treatment.
Every year, 10%–20% of people with TB in low- and middle-income countries are started on a standardized five-drug retreatment regimen as recommended by the World Health Organization (WHO). Yet, unlike treatment regimens for newly diagnosed TB patients, the recommended retreatment regimen (also known as the category II regimen) has never been properly evaluated in randomized clinical trials or prospective cohort studies. Rather, this regimen was recommended by experts before the current situation of widespread drug-resistant TB and HIV infection.
Why Was This Study Done?
WHO surveillance data suggest that the retreatment regimen is successful in about 70% of patients, but retrospective studies that have evaluated the regimen's efficacy showed variable treatment responses with success rates ranging from 26% to 92%. However, these studies have generally only assessed outcomes at the completion of the retreatment regimen, and few have examined the risk of TB recurrence, especially in people who are also infected with HIV and so are more likely to experience TB recurrence—an issue of particular concern in sub-Saharan Africa. Therefore, in this study based in Kampala, Uganda, the researchers conducted a prospective cohort study to assess treatment and survival outcomes in patients previously treated for TB and to identify factors associated with poor outcomes. Given the overwhelming contribution of HIV infection to death, the researchers categorized their survival analysis by HIV status.
What Did the Researchers Do and Find?
The researchers recruited consecutive smear-positive TB patients who were admitted to Mulago Hospital, Kampala, Uganda, for the retreatment of TB with the standard retreatment regimen between July 2003 and January 2007. Eligible patients received daily directly observed therapy and, after hospital discharge, were seen every month during their 8-month TB-retreatment course. Home health visitors assessed treatment adherence through treatment card review, monthly pill counts, and patient self-report. After the completion of the retreatment regimen, patients were evaluated for TB recurrence every 3 months for a median of 21 months. The researchers then used statistical models to compare treatment outcomes and mortality between HIV-uninfected and HIV-infected patients.
The researchers found that 29/148 (20%) of HIV-uninfected and 37/140 (26%) of HIV-infected patients had an unsuccessful treatment outcome. Factors associated with an unsuccessful treatment outcome were poor adherence, HIV infection, increasing age, and duration of TB symptoms. All patients with multidrug resistant TB, a form of TB that is resistant to the two most important drugs used to treat TB, had an unsuccessful treatment outcome. In addition, HIV-infected subjects were more likely to die than HIV-uninfected subjects (p<0.0001), and having multidrug resistant TB at enrollment was the only common risk factor for death during follow-up for both HIV-infected and HIV uninfected patients. Other risk factors for death among HIV-infected patients were CD4<50 cells/ml and no antiretroviral therapy treatment and among HIV-uninfected patients were poor adherence and duration of TB symptoms.
What Do These Findings Mean?
The researchers found that although 70%–80% of patients had a successful treatment outcome on completion of antituberculous therapy (a result that compares well with retrospective studies), the standard retreatment regimen had low treatment response rates and was associated with poor long-term outcomes in certain subgroups of patients, particularly those with multidrug resistant TB and HIV.
These findings indicate that the standard retreatment approach to TB as implemented in low- and middle-income settings is inadequate and stress the need for new, more effective strategies. Improved access to rapid diagnostics for TB drug resistance, second-line TB treatment, and antiretroviral therapy is urgently needed, along with a strong evidence base to guide clinicians and policy makers on how best to use these tools.
Additional Information
Please access these Web sites via the online version of this summary at
The World Health Organization has information on TB, TB retreatment, and multidrug-resistant TB
WHO also provides information on TB/HIV coinfection
The Stop TB Partnership provides information on the global plan to stop TB
PMCID: PMC3058098  PMID: 21423586
17.  Is a Cutoff of 10% Appropriate for the Change-in-Estimate Criterion of Confounder Identification? 
Journal of Epidemiology  2014;24(2):161-167.
When using the change-in-estimate criterion, a cutoff of 10% is commonly used to identify confounders. However, the appropriateness of this cutoff has never been evaluated. This study investigated cutoffs required under different conditions.
Four simulations were performed to select cutoffs that achieved a significance level of 5% and a power of 80%, using linear regression and logistic regression. A total of 10,000 simulations were run to obtain the percentage differences of the 4 fitted regression coefficients (with and without adjustment).
In linear regression, larger effect size, larger sample size, and lower standard deviation of the error term led to a lower cutoff point at a 5% significance level. In contrast, larger effect size and a lower exposure–confounder correlation led to a lower cutoff point at 80% power. In logistic regression, a lower odds ratio and larger sample size led to a lower cutoff point at a 5% significance level, while a lower odds ratio, larger sample size, and lower exposure–confounder correlation yielded a lower cutoff point at 80% power.
Cutoff points for the change-in-estimate criterion varied according to the effect size of the exposure–outcome relationship, sample size, standard deviation of the regression error, and exposure–confounder correlation.
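The mechanics behind such simulations can be sketched in a few lines: simulate an exposure, a correlated confounder, and an outcome; fit the outcome model with and without the confounder; and record the percentage change in the exposure coefficient. The sketch below (Python, with illustrative parameter values that are not the paper's) shows why a fixed 10% cutoff is arbitrary: the distribution of the change-in-estimate shifts with effect size, correlation, and error variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_coef(X, y):
    # least-squares coefficients with an intercept column prepended
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def change_in_estimate(n=1000, beta_x=0.5, beta_z=0.5, rho=0.4, sd=1.0):
    """Percent change in the exposure coefficient when the confounder
    Z is dropped from a linear model (one simulated dataset)."""
    x = rng.normal(size=n)
    z = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)  # corr(x, z) = rho
    y = beta_x * x + beta_z * z + rng.normal(scale=sd, size=n)
    full = ols_coef(np.column_stack([x, z]), y)[1]   # adjusted for z
    crude = ols_coef(x[:, None], y)[1]               # unadjusted
    return 100 * abs(crude - full) / abs(full)

# Distribution of the change-in-estimate over repeated simulations:
changes = np.array([change_in_estimate() for _ in range(200)])
print(round(np.median(changes), 1))  # typically well above a 10% cutoff here
```

Rerunning with, say, `rho=0.1` or a larger `sd` moves the whole distribution, which is the paper's point: the cutoff that controls false-positive confounder selection depends on these conditions.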
PMCID: PMC3983286  PMID: 24317343
causality; confounding factors; regression; simulation; statistical models
18.  Estimates of Pandemic Influenza Vaccine Effectiveness in Europe, 2009–2010: Results of Influenza Monitoring Vaccine Effectiveness in Europe (I-MOVE) Multicentre Case-Control Study 
PLoS Medicine  2011;8(1):e1000388.
Results from a European multicentre case-control study reported by Marta Valenciano and colleagues suggest good protection by the pandemic monovalent H1N1 vaccine against pH1N1 and no effect of the 2009–2010 seasonal influenza vaccine on H1N1.
A multicentre case-control study based on sentinel practitioner surveillance networks from seven European countries was undertaken to estimate the effectiveness of 2009–2010 pandemic and seasonal influenza vaccines against medically attended influenza-like illness (ILI) laboratory-confirmed as pandemic influenza A (H1N1) (pH1N1).
Methods and Findings
Sentinel practitioners swabbed ILI patients using systematic sampling. We included in the study patients meeting the European ILI case definition with onset of symptoms >14 days after the start of national pandemic vaccination campaigns. We compared pH1N1 cases to influenza laboratory-negative controls. A valid vaccination corresponded to >14 days between receiving a dose of vaccine and symptom onset. We estimated pooled vaccine effectiveness (VE) as 1 minus the odds ratio with the study site as a fixed effect. Using logistic regression, we adjusted VE for potential confounding factors (age group, sex, month of onset, chronic diseases and related hospitalizations, smoking history, seasonal influenza vaccinations, practitioner visits in previous year). We conducted a complete case analysis excluding individuals with missing values and a multiple multivariate imputation to estimate missing values. The multivariate imputation (n = 2,902) adjusted pandemic VE (PIVE) estimates were 71.9% (95% confidence interval [CI] 45.6–85.5) overall; 78.4% (95% CI 54.4–89.8) in patients <65 years; and 72.9% (95% CI 39.8–87.8) in individuals without chronic disease. The complete case (n = 1,502) adjusted PIVE estimates were 66.0% (95% CI 23.9–84.8), 71.3% (95% CI 29.1–88.4), and 70.2% (95% CI 19.4–89.0), respectively. The adjusted PIVE was 66.0% (95% CI −69.9 to 93.2) if vaccinated 8–14 days before ILI onset. The adjusted 2009–2010 seasonal influenza VE was 9.9% (95% CI −65.2 to 50.9).
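For readers unfamiliar with the design, VE = 1 minus the odds ratio can be computed directly from a 2 × 2 table of vaccinated/unvaccinated cases and test-negative controls. The sketch below uses made-up counts (not the study's data) and a Woolf log-OR confidence interval; the pooled, covariate-adjusted estimates in the study instead come from logistic regression with study site as a fixed effect.

```python
# Test-negative VE sketch: VE = 1 - OR, with a Woolf (log-OR) confidence
# interval. The counts below are illustrative, not the study's data.
import math

def ve_from_counts(case_vacc, case_unvacc, ctrl_vacc, ctrl_unvacc, z=1.96):
    """Vaccine effectiveness (%) and 95% CI from a 2x2 table of
    vaccinated/unvaccinated cases (test-positive) and controls (test-negative)."""
    or_hat = (case_vacc * ctrl_unvacc) / (case_unvacc * ctrl_vacc)
    se_log_or = math.sqrt(1/case_vacc + 1/case_unvacc + 1/ctrl_vacc + 1/ctrl_unvacc)
    or_lo = math.exp(math.log(or_hat) - z * se_log_or)
    or_hi = math.exp(math.log(or_hat) + z * se_log_or)
    # VE = 1 - OR; the upper OR limit gives the lower VE limit
    return 100 * (1 - or_hat), (100 * (1 - or_hi), 100 * (1 - or_lo))

ve, (ve_lo, ve_hi) = ve_from_counts(case_vacc=20, case_unvacc=180,
                                    ctrl_vacc=100, ctrl_unvacc=300)
print(f"VE = {ve:.1f}% (95% CI {ve_lo:.1f} to {ve_hi:.1f})")
```

Note how a negative lower VE limit (as in the study's 8–14 day and seasonal-vaccine estimates) simply reflects an OR confidence limit above 1.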
Our results suggest good protection of the pandemic monovalent vaccine against medically attended pH1N1 and no effect of the 2009–2010 seasonal influenza vaccine. However, the late availability of the pandemic vaccine and subsequent limited coverage with this vaccine hampered our ability to study vaccine benefits during the outbreak period. Future studies should include estimation of the effectiveness of the new trivalent vaccine in the upcoming 2010–2011 season, when vaccination will occur before the influenza season starts.
Please see later in the article for the Editors' Summary
Editors' Summary
Following the World Health Organization's declaration of pandemic phase six in June 2009, manufacturers developed vaccines against pandemic influenza A 2009 (pH1N1). On the basis of the scientific opinion of the European Medicines Agency, the European Commission initially granted marketing authorization to three pandemic vaccines for use in European countries. During the autumn of 2009, most European countries included the 2009–2010 seasonal influenza vaccine and the pandemic vaccine in their influenza vaccination programs.
The Influenza Monitoring Vaccine Effectiveness in Europe network (established to monitor seasonal and pandemic influenza vaccine effectiveness) conducted seven case-control and three cohort studies in seven European countries in 2009–2010 to estimate the effectiveness of the pandemic and seasonal vaccines. Data from the seven pilot case-control studies were pooled to provide overall adjusted estimates of vaccine effectiveness.
Why Was This Study Done?
After seasonal and pandemic vaccines are made available to populations, it is necessary to estimate the effectiveness of the vaccines at the population level during every influenza season. Therefore, this study was conducted in European countries to estimate the pandemic influenza vaccine effectiveness and seasonal influenza vaccine effectiveness against people presenting to their doctor with influenza-like illness who were confirmed (by laboratory tests) to be infected with pH1N1.
What Did the Researchers Do and Find?
The researchers conducted a multicenter case-control study on the basis of practitioner surveillance networks from seven countries—France, Hungary, Ireland, Italy, Romania, Portugal, and Spain. Patients consulting a participating practitioner for influenza-like illness had a nasal or throat swab taken within 8 days of symptom onset. Cases were swabbed patients who tested positive for pH1N1. Patients presenting with influenza-like illness whose swab tested negative for any influenza virus were controls.
Individuals were considered vaccinated if they had received a dose of the vaccine more than 14 days before the date of onset of influenza-like illness, and unvaccinated if they had not been vaccinated at all or had received the vaccine less than 15 days before the onset of symptoms. The researchers compared pandemic influenza vaccine effectiveness in those vaccinated less than 8 days, 8–14 days, and more than 14 days before symptom onset against those who had never been vaccinated.
The researchers used modeling (taking account of all potential confounding factors) to estimate adjusted vaccine effectiveness and stratified the adjusted pandemic influenza vaccine effectiveness and the adjusted seasonal influenza vaccine effectiveness in three age groups (<15, 15–64, and ≥65 years of age).
The adjusted results suggest that the 2009–2010 seasonal influenza vaccine did not protect against pH1N1 illness. However, one dose of the pandemic vaccines used in the participating countries conferred good protection (65.5%–100% according to various stratifications performed) against pH1N1 in people who attended their practitioner with influenza-like illness, especially in people aged <65 years and in those without any chronic disease. Furthermore, good pandemic influenza vaccine effectiveness was observed as early as 8 days after vaccination.
What Do These Findings Mean?
The results of this study provide early estimates of pandemic influenza vaccine effectiveness, suggesting that the monovalent pandemic vaccines have been effective. The findings also give an indication of the vaccine effectiveness for the Influenza A (H1N1) 2009 strain included in the 2010–2011 seasonal vaccines, although specific vaccine effectiveness studies will have to be conducted to verify whether similarly good effectiveness is observed with the 2010–2011 trivalent vaccines. However, the results of this study should be interpreted with caution because of limitations in the pandemic context (late timing of the studies, low incidence, and low vaccine coverage leading to imprecise estimates) and potential biases due to the study design, confounding factors, and missing values. The researchers recommend that in future season studies, the sample size per country should be enlarged in order to allow for precise pooled and stratified analyses.
Additional Information
Please access these websites via the online version of this summary at
The World Health Organization has information on H1N1 vaccination
The US Centers for Disease Control and Prevention provides a fact sheet on the 2009 H1N1 influenza virus
The US Department of Health and Human services has a comprehensive website on flu
The European Centre for Disease Prevention and Control provides information on 2009 H1N1 pandemic
The European Centre for Disease Prevention and Control presents a summary of the 2009 H1N1 pandemic in Europe and elsewhere
PMCID: PMC3019108  PMID: 21379316
19.  Seasonal modification of the association between temperature and adult emergency department visits for asthma: a case-crossover study 
Environmental Health  2012;11:55.
The objective of this study is to characterize the effect of temperature on emergency department visits for asthma and modification of this association by season. This association is of interest in its own right, and also important to understand because temperature may be an important confounder in analyses of associations between other environmental exposures and asthma. For example, the case-crossover study design is commonly used to investigate associations between air pollution and respiratory outcomes, such as asthma. This approach controls for confounding by month and season by design, and permits adjustment for potential confounding by temperature through regression modeling. However, such models may fail to adequately control for confounding if temperature effects are seasonal, since case-crossover analyses rarely account for interactions between matching factors (such as calendar month) and temperature.
We conducted a case-crossover study to determine whether the association between temperature and emergency department visits for asthma varies by season or month. Asthma emergency department visits among North Carolina adults during 2007–2008 were identified using a statewide surveillance system. Marginal as well as season- and month-specific associations between asthma visits and temperature were estimated with conditional logistic regression.
The association between temperature and adult emergency department visits for asthma was near null when the overall association was examined [odds ratio (OR) per 5 degrees Celsius = 1.01, 95% confidence interval (CI): 1.00, 1.02]. However, significant variation in temperature-asthma associations was observed by season (chi-square = 18.94, 3 degrees of freedom, p <0.001) and by month of the year (chi-square = 45.46, 11 degrees of freedom, p <0.001). ORs per 5 degrees Celsius were increased in February (OR = 1.06, 95% CI: 1.02, 1.10), July (OR = 1.16, 95% CI: 1.04, 1.29), and December (OR = 1.04, 95% CI: 1.01, 1.07) and decreased in September (OR = 0.92, 95% CI: 0.87, 0.97).
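The conditional logistic fits reported here condition out each case's stratum of matched referent days. For a single exposure, the conditional likelihood has a simple closed form, and the model can be fitted with a few lines of Newton's method. The sketch below (Python, simulated data with a hypothetical log-OR of 0.3, not the study's estimates) illustrates the estimator; a real analysis would use a packaged routine such as clogit in Stata or R.

```python
import numpy as np

def clogit_beta(strata, tol=1e-8, max_iter=50):
    """Scalar-exposure conditional logistic regression by Newton's method.
    `strata` is a list of (x, case_idx): exposure values for the case day
    and its matched referent days, plus the index of the case day."""
    beta = 0.0
    for _ in range(max_iter):
        score, info = 0.0, 0.0
        for x, case_idx in strata:
            x = np.asarray(x, dtype=float)
            w = np.exp(beta * x)
            p = w / w.sum()                    # softmax over days in the stratum
            mu = (p * x).sum()                 # expected exposure under the model
            score += x[case_idx] - mu          # dl/dbeta
            info += (p * (x - mu) ** 2).sum()  # -d2l/dbeta2
        step = score / info
        beta += step
        if abs(step) < tol:
            break
    return beta

# Toy data: exposure (e.g. temperature per 5 C) on 4 candidate days per
# stratum; the case day is drawn from the conditional logistic model itself.
rng = np.random.default_rng(1)
true_beta = 0.3
strata = []
for _ in range(2000):
    x = rng.normal(size=4)
    p = np.exp(true_beta * x); p /= p.sum()
    strata.append((x, rng.choice(4, p=p)))
print(round(clogit_beta(strata), 2))  # close to the simulated log-OR of 0.3
```

Seasonal modification as studied here amounts to letting `beta` differ by season, i.e., fitting the same estimator within season-specific subsets of strata.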
Our empirical example suggests that there is significant seasonal variation in temperature-asthma associations. Epidemiological studies rarely account for interactions between ambient temperature and temporal matching factors (such as month of year) in the case-crossover design. These findings suggest that greater attention should be given to seasonal modification of associations between temperature and respiratory outcomes in case-crossover analyses of other environmental asthma triggers.
PMCID: PMC3489538  PMID: 22898319
Asthma; Temperature; Season; Case-crossover
20.  Network-based Regularization for Matched Case-Control Analysis of High-dimensional DNA Methylation Data 
Statistics in medicine  2012;32(12):2127-2139.
Matched case-control designs are commonly used to control for potential confounding in genetic epidemiology, especially in epigenetic studies of DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, few variable selection methods exist for matched sets. In an earlier article, we proposed a penalized logistic regression model with a network-based penalty for the analysis of unmatched DNA methylation data. However, for the matched designs commonly applied in epigenetic studies, which compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression while ignoring the matching is known to introduce serious bias in estimation. In this article, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect among (1) linked CpG sites within a gene or (2) linked genes within a genetic pathway for the analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model for high-dimensional variable selection with matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. The proposed method was applied to a genome-wide DNA methylation study of hepatocellular carcinoma (HCC) in which DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients were measured using the Illumina Infinium HumanMethylation27 BeadChip. Several new CpG sites and genes known to be related to HCC were identified that were missed by the standard method in the original paper.
PMCID: PMC4038397  PMID: 23212810
DNA methylation; Genetic pathways; Matched case-control; Network-based regularization; Penalized conditional logistic; Variable selection
21.  Antidepressant Response in Patients With Major Depression Exposed to NSAIDs: A Pharmacovigilance Study 
The American journal of psychiatry  2012;169(10):1065-1072.
It has been suggested that there is a mechanism by which nonsteroidal anti-inflammatory drugs (NSAIDs) may interfere with antidepressant response, and poorer outcomes among NSAID-treated patients were reported in the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study. To attempt to confirm this association in an independent population-based treatment cohort and explore potential confounding variables, the authors examined use of NSAIDs and related medications among 1,528 outpatients in a New England health care system.
Treatment outcomes were classified using a validated machine learning tool applied to electronic medical records. Logistic regression was used to examine the association between medication exposure and treatment outcomes, adjusted for potential confounding variables. To further elucidate confounding and treatment specificity of the observed effects, data from the STAR*D study were reanalyzed.
NSAID exposure was associated with a greater likelihood of depression classified as treatment resistant compared with depression classified as responsive to selective serotonin reuptake inhibitors (odds ratio=1.55, 95% CI=1.21–2.00). This association was apparent in the NSAIDs-only group but not in those using other agents with NSAID-like mechanisms (cyclooxygenase-2 inhibitors and salicylates). Inclusion of age, sex, ethnicity, and measures of comorbidity and health care utilization in regression models indicated confounding; association with outcome was no longer significant in fully adjusted models. Reanalysis of STAR*D results likewise identified an association in NSAIDs but not NSAID-like drugs, with more modest effects persisting after adjustment for potential confounding variables.
These results support an association between NSAID use and poorer antidepressant outcomes in major depressive disorder but indicate that some of the observed effect may be a result of confounding.
PMCID: PMC3787520  PMID: 23032386
22.  Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis 
The time-stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case–control (case crossover) format, but this has some limitations: in particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overhead of estimating many stratum parameters.
The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages.
By applying both models to real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and faster to run than conditional logistic analyses and could be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, and when not required this model gave identical estimates to those from conditional logistic regression.
Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
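The conditioning step described above has a compact form: given each stratum's total event count, the within-stratum counts are multinomial with probabilities given by a softmax of the linear predictor, so the stratum intercepts cancel out of the likelihood. The sketch below (Python, simulated data, not tied to any package's implementation) fits a scalar-exposure conditional Poisson model by Newton's method; note that the 300 stratum baselines are never estimated.

```python
import numpy as np

def cond_poisson_beta(strata, tol=1e-8, max_iter=50):
    """Conditional Poisson regression for a scalar exposure: within each
    stratum, condition on the total count so stratum intercepts drop out,
    leaving a multinomial log-likelihood per stratum."""
    beta = 0.0
    for _ in range(max_iter):
        score, info = 0.0, 0.0
        for x, y in strata:
            x, y = np.asarray(x, float), np.asarray(y, float)
            n = y.sum()
            if n == 0:                         # no events: no information
                continue
            w = np.exp(beta * x)
            p = w / w.sum()                    # multinomial cell probabilities
            mu = (p * x).sum()
            score += (y * x).sum() - n * mu    # dl/dbeta
            info += n * (p * (x - mu) ** 2).sum()
        step = score / info
        beta += step
        if abs(step) < tol:
            break
    return beta

# Simulate time strata: counts depend on a stratum-specific baseline
# (a nuisance, never estimated) plus a shared exposure effect.
rng = np.random.default_rng(2)
true_beta = 0.2
strata = []
for _ in range(300):
    x = rng.normal(size=28)           # daily exposure within the stratum
    alpha = rng.normal(2.0, 0.5)      # stratum baseline log-rate (nuisance)
    y = rng.poisson(np.exp(alpha + true_beta * x))
    strata.append((x, y))
print(round(cond_poisson_beta(strata), 2))  # close to 0.2 despite 300 nuisance baselines
```

This is the same score and information as scalar conditional logistic regression with counts in place of single cases, which is one way to see the equivalence the abstract describes.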
Electronic supplementary material
The online version of this article (doi:10.1186/1471-2288-14-122) contains supplementary material, which is available to authorized users.
PMCID: PMC4280686  PMID: 25417555
Statistics; Conditional distributions; Poisson regression; Time series regression; Environment
23.  Pooled Exposure Assessment for Matched Case-Control Studies 
Epidemiology (Cambridge, Mass.)  2011;22(5):704-712.
Exposure assessment using biologic specimens is important for epidemiology but may become impracticable if assays are expensive, specimen volumes are marginally adequate, or analyte levels fall below the limit of detection. Pooled exposure assessment can provide an effective remedy for these problems in unmatched case-control studies. We extend pooled exposure strategies to handle specimens collected in a matched case-control study. We show that if a logistic model applies to individuals, then a logistic model also applies to an analysis using pooled exposures. Consequently, the individual-level odds ratio can be estimated while conserving both cost and specimen. We discuss appropriate pooling strategies for a single exposure, with adjustment for multiple, possibly continuous, covariates (confounders) and assessment of effect modification by a categorical variable. We assess the performance of the approach via simulations and conclude that pooled strategies can markedly improve efficiency for matched as well as unmatched case-control studies.
PMCID: PMC3160274  PMID: 21747285
24.  Risk of Violent Crime in Individuals with Epilepsy and Traumatic Brain Injury: A 35-Year Swedish Population Study 
PLoS Medicine  2011;8(12):e1001150.
Seena Fazel and colleagues report findings from a longitudinal follow-up study in Sweden that evaluated the risk of violent crime subsequent to hospitalization for epilepsy or traumatic brain injury. The researchers control for familial confounding with sibling controls. The analyses call into question an association between epilepsy and violent crime, although they do suggest that there may be a relationship between traumatic brain injury and violent crime.
Epilepsy and traumatic brain injury are common neurological conditions, with general population prevalence estimates around 0.5% and 0.3%, respectively. Although both illnesses are associated with various adverse outcomes, and expert opinion has suggested increased criminality, links with violent behaviour remain uncertain.
Methods and Findings
We combined Swedish population registers from 1973 to 2009, and examined associations of epilepsy (n = 22,947) and traumatic brain injury (n = 22,914) with subsequent violent crime (defined as convictions for homicide, assault, robbery, arson, any sexual offense, or illegal threats or intimidation). Each case was age and gender matched with ten general population controls, and analysed using conditional logistic regression with adjustment for socio-demographic factors. In addition, we compared cases with unaffected siblings.
Among the traumatic brain injury cases, 2,011 individuals (8.8%) committed violent crime after diagnosis, which, compared with population controls (n = 229,118), corresponded to a substantially increased risk (adjusted odds ratio [aOR] = 3.3, 95% CI: 3.1–3.5); this risk was attenuated when cases were compared with unaffected siblings (aOR = 2.0, 1.8–2.3). Among individuals with epilepsy, 973 (4.2%) committed a violent offense after diagnosis, corresponding to a significantly increased odds of violent crime compared with 224,006 population controls (aOR = 1.5, 1.4–1.7). However, this association disappeared when individuals with epilepsy were compared with their unaffected siblings (aOR = 1.1, 0.9–1.2). We found heterogeneity in violence risk by age of disease onset, severity, comorbidity with substance abuse, and clinical subgroups. Case ascertainment was restricted to patient registers.
In this longitudinal population-based study, we found that, after adjustment for familial confounding, epilepsy was not associated with increased risk of violent crime, questioning expert opinion that has suggested a causal relationship. In contrast, although there was some attenuation in risk estimates after adjustment for familial factors and substance abuse in individuals with traumatic brain injury, we found a significantly increased risk of violent crime. The implications of these findings will vary for clinical services, the criminal justice system, and patient charities.
Please see later in the article for the Editors' Summary
Editors' Summary
News stories linking mental illness (diseases that appear primarily as abnormalities of thought, feeling or behavior) with violence frequently hit the headlines. But what about neurological conditions—disorders of the brain, spinal cord, and nerves? People with these disorders, which include dementia, Parkinson's disease, and brain tumors, often experience stigmatization and discrimination, a situation that is made worse by the media and by some experts suggesting that some neurological conditions increase the risk of violence. For example, many modern textbooks assert that epilepsy—a neurological condition that causes repeated seizures or fits—is associated with increased criminality and violence. Similarly, various case studies have linked traumatic brain injury—damage to the brain caused by a sudden blow to the head—with an increased risk of violence.
Why Was This Study Done?
Despite public and expert perceptions, very little is actually known about the relationship between epilepsy and traumatic brain injury and violence. In particular, few if any population-based, longitudinal studies have investigated whether there is an association between the onset of either of these two neurological conditions and violence at a later date. This information might make it easier to address the stigma that is associated with these conditions. Moreover, it might help scientists understand the neurobiological basis of violence, and it could help health professionals appropriately manage individuals with these two disorders. In this longitudinal study, the researchers begin to remedy the lack of hard information about links between neurological conditions and violence by investigating the risk of violent crime associated with epilepsy and with traumatic brain injury in the Swedish population.
What Did the Researchers Do and Find?
The researchers used the National Patient Register to identify all the cases of epilepsy and traumatic brain injury that occurred in Sweden between 1973 and 2009. They matched each case (nearly 23,000 for each condition) with ten members of the general population and retrieved data on all convictions for violent crime over the same period from the Crime Register. They then linked these data together using the personal identification numbers that identify Swedish residents in national registries. 4.2% of individuals with epilepsy had at least one conviction for violence after their diagnosis, but only 2.5% of the general population controls did. That is, epilepsy increased the absolute risk of a conviction for violence by 1.7%. Using a regression analysis that adjusted for age, gender, and various socio-demographic factors, the researchers calculated that the odds of individuals with epilepsy committing a violent crime were 1.5 times higher than for general population controls (an adjusted odds ratio [aOR] of 1.5). The strength of this association was reduced when further adjustment was made for substance abuse, and disappeared when individuals with epilepsy were compared with their unaffected siblings (a sibling control study). Similarly, 8.8% of individuals with traumatic brain injury were convicted of a violent crime after their diagnosis compared to only 3% of controls, giving an aOR of 3.3. Again, the strength of this association was reduced when affected individuals were compared to their unaffected siblings (aOR = 2.0) and when adjustment was made for substance abuse (aOR = 2.3).
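The crude (unadjusted) odds ratio implied by the percentages quoted above can be reproduced directly from the proportions themselves. A minimal sketch, using the rounded figures from the summary (4.2% of individuals with epilepsy convicted versus 2.5% of controls) rather than the study's exact counts:

```python
# Crude odds ratio and absolute risk difference for the epilepsy comparison,
# computed from the proportions quoted in the summary (illustrative only;
# the published aOR of 1.5 additionally adjusts for socio-demographic factors).

def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

p_cases = 0.042     # proportion convicted among individuals with epilepsy
p_controls = 0.025  # proportion convicted among general-population controls

crude_or = odds(p_cases) / odds(p_controls)
risk_difference = p_cases - p_controls

print(f"crude OR = {crude_or:.2f}")                     # ~1.71
print(f"absolute risk difference = {risk_difference:.1%}")  # 1.7%
```

The crude value of roughly 1.7 sits slightly above the adjusted odds ratio of 1.5 reported in the text, which is what one would expect once age, gender, and socio-demographic factors are controlled for.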
What Do These Findings Mean?
Although some aspects of this study may have affected the accuracy of its findings, these results nevertheless challenge the idea that there are strong direct links between epilepsy and violent crime. The low absolute rate of violent crime and the lack of any association between epilepsy and violent crime in the sibling control study argue against a strong link, a potentially important finding given the stigmatization of epilepsy. For traumatic brain injury, the reduced association with violent crime in the sibling control study compared with the general population control study suggests that shared familial features may be responsible for some of the association between brain injury and violence. As with epilepsy, this finding should help patient charities who are trying to reduce the stigma associated with traumatic brain injury. Importantly, however, these findings also suggest that some groups of patients with these conditions (for example, patients with head injuries who abuse illegal drugs and alcohol) would benefit from being assessed for their risk of behaving violently and from appropriate management.
Additional Information
Please access these websites via the online version of this summary at
This study is further discussed in a PLoS Medicine Perspective by Jan Volavka
The US National Institute of Neurological Disorders and Stroke provides detailed information about traumatic brain injury and about epilepsy (in English and Spanish)
The UK National Health Service Choices website provides information about severe head injury, including a personal story about a head injury sustained in a motor vehicle accident, and information about epilepsy, including personal stories about living with epilepsy
Healthtalkonline has information on epilepsy, including patient perspectives
MedlinePlus provides links to further resources on traumatic brain injury and on epilepsy (available in English and Spanish)
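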
PMCID: PMC3246446  PMID: 22215988
25.  Laryngeal and hypopharyngeal cancers and occupational exposure to formaldehyde and various dusts: a case-control study in France 
OBJECTIVES—A case-control study was conducted in France to assess possible associations between occupational exposures and squamous cell carcinomas of the larynx and hypopharynx.
METHODS—The study was restricted to men, and included 201 hypopharyngeal cancers, 296 laryngeal cancers, and 296 controls (patients with other tumour sites). Detailed information on smoking, alcohol consumption, and lifetime occupational history was collected. Occupational exposure to seven substances (formaldehyde, leather dust, wood dust, flour dust, coal dust, silica dust, and textile dust) was assessed with a job exposure matrix. Exposure variables used in the analysis were probability, duration, and cumulative level of exposure. Odds ratios (ORs) with their 95% confidence intervals (95% CIs) were estimated by unconditional logistic regression, and were adjusted for major confounding factors (age, smoking, alcohol, and when relevant other occupational exposures).
RESULTS—Hypopharyngeal cancer was found to be associated with exposure to coal dust (OR 2.31, 95% CI 1.21 to 4.40), with a significant rise in risk with probability (p<0.005 for trend) and level (p<0.007 for trend) of exposure. Exposure to coal dust was also associated with an increased risk of laryngeal cancer (OR 1.67, 95% CI 0.92 to 3.02), but no dose-response pattern was found. A significant relation, limited to hypopharyngeal cancer, was found with the probability of exposure to formaldehyde (p<0.005 for trend), with a fourfold risk for the highest category (OR 3.78, 95% CI 1.50 to 9.49). When subjects exposed to formaldehyde with a low probability were excluded, the risk also increased with duration (p<0.04) and cumulative level of exposure (p<0.14). No significant association was found for any other substance.
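The ORs and 95% CIs above come from unconditional logistic regression, but for a single 2×2 exposure table the same kind of interval can be sketched with the standard log-odds-ratio (Woolf) method. The counts below are hypothetical, chosen only to illustrate the calculation, not taken from the study:

```python
import math

# Woolf-type 95% confidence interval for an odds ratio from a 2x2 table.
#   a, b: exposed cases, exposed controls
#   c, d: unexposed cases, unexposed controls
# Counts are hypothetical, for illustration only.

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)                      # cross-product odds ratio
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d) # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=30, b=15, c=171, d=281)
print(f"OR {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

In practice the published intervals also reflect adjustment for age, smoking, alcohol, and other occupational exposures, which a crude 2×2 calculation cannot capture; this sketch only shows where the interval's width comes from (the sum of reciprocal cell counts).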
CONCLUSION—These results indicate that exposure to formaldehyde and coal dust may increase the risk of hypopharyngeal cancer.

Keywords: laryngeal cancer; hypopharyngeal cancer; occupational exposure; job exposure matrix; formaldehyde; coal dust
PMCID: PMC1739886  PMID: 11024201
