1.  Defining Catastrophic Costs and Comparing Their Importance for Adverse Tuberculosis Outcome with Multi-Drug Resistance: A Prospective Cohort Study, Peru 
PLoS Medicine  2014;11(7):e1001675.
Tom Wingfield and colleagues investigate the relationship between catastrophic costs and tuberculosis outcomes for patients receiving free tuberculosis care in Peru.
Please see later in the article for the Editors' Summary
Even when tuberculosis (TB) treatment is free, hidden costs incurred by patients and their households (TB-affected households) may worsen poverty and health. Extreme TB-associated costs have been termed “catastrophic” but are poorly defined. We studied TB-affected households' hidden costs and their association with adverse TB outcome to create a clinically relevant definition of catastrophic costs.
Methods and Findings
From 26 October 2002 to 30 November 2009, TB patients (n = 876, 11% with multi-drug-resistant [MDR] TB) and healthy controls (n = 487) were recruited to a prospective cohort study in shantytowns in Lima, Peru. Patients were interviewed prior to and every 2–4 wk throughout treatment, recording direct (household expenses) and indirect (lost income) TB-related costs. Costs were expressed as a proportion of the household's annual income. In poorer households, costs were lower but constituted a higher proportion of the household's annual income: 27% (95% CI = 20%–43%) in the least-poor households versus 48% (95% CI = 36%–50%) in the poorest. Adverse TB outcome was defined as death, treatment abandonment or treatment failure during therapy, or recurrence within 2 y. Of the patients with a defined treatment outcome, 23% (166/725) had an adverse outcome. Total costs ≥20% of household annual income were defined as catastrophic because this threshold was most strongly associated with adverse TB outcome. Catastrophic costs were incurred by 345 households (39%). Having MDR TB was associated with a higher likelihood of incurring catastrophic costs (54% [95% CI = 43%–61%] versus 38% [95% CI = 34%–41%], p<0.003). Adverse outcome was independently associated with MDR TB (odds ratio [OR] = 8.4 [95% CI = 4.7–15], p<0.001), previous TB (OR = 2.1 [95% CI = 1.3–3.5], p = 0.005), days too unwell to work pre-treatment (OR = 1.01 [95% CI = 1.00–1.01], p = 0.02), and catastrophic costs (OR = 1.7 [95% CI = 1.1–2.6], p = 0.01). The adjusted population attributable fraction of adverse outcomes explained by catastrophic costs was 18% (95% CI = 6.9%–28%), similar to that of MDR TB (20% [95% CI = 14%–25%]). Sensitivity analyses demonstrated that existing catastrophic costs thresholds (≥10% or ≥15% of household annual income) were not associated with adverse outcome in our setting.
Study limitations included not measuring certain “dis-saving” variables (including selling household items) and gathering only 6 mo of costs-specific follow-up data for MDR TB patients.
Despite free TB care, having TB disease was expensive for impoverished TB patients in Peru. Incurring higher relative costs was associated with adverse TB outcome. The population attributable fraction indicated that catastrophic costs and MDR TB were associated with similar proportions of adverse outcomes. Thus TB is a socioeconomic as well as infectious problem, and TB control interventions should address both the economic and clinical aspects of this disease.
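The catastrophic-cost threshold used in the study reduces to a one-line check: do total (direct plus indirect) TB-related costs reach a given share of annual household income? A minimal sketch, with a hypothetical function name and illustrative figures rather than study data:

```python
def is_catastrophic(total_costs, annual_income, threshold=0.20):
    """Flag a TB-affected household whose total (direct + indirect)
    TB-related costs reach the given share of annual household income."""
    if annual_income <= 0:
        raise ValueError("annual income must be positive")
    return total_costs / annual_income >= threshold
```

Varying the `threshold` parameter (0.10, 0.15, 0.20) is exactly what the sensitivity analyses above did when comparing candidate definitions; for example, `is_catastrophic(1200, 5000)` is true (costs are 24% of income), while `is_catastrophic(600, 5000)` is false.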
Editors' Summary
Caused by the infectious microbe Mycobacterium tuberculosis, tuberculosis (or TB) is a global health problem. In 2012, an estimated 8.6 million people fell ill with TB, and 1.3 million were estimated to have died because of the disease. Poverty is widely recognized as an important risk factor for TB, and developing nations shoulder a disproportionate burden of both poverty and TB disease. For example, in Lima (the capital of Peru), the incidence of TB follows the poverty map, sparing residents living in rich areas of the city while spreading among poorer residents that live in overcrowded households.
The Peruvian government, non-profit organizations, and the World Health Organization (WHO) have extended healthcare programs to provide free diagnosis and treatment for TB and drug-resistant strains of TB in Peru, but rates of new TB cases remain high. For example, in Ventanilla (an area of 16 shantytowns located in northern Lima), the rate of new infections during the study period, 162 cases per 100,000 people per year, was higher than the national average. About one-third of the 277,895 residents of Ventanilla live on under US$1 per day.
Why Was This Study Done?
Poverty increases the risks associated with contracting TB infection, but the disease also affects the most economically productive age group, and the income of TB-affected households often decreases post-diagnosis, exacerbating poverty. A recent WHO consultation report proposed a target of eradicating catastrophic costs for TB-affected families by 2035, but hidden TB-related costs remain understudied, and there is no international consensus defining catastrophic costs incurred by patients and households affected by TB. Lost income and the cost of transport are among hidden costs associated with free treatment programs; these costs and their potential impact on patients and their households are not well defined. Here the researchers sought to clarify and characterize TB-related costs and explore whether there is a relationship between the hidden costs associated with free TB treatment programs and the likelihood of completing treatment and becoming cured of TB.
What Did the Researchers Do and Find?
Over a seven-year period (2002–2009), the researchers recruited 876 study participants with TB diagnosed at health posts located in Ventanilla. To provide a comparative control group, a sample of 487 healthy individuals was also recruited to participate. Participants were interviewed prior to treatment, and households' TB-related direct expenses and indirect expenses (lost income attributed to TB) were recorded every 2–4 wk. Data were collected during scheduled household visits.
TB patients were poorer than controls, and analysis of the data showed that accessing free TB care was expensive for TB patients, especially those with multi-drug-resistant (MDR) TB. Despite the free care, total expenses were similar before and during treatment (1.1 versus 1.2 times the same household's monthly income). Even though direct expenses (for example, costs of medical examinations and medicines other than anti-TB therapy) were lower in the poorest households, their total expenses (direct and indirect) made up a greater proportion of their household annual income: 48% for the poorest households compared to 27% in the least-poor households.
The researchers defined costs that were equal to or above one-fifth (20%) of household annual income as catastrophic because this threshold marked the greatest association with adverse treatment outcomes such as death, abandoning treatment, failing to respond to treatment, or TB recurrence. By calculating the population attributable fraction—the proportional reduction in population adverse treatment outcomes that could occur if a risk factor was reduced to zero—the authors estimate that adverse TB outcomes explained by catastrophic costs and MDR TB were similar: 18% for catastrophic costs and 20% for MDR TB.
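The population attributable fraction described above can be approximated, for illustration, with Levin's classic formula from exposure prevalence and relative risk. This is a simplified, unadjusted sketch; the paper reports adjusted estimates from regression models, so the numbers will not match exactly:

```python
def attributable_fraction(p_exposed, rr):
    """Levin's population attributable fraction: the share of outcomes in
    the whole population attributable to an exposure, given the exposure
    prevalence (p_exposed) and the relative risk (rr) of the exposed."""
    excess = p_exposed * (rr - 1.0)
    return excess / (1.0 + excess)
```

Plugging in the study's headline figures, `attributable_fraction(0.39, 1.7)` gives roughly 0.21, in the neighbourhood of the adjusted 18% reported; the gap reflects the covariate adjustment in the published analysis.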
What Do These Findings Mean?
The findings of this study indicate a potential role for social protection as a means to improve TB disease control and health, as well as defining a novel, evidence-based threshold for catastrophic costs for TB-affected households of 20% or more of annual income. Addressing the economic impact of diagnosis and treatment in impoverished communities may increase the odds of curing TB.
Study limitations included only six months of follow-up data being gathered on costs for each participant and not recording “dissavings,” such as selling of household items in response to financial shock. Because the study was observational, the authors aren't able to determine the direction of the association between catastrophic costs and TB outcome. Even so, the study indicates that TB is a socioeconomic as well as infectious problem, and that TB control interventions should address both the economic and clinical aspects of the disease.
Additional Information
Please access these websites via the online version of this summary.
The World Health Organization provides information on all aspects of tuberculosis, including the Global Tuberculosis Report 2013
The US Centers for Disease Control and Prevention has information about tuberculosis
Médecins Sans Frontières's TB&ME blog provides patients' stories of living with MDR TB
TB Alert, a UK-based charity that promotes TB awareness worldwide, has information on TB in several European, African, and Asian languages
More information is available about the Innovation For Health and Development (IFHAD) charity and its research team's work in Peru
PMCID: PMC4098993  PMID: 25025331
2.  Familial Identification: Population Structure and Relationship Distinguishability 
PLoS Genetics  2012;8(2):e1002469.
With the expansion of offender/arrestee DNA profile databases, genetic forensic identification has become commonplace in the United States criminal justice system. Implementation of familial searching has been proposed to extend forensic identification to family members of individuals with profiles in offender/arrestee DNA databases. In familial searching, a partial genetic profile match between a database entrant and a crime scene sample is used to implicate genetic relatives of the database entrant as potential sources of the crime scene sample. In addition to concerns regarding civil liberties, familial searching poses unanswered statistical questions. In this study, we define confidence intervals on estimated likelihood ratios for familial identification. Using these confidence intervals, we consider familial searching in a structured population. We show that relatives and unrelated individuals from population samples with lower gene diversity over the loci considered are less distinguishable. We also consider cases where the most appropriate population sample for the individuals in question is unknown. We find that as a less appropriate population sample, and thus allele frequency distribution, is assumed, relatives and unrelated individuals become more difficult to distinguish. In addition, we show that relationship distinguishability increases with the number of markers considered, but decreases for more distant genetic familial relationships. All of these results indicate that caution is warranted in the application of familial searching in structured populations, such as in the United States.
Author Summary
The forensic identification of criminal suspects through DNA profiling is now common in the United States. Indirect identification by familial DNA profiling is increasingly proposed to extend the utility of DNA databases. In familial searching, a DNA profile from a crime scene partially matches a database profile entry, implicating close relatives of the partial match. While the basic principles behind familial searching methods are simple and elegant, statistical confidence that a partially matched profile belongs to a true genetic relative has not been fully explored. Here, we derive relative identification likelihood ratio statistics and consider how the ability of familial searching to distinguish relatives from unrelated individuals varies over population samples and is affected by inaccurately assumed population background. We observe lower relationship distinguishability for population samples with less identifying information in the genetic loci considered. Additionally, we show that relationship distinguishability decreases with discordance between true and assumed population samples. These results indicate that, if an inappropriate genetic population group is assumed, individuals from certain marginalized groups may be disproportionately more often subject to false familial identification. Our results suggest that care is warranted in the use and interpretation of familial searching forensic techniques.
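The per-locus likelihood ratios underlying familial searching can be illustrated for the simplest case: a biallelic locus at which both profiles are heterozygous, averaging over the identity-by-descent (IBD) priors (1/4, 1/2, 1/4) for full siblings. This is a textbook-style sketch, not the authors' derivation; the function name and the restriction to the heterozygote/heterozygote case are purely illustrative:

```python
def sib_lr_het_het(p):
    """Likelihood ratio (full siblings vs unrelated) at one biallelic
    locus where BOTH profiles are heterozygous, with allele frequencies
    p and 1 - p. Marginalises over IBD states with sibling priors
    (1/4, 1/2, 1/4) for sharing 0, 1, or 2 alleles IBD."""
    q = 1.0 - p
    p_ibd0 = (2 * p * q) ** 2   # two independent heterozygotes
    p_ibd1 = 2 * p * q * 0.5    # one allele shared IBD; the other matches w.p. (p + q)/2
    p_ibd2 = 2 * p * q          # genotypes identical by descent
    return (0.25 * p_ibd0 + 0.5 * p_ibd1 + 0.25 * p_ibd2) / p_ibd0
```

With equifrequent alleles the ratio is only 1.25, while rarer alleles give much larger ratios, echoing the paper's finding that population samples with lower gene diversity over the loci considered distinguish relatives from unrelated individuals less well.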
PMCID: PMC3276546  PMID: 22346758
3.  SNPs as Supplements in Simple Kinship Analysis or as Core Markers in Distant Pairwise Relationship Tests: When Do SNPs Add Value or Replace Well-Established and Powerful STR Tests? 
Genetic tests for kinship testing routinely reach likelihoods that provide virtual proof of the claimed relationship by typing microsatellites, commonly consisting of 12–15 standard forensic short tandem repeats (STRs). Single nucleotide polymorphisms (SNPs) have also been applied to kinship testing, but these binary markers are required in greater numbers than multiple-allele STRs. However, SNPs offer certain advantageous characteristics not found in STRs, including much higher mutational stability, good performance typing highly degraded DNA, and the ability to be readily up-scaled to very high marker numbers reaching over a million loci. This article outlines kinship testing applications where SNPs markedly improve the genetic data obtained. In particular, we explore the minimum number of SNPs required to confirm pairwise relationship claims in deficient pedigrees that typify missing persons' identification or war grave investigations, where commonly few surviving relatives are available for comparison and the DNA is highly degraded.
We describe the application of SNPs alongside STRs when incomplete profiles or allelic instability in STRs create ambiguous results, we review the use of high density SNP arrays when the relationship claim is very distant, and we outline simulations of kinship analyses with STRs supplemented with SNPs in order to estimate the practical limit of pairwise relationships that can be differentiated from random unrelated pairs from the same population.
The minimum number of SNPs for robust statistical inference of parent-offspring relationships through to those of second cousins (S-3–3) is estimated for both simple, single multiplex SNP sets and for subsets of million-SNP arrays.
There is considerable scope for resolving ambiguous STR results and for improving the statistical power of kinship analysis by adding small-scale SNP sets but where the pedigree is deficient the pairwise relationships must be relatively close. For more distant relationships it is possible to reduce chip-based SNP arrays from the million+ markers down to ∼7,000. However, such numbers indicate that current genotyping approaches will not be able to deliver sufficient data to resolve distant pairwise relationships from the limited DNA typical of the most challenging identification cases.
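The falling power for distant pairs follows from the expected kinship coefficient, which halves with each additional degree of relationship. A sketch, assuming the common degree convention (parent-offspring and full siblings as first degree, second cousins as fifth degree); the function name is illustrative:

```python
def expected_kinship(degree):
    """Expected kinship coefficient phi for a relationship of the given
    degree: phi = (1/2) ** (degree + 1). Degree 1 covers parent-offspring
    and full siblings (phi = 1/4); second cousins are degree 5 (phi = 1/64)."""
    if degree < 1:
        raise ValueError("degree must be >= 1")
    return 0.5 ** (degree + 1)
```

Expected sharing shrinks from 25% for first-degree pairs to about 1.6% for second cousins, which is why pairwise tests for such relationships need thousands of SNPs rather than 12–15 STRs.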
PMCID: PMC3375139  PMID: 22851936
STR; SNP; Indel; Relationship testing; Deficient pedigrees; High-density SNP arrays
4.  Panel of polymorphic heterologous microsatellite loci to genotype critically endangered Bengal tiger: a pilot study 
SpringerPlus  2014;3:4.
In India, six landscapes and source populations that are important for long-term conservation of Bengal tigers (Panthera tigris tigris) have been identified. Except for a few studies, little is known regarding the genetic structure and extent of gene flow among most of the tiger populations across India, as the majority of them are small, fragmented and isolated. Individual-based relationships are therefore required to understand the species' ecology and biology for planning effective conservation, and genetics-based individual identification has been widely used for this purpose. This, however, requires screening microsatellite loci and describing their characteristics using DNA from good-quality sources, so that the required number of loci can be selected and the genotyping error rate minimized. In the studies so far conducted on the Bengal tiger, only a small number of loci (n = 35) have been tested with a high-quality source of DNA, and information on locus-specific characteristics is lacking. The use of such characteristics has been strongly recommended in the literature to minimize the error rate, and by the International Society for Forensic Genetics (ISFG) for forensic purposes. Therefore, we describe for the first time locus-specific genetic and genotyping profile characteristics, crucial for population genetic studies, using a high-quality source of DNA of the Bengal tiger. We screened 39 heterologous microsatellite loci (Sumatran tiger, domestic cat, Asiatic lion and snow leopard) in captive individuals (n = 8), of which 21 loci are being reported for the first time in the Bengal tiger, providing an additional choice for selection. The mean relatedness coefficient (R = −0.143) indicates that the selected tigers were unrelated. Thirty-four loci were polymorphic, with the number of alleles ranging from 2 to 7 per locus, and the remaining five loci were monomorphic.
Based on the PIC values (> 0.500), and other characteristics, we suggest that 16 loci (3 to 7 alleles) be used for genetic and forensic study purposes. The probabilities of matching genotypes of unrelated individuals (3.692 × 10-19) and siblings (4.003 × 10-6) are within the values needed for undertaking studies in population genetics, relatedness, sociobiology and forensics.
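Match probabilities like those quoted above are, under Hardy-Weinberg and linkage equilibrium, products of per-locus genotype frequencies. A minimal sketch (the function name and example frequency are illustrative, not taken from the study):

```python
def match_probability(locus_genotype_freqs):
    """Random-match probability across independent loci: the product of
    the per-locus population frequencies of the observed genotype,
    assuming Hardy-Weinberg and linkage equilibrium."""
    prob = 1.0
    for f in locus_genotype_freqs:
        prob *= f
    return prob
```

For instance, 16 loci each carrying a genotype frequency near 0.07 give `match_probability([0.07] * 16)` of about 3.3 × 10⁻¹⁹, the same order of magnitude as the unrelated-individual figure quoted above.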
PMCID: PMC3895153  PMID: 24455462
Bengal tiger; Panthera tigris tigris; Heterologous microsatellite loci; Genotyping panel
5.  Genetic parameters of five new European Standard Set STR loci (D10S1248, D22S1045, D2S441, D1S1656, D12S391) in the population of eastern Croatia 
Croatian Medical Journal  2012;53(5):409-415.
The first aim of this study was to establish allele frequencies and genetic parameters in the eastern Croatia population and to compare them with those in other populations. The second aim was to compare the genetic profiles obtained with different forensic kits amplifying the same genetic markers.
Blood samples of 217 unrelated individuals from eastern Croatia were genotyped using AmpFlSTR NGM kit. Allele distribution and other genetic parameters were determined for 15 short tandem repeat (STR) loci, including the 5 loci recently added to the European Standard Set (ESS) of STR loci (D10S1248, D22S1045, D2S441, D1S1656, and D12S391). Ninety-six samples underwent duplicate analysis using AmpFlSTR Identifiler kit.
Power of discrimination was highest for the two new ESS loci, D1S1656 (0.97254) and D12S391 (0.97339). Comparison of allele frequencies for the 5 new ESS loci in our sample with previously published population data showed a significant difference from the Maghreb population at D2S441 and from the American Caucasian population at D1S1656. Comparison of allele frequencies for the standard 10 STR loci with data from all the neighboring populations showed a significant difference only from the Albanian population (at D2S1338, D18S51, and TH01). Discordant genotypes were observed in 5 (5.2%) samples at a single locus when amplified with both the AmpFlSTR NGM and AmpFlSTR Identifiler kits.
New ESS STR loci are highly polymorphic and short, and therefore very useful for the analysis of challenging forensic samples. DNA samples purposed for establishing databases should be routinely amplified in duplicate.
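The power of discrimination reported above can be computed from allele frequencies under Hardy-Weinberg equilibrium as one minus the probability that two random individuals share a genotype. A sketch with an illustrative helper function (not the kit vendor's software):

```python
from itertools import combinations

def power_of_discrimination(allele_freqs):
    """Power of discrimination for one locus: 1 minus the probability
    that two random individuals share a genotype, with genotype
    frequencies derived from allele frequencies under HWE."""
    geno = [p * p for p in allele_freqs]                        # homozygotes
    geno += [2 * p * q for p, q in combinations(allele_freqs, 2)]  # heterozygotes
    return 1.0 - sum(g * g for g in geno)
```

A locus with two equifrequent alleles yields only 0.625, while more numerous, rarer alleles push the value toward the ~0.97 figures reported for D1S1656 and D12S391, which is why highly polymorphic loci are preferred.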
PMCID: PMC3496899  PMID: 23100202
6.  Pregnancy Weight Gain and Childhood Body Weight: A Within-Family Comparison 
PLoS Medicine  2013;10(10):e1001521.
David Ludwig and colleagues examine the within-family relationship between pregnancy weight gain and the offspring's childhood weight gain, thereby reducing the influence of genes and environment.
Please see later in the article for the Editors' Summary
Excessive pregnancy weight gain is associated with obesity in the offspring, but this relationship may be confounded by genetic and other shared influences. We aimed to examine the association of pregnancy weight gain with body mass index (BMI) in the offspring, using a within-family design to minimize confounding.
Methods and Findings
In this population-based cohort study, we matched records of all live births in Arkansas with state-mandated data on childhood BMI collected in public schools (from August 18, 2003 to June 2, 2011). The cohort included 42,133 women who had more than one singleton pregnancy and their 91,045 offspring. We examined how differences in weight gain that occurred during two or more pregnancies for each woman predicted her children's BMI and odds ratio (OR) of being overweight or obese (BMI≥85th percentile) at a mean age of 11.9 years, using a within-family design. For every additional kg of pregnancy weight gain, childhood BMI increased by 0.0220 (95% CI 0.0134–0.0306, p<0.0001) and the OR of overweight/obesity increased by 1.007 (CI 1.003–1.012, p = 0.0008). Variations in pregnancy weight gain accounted for a 0.43 kg/m2 difference in childhood BMI. After adjustment for birth weight, the association of pregnancy weight gain with childhood BMI was attenuated but remained statistically significant (0.0143 kg/m2 per kg of pregnancy weight gain, CI 0.0057–0.0229, p = 0.0007).
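The within-family design amounts to comparing each pregnancy's weight gain against the same mother's own average, so that stable genetic and household influences cancel out. A minimal sketch of that centring step with hypothetical data (the published analysis used full regression models, not this bare computation):

```python
def within_family_deviations(mother_ids, weight_gains):
    """Centre each mother's pregnancy weight gains on her own mean, the
    key step of a within-family (fixed-effects) comparison that removes
    influences shared by siblings of the same mother."""
    sums, counts = {}, {}
    for m, g in zip(mother_ids, weight_gains):
        sums[m] = sums.get(m, 0.0) + g
        counts[m] = counts.get(m, 0) + 1
    means = {m: sums[m] / counts[m] for m in sums}
    return [g - means[m] for m, g in zip(mother_ids, weight_gains)]
```

For mothers `[1, 1, 2, 2]` with gains `[10, 14, 8, 12]` kg, the deviations are `[-2, 2, -2, 2]`; regressing child BMI on these deviations isolates the within-mother effect of extra weight gain.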
High pregnancy weight gain is associated with increased body weight of the offspring in childhood, and this effect is only partially mediated through higher birth weight. Translation of these findings to public health obesity prevention requires additional study.
Editors' Summary
Childhood obesity has become a worldwide epidemic. For example, in the United States the number of obese children has more than doubled in the past 30 years: 7% of American children aged 6–11 years were obese in 1980, compared to nearly 18% in 2010. Because of the rising levels of obesity, the current generation of children may, for the first time in 200 years, have a shorter life span than their parents.
Childhood obesity has both immediate and long-term effects on health. The initial problems are usually psychological. Obese children often experience discrimination, leading to low self-esteem and depression. Their physical health also suffers. They are more likely to be at risk of cardiovascular disease from high cholesterol and high blood pressure. They may also develop pre-diabetes or type II diabetes. In the long term, obese children tend to become obese adults, putting them at risk of premature death from stroke, heart disease, or cancer.
There are many factors that lead to childhood obesity and they often act in combination. A major risk factor, especially for younger children, is having at least one obese parent. The challenge lies in unravelling the complex links between the genetic and environmental factors that are likely to be involved.
Why Was This Study Done?
Several studies have shown that a child's weight is influenced by his/her mother's weight before pregnancy and her weight gain during pregnancy. An obese mother, or a mother who puts on more pregnancy weight than average, is more likely to have an obese child.
One explanation for the effects of pregnancy weight gain is that the mother's overeating directly affects the baby's development. It may change the baby's brain and metabolism in such a way as to increase the child's long-term risk of obesity. Animal studies have confirmed that the offspring of overfed rats show these kinds of physiological changes. However, another possible explanation is that mother and baby share a similar genetic make-up and environment so that a child becomes obese from inheriting genetic risk factors, and growing up in a household where being overweight is the norm.
The studies in humans that have been carried out to date have not been able to distinguish between these explanations. Some have given conflicting results. The aim of this study was therefore to look for evidence of links between pregnancy weight gain and children's weight, using an approach that would separate the impact of genetic and environmental factors from a direct effect on the developing baby.
What Did the Researchers Do and Find?
The researchers examined data from the population of the US state of Arkansas recorded between 2003 and 2011. They looked at the health records of over 42,000 women who had given birth to more than one child during this period. This gave them information about how much weight the women had gained during each of their pregnancies. The researchers also looked at the school records of the children, over 91,000 in total, which included the children's body mass index (BMI, which factors in both height and weight). They analyzed the data to see if there was a link between the mothers' pregnancy weight gain and the child's BMI at around 12 years of age. Most importantly, they looked at these links within families, comparing children born to the same mother. The rationale for this approach was that these children would share a similar genetic make-up and would have grown up in similar environments. By taking genetics and environment into account in this manner, any remaining evidence of an impact of pregnancy weight gain on the children's BMI would have to be explained by other factors.
The results showed that the amount of weight each mother gained in pregnancy predicted her children's BMI and the likelihood of her children being overweight or obese. For every additional kg the mother gained during pregnancy, the children's BMI increased by 0.022. The children of mothers who put on the most weight had a BMI that was on average 0.43 higher than the children whose mothers had put on the least weight.
The study leaves some questions unanswered, including whether the mother's weight before pregnancy makes a difference to her children's BMI. The researchers were not able to obtain these measurements, nor the weights of the fathers. Other factors that weren't measured might also explain the links that were found.
What Do These Findings Mean?
This study shows that mothers who gain excessive weight during pregnancy increase the risk of their child becoming obese. This appears to be partly due to a direct effect on the developing baby.
These findings represent a significant public health concern, even though the impact on an individual basis is relatively small: excessive pregnancy weight gain could contribute to several hundred thousand cases of childhood obesity worldwide. Importantly, the findings also suggest that some cases could be prevented by measures to limit excessive weight gain during pregnancy. Such an approach could prove effective, as most mothers will not want to damage their child's health and might therefore be highly motivated to change their behavior. However, because inadequate weight gain during pregnancy can also adversely affect the developing fetus, it will be essential for women to receive clear information about what constitutes optimal weight gain during pregnancy.
Additional Information
Please access these websites via the online version of this summary.
The US Centers for Disease Control and Prevention provide Childhood Obesity Facts
The UK National Health Service article “How much weight will I put on during my pregnancy?” provides information on pregnancy and weight gain and links to related resources
PMCID: PMC3794857  PMID: 24130460
7.  Robust relationship inference in genome-wide association studies 
Bioinformatics  2010;26(22):2867-2873.
Motivation: Genome-wide association studies (GWASs) have been widely used to map loci contributing to variation in complex traits and risk of diseases in humans. Accurate specification of familial relationships is crucial for family-based GWAS, as well as in population-based GWAS with unknown (or unrecognized) family structure. The family structure in a GWAS should be routinely investigated using the SNP data prior to the analysis of population structure or phenotype. Existing algorithms for relationship inference share a major weakness: they estimate allele frequencies at each SNP from the entire sample, under the strong and often untenable assumption of a homogeneous population structure.
Results: Here, we present a rapid algorithm for relationship inference using high-throughput genotype data typical of GWAS that allows the presence of unknown population substructure. The relationship of any pair of individuals can be precisely inferred by robust estimation of their kinship coefficient, independent of sample composition or population structure (sample invariance). We present simulation experiments to demonstrate that the algorithm has sufficient power to provide reliable inference on millions of unrelated pairs and thousands of relative pairs (up to 3rd-degree relationships). Application of our robust algorithm to HapMap and GWAS datasets demonstrates that it performs properly even under extreme population stratification, while algorithms assuming a homogeneous population give systematically biased results. Our extremely efficient implementation performs relationship inference on millions of pairs of individuals in a matter of minutes, dozens of times faster than the most efficient existing algorithm known to us.
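The frequency-free flavour of such robust estimation can be illustrated with a simplified sketch based on heterozygote concordance and opposite-homozygote counts. This shows the general idea only and is not KING's exact estimator; the function name and the genotype coding are assumptions, and the paper and software should be consulted for the real formulae:

```python
def robust_kinship(geno_i, geno_j):
    """Simplified pairwise kinship sketch from genotype vectors coded
    0/1/2 (copies of one allele). It uses only counts of loci where both
    individuals are heterozygous and where they are opposite homozygotes,
    so no population allele-frequency estimates enter the calculation."""
    n_het_het = n_opp_hom = n_het_i = n_het_j = 0
    for a, b in zip(geno_i, geno_j):
        if a == 1:
            n_het_i += 1
        if b == 1:
            n_het_j += 1
        if a == 1 and b == 1:
            n_het_het += 1
        if {a, b} == {0, 2}:  # opposite homozygotes
            n_opp_hom += 1
    if n_het_i + n_het_j == 0:
        raise ValueError("no heterozygous loci in either individual")
    return (n_het_het - 2.0 * n_opp_hom) / (n_het_i + n_het_j)
```

Identical genotype vectors give 0.5, the monozygotic-twin kinship, while pairs rich in opposite homozygotes fall to zero or below, flagging unrelated individuals regardless of population composition.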
Availability: Our robust relationship inference algorithm is implemented in a freely available software package, KING, available for download at∼wc9c/KING.
Supplementary information: Supplementary data are available at Bioinformatics online.
PMCID: PMC3025716  PMID: 20926424
8.  Parenthood—A Contributing Factor to Childhood Obesity 
The prevalence of childhood obesity and its complications has increased worldwide. Parental status may be associated with children's health outcomes including their eating habits, body weight and blood cholesterol. The National Health and Nutrition Examination Survey (NHANES) for the years 1988–1994 provided a unique opportunity for matching parents to children, enabling analyses of joint demographics, racial differences and health indicators. Specifically, the NHANES III data, 1988–1994, of 219 households with single parents and 780 dual-parent households were analyzed as predictors for the primary outcome variables of children's Body Mass Index (BMI), dietary nutrient intakes and blood cholesterol. Children of single-parent households were significantly (p < 0.01) more overweight than children of dual-parent households. Total calorie and saturated fatty acid intakes were higher among children of single-parent households than dual-parent households (p < 0.05). On average, Black children were more overweight (p < 0.04) than children of other races. The study results implied a strong relationship between single-parent status and excess weight in children. Further studies are needed to explore the dynamics of single-parent households and their influence on childhood diet and obesity. Parental involvement in the development of school- and community-based obesity prevention programs is suggested for effective health initiatives. Economic constraints and cultural preferences may be communicated directly by family involvement in these much-needed public health programs.
PMCID: PMC2922726  PMID: 20717539
children’s diet; childhood obesity; NHANES; single-parent households; BMI; blood-cholesterol
9.  Effect of Household-Based Drinking Water Chlorination on Diarrhoea among Children under Five in Orissa, India: A Double-Blind Randomised Placebo-Controlled Trial 
PLoS Medicine  2013;10(8):e1001497.
Sophie Boisson and colleagues conducted a double-blind, randomized placebo-controlled trial in Orissa, a state in eastern India, to evaluate the effect of household water treatment in preventing diarrheal illnesses in children under five years of age.
Please see later in the article for the Editors' Summary
Boiling, disinfecting, and filtering water within the home can improve the microbiological quality of drinking water among the hundreds of millions of people who rely on unsafe water supplies. However, the impact of these interventions on diarrhoea is unclear. Most studies using open trial designs have reported a protective effect on diarrhoea, while blinded studies of household water treatment in low-income settings have found no such effect. However, none of those studies were powered to detect an impact among children under five, and participants were followed up over short periods of time. The aim of this study was to measure the effect of in-home water disinfection on diarrhoea among children under five.
Methods and Findings
We conducted a double-blind randomised controlled trial between November 2010 and December 2011. The study included 2,163 households and 2,986 children under five in rural and urban communities of Orissa, India. The intervention consisted of an intensive promotion campaign and free distribution of sodium dichloroisocyanurate (NaDCC) tablets during bi-monthly household visits. An independent evaluation team visited households monthly for one year to collect health data and water samples. The primary outcome was the longitudinal prevalence of diarrhoea (3-day point prevalence) among children aged under five. Weight-for-age was also measured at each visit to assess its potential as a proxy marker for diarrhoea. Adherence was monitored each month through caregivers' reports and the presence of residual free chlorine in the child's drinking water at the time of visit. On 20% of the total household visits, children's drinking water was assayed for thermotolerant coliforms (TTC), an indicator of faecal contamination. The primary analysis was on an intention-to-treat basis. Binomial regression with a log link function and robust standard errors was used to compare the prevalence of diarrhoea between arms. We used generalised estimating equations to account for clustering at the household level. The impact of the intervention on weight-for-age z scores (WAZ) was analysed using random effects linear regression.
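The core comparison described above, a ratio of diarrhoea prevalences between trial arms with a confidence interval on the log scale, can be sketched in a simplified form. This sketch ignores within-household clustering (which the trial handled with GEE and robust standard errors), and the counts below are hypothetical, not the trial's data:

```python
import math

def prevalence_ratio(cases_1, days_1, cases_0, days_0, z=1.96):
    """Crude longitudinal prevalence ratio with a Wald CI on the log scale.

    A simplified illustration of comparing 3-day diarrhoea point prevalence
    between trial arms; it omits covariate adjustment and the clustering
    correction used in the study.
    """
    p1 = cases_1 / days_1  # prevalence in the intervention arm
    p0 = cases_0 / days_0  # prevalence in the control arm
    pr = p1 / p0
    # Delta-method SE of log(PR) for independent binomial proportions
    se = math.sqrt((1 - p1) / cases_1 + (1 - p0) / cases_0)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts of child-days with reported diarrhoea per arm
pr, lo, hi = prevalence_ratio(713, 42_195, 734, 42_196)
print(f"PR = {pr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With counts like these, the interval straddles 1.0, which is how a null result such as the trial's adjusted PR of 0.95 (95% CI 0.79–1.13) is read.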
Over the follow-up period, 84,391 child-days of observations were recorded, representing 88% of total possible child-days of observation. The longitudinal prevalence of diarrhoea among intervention children was 1.69% compared to 1.74% among controls. After adjusting for clustering within household, the prevalence ratio of the intervention to control was 0.95 (95% CI 0.79–1.13). The mean WAZ was similar among children of the intervention and control groups (−1.586 versus −1.589, respectively). Among intervention households, 51% reported their child's drinking water to be treated with the tablets at the time of visit, though only 32% of water samples tested positive for residual chlorine. Faecal contamination of drinking water was lower among intervention households than controls (geometric mean TTC count of 50 [95% CI 44–57] per 100 ml compared to 122 [95% CI 107–139] per 100 ml among controls [p<0.001] [n = 4,546]).
Our study was designed to overcome the shortcomings of previous double-blinded trials of household water treatment in low-income settings. The sample size was larger, the follow-up period longer, both urban and rural populations were included, and adherence and water quality were monitored extensively over time. These results provide no evidence that the intervention was protective against diarrhoea. Low compliance and modest reduction in water contamination may have contributed to the lack of effect. However, our findings are consistent with other blinded studies of similar interventions and raise additional questions about the actual health impact of household water treatment under these conditions.
Trial Registration NCT01202383
Editors' Summary
Millennium Development Goal 7 calls for halving the proportion of the global population without sustainable access to safe drinking water between 1990 and 2015. Although this target was met in 2010, according to the latest figures, 768 million people worldwide still rely on unimproved drinking water sources. Access to clean drinking water is integral to good health and a key strategy in reducing diarrhoeal illness: currently, 1.3 million children under five years of age die of diarrhoeal illnesses every year, with a sixth of such deaths occurring in one country—India. Although India has recently made substantial progress in improving water supplies throughout the country, almost 90% of the rural population does not have a water connection to their house, and drinking water supplies throughout the country are extensively contaminated with human waste. A strategy internationally referred to as Household Water Treatment and Safe Storage (HWTS), which involves people boiling, chlorinating, and filtering water at home, has been recommended by the World Health Organization and UNICEF to improve water quality at the point of delivery.
Why Was This Study Done?
The WHO and UNICEF strategy to promote HWTS is based on previous studies from low-income settings that found that such interventions could reduce diarrhoeal illnesses by 30%–40%. However, these studies had several limitations, including reporting bias, short follow-up periods, and small sample sizes; importantly, blinded studies (in which both the study participants and researchers are unaware of which participants are receiving the intervention or the control) have found no evidence that HWTS is protective against diarrhoeal illnesses. So the researchers conducted a blinded study (a double-blind, randomized placebo-controlled trial) in Orissa, a state in eastern India, to address those shortcomings and evaluate the effect of household water treatment in preventing diarrhoeal illnesses in children under five years of age.
What Did the Researchers Do and Find?
The researchers conducted their study in 11 informal settlements (where the inhabitants do not benefit from public water or sewers) in the state's capital city and also in 20 rural villages. 2,163 households were randomized to receive the intervention—the promotion and free distribution of sodium dichloroisocyanurate (chlorine) disinfection tablets with instruction on how to use them—or placebo tablets that were similar in appearance and had the same effervescent base as the chlorine tablets. Trained field workers visited households every month for 12 months (between December 2010 and December 2011) to record whether any child had experienced diarrhoea in the previous three days (as reported by the primary care giver). The researchers tested compliance with the intervention by asking participants if they had treated the water and also by testing for chlorine in the water.
Using these methods, the researchers found that over the 12-month follow-up period, the longitudinal prevalence of diarrhoea among children in the intervention group was 1.69% compared to 1.74% in the control group, a non-significant finding (a finding that could have happened by chance). There was also no difference in diarrhoea prevalence among other household members in the two groups and no difference in weight for age z scores (a measurement of growth) between children in the two groups. The researchers also found that although just over half (51%) of households in the intervention group reported treating their water, on testing, only 32% of water samples tested positive for chlorine. Finally, the researchers found that water quality (as measured by thermotolerant coliforms, TTCs) was better in the intervention group than the control group.
What Do These Findings Mean?
These findings suggest that treating water with chlorine tablets had no effect on the prevalence of diarrhoea in either children under five years of age or other household members in Orissa, India. However, poor compliance was a major issue, with only a third of households in the intervention group confirmed as treating their water with chlorine tablets. Furthermore, these findings are limited in that the prevalence of diarrhoea was lower than expected, which may also have reduced the power to detect a potential effect of the intervention. Nevertheless, this study raises questions about the health impact of household water treatment and highlights the key challenge of poor compliance with public health interventions.
Additional Information
Please access these websites via the online version of this summary at
The website of the World Health Organization has a section dedicated to household water treatment and safe storage, including a network to promote the use of HWTS and a toolkit to measure HWTS
The Water Institute hosts the communications portal for the International Network on Household Water Treatment and Safe Storage
PMCID: PMC3747993  PMID: 23976883
10.  Examining Periodontal Disease Disparities Among U.S. Adults 20 Years of Age and Older: NHANES III (1988–1994) and NHANES 1999–2004 
Public Health Reports  2012;127(5):497-506.
We examined disparities in periodontal disease in U.S. adults according to age, sex, race/ethnicity, country of birth, education, income, and poverty-income ratio within and between the third National Health and Nutrition Examination Survey (NHANES III, 1988–1994) and NHANES 1999–2004.
We assessed disparities and changes therein using prevalence differences and ratios, as well as the Symmetrized Theil Index (STI). While these measures document disparities between pairs of population subgroups, and changes in relative disparities between surveys, the STI is a summary measure of health disparities that also tracks between-group disparities relative to the total population.
Prevalence differences and ratios for periodontitis, mean pocket depth (PD), and mean clinical attachment loss (CAL) suggest that periodontal disease significantly decreased between NHANES III and NHANES 1999–2004 (p<0.01). However, the STI for the prevalence of periodontitis suggests that disparities significantly increased within categories of race/ethnicity, country of birth, and education in NHANES 1999–2004 compared with NHANES III. These findings were corroborated for mean PD and mean CAL (p<0.001): the overall STI significantly increased for mean PD from 4.53% in NHANES III to 11.02% in NHANES 1999–2004 and for mean CAL for teeth with CAL >0 from 31.73% in NHANES III to 43.36% in NHANES 1999–2004.
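A between-group symmetrized Theil index of the kind reported above can be sketched as follows. This assumes the common formulation in the health-disparities literature, the average of the Theil-T index and the mean log deviation, which reduces to 0.5 · Σ (s_j − p_j) ln(s_j/p_j) over groups j; the paper's exact estimator may differ in weighting, and the subgroup sizes and prevalences below are hypothetical:

```python
import math

def symmetrized_theil(populations, rates):
    """Between-group symmetrized Theil index.

    Averages the Theil-T index and the mean log deviation across groups,
    where p_j is group j's population share and s_j its share of the total
    disease burden. The index is 0 when all group rates are equal and grows
    with between-group disparity.
    """
    total_pop = sum(populations)
    burden = [n * r for n, r in zip(populations, rates)]  # cases per group
    total_burden = sum(burden)
    sti = 0.0
    for n, b in zip(populations, burden):
        p = n / total_pop      # population share of group j
        s = b / total_burden   # burden share of group j
        sti += 0.5 * (s - p) * math.log(s / p)
    return sti

# Hypothetical subgroup sizes and periodontitis prevalences
print(symmetrized_theil([500, 300, 200], [0.05, 0.08, 0.12]))
```

Because the index depends only on shares, it can rise between surveys even while every group's absolute prevalence falls, which is the pattern the study reports.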
Our findings suggest that periodontal disease significantly decreased between NHANES III and NHANES 1999–2004, both in the total population and across selected characteristics of the population. However, inequalities in periodontal disease increased within population groups in NHANES 1999–2004 compared with NHANES III. These findings call attention to the absolute and relative differences not only between population groups across surveys, but also within population groups within and between surveys.
PMCID: PMC3407849  PMID: 22942467
11.  Rapid Scaling Up of Insecticide-Treated Bed Net Coverage in Africa and Its Relationship with Development Assistance for Health: A Systematic Synthesis of Supply, Distribution, and Household Survey Data 
PLoS Medicine  2010;7(8):e1000328.
Stephen Lim and colleagues use several sources of data to estimate the changes in distribution of insecticide-treated bed nets across Africa between 2000 and 2008, and to analyze the link between development assistance and net coverage.
Development assistance for health (DAH) targeted at malaria has risen exponentially over the last 10 years, with a large fraction of these resources directed toward the distribution of insecticide-treated bed nets (ITNs). Identifying countries that have been successful in scaling up ITN coverage and understanding the role of DAH is critical for making progress in countries where coverage remains low. Sparse and inconsistent sources of data have prevented robust estimates of the coverage of ITNs over time.
Methods and Principal Findings
We combined data from manufacturer reports of ITN deliveries to countries, National Malaria Control Program (NMCP) reports of ITNs distributed to health facilities and operational partners, and household survey data using Bayesian inference on a deterministic compartmental model of ITN distribution. For 44 countries in Africa, we calculated (1) ITN ownership coverage, defined as the proportion of households that own at least one ITN, and (2) ITN use in children under 5 coverage, defined as the proportion of children under the age of 5 years who slept under an ITN. Using regression, we examined the relationship between cumulative DAH targeted at malaria between 2000 and 2008 and the change in national-level ITN coverage over the same time period. In 1999, assuming that all ITNs are owned and used in populations at risk of malaria, mean coverage of ITN ownership and use in children under 5 among populations at risk of malaria were 2.2% and 1.5%, respectively, and were uniformly low across all 44 countries. In 2003, coverage of ITN ownership and use in children under 5 was 5.1% (95% uncertainty interval 4.6% to 5.7%) and 3.7% (2.9% to 4.9%); in 2006 it was 17.5% (16.4% to 18.8%) and 12.9% (10.8% to 15.4%); and by 2008 it was 32.8% (31.4% to 34.4%) and 26.6% (22.3% to 30.9%), respectively. In 2008, four countries had ITN ownership coverage of 80% or greater; six countries were between 60% and 80%; nine countries were between 40% and 60%; 12 countries were between 20% and 40%; and 13 countries had coverage below 20%. Excluding four outlier countries, each US$1 per capita in malaria DAH was associated with a significant increase in ITN household coverage and ITN use in children under 5 coverage of 5.3 percentage points (3.7 to 6.9) and 4.6 percentage points (2.5 to 6.7), respectively.
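The regression step described above, relating cumulative per capita malaria DAH to the change in ITN coverage, can be illustrated with a closed-form simple linear regression. The country-level numbers below are hypothetical, not the study's data, and the sketch omits the outlier exclusion and uncertainty intervals the authors report:

```python
def ols_slope(x, y):
    """Least-squares intercept and slope for y = a + b*x via the
    closed-form normal equations for simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical country-level data: cumulative malaria DAH (US$ per capita)
# versus change in ITN household ownership coverage (percentage points)
dah     = [0.5, 1.0, 2.0, 3.5, 5.0, 6.0]
gain_pp = [3.0, 6.0, 10.0, 19.0, 26.0, 32.0]
intercept, slope = ols_slope(dah, gain_pp)
print(f"{slope:.1f} percentage points per US$1 per capita")
```

The slope is the quantity reported in the abstract: the estimated coverage gain, in percentage points, associated with each additional US$1 per capita of malaria DAH.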
Rapid increases in ITN coverage have occurred in some of the poorest countries, but coverage remains low in large populations at risk. DAH targeted at malaria can lead to improvements in ITN coverage; inadequate financing may be a reason for lack of progress in some countries.
Please see later in the article for the Editors' Summary
Editors' Summary
Malaria is a major global public-health problem. Nearly half of the world's population is at risk of this parasitic disease, which kills about one million people (mainly children living in sub-Saharan Africa) every year. Malaria is transmitted to people through the bites of infected night-flying mosquitoes. Soon after entering the human body, the parasite begins to replicate in red blood cells, bursting out every 2–3 days and infecting more red blood cells. The parasite's presence in the bloodstream causes malaria's characteristic fever and can cause fatal organ damage. Malaria can be prevented by controlling the mosquitoes that spread the parasite and by sleeping under insecticide-treated bed nets (ITNs) to avoid mosquito bites. In trials, ITN use reduced deaths in young children by about 20%. Consequently, the widespread provision of ITNs is a mainstay of the World Health Organization's efforts to control malaria, and in 2005 the World Health Assembly agreed to a target of providing ITNs for 80% of the people at risk of malaria by 2010.
Why Was This Study Done?
Development assistance for health (DAH) targeted at malaria has increased considerably over the past decade. Much of this resource has been directed toward increasing ITN coverage, but has it been used effectively? To answer this question and to track progress toward universal ITN provision, reliable estimates of ITN coverage are critical. Most attempts to quantify ITN coverage have relied on single sources of data such as manufacturers' records of ITNs supplied to individual countries, National Malaria Control Program reports on ITN distribution, or household surveys of ITN use. Because each of these data sources has weaknesses, robust estimates of ITN coverage over time cannot be calculated from a single data source. In this study, the researchers combine data from these three sources to calculate ITN ownership coverage (the proportion of households owning at least one ITN) and ITN use in children under 5 coverage (the proportion of children under the age of 5 years sleeping under an ITN) for 44 African countries between 1999 and 2008. They also investigate the relationship between changes in ITN coverage and the cumulative DAH targeted for malaria for each country over this period.
What Did the Researchers Do and Find?
The researchers combined the three sources of data by applying a statistical method called Bayesian inference to a “deterministic compartmental model” of ITN distribution, a flow chart that represents ITN movement into and within countries. In 1999, the researchers estimate, ITN ownership and ITN use by young children were uniformly low across the 44 countries. On average, only 2.2% of households owned ITNs and only 1.5% of young children slept under bed nets. By 2008, 32.8% of households owned ITNs and 26.6% of young children slept under ITNs but there were now large differences in ITN coverage between countries. In four countries, 80% or more of households owned an ITN but in 13 countries (including Nigeria), ITN ownership was below 20%. Finally, the researchers used a statistical technique called regression to reveal that the estimated increase in national ITN coverage between 2000 and 2008 was strongly related to the cumulative national DAH targeted for malaria (calculated by identifying all the grants and loans provided for malaria control) over the same period.
What Do These Findings Mean?
The accuracy of these findings depends on the assumptions included in the model of ITN distribution and the quality of the data fed into it. Nevertheless, this systematic analysis provides new insights into the progress of ITN provision in Africa and a robust way to monitor future ITN coverage. Its findings show that several countries, even some very poor countries, have managed to scale up their ITN coverage from near zero to above 60%. However, because countries such as Nigeria that have large populations at risk of malaria continue to have low ITN coverage, Africa as a whole falls far short of the target of 80% ITN coverage by 2010. Finally, the clear relationship between the expansion of DAH targeted at malaria and increased ITN coverage suggests that inadequate funding may be responsible for the lack of progress in some countries and indicates that continued external financial assistance will be required to maintain the improvements in ITN coverage that have already been achieved.
Additional Information
Please access these Web sites via the online version of this summary at
Further information is available on the Institute for Health Metrics and Evaluation at the University of Washington
Information is available from the World Health Organization on malaria (in several languages); the 2009 World Malaria Report provides details of the current global malaria situation
The US Centers for Disease Control and Prevention provide information on malaria and on insecticide-treated bed nets (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on its approach to the global control of malaria including fact sheets on malaria in Africa and on insecticide-treated bed nets (in English, French and Portuguese)
MedlinePlus provides links to additional information on malaria (in English and Spanish)
PMCID: PMC2923089  PMID: 20808957
12.  Health and Human Rights in Chin State, Western Burma: A Population-Based Assessment Using Multistaged Household Cluster Sampling 
PLoS Medicine  2011;8(2):e1001007.
Sollom and colleagues report the findings from a household survey study carried out in Western Burma; they report a high prevalence of human rights violations such as forced labor, food theft, forced displacement, beatings, and ethnic persecution.
The Chin State of Burma (also known as Myanmar) is an isolated ethnic minority area with poor health outcomes and reports of food insecurity and human rights violations. We report on a population-based assessment of health and human rights in Chin State. We sought to quantify reported human rights violations in Chin State and associations between these reported violations and health status at the household level.
Methods and Findings
Multistaged household cluster sampling was used. Heads of household were interviewed on demographics, access to health care, health status, food insecurity, forced displacement, forced labor, and other human rights violations during the preceding 12 months. Ratios of the prevalence of household hunger comparing those exposed and unexposed to each reported violation were estimated using binomial regression, and 95% confidence intervals (CIs) were constructed. Multivariable models were used to adjust for possible confounders. Overall, 91.9% of households (95% CI 89.7%–94.1%) reported forced labor in the past 12 months. Forty-three percent of households met FANTA-2 (Food and Nutrition Technical Assistance II project) definitions for moderate to severe household hunger. Common violations reported were food theft, livestock theft or killing, forced displacement, beatings and torture, detentions, disappearances, and religious and ethnic persecution. Self-reporting of multiple rights abuses was independently associated with household hunger.
Our findings indicate widespread self-reports of human rights violations. The nature and extent of these violations may warrant investigation by the United Nations or International Criminal Court.
Please see later in the article for the Editors' Summary
Editors' Summary
More than 60 years after the adoption of the Universal Declaration of Human Rights, thousands of people around the world are still deprived of their basic human rights—life, liberty, and security of person. In many countries, people live in fear of arbitrary arrest and detention, torture, forced labor, religious and ethnic persecution, forced displacement, and murder. In addition, ongoing conflicts and despotic governments deprive them of the ability to grow sufficient food (resulting in food insecurity) and deny them access to essential health care. In Burma, for example, the military junta, which seized power in 1962, frequently confiscates land unlawfully, demands forced labor, and uses violence against anyone who protests. Burma is also one of the world's poorest countries in terms of health indicators. Its average life expectancy is 54 years, its maternal mortality rate (380 deaths among women from pregnancy-related causes per 100,000 live births) is nearly ten times higher than that of neighboring Thailand, and its under-five death rate (122/1000 live births) is twice that of nearby countries. Moreover, nearly half of Burmese children under 5 are stunted, and a third of young children are underweight, indicators of malnutrition in a country that, on paper, has a food surplus.
Why Was This Study Done?
Investigators are increasingly using population-based methods to quantify the associations between human rights violations and health outcomes. In eastern Burma, for example, population-based research has recently revealed a link between human rights violations and reduced access to maternal health-care services. In this study, the researchers undertake a population-based assessment of health and human rights in Chin State, an ethnic minority area in western Burma where multiple reports of human rights abuses have been documented and from which thousands of people have fled. In particular, the researchers investigate correlations between household hunger and household experiences of human rights violations—food security in Chin State is affected by periodic expansions of rat populations that devastate crop yields, by farmers being forced by the government to grow an inedible oil crop (jatropha), and by the Burmese military regularly stealing food and livestock.
What Did the Researchers Do and Find?
Local surveyors questioned the heads of randomly selected households in Chin State about their household's access to health care and its health status, and about forced labor and other human rights violations experienced by the household during the preceding 12 months. They also asked three standard questions about food availability, the answers to which were combined to provide a measure of household hunger. Of the 621 households interviewed, 91.9% reported at least one episode of a household member being forced to work in the preceding 12 months. The Burmese military imposed two-thirds of these forced labor demands. Other human rights violations reported included beating or torture (14.8% of households), religious or ethnic persecutions (14.1% of households), and detention or imprisonment of a family member (5.9% of households). Forty-three percent of the households met the US Agency for International Development Food and Nutrition Technical Assistance (FANTA) definition for moderate to severe household hunger, and human rights violations related to food insecurity were common. For example, more than half the households were forced to give up food out of fear of violence. A statistical analysis of these data indicated that the prevalence of household hunger was 6.51 times higher in households that had experienced three food-related human rights violations than in households that had not experienced such violations.
What Do These Findings Mean?
These findings quantify the extent to which the Chin ethnic minority in Burma is subjected to multiple human rights violations and indicate the geographical spread of these abuses. Importantly, they show that the health impacts of human rights violations in Chin State are substantial. In addition, they suggest that the indirect health outcomes of human rights violations probably dwarf the mortality from direct killings. Although this study has some limitations (for example, surveyors had to work in secret and it was not safe for them to collect biological samples that could have given a more accurate indication of the health status of households than questions alone), these findings should encourage the international community to intensify its efforts to reduce human rights violations in Burma.
Additional Information
Please access these websites via the online version of this summary at
The UN Universal Declaration of Human Rights is available in numerous languages
The Burma Campaign UK and Human Rights Watch provide detailed information about human rights violations in Burma (in several languages)
The World Health Organization provides information on health in Burma and on human rights (in several languages)
The Mae Tao clinic also provides general information about Burma and its health services (including some information in Thai)
A PLoS Medicine Research Article by Luke Mullany and colleagues provides data on human rights violations and maternal health in Burma
The Chin Human Rights Organization is working to protect and promote the rights of the Chin people
The Global Health Access Program (GHAP) provides information on health in Burma
FANTA works to improve nutrition and global food security policies
PMCID: PMC3035608  PMID: 21346799
13.  Is Economic Growth Associated with Reduction in Child Undernutrition in India? 
PLoS Medicine  2011;8(3):e1000424.
An analysis of cross-sectional data from repeated household surveys in India, combined with data on economic growth, fails to find strong evidence that recent economic growth in India is associated with a reduction in child undernutrition.
Economic growth is widely perceived as a major policy instrument in reducing childhood undernutrition in India. We assessed the association between changes in state per capita income and the risk of undernutrition among children in India.
Methods and Findings
Data for this analysis came from three cross-sectional waves of the National Family Health Survey (NFHS) conducted in 1992–93, 1998–99, and 2005–06 in India. The sample sizes in the three waves were 33,816, 30,383, and 28,876 children, respectively. After excluding observations missing on the child anthropometric measures and the independent variables included in the study, the analytic sample size was 28,066, 26,121, and 23,139, respectively, with a pooled sample size of 77,326 children. The proportion of missing data was 12%–20%. The outcomes were underweight, stunting, and wasting, defined as more than two standard deviations below the World Health Organization–determined median scores by age and gender. We also examined severe underweight, severe stunting, and severe wasting. The main exposure of interest was per capita income at the state level at each survey period measured as per capita net state domestic product measured in 2008 prices. We estimated fixed and random effects logistic models that accounted for the clustering of the data. In models that did not account for survey-period effects, there appeared to be an inverse association between state economic growth and risk of undernutrition among children. However, in models accounting for data structure related to repeated cross-sectional design through survey period effects, state economic growth was not associated with the risk of underweight (OR 1.01, 95% CI 0.98, 1.04), stunting (OR 1.02, 95% CI 0.99, 1.05), and wasting (OR 0.99, 95% CI 0.96, 1.02). Adjustment for demographic and socioeconomic covariates did not alter these estimates. Similar patterns were observed for severe undernutrition outcomes.
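The odds ratios reported above come from fitted logistic models; the basic quantity can be illustrated from a 2x2 table using Woolf's log-scale Wald interval. This is a deliberate simplification of the study's adjusted models (which account for covariates and clustering), and the counts below are hypothetical:

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf (log-scale Wald) 95% CI.

    Table layout:            outcome+   outcome-
        exposed                 a          b
        unexposed               c          d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: underweight vs. not, in low- vs. high-growth states
print(odds_ratio(240, 760, 230, 770))
```

An OR near 1 with a CI spanning 1 (as in the study's survey-period-adjusted estimates of 0.99–1.02) indicates no detectable association between the exposure and the outcome.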
We failed to find consistent evidence that economic growth leads to a reduction in childhood undernutrition in India. Direct investments in appropriate health interventions may be necessary to reduce childhood undernutrition in India.
Please see later in the article for the Editors' Summary
Editors' Summary
Good nutrition during childhood is essential for health and survival. Undernourished children are more susceptible to infections and more likely to die from common ailments such as diarrhea than well-nourished children. Thus, globally, undernutrition contributes to more than a third of deaths among children under 5 years old. Experts use three physical measurements to determine whether a child is undernourished. An "underweight" child has a low weight for his or her age and gender when compared to the World Health Organization Child Growth Standards, which chart the growth of a reference population. A "stunted" child has a low height for his or her age; stunting is an indicator of chronic undernutrition. A "wasted" child has a low weight for his or her height; wasting is an indicator of acute undernutrition and often follows an earthquake, flood, or other emergency. The prevalence (how often a condition occurs within a population) of undernutrition is particularly high in India. Here, almost half of children under the age of 3 are underweight, about half are stunted, and a quarter are wasted.
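The three anthropometric definitions above reduce to z-score cutoffs against a reference distribution, which can be sketched as follows. Note the reference median and SD below are hypothetical stand-ins, not actual WHO Child Growth Standards values, which are age- and sex-specific and computed with the LMS (skewness-adjusted) method rather than a plain z-score:

```python
def classify(value, median, sd):
    """Plain z-score against a reference median and SD, with the
    conventional undernutrition cutoffs: z < -2 moderate, z < -3 severe."""
    z = (value - median) / sd
    if z < -3:
        return z, "severe"
    if z < -2:
        return z, "moderate"
    return z, "within range"

# Hypothetical weight-for-age reference for a given age and sex
z, label = classify(9.0, median=12.2, sd=1.3)
print(f"WAZ = {z:.2f} ({label})")
```

The same cutoff logic applies to all three indicators: weight-for-age (underweight), height-for-age (stunting), and weight-for-height (wasting), each against its own reference distribution.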
Why Was This Study Done?
Although the prevalence of undernutrition in India is decreasing, progress is slow. Economic growth is widely regarded as the major way to reduce child undernutrition in India. Economic growth, the argument goes, will increase incomes, reduce poverty, and increase access to health services and nutrition. But some experts believe that better education for women and reduced household sizes might have a greater influence on child undernutrition than economic growth. And others believe that healthier, better fed populations lead to increased economic growth rather than the other way around. In this study, the researchers assess the association between economic growth and child undernutrition in India by analyzing the relationship between changes in per capita income in individual Indian states and the individual risk of undernutrition among children in India.
What Did the Researchers Do and Find?
For their analyses, the researchers used data on 77,326 Indian children that were collected in the 1992–93, 1998–99, and 2005–06 National Family Health Surveys; these surveys are part of the Demographic and Health Surveys, a project that collects health data in developing countries to aid health-system development. The researchers used eight "ecological" statistical models to investigate whether there was an association between underweight, stunting, or wasting and per capita income at the state level in each survey period; these ecological models assumed that the risk of undernutrition was the same for every child in a state. They also used 10 "multilevel" models to quantify the association between state-level growth and the individual-level risk of undernutrition. The multilevel models also took account of various combinations of additional factors likely to affect undernutrition (for example, mother's education and marital status). In five of the ecological models, there was no statistically significant association between state economic growth and average levels of child undernutrition at the state level (statistically significant associations are unlikely to have arisen by chance). Similarly, in eight of the multilevel models, there was no statistical evidence for an association between economic growth and undernutrition.
What Do These Findings Mean?
These findings provide little statistical support for the widely held assumption that there is an association between the risk of child undernutrition and economic growth in India. By contrast, a previous study that used data from 63 countries collected over 26 years did find evidence that national economic growth was inversely associated with the risk of child undernutrition. However, that study was an ecological study and did not, therefore, allow for the possibility that the risk of undernutrition might vary between children within one state and between states. Further, the target of inference in that study was "explaining" between-country differences, while the target of inference in this analysis was explaining within-country differences over time. The researchers suggest several reasons why there might not be a clear association between economic growth and undernutrition in India. For example, they suggest, economic growth in India might have benefitted only privileged sections of society. Whether this or an alternative explanation accounts for the lack of an association, it seems likely that further reductions in the prevalence of child undernutrition in India (and possibly in other developing countries) will require direct investment in health and health-related programs; expecting economic growth to improve child undernutrition might not be a viable option after all.
Additional Information
Please access these websites via the online version of this summary at
The charity UNICEF, which protects the rights of children and young people around the world, provides detailed statistics on child undernutrition and on child nutrition and undernutrition in India
The WHO Child Growth Standards are available (in several languages)
More information on the Demographic and Health Surveys and on the Indian National Family Health Surveys is available
The United Nations Millennium Development Goals website provides information on ongoing world efforts to reduce hunger and child mortality
PMCID: PMC3050933  PMID: 21408084
14.  Identifying the genetic determinants of transcription factor activity 
Genome-wide messenger RNA expression levels are highly heritable. However, the molecular mechanisms underlying this heritability are poorly understood. The influence of trans-acting polymorphisms is often mediated by changes in the regulatory activity of one or more sequence-specific transcription factors (TFs). We use a method that exploits prior information about the DNA-binding specificity of each TF to estimate its genotype-specific regulatory activity. To this end, we perform linear regression of genotype-specific differential mRNA expression on TF-specific promoter-binding affinity. Treating inferred TF activity as a quantitative trait and mapping it across a panel of segregants from an experimental genetic cross allows us to identify trans-acting loci (‘aQTLs') whose allelic variation modulates the TF. A few of these aQTL regions contain the gene encoding the TF itself; several others contain a gene whose protein product is known to interact with the TF. Our method is strictly causal, as it only uses sequence-based features as predictors. Application to budding yeast demonstrates a dramatic increase in statistical power, compared with existing methods, to detect locus-TF associations and trans-acting loci. Our aQTL mapping strategy also succeeds in mouse.
Genetic sequence variation naturally perturbs mRNA expression levels in the cell. In recent years, analysis of parallel genotyping and expression profiling data for segregants from genetic crosses between parental strains has revealed that mRNA expression levels are highly heritable. Expression quantitative trait loci (eQTLs), whose allelic variation regulates the expression level of individual genes, have successfully been identified (Brem et al, 2002; Schadt et al, 2003). The molecular mechanisms underlying the heritability of mRNA expression are poorly understood. However, they are likely to involve mediation by transcription factors (TFs). We present a new transcription-factor-centric method that greatly increases our ability to understand what drives the genetic variation in mRNA expression (Figure 1). Our method identifies genomic loci (‘aQTLs') whose allelic variation modulates the protein-level activity of specific TFs. To map aQTLs, we integrate genotyping and expression profiling data with quantitative prior information about DNA-binding specificity of transcription factors in the form of position-specific affinity matrices (Bussemaker et al, 2007). We applied our method in two different organisms: budding yeast and mouse.
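The central regression step, estimating a genotype-specific TF activity from differential expression and prior binding affinities, can be sketched in a few lines. The affinity matrix and expression values below are simulated stand-ins (the study derives them from position-specific affinity matrices and segregant expression profiles), so this illustrates the idea rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_tfs = 2000, 20

# N (genes x TFs): prior promoter-binding affinity of each TF for each gene
N = rng.gamma(shape=1.0, scale=1.0, size=(n_genes, n_tfs))

# True (hidden) TF activities for one segregant, to be recovered
beta_true = rng.normal(0, 1, size=n_tfs)

# Differential mRNA expression: affinity-weighted sum of TF activities + noise
y = N @ beta_true + rng.normal(0, 1.0, size=n_genes)

# Estimate segregant-specific TF activities by genome-wide least squares
beta_hat, *_ = np.linalg.lstsq(N, y, rcond=None)

# Because each activity estimate pools information across thousands of genes,
# it is far less noisy than any single gene's expression level
print(np.corrcoef(beta_true, beta_hat)[0, 1])  # close to 1
```

Repeating this regression per segregant yields a matrix of inferred TF activities, and each TF's activity profile can then be mapped against genotype markers with standard QTL linkage analysis to locate aQTLs.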
In our approach, the inferred TF activity is explicitly treated as a quantitative trait, and genetically mapped. The decrease of ‘phenotype space' from that of all genes (in the eQTL approach) to that of all TFs (in our aQTL approach) increases the statistical power to detect trans-acting loci in two distinct ways. First, as each inferred TF activity is derived from a large number of genes, it is far less noisy than mRNA levels of individual genes. Second, the number of trait/marker combinations that needs to be tested for statistical significance in parallel is roughly two orders of magnitude smaller than for eQTLs. We identified a total of 103 locus-TF associations, a more than six-fold improvement over the 17 locus-TF associations identified by several existing methods (Brem et al, 2002; Yvert et al, 2003; Lee et al, 2006; Smith and Kruglyak, 2008; Zhu et al, 2008). The total number of distinct genomic loci identified as an aQTL equals 31, which includes 11 of the 13 previously identified eQTL hotspots (Smith and Kruglyak, 2008).
To better understand the mechanisms underlying the identified genetic linkages, we examined the genes within each aQTL region. First, we found four ‘local' aQTLs, which encompass the gene encoding the TF itself. This includes the known polymorphism in the HAP1 gene (Brem et al, 2002), but also novel predictions of trans-acting polymorphisms in RFX1, STB5, and HAP4. Second, using high-throughput protein–protein interaction data, we identified putative causal genes for several aQTLs. For example, we predict that a polymorphism in the cyclin-dependent kinase CDC28 antagonistically modulates the functionally distinct cell cycle regulators Fkh1 and Fkh2. In this and other cases, our approach naturally accounts for post-translational modulation of TF activity at the protein level.
We validated our ability to predict locus-TF associations in yeast using gene expression profiles of allele replacement strains from a previous study (Smith and Kruglyak, 2008). Chromosome 15 contains an aQTL whose allelic status influences the activity of no fewer than 30 distinct TFs. This locus includes IRA2, which controls intracellular cAMP levels. We used the gene expression profile of IRA2 replacement strains to confirm that the polymorphism within IRA2 indeed modulates a subset of the TFs whose activity was predicted to link to this locus, and no other TFs.
Application of our approach to mouse data identified an aQTL modulating the activity of a specific TF in liver cells. We identified an aQTL on mouse chromosome 7 for Zscan4, a transcription factor containing four zinc finger domains and a SCAN domain. Even though we could not detect a candidate causal gene for Zscan4p because of a lack of information about the mouse genome, our result demonstrates that our method also works in higher eukaryotes.
In summary, aQTL mapping has a greatly improved sensitivity to detect molecular mechanisms underlying the heritability of gene expression. The successful application of our approach to yeast and mouse data underscores the value of explicitly treating the inferred TF activity as a quantitative trait for increasing statistical power of detecting trans-acting loci. Furthermore, our method is computationally efficient, and easily applicable to any other organism whenever prior information about the DNA-binding specificity of TFs is available.
Analysis of parallel genotyping and expression profiling data has shown that mRNA expression levels are highly heritable. Currently, only a tiny fraction of this genetic variance can be mechanistically accounted for. The influence of trans-acting polymorphisms on gene expression traits is often mediated by transcription factors (TFs). We present a method that exploits prior knowledge about the in vitro DNA-binding specificity of a TF in order to map the loci (‘aQTLs') whose inheritance modulates its protein-level regulatory activity. Genome-wide regression of differential mRNA expression on predicted promoter affinity is used to estimate segregant-specific TF activity, which is subsequently mapped as a quantitative phenotype. In budding yeast, our method identifies six times as many locus-TF associations and more than twice as many trans-acting loci as all existing methods combined. Application to mouse data from an F2 intercross identified an aQTL on chromosome 7 modulating the activity of Zscan4 in liver cells. Our method has greatly improved statistical power over existing methods, is mechanism based, strictly causal, computationally efficient, and generally applicable.
PMCID: PMC2964119  PMID: 20865005
gene expression; gene regulatory networks; genetic variation; quantitative trait loci; transcription factors
15.  Cryptic Distant Relatives Are Common in Both Isolated and Cosmopolitan Genetic Samples 
PLoS ONE  2012;7(4):e34267.
Although a few hundred single nucleotide polymorphisms (SNPs) suffice to infer close familial relationships, high density genome-wide SNP data make possible the inference of more distant relationships such as 2nd to 9th cousinships. In order to characterize the relationship between genetic similarity and degree of kinship given a timeframe of 100–300 years, we analyzed the sharing of DNA inferred to be identical by descent (IBD) in a subset of individuals from the 23andMe customer database (n = 22,757) and from the Human Genome Diversity Panel (HGDP-CEPH, n = 952). With data from 121 populations, we show that the average amount of DNA shared IBD in most ethnolinguistically-defined populations, for example Native American groups, Finns and Ashkenazi Jews, differs from continentally-defined populations by several orders of magnitude. Via extensive pedigree-based simulations, we determined bounds for predicted degrees of relationship given the amount of genomic IBD sharing in both endogamous and ‘unrelated’ population samples. Using these bounds as a guide, we detected tens of thousands of 2nd to 9th degree cousin pairs within a heterogeneous set of 5,000 Europeans. The ubiquity of distant relatives, detected via IBD segments, in both ethnolinguistic populations and in large ‘unrelated’ population samples has important implications for genetic genealogy, forensics and genotype/phenotype mapping studies.
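As background to why 2nd to 9th cousinships are detectable at all, the expected genome-wide IBD fraction falls off geometrically with the degree of relationship (a p-th cousin pair, with no "removed" steps, is a relationship of degree 2p+1). The sketch below encodes only this textbook expectation, not the paper's segment-based inference, and realized sharing around these expectations varies widely for distant kin.

```python
def expected_ibd_fraction(degree: int) -> float:
    """Expected autosomal fraction shared IBD for relatives of the given
    degree (siblings = 1, first cousins = 3, second cousins = 5, ...)."""
    return 0.5 ** degree

def cousin_degree(p: int) -> int:
    """Degree of relationship for p-th cousins (no 'removed' steps)."""
    return 2 * p + 1

for p in (1, 2, 5, 9):
    d = cousin_degree(p)
    print(f"p={p} cousins (degree {d}): "
          f"expected IBD fraction {expected_ibd_fraction(d):.2e}")
```

By the 9th-cousin level the expected fraction is tiny, which is why detection in practice relies on finding individual long IBD segments rather than on total genome-wide sharing.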
PMCID: PMC3317976  PMID: 22509285
16.  Reducing the Impact of the Next Influenza Pandemic Using Household-Based Public Health Interventions 
PLoS Medicine  2006;3(9):e361.
The outbreak of highly pathogenic H5N1 influenza in domestic poultry and wild birds has caused global concern over the possible evolution of a novel human strain [1]. If such a strain emerges, and is not controlled at source [2,3], a pandemic is likely to result. Health policy in most countries will then be focused on reducing morbidity and mortality.
Methods and Findings
We estimate the expected reduction in primary attack rates for different household-based interventions using a mathematical model of influenza transmission within and between households. We show that, for lower transmissibility strains [2,4], the combination of household-based quarantine, isolation of cases outside the household, and targeted prophylactic use of anti-virals will be highly effective and likely feasible across a range of plausible transmission scenarios. For example, for a basic reproductive number (the average number of people infected by a typically infectious individual in an otherwise susceptible population) of 1.8, assuming only 50% compliance, this combination could reduce the infection (symptomatic) attack rate from 74% (49%) to 40% (27%), requiring peak quarantine and isolation levels of 6.2% and 0.8% of the population, respectively, and an overall anti-viral stockpile of 3.9 doses per member of the population. Although contact tracing may be additionally effective, the resources required make it impractical in most scenarios.
National influenza pandemic preparedness plans currently focus on reducing the impact associated with a constant attack rate, rather than on reducing transmission. Our findings suggest that the additional benefits and resource requirements of household-based interventions in reducing average levels of transmission should also be considered, even when expected levels of compliance are only moderate.
Voluntary household-based quarantine and external isolation are likely to be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is moderate.
Editors' Summary
Naturally occurring variation in the influenza virus can lead both to localized annual epidemics and to less frequent global pandemics of catastrophic proportions. The most destructive of the three influenza pandemics of the 20th century, the so-called Spanish flu of 1918–1919, is estimated to have caused 20 million deaths. As evidenced by ongoing tracking efforts and news media coverage of H5N1 avian influenza, contemporary approaches to monitoring and communications can be expected to alert health officials and the general public to the emergence of new, potentially pandemic strains before they spread globally.
Why Was This Study Done?
In order to act most effectively on advance notice of an approaching influenza pandemic, public health workers need to know which available interventions are likely to be most effective. This study was done to estimate the effectiveness of specific preventive measures that communities might implement to reduce the impact of pandemic flu. In particular, the study evaluates methods to reduce person-to-person transmission of influenza, in the likely scenario that complete control cannot be achieved by mass vaccination and anti-viral treatment alone.
What Did the Researchers Do and Find?
The researchers developed a mathematical model—essentially a computer simulation—to simulate the course of pandemic influenza in a hypothetical population at risk for infection at home, through external peer networks such as schools and workplaces, and through general community transmission. Parameters such as the distribution of household sizes, the rate at which individuals develop symptoms from nonpandemic viruses, and the risk of infection within households were derived from demographic and epidemiologic data from Hong Kong, as well as empirical studies of influenza transmission. A model based on these parameters was then used to calculate the effects of interventions including voluntary household quarantine, voluntary individual isolation in a facility outside the home, and contact tracing (that is, asking infectious individuals to identify people whom they may have infected and then warning those people) on the spread of pandemic influenza through the population. The model also took into account the anti-viral treatment of exposed, asymptomatic household members and of individuals in isolation, and assumed that all intervention strategies were put into place before the arrival of individuals infected with the pandemic virus.
  Using this model, the authors predicted that even if only half of the population were to comply with public health interventions, the proportion infected during the first year of an influenza pandemic could be substantially reduced by a combination of household-based quarantine, isolation of actively infected individuals in a location outside the household, and targeted prophylactic treatment of exposed individuals with anti-viral drugs. Based on an influenza-associated mortality rate of 0.5% (as has been estimated for New York City in the 1918–1919 pandemic), the magnitude of the predicted benefit of these interventions is a reduction from 49% to 27% in the proportion of the population who become ill in the first year of the pandemic, which would correspond to 16,000 fewer deaths in a city the size of Hong Kong (6.8 million people). In the model, anti-viral treatment appeared to be about as effective as isolation when each was used in combination with household quarantine, but would require stockpiling 3.9 doses of anti-viral for each member of the population. Contact tracing was predicted to provide a modest additional benefit over quarantine and isolation, but also to increase considerably the proportion of the population in quarantine.
What Do These Findings Mean?
This study predicts that voluntary household-based quarantine and external isolation can be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is far from uniform. These simulations can therefore inform preparedness plans in the absence of data from actual intervention trials, which would be impossible outside (and impractical within) the context of an actual pandemic. Like all mathematical models, however, the one presented in this study relies on a number of assumptions regarding the characteristics and circumstances of the situation that it is intended to represent. For example, the authors found that the efficacy of policies to reduce the rate of infection vary according to the ease with which a given virus spreads from person to person. Because this parameter (known as the basic reproductive ratio, R0) cannot be reliably predicted for a new viral strain based on past epidemics, the authors note that in an actual influenza pandemic rapid determinations of R0 in areas already involved would be necessary to finalize public health responses in threatened areas. Further, the implementation of the interventions that appear beneficial in this model would require devoting attention and resources to practical considerations, such as how to staff isolation centers and provide food and water to those in household quarantine. However accurate the scientific data and predictive models may be, their effectiveness can only be realized through well-coordinated local, as well as international, efforts.
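The role of the basic reproductive number can be made concrete with the classical SIR final-size relation z = 1 − exp(−R0·z), which is solvable by fixed-point iteration. This is a standard textbook approximation, not the paper's household-structured model, but for R0 = 1.8 it lands close to the 74% baseline infection attack rate quoted above.

```python
import math

def final_size(r0: float, tol: float = 1e-10) -> float:
    """Attack rate z solving z = 1 - exp(-r0 * z) by fixed-point iteration."""
    z = 0.9  # start from a high guess; iteration converges for r0 > 1
    while True:
        z_next = 1.0 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next

print(round(final_size(1.8), 3))  # ~0.73, near the 74% quoted above
```

Because this curve is steep near R0 = 1, small errors in an early estimate of R0 translate into large differences in the predicted attack rate, which is why the authors stress rapid determination of R0 in already-affected areas.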
Additional Information
Please access these Web sites via the online version of this summary at
• World Health Organization influenza pandemic preparedness page
• US Department of Health and Human Services avian and pandemic flu information site
• Pandemic influenza page from the Public Health Agency of Canada
• Emergency planning page on pandemic flu from the England Department of Health
• Wikipedia entry on pandemic influenza with links to individual country resources (note: Wikipedia is a free Internet encyclopedia that anyone can edit)
PMCID: PMC1526768  PMID: 16881729
17.  A Large Population Genetic Study of 15 Autosomal Short Tandem Repeat Loci for Establishment of Korean DNA Profile Database 
Molecules and Cells  2011;32(1):15-19.
Genotyping of highly polymorphic short tandem repeat (STR) markers is widely used for the genetic identification of individuals in forensic DNA analyses and in paternity disputes. The National DNA Profile Databank recently established by the DNA Identification Act in Korea contains the computerized STR DNA profiles of individuals convicted of crimes. For the establishment of a large autosomal STR loci population database, 1805 samples were obtained at random from Korean individuals and 15 autosomal STR markers were analyzed using the AmpFlSTR Identifiler PCR Amplification kit. For the 15 autosomal STR markers, no deviations from the Hardy-Weinberg equilibrium were observed. The most informative locus in our data set was the D2S1338 with a discrimination power of 0.9699. The combined matching probability was 1.521 × 10⁻¹⁷. This large STR profile dataset including atypical alleles will be important for the establishment of the Korean DNA database and for forensic applications.
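A combined matching probability like the one reported arises as the product of per-locus random match probabilities, which is valid when the loci are independent (Hardy-Weinberg and linkage equilibrium). The per-locus values below are hypothetical placeholders chosen to land in a similar range, not the Korean dataset's actual estimates.

```python
import math

# Hypothetical per-locus random match probabilities for 15 independent
# autosomal STR loci (real values come from observed genotype frequencies)
per_locus_match = [0.075] * 15

# Under locus independence, match probabilities multiply across loci
combined = math.prod(per_locus_match)

# Per-locus power of discrimination is the complement of the match probability
discrimination_power = 1 - per_locus_match[0]

print(f"combined matching probability ~ {combined:.3e}")
```

Even modest per-locus discrimination compounds quickly: fifteen loci at a 7.5% per-locus match probability already push the combined probability to the order of 10⁻¹⁷.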
PMCID: PMC3887661  PMID: 21597912
autosomal STRs; DNA profile data bank; Korean; microvariant; population database
18.  Simple regression models as a threshold for selecting AFLP loci with reduced error rates 
BMC Bioinformatics  2012;13:268.
Amplified fragment length polymorphism is a popular DNA marker technique that has applications in multiple fields of study. Technological improvements and decreasing costs have dramatically increased the number of markers that can be generated in an amplified fragment length polymorphism experiment. As datasets increase in size, the number of genotyping errors also increases. Error within a DNA marker dataset can result in reduced statistical power, incorrect conclusions, and decreased reproducibility. It is essential that error within a dataset be recognized and reduced where possible, while still balancing the need for genomic diversity.
Using simple regression with a second-degree polynomial term, a model was fit to describe the relationship between locus-specific error rate and the frequency of present alleles. This model was then used to set a moving error rate threshold that varied based on the frequency of present alleles at a given locus. Loci with error rates greater than the threshold were removed from further analyses. This method of selecting loci is advantageous, as it accounts for differences in error rate between loci of varying frequencies of present alleles. An example using this method to select loci is demonstrated in an amplified fragment length polymorphism dataset generated from the North American prairie species big bluestem. Within this dataset the error rate was reduced from 12.5% to 8.8% by removal of loci with error rates greater than the defined threshold. By repeating the method on selected loci, the error rate was further reduced to 5.9%. This reduction in error resulted in a substantial increase in the amount of genetic variation attributable to regional and population variation.
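The moving-threshold idea can be sketched with NumPy's polynomial fitting: regress locus error rate on the frequency of present alleles with a second-degree polynomial, then drop loci whose error exceeds the fitted curve at their own frequency. The data are synthetic and the study's exact thresholding details may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n_loci = 300

# Frequency of present alleles per locus, and a simulated error rate that
# depends on it (rarer bands tend to be scored less reliably)
freq = rng.uniform(0.05, 0.95, size=n_loci)
err = 0.20 - 0.4 * freq + 0.3 * freq**2 + rng.normal(0, 0.02, size=n_loci)
err = np.clip(err, 0, None)

# Second-degree polynomial of locus-specific error rate on allele frequency
coeffs = np.polyfit(freq, err, deg=2)
threshold = np.polyval(coeffs, freq)  # moving threshold, one value per locus

# Retain only loci at or below the fitted moving threshold
keep = err <= threshold
print(f"kept {keep.sum()} of {n_loci} loci; "
      f"mean error {err.mean():.3f} -> {err[keep].mean():.3f}")
```

Re-fitting the polynomial on the retained loci and filtering again mirrors the iterative reduction described above (12.5% to 8.8% to 5.9% in the big bluestem dataset).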
This paper demonstrates a logical and computationally simple method for selecting loci with a reduced error rate. In the context of a genetic diversity study, this method resulted in an increased ability to detect differences between populations. Further application of this locus selection method, in addition to error-reducing methodological precautions, will result in amplified fragment length polymorphism datasets with reduced error rates. This reduction in error rate should result in greater power to detect differences and increased reproducibility.
PMCID: PMC3534328  PMID: 23072295
19.  A straightforward multiallelic significance test for the Hardy-Weinberg equilibrium law 
Genetics and Molecular Biology  2009;32(3):619-625.
Much forensic inference based upon DNA evidence is made assuming Hardy-Weinberg Equilibrium (HWE) for the genetic loci being used. Several statistical tests to detect and measure deviation from HWE have been devised, and their limitations become more obvious when testing for deviation within multiallelic DNA loci. The most popular methods, the chi-square and likelihood-ratio tests, are based on asymptotic results and cannot guarantee a good performance in the presence of low frequency genotypes. Since the parameter space dimension increases at a quadratic rate on the number of alleles, some authors suggest applying sequential methods, where the multiallelic case is reformulated as a sequence of “biallelic” tests. However, in this approach it is not obvious how to assess the general evidence of the original hypothesis; nor is it clear how to establish the significance level for its acceptance/rejection. In this work, we introduce a straightforward method for the multiallelic HWE test, which overcomes the aforementioned issues of sequential methods. The core theory for the proposed method is given by the Full Bayesian Significance Test (FBST), an intuitive Bayesian approach which does not assign positive probabilities to zero measure sets when testing sharp hypotheses. We compare FBST performance to chi-square, likelihood-ratio and Markov chain tests in three numerical experiments. The results suggest that FBST is a robust and high performance method for the HWE test, even in the presence of several alleles and small sample sizes.
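For reference, the biallelic chi-square test the abstract criticizes reduces to comparing observed genotype counts with HWE expectations computed from the allele frequencies. The paper's point is precisely that this asymptotic approach degrades as alleles multiply and genotype counts grow sparse; the sketch below covers only the simple biallelic case.

```python
def hwe_chi_square(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Chi-square statistic for Hardy-Weinberg equilibrium at a biallelic
    locus, given observed genotype counts (1 degree of freedom)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele A
    q = 1 - p
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_aa, n_ab, n_bb]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Counts exactly at HWE proportions give a statistic of 0
print(hwe_chi_square(25, 50, 25))  # 0.0
# Heterozygote deficit inflates the statistic (~3.84 is the 5% cutoff, 1 df)
print(hwe_chi_square(40, 20, 40))  # 36.0
```

With k alleles there are k(k+1)/2 genotype classes, so the table fills with small or zero counts quickly, which is the sparsity problem motivating the FBST alternative.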
PMCID: PMC3036052  PMID: 21637528
Hardy-Weinberg equilibrium; significance tests; FBST
20.  Computational Toxicology of Chloroform: Reverse Dosimetry Using Bayesian Inference, Markov Chain Monte Carlo Simulation, and Human Biomonitoring Data 
Environmental Health Perspectives  2008;116(8):1040-1046.
One problem of interpreting population-based biomonitoring data is the reconstruction of corresponding external exposure in cases where no such data are available.
We demonstrate the use of a computational framework that integrates physiologically based pharmacokinetic (PBPK) modeling, Bayesian inference, and Markov chain Monte Carlo simulation to obtain a population estimate of environmental chloroform source concentrations consistent with human biomonitoring data. The biomonitoring data consist of chloroform blood concentrations measured as part of the Third National Health and Nutrition Examination Survey (NHANES III), and for which no corresponding exposure data were collected.
We used a combined PBPK and shower exposure model to consider several routes and sources of exposure: ingestion of tap water, inhalation of ambient household air, and inhalation and dermal absorption while showering. We determined posterior distributions for chloroform concentration in tap water and ambient household air using U.S. Environmental Protection Agency Total Exposure Assessment Methodology (TEAM) data as prior distributions for the Bayesian analysis.
Posterior distributions for exposure indicate that 95% of the population represented by the NHANES III data had likely chloroform exposures ≤ 67 μg/L in tap water and ≤ 0.02 μg/L in ambient household air.
Our results demonstrate the application of computer simulation to aid in the interpretation of human biomonitoring data in the context of the exposure–health evaluation–risk assessment continuum. These results should be considered as a demonstration of the method and can be improved with the addition of more detailed data.
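The reverse-dosimetry workflow, stripped to its core, is: assume a forward model from source concentration to blood level, place a prior on the source concentration, and sample the posterior with Markov chain Monte Carlo given observed blood data. Everything below (the linear toy forward model, the transfer coefficient, the fake measurements, the prior) is invented for illustration; the study used a full PBPK-plus-shower model with TEAM survey priors.

```python
import math
import random

random.seed(0)

# Toy forward model: blood concentration = K * water concentration
K = 0.002       # assumed transfer coefficient (illustrative only)
SIGMA = 0.02    # measurement noise standard deviation
observed = [0.13, 0.15, 0.12, 0.14]  # fake blood measurements

def log_posterior(c: float) -> float:
    if c <= 0:
        return -math.inf
    # Weak lognormal-style prior on the water concentration (standing in
    # for the TEAM exposure data used as priors in the study)
    log_prior = -0.5 * ((math.log(c) - math.log(50)) / 1.0) ** 2
    log_lik = sum(-0.5 * ((y - K * c) / SIGMA) ** 2 for y in observed)
    return log_prior + log_lik

# Random-walk Metropolis-Hastings over the source concentration
c, samples = 50.0, []
for step in range(20000):
    proposal = c + random.gauss(0, 5.0)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(c):
        c = proposal
    if step >= 5000:  # discard burn-in
        samples.append(c)

posterior_mean = sum(samples) / len(samples)
print(round(posterior_mean, 1))  # near 0.135 / K = 67.5 for this toy setup
```

The posterior concentrates near the value the forward model maps onto the observed blood levels; replacing the one-line forward model with a PBPK simulation and the scalar with multiple exposure routes gives the structure of the actual analysis.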
PMCID: PMC2516557  PMID: 18709138
Bayesian; biomonitoring; chloroform; Markov chain Monte Carlo; MC; MCMC; Monte Carlo; PBPK; reverse dosimetry
21.  DNA identification by pedigree likelihood ratio accommodating population substructure and mutations 
DNA typing is an important tool in missing-person identification, especially in mass-fatality disasters. Identification methods that compare a DNA profile from unidentified human remains with that of a direct (from the person) or indirect (for example, from a biological relative) reference sample and rank the pairwise likelihood ratios (LRs) are straightforward and well defined. However, for indirect comparison cases in which several members from a family can serve as reference samples, the full power of kinship analysis is not entirely exploited. Because biologically related family members are not genetically independent, more information and thus greater power can be attained by simultaneous use of all pedigree members in most cases, although distant relationships may reduce the power. In this study, an improvement was made on the method for missing-person identification for autosomal and lineage-based markers, by considering jointly the DNA profile data of all available family reference samples. The missing person is evaluated by a pedigree LR comparing the probability of the DNA evidence under alternative hypotheses (for example, that the missing person is unrelated versus a member of the pedigree with a specified biological relationship), and candidates can be ranked against all pedigrees within a database. Pedigree LRs are adjusted for population substructure according to the recommendations of the second National Research Council (NRC II) Report. A realistic mutation model was also incorporated to accommodate the possibility of false exclusion. The results show that the effect of mutation on the pedigree LR is moderate, but LRs can be significantly decreased by the effect of population substructure. Finally, Y chromosome and mitochondrial DNA were integrated into the analysis to increase the power of identification. A program titled MPKin was developed, combining the aforementioned features to facilitate genetic analysis for identifying missing persons.
The computational complexity of the algorithms is explained, and several ways to reduce the complexity are introduced.
PMCID: PMC2990736  PMID: 21092343
22.  Detection of Pleiotropy through a Phenome-Wide Association Study (PheWAS) of Epidemiologic Data as Part of the Environmental Architecture for Genes Linked to Environment (EAGLE) Study 
PLoS Genetics  2014;10(12):e1004678.
We performed a Phenome-wide association study (PheWAS) utilizing diverse genotypic and phenotypic data existing across multiple populations in the National Health and Nutrition Examination Surveys (NHANES), conducted by the Centers for Disease Control and Prevention (CDC), and accessed by the Epidemiological Architecture for Genes Linked to Environment (EAGLE) study. We calculated comprehensive tests of association in Genetic NHANES using 80 SNPs and 1,008 phenotypes (grouped into 184 phenotype classes), stratified by race-ethnicity. Genetic NHANES includes three surveys (NHANES III, 1999–2000, and 2001–2002) and three race-ethnicities: non-Hispanic whites (n = 6,634), non-Hispanic blacks (n = 3,458), and Mexican Americans (n = 3,950). We identified 69 PheWAS associations replicating across surveys for the same SNP, phenotype-class, direction of effect, and race-ethnicity at p<0.01, allele frequency >0.01, and sample size >200. Of these 69 PheWAS associations, 39 replicated previously reported SNP-phenotype associations, 9 were related to previously reported associations, and 21 were novel associations. Fourteen results had the same direction of effect across more than one race-ethnicity: one result was novel, 11 replicated previously reported associations, and two were related to previously reported results. Thirteen SNPs showed evidence of pleiotropy. We further explored results with gene-based biological networks, contrasting the direction of effect for pleiotropic associations across phenotypes. One PheWAS result was ABCG2 missense SNP rs2231142, associated with uric acid levels in both non-Hispanic whites and Mexican Americans, protoporphyrin levels in non-Hispanic whites and Mexican Americans, and blood pressure levels in Mexican Americans. 
Another example was SNP rs1800588 near LIPC, significantly associated with the novel phenotypes of folate levels (Mexican Americans), vitamin E levels (non-Hispanic whites) and triglyceride levels (non-Hispanic whites), and replication for cholesterol levels. The results of this PheWAS show the utility of this approach for exposing more of the complex genetic architecture underlying multiple traits, through generating novel hypotheses for future research.
Author Summary
The Epidemiological Architecture for Genes Linked to Environment (EAGLE) study performed a Phenome-Wide Association Study (PheWAS) to investigate comprehensive associations between a wide range of phenotypes and single-nucleotide polymorphisms using the diverse genotypic and phenotypic data that exists across multiple populations in the National Health and Nutrition Examination Surveys (NHANES), conducted by the Centers for Disease Control and Prevention (CDC). In this study, we replicated known genotype-phenotype associations, identified genotypes associated with phenotypes related to previously reported associations, and most importantly, identified a series of novel genotype-phenotype associations. We also identified potential pleiotropy; that is, SNPs associated with more than one phenotype. We explored the features of these PheWAS results, characterizing any potential functionality of the SNPs of this study, determining association results that were found in more than one racial/ethnic group for the same SNP and phenotype, identifying novel direction of effect relationships for SNPs demonstrating potential pleiotropy, and investigating the association results in the context of gene-based biological networks. Through considering the SNP associations on multiple phenotypic outcomes, as well as through exploring pleiotropy, we may be able to leverage the results of PheWAS to uncover more of the complex underlying genomic architecture of complex traits.
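The replication criterion described above (same SNP, phenotype class, direction of effect, and race-ethnicity, significant at p<0.01 in more than one survey) is at heart a grouping-and-filtering step over association results. The records below are made up to show the shape of that step, not taken from the EAGLE results.

```python
from collections import defaultdict

# Made-up association records: one per (SNP, phenotype class, ethnicity, survey)
records = [
    {"snp": "rs2231142", "pheno": "uric_acid", "eth": "MA",
     "survey": "NHANES III", "beta": 0.9, "p": 0.004},
    {"snp": "rs2231142", "pheno": "uric_acid", "eth": "MA",
     "survey": "1999-2000", "beta": 0.7, "p": 0.008},
    {"snp": "rs1800588", "pheno": "folate", "eth": "MA",
     "survey": "NHANES III", "beta": -0.3, "p": 0.020},
    {"snp": "rs1800588", "pheno": "folate", "eth": "MA",
     "survey": "1999-2000", "beta": -0.2, "p": 0.005},
]

def replicated(records, alpha=0.01):
    """Keep (SNP, phenotype class, ethnicity) groups that are significant in
    two or more surveys with a consistent direction of effect."""
    groups = defaultdict(list)
    for r in records:
        if r["p"] < alpha:
            groups[(r["snp"], r["pheno"], r["eth"])].append(r["beta"])
    return [key for key, betas in groups.items()
            if len(betas) >= 2 and (all(b > 0 for b in betas)
                                    or all(b < 0 for b in betas))]

print(replicated(records))  # only the uric-acid association survives
```

Here the folate association fails replication because only one survey clears p<0.01; pleiotropy screening is then a second pass asking which SNPs appear in the surviving list under more than one phenotype class.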
PMCID: PMC4256091  PMID: 25474351
23.  Identifying and reducing AFLP genotyping error: an example of tradeoffs when comparing population structure in broadcast spawning versus brooding oysters 
Heredity  2012;108(6):616-625.
Phylogeographic inferences about gene flow are strengthened through comparison of co-distributed taxa, but also depend on adequate genomic sampling. Amplified fragment length polymorphisms (AFLPs) provide a rapid and inexpensive source of multilocus allele frequency data for making genomically robust inferences. Every AFLP study initially generates markers with a range of locus-specific genotyping error rates and applies criteria to select a subset for analysis. However, there has been very little empirical evaluation of the best tradeoff between culling all but the lowest-error loci to minimize overall genotyping error versus the potential for increasing population genetic signal by retaining more loci. Here, we used AFLPs to compare population structure in co-distributed broadcast spawning (Crassostrea virginica) and brooding (Ostrea equestris) oyster species. Using existing methods for almost entirely automated marker selection and scoring, genotyping error tradeoffs were evaluated by comparing results across a nested series of data sets with mean mismatch errors of 0, 1, 2, 3, 4 and >4%. Artifactual population structure was diagnosed in high-error data sets and we assessed the low-error point at which expected population substructure signal was lost. In both species, we identified substructure patterns deemed to be inaccurate at average mismatch error rates ≤2% and >4%. In the species comparison, the optimum data sets showed higher gene flow for the brooding oyster with more oceanic salinity tolerances. AFLP tradeoffs may differ among studies, but our results suggest that important signal may be lost in the pursuit of 'acceptable' error levels and our procedures provide a general method for empirically exploring these tradeoffs.
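One way to build the nested series of data sets described above, keyed to a target mean mismatch error, is to sort loci by error rate and take the largest low-error prefix whose mean stays at or below each target. The locus names and error rates below are invented; this is a sketch of the idea, not the study's automated scoring procedure:

```python
# Loci sorted by per-locus mismatch error rate (invented values).
loci_sorted = sorted([("L1", 0.000), ("L2", 0.004), ("L3", 0.012),
                      ("L4", 0.031), ("L5", 0.058)], key=lambda x: x[1])

def nested_subset(loci, target_mean):
    """Largest low-error prefix whose mean error stays at or below target."""
    subset, total = [], 0.0
    for name, err in loci:
        if (total + err) / (len(subset) + 1) > target_mean:
            break
        subset.append(name)
        total += err
    return subset

# Because loci are sorted by error, the subsets are nested: each higher
# target mean extends the previous, stricter data set.
for target in (0.00, 0.01, 0.02, 0.04):
    print(target, nested_subset(loci_sorted, target))
```

Repeating the downstream population-structure analysis on each subset then exposes where artifactual structure appears (high-error end) and where real signal is lost (low-error end).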
PMCID: PMC3356811  PMID: 22274647
mismatch error; homoplasy; larval dispersal; gene flow; Crassostrea virginica; Ostrea equestris
24.  The Influence of Distance and Level of Care on Delivery Place in Rural Zambia: A Study of Linked National Data in a Geographic Information System 
PLoS Medicine  2011;8(1):e1000394.
Using linked national data in a geographic information system, Sabine Gabrysch and colleagues investigate the effects of distance to care and level of care on women's use of health facilities for delivery in rural Zambia.
Maternal and perinatal mortality could be reduced if all women delivered in settings where skilled attendants could provide emergency obstetric care (EmOC) if complications arise. Research on determinants of skilled attendance at delivery has focussed on household and individual factors, neglecting the influence of the health service environment, in part due to a lack of suitable data. The aim of this study was to quantify the effects of distance to care and level of care on women's use of health facilities for delivery in rural Zambia, and to compare their population impact to that of other important determinants.
Methods and Findings
Using a geographic information system (GIS), we linked national household data from the Zambian Demographic and Health Survey 2007 with national facility data from the Zambian Health Facility Census 2005 and calculated straight-line distances. Health facilities were classified by whether they provided comprehensive EmOC (CEmOC), basic EmOC (BEmOC), or limited or substandard services. Multivariable multilevel logistic regression analyses were performed to investigate the influence of distance to care and level of care on place of delivery (facility or home) for 3,682 rural births, controlling for a wide range of confounders. Only a third of rural Zambian births occurred at a health facility, and half of all births were to mothers living more than 25 km from a facility of BEmOC standard or better. As distance to the closest health facility doubled, the odds of facility delivery decreased by 29% (95% CI, 14%–40%). Independently, each step increase in level of care led to 26% higher odds of facility delivery (95% CI, 7%–48%). The population impact of poor geographic access to EmOC was at least of similar magnitude as that of low maternal education, household poverty, or lack of female autonomy.
Lack of geographic access to emergency obstetric care is a key factor explaining why most rural deliveries in Zambia still occur at home without skilled care. Addressing geographic and quality barriers is crucial to increase service use and to lower maternal and perinatal mortality. Linking datasets using GIS has great potential for future research and can help overcome the neglect of health system factors in research and policy.
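The distance effect reported above (odds of facility delivery fall by 29% per doubling of distance) is equivalent to a logistic model with a log2(distance) covariate. The sketch below is an illustrative reconstruction of that relationship, not the authors' actual model or data:

```python
import math

# Odds ratio per doubling of distance, from the reported 29% decrease.
OR_PER_DOUBLING = 0.71

def relative_odds(distance_km, reference_km):
    """Odds of facility delivery at distance_km relative to reference_km."""
    doublings = math.log2(distance_km / reference_km)
    return OR_PER_DOUBLING ** doublings

# Going from 5 km to 20 km is two doublings, so odds fall to 0.71^2 ≈ 0.50.
print(round(relative_odds(20.0, 5.0), 2))
```

On this reading, a mother living 25 km from care (the median for a BEmOC-standard facility) faces substantially lower odds of facility delivery than one living a few kilometres away, holding all else constant.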
Please see later in the article for the Editors' Summary
Editors' Summary
Approximately 360,000 women die each year in pregnancy and childbirth, more than 200,000 of them in sub-Saharan Africa, where a woman's lifetime risk of dying during or following pregnancy remains as high as 1 in 31 (compared to 1 in 4,300 in the developed world). The target of Millennium Development Goal 5 is to reduce the maternal mortality ratio by three quarters by 2015. Most maternal and neonatal deaths in low-income countries could be prevented if all women delivered their babies in settings where skilled birth attendants (such as midwives) were available and could provide emergency obstetric care to both mothers and babies in case of complications. Yet every year roughly 50 million women give birth at home without skilled care.
Why was this study done?
The likelihood of a woman giving birth in a health facility under the care of a skilled birth attendant depends on many factors. These include characteristics of the mother and her family, such as education level and household wealth, and aspects of the health service environment—distance to the nearest health facility and the quality of care provided at that facility, for example. However, research to date has typically focused on household and individual factors, neglecting the influence of the health service environment on choice of delivery place, largely because suitable data was not available. In this study in rural Zambia, the researchers aimed to quantify the effects of the health service environment, namely distance to health care and the level of care provided, on pregnant women's use of health facilities for giving birth. To put these factors in context, the researchers compared the impact of distance to quality care on place of delivery to that of other important factors, such as poverty and education.
What did the researchers do and find?
Using a geographic information system (GIS), the researchers linked national household data (from the 2007 Zambia Demographic and Health Survey) with national facility data (from the 2005 Zambian Health Facility Census) and calculated straight-line distances between women's villages and health facilities. Health facilities were classified as providing comprehensive emergency obstetric care, basic emergency obstetric care, or limited or substandard services by using reported capability to perform a certain number of the eight emergency obstetric care signal functions: injectable antibiotics, injectable oxytocics, injectable anticonvulsants, manual removal of placenta, manual removal of retained products, assisted vaginal delivery, cesarean section, and blood transfusion, as well as criteria on staffing, opening hours and referral capacity. The researchers used data from 3,682 rural births and multivariable multilevel logistic regression analyses to investigate whether distance to, and level of care at the closest delivery facility influence place of delivery (health facility or home), keeping other influential factors constant.
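The facility classification described above can be sketched as a rule over the eight signal functions listed in the text. Treating "basic" EmOC as the six non-surgical functions is an assumption here, and the study's additional criteria on staffing, opening hours, and referral capacity are not modeled:

```python
# The eight EmOC signal functions named in the study.
SIGNAL_FUNCTIONS = (
    "injectable_antibiotics", "injectable_oxytocics",
    "injectable_anticonvulsants", "manual_removal_of_placenta",
    "removal_of_retained_products", "assisted_vaginal_delivery",
    "cesarean_section", "blood_transfusion",
)
# Assumption: "basic" excludes the surgical functions (cesarean, transfusion).
BASIC_FUNCTIONS = SIGNAL_FUNCTIONS[:6]

def classify_facility(capabilities):
    """Return 'CEmOC', 'BEmOC', or 'limited' for a set of signal functions."""
    if set(SIGNAL_FUNCTIONS) <= capabilities:
        return "CEmOC"
    if set(BASIC_FUNCTIONS) <= capabilities:
        return "BEmOC"
    return "limited"

print(classify_facility(set(SIGNAL_FUNCTIONS)))       # CEmOC
print(classify_facility(set(BASIC_FUNCTIONS)))        # BEmOC
print(classify_facility({"injectable_antibiotics"}))  # limited
```

Each facility in the linked census can then be assigned one of three ordered levels, which is what allows "each step increase in level of care" to enter the regression as a single covariate.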
The researchers found that only a third of births in rural Zambia occurred at a health facility, and half of all mothers who gave birth lived more than 25 km from a health facility that provided basic emergency obstetric services. As distance to the closest health facility doubled, the odds of a women giving birth in a health facility decreased by 29%. Independently, each step increase in the level of emergency obstetric care provided at the closest delivery facility led to an increased likelihood (26% higher odds) of a woman delivering her baby at a facility. The researchers estimated that the impact of poor geographic access to emergency obstetric services was of similar magnitude as that of low maternal education, household poverty, or lack of female autonomy.
What do these findings mean?
The results of this study suggest that poor geographic access to emergency obstetric care is a key factor in explaining why most women in rural Zambia still deliver their babies at home without skilled care. Therefore, in order to increase the number of women delivering in health facilities and thus reduce maternal and neonatal mortality, it is crucial to address the geographic and quality barriers to delivery service use. Furthermore, the methodology used in this study—linking datasets using GIS—has great potential for future research, as it can help explore the influence of health system factors on other health problems as well.
Additional Information
Please access these websites via the online version of this summary at
Information about emergency obstetric care is provided by the United Nations Population Fund (UNFPA)
Various topics on maternal health are presented by WHO, the WHO Regional Office for Africa, UNFPA, and UNICEF
WHO offers detailed information about MDG5
Family Care International offers more information about maternal and neonatal health
The Averting Maternal Death and Disability program (AMDD) provides information on needs assessments of emergency obstetric and newborn care
Countdown to 2015 tracks progress in maternal, newborn, and child survival
WHO provides free online viewing of BBC Fight for Life videos describing delivery experiences in different countries
PMCID: PMC3026699  PMID: 21283606
25.  Inferring separate parental admixture components in unknown DNA samples using autosomal SNPs 
European Journal of Human Genetics  2012;20(12):1283-1289.
The identification of ancestral admixture proportions for human DNA samples has recently had success in forensic cases. Current methods infer admixture proportions for the target sample, but not for their parents, which provides an additional layer of information that may aid certain forensic investigations. We describe new maximum likelihood methods (LEAPFrOG and LEAPFrOG Expectation Maximisation) for inferring both an individual's admixture proportions and the admixture proportions possessed by the unobserved parents, with respect to two or more source populations, using single-nucleotide polymorphism data typed only in the target individual. This is achieved by examining the increase in heterozygosity in the offspring of parents who are from different populations or who represent different mixtures from a number of source populations. We validated the methods via simulation, combining chromosomes from different HapMap Phase III population samples to emulate first-generation admixture. Performance was strong for individuals with mixed African/European (YRI/CEU) ancestry, but poor for mixed Japanese/Chinese (JPT/CHB) ancestry, reflecting the difficulty in distinguishing closely related source populations. A total of 11 African-American trios were used to compare the parental admixture inferred from their own genotypes against that inferred purely from their offspring genotypes. We examined the performance of 34 ancestry informative markers from a multiplex kit for ancestry inference. Simulations showed that estimates were unreliable when parents had similar admixture, suggesting more markers are needed. Our results demonstrate that ancestral backgrounds of case samples and their parents are obtainable to aid in forensic investigations, provided that high-throughput methods are adopted by the forensic community.
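The heterozygosity signal the abstract describes can be made concrete with a toy genotype likelihood: each parental allele is drawn from an allele-frequency mixture set by that parent's admixture proportion, so parents with very different admixtures raise the heterozygote probability at SNPs where the source populations differ. This is an assumption-laden reconstruction of the idea, not the published LEAPFrOG implementation:

```python
import math

def allele_freq(m, p_pop1, p_pop2):
    """Frequency of the reference allele transmitted by a parent with
    admixture proportion m from source population 1."""
    return m * p_pop1 + (1 - m) * p_pop2

def genotype_loglik(genotypes, freqs1, freqs2, m_father, m_mother):
    """Log-likelihood of offspring genotypes (0/1/2 reference-allele
    counts) given the two parents' admixture proportions."""
    ll = 0.0
    for g, p1, p2 in zip(genotypes, freqs1, freqs2):
        qf = allele_freq(m_father, p1, p2)
        qm = allele_freq(m_mother, p1, p2)
        probs = {
            2: qf * qm,                        # homozygous reference
            1: qf * (1 - qm) + (1 - qf) * qm,  # heterozygous
            0: (1 - qf) * (1 - qm),            # homozygous alternate
        }
        ll += math.log(probs[g])
    return ll

# At a SNP where the populations differ strongly (p1=0.9 vs p2=0.1), a
# heterozygous offspring is more likely under maximally different parents
# (m=1.0 and m=0.0, P(het)=0.82) than under identical ones (P(het)=0.5).
same = genotype_loglik([1], [0.9], [0.1], 0.5, 0.5)
diff = genotype_loglik([1], [0.9], [0.1], 1.0, 0.0)
print(diff > same)
```

Maximizing such a likelihood over the two parental proportions across many SNPs is, in spirit, what lets the method separate the parents' contributions from a single offspring genotype.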
PMCID: PMC3499753  PMID: 22739346
population genetics; SNPs; admixture; statistical genetics; genomics; forensic genetics
