1.  Intra-tumor Genetic Heterogeneity and Mortality in Head and Neck Cancer: Analysis of Data from The Cancer Genome Atlas 
PLoS Medicine  2015;12(2):e1001786.
Background
Although the involvement of intra-tumor genetic heterogeneity in tumor progression, treatment resistance, and metastasis is established, genetic heterogeneity is seldom examined in clinical trials or practice. Many studies of heterogeneity have had prespecified markers for tumor subpopulations, limiting their generalizability, or have involved massive efforts such as separate analysis of hundreds of individual cells, limiting their clinical use. We recently developed a general measure of intra-tumor genetic heterogeneity based on whole-exome sequencing (WES) of bulk tumor DNA, called mutant-allele tumor heterogeneity (MATH). Here, we examine data collected as part of a large, multi-institutional study to validate this measure and determine whether intra-tumor heterogeneity is itself related to mortality.
Methods and Findings
Clinical and WES data were obtained from The Cancer Genome Atlas in October 2013 for 305 patients with head and neck squamous cell carcinoma (HNSCC), from 14 institutions. Initial pathologic diagnoses were between 1992 and 2011 (median, 2008). Median time to death for 131 deceased patients was 14 mo; median follow-up of living patients was 22 mo. Tumor MATH values were calculated from WES results. Despite the multiple head and neck tumor subsites and the variety of treatments, we found in this retrospective analysis a substantial relation of high MATH values to decreased overall survival (Cox proportional hazards analysis: hazard ratio for high/low heterogeneity, 2.2; 95% CI 1.4 to 3.3). This relation of intra-tumor heterogeneity to survival was not due to intra-tumor heterogeneity’s associations with other clinical or molecular characteristics, including age, human papillomavirus status, tumor grade and TP53 mutation, and N classification. MATH improved prognostication over that provided by traditional clinical and molecular characteristics, maintained a significant relation to survival in multivariate analyses, and distinguished outcomes among patients having oral-cavity or laryngeal cancers even when standard disease staging was taken into account. Prospective studies, however, will be required before MATH can be used prognostically in clinical trials or practice. Such studies will need to examine homogeneously treated HNSCC at specific head and neck subsites, and determine the influence of cancer therapy on MATH values. Analysis of MATH and outcome in human-papillomavirus-positive oropharyngeal squamous cell carcinoma is particularly needed.
Conclusions
To our knowledge this study is the first to combine data from hundreds of patients, treated at multiple institutions, to document a relation between intra-tumor heterogeneity and overall survival in any type of cancer. We suggest applying the simply calculated MATH metric of heterogeneity to prospective studies of HNSCC and other tumor types.
In this study, Rocco and colleagues examine data collected as part of a large, multi-institutional study, to validate a measure of tumor heterogeneity called MATH and determine whether intra-tumor heterogeneity is itself related to mortality.
Editors’ Summary
Background
Normally, the cells in human tissues and organs only reproduce (a process called cell division) when new cells are needed for growth or to repair damaged tissues. But sometimes a cell somewhere in the body acquires a genetic change (mutation) that disrupts the control of cell division and allows the cell to grow continuously. As the mutated cell grows and divides, it accumulates additional mutations that allow it to grow even faster and eventually form a lump, or tumor (cancer). Other mutations subsequently allow the tumor to spread around the body (metastasize) and destroy healthy tissues. Tumors can arise anywhere in the body—there are more than 200 different types of cancer—and about one in three people will develop some form of cancer during their lifetime. Many cancers can now be successfully treated, however, and people often survive for years after a diagnosis of cancer before, eventually, dying from another disease.
Why Was This Study Done?
The gradual acquisition of mutations by tumor cells leads to the formation of subpopulations of cells, each carrying a different set of mutations. This “intra-tumor heterogeneity” can produce tumor subclones that grow particularly quickly, that metastasize aggressively, or that are resistant to cancer treatments. Consequently, researchers have hypothesized that high intra-tumor heterogeneity leads to worse clinical outcomes and have suggested that a simple measure of this heterogeneity would be a useful addition to the cancer staging system currently used by clinicians for predicting the likely outcome (prognosis) of patients with cancer. Here, the researchers investigate whether a measure of intra-tumor heterogeneity called “mutant-allele tumor heterogeneity” (MATH) is related to mortality (death) among patients with head and neck squamous cell carcinoma (HNSCC)—cancers that begin in the cells that line the moist surfaces inside the head and neck, such as cancers of the mouth and the larynx (voice box). MATH is based on whole-exome sequencing (WES) of tumor and matched normal DNA. WES uses powerful DNA-sequencing systems to determine the variations of all the coding regions (exons) of the known genes in the human genome (genetic blueprint).
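The MATH metric itself is simply calculated from the spread of the mutant-allele fractions reported by WES. A minimal sketch follows, based on the definition in the authors' earlier work (MATH as the percentage ratio of the median absolute deviation to the median of the tumor's mutant-allele fractions); the normal-consistency scaling of the MAD is an assumption here, and the example fractions are purely illustrative:

```python
import numpy as np

def math_score(mutant_allele_fractions):
    """Mutant-allele tumor heterogeneity (MATH): the ratio of the median
    absolute deviation (MAD) to the median of a tumor's mutant-allele
    fractions, expressed as a percentage. The 1.4826 factor scales the
    MAD to be consistent with the standard deviation of a normal
    distribution (an assumption, matching the default MAD used in R)."""
    vafs = np.asarray(mutant_allele_fractions, dtype=float)
    med = np.median(vafs)
    mad = 1.4826 * np.median(np.abs(vafs - med))
    return 100.0 * mad / med

# A homogeneous tumor (mutant-allele fractions clustered around a single
# value) yields a lower MATH score than a heterogeneous one whose
# subclones carry mutations at widely varying allele fractions.
homogeneous = [0.48, 0.50, 0.52, 0.49, 0.51]
heterogeneous = [0.10, 0.25, 0.50, 0.35, 0.15]
assert math_score(homogeneous) < math_score(heterogeneous)
```

The attraction of this design is that it needs only the bulk-tumor WES output already produced in routine sequencing, with no prespecified subpopulation markers or single-cell work.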
What Did the Researchers Do and Find?
The researchers obtained clinical and WES data from The Cancer Genome Atlas, a catalog established by the US National Institutes of Health to map the key genomic changes in major types and subtypes of cancer, for 305 patients with HNSCC who were treated in 14 institutions, primarily in the US. They calculated tumor MATH values for the patients from their WES results and retrospectively analyzed whether there was an association between the MATH values and patient survival. Despite the patients having tumors at various subsites and being given different treatments, every 10% increase in MATH value corresponded to an 8.8% increased risk (hazard) of death. Using a previously defined MATH-value cutoff to distinguish high- from low-heterogeneity tumors, patients with high-heterogeneity tumors were more than twice as likely to die as patients with low-heterogeneity tumors (a hazard ratio of 2.2). Other statistical analyses indicated that MATH provided improved prognostic information compared to that provided by established clinical and molecular characteristics and human papillomavirus (HPV) status (HPV-positive HNSCC at some subsites has a better prognosis than HPV-negative HNSCC). In particular, MATH provided prognostic information beyond that provided by standard disease staging among patients with mouth or laryngeal cancers.
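Because a Cox proportional hazards model makes the log-hazard linear in the covariate, per-step hazard ratios compound multiplicatively rather than additively. The sketch below illustrates this arithmetic using the reported figure of an 8.8% increased hazard per 10% increase in MATH; reading that figure as "per 10 MATH units" is an interpretive assumption made only for this illustration:

```python
import math

# Reported (per the Editors' Summary): hazard ratio 1.088 per 10-unit MATH step.
hr_per_10 = 1.088
beta = math.log(hr_per_10) / 10.0   # log-hazard increase per MATH unit

def hazard_ratio(delta_math):
    """Hazard ratio implied by a difference of `delta_math` MATH units
    under the proportional-hazards assumption."""
    return math.exp(beta * delta_math)

print(round(hazard_ratio(10), 3))  # 1.088 by construction
print(round(hazard_ratio(30), 3))  # 1.288: three 10-unit steps compound (1.088**3)
```

This compounding is why even a modest per-step hazard ratio can separate patients at the extremes of the MATH distribution quite sharply.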
What Do These Findings Mean?
By using data from more than 300 patients treated at multiple institutions, these findings validate the use of MATH as a measure of intra-tumor heterogeneity in HNSCC. Moreover, they provide one of the first large-scale demonstrations that intra-tumor heterogeneity is clinically important in the prognosis of any type of cancer. Before the MATH metric can be used in clinical trials or in clinical practice as a prognostic tool, its ability to predict outcomes needs to be tested in prospective studies that examine the relation between MATH and the outcomes of patients with identically treated HNSCC at specific head and neck subsites, that evaluate the use of MATH for prognostication in other tumor types, and that determine the influence of cancer treatments on MATH values. Nevertheless, these findings suggest that MATH should be considered as a biomarker for survival in HNSCC and other tumor types, and raise the possibility that clinicians could use MATH values to decide on the best treatment for individual patients and to choose patients for inclusion in clinical trials.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001786.
The US National Cancer Institute (NCI) provides information about cancer and how it develops and about head and neck cancer (in English and Spanish)
Cancer Research UK, a not-for-profit organization, provides general information about cancer and how it develops, and detailed information about head and neck cancer; the Merseyside Regional Head and Neck Cancer Centre provides patient stories about HNSCC
Wikipedia provides information about tumor heterogeneity, and about whole-exome sequencing (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
Information about The Cancer Genome Atlas is available
A PLOS Blog entry by Jessica Wapner explains more about MATH
doi:10.1371/journal.pmed.1001786
PMCID: PMC4323109  PMID: 25668320
3.  The Role of Abcb5 Alleles in Susceptibility to Haloperidol-Induced Toxicity in Mice and Humans 
PLoS Medicine  2015;12(2):e1001782.
Background
We know very little about the genetic factors affecting susceptibility to drug-induced central nervous system (CNS) toxicities, and this has limited our ability to optimally utilize existing drugs or to develop new drugs for CNS disorders. For example, haloperidol is a potent dopamine antagonist that is used to treat psychotic disorders, but 50% of treated patients develop characteristic extrapyramidal symptoms caused by haloperidol-induced toxicity (HIT), which limits its clinical utility. We do not have any information about the genetic factors affecting this drug-induced toxicity. HIT in humans is directly mirrored in a murine genetic model, where inbred mouse strains are differentially susceptible to HIT. Therefore, we genetically analyzed this murine model and performed a translational human genetic association study.
Methods and Findings
A whole genome SNP database and computational genetic mapping were used to analyze the murine genetic model of HIT. Guided by the mouse genetic analysis, we demonstrate that genetic variation within the gene encoding an ABC drug efflux transporter (Abcb5) affected susceptibility to HIT. In situ hybridization results reveal that Abcb5 is expressed in brain capillaries and in cerebellar Purkinje cells. We also analyzed chromosome substitution strains, imaged haloperidol abundance in brain tissue sections, directly measured haloperidol (and its metabolite) levels in brain, and characterized Abcb5 knockout mice. Our results demonstrate that Abcb5 is part of the blood-brain barrier; it affects susceptibility to HIT by altering the brain concentration of haloperidol. Moreover, a genetic association study in a haloperidol-treated human cohort indicates that human ABCB5 alleles had a time-dependent effect on susceptibility to individual and combined measures of HIT. Abcb5 alleles are pharmacogenetic factors that affect susceptibility to HIT, but it is likely that additional pharmacogenetic susceptibility factors will be discovered.
Conclusions
ABCB5 alleles alter susceptibility to HIT in mice and humans. This discovery leads to a new model that (at least in part) explains inter-individual differences in susceptibility to a drug-induced CNS toxicity.
Gary Peltz and colleagues examine the role of ABCB5 alleles in haloperidol-induced toxicity in a murine genetic model and humans treated with haloperidol.
Editors' Summary
Background
The brain is the control center of the human body. This complex organ controls thoughts, memory, speech, and movement; it is the seat of intelligence; and it regulates the function of many organs. The brain comprises many different parts, all of which work together but all of which have their own special functions. For example, the forebrain is involved in intellectual activities such as thinking whereas the hindbrain controls the body’s vital functions and movements. Messages are passed between the various regions of the brain and to other parts of the body by specialized cells called neurons, which release and receive signal molecules known as neurotransmitters. Like all the organs in the body, blood vessels supply the brain with the oxygen, water, and nutrients it needs to function. Importantly, however, the brain is protected from infectious agents and other potentially dangerous substances circulating in the blood by the “blood-brain barrier,” a highly selective permeability barrier that is formed by the cells lining the fine blood vessels (capillaries) within the brain.
Why Was This Study Done?
Although drugs have been developed to treat various brain disorders, more active and less toxic drugs are needed to improve the treatment of many if not most of these conditions. Unfortunately, relatively little is known about how the blood-brain barrier regulates the entry of drugs into the brain or about the genetic factors that affect the brain’s susceptibility to drug-induced toxicities. It is not known, for example, why about half of patients given haloperidol—a drug used to treat psychotic disorders (conditions that affect how people think, feel, or behave)—develop tremors and other symptoms caused by alterations in the brain region that controls voluntary movements. Here, to improve our understanding of how drugs enter the brain and impact its function, the researchers investigate the genetic factors that affect haloperidol-induced toxicity by genetically analyzing several inbred mouse strains (every individual in an inbred mouse strain is genetically identical) with different susceptibilities to haloperidol-induced toxicity and by undertaking a human genetic association study (a study that looks for non-chance associations between specific traits and genetic variants).
What Did the Researchers Do and Find?
The researchers used a database of genetic variants called single nucleotide polymorphisms (SNPs) and a computational genetic mapping approach to show first that variations within the gene encoding Abcb5 affected susceptibility to haloperidol-induced toxicity (indicated by changes in the length of time taken by mice to move their paws when placed on an inclined wire-mesh screen) among inbred mouse strains. Abcb5 is an ATP-binding cassette transporter, a type of protein that moves molecules across cell membranes. The researchers next showed that Abcb5 is expressed in brain capillaries, which is the location of the blood-brain barrier. Abcb5 was also expressed in cerebellar Purkinje cells, which help to control motor (intentional) movements. They also measured the effect of haloperidol and the haloperidol concentration in brain tissue sections in mice that were genetically engineered to make no Abcb5 (Abcb5 knockout mice). Finally, the researchers investigated whether specific alleles (alternative versions) of ABCB5 are associated with haloperidol-induced toxicity in people. Among a group of 85 patients treated with haloperidol for a psychotic illness, one specific ABCB5 allele was associated with haloperidol-induced toxicity during the first few days of treatment.
What Do These Findings Mean?
These findings indicate that Abcb5 is a component of the blood-brain barrier in mice and suggest that genetic variants in the gene encoding this protein underlie, at least in part, the differences in susceptibility to haloperidol-induced toxicity seen among inbred mice strains. Moreover, the human genetic association study indicates that a specific ABCB5 allele also affects the susceptibility of people to haloperidol-induced toxicity. The researchers note that other ABCB5 alleles or other genetic factors that affect haloperidol-induced toxicity in people might emerge if larger groups of patients were studied. However, based on their findings, the researchers propose a new model for the genetic mechanisms that underlie inter-individual and cell type-specific differences in susceptibility to haloperidol-induced brain toxicity. If confirmed in future studies, this model might facilitate the development of more effective and less toxic drugs to treat a range of brain disorders.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001782.
The US National Institute of Neurological Disorders and Stroke provides information about a wide range of brain diseases (in English and Spanish); its fact sheet “Brain Basics: Know Your Brain” is a simple introduction to the human brain; its “Blueprint Neurotherapeutics Network” was established to develop new drugs for disorders affecting the brain and other parts of the nervous system
MedlinePlus provides links to additional resources about brain diseases and their treatment (in English and Spanish)
Wikipedia provides information about haloperidol, about ATP-binding cassette transporters and about genetic association (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001782
PMCID: PMC4315575  PMID: 25647612
4.  Evaluation of a Minimally Invasive Cell Sampling Device Coupled with Assessment of Trefoil Factor 3 Expression for Diagnosing Barrett's Esophagus: A Multi-Center Case–Control Study 
PLoS Medicine  2015;12(1):e1001780.
Background
Barrett's esophagus (BE) is a commonly undiagnosed condition that predisposes to esophageal adenocarcinoma. Routine endoscopic screening for BE is not recommended because of the burden this would impose on the health care system. The objective of this study was to determine whether a novel approach using a minimally invasive cell sampling device, the Cytosponge, coupled with immunohistochemical staining for the biomarker Trefoil Factor 3 (TFF3), could be used to identify patients who warrant endoscopy to diagnose BE.
Methods and Findings
A case–control study was performed across 11 UK hospitals between July 2011 and December 2013. In total, 1,110 individuals comprising 463 controls with dyspepsia and reflux symptoms and 647 BE cases swallowed a Cytosponge prior to endoscopy. The primary outcome measures were to evaluate the safety, acceptability, and accuracy of the Cytosponge-TFF3 test compared with endoscopy and biopsy.
In all, 1,042 (93.9%) patients successfully swallowed the Cytosponge, and no serious adverse events were attributed to the device. The Cytosponge was rated favorably, using a visual analogue scale, compared with endoscopy (p < 0.001), and patients who were not sedated for endoscopy were more likely to rate the Cytosponge higher than endoscopy (Mann-Whitney test, p < 0.001). The overall sensitivity of the test was 79.9% (95% CI 76.4%–83.0%), increasing to 87.2% (95% CI 83.0%–90.6%) for patients with ≥3 cm of circumferential BE, known to confer a higher cancer risk. The sensitivity increased to 89.7% (95% CI 82.3%–94.8%) in 107 patients who swallowed the device twice during the study course. There was no loss of sensitivity in patients with dysplasia. The specificity for diagnosing BE was 92.4% (95% CI 89.5%–94.7%). The case–control design of the study means that the results are not generalizable to a primary care population. Another limitation is that the acceptability data were limited to a single measure.
Conclusions
The Cytosponge-TFF3 test is safe and acceptable, and has accuracy comparable to other screening tests. This test may be a simple and inexpensive approach to identify patients with reflux symptoms who warrant endoscopy to diagnose BE.
Editors' Summary
Background
Barrett's esophagus is a condition in which the cells lining the esophagus (the tube that transports food from the mouth to the stomach) change and begin to resemble the cells lining the intestines. Although some people with Barrett's esophagus complain of burning indigestion or acid reflux from the stomach into the esophagus, many people have no symptoms or do not seek medical advice, so the condition often remains undiagnosed. Long-term acid reflux (gastroesophageal reflux disease), obesity, and being male are all risk factors for Barrett's esophagus, but the condition's exact cause is unclear. Importantly, people with Barrett's esophagus are more likely to develop esophageal cancer than people with a normal esophagus, especially if a long length (segment) of the esophagus is affected or if the esophagus contains abnormally growing “dysplastic” cells. Although esophageal cancer is rare in the general population, 1%–5% of people with Barrett's esophagus develop this type of cancer; about half of people diagnosed with esophageal cancer die within a year of diagnosis.
Why Was This Study Done?
Early detection and treatment of esophageal cancer increases an affected individual's chances of survival. Thus, experts recommend that people with multiple risk factors for Barrett's esophagus undergo endoscopic screening—a procedure that uses a small camera attached to a long flexible tube to look for esophageal abnormalities. Once diagnosed, patients with Barrett's esophagus generally enter an endoscopic surveillance program so that dysplastic cells can be identified as soon as they appear and removed using endoscopic surgery or “radiofrequency ablation” to prevent cancer development. However, although endoscopic screening of everyone with reflux symptoms for Barrett's esophagus could potentially reduce deaths from esophageal cancer, such screening is not affordable for most health care systems. In this case–control study, the researchers investigate whether a cell sampling device called the Cytosponge coupled with immunohistochemical staining for Trefoil Factor 3 (TFF3, a biomarker of Barrett's esophagus) can be used to identify individuals who warrant endoscopic investigation. A case–control study compares the characteristics of patients with and without a specific disease. The Cytosponge is a small capsule-encased sponge that is attached to a string. The capsule rapidly dissolves in the stomach after being swallowed, and the sponge collects esophageal cells for TFF3 staining when it is retrieved by pulling on the string.
What Did the Researchers Do and Find?
The researchers enrolled 463 individuals attending 11 UK hospitals for investigational endoscopy for dyspepsia and reflux symptoms as controls, and 647 patients with Barrett's esophagus who were attending hospital for monitoring endoscopy. Before undergoing endoscopy, the study participants swallowed a Cytosponge so that the researchers could evaluate the safety, acceptability, and accuracy of the Cytosponge-TFF3 test for the diagnosis of Barrett's esophagus compared with endoscopy. Nearly 94% of the participants swallowed the Cytosponge successfully, there were no adverse effects attributed to the device, and those participants that swallowed the device generally rated the experience as acceptable. The overall sensitivity of the Cytosponge-TFF3 test (its ability to detect true positives) was 79.9%. That is, 79.9% of the individuals with endoscopically diagnosed Barrett's esophagus were identified as having the condition using the new test. The sensitivity of the test was greater among patients who had a longer length of affected esophagus and importantly was not reduced in patients with dysplasia. Compared to endoscopy, the specificity of the Cytosponge-TFF3 test (its ability to detect true negatives) was 92.4%. That is, 92.4% of people unaffected by Barrett's esophagus were correctly identified as being unaffected.
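The sensitivity and specificity figures quoted above follow directly from the standard 2×2 definitions. The sketch below reproduces the reported rates; the counts are hypothetical round numbers chosen only to illustrate the arithmetic, not the study's actual contingency table:

```python
def sensitivity(tp, fn):
    """True-positive rate: the proportion of disease-positive patients
    (here, endoscopically confirmed Barrett's esophagus) the test flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: the proportion of disease-free patients
    the test correctly clears."""
    return tn / (tn + fp)

# Hypothetical counts chosen only to reproduce the reported rates.
print(round(sensitivity(tp=799, fn=201), 3))  # 0.799 -> 79.9% sensitivity
print(round(specificity(tn=924, fp=76), 3))   # 0.924 -> 92.4% specificity
```

Note that because the study enrolled cases and controls separately, these rates can be estimated, but the test's positive predictive value in a primary care population cannot, which is one reason the authors caution against generalizing the results.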
What Do These Findings Mean?
The case–control design of this study means that its results are not generalizable to a primary care population. Also, the study used only a single measure of the acceptability of the Cytosponge-TFF3 test. Nevertheless, these findings indicate that this minimally invasive test for Barrett's esophagus is safe and acceptable, and that its accuracy is similar to that of colorectal cancer and cervical cancer screening tests. The Cytosponge-TFF3 test might, therefore, provide a simple, inexpensive way to identify those patients with reflux symptoms who warrant endoscopy to diagnose Barrett's esophagus, although randomized controlled trials of the test are needed before its routine clinical implementation. Moreover, because most people with Barrett's esophagus never develop esophageal cancer, additional biomarkers ideally need to be added to the test before its routine implementation to identify those individuals who have the greatest risk of esophageal cancer, and thereby avoid overtreatment of Barrett's esophagus.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001780.
The US National Institute of Diabetes and Digestive and Kidney Diseases provides detailed information about Barrett's esophagus and gastroesophageal reflux disease
The US National Cancer Institute provides information for patients and health professionals about esophageal cancer (in English and Spanish)
Cancer Research UK (a non-profit organization) provides detailed information about Barrett's esophagus (including a video about having the Cytosponge test and further information about this study, the BEST2 Study) and about esophageal cancer
The UK National Health Service Choices website has pages on the complications of gastroesophageal reflux and on esophageal cancer (including a real story)
Heartburn Cancer Awareness Support is a non-profit organization that aims to improve public awareness and provides support for people affected by Barrett's esophagus; the organization's website explains the range of initiatives to promote education and awareness as well as highlighting personal stories of those affected by Barrett's esophagus and esophageal cancer
The British Society of Gastroenterology has published guidelines on the diagnosis and management of Barrett's esophagus
The UK National Institute for Health and Care Excellence has published guidelines for gastroesophageal reflux
The Barrett's Esophagus Campaign is a UK-based non-profit organization that supports research into the condition and provides support for people affected by Barrett's esophagus; its website includes personal stories about the condition
In a multi-center case-control study, Rebecca Fitzgerald and colleagues examine whether a minimally invasive cell sampling device could be used to identify patients who warrant endoscopy to diagnose Barrett's esophagus.
doi:10.1371/journal.pmed.1001780
PMCID: PMC4310596  PMID: 25634542
6.  Supporting Those Who Go to Fight Ebola 
PLoS Medicine  2015;12(1):e1001781.
doi:10.1371/journal.pmed.1001781
PMCID: PMC4306478  PMID: 25622033
7.  Hormonal Contraception and the Risk of HIV Acquisition: An Individual Participant Data Meta-analysis 
PLoS Medicine  2015;12(1):e1001778.
In a meta-analysis of individual participant data, Charles Morrison and colleagues explore the association between hormonal contraception use and risk of HIV infection in sub-Saharan Africa.
Background
Observational studies of a putative association between hormonal contraception (HC) and HIV acquisition have produced conflicting results. We conducted an individual participant data (IPD) meta-analysis of studies from sub-Saharan Africa to compare the incidence of HIV infection in women using combined oral contraceptives (COCs) or the injectable progestins depot-medroxyprogesterone acetate (DMPA) or norethisterone enanthate (NET-EN) with women not using HC.
Methods and Findings
Eligible studies measured HC exposure and incident HIV infection prospectively using standardized measures, enrolled women aged 15–49 y, recorded ≥15 incident HIV infections, and measured prespecified covariates. Our primary analysis estimated the adjusted hazard ratio (aHR) using two-stage random effects meta-analysis, controlling for region, marital status, age, number of sex partners, and condom use. We included 18 studies, including 37,124 women (43,613 woman-years) and 1,830 incident HIV infections. Relative to no HC use, the aHR for HIV acquisition was 1.50 (95% CI 1.24–1.83) for DMPA use, 1.24 (95% CI 0.84–1.82) for NET-EN use, and 1.03 (95% CI 0.88–1.20) for COC use. Between-study heterogeneity was mild (I2 < 50%). DMPA use was associated with increased HIV acquisition compared with COC use (aHR 1.43, 95% CI 1.23–1.67) and NET-EN use (aHR 1.32, 95% CI 1.08–1.61). Effect estimates were attenuated for studies at lower risk of methodological bias (compared with no HC use, aHR for DMPA use 1.22, 95% CI 0.99–1.50; for NET-EN use 0.67, 95% CI 0.47–0.96; and for COC use 0.91, 95% CI 0.73–1.41) compared to those at higher risk of bias (pinteraction = 0.003). Neither age nor herpes simplex virus type 2 infection status modified the HC–HIV relationship.
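In a two-stage IPD meta-analysis of this kind, stage one fits a survival model (here, a Cox model with the prespecified covariates) within each study, and stage two pools the per-study log hazard ratios under a random-effects model. A minimal sketch of the pooling stage using DerSimonian–Laird weights follows; the per-study estimates and standard errors are hypothetical and for illustration only, and the paper does not specify that this particular estimator was used:

```python
import math

def dersimonian_laird(log_hrs, ses):
    """Pool per-study log hazard ratios with DerSimonian-Laird
    random-effects weights (stage two of a two-stage IPD meta-analysis).
    Returns the pooled hazard ratio and the I-squared heterogeneity (%)."""
    w = [1.0 / se**2 for se in ses]                              # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_hrs))    # Cochran's Q
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_star = [1.0 / (se**2 + tau2) for se in ses]                # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_hrs)) / sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0          # I-squared
    return math.exp(pooled), i2

# Hypothetical per-study aHR estimates and standard errors (not the study's data).
log_hrs = [math.log(h) for h in [1.6, 1.4, 1.3, 1.7]]
ses = [0.20, 0.15, 0.25, 0.30]
pooled_hr, i2 = dersimonian_laird(log_hrs, ses)
print(round(pooled_hr, 2), round(i2, 1))
```

Pooling on the log scale is what makes the reported hazard ratios and their confidence intervals combine sensibly across studies, and the I² statistic quantifies the "mild (I² < 50%)" between-study heterogeneity mentioned above.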
Conclusions
This IPD meta-analysis found no evidence that COC or NET-EN use increases women’s risk of HIV but adds to the evidence that DMPA may increase HIV risk, underscoring the need for additional safe and effective contraceptive options for women at high HIV risk. A randomized controlled trial would provide more definitive evidence about the effects of hormonal contraception, particularly DMPA, on HIV risk.
Editors’ Summary
Background
AIDS has killed about 36 million people since the first recorded case of the disease in 1981. About 35 million people (including 25 million living in sub-Saharan Africa) are currently infected with HIV, the virus that causes AIDS, and every year, another 2.3 million people become newly infected with HIV. At the beginning of the epidemic, more men than women were infected with HIV. Now, about half of all adults infected with HIV are women. In 2013, almost 60% of all new HIV infections among young people aged 15–24 years occurred among women, and it is estimated that, worldwide, 50 young women are newly infected with HIV every hour. Most women become infected with HIV through unprotected intercourse with an infected male partner—biologically, women are twice as likely to become infected through unprotected intercourse as men. A woman’s risk of becoming infected with HIV can be reduced by abstaining from sex, by having one or a few partners, and by always using condoms.
Why Was This Study Done?
Women and societies both benefit from effective contraception. When contraception is available, women can avoid unintended pregnancies, fewer women and babies die during pregnancy and childbirth, and maternal and infant health improves. However, some (but not all) observational studies (investigations that measure associations between the characteristics of participants and their subsequent development of specific diseases) have reported an association between hormonal contraceptive use and an increased risk of HIV acquisition by women. So, does hormonal contraception increase the risk of HIV acquisition among women or not? Here, to investigate this question, the researchers undertake an individual participant data meta-analysis of studies conducted in sub-Saharan Africa (a region where both HIV infection and unintended pregnancies are common) to compare the incidence of HIV infection (the number of new cases in a population during a given time period) among women using and not using hormonal contraception. Meta-analysis is a statistical method that combines the results of several studies; an individual participant data meta-analysis combines the data recorded for each individual involved in the studies rather than the aggregated results from each study.
What Did the Researchers Do and Find?
The researchers included 18 studies that measured hormonal contraceptive use and incident HIV infection among women aged 15–49 years living in sub-Saharan Africa in their meta-analysis. More than 37,000 women took part in these studies, and 1,830 became newly infected with HIV. Half of the women were not using hormonal contraception, a quarter were using depot-medroxyprogesterone acetate (DMPA; an injectable hormonal contraceptive), and the remainder were using combined oral contraceptives (COCs) or norethisterone enanthate (NET-EN, another injectable contraceptive). After adjustment for other factors likely to influence HIV acquisition (for example, condom use), women using DMPA had a 1.5-fold increased risk of HIV acquisition compared to women not using hormonal contraception. There was a slightly increased risk of HIV acquisition among women using NET-EN compared to women not using hormonal contraception, but this increase was not statistically significant (it may have happened by chance alone). There was no increased risk of HIV acquisition associated with COC use. DMPA use was associated with a 1.43-fold and 1.32-fold increased risk of HIV acquisition compared with COC and NET-EN use, respectively. Finally, neither age nor herpes simplex virus 2 infection status modified the effect of hormonal contraceptive use on HIV acquisition.
What Do These Findings Mean?
The findings of this individual participant data meta-analysis provide no evidence that COC or NET-EN use increases a woman’s risk of acquiring HIV, but add to the evidence suggesting that DMPA use increases the risk of HIV acquisition. These findings are likely to be more accurate than those of previous meta-analyses that used aggregated data but are likely to be limited by the quality, design, and representativeness of the studies included in the analysis. These findings nevertheless highlight the need to develop additional safe and effective contraceptive options for women at risk of HIV, particularly those living in sub-Saharan Africa, where although contraceptive use is generally low, DMPA is the most widely used hormonal contraceptive. In addition, these findings highlight the need to initiate randomized controlled trials to provide more definitive evidence of the effects of hormonal contraception, particularly DMPA, on HIV risk.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001778.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS, and summaries of recent research findings on HIV care and treatment, including personal stories about living with HIV/AIDS and a news report on this meta-analysis
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including detailed information on women, HIV, and AIDS, and on HIV and AIDS in South Africa (in English and Spanish); personal stories of women living with HIV are available
The World Health Organization provides information on all aspects of HIV/AIDS (in several languages); information about a 2012 WHO technical consultation about hormonal contraception and HIV
The 2013 UNAIDS World AIDS Day report provides up-to-date information about the AIDS epidemic and efforts to halt it; UNAIDS also provides information about HIV and hormonal contraception
doi:10.1371/journal.pmed.1001778
PMCID: PMC4303292  PMID: 25612136
8.  Randomized Controlled Trials in Environmental Health Research: Unethical or Underutilized? 
PLoS Medicine  2015;12(1):e1001775.
Ryan Allen and colleagues argue that more randomized controlled trials in environmental health would complement a strong tradition of observational research.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001775
PMCID: PMC4285396  PMID: 25562846
9.  Association between Respiratory Syncytial Virus Activity and Pneumococcal Disease in Infants: A Time Series Analysis of US Hospitalization Data 
PLoS Medicine  2015;12(1):e1001776.
Daniel Weinberger and colleagues examine a possible interaction between two serious respiratory infections in children under 2 years of age.
Please see later in the article for the Editors' Summary
Background
The importance of bacterial infections following respiratory syncytial virus (RSV) remains unclear. We evaluated whether variations in RSV epidemic timing and magnitude are associated with variations in pneumococcal disease epidemics and whether changes in pneumococcal disease following the introduction of seven-valent pneumococcal conjugate vaccine (PCV7) were associated with changes in the rate of hospitalizations coded as RSV.
Methods and Findings
We used data from the State Inpatient Databases (Agency for Healthcare Research and Quality), including >700,000 RSV hospitalizations and >16,000 pneumococcal pneumonia hospitalizations in 36 states (1992/1993–2008/2009). Harmonic regression was used to estimate the timing of the average seasonal peak of RSV, pneumococcal pneumonia, and pneumococcal septicemia. We then estimated the association between the incidence of pneumococcal disease in children and the activity of RSV and influenza (where there is a well-established association) using Poisson regression models that controlled for shared seasonal variations. Finally, we estimated changes in the rate of hospitalizations coded as RSV following the introduction of PCV7. RSV and pneumococcal pneumonia shared a distinctive spatiotemporal pattern (correlation of peak timing: ρ = 0.70, 95% CI: 0.45, 0.84). RSV was associated with a significant increase in the incidence of pneumococcal pneumonia in children aged <1 y (attributable percent [AP]: 20.3%, 95% CI: 17.4%, 25.1%) and among children aged 1–2 y (AP: 10.1%, 95% CI: 7.6%, 13.9%). Influenza was also associated with an increase in pneumococcal pneumonia among children aged 1–2 y (AP: 3.2%, 95% CI: 1.7%, 4.7%). Finally, we observed a significant decline in RSV-coded hospitalizations in children aged <1 y following PCV7 introduction (−18.0%, 95% CI: −22.6%, −13.1%, for 2004/2005–2008/2009 versus 1997/1998–1999/2000). This study used aggregated hospitalization data, and studies with individual-level, laboratory-confirmed data could help to confirm these findings.
Conclusions
These analyses provide evidence for an interaction between RSV and pneumococcal pneumonia. Future work should evaluate whether treatment for secondary bacterial infections could be considered for pneumonia cases even if a child tests positive for RSV.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Respiratory infections—bacterial and viral infections of the lungs and the airways (the tubes that take oxygen-rich air to the lungs)—are major causes of illness and death in children worldwide. Pneumonia (infection of the lungs) alone is responsible for about 15% of all child deaths. The leading cause of bacterial pneumonia in children is Streptococcus pneumoniae, which is transmitted through contact with infected respiratory secretions. S. pneumoniae usually causes noninvasive diseases such as bronchitis, but sometimes the bacteria invade the lungs, the bloodstream, or the covering of the brain, where they cause pneumonia, septicemia, or meningitis, respectively. These potentially fatal invasive pneumococcal diseases can be treated with antibiotics but can also be prevented by vaccination with pneumococcal conjugate vaccines such as PCV7. The leading cause of viral pneumonia is respiratory syncytial virus (RSV), which is also readily transmitted through contact with infected respiratory secretions. Almost all children have an RSV infection before their second birthday—RSV usually causes a mild cold-like illness. However, some children infected with RSV develop pneumonia and have to be admitted to hospital for supportive care such as the provision of supplemental oxygen; there is no specific treatment for RSV infection.
Why Was This Study Done?
Co-infections with bacteria and viruses can sometimes have a synergistic effect and lead to more severe disease than an infection with either type of pathogen (disease-causing organism) alone. For example, influenza infections increase the risk of invasive pneumococcal disease. But does pneumococcal disease also interact with RSV infection? It is important to understand the interaction between pneumococcal disease and RSV to improve the treatment of respiratory infections in young children, but the importance of bacterial infections following RSV infection is currently unclear. Here, the researchers undertake a time series analysis of US hospitalization data to investigate the association between RSV activity and pneumococcal disease in infants. Time series analysis uses statistical methods to analyze data collected at successive, evenly spaced time points.
What Did the Researchers Do and Find?
For their analysis, the researchers used data collected between 1992/1993 and 2008/2009 by the State Inpatient Databases on more than 700,000 hospitalizations for RSV and more than 16,000 hospitalizations for pneumococcal pneumonia or septicemia among children under two years old in 36 US states. Using a statistical technique called harmonic regression to measure seasonal variations in disease incidence (the rate of occurrence of new cases of a disease), the researchers show that RSV and pneumococcal pneumonia shared a distinctive spatiotemporal pattern over the study period. Next, using Poisson regression models (another type of statistical analysis), they show that RSV was associated with significant increases (increases unlikely to have happened by chance) in the incidence of pneumococcal disease. Among children under one year old, 20.3% of pneumococcal pneumonia cases were associated with RSV activity; among children 1–2 years old, 10.1% of pneumococcal pneumonia cases were associated with RSV activity. Finally, the researchers report that following the introduction of routine vaccination in the US against S. pneumoniae with PCV7 in 2000, there was a significant decline in hospitalizations for RSV among children under one year old.
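The harmonic-regression idea used here, estimating when a seasonal epidemic peaks by fitting sine and cosine terms to a count series, can be sketched in a few lines. This is a minimal single-harmonic illustration on synthetic weekly counts, not the authors' code; the function name and the synthetic data are assumptions.

```python
import numpy as np

def seasonal_peak_week(counts, period=52):
    """Estimate the week of the seasonal peak of a weekly count series by
    fitting one harmonic to log-counts (illustrative sketch only)."""
    t = np.arange(len(counts))
    X = np.column_stack([
        np.ones_like(t, dtype=float),
        np.cos(2 * np.pi * t / period),
        np.sin(2 * np.pi * t / period),
    ])
    # Ordinary least squares on log-counts (+1 guards against log(0))
    coef, *_ = np.linalg.lstsq(X, np.log(np.asarray(counts) + 1.0), rcond=None)
    _, b, c = coef
    # The phase of the fitted cosine locates the peak within the season
    phase = np.arctan2(c, b)
    return float((phase % (2 * np.pi)) * period / (2 * np.pi))

# Synthetic series peaking near week 2 of each 52-week season
t = np.arange(4 * 52)
counts = 100 * np.exp(1.5 * np.cos(2 * np.pi * (t - 2) / 52))
peak = seasonal_peak_week(counts)
print(round(peak, 1))  # close to 2.0
```

Comparing peak estimates obtained this way across states and seasons for two diseases is what yields the peak-timing correlation reported in the study.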
What Do These Findings Mean?
These findings provide evidence for an interaction between RSV and pneumococcal pneumonia and indicate that RSV is associated with increases in the incidence of pneumococcal pneumonia, particularly in young infants. Notably, the finding that RSV hospitalizations declined after the introduction of routine pneumococcal vaccination suggests that some RSV hospitalizations may have a joint viral–bacterial etiology (cause), although it is possible that PCV7 vaccination reduced the diagnosis of RSV because fewer children were hospitalized with pneumococcal disease and subsequently tested for RSV. Because this is an ecological study (an observational investigation that looks at risk factors and outcomes in temporally and geographically defined populations), these findings do not provide evidence for a causal link between hospitalizations for RSV and pneumococcal pneumonia. The similar spatiotemporal patterns for the two infections might reflect another unknown factor shared by the children who were hospitalized for RSV or pneumococcal pneumonia. Moreover, because pooled hospitalization discharge data were used in this study, these results need to be confirmed through analysis of individual-level, laboratory-confirmed data. Importantly, however, these findings support the initiation of studies to determine whether treatment for bacterial infections should be considered for children with pneumonia even if they have tested positive for RSV.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001776.
The US National Heart, Lung, and Blood Institute provides information about the respiratory system and about pneumonia
The US Centers for Disease Control and Prevention provides information on all aspects of pneumococcal disease and pneumococcal vaccination, including personal stories and information about RSV infection
The UK National Health Service Choices website provides information about pneumonia (including a personal story) and about pneumococcal diseases
KidsHealth, a website provided by the US-based non-profit Nemours Foundation, includes information on pneumonia and on RSV (in English and Spanish)
MedlinePlus provides links to other resources about pneumonia, RSV infections, and pneumococcal infections (in English and Spanish)
HCUPnet provides aggregated hospitalization data from the State Inpatient Databases used in this study
doi:10.1371/journal.pmed.1001776
PMCID: PMC4285401  PMID: 25562317
10.  A Stronger Post-Publication Culture Is Needed for Better Science 
PLoS Medicine  2014;11(12):e1001772.
Hilda Bastian considers post-publication commenting and the cultural changes that are needed to better capture this intellectual effort.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001772
PMCID: PMC4280106  PMID: 25548904
11.  Artemisinin-Naphthoquine versus Artemether-Lumefantrine for Uncomplicated Malaria in Papua New Guinean Children: An Open-Label Randomized Trial 
PLoS Medicine  2014;11(12):e1001773.
In a randomized controlled trial Tim Davis and colleagues investigate Artemisinin-naphthoquine versus artemether-lumefantrine for the treatment of P. falciparum and P. vivax malaria.
Please see later in the article for the Editors' Summary
Background
Artemisinin combination therapies (ACTs) with broad efficacy are needed where multiple Plasmodium species are transmitted, especially in children, who bear the brunt of infection in endemic areas. In Papua New Guinea (PNG), artemether-lumefantrine is the first-line treatment for uncomplicated malaria, but it has limited efficacy against P. vivax. Artemisinin-naphthoquine should have greater activity in vivax malaria because the elimination of naphthoquine is slower than that of lumefantrine. In this study, the efficacy, tolerability, and safety of these ACTs were assessed in PNG children aged 0.5–5 y.
Methods and Findings
An open-label, randomized, parallel-group trial of artemether-lumefantrine (six doses over 3 d) and artemisinin-naphthoquine (three daily doses) was conducted between 28 March 2011 and 22 April 2013. Parasitologic outcomes were assessed without knowledge of treatment allocation. Primary endpoints were the 42-d P. falciparum PCR-corrected adequate clinical and parasitologic response (ACPR) and the P. vivax PCR-uncorrected 42-d ACPR. Non-inferiority and superiority designs were used for falciparum and vivax malaria, respectively. Because the artemisinin-naphthoquine regimen involved three doses rather than the manufacturer-specified single dose, the first 188 children underwent detailed safety monitoring. Of 2,542 febrile children screened, 267 were randomized, and 186 with falciparum and 47 with vivax malaria completed the 42-d follow-up. Both ACTs were safe and well tolerated. P. falciparum ACPRs were 97.8% and 100.0% in artemether-lumefantrine and artemisinin-naphthoquine-treated patients, respectively (difference 2.2% [95% CI −3.0% to 8.4%] versus −5.0% non-inferiority margin, p = 0.24), and P. vivax ACPRs were 30.0% and 100.0%, respectively (difference 70.0% [95% CI 40.9%–87.2%], p<0.001). Limitations included the exclusion of 11% of randomized patients with sub-threshold parasitemias on confirmatory microscopy and direct observation of only morning artemether-lumefantrine dosing.
Conclusions
Artemisinin-naphthoquine is non-inferior to artemether-lumefantrine in PNG children with falciparum malaria but has greater efficacy against vivax malaria, findings with implications in similar geo-epidemiologic settings within and beyond Oceania.
Trial registration
Australian New Zealand Clinical Trials Registry ACTRN12610000913077
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Malaria is a mosquito-borne parasitic disease that kills more than 600,000 people (mainly young children in sub-Saharan Africa) every year. Plasmodium falciparum causes most of these deaths, but P. vivax is the most common and most widely distributed cause of malaria outside sub-Saharan Africa. Infection with malaria parasites causes recurring flu-like symptoms and must be treated promptly with antimalarial drugs to prevent the development of anemia and potentially fatal damage to the brain and other organs. In the past, malaria was treated with “monotherapies” such as chloroquine, but the parasites quickly developed resistance to many of these inexpensive drugs. The World Health Organization now recommends artemisinin combination therapy (ACT) for first-line treatment of malaria in all regions where there is drug-resistant malaria. In ACT, artemisinin derivatives (fast-acting antimalarial drugs that are rapidly cleared from the body) are used in combination with a slower acting, more slowly eliminated partner drug to prevent reemergence of the original infection and to reduce the chances of the malaria parasites becoming resistant to either drug.
Why Was This Study Done?
Because falciparum and vivax malaria respond differently to antimalarial drugs, wherever there is transmission of both types of malaria but limited facilities for species-specific malaria diagnosis—as in Papua New Guinea—compromises have to be made about which ACT should be used for the treatment of malaria. Thus, Papua New Guinea's national guidelines recommend artemether-lumefantrine, which is effective against the more deadly P. falciparum, for first-line treatment of uncomplicated (mild) malaria even though this ACT is ineffective against the more common P. vivax. In this open-label randomized trial (a study in which participants are randomly assigned to receive different drugs but know which drug they are being given), the researchers ask whether an alternative ACT might be preferable for the treatment of uncomplicated malaria in young children in Papua New Guinea by comparing outcomes after treatment with artemether-lumefantrine versus artemisinin-naphthoquine (an ACT that should be more effective against vivax malaria than artemether-lumefantrine because naphthoquine stays in the body longer than lumefantrine). Specifically, the researchers test the non-inferiority of artemisinin-naphthoquine compared to artemether-lumefantrine for the treatment of falciparum malaria (whether artemisinin-naphthoquine is not worse than artemether-lumefantrine) and the superiority of artemisinin-naphthoquine compared to artemether-lumefantrine for the treatment of vivax malaria (whether artemisinin-naphthoquine is better than artemether-lumefantrine).
What Did the Researchers Do and Find?
The researchers assigned nearly 250 children (aged 0.5 to 5 years) with falciparum malaria, vivax malaria, or both types of malaria to receive six doses of artemether-lumefantrine over three days or three daily doses of artemisinin-naphthoquine. They then followed the children to see how many children in each treatment group and with each type of malaria were free of malaria 42 days after treatment (an “adequate clinical and parasitological response”). Among the patients originally infected with P. falciparum, 97.8% of those treated with artemether-lumefantrine and 100% of those treated with artemisinin-naphthoquine were clear of their original P. falciparum infection (though some had acquired a new P. falciparum infection) 42 days after treatment. By contrast, among the patients infected with P. vivax, 30% of those treated with artemether-lumefantrine and 100% of those treated with artemisinin-naphthoquine were clear of P. vivax infection 42 days after treatment. Both ACTs were safe and well tolerated.
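The falciparum comparison above rests on a difference in cure proportions judged against a −5% non-inferiority margin. A minimal sketch of that calculation follows, using made-up counts rather than the trial's data; note that the simple Wald interval shown degenerates when a cure proportion is exactly 100%, so the trial will have used a more careful interval method.

```python
from math import sqrt

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference in cure proportions p2 - p1.
    Illustrative only; counts passed in below are hypothetical."""
    p1, p2 = x1 / n1, x2 / n2
    d = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# Hypothetical counts: 88/90 cured on the comparator, 96/96 on the new regimen
d, lo, hi = risk_difference_ci(88, 90, 96, 96)
print(f"diff={d:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# Non-inferiority at a -5% margin holds if the lower CI bound exceeds -0.05
print(lo > -0.05)
```

In the trial itself the entire confidence interval for the difference (−3.0% to 8.4%) lay above the −5% margin, which is what establishes non-inferiority.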
What Do These Findings Mean?
These findings indicate that artemisinin-naphthoquine was non-inferior to artemether-lumefantrine for the treatment of uncomplicated falciparum malaria among young children in Papua New Guinea and had greater efficacy than artemether-lumefantrine against vivax malaria. The accuracy of these findings may be limited by several aspects of the study design. For example, not all the artemether-lumefantrine doses were directly observed, so some children may not have received the full treatment course. Moreover, because all the study participants lived in coastal communities in Papua New Guinea where malaria is highly endemic, treatment responses among children living in areas with lower levels of malaria transmission might be different. Nevertheless, these findings suggest that artemisinin-naphthoquine should be considered alongside other ACTs for the treatment of uncomplicated malaria in regions where there is transmission of multiple Plasmodium species and that artemisinin-naphthoquine may be better than artemether-lumefantrine for the treatment of uncomplicated malaria in young children in regions where P. vivax predominates.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001773.
Information is available from the World Health Organization on malaria (in several languages); the World Malaria Report 2013 provides details on the current global malaria situation, including information on malaria in Papua New Guinea; the World Health Organization's Guidelines for the Treatment of Malaria is available
The US Centers for Disease Control and Prevention provides information on malaria (in English and Spanish), including personal stories about malaria
Information is available from the Roll Back Malaria Partnership on the global control of malaria, including information about malaria in Papua New Guinea, malaria in children, and ACTs
The Malaria Vaccine Initiative has a fact sheet on Plasmodium vivax malaria
MedlinePlus provides links to additional information on malaria (in English and Spanish)
More information about this trial is available
doi:10.1371/journal.pmed.1001773
PMCID: PMC4280121  PMID: 25549086
12.  Efficacy of Neonatal HBV Vaccination on Liver Cancer and Other Liver Diseases over 30-Year Follow-up of the Qidong Hepatitis B Intervention Study: A Cluster Randomized Controlled Trial 
PLoS Medicine  2014;11(12):e1001774.
In a 30-year follow-up of the Qidong Hepatitis B Intervention Study, Yawei Zhang and colleagues examine the effects of neonatal vaccination on liver diseases.
Please see later in the article for the Editors' Summary
Background
Neonatal hepatitis B vaccination has been implemented worldwide to prevent hepatitis B virus (HBV) infections. Its long-term protective efficacy on primary liver cancer (PLC) and other liver diseases has not been fully examined.
Methods and Findings
The Qidong Hepatitis B Intervention Study, a population-based, cluster randomized, controlled trial conducted between 1985 and 1990 in Qidong, China, included 39,292 newborns randomly assigned to the vaccination group (38,366 of whom completed the HBV vaccination series) and 34,441 newborns randomly assigned to the control group, whose participants received neither a vaccine nor a placebo. However, 23,368 (67.8%) participants in the control group received catch-up vaccination at age 10–14 years. By December 2013, a total of 3,895 (10.2%) participants in the vaccination group and 3,898 (11.3%) in the control group had been lost to follow-up. Information on PLC incidence and liver disease mortality was collected through linkage of all remaining cohort members to a well-established population-based tumor registry until December 31, 2013. Two cross-sectional surveys of HBV surface antigen (HBsAg) seroprevalence were conducted in 1996–2000 and 2008–2012. The participation rates of the two surveys were 57.5% (21,770) and 50.7% (17,204) in the vaccination group and 36.3% (12,184) and 58.6% (17,395) in the control group, respectively. Using intention-to-treat analysis, we found that the incidence rate of PLC and the mortality rates of severe end-stage liver diseases and infant fulminant hepatitis were significantly lower in the vaccination group than in the control group, with efficacies of 84% (95% CI 23%–97%), 70% (95% CI 15%–89%), and 69% (95% CI 34%–85%), respectively. The estimated efficacy of catch-up vaccination on HBsAg seroprevalence in early adulthood was 21% (95% CI 10%–30%), substantially weaker than that of neonatal vaccination (72%, 95% CI 68%–75%). Receiving a booster at age 10–14 years decreased HBsAg seroprevalence among participants born to HBsAg-positive mothers (hazard ratio [HR] = 0.68, 95% CI 0.47–0.97).
Limitations to consider in interpreting the study results include the small number of individuals with PLC, participants lost to follow-up, and the large proportion of participants who did not provide serum samples at follow-up.
Conclusions
Neonatal HBV vaccination was found to significantly decrease HBsAg seroprevalence in childhood through young adulthood and subsequently reduce the risk of PLC and other liver diseases in young adults in rural China. The findings underscore the importance of neonatal HBV vaccination. Our results also suggest that a booster in adolescence should be considered for individuals who were born to HBsAg-positive mothers and have completed the neonatal HBV vaccination series.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Hepatitis B is a life-threatening liver infection caused by the hepatitis B virus (HBV). HBV, which is transmitted through contact with the blood or other bodily fluids of an infected person, can cause both acute (short-term) and chronic (long-term) liver infections. Acute infections rarely cause any symptoms and more than 90% of adults who become infected with HBV (usually through sexual intercourse with an infected partner or through the use of contaminated needles) are virus-free within 6 months. However, in sub-Saharan Africa, East Asia, and other regions where HBV infection is common, HBV is usually transmitted from mother to child at birth or between individuals during early childhood and, unfortunately, most infants who are infected with HBV during the first year of life and many children who are infected before the age of 6 years develop a chronic HBV infection. Such infections can cause liver cancer, liver cirrhosis (scarring of the liver), and other fatal liver diseases. In addition, HBV infection around the time of birth can cause infant fulminant hepatitis, a rare but frequently fatal condition.
Why Was This Study Done?
HBV infections kill about 780,000 people worldwide annually but can be prevented by neonatal vaccination—immunization against HBV at birth. A vaccine against HBV became available in 1982, and many countries now include HBV vaccination at birth followed by additional vaccine doses during early childhood in their national vaccination programs. But, although HBV vaccination has greatly reduced the rate of chronic HBV infection, the protective efficacy of neonatal HBV vaccination against liver diseases has not been fully examined. Here, the researchers investigate how well neonatal HBV vaccination protects against primary liver cancer and other liver diseases by undertaking a 30-year follow-up of the Qidong Hepatitis B Intervention Study (QHBIS). This cluster randomized controlled trial of neonatal HBV vaccination was conducted between 1985 and 1990 in Qidong County, a rural area in China with a high incidence of HBV-related primary liver cancer and other liver diseases. A cluster randomized controlled trial compares outcomes in groups of people (towns in this study) chosen at random to receive an intervention or a control treatment (here, vaccination or no vaccination; this study design was ethically acceptable during the 1980s when HBV vaccination was unavailable in rural China but would be unethical nowadays).
What Did the Researchers Do and Find?
The QHBIS assigned nearly 80,000 newborns to receive either a full course of HBV vaccinations (the vaccination group) or no vaccination (the control group); two-thirds of the control group participants received a catch-up vaccination at age 10–14 years. The researchers obtained data on how many trial participants developed primary liver cancer or died from a liver disease during the follow-up period from a population-based tumor registry. They also obtained information on HBsAg seroprevalence—the presence of HBsAg (an HBV surface protein) in the blood of the participants, an indicator of current HBV infection—from surveys undertaken in 1996–2000 and 2008–2012. The researchers estimate that the protective efficacy of vaccination was 84% for primary liver cancer (vaccination reduced the incidence of liver cancer by 84%), 70% for death from liver diseases, and 69% for the incidence of infant fulminant hepatitis. Overall, the efficacy of catch-up vaccination on HBsAg seroprevalence in early adulthood was weak compared with neonatal vaccination (21% versus 72%). Notably, receiving a booster vaccination at age 10–14 years decreased HBsAg seroprevalence among participants who were born to HBsAg-positive mothers.
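The efficacy figures quoted here are one minus the incidence-rate ratio between the two trial arms. A minimal sketch of that arithmetic, with case counts and person-years that are hypothetical (chosen only to reproduce an 84% figure, not taken from the trial):

```python
def vaccine_efficacy(cases_vacc, py_vacc, cases_ctrl, py_ctrl):
    """Protective efficacy as 1 minus the incidence-rate ratio
    (vaccinated vs. control). Inputs are hypothetical, not trial data."""
    rate_v = cases_vacc / py_vacc   # cases per person-year, vaccinated
    rate_c = cases_ctrl / py_ctrl   # cases per person-year, control
    return 1.0 - rate_v / rate_c

# Hypothetical: 4 cases vs. 25 cases over equal person-time in each arm
e = vaccine_efficacy(4, 1_000_000, 25, 1_000_000)
print(f"{e:.0%}")  # 84%
```

An efficacy of 84% thus means the vaccinated group's incidence rate was 16% of the control group's rate over the follow-up period.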
What Do These Findings Mean?
The small number of cases of primary liver cancer and other liver diseases observed during the 30-year follow-up, the participants lost to follow-up, and the incomplete seroprevalence data all limit the accuracy of these findings. Nevertheless, these findings indicate that neonatal HBV vaccination greatly reduced HBsAg seroprevalence (an indicator of current HBV infection) in childhood and young adulthood and subsequently reduced the risk of liver cancer and other liver diseases in young adults. These findings therefore support the importance of neonatal HBV vaccination. In addition, they suggest that booster vaccination during adolescence might consolidate the efficacy of neonatal vaccination among individuals who were born to HBsAg-positive mothers, a suggestion that needs to be confirmed in randomized controlled trials before booster vaccines are introduced into vaccination programs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001774.
The World Health Organization provides a fact sheet about hepatitis B (available in several languages) and information about hepatitis B vaccination
The World Hepatitis Alliance (an international not-for-profit, non-governmental organization) provides information about viral hepatitis, including some personal stories about hepatitis B from Bangladesh, Pakistan, the Philippines, and Malawi
The UK National Health Service Choices website provides information about hepatitis B
The not-for-profit British Liver Trust provides information about hepatitis B, including Hepatitis B: PATH B, an interactive educational resource designed to improve the lives of people living with chronic hepatitis B
MedlinePlus provides links to other resources about hepatitis B (in English and Spanish)
Information about the Qidong Hepatitis B Intervention Study is available
The Chinese Center for Disease Control and Prevention provides links about hepatitis B prevention (in Chinese)
doi:10.1371/journal.pmed.1001774
PMCID: PMC4280122  PMID: 25549238
13.  Genomic Predictors for Recurrence Patterns of Hepatocellular Carcinoma: Model Derivation and Validation 
PLoS Medicine  2014;11(12):e1001770.
In this study, Lee and colleagues develop a genomic predictor that can identify patients at high risk for late recurrence of hepatocellular carcinoma (HCC) and provide new biomarkers for risk stratification.
Background
Typically observed more than 2 y after surgical resection, late recurrence is a major challenge in the management of hepatocellular carcinoma (HCC). We aimed to develop a genomic predictor that can identify patients at high risk for late recurrence and assess its clinical implications.
Methods and Findings
Systematic analysis of gene expression data from human liver undergoing hepatic injury and regeneration revealed a 233-gene signature that was significantly associated with late recurrence of HCC. Using this signature, we developed a prognostic predictor that can identify patients at high risk of late recurrence, and tested and validated the robustness of the predictor in patients (n = 396) who underwent surgery between 1990 and 2011 at four centers (210 recurrences during a median of 3.7 y of follow-up). In multivariate analysis, this signature was the strongest risk factor for late recurrence (hazard ratio, 2.2; 95% confidence interval, 1.3–3.7; p = 0.002). In contrast, our previously developed tumor-derived 65-gene risk score was significantly associated with early recurrence (p = 0.005) but not with late recurrence (p = 0.7). In multivariate analysis, the 65-gene risk score was the strongest risk factor for very early recurrence (<1 y after surgical resection) (hazard ratio, 1.7; 95% confidence interval, 1.1–2.6; p = 0.01). The potential significance of STAT3 activation in late recurrence was predicted by gene network analysis and subsequently validated. We also developed and validated 4- and 20-gene predictors from the full 233-gene predictor. The main limitation of the study is that most of the patients in our study were hepatitis B virus–positive. Further investigations are needed to test our prediction models in patients with different etiologies of HCC, such as hepatitis C virus.
Conclusions
Two independently developed predictors accurately reflected the differences between early and late recurrence of HCC at the molecular level and provided new biomarkers for risk stratification.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Primary liver cancer—a tumor that starts when a liver cell acquires genetic changes that allow it to grow uncontrollably—is the second-leading cause of cancer-related deaths worldwide, killing more than 600,000 people annually. If hepatocellular cancer (HCC; the most common type of liver cancer) is diagnosed in its early stages, it can be treated by surgically removing part of the liver (resection), by liver transplantation, or by local ablation, which uses an electric current to destroy the cancer cells. Unfortunately, the symptoms of HCC, which include weight loss, tiredness, and jaundice (yellowing of the skin and eyes), are vague and rarely appear until the cancer has spread throughout the liver. Consequently, HCC is rarely diagnosed before the cancer is advanced and untreatable, and has a poor prognosis (likely outcome)—fewer than 5% of patients survive for five or more years after diagnosis. The exact cause of HCC is unclear, but chronic liver (hepatic) injury and inflammation (caused, for example, by infection with hepatitis B virus [HBV] or by alcohol abuse) promote tumor development.
Why Was This Study Done?
Even when it is diagnosed early, HCC has a poor prognosis because it often recurs. Patients treated for HCC can experience two distinct types of tumor recurrence. Early recurrence, which usually happens within the first two years after surgery, arises from primary cancer cells that were left behind in the surrounding liver during surgery. Late recurrence, which typically happens more than two years after surgery, involves the development of completely new tumors and seems to be the result of chronic liver damage. Because early and late recurrence have different clinical courses, it would be useful to be able to predict which patients are at high risk of which type of recurrence. Given that injury, inflammation, and regeneration seem to prime the liver for HCC development, might the gene expression patterns associated with these conditions serve as predictive markers for the identification of patients at risk of late recurrence of HCC? Here, the researchers develop a genomic predictor for the late recurrence of HCC by examining gene expression patterns in tissue samples from livers that were undergoing injury and regeneration.
What Did the Researchers Do and Find?
By comparing gene expression data obtained from liver biopsies taken before and after liver transplantation or resection and recorded in the US National Center for Biotechnology Information Gene Expression Omnibus database, the researchers identified 233 genes whose expression in liver differed before and after liver injury (the hepatic injury and regeneration, or HIR, signature). Statistical analyses indicate that the expression of the HIR signature in archived tissue samples was significantly associated with late recurrence of HCC in three independent groups of patients, but not with early recurrence (a significant association between two variables is one that is unlikely to have arisen by chance). By contrast, a tumor-derived 65-gene signature previously developed by the researchers was significantly associated with early recurrence but not with late recurrence. Notably, as few as four genes from the HIR signature were sufficient to construct a reliable predictor for late recurrence of HCC. Finally, the researchers report that many of the genes in the HIR signature encode proteins involved in inflammation and cell death, but that others encode proteins involved in cellular growth and proliferation such as STAT3, a protein with a well-known role in liver regeneration.
What Do These Findings Mean?
These findings identify a gene expression signature that was significantly associated with late recurrence of HCC in three independent groups of patients. Because most of these patients were infected with HBV, the ability of the HIR signature to predict late recurrence of HCC may be limited to HBV-related HCC and may not be generalizable to HCC related to other causes. Moreover, the predictive ability of the HIR signature needs to be tested in a prospective study in which samples are taken and analyzed at baseline and patients are followed to see whether their HCC recurs; the current retrospective study analyzed stored tissue samples. Importantly, however, the HIR signature associated with late recurrence and the 65-gene signature associated with early recurrence provide new insights into the biological differences between late and early recurrence of HCC at the molecular level. Knowing about these differences may lead to new treatments for HCC and may help clinicians choose the most appropriate treatments for their patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001770.
The US National Cancer Institute provides information about all aspects of cancer, including detailed information for patients and professionals about primary liver cancer (in English and Spanish)
The American Cancer Society also provides information about liver cancer (including information on support programs and services; available in several languages)
The UK National Health Service Choices website provides information about primary liver cancer (including a video about coping with cancer)
Cancer Research UK (a not-for-profit organization) also provides detailed information about primary liver cancer (including information about living with primary liver cancer)
MD Anderson Cancer Center provides information about symptoms, diagnosis, treatment, and prevention of primary liver cancer
MedlinePlus provides links to further resources about liver cancer (in English and Spanish)
doi:10.1371/journal.pmed.1001770
PMCID: PMC4275163  PMID: 25536056
14.  World Health Organization Guidelines for Management of Acute Stress, PTSD, and Bereavement: Key Challenges on the Road Ahead 
PLoS Medicine  2014;11(12):e1001769.
Wietse Tol and colleagues discuss some of the key challenges for implementation of new WHO guidelines for stress-related mental health disorders in low- and middle-income countries.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001769
PMCID: PMC4267806  PMID: 25514024
15.  Home-Based Versus Mobile Clinic HIV Testing and Counseling in Rural Lesotho: A Cluster-Randomized Trial 
PLoS Medicine  2014;11(12):e1001768.
Niklaus Labhardt and colleagues investigate how different HIV testing and counseling strategies, based on home visits or mobile clinics, reach different populations in a rural African setting.
Please see later in the article for the Editors' Summary
Background
The success of HIV programs relies on widely accessible HIV testing and counseling (HTC) services at health facilities as well as in the community. Home-based HTC (HB-HTC) is a popular community-based approach to reach persons who do not test at health facilities. Data comparing HB-HTC to other community-based HTC approaches are very limited. This trial compares HB-HTC to mobile clinic HTC (MC-HTC).
Methods and Findings
The trial was powered to test the hypothesis of higher HTC uptake in HB-HTC campaigns than in MC-HTC campaigns. Twelve clusters were randomly allocated to HB-HTC or MC-HTC. The six clusters in the HB-HTC group received 30 1-d multi-disease campaigns (five villages per cluster) that delivered services by going door-to-door, whereas the six clusters in MC-HTC group received campaigns involving community gatherings in the 30 villages with subsequent service provision in mobile clinics. Time allocation and human resources were standardized and equal in both groups. All individuals accessing the campaigns with unknown HIV status or whose last HIV test was >12 wk ago and was negative were eligible. All outcomes were assessed at the individual level. Statistical analysis used multivariable logistic regression. Odds ratios and p-values were adjusted for gender, age, and cluster effect.
Out of 3,197 participants from the 12 clusters, 2,563 (80.2%) were eligible (HB-HTC: 1,171; MC-HTC: 1,392). The results for the primary outcomes were as follows. Overall HTC uptake was higher in the HB-HTC group than in the MC-HTC group (92.5% versus 86.7%; adjusted odds ratio [aOR]: 2.06; 95% CI: 1.18–3.60; p = 0.011). Among adolescents and adults ≥12 y, HTC uptake did not differ significantly between the two groups; however, in children <12 y, HTC uptake was higher in the HB-HTC arm (87.5% versus 58.7%; aOR: 4.91; 95% CI: 2.41–10.0; p<0.001). Out of those who took up HTC, 114 (4.9%) tested HIV-positive, 39 (3.6%) in the HB-HTC arm and 75 (6.2%) in the MC-HTC arm (aOR: 0.64; 95% CI: 0.48–0.86; p = 0.002). Ten (25.6%) and 19 (25.3%) individuals in the HB-HTC and in the MC-HTC arms, respectively, linked to HIV care within 1 mo after testing positive. Findings for secondary outcomes were as follows: HB-HTC reached more first-time testers, particularly among adolescents and young adults, and had a higher proportion of men among participants. However, after adjusting for clustering, the difference in male participation was no longer significant.
Age distribution among participants and immunological and clinical stages among persons newly diagnosed HIV-positive did not differ significantly between the two groups. Major study limitations included the campaigns' restriction to weekdays and a relatively low HIV prevalence among participants, the latter indicating that both arms may have reached an underexposed population.
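For intuition, a crude (unadjusted) odds ratio for HTC uptake can be reconstructed from the percentages reported above using the Woolf log-scale method. This is a sketch only: the counts below are approximate reconstructions from the reported uptake percentages, and the trial's aOR of 2.06 additionally adjusts for gender, age, and cluster effect, so the crude figure differs.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf (log-scale) 95% CI for a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Approximate counts reconstructed from the reported uptake percentages:
# HB-HTC ~1,083/1,171 tested; MC-HTC ~1,207/1,392 tested.
or_, lo, hi = odds_ratio_ci(1083, 1171 - 1083, 1207, 1392 - 1207)
```

The crude OR comes out near 1.9 with a CI of roughly 1.4–2.5, bracketing the adjusted estimate of 2.06.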
Conclusions
This study demonstrates that both HB-HTC and MC-HTC can achieve high uptake of HTC. The choice between these two community-based strategies will depend on the objective of the activity: HB-HTC was better at reaching children, individuals who had never tested before, and men, while MC-HTC detected more new HIV infections. The low rate of linkage to care after a positive HIV test warrants future consideration of combining community-based HTC approaches with strategies to improve linkage to care for persons who test HIV-positive.
Trial registration
ClinicalTrials.gov NCT01459120
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Annually, about 2.3 million people become newly infected with HIV, the virus that causes AIDS by gradually destroying CD4 cells and other immune system cells, thereby leaving HIV-infected individuals susceptible to other serious infections. HIV can be transmitted through unprotected sex with an infected partner, from an HIV-positive mother to her unborn child, or through the injection of drugs with shared needles. Infection with HIV is usually diagnosed by looking for antibodies to HIV in the blood or saliva. After diagnosis, the progression of HIV infection is monitored by regularly counting the number of CD4 cells in the blood. Initiation of antiretroviral therapy (ART)—a combination of drugs that keeps HIV replication in check but that does not cure the infection—is recommended when an individual's CD4 count falls below 500 cells/µl or when he or she develops signs of advanced or severe disease, such as unusual infections.
Why Was This Study Done?
To control HIV/AIDS, HIV transmission needs to be reduced, and ART delivery needs to be increased. In settings of high HIV prevalence, universal coverage of HIV testing and counseling (HTC) is essential if these goals are to be met. Unfortunately, many people refuse “facility-based” HTC (HTC delivered at health care facilities) because they fear stigmatization and discrimination. Moreover, many people in resource-limited settings rarely visit health care facilities. Community-based HTC may be one way to increase the uptake of HTC, particularly among populations that are hard to reach, such as men and first-time testers, but which form of community-based HTC will be most effective? In this cluster-randomized trial, the researchers ask whether home-based HTC (HB-HTC)—community-based HTC in which health care workers go door-to-door to offer HTC to people in their own home—results in a higher uptake of HTC than HTC delivered through community gatherings and mobile clinics (MC-HTC) in two rural areas in Lesotho. Nearly a quarter of adults are HIV-positive in Lesotho, but only 61% of people who need ART currently receive treatment. A cluster-randomized trial compares outcomes in groups (clusters) of people chosen at random to receive different interventions.
What Did the Researchers Do and Find?
The researchers allocated 12 clusters, each comprising a health center and its catchment area, to the HB-HTC or MC-HTC intervention. In the HB-HTC arm (1,171 participants), HTC teams going door-to-door delivered a multi-disease campaign that included HTC to five villages in each cluster. In the MC-HTC arm (1,392 participants), the multi-disease campaign was delivered at community gatherings with subsequent service provision in mobile clinics. Overall, HTC uptake was higher in the HB-HTC arm than in the MC-HTC arm (92.5% and 86.7% uptake, respectively). Among participants aged ≥12 years, there was no significant difference in HTC uptake between the arms, whereas among children aged <12 years, HTC uptake was significantly higher in the HB-HTC arm than in the MC-HTC arm (87.5% versus 58.7%; a significant difference is a difference unlikely to have happened by chance). Among individuals who took up HTC, 3.6% and 6.2% tested positive for HIV in the HB-HTC arm and MC-HTC arm, respectively. In both arms, only a quarter of individuals who tested positive accessed HIV care within a month of their positive test result. Finally, HB-HTC reached more first-time testers (particularly among adolescents) and tended to reach more men than MC-HTC.
What Do These Findings Mean?
These findings suggest that, in rural Lesotho, both HB-HTC and MC-HTC delivered as part of a multi-disease campaign can achieve a high uptake of HTC. Various aspects of the trial design (for example, the small number of clusters) may limit the accuracy of the findings reported here. Notably, however, these findings suggest that the choice between HB-HTC and MC-HTC should be guided by the objective of the HTC intervention in specific settings. Where equity of access is of concern and where increased HTC coverage, particularly among groups in which HTC coverage is generally poor (including men, first-time testers, and children), is paramount, HB-HTC may be the preferred option. By contrast, the MC-HTC approach may be more appropriate in settings where the detection of new HIV infections is the major goal. Finally, and importantly, the findings of this trial highlight the need for further research into strategies designed to improve the linkage between HIV testing and enrollment into care.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001768.
The World Health Organization provides information on all aspects of HIV/AIDS, including information on HIV counseling and testing (in several languages)
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS and summaries of recent research findings on HIV care and treatment
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on the global HIV/AIDS epidemic, on HIV testing, and on HIV/AIDS in Lesotho
The UK National Health Service Choices website provides information (including personal stories) about HIV/AIDS
The “UNAIDS Report on the Global AIDS Epidemic 2013” provides up-to-date information about the AIDS epidemic and efforts to halt it
Stories about living with HIV/AIDS are available through Avert and through healthtalk.org
More information about this trial is available
doi:10.1371/journal.pmed.1001768
PMCID: PMC4267810  PMID: 25513807
16.  From Joint Thinking to Joint Action: A Call to Action on Improving Water, Sanitation, and Hygiene for Maternal and Newborn Health 
PLoS Medicine  2014;11(12):e1001771.
Yael Velleman and colleagues argue for stronger integration between the water, sanitation, and hygiene (WASH) and maternal and newborn health sectors.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001771
PMCID: PMC4264687  PMID: 25502229
17.  Alcohol Harm Reduction: Corporate Capture of a Key Concept 
PLoS Medicine  2014;11(12):e1001767.
Jim McCambridge and colleagues reflect on how the concept of harm reduction may be being usurped by the alcohol industry.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001767
PMCID: PMC4260782  PMID: 25490717
18.  Impact of Replacing Smear Microscopy with Xpert MTB/RIF for Diagnosing Tuberculosis in Brazil: A Stepped-Wedge Cluster-Randomized Trial 
PLoS Medicine  2014;11(12):e1001766.
Betina Durovni and colleagues evaluated whether implementation of Xpert MTB/RIF increased the notification rate of laboratory-confirmed pulmonary tuberculosis and reduced the time to tuberculosis treatment initiation in 14 Brazilian primary care laboratories.
Please see later in the article for the Editors' Summary
Background
Abundant evidence on Xpert MTB/RIF accuracy for diagnosing tuberculosis (TB) and rifampicin resistance has been produced, yet there are few data on the population benefit of its programmatic use. We assessed whether the implementation of Xpert MTB/RIF in routine conditions would (1) increase the notification rate of laboratory-confirmed pulmonary TB to the national notification system and (2) reduce the time to TB treatment initiation (primary endpoints).
Methods and Findings
We conducted a stepped-wedge cluster-randomized trial from 4 February to 4 October 2012 in 14 primary care laboratories in two Brazilian cities. Diagnostic specimens were included for 11,705 baseline (smear microscopy) and 12,522 intervention (Xpert MTB/RIF) patients presumed to have TB. Single-sputum-sample Xpert MTB/RIF replaced two-sputum-sample smear microscopy for routine diagnosis of pulmonary TB. In total, 1,137 (9.7%) tests in the baseline arm and 1,777 (14.2%) in the intervention arm were positive (p<0.001), resulting in a 59% increase in the bacteriologically confirmed notification rate (95% CI = 31%, 88%). However, the overall notification rate did not increase significantly (15%, 95% CI = −6%, 37%), and we observed no change in the notification rate for those without a test result (−3%, 95% CI = −37%, 30%). Median time to treatment decreased from 11.4 d (interquartile range [IQR] = 8.5–14.5) to 8.1 d (IQR = 5.4–9.3) (p = 0.04), although not among confirmed cases (median 7.5 [IQR = 4.9–10.0] versus 7.3 [IQR = 3.4–9.0], p = 0.51). Prevalence of rifampicin resistance detected by Xpert was 3.3% (95% CI = 2.4%, 4.3%) among new patients and 7.4% (95% CI = 4.3%, 11.7%) among retreatment patients, with a 98% (95% CI = 87%, 99%) positive predictive value compared to phenotypic drug susceptibility testing. Missing data in the information systems may have biased our primary endpoints. However, sensitivity analyses assessing the effects of missing data did not change our results.
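A positive predictive value such as the 98% figure above is just the fraction of Xpert rifampicin-resistance calls confirmed by phenotypic testing, and an interval estimate can be attached with the Wilson score method. The counts below are hypothetical, chosen only to illustrate the calculation; the trial's actual resistance-call counts are not given above.

```python
import math

def ppv_wilson(tp, fp, z=1.96):
    """Positive predictive value with a Wilson score 95% CI.
    tp = calls confirmed by the reference test, fp = calls not confirmed."""
    n = tp + fp
    p = tp / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, center - half, center + half

# Hypothetical counts: 59 of 60 Xpert resistance calls confirmed by
# phenotypic drug susceptibility testing.
ppv, lo, hi = ppv_wilson(tp=59, fp=1)
```

With these illustrative counts the PPV is about 98% with a CI of roughly 91%–100%, comparable in shape to the asymmetric 87%–99% interval reported.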
Conclusions
Replacing smear microscopy with Xpert MTB/RIF in Brazil increased confirmation of pulmonary TB. An additional benefit was the accurate detection of rifampicin resistance. However, no increase in overall notification rates was observed, possibly because of high rates of empirical TB treatment.
Trial registration
ClinicalTrials.gov NCT01363765
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Tuberculosis—a contagious bacterial disease that usually infects the lungs—is a global public health problem. Each year, about 8.6 million people develop active tuberculosis and at least 1.3 million people die from the disease, mainly in resource-limited countries. Mycobacterium tuberculosis, the bacterium that causes tuberculosis, is spread in airborne droplets when people with active disease cough or sneeze. The characteristic symptoms of tuberculosis include cough, weight loss, and night sweats. Diagnostic tests for tuberculosis include sputum smear microscopy (microscopic analysis of mucus coughed up from the lungs), the growth (culture) of M. tuberculosis from sputum samples, and molecular tests (for example, the Xpert MTB/RIF test) that rapidly and accurately detect M. tuberculosis in sputum and determine its antibiotic resistance. Tuberculosis can be cured by taking several antibiotics daily for at least six months, although the emergence of multidrug-resistant tuberculosis is making the disease increasingly hard to treat.
Why Was This Study Done?
Quick, accurate diagnosis of active tuberculosis is essential to reduce the global tuberculosis burden, but in most high-burden settings diagnosis relies on sputum smear analysis, which fails to identify many infected people. Mycobacterial culture correctly identifies more infected people but is slow, costly, and rarely available in resource-limited settings. In late 2010, therefore, the World Health Organization recommended the routine use of the Xpert MTB/RIF assay (Xpert) for tuberculosis diagnosis, and several resource-limited countries are currently scaling up the use of Xpert in their national tuberculosis control programs. However, although Xpert works well in ideal conditions, little is known about its performance in routine (real-life) settings. In this pragmatic stepped-wedge cluster-randomized trial, the researchers assess the impact of replacing smear microscopy with Xpert for the diagnosis of tuberculosis in Brazil, an upper-middle-income country with a high tuberculosis burden. A pragmatic trial asks whether an intervention works under real-life conditions; a stepped-wedge cluster-randomized trial sequentially and randomly rolls out an intervention to groups (clusters) of people.
What Did the Researchers Do and Find?
The researchers randomly assigned 14 tuberculosis diagnosis laboratories in two cities to switch at different times from smear microscopy to Xpert for tuberculosis diagnosis. Specifically, at the start of the eight-month trial, all the laboratories used smear microscopy for tuberculosis diagnosis. At the end of each month, two laboratories switched to using Xpert, so that in the final month of the trial, all the laboratories were using Xpert. During the trial, 11,705 samples from patients with symptoms consistent with tuberculosis were examined using smear microscopy (baseline arm), and 12,522 samples were examined using Xpert (intervention arm). The researchers obtained the results of these tests from a database of all the diagnostic tests ordered in the Brazilian public laboratory system, and they obtained data on tuberculosis notifications during the trial period from the national notification system. In total, 9.7% and 14.2% of the tests in the baseline and intervention arm, respectively, were positive, and the laboratory-confirmed tuberculosis notification rate was 1.59 times higher in the Xpert arm than in the smear microscopy arm. However, the overall notification rate (which included people who began treatment on the basis of symptoms alone) did not increase during the trial. The time to treatment (the time between the laboratory test date and the notification date, when treatment usually starts in Brazil) was about 11 days and eight days in the smear microscopy and Xpert arms, respectively.
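The rollout logic described above can be sketched as a small schedule generator: clusters are placed in random order, and a fixed number switch from the comparator to the intervention at the end of each month. Lab names and the seed are hypothetical; the actual trial randomization procedure is not reproduced here.

```python
import random

def stepped_wedge_schedule(labs, per_step, months, seed=42):
    """Randomly order clusters, then switch `per_step` of them to the
    intervention at the end of each month (stepped-wedge rollout).
    Returns {month: set of clusters on the intervention that month}."""
    order = list(labs)
    random.Random(seed).shuffle(order)  # random switch order
    schedule = {}
    for month in range(1, months + 1):
        n_switched = min(per_step * (month - 1), len(order))
        schedule[month] = set(order[:n_switched])  # clusters using Xpert
    return schedule

labs = [f"lab{i:02d}" for i in range(1, 15)]  # 14 hypothetical laboratories
sched = stepped_wedge_schedule(labs, per_step=2, months=8)
```

In month 1 all 14 labs use smear microscopy; by month 8 all 14 have switched to Xpert, matching the design described above.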
What Do These Findings Mean?
The findings indicate that, in a setting where laboratory diagnosis for tuberculosis was largely restricted to sputum smear examination, the implementation of Xpert increased the rates of laboratory-confirmed pulmonary (lung) tuberculosis notifications and reduced the time to treatment initiation, two endpoints of public health relevance. However, implementation of Xpert did not increase the overall notification rate of pulmonary tuberculosis (probably because of the high rate of empiric tuberculosis treatment in Brazil), although it did facilitate accurate and rapid detection of rifampicin resistance. The accuracy of these findings may be limited by certain aspects of the trial design, and further studies are needed to evaluate the possible effects of Xpert beyond diagnosis and the time to treatment initiation. Nevertheless, these findings suggest that replacing smear microscopy with Xpert has the potential to increase the confirmation (but not detection) of pulmonary tuberculosis and to reduce the time to treatment initiation at the population level in Brazil and other resource-limited countries.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001766.
The World Health Organization (WHO) provides information (in several languages) on tuberculosis, on tuberculosis diagnostics, and on the rollout of Xpert; further information about WHO's endorsement of Xpert is included in a Strategic and Technical Advisory Group for Tuberculosis report; the “Global Tuberculosis Report 2013” provides information about tuberculosis around the world, including Brazil
The Stop TB Partnership is working towards tuberculosis elimination and provides patient stories about tuberculosis (in English and Spanish); the Tuberculosis Vaccine Initiative (a not-for-profit organization) also provides personal stories about tuberculosis
The US Centers for Disease Control and Prevention provides information about tuberculosis and its diagnosis (in English and Spanish)
The US National Institute of Allergy and Infectious Diseases also has detailed information on all aspects of tuberculosis
More information about this trial is available
doi:10.1371/journal.pmed.1001766
PMCID: PMC4260794  PMID: 25490549
19.  Metabolic Signatures of Adiposity in Young Adults: Mendelian Randomization Analysis and Effects of Weight Change 
PLoS Medicine  2014;11(12):e1001765.
In this study, Wurtz and colleagues investigated to what extent elevated body mass index (BMI) within the normal weight range has causal influences on the detailed systemic metabolite profile in early adulthood using Mendelian randomization analysis.
Please see later in the article for the Editors' Summary
Background
Increased adiposity is linked with higher risk for cardiometabolic diseases. We aimed to determine to what extent elevated body mass index (BMI) within the normal weight range has causal effects on the detailed systemic metabolite profile in early adulthood.
Methods and Findings
We used Mendelian randomization to estimate causal effects of BMI on 82 metabolic measures in 12,664 adolescents and young adults from four population-based cohorts in Finland (mean age 26 y, range 16–39 y; 51% women; mean ± standard deviation BMI 24±4 kg/m2). Circulating metabolites were quantified by high-throughput nuclear magnetic resonance metabolomics and biochemical assays. In cross-sectional analyses, elevated BMI was adversely associated with cardiometabolic risk markers throughout the systemic metabolite profile, including lipoprotein subclasses, fatty acid composition, amino acids, inflammatory markers, and various hormones (p<0.0005 for 68 measures). Metabolite associations with BMI were generally stronger for men than for women (median 136%, interquartile range 125%–183%). A gene score for predisposition to elevated BMI, composed of 32 established genetic correlates, was used as the instrument to assess causality. Causal effects of elevated BMI closely matched observational estimates (correspondence 87%±3%; R2 = 0.89), suggesting causative influences of adiposity on the levels of numerous metabolites (p<0.0005 for 24 measures), including lipoprotein lipid subclasses and particle size, branched-chain and aromatic amino acids, and inflammation-related glycoprotein acetyls. Causal analyses of certain metabolites, and of potential sex differences, will require greater statistical power. Metabolite changes associated with change in BMI during 6 y of follow-up were examined for 1,488 individuals. Change in BMI was accompanied by widespread metabolite changes, which had an association pattern similar to that of the cross-sectional observations, yet with greater metabolic effects (correspondence 160%±2%; R2 = 0.92).
Conclusions
Mendelian randomization indicates causal adverse effects of increased adiposity with multiple cardiometabolic risk markers across the metabolite profile in adolescents and young adults within the non-obese weight range. Consistent with the causal influences of adiposity, weight changes were paralleled by extensive metabolic changes, suggesting a broadly modifiable systemic metabolite profile in early adulthood.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Adiposity—having excessive body fat—is a growing global threat to public health. Body mass index (BMI, calculated by dividing a person's weight in kilograms by their height in meters squared) is a coarse indicator of excess body weight, but the measure is useful in large population studies. Compared to people with a lean body weight (a BMI of 18.5–24.9 kg/m2), individuals with higher BMI have an elevated risk of developing life-shortening cardiometabolic diseases—cardiovascular diseases that affect the heart and/or the blood vessels (for example, heart failure and stroke) and metabolic diseases that affect the cellular chemical reactions that sustain life (for example, diabetes). People become unhealthily fat by consuming food and drink that contains more energy (calories) than they need for their daily activities. So adiposity can be prevented and reversed by eating less and exercising more.
Why Was This Study Done?
Epidemiological studies, which record the patterns of risk factors and disease in populations, suggest that the illness and death associated with excess body weight are partly attributable to abnormalities in how individuals with high adiposity metabolize carbohydrates and fats, leading to higher blood sugar and cholesterol levels. Further, adiposity is also associated with many deviations in the metabolic profile beyond these commonly measured risk factors. However, epidemiological studies cannot prove that adiposity causes specific changes in a person's systemic (overall) metabolic profile because individuals with high BMI may share other characteristics (confounding factors) that are the actual causes of both adiposity and metabolic abnormalities. Moreover, a change in some aspect of metabolism could also lead to adiposity, rather than vice versa (reverse causation). Importantly, if there is a causal effect of adiposity on cardiometabolic risk factor levels, it might be possible to prevent the progression towards cardiometabolic diseases by weight loss. Here, the researchers use “Mendelian randomization” to examine whether increased BMI within the normal and overweight range causally influences metabolic risk factors from many biological pathways during early adulthood. Because gene variants are inherited randomly, they are not prone to confounding and are free from reverse causation. Several gene variants are known to lead to modestly increased BMI. Thus, an investigation of the associations between these gene variants and risk factors across the systemic metabolite profile in a population of healthy individuals can indicate whether higher BMI is causally related to known and novel metabolic risk factors and higher cardiometabolic disease risk.
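The logic of Mendelian randomization can be illustrated with a toy simulation (all effect sizes and numbers hypothetical, not taken from the study): a gene score shifts BMI, a hidden confounder biases the naive BMI-metabolite regression, and the instrumental-variable (Wald) ratio cov(G, Y)/cov(G, X) recovers the true causal effect because the gene score is independent of the confounder.

```python
import random

def cov(xs, ys):
    """Sample covariance (population normalization, fine for illustration)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(1)
n = 20_000
true_effect = 0.5  # hypothetical causal effect of BMI on a metabolite

# Gene score: 32 variants, mirroring the study's 32-variant instrument
# (allele frequencies and per-allele effects are hypothetical).
g = [sum(rng.random() < 0.5 for _ in range(32)) for _ in range(n)]
u = [rng.gauss(0, 1) for _ in range(n)]  # hidden confounder
bmi = [0.3 * gi + ui + rng.gauss(0, 1) for gi, ui in zip(g, u)]
met = [true_effect * bi + ui + rng.gauss(0, 1) for bi, ui in zip(bmi, u)]

naive = cov(bmi, met) / cov(bmi, bmi)  # observational slope, biased upward by u
iv = cov(g, met) / cov(g, bmi)         # Wald ratio, unaffected by u
```

The naive regression overestimates the effect (here by roughly 0.37), while the instrumental-variable ratio lands near the true value of 0.5, which is the sense in which the study's causal estimates could be compared against its observational ones.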
What Did the Researchers Do and Find?
The researchers measured the BMI of 12,664 adolescents and young adults (average BMI 24.7 kg/m2) living in Finland and the blood levels of 82 metabolites in these young individuals at a single time point. Statistical analysis of these data indicated that elevated BMI was adversely associated with numerous cardiometabolic risk factors. For example, elevated BMI was associated with raised levels of low-density lipoprotein, “bad” cholesterol that increases cardiovascular disease risk. Next, the researchers used a gene score for predisposition to increased BMI, composed of 32 gene variants correlated with increased BMI, as an “instrumental variable” to assess whether adiposity causes metabolite abnormalities. The effects on the systemic metabolite profile of a 1-kg/m2 increment in BMI due to genetic predisposition closely matched the effects of an observed 1-kg/m2 increment in adulthood BMI on the metabolic profile. That is, higher levels of adiposity had causal effects on the levels of numerous blood-based metabolic risk factors, including higher levels of low-density lipoprotein cholesterol and triglyceride-carrying lipoproteins, protein markers of chronic inflammation and adverse liver function, impaired insulin sensitivity, and elevated concentrations of several amino acids that have recently been linked with the risk for developing diabetes. Elevated BMI also causally led to lower levels of certain high-density lipoprotein lipids in the blood, a marker for the risk of future cardiovascular disease. Finally, an examination of the metabolic changes associated with changes in BMI in 1,488 young adults after a period of six years showed that those metabolic measures that were most strongly associated with BMI at a single time point likewise displayed the highest responsiveness to weight change over time.
What Do These Findings Mean?
These findings suggest that increased adiposity has causal adverse effects on multiple cardiometabolic risk markers in non-obese young adults, beyond its effects on cholesterol and blood sugar. As with all Mendelian randomization studies, the reliability of the causal associations reported here depends on several assumptions made by the researchers. Nevertheless, the results of both the causal effect analyses and the longitudinal study suggest that there is no threshold below which a BMI increase does not adversely affect the metabolic profile, and that a systemic metabolic profile linked with high cardiometabolic disease risk that becomes established during early adulthood can be reversed. Overall, these findings highlight the importance of weight reduction as a key target for metabolic risk factor control among young adults.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001765.
The Computational Medicine Research Team of the University of Oulu has a webpage that provides further information on metabolite profiling by high-throughput NMR metabolomics
The World Health Organization provides information on obesity (in several languages)
The Global Burden of Disease Study website provides the latest details about global obesity trends
The UK National Health Service Choices website provides information about obesity, cardiovascular disease, and type 2 diabetes (including some personal stories)
The American Heart Association provides information on all aspects of cardiovascular disease and diabetes and on keeping healthy; its website includes personal stories about heart attacks, stroke, and diabetes
The US Centers for Disease Control and Prevention has information on all aspects of overweight and obesity and information about heart disease, stroke, and diabetes
MedlinePlus provides links to other sources of information on heart disease, vascular disease, and obesity (in English and Spanish)
Wikipedia has a page on Mendelian randomization (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001765
PMCID: PMC4260795  PMID: 25490400
20.  Tracking Rural Health Facility Financial Data in Resource-Limited Settings: A Case Study from Rwanda 
PLoS Medicine  2014;11(12):e1001763.
Chunling Lu and colleagues describe a project for tracking health center financial data in two rural districts of Rwanda, which could be adapted for other low- or middle-income countries.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001763
PMCID: PMC4251825  PMID: 25460586
21.  Evaluation of the Lung Cancer Risks at Which to Screen Ever- and Never-Smokers: Screening Rules Applied to the PLCO and NLST Cohorts 
PLoS Medicine  2014;11(12):e1001764.
Martin Tammemägi and colleagues evaluate which risk groups of individuals, including nonsmokers and high-risk individuals from 65 to 80 years of age, should be screened for lung cancer using computed tomography.
Please see later in the article for the Editors' Summary
Background
Lung cancer risks at which individuals should be screened with computed tomography (CT) for lung cancer are undecided. This study's objectives are to identify a risk threshold for selecting individuals for screening, to compare its efficiency with the U.S. Preventive Services Task Force (USPSTF) criteria for identifying screenees, and to determine whether never-smokers should be screened. Lung cancer risks are compared between smokers aged 55–64 and ≥65–80 y.
Methods and Findings
Applying the PLCOm2012 model, a model based on 6-y lung cancer incidence, we identified the risk threshold above which National Lung Screening Trial (NLST, n = 53,452) CT arm lung cancer mortality rates were consistently lower than rates in the chest X-ray (CXR) arm. We evaluated the USPSTF and PLCOm2012 risk criteria in intervention arm (CXR) smokers (n = 37,327) of the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO). The numbers of smokers selected for screening, and the sensitivities, specificities, and positive predictive values (PPVs) for identifying lung cancers were assessed. A modified model (PLCOall2014) evaluated risks in never-smokers. At PLCOm2012 risk ≥0.0151, the 65th percentile of risk, the NLST CT arm mortality rates are consistently below the CXR arm's rates. The number needed to screen to prevent one lung cancer death in the 65th to 100th percentile risk group is 255 (95% CI 143 to 1,184), and in the 30th to <65th percentile risk group is 963 (95% CI 291 to −754); the number needed to screen could not be estimated in the <30th percentile risk group because of absence of lung cancer deaths. When applied to PLCO intervention arm smokers, compared to the USPSTF criteria, the PLCOm2012 risk ≥0.0151 threshold selected 8.8% fewer individuals for screening (p<0.001) but identified 12.4% more lung cancers (sensitivity 80.1% [95% CI 76.8%–83.0%] versus 71.2% [95% CI 67.6%–74.6%], p<0.001), had fewer false-positives (specificity 66.2% [95% CI 65.7%–66.7%] versus 62.7% [95% CI 62.2%–63.1%], p<0.001), and had higher PPV (4.2% [95% CI 3.9%–4.6%] versus 3.4% [95% CI 3.1%–3.7%], p<0.001). In total, 26% of individuals selected for screening based on USPSTF criteria had risks below the threshold PLCOm2012 risk ≥0.0151. Of PLCO former smokers with quit time >15 y, 8.5% had PLCOm2012 risk ≥0.0151. None of 65,711 PLCO never-smokers had PLCOm2012 risk ≥0.0151. 
Risks and lung cancers were significantly greater in PLCO smokers aged ≥65–80 y than in those aged 55–64 y. This study omitted cost-effectiveness analysis.
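The number-needed-to-screen figures above come from a standard calculation: the reciprocal of the absolute reduction in mortality rate between the screened and comparison arms. A minimal sketch, using hypothetical rates rather than the NLST estimates:

```python
# Hedged sketch: number needed to screen (NNS) to prevent one lung cancer
# death is the reciprocal of the absolute mortality-rate reduction.
# The rates below are invented for illustration, not NLST data.
def number_needed_to_screen(rate_control, rate_screened):
    return 1.0 / (rate_control - rate_screened)

# e.g., 2.1% mortality with chest X-ray vs. 1.7% with CT
nns = number_needed_to_screen(rate_control=0.021, rate_screened=0.017)
```

This also explains why NNS could not be estimated in the lowest-risk group: with no lung cancer deaths, the rate difference is zero (or undefined) and the reciprocal diverges.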
Conclusions
The USPSTF criteria for CT screening include some low-risk individuals and exclude some high-risk individuals. Use of the PLCOm2012 risk ≥0.0151 criterion can improve screening efficiency. Currently, never-smokers should not be screened. Smokers aged ≥65–80 y are a high-risk group who may benefit from screening.
Editors' Summary
Background
Lung cancer is the most commonly occurring cancer in the world and the most common cause of cancer-related deaths. Like all cancers, lung cancer occurs when cells acquire genetic changes that allow them to grow uncontrollably and to move around the body (metastasize). The most common trigger for these genetic changes in lung cancer is exposure to cigarette smoke. Symptoms of lung cancer include a persistent cough and breathlessness. If lung cancer is diagnosed when it is confined to the lung (stage I), the tumor can often be removed surgically. Stage II tumors, which have spread into nearby lymph nodes, are usually treated with surgery plus chemotherapy or radiotherapy. For more advanced lung cancers that have spread throughout the chest (stage III) or the body (stage IV), surgery is rarely helpful and these tumors are treated with chemotherapy and radiotherapy alone. Overall, because most lung cancers are not detected until they are advanced, less than 17% of people diagnosed with lung cancer survive for five years.
Why Was This Study Done?
Screening for lung cancer—looking for early disease in healthy people—could save lives. In the US National Lung Screening Trial (NLST), annual screening with computed tomography (CT) reduced lung cancer mortality by 20% among smokers at high risk of developing cancer compared with screening with a chest X-ray. But what criteria should be used to decide who is screened for lung cancer? The US Preventive Services Task Force (USPSTF), for example, recommends annual CT screening of people who are 55–80 years old, have smoked 30 or more pack-years (one pack-year is defined as a pack of cigarettes per day for one year), and—if they are former smokers—quit smoking less than 15 years ago. However, some experts think lung cancer risk prediction models—statistical models that estimate risk based on numerous personal characteristics—should be used to select people for screening. Here, the researchers evaluate PLCOm2012, a lung cancer risk prediction model based on the incidence of lung cancer among smokers enrolled in the US Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO). Specifically, the researchers use NLST and PLCO screening trial data to identify a PLCOm2012 risk threshold for selecting people for screening and to compare the efficiency of the PLCOm2012 model and the USPSTF criteria for identifying “screenees.”
What Did the Researchers Do and Find?
By analyzing NLST data, the researchers calculated that at PLCOm2012 risk ≥0.0151, mortality (death) rates among NLST participants screened with CT were consistently below mortality rates among NLST participants screened with chest X-ray and that 255 people with a PLCOm2012 risk ≥0.0151 would need to be screened to prevent one lung cancer death. Next, they used data collected from smokers in the screened arm of the PLCO trial to compare the efficiency of the PLCOm2012 and USPSTF criteria for identifying screenees. They found that 8.8% fewer people had a PLCOm2012 risk ≥0.0151 than met USPSTF criteria for screening, but 12.4% more lung cancers were identified. Thus, using PLCOm2012 improved the sensitivity and specificity of the selection of individuals for lung cancer screening over using USPSTF criteria. Notably, 8.5% of PLCO former smokers with quit times of more than 15 years had PLCOm2012 risk ≥0.0151, none of the PLCO never-smokers had PLCOm2012 risk ≥0.0151, and the calculated risks and incidence of lung cancer were greater among PLCO smokers aged ≥65–80 years than among those aged 55–64 years.
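The sensitivity, specificity, and positive predictive value comparisons above all derive from a 2×2 table of screening selection versus lung cancer status. A minimal sketch of those definitions, with hypothetical counts rather than PLCO data:

```python
# Sketch of the screening metrics discussed above, computed from a 2x2
# confusion table. The counts below are hypothetical, not PLCO figures.
def screening_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # fraction of lung cancers selected
    specificity = tn / (tn + fp)   # fraction of non-cases excluded
    ppv = tp / (tp + fp)           # fraction of screenees with cancer
    return sensitivity, specificity, ppv

sens, spec, ppv = screening_metrics(tp=80, fp=1840, fn=20, tn=3560)
```

A criterion that selects fewer people (higher specificity) while catching more cancers (higher sensitivity), as PLCOm2012 did relative to the USPSTF rules, necessarily also raises PPV.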
What Do These Findings Mean?
Despite the absence of a cost-effectiveness analysis in this study, these findings suggest that the use of the PLCOm2012 risk ≥0.0151 threshold rather than USPSTF criteria for selecting individuals for lung cancer screening could improve screening efficiency. The findings have several other important implications. First, these findings suggest that screening may be justified in people who stopped smoking more than 15 years ago; USPSTF currently recommends that screening stop once an individual's quit time exceeds 15 years. Second, these findings do not support lung cancer screening among never-smokers. Finally, these findings suggest that smokers aged ≥65–80 years might benefit from screening, although the presence of additional illnesses and reduced life expectancy need to be considered before recommending the provision of routine lung cancer screening to this section of the population.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001764.
The US National Cancer Institute provides information about all aspects of lung cancer for patients and health-care professionals, including information on lung cancer screening (in English and Spanish)
Cancer Research UK also provides detailed information about lung cancer and about lung cancer screening
The UK National Health Service Choices website has a page on lung cancer that includes personal stories
MedlinePlus provides links to other sources of information about lung cancer (in English and Spanish)
Information about the USPSTF recommendations for lung cancer screening is available
doi:10.1371/journal.pmed.1001764
PMCID: PMC4251899  PMID: 25460915
22.  (How) Can We Reduce Violence Against Women by 50% over the Next 30 Years? 
PLoS Medicine  2014;11(11):e1001761.
In this month's editorial, PLOS Medicine Editorial Board member Rachel Jewkes identifies challenges that the violence prevention research community faces in substantially reducing violence against women and girls over the next 30 years.
Please see later in the article for the Editors' Summary
doi:10.1371/journal.pmed.1001761
PMCID: PMC4244032  PMID: 25423110
23.  Impact of Xpert MTB/RIF for TB Diagnosis in a Primary Care Clinic with High TB and HIV Prevalence in South Africa: A Pragmatic Randomised Trial 
PLoS Medicine  2014;11(11):e1001760.
Helen Cox and colleagues investigate the impact Xpert MTB/RIF for diagnosing patients with presumptive tuberculosis in a large primary care clinic in Khayelitsha, Cape Town.
Please see later in the article for the Editors' Summary
Background
Xpert MTB/RIF is approved for use in tuberculosis (TB) and rifampicin-resistance diagnosis. However, data are limited on the impact of Xpert under routine conditions in settings with high TB burden.
Methods and Findings
A pragmatic prospective cluster-randomised trial of Xpert for all individuals with presumptive (symptomatic) TB compared to the routine diagnostic algorithm of sputum microscopy and limited use of culture was conducted in a large TB/HIV primary care clinic. The primary outcome was the proportion of bacteriologically confirmed TB cases not initiating TB treatment by 3 mo after presentation. Secondary outcomes included time to TB treatment and mortality. Unblinded randomisation occurred on a weekly basis. Xpert and smear microscopy were performed on site. Analysis was both by intention to treat (ITT) and per protocol.
Between 7 September 2010 and 28 October 2011, 1,985 participants were assigned to the Xpert (n = 982) and routine (n = 1,003) diagnostic algorithms (ITT analysis); 882 received Xpert and 1,063 routine (per protocol analysis). 13% (32/257) of individuals with bacteriologically confirmed TB (smear, culture, or Xpert) did not initiate treatment by 3 mo after presentation in the Xpert arm, compared to 25% (41/167) in the routine arm (ITT analysis, risk ratio 0.51, 95% CI 0.33–0.77, p = 0.0052).
The yield of bacteriologically confirmed TB cases among patients with presumptive TB was 17% (167/1,003) with routine diagnosis and 26% (257/982) with Xpert diagnosis (ITT analysis, risk ratio 1.57, 95% CI 1.32–1.87, p<0.001). This difference in diagnosis rates resulted in a higher rate of treatment initiation in the Xpert arm: 23% (229/1,003) and 28% (277/982) in the routine and Xpert arms, respectively (ITT analysis, risk ratio 1.24, 95% CI 1.06–1.44, p = 0.013). Time to treatment initiation was improved overall (ITT analysis, hazard ratio 0.76, 95% CI 0.63–0.92, p = 0.005) and among HIV-infected participants (ITT analysis, hazard ratio 0.67, 95% CI 0.53–0.85, p = 0.001). There was no difference in 6-mo mortality with Xpert versus routine diagnosis. Study limitations included incorrect intervention allocation for a high proportion of participants and that the study was conducted in a single clinic.
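The primary-outcome risk ratio quoted above can be reproduced from the raw counts. The sketch below uses the reported counts (32/257 in the Xpert arm vs. 41/167 in the routine arm); the log-normal confidence-interval method is a standard choice and an assumption here, as the trial's exact method may differ.

```python
# Sketch: risk ratio and approximate 95% CI (log-normal method) for a
# two-arm binary outcome, using the counts reported in the abstract above.
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# 32/257 non-initiators with Xpert vs 41/167 with routine diagnosis
rr, lo, hi = risk_ratio(32, 257, 41, 167)
```

The result closely matches the reported risk ratio of 0.51 (95% CI 0.33–0.77).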
Conclusions
These data suggest that in this routine primary care setting, use of Xpert to diagnose TB increased the number of individuals with bacteriologically confirmed TB who were treated by 3 mo and reduced time to treatment initiation, particularly among HIV-infected participants.
Trial registration
Pan African Clinical Trials Registry PACTR201010000255244
Editors' Summary
Background
In 2012, about 8.6 million people developed active tuberculosis (TB)—a contagious mycobacterial disease that usually affects the lungs—and at least 1.3 million people died from the disease. Most of these deaths were in low- and middle-income countries, and a fifth were in HIV-positive individuals, who are particularly susceptible to TB. Mycobacterium tuberculosis, the bacterium that causes TB, is spread in airborne droplets when people with active disease cough or sneeze. The characteristic symptoms of TB include a cough, weight loss, and night sweats. Diagnostic tests for TB include microscopic examination of sputum (mucus coughed up from the lungs), growth (culture) of M. tuberculosis from sputum, and molecular tests (for example, the automated Xpert MTB/RIF test) that rapidly and accurately detect M. tuberculosis in sputum and determine its antibiotic resistance. TB can be cured by taking several antibiotics daily for at least six months, although the emergence of multidrug-resistant TB is making the disease harder to treat.
Why Was This Study Done?
To improve TB control, active disease needs to be diagnosed and treated quickly. However, sputum microscopy, the mainstay of TB diagnosis in many high-burden settings, fails to identify up to half of infected people, and mycobacterial culture (the “gold standard” of TB diagnosis) is slow and often unavailable in resource-limited settings. In late 2010, the World Health Organization recommended the routine use of the Xpert MTB/RIF test (Xpert) for TB diagnosis, and several low- and middle-income countries are now scaling up access to Xpert in their national TB control programs. But although Xpert performs well in ideal conditions, little is known about the impact of its implementation in routine (real-life) settings. In this pragmatic cluster-randomized trial, the researchers assess the health impacts of Xpert in a large TB/HIV primary health care clinic in South Africa, an upper-middle-income country that began to scale up access to Xpert for individuals showing symptoms of TB (individuals with presumptive TB) in 2011. A pragmatic trial asks whether an intervention works under real-life conditions; a cluster-randomized trial randomly assigns groups of people to receive alternative interventions and compares outcomes in the differently treated “clusters.”
What Did the Researchers Do and Find?
The researchers assigned everyone with presumptive TB attending a TB/HIV primary health care clinic in Cape Town to receive either Xpert for TB diagnosis or routine sputum microscopy and limited culture. Specifically, Xpert was requested on the routine laboratory request forms for individuals attending the clinic during randomly designated Xpert weeks but not during randomly designated routine testing weeks. During the 51-week trial, 982 individuals were assigned to the Xpert arm, and 1,003 were assigned to the routine testing arm, but because clinic staff sometimes failed to request Xpert during Xpert weeks, only 882 participants in the Xpert arm received the intervention. In an “intention to treat” analysis (an analysis that considers the outcomes of all the participants in a trial whether or not they received their assigned intervention), 13% of bacteriologically confirmed TB cases in the Xpert arm did not initiate TB treatment by three months after enrollment (the trial's primary outcome) compared to 25% in the routine testing arm. The proportion of participants with microbiologically confirmed TB and the proportion initiating TB treatment were higher in the Xpert arm than in the routine testing arm. Finally, the time to treatment initiation was lower in the Xpert arm than in the routine testing arm, particularly among HIV-infected participants.
What Do These Findings Mean?
These findings show that, in this primary health care setting, the provision of Xpert for TB diagnosis in individuals with presumptive TB provided benefits over testing that relied primarily on sputum microscopy. Notably, these benefits were seen even though a substantial proportion of individuals assigned to the Xpert intervention did not actually receive an Xpert test. The pragmatic nature of this trial, which aimed to minimize clinic disruption, and other aspects of the trial design may limit the accuracy and generalizability of these findings. Moreover, further studies are needed to discover whether the use of Xpert in real-life settings reduces the burden of TB illness and death over the long term. Nevertheless, these findings suggest that the implementation of Xpert has the potential to improve the outcomes of TB control programs and may also improve outcomes for individuals.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001760.
The World Health Organization provides information (in several languages) on all aspects of tuberculosis, including general information on tuberculosis diagnostics and specific information on the roll-out of the Xpert MTB/RIF test; further information about the World Health Organization's endorsement of Xpert MTB/RIF is included in a Strategic and Technical Advisory Group for Tuberculosis report; the “Global Tuberculosis Report 2013” provides information about tuberculosis around the world, including in South Africa
The Stop TB Partnership is working towards tuberculosis elimination and provides patient stories about tuberculosis (in English and Spanish); the Tuberculosis Vaccine Initiative (a not-for-profit organization) also provides personal stories about tuberculosis
The US Centers for Disease Control and Prevention has information about tuberculosis and its diagnosis (in English and Spanish)
The US National Institute of Allergy and Infectious Diseases also has detailed information on all aspects of tuberculosis
The South African National Tuberculosis Management Guidelines 2014 are available
The Foundation for Innovative New Diagnostics, a not-for-profit organization that helps to develop and introduce new diagnostic tests for tuberculosis, malaria, and neglected tropical diseases, has detailed information about the Xpert MTB/RIF test
More information about TB, HIV, and drug-resistant TB treatment in Khayelitsha, Cape Town, South Africa are provided by Médecins sans Frontières, South Africa
doi:10.1371/journal.pmed.1001760
PMCID: PMC4244039  PMID: 25423041
24.  Effect of Treatment of Obstructive Sleep Apnea on Depressive Symptoms: Systematic Review and Meta-Analysis 
PLoS Medicine  2014;11(11):e1001762.
In a meta-analysis of randomized controlled trials, Matthew James and colleagues investigate the effects of continuous positive airway pressure or mandibular advancement devices on depression.
Please see later in the article for the Editors' Summary
Background
Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and decreased quality of life. Treatment with continuous positive airway pressure (CPAP) or mandibular advancement devices (MADs) is effective for many symptoms of OSA. However, it remains controversial whether treatment with CPAP or MAD also improves depressive symptoms.
Methods and Findings
We performed a systematic review and meta-analysis of randomized controlled trials that examined the effect of CPAP or MADs on depressive symptoms in patients with OSA. We searched Medline, EMBASE, the Cochrane Central Registry of Controlled Trials, and PsycINFO from the inception of the databases until August 15, 2014, for relevant articles.
In a random effects meta-analysis of 19 identified trials, CPAP treatment resulted in an improvement in depressive symptoms compared to control, but with significant heterogeneity between trials (Q statistic, p<0.001; I2 = 71.3%, 95% CI: 54%, 82%). CPAP treatment resulted in significantly greater improvement in depressive symptoms in the two trials with a higher burden of depression at baseline (meta-regression, p<0.001). The pooled standardized mean difference (SMD) in depressive symptoms with CPAP treatment in these two trial populations with baseline depression was 2.004 (95% CI: 1.387, 2.621), compared to 0.197 (95% CI: 0.059, 0.334) for 15 trials of populations without depression at baseline. Pooled estimates of the treatment effect of CPAP were greater in parallel arm trials than in crossover trials (meta-regression, p = 0.076). Random effects meta-analysis of five trials of MADs showed a significant improvement in depressive symptoms with MADs versus controls: SMD = 0.214 (95% CI: 0.026, 0.401) without significant heterogeneity (I2 = 0%, 95% CI: 0%, 79%). Studies were limited by the use of depressive symptom scales that have not been validated specifically in people with OSA.
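The pooled SMDs, Q statistic, and I² reported above come from random-effects meta-analysis. A hedged sketch of the DerSimonian-Laird approach, a standard estimator for this family of models (the review's exact software and settings are not stated here), using made-up example trials:

```python
# Illustrative DerSimonian-Laird random-effects pooling of standardized
# mean differences (SMDs). The trial SMDs and variances are invented.
import numpy as np

def random_effects_pool(smd, var):
    w = 1.0 / var                                 # fixed-effect weights
    fixed = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - fixed) ** 2)            # Cochran's Q
    df = len(smd) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-trial variance
    w_re = 1.0 / (var + tau2)                     # random-effects weights
    pooled = np.sum(w_re * smd) / np.sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # % heterogeneity
    return pooled, tau2, i2

smd = np.array([0.05, 0.30, 0.60, 0.10])
var = np.array([0.01, 0.02, 0.04, 0.01])
pooled, tau2, i2 = random_effects_pool(smd, var)
```

When trial results are this variable, the between-trial variance tau² is positive, I² is substantial, and the random-effects weights shrink toward equality, which is why heterogeneous trial sets (like the I² = 71.3% CPAP analysis above) are pooled this way rather than with fixed-effect weights.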
Conclusions
CPAP and MADs may be useful components of treatment of depressive symptoms in individuals with OSA and depression. The efficacy of CPAP and MADs compared to standard therapies for depression is unknown.
Editors' Summary
Background
Obstructive sleep apnea (OSA) is a sleep-related breathing disorder that is particularly common among middle-aged and elderly people, although most are unaware that they have the condition. It is characterized by the occurrence of numerous brief (ten seconds or so) breathing interruptions during sleep. These “apneas” occur when relaxation of the upper airway muscles decreases airflow, which lowers the level of oxygen in the blood. Consequently, affected individuals are frequently aroused from deep sleep as they struggle to breathe. Symptoms of OSA include loud snoring and daytime sleepiness. Treatments include lifestyle changes such as losing weight (excess fat around the neck increases airway collapse) and smoking cessation. Mild to moderate OSA can also be treated using a mandibular advancement device (MAD), a “splint” that fits inside the mouth and pushes the jaw and tongue forward to increase the space at the back of the throat and reduce airway narrowing. For severe OSA, doctors recommend continuous positive airway pressure (CPAP), in which a machine blows pressurized air into the airway through a facemask to keep it open.
Why Was This Study Done?
OSA is a serious condition that is associated with an increased risk of illness and death. Clinical depression (long-lasting, overwhelming feelings of sadness and hopelessness), for example, is common among people with OSA. The interaction between these frequently co-morbid (co-existing) conditions is complex. The sleep disruption and weight gain that are often associated with depression could cause or worsen OSA. Conversely, OSA could trigger depression by causing sleep disruption and by inducing cognitive changes (changes in thinking) by intermittently starving the brain of oxygen. If the latter scenario is correct, then treating OSA with CPAP or MADs might improve depressive symptoms. Several trials have investigated this possibility, but their results have been equivocal. Here, the researchers undertake a systematic review and meta-analysis of randomized controlled trials that have examined the effect of CPAP or MADs on depressive symptoms in patients with OSA to find out whether treating co-morbid OSA in patients with depression can help to treat depression. A randomized controlled trial compares the outcomes of individuals chosen to receive different interventions through the play of chance, a systematic review uses predefined criteria to identify all the research on a given topic, and meta-analysis uses statistical methods to combine the results of several studies.
What Did the Researchers Do and Find?
The researchers identified 22 trials that investigated the effects of CPAP or MAD treatment in patients with OSA and that measured depressive symptoms before and after treatment. Meta-analysis of the results of 19 trials that provided information about the effect of CPAP on depressive symptoms indicated that CPAP improved depressive symptoms compared to the control intervention (usually sham CPAP) but revealed considerable heterogeneity (variability) between trials. Notably, CPAP treatment resulted in a greater improvement in depressive symptoms in trials in which there was a high prevalence of depression at baseline than in trials in which there was a low prevalence of depression at baseline. Moreover, the magnitude of this improvement in depressive symptoms in trials with a high prevalence of depression at baseline was large enough to be clinically relevant. Meta-analysis of five trials that provided information about the effect of MADs on depressive symptoms indicated that MADs also improved depressive symptoms compared to the control intervention (sham MAD).
What Do These Findings Mean?
These findings suggest that both CPAP and MAD treatment for OSA can result in modest improvements in depressive symptoms and that populations with high initial levels of depressive symptoms may reap the greatest benefits of CPAP treatment. These findings give no indication of the efficacy of CPAP and MADs compared to standard treatments for depression such as antidepressant medications. Moreover, their accuracy may be limited by methodological limitations within the trials included in the meta-analyses reported here. For example, the questionnaires used to measure depression in these trials were not validated for use in people with OSA. Further high-quality randomized controlled trials are therefore needed to confirm the findings of this systematic review and meta-analysis. For now, however, these findings suggest that the use of CPAP and MADs may help improve depressive symptoms among people with OSA.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001762.
The US National Heart, Lung, and Blood Institute has information (including several videos) about sleep apnea (in English and Spanish)
The UK National Health Service Choices website provides information and personal stories about obstructive sleep apnea and depression
The not-for-profit American Sleep Apnea Association provides detailed information about sleep apnea for patients and healthcare professionals, including personal stories about the condition
The US National Institute of Mental Health provides information on all aspects of depression (in English and Spanish)
The Anxiety and Depression Association of America provides information about sleep disorders
The MedlinePlus encyclopedia has a page on obstructive sleep apnea; MedlinePlus provides links to further information and advice about obstructive sleep apnea and about depression (in English and Spanish)
doi:10.1371/journal.pmed.1001762
PMCID: PMC4244041  PMID: 25423175
25.  Computerized Cognitive Training in Cognitively Healthy Older Adults: A Systematic Review and Meta-Analysis of Effect Modifiers 
PLoS Medicine  2014;11(11):e1001756.
Michael Valenzuela and colleagues systematically review and meta-analyze the evidence that computerized cognitive training improves cognitive skills in older adults with normal cognition.
Please see later in the article for the Editors' Summary
Background
New effective interventions to attenuate age-related cognitive decline are a global priority. Computerized cognitive training (CCT) is believed to be safe and can be inexpensive, but neither its efficacy in enhancing cognitive performance in healthy older adults nor the impact of design factors on such efficacy has been systematically analyzed. Our aim therefore was to quantitatively assess whether CCT programs can enhance cognition in healthy older adults, discriminate responsive from nonresponsive cognitive domains, and identify the most salient design factors.
Methods and Findings
We systematically searched Medline, Embase, and PsycINFO for relevant studies from the databases' inception to 9 July 2014. Eligible studies were randomized controlled trials investigating the effects of ≥4 h of CCT on performance in neuropsychological tests in older adults without dementia or other cognitive impairment. Fifty-two studies encompassing 4,885 participants were eligible. Intervention designs varied considerably, but after removal of one outlier, heterogeneity across studies was small (I2 = 29.92%). There was no systematic evidence of publication bias. The overall effect size (Hedges' g, random effects model) for CCT versus control was small and statistically significant, g = 0.22 (95% CI 0.15 to 0.29). Small to moderate effect sizes were found for nonverbal memory, g = 0.24 (95% CI 0.09 to 0.38); verbal memory, g = 0.08 (95% CI 0.01 to 0.15); working memory (WM), g = 0.22 (95% CI 0.09 to 0.35); processing speed, g = 0.31 (95% CI 0.11 to 0.50); and visuospatial skills, g = 0.30 (95% CI 0.07 to 0.54). No significant effects were found for executive functions and attention. Moderator analyses revealed that home-based administration was ineffective compared to group-based training, and that more than three training sessions per week was ineffective versus three or fewer. There was no evidence for the effectiveness of WM training, and only weak evidence for sessions less than 30 min. These results are limited to healthy older adults, and do not address the durability of training effects.
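The effect sizes above are Hedges' g, a standardized mean difference with a small-sample bias correction. A minimal sketch of the calculation, with invented group statistics rather than data from the included trials:

```python
# Sketch of Hedges' g, the bias-corrected standardized mean difference
# used for the effect sizes above. Group statistics here are invented.
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    # Pooled standard deviation across treatment and control groups
    sp = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp            # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)     # small-sample correction factor
    return d * j

g = hedges_g(mean_t=52.0, sd_t=10.0, n_t=40,
             mean_c=50.0, sd_c=10.0, n_c=40)
```

On this scale, the review's overall g = 0.22 corresponds to a treatment-control difference of roughly a fifth of a standard deviation, conventionally a small effect.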
Conclusions
CCT is modestly effective at improving cognitive performance in healthy older adults, but efficacy varies across cognitive domains and is largely determined by design choices. Unsupervised at-home training and training more than three times per week are ineffective. Further research is required to enhance the efficacy of the intervention.
Editors' Summary
Background
As we get older, we notice many bodily changes. Our hair goes grey, we develop new aches and pains, and getting out of bed in the morning takes longer than it did when we were young. Our brain may also show signs of aging. It may take us longer to learn new information, we may lose our keys more frequently, and we may forget people's names. Cognitive decline—developing worsened thinking, language, memory, understanding, and judgment—can be a normal part of aging, but it can also be an early sign of dementia, a group of brain disorders characterized by a severe, irreversible decline in cognitive functions. We know that age-related physical decline can be attenuated by keeping physically active; similarly, engaging in activities that stimulate the brain throughout life is thought to enhance cognition in later life and reduce the risk of age-related cognitive decline and dementia. Thus, having an active social life and doing challenging activities that stimulate both the brain and the body may help to stave off cognitive decline.
Why Was This Study Done?
“Brain training” may be another way of keeping mentally fit. The sale of computerized cognitive training (CCT) packages, which provide standardized, cognitively challenging tasks designed to “exercise” various cognitive functions, is a lucrative and expanding business. But does CCT work? Given the rising global incidence of dementia, effective interventions that attenuate age-related cognitive decline are urgently needed. However, the impact of CCT on cognitive performance in older adults is unclear, and little is known about what makes a good CCT package. In this systematic review and meta-analysis, the researchers assess whether CCT programs improve cognitive test performance in cognitively healthy older adults and identify the aspects of cognition (cognitive domains) that are responsive to CCT, and the CCT design features that are most important in improving cognitive performance. A systematic review uses pre-defined criteria to identify all the research on a given topic; meta-analysis uses statistical methods to combine the results of several studies.
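The studies in a meta-analysis like this one are combined on the scale of a standardized effect size, here Hedges' g. As a rough illustration only (the function name and the example numbers are hypothetical, not taken from the paper), Hedges' g can be computed from the summary statistics of a treatment and a control group:

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Hedges' g) between a treatment
    and a control group, with small-sample bias correction.
    Illustrative sketch; not the paper's analysis code."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp           # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)    # small-sample correction factor J
    return j * d

# Hypothetical example: 30 trained vs. 30 control participants
print(round(hedges_g(105, 15, 30, 100, 15, 30), 3))
```

Standardizing by the pooled standard deviation is what lets trials that used different neuropsychological tests be averaged on one scale.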
What Did the Researchers Do and Find?
The researchers identified 52 trials that investigated the effects of at least four hours of CCT on nearly 5,000 cognitively healthy older adults by measuring several cognitive functions before and after CCT. Meta-analysis of these studies indicated that the overall effect size for CCT (compared to control individuals who did not participate in CCT) was small but statistically significant. An effect size quantifies the difference between two groups; a statistically significant result is a result that is unlikely to have occurred by chance. So, the meta-analysis suggests that CCT slightly improved overall cognitive function. Notably, CCT also had small to moderate significant effects on individual cognitive functions. For example, some CCT slightly improved nonverbal memory (the ability to remember visual images) and working memory (the ability to hold and manipulate information in mind over short periods). However, CCT had no significant effect on executive functions (cognitive processes involved in planning and judgment) or attention (selective concentration on one aspect of the environment). The design of CCT used in the different studies varied considerably, and “moderator” analyses revealed that home-based CCT was not effective, whereas center-based CCT was effective, and that training sessions undertaken more than three times a week were not effective. There was also some weak evidence suggesting that CCT sessions lasting less than 30 minutes may be ineffective. Finally, there was no evidence for the effectiveness of working memory training by itself (for example, programs that ask individuals to recall series of letters).
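The pooled effect sizes and the I2 heterogeneity statistic reported in the abstract come from a random-effects model, which allows the true effect to vary between studies. A minimal sketch of one common approach, the DerSimonian-Laird method, using hypothetical per-study effects and variances (the function and data are illustrative assumptions, not the paper's actual analysis):

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate, 95% CI, and I^2 (%).
    Illustrative sketch; not the paper's analysis code."""
    k = len(effects)
    w = [1 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sw
    # Cochran's Q measures observed between-study dispersion
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    # Random-effects weights incorporate tau^2
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study Hedges' g values and their variances
pooled, ci, i2 = random_effects_pool([0.1, 0.3, 0.2], [0.01, 0.01, 0.01])
```

When I2 is low, as in this meta-analysis after removal of one outlier, the studies are estimating broadly similar effects and the pooled estimate is correspondingly more trustworthy.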
What Do These Findings Mean?
These findings suggest that CCT produces small improvements in cognitive performance in cognitively healthy older adults but that the efficacy of CCT varies across cognitive domains and is largely determined by design aspects of CCT. The most important result was that “do-it-yourself” CCT at home did not produce improvements. Rather, the small improvements seen were in individuals supervised by a trainer in a center and undergoing sessions 1–3 times a week. Because only cognitively healthy older adults were enrolled in the studies considered in this systematic review and meta-analysis, these findings do not necessarily apply to cognitively impaired individuals. Moreover, because all the included studies measured cognitive function immediately after CCT, these findings provide no information about the durability of the effects of CCT or about how the effects of CCT on cognitive function translate into real-life outcomes for individuals such as independence and the long-term risk of dementia. The researchers call, therefore, for additional research into CCT, an intervention that might help to attenuate age-related cognitive decline and improve the quality of life for older individuals.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001756.
This study is further discussed in a PLOS Medicine Perspective by Druin Burch
The US National Institute on Aging provides information for patients and carers about age-related forgetfulness, about memory and cognitive health, and about dementia (in English and Spanish)
The UK National Health Service Choices website also provides information about dementia and about memory loss
MedlinePlus provides links to additional resources about memory, mild cognitive impairment, and dementia (in English and Spanish)
doi:10.1371/journal.pmed.1001756
PMCID: PMC4236015  PMID: 25405755