1.  Prognostic Accuracy of WHO Growth Standards to Predict Mortality in a Large-Scale Nutritional Program in Niger 
PLoS Medicine  2009;6(3):e1000039.
Background
Important differences exist in the diagnosis of malnutrition when comparing the 2006 World Health Organization (WHO) Child Growth Standards and the 1977 National Center for Health Statistics (NCHS) reference. However, their relationship with mortality has not been studied. Here, we assessed the accuracy of the WHO standards and the NCHS reference in predicting death in a population of malnourished children in a large nutritional program in Niger.
Methods and Findings
We analyzed data from 64,484 children aged 6–59 mo admitted with malnutrition (<80% weight-for-height percentage of the median [WH]% [NCHS] and/or mid-upper arm circumference [MUAC] <110 mm and/or presence of edema) in 2006 into the Médecins Sans Frontières (MSF) nutritional program in Maradi, Niger. Sensitivity and specificity of weight-for-height in terms of Z score (WHZ) and WH% for both WHO standards and NCHS reference were calculated using mortality as the gold standard. Sensitivity and specificity of MUAC were also calculated. The receiver operating characteristic (ROC) curve was traced for these cutoffs and its area under curve (AUC) estimated. In predicting mortality, WHZ (NCHS) and WH% (NCHS) showed AUC values of 0.63 (95% confidence interval [CI] 0.60–0.66) and 0.71 (CI 0.68–0.74), respectively. WHZ (WHO) and WH% (WHO) appeared to provide higher accuracy with AUC values of 0.76 (CI 0.75–0.80) and 0.77 (CI 0.75–0.80), respectively. The relationship between MUAC and mortality risk appeared to be relatively weak, with AUC = 0.63 (CI 0.60–0.67). Analyses stratified by sex and age yielded similar results.
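In outline, the accuracy analysis is standard diagnostic-test arithmetic: each candidate cutoff on an indicator yields a sensitivity and specificity against the mortality "gold standard", and sweeping the cutoff traces the ROC curve whose area (AUC) summarizes predictive accuracy. A minimal sketch of that calculation on synthetic data (the data, coefficients, and variable names below are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for illustration only: 10,000 malnourished children with
# weight-for-height Z scores (WHZ) and simulated deaths whose risk rises
# as WHZ falls (the direction the study assumes).
n = 10_000
whz = rng.normal(-2.5, 1.0, n)
p_death = 1.0 / (1.0 + np.exp(4.0 + 0.8 * whz))
died = rng.random(n) < p_death

def sens_spec(scores, outcome, cutoff):
    """Sensitivity/specificity of the rule 'score < cutoff' for death."""
    flagged = scores < cutoff
    sensitivity = (flagged & outcome).sum() / outcome.sum()
    specificity = (~flagged & ~outcome).sum() / (~outcome).sum()
    return sensitivity, specificity

# Sweep the cutoff to trace the ROC curve, then integrate for the AUC.
cutoffs = np.linspace(whz.min(), whz.max(), 200)
pairs = [sens_spec(whz, died, c) for c in cutoffs]
tpr = np.array([se for se, sp in pairs])           # true positive rate
fpr = np.array([1.0 - sp for se, sp in pairs])     # false positive rate
order = np.argsort(fpr)
auc = np.sum(np.diff(fpr[order]) * (tpr[order][1:] + tpr[order][:-1]) / 2.0)
print(f"AUC = {auc:.2f}")
```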
Conclusions
These results suggest that in this population of children being treated for malnutrition, WH indicators calculated using WHO standards were more accurate for predicting mortality risk than those calculated using the NCHS reference. The findings are valid for a population of already malnourished children and are not necessarily generalizable to a population of children being screened for malnutrition. Future work is needed to assess which criteria are best for admission purposes to identify children most likely to benefit from therapeutic or supplementary feeding programs.
Rebecca Grais and colleagues assess the accuracy of WHO growth standards in predicting death among malnourished children admitted to a large nutritional program in Niger.
Editors' Summary
Background.
Malnutrition causes more than a third of child deaths worldwide. The World Health Organization (WHO) estimates there are 178 million malnourished children globally, all of whom are vulnerable to disease and 20 million of whom are at risk of death. Poverty, rising food prices, food scarcity, and natural disasters all contribute significantly to malnutrition, but children's lives can be saved if aid agencies are able to identify and treat acute malnutrition early. This can be done by comparing a child's body measurements to those of healthy children.
In 1977 the US National Center for Health Statistics (NCHS) introduced child growth reference charts describing how US children grow. The charts enable the height of a child of a given age to be compared with the set of “percentile curves,” which show, for example, whether the child is on the 90th or the 10th centile—that is, whether taller than 90% or 10% of their peers. These NCHS reference charts were subsequently adopted by the WHO for international use. In 2006, the WHO began to use new growth charts, based on children from a variety of countries raised in optimal environments for healthy growth. These provide a standard for how all children should grow, regardless of ethnic background or wealth.
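Both chart systems reduce to the same arithmetic: a child's measurement is compared with the reference median and spread for children of the same height (or age), either as a percentage of the median (the WH% used above) or as a Z score, which maps onto a percentile. A hedged sketch, with invented reference values:

```python
from math import erf, sqrt

def wh_percent(weight_kg, ref_median_kg):
    """Weight-for-height as a percentage of the reference median (WH%)."""
    return 100.0 * weight_kg / ref_median_kg

def wh_z_score(weight_kg, ref_median_kg, ref_sd_kg):
    """Weight-for-height Z score (WHZ) against the reference distribution."""
    return (weight_kg - ref_median_kg) / ref_sd_kg

def percentile(z):
    """Standard-normal percentile corresponding to a Z score."""
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical child: weighs 7.0 kg at a height where the reference
# median is 8.9 kg with SD 0.8 kg (reference values invented).
z = wh_z_score(7.0, 8.9, 0.8)
print(f"WH% = {wh_percent(7.0, 8.9):.0f}%")      # below the 80% admission cutoff
print(f"WHZ = {z:.2f} (about the {percentile(z):.1f}th percentile)")
```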
Why Was This Study Done?
It is known that the WHO standards and the NCHS reference differ in how they identify malnutrition. Estimates of malnutrition are higher with the WHO standard than the NCHS reference. This affects the cost of international programs to treat malnutrition, as more children will be diagnosed and treated when the WHO standards are used. However, it is not known how the different growth measures differ in predicting which children's lives are at risk from malnutrition. The researchers saw that the data in their nutritional program could help provide this information.
What Did the Researchers Do and Find?
The researchers examined data on the body measurements of over 60,000 children aged between 6 mo and 5 y enrolled in a Médecins sans Frontières (MSF) nutritional program in Maradi, Niger, during 2006. Children were assessed as having acute malnutrition (wasting) and enrolled in the feeding program if their weight-for-height was less than 80% of the NCHS average, if their mid-upper arm circumference (MUAC) was under 110 mm (for children 65–110 cm tall), or if they had swelling in both feet.
The authors evaluated three measures to see which was most accurate at predicting that children would die under treatment: low weight-for-height measured against the WHO standard, low weight-for-height measured against the NCHS reference, and low MUAC. For each measure, they compared the proportion of correct predictions of death (sensitivity) and the proportion of correct predictions of survival (specificity) across a range of possible diagnostic cutoffs (thresholds).
They found that the WHO standard gave more accurate predictions than the NCHS reference or the MUAC of which children would die under treatment. The results were similar when the children were grouped by age or sex.
What Do these Findings Mean?
The results suggest that, at least in this population, the WHO standards are a more accurate predictor of death following malnutrition. This agrees with what might be expected, as the WHO standard is more up-to-date as well as aiming to show how healthy children from a range of settings should grow.
Nevertheless, an important limitation is that the children in the study had already been diagnosed as malnourished and were receiving treatment. As a result, the authors cannot say definitively which measure is better at predicting which children in the general population are acutely malnourished and would benefit most from treatment.
It should also be noted that children were predominantly entered into the feeding program by the weight-for-height indicator rather than by the MUAC. This may be a reason why the MUAC appears worse at predicting death than weight-for-height. Missing and inaccurate data, for instance on the exact ages of some children, also limit the findings.
In addition, the findings do not provide guidance on the cutoffs that should be used in deciding whether to enter a child into a feeding program. Different cutoffs represent a trade-off between treating more children needlessly in order to catch all in need, and treating fewer children and missing some in need. The study also cannot be used to advise on whether weight-for-height or the MUAC is more appropriate in a given context. In certain crisis situations, for instance, some authorities suggest it may be more practical to use the MUAC, as it requires less equipment or training.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000039.
The UN Standing Committee on Nutrition homepage publishes international briefings on nutrition as a foundation for development
The US National Center for Health Statistics provides background information on its 1977 growth charts and how they were developed in the context of explaining how they differ from revised charts produced in 2000
The World Health Organization publishes country profile information on its child growth standards and also on Niger
Médecins sans Frontières also provides information on its work in Niger
The EC-FAO Food Security Information for Action Programme is funded by the European Commission (EC) and implemented by the Food and Agriculture Organization of the United Nations (FAO). It aims to help nations formulate more effective anti-hunger policies and provides online materials, including a guide to nutritional status assessment and analysis, which includes information on the contexts in which different indicators are useful
doi:10.1371/journal.pmed.1000039
PMCID: PMC2650722  PMID: 19260760
2.  Clinical results of XMR-assisted percutaneous transforaminal endoscopic lumbar discectomy 
Background
Although percutaneous endoscopic lumbar discectomy (PELD) has shown favorable outcomes in the majority of lumbar discectomy cases, there have also been failures, the most common cause being incomplete removal of disc fragments. Selection of the skin entry point for the guide-needle trajectory and the optimal placement of the working sleeve are largely blind, which might lead to inadequate removal of disc fragments. The objective of this study was to present our early experience with image-guided PELD, using a specially designed operative suite combining a fluoroscope with magnetic resonance imaging (XMR), for the treatment of lumbar disc herniation.
Methods
This prospective study included 89 patients who underwent PELD via the transforaminal approach using an XMR protocol. Pre- and postoperative examinations (at 12 weeks) included a detailed clinical history, visual analogue scale (VAS), Oswestry disability index (ODI), and radiological workups. Results were categorized as excellent, good, fair, or poor according to MacNab's criteria. The minimum follow-up time was 2 years, and the need for revision surgery and any postoperative complications were noted at follow-up.
Results
Postoperative mean ODI decreased from 67.4% to 5.61%. Mean VAS score for back and leg pain improved significantly from 4 to 2.3 and from 7.99 to 1.04, respectively. Four (4.49%) patients underwent a second-stage PELD after intraoperative XMR had shown remnant fragments after the first stage. As per MacNab's criteria, 76 patients (85.4%) showed excellent, 8 (8.89%) good, 3 (3.37%) fair, and 2 (2.25%) poor results. Four (4.49%) patients had remnant disc fragments on XMR, which were removed during the same procedure. All of these patients had either highly migrated or sequestrated disc fragments preoperatively. Four (4.49%) other patients needed a second, open surgery due to symptomatic postoperative hematoma (n = 2) and recurrent disc herniation (n = 2).
Conclusions
This prospective analysis indicates that XMR-assisted PELD provides a precise skin entry point. It also confirms that decompression occurs intraoperatively, which negates the need for a separate surgery and thus increases the success rate of PELD, particularly in highly migrated or sequestrated discs. However, further extensive experience is required to confirm the advantages and feasibility of PELD in terms of cost effectiveness.
doi:10.1186/1749-799X-8-14
PMCID: PMC3668223  PMID: 23705685
Percutaneous endoscopic lumbar discectomy; Incomplete disc removal; XMR-guided procedure; High success rate
3.  Worldwide Incidence of Malaria in 2009: Estimates, Time Trends, and a Critique of Methods 
PLoS Medicine  2011;8(12):e1001142.
Richard Cibulskis and colleagues present estimates of the worldwide incidence of malaria in 2009, together with a critique of different estimation methods, including those based on risk maps constructed from surveys of parasite prevalence, and those based on routine case reports compiled by health ministries.
Background
Measuring progress towards Millennium Development Goal 6, including estimates of, and time trends in, the number of malaria cases, has relied on risk maps constructed from surveys of parasite prevalence, and on routine case reports compiled by health ministries. Here we present a critique of both methods, illustrated with national incidence estimates for 2009.
Methods and Findings
We compiled information on the number of cases reported by National Malaria Control Programs in 99 countries with ongoing malaria transmission. For 71 countries we estimated the total incidence of Plasmodium falciparum and P. vivax by adjusting the number of reported cases using data on reporting completeness, the proportion of suspects that are parasite-positive, the proportion of confirmed cases due to each Plasmodium species, and the extent to which patients use public sector health facilities. All four factors varied markedly among countries and regions. For 28 African countries with less reliable routine surveillance data, we estimated the number of cases from model-based methods that link measures of malaria transmission with case incidence. In 2009, 98% of cases were due to P. falciparum in Africa and 65% in other regions. There were an estimated 225 million malaria cases (5th–95th centiles, 146–316 million) worldwide, 176 (110–248) million in the African region, and 49 (36–68) million elsewhere. Our estimates are lower than other published figures, especially survey-based estimates for non-African countries.
Conclusions
Estimates of malaria incidence derived from routine surveillance data were typically lower than those derived from surveys of parasite prevalence. Carefully interpreted surveillance data can be used to monitor malaria trends in response to control efforts, and to highlight areas where malaria programs and health information systems need to be strengthened. As malaria incidence declines around the world, evaluation of control efforts will increasingly rely on robust systems of routine surveillance.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Malaria is a life-threatening disease caused by the Plasmodium parasite, which is transmitted to people through the bites of infected mosquitoes. According to the latest estimates from the World Health Organization (WHO), in 2009 there were 225 million cases of malaria and an estimated 781,000 deaths worldwide—most deaths occurring among children living in the WHO African Region (mainly sub-Saharan Africa). Knowing the burden of malaria in any country is an essential component of public health planning, and accurately estimating the global burden is essential for monitoring progress towards the United Nations' Millennium Development Goals.
Currently, there are generally two approaches used to estimate malaria incidence:
One method uses routine surveillance reports of malaria cases compiled by national health ministries, which are analyzed to take into account some deficiencies in data collection, such as incomplete reporting by health facilities, the potential for overdiagnosis of malaria among patients with fever, and the use of private health facilities or none at all. The second method uses population-based surveys of Plasmodium prevalence and case incidence from selected locations in malaria endemic areas and then uses this information to generate risk maps and to estimate the case incidence of malaria per 1,000 population for all of the world's malaria endemic regions. The Malaria Atlas Project—a database of malaria epidemiology based on medical intelligence and satellite-derived climate data—uses this second method.
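The first, surveillance-based method is essentially a chain of multiplicative corrections to the reported case count. A simplified sketch of that logic (the function, its inputs, and the example values are invented; the published method applies country-specific data with uncertainty ranges):

```python
def estimate_cases(confirmed, suspected_untested, test_positivity,
                   reporting_completeness, fraction_public_sector):
    """Adjust reported malaria cases for the factors described above.

    All inputs are illustrative; the published method applies
    country-specific values with uncertainty intervals.
    """
    # 1. Add the expected parasite-positive share of untested suspected cases.
    cases_in_reporting_facilities = confirmed + suspected_untested * test_positivity
    # 2. Scale up for facilities that did not report.
    cases_public_sector = cases_in_reporting_facilities / reporting_completeness
    # 3. Scale up for patients treated outside public facilities (or not at all).
    return cases_public_sector / fraction_public_sector

# Hypothetical country: 100,000 confirmed cases, 80,000 untested suspected
# cases, 40% test positivity, 70% reporting completeness, 60% public-sector use.
print(f"{estimate_cases(100_000, 80_000, 0.40, 0.70, 0.60):,.0f} estimated cases")
```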
Why Was This Study Done?
In order for malaria epidemiology to be as accurate as possible, an evaluation of the strengths and weaknesses of both methods is necessary. In this study, the researchers analyzed the merits of the estimates calculated by using the different approaches, to highlight areas in which both methods need to be improved to provide better assessments of malaria control.
What Did the Researchers Do and Find?
The researchers estimated the number of malaria cases in 2009 for each of the 99 countries with ongoing malaria transmission, using a combination of the two methods. They used the first method for 56 malaria endemic countries outside the WHO African Region, and for nine African countries that had data of sufficient quality for the researchers' statistical model, which they devised to take the upper and lower limits of case detection into account. They used the second method for 34 countries in the African Region, classifying malaria risk into low-transmission and high-transmission categories and then deriving incidence rates from observational studies conducted in populations with no malaria control activities. For both methods, the researchers conducted a statistical analysis to determine the range of uncertainty.
Combining the two methods, the researchers found a total of 225 million malaria cases in the 99 malaria endemic countries—the majority of cases (78%) were in the WHO African region, followed by the Southeast Asian (15%) and Eastern Mediterranean regions. In Africa, there were 214 cases per 1,000 population, compared with 23 per 1,000 in the Eastern Mediterranean region and 19 per 1,000 in the Southeast Asia region. Sixteen countries accounted for 80% of all estimated cases globally—all but two of them in the African region. The researchers also found that, despite the differences between methods 1 and 2, the ratio of the upper to the lower limit for country estimates was approximately the same.
What Do These Findings Mean?
Using the combined methods, the incidence of malaria was estimated to be lower than previous estimates, particularly outside of Africa. Nevertheless the methods suggest that malaria surveillance systems currently miss the majority of cases, detecting less than 10% of those estimated to occur globally. Although the best assessment of malaria burden and trends should rely on a combination of surveillance and survey data, accurate surveillance is the ultimate goal for malaria control programs, especially as routine surveillance has advantages for estimating case incidence, spatially and through time. However, as the researchers have identified in this study, strengthening surveillance requires a critical evaluation of inherent errors and these errors must be adequately addressed in order to have confidence in estimates of malaria burden and trends, and therefore, the return on investments for malaria control programs.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001142.
This study is further discussed in a PLoS Medicine Perspective by Ivo Mueller and colleagues
The WHO provides information on malaria and produces the World Malaria Report each year, summarizing global progress in malaria control
More information is available on The Malaria Atlas Project
doi:10.1371/journal.pmed.1001142
PMCID: PMC3243721  PMID: 22205883
4.  Measuring Adult Mortality Using Sibling Survival: A New Analytical Method and New Results for 44 Countries, 1974–2006 
PLoS Medicine  2010;7(4):e1000260.
Julie Rajaratnam and colleagues describe a novel method, called the Corrected Sibling Survival method, to measure adult mortality in countries without good vital registration by use of histories taken from surviving siblings.
Background
For several decades, global public health efforts have focused on the development and application of disease control programs to improve child survival in developing populations. The need to reliably monitor the impact of such intervention programs in countries has led to significant advances in demographic methods and data sources, particularly with large-scale, cross-national survey programs such as the Demographic and Health Surveys (DHS). Although no comparable effort has been undertaken for adult mortality, the availability of large datasets with information on adult survival from censuses and household surveys offers an important opportunity to dramatically improve our knowledge about levels and trends in adult mortality in countries without good vital registration. To date, attempts to measure adult mortality from questions in censuses and surveys have generally led to implausibly low levels of adult mortality owing to biases inherent in survey data such as survival and recall bias. Recent methodological developments and the increasing availability of large surveys with information on sibling survival suggest that it may well be timely to reassess the pessimism that has prevailed around the use of sibling histories to measure adult mortality.
Methods and Findings
We present the Corrected Sibling Survival (CSS) method, which addresses both the survival and recall biases that have plagued the use of survey data to estimate adult mortality. Using logistic regression, our method directly estimates the probability of dying in a given country, by age, sex, and time period from sibling history data. The logistic regression framework borrows strength across surveys and time periods for the estimation of the age patterns of mortality, and facilitates the implementation of solutions for the underrepresentation of high-mortality families and recall bias. We apply the method to generate estimates of and trends in adult mortality, using the summary measure 45q15—the probability of a 15-y-old dying before his or her 60th birthday—for 44 countries with DHS sibling survival data. Our findings suggest that levels of adult mortality prevailing in many developing countries are substantially higher than previously suggested by other analyses of sibling history data. Generally, our estimates show the risk of adult death between ages 15 and 60 y to be about 20%–35% for females and 25%–45% for males in sub-Saharan African populations largely unaffected by HIV. In countries of Southern Africa, where the HIV epidemic has been most pronounced, as many as eight out of ten men alive at age 15 y will be dead by age 60, as will six out of ten women. Adult mortality levels in populations of Asia and Latin America are generally lower than in Africa, particularly for women. The exceptions are Haiti and Cambodia, where mortality risks are comparable to many countries in Africa. In all other countries with data, the probability of dying between ages 15 and 60 y was typically around 10% for women and 20% for men, not much higher than the levels prevailing in several more developed countries.
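The summary measure 45q15 combines age-specific death probabilities multiplicatively: surviving from age 15 to 60 means surviving each 5-year age band in turn. A small sketch of that aggregation step (the band probabilities are invented; in the CSS method they would come from the fitted regression):

```python
# Hypothetical 5-year death probabilities (5qx) for ages 15-19 ... 55-59,
# e.g. as produced by a fitted model; values invented for illustration.
five_q_x = [0.010, 0.014, 0.018, 0.022, 0.027, 0.033, 0.041, 0.052, 0.068]

survival_15_to_60 = 1.0
for q in five_q_x:
    survival_15_to_60 *= (1.0 - q)   # survive each 5-year band in turn

q45_15 = 1.0 - survival_15_to_60     # probability of dying between 15 and 60
print(f"45q15 = {q45_15:.3f}")
```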
Conclusions
Our results represent an expansion of direct knowledge of levels and trends in adult mortality in the developing world. The CSS method provides grounds for renewed optimism in collecting sibling survival data. We suggest that all nationally representative survey programs with adequate sample size ought to implement this critical module for tracking adult mortality in order to more reliably understand the levels and patterns of adult mortality, and how they are changing.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Governments and international health agencies need accurate information on births and deaths in populations to help them plan health care policies and monitor the effectiveness of public-health programs designed, for example, to prevent premature deaths from preventable causes such as tobacco smoking. In developed countries, full information on births and deaths is recorded in “vital registration systems.” Unfortunately, very few developing countries have complete vital registration systems. In most African countries, for example, less than one-quarter of deaths are counted through vital registration systems. To fill this information gap, scientists have developed several methods to estimate mortality levels (the proportion of deaths in populations) and trends in mortality (how the proportion of deaths in populations changes over time) from data collected in household surveys and censuses. A household survey collects data about family members (for example, number, age, and sex) for a national sample of households randomly selected from a list of households collected in a census (a periodic count of a population).
Why Was This Study Done?
To date, global public-health efforts have concentrated on improving child survival. Consequently, methods for calculating child mortality levels and trends from surveys are well-developed and generally yield accurate estimates. By contrast, although attempts have been made to measure adult mortality using sibling survival histories (records of the sex, age if alive, or age at death, if dead, of all the children born to survey respondents' mothers that are collected in many household surveys), these attempts have often produced implausibly low estimates of adult mortality. These low estimates arise because people do not always recall deaths accurately when questioned (recall bias) and because families that have fallen apart, possibly because of family deaths, are underrepresented in household surveys (selection bias). In this study, the researchers develop a corrected sibling survival (CSS) method that addresses the problems of selection and recall bias and use their method to estimate mortality levels and trends in 44 developing countries between 1974 and 2006.
What Did the Researchers Do and Find?
The researchers used a statistical approach called logistic regression to develop the CSS method. They then used the method to estimate the probability of a 15-year-old dying before his or her 60th birthday from sibling survival data collected by the Demographic and Health Surveys program (DHS, a project started in 1984 to help developing countries collect data on population and health trends). Levels of adult mortality estimated in this way were considerably higher than those suggested by previous analyses of sibling history data. For example, the risk of adult death between the ages of 15 and 60 years was 20%–35% for women and 25%–45% for men living in sub-Saharan African countries largely unaffected by HIV and 60% for women and 80% for men living in countries in Southern Africa where the HIV epidemic is worst. Importantly, the researchers show that their mortality level estimates compare well to those obtained from vital registration data and other data sources where available. So, for example, in the Philippines, adult mortality levels estimated using the CSS method were similar to those obtained from vital registration data. Finally, the researchers used the CSS method to estimate mortality trends. These calculations reveal, for example, that there has been a 3–4-fold increase in adult mortality since the late 1980s in Zimbabwe, a country badly affected by the HIV epidemic.
What Do These Findings Mean?
These findings suggest that the CSS method, which applies a correction for both selection and recall bias, yields more accurate estimates of adult mortality in developing countries from sibling survival data than previous methods. Given their findings, the researchers suggest that sibling survival histories should be routinely collected in all future household survey programs and, if possible, these surveys should be expanded so that all respondents are asked about sibling histories—currently the DHS only collects sibling histories from women aged 15–49 years. Widespread collection of such data and their analysis using the CSS method, the researchers conclude, would help governments and international agencies track trends in adult mortality and progress toward major health and development targets.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000260.
This study and two related PLoS Medicine Research Articles by Rajaratnam et al. and by Murray et al. are further discussed in a PLoS Medicine Perspective by Mathers and Boerma
Information is available about the Demographic and Health Surveys
The Institute for Health Metrics and Evaluation makes available high-quality information on population health, its determinants, and the performance of health systems
Grand Challenges in Global Health provides information on research into better ways for developing countries to measure their health status
The World Health Organization Statistical Information System (WHOSIS) is an interactive database that brings together core health statistics for WHO member states, including information on vital registration of deaths; the WHO Health Metrics Network is a global collaboration focused on improving sources of vital statistics
doi:10.1371/journal.pmed.1000260
PMCID: PMC2854132  PMID: 20405004
5.  Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study 
PLoS Medicine  2012;9(1):e1001164.
In a before-and-after study, Johanna Westbrook and colleagues evaluate the change in prescribing error rates after the introduction of two commercial electronic prescribing systems in two Australian hospitals.
Background
Considerable investments are being made in commercial electronic prescribing systems (e-prescribing) in many countries. Few studies have measured or evaluated their effectiveness at reducing prescribing error rates, and interactions between system design and errors are not well understood, despite increasing concerns regarding new errors associated with system use. This study evaluated the effectiveness of two commercial e-prescribing systems in reducing prescribing error rates and their propensities for introducing new types of error.
Methods and Results
We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated. Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards (respectively reductions of 66.1% [95% CI 53.9%–78.3%]; 57.5% [33.8%–81.2%]; and 60.5% [48.5%–72.4%]). The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission (95% CI 5.23–7.28) to 2.12 (95% CI 1.71–2.54; p<0.0001) and at Hospital B from 3.62 (95% CI 3.30–3.93) to 1.46 (95% CI 1.20–1.73; p<0.0001). This decrease was driven by a large reduction in unclear, illegal, and incomplete orders. The Hospital A control wards experienced no significant change (respectively −12.8% [95% CI −41.1% to 15.5%]; −11.3% [−40.1% to 17.5%]; −20.1% [−52.2% to 12.4%]). There was limited change in clinical error rates, but serious errors decreased by 44% (0.25 per admission to 0.14; p = 0.0002) across the intervention wards compared to the control wards (17% reduction; 0.30–0.25; p = 0.40). Both hospitals experienced system-related errors (0.73 and 0.51 per admission), which accounted for 35% of postsystem errors in the intervention wards; each system was associated with different types of system-related errors.
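The headline figures are simple rates and relative changes. A sketch of the arithmetic (the raw counts are invented; only the resulting rates match those reported for Hospital A):

```python
def rate_per_admission(error_count, admissions):
    """Prescribing errors per admission."""
    return error_count / admissions

def percent_reduction(before, after):
    """Relative reduction in a rate, as a percentage."""
    return 100.0 * (before - after) / before

# Illustrative counts only; the study reports 6.25 -> 2.12 errors per
# admission at Hospital A, a reduction of about 66%.
before = rate_per_admission(12_500, 2_000)   # 6.25 per admission
after = rate_per_admission(2_120, 1_000)     # 2.12 per admission
print(f"{percent_reduction(before, after):.1f}% reduction")
```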
Conclusions
Implementation of these commercial e-prescribing systems resulted in statistically significant reductions in prescribing error rates. Reductions in clinical errors were limited in the absence of substantial decision support, but a statistically significant decline in serious errors was observed. System-related errors require close attention as they are frequent, but are potentially remediable by system redesign and user training. Limitations included a lack of control wards at Hospital B and an inability to randomize wards to the intervention.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medication errors—for example, prescribing the wrong drug or giving a drug by the wrong route—frequently occur in health care settings and are responsible for thousands of deaths every year. Until recently, medicines were prescribed and dispensed using systems based on hand-written scripts. In hospitals, for example, physicians wrote orders for medications directly onto a medication chart, which was then used by the nursing staff to give drugs to their patients. However, drugs are now increasingly being prescribed using electronic prescribing (e-prescribing) systems. With these systems, prescribers use a computer and order medications for their patients with the help of a drug information database and menu items, free text boxes, and prewritten orders for specific conditions (so-called passive decision support). The system reviews the patient's medication and known allergy list and alerts the physician to any potential problems, including drug interactions (active decision support). Then after the physician has responded to these alerts, the order is transmitted electronically to the pharmacy and/or the nursing staff who administer the prescription.
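The "active decision support" step can be pictured as a rule check run before an order is transmitted. A toy sketch of the idea (the drug names, interaction table, and data layout are all invented; real systems query curated drug-knowledge databases):

```python
def check_order(order, patient):
    """Return active decision-support alerts for a new medication order.

    `patient` carries 'allergies' and 'medications' as sets of drug names;
    `interactions` is a toy lookup table. Everything here is illustrative.
    """
    interactions = {frozenset({"warfarin", "aspirin"}): "bleeding risk"}
    alerts = []
    if order in patient["allergies"]:
        alerts.append(f"ALLERGY: patient is allergic to {order}")
    for current in patient["medications"]:
        issue = interactions.get(frozenset({order, current}))
        if issue:
            alerts.append(f"INTERACTION with {current}: {issue}")
    return alerts

patient = {"allergies": {"penicillin"}, "medications": {"warfarin"}}
print(check_order("aspirin", patient))   # flags the warfarin interaction
```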
Why Was This Study Done?
By avoiding the need for physicians to write out prescriptions and by providing active and passive decision support, e-prescribing has the potential to reduce medication errors. But, even though many countries are investing in expensive commercial e-prescribing systems, few studies have evaluated the effects of these systems on prescribing error rates. Moreover, little is known about the interactions between system design and errors despite fears that e-prescribing might introduce new errors. In this study, the researchers analyze prescribing error rates in hospital in-patients before and after the implementation of two commercial e-prescribing systems.
What Did the Researchers Do and Find?
The researchers examined medication charts for procedural errors (unclear, incomplete, or illegal orders) and for clinical errors (for example, wrong drug or dose) at two Australian hospitals before and after the introduction of commercial e-prescribing systems. At Hospital A, the Cerner Millennium e-prescribing system was introduced on one ward; three other wards acted as controls. At Hospital B, the researchers compared the error rates on two wards before and after the introduction of the iSoft MedChart e-prescribing system. The introduction of an e-prescribing system was associated with a substantial reduction in error rates in the three intervention wards; error rates on the control wards did not change significantly during the study. At Hospital A, medication errors declined from 6.25 to 2.12 per admission after the introduction of e-prescribing whereas at Hospital B, they declined from 3.62 to 1.46 per admission. This reduction in error rates was mainly driven by a reduction in procedural error rates and there was only a limited change in overall clinical error rates. Notably, however, the rate of serious errors decreased across the intervention wards from 0.25 to 0.14 per admission (a 44% reduction), whereas the serious error rate only decreased by 17% in the control wards during the study. Finally, system-related errors (for example, selection of an inappropriate drug located on a drop-down menu next to a likely drug selection) accounted for 35% of errors in the intervention wards after the implementation of e-prescribing.
What Do These Findings Mean?
These findings show that the implementation of these two e-prescribing systems markedly reduced hospital in-patient prescribing error rates, mainly by reducing the number of incomplete, illegal, or unclear medication orders. The limited decision support built into both the e-prescribing systems used here may explain the limited reduction in clinical error rates but, importantly, both e-prescribing systems reduced serious medication errors. Finally, the high rate of system-related errors recorded in this study is worrying but is potentially remediable by system redesign and user training. Because this was a “real-world” study, it was not possible to choose the intervention wards randomly. Moreover, there was no control ward at Hospital B, and the wards included in the study had very different specialties. These and other aspects of the study design may limit the generalizability of these findings, which need to be confirmed and extended in additional studies. Even so, these findings provide persuasive evidence of the current and potential ability of commercial e-prescribing systems to reduce prescribing errors in hospital in-patients provided these systems are continually monitored and refined to improve their performance.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001164.
ClinfoWiki has pages on medication errors and on electronic prescribing (note: the Clinical Informatics Wiki is a free online resource that anyone can add to or edit)
Electronic prescribing in hospitals: challenges and lessons learned describes the implementation of e-prescribing in UK hospitals; more information about e-prescribing in the UK is available on the NHS Connecting for Health Website
The Clinicians Guide to e-Prescribing provides up-to-date information about e-prescribing in the USA
Information about e-prescribing in Australia is also available
Information about electronic health records in Australia is also available
doi:10.1371/journal.pmed.1001164
PMCID: PMC3269428  PMID: 22303286
6.  Validation of the Symptom Pattern Method for Analyzing Verbal Autopsy Data 
PLoS Medicine  2007;4(11):e327.
Background
Cause of death data are a critical input to formulating good public health policy. In the absence of reliable vital registration data, information collected after death from household members, called verbal autopsy (VA), is commonly used to study causes of death. VA data are usually analyzed by physician-coded verbal autopsy (PCVA). PCVA is expensive and its comparability across regions is questionable. Nearly all validation studies of PCVA have allowed physicians access to information collected from the household members' recall of medical records or contact with health services, thus exaggerating the accuracy of PCVA in communities where few deaths had any interaction with the health system. In this study, we develop and validate a statistical strategy for analyzing VA data that overcomes the limitations of PCVA.
Methods and Findings
We propose and validate a method that combines the advantages of methods proposed by King and Lu, and Byass, which we term the symptom pattern (SP) method. The SP method uses two sources of VA data. First, it requires a dataset for which we know the true cause of death, but which need not be representative of the population of interest; this dataset might come from deaths that occur in a hospital. The SP method can then be applied to a second VA sample that is representative of the population of interest. From the hospital data we compute the properties of each symptom; that is, the probability of responding yes to each symptom, given the true cause of death. These symptom properties allow us first to estimate the population-level cause-specific mortality fractions (CSMFs), and to then use the CSMFs as an input in assigning a cause of death to each individual VA response. Finally, we use our individual cause-of-death assignments to refine our population-level CSMF estimates. The results from applying our method to data collected in China are promising. At the population level, SP estimates the CSMFs with 16% average relative error and 0.7% average absolute error, while PCVA results in 27% average relative error and 1.1% average absolute error. At the individual level, SP assigns the correct cause of death in 83% of the cases, while PCVA does so for 69% of the cases. We also compare the results of SP and PCVA when both methods have restricted access to the information from the medical record recall section of the VA instrument. At the population level, without medical record recall, the SP method estimates the CSMFs with 14% average relative error and 0.6% average absolute error, while PCVA results in 70% average relative error and 3.2% average absolute error. For individual estimates without medical record recall, SP assigns the correct cause of death in 78% of cases, while PCVA does so for 38% of cases.
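At its core, the assignment step of the SP method is a Bayes calculation: the symptom properties P(symptom | cause) learned from the hospital data are combined with the population CSMFs, which act as priors, to score each candidate cause for an individual VA response. A minimal sketch assuming conditionally independent binary symptoms (a simplification; all probabilities below are invented):

```python
import math

# P(respondent answers "yes" to symptom s | true cause c), estimated from
# the hospital ("gold standard") dataset. Values invented for illustration.
symptom_props = {
    "tb":     {"cough": 0.90, "fever": 0.60, "chest_pain": 0.70},
    "stroke": {"cough": 0.10, "fever": 0.15, "chest_pain": 0.20},
}
csmf = {"tb": 0.3, "stroke": 0.7}   # current population-level estimates

def assign_cause(responses):
    """Most probable cause for one VA response (dict of symptom -> bool)."""
    scores = {}
    for cause, props in symptom_props.items():
        log_p = math.log(csmf[cause])            # the CSMF acts as the prior
        for symptom, answered_yes in responses.items():
            p_yes = props[symptom]
            log_p += math.log(p_yes if answered_yes else 1.0 - p_yes)
        scores[cause] = log_p
    return max(scores, key=scores.get)

print(assign_cause({"cough": True, "fever": True, "chest_pain": False}))
```

The population CSMFs would then be re-estimated from the resulting individual assignments and the loop repeated, as the abstract describes.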
Conclusions
Our results from the data collected in China suggest that the SP method outperforms PCVA, both at the population and especially at the individual level. Further study is needed on additional VA datasets in order to continue validation of the method, and to understand how the symptom properties vary as a function of culture, language, and other factors. Our results also suggest that PCVA relies heavily on household recall of medical records and related information, limiting its applicability in low-resource settings. SP does not require that additional information to adequately estimate causes of death.
Chris Murray and colleagues propose and, using data from China, validate a new strategy for analyzing verbal autopsy data that combines the advantages of previous methods.
Editors' Summary
Background.
All countries need to know the leading causes of death among their people. Only with accurate cause-of-death data can their public-health officials and medical professionals develop relevant health policies and programs and monitor how they affect the nation's health. In developed countries, vital registration systems record specific causes of death that have been certified by doctors for most deaths. But, in developing countries, vital registration systems are rarely anywhere near complete, a situation that is unlikely to change in the near future. An approach that is being used increasingly to get information on the patterns of death in poor countries is “verbal autopsy” (VA). Trained personnel interview household members about the symptoms the deceased had before his/her death, and the circumstances surrounding the death, using a standard form. These forms are then reviewed by a doctor, who assigns a cause of death from a list of codes called the International Classification of Diseases. This process is called physician-coded verbal autopsy (PCVA).
Why Was This Study Done?
PCVA is a costly, time-consuming way of analyzing VA data and may not be comparable across regions, because it relies on the views of local doctors about the likely causes of death. In addition, although several studies have suggested that PCVA is reasonably accurate, such studies have usually included information collected from household members about medical records or contacts with health services. In regions where there is little contact with health services, PCVA may be much more inaccurate. Ideally what is needed is a method for assigning causes of death from VA data that does not involve physician review. In this study, the researchers have developed a statistical method—the symptom pattern (SP) method—for analyzing VA data and asked whether it can overcome the limitations of PCVA.
What Did the Researchers Do and Find?
The SP method uses VA data collected about a group of patients for whom the true cause of death is known to calculate the probability for each cause of death that a household member will answer yes when asked about various symptoms. These so-called “symptom properties” can be used to calculate population cause-specific mortality fractions (CSMFs—the proportion of the population that dies from each disease) from VA data and, using a type of statistical analysis called Bayesian statistics, can be used to assign causes of death to individuals. When used with data from a VA study done in China, the SP method estimated population CSMFs with an average relative error of 16% (this measure indicates how much the estimated and true CSMFs deviate), whereas PCVA estimated them with an average relative error of 27%. At the individual level, the SP method assigned the correct cause of death in 83% of cases; PCVA was right only 69% of the time. Removing the medical record recall section of the VA data had little effect on the accuracy with which the two methods estimated population CSMFs. However, whereas the SP method still assigned the correct cause of death in 78% of individual cases, PCVA did so in only 38% of cases.
What Do These Findings Mean?
These findings suggest that the SP method for analyzing VA data can outperform PCVA at both the population and the individual level. In particular, the SP method may be much better than PCVA at assigning the cause of death for individuals who have had little contact with health services before dying, a common situation in the poorest regions of the world. The SP method needs to be validated using data from other parts of the world and also needs to be tested in multi-country validation studies to build up information about how culture and language affect the likelihood of specific symptoms being reported in VAs for each cause of death. Provided the SP method works as well in other countries as it apparently does in China, its adoption, together with improvements in how VA data are collected, has the potential to improve the accuracy of cause-of-death data in developing countries.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040327.
• An accompanying paper by Murray and colleagues describes an alternative approach to collecting accurate cause-of-death data in developing countries
• World Health Organization provides information on health statistics and health information systems, on the International Classification of Diseases, on the Health Metrics Network, a global collaboration focused on improving sources of vital statistics and cause-of-death data, and on verbal autopsy standards
• Grand Challenges in Global Health provides information on research into better ways for developing countries to measure their health status
doi:10.1371/journal.pmed.0040327
PMCID: PMC2080648  PMID: 18031196
7.  Cardiovascular magnetic resonance imaging of isolated perfused pig hearts in a 3T clinical MR scanner 
Purpose
An isolated perfused pig heart model has recently been proposed for the development of novel methods in standard clinical magnetic resonance (MR) scanners. The original set-up required the electrical system to be within the safe part of the MR-room, which introduced significant background noise. The purpose of the current work was to refine the system to overcome this limitation so that all electrical parts are completely outside the scanner room.
Methods
Four pig hearts were explanted under terminal anaesthesia from large white cross landrace pigs. All hearts underwent cardiovascular magnetic resonance (CMR) scanning in the MR part of a novel combined 3T MR and x-ray fluoroscopy (XMR) suite. CMR scanning included real-time k-t SENSE functional imaging, k-t SENSE accelerated perfusion imaging and late gadolinium enhancement imaging. Interference with image quality was assessed by spurious echo imaging and compared to noise levels acquired while operating the electrical parts within the scanner room.
Results
Imaging was performed successfully in all hearts. The system proved suitable for isolated heart perfusion in a novel 3T XMR suite. No significant additional noise was introduced into the scanner room by our set-up.
Conclusions
We have substantially improved a previous version of an isolated perfused pig heart model and made it applicable for MR imaging in a state-of-the-art clinical 3T XMR imaging suite. The use of this system should aid novel CMR sequence development and its translation into clinical practice.
doi:10.1556/IMAS.4.2012.4.3
PMCID: PMC3831784  PMID: 24265875
cardiovascular magnetic resonance imaging; coronary artery disease; isolated heart perfusion; Langendorff; pig; translational research
8.  Implementation of Statistical Process Control for Proteomic Experiments via LC MS/MS 
Statistical process control (SPC) is a robust set of tools that aids in the visualization, detection, and identification of assignable causes of variation in any process that creates products, services, or information. A tool termed Statistical Process Control in Proteomics (SProCoP) has been developed that implements aspects of SPC (e.g., control charts and Pareto analysis) in the Skyline proteomics software. It monitors five quality control metrics in a shotgun or targeted proteomic workflow, none of which require peptide identification. The source code, written in the R statistical language, runs directly from the Skyline interface, which supports the use of raw data files from several mass spectrometry vendors. The tool provides real-time evaluation of chromatographic performance (e.g., retention time reproducibility, peak asymmetry, and resolution) and of mass spectrometric performance (targeted peptide ion intensity, and mass measurement accuracy for high resolving power instruments) via control charts. Thresholds are experiment- and instrument-specific and are determined empirically from user-defined quality control standards, enabling the separation of random noise from systematic error. Finally, Pareto analysis provides a summary of performance metrics and guides the user to those with high variance. The utility of these charts for evaluating proteomic experiments is illustrated in two case studies.
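The control-chart logic here is conventional Shewhart monitoring: limits are set empirically from QC standard runs, and later observations falling outside them signal systematic error rather than random noise. A standalone sketch of that idea (SProCoP itself is an R tool running inside Skyline; the values below are invented):

```python
import statistics

def control_limits(qc_values, n_sigma=3.0):
    """Empirical Shewhart limits from user-defined QC standard runs."""
    mean = statistics.fmean(qc_values)
    sd = statistics.stdev(qc_values)
    return mean - n_sigma * sd, mean + n_sigma * sd

# Hypothetical retention times (minutes) for a QC peptide across runs.
baseline = [21.4, 21.6, 21.5, 21.3, 21.5, 21.6, 21.4, 21.5]
new_runs = [21.5, 21.7, 22.9]   # the last run has drifted

low, high = control_limits(baseline)
for i, rt in enumerate(new_runs, 1):
    status = "OK" if low <= rt <= high else "OUT OF CONTROL"
    print(f"run {i}: {rt:.1f} min -> {status}")
```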
doi:10.1007/s13361-013-0824-5
PMCID: PMC4020592  PMID: 24496601
Quality Control; Statistical Process Control; Proteomics; Mass Spectrometry; Shewhart Control Charts
9.  Case-mix and the use of control charts in monitoring mortality rates after coronary artery bypass 
Background
There is debate about the respective roles of crude and case-mix-adjusted mortality rates in monitoring the outcomes of treatment. In the context of quality improvement, a key purpose of monitoring is to identify special cause variation, as this type of variation should be investigated to identify possible causes. This paper investigates agreement between the identification of special cause variation in risk-adjusted and observed (crude) hospital-specific mortality rates after coronary artery bypass grafting in New York hospitals.
Methods
Coronary artery bypass grafting mortality rates between 1994 and 2003 were obtained from the New York State Department of Health's cardiovascular reports for 41 hospitals. Cross-sectional control charts of crude (observed) and risk adjusted mortality rates were produced for each year. Special cause variation was defined as a data point beyond the 99.9% probability limits: hospitals showing special cause variation were identified for each year. Longitudinal control charts of crude (observed) and risk adjusted mortality rates were produced for each hospital with data for all ten years (n = 27). Special cause variation was defined as a data point beyond 99.9% probability limits, two out of three consecutive data points beyond 95% probability limits (two standard deviations from the mean) or a run of five consecutive points on one side of the mean. Years showing special cause variation in mortality were identified for each hospital. Cohen's Kappa was calculated for agreement between special causes identified in crude and risk-adjusted control charts.
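These longitudinal signal definitions translate directly into three checks per series. A sketch, using the paper's stated equivalence of the 95% limits with 2 standard deviations and, as an assumption for illustration, 3.29 standard deviations for the 99.9% limits:

```python
import statistics

def special_causes(rates, mean, sd):
    """Flag indices showing special cause variation under the three rules.

    `mean` and `sd` are the chart's centre line and spread, here supplied
    from a baseline (phase I) period, as is usual for control charts.
    """
    flagged = set()
    for i, r in enumerate(rates):
        # Rule 1: a single point beyond the 99.9% probability limits.
        if abs(r - mean) > 3.29 * sd:
            flagged.add(i)
        # Rule 2: two of three consecutive points beyond the 95% limits.
        if i >= 2 and sum(abs(x - mean) > 2 * sd for x in rates[i-2:i+1]) >= 2:
            flagged.update(range(i - 2, i + 1))
        # Rule 3: a run of five consecutive points on one side of the mean.
        if i >= 4:
            window = rates[i-4:i+1]
            if all(x > mean for x in window) or all(x < mean for x in window):
                flagged.update(range(i - 4, i + 1))
    return sorted(flagged)

# Hypothetical annual CABG mortality rates (%) for one hospital; the
# centre line and SD come from a stable baseline period (values invented).
baseline = [2.1, 2.3, 2.0, 2.2, 2.1, 2.4, 2.2, 2.3]
series = [2.2, 2.1, 2.3, 3.9]
print(special_causes(series, statistics.fmean(baseline), statistics.stdev(baseline)))
```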
Results
In the cross-sectional analysis, Cohen's Kappa was 0.54 (95% confidence interval: 0.28 to 0.78), indicating moderate agreement between the crude and risk-adjusted control charts, with sensitivity 0.40 (95% confidence interval: 0.17–0.69) and specificity 0.98 (95% confidence interval: 0.95–0.99). In the longitudinal analysis, Cohen's Kappa was 0.61 (95% confidence interval: 0.39 to 0.83), indicating good agreement between the tests, with sensitivity 0.63 (95% confidence interval: 0.39–0.82) and specificity 0.98 (95% confidence interval: 0.96 to 0.99).
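Cohen's Kappa compares observed agreement with the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A worked sketch on a hypothetical 2×2 table of paired chart signals (the counts are invented, chosen so the result lands near the cross-sectional value reported above):

```python
def cohens_kappa(both_signal, crude_only, adjusted_only, neither):
    """Kappa for agreement between crude and risk-adjusted chart signals."""
    n = both_signal + crude_only + adjusted_only + neither
    p_observed = (both_signal + neither) / n
    p_crude = (both_signal + crude_only) / n          # crude chart signals
    p_adjusted = (both_signal + adjusted_only) / n    # adjusted chart signals
    p_expected = p_crude * p_adjusted + (1 - p_crude) * (1 - p_adjusted)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts for 41 paired chart readings (values invented).
print(f"kappa = {cohens_kappa(4, 2, 3, 32):.2f}")
```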
Conclusion
There is moderate-to-good agreement between signals of special cause variation in observed and risk-adjusted mortality rates. Analysis of observed hospital-specific CABG mortality, over time and in comparison with other hospitals, appears useful for identifying special causes of variation. Case-mix adjustment may not be essential for longitudinal monitoring of outcomes using control charts.
doi:10.1186/1472-6963-7-63
PMCID: PMC1867815  PMID: 17470276
10.  Automated Detection of Infectious Disease Outbreaks in Hospitals: A Retrospective Cohort Study 
PLoS Medicine  2010;7(2):e1000238.
Susan Huang and colleagues describe an automated statistical software, WHONET-SaTScan, its application in a hospital, and the potential it has to identify hospital infection clusters that had escaped routine detection.
Background
Detection of outbreaks of hospital-acquired infections is often based on simple rules, such as the occurrence of three new cases of a single pathogen in two weeks on the same ward. These rules typically focus on only a few pathogens, and they do not account for the pathogens' underlying prevalence, the normal random variation in rates, and clusters that may occur beyond a single ward, such as those associated with specialty services. Ideally, outbreak detection programs should evaluate many pathogens, using a wide array of data sources.
Methods and Findings
We applied a space-time permutation scan statistic to microbiology data from patients admitted to a 750-bed academic medical center in 2002–2006, using WHONET-SaTScan laboratory information software from the World Health Organization (WHO) Collaborating Centre for Surveillance of Antimicrobial Resistance. We evaluated patients' first isolates for each potential pathogenic species. In order to evaluate hospital-associated infections, only pathogens first isolated >2 d after admission were included. Clusters were sought daily across the entire hospital, as well as within hospital wards and specialty services, and among isolates with similar antimicrobial susceptibility profiles. We assessed clusters that had a likelihood of occurring by chance less than once per year. For methicillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant enterococci (VRE), WHONET-SaTScan–generated clusters were compared to those previously identified by the Infection Control program, which were based on a rule-based criterion of three occurrences in two weeks in the same ward. Two hospital epidemiologists independently classified each cluster's importance. From 2002 to 2006, WHONET-SaTScan found 59 clusters involving 2–27 patients (median 4). Clusters were identified by antimicrobial resistance profile (41%), wards (29%), service (13%), and hospital-wide assessments (17%). WHONET-SaTScan rapidly detected the two previously known gram-negative pathogen clusters. Compared to rule-based thresholds, WHONET-SaTScan considered only one of 73 previously designated MRSA clusters and 0 of 87 VRE clusters as episodes statistically unlikely to have occurred by chance. WHONET-SaTScan identified six MRSA and four VRE clusters that were previously unknown. Epidemiologists considered more than 95% of the 59 detected clusters to merit consideration, with 27% warranting active investigation or intervention.
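For contrast, the rule-based criterion that WHONET-SaTScan was benchmarked against is easy to state in code: three occurrences of one pathogen on one ward within two weeks. A sketch of that detector (the data layout is invented; note it has none of the scan statistic's adjustments for baseline prevalence or chance):

```python
from datetime import date, timedelta

def rule_based_clusters(isolates, window_days=14, threshold=3):
    """Flag (pathogen, ward) pairs with >= threshold cases in any window.

    `isolates` is a list of (pathogen, ward, isolation_date) tuples,
    one per patient first-isolate.
    """
    clusters = set()
    by_group = {}
    for pathogen, ward, day in isolates:
        by_group.setdefault((pathogen, ward), []).append(day)
    for key, days in by_group.items():
        days.sort()
        for i in range(len(days) - threshold + 1):
            if days[i + threshold - 1] - days[i] <= timedelta(days=window_days):
                clusters.add(key)
                break
    return clusters

# Hypothetical isolates (pathogen, ward, date); values invented.
isolates = [
    ("MRSA", "ICU", date(2006, 3, 1)),
    ("MRSA", "ICU", date(2006, 3, 6)),
    ("MRSA", "ICU", date(2006, 3, 12)),
    ("VRE",  "ICU", date(2006, 3, 1)),
    ("VRE",  "ICU", date(2006, 4, 20)),
]
print(rule_based_clusters(isolates))   # {('MRSA', 'ICU')}
```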
Conclusions
Automated statistical software identified hospital clusters that had escaped routine detection. It also classified many previously identified clusters as events likely to occur because of normal random fluctuations. This automated method has the potential to provide valuable real-time guidance both by identifying otherwise unrecognized outbreaks and by preventing the unnecessary implementation of resource-intensive infection control measures that interfere with regular patient care.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Admission to a hospital is often a life-saving necessity—individuals injured in a road accident, for example, may need immediate medical and surgical attention if they are to survive. Unfortunately, many patients acquire infections, some of which are life-threatening, during their stay in a hospital. The World Health Organization has estimated that, globally, 8.7% of hospital patients develop hospital-acquired infections (infections that are identified more than two days after admission to hospital). In the US alone, 2 million people develop a hospital-acquired infection every year, often an infection of a surgical wound, or a urinary tract or lung infection. Infections are common among hospital patients because increasing age or underlying illnesses can reduce immunity to infection and because many medical and surgical procedures bypass the body's natural protective barriers. In addition, poor infection control practices can facilitate the transmission of bacteria—including meticillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE)—and other infectious agents (pathogens) between patients.
Why Was This Study Done?
Sometimes, the number of cases of hospital-acquired infections increases unexpectedly or a new infection emerges. Such clusters account for relatively few health care–associated infections, but, because they may arise from the transmission of a pathogen within a hospital, they need to be rapidly identified and measures implemented (for example, isolation of affected patients) to stop transmission if an outbreak is confirmed. Currently, the detection of clusters of hospital-acquired infections is based on simple rules, such as the occurrence of three new cases of a single pathogen in two weeks on the same ward. This rule-based approach relies on the human eye to detect infection clusters within microbiology data (information collected on the pathogens isolated from patients), it focuses on a few pathogens, and it does not consider the random variation in infection rates or the possibility that clusters might be associated with shared facilities rather than with individual wards. In this study, the researchers test whether an automated statistical system can detect outbreaks of hospital-acquired infections quickly and accurately.
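The rule-based criterion mentioned above is simple enough to state in a few lines of code, which also makes plain what it ignores (baseline rates, shared facilities, and chance). A minimal Python sketch, assuming hypothetical isolate records of the form (date, ward, organism):

    # Minimal sketch of the rule-based criterion: flag three or more new cases
    # of one pathogen on the same ward within 14 days. Records are invented;
    # a real system would read them from the microbiology laboratory system.
    from collections import defaultdict
    from datetime import date, timedelta

    isolates = [
        (date(2006, 3, 1), "ICU", "MRSA"),
        (date(2006, 3, 5), "ICU", "MRSA"),
        (date(2006, 3, 12), "ICU", "MRSA"),   # third case within 14 days
    ]

    def rule_based_alerts(records, window_days=14, threshold=3):
        by_key = defaultdict(list)
        for day, ward, organism in records:
            by_key[(ward, organism)].append(day)
        alerts = []
        for (ward, organism), days in by_key.items():
            days.sort()
            for i in range(len(days) - threshold + 1):
                if days[i + threshold - 1] - days[i] <= timedelta(days=window_days):
                    alerts.append((ward, organism, days[i]))
                    break
        return alerts

    print(rule_based_alerts(isolates))   # [('ICU', 'MRSA', datetime.date(2006, 3, 1))]

Note that the rule fires identically whether a ward sees one MRSA isolate a year or one a week; the statistical approach tested in this study exists precisely to supply that missing baseline.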
What Did the Researchers Do and Find?
The researchers combined two software packages used to track diseases in populations to create the WHONET-SaTScan cluster detection tool. They then compared the clusters of hospital-acquired infection identified by the new tool in microbiology data from a 750-bed US academic medical center with those generated by the hospital's infection control program, which was largely based on the simple rule described above. WHONET-SaTScan found 59 clusters of infection that occurred between 2002 and 2006, about three-quarters of which were identified by characteristics other than a ward-based location. Nearly half the cluster alerts were generated on the basis of shared antibiotic susceptibility patterns. Although WHONET-SaTScan identified all the clusters previously identified by the hospital's infection control program, it classified most of these clusters as likely to be the result of normal random variations in infection rates rather than the result of “true” outbreaks. By contrast, the hospital's infection control department only identified three of the 59 statistically significant clusters identified by WHONET-SaTScan. Furthermore, the new tool identified six previously unknown MRSA outbreaks and four previously unknown VRE outbreaks. Finally, two hospital epidemiologists (scientists who study diseases in populations) classified 95% of the clusters detected by WHONET-SaTScan as worthy of consideration by the hospital infection control team and a quarter of the clusters as warranting active investigation or intervention.
What Do These Findings Mean?
These findings suggest that automated statistical software should be able to detect clusters of hospital-acquired infections that would escape detection using routine rule-based systems. Importantly, they also suggest that an automated system would be able to discount a large number of supposed outbreaks identified by rule-based systems. These findings need to be confirmed in other settings and in prospective studies in which the outcomes of clusters detected with WHONET-SaTScan are carefully analyzed. For now, however, these findings suggest that automated statistical tools could provide hospital infection control experts with valuable real-time guidance by identifying outbreaks that would be missed by routine detection methods and by preventing the implementation of intensive and costly infection control measures in situations where they are unnecessary.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000238.
The World Health Organization's Prevention of Hospital-Acquired Infections, A Practical Guide contains detailed information on all aspects of hospital-acquired infections
MedlinePlus provides links to information on infection control in hospitals (in English and Spanish)
The US Centers for Disease Control and Prevention also provides information on infectious diseases in health care settings (in English and Spanish)
The WHONET/BacLink software and the SaTScan software, the two components of WHONET-SaTScan, are both available on the internet (the WHONET-SaTScan cluster detection tool is freely available as part of the version of WHONET/BacLink released in June 2009)
doi:10.1371/journal.pmed.1000238
PMCID: PMC2826381  PMID: 20186274
11.  BMI and Risk of Serious Upper Body Injury Following Motor Vehicle Crashes: Concordance of Real-World and Computer-Simulated Observations 
PLoS Medicine  2010;7(3):e1000250.
Shankuan Zhu and colleagues use computer crash simulations, as well as real-world data, to evaluate whether driver obesity is associated with greater risk of body injury in motor vehicle crashes.
Background
Men tend to have more upper body mass and fat than women, a physical characteristic that may predispose them to severe motor vehicle crash (MVC) injuries, particularly in certain body regions. This study examined MVC-related regional body injury and its association with the presence of driver obesity using both real-world data and computer crash simulation.
Methods and Findings
Real-world data were from the 2001 to 2005 National Automotive Sampling System Crashworthiness Data System. A total of 10,941 drivers aged 18 years or older who were involved in frontal collisions were eligible for the study. Sex-specific logistic regression models were developed to analyze the associations between MVC injury and the presence of driver obesity. In order to confirm the findings from real-world data, computer models of obese subjects were constructed and crash simulations were performed. According to the real-world data, obese men had a substantially higher risk of injury, especially serious injury, to the upper body regions, including the head, face, thorax, and spine, than normal-weight men (all p<0.05). A U-shaped relation was found between body mass index (BMI) and serious injury in the abdominal region for both men and women (p<0.05 for both BMI and BMI²). In the high-BMI range, men were more likely to be seriously injured than were women for all body regions except the extremities and abdominal region (all p<0.05 for interaction between BMI and sex). The findings from the computer simulation were generally consistent with the real-world results in the present study.
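The U-shaped relation corresponds to a logistic model containing both linear and quadratic BMI terms, in which the U-shape appears as a significant positive coefficient on BMI². A minimal sketch of that model form on simulated data (the study itself fitted sex-specific models to NASS-CDS records with additional covariates):

    # Logistic regression with linear and quadratic BMI terms; simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 5000
    df = pd.DataFrame({"bmi": rng.normal(27, 5, n)})
    # simulate a U-shaped risk of serious abdominal injury, lowest near BMI 25
    logit = -2.5 + 0.01 * (df["bmi"] - 25) ** 2
    df["serious_injury"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = smf.logit("serious_injury ~ bmi + I(bmi**2)", data=df).fit()
    print(model.summary())   # a positive, significant BMI**2 term marks the U-shape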
Conclusions
Obese men endured a much higher risk of injury to upper body regions during MVCs. This higher risk may be attributed to differences in body shape, fat distribution, and center of gravity between obese and normal-weight subjects, and between men and women.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Worldwide, accidents involving motor vehicles kill 1.2 million people and injure as many as 50 million people every year. Collisions between motor vehicles, between vehicles and stationary objects, or between vehicles and pedestrians are responsible for one in 50 deaths and are the 11th leading cause of death globally. Many factors contribute to the risk of motor traffic accidents and the likelihood of subsequent injury or death. These risk factors include vehicle design, vehicle speeds, road design, driver impairment through, for example, alcohol use, and other driver characteristics such as age. Faced with an ever-increasing death toll on their roads, many countries have introduced lower speed limits, mandatory seat belt use, and greater penalties for drunk driving to reduce the carnage. Road design and traffic management initiatives have also been introduced to try to reduce the incidence of road traffic accidents, and cars now include many features, such as airbags and crumple zones, that protect their occupants in crashes.
Why Was This Study Done?
Although these measures have reduced the number of crashes and casualties, a better understanding of the risk factors associated with motor vehicle crashes is needed to deal with this important public-health problem. Another major public-health problem is obesity—having excess body fat. Obesity increases the risk of heart disease and diabetes but also contributes to the severity of motor vehicle crash injuries. Men with a high body mass index (an individual's weight in kilograms divided by height in meters squared; a BMI of 30 or more indicates obesity) have a higher risk of death after a motor vehicle accident than men with a normal BMI (18.5–24.9). This association between death and obesity is not seen in women, however, possibly because men and women accumulate fat on different parts of their body and the resultant difference in body shape could affect how male and female bodies move during traffic collisions and how much protection existing car safety features afford them. In this study, therefore, the researchers investigated how driver obesity affects the risk of serious injuries in different parts of the body following real and simulated motor vehicle crashes in men and women.
What Did the Researchers Do and Find?
The researchers extracted data about injuries and BMIs for nearly 11,000 adult men and women who were involved in a frontal motor vehicle collision between 2001 and 2005 from the Crashworthiness Data System of the US National Automotive Sampling System. They then used detailed statistical methods to look for associations between specific injuries and driver obesity. The researchers also constructed computer models of obese drivers and subjected these models to simulated crashes. Their analysis of the real-world data showed that obese men had a substantially higher risk of injury to the upper body (the head, face, chest, and spine) than men with a normal weight. Serious injury in the abdominal region was most likely at low and high BMIs for both men and women. Finally, obese men were more likely to be seriously injured than obese women for all body regions except the extremities and the abdominal region. The researchers' computer simulations confirmed many of these real-world findings.
What Do These Findings Mean?
These findings suggest that obese men have a higher risk of injury, particularly to their upper body, from motor vehicle crashes than men with a normal body weight or than obese women. The researchers suggest that this higher risk may be attributed to differences in body shape, fat distribution, and center of gravity between obese and normal weight individuals and between men and women. These findings, although limited by missing data, suggest that motor vehicle safety features should be adjusted to take into account the ongoing obesity epidemic. Currently, two-thirds of people in the US are overweight or obese, yet a crash test dummy with a normal BMI is still used during the design of car cabins. Finally, although more studies are needed to understand the biomechanical responses of the human body during vehicle collisions, the findings in this study could aid the identification of groups of people at particularly high risk of injury or death on the roads who could then be helped to reduce their risk.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000250.
Wikipedia has a page on traffic collision (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information about road traffic injuries as a public-health problem; its World report on road traffic injury prevention is available in several languages
The US Centers for Disease Control and Prevention provides detailed information about overweight and obesity (in several languages)
MedlinePlus provides links to further resources about obesity (in English and Spanish)
The US National Automotive Sampling System Crashworthiness Data System contains detailed data on thousands of US motor vehicle crashes
doi:10.1371/journal.pmed.1000250
PMCID: PMC2846859  PMID: 20361024
12.  Statistical process control of mortality series in the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database: implications of the data generating process 
Background
Statistical process control (SPC), an initiative from the industrial sphere, has recently been applied to health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increased false-alarm frequency.
Methods
Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual intensive care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence was sought for (i) series autocorrelation and seasonality, using (partial) autocorrelation function ((P)ACF) displays and classical series decomposition, and (ii) “in-control” status, using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random-coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Time-series methods were applied to an exemplar complete ICU series (1995 to end-2009) via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance.
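For orientation, the EWMA recursion and its 3-sigma limits can be written compactly. The sketch below smooths a simulated (unadjusted) monthly mortality series with an assumed weight lambda = 0.2; the study instead derived limits around the risk-adjusted expected mortality series.

    # EWMA control chart for a monthly mortality series (simulated, unadjusted).
    import numpy as np

    rng = np.random.default_rng(2)
    p = rng.normal(0.14, 0.02, 180)          # 15 years of monthly raw mortality
    lam = 0.2                                # smoothing weight (assumed)
    target, sigma = p.mean(), p.std(ddof=1)  # in practice, from in-control data

    z = np.empty_like(p)
    z[0] = target
    for t in range(1, len(p)):
        z[t] = lam * p[t] + (1 - lam) * z[t - 1]      # EWMA recursion

    t = np.arange(1, len(p) + 1)
    half_width = 3 * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    signals = np.where((z > target + half_width) | (z < target - half_width))[0]
    print(signals)                           # months signalling out-of-control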
Results
The overall data set, 1995–2009, consisted of 491,324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247), respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag 40, and 35% had autocorrelation through to lag 40; of 36 sites with continuous data for ≥72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model with GARCH effects displayed white-noise residuals, which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model.
Conclusions
The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.
doi:10.1186/1471-2288-13-66
PMCID: PMC3697995  PMID: 23705957
Statistical process control; Time series; Autocorrelation; Seasonality; Volatility; Exponentially weighted moving average smoothing; Autoregressive moving average models; GARCH models
13.  Progress toward Global Reduction in Under-Five Mortality: A Bootstrap Analysis of Uncertainty in Millennium Development Goal 4 Estimates 
PLoS Medicine  2012;9(12):e1001355.
Leontine Alkema and colleagues use a bootstrap procedure to assess the uncertainty around the estimates of the under-five mortality rate produced by the United Nations Inter-Agency Group for Child Mortality Estimation.
Background
Millennium Development Goal 4 calls for an annual rate of reduction (ARR) of the under-five mortality rate (U5MR) of 4.4% between 1990 and 2015. Progress is measured through the point estimates of the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME). To facilitate evidence-based conclusions about progress toward the goal, we assessed the uncertainty in the estimates arising from sampling errors and biases in data series and the inferior quality of specific data series.
Methods and Findings
We implemented a bootstrap procedure to construct 90% uncertainty intervals (UIs) for the U5MR and ARR to complement the UN IGME estimates. We constructed the bounds for all countries without a generalized HIV epidemic, where a standard estimation approach is carried out (174 countries). In the bootstrap procedure, potential biases in levels and trends of data series of different source types were accounted for. There is considerable uncertainty about the U5MR, particularly for high mortality countries and in recent years. Among 86 countries with a U5MR of at least 40 deaths per 1,000 live births in 1990, the median width of the UI, relative to the U5MR level, was 19% for 1990 and 48% for 2011, with the increase in uncertainty due to more limited data availability. The median absolute width of the 90% UI for the ARR from 1990 to 2011 was 2.2%. Although the ARR point estimate for all high mortality countries was greater than zero, for eight of them uncertainty included the possibility of no improvement between 1990 and 2011. For 13 countries, it is deemed likely that the ARR from 1990 to 2011 exceeded 4.4%.
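Stripped of the source-specific bias modelling, the heart of such a procedure is resampling the data series and recomputing the quantity of interest. A toy sketch of a 90% percentile interval for the ARR, with an invented survey series; here the ARR is taken as the fitted annual log-decline of the U5MR, in percent.

    # Percentile-bootstrap 90% uncertainty interval for the ARR (toy example).
    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1990, 2012)
    # invented survey series: ~3% annual decline plus multiplicative noise
    u5mr = 120 * np.exp(-0.03 * (years - 1990)) * rng.lognormal(0, 0.08, years.size)

    def arr(y, v):
        slope, _ = np.polyfit(y, np.log(v), 1)   # log-linear trend
        return -100 * slope                      # annual rate of reduction, %

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, years.size, years.size)   # resample data points
        boot.append(arr(years[idx], u5mr[idx]))
    lo, hi = np.percentile(boot, [5, 95])
    print(f"ARR {arr(years, u5mr):.2f}%, 90% UI ({lo:.2f}%, {hi:.2f}%)")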
Conclusions
In light of the upcoming evaluation of Millennium Development Goal 4 in 2015, uncertainty assessments need to be taken into account to avoid unwarranted conclusions about countries' progress based on limited data.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In September 2000, world leaders adopted the United Nations Millennium Declaration, committing member states (countries) to a new global partnership to reduce extreme poverty and improve global health by setting out a series of time-bound targets with a deadline of 2015—the Millennium Development Goals (MDGs). There are eight MDGs and the fourth, MDG 4, focuses on reducing the number of deaths in children aged under five years by two-thirds from the 1990 level. Monitoring progress towards meeting all of the MDG targets is of vital importance to measure the effectiveness of interventions and to prioritize slow progress areas. MDG 4 has three specific indicators, and every year, the United Nations Inter-agency Group for Child Mortality Estimation (the UN IGME, which includes the key agencies the United Nations Children's Fund, the World Health Organization, the World Bank, and the United Nations Population Division) produces and publishes estimates of child death rates for all countries.
Why Was This Study Done?
Many poorer countries do not have the infrastructure and the functioning vital registration systems in place to record the number of child deaths. Therefore, it is difficult to accurately assess levels and trends in the rate of child deaths because there is limited information (data) or because the data that exists may be inaccurate or of poor quality. In order to deal with this situation, analyzing trends in under-five child death rates (to show progress towards MDG 4) currently focuses on the “best” estimates from countries, a process that relies on “point” estimates. But this practice can lead to inaccurate results and comparisons. It is therefore important to identify a framework for calculating the uncertainty surrounding these estimates. In this study, the researchers use a statistical method to calculate plausible uncertainty intervals for the estimates of death rates in children aged under five years and the yearly reduction in those rates.
What Did the Researchers Do and Find?
The researchers used the publicly available information from the UN IGME 2012 database, which collates data from a variety of sources, and a statistical method called bootstrapping to construct uncertainty levels for 174 countries out of 195 countries for which the UN IGME published estimates in 2012. This new method improves current practice for estimating the extent of data errors, as it takes into account the structure and (potentially poor) quality of the data. The researchers used 90% as the uncertainty level and categorized countries according to the likelihood of meeting the MDG 4 target.
Using these methods, the researchers found that in countries with high child mortality rates (40 or more deaths per 1,000 children in 1990), there was a lot of uncertainty (wide uncertainty intervals) about the levels and trends of death rates in children aged under five years, especially more recently, because of the limited availability of data. Overall, in 2011 the median width of the uncertainty interval for the child death rate was 48% among the 86 countries with high death rates, compared to 19% in 1990. Using their new method, the researchers found that for eight countries, it is not clear whether any progress had been made in reducing child mortality, but for 13 countries, it is deemed likely that progress exceeded the MDG 4 target.
What Do These Findings Mean?
These findings suggest that new uncertainty assessments constructed by a statistical method called bootstrapping can provide more insights into countries' progress in reducing child mortality and meeting the MDG 4 target. As demonstrated in this study, when data are limited, uncertainty intervals should be taken into account when estimating progress towards MDG 4 in order to give more accurate assessments of a country's progress, thus allowing for more realistic comparisons and conclusions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001355.
The UN website has more information about the Millennium Development Goals, including country-specific data
More information is available from UNICEF's ChildInfo website about the UN IGME and child mortality
All UN IGME child mortality estimates and data are available via CME Info
Countdown to 2015 tracks coverage levels for health interventions proven to reduce child mortality and proposes new actions to reach MDG 4
doi:10.1371/journal.pmed.1001355
PMCID: PMC3519895  PMID: 23239945
14.  “The 3/3 Strategy”: A Successful Multifaceted Hospital Wide Hand Hygiene Intervention Based on WHO and Continuous Quality Improvement Methodology 
PLoS ONE  2012;7(10):e47200.
Background
Only multifaceted hospital-wide interventions have been successful in achieving sustained improvements in hand hygiene (HH) compliance.
Methodology/Principal Findings
Pre-post intervention study of HH performance at baseline (October 2007–December 2009) and during the intervention, which included two phases. Phase 1 (2010) applied the multimodal WHO approach. Phase 2 (2011) added Continuous Quality Improvement (CQI) tools and was based on: a) an increase in alcohol hand rub (AHR) dispenser placement (from 0.57 to 1.56 dispensers/bed); b) an increase in the frequency of audits (three days every three weeks: the “3/3 strategy”); c) implementation of a standardized register of HH corrective actions; and d) Statistical Process Control (SPC) as the time-series analysis methodology, using appropriate control charts. During the intervention period we performed 819 scheduled direct observation audits, which provided data from 11,714 HH opportunities. The most remarkable findings were: a) a significant improvement in HH compliance with respect to baseline (25% mean increase); b) a sustained high level (82%) of HH compliance during the intervention; c) a significant increase in AHR consumption over time; d) a significant decrease in the rate of healthcare-acquired MRSA; e) a small but significant improvement in HH compliance when comparing phase 2 to phase 1 [79.5% (95% CI: 78.2–80.7) vs 84.6% (95% CI: 83.8–85.4), p<0.05]; and f) successful use of control charts to identify significant negative and positive deviations (special causes) in the HH compliance process over time (“positive”: 90.1%, the highest HH compliance, coinciding with World Hand Hygiene Day; “negative”: 73.7%, the lowest HH compliance, coinciding with a statutory lay-off proceeding).
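The control-chart component lends itself to a compact illustration. Below is a sketch of a p-chart for audit-level compliance proportions with invented audit sizes and counts; points beyond the per-audit 3-sigma limits are candidate special causes of the kind described above.

    # p-chart (proportion control chart) for hand hygiene compliance; toy data.
    import numpy as np

    rng = np.random.default_rng(4)
    n = rng.integers(100, 200, size=40)      # HH opportunities per audit round
    x = rng.binomial(n, 0.82)                # compliant opportunities observed
    p = x / n
    p_bar = x.sum() / n.sum()                # centre line (overall compliance)
    half_width = 3 * np.sqrt(p_bar * (1 - p_bar) / n)   # per-audit 3-sigma
    special = np.where((p > p_bar + half_width) | (p < p_bar - half_width))[0]
    print(round(p_bar, 3), special)          # audits flagged as special causes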
Conclusions/Significance
CQI tools may be a key addition to the WHO strategy for maintaining good HH performance over time. In addition, SPC has been shown to be a powerful methodology for detecting special causes in HH performance (positive and negative) and for helping to establish adequate feedback to healthcare workers.
doi:10.1371/journal.pone.0047200
PMCID: PMC3478274  PMID: 23110061
15.  mRNA turnover rate limits siRNA and microRNA efficacy 
Based on a simple model of the mRNA life cycle, we predict that mRNAs with high turnover rates in the cell are more difficult to perturb with RNAi. We test this hypothesis using a luciferase reporter system and obtain additional evidence from a variety of large-scale data sets, including microRNA overexpression experiments and RT–qPCR-based efficacy measurements for thousands of siRNAs. Our results suggest that mRNA half-lives will influence how mRNAs are differentially perturbed whenever small RNA levels change in the cell, not only after transfection but also during differentiation, pathogenesis and normal cell physiology.
What determines how strongly an mRNA responds to a microRNA or an siRNA? We know that properties of the sequence match between the small RNA and the mRNA are crucial. However, large-scale validations of siRNA efficacies have shown that certain transcripts remain recalcitrant to perturbation even after repeated redesign of the siRNA (Krueger et al, 2007). Weak response to RNAi may thus be an inherent property of the mRNA, but the underlying factors have proven difficult to uncover.
siRNAs induce degradation by sequence-specific cleavage of their target mRNAs (Elbashir et al, 2001). MicroRNAs, too, induce mRNA degradation, and ∼80% of their effect on protein levels can be explained by changes in transcript abundance (Hendrickson et al, 2009; Guo et al, 2010). Given that multiple factors act simultaneously to degrade individual mRNAs, we here consider whether variable responses to micro/siRNA regulation may, in part, be explained simply by the basic dynamics of mRNA turnover. If a transcript is already under strong destabilizing regulation, it is theoretically possible that the relative change in abundance after the addition of a novel degrading factor would be less pronounced compared with a stable transcript (Figure 1). mRNA turnover is achieved by a multitude of factors, and the influence of such factors on targetability can be explored. However, their combined action, including yet unknown factors, is summarized into a single property: the mRNA decay rate.
First, we explored the theoretical relationship between the pre-existing turnover rate of an mRNA, and its expected susceptibility to perturbation by a small RNA. We assumed a basic model of the mRNA life cycle, in which the rate of transcription is constant and the rate of degradation is described by first-order kinetics. Under this model, the relative change in steady-state expression level will become smaller as the pre-existing decay rate grows larger, independent of the transcription rate. This relationship persists also if we assume various degrees of synergy and antagonism between the pre-existing factors and the external factor, with increasing synergism leading to transcripts being more equally targetable, regardless of their pre-existing decay rate.
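The claim follows in two lines from the model. Writing k for the constant transcription rate, δ for the pre-existing first-order decay rate, and δ_s for the additional decay contributed by the small RNA (our notation), the steady states are

    \frac{dx}{dt} = k - \delta x \;\Rightarrow\; x^{*} = \frac{k}{\delta},
    \qquad x^{*}_{+\mathrm{sRNA}} = \frac{k}{\delta + \delta_{s}},
    \qquad \frac{x^{*}_{+\mathrm{sRNA}}}{x^{*}} = \frac{\delta}{\delta + \delta_{s}}.

The remaining fraction δ/(δ + δ_s) rises toward 1 as δ grows, independent of k, so a transcript already under strong destabilizing regulation shows a smaller relative knockdown than a stable one.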
We next generated a series of four luciferase reporter constructs with destabilizing AU-rich elements (AREs) of various strengths incorporated into their 3′ UTRs. To evaluate how the different constructs would respond to perturbation, we performed co-transfections with an siRNA targeted at the coding region of the luciferase gene. This reduced the signal of the non-destabilized construct to 26% compared with a control siRNA. In contrast, the most destabilized construct showed 42% remaining reporter activity, and we could observe a dose–response relationship across the series.
The reporter experiment encouraged an investigation of this effect on real-world mRNAs. We analyzed a set of 2622 siRNAs, for which individual efficacies were determined using RT–qPCR 48 h post-transfection in HeLa cells (www.appliedbiosystems.com). Of these, 1778 could be associated with an experimentally determined decay rate (Figure 4A). Although the overall correlation between the two variables was modest (Spearman's rank correlation rs=0.22, P<1e−20), we found that siRNAs directed at high-turnover (t1/2<200 min) and medium-turnover (200–1000 min) transcripts were significantly less efficient than those directed at low-turnover (t1/2>1000 min) transcripts (P<8e−11 and 4e−9, respectively, two-tailed KS-test, Figure 4B). While 41.6% (498/1196) of the siRNAs directed at low-turnover transcripts reached 10% remaining expression or better, only 16.7% (31/186) of the siRNAs that targeted high-turnover mRNAs reached this high degree of silencing (Figure 4B). Reduced targetability (25.2%, 100/396) was also seen for transcripts with a medium turnover rate.
Our results based on siRNA data suggested that turnover rates could also influence microRNA targeting. By assembling genome-wide mRNA expression data from 20 published microRNA transfections in HeLa cells, we found that predicted target mRNAs with short and medium half-life were significantly less repressed after transfection than their long-lived counterparts (P<8e−5 and P<0.03, respectively, two-tailed KS-test). Specifically, 10.2% (293/2874) of long-lived targets versus 4.4% (41/942) of short-lived targets were strongly (z-score <−3) repressed. siRNAs are known to cause off-target effects that are mediated, in part, by microRNA-like seed complementarity (Jackson et al, 2006). We analyzed changes in transcript levels after transfection of seven different siRNAs, each with a unique seed region (Jackson et al, 2006). Putative 'off-targets' were identified by mapping of non-conserved seed matches in 3′ UTRs. We found that low-turnover mRNAs (t1/2>1000 min) were more affected by seed-mediated off-target silencing than high-turnover mRNAs (t1/2<200 min), with twice as many long-lived seed-containing transcripts (3.8% versus 1.9%) being strongly (z-score <−3) repressed.
In summary, mRNA turnover rates have an important influence on the changes exerted by small RNAs on mRNA levels. It can be assumed that mRNA half-lives will influence how mRNAs are differentially perturbed whenever small RNA levels change in the cell, not only after transfection but also during differentiation, pathogenesis and normal cell physiology.
The microRNA pathway participates in basic cellular processes and its discovery has enabled the development of si/shRNAs as powerful investigational tools and potential therapeutics. Based on a simple kinetic model of the mRNA life cycle, we hypothesized that mRNAs with high turnover rates may be more resistant to RNAi-mediated silencing. The results of a simple reporter experiment strongly supported this hypothesis. We followed this with a genome-wide scale analysis of a rich corpus of experiments, including RT–qPCR validation data for thousands of siRNAs, siRNA/microRNA overexpression data and mRNA stability data. We find that short-lived transcripts are less affected by microRNA overexpression, suggesting that microRNA target prediction would be improved if mRNA turnover rates were considered. Similarly, short-lived transcripts are more difficult to silence using siRNAs, and our results may explain why certain transcripts are inherently recalcitrant to perturbation by small RNAs.
doi:10.1038/msb.2010.89
PMCID: PMC3010119  PMID: 21081925
microRNA; mRNA decay; RNAi; siRNA
16.  Effectiveness of Early Antiretroviral Therapy Initiation to Improve Survival among HIV-Infected Adults with Tuberculosis: A Retrospective Cohort Study 
PLoS Medicine  2011;8(5):e1001029.
Molly Franke, Megan Murray, and colleagues report that early cART reduces mortality among HIV-infected adults with tuberculosis and improves retention in care, regardless of CD4 count.
Background
Randomized clinical trials examining the optimal time to initiate combination antiretroviral therapy (cART) in HIV-infected adults with sputum smear-positive tuberculosis (TB) disease have demonstrated improved survival among those who initiate cART earlier during TB treatment. Since these trials incorporated rigorous diagnostic criteria, it is unclear whether these results are generalizable to the vast majority of HIV-infected patients with TB, for whom standard diagnostic tools are unavailable. We aimed to examine whether early cART initiation improved survival among HIV-infected adults who were diagnosed with TB in a clinical setting.
Methods and Findings
We retrospectively reviewed charts for 308 HIV-infected adults in Rwanda with a CD4 count ≤350 cells/µl and a TB diagnosis. We estimated the effect of cART on survival using marginal structural models and simulated 2-y survival curves for the cohort under different cART strategies: start cART 15, 30, 60, or 180 d after TB treatment, or never start cART. We conducted secondary analyses with composite endpoints of (1) death, default, or loss to follow-up and (2) death, hospitalization, or serious opportunistic infection. Early cART initiation led to a survival benefit that was most marked for individuals with low CD4 counts. For individuals with CD4 counts of 50 or 100 cells/µl, cART initiation at day 15 yielded 2-y survival probabilities of 0.82 (95% confidence interval: [0.76, 0.89]) and 0.86 (95% confidence interval: [0.80, 0.92]), respectively. These were significantly higher than the probabilities computed under later start times. Results were similar for the endpoint of death, hospitalization, or serious opportunistic infection. cART initiation at day 15 versus later times was protective against death, default, or loss to follow-up, regardless of CD4 count. As with any observational study, the validity of these findings assumes that biases from residual confounding by unmeasured factors and from model misspecification are small.
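Marginal structural models estimate such effects by weighting each patient by the inverse probability of the treatment actually received, so that measured confounders (here, notably CD4 count) are balanced across the compared strategies. The sketch below is a deliberately simplified point-treatment version on simulated data; the study's models additionally handle time-varying initiation, censoring, and further covariates.

    # Simplified inverse-probability-weighted (point-treatment) sketch of the
    # idea behind a marginal structural model; all data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 2000
    df = pd.DataFrame({"cd4": rng.uniform(10, 350, n)})
    # confounding: sicker patients (lower CD4) start cART earlier and die more
    p_treat = 1 / (1 + np.exp(-(1.0 - 0.005 * df["cd4"])))
    df["early_cart"] = rng.binomial(1, p_treat)
    p_death = 1 / (1 + np.exp(-(-1.0 - 0.004 * df["cd4"] - 0.8 * df["early_cart"])))
    df["death_2y"] = rng.binomial(1, p_death)

    # stabilized inverse-probability-of-treatment weights from a CD4 model
    ps = smf.logit("early_cart ~ cd4", data=df).fit(disp=0).predict(df)
    pt = df["early_cart"].mean()
    df["sw"] = np.where(df["early_cart"] == 1, pt / ps, (1 - pt) / (1 - ps))

    # weighted outcome model estimates the marginal effect of early cART
    msm = smf.glm("death_2y ~ early_cart", data=df,
                  family=sm.families.Binomial(), var_weights=df["sw"]).fit()
    print(np.exp(msm.params["early_cart"]))   # marginal odds ratio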
Conclusions
Early cART reduced mortality among individuals with low CD4 counts and improved retention in care, regardless of CD4 count.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
HIV infection has exacerbated the global tuberculosis (TB) epidemic, especially in sub-Saharan Africa, in which in some countries, 70% of people with TB are currently also HIV positive—a condition commonly described as HIV/TB co-infection. The management of patients with HIV/TB co-infection is a major public health concern.
There is relatively little good evidence on the best time to initiate combination antiretroviral therapy (cART) in adults with HIV/TB co-infection. Clinicians sometimes defer cART in individuals initiating TB treatment because of concerns about complications (such as immune reconstitution inflammatory syndrome) and the risk of reduced adherence if patients have to remember to take two sets of pills. However, starting cART later in those patients who are infected with both HIV and TB can result in potentially avoidable deaths during therapy.
Why Was This Study Done?
Several randomized control trials (RCTs) have been carried out, and the results of three of these studies suggest that, among individuals with severe immune suppression, early initiation of cART (two to four weeks after the start of TB treatment) leads to better survival than later ART initiation (two to three months after the start of TB treatment). These results were reported in abstract form, but the full papers have not yet been published. One problem with RCTs is that they are carried out under controlled conditions that might not represent well the conditions in varied settings around the world. Therefore, observational studies that examine how effective a treatment is in routine clinical conditions can provide information that complements that obtained during clinical trials. In this study, the researchers aimed to confirm the results from RCTs among a cohort of adult patients with HIV/TB co-infection in Rwanda, diagnosed under routine program conditions and using routinely collected clinical data. The researchers also wanted to investigate whether early cART initiation reduced the risk of other adverse outcomes, including treatment default and loss to follow-up.
What Did the Researchers Do and Find?
The researchers retrospectively reviewed the charts and other program records of 308 patients with HIV, who had CD4 counts ≤350 cells/µl, were aged 15 years or more, had never previously taken cART, and received their first TB treatment at one of five cART sites (two urban, three rural) in Rwanda between January 2004 and February 2007. Using this method, the researchers collected baseline demographic and clinical variables and relevant clinical follow-up data. They then used this data to estimate the effect of cART on survival by using sophisticated statistical models that calculated the effects of initiating cART at 15, 30, 60, or 180 d after the start of TB treatment or not at all.
The researchers then conducted a further analysis to assess combined outcomes of (1) death, default, or loss to follow-up, and (2) death, hospitalization due to any cause, or occurrence of severe opportunistic infections, such as Kaposi's sarcoma. The researchers used the resulting multivariable model to estimate survival probabilities for each individual, based on his/her baseline characteristics.
The researchers found that when they set their model to initial CD4 cell counts of 50 and 100 cells/µl and to starting cART at day 15, mean survival probabilities at two years were 0.82 and 0.86, respectively, statistically significantly higher than the survival probabilities calculated for each of the other treatment strategies, where cART was started later. They observed a similar pattern for the combined outcome of death, hospitalization, or serious opportunistic infection. In addition, two-year outcomes for death or loss to follow-up were also improved with early cART, regardless of CD4 count at treatment initiation.
What Do These Findings Mean?
These findings show that in a real-world program setting, starting cART 15 d after the start of TB treatment is more beneficial (measured by differences in survival probabilities) among patients with HIV/TB co-infection who have CD4 cell counts ≤100 cells/µl than starting later. Early cART initiation may also increase retention in care for all individuals with CD4 cell counts ≤350 cells/µl.
As the outcomes of this modeling study are based on data from a retrospective observational study, the biases associated with the use of these data must be carefully addressed. However, the results support the recommendation of cART initiation after 15 d of TB treatment for patients with CD4 cell counts ≤100 cells/µl and can be used as an advocacy base for TB treatment to be used as an opportunity to refer and retain HIV-infected individuals in care, regardless of CD4 cell count.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001029.
Information is available on HIV/TB co-infection from the World Health Organization, the US Centers for Disease Control and Prevention, and the International AIDS Society
doi:10.1371/journal.pmed.1001029
PMCID: PMC3086874  PMID: 21559327
17.  Moving from Data on Deaths to Public Health Policy in Agincourt, South Africa: Approaches to Analysing and Understanding Verbal Autopsy Findings 
PLoS Medicine  2010;7(8):e1000325.
Peter Byass and colleagues compare two methods of assessing data from verbal autopsies, review by physicians and probabilistic modeling, and show that probabilistic modeling is the most efficient means of analyzing these data.
Background
Cause of death data are an essential source for public health planning, but their availability and quality are lacking in many parts of the world. Interviewing family and friends after a death has occurred (a procedure known as verbal autopsy) provides a source of data where deaths otherwise go unregistered; but sound methods for interpreting and analysing the ensuing data are essential. Two main approaches are commonly used: either physicians review individual interview material to arrive at probable cause of death, or probabilistic models process the data into likely cause(s). Here we compare and contrast these approaches as applied to a series of 6,153 deaths which occurred in a rural South African population from 1992 to 2005. We do not attempt to validate either approach in absolute terms.
Methods and Findings
The InterVA probabilistic model was applied to a series of 6,153 deaths which had previously been reviewed by physicians. Physicians used a total of 250 cause-of-death codes, many of which occurred very rarely, while the model used 33. Cause-specific mortality fractions, overall and for population subgroups, were derived from the model's output, and the physician causes coded into comparable categories. The ten highest-ranking causes accounted for 83% and 88% of all deaths by physician interpretation and probabilistic modelling respectively, and eight of the highest ten causes were common to both approaches. Top-ranking causes of death were classified by population subgroup and period, as done previously for the physician-interpreted material. Uncertainty around the cause(s) of individual deaths was recognised as an important concept that should be reflected in overall analyses. One notably discrepant group involved pulmonary tuberculosis as a cause of death in adults aged over 65, and these cases are discussed in more detail, but the group only accounted for 3.5% of overall deaths.
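At its core, probabilistic cause assignment of this kind is Bayes' rule applied over interview indicators. A toy Python sketch with invented priors and conditional probabilities (the actual InterVA model uses an expert-derived probability matrix covering its 33 causes and a far longer indicator list):

    # Toy InterVA-style assignment: Bayes' rule over binary VA indicators.
    # Priors and conditional probabilities below are invented for illustration.
    causes = {"HIV/AIDS": 0.25, "Pulmonary TB": 0.15, "Other": 0.60}  # P(cause)
    p_ind = {  # P(indicator present | cause)
        "chronic cough":      {"HIV/AIDS": 0.5, "Pulmonary TB": 0.8, "Other": 0.1},
        "severe weight loss": {"HIV/AIDS": 0.7, "Pulmonary TB": 0.6, "Other": 0.1},
    }

    def posterior(present):
        scores = {}
        for cause, prior in causes.items():
            like = prior
            for ind, table in p_ind.items():
                like *= table[cause] if ind in present else (1 - table[cause])
            scores[cause] = like
        total = sum(scores.values())
        return {cause: s / total for cause, s in scores.items()}

    print(posterior({"chronic cough", "severe weight loss"}))

The output is a probability distribution over causes rather than a single verdict, which is exactly the per-death uncertainty the authors argue should be carried through to population-level analyses.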
Conclusions
There were no differences between physician interpretation and probabilistic modelling that might have led to substantially different public health policy conclusions at the population level. Physician interpretation was more nuanced than the model, for example in identifying cancers at particular sites, but did not capture the uncertainty associated with individual cases. Probabilistic modelling was substantially cheaper and faster, and completely internally consistent. Both approaches characterised the rise of HIV-related mortality in this population during the period observed, and reached similar findings on other major causes of mortality. For many purposes probabilistic modelling appears to be the best available means of moving from data on deaths to public health actions.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Whenever someone dies in a developed country, the cause of death is determined by a doctor and entered into a “vital registration system,” a record of all the births and deaths in that country. Public-health officials and medical professionals use this detailed and complete information about causes of death to develop public-health programs and to monitor how these programs affect the nation's health. Unfortunately, in many developing countries dying people are not attended by doctors and vital registration systems are incomplete. In most African countries, for example, less than one-quarter of deaths are recorded in vital registration systems. One increasingly important way to improve knowledge about the patterns of death in developing countries is “verbal autopsy” (VA). Using a standard form, trained personnel ask relatives and caregivers about the symptoms that the deceased had before his/her death and about the circumstances surrounding the death. Physicians then review these forms and assign a specific cause of death from a shortened version of the International Classification of Diseases, a list of codes for hundreds of diseases.
Why Was This Study Done?
Physician review of VA forms is time-consuming and expensive. Consequently, computer-based, “probabilistic” models have been developed that process the VA data and provide a likely cause of death. These models are faster and cheaper than physician review of VAs and, because they do not rely on the views of local doctors about the likely causes of death, they are more internally consistent. But are physician review and probabilistic models equally sound ways of interpreting VA data? In this study, the researchers compare and contrast the interpretation of VA data by physician review and by a probabilistic model called the InterVA model by applying these two approaches to the deaths that occurred in Agincourt, a rural region of northeast South Africa, between 1992 and 2005. The Agincourt health and sociodemographic surveillance system is a member of the INDEPTH Network, a global network that is evaluating the health and demographic characteristics (for example, age, gender, and education) of populations in low- and middle-income countries over several years.
What Did the Researchers Do and Find?
The researchers applied the InterVA probabilistic model to 6,153 deaths that had been previously reviewed by physicians. They grouped the 250 cause-of-death codes used by the physicians into categories comparable with the 33 cause-of-death codes used by the InterVA model and derived cause-specific mortality fractions (the proportions of the population dying from specific causes) for the whole population and for subgroups (for example, deaths in different age groups and deaths occurring over specific periods of time) from the output of both approaches. The ten highest-ranking causes of death accounted for 83% and 88% of all deaths by physician interpretation and by probabilistic modelling, respectively. Eight of the most frequent causes of death—HIV, tuberculosis, chronic heart conditions, diarrhea, pneumonia/sepsis, transport-related accidents, homicides, and indeterminate—were common to both interpretation methods. Both methods coded about a third of all deaths as indeterminate, often because of incomplete VA data. Generally, there was close agreement between the methods for the five principal causes of death for each age group and for each period of time, although one notable discrepancy was pulmonary (lung) tuberculosis, which accounted for 6.4% and 21.3%, respectively, of deaths among adults aged over 65 years according to the physicians and to the model. However, these deaths accounted for only 3.5% of all the deaths.
What Do These Findings Mean?
These findings reveal no differences between the cause-specific mortality fractions determined from VA data by physician interpretation and by probabilistic modelling that might have led to substantially different public-health policy programmes being initiated in this population. Importantly, both approaches clearly chart the rise of HIV-related mortality in this South African population between 1992 and 2005 and reach similar findings on other major causes of mortality. The researchers note that, although preparing the amount of VA data considered here for entry into the probabilistic model took several days, the model itself runs very quickly and always gives consistent answers. Given these findings, the researchers conclude that in many settings probabilistic modeling represents the best means of moving from VA data to public-health actions.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000325.
The importance of accurate data on death is further discussed in a Perspective by Colin Mathers and Ties Boerma, previously published in PLoS Medicine
The World Health Organization (WHO) provides information on the vital registration of deaths and on the International Classification of Diseases; the WHO Health Metrics Network is a global collaboration focused on improving sources of vital statistics; and the WHO Global Health Observatory brings together core health statistics for WHO member states
The INDEPTH Network is a global collaboration that is collecting health statistics from developing countries; it provides more information about the Agincourt health and socio-demographic surveillance system and access to standard VA forms
Information on the Agincourt health and sociodemographic surveillance system is available on the University of Witwatersrand Web site
The InterVA Web site provides resources for interpreting verbal autopsy data, and the Umeå Centre for Global Health Research, where the InterVA model was developed, is found at http://www.globalhealthresearch.net
A recent PLoS Medicine Essay by Peter Byass, lead author of this study, discusses The Unequal World of Health Data
doi:10.1371/journal.pmed.1000325
PMCID: PMC2923087  PMID: 20808956
18.  Are Molecular Haplotypes Worth the Time and Expense? A Cost-Effective Method for Applying Molecular Haplotypes 
PLoS Genetics  2006;2(8):e127.
Because current molecular haplotyping methods are expensive and not amenable to automation, many researchers rely on statistical methods to infer haplotype pairs from multilocus genotypes, and subsequently treat these inferred haplotype pairs as observations. These procedures are prone to haplotype misclassification. We examine the effect of these misclassification errors on the false-positive rate and power for two association tests. These tests include the standard likelihood ratio test (LRTstd) and a likelihood ratio test that employs a double-sampling approach to allow for the misclassification inherent in the haplotype inference procedure (LRTae). We aim to determine the cost–benefit relationship of increasing the proportion of individuals with molecular haplotype measurements in addition to genotypes to raise the power gain of the LRTae over the LRTstd. This analysis should provide a guideline for determining the minimum number of molecular haplotypes required for a desired power. Our simulations under the null hypothesis of equal haplotype frequencies in cases and controls indicate that (1) for each statistic, permutation methods maintain the correct type I error and (2) specific multilocus genotypes that are misclassified as the incorrect haplotype pair are consistently misclassified throughout each entire dataset. Our simulations under the alternative hypothesis showed a significant power gain for the LRTae over the LRTstd for a subset of the parameter settings. Permutation methods should be used exclusively to determine significance for each statistic. For fixed cost, the power gain of the LRTae over the LRTstd varied depending on the relative costs of genotyping, molecular haplotyping, and phenotyping. The LRTae showed the greatest benefit over the LRTstd when the cost of phenotyping was very high relative to the cost of genotyping. This situation is likely to occur in a replication study as opposed to a whole-genome association study.
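The permutation logic recommended above is generic: shuffle the case/control labels, recompute the statistic, and locate the observed value in the resulting null distribution. In the minimal sketch below the statistic is an arbitrary stand-in (a mean difference); the paper's LRTs re-estimate haplotype frequencies within each permuted dataset.

    # Generic permutation test; stat() is a stand-in for the paper's LRTs.
    import numpy as np

    rng = np.random.default_rng(5)
    genotypes = rng.integers(0, 3, size=200)       # toy genotype codes
    labels = np.array([1] * 100 + [0] * 100)       # cases = 1, controls = 0

    def stat(g, y):
        return abs(g[y == 1].mean() - g[y == 0].mean())

    obs = stat(genotypes, labels)
    perm = np.array([stat(genotypes, rng.permutation(labels))
                     for _ in range(9999)])
    p_value = (1 + (perm >= obs).sum()) / (9999 + 1)
    print(obs, p_value)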
Synopsis
Localizing genes for complex genetic diseases presents a major challenge. Recent technological advances, such as genotyping arrays containing hundreds of thousands of genomic “landmarks” and databases cataloging these “landmarks” and the levels of correlation between them, have aided in these endeavors. To utilize these resources most effectively, many researchers employ a gene-mapping technique called haplotype-based association, in which the variation present at multiple genomic sites is examined jointly for a role in and/or an association with the disease state. Although methods that determine haplotype pairs directly by biological assays are currently available, they are rarely used because of their expense and unsuitability for automation. Statistical methods provide an inexpensive, relatively accurate means to determine haplotype pairs. However, these statistical methods can provide erroneous results. In this article, the authors compare a standard statistical method for performing a haplotype-based association test with a method that accounts for the misclassification of haplotype pairs as part of the test. Under a number of feasible scenarios, the performance of the new test exceeded that of the standard test.
doi:10.1371/journal.pgen.0020127
PMCID: PMC1550282  PMID: 16933998
19.  Using statistical process control to make data-based clinical decisions. 
Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior, as well as the course of treatment, to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability, based on the writings of Walter Shewhart and W. Edwards Deming in the area of industrial quality control, helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help it achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.
doi:10.1901/jaba.1995.28-349
PMCID: PMC1279837  PMID: 7592154
20.  Associations between Intimate Partner Violence and Termination of Pregnancy: A Systematic Review and Meta-Analysis 
PLoS Medicine  2014;11(1):e1001581.
Lucy Chappell and colleagues conduct a systematic review and meta analysis to investigate a possible association between intimate partner violence and termination of pregnancy.
Please see later in the article for the Editors' Summary
Background
Intimate partner violence (IPV) and termination of pregnancy (TOP) are global health concerns, but their interaction is undetermined. The aim of this study was to determine whether there is an association between IPV and TOP.
Methods and Findings
A systematic review based on a search of Medline, Embase, PsycINFO, and Ovid Maternity and Infant Care from each database's inception to 21 September 2013 for peer-reviewed articles of any design and language found 74 studies regarding women who had undergone TOP and had experienced at least one domain (physical, sexual, or emotional) of IPV. Prevalence of IPV and association between IPV and TOP were meta-analysed. Sample sizes ranged from eight to 33,385 participants. Worldwide, rates of IPV in the preceding year in women undergoing TOP ranged from 2.5% to 30%. Lifetime prevalence by meta-analysis was shown to be 24.9% (95% CI 19.9% to 30.6%); heterogeneity was high (I2>90%), and variation was not explained by study design, quality, or size, or country gross national income per capita. IPV, including history of rape, sexual assault, contraceptive sabotage, and coerced decision-making, was associated with TOP, and with repeat TOPs. By meta-analysis, partner not knowing about the TOP was shown to be significantly associated with IPV (pooled odds ratio 2.97, 95% CI 2.39 to 3.69). Women in violent relationships were more likely to have concealed the TOP from their partner than those who were not. Demographic factors including age, ethnicity, education, marital status, income, employment, and drug and alcohol use showed no strong or consistent mediating effect. Few long-term outcomes were studied. Women welcomed the opportunity to disclose IPV and be offered help. Limitations include study heterogeneity, potential underreporting of both IPV and TOP in primary data sources, and inherent difficulties in validation.
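The pooled odds ratio and the I² statistic reported here come from standard inverse-variance machinery, sketched below on three invented 2×2 tables. The sketch shows the fixed-effect computation for brevity; at the heterogeneity levels reported here (I²>90%), a random-effects model would normally be preferred.

    # Inverse-variance pooling of log odds ratios with Cochran's Q and I^2.
    import numpy as np

    # invented tables: (exposed cases, exposed non-cases, unexposed cases,
    # unexposed non-cases)
    tables = [(30, 70, 20, 120), (55, 145, 40, 260), (12, 38, 10, 90)]

    log_or = np.array([np.log((a * d) / (b * c)) for a, b, c, d in tables])
    var = np.array([1 / a + 1 / b + 1 / c + 1 / d for a, b, c, d in tables])
    w = 1 / var                                  # inverse-variance weights
    pooled = (w * log_or).sum() / w.sum()        # fixed-effect pooled log OR
    se = np.sqrt(1 / w.sum())
    q = (w * (log_or - pooled) ** 2).sum()       # Cochran's Q
    i2 = 0.0 if q == 0 else max(0.0, (q - (len(tables) - 1)) / q) * 100
    print(np.exp(pooled),                        # pooled OR and 95% CI, then I^2
          np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se), i2)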
Conclusions
IPV is associated with TOP. Novel public health approaches are required to prevent IPV. TOP services provide an opportune health-based setting to design and test interventions.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Intimate partner violence (sometimes referred to as domestic violence) is one of the commonest forms of violence against women and is a global health problem. The World Health Organization defines intimate partner violence as any act of physical, psychological, or sexual aggression or any controlling behavior (for example, restriction of access to assistance) perpetrated by the woman's current or past intimate partner. Although men also experience it, intimate partner violence is overwhelmingly experienced by women, particularly when repeated or severe. Studies indicate that the prevalence (the percentage of a population affected by a condition) of intimate partner violence varies widely within and between countries: the prevalence of intimate partner violence among women ranges from 15% in Japan to 71% in Ethiopia, and the lifetime prevalence of rape (forced sex) within intimate relationships ranges from 5.9% to 42% across the world, for example. Overall, a third of women experience intimate partner violence at some time during their lifetimes. The health consequences of such violence include physical injury, depression, suicidal behavior, and gastrointestinal disorders.
Why Was This Study Done?
Intimate partner violence can also lead to gynecological disorders (conditions affecting the female reproductive organs), unwanted pregnancy, premature labour and birth, and sexually transmitted infections. Because violence may begin or intensify during pregnancy, some countries recommend routine questioning about intimate partner violence during antenatal care. However, women seeking termination of pregnancy (induced abortion) are not routinely asked about intimate partner violence. Every year, many women worldwide terminate a pregnancy. Nearly half of these terminations are unsafe, and complications arising from unsafe abortions are responsible for more than 10% of maternal deaths (deaths from pregnancy or childbirth-related complications). It is important to know whether intimate partner violence and termination of pregnancy are associated in order to develop effective strategies to deal with both these global health concerns. Here, the researchers conducted a systematic review and meta-analysis to investigate the associations between intimate partner violence and termination of pregnancy. A systematic review identifies all the research on a given topic using predefined criteria; meta-analysis combines the results of several studies.
What Did the Researchers Do and Find?
The researchers identified 74 studies that provided information about experiences of intimate partner violence among women who had had a termination of pregnancy. Data in these studies indicated that, worldwide, intimate partner violence rates among women undergoing termination ranged from 2.5% to 30% in the preceding year and from 14% to 40% over their lifetime. In the meta-analysis, the lifetime prevalence of intimate partner violence was 24.9% among termination-seeking populations. The identified studies provided evidence that intimate partner violence was associated with termination and with repeat termination. In one study, for example, women presenting for a third termination were more than two and a half times more likely to have a history of physical or sexual violence than women presenting for their first termination. Moreover, according to the meta-analysis, women in violent relationships were three times as likely to conceal a termination from their partner as women in non-violent relationships. Finally, the studies indicated that women undergoing terminations of pregnancy welcomed the opportunity to disclose their experiences of intimate partner violence and to be offered help.
What Do These Findings Mean?
These findings indicate that intimate partner violence is associated with termination of pregnancy and that a woman's partner not knowing about the termination is a risk factor for intimate partner violence among women seeking termination. Overall, the researchers' findings support the concept that violence can lead to pregnancy and to subsequent termination of pregnancy, and that there may be a repetitive cycle of abuse and pregnancy. The accuracy of these findings is limited by heterogeneity (variability) among the included studies, by the likelihood of underreporting of both intimate partner violence and termination in the included studies, and by lack of validation of reports of violence through, for example, police reports. Nevertheless, health-care professionals should consider the possibility that women seeking termination of pregnancy may be experiencing intimate partner violence. In trying to prevent repeat terminations, health-care professionals should be aware that while focusing on preventing conception may reduce the chances of a woman becoming pregnant, she may still be vulnerable to abuse. Finally, given the clear associations between intimate partner violence and termination of pregnancy, the researchers suggest that termination services represent an appropriate setting in which to test interventions designed to reduce intimate partner violence.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001581.
The World Health Organization provides detailed information about intimate partner violence and about termination of pregnancy (some information available in several languages)
MedlinePlus provides links to other resources about intimate partner violence and about termination of pregnancy (in English and Spanish)
The World Bank has a webpage that discusses the role of the health sector in preventing gender-based violence and a webpage with links to other resources about gender-based violence
The Gender and Health Research Unit of the South African Medical Research Council provides links to further resources about intimate partner violence (research briefs/policy briefs/fact sheets/research reports)
DIVERHSE (Domestic & Interpersonal Violence: Effecting Responses in the Health Sector in Europe) is a European forum for health professionals, nongovernmental organizations, policy-makers, and academics to share their expertise and good practice in developing and evaluating interventions to address violence against women and children in a variety of health-care settings
London School of Hygiene & Tropical Medicine's Gender Violence and Health Centre also has a number of research resources
The UK National Health Service Choices website provides personal stories of intimate partner violence during pregnancy
The March of Dimes provides information on identifying intimate partner violence during pregnancy and making a safety plan
doi:10.1371/journal.pmed.1001581
PMCID: PMC3883805  PMID: 24409101
21.  Statistical process control as a tool for research and healthcare improvement 
Quality & safety in health care  2003;12(6):458-464.


 Improvement of health care requires making changes in processes of care and service delivery. Although process performance is measured to determine if these changes are having the desired beneficial effects, this analysis is complicated by the existence of natural variation—that is, repeated measurements naturally yield different values and, even if nothing was done, a subsequent measurement might seem to indicate a better or worse performance. Traditional statistical analysis methods account for natural variation but require aggregation of measurements over time, which can delay decision making. Statistical process control (SPC) is a branch of statistics that combines rigorous time series analysis methods with graphical presentation of data, often yielding insights into the data more quickly and in a way more understandable to lay decision makers. SPC and its primary tool—the control chart—provide researchers and practitioners with a method of better understanding and communicating data from healthcare improvement efforts. This paper provides an overview of SPC and several practical examples of the healthcare applications of control charts.
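To make the control-chart idea concrete, here is a minimal sketch (in Python) of a Shewhart individuals (XmR) chart, one common SPC chart type; the monthly values are synthetic and are not taken from the paper's examples:

    # Minimal Shewhart individuals (XmR) control chart, a basic form of
    # the primary SPC tool described above. Monthly values are synthetic.
    values = [4.1, 3.8, 4.4, 4.0, 3.9, 4.6, 4.2, 3.7, 4.3, 6.5, 4.0, 4.2]

    centre = sum(values) / len(values)
    # Average moving range between consecutive observations
    mr = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(mr) / len(mr)

    # Conventional XmR limits: centreline +/- 2.66 * average moving range
    # (2.66 = 3 / d2, with d2 = 1.128 for moving ranges of size 2)
    ucl = centre + 2.66 * mr_bar
    lcl = centre - 2.66 * mr_bar

    print(f"centreline {centre:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}")
    for month, v in enumerate(values, start=1):
        flag = "  <-- special-cause signal" if not lcl <= v <= ucl else ""
        print(f"month {month:2d}: {v:.1f}{flag}")

Points falling outside the limits signal special-cause variation worth investigating; points within them are consistent with natural variation, so no reaction is warranted.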
doi:10.1136/qhc.12.6.458
PMCID: PMC1758030  PMID: 14645763
22.  Climate Cycles and Forecasts of Cutaneous Leishmaniasis, a Nonstationary Vector-Borne Disease 
PLoS Medicine  2006;3(8):e295.
Background
Cutaneous leishmaniasis (CL) is one of the main emergent diseases in the Americas. As in other vector-transmitted diseases, its transmission is sensitive to the physical environment, but no study has addressed the nonstationary nature of such relationships or the interannual patterns of cycling of the disease.
Methods and Findings
We studied monthly data, spanning from 1991 to 2001, of CL incidence in Costa Rica using several approaches for nonstationary time series analysis in order to ensure robustness in the description of CL's cycles. Interannual cycles of the disease and the association of these cycles to climate variables were described using frequency and time-frequency techniques for time series analysis. We fitted linear models to the data using climatic predictors, and tested forecasting accuracy for several intervals of time. Forecasts were evaluated using “out of fit” data (i.e., data not used to fit the models). We showed that CL has cycles of approximately 3 y that are coherent with those of temperature and El Niño Southern Oscillation indices (Sea Surface Temperature 4 and Multivariate ENSO Index).
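The fit-then-forecast procedure, with evaluation on data held out from fitting, can be sketched as follows (in Python); the series, the 2-month lag, and the accuracy measure (100 minus mean absolute percentage error) are illustrative stand-ins, not the paper's data or exact methods:

    import math
    import random

    random.seed(1)
    # Synthetic stand-ins: a temperature index and a case series that
    # follows it with a 2-month lag plus noise (not the Costa Rican data).
    n = 120
    temp = [10 + 3 * math.sin(2 * math.pi * t / 36) for t in range(n)]
    cases = [50 + 8 * temp[max(t - 2, 0)] + random.gauss(0, 5) for t in range(n)]

    lag = 2
    x = temp[: n - lag]          # climate predictor, lagged two months
    y = cases[lag:]              # disease series to forecast
    split = int(0.8 * len(y))    # fit on the first 80% of months only

    # Ordinary least squares for y = a + b * x on the training window
    xt, yt = x[:split], y[:split]
    mx, my = sum(xt) / len(xt), sum(yt) / len(yt)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(xt, yt)) / sum(
        (xi - mx) ** 2 for xi in xt)
    a = my - b * mx

    # Score on months the model never saw ("out of fit" data)
    pred = [a + b * xi for xi in x[split:]]
    obs = y[split:]
    mape = sum(abs(p - o) / o for p, o in zip(pred, obs)) / len(obs)
    print(f"out-of-fit accuracy ~ {100 * (1 - mape):.0f}%")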
Conclusions
Linear models using temperature and MEI can satisfactorily predict CL incidence dynamics up to 12 mo ahead, with an accuracy that varies from 72% to 77% depending on prediction time. They clearly outperform simpler models with no climate predictors, a finding that further supports a dynamical link between the disease and climate.
Using mathematical models, the authors show that cutaneous leishmaniasis has cycles of approximately three years that are related to temperature cycles and indices of the El Niño Southern Oscillation.
Editors' Summary
Background.
Every year, 2 million people become infected with a pathogenic species of Leishmania, a parasite that is transmitted to humans through the bites of infected sand flies. These flies—the insect vectors for disease transmission—pick up parasites by biting infected animals—the reservoirs for the parasite. Once in a person, some species of Leishmania can cause cutaneous leishmaniasis, a condition characterized by numerous skin lesions. These usually heal spontaneously but can leave ugly, sometimes disabling scars. Leishmaniasis is endemic and constantly present in many tropical and temperate countries, but as with other diseases that are transmitted by insect vectors (for example, malaria), the occurrence of cases has a strong seasonal pattern and also varies from year to year (interannual variability). These fluctuations suggest that leishmaniasis transmission is sensitive to seasonal changes in the climate and to climatic events like the El Niño Southern Oscillation (ENSO), a major cause of interannual weather and climate variation around the world that repeats every 3–4 years. This sensitivity arises because the climate directly affects the abundance of sand flies and how quickly the parasites replicate.
Why Was This Study Done?
It would be very useful to have early warning systems for leishmaniasis and other vector-transmitted diseases so that public health officials could prepare for epidemics—or spikes in the number of cases—of these diseases. Monitoring of climatic changes could form the basis of such systems. But although it is clear that the transmission of cutaneous leishmaniasis is affected by fluctuations in the climate, there have been no detailed studies into the dynamics of its seasonal or yearly variation. In this study, the researchers used climatic data and information about cutaneous leishmaniasis in Costa Rica to build statistical models that investigate how climate affects leishmaniasis transmission.
What Did the Researchers Do and Find?
The researchers obtained the monthly records for cutaneous leishmaniasis in Costa Rica for 1991 to 2001. They then used several advanced statistical models to investigate how these data relate to climatic variables such as the sea surface temperature (a measure of El Niño, a large-scale warming of the sea), average temperature in Costa Rica, and the MEI (the Multivariate ENSO Index, a collection of temperature and air pressure measurements that predicts when the ENSO is going to occur). Their analyses show that cutaneous leishmaniasis cases usually peak in May and that the incidence of the disease (number of cases occurring in the population over a set time period) rises and falls in three-year cycles. These cycles, they report, match up with similar-length cycles in the climatic variables that they investigated. Furthermore, when the researchers tested the models they had constructed with data that had not been used to construct the models (“out of fit” data), the models predicted variations in the incidence of cutaneous leishmaniasis for up to 12 months with an accuracy of about 75% (that is, the predictions were correct three times out of four).
What Do These Findings Mean?
The finding that interannual cycles of climate variables and cutaneous leishmaniasis coincide provides strong evidence that climate does indeed affect the transmission of this disease. This link is strengthened by the ability of the statistical models described by the researchers to predict outbreaks with high accuracy. The researchers' new insights into how climate affects the transmission of cutaneous leishmaniasis are important because they open the door to the possibility of setting up an early warning system for this increasingly common disease. The same statistical approach could be used to improve understanding of how climate affects the dynamics of other vector-transmitted diseases and to design early warning systems for them as well—the World Health Organization has identified 18 diseases for which climate-based early warning systems might be useful but no such systems are currently being used to plan disease control strategies. Finally, the improved understanding of the relationship between climate and disease transmission that the researchers have gained through their study is an important step towards being able to predict how the incidence and distribution of leishmaniasis and other vector-transmitted diseases will be affected by global warming.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030295.
United States Centers for Disease Control and Prevention fact sheet on leishmaniasis
MedlinePlus encyclopedia entry on leishmaniasis
World Health Organization information on leishmaniasis and on climate change and health
Wikipedia pages on leishmaniasis and on the El Niño Southern Oscillation (note that Wikipedia is a free online encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.0030295
PMCID: PMC1539092  PMID: 16903778
23.  Measuring Coverage in MNCH: A Prospective Validation Study in Pakistan and Bangladesh on Measuring Correct Treatment of Childhood Pneumonia 
PLoS Medicine  2013;10(5):e1001422.
Background
Antibiotic treatment for pneumonia as measured by Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS) is a key indicator for tracking progress in achieving Millennium Development Goal 4. Concerns about the validity of this indicator led us to perform an evaluation in urban and rural settings in Pakistan and Bangladesh.
Methods and Findings
Caregivers of 950 children under 5 y with pneumonia and 980 with “no pneumonia” were identified in urban and rural settings and allocated to interviews using DHS/MICS questions 2 or 4 wk later. Study physicians assigned a diagnosis of pneumonia as reference standard; the predictive ability of DHS/MICS questions and additional measurement tools to identify pneumonia versus non-pneumonia cases was evaluated.
Results at both sites showed suboptimal discriminative power, with no difference between 2- or 4-wk recall. Individual patterns of sensitivity and specificity varied substantially across study sites (sensitivity 66.9% and 45.5%, and specificity 68.8% and 69.5%, for DHS in Pakistan and Bangladesh, respectively). Prescribed antibiotics for pneumonia were correctly recalled by about two-thirds of caregivers using DHS questions, increasing to 72% and 82% in Pakistan and Bangladesh, respectively, using a drug chart and detailed enquiry.
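Sensitivity and specificity are simple 2x2-table quantities computed against the physician reference standard; in the sketch below (Python), the cell counts are hypothetical, chosen only so that the output reproduces the Pakistan DHS figures quoted above:

    # Sensitivity/specificity of a survey question against the physician
    # reference standard. These 2x2 counts are hypothetical, chosen only
    # to reproduce the Pakistan DHS percentages reported in the abstract.
    tp, fn = 318, 157   # physician-diagnosed pneumonia: survey positive / negative
    tn, fp = 337, 153   # physician "no pneumonia": survey negative / positive

    sensitivity = tp / (tp + fn)   # share of true pneumonia the survey catches
    specificity = tn / (tn + fp)   # share of non-pneumonia the survey clears

    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")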
Conclusions
Monitoring antibiotic treatment of pneumonia is essential for national and global programs. Current (DHS/MICS questions) and proposed new (video and pneumonia score) methods of identifying pneumonia based on maternal recall discriminate poorly between children with pneumonia and children with cough. Furthermore, these methods have a low yield in identifying children who have true pneumonia. Reported antibiotic treatment rates among these children are therefore not a valid proxy indicator of pneumonia treatment rates. These results have important implications for program monitoring and suggest that data in their current format from DHS/MICS surveys should not currently be used to monitor antibiotic treatment rates in children with pneumonia.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Pneumonia is a major cause of death in children younger than five years across the globe, with approximately 1.2 million children younger than five years dying from pneumonia every year. Pneumonia can be caused by bacteria, fungi, or viruses. It is possible to effectively treat bacterial pneumonia with appropriate antibiotics; however, only about 30% of children receive the antibiotic treatment they need. The Millennium Development Goals (MDGs) are eight international development goals that were established in 2000. The fourth goal (MDG 4) aims to reduce child mortality, specifically, to reduce the under-five mortality rate by two-thirds, between 1990 and 2015. Given that approximately 18% of all deaths in children under five are caused by pneumonia, providing universal coverage with effective treatments for pneumonia is an important part of MDG 4.
To ensure that MDG 4 targets are met, it is important to measure progress in providing effective treatments. For pneumonia, one of the key indicators for measuring progress is the proportion of children with pneumonia in a population who receive antibiotic treatment, also known as the antibiotic treatment rate. The antibiotic treatment rate is often measured using surveys, such as the Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS), which collect nationally representative data about populations and health in developing countries.
Why Was This Study Done?
Concerns have been raised about whether information collected from DHS and MICS is able to accurately identify cases of pneumonia. In a clinical setting, pneumonia is typically diagnosed based on a combination of physical symptoms, including coughing, rapid breathing, or difficulty breathing, and a chest X-ray. The surveys rely on information collected from interviews of mothers and primary caregivers using structured questions about whether the child has experienced physical symptoms in the past two weeks and whether these were chest-related. The DHS survey labels this condition as “symptoms of acute respiratory infection,” while the MICS survey uses the term “suspected pneumonia.” Thus, these surveys provide a proxy measure for pneumonia that is limited by the reliance on the recall of symptoms by the mother or caregiver. Here the researchers have evaluated the use of these surveys to discriminate physician-diagnosed pneumonia and to provide accurate recall of antibiotic treatment in urban and rural settings in Pakistan and Bangladesh.
What Did the Researchers Do and Find?
The researchers identified caregivers of 950 children under five years with pneumonia and 980 who had a cough or cold but did not have pneumonia from urban and rural settings in Pakistan and Bangladesh. Cases of pneumonia were identified based on a physician diagnosis using World Health Organization guidelines. They randomly assigned caregivers to be interviewed using DHS and MICS questions with either a two- or four-week recall period. They then assessed how well the DHS and MICS questions were able to accurately diagnose pneumonia and accurately recall antibiotic use. In addition, they asked caregivers to complete a pneumonia score questionnaire and showed them a video tool showing children with and without pneumonia, as well as a medication drug chart, to determine if these alternative measures improved the accuracy of pneumonia diagnosis or recall of antibiotic use. They found that both surveys, the pneumonia score, and the video tool had poor ability to discriminate between children with and without physician-diagnosed pneumonia, and there were no differences between using two- or four-week recall. The sensitivity (proportion of pneumonia cases that were correctly identified) ranged from 23% to 72%, and the specificity (the proportion of “no pneumonia” cases that were correctly identified) ranged from 53% to 83%, depending on the setting. They also observed that prescribed antibiotics for pneumonia were correctly recalled by about two-thirds of caregivers using DHS questions, and this increased to about three-quarters of caregivers when using a drug chart and detailed enquiry.
What Do These Findings Mean?
The findings of this study suggest that the current use of questions from DHS and MICS based on mother or caregiver recall are not sufficient for accurately identifying pneumonia and antibiotic use in children. Because these surveys have poor ability to identify children who have true pneumonia, reported antibiotic treatment rates for children with pneumonia based on data from these surveys may not be accurate, and these surveys should not be used to monitor treatment rates. These findings should be interpreted cautiously, given the relatively high rate of loss to follow-up and delayed follow-up in some of the children and because some of the settings in this study may not be similar to other low-income settings.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001422.
More information is available on the United Nations goal to reduce child mortality (MDG 4)
The World Health Organization provides information on pneumonia, its impact on children, and the global action plan for prevention and control of pneumonia
More information is available on Demographic and Health Surveys and Multiple Indicator Cluster Surveys
KidsHealth, a resource maintained by the Nemours Foundation (a not-for-profit organization for children's health) provides information for parents on pneumonia (in English and Spanish)
MedlinePlus provides links to additional information on pneumonia (in English and Spanish)
doi:10.1371/journal.pmed.1001422
PMCID: PMC3646205  PMID: 23667339
24.  Effect of Water, Sanitation, and Hygiene on the Prevention of Trachoma: A Systematic Review and Meta-Analysis 
PLoS Medicine  2014;11(2):e1001605.
Matthew Freeman and colleagues identified 86 individual studies that reported a measure of the effect of water, sanitation, and hygiene on trachoma and conducted 15 meta-analyses for specific exposure-outcome pairs.
Please see later in the article for the Editors' Summary
Background
Trachoma is the world's leading cause of infectious blindness. The World Health Organization (WHO) has endorsed the SAFE strategy in order to eliminate blindness due to trachoma by 2020 through “surgery,” “antibiotics,” “facial cleanliness,” and “environmental improvement.” While the S and A components have been widely implemented, evidence and specific targets are lacking for the F and E components, of which water, sanitation, and hygiene (WASH) are critical elements. Data on the impact of WASH on trachoma are needed to support policy and program recommendations. Our objective was to systematically review the literature and conduct meta-analyses where possible to report the effects of WASH conditions on trachoma and identify research gaps.
Methods and Findings
We systematically searched PubMed, Embase, ISI Web of Knowledge, MedCarib, Lilacs, REPIDISCA, DESASTRES, and African Index Medicus databases through October 27, 2013 with no restrictions on language or year of publication. Studies were eligible for inclusion if they reported a measure of the effect of WASH on trachoma, either active disease indicated by observed signs of trachomatous inflammation or Chlamydia trachomatis infection diagnosed using PCR. We identified 86 studies that reported a measure of the effect of WASH on trachoma. To evaluate study quality, we developed a set of criteria derived from the GRADE methodology. Publication bias was assessed using funnel plots. If three or more studies reported measures of effect for a comparable WASH exposure and trachoma outcome, we conducted a random-effects meta-analysis. We conducted 15 meta-analyses for specific exposure-outcome pairs. Access to sanitation was associated with lower odds of trachoma as measured by the presence of trachomatous inflammation-follicular or trachomatous inflammation-intense (TF/TI) (odds ratio [OR] 0.85, 95% CI 0.75–0.95) and C. trachomatis infection (OR 0.67, 95% CI 0.55–0.78). Having a clean face was significantly associated with reduced odds of TF/TI (OR 0.42, 95% CI 0.32–0.52), as were the facial cleanliness indicators lack of ocular discharge (OR 0.42, 95% CI 0.23–0.61) and lack of nasal discharge (OR 0.62, 95% CI 0.52–0.72). Facial cleanliness indicators were also associated with reduced odds of C. trachomatis infection: lack of ocular discharge (OR 0.40, 95% CI 0.31–0.49) and lack of nasal discharge (OR 0.56, 95% CI 0.37–0.76). Other hygiene factors found to be significantly associated with reduced TF/TI included face washing at least once daily (OR 0.76, 95% CI 0.57–0.96), face washing at least twice daily (OR 0.85, 95% CI 0.80–0.90), soap use (OR 0.76, 95% CI 0.59–0.93), towel use (OR 0.65, 95% CI 0.53–0.78), and daily bathing practices (OR 0.76, 95% CI 0.53–0.99). Living within 1 km of a water source was not found to be significantly associated with TF/TI or C. trachomatis infection, and the use of sanitation facilities was not found to be significantly associated with TF/TI.
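The abstract notes that publication bias was assessed using funnel plots; a common companion to visual funnel-plot inspection is Egger's regression, which regresses each study's standardized effect on its precision and reads asymmetry from a non-zero intercept. A minimal sketch (in Python) with invented (log OR, standard error) pairs, not studies from this review:

    # Egger-style asymmetry check to accompany a funnel plot: regress the
    # standardized effect (y / se) on precision (1 / se); an intercept far
    # from zero suggests funnel asymmetry. Data pairs are invented.
    data = [(-0.22, 0.13), (-0.11, 0.08), (-0.29, 0.22),
            (-0.05, 0.06), (-0.35, 0.30)]

    xs = [1 / se for _, se in data]          # precision
    ys = [y / se for y, se in data]          # standardized effect

    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    print(f"Egger intercept {intercept:.2f} (values near 0 suggest symmetry)")

A full Egger test would add a t-test on the intercept; this sketch stops at the point estimate.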
Conclusions
We found strong evidence to support the F and E components of the SAFE strategy. Though limitations included moderate to high heterogeneity, low study quality, and the lack of standard definitions, these findings support the importance of WASH in trachoma elimination strategies and the need for the development of standardized approaches to measuring WASH in trachoma control programs.
Editors' Summary
Background
Trachoma is a bacterial eye infection, which if left untreated may lead to irreversible blindness. Repeated infections over many years cause scarring on the eyelid, making the eyelashes turn inward. This causes pain and damage to the cornea at the front of the eye, which eventually leads to loss of vision. The disease is most common in rural areas in low-income countries, specifically sub-Saharan Africa. It spreads easily through contact with the discharge from an infected eye or nose, by hands, or by flies landing on the face. Women and children are more often affected than men. Trachoma is the world's leading cause of preventable blindness. A global alliance, led by The World Health Organization, is aiming to eliminate trachoma by 2020 by adopting the SAFE strategy. There are four components of this strategy. Two relate to treating the disease—“surgery” and “antibiotics.” The other two components relate to long-term prevention by promoting “facial” cleanliness and “environmental” changes (for example improving access to water and sanitation or reducing the breeding grounds for flies).
Why Was This Study Done?
The SAFE approach has been very successful in reducing the number of people with trachoma from 84 million in 2003 to 21.4 million in 2012. However, it is widely recognized that efforts need to be scaled up to reach the 2020 goal. Furthermore, if current improvements are to be sustained, then more attention needs to be given to the “F” and “E” elements and effective prevention. This study aimed to identify the most effective ways to improve hygiene, sanitation, and access to water for better trachoma control, and to find better ways of monitoring progress. The overall goal was to summarize the evidence in order to devise strategic and cost-effective approaches to trachoma prevention.
What Did the Researchers Do and Find?
The researchers conducted a systematic review, which involved first identifying and then assessing the quality of all of the research published on this topic. They then carried out a statistical analysis of the combined data from these studies, with the aim of drawing more robust conclusions (a meta-analysis). The analysis involved 15 different water, sanitation, and hygiene exposures (either hardware or practices, as determined by what was available in the literature) to determine which had the biggest impact on reducing the levels of trachoma. Most of the data came from studies carried out in Africa. The findings suggested that 11 of these exposures made a significant difference to the risk of infection or clinical symptoms of the disease. Improving personal hygiene had the greatest impact. Effective measures included face washing once or twice a day, using soap, using a towel, and daily bathing. Similarly, access to a sanitation facility, rather than open ground, also had a positive impact. The researchers also analyzed the data relating to water access. However, the studies so far have not yet measured this in a way that addresses the issues relevant to trachoma infection. Most studies have looked at whether the distance from a water source has an impact (and it seems it does not), whereas it may be more important to assess whether people have access to clean water or to enough water to wash. Many of these analyses require additional research to further clarify the impact of individual water, sanitation, and hygiene exposure on disease.
What Do These Findings Mean?
Overall, the results support the notion that water, sanitation, and hygiene are important components of an integrated strategy to control trachoma. Based on the research available to date, the two most effective measures are face washing and having access to a household-level sanitation facility, typically a simple pit latrine. The findings also point to ways in which current policy could be improved. Firstly, public health guidance should be placing greater emphasis on keeping the face clean. Current advice tends to focus on washing with clean water, but use of soap appears more effective. There are also opportunities for organizations to collaborate in this area. For example, organizations focusing on the prevention of diarrhea in children, which promote handwashing, could at the same time campaign for face washing to reduce transmission of trachoma. The second policy area to target is access to good quality sanitation. Such policy initiatives need to be better resourced in countries where trachoma is a problem. For example, although sub-Saharan Africa has the world's highest burden of trachoma, more than 50% of households there still do not have access to any sanitation facility.
There were a number of limitations to this study, which may affect the strength of the conclusions. The researchers found that many studies on this topic were observational, meaning that they did not assess an intervention and employ a control group, thus they are of limited rigor for assessing the impact of a water, sanitation, and hygiene intervention on trachoma. There was also a lot of variation in the way that different studies had defined and measured improvements to water, sanitation, and hygiene access. This made it difficult to make comparisons. Standard methods and indicators need to be developed for this purpose. The study also highlighted gaps in the research. More work is required to determine precisely what is needed in terms of access to water to reduce the incidence of trachoma. Similarly, in terms of improving sanitation, it is still unclear whether ensuring every household has a simple, onsite facility would be more effective than providing clean communal facilities. The potential role of schools in promoting relevant public health measures also needs investigation.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001605.
WHO provides information on trachoma (in several languages)
The US Centers for Disease Control and Prevention provide information on trachoma
International Trachoma Initiative is dedicated to the goal of elimination of blinding trachoma
The Carter Center: Trachoma Control Program has a Trachoma Health Education Materials Library
WASHNTD has an online manual resource for NTDs for WASH policy and programming
doi:10.1371/journal.pmed.1001605
PMCID: PMC3934994  PMID: 24586120
25.  Care Seeking for Neonatal Illness in Low- and Middle-Income Countries: A Systematic Review 
PLoS Medicine  2012;9(3):e1001183.
Hadley Herbert and colleagues systematically review newborn care-seeking behaviors by caregivers in low- and middle-income countries.
Background
Despite recent progress in reducing child mortality, neonatal deaths remain high, accounting for 41% of all deaths in children under five years of age worldwide, of which over 90% occur in low- and middle-income countries (LMICs). Infections are a leading cause of death, and limitations in care seeking for ill neonates contribute to high mortality rates. As care-seeking behaviors for ill neonates in LMICs have not been systematically reviewed, this review describes care seeking for neonatal illnesses in LMICs, with particular attention to the type of care sought.
Methods and Findings
We conducted a systematic literature review of studies that reported the proportion of caregivers that sought care for ill or suspected ill neonates in LMICs. The initial search yielded 784 studies, of which 22 studies described relevant data from community household surveys, facility-based surveys, and intervention trials. The majority of studies were from South Asia (n = 17/22), set in rural areas (n = 17/22), and published within the last 4 years (n = 18/22). Of the 9,098 neonates who were ill or suspected to be ill, 4,320 caregivers sought some type of care, including care from a health facility (n = 370) or provider (n = 1,813). Care seeking ranged between 10% and 100% among caregivers with a median of 59%. Care seeking from a health care provider yielded a similar range and median, while care seeking at a health care facility ranged between 1% and 100%, with a median of 20%. Care-seeking estimates were limited by the few studies conducted in urban settings and regions other than South Asia. There was a lack of consistency regarding illness, care-seeking, and care provider definitions.
Conclusions
There is a paucity of data regarding newborn care-seeking behaviors; in South Asia, care seeking is low for newborn illness, especially in terms of care sought from health care facilities and medically trained providers. There is a need for representative data to describe care-seeking patterns in different geographic regions and better understand mechanisms to enhance care seeking during this vulnerable time period.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Worldwide, around 3.3 million babies die within their first month of life every year. While the global neonatal mortality rate declined by 28% between 1990 and 2009 (from 33.2 deaths per 1,000 live births to 23.9), the proportion of under-five child deaths that occur in the neonatal period has increased in all regions of the world and currently stands at 41%. Of these deaths, over 90% occur in low- and middle-income countries (LMICs), making the risk of death in the neonatal period in LMICs more than six times higher than in high-income countries. In LMIC settings most babies are born at home, so inappropriate and delayed care seeking can contribute substantially to neonatal mortality. Infection causes over a quarter of all deaths in neonates, but in LMICs diagnosis is often based on nonspecific clinical signs, which may delay the provision of care.
Why Was This Study Done?
In order to improve neonatal survival in LMICs, health care facilities and providers must not only be available and accessible but a baby's caregiver, often a parent or other family member, must also recognize that the baby is ill and seek help. To address this problem with effective strategies, an understanding is needed of the patterns of care-seeking behavior by babies' caregivers in seeking help from health-care facilities or providers. In this study, the researchers explored the extent and nature of care-seeking behaviors by the caregivers of ill babies in LMIC settings.
What Did the Researchers Do and Find?
Using multiple databases, the researchers conducted a comprehensive review up until October 2011 of all relevant studies, including those that had not been formally published. Using specific criteria, the researchers then identified 22 appropriate studies (out of a possible 784) and recorded the same information from each study, including the number of neonates with illness or suspected illness, the number of caregivers who sought care, and where care was sought. They also assessed the quality of each included study (the majority of which were from rural areas in South Asia) on the basis of a validated method for reviewing intervention effectiveness. The researchers found that the definitions of neonatal illness and care-seeking behavior varied considerably between studies or were absent altogether. Because of these inherent study differences it was not possible to statistically combine the results from the identified studies using a technique called meta-analysis; instead, the researchers reported literature estimates and described their findings narratively.
The researchers' analysis included 9,098 neonates who were identified in community-based studies as being ill or suspected of being ill and a total of 4,320 related care-seeking events: care seeking ranged between 10% and 100% among caregivers, including seeking care from a health facility (370) or from a health provider (1,813). Furthermore, between 4% and 100% of caregivers sought care from a trained medical provider, and 4% to 48% specified receiving care at a health care facility: caregivers typically sought help from primary health care, secondary health care, and pharmacies, and some from an unqualified health provider. The researchers also identified seven community-based intervention studies that included interventions such as essential newborn care, birth preparedness, and illness recognition, all of which showed an increase in care seeking following the intervention.
What Do These Findings Mean?
These findings highlight the lack of a standardized and consistent approach to neonate care-seeking behaviors described in the literature. However, despite the large variations of results, care seeking for newborn illnesses in LMICs appears to be low in general and remains a key challenge to improving neonatal mortality. Global research efforts to define, understand, and address care seeking, may help to reduce the global burden of neonatal mortality. However, to achieve sustainable improvements in neonatal survival, changes are needed to both increase the demand for newborn care and strengthen health care systems to improve access to, and quality of, care. This review also shows that there is a role for interventions within the community to encourage appropriate and timely care seeking. Finally, by addressing the inconsistencies and establishing standardized terms to identify barriers to care, future studies may be able to better generalize the factors and delays that influence neonatal care seeking.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001183.
A recent PLoS Medicine study has the latest figures on neonatal mortality worldwide
UNICEF provides information about progress toward United Nations Millennium Development Goal 4
UNICEF also has information about neonatal mortality
The United Nations Population Fund has information on home births
doi:10.1371/journal.pmed.1001183
PMCID: PMC3295826  PMID: 22412355
