1.  Human metabolic profiles are stably controlled by genetic and environmental variation 
A comprehensive variation map of the human metabolome identifies genetic and stable-environmental sources as major drivers of metabolite concentrations. The data suggest that sample sizes of a few thousand are sufficient to detect metabolite biomarkers predictive of disease.
We designed a longitudinal twin study to characterize the genetic, stable-environmental, and longitudinally fluctuating influences on metabolite concentrations in two human biofluids—urine and plasma—focusing specifically on the representative subset of metabolites detectable by 1H nuclear magnetic resonance (1H NMR) spectroscopy. We identified widespread genetic and stable-environmental influences on the urine and plasma metabolomes, with on average 30% and 42% of variation, respectively, attributable to familial sources, and 47% and 60% attributable to longitudinally stable sources. Ten of the metabolites annotated in the study are estimated to have >60% familial contribution to their variation in concentration. Our findings have implications for the design and interpretation of 1H NMR-based molecular epidemiology studies. On the basis of the stable component of variation quantified in the current paper, we specified a model of disease association under which we inferred that sample sizes of a few thousand should be sufficient to detect disease-predictive metabolite biomarkers.
Metabolites are small molecules involved in biochemical processes in living systems. Their concentration in biofluids, such as urine and plasma, can offer insights into the functional status of biological pathways within an organism, and reflect input from multiple levels of biological organization—genetic, epigenetic, transcriptomic, and proteomic—as well as from environmental and lifestyle factors. Metabolite levels have the potential to indicate a broad variety of deviations from the ‘normal’ physiological state, such as those that accompany a disease, or an increased susceptibility to disease. A number of recent studies have demonstrated that metabolite concentrations can be used to diagnose disease states accurately. A more ambitious goal is to identify metabolite biomarkers that are predictive of future disease onset, providing the possibility of intervention in susceptible individuals.
If an extreme concentration of a metabolite is to serve as an indicator of disease status, it is usually important to know the distribution of metabolite levels among healthy individuals. It is also useful to characterize the sources of that observed variation in the healthy population. A proportion of that variation—the heritable component—is attributable to genetic differences between individuals, potentially at many genetic loci. An effective molecular indicator of a heritable, complex disease is likely to have a substantive heritable component. Non-heritable biological variation in metabolite concentrations can arise from a variety of environmental influences, such as dietary intake, lifestyle choices, general physical condition, composition of gut microflora, and use of medication. Variation across a population in stable-environmental influences leads to long-term differences between individuals in their baseline metabolite levels. Dynamic environmental pressures lead to short-term fluctuations within an individual about their baseline level. A metabolite whose concentration changes substantially in response to short-term pressures is relatively unlikely to offer long-term prediction of disease. In summary, the potential suitability of a metabolite to predict disease is reflected by the relative contributions of heritable and stable/unstable-environmental factors to its variation in concentration across the healthy population.
Twin studies are an established approach for quantifying the heritable component of phenotypes in human populations. Monozygotic (MZ) twins share the same DNA genome-wide, while dizygotic (DZ) twins share approximately half their inherited DNA, as do ordinary siblings. By comparing the average extent of phenotypic concordance within MZ pairs to that within DZ pairs, it is possible to quantify the heritability of a trait, and also to quantify the familiality, which refers to the combination of heritable and common-environmental effects (i.e., environmental influences shared by twins in a pair). In addition to incorporating twins into the study design, it is useful to quantify the phenotype in some individuals at multiple time points. The longitudinal aspect of such a study allows environmental effects to be decomposed into those that affect the phenotype over the short term and those that exert stable influence.
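To make the MZ/DZ comparison concrete, here is a minimal sketch of the classical Falconer-style ACE decomposition from twin-pair correlations. It is an illustration only, not the bespoke longitudinal method this paper develops; the function name and the simulated correlations are invented for the example.

```python
import numpy as np

def falconer_estimates(mz_pairs, dz_pairs):
    """Classical twin-study variance decomposition from pair correlations.
    mz_pairs, dz_pairs: arrays of shape (n_pairs, 2) holding a trait value
    (e.g., a metabolite concentration) for each twin in a pair."""
    r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
    r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
    h2 = 2 * (r_mz - r_dz)   # additive genetic share (heritability, A)
    c2 = 2 * r_dz - r_mz     # common-environment share (C)
    e2 = 1 - r_mz            # unique environment plus measurement noise (E)
    return h2, c2, e2

# Simulated pairs with MZ correlation ~0.6 and DZ correlation ~0.35
rng = np.random.default_rng(0)
def simulate_pairs(r, n):
    return rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=n)

h2, c2, e2 = falconer_estimates(simulate_pairs(0.6, 500), simulate_pairs(0.35, 500))
print(f"A~{h2:.2f}  C~{c2:.2f}  E~{e2:.2f}  familiality (A+C)~{h2 + c2:.2f}")
```

Familiality as used in this abstract corresponds to A + C, the component shared within twin pairs.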
For the current study, urine and blood samples were collected from a cohort of MZ and DZ twins, with some twins donating samples on two occasions several months apart. Samples were analysed by 1H nuclear magnetic resonance (1H NMR) spectroscopy—an untargeted, discovery-driven technique for quantifying metabolite concentrations in biological samples. The application of 1H NMR to a biological sample creates a spectrum, made up of multiple peaks, with each peak's size quantitatively representing the concentration of its corresponding hydrogen-containing metabolite.
From each biological sample in our study, we extracted a full set of peaks, and thereby quantified the concentrations of all common plasma and urine metabolites detectable by 1H NMR. We developed bespoke statistical methods to decompose the observed concentration variation at each metabolite peak into that originating from familial, individual-environmental, and unstable-environmental sources.
We quantified the variability landscape across all common metabolite peaks in the urine and plasma 1H NMR metabolomes. We annotated a subset of peaks with a total of 65 metabolites; the variance decompositions for these are shown in Figure 1. Ten metabolites' concentrations were estimated to have familial contributions in excess of 60%. The average proportion of stable variation across all extracted metabolite peaks was estimated to be 47% in the urine samples and 60% in the plasma samples; the average estimated familiality was 30% for urine and 42% for plasma. These results comprise the first quantitative variation map of the 1H NMR metabolome. The identification and quantification of substantive widespread stability provides support for the use of these biofluids in molecular epidemiology studies. On the basis of our findings, we performed power calculations for a hypothetical study searching for predictive disease biomarkers among 1H NMR-detectable urine and plasma metabolites. Our calculations suggest that sample sizes of 2,000–5,000 should allow reliable identification of disease-predictive metabolite concentrations explaining 5–10% of disease risk, while larger sample sizes of 5,000–20,000 would be required to identify metabolite concentrations explaining 1–2% of disease risk.
1H nuclear magnetic resonance (1H NMR) spectroscopy is increasingly used to measure metabolite concentrations in sets of biological samples for top-down systems biology and molecular epidemiology. For such purposes, knowledge of the sources of human variation in metabolite concentrations is valuable, but currently sparse. We conducted and analysed a study to create such a resource. In our unique design, identical and non-identical twin pairs donated plasma and urine samples longitudinally. We acquired 1H NMR spectra on the samples, and statistically decomposed variation in metabolite concentration into familial (genetic and common-environmental), individual-environmental, and longitudinally unstable components. We estimate that stable variation, comprising familial and individual-environmental factors, accounts on average for 60% (plasma) and 47% (urine) of biological variation in 1H NMR-detectable metabolite concentrations. Clinically predictive metabolic variation is likely nested within this stable component, so our results have implications for the effective design of biomarker-discovery studies. We provide a power-calculation method which reveals that sample sizes of a few thousand should offer sufficient statistical precision to detect 1H NMR-based biomarkers quantifying predisposition to disease.
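As a rough illustration of the mechanics of such a power calculation (the paper's actual disease-association model is more elaborate), the sketch below sizes a two-group comparison of a metabolite explaining a given fraction of risk variance, with a Bonferroni-style metabolome-wide significance threshold. The number of tested peaks, the variance-to-effect-size conversion, and the resulting sample sizes are all assumptions for the example and will not reproduce the paper's figures.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

def total_n(r2, alpha, power=0.8):
    """Subjects needed to detect a metabolite explaining a fraction r2 of
    risk variance, treated crudely as a two-sample mean shift."""
    r = np.sqrt(r2)
    d = 2 * r / np.sqrt(1 - r2)          # point-biserial r -> Cohen's d
    n1 = TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=power)
    return 2 * n1                        # two equal groups

alpha = 0.05 / 5000                      # assumed ~5,000 peaks tested
for r2 in (0.01, 0.02, 0.05, 0.10):
    print(f"{r2:4.0%} of risk variance -> ~{total_n(r2, alpha):,.0f} subjects")
```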
doi:10.1038/msb.2011.57
PMCID: PMC3202796  PMID: 21878913
biomarker; 1H nuclear magnetic resonance spectroscopy; metabolome-wide association study; top-down systems biology; variance decomposition
2.  The Effectiveness of Web-Based vs. Non-Web-Based Interventions: A Meta-Analysis of Behavioral Change Outcomes 
Background
A primary focus of self-care interventions for chronic illness is the encouragement of an individual's behavior change necessitating knowledge sharing, education, and understanding of the condition. The use of the Internet to deliver Web-based interventions to patients is increasing rapidly. In a 7-year period (1996 to 2003), there was a 12-fold increase in MEDLINE citations for “Web-based therapies.” The use and effectiveness of Web-based interventions to encourage an individual's change in behavior compared to non-Web-based interventions have not been substantially reviewed.
Objective
This meta-analysis was undertaken to provide further information on patient/client knowledge and behavioral change outcomes after Web-based interventions as compared to outcomes seen after implementation of non-Web-based interventions.
Methods
The MEDLINE, CINAHL, Cochrane Library, EMBASE, ERIC, and PsycINFO databases were searched for relevant citations between the years 1996 and 2003. Identified articles were retrieved, reviewed, and assessed according to established criteria for quality and inclusion/exclusion in the study. Twenty-two articles were deemed appropriate for the study and selected for analysis. Effect sizes were calculated to ascertain a standardized difference between the intervention (Web-based) and control (non-Web-based) groups by applying the appropriate meta-analytic technique. Homogeneity analysis, forest plot review, and sensitivity analyses were performed to ascertain the comparability of the studies.
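For readers unfamiliar with the quantities reported in the Results below, here is a small self-contained sketch of a standardized effect size (Hedges' g) and Cochran's homogeneity statistic Q; the study means, standard deviations, and sample sizes are invented for illustration.

```python
import numpy as np

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference between intervention and control,
    with Hedges' small-sample correction; returns (g, variance of g)."""
    sp = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    j = 1 - 3 / (4 * (n_t + n_c) - 9)    # small-sample bias correction
    g = j * d
    var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
    return g, var_g

def cochran_q(effects, variances):
    """Homogeneity statistic Q; a large Q relative to a chi-square with
    k-1 df suggests the studies estimate different true effects."""
    w = 1 / np.asarray(variances)
    e = np.asarray(effects)
    pooled = np.sum(w * e) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (e - pooled)**2)
    return pooled, q

# Two hypothetical studies (all inputs made up for the example)
g1 = hedges_g(12.0, 4.0, 60, 10.5, 4.2, 58)
g2 = hedges_g(30.1, 9.5, 120, 29.8, 9.7, 115)
pooled, q = cochran_q([g1[0], g2[0]], [g1[1], g2[1]])
print(f"g1={g1[0]:.2f}, g2={g2[0]:.2f}, pooled={pooled:.2f}, Q={q:.2f}")
```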
Results
Aggregation of participant data revealed a total of 11,754 participants (5,841 women and 5,729 men). The average age of participants was 41.5 years. In those studies reporting attrition rates, the average dropout rate was 21% for both the intervention and control groups. For the five Web-based studies that reported usage statistics, time spent/session/person ranged from 4.5 to 45 minutes. Session logons/person/week ranged from 2.6 logons/person over 32 weeks to 1008 logons/person over 36 weeks. The intervention designs included one-time Web-participant health outcome studies compared to non-Web participant health outcomes, self-paced interventions, and longitudinal, repeated measure intervention studies. Longitudinal studies ranged from 3 weeks to 78 weeks in duration. The effect sizes for the studied outcomes ranged from -.01 to .75. Broad variability in the focus of the studied outcomes precluded the calculation of an overall effect size for the compared outcome variables in the Web-based compared to the non-Web-based interventions. Homogeneity statistic estimation also revealed widely differing study parameters (Qw16 = 49.993, P ≤ .001). There was no significant association between study length and effect size. Sixteen of the 17 studied effect outcomes revealed improved knowledge and/or improved behavioral outcomes for participants using the Web-based interventions. Five studies provided group information to compare the validity of Web-based vs. non-Web-based instruments using one-time cross-sectional studies. These studies revealed effect sizes ranging from -.25 to +.29. Homogeneity statistic estimation again revealed widely differing study parameters (Qw4 = 18.238, P ≤ .001).
Conclusions
The effect size comparisons in the use of Web-based interventions compared to non-Web-based interventions showed an improvement in outcomes for individuals using Web-based interventions to achieve the specified knowledge and/or behavior change for the studied outcome variables. These outcomes included increased exercise time, increased knowledge of nutritional status, increased knowledge of asthma treatment, increased participation in healthcare, slower health decline, improved body shape perception, and 18-month weight loss maintenance.
doi:10.2196/jmir.6.4.e40
PMCID: PMC1550624  PMID: 15631964
Web-based intervention; non-Web-based intervention; Web-based therapy; Internet; meta-analysis; patient outcomes; adults
3.  Minimizing attrition bias: a longitudinal study of depressive symptoms in an elderly cohort 
Background
Attrition from mortality is common in longitudinal studies of the elderly. Ignoring the resulting non-response or missing data can bias study results.
Methods
1260 elderly participants underwent biennial follow-up assessments over 10 years. Many missed one or more assessments over this period. We compared three statistical models to evaluate the impact of missing data on an analysis of depressive symptoms over time. The first analytic model (a generalized mixed model) treated non-response as data missing at random. The other two models used shared parameter methods; each had a different specification for dropout, but both jointly modeled outcome and dropout through a common random effect.
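The contrast between the missing-at-random model and the shared parameter models can be illustrated with a small simulation under invented parameters: a latent random effect raises both a subject's symptom level and their chance of missing an assessment, so the data are missing not at random and naive per-wave means understate the true trajectory. This is a caricature of the mechanism, not a reimplementation of the authors' models.

```python
import numpy as np

rng = np.random.default_rng(1)
n, waves = 2000, 5
b = rng.normal(0.0, 1.0, n)      # latent effect shared by outcome and dropout
true_slope = 0.3                 # true symptom increase per wave

observed_means = []
for t in range(waves):
    y = 2.0 + true_slope * t + b + rng.normal(0.0, 0.5, n)
    # probability of missing the assessment rises with b and with time
    p_miss = 1 / (1 + np.exp(-(b - 2 + 0.3 * t)))
    responded = rng.random(n) > p_miss
    observed_means.append(y[responded].mean())

# High-b (sicker) subjects respond less often, flattening the observed trend
print("observed wave means:", np.round(observed_means, 2))
print("true wave means:    ", [round(2.0 + true_slope * t, 2) for t in range(waves)])
```

A shared parameter model addresses this by letting the outcome submodel and the dropout submodel share the random effect b, so the dropout pattern informs the trajectory estimate.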
Results
The presence of depressive symptoms was associated with being female, having less education, functional impairment, using more prescription drugs, and taking antidepressant drugs. In all three models, the same variables were significantly associated with depression and in the same direction. However, the strength of the associations differed widely between the generalized mixed model and the shared parameter models. Although the two shared parameter models had different assumptions about the dropout process, they yielded similar estimates for the outcome. One model fitted the data better, and the other was computationally faster.
Conclusions
Dropout does not occur randomly in longitudinal studies of the elderly. Thus, simply ignoring it can yield biased results. Shared parameter models are a powerful, flexible, and easily implemented tool for analyzing longitudinal data while minimizing bias due to nonrandom attrition.
doi:10.1017/S104161020900876X
PMCID: PMC2733930  PMID: 19288971
discrete failure time model; dropout; non-ignorable nonresponse; shared parameter model; Weibull model
4.  Progress toward Global Reduction in Under-Five Mortality: A Bootstrap Analysis of Uncertainty in Millennium Development Goal 4 Estimates 
PLoS Medicine  2012;9(12):e1001355.
Leontine Alkema and colleagues use a bootstrap procedure to assess the uncertainty around the estimates of the under-five mortality rate produced by the United Nations Inter-Agency Group for Child Mortality Estimation.
Background
Millennium Development Goal 4 calls for an annual rate of reduction (ARR) of the under-five mortality rate (U5MR) of 4.4% between 1990 and 2015. Progress is measured through the point estimates of the United Nations Inter-agency Group for Child Mortality Estimation (UN IGME). To facilitate evidence-based conclusions about progress toward the goal, we assessed the uncertainty in the estimates arising from sampling errors and biases in data series and the inferior quality of specific data series.
Methods and Findings
We implemented a bootstrap procedure to construct 90% uncertainty intervals (UIs) for the U5MR and ARR to complement the UN IGME estimates. We constructed the bounds for all countries without a generalized HIV epidemic, where a standard estimation approach is carried out (174 countries). In the bootstrap procedure, potential biases in levels and trends of data series of different source types were accounted for. There is considerable uncertainty about the U5MR, particularly for high mortality countries and in recent years. Among 86 countries with a U5MR of at least 40 deaths per 1,000 live births in 1990, the median width of the UI, relative to the U5MR level, was 19% for 1990 and 48% for 2011, with the increase in uncertainty due to more limited data availability. The median absolute width of the 90% UI for the ARR from 1990 to 2011 was 2.2%. Although the ARR point estimate for all high mortality countries was greater than zero, for eight of them uncertainty included the possibility of no improvement between 1990 and 2011. For 13 countries, it is deemed likely that the ARR from 1990 to 2011 exceeded 4.4%.
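The following sketch shows the mechanics of a pairs bootstrap for a 90% uncertainty interval around the ARR, under strong simplifying assumptions: a single log-linear U5MR series with lognormal noise. The UN IGME procedure additionally perturbs source-type biases and data-quality weights, which is omitted here, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1990, 2012)
true_u5mr = 120 * np.exp(-0.025 * (years - 1990))        # deaths per 1,000 births
obs = true_u5mr * rng.lognormal(0.0, 0.08, len(years))   # noisy observed series

def fit_arr(y, u5mr):
    """Annual rate of reduction (% per year) from a log-linear fit."""
    slope = np.polyfit(y, np.log(u5mr), 1)[0]
    return -100.0 * slope

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(years), len(years))        # resample year-points
    boot.append(fit_arr(years[idx], obs[idx]))
lo, hi = np.percentile(boot, [5, 95])
print(f"ARR estimate {fit_arr(years, obs):.2f}%/yr, 90% UI [{lo:.2f}, {hi:.2f}]")
```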
Conclusions
In light of the upcoming evaluation of Millennium Development Goal 4 in 2015, uncertainty assessments need to be taken into account to avoid unwarranted conclusions about countries' progress based on limited data.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In September 2000, world leaders adopted the United Nations Millennium Declaration, committing member states (countries) to a new global partnership to reduce extreme poverty and improve global health by setting out a series of time-bound targets with a deadline of 2015—the Millennium Development Goals (MDGs). There are eight MDGs and the fourth, MDG 4, focuses on reducing the number of deaths in children aged under five years by two-thirds from the 1990 level. Monitoring progress towards meeting all of the MDG targets is of vital importance to measure the effectiveness of interventions and to prioritize slow progress areas. MDG 4 has three specific indicators, and every year, the United Nations Inter-agency Group for Child Mortality Estimation (the UN IGME, which includes the key agencies the United Nations Children's Fund, the World Health Organization, the World Bank, and the United Nations Population Division) produces and publishes estimates of child death rates for all countries.
Why Was This Study Done?
Many poorer countries do not have the infrastructure and the functioning vital registration systems in place to record the number of child deaths. Therefore, it is difficult to accurately assess levels and trends in the rate of child deaths because there is limited information (data) or because the data that exists may be inaccurate or of poor quality. In order to deal with this situation, analyzing trends in under-five child death rates (to show progress towards MDG 4) currently focuses on the “best” estimates from countries, a process that relies on “point” estimates. But this practice can lead to inaccurate results and comparisons. It is therefore important to identify a framework for calculating the uncertainty surrounding these estimates. In this study, the researchers use a statistical method to calculate plausible uncertainty intervals for the estimates of death rates in children aged under five years and the yearly reduction in those rates.
What Did the Researchers Do and Find?
The researchers used the publicly available information from the UN IGME 2012 database, which collates data from a variety of sources, and a statistical method called bootstrapping to construct uncertainty levels for 174 countries out of 195 countries for which the UN IGME published estimates in 2012. This new method improves current practice for estimating the extent of data errors, as it takes into account the structure and (potentially poor) quality of the data. The researchers used 90% as the uncertainty level and categorized countries according to the likelihood of meeting the MDG 4 target.
Using these methods, the researchers found that in countries with high child mortality rates (40 or more deaths per 1,000 children in 1990), there was a lot of uncertainty (wide uncertainty intervals) about the levels and trends of death rates in children aged under five years, especially more recently, because of the limited availability of data. Overall, in 2011 the median width of the uncertainty interval for the child death rate was 48% among the 86 countries with high death rates, compared to 19% in 1990. Using their new method, the researchers found that for eight countries, it is not clear whether any progress had been made in reducing child mortality, but for 13 countries, it is deemed likely that progress exceeded the MDG 4 target.
What Do These Findings Mean?
These findings suggest that new uncertainty assessments constructed by a statistical method called bootstrapping can provide more insights into countries' progress in reducing child mortality and meeting the MDG 4 target. As demonstrated in this study, when data are limited, uncertainty intervals should be taken into account when estimating progress towards MDG 4 in order to give more accurate assessments of a country's progress, thus allowing for more realistic comparisons and conclusions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001355.
The UN website has more information about the Millennium Development Goals, including country-specific data
More information is available from UNICEF's ChildInfo website about the UN IGME and child mortality
All UN IGME child mortality estimates and data are available via CME Info
Countdown to 2015 tracks coverage levels for health interventions proven to reduce child mortality and proposes new actions to reach MDG 4
doi:10.1371/journal.pmed.1001355
PMCID: PMC3519895  PMID: 23239945
5.  Interrupted Time-Series Analysis of Regulations to Reduce Paracetamol (Acetaminophen) Poisoning 
PLoS Medicine  2007;4(4):e105.
Background
Paracetamol (acetaminophen) poisoning is the leading cause of acute liver failure in Great Britain and the United States. Successful interventions to reduce harm from paracetamol poisoning are needed. To achieve this, the government of the United Kingdom introduced legislation in 1998 limiting the pack size of paracetamol sold in shops. Several studies have reported recent decreases in fatal poisonings involving paracetamol. We use interrupted time-series analysis to evaluate whether the recent fall in the number of paracetamol deaths is different to trends in fatal poisoning involving aspirin, paracetamol compounds, antidepressants, or nondrug poisoning suicide.
Methods and Findings
We calculated directly age-standardised mortality rates for paracetamol poisoning in England and Wales from 1993 to 2004. We used an ordinary least-squares regression model divided into pre- and postintervention segments at 1999. The model included a term for autocorrelation within the time series. We tested for changes in the level and slope between the pre- and postintervention segments. To assess whether observed changes in the time series were unique to paracetamol, we compared against poisoning deaths involving compound paracetamol (not covered by the regulations), aspirin, antidepressants, and nonpoisoning suicide deaths. We did this comparison by calculating a ratio of each comparison series with paracetamol and applying a segmented regression model to the ratios. No change in the ratio level or slope indicated no difference compared to the control series. There were about 2,200 deaths involving paracetamol. The age-standardised mortality rate rose from 8.1 per million in 1993 to 8.8 per million in 1997, subsequently falling to about 5.3 per million in 2004. After the regulations were introduced, deaths dropped by 2.69 per million (p = 0.003). Trends in the age-standardised mortality rate for paracetamol compounds, aspirin, and antidepressants were broadly similar to paracetamol, increasing until 1997 and then declining. Nondrug poisoning suicide also declined during the study period, but was highest in 1993. The segmented regression models showed that the age-standardised mortality rate for compound paracetamol dropped less after the regulations (p = 0.012) but declined more rapidly afterward (p = 0.031). However, age-standardised rates for aspirin and antidepressants fell in a similar way to paracetamol after the regulations. Nondrug poisoning suicide declined at a similar rate to paracetamol after the regulations were introduced.
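A minimal version of the segmented regression described above might look as follows. The break year and the 1993, 1997, and 2004 rates are taken from the abstract, the intermediate rate values are invented, and HAC standard errors stand in for the paper's explicit autocorrelation term.

```python
import numpy as np
import statsmodels.api as sm

# Age-standardised deaths per million; 1993, 1997, and 2004 values from the
# abstract, the rest invented to fill out the series for illustration.
years = np.arange(1993, 2005)
rate = np.array([8.1, 8.3, 8.6, 8.7, 8.8, 8.4, 6.9, 6.5, 6.2, 5.9, 5.6, 5.3])

t = years - years[0]                            # time since series start
post = (years >= 1999).astype(float)            # post-intervention indicator
t_post = np.where(post == 1, years - 1999, 0)   # time since intervention

X = sm.add_constant(np.column_stack([t, post, t_post]))
# HAC (Newey-West) errors as a simple way to allow for autocorrelation
model = sm.OLS(rate, X).fit(cov_type="HAC", cov_kwds={"maxlags": 2})
print(model.summary(xname=["const", "trend", "level_change", "slope_change"]))
```

The coefficient on level_change estimates the immediate drop at the break, and slope_change estimates how the post-intervention trend differs from the pre-intervention one.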
Conclusions
Introduction of regulations to limit availability of paracetamol coincided with a decrease in paracetamol-poisoning mortality. However, fatal poisoning involving aspirin, antidepressants, and to a lesser degree, paracetamol compounds, also showed similar trends. This raises the question whether the decline in paracetamol deaths was due to the regulations or was part of a wider trend in decreasing drug-poisoning mortality. We found little evidence to support the hypothesis that the 1998 regulations limiting pack size resulted in a greater reduction in poisoning deaths involving paracetamol than occurred for other drugs or nondrug poisoning suicide.
Analysis of mortality rates for paracetamol poisoning in England and Wales does not support the view that regulations limiting pack size have been responsible for a reduction in deaths.
Editors' Summary
Background.
Paracetamol—known as acetaminophen in the United States—is a cheap and effective painkiller. It is widely used to relieve minor aches and pains as well as fevers and headaches. Recommended doses of paracetamol are considered safe in humans, but overdoses are toxic and can cause liver failure and death. Because this drug is very easy to get hold of, there are many overdoses each year, either accidental or deliberate. In the UK, paracetamol poisoning is the most common cause of acute liver failure. Toward the end of 1998, new laws were introduced in the UK to try to reduce the number of paracetamol overdoses. These laws said that pharmacies could not sell packs of paracetamol containing more than 32 tablets and other shops could not sell packs with more than 16 tablets. One of the reasons behind the introduction of this law was that many suicides are not preplanned and, therefore, if it was harder for people to get hold of or keep large quantities of tablets, they might be less likely to attempt suicide or accidentally overdose.
Why Was This Study Done?
Following the introduction of these new laws, the number of deaths caused by paracetamol overdose in the UK dropped. However, it is possible that the drop in deaths came about for a variety of different reasons and not just as a result of the new laws on paracetamol pack size. For example, the suicide rate might have been falling anyway due to other changes in society and the fall in death rate from paracetamol might just have been part of that trend. It is important to find out whether the legal changes that were introduced to address a public health problem did in fact bring about a change for the better. This knowledge would also be relevant to other countries that are considering similar changes.
What Did the Researchers Do and Find?
The researchers used data from the Office of National Statistics, which holds information on drug poisoning deaths in England and Wales. These data were then broken down by the type of drug that was mentioned on the death certificate. The researchers compared death rates involving the following drugs: paracetamol; paracetamol-containing compounds (which were not subject to the new pack size laws); aspirin; antidepressant drugs; and then finally non-drug poisoning suicides. The reason for comparing death rates involving paracetamol against death rates involving other drugs, or non-drug suicide, was that this method would allow the researchers to see if the drop in paracetamol deaths followed overall trends in the poisoning or suicide rates or not. If the paracetamol death rate dropped following introduction of the new laws but the rates of other types of poisoning or suicide did not, then there would be a link between the new laws and a fall in paracetamol suicides. The researchers compared these death data within specific time periods before the end of 1998 (when the new laws on paracetamol pack size were introduced) and after.
Overall, there were nearly 2,200 deaths involving paracetamol between 1993 and 2004. The number of deaths per year involving paracetamol dropped substantially when comparing the periods of time before the end of 1998 and after it. However, the number of deaths per year involving any drug, and the non-drug suicides, also fell during this period of time. When comparing the trends for paracetamol deaths with other poisoning or suicide deaths, the researchers did not find any statistical evidence that the fall in paracetamol deaths was any different to the overall trend in poisoning or suicide death rates.
What Do These Findings Mean?
Although the paracetamol death rate fell immediately following the new laws on pack size, this study suggests the link might just be coincidence. The researchers could not find any data supporting the idea that the new laws caused a drop in paracetamol deaths. However, this was an observational study, not a true experimental one: the researchers here were clearly not able to set up equivalent “experimental” and “control” groups for comparison. It is very difficult to prove or disprove conclusively that new laws such as this are, or are not, effective.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040105
Information is available from Medline Plus about suicide
Wikipedia has an entry on paracetamol (note that Wikipedia is an internet encyclopedia anyone can edit)
Information about regulation of drugs in the UK is available from the Medicines and Healthcare Regulatory Agency
The Office for National Statistics provides key economic and social data about the UK, and is involved in many other important projects
doi:10.1371/journal.pmed.0040105
PMCID: PMC1845154  PMID: 17407385
6.  Evaluation of medical and health economic effectiveness of non-pharmacological secondary prevention of coronary heart disease 
Background
Coronary heart disease (CHD) is a common and potentially fatal disease with a lifetime prevalence of over 20%. For Germany, the mortality attributable to chronic ischemic heart disease or acute myocardial infarction is estimated at 140,000 deaths per year. An association between prognosis of CHD and lifestyle risk factors has been consistently shown. To positively influence lifestyle risk factors in patients with CHD, non-pharmaceutical secondary prevention strategies are frequently recommended and implemented.
Objectives
The aim of this health technology assessment (HTA) is to summarise the current literature on strategies for non-pharmaceutical secondary prevention in patients with CHD and to evaluate their medical effectiveness/efficacy and cost-effectiveness as well as the ethical, social and legal implications. In addition, this report aims to compare the effectiveness and efficacy of different intervention components and to evaluate the generalisability with regard to the German context.
Methods
Relevant publications were identified by means of a structured search of databases accessed through the German Institute of Medical Documentation and Information (DIMDI). In addition, a manual search of identified reference lists was conducted. The present report includes German and English literature published between January 2003 and September 2008 targeting adults with CHD. The methodological quality of included studies was assessed according to pre-defined quality criteria, based on the criteria of evidence-based medicine.
Results
Of 9,074 publications identified, 43 medical publications met the inclusion criteria. Overall study quality is satisfactory, but only half the studies report overall mortality or cardiac mortality as an outcome, while the remaining studies report less reliable outcome parameters. The follow-up duration varies between 12 and 120 months. Although the overall effectiveness of non-pharmaceutical secondary prevention programs shows considerable heterogeneity, there is evidence for long-term effectiveness concerning mortality, recurrent cardiac events and quality of life. Interventions based on exercise and also multicomponent interventions report more conclusive evidence for reducing mortality, while interventions focusing on psychosocial risk factors seem to be more effective in improving quality of life. Only two studies from Germany fulfill the methodological criteria and are included in this report.
Additionally, 25 economic publications met the inclusion criteria. Both the quantity and the quality of publications dealing with combined interventions are higher than those of publications investigating single-component interventions. However, there are difficulties in transferring the international results to the German health care system because of the specific structure of its rehabilitation system. While the international literature mostly shows a positive cost-effectiveness ratio for combined programs, almost without exception these studies investigate out-of-hospital or home-based programs. The examination of publications evaluating the cost-effectiveness of single interventions shows only a positive trend for exercise-based and smoking-cessation programs. Due to a lack of appropriate studies, no conclusive evidence regarding psychosocial and dietary interventions is available.
Altogether, eleven publications concerned with ethical or social issues of non-pharmacological secondary prevention strategies are included. These studies largely confirm the assumption that patients with a lower socioeconomic background represent a population at increased risk and therefore have a particular need to participate in rehabilitation programs. However, it remains uncertain whether these patients participate in rehabilitation more or less often than others. Identified barriers that deter patients from attending include lack of motivation, family commitments, and the distance between home and rehabilitation centres. Psychological factors such as anxiety, depression, and uncertainty, as well as physical constraints, are also pointed out.
Discussion
Non-pharmacological secondary preventive strategies are safe and effective in improving mortality, morbidity and quality of life in patients with CHD. Because of the small number of reliable studies with long-term follow-up beyond 60 months, the sustainability of observed intervention effects has to be regarded with caution. Due to a lack of suitable studies, it was not possible to conclusively determine the effectiveness of interventions in important patient subgroups, or the comparative effectiveness of different intervention strategies. Future research should, among other things, attempt to investigate these questions in methodologically rigorous studies.
With regard to the cost-effectiveness of non-pharmacological interventions, international studies show positive results overall. However, there are considerable limitations due to the qualitative and quantitative deficiencies of the identified studies. The special characteristics of the German rehabilitation system, with its primarily inpatient services, create further difficulties when trying to transfer international study results to the German health care system. Neither studies demonstrating the cost-effectiveness of inpatient programs nor studies investigating the cost-effectiveness of single interventions are currently available. To examine the German rehabilitation programs concerning their efficiency and their potential for optimisation, there is a need for further research.
Concerning social and ethical issues, the lack of studies addressing the structure of rehabilitation participants in Germany is striking. The same applies to studies examining the reasons for non-participation in non-pharmacological secondary prevention programs. Evidence regarding these questions would provide an informative basis for optimising rehabilitation programs in Germany.
Conclusion
Non-pharmacological secondary prevention interventions are safe and able to reduce mortality from CHD and cardiac events, as well as to improve patients' quality of life. Nevertheless, there is considerable need for research; especially the effectiveness of interventions for important subgroups of CHD patients has to be evaluated. In addition to intervention effectiveness, there is also some evidence that interventions generate an appropriate cost-effectiveness ratio. However, future research should investigate this further. The same applies to the sustainability of secondary prevention programs and patients' reasons for not attending them.
doi:10.3205/hta000078
PMCID: PMC3011286  PMID: 21289903
Coronary heart disease; secondary prevention; prevention, non-pharmacological; effectiveness; cost-effectiveness; efficiency; intervention, psychosocial; intervention, multimodal; exercise; training; reduction, stress; smoking cessation; dietary change; rehabilitation
7.  Packaging Health Services When Resources Are Limited: The Example of a Cervical Cancer Screening Visit 
PLoS Medicine  2006;3(11):e434.
Background
Increasing evidence supporting the value of screening women for cervical cancer once in their lifetime, coupled with mounting interest in scaling up successful screening demonstration projects, presents challenges to public health decision makers seeking to take full advantage of the single-visit opportunity to provide additional services. We present an analytic framework for packaging multiple interventions during a single point of contact, explicitly taking into account a budget and scarce human resources, constraints acknowledged as significant obstacles for provision of health services in poor countries.
Methods and Findings
We developed a binary integer programming (IP) model capable of identifying an optimal package of health services to be provided during a single visit for a particular target population. Inputs to the IP model are derived using state-transition models, which compute lifetime costs and health benefits associated with each intervention. In a simplified example of a single lifetime cervical cancer screening visit, we identified packages of interventions among six diseases that maximized disability-adjusted life years (DALYs) averted subject to budget and human resource constraints in four resource-poor regions. Data were obtained from regional reports and surveys from the World Health Organization, international databases, the published literature, and expert opinion. With only a budget constraint, interventions for depression and iron deficiency anemia were packaged with cervical cancer screening, while the more costly breast cancer and cardiovascular disease interventions were not. Including personnel constraints resulted in shifting of interventions included in the package, not only across diseases but also between low- and high-intensity intervention options within diseases.
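A toy instance of such a binary integer program can be solved with an off-the-shelf MILP solver (SciPy 1.9+ shown here). Every number below (benefits, costs, staff minutes, constraint limits) is invented for illustration; the paper derives its inputs from state-transition models.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Choose interventions to maximize DALYs averted per woman during one visit,
# subject to a budget and two scarce personnel types (all values invented).
names  = ["cervical_screen", "depression_tx", "iron_supplement",
          "breast_exam", "cvd_screen"]
dalys  = np.array([0.50, 0.20, 0.10, 0.30, 0.25])   # benefit per woman
cost   = np.array([8.0,  3.0,  1.0,  6.0,  7.0])    # $ per woman
nurse  = np.array([10.0, 5.0,  2.0,  8.0,  6.0])    # minutes per woman
doctor = np.array([5.0,  2.0,  0.0,  6.0,  8.0])    # minutes per woman

constraints = [
    LinearConstraint(cost,   ub=15.0),   # budget per visit
    LinearConstraint(nurse,  ub=20.0),   # nurse time available
    LinearConstraint(doctor, ub=10.0),   # doctor time available
]
res = milp(c=-dalys,                     # milp minimizes, so negate benefit
           constraints=constraints,
           integrality=np.ones(len(dalys)),   # x_j in {0, 1}
           bounds=Bounds(0, 1))
chosen = [n for n, x in zip(names, res.x) if x > 0.5]
print("package:", chosen, "| DALYs averted:", -res.fun)
```

Lowering the doctor-time limit changes which interventions enter the package, illustrating the personnel effects described in the abstract.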
Conclusions
The results of our example suggest several key themes: Packaging other interventions during a one-time visit has the potential to increase health gains; the shortage of personnel represents a real-world constraint that can impact the optimal package of services; and the shortage of different types of personnel may influence the contents of the package of services. Our methods provide a general framework to enhance a decision maker's ability to simultaneously consider costs, benefits, and important nonmonetary constraints. We encourage analysts working on real-world problems to shift from considering costs and benefits of interventions for a single disease to exploring what synergies might be achievable by thinking across disease burdens.
Jane Kim and colleagues analyzed the possible ways that multiple health interventions might be packaged together during a single visit, taking into account scarce financial and human resources.
Editors' Summary
Background.
Public health decision makers in developed and developing countries are exploring the idea of providing packages of health checks at specific times during a person's lifetime to detect and/or prevent life-threatening diseases such as diabetes, heart problems, and some cancers. Bundling together tests for different diseases has advantages for both health-care systems and patients. It can save time and money for both parties and, by associating health checks with life events such as childbirth, it can take advantage of a valuable opportunity to check on the overall health of individuals who may otherwise rarely visit a doctor. But money and other resources (for example, nurses to measure blood pressure) are always limited, even in wealthy countries, so decision makers have to assess the likely costs and benefits of packages of interventions before putting them into action.
Why Was This Study Done?
Recent evidence suggests that women in developing countries would benefit from a once-in-a-lifetime screen for cervical cancer, a leading cause of cancer death for this population. If such a screening strategy for cervical cancer were introduced, it might provide a good opportunity to offer women other health checks, but it is unclear which interventions should be packaged together. In this study, the researchers have developed an analytic framework to identify an optimal package of health services to offer to women attending a clinic for their lifetime cervical cancer screen. Their model takes into account monetary limitations and possible shortages in trained personnel to do the health checks, and balances these constraints against the likely health benefits for the women.
What Did the Researchers Do and Find?
The researchers developed a “mathematical programming” model to identify an optimal package of health services to be provided during a single visit. They then used their model to estimate the average costs and health outcomes per woman of various combinations of health interventions for 35- to 40-year-old women living in four regions of the world with high adult death rates. The researchers chose breast cancer, cardiovascular disease, depression, anemia caused by iron deficiency, and sexually transmitted diseases as health conditions to be checked in addition to cervical cancer during the single visit. They considered two ways—one cheap in terms of money and people; the other more expensive but often more effective—of checking for or dealing with each potential health problem. When they set a realistic budgetary constraint (based on the annual health budget of the poorest countries and a single health check per woman in the two decades following her reproductive years), the optimal health package generated by the model for all four regions included cervical cancer screening done by testing for human papillomavirus (an effective but complex test), treatment for depression, and screening or treatment for anemia. When a 50% shortage in general (for example, nurses) and specialized (for example, doctors) personnel time was also included, the health benefits of the package were maximized by using a simpler test for cervical cancer and by treating anemia but not depression; this freed up resources in some regions to screen for breast cancer or cardiovascular disease.
What Do These Findings Mean?
The model described by the researchers provides a way to explore the potential advantages of delivering a package of health interventions to individuals in a single visit. Like all mathematical models, its conclusions rely heavily on the data used in its construction. Indeed, the researchers stress that, because they did not have full data on the effectiveness of each intervention and made many other assumptions, their results on their own cannot be used to make policy decisions. Nevertheless, their results clearly show that the packaging of multiple health services during a single visit has great potential to maximize health gains, provided the right interventions are chosen. Most importantly, their analysis shows that in the real world the shortage of personnel, which has been ignored in previous analyses even though it is a major problem in many developing countries, will affect which health conditions and specific interventions should be bundled together to provide the greatest impact on public health.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030434.
The World Health Organization has information on choosing cost-effective health interventions and on human resources for health
The American Cancer Society offers patient information on cervical cancer
The Alliance for Cervical Cancer Prevention includes information about cervical cancer prevention programs in developing countries
doi:10.1371/journal.pmed.0030434
PMCID: PMC1635742  PMID: 17105337
8.  HIV Treatment as Prevention: Systematic Comparison of Mathematical Models of the Potential Impact of Antiretroviral Therapy on HIV Incidence in South Africa 
PLoS Medicine  2012;9(7):e1001245.
Background
Many mathematical models have investigated the impact of expanding access to antiretroviral therapy (ART) on new HIV infections. Comparing results and conclusions across models is challenging because models have addressed slightly different questions and have reported different outcome metrics. This study compares the predictions of several mathematical models simulating the same ART intervention programmes to determine the extent to which models agree about the epidemiological impact of expanded ART.
Methods and Findings
Twelve independent mathematical models evaluated a set of standardised ART intervention scenarios in South Africa and reported a common set of outputs. Intervention scenarios systematically varied the CD4 count threshold for treatment eligibility, access to treatment, and programme retention. For a scenario in which 80% of HIV-infected individuals start treatment on average 1 y after their CD4 count drops below 350 cells/µl and 85% remain on treatment after 3 y, the models projected that HIV incidence would be 35% to 54% lower 8 y after the introduction of ART, compared to a counterfactual scenario in which there is no ART. More variation existed in the estimated long-term (38 y) reductions in incidence. The impact of optimistic interventions including immediate ART initiation varied widely across models, maintaining substantial uncertainty about the theoretical prospect for elimination of HIV from the population using ART alone over the next four decades. The number of person-years of ART per infection averted over 8 y ranged between 5.8 and 18.7. Considering the actual scale-up of ART in South Africa, seven models estimated that current HIV incidence is 17% to 32% lower than it would have been in the absence of ART. Differences between model assumptions about CD4 decline and HIV transmissibility over the course of infection explained only a modest amount of the variation in model results.
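For intuition about what these scenario runs involve, here is a deliberately minimal compartmental sketch (three compartments; invented, uncalibrated rates) of how an ART initiation rate feeds through to an incidence reduction over the 8-year horizon. None of the twelve models is this simple, and the printed number should not be read against the 35% to 54% range above.

```python
import numpy as np
from scipy.integrate import odeint

# Compartments: susceptible S, untreated infectious I, treated T.
beta, eff = 0.35, 0.9      # transmission rate; ART reduction in transmissibility
tau = 0.25                 # per-year ART initiation rate (the scenario dial)
mu, mu_i = 0.02, 0.08      # background and untreated-HIV mortality rates

def deriv(y, t, tau):
    s, i, tr = y
    n = s + i + tr
    foi = beta * (i + (1 - eff) * tr) / n        # force of infection
    return [mu * n - foi * s - mu * s,           # births replace deaths
            foi * s - (tau + mu_i) * i,
            tau * i - mu * tr]

t = np.linspace(0, 8, 97)
y0 = [0.80, 0.20, 0.0]                           # 20% prevalence at baseline
with_art = odeint(deriv, y0, t, args=(tau,))
no_art = odeint(deriv, y0, t, args=(0.0,))

def incidence(sol):
    s, i, tr = sol.T
    return beta * (i + (1 - eff) * tr) / sol.sum(axis=1) * s

drop = 1 - incidence(with_art)[-1] / incidence(no_art)[-1]
print(f"incidence reduction after 8 years of ART: {drop:.0%}")
```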
Conclusions
Mathematical models evaluating the impact of ART vary substantially in structure, complexity, and parameter choices, but all suggest that ART, at high levels of access and with high adherence, has the potential to substantially reduce new HIV infections. There was broad agreement regarding the short-term epidemiologic impact of ambitious treatment scale-up, but more variation in longer term projections and in the efficiency with which treatment can reduce new infections. Differences between model predictions could not be explained by differences in model structure or parameterization that were hypothesized to affect intervention impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Following the first reported case of AIDS in 1981, the number of people infected with HIV, the virus that causes AIDS, increased rapidly. In recent years, the number of people becoming newly infected has declined slightly, but the virus continues to spread at unacceptably high levels. In 2010 alone, 2.7 million people became HIV-positive. HIV, which is usually transmitted through unprotected sex, destroys CD4 lymphocytes and other immune system cells, leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, half of HIV-infected people died within eleven years of infection. Then, in 1996, antiretroviral therapy (ART) became available, and, for people living in affluent countries, HIV/AIDS gradually became considered a chronic condition. But because ART was expensive, for people living in developing countries HIV/AIDS remained a fatal condition. Roll-out of ART in developing countries first started in the early 2000s. In 2006, the international community set a target of achieving universal ART coverage by 2010. Although this target has still not been reached, by the end of 2010, 6.6 million of the estimated 15 million people in need of ART in developing countries were receiving ART.
Why Was This Study Done?
Several studies suggest that ART, in addition to reducing illness and death among HIV-positive people, reduces HIV transmission. Consequently, there is interest in expanding the provision of ART as a strategy for reducing the spread of HIV (“HIV treatment as prevention”), particularly in sub-Saharan Africa, where one in 20 adults is HIV-positive. It is important to understand exactly how ART might contribute to averting HIV transmission. Several mathematical models that simulate HIV infection and disease progression have been developed to investigate the impact of expanding access to ART on the incidence of HIV (the number of new infections occurring in a population over a year). But, although all these models predict that increased ART coverage will have epidemiologic (population) benefits, they vary widely in their estimates of the magnitude of these benefits. In this study, the researchers systematically compare the predictions of 12 mathematical models of the HIV epidemic in South Africa, simulating the same ART intervention programs to determine the extent to which different models agree about the impact of expanded ART.
What Did the Researchers Do and Find?
The researchers invited groups who had previously developed mathematical models of the epidemiological impact of expanded access to ART in South Africa to participate in a systematic comparison exercise in which their models were used to simulate ART scale-up scenarios in which the CD4 count threshold for treatment eligibility, access to treatment, and retention on treatment were systematically varied. To exclude variation resulting from different model assumptions about the past and current ART program, it was assumed that ART is introduced into the population in the year 2012, with no treatment provision prior to this, and interventions were evaluated in comparison to an artificial counterfactual scenario in which no treatment is provided. A standard scenario based on the World Health Organization's recommended threshold for initiation of ART, although unrepresentative of current provision in South Africa, was used to compare the models. In this scenario, 80% of HIV-infected individuals received treatment, they started treatment on average a year after their CD4 count dropped below 350 cells per microliter of blood, and 85% remained on treatment after three years. The models predicted that, with a start point of 2012, the HIV incidence would be 35%–54% lower in 2020 and 32%–74% lower in 2050 compared to a counterfactual scenario where there was no ART. Estimates of the number of person-years of ART needed per infection averted (the efficiency with which ART reduced new infections) ranged from 6.3 to 18.7 and from 4.5 to 20.2 over the periods 2012–2020 and 2012–2050, respectively. Finally, estimates of the impact of ambitious interventions (for example, immediate treatment of all HIV-positive individuals) varied widely across the models.
What Do These Findings Mean?
Although the mathematical models used in this study had different characteristics, all 12 predict that ART, at high levels of access and adherence, has the potential to reduce new HIV infections. However, although the models broadly agree about the short-term epidemiologic impact of treatment scale-up, their longer-term projections (including whether ART alone can eliminate HIV infection) and their estimates of the efficiency with which ART can reduce new infections vary widely. Importantly, it is possible that all these predictions will be wrong—all the models may have excluded some aspect of HIV transmission that will be found in the future to be crucial. Finally, these findings do not aim to indicate which specific ART interventions should be used to reduce the incidence of HIV. Rather, by comparing the models that are being used to investigate the feasibility of “HIV treatment as prevention,” these findings should help modelers and policy-makers think critically about how the assumptions underlying these models affect the models' predictions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001245.
This study is part of the July 2012 PLoS Medicine Collection, Investigating the Impact of Treatment on New HIV Infections
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS and summaries of recent research findings on HIV care and treatment
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on HIV/AIDS treatment and care, on HIV treatment as prevention, and on HIV/AIDS in South Africa (in English and Spanish)
The World Health Organization provides information about universal access to AIDS treatment (in English, French, and Spanish); its 2010 ART guidelines can be downloaded
The HIV Modelling Consortium aims to improve scientific support for decision-making by coordinating mathematical modeling of the HIV epidemic
Patient stories about living with HIV/AIDS are available through Avert; the charity website Healthtalkonline also provides personal stories about living with HIV, including stories about taking anti-HIV drugs and the challenges of anti-HIV drugs
doi:10.1371/journal.pmed.1001245
PMCID: PMC3393664  PMID: 22802730
9.  Measuring Adult Mortality Using Sibling Survival: A New Analytical Method and New Results for 44 Countries, 1974–2006 
PLoS Medicine  2010;7(4):e1000260.
Julie Rajaratnam and colleagues describe a novel method, called the Corrected Sibling Survival method, to measure adult mortality in countries without good vital registration by use of histories taken from surviving siblings.
Background
For several decades, global public health efforts have focused on the development and application of disease control programs to improve child survival in developing populations. The need to reliably monitor the impact of such intervention programs in countries has led to significant advances in demographic methods and data sources, particularly with large-scale, cross-national survey programs such as the Demographic and Health Surveys (DHS). Although no comparable effort has been undertaken for adult mortality, the availability of large datasets with information on adult survival from censuses and household surveys offers an important opportunity to dramatically improve our knowledge about levels and trends in adult mortality in countries without good vital registration. To date, attempts to measure adult mortality from questions in censuses and surveys have generally led to implausibly low levels of adult mortality owing to biases inherent in survey data such as survival and recall bias. Recent methodological developments and the increasing availability of large surveys with information on sibling survival suggest that it may well be timely to reassess the pessimism that has prevailed around the use of sibling histories to measure adult mortality.
Methods and Findings
We present the Corrected Sibling Survival (CSS) method, which addresses both the survival and recall biases that have plagued the use of survey data to estimate adult mortality. Using logistic regression, our method directly estimates the probability of dying in a given country, by age, sex, and time period from sibling history data. The logistic regression framework borrows strength across surveys and time periods for the estimation of the age patterns of mortality, and facilitates the implementation of solutions for the underrepresentation of high-mortality families and recall bias. We apply the method to generate estimates of and trends in adult mortality, using the summary measure 45q15—the probability of a 15-y-old dying before his or her 60th birthday—for 44 countries with DHS sibling survival data. Our findings suggest that levels of adult mortality prevailing in many developing countries are substantially higher than previously suggested by other analyses of sibling history data. Generally, our estimates show the risk of adult death between ages 15 and 60 y to be about 20%–35% for females and 25%–45% for males in sub-Saharan African populations largely unaffected by HIV. In countries of Southern Africa, where the HIV epidemic has been most pronounced, as many as eight out of ten men alive at age 15 y will be dead by age 60, as will six out of ten women. Adult mortality levels in populations of Asia and Latin America are generally lower than in Africa, particularly for women. The exceptions are Haiti and Cambodia, where mortality risks are comparable to many countries in Africa. In all other countries with data, the probability of dying between ages 15 and 60 y was typically around 10% for women and 20% for men, not much higher than the levels prevailing in several more developed countries.
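The core estimation step can be sketched as a logistic regression on sibling person-period records, followed by chaining the fitted age-specific death probabilities into 45q15. The data below are simulated, the variable names are invented, and the sketch deliberately omits the CSS corrections for recall bias and for the underrepresentation of high-mortality sibships.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 50_000
# One record per sibling per 5-year age interval (ages 15-19 ... 55-59);
# died = 1 if the sibling died within that interval.
df = pd.DataFrame({
    "age_grp": rng.integers(0, 9, n),     # 0 -> 15-19, ..., 8 -> 55-59
    "male": rng.integers(0, 2, n),
    "period": rng.integers(0, 3, n),      # earlier to later calendar periods
})
logit_q = -4.5 + 0.25 * df.age_grp + 0.45 * df.male - 0.10 * df.period
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-logit_q))).astype(int)

fit = smf.logit("died ~ age_grp + male + period", data=df).fit(disp=0)

# 45q15 for men in the latest period: chain fitted 5q_x across the 9 intervals
grid = pd.DataFrame({"age_grp": np.arange(9), "male": 1, "period": 2})
q5 = fit.predict(grid)                    # fitted 5-year death probabilities
q45_15 = 1 - np.prod(1 - q5)              # probability of dying between 15 and 60
print(f"estimated 45q15 (men, latest period): {q45_15:.2f}")
```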
Conclusions
Our results represent an expansion of direct knowledge of levels and trends in adult mortality in the developing world. The CSS method provides grounds for renewed optimism in collecting sibling survival data. We suggest that all nationally representative survey programs with adequate sample size ought to implement this critical module for tracking adult mortality in order to more reliably understand the levels and patterns of adult mortality, and how they are changing.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Governments and international health agencies need accurate information on births and deaths in populations to help them plan health care policies and monitor the effectiveness of public-health programs designed, for example, to prevent premature deaths from preventable causes such as tobacco smoking. In developed countries, full information on births and deaths is recorded in “vital registration systems.” Unfortunately, very few developing countries have complete vital registration systems. In most African countries, for example, less than one-quarter of deaths are counted through vital registration systems. To fill this information gap, scientists have developed several methods to estimate mortality levels (the proportion of deaths in populations) and trends in mortality (how the proportion of deaths in populations changes over time) from data collected in household surveys and censuses. A household survey collects data about family members (for example, number, age, and sex) for a national sample of households randomly selected from a list of households collected in a census (a periodic count of a population).
Why Was This Study Done?
To date, global public-health efforts have concentrated on improving child survival. Consequently, methods for calculating child mortality levels and trends from surveys are well-developed and generally yield accurate estimates. By contrast, although attempts have been made to measure adult mortality using sibling survival histories (records of the sex, age if alive, or age at death, if dead, of all the children born to survey respondents' mothers that are collected in many household surveys), these attempts have often produced implausibly low estimates of adult mortality. These low estimates arise because people do not always recall deaths accurately when questioned (recall bias) and because families that have fallen apart, possibly because of family deaths, are underrepresented in household surveys (selection bias). In this study, the researchers develop a corrected sibling survival (CSS) method that addresses the problems of selection and recall bias and use their method to estimate mortality levels and trends in 44 developing countries between 1974 and 2006.
What Did the Researchers Do and Find?
The researchers used a statistical approach called logistic regression to develop the CSS method. They then used the method to estimate the probability of a 15-year-old dying before his or her 60th birthday from sibling survival data collected by the Demographic and Health Surveys program (DHS, a project started in 1984 to help developing countries collect data on population and health trends). Levels of adult mortality estimated in this way were considerably higher than those suggested by previous analyses of sibling history data. For example, the risk of adult death between the ages of 15 and 60 years was 20%–35% for women and 25%–45% for men living in sub-Saharan African countries largely unaffected by HIV and 60% for women and 80% for men living in countries in Southern Africa where the HIV epidemic is worst. Importantly, the researchers show that their mortality level estimates compare well to those obtained from vital registration data and other data sources where available. So, for example, in the Philippines, adult mortality levels estimated using the CSS method were similar to those obtained from vital registration data. Finally, the researchers used the CSS method to estimate mortality trends. These calculations reveal, for example, that there has been a 3–4-fold increase in adult mortality since the late 1980s in Zimbabwe, a country badly affected by the HIV epidemic.
What Do These Findings Mean?
These findings suggest that the CSS method, which applies a correction for both selection and recall bias, yields more accurate estimates of adult mortality in developing countries from sibling survival data than previous methods. Given their findings, the researchers suggest that sibling survival histories should be routinely collected in all future household survey programs and, if possible, these surveys should be expanded so that all respondents are asked about sibling histories—currently the DHS only collects sibling histories from women aged 15–49 years. Widespread collection of such data and their analysis using the CSS method, the researchers conclude, would help governments and international agencies track trends in adult mortality and progress toward major health and development targets.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000260.
This study and two related PLoS Medicine Research Articles by Rajaratnam et al. and by Murray et al. are further discussed in a PLoS Medicine Perspective by Mathers and Boerma
Information is available about the Demographic and Health Surveys
The Institute for Health Metrics and Evaluation makes available high-quality information on population health, its determinants, and the performance of health systems
Grand Challenges in Global Health provides information on research into better ways for developing countries to measure their health status
The World Health Organization Statistical Information System (WHOSIS) is an interactive database that brings together core health statistics for WHO member states, including information on vital registration of deaths; the WHO Health Metrics Network is a global collaboration focused on improving sources of vital statistics
doi:10.1371/journal.pmed.1000260
PMCID: PMC2854132  PMID: 20405004
10.  Estimating Incidence from Prevalence in Generalised HIV Epidemics: Methods and Validation 
PLoS Medicine  2008;5(4):e80.
Background
HIV surveillance of generalised epidemics in Africa primarily relies on prevalence at antenatal clinics, but estimates of incidence in the general population would be more useful. Repeated cross-sectional measures of HIV prevalence are now becoming available for general populations in many countries, and we aim to develop and validate methods that use these data to estimate HIV incidence.
Methods and Findings
Two methods were developed that decompose observed changes in prevalence between two serosurveys into the contributions of new infections and mortality. Method 1 uses cohort mortality rates, and method 2 uses information on survival after infection. The performance of these two methods was assessed using simulated data from a mathematical model and actual data from three community-based cohort studies in Africa. Comparison with simulated data indicated that these methods can accurately estimate incidence rates and changes in incidence in a variety of epidemic conditions. Method 1 is simple to implement but relies on locally appropriate mortality data, whilst method 2 can make use of the same survival distribution in a wide range of scenarios. The estimates from both methods are within the 95% confidence intervals of almost all actual measurements of HIV incidence in adults and young people, and the patterns of incidence over age are correctly captured.
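The decomposition at the heart of method 1 can be illustrated with a deliberately crude, closed-population calculation: surviving old infections are subtracted from the infected count at the second survey, and the remainder is attributed to new infections. This is only a sketch under assumed inputs (the survival proportions s_pos and s_neg are placeholders), not the authors' estimator, which treats ages, survey timing, and survival after infection far more carefully.

```python
def crude_incidence_from_prevalence(p1, p2, dt, s_pos, s_neg=1.0):
    """Crude closed-population decomposition of a prevalence change into
    new infections and mortality, loosely in the spirit of 'method 1'.

    p1, p2 : HIV prevalence at the two serosurveys
    dt     : years between surveys
    s_pos  : proportion of HIV-positive people surviving the interval
             (from cohort mortality data); s_neg likewise for negatives

    Per one person at baseline: p1 * s_pos positives survive without any
    new infection, so the infected at the second survey in excess of that
    must be newly infected. Exposure is approximated by the mean number
    of susceptibles over the interval (deaths among the newly infected
    within the interval are ignored in this simplification).
    """
    survivors = p1 * s_pos + (1 - p1) * s_neg      # population at t2, per baseline person
    new_infections = p2 * survivors - p1 * s_pos   # infected at t2 minus surviving old infections
    susceptible_person_years = 0.5 * ((1 - p1) + (1 - p2) * survivors) * dt
    return new_infections / susceptible_person_years

# Example: prevalence falls from 20% to 18% over 5 years while 25% of
# positives die; incidence is still positive because deaths removed cases.
print(crude_incidence_from_prevalence(p1=0.20, p2=0.18, dt=5, s_pos=0.75, s_neg=0.98))
```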
Conclusions
It is possible to estimate incidence from cross-sectional prevalence data with sufficient accuracy to monitor the HIV epidemic. Although these methods will theoretically work in any context, we have been able to test them only in southern and eastern Africa, where HIV epidemics are mature and generalised. The choice of method will depend on the local availability of HIV mortality data.
Timothy Hallett and colleagues develop and test two user-friendly methods to estimate HIV incidence based on changes in cross-sectional prevalence, using either mortality rates or survival after infection.
Editors' Summary
Background.
More than 25 million people have died from AIDS and about 33 million people are currently infected with human immunodeficiency virus (HIV, the virus that causes AIDS). Faced with this threat to human health, governments and international agencies are working together to halt the AIDS epidemic. An important part of this effort is HIV surveillance. The spread of HIV needs to be monitored to assess the impact of interventions (for example, the provision of antiretroviral drugs) and to plan for current and future health care needs. HIV surveillance in countries where the epidemic has spread beyond specific groups into the whole population (a generalized epidemic) has mainly relied on determining the prevalence of HIV infection (the fraction of the population that is infected) among women attending antenatal clinics. Recently, however, household health surveys (for example, the Demographic and Health Surveys) have begun to use blood testing for antibodies to the AIDS virus (serological testing) to provide more accurate estimates of HIV prevalence in the general adult population.
Why Was This Study Done?
Although prevalence estimates provide useful information about the HIV epidemic, another important indicator is incidence—the number of new infections occurring during a specific time period. Incidence measurements provide more information about temporal changes in the epidemic and transmission patterns and allow public-health experts to make better predictions of future health care needs. But, whereas prevalence can be measured with anonymized serological surveys, individuals would have to be identified and followed up in repeat serological surveys to provide a direct measurement of incidence. This is expensive and hard to achieve in many settings. In this study, therefore, the researchers develop and validate two mathematical methods to estimate HIV incidence in generalized HIV epidemics from prevalence data.
What Did the Researchers Do and Find?
Changes in the fraction of the population living with HIV (prevalence) can occur not only because of changes in the rate of new infections (incidence), but also because mortality rates are much higher for infected individuals than others. The researchers' methods disentangle the contributions to HIV prevalence (as measured in serological surveys) made by new infections from those due to deaths from AIDS and other causes. Their first method incorporates information on death rates collected in cohort studies of HIV infection (cohort studies investigate outcomes in groups of people); their second method uses information on survival after HIV infection, also collected in long-running cohort studies. The accuracy of both methods was assessed using computer-simulated data and actual data on HIV prevalence and incidence collected in three community-based cohort studies in Zimbabwe and Uganda (countries with generalized but declining HIV epidemics) and Tanzania (a country with a generalized, stable epidemic). Both methods provided accurate estimates of HIV incidence from the simulated data. Using the data collected in Africa, the mean difference between actual measurements of incidence and the estimate provided by method 1 was 19%; for method 2 it was 14%. In addition, the measured and estimated incidences were in good agreement for all age groups.
What Do These Findings Mean?
These findings suggest HIV incidence rates can be estimated from repeat surveys of prevalence with sufficient accuracy to monitor the HIV epidemic. The accuracy of the estimates across all age groups is particularly important because knowledge of the age-related risk pattern provides the information on transmission patterns that is needed to design effective intervention programs. Because these methods were tested using data only from southern and eastern Africa where the HIV epidemic is mature and generalized, they may not work as well in regions where the epidemic is restricted to subsets of the population. Other factors that might affect their accuracy include the amount of international migration and the uptake of antiretroviral therapies. Nevertheless, with the increased availability of serial measurements of serological prevalence, these new methods for estimating HIV incidence from HIV prevalence could prove extremely useful for monitoring the progress of national HIV epidemics and for guiding HIV control programs. The authors include spreadsheets that can be used to calculate incidence by either method from consecutive survey data.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050080.
The US National Institute of Allergy and Infectious Diseases provides information on HIV infection and AIDS
The US Centers for Disease Control and Prevention provides information on global HIV/AIDS topics (in English and Spanish)
The HIV InSite provides comprehensive and up-to-date information on all aspects of HIV/AIDS from the University of California San Francisco, including country reports on the AIDS epidemic in 195 countries, including Uganda, Zimbabwe, and Tanzania
Avert, an international AIDS charity, provides information on all aspects of HIV/AIDS, including fact sheets on understanding HIV and AIDS statistics, and on HIV and AIDS in Africa
The Demographic and Health Surveys program collects, analyzes, and disseminates information on health and population trends in countries around the world
doi:10.1371/journal.pmed.0050080
PMCID: PMC2288620  PMID: 18590346
11.  Spatial Scan Statistics for Models with Excess Zeros and Overdispersion 
Objective
To propose a more realistic model for disease cluster detection, through a modification of the spatial scan statistic to account simultaneously for inflated zeros and overdispersion.
Introduction
Spatial Scan Statistics [1] usually assume Poisson or Binomial distributed data, which is not adequate in many disease surveillance scenarios. For example, small areas distant from hospitals may exhibit a smaller number of cases than expected under those simple models. Also, underreporting may occur in underdeveloped regions, due to inefficient data collection or the difficulty of accessing remote sites. Those factors generate excess zero case counts or overdispersion, violating the assumptions of the statistical model and inflating the type I error (false alarms). Overdispersion occurs when the data variance is greater than that predicted by the model in use; because the Poisson model constrains the variance to equal the mean, an extra dispersion parameter must be included to accommodate it.
Methods
Tools like the Generalized Poisson (GP) and the Double Poisson [2] may be a better option for this kind of problem, modeling the mean and variance separately, with each easily adjusted by covariates. When excess zeros occur, the Zero Inflated Poisson (ZIP) model is used, although ZIP’s estimated parameters may be severely biased if nonzero counts are too dispersed compared to the Poisson distribution. In this case the zero-inflated versions of the Generalized Poisson (ZIGP), Double Poisson (ZIDP), and Negative Binomial (ZINB) could be good alternatives for jointly modeling excess zeros and overdispersion. On the one hand, Zero Inflated Poisson (ZIP) models have been proposed within the spatial scan statistic to deal with excess zeros [3]. On the other hand, another spatial scan statistic was based on a Poisson-Gamma mixture model for overdispersion [4]. In this work we present a model which includes inflated zeros and overdispersion simultaneously, based on the ZIDP model. Let the parameter p indicate the zero inflation. As the remaining parameters of the observed cases map and the parameter p are not independent, the likelihood maximization process is not straightforward; it becomes even more complicated when we include covariates in the analysis. To solve this problem we introduce a vector of latent variables in order to factorize the likelihood, making the maximization tractable via the E-M (Expectation-Maximization) algorithm. We derive the formulas to maximize the likelihood iteratively, and implement a computer program using the E-M algorithm to estimate the parameters under the null and alternative hypotheses. The p-value is obtained via the Fast Double Bootstrap Test [5].
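To make the latent-variable device concrete, the sketch below runs EM for the plain ZIP model, where both M-step updates are closed-form. It is a simplification of the ZIDP machinery described above (no covariates, no dispersion parameter), and the variable names are invented.

```python
import numpy as np

def zip_em(y, n_iter=200, tol=1e-8):
    """EM for a zero-inflated Poisson: with probability p an observation
    is a structural zero, otherwise it is Poisson(lam). A latent indicator
    z_i ('is this zero structural?') factorizes the likelihood, which is
    exactly the device described above; the full ZIDP model with
    covariates requires a more elaborate M-step."""
    y = np.asarray(y, dtype=float)
    p, lam = 0.5, max(y.mean(), 1e-6)          # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural.
        z = np.where(y == 0, p / (p + (1 - p) * np.exp(-lam)), 0.0)
        # M-step: closed-form updates given the responsibilities.
        p_new = z.mean()
        lam_new = ((1 - z) * y).sum() / (1 - z).sum()
        if abs(p_new - p) + abs(lam_new - lam) < tol:
            break
        p, lam = p_new, lam_new
    return p, lam

# Simulated check: 30% structural zeros on top of Poisson(3) counts.
rng = np.random.default_rng(0)
y = np.where(rng.random(5000) < 0.3, 0, rng.poisson(3.0, 5000))
print(zip_em(y))   # should recover roughly (0.3, 3.0)
```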
Results
Numerical simulations are conducted to assess the effectiveness of the method. We present results for Hanseniasis surveillance in the Brazilian Amazon in 2010 using this technique. We obtain the most likely spatial clusters for the Poisson, ZIP, Poisson-Gamma mixture and ZIDP models and compare the results.
Conclusions
The Zero Inflated Double Poisson Spatial Scan Statistic for disease cluster detection incorporates the flexibility of previous models, accounting for inflated zeros and overdispersion simultaneously.
The Hanseniasis case study map, with its excess of zero case counts in many municipalities of the Brazilian Amazon and the presence of overdispersion, was a good benchmark for testing the ZIDP model. The results are easier to interpret than those from either of the previous spatial scan statistic models (the Zero Inflated Poisson model and the Poisson-Gamma mixture model for overdispersion) taken separately. The E-M algorithm and the Fast Double Bootstrap test are computationally efficient for this type of problem.
PMCID: PMC3692937
Scan statistics; Zero inflated; Overdispersion; Expectation-Maximization algorithm
12.  The Effect of Automated Alerts on Provider Ordering Behavior in an Outpatient Setting 
PLoS Medicine  2005;2(9):e255.
Background
Computerized order entry systems have the potential to prevent medication errors and decrease adverse drug events with the use of clinical-decision support systems presenting alerts to providers. Despite the large volume of medications prescribed in the outpatient setting, few studies have assessed the impact of automated alerts on medication errors related to drug–laboratory interactions in an outpatient primary-care setting.
Methods and Findings
A primary-care clinic in an integrated safety net institution was the setting for the study. In collaboration with commercial information technology vendors, rules were developed to address a set of drug–laboratory interactions. All patients seen in the clinic during the study period were eligible for the intervention. As providers ordered medications on a computer, an alert was displayed if a relevant drug–laboratory interaction existed. Comparisons were made between baseline and postintervention time periods. Provider ordering behavior was monitored focusing on the number of medication orders not completed and the number of rule-associated laboratory test orders initiated after alert display. Adverse drug events were assessed by doing a random sample of chart reviews using the Naranjo scoring scale.
The rule was processed 16,291 times during the study period on all possible medication orders: 7,017 during the pre-intervention period and 9,274 during the postintervention period. During the postintervention period, an alert was displayed for 11.8% (1,093 out of 9,274) of the times the rule was processed, with 5.6% for only “missing laboratory values,” 6.0% for only “abnormal laboratory values,” and 0.2% for both types of alerts. Focusing on 18 high-volume and high-risk medications revealed a significant increase in the percentage of time the provider stopped the ordering process and did not complete the medication order when an alert for an abnormal rule-associated laboratory result was displayed (5.6% vs. 10.9%, p = 0.03, Generalized Estimating Equations test). The provider also increased ordering of the rule-associated laboratory test when an alert was displayed (39% at baseline vs. 51% during the postintervention period, p < 0.001). There was a non-statistically-significant trend toward fewer “definite” or “probable” adverse drug events as defined by Naranjo scoring (10.3% at baseline vs. 4.3% during the postintervention period, p = 0.23).
Conclusion
Providers will adhere to alerts and will use this information to improve patient care. Specifically, in response to drug–laboratory interaction alerts, providers will significantly increase the ordering of appropriate laboratory tests. There may be a concomitant change in adverse drug events that would require a larger study to confirm. Implementation of rules technology to prevent medication errors could be an effective tool for reducing medication errors in an outpatient setting.
A computerized order entry system that alerted providers to potential problems was shown to be able to influence prescribing practice
doi:10.1371/journal.pmed.0020255
PMCID: PMC1198038  PMID: 16128621
13.  Self-Management Support Interventions for Persons With Chronic Disease 
Background
Self-management support interventions such as the Stanford Chronic Disease Self-Management Program (CDSMP) are becoming more widespread in an attempt to help individuals better self-manage chronic disease.
Objective
To systematically assess the clinical effectiveness of self-management support interventions for persons with chronic diseases.
Data Sources
A literature search was performed on January 15, 2012, using OVID MEDLINE, OVID MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Wiley Cochrane Library, and the Centre for Reviews and Dissemination database for studies published between January 1, 2000, and January 15, 2012. A January 1, 2000, start date was used because the concept of non-disease-specific/general chronic disease self-management was first published only in 1999. Reference lists were examined for any additional relevant studies not identified through the search.
Review Methods
Randomized controlled trials (RCTs) comparing self-management support interventions for general chronic disease against usual care were included for analysis. Results of RCTs were pooled using a random-effects model with standardized mean difference as the summary statistic.
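For readers unfamiliar with the pooling step, the following is a minimal DerSimonian-Laird random-effects implementation with standardized mean differences as the summary statistic. It is one standard way to carry out the analysis described; the review's own software and settings may differ, and the input values below are invented.

```python
import numpy as np

def dersimonian_laird(smd, se):
    """Pool standardized mean differences under a random-effects model
    using the DerSimonian-Laird estimate of between-study variance tau^2."""
    smd, v = np.asarray(smd), np.asarray(se) ** 2
    w = 1.0 / v                                    # fixed-effect weights
    fixed = (w * smd).sum() / w.sum()
    q = (w * (smd - fixed) ** 2).sum()             # Cochran's Q
    df = len(smd) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = (w_re * smd).sum() / w_re.sum()
    se_pooled = w_re.sum() ** -0.5
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), tau2

# Hypothetical SMDs and standard errors from a handful of trials.
print(dersimonian_laird([-0.30, -0.15, -0.25, -0.05], [0.10, 0.08, 0.12, 0.09]))
```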
Results
Ten primary RCTs met the inclusion criteria (n = 6,074). Nine of these evaluated the Stanford CDSMP across various populations; results, therefore, focus on the CDSMP.
Health status outcomes: There was a small, statistically significant improvement in favour of CDSMP across most health status measures, including pain, disability, fatigue, depression, health distress, and self-rated health (GRADE quality low). There was no significant difference between modalities for dyspnea (GRADE quality very low). There was significant improvement in health-related quality of life according to the EuroQol 5-D in favour of CDSMP, but inconsistent findings across other quality-of-life measures.
Healthy behaviour outcomes: There was a small, statistically significant improvement in favour of CDSMP across all healthy behaviours, including aerobic exercise, cognitive symptom management, and communication with health care professionals (GRADE quality low).
Self-efficacy: There was a small, statistically significant improvement in self-efficacy in favour of CDSMP (GRADE quality low).
Health care utilization outcomes: There were no statistically significant differences between modalities with respect to visits with general practitioners, visits to the emergency department, days in hospital, or hospitalizations (GRADE quality very low).
All results were measured over the short term (median 6 months of follow-up).
Limitations
Trials generally did not appropriately report data according to intention-to-treat principles. Results therefore reflect “available case analyses,” including only those participants whose outcome status was recorded. For this reason, there is high uncertainty around point estimates.
Conclusions
The Stanford CDSMP led to statistically significant, albeit clinically minimal, short-term improvements across a number of health status measures (including some measures of health-related quality of life), healthy behaviours, and self-efficacy compared to usual care. However, there was no evidence to suggest that the CDSMP improved health care utilization. More research is needed to explore longer-term outcomes and the impact of self-management on clinical outcomes, and to better identify responders and non-responders.
Plain Language Summary
Self-management support interventions are becoming more common as a structured way of helping patients learn to better manage their chronic disease. To assess the effects of these support interventions, we looked at the results of 10 studies involving a total of 6,074 people with various chronic diseases, such as arthritis and chronic pain, chronic respiratory diseases, depression, diabetes, heart disease, and stroke. Most trials focused on a program called the Stanford Chronic Disease Self-Management Program (CDSMP). When compared to usual care, the CDSMP led to modest, short-term improvements in pain, disability, fatigue, depression, health distress, self-rated health, and health-related quality of life, but it is not possible to say whether these changes were clinically important. The CDSMP also increased how often people undertook aerobic exercise, how often they practiced stress/pain reduction techniques, and how often they communicated with their health care practitioners. The CDSMP did not reduce the number of primary care doctor visits, emergency department visits, the number of days in hospital, or the number of times people were hospitalized. In general, there was high uncertainty around the quality of the evidence, and more research is needed to better understand the effect of self-management support on long-term outcomes and on important clinical outcomes, as well as to better identify who could benefit most from self-management support interventions like the CDSMP.
PMCID: PMC3814807  PMID: 24194800
14.  Estimating coverage of a women’s group intervention among a population of pregnant women in rural Bangladesh 
Background
Reducing maternal and child mortality requires focused attention on better access, utilisation and coverage of good quality health services and interventions aimed at improving maternal and newborn health among target populations, in particular, pregnant women. Intervention coverage in resource- and data-poor settings is rarely documented. This paper describes four different methods, and their underlying assumptions, to estimate coverage of a community mobilisation women’s group intervention for maternal and newborn health among a population of pregnant women in rural Bangladesh.
Methods
Primary and secondary data sources were used to estimate the intervention’s coverage among pregnant women. Four methods were used: (1) direct measurement of a proxy indicator using intervention survey data; (2) direct measurement among intervention participants and modelled extrapolation based on routine longitudinal surveillance of births; (3) direct measurement among participants and modelled extrapolation based on cross-sectional measurements and national data; and (4) direct measurement among participants and modelled extrapolation based on published national data.
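As a concrete illustration of the extrapolation logic behind methods 2-4, the sketch below builds a denominator of expected pregnancies from published national data and divides the participant count by it. Every number, and the pregnancy-loss adjustment factor, is an assumption for illustration rather than a figure from the paper.

```python
def coverage_from_national_data(participants, population, crude_birth_rate,
                                pregnancy_adjustment=1.1):
    """Method-4-style coverage estimate: pregnant women's group members
    divided by pregnancies expected from national data. Expected
    pregnancies are approximated as births implied by the crude birth
    rate, inflated by a factor for pregnancy losses; the 1.1 adjustment
    here is an assumption, not a figure from the paper."""
    expected_births = population * crude_birth_rate / 1000.0
    expected_pregnancies = expected_births * pregnancy_adjustment
    return participants / expected_pregnancies

# Hypothetical catchment: 150,000 people, CBR 25/1,000, 1,300 pregnant members.
print(f"coverage ~ {coverage_from_national_data(1300, 150_000, 25):.0%}")
```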
Results
The estimated women’s group intervention’s coverage among pregnant women ranged from 30% to 34%, depending on method used. Differences likely reflect differing assumptions and methodological biases of the various methods.
Conclusion
In the absence of complete and timely population data, the choice of coverage estimation method must be based on the strengths and limitations of the available methods, the capacity and resources for measurement, and the needs of the ultimate end users. Each of the methods presented and discussed here is likely to provide a useful understanding of intervention coverage at a single point in time, and Methods 1 and 2 may also provide more reliable estimates of coverage trends.
Footnotes
1. Unpublished data from three focus group discussions with women’s group members and facilitators participating in the Women’s Groups intervention.
doi:10.1186/1471-2393-12-60
PMCID: PMC3407726  PMID: 22747973
15.  Projections of Global Mortality and Burden of Disease from 2002 to 2030 
PLoS Medicine  2006;3(11):e442.
Background
Global and regional projections of mortality and burden of disease by cause for the years 2000, 2010, and 2030 were published by Murray and Lopez in 1996 as part of the Global Burden of Disease project. These projections, which are based on 1990 data, continue to be widely quoted, although they are substantially outdated; in particular, they markedly underestimated the spread of HIV/AIDS. To address the widespread demand for information on likely future trends in global health, and thereby to support international health policy and priority setting, we have prepared new projections of mortality and burden of disease to 2030 starting from World Health Organization estimates of mortality and burden of disease for 2002. This paper describes the methods, assumptions, input data, and results.
Methods and Findings
Relatively simple models were used to project future health trends under three scenarios—baseline, optimistic, and pessimistic—based largely on projections of economic and social development, and using the historically observed relationships of these with cause-specific mortality rates. Data inputs have been updated to take account of the greater availability of death registration data and the latest available projections for HIV/AIDS, income, human capital, tobacco smoking, body mass index, and other inputs. In all three scenarios there is a dramatic shift in the distribution of deaths from younger to older ages and from communicable, maternal, perinatal, and nutritional causes to noncommunicable disease causes. The risk of death for children younger than 5 y is projected to fall by nearly 50% in the baseline scenario between 2002 and 2030. The proportion of deaths due to noncommunicable disease is projected to rise from 59% in 2002 to 69% in 2030. Global HIV/AIDS deaths are projected to rise from 2.8 million in 2002 to 6.5 million in 2030 under the baseline scenario, which assumes coverage with antiretroviral drugs reaches 80% by 2012. Under the optimistic scenario, which also assumes increased prevention activity, HIV/AIDS deaths are projected to drop to 3.7 million in 2030. Total tobacco-attributable deaths are projected to rise from 5.4 million in 2005 to 6.4 million in 2015 and 8.3 million in 2030 under our baseline scenario. Tobacco is projected to kill 50% more people in 2015 than HIV/AIDS, and to be responsible for 10% of all deaths globally. The three leading causes of burden of disease in 2030 are projected to include HIV/AIDS, unipolar depressive disorders, and ischaemic heart disease in the baseline and pessimistic scenarios. Road traffic accidents are the fourth leading cause in the baseline scenario, and the third leading cause ahead of ischaemic heart disease in the optimistic scenario. Under the baseline scenario, HIV/AIDS becomes the leading cause of burden of disease in middle- and low-income countries by 2015.
Conclusions
These projections represent a set of three visions of the future for population health, based on certain explicit assumptions. Despite the wide uncertainty ranges around future projections, they enable us to appreciate better the implications for health and health policy of currently observed trends, and the likely impact of fairly certain future trends, such as the ageing of the population, the continued spread of HIV/AIDS in many regions, and the continuation of the epidemiological transition in developing countries. The results depend strongly on the assumption that future mortality trends in poor countries will have a relationship to economic and social development similar to those that have occurred in the higher-income countries.
The presented projections suggest a dramatic shift in the distribution of deaths from younger to older ages and from communicable, maternal, perinatal, and nutritional causes to non-communicable disease causes. HIV/AIDS and tobacco remain major killers and possible targets for intervention.
Editors' Summary
Background.
For most of human history, little has been known about the main causes of illness in different countries and which diseases kill most people. But public-health officials need to know whether heart disease kills more people than cancer in their country, for example, or whether diabetes causes more disability than mental illness so that they can use their resources wisely. They also have to have some idea about how patterns of illness (morbidity) and death (mortality) are likely to change so that they can plan for the future. In the early 1990s, the World Bank sponsored the 1990 Global Burden of Disease study carried out by researchers at Harvard University and the World Health Organization (WHO). This study provided the first comprehensive, global estimates of death and illness by age, sex, and region. It also provided projections of the global burden of disease and mortality up to 2020 using models that assumed that health trends are related to a set of independent variables. These variables were income per person (as people become richer, they live longer), average number of years of education (as this “human capital” increases, so does life expectancy), time (to allow for improved knowledge about various diseases), and tobacco use (a major global cause of illness and death).
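The projection machinery can be caricatured in a few lines: regress log death rates on the drivers listed above, then roll the fitted relationship forward under scenario assumptions. The coefficients and growth rates below are invented placeholders, not the GBD regression estimates.

```python
import numpy as np

def project_mortality(base_rate, years_ahead, income_growth, edu_growth,
                      beta_income=-0.3, beta_edu=-0.2, beta_time=-0.01):
    """Illustration of the projection logic: cause-specific death rates are
    modelled on the log scale as a function of income per head, human
    capital, and a time trend, then driven forward by scenario assumptions.
    All coefficients here are made-up placeholders."""
    log_change = (beta_income * np.log1p(income_growth) * years_ahead
                  + beta_edu * edu_growth * years_ahead
                  + beta_time * years_ahead)
    return base_rate * np.exp(log_change)

# Baseline vs optimistic scenario for a hypothetical death rate of 8 per 1,000.
print(project_mortality(8.0, 28, income_growth=0.02, edu_growth=0.01))
print(project_mortality(8.0, 28, income_growth=0.04, edu_growth=0.02))
```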
Why Was This Study Done?
These health projections have been widely used by WHO and governments to help them plan their health policies. However, because they are based on the 1990 estimates of the global burden of disease, the projections now need updating, particularly since they underestimate the spread of HIV/AIDS and the associated increase in death from tuberculosis. In this study, the researchers used similar methods to those used in the 1990 Global Burden of Disease study to prepare new projections of mortality and burden of disease up to 2030 starting from the 2002 WHO global estimates of mortality and burden of disease.
What Did the Researchers Do and Find?
As before, the researchers used projections of socio-economic development to model future patterns of mortality and illness for a baseline scenario, a pessimistic scenario that assumed a slower rate of socio-economic development, and an optimistic scenario that assumed a faster rate of growth. Their analysis predicts that between 2002 and 2030 for all three scenarios life expectancy will increase around the world, fewer children younger than 5 years will die, and the proportion of people dying from non-communicable diseases such as heart disease and cancer will increase. Although deaths from infectious diseases will decrease overall, HIV/AIDS deaths will continue to increase; the exact magnitude of the increase will depend on how many people have access to antiretroviral drugs and the efficacy of prevention programs. But, even given the rise in HIV/AIDS deaths, the new projections predict that more people will die of tobacco-related disease than of HIV/AIDS in 2015. The researchers also predict that by 2030, the three leading causes of illness will be HIV/AIDS, depression, and ischaemic heart disease (problems caused by a poor blood supply to the heart) in the baseline and pessimistic scenarios; in the optimistic scenario, road-traffic accidents will replace heart disease as the third leading cause (there will be more traffic accidents with faster economic growth).
What Do These Findings Mean?
The models used by the researchers provide a wealth of information about possible patterns of global death and illness between 2002 and 2030, but because they include many assumptions, like all models, they can provide only indications of future trends, not absolute figures. For example, based on global mortality data from 2002, the researchers estimate that global deaths in 2030 will be 64.9 million under the optimistic scenario. However, the actual figure may be quite a bit bigger or smaller because accurate baseline counts of deaths were not available for every country in the world. Another limitation of the study is that the models used assume that future increases in prosperity in developing countries will affect their population's health in the same way as similar increases affected health in the past in countries with death registration data (these are mostly developed countries). However, even given these and other limitations, the projections reported in this study provide useful insights into the future health of the world. These can now be used by public-health officials to plan future policy and to monitor the effect of new public-health initiatives on the global burden of disease and death.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030442.
The World Health Organization provides information on the Global Burden of Disease Project and links to other related resources
The Harvard School of Public Health Burden of Disease Unit offers information on the 1990 Global Burden of Disease study and its projections
doi:10.1371/journal.pmed.0030442
PMCID: PMC1664601  PMID: 17132052
16.  Psychosocial Interventions for Perinatal Common Mental Disorders Delivered by Providers Who Are Not Mental Health Specialists in Low- and Middle-Income Countries: A Systematic Review and Meta-Analysis 
PLoS Medicine  2013;10(10):e1001541.
In a systematic review and meta-analysis, Kelly Clarke and colleagues examine the effect of psychosocial interventions delivered by non–mental health specialists for perinatal common mental disorders in low- and middle-income countries.
Please see later in the article for the Editors' Summary
Background
Perinatal common mental disorders (PCMDs) are a major cause of disability among women. Psychosocial interventions are one approach to reduce the burden of PCMDs. Working with care providers who are not mental health specialists, in the community or in antenatal health care facilities, can expand access to these interventions in low-resource settings. We assessed effects of such interventions compared to usual perinatal care, as well as effects of interventions based on intervention type, delivery method, and timing.
Methods and Findings
We conducted a systematic review, meta-analysis, and meta-regression. We searched databases including Embase and the Global Health Library (up to 7 July 2013) for randomized and non-randomized trials of psychosocial interventions delivered by non-specialist mental health care providers in community settings and antenatal health care facilities in low- and middle-income countries. We pooled outcomes from ten trials for 18,738 participants. Interventions led to an overall reduction in PCMDs compared to usual care when using continuous data for PCMD symptomatology (effect size [ES] −0.34; 95% CI −0.53, −0.16) and binary categorizations for presence or absence of PCMDs (odds ratio 0.59; 95% CI 0.26, 0.92). We found a significantly larger ES for psychological interventions (three studies; ES −0.46; 95% CI −0.58, −0.33) than for health promotion interventions (seven studies; ES −0.15; 95% CI −0.27, −0.02). Both individual (five studies; ES −0.18; 95% CI −0.34, −0.01) and group (three studies; ES −0.48; 95% CI −0.85, −0.11) interventions were effective compared to usual care, though delivery method was not associated with ES (meta-regression β coefficient −0.11; 95% CI −0.36, 0.14). Combined group and individual interventions (based on two studies) had no benefit compared to usual care, nor did interventions restricted to pregnancy (three studies). Intervention timing was not associated with ES (β 0.16; 95% CI −0.16, 0.49). The small number of trials and heterogeneity of interventions limit our findings.
Conclusions
Psychosocial interventions delivered by non-specialists are beneficial for PCMDs, especially psychological interventions. Research is needed on interventions in low-income countries, treatment versus preventive approaches, and cost-effectiveness.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Perinatal common mental health disorders are among the most common health problems in pregnancy and the postpartum period. In low- and middle-income countries, about 16% of women during pregnancy and about 20% of women in the postpartum period will suffer from a perinatal common mental health disorder. These disorders, including depression and anxiety, are a major cause of disability in women and have been linked to young children under their care being underweight and stunted.
Why Was This Study Done?
While research shows that both pharmacological (e.g., antidepressants or anti-anxiety medications) and non-pharmacological (e.g., psychotherapy, education, or health promotion) interventions are effective for preventing and treating perinatal common mental disorders, most of this research took place in high-income countries. These findings may not be applicable in low-resource settings, where there is limited access to mental health care providers such as psychiatrists and psychologists, and to medications. Thus, non-pharmacological interventions delivered by providers who are not mental health specialists may be important as ways to treat perinatal common mental health disorders in these types of settings. In this study the researchers systematically reviewed research estimating the effectiveness of non-pharmacological interventions for perinatal common mental disorders that were delivered by providers who were not mental health specialists (including health workers, lay persons, and doctors or midwives) in low- and middle-income countries. The researchers also used meta-analysis and meta-regression—statistical methods that are used to combine the results from multiple studies—to estimate the relative effects of these interventions on mental health symptoms.
What Did the Researchers Do and Find?
The researchers searched multiple databases using key search terms to identify randomized and non-randomized clinical trials. Using specific criteria, the researchers retrieved and assessed 37 full papers, of which 11 met the criteria for their systematic review. Seven of these studies were from upper middle-income countries (China, South Africa, Colombia, Mexico, Argentina, Cuba, and Brazil), and four trials were from the lower middle-income countries of Pakistan and India, but there were no trials from low-income countries. The researchers assessed the quality of the selected studies, and one study was excluded from meta-analysis because of poor quality.
Combining results from the ten remaining studies, the researchers found that compared to usual perinatal care (which in most cases included no mental health care), interventions delivered by providers who were not mental health specialists were associated with an overall reduction in mental health symptoms and in the likelihood of being diagnosed with a mental health disorder. The researchers then performed additional analyses to assess relative effects by intervention type, timing, and delivery mode. They observed that both psychological interventions, such as psychotherapy and cognitive behavioral therapy, and health promotion interventions that were less focused on mental health led to significant improvement in mental health symptoms, but psychological interventions were associated with greater effects than health promotion interventions. Interventions delivered both during pregnancy and postnatally were associated with significant benefits when compared to usual care; however, when interventions were delivered during pregnancy only, the benefits were not significantly greater than usual care. When investigating mode of delivery, the researchers observed that both group and individual interventions were associated with improvements in symptoms.
What Do These Findings Mean?
These findings indicate that non-pharmacological interventions delivered by providers who are not mental health specialists could be useful for reducing symptoms of perinatal mental health disorders in middle-income countries. However, these findings should be interpreted with caution given that they are based on a small number of studies with a large amount of variation in the study designs, settings, timing, personnel, duration, and whether the intervention was delivered to a group, individually, or both. Furthermore, when the researchers excluded studies of the lowest quality, the observed benefits of these interventions were smaller, indicating that this analysis may overestimate the true effect of interventions. Nevertheless, the findings do provide support for the use of non-pharmacological interventions, delivered by non-specialists, for perinatal mental health disorders. Further studies should be undertaken in low-income countries.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001541
The World Health Organization provides information about perinatal mental health disorders
The UK Royal College of Psychiatrists has information for professionals and patients about perinatal mental health disorders
doi:10.1371/journal.pmed.1001541
PMCID: PMC3812075  PMID: 24204215
17.  Statistical Analysis of Daily Smoking Status in Smoking Cessation Clinical Trials 
Addiction (Abingdon, England)  2011;106(11):2039-2046.
Aims
Smoking cessation trials generally record information on daily smoking behavior, but base analyses on measures of smoking status at the end of treatment (EOT). We present an alternative approach that analyzes the entire sequence of daily smoking status observations.
Methods
We analyzed daily abstinence data from a smoking cessation trial, using two longitudinal logistic regression methods: a mixed-effects (ME) model and a generalized estimating equations (GEE) model. We compared results to a standard analysis that takes as outcome abstinence status at EOT. We evaluated time-varying covariates (smoking history and time-varying drug effect) in the longitudinal analysis and compared the ME and GEE approaches.
Results
We observed some differences in the estimated treatment effect odds ratios across models, with narrower confidence intervals under the longitudinal models. GEE yields similar results to ME when only baseline factors appear in the model, but gives biased results when one includes time-varying covariates. The longitudinal models indicate that the quit probability declines and the drug effect varies over time. Both the previous day’s smoking status and recent smoking history predict quit probability, independently of the drug effect.
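A minimal version of the GEE analysis, on simulated daily abstinence data, might look like the following (statsmodels formula API; all data-generating parameters are invented). Per the results above, the previous day's status is a time-varying covariate, so in practice a mixed-effects model would be the safer choice for this specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulate daily abstinence records: 100 subjects x 28 days, with a drug
# effect and dependence on the previous day's status (parameters invented).
rng = np.random.default_rng(1)
rows = []
for subj in range(100):
    drug = subj % 2
    u = rng.normal(0, 0.8)                     # subject-level heterogeneity
    prev_abst = 0
    for day in range(1, 29):
        logit = -1.0 + 0.6 * drug + 1.5 * prev_abst - 0.02 * day + u
        abst = int(rng.random() < 1 / (1 + np.exp(-logit)))
        rows.append((subj, day, drug, prev_abst, abst))
        prev_abst = abst
df = pd.DataFrame(rows, columns=["subject", "day", "drug", "prev_abst", "abstinent"])

# GEE with an exchangeable working correlation. Note the caveat above:
# with time-varying covariates such as prev_abst, GEE estimates can be
# biased, and a mixed-effects model is preferable.
gee = smf.gee("abstinent ~ drug + day + prev_abst", groups="subject",
              data=df, family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```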
Conclusion
When analyzing outcomes of smoking cessation intervention studies, longitudinal models that use multiple outcome data points, rather than just end-of-treatment status, can make more efficient use of the data and incorporate time-varying covariates. The generalized estimating equations approach should be avoided when using time-varying predictors.
doi:10.1111/j.1360-0443.2011.03519.x
PMCID: PMC3197211  PMID: 21631623
Generalized estimating equations; longitudinal analysis; mixed-effects model
18.  Promoting Regular Mammography Screening II. Results From a Randomized Controlled Trial in US Women Veterans 
Background
Few health promotion trials have evaluated strategies to increase regular mammography screening. We conducted a randomized controlled trial of two theory-based interventions in a population-based, nationally representative sample of women veterans.
Methods
Study candidates 52 years and older were randomly sampled from the National Registry of Women Veterans and randomly assigned to three groups. Groups 1 and 2 received interventions that varied in the extent of personalization (tailored and targeted vs targeted-only, respectively); group 3 was a survey-only control group. Postintervention follow-up surveys were mailed to all women after 1 and 2 years. Outcome measures were self-reported mammography coverage (completion of one postintervention mammogram) and compliance (completion of two postintervention mammograms). In decreasingly conservative analyses (intention-to-treat [ITT], modified intention-to-treat [MITT], and per-protocol [PP]), we examined crude coverage and compliance estimates and adjusted for covariates and variable follow-up time across study groups using Cox proportional hazards regression. For the PP analyses, we also used logistic regression.
Results
None of the among-group differences in the crude incidence estimates for mammography coverage was statistically significant in ITT, MITT, or PP analyses. Crude estimates of compliance differed at statistically significant levels in the PP analyses and at levels approaching statistical significance in the ITT and MITT analyses. Absolute differences favoring the intervention over the control groups were 1%–3% for ITT analysis, 1%–5% for MITT analysis, and 2%–6% for the PP analysis. Results from Cox modeling showed no statistically significant effect of the interventions on coverage or compliance in the ITT, MITT, or PP analyses, although hazard rate ratios (HRRs) for coverage were consistently slightly higher in the intervention groups than the control group (range for HRRs = 1.05–1.09). A PP analysis using logistic regression produced odds ratios (ORs) that were consistently higher than the corresponding hazard rate ratios for both coverage and compliance (range for ORs = 1.15–1.29).
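A sketch of the Cox-based analysis on simulated data (using the lifelines package; all numbers invented, with hazard ratios seeded near the 1.05-1.09 range reported above) shows how variable follow-up time enters through the time axis rather than distorting a simple proportion, which is why the Cox estimates here were more conservative than the logistic odds ratios.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical trial: 0 = survey-only control, 1 = targeted, 2 = tailored.
rng = np.random.default_rng(2)
n = 300
group = rng.integers(0, 3, n)
hr = np.where(group == 0, 1.0, np.where(group == 1, 1.05, 1.09))
t = rng.exponential(2.0 / hr)                 # invented times to first mammogram
obs = np.minimum(t, 2.0)                      # administrative censoring at 2 years
df = pd.DataFrame({
    "followup_years": obs,
    "mammogram": (t <= 2.0).astype(int),      # event indicator (coverage)
    "targeted_only": (group == 1).astype(int),
    "tailored": (group == 2).astype(int),
})

# Hazard rate ratios for the two interventions vs control; variable
# follow-up is handled by the model's time axis and censoring.
cph = CoxPHFitter().fit(df, duration_col="followup_years", event_col="mammogram")
print(cph.summary[["coef", "exp(coef)"]])
```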
Conclusions
In none of our primary analyses did the tailored and targeted intervention result in higher mammography rates than the targeted-only intervention, and there was limited support for either intervention being more effective than the baseline survey alone. We found that adjustment for variable follow-up time produced more conservative (less favorable) intervention effect estimates.
doi:10.1093/jnci/djn026
PMCID: PMC2830858  PMID: 18314474
19.  Measurement Error of Dietary Self-Report in Intervention Trials 
American Journal of Epidemiology  2010;172(7):819-827.
Dietary intervention trials aim to change dietary patterns of individuals. Participating in such trials could impact dietary self-report in divergent ways: Dietary counseling and training on portion-size estimation could improve self-report accuracy; participant burden could increase systematic error. Such intervention-associated biases could complicate interpretation of trial results. The authors investigated intervention-associated biases in reported total carotenoid intake using data on 3,088 breast cancer survivors recruited between 1995 and 2000 and followed through 2006 in the Women's Healthy Eating and Living Study, a randomized intervention trial. Longitudinal data from 2 self-report methods (24-hour recalls and food frequency questionnaires) and a plasma carotenoid biomarker were collected. A flexible measurement error model was postulated. Parameters were estimated in a Bayesian framework by using Markov chain Monte Carlo methods. Results indicated that the validity (i.e., correlation with “true” intake) of both self-report methods was significantly higher during follow-up for intervention versus nonintervention participants (4-year validity estimates: intervention = 0.57 for food frequency questionnaires and 0.58 for 24-hour recalls; nonintervention = 0.42 for food frequency questionnaires and 0.48 for 24-hour recalls). However, within- and between-instrument error correlations during follow-up were higher among intervention participants, indicating an increase in systematic error. Diet interventions can impact measurement errors of dietary self-report. Appropriate statistical methods should be applied to examine intervention-associated biases when interpreting results of diet trials.
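The two quantities tracked in this analysis, validity and correlated error, can be demonstrated with a toy simulation: give both instruments a shared person-specific bias, and the between-instrument error correlation rises while validity falls. All variance parameters below are invented, and this is pure illustration rather than the paper's Bayesian measurement error model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3000
true = rng.normal(0, 1, n)                     # "true" log carotenoid intake
shared_bias = rng.normal(0, 0.6, n)            # person-specific systematic error
ffq    = true + shared_bias + rng.normal(0, 0.8, n)   # food frequency questionnaire
recall = true + shared_bias + rng.normal(0, 0.7, n)   # 24-hour recall

validity_ffq = np.corrcoef(true, ffq)[0, 1]    # correlation with true intake
err_corr = np.corrcoef(ffq - true, recall - true)[0, 1]  # shared systematic error
print(f"validity (FFQ): {validity_ffq:.2f}, "
      f"between-instrument error corr: {err_corr:.2f}")
```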
doi:10.1093/aje/kwq216
PMCID: PMC3025654  PMID: 20720101
bias (epidemiology); diet; intervention studies; Markov chain Monte Carlo; measurement error; nutrition assessment; reproducibility of results; validity
20.  Repeated Measures Semiparametric Regression Using Targeted Maximum Likelihood Methodology with Application to Transcription Factor Activity Discovery 
In longitudinal and repeated measures data analysis, often the goal is to determine the effect of a treatment or exposure on a particular outcome (e.g., disease progression). We consider a semiparametric repeated measures regression model, where the parametric component models the effect of the variable of interest and any modification by other covariates. The expectation of this parametric component over the other covariates is a measure of variable importance. Here, we present a targeted maximum likelihood estimator of the finite dimensional regression parameter, which is easily estimated using standard software for generalized estimating equations.
The targeted maximum likelihood method provides doubly robust and locally efficient estimates of the variable importance parameters and inference based on the influence curve. We demonstrate these properties through simulation under correct and incorrect model specification, and apply our method in practice to estimate the activity of transcription factors (TFs) over the cell cycle in yeast. We specifically target the importance of SWI4, SWI6, MBP1, MCM1, ACE2, FKH2, NDD1, and SWI5.
The semiparametric model allows us to determine the importance of a TF at specific time points by specifying time indicators as potential effect modifiers of the TF. Our results are promising, showing significant importance trends during the expected time periods. This methodology can also be used as a variable importance analysis tool to assess the effect of a large number of variables such as gene expressions or single nucleotide polymorphisms.
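The targeting step is easiest to see in the simpler point-treatment setting. The sketch below implements one linear fluctuation for the ATE-style parameter psi = E[E[Y|A=1,W] - E[Y|A=0,W]]; it is not the authors' repeated-measures estimator, and the parametric learners are deliberately simple stand-ins.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def tmle_ate(W, A, Y):
    """One TMLE targeting step for psi = E[E[Y|A=1,W] - E[Y|A=0,W]].
    The fluctuation regresses residuals on the 'clever covariate' with
    the initial fit as offset, which confers double robustness."""
    # Initial outcome regression Q0(A, W) and propensity score g(W).
    Q = LinearRegression().fit(np.column_stack([A, W]), Y)
    g = LogisticRegression().fit(W, A).predict_proba(W)[:, 1]
    g = np.clip(g, 0.01, 0.99)
    Q1 = Q.predict(np.column_stack([np.ones_like(A), W]))
    Q0 = Q.predict(np.column_stack([np.zeros_like(A), W]))
    QA = np.where(A == 1, Q1, Q0)
    # Clever covariate and a single linear fluctuation (epsilon).
    H = A / g - (1 - A) / (1 - g)
    eps = np.sum(H * (Y - QA)) / np.sum(H ** 2)
    Q1_star = Q1 + eps / g
    Q0_star = Q0 - eps / (1 - g)
    return np.mean(Q1_star - Q0_star)

# Simulated check with a true effect of 2.0.
rng = np.random.default_rng(4)
W = rng.normal(size=(2000, 2))
A = rng.binomial(1, 1 / (1 + np.exp(-W[:, 0])))
Y = 2.0 * A + W[:, 0] + 0.5 * W[:, 1] + rng.normal(size=2000)
print(tmle_ate(W, A, Y))   # should be near 2.0
```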
doi:10.2202/1544-6115.1553
PMCID: PMC3122882  PMID: 21291412
targeted maximum likelihood; semiparametric; repeated measures; longitudinal; transcription factors
21.  Area Disease Estimation Based on Sentinel Hospital Records 
PLoS ONE  2011;6(8):e23428.
Background
Population health attributes (such as disease incidence and prevalence) are often estimated using sentinel hospital records, which are subject to multiple sources of uncertainty. When applied to these health attributes, commonly used estimation techniques that do not correct for such biases can lead to false conclusions and ineffective disease intervention and control. Although some estimators can account for measurement error (in the form of white noise, usually after de-trending), most mainstream health statistics techniques cannot generate unbiased and minimum error variance estimates when the available data are biased.
Methods and Findings
A new technique, called the Biased Sample Hospital-based Area Disease Estimation (B-SHADE), is introduced that generates space-time population disease estimates using biased hospital records. The effectiveness of the technique is empirically evaluated using hospital records of disease incidence (for hand-foot-mouth disease and fever syndrome cases) in Shanghai (China) during a two-year period. The B-SHADE technique uses a weighted summation of sentinel hospital records to derive unbiased and minimum error variance estimates of area incidence. The calculation of these weights is the outcome of a process that combines the available space-time information with a rigorous assessment of both the horizontal relationships between hospital records and the vertical links between each hospital's records and the overall disease situation in the region. In this way, the representativeness of the sentinel hospital records was improved, the possible biases of these records were corrected, and the generated area incidence estimates were best linear unbiased estimates (BLUE). Using the same hospital records, the performance of the B-SHADE technique was compared against two mainstream estimators.
Conclusions
The B-SHADE technique involves a hospital network-based model that blends the optimal estimation features of the Block Kriging method and the sample bias correction efficiency of the ratio estimator method. In this way, B-SHADE can overcome the limitations of both methods: Block Kriging's inadequacy concerning the correction of sample bias and spatial clustering; and the ratio estimator's limitation as regards error minimization. The generality of the B-SHADE technique is further demonstrated by the fact that it reduces to Block Kriging in the case of unbiased samples; to the ratio estimator if there is no correlation between hospitals; and to a simple statistic if the hospital records are neither biased nor space-time correlated. In addition to the theoretical advantages of the B-SHADE technique over the two other methods above, two real-world case studies (hand-foot-mouth disease and fever syndrome cases) demonstrated its empirical superiority, as well.
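The kriging-style core of such weighting schemes is a linear system with an unbiasedness constraint, solved via a Lagrange multiplier. The sketch below solves only that generic (ordinary-kriging-form) system; B-SHADE's actual system augments it with bias-correction terms for non-representative hospitals, and the covariance values used here are invented.

```python
import numpy as np

def blue_weights(C, c):
    """Solve for weights w minimizing estimation variance subject to
    sum(w) = 1 -- the generic linear system behind kriging-type
    estimators, of which B-SHADE is an extension.

    C : (n, n) covariance matrix among the n sentinel hospital records
    c : (n,)   covariance of each hospital record with the area total
    """
    n = len(c)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0          # unbiasedness constraint column
    A[n, :n] = 1.0          # sum(w) = 1 row
    b = np.append(c, 1.0)
    sol = np.linalg.solve(A, b)
    return sol[:n]           # last entry is the Lagrange multiplier

# Three hypothetical hospitals: the first two track each other and the
# area total more strongly than the third.
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
c = np.array([0.8, 0.8, 0.4])
w = blue_weights(C, c)
print(w, w.sum())            # weights sum to 1
# The area estimate is then the weighted sum of the hospitals' records.
```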
doi:10.1371/journal.pone.0023428
PMCID: PMC3160318  PMID: 21886791
22.  The Effects of Mandatory Prescribing of Thiazides for Newly Treated, Uncomplicated Hypertension: Interrupted Time-Series Analysis 
PLoS Medicine  2007;4(7):e232.
Background
The purpose of our study was to evaluate the effects of a new reimbursement rule for antihypertensive medication that made thiazides mandatory first-line drugs for newly treated, uncomplicated hypertension. The objective of the new regulation was to reduce drug expenditures.
Methods and Findings
We conducted an interrupted time-series analysis of prescribing data before and after the new reimbursement rule for antihypertensive medication was put into effect. All patients started on antihypertensive medication in 61 general practices in Norway were included in the analysis. The new rule was put forward by the Ministry of Health and was approved by parliament. Adherence to the rule was monitored only minimally, and there were no penalties for non-adherence. Our primary outcome was the proportion of thiazide prescriptions among all prescriptions made for persons started on antihypertensive medication. Secondary outcomes included the proportion of patients who, within 4 mo, reached recommended blood-pressure goals and the proportion of patients who, within 4 mo, were not started on a second antihypertensive drug. We also compared drug costs before and after the intervention. During the baseline period, 10% of patients started on antihypertensive medication were given a thiazide prescription. This proportion rose steadily during the transition period, after which it remained stable at 25%. For the other outcomes, no statistically significant differences were demonstrated. Achievement of treatment goals was slightly higher after the new rule was introduced (58.4% versus 56.6% before), and the prescribing of a second drug was slightly lower (21.8% versus 24.0% before). Drug costs were reduced by an estimated 4.8 million Norwegian kroner (€0.58 million, US$0.72 million) in the first year, equivalent to 1.06 Norwegian kroner per inhabitant (€0.13, US$0.16).
Conclusions
Prescribing of thiazides in Norway for uncomplicated hypertension more than doubled after a reimbursement rule requiring the use of thiazides as the first-choice therapy was put into effect. However, the resulting savings on drug expenditures were modest. There were no significant changes in the achievement of treatment goals or in the prescribing of a second antihypertensive drug.
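For readers unfamiliar with the design, the sketch below fits a standard segmented-regression interrupted time-series model to simulated monthly proportions. The breakpoints mirror the study's periods, but the data and the exact model form are illustrative, not the authors' specification.

```python
# Segmented regression for an interrupted time series: baseline level,
# pre-intervention trend, level change at the intervention, trend change.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = np.arange(24).astype(float)        # 0-10 pre, 11-13 transition,
post = (months >= 14).astype(float)         # 14-23 post-intervention
time_since = np.where(post == 1, months - 14, 0.0)

# Simulated outcome: ~10% thiazide share pre, ~25% post (plus noise).
y = 0.10 + 0.15 * post + rng.normal(0, 0.01, months.size)

X = sm.add_constant(np.column_stack([months, post, time_since]))
fit = sm.OLS(y, X).fit()
# params: [baseline level, pre trend, level change, trend change]
print(np.round(fit.params, 3))
```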
Atle Fretheim and colleagues found that the prescribing of thiazides in Norway for uncomplicated hypertension more than doubled after a rule requiring their use as first-choice therapy was put into effect.
Editors' Summary
Background.
High blood pressure (hypertension) is a common medical condition, especially among elderly people. It has no obvious symptoms but can lead to heart attacks, heart failure, strokes, or kidney failure. It is diagnosed by measuring blood pressure—the force that blood moving around the body exerts on the inside of arteries (large blood vessels). Many factors affect blood pressure (which depends on the amount of blood being pumped round the body and on the size and condition of the arteries), but overweight people and individuals who eat fatty or salty food are at high risk of developing hypertension. Mild hypertension can often be corrected by making lifestyle changes, but many patients also take one or more antihypertensive agents. These include thiazide diuretics and several types of non-thiazide drugs, many of which reduce heart rate or contractility and/or dilate blood vessels.
Why Was This Study Done?
Antihypertensive agents are a major part of national drug expenditure in developed countries, where as many as one person in ten is treated for hypertension. The different classes of drugs are all effective, but their cost varies widely. Thiazides, for example, are a tenth of the price of many non-thiazide drugs. In Norway, the low use of thiazides recently led the government to impose a new reimbursement rule aimed at reducing public expenditure on antihypertensive drugs. Since March 2004, family doctors have been reimbursed for drug costs only if they prescribe thiazides as first-line therapy for uncomplicated hypertension, unless there are medical reasons for selecting other drugs. Adherence to the rule has not been monitored, and there is no penalty for non-adherence, so has this intervention changed prescribing practices? To find out, the researchers in this study analyzed Norwegian prescribing data before and after the new rule came into effect.
What Did the Researchers Do and Find?
The researchers analyzed the monthly antihypertensive drug–prescribing records of 61 practices around Oslo, Norway, between January 2003 and November 2003 (pre-intervention period), between December 2003 and February 2004 (transition period), and between March 2004 and January 2005 (post-intervention period). This type of study is called an “interrupted time series”. During the pre-intervention period, one in ten patients starting antihypertensive medication was prescribed a thiazide drug. This proportion gradually increased during the transition period before stabilizing at one in four patients throughout the post-intervention period. A slightly higher proportion of patients reached their recommended blood-pressure goal after the rule was introduced than before, and a slightly lower proportion needed to switch to a second drug class, but both these small differences may have been due to chance. Finally, the researchers estimated that the observed change in prescribing practices reduced drug costs per Norwegian by US$0.16 (€0.13) in the first year.
What Do These Findings Mean?
Past attempts to change antihypertensive-prescribing practices by trying to influence family doctors (for example, through education) have largely failed. By contrast, these findings suggest that imposing a change on them (in this case, by introducing a new reimbursement rule) can be effective, at least over the short term and in the practices included in the study, even when compliance with the change is not monitored and noncompliance is not penalized. However, despite a large shift towards prescribing thiazides, three-quarters of patients were still prescribed non-thiazide drugs (possibly because of doubts about the efficacy of thiazides as first-line drugs), which emphasizes how hard it is to change doctors' prescribing habits. Further studies are needed to investigate whether the approach examined in this study can effectively contain the costs of antihypertensive drugs (and of drugs used for other common medical conditions) in the long term and in other settings. Also, because the estimated reduction in drug costs produced by the intervention was relatively modest (although likely to increase over time as more patients start on thiazides), other ways to change prescribing practices and produce savings in national drug expenditures should be investigated.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040232.
MedlinePlus encyclopedia page on hypertension (in English and Spanish)
UK National Institute for Health and Clinical Excellence information on hypertension for patients, carers, and professionals
American Heart Association information for patients on high blood pressure
An open-access research article describing the potential savings of using thiazides as the first-choice antihypertensive drug
A previous study in Norway, published in PLoS Medicine, examined what happened when doctors were actively encouraged to make more use of thiazides. There was also an economic evaluation of what this achieved
doi:10.1371/journal.pmed.0040232
PMCID: PMC1904466  PMID: 17622192
23.  Semiparametric estimation of covariance matrices for longitudinal data 
Estimation of the covariance structure of longitudinal data poses significant challenges because the data are usually collected at irregular time points. A viable semiparametric model for covariance matrices was proposed in Fan, Huang and Li (2007) that allows one to estimate the variance function nonparametrically and the correlation function parametrically, by aggregating information from irregular and sparse data points within each subject. However, the asymptotic properties of their quasi-maximum likelihood estimator (QMLE) of the parameters in the covariance model are largely unknown. In the current work, we address this problem in the context of more general models for the conditional mean function, including parametric, nonparametric, and semiparametric specifications. We also consider the possibility of a rough mean regression function and introduce a difference-based method to reduce bias in the context of varying-coefficient partially linear mean regression models. This provides a more robust estimator of the covariance function under a wider range of situations. Under some technical conditions, consistency and asymptotic normality are obtained for the QMLE of the parameters in the correlation function. Simulation studies and a real data example illustrate the proposed approach.
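The sketch below illustrates the quasi-likelihood idea for a parametric correlation function with irregular observation times, assuming a simple correlation model corr(s, t) = theta^|s - t| with unit variance and simulated data. It is illustrative only, not the estimator or model class of Fan, Huang and Li (2007).

```python
# Quasi-likelihood estimation of a correlation parameter from sparse,
# irregularly timed residuals, one small covariance matrix per subject.
import numpy as np

rng = np.random.default_rng(2)

def simulate_subject(theta=0.6):
    times = np.sort(rng.uniform(0, 5, rng.integers(3, 7)))
    D = np.abs(times[:, None] - times[None, :])
    cov = theta ** D                        # unit variance for simplicity
    resid = rng.multivariate_normal(np.zeros(times.size), cov)
    return times, resid

subjects = [simulate_subject() for _ in range(100)]

def neg_quasi_loglik(theta):
    total = 0.0
    for times, r in subjects:
        D = np.abs(times[:, None] - times[None, :])
        cov = theta ** D
        _, logdet = np.linalg.slogdet(cov)
        total += 0.5 * (logdet + r @ np.linalg.solve(cov, r))
    return total

grid = np.linspace(0.05, 0.95, 19)
theta_hat = grid[np.argmin([neg_quasi_loglik(t) for t in grid])]
print("theta_hat =", round(float(theta_hat), 2))   # should be near 0.6
```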
doi:10.1198/016214508000000742
PMCID: PMC2631936  PMID: 19180247
Correlation structure; difference-based estimation; quasi-maximum likelihood; varying-coefficient partially linear model
24.  MLEP: an R package for exploring the maximum likelihood estimates of penetrance parameters 
BMC Research Notes  2012;5:465.
Background
Linkage analysis is a useful tool for detecting genetic variants that regulate a trait of interest, especially genes associated with a given disease. Although penetrance parameters play an important role in determining gene location, they are typically assigned arbitrary values according to the researcher's intuition, or estimated by the maximum likelihood principle. Several methods exist for evaluating the maximum likelihood estimates of penetrance, but not all of these are supported by software packages, and some are biased by marker genotype information even when disease development is due solely to the genotype at a single locus.
Findings
Programs for exploring the maximum likelihood estimates of penetrance parameters were developed in the R statistical programming language, supplemented by external C functions. The software returns a vector of polynomial coefficients in the penetrance parameters, representing the likelihood of the pedigree data. From this likelihood polynomial, the likelihood value and its gradient can be computed precisely. To reduce the effect of the supplied dataset on the likelihood function, feasible parameter constraints can be imposed on the maximum likelihood estimates, enabling flexible exploration of the penetrance estimates. An auxiliary program generates a perspective plot that allows visual validation of the model's convergence. The functions are collectively available as the MLEP R package.
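As a toy analogue of the idea (not the MLEP package itself), the sketch below evaluates a sibship likelihood that is a polynomial (here linear) in the penetrance parameters and maximizes it under a feasibility constraint of the kind mentioned above. The pedigree, phenotypes, and constraint are invented for illustration.

```python
# Toy pedigree likelihood: children of two Aa x Aa carrier parents, with
# penetrances f_AA <= f_Aa <= f_aa. With phenotypes alone, the likelihood
# depends on f only through the marginal P(affected), which illustrates
# why marker data and parameter constraints matter for identifiability.
import itertools
import numpy as np

geno_prob = {"AA": 0.25, "Aa": 0.5, "aa": 0.25}   # Mendelian child genotypes
affected = [True, True, False]                    # invented sibship phenotypes

def loglik(f):
    p_aff = sum(geno_prob[g] * f[g] for g in geno_prob)   # linear in f
    return sum(np.log(p_aff if a else 1 - p_aff) for a in affected)

grid = np.linspace(0.01, 0.99, 25)
best, best_f = -np.inf, None
for fAA, fAa, faa in itertools.product(grid, repeat=3):
    if not (fAA <= fAa <= faa):                   # feasibility constraint
        continue
    ll = loglik({"AA": fAA, "Aa": fAa, "aa": faa})
    if ll > best:
        best, best_f = ll, (fAA, fAa, faa)
print("one MLE solution:", tuple(round(x, 2) for x in best_f))
```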
Conclusions
Linkage analysis using penetrance parameters estimated by the MLEP package enables feasible localization of a disease locus. This is shown through a simulation study and by demonstrating how the package is used to explore maximum likelihood estimates. Although the input dataset tends to bias the likelihood estimates, the method yields accurate results that are superior to analyses using intuitive penetrance values for diseases with low allele frequencies. MLEP is part of the Comprehensive R Archive Network and is freely available at http://cran.r-project.org/web/packages/MLEP/index.html.
doi:10.1186/1756-0500-5-465
PMCID: PMC3537736  PMID: 22929166
Penetrance; Maximum likelihood estimate; Linkage analysis; Polynomial evaluation
25.  Parameter Estimation and Model Selection in Computational Biology 
PLoS Computational Biology  2010;6(3):e1000696.
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time-course measurements of observables, which are used to assign parameter values that minimize some measure of the error between the measurements and the corresponding model predictions. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and are taken at a limited number of time points. In this work we present a new approach to the problem of parameter estimation for biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess if it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternative models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
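A hedged sketch of the state-augmentation idea behind Kalman-filter parameter estimation follows, using a toy exponential-decay model with an unknown rate constant. The model, noise levels, and tuning are invented; this is not the authors' filter variant or their identifiability test.

```python
# Extended Kalman filter with state augmentation: jointly track the state
# x and the unknown decay rate k in dx/dt = -k*x from noisy measurements.
import numpy as np

rng = np.random.default_rng(3)
dt, k_true, steps = 0.05, 0.8, 200

x_true = np.exp(-k_true * dt * np.arange(steps))
y = x_true + rng.normal(0, 0.02, steps)     # noisy observations of x

z = np.array([1.0, 0.3])                    # augmented state [x, k]; bad k guess
P = np.diag([0.1, 1.0])                     # state covariance
Q = np.diag([1e-6, 1e-6])                   # small process noise keeps filter alive
R = 0.02 ** 2
H = np.array([[1.0, 0.0]])                  # we observe x only

for t in range(1, steps):
    # Predict: Euler step x <- x - k*x*dt, k constant; F is the Jacobian.
    F = np.array([[1.0 - z[1] * dt, -z[0] * dt],
                  [0.0, 1.0]])
    z = np.array([z[0] - z[1] * z[0] * dt, z[1]])
    P = F @ P @ F.T + Q
    # Update with measurement y[t].
    S = float(H @ P @ H.T) + R
    K = (P @ H.T) / S                       # Kalman gain, shape (2, 1)
    z = z + K[:, 0] * (y[t] - z[0])
    P = (np.eye(2) - K @ H) @ P

print("estimated k:", round(float(z[1]), 3), "(true k:", k_true, ")")
```

With these settings the estimated rate typically converges near the true value; in practice, as the abstract describes, such a first guess would then be checked for identifiability and refined by optimization.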
Author Summary
Parameter estimation is a key issue in systems biology, as it represents the crucial step toward obtaining predictions from computational models of biological systems. This issue is usually addressed by “fitting” the model simulations to the observed experimental data. Such an approach does not take measurement noise fully into account. We introduce a new method built on the combination of Kalman filtering, statistical tests, and optimization techniques. The filter is well known in control and estimation theory and has found application in a wide range of fields, such as inertial guidance systems, weather forecasting, and economics. We show how the statistics of the measurement noise can be optimally exploited and directly incorporated into the design of the estimation algorithm in order to achieve more accurate results and to validate or invalidate the computed estimates. We also show that a significant advantage of our estimator is that it offers a powerful tool for model selection, allowing rejection or acceptance of competing models based on the available noisy measurements. These results are of immediate practical application in computational biology, and while we demonstrate their use for two specific examples, they can in fact be used to study a wide class of biological systems.
doi:10.1371/journal.pcbi.1000696
PMCID: PMC2832681  PMID: 20221262
