1.  Biomarker Profiling by Nuclear Magnetic Resonance Spectroscopy for the Prediction of All-Cause Mortality: An Observational Study of 17,345 Persons 
PLoS Medicine  2014;11(2):e1001606.
In this study, Würtz and colleagues conducted high-throughput profiling of blood specimens in two large population-based cohorts in order to identify biomarkers for all-cause mortality and enhance risk prediction. The authors found that biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. However, further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers to guide screening and prevention.
Please see later in the article for the Editors' Summary
Background
Early identification of ambulatory persons at high short-term risk of death could benefit targeted prevention. To identify biomarkers for all-cause mortality and enhance risk prediction, we conducted high-throughput profiling of blood specimens in two large population-based cohorts.
Methods and Findings
106 candidate biomarkers were quantified by nuclear magnetic resonance spectroscopy of non-fasting plasma samples from a random subset of the Estonian Biobank (n = 9,842; age range 18–103 y; 508 deaths during a median of 5.4 y of follow-up). Biomarkers for all-cause mortality were examined using stepwise proportional hazards models. Significant biomarkers were validated and incremental predictive utility assessed in a population-based cohort from Finland (n = 7,503; 176 deaths during 5 y of follow-up). Four circulating biomarkers predicted the risk of all-cause mortality among participants from the Estonian Biobank after adjusting for conventional risk factors: alpha-1-acid glycoprotein (hazard ratio [HR] 1.67 per 1–standard deviation increment, 95% CI 1.53–1.82, p = 5×10−31), albumin (HR 0.70, 95% CI 0.65–0.76, p = 2×10−18), very-low-density lipoprotein particle size (HR 0.69, 95% CI 0.62–0.77, p = 3×10−12), and citrate (HR 1.33, 95% CI 1.21–1.45, p = 5×10−10). All four biomarkers were predictive of cardiovascular mortality, as well as death from cancer and other nonvascular diseases. One in five participants in the Estonian Biobank cohort with a biomarker summary score within the highest percentile died during the first year of follow-up, indicating prominent systemic reflections of frailty. The biomarker associations all replicated in the Finnish validation cohort. Including the four biomarkers in a risk prediction score improved risk assessment for 5-y mortality (increase in C-statistics 0.031, p = 0.01; continuous reclassification improvement 26.3%, p = 0.001).
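The core of the analysis described above is a Cox proportional hazards model with biomarkers expressed per 1-standard-deviation increment and overall performance summarized by the C-statistic. The sketch below illustrates that workflow on synthetic data with the lifelines library; the column names, simulated values, and minimal covariate set are assumptions for illustration, not the study's code or data.
```python
# Illustrative sketch only: synthetic data stand in for NMR biomarker measurements;
# the real analysis adjusted for many more conventional risk factors.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(55, 15, n),
    "alpha1_acid_glycoprotein": rng.normal(0.8, 0.2, n),
    "albumin": rng.normal(45, 4, n),
    "vldl_particle_size": rng.normal(36, 2, n),
    "citrate": rng.normal(0.10, 0.03, n),
})
# Simulated follow-up time (years, censored at 5.4 y) and death indicator.
df["time"] = rng.exponential(10, n).clip(max=5.4)
df["death"] = (rng.random(n) < 0.03 + 0.02 * (df["alpha1_acid_glycoprotein"] > 1.0)).astype(int)

biomarkers = ["alpha1_acid_glycoprotein", "albumin", "vldl_particle_size", "citrate"]
# Standardize so each hazard ratio is per 1-SD increment, as in the abstract.
df[biomarkers] = (df[biomarkers] - df[biomarkers].mean()) / df[biomarkers].std()

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
print(cph.hazard_ratios_)      # HR per 1-SD increment for each covariate
print(cph.concordance_index_)  # C-statistic; the paper reports its increase when biomarkers are added
```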
Conclusions
Biomarker associations with cardiovascular, nonvascular, and cancer mortality suggest novel systemic connectivities across seemingly disparate morbidities. The biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. Further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers for guiding screening and prevention.
Editors' Summary
Background
A biomarker is a biological molecule found in blood, body fluids, or tissues that may signal an abnormal process, a condition, or a disease. The level of a particular biomarker may indicate a patient's risk of disease, or likely response to a treatment. For example, cholesterol levels are measured to assess the risk of heart disease. Most current biomarkers are used to test an individual's risk of developing a specific condition. There are none that accurately assess whether a person is at risk of ill health generally, or likely to die soon from a disease. Early and accurate identification of people who appear healthy but in fact have an underlying serious illness would provide valuable opportunities for preventative treatment.
While most tests measure the levels of a specific biomarker, there are some technologies that allow blood samples to be screened for a wide range of biomarkers. These include nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry. These tools have the potential to be used to screen the general population for a range of different biomarkers.
Why Was This Study Done?
Identifying new biomarkers that provide insight into the risk of death from all causes could be an important step in linking different diseases and assessing patient risk. In this study, the authors screened blood samples using NMR spectroscopy for biomarkers that accurately predict the risk of death, particularly in the general population rather than in people already known to be ill.
What Did the Researchers Do and Find?
The researchers studied two large groups of people, one in Estonia and one in Finland. Both countries have set up health registries that collect and store blood samples and health records over many years. The registries include large numbers of people who are representative of the wider population.
The researchers first tested blood samples from a representative subset of the Estonian group, testing 9,842 samples in total. They looked at 106 different biomarkers in each sample using NMR spectroscopy. They also looked at the health records of this group and found that 508 people died during the follow-up period after the blood sample was taken, the majority from heart disease, cancer, and other diseases. Using statistical analysis, they looked for any links between the levels of different biomarkers in the blood and people's short-term risk of dying. They found that the levels of four biomarkers—plasma albumin, alpha-1-acid glycoprotein, very-low-density lipoprotein (VLDL) particle size, and citrate—appeared to accurately predict short-term risk of death. They repeated this study with the Finnish group, this time with 7,503 individuals (176 of whom died during the five-year follow-up period after giving a blood sample) and found similar results.
The researchers carried out further statistical analyses to take into account other known factors that might have contributed to the risk of life-threatening illness. These included factors such as age, weight, tobacco and alcohol use, cholesterol levels, and pre-existing illness, such as diabetes and cancer. The association between the four biomarkers and short-term risk of death remained the same even when controlling for these other factors.
The analysis also showed that combining the test results for all four biomarkers, to produce a biomarker score, provided a more accurate measure of risk than any of the biomarkers individually. This biomarker score also proved to be the strongest predictor of short-term risk of dying in the Estonian group. Individuals with a biomarker score in the top 20% had a risk of dying within five years that was 19 times greater than that of individuals with a score in the bottom 20% (288 versus 15 deaths).
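As a rough illustration of how a combined biomarker summary score of the kind described above can be built and stratified, the sketch below weights the standardized biomarkers from the previous snippet by their log hazard ratios and tallies deaths across score quintiles. The weighting scheme is an assumption for illustration, not the authors' published scoring method.
```python
# Continues the previous sketch; assumes `df`, `cph`, and `biomarkers` already exist.
import pandas as pd

weights = cph.params_[biomarkers]            # log hazard ratios used as weights (assumed scheme)
df["score"] = df[biomarkers].mul(weights).sum(axis=1)
df["quintile"] = pd.qcut(df["score"], 5, labels=[1, 2, 3, 4, 5])

# Deaths by score quintile, mimicking the top-20% versus bottom-20% comparison.
print(df.groupby("quintile", observed=True)["death"].sum())
```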
What Do These Findings Mean?
This study suggests that there are four biomarkers in the blood—alpha-1-acid glycoprotein, albumin, VLDL particle size, and citrate—that can be measured by NMR spectroscopy to assess whether otherwise healthy people are at short-term risk of dying from heart disease, cancer, and other illnesses. However, further validation of these findings is still required, and additional studies should examine the biomarker specificity and associations in settings closer to clinical practice. The combined biomarker score appears to be a more accurate predictor of risk than tests for more commonly known risk factors. Identifying individuals who are at high risk using these biomarkers might help to target preventative medical treatments to those with the greatest need.
However, there are several limitations to this study. As an observational study, it provides evidence of only a correlation between a biomarker score and ill health. It does not identify any underlying causes. Other factors, not detectable by NMR spectroscopy, might be the true cause of serious health problems and would provide a more accurate assessment of risk. Nor does this study identify what kinds of treatment might prove successful in reducing the risks. Therefore, more research is needed to determine whether testing for these biomarkers would provide any clinical benefit.
There were also some technical limitations to the study. NMR spectroscopy does not detect as many biomarkers as mass spectrometry, which might therefore identify further biomarkers for a more accurate risk assessment. In addition, because both study groups were northern European, it is not yet known whether the results would be the same in other ethnic groups or populations with different lifestyles.
In spite of these limitations, the fact that the same four biomarkers are associated with a short-term risk of death from a variety of diseases does suggest that similar underlying mechanisms are taking place. This observation points to some potentially valuable areas of research to understand precisely what's contributing to the increased risk.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001606
The US National Institute of Environmental Health Sciences has information on biomarkers
The US Food and Drug Administration has a Biomarker Qualification Program to help researchers identify and evaluate new biomarkers
Further information on the Estonian Biobank is available
The Computational Medicine Research Team of the University of Oulu and the University of Bristol have a webpage that provides further information on high-throughput biomarker profiling by NMR spectroscopy
doi:10.1371/journal.pmed.1001606
PMCID: PMC3934819  PMID: 24586121
2.  Risk Models to Predict Chronic Kidney Disease and Its Progression: A Systematic Review 
PLoS Medicine  2012;9(11):e1001344.
A systematic review of risk prediction models conducted by Justin Echouffo-Tcheugui and Andre Kengne examines the evidence base for prediction of chronic kidney disease risk and its progression, and suitability of such models for clinical use.
Background
Chronic kidney disease (CKD) is common, and associated with increased risk of cardiovascular disease and end-stage renal disease, which are potentially preventable through early identification and treatment of individuals at risk. Although risk factors for occurrence and progression of CKD have been identified, their utility for CKD risk stratification through prediction models remains unclear. We critically assessed risk models to predict CKD and its progression, and evaluated their suitability for clinical use.
Methods and Findings
We systematically searched MEDLINE and Embase (1 January 1980 to 20 June 2012). Dual review was conducted to identify studies that reported on the development, validation, or impact assessment of a model constructed to predict the occurrence/presence of CKD or progression to advanced stages. Data were extracted on study characteristics, risk predictors, discrimination, calibration, and reclassification performance of models, as well as validation and impact analyses. We included 26 publications reporting on 30 CKD occurrence prediction risk scores and 17 CKD progression prediction risk scores. The vast majority of CKD risk models had acceptable-to-good discriminatory performance (area under the receiver operating characteristic curve>0.70) in the derivation sample. Calibration was less commonly assessed, but overall was found to be acceptable. Only eight CKD occurrence and five CKD progression risk models have been externally validated, displaying modest-to-acceptable discrimination. Whether novel biomarkers of CKD (circulatory or genetic) can improve prediction largely remains unclear, and impact studies of CKD prediction models have not yet been conducted. Limitations of risk models include the lack of ethnic diversity in derivation samples, and the scarcity of validation studies. The review is limited by the lack of an agreed-on system for rating prediction models, and the difficulty of assessing publication bias.
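Discrimination (area under the ROC curve) and calibration (agreement between predicted and observed risk) are the two performance measures the review relies on. The short sketch below shows how both are typically computed from a model's predicted probabilities; the simulated predictions and outcomes are placeholders, not data from any of the reviewed models.
```python
# Placeholder data: `y_pred` stands for model-predicted CKD risk, `y_true` for observed outcomes.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_pred = rng.uniform(0, 0.5, 5000)                 # predicted risk
y_true = (rng.random(5000) < y_pred).astype(int)   # simulated observed outcomes

print("AUC:", roc_auc_score(y_true, y_pred))       # >0.70 counted as acceptable-to-good

# Calibration: compare mean predicted risk with observed event rate by risk decile.
calib = pd.DataFrame({"pred": y_pred, "obs": y_true})
calib["decile"] = pd.qcut(calib["pred"], 10, labels=False)
print(calib.groupby("decile").agg(mean_pred=("pred", "mean"),
                                  obs_rate=("obs", "mean")))
```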
Conclusions
The development and clinical application of renal risk scores is in its infancy; however, the discriminatory performance of existing tools is acceptable. The effect of using these models in practice is still to be explored.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Chronic kidney disease (CKD)—the gradual loss of kidney function—is increasingly common worldwide. In the US, for example, about 26 million adults have CKD, and millions more are at risk of developing the condition. Throughout life, small structures called nephrons inside the kidneys filter waste products and excess water from the blood to make urine. If the nephrons stop working because of injury or disease, the rate of blood filtration decreases, and dangerous amounts of waste products such as creatinine build up in the blood. Symptoms of CKD, which rarely occur until the disease is very advanced, include tiredness, swollen feet and ankles, puffiness around the eyes, and frequent urination, especially at night. There is no cure for CKD, but progression of the disease can be slowed by controlling high blood pressure and diabetes, both of which cause CKD, and by adopting a healthy lifestyle. The same interventions also reduce the chances of CKD developing in the first place.
Why Was This Study Done?
CKD is associated with an increased risk of end-stage renal disease, which is treated with dialysis or by kidney transplantation (renal replacement therapies), and of cardiovascular disease. These life-threatening complications are potentially preventable through early identification and treatment of CKD, but most people present with advanced disease. Early identification would be particularly useful in developing countries, where renal replacement therapies are not readily available and resources for treating cardiovascular problems are limited. One way to identify people at risk of a disease is to use a “risk model.” Risk models are constructed by testing the ability of different combinations of risk factors that are associated with a specific disease to identify those individuals in a “derivation sample” who have the disease. The model is then validated on an independent group of people. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers critically assess the ability of existing CKD risk models to predict the occurrence of CKD and its progression, and evaluate their suitability for clinical use.
What Did the Researchers Do and Find?
The researchers identified 26 publications reporting on 30 risk models for CKD occurrence and 17 risk models for CKD progression that met their predefined criteria. The risk factors most commonly included in these models were age, sex, body mass index, diabetes status, systolic blood pressure, serum creatinine, protein in the urine, and serum albumin or total protein. Nearly all the models had acceptable-to-good discriminatory performance (a measure of how well a model separates people who have a disease from people who do not have the disease) in the derivation sample. Not all the models had been calibrated (assessed for whether the average predicted risk within a group matched the proportion that actually developed the disease), but in those that had been assessed calibration was good. Only eight CKD occurrence and five CKD progression risk models had been externally validated; discrimination in the validation samples was modest-to-acceptable. Finally, very few studies had assessed whether adding extra variables to CKD risk models (for example, genetic markers) improved prediction, and none had assessed the impact of adopting CKD risk models on the clinical care and outcomes of patients.
What Do These Findings Mean?
These findings suggest that the development and clinical application of CKD risk models is still in its infancy. Specifically, these findings indicate that the existing models need to be better calibrated and need to be externally validated in different populations (most of the models were tested only in predominantly white populations) before they are incorporated into guidelines. The impact of their use on clinical outcomes also needs to be assessed before their widespread use is recommended. Such research is worthwhile, however, because of the potential public health and clinical applications of well-designed risk models for CKD. Such models could be used to identify segments of the population that would benefit most from screening for CKD, for example. Moreover, risk communication to patients could motivate them to adopt a healthy lifestyle and to adhere to prescribed medications, and the use of models for predicting CKD progression could help clinicians tailor disease-modifying therapies to individual patient needs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001344.
This study is further discussed in a PLOS Medicine Perspective by Maarten Taal
The US National Kidney and Urologic Diseases Information Clearinghouse provides information about all aspects of kidney disease; the US National Kidney Disease Education Program provides resources to help improve the understanding, detection, and management of kidney disease (in English and Spanish)
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
The not-for-profit UK National Kidney Federation provides support and information for patients with kidney disease and for their carers, including a selection of patient experiences of kidney disease
World Kidney Day, a joint initiative between the International Society of Nephrology and the International Federation of Kidney Foundations, aims to raise awareness about kidneys and kidney disease
doi:10.1371/journal.pmed.1001344
PMCID: PMC3502517  PMID: 23185136
3.  A Risk Prediction Model for the Assessment and Triage of Women with Hypertensive Disorders of Pregnancy in Low-Resourced Settings: The miniPIERS (Pre-eclampsia Integrated Estimate of RiSk) Multi-country Prospective Cohort Study 
PLoS Medicine  2014;11(1):e1001589.
Beth Payne and colleagues use a risk prediction model, the Pre-eclampsia Integrated Estimate of RiSk (miniPIERS) to help inform the clinical assessment and triage of women with hypertensive disorders of pregnancy in low-resourced settings.
Please see later in the article for the Editors' Summary
Background
Pre-eclampsia/eclampsia are leading causes of maternal mortality and morbidity, particularly in low- and middle-income countries (LMICs). We developed the miniPIERS risk prediction model to provide a simple, evidence-based tool to identify pregnant women in LMICs at increased risk of death or major hypertensive-related complications.
Methods and Findings
From 1 July 2008 to 31 March 2012, in five LMICs, data were collected prospectively on 2,081 women with any hypertensive disorder of pregnancy admitted to a participating centre. Candidate predictors collected within 24 hours of admission were entered into a step-wise backward elimination logistic regression model to predict a composite adverse maternal outcome within 48 hours of admission. Model internal validation was accomplished by bootstrapping and external validation was completed using data from 1,300 women in the Pre-eclampsia Integrated Estimate of RiSk (fullPIERS) dataset. Predictive performance was assessed for calibration, discrimination, and stratification capacity. The final miniPIERS model included: parity (nulliparous versus multiparous); gestational age on admission; headache/visual disturbances; chest pain/dyspnoea; vaginal bleeding with abdominal pain; systolic blood pressure; and dipstick proteinuria. The miniPIERS model was well-calibrated and had an area under the receiver operating characteristic curve (AUC ROC) of 0.768 (95% CI 0.735–0.801) with an average optimism of 0.037. External validation AUC ROC was 0.713 (95% CI 0.658–0.768). A predicted probability ≥25% to define a positive test classified women with 85.5% accuracy. Limitations of this study include the composite outcome and the broad inclusion criteria of any hypertensive disorder of pregnancy. This broad approach was used to optimize model generalizability.
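The modelling steps named above, logistic regression on admission predictors, bootstrap estimation of optimism, and dichotomization at a predicted probability of 25%, are sketched below on simulated data. The predictor names and event-generating model are assumptions, and the stepwise backward elimination used by the authors is omitted for brevity; this is not the miniPIERS code.
```python
# Simulated illustration; predictor names and coefficients are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(2)
n = 2000
X = pd.DataFrame({
    "nulliparous": rng.integers(0, 2, n),
    "gestational_age": rng.normal(34, 4, n),
    "headache_visual": rng.integers(0, 2, n),
    "chest_pain_dyspnoea": rng.integers(0, 2, n),
    "systolic_bp": rng.normal(150, 20, n),
    "dipstick_proteinuria": rng.integers(0, 4, n),
})
logit = -8 + 0.03 * X["systolic_bp"] + 0.8 * X["headache_visual"]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).to_numpy().astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on resamples and compare the resample AUC with the
# AUC of the same refitted model evaluated on the original data.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X.iloc[idx], y[idx])
    optimism.append(roc_auc_score(y[idx], m.predict_proba(X.iloc[idx])[:, 1])
                    - roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("apparent AUC:", apparent_auc, "average optimism:", np.mean(optimism))

# Dichotomize at a predicted probability of 25% to define a positive test.
positive = model.predict_proba(X)[:, 1] >= 0.25
print("classification accuracy at the 25% cut-off:", accuracy_score(y, positive))
```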
Conclusions
The miniPIERS model shows reasonable ability to identify women at increased risk of adverse maternal outcomes associated with the hypertensive disorders of pregnancy. It could be used in LMICs to identify women who would benefit most from interventions such as magnesium sulphate, antihypertensives, or transportation to a higher level of care.
Editors' Summary
Background
Each year, ten million women develop pre-eclampsia or a related hypertensive (high blood pressure) disorder of pregnancy and 76,000 women die as a result. Globally, hypertensive disorders of pregnancy cause around 12% of maternal deaths—deaths of women during or shortly after pregnancy. The mildest of these disorders is gestational hypertension, high blood pressure that develops after 20 weeks of pregnancy. Gestational hypertension does not usually harm the mother or her unborn child and resolves after delivery but up to a quarter of women with this condition develop pre-eclampsia, a combination of hypertension and protein in the urine (proteinuria). Women with mild pre-eclampsia may not have any symptoms—the condition is detected during antenatal checks—but more severe pre-eclampsia can cause headaches, blurred vision, and other symptoms, and can lead to eclampsia (fits), multiple organ failure, and death of the mother and/or her baby. The only “cure” for pre-eclampsia is to deliver the baby as soon as possible but women are sometimes given antihypertensive drugs to lower their blood pressure or magnesium sulfate to prevent seizures.
Why Was This Study Done?
Women in low- and middle-income countries (LMICs) are more likely to develop complications of pre-eclampsia than women in high-income countries and most of the deaths associated with hypertensive disorders of pregnancy occur in LMICs. The high burden of illness and death in LMICs is thought to be primarily due to delays in triage (the identification of women who are or may become severely ill and who need specialist care) and delays in transporting these women to facilities where they can receive appropriate care. Because there is a shortage of health care workers who are adequately trained in the triage of suspected cases of hypertensive disorders of pregnancy in many LMICs, one way to improve the situation might be to design a simple tool to identify women at increased risk of complications or death from hypertensive disorders of pregnancy. Here, the researchers develop miniPIERS (Pre-eclampsia Integrated Estimate of RiSk), a clinical risk prediction model for adverse outcomes among women with hypertensive disorders of pregnancy suitable for use in community and primary health care facilities in LMICs.
What Did the Researchers Do and Find?
The researchers used data on candidate predictors of outcome that are easy to collect and/or measure in all health care settings and that are associated with pre-eclampsia from women admitted with any hypertensive disorder of pregnancy to participating centers in five LMICs to build a model to predict death or a serious complication such as organ damage within 48 hours of admission. The miniPIERS model included parity (whether the woman had been pregnant before), gestational age (length of pregnancy), headache/visual disturbances, chest pain/shortness of breath, vaginal bleeding with abdominal pain, systolic blood pressure, and proteinuria detected using a dipstick. The model was well-calibrated (the predicted risk of adverse outcomes agreed with the observed risk of adverse outcomes among the study participants), it had a good discriminatory ability (it could separate women who had an adverse outcome from those who did not), and it designated women as being at high risk (25% or greater probability of an adverse outcome) with an accuracy of 85.5%. Importantly, external validation using data collected in fullPIERS, a study that developed a more complex clinical prediction model based on data from women attending tertiary hospitals in high-income countries, confirmed the predictive performance of miniPIERS.
What Do These Findings Mean?
These findings indicate that the miniPIERS model performs reasonably well as a tool to identify women at increased risk of adverse maternal outcomes associated with hypertensive disorders of pregnancy. Because miniPIERS only includes simple-to-measure personal characteristics, symptoms, and signs, it could potentially be used in resource-constrained settings to identify the women who would benefit most from interventions such as transportation to a higher level of care. However, further external validation of miniPIERS is needed using data collected from women living in LMICs before the model can be used during routine antenatal care. Moreover, the value of miniPIERS needs to be confirmed in implementation projects that examine whether its potential translates into clinical improvements. For now, though, the model could provide the basis for an education program to increase the knowledge of women, families, and community health care workers in LMICs about the signs and symptoms of hypertensive disorders of pregnancy.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001589.
The World Health Organization provides guidelines for the management of hypertensive disorders of pregnancy in low-resourced settings
The Maternal and Child Health Integrated Program provides information on pre-eclampsia and eclampsia targeted to low-resourced settings along with a tool-kit for LMIC providers
The US National Heart, Lung, and Blood Institute provides information about high blood pressure in pregnancy and a guide to lowering blood pressure in pregnancy
The UK National Health Service Choices website provides information about pre-eclampsia
The US not-for profit organization Preeclampsia Foundation provides information about all aspects of pre-eclampsia; its website includes some personal stories
The UK charity Healthtalkonline also provides personal stories about hypertensive disorders of pregnancy
MedlinePlus provides links to further information about high blood pressure and pregnancy (in English and Spanish); the MedlinePlus Encyclopedia has a video about pre-eclampsia (also in English and Spanish)
More information about miniPIERS and about fullPIERS is available
doi:10.1371/journal.pmed.1001589
PMCID: PMC3897359  PMID: 24465185
4.  Reduced Glomerular Filtration Rate and Its Association with Clinical Outcome in Older Patients at Risk of Vascular Events: Secondary Analysis 
PLoS Medicine  2009;6(1):e1000016.
Background
Reduced glomerular filtration rate (GFR) is associated with increased cardiovascular risk in young and middle aged individuals. Associations with cardiovascular disease and mortality in older people are less clearly established. We aimed to determine the predictive value of the GFR for mortality and morbidity using data from the 5,804 participants randomized in the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER).
Methods and Findings
Glomerular filtration rate was estimated (eGFR) using the Modification of Diet in Renal Disease equation and was categorized in the ranges 20–40, 40–50, 50–60, and ≥ 60 ml/min/1.73 m2. Baseline risk factors were analysed by category of eGFR, with and without adjustment for other risk factors. The associations between baseline eGFR and morbidity and mortality outcomes, accrued after an average of 3.2 y, were investigated using Cox proportional hazard models adjusting for traditional risk factors. We tested for evidence of an interaction between the benefit of statin treatment and baseline eGFR status. Age, low-density lipoprotein (LDL) and high-density lipoprotein (HDL) cholesterol, C-reactive protein (CRP), body mass index, fasting glucose, female sex, and histories of hypertension and vascular disease were associated with eGFR (p = 0.001 or less) after adjustment for other risk factors. Low eGFR was independently associated with risk of all cause mortality, vascular mortality, and other noncancer mortality and with fatal and nonfatal coronary and heart failure events (hazard ratios adjusted for CRP and other risk factors (95% confidence intervals [CIs]) for eGFR < 40 ml/min/1.73 m2 relative to eGFR ≥ 60 ml/min/1.73 m2, respectively: 2.04 (1.48–2.80), 2.37 (1.53–3.67), 3.52 (1.78–6.96), 1.64 (1.18–2.27), and 3.31 (2.03–5.41)). There were no nominally statistically significant interactions (p < 0.05) between randomized treatment allocation and eGFR for clinical outcomes, with the exception of the outcome of coronary heart disease death or nonfatal myocardial infarction (p = 0.021), with the interaction suggesting increased benefit of statin treatment in subjects with impaired GFRs.
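For readers unfamiliar with the Modification of Diet in Renal Disease (MDRD) equation used above, the sketch below shows one commonly cited four-variable form together with the categorization applied in this analysis. Which coefficient set the study used (175 versus the older 186) is an assumption here, and serum creatinine is taken in mg/dL.
```python
# One commonly cited four-variable MDRD formula; the coefficient set is assumed.
def mdrd_egfr(creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool) -> float:
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr  # ml/min/1.73 m2

def egfr_category(egfr: float) -> str:
    """Categories used in the analysis; values below 20 are not distinguished here."""
    if egfr < 40:
        return "20-40"
    if egfr < 50:
        return "40-50"
    if egfr < 60:
        return "50-60"
    return ">=60"

print(egfr_category(mdrd_egfr(1.4, 75, female=True, black=False)))  # -> "20-40"
```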
Conclusions
We have established that, in an elderly population over the age of 70 y, impaired GFR is associated with female sex, with presence of vascular disease, and with levels of other risk factors that would be associated with increased risk of vascular disease. Further, impaired GFR is independently associated with significant levels of increased risk of all cause mortality and fatal vascular events and with composite fatal and nonfatal coronary and heart failure outcomes. Our analyses of the benefits of statin treatment in relation to baseline GFR suggest that there is no reason to exclude elderly patients with impaired renal function from treatment with a statin.
Using data from the PROSPER trial, Ian Ford and colleagues investigate whether reduced glomerular filtration rate is associated with cardiovascular and mortality risk among elderly people.
Editors' Summary
Background.
Cardiovascular disease (CVD)—disease that affects the heart and/or the blood vessels—is a common cause of death in developed countries. In the USA, for example, the single leading cause of death is coronary heart disease, a CVD in which narrowing of the heart's blood vessels slows or stops the blood supply to the heart and eventually causes a heart attack. Other types of CVD include stroke (in which narrowing of the blood vessels interrupts the brain's blood supply) and heart failure (a condition in which the heart can no longer pump enough blood to the rest of the body). Many factors increase the risk of developing CVD, including high blood pressure (hypertension), high blood cholesterol, having diabetes, smoking, and being overweight. Tools such as the “Framingham risk calculator” assess an individual's overall CVD risk by taking these and other risk factors into account. CVD risk can be minimized by taking drugs to reduce blood pressure or cholesterol levels (for example, pravastatin) and by making lifestyle changes.
Why Was This Study Done?
Another potential risk factor for CVD is impaired kidney (renal) function. In healthy people, the kidneys filter waste products and excess fluid out of the blood. A reduced “estimated glomerular filtration rate” (eGFR), which indicates impaired renal function, is associated with increased CVD in young and middle-aged people and increased all-cause and cardiovascular death in people who have vascular disease. But is reduced eGFR also associated with CVD and death in older people? If it is, it would be worth encouraging elderly people with reduced eGFR to avoid other CVD risk factors. In this study, the researchers determine the predictive value of eGFR for all-cause and vascular mortality (deaths caused by CVD) and for incident vascular events (a first heart attack, stroke, or heart failure) using data from the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER). This clinical trial examined pravastatin's effects on CVD development among 70–82 year olds with pre-existing vascular disease or an increased risk of CVD because of smoking, hypertension, or diabetes.
What Did the Researchers Do and Find?
The trial participants were divided into four groups based on their eGFR at the start of the study. The researchers then investigated the association between baseline CVD risk factors and baseline eGFR and between baseline eGFR and vascular events and deaths that occurred during the 3-year study. Several established CVD risk factors were associated with a reduced eGFR after allowing for other risk factors. In addition, people with a low eGFR (between 20 and 40 units) were twice as likely to die from any cause as people with an eGFR above 60 units (the normal eGFR for a young person is 100 units; eGFR decreases with age) and more than three times as likely to have nonfatal coronary heart disease or heart failure. A low eGFR also increased the risk of vascular mortality, other noncancer deaths, and fatal coronary heart disease and heart failure. Finally, pravastatin treatment reduced coronary heart disease deaths and nonfatal heart attacks most effectively among participants with the greatest degree of eGFR impairment.
What Do These Findings Mean?
These findings suggest that, in elderly people, impaired renal function is associated with levels of established CVD risk factors that increase the risk of vascular disease. They also suggest that impaired kidney function increases the risk of all-cause mortality, fatal vascular events, and fatal and nonfatal coronary heart disease and heart failure. Because the study participants were carefully chosen for inclusion in PROSPER, these findings may not be generalizable to all elderly people with vascular disease or vascular disease risk factors. Nevertheless, increased efforts should probably be made to encourage elderly people with reduced eGFR and other vascular risk factors to make lifestyle changes to reduce their overall CVD risk. Finally, although the effect of statins in elderly patients with renal dysfunction needs to be examined further, these findings suggest that this group of patients should benefit at least as much from statins as elderly patients with healthy kidneys.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000016.
The MedlinePlus Encyclopedia has pages on coronary heart disease, stroke, and heart failure (in English and Spanish)
MedlinePlus provides links to many other sources of information on heart disease, vascular disease, and stroke (in English and Spanish)
The US National Institute of Diabetes and Digestive and Kidney Diseases provides information on how the kidneys work and what can go wrong with them, including a list of links to further information about kidney disease
The American Heart Association provides information on all aspects of cardiovascular disease for patients, caregivers, and professionals (in several languages)
More information about PROSPER is available on the Web site of the Vascular Biochemistry Department of the University of Glasgow
doi:10.1371/journal.pmed.1000016
PMCID: PMC2628400  PMID: 19166266
5.  Personalized Prediction of Lifetime Benefits with Statin Therapy for Asymptomatic Individuals: A Modeling Study 
PLoS Medicine  2012;9(12):e1001361.
In a modeling study conducted by Myriam Hunink and colleagues, a population-based cohort from Rotterdam is used to predict the possible lifetime benefits of statin therapy, on a personalized basis.
Background
Physicians need to inform asymptomatic individuals about personalized outcomes of statin therapy for primary prevention of cardiovascular disease (CVD). However, current prediction models focus on short-term outcomes and ignore the competing risk of death due to other causes. We aimed to predict the potential lifetime benefits with statin therapy, taking into account competing risks.
Methods and Findings
A microsimulation model based on 5-y follow-up data from the Rotterdam Study, a population-based cohort of individuals aged 55 y and older living in the Ommoord district of Rotterdam, the Netherlands, was used to estimate lifetime outcomes with and without statin therapy. The model was validated in-sample using 10-y follow-up data. We used baseline variables and model output to construct (1) a web-based calculator for gains in total and CVD-free life expectancy and (2) color charts for comparing these gains to the Systematic Coronary Risk Evaluation (SCORE) charts. In 2,428 participants (mean age 67.7 y, 35.5% men), statin therapy increased total life expectancy by 0.3 y (SD 0.2) and CVD-free life expectancy by 0.7 y (SD 0.4). Age, sex, smoking, blood pressure, hypertension, lipids, diabetes, glucose, body mass index, waist-to-hip ratio, and creatinine were included in the calculator. Gains in total and CVD-free life expectancy increased with blood pressure, unfavorable lipid levels, and body mass index after multivariable adjustment. Gains decreased considerably with advancing age, while SCORE 10-y CVD mortality risk increased with age. Twenty-five percent of participants with a low SCORE risk achieved equal or larger gains in CVD-free life expectancy than the median gain in participants with a high SCORE risk.
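The heart of this analysis is a microsimulation that walks each individual through yearly transitions among health states under competing CVD and non-CVD mortality, with and without statin therapy. The toy model below captures that idea; all hazards, the ageing multiplier, and the 30% relative risk reduction for statins are illustrative assumptions, not the parameters of the RISC model.
```python
# Toy competing-risks microsimulation; all numbers are assumptions, not RISC parameters.
import numpy as np

rng = np.random.default_rng(3)

def simulate_life_years(cvd_hazard=0.01, other_hazard=0.02, statin_rrr=0.0, horizon=45):
    """Return (total life-years, CVD-free life-years) for one simulated person."""
    total = cvd_free = 0
    had_cvd = False
    for year in range(horizon):
        ageing = 1 + 0.08 * year                       # hazards rise with age
        if rng.random() < other_hazard * ageing:       # death from non-CVD causes
            break
        if had_cvd and rng.random() < 0.05:            # death after a CVD event
            break
        if not had_cvd and rng.random() < cvd_hazard * (1 - statin_rrr) * ageing:
            had_cvd = True                             # first CVD event
        total += 1
        if not had_cvd:
            cvd_free += 1
    return total, cvd_free

def mean_outcomes(statin_rrr, n=10000):
    return np.array([simulate_life_years(statin_rrr=statin_rrr) for _ in range(n)]).mean(axis=0)

no_statin, statin = mean_outcomes(0.0), mean_outcomes(0.30)
print("gain in total life expectancy:    %.2f y" % (statin[0] - no_statin[0]))
print("gain in CVD-free life expectancy: %.2f y" % (statin[1] - no_statin[1]))
```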
Conclusions
We developed tools to predict personalized increases in total and CVD-free life expectancy with statin therapy. The predicted gains we found are small. If the underlying model is validated in an independent cohort, the tools may be useful in discussing with patients their individual outcomes with statin therapy.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Cardiovascular disease (CVD) affects the heart and/or the blood vessels and is a major cause of illness and death worldwide. In the US, for example, coronary heart disease—a CVD in which narrowing of the heart's blood vessels by fatty deposits slows the blood supply to the heart and may eventually cause a heart attack—is the leading cause of death, and stroke—a CVD in which the brain's blood supply is interrupted—is the fourth leading cause of death. Established risk factors for CVD include smoking, high blood pressure, obesity, and high blood levels of a fat called low-density lipoprotein (“bad cholesterol”). Because many of these risk factors can be modified by lifestyle changes and by drugs, CVD can be prevented. Thus, physicians can assess a healthy individual's risk of developing CVD using a CVD prediction model (equations that take into account the CVD risk factors to which the individual is exposed) and can then recommend lifestyle changes and medications to reduce that individual's CVD risk.
Why Was This Study Done?
Current guidelines recommend that asymptomatic (healthy) individuals whose likely CVD risk is high should be encouraged to take statins—cholesterol-lowering drugs—as a preventative measure. Statins help to prevent CVD in healthy people with a high predicted risk of CVD, but, like all medicines, they have some unwanted side effects, so it is important that physicians can communicate both the benefits and drawbacks of statins to their patients in a way that allows them to make an informed decision about taking these drugs. Telling a patient that statins will reduce his or her short-term risk of CVD is not always helpful—patients really need to know the potential lifetime benefits of statin therapy. That is, they need to know how much longer they might live if they take statins. Here, the researchers use a mathematical model to predict the personalized lifetime benefits (increased total and CVD-free life expectancy) of statin therapy for individuals without a history of CVD.
What Did the Researchers Do and Find?
The researchers used the Rotterdam Ischemic Heart Disease & Stroke Computer Simulation (RISC) model, which simulates the life courses of individuals through six health states, from well through to CVD or non-CVD death, to estimate lifetime outcomes with and without statin therapy in a population of healthy elderly individuals. They then used these outcomes and information on baseline risk factors to develop a web-based calculator suitable for personalized prediction of the lifetime benefits of statins in routine clinical practice. The model estimated that statin therapy increases average life expectancy in the study population by 0.3 years and average CVD-free life expectancy by 0.7 years. The gains in total and CVD-free life expectancy associated with statin therapy increased with blood pressure, unfavorable cholesterol levels, and body mass index (an indicator of body fat) but decreased with age. Notably, the web-based calculator predicted that some individuals with a low ten-year CVD risk might achieve a similar or larger gain in CVD-free life expectancy with statin therapy than some individuals with a high ten-year risk. So, for example, both a 55-year-old non-smoking woman with a ten-year CVD mortality risk of 2% (a two in a hundred chance of dying of CVD within ten years) and a 65-year-old male smoker with a ten-year CVD mortality risk of 15% might gain one year of CVD-free life expectancy with statin therapy.
What Do These Findings Mean?
These findings suggest that statin therapy can lead on average to small gains in total life expectancy and slightly larger gains in CVD-free life expectancy among healthy individuals, and show that life expectancy benefits can be predicted using an individual's risk factor profile. The accuracy and generalizability of these findings is limited by the assumptions included in the model (in particular, the model did not allow for the known side effects of statin therapy) and by the data fed into it—importantly, the risk prediction model needs to be validated using an independent dataset. If future research confirms the findings of this study, the researchers' web-based calculator could provide complementary information to the currently recommended ten-year CVD mortality risk assessment. Whether communication of personalized outcomes will ultimately result in better clinical outcomes remains to be seen, however, because patients may be less likely to choose statin therapy when provided with more information about its likely benefits.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001361.
The web-based calculator for personalized prediction of lifetime benefits with statin therapy is available (after agreement to software license)
The American Heart Association provides information about many types of cardiovascular disease for patients, carers, and professionals, including information about drug therapy for cholesterol and a heart attack risk calculator
The UK National Health Service Choices website provides information about cardiovascular disease and about statins
Information is available from the British Heart Foundation on heart disease and keeping the heart healthy; information is also available on statins, including personal stories about deciding to take statins
The US National Heart Lung and Blood Institute provides information on a wide range of cardiovascular diseases
The European Society of Cardiology's cardiovascular disease risk assessment model (SCORE) is available
MedlinePlus provides links to many other sources of information on heart diseases, vascular diseases, stroke, and statins (in English and Spanish)
doi:10.1371/journal.pmed.1001361
PMCID: PMC3531501  PMID: 23300388
6.  The Preclinical Natural History of Serous Ovarian Cancer: Defining the Target for Early Detection 
PLoS Medicine  2009;6(7):e1000114.
Pat Brown and colleagues carry out a modeling study and define what properties a biomarker-based screening test would require in order to be clinically useful.
Background
Ovarian cancer kills approximately 15,000 women in the United States every year, and more than 140,000 women worldwide. Most deaths from ovarian cancer are caused by tumors of the serous histological type, which are rarely diagnosed before the cancer has spread. Rational design of a potentially life-saving early detection and intervention strategy requires understanding the lesions we must detect in order to prevent lethal progression. Little is known about the natural history of lethal serous ovarian cancers before they become clinically apparent. We can learn about this occult period by studying the unsuspected serous cancers that are discovered in a small fraction of apparently healthy women who undergo prophylactic bilateral salpingo-oophorectomy (PBSO).
Methods and Findings
We developed models for the growth, progression, and detection of occult serous cancers on the basis of a comprehensive analysis of published data on serous cancers discovered by PBSO in BRCA1 mutation carriers. Our analysis yielded several critical insights into the early natural history of serous ovarian cancer. First, these cancers spend on average more than 4 y as in situ, stage I, or stage II cancers and approximately 1 y as stage III or IV cancers before they become clinically apparent. Second, for most of the occult period, serous cancers are less than 1 cm in diameter, and not visible on gross examination of the ovaries and Fallopian tubes. Third, the median diameter of a serous ovarian cancer when it progresses to an advanced stage (stage III or IV) is about 3 cm. Fourth, to achieve 50% sensitivity in detecting tumors before they advance to stage III, an annual screen would need to detect tumors of 1.3 cm in diameter; 80% detection sensitivity would require detecting tumors less than 0.4 cm in diameter. Fifth, to achieve a 50% reduction in serous ovarian cancer mortality with an annual screen, a test would need to detect tumors of 0.5 cm in diameter.
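The detection thresholds quoted above come from coupling a tumour growth model to a screening schedule. The Monte Carlo sketch below shows the shape of that calculation: tumours grow exponentially in volume, progress to stage III/IV around a 3 cm median diameter, and are caught only if an annual screen falls while they are detectable but not yet advanced. The assumed 4-month volume doubling time, starting size, and size spread are illustrative, not the authors' fitted parameters, so the output will not match the paper's numbers exactly.
```python
# Monte Carlo sketch; growth and progression parameters are assumptions.
import numpy as np

rng = np.random.default_rng(4)
DOUBLING_TIME_Y = 4 / 12              # assumed volume doubling time (years)
D0_CM = 0.05                          # assumed starting diameter (cm)

def diameter_at(t_years):
    """Diameter after t years of exponential volume growth from D0_CM."""
    return D0_CM * 2 ** (t_years / DOUBLING_TIME_Y / 3)   # diameter scales as volume**(1/3)

def sensitivity(detection_limit_cm, n=20000):
    detected = 0
    for _ in range(n):
        d_prog = np.exp(rng.normal(np.log(3.0), 0.5))        # progression diameter, median 3 cm
        t_prog = 3 * DOUBLING_TIME_Y * np.log2(d_prog / D0_CM)
        screens = np.arange(rng.uniform(0, 1), t_prog, 1.0)  # annual screens at a random phase
        if any(diameter_at(t) >= detection_limit_cm for t in screens):
            detected += 1
    return detected / n

for limit_cm in (0.4, 0.5, 1.0, 1.3):
    print(f"detection limit {limit_cm} cm -> sensitivity before stage III/IV: {sensitivity(limit_cm):.2f}")
```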
Conclusions
Our analysis has formalized essential conditions for successful early detection of serous ovarian cancer. Although the window of opportunity for early detection of these cancers lasts for several years, developing a test sufficiently sensitive and specific to take advantage of that opportunity will be a challenge. We estimated that the tumors we would need to detect to achieve even 50% sensitivity are more than 200 times smaller than the clinically apparent serous cancers typically used to evaluate performance of candidate biomarkers; none of the biomarker assays reported to date comes close to the required level of performance. Overcoming the signal-to-noise problem inherent in detection of tiny tumors will likely require discovery of truly cancer-specific biomarkers or development of novel approaches beyond traditional blood protein biomarkers. While this study was limited to ovarian cancers of serous histological type and to those arising in BRCA1 mutation carriers specifically, we believe that the results are relevant to other hereditary serous cancers and to sporadic ovarian cancers. A similar approach could be applied to other cancers to aid in defining their early natural history and to guide rational design of an early detection strategy.
Please see later in the article for Editors' Summary
Editors' Summary
Background
Every year about 190,000 women develop ovarian cancer and more than 140,000 die from the disease. Ovarian cancer occurs when a cell on the surface of the ovaries (two small organs in the pelvis that produce eggs) or in the Fallopian tubes (which connect the ovaries to the womb) acquires genetic changes (mutations) that allow it to grow uncontrollably and to spread around the body (metastasize). For women whose cancer is diagnosed when it is confined to the site of origin—ovary or Fallopian tube—(stage I disease), the outlook is good; 70%–80% of these women survive for at least 5 y. However, very few ovarian cancers are diagnosed this early. Usually, by the time the cancer causes symptoms (often only vague abdominal pain and mild digestive disturbances), it has spread into the pelvis (stage II disease), into the space around the gut, stomach, and liver (stage III disease), or to distant organs (stage IV disease). Patients with advanced-stage ovarian cancer are treated with surgery and chemotherapy but, despite recent treatment improvements, only 15% of women diagnosed with stage IV disease survive for 5 y.
Why Was This Study Done?
Most deaths from ovarian cancer are caused by serous ovarian cancer, a tumor subtype that is rarely diagnosed before it has spread. Early detection of serous ovarian cancer would save the lives of many women but no one knows what these cancers look like before they spread or how long they grow before they become clinically apparent. Learning about this occult (hidden) period of ovarian cancer development by observing tumors from their birth to late-stage disease is not feasible. However, some aspects of the early natural history of ovarian cancer can be studied by using data collected from healthy women who have had their ovaries and Fallopian tubes removed (prophylactic bilateral salpingo-oophorectomy [PBSO]) because they have inherited a mutated version of the BRCA1 gene that increases their ovarian cancer risk. In a few of these women, unsuspected ovarian cancer is discovered during PBSO. In this study, the researchers identify and analyze the available reports on occult serous ovarian cancer found this way and then develop mathematical models describing the early natural history of ovarian cancer.
What Did the Researchers Do and Find?
The researchers first estimated the time period during which the detection of occult tumors might save lives using the data from these reports. Serous ovarian cancers, they estimated, spend more than 4 y as in situ (a very early stage of cancer development), stage I, or stage II cancers and about 1 y as stage III and IV cancers before they become clinically apparent. Next, the researchers used the data to develop mathematical models for the growth, progression, and diagnosis of serous ovarian cancer (the accuracy of which depends on the assumptions used to build the models and on the quality of the data fed into them). These models indicated that, for most of the occult period, serous cancers had a diameter of less than 1 cm (too small to be detected during surgery or by gross examination of the ovaries or Fallopian tubes) and that more than half of serous cancers had advanced to stage III/IV by the time they measured 3 cm across. Furthermore, to enable the detection of half of serous ovarian cancers before they reached stage III, an annual screening test would need to detect cancers with a diameter of 1.3 cm and to halve deaths from serous ovarian cancer, an annual screening test would need to detect 0.5-cm diameter tumors.
What Do These Findings Mean?
These findings suggest that the time period over which the early detection of serous ovarian cancer would save lives is surprisingly long. More soberingly, the authors find that a test that is sensitive and specific enough to take advantage of this “window of opportunity” would need to detect tumors hundreds of times smaller than clinically apparent serous cancers. So far no ovarian cancer-specific protein or other biomarker has been identified that could be used to develop a test that comes anywhere near this level of performance. Identification of truly ovarian cancer-specific biomarkers or novel strategies will be needed in order to take advantage of the window of opportunity. The stages prior to clinical presentation of other lethal cancers are still very poorly understood. Similar studies of the early natural history of these cancers could help guide the development of rational early detection strategies.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000114.
The US National Cancer Institute provides a brief description of what cancer is and how it develops and information on all aspects of ovarian cancer for patients and professionals. It also provides a fact sheet on BRCA1 mutations and cancer risk (in English and Spanish)
The UK charity Cancerbackup also provides information about all aspects of ovarian cancer
MedlinePlus provides a list of links to additional information about ovarian cancer (in English and Spanish)
The Canary Foundation is a nonprofit organization dedicated to development of effective strategies for early detection of cancers including ovarian cancer.
doi:10.1371/journal.pmed.1000114
PMCID: PMC2711307  PMID: 19636370
7.  Primary prevention of coronary heart disease: integration of new data, evolving views, revised goals, and role of rosuvastatin in management. A comprehensive survey 
A recent explosion in the amount of cardiovascular risk and incipient, undetected subclinical cardiovascular pathology has swept across the globe. Nearly 70% of adult Americans are overweight or obese; the prevalence of visceral obesity stands at 53% and continues to rise. At any one time, 55% of the population is on a weight-loss diet, and almost all fail. Fewer than 15% of adults or children exercise sufficiently, and over 60% engage in no vigorous activity. Among adults, 11%–13% have diabetes, 34% have hypertension, 36% have prehypertension, 36% have prediabetes, 12% have both prediabetes and prehypertension, and 15% of the population with either diabetes, hypertension, or dyslipidemia are undiagnosed. About one-third of the adult population, and 80% of the obese, have fatty livers. With 34% of children overweight or obese, prevalence having doubled in just a few years, type 2 diabetes, hypertension, dyslipidemia, and fatty livers in children are at their highest levels ever. Half of adults have at least one cardiovascular risk factor. Not even 1% of the population attains ideal cardiovascular health. Despite falling coronary death rates for decades, coronary heart disease (CHD) death rates in US women 35 to 54 years of age may now be increasing because of the obesity epidemic. Up to 65% of patients do not have their conventional risk biomarkers under control. Only 30% of high risk patients with CHD achieve aggressive low density lipoprotein (LDL) targets. Of those patients with multiple risk factors, fewer than 10% have all of them adequately controlled. Even when patients are titrated to evidence-based targets, about 70% of cardiac events remain unaddressed. Undertreatment is also common. About two-thirds of high risk primary care patients are not taking needed medications for dyslipidemia. Poor patient adherence, typically below 50%, adds further difficulty. Hence, after all such fractional reductions are multiplied, only a modest portion of total cardiovascular risk burden is actually being eliminated, and the full potential of risk reduction remains unrealized. Worldwide the situation is similar, with the prevalence of metabolic syndrome approaching 50%. Primordial prevention, resulting from healthful lifestyle habits that do not permit the appearance of risk factors, is the preferred method to lower cardiovascular risk. Lowering the prevalence of obesity is the most urgent matter, and is pleiotropic since it affects blood pressure, lipid profiles, glucose metabolism, inflammation, and atherothrombotic disease progression. Physical activity also improves several risk factors, with the additional potential to lower heart rate. Given the current obstacles, success of primordial prevention remains uncertain. At the same time, the consequences of delay and inaction will inevitably be disastrous, and the sense of urgency mounts. Since most CHD events arise in a large subpopulation of low- to moderate-risk individuals, identifying a high proportion of those who will go on to develop events with accuracy remains unlikely. Without a refinement in risk prediction, the current model of targeting high-risk individuals for aggressive therapy may not succeed alone, especially given the rising burden of risk. Estimating cardiovascular risk over a period of 10 years, using scoring systems such as Framingham or SCORE, continues to enjoy widespread use and is recommended for all adults. 
Limitations in the former have been of concern, including the under- or over-estimation of risk in specific populations, a relatively short 10-year risk horizon, focus on myocardial infarction and CHD death, and exclusion of family history. Classification errors may occur in up to 37% of individuals, particularly women and the young. Several different scoring systems are discussed in this review. The use of lifetime risk is an important conceptual advance, since ≥90% of young adults with a low 10-year risk have a lifetime risk of ≥39%; over half of all American adults have a low 10-year risk but a high lifetime risk. At age 50 the absence of traditional risk factors is associated with extremely low lifetime risk and significantly greater longevity. Pathological and epidemiological data confirm that atherosclerosis begins in early childhood, and advances seamlessly and inexorably throughout life. Risk factors in childhood are similar to those in adults, and track between stages of life. When indicated, aggressive treatment should begin at the earliest indication, and be continued for years. For those patients at intermediate risk according to global risk scores, C-reactive protein (CRP), coronary artery calcium (CAC), and carotid intima-media thickness (CIMT) are available for further stratification. Using statins for primary prevention is recommended by guidelines, is prevalent, but remains underprescribed. Statin drugs are unrivaled, evidence-based, major weapons to lower cardiovascular risk. Even when low density lipoprotein cholesterol (LDL-C) targets are attained, over half of patients continue to have disease progression and clinical events. This residual risk is of great concern, and multiple sources of remaining risk exist. Though clinical evidence is incomplete, altering or raising the blood high density lipoprotein cholesterol (HDL-C) level continues to be pursued. Of all agents available, rosuvastatin produces the greatest reduction in LDL-C, LDL-P, and improvement in apoA-I/apoB, together with a favorable safety profile. Several recent proposals and methods to lower cardiovascular risk are reviewed. A combination of approaches, such as the addition of lifetime risk, refinement of risk prediction, guideline compliance, novel treatments, improvement in adherence, and primordial prevention, including environmental and social intervention, will be necessary to lower the present high risk burden.
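The survey's point that the various fractional reductions are multiplied can be made concrete with a rough back-of-the-envelope calculation using figures quoted above (about one-third of high-risk patients treated, adherence below 50%, roughly 30% reaching aggressive LDL targets, and about 30% of events averted even at target). Chaining these factors is a simplification and not the author's own arithmetic, but it illustrates why only a small share of the preventable burden is actually eliminated.
```python
# Rough illustration only; the chained factors overlap and are approximate.
treated        = 1 / 3   # roughly one-third of high-risk patients are on lipid therapy
adherent       = 0.50    # typical adherence is below 50%
at_target      = 0.30    # ~30% of treated high-risk patients reach aggressive LDL targets
events_averted = 0.30    # ~70% of events remain even at evidence-based targets

realized = treated * adherent * at_target * events_averted
print(f"share of the preventable event burden actually averted: {realized:.1%}")  # ~1.5%
```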
doi:10.2147/DDDT.S14934
PMCID: PMC3140289  PMID: 21792295
primary prevention; cardiovascular risk; coronary heart disease; primordial prevention; rosuvastatin; JUPITER study; statin drugs; C-reactive protein; inflammation; low-density lipoprotein; high-density lipoprotein; diabetes; metabolic syndrome; Framingham risk score; Reynolds risk score; SCORE; coronary artery calcification; carotid intima-media thickness; hypertension; obesity; non-HDL-cholesterol; LDL-P; dysfunctional HDL; lifetime risk; advanced lipid testing; Bogalusa Heart Study
8.  Reducing the Impact of the Next Influenza Pandemic Using Household-Based Public Health Interventions 
PLoS Medicine  2006;3(9):e361.
Background
The outbreak of highly pathogenic H5N1 influenza in domestic poultry and wild birds has caused global concern over the possible evolution of a novel human strain [1]. If such a strain emerges, and is not controlled at source [2,3], a pandemic is likely to result. Health policy in most countries will then be focused on reducing morbidity and mortality.
Methods and Findings
We estimate the expected reduction in primary attack rates for different household-based interventions using a mathematical model of influenza transmission within and between households. We show that, for lower transmissibility strains [2,4], the combination of household-based quarantine, isolation of cases outside the household, and targeted prophylactic use of anti-virals will be highly effective and likely feasible across a range of plausible transmission scenarios. For example, for a basic reproductive number (the average number of people infected by a typically infectious individual in an otherwise susceptible population) of 1.8, assuming only 50% compliance, this combination could reduce the infection (symptomatic) attack rate from 74% (49%) to 40% (27%), requiring peak quarantine and isolation levels of 6.2% and 0.8% of the population, respectively, and an overall anti-viral stockpile of 3.9 doses per member of the population. Although contact tracing may be additionally effective, the resources required make it impractical in most scenarios.
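For intuition about the relationship between the basic reproductive number and the attack rates quoted above, the sketch below solves the classical homogeneous-mixing final-size equation z = 1 − exp(−R0·z). This is only a rough check, since the authors' household-structured model is considerably more detailed, but for R0 = 1.8 it lands close to the 74% baseline infection attack rate they report.

```python
import numpy as np

def final_size(r0, tol=1e-10):
    """Attack rate z solving z = 1 - exp(-r0 * z) by fixed-point iteration."""
    z = 0.5                      # any starting guess in (0, 1) works for r0 > 1
    while True:
        z_new = 1 - np.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new

print(f"R0 = 1.8 -> homogeneous-mixing attack rate {final_size(1.8):.0%}")
# prints roughly 73%, close to the paper's 74% baseline infection attack rate
```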
Conclusions
National influenza pandemic preparedness plans currently focus on reducing the impact associated with a constant attack rate, rather than on reducing transmission. Our findings suggest that the additional benefits and resource requirements of household-based interventions in reducing average levels of transmission should also be considered, even when expected levels of compliance are only moderate.
Voluntary household-based quarantine and external isolation are likely to be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is moderate.
Editors' Summary
Background.
Naturally occurring variation in the influenza virus can lead both to localized annual epidemics and to less frequent global pandemics of catastrophic proportions. The most destructive of the three influenza pandemics of the 20th century, the so-called Spanish flu of 1918–1919, is estimated to have caused 20 million deaths. As evidenced by ongoing tracking efforts and news media coverage of H5N1 avian influenza, contemporary approaches to monitoring and communications can be expected to alert health officials and the general public to the emergence of new, potentially pandemic strains before they spread globally.
Why Was This Study Done?
In order to act most effectively on advance notice of an approaching influenza pandemic, public health workers need to know which available interventions are likely to be most effective. This study was done to estimate the effectiveness of specific preventive measures that communities might implement to reduce the impact of pandemic flu. In particular, the study evaluates methods to reduce person-to-person transmission of influenza, in the likely scenario that complete control cannot be achieved by mass vaccination and anti-viral treatment alone.
What Did the Researchers Do and Find?
The researchers developed a mathematical model—essentially a computer simulation—to simulate the course of pandemic influenza in a hypothetical population at risk for infection at home, through external peer networks such as schools and workplaces, and through general community transmission. Parameters such as the distribution of household sizes, the rate at which individuals develop symptoms from nonpandemic viruses, and the risk of infection within households were derived from demographic and epidemiologic data from Hong Kong, as well as empirical studies of influenza transmission. A model based on these parameters was then used to calculate the effects of interventions including voluntary household quarantine, voluntary individual isolation in a facility outside the home, and contact tracing (that is, asking infectious individuals to identify people whom they may have infected and then warning those people) on the spread of pandemic influenza through the population. The model also took into account the anti-viral treatment of exposed, asymptomatic household members and of individuals in isolation, and assumed that all intervention strategies were put into place before the arrival of individuals infected with the pandemic virus.
  Using this model, the authors predicted that even if only half of the population were to comply with public health interventions, the proportion infected during the first year of an influenza pandemic could be substantially reduced by a combination of household-based quarantine, isolation of actively infected individuals in a location outside the household, and targeted prophylactic treatment of exposed individuals with anti-viral drugs. Based on an influenza-associated mortality rate of 0.5% (as has been estimated for New York City in the 1918–1919 pandemic), the magnitude of the predicted benefit of these interventions is a reduction from 49% to 27% in the proportion of the population who become ill in the first year of the pandemic, which would correspond to 16,000 fewer deaths in a city the size of Hong Kong (6.8 million people). In the model, anti-viral treatment appeared to be about as effective as isolation when each was used in combination with household quarantine, but would require stockpiling 3.9 doses of anti-viral for each member of the population. Contact tracing was predicted to provide a modest additional benefit over quarantine and isolation, but also to increase considerably the proportion of the population in quarantine.
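The paragraph above describes a far richer individual-based model than can be reproduced here, but the following toy sketch conveys the basic structure of household-plus-community transmission and how damping community contacts (a crude stand-in for quarantine and isolation compliance) lowers the attack rate. All parameters (household sizes, transmission probabilities, seeding) are invented for illustration and are not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_households=2000, mean_size=3, p_house=0.4,
             r_community=1.3, community_reduction=0.0, n_seeds=10):
    """Toy household + community epidemic run in discrete generations."""
    sizes = 1 + rng.poisson(mean_size - 1, n_households)   # household sizes >= 1
    household = np.repeat(np.arange(n_households), sizes)  # household id per person
    n = household.size
    status = np.zeros(n, dtype=np.int8)                    # 0 = S, 1 = I, 2 = R
    status[rng.choice(n, n_seeds, replace=False)] = 1
    while (status == 1).any():
        infectives = np.where(status == 1)[0]
        newly = []
        for i in infectives:
            # within-household transmission to susceptible housemates
            mates = np.where((household == household[i]) & (status == 0))[0]
            newly.extend(mates[rng.random(mates.size) < p_house])
            # random community contacts, damped by intervention compliance
            k = rng.poisson(r_community * (1 - community_reduction))
            contacts = rng.integers(0, n, k)
            newly.extend(contacts[status[contacts] == 0])
        status[infectives] = 2                              # this generation recovers
        status[np.array(newly, dtype=int)] = 1              # next generation of cases
    return (status == 2).mean()                             # final attack rate

print("attack rate, no intervention      :", round(simulate(), 2))
print("attack rate, 50% community damping:", round(simulate(community_reduction=0.5), 2))
```

In the authors' model, compliance, anti-viral prophylaxis, and within-household dynamics are all handled explicitly; the point here is only the two-level contact structure.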
What Do These Findings Mean?
This study predicts that voluntary household-based quarantine and external isolation can be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is far from uniform. These simulations can therefore inform preparedness plans in the absence of data from actual intervention trials, which would be impossible outside (and impractical within) the context of an actual pandemic. Like all mathematical models, however, the one presented in this study relies on a number of assumptions regarding the characteristics and circumstances of the situation that it is intended to represent. For example, the authors found that the efficacy of policies to reduce the rate of infection vary according to the ease with which a given virus spreads from person to person. Because this parameter (known as the basic reproductive ratio, R0) cannot be reliably predicted for a new viral strain based on past epidemics, the authors note that in an actual influenza pandemic rapid determinations of R0 in areas already involved would be necessary to finalize public health responses in threatened areas. Further, the implementation of the interventions that appear beneficial in this model would require devoting attention and resources to practical considerations, such as how to staff isolation centers and provide food and water to those in household quarantine. However accurate the scientific data and predictive models may be, their effectiveness can only be realized through well-coordinated local, as well as international, efforts.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030361.
• World Health Organization influenza pandemic preparedness page
• US Department of Health and Human Services avian and pandemic flu information site
• Pandemic influenza page from the Public Health Agency of Canada
• Emergency planning page on pandemic flu from the England Department of Health
• Wikipedia entry on pandemic influenza with links to individual country resources (note: Wikipedia is a free Internet encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.0030361
PMCID: PMC1526768  PMID: 16881729
9.  Risk Prediction for Breast, Endometrial, and Ovarian Cancer in White Women Aged 50 y or Older: Derivation and Validation from Population-Based Cohort Studies 
PLoS Medicine  2013;10(7):e1001492.
Ruth Pfeiffer and colleagues describe models to calculate absolute risks for breast, endometrial, and ovarian cancers for white, non-Hispanic women over 50 years old using easily obtainable risk factors.
Please see later in the article for the Editors' Summary
Background
Breast, endometrial, and ovarian cancers share some hormonal and epidemiologic risk factors. While several models predict absolute risk of breast cancer, there are few models for ovarian cancer in the general population, and none for endometrial cancer.
Methods and Findings
Using data on white, non-Hispanic women aged 50+ y from two large population-based cohorts (the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial [PLCO] and the National Institutes of Health–AARP Diet and Health Study [NIH-AARP]), we estimated relative and attributable risks and combined them with age-specific US-population incidence and competing mortality rates. All models included parity. The breast cancer model additionally included estrogen and progestin menopausal hormone therapy (MHT) use, other MHT use, age at first live birth, menopausal status, age at menopause, family history of breast or ovarian cancer, benign breast disease/biopsies, alcohol consumption, and body mass index (BMI); the endometrial model included menopausal status, age at menopause, BMI, smoking, oral contraceptive use, MHT use, and an interaction term between BMI and MHT use; the ovarian model included oral contraceptive use, MHT use, and family history of breast or ovarian cancer. In independent validation data (Nurses' Health Study cohort) the breast and ovarian cancer models were well calibrated; expected-to-observed cancer ratios were 1.00 (95% confidence interval [CI]: 0.96–1.04) for breast cancer and 1.08 (95% CI: 0.97–1.19) for ovarian cancer. The number of endometrial cancers was significantly overestimated, expected/observed = 1.20 (95% CI: 1.11–1.29). The areas under the receiver operating characteristic curves (AUCs; discriminatory power) were 0.58 (95% CI: 0.57–0.59), 0.59 (95% CI: 0.56–0.63), and 0.68 (95% CI: 0.66–0.70) for the breast, ovarian, and endometrial models, respectively.
Conclusions
These models predict absolute risks for breast, endometrial, and ovarian cancers from easily obtainable risk factors and may assist in clinical decision-making. Limitations are the modest discriminatory ability of the breast and ovarian models and that these models may not generalize to women of other races.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In 2008, just three types of cancer accounted for 10% of global cancer-related deaths. That year, about 460,000 women died from breast cancer (the most frequently diagnosed cancer among women and the fifth most common cause of cancer-related death). Another 140,000 women died from ovarian cancer, and 74,000 died from endometrial (womb) cancer (the 14th and 20th most common causes of cancer-related death, respectively). Although these three cancers originate in different tissues, they nevertheless share many risk factors. For example, current age, age at menarche (first period), and parity (the number of children a woman has had) are all strongly associated with breast, ovarian, and endometrial cancer risk. Because these cancers share many hormonal and epidemiological risk factors, a woman with a high breast cancer risk is also likely to have an above-average risk of developing ovarian or endometrial cancer.
Why Was This Study Done?
Several statistical models (for example, the Breast Cancer Risk Assessment Tool) have been developed that estimate a woman's absolute risk (probability) of developing breast cancer over the next few years or over her lifetime. Absolute risk prediction models are useful in the design of cancer prevention trials and can also help women make informed decisions about cancer prevention and treatment options. For example, a woman at high risk of breast cancer might decide to take tamoxifen for breast cancer prevention, but ideally she needs to know her absolute endometrial cancer risk before doing so because tamoxifen increases the risk of this cancer. Similarly, knowledge of her ovarian cancer risk might influence a woman's decision regarding prophylactic removal of her ovaries to reduce her breast cancer risk. There are few absolute risk prediction models for ovarian cancer, and none for endometrial cancer, so here the researchers develop models to predict the risk of these cancers and of breast cancer.
What Did the Researchers Do and Find?
Absolute risk prediction models are constructed by combining estimates for risk factors from cohorts with population-based incidence rates from cancer registries. Models are validated by testing, in an independent cohort, their ability to identify people with the disease and their ability to predict the observed numbers of incident cases. The researchers used data on white, non-Hispanic women aged 50 years or older that were collected during two large prospective US cohort studies of cancer screening and of diet and health, and US cancer incidence and mortality rates provided by the Surveillance, Epidemiology, and End Results Program to build their models. The models all included parity as a risk factor, as well as other factors. The model for endometrial cancer, for example, also included menopausal status, age at menopause, body mass index (an indicator of the amount of body fat), oral contraceptive use, menopausal hormone therapy use, and an interaction term between menopausal hormone therapy use and body mass index. Individual women's risk for endometrial cancer calculated using this model ranged from 1.22% to 17.8% over the next 20 years depending on their exposure to various risk factors. Validation of the models using data from the US Nurses' Health Study indicated that the endometrial cancer model overestimated the risk of endometrial cancer but that the breast and ovarian cancer models were well calibrated—the predicted and observed risks for these cancers in the validation cohort agreed closely. Finally, the discriminatory power of the models (a measure of how well a model separates people who have a disease from people who do not have the disease) was modest for the breast and ovarian cancer models but somewhat better for the endometrial cancer model.
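As a hedged sketch of the general machinery (the authors' exact specification, including how the baseline hazard is adjusted for attributable risk, may differ in detail), absolute risk models of this kind typically compute the probability that a woman with risk-factor profile x who is currently aged a develops the cancer within the next τ years as

$$
P(a,\tau \mid x) \;=\; \int_{a}^{a+\tau} h_1(t)\,rr(x)\,\exp\!\left(-\int_{a}^{t}\big[h_1(u)\,rr(x)+h_2(u)\big]\,du\right)dt,
$$

where h1 is the age-specific baseline cancer incidence derived from registry data, rr(x) is the relative risk for her combination of risk factors estimated from the cohorts, and h2 is the hazard of death from competing causes; the exponential term is the probability of reaching age t free of both the cancer and competing death.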
What Do These Findings Mean?
These findings show that breast, ovarian, and endometrial cancer can all be predicted using information on known risk factors for these cancers that is easily obtainable. Because these models were constructed and validated using data from white, non-Hispanic women aged 50 years or older, they may not accurately predict absolute risk for these cancers for women of other races or ethnicities. Moreover, the modest discriminatory power of the breast and ovarian cancer models means they cannot be used to decide which women should be routinely screened for these cancers. Importantly, however, these well-calibrated models should provide realistic information about an individual's risk of developing breast, ovarian, or endometrial cancer that can be used in clinical decision-making and that may assist in the identification of potential participants for research studies.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001492.
This study is further discussed in a PLOS Medicine Perspective by Lars Holmberg and Andrew Vickers
The US National Cancer Institute provides comprehensive information about cancer (in English and Spanish), including detailed information about breast cancer, ovarian cancer, and endometrial cancer;
Information on the Breast Cancer Risk Assessment Tool, the Surveillance, Epidemiology, and End Results Program, and on the prospective cohort study of screening and the diet and health study that provided the data used to build the models is also available on the NCI site
Cancer Research UK, a not-for-profit organization, provides information about cancer, including detailed information on breast cancer, ovarian cancer, and endometrial cancer
The UK National Health Service Choices website has information and personal stories about breast cancer, ovarian cancer, and endometrial cancer; the not-for-profit organization Healthtalkonline also provides personal stories about dealing with breast cancer and ovarian cancer
doi:10.1371/journal.pmed.1001492
PMCID: PMC3728034  PMID: 23935463
10.  Optimal Management of High-Risk T1G3 Bladder Cancer: A Decision Analysis 
PLoS Medicine  2007;4(9):e284.
Background
Controversy exists about the most appropriate treatment for high-risk superficial (stage T1; grade G3) bladder cancer. Immediate cystectomy offers the best chance for survival but may be associated with an impaired quality of life compared with conservative therapy. We estimated life expectancy (LE) and quality-adjusted life expectancy (QALE) for both of these treatments for men and women of different ages and comorbidity levels.
Methods and Findings
We evaluated two treatment strategies for high-risk, T1G3 bladder cancer using a decision-analytic Markov model: (1) Immediate cystectomy with neobladder creation versus (2) conservative management with intravesical bacillus Calmette-Guérin (BCG) and delayed cystectomy in individuals with resistant or progressive disease. Probabilities and utilities were derived from published literature where available, and otherwise from expert opinion. Extensive sensitivity analyses were conducted to identify variables most likely to influence the decision. Structural sensitivity analyses modifying the base case definition and the triggers for cystectomy in the conservative therapy arm were also explored. Probabilistic sensitivity analysis was used to assess the joint uncertainty of all variables simultaneously and the uncertainty in the base case results. External validation of model outputs was performed by comparing model-predicted survival rates with independent published literature. The mean LE of a 60-y-old male was 14.3 y for immediate cystectomy and 13.6 y with conservative management. With the addition of utilities, the immediate cystectomy strategy yielded a mean QALE of 12.32 y and remained preferred over conservative therapy by 0.35 y. Worsening patient comorbidity diminished the benefit of early cystectomy but altered the LE-based preferred treatment only for patients over age 70 y and the QALE-based preferred treatment for patients over age 65 y. Sensitivity analyses revealed that patients over the age of 70 y or those strongly averse to loss of sexual function, gastrointestinal dysfunction, or life without a bladder have a higher QALE with conservative therapy. The results of structural or probabilistic sensitivity analyses did not change the preferred treatment option. Model-predicted overall and disease-specific survival rates were similar to those reported in published studies, suggesting external validity.
Conclusions
Our model is, to our knowledge, the first of its kind in bladder cancer, and demonstrated that younger patients with high-risk T1G3 bladder cancer had a higher LE and QALE with immediate cystectomy. The decision to pursue immediate cystectomy versus conservative therapy should be based on discussions that consider patient age, comorbid status, and an individual's preference for particular postcystectomy health states. Patients over the age of 70 y or those who place high value on sexual function, gastrointestinal function, or bladder preservation may benefit from a more conservative initial therapeutic approach.
Using a Markov model, Shabbir Alibhai and colleagues develop a decision analysis comparing cystectomy with conservative treatment for high-risk superficial bladder cancer depending on patient age, comorbid conditions, and preferences.
Editors' Summary
Background.
Every year, about 67,000 people in the US develop bladder cancer. Like all cancers, bladder cancer arises when a single cell begins to grow faster than normal, loses its characteristic shape, and moves into surrounding tissues. Most bladder cancers develop from cells that line the bladder (“transitional” cells) and most are detected before they spread out of this lining. These superficial or T1 stage cancers can be removed by transurethral resection of bladder tumor (TURBT). The urologist (a specialist who treats urinary tract problems) passes a small telescope into the bladder through the urethra (the tube through which urine leaves the body) and removes the tumor. If the tumor cells look normal under a microscope (so-called normal histology), the cancer is unlikely to return; if they have lost their normal appearance, the tumor is given a “G3” histological grade, which indicates a high risk of recurrence.
Why Was This Study Done?
The best treatment for T1G3 bladder cancer remains controversial. Some urologists recommend immediate radical cystectomy— surgical removal of the bladder, the urethra, and other nearby organs. This treatment often provides a complete cure but can cause serious short-term health problems and affects long-term quality of life. Patients often develop sexual dysfunction or intestinal (gut) problems and sometimes find it hard to live with a reconstructed bladder. The other recommended treatment is immunotherapy with bacillus Calmette-Guérin (BCG, bacteria that are also used to vaccinate against tuberculosis). Long-term survival is not always as good with this conservative treatment but it is less likely than surgery to cause short-term illness or to reduce quality of life. In this study, the researchers have used decision analysis (a systematic evaluation of the important factors affecting a decision) to determine whether immediate cystectomy or conservative therapy is the optimal treatment for patients with T1G3 bladder cancer. Decision analysis allowed the researchers to account for quality-of-life factors while comparing the health benefits of each treatment for T1G3 bladder cancer.
What Did the Researchers Do and Find?
Using a decision analysis model called a Markov model, the researchers calculated the months of life gained, and the quality of life expected to result, from each of the two treatments. To estimate the life expectancy (LE) associated with each treatment, the researchers incorporated the published probabilities of various outcomes of each treatment into their model. To estimate quality-adjusted life expectancy (QALE, the number of years of good quality life), they incorporated “utilities,” measures of relative satisfaction with outcomes. (A utility of 1 represents perfect health; death is assigned a value of 0, and outcomes considered less than ideal, but better than death, fall in between). For a sexually potent 60-year-old man with bladder cancer but no other illnesses, the average LE predicted by the model was nearly eight months longer with immediate cystectomy than with conservative treatment (both LEs predicted by this model matched those seen in clinical trials); the average QALE with cystectomy was 4.2 months longer than with conservative treatment. Having additional diseases decreased the benefit of immediate cystectomy but the treatment still gave a longer LE until the patient reached 70 years old, when conservative treatment became better. For QALE, this change in optimal treatment appeared at age 65. Finally, conservative treatment gave a higher QALE than immediate cystectomy for patients concerned about preserving sexual function or averse to living with intestinal problems or a reconstructed bladder.
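To illustrate how a Markov cohort model turns transition probabilities and utilities into LE and QALE, here is a minimal three-state sketch; the states, annual transition probabilities, utilities, and cycle count are all invented for illustration and are not the values used in the authors' bladder cancer model.

```python
import numpy as np

# Toy 3-state Markov cohort model (well, progression, dead) run in 1-year cycles.
states = ["well", "progression", "dead"]
P = np.array([            # rows: from-state, cols: to-state (rows sum to 1)
    [0.90, 0.06, 0.04],
    [0.00, 0.75, 0.25],
    [0.00, 0.00, 1.00],
])
utility = np.array([0.85, 0.60, 0.0])   # quality weight for each state

def run(P, utility, cycles=40):
    dist = np.array([1.0, 0.0, 0.0])    # whole cohort starts in "well"
    le = qale = 0.0
    for _ in range(cycles):
        dist = dist @ P                 # one annual transition
        le += dist[:2].sum()            # life-years accrued this cycle
        qale += dist @ utility          # quality-adjusted life-years this cycle
    return le, qale                     # crude sums, no half-cycle correction

le, qale = run(P, utility)
print(f"LE = {le:.1f} y, QALE = {qale:.1f} y")
```

Comparing two such models, one parameterized for immediate cystectomy and one for conservative management, is what yields the LE and QALE differences reported above.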
What Do These Findings Mean?
As with all mathematical models, these results depend on the assumptions included in the model. In particular, because published probability and utility values are not available for some of the possible outcomes of the two treatments, the LE and QALE calculations could be inaccurate. Also, assigning numerical ratings to life experiences is generally something of a simplification, which could affect the reliability of the QALE (but not the LE) results. Nevertheless, these findings provide useful guidance for urologists trying to balance the benefits of immediate cystectomy or conservative treatment against the potential short-term and long-term effects of these treatments on patients' quality of life. Specifically, the results indicate that decisions on treatment for T1G3 bladder cancer should be based on a consideration of the patient's age and any coexisting disease coupled with detailed discussions with the patient about their attitudes regarding the possible health-related effects of cystectomy.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040284.
MedlinePlus encyclopedia page on bladder cancer (in English and Spanish)
Information for patients and professionals from the US National Cancer Institute on bladder cancer (in English and Spanish)
Information for patients on bladder cancer from the UK charity Cancerbackup
Online course on Decision Analysis in Health Care from George Mason University
doi:10.1371/journal.pmed.0040284
PMCID: PMC1989749  PMID: 17896857
11.  Contribution of H. pylori and Smoking Trends to US Incidence of Intestinal-Type Noncardia Gastric Adenocarcinoma: A Microsimulation Model 
PLoS Medicine  2013;10(5):e1001451.
Jennifer Yeh and colleagues examine the contribution of Helicobacter pylori and smoking trends to the incidence of past and future intestinal-type noncardia gastric adenocarcinoma.
Please see later in the article for the Editors' Summary
Background
Although gastric cancer has declined dramatically in the US, the disease remains the second leading cause of cancer mortality worldwide. A better understanding of the reasons for the decline can provide important insights into effective preventive strategies. We sought to estimate the contribution of risk factor trends to past and future intestinal-type noncardia gastric adenocarcinoma (NCGA) incidence.
Methods and Findings
We developed a population-based microsimulation model of intestinal-type NCGA and calibrated it to US epidemiologic data on precancerous lesions and cancer. The model explicitly incorporated the impact of Helicobacter pylori and smoking on disease natural history, for which birth cohort-specific trends were derived from the National Health and Nutrition Examination Survey (NHANES) and National Health Interview Survey (NHIS). Between 1978 and 2008, the model estimated that intestinal-type NCGA incidence declined 60% from 11.0 to 4.4 per 100,000 men, a <3% discrepancy from national statistics. H. pylori and smoking trends combined accounted for 47% (range = 30%–58%) of the observed decline. With no tobacco control, incidence would have declined only 56%, suggesting that the lower smoking initiation and higher cessation rates observed after the 1960s accelerated the relative decline in cancer incidence by 7% (range = 0%–21%). With continued risk factor trends, incidence is projected to decline an additional 47% between 2008 and 2040, the majority of which will be attributable to H. pylori and smoking (81%; range = 61%–100%). Limitations include modeling all other risk factors that influence gastric carcinogenesis as a single combined factor and restricting the analysis to men.
Conclusions
Trends in modifiable risk factors explain a significant proportion of the decline of intestinal-type NCGA incidence in the US, and are projected to continue. Although past tobacco control efforts have hastened the decline, full benefits will take decades to be realized, and further discouragement of smoking and reduction of H. pylori should be priorities for gastric cancer control efforts.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Cancer of the stomach (gastric cancer) is responsible for a tenth of all cancer deaths worldwide, with an estimated 700,000 people dying from this malignancy every year, making it the second most common cause of global cancer-related deaths after lung cancer. Unfortunately, projections of the global burden of this disease estimate that deaths from gastric cancer will double by 2030. Gastric cancer has a poor prognosis, with only a quarter of people with this type of cancer surviving more than five years. In order to reduce deaths, it is therefore of utmost importance to identify and reduce the modifiable risk factors associated with gastric cancer. Smoking and chronic gastric infection with the bacterium Helicobacter pylori (H. pylori) are known to be two common modifiable risk factors for gastric cancer, particularly for a type of gastric cancer called intestinal-type noncardia gastric adenocarcinoma (NCGA), which occurs at the distal end of the stomach and accounts for more than half of all cases of gastric cancer in US men.
Why Was This Study Done?
H. pylori initiates a precancerous process, and so infection with this bacterium can increase intestinal-type NCGA risk by as much as 6-fold, while smoking doubles cancer risk by accelerating the progression of existing lesions. Changes in these two risk factors over the past century (especially following the US Surgeon General's Report on Smoking and Health in 1964) have led to a dramatic decline in the rates of gastric cancer in US men. Understanding the combined effects of underlying risk factor trends on health outcomes for intestinal-type NCGA at the population level can help to predict future cancer trends and burden in the US. So in this study, the researchers used a mathematical model to estimate the contribution of H. pylori and smoking trends to the decline in intestinal-type NCGA incidence in US men.
What Did the Researchers Do and Find?
The researchers used birth cohorts derived from data in two national databases, the National Health and Nutrition Examination Survey (NHANES) and the National Health Interview Survey (NHIS), to develop a population-based model of intestinal-type NCGA. To ensure model predictions were consistent with epidemiologic data, the researchers calibrated the model to data on cancer and precancerous lesions and, using the model, projected population outcomes between 1978 and 2040 for a base-case scenario (in which all risk factor trends were allowed to vary over time). The researchers then evaluated alternative risk factor scenarios to provide insights into the potential benefit of past and future efforts to control gastric cancer.
Using these methods, the researchers estimated that the incidence of intestinal-type NCGA (standardized by age) fell from 11.0 to 4.4 per 100,000 men between 1978 and 2008, a drop of 60%. When the researchers incorporated only H. pylori prevalence and smoking trends into the model (both of which fell dramatically over the time period) they found that intestinal-type NCGA incidence fell by only 28% (from 12.7 to 9.2 per 100,000 men), suggesting that H. pylori and smoking trends are responsible for 47% of the observed decline. The researchers found that H. pylori trends alone were responsible for 43% of the decrease in cancer but smoking trends were responsible for only a 3% drop. The researchers also found evidence that after the 1960s, observed trends in lower smoking initiation and higher cessation accelerated the decline in intestinal-type NCGA incidence by 7%. Finally, the researchers found that intestinal-type NCGA incidence is projected to decline an additional 47% between 2008 and 2040 (4.4 to 2.3 per 100,000 men) with H. pylori and smoking trends accounting for more than 80% of the observed fall.
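The 47% attribution quoted above follows, up to rounding, from simple arithmetic on the two model runs described in this paragraph:

$$
\frac{\text{decline with only H. pylori and smoking trends}}{\text{observed decline}}
\;\approx\; \frac{(12.7-9.2)/12.7}{60\%} \;\approx\; \frac{28\%}{60\%} \;\approx\; 47\%.
$$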
What Do These Findings Mean?
These findings suggest that the decline in H. pylori infection rates, combined with falling smoking rates, accounts for almost half of the observed fall in rates of intestinal-type NCGA in US men between 1978 and 2008. Rates for this cancer are projected to continue to fall through 2040, with trends in both H. pylori infection and smoking accounting for more than 80% of the projected fall, highlighting the importance of the relationship between changes in risk factors over time and the achievement of long-term reductions in cancer rates. This study is limited by the assumptions made in the model, by examining only one type of gastric cancer, and by excluding women. Nevertheless, this modeling study highlights that continued efforts to reduce rates of smoking and H. pylori infection will help to reduce rates of gastric cancer.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001451.
The National Cancer Institute gives detailed information about gastric cancer
The Gastric Cancer Foundation has information on gastric cancer for patients and professionals
Cancer Research UK explains types of gastric cancer
doi:10.1371/journal.pmed.1001451
PMCID: PMC3660292  PMID: 23700390
12.  Schizophrenia and Violence: Systematic Review and Meta-Analysis 
PLoS Medicine  2009;6(8):e1000120.
Seena Fazel and colleagues investigate the association between schizophrenia and other psychoses and violence and violent offending, and show that the increased risk appears to be partly mediated by substance abuse comorbidity.
Background
Although expert opinion has asserted that there is an increased risk of violence in individuals with schizophrenia and other psychoses, there is substantial heterogeneity between studies reporting risk of violence, and uncertainty over the causes of this heterogeneity. We undertook a systematic review of studies that report on associations between violence and schizophrenia and other psychoses. In addition, we conducted a systematic review of investigations that reported on risk of homicide in individuals with schizophrenia and other psychoses.
Methods and Findings
Bibliographic databases and reference lists were searched from 1970 to February 2009 for studies that reported on risks of interpersonal violence and/or violent criminality in individuals with schizophrenia and other psychoses compared with general population samples. These data were meta-analysed and odds ratios (ORs) were pooled using random-effects models. Ten demographic and clinical variables were extracted from each study to test for any observed heterogeneity in the risk estimates. We identified 20 individual studies reporting data from 18,423 individuals with schizophrenia and other psychoses. In men, ORs for the comparison of violence in those with schizophrenia and other psychoses with those without mental disorders varied from 1 to 7 with substantial heterogeneity (I2 = 86%). In women, ORs ranged from 4 to 29 with substantial heterogeneity (I2 = 85%). The effect of comorbid substance abuse was marked, with a random-effects OR of 2.1 (95% confidence interval [CI] 1.7–2.7) without comorbidity and an OR of 8.9 (95% CI 5.4–14.7) with comorbidity (p<0.001 on metaregression). Risk estimates of violence in individuals with substance abuse (but without psychosis) were similar to those in individuals with psychosis and substance abuse comorbidity, and higher than those in all studies of psychosis irrespective of comorbidity. Choice of outcome measure, whether the sample was diagnosed with schizophrenia or with nonschizophrenic psychoses, study location, and study period were not significantly associated with risk estimates on subgroup or metaregression analysis. Further research is necessary to establish whether longitudinal designs are associated with lower risk estimates. The risk for homicide was increased in individuals with psychosis (with and without comorbid substance abuse) compared with general population controls (random-effects OR = 19.5, 95% CI 14.7–25.8).
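For readers unfamiliar with random-effects pooling, the sketch below implements the standard DerSimonian–Laird estimator on hypothetical study-level odds ratios (the numbers are invented and are not the studies in this review); it shows how a pooled OR, its confidence interval, and the I² heterogeneity statistic quoted in this abstract are computed.

```python
import numpy as np

def dersimonian_laird(or_estimates, ci_lower, ci_upper):
    """Pool odds ratios with a DerSimonian-Laird random-effects model."""
    y = np.log(or_estimates)                          # log-OR per study
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1 / se**2                                     # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                   # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    i2 = 0.0 if q == 0 else max(0.0, (q - (k - 1)) / q) * 100   # I^2 (%)
    w_re = 1 / (se**2 + tau2)                         # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    return (np.exp(y_re), np.exp(y_re - 1.96 * se_re),
            np.exp(y_re + 1.96 * se_re), i2)

ors = np.array([2.0, 3.5, 5.1, 1.4, 7.2])             # hypothetical study ORs
lo  = np.array([1.2, 2.0, 3.0, 0.8, 4.0])             # hypothetical 95% CI bounds
hi  = np.array([3.3, 6.1, 8.7, 2.5, 13.0])
pooled, ci_lo, ci_hi, i2 = dersimonian_laird(ors, lo, hi)
print(f"pooled OR {pooled:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f}), I2 = {i2:.0f}%")
```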
Conclusions
Schizophrenia and other psychoses are associated with violence and violent offending, particularly homicide. However, most of the excess risk appears to be mediated by substance abuse comorbidity. The risk in these patients with comorbidity is similar to that for substance abuse without psychosis. Public health strategies for violence reduction could consider focusing on the primary and secondary prevention of substance abuse.
Please see later in the article for Editors' Summary
Editors' Summary
Background
Schizophrenia is a lifelong, severe psychotic condition. One in 100 people will have at least one episode of schizophrenia during their lifetime. Symptoms include delusions (for example, patients believe that someone is plotting against them) and hallucinations (hearing or seeing things that are not there). In men, schizophrenia usually starts in the late teens or early 20s; women tend to develop schizophrenia a little later. The causes of schizophrenia include genetic predisposition, obstetric complications, illegal drug use (substance abuse), and experiencing traumatic life events. The condition can be treated with a combination of antipsychotic drugs and supportive therapy; hospitalization may be necessary in very serious cases to prevent self harm. Many people with schizophrenia improve sufficiently after treatment to lead satisfying lives although some patients need lifelong support and supervision.
Why Was This Study Done?
Some people believe that schizophrenia and other psychoses are associated with violence, a perception that is often reinforced by news reports and that contributes to the stigma associated with mental illness. However, mental health advocacy groups and many mental health clinicians argue that it is a myth that people with mental health problems are violent. Several large, population-based studies have examined this disputed relationship. But, although some studies found no increased risk of violence among patients with schizophrenia compared with the general population, others found a marked increase in violent offending in patients with schizophrenia. Here, the researchers try to resolve this variation (“heterogeneity”) in the conclusions reached in different studies by doing a systematic review (a study that uses predefined search criteria to identify all the research on a specific topic) and a meta-analysis (a statistical method for combining the results of several studies) of the literature on associations between violence and schizophrenia and other psychoses. They also explored the relationship between substance abuse and violence.
What Did the Researchers Do and Find?
By systematically searching bibliographic databases and reference lists, the researchers identified 20 studies that compared the risk of violence in people with schizophrenia and other psychoses and the risk of violence in the general population. They then used a “random effects model” (a statistical technique that allows for heterogeneity between studies) to investigate the association between schizophrenia and violence. For men with schizophrenia or other psychoses, the pooled odds ratio (OR) from the relevant studies (which showed moderate heterogeneity) was 4.7, which was reduced to 3.8 once adjustment was made for socio-economic factors. That is, a man with schizophrenia was four to five times as likely to commit a violent act as a man in the general population. For women, the equivalent pooled OR was 8.2 but there was a much greater variation between the ORs in the individual studies than in the studies that involved men. The researchers then used “meta-regression” to investigate the heterogeneity between the studies. This analysis suggested that none of the study characteristics examined apart from co-occurring substance abuse could have caused the variation between the studies. Importantly the authors found that risk estimates of violence in people with substance abuse but no psychosis were similar to those in people with substance abuse and psychosis and higher than those in people with psychosis alone. Finally, although people with schizophrenia were nearly 20 times more likely to have committed murder than people in the general population, only one in 300 people with schizophrenia had killed someone, a similar risk to that seen in people with substance abuse.
What Do These Findings Mean?
These findings indicate that schizophrenia and other psychoses are associated with violence, but that the association is strongest in people with substance abuse, and that most of the excess risk of violence associated with schizophrenia and other psychoses is mediated by substance abuse. However, the increased risk in patients with comorbidity was similar to that seen with substance abuse in the absence of psychosis. A potential implication of this finding is that violence reduction strategies that focus on preventing substance abuse among both the general population and among people with psychoses might be more successful than strategies that solely target people with mental illnesses. However, the quality of the individual studies included in this meta-analysis limits the strength of its conclusions, and more research into the association between schizophrenia, substance abuse, and violence would help clarify how, and whether, strategies for violence reduction should be changed.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000120.
The US National Institute of Mental Health provides information about schizophrenia (in English and Spanish)
The UK National Health Service Choices Web site has information for patients and carers about schizophrenia
The MedlinePlus Encyclopedia has a page on schizophrenia; MedlinePlus provides links to other sources of information on schizophrenia and on psychotic disorders (in English and Spanish)
The Schizophrenia and Related Disorders Alliance of America provides information and support for people with schizophrenia and their families
The Time to Change Web site provides information about an English campaign to reduce the stigma associated with mental illness
The Schizophrenia Research Forum provides updated research news and commentaries for the scientific community
doi:10.1371/journal.pmed.1000120
PMCID: PMC2718581  PMID: 19668362
13.  Assessing Optimal Target Populations for Influenza Vaccination Programmes: An Evidence Synthesis and Modelling Study 
PLoS Medicine  2013;10(10):e1001527.
Marc Baguelin and colleagues use virological, clinical, epidemiological, and behavioral data to estimate how policies for influenza vaccination programs may be optimized in England and Wales.
Please see later in the article for the Editors' Summary
Background
Influenza vaccine policies that maximise health benefit through efficient use of limited resources are needed. Generally, influenza vaccination programmes have targeted individuals 65 y and over and those at risk, according to World Health Organization recommendations. We developed methods to synthesise the multiplicity of surveillance datasets in order to evaluate how changing target populations in the seasonal vaccination programme would affect infection rate and mortality.
Methods and Findings
Using a contemporary evidence-synthesis approach, we use virological, clinical, epidemiological, and behavioural data to develop an age- and risk-stratified transmission model that reproduces the strain-specific behaviour of influenza over 14 seasons in England and Wales, having accounted for the vaccination uptake over this period. We estimate the reduction in infections and deaths achieved by the historical programme compared with no vaccination, and the reduction had different policies been in place over the period. We find that the current programme has averted 0.39 (95% credible interval 0.34–0.45) infections per dose of vaccine and 1.74 (1.16–3.02) deaths per 1,000 doses. Targeting transmitters by extending the current programme to 5–16-y-old children would increase the efficiency of the total programme, resulting in an overall reduction of 0.70 (0.52–0.81) infections per dose and 1.95 (1.28–3.39) deaths per 1,000 doses. In comparison, choosing the next group most at risk (50–64-y-olds) would prevent only 0.43 (0.35–0.52) infections per dose and 1.77 (1.15–3.14) deaths per 1,000 doses.
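The per-dose figures above are simply ratios of averted events (the difference between model runs with and without the programme) to doses delivered. With hypothetical round totals chosen only to reproduce the quoted rates, a programme delivering 10 million doses that averts 3.9 million infections and 17,400 deaths gives

$$
\frac{3.9\times10^{6}\ \text{infections averted}}{10^{7}\ \text{doses}} = 0.39
\qquad\text{and}\qquad
\frac{17{,}400\ \text{deaths averted}}{10{,}000\ \text{thousand-doses}} = 1.74\ \text{per 1,000 doses.}
$$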
Conclusions
This study proposes a framework to integrate influenza surveillance data into transmission models. Application to data from England and Wales confirms the role of children as key infection spreaders. The most efficient use of vaccine to reduce overall influenza morbidity and mortality is thus to target children in addition to older adults.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every winter, millions of people catch influenza, a viral infection of the airways. Most infected individuals recover quickly, but seasonal influenza outbreaks (epidemics) kill about half a million people annually. In countries with advanced health systems, these deaths occur mainly among elderly people and among individuals with long-term illnesses such as asthma and heart disease that increase the risk of complications occurring after influenza virus infection. Epidemics of influenza occur because small but frequent changes in the influenza virus mean that an immune response produced one year through infection provides only partial protection against influenza the following year. Annual immunization with a vaccine that contains killed influenza viruses of the major circulating strains can greatly reduce a person's risk of catching influenza by preparing the immune system to respond quickly when challenged by a live influenza virus. Consequently, many countries run seasonal influenza vaccination programs that, in line with World Health Organization recommendations, target individuals 65 years old and older and people in high-risk groups.
Why Was This Study Done?
Is this approach the best use of available resources? Might, for example, vaccination of children—the main transmitters of influenza—provide more benefit to the whole population than vaccination of elderly people? Vaccination of children would not directly prevent as many influenza-related deaths as vaccination of elderly people, but it might indirectly prevent deaths in elderly adults by inducing herd immunity—vaccination of a large part of a population can protect unvaccinated members of the population by reducing the chances of an infection spreading. Policy makers need to know whether a change to an influenza vaccination program is likely to provide additional population benefits before altering the program. In this evidence synthesis and modeling study, the researchers combine (synthesize) longitudinal influenza surveillance datasets (data collected over time) from England and Wales, develop a mathematical model for influenza transmission based on these data using a Bayesian statistical approach, and use the model to evaluate the impact on influenza infections and deaths of changes to the seasonal influenza vaccination program in England and Wales.
What Did the Researchers Do and Find?
The researchers developed an influenza transmission model using clinical data on influenza-like illness consultations collected in a primary care surveillance scheme for each week of 14 influenza seasons in England and Wales, virological information on respiratory viruses detected in a subset of patients presenting with clinically suspected influenza, and data on vaccination coverage in the whole population (epidemiological data). They also incorporated data on social contacts (behavioral data) and on immunity to influenza viruses in the population (seroepidemiological data) into their model. To estimate the impact of potential changes to the current vaccination strategy in England and Wales, the researchers used their model, which replicated the patterns of disease observed in the surveillance data, to run simulated epidemics for each influenza season and for three strains of influenza virus under various vaccination scenarios. Compared to no vaccination, the current program (vaccination of people 65 years old and older and people in high-risk groups) averted 0.39 infections per dose of vaccine and 1.74 deaths per 1,000 doses. Notably, the model predicted that extension of the program to target 5–16-year-old children would increase the efficiency of the program and would avert 0.70 infections per dose and 1.95 deaths per 1,000 doses.
What Do These Findings Mean?
The finding that the transmission model developed by the researchers closely fit the available surveillance data suggests that the model should be able to predict what would have happened in England and Wales over the study period if an alternative vaccination regimen had been in place. The accuracy of such predictions may be limited, however, because the vaccination model is based on a series of simplifying assumptions. Importantly, given that influenza vaccination for children is being rolled out in England and Wales from September 2013, the model confirms that children are key spreaders of influenza and suggests that a vaccination program targeting children will reduce influenza infections and potentially influenza deaths in the whole population. More generally, the findings of this study support wider adoption of national vaccination strategies designed to block influenza transmission and to target those individuals most at risk from the complications of influenza infection.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001527.
The UK National Health Service Choices website provides information for patients about seasonal influenza and about vaccination; Public Health England (formerly the Health Protection Agency) provides information on influenza surveillance in the UK, including information about the primary care surveillance database used in this study
The World Health Organization provides information on seasonal influenza (in several languages)
The European Influenzanet is a system to monitor the activity of influenza-like illness with the aid of volunteers via the Internet
The US Centers for Disease Control and Prevention also provides information for patients and health professionals on all aspects of seasonal influenza, including information about vaccination and about the US influenza surveillance system; its website contains a short video about personal experiences of influenza
Flu.gov, a US government website, provides access to information on seasonal influenza and vaccination
MedlinePlus has links to further information about influenza and about immunization (in English and Spanish)
doi:10.1371/journal.pmed.1001527
PMCID: PMC3793005  PMID: 24115913
14.  Human metabolic profiles are stably controlled by genetic and environmental variation 
A comprehensive variation map of the human metabolome identifies genetic and stable-environmental sources as major drivers of metabolite concentrations. The data suggest that sample sizes of a few thousand are sufficient to detect metabolite biomarkers predictive of disease.
We designed a longitudinal twin study to characterize the genetic, stable-environmental, and longitudinally fluctuating influences on metabolite concentrations in two human biofluids—urine and plasma—focusing specifically on the representative subset of metabolites detectable by 1H nuclear magnetic resonance (1H NMR) spectroscopy. We identified widespread genetic and stable-environmental influences on the urine and plasma metabolomes, with on average 30% and 42% of variation, respectively, attributable to familial sources, and 47% and 60% attributable to longitudinally stable sources. Ten of the metabolites annotated in the study are estimated to have >60% familial contribution to their variation in concentration. Our findings have implications for the design and interpretation of 1H NMR-based molecular epidemiology studies. On the basis of the stable component of variation quantified in the current paper, we specified a model of disease association under which we inferred that sample sizes of a few thousand should be sufficient to detect disease-predictive metabolite biomarkers.
Metabolites are small molecules involved in biochemical processes in living systems. Their concentration in biofluids, such as urine and plasma, can offer insights into the functional status of biological pathways within an organism, and reflect input from multiple levels of biological organization—genetic, epigenetic, transcriptomic, and proteomic—as well as from environmental and lifestyle factors. Metabolite levels have the potential to indicate a broad variety of deviations from the ‘normal' physiological state, such as those that accompany a disease, or an increased susceptibility to disease. A number of recent studies have demonstrated that metabolite concentrations can be used to diagnose disease states accurately. A more ambitious goal is to identify metabolite biomarkers that are predictive of future disease onset, providing the possibility of intervention in susceptible individuals.
If an extreme concentration of a metabolite is to serve as an indicator of disease status, it is usually important to know the distribution of metabolite levels among healthy individuals. It is also useful to characterize the sources of that observed variation in the healthy population. A proportion of that variation—the heritable component—is attributable to genetic differences between individuals, potentially at many genetic loci. An effective, molecular indicator of a heritable, complex disease is likely to have a substantive heritable component. Non-heritable biological variation in metabolite concentrations can arise from a variety of environmental influences, such as dietary intake, lifestyle choices, general physical condition, composition of gut microflora, and use of medication. Variation across a population in stable-environmental influences leads to long-term differences between individuals in their baseline metabolite levels. Dynamic environmental pressures lead to short-term fluctuations within an individual about their baseline level. A metabolite whose concentration changes substantially in response to short-term pressures is relatively unlikely to offer long-term prediction of disease. In summary, the potential suitability of a metabolite to predict disease is reflected by the relative contributions of heritable and stable/unstable-environmental factors to its variation in concentration across the healthy population.
Studies involving twins are an established technique for quantifying the heritable component of phenotypes in human populations. Monozygotic (MZ) twins share the same DNA genome-wide, while dizygotic (DZ) twins share approximately half their inherited DNA, as do ordinary siblings. By comparing the average extent of phenotypic concordance within MZ pairs to that within DZ pairs, it is possible to quantify the heritability of a trait, and also to quantify the familiality, which refers to the combination of heritable and common-environmental effects (i.e., environmental influences shared by twins in a pair). In addition to incorporating twins into the study design, it is useful to quantify the phenotype in some individuals at multiple time points. The longitudinal aspect of such a study allows environmental effects to be decomposed into those that affect the phenotype over the short term and those that exert stable influence.
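The MZ/DZ comparison described above is usually formalized with the classical ACE decomposition; the authors' bespoke longitudinal methods are more elaborate, but the core identities give a feel for how heritability and familiality are read off the twin correlations. Writing a², c², and e² for the additive genetic, common-environmental, and unique-environmental shares of variance,

$$
r_{\mathrm{MZ}} = a^{2} + c^{2}, \qquad r_{\mathrm{DZ}} = \tfrac{1}{2}a^{2} + c^{2},
$$

so that

$$
a^{2} = 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}), \qquad
c^{2} = 2\,r_{\mathrm{DZ}} - r_{\mathrm{MZ}}, \qquad
\text{familiality} = a^{2} + c^{2} = r_{\mathrm{MZ}}.
$$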
For the current study, urine and blood samples were collected from a cohort of MZ and DZ twins, with some twins donating samples on two occasions several months apart. Samples were analysed by 1H nuclear magnetic resonance (1H NMR) spectroscopy—an untargeted, discovery-driven technique for quantifying metabolite concentrations in biological samples. The application of 1H NMR to a biological sample creates a spectrum, made up of multiple peaks, with each peak's size quantitatively representing the concentration of its corresponding hydrogen-containing metabolite.
In each biological sample in our study, we extracted a full set of peaks, and thereby quantified the concentrations of all common plasma and urine metabolites detectable by 1H NMR. We developed bespoke statistical methods to decompose the observed concentration variation at each metabolite peak into that originating from familial, individual-environmental, and unstable-environmental sources.
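For the longitudinal part of the decomposition, a much simpler sketch than the bespoke methods used in the study is to split repeated measurements into between-person (stable) and within-person (unstable) variance. The function below assumes exactly two visits per person and lumps measurement noise into the unstable component.

    import numpy as np

    def stable_unstable_split(visit1, visit2):
        # visit1, visit2: levels of one metabolite peak for the same individuals
        # measured at two time points several months apart.
        within_var = 0.5 * np.mean((visit1 - visit2) ** 2)   # unstable (within-person) variance
        total_var = np.var(np.concatenate([visit1, visit2]), ddof=1)
        stable_var = max(total_var - within_var, 0.0)
        return stable_var / total_var, within_var / total_var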
We quantified the variability landscape across all common metabolite peaks in the urine and plasma 1H NMR metabolomes. We annotated a subset of peaks with a total of 65 metabolites; the variance decompositions for these are shown in Figure 1. Ten metabolites' concentrations were estimated to have familial contributions in excess of 60%. The average proportion of stable variation across all extracted metabolite peaks was estimated to be 47% in the urine samples and 60% in the plasma samples; the average estimated familiality was 30% for urine and 42% for plasma. These results comprise the first quantitative variation map of the 1H NMR metabolome. The identification and quantification of substantive widespread stability provides support for the use of these biofluids in molecular epidemiology studies. On the basis of our findings, we performed power calculations for a hypothetical study searching for predictive disease biomarkers among 1H NMR-detectable urine and plasma metabolites. Our calculations suggest that sample sizes of 2,000–5,000 should allow reliable identification of disease-predictive metabolite concentrations explaining 5–10% of disease risk, while greater sample sizes of 5,000–20,000 would be required to identify metabolite concentrations explaining 1–2% of disease risk.
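The power calculation below is a generic sketch of the kind of sample-size reasoning summarised above, based on Fisher's z-transformation of a biomarker–outcome correlation. The study's own method additionally handles the binary disease outcome and the unstable share of metabolite variance, so these numbers only approximate the published figures; the significance threshold of 10^-4, standing in for a metabolome-wide correction, is an assumption.

    import numpy as np
    from scipy.stats import norm

    def power_for_correlation(n, variance_explained, alpha=1e-4):
        # Approximate power to detect a biomarker whose squared correlation with
        # the outcome equals `variance_explained`, at significance level `alpha`.
        z = np.arctanh(np.sqrt(variance_explained))   # Fisher z-transform of the true correlation
        se = 1.0 / np.sqrt(n - 3)
        z_crit = norm.ppf(1 - alpha / 2)
        return (1 - norm.cdf(z_crit - z / se)) + norm.cdf(-z_crit - z / se)

    # e.g. compare power_for_correlation(3000, 0.05) with power_for_correlation(3000, 0.01)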
1H Nuclear Magnetic Resonance spectroscopy (1H NMR) is increasingly used to measure metabolite concentrations in sets of biological samples for top-down systems biology and molecular epidemiology. For such purposes, knowledge of the sources of human variation in metabolite concentrations is valuable, but currently sparse. We conducted and analysed a study to create such a resource. In our unique design, identical and non-identical twin pairs donated plasma and urine samples longitudinally. We acquired 1H NMR spectra on the samples, and statistically decomposed variation in metabolite concentration into familial (genetic and common-environmental), individual-environmental, and longitudinally unstable components. We estimate that stable variation, comprising familial and individual-environmental factors, accounts on average for 60% (plasma) and 47% (urine) of biological variation in 1H NMR-detectable metabolite concentrations. Clinically predictive metabolic variation is likely nested within this stable component, so our results have implications for the effective design of biomarker-discovery studies. We provide a power-calculation method which reveals that sample sizes of a few thousand should offer sufficient statistical precision to detect 1H NMR-based biomarkers quantifying predisposition to disease.
doi:10.1038/msb.2011.57
PMCID: PMC3202796  PMID: 21878913
biomarker; 1H nuclear magnetic resonance spectroscopy; metabolome-wide association study; top-down systems biology; variance decomposition
15.  Obstructive Sleep Apnea and Risk of Cardiovascular Events and All-Cause Mortality: A Decade-Long Historical Cohort Study 
PLoS Medicine  2014;11(2):e1001599.
Tetyana Kendzerska and colleagues explore the association between physiological measures of obstructive sleep apnea other than the apnea-hypopnea index and the risk of cardiovascular events.
Please see later in the article for the Editors' Summary
Background
Obstructive sleep apnea (OSA) has been reported to be a risk factor for cardiovascular (CV) disease. Although the apnea-hypopnea index (AHI) is the most commonly used measure of OSA, other less well studied OSA-related variables may be more pathophysiologically relevant and offer better prediction. The objective of this study was to evaluate the relationship between OSA-related variables and risk of CV events.
Methods and Findings
A historical cohort study was conducted using a clinical database and health administrative data. Adults referred for suspected OSA who underwent diagnostic polysomnography at the sleep laboratory at St Michael's Hospital (Toronto, Canada) between 1994 and 2010 were followed through provincial health administrative data (Ontario, Canada) until May 2011 to examine the occurrence of a composite outcome (myocardial infarction, stroke, congestive heart failure, revascularization procedures, or death from any cause). Cox regression models were used to investigate the association between baseline OSA-related variables and the composite outcome, controlling for traditional risk factors. The results were expressed as hazard ratios (HRs) and 95% CIs; for continuous variables, HRs compare the 75th and 25th percentiles. Over a median follow-up of 68 months, 1,172 (11.5%) of 10,149 participants experienced our composite outcome. In a fully adjusted model, OSA-related variables other than AHI were significant independent predictors: time spent with oxygen saturation <90% (9 minutes versus 0; HR = 1.50, 95% CI 1.25–1.79), sleep time (4.9 versus 6.4 hours; HR = 1.20, 95% CI 1.12–1.27), awakenings (35 versus 18; HR = 1.06, 95% CI 1.02–1.10), periodic leg movements (13 versus 0/hour; HR = 1.05, 95% CI 1.03–1.07), heart rate (70 versus 56 beats per minute [bpm]; HR = 1.28, 95% CI 1.19–1.37), and daytime sleepiness (HR = 1.13, 95% CI 1.01–1.28). The main study limitation was lack of information about continuous positive airway pressure (CPAP) adherence.
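Because the hazard ratios above compare the 75th with the 25th percentile of each continuous variable, a per-unit Cox coefficient has to be rescaled by the interquartile distance; the short sketch below shows that arithmetic. The coefficient in the final comment is back-calculated from the reported heart-rate HR purely for illustration and is not taken from the study's output.

    import numpy as np

    def interquartile_hr(beta_per_unit, values):
        # Hazard ratio comparing the 75th to the 25th percentile of a continuous
        # covariate, given its per-unit Cox log-hazard coefficient.
        q25, q75 = np.percentile(values, [25, 75])
        return np.exp(beta_per_unit * (q75 - q25))

    # e.g. with quartiles of 56 and 70 bpm, a coefficient of ~0.018 per bpm
    # reproduces the reported interquartile HR of about 1.28 for heart rate.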
Conclusion
OSA-related factors other than AHI were shown to be important predictors of the composite CV outcome and should be considered in future studies and clinical practice.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder, particularly among middle-aged and elderly people. It is characterized by apnea—a brief interruption in breathing that lasts at least 10 seconds—and hypopnea—a decrease of more than 50% in the amplitude of breathing that lasts at least 10 seconds, or a clear but smaller decrease in amplitude associated with either oxygen desaturation or an arousal. Patients with OSA experience numerous episodes of apnea and hypopnea during the night; severe OSA is defined as having 30 or more episodes per hour (an apnea-hypopnea index [AHI] of >30). These breathing interruptions occur when relaxation of the upper airway muscles decreases the airflow, which lowers the amount of oxygen in the blood. As a result, affected individuals frequently wake from deep sleep as they struggle to breathe. Symptoms of OSA include loud snoring and daytime sleepiness. Treatments include lifestyle changes such as losing weight (excess fat around the neck increases airway collapse) and smoking cessation. For severe OSA, doctors recommend continuous positive airway pressure (CPAP), in which a machine blows pressurized air through a face mask into the airway to keep it open.
Why Was This Study Done?
OSA can be life-threatening. Most directly, daytime sleepiness can cause accidents, but OSA is also associated with an increased risk of developing cardiovascular disease (CVD, disease that affects the heart and the circulation). To date, studies that have investigated the association between OSA and the risk of myocardial infarction (heart attack), congestive heart failure, stroke, and other CVDs have used the AHI to diagnose and categorize the severity of OSA. However, by focussing on AHI, clinicians and researchers may be missing opportunities to improve their ability to predict which patients are at the highest risk of CVD. In this historical cohort study, the researchers investigate the association between other OSA-related variables (for example, blood oxygen saturation and sleep fragmentation) and the risk of cardiovascular events and all-cause mortality (death). A historical cohort study examines the medical records of groups of individuals who have different characteristics at baseline for the subsequent occurrence of specific outcomes.
What Did the Researchers Do and Find?
The researchers used administrative data (including hospitalization records and physicians' claims for services supplied to patients) to follow up adults referred for suspected OSA who underwent diagnostic polysomnography (a sleep study) at a single Canadian hospital between 1994 and 2010. A database of the polysomnography results provided information on OSA-related variables for all the study participants. Over an average follow-up of about 6 years, 11.5% of the 10,149 participants were hospitalized for a myocardial infarction, stroke, or congestive heart failure, underwent a revascularization procedure (an intervention that restores the blood supply to an organ or tissue after CVD has blocked a blood vessel), or had died from any cause. After adjusting for multiple established risk factors for CVD such as smoking and age in Cox regression models (a statistical approach that examines associations between patient variables and outcomes), several OSA-related variables (but not AHI) were significant predictors of CVD. The strongest OSA-related predictor of cardiovascular events or all-cause mortality was total sleep time spent with oxygen saturation below 90%, which increased the risk of a cardiovascular event or death by 50%. Other statistically significant OSA-related predictors (predictors that were unlikely to be associated with the outcome through chance) of cardiovascular events or death included total sleep time, number of awakenings, frequency of periodic leg movements, heart rate, and daytime sleepiness.
What Do These Findings Mean?
These findings indicate that OSA-related factors other than AHI are important predictors of the composite outcome of a cardiovascular event or all-cause mortality. Indeed, although AHI was significantly associated with the researchers' composite outcome in an analysis that did not consider other established risk factors for CVD (“confounders”), the association became non-significant after controlling for potential confounders. The accuracy of these findings, which need to be confirmed in other settings, is likely to be limited by the lack of information available about the use of CPAP by study participants and by the lack of adjustment for some important confounders. Importantly, however, these findings suggest that OSA-related factors other than AHI should be considered as predictors of CVD in future studies and in clinical practice.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001599.
The US National Heart, Lung, and Blood Institute has information (including several videos) about obstructive sleep apnea (in English and Spanish), sleep studies, heart disease, and other cardiovascular diseases (some information in English and Spanish)
The UK National Health Service Choices website provides information (including personal stories) about sleep apnea and about cardiovascular disease
The not-for-profit American Sleep Apnea Association provides detailed information about sleep apnea for patients and health-care professionals, including personal stories about the condition
The MedlinePlus encyclopedia has pages on obstructive sleep apnea and on polysomnography; MedlinePlus provides links to further information and advice about obstructive sleep apnea, heart diseases, and vascular diseases (in English and Spanish)
doi:10.1371/journal.pmed.1001599
PMCID: PMC3913558  PMID: 24503600
16.  Modelling the Impact of Artemisinin Combination Therapy and Long-Acting Treatments on Malaria Transmission Intensity 
PLoS Medicine  2008;5(11):e226.
Background
Artemisinin derivatives used in recently introduced combination therapies (ACTs) for Plasmodium falciparum malaria significantly lower patient infectiousness and have the potential to reduce population-level transmission of the parasite. With the increased interest in malaria elimination, understanding the impact on transmission of ACT and other antimalarial drugs with different pharmacodynamics becomes a key issue. This study estimates the reduction in transmission that may be achieved by introducing different types of treatment for symptomatic P. falciparum malaria in endemic areas.
Methods and Findings
We developed a mathematical model to predict the potential impact on transmission outcomes of introducing ACT as first-line treatment for uncomplicated malaria in six areas of varying transmission intensity in Tanzania. We also estimated the impact that could be achieved by antimalarials with different efficacy, prophylactic time, and gametocytocidal effects. Rates of treatment, asymptomatic infection, and symptomatic infection in the six study areas were estimated using the model together with data from a cross-sectional survey of 5,667 individuals conducted prior to policy change from sulfadoxine-pyrimethamine to ACT. The effects of ACT and other drug types on gametocytaemia and infectiousness to mosquitoes were independently estimated from clinical trial data. Predicted percentage reductions in prevalence of infection and incidence of clinical episodes achieved by ACT were highest in the areas with low initial transmission. A 53% reduction in prevalence of infection was seen if 100% of current treatment was switched to ACT in the area where baseline slide-prevalence of parasitaemia was lowest (3.7%), compared to an 11% reduction in the highest-transmission setting (baseline slide prevalence = 57.1%). Estimated percentage reductions in incidence of clinical episodes were similar. The absolute size of the public health impact, however, was greater in the highest-transmission area, with 54 clinical episodes per 100 persons per year averted compared to five per 100 persons per year in the lowest-transmission area. High coverage was important. Reducing presumptive treatment through improved diagnosis substantially reduced the number of treatment courses required per clinical episode averted in the lower-transmission settings although there was some loss of overall impact on transmission. An efficacious antimalarial regimen with no specific gametocytocidal properties but a long prophylactic time was estimated to be more effective at reducing transmission than a short-acting ACT in the highest-transmission setting.
Conclusions
Our results suggest that ACTs have the potential for transmission reductions approaching those achieved by insecticide-treated nets in lower-transmission settings. ACT partner drugs and nonartemisinin regimens with longer prophylactic times could result in a larger impact in higher-transmission settings, although their long-term benefit must be evaluated in relation to the risk of development of parasite resistance.
Lucy Okell and colleagues predict the impact on transmission outcomes of ACT as first-line treatment for uncomplicated malaria in six areas of varying transmission intensity in Tanzania.
Editors' Summary
Background.
Plasmodium falciparum, a mosquito-borne parasite that causes malaria, kills nearly one million people every year. When an infected mosquito bites a person, it injects a life stage of the parasite called sporozoites, which invade human liver cells where they initially develop. The liver cells then release merozoites (another life stage of the parasite). These invade red blood cells where they multiply before bursting out and infecting more red blood cells, which can cause fever and damage vital organs. Some merozoites develop into gametocytes, which infect mosquitos when they take a blood meal. In the mosquito, the gametocytes give rise to sporozoites, thus completing the parasite's life cycle. Because malaria parasites are now resistant to many antimalarial drugs, the preferred first-line treatment for P. falciparum malaria in most countries is artemisinin combination therapy (ACT). Artemisinin derivatives are fast-acting antimalarial agents that, unlike previous first-line treatments, reduce the number of gametocytes in patients' blood, making them less infectious to mosquitos, and therefore have more potential to reduce malaria transmission. These compounds are used in combination with another antimalarial drug to reduce the chances of P. falciparum becoming resistant to either drug.
Why Was This Study Done?
Because malaria poses such a large global public-health burden, there is considerable national and international interest in eliminating it or at least minimizing its transmission. Malaria control agencies need to know how to choose between available types of ACT as well as other antimalarials so as to not only cure malaria illness but also prevent transmission as much as possible. The financial resources available to control malaria are limited, so for planning integrated transmission reduction programs it is important for policy makers to know what contribution their treatment policy could make in addition to other control strategies (for example, the provision of insecticide-treated bed nets to reduce mosquito bites) to reducing transmission. Furthermore, in areas with high levels of malaria, it is uncertain to what extent treatment can reduce transmission since many infected people are immune and do not suffer symptoms or seek health care, but continue to transmit to others. In this study, the researchers develop a mathematical model to predict the impact on malaria transmission of the introduction of ACT and alternative first-line treatments for malaria in six regions of Tanzania with different levels of malaria transmission.
What Did the Researchers Do and Find?
The researchers developed a “deterministic compartmental” model of malaria transmission in human and mosquito populations and included numerous variables likely to affect malaria transmission (variables were based on data collected in Tanzania just before the introduction of ACT). They then used the model to estimate the impact on malaria transmission of introducing ACT or other antimalarial drugs with different properties. The model predicted that the percentage reduction in the prevalence of infection (the fraction of the population with malaria) and the incidence of infection (the number of new cases in the population per year) associated with a 100% switch to ACT would be greater in areas with low initial transmission rates than in areas with high transmission rates. For example, in the area with the lowest initial transmission rates, the model predicted that the prevalence of infection would drop by 53%, but in the area with the highest initial transmission rate, the drop would be only 11%. However, because more people get malaria in high-transmission areas, the total number of malaria illness episodes prevented would be ten times higher in the area with highest transmission than in the area with lowest transmission. The model also predicted that, in areas with high transmission, long-acting treatments which protect patients from reinfection would reduce transmission more effectively than some common currently used ACT regimens which are gametocyte-killing but short-acting. Treatments which were both long-acting and gametocyte-killing were predicted to have the biggest impact across all settings.
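To make the modelling idea concrete, the toy deterministic compartmental sketch below shows how a treated fraction, reduced infectiousness on treatment, and a post-treatment prophylactic period enter such a model. The published model is far richer (immunity, mosquito dynamics, age structure), and every parameter name and value here is illustrative only.

    from scipy.integrate import solve_ivp

    def toy_malaria(t, y, beta, recov, treat_frac, treat_recov, rel_inf, proph_loss):
        # Compartments: susceptible, untreated infection, treated infection,
        # prophylaxis-protected (post-treatment). Proportions sum to 1.
        S, I_u, I_t, P = y
        foi = beta * (I_u + rel_inf * I_t)          # treated cases transmit less
        new_inf = foi * S
        dS = -new_inf + recov * I_u + proph_loss * P
        dI_u = (1 - treat_frac) * new_inf - recov * I_u
        dI_t = treat_frac * new_inf - treat_recov * I_t
        dP = treat_recov * I_t - proph_loss * P     # longer prophylaxis = smaller proph_loss
        return [dS, dI_u, dI_t, dP]

    sol = solve_ivp(toy_malaria, (0, 3650), [0.9, 0.1, 0.0, 0.0],
                    args=(0.03, 1 / 180, 0.6, 1 / 20, 0.2, 1 / 30))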
What Do These Findings Mean?
As with all mathematical models, the accuracy of the predictions made by this model depends on the many assumptions incorporated into the model. In addition, because data from Tanzania were fed into the model, its predictions are to some extent specific to the area. Nevertheless, the Tanzanian setting is typical of sub-Saharan malaria-affected areas, and the authors show that varying their assumptions and the data fed into the model within realistic ranges in most cases does not substantially change their overall conclusions. The findings in this study suggest that in low-transmission areas, provided ACT is widely used, it may reduce malaria transmission as effectively as the widespread use of insecticide-treated bed nets. The findings also suggest that the use of longer-acting regimens with or without artemisinin components might be a good way to reduce transmission in high-transmission areas, provided the development of parasite resistance can be avoided. More generally, these findings suggest that public-health officials need to take the properties of antimalarial drugs into account together with the levels of transmission in the area when designing policies in order to achieve the highest impact on malaria transmission.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050226.
This study is further discussed in a PLoS Medicine Perspective by Maciej Boni and colleagues
The MedlinePlus encyclopedia contains a page on malaria (in English and Spanish)
Information is available from the World Health Organization on malaria (in several languages)
The US Centers for Disease Control and Prevention provides information on malaria (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on its approach to the global control of malaria, on artemisinin-based combination therapies, and on malaria in Tanzania
doi:10.1371/journal.pmed.0050226
PMCID: PMC2586356  PMID: 19067479
17.  Epidemiological Pathology of Dementia: Attributable-Risks at Death in the Medical Research Council Cognitive Function and Ageing Study 
PLoS Medicine  2009;6(11):e1000180.
Researchers from the Medical Research Council Cognitive Function and Ageing Neuropathology Study carry out an analysis of brain pathologies contributing to dementia, within a cohort of elderly individuals in the UK who agreed to brain donation.
Background
Dementia drug development aims to modulate pathological processes that cause clinical syndromes. Population data (epidemiological neuropathology) will help to model and predict the potential impact of such therapies on dementia burden in older people. Presently this can only be explored through post mortem findings. We report the attributable risks (ARs) for dementia at death for common age-related degenerative and vascular pathologies, and other factors, in the MRC Cognitive Function and Ageing Study (MRC CFAS).
Methods and Findings
A multicentre, prospective, longitudinal study of older people in the UK was linked to a brain donation programme. Neuropathological examination of 456 consecutive brain donations assessed degenerative and vascular pathologies. Logistic regression modelling, with bootstrapping and sensitivity analyses, was used to estimate AR at death for dementia for specific pathologies and other factors. The main contributors to AR at death for dementia in MRC CFAS were age (18%), small brain (12%), neocortical neuritic plaques (8%) and neurofibrillary tangles (11%), small vessel disease (12%), multiple vascular pathologies (9%), and hippocampal atrophy (10%). Other significant factors included cerebral amyloid angiopathy (7%) and Lewy bodies (3%).
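A minimal sketch of how an attributable risk can be read off a fitted logistic model is given below: the modelled prevalence of dementia is compared with the prevalence predicted when one pathology is set to its reference level. The column names, formula, and bootstrap comment are illustrative assumptions, not the study's exact implementation.

    import statsmodels.formula.api as smf

    def attributable_risk(df, factor="neuritic_plaques",
                          formula="dementia ~ age + neuritic_plaques + small_vessel_disease"):
        model = smf.logit(formula, data=df).fit(disp=0)
        p_observed = model.predict(df).mean()                       # modelled prevalence as observed
        p_removed = model.predict(df.assign(**{factor: 0})).mean()  # prevalence with the factor at reference level
        return (p_observed - p_removed) / p_observed

    # Bootstrap CI: ars = [attributable_risk(df.sample(len(df), replace=True)) for _ in range(1000)]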
Conclusions
Such AR estimates cannot be derived from the living population; rather they estimate the relative contribution of specific pathologies to dementia at death. We found that multiple pathologies determine the overall burden of dementia. The impact of therapy targeted to a specific pathology may be profound when the dementia is relatively “pure,” but may be less impressive for the majority with mixed disease, and in terms of the population. These data justify a range of strategies, and combination therapies, to combat the degenerative and vascular determinants of cognitive decline and dementia.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Losing one's belongings and forgetting people's names is often a normal part of aging. But increasing forgetfulness can also be a sign of dementia, a group of symptoms caused by several disorders that affect the structure of the brain. The commonest form of dementia is Alzheimer disease. In this, protein clumps called plaques and neurofibrillary tangles form in the brain and cause its degeneration. Vascular dementia, in which problems with blood circulation deprive parts of the brain of oxygen, is also common. People with dementia have problems with two or more “cognitive” functions—thinking, language, memory, understanding, and judgment. As the disease progresses, they gradually lose their ability to deal with normal daily activities until they need total care, their personality often changes, and they may become agitated or aggressive. Dementia is rare before the age of 65 years but about a quarter of people over 85 years old have dementia. Because more people live to a ripe old age these days, the number of people with dementia is increasing. According to the latest estimates, about 35 million people now have dementia and by 2050, 115 million may have the disorder.
Why Was This Study Done?
There is no cure for dementia but many drugs designed to modulate specific abnormal (pathological) changes in the brain that can cause the symptoms of dementia are being developed. To assess the likely impact of these potentially expensive new therapies, experts need to know what proportion of dementia is associated with each type of brain pathology. Although some brain changes can be detected in living brains with techniques such as computed tomography brain scans, most brain changes can only be studied in brains taken from people after death (post mortem brains). In this study, which is part of the UK Medical Research Council Cognitive Function and Ageing Study (MRC CFAS), the researchers look for associations between dementia in elderly people and pathological changes in their post mortem brains and estimate the attributable-risk (AR) for dementia at death associated with specific pathological features in the brain. That is, they estimate the proportion of dementia directly attributable to each type of pathology.
What Did the Researchers Do and Find?
Nearly 20 years ago, the MRC CFAS interviewed more than 18,000 people aged 65 years or older recruited at six sites in England and Wales to determine their cognitive function and their ability to deal with daily activities. 20% of the participants, which included people with and without cognitive impairment, were then assessed in more detail and invited to donate their brains for post mortem examination. As of 2004, 456 individuals had donated their brains. The dementia status of these donors was established using data from their assessment interviews and death certificates, and from interviews with relatives and carers, and their brains were carefully examined for abnormal changes. The researchers then used statistical methods to estimate the AR for dementia at death associated with various abnormal brain changes. The main contributors to AR for dementia at death included age (18% of dementia at death was attributable to this factor), plaques (8%), and neurofibrillary tangles (11%) in a brain region called the neocortex, small blood vessel disease (12%), and multiple abnormal changes in blood vessels (9%).
What Do These Findings Mean?
These findings suggest that multiple abnormal brain changes determine the overall burden of dementia. Importantly, they also suggest that dementia is often associated with mixed pathological changes—many people with dementia had brain changes consistent with both Alzheimer disease and vascular dementia. Because people with dementia live for variable lengths of time during which the abnormal changes in their brain are likely to alter, it may be difficult to extrapolate these findings to living populations of elderly people. Furthermore, only a small percentage of the MRC CFAS participants have donated their brains so the findings of this study may not apply to the general population. Nevertheless, these findings suggest that the new therapies currently under development may do little to reduce the overall burden of dementia because most people's dementia involves multiple pathologies. Consequently, it may be necessary to develop a range of strategies and combination therapies to deal with the ongoing dementia epidemic.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000180.
The US National Institute on Aging provides information for patients and carers about forgetfulness and about Alzheimer disease (in English and Spanish)
The US National Institute of Neurological Disorders and Stroke provides information about dementia (in English and Spanish)
The UK National Health Service Choices Web site also provides detailed information for patients and their carers about dementia and about Alzheimer disease
MedlinePlus provides links to additional resources about dementia and Alzheimer disease (in English and Spanish)
More information about the UK Medical Research Council Cognitive Function and Ageing Study (MRC CFAS) is available
doi:10.1371/journal.pmed.1000180
PMCID: PMC2765638  PMID: 19901977
18.  Hedging against Antiviral Resistance during the Next Influenza Pandemic Using Small Stockpiles of an Alternative Chemotherapy 
PLoS Medicine  2009;6(5):e1000085.
Mathematically simulating an influenza pandemic, Joseph Wu and colleagues predict that using a secondary antiviral drug early in local epidemics would reduce global emergence of resistance to the primary stockpiled drug.
Background
The effectiveness of single-drug antiviral interventions to reduce morbidity and mortality during the next influenza pandemic will be substantially weakened if transmissible strains emerge which are resistant to the stockpiled antiviral drugs. We developed a mathematical model to test the hypothesis that a small stockpile of a secondary antiviral drug could be used to mitigate the adverse consequences of the emergence of resistant strains.
Methods and Findings
We used a multistrain stochastic transmission model of influenza to show that the spread of antiviral resistance can be significantly reduced by deploying a small stockpile (1% population coverage) of a secondary drug during the early phase of local epidemics. We considered two strategies for the use of the secondary stockpile: early combination chemotherapy (ECC; individuals are treated with both drugs in combination while both are available); and sequential multidrug chemotherapy (SMC; individuals are treated only with the secondary drug until it is exhausted, then treated with the primary drug). We investigated all potentially important regions of unknown parameter space and found that both ECC and SMC reduced the cumulative attack rate (AR) and the resistant attack rate (RAR) unless the probability of emergence of resistance to the primary drug pA was so low (less than 1 in 10,000) that resistance was unlikely to be a problem or so high (more than 1 in 20) that resistance emerged as soon as primary drug monotherapy began. For example, when the basic reproductive number was 1.8 and 40% of symptomatic individuals were treated with antivirals, AR and RAR were 67% and 38% under monotherapy if pA = 0.01. If the probability of resistance emergence for the secondary drug was also 0.01, then SMC reduced AR and RAR to 57% and 2%. The effectiveness of ECC was similar if combination chemotherapy reduced the probabilities of resistance emergence by at least ten times. We extended our model using travel data between 105 large cities to investigate the robustness of these resistance-limiting strategies at a global scale. We found that as long as populations that were the main source of resistant strains employed these strategies (SMC or ECC), then those same strategies were also effective for populations far from the source even when some intermediate populations failed to control resistance. In essence, through the existence of many wild-type epidemics, the interconnectedness of the global network dampened the international spread of resistant strains.
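The difference between the two stockpile strategies lies in the drug-allocation rule applied to each treated case; a minimal sketch of that rule (not the full multistrain stochastic transmission model) is given below, with the stockpile counters as purely illustrative state.

    def allocate_treatment(strategy, primary_left, secondary_left):
        # Returns the drug(s) given to the next treated case plus updated stockpiles.
        if strategy == "SMC":        # sequential multidrug chemotherapy
            if secondary_left > 0:
                return ["secondary"], primary_left, secondary_left - 1
            return ["primary"], primary_left - 1, secondary_left
        if strategy == "ECC":        # early combination chemotherapy
            if primary_left > 0 and secondary_left > 0:
                return ["primary", "secondary"], primary_left - 1, secondary_left - 1
            return ["primary"], primary_left - 1, secondary_left
        raise ValueError("unknown strategy")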
Conclusions
Our results indicate that the augmentation of existing stockpiles of a single anti-influenza drug with smaller stockpiles of a second drug could be an effective and inexpensive epidemiological hedge against antiviral resistance if either SMC or ECC were used. Choosing between these strategies will require additional empirical studies. Specifically, the choice will depend on the safety of combination therapy and the synergistic effect of one antiviral in suppressing the emergence of resistance to the other antiviral when both are taken in combination.
Editors' Summary
Background
Every winter, millions of people catch influenza—a viral infection of the airways—and about half a million people die as a result. These seasonal “epidemics” occur because small but frequent changes in the viral proteins (antigens) to which the human immune system responds mean that an immune response produced one year provides only partial protection against influenza the next year. Influenza viruses also occasionally appear that contain major antigenic changes. Human populations have little or no immunity to such viruses so they can start deadly pandemics (global epidemics). The 1918–19 influenza pandemic, for example, killed 40–50 million people. The last influenza pandemic was in 1968 and many experts fear the next pandemic might strike soon. To prepare for such an eventuality, scientists are trying to develop vaccines that might work against an emerging pandemic influenza virus. In addition, many governments are stockpiling antiviral drugs for the large-scale treatment of influenza and for targeted prophylaxis (prevention). Antiviral drugs prevent the replication of the influenza virus, thereby shortening the length of time that an infected person is ill and protecting uninfected people against infection. Their widespread use should, therefore, slow the spread of pandemic influenza.
Why Was This Study Done?
Although some countries are stockpiling more than one antiviral drug in preparation for an influenza pandemic, many countries are investing in large stockpiles of a single drug, oseltamivir (Tamiflu). But influenza viruses can become resistant to antiviral drugs and the widespread use of a single drug (the primary antiviral) is likely to increase the risk that a resistant strain will emerge. If this did happen, the ability of antiviral drugs to slow the spread of a pandemic would be greatly reduced. In this study, the researchers use a mathematical model of influenza transmission to investigate whether a small stockpile of a secondary antiviral drug could be used to prevent the adverse consequences of the emergence of antiviral-resistant pandemic influenza viruses.
What Did the Researchers Do and Find?
The researchers used their model of influenza transmission to predict how two strategies for the use of a small stockpile of a secondary antiviral might affect the cumulative attack rate (AR; the final proportion of the population infected) and the resistant attack rate (RAR; the proportion of the population infected with an influenza virus strain resistant to the primary drug, a measure that may reflect the impact of antiviral resistance on death rates during a pandemic). In a large, closed population, the model predicted that both “early combination chemotherapy” (treatment with both drugs together while both are available) and “sequential multi-drug chemotherapy” (treatment with the secondary drug until it is exhausted, then treatment with the primary drug) would reduce the AR and the RAR compared with monotherapy unless the probability of emergence of resistance to the primary drug was very low (resistance rarely occurred) or very high (resistance emerged as soon as the primary drug was used). The researchers then introduced international travel data into their model to investigate whether these two strategies could limit the development of antiviral resistance at a global scale. This analysis predicted that, provided the population that was the main source of resistant strains used one of the strategies, both strategies in distant, subsequently affected populations would be able to reduce the AR and RAR even if some intermediate populations failed to control resistance.
What Do These Findings Mean?
As with all mathematical models, the accuracy of these predictions depends on the assumptions used to build the model and the data fed into it. Nevertheless, these findings suggest that both of the proposed strategies for the use of small stockpiles of secondary antiviral drugs should limit the spread of drug-resistant influenza virus more effectively than monotherapy with the primary antiviral drug. Thus, small stockpiles of secondary antivirals could provide a hedge against the development of antiviral resistance during the early phases of an influenza pandemic and are predicted to be a worthwhile public-health investment. However, note the researchers, experimental studies—including determinations of which drugs are safe to use together, and how effectively a given combination prevents resistance compared with each drug used alone—are now needed to decide which of the strategies to recommend in real-life situations. In the context of the 2009 global spread of swine flu, these findings suggest that public health officials might consider zanamivir (Relenza) as the secondary antiviral drug for resistance-limiting strategies in countries that have stockpiled oseltamivir.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000085.
The US Centers for Disease Control and Prevention provides information about influenza for patients and professionals, including specific information on pandemic influenza and on influenza antiviral drugs
The World Health Organization provides information on influenza (in several languages) and has detailed guidelines on the use of vaccines and antivirals during influenza pandemics
The UK Health Protection Agency provides information on pandemic influenza
MedlinePlus provides a list of links to other information about influenza (in English and Spanish)
doi:10.1371/journal.pmed.1000085
PMCID: PMC2680070  PMID: 19440354
19.  Red Blood Cell Transfusion and Mortality in Trauma Patients: Risk-Stratified Analysis of an Observational Study 
PLoS Medicine  2014;11(6):e1001664.
Using a large multicentre cohort, Pablo Perel and colleagues evaluate the association of red blood cell transfusion with mortality according to the predicted risk of death for trauma patients.
Please see later in the article for the Editors' Summary
Background
Haemorrhage is a common cause of death in trauma patients. Although transfusions are extensively used in the care of bleeding trauma patients, there is uncertainty about the balance of risks and benefits and how this balance depends on the baseline risk of death. Our objective was to evaluate the association of red blood cell (RBC) transfusion with mortality according to the predicted risk of death.
Methods and Findings
A secondary analysis of the CRASH-2 trial (which originally evaluated the effect of tranexamic acid on mortality in trauma patients) was conducted. The trial included 20,127 trauma patients with significant bleeding from 274 hospitals in 40 countries. We evaluated the association of RBC transfusion with mortality in four strata of predicted risk of death: <6%, 6%–20%, 21%–50%, and >50%. For this analysis the exposure considered was RBC transfusion, and the main outcome was death from all causes at 28 days. A total of 10,227 patients (50.8%) received at least one transfusion. We found strong evidence that the association of transfusion with all-cause mortality varied according to the predicted risk of death (p-value for interaction <0.0001). Transfusion was associated with an increase in all-cause mortality among patients with <6% and 6%–20% predicted risk of death (odds ratio [OR] 5.40, 95% CI 4.08–7.13, p<0.0001, and OR 2.31, 95% CI 1.96–2.73, p<0.0001, respectively), but with a decrease in all-cause mortality in patients with >50% predicted risk of death (OR 0.59, 95% CI 0.47–0.74, p<0.0001). Transfusion was associated with an increase in fatal and non-fatal vascular events (OR 2.58, 95% CI 2.05–3.24, p<0.0001). The risk associated with RBC transfusion was significantly increased for all the predicted risk of death categories, but the relative increase was higher for those with the lowest (<6%) predicted risk of death (p-value for interaction <0.0001). As this was an observational study, the results could have been affected by different types of confounding. In addition, we could not consider haemoglobin in our analysis. In sensitivity analyses, excluding patients who died early; conducting propensity score analysis adjusting by use of platelets, fresh frozen plasma, and cryoprecipitate; and adjusting for country produced results that were similar.
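Statistically, the key result is an interaction between transfusion and the predicted-risk stratum; a minimal sketch of such a model is shown below using a logistic regression with an interaction term. The column names are assumptions, and the published analysis adjusted for further covariates and reported odds ratios per stratum.

    import statsmodels.formula.api as smf

    def transfusion_by_risk_model(df):
        # df columns assumed: death28 (0/1), transfused (0/1), risk_stratum
        # (categorical: '<6%', '6-20%', '21-50%', '>50%').
        model = smf.logit("death28 ~ transfused * C(risk_stratum)", data=df).fit(disp=0)
        return model   # stratum-specific ORs come from combining main-effect and interaction terms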
Conclusions
The association of transfusion with all-cause mortality appears to vary according to the predicted risk of death. Transfusion may reduce mortality in patients at high risk of death but increase mortality in those at low risk. The effect of transfusion in low-risk patients should be further tested in a randomised trial.
Trial registration
www.ClinicalTrials.gov NCT01746953
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Trauma—a serious injury to the body caused by violence or an accident—is a major global health problem. Every year, injuries caused by traffic collisions, falls, blows, and other traumatic events kill more than 5 million people (9% of annual global deaths). Indeed, for people between the ages of 5 and 44 years, injuries are among the top three causes of death in many countries. Trauma sometimes kills people through physical damage to the brain and other internal organs, but hemorrhage (serious uncontrolled bleeding) is responsible for 30%–40% of trauma-related deaths. Consequently, early trauma care focuses on minimizing hemorrhage (for example, by using compression to stop bleeding) and on restoring blood circulation after blood loss (health-care professionals refer to this as resuscitation). Red blood cell (RBC) transfusion is often used for the management of patients with trauma who are bleeding; other resuscitation products include isotonic saline and solutions of human blood proteins.
Why Was This Study Done?
Although RBC transfusion can save the lives of patients with trauma who are bleeding, there is considerable uncertainty regarding the balance of risks and benefits associated with this procedure. RBC transfusion, which is an expensive intervention, is associated with several potential adverse effects, including allergic reactions and infections. Moreover, blood supplies are limited, and the risks from transfusion are high in low- and middle-income countries, where most trauma-related deaths occur. In this study, which is a secondary analysis of data from a trial (CRASH-2) that evaluated the effect of tranexamic acid (which stops excessive bleeding) in patients with trauma, the researchers test the hypothesis that RBC transfusion may have a beneficial effect among patients at high risk of death following trauma but a harmful effect among those at low risk of death.
What Did the Researchers Do and Find?
The CRASH-2 trial included 20,127 patients with trauma and major bleeding treated in 274 hospitals in 40 countries. In their risk-stratified analysis, the researchers investigated the effect of RBC transfusion on CRASH-2 participants with a predicted risk of death (estimated using a validated model that included clinical variables such as heart rate and blood pressure) on admission to hospital of less than 6%, 6%–20%, 21%–50%, or more than 50%. That is, the researchers compared death rates among patients in each stratum of predicted risk of death who received a RBC transfusion with death rates among patients who did not receive a transfusion. Half the patients received at least one transfusion. Transfusion was associated with an increase in all-cause mortality at 28 days after trauma among patients with a predicted risk of death of less than 6% or of 6%–20%, but with a decrease in all-cause mortality among patients with a predicted risk of death of more than 50%. In absolute figures, compared to no transfusion, RBC transfusion was associated with 5.1 more deaths per 100 patients in the patient group with the lowest predicted risk of death but with 11.9 fewer deaths per 100 patients in the group with the highest predicted risk of death.
What Do These Findings Mean?
These findings show that RBC transfusion is associated with an increase in all-cause deaths among patients with trauma and major bleeding with a low predicted risk of death, but with a reduction in all-cause deaths among patients with a high predicted risk of death. In other words, these findings suggest that the effect of RBC transfusion on all-cause mortality may vary according to whether a patient with trauma has a high or low predicted risk of death. However, because the participants in the CRASH-2 trial were not randomly assigned to receive a RBC transfusion, it is not possible to conclude that receiving a RBC transfusion actually increased the death rate among patients with a low predicted risk of death. It might be that the patients with this level of predicted risk of death who received a transfusion shared other unknown characteristics (confounders) that were actually responsible for their increased death rate. Thus, to provide better guidance for clinicians caring for patients with trauma and hemorrhage, the hypothesis that RBC transfusion could be harmful among patients with trauma with a low predicted risk of death should be prospectively evaluated in a randomised controlled trial.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001664.
This study is further discussed in a PLOS Medicine Perspective by Druin Burch
The World Health Organization provides information on injuries and on violence and injury prevention (in several languages)
The US Centers for Disease Control and Prevention has information on injury and violence prevention and control
The National Trauma Institute, a US-based non-profit organization, provides information about hemorrhage after trauma and personal stories about surviving trauma
The UK National Health Service Choices website provides information about blood transfusion, including a personal story about transfusion after a serious road accident
The US National Heart, Lung, and Blood Institute also provides detailed information about blood transfusions
MedlinePlus provides links to further resources on injuries, bleeding, and blood transfusion (in English and Spanish)
More information is available about CRASH-2 (in several languages)
doi:10.1371/journal.pmed.1001664
PMCID: PMC4060995  PMID: 24937305
20.  The Effect of Tobacco Control Measures during a Period of Rising Cardiovascular Disease Risk in India: A Mathematical Model of Myocardial Infarction and Stroke 
PLoS Medicine  2013;10(7):e1001480.
In this paper from Basu and colleagues, a simulation of tobacco control and pharmacological interventions to prevent cardiovascular disease mortality in India predicted that smoke-free laws and increased tobacco taxation are likely to be the most effective measures to avert future cardiovascular deaths.
Please see later in the article for the Editors' Summary
Background
We simulated tobacco control and pharmacological strategies for preventing cardiovascular deaths in India, the country that is expected to experience more cardiovascular deaths than any other over the next decade.
Methods and Findings
A microsimulation model was developed to quantify the differential effects of various tobacco control measures and pharmacological therapies on myocardial infarction and stroke deaths stratified by age, gender, and urban/rural status for 2013 to 2022. The model incorporated population-representative data from India on multiple risk factors that affect myocardial infarction and stroke mortality, including hypertension, hyperlipidemia, diabetes, coronary heart disease, and cerebrovascular disease. We also included data from India on cigarette smoking, bidi smoking, chewing tobacco, and secondhand smoke. According to the model's results, smoke-free legislation and tobacco taxation would likely be the most effective strategy among a menu of tobacco control strategies (including, as well, brief cessation advice by health care providers, mass media campaigns, and an advertising ban) for reducing myocardial infarction and stroke deaths over the next decade, while cessation advice would be expected to be the least effective strategy at the population level. In combination, these tobacco control interventions could avert 25% of myocardial infarctions and strokes (95% CI: 17%–34%) if the effects of the interventions are additive. These effects are substantially larger than would be achieved through aspirin, antihypertensive, and statin therapy under most scenarios, because of limited treatment access and adherence; nevertheless, the impacts of tobacco control policies and pharmacological interventions appear to be markedly synergistic, averting up to one-third of deaths from myocardial infarction and stroke among 20- to 79-y-olds over the next 10 y. Pharmacological therapies could also be considerably more potent with further health system improvements.
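As an indication of how such a microsimulation proceeds, the toy sketch below simulates individuals year by year under two smoking-prevalence scenarios and counts the deaths averted. The risk equation, relative risk, and baseline hazard are invented for illustration and bear no relation to the model's calibrated Indian inputs.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulated_cv_deaths(n, years, smoking_prev, rr_smoking=2.0, base_annual_risk=0.002):
        smoker = rng.random(n) < smoking_prev
        annual_risk = base_annual_risk * np.where(smoker, rr_smoking, 1.0)
        alive = np.ones(n, dtype=bool)
        deaths = 0
        for _ in range(years):
            dies = alive & (rng.random(n) < annual_risk)
            deaths += int(dies.sum())
            alive &= ~dies
        return deaths

    # deaths averted if a policy package cut smoking prevalence from 20% to 15%:
    # simulated_cv_deaths(1_000_000, 10, 0.20) - simulated_cv_deaths(1_000_000, 10, 0.15)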
Conclusions
Smoke-free laws and substantially increased tobacco taxation appear to be markedly potent population measures to avert future cardiovascular deaths in India. Despite the rise in co-morbid cardiovascular disease risk factors like hyperlipidemia and hypertension in low- and middle-income countries, tobacco control is likely to remain a highly effective strategy to reduce cardiovascular deaths.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Cardiovascular diseases (CVDs) are conditions that affect the heart and/or the circulation. In coronary heart disease, for example, narrowing of the heart's blood vessels by fatty deposits slows the blood supply to the heart and may eventually cause a heart attack (myocardial infarction). Stroke, by contrast, is a CVD in which the blood supply to the brain is interrupted. CVD has been a major cause of illness and death in high-income countries for many years, but the burden of CVD is now rapidly rising in low- and middle-income countries. Indeed, worldwide, three-quarters of all deaths from heart disease and stroke occur in low- and middle-income countries. Smoking, high blood pressure (hypertension), high blood cholesterol (hyperlipidemia), diabetes, obesity, and physical inactivity all increase an individual's risk of developing CVD. Prevention strategies and treatments for CVD include lifestyle changes (for example, smoking cessation) and taking drugs that lower blood pressure (antihypertensive drugs) or blood cholesterol levels (statins) or thin the blood (aspirin).
Why Was This Study Done?
Because tobacco use is a key risk factor for CVD and for several other noncommunicable diseases, the World Health Organization has developed an international instrument for tobacco control called the Framework Convention on Tobacco Control (FCTC). Parties to the FCTC (currently 176 countries) agree to implement a set of core tobacco control provisions including legislation to ban tobacco advertising and to increase tobacco taxes. But will tobacco control measures reduce the burden of CVD effectively in low- and middle-income countries as other risk factors for CVD are becoming more common? In this mathematical modeling study, the researchers investigated this question by simulating the effects of tobacco control measures and pharmacological strategies for preventing CVD on CVD deaths in India. Notably, many of the core FCTC provisions remain poorly implemented or unenforced in India even though it became a party to the convention in 2005. Moreover, experts predict that, over the next decade, this middle-income country will contribute more than any other nation to the global increase in CVD deaths.
What Did the Researchers Do and Find?
The researchers developed a microsimulation model (a computer model that operates at the level of individuals) to quantify the likely effects of various tobacco control measures and pharmacological therapies on deaths from myocardial infarction and stroke in India between 2013 and 2022. They incorporated population-representative data from India on risk factors that affect myocardial infarction and stroke mortality and on tobacco use and exposure to secondhand smoke into their model. They then simulated the effects of five tobacco control measures—smoke-free legislation, tobacco taxation, provision of brief cessation advice by health care providers, mass media campaigns, and advertising bans—and increased access to aspirin, antihypertensive drugs, and statins on deaths from myocardial infarction and stroke. Smoke-free legislation and tobacco taxation are likely to be the most effective strategies for reducing myocardial infarction and stroke deaths over the next decade, according to the model, and the effects of these strategies are likely to be substantially larger than those achieved by drug therapies under current health system conditions. If the effects of smoke-free legislation and tobacco taxation are additive, the model predicts that these two measures alone could avert about 9 million deaths, that is, a quarter of the expected deaths from myocardial infarction and stroke in India over the next 10 years, and that a combination of tobacco control policies and pharmacological interventions could avert up to a third of these deaths.
What Do These Findings Mean?
These findings suggest that the implementation of smoke-free laws and the introduction of increased tobacco taxes in India would yield substantial and rapid health benefits by averting future CVD deaths. The accuracy of these findings is likely to be affected by the many assumptions included in the mathematical model and by the quality of the data fed into it. Importantly, however, these findings suggest that, despite the rise in other CVD risk factors such as hypertension and hyperlipidemia, tobacco control is likely to be a highly effective strategy for the reduction of CVD deaths over the next decade in India and probably in other low- and middle-income countries. Policymakers in these countries should, therefore, work towards fuller and faster implementation of the core FCTC provisions to boost their efforts to reduce deaths from CVD.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001480.
The American Heart Association provides information on all aspects of cardiovascular disease; its website includes personal stories about heart attacks and stroke
The US Centers for Disease Control and Prevention has information on heart disease and on stroke (in English and Spanish)
The UK National Health Service Choices website provides information about cardiovascular disease and stroke
MedlinePlus provides links to other sources of information on heart diseases, vascular diseases, and stroke (in English and Spanish)
The World Health Organization provides information (in several languages) about the dangers of tobacco, about the Framework Convention on Tobacco Control, and about noncommunicable diseases; its Global Noncommunicable Disease Network (NCDnet) aims to help low- and middle-income countries reduce illness and death caused by CVD and other noncommunicable diseases
SmokeFree, a website provided by the UK National Health Service, offers advice on quitting smoking and includes personal stories from people who have stopped smoking
Smokefree.gov, supported by the US National Cancer Institute and other US agencies, offers online tools and resources to help people quit smoking
doi:10.1371/journal.pmed.1001480
PMCID: PMC3706364  PMID: 23874160
21.  Are Markers of Inflammation More Strongly Associated with Risk for Fatal Than for Nonfatal Vascular Events? 
PLoS Medicine  2009;6(6):e1000099.
In a secondary analysis of a randomized trial comparing pravastatin versus placebo for the prevention of coronary and cerebral events in an elderly at-risk population, Naveed Sattar and colleagues find that inflammatory markers may be more strongly associated with risk of fatal vascular events than nonfatal vascular events.
Background
Circulating inflammatory markers may more strongly relate to risk of fatal versus nonfatal cardiovascular disease (CVD) events, but robust prospective evidence is lacking. We tested whether interleukin (IL)-6, C-reactive protein (CRP), and fibrinogen more strongly associate with fatal compared to nonfatal myocardial infarction (MI) and stroke.
Methods and Findings
In the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER), baseline inflammatory markers in up to 5,680 men and women aged 70–82 y were related to risk for endpoints: nonfatal CVD (i.e., nonfatal MI and nonfatal stroke [n = 672]), fatal CVD (n = 190), death from other CV causes (n = 38), and non-CVD mortality (n = 300), over 3.2-y follow-up. Elevations in baseline IL-6 levels were significantly (p = 0.0009; competing risks model analysis) more strongly associated with fatal CVD (hazard ratio [HR] for 1 log unit increase in IL-6 1.75, 95% confidence interval [CI] 1.44–2.12) than with risk of nonfatal CVD (1.17, 95% CI 1.04–1.31), in analyses adjusted for treatment allocation. The findings were consistent in a fully adjusted model. These broad trends were similar for CRP and, to a lesser extent, for fibrinogen. The results were also similar in placebo and statin recipients (i.e., no interaction). The C-statistic for fatal CVD using traditional risk factors was significantly (+0.017; p<0.0001) improved by inclusion of IL-6 but not so for nonfatal CVD events (p = 0.20).
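For readers unfamiliar with this type of analysis, the sketch below shows a generic Cox proportional hazards fit of the kind referred to above, using simulated data and the third-party lifelines library (an assumption of this sketch; PROSPER data and the competing-risks analysis itself are not reproduced here).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # assumed third-party survival library

# Simulated data (not PROSPER): relate a log-transformed inflammatory marker
# to time-to-event, adjusting for age and treatment allocation.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "log_il6": rng.normal(0.8, 0.4, n),
    "age": rng.uniform(70, 82, n),
    "statin": rng.integers(0, 2, n),
})
# Hypothetical event times in which higher IL-6 shortens survival.
hazard = 0.02 * np.exp(0.6 * df["log_il6"])
df["time"] = rng.exponential(1.0 / hazard)
df["event"] = (df["time"] < 3.2).astype(int)    # administrative censoring at 3.2 y
df["time"] = df["time"].clip(upper=3.2)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")   # remaining columns are covariates
cph.print_summary()   # reports the hazard ratio per 1 log-unit increase in IL-6
```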
Conclusions
In PROSPER, inflammatory markers, in particular IL-6 and CRP, are more strongly associated with risk of fatal vascular events than nonfatal vascular events. These novel observations may have important implications for a better understanding of the aetiology of CVD mortality, and have potential clinical relevance.
Please see later in the article for Editors' Summary
Editors' Summary
Background
Cardiovascular disease (CVD)—disease that affects the heart and/or the blood vessels—is a common cause of death in developed countries. In the USA, for example, the leading cause of death is coronary heart disease (CHD), a CVD in which narrowing of the heart's blood vessels by “atherosclerotic plaques” (fatty deposits that build up with age) slows the blood supply to the heart and may eventually cause a heart attack (myocardial infarction). Other types of CVD include stroke (in which atherosclerotic plaques interrupt the brain's blood supply) and heart failure (a condition in which the heart cannot pump enough blood to the rest of the body). Smoking, high blood pressure, high blood levels of cholesterol (a type of fat), having diabetes, and being overweight all increase a person's risk of developing CVD. Tools such as the “Framingham risk calculator” take these and other risk factors into account to assess an individual's overall risk of CVD, which can be reduced by taking drugs to reduce blood pressure or cholesterol levels (for example, pravastatin) and by making lifestyle changes.
Why Was This Study Done?
Inflammation (an immune response to injury) in the walls of blood vessels is thought to play a role in the development of atherosclerotic plaques. Consistent with this idea, several epidemiological studies (investigations of the causes and distribution of disease in populations) have shown that people with high circulating levels of markers of inflammation such as interleukin-6 (IL-6), C-reactive protein (CRP), and fibrinogen are more likely to have a stroke or a heart attack (a CVD event) than people with low levels of these markers. Although these studies have generally lumped together fatal and nonfatal CVD events, some evidence suggests that circulating inflammatory markers may be more strongly associated with fatal than with nonfatal CVD events. If this is the case, the mechanisms that lead to fatal and nonfatal CVD events may be subtly different and knowing about these differences could improve both the prevention and treatment of CVD. In this study, the researchers investigate this possibility using data collected in the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER; a trial that examined pravastatin's effect on CVD development among 70–82 year olds with pre-existing CVD or an increased risk of CVD because of smoking, high blood pressure, or diabetes).
What Did the Researchers Do and Find?
The researchers used several statistical models to examine the association between baseline levels of IL-6, CRP, and fibrinogen in the trial participants and nonfatal CVD events (nonfatal heart attacks and nonfatal strokes), fatal CVD events, death from other types of CVD, and deaths from other causes during 3.2 years of follow-up. Increased levels of all three inflammatory markers were more strongly associated with fatal CVD than with nonfatal CVD after adjustment for treatment allocation and for other established CVD risk factors, but this pattern was strongest for IL-6. Thus, a unit increase in the log of IL-6 levels increased the risk of fatal CVD by about three-quarters but increased the risk of nonfatal CVD by significantly less. The researchers also investigated whether including these inflammatory markers in tools designed to predict an individual's CVD risk could improve the tools' ability to distinguish between individuals at high and low risk. The addition of IL-6 to established risk factors, they report, increased this discriminatory ability for fatal CVD but not for nonfatal CVD.
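The discrimination measure mentioned here, the C-statistic, can be illustrated with a small self-contained sketch using hypothetical follow-up times, event indicators, and model risk scores (none of these values come from the study).

```python
# Toy illustration of the C-statistic (concordance index): the proportion of
# usable subject pairs in which the subject who died earlier was assigned the
# higher predicted risk. All data below are hypothetical.
def c_statistic(times, events, risk_scores):
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is usable if subject i had an event before subject j's time.
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / usable

times  = [1.0, 2.5, 3.0, 0.8, 3.2]   # hypothetical follow-up times (years)
events = [1,   0,   1,   1,   0]     # 1 = fatal CVD event, 0 = censored
risk   = [2.1, 0.4, 0.1, 2.8, 0.2]   # hypothetical model risk scores
print(round(c_statistic(times, events, risk), 3))   # prints 0.875
```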
What Do These Findings Mean?
These findings indicate that, at least for the elderly at-risk patients who were included in PROSPER, inflammatory markers are more strongly associated with the risk of a fatal heart attack or stroke than with nonfatal CVD events. These findings need to be confirmed in younger populations and larger studies also need to be done to discover whether the same association holds when fatal heart attacks and fatal strokes are considered separately. Nevertheless, the present findings suggest that inflammation may specifically help to promote the development of serious, potentially fatal CVD and should stimulate improved research into the use of inflammation markers to predict risk of deaths from CVD.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000099.
The MedlinePlus Encyclopedia has pages on coronary heart disease, stroke, and atherosclerosis (in English and Spanish)
MedlinePlus provides links to many other sources of information on heart diseases, vascular diseases, and stroke (in English and Spanish)
Information for patients and caregivers is provided by the American Heart Association on all aspects of cardiovascular disease, including information on inflammation and heart disease
Information is available from the British Heart Foundation on heart disease and keeping the heart healthy
More information about PROSPER is available on the Web site of the Vascular Biochemistry Department of the University of Glasgow
doi:10.1371/journal.pmed.1000099
PMCID: PMC2694359  PMID: 19554082
22.  Implantable Cardioverter Defibrillators. Prophylactic Use 
Executive Summary
Objective
The use of implantable cardiac defibrillators (ICDs) to prevent sudden cardiac death (SCD) in patients resuscitated from cardiac arrest or documented dangerous ventricular arrhythmias (secondary prevention of SCD) is an insured service. In 2003 (before the establishment of the Ontario Health Technology Advisory Committee), the Medical Advisory Secretariat conducted a health technology policy assessment on the prophylactic use (primary prevention of SCD) of ICDs for patients at high risk of SCD. The Medical Advisory Secretariat concluded that ICDs are effective for the primary prevention of SCD. Moreover, it found that a more clearly defined target population at risk for SCD that would be likely to benefit from ICDs is needed, given that the number needed to treat (NNT) from recent studies is 13 to 18, and given that the per-unit cost of ICDs is $32,000, which means that the projected cost to Ontario is $770 million (Cdn).
Accordingly, as part of an annual review and publication of more recent articles, the Medical Advisory Secretariat updated its health technology policy assessment of ICDs.
Clinical Need
Sudden cardiac death is caused by the sudden onset of fatal arrhythmias, or abnormal heart rhythms: ventricular tachycardia (VT), a rhythm abnormality in which the ventricles cause the heart to beat too fast, and ventricular fibrillation (VF), an abnormal, rapid and erratic heart rhythm. About 80% of fatal arrhythmias are associated with ischemic heart disease, which is caused by insufficient blood flow to the heart.
Management of VT and VF with antiarrhythmic drugs is not very effective; for this reason, nonpharmacological treatments have been explored. One such treatment is the ICD.
The Technology
An ICD is a battery-powered device that, once implanted, monitors heart rhythm and can deliver an electric shock to restore normal rhythm when potentially fatal arrhythmias are detected. The use of ICDs to prevent SCD in patients resuscitated from cardiac arrest or documented dangerous ventricular arrhythmias (secondary prevention) is an insured service in Ontario.
Primary prevention of SCD involves identification of and preventive therapy for patients who are at high risk for SCD. Most of the studies in the literature that have examined the prevention of fatal ventricular arrhythmias have focused on patients with ischemic heart disease, in particular, those with heart failure (HF), which has been shown to increase the risk of SCD. The risk of HF is determined by left ventricular ejection fraction (LVEF); most studies have focused on patients with an LVEF under 0.35 or 0.30. While most studies have found that ICDs significantly reduce the risk of SCD in patients with an LVEF less than 0.35, a more recent study (Sudden Cardiac Death in Heart Failure Trial [SCD-HeFT]) reported that patients with HF and nonischemic heart disease could also benefit from this technology. Based on the generalization of the SCD-HeFT study, the US Centers for Medicare and Medicaid Services recently announced that it would allocate $10 billion (US) annually toward the primary prevention of SCD for patients with ischemic and nonischemic heart disease and an LVEF under 0.35.
Review Strategy
The aim of this literature review was to assess the effectiveness, safety, and cost effectiveness of ICDs for the primary prevention of SCD.
The standard search strategy used by the Medical Advisory Secretariat was used. This included a search of all international health technology assessments as well as a search of the medical literature from January 2003–May 2005.
A modification of the GRADE approach (1) was used to make judgments about the quality of evidence and strength of recommendations systematically and explicitly. GRADE provides a framework for structured reflection and can help to ensure that appropriate judgments are made. GRADE takes into account a study’s design, quality, consistency, and directness in judging the quality of evidence for each outcome. The balance between benefits and harms, quality of evidence, applicability, and the certainty of the baseline risks are considered in judgments about the strength of recommendations.
Summary of Findings
Overall, ICDs are effective for the primary prevention of SCD. Three studies – the Multicentre Automatic Defibrillator Implantation Trial I (MADIT I), the Multicentre Automatic Defibrillator Implantation Trial II (MADIT II), and SCD-HeFT – showed there was a statistically significant decrease in total mortality for patients who prophylactically received an ICD compared with those who received conventional therapy (Table 1).
Results of Key Studies on the Use of Implantable Cardioverter Defibrillators for the Primary Prevention of Sudden Cardiac Death – All-Cause Mortality
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
EP indicates electrophysiology; ICD, implantable cardioverter defibrillator; NNT, number needed to treat; NSVT, nonsustained ventricular tachycardia. The NNT will appear higher if follow-up is short. For ICDs, the absolute benefit increases over time for at least a 5-year period; the NNT declines, often substantially, in studies with a longer follow-up. When the NNT are equalized for a similar period as the SCD-HeFT duration (5 years), the NNT for MADIT-I is 2.2; for MADIT-II, it is 6.3.
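As a quick illustration of the arithmetic behind this footnote, NNT is the reciprocal of the absolute risk reduction, so a longer follow-up that accumulates a larger absolute benefit yields a smaller NNT. The mortality figures in the sketch below are hypothetical placeholders, not trial results.

```python
# Worked NNT arithmetic: NNT = 1 / absolute risk reduction (ARR). With a fixed
# relative benefit, a longer follow-up accumulates a larger ARR, so the NNT
# falls. The mortality figures below are hypothetical placeholders.
def nnt(control_mortality, treated_mortality):
    arr = control_mortality - treated_mortality
    return 1.0 / arr

print(round(nnt(0.10, 0.065), 1))   # shorter follow-up  -> NNT ~ 28.6
print(round(nnt(0.30, 0.195), 1))   # longer follow-up   -> NNT ~ 9.5
```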
GRADE Quality of the Evidence
Using the GRADE Working Group criteria, the quality of these 3 trials was examined (Table 2).
Quality refers to criteria such as the adequacy of allocation concealment, blinding, and follow-up.
Consistency refers to the similarity of estimates of effect across studies. If there is important unexplained inconsistency in the results, our confidence in the estimate of effect for that outcome decreases. Differences in the direction of effect, the size of the differences in effect, and the significance of the differences guide the decision about whether important inconsistency exists.
Directness refers to the extent to which the people, interventions, and outcome measures are similar to those of interest. For example, there may be uncertainty about the directness of the evidence if the people of interest are older, sicker, or have more comorbidity than those in the studies.
As stated by the GRADE Working Group, the following definitions were used to grade the quality of the evidence:
High: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low: Any estimate of effect is very uncertain.
Quality of Evidence – MADIT I, MADIT II, and SCD-HeFT*
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
The 3 trials had 3 different sets of eligibility criteria for implantation of an ICD for primary prevention of SCD.
Conclusions
Overall, there is evidence that ICDs are effective for the primary prevention of SCD. Three trials have found a statistically significant decrease in total mortality for patients who prophylactically received an ICD compared with those who received conventional therapy in their respective study populations.
As per the GRADE Working Group, recommendations consider 4 main factors:
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates, and the relative value placed on the outcome;
The quality of the evidence (Table 2);
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects, such as proximity to a hospital or availability of necessary expertise; and
Uncertainty about the baseline risk for the population of interest.
The GRADE Working Group also recommends that incremental costs of health care alternatives should be considered explicitly with the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 3 is the overall trade-off between benefits and harms and incorporates any risk or uncertainty.
For MADIT I, the overall GRADE and strength of the recommendation is “moderate” – the quality of the evidence is “moderate” (uncertainty due to methodological limitations in the study design), and risk/uncertainty in cost and budget impact was mitigated by the use of filters to help target the prevalent population at risk (Table 3).
For MADIT II, the overall GRADE and strength of the recommendation is “very weak” – the quality of the evidence is “weak” (uncertainty due to methodological limitations in the study design), and there is risk or uncertainty regarding the high prevalence, cost, and budget impact. It is not clear why screening for high-risk patients was dropped, given that in MADIT II the absolute reduction in mortality was small (5.6%) compared to MADIT I, which used electrophysiological screening (23%) (Table 3).
For SCD-HeFT, the overall GRADE and strength of the recommendation is “weak” – the study quality is “moderate,” but there is also risk/uncertainty due to a high NNT at 5 years (13 compared to the MADIT II NNT of 6 and MADIT I NNT of 2 at 5 years), high prevalent population (N = 23,700), and a high budget impact ($770 million). A filter (as demonstrated in MADIT 1) is required to help target the prevalent population at risk and mitigate the risk or uncertainty relating to the high NNT, prevalence, and budget impact (Table 3).
The results of the most recent ICD trial (SCD-HeFT) are not generalizable to the prevalent population in Ontario (Table 3). Given that the current funding rate of an ICD is $32,500 (Cdn), the estimated budget impact for Ontario would be as high as $770 million (Cdn). The uncertainty around the cost estimate of treating the prevalent population with LVEF < 0.30 in Ontario, the lack of human resources to implement such a strategy, and the high number of patients required to prevent one SCD (NNT = 13) call for an alternative strategy that allows the appropriate uptake and diffusion of ICDs for primary prevention for patients at maximum risk for SCD within the SCD-HeFT population.
The uptake and diffusion of ICDs for primary prevention of SCD should therefore be based on risk stratification through the use of appropriate screen(s) that would identify patients at highest risk who could derive the most benefit from this technology.
Overall GRADE and Strength of Recommendation for the Use of Implantable Cardioverter Defibrillators for the Primary Prevention of Sudden Cardiac Death
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
NNT indicates number needed to treat. The NNT will appear higher if follow-up is short. For ICDs, the absolute benefit increases over time for at least a 5-year period; the NNT declines, often substantially, in studies with a longer follow-up. When the NNT are equalized for a similar period as the SCD-HeFT duration (5 years), the NNT for MADIT-I is 2.2; for MADIT-II, it is 6.3.
NSVT indicates nonsustained ventricular tachycardia; VT, ventricular tachycardia.
PMCID: PMC3382404  PMID: 23074465
23.  A data-driven model of biomarker changes in sporadic Alzheimer's disease 
Brain  2014;137(9):2564-2577.
Young et al. reformulate an event-based model for the progression of Alzheimer's disease to make it applicable to a heterogeneous sporadic disease population. The enhanced model predicts the ordering of biomarker abnormality in sporadic Alzheimer's disease independently of clinical diagnoses or biomarker cut-points, and shows state-of-the-art diagnostic classification performance.
We demonstrate the use of a probabilistic generative model to explore the biomarker changes occurring as Alzheimer’s disease develops and progresses. We enhanced the recently introduced event-based model for use with a multi-modal sporadic disease data set. This allows us to determine the sequence in which Alzheimer’s disease biomarkers become abnormal without reliance on a priori clinical diagnostic information or explicit biomarker cut points. The model also characterizes the uncertainty in the ordering and provides a natural patient staging system. Two hundred and eighty-five subjects (92 cognitively normal, 129 mild cognitive impairment, 64 Alzheimer’s disease) were selected from the Alzheimer’s Disease Neuroimaging Initiative with measurements of 14 Alzheimer’s disease-related biomarkers including cerebrospinal fluid proteins, regional magnetic resonance imaging brain volume and rates of atrophy measures, and cognitive test scores. We used the event-based model to determine the sequence of biomarker abnormality and its uncertainty in various population subgroups. We used patient stages assigned by the event-based model to discriminate cognitively normal subjects from those with Alzheimer’s disease, and predict conversion from mild cognitive impairment to Alzheimer’s disease and cognitively normal to mild cognitive impairment. The model predicts that cerebrospinal fluid levels become abnormal first, followed by rates of atrophy, then cognitive test scores, and finally regional brain volumes. In amyloid-positive (cerebrospinal fluid amyloid-β1–42 < 192 pg/ml) or APOE-positive (one or more APOE4 alleles) subjects, the model predicts with high confidence that the cerebrospinal fluid biomarkers become abnormal in a distinct sequence: amyloid-β1–42, phosphorylated tau, total tau. However, in the broader population total tau and phosphorylated tau are found to be earlier cerebrospinal fluid markers than amyloid-β1–42, albeit with more uncertainty. The model’s staging system strongly separates cognitively normal and Alzheimer’s disease subjects (maximum classification accuracy of 99%), and predicts conversion from mild cognitive impairment to Alzheimer’s disease (maximum balanced accuracy of 77% over 3 years), and from cognitively normal to mild cognitive impairment (maximum balanced accuracy of 76% over 5 years). By fitting Cox proportional hazards models, we find that baseline model stage is a significant risk factor for conversion from both mild cognitive impairment to Alzheimer’s disease (P = 2.06 × 10−7) and cognitively normal to mild cognitive impairment (P = 0.033). The data-driven model we describe supports hypothetical models of biomarker ordering in amyloid-positive and APOE-positive subjects, but suggests that biomarker ordering in the wider population may diverge from this sequence. The model provides useful disease staging information across the full spectrum of disease progression, from cognitively normal to mild cognitive impairment to Alzheimer’s disease. This approach has broad application across neurodegenerative disease, providing insights into disease biology, as well as staging and prognostication.
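The core of an event-based model of this kind can be sketched in a few lines (a simplified illustration, not the authors' implementation): for a fixed ordering of biomarkers, a subject's likelihood is obtained by marginalizing over the unknown disease stage, with biomarkers before the stage evaluated under their "abnormal" distribution and the rest under their "normal" distribution. The per-biomarker likelihood values below are hypothetical.

```python
# Simplified event-based model likelihood (illustrative; not the authors' code).
# For a fixed ordering of N biomarkers, a subject at stage k has the first k
# biomarkers in the ordering abnormal and the rest normal; the subject
# likelihood marginalizes over stages with a uniform stage prior.
def subject_likelihood(ordering, p_abnormal, p_normal):
    n = len(ordering)
    total = 0.0
    for k in range(n + 1):                      # k = number of events that have occurred
        lik = 1.0
        for position, biomarker in enumerate(ordering):
            lik *= p_abnormal[biomarker] if position < k else p_normal[biomarker]
        total += lik / (n + 1)                  # uniform prior over stages
    return total

# Toy example: biomarker 0 = CSF protein, 1 = atrophy rate, 2 = regional volume.
# Values are hypothetical per-biomarker likelihoods of one subject's measurements.
p_abnormal = [0.9, 0.6, 0.2]
p_normal   = [0.1, 0.4, 0.8]
print(subject_likelihood([0, 1, 2], p_abnormal, p_normal))
```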
doi:10.1093/brain/awu176
PMCID: PMC4132648  PMID: 25012224
event-based model; disease progression; Alzheimer’s disease; biomarkers; biomarker ordering
24.  Intravaginal Practices, Bacterial Vaginosis, and HIV Infection in Women: Individual Participant Data Meta-analysis 
PLoS Medicine  2011;8(2):e1000416.
Pooling of data from 14,874 women in an individual participant data meta-analysis by Nicola Low and colleagues reveals that some intravaginal practices increase the risk of HIV acquisition.
Background
Identifying modifiable factors that increase women's vulnerability to HIV is a critical step in developing effective female-initiated prevention interventions. The primary objective of this study was to pool individual participant data from prospective longitudinal studies to investigate the association between intravaginal practices and acquisition of HIV infection among women in sub-Saharan Africa. Secondary objectives were to investigate associations between intravaginal practices and disrupted vaginal flora; and between disrupted vaginal flora and HIV acquisition.
Methods and Findings
We conducted a meta-analysis of individual participant data from 13 prospective cohort studies involving 14,874 women, of whom 791 acquired HIV infection during 21,218 woman years of follow-up. Data were pooled using random-effects meta-analysis. The level of between-study heterogeneity was low in all analyses (I2 values 0.0%–16.1%). Intravaginal use of cloth or paper (pooled adjusted hazard ratio [aHR] 1.47, 95% confidence interval [CI] 1.18–1.83), insertion of products to dry or tighten the vagina (aHR 1.31, 95% CI 1.00–1.71), and intravaginal cleaning with soap (aHR 1.24, 95% CI 1.01–1.53) remained associated with HIV acquisition after controlling for age, marital status, and number of sex partners in the past 3 months. Intravaginal cleaning with soap was also associated with the development of intermediate vaginal flora and bacterial vaginosis in women with normal vaginal flora at baseline (pooled adjusted odds ratio [OR] 1.24, 95% CI 1.04–1.47). Use of cloth or paper was not associated with the development of disrupted vaginal flora. Intermediate vaginal flora and bacterial vaginosis were each associated with HIV acquisition in multivariable models when measured at baseline (aHR 1.54 and 1.69, p<0.001) or at the visit before the estimated date of HIV infection (aHR 1.41 and 1.53, p<0.001), respectively.
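A generic random-effects pooling of study-level log hazard ratios, together with the I² heterogeneity measure quoted above, can be sketched with the DerSimonian-Laird estimator; the per-study estimates and standard errors below are toy values, not the study data.

```python
import numpy as np

# DerSimonian-Laird random-effects pooling of study-level log hazard ratios,
# with Cochran's Q and I^2. Inputs are toy values, not the study data.
log_hr = np.array([0.45, 0.30, 0.52, 0.25])   # hypothetical per-study log HRs
se = np.array([0.20, 0.25, 0.30, 0.22])       # hypothetical standard errors

w = 1.0 / se**2                                # inverse-variance (fixed-effect) weights
fixed = np.sum(w * log_hr) / np.sum(w)
Q = np.sum(w * (log_hr - fixed) ** 2)          # Cochran's Q heterogeneity statistic
k = len(log_hr)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
i2 = max(0.0, (Q - (k - 1)) / Q) * 100         # % of variation due to heterogeneity

w_star = 1.0 / (se**2 + tau2)                  # random-effects weights
pooled = np.sum(w_star * log_hr) / np.sum(w_star)
pooled_se = np.sqrt(1.0 / np.sum(w_star))
lo, hi = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
print(f"pooled HR {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.1f}%")
```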
Conclusions
This study provides evidence to suggest that some intravaginal practices increase the risk of HIV acquisition but a direct causal pathway linking intravaginal cleaning with soap, disruption of vaginal flora, and HIV acquisition has not yet been demonstrated. More consistency in the definition and measurement of specific intravaginal practices is warranted so that the effects of specific intravaginal practices and products can be further elucidated.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Since the first reported case of acquired immunodeficiency syndrome (AIDS) in 1981, the number of people infected with the human immunodeficiency virus (HIV), which causes AIDS, has risen steadily. By the end of 2009, an estimated 33.3 million people were living with HIV/AIDS. At the beginning of the epidemic, more men than women were infected with HIV but now, globally, more than half of all adults living with HIV/AIDS are women, and HIV/AIDS is the leading cause of death among women of child-bearing age. In sub-Saharan Africa, where more than two-thirds of HIV-positive people live, the situation for women is particularly bad. About 12 million women live with HIV/AIDS in this region compared with about 8 million men; among 15–24 year-olds, women are eight times more likely than men to be HIV-positive. This pattern of infection has developed because in sub-Saharan Africa most people contract HIV through heterosexual sex.
Why Was This Study Done?
If modifiable factors that increase women's vulnerability to HIV infection could be identified, it might be possible to develop effective female-initiated prevention interventions. Some experts think that intravaginal practices such as cleaning the vagina with soap or a cloth increase the risk of HIV infection by damaging the vagina's lining or by increasing bacterial vaginosis (a condition in which harmful bacteria disrupt the healthy vaginal flora) but the evidence for such an association is inconclusive. In this meta-analysis, the researchers pool individual participant data from several prospective longitudinal cohort studies to assess the association between intravaginal practices and HIV acquisition among women in sub-Saharan Africa. Meta-analysis is a statistical method that combines data from several studies to get a clearer view of the factors associated with a disease than is possible from individual studies. In a prospective longitudinal cohort study, groups of participants with different baseline characteristics (here, women who did or did not use intravaginal practices), who do not have the outcome of interest at the start of the study (here, HIV infection), are followed to see whether these characteristics affect disease development.
What Did the Researchers Do and Find?
The researchers pooled individual participant data from 13 prospective cohort studies in sub-Saharan Africa involving nearly 15,000 women, 791 of whom acquired HIV, and asked whether HIV infection within 2 years of study enrollment was associated with self-reported intravaginal practices. That is, were women who used specific intravaginal practices more likely to become infected with HIV than women who did not use these practices? After controlling for age, marital status, and the number of recent sex partners, women who used cloth or paper to clean their vagina were nearly one and a half times as likely to acquire HIV infection as women who did not use this practice (a pooled adjusted hazard ratio [aHR] of 1.47). The insertion of products to dry or tighten the vagina and intravaginal cleaning with soap also increased women's chances of acquiring HIV (aHRs of 1.31 and 1.24, respectively). Moreover, intravaginal cleaning with soap was associated with the development of bacterial vaginosis, and disrupted vaginal flora and bacterial vaginosis were both associated with an increased risk of HIV acquisition.
What Do These Findings Mean?
These findings suggest that some intravaginal practices increase the risk of HIV acquisition but they do not prove that there is a causal link between any intravaginal practice, disruption of vaginal flora, and HIV acquisition. It could be that the women who use intravaginal practices share other unknown characteristics that affect their vulnerability to HIV infection. The accuracy of these findings is also likely to be affected by the use of self-reported data and inconsistent definitions of intravaginal practices. Nevertheless, given the widespread use of intravaginal practices in some sub-Saharan countries (95% of female sex workers in Kenya use such practices, for example), these findings suggest that encouraging women to use less harmful intravaginal practices (for example, washing with water alone) should be included in female-initiated HIV prevention research strategies in sub-Saharan Africa and other regions where intravaginal practices are common.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000416
The US National Institute of Allergy and Infectious Diseases provides information on HIV infection and AIDS and on bacterial vaginosis
The US Centers for Disease Control and Prevention has information on all aspects of HIV/AIDS, including specific information about HIV/AIDS and women; it also has information on bacterial vaginosis (in English and Spanish)
HIV InSite has information on all aspects of HIV/AIDS
Information is available from Avert, an international AIDS nonprofit, on all aspects of HIV/AIDS, including HIV/AIDS and women and HIV/AIDS in Africa (in English and Spanish)
A full description of the researchers' study protocol is available
Several Web sites provide information on microbicides: Global Campaign for Microbicides, Microbicides Development Programme, Microbicides Trials Network, and International Partnership for Microbicides
doi:10.1371/journal.pmed.1000416
PMCID: PMC3039685  PMID: 21358808
25.  The Impact of Monitoring HIV Patients Prior to Treatment in Resource-Poor Settings: Insights from Mathematical Modelling 
PLoS Medicine  2008;5(3):e53.
Background
The roll-out of antiretroviral treatment (ART) in developing countries concentrates on finding patients currently in need, but over time many HIV-infected individuals will be identified who will require treatment in the future. We investigated the potential influence of alternative patient management and ART initiation strategies on the impact of ART programmes in sub-Saharan Africa.
Methods and Findings
We developed a stochastic mathematical model representing disease progression, diagnosis, clinical monitoring, and survival in a cohort of 1,000 hypothetical HIV-infected individuals in Africa. If individuals primarily enter ART programmes when symptomatic, the model predicts that only 25% will start treatment and, on average, 6 life-years will be saved per person treated. If individuals are recruited to programmes while still healthy and are frequently monitored, and CD4+ cell counts are used to help decide when to initiate ART, three times as many are expected to be treated, and average life-years saved among those treated increases to 15. The impact of programmes can be improved further by performing a second CD4+ cell count when the initial value is close to the threshold for starting treatment, maintaining high patient follow-up rates, and prioritising monitoring of the oldest (≥ 35 y) and most immune-suppressed patients (CD4+ cell count ≤ 350 cells/µl). Initiating ART at higher CD4+ cell counts than WHO recommends leads to more life-years saved, but disproportionately more years spent on ART.
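A highly simplified stochastic cohort sketch in the spirit of this comparison (not the authors' model) is shown below: CD4 counts decline stochastically, mortality risk rises once counts are low, ART halts the decline, and two ART initiation thresholds are compared. All rates and thresholds are hypothetical placeholders.

```python
import random

# Simplified stochastic cohort sketch (not the authors' model): CD4 counts
# decline stochastically, death risk rises once counts are low, and ART halts
# the decline. Two ART initiation thresholds are compared. All rates and
# thresholds are hypothetical placeholders.
def simulate(start_threshold, n=1000, max_years=20, seed=42):
    rng = random.Random(seed)
    life_years = 0
    for _ in range(n):
        cd4, on_art = 800.0, False
        for _ in range(max_years):
            if not on_art:
                cd4 -= rng.uniform(40, 120)            # stochastic annual CD4 decline
                if cd4 <= start_threshold:
                    on_art = True                       # treatment halts progression
            annual_death_risk = 0.01 if (on_art or cd4 > 350) else 0.15
            if rng.random() < annual_death_risk:
                break
            life_years += 1
    return life_years / n

print("avg life-years, symptom-based start (CD4 ~100):", simulate(100))
print("avg life-years, monitored start (CD4 350):     ", simulate(350))
```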
Conclusions
The overall impact of ART programmes will be limited if rates of diagnosis are low and individuals enter care too late. Frequently monitoring individuals at all stages of HIV infection and using CD4 cell count information to determine when to start treatment can maximise the impact of ART.
Using a stochastic model based on data from Africa, Timothy Hallett and colleagues find that starting HIV treatment based on regular CD4 monitoring, rather than on symptoms, would substantially increase survival.
Editors' Summary
Background.
Acquired immunodeficiency syndrome (AIDS) has killed more than 25 million people since the first case in 1981, and about 33 million people are currently infected with the human immunodeficiency virus (HIV), which causes AIDS. HIV destroys immune system cells (including CD4 cells, a type of lymphocyte), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-positive individuals died within 10 years but in 1996, combination antiretroviral therapy (ART)—a mixture of powerful but expensive antiretroviral drugs—was developed. For HIV-positive people living in affluent, developed countries who could afford ART, AIDS then became a chronic disease, but for those living in low- and middle-income countries it remained a death sentence—ART was too expensive. In 2003, this lack of access to ART was declared a global health emergency and governments, international organizations, and funding bodies began to implement plans to increase ART coverage in developing countries.
Why Was This Study Done?
The roll-out of ART in developing countries has concentrated so far on finding HIV-positive people who currently need treatment. In developing countries, these are often individuals who have AIDS-related symptoms such as recurrent severe bacterial infections. But healthy people are also being diagnosed as HIV positive during voluntary testing and at antenatal clinics. How should these HIV-positive but symptom-free individuals be managed? Should regular health-monitoring appointments be scheduled for them and when should ART be initiated? Management decisions like these will determine how well patients do when they eventually start ART, as well as the demand for ART and other health-care services. The full range of alternative patient management strategies cannot be tested in clinical trials—it would be unethical—but public-health officials need an idea of their relative effectiveness in order to use limited resources wisely. In this study, therefore, the researchers use mathematical modeling to investigate the effects of alternative patient management and ART initiation strategies on the impact of ART programs in resource-poor settings.
What Did the Researchers Do and Find?
The researchers' mathematical model, which includes data on disease progression collected in Africa, simulates disease progression in a group (cohort) of 1,000 HIV-infected adults. It tracks these individuals from infection, through diagnosis and clinical monitoring, and into treatment and predicts how many will receive ART and their length of survival under different management scenarios and ART initiation rules. The model predicts that if HIV-positive individuals receive ART only when they have AIDS-related symptoms, only a quarter of them will ever start ART and the average life-years saved per person treated will be 6 years (that is, they will live 6 years longer than they would have done without treatment). If individuals are recruited to ART programs when they are healthy and are frequently monitored using CD4 cell counts to decide when to start ART, three-quarters of the cohort will be treated and 15 life-years will be saved per person treated. The impact of ART programs will be increased further, the model predicts, by preferentially monitoring people who are more than 35 years old and the most immunosuppressed individuals. Finally, strategies that measure CD4 cells frequently will save more life-years because ART is more likely to be started before the immune system is irreversibly damaged. Importantly for resource-poor settings, these strategies also save more life-years per year on ART.
What Do These Findings Mean?
As with all mathematical models, the accuracy of these predictions depends on the assumptions built into the model and the reliability of the data fed into it. Also, this model does not estimate the costs of the various management options, something that will need to be done to ensure effective allocation of limited resources. Nevertheless, these findings provide several general clues about how ART programs should be implemented in poor countries to maximize their effects. Early diagnosis of infections, regular monitoring of patients, and using CD4 cell counts to decide when to initiate ART should all help to improve the number of life-years saved by ART. In other words, the researchers conclude, effectively managing individuals at all stages of HIV infection is essential to maximize the impact of ART.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/doi:10.1371/journal.pmed.0050053.
Information from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS.
Information from the US Centers for Disease Control and Prevention on global HIV/AIDS topics (in English and Spanish)
HIV InSite, comprehensive and up-to-date information on all aspects of HIV/AIDS from the University of California, San Francisco
Information from Avert, an international AIDS charity, on HIV and AIDS in Africa and on HIV/AIDS treatment and care, including universal access to ART
Progress toward universal access to HIV/AIDS treatment, the latest report from the World Health Organization (available in several languages)
Guidelines for antiretroviral therapy in adults and adolescents are provided by the World Health Organization and by the US Department of Health and Human Services
doi:10.1371/journal.pmed.0050053
PMCID: PMC2265759  PMID: 18336064
