1.  Biomarker Profiling by Nuclear Magnetic Resonance Spectroscopy for the Prediction of All-Cause Mortality: An Observational Study of 17,345 Persons 
PLoS Medicine  2014;11(2):e1001606.
In this study, Würtz and colleagues conducted high-throughput profiling of blood specimens in two large population-based cohorts in order to identify biomarkers for all-cause mortality and enhance risk prediction. The authors found that biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. However, further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers to guide screening and prevention.
Background
Early identification of ambulatory persons at high short-term risk of death could benefit targeted prevention. To identify biomarkers for all-cause mortality and enhance risk prediction, we conducted high-throughput profiling of blood specimens in two large population-based cohorts.
Methods and Findings
106 candidate biomarkers were quantified by nuclear magnetic resonance spectroscopy of non-fasting plasma samples from a random subset of the Estonian Biobank (n = 9,842; age range 18–103 y; 508 deaths during a median of 5.4 y of follow-up). Biomarkers for all-cause mortality were examined using stepwise proportional hazards models. Significant biomarkers were validated and incremental predictive utility assessed in a population-based cohort from Finland (n = 7,503; 176 deaths during 5 y of follow-up). Four circulating biomarkers predicted the risk of all-cause mortality among participants from the Estonian Biobank after adjusting for conventional risk factors: alpha-1-acid glycoprotein (hazard ratio [HR] 1.67 per 1-standard-deviation increment, 95% CI 1.53–1.82, p = 5×10⁻³¹), albumin (HR 0.70, 95% CI 0.65–0.76, p = 2×10⁻¹⁸), very-low-density lipoprotein particle size (HR 0.69, 95% CI 0.62–0.77, p = 3×10⁻¹²), and citrate (HR 1.33, 95% CI 1.21–1.45, p = 5×10⁻¹⁰). All four biomarkers were predictive of cardiovascular mortality, as well as death from cancer and other nonvascular diseases. One in five participants in the Estonian Biobank cohort with a biomarker summary score within the highest percentile died during the first year of follow-up, indicating prominent systemic reflections of frailty. The biomarker associations all replicated in the Finnish validation cohort. Including the four biomarkers in a risk prediction score improved risk assessment for 5-y mortality (increase in C-statistic 0.031, p = 0.01; continuous reclassification improvement 26.3%, p = 0.001).
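As a rough illustration of the survival-modelling step described above (a sketch only, not the authors' actual analysis pipeline; the data file, column names, and covariate list are hypothetical), a Cox proportional hazards model with biomarkers standardized to 1-SD units could be fitted in Python with the lifelines library:

    # Sketch only: Cox proportional hazards regression on standardized biomarkers,
    # loosely following the approach described above. File and column names are
    # hypothetical; covariates are assumed to be numerically coded (e.g., 0/1).
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("biobank_subset.csv")  # hypothetical extract of the cohort

    biomarkers = ["alpha1_acid_glycoprotein", "albumin", "vldl_particle_size", "citrate"]
    covariates = ["age", "sex", "smoking", "total_cholesterol", "systolic_bp", "diabetes"]

    # Standardize biomarkers so hazard ratios are per 1-standard-deviation increment.
    df[biomarkers] = (df[biomarkers] - df[biomarkers].mean()) / df[biomarkers].std()

    cph = CoxPHFitter()
    cph.fit(df[biomarkers + covariates + ["follow_up_years", "died"]],
            duration_col="follow_up_years", event_col="died")
    cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs and p-values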
Conclusions
Biomarker associations with cardiovascular, nonvascular, and cancer mortality suggest novel systemic connectivities across seemingly disparate morbidities. The biomarker profiling improved prediction of the short-term risk of death from all causes above established risk factors. Further investigations are needed to clarify the biological mechanisms and the utility of these biomarkers for guiding screening and prevention.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
A biomarker is a biological molecule found in blood, body fluids, or tissues that may signal an abnormal process, a condition, or a disease. The level of a particular biomarker may indicate a patient's risk of disease, or likely response to a treatment. For example, cholesterol levels are measured to assess the risk of heart disease. Most current biomarkers are used to test an individual's risk of developing a specific condition. There are none that accurately assess whether a person is at risk of ill health generally, or likely to die soon from a disease. Early and accurate identification of people who appear healthy but in fact have an underlying serious illness would provide valuable opportunities for preventative treatment.
While most tests measure the levels of a specific biomarker, there are some technologies that allow blood samples to be screened for a wide range of biomarkers. These include nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry. These tools have the potential to be used to screen the general population for a range of different biomarkers.
Why Was This Study Done?
Identifying new biomarkers that provide insight into the risk of death from all causes could be an important step in linking different diseases and assessing patient risk. In this study, the authors used NMR spectroscopy to screen blood samples for biomarkers that accurately predict the risk of death, particularly amongst the general population rather than amongst people already known to be ill.
What Did the Researchers Do and Find?
The researchers studied two large groups of people, one in Estonia and one in Finland. Both countries have set up health registries that collect and store blood samples and health records over many years. The registries include large numbers of people who are representative of the wider population.
The researchers first tested blood samples from a representative subset of the Estonian group, testing 9,842 samples in total. They looked at 106 different biomarkers in each sample using NMR spectroscopy. They also looked at the health records of this group and found that 508 people died during the follow-up period after the blood sample was taken, the majority from heart disease, cancer, and other diseases. Using statistical analysis, they looked for any links between the levels of different biomarkers in the blood and people's short-term risk of dying. They found that the levels of four biomarkers—plasma albumin, alpha-1-acid glycoprotein, very-low-density lipoprotein (VLDL) particle size, and citrate—appeared to accurately predict short-term risk of death. They repeated this study with the Finnish group, this time with 7,503 individuals (176 of whom died during the five-year follow-up period after giving a blood sample) and found similar results.
The researchers carried out further statistical analyses to take into account other known factors that might have contributed to the risk of life-threatening illness. These included factors such as age, weight, tobacco and alcohol use, cholesterol levels, and pre-existing illness, such as diabetes and cancer. The association between the four biomarkers and short-term risk of death remained the same even when controlling for these other factors.
The analysis also showed that combining the test results for all four biomarkers, to produce a biomarker score, provided a more accurate measure of risk than any of the biomarkers individually. This biomarker score also proved to be the strongest predictor of short-term risk of dying in the Estonian group. Individuals with a biomarker score in the top 20% had a risk of dying within five years that was 19 times greater than that of individuals with a score in the bottom 20% (288 versus 15 deaths).
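As a back-of-the-envelope check on the "19 times greater" figure (our own arithmetic, assuming the top and bottom 20% groups each contain roughly one fifth of the 9,842 participants):

    # Rough check of the quoted risk ratio; the equal-group-size assumption is ours,
    # the death counts are taken from the summary above.
    n_per_group = 9842 / 5              # ~1,968 people in each 20% group
    risk_top = 288 / n_per_group        # ~14.6% died within five years
    risk_bottom = 15 / n_per_group      # ~0.8% died within five years
    print(round(risk_top / risk_bottom, 1))  # ~19.2, consistent with "19 times greater"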
What Do These Findings Mean?
This study suggests that there are four biomarkers in the blood—alpha-1-acid glycoprotein, albumin, VLDL particle size, and citrate—that can be measured by NMR spectroscopy to assess whether otherwise healthy people are at short-term risk of dying from heart disease, cancer, and other illnesses. However, further validation of these findings is still required, and additional studies should examine the biomarker specificity and associations in settings closer to clinical practice. The combined biomarker score appears to be a more accurate predictor of risk than tests for more commonly known risk factors. Identifying individuals who are at high risk using these biomarkers might help to target preventative medical treatments to those with the greatest need.
However, there are several limitations to this study. As an observational study, it provides evidence of only a correlation between a biomarker score and ill health. It does not identify any underlying causes. Other factors, not detectable by NMR spectroscopy, might be the true cause of serious health problems and would provide a more accurate assessment of risk. Nor does this study identify what kinds of treatment might prove successful in reducing the risks. Therefore, more research is needed to determine whether testing for these biomarkers would provide any clinical benefit.
There were also some technical limitations to the study. NMR spectroscopy does not detect as many biomarkers as mass spectrometry, which might therefore identify further biomarkers for a more accurate risk assessment. In addition, because both study groups were northern European, it is not yet known whether the results would be the same in other ethnic groups or populations with different lifestyles.
In spite of these limitations, the fact that the same four biomarkers are associated with a short-term risk of death from a variety of diseases does suggest that similar underlying mechanisms are taking place. This observation points to some potentially valuable areas of research to understand precisely what's contributing to the increased risk.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001606
The US National Institute of Environmental Health Sciences has information on biomarkers
The US Food and Drug Administration has a Biomarker Qualification Program to help researchers in identifying and evaluating new biomarkers
Further information on the Estonian Biobank is available
The Computational Medicine Research Team of the University of Oulu and the University of Bristol have a webpage that provides further information on high-throughput biomarker profiling by NMR spectroscopy
doi:10.1371/journal.pmed.1001606
PMCID: PMC3934819  PMID: 24586121
2.  A Risk Prediction Model for the Assessment and Triage of Women with Hypertensive Disorders of Pregnancy in Low-Resourced Settings: The miniPIERS (Pre-eclampsia Integrated Estimate of RiSk) Multi-country Prospective Cohort Study 
PLoS Medicine  2014;11(1):e1001589.
Beth Payne and colleagues use a risk prediction model, the Pre-eclampsia Integrated Estimate of RiSk (miniPIERS), to help inform the clinical assessment and triage of women with hypertensive disorders of pregnancy in low-resourced settings.
Background
Pre-eclampsia/eclampsia are leading causes of maternal mortality and morbidity, particularly in low- and middle-income countries (LMICs). We developed the miniPIERS risk prediction model to provide a simple, evidence-based tool to identify pregnant women in LMICs at increased risk of death or major hypertensive-related complications.
Methods and Findings
From 1 July 2008 to 31 March 2012, in five LMICs, data were collected prospectively on 2,081 women with any hypertensive disorder of pregnancy admitted to a participating centre. Candidate predictors collected within 24 hours of admission were entered into a step-wise backward elimination logistic regression model to predict a composite adverse maternal outcome within 48 hours of admission. Model internal validation was accomplished by bootstrapping and external validation was completed using data from 1,300 women in the Pre-eclampsia Integrated Estimate of RiSk (fullPIERS) dataset. Predictive performance was assessed for calibration, discrimination, and stratification capacity. The final miniPIERS model included: parity (nulliparous versus multiparous); gestational age on admission; headache/visual disturbances; chest pain/dyspnoea; vaginal bleeding with abdominal pain; systolic blood pressure; and dipstick proteinuria. The miniPIERS model was well-calibrated and had an area under the receiver operating characteristic curve (AUC ROC) of 0.768 (95% CI 0.735–0.801) with an average optimism of 0.037. External validation AUC ROC was 0.713 (95% CI 0.658–0.768). A predicted probability ≥25% to define a positive test classified women with 85.5% accuracy. Limitations of this study include the composite outcome and the broad inclusion criteria of any hypertensive disorder of pregnancy. This broad approach was used to optimize model generalizability.
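The model-building and evaluation steps described above can be sketched as follows (an illustrative sketch only, not the published miniPIERS coefficients; the data file and variable names are hypothetical):

    # Illustrative sketch of a logistic-regression risk model with discrimination
    # (AUC ROC) and a 25% probability threshold, loosely following the description
    # above. Data file and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("minipiers_like_data.csv")  # hypothetical admission data

    predictors = ["nulliparous", "gestational_age", "headache_visual_disturbance",
                  "chest_pain_dyspnoea", "vaginal_bleeding_abdominal_pain",
                  "systolic_bp", "dipstick_proteinuria"]

    X = sm.add_constant(df[predictors])
    y = df["adverse_outcome_48h"]

    model = sm.Logit(y, X).fit()
    predicted = model.predict(X)

    print(model.summary())
    print("AUC ROC:", roc_auc_score(y, predicted))            # discrimination
    accuracy = ((predicted >= 0.25).astype(int) == y).mean()  # >=25% defines a positive test
    print("Classification accuracy at the 25% threshold:", accuracy)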
Conclusions
The miniPIERS model shows reasonable ability to identify women at increased risk of adverse maternal outcomes associated with the hypertensive disorders of pregnancy. It could be used in LMICs to identify women who would benefit most from interventions such as magnesium sulphate, antihypertensives, or transportation to a higher level of care.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Each year, ten million women develop pre-eclampsia or a related hypertensive (high blood pressure) disorder of pregnancy and 76,000 women die as a result. Globally, hypertensive disorders of pregnancy cause around 12% of maternal deaths—deaths of women during or shortly after pregnancy. The mildest of these disorders is gestational hypertension, high blood pressure that develops after 20 weeks of pregnancy. Gestational hypertension does not usually harm the mother or her unborn child and resolves after delivery but up to a quarter of women with this condition develop pre-eclampsia, a combination of hypertension and protein in the urine (proteinuria). Women with mild pre-eclampsia may not have any symptoms—the condition is detected during antenatal checks—but more severe pre-eclampsia can cause headaches, blurred vision, and other symptoms, and can lead to eclampsia (fits), multiple organ failure, and death of the mother and/or her baby. The only “cure” for pre-eclampsia is to deliver the baby as soon as possible but women are sometimes given antihypertensive drugs to lower their blood pressure or magnesium sulfate to prevent seizures.
Why Was This Study Done?
Women in low- and middle-income countries (LMICs) are more likely to develop complications of pre-eclampsia than women in high-income countries and most of the deaths associated with hypertensive disorders of pregnancy occur in LMICs. The high burden of illness and death in LMICs is thought to be primarily due to delays in triage (the identification of women who are or may become severely ill and who need specialist care) and delays in transporting these women to facilities where they can receive appropriate care. Because there is a shortage of health care workers who are adequately trained in the triage of suspected cases of hypertensive disorders of pregnancy in many LMICs, one way to improve the situation might be to design a simple tool to identify women at increased risk of complications or death from hypertensive disorders of pregnancy. Here, the researchers develop miniPIERS (Pre-eclampsia Integrated Estimate of RiSk), a clinical risk prediction model for adverse outcomes among women with hypertensive disorders of pregnancy suitable for use in community and primary health care facilities in LMICs.
What Did the Researchers Do and Find?
The researchers used data on candidate predictors of outcome that are easy to collect and/or measure in all health care settings and that are associated with pre-eclampsia from women admitted with any hypertensive disorder of pregnancy to participating centers in five LMICs to build a model to predict death or a serious complication such as organ damage within 48 hours of admission. The miniPIERS model included parity (whether the woman had been pregnant before), gestational age (length of pregnancy), headache/visual disturbances, chest pain/shortness of breath, vaginal bleeding with abdominal pain, systolic blood pressure, and proteinuria detected using a dipstick. The model was well-calibrated (the predicted risk of adverse outcomes agreed with the observed risk of adverse outcomes among the study participants), it had a good discriminatory ability (it could separate women who had an adverse outcome from those who did not), and it designated women as being at high risk (25% or greater probability of an adverse outcome) with an accuracy of 85.5%. Importantly, external validation using data collected in fullPIERS, a study that developed a more complex clinical prediction model based on data from women attending tertiary hospitals in high-income countries, confirmed the predictive performance of miniPIERS.
What Do These Findings Mean?
These findings indicate that the miniPIERS model performs reasonably well as a tool to identify women at increased risk of adverse maternal outcomes associated with hypertensive disorders of pregnancy. Because miniPIERS only includes simple-to-measure personal characteristics, symptoms, and signs, it could potentially be used in resource-constrained settings to identify the women who would benefit most from interventions such as transportation to a higher level of care. However, further external validation of miniPIERS is needed using data collected from women living in LMICs before the model can be used during routine antenatal care. Moreover, the value of miniPIERS needs to be confirmed in implementation projects that examine whether its potential translates into clinical improvements. For now, though, the model could provide the basis for an education program to increase the knowledge of women, families, and community health care workers in LMICs about the signs and symptoms of hypertensive disorders of pregnancy.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001589.
The World Health Organization provides guidelines for the management of hypertensive disorders of pregnancy in low-resourced settings
The Maternal and Child Health Integrated Program provides information on pre-eclampsia and eclampsia targeted to low-resourced settings along with a tool-kit for LMIC providers
The US National Heart, Lung, and Blood Institute provides information about high blood pressure in pregnancy and a guide to lowering blood pressure in pregnancy
The UK National Health Service Choices website provides information about pre-eclampsia
The US not-for-profit organization Preeclampsia Foundation provides information about all aspects of pre-eclampsia; its website includes some personal stories
The UK charity Healthtalkonline also provides personal stories about hypertensive disorders of pregnancy
MedlinePlus provides links to further information about high blood pressure and pregnancy (in English and Spanish); the MedlinePlus Encyclopedia has a video about pre-eclampsia (also in English and Spanish)
More information about miniPIERS and about fullPIERS is available
doi:10.1371/journal.pmed.1001589
PMCID: PMC3897359  PMID: 24465185
3.  The Preclinical Natural History of Serous Ovarian Cancer: Defining the Target for Early Detection 
PLoS Medicine  2009;6(7):e1000114.
Pat Brown and colleagues carry out a modeling study and define what properties a biomarker-based screening test would require in order to be clinically useful.
Background
Ovarian cancer kills approximately 15,000 women in the United States every year, and more than 140,000 women worldwide. Most deaths from ovarian cancer are caused by tumors of the serous histological type, which are rarely diagnosed before the cancer has spread. Rational design of a potentially life-saving early detection and intervention strategy requires understanding the lesions we must detect in order to prevent lethal progression. Little is known about the natural history of lethal serous ovarian cancers before they become clinically apparent. We can learn about this occult period by studying the unsuspected serous cancers that are discovered in a small fraction of apparently healthy women who undergo prophylactic bilateral salpingo-oophorectomy (PBSO).
Methods and Findings
We developed models for the growth, progression, and detection of occult serous cancers on the basis of a comprehensive analysis of published data on serous cancers discovered by PBSO in BRCA1 mutation carriers. Our analysis yielded several critical insights into the early natural history of serous ovarian cancer. First, these cancers spend on average more than 4 y as in situ, stage I, or stage II cancers and approximately 1 y as stage III or IV cancers before they become clinically apparent. Second, for most of the occult period, serous cancers are less than 1 cm in diameter, and not visible on gross examination of the ovaries and Fallopian tubes. Third, the median diameter of a serous ovarian cancer when it progresses to an advanced stage (stage III or IV) is about 3 cm. Fourth, to achieve 50% sensitivity in detecting tumors before they advance to stage III, an annual screen would need to detect tumors of 1.3 cm in diameter; 80% detection sensitivity would require detecting tumors less than 0.4 cm in diameter. Fifth, to achieve a 50% reduction in serous ovarian cancer mortality with an annual screen, a test would need to detect tumors of 0.5 cm in diameter.
Conclusions
Our analysis has formalized essential conditions for successful early detection of serous ovarian cancer. Although the window of opportunity for early detection of these cancers lasts for several years, developing a test sufficiently sensitive and specific to take advantage of that opportunity will be a challenge. We estimated that the tumors we would need to detect to achieve even 50% sensitivity are more than 200 times smaller than the clinically apparent serous cancers typically used to evaluate performance of candidate biomarkers; none of the biomarker assays reported to date comes close to the required level of performance. Overcoming the signal-to-noise problem inherent in detection of tiny tumors will likely require discovery of truly cancer-specific biomarkers or development of novel approaches beyond traditional blood protein biomarkers. While this study was limited to ovarian cancers of serous histological type and to those arising in BRCA1 mutation carriers specifically, we believe that the results are relevant to other hereditary serous cancers and to sporadic ovarian cancers. A similar approach could be applied to other cancers to aid in defining their early natural history and to guide rational design of an early detection strategy.
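The "more than 200 times smaller" comparison refers to tumour volume rather than diameter. As a rough illustration (our own arithmetic, assuming approximately spherical tumours and using the ~3 cm median diameter at progression to an advanced stage as a stand-in for a clinically apparent cancer):

    # Rough arithmetic: volume scales with the cube of the diameter, so a modest
    # difference in diameter implies a very large difference in volume.
    # The assumptions (spherical tumours, 3 cm reference diameter) are ours.
    from math import pi

    def sphere_volume(diameter_cm):
        return (pi / 6) * diameter_cm ** 3

    reference = sphere_volume(3.0)       # ~median diameter at progression to stage III/IV
    screen_target = sphere_volume(0.5)   # target size for a 50% mortality reduction
    print(round(reference / screen_target))  # 216-fold difference in volume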
Please see later in the article for Editors' Summary
Editors' Summary
Background
Every year about 190,000 women develop ovarian cancer and more than 140,000 die from the disease. Ovarian cancer occurs when a cell on the surface of the ovaries (two small organs in the pelvis that produce eggs) or in the Fallopian tubes (which connect the ovaries to the womb) acquires genetic changes (mutations) that allow it to grow uncontrollably and to spread around the body (metastasize). For women whose cancer is diagnosed when it is confined to the site of origin—ovary or Fallopian tube—(stage I disease), the outlook is good; 70%–80% of these women survive for at least 5 y. However, very few ovarian cancers are diagnosed this early. Usually, by the time the cancer causes symptoms (often only vague abdominal pain and mild digestive disturbances), it has spread into the pelvis (stage II disease), into the space around the gut, stomach, and liver (stage III disease), or to distant organs (stage IV disease). Patients with advanced-stage ovarian cancer are treated with surgery and chemotherapy but, despite recent treatment improvements, only 15% of women diagnosed with stage IV disease survive for 5 y.
Why Was This Study Done?
Most deaths from ovarian cancer are caused by serous ovarian cancer, a tumor subtype that is rarely diagnosed before it has spread. Early detection of serous ovarian cancer would save the lives of many women but no one knows what these cancers look like before they spread or how long they grow before they become clinically apparent. Learning about this occult (hidden) period of ovarian cancer development by observing tumors from their birth to late-stage disease is not feasible. However, some aspects of the early natural history of ovarian cancer can be studied by using data collected from healthy women who have had their ovaries and Fallopian tubes removed (prophylactic bilateral salpingo-oophorectomy [PBSO]) because they have inherited a mutated version of the BRCA1 gene that increases their ovarian cancer risk. In a few of these women, unsuspected ovarian cancer is discovered during PBSO. In this study, the researchers identify and analyze the available reports on occult serous ovarian cancer found this way and then develop mathematical models describing the early natural history of ovarian cancer.
What Did the Researchers Do and Find?
The researchers first estimated the time period during which the detection of occult tumors might save lives using the data from these reports. Serous ovarian cancers, they estimated, spend more than 4 y as in situ (a very early stage of cancer development), stage I, or stage II cancers and about 1 y as stage III and IV cancers before they become clinically apparent. Next, the researchers used the data to develop mathematical models for the growth, progression, and diagnosis of serous ovarian cancer (the accuracy of which depends on the assumptions used to build the models and on the quality of the data fed into them). These models indicated that, for most of the occult period, serous cancers had a diameter of less than 1 cm (too small to be detected during surgery or by gross examination of the ovaries or Fallopian tubes) and that more than half of serous cancers had advanced to stage III/IV by the time they measured 3 cm across. Furthermore, to enable the detection of half of serous ovarian cancers before they reached stage III, an annual screening test would need to detect cancers with a diameter of 1.3 cm and to halve deaths from serous ovarian cancer, an annual screening test would need to detect 0.5-cm diameter tumors.
What Do These Findings Mean?
These findings suggest that the time period over which the early detection of serous ovarian cancer would save lives is surprisingly long. More soberingly, the authors find that a test that is sensitive and specific enough to take advantage of this “window of opportunity” would need to detect tumors hundreds of times smaller than clinically apparent serous cancers. So far no ovarian cancer-specific protein or other biomarker has been identified that could be used to develop a test that comes anywhere near this level of performance. Identification of truly ovarian cancer-specific biomarkers or novel strategies will be needed in order to take advantage of the window of opportunity. The stages prior to clinical presentation of other lethal cancers are still very poorly understood. Similar studies of the early natural history of these cancers could help guide the development of rational early detection strategies.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000114.
The US National Cancer Institute provides a brief description of what cancer is and how it develops and information on all aspects of ovarian cancer for patients and professionals. It also provides a fact sheet on BRCA1 mutations and cancer risk (in English and Spanish)
The UK charity Cancerbackup also provides information about all aspects of ovarian cancer
MedlinePlus provides a list of links to additional information about ovarian cancer (in English and Spanish)
The Canary Foundation is a nonprofit organization dedicated to development of effective strategies for early detection of cancers including ovarian cancer.
doi:10.1371/journal.pmed.1000114
PMCID: PMC2711307  PMID: 19636370
4.  Development and Validation of a Risk Score for Chronic Kidney Disease in HIV Infection Using Prospective Cohort Data from the D:A:D Study 
PLoS Medicine  2015;12(3):e1001809.
Background
Chronic kidney disease (CKD) is a major health issue for HIV-positive individuals, associated with increased morbidity and mortality. Development and implementation of a risk score model for CKD would allow comparison of the risks and benefits of adding potentially nephrotoxic antiretrovirals to a treatment regimen and would identify those at greatest risk of CKD. The aims of this study were to develop a simple, externally validated, and widely applicable long-term risk score model for CKD in HIV-positive individuals that can guide decision making in clinical practice.
Methods and Findings
A total of 17,954 HIV-positive individuals from the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study with ≥3 estimated glomerular filtration rate (eGFR) values after 1 January 2004 were included. Baseline was defined as the first eGFR > 60 ml/min/1.73 m2 after 1 January 2004; individuals with exposure to tenofovir, atazanavir, atazanavir/ritonavir, lopinavir/ritonavir, or other boosted protease inhibitors before baseline were excluded. CKD was defined as confirmed (>3 mo apart) eGFR ≤ 60 ml/min/1.73 m2. Poisson regression was used to develop a risk score, which was then externally validated in two independent cohorts.
In the D:A:D study, 641 individuals developed CKD during 103,185 person-years of follow-up (PYFU; incidence 6.2/1,000 PYFU, 95% CI 5.7–6.7; median follow-up 6.1 y, range 0.3–9.1 y). Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease (CVD) predicted CKD. The adjusted incidence rate ratios of these nine categorical variables were scaled and summed to create the risk score. The median risk score at baseline was −2 (interquartile range –4 to 2). There was a 1:393 chance of developing CKD in the next 5 y in the low risk group (risk score < 0, 33 events), rising to 1:47 and 1:6 in the medium (risk score 0–4, 103 events) and high risk groups (risk score ≥ 5, 505 events), respectively. Number needed to harm (NNTH) at 5 y when starting unboosted atazanavir or lopinavir/ritonavir among those with a low risk score was 1,702 (95% CI 1,166–3,367); NNTH was 202 (95% CI 159–278) and 21 (95% CI 19–23), respectively, for those with a medium and high risk score. NNTH was 739 (95% CI 506–1462), 88 (95% CI 69–121), and 9 (95% CI 8–10) for those with a low, medium, and high risk score, respectively, starting tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor.
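As a schematic of how a summed point score and a number needed to harm (NNTH) are typically derived (the point values and example risks below are placeholders, not the published D:A:D weights):

    # Schematic only: summing scaled points into a risk score and deriving an NNTH.
    # Point values and example risks are hypothetical placeholders, not the
    # published D:A:D weights.
    def risk_score(points, patient):
        # 'patient' maps predictor names (e.g. "hepatitis_c", "diabetes") to True/False;
        # 'points' maps the same names to integer scores.
        return sum(points.get(name, 0) for name, present in patient.items() if present)

    def nnth(risk_with_drug, risk_without_drug):
        # NNTH = 1 / absolute risk increase over the time horizon (here 5 years).
        return 1.0 / (risk_with_drug - risk_without_drug)

    # Example: a 5-y CKD risk rising from ~17% to ~28% on a nephrotoxic regimen
    # corresponds to an NNTH of about 9, the order of magnitude reported above
    # for the high-risk group.
    print(round(nnth(0.28, 0.17)))  # -> 9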
The Royal Free Hospital Clinic Cohort included 2,548 individuals, of whom 94 individuals developed CKD (3.7%) during 18,376 PYFU (median follow-up 7.4 y, range 0.3–12.7 y). Of 2,013 individuals included from the SMART/ESPRIT control arms, 32 individuals developed CKD (1.6%) during 8,452 PYFU (median follow-up 4.1 y, range 0.6–8.1 y). External validation showed that the risk score predicted well in these cohorts. Limitations of this study included limited data on race and no information on proteinuria.
Conclusions
Both traditional and HIV-related risk factors were predictive of CKD. These factors were used to develop an externally validated risk score for CKD in HIV infection that has direct clinical relevance, allowing patients and clinicians to weigh the benefits of certain antiretrovirals against the risk of CKD and to identify those at greatest risk of CKD.
Editors’ Summary
Background
About 35 million people are currently infected with HIV, the virus that causes AIDS. HIV destroys CD4 lymphocytes and other immune system cells, leaving infected individuals susceptible to other infections. HIV infection can be controlled, but not cured, using combination antiretroviral therapy (cART), and, nowadays, the life expectancy of many HIV-positive individuals is similar to that of HIV-negative people. HIV-positive individuals nevertheless experience some illnesses more frequently than HIV-negative people do. For example, up to a third of HIV-positive individuals develop chronic kidney disease (CKD), which is associated with an increased risk of cardiovascular disease and death. Persons with CKD may have impaired function of the filtration units in the kidneys that remove waste products and excess water from the blood to make urine, thereby leading to a reduced blood filtration rate (the estimated glomerular filtration rate [eGFR]) and accumulation of waste products in the blood. Symptoms of CKD, which rarely occur until the disease is advanced, include tiredness, swollen feet, and frequent urination. Advanced stages of CKD cannot be cured, but its progression can be slowed by, for example, controlling hypertension (high blood pressure) and diabetes (two CKD risk factors) and by adopting a healthy lifestyle.
Why Was This Study Done?
The burden of CKD may increase among HIV-positive individuals as they age, and clinicians need to know which individuals are at high risk of developing CKD when choosing cART regimens for their patients. In addition, clinicians need to be able to identify those HIV-positive individuals at greatest risk of CKD so that they can monitor them for early signs of kidney disease. Some antiretroviral drugs—for example, tenofovir and atazanavir/ritonavir (a boosted protease inhibitor)—are associated with kidney damage. Clinicians may need to weigh the benefits and risks of giving such potentially nephrotoxic drugs to individuals who already have a high CKD risk. Here, the researchers develop and validate a simple, widely applicable risk score (a risk prediction model) for CKD among HIV-positive individuals and investigate the relationship between CKD and potentially nephrotoxic antiretroviral drugs among individuals with different CKD risk score profiles.
What Did the Researchers Do and Find?
To develop their CKD risk score, the researchers used clinical and demographic data collected from 17,954 HIV-positive individuals enrolled in the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study who had an eGFR > 60 ml/min/1.73 m2 and were not taking a potentially nephrotoxic antiretroviral at baseline. During 103,185 person-years of follow-up, 641 individuals developed CKD. Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease predicted CKD. The researchers included these nine factors in their risk score model (which is available online) and defined three risk groups: low (risk score < 0), medium (risk score 0–4), and high (risk score ≥ 5) risk of CKD development in the next five years. Specifically, there was a 1 in 393, 1 in 47, and 1 in 6 chance of developing CKD in the next five years in the low, medium, and high risk groups, respectively. Because some patients started to use potentially nephrotoxic antiretroviral drugs during follow-up, the researchers were able to use their risk score model to calculate how many patients would have to be treated with one of these drugs for an additional patient to develop CKD over five years in each risk group. This “number needed to harm” (NNTH) for patients starting treatment with tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor was 739, 88, and 9 in the low, medium, and high risk groups, respectively. Finally, the researchers validated the accuracy of their risk score in two independent HIV study groups.
What Do These Findings Mean?
These findings provide a simple, validated risk score for CKD and indicate that the NNTH when starting potentially nephrotoxic antiretrovirals was low among HIV-positive individuals at the highest risk of CKD (i.e., treating just nine individuals with nephrotoxic antiretroviral drugs will likely lead to an additional case of CKD in five years). Although various aspects of the study, including the lack of data on race, limit the accuracy of these findings, these findings highlight the need for monitoring, screening, and chronic disease prevention to minimize the risk of HIV-positive individuals developing diabetes, hypertension, or cardiovascular disease, or becoming coinfected with hepatitis C, all of which contribute to the CKD risk score. Moreover, the development of a tool for estimating an individual’s five-year risk of developing CKD with or without the addition of potentially nephrotoxic antiretroviral drugs will enable clinicians and patients to weigh the benefits of certain antiretroviral drugs against the risk of CKD and make informed decisions about treatment options.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001809.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS, summaries of recent research findings on HIV care and treatment, and personal stories about living with AIDS/HIV
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including personal stories about living with HIV/AIDS
The World Health Organization provides information on all aspects of HIV/AIDS (in several languages), including its guidelines on the use of ART for treating and preventing HIV infection
The UNAIDS World AIDS Day Report 2014 provides up-to-date information about the AIDS epidemic and efforts to halt it
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
A tool for calculating the CKD risk score developed in this study is available
Additional information about the D:A:D study is available
Amanda Mocroft and colleagues develop and validate a model for determining risk of developing chronic kidney disease for individuals with HIV if treated with different antiretroviral therapies.
doi:10.1371/journal.pmed.1001809
PMCID: PMC4380415  PMID: 25826420
5.  Reduced Glomerular Filtration Rate and Its Association with Clinical Outcome in Older Patients at Risk of Vascular Events: Secondary Analysis 
PLoS Medicine  2009;6(1):e1000016.
Background
Reduced glomerular filtration rate (GFR) is associated with increased cardiovascular risk in young and middle-aged individuals. Associations with cardiovascular disease and mortality in older people are less clearly established. We aimed to determine the predictive value of the GFR for mortality and morbidity using data from the 5,804 participants randomized in the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER).
Methods and Findings
Glomerular filtration rate was estimated (eGFR) using the Modification of Diet in Renal Disease equation and was categorized into the ranges 20–40, 40–50, 50–60, and ≥ 60 ml/min/1.73 m2. Baseline risk factors were analysed by category of eGFR, with and without adjustment for other risk factors. The associations between baseline eGFR and morbidity and mortality outcomes, accrued after an average of 3.2 y, were investigated using Cox proportional hazards models adjusting for traditional risk factors. We tested for evidence of an interaction between the benefit of statin treatment and baseline eGFR status. Age, low-density lipoprotein (LDL) and high-density lipoprotein (HDL) cholesterol, C-reactive protein (CRP), body mass index, fasting glucose, female sex, and histories of hypertension and vascular disease were associated with eGFR (p = 0.001 or less) after adjustment for other risk factors. Low eGFR was independently associated with risk of all-cause mortality, vascular mortality, and other noncancer mortality and with fatal and nonfatal coronary and heart failure events; the hazard ratios (95% confidence intervals [CIs]), adjusted for CRP and other risk factors, for eGFR < 40 ml/min/1.73 m2 relative to eGFR ≥ 60 ml/min/1.73 m2 were, respectively, 2.04 (1.48–2.80), 2.37 (1.53–3.67), 3.52 (1.78–6.96), 1.64 (1.18–2.27), and 3.31 (2.03–5.41). There were no nominally statistically significant interactions (p < 0.05) between randomized treatment allocation and eGFR for clinical outcomes, with the exception of the outcome of coronary heart disease death or nonfatal myocardial infarction (p = 0.021), where the interaction suggested increased benefit of statin treatment in subjects with impaired GFRs.
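For reference, a sketch of the four-variable Modification of Diet in Renal Disease (MDRD) equation mentioned above (one common form; the IDMS-traceable re-expression uses 175 rather than 186 as the constant, and whether this exact form matches the study's implementation is an assumption on our part):

    # Sketch of one common form of the four-variable MDRD eGFR equation.
    # Whether this exact form (constant 186; the IDMS-traceable version uses 175)
    # matches the study's implementation is an assumption.
    def egfr_mdrd(serum_creatinine_mg_dl, age_years, female, black):
        egfr = 186.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
        if female:
            egfr *= 0.742
        if black:
            egfr *= 1.212
        return egfr  # ml/min/1.73 m2

    # Example: an older woman with modestly raised creatinine falls into the 20-40 range.
    print(round(egfr_mdrd(1.4, 75, female=True, black=False)))  # ~39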
Conclusions
We have established that, in an elderly population over the age of 70 y, impaired GFR is associated with female sex, with presence of vascular disease, and with levels of other risk factors that would be associated with increased risk of vascular disease. Further, impaired GFR is independently associated with significantly increased risk of all-cause mortality and fatal vascular events and with composite fatal and nonfatal coronary and heart failure outcomes. Our analyses of the benefits of statin treatment in relation to baseline GFR suggest that there is no reason to exclude elderly patients with impaired renal function from treatment with a statin.
Using data from the PROSPER trial, Ian Ford and colleagues investigate whether reduced glomerular filtration rate is associated with cardiovascular and mortality risk among elderly people.
Editors' Summary
Background.
Cardiovascular disease (CVD)—disease that affects the heart and/or the blood vessels—is a common cause of death in developed countries. In the USA, for example, the single leading cause of death is coronary heart disease, a CVD in which narrowing of the heart's blood vessels slows or stops the blood supply to the heart and eventually causes a heart attack. Other types of CVD include stroke (in which narrowing of the blood vessels interrupts the brain's blood supply) and heart failure (a condition in which the heart can no longer pump enough blood to the rest of the body). Many factors increase the risk of developing CVD, including high blood pressure (hypertension), high blood cholesterol, having diabetes, smoking, and being overweight. Tools such as the “Framingham risk calculator” assess an individual's overall CVD risk by taking these and other risk factors into account. CVD risk can be minimized by taking drugs to reduce blood pressure or cholesterol levels (for example, pravastatin) and by making lifestyle changes.
Why Was This Study Done?
Another potential risk factor for CVD is impaired kidney (renal) function. In healthy people, the kidneys filter waste products and excess fluid out of the blood. A reduced “estimated glomerular filtration rate” (eGFR), which indicates impaired renal function, is associated with increased CVD in young and middle-aged people and increased all-cause and cardiovascular death in people who have vascular disease. But is reduced eGFR also associated with CVD and death in older people? If it is, it would be worth encouraging elderly people with reduced eGFR to avoid other CVD risk factors. In this study, the researchers determine the predictive value of eGFR for all-cause and vascular mortality (deaths caused by CVD) and for incident vascular events (a first heart attack, stroke, or heart failure) using data from the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER). This clinical trial examined pravastatin's effects on CVD development among 70–82 year olds with pre-existing vascular disease or an increased risk of CVD because of smoking, hypertension, or diabetes.
What Did the Researchers Do and Find?
The trial participants were divided into four groups based on their eGFR at the start of the study. The researchers then investigated the association between baseline CVD risk factors and baseline eGFR and between baseline eGFR and vascular events and deaths that occurred during the 3-year study. Several established CVD risk factors were associated with a reduced eGFR after allowing for other risk factors. In addition, people with a low eGFR (between 20 and 40 units) were twice as likely to die from any cause as people with an eGFR above 60 units (the normal eGFR for a young person is 100 units; eGFR decreases with age) and more than three times as likely to have nonfatal coronary heart disease or heart failure. A low eGFR also increased the risk of vascular mortality, other noncancer deaths, and fatal coronary heart disease and heart failure. Finally, pravastatin treatment reduced coronary heart disease deaths and nonfatal heart attacks most effectively among participants with the greatest degree of eGFR impairment.
What Do These Findings Mean?
These findings suggest that, in elderly people, impaired renal function is associated with levels of established CVD risk factors that increase the risk of vascular disease. They also suggest that impaired kidney function increases the risk of all-cause mortality, fatal vascular events, and fatal and nonfatal coronary heart disease and heart failure. Because the study participants were carefully chosen for inclusion in PROSPER, these findings may not be generalizable to all elderly people with vascular disease or vascular disease risk factors. Nevertheless, increased efforts should probably be made to encourage elderly people with reduced eGFR and other vascular risk factors to make lifestyle changes to reduce their overall CVD risk. Finally, although the effect of statins in elderly patients with renal dysfunction needs to be examined further, these findings suggest that this group of patients should benefit at least as much from statins as elderly patients with healthy kidneys.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000016.
The MedlinePlus Encyclopedia has pages on coronary heart disease, stroke, and heart failure (in English and Spanish)
MedlinePlus provides links to many other sources of information on heart disease, vascular disease, and stroke (in English and Spanish)
The US National Institute of Diabetes and Digestive and Kidney Diseases provides information on how the kidneys work and what can go wrong with them, including a list of links to further information about kidney disease
The American Heart Association provides information on all aspects of cardiovascular disease for patients, caregivers, and professionals (in several languages)
More information about PROSPER is available on the Web site of the Vascular Biochemistry Department of the University of Glasgow
doi:10.1371/journal.pmed.1000016
PMCID: PMC2628400  PMID: 19166266
6.  Risk Models to Predict Chronic Kidney Disease and Its Progression: A Systematic Review 
PLoS Medicine  2012;9(11):e1001344.
A systematic review of risk prediction models conducted by Justin Echouffo-Tcheugui and Andre Kengne examines the evidence base for prediction of chronic kidney disease risk and its progression, and suitability of such models for clinical use.
Background
Chronic kidney disease (CKD) is common, and associated with increased risk of cardiovascular disease and end-stage renal disease, which are potentially preventable through early identification and treatment of individuals at risk. Although risk factors for occurrence and progression of CKD have been identified, their utility for CKD risk stratification through prediction models remains unclear. We critically assessed risk models to predict CKD and its progression, and evaluated their suitability for clinical use.
Methods and Findings
We systematically searched MEDLINE and Embase (1 January 1980 to 20 June 2012). Dual review was conducted to identify studies that reported on the development, validation, or impact assessment of a model constructed to predict the occurrence/presence of CKD or progression to advanced stages. Data were extracted on study characteristics, risk predictors, discrimination, calibration, and reclassification performance of models, as well as validation and impact analyses. We included 26 publications reporting on 30 CKD occurrence prediction risk scores and 17 CKD progression prediction risk scores. The vast majority of CKD risk models had acceptable-to-good discriminatory performance (area under the receiver operating characteristic curve > 0.70) in the derivation sample. Calibration was less commonly assessed, but overall was found to be acceptable. Only eight CKD occurrence and five CKD progression risk models have been externally validated, displaying modest-to-acceptable discrimination. Whether novel biomarkers of CKD (circulatory or genetic) can improve prediction largely remains unclear, and impact studies of CKD prediction models have not yet been conducted. Limitations of risk models include the lack of ethnic diversity in derivation samples, and the scarcity of validation studies. The review is limited by the lack of an agreed-on system for rating prediction models, and the difficulty of assessing publication bias.
Conclusions
The development and clinical application of renal risk scores is in its infancy; however, the discriminatory performance of existing tools is acceptable. The effect of using these models in practice is still to be explored.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Chronic kidney disease (CKD)—the gradual loss of kidney function—is increasingly common worldwide. In the US, for example, about 26 million adults have CKD, and millions more are at risk of developing the condition. Throughout life, small structures called nephrons inside the kidneys filter waste products and excess water from the blood to make urine. If the nephrons stop working because of injury or disease, the rate of blood filtration decreases, and dangerous amounts of waste products such as creatinine build up in the blood. Symptoms of CKD, which rarely occur until the disease is very advanced, include tiredness, swollen feet and ankles, puffiness around the eyes, and frequent urination, especially at night. There is no cure for CKD, but progression of the disease can be slowed by controlling high blood pressure and diabetes, both of which cause CKD, and by adopting a healthy lifestyle. The same interventions also reduce the chances of CKD developing in the first place.
Why Was This Study Done?
CKD is associated with an increased risk of end-stage renal disease, which is treated with dialysis or by kidney transplantation (renal replacement therapies), and of cardiovascular disease. These life-threatening complications are potentially preventable through early identification and treatment of CKD, but most people present with advanced disease. Early identification would be particularly useful in developing countries, where renal replacement therapies are not readily available and resources for treating cardiovascular problems are limited. One way to identify people at risk of a disease is to use a “risk model.” Risk models are constructed by testing the ability of different combinations of risk factors that are associated with a specific disease to identify those individuals in a “derivation sample” who have the disease. The model is then validated on an independent group of people. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the researchers critically assess the ability of existing CKD risk models to predict the occurrence of CKD and its progression, and evaluate their suitability for clinical use.
What Did the Researchers Do and Find?
The researchers identified 26 publications reporting on 30 risk models for CKD occurrence and 17 risk models for CKD progression that met their predefined criteria. The risk factors most commonly included in these models were age, sex, body mass index, diabetes status, systolic blood pressure, serum creatinine, protein in the urine, and serum albumin or total protein. Nearly all the models had acceptable-to-good discriminatory performance (a measure of how well a model separates people who have a disease from people who do not have the disease) in the derivation sample. Not all the models had been calibrated (assessed for whether the average predicted risk within a group matched the proportion that actually developed the disease), but in those that had been assessed, calibration was good. Only eight CKD occurrence and five CKD progression risk models had been externally validated; discrimination in the validation samples was modest-to-acceptable. Finally, very few studies had assessed whether adding extra variables to CKD risk models (for example, genetic markers) improved prediction, and none had assessed the impact of adopting CKD risk models on the clinical care and outcomes of patients.
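The two performance checks discussed above, discrimination and calibration, can be illustrated with a short sketch (the data file and column names are hypothetical):

    # Illustrative sketch: discrimination (AUC) and calibration (mean predicted
    # risk vs observed event rate within tenths of predicted risk). File and
    # column names are hypothetical.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("ckd_model_predictions.csv")  # hypothetical: predicted risk + outcome

    # Discrimination: how well predicted risk separates people who developed CKD
    # from those who did not.
    print("AUC:", roc_auc_score(df["developed_ckd"], df["predicted_risk"]))

    # Calibration: within each tenth of predicted risk, does the average predicted
    # risk match the observed proportion who developed CKD?
    df["decile"] = pd.qcut(df["predicted_risk"], 10, labels=False)
    print(df.groupby("decile").agg(mean_predicted=("predicted_risk", "mean"),
                                   observed_rate=("developed_ckd", "mean")))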
What Do These Findings Mean?
These findings suggest that the development and clinical application of CKD risk models is still in its infancy. Specifically, these findings indicate that the existing models need to be better calibrated and need to be externally validated in different populations (most of the models were tested only in predominantly white populations) before they are incorporated into guidelines. The impact of their use on clinical outcomes also needs to be assessed before their widespread use is recommended. Such research is worthwhile, however, because of the potential public health and clinical applications of well-designed risk models for CKD. Such models could be used to identify segments of the population that would benefit most from screening for CKD, for example. Moreover, risk communication to patients could motivate them to adopt a healthy lifestyle and to adhere to prescribed medications, and the use of models for predicting CKD progression could help clinicians tailor disease-modifying therapies to individual patient needs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001344.
This study is further discussed in a PLOS Medicine Perspective by Maarten Taal
The US National Kidney and Urologic Diseases Information Clearinghouse provides information about all aspects of kidney disease; the US National Kidney Disease Education Program provides resources to help improve the understanding, detection, and management of kidney disease (in English and Spanish)
The UK National Health Service Choices website provides information for patients on chronic kidney disease, including some personal stories
The US National Kidney Foundation, a not-for-profit organization, provides information about chronic kidney disease (in English and Spanish)
The not-for-profit UK National Kidney Federation provides support and information for patients with kidney disease and for their carers, including a selection of patient experiences of kidney disease
World Kidney Day, a joint initiative between the International Society of Nephrology and the International Federation of Kidney Foundations, aims to raise awareness about kidneys and kidney disease
doi:10.1371/journal.pmed.1001344
PMCID: PMC3502517  PMID: 23185136
7.  10-y Risks of Death and Emergency Re-admission in Adolescents Hospitalised with Violent, Drug- or Alcohol-Related, or Self-Inflicted Injury: A Population-Based Cohort Study 
PLoS Medicine  2015;12(12):e1001931.
Background
Hospitalisation for adversity-related injury (violent, drug/alcohol-related, or self-inflicted injury) has been described as a “teachable moment”, when intervention may reduce risks of further harm. Which adolescents are likely to benefit most from intervention strongly depends on their long-term risks of harm. We compared 10-y risks of mortality and re-admission after adversity-related injury with risks after accident-related injury.
Methods and Findings
We analysed National Health Service admissions data for England (1 April 1997–31 March 2012) for 10–19 y olds with emergency admissions for adversity-related injury (violent, drug/alcohol-related, or self-inflicted injury; n = 333,009) or for accident-related injury (n = 649,818). We used Kaplan–Meier estimates and Cox regression to estimate and compare 10-y post-discharge risks of death and emergency re-admission. Among adolescents discharged after adversity-related injury, one in 137 girls and one in 64 boys died within 10 y, and 54.2% of girls and 40.5% of boys had an emergency re-admission, with rates being highest for 18–19 y olds. Risks of death were higher than in adolescents discharged after accident-related injury (girls: age-adjusted hazard ratio 1.61, 95% CI 1.43–1.82; boys: 2.13, 95% CI 1.98–2.29), as were risks of re-admission (girls: 1.76, 95% CI 1.74–1.79; boys: 1.41, 95% CI 1.39–1.43). Risks of death and re-admission were increased after all combinations of violent, drug/alcohol-related, and self-inflicted injury, but particularly after any drug/alcohol-related or self-inflicted injury (i.e., with/without violent injury), for which age-adjusted hazard ratios for death in boys ranged from 1.67 to 5.35, compared with 1.25 following violent injury alone (girls: 1.09 to 3.25, compared with 1.27). The main limitation of the study was under-recording of adversity-related injuries and misclassification of these cases as accident-related injuries. This misclassification would attenuate the relative risks of death and re-admission for adversity-related compared with accident-related injury.
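The survival-analysis approach described above (Kaplan–Meier risk estimates and Cox regression comparing injury groups) can be sketched as follows; the data file, column names, and simplification to a single age-adjusted model without sex stratification are hypothetical:

    # Illustrative sketch only: Kaplan-Meier cumulative risk and an age-adjusted
    # Cox model comparing adversity- with accident-related injury. File and
    # column names are hypothetical, and sex stratification is omitted.
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    df = pd.read_csv("adolescent_admissions.csv")  # hypothetical discharge cohort

    # 10-y cumulative risk of death after an adversity-related admission.
    adversity = df[df["adversity_related"] == 1]
    kmf = KaplanMeierFitter()
    kmf.fit(adversity["years_to_death_or_censoring"], adversity["died"])
    print(1 - kmf.survival_function_at_times(10.0))  # cumulative mortality at 10 y

    # Age-adjusted hazard ratio for death, adversity- versus accident-related injury.
    cph = CoxPHFitter()
    cph.fit(df[["years_to_death_or_censoring", "died", "adversity_related", "age_at_admission"]],
            duration_col="years_to_death_or_censoring", event_col="died")
    cph.print_summary()  # exp(coef) for adversity_related is the adjusted hazard ratio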
Conclusions
Adolescents discharged after an admission for violent, drug/alcohol-related, or self-inflicted injury have increased risks of subsequent harm up to a decade later. Introduction of preventive strategies for reducing subsequent harm after admission should be considered for all types of adversity-related injury, particularly for older adolescents.
In a UK cohort study, Annie Herbert and colleagues compare long-term risks between adolescents hospitalized for adversity- versus accident-related injuries.
Editors' Summary
Background
Adolescence—the period of human growth and development that occurs between the ages of 10 and 19 years—prepares the body and mind for adulthood. It is characterized by numerous, sometimes troubling, physical, mental, emotional, and social changes. Perhaps the biggest change is puberty, which usually occurs between the ages of 10 and 14 years for girls and between the ages of 12 and 16 years for boys. During puberty, several biological changes prepare the body for parenthood. For example, the breasts develop and menstrual periods begin in girls, and the testicles and penis grow in boys. Physiological growth is also rapid during puberty, and, by the end of puberty, both boys and girls are at or close to their adult height and weight. Adolescence is also the time when individuals begin to move towards social and economic independence, and when they acquire their own unique personality and opinions, the emotional skills needed to form adult relationships, and the intellectual capacity for abstract reasoning.
Why Was This Study Done?
Adolescents often use alcohol and other drugs unwisely, and because adolescents do not fully understand the relationships between behavior and consequences, everyday situations sometimes escalate into violence. Moreover, some adolescents intentionally damage their body (self-harm) as a way to deal with the emotional upheaval of adolescence. Consequently, adolescents often sustain adversity-related injuries (violence-related, drug- or alcohol-related, or self-inflicted injuries). In England, for example, a third of all adolescents admitted to hospital for any injury have an adversity-related injury; the remaining injuries are mainly accident-related. Because adolescents who present with an adversity-related injury often re-present later with other adversity-related injuries, hospitalization for such injuries is a “teachable moment,” a time when relevant interventions (for example, psychosocial interventions that deal with psychological and social development) can potentially reduce the risk of further harm. But which adolescents are likely to benefit from such interventions depends on their long-term risks of harm. Here, the researchers use hospitalization data for England to compare the ten-year risks of mortality (death) and re-admission among adolescents after adversity-related injury with the risks after accident-related injury.
What Did the Researchers Do and Find?
The researchers used National Health Service hospital admissions data collected between 1997 and 2012 for 10–19 year olds with emergency admissions for adversity-related or accident-related injury (333,009 and 649,818 adolescents, respectively) to estimate the ten-year risks of death and emergency re-admission among injured adolescents after discharge. Among adolescents discharged after an adversity-related injury, one in 137 girls and one in 64 boys died within ten years. Also, 54.2% of girls and 40.5% of boys had a subsequent emergency re-admission for an adversity-related injury, and emergency re-admission rates were highest in 18–19 year olds. The risks of both death and emergency re-admission were higher among adolescents discharged after an adversity-related injury than among adolescents discharged after an accident-related injury. For example, boys discharged after an adversity-related injury were about twice as likely to die within the next ten years as boys discharged after an accident-related injury. Risks of death were increased after all combinations of adversity-related injury but particularly after combinations that included drug- or alcohol-related or self-inflicted injury. Finally, the risks of emergency re-admission were highest after injuries that included self-inflicted injury in both girls and boys.
What Do These Findings Mean?
These findings indicate that adolescents discharged from the hospital after an admission for violence-related, drug- or alcohol-related, or self-inflicted injuries have increased risks of subsequent harm up to a decade later. Misclassification of some adversity-related injuries as accident-related injuries may affect the accuracy of these findings. Another important limitation of this observational study is residual confounding. That is, although the researchers adjusted for known factors likely to affect the risk of re-admission or death in their analysis, the adolescents who were re-admitted or died subsequent to discharge after an adversity-related injury may have shared other unknown characteristics that were responsible for their increased risk of harm. Nevertheless, these findings identify several risk factors that clinicians and service providers can use to identify those adolescents admitted to hospital with an injury who are at high or low risk of subsequent harm. Specifically, these findings suggest that the introduction of strategies for reducing subsequent harm after discharge should be considered for all types of adversity-related injury, particularly when it occurs in older adolescents.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001931.
The World Health Organization (WHO) provides brief information on adolescence (in several languages) and links to WHO documents concerned with adolescent health
The UK Royal College of Psychiatrists provides a fact sheet on surviving adolescence for parents, teachers, and young people
The American Academy of Pediatrics also provides information on the stages of adolescence (in English and Spanish)
The UK not-for-profit organization Young Minds has real stories about self-harm and about other aspects of emotional well-being and mental health among young people
MedlinePlus provides basic information and links to further resources about adolescent development, self-harm, teen violence, and injuries
A previous open-access paper by the researchers about violence, self-harm, and drug or alcohol misuse among adolescents admitted to the hospital for injuries is available
doi:10.1371/journal.pmed.1001931
PMCID: PMC4699823  PMID: 26714280
8.  Removing the Age Restrictions for Rotavirus Vaccination: A Benefit-Risk Modeling Analysis 
PLoS Medicine  2012;9(10):e1001330.
A modeling analysis conducted by Manish Patel and colleagues predicts the possible number of rotavirus deaths prevented, and number of intussusception deaths caused, by use of an unrestricted rotavirus schedule in low- and middle-income countries.
Background
To minimize potential risk of intussusception, the World Health Organization (WHO) recommended in 2009 that rotavirus immunization should be initiated by age 15 weeks and completed before 32 weeks. These restrictions could adversely impact vaccination coverage and thereby its health impact, particularly in developing countries where delays in vaccination often occur.
Methods and Findings
We conducted a modeling study to estimate the number of rotavirus deaths prevented and the number of intussusception deaths caused by vaccination when administered on the restricted schedule versus an unrestricted schedule whereby rotavirus vaccine would be administered with DTP vaccine up to age 3 years. Countries were grouped on the basis of child mortality rates, using WHO data. Inputs were estimates of WHO rotavirus mortality by week of age from a recent study, intussusception mortality based on a literature review, predicted vaccination rates by week of age from USAID Demographic and Health Surveys, the United Nations Children's Fund (UNICEF) Multiple Indicator Cluster Surveys (MICS), and WHO-UNICEF 2010 country-specific coverage estimates, and published estimates of vaccine efficacy and vaccine-associated intussusception risk. On the basis of the error estimates and distributions for model inputs, we conducted 2,000 simulations to obtain median estimates of deaths averted and caused as well as the uncertainty ranges, defined as the 5th–95th percentile, to provide an indication of the uncertainty in the estimates.
We estimated that in low and low-middle income countries a restricted schedule would prevent 155,800 rotavirus deaths (5th–95th centiles, 83,300–217,700) while causing potentially 253 intussusception deaths (76–689). In contrast, vaccination without age restrictions would prevent 203,000 rotavirus deaths (102,000–281,500) while potentially causing 547 intussusception deaths (237–1,160). Thus, removing the age restrictions would avert an additional 47,200 rotavirus deaths (18,700–63,700) and cause an additional 294 (161–471) intussusception deaths, for an incremental benefit-risk ratio of 154 deaths averted for every death caused by the vaccine. These extra deaths prevented under an unrestricted schedule reflect vaccination of an additional 21%–25% of children, beyond the 63%–73% of children who would be vaccinated under the restricted schedule. Importantly, these estimates err on the side of safety in that they assume a high vaccine-associated risk of intussusception and do not account for potential herd immunity or non-fatal outcomes.
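The uncertainty ranges quoted above come from re-running the model many times with inputs drawn from their error distributions and reading off percentiles of the results. A self-contained sketch of that general approach is shown below; every number and distribution in it is a placeholder, not one of the study's inputs.

```python
# Illustrative Monte Carlo uncertainty propagation; all numbers and
# distributions below are placeholders, not the study's inputs.
import numpy as np

rng = np.random.default_rng(0)
n_sim = 2000  # the study ran 2,000 simulations

deaths_averted = np.empty(n_sim)
deaths_caused = np.empty(n_sim)
for i in range(n_sim):
    # draw uncertain inputs from assumed error distributions (placeholders)
    vaccine_efficacy = rng.normal(0.5, 0.05)          # efficacy against rotavirus death
    rota_deaths_no_vax = rng.normal(400_000, 40_000)  # annual deaths without vaccination
    extra_coverage = rng.normal(0.23, 0.02)           # extra children reached, unrestricted
    intuss_risk = rng.lognormal(np.log(5e-6), 0.3)    # fatal intussusception risk per child

    extra_vaccinated = extra_coverage * 100_000_000   # hypothetical birth cohort size
    deaths_averted[i] = rota_deaths_no_vax * vaccine_efficacy * extra_coverage
    deaths_caused[i] = extra_vaccinated * intuss_risk

# report medians and 5th-95th percentile uncertainty ranges, as in the study
for name, x in [("averted", deaths_averted), ("caused", deaths_caused)]:
    lo, med, hi = np.percentile(x, [5, 50, 95])
    print(f"extra deaths {name}: {med:,.0f} ({lo:,.0f}-{hi:,.0f})")
```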
Conclusions
Our analysis suggests that in low- and middle-income countries the additional lives saved by removing age restrictions for rotavirus vaccination would far outnumber the potential excess vaccine-associated intussusception deaths.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Rotavirus causes severe diarrhea and vomiting. It is responsible for a large number of hospitalizations among young children in developed countries (an estimated 60,000 hospitalizations per year in the US in 2005, for example). In poor countries, rotavirus is a major cause of death in children under five. In 1998, the first rotavirus vaccine, called RotaShield, was approved in the US by the Food and Drug Administration. Shortly after the vaccine became widely used, doctors noticed a small increase in a problem called intussusception among the vaccinated infants. Intussusception is a rare type of bowel obstruction that occurs when the bowel telescopes in on itself. Prompt treatment of intussusception normally leads to full recovery, but some children with the condition need surgery, and when the disease is left untreated it can be fatal. Because intussusception is a serious condition and because very few children die from rotavirus infection in the United States, the US authorities stopped recommending vaccination with RotaShield in 1999. The manufacturer withdrew the vaccine from the market shortly thereafter.
Since then, two new vaccines (named Rotarix and RotaTeq) have been developed. Before they were approved in the US and elsewhere, they were extensively tested for any adverse side effects, especially intussusception. No increase in the risk for intussusception was found in these studies, and both are now approved and recommended for vaccination of infants around the world.
Why Was This Study Done?
Since 2006, hundreds of thousands of infants have been vaccinated with Rotarix or RotaTeq, with safety being closely monitored. Some countries have reported a small increase in intussusception (one to four additional cases per 100,000 vaccinated infants, compared with a background rate of about one case per 2,000 unvaccinated children). This increase is much lower than the one seen previously with RotaShield. In response to these findings, authorities in the US and other developed countries as well as the World Health Organization declared that the benefits of the vaccine outweigh the risks of the small number of additional intussusception cases in both developed and poor countries. However, because older infants have a higher risk of naturally occurring intussusception, they decided that the course of vaccination (two oral doses for Rotarix and three for RotaTeq) should be initiated before 15 weeks of age and completed before the age of 32 weeks. This is usually not a problem in countries with easy access to health facilities. However, in many poor countries where delays in infant vaccination are common, giving the vaccine only to very young children means that many others who could benefit from its protection will be excluded. In this study, the researchers examined the risks and benefits of rotavirus vaccination in poor countries where most of the rotavirus deaths occur. Specifically, they looked at the benefits and risks if the age restrictions were removed, with a particular emphasis on allowing infants to initiate rotavirus immunization even if they arrive after 15 weeks of age.
What Did the Researchers Do and Find?
The researchers used the most recent estimates for how well the vaccines protect children in Africa and Asia from becoming infected with rotavirus, how many deaths from rotavirus infection can be avoided by vaccination, how many additional cases of intussusception will likely occur in vaccinated children, and what proportion of children would be excluded from rotavirus vaccination because they are too old when they come to a health facility for their infant vaccination. They then estimated the number of rotavirus deaths prevented and the number of intussusception deaths caused by vaccination in two scenarios. The first one (the restricted scenario) corresponds to previous guidelines from WHO and others, in which rotavirus vaccination needs to be initiated before 15 weeks and the full series completed before 32 weeks. The second one (called the unrestricted scenario) allows rotavirus vaccination of children alongside current routinely administered vaccines up to three years of age, recognizing that most children receive their vaccination by 1 year of life.
The researchers estimated that removing the age restriction would prevent an additional 154 rotavirus deaths for each intussusception death caused by the vaccine. Under the unrestricted scenario, roughly a third more children would be vaccinated, preventing approximately 47,000 additional deaths from rotavirus while causing approximately 300 additional intussusception deaths.
They also calculated best- and worst-case scenarios. The worst-case scenario assumed a much higher risk of intussusception for children receiving their first dose after 15 weeks of life than has been seen anywhere, and that 20% more of the children who developed intussusception would die from it than was assumed in the main analysis (again, a higher figure than seen in reality). It also assumed weaker protection against rotavirus death than has been observed in children vaccinated so far. Even in this pessimistic case, 24 rotavirus deaths were prevented for each intussusception death caused by the vaccine.
What Do These Findings Mean?
If one accepts that deaths caused by a vaccine are not fundamentally different from deaths caused by a failure to vaccinate, then these results show that the benefits of lifting the age restriction for rotavirus vaccine clearly outweigh the risks, at least when only examining mortality outcomes. The calculations are valid only for low-income countries in Africa and Asia where both vaccination delays and deaths from rotavirus are common. The risk-benefit ratio will be different elsewhere. There are also additional risks and benefits that are not included in the study's estimates. For example, early vaccination might be seen as less of an urgent priority when this vaccine can be had at a later date, leaving very young children more vulnerable. On the other hand, when many children in the community are vaccinated, even the unvaccinated children are less likely to get infected (what is known as “herd immunity”), something that has not been taken into account in the benefits here. The results of this study (and its limitations) were reviewed in April 2012 by WHO's Strategic Advisory Group of Experts. The group then recommended that, while early vaccination is still strongly encouraged, the age restriction on rotavirus vaccination should be removed in countries where delays in vaccination and rotavirus mortality are common so that more vulnerable children can be vaccinated and deaths from rotavirus averted.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001330.
The World Health Organization provides information on rotavirus
Wikipedia has information on rotavirus vaccine and intussusception (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Centers for Disease Control and Prevention rotavirus vaccination page includes a link to frequently asked questions
PATH Rotavirus Vaccine Access and Delivery has timely, useful updates on the status of rotavirus vaccines globally
doi:10.1371/journal.pmed.1001330
PMCID: PMC3479108  PMID: 23109915
9.  Venous Thrombosis Risk after Cast Immobilization of the Lower Extremity: Derivation and Validation of a Clinical Prediction Score, L-TRiP(cast), in Three Population-Based Case–Control Studies 
PLoS Medicine  2015;12(11):e1001899.
Background
Guidelines and clinical practice vary considerably with respect to thrombosis prophylaxis during plaster cast immobilization of the lower extremity. Identifying patients at high risk for the development of venous thromboembolism (VTE) would provide a basis for considering individual thromboprophylaxis use and planning treatment studies.
The aims of this study were (1) to investigate the predictive value of genetic and environmental risk factors, levels of coagulation factors, and other biomarkers for the occurrence of VTE after cast immobilization of the lower extremity and (2) to develop a clinical prediction tool for the prediction of VTE in plaster cast patients.
Methods and Findings
We used data from a large population-based case–control study (MEGA study, 4,446 cases with VTE, 6,118 controls without) designed to identify risk factors for a first VTE. Cases were recruited from six anticoagulation clinics in the Netherlands between 1999 and 2004; controls were their partners or individuals identified via random digit dialing. Identification of predictor variables to be included in the model was based on reported associations in the literature or on a relative risk (odds ratio) > 1.2 and p ≤ 0.25 in the univariate analysis of all participants. Using multivariate logistic regression, a full prediction model was created. In addition to the full model (all variables), a restricted model (minimum number of predictors with a maximum predictive value) and a clinical model (environmental risk factors only, no blood draw or assays required) were created. To determine the discriminatory power in patients with cast immobilization (n = 230), the area under the curve (AUC) was calculated by means of a receiver operating characteristic. Validation was performed in two other case–control studies of the etiology of VTE: (1) the THE-VTE study, a two-center, population-based case–control study (conducted in Leiden, the Netherlands, and Cambridge, United Kingdom) with 784 cases and 523 controls included between March 2003 and December 2008 and (2) the Milan study, a population-based case–control study with 2,117 cases and 2,088 controls selected between December 1993 and December 2010 at the Thrombosis Center, Fondazione IRCCS Ca’ Granda–Ospedale Maggiore Policlinico, Milan, Italy.
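For readers unfamiliar with the model-building step described above (fitting a multivariate logistic model to case–control data and judging discrimination by the area under the ROC curve), a minimal sketch with scikit-learn might look as follows; the predictor and column names are hypothetical and this is not the authors' code.

```python
# Sketch of fitting a logistic prediction model and computing its AUC.
# Column names are hypothetical; the study's full model used 32 predictors.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("mega_study.csv")  # hypothetical extract: vte = 1 for cases, 0 for controls
predictors = ["age", "sex", "bmi", "factor_viii", "factor_v_leiden"]  # illustrative subset, numerically coded

model = LogisticRegression(max_iter=1000)
model.fit(df[predictors], df["vte"])

# Discrimination in the subgroup with lower-leg plaster cast immobilization
cast = df[df["plaster_cast"] == 1]
auc = roc_auc_score(cast["vte"], model.predict_proba(cast[predictors])[:, 1])
print(f"AUC in plaster cast subgroup: {auc:.2f}")
```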
The full model consisted of 32 predictors, including three genetic factors and six biomarkers. For this model, an AUC of 0.85 (95% CI 0.77–0.92) was found in individuals with plaster cast immobilization of the lower extremity. The AUC for the restricted model (containing 11 predictors, including two genetic factors and one biomarker) was 0.84 (95% CI 0.77–0.92). The clinical model (consisting of 14 environmental predictors) resulted in an AUC of 0.77 (95% CI 0.66–0.87). The clinical model was converted into a risk score, the L-TRiP(cast) score (Leiden–Thrombosis Risk Prediction for patients with cast immobilization score), which showed an AUC of 0.76 (95% CI 0.66–0.86). Validation in the THE-VTE study data resulted in an AUC of 0.77 (95% CI 0.58–0.96) for the L-TRiP(cast) score. Validation in the Milan study resulted in an AUC of 0.93 (95% CI 0.86–1.00) for the full model, an AUC of 0.92 (95% CI 0.76–0.87) for the restricted model, and an AUC of 0.96 (95% CI 0.92–0.99) for the clinical model. The L-TRiP(cast) score resulted in an AUC of 0.95 (95% CI 0.91–0.99).
Major limitations of this study were that information on thromboprophylaxis was not available for patients who had plaster cast immobilization of the lower extremity and that blood was drawn 3 mo after the thrombotic event.
Conclusions
These results show that information on environmental risk factors, coagulation factors, and genetic determinants in patients with plaster casts leads to high accuracy in the prediction of VTE risk. In daily practice, the clinical model may be the preferred model as its factors are most easy to determine, while the model still has good predictive performance. These results may provide guidance for thromboprophylaxis and form the basis for a management study.
Using three population-based case-control studies, Banne Nemeth and colleagues derive and validate a clinical prediction score (L-TRiP(cast)) for venous thrombosis risk.
Editors' Summary
Background
Blood normally flows smoothly around the human body, but when a cut or other injury occurs, proteins called clotting factors make the blood gel (coagulate) at the injury site. The resultant clot (thrombus) plugs the wound and prevents blood loss. Sometimes, however, a thrombus forms inside an uninjured blood vessel and partly or completely blocks the blood flow. Clot formation inside one of the veins deep in the body (usually in a leg) is called deep vein thrombosis (DVT). DVT, which can cause pain, swelling, and redness in the affected limb, is treated with anticoagulants, drugs that stop the clot growing. If left untreated, part of the clot can break off and travel to the lungs, where it can cause a life-threatening pulmonary embolism. DVT and pulmonary embolism are known collectively as venous thromboembolism (VTE). Risk factors for VTE include age, oral contraceptive use, having an inherited blood clotting disorder, and prolonged inactivity (for example, being bedridden). An individual’s lifetime risk of developing VTE is about 11%; 10%–30% of people die within 28 days of diagnosis of VTE.
Why Was This Study Done?
Clinicians cannot currently accurately predict who will develop VTE, but it would be very helpful to be able to identify individuals at high risk for VTE because the condition can be prevented by giving anticoagulants before a clot forms (thromboprophylaxis). The ability to predict VTE would be particularly useful in patients who have had a lower limb immobilized in a cast after, for example, breaking a bone. These patients have an increased risk of VTE compared to patients without cast immobilization. However, their absolute risk of VTE is not high enough to justify giving everyone with a leg cast thromboprophylaxis because this therapy increases the risk of major bleeds. Here, the researchers investigate the predictive value of genetic and environmental factors and levels of coagulation factors and other biomarkers on VTE occurrence after cast immobilization of the lower leg and develop a clinical tool for the prediction of VTE in patients with plaster casts.
What Did the Researchers Do and Find?
The researchers used data from the MEGA study, a study of risk factors for VTE, to build a prediction model for a first VTE in patients with a leg cast; the prediction model included 32 predictors (the full model). They also built a restricted model, which included only 11 predictors but had maximum predictive value, and a clinical model, which included 14 environmental predictors that can all be determined without drawing blood or undertaking any assays. They then determined the ability of each model to distinguish between patients with a leg cast who did and did not develop VTE using receiver operating characteristic (ROC) curve analysis. The area under the curve (AUC) for the full model was 0.85, for the restricted model it was 0.85, and for the clinical model it was 0.77. (A predictive test that discriminates perfectly between individuals who do and do not subsequently develop a specific condition has an AUC of 1.00; a test that is no better at predicting outcomes than flipping a coin has an AUC of 0.5.) Similar or higher AUCs were obtained for all the models using data collected in two independent studies. Finally, the researchers converted the clinical model into a risk score by giving each variable in the model a numerical score. The sum of these scores was used to stratify individuals into categories of low or high risk for VTE. With a cutoff of 9 points, the risk score correctly identified 80.8% of the patients in the MEGA study with a plaster cast who developed VTE and 60.8% of the patients who did not develop VTE.
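Turning a fitted clinical model into a points-based score and then evaluating a cutoff, as the researchers did with the 9-point threshold, comes down to counting true and false positives; a sketch with invented data (not the MEGA study's) is shown below.

```python
# Sensitivity and specificity of a points-based risk score at a chosen cutoff.
# 'score' and 'vte' are invented arrays, not the MEGA study data.
import numpy as np

score = np.array([4, 11, 9, 7, 13, 5, 10, 8])   # L-TRiP(cast)-style points per patient
vte   = np.array([0,  1, 1, 0,  1, 0,  0, 1])   # 1 = developed VTE

high_risk = score >= 9                           # cutoff of 9 points
sensitivity = (high_risk & (vte == 1)).sum() / (vte == 1).sum()
specificity = (~high_risk & (vte == 0)).sum() / (vte == 0).sum()
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
```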
What Do These Findings Mean?
Some aspects of this study may limit the accuracy of its findings. For example, no information was available about which patients with a plaster cast received thromboprophylaxis. Nevertheless, these findings suggest that information on environmental risk factors, coagulation factors, and genetic determinants can be used to predict VTE risk in patients with a leg cast with high accuracy. Importantly, the risk score derived and validated by the researchers, which includes only predictors that can be easily determined in clinical practice, may help clinicians decide which patients with a leg cast should receive thromboprophylaxis and which should not be exposed to the risk of anticoagulant therapy, until an unambiguous guideline for these patients becomes available.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001899.
The US National Heart, Lung, and Blood Institute provides information on deep vein thrombosis (including an animation about how DVT causes pulmonary embolisms) and on pulmonary embolism
The UK National Health Service Choices website has information on deep vein thrombosis (including personal stories) and on pulmonary embolism
The US non-profit organization National Blood Clot Alliance provides detailed information about deep vein thrombosis and pulmonary embolism for patients and professionals and includes a selection of personal stories about these conditions
MedlinePlus has links to further information about deep vein thrombosis and pulmonary embolism (in English and Spanish)
Wikipedia has a page on ROC curve analysis (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
More information about the MEGA study is available
doi:10.1371/journal.pmed.1001899
PMCID: PMC4640574  PMID: 26554832
10.  An Economic Evaluation of Venous Thromboembolism Prophylaxis Strategies in Critically Ill Trauma Patients at Risk of Bleeding 
PLoS Medicine  2009;6(6):e1000098.
Using decision analysis, Henry Stelfox and colleagues estimate the cost-effectiveness of three venous thromboembolism prophylaxis strategies in patients with severe traumatic injuries who were also at risk for bleeding complications.
Background
Critically ill trauma patients with severe injuries are at high risk for venous thromboembolism (VTE) and bleeding simultaneously. Currently, the optimal VTE prophylaxis strategy is unknown for trauma patients with a contraindication to pharmacological prophylaxis because of a risk of bleeding.
Methods and Findings
Using decision analysis, we estimated the cost effectiveness of three VTE prophylaxis strategies—pneumatic compression devices (PCDs) and expectant management alone, serial Doppler ultrasound (SDU) screening, and prophylactic insertion of a vena cava filter (VCF)—in trauma patients admitted to an intensive care unit (ICU) with severe injuries who were believed to have a contraindication to pharmacological prophylaxis for up to two weeks because of a risk of major bleeding. Data on the probability of deep vein thrombosis (DVT) and pulmonary embolism (PE), and on the effectiveness of the prophylactic strategies, were taken from observational and randomized controlled studies. The probabilities of in-hospital death, ICU and hospital discharge rates, and resource use were taken from a population-based cohort of trauma patients with severe injuries (injury severity scores >12) admitted to the ICU of a regional trauma centre. The incidence of DVT at 12 weeks was similar for the PCD (14.9%) and SDU (15.0%) strategies, but higher for the VCF (25.7%) strategy. Conversely, the incidence of PE at 12 weeks was highest in the PCD strategy (2.9%), followed by the SDU (1.5%) and VCF (0.3%) strategies. Expected mortality and quality-adjusted life years were nearly identical for all three management strategies. Expected health care costs at 12 weeks were Can$55,831 for the PCD strategy, Can$55,334 for the SDU screening strategy, and Can$57,377 for the VCF strategy, with similar trends noted over a lifetime analysis.
Conclusions
The attributable mortality due to PE in trauma patients with severe injuries is low relative to other causes of mortality. Prophylactic placement of VCF in patients at high risk of VTE who cannot receive pharmacological prophylaxis is expensive and associated with an increased risk of DVT. Compared to the other strategies, SDU screening was associated with better clinical outcomes and lower costs.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
For patients who have been seriously injured in an accident or a violent attack (trauma patients), venous thromboembolism (VTE)—the formation of blood clots that limit the flow of blood through the veins—is a frequent and potentially fatal complication. The commonest form of VTE is deep vein thrombosis (DVT). “Distal” DVTs (clots that form in deep veins below the knee) affect about half of patients with severe trauma; “proximal” DVTs (clots that form above the knee) develop in one in five trauma patients. DVTs cause pain and swelling in the affected leg and can leave patients with a painful condition called post-thrombotic syndrome. Worse still, part of the clot can break off and travel to the lungs where it can cause a life-threatening pulmonary embolism (PE). Distal DVTs rarely embolize but, if untreated, half of patients who present with a proximal DVT will develop a PE, and 2%–3% of them will die as a result.
Why Was This Study Done?
VTE is usually prevented by using heparin, a drug that stops blood clotting, but clinicians treating critically ill trauma patients have a dilemma. Many of these patients are at high risk of serious bleeding complications so cannot be given heparin to prevent VTE. Nonpharmacological ways to prevent VTE include the use of pneumatic compression devices to keep the blood moving in the legs (clots often form in patients confined to bed because of the sluggish blood flow in their legs), repeated screening for blood clots using Doppler ultrasound, and the insertion of a “vena cava filter” into the vein that takes blood from the legs to the heart. This last device catches blood clots before they reach the lungs but increases the risk of DVT. Unfortunately, no-one knows which VTE prevention strategy works best in trauma patients who cannot be given heparin. In this study, therefore, the researchers use decision analysis (the systematic evaluation of the most important factors affecting a decision) to estimate the costs and likely clinical outcomes of these strategies.
What Did the Researchers Do and Find?
The researchers used cost and clinical data from patients admitted to a Canadian trauma center with severe head/neck and/or abdomen/pelvis injuries (patients with a high risk of bleeding complications likely to make heparin therapy dangerous for up to two weeks after the injury) to construct a Markov decision analysis model. They then fed published data on the chances of patients developing DVT or PE, and on the effectiveness of the three VTE prevention strategies, into the model to obtain estimates of the costs and clinical outcomes of the strategies at 12 weeks after the injury and over the patients' lifetime. The estimated incidence of DVT at 12 weeks was 15% for the pneumatic compression device and Doppler ultrasound strategies, but 25% for the vena cava filter strategy. By contrast, the estimated incidence of PE was 2.9% with the pneumatic compression device, 1.5% with Doppler ultrasound, but only 0.3% with the vena cava filter. The expected mortality with all three strategies was similar. Finally, the estimated health care costs per patient at 12 weeks were Can$55,334 and Can$55,831 for the Doppler ultrasound and pneumatic compression device strategies, respectively, but Can$57,377 for the vena cava filter strategy; similar trends were seen for lifetime health care costs.
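The expected-value logic behind such a decision model can be illustrated with a deliberately simplified calculation that weights downstream costs by outcome probabilities for each strategy. In the sketch below, the DVT and PE incidences are those quoted above, but the strategy and treatment costs are placeholders, not the values used in the published model.

```python
# Toy expected-cost comparison across three prophylaxis strategies.
# Incidences are from the abstract; all costs are illustrative placeholders.
strategies = {
    #        P(DVT), P(PE), up-front cost of strategy (Can$, placeholder)
    "PCD": (0.149, 0.029, 1_000),
    "SDU": (0.150, 0.015, 2_500),
    "VCF": (0.257, 0.003, 5_000),
}
COST_DVT, COST_PE = 8_000, 15_000  # placeholder downstream treatment costs

for name, (p_dvt, p_pe, base_cost) in strategies.items():
    expected_cost = base_cost + p_dvt * COST_DVT + p_pe * COST_PE
    print(f"{name}: expected 12-week cost Can${expected_cost:,.0f}")
```

The published analysis used a full Markov model with repeated cycles, mortality, and quality-of-life weights rather than this one-shot expected-value calculation; the sketch only shows the basic arithmetic of weighting costs by outcome probabilities.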
What Do These Findings Mean?
As with all mathematical models, these findings depend on the data fed into the model and on the assumptions included in it. For example, because data from one Canadian trauma unit were used to construct the model, these findings may not be generalizable. Nevertheless, these findings suggest that, although VTE is common among patients with severe injuries, PE is not a major cause of death among these patients. They also suggest that vena cava filters should not be routinely used for VTE prevention in patients who cannot receive heparin, because they are expensive and increase the risk of DVT. Finally, these results suggest that, compared with the other strategies, serial Doppler ultrasound is associated with better clinical outcomes and lower costs.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000098.
The US National Heart Lung and Blood Institute provides information (including an animation) on deep vein thrombosis and pulmonary embolism
MedlinePlus provides links to more information about deep vein thrombosis and pulmonary embolism (in several languages)
The UK National Health Service Choices Web site has information on deep vein thrombosis and on embolism (in English and Spanish)
The Eastern Association for the Surgery of Trauma working group document Practice Management Guidelines for the Management of Venous Thromboembolism in Trauma Patients can be downloaded from the Internet
doi:10.1371/journal.pmed.1000098
PMCID: PMC2695771  PMID: 19554085
11.  Personalized Prediction of Lifetime Benefits with Statin Therapy for Asymptomatic Individuals: A Modeling Study 
PLoS Medicine  2012;9(12):e1001361.
In a modeling study conducted by Myriam Hunink and colleagues, a population-based cohort from Rotterdam is used to predict the possible lifetime benefits of statin therapy, on a personalized basis.
Background
Physicians need to inform asymptomatic individuals about personalized outcomes of statin therapy for primary prevention of cardiovascular disease (CVD). However, current prediction models focus on short-term outcomes and ignore the competing risk of death due to other causes. We aimed to predict the potential lifetime benefits with statin therapy, taking into account competing risks.
Methods and Findings
A microsimulation model based on 5-y follow-up data from the Rotterdam Study, a population-based cohort of individuals aged 55 y and older living in the Ommoord district of Rotterdam, the Netherlands, was used to estimate lifetime outcomes with and without statin therapy. The model was validated in-sample using 10-y follow-up data. We used baseline variables and model output to construct (1) a web-based calculator for gains in total and CVD-free life expectancy and (2) color charts for comparing these gains to the Systematic Coronary Risk Evaluation (SCORE) charts. In 2,428 participants (mean age 67.7 y, 35.5% men), statin therapy increased total life expectancy by 0.3 y (SD 0.2) and CVD-free life expectancy by 0.7 y (SD 0.4). Age, sex, smoking, blood pressure, hypertension, lipids, diabetes, glucose, body mass index, waist-to-hip ratio, and creatinine were included in the calculator. Gains in total and CVD-free life expectancy increased with blood pressure, unfavorable lipid levels, and body mass index after multivariable adjustment. Gains decreased considerably with advancing age, while SCORE 10-y CVD mortality risk increased with age. Twenty-five percent of participants with a low SCORE risk achieved equal or larger gains in CVD-free life expectancy than the median gain in participants with a high SCORE risk.
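A microsimulation of this kind follows each simulated person year by year through health states until death, with and without therapy, and averages the resulting life-years. A stripped-down sketch is shown below; it uses only two states and invented annual risks, and bears no relation to the RISC model's actual structure or parameters.

```python
# Stripped-down microsimulation of life expectancy with vs. without therapy.
# Annual death risks and the treatment effect are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def mean_life_years(n, start_age=68, treated=False, max_age=110):
    years = np.zeros(n)
    for i in range(n):
        age = start_age
        while age < max_age:
            p_death = 0.01 * 1.09 ** (age - start_age)  # toy age-increasing annual risk
            if treated:
                p_death *= 0.9                           # toy relative risk reduction
            if rng.random() < min(p_death, 1.0):
                break
            age += 1
        years[i] = age - start_age
    return years.mean()

gain = mean_life_years(20_000, treated=True) - mean_life_years(20_000, treated=False)
print(f"simulated gain in life expectancy: {gain:.2f} years")
```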
Conclusions
We developed tools to predict personalized increases in total and CVD-free life expectancy with statin therapy. The predicted gains we found are small. If the underlying model is validated in an independent cohort, the tools may be useful in discussing with patients their individual outcomes with statin therapy.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Cardiovascular disease (CVD) affects the heart and/or the blood vessels and is a major cause of illness and death worldwide. In the US, for example, coronary heart disease—a CVD in which narrowing of the heart's blood vessels by fatty deposits slows the blood supply to the heart and may eventually cause a heart attack—is the leading cause of death, and stroke—a CVD in which the brain's blood supply is interrupted—is the fourth leading cause of death. Established risk factors for CVD include smoking, high blood pressure, obesity, and high blood levels of a fat called low-density lipoprotein (“bad cholesterol”). Because many of these risk factors can be modified by lifestyle changes and by drugs, CVD can be prevented. Thus, physicians can assess a healthy individual's risk of developing CVD using a CVD prediction model (equations that take into account the CVD risk factors to which the individual is exposed) and can then recommend lifestyle changes and medications to reduce that individual's CVD risk.
Why Was This Study Done?
Current guidelines recommend that asymptomatic (healthy) individuals whose likely CVD risk is high should be encouraged to take statins—cholesterol-lowering drugs—as a preventative measure. Statins help to prevent CVD in healthy people with a high predicted risk of CVD, but, like all medicines, they have some unwanted side effects, so it is important that physicians can communicate both the benefits and drawbacks of statins to their patients in a way that allows them to make an informed decision about taking these drugs. Telling a patient that statins will reduce his or her short-term risk of CVD is not always helpful—patients really need to know the potential lifetime benefits of statin therapy. That is, they need to know how much longer they might live if they take statins. Here, the researchers use a mathematical model to predict the personalized lifetime benefits (increased total and CVD-free life expectancy) of statin therapy for individuals without a history of CVD.
What Did the Researchers Do and Find?
The researchers used the Rotterdam Ischemic Heart Disease & Stroke Computer Simulation (RISC) model, which simulates the life courses of individuals through six health states, from well through to CVD or non-CVD death, to estimate lifetime outcomes with and without statin therapy in a population of healthy elderly individuals. They then used these outcomes and information on baseline risk factors to develop a web-based calculator suitable for personalized prediction of the lifetime benefits of statins in routine clinical practice. The model estimated that statin therapy increases average life expectancy in the study population by 0.3 years and average CVD-free life expectancy by 0.7 years. The gains in total and CVD-free life expectancy associated with statin therapy increased with blood pressure, unfavorable cholesterol levels, and body mass index (an indicator of body fat) but decreased with age. Notably, the web-based calculator predicted that some individuals with a low ten-year CVD risk might achieve a similar or larger gain in CVD-free life expectancy with statin therapy than some individuals with a high ten-year risk. So, for example, a 55-year-old non-smoking woman with a ten-year CVD mortality risk of 2% (a two in a hundred chance of dying of CVD within ten years) and a 65-year-old male smoker with a ten-year CVD mortality risk of 15% might both gain one year of CVD-free life expectancy with statin therapy.
What Do These Findings Mean?
These findings suggest that statin therapy can lead on average to small gains in total life expectancy and slightly larger gains in CVD-free life expectancy among healthy individuals, and show that life expectancy benefits can be predicted using an individual's risk factor profile. The accuracy and generalizability of these findings are limited by the assumptions included in the model (in particular, the model did not allow for the known side effects of statin therapy) and by the data fed into it—importantly, the risk prediction model needs to be validated using an independent dataset. If future research confirms the findings of this study, the researchers' web-based calculator could provide complementary information to the currently recommended ten-year CVD mortality risk assessment. Whether communication of personalized outcomes will ultimately result in better clinical outcomes remains to be seen, however, because patients may be less likely to choose statin therapy when provided with more information about its likely benefits.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001361.
The web-based calculator for personalized prediction of lifetime benefits with statin therapy is available (after agreement to software license)
The American Heart Association provides information about many types of cardiovascular disease for patients, carers, and professionals, including information about drug therapy for cholesterol and a heart attack risk calculator
The UK National Health Service Choices website provides information about cardiovascular disease and about statins
Information is available from the British Heart Foundation on heart disease and keeping the heart healthy; information is also available on statins, including personal stories about deciding to take statins
The US National Heart Lung and Blood Institute provides information on a wide range of cardiovascular diseases
The European Society of Cardiology's cardiovascular disease risk assessment model (SCORE) is available
MedlinePlus provides links to many other sources of information on heart diseases, vascular diseases, stroke, and statins (in English and Spanish)
doi:10.1371/journal.pmed.1001361
PMCID: PMC3531501  PMID: 23300388
12.  Optimal Management of High-Risk T1G3 Bladder Cancer: A Decision Analysis 
PLoS Medicine  2007;4(9):e284.
Background
Controversy exists about the most appropriate treatment for high-risk superficial (stage T1; grade G3) bladder cancer. Immediate cystectomy offers the best chance for survival but may be associated with an impaired quality of life compared with conservative therapy. We estimated life expectancy (LE) and quality-adjusted life expectancy (QALE) for both of these treatments for men and women of different ages and comorbidity levels.
Methods and Findings
We evaluated two treatment strategies for high-risk, T1G3 bladder cancer using a decision-analytic Markov model: (1) Immediate cystectomy with neobladder creation versus (2) conservative management with intravesical bacillus Calmette-Guérin (BCG) and delayed cystectomy in individuals with resistant or progressive disease. Probabilities and utilities were derived from published literature where available, and otherwise from expert opinion. Extensive sensitivity analyses were conducted to identify variables most likely to influence the decision. Structural sensitivity analyses modifying the base case definition and the triggers for cystectomy in the conservative therapy arm were also explored. Probabilistic sensitivity analysis was used to assess the joint uncertainty of all variables simultaneously and the uncertainty in the base case results. External validation of model outputs was performed by comparing model-predicted survival rates with independent published literature. The mean LE of a 60-y-old male was 14.3 y for immediate cystectomy and 13.6 y with conservative management. With the addition of utilities, the immediate cystectomy strategy yielded a mean QALE of 12.32 y and remained preferred over conservative therapy by 0.35 y. Worsening patient comorbidity diminished the benefit of early cystectomy but altered the LE-based preferred treatment only for patients over age 70 y and the QALE-based preferred treatment for patients over age 65 y. Sensitivity analyses revealed that patients over the age of 70 y or those strongly averse to loss of sexual function, gastrointestinal dysfunction, or life without a bladder have a higher QALE with conservative therapy. The results of structural or probabilistic sensitivity analyses did not change the preferred treatment option. Model-predicted overall and disease-specific survival rates were similar to those reported in published studies, suggesting external validity.
Conclusions
Our model is, to our knowledge, the first of its kind in bladder cancer, and demonstrated that younger patients with high-risk T1G3 bladder cancer had a higher LE and QALE with immediate cystectomy. The decision to pursue immediate cystectomy versus conservative therapy should be based on discussions that consider patient age, comorbid status, and an individual's preference for particular postcystectomy health states. Patients over the age of 70 y or those who place high value on sexual function, gastrointestinal function, or bladder preservation may benefit from a more conservative initial therapeutic approach.
Using a Markov model, Shabbir Alibhai and colleagues develop a decision analysis comparing cystectomy with conservative treatment for high-risk superficial bladder cancer depending on patient age, comorbid conditions, and preferences.
Editors' Summary
Background.
Every year, about 67,000 people in the US develop bladder cancer. Like all cancers, bladder cancer arises when a single cell begins to grow faster than normal, loses its characteristic shape, and moves into surrounding tissues. Most bladder cancers develop from cells that line the bladder (“transitional” cells) and most are detected before they spread out of this lining. These superficial or T1 stage cancers can be removed by transurethral resection of bladder tumor (TURBT). The urologist (a specialist who treats urinary tract problems) passes a small telescope into the bladder through the urethra (the tube through which urine leaves the body) and removes the tumor. If the tumor cells look normal under a microscope (so-called normal histology), the cancer is unlikely to return; if they have lost their normal appearance, the tumor is given a “G3” histological grade, which indicates a high risk of recurrence.
Why Was This Study Done?
The best treatment for T1G3 bladder cancer remains controversial. Some urologists recommend immediate radical cystectomy— surgical removal of the bladder, the urethra, and other nearby organs. This treatment often provides a complete cure but can cause serious short-term health problems and affects long-term quality of life. Patients often develop sexual dysfunction or intestinal (gut) problems and sometimes find it hard to live with a reconstructed bladder. The other recommended treatment is immunotherapy with bacillus Calmette-Guérin (BCG, bacteria that are also used to vaccinate against tuberculosis). Long-term survival is not always as good with this conservative treatment but it is less likely than surgery to cause short-term illness or to reduce quality of life. In this study, the researchers have used decision analysis (a systematic evaluation of the important factors affecting a decision) to determine whether immediate cystectomy or conservative therapy is the optimal treatment for patients with T1G3 bladder cancer. Decision analysis allowed the researchers to account for quality-of-life factors while comparing the health benefits of each treatment for T1G3 bladder cancer.
What Did the Researchers Do and Find?
Using a decision analysis model called a Markov model, the researchers calculated the months of life gained, and the quality of life expected to result, from each of the two treatments. To estimate the life expectancy (LE) associated with each treatment, the researchers incorporated the published probabilities of various outcomes of each treatment into their model. To estimate quality-adjusted life expectancy (QALE, the number of years of good quality life), they incorporated “utilities,” measures of relative satisfaction with outcomes. (A utility of 1 represents perfect health; death is assigned a value of 0, and outcomes considered less than ideal, but better than death, fall in between). For a sexually potent 60-year-old man with bladder cancer but no other illnesses, the average LE predicted by the model was nearly eight months longer with immediate cystectomy than with conservative treatment (both LEs predicted by this model matched those seen in clinical trials); the average QALE with cystectomy was 4.2 months longer than with conservative treatment. Having additional diseases decreased the benefit of immediate cystectomy but the treatment still gave a longer LE until the patient reached 70 years old, when conservative treatment became better. For QALE, this change in optimal treatment appeared at age 65. Finally, conservative treatment gave a higher QALE than immediate cystectomy for patients concerned about preserving sexual function or averse to living with intestinal problems or a reconstructed bladder.
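In formula terms (notation ours, not the paper's), the quality adjustment simply weights each period of life by the utility of the health state occupied during it:

\[ \mathrm{QALE} = \sum_{s} u_s \, t_s \]

where \(u_s \in [0,1]\) is the utility of health state \(s\) and \(t_s\) is the expected time spent in that state. As an invented illustration, 10 years at utility 0.9 plus 4 years at utility 0.7 gives a QALE of \(10 \times 0.9 + 4 \times 0.7 = 11.8\) quality-adjusted years against an unadjusted LE of 14 years; this is how a treatment can win on LE yet lose on QALE when its extra years are spent in lower-utility states.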
What Do These Findings Mean?
As with all mathematical models, these results depend on the assumptions included in the model. In particular, because published probability and utility values are not available for some of the possible outcomes of the two treatments, the LE and QALE calculations could be inaccurate. Also, assigning numerical ratings to life experiences is generally something of a simplification, which could affect the reliability of the QALE (but not the LE) results. Nevertheless, these findings provide useful guidance for urologists trying to balance the benefits of immediate cystectomy or conservative treatment against the potential short-term and long-term effects of these treatments on patients' quality of life. Specifically, the results indicate that decisions on treatment for T1G3 bladder cancer should be based on a consideration of the patient's age and any coexisting disease coupled with detailed discussions with the patient about their attitudes regarding the possible health-related effects of cystectomy.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040284.
MedlinePlus encyclopedia page on bladder cancer (in English and Spanish)
Information for patients and professionals from the US National Cancer Institute on bladder cancer (in English and Spanish)
Information for patients on bladder cancer from the UK charity Cancerbackup
Online course on Decision Analysis in Health Care from George Mason University
doi:10.1371/journal.pmed.0040284
PMCID: PMC1989749  PMID: 17896857
13.  Red Blood Cell Transfusion and Mortality in Trauma Patients: Risk-Stratified Analysis of an Observational Study 
PLoS Medicine  2014;11(6):e1001664.
Using a large multicentre cohort, Pablo Perel and colleagues evaluate the association of red blood cell transfusion with mortality according to the predicted risk of death for trauma patients.
Please see later in the article for the Editors' Summary
Background
Haemorrhage is a common cause of death in trauma patients. Although transfusions are extensively used in the care of bleeding trauma patients, there is uncertainty about the balance of risks and benefits and how this balance depends on the baseline risk of death. Our objective was to evaluate the association of red blood cell (RBC) transfusion with mortality according to the predicted risk of death.
Methods and Findings
A secondary analysis of the CRASH-2 trial (which originally evaluated the effect of tranexamic acid on mortality in trauma patients) was conducted. The trial included 20,127 trauma patients with significant bleeding from 274 hospitals in 40 countries. We evaluated the association of RBC transfusion with mortality in four strata of predicted risk of death: <6%, 6%–20%, 21%–50%, and >50%. For this analysis the exposure considered was RBC transfusion, and the main outcome was death from all causes at 28 days. A total of 10,227 patients (50.8%) received at least one transfusion. We found strong evidence that the association of transfusion with all-cause mortality varied according to the predicted risk of death (p-value for interaction <0.0001). Transfusion was associated with an increase in all-cause mortality among patients with <6% and 6%–20% predicted risk of death (odds ratio [OR] 5.40, 95% CI 4.08–7.13, p<0.0001, and OR 2.31, 95% CI 1.96–2.73, p<0.0001, respectively), but with a decrease in all-cause mortality in patients with >50% predicted risk of death (OR 0.59, 95% CI 0.47–0.74, p<0.0001). Transfusion was associated with an increase in fatal and non-fatal vascular events (OR 2.58, 95% CI 2.05–3.24, p<0.0001). The risk associated with RBC transfusion was significantly increased for all the predicted risk of death categories, but the relative increase was higher for those with the lowest (<6%) predicted risk of death (p-value for interaction <0.0001). As this was an observational study, the results could have been affected by different types of confounding. In addition, we could not consider haemoglobin in our analysis. In sensitivity analyses, excluding patients who died early; conducting propensity score analysis adjusting by use of platelets, fresh frozen plasma, and cryoprecipitate; and adjusting for country produced results that were similar.
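As a rough illustration of how such a stratum-by-transfusion interaction can be examined, the sketch below fits logistic models with statsmodels; the data file and column names are hypothetical, and this is not the CRASH-2 analysis code (which used additional adjustment).

```python
# Sketch: logistic regression with a transfusion x risk-stratum interaction.
# The data file and column names are hypothetical, not the CRASH-2 dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("crash2_secondary.csv")
# died_28d: 1/0; transfused: 1/0; risk_stratum: '<6%', '6-20%', '21-50%', '>50%'

# Interaction model: does the transfusion effect vary across predicted-risk strata?
interaction = smf.logit("died_28d ~ transfused * C(risk_stratum)", data=df).fit(disp=0)
print(interaction.summary())

# Stratum-specific odds ratios for transfusion
for stratum, sub in df.groupby("risk_stratum"):
    m = smf.logit("died_28d ~ transfused", data=sub).fit(disp=0)
    print(stratum, "OR =", round(float(np.exp(m.params["transfused"])), 2))
```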
Conclusions
The association of transfusion with all-cause mortality appears to vary according to the predicted risk of death. Transfusion may reduce mortality in patients at high risk of death but increase mortality in those at low risk. The effect of transfusion in low-risk patients should be further tested in a randomised trial.
Trial registration
www.ClinicalTrials.gov NCT01746953
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Trauma—a serious injury to the body caused by violence or an accident—is a major global health problem. Every year, injuries caused by traffic collisions, falls, blows, and other traumatic events kill more than 5 million people (9% of annual global deaths). Indeed, for people between the ages of 5 and 44 years, injuries are among the top three causes of death in many countries. Trauma sometimes kills people through physical damage to the brain and other internal organs, but hemorrhage (serious uncontrolled bleeding) is responsible for 30%–40% of trauma-related deaths. Consequently, early trauma care focuses on minimizing hemorrhage (for example, by using compression to stop bleeding) and on restoring blood circulation after blood loss (health-care professionals refer to this as resuscitation). Red blood cell (RBC) transfusion is often used for the management of patients with trauma who are bleeding; other resuscitation products include isotonic saline and solutions of human blood proteins.
Why Was This Study Done?
Although RBC transfusion can save the lives of patients with trauma who are bleeding, there is considerable uncertainty regarding the balance of risks and benefits associated with this procedure. RBC transfusion, which is an expensive intervention, is associated with several potential adverse effects, including allergic reactions and infections. Moreover, blood supplies are limited, and the risks from transfusion are high in low- and middle-income countries, where most trauma-related deaths occur. In this study, which is a secondary analysis of data from a trial (CRASH-2) that evaluated the effect of tranexamic acid (which stops excessive bleeding) in patients with trauma, the researchers test the hypothesis that RBC transfusion may have a beneficial effect among patients at high risk of death following trauma but a harmful effect among those at low risk of death.
What Did the Researchers Do and Find?
The CRASH-2 trial included 20,127 patients with trauma and major bleeding treated in 274 hospitals in 40 countries. In their risk-stratified analysis, the researchers investigated the effect of RBC transfusion on CRASH-2 participants with a predicted risk of death (estimated using a validated model that included clinical variables such as heart rate and blood pressure) on admission to hospital of less than 6%, 6%–20%, 21%–50%, or more than 50%. That is, the researchers compared death rates among patients in each stratum of predicted risk of death who received an RBC transfusion with death rates among patients who did not receive a transfusion. Half the patients received at least one transfusion. Transfusion was associated with an increase in all-cause mortality at 28 days after trauma among patients with a predicted risk of death of less than 6% or of 6%–20%, but with a decrease in all-cause mortality among patients with a predicted risk of death of more than 50%. In absolute figures, compared to no transfusion, RBC transfusion was associated with 5.1 more deaths per 100 patients in the patient group with the lowest predicted risk of death but with 11.9 fewer deaths per 100 patients in the group with the highest predicted risk of death.
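To see how these absolute figures translate into numbers of patients needed to harm or benefit, a minimal sketch is given below; it uses only the per-100-patient differences quoted in this summary, and the helper name is ours.

    # Illustrative only: convert absolute risk differences per 100 patients
    # into the number of transfused patients per one additional or averted death.
    def patients_per_death(abs_difference_per_100):
        return 100.0 / abs_difference_per_100

    print(patients_per_death(5.1))   # lowest-risk group: ~20 transfused patients per extra death
    print(patients_per_death(11.9))  # highest-risk group: ~8 transfused patients per death averted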
What Do These Findings Mean?
These findings show that RBC transfusion is associated with an increase in all-cause deaths among patients with trauma and major bleeding with a low predicted risk of death, but with a reduction in all-cause deaths among patients with a high predicted risk of death. In other words, these findings suggest that the effect of RBC transfusion on all-cause mortality may vary according to whether a patient with trauma has a high or low predicted risk of death. However, because the participants in the CRASH-2 trial were not randomly assigned to receive an RBC transfusion, it is not possible to conclude that receiving an RBC transfusion actually increased the death rate among patients with a low predicted risk of death. It might be that the patients with this level of predicted risk of death who received a transfusion shared other unknown characteristics (confounders) that were actually responsible for their increased death rate. Thus, to provide better guidance for clinicians caring for patients with trauma and hemorrhage, the hypothesis that RBC transfusion could be harmful among patients with trauma with a low predicted risk of death should be prospectively evaluated in a randomised controlled trial.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001664.
This study is further discussed in a PLOS Medicine Perspective by Druin Burch
The World Health Organization provides information on injuries and on violence and injury prevention (in several languages)
The US Centers for Disease Control and Prevention has information on injury and violence prevention and control
The National Trauma Institute, a US-based non-profit organization, provides information about hemorrhage after trauma and personal stories about surviving trauma
The UK National Health Service Choices website provides information about blood transfusion, including a personal story about transfusion after a serious road accident
The US National Heart, Lung, and Blood Institute also provides detailed information about blood transfusions
MedlinePlus provides links to further resources on injuries, bleeding, and blood transfusion (in English and Spanish)
More information is available about CRASH-2 (in several languages)
doi:10.1371/journal.pmed.1001664
PMCID: PMC4060995  PMID: 24937305
14.  Contribution of H. pylori and Smoking Trends to US Incidence of Intestinal-Type Noncardia Gastric Adenocarcinoma: A Microsimulation Model 
PLoS Medicine  2013;10(5):e1001451.
Jennifer Yeh and colleagues examine the contribution of Helicobacter pylori and smoking trends to past and future incidence of intestinal-type noncardia gastric adenocarcinoma.
Please see later in the article for the Editors' Summary
Background
Although gastric cancer has declined dramatically in the US, the disease remains the second leading cause of cancer mortality worldwide. A better understanding of the reasons for the decline can provide important insights into effective preventive strategies. We sought to estimate the contribution of risk factor trends to past and future intestinal-type noncardia gastric adenocarcinoma (NCGA) incidence.
Methods and Findings
We developed a population-based microsimulation model of intestinal-type NCGA and calibrated it to US epidemiologic data on precancerous lesions and cancer. The model explicitly incorporated the impact of Helicobacter pylori and smoking on disease natural history, for which birth cohort-specific trends were derived from the National Health and Nutrition Examination Survey (NHANES) and National Health Interview Survey (NHIS). Between 1978 and 2008, the model estimated that intestinal-type NCGA incidence declined 60%, from 11.0 to 4.4 per 100,000 men, a <3% discrepancy from national statistics. H. pylori and smoking trends combined accounted for 47% (range = 30%–58%) of the observed decline. With no tobacco control, incidence would have declined only 56%, suggesting that lower smoking initiation and higher cessation rates observed after the 1960s accelerated the relative decline in cancer incidence by 7% (range = 0%–21%). With continued risk factor trends, incidence is projected to decline an additional 47% between 2008 and 2040, the majority of which will be attributable to H. pylori and smoking (81%; range = 61%–100%). Limitations include modeling all other risk factors influencing gastric carcinogenesis as a single combined factor and restricting the analysis to men.
Conclusions
Trends in modifiable risk factors explain a significant proportion of the decline of intestinal-type NCGA incidence in the US, and are projected to continue. Although past tobacco control efforts have hastened the decline, full benefits will take decades to be realized, and further discouragement of smoking and reduction of H. pylori should be priorities for gastric cancer control efforts.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Cancer of the stomach (gastric cancer) is responsible for a tenth of all cancer deaths worldwide, with an estimated 700,000 people dying from this malignancy every year, making it the second most common cause of global cancer-related deaths after lung cancer. Unfortunately, projections of the global burden of this disease estimate that deaths from gastric cancer will double by 2030. Gastric cancer has a poor prognosis, with only a quarter of people with this type of cancer surviving more than five years. In order to reduce deaths, it is therefore of utmost importance to identify and reduce the modifiable risk factors associated with gastric cancer. Smoking and chronic gastric infection with the bacterium Helicobacter pylori (H. pylori) are known to be two common modifiable risk factors for gastric cancer, particularly for a type of gastric cancer called intestinal-type noncardia gastric adenocarcinoma (NCGA), which occurs at the distal end of the stomach and accounts for more than half of all cases of gastric cancer in US men.
Why Was This Study Done?
H. pylori initiates a precancerous process, and so infection with this bacterium can increase intestinal-type NCGA risk by as much as 6-fold, while smoking doubles cancer risk by accelerating the progression of existing lesions. Changes in these two risk factors over the past century (especially following the US Surgeon General's Report on Smoking and Health in 1964) have led to a dramatic decline in the rates of gastric cancer in US men. Understanding the combined effects of underlying risk factor trends on health outcomes for intestinal-type NCGA at the population level can help to predict future cancer trends and burden in the US. So in this study, the researchers used a mathematical model to estimate the contribution of H. pylori and smoking trends to the decline in intestinal-type NCGA incidence in US men.
What Did the Researchers Do and Find?
The researchers used birth cohorts derived from data in two national databases, the National Health and Nutrition Examination Survey (NHANES) and National Health Interview Survey (NHIS), to develop a population-based model of intestinal-type NCGA. To ensure model predictions were consistent with epidemiologic data, the researchers calibrated the model to data on cancer and precancerous lesions and, using the model, projected population outcomes between 1978 and 2040 for a base-case scenario (in which all risk factor trends were allowed to vary over time). The researchers then evaluated alternative risk factor scenarios to provide insights into the potential benefit of past and future efforts to control gastric cancer.
Using these methods, the researchers estimated that the incidence of intestinal-type NCGA (standardized by age) fell from 11.0 to 4.4 per 100,000 men between 1978 and 2008, a drop of 60%. When the researchers incorporated only H. pylori prevalence and smoking trends into the model (both of which fell dramatically over the time period) they found that intestinal-type NCGA incidence fell by only 28% (from 12.7 to 9.2 per 100,000 men), suggesting that H. pylori and smoking trends are responsible for 47% of the observed decline. The researchers found that H. pylori trends alone were responsible for 43% of the decrease in cancer but smoking trends were responsible for only a 3% drop. The researchers also found evidence that after the 1960s, observed trends in lower smoking initiation and higher cessation accelerated the decline in intestinal-type NCGA incidence by 7%. Finally, the researchers found that intestinal-type NCGA incidence is projected to decline an additional 47% between 2008 and 2040 (4.4 to 2.3 per 100,000 men) with H. pylori and smoking trends accounting for more than 80% of the observed fall.
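One way to read the arithmetic behind the 47% figure quoted above is sketched below, using only the rounded declines reported in this summary; this is an illustrative restatement, not the study's calculation.

    # Illustrative arithmetic only, using the figures quoted in this summary.
    overall_decline = 0.60        # modelled decline, 1978-2008, with all risk factor trends varying
    risk_factor_only = 0.28       # modelled decline with only H. pylori and smoking trends included
    print(risk_factor_only / overall_decline)   # ~0.47: share of the decline attributable to these trends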
What Do These Findings Mean?
These findings suggest that the decline in H. pylori infection, combined with the fall in smoking rates, was responsible for almost half of the observed fall in rates of intestinal-type NCGA in US men between 1978 and 2008. Rates for this cancer are projected to continue to fall through 2040, with trends for both H. pylori infection and smoking accounting for more than 80% of the projected fall, highlighting the importance of the relationship between changes in risk factors over time and long-term reductions in cancer rates. This study is limited by the assumptions made in the model and by the fact that it examined only one type of gastric cancer and excluded women. Nevertheless, this modeling study highlights that continued efforts to reduce rates of smoking and H. pylori infection will help to reduce rates of gastric cancer.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001451.
The National Cancer Institute gives detailed information about gastric cancer
The Gastric Cancer Foundation has information on gastric cancer for patients and professionals
Cancer Research UK explains types of gastric cancer
doi:10.1371/journal.pmed.1001451
PMCID: PMC3660292  PMID: 23700390
15.  Transnational Tobacco Company Interests in Smokeless Tobacco in Europe: Analysis of Internal Industry Documents and Contemporary Industry Materials 
PLoS Medicine  2013;10(9):e1001506.
In light of lobbying by transnational tobacco companies to remove the European Union ban on the sale of snus (a smokeless tobacco product), Silvy Peeters and Anna Gilmore explore the motivation behind tobacco companies' interests in smokeless tobacco products in Europe.
Please see later in the article for the Editors' Summary
Background
European Union (EU) legislation bans the sale of snus, a smokeless tobacco (SLT) product that is considerably less harmful than smoking, in all EU countries other than Sweden. To inform the current review of this legislation, this paper aims to explore transnational tobacco company (TTC) interests in SLT and pure nicotine in Europe from the 1970s to the present, comparing them with TTCs' public claims of support for harm reduction.
Methods and Results
Internal tobacco industry documents (in total 416 documents dating from 1971 to 2009), obtained via searching the online Legacy Tobacco Documents Library, were analysed using a hermeneutic approach. This library comprises documents obtained via litigation in the US and does not include documents from Imperial Tobacco, Japan Tobacco International, or Swedish Match. To help overcome this limitation and provide more recent data, we triangulated our documentary findings with contemporary documentation including TTC investor presentations. The analysis demonstrates that British American Tobacco explored SLT opportunities in Europe from 1971 driven by regulatory threats and health concerns, both likely to impact cigarette sales negatively, and the potential to create a new form of tobacco use among those no longer interested in taking up smoking. Young people were a key target. TTCs did not, however, make SLT investments until 2002, a time when EU cigarette volumes started declining, smoke-free legislation was being introduced, and public health became interested in harm reduction. All TTCs have now invested in snus (and recently in pure nicotine), yet both early and recent snus test markets appear to have failed, and little evidence was found in TTCs' corporate materials that snus is central to their business strategy.
Conclusions
There is clear evidence that BAT's early interest in introducing SLT in Europe was based on the potential for creating an alternative form of tobacco use in light of declining cigarette sales and social restrictions on smoking, with young people a key target. We conclude that by investing in snus, and recently nicotine, TTCs have eliminated competition between cigarettes and lower-risk products, thus helping maintain the current market balance in favour of (highly profitable) cigarettes while ensuring TTCs' long-term future should cigarette sales decline further and profit margins be eroded.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every year, about 5 million people die from cancer, heart disease, and other tobacco-related diseases. In recent years, to reduce this growing loss of life, international and national bodies have drawn up various tobacco control conventions and directives. For example, the European Union (EU) Directives on tobacco control call for member states to ban tobacco advertising, promotion, and sponsorship and to adopt taxation policies aimed at reducing tobacco consumption. The 2001 EU Tobacco Products Directive also bans the sale of snus, a form of smokeless tobacco (SLT), in all EU countries except Sweden. Snus, which originated in Sweden in the early 19th century, is a moist tobacco product that is placed under the upper lip. Although snus is considerably less harmful than smoking, the sale of snus was banned in the EU in 1992 because of fears that it might cause cancer and was being marketed to young people. When Sweden joined the EU in 1994, exemption from the ban was made a condition of the membership treaty.
Why Was This Study Done?
Transnational tobacco companies (TTCs) have been investing in European snus manufacturers since 2002 and more recently in pure nicotine products, and it has been suggested that, faced with declining cigarette markets in Europe and elsewhere, TTCs are preparing for a “post-cigarette era”. Since 2008, TTCs have been lobbying EU member states and the European Commission to remove the ban on snus sales, arguing that public health would be improved if governments allowed potentially reduced-harm products like snus onto the market. At the end of 2012, however, the European Commission proposed that the ban on snus sales should be continued. Here, to help inform this controversial policy debate, the researchers explore the interest of TTCs in SLT and pure nicotine in Europe from the 1970s to the present by examining internal tobacco documents and compare these interests with public claims of support for harm reduction made by TTCs.
What Did the Researchers Do and Find?
By searching the Legacy Tobacco Documents Library (internal tobacco industry documents released following US litigation cases), the researchers identified 416 documents that detail the historical interest of TTCs in SLT and pure nicotine and their efforts to enter European markets, and to influence national and EU public-health policy. The researchers analyzed these documents using a “hermeneutic” approach—methodical reading and re-reading of the documents to identify themes and sub-themes. Finally, they used TTC investor presentations and other documents to confirm these themes and to provide recent data on TTC investment in SLT. British American Tobacco (BAT) explored the opportunities for marketing SLT products in Europe from 1971 onwards. This exploration was driven by regulatory threats and health concerns, both of which were likely to impact tobacco sales, and by the potential to create a new form of tobacco use among people no longer interested in taking up smoking. TTCs did not begin to invest in SLT, however, until 2002, a time when EU cigarette sale volumes started to decline, smoke-free legislation was being introduced, and tobacco harm reduction first became a major public-health issue. All the TTCs have now invested in snus even though snus test markets appear to have failed and even though there is little evidence in corporate materials that snus is central to the business strategy of TTCs.
What Do These Findings Mean?
These findings suggest that BAT's early interest in SLT in Europe was driven by business concerns and was based on the potential for creating an alternative form of tobacco use among people—particularly young people—who would no longer take up smoking because of health concerns. They also suggest that TTC investments in snus were defensive—by buying up snus manufacturers and more recently nicotine producers, TTCs have eliminated competition between cigarettes and lower-risk products, thereby helping to maintain the current market balance in favor of cigarettes while ensuring the long-term future of TTCs should cigarette sales decline further. Although these findings are limited by the possibility that some relevant documents may have been omitted from this analysis, they nevertheless raise the concern that, if TTC investment in SLT continues, competition between cigarettes and SLT will reduce the potential for harm reduction to benefit public health. Legalization of snus sales in the European Union may therefore have considerably less benefit than envisaged.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001506.
The World Health Organization provides information about the dangers of tobacco (in several languages) and about the Framework Convention on Tobacco Control, an international treaty for tobacco control; for information about the tobacco industry's influence on policy, see the 2009 World Health Organization report Tobacco industry interference with tobacco control
Details of European Union legislation on the manufacture, presentation, and sale of tobacco products are available (in several languages)
Wikipedia has pages on tobacco harm reduction and on snus (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The Legacy Tobacco Documents Library is a searchable public database of tobacco company internal documents detailing their advertising, manufacturing, marketing, sales, and scientific activities
The UK Centre for Tobacco Control Studies is a network of UK universities that undertakes original research, policy development, advocacy, and teaching and training in the field of tobacco control
SmokeFree, a website provided by the UK National Health Service, offers advice on quitting smoking and includes personal stories from people who have stopped smoking
Smokefree.gov, from the US National Cancer Institute, offers online tools and resources to help people quit smoking
TobaccoTactics.org, an online resource managed by the University of Bath, provides up-to-date information on the tobacco industry and their tactics to influence tobacco regulation
doi:10.1371/journal.pmed.1001506
PMCID: PMC3769209  PMID: 24058299
16.  A data-driven model of biomarker changes in sporadic Alzheimer's disease 
Brain  2014;137(9):2564-2577.
Young et al. reformulate an event-based model for the progression of Alzheimer's disease to make it applicable to a heterogeneous sporadic disease population. The enhanced model predicts the ordering of biomarker abnormality in sporadic Alzheimer's disease independently of clinical diagnoses or biomarker cut-points, and shows state-of-the-art diagnostic classification performance.
We demonstrate the use of a probabilistic generative model to explore the biomarker changes occurring as Alzheimer’s disease develops and progresses. We enhanced the recently introduced event-based model for use with a multi-modal sporadic disease data set. This allows us to determine the sequence in which Alzheimer’s disease biomarkers become abnormal without reliance on a priori clinical diagnostic information or explicit biomarker cut points. The model also characterizes the uncertainty in the ordering and provides a natural patient staging system. Two hundred and eighty-five subjects (92 cognitively normal, 129 mild cognitive impairment, 64 Alzheimer’s disease) were selected from the Alzheimer’s Disease Neuroimaging Initiative with measurements of 14 Alzheimer’s disease-related biomarkers including cerebrospinal fluid proteins, regional magnetic resonance imaging brain volume and rates of atrophy measures, and cognitive test scores. We used the event-based model to determine the sequence of biomarker abnormality and its uncertainty in various population subgroups. We used patient stages assigned by the event-based model to discriminate cognitively normal subjects from those with Alzheimer’s disease, and predict conversion from mild cognitive impairment to Alzheimer’s disease and cognitively normal to mild cognitive impairment. The model predicts that cerebrospinal fluid levels become abnormal first, followed by rates of atrophy, then cognitive test scores, and finally regional brain volumes. In amyloid-positive (cerebrospinal fluid amyloid-β1–42 < 192 pg/ml) or APOE-positive (one or more APOE4 alleles) subjects, the model predicts with high confidence that the cerebrospinal fluid biomarkers become abnormal in a distinct sequence: amyloid-β1–42, phosphorylated tau, total tau. However, in the broader population total tau and phosphorylated tau are found to be earlier cerebrospinal fluid markers than amyloid-β1–42, albeit with more uncertainty. The model’s staging system strongly separates cognitively normal and Alzheimer’s disease subjects (maximum classification accuracy of 99%), and predicts conversion from mild cognitive impairment to Alzheimer’s disease (maximum balanced accuracy of 77% over 3 years), and from cognitively normal to mild cognitive impairment (maximum balanced accuracy of 76% over 5 years). By fitting Cox proportional hazards models, we find that baseline model stage is a significant risk factor for conversion from both mild cognitive impairment to Alzheimer’s disease (P = 2.06 × 10−7) and cognitively normal to mild cognitive impairment (P = 0.033). The data-driven model we describe supports hypothetical models of biomarker ordering in amyloid-positive and APOE-positive subjects, but suggests that biomarker ordering in the wider population may diverge from this sequence. The model provides useful disease staging information across the full spectrum of disease progression, from cognitively normal to mild cognitive impairment to Alzheimer’s disease. This approach has broad application across neurodegenerative disease, providing insights into disease biology, as well as staging and prognostication.
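As a rough illustration of the event-based idea described above (this is not the authors' code; the array names, precomputed per-biomarker likelihoods, and uniform prior over stages are our assumptions), the likelihood of a candidate biomarker ordering can be computed by summing over each subject's unknown stage:

    import numpy as np

    def ebm_log_likelihood(ordering, p_abnormal, p_normal):
        # ordering: biomarker indices in the order they are assumed to become abnormal
        # p_abnormal[i, j] / p_normal[i, j]: likelihood of subject i's value of biomarker j
        # under the abnormal / normal measurement models (assumed precomputed)
        n_subjects, n_biomarkers = p_abnormal.shape
        ordering = np.asarray(ordering)
        log_lik = 0.0
        for i in range(n_subjects):
            stage_liks = []
            for k in range(n_biomarkers + 1):        # at stage k, the first k events have occurred
                abnormal, normal = ordering[:k], ordering[k:]
                stage_liks.append(np.prod(p_abnormal[i, abnormal]) *
                                  np.prod(p_normal[i, normal]))
            log_lik += np.log(np.mean(stage_liks))   # uniform prior over stages
        return log_lik

In practice, the ordering that maximizes such a likelihood is found by search, and each subject's most probable stage under that ordering yields the kind of patient staging system described above.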
doi:10.1093/brain/awu176
PMCID: PMC4132648  PMID: 25012224
event-based model; disease progression; Alzheimer’s disease; biomarkers; biomarker ordering
17.  Implantable Cardioverter Defibrillators. Prophylactic Use 
Executive Summary
Objective
The use of implantable cardiac defibrillators (ICDs) to prevent sudden cardiac death (SCD) in patients resuscitated from cardiac arrest or documented dangerous ventricular arrhythmias (secondary prevention of SCD) is an insured service. In 2003 (before the establishment of the Ontario Health Technology Advisory Committee), the Medical Advisory Secretariat conducted a health technology policy assessment on the prophylactic use (primary prevention of SCD) of ICDs for patients at high risk of SCD. The Medical Advisory Secretariat concluded that ICDs are effective for the primary prevention of SCD. Moreover, it found that a more clearly defined target population at risk for SCD that would be likely to benefit from ICDs is needed, given that the number needed to treat (NNT) from recent studies is 13 to 18, and given that the per-unit cost of ICDs is $32,000, which means that the projected cost to Ontario is $770 million (Cdn).
Accordingly, as part of an annual review and publication of more recent articles, the Medical Advisory Secretariat updated its health technology policy assessment of ICDs.
Clinical Need
Sudden cardiac death is caused by the sudden onset of fatal arrhythmias, or abnormal heart rhythms: ventricular tachycardia (VT), a rhythm abnormality in which the ventricles cause the heart to beat too fast, and ventricular fibrillation (VF), an abnormal, rapid and erratic heart rhythm. About 80% of fatal arrhythmias are associated with ischemic heart disease, which is caused by insufficient blood flow to the heart.
Management of VT and VF with antiarrhythmic drugs is not very effective; for this reason, nonpharmacological treatments have been explored. One such treatment is the ICD.
The Technology
An ICD is a battery-powered device that, once implanted, monitors heart rhythm and can deliver an electric shock to restore normal rhythm when potentially fatal arrhythmias are detected. The use of ICDs to prevent SCD in patients resuscitated from cardiac arrest or documented dangerous ventricular arrhythmias (secondary prevention) is an insured service in Ontario.
Primary prevention of SCD involves identification of and preventive therapy for patients who are at high risk for SCD. Most of the studies in the literature that have examined the prevention of fatal ventricular arrhythmias have focused on patients with ischemic heart disease, in particular, those with heart failure (HF), which has been shown to increase the risk of SCD. The risk of SCD in patients with HF is stratified by left ventricular ejection fraction (LVEF); most studies have focused on patients with an LVEF under 0.35 or 0.30. While most studies have found that ICDs significantly reduce the risk of SCD in patients with an LVEF less than 0.35, a more recent study (Sudden Cardiac Death in Heart Failure Trial [SCD-HeFT]) reported that patients with HF with nonischemic heart disease could also benefit from this technology. Generalizing from the SCD-HeFT study, the Centers for Medicare and Medicaid Services in the United States recently announced that it would allocate $10 billion (US) annually toward the primary prevention of SCD for patients with ischemic and nonischemic heart disease and an LVEF under 0.35.
Review Strategy
The aim of this literature review was to assess the effectiveness, safety, and cost effectiveness of ICDs for the primary prevention of SCD.
The standard search strategy used by the Medical Advisory Secretariat was used. This included a search of all international health technology assessments as well as a search of the medical literature from January 2003–May 2005.
A modification of the GRADE approach (1) was used to make judgments about the quality of evidence and strength of recommendations systematically and explicitly. GRADE provides a framework for structured reflection and can help to ensure that appropriate judgments are made. GRADE takes into account a study’s design, quality, consistency, and directness in judging the quality of evidence for each outcome. The balance between benefits and harms, quality of evidence, applicability, and the certainty of the baseline risks are considered in judgments about the strength of recommendations.
Summary of Findings
Overall, ICDs are effective for the primary prevention of SCD. Three studies – the Multicentre Automatic Defibrillator Implantation Trial I (MADIT I), the Multicentre Automatic Defibrillator Implantation Trial II (MADIT II), and SCD-HeFT – showed there was a statistically significant decrease in total mortality for patients who prophylactically received an ICD compared with those who received conventional therapy (Table 1).
Results of Key Studies on the Use of Implantable Cardioverter Defibrillators for the Primary Prevention of Sudden Cardiac Death – All-Cause Mortality
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
EP indicates electrophysiology; ICD, implantable cardioverter defibrillator; NNT, number needed to treat; NSVT, nonsustained ventricular tachycardia. The NNT will appear higher if follow-up is short. For ICDs, the absolute benefit increases over time for at least a 5-year period; the NNT declines, often substantially, in studies with a longer follow-up. When the NNT are equalized for a similar period as the SCD-HeFT duration (5 years), the NNT for MADIT-I is 2.2; for MADIT-II, it is 6.3.
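Because the NNT is the reciprocal of the absolute risk reduction, absolute benefit accumulating over longer follow-up drives the NNT down, as described above. A minimal sketch with hypothetical event rates (not trial data) is:

    # Illustrative only: NNT = 1 / absolute risk reduction (ARR).
    def nnt(control_risk, icd_risk):
        return 1.0 / (control_risk - icd_risk)

    # With a constant relative benefit, absolute risk differences grow as events
    # accumulate over longer follow-up, so the NNT falls (hypothetical numbers).
    print(nnt(0.10, 0.06))   # short follow-up: NNT = 25
    print(nnt(0.30, 0.18))   # 5-year follow-up: NNT ≈ 8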
GRADE Quality of the Evidence
Using the GRADE Working Group criteria, the quality of these 3 trials was examined (Table 2).
Quality refers to criteria such as the adequacy of allocation concealment, blinding, and follow-up.
Consistency refers to the similarity of estimates of effect across studies. If there is important unexplained inconsistency in the results, our confidence in the estimate of effect for that outcome decreases. Differences in the direction of effect, the size of the differences in effect, and the significance of the differences guide the decision about whether important inconsistency exists.
Directness refers to the extent to which the people, interventions, and outcome measures are similar to those of interest. For example, there may be uncertainty about the directness of the evidence if the people of interest are older, sicker, or have more comorbidity than those in the studies.
As stated by the GRADE Working Group, the following definitions were used to grade the quality of the evidence:
High: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low: Any estimate of effect is very uncertain.
Quality of Evidence – MADIT I, MADIT II, and SCD-HeFT*
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
The 3 trials had 3 different sets of eligibility criteria for implantation of an ICD for primary prevention of SCD.
Conclusions
Overall, there is evidence that ICDs are effective for the primary prevention of SCD. Three trials have found a statistically significant decrease in total mortality for patients who prophylactically received an ICD compared with those who received conventional therapy in their respective study populations.
As per the GRADE Working Group, recommendations consider 4 main factors:
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates, and the relative value placed on the outcome;
The quality of the evidence (Table 2);
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects, such as proximity to a hospital or availability of necessary expertise; and
Uncertainty about the baseline risk for the population of interest
The GRADE Working Group also recommends that incremental costs of health care alternatives should be considered explicitly with the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 3 is the overall trade-off between benefits and harms and incorporates any risk or uncertainty.
For MADIT I, the overall GRADE and strength of the recommendation is “moderate” – the quality of the evidence is “moderate” (uncertainty due to methodological limitations in the study design), and risk/uncertainty in cost and budget impact was mitigated by the use of filters to help target the prevalent population at risk (Table 3).
For MADIT II, the overall GRADE and strength of the recommendation is “very weak” – the quality of the evidence is “weak” (uncertainty due to methodological limitations in the study design), but there is risk or uncertainty regarding the high prevalence, cost, and budget impact. It is not clear why screening for high-risk patients was dropped, given that in MADIT II the absolute reduction in mortality was small (5.6%) compared to MADIT I, which used electrophysiological screening (23%) (Table 3).
For SCD-HeFT, the overall GRADE and strength of the recommendation is “weak” – the study quality is “moderate,” but there is also risk/uncertainty due to a high NNT at 5 years (13 compared to the MADIT II NNT of 6 and MADIT I NNT of 2 at 5 years), high prevalent population (N = 23,700), and a high budget impact ($770 million). A filter (as demonstrated in MADIT 1) is required to help target the prevalent population at risk and mitigate the risk or uncertainty relating to the high NNT, prevalence, and budget impact (Table 3).
The results of the most recent ICD trial (SCD-HeFT) are not generalizable to the prevalent population in Ontario (Table 3). Given that the current funding rate of an ICD is $32,500 (Cdn), the estimated budget impact for Ontario would be as high as $770 million (Cdn). The uncertainty around the cost estimate of treating the prevalent population with LVEF < 0.30 in Ontario, the lack of human resources to implement such a strategy, and the high number of patients required to prevent one SCD (NNT = 13) call for an alternative strategy that allows the appropriate uptake and diffusion of ICDs for primary prevention for patients at maximum risk for SCD within the SCD-HeFT population.
The uptake and diffusion of ICDs for primary prevention of SCD should therefore be based on risk stratification through the use of appropriate screen(s) that would identify patients at highest risk who could derive the most benefit from this technology.
Overall GRADE and Strength of Recommendation for the Use of Implantable Cardioverter Defibrillators for the Primary Prevention of Sudden Cardiac Death
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
NNT indicates number needed to treat. The NNT will appear higher if follow-up is short. For ICDs, the absolute benefit increases over time for at least a 5-year period; the NNT declines, often substantially, in studies with a longer follow-up. When the NNT are equalized for a similar period as the SCD-HeFT duration (5 years), the NNT for MADIT-I is 2.2; for MADIT-II, it is 6.3.
NSVT indicates nonsustained ventricular tachycardia; VT, ventricular tachycardia.
PMCID: PMC3382404  PMID: 23074465
18.  Impact of early intervention and disease modification in patients with predementia Alzheimer’s disease: a Markov model simulation 
Background:
Early screening involving biomarkers and the use of potential disease-modifying therapies (DMTs) may have significant humanistic implications for treatment strategies in Alzheimer’s disease.
Methods:
Markov models simulated transitions of patient cohorts beginning in predementia, a hypothetical early stage of Alzheimer’s disease marked by objective cognitive impairment/memory complaints without functional impairment, and followed for 10 years. Hypothetical cohorts of 10,000 patients included those who were treated with standard of care (donepezil) upon reaching mild–moderate Alzheimer’s disease, a DMT in predementia, and a DMT in mild–moderate Alzheimer’s disease. Transition probabilities were based on data from the Alzheimer’s Disease Neuroimaging Initiative and published clinical data, and were estimated for the hypothetical DMT. Time spent in each disease stage (predementia, mild, moderate, or severe) was computed; costs were estimated from literature review and published data, and mortality rates were taken from published data. The impact of screening was evaluated using the positive predictive value (the proportion of patients identified as predementia who were truly at risk of transition to dementia).
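The cohort simulation described above can be outlined as a simple Markov chain. The states below mirror those named in the study, but the transition probabilities are hypothetical placeholders for illustration, not the ADNI-derived estimates used in the analysis.

    import numpy as np

    states = ["predementia", "mild", "moderate", "severe", "dead"]
    # Hypothetical annual transition probabilities (rows sum to 1), illustration only.
    P = np.array([
        [0.80, 0.13, 0.02, 0.00, 0.05],   # predementia
        [0.00, 0.70, 0.20, 0.03, 0.07],   # mild
        [0.00, 0.00, 0.65, 0.25, 0.10],   # moderate
        [0.00, 0.00, 0.00, 0.80, 0.20],   # severe
        [0.00, 0.00, 0.00, 0.00, 1.00],   # dead (absorbing)
    ])

    cohort = np.array([10_000.0, 0, 0, 0, 0])           # everyone starts in predementia
    years_in_state = np.zeros(len(states))
    for _ in range(10):                                  # 10-year horizon, as in the study design
        cohort = cohort @ P                              # one annual cycle
        years_in_state += cohort                         # accumulate person-years by state
    print(dict(zip(states, years_in_state / 10_000)))    # mean years per patient in each state

A disease-modifying therapy would be represented by scaling down the progression probabilities (for example, a 25% reduction in the annual risk of leaving predementia) and re-running the same loop.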
Results:
Earlier treatment yielded modest gains in total life-years; however, the distribution was skewed towards milder disease. Assuming a 25% reduction in the annual risk of progression, treating predementia patients with a DMT increased average life-years spent in the predementia and mild states from 3.2 to 4.2, while life-years spent in moderate-to-severe Alzheimer’s disease decreased from 2.6 to 2.2. Average time in the community increased from 4.4 to 5.4 years, while time in long-term care declined from 1.3 to 0.9 years. This impact grows as the efficacy advantage of the novel agent increases. Screening accuracy had significant implications for cost-effectiveness.
Conclusion:
If screening can accurately identify predementia patients at risk for progression, earlier treatment with DMTs has the potential benefit to patients of prolonging time in milder disease, reducing time spent with more severe disease, increasing time in the community, and reducing time in long-term care.
doi:10.2147/CEOR.S22265
PMCID: PMC3202482  PMID: 22046104
Alzheimer’s disease; Markov model; disease-modifying therapy; donepezil; standard of care; predementia
19.  Prospects for Advancing Tuberculosis Control Efforts through Novel Therapies 
PLoS Medicine  2006;3(8):e273.
Background
Development of new, effective, and affordable tuberculosis (TB) therapies has been identified as a critical priority for global TB control. As new candidates emerge from the global TB drug pipeline, the potential impacts of novel, shorter regimens on TB incidence and mortality have not yet been examined.
Methods and Findings
We used a mathematical model of TB to evaluate the expected benefits of shortening the duration of effective chemotherapy for active pulmonary TB. First, we considered general relationships between treatment duration and TB dynamics. Next, as a specific example, we calibrated the model to reflect the current situation in the South-East Asia region. We found that even with continued and rapid progress in scaling up the World Health Organization's DOTS strategy of directly observed, short-course chemotherapy, the benefits of reducing treatment duration would be substantial. Compared to a baseline of continuing DOTS coverage at current levels, and with currently available tools, a 2-mo regimen introduced by 2012 could prevent around 20% (range 13%–28%) of new cases and 25% (range 19%–29%) of TB deaths in South-East Asia between 2012 and 2030. If effective treatment with existing drugs expands rapidly, overall incremental benefits of shorter regimens would be lower, but would remain considerable (13% [range 8%–19%] and 19% [range 15%–23%] reductions in incidence and mortality, respectively, between 2012 and 2030). A ten-year delay in the introduction of new drugs would erase nearly three-fourths of the total expected benefits in this region through 2030.
Conclusions
The introduction of new, shorter treatment regimens could dramatically accelerate the reductions in TB incidence and mortality that are expected under current regimens—with up to 2- or 3-fold increases in rates of decline if shorter regimens are accompanied by enhanced case detection. Continued progress in reducing the global TB burden will require a balanced approach to pursuing new technologies while promoting wider implementation of proven strategies.
Mathematical modeling suggests that new tuberculosis treatments that are shorter than the current 6-month standard regimen would lead to considerable reductions in incidence and mortality of TB, which remains the leading cause of global deaths.
Editors' Summary
Background.
One third of the world's population is infected with Mycobacterium tuberculosis, the bacterium that is the main cause of tuberculosis (TB). In most people, the infection remains dormant, but every year eight million people develop active TB, usually in their lungs, and two million people die from the disease. For most of the second half of the 20th century, TB was in decline, particularly in developed countries, due to the availability of powerful antibiotics. Recently, however, global efforts to control TB have been set back by the HIV/AIDS epidemic—people with damaged immune systems are very susceptible to TB—and the emergence of antibiotic-resistant bacteria. In the 1990s, the World Health Organization (WHO) introduced DOTS as the recommended strategy for global TB control. Central to DOTS is “directly observed short-course chemotherapy.” To cure TB, several antibiotics have to be taken daily for 6 months. Patients must complete this treatment, even if they feel better sooner, to prevent relapse and the emergence of drug-resistant bacteria. The DOTS approach ensures that patients do this by having trained observers watch them swallow each dose of their medication for the entire 6-month period.
Why Was This Study Done?
This year, WHO and partners launched a renewed Global Plan to Stop TB, which aims to reduce, by 2015, the number of people who are sick with TB (disease prevalence) and the number of people who die each year from the disease (mortality) to half the 1990 levels. Because sick people often infect others, reducing disease prevalence will also reduce the number of new infections each year (disease incidence). The Global Plan to Stop TB includes a commitment to expand and intensify the DOTS strategy (in the year 2004, only around half of new active, infectious TB cases were detected under DOTS). The drug combinations currently used for DOTS consist of four or more different antibiotics, which have all been around and in use for many years. Recently, renewed effort has gone into the search for new TB treatments. Several candidate drugs have been identified and are now being tested, and scientists expect that some of them will be able to cure patients quicker than the current 6-month regimen. In this study, the researchers wanted to understand the potential benefits of such shorter treatments.
What Did the Researchers Do and Find?
The researchers developed a mathematical model that considers the acquisition, progression, diagnosis, and treatment of M. tuberculosis infection and its clinical consequences. They used the model to predict how changes in treatment duration might affect TB incidence, prevalence, and mortality. In the model, shorter treatment durations are connected to higher cure rates, as each additional month of treatment means more time during which patients must keep taking their medications, and more doses that might be missed. Since patients who drop out of treatment early can continue to infect other people, the model shows how a 2-month course of antibiotics will produce a quicker decline in the incidence of TB than a 6-month course by reducing these opportunities to infect others.
  The researchers then refined their model to include current conditions in South-East Asia, an area where DOTS is being scaled up and where one-third of all new TB cases occur. This “calibrated” model indicates that even if DOTS is scaled up as planned, shorter drug courses would still reduce TB incidence and deaths much quicker than a 6-month course. If, for example, a 2-month drug treatment were introduced by 2012, it might prevent 13% of the new cases and 19% of the TB deaths that would otherwise occur in South-East Asia between 2012 and 2030. These benefits might be even greater if the new drug regimen freed up resources to improve the systematic effort to detect new TB cases. On the other hand, delaying its implementation until 2022 would erase three-quarters of the predicted benefits.
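The duration-to-completion mechanism described above can be illustrated with a toy calculation; the monthly default probability below is a hypothetical value, not an input to the researchers' model.

    # Illustrative only: if patients default with a constant monthly probability,
    # the chance of completing therapy falls geometrically with regimen length.
    monthly_default = 0.05
    for months in (2, 6):
        completion = (1 - monthly_default) ** months
        print(f"{months}-month regimen: {completion:.0%} of patients complete treatment")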
What Do These Findings Mean?
Like all mathematical models, this one makes many assumptions that, if incorrect, will change the predictions. With this caveat, the study confirms that the planned scale-up of DOTS will greatly reduce TB incidence and mortality over the next few years. However, it also suggests that the impact of DOTS could be substantially improved by introducing shorter drug regimens. The earlier such shorter treatments become available, the greater their benefits would be. By reducing the opportunities for patients to default on their treatment and by shortening the period during which they can infect others, the researchers predict that the rate of decline in TB incidence and mortality could be up to three times higher for antibiotics that only have to be taken for 2 months compared with those that need to be taken for 6 months. But, they stress, strategies for reducing the global TB burden must strike a balance between pursuing new treatment and detection strategies and promoting wider implementation of proven strategies. And while there is hope that new shorter treatments will prove effective over the next few years, this will only become clear as candidates are rigorously tested.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030273.
• US National Institute of Allergy and Infectious Diseases patient fact sheet on TB
• US Centers for Disease Control and Prevention information for patients and professionals on TB
• MedlinePlus encyclopedia entry on TB
• NHS Direct Online patient information on TB from the UK National Health Service
• World Health Organization information on global TB control
• Global Alliance for TB Drug Development information on current initiatives to develop new TB drugs
• Wikipedia page on TB treatment (note that Wikipedia is a free online encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.0030273
PMCID: PMC1523376  PMID: 16866578
20.  Obstructive Sleep Apnea and Risk of Cardiovascular Events and All-Cause Mortality: A Decade-Long Historical Cohort Study 
PLoS Medicine  2014;11(2):e1001599.
Tetyana Kendzerska and colleagues explore the association between physiological measures of obstructive sleep apnea other than the apnea-hypopnea index and the risk of cardiovascular events.
Please see later in the article for the Editors' Summary
Background
Obstructive sleep apnea (OSA) has been reported to be a risk factor for cardiovascular (CV) disease. Although the apnea-hypopnea index (AHI) is the most commonly used measure of OSA, other less well studied OSA-related variables may be more pathophysiologically relevant and offer better prediction. The objective of this study was to evaluate the relationship between OSA-related variables and risk of CV events.
Methods and Findings
A historical cohort study was conducted using a clinical database and health administrative data. Adults referred for suspected OSA who underwent diagnostic polysomnography at the sleep laboratory at St Michael's Hospital (Toronto, Canada) between 1994 and 2010 were followed through provincial health administrative data (Ontario, Canada) until May 2011 to examine the occurrence of a composite outcome (myocardial infarction, stroke, congestive heart failure, revascularization procedures, or death from any cause). Cox regression models were used to investigate the association between baseline OSA-related variables and the composite outcome, controlling for traditional risk factors. The results were expressed as hazard ratios (HRs) and 95% CIs; for continuous variables, HRs compare the 75th and 25th percentiles. Over a median follow-up of 68 months, 1,172 (11.5%) of 10,149 participants experienced our composite outcome. In a fully adjusted model, OSA-related variables other than the AHI were significant independent predictors: time spent with oxygen saturation <90% (9 minutes versus 0; HR = 1.50, 95% CI 1.25–1.79), sleep time (4.9 versus 6.4 hours; HR = 1.20, 95% CI 1.12–1.27), awakenings (35 versus 18; HR = 1.06, 95% CI 1.02–1.10), periodic leg movements (13 versus 0/hour; HR = 1.05, 95% CI 1.03–1.07), heart rate (70 versus 56 beats per minute [bpm]; HR = 1.28, 95% CI 1.19–1.37), and daytime sleepiness (HR = 1.13, 95% CI 1.01–1.28). The main study limitation was lack of information about continuous positive airway pressure (CPAP) adherence.
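The reporting convention noted above, in which hazard ratios for continuous variables are expressed as the contrast between the 75th and 25th percentiles, can be sketched with the lifelines survival-analysis package; the data frame and column names are hypothetical placeholders, not the study's data.

    import numpy as np
    from lifelines import CoxPHFitter

    def interquartile_hazard_ratio(df, covariate, duration_col="time", event_col="event"):
        # Fit a Cox proportional hazards model on the columns of df, then express
        # the covariate's hazard ratio per interquartile-range increase.
        cph = CoxPHFitter()
        cph.fit(df, duration_col=duration_col, event_col=event_col)
        beta = cph.params_[covariate]                     # log hazard ratio per unit increase
        q25, q75 = df[covariate].quantile([0.25, 0.75])
        return np.exp(beta * (q75 - q25))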
Conclusion
OSA-related factors other than the AHI were shown to be important predictors of the composite CV outcome and should be considered in future studies and in clinical practice.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder, particularly among middle-aged and elderly people. It is characterized by apnea—a brief interruption in breathing that lasts at least 10 seconds—and hypopnea—a decrease of more than 50% in the amplitude of breathing that lasts at least 10 seconds, or a clear but smaller decrease in amplitude associated with either oxygen desaturation or an arousal. Patients with OSA experience numerous episodes of apnea and hypopnea during the night; severe OSA is defined as having 30 or more episodes per hour (an apnea-hypopnea index [AHI] of >30). These breathing interruptions occur when relaxation of the upper airway muscles decreases the airflow, which lowers the amount of oxygen in the blood. As a result, affected individuals frequently wake from deep sleep as they struggle to breathe. Symptoms of OSA include loud snoring and daytime sleepiness. Treatments include lifestyle changes such as losing weight (excess fat around the neck increases airway collapse) and smoking cessation. For severe OSA, doctors recommend continuous positive airway pressure (CPAP), in which a machine blows pressurized air through a face mask into the airway to keep it open.
Why Was This Study Done?
OSA can be life-threatening. Most directly, daytime sleepiness can cause accidents, but OSA is also associated with an increased risk of developing cardiovascular disease (CVD, disease that affects the heart and the circulation). To date, studies that have investigated the association between OSA and the risk of myocardial infarction (heart attack), congestive heart failure, stroke, and other CVDs have used the AHI to diagnose and categorize the severity of OSA. However, by focussing on AHI, clinicians and researchers may be missing opportunities to improve their ability to predict which patients are at the highest risk of CVD. In this historical cohort study, the researchers investigate the association between other OSA-related variables (for example, blood oxygen saturation and sleep fragmentation) and the risk of cardiovascular events and all-cause mortality (death). A historical cohort study examines the medical records of groups of individuals who have different characteristics at baseline for the subsequent occurrence of specific outcomes.
What Did the Researchers Do and Find?
The researchers used administrative data (including hospitalization records and physicians' claims for services supplied to patients) to follow up adults referred for suspected OSA who underwent diagnostic polysomnography (a sleep study) at a single Canadian hospital between 1994 and 2010. A database of the polysomnography results provided information on OSA-related variables for all the study participants. Over an average follow-up of about 6 years, 11.5% of the 10,149 participants were hospitalized for a myocardial infarction, stroke, or congestive heart failure, underwent a revascularization procedure (an intervention that restores the blood supply to an organ or tissue after CVD has blocked a blood vessel), or had died from any cause. After adjusting for multiple established risk factors for CVD such as smoking and age in Cox regression models (a statistical approach that examines associations between patient variables and outcomes), several OSA-related variables (but not AHI) were significant predictors of CVD. The strongest OSA-related predictor of cardiovascular events or all-cause mortality was total sleep time spent with oxygen saturation below 90%, which increased the risk of a cardiovascular event or death by 50%. Other statistically significant OSA-related predictors (predictors that were unlikely to be associated with the outcome through chance) of cardiovascular events or death included total sleep time, number of awakenings, frequency of periodic leg movements, heart rate, and daytime sleepiness.
What Do These Findings Mean?
These findings indicate that OSA-related factors other than AHI are important predictors of the composite outcome of a cardiovascular event or all-cause mortality. Indeed, although AHI was significantly associated with the researchers' composite outcome in an analysis that did not consider other established risk factors for CVD (“confounders”), the association became non-significant after controlling for potential confounders. The accuracy of these findings, which need to be confirmed in other settings, is likely to be limited by the lack of information available about the use of CPAP by study participants and by the lack of adjustment for some important confounders. Importantly, however, these findings suggest that OSA-related factors other than AHI should be considered as predictors of CVD in future studies and in clinical practice.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001599.
The US National Heart Lung and Blood Institute has information (including several videos) about obstructive sleep apnea (in English and Spanish), sleep studies, heart disease, and other cardiovascular diseases (some information in English and Spanish)
The UK National Health Service Choices website provides information (including personal stories) about sleep apnea and about cardiovascular disease
The not-for-profit American Sleep Apnea Association provides detailed information about sleep apnea for patients and health-care professionals, including personal stories about the condition
The MedlinePlus encyclopedia has pages on obstructive sleep apnea and on polysomnography; MedlinePlus provides links to further information and advice about obstructive sleep apnea, heart diseases, and vascular diseases (in English and Spanish)
doi:10.1371/journal.pmed.1001599
PMCID: PMC3913558  PMID: 24503600
21.  Primary prevention of coronary heart disease: integration of new data, evolving views, revised goals, and role of rosuvastatin in management. A comprehensive survey 
A recent explosion in the amount of cardiovascular risk and incipient, undetected subclinical cardiovascular pathology has swept across the globe. Nearly 70% of adult Americans are overweight or obese; the prevalence of visceral obesity stands at 53% and continues to rise. At any one time, 55% of the population is on a weight-loss diet, and almost all fail. Fewer than 15% of adults or children exercise sufficiently, and over 60% engage in no vigorous activity. Among adults, 11%–13% have diabetes, 34% have hypertension, 36% have prehypertension, 36% have prediabetes, 12% have both prediabetes and prehypertension, and 15% of the population with either diabetes, hypertension, or dyslipidemia are undiagnosed. About one-third of the adult population, and 80% of the obese, have fatty livers. With 34% of children overweight or obese, prevalence having doubled in just a few years, type 2 diabetes, hypertension, dyslipidemia, and fatty livers in children are at their highest levels ever. Half of adults have at least one cardiovascular risk factor. Not even 1% of the population attains ideal cardiovascular health. Despite falling coronary death rates for decades, coronary heart disease (CHD) death rates in US women 35 to 54 years of age may now be increasing because of the obesity epidemic. Up to 65% of patients do not have their conventional risk biomarkers under control. Only 30% of high risk patients with CHD achieve aggressive low density lipoprotein (LDL) targets. Of those patients with multiple risk factors, fewer than 10% have all of them adequately controlled. Even when patients are titrated to evidence-based targets, about 70% of cardiac events remain unaddressed. Undertreatment is also common. About two-thirds of high risk primary care patients are not taking needed medications for dyslipidemia. Poor patient adherence, typically below 50%, adds further difficulty. Hence, after all such fractional reductions are multiplied, only a modest portion of total cardiovascular risk burden is actually being eliminated, and the full potential of risk reduction remains unrealized. Worldwide the situation is similar, with the prevalence of metabolic syndrome approaching 50%. Primordial prevention, resulting from healthful lifestyle habits that do not permit the appearance of risk factors, is the preferred method to lower cardiovascular risk. Lowering the prevalence of obesity is the most urgent matter, and is pleiotropic since it affects blood pressure, lipid profiles, glucose metabolism, inflammation, and atherothrombotic disease progression. Physical activity also improves several risk factors, with the additional potential to lower heart rate. Given the current obstacles, success of primordial prevention remains uncertain. At the same time, the consequences of delay and inaction will inevitably be disastrous, and the sense of urgency mounts. Since most CHD events arise in a large subpopulation of low- to moderate-risk individuals, identifying a high proportion of those who will go on to develop events with accuracy remains unlikely. Without a refinement in risk prediction, the current model of targeting high-risk individuals for aggressive therapy may not succeed alone, especially given the rising burden of risk. Estimating cardiovascular risk over a period of 10 years, using scoring systems such as Framingham or SCORE, continues to enjoy widespread use and is recommended for all adults. 
Limitations of these 10-year risk scores have been of concern, including the under- or over-estimation of risk in specific populations, a relatively short 10-year risk horizon, focus on myocardial infarction and CHD death, and exclusion of family history. Classification errors may occur in up to 37% of individuals, particularly women and the young. Several different scoring systems are discussed in this review. The use of lifetime risk is an important conceptual advance, since ≥90% of young adults with a low 10-year risk have a lifetime risk of ≥39%; over half of all American adults have a low 10-year risk but a high lifetime risk. At age 50, the absence of traditional risk factors is associated with extremely low lifetime risk and significantly greater longevity. Pathological and epidemiological data confirm that atherosclerosis begins in early childhood, and advances seamlessly and inexorably throughout life. Risk factors in childhood are similar to those in adults, and track between stages of life. When indicated, aggressive treatment should begin as early as possible and be continued for years. For those patients at intermediate risk according to global risk scores, C-reactive protein (CRP), coronary artery calcium (CAC), and carotid intima-media thickness (CIMT) are available for further stratification. Statin use for primary prevention is recommended by guidelines and is widespread, yet statins remain underprescribed. Statin drugs are unrivaled, evidence-based, major weapons to lower cardiovascular risk. Even when low-density lipoprotein cholesterol (LDL-C) targets are attained, over half of patients continue to have disease progression and clinical events. This residual risk is of great concern, and multiple sources of remaining risk exist. Though clinical evidence is incomplete, altering or raising the blood high-density lipoprotein cholesterol (HDL-C) level continues to be pursued. Of all agents available, rosuvastatin produces the greatest reductions in LDL-C and LDL-P and the greatest improvement in the apoA-I/apoB ratio, together with a favorable safety profile. Several recent proposals and methods to lower cardiovascular risk are reviewed. A combination of approaches, such as the addition of lifetime risk, refinement of risk prediction, guideline compliance, novel treatments, improvement in adherence, and primordial prevention, including environmental and social intervention, will be necessary to lower the present high risk burden.
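The "fractional reductions multiplied" point above can be made concrete with a short back-of-the-envelope sketch. The fractions below are rough readings of figures quoted in the abstract, chained as if independent purely for illustration; none of this arithmetic appears in the article itself.

```python
# Purely illustrative arithmetic: chaining the fractional attritions cited above
# to see how little of the total event burden ends up being removed. The fractions
# are rough readings of the figures quoted in the abstract, treated here as if
# independent; this is not a calculation from the article itself.

on_treatment    = 1 / 3   # ~two-thirds of high-risk primary care patients are not on needed lipid therapy
adherent        = 0.5     # patient adherence typically below 50%
at_ldl_target   = 0.3     # ~30% of high-risk patients with CHD reach aggressive LDL targets
event_reduction = 0.3     # ~70% of cardiac events remain even at evidence-based targets

share_of_burden_removed = on_treatment * adherent * at_ldl_target * event_reduction
print(f"Approximate share of the total event burden eliminated: {share_of_burden_removed:.1%}")
# -> roughly 1-2%, which is why the abstract calls the eliminated portion "modest"
```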
doi:10.2147/DDDT.S14934
PMCID: PMC3140289  PMID: 21792295
primary prevention; cardiovascular risk; coronary heart disease; primordial prevention; rosuvastatin; JUPITER study; statin drugs; C-reactive protein; inflammation; low-density lipoprotein; high-density lipoprotein; diabetes; metabolic syndrome; Framingham risk score; Reynolds risk score; SCORE; coronary artery calcification; carotid intima-media thickness; hypertension; obesity; non-HDL-cholesterol; LDL-P; dysfunctional HDL; lifetime risk; advanced lipid testing; Bogalusa Heart Study
22.  Plasma Amyloid β as a Predictor of Dementia and Cognitive Decline: A Systematic Review and Meta-analysis 
Archives of neurology  2012;69(7):824-831.
Context
Preclinical prediction of Alzheimer’s disease is critical to effective intervention. Plasma levels of amyloid β-peptides have been a principal focus of the growing literature on blood-based biomarkers, but studies to date have varied in design, assay methods, and sample size, making the overall data difficult to interpret.
Objective
To conduct a systematic review and meta-analysis of relevant prospective studies in order to determine if plasma amyloid β levels may predict development of dementia, Alzheimer’s disease, and cognitive decline.
Data Sources
The PubMed, EMBASE, and PsycInfo databases were searched for prospective studies published between 1995 and 2011.
Study Selection
Selected studies included those measuring at least one relevant plasma amyloid β species (Aβ40, Aβ42, Aβ42:Aβ40 ratio) and reporting an effect estimate for dementia, Alzheimer’s disease, or cognitive change.
Data Extraction
Using a standardized extraction form, appropriate study parameters on subject information, exposure, and outcome were extracted. Random effects models were utilized to generate summary risk ratios and 95% confidence intervals, comparing the bottom versus top quantile for each plasma measure.
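For readers unfamiliar with the pooling step, the following is a minimal sketch of a DerSimonian-Laird random-effects meta-analysis on the log risk-ratio scale, the standard approach behind summary risk ratios of this kind. The function name and the study inputs are placeholders of ours, not data or code from the article.

```python
import numpy as np

def random_effects_pool(log_rr, se):
    """DerSimonian-Laird random-effects pooling of log risk ratios.

    log_rr, se: per-study log(RR) estimates and their standard errors.
    Returns the pooled RR and its 95% confidence interval.
    """
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    w_fixed = 1.0 / se**2                                   # inverse-variance (fixed-effect) weights
    mu_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)
    q = np.sum(w_fixed * (log_rr - mu_fixed) ** 2)          # Cochran's Q heterogeneity statistic
    df = len(log_rr) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                           # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)                             # random-effects weights
    mu = np.sum(w_re * log_rr) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = mu - 1.96 * se_mu, mu + 1.96 * se_mu
    return np.exp(mu), (np.exp(lo), np.exp(hi))

# Placeholder inputs (not the 13 studies from this meta-analysis):
rr, (ci_lo, ci_hi) = random_effects_pool(
    log_rr=[np.log(1.4), np.log(2.1), np.log(1.1)],
    se=[0.25, 0.30, 0.20],
)
print(f"Summary RR = {rr:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```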
Results
Thirteen studies with a total of 10,303 subjects met inclusion criteria for meta-analysis. Lower Aβ42:Aβ40 ratios were significantly associated with development of Alzheimer’s disease (summary RR 1.60, 95% CI 1.04–2.46, p = 0.03) and dementia (RR 1.67, 95% CI 1.02–2.75, p = 0.04). Significant heterogeneity was found for both summary estimates, which could not be explained by participants’ age, sex distribution, the study’s follow-up time, or year of publication. Plasma levels of Aβ40 and Aβ42 alone were not significantly associated with either outcome.
Conclusions
Overall, the literature indicates that plasma Aβ42:Aβ40 ratios predict development of Alzheimer’s disease and dementia. However, significant heterogeneity in the meta-analysis underlines the need for substantial further investigation of plasma amyloid β levels as a preclinical biomarker.
doi:10.1001/archneurol.2011.1841
PMCID: PMC3772635  PMID: 22451159
23.  Hedging against Antiviral Resistance during the Next Influenza Pandemic Using Small Stockpiles of an Alternative Chemotherapy 
PLoS Medicine  2009;6(5):e1000085.
Mathematically simulating an influenza pandemic, Joseph Wu and colleagues predict that using a secondary antiviral drug early in local epidemics would reduce global emergence of resistance to the primary stockpiled drug.
Background
The effectiveness of single-drug antiviral interventions to reduce morbidity and mortality during the next influenza pandemic will be substantially weakened if transmissible strains emerge which are resistant to the stockpiled antiviral drugs. We developed a mathematical model to test the hypothesis that a small stockpile of a secondary antiviral drug could be used to mitigate the adverse consequences of the emergence of resistant strains.
Methods and Findings
We used a multistrain stochastic transmission model of influenza to show that the spread of antiviral resistance can be significantly reduced by deploying a small stockpile (1% population coverage) of a secondary drug during the early phase of local epidemics. We considered two strategies for the use of the secondary stockpile: early combination chemotherapy (ECC; individuals are treated with both drugs in combination while both are available); and sequential multidrug chemotherapy (SMC; individuals are treated only with the secondary drug until it is exhausted, then treated with the primary drug). We investigated all potentially important regions of unknown parameter space and found that both ECC and SMC reduced the cumulative attack rate (AR) and the resistant attack rate (RAR) unless the probability of emergence of resistance to the primary drug pA was so low (less than 1 in 10,000) that resistance was unlikely to be a problem or so high (more than 1 in 20) that resistance emerged as soon as primary drug monotherapy began. For example, when the basic reproductive number was 1.8 and 40% of symptomatic individuals were treated with antivirals, AR and RAR were 67% and 38% under monotherapy if pA = 0.01. If the probability of resistance emergence for the secondary drug was also 0.01, then SMC reduced AR and RAR to 57% and 2%. The effectiveness of ECC was similar if combination chemotherapy reduced the probabilities of resistance emergence by at least ten times. We extended our model using travel data between 105 large cities to investigate the robustness of these resistance-limiting strategies at a global scale. We found that as long as populations that were the main source of resistant strains employed these strategies (SMC or ECC), then those same strategies were also effective for populations far from the source even when some intermediate populations failed to control resistance. In essence, through the existence of many wild-type epidemics, the interconnectedness of the global network dampened the international spread of resistant strains.
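As a rough sketch of the two stockpile strategies described above, the toy routine below decides which drug(s) a newly treated case receives given the remaining stockpiles. It is our simplification for illustration only; it is not the multistrain stochastic transmission model used in the study, and the function and variable names are assumptions.

```python
# Toy illustration of the two stockpile strategies described above (our own
# simplification; it models only which drug(s) a newly treated case receives,
# not the stochastic transmission model used in the paper).

def allocate(strategy, primary_left, secondary_left):
    """Return (drugs_given, primary_left, secondary_left) for one treated case."""
    if strategy == "SMC":                            # sequential multidrug chemotherapy
        if secondary_left > 0:                       # secondary drug until it is exhausted...
            return ["secondary"], primary_left, secondary_left - 1
        if primary_left > 0:                         # ...then fall back to the primary drug
            return ["primary"], primary_left - 1, secondary_left
    elif strategy == "ECC":                          # early combination chemotherapy
        if primary_left > 0 and secondary_left > 0:  # both drugs together while both last
            return ["primary", "secondary"], primary_left - 1, secondary_left - 1
        if primary_left > 0:
            return ["primary"], primary_left - 1, secondary_left
        if secondary_left > 0:
            return ["secondary"], primary_left, secondary_left - 1
    return [], primary_left, secondary_left          # stockpiles exhausted: no antiviral

# Example: a secondary stockpile covering only a small fraction of treated cases
primary, secondary = 1_000, 10
for case in range(1, 16):
    drugs, primary, secondary = allocate("SMC", primary, secondary)
    print(case, drugs)
```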
Conclusions
Our results indicate that the augmentation of existing stockpiles of a single anti-influenza drug with smaller stockpiles of a second drug could be an effective and inexpensive epidemiological hedge against antiviral resistance if either SMC or ECC were used. Choosing between these strategies will require additional empirical studies. Specifically, the choice will depend on the safety of combination therapy and the synergistic effect of one antiviral in suppressing the emergence of resistance to the other antiviral when both are taken in combination.
Editors' Summary
Background
Every winter, millions of people catch influenza—a viral infection of the airways—and about half a million people die as a result. These seasonal “epidemics” occur because small but frequent changes in the viral proteins (antigens) to which the human immune system responds mean that an immune response produced one year provides only partial protection against influenza the next year. Influenza viruses also occasionally appear that contain major antigenic changes. Human populations have little or no immunity to such viruses so they can start deadly pandemics (global epidemics). The 1918–19 influenza pandemic, for example, killed 40–50 million people. The last influenza pandemic was in 1968 and many experts fear the next pandemic might strike soon. To prepare for such an eventuality, scientists are trying to develop vaccines that might work against an emerging pandemic influenza virus. In addition, many governments are stockpiling antiviral drugs for the large-scale treatment of influenza and for targeted prophylaxis (prevention). Antiviral drugs prevent the replication of the influenza virus, thereby shortening the length of time that an infected person is ill and protecting uninfected people against infection. Their widespread use should, therefore, slow the spread of pandemic influenza.
Why Was This Study Done?
Although some countries are stockpiling more than one antiviral drug in preparation for an influenza pandemic, many countries are investing in large stockpiles of a single drug, oseltamivir (Tamiflu). But influenza viruses can become resistant to antiviral drugs and the widespread use of a single drug (the primary antiviral) is likely to increase the risk that a resistant strain will emerge. If this did happen, the ability of antiviral drugs to slow the spread of a pandemic would be greatly reduced. In this study, the researchers use a mathematical model of influenza transmission to investigate whether a small stockpile of a secondary antiviral drug could be used to prevent the adverse consequences of the emergence of antiviral-resistant pandemic influenza viruses.
What Did the Researchers Do and Find?
The researchers used their model of influenza transmission to predict how two strategies for the use of a small stockpile of a secondary antiviral might affect the cumulative attack rate (AR; the final proportion of the population infected) and the resistant attack rate (RAR; the proportion of the population infected with an influenza virus strain resistant to the primary drug, a measure that may reflect the impact of antiviral resistance on death rates during a pandemic). In a large, closed population, the model predicted that both “early combination chemotherapy” (treatment with both drugs together while both are available) and “sequential multi-drug chemotherapy” (treatment with the secondary drug until it is exhausted, then treatment with the primary drug) would reduce the AR and the RAR compared with monotherapy unless the probability of emergence of resistance to the primary drug was very low (resistance rarely occurred) or very high (resistance emerged as soon as the primary drug was used). The researchers then introduced international travel data into their model to investigate whether these two strategies could limit the development of antiviral resistance at a global scale. This analysis predicted that, provided the population that was the main source of resistant strains used one of the strategies, either strategy would also reduce the AR and RAR in distant, subsequently affected populations, even if some intermediate populations failed to control resistance.
What Do These Findings Mean?
As with all mathematical models, the accuracy of these predictions depends on the assumptions used to build the model and the data fed into it. Nevertheless, these findings suggest that both of the proposed strategies for the use of small stockpiles of secondary antiviral drugs should limit the spread of drug-resistant influenza virus more effectively than monotherapy with the primary antiviral drug. Thus, small stockpiles of secondary antivirals could provide a hedge against the development of antiviral resistance during the early phases of an influenza pandemic and are predicted to be a worthwhile public-health investment. However, note the researchers, experimental studies—including determinations of which drugs are safe to use together, and how effectively a given combination prevents resistance compared with each drug used alone—are now needed to decide which of the strategies to recommend in real-life situations. In the context of the 2009 global spread of swine flu, these findings suggest that public health officials might consider zanamivir (Relenza) as the secondary antiviral drug for resistance-limiting strategies in countries that have stockpiled oseltamivir.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000085.
The US Centers for Disease Control and Prevention provides information about influenza for patients and professionals, including specific information on pandemic influenza and on influenza antiviral drugs
The World Health Organization provides information on influenza (in several languages) and has detailed guidelines on the use of vaccines and antivirals during influenza pandemics
The UK Health Protection Agency provides information on pandemic influenza
MedlinePlus provides a list of links to other information about influenza (in English and Spanish)
doi:10.1371/journal.pmed.1000085
PMCID: PMC2680070  PMID: 19440354
24.  Enhanced External Counterpulsation (EECP) 
Executive Summary
Objective
To assess the effectiveness and cost-effectiveness of EECP in patients with severe anginal symptoms secondary to chronic coronary disease who are unresponsive to exhaustive pharmacotherapy and not candidates for surgical/percutaneous revascularization procedures (e.g., angioplasty, coronary bypass surgery).
To assess the effectiveness and cost-effectiveness of EECP in patients with heart failure.
Clinical Need
Angina
Angina is a clinical syndrome characterized by discomfort in the chest, jaw, shoulder, back, or arm. Angina usually occurs in patients with coronary artery disease (CAD) involving ≥1 large epicardial artery. However, it can also occur in people with valvular heart disease, hypertrophic cardiomyopathy, and uncontrolled hypertension.
Conventional approaches to restoring the balance between oxygen supply and demand focus on the disruption of the underlying disease through: drug therapy (β blockers, calcium channel blockers, nitrates, antiplatelet agents, ACE inhibitors, statins); lifestyle modifications (smoking cessation, weight loss); or revascularization techniques such as coronary artery bypass graft surgery (CABG) or percutaneous coronary interventions (PCI). (1) Limitations of each of these approaches include adverse drug effects, procedure-related mortality and morbidity, restenosis after PCI, and time-dependent graft attrition after CABG. Furthermore, an increasing number of patients are not appropriate candidates for standard revascularization options, due to co-morbid conditions (HF, peripheral vascular disease), poor distal coronary artery targets, and patient preference. The morbidity and mortality associated with repeat surgical revascularization procedures are significantly higher and often exclude these patients from consideration for further revascularizations. (2)
Patients with CAD who have chronic ischemic symptoms that are unresponsive to both conventional medical therapy and revascularization techniques have refractory angina pectoris. It has been estimated that greater than 100,000 patients each year in the US may be diagnosed as having this condition. (3) Patients with refractory angina have marked limitation of ordinary physical activity or are unable to perform any ordinary physical activity without discomfort (CCS functional class III/IV). Also, there must be some objective evidence of ischemia as demonstrated by exercise treadmill testing, stress imaging studies or coronary physiologic studies. (1)
Dejongste et al. (4) estimated that the prevalence of chronic refractory angina is about 100,000 patients in the United States. Since Ontario's population is approximately 3.8% of that of the United States, this corresponds to approximately 3,800 (100,000 × 3.8%) patients in Ontario with chronic refractory angina.
Heart Failure
Heart failure results from any structural or functional cardiac disorder that impairs the ability of the heart to act as a pump.
A recent study (5) revealed that 28,702 patients were hospitalized for first-time HF in Ontario between April 1994 and March 1997. Women comprised 51% of the cohort. Eighty-five percent were aged 65 years or older, and 58% were aged 75 years or older.
Patients with chronic HF experience shortness of breath, a limited capacity for exercise, high rates of hospitalization and rehospitalization, and die prematurely. (6) The New York Heart Association (NYHA) has provided a commonly used functional classification for the severity of HF (7):
Class I: No limitation of physical activity. No symptoms with ordinary exertion.
Class II: Slight limitations of physical activity. Ordinary activity causes symptoms.
Class III: Marked limitation of physical activity. Less than ordinary activity causes symptoms. Asymptomatic at rest.
Class IV: Inability to carry out any physical activity without discomfort. Symptoms at rest.
The National Heart, Lung, and Blood Institute (7) estimates that 35% of patients with HF are in functional NYHA class I; 35% are in class II; 25%, class III; and 5%, class IV. Surveys (8) suggest that from 5% to 15% of patients with HF have persistent severe symptoms, and that the remainder of patients with HF are evenly divided between those with mild and those with moderately severe symptoms.
To date, the diagnosis and management of chronic HF has concentrated on patients with the clinical syndrome of HF accompanied by severe left ventricular systolic dysfunction. Major changes in treatment have resulted from a better understanding of the pathophysiology of HF and the results of large clinical trials. Treatment for chronic HF includes lifestyle management, drugs, cardiac surgery, or implantable pacemakers and defibrillators. Despite pharmacologic advances, which include diuretics, angiotensin-converting enzyme inhibitors, beta-blockers, spironolactone, and digoxin, many patients remain symptomatic on maximally tolerated doses. (6)
The Technology
Patients are typically treated by a trained technician in a medically supervised environment for 1 hour daily for a total of 35 hours over 7 weeks. The procedure involves sequential inflation and deflation of compressible cuffs wrapped around the patient’s calves, lower thighs and upper thighs. In addition to 3 sets of cuffs, the patient has finger plethysmogram and electrocardiogram (ECG) attachments that are connected to a control and display console.
External counterpulsation was used in the United States to treat cardiogenic shock after acute myocardial infarction. (9;10) More recently, an enhanced version, “enhanced external counterpulsation” (EECP), was introduced as a noninvasive procedure for outpatient treatment of patients with severe, uncontrollable cardiac ischemia. EECP is said to increase coronary perfusion pressure and reduce myocardial oxygen demand. Currently, EECP is not applicable for all patients with refractory angina pectoris. For example, many patients are considered ineligible for therapy due to co-morbidities, including severe pulmonary vascular disease, deep vein thrombosis, phlebitis, irregular heart rhythms, and heart failure. (1)
Very recently, investigation began into EECP as an adjunctive treatment for patients with HF. Anecdotal reports suggested that EECP may benefit patients with coronary disease and left ventricular dysfunction. The safety and effectiveness of EECP in patients with symptomatic heart failure and coronary disease, and its role in patients with nonischemic heart failure secondary to LV dysfunction, are unclear. Furthermore, the safety and effectiveness of EECP in the different stages of HF, and whether it is suitable only for patients who are refractory to pharmacotherapy, are unknown.
2003 Health Technology Assessment by the Medical Advisory Secretariat
The Medical Advisory Secretariat health technology assessment (originally published in February 2003) reported on the effectiveness of EECP for patients with angina and HF. The report concluded that there was insufficient evidence to support the use of EECP in patients with refractory stable CCS III/IV angina as well as insufficient evidence to support the use of EECP in patients with HF.
Review Strategy
The aim of this literature review was to assess the effectiveness, safety, and cost effectiveness of EECP for the treatment of refractory stable CCS III/IV angina or HF.
The standard search strategy used by the Medical Advisory Secretariat was used. This included a search of all international health technology assessments as well as a search of the medical literature from December 2002 to March 2006.
A modification of the GRADE approach (11) was used to make judgments about the quality of evidence and strength of recommendations systematically and explicitly. GRADE provides a framework for structured reflection and can help to ensure that appropriate judgments are made. GRADE takes into account a study’s design, quality, consistency, and directness in judging the quality of evidence for each outcome. The balance between benefits and harms, quality of evidence, applicability, and the certainty of the baseline risks are considered in judgments about the strength of recommendations.
Summary of Findings
The Cochrane and INAHTA databases yielded 3 HTAs or systematic reviews on EECP treatment (Blue Cross Blue Shield Technology Evaluation Center [BCBS TEC], ECRI, and the Centers for Medicare and Medicaid Services [CMS]). A search of Medline and Embase from December 2005 to March 2006 (after the literature search cutoff of the most recent HTA) was conducted using the key words enhanced external counterpulsation, EECP, angina, myocardial ischemia, and congestive heart failure. This search produced 1 study that met the inclusion criteria. This level 4a study was inferior in quality to the RCT that formed the basis of the 2003 Medical Advisory Secretariat recommendation.
BCBS reviewed the evidence through November 2005 to determine if EECP improves health outcomes for refractory chronic stable angina pectoris or chronic stable HF. (12) BCBS concluded that the available evidence is not sufficient to permit conclusions about the effect of EECP on health outcomes. Both controlled trials (the MUST EECP and MUST EECP quality-of-life studies) had methodologic flaws. The case series and observational studies for both indications, while suggestive of a treatment benefit from EECP, have shortcomings as well.
On March 20, 2006, CMS posted its proposed coverage decision memorandum for external counterpulsation therapy. (13) Overall, CMS stated that the evidence is not adequate to conclude that external counterpulsation therapy is reasonable and necessary for:
Canadian Cardiovascular Society Classification (CCSC) II angina
Heart failure
NYHA class II/III stable HF symptoms with an EF≤35%
NYHA class II/III stable HF symptoms with an EF≤40%
NYHA class IV HF
Acute HF
Cardiogenic shock
Acute MI
In January 2005, ECRI (14) stated that there was insufficient evidence available to draw conclusions about the long-term effectiveness of EECP, with respect to morbidity, survival, or quality of life, for any coronary indication (refractory angina, congestive heart failure, cardiogenic shock and acute MI).
GRADE Quality of the Studies
According to the GRADE Working Group criteria, the quality of the trials was examined (Table 1). (11)
Quality refers to criteria such as the adequacy of allocation concealment, blinding, and follow-up.
Consistency refers to the similarity of estimates of effect across studies. If there is important unexplained inconsistency in the results, our confidence in the estimate of effect for that outcome decreases. Differences in the direction of effect, the size of the differences in effect and the significance of the differences guide the decision about whether important inconsistency exists.
Directness refers to the extent to which the people, interventions, and outcome measures are similar to those of interest. For example, there may be uncertainty about the directness of the evidence if the people of interest are older, sicker, or have more comorbidity than those in the studies.
As stated by the GRADE Working Group, the following definitions were used in grading the quality of the evidence. (11)
GRADE Quality of Studies
Economic Analysis - Literature Review
No economic analysis of EECP was identified in the published literature.
Estimated Prevalence of Angina in Ontario
3,800 patients with chronic refractory angina:
The number of patients with chronic refractory angina in the US is estimated to be approximately 100,000 (4); this corresponds to about 3,800 patients in Ontario (3.8% × 100,000) with refractory angina.
3,800 patients × $7,000 Cdn (approximate cost for a full course of therapy) ≈ $26.6 M Cdn.
Estimated Prevalence of Heart Failure in Ontario
23,700 patients EF ≤ 0.35:
This estimate comes from an expert (personal communication) at the Institute for Clinical Evaluative Sciences (ICES), based on a sample of echocardiography studies drawn from a diagnostic lab in 2001. The prevalence of EF ≤ 0.35 in that sample was 8.3%; generalized to all patients undergoing echocardiography, this corresponds to approximately 23,700 patients.
23,700 patients with EF ≤ 35% × $7,000 Cdn ≈ $166 M Cdn.
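The two budget-impact figures above follow directly from the stated prevalence estimates and the approximate per-course cost; a minimal sketch of that arithmetic, using only the numbers quoted in the text above:

```python
# Reproducing the budget-impact arithmetic quoted above, using only the numbers
# stated in the assessment.
cost_per_course_cdn = 7_000                            # approximate cost of a full EECP course

# Refractory angina: ~100,000 US patients; Ontario is ~3.8% of the US population
angina_patients_ontario = round(100_000 * 0.038)       # ~3,800 patients
angina_budget = angina_patients_ontario * cost_per_course_cdn

# Heart failure: ~23,700 Ontario patients with EF <= 0.35
hf_patients_ontario = 23_700
hf_budget = hf_patients_ontario * cost_per_course_cdn

print(f"Angina: {angina_patients_ontario:,} patients -> ${angina_budget / 1e6:.1f} M Cdn")   # ~$26.6 M
print(f"Heart failure: {hf_patients_ontario:,} patients -> ${hf_budget / 1e6:.1f} M Cdn")    # ~$165.9 M (~$166 M)
```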
Conclusions
There is insufficient evidence to support the effectiveness and safety of EECP treatment for patients with refractory stable CCS III-IV angina or HF.
As per the GRADE Working Group, overall recommendations consider 4 main factors. (11)
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates and the relative value placed on the outcome.
The quality of the evidence.
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects such as proximity to a hospital or availability of necessary expertise.
Uncertainty about the baseline risk for the population of interest.
The GRADE Working Group also recommends that incremental costs of healthcare alternatives should be considered explicitly alongside the expected health benefits and harms. (11) Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 2 is the overall trade-off between benefits and harms and incorporates any risk/uncertainty.
For angina and heart failure, the overall GRADE and strength of the recommendations is “weak”: the quality of the evidence is “low” (uncertainties due to methodological limitations in study design, in terms of both study quality and directness), and the corresponding risk/uncertainty is increased by a budget impact of approximately $26.6 M Cdn (angina) or $166 M Cdn (heart failure), while the cost-effectiveness of EECP is unknown and difficult to estimate given that there are no high-quality studies of effectiveness.
Overall GRADE and Strength of Recommendation (Including Uncertainty)
PMCID: PMC3379533  PMID: 23074496
25.  Reducing the Impact of the Next Influenza Pandemic Using Household-Based Public Health Interventions 
PLoS Medicine  2006;3(9):e361.
Background
The outbreak of highly pathogenic H5N1 influenza in domestic poultry and wild birds has caused global concern over the possible evolution of a novel human strain [1]. If such a strain emerges, and is not controlled at source [2,3], a pandemic is likely to result. Health policy in most countries will then be focused on reducing morbidity and mortality.
Methods and Findings
We estimate the expected reduction in primary attack rates for different household-based interventions using a mathematical model of influenza transmission within and between households. We show that, for lower transmissibility strains [2,4], the combination of household-based quarantine, isolation of cases outside the household, and targeted prophylactic use of anti-virals will be highly effective and likely feasible across a range of plausible transmission scenarios. For example, for a basic reproductive number (the average number of people infected by a typically infectious individual in an otherwise susceptible population) of 1.8, assuming only 50% compliance, this combination could reduce the infection (symptomatic) attack rate from 74% (49%) to 40% (27%), requiring peak quarantine and isolation levels of 6.2% and 0.8% of the population, respectively, and an overall anti-viral stockpile of 3.9 doses per member of the population. Although contact tracing may be additionally effective, the resources required make it impractical in most scenarios.
Conclusions
National influenza pandemic preparedness plans currently focus on reducing the impact associated with a constant attack rate, rather than on reducing transmission. Our findings suggest that the additional benefits and resource requirements of household-based interventions in reducing average levels of transmission should also be considered, even when expected levels of compliance are only moderate.
Voluntary household-based quarantine and external isolation are likely to be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is moderate.
Editors' Summary
Background.
Naturally occurring variation in the influenza virus can lead both to localized annual epidemics and to less frequent global pandemics of catastrophic proportions. The most destructive of the three influenza pandemics of the 20th century, the so-called Spanish flu of 1918–1919, is estimated to have caused 20 million deaths. As evidenced by ongoing tracking efforts and news media coverage of H5N1 avian influenza, contemporary approaches to monitoring and communications can be expected to alert health officials and the general public of the emergence of new, potentially pandemic strains before they spread globally.
Why Was This Study Done?
In order to act most effectively on advance notice of an approaching influenza pandemic, public health workers need to know which available interventions are likely to be most effective. This study was done to estimate the effectiveness of specific preventive measures that communities might implement to reduce the impact of pandemic flu. In particular, the study evaluates methods to reduce person-to-person transmission of influenza, in the likely scenario that complete control cannot be achieved by mass vaccination and anti-viral treatment alone.
What Did the Researchers Do and Find?
The researchers developed a mathematical model—essentially a computer simulation—to simulate the course of pandemic influenza in a hypothetical population at risk for infection at home, through external peer networks such as schools and workplaces, and through general community transmission. Parameters such as the distribution of household sizes, the rate at which individuals develop symptoms from nonpandemic viruses, and the risk of infection within households were derived from demographic and epidemiologic data from Hong Kong, as well as empirical studies of influenza transmission. A model based on these parameters was then used to calculate the effects of interventions including voluntary household quarantine, voluntary individual isolation in a facility outside the home, and contact tracing (that is, asking infectious individuals to identify people whom they may have infected and then warning those people) on the spread of pandemic influenza through the population. The model also took into account the anti-viral treatment of exposed, asymptomatic household members and of individuals in isolation, and assumed that all intervention strategies were put into place before the arrival of individuals infected with the pandemic virus.
  Using this model, the authors predicted that even if only half of the population were to comply with public health interventions, the proportion infected during the first year of an influenza pandemic could be substantially reduced by a combination of household-based quarantine, isolation of actively infected individuals in a location outside the household, and targeted prophylactic treatment of exposed individuals with anti-viral drugs. Based on an influenza-associated mortality rate of 0.5% (as has been estimated for New York City in the 1918–1919 pandemic), the magnitude of the predicted benefit of these interventions is a reduction from 49% to 27% in the proportion of the population who become ill in the first year of the pandemic, which would correspond to 16,000 fewer deaths in a city the size of Hong Kong (6.8 million people). In the model, anti-viral treatment appeared to be about as effective as isolation when each was used in combination with household quarantine, but would require stockpiling 3.9 doses of anti-viral for each member of the population. Contact tracing was predicted to provide a modest additional benefit over quarantine and isolation, but also to increase considerably the proportion of the population in quarantine.
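The quoted figure of roughly 16,000 fewer deaths can be approximately reconstructed from the numbers above, under the simplifying assumption (ours, not stated in the paper) that deaths scale in proportion to the clinical attack rate:

```python
# Rough reconstruction of the deaths-averted figure quoted above, under our
# simplifying assumption that deaths scale in proportion to the clinical attack rate.
population = 6_800_000            # a city the size of Hong Kong
baseline_ill = 0.49               # proportion clinically ill without interventions
intervention_ill = 0.27           # proportion clinically ill with quarantine + isolation + prophylaxis
baseline_mortality = 0.005        # 0.5% of the population, as estimated for New York City in 1918-1919

deaths_baseline = population * baseline_mortality
deaths_intervention = deaths_baseline * (intervention_ill / baseline_ill)
print(f"Deaths averted: ~{deaths_baseline - deaths_intervention:,.0f}")   # ~15,000, in line with the ~16,000 quoted
```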
What Do These Findings Mean?
This study predicts that voluntary household-based quarantine and external isolation can be effective in limiting the morbidity and mortality of an influenza pandemic, even if such a pandemic cannot be entirely prevented, and even if compliance with these interventions is far from uniform. These simulations can therefore inform preparedness plans in the absence of data from actual intervention trials, which would be impossible outside (and impractical within) the context of an actual pandemic. Like all mathematical models, however, the one presented in this study relies on a number of assumptions regarding the characteristics and circumstances of the situation that it is intended to represent. For example, the authors found that the efficacy of policies to reduce the rate of infection vary according to the ease with which a given virus spreads from person to person. Because this parameter (known as the basic reproductive ratio, R0) cannot be reliably predicted for a new viral strain based on past epidemics, the authors note that in an actual influenza pandemic rapid determinations of R0 in areas already involved would be necessary to finalize public health responses in threatened areas. Further, the implementation of the interventions that appear beneficial in this model would require devoting attention and resources to practical considerations, such as how to staff isolation centers and provide food and water to those in household quarantine. However accurate the scientific data and predictive models may be, their effectiveness can only be realized through well-coordinated local, as well as international, efforts.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030361.
• World Health Organization influenza pandemic preparedness page
• US Department of Health and Human Services avian and pandemic flu information site
• Pandemic influenza page from the Public Health Agency of Canada
• Emergency planning page on pandemic flu from the England Department of Health
• Wikipedia entry on pandemic influenza with links to individual country resources (note: Wikipedia is a free Internet encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.0030361
PMCID: PMC1526768  PMID: 16881729
