1.  Is (1→3)-β-D-glucan the missing link from bedside assessment to pre-emptive therapy of invasive candidiasis? 
Critical Care  2011;15(6):1017.
Invasive candidiasis is a frequent life-threatening complication in critically ill patients. Early diagnosis followed by prompt treatment aimed at improving outcome while minimizing unnecessary antifungal use remains a major challenge in the ICU setting. Timely patient selection thus plays a key role in clinically efficient and cost-effective management. Approaches combining clinical risk factors and Candida colonization data have improved our ability to identify such patients early. While the negative predictive value of scores and prediction rules is up to 95 to 99%, the positive predictive value is much lower, ranging between 10 and 60%. Accordingly, if a positive score or rule is used to guide the start of antifungal therapy, many patients may be treated unnecessarily. Candida biomarkers display higher positive predictive values; however, they lack sensitivity and are thus not able to identify all cases of invasive candidiasis. The (1→3)-β-D-glucan (BG) assay, a panfungal antigen test, is recommended as a complementary tool for the diagnosis of invasive mycoses in high-risk hemato-oncological patients. Its role in the more heterogeneous ICU population remains to be defined. More efficient clinical selection strategies combined with high-performing laboratory tools are needed to treat the right patients at the right time while keeping the costs of screening and therapy as low as possible. The new approach proposed by Posteraro and colleagues in the previous issue of Critical Care meets these requirements. A single positive BG value in medical patients admitted to the ICU with sepsis and expected to stay for more than 5 days preceded the documentation of candidemia by 1 to 3 days with unprecedented diagnostic accuracy. Applying this one-point fungal screening to a selected subset of ICU patients with an estimated 15 to 20% risk of developing candidemia is an appealing and potentially cost-effective approach. If confirmed by multicenter investigations, and extended to surgical patients at high risk of invasive candidiasis after abdominal surgery, this Bayesian-based risk stratification approach aimed at maximizing clinical efficiency by minimizing health care resource utilization may substantially simplify the management of critically ill patients at risk of invasive candidiasis.
doi:10.1186/cc10544
PMCID: PMC3388704  PMID: 22171793
2.  Male Circumcision at Different Ages in Rwanda: A Cost-Effectiveness Study 
PLoS Medicine  2010;7(1):e1000211.
Agnes Binagwaho and colleagues predict that circumcision of newborn boys would be effective and cost-saving as a long-term strategy to prevent HIV in Rwanda.
Background
There is strong evidence showing that male circumcision (MC) reduces HIV infection and other sexually transmitted infections (STIs). In Rwanda, where adult HIV prevalence is 3%, MC is not a traditional practice. The Rwanda National AIDS Commission modelled cost and effects of MC at different ages to inform policy and programmatic decisions in relation to introducing MC. This study was necessary because the MC debate in Southern Africa has focused primarily on MC for adults. Further, this is the first time, to our knowledge, that a cost-effectiveness study on MC has been carried out in a country where HIV prevalence is below 5%.
Methods and Findings
A cost-effectiveness model was developed and applied to three hypothetical cohorts in Rwanda: newborns, adolescents, and adult men. Effectiveness was defined as the number of HIV infections averted, and was calculated as the product of the number of people susceptible to HIV infection in the cohort, the HIV incidence rate at different ages, and the protective effect of MC; discounted back to the year of circumcision and summed over the life expectancy of the circumcised person. Direct costs were based on interviews with experienced health care providers to determine inputs involved in the procedure (from consumables to staff time) and related prices. Other costs included training, patient counselling, treatment of adverse events, and promotion campaigns, and they were adjusted for the averted lifetime cost of health care (antiretroviral therapy [ART], opportunistic infection [OI], laboratory tests). One-way sensitivity analysis was performed by varying the main inputs of the model, and thresholds were calculated at which each intervention is no longer cost-saving and at which an intervention costs more than one gross domestic product (GDP) per capita per life-year gained. Results: Neonatal MC is less expensive than adolescent and adult MC (US$15 instead of US$59 per procedure) and is cost-saving (the cost-effectiveness ratio is negative), even though savings from infant circumcision will be realized later in time. The cost per infection averted is US$3,932 for adolescent MC and US$4,949 for adult MC. Results for infant MC appear robust. Infant MC remains highly cost-effective across a reasonable range of variation in the base case scenario. Adolescent MC is highly cost-effective for the base case scenario but this high cost-effectiveness is not robust to small changes in the input variables. Adult MC is neither cost-saving nor highly cost-effective when considering only the direct benefit for the circumcised man.
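The effectiveness calculation described above (infections averted as the product of the number of susceptible people, the age-specific HIV incidence rate, and the protective effect of MC, discounted back to the year of circumcision and summed over life expectancy) can be illustrated with a minimal sketch. All numeric inputs below are hypothetical placeholders, not the study's Rwandan data.

    # Minimal sketch of the infections-averted calculation described in the abstract.
    # All numeric inputs are hypothetical placeholders, not the study's values.
    def infections_averted(cohort_size, incidence_by_age, protection, discount_rate,
                           age_at_circumcision, life_expectancy):
        """Discounted number of HIV infections averted by circumcising a cohort."""
        averted = 0.0
        susceptible = float(cohort_size)
        for age in range(age_at_circumcision, life_expectancy):
            incidence = incidence_by_age.get(age, 0.0)        # annual HIV incidence at this age
            infections_without_mc = susceptible * incidence
            years_out = age - age_at_circumcision
            averted += infections_without_mc * protection / (1 + discount_rate) ** years_out
            susceptible -= infections_without_mc              # those infected leave the susceptible pool
        return averted

    # Placeholder example: 1,000 newborn boys, flat 0.5% annual incidence from age 15,
    # 60% protective effect, 3% discount rate, 60-year life expectancy.
    incidence = {age: 0.005 for age in range(15, 60)}
    print(infections_averted(1000, incidence, protection=0.60, discount_rate=0.03,
                             age_at_circumcision=0, life_expectancy=60))

Dividing the (cost-adjusted) programme cost by the output of such a calculation gives the cost per infection averted reported above.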
Conclusions
The study suggests that Rwanda should be simultaneously scaling up circumcision across a broad range of age groups, with high priority to the very young. Infant MC can be integrated into existing health services (i.e., neonatal visits and vaccination sessions) and over time has better potential than adolescent and adult circumcision to achieve the very high coverage of the population required for maximal reduction of HIV incidence. In the presence of infant MC, adolescent and adult MC would evolve into a “catch-up” campaign that would be needed at the start of the program but would eventually become superfluous.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Acquired immunodeficiency syndrome (AIDS) has killed more than 25 million people since 1981 and more than 31 million people (22 million in sub-Saharan Africa alone) are now infected with the human immunodeficiency virus (HIV), which causes AIDS. There is no cure for HIV/AIDS and no vaccine against HIV infection. Consequently, prevention of HIV transmission is extremely important. HIV is most often spread through unprotected sex with an infected partner. Individuals can reduce their risk of HIV infection, therefore, by abstaining from sex, by having one or a few sexual partners, and by always using a male or female condom. In addition, male circumcision—the removal of the foreskin, the loose fold of skin that covers the head of the penis—can halve HIV transmission rates to men resulting from sex with women. Thus, as part of its HIV prevention strategy, the World Health Organization (WHO) recommends that male circumcision programs be scaled up in countries where there is a generalized HIV epidemic and where few men are circumcised.
Why Was This Study Done?
One such country is Rwanda. Here, 3% of the adult population is infected with HIV but only 15% of men are circumcised—worldwide, about 30% of men are circumcised. Demand for circumcision is increasing in Rwanda but, before policy makers introduce a country-wide male circumcision program, they need to identify the most cost-effective way to increase circumcision rates. In particular, they need to decide the age at which circumcision should be offered. Circumcision soon after birth (neonatal circumcision) is quick and simple and rarely causes any complications. Circumcision of adolescents and adults is more complex and has a higher complication rate. Although several studies have investigated the cost-effectiveness (the balance between the clinical and financial costs of a medical intervention and its benefits) of circumcision in adult men, little is known about its cost-effectiveness in newborn boys. In this study, which is one of several studies on male circumcision being organized by the National AIDS Control Commission in Rwanda, the researchers model the cost-effectiveness of circumcision at different ages.
What Did the Researchers Do and Find?
The researchers developed a simple cost-effectiveness model and applied it to three hypothetical groups of Rwandans: newborn boys, adolescent boys, and adult men. For their model, the researchers calculated the effectiveness of male circumcision (the number of HIV infections averted) by estimating the reduction in the annual number of new HIV infections over time. They obtained estimates of the costs of circumcision (including the costs of consumables, staff time, and treatment of complications) from health care providers and adjusted these costs for the money saved through not needing to treat HIV in males in whom circumcision prevented infection. Using their model, the researchers estimate that each neonatal male circumcision would cost US$15 whereas each adolescent or adult male circumcision would cost US$59. Neonatal male circumcision, they report, would be cost-saving. That is, over a lifetime, neonatal male circumcision would save more money than it costs. Finally, using the WHO definition of cost-effectiveness (for a cost-effective intervention, the additional cost incurred to gain one year of life must be less than a country's per capita gross domestic product), the researchers estimate that, although adolescent circumcision would be highly cost-effective, circumcision of adult men would only be potentially cost-effective (but would likely prove cost-effective if the additional infections that would occur from men to their partners without a circumcision program were also taken into account).
What Do These Findings Mean?
As with all modeling studies, the accuracy of these findings depends on the many assumptions included in the model. However, the findings suggest that male circumcision for infants for the prevention of HIV infection later in life is highly cost-effective and likely to be cost-saving and that circumcision for adolescents is cost-effective. The researchers suggest, therefore, that policy makers in Rwanda and in countries with similar HIV infection and circumcision rates should scale up male circumcision programs across all age groups, with high priority being given to the very young. If infants are routinely circumcised, they suggest, circumcision of adolescent and adult males would become a “catch-up” campaign that would be needed at the start of the program but that would become superfluous over time. Such an approach would represent a switch from managing the HIV epidemic as an emergency towards focusing on sustainable, long-term solutions to this major public-health problem.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000211.
This study is further discussed in a PLoS Medicine Perspective by Seth Kalichman
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
Information is available from the Joint United Nations Programme on HIV/AIDS (UNAIDS) on HIV infection and AIDS and on male circumcision in relation to HIV and AIDS
HIV InSite has comprehensive information on all aspects of HIV/AIDS
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on HIV and AIDS in Africa and on circumcision and HIV (some information in English and Spanish)
More information about male circumcision is available from the Clearinghouse on Male Circumcision
The National AIDS Control Commission of Rwanda provides detailed information about HIV/AIDS in Rwanda (in English and French)
doi:10.1371/journal.pmed.1000211
PMCID: PMC2808207  PMID: 20098721
3.  Validation and comparison of clinical prediction rules for invasive candidiasis in intensive care unit patients: a matched case-control study 
Critical Care  2011;15(4):R198.
Introduction
Due to the increasing prevalence and severity of invasive candidiasis, investigators have developed clinical prediction rules to identify patients who may benefit from antifungal prophylaxis or early empiric therapy. The aims of this study were to validate and compare the Paphitou and Ostrosky-Zeichner clinical prediction rules in ICU patients in a 689-bed academic medical center.
Methods
We conducted a retrospective matched case-control study from May 2003 to June 2008 to evaluate the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of each rule. Cases included adults with ICU stays of at least four days and invasive candidiasis matched to three controls by age, gender and ICU admission date. The clinical prediction rules were applied to cases and controls via retrospective chart review to evaluate the success of the rules in predicting invasive candidiasis. Paphitou's rule included diabetes, total parenteral nutrition (TPN) and dialysis with or without antibiotics. Ostrosky-Zeichner's rule included antibiotics or central venous catheter plus at least two of the following: surgery, immunosuppression, TPN, dialysis, corticosteroids and pancreatitis. Conditional logistic regression was performed to evaluate the rules. Discriminative power was evaluated by area under the receiver operating characteristic curve (AUC ROC).
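As a worked illustration of how such a rule is applied during chart review, the sketch below encodes the Ostrosky-Zeichner criteria exactly as summarized in this abstract: antibiotics or a central venous catheter plus at least two of the listed minor factors. The field names are illustrative, not taken from the original publication.

    # Ostrosky-Zeichner rule as summarized in this abstract:
    # (antibiotics OR central venous catheter) AND at least 2 minor factors.
    MINOR_FACTORS = ("surgery", "immunosuppression", "tpn", "dialysis",
                     "corticosteroids", "pancreatitis")

    def ostrosky_zeichner_positive(patient: dict) -> bool:
        major = patient.get("antibiotics", False) or patient.get("central_venous_catheter", False)
        minors = sum(bool(patient.get(factor, False)) for factor in MINOR_FACTORS)
        return bool(major) and minors >= 2

    example = {"central_venous_catheter": True, "tpn": True, "dialysis": True}
    print(ostrosky_zeichner_positive(example))  # True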
Results
A total of 352 patients were included (88 cases and 264 controls). The incidence of invasive candidiasis among adults with an ICU stay of at least four days was 2.3%. The prediction rules performed similarly, exhibiting low PPVs (0.041 to 0.054), high NPVs (0.983 to 0.990), and moderate AUC ROCs (0.649 to 0.705). A new prediction rule (Nebraska Medical Center rule) was developed with a PPV, NPV and AUC ROC of 0.047, 0.994 and 0.770, respectively.
Conclusions
Based on low PPVs and high NPVs, the rules are most useful for identifying patients who are not likely to develop invasive candidiasis, potentially preventing unnecessary antifungal use, optimizing patient ICU care and facilitating the design of forthcoming antifungal clinical trials.
doi:10.1186/cc10366
PMCID: PMC3387640  PMID: 21846332
candidiasis; clinical prediction rules; prophylaxis
4.  Risk Stratification by Self-Measured Home Blood Pressure across Categories of Conventional Blood Pressure: A Participant-Level Meta-Analysis 
PLoS Medicine  2014;11(1):e1001591.
Jan Staessen and colleagues compare the risk of cardiovascular, cardiac, or cerebrovascular events in patients with elevated office blood pressure vs. self-measured home blood pressure.
Please see later in the article for the Editors' Summary
Background
The Global Burden of Diseases Study 2010 reported that hypertension is worldwide the leading risk factor for cardiovascular disease, causing 9.4 million deaths annually. We examined to what extent self-measurement of home blood pressure (HBP) refines risk stratification across increasing categories of conventional blood pressure (CBP).
Methods and Findings
This meta-analysis included 5,008 individuals randomly recruited from five populations (56.6% women; mean age, 57.1 y). None were treated with antihypertensive drugs. In multivariable analyses, hazard ratios (HRs) associated with 10-mm Hg increases in systolic HBP were computed across CBP categories, using the following systolic/diastolic CBP thresholds (in mm Hg): optimal, <120/<80; normal, 120–129/80–84; high-normal, 130–139/85–89; mild hypertension, 140–159/90–99; and severe hypertension, ≥160/≥100.
Over 8.3 y of follow-up, 522 participants died, and 414, 225, and 194 had cardiovascular, cardiac, and cerebrovascular events, respectively. In participants with optimal or normal CBP, HRs for a composite cardiovascular end point associated with a 10-mm Hg higher systolic HBP were 1.28 (1.01–1.62) and 1.22 (1.00–1.49), respectively. At high-normal CBP and in mild hypertension, the HRs were 1.24 (1.03–1.49) and 1.20 (1.06–1.37), respectively, for all cardiovascular events and 1.33 (1.07–1.65) and 1.30 (1.09–1.56), respectively, for stroke. In severe hypertension, the HRs were not significant (p≥0.20). Among people with optimal, normal, and high-normal CBP, 67 (5.0%), 187 (18.4%), and 315 (30.3%), respectively, had masked hypertension (HBP≥130 mm Hg systolic or ≥85 mm Hg diastolic). Compared to true optimal CBP, masked hypertension was associated with a 2.3-fold (1.5–3.5) higher cardiovascular risk. A limitation was the scarcity of data from low- and middle-income countries.
Conclusions
HBP substantially refines risk stratification at CBP levels assumed to carry no or only mildly increased risk, in particular in the presence of masked hypertension. Randomized trials could help determine the best use of CBP vs. HBP in guiding BP management. Our study identified a novel indication for HBP, which, in view of its low cost and the increased availability of electronic communication, might be globally applicable, even in remote areas or in low-resource settings.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Globally, hypertension (high blood pressure) is the leading risk factor for cardiovascular disease and is responsible for 9.4 million deaths annually from heart attacks, stroke, and other cardiovascular diseases. Hypertension, which rarely has any symptoms, is diagnosed by measuring blood pressure, the force that blood circulating in the body exerts on the inside of large blood vessels. Blood pressure is highest when the heart is pumping out blood (systolic blood pressure) and lowest when the heart is refilling (diastolic blood pressure). European guidelines define optimal blood pressure as a systolic blood pressure of less than 120 millimeters of mercury (mm Hg) and a diastolic blood pressure of less than 80 mm Hg (a blood pressure of less than 120/80 mm Hg). Normal blood pressure, high-normal blood pressure, and mild hypertension are defined as blood pressures in the ranges 120–129/80–84 mm Hg, 130–139/85–89 mm Hg, and 140–159/90–99 mm Hg, respectively. A blood pressure of more than 160 mm Hg systolic or 100 mm Hg diastolic indicates severe hypertension. Many factors affect blood pressure; overweight people and individuals who eat salty or fatty food are at high risk of developing hypertension. Lifestyle changes and/or antihypertensive drugs can be used to control hypertension.
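A compact way to read these cut-offs is as a small classifier that applies the systolic/diastolic thresholds quoted above, assigning a reading to the highest category reached by either value. This is only a sketch of the category definitions, not a clinical tool.

    # European CBP categories as quoted above (systolic/diastolic, mm Hg).
    # A reading falls into the highest category reached by either value.
    def classify_cbp(systolic: float, diastolic: float) -> str:
        if systolic >= 160 or diastolic >= 100:
            return "severe hypertension"
        if systolic >= 140 or diastolic >= 90:
            return "mild hypertension"
        if systolic >= 130 or diastolic >= 85:
            return "high-normal"
        if systolic >= 120 or diastolic >= 80:
            return "normal"
        return "optimal"

    print(classify_cbp(118, 76))  # optimal
    print(classify_cbp(128, 92))  # mild hypertension (the diastolic value drives the category)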
Why Was This Study Done?
The current guidelines for the diagnosis and management of hypertension recommend risk stratification based on conventionally measured blood pressure (CBP, the average of two consecutive measurements made at a clinic). However, self-measured home blood pressure (HBP) more accurately predicts outcomes because multiple HBP readings are taken and because HBP measurement avoids the “white-coat effect”—some individuals have a raised blood pressure in a clinical setting but not at home. Could risk stratification across increasing categories of CBP be refined through the use of self-measured HBP, particularly at CBP levels assumed to be associated with no or only mildly increased risk? Here, the researchers undertake a participant-level meta-analysis (a study that uses statistical approaches to pool results from individual participants in several independent studies) to answer this question.
What Did the Researchers Do and Find?
The researchers included 5,008 individuals recruited from five populations and enrolled in the International Database of Home Blood Pressure in Relation to Cardiovascular Outcome (IDHOCO) in their meta-analysis. CBP readings were available for all the participants, who measured their HBP using an oscillometric device (an electronic device for measuring blood pressure). The researchers used information on fatal and nonfatal cardiovascular, cardiac, and cerebrovascular (stroke) events to calculate the hazard ratios (HRs, indicators of increased risk) associated with a 10-mm Hg increase in systolic HBP across standard CBP categories. In participants with optimal CBP, an increase in systolic HBP of 10 mm Hg increased the risk of any cardiovascular event by nearly 30% (an HR of 1.28). Similar HRs were associated with a 10-mm Hg increase in systolic HBP for all cardiovascular events among people with normal and high-normal CBP and with mild hypertension, but for people with severe hypertension, systolic HBP did not significantly add to the prediction of any end point. Among people with optimal, normal, and high-normal CBP, 5%, 18.4%, and 30.4%, respectively, had an HBP of 130/85 mm Hg or higher (“masked hypertension,” a higher blood pressure in daily life than in a clinical setting). Finally, compared to individuals with optimal CBP without masked hypertension, individuals with masked hypertension had more than double the risk of cardiovascular disease.
What Do These Findings Mean?
These findings indicate that HBP measurements, particularly in individuals with masked hypertension, refine risk stratification at CBP levels assumed to be associated with no or mildly elevated risk of cardiovascular disease. That is, HBP measurements can improve the prediction of cardiovascular complications or death among individuals with optimal, normal, and high-normal CBP but not among individuals with severe hypertension. Clinical trials are needed to test whether the identification and treatment of masked hypertension leads to a reduction of cardiovascular complications and is cost-effective compared to the current standard of care, which does not include HBP measurements and does not treat people with normal or high-normal CBP. Until then, these findings provide support for including HBP monitoring in primary prevention strategies for cardiovascular disease among individuals at risk for masked hypertension (for example, people with diabetes), and for carrying out HBP monitoring in people with a normal CBP but unexplained signs of hypertensive target organ damage.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001591.
This study is further discussed in a PLOS Medicine Perspective by Mark Caulfield
The US National Heart, Lung, and Blood Institute has patient information about high blood pressure (in English and Spanish) and a guide to lowering high blood pressure that includes personal stories
The American Heart Association provides information on high blood pressure and on cardiovascular diseases (in several languages); it also provides personal stories about dealing with high blood pressure
The UK National Health Service Choices website provides detailed information for patients about hypertension (including a personal story) and about cardiovascular disease
The World Health Organization provides information on cardiovascular disease and controlling blood pressure; its A Global Brief on Hypertension was published on World Health Day 2013
The UK charity Blood Pressure UK provides information about white-coat hypertension and about home blood pressure monitoring
MedlinePlus provides links to further information about high blood pressure, heart disease, and stroke (in English and Spanish)
doi:10.1371/journal.pmed.1001591
PMCID: PMC3897370  PMID: 24465187
5.  Accurate and Robust Genomic Prediction of Celiac Disease Using Statistical Learning 
PLoS Genetics  2014;10(2):e1004137.
Practical application of genomic-based risk stratification to clinical diagnosis is appealing, yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87–0.89) and in independent replication across cohorts (AUC of 0.86–0.9), despite differences in ethnicity. The models explained 30–35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those who would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate that a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases.
Author Summary
Celiac disease (CD) is a common immune-mediated illness, affecting approximately 1% of the population in Western countries, but the diagnostic process remains sub-optimal. The development of CD is strongly dependent on specific human leukocyte antigen (HLA) genes, and HLA testing to identify CD susceptibility is now commonly undertaken in clinical practice. The clinical utility of HLA typing is to exclude CD when the CD susceptibility HLA types are absent, but notably, most people who possess HLA types imparting susceptibility for CD never develop CD. Therefore, while genetic testing in CD can overcome several limitations of the current diagnostic tools, the utility of HLA typing to identify those individuals at increased risk of CD is limited. Using large datasets assaying single nucleotide polymorphisms (SNPs), we have developed genomic risk scores (GRS) based on multiple SNPs that can more accurately predict CD risk across several populations in “real world” clinical settings. The GRS can generate predictions that optimize CD risk stratification and diagnosis, potentially reducing the number of unnecessary follow-up investigations. The medical and economic impact of improving CD diagnosis is likely to be significant, and our findings support further studies into the role of personalized GRSs for other strongly heritable human diseases.
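The paper's central idea, modelling all SNPs simultaneously with a sparse statistical learning method rather than relying on HLA type alone, can be sketched as follows. This toy example uses L1-penalized logistic regression on random genotype data; it is not the authors' exact algorithm, SNP panel, or cohort data.

    # Toy genomic risk score: fit all SNPs at once with a sparse (L1-penalized)
    # logistic regression. Random data for illustration only; not the authors' method.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_samples, n_snps = 500, 2000
    X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)   # 0/1/2 minor-allele counts
    true_effects = np.zeros(n_snps)
    true_effects[:20] = rng.normal(0.0, 0.8, 20)                     # a handful of causal SNPs
    lin = X @ true_effects
    y = (lin + rng.normal(0.0, 1.0, n_samples) > np.median(lin)).astype(int)

    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    grs = model.decision_function(X)                                 # the genomic risk score
    print("in-sample AUC:", round(roc_auc_score(y, grs), 3))

In practice the score would be validated on held-out cohorts, as the study does, and a threshold chosen to trade off negative predictive value against the size of the higher-risk group referred for definitive testing.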
doi:10.1371/journal.pgen.1004137
PMCID: PMC3923679  PMID: 24550740
6.  The impact of the HEART risk score in the early assessment of patients with acute chest pain: design of a stepped wedge, cluster randomised trial 
Background
Chest pain remains a diagnostic challenge: physicians do not want to miss an acute coronary syndrome (ACS), but they also wish to avoid unnecessary additional diagnostic procedures. In approximately 75% of patients presenting with chest pain at the emergency department (ED), there is no underlying cardiac cause. Therefore, diagnostic strategies focus on identifying patients in whom an ACS can be safely ruled out based on findings from history, physical examination and early cardiac marker measurement. The HEART score, a clinical prediction rule, was developed to provide the clinician with a simple, early and reliable predictor of cardiac risk. We set out to quantify the impact of the use of the HEART score in daily practice on patient outcomes and costs.
Methods/Design
We designed a prospective, multi-centre, stepped wedge, cluster randomised trial. Our aim is to include a total of 6600 unselected chest pain patients presenting at the ED in 10 Dutch hospitals during an 11-month period. All clusters (i.e. hospitals) start with a period of ‘usual care’ and are randomised with respect to the timing of their switch to ‘intervention care’. The latter involves the calculation of the HEART score in each patient to guide clinical decisions: notably, reassurance and discharge of patients with low scores, and intensive monitoring and early intervention in patients with high HEART scores. The primary outcome is the occurrence of major adverse cardiac events (MACE), including acute myocardial infarction, revascularisation or death within 6 weeks after presentation. Secondary outcomes include occurrence of MACE in low-risk patients, quality of life, use of health care resources and costs.
Discussion
Stepped wedge designs are increasingly used to evaluate the real-life effectiveness of non-pharmacological interventions because of the following potential advantages: (a) each hospital has both a usual care and an intervention period, therefore, outcomes can be compared within and across hospitals; (b) each hospital will have an intervention period which enhances participation in case of a promising intervention; (c) all hospitals generate data about potential implementation problems. This large impact trial will generate evidence whether the anticipated benefits (in terms of safety and cost-effectiveness) of using the HEART score will indeed be achieved in real-life clinical practice.
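The stepped wedge logic described under Methods/Design, where every cluster starts in usual care and crosses over to the intervention at a randomly assigned time, can be sketched as a simple schedule generator. The 10 hospitals and 11-month window match the trial description, but the one-switch-per-month step pattern below is an illustrative assumption, not the trial's actual randomisation scheme.

    # Sketch of a stepped-wedge schedule: all clusters (hospitals) begin in usual
    # care and switch to the intervention at a randomly assigned step.
    import random

    random.seed(42)
    hospitals = [f"hospital_{i}" for i in range(1, 11)]
    random.shuffle(hospitals)                   # random crossover order

    months = 11
    schedule = {}
    for step, hospital in enumerate(hospitals, start=1):
        crossover_month = step + 1              # first switch after one baseline month (assumed)
        schedule[hospital] = ["usual care" if month < crossover_month else "intervention"
                              for month in range(1, months + 1)]

    for hospital, arms in schedule.items():
        print(hospital, arms.count("usual care"), "months usual care /",
              arms.count("intervention"), "months intervention")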
Trial registration
ClinicalTrials.gov 80-82310-97-12154.
doi:10.1186/1471-2261-13-77
PMCID: PMC3849098  PMID: 24070098
HEART score; Chest pain; Clinical prediction rule; Risk score implementation; Impact; Stepped wedge design; Cluster randomised trial
7.  ACHTUNG-Rule: a new and improved model for prognostic assessment in myocardial infarction 
Background:
Thrombolysis In Myocardial Infarction (TIMI), Platelet Glycoprotein IIb/IIIa in Unstable Angina: Receptor Suppression Using Integrilin (PURSUIT) and Global Registry of Acute Coronary Events (GRACE) scores have been developed for risk stratification in myocardial infarction (MI). The latter is the most extensively validated score, yet research aimed at improving prognostication in MI remains active.
Aim:
Derivation and validation of a new model for intrahospital, post-discharge and combined/total all-cause mortality prediction – ACHTUNG-Rule – and comparison with the GRACE algorithm.
Methods:
1091 patients admitted for MI (age 68.4 ± 13.5, 63.2% males, 41.8% acute MI with ST-segment elevation (STEMI)) and followed for 19.7 ± 6.4 months were assigned to a derivation sample. 400 patients admitted at a later date at our institution (age 68.3 ± 13.4, 62.7% males, 38.8% STEMI) and followed for a period of 7.2 ± 4.0 months were assigned to a validation sample. Three versions of the ACHTUNG-Rule were developed for the prediction of intrahospital, post-discharge and combined (intrahospital plus post-discharge) all-cause mortality prediction. All models were evaluated for their predictive performance using the area under the receiver operating characteristic (ROC) curve, calibration through the Hosmer–Lemeshow test and predictive utility within each individual patient through the Brier score. Comparison through ROC curve analysis and measures of risk reclassification – net reclassification improvement index (NRI) or Integrated Discrimination Improvement (IDI) – was performed between the ACHTUNG versions for intrahospital, post-discharge and combined mortality prediction and the equivalent GRACE score versions for intrahospital (GRACE-IH), post-discharge (GRACE-6PD) and post-admission 6-month mortality (GRACE-6).
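Categorical net reclassification improvement, one of the measures named above, has a simple closed form: the net proportion of events moved up a risk category plus the net proportion of non-events moved down. The sketch below applies that formula to made-up reclassification counts; it is not the study's data.

    # Categorical NRI = (P(up|event) - P(down|event)) + (P(down|non-event) - P(up|non-event)).
    # Counts below are invented for illustration.
    def categorical_nri(events_up, events_down, n_events,
                        nonevents_up, nonevents_down, n_nonevents):
        event_term = (events_up - events_down) / n_events
        nonevent_term = (nonevents_down - nonevents_up) / n_nonevents
        return event_term + nonevent_term

    # Example: of 100 events, the new score moves 18 up and 6 down a category;
    # of 900 non-events, it moves 40 down and 25 up.
    print(round(categorical_nri(18, 6, 100, 25, 40, 900), 3))  # 0.137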
Results:
Assessment of calibration and overall performance of the ACHTUNG-Rule demonstrated a good fit (p value for the Hosmer–Lemeshow goodness-of-fit test of 0.258, 0.101 and 0.550 for ACHTUNG-IH, ACHTUNG-T and ACHTUNG-R, respectively) and high discriminatory power in the validation cohort for all the primary endpoints (intrahospital mortality: AUC ACHTUNG-IH 0.886 ± 0.035 vs. AUC GRACE-IH 0.906 ± 0.026; post-discharge mortality: AUC ACHTUNG-R 0.827 ± 0.036 vs. AUC GRACE-6PD 0.811 ± 0.034; combined/total mortality: AUC ACHTUNG-T 0.831 ± 0.028 vs. AUC GRACE-6 0.815 ± 0.033). Furthermore, all versions of the ACHTUNG-Rule accurately reclassified a significant number of patients in different, more appropriate, risk categories (NRI ACHTUNG-IH 17.1%, p (2-sided) = 0.0021; NRI ACHTUNG-R 22.0%, p = 0.0002; NRI ACHTUNG-T 18.6%, p = 0.0012). The prognostic performance of the ACHTUNG-Rule was similar in both derivation and validation samples.
Conclusions:
All versions of the ACHTUNG-Rule have shown excellent discriminative power and good calibration for predicting intrahospital, post-discharge and combined in-hospital plus post-discharge mortality. The ACHTUNG version for intrahospital mortality prediction was not inferior to its equivalent GRACE model, and ACHTUNG versions for post-discharge and combined/total mortality demonstrated apparent superiority. External validation in wider, independent, preferably multicentre, registries is warranted before its potential clinical implementation.
doi:10.1177/2048872612466536
PMCID: PMC3760564  PMID: 24062923
Myocardial infarction; prognosis; risk assessment; GRACE risk score
8.  Research on Implementation of Interventions in Tuberculosis Control in Low- and Middle-Income Countries: A Systematic Review 
PLoS Medicine  2012;9(12):e1001358.
Cobelens and colleagues systematically reviewed research on implementation and cost-effectiveness of the WHO-recommended interventions for tuberculosis.
Background
Several interventions for tuberculosis (TB) control have been recommended by the World Health Organization (WHO) over the past decade. These include isoniazid preventive therapy (IPT) for HIV-infected individuals and household contacts of infectious TB patients, diagnostic algorithms for rule-in or rule-out of smear-negative pulmonary TB, and programmatic treatment for multidrug-resistant TB. There are no systematically collected data on the type of evidence that is publicly available to guide the scale-up of these interventions in low- and middle-income countries. We investigated the availability of published evidence on their effectiveness, delivery, and cost-effectiveness that policy makers need for scaling up these interventions at country level.
Methods and Findings
PubMed, Web of Science, EMBASE, and several regional databases were searched for studies published from 1 January 1990 through 31 March 2012 that assessed health outcomes, delivery aspects, or cost-effectiveness for any of these interventions in low- or middle-income countries. Selected studies were evaluated for their objective(s), design, geographical and institutional setting, and generalizability. Studies reporting health outcomes were categorized as primarily addressing efficacy or effectiveness of the intervention. These criteria were used to draw landscapes of published research. We identified 59 studies on IPT in HIV infection, 14 on IPT in household contacts, 44 on rule-in diagnosis, 19 on rule-out diagnosis, and 72 on second-line treatment. Comparative effectiveness studies were relatively few (n = 9) and limited to South America and sub-Saharan Africa for IPT in HIV-infection, absent for IPT in household contacts, and rare for second-line treatment (n = 3). Evaluations of diagnostic and screening algorithms were more frequent (n = 19) but geographically clustered and mainly of non-comparative design. Fifty-four studies evaluated ways of delivering these interventions, and nine addressed their cost-effectiveness.
Conclusions
There are substantial gaps in the published evidence needed to scale up these five WHO-recommended TB interventions at country level, which for many countries may preclude program-wide implementation. There is a strong need for rigorous operational research studies, carried out in programmatic settings, to inform the best use of existing and new interventions in TB control.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Tuberculosis (TB), caused by Mycobacterium tuberculosis, is curable and preventable, but according to the World Health Organization (WHO), in 2011, 8.7 million people had symptoms of TB (usually a productive cough and fever) and 1.4 million people—95% from low- and middle-income countries—died from TB. TB is also the leading cause of death in people with HIV worldwide, and in 2010 about 10 million children were orphaned as a result of their parents dying from TB. To help reduce the considerable global burden of TB, a global initiative called the Stop TB Partnership, led by WHO, has implemented a strategy to reduce deaths from TB by 50% by 2015—even greater than the target of Millennium Development Goal 6 (to reverse the increase in TB incidence by 2015).
Why Was This Study Done?
Over the past few years, WHO has recommended that countries implement several interventions to help control the spread of tuberculosis through measures to improve prevention, diagnosis, and treatment. Five such interventions currently recommended by WHO are: treatment with isoniazid to prevent TB among people who are HIV positive, and also among household contacts of people infected with TB; the use of clinical pathways (algorithms) for diagnosing TB in people accessing health care who have a negative smear test—the most commonly used diagnostic test, which relies on sputum samples—(“rule-in algorithms”); screening algorithms for excluding TB in people who have HIV (“rule-out algorithms”); and finally, provision of second-line treatment for multidrug-resistant tuberculosis (a form of TB that does not respond to the most commonly used drugs) under programmatic conditions. The effectiveness of these interventions, their costs, and the practicalities of implementation are all important information for countries seeking to control TB following the WHO guidelines, but little is known about the availability of this information. Therefore, in this study the researchers systematically reviewed published studies to find evidence of the effectiveness of each of these interventions when implemented in routine practice, and also for additional information on the setting and conditions of implemented interventions, which might be useful to other countries.
What Did the Researchers Do and Find?
Using a specific search strategy, the researchers comprehensively searched through several key databases of publications, including regional databases, to identify 208 (out of 11,489 found initially) suitable research papers published between January 1990 and March 2012. For included studies, the researchers also noted the geographical location and setting and the type and design of study.
Of the 208 included studies, 59 focused on isoniazid prevention therapy in HIV infection, and only 14 on isoniazid prevention therapy for household contacts. There were 44 studies on “rule-in” clinical diagnosis, 19 on “rule-out” clinical diagnosis, and 72 studies on second-line treatment for TB. Studies on each intervention had some weaknesses, and overall, researchers found that there were very few real-world studies reporting on the effectiveness of interventions in program settings (rather than under optimal conditions in research settings). Few studies evaluated the methods used to implement the intervention or addressed delivery and operational issues (such as adherence to treatment), and there were limited economic evaluations of the recommended interventions. Furthermore, the researchers found that in general, the South Asian region was poorly represented.
What Do These Findings Mean?
These findings suggest that there is limited evidence on effectiveness, delivery, and cost-effectiveness to guide the scale-up of the five WHO-recommended interventions to control tuberculosis in these countries and settings, despite the urgent need for such interventions to be implemented. The poor evidence base identified in this review highlights the tension between the decision to adopt a recommendation and its implementation adapted to local circumstances, and may be an important reason why these interventions are not implemented in many countries. This study also suggests that creative thinking is necessary to address the gaps between WHO recommendations and global health policy on new interventions and their real-world implementation in country-wide TB control programs. Future research should focus more on operational studies, the results of which should be made publicly available, and researchers, donors, and medical journals could perhaps re-consider their priorities to help bridge the knowledge gap identified in this study.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001358.
WHO has a wide range of information about TB and research on TB, including more about the STOP TB strategy and the STOP TB Partnership
The UN website has more information about MDG 6
The Global Fund to Fight AIDS, Tuberculosis and Malaria has specific information about progress on TB control
doi:10.1371/journal.pmed.1001358
PMCID: PMC3525528  PMID: 23271959
9.  RISK STRATIFICATION IN CRITICAL LIMB ISCHEMIA: DERIVATION AND VALIDATION OF A MODEL TO PREDICT AMPUTATION-FREE SURVIVAL USING MULTI-CENTER SURGICAL OUTCOMES DATA 
Patients with critical limb ischemia (CLI) are a heterogeneous population with respect to risk for mortality and limb loss, complicating clinical decision-making. Endovascular options, as compared to bypass, offer a tradeoff between reduced procedural risk and inferior durability. Risk stratified data predictive of amputation-free survival (AFS) may improve clinical decision making and allow for better assessment of new technology in the CLI population.
METHODS
This was a retrospective analysis of prospectively collected data from patients who underwent infrainguinal vein bypass surgery for CLI. Two datasets were used: the PREVENT III randomized trial (n=1404) and a multicenter registry (n=716) from 3 distinct vascular centers (2 academic, 1 community-based). The PREVENT III cohort was randomly assigned to a derivation set (n=953) and to a validation set (n=451). The primary endpoint was AFS. Predictors of AFS identified on univariate screen (inclusion threshold, p<0.20) were included in a stepwise selection Cox model. The resulting 5 significant predictors were assigned an integer score to stratify patients into 3 risk groups. The prediction rule was internally validated in the PREVENT III validation set and externally validated in the multicenter cohort.
RESULTS
The estimated 1 year AFS in the derivation, internal validation, and external validation sets were 76.3%, 72.5%, and 77.0%, respectively. In the derivation set, dialysis (HR 2.81, p<.0001), tissue loss (HR 2.22, p=.0004), age ≥75 (HR 1.64, p=.001), hematocrit ≤30 (HR 1.61, p=.012), and advanced CAD (HR 1.41, p=.021) were significant predictors for AFS in the multivariable model. An integer score, derived from the β coefficients, was used to generate 3 risk categories (low ≤ 3 [44.4% of cohort], medium 4–7 [46.7% of cohort], high ≥8 [8.8% of cohort]). Stratification of the patients, in each dataset, according to risk category yielded 3 significantly different Kaplan-Meier estimates for one year AFS (86%, 73%, and 45% for low, medium, and high risk groups respectively). For a given risk category, the AFS estimate was consistent between the derivation and validation sets.
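To show how an integer score of this kind is applied, the sketch below assigns illustrative point values, roughly scaled to the hazard ratios reported above, and uses the low (≤3), medium (4–7), and high (≥8) cut-offs from the Results. The exact published PREVENT III weights may differ from these assumed points.

    # Illustrative integer risk score in the spirit of the rule described above.
    # Point values are assumptions roughly scaled to the reported hazard ratios;
    # category cut-offs follow the Results (low <= 3, medium 4-7, high >= 8).
    POINTS = {
        "dialysis": 4,          # HR 2.81
        "tissue_loss": 3,       # HR 2.22
        "age_ge_75": 2,         # HR 1.64
        "hematocrit_le_30": 2,  # HR 1.61
        "advanced_cad": 1,      # HR 1.41
    }

    def risk_category(patient: dict):
        score = sum(pts for factor, pts in POINTS.items() if patient.get(factor, False))
        if score <= 3:
            return score, "low"
        if score <= 7:
            return score, "medium"
        return score, "high"

    print(risk_category({"dialysis": True, "tissue_loss": True, "age_ge_75": True}))  # (9, 'high')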
CONCLUSION
Among patients selected to undergo surgical bypass for infrainguinal disease, this parsimonious risk stratification model reliably identified a category of CLI patients with a >50% chance of death or major amputation at 1 year. Calculation of a “PIII risk score” may be useful for surgical decision making and for clinical trial designs in the CLI population.
doi:10.1016/j.jvs.2008.07.062
PMCID: PMC2765219  PMID: 19118735
10.  Defining Catastrophic Costs and Comparing Their Importance for Adverse Tuberculosis Outcome with Multi-Drug Resistance: A Prospective Cohort Study, Peru 
PLoS Medicine  2014;11(7):e1001675.
Tom Wingfield and colleagues investigate the relationship between catastrophic costs and tuberculosis outcomes for patients receiving free tuberculosis care in Peru.
Please see later in the article for the Editors' Summary
Background
Even when tuberculosis (TB) treatment is free, hidden costs incurred by patients and their households (TB-affected households) may worsen poverty and health. Extreme TB-associated costs have been termed “catastrophic” but are poorly defined. We studied TB-affected households' hidden costs and their association with adverse TB outcome to create a clinically relevant definition of catastrophic costs.
Methods and Findings
From 26 October 2002 to 30 November 2009, TB patients (n = 876, 11% with multi-drug-resistant [MDR] TB) and healthy controls (n = 487) were recruited to a prospective cohort study in shantytowns in Lima, Peru. Patients were interviewed prior to and every 2–4 wk throughout treatment, recording direct (household expenses) and indirect (lost income) TB-related costs. Costs were expressed as a proportion of the household's annual income. In poorer households, costs were lower but constituted a higher proportion of the household's annual income: 27% (95% CI = 20%–43%) in the least-poor houses versus 48% (95% CI = 36%–50%) in the poorest. Adverse TB outcome was defined as death, treatment abandonment or treatment failure during therapy, or recurrence within 2 y. 23% (166/725) of patients with a defined treatment outcome had an adverse outcome. Total costs ≥20% of household annual income was defined as catastrophic because this threshold was most strongly associated with adverse TB outcome. Catastrophic costs were incurred by 345 households (39%). Having MDR TB was associated with a higher likelihood of incurring catastrophic costs (54% [95% CI = 43%–61%] versus 38% [95% CI = 34%–41%], p<0.003). Adverse outcome was independently associated with MDR TB (odds ratio [OR] = 8.4 [95% CI = 4.7–15], p<0.001), previous TB (OR = 2.1 [95% CI = 1.3–3.5], p = 0.005), days too unwell to work pre-treatment (OR = 1.01 [95% CI = 1.00–1.01], p = 0.02), and catastrophic costs (OR = 1.7 [95% CI = 1.1–2.6], p = 0.01). The adjusted population attributable fraction of adverse outcomes explained by catastrophic costs was 18% (95% CI = 6.9%–28%), similar to that of MDR TB (20% [95% CI = 14%–25%]). Sensitivity analyses demonstrated that existing catastrophic costs thresholds (≥10% or ≥15% of household annual income) were not associated with adverse outcome in our setting. Study limitations included not measuring certain “dis-saving” variables (including selling household items) and gathering only 6 mo of costs-specific follow-up data for MDR TB patients.
Conclusions
Despite free TB care, having TB disease was expensive for impoverished TB patients in Peru. Incurring higher relative costs was associated with adverse TB outcome. The population attributable fraction indicated that catastrophic costs and MDR TB were associated with similar proportions of adverse outcomes. Thus TB is a socioeconomic as well as infectious problem, and TB control interventions should address both the economic and clinical aspects of this disease.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Caused by the infectious microbe Mycobacterium tuberculosis, tuberculosis (or TB) is a global health problem. In 2012, an estimated 8.6 million people fell ill with TB, and 1.3 million were estimated to have died because of the disease. Poverty is widely recognized as an important risk factor for TB, and developing nations shoulder a disproportionate burden of both poverty and TB disease. For example, in Lima (the capital of Peru), the incidence of TB follows the poverty map, sparing residents living in rich areas of the city while spreading among poorer residents that live in overcrowded households.
The Peruvian government, non-profit organizations, and the World Health Organization (WHO) have extended healthcare programs to provide free diagnosis and treatment for TB and drug-resistant strains of TB in Peru, but rates of new TB cases remain high. For example, in Ventanilla (an area of 16 shantytowns located in northern Lima), the rate of infection was higher during the study period, at 162 new cases per 100,000 people per year, than the national average. About one-third of the 277,895 residents of Ventanilla live on under US$1 per day.
Why Was This Study Done?
Poverty increases the risks associated with contracting TB infection, but the disease also affects the most economically productive age group, and the income of TB-affected households often decreases post-diagnosis, exacerbating poverty. A recent WHO consultation report proposed a target of eradicating catastrophic costs for TB-affected families by 2035, but hidden TB-related costs remain understudied, and there is no international consensus defining catastrophic costs incurred by patients and households affected by TB. Lost income and the cost of transport are among hidden costs associated with free treatment programs; these costs and their potential impact on patients and their households are not well defined. Here the researchers sought to clarify and characterize TB-related costs and explore whether there is a relationship between the hidden costs associated with free TB treatment programs and the likelihood of completing treatment and becoming cured of TB.
What Did the Researchers Do and Find?
Over a seven-year period (2002–2009), the researchers recruited 876 study participants with TB diagnosed at health posts located in Ventanilla. To provide a comparative control group, a sample of 487 healthy individuals was also recruited to participate. Participants were interviewed prior to treatment, and households' TB-related direct expenses and indirect expenses (lost income attributed to TB) were recorded every 2–4 wk. Data were collected during scheduled household visits.
TB patients were poorer than controls, and analysis of the data showed that accessing free TB care was expensive for TB patients, especially those with multi-drug-resistant (MDR) TB. Total expenses were similar pre-treatment compared to during treatment for TB patients, despite receiving free care (1.1 versus 1.2 times the same household's monthly income). Even though direct expenses (for example, costs of medical examinations and medicines other than anti-TB therapy) were lower in the poorest households, their total expenses (direct and indirect) made up a greater proportion of their household annual income: 48% for the poorest households compared to 27% in the least-poor households.
The researchers defined costs that were equal to or above one-fifth (20%) of household annual income as catastrophic because this threshold marked the greatest association with adverse treatment outcomes such as death, abandoning treatment, failing to respond to treatment, or TB recurrence. By calculating the population attributable fraction—the proportional reduction in population adverse treatment outcomes that could occur if a risk factor was reduced to zero—the authors estimate that adverse TB outcomes explained by catastrophic costs and MDR TB were similar: 18% for catastrophic costs and 20% for MDR TB.
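A population attributable fraction of this kind can be approximated from the exposure prevalence and the relative risk using Levin's formula, PAF = p(RR − 1) / (1 + p(RR − 1)). The sketch below plugs in the headline figures from this study (39% of households incurred catastrophic costs; adjusted OR 1.7, used here as a stand-in for the relative risk), so it approximates rather than reproduces the authors' fully adjusted estimate of 18%.

    # Levin's population attributable fraction: PAF = p(RR - 1) / (1 + p(RR - 1)).
    # Inputs: 39% exposure prevalence (catastrophic costs) and OR 1.7 standing in
    # for the relative risk; the authors' fully adjusted PAF was 18%.
    def levin_paf(prevalence: float, relative_risk: float) -> float:
        excess = prevalence * (relative_risk - 1)
        return excess / (1 + excess)

    print(round(levin_paf(0.39, 1.7), 3))  # ~0.214, close to the reported adjusted 18%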
What Do These Findings Mean?
The findings of this study indicate a potential role for social protection as a means to improve TB disease control and health, as well as defining a novel, evidence-based threshold for catastrophic costs for TB-affected households of 20% or more of annual income. Addressing the economic impact of diagnosis and treatment in impoverished communities may increase the odds of curing TB.
Study limitations included only six months of follow-up data being gathered on costs for each participant and not recording “dissavings,” such as selling of household items in response to financial shock. Because the study was observational, the authors aren't able to determine the direction of the association between catastrophic costs and TB outcome. Even so, the study indicates that TB is a socioeconomic as well as infectious problem, and that TB control interventions should address both the economic and clinical aspects of the disease.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001675.
The World Health Organization provides information on all aspects of tuberculosis, including the Global Tuberculosis Report 2013
The US Centers for Disease Control and Prevention has information about tuberculosis
Médecins Sans Frontières's TB&ME blog provides patients' stories of living with MDR TB
TB Alert, a UK-based charity that promotes TB awareness worldwide, has information on TB in several European, African, and Asian languages
More information is available about the Innovation For Health and Development (IFHAD) charity and its research team's work in Peru
doi:10.1371/journal.pmed.1001675
PMCID: PMC4098993  PMID: 25025331
11.  A Comparison of Cost Effectiveness Using Data from Randomized Trials or Actual Clinical Practice: Selective Cox-2 Inhibitors as an Example 
PLoS Medicine  2009;6(12):e1000194.
Tjeerd-Pieter van Staa and colleagues estimate the likely cost effectiveness of selective Cox-2 inhibitors prescribed during routine clinical practice, as compared to the cost effectiveness predicted from randomized controlled trial data.
Background
Data on absolute risks of outcomes and patterns of drug use in cost-effectiveness analyses are often based on randomised clinical trials (RCTs). The objective of this study was to evaluate the external validity of published cost-effectiveness studies by comparing the data used in these studies (typically based on RCTs) to observational data from actual clinical practice. Selective Cox-2 inhibitors (coxibs) were used as an example.
Methods and Findings
The UK General Practice Research Database (GPRD) was used to estimate the exposure characteristics and individual probabilities of upper gastrointestinal (GI) events during current exposure to nonsteroidal anti-inflammatory drugs (NSAIDs) or coxibs. A basic cost-effectiveness model was developed evaluating two alternative strategies: prescription of a conventional NSAID or a coxib. Outcomes included upper GI events as recorded in GPRD and hospitalisation for upper GI events recorded in the national registry of hospitalisations (Hospital Episode Statistics) linked to GPRD. Prescription costs were based on the prescribed number of tablets as recorded in GPRD and the 2006 cost data from the British National Formulary. The study population included over 1 million patients prescribed conventional NSAIDs or coxibs. Only a minority of patients used the drugs long-term and daily (34.5% of conventional NSAIDs and 44.2% of coxibs), whereas coxib RCTs required daily use for at least 6–9 months. The mean cost of preventing one upper GI event as recorded in GPRD was US$104k (ranging from US$64k with long-term daily use to US$182k with intermittent use) and US$298k for hospitalizations. The mean costs (for GPRD events) over calendar time were US$58k during 1990–1993 and US$174k during 2002–2005. Using RCT data rather than GPRD data for event probabilities, the mean cost was US$16k with the VIGOR RCT and US$20k with the CLASS RCT.
Conclusions
The published cost-effectiveness analyses of coxibs lacked external validity, did not represent patients in actual clinical practice, and should not have been used to inform prescribing policies. External validity should be an explicit requirement for cost-effectiveness analyses.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Before a new treatment for a specific disease becomes an established part of clinical practice, it goes through a long process of development and clinical testing. This process starts with extensive studies of the new treatment in the laboratory and in animals and then moves into clinical trials. The most important of these trials are randomized controlled trials (RCTs), studies in which the efficacy and safety of the new drug and an established drug are compared by giving the two drugs to randomized groups of patients with the disease. The final hurdle that a drug or any other healthcare technology often has to jump before being adopted for widespread clinical use is a health technology assessment, which aims to provide policymakers, clinicians, and patients with information about the balance between the clinical and financial costs of the drug and its benefits (its cost-effectiveness). In England and Wales, for example, the National Institute for Health and Clinical Excellence (NICE), which promotes clinical excellence and the effective use of resources within the National Health Service, routinely commissions such assessments.
Why Was This Study Done?
Data on the risks of various outcomes associated with a new treatment are needed for cost-effectiveness analyses. These data are usually obtained from RCTs, but although RCTs are the best way of determining a drug's potency in experienced hands under ideal conditions (its efficacy), they may not be a good way to determine a drug's success in an average clinical setting (its effectiveness). In this study, the researchers compare the data from RCTs that have been used in several published cost-effectiveness analyses of a class of drugs called selective cyclooxygenase-2 inhibitors (“coxibs”) with observational data from actual clinical practice. They then ask whether the published cost-effectiveness studies, which generally used RCT data, should have been used to inform coxib prescribing policies. Coxibs are nonsteroidal anti-inflammatory drugs (NSAIDs) that were developed in the 1990s to treat arthritis and other chronic inflammatory conditions. Conventional NSAIDs can cause gastric ulcers and bleeding from the gut (upper gastrointestinal events) if taken for a long time. The use of coxibs avoids this problem.
What Did the Researchers Do and Find?
The researchers extracted data on the real-life use of conventional NSAIDs and coxibs and on the incidence of upper gastrointestinal events from the UK General Practice Research Database (GPRD) and from the national registry of hospitalizations. Only a minority of the more than 1 million patients who were prescribed conventional NSAIDs (average cost per prescription US$17.80) or coxibs (average cost per prescription US$47.04) for a variety of inflammatory conditions took them on a long-term daily basis, whereas in the RCTs of coxibs, patients with a few carefully defined conditions took the drugs daily for at least 6–9 months. The researchers then developed a cost-effectiveness model to evaluate the costs of the alternative strategies of prescribing a conventional NSAID or a coxib. The mean additional cost of preventing one gastrointestinal event recorded in the GPRD by using a coxib instead of an NSAID, they report, was US$104,000; the mean cost of preventing one hospitalization for such an event was US$298,000. By contrast, the mean cost of preventing one gastrointestinal event by using a coxib instead of an NSAID, calculated from data obtained in RCTs, was about US$20,000.
What Do These Findings Mean?
These findings suggest that the published cost-effectiveness analyses of coxibs greatly underestimate the cost of preventing gastrointestinal events by replacing prescriptions of conventional NSAIDs with prescriptions of coxibs. That is, if data from actual clinical practice had been used in cost-effectiveness analyses rather than data from RCTs, the conclusions of the published cost-effectiveness analyses of coxibs would have been radically different and may have led to different prescribing guidelines for this class of drug. More generally, these findings provide a good illustration of how important it is to ensure that cost-effectiveness analyses have “external” validity by using realistic estimates for event rates and costs rather than relying on data from RCTs that do not always reflect the real-world situation. The researchers suggest, therefore, that health technology assessments should move from evaluating cost-efficacy in ideal populations with ideal interventions to evaluating cost-effectiveness in real populations with real interventions.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000194.
The UK National Institute for Health Research provides information about health technology assessment
The National Institute for Health and Clinical Excellence Web site describes how this organization provides guidance on promoting good health within the England and Wales National Health Service
Information on the UK General Practice Research Database is available
Wikipedia has pages on health technology assessment and on selective cyclooxygenase-2 inhibitors (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1000194
PMCID: PMC2779340  PMID: 19997499
12.  Risk Stratification in Acute Heart Failure: Rationale and Design of the STRATIFY and DECIDE Studies 
American heart journal  2012;164(6):825-834.
A critical challenge for physicians facing patients presenting with signs and symptoms of acute heart failure (AHF) is how and where to best manage them. Currently, most patients evaluated for AHF are admitted to the hospital, yet not all warrant inpatient care. Up to 50% of admissions could be potentially avoided and many admitted patients could be discharged after a short period of observation and treatment. Methods for identifying patients that can be sent home early are lacking. Improving the physician’s ability to identify and safely manage low-risk patients is essential to avoiding unnecessary use of hospital beds.
Two studies (STRATIFY and DECIDE) have been funded by the National Heart Lung and Blood Institute with the goal of developing prediction rules to facilitate early decision making in AHF. Using prospectively gathered evaluation and treatment data from the acute setting (STRATIFY) and early inpatient stay (DECIDE), rules will be generated to predict risk for death and serious complications. Subsequent studies will be designed to test the external validity, utility, generalizability and cost-effectiveness of these prediction rules in different acute care environments representing racially and socioeconomically diverse patient populations.
A major innovation is prediction of 5-day as well as 30-day outcomes, overcoming the limitation that 30-day outcomes are highly dependent on unpredictable, post-visit patient and provider behavior. A novel aspect of the proposed project is the use of a comprehensive cardiology review to correctly assign post-treatment outcomes to the acute presentation. Finally, a rigorous analysis plan has been developed to construct the prediction rules that will maximally extract both the statistical and clinical properties of every data element. Upon completion of this study we will subsequently externally test the prediction rules in a heterogeneous patient cohort.
doi:10.1016/j.ahj.2012.07.033
PMCID: PMC3511776  PMID: 23194482
13.  The Effects of Mandatory Prescribing of Thiazides for Newly Treated, Uncomplicated Hypertension: Interrupted Time-Series Analysis 
PLoS Medicine  2007;4(7):e232.
Background
The purpose of our study was to evaluate the effects of a new reimbursement rule for antihypertensive medication that made thiazides mandatory first-line drugs for newly treated, uncomplicated hypertension. The objective of the new regulation was to reduce drug expenditures.
Methods and Findings
We conducted an interrupted time-series analysis on prescribing data before and after the new reimbursement rule for antihypertensive medication was put into effect. All patients started on antihypertensive medication in 61 general practices in Norway were included in the analysis. The new rule was put forward by the Ministry of Health and was approved by parliament. Adherence to the rule was monitored only minimally, and there were no penalties for non-adherence. Our primary outcome was the proportion of thiazide prescriptions among all prescriptions made for persons started on antihypertensive medication. Secondary outcomes included the proportion of patients who, within 4 mo, reached recommended blood-pressure goals and the proportion of patients who, within 4 mo, were not started on a second antihypertensive drug. We also compared drug costs before and after the intervention. During the baseline period, 10% of patients started on antihypertensive medication were given a thiazide prescription. This proportion rose steadily during the transition period, after which it remained stable at 25%. For other outcomes, no statistically significant differences were demonstrated. Achievement of treatment goals was slightly higher after the new rule was introduced (58.4% versus 56.6% before), and the prescribing of a second drug was slightly lower (21.8% versus 24.0% before). Drug costs were reduced by an estimated 4.8 million Norwegian kroner (€0.58 million, US$0.72 million) in the first year, equivalent to 1.06 Norwegian kroner per inhabitant (€0.13, US$0.16).
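A minimal sketch of a segmented-regression interrupted time-series analysis of the kind described here is given below (Python, statsmodels). The monthly data are simulated, and the specification (no transition-period handling, no autocorrelation adjustment) is a simplification rather than the authors' actual model.

    # Minimal segmented-regression sketch of an interrupted time series.
    # Hypothetical monthly data; the authors' exact model is not reproduced.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    months = np.arange(24)                 # e.g., 12 months pre, 12 months post
    post = (months >= 12).astype(int)      # indicator for the post-rule period
    time_since_rule = np.where(post == 1, months - 11, 0)

    rng = np.random.default_rng(0)
    thiazide_pct = 10 + 15 * post + rng.normal(0, 1.5, months.size)  # simulated outcome

    df = pd.DataFrame({"pct": thiazide_pct, "month": months,
                       "post": post, "time_since_rule": time_since_rule})

    # Level change (post) and slope change (time_since_rule) after the intervention
    model = smf.ols("pct ~ month + post + time_since_rule", data=df).fit()
    print(model.params)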
Conclusions
Prescribing of thiazides in Norway for uncomplicated hypertension more than doubled after a reimbursement rule requiring the use of thiazides as the first-choice therapy was put into effect. However, the resulting savings on drug expenditures were modest. There were no significant changes in the achievement of treatment goals or in the prescribing of a second antihypertensive drug.
Atle Fretheim and colleagues found that the prescribing of thiazides in Norway for uncomplicated hypertension more than doubled after a rule requiring their use as first-choice therapy was put into effect.
Editors' Summary
Background.
High blood pressure (hypertension) is a common medical condition, especially among elderly people. It has no obvious symptoms but can lead to heart attacks, heart failure, strokes, or kidney failure. It is diagnosed by measuring blood pressure—the force that blood moving around the body exerts on the inside of arteries (large blood vessels). Many factors affect blood pressure (which depends on the amount of blood being pumped round the body and on the size and condition of the arteries), but overweight people and individuals who eat fatty or salty food are at high risk of developing hypertension. Mild hypertension can often be corrected by making lifestyle changes, but many patients also take one or more antihypertensive agents. These include thiazide diuretics and several types of non-thiazide drugs, many of which reduce heart rate or contractility and/or dilate blood vessels.
Why Was This Study Done?
Antihypertensive agents are a major part of national drug expenditure in developed countries, where as many as one person in ten is treated for hypertension. The different classes of drugs are all effective, but their cost varies widely. Thiazides, for example, are a tenth of the price of many non-thiazide drugs. In Norway, the low use of thiazides recently led the government to impose a new reimbursement rule aimed at reducing public expenditure on antihypertensive drugs. Since March 2004, family doctors have been reimbursed for drug costs only if they prescribe thiazides as first-line therapy for uncomplicated hypertension, unless there are medical reasons for selecting other drugs. Adherence to the rule has not been monitored, and there is no penalty for non-adherence, so has this intervention changed prescribing practices? To find out, the researchers in this study analyzed Norwegian prescribing data before and after the new rule came into effect.
What Did the Researchers Do and Find?
The researchers analyzed the monthly antihypertensive drug–prescribing records of 61 practices around Oslo, Norway, between January 2003 and November 2003 (pre-intervention period), between December 2003 and February 2004 (transition period), and between March 2004 and January 2005 (post-intervention period). This type of study is called an “interrupted time series”. During the pre-intervention period, one in ten patients starting antihypertensive medication was prescribed a thiazide drug. This proportion gradually increased during the transition period before stabilizing at one in four patients throughout the post-intervention period. A slightly higher proportion of patients reached their recommended blood-pressure goal after the rule was introduced than before, and a slightly lower proportion needed to switch to a second drug class, but both these small differences may have been due to chance. Finally, the researchers estimated that the observed change in prescribing practices reduced drug costs per Norwegian by US$0.16 (€0.13) in the first year.
What Do These Findings Mean?
Past attempts to change antihypertensive-prescribing practices by trying to influence family doctors (for example, through education) have largely failed. By contrast, these findings suggest that imposing a change on them (in this case, by introducing a new reimbursement rule) can be effective (at least over the short term and in the practices included in the study), even when compliance with the change is not monitored and noncompliance is not penalized. However, despite a large shift towards prescribing thiazides, three-quarters of patients were still prescribed non-thiazide drugs (possibly because of doubts about the efficacy of thiazides as first-line drugs), which emphasizes how hard it is to change doctors' prescribing habits. Further studies are needed to investigate whether the approach examined in this study can effectively contain the costs of antihypertensive drugs (and of drugs used for other common medical conditions) in the long term and in other settings. Also, because the estimated reduction in drug costs produced by the intervention was relatively modest (although likely to increase over time as more patients start on thiazides), other ways to change prescribing practices and produce savings in national drug expenditures should be investigated.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040232.
MedlinePlus encyclopedia page on hypertension (in English and Spanish)
UK National Institute for Health and Clinical Excellence information on hypertension for patients, carers, and professionals
American Heart Association information for patients on high blood pressure
An open-access research article describing the potential savings of using thiazides as the first-choice antihypertensive drug
A previous study in Norway, published in PLoS Medicine, examined what happened when doctors were actively encouraged to make more use of thiazides. There was also an economic evaluation of what this achieved
doi:10.1371/journal.pmed.0040232
PMCID: PMC1904466  PMID: 17622192
14.  Performance of Thirteen Clinical Rules to Distinguish Bacterial and Presumed Viral Meningitis in Vietnamese Children 
PLoS ONE  2012;7(11):e50341.
Background and Purpose
Successful outcomes from bacterial meningitis require rapid antibiotic treatment; however, unnecessary treatment of viral meningitis may lead to increased toxicities and expense. Thus, improved diagnostics are required to optimize treatment and minimize side effects and cost. Thirteen clinical decision rules have been reported to distinguish bacterial from viral meningitis. However, few rules have been tested and compared in a single study, while several rules are yet to be tested by independent researchers or in pediatric populations. Thus, simultaneous testing and comparison of these rules is required to enable clinicians to select an optimal diagnostic rule for bacterial meningitis in settings and populations similar to ours.
Methods
A retrospective cross-sectional study was conducted at the Infectious Department of Pediatric Hospital Number 1, Ho Chi Minh City, Vietnam. The performance of the clinical rules was evaluated by area under a receiver operating characteristic curve (ROC-AUC) using the method of DeLong and McNemar test for specificity comparison.
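The sketch below (Python) illustrates the general workflow of scoring each rule's discrimination with ROC-AUC and comparing paired specificities with McNemar's test, using hypothetical scores and labels. DeLong's test for correlated AUCs is not shown, as it is not available in the standard libraries used here.

    # Sketch only: AUC per rule and a McNemar comparison of paired specificities.
    # Hypothetical scores/labels; DeLong's test for correlated AUCs is not shown.
    import numpy as np
    from sklearn.metrics import roc_auc_score
    from statsmodels.stats.contingency_tables import mcnemar

    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, 129)                  # 1 = bacterial, 0 = presumed viral
    score_rule_a = y + rng.normal(0, 0.8, 129)   # simulated rule scores
    score_rule_b = y + rng.normal(0, 1.2, 129)

    print("AUC rule A:", roc_auc_score(y, score_rule_a))
    print("AUC rule B:", roc_auc_score(y, score_rule_b))

    # Compare specificities among the presumed viral cases (paired classifications)
    viral = y == 0
    a_pos = score_rule_a[viral] > 0.5            # "bacterial" calls by each rule
    b_pos = score_rule_b[viral] > 0.5
    table = np.array([[np.sum(~a_pos & ~b_pos), np.sum(~a_pos & b_pos)],
                      [np.sum(a_pos & ~b_pos),  np.sum(a_pos & b_pos)]])
    print(mcnemar(table, exact=True))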
Results
Our study included 129 patients, of whom 80 had bacterial meningitis and 49 had presumed viral meningitis. Spanos's rule had the highest AUC, at 0.938, but this was not significantly greater than that of the other rules. No rule provided 100% sensitivity with a specificity higher than 50%. Based on our calculation of theoretical sensitivity and specificity, we suggest that a perfect rule requires at least four independent variables that each possess both sensitivity and specificity higher than 85–90%.
Conclusions
No clinical decision rule provided an acceptable specificity (>50%) with 100% sensitivity when applied to our pediatric data set. More studies in Vietnam and other developing countries are required to develop and/or validate clinical rules, and additional high-performing biomarkers are needed to develop such a rule.
doi:10.1371/journal.pone.0050341
PMCID: PMC3508924  PMID: 23209715
15.  The Clinical and Economic Impact of Point-of-Care CD4 Testing in Mozambique and Other Resource-Limited Settings: A Cost-Effectiveness Analysis 
PLoS Medicine  2014;11(9):e1001725.
Emily Hyle and colleagues conduct a cost-effectiveness analysis to estimate the clinical and economic impact of point-of-care CD4 testing compared to laboratory-based tests in Mozambique.
Please see later in the article for the Editors' Summary
Background
Point-of-care CD4 tests at HIV diagnosis could improve linkage to care in resource-limited settings. Our objective is to evaluate the clinical and economic impact of point-of-care CD4 tests compared to laboratory-based tests in Mozambique.
Methods and Findings
We use a validated model of HIV testing, linkage, and treatment (CEPAC-International) to examine two strategies of immunological staging in Mozambique: (1) laboratory-based CD4 testing (LAB-CD4) and (2) point-of-care CD4 testing (POC-CD4). Model outcomes include 5-y survival, life expectancy, lifetime costs, and incremental cost-effectiveness ratios (ICERs). Input parameters include linkage to care (LAB-CD4, 34%; POC-CD4, 61%), probability of correctly detecting antiretroviral therapy (ART) eligibility (sensitivity: LAB-CD4, 100%; POC-CD4, 90%) or ART ineligibility (specificity: LAB-CD4, 100%; POC-CD4, 85%), and test cost (LAB-CD4, US$10; POC-CD4, US$24). In sensitivity analyses, we vary POC-CD4-specific parameters, as well as cohort and setting parameters to reflect a range of scenarios in sub-Saharan Africa. We consider ICERs less than three times the per capita gross domestic product in Mozambique (US$570) to be cost-effective, and ICERs less than one times the per capita gross domestic product in Mozambique to be very cost-effective. Projected 5-y survival in HIV-infected persons with LAB-CD4 is 60.9% (95% CI, 60.9%–61.0%), increasing to 65.0% (95% CI, 64.9%–65.1%) with POC-CD4. Discounted life expectancy and per person lifetime costs with LAB-CD4 are 9.6 y (95% CI, 9.6–9.6 y) and US$2,440 (95% CI, US$2,440–US$2,450) and increase with POC-CD4 to 10.3 y (95% CI, 10.3–10.3 y) and US$2,800 (95% CI, US$2,790–US$2,800); the ICER of POC-CD4 compared to LAB-CD4 is US$500/year of life saved (YLS) (95% CI, US$480–US$520/YLS). POC-CD4 improves clinical outcomes and remains near the very cost-effective threshold in sensitivity analyses, even if point-of-care CD4 tests have lower sensitivity/specificity and higher cost than published values. In other resource-limited settings with fewer opportunities to access care, POC-CD4 has a greater impact on clinical outcomes and remains cost-effective compared to LAB-CD4. Limitations of the analysis include the uncertainty around input parameters, which is examined in sensitivity analyses. The potential added benefits due to decreased transmission are excluded; their inclusion would likely further increase the value of POC-CD4 compared to LAB-CD4.
Conclusions
POC-CD4 at the time of HIV diagnosis could improve survival and be cost-effective compared to LAB-CD4 in Mozambique, if it improves linkage to care. POC-CD4 could have the greatest impact on mortality in settings where resources for HIV testing and linkage are most limited.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
AIDS has already killed about 36 million people, and a similar number of people (mostly living in low- and middle-income countries) are currently infected with HIV, the virus that causes AIDS. HIV destroys immune system cells (including CD4 cells, a type of lymphocyte), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, HIV-infected individuals usually died within ten years of infection. After effective antiretroviral therapy (ART) became available in 1996, HIV infection became a chronic condition for people living in high-income countries, but because ART was expensive, HIV/AIDS remained a fatal disease in low- and middle-income countries. In 2003, the international community began to work towards achieving universal ART coverage, and by the end of 2012, 61% of HIV-positive people (nearly 10 million individuals) living in low- and middle-income countries who were eligible for treatment—because their CD4 cell count had fallen below 350 cells/mm3 of blood or they had developed an AIDS-defining condition—were receiving treatment.
Why Was This Study Done?
In sub-Saharan Africa nearly 50% of HIV-infected people eligible for treatment remain untreated, in part because of poor linkage between HIV diagnosis and clinical care. After patients receive a diagnosis of HIV infection, their eligibility for ART initiation is determined by sending a blood sample away to a laboratory for a CD4 cell count (the current threshold for treatment is a CD4 count below 500/mm3, although low- and middle-income countries have yet to update their national guidelines from the threshold CD4 count below 350/mm3). Patients have to return to the clinic to receive their test results and to initiate ART if they are eligible for treatment. Unfortunately, many patients are “lost” during this multistep process in resource-limited settings. Point-of-care CD4 tests at HIV diagnosis—tests that are done on the spot and provide results the same day—might help to improve linkage to care in such settings. Here, the researchers use a mathematical model to assess the clinical outcomes and cost-effectiveness of point-of-care CD4 testing at the time of HIV diagnosis compared to laboratory-based testing in Mozambique, where about 1.5 million HIV-positive individuals live.
What Did the Researchers Do and Find?
The researchers used a validated model of HIV testing, linkage, and treatment called the Cost-Effectiveness of Preventing AIDS Complications–International (CEPAC-I) model to compare the clinical impact, costs, and cost-effectiveness of point-of-care and laboratory CD4 testing in newly diagnosed HIV-infected patients in Mozambique. They used published data to estimate realistic values for various model input parameters, including the probability of linkage to care following the use of each test, the accuracy of the tests, and the cost of each test. At a CD4 threshold for treatment of 250/mm3, the model predicted that 60.9% of newly diagnosed HIV-infected people would survive five years if their immunological status was assessed using the laboratory-based CD4 test, whereas 65% would survive five years if the point-of-care test was used. Predicted life expectancies were 9.6 and 10.3 years with the laboratory-based and point-of-care tests, respectively, and the per person lifetime costs (which mainly reflect treatment costs) associated with the two tests were US$2,440 and US$2,800, respectively. Finally, the incremental cost-effectiveness ratio—calculated as the incremental costs of one therapeutic intervention compared to another divided by the incremental benefits—was US$500 per year of life saved, when comparing use of the point-of-care test with a laboratory-based test.
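The ICER arithmetic described above can be reproduced directly from the reported figures; the short sketch below uses the lifetime costs, discounted life expectancies, and Mozambican GDP per capita quoted in this summary (the small difference from the published US$500/YLS reflects rounding of the inputs).

    # ICER arithmetic using the per-person lifetime costs and discounted
    # life expectancies reported in the summary above.
    cost_lab, cost_poc = 2440.0, 2800.0        # US$ per person, lifetime, discounted
    ly_lab, ly_poc = 9.6, 10.3                 # discounted life expectancy, years

    icer = (cost_poc - cost_lab) / (ly_poc - ly_lab)
    gdp_per_capita = 570.0                     # Mozambique, US$

    print(f"ICER: US${icer:,.0f} per year of life saved")
    print("Very cost-effective (< 1x GDP):", icer < gdp_per_capita)
    print("Cost-effective (< 3x GDP):", icer < 3 * gdp_per_capita)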
What Do These Findings Mean?
These findings suggest that, compared to laboratory-based CD4 testing, point-of-care testing at HIV diagnosis could improve survival for HIV-infected individuals in Mozambique. Because the per capita gross domestic product in Mozambique is US$570, these findings also indicate that point-of-care testing would be very cost-effective compared to laboratory-based testing (an incremental cost-effectiveness ratio less than one times the per capita gross domestic product is regarded as very cost-effective). As with all modeling studies, the accuracy of these findings depends on the assumptions built into the model and on the accuracy of the input parameters. However, the point-of-care strategy averted deaths and was estimated to be cost-effective compared to the laboratory-based test over a wide range of input parameter values reflecting Mozambique and several other resource-limited settings that the researchers modeled. Importantly, these “sensitivity analyses” suggest that point-of-care CD4 testing is likely to have the greatest impact on HIV-related deaths and be economically efficient in settings in sub-Saharan Africa with the most limited health care resources, provided point-of-care CD4 testing improves the linkage to care for HIV-infected people.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001725.
The World Health Organization provides information on all aspects of HIV/AIDS (in several languages); its “Consolidated Guidelines on the Use of Antiretroviral Drugs for Treating and Preventing HIV Infections: Recommendations for a Public Health Approach”, which highlights the potential of point-of-care tests to improve the linkage of newly diagnosed HIV-infected patients to care, is available
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS, and summaries of recent research findings on HIV care and treatment; it has a fact sheet on CD4 testing
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on starting, monitoring, and switching treatment and on HIV and AIDS in sub-Saharan Africa (in English and Spanish)
The “UNAIDS Report on the Global AIDS Epidemic 2013” provides up-to-date information about the AIDS epidemic and efforts to halt it
Personal stories about living with HIV/AIDS are available through Avert, Nam/aidsmap, and Healthtalkonline
doi:10.1371/journal.pmed.1001725
PMCID: PMC4165752  PMID: 25225800
16.  Application of Multivariate Probabilistic (Bayesian) Networks to Substance Use Disorder Risk Stratification and Cost Estimation 
Introduction: This paper explores the use of machine learning and Bayesian classification models to develop broadly applicable risk stratification models to guide disease management of health plan enrollees with substance use disorder (SUD). While the high costs and morbidities associated with SUD are understood by payers, who manage it through utilization review, acute interventions, coverage and cost limitations, and disease management, the literature shows mixed results for these modalities in improving patient outcomes and controlling cost. Our objective is to evaluate the potential of data mining methods to identify novel risk factors for chronic disease and stratification of enrollee utilization, which can be used to develop new methods for targeting disease management services to maximize benefits to both enrollees and payers.
Methods: For our evaluation, we used DecisionQ machine learning algorithms to build Bayesian network models of a representative sample of data licensed from Thomson-Reuters' MarketScan consisting of 185,322 enrollees with three full-year claim records. Data sets were prepared, and a stepwise learning process was used to train a series of Bayesian belief networks (BBNs). The BBNs were validated using a 10 percent holdout set.
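Because the DecisionQ software is proprietary, the sketch below (Python) only illustrates the general 10 percent holdout validation workflow with AUC scoring; a generic probabilistic classifier stands in for the Bayesian belief networks, and the claims-derived features are simulated.

    # Illustration of a 10 percent holdout validation workflow with AUC scoring.
    # A generic probabilistic classifier stands in for the proprietary BBNs;
    # the claims data are simulated.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    X = rng.normal(size=(10000, 12))                               # simulated claims-derived features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=10000)) > 1.2   # simulated SUD flag

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.10, random_state=0)                      # 10 percent holdout set

    clf = GaussianNB().fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"Holdout AUC: {auc:.3f}")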
Results: The networks were highly predictive, with the risk-stratification BBNs producing areas under the curve (AUC) for SUD positive of 0.948 (95 percent confidence interval [CI], 0.944–0.951) and 0.736 (95 percent CI, 0.721–0.752), respectively, and for SUD negative of 0.951 (95 percent CI, 0.947–0.954) and 0.738 (95 percent CI, 0.727–0.750), respectively. The cost estimation models produced AUCs ranging from 0.720 (95 percent CI, 0.708–0.731) to 0.961 (95 percent CI, 0.950–0.971).
Conclusion: We were able to successfully model a large, heterogeneous population of commercial enrollees, applying state-of-the-art machine learning technology to develop complex and accurate multivariate models that support near-real-time scoring of novel payer populations based on historic claims and diagnostic data. Initial validation results indicate that we can stratify enrollees with SUD diagnoses into different cost categories with a high degree of sensitivity and specificity, and the most challenging issue becomes one of policy. Due to the social stigma associated with the disease and ethical issues pertaining to access to care and individual versus societal benefit, a thoughtful dialogue needs to occur about the appropriate way to implement these technologies.
PMCID: PMC2804457  PMID: 20169014
substance use disorder; Bayesian belief network; chemical dependency; predictive modeling
17.  Updated Systematic Review and Meta-Analysis of the Performance of Risk Prediction Rules in Children and Young People with Febrile Neutropenia 
PLoS ONE  2012;7(5):e38300.
Introduction
Febrile neutropenia is a common and potentially life-threatening complication of treatment for childhood cancer, which has increasingly been subject to targeted treatment based on clinical risk stratification. Our previous meta-analysis demonstrated 16 rules had been described and 2 of them subject to validation in more than one study. We aimed to advance our knowledge of evidence on the discriminatory ability and predictive accuracy of such risk stratification clinical decision rules (CDR) for children and young people with cancer by updating our systematic review.
Methods
The review was conducted in accordance with Centre for Reviews and Dissemination methods, searching multiple electronic databases, using two independent reviewers, formal critical appraisal with QUADAS and meta-analysis with random effects models where appropriate. It was registered with PROSPERO: CRD42011001685.
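Where pooling across validation studies was appropriate, a random-effects model of the DerSimonian-Laird type is the usual choice; the sketch below (Python) shows that pooling step with hypothetical study estimates and variances, and is not drawn from the review's data.

    # Minimal DerSimonian-Laird random-effects pooling of study-level estimates
    # (e.g., log odds ratios or logit sensitivities). Values are hypothetical.
    import numpy as np

    y = np.array([0.8, 1.1, 0.6, 1.4])      # study effect estimates
    v = np.array([0.05, 0.08, 0.04, 0.10])  # within-study variances

    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)     # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                            # between-study variance

    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    print(f"Pooled estimate: {pooled:.2f} "
          f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")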
Results
We found 9 new publications describing a further 7 new CDR, and validations of 7 rules. Six CDR have now been subject to testing across more than two data sets. Most validations demonstrated the rule to be less efficient than when initially proposed; geographical differences appeared to be one explanation for this.
Conclusion
The use of clinical decision rules will require local validation before widespread use. Considerable uncertainty remains over the most effective rule to use in each population, and an ongoing individual-patient-data meta-analysis should develop and test a more reliable CDR to improve stratification and optimise therapy. Despite current challenges, we believe it will be possible to define an internationally effective CDR to harmonise the treatment of children with febrile neutropenia.
doi:10.1371/journal.pone.0038300
PMCID: PMC3365042  PMID: 22693615
18.  Validation of a clinical risk scoring system, based solely on clinical presentation, for the management of pregnancy of unknown location 
Fertility and sterility  2012;99(1):193-198.
Objective
Assess a scoring system to triage women with a pregnancy of unknown location.
Design
Validation of prediction rule.
Setting
Multicenter study.
Patients
Women with a pregnancy of unknown location.
Main Outcome Measures
Scores were assigned to factors identified at clinical presentation. A total score was calculated to assess the risk of ectopic pregnancy in women with a pregnancy of unknown location, and a 3-tiered clinical action plan was proposed, with recommendations for low-risk, intermediate-risk, and high-risk groups. The recommendation based on the model score was compared to the clinical diagnosis.
Interventions
None
Results
The cohort of 1,400 women (284 ectopic pregnancies (EP), 759 miscarriages, and 357 intrauterine pregnancies (IUP)) was more diverse than the original cohort used to develop the decision rule. A total of 29.4% of IUPs were identified for less frequent follow-up, and 18.4% of nonviable gestations were identified for more frequent follow-up (to rule out an ectopic pregnancy), compared with intermediate risk (i.e., monitoring in the current standard fashion). For the decision to monitor less frequently, specificity was 90.8% (89.0–92.6) with a negative predictive value of 79.0% (76.7–81.3). For the decision to follow up more intensively, specificity was 95.0% (92.7–97.2). The test characteristics of the scoring system were replicated in this more diverse validation cohort.
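The reported test characteristics follow from a standard 2x2 classification table; the sketch below (Python) shows the specificity and negative-predictive-value calculation with binomial confidence intervals, using hypothetical counts rather than the study's data.

    # Specificity and NPV with binomial CIs from a 2x2 table.
    # Counts are hypothetical placeholders, not the study's data.
    from statsmodels.stats.proportion import proportion_confint

    tp, fn = 260, 24      # ectopic pregnancies flagged / missed by the rule (hypothetical)
    tn, fp = 1013, 103    # non-ectopic correctly / incorrectly flagged (hypothetical)

    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)

    spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
    npv_ci = proportion_confint(tn, tn + fn, alpha=0.05, method="wilson")
    print(f"Specificity {specificity:.1%} (95% CI {spec_ci[0]:.1%}-{spec_ci[1]:.1%})")
    print(f"NPV {npv:.1%} (95% CI {npv_ci[0]:.1%}-{npv_ci[1]:.1%})")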
Conclusion
A scoring system based on symptoms at presentation has value to stratify risk and influence the intensity of outpatient surveillance for women with pregnancy of unknown location but does not serve as a diagnostic tool.
doi:10.1016/j.fertnstert.2012.09.012
PMCID: PMC3534951  PMID: 23040528
ectopic pregnancy; pregnancy of unknown location; risk factors; scoring system
19.  Tailoring adverse drug event surveillance to the paediatric inpatient 
Introduction
Although paediatric patients have an increased risk for adverse drug events, few detection methodologies target this population. To utilise computerised adverse event surveillance, specialised trigger rules are required to accommodate the unique needs of children. The aim was to develop new, tailored rules sustainable for review and robust enough to support aggregate event rate monitoring.
Methods
The authors utilised a voluntary staff incident-reporting system, lab values and physician insight to design trigger rules. During Phase 1, problem areas were identified by reviewing 5 years of paediatric voluntary incident reports. Based on these findings, historical lab electrolyte values were analysed to devise critical value thresholds. This evidence informed Phase 2 rule development. For 3 months, surveillance alerts were evaluated for occurrence of adverse drug events.
Results
In Phase 1, replacement preparations and total parenteral nutrition accounted for the largest share (36.6%) of adverse drug events in 353 paediatric patients. During Phase 2, nine new trigger rules produced 225 alerts in 103 paediatric inpatients. Of these, 14 adverse drug events were found by the paediatric hypoglycaemia rule, but all other electrolyte trigger rules were ineffective. Compared with the adult-focused hypoglycaemia rule, the new, tailored version increased the paediatric event detection rate from 0.43 to 1.51 events per 1000 patient days.
Conclusions
Relying solely on absolute lab values to detect electrolyte-related adverse drug events did not meet our goals. Use of compound rule logic improved detection of hypoglycaemia. More success may be found in designing real-time rules that leverage lab trends and additional clinical information.
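A possible form of the compound rule logic referred to above is sketched below (Python): a low glucose value triggers an alert only when corroborated by additional clinical information (here, a hypothetical rescue-dextrose flag), and alerts are expressed per 1000 patient days. The thresholds, column names, and denominator are illustrative assumptions, not the authors' rule.

    # Sketch of compound trigger-rule logic (hypothetical thresholds/columns):
    # flag hypoglycaemia only when a low glucose co-occurs with a rescue
    # dextrose order, rather than on the absolute lab value alone.
    import pandas as pd

    labs = pd.DataFrame({
        "patient_id": [1, 1, 2, 3],
        "glucose_mg_dl": [38, 62, 45, 90],
        "rescue_dextrose_within_2h": [True, False, False, False],
    })

    alerts = labs[(labs["glucose_mg_dl"] < 40) & labs["rescue_dextrose_within_2h"]]

    patient_days = 9270                     # hypothetical denominator
    rate_per_1000 = 1000 * len(alerts) / patient_days
    print(f"{len(alerts)} alerts; {rate_per_1000:.2f} events per 1000 patient days")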
doi:10.1136/qshc.2009.032680
PMCID: PMC2975971  PMID: 20511599
Paediatrics; adverse drug event; computerised surveillance; trigger tool; information technology; medication error
20.  Cost-Effectiveness of Preventing Loss to Follow-up in HIV Treatment Programs: A Côte d'Ivoire Appraisal 
PLoS Medicine  2009;6(10):e1000173.
Based on data from West Africa, Elena Losina and colleagues predict that interventions to reduce dropout rates from HIV treatment programs (such as eliminating copayments) will be cost-effective.
Background
Data from HIV treatment programs in resource-limited settings show extensive rates of loss to follow-up (LTFU) ranging from 5% to 40% within 6 mo of antiretroviral therapy (ART) initiation. Our objective was to project the clinical impact and cost-effectiveness of interventions to prevent LTFU from HIV care in West Africa.
Methods and Findings
We used the Cost-Effectiveness of Preventing AIDS Complications (CEPAC) International model to project the clinical benefits and cost-effectiveness of LTFU-prevention programs from a payer perspective. These programs include components such as eliminating ART co-payments, eliminating charges to patients for opportunistic infection-related drugs, improving personnel training, and providing meals and reimbursing for transportation for participants. The efficacies and costs of these interventions were extensively varied in sensitivity analyses. We used World Health Organization criteria of <3× gross domestic product per capita (3× GDP per capita = US$2,823 for Côte d'Ivoire) as a plausible threshold for “cost-effectiveness.” The main results are based on a reported 18% 1-y LTFU rate. With full retention in care, projected per-person discounted life expectancy starting from age 37 y was 144.7 mo (12.1 y). Survival losses from LTFU within 1 y of ART initiation ranged from 73.9 to 80.7 mo. The intervention costing US$22/person/year (e.g., eliminating ART co-payment) would be cost-effective with an efficacy of at least 12%. An intervention costing US$77/person/year (inclusive of all the components described above) would be cost-effective with an efficacy of at least 41%.
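The efficacy thresholds above come from the CEPAC microsimulation and cannot be reproduced from the abstract alone; the crude back-of-envelope below (Python) only shows how intervention cost, efficacy, the LTFU rate, and the survival loss per LTFU combine into an approximate cost per year of life saved. The payment horizon and the omission of discounting are simplifying assumptions, so the result is indicative only.

    # Crude back-of-envelope (not the CEPAC model): approximate cost per year
    # of life saved for an LTFU-prevention intervention. The payment horizon
    # and lack of discounting are simplifying assumptions.
    annual_cost = 22.0          # US$ per person per year (e.g., waiving co-payments)
    efficacy = 0.12             # fraction of LTFU prevented
    ltfu_rate = 0.18            # 1-year cumulative incidence of LTFU
    life_years_lost = (144.7 - 77.0) / 12    # approx. survival loss per LTFU (years)
    payment_years = 10.0        # assumed horizon over which the intervention is paid

    cost_per_person = annual_cost * payment_years
    yls_per_person = efficacy * ltfu_rate * life_years_lost
    print(f"~US${cost_per_person / yls_per_person:,.0f} per year of life saved "
          f"(WHO threshold for Cote d'Ivoire: US$2,823)")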
Conclusions
Interventions that prevent LTFU in resource-limited settings would substantially improve survival and would be cost-effective by international criteria with efficacy of at least 12%–41%, depending on the cost of intervention, based on a reported 18% cumulative incidence of LTFU at 1 y after ART initiation. The commitment to start ART and treat HIV in these settings should include interventions to prevent LTFU.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Acquired immunodeficiency syndrome (AIDS) has killed more than 25 million people since the first reported case in 1981. Currently, about 33 million people are infected with the human immunodeficiency virus (HIV), which causes AIDS. Two-thirds of people infected with HIV live in sub-Saharan Africa. HIV infects and destroys immune system cells, thereby weakening the immune system and rendering infected individuals susceptible to infection. There is no cure for HIV/AIDS. Combination antiretroviral therapy (ART), a mixture of antiretroviral drugs that suppress the replication of the virus in the body, is used to treat and prevent HIV infection. ART is expensive but major international efforts by governments, international organizations, and funding bodies have increased ART availability. According to World Health Organization (WHO) estimates, at least 9.7 million people in low- and middle-income countries need ART and as of 2007, 3 million of those people had reliable access to the drugs.
Why Was This Study Done?
Although ART is an effective treatment for HIV, a large number of individuals who initiate ART do not receive long-term follow-up care. These patients are generally sicker and have a worse long-term outcome than those who receive follow-up care. Loss to follow up (LTFU) is a significant problem that can undermine the benefits of expanding ART availability. Strategies to improve follow up concentrate on bringing lost patients back into the health care system, but such patients often die before they can be contacted. Prevention of LTFU might be a better strategy to improve HIV care after ART initiation, but there is little information available on which specific interventions might best accomplish this goal.
What Did the Researchers Do and Find?
Given the lack of reported data on the actual costs and effectiveness of LTFU prevention, the researchers used a model to estimate the clinical impact and cost-effectiveness of several possible strategies to prevent LTFU in HIV-infected persons receiving ART in Côte d'Ivoire, West Africa. The researchers used the previously developed Cost-Effectiveness of Preventing AIDS Complications (CEPAC) computer simulation model and combined it with data from a program of ART delivery in Abidjan, Côte d'Ivoire. They then projected the clinical benefits and the cost required to attain a given level of benefit (cost-effectiveness ratio) of different LTFU-prevention strategies from the perspective of the payer (the organization that pays all the medical costs to provide care). Several interventions were considered, including reducing costs to patients (eliminating patient co-payments and paying for transportation) and increasing services to patients at their visits (improving staff training in HIV care, and providing meals at clinic times). LTFU was predicted to cause a 54.3%–58.3% reduction in the estimated life expectancy beyond age 37; patients continuing HIV care were predicted to live a further 144.7 months while those lost to follow up by 1 year after ART initiation were predicted to live only for a further 73.9–80.7 months. LTFU-prevention strategies in Côte d'Ivoire were deemed to be cost-effective if they cost less than $2,823 (which is 3× gross domestic product per capita) per year of life saved. The efficacy and cost of the different LTFU-prevention strategies varied in the analyses; stopping ART co-payment alone would be cost-effective at a cost of $22/person/year if it reduced LTFU rates by 12%, while including all the LTFU-prevention strategies described would be cost-effective at $77/person/year if they reduced LTFU rates by 41%.
What Do These Findings Mean?
The findings suggest that moderately effective strategies for preventing LTFU in resource-limited settings would improve survival, provide good value for money, and should be used to improve HIV treatment programs. Although modeling is valuable to explore the costs and effectiveness of LTFU-prevention strategies it cannot replace the need for more reported data to shed light on problems leading to LTFU and the prevention strategies required to combat it. Also, Côte d'Ivoire might not be representative of all West African countries or resource-limited settings. A similar analysis using data from other ART programs in different countries would be useful to provide better understanding of the impact of LTFU in HIV treatment programs. Finally, the research highlights the cost of second-line ART (a new antiretroviral drug combination for patients in whom first-line treatment fails) as a crucial issue. It is estimated that 5% of all people receiving ART in low- and middle-income countries receive second-line ART and these numbers are expected to increase. Second-line ART had major effects on cost-effectiveness, and a reduction in the cost of this treatment is critical in order to guarantee continued access to HIV treatment.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000173.
This study is further discussed in a PLoS Medicine Perspective by Gregory Bisson and Jeffrey Stringer
WHO provides information on disease prevention, treatment, and HIV/AIDS programs and projects
The UN Millennium Development Goals project site contains information on worldwide efforts to halt the spread of HIV/AIDS
aidsmap, a nonprofit, nongovernmental organization, provides information on HIV and supporting those living with HIV
doi:10.1371/journal.pmed.1000173
PMCID: PMC2762030  PMID: 19859538
21.  Chronious: the last advances in telehealth monitoring systems 
The effectiveness of treatment depends on the patient's ability to manage his or her chronic health condition in everyday life, in accordance with medical prescriptions, outside hospital settings. For this reason, the European Commission promotes research in telehealth applications such as Chronious, "An Open, Ubiquitous and Adaptive Chronic Disease Management Platform for COPD and Renal Insufficiency". The aim is to improve healthcare services by offering an online health management solution that addresses patient-professional interaction, personal data security, and the reduction of hospitalization and related costs. Chronious implements a modular hardware-software system that integrates existing healthcare legacy systems, biomedical sensors, user interfaces, and multi-parametric data processing with a decision support system for patients and health professionals. Very few of the chronic disease management tools currently available commercially include patient-professional interfaces for communication and education. As added value, Chronious offers lifestyle and mental support tools for patients and an ontological, cross-lingual information retrieval system that lets clinicians query medical knowledge faster and more easily. The patient at home is equipped with a T-shirt able to record cardiac, respiratory, audio, and activity signals; external devices (weight scale, glucometer, blood pressure monitoring device, spirometer, air quality sensor); and a touch-screen computer that sends reminders about drug intake and collects information on dietary habits and mental status. All information is automatically transmitted via IP/GPRS to the central system, which, using a web interface and rule-based algorithms, allows clinicians to monitor patient status and give suggestions for acting in case of a worsening trend or risk situation. As a consequence, procedures that are quite complicated for the patient, such as frequent or continuous monitoring, visits to hospitals, and self-care, become straightforward and simpler. In addition, the information available to the clinician is more direct, accurate, and complete, improving the prognosis of chronic diseases and the selection of the most appropriate treatment plan. For validation purposes, Chronious focuses on chronic obstructive pulmonary disease and chronic kidney disease, as these are widespread and highly expensive in terms of social and economic costs. The validation protocol also considers the most frequent related comorbidities, such as diabetes, involving the patient category expected to gain the greatest benefit. This enables an open architecture for further applications. Project validation is divided into two progressive phases: the first, in a hospital setting, aimed to verify in 50 patients whether the delivered prototypes met the user requirements and the ergonomic and functional specifications. The second phase is observational: the improved system is currently being used at home by 60 selected patients, who are instructed to use it independently for an expected duration of 4 months each. In parallel, the patients are monitored with standard periodic outpatient checks. At the end, customer satisfaction and the predictive ability of the system regarding the evolution of the disease will be evaluated. First feedback is encouraging: Chronious monitoring provides a friendly approach to new technologies and reassures patients by reducing intervention time in critical situations.
PMCID: PMC3571130
chronic disease; patient-professional interfaces; lifestyle
22.  Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered 
Background
Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios.
Methods
Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components.
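A simplified two-stage version of this allocation problem can be solved by brute force, as in the sketch below (Python): choose the numbers of subjects and of measurements per subject that minimize the variance of the exposure mean, subject to a power-function budget constraint. The unit costs, exponents, and variance components are hypothetical, and the full three-stage model of the paper is not reproduced.

    # Simplified two-stage version of the allocation problem, solved by grid
    # search: choose numbers of subjects (n) and measurements per subject (k)
    # to minimize Var(mean) under a non-linear (power-function) budget constraint.
    # Unit costs, exponents, and variance components are hypothetical.
    import itertools

    sigma2_between, sigma2_within = 4.0, 9.0     # variance components
    c_subject, c_measurement = 50.0, 10.0        # unit costs
    a_subject, a_measurement = 1.2, 0.9          # cost-function exponents
    budget = 5000.0

    def total_cost(n, k):
        return c_subject * n ** a_subject + c_measurement * (n * k) ** a_measurement

    def var_of_mean(n, k):
        return sigma2_between / n + sigma2_within / (n * k)

    feasible = ((n, k) for n, k in itertools.product(range(1, 200), range(1, 20))
                if total_cost(n, k) <= budget)
    n_opt, k_opt = min(feasible, key=lambda nk: var_of_mean(*nk))
    print(f"n = {n_opt} subjects, k = {k_opt} measurements each, "
          f"Var(mean) = {var_of_mean(n_opt, k_opt):.4f}, cost = {total_cost(n_opt, k_opt):.0f}")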
Results
Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods.
For many of the 225 scenarios, the optimal strategy consisted of measuring on only one occasion for each of as many subjects as the budget allowed. Significant deviations from this principle occurred when the costs of recruiting subjects were large compared with the costs of setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set.
Conclusions
The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
doi:10.1186/1471-2288-11-76
PMCID: PMC3125387  PMID: 21600023
23.  Cost-Effectiveness of Early Versus Standard Antiretroviral Therapy in HIV-Infected Adults in Haiti 
PLoS Medicine  2011;8(9):e1001095.
This cost-effectiveness study comparing early versus standard antiretroviral treatment (ART) for HIV, based on randomized clinical trial data from Haiti, reveals that the new WHO guidelines for early ART initiation can be cost-effective in resource-poor settings.
Background
In a randomized clinical trial of early versus standard antiretroviral therapy (ART) in HIV-infected adults with a CD4 cell count between 200 and 350 cells/mm3 in Haiti, early ART decreased mortality by 75%. We assessed the cost-effectiveness of early versus standard ART in this trial.
Methods and Findings
Trial data included use of ART and other medications, laboratory tests, outpatient visits, radiographic studies, procedures, and hospital services. Medication, laboratory, radiograph, labor, and overhead costs were from the study clinic, and hospital and procedure costs were from local providers. We evaluated cost per year of life saved (YLS), including patient and caregiver costs, with a median of 21 months and maximum of 36 months of follow-up, and with costs and life expectancy discounted at 3% per annum. Between 2005 and 2008, 816 participants were enrolled and followed for a median of 21 months. Mean total costs per patient during the trial were US$1,381 for early ART and US$1,033 for standard ART. After excluding research-related laboratory tests without clinical benefit, costs were US$1,158 (early ART) and US$979 (standard ART). Early ART patients had higher mean costs for ART (US$398 versus US$81) but lower costs for non-ART medications, CD4 cell counts, clinically indicated tests, and radiographs (US$275 versus US$384). The cost-effectiveness ratio after a maximum of 3 years for early versus standard ART was US$3,975/YLS (95% CI US$2,129/YLS–US$9,979/YLS) including research-related tests, and US$2,050/YLS excluding research-related tests (95% CI US$722/YLS–US$5,537/YLS).
Conclusions
Initiating ART in HIV-infected adults with a CD4 cell count between 200 and 350 cells/mm3 in Haiti, consistent with World Health Organization advice, was cost-effective (US$/YLS <3 times gross domestic product per capita) after a maximum of 3 years, after excluding research-related laboratory tests.
Trial registration
ClinicalTrials.gov NCT00120510
Please see later in the article for the Editors' Summary
Editors' Summary
Background
AIDS has killed more than 25 million people since 1981, and about 33 million people (most of them living in low- and middle-income countries) are now infected with HIV, the virus that causes AIDS. HIV destroys immune system cells (including CD4 cells, a type of lymphocyte), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-infected people died within 10 years of infection. Then, in 1996, highly active antiretroviral therapy (ART) became available and, for people living in affluent countries HIV/AIDS became a chronic condition. However, ART was extremely expensive and so a diagnosis of HIV infection remained a death sentence for people living in developing countries. In 2003, this situation was declared a global health emergency, and governments, international agencies, and funding bodies began to implement plans to increase ART coverage in developing countries. In 2009, more than a third of people in low- and middle-income countries who needed ART were receiving it, on the basis of guidelines that were in place at that time.
Why Was This Study Done?
Until recently, the World Health Organization (WHO) recommended that all HIV-positive patients with CD4 cell count below 200/mm3 blood or an AIDS-defining illness such as Kaposi's sarcoma should be given ART. Then, in 2009, the CIPRA HT-001 randomized clinical trial, which was undertaken in Haiti, reported that patients who started ART when their CD4 cell count was between 200 and 350 cells/mm3 (“early ART”) had a higher survival rate than patients who started ART according to the WHO guidelines (“standard ART”). As a result, WHO now recommends that ART is started in HIV-infected people when their CD4 cell count falls below 350 cells/mm3. But is this new recommendation cost-effective? Do its benefits outweigh its costs? Policy-makers need to know the cost-effectiveness of interventions so that they can allocate their limited resources wisely. A medical intervention is generally considered cost-effective if it costs less than three times a country's per capita gross domestic product (GDP) per year of life saved (YLS). In this study, the researchers assess the cost-effectiveness of early versus standard ART in the CIPRA HT-001 trial.
What Did the Researchers Do and Find?
The researchers used trial data on the use and costs of ART, other medications, laboratory tests, outpatient visits, radiography, procedures, and hospital services to evaluate the costs associated with early ART and standard ART among the 816 CIPRA HT-001 trial participants. The average total costs per patient after a maximum of 3 years treatment were US$1,381 for early ART and US$1,033 for standard ART. These figures dropped to US$1,158 and US$979, respectively, when the costs of research-related tests without clinical benefit were excluded. Patients who received early ART had higher average costs for ART but lower costs for other aspects of their treatment than patients who received standard ART. The incremental cost-effectiveness ratio after 3 years for early ART compared to standard ART was US$3,975/YLS if the costs of research-related tests were included in the calculation. That is, the cost of saving one year of life by starting ART early instead of when the CD4 cell count dropped below 200/mm3 was nearly US$4,000. Importantly, exclusion of the costs of research-related tests reduced the incremental cost-effectiveness ratio of early ART compared to standard ART to US$2,050/YLS.
What Do These Findings Mean?
Because the Haitian GDP per capita is US$785, these findings suggest that, in Haiti, early ART is a cost-effective intervention over a 3-year period. That is, the incremental cost per year of life saved of early ART compared to standard ART after exclusion of research-related tests is less than three times Haiti's per capita GDP. The researchers note that their incremental cost-effectiveness ratios are likely to be conservative because they did not consider the clinical benefits of early ART that continue beyond 3 years—early ART is associated with lower longer-term mortality than standard ART—or the effect of early ART on disability and quality of life. Cost-effectiveness studies now need to be undertaken at different sites to determine whether these findings are generalizable but, for now, this cost-effectiveness study suggests that the new WHO guidelines for ART initiation can be cost-effective in resource-poor settings, information that should help policy-makers in developing countries allocate their limited resources.
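The cost-effectiveness judgment reduces to a simple threshold comparison using the figures reported in this summary:

    # Threshold check using the figures reported above.
    icer = 2050           # US$ per year of life saved, excluding research-related tests
    gdp_per_capita = 785  # Haiti, US$
    print("Cost-effective (< 3x GDP per capita):", icer < 3 * gdp_per_capita)  # 2050 < 2355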
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001095.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
HIV InSite has comprehensive information on all aspects of HIV/AIDS
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on HIV/AIDS in the Caribbean and on HIV/AIDS treatment and care (in English and Spanish)
WHO provides information about universal access to AIDS treatment (in English, French and Spanish); its 2010 ART guidelines can be downloaded
More information about the CIPRA HT-001 clinical trial is available
Patient stories about living with HIV/AIDS are available through Avert and through the charity website Healthtalkonline
More information about GHESKIO is available from Weill Cornell Global Health
doi:10.1371/journal.pmed.1001095
PMCID: PMC3176754  PMID: 21949643
24.  A World Malaria Map: Plasmodium falciparum Endemicity in 2007 
PLoS Medicine  2009;6(3):e1000048.
Background
Efficient allocation of resources to intervene against malaria requires a detailed understanding of the contemporary spatial distribution of malaria risk. It is exactly 40 y since the last global map of malaria endemicity was published. This paper describes the generation of a new world map of Plasmodium falciparum malaria endemicity for the year 2007.
Methods and Findings
A total of 8,938 P. falciparum parasite rate (PfPR) surveys were identified using a variety of exhaustive search strategies. Of these, 7,953 passed strict data fidelity tests for inclusion into a global database of PfPR data, age-standardized to 2–10 y for endemicity mapping. A model-based geostatistical procedure was used to create a continuous surface of malaria endemicity within previously defined stable spatial limits of P. falciparum transmission. These procedures were implemented within a Bayesian statistical framework so that the uncertainty of these predictions could be evaluated robustly. The uncertainty was expressed as the probability of predicting correctly one of three endemicity classes, previously stratified to be an informative guide for malaria control. Population-at-risk estimates, adjusted for the transmission-modifying effects of urbanization in Africa, were then derived with reference to human population surfaces in 2007. Of the 1.38 billion people at risk of stable P. falciparum malaria, 0.69 billion were found in Central and South East Asia (CSE Asia), 0.66 billion in Africa, Yemen, and Saudi Arabia (Africa+), and 0.04 billion in the Americas. All those exposed to stable risk in the Americas were in the lowest endemicity class (PfPR2−10 ≤ 5%). The vast majority (88%) of those living under stable risk in CSE Asia were also in this low endemicity class; a small remainder (11%) were in the intermediate endemicity class (PfPR2−10 > 5% to < 40%); and the remaining fraction (1%) was in high endemicity (PfPR2−10 ≥ 40%) areas. High endemicity was widespread in the Africa+ region, where 0.35 billion people are at this level of risk. Most of the rest live at intermediate risk (0.20 billion), with a smaller number (0.11 billion) at low stable risk.
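The class boundaries quoted above translate directly into a simple post-processing step. The sketch below is illustrative only: it is not the Malaria Atlas Project's code, the raster variables are hypothetical, and it assumes a predicted PfPR2−10 surface and a population surface supplied as co-registered grids restricted to the stable transmission limits.

```python
# Minimal sketch (not the Malaria Atlas Project's code): classify predicted
# PfPR2-10 values into the three endemicity classes quoted above and tally
# population at risk within each class. Grids are assumed co-registered, with
# NaN marking cells outside the stable transmission limits.
import numpy as np

def endemicity_class(pfpr2_10):
    """Return the class for a PfPR2-10 value expressed in percent."""
    if pfpr2_10 <= 5.0:
        return "low"           # PfPR2-10 <= 5%
    if pfpr2_10 >= 40.0:
        return "high"          # PfPR2-10 >= 40%
    return "intermediate"      # 5% < PfPR2-10 < 40%

def population_at_risk_by_class(pfpr_surface, population_surface):
    """Sum population per endemicity class over the stable-risk cells."""
    totals = {"low": 0.0, "intermediate": 0.0, "high": 0.0}
    stable = ~np.isnan(pfpr_surface)
    for pfpr, pop in zip(pfpr_surface[stable], population_surface[stable]):
        totals[endemicity_class(pfpr)] += pop
    return totals
```

A per-cell point classification like this ignores the uncertainty the authors report; their Bayesian framework instead yields, for each cell, the probability that each class is the correct one.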
Conclusions
High levels of P. falciparum malaria endemicity are common in Africa. Uniformly low endemic levels are found in the Americas. Low endemicity is also widespread in CSE Asia, but pockets of intermediate and very rarely high transmission remain. There are therefore significant opportunities for malaria control in Africa and for malaria elimination elsewhere. This 2007 global P. falciparum malaria endemicity map is the first of a series with which it will be possible to monitor and evaluate the progress of this intervention process.
Incorporating data from nearly 8,000 surveys of Plasmodium falciparum parasite rates, Simon Hay and colleagues employ a model-based geostatistical procedure to create a map of global malaria endemicity.
Editors' Summary
Background.
Malaria is one of the most common infectious diseases in the world and one of the greatest global public health problems. The Plasmodium falciparum parasite causes approximately 500 million cases each year and over one million deaths in sub-Saharan Africa. More than 40% of the world's population is at risk of malaria. The parasite is transmitted to people through the bites of infected mosquitoes. These insects inject a life stage of the parasite called sporozoites, which invade human liver cells where they reproduce briefly. The liver cells then release merozoites (another life stage of the parasite), which invade red blood cells. Here, they multiply again before bursting out and infecting more red blood cells, causing fever and damaging vital organs. The infected red blood cells also release gametocytes, which infect mosquitoes when they take a blood meal. In the mosquito, the gametocytes multiply and develop into sporozoites, thus completing the parasite's life cycle. Malaria can be prevented by controlling the mosquitoes that spread the parasite and by avoiding mosquito bites by sleeping under insecticide-treated bed nets. Effective treatment with antimalarial drugs also helps to decrease malaria transmission.
Why Was This Study Done?
In 1998, the World Health Organization and several other international agencies launched Roll Back Malaria, a global partnership that aims to reduce the human and socioeconomic costs of malaria. Targets have been continually raised since this time and have culminated in the Roll Back Malaria Global Malaria Action Plan of 2008, where universal coverage of locally appropriate interventions is called for by 2010 and the long-term goal of malaria eradication is again tabled for the international community. For malaria control and elimination initiatives to be effective, financial resources must be concentrated in regions where they will have the most impact, so it is essential to have up-to-date and accurate maps to guide effort and expenditure. In 2008, researchers of the Malaria Atlas Project constructed a map that stratified the world into three levels of malaria risk: no risk, unstable transmission risk (occasional focal outbreaks), and stable transmission risk (endemic areas where the disease is always present). Now, researchers extend this work by describing a new evidence-based method for generating continuous maps of P. falciparum endemicity within the area of stable malaria risk over the entire world's surface. They then use this method to produce a P. falciparum endemicity map for 2007. Endemicity is important as it is a guide to the level of morbidity and mortality a population will suffer, as well as the intensity of the interventions that will be required to bring the disease under control or, additionally, to interrupt transmission.
What Did the Researchers Do and Find?
The researchers identified nearly 8,000 surveys of P. falciparum parasite rates (PfPR; the percentage of a population with parasites detectable in their blood) completed since 1985 that met predefined criteria for inclusion into a global database of PfPR data. They then used “model-based geostatistics” to build a world map of P. falciparum endemicity for 2007 that took into account where and, importantly, when all these surveys were done. Predictions were comprehensive (for every area of stable transmission globally) and continuous (predicted as an endemicity value between 0% and 100%). The population at risk at each of three levels of malaria endemicity was identified to help summarize these findings: low endemicity, where PfPR is below 5% and where it should be technically feasible to eliminate malaria; intermediate endemicity, where PfPR is between 5% and 40% and it should be theoretically possible to interrupt transmission with the universal coverage of bed nets; and high endemicity, where PfPR is above 40% and suites of locally appropriate interventions will be needed to bring malaria under control. The global level of malaria endemicity is much reduced when compared with historical maps. Nevertheless, the resulting map indicates that in 2007 almost 60% of the 2.4 billion people at malaria risk were living in areas with a stable risk of P. falciparum transmission: 0.69 billion people in Central and South East Asia (CSE Asia), 0.66 billion in Africa, Yemen, and Saudi Arabia (Africa+), and 0.04 billion in the Americas. The people of the Americas were all in the low endemicity class. Although most people exposed to stable risk in CSE Asia were also in the low endemicity class (88%), 11% were in the intermediate class, and 1% were in the high endemicity class. By contrast, high endemicity was most common and widespread in the Africa+ region (53%), but with significant numbers in the intermediate (30%) and low (17%) endemicity classes.
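As a quick consistency check, the Africa+ percentages quoted here reproduce the absolute figures given in the Methods and Findings above, and the overall stable-risk fraction matches the "almost 60%" statement:

```latex
\[
0.53 \times 0.66 \approx 0.35,\qquad
0.30 \times 0.66 \approx 0.20,\qquad
0.17 \times 0.66 \approx 0.11 \text{ billion people}
\]
\[
\frac{1.38}{2.4} \approx 58\%
\]
```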
What Do These Findings Mean?
The accuracy of this new world map of P. falciparum endemicity depends on the assumptions made in its construction and critically on the accuracy of the data fed into it, but because of the statistical methods used to construct this map, it is possible to quantify the uncertainty in the results for all users. Thus, this map (which, together with the data used in its construction, will be freely available) represents an important new resource that clearly indicates areas where malaria control can be improved (for example, Africa) and other areas where malaria elimination may be technically possible. In addition, planned annual updates of the global P. falciparum endemicity map and the PfPR database by the Malaria Atlas Project will help public-health experts to monitor the progress of the malaria control community towards international control and elimination targets.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000048.
A PLoS Medicine Health in Action article (Hay SI, Snow RW (2006) The Malaria Atlas Project: Developing Global Maps of Malaria Risk. PLoS Med 3(12): e473) and a Research Article (Guerra CA, Gikandi PW, Tatem AJ, Noor AM, Smith DL, et al. (2008) The Limits and Intensity of Plasmodium falciparum Transmission: Implications for Malaria Control and Elimination Worldwide. PLoS Med 5(2): e38) also provide further details about the global mapping of malaria risk, and a further Research Article (Snow RW, Guerra CA, Mutheu JJ, Hay SI (2008) International Funding for Malaria Control in Relation to Populations at Risk of Stable Plasmodium falciparum Transmission. PLoS Med 5(7): e142) discusses the financing of malaria control in relation to this risk
Additional national and regional level maps and more information on the global mapping of malaria are available at the Malaria Atlas Project
The MedlinePlus encyclopedia contains a page on malaria (in English and Spanish)
Information is available from the World Health Organization on malaria (in several languages)
The US Centers for Disease Control and Prevention provide information on malaria (in English and Spanish)
Information is available from the Roll Back Malaria Partnership on its approach to the global control of malaria, and on malaria control efforts in specific parts of the world
doi:10.1371/journal.pmed.1000048
PMCID: PMC2659708  PMID: 19323591
25.  Optimal Sampling Strategies for Detecting Zoonotic Disease Epidemics 
PLoS Computational Biology  2014;10(6):e1003668.
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population and sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of the transition rates between epidemiological compartments, of which population was initially infected, and of the cost per sample for serological tests.
Author Summary
Outbreaks of zoonoses can have large costs to society through public health and agricultural impacts. Because many zoonoses circulate in multiple animal populations simultaneously, detection of zoonotic outbreaks can be especially difficult. We evaluated how to design sampling strategies for the early detection of outbreaks of vector-borne diseases. We built a framework to integrate epidemiological dynamical models with a sampling process that accounts for budgetary constraints, such as those faced by many management agencies. We illustrate our approach using West Nile virus, a globally spreading zoonotic arbovirus that has significantly affected North American bird populations. Our results suggest that simple formulas can often make robust predictions about the proper sampling procedure, though we also illustrate how computational methods can be used to extend our framework to more realistic modeling scenarios when these simple predictions break down.
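To make the flavour of this framework concrete, here is a hedged sketch, not the authors' method or code: all rates, population sizes, sample costs, and the budget are invented, and a perfectly sensitive test is assumed. It uses a linearized host-vector model and compares spending a fixed budget entirely on host samples versus entirely on vector samples at a given time after introduction.

```python
# Hedged sketch (not the authors' code): in a linearized host-vector model,
# compare whether a fixed sampling budget detects an outbreak with higher
# probability when spent entirely on host samples or entirely on vector
# samples at a given time since introduction. All numbers are illustrative.
import numpy as np
from scipy.linalg import expm

# Linearized infection dynamics: d[I_H, I_V]/dt = A @ [I_H, I_V]
A = np.array([[-0.10,  0.30],    # hosts: removal vs infection from vectors
              [ 0.20, -0.25]])   # vectors: infection from hosts vs mortality

def prevalences(t, x0, n_hosts, n_vectors):
    """Host and vector prevalence at time t from initial infections x0."""
    infected_hosts, infected_vectors = expm(A * t) @ x0
    return infected_hosts / n_hosts, infected_vectors / n_vectors

def detection_probability(prevalence, n_samples):
    """P(at least one positive) among n independent, perfectly sensitive samples."""
    return 1.0 - (1.0 - prevalence) ** n_samples

def best_single_population(t, x0, budget, cost_host, cost_vector,
                           n_hosts=10_000, n_vectors=100_000):
    """Spend the whole budget on whichever population maximises detection at t."""
    p_host, p_vector = prevalences(t, x0, n_hosts, n_vectors)
    d_host = detection_probability(p_host, int(budget // cost_host))
    d_vector = detection_probability(p_vector, int(budget // cost_vector))
    return ("hosts", d_host) if d_host >= d_vector else ("vectors", d_vector)

# Introduction via a single infected vector; with these illustrative numbers
# the preferred population switches from vectors to hosts as the outbreak grows.
x0 = np.array([0.0, 1.0])
for t in (1, 5, 20):
    print(t, best_single_population(t, x0, budget=500, cost_host=10, cost_vector=2))
```

With these invented parameters the budget initially buys more detection power in the (cheaply sampled) vector population and later in the host population, which echoes the time-dependent switching behaviour described in the abstract; the paper's time-independent strategies for periodic sampling are not represented in this sketch.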
doi:10.1371/journal.pcbi.1003668
PMCID: PMC4072525  PMID: 24968100
