1.  Is (1→3)-β-D-glucan the missing link from bedside assessment to pre-emptive therapy of invasive candidiasis? 
Critical Care  2011;15(6):1017.
Invasive candidiasis is a frequent life-threatening complication in critically ill patients. Early diagnosis followed by prompt treatment aimed at improving outcome by minimizing unnecessary antifungal use remains a major challenge in the ICU setting. Timely patient selection thus plays a key role for clinically efficient and cost-effective management. Approaches combining clinical risk factors and Candida colonization data have improved our ability to identify such patients early. While the negative predictive value of scores and predicting rules is up to 95 to 99%, the positive predictive value is much lower, ranging between 10 and 60%. Accordingly, if a positive score or rule is used to guide the start of antifungal therapy, many patients may be treated unnecessarily. Candida biomarkers display higher positive predictive values; however, they lack sensitivity and are thus not able to identify all cases of invasive candidiasis. The (1→3)-β-D-glucan (BG) assay, a panfungal antigen test, is recommended as a complementary tool for the diagnosis of invasive mycoses in high-risk hemato-oncological patients. Its role in the more heterogeneous ICU population remains to be defined. More efficient clinical selection strategies combined with performant laboratory tools are needed in order to treat the right patients at the right time by keeping costs of screening and therapy as low as possible. The new approach proposed by Posteraro and colleagues in the previous issue of Critical Care meets these requirements. A single positive BG value in medical patients admitted to the ICU with sepsis and expected to stay for more than 5 days preceded the documentation of candidemia by 1 to 3 days with an unprecedented diagnostic accuracy. Applying this one-point fungal screening on a selected subset of ICU patients with an estimated 15 to 20% risk of developing candidemia is an appealing and potentially cost-effective approach. If confirmed by multicenter investigations, and extended to surgical patients at high risk of invasive candidiasis after abdominal surgery, this Bayesian-based risk stratification approach aimed at maximizing clinical efficiency by minimizing health care resource utilization may substantially simplify the management of critically ill patients at risk of invasive candidiasis.
doi:10.1186/cc10544
PMCID: PMC3388704  PMID: 22171793
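Entry 1 turns on a Bayesian point: a biomarker with fixed sensitivity and specificity only achieves a useful positive predictive value when it is applied to a pre-selected subset with a 15 to 20% pre-test risk rather than to all ICU admissions. A minimal sketch of that arithmetic, using assumed placeholder values for the assay's sensitivity and specificity (not figures from the cited study):

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a binary test via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Assumed (hypothetical) assay characteristics for illustration.
SENS, SPEC = 0.80, 0.90

# Unselected ICU population (~2% candidemia risk) versus the pre-selected
# subset with an estimated 15-20% risk described in the abstract above.
for prevalence in (0.02, 0.15, 0.20):
    ppv, npv = predictive_values(SENS, SPEC, prevalence)
    print(f"pre-test risk {prevalence:>4.0%}: PPV {ppv:.2f}, NPV {npv:.3f}")
```

With the same assay characteristics, raising the pre-test risk from 2% to 15-20% roughly quadruples the PPV while the NPV stays high, which is the rationale for one-point screening in a selected subset.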
2.  Validation and comparison of clinical prediction rules for invasive candidiasis in intensive care unit patients: a matched case-control study 
Critical Care  2011;15(4):R198.
Introduction
Due to the increasing prevalence and severity of invasive candidiasis, investigators have developed clinical prediction rules to identify patients who may benefit from antifungal prophylaxis or early empiric therapy. The aims of this study were to validate and compare the Paphitou and Ostrosky-Zeichner clinical prediction rules in ICU patients in a 689-bed academic medical center.
Methods
We conducted a retrospective matched case-control study from May 2003 to June 2008 to evaluate the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of each rule. Cases included adults with ICU stays of at least four days and invasive candidiasis matched to three controls by age, gender and ICU admission date. The clinical prediction rules were applied to cases and controls via retrospective chart review to evaluate the success of the rules in predicting invasive candidiasis. Paphitou's rule included diabetes, total parenteral nutrition (TPN) and dialysis with or without antibiotics. Ostrosky-Zeichner's rule included antibiotics or central venous catheter plus at least two of the following: surgery, immunosuppression, TPN, dialysis, corticosteroids and pancreatitis. Conditional logistic regression was performed to evaluate the rules. Discriminative power was evaluated by area under the receiver operating characteristic curve (AUC ROC).
Results
A total of 352 patients were included (88 cases and 264 controls). The incidence of invasive candidiasis among adults with an ICU stay of at least four days was 2.3%. The prediction rules performed similarly, exhibiting low PPVs (0.041 to 0.054), high NPVs (0.983 to 0.990) and AUC ROCs (0.649 to 0.705). A new prediction rule (Nebraska Medical Center rule) was developed with PPVs, NPVs and AUC ROCs of 0.047, 0.994 and 0.770, respectively.
Conclusions
Based on low PPVs and high NPVs, the rules are most useful for identifying patients who are not likely to develop invasive candidiasis, potentially preventing unnecessary antifungal use, optimizing patient ICU care and facilitating the design of forthcoming antifungal clinical trials.
doi:10.1186/cc10366
PMCID: PMC3387640  PMID: 21846332
candidiasis; clinical prediction rules; prophylaxis
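Because entry 2 is a matched case-control study with a fixed 1:3 case-to-control ratio, PPV and NPV cannot be read directly off its 2x2 table; sensitivity and specificity are estimated first and then projected onto the source population using the observed 2.3% incidence. A minimal sketch of that projection, with hypothetical rule-positive and rule-negative counts (the abstract does not report the raw cells):

```python
def rule_performance(tp, fn, fp, tn, incidence):
    """Sensitivity/specificity from case-control counts, then PPV/NPV
    re-weighted to a population with the given disease incidence."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = sens * incidence / (sens * incidence + (1 - spec) * (1 - incidence))
    npv = spec * (1 - incidence) / ((1 - sens) * incidence + spec * (1 - incidence))
    return sens, spec, ppv, npv

# Hypothetical counts: 88 cases and 264 controls, as in the study design,
# but the split between rule-positive and rule-negative is invented here.
sens, spec, ppv, npv = rule_performance(tp=60, fn=28, fp=120, tn=144, incidence=0.023)
print(f"sens {sens:.2f}  spec {spec:.2f}  PPV {ppv:.3f}  NPV {npv:.3f}")
```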
3.  Seasonal Influenza Vaccination for Children in Thailand: A Cost-Effectiveness Analysis 
PLoS Medicine  2015;12(5):e1001829.
Background
Seasonal influenza is a major cause of mortality worldwide. Routine immunization of children has the potential to reduce this mortality through both direct and indirect protection, but has not been adopted by any low- or middle-income countries. We developed a framework to evaluate the cost-effectiveness of influenza vaccination policies in developing countries and used it to consider annual vaccination of school- and preschool-aged children with either trivalent inactivated influenza vaccine (TIV) or trivalent live-attenuated influenza vaccine (LAIV) in Thailand. We also compared these approaches with a policy of expanding TIV coverage in the elderly.
Methods and Findings
We developed an age-structured model to evaluate the cost-effectiveness of eight vaccination policies parameterized using country-level data from Thailand. For policies using LAIV, we considered five different age groups of children to vaccinate. We adopted a Bayesian evidence-synthesis framework, expressing uncertainty in parameters through probability distributions derived by fitting the model to prospectively collected laboratory-confirmed influenza data from 2005-2009, by meta-analysis of clinical trial data, and by using prior probability distributions derived from literature review and elicitation of expert opinion. We performed sensitivity analyses using alternative assumptions about prior immunity, contact patterns between age groups, the proportion of infections that are symptomatic, cost per unit vaccine, and vaccine effectiveness. Vaccination of children with LAIV was found to be highly cost-effective, with incremental cost-effectiveness ratios between about 2,000 and 5,000 international dollars per disability-adjusted life year averted, and was consistently preferred to TIV-based policies. These findings were robust to extensive sensitivity analyses. The optimal age group to vaccinate with LAIV, however, was sensitive both to the willingness to pay for health benefits and to assumptions about contact patterns between age groups.
Conclusions
Vaccinating school-aged children with LAIV is likely to be cost-effective in Thailand in the short term, though the long-term consequences of such a policy cannot be reliably predicted given current knowledge of influenza epidemiology and immunology. Our work provides a coherent framework that can be used for similar analyses in other low- and middle-income countries.
Ben Cooper and colleagues use an age-structured model to estimate optimal cost-effectiveness of flu vaccination among Thai children aged 2 to 17.
Editors' Summary
Background
Every year, millions of people catch influenza, a viral disease of the airways. Most infected individuals recover quickly, but elderly people, the very young, and chronically ill individuals are at high risk of developing serious complications such as pneumonia; seasonal influenza kills about half a million people annually. Small but frequent changes in the influenza virus mean that an immune response produced one year by exposure to the virus provides only partial protection against influenza the next year. Annual immunization with a vaccine that contains killed or live-attenuated (weakened) influenza viruses of the major circulating strains can reduce a person’s chance of catching influenza. Consequently, many countries run seasonal influenza vaccination programs that target elderly people and other people at high risk of influenza complications, and people who care for these individuals.
Why Was This Study Done?
As well as reducing the vaccinated person’s risk of infection, influenza vaccination protects unvaccinated members of the population by reducing the chances of influenza spreading. Because children make a disproportionately large contribution to the transmission of influenza, vaccination of children might therefore provide greater benefits to the whole population than vaccination of elderly people, particularly when vaccination uptake among the elderly is low. Thus, many high-income countries now recommend annual influenza vaccination of children with a trivalent live-attenuated influenza vaccine (LAIV; a trivalent vaccine contains three viruses), which is sprayed into the nose. However, to date no low- or middle-income countries have evaluated this policy. Here, the researchers develop a mathematical model (framework) to evaluate the cost-effectiveness of annual vaccination of children with LAIV or trivalent inactivated influenza vaccine (TIV) in Thailand. A cost-effectiveness analysis evaluates whether a medical intervention is good value for money by comparing the health outcomes and costs associated with the introduction of the intervention with the health outcomes and costs of the existing standard of care. Thailand, a middle-income country, offers everyone over 65 years old free seasonal influenza vaccination with TIV, but vaccine coverage in this age group is low (10%).
What Did the Researchers Do and Find?
The researchers developed a modeling framework that contained six connected components including a transmission model that incorporated infectious contacts within and between different age groups, a health outcome model that calculated the disability-adjusted life years (DALYs, a measure of the overall disease burden) averted by specific vaccination policies, and a cost model that calculated the costs to the population of each policy. They used this framework and data from Thailand to calculate the cost-effectiveness of six childhood vaccination policies in Thailand (one with TIV and five with LAIV that targeted children of different ages) against a baseline policy of 10% TIV coverage in the elderly; they also investigated the cost-effectiveness of increasing vaccination in the elderly to 66%. All seven vaccination policies tested reduced influenza cases and deaths compared to the baseline policy, but the LAIV-based polices were consistently better than the TIV-based policies; the smallest reductions were seen when TIV coverage in elderly people was increased to 66%. All seven policies were highly cost-effective according to the World Health Organization’s threshold for cost-effectiveness. That is, the cost per DALY averted by each policy compared to the baseline policy (the incremental cost-effectiveness ratio) was less than Thailand’s gross domestic product per capita (the total economic output of a country divided by the number of people in the country).
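The cost-effectiveness verdict described above is mechanical once the model outputs are in hand: the incremental cost-effectiveness ratio (ICER) of a policy against the baseline is compared with the willingness-to-pay threshold, here taken as GDP per capita per DALY averted. A small illustration with placeholder costs and DALY totals (not the study's estimates):

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    cost: float           # total programme cost (international dollars)
    dalys_averted: float  # DALYs averted relative to doing nothing

def icer(candidate: Policy, baseline: Policy) -> float:
    """Incremental cost per additional DALY averted versus the baseline policy."""
    return (candidate.cost - baseline.cost) / (candidate.dalys_averted - baseline.dalys_averted)

# Hypothetical illustrative figures only.
baseline = Policy("10% TIV in elderly (baseline)", cost=1.0e6, dalys_averted=500)
laiv_kids = Policy("LAIV, school-aged children", cost=9.0e6, dalys_averted=3200)

GDP_PER_CAPITA = 15000  # assumed willingness-to-pay threshold (I$ per DALY averted)

ratio = icer(laiv_kids, baseline)
verdict = "cost-effective" if ratio < GDP_PER_CAPITA else "not cost-effective"
print(f"ICER = {ratio:,.0f} I$ per DALY averted -> {verdict} at threshold {GDP_PER_CAPITA:,}")
```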
What Do These Findings Mean?
These findings suggest that seasonal influenza vaccination of children with LAIV is likely to represent good value for money in Thailand and, potentially, in other middle- and low-income countries in the short term. The long-term consequences of annual influenza vaccination of children in Thailand cannot be reliably predicted, however, because of limitations in our current understanding of influenza immunity in populations. Moreover, the accuracy of these findings is limited by the assumptions built into the modeling framework, including the vaccine costs and efficacy that were used to run the model, which were estimated from limited data. Importantly, however, these findings support proposals for large-scale community-based controlled trials of policies to vaccinate children against influenza in low- and middle-income countries. Indeed, based on these findings, Thailand is planning to evaluate school-based seasonal influenza vaccination in a few provinces in 2016 before considering a nationwide program of seasonal influenza vaccination of children.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001829.
The UK National Health Service Choices website provides information for patients about seasonal influenza, about influenza vaccination, and about influenza vaccination in children
The World Health Organization provides information on seasonal influenza (in several languages) and on influenza vaccines
The US Centers for Disease Control and Prevention also provides information for patients and health professionals on all aspects of seasonal influenza, including information about vaccination, about children, influenza, and vaccination, and about herd immunity; its website contains a short video about personal experiences of influenza
Flu.gov, a US government website, provides access to information on seasonal influenza and vaccination
MedlinePlus has links to further information about influenza and about vaccination (in English and Spanish)
The Thai National Influenza Center monitors influenza activity throughout Thailand
doi:10.1371/journal.pmed.1001829
PMCID: PMC4444096  PMID: 26011712
4.  Male Circumcision at Different Ages in Rwanda: A Cost-Effectiveness Study 
PLoS Medicine  2010;7(1):e1000211.
Agnes Binagwaho and colleagues predict that circumcision of newborn boys would be effective and cost-saving as a long-term strategy to prevent HIV in Rwanda.
Background
There is strong evidence showing that male circumcision (MC) reduces HIV infection and other sexually transmitted infections (STIs). In Rwanda, where adult HIV prevalence is 3%, MC is not a traditional practice. The Rwanda National AIDS Commission modelled cost and effects of MC at different ages to inform policy and programmatic decisions in relation to introducing MC. This study was necessary because the MC debate in Southern Africa has focused primarily on MC for adults. Further, this is the first time, to our knowledge, that a cost-effectiveness study on MC has been carried out in a country where HIV prevalence is below 5%.
Methods and Findings
A cost-effectiveness model was developed and applied to three hypothetical cohorts in Rwanda: newborns, adolescents, and adult men. Effectiveness was defined as the number of HIV infections averted, and was calculated as the product of the number of people susceptible to HIV infection in the cohort, the HIV incidence rate at different ages, and the protective effect of MC; discounted back to the year of circumcision and summed over the life expectancy of the circumcised person. Direct costs were based on interviews with experienced health care providers to determine inputs involved in the procedure (from consumables to staff time) and related prices. Other costs included training, patient counselling, treatment of adverse events, and promotion campaigns, and they were adjusted for the averted lifetime cost of health care (antiretroviral therapy [ART], opportunistic infection [OI], laboratory tests). One-way sensitivity analysis was performed by varying the main inputs of the model, and thresholds were calculated at which each intervention is no longer cost-saving and at which an intervention costs more than one gross domestic product (GDP) per capita per life-year gained.
Results
Neonatal MC is less expensive than adolescent and adult MC (US$15 instead of US$59 per procedure) and is cost-saving (the cost-effectiveness ratio is negative), even though savings from infant circumcision will be realized later in time. The cost per infection averted is US$3,932 for adolescent MC and US$4,949 for adult MC. Results for infant MC appear robust. Infant MC remains highly cost-effective across a reasonable range of variation in the base case scenario. Adolescent MC is highly cost-effective for the base case scenario but this high cost-effectiveness is not robust to small changes in the input variables. Adult MC is neither cost-saving nor highly cost-effective when considering only the direct benefit for the circumcised man.
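The effectiveness definition quoted in the methods above (people susceptible to HIV in the cohort, times the age-specific incidence rate, times the protective effect of MC, discounted back to the year of circumcision and summed over life expectancy) maps onto a short calculation. The incidence schedule, protection level and discount rate below are illustrative assumptions, not the Rwandan model inputs, and depletion of susceptibles over time is ignored for simplicity:

```python
# Hypothetical, simplified age-specific incidence (annual risk among uncircumcised men).
INCIDENCE = {age: 0.000 for age in range(0, 15)}
INCIDENCE.update({age: 0.004 for age in range(15, 30)})
INCIDENCE.update({age: 0.002 for age in range(30, 50)})

def discounted_infections_averted(cohort_size, start_age, protection=0.6,
                                  life_expectancy=65, discount_rate=0.03):
    """Sum over each year of life: susceptible men x incidence x protective effect,
    discounted back to the year of circumcision (depletion of susceptibles is
    ignored to keep the sketch short)."""
    total = 0.0
    for year, age in enumerate(range(start_age, life_expectancy)):
        yearly = cohort_size * INCIDENCE.get(age, 0.0) * protection
        total += yearly / (1 + discount_rate) ** year
    return total

newborn = discounted_infections_averted(cohort_size=10_000, start_age=0)
adult = discounted_infections_averted(cohort_size=10_000, start_age=25)
print(f"infections averted (discounted): newborn cohort {newborn:.0f}, adult cohort {adult:.0f}")
```

Running the sketch for a newborn versus an adult cohort makes the timing point in the results concrete: the benefit of infant circumcision accrues decades later and is therefore discounted more heavily.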
Conclusions
The study suggests that Rwanda should be simultaneously scaling up circumcision across a broad range of age groups, with high priority to the very young. Infant MC can be integrated into existing health services (i.e., neonatal visits and vaccination sessions) and over time has better potential than adolescent and adult circumcision to achieve the very high coverage of the population required for maximal reduction of HIV incidence. In the presence of infant MC, adolescent and adult MC would evolve into a “catch-up” campaign that would be needed at the start of the program but would eventually become superfluous.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Acquired immunodeficiency syndrome (AIDS) has killed more than 25 million people since 1981 and more than 31 million people (22 million in sub-Saharan Africa alone) are now infected with the human immunodeficiency virus (HIV), which causes AIDS. There is no cure for HIV/AIDS and no vaccine against HIV infection. Consequently, prevention of HIV transmission is extremely important. HIV is most often spread through unprotected sex with an infected partner. Individuals can reduce their risk of HIV infection, therefore, by abstaining from sex, by having one or a few sexual partners, and by always using a male or female condom. In addition, male circumcision—the removal of the foreskin, the loose fold of skin that covers the head of the penis—can halve HIV transmission rates to men resulting from sex with women. Thus, as part of its HIV prevention strategy, the World Health Organization (WHO) recommends that male circumcision programs be scaled up in countries where there is a generalized HIV epidemic and where few men are circumcised.
Why Was This Study Done?
One such country is Rwanda. Here, 3% of the adult population is infected with HIV but only 15% of men are circumcised—worldwide, about 30% of men are circumcised. Demand for circumcision is increasing in Rwanda but, before policy makers introduce a country-wide male circumcision program, they need to identify the most cost-effective way to increase circumcision rates. In particular, they need to decide the age at which circumcision should be offered. Circumcision soon after birth (neonatal circumcision) is quick and simple and rarely causes any complications. Circumcision of adolescents and adults is more complex and has a higher complication rate. Although several studies have investigated the cost-effectiveness (the balance between the clinical and financial costs of a medical intervention and its benefits) of circumcision in adult men, little is known about its cost-effectiveness in newborn boys. In this study, which is one of several studies on male circumcision being organized by the National AIDS Control Commission in Rwanda, the researchers model the cost-effectiveness of circumcision at different ages.
What Did the Researchers Do and Find?
The researchers developed a simple cost-effectiveness model and applied it to three hypothetical groups of Rwandans: newborn boys, adolescent boys, and adult men. For their model, the researchers calculated the effectiveness of male circumcision (the number of HIV infections averted) by estimating the reduction in the annual number of new HIV infections over time. They obtained estimates of the costs of circumcision (including the costs of consumables, staff time, and treatment of complications) from health care providers and adjusted these costs for the money saved through not needing to treat HIV in males in whom circumcision prevented infection. Using their model, the researchers estimate that each neonatal male circumcision would cost US$15 whereas each adolescent or adult male circumcision would cost US$59. Neonatal male circumcision, they report, would be cost-saving. That is, over a lifetime, neonatal male circumcision would save more money than it costs. Finally, using the WHO definition of cost-effectiveness (for a cost-effective intervention, the additional cost incurred to gain one year of life must be less than a country's per capita gross domestic product), the researchers estimate that, although adolescent circumcision would be highly cost-effective, circumcision of adult men would only be potentially cost-effective (but would likely prove cost-effective if the additional infections that would occur from men to their partners without a circumcision program were also taken into account).
What Do These Findings Mean?
As with all modeling studies, the accuracy of these findings depends on the many assumptions included in the model. However, the findings suggest that male circumcision for infants for the prevention of HIV infection later in life is highly cost-effective and likely to be cost-saving and that circumcision for adolescents is cost-effective. The researchers suggest, therefore, that policy makers in Rwanda and in countries with similar HIV infection and circumcision rates should scale up male circumcision programs across all age groups, with high priority being given to the very young. If infants are routinely circumcised, they suggest, circumcision of adolescent and adult males would become a “catch-up” campaign that would be needed at the start of the program but that would become superfluous over time. Such an approach would represent a switch from managing the HIV epidemic as an emergency towards focusing on sustainable, long-term solutions to this major public-health problem.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000211.
This study is further discussed in a PLoS Medicine Perspective by Seth Kalichman
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
Information is available from the Joint United Nations Programme on HIV/AIDS (UNAIDS) on HIV infection and AIDS and on male circumcision in relation to HIV and AIDS
HIV InSite has comprehensive information on all aspects of HIV/AIDS
Information is available from Avert, an international AIDS charity on many aspects of HIV/AIDS, including information on HIV and AIDS in Africa, and on circumcision and HIV (some information in English and Spanish)
More information about male circumcision is available from the Clearinghouse on Male Circumcision
The National AIDS Control Commission of Rwanda provides detailed information about HIV/AIDS in Rwanda (in English and French)
doi:10.1371/journal.pmed.1000211
PMCID: PMC2808207  PMID: 20098721
5.  Risk Stratification by Self-Measured Home Blood Pressure across Categories of Conventional Blood Pressure: A Participant-Level Meta-Analysis 
PLoS Medicine  2014;11(1):e1001591.
Jan Staessen and colleagues compare the risk of cardiovascular, cardiac, or cerebrovascular events in patients with elevated office blood pressure vs. self-measured home blood pressure.
Background
The Global Burden of Diseases Study 2010 reported that hypertension is worldwide the leading risk factor for cardiovascular disease, causing 9.4 million deaths annually. We examined to what extent self-measurement of home blood pressure (HBP) refines risk stratification across increasing categories of conventional blood pressure (CBP).
Methods and Findings
This meta-analysis included 5,008 individuals randomly recruited from five populations (56.6% women; mean age, 57.1 y). None were treated with antihypertensive drugs. In multivariable analyses, hazard ratios (HRs) associated with 10-mm Hg increases in systolic HBP were computed across CBP categories, using the following systolic/diastolic CBP thresholds (in mm Hg): optimal, <120/<80; normal, 120–129/80–84; high-normal, 130–139/85–89; mild hypertension, 140–159/90–99; and severe hypertension, ≥160/≥100.
Over 8.3 y, 522 participants died, and 414, 225, and 194 had cardiovascular, cardiac, and cerebrovascular events, respectively. In participants with optimal or normal CBP, HRs for a composite cardiovascular end point associated with a 10-mm Hg higher systolic HBP were 1.28 (1.01–1.62) and 1.22 (1.00–1.49), respectively. At high-normal CBP and in mild hypertension, the HRs were 1.24 (1.03–1.49) and 1.20 (1.06–1.37), respectively, for all cardiovascular events and 1.33 (1.07–1.65) and 1.30 (1.09–1.56), respectively, for stroke. In severe hypertension, the HRs were not significant (p≥0.20). Among people with optimal, normal, and high-normal CBP, 67 (5.0%), 187 (18.4%), and 315 (30.3%), respectively, had masked hypertension (HBP≥130 mm Hg systolic or ≥85 mm Hg diastolic). Compared to true optimal CBP, masked hypertension was associated with a 2.3-fold (1.5–3.5) higher cardiovascular risk. A limitation was few data from low- and middle-income countries.
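The risk categories and the masked-hypertension criterion used above are threshold rules, so they are easy to restate as code. The sketch below assumes exactly the systolic/diastolic cut-offs quoted in the methods (optimal <120/<80 up to severe ≥160/≥100, masked hypertension defined as non-hypertensive CBP with HBP ≥130 mm Hg systolic or ≥85 mm Hg diastolic):

```python
def cbp_category(sys_cbp: float, dia_cbp: float) -> str:
    """Classify conventional BP using the thresholds (mm Hg) given in the abstract;
    the higher of the two components decides the category."""
    if sys_cbp >= 160 or dia_cbp >= 100:
        return "severe hypertension"
    if sys_cbp >= 140 or dia_cbp >= 90:
        return "mild hypertension"
    if sys_cbp >= 130 or dia_cbp >= 85:
        return "high-normal"
    if sys_cbp >= 120 or dia_cbp >= 80:
        return "normal"
    return "optimal"

def masked_hypertension(sys_cbp, dia_cbp, sys_hbp, dia_hbp) -> bool:
    """Masked hypertension: CBP below the hypertension threshold but home BP
    >=130 mm Hg systolic or >=85 mm Hg diastolic."""
    non_hypertensive_cbp = cbp_category(sys_cbp, dia_cbp) in ("optimal", "normal", "high-normal")
    elevated_hbp = sys_hbp >= 130 or dia_hbp >= 85
    return non_hypertensive_cbp and elevated_hbp

print(cbp_category(118, 76))                  # optimal
print(masked_hypertension(118, 76, 134, 82))  # True: optimal CBP, elevated HBP
```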
Conclusions
HBP substantially refines risk stratification at CBP levels assumed to carry no or only mildly increased risk, in particular in the presence of masked hypertension. Randomized trials could help determine the best use of CBP vs. HBP in guiding BP management. Our study identified a novel indication for HBP, which, in view of its low cost and the increased availability of electronic communication, might be globally applicable, even in remote areas or in low-resource settings.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Globally, hypertension (high blood pressure) is the leading risk factor for cardiovascular disease and is responsible for 9.4 million deaths annually from heart attacks, stroke, and other cardiovascular diseases. Hypertension, which rarely has any symptoms, is diagnosed by measuring blood pressure, the force that blood circulating in the body exerts on the inside of large blood vessels. Blood pressure is highest when the heart is pumping out blood (systolic blood pressure) and lowest when the heart is refilling (diastolic blood pressure). European guidelines define optimal blood pressure as a systolic blood pressure of less than 120 millimeters of mercury (mm Hg) and a diastolic blood pressure of less than 80 mm Hg (a blood pressure of less than 120/80 mm Hg). Normal blood pressure, high-normal blood pressure, and mild hypertension are defined as blood pressures in the ranges 120–129/80–84 mm Hg, 130–139/85–89 mm Hg, and 140–159/90–99 mm Hg, respectively. A blood pressure of more than 160 mm Hg systolic or 100 mm Hg diastolic indicates severe hypertension. Many factors affect blood pressure; overweight people and individuals who eat salty or fatty food are at high risk of developing hypertension. Lifestyle changes and/or antihypertensive drugs can be used to control hypertension.
Why Was This Study Done?
The current guidelines for the diagnosis and management of hypertension recommend risk stratification based on conventionally measured blood pressure (CBP, the average of two consecutive measurements made at a clinic). However, self-measured home blood pressure (HBP) more accurately predicts outcomes because multiple HBP readings are taken and because HBP measurement avoids the “white-coat effect”—some individuals have a raised blood pressure in a clinical setting but not at home. Could risk stratification across increasing categories of CBP be refined through the use of self-measured HBP, particularly at CBP levels assumed to be associated with no or only mildly increased risk? Here, the researchers undertake a participant-level meta-analysis (a study that uses statistical approaches to pool results from individual participants in several independent studies) to answer this question.
What Did the Researchers Do and Find?
The researchers included 5,008 individuals recruited from five populations and enrolled in the International Database of Home Blood Pressure in Relation to Cardiovascular Outcome (IDHOCO) in their meta-analysis. CBP readings were available for all the participants, who measured their HBP using an oscillometric device (an electronic device for measuring blood pressure). The researchers used information on fatal and nonfatal cardiovascular, cardiac, and cerebrovascular (stroke) events to calculate the hazard ratios (HRs, indicators of increased risk) associated with a 10-mm Hg increase in systolic HBP across standard CBP categories. In participants with optimal CBP, an increase in systolic HBP of 10 mm Hg increased the risk of any cardiovascular event by nearly 30% (an HR of 1.28). Similar HRs were associated with a 10-mm Hg increase in systolic HBP for all cardiovascular events among people with normal and high-normal CBP and with mild hypertension, but for people with severe hypertension, systolic HBP did not significantly add to the prediction of any end point. Among people with optimal, normal, and high-normal CBP, 5%, 18.4%, and 30.3%, respectively, had an HBP of 130/85 mm Hg or higher (“masked hypertension,” a higher blood pressure in daily life than in a clinical setting). Finally, compared to individuals with optimal CBP without masked hypertension, individuals with masked hypertension had more than double the risk of cardiovascular disease.
What Do These Findings Mean?
These findings indicate that HBP measurements, particularly in individuals with masked hypertension, refine risk stratification at CBP levels assumed to be associated with no or mildly elevated risk of cardiovascular disease. That is, HBP measurements can improve the prediction of cardiovascular complications or death among individuals with optimal, normal, and high-normal CBP but not among individuals with severe hypertension. Clinical trials are needed to test whether the identification and treatment of masked hypertension leads to a reduction of cardiovascular complications and is cost-effective compared to the current standard of care, which does not include HBP measurements and does not treat people with normal or high-normal CBP. Until then, these findings provide support for including HBP monitoring in primary prevention strategies for cardiovascular disease among individuals at risk for masked hypertension (for example, people with diabetes), and for carrying out HBP monitoring in people with a normal CBP but unexplained signs of hypertensive target organ damage.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001591.
This study is further discussed in a PLOS Medicine Perspective by Mark Caulfield
The US National Heart, Lung, and Blood Institute has patient information about high blood pressure (in English and Spanish) and a guide to lowering high blood pressure that includes personal stories
The American Heart Association provides information on high blood pressure and on cardiovascular diseases (in several languages); it also provides personal stories about dealing with high blood pressure
The UK National Health Service Choices website provides detailed information for patients about hypertension (including a personal story) and about cardiovascular disease
The World Health Organization provides information on cardiovascular disease and controlling blood pressure; its A Global Brief on Hypertension was published on World Health Day 2013
The UK charity Blood Pressure UK provides information about white-coat hypertension and about home blood pressure monitoring
MedlinePlus provides links to further information about high blood pressure, heart disease, and stroke (in English and Spanish)
doi:10.1371/journal.pmed.1001591
PMCID: PMC3897370  PMID: 24465187
6.  ACHTUNG-Rule: a new and improved model for prognostic assessment in myocardial infarction 
Background:
Thrombolysis In Myocardial Infarction (TIMI), Platelet Glycoprotein IIb/IIIa in Unstable Angina: Receptor Suppression Using Integrilin (PURSUIT) and Global Registry of Acute Coronary Events (GRACE) scores have been developed for risk stratification in myocardial infarction (MI). The latter is the most validated score, yet active research is ongoing for improving prognostication in MI.
Aim:
Derivation and validation of a new model for intrahospital, post-discharge and combined/total all-cause mortality prediction – ACHTUNG-Rule – and comparison with the GRACE algorithm.
Methods:
1091 patients admitted for MI (age 68.4 ± 13.5, 63.2% males, 41.8% acute MI with ST-segment elevation (STEMI)) and followed for 19.7 ± 6.4 months were assigned to a derivation sample. 400 patients admitted at a later date to our institution (age 68.3 ± 13.4, 62.7% males, 38.8% STEMI) and followed for a period of 7.2 ± 4.0 months were assigned to a validation sample. Three versions of the ACHTUNG-Rule were developed for the prediction of intrahospital, post-discharge and combined (intrahospital plus post-discharge) all-cause mortality. All models were evaluated for their predictive performance using the area under the receiver operating characteristic (ROC) curve, calibration through the Hosmer–Lemeshow test and predictive utility within each individual patient through the Brier score. Comparison through ROC curve analysis and measures of risk reclassification – net reclassification improvement index (NRI) or Integrated Discrimination Improvement (IDI) – was performed between the ACHTUNG versions for intrahospital, post-discharge and combined mortality prediction and the equivalent GRACE score versions for intrahospital (GRACE-IH), post-discharge (GRACE-6PD) and post-admission 6-month mortality (GRACE-6).
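For readers less familiar with the evaluation metrics named in the methods, discrimination (ROC AUC), overall accuracy (Brier score) and reclassification (categorical NRI), the sketch below shows how they are commonly computed. It uses scikit-learn for the first two and made-up risk predictions; neither the ACHTUNG nor the GRACE models are reproduced here, and the 0.1/0.3 risk cut-offs are arbitrary.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)

# Made-up example: true outcomes plus predicted mortality risks from an
# "old" and a "new" score (stand-ins, not the GRACE or ACHTUNG outputs).
y = rng.integers(0, 2, size=200)
risk_old = np.clip(0.3 * y + rng.normal(0.3, 0.2, size=200), 0, 1)
risk_new = np.clip(0.4 * y + rng.normal(0.25, 0.2, size=200), 0, 1)

print("AUC old %.3f  new %.3f" % (roc_auc_score(y, risk_old), roc_auc_score(y, risk_new)))
print("Brier old %.3f  new %.3f" % (brier_score_loss(y, risk_old), brier_score_loss(y, risk_new)))

def categorical_nri(y, old, new, cutoffs=(0.1, 0.3)):
    """Net reclassification improvement across risk categories defined by cutoffs."""
    cat_old = np.digitize(old, cutoffs)
    cat_new = np.digitize(new, cutoffs)
    events, nonevents = y == 1, y == 0
    up, down = cat_new > cat_old, cat_new < cat_old
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[nonevents].mean() - up[nonevents].mean()
    return nri_events + nri_nonevents

print("NRI %.3f" % categorical_nri(y, risk_old, risk_new))
```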
Results:
Assessment of calibration and overall performance of the ACHTUNG-Rule demonstrated a good fit (p value for the Hosmer–Lemeshow goodness-of-fit test of 0.258, 0.101 and 0.550 for ACHTUNG-IH, ACHTUNG-T and ACHTUNG-R, respectively) and high discriminatory power in the validation cohort for all the primary endpoints (intrahospital mortality: AUC ACHTUNG-IH 0.886 ± 0.035 vs. AUC GRACE-IH 0.906 ± 0.026; post-discharge mortality: AUC ACHTUNG-R 0.827 ± 0.036 vs. AUC GRACE-6PD 0.811 ± 0.034; combined/total mortality: AUC ACHTUNG-T 0.831 ± 0.028 vs. AUC GRACE-6 0.815 ± 0.033). Furthermore, all versions of the ACHTUNG-Rule accurately reclassified a significant number of patients in different, more appropriate, risk categories (NRI ACHTUNG-IH 17.1%, p (2-sided) = 0.0021; NRI ACHTUNG-R 22.0%, p = 0.0002; NRI ACHTUNG-T 18.6%, p = 0.0012). The prognostic performance of the ACHTUNG-Rule was similar in both derivation and validation samples.
Conclusions:
All versions of the ACHTUNG-Rule have shown excellent discriminative power and good calibration for predicting intrahospital, post-discharge and combined in-hospital plus post-discharge mortality. The ACHTUNG version for intrahospital mortality prediction was not inferior to its equivalent GRACE model, and ACHTUNG versions for post-discharge and combined/total mortality demonstrated apparent superiority. External validation in wider, independent, preferably multicentre, registries is warranted before its potential clinical implementation.
doi:10.1177/2048872612466536
PMCID: PMC3760564  PMID: 24062923
Myocardial infarction; prognosis; risk assessment; GRACE risk score
7.  The impact of the HEART risk score in the early assessment of patients with acute chest pain: design of a stepped wedge, cluster randomised trial 
Background
Chest pain remains a diagnostic challenge: physicians do not want to miss an acute coronary syndrome (ACS), but they also wish to avoid unnecessary additional diagnostic procedures. In approximately 75% of the patients presenting with chest pain at the emergency department (ED) there is no underlying cardiac cause. Therefore, diagnostic strategies focus on identifying patients in whom an ACS can be safely ruled out based on findings from history, physical examination and early cardiac marker measurement. The HEART score, a clinical prediction rule, was developed to provide the clinician with a simple, early and reliable predictor of cardiac risk. We set out to quantify the impact of the use of the HEART score in daily practice on patient outcomes and costs.
Methods/Design
We designed a prospective, multi-centre, stepped wedge, cluster randomised trial. Our aim is to include a total of 6600 unselected chest pain patients presenting at the ED in 10 Dutch hospitals during an 11-month period. All clusters (i.e. hospitals) start with a period of ‘usual care’ and are randomised with respect to the timing of their switch to ‘intervention care’. The latter involves the calculation of the HEART score in each patient to guide clinical decision-making: notably, reassurance and discharge of patients with low scores, and intensive monitoring and early intervention in patients with high HEART scores. The primary outcome is the occurrence of major adverse cardiac events (MACE), including acute myocardial infarction, revascularisation or death within 6 weeks after presentation. Secondary outcomes include the occurrence of MACE in low-risk patients, quality of life, use of health care resources and costs.
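The design described above amounts to a cluster-by-period matrix in which every hospital starts under usual care and crosses over to the intervention at a randomised step, never switching back. A minimal sketch of such a schedule for 10 hospitals over an 11-month period; the simple random ordering of crossover steps is an assumption for illustration, not the trial's actual randomisation procedure:

```python
import random

def stepped_wedge_schedule(n_clusters=10, n_periods=11, seed=42):
    """Return a {cluster: [condition per period]} dict where 0 = usual care and
    1 = intervention. Each cluster switches at a randomly assigned step and
    never switches back."""
    rng = random.Random(seed)
    switch_periods = list(range(1, n_periods))  # candidate crossover steps 1..10
    rng.shuffle(switch_periods)                 # random order of crossover
    schedule = {}
    for cluster in range(1, n_clusters + 1):
        switch = switch_periods[(cluster - 1) % len(switch_periods)]
        schedule[f"hospital_{cluster:02d}"] = [int(p >= switch) for p in range(n_periods)]
    return schedule

for hospital, conditions in sorted(stepped_wedge_schedule().items()):
    print(hospital, conditions)
```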
Discussion
Stepped wedge designs are increasingly used to evaluate the real-life effectiveness of non-pharmacological interventions because of the following potential advantages: (a) each hospital has both a usual care and an intervention period; therefore, outcomes can be compared within and across hospitals; (b) each hospital will have an intervention period, which enhances participation when the intervention is promising; (c) all hospitals generate data about potential implementation problems. This large impact trial will generate evidence on whether the anticipated benefits (in terms of safety and cost-effectiveness) of using the HEART score will indeed be achieved in real-life clinical practice.
Trial registration
ClinicalTrials.gov 80-82310-97-12154.
doi:10.1186/1471-2261-13-77
PMCID: PMC3849098  PMID: 24070098
HEART score; Chest pain; Clinical prediction rule; Risk score implementation; Impact; Stepped wedge design; Cluster randomised trial
8.  Accurate and Robust Genomic Prediction of Celiac Disease Using Statistical Learning 
PLoS Genetics  2014;10(2):e1004137.
Practical application of genomic-based risk stratification to clinical diagnosis is appealing, yet performance varies widely depending on the disease and genomic risk score (GRS) method. Celiac disease (CD), a common immune-mediated illness, is strongly genetically determined and requires specific HLA haplotypes. HLA testing can exclude diagnosis but has low specificity, providing little information suitable for clinical risk stratification. Using six European cohorts, we provide a proof-of-concept that statistical learning approaches which simultaneously model all SNPs can generate robust and highly accurate predictive models of CD based on genome-wide SNP profiles. The high predictive capacity replicated both in cross-validation within each cohort (AUC of 0.87–0.89) and in independent replication across cohorts (AUC of 0.86–0.9), despite differences in ethnicity. The models explained 30–35% of disease variance and up to ∼43% of heritability. The GRS's utility was assessed in different clinically relevant settings. Comparable to HLA typing, the GRS can be used to identify individuals without CD with ≥99.6% negative predictive value; however, unlike HLA typing, fine-scale stratification of individuals into categories of higher risk for CD can identify those who would benefit from more invasive and costly definitive testing. The GRS is flexible and its performance can be adapted to the clinical situation by adjusting the threshold cut-off. Despite explaining a minority of disease heritability, our findings indicate that a genomic risk score provides clinically relevant information to improve upon current diagnostic pathways for CD and support further studies evaluating the clinical utility of this approach in CD and other complex diseases.
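At prediction time, a genomic risk score of the kind described above is a weighted sum of SNP allele dosages pushed through a cut-off that can be moved to trade sensitivity against specificity, for example a low cut-off for a rule-out tier with very high NPV and a higher cut-off for referral to definitive testing. The weights, cut-offs and logistic form in the sketch below are illustrative assumptions, not the model fitted in the paper:

```python
import numpy as np

def genomic_risk_score(dosages: np.ndarray, weights: np.ndarray, intercept: float = 0.0) -> np.ndarray:
    """Risk on the probability scale from per-SNP allele dosages (0/1/2) and
    per-SNP weights (e.g. penalised-regression coefficients)."""
    logits = intercept + dosages @ weights
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(1)
n_people, n_snps = 5, 1000
dosages = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)
weights = rng.normal(0.0, 0.01, size=n_snps)  # hypothetical effect sizes

risk = genomic_risk_score(dosages, weights, intercept=-1.0)

RULE_OUT, RULE_IN = 0.05, 0.60  # assumed cut-offs; tuned in practice to the clinical goal
for person, r in enumerate(risk, start=1):
    tier = ("low (rule out)" if r < RULE_OUT
            else "high (refer for definitive testing)" if r > RULE_IN
            else "intermediate")
    print(f"individual {person}: risk {r:.2f} -> {tier}")
```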
Author Summary
Celiac disease (CD) is a common immune-mediated illness, affecting approximately 1% of the population in Western countries but the diagnostic process remains sub-optimal. The development of CD is strongly dependent on specific human leukocyte antigen (HLA) genes, and HLA testing to identify CD susceptibility is now commonly undertaken in clinical practice. The clinical utility of HLA typing is to exclude CD when the CD susceptibility HLA types are absent, but notably, most people who possess HLA types imparting susceptibility for CD never develop CD. Therefore, while genetic testing in CD can overcome several limitations of the current diagnostic tools, the utility of HLA typing to identify those individuals at increased-risk of CD is limited. Using large datasets assaying single nucleotide polymorphisms (SNPs), we have developed genomic risk scores (GRS) based on multiple SNPs that can more accurately predict CD risk across several populations in “real world” clinical settings. The GRS can generate predictions that optimize CD risk stratification and diagnosis, potentially reducing the number of unnecessary follow-up investigations. The medical and economic impact of improving CD diagnosis is likely to be significant, and our findings support further studies into the role of personalized GRS's for other strongly heritable human diseases.
doi:10.1371/journal.pgen.1004137
PMCID: PMC3923679  PMID: 24550740
9.  Research on Implementation of Interventions in Tuberculosis Control in Low- and Middle-Income Countries: A Systematic Review 
PLoS Medicine  2012;9(12):e1001358.
Cobelens and colleagues systematically reviewed research on implementation and cost-effectiveness of the WHO-recommended interventions for tuberculosis.
Background
Several interventions for tuberculosis (TB) control have been recommended by the World Health Organization (WHO) over the past decade. These include isoniazid preventive therapy (IPT) for HIV-infected individuals and household contacts of infectious TB patients, diagnostic algorithms for rule-in or rule-out of smear-negative pulmonary TB, and programmatic treatment for multidrug-resistant TB. There is no systematically collected data on the type of evidence that is publicly available to guide the scale-up of these interventions in low- and middle-income countries. We investigated the availability of published evidence on their effectiveness, delivery, and cost-effectiveness that policy makers need for scaling-up these interventions at country level.
Methods and Findings
PubMed, Web of Science, EMBASE, and several regional databases were searched for studies published from 1 January 1990 through 31 March 2012 that assessed health outcomes, delivery aspects, or cost-effectiveness for any of these interventions in low- or middle-income countries. Selected studies were evaluated for their objective(s), design, geographical and institutional setting, and generalizability. Studies reporting health outcomes were categorized as primarily addressing efficacy or effectiveness of the intervention. These criteria were used to draw landscapes of published research. We identified 59 studies on IPT in HIV infection, 14 on IPT in household contacts, 44 on rule-in diagnosis, 19 on rule-out diagnosis, and 72 on second-line treatment. Comparative effectiveness studies were relatively few (n = 9) and limited to South America and sub-Saharan Africa for IPT in HIV-infection, absent for IPT in household contacts, and rare for second-line treatment (n = 3). Evaluations of diagnostic and screening algorithms were more frequent (n = 19) but geographically clustered and mainly of non-comparative design. Fifty-four studies evaluated ways of delivering these interventions, and nine addressed their cost-effectiveness.
Conclusions
There are substantial gaps in the published evidence needed for scale-up of these five WHO-recommended TB interventions at country level, which for many countries possibly precludes program-wide implementation of these interventions. There is a strong need for rigorous operational research studies to be carried out in programmatic settings to inform the best use of existing and new interventions in TB control.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Tuberculosis (TB), caused by Mycobacterium tuberculosis, is curable and preventable, but according to the World Health Organization (WHO), in 2011, 8.7 million people had symptoms of TB (usually a productive cough and fever) and 1.4 million people—95% from low- and middle-income countries—died from TB. TB is also the leading cause of death in people with HIV worldwide, and in 2010 about 10 million children were orphaned as a result of their parents dying from TB. To help reduce the considerable global burden of TB, a global initiative called the Stop TB Partnership, led by WHO, has implemented a strategy to reduce deaths from TB by 50% by 2015—even greater than the target of Millennium Development Goal 6 (to reverse the increase in TB incidence by 2015).
Why Was This Study Done?
Over the past few years, WHO has recommended that countries implement several interventions to help control the spread of tuberculosis through measures to improve prevention, diagnosis, and treatment. Five such interventions currently recommended by WHO are: treatment with isoniazid to prevent TB among people who are HIV positive, and also among household contacts of people infected with TB; the use of clinical pathways (algorithms) for diagnosing TB in people accessing health care who have a negative smear test—the most commonly used diagnostic test, which relies on sputum samples—(“rule-in algorithms”); screening algorithms for excluding TB in people who have HIV (“rule-out algorithms”); and finally, provision of second-line treatment for multidrug-resistant tuberculosis (a form of TB that does not respond to the most commonly used drugs) under programmatic conditions. The effectiveness of these interventions, their costs, and the practicalities of implementation are all important information for countries seeking to control TB following the WHO guidelines, but little is known about the availability of this information. Therefore, in this study the researchers systematically reviewed published studies to find evidence of the effectiveness of each of these interventions when implemented in routine practice, and also for additional information on the setting and conditions of implemented interventions, which might be useful to other countries.
What Did the Researchers Do and Find?
Using a specific search strategy, the researchers comprehensively searched through several key databases of publications, including regional databases, to identify 208 (out of 11,489 found initially) suitable research papers published between January 1990 and March 2012. For included studies, the researchers also noted the geographical location and setting and the type and design of study.
Of the 208 included studies, 59 focused on isoniazid prevention therapy in HIV infection, and only 14 on isoniazid prevention therapy for household contacts. There were 44 studies on “rule-in” clinical diagnosis, 19 on “rule-out” clinical diagnosis, and 72 studies on second-line treatment for TB. Studies on each intervention had some weaknesses, and overall, researchers found that there were very few real-world studies reporting on the effectiveness of interventions in program settings (rather than under optimal conditions in research settings). Few studies evaluated the methods used to implement the intervention or addressed delivery and operational issues (such as adherence to treatment), and there were limited economic evaluations of the recommended interventions. Furthermore, the researchers found that in general, the South Asian region was poorly represented.
What Do These Findings Mean?
These findings suggest that there is limited evidence on effectiveness, delivery, and cost-effectiveness to guide the scale-up of the five WHO-recommended interventions to control tuberculosis in low- and middle-income countries, despite the urgent need for such interventions to be implemented. The poor evidence base identified in this review highlights the tension between the decision to adopt the recommendation and its implementation adapted to local circumstances, and may be an important reason why these interventions are not implemented in many countries. This study also suggests that creative thinking is necessary to address the gaps between WHO recommendations and global health policy on new interventions and their real-world implementation in country-wide TB control programs. Future research should focus more on operational studies, the results of which should be made publicly available, and researchers, donors, and medical journals could perhaps reconsider their priorities to help bridge the knowledge gap identified in this study.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001358.
WHO has a wide range of information about TB and research on TB, including more about the STOP TB strategy and the STOP TB Partnership
The UN website has more information about MDG 6
The Global Fund to Fight AIDS, Tuberculosis and Malaria has specific information about progress on TB control
doi:10.1371/journal.pmed.1001358
PMCID: PMC3525528  PMID: 23271959
10.  Risk Stratification in Acute Heart Failure: Rationale and Design of the STRATIFY and DECIDE Studies 
American Heart Journal  2012;164(6):825-834.
A critical challenge for physicians facing patients presenting with signs and symptoms of acute heart failure (AHF) is how and where to best manage them. Currently, most patients evaluated for AHF are admitted to the hospital, yet not all warrant inpatient care. Up to 50% of admissions could be potentially avoided and many admitted patients could be discharged after a short period of observation and treatment. Methods for identifying patients that can be sent home early are lacking. Improving the physician’s ability to identify and safely manage low-risk patients is essential to avoiding unnecessary use of hospital beds.
Two studies (STRATIFY and DECIDE) have been funded by the National Heart Lung and Blood Institute with the goal of developing prediction rules to facilitate early decision making in AHF. Using prospectively gathered evaluation and treatment data from the acute setting (STRATIFY) and early inpatient stay (DECIDE), rules will be generated to predict risk for death and serious complications. Subsequent studies will be designed to test the external validity, utility, generalizability and cost-effectiveness of these prediction rules in different acute care environments representing racially and socioeconomically diverse patient populations.
A major innovation is prediction of 5-day as well as 30-day outcomes, overcoming the limitation that 30-day outcomes are highly dependent on unpredictable, post-visit patient and provider behavior. A novel aspect of the proposed project is the use of a comprehensive cardiology review to correctly assign post-treatment outcomes to the acute presentation. Finally, a rigorous analysis plan has been developed to construct the prediction rules that will maximally extract both the statistical and clinical properties of every data element. Upon completion of this study we will subsequently externally test the prediction rules in a heterogeneous patient cohort.
doi:10.1016/j.ahj.2012.07.033
PMCID: PMC3511776  PMID: 23194482
11.  Risk stratification in critical limb ischemia: derivation and validation of a model to predict amputation-free survival using multi-center surgical outcomes data
Patients with critical limb ischemia (CLI) are a heterogeneous population with respect to risk for mortality and limb loss, complicating clinical decision-making. Endovascular options, as compared to bypass, offer a tradeoff between reduced procedural risk and inferior durability. Risk stratified data predictive of amputation-free survival (AFS) may improve clinical decision making and allow for better assessment of new technology in the CLI population.
Methods
This was a retrospective analysis of prospectively collected data from patients who underwent infrainguinal vein bypass surgery for CLI. Two datasets were used: the PREVENT III randomized trial (n=1404) and a multicenter registry (n=716) from 3 distinct vascular centers (2 academic, 1 community-based). The PREVENT III cohort was randomly assigned to a derivation set (n=953) and to a validation set (n=451). The primary endpoint was AFS. Predictors of AFS identified on univariate screen (inclusion threshold, p<0.20) were included in a stepwise selection Cox model. The resulting 5 significant predictors were assigned an integer score to stratify patients into 3 risk groups. The prediction rule was internally validated in the PREVENT III validation set and externally validated in the multicenter cohort.
Results
The estimated 1 year AFS in the derivation, internal validation, and external validation sets were 76.3%, 72.5%, and 77.0%, respectively. In the derivation set, dialysis (HR 2.81, p<.0001), tissue loss (HR 2.22, p=.0004), age ≥75 (HR 1.64, p=.001), hematocrit ≤30 (HR 1.61, p=.012), and advanced CAD (HR 1.41, p=.021) were significant predictors for AFS in the multivariable model. An integer score, derived from the β coefficients, was used to generate 3 risk categories (low ≤ 3 [44.4% of cohort], medium 4–7 [46.7% of cohort], high ≥8 [8.8% of cohort]). Stratification of the patients, in each dataset, according to risk category yielded 3 significantly different Kaplan-Meier estimates for one year AFS (86%, 73%, and 45% for low, medium, and high risk groups respectively). For a given risk category, the AFS estimate was consistent between the derivation and validation sets.
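The scoring step reported above, integer points derived from the β coefficients (the natural logarithms of the hazard ratios) and summed into low (≤3), medium (4–7) and high (≥8) categories, can be sketched as follows. The scaling constant used to turn each ln(HR) into points is an assumption for illustration, so the printed point values are not necessarily the exact PIII risk score weights.

```python
from math import log

# Hazard ratios reported in the derivation set (entry 11 above).
HAZARD_RATIOS = {
    "dialysis": 2.81,
    "tissue loss": 2.22,
    "age >= 75": 1.64,
    "hematocrit <= 30": 1.61,
    "advanced CAD": 1.41,
}

# One common way to build an integer score: scale each beta = ln(HR) by a
# constant and round. The 0.25 divisor here is an illustrative assumption,
# not necessarily the scaling the authors used.
POINTS = {k: round(log(hr) / 0.25) for k, hr in HAZARD_RATIOS.items()}

def risk_category(present: set) -> str:
    """Sum the points for the predictors present and bin into the published
    categories: low <= 3, medium 4-7, high >= 8."""
    score = sum(POINTS[p] for p in present)
    if score <= 3:
        return f"score {score}: low risk"
    if score <= 7:
        return f"score {score}: medium risk"
    return f"score {score}: high risk"

print(POINTS)
print(risk_category({"tissue loss"}))                           # low
print(risk_category({"dialysis", "tissue loss", "age >= 75"}))  # high
```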
Conclusion
Among patients selected to undergo surgical bypass for infrainguinal disease, this parsimonious risk stratification model reliably identified a category of CLI patients with a >50% chance of death or major amputation at 1 year. Calculation of a “PIII risk score” may be useful for surgical decision making and for clinical trial designs in the CLI population.
doi:10.1016/j.jvs.2008.07.062
PMCID: PMC2765219  PMID: 19118735
12.  The Project Data Sphere Initiative: Accelerating Cancer Research by Sharing Data 
The Oncologist  2015;20(5):464-e20.
By providing access to large, late-phase, cancer-trial data sets, the Project Data Sphere initiative has the potential to transform cancer research by optimizing research efficiency and accelerating progress toward meaningful improvements in cancer care. This type of platform provides opportunities for unique research projects that can examine relatively neglected areas and that can construct models necessitating large amounts of detailed data.
Background.
In this paper, we provide background and context regarding the potential for a new data-sharing platform, the Project Data Sphere (PDS) initiative, funded by financial and in-kind contributions from the CEO Roundtable on Cancer, to transform cancer research and improve patient outcomes. Given the relatively modest decline in cancer death rates over the past several years, a new research paradigm is needed to accelerate therapeutic approaches for oncologic diseases. Phase III clinical trials generate large volumes of potentially usable information, often on hundreds of patients, including patients treated with standard of care therapies (i.e., controls). Both nationally and internationally, a variety of stakeholders have pursued data-sharing efforts to make individual patient-level clinical trial data available to the scientific research community.
Potential Benefits and Risks of Data Sharing.
For researchers, shared data have the potential to foster a more collaborative environment, to answer research questions in a shorter time frame than traditional randomized control trials, to reduce duplication of effort, and to improve efficiency. For industry participants, use of trial data to answer additional clinical questions could increase research and development efficiency and guide future projects through validation of surrogate end points, development of prognostic or predictive models, selection of patients for phase II trials, stratification in phase III studies, and identification of patient subgroups for development of novel therapies. Data transparency also helps promote a public image of collaboration and altruism among industry participants. For patient participants, data sharing maximizes their contribution to public health and increases access to information that may be used to develop better treatments. Concerns about data-sharing efforts include protection of patient privacy and confidentiality. To alleviate these concerns, data sets are deidentified to maintain anonymity. To address industry concerns about protection of intellectual property and competitiveness, we illustrate several models for data sharing with varying levels of access to the data and varying relationships between trial sponsors and data access sponsors.
The Project Data Sphere Initiative.
PDS is an independent initiative of the CEO Roundtable on Cancer Life Sciences Consortium, built to voluntarily share, integrate, and analyze comparator arms of historical cancer clinical trial data sets to advance future cancer research. The aim is to provide a neutral, broad-access platform for industry and academia to share raw, deidentified data from late-phase oncology clinical trials using comparator-arm data sets. These data are likely to be hypothesis generating or hypothesis confirming but, notably, do not take the place of performing a well-designed trial to address a specific hypothesis. Prospective providers of data to PDS complete and sign a data sharing agreement that includes a description of the data they propose to upload, and then they follow straightforward instructions on the website for uploading their deidentified data. The SAS Institute has also collaborated with the initiative to provide built-in analytic tools accessible within the website itself.
As of October 2014, the PDS website has data available from 14 cancer clinical trials covering 9,000 subjects, with the hope of expanding the database to more than 25,000 subject accruals within the next year. PDS differentiates itself from other data-sharing initiatives by its degree of openness, requiring submission of only a brief application with background information about the individual requesting access and agreement to the terms of use. Data from several different sponsors may be pooled to develop a comprehensive cohort for analysis. In order to protect patient privacy, data providers in the U.S. are responsible for deidentifying data according to standards set forth by the Privacy Rule of the U.S. Health Insurance Portability and Accountability Act of 1996.
Using Data Sharing to Improve Outcomes in Cancer: The “Prostate Cancer Challenge.”
Control-arm data of several studies among patients with metastatic castration-resistant prostate cancer (mCRPC) are currently available through PDS. These data sets have multiple potential uses. The “Prostate Cancer Challenge” will ask the cancer research community to use clinical trial data deposited in the PDS website to address key research questions regarding mCRPC.
General themes that could be explored by the cancer community are described in this article: prognostic models evaluating the influence of pretreatment factors on survival and patient-reported outcomes; comparative effectiveness research evaluating the efficacy of standard of care therapies, as illustrated in our companion article comparing mitoxantrone plus prednisone with prednisone alone; effects of practice variation in dose, frequency, and duration of therapy; level of patient adherence to elements of trial protocols to inform the design of future clinical trials; and age of subjects, regional differences in health care, and other confounding factors that might affect outcomes.
Potential Limitations and Methodological Challenges.
The number of data sets available and the lack of experimental-arm data limit the potential scope of research using the current PDS. The number of trials is expected to grow substantially over the next year and may include multiple cancer settings, such as breast, colorectal, lung, hematologic malignancy, and bone marrow transplantation. Other potential limitations include the retrospective nature of the data analyses performed using PDS and the limited generalizability of findings, given that clinical trials are often conducted among younger, healthier, and less racially diverse patient populations. Methodological challenges exist when combining individual patient data from multiple clinical trials; however, advancements in statistical methods for secondary database analysis, such as propensity score matching, offer many tools for reanalyzing data arising from disparate trials. Despite these concerns, few if any comparable data sets include this level of detail across multiple clinical trials and populations.
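As a concrete illustration of the kind of secondary-analysis method mentioned above, the following sketch shows propensity score matching used to balance baseline covariates when pooling patients from two hypothetical trials. The data frame, column names, and matching choices (1:1 nearest neighbour on the propensity score, no caliper) are assumptions for illustration and are not part of the PDS platform.

```python
# Minimal sketch of propensity score matching for pooling control-arm data
# from different trials (the technique named above). The data frame and its
# column names are hypothetical; this is not PDS code.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(68, 8, n),
    "psa": rng.lognormal(3, 1, n),
    "ecog": rng.integers(0, 3, n),
    "trial_b": rng.integers(0, 2, n),   # 1 = came from trial B, 0 = trial A
})

covariates = ["age", "psa", "ecog"]

# 1. Estimate the propensity of belonging to trial B given baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["trial_b"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Match each trial-B patient to the nearest trial-A patient on the
#    propensity score (1:1 nearest neighbour, no caliper, for simplicity).
treated = df[df["trial_b"] == 1]
control = df[df["trial_b"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]

print(len(treated), "trial-B patients matched to", len(matched_controls), "controls")
```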
Conclusion.
Access to large, late-phase, cancer-trial data sets has the potential to transform cancer research by optimizing research efficiency and accelerating progress toward meaningful improvements in cancer care. This type of platform provides opportunities for unique research projects that can examine relatively neglected areas and that can construct models necessitating large amounts of detailed data. The full potential of PDS will be realized only when multiple tumor types and larger numbers of data sets are available through the website.
doi:10.1634/theoncologist.2014-0431
PMCID: PMC4425388  PMID: 25876994
Project Data Sphere; Data sharing; Prostate cancer; Comparative effectiveness research
13.  Defining Catastrophic Costs and Comparing Their Importance for Adverse Tuberculosis Outcome with Multi-Drug Resistance: A Prospective Cohort Study, Peru 
PLoS Medicine  2014;11(7):e1001675.
Tom Wingfield and colleagues investigate the relationship between catastrophic costs and tuberculosis outcomes for patients receiving free tuberculosis care in Peru.
Please see later in the article for the Editors' Summary
Background
Even when tuberculosis (TB) treatment is free, hidden costs incurred by patients and their households (TB-affected households) may worsen poverty and health. Extreme TB-associated costs have been termed “catastrophic” but are poorly defined. We studied TB-affected households' hidden costs and their association with adverse TB outcome to create a clinically relevant definition of catastrophic costs.
Methods and Findings
From 26 October 2002 to 30 November 2009, TB patients (n = 876, 11% with multi-drug-resistant [MDR] TB) and healthy controls (n = 487) were recruited to a prospective cohort study in shantytowns in Lima, Peru. Patients were interviewed prior to and every 2–4 wk throughout treatment, recording direct (household expenses) and indirect (lost income) TB-related costs. Costs were expressed as a proportion of the household's annual income. In poorer households, costs were lower but constituted a higher proportion of the household's annual income: 27% (95% CI = 20%–43%) in the least-poor households versus 48% (95% CI = 36%–50%) in the poorest. Adverse TB outcome was defined as death, treatment abandonment or treatment failure during therapy, or recurrence within 2 y. Of patients with a defined treatment outcome, 23% (166/725) had an adverse outcome. Total costs ≥20% of household annual income were defined as catastrophic because this threshold was most strongly associated with adverse TB outcome. Catastrophic costs were incurred by 345 households (39%). Having MDR TB was associated with a higher likelihood of incurring catastrophic costs (54% [95% CI = 43%–61%] versus 38% [95% CI = 34%–41%], p<0.003). Adverse outcome was independently associated with MDR TB (odds ratio [OR] = 8.4 [95% CI = 4.7–15], p<0.001), previous TB (OR = 2.1 [95% CI = 1.3–3.5], p = 0.005), days too unwell to work pre-treatment (OR = 1.01 [95% CI = 1.00–1.01], p = 0.02), and catastrophic costs (OR = 1.7 [95% CI = 1.1–2.6], p = 0.01). The adjusted population attributable fraction of adverse outcomes explained by catastrophic costs was 18% (95% CI = 6.9%–28%), similar to that of MDR TB (20% [95% CI = 14%–25%]). Sensitivity analyses demonstrated that existing catastrophic costs thresholds (≥10% or ≥15% of household annual income) were not associated with adverse outcome in our setting. Study limitations included not measuring certain “dis-saving” variables (including selling household items) and gathering only 6 mo of costs-specific follow-up data for MDR TB patients.
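The threshold-selection idea described above (choosing the cost cut-off most strongly associated with adverse outcome) can be sketched as a simple scan over candidate thresholds, computing the odds ratio for adverse outcome at each. The data below are simulated, and the use of a crude odds ratio is an illustrative simplification of the study's regression-based analysis.

```python
# Sketch of the threshold-selection idea described above: scan candidate
# cut-offs for "catastrophic" costs (as a share of household annual income)
# and keep the one most strongly associated with adverse TB outcome. The data
# below are simulated; this is not the Peru cohort.
import numpy as np

rng = np.random.default_rng(1)
n = 725
cost_share = rng.gamma(shape=2.0, scale=0.12, size=n)          # costs / annual income
p_adverse = 0.10 + 0.5 * np.clip(cost_share, 0, 0.6)           # toy dose-response
adverse = rng.random(n) < p_adverse

def odds_ratio(exposed, outcome):
    a = np.sum(exposed & outcome)        # exposed, adverse outcome
    b = np.sum(exposed & ~outcome)       # exposed, no adverse outcome
    c = np.sum(~exposed & outcome)       # unexposed, adverse outcome
    d = np.sum(~exposed & ~outcome)      # unexposed, no adverse outcome
    return (a * d) / (b * c)

for threshold in (0.10, 0.15, 0.20, 0.25, 0.30):
    catastrophic = cost_share >= threshold
    print(f"threshold {threshold:.0%}: OR = {odds_ratio(catastrophic, adverse):.2f}")
```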
Conclusions
Despite free TB care, having TB disease was expensive for impoverished TB patients in Peru. Incurring higher relative costs was associated with adverse TB outcome. The population attributable fraction indicated that catastrophic costs and MDR TB were associated with similar proportions of adverse outcomes. Thus TB is a socioeconomic as well as infectious problem, and TB control interventions should address both the economic and clinical aspects of this disease.
Editors' Summary
Background
Caused by the infectious microbe Mycobacterium tuberculosis, tuberculosis (or TB) is a global health problem. In 2012, an estimated 8.6 million people fell ill with TB, and 1.3 million were estimated to have died because of the disease. Poverty is widely recognized as an important risk factor for TB, and developing nations shoulder a disproportionate burden of both poverty and TB disease. For example, in Lima (the capital of Peru), the incidence of TB follows the poverty map, sparing residents living in rich areas of the city while spreading among poorer residents that live in overcrowded households.
The Peruvian government, non-profit organizations, and the World Health Organization (WHO) have extended healthcare programs to provide free diagnosis and treatment for TB and drug-resistant strains of TB in Peru, but rates of new TB cases remain high. For example, in Ventanilla (an area of 16 shantytowns located in northern Lima), the rate of infection was higher during the study period, at 162 new cases per 100,000 people per year, than the national average. About one-third of the 277,895 residents of Ventanilla live on under US$1 per day.
Why Was This Study Done?
Poverty increases the risks associated with contracting TB infection, but the disease also affects the most economically productive age group, and the income of TB-affected households often decreases post-diagnosis, exacerbating poverty. A recent WHO consultation report proposed a target of eradicating catastrophic costs for TB-affected families by 2035, but hidden TB-related costs remain understudied, and there is no international consensus defining catastrophic costs incurred by patients and households affected by TB. Lost income and the cost of transport are among hidden costs associated with free treatment programs; these costs and their potential impact on patients and their households are not well defined. Here the researchers sought to clarify and characterize TB-related costs and explore whether there is a relationship between the hidden costs associated with free TB treatment programs and the likelihood of completing treatment and becoming cured of TB.
What Did the Researchers Do and Find?
Over a seven-year period (2002–2009), the researchers recruited 876 study participants with TB diagnosed at health posts located in Ventanilla. To provide a comparative control group, a sample of 487 healthy individuals was also recruited to participate. Participants were interviewed prior to treatment, and households' TB-related direct expenses and indirect expenses (lost income attributed to TB) were recorded every 2–4 wk. Data were collected during scheduled household visits.
TB patients were poorer than controls, and analysis of the data showed that accessing free TB care was expensive for TB patients, especially those with multi-drug-resistant (MDR) TB. For TB patients, total expenses before treatment were similar to those during treatment (1.1 versus 1.2 times the same household's monthly income), despite care being free. Even though direct expenses (for example, costs of medical examinations and medicines other than anti-TB therapy) were lower in the poorest households, their total expenses (direct and indirect) made up a greater proportion of their household annual income: 48% for the poorest households compared to 27% in the least-poor households.
The researchers defined costs that were equal to or above one-fifth (20%) of household annual income as catastrophic because this threshold marked the greatest association with adverse treatment outcomes such as death, abandoning treatment, failing to respond to treatment, or TB recurrence. By calculating the population attributable fraction—the proportional reduction in population adverse treatment outcomes that could occur if a risk factor was reduced to zero—the authors estimate that adverse TB outcomes explained by catastrophic costs and MDR TB were similar: 18% for catastrophic costs and 20% for MDR TB.
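The population attributable fraction can be illustrated with a short worked calculation. One common case-based formula is PAF = p_c × (RR − 1)/RR, where p_c is the proportion of cases exposed to the risk factor and RR is approximated here by the adjusted odds ratio; the exposure proportions among cases used below are illustrative assumptions, and the authors' exact method may differ.

```python
# Worked illustration of the population attributable fraction (PAF) described
# above, using Miettinen's case-based formula PAF = p_c * (RR - 1) / RR, where
# p_c is the proportion of cases (adverse outcomes) exposed to the risk factor
# and RR is approximated by the adjusted odds ratio. The adjusted ORs come
# from the abstract; the exposure proportions among cases are illustrative
# assumptions.

def paf(p_cases_exposed: float, relative_risk: float) -> float:
    return p_cases_exposed * (relative_risk - 1.0) / relative_risk

# Catastrophic costs: adjusted OR = 1.7; assume ~45% of adverse outcomes
# occurred in households with catastrophic costs (illustrative).
print(f"PAF, catastrophic costs ~ {paf(0.45, 1.7):.0%}")   # ~19%, cf. reported 18%

# MDR TB: adjusted OR = 8.4; assume ~23% of adverse outcomes were in MDR TB
# patients (illustrative).
print(f"PAF, MDR TB            ~ {paf(0.23, 8.4):.0%}")    # ~20%, cf. reported 20%
```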
What Do These Findings Mean?
The findings of this study indicate a potential role for social protection as a means to improve TB disease control and health, as well as defining a novel, evidence-based threshold for catastrophic costs for TB-affected households of 20% or more of annual income. Addressing the economic impact of diagnosis and treatment in impoverished communities may increase the odds of curing TB.
Study limitations included gathering only six months of follow-up data on costs for each participant and not recording “dissavings,” such as the selling of household items in response to financial shock. Because the study was observational, the authors were not able to determine the direction of the association between catastrophic costs and TB outcome. Even so, the study indicates that TB is a socioeconomic as well as an infectious problem, and that TB control interventions should address both the economic and clinical aspects of the disease.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001675.
The World Health Organization provides information on all aspects of tuberculosis, including the Global Tuberculosis Report 2013
The US Centers for Disease Control and Prevention has information about tuberculosis
Médecins Sans Frontières's TB&ME blog provides patients' stories of living with MDR TB
TB Alert, a UK-based charity that promotes TB awareness worldwide, has information on TB in several European, African, and Asian languages
More information is available about the Innovation For Health and Development (IFHAD) charity and its research team's work in Peru
doi:10.1371/journal.pmed.1001675
PMCID: PMC4098993  PMID: 25025331
14.  A Comparison of Cost Effectiveness Using Data from Randomized Trials or Actual Clinical Practice: Selective Cox-2 Inhibitors as an Example 
PLoS Medicine  2009;6(12):e1000194.
Tjeerd-Pieter van Staa and colleagues estimate the likely cost effectiveness of selective Cox-2 inhibitors prescribed during routine clinical practice, as compared to the cost effectiveness predicted from randomized controlled trial data.
Background
Data on absolute risks of outcomes and patterns of drug use in cost-effectiveness analyses are often based on randomised clinical trials (RCTs). The objective of this study was to evaluate the external validity of published cost-effectiveness studies by comparing the data used in these studies (typically based on RCTs) to observational data from actual clinical practice. Selective Cox-2 inhibitors (coxibs) were used as an example.
Methods and Findings
The UK General Practice Research Database (GPRD) was used to estimate the exposure characteristics and individual probabilities of upper gastrointestinal (GI) events during current exposure to nonsteroidal anti-inflammatory drugs (NSAIDs) or coxibs. A basic cost-effectiveness model was developed evaluating two alternative strategies: prescription of a conventional NSAID or coxib. Outcomes included upper GI events as recorded in GPRD and hospitalisation for upper GI events recorded in the national registry of hospitalisations (Hospital Episode Statistics) linked to GPRD. Prescription costs were based on the prescribed number of tablets as recorded in GPRD and the 2006 cost data from the British National Formulary. The study population included over 1 million patients prescribed conventional NSAIDs or coxibs. Only a minority of patients used the drugs long-term and daily (34.5% of conventional NSAIDs and 44.2% of coxibs), whereas coxib RCTs required daily use for at least 6–9 months. The mean cost of preventing one upper GI event as recorded in GPRD was US$104k (ranging from US$64k with long-term daily use to US$182k with intermittent use) and US$298k for hospitalizations. The mean costs (for GPRD events) over calendar time were US$58k during 1990–1993 and US$174k during 2002–2005. Using RCT data rather than GPRD data for event probabilities, the mean cost was US$16k with the VIGOR RCT and US$20k with the CLASS RCT.
Conclusions
The published cost-effectiveness analyses of coxibs lacked external validity, did not represent patients in actual clinical practice, and should not have been used to inform prescribing policies. External validity should be an explicit requirement for cost-effectiveness analyses.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Before a new treatment for a specific disease becomes an established part of clinical practice, it goes through a long process of development and clinical testing. This process starts with extensive studies of the new treatment in the laboratory and in animals and then moves into clinical trials. The most important of these trials are randomized controlled trials (RCTs), studies in which the efficacy and safety of the new drug and an established drug are compared by giving the two drugs to randomized groups of patients with the disease. The final hurdle that a drug or any other healthcare technology often has to jump before being adopted for widespread clinical use is a health technology assessment, which aims to provide policymakers, clinicians, and patients with information about the balance between the clinical and financial costs of the drug and its benefits (its cost-effectiveness). In England and Wales, for example, the National Institute for Health and Clinical Excellence (NICE), which promotes clinical excellence and the effective use of resources within the National Health Service, routinely commissions such assessments.
Why Was This Study Done?
Data on the risks of various outcomes associated with a new treatment are needed for cost-effectiveness analyses. These data are usually obtained from RCTs, but although RCTs are the best way of determining a drug's potency in experienced hands under ideal conditions (its efficacy), they may not be a good way to determine a drug's success in an average clinical setting (its effectiveness). In this study, the researchers compare the data from RCTs that have been used in several published cost-effectiveness analyses of a class of drugs called selective cyclooxygenase-2 inhibitors (“coxibs”) with observational data from actual clinical practice. They then ask whether the published cost-effectiveness studies, which generally used RCT data, should have been used to inform coxib prescribing policies. Coxibs are nonsteroidal anti-inflammatory drugs (NSAIDs) that were developed in the 1990s to treat arthritis and other chronic inflammatory conditions. Conventional NSAIDs can cause gastric ulcers and bleeding from the gut (upper gastrointestinal events) if taken for a long time. The use of coxibs avoids this problem.
What Did the Researchers Do and Find?
The researchers extracted data on the real-life use of conventional NSAIDs and coxibs and on the incidence of upper gastrointestinal events from the UK General Practice Research Database (GPRD) and from the national registry of hospitalizations. Only a minority of the million patients who were prescribed conventional NSAIDs (average cost per prescription US$17.80) or coxibs (average cost per prescription US$47.04) for a variety of inflammatory conditions took them on a long-term daily basis, whereas in the RCTs of coxibs, patients with a few carefully defined conditions took NSAIDs daily for at least 6–9 months. The researchers then developed a cost-effectiveness model to evaluate the costs of the alternative strategies of prescribing a conventional NSAID or a coxib. The mean additional cost of preventing one gastrointestinal event recorded in the GPRD by using a coxib instead of an NSAID, they report, was US$104,000; the mean cost of preventing one hospitalization for such an event was US$298,000. By contrast, the mean cost of preventing one gastrointestinal event by using a coxib instead of an NSAID calculated from data obtained in RCTs was about US$20,000.
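The structure of these "cost per event prevented" figures is simply the extra drug cost per patient divided by the absolute reduction in event risk. The sketch below uses the prescription costs quoted above, but the prescription counts and event probabilities are hypothetical placeholders, since the summary does not report them directly.

```python
# Back-of-the-envelope structure of a "cost per GI event prevented" figure:
# extra drug cost per patient divided by the absolute risk reduction. The
# prescription costs come from the summary above; the prescription counts and
# event probabilities are hypothetical placeholders.

cost_nsaid_rx = 17.80       # US$, average cost per conventional NSAID prescription
cost_coxib_rx = 47.04       # US$, average cost per coxib prescription
rx_per_patient_year = 10    # assumed number of prescriptions per patient-year

p_event_nsaid = 0.004       # assumed annual upper GI event risk on an NSAID
p_event_coxib = 0.002       # assumed annual risk on a coxib

extra_cost = (cost_coxib_rx - cost_nsaid_rx) * rx_per_patient_year
risk_reduction = p_event_nsaid - p_event_coxib

cost_per_event_prevented = extra_cost / risk_reduction
print(f"US${cost_per_event_prevented:,.0f} per upper GI event prevented")
```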
What Do These Findings Mean?
These findings suggest that the published cost-effectiveness analyses of coxibs greatly underestimate the cost of preventing gastrointestinal events by replacing prescriptions of conventional NSAIDs with prescriptions of coxibs. That is, if data from actual clinical practice had been used in cost-effectiveness analyses rather than data from RCTs, the conclusions of the published cost-effectiveness analyses of coxibs would have been radically different and may have led to different prescribing guidelines for this class of drug. More generally, these findings provide a good illustration of how important it is to ensure that cost-effectiveness analyses have “external” validity by using realistic estimates for event rates and costs rather than relying on data from RCTs that do not always reflect the real-world situation. The researchers suggest, therefore, that health technology assessments should move from evaluating cost-efficacy in ideal populations with ideal interventions to evaluating cost-effectiveness in real populations with real interventions.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000194.
The UK National Institute for Health Research provides information about health technology assessment
The National Institute for Health and Clinical Excellence Web site describes how this organization provides guidance on promoting good health within the England and Wales National Health Service
Information on the UK General Practice Research Database is available
Wikipedia has pages on health technology assessment and on selective cyclooxygenase-2 inhibitors (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1000194
PMCID: PMC2779340  PMID: 19997499
15.  Performance of Thirteen Clinical Rules to Distinguish Bacterial and Presumed Viral Meningitis in Vietnamese Children 
PLoS ONE  2012;7(11):e50341.
Background and Purpose
Successful outcomes from bacterial meningitis require rapid antibiotic treatment; however, unnecessary treatment of viral meningitis may lead to increased toxicities and expense. Thus, improved diagnostics are required to maximize treatment and minimize side effects and cost. Thirteen clinical decision rules have been reported to distinguish bacterial from viral meningitis. However, few rules have been tested and compared in a single study, and several rules are yet to be tested by independent researchers or in pediatric populations. Thus, simultaneous testing and comparison of these rules is required to enable clinicians to select an optimal diagnostic rule for bacterial meningitis in settings and populations similar to ours.
Methods
A retrospective cross-sectional study was conducted at the Infectious Department of Pediatric Hospital Number 1, Ho Chi Minh City, Vietnam. The performance of the clinical rules was evaluated by the area under the receiver operating characteristic curve (ROC-AUC) using the method of DeLong, and specificities were compared using the McNemar test.
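A rough sketch of this evaluation workflow is shown below: each rule's ROC-AUC is computed and two rules are compared on the same patients. For simplicity, the paired comparison uses a bootstrap of the AUC difference rather than DeLong's closed-form variance, together with statsmodels' McNemar test on paired specificities; all data and rule scores are simulated.

```python
# Sketch of the evaluation described above: compute each rule's ROC-AUC and
# compare two rules on the same patients. DeLong's closed-form test is not
# implemented here; a paired bootstrap of the AUC difference is used instead,
# together with statsmodels' McNemar test for paired specificities. All data
# are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(42)
n = 129
y = rng.integers(0, 2, n)                       # 1 = bacterial, 0 = presumed viral
score_a = y * 1.2 + rng.normal(0, 1, n)         # continuous score from rule A
score_b = y * 0.8 + rng.normal(0, 1, n)         # continuous score from rule B

print("AUC rule A:", roc_auc_score(y, score_a))
print("AUC rule B:", roc_auc_score(y, score_b))

# Paired bootstrap of the AUC difference (resample patients with replacement).
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    diffs.append(roc_auc_score(y[idx], score_a[idx]) -
                 roc_auc_score(y[idx], score_b[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: ({lo:.3f}, {hi:.3f})")

# McNemar test on specificity: among non-bacterial cases, compare whether the
# two (dichotomised) rules agree on calling them negative.
neg = y == 0
a_neg = score_a[neg] < 0.5                      # rule A correctly negative
b_neg = score_b[neg] < 0.5                      # rule B correctly negative
table = [[np.sum(a_neg & b_neg), np.sum(a_neg & ~b_neg)],
         [np.sum(~a_neg & b_neg), np.sum(~a_neg & ~b_neg)]]
print(mcnemar(table, exact=True))
```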
Results
Our study included 129 patients, of whom 80 had bacterial meningitis and 49 had presumed viral meningitis. Spanos's rule had the highest AUC at 0.938, but this was not significantly greater than that of the other rules. No rule provided 100% sensitivity with a specificity higher than 50%. Based on our calculation of theoretical sensitivity and specificity, we suggest that a perfect rule requires at least four independent variables that each possess a sensitivity and specificity higher than 85–90%.
Conclusions
No clinical decision rule provided an acceptable specificity (>50%) with 100% sensitivity when applied to our pediatric data set. More studies in Vietnam and other developing countries are required to develop and/or validate clinical rules, and better biomarkers are required to construct such a perfect rule.
doi:10.1371/journal.pone.0050341
PMCID: PMC3508924  PMID: 23209715
16.  The Effects of Mandatory Prescribing of Thiazides for Newly Treated, Uncomplicated Hypertension: Interrupted Time-Series Analysis 
PLoS Medicine  2007;4(7):e232.
Background
The purpose of our study was to evaluate the effects of a new reimbursement rule for antihypertensive medication that made thiazides mandatory first-line drugs for newly treated, uncomplicated hypertension. The objective of the new regulation was to reduce drug expenditures.
Methods and Findings
We conducted an interrupted time-series analysis on prescribing data before and after the new reimbursement rule for antihypertensive medication was put into effect. All patients started on antihypertensive medication in 61 general practices in Norway were included in the analysis. The new rule was put forward by the Ministry of Health and was approved by parliament. Adherence to the rule was monitored only minimally, and there were no penalties for non-adherence. Our primary outcome was the proportion of thiazide prescriptions among all prescriptions made for persons started on antihypertensive medication. Secondary outcomes included the proportion of patients who, within 4 mo, reached recommended blood-pressure goals and the proportion of patients who, within 4 mo, were not started on a second antihypertensive drug. We also compared drug costs before and after the intervention. During the baseline period, 10% of patients started on antihypertensive medication were given a thiazide prescription. This proportion rose steadily during the transition period, after which it remained stable at 25%. For other outcomes, no statistically significant differences were demonstrated. Achievement of treatment goals was slightly higher (56.6% versus 58.4%) after the new rule was introduced, and the prescribing of a second drug was slightly lower (24.0% versus 21.8%). Drug costs were reduced by an estimated Norwegian kroner 4.8 million (€0.58 million, US$0.72 million) in the first year, which is equivalent to Norwegian kroner 1.06 per inhabitant (€0.13, US$0.16).
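An interrupted time-series analysis of this kind is typically fitted as a segmented regression with terms for the baseline trend, an immediate level change at the intervention, and a change in trend afterwards. The sketch below uses simulated monthly thiazide-prescribing proportions rather than the Norwegian data, and the three study periods are only loosely mimicked.

```python
# Minimal segmented-regression sketch of an interrupted time series like the
# one described above: monthly proportion of thiazide prescribing with terms
# for baseline trend, a level change at the rule's introduction, and a change
# in trend afterwards. The monthly data are simulated, not the Norwegian data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

months = np.arange(24)                    # months 0-10 baseline, 11+ post-rule
post = (months >= 11).astype(int)
time_after = np.clip(months - 11, 0, None)

rng = np.random.default_rng(7)
# Flat 10% baseline, a 12-point jump at the rule change, and a small upward
# post-rule trend, plus noise (all values are illustrative).
thiazide_pct = 10 + 12 * post + 0.3 * time_after + rng.normal(0, 1.5, 24)

df = pd.DataFrame({"month": months, "post": post,
                   "time_after": time_after, "thiazide_pct": thiazide_pct})

model = smf.ols("thiazide_pct ~ month + post + time_after", data=df).fit()
print(model.params)   # 'post' ~ immediate level change, 'time_after' ~ trend change
```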
Conclusions
Prescribing of thiazides in Norway for uncomplicated hypertension more than doubled after a reimbursement rule requiring the use of thiazides as the first-choice therapy was put into effect. However, the resulting savings on drug expenditures were modest. There were no significant changes in the achievement of treatment goals or in the prescribing of a second antihypertensive drug.
Atle Fretheim and colleagues found that the prescribing of thiazides in Norway for uncomplicated hypertension more than doubled after a rule requiring their use as first-choice therapy was put into effect.
Editors' Summary
Background.
High blood pressure (hypertension) is a common medical condition, especially among elderly people. It has no obvious symptoms but can lead to heart attacks, heart failure, strokes, or kidney failure. It is diagnosed by measuring blood pressure—the force that blood moving around the body exerts on the inside of arteries (large blood vessels). Many factors affect blood pressure (which depends on the amount of blood being pumped round the body and on the size and condition of the arteries), but overweight people and individuals who eat fatty or salty food are at high risk of developing hypertension. Mild hypertension can often be corrected by making lifestyle changes, but many patients also take one or more antihypertensive agents. These include thiazide diuretics and several types of non-thiazide drugs, many of which reduce heart rate or contractility and/or dilate blood vessels.
Why Was This Study Done?
Antihypertensive agents are a major part of national drug expenditure in developed countries, where as many as one person in ten is treated for hypertension. The different classes of drugs are all effective, but their cost varies widely. Thiazides, for example, are a tenth of the price of many non-thiazide drugs. In Norway, the low use of thiazides recently led the government to impose a new reimbursement rule aimed at reducing public expenditure on antihypertensive drugs. Since March 2004, family doctors have been reimbursed for drug costs only if they prescribe thiazides as first-line therapy for uncomplicated hypertension, unless there are medical reasons for selecting other drugs. Adherence to the rule has not been monitored, and there is no penalty for non-adherence, so has this intervention changed prescribing practices? To find out, the researchers in this study analyzed Norwegian prescribing data before and after the new rule came into effect.
What Did the Researchers Do and Find?
The researchers analyzed the monthly antihypertensive drug–prescribing records of 61 practices around Oslo, Norway, between January 2003 and November 2003 (pre-intervention period), between December 2003 and February 2004 (transition period), and between March 2004 and January 2005 (post-intervention period). This type of study is called an “interrupted time series”. During the pre-intervention period, one in ten patients starting antihypertensive medication was prescribed a thiazide drug. This proportion gradually increased during the transition period before stabilizing at one in four patients throughout the post-intervention period. A slightly higher proportion of patients reached their recommended blood-pressure goal after the rule was introduced than before, and a slightly lower proportion needed to switch to a second drug class, but both these small differences may have been due to chance. Finally, the researchers estimated that the observed change in prescribing practices reduced drug costs per Norwegian by US$0.16 (€0.13) in the first year.
What Do These Findings Mean?
Past attempts to change antihypertensive-prescribing practices by trying to influence family doctors (for example, through education) have largely failed. By contrast, these findings suggest that imposing a change on them (in this case, by introducing a new reimbursement rule) can be effective (at least over the short term and in the practices included in the study), even when compliance with the change is not monitored and noncompliance is not penalized. However, despite a large shift towards prescribing thiazides, three-quarters of patients were still prescribed non-thiazide drugs (possibly because of doubts about the efficacy of thiazides as first-line drugs), which emphasizes how hard it is to change doctors' prescribing habits. Further studies are needed to investigate whether the approach examined in this study can effectively contain the costs of antihypertensive drugs (and of drugs used for other common medical conditions) in the long term and in other settings. Also, because the estimated reduction in drug costs produced by the intervention was relatively modest (although likely to increase over time as more patients start on thiazides), other ways to change prescribing practices and produce savings in national drug expenditures should be investigated.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040232.
MedlinePlus encyclopedia page on hypertension (in English and Spanish)
UK National Institute for Health and Clinical Excellence information on hypertension for patients, carers, and professionals
American Heart Association information for patients on high blood pressure
An open-access research article describing the potential savings of using thiazides as the first-choice antihypertensive drug
A previous study in Norway, published in PLoS Medicine, examined what happened when doctors were actively encouraged to make more use of thiazides. There was also an economic evaluation of what this achieved
doi:10.1371/journal.pmed.0040232
PMCID: PMC1904466  PMID: 17622192
17.  Investigating the performance and cost-effectiveness of the simple ultrasound-based rules compared to the risk of malignancy index in the diagnosis of ovarian cancer (SUBSONiC-study): protocol of a prospective multicenter cohort study in the Netherlands 
BMC Cancer  2015;15:482.
Background
Estimating the risk of malignancy is essential in the management of adnexal masses. An accurate differential diagnosis between benign and malignant masses will reduce morbidity and costs due to unnecessary operations, and will improve referral to a gynecologic oncologist for specialized cancer care, which improves outcome and overall survival. The Risk of Malignancy Index (RMI) is currently the most commonly used method in clinical practice, but it has a relatively low diagnostic accuracy (sensitivity 75–80% and specificity 85–90%). Recent reports show that other methods, such as simple ultrasound-based rules, subjective assessment, and (Diffusion Weighted) Magnetic Resonance Imaging, might be superior to the RMI in the pre-operative differentiation of adnexal masses.
Methods/Design
A prospective multicenter cohort study will be performed in the south of The Netherlands. A total of 270 women diagnosed with at least one pelvic mass suspected to be of ovarian origin, who will undergo surgery, will be enrolled. To characterize the adnexal masses, we will apply the Risk of Malignancy Index with a cut-off value of 200 and a two-step triage test consisting of simple ultrasound-based rules supplemented, if necessary, with either subjective assessment by an expert sonographer or Magnetic Resonance Imaging with diffusion-weighted sequences. The histological diagnosis will be the reference standard. Diagnostic performance will be expressed as sensitivity, specificity, positive and negative predictive values, and likelihood ratios.
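For reference, the planned performance measures follow directly from a 2×2 table of the triage result against the histological reference standard; the counts below are hypothetical and serve only to show the definitions.

```python
# The planned performance measures, computed from a 2x2 table of test result
# versus the histological reference standard. The counts are hypothetical,
# purely to show the definitions.
tp, fp, fn, tn = 80, 15, 10, 165      # hypothetical counts (n = 270)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
lr_pos = sensitivity / (1 - specificity)
lr_neg = (1 - sensitivity) / specificity

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
print(f"PPV {ppv:.2f}, NPV {npv:.2f}, LR+ {lr_pos:.1f}, LR- {lr_neg:.2f}")
```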
Discussion
We hypothesize that this two-step triage test, including the simple ultrasound-based rules, will have better diagnostic accuracy than the Risk of Malignancy Index and therefore will improve the management of women with adnexal masses. Furthermore, we expect this two-step test to be more cost-effective. If the hypothesis is confirmed, the results of this study could have major implications for current guidelines, and implementation of the triage test in daily clinical practice could become a possibility.
Trial registration
ClinicalTrials.gov: registration number NCT02218502
doi:10.1186/s12885-015-1319-5
PMCID: PMC4489120  PMID: 26111920
Ovarian cancer; Ultrasound; Risk of malignancy index; Simple ultrasound-based rules; Subjective assessment; Diffusion weighted imaging; MRI; Diagnosis
18.  Tailoring adverse drug event surveillance to the paediatric inpatient 
Introduction
Although paediatric patients have an increased risk for adverse drug events, few detection methodologies target this population. To utilise computerised adverse event surveillance, specialised trigger rules are required to accommodate the unique needs of children. The aim was to develop new, tailored rules sustainable for review and robust enough to support aggregate event rate monitoring.
Methods
The authors utilised a voluntary staff incident-reporting system, lab values and physician insight to design trigger rules. During Phase 1, problem areas were identified by reviewing 5 years of paediatric voluntary incident reports. Based on these findings, historical lab electrolyte values were analysed to devise critical value thresholds. This evidence informed Phase 2 rule development. For 3 months, surveillance alerts were evaluated for occurrence of adverse drug events.
Results
In Phase 1, replacement preparations and total parenteral nutrition accounted for the largest proportion (36.6%) of adverse drug events in 353 paediatric patients. During Phase 2, nine new trigger rules produced 225 alerts in 103 paediatric inpatients. Of these, 14 adverse drug events were found by the paediatric hypoglycaemia rule, but all other electrolyte trigger rules were ineffective. Compared with the adult-focused hypoglycaemia rule, the new, tailored version increased the paediatric event detection rate from 0.43 to 1.51 events per 1000 patient days.
Conclusions
Relying solely on absolute lab values to detect electrolyte-related adverse drug events did not meet our goals. Use of compound rule logic improved detection of hypoglycaemia. More success may be found in designing real-time rules that leverage lab trends and additional clinical information.
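A minimal sketch of what "compound rule logic" can look like is given below: a low glucose value triggers an alert only when a hypoglycaemia-prone medication was given within a recent window. The field names, the 2.8 mmol/L cut-off, and the 24-hour look-back are hypothetical choices, not the study's actual rule.

```python
# Illustration of "compound rule logic" as described above: flag a possible
# drug-related hypoglycaemia event only when a low glucose value co-occurs
# with a hypoglycaemia-prone medication. The drug list, the 2.8 mmol/L
# cut-off, and the 24 h look-back window are hypothetical choices.
from datetime import datetime, timedelta

HYPO_THRESHOLD_MMOL_L = 2.8
LOOKBACK = timedelta(hours=24)
TRIGGER_DRUGS = {"insulin", "glibenclamide", "glipizide"}

def hypoglycaemia_alert(glucose_mmol_l, glucose_time, med_administrations):
    """med_administrations: list of (drug_name, datetime) given to the patient."""
    if glucose_mmol_l >= HYPO_THRESHOLD_MMOL_L:
        return False
    recent = [d for d, t in med_administrations
              if d in TRIGGER_DRUGS and glucose_time - LOOKBACK <= t <= glucose_time]
    return len(recent) > 0    # a low value alone is not enough; needs drug context

meds = [("insulin", datetime(2024, 3, 1, 6, 0))]
print(hypoglycaemia_alert(2.4, datetime(2024, 3, 1, 9, 30), meds))   # True
print(hypoglycaemia_alert(2.4, datetime(2024, 3, 3, 9, 30), meds))   # False (dose too old)
```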
doi:10.1136/qshc.2009.032680
PMCID: PMC2975971  PMID: 20511599
Paediatrics; adverse drug event; computerised surveillance; trigger tool; information technology; medication error
19.  Updated Systematic Review and Meta-Analysis of the Performance of Risk Prediction Rules in Children and Young People with Febrile Neutropenia 
PLoS ONE  2012;7(5):e38300.
Introduction
Febrile neutropenia is a common and potentially life-threatening complication of treatment for childhood cancer, which has increasingly been subject to targeted treatment based on clinical risk stratification. Our previous meta-analysis demonstrated that 16 rules had been described, only 2 of which had been validated in more than one study. By updating our systematic review, we aimed to advance the evidence on the discriminatory ability and predictive accuracy of such risk stratification clinical decision rules (CDR) for children and young people with cancer.
Methods
The review was conducted in accordance with Centre for Reviews and Dissemination methods, searching multiple electronic databases, using two independent reviewers, formal critical appraisal with QUADAS and meta-analysis with random effects models where appropriate. It was registered with PROSPERO: CRD42011001685.
Results
We found 9 new publications describing a further 7 new CDR, and validations of 7 rules. Six CDR have now been subject to testing across more than two data sets. Most validations demonstrated the rule to be less efficient than when initially proposed; geographical differences appeared to be one explanation for this.
Conclusion
Clinical decision rules will require local validation before widespread use. Considerable uncertainty remains over the most effective rule to use in each population, and an ongoing individual-patient-data meta-analysis should develop and test a more reliable CDR to improve stratification and optimise therapy. Despite current challenges, we believe it will be possible to define an internationally effective CDR to harmonise the treatment of children with febrile neutropenia.
doi:10.1371/journal.pone.0038300
PMCID: PMC3365042  PMID: 22693615
20.  Application of Multivariate Probabilistic (Bayesian) Networks to Substance Use Disorder Risk Stratification and Cost Estimation 
Introduction: This paper explores the use of machine learning and Bayesian classification models to develop broadly applicable risk stratification models to guide disease management of health plan enrollees with substance use disorder (SUD). While the high costs and morbidities associated with SUD are understood by payers, who manage the condition through utilization review, acute interventions, coverage and cost limitations, and disease management, the literature shows mixed results for these modalities in improving patient outcomes and controlling cost. Our objective is to evaluate the potential of data mining methods to identify novel risk factors for chronic disease and to stratify enrollee utilization, which can be used to develop new methods for targeting disease management services to maximize benefits to both enrollees and payers.
Methods: For our evaluation, we used DecisionQ machine learning algorithms to build Bayesian network models of a representative sample of data licensed from Thomson-Reuters' MarketScan consisting of 185,322 enrollees with three full-year claim records. Data sets were prepared, and a stepwise learning process was used to train a series of Bayesian belief networks (BBNs). The BBNs were validated using a 10 percent holdout set.
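The proprietary DecisionQ algorithms are not reproduced here, but the train/holdout/AUC workflow described above can be sketched generically; in the snippet below a Gaussian naive Bayes classifier stands in for the Bayesian belief networks, and the claims-derived features are simulated.

```python
# Generic sketch of the validation workflow described above (train on 90% of
# enrollees, evaluate AUC on a 10% holdout). A Gaussian naive Bayes classifier
# stands in for the proprietary DecisionQ Bayesian belief networks, and the
# claims-like features are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 10))                       # simulated claims-derived features
logit = X[:, 0] * 1.5 + X[:, 1] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = SUD positive

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=0, stratify=y)

model = GaussianNB().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("holdout AUC:", round(roc_auc_score(y_test, scores), 3))
```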
Results: The networks were highly predictive, with the risk-stratification BBNs producing area under the curve (AUC) for SUD positive of 0.948 (95 percent confidence interval [CI], 0.944–0.951) and 0.736 (95 percent CI, 0.721–0.752), respectively, and SUD negative of 0.951 (95 percent CI, 0.947–0.954) and 0.738 (95 percent CI, 0.727–0.750), respectively. The cost estimation models produced areas under the curve ranging from 0.72 (95 percent CI, 0.708–0.731) to 0.961 (95 percent CI, 0.95–0.971).
Conclusion: We were able to successfully model a large, heterogeneous population of commercial enrollees, applying state-of-the-art machine learning technology to develop complex and accurate multivariate models that support near-real-time scoring of novel payer populations based on historic claims and diagnostic data. Initial validation results indicate that we can stratify enrollees with SUD diagnoses into different cost categories with a high degree of sensitivity and specificity, and the most challenging issue becomes one of policy. Due to the social stigma associated with the disease and ethical issues pertaining to access to care and individual versus societal benefit, a thoughtful dialogue needs to occur about the appropriate way to implement these technologies.
PMCID: PMC2804457  PMID: 20169014
substance use disorder; Bayesian belief network; chemical dependency; predictive modeling
21.  Validation of a clinical risk scoring system, based solely on clinical presentation, for the management of pregnancy of unknown location 
Fertility and Sterility  2012;99(1):193-198.
Objective
To assess a scoring system for triaging women with a pregnancy of unknown location.
Design
Validation of prediction rule.
Setting
Multicenter study.
Patients
Women with a pregnancy of unknown location.
Main Outcome Measures
Scores were assigned to factors identified at clinical presentation. A total score was calculated to assess the risk of ectopic pregnancy in women with a pregnancy of unknown location, and a 3-tiered clinical action plan was proposed. Recommendation categories were low risk, intermediate risk, and high risk. The recommendation based on the model score was compared with the clinical diagnosis.
Interventions
None
Results
The cohort of 1400 women (284 ectopic pregnancies (EP), 759 miscarriages, and 357 intrauterine pregnancies (IUP)) was more diverse than the original cohort used to develop the decision rule. A total of 29.4% of IUPs were identified for less frequent follow-up, and 18.4% of nonviable gestations were identified for more frequent follow-up (to rule out an ectopic pregnancy), compared with the intermediate-risk recommendation (i.e., monitoring in the current standard fashion). For the decision of less frequent monitoring, specificity was 90.8% (89.0–92.6) with a negative predictive value of 79.0% (76.7–81.3). For the decision of more intense follow-up, specificity was 95.0% (92.7–97.2). Test characteristics of the scoring system were replicated in this more diverse validation cohort.
Conclusion
A scoring system based on symptoms at presentation has value for stratifying risk and guiding the intensity of outpatient surveillance in women with a pregnancy of unknown location, but it does not serve as a diagnostic tool.
doi:10.1016/j.fertnstert.2012.09.012
PMCID: PMC3534951  PMID: 23040528
ectopic pregnancy; pregnancy of unknown location; risk factors; scoring system
22.  The Clinical and Economic Impact of Point-of-Care CD4 Testing in Mozambique and Other Resource-Limited Settings: A Cost-Effectiveness Analysis 
PLoS Medicine  2014;11(9):e1001725.
Emily Hyle and colleagues conduct a cost-effectiveness analysis to estimate the clinical and economic impact of point-of-care CD4 testing compared to laboratory-based tests in Mozambique.
Please see later in the article for the Editors' Summary
Background
Point-of-care CD4 tests at HIV diagnosis could improve linkage to care in resource-limited settings. Our objective is to evaluate the clinical and economic impact of point-of-care CD4 tests compared to laboratory-based tests in Mozambique.
Methods and Findings
We use a validated model of HIV testing, linkage, and treatment (CEPAC-International) to examine two strategies of immunological staging in Mozambique: (1) laboratory-based CD4 testing (LAB-CD4) and (2) point-of-care CD4 testing (POC-CD4). Model outcomes include 5-y survival, life expectancy, lifetime costs, and incremental cost-effectiveness ratios (ICERs). Input parameters include linkage to care (LAB-CD4, 34%; POC-CD4, 61%), probability of correctly detecting antiretroviral therapy (ART) eligibility (sensitivity: LAB-CD4, 100%; POC-CD4, 90%) or ART ineligibility (specificity: LAB-CD4, 100%; POC-CD4, 85%), and test cost (LAB-CD4, US$10; POC-CD4, US$24). In sensitivity analyses, we vary POC-CD4-specific parameters, as well as cohort and setting parameters to reflect a range of scenarios in sub-Saharan Africa. We consider ICERs less than three times the per capita gross domestic product in Mozambique (US$570) to be cost-effective, and ICERs less than one times the per capita gross domestic product in Mozambique to be very cost-effective. Projected 5-y survival in HIV-infected persons with LAB-CD4 is 60.9% (95% CI, 60.9%–61.0%), increasing to 65.0% (95% CI, 64.9%–65.1%) with POC-CD4. Discounted life expectancy and per person lifetime costs with LAB-CD4 are 9.6 y (95% CI, 9.6–9.6 y) and US$2,440 (95% CI, US$2,440–US$2,450) and increase with POC-CD4 to 10.3 y (95% CI, 10.3–10.3 y) and US$2,800 (95% CI, US$2,790–US$2,800); the ICER of POC-CD4 compared to LAB-CD4 is US$500/year of life saved (YLS) (95% CI, US$480–US$520/YLS). POC-CD4 improves clinical outcomes and remains near the very cost-effective threshold in sensitivity analyses, even if point-of-care CD4 tests have lower sensitivity/specificity and higher cost than published values. In other resource-limited settings with fewer opportunities to access care, POC-CD4 has a greater impact on clinical outcomes and remains cost-effective compared to LAB-CD4. Limitations of the analysis include the uncertainty around input parameters, which is examined in sensitivity analyses. The potential added benefits due to decreased transmission are excluded; their inclusion would likely further increase the value of POC-CD4 compared to LAB-CD4.
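The reported ICER is the incremental lifetime cost divided by the incremental discounted life expectancy; reproducing the arithmetic from the rounded point estimates above gives a value close to the published US$500/YLS.

```python
# The ICER reported above is the incremental lifetime cost divided by the
# incremental (discounted) life expectancy. Reproducing the arithmetic from
# the point estimates in the abstract (the inputs are rounded in the source,
# so the result only approximates the reported US$500/YLS):

cost_lab, cost_poc = 2_440, 2_800          # US$ per person, lifetime
le_lab, le_poc = 9.6, 10.3                 # discounted life expectancy, years

icer = (cost_poc - cost_lab) / (le_poc - le_lab)
print(f"ICER ~ US${icer:,.0f} per year of life saved")   # ~US$514/YLS
```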
Conclusions
POC-CD4 at the time of HIV diagnosis could improve survival and be cost-effective compared to LAB-CD4 in Mozambique, if it improves linkage to care. POC-CD4 could have the greatest impact on mortality in settings where resources for HIV testing and linkage are most limited.
Editors' Summary
Background
AIDS has already killed about 36 million people, and a similar number of people (mostly living in low- and middle-income countries) are currently infected with HIV, the virus that causes AIDS. HIV destroys immune system cells (including CD4 cells, a type of lymphocyte), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, HIV-infected individuals usually died within ten years of infection. After effective antiretroviral therapy (ART) became available in 1996, HIV infection became a chronic condition for people living in high-income countries, but because ART was expensive, HIV/AIDS remained a fatal disease in low- and middle-income countries. In 2003, the international community began to work towards achieving universal ART coverage, and by the end of 2012, 61% of HIV-positive people (nearly 10 million individuals) living in low- and middle-income countries who were eligible for treatment—because their CD4 cell count had fallen below 350 cells/mm3 of blood or they had developed an AIDS-defining condition—were receiving treatment.
Why Was This Study Done?
In sub-Saharan Africa nearly 50% of HIV-infected people eligible for treatment remain untreated, in part because of poor linkage between HIV diagnosis and clinical care. After patients receive a diagnosis of HIV infection, their eligibility for ART initiation is determined by sending a blood sample away to a laboratory for a CD4 cell count (the current threshold for treatment is a CD4 count below 500/mm3, although low- and middle-income countries have yet to update their national guidelines from the threshold CD4 count below 350/mm3). Patients have to return to the clinic to receive their test results and to initiate ART if they are eligible for treatment. Unfortunately, many patients are “lost” during this multistep process in resource-limited settings. Point-of-care CD4 tests at HIV diagnosis—tests that are done on the spot and provide results the same day—might help to improve linkage to care in such settings. Here, the researchers use a mathematical model to assess the clinical outcomes and cost-effectiveness of point-of-care CD4 testing at the time of HIV diagnosis compared to laboratory-based testing in Mozambique, where about 1.5 million HIV-positive individuals live.
What Did the Researchers Do and Find?
The researchers used a validated model of HIV testing, linkage, and treatment called the Cost-Effectiveness of Preventing AIDS Complications–International (CEPAC-I) model to compare the clinical impact, costs, and cost-effectiveness of point-of-care and laboratory CD4 testing in newly diagnosed HIV-infected patients in Mozambique. They used published data to estimate realistic values for various model input parameters, including the probability of linkage to care following the use of each test, the accuracy of the tests, and the cost of each test. At a CD4 threshold for treatment of 250/mm3, the model predicted that 60.9% of newly diagnosed HIV-infected people would survive five years if their immunological status was assessed using the laboratory-based CD4 test, whereas 65% would survive five years if the point-of-care test was used. Predicted life expectancies were 9.6 and 10.3 years with the laboratory-based and point-of-care tests, respectively, and the per person lifetime costs (which mainly reflect treatment costs) associated with the two tests were US$2,440 and US$2,800, respectively. Finally, the incremental cost-effectiveness ratio—calculated as the incremental costs of one therapeutic intervention compared to another divided by the incremental benefits—was US$500 per year of life saved, when comparing use of the point-of-care test with a laboratory-based test.
What Do These Findings Mean?
These findings suggest that, compared to laboratory-based CD4 testing, point-of-care testing at HIV diagnosis could improve survival for HIV-infected individuals in Mozambique. Because the per capita gross domestic product in Mozambique is US$570, these findings also indicate that point-of-care testing would be very cost-effective compared to laboratory-based testing (an incremental cost-effectiveness ratio less than one times the per capita gross domestic product is regarded as very cost-effective). As with all modeling studies, the accuracy of these findings depends on the assumptions built into the model and on the accuracy of the input parameters. However, the point-of-care strategy averted deaths and was estimated to be cost-effective compared to the laboratory-based test over a wide range of input parameter values reflecting Mozambique and several other resource-limited settings that the researchers modeled. Importantly, these “sensitivity analyses” suggest that point-of-care CD4 testing is likely to have the greatest impact on HIV-related deaths and be economically efficient in settings in sub-Saharan Africa with the most limited health care resources, provided point-of-care CD4 testing improves the linkage to care for HIV-infected people.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001725.
The World Health Organization provides information on all aspects of HIV/AIDS (in several languages); its “Consolidated Guidelines on the Use of Antiretroviral Drugs for Treating and Preventing HIV Infections: Recommendations for a Public Health Approach”, which highlights the potential of point-of-care tests to improve the linkage of newly diagnosed HIV-infected patients to care, is available
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS, and summaries of recent research findings on HIV care and treatment; it has a fact sheet on CD4 testing
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on starting, monitoring, and switching treatment and on HIV and AIDS in sub-Saharan Africa (in English and Spanish)
The “UNAIDS Report on the Global AIDS Epidemic 2013” provides up-to-date information about the AIDS epidemic and efforts to halt it
Personal stories about living with HIV/AIDS are available through Avert, Nam/aidsmap, and Healthtalkonline
doi:10.1371/journal.pmed.1001725
PMCID: PMC4165752  PMID: 25225800
23.  Chronious: the last advances in telehealth monitoring systems 
The effectiveness of treatment depends on the patient’s ability to manage his/her chronic health status in everyday life, in accordance with medical prescriptions, outside the hospital setting. For this reason, the European Commission promotes research in tele-health applications such as Chronious, “An Open, Ubiquitous and Adaptive Chronic Disease Management Platform for COPD and Renal Insufficiency”. The aim is to improve healthcare services by offering an online health management solution that addresses patient-professional interaction, personal data security, and the reduction of hospitalization and related costs. Chronious implements a modular hardware-software system that integrates existing healthcare legacy systems, biomedical sensors, user interfaces, and multi-parametric data processing with a decision support system for patients and health professionals. Nowadays, very few of the commercially available chronic disease management tools are accompanied by patient-professional interfaces for communication and education purposes. As added value, Chronious proposes lifestyle and mental support tools for patients and an ontological cross-lingual information retrieval system that allows clinicians to query medical knowledge faster and more easily. The patient at home is equipped with a T-shirt able to record cardiac, respiratory, audio, and activity signs; external devices (weight scale, glucometer, blood pressure monitoring device, spirometer, air quality sensor); and a touch-screen computer that sends reminders on drug intake and collects information on dietary habits and mental status. All information is automatically transmitted via IP/GPRS to the Central System, which, using a web interface and rule-based algorithms, allows clinicians to monitor patient status and to give suggestions for acting in case of a worsening trend or risk situation. As a consequence, procedures that are quite complicated for the patient, such as frequent or continuous monitoring, visits to hospitals, and self-care, become simpler and more straightforward. In addition, the information available to the clinician is more direct, accurate, and complete, improving the prognosis of chronic diseases and the selection of the most appropriate treatment plan. For validation purposes, Chronious focuses on chronic obstructive pulmonary disease and chronic kidney disease, as these are widespread and highly expensive in terms of social and economic costs. The validation protocol also considers the most frequent related comorbidities, such as diabetes, involving the patient category expected to gain the greatest benefit. This enables an open architecture for further applications. Project validation is divided into two progressive phases: the first, in a hospital setting, aimed to verify in 50 patients whether the delivered prototypes met the user requirements and the ergonomic and functional specifications. The second phase is observational: the improved system is currently being used at home by 60 selected patients. Patients are instructed to use the system independently for an expected duration of 4 months each. In parallel, each patient is monitored with standard periodic outpatient checks. At the end, customer satisfaction and the predictive ability of the system with respect to the evolution of the disease will be evaluated. Initial feedback is encouraging, because Chronious provides a friendly approach to new technologies and reassures patients by reducing intervention time in critical situations.
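The rule-based monitoring described above can be illustrated with a minimal sketch that flags a sustained worsening trend across recent home measurements; the parameter, window size, and threshold are hypothetical and are not the Chronious rules.

```python
# Minimal sketch of the kind of rule-based monitoring described above: flag a
# patient when a monitored parameter shows a sustained worsening trend across
# recent home measurements. The parameter, window, and threshold below are
# hypothetical illustrations.

def worsening_trend(values, window=5, min_drop=5.0):
    """values: chronological measurements (e.g., SpO2 %). Flag if the average
    of the last `window` readings has fallen by more than `min_drop` compared
    with the preceding `window` readings."""
    if len(values) < 2 * window:
        return False
    recent = sum(values[-window:]) / window
    previous = sum(values[-2 * window:-window]) / window
    return (previous - recent) > min_drop

spo2 = [96, 95, 96, 95, 96, 94, 92, 90, 88, 86]
print(worsening_trend(spo2))      # True -> raise an alert for the clinician
```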
PMCID: PMC3571130
chronic disease; patient-professional interfaces; lifestyle
24.  Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered 
Background
Reliable exposure data are a vital concern in medical epidemiology and intervention studies. The present study addresses the medical researcher's need to spend the monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e., to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover non-linear cost scenarios as well.
Methods
Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components (a minimal numerical sketch of this allocation problem is given after the Conclusions below).
Results
Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods.
For many of the 225 scenarios, the optimal strategy consisted of measuring on only one occasion for each of as many subjects as the budget allowed. Significant deviations from this principle occurred when the costs of recruiting subjects were large compared to the costs of setting up measurement occasions and, at the same time, the between-subject to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set.
Conclusions
The analysis procedures developed in the present study can be used for the informed design of exposure assessment strategies, provided that data are available on exposure variability and on the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions, however, impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
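The Methods above amount to a constrained optimization: minimize the variance of the exposure mean, written as the sum of its three variance-component terms, subject to a budget in which the cost of each stage grows as a power function of the number of units at that stage. The Python sketch below illustrates this allocation problem by brute-force grid search; the variance expression, the cost model, and every numerical value are assumptions chosen for illustration, not the study's empirical inputs.

import itertools

def variance_of_mean(k, n, m, s2_subj, s2_occ, s2_within):
    # Three-stage nested model: k subjects, n occasions per subject,
    # m measurements per occasion.
    return s2_subj / k + s2_occ / (k * n) + s2_within / (k * n * m)

def total_cost(k, n, m, c, a):
    # Assumed power-function costs for recruiting k subjects, setting up
    # k*n measurement occasions, and taking k*n*m measurements.
    return c[0] * k**a[0] + c[1] * (k * n)**a[1] + c[2] * (k * n * m)**a[2]

def best_allocation(budget, c, a, s2, max_units=60):
    best = None
    for k, n, m in itertools.product(range(1, max_units + 1), repeat=3):
        if total_cost(k, n, m, c, a) > budget:
            continue
        v = variance_of_mean(k, n, m, *s2)
        if best is None or v < best[0]:
            best = (v, k, n, m)
    return best  # (variance, subjects, occasions, measurements per occasion)

# Example: recruiting subjects is expensive and slightly non-linear (a[0] > 1).
print(best_allocation(budget=5000,
                      c=(100.0, 20.0, 5.0), a=(1.1, 1.0, 1.0),
                      s2=(4.0, 2.0, 1.0)))

With linear cost functions the optimum can often be written in closed form, as the Results note; the grid search is only a generic fallback that also handles the non-linear exponents.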
doi:10.1186/1471-2288-11-76
PMCID: PMC3125387  PMID: 21600023
25.  Optimal Sampling Strategies for Detecting Zoonotic Disease Epidemics 
PLoS Computational Biology  2014;10(6):e1003668.
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known, the optimal sampling strategy can switch abruptly between sampling only from the vector population and sampling only from the host population. We also construct time-independent optimal sampling strategies for periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and time-independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known. We illustrate the approach with West Nile virus, a globally spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of the transition rates between epidemiological compartments, of which population was initially infected, and of the cost per sample for serological tests.
Author Summary
Outbreaks of zoonoses can have large costs to society through public health and agricultural impacts. Because many zoonoses co-occur in multiple animal populations simultaneously, detection of zoonotic outbreaks can be especially difficult. We evaluated how to design sampling strategies for the early detection of disease outbreaks of vector-borne diseases. We built a framework to integrate epidemiological dynamical models with a sampling process that accounts for budgetary constraints, such as those faced by many management agencies. We illustrate our approach using West Nile virus, a globally-spreading zoonotic arbovirus that has significantly affected North American bird populations. Our results suggest that simple formulas can often make robust predictions about the proper sampling procedure, though we also illustrate how computational methods can be used to extend our framework to more realistic modeling scenarios when these simple predictions break down.
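As a rough illustration of the allocation logic described above, the Python sketch below assumes a fixed per-period sampling budget, a perfect diagnostic test, and known current prevalences in the host and vector populations. Because the log-probability of at least one detection is then linear in the two sample counts, the entire budget goes to whichever population yields more detection value per unit cost, which mirrors the abrupt switching between host-only and vector-only sampling noted in the abstract. All function names and numbers are assumptions made for the example, not the paper's model or parameters.

import math

def detection_value_per_cost(prevalence, cost_per_sample):
    # -log(1 - p) is each sample's contribution to the log-probability
    # of at least one detection; divide by cost to compare populations.
    return -math.log(1.0 - prevalence) / cost_per_sample

def allocate_samples(p_host, p_vector, cost_host, cost_vector, budget):
    # Corner solution of a linear objective under a linear budget constraint:
    # sample only from the population with the better value-per-cost ratio.
    if detection_value_per_cost(p_host, cost_host) >= \
       detection_value_per_cost(p_vector, cost_vector):
        return {"host": budget // cost_host, "vector": 0}
    return {"host": 0, "vector": budget // cost_vector}

# Example: assumed prevalences early in an outbreak, with cheaper vector tests.
print(allocate_samples(p_host=0.002, p_vector=0.01,
                       cost_host=25, cost_vector=5, budget=500))

In the time-dependent case the two prevalences change as the linearized epidemic grows, so the preferred population can flip at the moment the ratio of detection values crosses one.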
doi:10.1371/journal.pcbi.1003668
PMCID: PMC4072525  PMID: 24968100
