1.  Should the surgeon or the general practitioner (GP) follow up patients after surgery for colon cancer? A randomized controlled trial protocol focusing on quality of life, cost-effectiveness and serious clinical events 
Background
All patients who undergo surgery for colon cancer are followed up according to the guidelines of the Norwegian Gastrointestinal Cancer Group (NGICG). These guidelines state that the aims of follow-up after surgery are to perform quality assessment, provide support and improve survival. In Norway, most of these patients are followed up in a hospital setting. We describe a multi-centre randomized controlled trial to test whether these patients can be followed up by their general practitioner (GP) without altering quality of life, cost effectiveness and/or the incidence of serious clinical events.
Methods and Design
Patients below 75 years of age undergoing surgery for colon cancer of Dukes' stage A, B or C are eligible for inclusion. They will be randomized after surgery to follow-up at the surgical outpatient clinic (control group) or follow-up by the district GP (intervention group). Both study arms comply with the national NGICG guidelines. The primary endpoints will be quality of life (QoL) (measured by the EORTC QLQ C-30 and EQ-5D instruments), serious clinical events (SCEs), and costs. The follow-up period will be two years after surgery, and quality of life will be measured every three months. SCEs and costs will be estimated prospectively. The calculated sample size is 170 patients.
Discussion
There is an ongoing debate on the best method of follow-up for patients with colorectal cancer (CRC). Owing to the wide range of follow-up programmes and the paucity of randomized trials, it is impossible to draw conclusions about the combination and frequency of clinic (or family practice) visits, blood tests, endoscopic procedures and radiological examinations that best balances clinical outcome, quality of life and costs. Most studies on follow-up of CRC patients have been performed in a hospital outpatient setting. We hypothesize that postoperative follow-up of colon cancer patients (according to national guidelines) by GPs will not have any impact on patients' quality of life. Furthermore, we hypothesize that there will be no increase in SCEs and that the incremental cost-effectiveness ratio will improve.
Trial registration
This trial has been registered at ClinicalTrials.gov. The trial registration number is: NCT00572143
doi:10.1186/1472-6963-8-137
PMCID: PMC2474836  PMID: 18578856
2.  Reporting and Methods in Clinical Prediction Research: A Systematic Review 
PLoS Medicine  2012;9(5):e1001221.
Walter Bouwmeester and colleagues investigated the reporting and methods of prediction studies published in 2008 in six high-impact general medical journals and found that the majority of prediction studies do not follow current methodological recommendations.
Background
We investigated the reporting and methods of prediction studies, focusing on aims, designs, participant selection, outcomes, predictors, statistical power, statistical methods, and predictive performance measures.
Methods and Findings
We used a full hand search to identify all prediction studies published in 2008 in six high impact general medical journals. We developed a comprehensive item list to systematically score conduct and reporting of the studies, based on recent recommendations for prediction research. Two reviewers independently scored the studies. We retrieved 71 papers for full text review: 51 were predictor finding studies, 14 were prediction model development studies, three addressed an external validation of a previously developed model, and three reported on a model's impact on participant outcome. Study design was unclear in 15% of studies, and a prospective cohort was used in most studies (60%). Descriptions of the participants and definitions of predictor and outcome were generally good. Despite many recommendations against doing so, continuous predictors were often dichotomized (32% of studies). The number of events per predictor as a measure of statistical power could not be determined in 67% of the studies; of the remainder, 53% had fewer than the commonly recommended value of ten events per predictor. Methods for a priori selection of candidate predictors were described in most studies (68%). A substantial number of studies relied on a p-value cut-off of p<0.05 to select predictors in the multivariable analyses (29%). Predictive model performance measures, i.e., calibration and discrimination, were reported in 12% and 27% of studies, respectively.
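As a rough illustration of the events-per-predictor measure referred to above, the short Python sketch below computes the ratio and checks it against the commonly recommended minimum of ten; the event and predictor counts are hypothetical and not taken from the reviewed studies.

```python
def events_per_predictor(n_events: int, n_candidate_predictors: int) -> float:
    """Events-per-predictor (EPV): number of outcome events divided by the
    number of candidate predictors considered in the model."""
    return n_events / n_candidate_predictors

# Hypothetical example: 80 outcome events and 12 candidate predictors
epv = events_per_predictor(80, 12)
print(f"EPV = {epv:.1f}")              # 6.7
print("meets EPV >= 10:", epv >= 10)   # False: below the commonly recommended ten
```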
Conclusions
The majority of prediction studies in high impact journals do not follow current methodological recommendations, limiting their reliability and applicability.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
There are often times in our lives when we would like to be able to predict the future. Is the stock market going to go up, for example, or will it rain tomorrow? Being able to predict future health is also important, both to patients and to physicians, and there is an increasing body of published clinical “prediction research.” Diagnostic prediction research investigates the ability of variables or test results to predict the presence or absence of a specific diagnosis. So, for example, one recent study compared the ability of two imaging techniques to diagnose pulmonary embolism (a blood clot in the lungs). Prognostic prediction research investigates the ability of various markers to predict future outcomes such as the risk of a heart attack. Both types of prediction research can investigate the predictive properties of patient characteristics, single variables, tests, or markers, or combinations of variables, tests, or markers (multivariable studies). Both types of prediction research can also include studies that build multivariable prediction models to guide patient management (model development), or that test the performance of models (validation), or that quantify the effect of using a prediction model on patient and physician behaviors and outcomes (impact assessment).
Why Was This Study Done?
With the increase in prediction research, there is an increased interest in the methodology of this type of research because poorly done or poorly reported prediction research is likely to have limited reliability and applicability and will, therefore, be of little use in patient management. In this systematic review, the researchers investigate the reporting and methods of prediction studies by examining the aims, design, participant selection, definition and measurement of outcomes and candidate predictors, statistical power and analyses, and performance measures included in multivariable prediction research articles published in 2008 in several general medical journals. In a systematic review, researchers identify all the studies undertaken on a given topic using a predefined set of criteria and systematically analyze the reported methods and results of these studies.
What Did the Researchers Do and Find?
The researchers identified all the multivariable prediction studies meeting their predefined criteria that were published in 2008 in six high impact general medical journals by browsing through all the issues of the journals (a hand search). They then scored the methods and reporting of each study using a comprehensive item list based on recent recommendations for the conduct of prediction research (for example, the reporting recommendations for tumor marker prognostic studies—the REMARK guidelines). Of 71 retrieved studies, 51 were predictor finding studies, 14 were prediction model development studies, three externally validated an existing model, and three reported on a model's impact on participant outcome. Study design, participant selection, definitions of outcomes and predictors, and predictor selection were generally well reported, but other methodological and reporting aspects of the studies were suboptimal. For example, despite many recommendations, continuous predictors were often dichotomized. That is, rather than using the measured value of a variable in a prediction model (for example, blood pressure in a cardiovascular disease prediction model), measurements were frequently assigned to two broad categories. Similarly, many of the studies failed to adequately estimate the sample size needed to minimize bias in predictor effects, and few of the model development papers quantified and validated the proposed model's predictive performance.
What Do These Findings Mean?
These findings indicate that, in 2008, most of the prediction research published in high impact general medical journals failed to follow current guidelines for the conduct and reporting of clinical prediction studies. Because the studies examined here were published in high impact medical journals, they are likely to be representative of the higher quality studies published in 2008. However, reporting standards may have improved since 2008, and the conduct of prediction research may actually be better than this analysis suggests because the length restrictions that are often applied to journal articles may account for some of the reporting omissions. Nevertheless, despite some encouraging findings, the researchers conclude that the poor reporting and poor methods they found in many published prediction studies are a cause for concern and are likely to limit the reliability and applicability of this type of clinical research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001221.
The EQUATOR Network is an international initiative that seeks to improve the reliability and value of medical research literature by promoting transparent and accurate reporting of research studies; its website includes information on a wide range of reporting guidelines including the REMARK recommendations (in English and Spanish)
A video of a presentation by Doug Altman, one of the researchers of this study, on improving the reporting standards of the medical evidence base, is available
The Cochrane Prognosis Methods Group provides additional information on the methodology of prognostic research
doi:10.1371/journal.pmed.1001221
PMCID: PMC3358324  PMID: 22629234
3.  A Randomised Trial Comparing Genotypic and Virtual Phenotypic Interpretation of HIV Drug Resistance: The CREST Study 
PLoS Clinical Trials  2006;1(3):e18.
Objectives:
The aim of this study was to compare the efficacy of different HIV drug resistance test reports (genotype and virtual phenotype) in patients who were changing their antiretroviral therapy (ART).
Design:
Randomised, open-label trial with 48-week follow-up.
Setting:
The study was conducted in a network of primary healthcare sites in Australia and New Zealand.
Participants:
Patients failing current ART with plasma HIV RNA > 2000 copies/mL who wished to change their current ART were eligible. Subjects were required to be > 18 years of age, to have been previously treated with ART, to have no intercurrent illnesses requiring active therapy, and to have provided written informed consent.
Interventions:
Eligible subjects were randomly assigned to receive a genotype (group A) or genotype plus virtual phenotype (group B) prior to selection of their new antiretroviral regimen.
Outcome Measures:
Patient groups were compared for patterns of ART selection and surrogate outcomes (plasma viral load and CD4 counts) on an intention-to-treat basis over a 48-week period.
Results:
Three hundred and twenty-seven patients completing more than one month of follow-up were included in these analyses. Resistance tests were the primary means by which ART regimens were selected (group A: 64%, group B: 62%; p = 0.32). At 48 weeks, there were no significant differences between the groups in mean change from baseline plasma HIV RNA (group A: 0.68 log copies/mL, group B: 0.58 log copies/mL; p = 0.23) or mean change from baseline CD4+ cell count (group A: 37 cells/mm3, group B: 50 cells/mm3; p = 0.28).
Conclusions:
In the absence of clear demonstrated benefits arising from the use of the virtual phenotype interpretation, this study suggests resistance testing using genotyping linked to a reliable interpretive algorithm is adequate for the management of HIV infection.
Editorial Commentary
Background: Antiretroviral drugs are used to treat patients with HIV infection, with good evidence that they improve prognosis. However, mutations develop in the HIV genome that allow it to evade successful treatment—known as drug resistance—and such mutations are known against every class of antiretroviral drug. Resistance can cause treatment failure and limit the treatment options available. Different types of tests are often used to detect resistance and to work out whether patients should switch to a different drug regimen. Currently, the different types of tests include genotype testing (direct sequencing of genes from virus samples infecting a patient); phenotype testing (a test that assesses the sensitivity of a patient's HIV sample to different drugs), and virtual phenotype testing (a way of interpreting genotype data that estimates the likely viral response to different drugs). The researchers of this study did a trial to find out whether providing an additional virtual phenotype report would be beneficial to patients, as compared with a genotype report alone. The main outcome was HIV viral load after 12 months of treatment, but the researchers also looked at differences in drug regimens prescribed, number of treatment changes in the study, and changes in CD4+ (the type of white blood cell infected by HIV) counts.
What this trial shows: The researchers found that the main endpoint of the trial (HIV viral load after 12 months) was no different in patients whose clinicians had received a virtual phenotype report as well as a genotype report, compared with those who had received a genotype report alone. In addition, the average number of drugs prescribed was no different between patients in the two different arms of the trial, and there was no difference in number of drug regimen changes, and no change in immune response (measured using CD4+ cell levels). However, more drugs predicted to be sensitive were prescribed by clinicians who got both a genotype and virtual phenotype report, as compared with clinicians who received only the genotype report.
Strengths and limitations: The size of the trial (338 patients recruited) was large enough to properly test the hypothesis that providing a virtual phenotype report as well as a genotype report would result in lower HIV viral loads. Randomization of patients to either intervention ensured that the comparison groups were well-balanced, and the researchers also tested whether selection bias had affected the results (i.e., testing for the possibility that clinicians could predict which intervention participants would receive, and change recruitment into the trial as a result). They found no evidence for selection bias occurring within the trial. However, interpreting the results is difficult because the trial did not directly compare the two different testing platforms, but rather looked at whether providing a virtual phenotype report as well as a genotype report was better than providing a genotype report alone. The investigators also acknowledge that since the trial was conducted, the cutoffs for interpreting genotype information as resistant have been lowered. The findings may therefore not translate precisely to the current situation.
Contribution to the evidence: Other cohort studies and clinical trials have shown that patients offered resistance testing respond better to antiretroviral therapy compared with those who were not, but the clinical effectiveness of different resistance testing methods is not known. This study provides additional data on the respective benefits of genotype testing versus genotype plus provision of virtual phenotype. Another trial comparing genotype versus virtual phenotype has also found that the different interpretation methods perform similarly.
doi:10.1371/journal.pctr.0010018
PMCID: PMC1523224  PMID: 16878178
4.  The intermediate endpoint effect in logistic and probit regression 
Background
An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used.
Purpose
The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results.
Methods
The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods.
Results
Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression.
Limitations
More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models.
Conclusions
Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. The common method based on the difference in coefficients can lead to distorted conclusions regarding the intermediate endpoint effect.
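As a rough illustration of a product-of-coefficients approach for a binary outcome, the Python sketch below fits a linear model for the intermediate endpoint and a logistic model for the outcome, then rescales the logistic coefficients to the latent-outcome scale before forming the product. The data, variable names and standardization shown are illustrative of the general latent-variable logic described above, not the article's exact procedure.

```python
# Product-of-coefficients estimate of an intermediate endpoint (mediated) effect
# with a binary outcome; simulated, hypothetical data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.binomial(1, 0.5, n)                         # treatment (independent variable)
m = 0.5 * x + rng.normal(size=n)                    # intermediate (surrogate) endpoint
y_star = 0.7 * m + 0.2 * x + rng.logistic(size=n)   # latent continuous outcome
y = (y_star > 0).astype(int)                        # observed binary endpoint

# Path a: effect of treatment on the intermediate endpoint (linear regression)
a = sm.OLS(m, sm.add_constant(x)).fit().params[1]

# Path b and direct effect c': logistic regression of outcome on treatment + mediator
fit = sm.Logit(y, sm.add_constant(np.column_stack([x, m]))).fit(disp=0)
c_prime, b = fit.params[1], fit.params[2]

# Rescale the logistic coefficients to the latent-outcome scale so that the
# product a*b is comparable across equations (logistic residual variance pi^2/3).
var_y_star = (c_prime**2 * np.var(x) + b**2 * np.var(m)
              + 2 * c_prime * b * np.cov(x, m)[0, 1] + np.pi**2 / 3)
mediated_effect = a * b / np.sqrt(var_y_star)
print(f"standardized intermediate endpoint effect (a*b): {mediated_effect:.3f}")
```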
doi:10.1177/1740774507083434
PMCID: PMC2857773  PMID: 17942466
5.  A Randomized, Double-Blind, Placebo-Controlled Trial of Lessertia frutescens in Healthy Adults 
PLoS Clinical Trials  2007;2(4):e16.
Objectives:
Indigenous medicines are widely used throughout Africa, despite a lack of scientific evidence for their safety or efficacy. The aims of this study were: (a) to conduct a pilot study of the safety of a common indigenous South African phytotherapy, Lessertia frutescens (Sutherlandia), in healthy adults; and (b) to contribute to establishing procedures for ethical and scientifically rigorous clinical trials of African indigenous medicines.
Design:
A randomized, double-blind, placebo-controlled trial of Sutherlandia leaf powder in healthy adults.
Setting:
Tiervlei Trial Centre, Karl Bremer Hospital, Bellville, South Africa.
Participants:
25 adults who provided informed consent and had no known significant diseases, allergic conditions, or clinically abnormal laboratory blood profiles during screening.
Intervention:
12 participants randomized to a treatment arm consumed 400 mg capsules of Sutherlandia leaf powder twice daily (800 mg/d). 13 individuals randomized to the control arm consumed a placebo capsule. Each participant received 180 capsules for the trial duration of 3 mo.
Outcome Measures:
The primary endpoint was frequency of adverse events; secondary endpoints were changes in physical, vital, blood, and biomarker indices.
Results:
There were no significant differences in general adverse events or physical, vital, blood, and biomarker indices between the treatment and placebo groups (p > 0.05). However, participants consuming Sutherlandia reported improved appetite compared to those in the placebo group (p = 0.01). Although the treatment group exhibited a lower respiration rate (p < 0.04) and higher platelet count (p = 0.03), MCH (p = 0.01), MCHC (p = 0.02), total protein (p = 0.03), and albumin (p = 0.03), than the placebo group, these differences remained within the normal physiological range, and were not clinically relevant. The Sutherlandia biomarker canavanine was undetectable in participant plasma.
Conclusion:
Consumption of 800 mg/d Sutherlandia leaf powder capsules for 3 mo was tolerated by healthy adults.
Editorial Commentary
Background: In Africa, traditional herbal medicines are given for many illnesses. In particular, one herbal medicine, Sutherlandia (Lessertia frutescens) is commonly given in the belief that this herb will treat some of the symptoms associated with HIV/AIDS, such as nausea and lack of appetite, amongst others. However, there is very little evidence relating to the safety and none to the efficacy of this herb. Generally, when new drugs are developed, the first stage of human testing involves a Phase 1 trial. This type of trial would typically involve small numbers of healthy individuals, who would receive progressively increasing doses of the drug under study, and would be closely monitored for any sign of side effects. Phase 1 trials would typically also collect data from blood samples to find out how the drug is handled in the body and broken down and eliminated. Therefore, the researchers here carried out a preliminary study to assess just the safety of Sutherlandia. 25 healthy adults were randomized to receive either tablets containing a fixed dose of Sutherlandia leaf powder daily for three months, or matched placebo tablets containing lettuce leaf powder, for the same period of time. The main aim of the trial was to assess safety, so the primary outcomes were adverse events experienced by the participants. The researchers also measured standard outcomes such as blood pressure, heart rate, body weight, urine glucose, protein, and many others, at one-month intervals over the three-month period.
What the trial shows: Adverse events experienced by trial participants over the three months of this trial included those that might be expected in a group of otherwise healthy individuals, such as headaches, insomnia, allergies, malaise, palpitations, nosebleeds, and so on. The researchers did not see statistically significant differences between treatment and placebo groups in any of the major categories of these events. Most physical and laboratory measurements also showed no statistically significant differences between the study groups. However, there were statistically significant, but small, differences between groups in respiratory rate and in various basic blood tests. The researchers did not think these differences were clinically important. Overall, this trial suggested that Sutherlandia use was not associated with side effects at this dosage and over this time scale.
Strengths and limitations: Strengths of this study include the use of randomization to distribute individuals to either the Sutherlandia or control groups, and in the use of a placebo control group, which therefore allowed the researchers to compare the frequencies of adverse events in the Sutherlandia group with what might be expected among healthy individuals over the course of three months. An important limitation is the small sample size of the trial. This size limits the sensitivity of the trial to detect rare adverse events to the herb under study, and therefore one cannot say conclusively that the herb is safe, based on this data. Additionally, the study looked only at the participants' response to one dosage level of Sutherlandia. A strategy using progressively increasing doses would have allowed the researchers to see if there was a maximum tolerated dose to this herb. A further limitation in this study is the lack of data relating to how the herb is broken down in the body; these data are normally an important part of Phase 1 trials and, combined with safety data, are crucial to finding out whether a compound is safe when given at a dosage that allows it to be available to the appropriate tissues.
Contribution to the evidence: Data from previous studies in nonhuman primates have shown that Sutherlandia is not associated with toxic or other side effects at approximately equivalent or higher doses than that normally taken by people with HIV/AIDS. This study adds safety data relating to Sutherlandia consumption in healthy humans, which confirm the primate data. However, it is crucial to collect more data relating to how the probable active ingredients of Sutherlandia are absorbed and broken down, and to assess safety at different dosages, before studies are even considered for the next stage, which is to see whether Sutherlandia has any efficacy in people with HIV/AIDS.
doi:10.1371/journal.pctr.0020016
PMCID: PMC1863514  PMID: 17476314
6.  A Method for Utilizing Bivariate Efficacy Outcome Measures to Screen Regimens for Activity in 2-Stage Phase II Clinical Trials 
Background
Most phase II clinical trials utilize a single primary endpoint to determine the promise of a regimen for future study. However, many disorders manifest themselves in complex ways. For example, migraine headaches can cause pain, auras, photophobia, and emesis. Investigators may believe a drug is effective at reducing migraine pain and the severity of emesis during an attack. Nevertheless, they could still be interested in proceeding with development of the drug if it is effective against only one of these symptoms. Such a study would be a candidate for a clinical trial with co-primary endpoints.
Purpose
The purpose of the article is to provide a method for designing a 2-stage clinical trial with dichotomous co-primary endpoints of efficacy that has the ability to detect activity on either response measure with high probability when the drug is active on one or both measures, while at the same time rejecting the drug with high probability when there is little activity on both dimensions. The design enables early closure for futility and is flexible with regard to attained accrual.
Methods
The design is proposed in the context of cancer clinical trials where tumor response is used to assess a drug's ability to kill tumor cells and progression-free survival (PFS) status after a certain period is used to evaluate the drug's ability to stabilize tumor growth. Both endpoints are assumed to be distributed as binomial random variables, and uninteresting probabilities of success are determined from historical controls. Given the necessity of accrual flexibility, exhaustive searching algorithms to find optimum designs do not seem feasible at this time. Instead, critical values are determined for realized sample sizes using specific procedures. Then accrual windows are found to achieve a design's desired level of significance, probability of early termination (PET), and power.
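As a simplified illustration of the kind of calculation involved, the Python sketch below computes the probability of early termination (PET) and the probability of continuing to stage 2 for hypothetical stage-1 critical values, treating the two binomial endpoints as independent under the null; the article's actual procedure works with their joint distribution, so this is only an approximation.

```python
# Stage-1 operating characteristics of a two-stage design with two binomial
# co-primary endpoints (e.g., tumor response and 6-month PFS). The trial stops
# early only if BOTH endpoints look uninteresting. Sample size, critical values
# and success probabilities are hypothetical; independence is assumed.
from scipy.stats import binom

n1 = 25                          # stage-1 sample size (hypothetical)
p_resp0, p_pfs0 = 0.10, 0.20     # "uninteresting" success probabilities under the null
r_resp, r_pfs = 2, 5             # stage-1 critical values: stop if successes <= r

# PET under the null: both observed counts fall at or below their critical values
pet_null = binom.cdf(r_resp, n1, p_resp0) * binom.cdf(r_pfs, n1, p_pfs0)
print(f"PET under the null (independence approximation): {pet_null:.3f}")

# Probability of continuing to stage 2 when the drug is active on one endpoint only
p_resp1, p_pfs1 = 0.10, 0.40     # active on PFS only (hypothetical alternative)
go_on = 1 - binom.cdf(r_resp, n1, p_resp1) * binom.cdf(r_pfs, n1, p_pfs1)
print(f"P(continue to stage 2 | active on PFS only): {go_on:.3f}")
```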
Results
The design is illustrated with a clinical trial that examined bevacizumab in patients with recurrent endometrial cancer. This study was negative by tumor response but positive by 6-month PFS. The procedure was compared to modified procedures in the literature, indicating that the method is competitive.
Limitations
Although the procedure allows investigators to construct designs with desired levels of significance and power, the PET under the null is smaller than in single-endpoint studies.
Conclusions
The impact of adding an additional endpoint on the sample size is often minimal, but the study gains sensitivity to activity on another dimension of treatment response. The operating characteristics are fairly robust to the level of association between the two endpoints. Software is available for implementing the methods.
doi:10.1177/1740774512450101
PMCID: PMC3598604  PMID: 22811448
Keywords: binomial distribution; multinomial distribution; correlated primary endpoints; cytotoxic; cytostatic; two-stage design
7.  Switching HIV Treatment in Adults Based on CD4 Count Versus Viral Load Monitoring: A Randomized, Non-Inferiority Trial in Thailand 
PLoS Medicine  2013;10(8):e1001494.
Using a randomized controlled trial, Marc Lallemant and colleagues ask if a CD4-based monitoring and treatment switching strategy provides a similar clinical outcome compared to the standard viral load-based strategy for adults with HIV in Thailand.
Please see later in the article for the Editors' Summary
Background
Viral load (VL) is recommended for monitoring the response to highly active antiretroviral therapy (HAART) but is not routinely available in most low- and middle-income countries. The purpose of the study was to determine whether a CD4-based monitoring and switching strategy would provide a similar clinical outcome compared to the standard VL-based strategy in Thailand.
Methods and Findings
The Programs for HIV Prevention and Treatment (PHPT-3) non-inferiority randomized clinical trial compared a treatment switching strategy based on CD4-only (CD4) monitoring versus viral load (VL) monitoring. Consenting participants were antiretroviral-naïve HIV-infected adults (CD4 count 50–250/mm3) initiating non-nucleoside reverse transcriptase inhibitor (NNRTI)-based therapy. Randomization, stratified by site (21 public hospitals), was performed centrally after enrollment. Clinicians were unaware of the VL values of patients randomized to the CD4 arm. Participants switched to a second-line combination upon confirmed CD4 decline >30% from peak (within 200 cells from baseline) in the CD4 arm, or confirmed VL >400 copies/ml in the VL arm. The primary endpoint was clinical failure at 3 years, defined as death, new AIDS-defining event, or CD4 <50 cells/mm3. The 3-year Kaplan-Meier cumulative risks of clinical failure were compared for non-inferiority with a margin of 7.4%. In the intention-to-treat analysis, data were censored at the date of death or at last visit. The secondary endpoints were difference in future-drug-option (FDO) score, a measure of resistance profiles, virologic and immunologic responses, and the safety and tolerance of HAART. 716 participants were randomized, 356 to VL monitoring and 360 to CD4 monitoring. At 3 years, 319 participants (90%) in VL and 326 (91%) in CD4 were alive and on follow-up. The cumulative risk of clinical failure was 8.0% (95% CI 5.6–11.4) in VL versus 7.4% (5.1–10.7) in CD4, and the upper limit of the one-sided 95% CI of the difference was 3.4%, meeting the pre-determined non-inferiority criterion. Probability of switch for study criteria was 5.2% (3.2–8.4) in VL versus 7.5% (5.0–11.1) in CD4 (p = 0.097). Median time from treatment initiation to switch was 11.7 months (7.7–19.4) in VL and 24.7 months (15.9–35.0) in CD4 (p = 0.001). The median duration of viremia >400 copies/ml at switch was 7.2 months (5.8–8.0) in VL versus 15.8 months (8.5–20.4) in CD4 (p = 0.002). FDO scores were not significantly different at time of switch. No adverse events related to the monitoring strategy were reported.
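The non-inferiority decision rule described above amounts to checking whether the upper limit of the one-sided 95% CI for the difference in clinical-failure risk (CD4 arm minus VL arm) falls below the 7.4% margin. The Python sketch below illustrates that logic with the reported proportions and arm sizes under a simple normal approximation; the trial itself compared Kaplan-Meier cumulative risks, so the exact interval differs.

```python
# Non-inferiority check: upper limit of the one-sided 95% CI for the risk
# difference must be below the pre-specified margin. Normal approximation only.
from math import sqrt
from scipy.stats import norm

margin = 0.074
p_vl, n_vl = 0.080, 356      # 3-year clinical failure risk and arm size, VL arm
p_cd4, n_cd4 = 0.074, 360    # 3-year clinical failure risk and arm size, CD4 arm

diff = p_cd4 - p_vl
se = sqrt(p_vl * (1 - p_vl) / n_vl + p_cd4 * (1 - p_cd4) / n_cd4)
upper_one_sided_95 = diff + norm.ppf(0.95) * se

print(f"risk difference: {diff:.3f}, one-sided 95% upper limit: {upper_one_sided_95:.3f}")
print("non-inferior:", upper_one_sided_95 < margin)
```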
Conclusions
The 3-year rates of clinical failure and loss of treatment options did not differ between strategies although the longer-term consequences of CD4 monitoring would need to be investigated. These results provide reassurance to treatment programs currently based on CD4 monitoring as VL measurement becomes more affordable and feasible in resource-limited settings.
Trial registration
ClinicalTrials.gov NCT00162682
Editors' Summary
Background
About 34 million people (most of them living in low- and middle-income countries) are currently infected with HIV, the virus that causes AIDS. HIV infection leads to the destruction of immune system cells (including CD4 cells, a type of white blood cell), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, most HIV-infected individuals died within 10 years of infection. Then, in 1996, highly active antiretroviral therapy (HAART)—combined drug regimens that suppress viral replication and allow restoration of the immune system—became available. For people living in affluent countries, HIV/AIDS became a chronic condition but, because HAART was expensive, HIV/AIDS remained a fatal illness for people living in resource-limited countries. In 2003, the international community declared HIV/AIDS a global health emergency and, in 2006, it set the target of achieving universal global access to HAART by 2010. By the end of 2011, 8 million of the estimated 14.8 million people in need of HAART in low- and middle-income countries were receiving treatment.
Why Was This Study Done?
At the time this trial was conceived, national and international recommendations were that HIV-positive individuals should start HAART when their CD4 count fell below 200 cells/mm3 and should have their CD4 count regularly monitored to optimize HAART. In 2013, the World Health Organization (WHO) recommendations were updated to promote expanded eligibility for HAART with a CD4 of 500 cells/mm3 or less for adults, adolescents, and older children although priority is given to individuals with CD4 count of 350 cells/mm3 or less. Because HIV often becomes resistant to first-line antiretroviral drugs, WHO also recommends that viral load—the amount of virus in the blood—should be monitored so that suspected treatment failures can be confirmed and patients switched to second-line drugs in a timely manner. This monitoring and switching strategy is widely used in resource-rich settings, but is still very difficult to implement for low- and middle-income countries where resources for monitoring are limited and access to costly second-line drugs is restricted. In this randomized non-inferiority trial, the researchers compare the performance of a CD4-based treatment monitoring and switching strategy with the standard viral load-based strategy among HIV-positive adults in Thailand. In a randomized trial, individuals are assigned different interventions by the play of chance and followed up to compare the effects of these interventions; a non-inferiority trial investigates whether one treatment is not worse than another.
What Did the Researchers Do and Find?
The researchers assigned about 700 HIV-positive adults who were beginning HAART for the first time to have their CD4 count (CD4 arm) or their CD4 count and viral load (VL arm) determined every 3 months. Participants were switched to a second-line therapy if their CD4 count declined by more than 30% from their peak CD4 count (CD4 arm) or if a viral load of more than 400 copies/ml was recorded (VL arm). The 3-year cumulative risk of clinical failure (defined as death, a new AIDS-defining event, or a CD4 count of less than 50 cells/mm3) was 8% in the VL arm and 7.4% in the CD4 arm. This difference in clinical failure risk met the researchers' predefined criterion for non-inferiority. The probability of a treatment switch was similar in the two arms, but the average time from treatment initiation to treatment switch and the average duration of a high viral load after treatment switch were both longer in the CD4 arm than in the VL arm. Finally, the future-drug-option score, a measure of viral drug resistance profiles, was similar in the two arms at the time of treatment switch.
What Do These Findings Mean?
These findings suggest that, in Thailand, a CD4 switching strategy is non-inferior in terms of clinical outcomes among HIV-positive adults 3 years after beginning HAART when compared to the recommended viral load-based switching strategy and that there is no difference between the strategies in terms of viral suppression and immune restoration after 3 years of follow-up. Importantly, however, even though patients in the CD4 arm spent longer with a high viral load than patients in the VL arm, the emergence of HIV mutants resistant to antiretroviral drugs was similar in the two arms. Although these findings provide no information about the long-term outcomes of the two monitoring strategies and may not be generalizable to routine care settings, they nevertheless provide reassurance that using CD4 counts alone to monitor HAART in HIV treatment programs in resource-limited settings is an appropriate strategy to use as viral load measurement becomes more affordable and feasible in these settings.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001494.
The World Health Organization provides information on all aspects of HIV/AIDS (in several languages); its 2010 recommendations for antiretroviral therapy for HIV infection in adults and adolescents are available as well as the June 2013 Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection: recommendations for a public health approach
The 2012 UNAIDS World AIDS Day Report provides up-to-date information about the AIDS epidemic and efforts to halt it
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS and summaries of recent research findings on HIV care and treatment
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on the global HIV/AIDS epidemic, on HIV and AIDS in Thailand, on universal access to AIDS treatment, and on starting, monitoring and switching HIV treatment (in English and Spanish)
The UK National Health Service Choices website provides information (including personal stories) about HIV and AIDS
More information about this trial (the PHPT-3 trial) is available
Patient stories about living with HIV/AIDS are available through Avert; the nonprofit website Healthtalkonline also provides personal stories about living with HIV, including stories about HIV treatment
doi:10.1371/journal.pmed.1001494
PMCID: PMC3735458  PMID: 23940461
8.  Bridging from Clinical Endpoints to Estimates of Treatment Value for External Decision Makers
Aim
While clinical endpoints provide important information on the efficacy of treatment in controlled conditions, they often are not relevant to decision makers trying to gauge the potential economic impact or value of new treatments. Therefore, it is often necessary to translate changes in cognition, function or behavior into changes in cost or other measures, which can be problematic if not conducted in a transparent manner. The Dependence Scale (DS), which measures the level of assistance a patient requires due to Alzheimer's disease (AD)-related deficits, may provide a useful measure of the impact of AD progression in a way that is relevant to patients, providers and payers, by linking clinical endpoints to estimates of cost effectiveness or value. The aim of this analysis was to test the association of the DS with clinical endpoints and AD-related costs.
Method
The relationship between DS score and other endpoints was explored using the Predictors Study, a large, multi-center cohort of patients with probable AD followed annually for four years. Enrollment required a modified Mini-Mental State Examination (mMMS) score ≥30, equivalent to a score of approximately ≥16 on the MMSE. DS summated scores (range: 0–15) were compared to measures of cognition (MMSE), function (Blessed Dementia Rating Scale, BDRS, 0–17), behavior, extrapyramidal symptoms (EPS), and psychotic symptoms (illusions, delusions or hallucinations). Also, estimates for total cost (sum of direct medical cost, direct non-medical cost, and cost of informal caregivers’ time) were compared to DS scores.
Results
For the 172 patients in the analysis, mean baseline scores were: DS: 5.2 (SD: 2.0), MMSE: 23.0 (SD: 3.5), BDRS: 2.9 (SD: 1.3), EPS: 10.8%, behavior: 28.9%, psychotic symptoms: 21.1%. After 4 years, mean scores were: DS: 8.9 (SD: 2.9), MMSE: 17.2 (SD: 4.7), BDRS: 5.2 (SD: 1.4), EPS: 37.5%, behavior: 60.0%, psychotic symptoms: 46.7%. At baseline, DS scores were significantly correlated with MMSE (r=−0.299, p<0.01), BDRS (r=0.610, p<0.01), behavior (r=0.2633, p=0.0005), EPS (r=0.1910, p=0.0137) and psychotic symptoms (r=0.253, p<0.01); at 4-year follow-up, DS scores were significantly correlated with MMSE (r=−0.3705, p=0.017) and BDRS (r=0.6982, p<0.001). Correlations between DS and behavior (r=−0.0085, p=0.96), EPS (r=0.3824, p=0.0794) and psychotic symptoms (r=0.130, ns) were not statistically significant at follow-up. DS scores were also significantly correlated with total costs at baseline (r=0.2615, p=0.0003) and follow-up (r=0.3359, p=0.0318).
Discussion
AD is associated with deficits in cognition, function and behavior, thus it is imperative that these constructs are assessed in trials of AD treatment. However, assessing multiple endpoints can lead to confusion for decision makers if treatments do not impact all endpoints similarly, especially if the measures are not used typically in practice. One potential method for translating these deficits into a more meaningful outcome would be to identify a separate construct, one that takes a broader view of the overall impact of the disease. Patient dependence, as measured by the DS, would appear to be a reasonable choice – it is associated with the three clinical endpoints, as well as measures of cost (medical and informal), thereby providing a bridge between measures of clinical efficacy and value in a single, transparent measure.
PMCID: PMC2694572  PMID: 19262963
9.  A Post Hoc Comparison of the Effects of Lisdexamfetamine Dimesylate and Osmotic-Release Oral System Methylphenidate on Symptoms of Attention-Deficit Hyperactivity Disorder in Children and Adolescents 
CNS Drugs  2013;27(9):743-751.
Introduction
There are limited head-to-head data comparing the efficacy of long-acting amfetamine- and methylphenidate-based psychostimulants as treatments for individuals with attention-deficit hyperactivity disorder (ADHD). This post hoc analysis provides the first parallel-group comparison of the effect of lisdexamfetamine dimesylate (lisdexamfetamine) and osmotic-release oral system methylphenidate (OROS-MPH) on symptoms of ADHD in children and adolescents.
Study Design
This was a post hoc analysis of a randomized, double-blind, parallel-group, dose-optimized, placebo-controlled, phase III study.
Setting
The phase III study was carried out in 48 centres across ten European countries.
Patients
The phase III study enrolled children and adolescents (aged 6–17 years) who met Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision criteria for a primary diagnosis of ADHD and who had a baseline ADHD Rating Scale IV (ADHD-RS-IV) total score of 28 or higher.
Intervention
Eligible patients were randomized (1:1:1) to receive a once-daily, optimized dose of lisdexamfetamine (30, 50 or 70 mg/day), placebo or OROS-MPH (18, 36 or 54 mg/day) for 7 weeks.
Main Outcome Measures
In this post hoc analysis, efficacy was assessed using the ADHD-RS-IV and Clinical Global Impressions-Improvement (CGI-I) scale. Responders were defined as those achieving at least a 30 % reduction from baseline in ADHD-RS-IV total score and a CGI-I score of 1 (very much improved) or 2 (much improved). The proportion of patients achieving an ADHD-RS-IV total score less than or equal to the mean for their age (based on normative data) was also determined. Endpoint was the last on-treatment visit with a valid assessment. Safety assessments included treatment-emergent adverse events (TEAEs) and vital signs.
Results
Of the 336 patients randomized, 332 were included in the safety population, 317 were included in the full analysis set and 196 completed the study. The mean (standard deviation) ADHD-RS-IV total score at baseline was 40.7 (7.31) for lisdexamfetamine, 41.0 (7.14) for placebo and 40.5 (6.72) for OROS-MPH. The least-squares (LS) mean change (standard error) in ADHD-RS-IV total score from baseline to endpoint was −24.3 (1.16) for lisdexamfetamine, −5.7 (1.13) for placebo and −18.7 (1.14) for OROS-MPH. The difference between lisdexamfetamine and OROS-MPH in LS mean change (95 % confidence interval [CI]) in ADHD-RS-IV total score from baseline to endpoint was statistically significant in favour of lisdexamfetamine (−5.6 [−8.4 to −2.7]; p < 0.001). The difference between lisdexamfetamine and OROS-MPH in the percentage of patients (95 % CI) with a CGI-I score of 1 or 2 at endpoint was 17.4 (5.0–29.8; p < 0.05; number needed to treat [NNT] 6), and the difference in the percentage of patients (95 % CI) achieving at least a 30 % reduction in ADHD-RS-IV total score and a CGI-I score of 1 or 2 was 18.3 (5.4–31.3; p < 0.05; NNT 6). The difference between lisdexamfetamine and OROS-MPH in the percentage of patients (95 % CI) with an ADHD-RS-IV total score less than or equal to the mean for their age at endpoint was 14.0 (0.6–27.4; p = 0.050). The overall frequency of TEAEs and the frequencies of decreased appetite, insomnia, decreased weight, nausea and anorexia TEAEs were greater in patients treated with lisdexamfetamine than in those treated with OROS-MPH, whereas headache and nasopharyngitis were more frequently reported in patients receiving OROS-MPH.
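The number needed to treat (NNT) values quoted above follow directly from the reported between-group differences in responder rates; the short Python sketch below reproduces that arithmetic for the 17.4% difference in CGI-I response, purely as an illustration of the calculation rather than a re-analysis.

```python
# NNT from an absolute risk difference: NNT = 1 / risk difference, conventionally
# rounded up to the next whole patient.
import math

risk_difference = 0.174           # reported difference in CGI-I responder proportions
nnt = math.ceil(1 / risk_difference)
print(f"NNT = {nnt}")             # 6, matching the value reported above
```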
Conclusions
This post hoc analysis showed that, at the doses tested, patients treated with lisdexamfetamine showed statistically significantly greater improvement in symptoms of ADHD than those receiving OROS-MPH, as assessed using the ADHD-RS-IV and CGI-I. The safety profiles of lisdexamfetamine and OROS-MPH were consistent with the known effects of stimulant medications.
doi:10.1007/s40263-013-0086-6
PMCID: PMC3751426  PMID: 23801529
10.  Stenting for Peripheral Artery Disease of the Lower Extremities 
Executive Summary
Background
Objective
In January 2010, the Medical Advisory Secretariat received an application from University Health Network to provide an evidentiary platform on stenting as a treatment for peripheral artery disease. The purpose of this health technology assessment is to examine the effectiveness of primary stenting as a treatment for peripheral artery disease of the lower extremities.
Clinical Need: Condition and Target Population
Peripheral artery disease (PAD) is a progressive disease occurring as a result of plaque accumulation (atherosclerosis) in the arterial system that carries blood to the extremities (arms and legs) as well as vital organs. The vessels that are most affected by PAD are the arteries of the lower extremities, the aorta, the visceral arterial branches, the carotid arteries and the arteries of the upper limbs. In the lower extremities, PAD affects three major arterial segments: i) aorto-iliac, ii) femoro-popliteal (FP) and iii) infra-popliteal (primarily tibial) arteries. The disease is commonly classified clinically as asymptomatic, claudication, rest pain and critical ischemia.
Although the prevalence of PAD in Canada is not known, it is estimated that 800,000 Canadians have PAD. The 2007 Trans-Atlantic Inter-Society Consensus (TASC) II Working Group for the Management of Peripheral Disease estimated the prevalence of PAD in Europe and North America to be 27 million people, accounting for 88,000 hospitalizations involving the lower extremities. The prevalence of PAD among elderly individuals has been reported to range from 12% to 29%. The National Health and Nutrition Examination Survey (NHANES) estimated that the prevalence of PAD is 14.5% among individuals 70 years of age and over.
Modifiable and non-modifiable risk factors associated with PAD include advanced age, male gender, family history, smoking, diabetes, hypertension and hyperlipidemia. PAD is a strong predictor of myocardial infarction (MI), stroke and cardiovascular death. Annually, approximately 10% of ischemic cardiovascular and cerebrovascular events can be attributed to the progression of PAD. Compared with patients without PAD, the 10-year risk of all-cause mortality is 3-fold higher in patients with PAD, who also have a 4-5 times greater risk of dying from a cardiovascular event. The risk of coronary heart disease is 6 times greater and increases 15-fold in patients with advanced or severe PAD. Among subjects with diabetes, the risk of PAD is often severe and associated with extensive arterial calcification; in these patients the risk of PAD increases two- to four-fold. The results of a Canadian public survey of knowledge of PAD demonstrated that Canadians are unaware of the morbidity and mortality associated with PAD. Despite its prevalence and cardiovascular risk implications, only 25% of PAD patients are undergoing treatment.
The diagnosis of PAD is difficult as most patients remain asymptomatic for many years. Symptoms do not present until there is at least 50% narrowing of an artery. In the general population, only 10% of persons with PAD have classic symptoms of claudication, 40% do not complain of leg pain, while the remaining 50% have a variety of leg symptoms different from classic claudication. The severity of symptoms depends on the degree of stenosis. The need to intervene is more urgent in patients with limb threatening ischemia as manifested by night pain, rest pain, ischemic ulcers or gangrene. Without successful revascularization those with critical ischemia have a limb loss (amputation) rate of 80-90% in one year.
Diagnosis of PAD is generally non-invasive and can be performed in physicians' offices or on an outpatient basis in a hospital. The most common diagnostic procedures include: 1) the Ankle Brachial Index (ABI), the ratio of the highest ankle blood pressure to the highest brachial (arm) pressure; and 2) Doppler ultrasonography, a diagnostic imaging procedure that uses a combination of ultrasound and waveform recordings to evaluate arterial flow in blood vessels. The value of the ABI provides an assessment of the severity of the disease. Other non-invasive imaging techniques include computed tomography (CT) and magnetic resonance angiography (MRA). Definitive diagnosis of PAD can be made by an invasive catheter-based angiography procedure, which shows a roadmap of the arteries depicting the exact location and length of the stenosis/occlusion. Angiography is the standard method against which all other imaging procedures are compared for accuracy.
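As a minimal sketch of the ABI calculation described above, the Python snippet below takes hypothetical pressure readings; the interpretation threshold shown in the comment is a common convention, not a value taken from this assessment.

```python
# Ankle Brachial Index (ABI): the ratio of the highest ankle pressure to the
# highest brachial (arm) pressure. Pressure readings below are hypothetical.
def ankle_brachial_index(ankle_pressures_mmHg, brachial_pressures_mmHg):
    return max(ankle_pressures_mmHg) / max(brachial_pressures_mmHg)

abi = ankle_brachial_index([92, 88], [130, 134])   # hypothetical readings
print(f"ABI = {abi:.2f}")   # ~0.69; values well below 1.0 are commonly taken to suggest PAD
```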
More than 70% of patients diagnosed with PAD remain stable or improve with conservative management involving pharmacologic agents and lifestyle modifications. Significant PAD symptoms are well known to negatively influence an individual's quality of life. For those who do not improve, invasive or non-invasive revascularization methods can be used to restore peripheral circulation.
Technology Under Review
A stent is a wire mesh “scaffold” that is permanently implanted in the artery to keep the artery open and can be combined with angioplasty to treat PAD. There are two types of stents, i) balloon-expandable and ii) self-expandable, and both are available in varying lengths. The former uses an angioplasty balloon to expand and set the stent within the arterial segment. More recently, drug-eluting stents have been developed; these release small amounts of medication intended to reduce neointimal hyperplasia, which can cause re-stenosis at the stent site. Endovascular stenting avoids the problems of early elastic recoil, residual stenosis and flow-limiting dissection after balloon angioplasty.
Research Questions
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion), is primary stenting more effective than percutaneous transluminal angioplasty (PTA) in improving patency?
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion), does primary stenting provide immediate success compared to PTA?
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion), is primary stenting associated with fewer complications compared to PTA?
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion), does primary stenting compared to PTA reduce the rate of re-intervention?
In individuals with PAD of the lower extremities (superficial femoral artery, infra-popliteal, crural and iliac artery stenosis or occlusion) is primary stenting more effective than PTA in improving clinical and hemodynamic success?
Are drug eluting stents more effective than bare stents in improving patency, reducing rates of re-interventions or complications?
Research Methods
Literature Search
A literature search was performed on February 2, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, the Cochrane Library, and the International Agency for Health Technology Assessment (INAHTA). Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. The quality of evidence was assessed as high, moderate, low or very low according to GRADE methodology.
Inclusion Criteria
English language full-reports from 1950 to January Week 3, 2010
Comparative randomized controlled trials (RCTs), systematic reviews and meta-analyses of RCTs
Proven diagnosis of PAD of the lower extremities in all patients.
Adult patients at least 18 years of age.
Stent as at least one treatment arm.
Patency, re-stenosis, re-intervention, technical success, hemodynamic (ABI) and clinical improvement, and complications reported as at least one outcome.
Exclusion Criteria
Non-randomized studies
Observational studies (cohort or retrospective studies) and case reports
Feasibility studies
Studies that evaluated stenting but not as a primary intervention
Outcomes of Interest
The primary outcome measure was patency. Secondary measures included technical success, re-intervention, complications, hemodynamic (ankle brachial pressure index, treadmill walking distance) and clinical success or improvement according to Rutherford scale. It was anticipated, a priori, that there would be substantial differences among trials regarding the method of examination and definitions of patency or re-stenosis. Where studies reported only re-stenosis rates, patency rates were calculated as 1 minus re-stenosis rates.
Statistical Analysis
Odds ratios (for binary outcomes) or mean differences (for continuous outcomes) with 95% confidence intervals (CI) were calculated for each endpoint. An intention-to-treat (ITT) principle was used, with the total number of patients randomized to each study arm as the denominator for each proportion. Sensitivity analysis was performed using a per-protocol approach. A pooled odds ratio (POR) or mean difference for each endpoint was then calculated across all trials reporting that endpoint using a fixed-effects model. PORs were calculated for comparisons of primary stenting versus PTA or other alternative procedures. The level of significance was set at alpha=0.05. Homogeneity was assessed using the chi-square test, I2 and visual inspection of forest plots. If heterogeneity was encountered within groups (P < 0.10), a random-effects model was used. All statistical analyses were performed using RevMan 5. Where sufficient data were available, these analyses were repeated within subgroups of patients defined by time of outcome assessment to evaluate the sustainability of treatment benefit. Results were pooled based on the diseased artery and stent type.
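As a rough illustration of fixed-effects pooling of odds ratios with a chi-square heterogeneity check, the Python sketch below uses inverse-variance weighting on hypothetical 2x2 counts. It is a simplification of the analysis described above: RevMan typically uses Mantel-Haenszel weighting for binary outcomes, and the trial counts here are invented for illustration only.

```python
# Fixed-effect (inverse-variance) pooling of odds ratios with Cochran's Q and I^2.
import numpy as np
from scipy.stats import chi2

# Per-trial 2x2 counts (hypothetical): (events_stent, n_stent, events_pta, n_pta)
trials = [(18, 60, 22, 58), (30, 102, 35, 100), (12, 45, 10, 44)]

log_ors, weights = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    log_or = np.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d          # variance of the log odds ratio
    log_ors.append(log_or)
    weights.append(1 / var)                       # inverse-variance weight

log_ors, weights = np.array(log_ors), np.array(weights)
pooled = np.sum(weights * log_ors) / np.sum(weights)
se = 1 / np.sqrt(np.sum(weights))
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])

q = np.sum(weights * (log_ors - pooled) ** 2)     # Cochran's Q statistic
df = len(trials) - 1
i2 = max(0.0, (q - df) / q) * 100                 # I^2 (%)
p_het = chi2.sf(q, df)

print(f"pooled OR {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}, {ci[1]:.2f})")
print(f"heterogeneity: Q={q:.2f}, p={p_het:.3f}, I^2={i2:.0f}%")
```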
Summary of Findings
Balloon-expandable stents vs PTA in superficial femoral artery disease
Based on a moderate quality of evidence, there is no significant difference in patency between primary stenting using balloon-expandable bare metal stents and PTA at 6, 12 and 24 months in patients with superficial femoral artery disease. The pooled ORs for patency and their corresponding 95% CIs are: 6 months 1.26 (0.74, 2.13); 12 months 0.95 (0.66, 1.38); and 24 months 0.72 (0.34, 1.55).
There is no significant difference in clinical improvement, re-interventions, peri- and post-operative complications, mortality and amputations between primary stenting using balloon-expandable bare stents and PTA in patients with superficial femoral artery disease. The pooled ORs and their corresponding 95% CIs are: clinical improvement 0.85 (0.50, 1.42); ankle brachial index 0.01 (-0.02, 0.04); re-intervention 0.83 (0.26, 2.65); complications 0.73 (0.43, 1.22); all-cause mortality 1.08 (0.59, 1.97); and amputation rates 0.41 (0.14, 1.18).
Self-expandable stents vs PTA in superficial femoral artery disease
Based on a moderate quality of evidence, primary stenting using self-expandable bare metal stents is associated with a significant improvement in patency at 6, 12 and 24 months in patients with superficial femoral artery disease. The pooled ORs for patency and their corresponding 95% CIs are: 6 months 2.35 (1.06, 5.23); 12 months 1.54 (1.01, 2.35); and 24 months 2.18 (1.00, 4.78). However, the benefit of primary stenting is not observed for clinical improvement, re-intervention, peri- and post-operative complications, mortality or amputation in patients with superficial femoral artery disease. The pooled ORs and their corresponding 95% CIs are: clinical improvement 0.61 (0.37, 1.01); ankle brachial index 0.01 (-0.06, 0.08); re-intervention 0.60 (0.36, 1.02); complications 1.60 (0.53, 4.85); all-cause mortality 3.84 (0.74, 19.22); and amputation rates 1.96 (0.20, 18.86).
Balloon expandable stents vs PTA in iliac artery occlusive disease
Based on a moderate quality of evidence, and despite immediate technical success (OR 12.23 [7.17, 20.88]), primary stenting is not associated with significant improvements in patency, clinical status, treadmill walking distance or QoL, or with reductions in re-intervention, complications, cardiovascular events, all-cause mortality or amputation rates, in patients with intermittent claudication caused by iliac artery occlusive disease. The pooled ORs and their corresponding 95% CIs are: patency 1.03 (0.56, 1.87); clinical improvement 1.08 (0.60, 1.94); walking distance 3.00 (-12.96, 18.96); re-intervention 1.16 (0.71, 1.90); complications 0.56 (0.20, 1.53); all-cause mortality 0.89 (0.47, 1.71); QoL 0.40 (-4.42, 5.52); cardiovascular events 1.16 (0.56, 2.40); and amputation rates 0.37 (0.11, 1.23). To date, no RCTs are available evaluating self-expandable stents for common or external iliac artery stenosis or occlusion.
Drug-eluting stent vs balloon-expandable bare metal stents in crural arteries
Based on a very low quality of evidence, at 6 months of follow-up, sirolimus drug-eluting stents are associated with a reduction in target vessel revascularization and re-stenosis rates compared with balloon-expandable bare metal stents in patients with atherosclerotic lesions of the crural (tibial) arteries. The ORs and their corresponding 95% CIs are: re-stenosis 0.09 (0.03, 0.28) and TVR 0.15 (0.05, 0.47) at 6 months of follow-up. Both types of stent offer similar immediate success. Limitations of this study include a short follow-up period, a small sample, and no assessment of mortality as an outcome. Further research is needed to confirm efficacy and safety.
PMCID: PMC3377569  PMID: 23074395
11.  Efficacy and safety of duloxetine 60 mg once daily in major depressive disorder: a review with expert commentary 
Drugs in Context  2013;2013:212245.
Objective:
Major depressive disorder (MDD) is a significant public health concern and challenges health care providers to intervene with appropriate treatment. This article provides an overview of efficacy and safety information for duloxetine 60 mg/day in the treatment of MDD, including its effect on painful physical symptoms (PPS).
Design:
A literature search was conducted for articles and pooled analyses reporting information regarding the use of duloxetine 60 mg/day in placebo-controlled trials.
Setting:
Placebo-controlled, active-comparator, short- and long-term studies were reviewed.
Participants:
Adult (≥18 years) patients with MDD.
Measurements:
Effect sizes for the continuous outcome (change from baseline to endpoint) and categorical outcomes (response and remission rates) were calculated using the primary measures of the 17-item Hamilton Rating Scale for Depression (HAMD-17) or the Montgomery–Åsberg Depression Rating Scale (MADRS) total score. The Brief Pain Inventory and Visual Analogue Scales were used to assess improvements in PPS. The Glass estimation method was used to calculate effect sizes, and numbers needed to treat (NNT) were calculated based on HAMD-17 and MADRS total scores for remission and response rates. Safety data were examined via the incidence of treatment-emergent adverse events and mean changes in vital-sign measures.
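As a rough illustration of the two summary measures named above, the sketch below computes a Glass effect size (mean difference standardized by the control group's standard deviation) and a number needed to treat from response rates. The input values are hypothetical and are not figures from the pooled analyses.

import math

def glass_delta(mean_treat, mean_ctrl, sd_ctrl):
    """Glass's delta: mean difference standardized by the control group's SD."""
    return (mean_treat - mean_ctrl) / sd_ctrl

def nnt(p_treat, p_ctrl):
    """Number needed to treat = 1 / absolute risk difference (rounded up)."""
    return math.ceil(1 / abs(p_treat - p_ctrl))

# Hypothetical values: baseline-to-endpoint improvements on HAMD-17 and response rates
print(glass_delta(10.2, 7.4, 7.0))   # ~0.40, a small-to-moderate effect
print(nnt(0.55, 0.40))               # response rates 55% vs 40% -> NNT of 7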
Results:
Treatment with duloxetine was associated with small-to-moderate effect sizes in the range of 0.12 to 0.72 for response rate and 0.07 to 0.65 for remission rate. NNTs were in the range of 3 to 16 for response and 3 to 29 for remission. Statistically significant improvements (p≤0.05) were observed in duloxetine-treated patients compared with placebo-treated patients in PPS and quality of life. The safety profile of the 60-mg dose was consistent with duloxetine labeling, with the most commonly reported adverse events being nausea, dry mouth, diarrhea, dizziness, constipation, fatigue, and decreased appetite.
Conclusion:
These results reinforce the efficacy and tolerability of duloxetine 60 mg/day as an effective short- and long-term treatment for adults with MDD. The evidence of the independent analgesic effect of duloxetine 60 mg/day supports its use as a treatment for patients with PPS associated with depression. This review is limited by the fact that it included randomized clinical trials with different study designs. Furthermore, data from randomized controlled trials may not generalize well to real clinical practice.
doi:10.7573/dic.212245
PMCID: PMC3884746  PMID: 24432034
duloxetine; major depressive disorder; painful physical symptoms; quality of life; effect size; safety and tolerability
12.  Threshold Haemoglobin Levels and the Prognosis of Stable Coronary Disease: Two New Cohorts and a Systematic Review and Meta-Analysis 
PLoS Medicine  2011;8(5):e1000439.
Anoop Shah and colleagues performed a retrospective cohort study and a systematic review, and show evidence that in people with stable coronary disease there were threshold hemoglobin values below which mortality increased in a graded, continuous fashion.
Background
Low haemoglobin concentration has been associated with adverse prognosis in patients with angina and myocardial infarction (MI), but the strength and shape of the association and the presence of any threshold has not been precisely evaluated.
Methods and findings
A retrospective cohort study was carried out using the UK General Practice Research Database. 20,131 people with a new diagnosis of stable angina and no previous acute coronary syndrome, and 14,171 people with first MI who survived for at least 7 days were followed up for a mean of 3.2 years. Using semi-parametric Cox regression and multiple adjustment, there was evidence of threshold haemoglobin values below which mortality increased in a graded continuous fashion. For men with MI, the threshold value was 13.5 g/dl (95% confidence interval [CI] 13.2–13.9); the 29.5% of patients with haemoglobin below this threshold had an associated hazard ratio for mortality of 2.00 (95% CI 1.76–2.29) compared to those with haemoglobin values in the lowest risk range. Women tended to have lower threshold haemoglobin values (e.g., for MI 12.8 g/dl; 95% CI 12.1–13.5), but the shape and strength of association did not differ between the genders, nor between patients with angina and MI. We did a systematic review and meta-analysis that identified ten previously published studies, reporting a total of only 1,127 endpoints, but none evaluated thresholds of risk.
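The threshold analysis itself required patient-level data and a semi-parametric Cox model, which cannot be reproduced here. The sketch below only illustrates, on simulated data and assuming the lifelines library is available, how hazard ratios across hemoglobin bands can reveal a graded increase in mortality below a threshold; band cut-points and hazard values are arbitrary.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000

# Simulated cohort: the mortality hazard rises only below a hemoglobin threshold of ~13.5 g/dl
hb = rng.normal(14.0, 1.8, n)
log_hr = np.where(hb < 13.5, 0.5 * (13.5 - hb), 0.0)   # graded increase below the threshold
time = rng.exponential(10 / np.exp(log_hr))
event = (time < 5).astype(int)                          # administrative censoring at 5 years
time = np.minimum(time, 5)

# Categorize hemoglobin into bands and estimate hazard ratios against the highest band
df = pd.DataFrame({"time": time, "event": event,
                   "hb_band": pd.cut(hb, [0, 12, 13.5, 15, 99],
                                     labels=["<12", "12-13.5", "13.5-15", ">=15"])})
df = pd.get_dummies(df, columns=["hb_band"]).drop(columns=["hb_band_>=15"])

cph = CoxPHFitter()
cph.fit(df.astype(float), duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios should rise in the bands below the simulated threshold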
Conclusions
There is an association between low haemoglobin concentration and increased mortality. A large proportion of patients with coronary disease have haemoglobin concentrations below the thresholds of risk defined here. Intervention trials would clarify whether increasing the haemoglobin concentration reduces mortality.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Coronary artery disease is the main cause of death in high-income countries and the second most common cause of death in middle- and low-income countries, accounting for 16.3%, 13.9%, and 9.4% of all deaths, respectively, in 2004. Many risk factors, such as high blood pressure and high blood cholesterol level, are known to be associated with coronary artery disease, and prevention and treatment of such factors remains one of the key strategies in the management of coronary artery disease. Recent studies have suggested that low hemoglobin may be associated with mortality in patients with coronary artery disease. Therefore, using blood hemoglobin level as a prognostic biomarker for patients with stable coronary artery disease may be of potential benefit, especially as measurement of hemoglobin is almost universal in such patients and there are available interventions that effectively increase hemoglobin concentration.
Why was This Study Done?
Much more needs to be understood about the relationship between low hemoglobin and coronary artery disease before hemoglobin levels can potentially be used as a clinical prognostic biomarker. Previous studies have been limited in their ability to describe the shape of this relationship—which means that it is uncertain whether there is a “best” hemoglobin threshold or a continuous graded relationship from “good” to “bad”—to assess gender differences, and to compare patients with angina or who have experienced previous myocardial infarction. In order to inform these knowledge gaps, the researchers conducted a retrospective analysis of patients from a prospective observational cohort as well as a systematic review and meta-analysis (statistical analysis) of previous studies.
What Did the Researchers Do and Find?
The researchers conducted a systematic review and meta-analysis of previous studies and found ten relevant studies, but none evaluated thresholds of risk, only linear relationships.
The researchers carried out a new study using the UK's General Practice Research Database—a national research tool that uses anonymized electronic clinical records of a representative sample of the UK population, with details of consultations, diagnoses, referrals, prescriptions, and test results—as the basis for their analysis. They identified and collected information from two cohorts of patients: those with new onset stable angina and no previous acute coronary syndrome; and those with a first myocardial infarction (heart attack). For these patients, the researchers also looked at all values of routinely recorded blood parameters (including hemoglobin) and information on established cardiovascular risk factors, such as smoking. The researchers followed up patients using death of any cause as a primary endpoint and put this data into a statistical model to identify upper and lower thresholds of an optimal hemoglobin range beyond which mortality risk increased.
The researchers found that there was a threshold hemoglobin value below which mortality continuously increased in a graded manner. For men with myocardial infarction, the threshold value was 13.5 g/dl: 29.5% of patients had hemoglobin below this threshold and had a hazard ratio for mortality of 2.00 compared to those with hemoglobin values in the lowest risk range. Women had a lower threshold hemoglobin value than men: 12.8 g/dl for women with myocardial infarction, but the shape and strength of association did not differ between the genders, or between patients with angina and myocardial infarction.
What Do These Findings Mean?
These findings suggest that there are thresholds of hemoglobin that are associated with increased risk of mortality in patients with angina or myocardial infarction. A substantial proportion of patients (15%–30%) have a hemoglobin level that places them at markedly higher risk of death compared to patients with lowest risk hemoglobin levels and importantly, these thresholds are higher than clinicians might anticipate—and are remarkably similar to World Health Organization anemia thresholds of 12 g/dl for women and 13 g/dl for men. Despite the limitations of these observational findings, this study supports the rationale for conducting future randomized controlled trials to assess whether hemoglobin levels are causal and whether clinicians should intervene to increase hemoglobin levels, for example by oral iron supplementation.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000439.
Wikipedia provides information about hemoglobin (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization provides an overview of the global prevalence of coronary artery disease, a factsheet on the top ten causes of death, as well as information on anemia
doi:10.1371/journal.pmed.1000439
PMCID: PMC3104976  PMID: 21655315
13.  A Transdiagnostic Community-Based Mental Health Treatment for Comorbid Disorders: Development and Outcomes of a Randomized Controlled Trial among Burmese Refugees in Thailand 
PLoS Medicine  2014;11(11):e1001757.
In a randomized controlled trial, Paul Bolton and colleagues investigate whether a transdiagnostic community-based intervention is effective for improving mental health symptoms among Burmese refugees in Thailand.
Please see later in the article for the Editors' Summary
Background
Existing studies of mental health interventions in low-resource settings have employed highly structured interventions delivered by non-professionals that typically do not vary by client. Given high comorbidity among mental health problems and implementation challenges with scaling up multiple structured evidence-based treatments (EBTs), a transdiagnostic treatment could provide an additional option for approaching community-based treatment of mental health problems. Our objective was to test such an approach specifically designed for flexible treatments of varying and comorbid disorders among trauma survivors in a low-resource setting.
Methods and Findings
We conducted a single-blinded, wait-list randomized controlled trial of a newly developed transdiagnostic psychotherapy, Common Elements Treatment Approach (CETA), for low-resource settings, compared with wait-list control (WLC). CETA was delivered by lay workers to Burmese survivors of imprisonment, torture, and related traumas, with flexibility based on client presentation. Eligible participants reported trauma exposure and met severity criteria for depression and/or posttraumatic stress (PTS). Participants were randomly assigned to CETA (n = 182) or WLC (n = 165). Outcomes were assessed by interviewers blinded to participant allocation using locally adapted standard measures of depression and PTS (primary outcomes) and functional impairment, anxiety symptoms, aggression, and alcohol use (secondary outcomes). Primary analysis was intent-to-treat (n = 347), including 73 participants lost to follow-up. CETA participants experienced significantly greater reductions of baseline symptoms across all outcomes with the exception of alcohol use (alcohol use analysis was confined to problem drinkers). The difference in mean change from pre-intervention to post-intervention between intervention and control groups was −0.49 (95% CI: −0.59, −0.40) for depression, −0.43 (95% CI: −0.51, −0.35) for PTS, −0.42 (95% CI: −0.58, −0.27) for functional impairment, −0.48 (95% CI: −0.61, −0.34) for anxiety, −0.24 (95% CI: −0.34, −0.15) for aggression, and −0.03 (95% CI: −0.44, 0.50) for alcohol use. This corresponds to a 77% reduction in mean baseline depression score among CETA participants compared to a 40% reduction among controls, with respective values for the other outcomes of 76% and 41% for anxiety, 75% and 37% for PTS, 67% and 22% for functional impairment, and 71% and 32% for aggression. Effect sizes (Cohen's d) were large for depression (d = 1.16) and PTS (d = 1.19); moderate for impaired function (d = 0.63), anxiety (d = 0.79), and aggression (d = 0.58); and none for alcohol use. There were no adverse events. Limitations of the study include the lack of long-term follow-up, non-blinding of service providers and participants, and no placebo or active comparison intervention.
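For readers unfamiliar with the two summary statistics used above, the sketch below shows, on hypothetical change scores rather than the trial data, how a between-group Cohen's d and a percent reduction in mean symptom score are computed.

import numpy as np

def cohens_d(x_treat, x_ctrl):
    """Cohen's d for the difference in pre-to-post change between two groups,
    using the pooled standard deviation of the change scores."""
    nt, nc = len(x_treat), len(x_ctrl)
    sp = np.sqrt(((nt - 1) * np.var(x_treat, ddof=1) +
                  (nc - 1) * np.var(x_ctrl, ddof=1)) / (nt + nc - 2))
    return (np.mean(x_treat) - np.mean(x_ctrl)) / sp

def pct_reduction(pre, post):
    """Percent reduction in the mean symptom score from pre- to post-intervention."""
    return 100 * (np.mean(pre) - np.mean(post)) / np.mean(pre)

# Hypothetical change scores: larger reductions in the treated group
rng = np.random.default_rng(1)
change_treat = rng.normal(-1.0, 0.6, 180)
change_ctrl = rng.normal(-0.5, 0.6, 165)
print(cohens_d(change_treat, change_ctrl))   # roughly -0.8, a large effect in magnitude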
Conclusions
CETA provided by lay counselors was highly effective across disorders among trauma survivors compared to WLCs. These results support the further development and testing of transdiagnostic approaches as possible treatment options alongside existing EBTs.
Trial registration
ClinicalTrials.gov NCT01459068
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Worldwide, one in four people will experience a mental health disorder at some time during their life. Although many evidence-based treatments (EBTs), most involving some sort of cognitive behavioral therapy (talking therapies that help people manage their mental health problems by changing the way they think and behave), are now available, many people with mental health disorders never receive any treatment for their condition. The situation is particularly bad for people living in low-resource settings, where a delivery model for EBTs based on referral to mental health professionals is problematic given that mental health professionals are scarce. To facilitate widespread access to mental health care among poor and/or rural populations in low-resource settings, EBTs need to be deliverable at the primary or community level by non-professionals. Moreover, because there is a large burden of trauma-related mental health disorders in low-resource settings and because trauma increases the risk of multiple mental health problems, treatment options that address comorbid (coexisting) mental health problems in low-resource settings are badly needed.
Why Was This Study Done?
One possible solution to the problem of delivering EBTs for comorbid mental health disorders in low-resource settings is “transdiagnostic” treatment. Many mental health EBTs for different disorders share common components. Transdiagnostic treatments recognize these facts and apply these common components to a range of disorders rather than creating a different structured treatment for each diagnosis. The Common Elements Treatment Approach (CETA), for example, trains counselors in a range of components that are similar across EBTs and teaches counselors how to choose components, their order, and dose, based on their client's problems. This flexible approach, which was designed for delivery by non-professional providers in low-resource settings, provides counselors with the skills needed to treat depression, anxiety, and posttraumatic stress—three trauma-related mental health disorders. In this randomized controlled trial, the researchers investigate the use of CETA among Burmese refugees living in Thailand, many of whom are survivors of decades-long harsh military rule in Myanmar. A randomized controlled trial compares the outcomes of individuals chosen to receive different interventions through the play of chance.
What Did the Researchers Do and Find?
The researchers assigned Burmese survivors or witnesses of imprisonment, torture, and related traumas who met symptom criteria for significant depression and/or posttraumatic stress to either the CETA or wait-list control arm of their trial. Lay counselors treated the participants in the CETA arm by delivering CETA components—for example, “psychoeducation” (which teaches clients that their symptoms are normal and experienced by many people) and “cognitive coping” (which helps clients understand that how they think about an event can impact their feelings and behavior)—chosen to reflect the client's priority problems at presentation. Participants in the control arm received regular calls from the trial coordinator to check on their safety but no other intervention. Participants in the CETA arm experienced greater reductions of baseline symptoms of depression, posttraumatic stress, anxiety, and aggression than participants in the control arm. For example, there was a 77% reduction in the average depression score from before the intervention to after the intervention among participants in the CETA arm, but only a 40% reduction in the depression score among participants in the control arm. Importantly, the effect size of CETA (a statistical measure that quantifies the importance of the difference between two groups) was large for depression and posttraumatic stress, the primary outcomes of the trial. That is, compared to no treatment, CETA had a large effect on the symptoms of depression and posttraumatic stress experienced by the trial participants.
What Do These Findings Mean?
These findings suggest that, among Burmese survivors and witnesses of torture and other trauma living in Thailand, CETA delivered by lay counselors was a highly effective treatment for comorbid mental disorders compared to no treatment (the wait-list control). However, these findings may not be generalizable to other low-resource settings, they provide no information about long-term outcomes, and they do not identify which aspects of CETA were responsible for symptom improvement or explain the improvements seen among the control participants. Given that the study compared CETA to no treatment rather than a placebo (dummy) or active comparison intervention, it is not possible to conclude that CETA works better than existing treatments. Nevertheless, these findings support the continued development and assessment of transdiagnostic approaches for the treatment of mental health disorders in low-resource settings where treatment access and comorbid mental health disorders are important challenges.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001757.
The World Health Organization provides background information about mental health
The US National Institute of Mental Health provides information about a range of mental health disorders and about cognitive behavioral therapy
The UK National Health Service Choices website has information about cognitive behavioral therapy, including some personal stories and links to other related mental health resources on the Choices website
A short introduction to transdiagnosis and CETA written by one of the trial authors is available
Information about this trial is available on the ClinicalTrials.gov website
The UN Refugee Agency provides information about Burmese (Myanmar) refugees in Thailand
doi:10.1371/journal.pmed.1001757
PMCID: PMC4227644  PMID: 25386945
14.  Repetitive Transcranial Magnetic Stimulation for the Treatment of Major Depressive Disorder 
Executive Summary
Objective
This review was conducted to assess the effectiveness of repetitive transcranial magnetic stimulation (rTMS) in the treatment of major depressive disorder (MDD).
The Technology
rTMS is a noninvasive way to stimulate nerve cells in areas of the brain. During rTMS, an electrical current passes through a wire coil placed over the scalp. The current induces a magnetic field that produces an electrical field in the brain that then causes nerve cells to depolarize, resulting in the stimulation or disruption of brain activity.
Researchers have investigated rTMS as an option to treat MDD, as an add-on to drug therapy, and, in particular, as an alternative to electroconvulsive therapy (ECT) for patients with treatment-resistant depression.
The advantages of rTMS over ECT for patients with severe refractory depression are that general anesthesia is not needed, it is an outpatient procedure, it requires less energy, the stimulation is specific and targeted, and convulsion is not required. The advantages of rTMS as an add-on treatment to drug therapy may include hastening of the clinical response when used with antidepressant drugs.
Review Strategy
The Medical Advisory Secretariat used its standard search strategy to locate international health technology assessments and English-language journal articles published from January 1996 to March 2004.
Summary of Findings
Some early meta-analyses suggested rTMS might be effective for the treatment of MDD (for treatment-resistant MDD and as an add-on treatment to drug therapy for patients not specifically defined as treatment resistant). There were, however, several crucial methodological limitations in the included studies that were not critically assessed. These are discussed below.
Recent meta-analyses (including 2 international health technology assessments) have done evidence-based critical analyses of studies that have assessed rTMS for MDD. The 2 most recent health technology assessments (from the Oxford Cochrane Collaboration and the Norwegian Centre for Health Technology Assessment) concluded that there is no evidence that rTMS is effective for the treatment of MDD, either as compared with a placebo for patients with treatment-resistant or nontreatment-resistant MDD, or as an alternative to ECT for patients with treatment-resistant MDD. This was mainly due to the poor quality of the studies.
The major methodological limitations identified in older meta-analyses, recent health technology assessments, and the most recently published trials (Level 2–4 evidence) on the effectiveness of rTMS for MDD are discussed below.
Small sample size was a limitation acknowledged by many of the authors. There was also a lack of a priori sample size calculation or justification.
Biased randomization may have been a problem. Generally, the published reports lacked detailed information on the method of allocation concealment used. This is important because it is impossible to determine if there was a possible influence (direct or indirect) in the allocation of the patients to different treatment groups.
The trials were single-blind (outcomes were evaluated by external blinded assessors) rather than double-blind. Double blinding is more robust, because neither the participants nor the investigators know which participants are receiving the active treatment and which are getting a placebo. Those administering rTMS, however, cannot be blinded to whether they are administering the active treatment or a placebo.
There was patient variability among the studies. In some studies, the authors said that patients were “medication resistant,” but the definitions of resistant, if provided, were inconsistent or unclear. For example, some described “medication resistant” as failing at least one trial of drugs during the current depressive episode. Furthermore, it was unclear if the term “medication resistant” referred to antidepressants only or to combinations of antidepressants and other drug augmentation strategies (such as neuroleptics, benzodiazepine, carbamazepine, and lithium). Also variable was the type of depression (i.e., unipolar and/or bipolar), if patients were inpatients or outpatients, if they had psychotic symptoms or no psychotic symptoms, and the chronicity of depression.
Dropouts or withdrawals were a concern. Some studies reported that patients dropped out, but provided no further details. Intent-to-treat analysis was not done in any of the trials. This is important, because ignoring patients who drop out of a trial can bias the results, usually in favour of the treatment. This is because patients who withdraw from trials are less likely to have had the treatment, more likely to have missed their interim checkups, and more likely to have experienced adverse effects when taking the treatment, compared with patients who do not withdraw. (1)
Measurement of treatment outcomes using scales or inventories makes interpreting results and drawing conclusions difficult. The most common scale, the Hamilton Depression Rating Scale (HDRS), is based on a semistructured interview. Some authors (2) reported that rating scales based on semistructured interviews are more susceptible to observation bias than are self-administered questionnaires such as the Beck Depression Inventory (BDI). Martin et al. (3) argued that the lack of consistency in effect as determined by the 2 scales (a positive result after 2 weeks of treatment as measured by the HDRS and a negative result for the BDI) makes definitive conclusions about the nature of the change in mood of patients impossible. It was suggested that, because of difficulties interpreting results from psychometric scales (4) and the subjective or unstable character of MDD, other, more objective outcome measures such as readmission to hospital, time to hospital discharge, time to adjunctive treatment, and time off work should be used to assess rTMS for the treatment of depression.
A placebo effect could have influenced the results. Many studies reported response rates for patients who received placebo treatment. For example, Klein et al. (5) reported a control group response rate as high as 25%. Patients receiving placebo rTMS may receive a small dose of magnetic energy that may alter their depression.
Short-term studies were the most common. Patients received rTMS treatment for 1 to 2 weeks. Most studies followed-up patients for 2 to 4 weeks post-treatment. Dannon et al. (6) followed-up patients who responded to a course of ECT or rTMS for up to 6 months; however, the assessment procedure was not blinded, the medication regimen during follow-up was not controlled, and initial baseline data for the patient groups were not reported. The long-term effectiveness of rTMS for the treatment of depression is unknown, as is the long-term use, if any, of maintenance therapy. The cost-effectiveness of rTMS for the treatment of depression is also unknown. A lack of long-term studies makes cost-effectiveness analysis difficult.
The complexity of possible combinations for administering rTMS makes comparing like with like difficult. Wasserman and Lisanby (7) have said that the method for precisely targeting the stimulation in this area is unreliable. It is unknown if the left dorsolateral prefrontal cortex is the optimal location for treatment. Further, differences in rTMS administration include number of trains per session, duration of each train, and motor threshold.
Clinical versus statistical significance. Several meta-analyses and studies have found that the degree of therapeutic change associated with rTMS across studies is relatively modest; that is, results may be statistically, but not necessarily clinically, significant. (8-11) Conventionally, a 50% reduction in HDRS scores is accepted as a clinically important reduction in depression. Although some studies have observed a statistically significant reduction in the depression rating, many have not shown the clinically significant reduction of 50% on the HDRS. (11-13) Therefore, few patients in these studies would meet the standard criteria for response. (9)
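The distinction between a statistically detectable mean improvement and a clinically significant response can be illustrated with the conventional 50% HDRS criterion; the scores below are hypothetical, not data from any of the cited trials.

def hdrs_responders(baseline, endpoint):
    """Classify patients as responders using the conventional >=50% reduction in HDRS score."""
    return [(b - e) / b >= 0.5 for b, e in zip(baseline, endpoint)]

# A uniform ~25% improvement may be statistically significant in a large sample,
# yet no individual patient meets the 50% response criterion
baseline = [24, 22, 26, 20, 28]
endpoint = [18, 17, 19, 16, 20]
print(sum(hdrs_responders(baseline, endpoint)))   # 0 responders despite improvement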
Clinical/methodological diversity and statistical heterogeneity. In the Norwegian health technology assessment, Aarre et al. (14) said that a formal meta-analysis was not feasible because the designs of the studies varied too much, particularly in how rTMS was administered and in the characteristics of the patients. They noted that the quality of the study designs was poor. The 12 studies that comprised the assessment had small samples, and highly variable inclusion criteria and study designs. The patients’ previous histories, diagnoses, treatment histories, and treatment settings were often insufficiently characterized. Furthermore, many studies reported that patients had treatment-resistant MDD, yet did not list clear criteria for the designation. Without this information, Aarre and colleagues suggested that the interpretation of the results is difficult and the generalizability of results is questionable. They concluded that rTMS cannot be recommended as a standard treatment for depression: “More, larger and more carefully designed studies are needed to demonstrate convincingly a clinically relevant effect of rTMS.”
In the Cochrane Collaboration systematic review, Martin et al. (3;15) said that the complexity of possible combinations for administering rTMS makes comparison of like versus like difficult. A statistical test for heterogeneity (chi-square test) examines if the observed treatment effects are more different from each other than one would expect due to random error (or chance) alone. (16) However, this statistical test must be interpreted with caution because it has low power in the (common) situation of a meta-analysis when the trials have small sample sizes or are few. This means that while a statistically significant result may indicate a problem with heterogeneity, a nonsignificant result must not be taken as evidence of no heterogeneity.
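The low power of the chi-square (Cochran Q) heterogeneity test when trials are few and small can be demonstrated by simulation; the sketch below uses arbitrary effect and variance values and is not based on data from the review.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def q_test_pvalue(effects, variances):
    """Cochran's Q test for heterogeneity of trial-level effect estimates."""
    w = 1 / variances
    pooled = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - pooled) ** 2)
    return 1 - stats.chi2.cdf(Q, len(effects) - 1)

# Simulate meta-analyses of 5 small trials whose true effects genuinely differ
detected = 0
for _ in range(2000):
    true_effects = rng.normal(0.3, 0.3, 5)    # real between-trial heterogeneity
    variances = np.full(5, 0.15)              # large within-trial variance (small trials)
    estimates = rng.normal(true_effects, np.sqrt(variances))
    detected += q_test_pvalue(estimates, variances) < 0.05

print(f"Q test flags heterogeneity in only {100 * detected / 2000:.0f}% of simulations")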
Despite not finding statistically significant heterogeneity, Martin et al. reported that the overall mean baseline depression severity scores were higher in the treatment group than in the placebo group. (3;15) Although these differences were not significant at the level of each study, they may have introduced potential bias into the meta-analysis of pooled data by accentuating the tendency for regression to the mean of the more extreme values. Individual patient data from all the studies were not available; therefore, an appropriate adjustment according to baseline severity was not possible. Martin et al. concluded that the findings from the systematic review and meta-analysis provided insufficient evidence to suggest that rTMS is effective in the treatment of depression. Moreover, there were several confounding factors (e.g., definition of treatment resistance) in the studies, thus the authors concluded, “The rTMS technique needs more high quality trials to show its effectiveness for therapeutic use.”
Conclusion
Due to several serious methodological limitations in the studies that have examined the effectiveness of rTMS in patients with MDD, it is not possible to conclude that rTMS either is or is not effective as a treatment for MDD (in treatment-resistant depression or in nontreatment-resistant depression).
PMCID: PMC3387754  PMID: 23074457
15.  Adjustment of the GRACE score by growth differentiation factor 15 enables a more accurate appreciation of risk in non-ST-elevation acute coronary syndrome 
European Heart Journal  2011;33(9):1095-1104.
Aims
The aim of the study was to evaluate whether knowledge of the circulating concentration of growth differentiation factor 15 (GDF-15) adds predictive information to the Global Registry of Acute Coronary Events (GRACE) score, a validated scoring system for risk assessment in non-ST-elevation acute coronary syndrome (NSTE-ACS). We also evaluated whether GDF-15 adds predictive information to a model containing the GRACE score and N-terminal pro-B-type natriuretic peptide (NT-proBNP), a prognostic biomarker already in clinical use.
Methods and results
The GRACE score, GDF-15, and NT-proBNP levels were determined on admission in 1122 contemporary patients with NSTE-ACS. Six-month all-cause mortality or non-fatal myocardial infarction (MI) was the primary endpoint of the study. To obtain GDF-15- and NT-proBNP-adjusted 6-month estimated probabilities of death or non-fatal MI, statistical algorithms were developed in a derivation cohort (n = 754; n = 66 reached the primary endpoint) and applied to a validation cohort (n = 368; n = 33). Adjustment of the GRACE risk estimate by GDF-15 increased the area under the receiver-operating characteristic curve (AUC) from 0.79 to 0.85 (P < 0.001) in the validation cohort. Discrimination improvement was confirmed by an integrated discrimination improvement (IDI) of 0.055 (P = 0.005). A net 31% of the patients without events were reclassified into lower risk, and a net 27% of the patients with events were reclassified into higher risk, resulting in a total continuous net reclassification improvement [NRI(>0)] of 0.58 (P = 0.002). Addition of NT-proBNP to the GRACE score led to a similar improvement in discrimination and reclassification. Addition of GDF-15 to a model containing GRACE and NT-proBNP led to a further improvement in model performance [increase in AUC from 0.84 for GRACE plus NT-proBNP to 0.86 for GRACE plus NT-proBNP plus GDF-15, P = 0.010; IDI = 0.024, P = 0.063; NRI(>0) = 0.42, P = 0.022].
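The discrimination and reclassification measures reported above can be illustrated with a short sketch. The predicted risks below are simulated, not the study's data; the AUC comes from scikit-learn's roc_auc_score, and the IDI and category-free NRI are computed from their standard definitions.

import numpy as np
from sklearn.metrics import roc_auc_score

def idi(y, p_base, p_new):
    """Integrated discrimination improvement: change in the mean separation of
    predicted risk between events and non-events."""
    ev, ne = y == 1, y == 0
    return (p_new[ev].mean() - p_new[ne].mean()) - (p_base[ev].mean() - p_base[ne].mean())

def continuous_nri(y, p_base, p_new):
    """Category-free NRI(>0): net proportion of events reclassified upward plus
    net proportion of non-events reclassified downward."""
    up = p_new > p_base
    ev, ne = y == 1, y == 0
    return (up[ev].mean() - (~up)[ev].mean()) + ((~up)[ne].mean() - up[ne].mean())

# Hypothetical 6-month risks from a baseline model and a biomarker-extended model
rng = np.random.default_rng(3)
y = rng.binomial(1, 0.1, 400)
p_base = np.clip(0.1 + 0.1 * y + rng.normal(0, 0.08, 400), 0.01, 0.99)
p_new = np.clip(p_base + 0.05 * (2 * y - 1) + rng.normal(0, 0.03, 400), 0.01, 0.99)

print("AUC:", roc_auc_score(y, p_base), "->", roc_auc_score(y, p_new))
print("IDI:", idi(y, p_base, p_new), "NRI(>0):", continuous_nri(y, p_base, p_new))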
Conclusion
We show that a single measurement of GDF-15 on admission markedly enhances the predictive value of the GRACE score and provides moderate incremental information to a model including the GRACE score and NT-proBNP. Our study is the first to provide simple algorithms that can be used by the practicing clinician to more precisely estimate risk in individual patients based on the GRACE score and a single biomarker measurement on admission. The rigorous statistical approach taken in the present study may serve as a blueprint for future studies exploring the added value of biomarkers beyond clinical risk scores.
doi:10.1093/eurheartj/ehr444
PMCID: PMC3888120  PMID: 22199121
GDF-15; NT-proBNP; GRACE score; Acute coronary syndrome; Risk stratification
16.  Prophylactic Perioperative Sodium Bicarbonate to Prevent Acute Kidney Injury Following Open Heart Surgery: A Multicenter Double-Blinded Randomized Controlled Trial 
PLoS Medicine  2013;10(4):e1001426.
In a double-blinded randomized controlled trial, Anja Haase-Fielitz and colleagues find that an infusion of sodium bicarbonate during open heart surgery did not reduce the risk for acute kidney injury, compared with saline control.
Background
Preliminary evidence suggests a nephroprotective effect of urinary alkalinization in patients at risk of acute kidney injury. In this study, we tested whether prophylactic bicarbonate-based infusion reduces the incidence of acute kidney injury and tubular damage in patients undergoing open heart surgery.
Methods and Findings
In a multicenter, double-blinded (patients, clinical and research personnel), randomized controlled trial we enrolled 350 adult patients undergoing open heart surgery with the use of cardiopulmonary bypass. At induction of anesthesia, patients received either 24 hours of intravenous infusion of sodium bicarbonate (5.1 mmol/kg) or sodium chloride (5.1 mmol/kg). The primary endpoint was the proportion of patients developing acute kidney injury. Secondary endpoints included the magnitude of acute tubular damage as measured by urinary neutrophil gelatinase-associated lipocalin (NGAL), initiation of acute renal replacement therapy, and mortality. The study was stopped early on the recommendation of the Data Safety and Monitoring Committee because interim analysis suggested a likely lack of efficacy and possible harm. Baseline characteristics did not differ significantly between groups, except that a greater proportion of patients in the sodium bicarbonate group (66/174 [38%]) presented with preoperative chronic kidney disease compared to control (44/176 [25%]; p = 0.009). Sodium bicarbonate increased urinary pH (from 6.0 to 7.5, p<0.001). More patients receiving bicarbonate (83/174 [47.7%]) developed acute kidney injury compared with control patients (64/176 [36.4%], odds ratio [OR] 1.60 [95% CI 1.04–2.45]; unadjusted p = 0.032). After multivariable adjustment, a non-significant group difference disfavoring patients receiving sodium bicarbonate was found for the primary endpoint (OR 1.45 [0.90–2.33], p = 0.120). A greater postoperative increase in urinary NGAL was observed in patients receiving bicarbonate infusion compared to control patients (p = 0.011). The incidence of postoperative renal replacement therapy was similar, but hospital mortality was increased in patients receiving sodium bicarbonate compared with control (11/174 [6.3%] versus 3/176 [1.7%], OR 3.89 [1.07–14.2], p = 0.031).
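The unadjusted odds ratio for the primary endpoint can be reproduced directly from the counts given above; the sketch below assumes a Wald confidence interval on the log odds ratio scale.

import math

def odds_ratio_ci(a, n1, c, n2, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table
    (a events out of n1 in group 1, c events out of n2 in group 2)."""
    b, d = n1 - a, n2 - c
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# AKI counts reported in the abstract: 83/174 with bicarbonate vs 64/176 with saline
print(odds_ratio_ci(83, 174, 64, 176))   # ~(1.60, 1.04, 2.45), matching the reported OR and CI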
Conclusions
Urinary alkalinization using sodium bicarbonate infusion was not found to reduce the incidence of acute kidney injury or attenuate tubular damage following open heart surgery; however, it was associated with a possible increase in mortality. On the basis of these findings we do not recommend the prophylactic use of sodium bicarbonate infusion to reduce the risk of acute kidney injury. Discontinuation of growing implementation of this therapy in this setting seems to be justified.
Trial registration
ClinicalTrials.gov NCT00672334
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Open heart surgery is a type of cardiac surgery that is used to treat patients with severe heart disease, where the patient's chest is cut open and surgery is performed on the internal structures of the heart. During open heart surgery, surgeons may use a technique called cardiopulmonary bypass to temporarily take over the function of the heart and lungs. This type of surgery may be used to prevent heart attack or heart failure in patients with conditions such as angina, atherosclerosis, congenital heart disease, or valvular heart disease. There are a number of complications associated with open heart surgery and one of these is the rapid loss of kidney function, known as acute kidney injury (AKI), and formerly known as acute renal failure. Symptoms of AKI can be variable, with diagnosis of AKI based on laboratory findings (such as elevated blood urea nitrogen and creatinine), or clinical signs such as inability of the kidneys to produce sufficient amounts of urine. Globally, more than 10 million people are affected by AKI each year. AKI occurs in about one quarter of patients undergoing cardiac surgery and is associated with longer stays in the hospital and an increased risk of death. Treatment of AKI includes administration of intravenous fluids, diuretics, and, in severe cases, patients may require kidney dialysis.
Why Was This Study Done?
The mechanism for why AKI occurs during cardiac surgery is complex and thought to involve multiple factors relating to blood circulation, the immune system, and toxins released by the kidneys. In addition to treating AKI after it occurs, it is important to identify patients who are at risk for developing AKI prior to cardiac surgery and then apply techniques to prevent AKI during cardiac surgery. A number of interventions have been tested for preventing AKI during cardiac surgery, but there is currently no strong evidence for a standard way to prevent AKI. One intervention that has potential for preventing AKI is the administration of sodium bicarbonate during cardiac surgery. Sodium bicarbonate causes alkalinization of the urine, and it is thought that this could reduce the effect of toxins in the kidneys. A previous pilot study showed promising effects for sodium bicarbonate to reduce the likelihood of AKI. In a follow-up to this pilot study, here the researchers have performed an international randomized controlled trial to test whether administration of sodium bicarbonate compared to sodium chloride (saline) during cardiac surgery can prevent AKI.
What Did the Researchers Do and Find?
350 patients undergoing open heart surgery with at least one risk factor for developing AKI were recruited across four sites in different countries (Germany, Canada, Ireland, and Australia). These patients were randomly assigned to receive either sodium bicarbonate (treatment) or saline control solution, given as a continuous infusion into the blood stream for 24 hours during surgery. Neither the researchers nor the patients were aware of which patients were assigned to the treatment group. The researchers measured the occurrence of AKI within the first 5 days after surgery and they found that a greater proportion of those patients receiving sodium bicarbonate developed AKI, as compared to those patients receiving saline control. On the basis of these findings the study was terminated before planned recruitment was completed. A key issue with this study is that a greater proportion of the patients in the sodium bicarbonate group had chronic kidney disease prior to open heart surgery. After adjusting for this difference in the statistical analysis, the researchers observed that the difference between the groups was not significant—that is, it could have happened by chance. The authors also observed that a significantly greater proportion of patients receiving sodium bicarbonate died in the hospital after surgery compared to patients receiving saline control.
What Do These Findings Mean?
These findings suggest that giving an infusion of sodium bicarbonate to induce alkalinization of the urine during open heart surgery is not a useful treatment for preventing AKI. Furthermore, this treatment may even increase the likelihood of death. The researchers do not recommend the use of sodium bicarbonate infusion to reduce the risk of AKI after open heart surgery and stress the need for discontinuation of this therapy. Key limitations of this research study are the early termination of the study and the greater proportion of patients with chronic kidney disease prior to surgery.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001426.
The Renal Association, a professional association for kidney doctors and researchers, provides information about acute kidney injury
The International Society for Nephrology and the International Federation of Kidney Foundations provide information about preventing acute kidney injury around the world and jointly initiated World Kidney Day
MedlinePlus has information on open heart surgery
doi:10.1371/journal.pmed.1001426
PMCID: PMC3627643  PMID: 23610561
17.  Tenofovir Disoproxil Fumarate for Prevention of HIV Infection in Women: A Phase 2, Double-Blind, Randomized, Placebo-Controlled Trial 
PLoS Clinical Trials  2007;2(5):e27.
Objectives:
The objective of this trial was to investigate the safety and preliminary effectiveness of a daily dose of 300 mg of tenofovir disoproxil fumarate (TDF) versus placebo in preventing HIV infection in women.
Design:
This was a phase 2, randomized, double-blind, placebo-controlled trial.
Setting:
The study was conducted between June 2004 and March 2006 in Tema, Ghana; Douala, Cameroon; and Ibadan, Nigeria.
Participants:
We enrolled 936 HIV-negative women at high risk of HIV infection into this study.
Intervention:
Participants were randomized 1:1 to once daily use of 300 mg of TDF or placebo.
Outcome measures:
The primary safety endpoints were grade 2 or higher serum creatinine elevations (>2.0 mg/dl) for renal function, grade 3 or 4 aspartate aminotransferase or alanine aminotransferase elevations (>170 U/l) for hepatic function, and grade 3 or 4 phosphorus abnormalities (<1.5 mg/dl). The effectiveness endpoint was infection with HIV-1 or HIV-2.
Results:
Study participants contributed 428 person-years of laboratory testing to the primary safety analysis. No significant differences emerged between treatment groups in clinical or laboratory safety outcomes. Study participants contributed 476 person-years of HIV testing to the primary effectiveness analysis, during which time eight seroconversions occurred. Two were diagnosed in participants randomized to TDF (0.86 per 100 person-years) and six in participants receiving placebo (2.48 per 100 person-years), yielding a rate ratio of 0.35 (95% confidence interval = 0.03–1.93), which did not achieve statistical significance. Owing to premature closures of the Cameroon and Nigeria study sites, the planned person-years of follow-up and study power could not be achieved.
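The reported incidence rates and rate ratio follow directly from the seroconversion counts. In the sketch below the split of follow-up time between arms is back-calculated from the reported rates (roughly 232 and 242 person-years) rather than taken from the paper, and the exact confidence interval is not reproduced.

def rate_per_100py(events, person_years):
    """Incidence rate expressed per 100 person-years of follow-up."""
    return 100 * events / person_years

tdf_rate = rate_per_100py(2, 232)        # ~0.86 per 100 person-years
placebo_rate = rate_per_100py(6, 242)    # ~2.48 per 100 person-years
print(tdf_rate, placebo_rate, tdf_rate / placebo_rate)   # rate ratio ~0.35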
Conclusion:
Daily oral use of TDF in HIV-uninfected women was not associated with increased clinical or laboratory adverse events. Effectiveness could not be conclusively evaluated because of the small number of HIV infections observed during the study.
Editorial Commentary
Background: The World Health Organization has estimated that in 2006 around 4.3 million people were newly infected with HIV. Infection rates seem to be increasing in some countries, and there is an urgent need to find safe and effective ways of preventing HIV from being transmitted from one person to another. Many strategies for the prevention of HIV transmission between adults, such as use of condoms or changes to behavior, are not completely reliable, and women, in particular, may not always be able to negotiate condom use. Additional strategies for reducing the risk of HIV transmission are needed. One of these strategies is called “pre-exposure prophylaxis.” This strategy involves individuals who are at high risk of becoming infected with HIV taking antiviral drugs to prevent HIV infection. One particular drug, tenofovir disoproxil fumarate, is currently approved as a treatment for HIV infection, and is also being investigated as a strong candidate for pre-exposure prophylaxis. The research presented here reports on results of a trial carried out at three different sites in Ghana, Cameroon, and Nigeria. In the trial, 936 women who were not infected with HIV but who were at high risk of becoming infected, were randomized to take tenofovir tablets daily or, alternatively, placebo tablets. The researchers planned to follow up with the women for 12 months, and the primary analysis for efficacy would focus on a comparison of the rate of new HIV infections between the two arms of the trial. Primary safety analyses included specific laboratory tests carried out on blood samples that might point to abnormalities in liver or kidney function. Safety data were also collected throughout the trial, and health problems that arose were classified as adverse events or serious adverse events.
What this trial shows: Unfortunately, this trial was not completed as planned. Two sites (Nigeria and Cameroon) were closed either before the planned number of participants had been recruited or before all participants had completed full follow-up. Therefore, not enough data were available from this trial to determine whether tenofovir reduced the risk of HIV infection. Only two sites contributed data for the primary safety analyses, which looked at liver and kidney function. The researchers did not see any statistically significant differences in these safety endpoints between participants taking tenofovir and those taking placebo. There were also no statistically significant differences between the treatment groups in the number of adverse events. The main efficacy analysis found two new HIV infections in the tenofovir group and six in the placebo group. Because only eight effectiveness endpoints were observed during this study, the difference in HIV incidence between these groups was not statistically significant.
Strengths and limitations: A strength of this trial is that it was correctly designed to address the original objectives of the study, involving appropriate concealment of randomization and blinding of participants and study staff to treatment assignment. The main limitation of this study was the closure of two study sites, which meant that the study did not have sufficient power to assess differences between trial arms in the primary efficacy analysis.
Contribution to the evidence: At the time this trial was completed, there was no other evidence from randomized studies that evaluated antiretroviral drugs for prevention of HIV infection. This trial cannot, however, definitively address whether tenofovir reduces the risk of HIV infection among at-risk women or not. Ongoing and future trials are essential in order to answer this question. The trial reported here provides important data on the safety of daily tenofovir among high-risk HIV-uninfected women; the safety data are encouraging and suggest that tenofovir use is not associated with increased adverse events as compared to placebo.
doi:10.1371/journal.pctr.0020027
PMCID: PMC1876601  PMID: 17525796
18.  Opportunities and challenges of clinical trials in cardiology using composite primary endpoints 
In clinical trials, the primary efficacy endpoint often corresponds to a so-called “composite endpoint”. Composite endpoints combine several events of interest within a single outcome variable. The intention is to enlarge the expected effect size and thereby increase the power of the study. However, composite endpoints also come with serious challenges and problems. On the one hand, composite endpoints may cause difficulties during the planning phase of a trial with respect to the sample size calculation, as the expected clinical effect of an intervention on the composite endpoint depends on the effects on its single components and their correlations. This may lead to wrong assumptions about the sample size needed. Too optimistic assumptions about the expected effect may lead to an underpowered trial, whereas a too conservatively estimated effect results in an unnecessarily high sample size. On the other hand, the interpretation of composite endpoints may be difficult, as the observed effect of the composite does not necessarily reflect the effects of the single components. Therefore, demonstrating the clinical efficacy of a new intervention by exclusively evaluating the composite endpoint may be misleading. The present paper summarizes results and recommendations of the latest research addressing the above-mentioned problems in the planning, analysis and interpretation of clinical trials with composite endpoints, thereby providing practical guidance for users.
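The dependence of the composite event rate (and hence the trial's power) on the correlation between component endpoints can be shown with a small simulation. The sketch below uses a Gaussian copula and arbitrary component risks; this is only one of several ways such correlation can be modeled and is not the paper's method.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def composite_event_rate(p1, p2, rho, n=200_000):
    """Probability that at least one of two correlated binary component endpoints
    occurs, simulated with a Gaussian copula with correlation rho."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n)
    return np.mean((z[:, 0] < norm.ppf(p1)) | (z[:, 1] < norm.ppf(p2)))

# Two components with 10% risk each: the expected composite event rate
# (and with it the number of events driving the power) falls as correlation rises
for rho in (0.0, 0.4, 0.8):
    print(rho, round(composite_event_rate(0.10, 0.10, rho), 3))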
doi:10.4330/wjc.v7.i1.1
PMCID: PMC4306200  PMID: 25632312
Composite endpoint; Competing risks; Multiple testing; Time-to-event; Adaptive designs
19.  Individual patient data meta-analysis of acupuncture for chronic pain: protocol of the Acupuncture Trialists' Collaboration 
Trials  2010;11:90.
Background
The purpose of clinical trials of acupuncture is to help clinicians and patients make decisions about treatment. Yet this is not straightforward: some trials report acupuncture to be superior to sham (placebo) acupuncture while others show evidence that acupuncture is superior to usual care but not sham, and still others conclude that acupuncture is no better than usual care. Meta-analyses of these trials tend to come to somewhat indeterminate conclusions. This appears to be because, until recently, acupuncture research was dominated by small trials of questionable quality. The Acupuncture Trialists' Collaboration, a group of trialists, statisticians and other researchers, was established to synthesize patient-level data from several recently published large, high-quality trials.
Methods
There are three distinct phases to the Acupuncture Trialists' Collaboration: a systematic review to identify eligible studies; collation and harmonization of raw data; and statistical analysis. To be eligible, trials must have unambiguous allocation concealment. Eligible pain conditions are osteoarthritis; chronic headache (tension or migraine headache); shoulder pain; and non-specific back or neck pain. Once received, patient-level data will undergo quality checks and the results of prior publications will be replicated. The primary analysis will be to determine the effect size of acupuncture. Each trial will be evaluated by analysis of covariance, with the principal endpoint as the dependent variable and, as covariates, the baseline score for the principal endpoint and the variables used to stratify randomization. The effect size for acupuncture from each trial - that is, the coefficient and its standard error from the analysis of covariance - will then be entered into a meta-analysis. We will compute effect sizes separately for comparisons of acupuncture with sham acupuncture, and acupuncture with no-acupuncture control, for each pain condition. Other analyses will investigate the impact of different sham techniques, styles of acupuncture, and frequency and duration of treatment sessions.
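A minimal sketch of the two-stage approach described above, on simulated data: an analysis of covariance per trial (baseline-adjusted; the stratification covariates are omitted for brevity) followed by inverse-variance pooling of the treatment coefficients. Variable names and values are illustrative only, not the collaboration's data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)

def trial_effect(n, true_effect):
    """Per-trial ANCOVA: endpoint pain score regressed on treatment,
    adjusting for the baseline score."""
    baseline = rng.normal(60, 15, n)
    treat = rng.integers(0, 2, n)
    endpoint = 0.6 * baseline + true_effect * treat + rng.normal(0, 10, n)
    df = pd.DataFrame({"endpoint": endpoint, "baseline": baseline, "treat": treat})
    fit = smf.ols("endpoint ~ baseline + treat", data=df).fit()
    return fit.params["treat"], fit.bse["treat"]

# Pool the per-trial treatment coefficients with inverse-variance weights
effects, ses = zip(*[trial_effect(n, -5.0) for n in (120, 250, 400)])
w = 1 / np.array(ses) ** 2
pooled = np.sum(w * np.array(effects)) / np.sum(w)
print(pooled, np.sqrt(1 / np.sum(w)))   # pooled effect size and its standard error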
Discussion
Individual patient data meta-analysis of high-quality trials will provide the most reliable basis for treatment decisions about acupuncture. Above all, however, we hope that our approach can serve as a model for future studies in acupuncture and other complementary therapies.
doi:10.1186/1745-6215-11-90
PMCID: PMC2955653  PMID: 20920180
20.  A phase I/II dose-escalation trial of vitamin D3 and calcium in multiple sclerosis(e–Pub ahead of print)(LOE Classification) 
Neurology  2010;74(23):1852-1859.
Objective:
Low vitamin D status has been associated with multiple sclerosis (MS) prevalence and risk, but the therapeutic potential of vitamin D in established MS has not been explored. Our aim was to prospectively assess the tolerability of high-dose oral vitamin D and its impact on biochemical, immunologic, and clinical outcomes in patients with MS.
Methods:
An open-label randomized prospective controlled 52-week trial matched patients with MS for demographic and disease characteristics, with randomization to treatment or control groups. Treatment patients received escalating vitamin D doses up to 40,000 IU/day over 28 weeks to raise serum 25-hydroxyvitamin D [25(OH)D] rapidly and assess tolerability, followed by 10,000 IU/day (12 weeks), and further downtitrated to 0 IU/day. Calcium (1,200 mg/day) was given throughout the trial. Primary endpoints were mean change in serum calcium at each vitamin D dose and a comparison of serum calcium between groups. Secondary endpoints included 25(OH)D and other biochemical measures, immunologic biomarkers, relapse events, and Expanded Disability Status Scale (EDSS) score.
Results:
Forty-nine patients (25 treatment, 24 control) were enrolled [mean age 40.5 years, EDSS 1.34, and 25(OH)D 78 nmol/L]. All calcium-related measures within and between groups were normal. Despite a mean peak 25(OH)D of 413 nmol/L, no significant adverse events occurred. Although there may have been confounding variables in clinical outcomes, treatment group patients appeared to have fewer relapse events and a persistent reduction in T-cell proliferation compared to controls.
Conclusions:
High-dose vitamin D (∼10,000 IU/day) in multiple sclerosis is safe, with evidence of immunomodulatory effects.
Classification of evidence:
This trial provides Class II evidence that high-dose vitamin D use for 52 weeks in patients with multiple sclerosis does not significantly increase serum calcium levels when compared to patients not on high-dose supplementation. The trial, however, lacked the statistical precision and the design requirements to adequately assess changes in clinical disease measures (relapses and Expanded Disability Status Scale scores), providing only Class IV evidence for these outcomes.
GLOSSARY
ALP = alkaline phosphatase; ALT = alanine aminotransferase; AST = aspartate aminotransferase; EAE = experimental autoimmune encephalitis; EDSS = Expanded Disability Status Scale; IL = interleukin; LS = least squares; MMP-9 = matrix metalloproteinase-9; MS = multiple sclerosis; PTH = parathyroid hormone; TCS = T-cell score; TIMP-1 = tissue inhibitor of metalloproteinase-1; TNF-α = tumor necrosis factor-α.
doi:10.1212/WNL.0b013e3181e1cec2
PMCID: PMC2882221  PMID: 20427749
21.  Greater Response to Placebo in Children Than in Adults: A Systematic Review and Meta-Analysis in Drug-Resistant Partial Epilepsy 
PLoS Medicine  2008;5(8):e166.
Background
Despite guidelines establishing the need to perform comprehensive paediatric drug development programs, pivotal trials in children with epilepsy have been completed mostly in Phase IV as a postapproval replication of adult data. However, it has been shown that the treatment response in children can differ from that in adults. It has not been investigated whether differences in drug effect between adults and children might occur in the treatment of drug-resistant partial epilepsy, although such differences may have a substantial impact on the design and results of paediatric randomised controlled trials (RCTs).
Methods and Findings
Three electronic databases were searched for RCTs investigating any antiepileptic drug (AED) in the add-on treatment of drug-resistant partial epilepsy in both children and adults. The treatment effect was compared between the two age groups using the ratio of the relative risk (RR) of the 50% responder rate between active AED treatment and placebo groups, as well as meta-regression. Differences in the response to placebo and to active treatment were examined using logistic regression. A comparable approach was used for analysing secondary endpoints, including seizure-free rate, total and adverse event-related withdrawal rates, and withdrawal rate for seizure aggravation. Five AEDs were evaluated in both adults and children with drug-resistant partial epilepsy in 32 RCTs. The treatment effect was significantly lower in children than in adults (RR ratio: 0.67 [95% confidence interval (CI) 0.51–0.89]; p = 0.02 by meta-regression). This difference was related to an age-dependent variation in the response to placebo, with a higher rate in children than in adults (19% versus 9.9%, p < 0.001), whereas no significant difference was observed in the response to active treatment (37.2% versus 30.4%, p = 0.364). The relative risk of the total withdrawal rate was also significantly lower in children than in adults (RR ratio: 0.65 [95% CI 0.43–0.98], p = 0.004 by meta-regression), due to a higher withdrawal rate for seizure aggravation in children (5.6%) than in adults (0.7%) receiving placebo (p < 0.001). Finally, there was no significant difference in the seizure-free rate between adult and paediatric studies.
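As a rough illustration of the headline contrast, the sketch below computes the relative risk of the 50% responder rate (active versus placebo) in each age group and the ratio of the two RRs on the log scale with a normal-approximation confidence interval. The counts are placeholders chosen only to resemble the reported rates; the published analysis used meta-regression across all 32 RCTs rather than this pooled two-by-two shortcut.

import numpy as np
from scipy import stats

def rr_and_log_se(resp_tx, n_tx, resp_pl, n_pl):
    # Relative risk of the responder rate (active vs placebo) and the
    # standard error of its logarithm.
    rr = (resp_tx / n_tx) / (resp_pl / n_pl)
    log_se = np.sqrt(1/resp_tx - 1/n_tx + 1/resp_pl - 1/n_pl)
    return rr, log_se

# Placeholder counts, not data from the included trials.
rr_child, se_child = rr_and_log_se(74, 200, 38, 200)      # ~37% vs ~19%
rr_adult, se_adult = rr_and_log_se(304, 1000, 99, 1000)   # ~30.4% vs ~9.9%

log_ratio = np.log(rr_child / rr_adult)
se_ratio = np.sqrt(se_child**2 + se_adult**2)
ci = np.exp(log_ratio + np.array([-1.96, 1.96]) * se_ratio)
p = 2 * stats.norm.sf(abs(log_ratio) / se_ratio)
print(f"RR ratio (children/adults) = {np.exp(log_ratio):.2f}, 95% CI {ci.round(2)}, p = {p:.3f}")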
Conclusions
Children with drug-resistant partial epilepsy receiving placebo in double-blind RCTs demonstrated a significantly greater 50% responder rate than adults, probably reflecting increased placebo and regression-to-the-mean effects. Paediatric clinical trial designs should account for these age-dependent variations in the response to placebo to reduce the risk of an underestimated sample size that could result in falsely negative trials.
In a systematic review of antiepileptic drugs, Philippe Ryvlin and colleagues find that children with drug-resistant partial epilepsy enrolled in trials seem to have a greater response to placebo than adults enrolled in such trials.
Editors' Summary
Background.
Whenever an adult is given a drug to treat a specific condition, that drug will have been tested in “randomized controlled trials” (RCTs). In RCTs, a drug's effects are compared to those of another drug for the same condition (or to a placebo, dummy drug) by giving groups of adult patients the different treatments and measuring how well each drug deals with the condition and whether it has any other effects on the patients' health. However, many drugs given to children have only been tested in adults, the assumption being that children can safely take the same drugs as adults provided the dose is scaled down. This approach to treatment is generally taken in epilepsy, a common brain disorder in children in which disruptions in the electrical activity of part (partial epilepsy) or all (generalized epilepsy) of the brain cause seizures. The symptoms of epilepsy depend on which part of the brain is disrupted and can include abnormal sensations, loss of consciousness, or convulsions. Most but not all patients can be successfully treated with antiepileptic drugs, which reduce or stop the occurrence of seizures.
Why Was This Study Done?
It is increasingly clear that children and adults respond differently to many drugs, including antiepileptic drugs. For example, children often break down drugs differently from adults, so a safe dose for an adult may be fatal to a child even after scaling down for body size, or it may be ineffective because of quicker clearance from the child's body. Consequently, regulatory bodies around the world now require comprehensive drug development programs in children as well as in adults. However, for pediatric trials to yield useful results, the general differences in the treatment response between children and adults must first be determined and then allowed for in the design of pediatric RCTs. In this study, the researchers investigate whether there is any evidence in published RCTs for age-dependent differences in the response to antiepileptic drugs in drug-resistant partial epilepsy.
What Did the Researchers Do and Find?
The researchers searched the literature for reports of RCTs on the effects of antiepileptic drugs in the add-on treatment of drug-resistant partial epilepsy in children and in adults—that is, trials that compared the effects of giving an additional antiepileptic drug with those of giving a placebo by asking what fraction of patients given each treatment had a 50% reduction in seizure frequency during the treatment period compared to a baseline period (the “50% responder rate”). This “systematic review” yielded 32 RCTs, including five pediatric RCTs. The researchers then compared the treatment effect (the ratio of the 50% responder rate in the treatment arm to the placebo arm) in the two age groups using a statistical approach called “meta-analysis” to pool the results of these studies. The treatment effect, they report, was significantly lower in children than in adults. Further analysis indicated that this difference was because more children than adults responded to the placebo. Nearly 1 in 5 children had a 50% reduction in seizure rate when given a placebo compared to only 1 in 10 adults. About a third of both children and adults had a 50% reduction in seizure rate when given antiepileptic drugs.
What Do These Findings Mean?
These findings, although limited by the small number of pediatric trials done so far, suggest that children with drug-resistant partial epilepsy respond more strongly in RCTs to placebo than adults. Although additional studies need to be done to find an explanation for this observation and to discover whether anything similar occurs in other conditions, this difference between children and adults should be taken into account in the design of future pediatric trials on the effects of antiepileptic drugs, and possibly drugs for other conditions. Specifically, to reduce the risk of false-negative results, this finding suggests that it might be necessary to increase the size of future pediatric trials to ensure that the trials have enough power to discover effects of the drugs tested, if they exist.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050166.
This study is further discussed in a PLoS Medicine Perspective by Terry Klassen and colleagues
The European Medicines Agency provides information about the regulation of medicines for children in Europe
The US Food and Drug Administration Office of Pediatric Therapeutics provides similar information for the US
The UK Medicines and Healthcare products Regulatory Agency also provides information on why medicines need to be tested in children
The MedlinePlus encyclopedia has a page on epilepsy (in English and Spanish)
The US National Institute for Neurological Disorders and Stroke and the UK National Health Service Direct health encyclopedia both provide information on epilepsy for patients (in several languages)
Neuroscience for Kids is an educational Web site prepared by Eric Chudler (University of Washington, Seattle, US) that includes information on epilepsy and a list of links to epilepsy organizations (mainly in English but some sections in other languages as well)
doi:10.1371/journal.pmed.0050166
PMCID: PMC2504483  PMID: 18700812
22.  BLEED-Myocardial Infarction Score: Predicting mid-term post-discharge bleeding events 
World Journal of Cardiology  2013;5(6):196-206.
AIM: To derive and validate a score for the prediction of mid-term bleeding events following discharge for myocardial infarction (MI).
METHODS: One thousand and fifty patients admitted for MI and followed for 19.9 ± 6.7 mo were assigned to a derivation cohort. A new risk model, called BLEED-MI, was developed for predicting clinically significant bleeding events during follow-up (primary endpoint) and a composite endpoint of significant hemorrhage plus all-cause mortality (secondary endpoint), incorporating the following variables: age, diabetes mellitus, arterial hypertension, smoking habits, blood urea nitrogen, glomerular filtration rate and hemoglobin at admission, history of stroke, bleeding during hospitalization or previous major bleeding, heart failure during hospitalization and anti-thrombotic therapies prescribed at discharge. The BLEED-MI model was tested for calibration, accuracy and discrimination in the derivation sample and in a new, independent, validation cohort comprising 852 patients admitted at a later date.
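The validation metrics named above (discrimination by the c-statistic, accuracy by the Brier score, and calibration by the Hosmer-Lemeshow test) are all computed from observed outcomes and predicted probabilities. The sketch below is a generic outline under assumed variable names (y_val, p_val), not the authors' code.

import pandas as pd
from scipy import stats
from sklearn.metrics import roc_auc_score, brier_score_loss

def hosmer_lemeshow_p(y, p, groups=10):
    # Goodness-of-fit over risk deciles: observed vs expected event counts.
    d = pd.DataFrame({"y": y, "p": p})
    d["g"] = pd.qcut(d["p"], groups, duplicates="drop")
    grp = d.groupby("g", observed=True)
    obs, exp, n = grp["y"].sum(), grp["p"].sum(), grp["y"].count()
    chi2 = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    return stats.chi2.sf(chi2, df=len(obs) - 2)

# y_val: observed bleeding events (0/1); p_val: predicted probabilities from
# the score in the validation cohort (names assumed).
# print("c-statistic:", roc_auc_score(y_val, p_val))
# print("Brier score:", brier_score_loss(y_val, p_val))
# print("Hosmer-Lemeshow p:", hosmer_lemeshow_p(y_val, p_val))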
RESULTS: The BLEED-MI score showed good calibration in both derivation and validation samples (Hosmer-Lemeshow test P value 0.371 and 0.444, respectively) and high accuracy within each individual patient (Brier score 0.061 and 0.067, respectively). Its discriminative performance in predicting the primary outcome was relatively high (c-statistic of 0.753 ± 0.032 in the derivation cohort and 0.718 ± 0.033 in the validation sample). Incidence of primary/secondary endpoints increased progressively with increasing BLEED-MI scores. In the validation sample, a BLEED-MI score below 2 had a negative predictive value of 98.7% (152/154) for the occurrence of a clinically significant hemorrhagic episode during follow-up and for the composite endpoint of post-discharge hemorrhage plus all-cause mortality. An accurate prediction of bleeding events was shown independently of mortality, as BLEED-MI predicted bleeding with similar efficacy in patients who did not die during follow-up: Area Under the Curve 0.703, Hosmer-Lemeshow test P value 0.547, Brier score 0.060; low-risk (BLEED-MI score 0-3) event rate: 1.2%; intermediate risk (score 4-6) event rate: 5.6%; high risk (score ≥ 7) event rate: 12.5%.
CONCLUSION: A new bedside prediction-scoring model for post-discharge mid-term bleeding has been derived and preliminarily validated. This is the first score designed to predict mid-term hemorrhagic risk in patients discharged following admission for acute MI. This model should be externally validated in larger cohorts of patients before its potential implementation.
doi:10.4330/wjc.v5.i6.196
PMCID: PMC3691499  PMID: 23802048
Myocardial infarction; Bleeding; Prediction model; Risk stratification
23.  Volume Expansion with Albumin Compared to Gelofusine in Children with Severe Malaria: Results of a Controlled Trial  
PLoS Clinical Trials  2006;1(5):e21.
Objectives:
Previous studies have shown that in children with severe malaria, resuscitation with albumin infusion results in a lower mortality than resuscitation with saline infusion. Whether the apparent benefit of albumin is due solely to its colloidal properties, and thus might also be achieved with other synthetic colloids, or due to the many other unique physiological properties of albumin is unknown. As albumin is costly and not readily available in Africa, examination of more affordable colloids is warranted. In order to inform the design of definitive phase III trials we compared volume expansion with Gelofusine (succinylated modified fluid gelatin 4% intravenous infusion) with albumin.
Design:
This study was a phase II safety and efficacy study.
Setting:
The study was conducted at Kilifi District Hospital, Kenya.
Participants:
The participants were children admitted with severe falciparum malaria (impaired consciousness or deep breathing), metabolic acidosis (base deficit > 8 mmol/l), and clinical features of shock.
Interventions:
The interventions were volume resuscitation with either 4.5% human albumin solution or Gelofusine.
Outcome Measures:
Primary endpoints were the resolution of shock and acidosis; secondary endpoints were in-hospital mortality and adverse events including neurological sequelae.
Results:
A total of 88 children were enrolled: 44 received Gelofusine and 44 received albumin. There was no significant difference in the resolution of shock or acidosis between the groups. Whilst no participant developed pulmonary oedema or fluid overload, fatal neurological events were more common in the group receiving gelatin-based intervention fluids. Mortality was lower in patients receiving albumin (1/44; 2.3%) than in those treated with Gelofusine (7/44; 16%) by intention to treat (Fisher's exact test, p = 0.06), or 1/40 (2.5%) and 4/40 (10%), respectively, for those treated per protocol (p = 0.36). Meta-analysis of published trials to provide a summary estimate of the effect of albumin on mortality showed a pooled relative risk of death with albumin administration of 0.19 (95% confidence interval 0.06–0.59; p = 0.004 compared to other fluid boluses).
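The intention-to-treat mortality comparison quoted above (1/44 deaths with albumin versus 7/44 with Gelofusine) can be reproduced directly from the reported counts with Fisher's exact test; the short sketch below does only that and uses no patient-level data.

from scipy.stats import fisher_exact

#              died  survived
table = [[1, 43],    # albumin
         [7, 37]]    # Gelofusine
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test (two-sided): p = {p_value:.2f}")  # ~0.06, as reported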
Conclusions:
In children with severe malaria, we have shown a consistent survival benefit of receiving albumin infusion compared to other resuscitation fluids, despite comparable effects on the resolution of acidosis and shock. The lack of similar mortality benefit from Gelofusine suggests that the mechanism may involve a specific neuroprotective effect of albumin, rather than solely the effect of the administered colloid. Further exploration of the benefits of albumin is warranted in larger clinical trials.
Editorial Commentary
Background: In Africa, children admitted to hospital with severe malaria are at high risk of death even though effective malaria treatment is available. Death typically occurs during a narrow time window after admission and before antimalarial treatments can start working. Acidosis (excessive acidity of the blood) is thought to predict death, but it is not clear how acidosis arises. One possibility is that hypovolemia (lowered blood fluid volume) is important, which would normally require urgent resuscitation with fluids. However, there is little evidence on what type of fluid should be given. In the trial reported here, carried out in Kenya's Kilifi District Hospital between 2004 and 2006, 88 children admitted with severe malaria were assigned to receive either albumin solution (a colloid solution made from blood protein) or Gelofusine (a synthetic colloid). The primary outcomes that the researchers were interested in were correction of shock and acidosis in the blood after 8 h. However, the researchers also looked at death rate in hospital and adverse events after treatment.
What this trial shows: The investigators found no significant differences in the primary outcomes (correction of shock and acidosis in the blood 8 h after fluids were started) between children given Gelofusine and those given albumin. However, they did see a difference in death rates between children given Gelofusine and those given albumin. Death rates in hospital were lower in the group given albumin, and this was statistically significant. The researchers then combined the data on death rates from this trial with data from two other trials with an albumin arm. This combined analysis also supported the suggestion that death rates with albumin were lower than with other fluids, either Gelofusine or salt solution.
Strengths and limitations: There is currently very little evidence from trials to guide the initial management of fluids in children with severe malaria. The results from this trial indicate that further research is a priority. However, the actual findings from this trial must be tested in larger trials that recruit enough children to establish reliably whether there is a difference in death rate between albumin treatment and treatment with other fluids. This trial was not originally planned to find a clinically relevant difference in death rate, and therefore does not definitively answer that question. Further trials would also need to use a random method to assign participants to the different treatments, rather than alternate blocks (as in this trial). A random method ensures greater comparability of the two groups in the trial, and reduces the chance of selection bias (where assignment of patients to different treatments can be distorted during the enrollment process).
Contribution to the evidence: This study adds data suggesting that fluid resuscitation with albumin solution, as compared to Gelofusine, may reduce the chance of death in children with severe malaria. However, this finding is not definitive and would need to be examined in further carefully controlled trials. If the finding is supported by further research, then a solution to the problems of high cost and limited availability of albumin will need to be found.
doi:10.1371/journal.pctr.0010021
PMCID: PMC1569382  PMID: 16998584
24.  Depressive mood is independently related to stroke and cardiovascular events in a community 
By means of a multivariate Cox model, we investigated the predictive value of a depressive mood on vascular disease risk in middle-aged community-dwelling people. In 224 people (88 men and 136 women; mean age: 56.8 ± 11.2 years) of U town, Hokkaido (latitude: 43.45 degrees N, longitude: 141.85 degrees E), a chronoecological health watch was started in April 2001. Consultations were repeated every 3 months. Results at the November 30, 2004 follow-up are presented herein. 7-day/24-h blood pressure (BP) and heart rate (HR) monitoring started on a Thursday, with readings taken at 30-min intervals between 07:00 h and 22:00 h and at 60-min intervals between 22:00 h and 07:00 h. Data stored in the memory of the monitor (TM-2430-15, A&D company, Japan) were retrieved and analyzed on a personal computer with commercial software for this device. Subjects were asked to answer a self-administered questionnaire inquiring about 15 items of a depression scale, at the start of the study and again after 1-2 years. Subjects with a score higher by at least two points at the second versus first screening were classified as having a depressive mood. The other subjects served as the control group.
The mean follow-up time was 1064 days, during which four subjects suffered an adverse vascular outcome (myocardial infarction: one man and one woman; stroke: two men). Among the variables used in the Cox proportional hazard models, a depressive mood, assessed by the Geriatric Depression Scale (GDS), as well as the MESOR of diastolic (D) BP (DBP-MESOR) and the circadian amplitude of systolic (S) BP (SBP-Amplitude) showed a statistically significant association with the occurrence of adverse vascular outcomes. The GDS score during the second but not during the first session was statistically significantly associated with the adverse vascular outcome. In univariate analyses, the relative risk (RR) of developing outcomes was predicted by a three-point increase in the GDS scale (RR = 3.088, 95% CI: 1.375-6.935, P = 0.0063). Increases of 5 mmHg in DBP-MESOR and of 3 mmHg in SBP-Amplitude were associated with RRs of 2.143 (95% CI: 1.232-3.727, P = 0.0070) and 0.700 (95% CI: 0.495-0.989, P = 0.0430), respectively. In multivariate analyses, when both the second GDS score and the DBP-MESOR were used as continuous variables in the same model, GDS remained statistically significantly associated with the occurrence of cardiovascular death. After adjustment for DBP-MESOR, a three-point increase in GDS score was associated with a RR of 2.172 (95% CI: 1.123-4.200). Monday endpoints of the 7-day profile showed a statistically significant association with adverse vascular outcomes. A 5 mmHg increase in DBP on Monday was associated with a RR of 1.576 (95% CI: 1.011-2.457, P = 0.0446).
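A minimal outline of the multivariate Cox model described above is given below, using the lifelines package and assumed column names; the relative risk for a given increment (for example a 3-point GDS increase) follows from exponentiating the corresponding coefficient multiplied by that increment. This is a sketch of the general approach, not the authors' analysis code.

import numpy as np
from lifelines import CoxPHFitter

# df columns (assumed): time_days, event, gds_second, dbp_mesor, sbp_amplitude
cph = CoxPHFitter()
# cph.fit(df, duration_col="time_days", event_col="event",
#         formula="gds_second + dbp_mesor + sbp_amplitude")
# beta = cph.params_
# rr_gds_3pt = np.exp(3 * beta["gds_second"])     # RR per 3-point GDS increase
# rr_dbp_5mm = np.exp(5 * beta["dbp_mesor"])      # RR per 5 mmHg DBP-MESOR increase
# rr_sbp_3mm = np.exp(3 * beta["sbp_amplitude"])  # RR per 3 mmHg SBP-amplitude increase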
The main result of the present study is that in middle-aged community-dwelling people, a depressive mood predicted the occurrence of vascular diseases beyond the prediction provided by age, gender, ABP, lifestyle and environmental conditions, as assessed by means of a multivariate Cox model. A depressive mood, especially enhanced for 1-2 years, was associated with adverse vascular outcomes. Results herein suggest the clinical importance of repetitive assessments of a depressive mood and the need to take sufficient care of depressed subjects.
Another result herein is that circadian and circaseptan characteristics of BP variability measured 7-day/24-h predicted the occurrence of vascular disease beyond the prediction provided by age, gender, depressive mood and lifestyle, as assessed by means of a multivariate Cox model. Earlier, we showed that the morning surge in BP on Mondays was statistically significantly higher compared with other weekdays. Although a direct association between the Monday surge in BP and cardiovascular events could not be demonstrated herein, it is possible that the BP surge on Monday mornings may also trigger cardiovascular events. We have shown that depressive people exhibit a more prominent circaseptan variation in SBP, DBP and the double product (DP) compared to non-depressed subjects.
In view of the strong relation between depression and adverse cardiac events, studies should be done to ascertain that depression is properly diagnosed and treated. Chronodiagnosis and chronotherapy can reduce an elevated blood pressure and improve the altered variability in BP and HR, thus reducing the incidence of adverse cardiac events. This recommendation underlies chronomics, which focuses on pre-habilitation in preference to rehabilitation and is offered as a public service in several Japanese towns.
PMCID: PMC2821202  PMID: 16275504
Depressive mood; Seven-day ambulatory blood pressure; Cardiovascular diseases; Stroke
25.  Ultrasound of metacarpophalangeal joints is a sensitive and reliable endpoint for drug therapies in rheumatoid arthritis: results of a randomized, two-center placebo-controlled study 
Arthritis Research & Therapy  2012;14(5):R198.
Introduction
We aimed to investigate the sensitivity and reliability of two-dimensional ultrasonographic endpoints at the metacarpophalangeal joints (MCPJs) and their potential to provide an early and objective indication of a therapeutic response to treatment intervention in rheumatoid arthritis (RA).
Methods
A randomized, double-blind, parallel-group, two-center, placebo-controlled trial investigated the effect on ultrasonographic measures of synovitis of repeat-dose oral prednisone, 15 mg or 7.5 mg, each compared to placebo, in consecutive two-week studies; there were 18 subjects in a 1:1 ratio and 27 subjects in a 2:1 ratio, respectively. All subjects met the 1987 American College of Rheumatology criteria for the diagnosis of RA, were ≥18 years old with RA disease duration ≥6 months, and had a Disease Activity Score 28 based on C-reactive protein (DAS28(CRP)) ≥3.2. Subjects underwent high-frequency (gray-scale) and power Doppler ultrasonography at Days 1 (baseline), 2, 8 and 15 in the dorsal transverse and longitudinal planes of all 10 MCPJs to obtain summated scores of quantitative and semi-quantitative measures of synovial thickness as well as vascularity. The primary endpoint was the summated score of power Doppler area measured quantitatively in all 10 MCPJs in the transverse plane at Day 15. Clinical efficacy was assessed at the same time points by DAS28(CRP).
Results
All randomized subjects completed the trial. The comparison between daily 15 mg prednisone and placebo at Day 15 yielded a statistically significant treatment effect (effect size = 1.17, P = 0.013) in change from baseline in the primary endpoint, but the effect was borderline for prednisone 7.5 mg daily versus placebo (effect size = 0.61, P = 0.071). A significant treatment effect for DAS28(CRP) was only observed at Day 15 in the prednisone 15 mg group (effect size = 0.95, P = 0.032). However, significant treatment effects at all time points for a variety of ultrasound (US) endpoints were detected with both prednisone doses; the largest observed effect size was 2.33. Combining US endpoints with DAS28(CRP) improved the detection of significant treatment effects. The parallel-scan inter-reader reliability of the summated 10-MCPJ scores was good to excellent (ICC values >0.61) for the majority of US measures.
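The inter-reader reliability reported above is an intraclass correlation coefficient (ICC) on the summated 10-MCPJ scores obtained by different readers. A generic sketch of such a check, with assumed long-format column names and the pingouin package, is shown below; it is an illustration rather than the study's own computation.

import pandas as pd
import pingouin as pg

# Long-format table (names assumed): one row per (subject, reader) pair with
# that reader's summated 10-MCPJ score.
# df = pd.DataFrame({"subject": ..., "reader": ..., "score": ...})
# icc = pg.intraclass_corr(data=df, targets="subject", raters="reader",
#                          ratings="score")
# print(icc[["Type", "ICC", "CI95%"]])  # values > 0.61 read as good to excellent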
Conclusions
Ultrasonography of MCPJs is an early, reliable indicator of therapeutic response in RA with potential to reduce patient numbers and length of trials designed to give preliminary indications of efficacy.
Trial Registration
Clinicaltrials.gov identifier: NCT00746512
doi:10.1186/ar4034
PMCID: PMC3580508  PMID: 22972032
