1.  Multivariable risk prediction can greatly enhance the statistical power of clinical trial subgroup analysis 
Background
When subgroup analyses of a positive clinical trial are unrevealing, such findings are commonly used to argue that the treatment's benefits apply to the entire study population; however, such analyses are often limited by poor statistical power. Multivariable risk-stratified analysis has been proposed as an important advance in investigating heterogeneity in treatment benefits, yet no one has conducted a systematic statistical examination of circumstances influencing the relative merits of this approach vs. conventional subgroup analysis.
Methods
Using simulated clinical trials in which the probability of outcomes in individual patients was stochastically determined by the presence of risk factors and the effects of treatment, we examined the relative merits of a conventional vs. a "risk-stratified" subgroup analysis under a variety of circumstances in which there is a small amount of uniformly distributed treatment-related harm. The statistical power to detect treatment-effect heterogeneity was calculated for risk-stratified and conventional subgroup analysis while varying: 1) the number, prevalence and odds ratios of individual risk factors determining risk in the absence of treatment, 2) the predictiveness of the multivariable risk model (including the accuracy of its weights), 3) the degree of treatment-related harm, and 4) the average untreated risk of the study population.
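To make the simulation design described above concrete, the sketch below (an illustration written for this listing, not the authors' code) simulates a two-arm trial in which binary risk factors drive baseline risk, treatment confers a relative benefit plus a small uniform absolute harm, and power is estimated for a heterogeneity test across a single-risk-factor split versus a split on multivariable predicted risk. All parameter values, function names, and the use of numpy/scipy are assumptions.

```python
# Illustrative sketch: power to detect treatment-effect heterogeneity with a
# conventional (one-risk-factor) subgroup split vs. a risk-stratified split.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_trial(n=3000, n_factors=5, prevalence=0.2, log_or=np.log(2),
                   base_risk=0.10, rel_risk_reduction=0.30, abs_harm=0.02):
    """One simulated two-arm trial with stochastically determined binary outcomes."""
    x = rng.binomial(1, prevalence, size=(n, n_factors))   # binary risk factors
    treat = rng.binomial(1, 0.5, size=n)                   # 1:1 allocation
    intercept = np.log(base_risk / (1 - base_risk))
    lin_pred = intercept + x.sum(axis=1) * log_or          # untreated log-odds
    p_untreated = 1.0 / (1.0 + np.exp(-lin_pred))
    # treatment: relative risk reduction plus a small uniform absolute harm
    p = np.where(treat == 1,
                 np.clip(p_untreated * (1 - rel_risk_reduction) + abs_harm, 0, 1),
                 p_untreated)
    y = rng.binomial(1, p)
    return x, treat, y, p_untreated

def log_or_and_var(y, treat):
    """Treatment log odds ratio and its variance from a 2x2 table (0.5 correction)."""
    a = ((y == 1) & (treat == 1)).sum() + 0.5
    b = ((y == 0) & (treat == 1)).sum() + 0.5
    c = ((y == 1) & (treat == 0)).sum() + 0.5
    d = ((y == 0) & (treat == 0)).sum() + 0.5
    return np.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def heterogeneity_p(y, treat, subgroup):
    """Two-sided z-test comparing the treatment log-OR between two subgroups."""
    est1, v1 = log_or_and_var(y[subgroup], treat[subgroup])
    est0, v0 = log_or_and_var(y[~subgroup], treat[~subgroup])
    z = (est1 - est0) / np.sqrt(v1 + v0)
    return 2 * norm.sf(abs(z))

def estimate_power(n_sims=500, alpha=0.05):
    hits_conventional = hits_risk_stratified = 0
    for _ in range(n_sims):
        x, treat, y, p_untreated = simulate_trial()
        conventional = x[:, 0] == 1              # one risk factor at a time
        # the true untreated risk stands in for an external multivariable risk model
        risk_stratified = p_untreated > np.median(p_untreated)
        hits_conventional += heterogeneity_p(y, treat, conventional) < alpha
        hits_risk_stratified += heterogeneity_p(y, treat, risk_stratified) < alpha
    return hits_conventional / n_sims, hits_risk_stratified / n_sims

print(estimate_power())  # (power of conventional split, power of risk-stratified split)
```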
Results
Conventional subgroup analysis (in which single patient attributes are evaluated "one-at-a-time") had at best moderate statistical power (30% to 45%) to detect variation in a treatment's net relative risk reduction resulting from treatment-related harm, even under optimal circumstances (overall statistical power of the study was good and treatment-effect heterogeneity was evaluated across a major risk factor [OR = 3]). In some instances a multivariable risk-stratified approach also had low to moderate statistical power (especially when the multivariable risk prediction tool had low discrimination). However, a multivariable risk-stratified approach can have excellent statistical power to detect heterogeneity in net treatment benefit under a wide variety of circumstances, including instances in which conventional subgroup analysis has poor statistical power.
Conclusion
These results suggest that under many likely scenarios, a multivariable risk-stratified approach will have substantially greater statistical power than conventional subgroup analysis for detecting heterogeneity in treatment benefits and safety related to previously unidentified treatment-related harm. Subgroup analyses must always be well-justified and interpreted with care, and conventional subgroup analyses can be useful under some circumstances; however, clinical trial reporting should include a multivariable risk-stratified analysis when an adequate externally-developed risk prediction tool is available.
doi:10.1186/1471-2288-6-18
PMCID: PMC1523355  PMID: 16613605
2.  Effect of an Educational Toolkit on Quality of Care: A Pragmatic Cluster Randomized Trial 
PLoS Medicine  2014;11(2):e1001588.
In a pragmatic cluster-randomized trial, Baiju Shah and colleagues evaluated the effectiveness of printed educational materials for clinician education focusing on cardiovascular disease screening and risk reduction in people with diabetes.
Please see later in the article for the Editors' Summary
Background
Printed educational materials for clinician education are one of the most commonly used approaches for quality improvement. The objective of this pragmatic cluster randomized trial was to evaluate the effectiveness of an educational toolkit focusing on cardiovascular disease screening and risk reduction in people with diabetes.
Methods and Findings
All 933,789 people aged ≥40 years with diagnosed diabetes in Ontario, Canada were studied using population-level administrative databases, with additional clinical outcome data collected from a random sample of 1,592 high risk patients. Family practices were randomly assigned to receive the educational toolkit in June 2009 (intervention group) or May 2010 (control group). The primary outcome in the administrative data study, death or non-fatal myocardial infarction, occurred in 11,736 (2.5%) patients in the intervention group and 11,536 (2.5%) in the control group (p = 0.77). The primary outcome in the clinical data study, use of a statin, occurred in 700 (88.1%) patients in the intervention group and 725 (90.1%) in the control group (p = 0.26). Pre-specified secondary outcomes, including other clinical events, processes of care, and measures of risk factor control, were also not improved by the intervention. A limitation is the high baseline rate of statin prescribing in this population.
Conclusions
The educational toolkit did not improve quality of care or cardiovascular outcomes in a population with diabetes. Despite being relatively easy and inexpensive to implement, printed educational materials were not effective. The study highlights the need for a rigorous and scientifically based approach to the development, dissemination, and evaluation of quality improvement interventions.
Trial Registration
http://www.ClinicalTrials.gov NCT01411865 and NCT01026688
Editors' Summary
Background
Clinical practice guidelines help health care providers deliver the best care to patients by combining all the evidence on disease management into specific recommendations for care. However, the implementation of evidence-based guidelines is often far from perfect. Take the example of diabetes. This common chronic disease, which is characterized by high levels of sugar (glucose) in the blood, impairs the quality of life of patients and shortens life expectancy by increasing the risk of cardiovascular diseases (conditions that affect the heart and circulation) and other life-threatening conditions. Patients need complex care to manage the multiple risk factors (high blood sugar, high blood pressure, high levels of fat in the blood) that are associated with the long-term complications of diabetes, and they need to be regularly screened and treated for these complications. Clinical practice guidelines for diabetes provide recommendations on screening and diagnosis, drug treatment, and cardiovascular disease risk reduction, and on helping patients self-manage their disease. Unfortunately, the care delivered to patients with diabetes frequently fails to meet the standards laid down in these guidelines.
Why Was This Study Done?
How can guideline adherence and the quality of care provided to patients be improved? A common approach is to send printed educational materials to clinicians. For example, when the Canadian Diabetes Association (CDA) updated its clinical practice guidelines in 2008, it mailed educational toolkits that contained brochures and other printed materials targeting key themes from the guidelines to family physicians. In this pragmatic cluster randomized trial, the researchers investigate the effect of the CDA educational toolkit that targeted cardiovascular disease screening and treatment on the quality of care of people with diabetes. A pragmatic trial asks whether an intervention works under real-life conditions and whether it works in terms that matter to the patient; a cluster randomized trial randomly assigns groups of people to receive alternative interventions and compares outcomes in the differently treated “clusters.”
What Did the Researchers Do and Find?
The researchers randomly assigned family practices in Ontario, Canada to receive the educational toolkit in June 2009 (intervention group) or in May 2010 (control group). They examined outcomes between July 2009 and April 2010 in all patients with diabetes in Ontario aged over 40 years (933,789 people) using population-level administrative data. In Canada, administrative databases record the personal details of people registered with provincial health plans, information on hospital visits and prescriptions, and physician service claims for consultations, assessments, and diagnostic and therapeutic procedures. They also examined clinical outcome data from a random sample of 1,592 patients at high risk of cardiovascular complications. In the administrative data study, death or non-fatal heart attack (the primary outcome) occurred in about 11,500 patients in both the intervention and control group. In the clinical data study, the primary outcome―use of a statin to lower blood fat levels―occurred in about 700 patients in both study groups. Secondary outcomes, including other clinical events, processes of care, and measures of risk factor control were also not improved by the intervention. Indeed, in the administrative data study, some processes of care outcomes related to screening for heart disease were statistically significantly worse in the intervention group than in the control group, and in the clinical data study, fewer patients in the intervention group reached blood pressure targets than in the control group.
What Do These Findings Mean?
These findings suggest that the CDA cardiovascular diseases educational toolkit did not improve quality of care or cardiovascular outcomes in a population with diabetes. Indeed, the toolkit may have led to worsening in some secondary outcomes although, because numerous secondary outcomes were examined, this may be a chance finding. Limitations of the study include its length, which may have been too short to see an effect of the intervention on clinical outcomes, and the possibility of a ceiling effect—the control group in the clinical data study generally had good care, which left little room for improvement of the quality of care in the intervention group. Overall, however, these findings suggest that printed educational materials may not be an effective way to improve the quality of care for patients with diabetes and other complex conditions and highlight the need for a rigorous, scientific approach to the development, dissemination, and evaluation of quality improvement interventions.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001588.
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, health care professionals, and the general public (in English and Spanish)
The UK National Health Service Choices website provides information (including some personal stories) for patients and carers about type 2 diabetes, the commonest form of diabetes
The Canadian Diabetes Association also provides information about diabetes for patients (including some personal stories about living with diabetes) and health care professionals; its latest clinical practice guidelines are available on its website
The UK National Institute for Health and Care Excellence provides general information about clinical guidelines and about health care quality standards in the UK
The US Agency for Healthcare Research and Quality aims to improve the quality, safety, efficiency, and effectiveness of health care for all Americans (information in English and Spanish); the US National Guideline Clearinghouse is a searchable database of clinical practice guidelines
The International Diabetes Federation provides information about diabetes for patients and health care professionals, along with international statistics on the burden of diabetes
doi:10.1371/journal.pmed.1001588
PMCID: PMC3913553  PMID: 24505216
3.  Subgroup Analyses in Randomized Controlled Trials: the Need for Risk Stratification in Kidney Transplantation 
Although randomized controlled trials are the gold standard for establishing causation in clinical research, their aggregated results can be misleading when applied to individual patients. A treatment may be beneficial in some patients, but its harms may outweigh benefits in others. While conventional one-variable-at-a-time subgroup analyses have well-known limitations, multivariable risk-based analyses can help uncover clinically significant heterogeneity in treatment effects that may be otherwise obscured. Trials in kidney transplantation have found that a reduction in acute rejection does not translate into a similar benefit in prolonging graft survival and improving graft function. This paradox might be explained by variation in the risk of acute rejection among included kidney transplant recipients, which in turn alters the likelihood of benefit or harm from intense immunosuppressive regimens. Analyses that stratify patients by their immunological risk may resolve these otherwise puzzling results. Reliable risk models should be developed to investigate benefits and harms in rationally designed risk-based subgroups of patients in existing RCT datasets. These risk strata would need to be validated in future prospective clinical trials examining long-term effects on patient and graft survival. This approach may allow better individualized treatment choices for kidney transplant recipients.
doi:10.1111/j.1600-6143.2009.02802.x
PMCID: PMC2997518  PMID: 19764948
4.  Schizophrenia and Violence: Systematic Review and Meta-Analysis 
PLoS Medicine  2009;6(8):e1000120.
Seena Fazel and colleagues investigate the association between schizophrenia and other psychoses and violence and violent offending, and show that the increased risk appears to be partly mediated by substance abuse comorbidity.
Background
Although expert opinion has asserted that there is an increased risk of violence in individuals with schizophrenia and other psychoses, there is substantial heterogeneity between studies reporting risk of violence, and uncertainty over the causes of this heterogeneity. We undertook a systematic review of studies that report on associations between violence and schizophrenia and other psychoses. In addition, we conducted a systematic review of investigations that reported on risk of homicide in individuals with schizophrenia and other psychoses.
Methods and Findings
Bibliographic databases and reference lists were searched from 1970 to February 2009 for studies that reported on risks of interpersonal violence and/or violent criminality in individuals with schizophrenia and other psychoses compared with general population samples. These data were meta-analysed and odds ratios (ORs) were pooled using random-effects models. Ten demographic and clinical variables were extracted from each study to test for any observed heterogeneity in the risk estimates. We identified 20 individual studies reporting data from 18,423 individuals with schizophrenia and other psychoses. In men, ORs for the comparison of violence in those with schizophrenia and other psychoses with those without mental disorders varied from 1 to 7 with substantial heterogeneity (I2 = 86%). In women, ORs ranged from 4 to 29 with substantial heterogeneity (I2 = 85%). The effect of comorbid substance abuse was marked with the random-effects ORs of 2.1 (95% confidence interval [CI] 1.7–2.7) without comorbidity, and an OR of 8.9 (95% CI 5.4–14.7) with comorbidity (p<0.001 on metaregression). Risk estimates of violence in individuals with substance abuse (but without psychosis) were similar to those in individuals with psychosis with substance abuse comorbidity, and higher than all studies with psychosis irrespective of comorbidity. Choice of outcome measure, whether the sample was diagnosed with schizophrenia or with nonschizophrenic psychoses, study location, or study period were not significantly associated with risk estimates on subgroup or metaregression analysis. Further research is necessary to establish whether longitudinal designs were associated with lower risk estimates. The risk for homicide was increased in individuals with psychosis (with and without comorbid substance abuse) compared with general population controls (random-effects OR = 19.5, 95% CI 14.7–25.8).
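The random-effects pooling of odds ratios and the I² statistic reported above follow standard meta-analytic formulas (DerSimonian-Laird); a minimal sketch with invented study values is shown below for orientation. The study odds ratios, confidence limits, and function name are illustrative assumptions, not data from this review.

```python
# DerSimonian-Laird random-effects pooling of per-study log odds ratios.
import numpy as np

def pool_random_effects(log_or, se):
    """Pooled OR, 95% CI, tau^2 and I^2 from per-study log odds ratios and SEs."""
    log_or, se = np.asarray(log_or, dtype=float), np.asarray(se, dtype=float)
    w = 1 / se**2                                   # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)           # Cochran's Q
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_star = 1 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se_pooled = 1 / np.sqrt(np.sum(w_star))
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0   # I^2 heterogeneity statistic
    ci = (np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled))
    return np.exp(pooled), ci, tau2, i2

# invented per-study odds ratios and upper 95% CI limits, converted to the log scale
log_ors = np.log([1.8, 3.2, 4.5, 2.4, 6.0])
ses = (np.log([2.9, 5.1, 7.0, 3.9, 9.8]) - log_ors) / 1.96
print(pool_random_effects(log_ors, ses))
```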
Conclusions
Schizophrenia and other psychoses are associated with violence and violent offending, particularly homicide. However, most of the excess risk appears to be mediated by substance abuse comorbidity. The risk in these patients with comorbidity is similar to that for substance abuse without psychosis. Public health strategies for violence reduction could consider focusing on the primary and secondary prevention of substance abuse.
Please see later in the article for Editors' Summary
Editors' Summary
Background
Schizophrenia is a lifelong, severe psychotic condition. One in 100 people will have at least one episode of schizophrenia during their lifetime. Symptoms include delusions (for example, patients believe that someone is plotting against them) and hallucinations (hearing or seeing things that are not there). In men, schizophrenia usually starts in the late teens or early 20s; women tend to develop schizophrenia a little later. The causes of schizophrenia include genetic predisposition, obstetric complications, illegal drug use (substance abuse), and experiencing traumatic life events. The condition can be treated with a combination of antipsychotic drugs and supportive therapy; hospitalization may be necessary in very serious cases to prevent self harm. Many people with schizophrenia improve sufficiently after treatment to lead satisfying lives although some patients need lifelong support and supervision.
Why Was This Study Done?
Some people believe that schizophrenia and other psychoses are associated with violence, a perception that is often reinforced by news reports and that contributes to the stigma associated with mental illness. However, mental health advocacy groups and many mental health clinicians argue that it is a myth that people with mental health problems are violent. Several large, population-based studies have examined this disputed relationship. But, although some studies found no increased risk of violence among patients with schizophrenia compared with the general population, others found a marked increase in violent offending in patients with schizophrenia. Here, the researchers try to resolve this variation (“heterogeneity”) in the conclusions reached in different studies by doing a systematic review (a study that uses predefined search criteria to identify all the research on a specific topic) and a meta-analysis (a statistical method for combining the results of several studies) of the literature on associations between violence and schizophrenia and other psychoses. They also explored the relationship between substance abuse and violence.
What Did the Researchers Do and Find?
By systematically searching bibliographic databases and reference lists, the researchers identified 20 studies that compared the risk of violence in people with schizophrenia and other psychoses and the risk of violence in the general population. They then used a “random effects model” (a statistical technique that allows for heterogeneity between studies) to investigate the association between schizophrenia and violence. For men with schizophrenia or other psychoses, the pooled odds ratio (OR) from the relevant studies (which showed moderate heterogeneity) was 4.7, which was reduced to 3.8 once adjustment was made for socio-economic factors. That is, a man with schizophrenia was four to five times as likely to commit a violent act as a man in the general population. For women, the equivalent pooled OR was 8.2 but there was a much greater variation between the ORs in the individual studies than in the studies that involved men. The researchers then used “meta-regression” to investigate the heterogeneity between the studies. This analysis suggested that none of the study characteristics examined apart from co-occurring substance abuse could have caused the variation between the studies. Importantly the authors found that risk estimates of violence in people with substance abuse but no psychosis were similar to those in people with substance abuse and psychosis and higher than those in people with psychosis alone. Finally, although people with schizophrenia were nearly 20 times more likely to have committed murder than people in the general population, only one in 300 people with schizophrenia had killed someone, a similar risk to that seen in people with substance abuse.
What Do These Findings Mean?
These findings indicate that schizophrenia and other psychoses are associated with violence but that the association is strongest in people with substance abuse and most of the excess risk of violence associated with schizophrenia and other psychoses is mediated by substance abuse. However, the increased risk in patients with comorbidity was similar to that in substance abuse without psychosis. A potential implication of this finding is that violence reduction strategies that focus on preventing substance abuse in both the general population and people with psychoses might be more successful than strategies that solely target people with mental illnesses. However, the quality of the individual studies included in this meta-analysis limits the strength of its conclusions, and more research into the association between schizophrenia, substance abuse, and violence is needed to clarify whether and how violence reduction strategies should change.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000120.
The US National Institute of Mental Health provides information about schizophrenia (in English and Spanish)
The UK National Health Service Choices Web site has information for patients and carers about schizophrenia
The MedlinePlus Encyclopedia has a page on schizophrenia; MedlinePlus provides links to other sources of information on schizophrenia and on psychotic disorders (in English and Spanish)
The Schizophrenia and Related Disorders Alliance of America provides information and support for people with schizophrenia and their families
The time to change Web site provides information about an English campaign to reduce the stigma associated with mental illness
The Schizophrenia Research Forum provides updated research news and commentaries for the scientific community
doi:10.1371/journal.pmed.1000120
PMCID: PMC2718581  PMID: 19668362
5.  Directly observed therapy for treating tuberculosis 
Background
Tuberculosis (TB) requires at least six months of treatment. If treatment is incomplete, patients may not be cured and drug resistance may develop. Directly Observed Therapy (DOT) is a specific strategy, endorsed by the World Health Organization, to improve adherence by requiring health workers, community volunteers or family members to observe and record patients taking each dose.
Objectives
To evaluate DOT compared to self-administered therapy in people on treatment for active TB or on prophylaxis to prevent active disease. We also compared the effects of different forms of DOT.
Search methods
We searched the following databases up to 13 January 2015: the Cochrane Infectious Diseases Group Specialized Register; the Cochrane Central Register of Controlled Trials (CENTRAL), published in the Cochrane Library; MEDLINE; EMBASE; LILACS and mRCT. We also checked article reference lists and contacted relevant researchers and organizations.
Selection criteria
Randomized controlled trials (RCTs) and quasi-RCTs comparing DOT with routine self-administration of treatment or prophylaxis at home.
Data collection and analysis
Two review authors independently assessed risk of bias of each included trial and extracted data. We compared interventions using risk ratios (RR) with 95% confidence intervals (CI). We used a random-effects model if meta-analysis was appropriate but heterogeneity was present (I² statistic ≥ 50%). We assessed the quality of the evidence using the GRADE approach.
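For orientation, the sketch below computes a risk ratio and its 95% confidence interval from two-arm event counts, the effect measure used throughout this review; the counts and function name are invented for illustration, not taken from the included trials.

```python
# Risk ratio with a 95% CI from two-arm event counts (illustrative numbers only).
import math

def risk_ratio(events_int, n_int, events_ctl, n_ctl, z=1.96):
    rr = (events_int / n_int) / (events_ctl / n_ctl)
    # standard error of log(RR)
    se = math.sqrt(1/events_int - 1/n_int + 1/events_ctl - 1/n_ctl)
    lo, hi = math.exp(math.log(rr) - z*se), math.exp(math.log(rr) + z*se)
    return rr, (lo, hi)

# e.g. cure in a DOT arm vs. a self-administered arm (made-up counts)
print(risk_ratio(events_int=450, n_int=800, events_ctl=420, n_ctl=810))
```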
Main results
Eleven trials including 5662 participants met the inclusion criteria. DOT was performed by a range of people (nurses, community health workers, family members or former TB patients) in a variety of settings (clinic, the patient's home or the home of a community volunteer).
DOT versus self-administered
Six trials from South Africa, Thailand, Taiwan, Pakistan and Australia compared DOT with self-administered therapy for treatment. Trials included DOT at home by family members or community health workers (who were usually supervised); DOT at home by health staff; and DOT at health facilities. TB cure was low with self-administration across all studies (range 41% to 67%), and direct observation did not substantially improve this (RR 1.08, 95% CI 0.91 to 1.27; five trials, 1645 participants, moderate quality evidence). In a subgroup analysis stratified by the frequency of contact with health services in the self-treatment arm, daily DOT may improve TB cure compared to self-administered treatment where patients in the self-administered group only visited the clinic every month (RR 1.15, 95% CI 1.06 to 1.25; two trials, 900 participants); but as contact in the control arm became more frequent, this small effect was no longer apparent (every two weeks: RR 0.96, 95% CI 0.83 to 1.12; one trial, 497 participants; every week: RR 0.90, 95% CI 0.68 to 1.21; two trials, 248 participants).
Treatment completion showed a similar pattern, ranging from 59% to 78% in the self-treatment groups, and direct observation did not improve this (RR 1.07, 95% CI 0.96 to 1.19; six trials, 1839 participants, moderate quality evidence).
DOT at home versus DOT at health facility
In four trials that compared DOT at home by family members or community health workers with DOT by health workers at a health facility, there was little or no difference in cure or treatment completion (cure: RR 1.02, 95% CI 0.88 to 1.18, four trials, 1556 participants, moderate quality evidence; treatment completion: RR 1.04, 95% CI 0.91 to 1.17, three trials, 1029 participants, moderate quality evidence).
DOT by family member versus DOT by community health worker
Two trials compared DOT at home by family members with DOT at home by community health workers. There was also little or no difference in cure or treatment completion (cure: RR 1.02, 95% CI 0.86 to 1.21; two trials, 1493 participants, moderate quality evidence; completion: RR 1.05, 95% CI 0.90 to 1.22; two trials, 1493 participants, low quality evidence).
Specific patient categories
A trial of 300 intravenous drug users in the USA compared direct observation with no observation for TB prophylaxis to prevent active disease and showed little difference in treatment completion (RR 1.00, 95% CI 0.88 to 1.13; one trial, 300 participants, low quality evidence).
Authors' conclusions
From the existing trials, DOT did not provide a solution to poor adherence in TB treatment. Given the large resource and cost implications of DOT, policy makers might want to reconsider strategies that depend on direct observation. Other options might take into account financial and logistical barriers to care; approaches that motivate patients and staff; and defaulter follow-up.
PLAIN LANGUAGE SUMMARY
Directly observing people with TB take their drugs to help them complete their treatment
This Cochrane Review summarises trials evaluating the effects of directly observed therapy (DOT) for treating people with tuberculosis (TB) or people on prophylaxis to prevent active disease compared to self-administered treatment. After searching for relevant trials up to 13 January 2015, we included 11 randomized controlled trials, enrolling 5662 people with TB, and conducted between 1995 and 2008.
What is DOT and how might it improve treatment outcomes for people with TB?
DOT is one strategy to ensure that patients with TB take all their medication. An 'observer' acceptable to the patient and the health system observes the patient taking every dose of their medication, and records this for the health system to monitor.
The World Health Organization currently recommends that people with TB are treated for at least six months to achieve cure. These long durations of treatment can be difficult for patients to complete, especially once they are well and need to return to work. Failure to complete treatment can lead to relapse and even death in individuals, and also has important public health consequences, such as increased TB transmission and the development of drug resistance.
What the research says
Overall, cure and treatment completion in both self-treatment and DOT groups were low, and DOT did not substantially improve this. Small effects were seen in a subgroup of studies where the self-treatment group was monitored less frequently than the DOT group.
There is probably no difference in TB cure or treatment completion whether direct observation is conducted at home or at the clinic (moderate quality evidence). There is probably little or no difference in TB cure whether direct observation is conducted by a community health worker or a family member (moderate quality evidence), and there may be little or no difference in treatment completion either (low quality evidence).
Direct observation may have little or no effect on treatment completion in injection drug users (low quality evidence).
The authors conclude that DOT on its own may not offer the solution to poor adherence in people taking TB medication.
doi:10.1002/14651858.CD003343.pub4
PMCID: PMC4460720  PMID: 26022367
6.  Topical antifungals for seborrhoeic dermatitis 
Background
Seborrhoeic dermatitis is a chronic inflammatory skin condition that is distributed worldwide. It commonly affects the scalp, face and flexures of the body. Treatment options include antifungal drugs, steroids, calcineurin inhibitors, keratolytic agents and phototherapy.
Objectives
To assess the effects of antifungal agents for seborrhoeic dermatitis of the face and scalp in adolescents and adults.
A secondary objective is to assess whether the same interventions are effective in the management of seborrhoeic dermatitis in patients with HIV/AIDS.
Search methods
We searched the following databases up to December 2014: the Cochrane Skin Group Specialised Register, the Cochrane Central Register of Controlled Trials (CENTRAL) (2014, Issue 11), MEDLINE (from 1946), EMBASE (from 1974) and Latin American Caribbean Health Sciences Literature (LILACS) (from 1982). We also searched trials registries and checked the bibliographies of published studies for further trials.
Selection criteria
Randomised controlled trials of topical antifungals used for treatment of seborrhoeic dermatitis in adolescents and adults, with primary outcome measures of complete clearance of symptoms and improved quality of life.
Data collection and analysis
Review author pairs independently assessed eligibility for inclusion, extracted study data and assessed risk of bias of included studies. We performed fixed-effect meta-analysis for studies with low statistical heterogeneity and used a random-effects model when heterogeneity was high.
Main results
We included 51 studies with 9052 participants. Of these, 45 trials assessed treatment outcomes at five weeks or less after commencement of treatment, and six trials assessed outcomes over a longer time frame. We believe that 24 trials had some form of conflict of interest, such as funding by pharmaceutical companies.
Among the included studies were 12 ketoconazole trials (N = 3253), 11 ciclopirox trials (N = 3029), two lithium trials (N = 141), two bifonazole trials (N = 136) and one clotrimazole trial (N = 126) that compared the effectiveness of these treatments versus placebo or vehicle. Nine ketoconazole trials (N = 632) and one miconazole trial (N = 47) compared these treatments versus steroids. Fourteen studies (N = 1541) compared one antifungal versus another or compared different doses or schedules of administration of the same agent versus one another.
Ketoconazole
Topical ketoconazole 2% treatment showed a 31% lower risk of failed clearance of rashes compared with placebo (risk ratio (RR) 0.69, 95% confidence interval (CI) 0.59 to 0.81, eight studies, low-quality evidence) at four weeks of follow-up, but the effect on side effects was uncertain because evidence was of very low quality (RR 0.97, 95% CI 0.58 to 1.64, six studies); heterogeneity between studies was substantial (I² = 74%). The median proportion of those who did not have clearance in the placebo groups was 69%.
Ketoconazole treatment resulted in a remission rate similar to that of steroids (RR 1.17, 95% CI 0.95 to 1.44, six studies, low-quality evidence), but occurrence of side effects was 44% lower in the ketoconazole group than in the steroid group (RR 0.56, 95% CI 0.32 to 0.96, eight studies, moderate-quality evidence).
Ketoconazole yielded a remission failure rate similar to that of ciclopirox (RR 1.09, 95% CI 0.95 to 1.26, three studies, low-quality evidence). Most comparisons between ketoconazole and other antifungals were based on single studies that showed comparable treatment effects.
Ciclopirox
Ciclopirox 1% led to a lower failed remission rate than placebo at four weeks of follow-up (RR 0.79, 95% CI 0.67 to 0.94, eight studies, moderate-quality evidence) with similar rates of side effects (RR 0.9, 95% CI 0.72 to 1.11, four studies, moderate-quality evidence).
Other antifungals
Clotrimazole and miconazole efficacies were comparable with those of steroids on short-term assessment in single studies.
Treatment effects on individual symptoms were less clear and were inconsistent, possibly because of difficulties encountered in measuring these symptoms.
Evidence was insufficient to conclude that dose or mode of delivery influenced treatment outcome. Only one study reported on treatment compliance. No study assessed quality of life. One study assessed the maximum rash-free period but provided insufficient data for analysis. One small study in patients with HIV compared the effect of lithium versus placebo on seborrhoeic dermatitis of the face, but treatment outcomes were similar.
Authors' conclusions
Ketoconazole and ciclopirox are more effective than placebo, but limited evidence suggests that either of these agents is more effective than any other agent within the same class. Very few studies have assessed symptom clearance for longer periods than four weeks. Ketoconazole produced findings similar to those of steroids, but side effects were fewer. Treatment effect on overall quality of life remains unknown. Better outcome measures, studies of better quality and better reporting are all needed to improve the evidence base for antifungals for seborrhoeic dermatitis.
Plain Language Summary
Antifungal treatments applied to the skin to treat seborrhoeic dermatitis
Background
Seborrhoeic dermatitis is a chronic inflammatory skin condition found throughout the world, presenting as rashes with varying degrees of redness, scaling and itching. It affects people of both sexes but is more common among men. The disease usually starts after puberty and can lead to personal discomfort and cosmetic concerns when rashes occur at prominent skin sites. Drugs that act against fungi, also called antifungal agents, have been commonly used on their own or in combination.
Review question
Do antifungal treatments applied to the skin clear up the rashes and itching of seborrhoeic dermatitis?
Study characteristics
We included 51 studies with 9052 participants. Trials typically were four weeks long, and very few trials were longer. In all, 24 studies had some involvement of pharmaceutical companies such as funding or employment of the researchers.
Key results
Participants taking ketoconazole were 31% less likely than those given placebo to have symptoms that persisted at four weeks of follow-up. This was seen in eight studies with 2520 participants, but wide variation was noted between studies. Ketoconazole was as effective as steroids but had 44% fewer side effects. Without causing more side effects, ciclopirox was 21% more effective than placebo in achieving clinical clearance of rashes. Treatment effect on redness, itching or scaling symptoms of the skin was less clear. Evidence was insufficient to conclude that one antifungal was superior to other antifungals, but this observation was based on few studies. Ketoconazole and ciclopirox are the most heavily investigated antifungals and are more effective than placebo. Other antifungals might have similar effects, but data are insufficient to support this.
Common side effects were increased skin redness or itching, burning sensation and hair loss.
No studies measured quality of life. Only one study reported on percentage of compliance in different treatment groups. Other studies used surrogates such as acceptability to represent compliance. We therefore could not assess the effect of compliance on treatment outcomes. One study on patients with HIV reported no clear effects of treatments.
Quality of the evidence
Evidence for the effects of ketoconazole compared with placebo or a steroid was assessed to be of low quality. Evidence derived from comparison of ciclopirox versus placebo was assessed to be of moderate quality. Better quality studies with longer follow-up and better reporting are needed to enlarge the evidence base for antifungals.
doi:10.1002/14651858.CD008138.pub2
PMCID: PMC4448221  PMID: 25919043
7.  Red Blood Cell Transfusion and Mortality in Trauma Patients: Risk-Stratified Analysis of an Observational Study 
PLoS Medicine  2014;11(6):e1001664.
Using a large multicentre cohort, Pablo Perel and colleagues evaluate the association of red blood cell transfusion with mortality according to the predicted risk of death for trauma patients.
Please see later in the article for the Editors' Summary
Background
Haemorrhage is a common cause of death in trauma patients. Although transfusions are extensively used in the care of bleeding trauma patients, there is uncertainty about the balance of risks and benefits and how this balance depends on the baseline risk of death. Our objective was to evaluate the association of red blood cell (RBC) transfusion with mortality according to the predicted risk of death.
Methods and Findings
A secondary analysis of the CRASH-2 trial (which originally evaluated the effect of tranexamic acid on mortality in trauma patients) was conducted. The trial included 20,127 trauma patients with significant bleeding from 274 hospitals in 40 countries. We evaluated the association of RBC transfusion with mortality in four strata of predicted risk of death: <6%, 6%–20%, 21%–50%, and >50%. For this analysis the exposure considered was RBC transfusion, and the main outcome was death from all causes at 28 days. A total of 10,227 patients (50.8%) received at least one transfusion. We found strong evidence that the association of transfusion with all-cause mortality varied according to the predicted risk of death (p-value for interaction <0.0001). Transfusion was associated with an increase in all-cause mortality among patients with <6% and 6%–20% predicted risk of death (odds ratio [OR] 5.40, 95% CI 4.08–7.13, p<0.0001, and OR 2.31, 95% CI 1.96–2.73, p<0.0001, respectively), but with a decrease in all-cause mortality in patients with >50% predicted risk of death (OR 0.59, 95% CI 0.47–0.74, p<0.0001). Transfusion was associated with an increase in fatal and non-fatal vascular events (OR 2.58, 95% CI 2.05–3.24, p<0.0001). The risk associated with RBC transfusion was significantly increased for all the predicted risk of death categories, but the relative increase was higher for those with the lowest (<6%) predicted risk of death (p-value for interaction <0.0001). As this was an observational study, the results could have been affected by different types of confounding. In addition, we could not consider haemoglobin in our analysis. Sensitivity analyses excluding patients who died early, propensity score analyses adjusting for use of platelets, fresh frozen plasma, and cryoprecipitate, and analyses adjusting for country produced similar results.
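The "p-value for interaction" quoted above tests whether the transfusion-mortality association differs across predicted-risk strata. A hedged sketch of such a test on simulated data, using a logistic model with a transfusion-by-stratum interaction and a likelihood-ratio comparison, is shown below; the package calls (pandas, statsmodels, scipy), variable names, and all numbers are assumptions, not CRASH-2 data.

```python
# Sketch of a risk-stratified interaction test: does the transfusion-mortality
# association vary across predicted-risk strata? Data are simulated, not CRASH-2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 20000
stratum = rng.choice(["<6%", "6-20%", "21-50%", ">50%"], size=n, p=[0.4, 0.3, 0.2, 0.1])
base = {"<6%": 0.03, "6-20%": 0.12, "21-50%": 0.35, ">50%": 0.60}   # baseline death risk
transfused = rng.binomial(1, 0.5, size=n)
# simulate harm in low-risk strata and benefit in the highest-risk stratum
effect = {"<6%": 1.6, "6-20%": 0.8, "21-50%": 0.0, ">50%": -0.5}
logit = np.array([np.log(base[s] / (1 - base[s])) for s in stratum]) \
        + transfused * np.array([effect[s] for s in stratum])
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"death": death, "transfused": transfused, "stratum": stratum})

full = smf.logit("death ~ transfused * C(stratum)", data=df).fit(disp=0)
reduced = smf.logit("death ~ transfused + C(stratum)", data=df).fit(disp=0)
lr = 2 * (full.llf - reduced.llf)                     # likelihood-ratio statistic
p_interaction = chi2.sf(lr, df=full.df_model - reduced.df_model)
print(p_interaction)
```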
Conclusions
The association of transfusion with all-cause mortality appears to vary according to the predicted risk of death. Transfusion may reduce mortality in patients at high risk of death but increase mortality in those at low risk. The effect of transfusion in low-risk patients should be further tested in a randomised trial.
Trial registration
www.ClinicalTrials.gov NCT01746953
Editors' Summary
Background
Trauma—a serious injury to the body caused by violence or an accident—is a major global health problem. Every year, injuries caused by traffic collisions, falls, blows, and other traumatic events kill more than 5 million people (9% of annual global deaths). Indeed, for people between the ages of 5 and 44 years, injuries are among the top three causes of death in many countries. Trauma sometimes kills people through physical damage to the brain and other internal organs, but hemorrhage (serious uncontrolled bleeding) is responsible for 30%–40% of trauma-related deaths. Consequently, early trauma care focuses on minimizing hemorrhage (for example, by using compression to stop bleeding) and on restoring blood circulation after blood loss (health-care professionals refer to this as resuscitation). Red blood cell (RBC) transfusion is often used for the management of patients with trauma who are bleeding; other resuscitation products include isotonic saline and solutions of human blood proteins.
Why Was This Study Done?
Although RBC transfusion can save the lives of patients with trauma who are bleeding, there is considerable uncertainty regarding the balance of risks and benefits associated with this procedure. RBC transfusion, which is an expensive intervention, is associated with several potential adverse effects, including allergic reactions and infections. Moreover, blood supplies are limited, and the risks from transfusion are high in low- and middle-income countries, where most trauma-related deaths occur. In this study, which is a secondary analysis of data from a trial (CRASH-2) that evaluated the effect of tranexamic acid (which stops excessive bleeding) in patients with trauma, the researchers test the hypothesis that RBC transfusion may have a beneficial effect among patients at high risk of death following trauma but a harmful effect among those at low risk of death.
What Did the Researchers Do and Find?
The CRASH-2 trial included 20,127 patients with trauma and major bleeding treated in 274 hospitals in 40 countries. In their risk-stratified analysis, the researchers investigated the effect of RBC transfusion on CRASH-2 participants with a predicted risk of death (estimated using a validated model that included clinical variables such as heart rate and blood pressure) on admission to hospital of less than 6%, 6%–20%, 21%–50%, or more than 50%. That is, the researchers compared death rates among patients in each stratum of predicted risk of death who received an RBC transfusion with death rates among patients who did not receive a transfusion. Half the patients received at least one transfusion. Transfusion was associated with an increase in all-cause mortality at 28 days after trauma among patients with a predicted risk of death of less than 6% or of 6%–20%, but with a decrease in all-cause mortality among patients with a predicted risk of death of more than 50%. In absolute figures, compared to no transfusion, RBC transfusion was associated with 5.1 more deaths per 100 patients in the patient group with the lowest predicted risk of death but with 11.9 fewer deaths per 100 patients in the group with the highest predicted risk of death.
What Do These Findings Mean?
These findings show that RBC transfusion is associated with an increase in all-cause deaths among patients with trauma and major bleeding with a low predicted risk of death, but with a reduction in all-cause deaths among patients with a high predicted risk of death. In other words, these findings suggest that the effect of RBC transfusion on all-cause mortality may vary according to whether a patient with trauma has a high or low predicted risk of death. However, because the participants in the CRASH-2 trial were not randomly assigned to receive a RBC transfusion, it is not possible to conclude that receiving a RBC transfusion actually increased the death rate among patients with a low predicted risk of death. It might be that the patients with this level of predicted risk of death who received a transfusion shared other unknown characteristics (confounders) that were actually responsible for their increased death rate. Thus, to provide better guidance for clinicians caring for patients with trauma and hemorrhage, the hypothesis that RBC transfusion could be harmful among patients with trauma with a low predicted risk of death should be prospectively evaluated in a randomised controlled trial.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001664.
This study is further discussed in a PLOS Medicine Perspective by Druin Burch
The World Health Organization provides information on injuries and on violence and injury prevention (in several languages)
The US Centers for Disease Control and Prevention has information on injury and violence prevention and control
The National Trauma Institute, a US-based non-profit organization, provides information about hemorrhage after trauma and personal stories about surviving trauma
The UK National Health Service Choices website provides information about blood transfusion, including a personal story about transfusion after a serious road accident
The US National Heart, Lung, and Blood Institute also provides detailed information about blood transfusions
MedlinePlus provides links to further resources on injuries, bleeding, and blood transfusion (in English and Spanish)
More information is available about CRASH-2 (in several languages)
doi:10.1371/journal.pmed.1001664
PMCID: PMC4060995  PMID: 24937305
8.  Competing event risk stratification may improve the design and efficiency of clinical trials: Secondary analysis of SWOG 8794 
Contemporary clinical trials  2012;34(1):74-79.
Background
Composite endpoints can be problematic in the presence of competing risks when a treatment does not affect events comprising the endpoint equally.
Methods
We conducted a secondary analysis of the SWOG 8794 trial of adjuvant radiation therapy (RT) for high-risk post-operative prostate cancer. The primary outcome was metastasis-free survival (MFS), defined as time to first occurrence of metastasis or death from any cause (competing mortality (CM)). We developed separate risk scores for time to metastasis and CM using competing risks regression. We estimated treatment effects using Cox models adjusted for risk scores and identified an enriched subgroup of 75 patients at high risk of metastasis and low risk of CM.
Results
The mean CM risk score was significantly lower in the RT arm vs. control arm (p=0.001). The effect of RT on MFS (HR 0.70; 95% CI, 0.53–0.92; p=0.010) was attenuated when controlling for metastasis and CM risk (HR 0.76; 95% CI, 0.58–1.00; p=0.049), and the effect of RT on overall survival (HR 0.73; 95% CI, 0.55–0.96; p=0.02) was no longer significant when controlling for metastasis and CM risk (HR 0.80; 95% CI, 0.60–1.06; p=0.12). Compared to the whole sample, the enriched subgroup had the same 10-year incidence of MFS (40%; 95% CI, 22–57%), but a higher incidence of metastasis (30% (95% CI, 15–47%) vs. 20% (95% CI, 15–26%)). A randomized trial in the subgroup would have achieved 80% power with 56% fewer patients (313 vs. 709, respectively).
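The reported power gain from enrichment follows from the higher event rate in the enriched subgroup. A back-of-the-envelope sketch using Schoenfeld's event-driven formula for a two-arm survival comparison is shown below; the hazard ratio, event probabilities, and function name are illustrative assumptions, not values from SWOG 8794.

```python
# Rough Schoenfeld-style calculation: events (and hence patients) needed for a
# two-arm log-rank/Cox comparison, and how enrichment for a higher event rate
# shrinks the required sample size. All inputs are illustrative assumptions.
from math import log, ceil
from scipy.stats import norm

def patients_needed(hazard_ratio, event_prob, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    events = 4 * z**2 / log(hazard_ratio) ** 2      # required events, 1:1 allocation
    return ceil(events / event_prob)                # convert events to patients

# whole-trial population vs. an enriched subgroup with a higher metastasis rate
print(patients_needed(hazard_ratio=0.70, event_prob=0.20))   # e.g. unselected
print(patients_needed(hazard_ratio=0.70, event_prob=0.45))   # e.g. enriched
```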
Conclusion
Stratification on competing event risk may improve the efficiency of clinical trials.
doi:10.1016/j.cct.2012.09.008
PMCID: PMC3525784  PMID: 23063467
Prostate cancer; competing risks; competing mortality; radiation therapy
9.  The International Community-Acquired Pneumonia (CAP) Collaboration Cohort (ICCC) study: rationale, design and description of study cohorts and patients 
BMJ Open  2012;2(3):e001030.
Objective
To improve the understanding of the determinants of prognosis and accurate risk stratification in community-acquired pneumonia (CAP).
Design
Multicentre collaboration of prospective cohorts.
Setting
6 cohorts from the USA, Canada, Hong Kong and Spain.
Participants
From a published meta-analysis of risk stratification studies in CAP, the authors identified and pooled individual patient-level data from six prospective cohort studies of CAP (three from the USA, one each from Canada, Hong Kong and Spain) to create the International CAP Collaboration Cohort. The initial essential inclusion criteria of the meta-analysis were (1) a prospective design, (2) publication in the English language, (3) reporting of 30-day mortality and transfer to intensive or high-dependency care, and (4) a minimum of 1000 participants. Common baseline patient characteristics included demographics, history and physical examination findings, comorbidities and laboratory and radiographic findings.
Primary and secondary outcome measures
This paper reports the rationale, hypotheses and analytical framework and also describes study cohorts and patients. The authors aim to (1) compare the prognostic accuracy of existing CAP risk stratification tools, (2) assess patient-level determinants of prognosis, (3) improve risk stratification by combined use of scoring systems and (4) understand prognostic factors for specific patient groups.
Results
The six cohorts assembled from 1991 to 2007 included 13 784 patients (median age 71 years, 54% men). Aside from one randomised controlled study, the remaining five were cohort studies, but all had similar inclusion criteria. Overall, there was 0%–6% missing data. A total of 6159 (44%) had severe pneumonia by Pneumonia Severity Index class IV/V. Mortality at 30 days was 8% (1036). Admission to intensive care or high dependency unit was also 8% (1059).
Conclusions
International CAP Collaboration Cohort provides a pooled multicentre data set of patients with CAP, which will help us to better understand the prognosis of CAP.
Article summary
Article focus
This paper reports the rationale, hypotheses and analytical framework and also describes study cohorts and patients. We aim to
compare the prognostic accuracy of existing CAP risk stratification tools;
assess patient-level determinants of prognosis;
improve risk stratification by combined use of scoring systems;
understand prognostic factors for specific patient groups.
Key messages
The International CAP Collaboration Cohort (ICCC) described in this report will provide a better understanding of the determinants of outcomes in CAP. Examples include comparison of commonly and less commonly used CAP severity scoring systems and identification of the characteristics of CAP patients with a poor outcome (30-day mortality) despite being classified as non-severe by a severity score.
In view of the large sample size, the ICCC cohort will also be able to identify determinants of outcomes in patient groups with specific conditions such as cardiovascular and respiratory diseases, taking into account case mix and individual prognostic indicators.
The ICCC cohort will be of benefit to the CAP research community and help define a future agenda for research, as well as helping clinicians make better clinical decisions for patients with CAP.
Strengths and limitations of this study
The ICCC is a multicentre, multiethnic cohort in which all collaborating groups defined pneumonia based on clinical features and chest radiograph (CXR) evidence of pneumonia. The major strengths of the ICCC are its prospective study design, its large sample size, and its inclusion of CAP patients spanning a wide range of ages, ethnicities and healthcare settings. A potential area of improvement in the assessment of CAP is the identification of at-risk patients with pneumonia who were initially assessed as having non-severe CAP. With its large sample size, the ICCC may provide an opportunity to identify the characteristics of such individuals. Based on this work, risk assessment may be applied at more than one point in time in future CAP research and clinical practice, in order to observe temporal trends in recovery or deterioration.
There were multiple observers and data collections across several centres; however, all cohorts followed the strict data collection criteria described in table 1. Furthermore, the data collected were objective measures such as age and urea level, reducing the potential for observer bias. The process of care may vary between hospitals, and clinical management may differ between healthcare settings in the various countries, with important variations in antibiotic use, patterns of infective micro-organisms, care protocols and treatment guidelines. Other limitations to consider are the lack of data on biomarkers and on healthcare provider and site characteristics. Patients were enrolled into the study at different time periods; however, this presents an opportunity to compare and contrast different healthcare systems and to better understand variation in healthcare settings and outcomes. Since all six studies used the Pneumonia Severity Index (PSI) for risk stratification, patients who scored non-severe at initial assessment (low PSI) but went on to have a worse outcome are under-represented if such patients were sent home; this could attenuate estimates in the low PSI group. Nevertheless, it is possible that these patients would have presented again to the medical centre if and when deterioration occurred. Cohorts that only had data on CURB-related variables and cohorts with smaller sample sizes were not included in the ICCC, and this may introduce some degree of selection bias. Nevertheless, this should not affect the internal relationship between the predictors and outcomes of interest.
doi:10.1136/bmjopen-2012-001030
PMCID: PMC3358618  PMID: 22614174
10.  Fluoxetine versus other types of pharmacotherapy for depression 
Background
Depression is common in primary care and it is associated with marked personal, social and economic morbidity, and creates significant demands on service providers in terms of workload. Treatment is predominantly pharmaceutical or psychological. Fluoxetine, the first of a group of antidepressant (AD) agents known as selective serotonin reuptake inhibitors (SSRIs), has been studied in many randomised controlled trials (RCTs) in comparison with tricyclic (TCA), heterocyclic and related ADs, and other SSRIs. These comparative studies provided contrasting findings. In addition, systematic reviews of RCTs have always considered the SSRIs as a group, and evidence applicable to this group of drugs might not be applicable to fluoxetine alone. The present systematic review assessed the efficacy and tolerability profile of fluoxetine in comparison with TCAs, SSRIs and newer agents.
Objectives
To determine the efficacy of fluoxetine, compared with other ADs, in alleviating the acute symptoms of depression, and to review its acceptability.
Search methods
Relevant studies were located by searching the Cochrane Collaboration Depression, Anxiety and Neurosis Controlled Trials Register (CCDANCTR), the Cochrane Central Register of Controlled Trials (CENTRAL), Medline (1966-2004) and Embase (1974-2004). Non-English language articles were included.
Selection criteria
Only RCTs were included. For trials with a crossover design, only results from the first randomisation period were considered.
Data collection and analysis
Data were independently extracted by two reviewers using a standard form. Responders to treatment were calculated on an intention-to-treat basis: drop-outs were always included in this analysis. When data on drop-outs were carried forward and included in the efficacy evaluation, they were analysed according to the primary studies; when drop-outs were excluded from any assessment in the primary studies, they were considered as treatment failures. Scores from continuous outcomes were analysed including patients with a final assessment or with the last observation carried forward. The primary analyses used a fixed-effect approach and presented the Peto odds ratio (Peto OR) and the standardised mean difference (SMD).
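The Peto odds ratio referred to above is a one-step estimator built from observed-minus-expected event counts and hypergeometric variances summed over trials; a small sketch with invented 2×2 counts follows (the counts and function name are assumptions, not data from this review).

```python
# Peto one-step odds ratio pooled across trials, from per-trial 2x2 counts.
# Counts below are invented for illustration; they are not from the review.
import math

def peto_or(trials):
    """trials: list of (events_trt, n_trt, events_ctl, n_ctl) tuples."""
    sum_o_minus_e, sum_v = 0.0, 0.0
    for a, n1, c, n2 in trials:
        n = n1 + n2
        events = a + c
        expected = events * n1 / n                  # expected events in the treatment arm
        variance = events * (n - events) * n1 * n2 / (n**2 * (n - 1))
        sum_o_minus_e += a - expected
        sum_v += variance
    log_or = sum_o_minus_e / sum_v
    se = 1 / math.sqrt(sum_v)
    return math.exp(log_or), (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

print(peto_or([(30, 100, 45, 100), (22, 80, 31, 85), (15, 60, 20, 55)]))
```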
Main results
On a dichotomous outcome, fluoxetine was less effective than dothiepin (Peto OR: 2.09, 95% CI 1.08 to 4.05), sertraline (Peto OR: 1.40, 95% CI 1.11 to 1.76), mirtazapine (Peto OR: 1.64, 95% CI 1.01 to 2.65) and venlafaxine (Peto OR: 1.40, 95% CI 1.15 to 1.70). On a continuous outcome, fluoxetine was more effective than ABT-200 (Standardised Mean Difference (SMD), random effects: -1.85, 95% CI -2.25 to -1.45) and milnacipran (SMD, random effects: -0.38, 95% CI -0.71 to -0.06); conversely, it was less effective than venlafaxine (SMD, random effects: 0.11, 95% CI 0.00 to 0.23), although these figures were of borderline statistical significance.
Fluoxetine was better tolerated than TCAs considered as a group (Peto OR: 0.78, 95% CI 0.68 to 0.89), and was better tolerated in comparison with individual ADs, in particular than amitriptyline (Peto OR: 0.64, 95% CI 0.47 to 0.85) and imipramine (Peto OR: 0.79, 95% CI 0.63 to 0.99), and among newer ADs than ABT-200 (Peto OR: 0.21, 95% CI 0.10 to 0.41), pramipexole (Peto OR: 0.20, 95% CI 0.08 to 0.47) and reboxetine (Peto OR: 0.61, 95% CI 0.40 to 0.94).
Authors’ conclusions
There are statistically significant differences in terms of efficacy and tolerability between fluoxetine and certain ADs, but the clinical meaning of these differences is uncertain, and no definitive implications for clinical practice can be drawn. From a clinical point of view, the analysis of antidepressants’ safety profile (adverse effects and suicide risk) remains of crucial importance, and more reliable data about these outcomes are needed. Until more robust evidence is available, treatment decisions should be based on considerations of clinical history, drug toxicity, patient acceptability, and cost. Large, pragmatic trials enrolling heterogeneous populations of patients with depression are needed to generate clinically relevant information on the benefits and harms of competing pharmacological options. A meta-analysis of individual patient data from the randomised trials is clearly necessary.
doi:10.1002/14651858.CD004185.pub2
PMCID: PMC4163961  PMID: 16235353
Antidepressive Agents [therapeutic use]; Antidepressive Agents, Second-Generation [*therapeutic use]; Antidepressive Agents, Tricyclic [therapeutic use]; Depression [*drug therapy]; Fluoxetine [*therapeutic use]; Randomized Controlled Trials as Topic; Serotonin Uptake Inhibitors [*therapeutic use]; Humans
11.  A Novel Brief Therapy for Patients Who Attempt Suicide: A 24-months Follow-Up Randomized Controlled Study of the Attempted Suicide Short Intervention Program (ASSIP) 
PLoS Medicine  2016;13(3):e1001968.
Background
Attempted suicide is the main risk factor for suicide and repeated suicide attempts. However, the evidence for follow-up treatments reducing suicidal behavior in these patients is limited. The objective of the present study was to evaluate the efficacy of the Attempted Suicide Short Intervention Program (ASSIP) in reducing suicidal behavior. ASSIP is a novel brief therapy based on a patient-centered model of suicidal behavior, with an emphasis on early therapeutic alliance.
Methods and Findings
Patients who had recently attempted suicide were randomly allocated to treatment as usual (n = 60) or treatment as usual plus ASSIP (n = 60). ASSIP participants received three therapy sessions followed by regular contact through personalized letters over 24 months. Participants considered to be at high risk of suicide were included; 63% were diagnosed with an affective disorder, and 50% had a history of prior suicide attempts. Clinical exclusion criteria were habitual self-harm, serious cognitive impairment, and psychotic disorder. Study participants completed a set of psychosocial and clinical questionnaires every 6 months over a 24-month follow-up period.
The study represents a real-world clinical setting at an outpatient clinic of a university hospital of psychiatry. The primary outcome measure was repeat suicide attempts during the 24-month follow-up period. Secondary outcome measures were suicidal ideation, depression, and health-care utilization. Furthermore, effects of prior suicide attempts, depression at baseline, diagnosis, and therapeutic alliance on outcome were investigated.
During the 24-month follow-up period, five repeat suicide attempts were recorded in the ASSIP group and 41 attempts in the control group. The rates of participants reattempting suicide at least once were 8.3% (n = 5) in the ASSIP group and 26.7% (n = 16) in the control group. ASSIP was associated with an approximately 80% reduced risk of participants making at least one repeat suicide attempt (Wald χ2(1) = 13.1, 95% CI 12.4–13.7, p < 0.001). ASSIP participants spent 72% fewer days in the hospital during follow-up (ASSIP: 29 d; control group: 105 d; W = 94.5, p = 0.038). Higher scores of patient-rated therapeutic alliance in the ASSIP group were associated with a lower rate of repeat suicide attempts. Prior suicide attempts, depression, and a diagnosis of personality disorder at baseline did not significantly affect outcome. Participants with a diagnosis of borderline personality disorder (n = 20) had more previous suicide attempts and a higher number of reattempts.
Key study limitations were missing data and dropout rates. Although both were generally low, they increased during follow-up. At 24 months, the group difference in dropout rate was significant: ASSIP, 7% (n = 4); control, 22% (n = 13). A further limitation is that we do not have detailed information of the co-active follow-up treatment apart from participant self-reports every 6 months on the setting and the duration of the co-active treatment.
Conclusions
ASSIP, a manual-based brief therapy for patients who have recently attempted suicide, administered in addition to the usual clinical treatment, was efficacious in reducing suicidal behavior in a real-world clinical setting. ASSIP fulfills the need for an easy-to-administer low-cost intervention. Large pragmatic trials will be needed to conclusively establish the efficacy of ASSIP and replicate our findings in other clinical settings.
Trial registration
ClinicalTrials.gov NCT02505373
In a randomized controlled trial, Konrad Michel and colleagues test the efficacy of a manual-based therapy intended to prevent repeat suicide attempts.
Editors' Summary
Background
Suicide is a serious public health problem. Over 800,000 people worldwide die by suicide every year. In the US, one suicide death occurs approximately every 12 minutes. While the causes of suicide are complex, the goals of suicide prevention are simple—reduce factors that increase risk, and increase factors that promote resilience or coping. Factors that increase suicide risk include family history of suicide, family history of child abuse, previous suicide attempts, history of mental disorders (particularly depression), history of alcohol and substance abuse, and access to lethal means. Factors that are protective against suicide include effective clinical care for mental, physical, and substance abuse disorders; connectedness to family and community; and problem solving and conflict resolution skills. A previous suicide attempt is the main risk factor for repeat attempts and for completed suicide. Fifteen to 25 percent of people who attempt suicide make another attempt, and five to ten percent eventually die by suicide.
Why Was This Study Done?
A number of suicide prevention treatments have been developed. Most of them involve therapy sessions and personal follow-up. While some of them have been shown to work in clinical trials—often with participants who have made a previous suicide attempt—few interventions have proven to be effective consistently in different settings. For this study, the researchers developed a treatment called Attempted Suicide Short Intervention Program (ASSIP) composed of three therapy sessions shortly after the suicide attempt and follow-up over two years with personalized mailed letters. They wanted the therapy part to be short, in order to provide a treatment that would allow a psychiatric service to cope with the large number of patients seen in the emergency department after a suicide attempt. The therapeutic elements of the treatment emphasized building an early therapeutic alliance, which would then serve as a basis (“anchoring”) for long-term outreach contact through regular letters. The therapy sessions and letters follow a detailed script, which the researchers developed into a manual that includes a step-by-step description of the highly structured treatment, checklists, handouts, and standardized letters for use by health professionals in various clinical settings. This study was done to test whether ASSIP can reduce suicidal behavior in addition to routine treatment.
What Did the Researchers Do and Find?
The researchers carried out a randomized clinical trial testing ASSIP in people who had attempted suicide (the majority by intentional overdosing) and been admitted to the emergency department of the Bern University General Hospital in Switzerland. Participants were randomly assigned to two groups. The treatment group received ASSIP in addition to treatment as usual (inpatient, day patient, and outpatient care as deemed appropriate by the hospital clinicians); the control group received a single structured assessment interview plus treatment as usual. The study objective was to evaluate—with follow-up questionnaires and health-care data—whether ASSIP can reduce the rate of repeated suicide attempt in the 24 months after a suicide attempt. The researchers also compared suicidal ideation (i.e., whether and how often participants had suicidal thoughts), levels of depression, and how often people were hospitalized between the two groups.
A total of 120 patients who had recently attempted suicide were randomly allocated to treatment as usual or treatment as usual plus ASSIP. The 60 ASSIP participants received three therapy sessions followed by regular contact over 24 months. During the first therapy session, the patient was prompted to tell the story of how he or she had reached the point of attempting suicide. Narrative interviewing is a key element of ASSIP’s patient-centered collaborative approach. The first session was videotaped, and parts were watched and discussed by patient and therapist during the second session, to recreate the experience of psychological pain and analyze how stress developed into suicidal action. During the final session, therapist and patient developed a list of long-term goals, warning signs, and safety strategies. These were printed and given to the patient in a credit-card-sized folded leaflet along with a list of telephone help numbers. Patients were told to carry both items at all times and to use them in the event of an emotional crisis. Over the subsequent two years, patients received six letters from their therapist reminding them of the risk of future suicidal crises and the importance of the collaboratively developed safety strategies.
During the 24 months of follow-up, one death by suicide occurred in each group, five repeat suicide attempts were recorded in the ASSIP group, and 41 repeat suicide attempts were recorded in the control group. ASSIP was associated with an approximately 80% reduced risk of repeat suicide attempt. In addition, ASSIP participants spent 72% fewer days in the hospital during follow-up. There was no difference in patient-reported suicidal ideation or in levels of depression.
What Do these Findings Mean?
The results show that ASSIP, administered in addition to the usual clinical treatment, was able to reduce suicidal behavior over 24 months in patients who had recently attempted suicide. The addition of ASSIP to usual treatment directly or its effect on repeat attempts might also reduce health care costs. The absence of effects on suicidal thoughts and depression is consistent with ASSIP’s objective to help people cope with crises as opposed to eliminating them. The study’s findings in a real-world clinical setting (a university hospital in the Swiss capital) are promising. They justify further testing in large clinical trials and diverse settings to answer conclusively whether and where ASSIP can reduce repeat suicide attempts, prevent deaths from suicide, and reduce health-care costs.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001968.
National Action Alliance for Suicide Prevention has information on research prioritization for suicide prevention
There is also a supplemental issue of the American Journal of Preventive Medicine focused on research about suicide prevention
More information about suicide is available from ZEROSuicide http://zerosuicide.sprc.org/ and the Suicide Prevention Resource Center http://www.sprc.org/
The US Centers for Disease Control and Prevention has information on suicide
The UK Mental Health Foundation also has information on suicide
The page “About Suicide” from the American Foundation for Suicide Prevention has information on warning signs, risk factors, and statistics
The US National Suicide Prevention Lifeline offers help and information
The Bern University Hospital of Psychiatry has a page describing ASSIP for patients (in German)
The Finnish Association for Mental Health has a page describing ASSIP (in English)
doi:10.1371/journal.pmed.1001968
PMCID: PMC4773217  PMID: 26930055
12.  Te Ira Tangata: A Zelen randomised controlled trial of a treatment package including problem solving therapy compared to treatment as usual in Maori who present to hospital after self harm 
Trials  2011;12:117.
Background
Maori, the indigenous people of New Zealand, who present to hospital after intentionally harming themselves, do so at a higher rate than non-Maori. There have been no previous treatment trials in Maori who self harm, and previous reviews of interventions in other populations have been inconclusive because existing trials have been underpowered and conducted in unrepresentative populations. These reviews have, however, indicated that problem solving therapy and sending regular postcards after the self harm attempt may be effective treatments. There is also a small literature on sense of belonging in self harm and on the importance of culture. This protocol describes a pragmatic trial, using a novel design, of a package of measures comprising problem solving therapy, postcards, patient support, cultural assessment, improved access to primary care and a risk management strategy in Maori who present to hospital after self harm.
Methods
We propose to use a double consent Zelen design where participants are randomised prior to giving consent to enrol a representative cohort of patients. The main outcome will be the number of Maori scoring below nine on the Beck Hopelessness Scale. Secondary outcomes will be hospital repetition at one year; self reported self harm; anxiety; depression; quality of life; social function; and hospital use at three months and one year.
Discussion
A strength of the study is that it is a pragmatic trial which aims to recruit Maori using a Maori clinical team and protocol, and it does not exclude people whose first language is not English. A potential limitation is the complexity of the analysis: any effect may be underestimated if a large number of people randomised to problem solving therapy refuse consent, as they will effectively cross over to the treatment as usual group. This study is the first randomised controlled trial to explicitly use cultural assessment and management.
Trial registration
Australia and New Zealand Clinical Trials Register (ANZCTR): ACTRN12609000952246
doi:10.1186/1745-6215-12-117
PMCID: PMC3103449  PMID: 21569300
13.  Meta-analyses of Adverse Effects Data Derived from Randomised Controlled Trials as Compared to Observational Studies: Methodological Overview 
PLoS Medicine  2011;8(5):e1001026.
Su Golder and colleagues carry out an overview of meta-analyses to assess whether estimates of the risk of harm outcomes differ between randomized trials and observational studies. They find that, on average, there is no difference in the estimates of risk between overviews of observational studies and overviews of randomized trials.
Background
There is considerable debate as to the relative merits of using randomised controlled trial (RCT) data as opposed to observational data in systematic reviews of adverse effects. This meta-analysis of meta-analyses aimed to assess the level of agreement or disagreement in the estimates of harm derived from meta-analysis of RCTs as compared to meta-analysis of observational studies.
Methods and Findings
Searches were carried out in ten databases in addition to reference checking, contacting experts, citation searches, and hand-searching key journals, conference proceedings, and Web sites. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from RCTs could be directly compared, using the ratio of odds ratios, with the pooled estimate for the same adverse effect arising from observational studies. Nineteen studies, yielding 58 meta-analyses, were identified for inclusion. The pooled ratio of odds ratios of RCTs compared to observational studies was estimated to be 1.03 (95% confidence interval 0.93–1.15). There was less discrepancy with larger studies. The symmetric funnel plot suggests that there is no consistent difference between risk estimates from meta-analysis of RCT data and those from meta-analysis of observational studies. In almost all instances, the estimates of harm from meta-analyses of the different study designs had 95% confidence intervals that overlapped (54/58, 93%). In terms of statistical significance, in nearly two-thirds (37/58, 64%), the results agreed (both studies showing a significant increase or significant decrease or both showing no significant difference). In only one meta-analysis about one adverse effect was there opposing statistical significance.
Conclusions
Empirical evidence from this overview indicates that there is no difference on average in the risk estimate of adverse effects of an intervention derived from meta-analyses of RCTs and meta-analyses of observational studies. This suggests that systematic reviews of adverse effects should not be restricted to specific study types.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Whenever patients consult a doctor, they expect the treatments they receive to be effective and to have minimal adverse effects (side effects). To ensure that this is the case, all treatments now undergo exhaustive clinical research—carefully designed investigations that test new treatments and therapies in people. Clinical investigations fall into two main groups—randomized controlled trials (RCTs) and observational, or non-randomized, studies. In RCTs, groups of patients with a specific disease or condition are randomly assigned to receive the new treatment or a control treatment, and the outcomes (for example, improvements in health and the occurrence of specific adverse effects) of the two groups of patients are compared. Because the patients are randomly chosen, differences in outcomes between the two groups are likely to be treatment-related. In observational studies, patients who are receiving a specific treatment are enrolled and outcomes in this group are compared to those in a similar group of untreated patients. Because the patient groups are not randomly chosen, differences in outcomes between cases and controls may be the result of a hidden shared characteristic among the cases rather than treatment-related (so-called confounding variables).
Why Was This Study Done?
Although data from individual trials and studies are valuable, much more information about a potential new treatment can be obtained by systematically reviewing all the evidence and then doing a meta-analysis (so-called evidence-based medicine). A systematic review uses predefined criteria to identify all the research on a treatment; meta-analysis is a statistical method for combining the results of several studies to yield “pooled estimates” of the treatment effect (the efficacy of a treatment) and the risk of harm. Treatment effect estimates can differ between RCTs and observational studies, but what about adverse effect estimates? Can different study designs provide a consistent picture of the risk of harm, or are the results from different study designs so disparate that it would be meaningless to combine them in a single review? In this methodological overview, which comprises a systematic review and meta-analyses, the researchers assess the level of agreement in the estimates of harm derived from meta-analysis of RCTs with estimates derived from meta-analysis of observational studies.
What Did the Researchers Do and Find?
The researchers searched literature databases and reference lists, consulted experts, and hand-searched various other sources for studies in which the pooled estimate of an adverse effect from RCTs could be directly compared to the pooled estimate for the same adverse effect from observational studies. They identified 19 studies that together covered 58 separate adverse effects. In almost all instances, the estimates of harm obtained from meta-analyses of RCTs and observational studies had overlapping 95% confidence intervals. That is, in statistical terms, the estimates of harm were similar. Moreover, in nearly two-thirds of cases, there was agreement between RCTs and observational studies about whether a treatment caused a significant increase in adverse effects, a significant decrease, or no significant change (a significant change is one unlikely to have occurred by chance). Finally, the researchers used meta-analysis to calculate that the pooled ratio of the odds ratios (a statistical measurement of risk) of RCTs compared to observational studies was 1.03. This figure suggests that there was no consistent difference between risk estimates obtained from meta-analysis of RCT data and those obtained from meta-analysis of observational study data.
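For readers who want to see how a ratio of odds ratios is obtained, the sketch below (Python; the example estimates are invented, not taken from the overview) compares a pooled OR from RCTs with one from observational studies on the log scale, recovering standard errors from the reported 95% confidence intervals.

```python
import math

def ratio_of_odds_ratios(or_rct, ci_rct, or_obs, ci_obs):
    """Ratio of odds ratios (RCTs vs observational studies) with a 95% CI.

    Standard errors are recovered from the width of each 95% CI on the log scale.
    """
    se_rct = (math.log(ci_rct[1]) - math.log(ci_rct[0])) / (2 * 1.96)
    se_obs = (math.log(ci_obs[1]) - math.log(ci_obs[0])) / (2 * 1.96)
    log_ror = math.log(or_rct) - math.log(or_obs)
    se_ror = math.sqrt(se_rct ** 2 + se_obs ** 2)
    return math.exp(log_ror), (math.exp(log_ror - 1.96 * se_ror),
                               math.exp(log_ror + 1.96 * se_ror))

# Hypothetical adverse-effect estimates: OR 1.20 (0.90-1.60) from RCTs,
# OR 1.15 (0.95-1.40) from observational studies
print(ratio_of_odds_ratios(1.20, (0.90, 1.60), 1.15, (0.95, 1.40)))
```

A ratio near 1 with a confidence interval spanning 1, as in the overview's pooled result of 1.03 (0.93–1.15), indicates no consistent difference between the two study designs.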
What Do These Findings Mean?
The findings of this methodological overview suggest that there is no difference on average in the risk estimate of an intervention's adverse effects obtained from meta-analyses of RCTs and from meta-analyses of observational studies. Although limited by some aspects of its design, this overview has several important implications for the conduct of systematic reviews of adverse effects. In particular, it suggests that, rather than limiting systematic reviews to certain study designs, it might be better to evaluate a broad range of studies. In this way, it might be possible to build a more complete, more generalizable picture of potential harms associated with an intervention, without any loss of validity, than by evaluating a single type of study. Such a picture, in combination with estimates of treatment effects also obtained from systematic reviews and meta-analyses, would help clinicians decide the best treatment for their patients.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001026.
The US National Institutes of Health provide information on clinical research; the UK National Health Service Choices Web site also has a page on clinical trials and medical research
The Cochrane Collaboration produces and disseminates systematic reviews of health-care interventions
Medline Plus provides links to further information about clinical trials (in English and Spanish)
doi:10.1371/journal.pmed.1001026
PMCID: PMC3086872  PMID: 21559325
14.  Implantable Cardioverter Defibrillators. Prophylactic Use 
Executive Summary
Objective
The use of implantable cardiac defibrillators (ICDs) to prevent sudden cardiac death (SCD) in patients resuscitated from cardiac arrest or documented dangerous ventricular arrhythmias (secondary prevention of SCD) is an insured service. In 2003 (before the establishment of the Ontario Health Technology Advisory Committee), the Medical Advisory Secretariat conducted a health technology policy assessment on the prophylactic use (primary prevention of SCD) of ICDs for patients at high risk of SCD. The Medical Advisory Secretariat concluded that ICDs are effective for the primary prevention of SCD. Moreover, it found that a more clearly defined target population at risk for SCD that would be likely to benefit from ICDs is needed, given that the number needed to treat (NNT) from recent studies is 13 to 18, and given that the per-unit cost of ICDs is $32,000, which means that the projected cost to Ontario is $770 million (Cdn).
Accordingly, as part of an annual review and publication of more recent articles, the Medical Advisory Secretariat updated its health technology policy assessment of ICDs.
Clinical Need
Sudden cardiac death is caused by the sudden onset of fatal arrhythmias, or abnormal heart rhythms: ventricular tachycardia (VT), a rhythm abnormality in which the ventricles cause the heart to beat too fast, and ventricular fibrillation (VF), an abnormal, rapid and erratic heart rhythm. About 80% of fatal arrhythmias are associated with ischemic heart disease, which is caused by insufficient blood flow to the heart.
Management of VT and VF with antiarrhythmic drugs is not very effective; for this reason, nonpharmacological treatments have been explored. One such treatment is the ICD.
The Technology
An ICD is a battery-powered device that, once implanted, monitors heart rhythm and can deliver an electric shock to restore normal rhythm when potentially fatal arrhythmias are detected. The use of ICDs to prevent SCD in patients resuscitated from cardiac arrest or documented dangerous ventricular arrhythmias (secondary prevention) is an insured service in Ontario.
Primary prevention of SCD involves identification of and preventive therapy for patients who are at high risk for SCD. Most of the studies in the literature that have examined the prevention of fatal ventricular arrhythmias have focused on patients with ischemic heart disease, in particular, those with heart failure (HF), which has been shown to increase the risk of SCD. The risk of SCD in patients with HF is stratified by left ventricular ejection fraction (LVEF); most studies have focused on patients with an LVEF under 0.35 or 0.30. While most studies have found ICDs to reduce significantly the risk for SCD in patients with an LVEF less than 0.35, a more recent study (Sudden Cardiac Death in Heart Failure Trial [SCD-HeFT]) reported that patients with HF with nonischemic heart disease could also benefit from this technology. Based on the generalization of the SCD-HeFT study, the Centers for Medicare and Medicaid in the United States recently announced that it would allocate $10 billion (US) annually toward the primary prevention of SCD for patients with ischemic and nonischemic heart disease and an LVEF under 0.35.
Review Strategy
The aim of this literature review was to assess the effectiveness, safety, and cost effectiveness of ICDs for the primary prevention of SCD.
The standard search strategy used by the Medical Advisory Secretariat was used. This included a search of all international health technology assessments as well as a search of the medical literature from January 2003–May 2005.
A modification of the GRADE approach (1) was used to make judgments about the quality of evidence and strength of recommendations systematically and explicitly. GRADE provides a framework for structured reflection and can help to ensure that appropriate judgments are made. GRADE takes into account a study’s design, quality, consistency, and directness in judging the quality of evidence for each outcome. The balance between benefits and harms, quality of evidence, applicability, and the certainty of the baseline risks are considered in judgments about the strength of recommendations.
Summary of Findings
Overall, ICDs are effective for the primary prevention of SCD. Three studies – the Multicentre Automatic Defibrillator Implantation Trial I (MADIT I), the Multicentre Automatic Defibrillator Implantation Trial II (MADIT II), and SCD-HeFT – showed there was a statistically significant decrease in total mortality for patients who prophylactically received an ICD compared with those who received conventional therapy (Table 1).
Results of Key Studies on the Use of Implantable Cardioverter Defibrillators for the Primary Prevention of Sudden Cardiac Death – All-Cause Mortality
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
EP indicates electrophysiology; ICD, implantable cardioverter defibrillator; NNT, number needed to treat; NSVT, nonsustained ventricular tachycardia. The NNT will appear higher if follow-up is short. For ICDs, the absolute benefit increases over time for at least a 5-year period; the NNT declines, often substantially, in studies with a longer follow-up. When the NNT are equalized for a similar period as the SCD-HeFT duration (5 years), the NNT for MADIT-I is 2.2; for MADIT-II, it is 6.3.
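The footnote's point about follow-up duration can be made concrete with a small worked example: the NNT is the reciprocal of the absolute risk reduction, so as absolute benefit accumulates over longer follow-up the NNT falls. The arm-level risks below are hypothetical and chosen only so that their difference matches the 5.6% absolute mortality reduction quoted later in this summary for MADIT II.

```python
def nnt(risk_control, risk_treated):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (risk_control - risk_treated)

# Hypothetical arm risks differing by 5.6 percentage points (MADIT II's reported
# absolute mortality reduction over its own, shorter than 5-year, follow-up)
print(round(nnt(0.200, 0.144), 1))   # ~17.9

# The 5-year equalised NNT of 6.3 quoted in the table footnote implies an
# absolute reduction of roughly 1/6.3, i.e. about 16 percentage points
print(round(100 / 6.3, 1))
```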
GRADE Quality of the Evidence
Using the GRADE Working Group criteria, the quality of these 3 trials was examined (Table 2).
Quality refers to criteria such as the adequacy of allocation concealment, blinding, and follow-up.
Consistency refers to the similarity of estimates of effect across studies. If there is important unexplained inconsistency in the results, our confidence in the estimate of effect for that outcome decreases. Differences in the direction of effect, the size of the differences in effect, and the significance of the differences guide the decision about whether important inconsistency exists.
Directness refers to the extent to which the people, interventions, and outcome measures are similar to those of interest. For example, there may be uncertainty about the directness of the evidence if the people of interest are older, sicker, or have more comorbidity than those in the studies.
As stated by the GRADE Working Group, the following definitions were used to grade the quality of the evidence:
High: Further research is very unlikely to change our confidence in the estimate of effect.
Moderate: Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate.
Low: Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate.
Very low: Any estimate of effect is very uncertain.
Quality of Evidence – MADIT I, MADIT II, and SCD-HeFT*
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
The 3 trials had 3 different sets of eligibility criteria for implantation of an ICD for primary prevention of SCD.
Conclusions
Overall, there is evidence that ICDs are effective for the primary prevention of SCD. Three trials have found a statistically significant decrease in total mortality for patients who prophylactically received an ICD compared with those who received conventional therapy in their respective study populations.
As per the GRADE Working Group, recommendations consider 4 main factors:
The tradeoffs, taking into account the estimated size of the effect for the main outcome, the confidence limits around those estimates, and the relative value placed on the outcome;
The quality of the evidence (Table 2);
Translation of the evidence into practice in a specific setting, taking into consideration important factors that could be expected to modify the size of the expected effects, such as proximity to a hospital or availability of necessary expertise; and
Uncertainty about the baseline risk for the population of interest
The GRADE Working Group also recommends that incremental costs of health care alternatives should be considered explicitly with the expected health benefits and harms. Recommendations rely on judgments about the value of the incremental health benefits in relation to the incremental costs. The last column in Table 3 is the overall trade-off between benefits and harms and incorporates any risk or uncertainty.
For MADIT I, the overall GRADE and strength of the recommendation is “moderate” – the quality of the evidence is “moderate” (uncertainty due to methodological limitations in the study design), and risk/uncertainty in cost and budget impact was mitigated by the use of filters to help target the prevalent population at risk (Table 3).
For MADIT II, the overall GRADE and strength of the recommendation is “very weak” – the quality of the evidence is “weak” (uncertainty due to methodological limitations in the study design), but there is risk or uncertainty regarding the high prevalence, cost, and budget impact. It is not clear why screening for high-risk patients was dropped, given that in MADIT II the absolute reduction in mortality was small (5.6%) compared to MADIT I, which used electrophysiological screening (23%) (Table 3).
For SCD-HeFT, the overall GRADE and strength of the recommendation is “weak” – the study quality is “moderate,” but there is also risk/uncertainty due to a high NNT at 5 years (13 compared to the MADIT II NNT of 6 and MADIT I NNT of 2 at 5 years), high prevalent population (N = 23,700), and a high budget impact ($770 million). A filter (as demonstrated in MADIT I) is required to help target the prevalent population at risk and mitigate the risk or uncertainty relating to the high NNT, prevalence, and budget impact (Table 3).
The results of the most recent ICD trial (SCD-HeFT) are not generalizable to the prevalent population in Ontario (Table 3). Given that the current funding rate of an ICD is $32,500 (Cdn), the estimated budget impact for Ontario would be as high as $770 million (Cdn). The uncertainty around the cost estimate of treating the prevalent population with LVEF < 0.30 in Ontario, the lack of human resources to implement such a strategy and the high number of patients required to prevent one SCD (NNT = 13) calls for an alternative strategy that allows the appropriate uptake and diffusion of ICDs for primary prevention for patients at maximum risk for SCD within the SCD-HeFT population.
The uptake and diffusion of ICDs for primary prevention of SCD should therefore be based on risk stratification through the use of appropriate screen(s) that would identify patients at highest risk who could derive the most benefit from this technology.
Overall GRADE and Strength of Recommendation for the Use of Implantable Cardioverter Defibrillators for the Primary Prevention of Sudden Cardiac Death
MADIT I: Multicentre Automatic Defibrillator Implantation Trial I; MADIT II: Multicentre Automatic Defibrillator Implantation Trial II; SCD-HeFT: Sudden Cardiac Death in Heart Failure Trial.
NNT indicates number needed to treat. The NNT will appear higher if follow-up is short. For ICDs, the absolute benefit increases over time for at least a 5-year period; the NNT declines, often substantially, in studies with a longer follow-up. When the NNT are equalized for a similar period as the SCD-HeFT duration (5 years), the NNT for MADIT-I is 2.2; for MADIT-II, it is 6.3.
NSVT indicates nonsustained ventricular tachycardia; VT, ventricular tachycardia.
PMCID: PMC3382404  PMID: 23074465
15.  Schizophrenia 
BMJ Clinical Evidence  2012;2012:1007.
Introduction
The lifetime prevalence of schizophrenia is approximately 0.7% and incidence rates vary between 7.7 and 43.0 per 100,000; about 75% of people have relapses and continued disability, and one third fail to respond to standard treatment. Positive symptoms include auditory hallucinations, delusions, and thought disorder. Negative symptoms (demotivation, self-neglect, and reduced emotion) have not been consistently improved by any treatment.
Methods and outcomes
We conducted a systematic review and aimed to answer the following clinical questions: What are the effects of drug treatments for positive, negative, or cognitive symptoms of schizophrenia? What are the effects of drug treatments in people with schizophrenia who are resistant to standard antipsychotic drugs? What are the effects of interventions to improve adherence to antipsychotic medication in people with schizophrenia? We searched: Medline, Embase, The Cochrane Library, and other important databases up to May 2010 (Clinical Evidence reviews are updated periodically; please check our website for the most up-to-date version of this review). We included harms alerts from relevant organisations such as the US Food and Drug Administration (FDA) and the UK Medicines and Healthcare products Regulatory Agency (MHRA).
Results
We found 51 systematic reviews, RCTs, or observational studies that met our inclusion criteria. We performed a GRADE evaluation of the quality of evidence for interventions.
Conclusions
In this systematic review, we present information relating to the effectiveness and safety of the following interventions: amisulpride, chlorpromazine, clozapine, depot haloperidol decanoate, haloperidol, olanzapine, pimozide, quetiapine, risperidone, sulpiride, ziprasidone, zotepine, aripiprazole, sertindole, paliperidone, flupentixol, depot flupentixol decanoate, zuclopenthixol, depot zuclopenthixol decanoate, behavioural therapy, clozapine, compliance therapy, first-generation antipsychotic drugs in treatment-resistant people, multiple-session family interventions, psychoeducational interventions, and second-generation antipsychotic drugs in treatment-resistant people.
Key Points
The lifetime prevalence of schizophrenia is approximately 0.7% and incidence rates vary between 7.7 and 43.0 per 100,000; about 75% of people have relapses and continued disability, and one third fail to respond to standard treatment. Positive symptoms include auditory hallucinations, delusions, and thought disorder. Negative symptoms (anhedonia, asociality, flattening of affect, and demotivation) and cognitive dysfunction have not been consistently improved by any treatment.
Standard treatment of schizophrenia has been antipsychotic drugs, the first of which included chlorpromazine and haloperidol, but these so-called first-generation antipsychotics can all cause adverse effects such as extrapyramidal adverse effects, hyperprolactinaemia, and sedation. Attempts to address these adverse effects led to the development of second-generation antipsychotics.
The second-generation antipsychotics amisulpride, clozapine, olanzapine, and risperidone may be more effective at reducing positive symptoms compared with first-generation antipsychotic drugs, but may cause similar adverse effects, plus additional metabolic effects such as weight gain.
CAUTION: Clozapine has been associated with potentially fatal blood dyscrasias. Blood monitoring is essential, and it is recommended that its use be limited to people with treatment-resistant schizophrenia.
Pimozide, quetiapine, aripiprazole, sulpiride, ziprasidone, and zotepine seem to be as effective as standard antipsychotic drugs at improving positive symptoms. Again, these drugs cause similar adverse effects to first-generation antipsychotics and other second-generation antipsychotics.
CAUTION: Pimozide has been associated with sudden cardiac death at doses above 20 mg daily.
We found very little evidence regarding depot injections of haloperidol decanoate, flupentixol decanoate, or zuclopenthixol decanoate; thus, we don’t know if they are more effective than oral treatments at improving symptoms.
In people who are resistant to standard antipsychotic drugs, clozapine may improve symptoms compared with first-generation antipsychotic agents, but this benefit must be balanced against the likelihood of adverse effects. We found limited evidence on other individual first- or second-generation antipsychotic drugs other than clozapine in people with treatment-resistant schizophrenia. In people with treatment-resistant schizophrenia, we don't know how second-generation agents other than clozapine compare with each other or first-generation antipsychotic agents, or how clozapine compares with other second-generation antipsychotic agents, because of a lack of evidence.
We don't know whether behavioural interventions, compliance therapy, psychoeducational interventions, or family interventions improve adherence to antipsychotic medication compared with usual care because of a paucity of good-quality evidence.
It is clear that some included studies in this review have serious failings and that the evidence base for the efficacy of antipsychotic medication and other interventions is surprisingly weak. For example, although in many trials haloperidol has been used as the standard comparator, the clinical trial evidence for haloperidol is less impressive than might be expected.
By their very nature, systematic reviews and RCTs provide average indices of probable efficacy in groups of selected individuals. Although some RCTs limit inclusion criteria to a single category of diagnosis, many studies include individuals with different diagnoses such as schizoaffective disorder. In all RCTs, even in those recruiting people with a single DSM or ICD-10 diagnosis, there is considerable clinical heterogeneity.
Genome-wide association studies of large samples of people with schizophrenia demonstrate that this clinical heterogeneity reflects, in turn, complex biological heterogeneity. For example, such studies suggest that around 1000 genetic variants of low penetrance, other individually rare genetic variants of higher penetrance, epistasis, and epigenetic mechanisms, probably acting in combination with the biological and psychological effects of environmental factors, are responsible for the resultant complex clinical phenotype. A more stratified approach to clinical trials would help to identify those subgroups that seem to be the best responders to a particular intervention.
To date, however, there is little to suggest that stratification on the basis of clinical characteristics successfully helps to predict which drugs work best for which people. There is a pressing need for the development of biomarkers with clinical utility for mental health problems. Such measures could help to stratify clinical populations or provide better markers of efficacy in clinical trials, and would complement the current use of clinical outcome scales. Clinicians are also well aware that many people treated with antipsychotic medication develop significant adverse effects such as extrapyramidal symptoms or weight gain. Again, our ability to identify which people will develop which adverse effects is poorly developed, and might be assisted by using biomarkers to stratify populations.
The results of this review tend to indicate that as far as antipsychotic medication goes, current drugs are of limited efficacy in some people, and that most drugs cause adverse effects in most people. Although this is a rather downbeat conclusion, it should not be too surprising, given clinical experience and our knowledge of the pharmacology of the available antipsychotic medication. All currently available antipsychotic medications have the same putative mechanism of action — namely, dopaminergic antagonism with varying degrees of antagonism at other receptor sites. More efficacious antipsychotic medication awaits a better understanding of the biological pathogenesis of these conditions so that rational treatments can be developed.
PMCID: PMC3385413  PMID: 23870705
16.  Heterogeneity of primary outcome measures used in clinical trials of treatments for intermediate, posterior, and panuveitis 
Background
Uveitis describes a heterogeneous group of conditions characterized by intraocular inflammation. Since most of the sight-threatening forms of uveitis are individually rare, there has been an increasing tendency for clinical trials to group distinct uveitis syndromes together despite clear variations in phenotype which may reflect real aetiological and pathogenetic differences. Furthermore this grouping of distinct syndromes, and the range of manifestations within each uveitis syndrome, leads to a wide range of possible outcome measures. In this study we wished to review the degree of consensus or otherwise in the choice of primary outcome measures for registered clinical trials related to uveitis.
Methods
Systematic review of data provided in clinical trial registries describing clinical trials dealing with medical treatment of intermediate, posterior, or panuveitis through 01 October 2013. We reviewed 15 on-line clinical trial registries approved by the International Committee of Medical Journal Editors. We identified all that met the following inclusion criteria: prospective, interventional design; target populations with intermediate, posterior or panuveitis; and one or more pre-specified outcome measures that were related to uveitis. Primary outcome measures were classified in terms of type (efficacy or safety or both; single, composite, or multiple); dimension (disease activity, disease damage, measured or patient-reported visual function); and domain (the specific study variable being measured).
Results
Of 195 registered uveitis studies, we identified 104 clinical trials that met inclusion criteria. There were 14 different domains used as primary outcome measures. Among clinical trials that utilized primary outcome measures of treatment efficacy (n = 94), 70 (74 %) used a measure of disease activity (vitreous haze in 40/70 [57 %]; macular oedema in 19/70 [27 %]) and 49 (70 %) used a measure of visual function (visual acuity in all cases). Multiple primary outcome measures were used in 23 (22 %) of 104 clinical trials. With regard to quality, in 12 (12 %) of 104 clinical trials, outcome measures were poorly defined. No clinical trial utilized a patient-reported study variable as primary outcome measure.
Conclusions
This systematic review highlights the heterogeneity of outcome measures used in recent clinical trials for intermediate, posterior, and panuveitis. Current designs prioritize clinician-observed measures of disease activity and measurement of visual function as outcome measures. This apparent lack of consensus regarding outcome measures for the study of uveitis is a concern, as it prevents comparison of studies and meta-analyses, and weakens the evidence available to stake-holders, from patients to clinicians to regulators, regarding the efficacy and value of a given treatment.
doi:10.1186/s13023-015-0318-6
PMCID: PMC4545540  PMID: 26286265
Uveitis; Clinical trials; Outcome measures; Endpoints; Composite endpoints
17.  Getting better at chronic care in remote communities: study protocol for a pragmatic cluster randomised controlled trial of community based management 
BMC Public Health  2012;12:1017.
Background
Prevalence and incidence of diabetes and other common comorbid conditions (hypertension, coronary heart disease, renal disease and chronic lung disease) are extremely high among Indigenous Australians. Recent measures to improve quality of preventive care in Indigenous community settings, while apparently successful at increasing screening and routine check-up rates, have shown only modest or little improvements in appropriate care such as the introduction of insulin and other scaled-up drug regimens in line with evidence-based guidelines, together with support for risk factor reduction. A new strategy is required to ensure high quality integrated family-centred care is available locally, with continuity and cultural safety, by community-based care coordinators with appropriate system supports.
Methods/design
The trial design is an open, parallel, cluster randomised controlled trial. The objective of this pragmatic trial is to test the effectiveness of a model of health service delivery that facilitates integrated community-based, intensive chronic condition management, compared with usual care, in rural and remote Indigenous primary health care services in north Queensland. Participants are Indigenous adults (aged 18–65 years) with poorly controlled diabetes (HbA1c ≥ 8.5) and at least one other chronic condition. The intervention is to employ an Indigenous Health Worker to case manage the care of a maximum caseload of 30 participants. The Indigenous Health Workers receive intensive clinical training initially, and throughout the study, to ensure they are competent to coordinate care for people with chronic conditions. The Indigenous Health Workers, supported by the local primary health care (PHC) team and an Indigenous Clinical Support Team, will manage care, including coordinating access to multidisciplinary team care based on best practice standards. Allocation by cluster to the intervention and control groups is by simple randomisation after participant enrolment. Participants in the control group will receive usual care, and will be wait-listed to receive a revised model of the intervention informed by the data analysis. The primary outcome is reduction in HbA1c measured at 18 months. Implementation fidelity will be monitored and a qualitative investigation (methods to be determined) will aim to identify elements of the model which may influence health outcomes for Indigenous people with chronic conditions.
Discussion
This pragmatic trial will test a culturally-sound family-centred model of care with supported case management by IHWs to improve outcomes for people with complex chronic care needs. This trial is now in the intervention phase.
Trial registration
Australian New Zealand Clinical Trials Registry ACTRN12610000812099
doi:10.1186/1471-2458-12-1017
PMCID: PMC3519682  PMID: 23170964
Aboriginal and Torres Strait Islander; Diabetes; Indigenous Health Worker; Partnerships; HbA1c control
18.  Assessing and reporting heterogeneity in treatment effects in clinical trials: a proposal 
Trials  2010;11:85.
Mounting evidence suggests that there is frequently considerable variation in the risk of the outcome of interest in clinical trial populations. These differences in risk will often cause clinically important heterogeneity in treatment effects (HTE) across the trial population, such that the balance between treatment risks and benefits may differ substantially between large identifiable patient subgroups; the "average" benefit observed in the summary result may even be non-representative of the treatment effect for a typical patient in the trial. Conventional subgroup analyses, which examine whether specific patient characteristics modify the effects of treatment, are usually unable to detect even large variations in treatment benefit (and harm) across risk groups because they do not account for the fact that patients have multiple characteristics simultaneously that affect the likelihood of treatment benefit. Based upon recent evidence on optimal statistical approaches to assessing HTE, we propose a framework that prioritizes the analysis and reporting of multivariate risk-based HTE and suggests that other subgroup analyses should be explicitly labeled either as primary subgroup analyses (well-motivated by prior evidence and intended to produce clinically actionable results) or secondary (exploratory) subgroup analyses (performed to inform future research). A standardized and transparent approach to HTE assessment and reporting could substantially improve clinical trial utility and interpretability.
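To make the proposed multivariate risk-based analysis concrete, here is a minimal simulation sketch in Python (numpy and scikit-learn). Every number in it, including the covariates, the 30% relative benefit, and the 1% uniform absolute harm, is invented for illustration and is not taken from the article; the point is only to show the mechanics of stratifying a trial population by predicted baseline risk and estimating the absolute treatment effect within each stratum.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Simulated trial: two continuous risk factors drive untreated outcome risk;
# treatment confers a fixed 30% relative risk reduction plus a fixed 1%
# absolute harm (all values hypothetical).
x = rng.normal(size=(n, 2))
treated = rng.binomial(1, 0.5, size=n)
p_untreated = 1 / (1 + np.exp(-(-2.5 + 0.8 * x[:, 0] + 0.6 * x[:, 1])))
p = np.where(treated == 1, 0.7 * p_untreated + 0.01, p_untreated)
y = rng.binomial(1, p)

# Stand-in for an externally developed risk model: fit on the control arm only.
risk_model = LogisticRegression().fit(x[treated == 0], y[treated == 0])
predicted_risk = risk_model.predict_proba(x)[:, 1]

# Risk-stratified analysis: absolute risk reduction within predicted-risk quartiles.
cuts = np.quantile(predicted_risk, [0.25, 0.5, 0.75])
quartile = np.searchsorted(cuts, predicted_risk)
for q in range(4):
    in_q = quartile == q
    arr = y[in_q & (treated == 0)].mean() - y[in_q & (treated == 1)].mean()
    print(f"predicted-risk quartile {q + 1}: absolute risk reduction = {arr:.3f}")
```

In such a simulation the lowest-risk stratum typically shows little or no net benefit, because the uniform harm offsets the small absolute gain, while the highest-risk stratum shows a clear benefit; this is exactly the pattern that one-at-a-time subgroup analyses tend to miss.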
doi:10.1186/1745-6215-11-85
PMCID: PMC2928211  PMID: 20704705
19.  Effect of Flexible Sigmoidoscopy-Based Screening on Incidence and Mortality of Colorectal Cancer: A Systematic Review and Meta-Analysis of Randomized Controlled Trials 
PLoS Medicine  2012;9(12):e1001352.
A systematic review and meta-analysis of randomized trials conducted by B. Joseph Elmunzer and colleagues reports that flexible sigmoidoscopy-based screening reduces the incidence of colorectal cancer in average-risk patients, as compared to usual care or no screening.
Background
Randomized controlled trials (RCTs) have yielded varying estimates of the benefit of flexible sigmoidoscopy (FS) screening for colorectal cancer (CRC). Our objective was to more precisely estimate the effect of FS-based screening on the incidence and mortality of CRC by performing a meta-analysis of published RCTs.
Methods and Findings
Medline and Embase databases were searched for eligible articles published between 1966 and 28 May 2012. After screening 3,319 citations and 29 potentially relevant articles, two reviewers identified five RCTs evaluating the effect of FS screening on the incidence and mortality of CRC. The reviewers independently extracted relevant data; discrepancies were resolved by consensus. The quality of included studies was assessed using criteria set out by the Evidence-Based Gastroenterology Steering Group. Random effects meta-analysis was performed.
The five RCTs meeting eligibility criteria were determined to be of high methodologic quality and enrolled 416,159 total subjects. Four European studies compared FS to no screening and one study from the United States compared FS to usual care. By intention to treat analysis, FS-based screening was associated with an 18% relative risk reduction in the incidence of CRC (relative risk [RR] 0.82, 95% CI 0.73–0.91, p<0.001, number needed to screen [NNS] to prevent one case of CRC = 361), a 33% reduction in the incidence of left-sided CRC (RR 0.67, 95% CI 0.59–0.76, p<0.001, NNS = 332), and a 28% reduction in the mortality of CRC (RR 0.72, 95% CI 0.65–0.80, p<0.001, NNS = 850). The efficacy estimate, the amount of benefit for those who actually adhered to the recommended treatment, suggested that FS screening reduced CRC incidence by 32% (p<0.001), and CRC-related mortality by 50% (p<0.001).
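The numbers needed to screen reported above follow directly from the absolute risk reduction: NNS = 1/ARR, or equivalently 1/(baseline risk × relative risk reduction). The check below (Python) uses the reported 18% relative reduction together with a hypothetical control-arm cumulative CRC incidence of about 1.5% (an assumed figure, not one given in the article) and reproduces an NNS close to the reported 361.

```python
def nns(baseline_risk, relative_risk_reduction):
    """Number needed to screen = 1 / absolute risk reduction."""
    return 1 / (baseline_risk * relative_risk_reduction)

# 18% relative risk reduction applied to an assumed 1.54% cumulative incidence
print(round(nns(0.0154, 0.18)))   # ~361
```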
Limitations of this meta-analysis include heterogeneity in the design of the included trials, absence of studies from Africa, Asia, or South America, and lack of studies comparing FS with colonoscopy or stool-based testing.
Conclusions
This meta-analysis of randomized controlled trials demonstrates that FS-based screening significantly reduces the incidence and mortality of colorectal cancer in average-risk patients.
Please see later in the article for the Editors' Summary
Editor's Summary
Background
Colorectal cancer (CRC) is the second leading cause of cancer-related death in the United States. Regular CRC screening has been shown to reduce the risk of dying from CRC by 16%, and screening can identify early stage cancers in otherwise healthy people, which allows for early treatment and management of the disease. Screening for colorectal cancer is frequently performed using flexible sigmoidoscopy (FS), in which a thin, flexible tube with a tiny camera and light on the end allows a doctor to look at the inside wall of the bowel and remove any small growths or polyps. Although screening may detect early cancers, its life-saving and health benefits are uncertain because a detected polyp may not necessarily progress, and this could lead to anxiety and unnecessary interventions and treatments amongst those screened. Randomized controlled trials (RCTs) are needed to determine the risks involved in cancer screening; however, the guidelines that recommend FS-based screening do not rely upon RCT data. Recently, the results of four large-scale RCTs evaluating FS screening for CRC have been published. The conflicting results of these studies with respect to the incidence and mortality of CRC have called into question the effectiveness of endoscopic screening.
Why Was This Study Done?
The results of RCTs measuring the risks and outcomes of CRC screening have shown varying estimates of the benefits of using FS screening. If better estimates of the risks and benefits of FS screening are developed, then the current CRC screening guidelines may be updated to reflect this new information. In this study, the authors show the results of a meta-analysis of published RCTs, which more precisely estimates the effects of FS-based screening on the incidence and mortality of colorectal cancer.
What Did the Researchers Do and Find?
The researchers used the Medline and Embase databases to find relevant studies from 1966 to May 28, 2012. After screening 3,319 citations and 29 potentially relevant articles, five RCTs of high methodologic quality and 416,159 total subjects evaluating the effect of FS screening on the incidence and mortality of CRC were identified. The data were extracted and random effects meta-analysis was performed. The meta-analysis revealed that FS-based screening was associated with an 18% relative risk reduction in the incidence of CRC (0.82, 95% CI 0.73–0.91, p<0.001, number needed to screen (NNS) to prevent one case of CRC = 361), a 33% reduction in the incidence of left-sided CRC (RR 0.67, 95% CI 0.59–0.76, p<0.001, NNS = 332), and a 28% reduction in the mortality of CRC (RR 0.72, 95% CI 0.65–0.80, p<0.001, NNS = 850). The amount of benefit for those who adhered to the recommended treatment suggested that FS screening reduced CRC incidence by 32% (p<0.001), and CRC-related mortality by 50% (p<0.001).
What Do These Findings Mean?
This meta-analysis of RCTs evaluating the effect of FS on CRC incidence and mortality demonstrates that an FS-based screening strategy is very effective in reducing the incidence and mortality of CRC. The current recommendations for endoscopic screening are based on observational studies, which may not accurately reflect the effect of FS-based screening on the incidence and mortality of CRC. Here, the authors performed a systematic review and meta-analysis of five recent RCTs to better estimate the true effect of FS-based screening on CRC incidence and mortality. Thus, the results of this meta-analysis may inform health policy and directly affect patients and clinicians.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001352.
Cancer Research UK provides comprehensive information about screening for colorectal cancers, as does the UK National Screening Committee
PubMed Health has general information about colon cancer
The National Cancer Institute also has comprehensive resources on colorectal cancer and treatment
The Mayo Clinic provides an overview of all aspects of colon cancer
doi:10.1371/journal.pmed.1001352
PMCID: PMC3514315  PMID: 23226108
20.  Impact of Xpert MTB/RIF for TB Diagnosis in a Primary Care Clinic with High TB and HIV Prevalence in South Africa: A Pragmatic Randomised Trial 
PLoS Medicine  2014;11(11):e1001760.
Helen Cox and colleagues investigate the impact of Xpert MTB/RIF for diagnosing patients with presumptive tuberculosis in a large primary care clinic in Khayelitsha, Cape Town.
Please see later in the article for the Editors' Summary
Background
Xpert MTB/RIF is approved for use in tuberculosis (TB) and rifampicin-resistance diagnosis. However, data are limited on the impact of Xpert under routine conditions in settings with high TB burden.
Methods and Findings
A pragmatic prospective cluster-randomised trial of Xpert for all individuals with presumptive (symptomatic) TB compared to the routine diagnostic algorithm of sputum microscopy and limited use of culture was conducted in a large TB/HIV primary care clinic. The primary outcome was the proportion of bacteriologically confirmed TB cases not initiating TB treatment by 3 mo after presentation. Secondary outcomes included time to TB treatment and mortality. Unblinded randomisation occurred on a weekly basis. Xpert and smear microscopy were performed on site. Analysis was both by intention to treat (ITT) and per protocol.
Between 7 September 2010 and 28 October 2011, 1,985 participants were assigned to the Xpert (n = 982) and routine (n = 1,003) diagnostic algorithms (ITT analysis); 882 received Xpert and 1,063 routine (per protocol analysis). 13% (32/257) of individuals with bacteriologically confirmed TB (smear, culture, or Xpert) did not initiate treatment by 3 mo after presentation in the Xpert arm, compared to 25% (41/167) in the routine arm (ITT analysis, risk ratio 0.51, 95% CI 0.33–0.77, p = 0.0052).
The yield of bacteriologically confirmed TB cases among patients with presumptive TB was 17% (167/1,003) with routine diagnosis and 26% (257/982) with Xpert diagnosis (ITT analysis, risk ratio 1.57, 95% CI 1.32–1.87, p<0.001). This difference in diagnosis rates resulted in a higher rate of treatment initiation in the Xpert arm: 23% (229/1,003) and 28% (277/982) in the routine and Xpert arms, respectively (ITT analysis, risk ratio 1.24, 95% CI 1.06–1.44, p = 0.013). Time to treatment initiation was improved overall (ITT analysis, hazard ratio 0.76, 95% CI 0.63–0.92, p = 0.005) and among HIV-infected participants (ITT analysis, hazard ratio 0.67, 95% CI 0.53–0.85, p = 0.001). There was no difference in 6-mo mortality with Xpert versus routine diagnosis. Study limitations included incorrect intervention allocation for a high proportion of participants and that the study was conducted in a single clinic.
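For the primary outcome quoted above (32/257 untreated confirmed TB cases in the Xpert arm versus 41/167 in the routine arm), the headline risk ratio can be approximated with a simple unadjusted calculation. This is only a sketch: the trial's own analysis accounted for the weekly cluster randomisation, which the log-normal Wald interval below ignores.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b, z=1.96):
    """Unadjusted risk ratio of arm A vs arm B with a Wald (log-normal) 95% CI."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo, hi = (math.exp(math.log(rr) + s * z * se_log) for s in (-1, 1))
    return rr, lo, hi

# Primary outcome: bacteriologically confirmed TB not treated by 3 months,
# Xpert arm 32/257 vs routine arm 41/167 (figures reported above).
print(risk_ratio(32, 257, 41, 167))   # approximately (0.51, 0.33, 0.77)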
Conclusions
These data suggest that in this routine primary care setting, use of Xpert to diagnose TB increased the number of individuals with bacteriologically confirmed TB who were treated by 3 mo and reduced time to treatment initiation, particularly among HIV-infected participants.
Trial registration
Pan African Clinical Trials Registry PACTR201010000255244
Editors' Summary
Background
In 2012, about 8.6 million people developed active tuberculosis (TB)—a contagious mycobacterial disease that usually affects the lungs—and at least 1.3 million people died from the disease. Most of these deaths were in low- and middle-income countries, and a fifth were in HIV-positive individuals, who are particularly susceptible to TB. Mycobacterium tuberculosis, the bacterium that causes TB, is spread in airborne droplets when people with active disease cough or sneeze. The characteristic symptoms of TB include a cough, weight loss, and night sweats. Diagnostic tests for TB include microscopic examination of sputum (mucus coughed up from the lungs), growth (culture) of M. tuberculosis from sputum, and molecular tests (for example, the automated Xpert MTB/RIF test) that rapidly and accurately detect M. tuberculosis in sputum and determine its antibiotic resistance. TB can be cured by taking several antibiotics daily for at least six months, although the emergence of multidrug-resistant TB is making the disease harder to treat.
Why Was This Study Done?
To improve TB control, active disease needs to be diagnosed and treated quickly. However, sputum microscopy, the mainstay of TB diagnosis in many high-burden settings, fails to identify up to half of infected people, and mycobacterial culture (the “gold standard” of TB diagnosis) is slow and often unavailable in resource-limited settings. In late 2010, the World Health Organization recommended the routine use of the Xpert MTB/RIF test (Xpert) for TB diagnosis, and several low- and middle-income countries are now scaling up access to Xpert in their national TB control programs. But although Xpert performs well in ideal conditions, little is known about the impact of its implementation in routine (real-life) settings. In this pragmatic cluster-randomized trial, the researchers assess the health impacts of Xpert in a large TB/HIV primary health care clinic in South Africa, an upper-middle-income country that began to scale up access to Xpert for individuals showing symptoms of TB (individuals with presumptive TB) in 2011. A pragmatic trial asks whether an intervention works under real-life conditions; a cluster-randomized trial randomly assigns groups of people to receive alternative interventions and compares outcomes in the differently treated “clusters.”
What Did the Researchers Do and Find?
The researchers assigned everyone with presumptive TB attending a TB/HIV primary health care clinic in Cape Town to receive either Xpert for TB diagnosis or routine sputum microscopy and limited culture. Specifically, Xpert was requested on the routine laboratory request forms for individuals attending the clinic during randomly designated Xpert weeks but not during randomly designated routine testing weeks. During the 51-week trial, 982 individuals were assigned to the Xpert arm, and 1,003 were assigned to the routine testing arm, but because clinic staff sometimes failed to request Xpert during Xpert weeks, only 882 participants in the Xpert arm received the intervention. In an “intention to treat” analysis (an analysis that considers the outcomes of all the participants in a trial whether or not they received their assigned intervention), 13% of bacteriologically confirmed TB cases in the Xpert arm did not initiate TB treatment by three months after enrollment (the trial's primary outcome) compared to 25% in the routine testing arm. The proportion of participants with microbiologically confirmed TB and the proportion initiating TB treatment were higher in the Xpert arm than in the routine testing arm. Finally, the time to treatment initiation was lower in the Xpert arm than in the routine testing arm, particularly among HIV-infected participants.
What Do These Findings Mean?
These findings show that, in this primary health care setting, the provision of Xpert for TB diagnosis in individuals with presumptive TB provided benefits over testing that relied primarily on sputum microscopy. Notably, these benefits were seen even though a substantial proportion of individuals assigned to the Xpert intervention did not actually receive an Xpert test. The pragmatic nature of this trial, which aimed to minimize clinic disruption, and other aspects of the trial design may limit the accuracy and generalizability of these findings. Moreover, further studies are needed to discover whether the use of Xpert in real-life settings reduces the burden of TB illness and death over the long term. Nevertheless, these findings suggest that the implementation of Xpert has the potential to improve the outcomes of TB control programs and may also improve outcomes for individuals.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001760.
The World Health Organization provides information (in several languages) on all aspects of tuberculosis, including general information on tuberculosis diagnostics and specific information on the roll-out of the Xpert MTB/RIF test; further information about the World Health Organization's endorsement of Xpert MTB/RIF is included in a Strategic and Technical Advisory Group for Tuberculosis report; the “Global Tuberculosis Report 2013” provides information about tuberculosis around the world, including in South Africa
The Stop TB Partnership is working towards tuberculosis elimination and provides patient stories about tuberculosis (in English and Spanish); the Tuberculosis Vaccine Initiative (a not-for-profit organization) also provides personal stories about tuberculosis
The US Centers for Disease Control and Prevention has information about tuberculosis and its diagnosis (in English and Spanish)
The US National Institute of Allergy and Infectious Diseases also has detailed information on all aspects of tuberculosis
The South African National Tuberculosis Management Guidelines 2014 are available
The Foundation for Innovative New Diagnostics, a not-for-profit organization that helps to develop and introduce new diagnostic tests for tuberculosis, malaria, and neglected tropical diseases, has detailed information about the Xpert MTB/RIF test
More information about TB, HIV, and drug-resistant TB treatment in Khayelitsha, Cape Town, South Africa is provided by Médecins sans Frontières, South Africa
doi:10.1371/journal.pmed.1001760
PMCID: PMC4244039  PMID: 25423041
21.  A combined index to classify prognostic comorbidity in candidates for radical prostatectomy 
BMC Urology  2014;14:28.
Background
In patients with early prostate cancer, stratification by comorbidity could be of importance in clinical decision making as well as in characterizing patients enrolled into clinical trials. In this study, we investigated several comorbidity classifications as predictors of overall mortality after radical prostatectomy, searching for measures providing complementary prognostic information which could be combined into a single score.
Methods
The study sample consisted of 2205 consecutive patients selected for radical prostatectomy with a mean age of 64 years and a mean follow-up of 9.2 years (median: 8.6). Seventy-four patients with incomplete tumor-related data were excluded. In addition to age and tumor-related parameters, six comorbidity classifications and the body mass index were assessed as possible predictors of overall mortality. Kaplan-Meier curves and Mantel-Haenszel hazard ratios were used for univariate analysis. The impact of different causes of death was analyzed by competing risk analysis. Cox proportional hazard models were calculated to analyze combined effects of variables.
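The combined modelling step described here is a Cox proportional hazards regression with the comorbidity measures as covariates. The sketch below is a minimal illustration of such a model using the Python lifelines package on entirely synthetic data; the column names (charlson, asa_class, bmi) are placeholders standing in for the study's variables, not its actual dataset.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
# Entirely synthetic stand-ins for the predictors discussed above.
df = pd.DataFrame({
    "age":       rng.normal(64, 6, n),
    "charlson":  rng.poisson(1.0, n),
    "asa_class": rng.integers(1, 4, n),
    "bmi":       rng.normal(27, 4, n),
})
# Simulate survival times whose hazard rises with the covariates.
lin_pred = (0.03 * (df.age - 64) + 0.25 * df.charlson
            + 0.30 * (df.asa_class - 1) + 0.02 * (df.bmi - 27))
df["time"]  = rng.exponential(scale=12 * np.exp(-lin_pred))
df["death"] = (df["time"] < 15).astype(int)   # administrative censoring at 15 years
df["time"]  = df["time"].clip(upper=15)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()                           # hazard ratios per covariate
```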
Results
Age, Gleason score, tumor stage, Charlson score, American Society of Anesthesiologists (ASA) physical status class, and body mass index were identified as significant predictors of overall mortality in the multivariate analysis, regardless of whether two-group or three-group stratifications were used. Competing risk analysis revealed that the excess mortality in patients with a body mass index of 30 kg/m² or higher was attributable to competing mortality (including second cancers), but not to prostate cancer mortality.
Conclusion
Stratifying patients by a combined consideration of the comorbidity measures Charlson score, ASA classification and body mass index may assist clinical decision making in elderly candidates for radical prostatectomy.
doi:10.1186/1471-2490-14-28
PMCID: PMC3986600  PMID: 24678762
Prostate cancer; Radical prostatectomy; Comorbidity; Overall survival; Competing mortality; ASA classification; Charlson score; Body mass index; Cox proportional hazard models
22.  Pralidoxime in Acute Organophosphorus Insecticide Poisoning—A Randomised Controlled Trial 
PLoS Medicine  2009;6(6):e1000104.
In a randomized controlled trial of individuals who had taken organophosphorus insecticides, Michael Eddleston and colleagues find that there is no evidence that the addition of the antidote pralidoxime offers benefit over atropine and supportive care.
Background
Poisoning with organophosphorus (OP) insecticides is a major global public health problem, causing an estimated 200,000 deaths each year. Although the World Health Organization recommends use of pralidoxime, this antidote's effectiveness remains unclear. We aimed to determine whether the addition of pralidoxime chloride to atropine and supportive care offers benefit.
Methods and Findings
We performed a double-blind randomised placebo-controlled trial of pralidoxime chloride (2 g loading dose over 20 min, followed by a constant infusion of 0.5 g/h for up to 7 d) versus saline in patients with organophosphorus insecticide self-poisoning. Mortality was the primary outcome; secondary outcomes included intubation, duration of intubation, and time to death. We measured baseline markers of exposure and pharmacodynamic markers of response to aid interpretation of clinical outcomes. Two hundred thirty-five patients were randomised to receive pralidoxime (121) or saline placebo (114). Pralidoxime produced substantial and moderate red cell acetylcholinesterase reactivation in patients poisoned by diethyl and dimethyl compounds, respectively. Mortality was nonsignificantly higher in patients receiving pralidoxime: 30/121 (24.8%) receiving pralidoxime died, compared with 18/114 (15.8%) receiving placebo (adjusted hazard ratio [HR] 1.69, 95% confidence interval [CI] 0.88–3.26, p = 0.12). Incorporating the baseline amount of acetylcholinesterase already aged and plasma OP concentration into the analysis increased the HR for patients receiving pralidoxime compared to placebo, further decreasing the likelihood that pralidoxime is beneficial. The need for intubation was similar in both groups (pralidoxime 26/121 [21.5%], placebo 24/114 [21.1%], adjusted HR 1.27 [95% CI 0.71–2.29]). To reduce confounding due to ingestion of different insecticides, we further analysed patients with confirmed chlorpyrifos or dimethoate poisoning alone, finding no evidence of benefit.
Conclusions
Despite clear reactivation of red cell acetylcholinesterase in diethyl organophosphorus pesticide poisoned patients, we found no evidence that this regimen improves survival or reduces need for intubation in patients with organophosphorus insecticide poisoning. The reason for this failure to benefit patients was not apparent. Further studies of different dose regimens or different oximes are required.
Trial Registration
Controlled-trials.com ISRCTN55264358
Please see later in the article for Editors' Summary
Editors' Summary
Background
Each year, about 200,000 people worldwide die from poisoning with organophosphorous insecticides, toxic chemicals that are widely used in agriculture, particularly in developing countries. Organophosphates disrupt communication between the brain and the body in both insects and people. The brain controls the body by sending electrical impulses along nerve cells (neurons) to the body's muscle cells. At the end of the neurons, these impulses are converted into chemical messages (neurotransmitters), which cross the gap between neurons and muscle cells (the neuromuscular junction) and bind to proteins (receptors) on the muscle cells that pass on the brain's message. One important neurotransmitter is acetylcholine. This is used at neuromuscular junctions, in the part of the nervous system that controls breathing and other automatic vital functions, and in parts of the central nervous system. Normally, the enzyme acetylcholinesterase quickly breaks down acetylcholine after it has delivered its message, but organophosphates inhibit acetylcholinesterase and, as a result, disrupt the transmission of nerve impulses at nerve endings. Symptoms of organophosphate poisoning include excessive sweating, diarrhea, muscle weakness, and breathing problems. Most deaths from organophosphate poisoning are caused by respiratory failure.
Why Was This Study Done?
Treatment for organophosphorous insecticide poisoning includes resuscitation and assistance with breathing (intubation) if necessary and the rapid administration of atropine. This antidote binds to “muscarinic” acetylcholine receptors and blocks the effects of acetylcholine at this type of receptor. Atropine can only reverse some of the effects of organophosphate poisoning, however, because it does not block the activity of acetylcholine at its other receptors. Consequently, the World Health Organization (WHO) recommends that a second type of antidote called an oxime acetylcholinesterase reactivator be given after atropine. But, although the beneficial effects of atropine are clear, controversy surrounds the role of oximes in treating organophosphate poisoning. There is even some evidence that the oxime pralidoxime can be harmful. In this study, the researchers try to resolve this controversy by studying the effects of pralidoxime treatment on patients poisoned by organophosphorous insecticides in Sri Lanka in a randomized controlled trial (a study in which groups of patients are randomly chosen to receive different treatments).
What Did the Researchers Do and Find?
The researchers enrolled 235 adults who had been admitted to two Sri Lankan district hospitals with organophosphorous insecticide self-poisoning (in Sri Lanka, more than 70% of fatal suicide attempts are the result of pesticide poisoning). The patients, all of whom had been given atropine, were randomized to receive either the WHO recommended regimen of pralidoxime or saline. The researchers determined how much and which pesticide each patient had been exposed to, measured the levels of pralidoxime and acetylcholinesterase activity in the patients' blood, and monitored the patients' progress during their hospital stay. Overall, 48 patients died—30 of the 121 patients who received pralidoxime and 18 of the 114 control patients. After adjusting for the baseline characteristics of the two treatment groups and for intubation at baseline, pralidoxime treatment increased the patients' risk of dying by two-thirds, although this increased risk of death was not statistically significant. In other words, this result does not prove that pralidoxime treatment was bad for the patients in this trial. However, in further analyses that adjusted for the ingestion of different insecticides, the baseline levels of insecticides in patients' blood, and other prespecified variables, pralidoxime treatment always increased the patients' risk of death.
What Do These Findings Mean?
These findings provide no evidence that the WHO-recommended regimen of pralidoxime improves survival after organophosphorus pesticide poisoning, even though other results from the trial show that the treatment reactivated acetylcholinesterase. Indeed, although limited by the small number of patients enrolled into this study (the trial recruited fewer patients than expected because results from another trial had a deleterious effect on recruitment), these findings actually suggest that pralidoxime treatment may be harmful, at least in self-poisoned patients. This suspicion now needs to be confirmed in trials that more fully assess the risks and benefits of oximes and that explore the effects of different dosing regimens and/or different oximes.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000104.
The US Environmental Protection Agency provides information about all aspects of insecticides (in English and Spanish)
Toxtown, an interactive site from the US National Library of Medicine, provides information on exposure to pesticides and other environmental health concerns (in English and Spanish)
The US National Pesticide Information Center provides objective, science-based information about pesticides (in English and Spanish)
MedlinePlus also provides links to information on pesticides (in English and Spanish)
For more on Poisoning Prevention and Management see WHO's International Programme on Chemical Safety (IPCS)
WikiTox, a clinical toxicology teaching resource project, has detailed information on organophosphates
doi:10.1371/journal.pmed.1000104
PMCID: PMC2696321  PMID: 19564902
23.  Analysis and design of randomised clinical trials involving competing risks endpoints 
Trials  2011;12:127.
Background
In randomised clinical trials involving time-to-event outcomes, the failures concerned may be events of an entirely different nature and as such define a classical competing risks framework. In designing and analysing clinical trials involving such endpoints, it is important to account for the competing events, and evaluate how each contributes to the overall failure. An appropriate choice of statistical model is important for adequate determination of sample size.
Methods
We describe how competing events may be summarised in such trials using cumulative incidence functions and Gray's test. The statistical modelling of competing events using proportional cause-specific and subdistribution hazard functions, and the corresponding procedures for sample size estimation are outlined. These are illustrated using data from a randomised clinical trial (SQNP01) of patients with advanced (non-metastatic) nasopharyngeal cancer.
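Cumulative incidence functions treat the competing event as a real outcome rather than as ordinary censoring. The following is a minimal numpy sketch of the nonparametric (Aalen–Johansen-type) estimator for one cause, run on synthetic data rather than the SQNP01 trial's; the cause labels are hypothetical stand-ins for distant metastasis and loco-regional recurrence.

```python
import numpy as np

def cumulative_incidence(time, cause, cause_of_interest=1):
    """Nonparametric cumulative incidence for one cause.

    time  : event/censoring times
    cause : 0 = censored, 1, 2, ... = competing causes
    Returns (event_times, CIF) for the cause of interest.
    """
    time, cause = np.asarray(time, float), np.asarray(cause, int)
    order = np.argsort(time)
    time, cause = time[order], cause[order]
    surv, cif = 1.0, 0.0
    out_t, out_cif = [], []
    for t in np.unique(time[cause > 0]):
        at_risk = np.sum(time >= t)
        d_any   = np.sum((time == t) & (cause > 0))
        d_k     = np.sum((time == t) & (cause == cause_of_interest))
        cif  += surv * d_k / at_risk          # mass added for the cause of interest
        surv *= 1.0 - d_any / at_risk         # overall event-free survival
        out_t.append(t); out_cif.append(cif)
    return np.array(out_t), np.array(out_cif)

# Synthetic example: cause 1 = distant metastasis, cause 2 = loco-regional recurrence.
rng = np.random.default_rng(1)
t = rng.exponential(5, 200)
c = rng.choice([0, 1, 2], size=200, p=[0.3, 0.4, 0.3])
times, cif = cumulative_incidence(t, c, cause_of_interest=1)
print(f"estimated 5-year cumulative incidence of cause 1: {cif[times <= 5][-1]:.2f}")
```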
Results
In this trial, treatment had no effect on the competing event of loco-regional recurrence. Thus, the effects of treatment on the hazard of distant metastasis were similar in both the cause-specific (unadjusted csHR 0.43, 95% CI 0.25–0.72) and subdistribution (unadjusted subHR 0.43, 95% CI 0.25–0.76) hazard analyses, in favour of concurrent chemo-radiotherapy followed by adjuvant chemotherapy. Adjusting for nodal status and tumour size did not alter the results. The results of the logrank test (p = 0.002) comparing the cause-specific hazards and Gray's test (p = 0.003) comparing the cumulative incidences also led to the same conclusion. However, the subdistribution hazard analysis requires many more subjects than the cause-specific hazard analysis to detect the same magnitude of effect.
Conclusions
The cause-specific hazard analysis is appropriate for analysing competing risks outcomes when treatment has no effect on the cause-specific hazard of the competing event. It requires fewer subjects than the subdistribution hazard analysis for a similar effect size. However, if the main and competing events are influenced in opposing directions by an intervention, a subdistribution hazard analysis may be warranted.
doi:10.1186/1745-6215-12-127
PMCID: PMC3130669  PMID: 21595883
24.  The ACCESS study: a Zelen randomised controlled trial of a treatment package including problem solving therapy compared to treatment as usual in people who present to hospital after self-harm: study protocol for a randomised controlled trial 
Trials  2011;12:135.
Background
People who present to hospital after intentionally harming themselves pose a common and important problem. Previous reviews of interventions have been inconclusive because existing trials have been underpowered and conducted in unrepresentative populations. These reviews have, however, indicated that problem-solving therapy and regular written communication after the self-harm attempt may be effective treatments. This protocol describes a large pragmatic trial, using a novel design, of a package of measures that includes problem-solving therapy, regular written communication, patient support, cultural assessment, improved access to primary care, and a risk management strategy in people who present to hospital after self-harm.
Methods
We propose to use a double-consent Zelen design, in which participants are randomised prior to giving consent, to enrol a large representative cohort of patients. The main outcome will be hospital attendance following repetition of self-harm in the 12 months after recruitment, with secondary outcomes of self-reported self-harm, hopelessness, anxiety, depression, quality of life, social function, and hospital use at three months and one year.
Discussion
A strength of the study is that it is a pragmatic trial that aims to recruit large numbers and does not exclude people whose first language is not English. A potential limitation is the analysis of the results, which is complex and may underestimate any effect if a large number of people randomised to problem-solving therapy refuse their consent, since they will effectively cross over to the treatment-as-usual group. However, the primary analysis is a true intention-to-treat analysis of everyone randomised, which includes both those who consent and those who do not consent to participate in the study. This provides information about how the intervention will work in practice in a representative population, which is a major advance of this study compared with what has been done before.
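The dilution concern described above can be made concrete with a small simulation: if some of those randomised to the intervention refuse consent and effectively receive usual care, the intention-to-treat risk ratio moves toward 1. The event rates, true effect size, and refusal proportions below are illustrative assumptions, not the ACCESS study's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_itt(n=10_000, p_control=0.20, true_rr=0.70, refusal=0.30):
    """Zelen-style dilution: refusers in the intervention arm receive usual care.

    Returns the intention-to-treat risk ratio, which lies between
    true_rr and 1.0 whenever refusal > 0.
    """
    arm = rng.integers(0, 2, n)                  # 0 = usual care, 1 = intervention
    consented = rng.random(n) >= refusal         # refusers cross over to usual care
    treated = (arm == 1) & consented
    p_event = np.where(treated, p_control * true_rr, p_control)
    event = rng.random(n) < p_event
    return event[arm == 1].mean() / event[arm == 0].mean()

for refusal in (0.0, 0.3, 0.6):
    print(f"refusal {refusal:.0%}: ITT risk ratio ~ {simulate_itt(refusal=refusal):.2f}")
```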
Trial registration
Australia and New Zealand Clinical Trials Register (ANZCTR): ACTRN12609000641291
doi:10.1186/1745-6215-12-135
PMCID: PMC3117717  PMID: 21615951
25.  Reappraisal of Metformin Efficacy in the Treatment of Type 2 Diabetes: A Meta-Analysis of Randomised Controlled Trials 
PLoS Medicine  2012;9(4):e1001204.
Catherine Cornu and colleagues performed a meta-analysis of randomised controlled trials of metformin efficacy on cardiovascular morbidity or mortality in patients with type 2 diabetes and showed that although metformin is considered the gold standard, its benefit/risk ratio remains uncertain.
Background
The UK Prospective Diabetes Study showed that metformin decreases mortality compared to diet alone in overweight patients with type 2 diabetes mellitus. Since then, it has been the first-line treatment in overweight patients with type 2 diabetes. However, metformin-sulphonylurea bitherapy may increase mortality.
Methods and Findings
This meta-analysis of randomised controlled trials evaluated metformin efficacy (in studies of metformin versus diet alone, versus placebo, and versus no treatment; metformin as an add-on therapy; and metformin withdrawal) against cardiovascular morbidity or mortality in patients with type 2 diabetes. We searched Medline, Embase, and the Cochrane database. Primary end points were all-cause mortality and cardiovascular death. Secondary end points included all myocardial infarctions, all strokes, congestive heart failure, peripheral vascular disease, leg amputations, and microvascular complications. Thirteen randomised controlled trials (13,110 patients) were retrieved; 9,560 patients were given metformin, and 3,550 patients were given conventional treatment or placebo. Metformin did not significantly affect the primary outcomes: all-cause mortality, risk ratio (RR) = 0.99 (95% CI: 0.75 to 1.31), and cardiovascular mortality, RR = 1.05 (95% CI: 0.67 to 1.64). The secondary outcomes were also unaffected by metformin treatment: all myocardial infarctions, RR = 0.90 (95% CI: 0.74 to 1.09); all strokes, RR = 0.76 (95% CI: 0.51 to 1.14); heart failure, RR = 1.03 (95% CI: 0.67 to 1.59); peripheral vascular disease, RR = 0.90 (95% CI: 0.46 to 1.78); leg amputations, RR = 1.04 (95% CI: 0.44 to 2.44); and microvascular complications, RR = 0.83 (95% CI: 0.59 to 1.17). For all-cause mortality and cardiovascular mortality, there was significant heterogeneity when including the UK Prospective Diabetes Study subgroups (I² = 41% and 59%, respectively). There was significant interaction with sulphonylurea as a concomitant treatment for myocardial infarction (p = 0.10 and 0.02, respectively).
Conclusions
Although metformin is considered the gold standard, its benefit/risk ratio remains uncertain. We cannot exclude a 25% reduction or a 31% increase in all-cause mortality. We cannot exclude a 33% reduction or a 64% increase in cardiovascular mortality. Further studies are needed to clarify this situation.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Worldwide, more than 350 million people have diabetes, and this number is increasing rapidly. Diabetes is characterized by dangerous amounts of sugar (glucose) in the blood. Blood sugar levels are normally controlled by insulin, a hormone produced by the pancreas. In people with type 2 diabetes (the most common form of diabetes), blood sugar control fails because the fat and muscle cells that usually respond to insulin by removing excess sugar from the blood become less responsive to insulin. Type 2 diabetes can be controlled with diet and exercise and with antidiabetic pills, each of which works in a different way to maintain a healthy blood sugar level. Metformin, for example, stops the liver making glucose and increases the body's response to insulin, whereas sulfonylureas help the pancreas make more insulin. The long-term complications of diabetes, which include an increased risk of cardiovascular problems such as heart disease and stroke, reduce the life expectancy of people with diabetes by about ten years compared to people without diabetes.
Why Was This Study Done?
In 1998, a large randomized clinical trial called the UK Prospective Diabetes Study (UKPDS 34) reported that metformin in combination with dietary control reduced all-cause mortality in overweight patients with type 2 diabetes when compared to dietary control alone. Specifically, the risk of death from any cause among patients taking metformin was about a third lower than the risk of death among patients not taking metformin—a risk ratio (RR) of 0.64. This reduction in risk was significant (that is, it was unlikely to have occurred by chance) because its 95% confidence interval (95% CI; there is a 95% chance that the “true” RR lies within this interval) of 0.45–0.91 did not overlap 1.0. Given this finding, metformin is now recommended as the first-line treatment for type 2 diabetes. However, UKPDS 34 also reported an increase in death in non-overweight patients who took metformin plus sulfonylurea compared to those who took sulfonylurea alone (RR: 1.60; 95% CI: 1.02–2.52), a result considered non-significant by the UKPDS 34 researchers and largely ignored ever since. So do the benefits of metformin outweigh its risks? In this meta-analysis, the researchers re-evaluate the risk-to-benefit balance of metformin in the treatment of patients with type 2 diabetes. A meta-analysis is a statistical method that combines the results of several studies.
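The summary's reading of risk ratios and confidence intervals can be captured in a tiny helper: an interval lying entirely below or above 1.0 is conventionally "significant", and its bounds give the range of effects the data cannot rule out. A minimal sketch, using the UKPDS 34 and pooled figures quoted in this entry:

```python
def describe_rr(rr, lo, hi):
    """Summarise a risk ratio and its 95% CI in the terms used above."""
    def effect(x):                      # express an RR bound as a % change in risk
        return f"{1 - x:.0%} reduction" if x < 1 else f"{x - 1:.0%} increase"
    sig = "significant" if (hi < 1.0 or lo > 1.0) else "not significant"
    return (f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}): {sig}; "
            f"compatible with anything from a {effect(lo)} to a {effect(hi)}")

print(describe_rr(0.64, 0.45, 0.91))   # UKPDS 34: metformin vs diet alone (overweight patients)
print(describe_rr(1.60, 1.02, 2.52))   # UKPDS 34: metformin + sulfonylurea vs sulfonylurea alone
print(describe_rr(0.99, 0.75, 1.31))   # this meta-analysis: pooled all-cause mortality
```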
What Did the Researchers Do and Find?
The researchers identified 13 randomized controlled trials that evaluated the effect of metformin on cardiovascular morbidity (illness) and mortality in patients with type 2 diabetes. More than 13,000 patients participated in these studies, three-quarters of whom received metformin and a quarter of whom received other treatments or a placebo. Compared to other treatments, metformin treatment had no effect on the risk of all-cause mortality (RR: 0.99; 95% CI: 0.75–1.31) or cardiovascular mortality (RR: 1.05; 95% CI: 0.67–1.64), the primary end points of this study. However, the results of the individual trials varied more than would be expected by chance (“heterogeneity”). Exclusion of the UKPDS 34 trial from the meta-analysis had no effect on the estimated risk ratio for all-cause mortality or cardiovascular deaths, but the heterogeneity disappeared. Finally, metformin treatment had no significant effect on the risk of cardiovascular conditions such as heart attacks, strokes, and heart failure; there was no heterogeneity among the trials for these secondary end points.
What Do These Findings Mean?
These findings show no evidence that metformin has any beneficial effect on all-cause mortality, on cardiovascular mortality, or on cardiovascular morbidity among patients with type 2 diabetes. These findings must be cautiously interpreted because only a few randomized controlled trials were included in this study, and only a few patients died or developed any cardiovascular illnesses. Importantly, however, from these findings, it is impossible to exclude beyond reasonable doubt the possibility that metformin causes up to a 25% reduction or a 31% increase in all-cause mortality. Similarly, these findings cannot exclude the possibility that metformin causes up to a 33% reduction or a 64% increase in cardiovascular mortality. Given that a large number of patients take metformin for many years as a first-line treatment for diabetes, further studies are urgently needed to clarify this situation.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001204.
The International Diabetes Federation provides information about all aspects of diabetes
The US National Diabetes Information Clearinghouse provides information about diabetes for patients, health-care professionals, and the general public, including detailed information on diabetes medicines (in English and Spanish)
The UK National Health Service Choices web site provides information for patients and carers about type 2 diabetes and includes people's stories about diabetes
The charity Diabetes UK also provides detailed information for patients and carers, including information on diabetes medications, and has a further selection of stories from people with diabetes
MedlinePlus provides links to further resources and advice about diabetes and about diabetes medicines; it also provides information about metformin (in English and Spanish)
The charity Healthtalkonline has interviews with people about their experiences of diabetes and of controlling diabetes with oral medications
doi:10.1371/journal.pmed.1001204
PMCID: PMC3323508  PMID: 22509138
