1.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 trials with results posted at ClinicalTrials.gov that we randomly sampled, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between the primary completion date and first public posting of results was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between the primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
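Because each of the 202 trials contributes both a registry posting and a journal article, the completeness comparisons above are paired. The sketch below illustrates one way such a paired comparison could be run; the 2×2 cell counts are hypothetical (only the marginal percentages are reported in the abstract), and the authors' actual statistical method may differ.

```python
# A minimal sketch of a paired comparison of reporting completeness
# (ClinicalTrials.gov posting vs. journal article for the same trial).
# The 2x2 cell counts below are hypothetical; the abstract reports only
# the marginal percentages (e.g., adverse events complete in 73% of
# postings vs. 45% of articles, n = 202).
from statsmodels.stats.contingency_tables import mcnemar

# Rows: complete at ClinicalTrials.gov (yes/no)
# Columns: complete in the journal article (yes/no)
table = [[85, 62],   # complete in both / posting only   (hypothetical)
         [6, 49]]    # article only / complete in neither (hypothetical)

result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"McNemar p-value: {result.pvalue:.4f}")
```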
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
2.  Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis 
PLoS Medicine  2008;5(9):e191.
Background
The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search.
Methods and Findings
We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as “pivotal trials” for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78–5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17–1.52); and pivotal status (OR 5.31, 95% CI 3.30–8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24–7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15–1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional models for all trials and the subset of pivotal trials.
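The "OR 1.33 per 2-fold increase in sample size" arises naturally when log2(sample size) is entered as a covariate, so that exponentiating its coefficient gives the odds ratio per doubling of enrolment. The sketch below illustrates that parameterisation on simulated data; the variable names and data are illustrative and are not the authors' dataset or exact model.

```python
# A minimal sketch (with simulated data) of the kind of multivariable
# logistic regression described above, where the odds ratio for sample
# size is expressed "per 2-fold increase" by entering log2(sample size)
# as the covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 909
df = pd.DataFrame({
    "published": rng.integers(0, 2, n),        # 1 = trial published (simulated)
    "significant": rng.integers(0, 2, n),      # 1 = statistically significant result
    "pivotal": rng.integers(0, 2, n),          # 1 = pivotal trial
    "sample_size": rng.integers(20, 2000, n),  # trial enrolment
})
df["log2_n"] = np.log2(df["sample_size"])

model = smf.logit("published ~ significant + pivotal + log2_n", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)  # exp(coef on log2_n) = OR per doubling of sample size
print(odds_ratios)
```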
Conclusions
Over half of all supporting trials for FDA-approved drugs remained unpublished ≥ 5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force mandating basic results reporting of clinical trials.
Ida Sim and colleagues investigate the publication status and publication bias of trials submitted to the US Food and Drug Administration (FDA) for a wide variety of approved drugs.
Editors' Summary
Background.
Before a new drug becomes available for the treatment of a specific human disease, its benefits and harms are carefully studied, first in the laboratory and in animals, and then in several types of clinical trials. In the most important of these trials—so-called “pivotal” clinical trials—the efficacy and safety of the new drug and of a standard treatment are compared by giving groups of patients the different treatments and measuring several predefined “outcomes.” These outcomes indicate whether the new drug is more effective than the standard treatment and whether it has any other effects on the patients' health and daily life. All this information is then submitted by the sponsor of the new drug (usually a pharmaceutical company) to the government body responsible for drug approval—in the US, this is the Food and Drug Administration (FDA).
Why Was This Study Done?
After a drug receives FDA approval, information about the clinical trials supporting the FDA's decision are included in the FDA “Summary Basis of Approval” and/or on the drug label. In addition, some clinical trials are described in medical journals. Ideally, all the clinical information that leads to a drug's approval should be publicly available to help clinicians make informed decisions about how to treat their patients. A full-length publication in a medical journal is the primary way that clinical trial results are communicated to the scientific community and the public. Unfortunately, drug sponsors sometimes publish the results only of trials where their drug performed well; as a consequence, trials where the drug did no better than the standard treatment or where it had unwanted side effects remain unpublished. Publication bias like this provides an inaccurate picture of a drug's efficacy and safety relative to other therapies and may lead to excessive prescribing of newer, more expensive (but not necessarily more effective) treatments. In this study, the researchers investigate whether selective trial reporting is common by evaluating the publication status of trials submitted to the FDA for a wide variety of approved drugs. They also ask which factors affect a trial's chances of publication.
What Did the Researchers Do and Find?
The researchers identified 90 drugs approved by the FDA between 1998 and 2000 by searching the FDA's Center for Drug Evaluation and Research Web site. From the Summary Basis of Approval for each drug, they identified 909 clinical trials undertaken to support these approvals. They then searched the published medical literature up to mid-2006 to determine if and when the results of each trial were published. Although 76% of the pivotal trials had appeared in medical journals, usually within 3 years of FDA approval, only 43% of all the submitted trials had been published. Among all the trials, those with statistically significant results were nearly twice as likely to have been published as those without statistically significant results, and pivotal trials were three times more likely to have been published than nonpivotal trials, 5 years postapproval. In addition, a larger sample size increased the likelihood of publication. Having statistically significant results and larger sample sizes also increased the likelihood of publication of the pivotal trials.
What Do These Findings Mean?
Although the search methods used in this study may have missed some publications, these findings suggest that more than half the clinical trials undertaken to support drug approval remain unpublished 5 years or more after FDA approval. They also reveal selective reporting of results. For example, they show that a pivotal trial in which the new drug does no better than an old drug is less likely to be published than one where the new drug is more effective, a publication bias that could establish an inappropriately favorable record for the new drug in the medical literature. Importantly, these findings provide a baseline for monitoring the effects of the FDA Amendments Act 2007, which was introduced to improve the accuracy and completeness of drug trial reporting. Under this Act, all trials supporting FDA-approved drugs must be registered when they start, and the summary results of all the outcomes declared at trial registration as well as specific details about the trial protocol must be publicly posted within a year of drug approval on the US National Institutes of Health clinical trials site.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050191.
PLoS Medicine recently published an editorial discussing the FDA Amendment Act and what it means for medical journals: The PLoS Medicine Editors (2008) Next Stop, Don't Block the Doors: Opening Up Access to Clinical Trials Results. PLoS Med 5(7): e160
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward international norms and standards for reporting the findings of clinical trials
doi:10.1371/journal.pmed.0050191
PMCID: PMC2553819  PMID: 18816163
3.  Greater Response to Placebo in Children Than in Adults: A Systematic Review and Meta-Analysis in Drug-Resistant Partial Epilepsy 
PLoS Medicine  2008;5(8):e166.
Background
Despite guidelines establishing the need to perform comprehensive paediatric drug development programs, pivotal trials in children with epilepsy have been completed mostly in Phase IV as a postapproval replication of adult data. However, it has been shown that the treatment response in children can differ from that in adults. It has not been investigated whether differences in drug effect between adults and children might occur in the treatment of drug-resistant partial epilepsy, although such differences may have a substantial impact on the design and results of paediatric randomised controlled trials (RCTs).
Methods and Findings
Three electronic databases were searched for RCTs investigating any antiepileptic drug (AED) in the add-on treatment of drug-resistant partial epilepsy in both children and adults. The treatment effect was compared between the two age groups using the ratio of the relative risk (RR) of the 50% responder rate between active AED treatment and placebo groups, as well as meta-regression. Differences in the response to placebo and to active treatment were explored using logistic regression. A comparable approach was used for analysing secondary endpoints, including the seizure-free rate, total and adverse event-related withdrawal rates, and the withdrawal rate for seizure aggravation. Five AEDs were evaluated in both adults and children with drug-resistant partial epilepsy in 32 RCTs. The treatment effect was significantly lower in children than in adults (RR ratio: 0.67 [95% confidence interval (CI) 0.51–0.89]; p = 0.02 by meta-regression). This difference was related to an age-dependent variation in the response to placebo, with a higher rate in children than in adults (19% versus 9.9%, p < 0.001), whereas no significant difference was observed in the response to active treatment (37.2% versus 30.4%, p = 0.364). The relative risk of the total withdrawal rate was also significantly lower in children than in adults (RR ratio: 0.65 [95% CI 0.43–0.98], p = 0.004 by meta-regression), due to a higher withdrawal rate for seizure aggravation in children (5.6%) than in adults (0.7%) receiving placebo (p < 0.001). Finally, there was no significant difference in the seizure-free rate between adult and paediatric studies.
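The headline comparison is a ratio of relative risks: the RR of the 50% responder rate (active AED versus placebo) is computed separately in children and in adults and the two RRs are then contrasted on the log scale. The sketch below reproduces that arithmetic with hypothetical counts chosen to match the response rates quoted above; it is an illustration, not the authors' meta-regression model.

```python
# A minimal sketch of a "ratio of relative risks" comparison between
# age groups. All counts are hypothetical, chosen to match the reported
# response rates (children: 37% active vs. 19% placebo; adults: 30.4% vs. 9.9%).
import numpy as np

def relative_risk(events_active, n_active, events_placebo, n_placebo):
    rr = (events_active / n_active) / (events_placebo / n_placebo)
    # Standard error of log(RR) for two independent proportions
    se_log = np.sqrt(1/events_active - 1/n_active + 1/events_placebo - 1/n_placebo)
    return rr, se_log

rr_child, se_child = relative_risk(74, 200, 38, 200)      # hypothetical paediatric counts
rr_adult, se_adult = relative_risk(304, 1000, 99, 1000)   # hypothetical adult counts

rr_ratio = rr_child / rr_adult
se_ratio = np.sqrt(se_child**2 + se_adult**2)              # variances add on the log scale
ci = np.exp(np.log(rr_ratio) + np.array([-1.96, 1.96]) * se_ratio)
print(f"RR ratio (children/adults): {rr_ratio:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```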
Conclusions
Children with drug-resistant partial epilepsy receiving placebo in double-blind RCTs demonstrated a significantly greater 50% responder rate than adults, probably reflecting increased placebo response and regression-to-the-mean effects. Paediatric clinical trial designs should account for these age-dependent variations in the response to placebo to reduce the risk of an underestimated sample size that could result in falsely negative trials.
In a systematic review of antiepileptic drugs, Philippe Ryvlin and colleagues find that children with drug-resistant partial epilepsy enrolled in trials seem to have a greater response to placebo than adults enrolled in such trials.
Editors' Summary
Background.
Whenever an adult is given a drug to treat a specific condition, that drug will have been tested in “randomized controlled trials” (RCTs). In RCTs, a drug's effects are compared to those of another drug for the same condition (or to a placebo, dummy drug) by giving groups of adult patients the different treatments and measuring how well each drug deals with the condition and whether it has any other effects on the patients' health. However, many drugs given to children have only been tested in adults, the assumption being that children can safely take the same drugs as adults provided the dose is scaled down. This approach to treatment is generally taken in epilepsy, a common brain disorder in children in which disruptions in the electrical activity of part (partial epilepsy) or all (generalized epilepsy) of the brain cause seizures. The symptoms of epilepsy depend on which part of the brain is disrupted and can include abnormal sensations, loss of consciousness, or convulsions. Most but not all patients can be successfully treated with antiepileptic drugs, which reduce or stop the occurrence of seizures.
Why Was This Study Done?
It is increasingly clear that children and adults respond differently to many drugs, including antiepileptic drugs. For example, children often break down drugs differently from adults, so a safe dose for an adult may be fatal to a child even after scaling down for body size, or it may be ineffective because of quicker clearance from the child's body. Consequently, regulatory bodies around the world now require comprehensive drug development programs in children as well as in adults. However, for pediatric trials to yield useful results, the general differences in the treatment response between children and adults must first be determined and then allowed for in the design of pediatric RCTs. In this study, the researchers investigate whether there is any evidence in published RCTs for age-dependent differences in the response to antiepileptic drugs in drug-resistant partial epilepsy.
What Did the Researchers Do and Find?
The researchers searched the literature for reports of RCTs on the effects of antiepileptic drugs in the add-on treatment of drug-resistant partial epilepsy in children and in adults—that is, trials that compared the effects of giving an additional antiepileptic drug with those of giving a placebo by asking what fraction of patients given each treatment had a 50% reduction in seizure frequency during the treatment period compared to a baseline period (the “50% responder rate”). This “systematic review” yielded 32 RCTs, including five pediatric RCTs. The researchers then compared the treatment effect (the ratio of the 50% responder rate in the treatment arm to the placebo arm) in the two age groups using a statistical approach called “meta-analysis” to pool the results of these studies. The treatment effect, they report, was significantly lower in children than in adults. Further analysis indicated that this difference was because more children than adults responded to the placebo. Nearly 1 in 5 children had a 50% reduction in seizure rate when given a placebo compared to only 1 in 10 adults. About a third of both children and adults had a 50% reduction in seizure rate when given antiepileptic drugs.
What Do These Findings Mean?
These findings, although limited by the small number of pediatric trials done so far, suggest that children with drug-resistant partial epilepsy respond more strongly in RCTs to placebo than adults. Although additional studies need to be done to find an explanation for this observation and to discover whether anything similar occurs in other conditions, this difference between children and adults should be taken into account in the design of future pediatric trials on the effects of antiepileptic drugs, and possibly drugs for other conditions. Specifically, to reduce the risk of false-negative results, this finding suggests that it might be necessary to increase the size of future pediatric trials to ensure that the trials have enough power to discover effects of the drugs tested, if they exist.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050166.
This study is further discussed in a PLoS Medicine Perspective by Terry Klassen and colleagues
The European Medicines Agency provides information about the regulation of medicines for children in Europe
The US Food and Drug Administration Office of Pediatric Therapeutics provides similar information for the US
The UK Medicines and Healthcare products Regulatory Agency also provides information on why medicines need to be tested in children
The MedlinePlus encyclopedia has a page on epilepsy (in English and Spanish)
The US National Institute for Neurological Disorders and Stroke and the UK National Health Service Direct health encyclopedia both provide information on epilepsy for patients (in several languages)
Neuroscience for Kids is an educational Web site prepared by Eric Chudler (University of Washington, Seattle, US) that includes information on epilepsy and a list of links to epilepsy organizations (mainly in English but some sections in other languages as well)
doi:10.1371/journal.pmed.0050166
PMCID: PMC2504483  PMID: 18700812
4.  A Systematic Review of Studies That Aim to Determine Which Outcomes to Measure in Clinical Trials in Children  
PLoS Medicine  2008;5(4):e96.
Background
In clinical trials the selection of appropriate outcomes is crucial to the assessment of whether one intervention is better than another. Selection of inappropriate outcomes can compromise the utility of a trial. However, the process of selecting the most suitable outcomes to include can be complex. Our aim was to systematically review studies that address the process of selecting outcomes or outcome domains to measure in clinical trials in children.
Methods and Findings
We searched Cochrane databases (no date restrictions) in December 2006; and MEDLINE (1950 to 2006), CINAHL (1982 to 2006), and SCOPUS (1966 to 2006) in January 2007 for studies of the selection of outcomes for use in clinical trials in children. We also asked a group of experts in paediatric clinical research to refer us to any other relevant studies. From these articles we extracted data on the clinical condition of interest, description of the method used to select outcomes, the people involved in the selection process, the outcomes selected, and limitations of the method as defined by the authors. The literature search identified 8,889 potentially relevant abstracts. Of these, 70 were retrieved, and 25 were included in the review. These studies described the work of 13 collaborations representing various paediatric specialties including critical care, gastroenterology, haematology, psychiatry, neurology, respiratory paediatrics, rheumatology, neonatal medicine, and dentistry. Two groups utilised the Delphi technique, one used the nominal group technique, and one used both methods to reach a consensus about which outcomes should be measured in clinical trials. Other groups used semistructured discussion, and one group used a questionnaire-based survey. The collaborations involved clinical experts, research experts, and industry representatives. Three groups involved parents of children affected by the particular condition.
Conclusions
Very few studies address the appropriate choice of outcomes for clinical research with children, and in most paediatric specialties no research has been undertaken. Among the studies we did assess, very few involved parents or children in selecting outcomes that should be measured, and none directly involved children. Research should be undertaken to identify the best way to involve parents and children in assessing which outcomes should be measured in clinical trials.
Ian Sinha and colleagues show, in a systematic review of published studies, that there are very few studies that address the appropriate choice of outcomes for clinical research with children.
Editors' Summary
Background.
When adult patients are given a drug for a disease by their doctors, they can be sure that its benefits and harms will have been carefully studied in clinical trials. Clinical researchers will have asked how well the drug does when compared to other drugs by giving groups of patients the various treatments and determining several “outcomes.” These are measurements carefully chosen in advance by clinical experts that ensure that trials provide as much information as possible about how effectively a drug deals with a specific disease and whether it has any other effects on patients' health and daily life. The situation is very different, however, for pediatric (child) patients. About three-quarters of the drugs given to children are “off-label”—they have not been specifically tested in children. The assumption used to be that children are just small people who can safely take drugs tested in adults provided the dose is scaled down. However, it is now known that children's bodies handle many drugs differently from adult bodies and that a safe dose for an adult can sometimes kill a child even after scaling down for body size. Consequently, regulatory bodies in the US, Europe, and elsewhere now require clinical trials to be done in children and drugs for pediatric use to be specifically licensed.
Why Was This Study Done?
Because children are not small adults, the methodology used to design trials involving children needs to be adapted from that used to design trials in adult patients. In particular, the process of selecting the outcomes to include in pediatric trials needs to take into account the differences between adults and children. For example, because children's brains are still developing, it may be important to include outcome measures that will detect any effect that drugs have on intellectual development. In this study, therefore, the researchers undertook a systematic review of the medical literature to discover how much is known about the best way to select outcomes in clinical trials in children.
What Did the Researchers Do and Find?
The researchers used a predefined search strategy to identify all the studies published since 1950 that examined the selection of outcomes in clinical trials in children. They also asked experts in pediatric clinical research for details of relevant studies. Only 25 studies, which covered several pediatric specialties and were published by 13 collaborative groups, met the strict eligibility criteria laid down by the researchers for their systematic review. Several approaches previously used to choose outcomes in clinical trials in adults were used in these studies to select outcomes. Two groups used the “Delphi” technique, in which opinions are sought from individuals, collated, and fed back to the individuals to generate discussion and a final, consensus agreement. One group used the “nominal group technique,” which involves the use of structured face-to-face discussions to develop a solution to a problem followed by a vote. Another group used both methods. The remaining groups (except one that used a questionnaire) used semistructured discussion meetings or workshops to decide on outcomes. Although most of the groups included clinical experts, people doing research on the specific clinical condition under investigation, and industry representatives, only three groups asked parents about which outcomes should be included in the trials, and none asked children directly.
What Do These Findings Mean?
These findings indicate that very few studies have addressed the selection of appropriate outcomes for clinical research in children. Indeed, in many pediatric specialties no research has been done on this important topic. Importantly, some of the studies included in this systematic review clearly show that it is inappropriate to use the outcomes used in adult clinical trials in pediatric populations. Overall, although the studies identified in this review provide some useful information on the selection of outcomes in clinical trials in children, further research is urgently needed to ensure that this process is made easier and more uniform. In particular, much more research must be done to determine the best way to involve children and their parents in the selection of outcomes.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050096.
A related PLoS Medicine Perspective article is available
The European Medicines Agency provides information about the regulation of medicines for children in Europe
The US Food and Drug Administration Office of Pediatric Therapeutics provides similar information for the US
The UK Medicines and Healthcare products Regulatory Agency also provides information on why medicines need to be tested in children
The UK Medicines for Children Research Network aims to facilitate the conduct of clinical trials of medicines for children
The James Lind Alliance has been established in the UK to increase patient involvement in medical research issues such as outcome selection in clinical trials
doi:10.1371/journal.pmed.0050096
PMCID: PMC2346505  PMID: 18447577
5.  Methods for the comparative evaluation of pharmaceuticals 
Political background
A novelty in Germany, the Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen; IQWiG) was established in 2004 to, among other tasks, evaluate the benefit of pharmaceuticals. In this context it is important that patented pharmaceuticals are excluded from the reference pricing system only if they offer a therapeutic improvement.
The institute is commissioned by the Federal Joint Committee (Gemeinsamer Bundesausschuss, G-BA) or by the Ministry of Health and Social Security. The German policy objective expressed by the latest health care reform (Gesetz zur Modernisierung der Gesetzlichen Krankenversicherung, GMG) is to base decisions on a scientific assessment of pharmaceuticals in comparison to already available treatments. However, procedures and methods are still to be established.
Research questions and methods
This health technology assessment (HTA) report was commissioned by the German Agency for HTA at the Institute for Medical Documentation and Information (DAHTA@DIMDI). It analysed the criteria, procedures, and methods of comparative drug assessment in other EU and OECD countries. The research question was: How do national public institutions compare medicines in connection with pharmaceutical regulation, i.e. the licensing, reimbursement, and pricing of drugs?
Institutions as well as documents concerning comparative drug evaluation (e.g. regulations, guidelines) were identified through internet, systematic literature, and hand searches. Publications were selected according to pre-defined inclusion and exclusion criteria. Documents were analysed in a qualitative manner following an analytic framework that had been developed in advance. Results were summarised narratively and presented in evidence tables.
Results and discussion
Currently, licensing agencies do not systematically assess a new drug's added value for patients and society. This is why many countries have made post-licensing evaluation of pharmaceuticals a requirement for reimbursement or pricing decisions. Typically, an explicitly designated drug review body is involved.
In all eleven countries included (Austria, Australia, Canada, Switzerland, Finland, France, the Netherlands, Norway, New Zealand, Sweden, and the United Kingdom), a drug's therapeutic benefit in comparison to treatment alternatives is central to the evaluation. A medicine is classified as a therapeutic improvement if it demonstrates an improved benefit-risk profile compared to treatment alternatives. However, evidence of superiority to a relevant degree is required.
Health-related quality of life is considered the most appropriate criterion for a drug's added value from the patients' perspective. Review bodies in Australia, New Zealand, and the United Kingdom have committed themselves to including this outcome measure whenever possible.
Pharmacological or innovative characteristics (e.g. administration route, dosage regimen, new principle of action) and other advantages (e.g. taste, appearance) are considered in about half of the countries. However, in most cases these aspects rank as second-line criteria for a drug's added value.
All countries except France and Switzerland perform a comparative pharmacoeconomic evaluation to analyse the costs of a drug intervention in relation to its benefit (preferably by cost-utility analysis). However, whether a medicine is cost-effective in relation to treatment alternatives is a question answered in a political and social context, and the criteria considered vary remarkably between countries.
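As a reminder of what a cost-utility comparison computes, the sketch below shows the incremental cost-effectiveness ratio (ICER), i.e. the extra cost per quality-adjusted life year (QALY) gained with the new drug over its comparator. The figures are hypothetical and purely illustrative of the general formula, not of any review body's actual calculation.

```python
# A minimal sketch of a cost-utility comparison: the incremental
# cost-effectiveness ratio (ICER) of a new drug versus a treatment
# alternative, expressed as cost per QALY gained. All figures are hypothetical.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per QALY gained of the new drug over the comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: the new drug costs 12,000 vs. 8,000 per patient
# and yields 6.2 vs. 5.9 QALYs over the modelled horizon.
print(f"ICER: {icer(12000, 6.2, 8000, 5.9):,.0f} per QALY gained")
```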
Countries agree that randomised controlled head-to-head trials (head-to-head RCTs) with a high degree of internal and external validity provide the most reliable and least biased evidence of a drug's relative treatment effects (as do systematic reviews and meta-analyses of such RCTs). Final outcome parameters reflecting long-term treatment objectives (mortality, morbidity, quality of life) are preferred to surrogate parameters. Following the concept of community effectiveness, drug review institutions also explicitly favour RCTs with a "natural" design, i.e. conducted in daily routine and country-specific care settings.
The countries' requirements for pharmacoeconomic studies are similar despite some methodological inconsistencies, e.g. concerning cost calculation.
Outcomes of clinical and pharmacoeconomic analyses are largely determined by the choice of comparator, so selecting an appropriate comparative treatment is crucial. In theory, the best or most cost-effective therapy is regarded as the appropriate comparator for clinical and economic studies. In practice, however, institutions accept that the drug is compared to routine daily treatment or to the least expensive therapy.
If a pharmaceutical offers several approved indications, in some countries all of them are assessed. Others only evaluate a drug's main indication. Canada is the only country which also considers a medicine's off-label use.
It is well known that clinical trials and pharmacoeconomic studies directly comparing a drug with adequate competitors are lacking, in quantitative as well as qualitative terms. This is specifically the case before or shortly after marketing authorisation. Yet reimbursement and pricing decisions still need to be supported by scientific evidence. In this situation, review bodies are often forced to rely on observational studies or on other internally less valid data (including expert and consensus opinions). As a second option, they use statistical approaches such as adjusted indirect comparisons (in Australia and the United Kingdom) and, commonly, economic modelling. However, there is consensus that results provided by these techniques need to be verified by valid head-to-head comparisons as soon as possible.
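The adjusted indirect comparisons mentioned above combine trials of two drugs that share a common comparator. A minimal sketch of this approach (often attributed to Bucher and colleagues) follows; all inputs are hypothetical, and the log odds ratio scale is used purely for illustration.

```python
# A minimal sketch of an adjusted indirect comparison: when drugs A and C
# have each been compared with a common comparator B (e.g., placebo) but
# not head to head, their relative effect is estimated on the log scale as
# the difference of the two trial-based estimates. All inputs are hypothetical.
import numpy as np

def indirect_comparison(log_or_ab, se_ab, log_or_cb, se_cb):
    """Indirect odds ratio of A vs. C via the common comparator B, with 95% CI."""
    log_or_ac = log_or_ab - log_or_cb
    se_ac = np.sqrt(se_ab**2 + se_cb**2)   # variances add for independent trials
    ci = np.exp(log_or_ac + np.array([-1.96, 1.96]) * se_ac)
    return np.exp(log_or_ac), ci

or_ac, ci = indirect_comparison(np.log(1.8), 0.15, np.log(1.3), 0.20)
print(f"Indirect OR (A vs C): {or_ac:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```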
Conclusions
In the majority of countries, reimbursement and pricing decisions are based on a systematic, evidence-based evaluation comparing a drug's clinical and economic characteristics with routine daily treatment. However, further evaluation criteria, requirements, and specific methodological issues still lack internationally agreed standards.
PMCID: PMC3011319  PMID: 21289930
6.  Inadequate Dissemination of Phase I Trials: A Retrospective Cohort Study 
PLoS Medicine  2009;6(2):e1000034.
Background
Drug development is ideally a logical sequence in which information from small early studies (Phase I) is subsequently used to inform and plan larger, more definitive studies (Phases II–IV). Phase I trials are unique because they generally provide the first evaluation of new drugs in humans. The conduct and dissemination of Phase I trials have not previously been empirically evaluated. Our objective was to describe the initiation, completion, and publication of Phase I trials in comparison with Phase II–IV trials.
Methods and Findings
We reviewed a cohort of all protocols approved by a sample of ethics committees in France from January 1, 1994 to December 31, 1994. The comparison of 140 Phase I trials with 304 Phase II–IV trials showed that Phase I studies were more likely to be initiated (133/140 [95%] versus 269/304 [88%]), more likely to be completed (127/133 [95%] versus 218/269 [81%]), and more likely to produce confirmatory results (71/83 [86%] versus 125/175 [71%]) than Phase II–IV trials. Publication was less frequent for Phase I studies (21/127 [17%] versus 93/218 [43%]), even when only studies with confirmatory results were considered (18/71 [25%] versus 79/125 [63%]).
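The sketch below illustrates the kind of two-group comparison behind the publication figures above, using the reported counts (21/127 Phase I versus 93/218 Phase II–IV trials). The abstract does not state which test the authors used; a Fisher exact test on the 2×2 table is shown only as an illustration under that assumption.

```python
# A minimal sketch comparing publication rates between Phase I and
# Phase II-IV trials, using the counts reported in the abstract.
from scipy.stats import fisher_exact

published_phase1, total_phase1 = 21, 127   # Phase I trials published / total
published_later, total_later = 93, 218     # Phase II-IV trials published / total

table = [[published_phase1, total_phase1 - published_phase1],
         [published_later, total_later - published_later]]
odds_ratio, p_value = fisher_exact(table)
print(f"Phase I: {published_phase1/total_phase1:.0%} published, "
      f"Phase II-IV: {published_later/total_later:.0%} published, p = {p_value:.4g}")
```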
Conclusions
The initiation, completion, and publication of Phase I trials differ from those of other studies. Moreover, the results of these trials should be published in order to ensure the integrity of the overall body of scientific knowledge, and ultimately the safety of future trial participants and patients.
François Chapuis and colleagues examine a cohort of clinical trial protocols approved by French ethics committees, and show that Phase I trials are less frequently published than other types of trials.
Editors' Summary
Background.
Before a new drug is used to treat patients, its benefits and harms have to be carefully investigated in clinical trials—studies that investigate the drug's effects on people. Because giving any new drug to people is potentially dangerous, drugs are first tested in a short “Phase I” trial in which a few people (usually healthy volunteers) are given doses of the drug likely to have a therapeutic effect. A Phase I trial evaluates the safety and tolerability of the drug and investigates how the human body handles the drug. It may also provide some information about the drug's efficacy that can guide the design of later trials. The next stage of clinical drug development is a Phase II trial in which the therapeutic efficacy of the drug is investigated by giving more patients and volunteers different doses of the drug. Finally, several large Phase III trials are undertaken to confirm the evidence collected in the Phase II trial about the drug's efficacy and safety. If the Phase III trials are successful, the drug will receive official marketing approval. In some cases, this approval requires Phase IV (postapproval) trials to be done to optimize the drug's use in clinical practice.
Why Was This Study Done?
In an ideal world, the results of all clinical trials on new drugs would be published in medical journals so that doctors and patients could make fully informed decisions about the treatments available to them. Unfortunately, this is not an ideal world and, for example, it is well known that the results of Phase III trials in which a new drug outperforms a standard treatment are more likely to be published than those in which the new drug performs badly or has unwanted side effects (an example of “publication bias”). But what about the results of Phase I trials? These need to be widely disseminated so that researchers can avoid unknowingly exposing people to potentially dangerous new drugs after similar drugs have caused adverse side effects. However, drug companies are often reluctant to disclose information on early phase trials. In this study, the researchers ask whether the dissemination of the results of Phase I trials is adequate.
What Did the Researchers Do and Find?
The researchers identified 667 drug trial protocols approved in 1994 by 25 French research ethics committees (independent panels of experts that ensure that the rights, safety, and well-being of trial participants are protected). In 2001, questionnaires were mailed to each trial's principal investigator asking whether the trial had been started and completed and whether its results had been published in a medical journal or otherwise disseminated (for example, by presentation at a scientific meeting). 140 questionnaires for Phase I trials and 304 for Phase II–IV trials were returned and analyzed by the investigators. They found that Phase I trials were more likely to have been started and to have been completed than Phase II–IV trials. The results of 86% of the Phase I studies matched the researchers' expectations, but the study hypothesis was confirmed in only 71% of the Phase II–IV trials. Finally, the results of 17% of the Phase I studies were published in scientific journals compared to 43% of the Phase II–IV studies. About half of the Phase I study results were not disseminated in any form.
What Do These Findings Mean?
These findings suggest that the fate of Phase I trials is different from that of other clinical trials and that there is inadequate dissemination of the results of these early trials. These findings may not be generalizable to other countries and may be affected by the poor questionnaire response rate. Nevertheless, they suggest that steps need to be taken to ensure that the results of Phase I studies are more widely disseminated. Recent calls by the World Health Organization and other bodies for mandatory preregistration in trial registries of all Phase I trials as well as all Phase II–IV trials should improve the situation by providing basic information about Phase I trials whose results are not published in full elsewhere.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000034.
Two recent research articles published in PLoS Medicine—by Ida Sim and colleagues (PLoS Med e191) and by Lisa Bero and colleagues (PLoS Med e217)—investigate publication bias in Phase III trials
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the US Food and Drug Administration (the body that approves drugs in the USA) Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.1000034
PMCID: PMC2642878  PMID: 19226185
7.  Reporting Bias in Drug Trials Submitted to the Food and Drug Administration: Review of Publication and Presentation 
PLoS Medicine  2008;5(11):e217.
Background
Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.
Methods and Findings
This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).
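The reported p = 0.0039 for the nine conclusions that all changed in favor of the test drug is consistent with a two-sided exact (sign) test: 2 × 0.5^9 ≈ 0.0039. The sketch below reproduces that arithmetic; the confidence-interval method the authors used is not stated, so the bounds computed here (Clopper-Pearson) need not match the published 72%–100% exactly.

```python
# A minimal sketch of the arithmetic behind "all nine changed conclusions
# favored the test drug, p = 0.0039": under the null hypothesis that a
# change is equally likely to favor either direction, a two-sided exact
# (sign) test on 9 of 9 gives 2 * 0.5**9 = 0.0039.
from scipy.stats import binomtest

result = binomtest(k=9, n=9, p=0.5, alternative="two-sided")
print(f"exact two-sided p = {result.pvalue:.4f}")  # 0.0039

ci = result.proportion_ci(confidence_level=0.95, method="exact")  # Clopper-Pearson
print(f"95% CI for the proportion favoring the test drug: {ci.low:.2f}-{ci.high:.2f}")
```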
Conclusions
Many trials were still not published 5 y after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
Lisa Bero and colleagues review the publication status of all efficacy trials carried out in support of new drug approvals from 2001 and 2002, and find that a quarter of trials remain unpublished.
Editors' Summary
Background.
All health-care professionals want their patients to have the best available clinical care—but how can they identify the optimum drug or intervention? In the past, clinicians used their own experience or advice from colleagues to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical research findings. So, for example, before a new drug is approved for the treatment of a specific disease in the United States and becomes available for doctors to prescribe, the drug's sponsors (usually a pharmaceutical company) must submit a “New Drug Application” (NDA) to the US Food and Drug Administration (FDA). The NDA tells the story of the drug's development from laboratory and animal studies through to clinical trials, including “efficacy” trials in which the efficacy and safety of the new drug and of a standard drug for the disease are compared by giving groups of patients the different drugs and measuring several key (primary) “outcomes.” FDA reviewers use this evidence to decide whether to approve a drug.
Why Was This Study Done?
Although the information in NDAs is publicly available, clinicians and patients usually learn about new drugs from articles published in medical journals after drug approval. Unfortunately, drug sponsors sometimes publish the results only of the trials in which their drug performed well and in which statistical analyses indicate that the drug's improved performance was a real effect rather than a lucky coincidence. Trials in which a drug did not show a “statistically significant benefit” or where the drug was found to have unwanted side effects often remain unpublished. This “publication bias” means that the scientific literature can contain an inaccurate picture of a drug's efficacy and safety relative to other therapies. This may lead to clinicians preferentially prescribing newer, more expensive drugs that are not necessarily better than older drugs. In this study, the researchers test the hypothesis that not all the trial results in NDAs are published in medical journals. They also investigate whether there are any discrepancies between the trial data included in NDAs and in published articles.
What Did the Researchers Do and Find?
The researchers identified all the efficacy trials included in NDAs for totally new drugs that were approved by the FDA in 2001 and 2002 and searched the scientific literature for publications between July 2006 and June 2007 relating to these trials. Only three-quarters of the efficacy trials in the NDAs were published; trials with favorable outcomes were nearly five times as likely to be published as those without favorable outcomes. Although 155 primary outcomes were in both the papers and the NDAs, 41 outcomes were only in the NDAs. Conversely, 17 outcomes were only in the papers; 15 of these favored the test drug. Of the 43 primary outcomes reported in the NDAs that showed no statistically significant benefit for the test drug, only half were included in the papers; for five of the reported primary outcomes, the statistical significance differed between the NDA and the paper and generally favored the test drug in the papers. Finally, nine out of 99 conclusions differed between the NDAs and the papers; each time, the published conclusion favored the test drug.
What Do These Findings Mean?
These findings indicate that the results of many trials of new drugs are not published 5 years after FDA approval of the drug. Furthermore, unexplained discrepancies between the data and conclusions in NDAs and in medical journals are common and tend to paint a more favorable picture of the new drug in the scientific literature than in the NDAs. Overall, these findings suggest that the information on the efficacy of new drugs that is readily available to clinicians and patients through the published scientific literature is incomplete and potentially biased. The recent introduction in the US and elsewhere of mandatory registration of all clinical trials before they start and of mandatory publication in trial registers of the full results of all the predefined primary outcomes should reduce publication bias over the next few years and should allow clinicians and patients to make fully informed treatment decisions.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050217.
This study is further discussed in a PLoS Medicine Perspective by An-Wen Chan
PLoS Medicine recently published a related article by Ida Sim and colleagues: Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: A literature analysis. PLoS Med 5: e191. doi:10.1371/journal.pmed.0050191
The Food and Drug Administration provides information about drug approval in the US for consumers and for health-care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
NDAs for approved drugs can also be found on this Web site
The ClinicalTrials.gov Web site provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward setting international norms and standards for the reporting of clinical trials (in several languages)
doi:10.1371/journal.pmed.0050217
PMCID: PMC2586350  PMID: 19067477
8.  Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database 
PLoS Medicine  2012;9(3):e1001189.
A comparison of data held by the U.S. Food and Drug Administration (FDA) against data from journal reports of clinical trials enables estimation of the extent of publication bias for antipsychotics.
Background
Publication bias compromises the validity of evidence-based medicine, yet a growing body of research shows that this problem is widespread. Efficacy data from drug regulatory agencies, e.g., the US Food and Drug Administration (FDA), can serve as a benchmark or control against which data in journal articles can be checked. Thus one may determine whether publication bias is present and quantify the extent to which it inflates apparent drug efficacy.
Methods and Findings
FDA Drug Approval Packages for eight second-generation antipsychotics—aripiprazole, iloperidone, olanzapine, paliperidone, quetiapine, risperidone, risperidone long-acting injection (risperidone LAI), and ziprasidone—were used to identify a cohort of 24 FDA-registered premarketing trials. The results of these trials according to the FDA were compared with the results conveyed in corresponding journal articles. The relationship between study outcome and publication status was examined, and effect sizes derived from the two data sources were compared. Among the 24 FDA-registered trials, four (17%) were unpublished. Of these, three failed to show that the study drug had a statistical advantage over placebo, and one showed the study drug was statistically inferior to the active comparator. Among the 20 published trials, the five that were not positive, according to the FDA, showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest (8%) and not statistically significant. On the other hand, the effect size for unpublished trials (0.23, 95% confidence interval 0.07 to 0.39) was less than half that for the published trials (0.47, 95% confidence interval 0.40 to 0.54), a difference that was significant.
Conclusions
The magnitude of publication bias found for antipsychotics was less than that found previously for antidepressants, possibly because antipsychotics demonstrate superiority to placebo more consistently. Without increased access to regulatory agency data, publication bias will continue to blur distinctions between effective and ineffective drugs.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that, when they are ill, health-care professionals will ensure that they get the best available treatment. But how do clinicians know which treatment is likely to be most effective? In the past, clinicians used their own experience to make such decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the efficacy and safety of medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results from clinical trials are published in an unbiased manner. Unfortunately, “publication bias” is common. For example, the results of trials in which a new drug did not perform better than existing drugs or in which it had unwanted side effects often remain unpublished. Moreover, published trials can be subject to outcome reporting bias—the publication may only include those trial outcomes that support the use of the new treatment rather than presenting all the available data.
Why Was This Study Done?
If only strongly positive results are published and negative results and side-effects remain unpublished, a drug will seem safer and more effective than it is in reality, which could affect clinical decision-making and patient outcomes. But how big a problem is publication bias? Here, researchers use US Food and Drug Administration (FDA) reviews as a benchmark to quantify the extent to which publication bias may be altering the apparent efficacy of second-generation antipsychotics (drugs used to treat schizophrenia and other mental illnesses that are characterized by a loss of contact with reality). In the US, all new drugs have to be approved by the FDA before they can be marketed. During this approval process, the FDA collects and keeps complete information about premarketing trials, including descriptions of their design and prespecified outcome measures and all the data collected during the trials. Thus, a comparison of the results included in the FDA reviews for a group of trials and the results that appear in the literature for the same trials can provide direct evidence about publication bias.
What Did the Researchers Do and Find?
The researchers identified 24 FDA-registered premarketing trials that investigated the use of eight second-generation antipsychotics for the treatment of schizophrenia or schizoaffective disorder. They searched the published literature for reports of these trials, and, by comparing the results of these trials according to the FDA with the results in the published articles, they examined the relationship between the study outcome (did the FDA consider it positive or negative?) and publication status and looked for outcome reporting bias. Four of the 24 FDA-registered trials were unpublished. Three of these unpublished trials failed to show that the study drug was more effective than a placebo (a “dummy” pill); the fourth showed that the study drug was inferior to another drug already in use in the US. Among the 20 published trials, the five that the FDA judged not positive showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance (it might have happened by chance), and the mean effect size (a measure of drug effectiveness) derived from the published literature was only slightly higher than that derived from the FDA records. By contrast, within the FDA dataset, the mean effect size of the published trials was approximately double that of the unpublished trials.
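The inflating effect of leaving unpublished trials out of a pooled estimate can be illustrated with a short Python sketch; the per-trial effect sizes below are entirely hypothetical and are not the FDA or journal data analysed in this study.

published_effects = [0.50, 0.45, 0.40, 0.55, 0.45]    # hypothetical effect sizes from published trials
unpublished_effects = [0.20, 0.25]                     # hypothetical effect sizes from unpublished trials

def mean(values):
    return sum(values) / len(values)

effect_all = mean(published_effects + unpublished_effects)   # what the complete evidence base shows
effect_published = mean(published_effects)                   # what the published literature alone shows
inflation = (effect_published - effect_all) / effect_all * 100
print(f"All trials: {effect_all:.2f}; published only: {effect_published:.2f}; apparent inflation: {inflation:.0f}%")

In the study itself this kind of inflation was modest (about 8%), but the arithmetic is the same: whatever is missing from the published record cannot pull the pooled estimate back down.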
What Do These Findings Mean?
The accuracy of these findings is limited by the small number of trials analyzed. Moreover, this study considers only the efficacy and not the safety of these drugs, it assumes that the FDA database is complete and unbiased, and its findings are not generalizable to other conditions that antipsychotics are used to treat. Nevertheless, these findings show that publication bias in the reporting of trials of second-generation antipsychotic drugs enhances the apparent efficacy of these drugs. Although the magnitude of the publication bias seen here is less than that seen in a similar study of antidepressant drugs, these findings show how selective reporting of clinical trial data undermines the integrity of the evidence base and can deprive clinicians of accurate data on which to base their prescribing decisions. Increased access to FDA reviews, suggest the researchers, is therefore essential to prevent publication bias continuing to blur distinctions between effective and ineffective drugs.
Additional Information
Please access these web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001189.
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals
Detailed information about the process by which drugs are approved is on the web site of the FDA Center for Drug Evaluation and Research; also, FDA Drug Approval Packages are available for many drugs; the FDA Transparency Initiative, which was launched in June 2009, is an agency-wide effort to improve the transparency of the FDA
FDA-approved product labeling on drugs marketed in the US can be found at the US National Library of Medicine's DailyMed web page
Wikipedia has a page on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
MedlinePlus provides links to sources of information on schizophrenia and on psychotic disorders (in English and Spanish)
Patient experiences of psychosis, including the effects of medication, are provided by the charity HealthtalkOnline
doi:10.1371/journal.pmed.1001189
PMCID: PMC3308934  PMID: 22448149
9.  The influence of the European paediatric regulation on marketing authorisation of orphan drugs for children 
Background
Drug development for rare diseases is challenging, especially when these orphan drugs (OD) are intended for children. In 2007 the EU Paediatric Drug Regulation was enacted to improve the development of high quality and ethically researched medicines for children through the establishment of Paediatric Investigation Plans (PIPs). The effect of the EU Paediatric Drug Regulation on the marketing authorisation (MA) of drugs for children with rare diseases was studied.
Methods
Data on all designated orphan drugs, their indication, MA, PIPs and indication group (adult or child) were obtained from the European Medicines Agency (EMA). The outcome and duration of the process from orphan drug designation (ODD) to MA were compared, per indication, by age group. The effect of the Paediatric Drug Regulation, implemented in 2007, on the application process was assessed with survival analysis.
Results
Eighty-one orphan drugs have obtained MA since 2000, and half are authorised for (a subgroup of) children; another 34 are currently undergoing further investigation in children through agreed PIPs. The Paediatric Drug Regulation did not significantly increase the proportion of ODDs with potential paediatric indications (58% of ODDs before vs 64% after 2007, p = 0.1) and did not lead to more MAs for ODs with paediatric indications (60% vs 43%, p = 0.22). ODs authorised after 2007 had a longer time to MA than those authorised before 2007 (hazard ratio 2.80, 95% CI 1.84-4.28, p < 0.001); potential paediatric use did not influence the time to MA (hazard ratio 1.14, 95% CI 0.77-1.70, p = 0.52).
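For readers who want to see what such a survival analysis looks like in practice, the following is a minimal sketch using the lifelines Python library with entirely hypothetical data; it is not the authors' code or dataset.

import pandas as pd
from lifelines import CoxPHFitter

# Toy records: months from orphan drug designation to marketing authorisation,
# whether MA was reached (1) or the application is censored (0), and two covariates.
drugs = pd.DataFrame({
    "months_to_ma": [24, 36, 48, 18, 60, 30, 42, 55, 27, 33, 50, 21],
    "ma_granted":   [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1],
    "post_2007":    [0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0],   # designated after the Regulation
    "paediatric":   [1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0],   # potential paediatric use
})

cph = CoxPHFitter()
cph.fit(drugs, duration_col="months_to_ma", event_col="ma_granted")
cph.print_summary()   # hazard ratios with 95% CIs; a ratio below 1 for a covariate
                      # means a lower rate of reaching MA, i.e. a longer time to authorisation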
Conclusions
The EU Paediatric Drug Regulation had only a minor impact on the development and availability of ODs for children and was associated with a longer time to MA, but it ensured the further paediatric development of drugs that are still used off-label in children. The impact of the Paediatric Drug Regulation on the quantity and quality of research in children through PIPs is not yet clear.
doi:10.1186/s13023-014-0120-x
PMCID: PMC4237943  PMID: 25091201
European paediatric drug regulation; Orphan drug development; Paediatric use; Paediatric investigation plan; Rare diseases
10.  Systematic review of safety in paediatric drug trials published in 2007 
Background
There is now greater involvement of children in drug trials to ensure that paediatric medicines are supported by sound scientific evidence. The safety of the participating children is of paramount importance. Previous research shows that these children can suffer moderate and severe adverse drug reactions (ADRs) in clinical trials, yet very few of the trials designated a data safety monitoring board (DSMB) to oversee the trial.
Methods
Safety data from a systematic review of paediatric drug randomised controlled trials (RCTs) published in 2007 were analysed. All reported adverse events (AEs) were classified and assessed to determine whether an ADR had been experienced. ADRs were then categorised according to severity. Each trial report was examined as to whether an independent DSMB was in place.
Results
Of the 582 paediatric drug RCTs analysed, 210 (36%) reported that a serious AE had occurred, and in 15% mortality was reported. ADRs were detected in more than half of the RCTs (305); 66 (11%) were severe, and 79 (14%) were moderate. Severe ADRs involved a wide range of organ systems and were frequently associated with cytotoxic drugs, antiparasitics, anticonvulsants and psychotropic drugs. Two RCTs reported significantly higher mortality rates in the treatment group. Only 69 (12%) of the RCTs stated there was a DSMB. DSMBs terminated five RCTs and changed the protocol in one.
Conclusions
Children participating in drug RCTs experience a significant amount and a wide range of ADRs. DSMBs are needed to ensure the safety of paediatric participants in clinical drug trials.
doi:10.1007/s00228-011-1112-6
PMCID: PMC3256313  PMID: 21858432
Paediatric clinical trials; Adverse drug reactions (ADRs); Drug safety; Data safety monitoring boards (DSMBs); Systematic review
11.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and the influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, ranging from 1% for BMJ to 15% for NEJM in 2007. For the two journals that disclosed data, income from the sales of reprints accounted for 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments, without the patient or the clinician knowing the allocation, and the efficacy of the various treatments is then compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
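An approximate impact factor of this kind reduces to simple arithmetic: citations in a given year to articles published in the previous two years, divided by the number of citable articles published in those two years. The Python sketch below uses hypothetical counts, not the Web of Science figures from the study.

citations_total = 60000      # hypothetical: citations in 2007 to all 2005-2006 articles
citable_items_total = 1500   # hypothetical: citable articles published in 2005-2006
citations_industry = 9000    # hypothetical: citations to industry-supported RCTs among them
items_industry = 50          # hypothetical: industry-supported RCTs published in 2005-2006

if_with = citations_total / citable_items_total
if_without = (citations_total - citations_industry) / (citable_items_total - items_industry)

print(f"Approximate impact factor: {if_with:.2f}")
print(f"Omitting industry-supported RCTs: {if_without:.2f}")
print(f"Relative decrease: {(if_with - if_without) / if_with:.1%}")

Because industry-supported RCTs attract disproportionately many citations relative to the number of articles they represent, removing them lowers the ratio, which is the pattern reported for these six journals.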
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
12.  Number of Patients Studied Prior to Approval of New Medicines: A Database Analysis 
PLoS Medicine  2013;10(3):e1001407.
In an evaluation of medicines approved by the European Medicines Agency from 2000 to 2010, Ruben Duijnhoven and colleagues find that the number of patients evaluated for medicines approved for chronic use is inadequate for evaluation of safety or long-term efficacy.
Background
At the time of approval of a new medicine, there are few long-term data on the medicine's benefit–risk balance. Clinical trials are designed to demonstrate efficacy, but have major limitations with regard to safety in terms of patient exposure and length of follow-up. This study of the number of patients who had received a medicine by the time of its approval by the European Medicines Agency aimed to determine the total number of patients studied, as well as the number of patients studied long term in the case of chronic medication use, and to compare these numbers with the International Conference on Harmonisation's E1 guideline recommendations.
Methods and Findings
All medicines containing new molecular entities approved between 2000 and 2010 were included in the study, including orphan medicines as a separate category. The total number of patients studied before approval was extracted (main outcome). In addition, the number of patients with long-term use (6 or 12 mo) was determined for chronic medication. In total, 200 unique new medicines were identified: 161 standard and 39 orphan medicines. The median total number of patients studied before approval was 1,708 (interquartile range [IQR] 968–3,195) for standard medicines and 438 (IQR 132–915) for orphan medicines. On average, chronic medication was studied in a larger number of patients (median 2,338, IQR 1,462–4,135) than medication for intermediate (878, IQR 513–1,559) or short-term use (1,315, IQR 609–2,420). Safety and efficacy of chronic use were studied in fewer than 1,000 patients for at least 6 and 12 mo in 46.4% and 58.3% of new medicines, respectively. Among the 84 medicines intended for chronic use, 68 (82.1%) met the guideline recommendations for 6-mo use (at least 300 participants studied for 6 mo and at least 1,000 participants studied for any length of time), whereas 67 (79.8%) of the medicines met the criteria for 12-mo patient exposure (at least 100 participants studied for 12 mo).
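The guideline check applied in the last sentence can be written out in a few lines of Python; the function below is an illustrative sketch of the stated criteria, not code from the study.

def meets_ich_e1_chronic(total_patients, patients_6mo, patients_12mo):
    # 6-mo criterion: at least 300 participants studied for 6 mo and at least
    # 1,000 participants studied for any length of time.
    six_month_ok = total_patients >= 1000 and patients_6mo >= 300
    # 12-mo criterion: at least 100 participants studied for 12 mo.
    twelve_month_ok = patients_12mo >= 100
    return six_month_ok, twelve_month_ok

# Hypothetical chronic-use medicine studied in 1,708 patients overall,
# 350 of them for 6 mo and 90 for 12 mo:
print(meets_ich_e1_chronic(1708, 350, 90))   # (True, False)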
Conclusions
For medicines intended for chronic use, the number of patients studied before marketing is insufficient to evaluate safety and long-term efficacy. Both safety and efficacy require continued study after approval. New epidemiologic tools and legislative actions necessitate a review of the requirements for the number of patients studied prior to approval, particularly for chronic use, and adequate use of post-marketing studies.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Before any new medicine is marketed for the treatment of a human disease, it has to go through extensive laboratory and clinical research. In the laboratory, scientists investigate the causes of diseases, identify potential new treatments, and test these interventions in disease models, some of which involve animals. The safety and efficacy of potential new interventions is then investigated in a series of clinical trials—studies in which the new treatment is tested in selected groups of patients under strictly controlled conditions, first to determine whether the drug is tolerated by humans and then to assess its efficacy. Finally, the results of these trials are reviewed by the government body responsible for drug approval; in the US, this body is the Food and Drug Administration, and in the European Union, the European Medicines Agency (EMA) is responsible for the scientific evaluation and approval of new medicines.
Why Was This Study Done?
Clinical trials are primarily designed to test the efficacy—the ability to produce the desired therapeutic effect—of new medicines. The number of patients needed to establish efficacy determines the size of a clinical trial, and the indications for which efficacy must be shown determine the trial's duration. However, identifying adverse effects of drugs generally requires the drug to be taken by more patients than are required to show efficacy, so the information about adverse effects is often relatively limited at the end of clinical testing. Consequently, when new medicines are approved, their benefit–risk ratios are often poorly defined, even though physicians need this information to decide which treatment to recommend to their patients. For the evaluation of risk or adverse effects of medicines being developed for chronic (long-term) treatment of non-life-threatening diseases, current guidelines recommend that at least 1,000–1,500 patients are exposed to the new drug and that 300 and 100 patients use the drug for six and twelve months, respectively, before approval. But are these guidelines being followed? In this database analysis, the researchers use data collected by the EMA to determine how many patients are exposed to new medicines before approval in the European Union and how many are exposed for extended periods of time to medicines intended for chronic use.
What Did the Researchers Do and Find?
Using the European Commission's Community Register of Medicinal Products, the researchers identified 161 standard medicines and 39 orphan medicines (medicines to treat or prevent rare life-threatening diseases) that contained new active substances and that were approved in the European Union between 2000 and 2010. They extracted information on the total number of patients studied and on the number exposed to the medicines for six months and twelve months before approval of each medicine from EMA's European public assessment reports. The median number of patients studied before approval was 1,708 for standard medicines and 438 for orphan medicines (marketing approval is easier to obtain for orphan medicines than for standard medicines to encourage drug companies to develop medicines that might otherwise be unprofitable). On average, medicines for chronic use (for example, asthma medications) were studied in more patients (2,338) than those for intermediate use such as anticancer drugs (878) or short-term use such as antibiotics (1,315). The safety and efficacy of chronic use were studied in fewer than 1,000 patients for at least six and twelve months in 46.4% and 58.4% of new medicines, respectively. Finally, among the 84 medicines intended for chronic use, 72 were studied in at least 300 patients for six months, and 70 were studied in at least 100 patients for twelve months.
What Do These Findings Mean?
These findings suggest that although the number of patients studied before approval is sufficient to determine the short-term efficacy of new medicines, it is insufficient to determine safety or long-term efficacy. Any move by drug approval bodies to require pharmaceutical companies to increase the total number of patients exposed to a drug, or the number exposed for extended periods of time to drugs intended for chronic use, would inevitably delay the entry of new products into the market, which likely would be unacceptable to patients and healthcare providers. Nevertheless, the researchers suggest that a reevaluation of the study size and long-term data requirements that need to be met for the approval of new medicines, particularly those designed for long-term use, is merited. They also stress the need for continued study of both the safety and efficacy of new medicines after approval and the importance of post-marketing studies that actively examine safety issues.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001407.
The European Medicines Agency (EMA) provides information about all aspects of the scientific evaluation and approval of new medicines in the European Union; its European public assessment reports are publicly available
The European Commission's Community Register of Medicinal Products is a publicly searchable database of medicinal products approved for human use in the European Union
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health professionals
The US National Institutes of Health provides information (including personal stories) about clinical trials
doi:10.1371/journal.pmed.1001407
PMCID: PMC3601954  PMID: 23526887
13.  Trial Publication after Registration in ClinicalTrials.Gov: A Cross-Sectional Analysis 
PLoS Medicine  2009;6(9):e1000144.
Joseph Ross and colleagues examine publication rates of clinical trials and find low rates of publication even following registration in ClinicalTrials.gov.
Background
ClinicalTrials.gov is a publicly accessible, Internet-based registry of clinical trials managed by the US National Library of Medicine that has the potential to address selective trial publication. Our objectives were to examine completeness of registration within ClinicalTrials.gov and to determine the extent and correlates of selective publication.
Methods and Findings
We examined reporting of registration information among a cross-section of trials that had been registered at ClinicalTrials.gov after December 31, 1999 and updated as having been completed by June 8, 2007, excluding phase I trials. We then determined publication status among a random 10% subsample by searching MEDLINE using a systematic protocol, after excluding trials completed after December 31, 2005 to allow at least 2 y for publication following completion. Among the full sample of completed trials (n = 7,515), nearly 100% reported all data elements mandated by ClinicalTrials.gov, such as intervention and sponsorship. Optional data element reporting varied, with 53% reporting trial end date, 66% reporting primary outcome, and 87% reporting trial start date. Among the 10% subsample, less than half (311 of 677, 46%) of trials were published, among which 96 (31%) provided a citation within ClinicalTrials.gov of a publication describing trial results. Trials primarily sponsored by industry (40%, 144 of 357) were less likely to be published when compared with nonindustry/nongovernment sponsored trials (56%, 110 of 198; p<0.001), but there was no significant difference when compared with government sponsored trials (47%, 57 of 122; p = 0.22). Among trials that reported an end date, 75 of 123 (61%) completed prior to 2004, 50 of 96 (52%) completed during 2004, and 62 of 149 (42%) completed during 2005 were published (p = 0.006).
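The sponsorship comparison can be reconstructed in outline from the counts reported above (144 of 357 industry-sponsored trials published versus 110 of 198 nonindustry/nongovernment-sponsored trials). The Python sketch below uses a simple chi-square test as a plausible stand-in; it is not necessarily the exact test the authors applied.

from scipy.stats import chi2_contingency

published_industry, total_industry = 144, 357
published_other, total_other = 110, 198

# 2x2 table of published versus unpublished counts by sponsorship
table = [
    [published_industry, total_industry - published_industry],
    [published_other, total_other - published_other],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Publication rates: {published_industry / total_industry:.0%} vs "
      f"{published_other / total_other:.0%} (p = {p_value:.4f})")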
Conclusions
Reporting of optional data elements varied and publication rates among completed trials registered within ClinicalTrials.gov were low. Without greater attention to reporting of all data elements, the potential for ClinicalTrials.gov to address selective publication of clinical trials will be limited.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that whenever they are ill, health care professionals will make sure they get the best available treatment. But how do clinicians know which treatment is most appropriate? In the past, clinicians used their own experience to make treatment decisions. Nowadays, they rely on evidence-based medicine—the systematic review and appraisal of the results of clinical trials, studies that investigate the efficacy and safety of medical interventions in people. However, evidence-based medicine can only be effective if all the results from clinical trials are published promptly in medical journals. Unfortunately, the results of trials in which a new drug did not perform better than existing drugs or in which it had unwanted side effects often remain unpublished or only appear in the public domain many years after the drug has been approved for clinical use by the US Food and Drug Administration (FDA) and other governmental bodies.
Why Was This Study Done?
The extent of this “selective” publication, which can impair evidence-based clinical practice, remains unclear but is thought to be substantial. In this study, the researchers investigate the problem of selective publication by systematically examining the extent of publication of the results of trials registered in ClinicalTrials.gov, a Web-based registry of US and international clinical trials. ClinicalTrials.gov was established in 2000 by the US National Library of Medicine in response to the 1997 FDA Modernization Act. This act required preregistration of all trials of new drugs to provide the public with information about trials in which they might be able to participate. Mandatory data elements for registration in ClinicalTrials.gov initially included the trial's title, the condition studied in the trial, the trial design, and the intervention studied. In September 2007, the FDA Amendments Act expanded the mandatory requirements for registration in ClinicalTrials.gov by making it necessary, for example, to report the trial start date and to report primary and secondary outcomes (the effect of the intervention on predefined clinical measurements) in the registry within 2 years of trial completion.
What Did the Researchers Do and Find?
The researchers identified 7,515 trials that were registered within ClinicalTrials.gov after December 31, 1999 (excluding phase I, safety trials), and whose record indicated trial completion by June 8, 2007. Most of these trials reported all the mandatory data elements that were required by ClinicalTrials.gov before the FDA Amendments Act but reporting of optional data elements was less complete. For example, only two-thirds of the trials reported their primary outcome. Next, the researchers randomly selected 10% of the trials and, after excluding trials whose completion date was after December 31, 2005 (to allow at least two years for publication), determined the publication status of this subsample by systematically searching MEDLINE (an online database of articles published in selected medical and scientific journals). Fewer than half of the trials in the subsample had been published, and the citation for only a third of these publications had been entered into ClinicalTrials.gov. Only 40% of industry-sponsored trials had been published compared to 56% of nonindustry/nongovernment-sponsored trials, a difference that is unlikely to have occurred by chance. Finally, 61% of trials with a completion date before 2004 had been published, but only 42% of trials completed during 2005 had been published.
What Do These Findings Mean?
These findings indicate that, over the period studied, critical trial information was not included in the ClinicalTrials.gov registry. The FDA Amendments Act should remedy some of these shortcomings but only if the accuracy and completeness of the information in ClinicalTrials.gov is carefully monitored. These findings also reveal that registration in ClinicalTrials.gov does not guarantee that trial results will appear in a timely manner in the scientific literature. However, they do not address the reasons for selective publication (which may be, in part, because it is harder to publish negative results than positive results), and they are potentially limited by the methods used to discover whether trial results had been published. Nevertheless, these findings suggest that the FDA, trial sponsors, and the scientific community all need to make a firm commitment to minimize the selective publication of trial results to ensure that patients and clinicians have access to the information they need to make fully informed treatment decisions.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000144.
PLoS Medicine recently published two related articles on selective publication by Ida Sim and colleagues and by Lisa Bero and colleagues, as well as an editorial discussing the FDA Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The US Food and Drug Administration provides further information about drug approval in the US for consumers and health care professionals
doi:10.1371/journal.pmed.1000144
PMCID: PMC2728480  PMID: 19901971
14.  Evidence for the Selective Reporting of Analyses and Discrepancies in Clinical Trials: A Systematic Review of Cohort Studies of Clinical Trials 
PLoS Medicine  2014;11(6):e1001666.
In a systematic review of cohort studies, Kerry Dwan and colleagues examine the evidence for selective reporting and discrepancies in analyses between journal publications and other documents for clinical trials.
Please see later in the article for the Editors' Summary
Background
Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs).
Methods and Findings
A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included. Twenty-two studies reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively.
Conclusions
Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
In the past, clinicians relied on their own experience when choosing the best treatment for their patients. Nowadays, they turn to evidence-based medicine—the systematic review and appraisal of trials, studies that investigate the benefits and harms of medical treatments in patients. However, evidence-based medicine can guide clinicians only if all the results from clinical trials are published in an unbiased and timely manner. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Recent initiatives—such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a prerequisite for publication in medical journals—aim to prevent these biases, which pose a threat to informed medical decision-making.
Why Was This Study Done?
Selective reporting of analyses of outcomes may also affect the validity of clinical trial findings. Sometimes, for example, a trial publication will include a per protocol analysis (which considers only the outcomes of patients who received their assigned treatment) rather than a pre-planned intention-to-treat analysis (which considers the outcomes of all the patients regardless of whether they received their assigned treatment). If the decision to publish the per protocol analysis is based on the results of this analysis being more favorable than those of the intention-to-treat analysis (which more closely resembles “real” life), then “analysis reporting bias” has occurred. In this systematic review, the researchers investigate the selective reporting of analyses and discrepancies in randomized controlled trials (RCTs) by reviewing published studies that assessed selective reporting of analyses in groups (cohorts) of RCTs and discrepancies in analyses of RCTs between different sources (for example, between the protocol in a trial registry and the journal publication) or different sections of a source. A systematic review uses predefined criteria to identify all the research on a given topic.
What Did the Researchers Do and Find?
The researchers identified 22 cohort studies (containing 3,140 RCTs) that were eligible for inclusion in their systematic review. All of these studies reported on discrepancies between the information provided by the RCTs in different places, but none investigated the evidence for analysis reporting bias. Several of the cohort studies reported, for example, that there were discrepancies in the statistical analyses included in the different documents associated with the RCTs included in their analysis. Other types of discrepancies reported by the cohort studies included discrepancies in the reporting of composite outcomes (an outcome in which multiple end points are combined) and in the reporting of subgroup analyses (investigations of outcomes in subgroups of patients that should be predefined in the trial protocol to avoid bias). Discrepancy rates varied among the RCTs according to the types of analyses and cohort studies considered. Thus, whereas in one cohort study discrepancies were present in the statistical test used for the analysis of the primary outcome in only 7% of the included studies, they were present in the subgroup analyses of all the included studies.
What Do These Findings Mean?
These findings indicate that discrepancies in analyses between publications and other study documents such as protocols in trial registries are common. The reasons for these discrepancies in analyses were not discussed in trial reports but may be the result of reporting bias, errors, or legitimate departures from a pre-specified protocol. For example, a statistical analysis that is not specified in the trial protocol may sometimes appear in a publication because the journal requested its inclusion as a condition of publication. The researchers suggest that it may be impossible for systematic reviewers to distinguish between these possibilities simply by looking at the source documentation. Instead, they suggest, it may be necessary for reviewers to contact the trial authors. However, to make selective reporting of analyses more easily detectable, they suggest that protocols and analysis plans should be published and that investigators should be required to stick to these plans or explain any discrepancies when they publish their trial results. Together with other initiatives, this approach should help improve the quality of evidence-based medicine and, as a result, the treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001666.
Wikipedia has pages on evidence-based medicine, on systematic reviews, and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials
The Cochrane Collaboration is a global independent network of health practitioners, researchers, patient advocates, and others that aims to promote evidence-informed health decision-making by producing high-quality, relevant, accessible systematic reviews and other synthesized research evidence; the Cochrane Handbook for Systematic Reviews of Interventions describes the preparation of systematic reviews in detail
PLOS Medicine recently launched a Reporting Guidelines Collection, an open-access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information
doi:10.1371/journal.pmed.1001666
PMCID: PMC4068996  PMID: 24959719
15.  Threats to Validity in the Design and Conduct of Preclinical Efficacy Studies: A Systematic Review of Guidelines for In Vivo Animal Experiments 
PLoS Medicine  2013;10(7):e1001489.
Background
The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address.
Methods and Findings
We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria—most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation.
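The most frequently recommended practice, a power calculation to determine sample size, is easy to illustrate; the Python sketch below uses statsmodels with arbitrary example values for the effect size, significance level, and power, none of which come from the guidelines themselves.

from statsmodels.stats.power import TTestIndPower

# Sample size per group for a two-sample t-test: how many animals are needed to
# detect a standardized effect of 0.8 with 80% power at a two-sided alpha of 0.05?
n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(f"Animals needed per group: {n_per_group:.1f}")   # roughly 26 per group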
Conclusions
By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The development process for new drugs is lengthy and complex. It begins in the laboratory, where scientists investigate the causes of diseases and identify potential new treatments. Next, promising interventions undergo preclinical research in cells and in animals (in vivo animal experiments) to test whether the intervention has the expected effect and to support the generalization (extension) of this treatment–effect relationship to patients. Drugs that pass these tests then enter clinical trials, where their safety and efficacy is tested in selected groups of patients under strictly controlled conditions. Finally, the government bodies responsible for drug approval review the results of the clinical trials, and successful drugs receive a marketing license, usually a decade or more after the initial laboratory work. Notably, only 11% of agents that enter clinical testing (investigational drugs) are ultimately licensed.
Why Was This Study Done?
The frequent failure of investigational drugs during clinical translation is potentially harmful to trial participants. Moreover, the costs of these failures are passed onto healthcare systems in the form of higher drug prices. It would be good, therefore, to reduce the attrition rate of investigational drugs. One possible explanation for the dismal success rate of clinical translation is that preclinical research, the key resource for justifying clinical development, is flawed. To address this possibility, several groups of preclinical researchers have issued guidelines intended to improve the design and execution of in vivo animal studies. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the authors identify the experimental practices that are commonly recommended in these guidelines and organize these recommendations according to the type of threat to validity (internal, construct, or external) that they address. Internal threats to validity are factors that confound reliable inferences about treatment–effect relationships in preclinical research. For example, experimenter expectation may bias outcome assessment. Construct threats to validity arise when researchers mischaracterize the relationship between an experimental system and the clinical disease it is intended to represent. For example, researchers may use an animal model for a complex multifaceted clinical disease that only includes one characteristic of the disease. External threats to validity are unseen factors that frustrate the transfer of treatment–effect relationships from animal models to patients.
What Did the Researchers Do and Find?
The researchers identified 26 preclinical guidelines that met their predefined eligibility criteria. Twelve guidelines addressed preclinical research for neurological and cerebrovascular drug development; other disorders covered by guidelines included cardiac and circulatory disorders, sepsis, pain, and arthritis. Together, the guidelines offered 55 different recommendations for the design and execution of preclinical in vivo animal studies. Nineteen recommendations addressed threats to internal validity. The most commonly included recommendations of this type called for the use of power calculations to ensure that sample sizes are large enough to yield statistically meaningful results, random allocation of animals to treatment groups, and “blinding” of researchers who assess outcomes to treatment allocation. Among the 25 recommendations that addressed threats to construct validity, the most commonly included recommendations called for characterization of the properties of the animal model before experimentation and matching of the animal model to the human manifestation of the disease. Finally, six recommendations addressed threats to external validity. The most commonly included of these recommendations suggested that preclinical research should be replicated in different models of the same disease and in different species, and should also be replicated independently.
What Do These Findings Mean?
This systematic review identifies a range of investigational recommendations that preclinical researchers believe address threats to the validity of preclinical efficacy studies. Many of these recommendations are not widely implemented in preclinical research at present. Whether the failure to implement them explains the frequent discordance between the results on drug safety and efficacy obtained in preclinical research and in clinical trials is currently unclear. These findings provide a starting point, however, for the improvement of existing preclinical research guidelines for specific diseases, and for the development of similar guidelines for other diseases. They also provide an evidence-based platform for the analysis of preclinical evidence and for the study and evaluation of preclinical research practice. These findings should, therefore, be considered by investigators, institutional review bodies, journals, and funding agents when designing, evaluating, and sponsoring translational research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001489.
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health professionals; its Patient Network provides a step-by-step description of the drug development process that includes information on preclinical research
The UK Medicines and Healthcare Products Regulatory Agency (MHRA) provides information about all aspects of the scientific evaluation and approval of new medicines in the UK; its My Medicine: From Laboratory to Pharmacy Shelf web pages describe the drug development process from scientific discovery, through preclinical and clinical research, to licensing and ongoing monitoring
The STREAM website provides ongoing information about policy, ethics, and practices used in clinical translation of new drugs
The CAMARADES collaboration offers a “supporting framework for groups involved in the systematic review of animal studies” in stroke and other neurological diseases
doi:10.1371/journal.pmed.1001489
PMCID: PMC3720257  PMID: 23935460
16.  Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data 
PLoS Medicine  2013;10(10):e1001526.
Beate Wieseler and colleagues compare the completeness of reporting of patient-relevant clinical trial outcomes between clinical study reports and publicly available data.
Please see later in the article for the Editors' Summary
Background
Access to unpublished clinical study reports (CSRs) is currently being discussed as a means to allow unbiased evaluation of clinical research. The Institute for Quality and Efficiency in Health Care (IQWiG) routinely requests CSRs from manufacturers for its drug assessments.
Our objective was to determine the information gain from CSRs compared to publicly available sources (journal publications and registry reports) for patient-relevant outcomes included in IQWiG health technology assessments (HTAs) of drugs.
Methods and Findings
We used a sample of 101 trials with full CSRs received for 16 HTAs of drugs completed by IQWiG between 15 January 2006 and 14 February 2011, and analyzed the CSRs and the publicly available sources of these trials. For each document type we assessed the completeness of information on all patient-relevant outcomes included in the HTAs (benefit outcomes, e.g., mortality, symptoms, and health-related quality of life; harm outcomes, e.g., adverse events). We dichotomized the outcomes as “completely reported” or “incompletely reported.” For each document type, we calculated the proportion of outcomes with complete information per outcome category and overall.
We analyzed 101 trials with CSRs; 86 had at least one publicly available source, 65 at least one journal publication, and 50 a registry report. The trials included 1,080 patient-relevant outcomes. The CSRs provided complete information on a considerably higher proportion of outcomes (86%) than the combined publicly available sources (39%). With the exception of health-related quality of life (57%), CSRs provided complete information on 78% to 100% of the various benefit outcomes (combined publicly available sources: 20% to 53%). CSRs also provided considerably more information on harms. The differences in completeness of information for patient-relevant outcomes between CSRs and journal publications or registry reports (or a combination of both) were statistically significant for all types of outcomes.
The main limitation of our study is that our sample is not representative because only CSRs provided voluntarily by pharmaceutical companies upon request could be assessed. In addition, the sample covered only a limited number of therapeutic areas and was restricted to randomized controlled trials investigating drugs.
Conclusions
In contrast to CSRs, publicly available sources provide insufficient information on patient-relevant outcomes of clinical trials. CSRs should therefore be made publicly available.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
People assume that, when they are ill, health care professionals will ensure that they get the best available treatment. In the past, clinicians used their own experience to make decisions about which treatments to offer their patients, but nowadays, they rely on evidence-based medicine—the systematic review and appraisal of clinical trials, studies that investigate the benefits and harms of drugs and other medical interventions in patients. Evidence-based medicine can guide clinicians, however, only if all the results of clinical research are available for evaluation. Unfortunately, the results of trials in which a new drug performs better than existing drugs are more likely to be published than those in which the new drug performs badly or has unwanted side effects (publication bias). Moreover, trial outcomes that support the use of a new treatment are more likely to be published than those that do not support its use (outcome reporting bias). Both types of bias pose a substantial threat to informed medical decision-making.
Why Was This Study Done?
Recent initiatives, such as making registration of clinical trials in a trial registry (for example, ClinicalTrials.gov) a precondition for publication in medical journals, aim to prevent these biases but are imperfect. Another way to facilitate the unbiased evaluation of clinical research might be to increase access to clinical study reports (CSRs)—detailed but generally unpublished accounts of clinical trials. Notably, information from CSRs was recently used to challenge conclusions based on published evidence about the efficacy and safety of the antiviral drug oseltamivir and the antidepressant reboxetine. In this study, the researchers compare the information available in CSRs and in publicly available sources (journal publications and registry reports) for the patient-relevant outcomes included in 16 health technology assessments (HTAs; analyses of the medical implications of the use of specific medical technologies) for drugs; the HTAs were prepared by the Institute for Quality and Efficiency in Health Care (IQWiG), Germany's main HTA agency.
What Did the Researchers Do and Find?
The researchers searched for published journal articles and registry reports for each of 101 trials for which the IQWiG had requested and received full CSRs from drug manufacturers during HTA preparation. They then assessed the completeness of information on the patient-relevant benefit and harm outcomes (for example, symptom relief and adverse effects, respectively) included in each document type. Eighty-six of the included trials had at least one publicly available data source; the results of 15% of the trials were not available in either journals or registry reports. Overall, the CSRs provided complete information on 86% of the patient-relevant outcomes, whereas the combined publicly available sources provided complete information on only 39% of the outcomes. For individual outcomes, the CSRs provided complete information on 78%–100% of the benefit outcomes, with the exception of health-related quality of life (57%); combined publicly available sources provided complete information on 20%–53% of these outcomes. The CSRs also provided more information on patient-relevant harm outcomes than the publicly available sources.
What Do These Findings Mean?
These findings show that, for the clinical trials considered here, publicly available sources provide much less information on patient-relevant outcomes than CSRs. The generalizability of these findings may be limited, however, because the trials included in this study are not representative of all trials. Specifically, only CSRs that were voluntarily provided by drug companies were assessed, a limited number of therapeutic areas were covered by the trials, and the trials investigated only drugs. Nevertheless, these findings suggest that access to CSRs is important for the unbiased evaluation of clinical trials and for informed decision-making in health care. Notably, in June 2013, the European Medicines Agency released a draft policy calling for the proactive publication of complete clinical trial data (possibly including CSRs). In addition, the European Union and the European Commission are considering legal measures to improve the transparency of clinical trial data. Both these initiatives will probably only apply to drugs that are approved after January 2014, however, and not to drugs already in use. The researchers therefore call for CSRs to be made publicly available for both past and future trials, a recommendation also supported by the AllTrials initiative, which is campaigning for all clinical trials to be registered and fully reported.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001526.
Wikipedia has pages on evidence-based medicine, publication bias, and health technology assessment (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The ClinicalTrials.gov website is a searchable register of federally and privately supported clinical trials in the US; it provides information about all aspects of clinical trials
The European Medicines Agency (EMA) provides information about all aspects of the scientific evaluation and approval of new medicines in the European Union, and guidance on the preparation of clinical study reports; its draft policy on the release of data from clinical trials is available
Information about IQWiG is available (in English and German); Informed Health Online is a website provided by IQWiG that provides objective, independent, and evidence-based information for patients (also in English and German)
doi:10.1371/journal.pmed.1001526
PMCID: PMC3793003  PMID: 24115912
17.  Ultraviolet Phototherapy Management of Moderate-to-Severe Plaque Psoriasis 
Executive Summary
Objective
The purpose of this evidence based analysis was to determine the effectiveness and safety of ultraviolet phototherapy for moderate-to-severe plaque psoriasis.
Research Questions
The specific research questions for the evidence review were as follows:
What is the safety of ultraviolet phototherapy for moderate-to-severe plaque psoriasis?
What is the effectiveness of ultraviolet phototherapy for moderate-to-severe plaque psoriasis?
Clinical Need: Target Population and Condition
Psoriasis is a common chronic, systemic inflammatory disease that affects the skin, nails, and occasionally the joints, and follows a lifelong waxing and waning course. It occurs worldwide, with a prevalence of at least 2% of the general population, making it one of the most common systemic inflammatory diseases. The immune-mediated disease has several clinical presentations, the most common (85%–90%) being plaque psoriasis.
Characteristic features of psoriasis include scaling, redness, and elevation of the skin. Patients may also present with a range of disabling symptoms such as pruritus (itching), pain, bleeding, or burning associated with plaque lesions, and up to 30% are classified as having moderate-to-severe disease. Some psoriasis patients are also medically complex: diabetes, inflammatory bowel disease, and hypertension are more likely to be present than in control populations, and 10% also suffer from arthritis (psoriatic arthritis). The etiology of psoriasis is unknown but is thought to result from complex interactions between the environment and predisposing genes.
Management of psoriasis depends on the extent of skin involvement, although disease on the hands, feet, face, or genitalia can present particular challenges. Moderate-to-severe psoriasis is managed with phototherapy and a range of systemic agents, including traditional immunosuppressants such as methotrexate and cyclosporin. Treatment with modern immunosuppressant agents known as biologicals, which more specifically target the immune defects of the disease, is usually reserved for patients with contraindications to, or who have failed or not responded to, traditional immunosuppressants or phototherapy.
Treatment plans are based on a long-term approach to managing the disease, the patient's expectations, individual responses, and the risk of complications. The treatment goals are severalfold but primarily to:
1) improve physical signs and secondary psychological effects,
2) reduce inflammation and control skin shedding,
3) control physical signs as long as possible, and to
4) avoid factors that can aggravate the condition.
Approaches are generally individualized because of the variable presentation, quality of life implications, co-existent medical conditions, and triggering factors (e.g. stress, infections and medications). Individual responses and commitments to therapy also present possible limitations.
Phototherapy
Ultraviolet phototherapy units have been licensed since February 1993 as a class 2 device in Canada. Units are available as hand held devices, hand and foot devices, full-body panel, and booth styles for institutional and home use. Units are also available with a range of ultraviolet A, broad and narrow band ultraviolet B (BB-UVB and NB-UVB) lamps. After establishing appropriate ultraviolet doses, three-times weekly treatment schedules for 20 to 25 treatments are generally needed to control symptoms.
Evidence-Based Analysis Methods
The literature search strategy employed keywords and subject headings to capture the concepts of 1) phototherapy and 2) psoriasis. The search involved runs in the following databases: Ovid MEDLINE (1996 to March Week 3 2009), OVID MEDLINE In-Process and Other Non-Indexed Citations, EMBASE (1980 to 2009 Week 13), the Wiley Cochrane Library, and the Centre for Reviews and Dissemination/International Agency for Health Technology Assessment. Parallel search strategies were developed for the remaining databases. Search results were limited to English-language human studies published between January 1999 and March 31, 2009. Search alerts were generated and reviewed for relevant literature up until May 31, 2009.
Inclusion Criteria
English language reports and human studies
Ultraviolet phototherapy interventions for plaque-type psoriasis
Reports involving efficacy and/or safety outcome studies
Original reports with defined study methodology
Standardized measurements on outcome events such as technical success, safety, effectiveness, durability, quality of life or patient satisfaction
Exclusion Criteria
Non-systematic reviews, letters, comments and editorials
Randomized trials involving side-to-side or half body comparisons
Randomized trials not involving ultraviolet phototherapy intervention for plaque-type psoriasis
Trials involving dosing studies, pilot feasibility studies or lacking control groups
Summary of Findings
A 2000 health technology evidence report on the overall management of psoriasis by the National Institute for Health Research (NIHR) Health Technology Assessment Program of the UK was identified in the MAS evidence-based review. The report included 109 RCTs published between 1966 and June 1999 involving four major treatment approaches: 51 on phototherapy, 32 on oral retinoids, 18 on cyclosporin, and five on fumarates. The absence of RCTs on methotrexate was noted, as the original studies with this agent had been performed prior to 1966.
Of the 51 phototherapy RCTs, 22 involved UVA, 21 involved UVB, five involved both UVA and UVB, and three used natural light as the UV source. The RCTs included comparisons of treatment schedules, ultraviolet sources, the addition of adjuvant therapies, and comparisons between phototherapy and topical treatment schedules. Because of heterogeneity, no synthesis or meta-analysis could be performed. Overall, the reviewers concluded that the RCT-based evidence supported the efficacy of only five therapies: photochemotherapy or phototherapy, cyclosporin, systemic retinoids, combination topical vitamin D3 analogues (calcipotriol) and corticosteroids in combination with phototherapy, and fumarates. Although there was no RCT evidence supporting methotrexate, its efficacy for psoriasis is well known and it continues to be a treatment mainstay.
The NIHR evidence review concluded that both photochemotherapy and phototherapy were effective treatments for clearing psoriasis, although their comparative effectiveness was unknown. Despite the conclusions on efficacy, a number of issues were identified in the evidence review, and several areas for future research were discussed to address these limitations. Trials focusing on comparative effectiveness, either between ultraviolet sources or between classes of treatment such as methotrexate versus phototherapy, were recommended to refine treatment algorithms. The need for better assessment of the cost-effectiveness of therapies, taking into account systemic drug costs and costs of surveillance as well as drug efficacy, was also noted. Overall, the authors concluded that phototherapy and photochemotherapy had important roles in psoriasis management and were standard therapeutic options offered in dermatology practices.
The MAS evidence-based review, focusing on the RCT evidence for ultraviolet phototherapy management of moderate-to-severe plaque psoriasis, was performed as an update to the 2000 NIHR systematic review of treatments for severe psoriasis. In this review, an additional 26 RCT reports examining phototherapy or photochemotherapy for psoriasis were identified. Among the studies were two RCTs comparing ultraviolet wavelength sources, five RCTs comparing different forms of phototherapy, four RCTs combining phototherapy with prior spa saline bathing, nine RCTs combining phototherapy with topical agents, two RCTs combining phototherapy with the systemic immunosuppressive agents methotrexate or alefacept, one RCT comparing phototherapy with an additional light source (the excimer laser), and one comparing a combination of phototherapy and a psychological intervention involving simultaneous audiotape sessions on mindfulness and stress reduction. Two trials also examined the effect of treatment setting on the effectiveness of phototherapy, one comparing inpatient with outpatient therapy and one comparing outpatient clinic with home-based phototherapy.
Conclusions
The conclusions of the MAS evidence-based review are outlined in Table ES1. In summary, phototherapy provides good short-term control of clinical symptoms in patients with moderate-to-severe plaque-type psoriasis who have failed or are unresponsive to management with topical agents. However, many of the evidence gaps identified in the 2000 NIHR evidence review on psoriasis management persisted. In particular, the lack of evidence on the comparative effectiveness and/or cost-effectiveness of the major treatment options for moderate-to-severe psoriasis remained. The effectiveness and safety of longer term strategies for disease management have also not been addressed. Evidence for the safety, effectiveness, or cost-effectiveness of phototherapy delivered in various settings is emerging but limited. In addition, because all available treatments for psoriasis – a disease with a high prevalence, chronicity, and cost – are palliative rather than curative, strategies for disease control and improvements in self-efficacy employed in other chronic disease management approaches should be investigated.
RCT Evidence for Ultraviolet Phototherapy Treatment of Moderate-To-Severe Plaque Psoriasis
Phototherapy is an effective treatment for moderate-to-severe plaque psoriasis
Narrow band PT is more effective than broad band PT for moderate-to-severe plaque psoriasis
Oral-PUVA has a greater clinical response, requires fewer treatments, and has a greater cumulative UV irradiation dose than UVB to achieve treatment effects for moderate-to-severe plaque psoriasis
Spa salt water baths prior to phototherapy did increase short-term clinical response in moderate-to-severe plaque psoriasis but did not decrease cumulative UV irradiation dose
Addition of topical agents (vitamin D3 calcipotriol) to NB-UVB did not increase mean clinical response or decrease treatments or cumulative UV irradiation dose
Methotrexate prior to NB-UVB in high-need psoriasis patients did significantly increase clinical response, decrease the number of treatment sessions, and decrease cumulative UV irradiation dose
Phototherapy following alefacept did increase early clinical response in moderate-to-severe plaque psoriasis
The effectiveness and safety of home NB-UVB phototherapy were not inferior to NB-UVB phototherapy provided in a clinic for patients with psoriasis referred for phototherapy. Treatment burden was lower and patient satisfaction was higher with home therapy, and patients in both groups preferred future phototherapy treatments at home
Ontario Health System Considerations
A 2006 survey of ultraviolet phototherapy services in Canada identified 26 phototherapy clinics in Ontario, which has a population of over 12 million. At that time, the province had 177 dermatologists, and phototherapy services were provided in 28% (14/50) of its geographic regions. The majority of phototherapy services were located in densely populated areas; relatively few patients living in rural communities had access to these services. The inconvenience of multiple weekly visits for optimal phototherapy treatment effects poses additional burdens on those with travel difficulties related to health, job, or family responsibilities.
Physician OHIP billing for phototherapy services totaled 117,216 billings in 2007, representing approximately 1,800 patients in the province treated in private clinics. The number of patients treated in hospitals is difficult to estimate as physician costs are not billed directly to OHIP in this setting. Instead, phototherapy units and services provided in hospitals are funded by hospitals’ global budgets. Some hospitals in the province, however, have divested their phototherapy services, so the number of phototherapy clinics and their total capacity is currently unknown.
Technological advances have enabled changes in phototherapy treatment regimens from lengthy hospital inpatient stays to outpatient clinic visits and, more recently, to an at-home basis. When combined with a telemedicine follow-up, home phototherapy may provide an alternative strategy for improved access to service and follow-up care, particularly for those with geographic or mobility barriers. Safety and effectiveness have, however, so far been evaluated for only one phototherapy home-based delivery model. Alternate care models and settings could potentially increase service options and access, but the broader consequences of the varying cost structures and incentives that either increase or decrease phototherapy services are unknown.
Economic Analyses
The focus of the current economic analysis was to characterize the costs associated with the provision of NB-UVB phototherapy for plaque-type, moderate-to-severe psoriasis in different clinical settings, including home therapy. A literature review was conducted, and no published cost-effectiveness (cost-utility) analyses were identified in this area.
Hospital, Clinic, and Home Costs of Phototherapy
Costs for NB-UVB phototherapy were based on consultations with equipment manufacturers and dermatologists. Device costs applicable to the provision of NB-UVB phototherapy in hospitals, private clinics and at a patient’s home were estimated. These costs included capital costs of purchasing NB-UVB devices (amortized over 15-20 years), maintenance costs of replacing equipment bulbs, physician costs of phototherapy treatment in private clinics ($7.85 per phototherapy treatment), and medication and laboratory costs associated with treatment of moderate-to-severe psoriasis.
NB-UVB phototherapy services provided in a hospital setting were paid for directly by hospitals. Phototherapy services in private clinic and home settings were paid for by the clinic and the patient, respectively, except for physician services covered by OHIP. Indirect funding was provided to hospitals as part of global budgeting and resource allocation. Home NB-UVB phototherapy services were not covered by the MOHLTC; coverage for home-based phototherapy was, however, in some cases provided by third-party insurers.
Device costs for NB-UVB phototherapy were estimated for two types of phototherapy units: a “booth unit” consisting of 48 bulbs used in hospitals and clinics, and a “panel unit” consisting of 10 bulbs for home use. The device costs of the booth and panel units were estimated at approximately $18,600 and $2,900, respectively; simple amortization over 15 and 20 years implied yearly costs of approximately $2,500 and $150, respectively. Replacement cost for individual bulbs was about $120 resulting in total annual cost of maintenance of about $8,640 and $120 for booth and panel units, respectively.
Estimated Total Costs for Ontario
Average annual cost per patient for NB-UVB phototherapy provided in the hospital, private clinic or at home was estimated to be $292, $810 and $365 respectively. For comparison purposes, treatment of moderate-to-severe psoriasis with methotrexate and cyclosporin amounted to $712 and $3,407 annually per patient respectively; yearly costs for biological drugs were estimated to be $18,700 for alefacept and $20,300 for etanercept-based treatments.
Total annual costs of NB-UVB phototherapy were estimated by applying average costs to the estimated proportion of the population (age 18 or older) eligible for phototherapy treatment. The prevalence of psoriasis was estimated at approximately 2% of the population, of which about 85% is plaque-type psoriasis and approximately 20% to 30% is considered moderate-to-severe. Using an estimate of 25% for moderate-to-severe cases, the current economic analysis yielded a range of 29,400 to 44,200 cases. Approximately 21% of these patients were estimated to be using NB-UVB phototherapy, giving between 6,200 and 9,300 cases; the average (7,700) was used to calculate associated costs for Ontario by treatment setting.
Total annual costs were as follows: $2.3 million in a hospital setting, $6.3 million in a private clinic setting, and $2.8 million for home phototherapy. Costs for phototherapy services provided in private clinics were greater ($810 per patient annually; total of $6.3 million annually) and differed from the same services provided in the hospital setting only in terms of additional physician costs associated with phototherapy OHIP fees.
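The arithmetic behind these estimates can be reproduced with the short sketch below. It is illustrative only: the adult population figure is an assumption chosen to match the reported case range rather than a figure taken from the report, and the per-patient costs are the averages quoted above.

# Hedged sketch of the Ontario cost estimate described above.
# Assumption: an adult (18+) population of roughly 8.65 million, chosen so that
# the reported range of 29,400-44,200 moderate-to-severe plaque psoriasis cases
# is reproduced; it is not a figure taken from the report.
adult_population = 8_650_000

psoriasis_prevalence = 0.02    # ~2% of the population
plaque_type_share = 0.85       # ~85% of psoriasis is plaque-type
moderate_severe_share = 0.25   # midpoint estimate used in the analysis
phototherapy_uptake = 0.21     # ~21% estimated to use NB-UVB phototherapy

cases = (adult_population * psoriasis_prevalence
         * plaque_type_share * moderate_severe_share)
nb_uvb_patients = cases * phototherapy_uptake   # ~7,700 on average

annual_cost_per_patient = {"hospital": 292, "private clinic": 810, "home": 365}

for setting, cost in annual_cost_per_patient.items():
    total = nb_uvb_patients * cost
    print(f"{setting:14s} ~${total/1e6:.1f} million per year")
# Prints roughly $2.3M (hospital), $6.3M (private clinic), and $2.8M (home),
# matching the totals reported in the analysis.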
Keywords
Psoriasis, ultraviolet radiation, phototherapy, photochemotherapy, NB-UVB, BB-UVB, PUVA
PMCID: PMC3377497  PMID: 23074532
18.  Strategies and Practices in Off-Label Marketing of Pharmaceuticals: A Retrospective Analysis of Whistleblower Complaints 
PLoS Medicine  2011;8(4):e1000431.
Aaron Kesselheim and colleagues analyzed unsealed whistleblower complaints against pharmaceutical companies filed in US federal fraud cases that contained allegations of off-label marketing, and develop a taxonomy of the various off-label practices.
Background
Despite regulatory restrictions, off-label marketing of pharmaceutical products has been common in the US. However, the scope of off-label marketing remains poorly characterized. We developed a typology for the strategies and practices that constitute off-label marketing.
Methods and Findings
We obtained unsealed whistleblower complaints against pharmaceutical companies filed in US federal fraud cases that contained allegations of off-label marketing (January 1996–October 2010) and conducted structured reviews of them. We coded and analyzed the strategic goals of each off-label marketing scheme and the practices used to achieve those goals, as reported by the whistleblowers. We identified 41 complaints arising from 18 unique cases for our analytic sample (leading to US$7.9 billion in recoveries). The off-label marketing schemes described in the complaints had three non–mutually exclusive goals: expansions to unapproved diseases (35/41, 85%), unapproved disease subtypes (22/41, 54%), and unapproved drug doses (14/41, 34%). Manufacturers were alleged to have pursued these goals using four non–mutually exclusive types of marketing practices: prescriber-related (41/41, 100%), business-related (37/41, 90%), payer-related (23/41, 56%), and consumer-related (18/41, 44%). Prescriber-related practices, the centerpiece of company strategies, included self-serving presentations of the literature (31/41, 76%), free samples (8/41, 20%), direct financial incentives to physicians (35/41, 85%), and teaching (22/41, 54%) and research activities (8/41, 20%).
Conclusions
Off-label marketing practices appear to extend to many areas of the health care system. Unfortunately, the most common alleged off-label marketing practices also appear to be the most difficult to control through external regulatory approaches.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Before a pharmaceutical company can market a new prescription drug in the US, the drug has to go through a long approval process. After extensive studies in the laboratory and in animals, the pharmaceutical company must test the drug's safety and efficacy in a series of clinical trials in which groups of patients with specific diseases are given the drug according to strict protocols. The results of these trials are reviewed by the Food and Drug Administration (FDA, the body that regulates drugs in the US) and, when the FDA is satisfied that the drug is safe and effective for the conditions in which it was tested, it approves the drug for sale. An important part of the approval process is the creation of the “drug label,” a detailed report that specifies the exact diseases and patient groups in which the drug can be used and the approved doses of the drug.
Why Was This Study Done?
Physicians can, however, legally use FDA-approved drugs “off-label.” That is, they can prescribe drugs for a different disease, in a different group of patients, or at a different dose than specified in the drug's label. However, because drug manufacturers stand to benefit financially from off-label use through increased drug sales, the FDA prohibits them from directly promoting unapproved uses. The fear is that such marketing would encourage the widespread use of drugs in settings where their efficacy and safety have not been rigorously tested, exposing patients to uncertain benefits and possible adverse effects. Despite the regulatory restrictions, off-label marketing seems to be common. In 2010, for example, at least six pharmaceutical companies settled US government investigations into alleged off-label marketing programs. Unfortunately, the tactics used by pharmaceutical companies for off-label marketing have been poorly understood in the medical community, in part because pharmaceutical industry insiders (“whistleblowers”) are the only ones with in-depth knowledge of these tactics. In recent years, as more whistleblowers have come forward to allege off-label marketing, developing a more complete picture of the practice has become possible. In this study, the researchers attempt to systematically classify the strategies and practices used in off-label marketing by examining complaints filed by whistleblowers in federal enforcement actions in which off-label marketing by pharmaceutical companies has been alleged.
What Did the Researchers Do and Find?
In their analysis of 41 whistleblower complaints relating to 18 alleged cases of off-label marketing in federal fraud cases unsealed between January 1996 and October 2010, the researchers identified three non–mutually exclusive goals of off-label marketing schemes. The commonest goal (85% of cases) was expansion of drug use to unapproved diseases (for example, gabapentin, which is approved for the treatment of specific types of epilepsy, was allegedly promoted as a therapy for patients with psychiatric diseases such as depression). The other goals were expansion to unapproved disease subtypes (for example, some antidepressant drugs approved for adults were allegedly promoted to pediatricians for use in children) and expansion to unapproved drug dosing strategies, typically higher doses. The researchers also identified four non–mutually exclusive types of marketing practices designed to achieve these goals. All of the whistleblowers alleged prescriber-related practices (including providing financial incentives and free samples to physicians), and most alleged internal practices intended to bolster off-label marketing, such as sales quotas that could only be met if the manufacturer's sales representatives promoted off-label drug use. Payer-related practices (for example, discussions with prescribers about ways to ensure insurance reimbursement for off-label prescriptions) and consumer-related practices (most commonly, the review of confidential patient charts to identify consumers who could be off-label users) were also alleged.
What Do These Findings Mean?
These findings suggest that off-label marketing practices extend to many parts of the health care delivery system. Because these practices were alleged by whistleblowers and were not the subject of testimony in a full trial, some of the practices identified by the researchers were not confirmed. Conversely, because most of the whistleblowers were US-based sales representatives, there may be other goals and strategies that this study has not identified. Nevertheless, these findings provide a useful snapshot of the off-label marketing strategies and practices allegedly employed in the US over the past 15 years, which can now be used to develop new regulatory strategies aimed at effective oversight of off-label marketing. Importantly, however, these findings suggest that no regulatory strategy will be complete and effective unless physicians themselves fully understand the range of off-label marketing practices and their consequences for public health and act as a bulwark against continued efforts to engage in off-label promotion.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000431.
The US Food and Drug Administration provides detailed information about drug approval in the US for consumers and for health professionals; its Bad Ad Program aims to educate health care providers about the role they can play in ensuring that prescription drug advertising and promotion is truthful and not misleading.
The American Cancer Society has a page about off-label drug use
Wikipedia has pages on prescription drugs, on pharmaceutical marketing, and on off-label drug use (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
Taxpayers Against Fraud is a nonprofit organization dedicated to helping whistleblowers, and it presents up-to-date information about False Claims Act cases
The Government Accountability Project is a nonprofit organization that seeks to promote corporate and government accountability by protecting whistleblowers, advancing occupational free speech, and empowering citizen activists
Healthy Skepticism is an international nonprofit membership association that aims to improve health by reducing harm from misleading health information
doi:10.1371/journal.pmed.1000431
PMCID: PMC3071370  PMID: 21483716
19.  How Does Medical Device Regulation Perform in the United States and the European Union? A Systematic Review 
PLoS Medicine  2012;9(7):e1001276.
Aaron Kesselheim and colleagues conduct a systematic review to examine the strengths and weaknesses associated with approaches to medical device regulation in the US and EU.
Background
Policymakers and regulators in the United States (US) and the European Union (EU) are weighing reforms to their medical device approval and post-market surveillance systems. Data may be available that identify strengths and weakness of the approaches to medical device regulation in these settings.
Methods and Findings
We performed a systematic review to find empirical studies evaluating medical device regulation in the US or EU. We searched Medline using two nested categories that included medical devices and glossary terms attributable to the US Food and Drug Administration and the EU, following PRISMA guidelines for systematic reviews. We supplemented this search with a review of the US Government Accountability Office online database for reports on US Food and Drug Administration device regulation, consultations with local experts in the field, manual reference mining of selected articles, and Google searches using the same key terms used in the Medline search. We found studies of premarket evaluation and timing (n = 9), studies of device recalls (n = 8), and surveys of device manufacturers (n = 3). These studies provide evidence of quality problems in pre-market submissions in the US, provide conflicting views of device safety based largely on recall data, and relay perceptions of some industry leaders from self-surveys.
Conclusions
Few studies have quantitatively assessed medical device regulation in either the US or EU. Existing studies of US and EU device approval and post-market evaluation performance suggest that policy reforms are necessary for both systems, including improving classification of devices in the US and promoting transparency and post-market oversight in the EU. Assessment of regulatory performance in both settings is limited by lack of data on post-approval safety outcomes. Changes to these device approval and post-marketing systems must be accompanied by ongoing research to ensure that there is better assessment of what works in either setting.
Please see later in the article for the Editors' Summary.
Editors' Summary
Background
Medical devices—health technologies that are not medicines, vaccines, or clinical procedures—cover a vast range of equipment from the simple to the more complex. Medical devices are essential for patient care, and in the past decade, new devices have offered improved treatment alternatives for many diseases and conditions, leading to substantial growth in the US$350 billion medical device industry. However, new medical devices also pose substantial risks to patients, as shown in recent high-profile product recalls involving breast implants and artificial hip implants.
Why Was This Study Done?
Concerns about the safety of new medical devices have led to calls for greater testing of the safety and effectiveness of new devices before they come on the market and for improved monitoring of their performance after new devices have been approved for use by a regulatory body. In this study, the researchers systematically reviewed evidence about the performance of medical device approval and post-market surveillance systems in two of the most important world markets for medical devices—the United States and the European Union.
What Did the Researchers Do and Find?
The researchers performed a keyword search in Medline (a database of published biomedical literature) for all relevant articles, and supplemented this search with a review of reports on Food and Drug Administration (FDA) device regulation in the US Government Accountability Office's online database. Then they consulted with both US and EU experts and also conducted Google searches to capture reports by management consultant firms. The researchers included only those studies that reported empirical data, either qualitative or quantitative, about the characteristics, performance metrics, or effectiveness of device evaluation or post-market oversight in the US or EU.
Using these methods the researchers identified nine studies that focused on pre-market evaluation and timing, eight studies of device recalls, and three surveys of device manufacturers. Because of the variable quality and lack of outcomes from these studies and reports, the researchers concluded that these studies offered only limited insights into either the US or EU systems. But the available evidence does suggest that in the US, the FDA could improve oversight of device approval, for example, by following up on its commitment to reclassify high-risk medical devices and improve post-market surveillance of devices that are approved on the basis of limited data. The researchers also suggest that using recalls to measure the safety record of individual devices or classes of devices is flawed, as particular devices may be over- or underrepresented in recall data depending on the frequency of their use, design complexity, and the clinical manifestations of malfunction. In the EU, apart from a few studies addressing the timing of approval, the researchers found almost no robust data on device regulation. Some case reports suggested substantial dangers to patients in the EU from devices approved on the basis of limited data, but the researchers could not systematically compare the quality of studies used for device approval or post-approval safety outcomes between the EU and US, mainly because of the lack of transparency among the EU regulators (Notified Bodies).
What Do These Findings Mean?
These findings show that few studies have quantitatively assessed medical device regulation in either the US or EU, but the existing studies examined in this review suggest that policy reforms are necessary for both device approval and post-market evaluation of performance, including improving classification of devices in the US and promoting transparency and postmarket oversight in the EU. However, assessment of regulatory performance in both the US and EU is limited by lack of data on post-approval safety outcomes. Any changes to medical device approval and post-marketing systems should be accompanied by ongoing research and evaluation to ensure that there is an improved assessment of what works in either setting.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001276.
This study is further discussed in a PLoS Medicine Perspective by Sanket Dhruva and Rita Redberg
The WHO website has a comprehensive topic section on medical devices
Information on medical devices is also available from the FDA and the European Commission
doi:10.1371/journal.pmed.1001276
PMCID: PMC3418047  PMID: 22912563
20.  Initial Severity and Antidepressant Benefits: A Meta-Analysis of Data Submitted to the Food and Drug Administration 
PLoS Medicine  2008;5(2):e45.
Background
Meta-analyses of antidepressant medications have reported only modest benefits over placebo treatment, and when unpublished trial data are included, the benefit falls below accepted criteria for clinical significance. Yet, the efficacy of the antidepressants may also depend on the severity of initial depression scores. The purpose of this analysis is to establish the relation of baseline severity and antidepressant efficacy using a relevant dataset of published and unpublished clinical trials.
Methods and Findings
We obtained data on all clinical trials submitted to the US Food and Drug Administration (FDA) for the licensing of the four new-generation antidepressants for which full datasets were available. We then used meta-analytic techniques to assess linear and quadratic effects of initial severity on improvement scores for drug and placebo groups and on drug–placebo difference scores. Drug–placebo differences increased as a function of initial severity, rising from virtually no difference at moderate levels of initial depression to a relatively small difference for patients with very severe depression, reaching conventional criteria for clinical significance only for patients at the upper end of the very severely depressed category. Meta-regression analyses indicated that the relation of baseline severity and improvement was curvilinear in drug groups and showed a strong, negative linear component in placebo groups.
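The model form implied by this description can be sketched as follows. The notation here is ours rather than the authors': s denotes baseline HRSD severity, d the mean improvement in a trial arm, and trial arms would be weighted by precision (e.g., inverse variance) in the actual meta-regression.

\[
  \hat{d}_{\mathrm{drug}}(s) = \beta_0 + \beta_1 s + \beta_2 s^2, \qquad
  \hat{d}_{\mathrm{placebo}}(s) = \gamma_0 + \gamma_1 s \quad (\gamma_1 < 0),
\]
\[
  \widehat{\Delta}(s) = \hat{d}_{\mathrm{drug}}(s) - \hat{d}_{\mathrm{placebo}}(s).
\]

The quadratic term captures the curvilinear relation reported for drug groups, while the negative linear slope captures the declining placebo response at higher baseline severity; the drug–placebo difference therefore widens as severity increases.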
Conclusions
Drug–placebo differences in antidepressant efficacy increase as a function of baseline severity, but are relatively small even for severely depressed patients. The relationship between initial severity and antidepressant efficacy is attributable to decreased responsiveness to placebo among very severely depressed patients, rather than to increased responsiveness to medication.
Kirsch and colleagues show that, in antidepressant trials, there is a greater difference in efficacy between drug and placebo amongst more severely depressed patients. However, this difference seems to result from a poorer response to placebo amongst more depressed patients.
Editors' Summary
Background.
Everyone feels miserable occasionally. But for some people—those with depression—these sad feelings last for months or years and interfere with daily life. Depression is a serious medical illness caused by imbalances in the brain chemicals that regulate mood. It affects one in six people at some time during their life, making them feel hopeless, worthless, unmotivated, even suicidal. Doctors measure the severity of depression using the “Hamilton Rating Scale of Depression” (HRSD), a 17–21 item questionnaire. The answers to each question are given a score, and a total questionnaire score of more than 18 indicates severe depression. Mild depression is often treated with psychotherapy or talk therapy (for example, cognitive–behavioral therapy helps people to change negative ways of thinking and behaving). For more severe depression, current treatment is usually a combination of psychotherapy and an antidepressant drug, which is hypothesized to normalize the brain chemicals that affect mood. Antidepressants include “tricyclics,” “monoamine oxidase inhibitors,” and “selective serotonin reuptake inhibitors” (SSRIs). The newest antidepressants include the SSRIs and related drugs fluoxetine, venlafaxine, nefazodone, and paroxetine.
Why Was This Study Done?
Although the US Food and Drug Administration (FDA), the UK National Institute for Health and Clinical Excellence (NICE), and other licensing authorities have approved SSRIs for the treatment of depression, some doubts remain about their clinical efficacy. Before an antidepressant is approved for use in patients, it must undergo clinical trials that compare its ability to improve the HRSD scores of patients with that of a placebo, a dummy tablet that contains no drug. Each individual trial provides some information about the new drug's effectiveness, but additional information can be gained by combining the results of all the trials in a “meta-analysis,” a statistical method for combining the results of many studies. A previously published meta-analysis of the published and unpublished trials on SSRIs submitted to the FDA during licensing indicated that these drugs have only a marginal clinical benefit. On average, the SSRIs improved the HRSD score of patients by 1.8 points more than the placebo, whereas NICE has defined a significant clinical benefit for antidepressants as a drug–placebo difference in the improvement of the HRSD score of 3 points. However, average improvement scores may obscure beneficial effects in different groups of patients, so in the meta-analysis in this paper, the researchers investigated whether the baseline severity of depression affects antidepressant efficacy.
What Did the Researchers Do and Find?
The researchers obtained data on all the clinical trials submitted to the FDA for the licensing of fluoxetine, venlafaxine, nefazodone, and paroxetine. They then used meta-analytic techniques to investigate whether the initial severity of depression affected the HRSD improvement scores for the drug and placebo groups in these trials. They confirmed first that the overall effect of these new generation of antidepressants was below the recommended criteria for clinical significance. Then they showed that there was virtually no difference in the improvement scores for drug and placebo in patients with moderate depression and only a small and clinically insignificant difference among patients with very severe depression. The difference in improvement between the antidepressant and placebo reached clinical significance, however, in patients with initial HRSD scores of more than 28—that is, in the most severely depressed patients. Additional analyses indicated that the apparent clinical effectiveness of the antidepressants among these most severely depressed patients reflected a decreased responsiveness to placebo rather than an increased responsiveness to antidepressants.
What Do These Findings Mean?
These findings suggest that, compared with placebo, the new-generation antidepressants do not produce clinically significant improvements in depression in patients who initially have moderate or even very severe depression, but show significant effects only in the most severely depressed patients. The findings also show that the effect for these patients seems to be due to decreased responsiveness to placebo, rather than increased responsiveness to medication. Given these results, the researchers conclude that there is little reason to prescribe new-generation antidepressant medications to any but the most severely depressed patients unless alternative treatments have been ineffective. In addition, the finding that extremely depressed patients are less responsive to placebo than less severely depressed patients but have similar responses to antidepressants is a potentially important insight into how patients with depression respond to antidepressants and placebos that should be investigated further.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050045.
The MedlinePlus encyclopedia contains a page on depression (in English and Spanish)
Detailed information for patients and caregivers is available on all aspects of depression (including symptoms and treatment) from the US National Institute of Mental Health and from the UK National Health Service Direct Health Encyclopedia
MedlinePlus provides a list of links to further information on depression
Clinical Guidance for professionals, patients, caregivers and the public is provided by the UK National Institute for Health and Clinical Excellence
doi:10.1371/journal.pmed.0050045
PMCID: PMC2253608  PMID: 18303940
21.  License Compliance Issues For Biopharmaceuticals: Special Challenges For Negotiations Between Companies And Non-Profit Research Institutions 
Summary
Biopharmaceuticals are therapeutic products based on biotechnology. They are manufactured by or from living organisms and are the most complex of all commercial medicines to develop, manufacture, and qualify for regulatory approval. In recent years biopharmaceuticals have rapidly increased in number and importance, with over 400 already marketed in the U.S. and European markets alone. Many companies throughout the world are now ramping up investments in biopharmaceutical R&D and expanding their portfolios through licensing of early-stage biotechnologies from universities and other non-profit research institutions, and there is an increasing number of license agreements for biopharmaceutical product development relative to traditional small molecule drug compounds. This trend will only continue as large numbers of biosimilars and biogenerics enter the market.
A primary goal of technology transfer offices associated with publicly funded, non-profit research institutions is to establish patent protection for inventions deemed to have commercial potential and to license them for product development. Such licenses help stimulate economic development and job creation, bring a stream of royalty revenue to the institution and, hopefully, advance the public good or public health by bringing new and useful products to market. In the course of applying for such licenses, the applicant usually puts forth a commercial development plan indicating the path it expects to follow to bring the licensed invention to market. For small molecule drug compounds, there is a widely recognized series of clinical development steps, dictated by regulatory requirements, that must be completed to bring a new drug to market, such as preclinical toxicology, Phase 1, 2, and 3 testing, and product approval. These steps often become the milestone/benchmark schedule incorporated into license agreements, which technology transfer offices use to monitor the licensee's diligence and progress; most exclusive licenses include a commercial development plan, with penalties ranging from financial charges to revocation of the license if the plan is not followed (e.g., if the licensee falls too far behind schedule).
This study examines whether developmental milestone schedules based on a small molecule drug development model are useful and realistic in setting expectations for biopharmaceutical product development. We reviewed the monitoring records of all exclusive Public Health Service (PHS) commercial development license agreements for small molecule drugs or therapeutics based on biotechnology (biopharmaceuticals) executed by the National Institutes of Health (NIH) Office of Technology Transfer (OTT) between 2003 and 2009. We found that most biopharmaceutical development license agreements required amending because developmental milestones in the negotiated schedule could not be met by the licensee. This was in stark contrast with license agreements for small molecule chemical compounds, which rarely needed changes to their developmental milestone schedules. As commercial development licenses for biopharmaceuticals make up the vast majority of NIH's exclusive license agreements, there is clearly a need to: 1) more closely examine how these benchmark schedules are formed, 2) try to understand the particular risk factors contributing to benchmark schedule non-compliance, and 3) devise alternatives to the current license benchmark schedule structural model. Schedules that properly weigh the most relevant risk factors, such as technology classification (e.g., vaccine vs recombinant antibody vs gene therapy), the likelihood of unforeseen regulatory issues, and company size/structure, may help assure compliance with original license benchmark schedules. This understanding, coupled with a modified approach to the license negotiation process that makes use of a clear and comprehensive term sheet to minimize ambiguities, should result in more realistic benchmark schedules.
PMCID: PMC3234133  PMID: 22162900
22.  Chiropractic care for paediatric and adolescent Attention-Deficit/Hyperactivity Disorder: A systematic review 
Background
Psychostimulants are the first line of therapy for paediatric and adolescent AD/HD. The evidence suggests that up to 30% of those prescribed stimulant medications do not show clinically significant outcomes. In addition, many children and adolescents experience side-effects from these medications. As a result, parents are seeking alternative interventions for their children. Use of complementary and alternative medicine therapies for behavioural disorders such as AD/HD is increasing, with as many as 68% of parents having sought help from alternative practitioners, including chiropractors.
Objective
The review seeks to answer the question of whether chiropractic care can reduce symptoms of inattention, impulsivity and hyperactivity for paediatric and adolescent AD/HD.
Methods
Electronic databases (Cochrane CENTRAL Register of Controlled Trials, Cochrane Database of Systematic Reviews, MEDLINE, PsycINFO, CINAHL, Scopus, ISI Web of Science, Index to Chiropractic Literature) were searched from inception until July 2009 for English-language studies of chiropractic care for AD/HD. Inclusion and exclusion criteria were applied to select studies. All randomised controlled trials were evaluated using the Jadad score and a checklist developed from the CONSORT (Consolidated Standards of Reporting Trials) guidelines.
Results
The search yielded 58 citations, of which 22 were intervention studies. Of these, only three studies involved paediatric or adolescent AD/HD cohorts. Their methodological quality was poor, and none of the studies met the inclusion criteria.
Conclusions
To date there is insufficient evidence to evaluate the efficacy of chiropractic care for paediatric and adolescent AD/HD. The claim that chiropractic care improves paediatric and adolescent AD/HD is supported only by low levels of scientific evidence. In the interest of paediatric and adolescent health, if chiropractic care for AD/HD is to continue, more rigorous scientific research needs to be undertaken to examine its efficacy and effectiveness. Adequately sized RCTs using clinically relevant outcomes and standardised measures to compare chiropractic care with no-treatment/placebo control or standard care (pharmacological and psychosocial care) are needed to determine whether chiropractic care is an effective alternative intervention for paediatric and adolescent AD/HD.
doi:10.1186/1746-1340-18-13
PMCID: PMC2891800  PMID: 20525195
23.  Strategies for Increasing Recruitment to Randomised Controlled Trials: Systematic Review 
PLoS Medicine  2010;7(11):e1000368.
Patrina Caldwell and colleagues performed a systematic review of randomized studies that compared methods of recruiting individual study participants into trials, and found that strategies that focus on increasing potential participants' awareness of the specific health problem, and that engaged them, appeared to increase recruitment.
Background
Recruitment of participants into randomised controlled trials (RCTs) is critical for successful trial conduct. Although there have been two previous systematic reviews on related topics, the results (which identified specific interventions) were inconclusive and not generalizable. The aim of our study was to evaluate the relative effectiveness of recruitment strategies for participation in RCTs.
Methods and Findings
We conducted a systematic review, reported according to the PRISMA guideline, of studies that compared methods of recruiting individual study participants into an actual or mock RCT. We searched MEDLINE, Embase, The Cochrane Library, and reference lists of relevant studies. From over 16,000 titles or abstracts reviewed, 396 papers were retrieved and 37 studies were included, in which 18,812 of at least 59,354 people approached agreed to participate in a clinical RCT. Recruitment strategies were broadly divided into four groups: novel trial designs (eight studies), recruiter differences (eight studies), incentives (two studies), and provision of trial information (19 studies). Strategies that increased people's awareness of the health problem being studied (e.g., an interactive computer program [relative risk (RR) 1.48, 95% confidence interval (CI) 1.00–2.18], attendance at an education session [RR 1.14, 95% CI 1.01–1.28], addition of a health questionnaire [RR 1.37, 95% CI 1.14–1.66], or a video about the health condition [RR 1.75, 95% CI 1.11–2.74]), as well as monetary incentives (RR 1.39, 95% CI 1.13–1.64 to RR 1.53, 95% CI 1.28–1.84), improved recruitment. Increasing patients' understanding of the trial process, recruiter differences, and various methods of randomisation and consent design did not show a difference in recruitment. Consent rates were also higher for nonblinded trial designs, but differential loss to follow-up between groups may jeopardise the study findings. The study's main limitation was the necessity of modifying the search strategy with subsequent search updates because of changes in MEDLINE definitions. The abstracts of previous versions of this systematic review were published in 2002 and 2007.
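For readers unfamiliar with how such relative risks and confidence intervals are derived, the sketch below computes an RR with a 95% CI from recruitment counts using the standard log-RR (Wald) approximation. The counts are invented for illustration and are not taken from the review.

import math

def relative_risk(events_a, total_a, events_b, total_b, z=1.96):
    """Relative risk of recruitment in arm A vs arm B, with a 95% Wald CI on the log scale."""
    risk_a = events_a / total_a
    risk_b = events_b / total_b
    rr = risk_a / risk_b
    # Standard error of log(RR)
    se_log_rr = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Invented example: 150/400 people recruited with an interactive program
# versus 100/400 with standard trial information.
rr, lo, hi = relative_risk(150, 400, 100, 400)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")   # RR = 1.50, 95% CI 1.21-1.85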
Conclusion
Recruitment strategies that focus on increasing potential participants' awareness of the health problem being studied and its potential impact on their health, and that engage them in the learning process, appeared to increase recruitment to clinical studies. Further trials of recruitment strategies that engage potential participants and increase their awareness of the health problems being studied and the potential impact on their health may confirm this hypothesis.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Before any health care intervention—a treatment for a disease or a measure such as vaccination that is designed to prevent an illness—is adopted by the medical community, it undergoes exhaustive laboratory-based and clinical research. In the laboratory, scientists investigate the causes of diseases, identify potential new treatments or preventive methods, and test these interventions in animals. New interventions that look hopeful are then investigated in clinical trials—studies that test these interventions in people by following a strict trial protocol or action plan. Phase I trials test interventions in a few healthy volunteers or patients to evaluate their safety and to identify possible side effects. In phase II trials, a larger group of patients receives an intervention to evaluate its safety further and to get an initial idea of its effectiveness. In phase III trials, very large groups of patients (sometimes in excess of a thousand people) are randomly assigned to receive the new intervention or an established intervention or placebo (dummy intervention). These “randomized controlled trials” or “RCTs” provide the most reliable information about the effectiveness and safety of health care interventions.
Why Was This Study Done?
Patients who participate in clinical trials must fulfill the inclusion criteria laid down in the trial protocol and must be given information about the trial, its risks, and potential benefits before agreeing to participate (informed consent). Unfortunately, many RCTs struggle to enroll the number of patients specified in their trial protocol, which can reduce a trial's ability to measure the effect of a new intervention. Inadequate recruitment can also increase costs and, in the worst cases, prevent trial completion. Several strategies have been developed to improve recruitment but it is not clear which strategy works best. In this study, the researchers undertake a systematic review (a study that uses predefined criteria to identify all the research on a given topic) of “recruitment trials”—studies that have randomly divided potential RCT participants into groups, applied different strategies for recruitment to each group, and compared recruitment rates in the groups.
What Did the Researchers Do and Find?
The researchers identified 37 randomized trials of recruitment strategies into real and mock RCTs (where no actual trial occurred). In all, 18,812 people agreed to participate in an RCT in these recruitment trials out of at least 59,354 people approached. Some of these trials investigated novel strategies for recruitment, such as changes in how patients are randomized. Others looked at the effect of recruiter differences (for example, increased contact between the health care professionals doing the recruiting and the trial investigators), the effect of offering monetary incentives to participants, and the effect of giving more information about the trial to potential participants. Recruitment strategies that improved people's awareness of the health problem being studied—provision of an interactive computer program or a video about the health condition, attendance at an educational session, or inclusion of a health questionnaire in the recruitment process—improved recruitment rates, as did monetary incentives. Increasing patients' understanding about the trial process itself, recruiter differences, and alterations in consent design and randomization generally had no effect on recruitment rates although consent rates were higher when patients knew the treatment to which they had been randomly allocated before consenting. However, differential losses among the patients in different treatment groups in such nonblinded trials may jeopardize study findings.
What Do These Findings Mean?
These findings suggest that trial recruitment strategies that focus on increasing the awareness of potential participants of the health problem being studied and its possible effects on their health, and that engage potential participants in the trial process are likely to increase recruitment to RCTs. The accuracy of these findings depends on whether the researchers identified all the published research on recruitment strategies and on whether other research on recruitment strategies has been undertaken and not published that could alter these findings. Furthermore, because about half of the recruitment trials identified by the researchers were undertaken in the US, the successful strategies identified here might not be generalizable to other countries. Nevertheless, these recruitment strategies should now be investigated further to ensure that the future evaluation of new health care interventions is not hampered by poor recruitment into RCTs.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000368.
The ClinicalTrials.gov Web site is a searchable register of federally and privately supported clinical trials in the US and around the world, providing information about all aspects of clinical trials
The US National Institutes of Health provides information about clinical trials
The UK National Health Service Choices Web site has information for patients about clinical trials and medical research
The UK Medical Research Council Clinical Trials Units also provides information for patients about clinical trials and links to information on clinical trials provided by other organizations
MedlinePlus has links to further resources on clinical trials (in English and Spanish)
The Australian Government's National Health and Medical Research Council has information about clinical trials
WHO International Clinical Trials Registry Platform aims to ensure that all trials are publicly accessible to those making health care decisions
The Star Child Health International Forum of Standards for Research is a resource center for pediatric clinical trial design, conduct, and reporting
doi:10.1371/journal.pmed.1000368
PMCID: PMC2976724  PMID: 21085696
24.  Differences in Reporting of Analyses in Internal Company Documents Versus Published Trial Reports: Comparisons in Industry-Sponsored Trials in Off-Label Uses of Gabapentin 
PLoS Medicine  2013;10(1):e1001378.
Using documents obtained through litigation, S. Swaroop Vedula and colleagues compared internal company documents regarding industry-sponsored trials of off-label uses of gabapentin with the published trial reports and find discrepancies in reporting of analyses.
Background
Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.
Methods and Findings
For each trial, we compared internal company documents (protocols, statistical analysis plans, and research reports, all unpublished) with the corresponding publications. One author extracted data and another verified the extraction, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and the types of analyses for efficacy and safety and their definitions (i.e., criteria for including participants in each type of analysis). We identified 21 trials, 11 of which were published randomized controlled trials that provided the documents needed for the planned comparisons. For three trials, there was disagreement on the number of randomized participants between the research report and the publication. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses).
Conclusions
Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
To be credible, published research must present an unbiased, transparent, and accurate description of the study methods and findings so that readers can assess all relevant information to make informed decisions about the impact of any conclusions. Therefore, research publications should conform to universally adopted guidelines and checklists. Studies to establish whether a treatment is effective, termed randomized controlled trials (RCTs), are checked against a comprehensive set of guidelines: the robustness of trial protocols is measured with the Standard Protocol Items for Randomized Trials (SPIRIT), and the Consolidated Standards of Reporting Trials (CONSORT) statement (which was constructed and agreed upon by a meeting of journal editors in 1996 and has been updated over the years) includes a 25-item checklist that covers all of the key points in reporting RCTs.
Why Was This Study Done?
Although the CONSORT statement has helped improve transparency in the reporting of the methods and findings from RCTs, the statement does not define how certain types of analyses should be conducted and which patients should be included in the analyses, for example, in an intention-to-treat analysis (in which all participants are included in the data analysis of the group to which they were assigned, whether or not they completed the intervention given to the group). So in this study, the researchers used internal company documents, released in the course of litigation against the pharmaceutical company Pfizer regarding the drug gabapentin, to compare the internal and published reports with respect to the numbers of participants, the description of the types of analyses, and the definitions of each type of analysis. The reports involved studies of gabapentin used for medical reasons not approved for marketing by the US Food and Drug Administration, known as “off-label” uses.
What Did the Researchers Do and Find?
The researchers identified trials sponsored by Pfizer relating to four off-label uses of gabapentin and examined the internal company protocols, statistical analysis plans, research reports, and the main publications related to each trial. The researchers then compared the numbers of participants randomized and analyzed for the main (primary) outcome and the type of analysis for efficacy and safety in both the internal research report and the trial publication. The researchers identified 21 trials, 11 of which were published RCTs that had the associated documents necessary for comparison.
The researchers found that in three out of ten trials there were differences between the internal research report and the main publication regarding the number of randomized participants. Furthermore, in six out of ten trials, the researchers were unable to compare the internal research report with the main publication for the number of participants analyzed for efficacy, because the research report either did not describe the primary outcome or did not describe the type of analysis. Overall, the researchers found that seven different types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including intention-to-treat analysis. However, the protocol or publication used six different descriptions for the intention-to-treat analysis, resulting in several important differences between the internal and published documents about the number of patients included in the analysis.
What Do These Findings Mean?
These findings from a sample of industry-sponsored trials on the off-label use of gabapentin suggest that when compared to the internal research reports, the trial publications did not always accurately reflect what was actually done in the trial. Therefore, the trial publication could not be considered to be an accurate and transparent record of the numbers of participants randomized and analyzed for efficacy. These findings support the need for further revisions of the CONSORT statement, such as including explicit statements about the criteria used to define each type of analysis and the numbers of participants excluded from each type of analysis. Further guidance is also needed to ensure consistent terminology for types of analysis. Of course, these revisions will improve reporting only if authors and journals adhere to them. These findings also highlight the need for all individual patient data to be made accessible to readers of the published article.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001378.
For more information, see the CONSORT statement website
The EQUATOR Network website is a resource center for the good reporting of health research studies and has more information about the SPIRIT initiative and the CONSORT statement
doi:10.1371/journal.pmed.1001378
PMCID: PMC3558476  PMID: 23382656
25.  Endovascular Radiofrequency Ablation for Varicose Veins 
Executive Summary
Objective
The objective of the MAS evidence review was to conduct a systematic review of the available evidence on the safety, effectiveness, durability and cost-effectiveness of endovascular radiofrequency ablation (RFA) for the treatment of primary symptomatic varicose veins.
Background
The Ontario Health Technology Advisory Committee (OHTAC) met on August 26th, 2010 to review the safety, effectiveness, durability, and cost-effectiveness of RFA for the treatment of primary symptomatic varicose veins based on an evidence-based review by the Medical Advisory Secretariat (MAS).
Clinical Condition
Varicose veins (VV) are tortuous, twisted, or elongated veins. This can be due to pre-existing (inherited) valve dysfunction or decreased vein elasticity (primary venous reflux), or to valve damage from prior thrombotic events (secondary venous reflux). The end result is pooling of blood in the veins, increased venous pressure, and subsequent vein enlargement. As a result of high venous pressure, branch vessels balloon out, leading to varicosities (varicose veins).
Symptoms typically affect the lower extremities and include (but are not limited to) aching, swelling, throbbing, night cramps, restless legs, leg fatigue, itching, and burning. Left untreated, venous reflux tends to be progressive, often leading to chronic venous insufficiency (CVI). A number of complications are associated with untreated venous reflux, including superficial thrombophlebitis as well as variceal rupture and haemorrhage. CVI often results in chronic skin changes referred to as stasis dermatitis. Stasis dermatitis comprises a spectrum of cutaneous abnormalities including edema, hyperpigmentation, eczema, lipodermatosclerosis, and stasis ulceration. Ulceration represents the disease end point for severe CVI. CVI is associated with a reduced quality of life, particularly in relation to pain, physical function, and mobility. In severe cases (VV with ulcers), QOL has been rated as bad as, or worse than, that in other chronic diseases such as back pain and arthritis.
Lower limb VV is a very common disease affecting adults and is estimated to be the 7th most common reason for physician referral in the US. There is a very strong familial predisposition to VV: the risk in offspring is 90% if both parents are affected, 20% when neither is affected, and 45% (25% for boys, 62% for girls) if one parent is affected. The prevalence of VV worldwide ranges from 5% to 15% among men and 3% to 29% among women, varying with the age, gender, and ethnicity of the study population and with survey methods and disease definition and measurement. The annual incidence of VV estimated from the Framingham Study was 2.6% among women and 1.9% among men and did not vary within the age range (40–89 years) studied.
Approximately 1% of the adult population has a stasis ulcer of venous origin at any one time, with 4% at risk. The majority of leg ulcer patients are elderly with simple superficial vein reflux. Stasis ulcers are often lengthy medical problems that can last for several years and, despite effective compression therapy and multilayer bandaging, are associated with high recurrence rates. Recent trials involving surgical treatment of superficial vein reflux have resulted in healing and significantly reduced recurrence rates.
Endovascular Radiofrequency Ablation for Varicose Veins
RFA is an image-guided minimally invasive treatment alternative to surgical stripping of superficial venous reflux. RFA does not require an operating room or general anaesthesia and has been performed in an outpatient setting by a variety of medical specialties including surgeons and interventional radiologists. Rather than surgically removing the vein, RFA works by destroying or ablating the refluxing vein segment using thermal energy delivered through a radiofrequency catheter.
Prior to performing RFA, color-flow Doppler ultrasonography is used to confirm and map all areas of venous reflux in order to devise a safe and effective treatment plan. The RFA procedure involves the introduction of a guide wire into the target vein under ultrasound guidance, followed by the insertion of an introducer sheath through which the RFA catheter is advanced. Once satisfactory positioning has been confirmed with ultrasound, a tumescent anaesthetic solution is injected into the soft tissue surrounding the target vein along its entire length. This serves to anaesthetize the vein, to insulate adjacent structures, including nerves and skin, from thermal damage, and to compress the vein, optimizing contact of the vessel wall with the electrodes or expanded prongs of the RF device. The RF generator is then activated and the catheter is slowly pulled along the length of the vein. At the end of the procedure, hemostasis is achieved by applying pressure to the vein entry point.
Appropriate compression stockings and bandages are applied after the procedure to reduce the risk of venous thromboembolism and to reduce postoperative bruising and tenderness. Patients are encouraged to walk immediately after the procedure. Follow-up protocols vary, with most patients returning 1 to 3 weeks later for an initial follow-up visit. At this point, the initial clinical result is assessed and occlusion of the treated vessels is confirmed with ultrasound. Patients often have a second follow-up visit 1 to 3 months after RFA, at which time clinical evaluation and ultrasound are repeated. If required, additional procedures such as phlebectomy or sclerotherapy may be performed during the RFA procedure or at any follow-up visit.
Regulatory Status
The Closure System® radiofrequency generator for endovascular thermal ablation of varicose veins was approved by Health Canada as a class 3 device in March 2005 and is registered under medical device license 67865. The ClosureFast RFA intravascular catheter was approved by Health Canada in November 2007 and is registered under medical device license 16574. The Closure System® also has regulatory approvals in Australia, Europe (CE Mark), and the United States (FDA clearance). In Ontario, RFA is not an insured service and is currently being introduced in private clinics.
Methods
Literature Search
The MAS evidence-based review was performed to support public financing decisions. The literature search was performed on March 9, 2010, using standard bibliographic databases for studies published up until March 2010.
Inclusion Criteria
English language full reports and human studies
Original reports with defined study methodology
Reports including standardized measurements on outcome events such as technical success, safety, effectiveness, durability, quality of life or patient satisfaction
Reports involving RFA for varicose veins (great or small saphenous veins)
Randomized controlled trials (RCTs), systematic reviews and meta-analyses
Cohort and controlled clinical studies involving ≥ 1 month ultrasound imaging follow-up
Exclusion Criteria
Non-systematic reviews, letters, comments and editorials
Reports not involving outcome events such as safety, effectiveness, durability, or patient satisfaction following an intervention with RFA
Reports not involving interventions with RFA for varicose veins
Pilot studies or studies with small samples (< 50 subjects)
Summary of Findings
The MAS evidence search on the safety and effectiveness of endovascular RFA for VV identified the following evidence: three HTAs, nine systematic reviews, eight randomized controlled trials (five comparing RFA to surgery and three comparing RFA to endovascular laser ablation [ELT]), five controlled clinical trials, and fourteen cohort case series (four of which were multicenter registry studies).
The majority (12/14) of the cohort studies (3,664 patients) evaluating RFA for VV involved treatment with first-generation RFA catheters, and the great saphenous vein (GSV) was the target vessel in all studies. Major adverse events were uncommonly reported, and the overall pooled major adverse event rate extracted from the cohort studies was 2.9% (105/3,664). Imaging-defined treatment effectiveness (vein closure rates) was variable, ranging from 68% to 96% at post-operative follow-up. The vein ablation rate at 6-month follow-up was reported in four studies, with rates close to 90%. Only one study reported vein closure rates at 2 years, and only for a minority of the eligible cases. The two studies reporting on RFA with the more efficient second-generation catheters involved better follow-up and reported higher ablation rates, close to 100% at 6-month follow-up, with no major adverse events. A large prospective registry trial that recruited over 1,000 patients at thirty-four largely European centers reported on treatment success in six overlapping reports on selected patient subgroups at various follow-up points up to 5 years. However, the follow-up of eligible recruited patients at all time points was low, resulting in inadequate estimates of longer term treatment efficacy.
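As a quick check on the pooled figure, the short Python sketch below recomputes the pooled major adverse event rate from the counts reported above and adds an approximate 95% confidence interval; the interval is purely illustrative and is not reported in the review.

    import math

    # Pooled major adverse event rate across the first-generation cohort studies
    # (counts from the review: 105 events among 3,664 patients).
    events, patients = 105, 3664
    rate = events / patients                         # about 0.029, i.e. 2.9%

    # Normal-approximation 95% confidence interval for the pooled proportion
    # (illustrative only; no interval is given in the review).
    se = math.sqrt(rate * (1 - rate) / patients)
    ci_low, ci_high = rate - 1.96 * se, rate + 1.96 * se
    print(f"Pooled rate: {rate:.1%} (95% CI {ci_low:.1%} to {ci_high:.1%})")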
The overall level of evidence of the randomized trials comparing RFA with surgical ligation and vein stripping (n = 5) was graded as low to moderate. In all trials, RFA was performed with first-generation catheters in the setting of the operating theatre under general anaesthesia, usually without tumescent anaesthesia. Procedure times were significantly longer with RFA than with surgery. Recovery after treatment was significantly quicker with RFA, both for return to usual activity and for return to work, with on average one week less of work loss. Major adverse events were more frequent after surgery (1.8% [n = 4]) than after RFA (0.4% [n = 1]), but not significantly so. Treatment effectiveness measured by imaging-defined vein absence or vein closure was comparable in the two treatment groups. Significant improvements in vein symptoms and quality of life over baseline were reported for both treatment groups. Improvements in these outcomes were significantly greater in the RFA group than in the surgery group in the peri-operative period but not at later follow-up. Follow-up in these trials was inadequate to evaluate longer term recurrence for either treatment. Patient satisfaction was reported to be high for both treatments but was higher for RFA.
The studies comparing endovascular treatment approaches for VV (RFA and ELT) were more limited. Three RCTs compared RFA (two with the second-generation catheter) with ELT but mainly focused on peri-procedural outcomes such as pain, complications, and recovery. Vein ablation rates were not evaluated in these trials, except in one small trial involving bilateral VV. Pain responses in patients undergoing ablation were extremely variable; up to 2 weeks, mean pain levels were significantly lower with RFA than with ELT, but differences were not significant at one month. Recovery, evaluated as return to usual activity or return to work, was similar in the treatment groups. Vein symptoms and QOL improved in both groups, with improvements significantly greater in the RFA group than in the ELT group at 2 weeks but not at one month. Vein ablation rates were evaluated in several controlled clinical studies comparing the treatments between centers, or within centers between individuals or over time. Comparisons in these studies were inconsistent, with vein ablation rates for RFA reported to be similar to, higher than, and lower than those with ELT.
Economic Analysis
RFA and surgical vein stripping, the main comparator reimbursed by the public system, are comparable in clinical benefits. Hence, a cost analysis was conducted to identify the differences in resources and costs between the two procedures, and a budget impact analysis (BIA) was conducted to project costs over a 5-year period in the province of Ontario. The target population of this economic analysis was patients with symptomatic varicose veins, and the primary analytic perspective was that of the Ministry of Health and Long-Term Care.
The average case cost (based on Ontario hospital costs and medical resources) for surgical vein stripping was estimated to be $1,799. To calculate a procedural cost for RFA, it was assumed that the hospital cost and physician labour fees, excluding anaesthesia and surgical assistance, were the same as for vein stripping surgery. The manufacturer also provided details on the generator, with a capital cost of $27,500 and a lifespan of 5 years, and on the disposables (catheter, sheath, guidewire), with a cost of $673 per case. The average case cost for RFA was therefore estimated to be $1,356. A one-way sensitivity analysis was also conducted in which the hospital cost of RFA was varied to 60% of that of vein stripping surgery (average cost per case = $627.08) to estimate the impact to the province.
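How such a per-case estimate can be assembled is sketched below in Python: the generator's capital cost is amortized over an assumed annual case volume and added to the per-case disposables and an assumed base hospital-plus-physician cost. The capital cost, lifespan, and disposables figure are taken from the review; the annual volume and the base cost are hypothetical placeholders, not the inputs used by MAS.

    # Illustrative build-up of an RFA per-case cost (not the MAS calculation).
    GENERATOR_CAPITAL = 27_500          # RF generator capital cost ($, from the review)
    GENERATOR_LIFESPAN_YEARS = 5        # generator lifespan (years, from the review)
    DISPOSABLES_PER_CASE = 673          # catheter, sheath, guidewire ($ per case, from the review)

    ASSUMED_CASES_PER_YEAR = 100        # hypothetical annual volume per generator
    ASSUMED_BASE_COST_PER_CASE = 628    # hypothetical hospital + physician cost per case ($)

    amortized_generator = GENERATOR_CAPITAL / (GENERATOR_LIFESPAN_YEARS * ASSUMED_CASES_PER_YEAR)
    rfa_cost_per_case = ASSUMED_BASE_COST_PER_CASE + DISPOSABLES_PER_CASE + amortized_generator
    print(f"Illustrative RFA cost per case: ${rfa_cost_per_case:,.2f}")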
Historical volumes of vein stripping surgeries in Ontario were used to project surgeries in a linear fashion up to five years into the future. Volumes for RFA and ELT were calculated based on the share captured from the surgery market, informed by clinical expert opinion and by existing private-market data obtained through discussion with the manufacturer. RFA is expected to compete with ELT and to capture some of that market. If ELT is reimbursed by the public sector, ELT volumes will continue to increase, drawing on previous private-market activity and capturing a share of the conventional surgical treatment market; RFA cases will then also increase, since RFA is expected to capture a share of the ELT market. The budget impact to the province was then calculated by multiplying projected volumes by the cost of the procedure.
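A minimal sketch of that budget-impact arithmetic is shown below, assuming simple linear growth in RFA volumes; the volume figures are hypothetical placeholders, since the summary does not report the projected Ontario volumes.

    # Minimal budget-impact sketch: project case volumes linearly over five years and
    # multiply by the estimated average cost per case. Volume figures are hypothetical.
    COST_PER_CASE_RFA = 1356            # estimated average RFA case cost ($, from the review)

    assumed_year1_cases = 500           # hypothetical Year 1 RFA volume
    assumed_annual_increase = 250       # hypothetical additional cases captured each year

    for year in range(1, 6):
        volume = assumed_year1_cases + assumed_annual_increase * (year - 1)
        budget_impact = volume * COST_PER_CASE_RFA
        print(f"Year {year}: {volume} cases -> ${budget_impact:,.0f}")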
RFA is comparable in clinical benefits to vein stripping surgery. It carries the extra upfront cost of the generator and a per-case cost for disposables, but it does not require an operating theatre, anaesthetist, or surgical assistant fees. The impact to the province is expected to be approximately $5 million by Year 5 with the introduction of the new ELT and RFA image-guided endovascular technologies alongside existing surgery for varicose veins.
Conclusion
The conclusions on the comparative outcomes between endovascular RFA and surgical ligation and saphenous vein stripping and between endovascular RFA and laser ablation for VV treatment are summarized in the table below (ES Table 1).
Outcome comparisons of RFA vs. surgery and RFA vs ELT for varicose veins
ELT refers to endovascular laser ablation; RFA, radiofrequency ablation
The outcomes of the evidence-based review on these treatments for VV based on different perspectives are summarized below:
RFA First versus Second Generation Catheters and Segmental Ablation
Ablation with second-generation catheters and segmental ablation offered technical advantages, with improved ease of use and significant decreases in procedure time. RFA with second-generation catheters is also no longer restricted to smaller (< 12 mm diameter) saphenous veins.
The safety profile with the new device and method of energy delivery is as good as, or improved over, that of the first-generation device. No major adverse events were reported in two multicenter prospective cohort studies with 6-month follow-up of over 500 patients. Post-operative complications such as bruising and pain were significantly less with RFA using second-generation catheters than with ELT in two RCTs.
RFA treatment with second-generation catheters has ablation rates that are higher than with first-generation catheters and more comparable with the consistently high rates of ELT.
Endovascular RFA versus Surgery
RFA has a quicker recovery, attributable to decreased pain and fewer minor complications.
RFA, in the short term, was comparable to surgery in treatment effectiveness as assessed by imaging-defined anatomic outcomes such as vein closure, flow, or reflux.
Other treatment outcomes, such as symptomatic relief and HRQOL, were significantly improved in both groups, and between-group differences in the early peri-operative period were likely influenced by pain experiences.
Longer term follow-up was inadequate to evaluate recurrence after either treatment.
Patient satisfaction was high after both treatments but was higher for RFA than for surgery.
Endovascular RFA versus ELT
RFA has significantly less post-operative pain than ELT, but differences were not significant when pain was adjusted for analgesic use, and pain differences between groups did not persist at 1-month follow-up.
Treatment effectiveness, measured as symptom relief and QOL improvement, was similar between the endovascular treatments in the short term (within 1 month).
Treatment effectiveness measured as imaging-defined vein ablation was not evaluated in any of the RCTs (except in one small trial of bilateral VV disease), and results were inconsistently reported in observational studies.
Longer term follow-up was not available to assess recurrence after either treatment.
System Outcomes – RFA Replacing Surgery or Competing with ELT
RFA may offer system advantages in that the treatment can be offered by several medical specialties in outpatient settings and does not require an operating theatre or general anaesthesia. The treatment may result in decanting of patients from the operating room, with decreased pre-surgical investigations, demand on anaesthetists' time, hospital stay, and wait time for VV treatment. It may also provide more reliable outpatient scheduling.
Procedure costs may be lower for endovascular approaches than for surgery, but the budget impact may be greater if RFA becomes an insured service because of the transfer of cases from the private market to the public payer system.
Competition between the RFA and ELT endovascular approaches is likely to continue to stimulate innovation and technical changes to advance patient care and to result in competitive pricing.
PMCID: PMC3377553  PMID: 23074413
