1.  The Task before Psychiatry Today Redux: STSPIR* 
Mens Sana Monographs  2014;12(1):35-70.
This paper outlines six important tasks for psychiatry today, which can be put briefly as: Spread and scale up services; Talk; Science; Psychotherapy; Integrate; and Research excellence. As an acronym, STSPIR.
Spread and scale up services: Spreading mental health services to uncovered areas, and increasing facilities in covered areas:
- Mental disorders are a leading cause of ill health but sit at the bottom of the health agenda;
- Patients face widespread discrimination, human rights violations and a lack of facilities;
- There is a need to stem the brain drain from developing countries;
- At any given point, 10% of the adult population report having some mental or behavioural disorder;
- In India, serious mental disorders affect nearly 80 million people, i.e. the combined population of the northern top of India, including Punjab, Haryana, Jammu and Kashmir, Uttarakhand and Himachal Pradesh;
- Combating the imbalance between the burden of demand and the supply of efficient psychiatric services in all countries, especially developing ones like India, is the first task before psychiatry today. If ever a greater role for activism were needed, this is the field;
- The need is to scale up effective and cost-effective treatments and preventive interventions for mental disorders.

Talk: Speaking to a wider audience about the positive contributions of psychiatry:
- Being aware of, understanding, and countering the massive anti-psychiatry propaganda online and elsewhere;
- Giving a firm answer to anti-psychiatry, even while understanding its transformation into mental health consumerism and opposition to reckless medicalisation;
- Defining normality and abnormality;
- Bringing about greater precision in diagnosis and care;
- Motivating those helped by psychiatry to speak up;
- Setting up informative websites and organising programmes to reduce stigma and spread mental health awareness;
- Setting up regular columns in psychiatry journals around the globe, called ‘Patients Speak’ or something similar, wherein those who have been helped get a chance to voice their stories.

Science: Shrugging off ambivalence and disagreement and searching for commonalities in psychiatric phenomena:
- An idiographic orientation which stresses individuality cannot, and should not, preclude the nomothetic or norm-laying thrust that is the crux of scientific progress;
- The major contribution of science has been to recognize such commonalities so they can be researched, categorized and used for human welfare;
- It is a mistake to stress individuality so much that commonalities are obliterated;
- While the purpose and approach of psychiatry, as of all medicine, has to be humane and caring, therapeutic advancements and aetiologic understandings will result only from a scientific methodology;
- Just caring is not enough if one has not mastered the methods of care, which only science can supply.

Psychotherapy: Psychiatrists continuing to do psychotherapy:
- Psychotherapy must be clearly defined, its parameters and methods firmly delineated, and its effectiveness convincingly demonstrated by evidence-based and controlled trials;
- Psychotherapy research currently suffers from neglect by the mainstream because of the ascendancy of biological psychiatry;
- It suffers resource constraints, as major sponsors such as pharma are not interested;
- It needs funding from sincere research organisations and altruistic sponsors, as well as from professional societies and governments;
- Psychotherapy research will have to provide irrefutable evidence that it works, with replicable studies across geographical areas;
- It will not do for psychiatrists to hand over psychotherapy to clinical psychologists and others.

Integrate approaches: Welcoming biological breakthroughs while supplying psychosocial insights:
- Experimental breakthroughs, both in aetiology and therapeutics, will come mainly from biology, but insights and leads can come from many other fields, especially the psychosocial and philosophical;
- The biological and the psychological are not exclusive but complementary approaches;
- Both integration and reductionism are valid: integration is necessary as an attitude, reductionism as an approach;
- Both the biological and the psychosocial must co-exist in the individual psychiatrist, as much as in the branch itself.

Research excellence: Promoting genuine research alone, and working towards an Indian Nobel Laureate in psychiatry by 2020:
- Stop promoting poor-quality research and researchers, and stop encouraging sycophants and ladder climbers; pick up and hone genuine research talent from among faculty and students;
- Develop consistently high-quality environments in departments, with Heads of Units who recognize, hone and nurture talent, and who never give in to pessimism and cynicism;
- Stop being satisfied with the money, power and prestige that come from wheeling-dealing, groupism and politicking;
- Infinite vistas of opportunity wait in the wings to unfold for the earnest seeker of the mysteries of the ‘mind’, provided he is ready to seek the valuable and stops holding on to the artificial and the superfluous.
doi:10.4103/0973-1229.130295
PMCID: PMC4037900  PMID: 24891797
Biological breakthroughs; Commonalities in psychiatry; Indian Nobel Laureate; Integrate; Positive contributions of psychiatry; Psychosocial insights; Psychotherapy; Research excellence; Scale up services; Science; Stigma; Talk
2.  Anatomy of the Epidemiological Literature on the 2003 SARS Outbreaks in Hong Kong and Toronto: A Time-Stratified Review 
PLoS Medicine  2010;7(5):e1000272.
Weijia Xing and colleagues reviewed the published epidemiological literature on SARS and show that less than a quarter of papers were published during the epidemic itself, suggesting that the research published lagged substantially behind the need for it.
Background
Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.
Methods and Findings
We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared SARS articles with matched-control non-SARS articles according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology accounted for 54%, 30%, 11%, and 6% of the studies, respectively. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of SARS articles submitted during the epidemic were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). The differences in median submission-to-acceptance and acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0–140.1) and 63.5 d (95% CI 18.0–94.1), respectively. The median numbers of citations of SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0–52.0) and 8 (IQR 3.2–21.8), respectively, significantly higher than the median numbers of citations of control articles (15, IQR 8.5–16.5, p<0.05, and 7, IQR 3.0–12.0, p<0.01, respectively).
Conclusions
A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigency of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments to improve timeliness, and thus, relevance, in addition to standardizing comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, which should be adapted to the purpose of an emerging outbreak, taking into account the requirement of high standards of quality for scientific journals and competition with other online resources.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every now and then, a new infectious disease appears in a human population or an old disease becomes much more common or more geographically widespread. Recently, several such “emerging infectious diseases” have become major public health problems. For example, HIV/AIDS, hepatitis C, and severe acute respiratory syndrome (SARS) have all emerged in the past three decades and spread rapidly round the world. When an outbreak (epidemic) of an emerging infectious disease occurs, epidemiologists (scientists who study the causes, distribution, and control of diseases in populations) swing into action, collecting and analyzing data on the new threat to human health. Epidemiological studies are rapidly launched to identify the causative agent of the new disease, to investigate how the disease spreads, to define diagnostic criteria for the disease, to evaluate potential treatments, and to devise ways to control the disease's spread. Public health officials then use the results of these studies to bring the epidemic under control.
Why Was This Study Done?
Clearly, epidemics of emerging infectious diseases can only be controlled rapidly and effectively if the results of epidemiological studies are made widely available in a timely manner. Public health bulletins (for example, the Morbidity and Mortality Weekly Report from the US Centers for Disease Control and Prevention) are an important way of disseminating information, as is the publication of original research in peer-reviewed academic journals. But how timely is this second dissemination route? Submission, peer review, revision, re-review, acceptance, and publication of a piece of academic research can be a long process, the speed of which is affected by the responses of both authors and journals. In this study, the researchers analyze how the results of academic epidemiological research were submitted and published in journals during and after an emerging infectious disease epidemic, using the 2003 SARS epidemic as an example. The first case of SARS was identified in Asia in February 2003, and the disease rapidly spread around the world; 8,098 people became ill with SARS and 774 died before the epidemic was halted in July 2003.
What Did the Researchers Do and Find?
The researchers identified more than 300 journal articles covering epidemiological research into the SARS outbreak in Hong Kong, China, and Toronto, Canada (two cities strongly affected by the epidemic) that were published online or in print between January 1, 2003 and July 31, 2007. Their analysis shows that more than half of the articles were descriptive epidemiological studies, investigations focused on describing the distribution of SARS; a third were analytical epidemiological studies that tried to discover the cause of SARS. Overall, 22% of the journal articles were submitted for publication during the epidemic; only 8% were accepted and only 7% were actually published before it ended. The median (middle) submission-to-acceptance and acceptance-to-publication intervals for SARS articles submitted during the epidemic were 55 and 77.5 days, respectively, much shorter than those for non-SARS articles published in the same journal issues. After the epidemic was over, the submission-to-acceptance and acceptance-to-publication intervals for SARS articles were similar to those of non-SARS articles.
What Do These Findings Mean?
These findings show that, although the academic response to the SARS epidemic was rapid, most articles on the epidemiology of SARS were published after the epidemic was over even though SARS was a major threat to public health. Possible reasons for this publication delay include the time taken by authors to prepare and undertake their studies, to write and submit their papers, and, possibly, their tendency to first submit their results to high profile journals. The time then taken by journals to review the studies, make decisions about publication, and complete the publication process might also have delayed matters. To minimize future delays in the publication of epidemiological research on emerging infectious diseases, epidemiologists could adopt common, predefined protocols and ready-to-use instruments, which would improve timeliness and ensure comparability across studies, suggest the researchers. Journals, in turn, could improve their fast-track procedures and could consider setting up online sections that could be activated when an emerging infectious disease outbreak occurred. Finally, journals could consider altering their review system to speed up the publication process provided the quality of the final published articles was not compromised.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000272.
The US National Institute of Allergy and Infectious Diseases provides information on emerging infectious diseases
The US Centers for Disease Control and Prevention also provides information about emerging infectious diseases, including links to other resources, and information on SARS
Wikipedia has a page on epidemiology (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information on SARS (in several languages)
doi:10.1371/journal.pmed.1000272
PMCID: PMC2864302  PMID: 20454570
3.  An Epidemiological Network Model for Disease Outbreak Detection 
PLoS Medicine  2007;4(6):e210.
Background
Advanced disease-surveillance systems have been deployed worldwide to provide early detection of infectious disease outbreaks and bioterrorist attacks. New methods that improve the overall detection capabilities of these systems can have a broad practical impact. Furthermore, most current generation surveillance systems are vulnerable to dramatic and unpredictable shifts in the health-care data that they monitor. These shifts can occur during major public events, such as the Olympics, as a result of population surges and public closures. Shifts can also occur during epidemics and pandemics as a result of quarantines, the worried-well flooding emergency departments or, conversely, the public staying away from hospitals for fear of nosocomial infection. Most surveillance systems are not robust to such shifts in health-care utilization, either because they do not adjust baselines and alert-thresholds to new utilization levels, or because the utilization shifts themselves may trigger an alarm. As a result, public-health crises and major public events threaten to undermine health-surveillance systems at the very times they are needed most.
Methods and Findings
To address this challenge, we introduce a class of epidemiological network models that monitor the relationships among different health-care data streams instead of monitoring the data streams themselves. By extracting the extra information present in the relationships between the data streams, these models have the potential to improve the detection capabilities of a system. Furthermore, the models' relational nature has the potential to increase a system's robustness to unpredictable baseline shifts. We implemented these models and evaluated their effectiveness using historical emergency department data from five hospitals in a single metropolitan area, recorded over a period of 4.5 y by the Automated Epidemiological Geotemporal Integrated Surveillance real-time public health–surveillance system, developed by the Children's Hospital Informatics Program at the Harvard-MIT Division of Health Sciences and Technology on behalf of the Massachusetts Department of Public Health. We performed experiments with semi-synthetic outbreaks of different magnitudes and simulated baseline shifts of different types and magnitudes. The results show that the network models provide better detection of localized outbreaks, and greater robustness to unpredictable shifts than a reference time-series modeling approach.
Conclusions
The integrated network models of epidemiological data streams and their interrelationships have the potential to improve current surveillance efforts, providing better localized outbreak detection under normal circumstances, as well as more robust performance in the face of shifts in health-care utilization during epidemics and major public events.
Most surveillance systems are not robust to shifts in health care utilization. Ben Reis and colleagues developed network models that detected localized outbreaks better and were more robust to unpredictable shifts.
Editors' Summary
Background.
The main task of public-health officials is to promote health in communities around the world. To do this, they need to monitor human health continually, so that any outbreaks (epidemics) of infectious diseases (particularly global epidemics or pandemics) or any bioterrorist attacks can be detected and dealt with quickly. In recent years, advanced disease-surveillance systems have been introduced that analyze data on hospital visits, purchases of drugs, and the use of laboratory tests to look for tell-tale signs of disease outbreaks. These surveillance systems work by comparing current data on the use of health-care resources with historical data or by identifying sudden increases in the use of these resources. So, for example, more doctors asking for tests for salmonella than in the past might presage an outbreak of food poisoning, and a sudden rise in people buying over-the-counter flu remedies might indicate the start of an influenza pandemic.
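The baseline-comparison logic described above (current counts checked against historical levels, with a sudden rise triggering an alarm) can be sketched in a few lines. The threshold, counts, and salmonella example below are illustrative assumptions, not taken from any deployed surveillance system:

```python
import statistics

def detect_spike(history, current, z_threshold=3.0):
    """Flag `current` as anomalous if it exceeds the historical mean
    by more than `z_threshold` standard deviations.

    A toy version of the baseline-comparison approach; real surveillance
    systems also adjust for seasonality and day-of-week effects."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return current > mean + z_threshold * sd

# Hypothetical daily counts of salmonella test orders
baseline = [12, 9, 11, 10, 13, 8, 11, 12, 10, 9]
print(detect_spike(baseline, 14))  # modest day: no alarm (False)
print(detect_spike(baseline, 30))  # sharp rise: alarm (True)
```

Such a detector works well when the baseline itself is stable, which is exactly the assumption the paper shows can fail during epidemics and major public events.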
Why Was This Study Done?
Existing disease-surveillance systems don't always detect disease outbreaks, particularly in situations where there are shifts in the baseline patterns of health-care use. For example, during an epidemic, people might stay away from hospitals because of the fear of becoming infected, whereas after a suspected bioterrorist attack with an infectious agent, hospitals might be flooded with “worried well” (healthy people who think they have been exposed to the agent). Baseline shifts like these might prevent the detection of increased illness caused by the epidemic or the bioterrorist attack. Localized population surges associated with major public events (for example, the Olympics) are also likely to reduce the ability of existing surveillance systems to detect infectious disease outbreaks. In this study, the researchers developed a new class of surveillance systems called “epidemiological network models.” These systems aim to improve the detection of disease outbreaks by monitoring fluctuations in the relationships between information detailing the use of various health-care resources over time (data streams).
What Did the Researchers Do and Find?
The researchers used data collected over a 3-y period from five Boston hospitals on visits for respiratory (breathing) problems and for gastrointestinal (stomach and gut) problems, and on total visits (15 data streams in total), to construct a network model that included all the possible pair-wise comparisons between the data streams. They tested this model by comparing its ability to detect simulated disease outbreaks implanted into data collected over an additional year with that of a reference model based on individual data streams. The network approach, they report, was better at detecting localized outbreaks of respiratory and gastrointestinal disease than the reference approach. To investigate how well the network model dealt with baseline shifts in the use of health-care resources, the researchers then added in a large population surge. The detection performance of the reference model decreased in this test, but the performance of the complete network model and of models that included relationships between only some of the data streams remained stable. Finally, the researchers tested what would happen in a situation where there were large numbers of “worried well.” Again, the network models detected disease outbreaks consistently better than the reference model.
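The relational idea, watching how streams move together rather than each stream alone, can be sketched as follows. The counts, the log-ratio statistic, and the alarm threshold are simplifying assumptions for illustration, not the paper's actual model:

```python
import math
import statistics

def ratio_alarm(stream_a, stream_b, current_a, current_b, z=3.0):
    """Monitor the log-ratio of two data streams rather than each stream alone.
    A localized outbreak in stream A shifts the ratio; a city-wide population
    surge inflates both streams and leaves the ratio roughly unchanged."""
    history = [math.log(a / b) for a, b in zip(stream_a, stream_b)]
    mu, sd = statistics.mean(history), statistics.stdev(history)
    return abs(math.log(current_a / current_b) - mu) > z * sd

# Hypothetical daily ED visit counts: respiratory visits vs total visits
resp  = [20, 22, 19, 21, 20, 23, 21, 20, 22, 19]
total = [100, 108, 97, 104, 99, 112, 106, 101, 109, 96]

print(ratio_alarm(resp, total, 60, 110))  # respiratory outbreak: alarm (True)
print(ratio_alarm(resp, total, 40, 200))  # population surge, both roughly double: no alarm (False)
```

This illustrates why the network models stayed stable under the simulated population surge: the surge inflates all streams together, so the pairwise relationships barely move.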
What Do These Findings Mean?
These findings suggest that epidemiological network systems that monitor the relationships between health-care resource-utilization data streams might detect disease outbreaks better than current systems under normal conditions and might be less affected by unpredictable shifts in the baseline data. However, because the tests of the new class of surveillance system reported here used simulated infectious disease outbreaks and baseline shifts, the network models may behave differently in real-life situations or if built using data from other hospitals. Nevertheless, these findings strongly suggest that public-health officials, provided they have sufficient computer power at their disposal, might improve their ability to detect disease outbreaks by using epidemiological network systems alongside their current disease-surveillance systems.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040210.
Wikipedia pages on public health (note that Wikipedia is a free online encyclopedia that anyone can edit, and is available in several languages)
A brief description from the World Health Organization of public-health surveillance (in English, French, Spanish, Russian, Arabic, and Chinese)
A detailed report from the US Centers for Disease Control and Prevention called “Framework for Evaluating Public Health Surveillance Systems for the Early Detection of Outbreaks”
The International Society for Disease Surveillance Web site
doi:10.1371/journal.pmed.0040210
PMCID: PMC1896205  PMID: 17593895
4.  A regression model approach to enable cell morphology correction in high-throughput flow cytometry 
Large variations in cell size and shape can undermine traditional gating methods for analyzing flow cytometry data. Correcting for these effects enables analysis of high-throughput data sets, including >5000 yeast samples with diverse cell morphologies.
- The regression model approach corrects for the effects of cell morphology on fluorescence as well as an extremely small and restrictive gate does, but without removing any of the cells.
- In contrast to traditional gating, this approach enables the quantitative analysis of high-throughput flow cytometry experiments, since the regression model can compare biological samples that show little or no overlap in cell morphology.
- Analysis of a high-throughput yeast flow cytometry data set of >5000 biological samples identified key proteins that affect the time and intensity of the bifurcation event that follows the carbon-source transition from glucose to fatty acids, in which some yeast cells undergo major structural changes while others do not.
Flow cytometry is a widely used technique that enables the measurement of different optical properties of individual cells within large populations of cells in a fast and automated manner. For example, by targeting cell-specific markers with fluorescent probes, flow cytometry is used to identify (and isolate) cell types within complex mixtures of cells. In addition, fluorescence reporters can be used in conjunction with flow cytometry to measure protein, RNA or DNA concentration within single cells of a population.
One of the biggest advantages of this technique is that it provides information on how each cell behaves instead of just measuring the population average. This can be essential when analyzing complex samples that consist of diverse cell types or when measuring cellular responses to stimuli. For example, there is an important difference between a 50% expression increase in all cells of a population after stimulation and a 100% increase in only half of the cells while the other half remains unresponsive: the population averages are the same, but the underlying single-cell behaviour is completely different. Another important advantage of flow cytometry is automation, which enables high-throughput studies with thousands of samples and conditions. However, current methods are confounded by populations of cells that are non-uniform in size and granularity. Such variability affects the emitted fluorescence of each cell and adds undesired variability when estimating population fluorescence. It also frustrates sensible comparison between conditions, where not only fluorescence but also cell size and granularity may be affected.
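The 50%-versus-100% example can be made concrete with two hypothetical populations that share a mean but differ completely at the single-cell level (fluorescence in arbitrary units, baseline 100):

```python
import statistics

# Every cell shifts up 50% from a baseline of 100
uniform_shift = [150] * 100
# Half the cells double, half remain unresponsive
bimodal_shift = [200] * 50 + [100] * 50

print(statistics.mean(uniform_shift))   # 150
print(statistics.mean(bimodal_shift))   # 150, identical population average
print(statistics.stdev(uniform_shift))  # 0, homogeneous response
print(statistics.stdev(bimodal_shift))  # ~50, the heterogeneity only single-cell data reveals
```

A bulk (population-average) assay cannot tell these two scenarios apart; per-cell measurements can.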
Traditionally, this problem has been addressed by using ‘gates’ that restrict the analysis to cells with similar morphological properties (i.e. cell size and granularity). Because cells inside the gate are morphologically similar to one another, they show smaller variability in their response within the population. Moreover, applying the same gate to all samples ensures that observed differences between samples are not due to differential cell morphologies.
Gating, however, comes with costs. First, since only a subgroup of cells is selected, the final number of cells analyzed can be significantly reduced. This means that in order to have sufficient statistical power, more cells have to be acquired, which, if even possible in the first place, increases the time and cost of the experiment. Second, finding a good gate for all samples and conditions can be challenging if not impossible, especially in cases where cellular morphology changes dramatically between conditions. Finally, gating is a very user-dependent process, where both the size and shape of the gate are determined by the researcher and will affect the outcome, introducing subjectivity in the analysis that complicates reproducibility.
In this paper, we present an alternative method to gating that addresses the issues stated above. The method is based on a regression model, containing linear and non-linear terms, that estimates and corrects for the effect of cell size and granularity on the observed fluorescence of each cell in a sample. The corrected fluorescence thus becomes ‘free’ of morphological effects.
Because the model uses all cells in the sample, it ensures that the corrected fluorescence is an accurate representation of the sample. In addition, the regression model can predict the expected fluorescence of a sample in areas where there are no cells, making it possible to compare samples that have little overlap with good confidence. Furthermore, because the regression model is automated, it is fully reproducible between labs and conditions. Finally, it allows rapid analysis of big data sets containing thousands of samples.
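A minimal sketch of such a correction, assuming a quadratic regression of log-fluorescence on forward scatter (FSC, a proxy for size) and side scatter (SSC, a proxy for granularity); the published model's exact terms and variables may differ:

```python
import numpy as np

def correct_fluorescence(fsc, ssc, fluo):
    """Regress log-fluorescence on cell size (FSC) and granularity (SSC)
    with linear, quadratic, and interaction terms, then return the
    residuals re-centred on the sample mean. A sketch of the
    morphology-correction idea, not the paper's exact model."""
    y = np.log(fluo)
    X = np.column_stack([np.ones_like(fsc), fsc, ssc,
                         fsc**2, ssc**2, fsc * ssc])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return np.exp(residuals + y.mean())

# Hypothetical sample: fluorescence scales with cell size, plus noise
rng = np.random.default_rng(0)
fsc = rng.uniform(1.0, 2.0, 1000)
ssc = rng.uniform(1.0, 2.0, 1000)
fluo = 100 * fsc * np.exp(rng.normal(0, 0.05, 1000))

corrected = correct_fluorescence(fsc, ssc, fluo)
print(np.corrcoef(fsc, fluo)[0, 1])       # strongly positive before correction
print(np.corrcoef(fsc, corrected)[0, 1])  # near zero after correction
```

Because least-squares residuals are orthogonal to the regressors, the corrected values are, by construction, nearly uncorrelated with the morphology channels.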
To probe the validity of the model, we performed several experiments. We show that the regression model removes morphology-associated variability as well as an extremely small and restrictive gate does, but without the caveat of removing cells. We test the method in different organisms (yeast and human) and applications (protein-level detection, separation of mixed subpopulations). We then apply this method to unveil new biological insights into the mechanistic processes involved in transcriptional noise.
Gene transcription is a process subjected to the randomness intrinsic to any molecular event. Although such randomness may seem to be undesirable for the cell, since it prevents consistent behavior, there are situations where some degree of randomness is beneficial (e.g. bet hedging). For this reason, each gene is tuned to exhibit different levels of randomness or noise depending on its functions. For core and essential genes, the cell has developed mechanisms to lower the level of noise, while for genes involved in the response to stress, the variability is greater.
This gene transcription tuning can be determined at many levels, from the architecture of the transcriptional network, to epigenetic regulation. In our study, we analyze the latter using the response of yeast to the presence of fatty acid in the environment. Fatty acid can be used as energy by yeast, but it requires major structural changes and commitments. We have observed that at the population level, there is a bifurcation event whereby some cells undergo these changes and others do not. We have analyzed this bifurcation event in mutants for all the non-essential epigenetic regulators in yeast and identified key proteins that affect the time and intensity of this bifurcation. Even though fatty acid triggers major morphological changes in the cell, the regression model still makes it possible to analyze the over 5000 flow cytometry samples in this data set in an automated manner, whereas a traditional gating approach would be impossible.
Cells exposed to stimuli exhibit a wide range of responses, ensuring phenotypic variability across the population. Such single-cell behavior is often examined by flow cytometry; however, the gating procedures typically employed to select a small subpopulation of cells with similar morphological characteristics make it difficult, even impossible, to quantitatively compare cells across a large variety of experimental conditions, because these conditions can lead to profound morphological variations. To overcome these limitations, we developed a regression approach to correct for variability in fluorescence intensity due to differences in cell size and granularity without discarding any of the cells, which gating ipso facto does. This approach enables quantitative studies of cellular heterogeneity and transcriptional noise in high-throughput experiments involving thousands of samples. We used this approach to analyze a library of yeast knockout strains and reveal genes required for the population to establish a bimodal response to oleic acid induction. We identify a group of epigenetic regulators and nucleoporins that, by maintaining an ‘unresponsive population’, may provide the population with the advantage of diversified bet hedging.
doi:10.1038/msb.2011.64
PMCID: PMC3202802  PMID: 21952134
flow cytometry; high-throughput experiments; statistical regression model; transcriptional noise
5.  Misrepresentation of Randomized Controlled Trials in Press Releases and News Coverage: A Cohort Study 
PLoS Medicine  2012;9(9):e1001308.
A study conducted by Amélie Yavchitz and colleagues examines the factors associated with “spin” (specific reporting strategies, intentional or unintentional, that emphasize the beneficial effect of treatments) in press releases of clinical trials.
Background
Previous studies indicate that in published reports, trial results can be distorted by the use of “spin” (specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment). We aimed to (1) evaluate the presence of “spin” in press releases and associated media coverage; and (2) evaluate whether findings of randomized controlled trials (RCTs) based on press releases and media coverage are misinterpreted.
Methods and Findings
We systematically searched for all press releases indexed in the EurekAlert! database between December 2009 and March 2010. Of the 498 press releases retrieved and screened, we included press releases for all two-arm, parallel-group RCTs (n = 70). We obtained a copy of the scientific article to which the press release related and we systematically searched for related news items using Lexis Nexis.
“Spin,” defined as specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment, was identified in 28 (40%) scientific article abstract conclusions and in 33 (47%) press releases. From bivariate and multivariable analysis assessing the journal type, funding source, sample size, type of treatment (drug or other), results of the primary outcomes (all nonstatistically significant versus other), author of the press release, and the presence of “spin” in the abstract conclusion, the only factor associated with “spin” in the press release was “spin” in the article abstract conclusions (relative risk [RR] 5.6, [95% CI 2.8–11.1], p<0.001). Findings of RCTs based on press releases were overestimated for 19 (27%) reports. News items were identified for 41 RCTs; 21 (51%) were reported with “spin,” mainly the same type of “spin” as that identified in the press release and article abstract conclusion. Findings of RCTs based on the news items were overestimated for ten (24%) reports.
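The relative risk with its 95% confidence interval reported above can be computed from a 2x2 table with the standard log(RR) normal approximation. This is a generic sketch with hypothetical counts, not the study's actual data or analysis code.

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk and 95% CI for a 2x2 table:
       exposed:   a events out of (a + b)
       unexposed: c events out of (c + d)
    Uses the standard normal approximation on the log(RR) scale."""
    rr = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts (not the study's data): press releases with "spin"
# among articles whose abstract conclusion had "spin" vs. those without.
rr, lo, hi = relative_risk(24, 4, 9, 33)
print(f"RR = {rr:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```

A wide confidence interval such as the study's 2.8–11.1 reflects the modest sample (70 press releases), even when the association itself is strong.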
Conclusion
“Spin” was identified in about half of press releases and media coverage. In multivariable analysis, the main factor associated with “spin” in press releases was the presence of “spin” in the article abstract conclusion.
Editors' Summary
Background
The mass media play an important role in disseminating the results of medical research. Every day, news items in newspapers and magazines and on the television, radio, and internet provide the general public with information about the latest clinical studies. Such news items are written by journalists and are often based on information in “press releases.” These short communications, which are posted on online databases such as EurekAlert! and sent directly to journalists, are prepared by researchers or more often by the drug companies, funding bodies, or institutions supporting the clinical research and are designed to attract favorable media attention to newly published research results. Press releases provide journalists with the information they need to develop and publish a news story, including a link to the peer-reviewed journal (a scholarly periodical containing articles that have been judged by independent experts) in which the research results appear.
Why Was This Study Done?
In an ideal world, journal articles, press releases, and news stories would all accurately reflect the results of health research. Unfortunately, the findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”—reporting that emphasizes the beneficial effects of the experimental (new) treatment. For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment. “Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, it can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand the effects of that “spin”. In this study, the researchers evaluate the presence of “spin” in press releases and associated media coverage and analyze whether the interpretation of RCT results based on press releases and associated news items could lead to the misinterpretation of RCT results.
What Did the Researchers Do and Find?
The researchers identified 70 press releases indexed in EurekAlert! over a 4-month period that described two-arm, parallel-group RCTs. They used Lexis Nexis, a database of news reports from around the world, to identify associated news items for 41 of these press releases and then analyzed the press releases, news items, and abstracts of the scientific articles related to each press release for “spin”. Finally, they interpreted the results of the RCTs using each source of information independently. Nearly half the press releases and article abstract conclusions contained “spin” and, importantly, “spin” in the press releases was associated with “spin” in the article abstracts. The researchers overestimated the benefits of the experimental treatment from the press release as compared to the full-text peer-reviewed article for 27% of reports. Factors that were associated with this overestimation of treatment benefits included publication in a specialized journal and having “spin” in the press release. Of the news items related to press releases, half contained “spin”, usually of the same type as identified in the press release and article abstract. Finally, the researchers overestimated the benefit of the experimental treatment from the news item as compared to the full-text peer-reviewed article in 24% of cases.
What Do These Findings Mean?
These findings show that “spin” in press releases and news reports is related to the presence of “spin” in the abstract of peer-reviewed reports of RCTs and suggest that the interpretation of RCT results based solely on press releases or media coverage could distort the interpretation of research findings in a way that favors experimental treatments. This interpretation shift is probably related to the presence of “spin” in peer-reviewed article abstracts, press releases, and news items and may be partly responsible for a mismatch between the perceived and real beneficial effects of new treatments among the general public. Overall, these findings highlight the important role that journal reviewers and editors play in disseminating research findings. These individuals, the researchers conclude, have a responsibility to ensure that the conclusions reported in the abstracts of peer-reviewed articles are appropriate and do not over-interpret the results of clinical research.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001308.
The PLOS Hub for Clinical Trials, which collects PLOS journals relating to clinical trials, includes some other articles on “spin” in clinical trial reports
EurekAlert! is a free online database of science press releases
The UK National Health Service Choices website includes Beyond the Headlines, a resource that provides an unbiased and evidence-based analysis of health stories that make the news for both the public and health professionals
The US-based organization HealthNewsReview, a project supported by the Foundation for Informed Medical Decision Making, also provides expert reviews of news stories
doi:10.1371/journal.pmed.1001308
PMCID: PMC3439420  PMID: 22984354
6.  Prevention of LPS-Induced Acute Lung Injury in Mice by Mesenchymal Stem Cells Overexpressing Angiopoietin 1 
PLoS Medicine  2007;4(9):e269.
Background
The acute respiratory distress syndrome (ARDS), a clinical complication of severe acute lung injury (ALI) in humans, is a leading cause of morbidity and mortality in critically ill patients. ALI is characterized by disruption of the lung alveolar–capillary membrane barrier and resultant pulmonary edema associated with a proteinaceous alveolar exudate. Current specific treatment strategies for ALI/ARDS are lacking. We hypothesized that mesenchymal stem cells (MSCs), with or without transfection with the vasculoprotective gene angiopoietin 1 (ANGPT1), would have beneficial effects in experimental ALI in mice.
Methods and Findings
Syngeneic MSCs with or without transfection with plasmid containing the human ANGPT1 gene (pANGPT1) were delivered through the right jugular vein of mice 30 min after intratracheal instillation of lipopolysaccharide (LPS) to induce lung injury. Administration of MSCs significantly reduced LPS-induced pulmonary inflammation, as reflected by reductions in total cell and neutrophil counts in bronchoalveolar lavage (BAL) fluid (53%, 95% confidence interval [CI] 7%–101%; and 60%, CI 4%–116%, respectively) as well as by reduced levels of proinflammatory cytokines in both BAL fluid and lung parenchymal homogenates. Furthermore, administration of MSCs transfected with pANGPT1 resulted in nearly complete reversal of LPS-induced increases in lung permeability, as assessed by reductions in IgM and albumin levels in BAL (96%, CI 6%–185%; and 74%, CI 23%–126%, respectively). Fluorescently tagged MSCs were detected in the lung tissues by confocal microscopy and flow cytometry in both naïve and LPS-injured animals for up to 3 d.
Conclusions
Treatment with MSCs alone significantly reduced LPS-induced acute pulmonary inflammation in mice, while administration of pANGPT1-transfected MSCs resulted in a further improvement in both alveolar inflammation and permeability. These results suggest a potential role for cell-based ANGPT1 gene therapy to treat clinical ALI/ARDS.
Using a mouse model of acute respiratory distress syndrome, Duncan Stewart and colleagues report that rescue with mesenchymal stem cells expressing human angiopoietin 1 can avert lung injury from lipopolysaccharide.
Editors' Summary
Background.
Critically ill people who have had an injury to their lungs, for example through pneumonia, trauma, or an immune response to infection, may end up developing a serious complication in the lung termed acute respiratory distress syndrome (ARDS). In ARDS, inflammation develops in the lung, and fluid builds up in the alveoli (the air sacs resembling “bunches of grapes” at the ends of the network of tubes in the lung). This buildup of fluid prevents oxygen from being carried efficiently from air into the blood; the individual consequently experiences problems breathing and can develop further serious complications, which contribute significantly to the burden of illness among people in intensive care units. The death rate among individuals who do develop ARDS is very high, upward of 30%. Normally, individuals with ARDS are given extra oxygen, and may need a machine to help them breathe; treatments also focus on addressing the underlying causes in each particular patient. However, currently there are very few specific treatments that address ARDS itself.
Why Was This Study Done?
The researchers here wanted to work toward new treatment options for individuals with ARDS. One possible approach involves cells known as mesenchymal stem cells (MSCs). These cells are typically found in the bone marrow and have a property shared by very few other cell types in the body; they are able to carry on dividing and renewing themselves, and can eventually develop into many other types of cell. The researchers already knew that MSCs could become incorporated into injured lungs in mice and develop there into the tissue layers lining the lung. Some interesting work had also been done on a protein called angiopoietin 1 (ANGPT1), which seemed to play a role in protecting against inflammation in blood vessels. Therefore, there was a strong rationale for carrying out experiments in mice to see if MSCs engineered to produce the ANGPT1 protein might “rescue” lung injury in mice. These experiments would be an initial step toward developing possible new treatments for humans with ARDS.
What Did the Researchers Do and Find?
The researchers used a mouse model to mimic the human ARDS condition. This involved injecting the windpipe of experimental mice with lipopolysaccharide (a substance normally found on the outer surface of bacteria that brings about an immune reaction in the lung). After 30 minutes, the mice were then injected with either salt solution (as a control), the MSCs, or MSCs producing the ANGPT1 protein. The researchers then looked at markers of lung inflammation, the appearance of the lungs under a microscope, and whether the injected MSCs had become incorporated into the lung tissue.
The lipopolysaccharide brought about a large increase in the number of inflammatory cells in the lung fluid, which was reduced in the mice given MSCs. Furthermore, in mice given the MSCs producing ANGPT1 protein, the number of inflammatory cells was reduced to a level similar to that of mice that had not been given lipopolysaccharide. When the researchers looked at the appearance under the microscope of lungs from mice that had been given lipopolysaccharide, they saw signs of inflammation and fluid coming out into the lung air spaces. These signs were reduced among both mice treated with MSCs and those treated with MSCs producing ANGPT1. The researchers also measured the “leakiness” of the lung tissues in lipopolysaccharide-treated mice; MSCs seemed to reduce the leakiness to some extent, and the lungs of mice treated with MSCs producing ANGPT1 were no more leaky than those of mice that had never been injected with lipopolysaccharide. Finally, the MSCs were seen to be incorporated into lung tissue by three days after injection, but after that were lost from the lung.
What Do These Findings Mean?
Previous research done by the same group had shown that fibroblasts producing ANGPT1 could prevent lung injury in rats later given lipopolysaccharide. The experiments reported here go a step further than this, and suggest that MSCs producing ANGPT1 can “rescue” the condition of mouse lungs that had already been given lipopolysaccharide. In addition, treatment with MSCs alone also produced beneficial effects. This opens up a possible new treatment strategy for ARDS in humans. However, it should be emphasized that the animal model used here is not a precise parallel of ARDS in humans, and that more research remains to be done before human studies of this sort could be considered.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040269.
Medline Plus entry on acute respiratory distress syndrome, providing basic information about what ARDS is, its effects, and how it is currently managed
ARDS Network from the US National Heart, Lung, and Blood Institute of the National Institutes of Health; the site provides frequently asked questions about ARDS as well as a list of clinical trials conducted by the network
Information about stem cells from the US National Institutes of Health, including information about the potential uses of stem cells
Wikipedia page about mesenchymal stem cells (note: Wikipedia is an internet encyclopedia anyone can edit)
doi:10.1371/journal.pmed.0040269
PMCID: PMC1961632  PMID: 17803352
7.  Collaboration for Improved Disease Surveillance Literature Review 
Objective
To improve the method of automated retrieval of surveillance-related literature from a wide range of indexed repositories.
Introduction
The ISDS Research Committee (RC) is an interdisciplinary group of researchers interested in various topics related to disease surveillance. The RC hosts a literature review process with a permanent repository of relevant journal articles and bimonthly calls that provide a forum for discussion and author engagement. The calls have led to workgroups and society-wide events, boosted interest in the ISDS Conference, and fostered networking among participants.
Since 2007, the RC has identified and classified published articles using an automated search method in support of ISDS's mission of advancing the science and practice of disease surveillance by fostering collaboration and increasing awareness of innovations in the field of surveillance. The RC literature review efforts have provided an opportunity for interprofessional collaboration and have resulted in a repository of over 1,000 articles, but feedback from ISDS members indicated that relevant articles were not being captured by the existing methodology. The method of automated literature retrieval was thus refined to improve efficiency and inclusiveness of stakeholder interests.
Methods
The earlier literature review method was implemented from March 2007 to March 2012. PubCrawler [1] (articles indexed in Medline) and Google Scholar [2] search results were sent to the RC via automated e-mail. To refine this method, the RC developed search strings in PubMed [3], Embase [4], and Scopus [5], consisting of over 100 terms suggested by members. After evaluating these methods, we found that the Scopus search was the most comprehensive and improved the cross-disciplinary scope. Scopus results allowed filtering of 50–100 titles and abstracts in fewer than 30 minutes each week for the identification of relevant articles (Figure).
Journal titles were categorized to assess the increased range of fields covered; categories include epidemiology, agriculture, economics, and medicine (51 categories total).
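The screening step described above, checking each retrieved title and abstract against a list of member-suggested terms, can be illustrated with a toy keyword filter. The terms and records below are made up for illustration; they are not the Research Committee's actual search strings.

```python
# Toy keyword screening over retrieved records. The terms and records
# are hypothetical, not the RC's actual 100+ term search strings.
SEARCH_TERMS = {"surveillance", "outbreak", "biosurveillance", "syndromic"}

records = [
    {"title": "Syndromic surveillance in emergency departments", "abstract": "..."},
    {"title": "Crop yields under drought stress", "abstract": "..."},
    {"title": "Outbreak detection with Bayesian methods", "abstract": "..."},
]

def relevant(record):
    """Flag a record if any search term appears in its title or abstract."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return any(term in text for term in SEARCH_TERMS)

hits = [r["title"] for r in records if relevant(r)]
print(hits)
```

In practice the RC ran such strings inside the databases themselves (Scopus proved most comprehensive), with a reviewer manually confirming the 50–100 weekly candidates.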
Results
Since implementing the new method, potentially relevant articles identified per month increased from an average of 19 (SD: 13; n = 31) to 159 (SD: 63; n = 3). Both methods identified articles in the health sciences, but the new search also captured articles in the life, physical, and social sciences. Between March 2007 and March 2012, articles selected were classified into an average of 10 different categories per literature review (SD: 4; n = 31) versus an average of 33 categories (SD: 5; n = 3) with the updated process.
Conclusions
The new search method improves upon the previous one: it captures relevant articles indexed in health science and other secondary databases beyond Medline. The new method has yielded a greater number of relevant articles, from a broader range of disciplines, in a reduced amount of preparation time compared with the previous search method. This improvement may increase multi-disciplinary discussions and partnerships, but changes in online publishing pose challenges to continued access to the new range of articles.
PMCID: PMC3692894
Disease surveillance literature; ISDS Research Committee; Literature search
8.  The Chilling Effect: How Do Researchers React to Controversy? 
PLoS Medicine  2008;5(11):e222.
Background
Can political controversy have a “chilling effect” on the production of new science? This is a timely concern, given how often American politicians are accused of undermining science for political purposes. Yet little is known about how scientists react to these kinds of controversies.
Methods and Findings
Drawing on interview (n = 30) and survey data (n = 82), this study examines the reactions of scientists whose National Institutes of Health (NIH)-funded grants were implicated in a highly publicized political controversy. Critics charged that these grants were “a waste of taxpayer money.” The NIH defended each grant and no funding was rescinded. Nevertheless, this study finds that many of the scientists whose grants were criticized now engage in self-censorship. About half of the sample said that they now remove potentially controversial words from their grant applications, and a quarter reported eliminating entire topics from their research agendas. Four researchers reportedly left for more secure positions entirely, either outside academia or in jobs that guaranteed salaries. About 10% of the group reported that this controversy strengthened their commitment to complete their research and disseminate it widely.
Conclusions
These findings provide evidence that political controversies can shape what scientists choose to study. Debates about the politics of science usually focus on the direct suppression, distortion, and manipulation of scientific results. This study suggests that scholars must also examine how scientists may self-censor in response to political events.
Drawing on interview and survey data, Joanna Kempner's study finds that political controversies shape what many scientists choose not to study.
Editors' Summary
Background.
Scientific research is an expensive business and, inevitably, the organizations that fund this research—governments, charities, and industry—play an important role in determining the directions that this research takes. Funding bodies can have both positive and negative effects on the acquisition of scientific knowledge. They can pump money into topical areas such as the human genome project. Alternatively, by withholding funding, they can discourage some types of research. So, for example, US federal funds cannot be used to support many aspects of human stem cell research. “Self-censoring” by scientists can also have a negative effect on scientific progress. That is, some scientists may decide to avoid areas of research in which there are many regulatory requirements, political pressure, or in which there is substantial pressure from advocacy groups. A good example of this last type of self-censoring is the withdrawal of many scientists from research that involves certain animal models, like primates, because of animal rights activists.
Why Was This Study Done?
Some people think that political controversy might also encourage scientists to avoid some areas of scientific inquiry, but no studies have formally investigated this possibility. Could political arguments about the value of certain types of research influence the questions that scientists pursue? An argument of this sort occurred in the US in 2003 when Patrick Toomey, who was then a Republican Congressional Representative, argued that National Institutes of Health (NIH) grants supporting research into certain aspects of sexual behavior were “much less worthy of taxpayer funding” than research on “devastating diseases,” and proposed an amendment to the 2004 NIH appropriations bill (which regulates the research funded by NIH). The Amendment was rejected, but more than 200 NIH-funded grants, most of which examined behaviors that affect the spread of HIV/AIDS, were internally reviewed later that year; NIH defended each grant, so none were curtailed. In this study, Joanna Kempner investigates how the scientists whose US federal grants were targeted in this clash between politics and science responded to the political controversy.
What Did the Researchers Do and Find?
Kempner interviewed 30 of the 162 principal investigators (PIs) whose grants were reviewed. She asked them to describe their research, the grants that were reviewed, and their experience with NIH before, during, and after the controversy. She also asked them whether this experience had changed their research practice. She then used the information from these interviews to design a survey that she sent to all the PIs whose grants had been reviewed; 82 responded. About half of the scientists interviewed and/or surveyed reported that they now remove “red flag” words (for example, “AIDS” and “homosexual”) from the titles and abstracts of their grant applications. About one-fourth of the respondents no longer included controversial topics (for example, “abortion” and “emergency contraception”) in their research agendas, and four researchers had made major career changes as a result of the controversy. Finally, about 10% of respondents said that their experience had strengthened their commitment to see their research completed and its results published although even many of these scientists also engaged in some self-censorship.
What Do These Findings Mean?
These findings show that, even though no funding was withdrawn, self-censoring is now common among the scientists whose grants were targeted during this particular political controversy. Because this study included researchers in only one area of health research, its findings may not be generalizable to other areas of research. Furthermore, because only half of the PIs involved in the controversy responded to the survey, these findings may be affected by selection bias. That is, the scientists most anxious about the effects of political controversy on their research funding (and thus more likely to engage in self-censorship) may not have responded. Nevertheless, these findings suggest that the political environment might have a powerful effect on self-censorship by scientists and might dissuade some scientists from embarking on research projects that they would otherwise have pursued. Further research into what Kempner calls the “chilling effect” of political controversy on scientific research is now needed to ensure that a healthy balance can be struck between political involvement in scientific decision making and scientific progress.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050222.
The Consortium of Social Science Associations, an advocacy organization that provides a bridge between the academic research community and Washington policymakers, has more information about the political controversy initiated by Patrick Toomey
Some of Kempner's previous research on self-censorship by scientists is described in a 2005 National Geographic news article
doi:10.1371/journal.pmed.0050222
PMCID: PMC2586361  PMID: 19018657
9.  Reinterpreting Ethnic Patterns among White and African American Men Who Inject Heroin: A Social Science of Medicine Approach 
PLoS Medicine  2006;3(10):e452.
Background
Street-based heroin injectors represent an especially vulnerable population group subject to negative health outcomes and social stigma. Effective clinical treatment and public health intervention for this population requires an understanding of their cultural environment and experiences. Social science theory and methods offer tools to understand the reasons for economic and ethnic disparities that cause individual suffering and stress at the institutional level.
Methods and Findings
We used a cross-methodological approach that incorporated quantitative, clinical, and ethnographic data collected by two contemporaneous long-term San Francisco studies, one epidemiological and one ethnographic, to explore the impact of ethnicity on street-based heroin-injecting men 45 years of age or older who were self-identified as either African American or white. We triangulated our ethnographic findings by statistically examining 14 relevant epidemiological variables stratified by median age and ethnicity. We observed significant differences in social practices between self-identified African Americans and whites in our ethnographic social network sample with respect to patterns of (1) drug consumption; (2) income generation; (3) social and institutional relationships; and (4) personal health and hygiene. African Americans and whites tended to experience different structural relationships to their shared condition of addiction and poverty. Specifically, this generation of San Francisco injectors grew up as the children of poor rural to urban immigrants in an era (the late 1960s through 1970s) when industrial jobs disappeared and heroin became fashionable. This was also when violent segregated inner city youth gangs proliferated and the federal government initiated its “War on Drugs.” African Americans had earlier and more negative contact with law enforcement but maintained long-term ties with their extended families. Most of the whites were expelled from their families when they began engaging in drug-related crime. These historical-structural conditions generated distinct presentations of self. Whites styled themselves as outcasts, defeated by addiction. They professed to be injecting heroin to stave off “dopesickness” rather than to seek pleasure. African Americans, in contrast, cast their physical addiction as an oppositional pursuit of autonomy and pleasure. They considered themselves to be professional outlaws and rejected any appearance of abjection. 
Many, but not all, of these ethnographic findings were corroborated by our epidemiological data, highlighting the variability of behaviors within ethnic categories.
Conclusions
Bringing quantitative and qualitative methodologies and perspectives into a collaborative dialog among cross-disciplinary researchers highlights the fact that clinical practice must go beyond simple racial or cultural categories. A clinical social science approach provides insights into how sociocultural processes are mediated by historically rooted and institutionally enforced power relations. Recognizing the logical underpinnings of ethnically specific behavioral patterns of street-based injectors is the foundation for cultural competence and for successful clinical relationships. It reduces the risk of suboptimal medical care for an exceptionally vulnerable and challenging patient population. Social science approaches can also help explain larger-scale patterns of health disparities; inform new approaches to structural and institutional-level public health initiatives; and enable clinicians to take more leadership in changing public policies that have negative health consequences.
Bourgois and colleagues found that the African American and white men in their study had a different pattern of drug use and risk behaviors, adopted different strategies for survival, and had different personal histories.
Editors' Summary
Background.
There are stark differences in the health of different ethnic groups in America. For example, the life expectancy for white men is 75.4 years, but it is only 69.2 years for African-American men. The reasons behind these disparities are unclear, though there are several possible explanations. Perhaps, for example, different ethnic groups are treated differently by health professionals (with some groups receiving poorer quality health care). Or maybe the health disparities are due to differences across ethnic groups in income level (we know that richer people are healthier). These disparities are likely to persist unless we gain a better understanding of how they arise.
Why Was This Study Done?
The researchers wanted to study the health of a very vulnerable community of people: heroin users living on the streets in the San Francisco Bay Area. The health status of this community is extremely poor, and its members are highly stigmatized—including by health professionals themselves. The researchers wanted to know whether African American men and white men who live on the streets have a different pattern of drug use, whether they adopt varying strategies for survival, and whether they have different personal histories. Knowledge of such differences would help the health community to provide more tailored and culturally appropriate interventions. Physicians, nurses, and social workers often treat street-based drug users, especially in emergency rooms and free clinics. These health professionals regularly report that their interactions with street-based drug users are frustrating and confrontational. The researchers hoped that their study would help these professionals to have a better understanding of the cultural backgrounds and motivations of their drug-using patients.
What Did the Researchers Do and Find?
Over the course of six years, the researchers directly observed about 70 men living on the streets who injected heroin as they went about their usual lives (this type of research is called “participant observation”). The researchers specifically looked to see whether there were differences between the white and African American men. All the men gave their consent to be studied in this way and to be photographed. The researchers also studied a database of interviews with almost 7,000 injection drug users conducted over five years, drawing out the data on differences between white and African American men. The researchers found that the white men were more likely to supplement their heroin use with inexpensive fortified wine, while African American men were more likely to supplement heroin with crack. Most of the white men were expelled from their families when they began engaging in drug-related crime, and these men tended to consider themselves as destitute outcasts. African American men had earlier and more negative contact with law enforcement but maintained long-term ties with their extended families, and these men tended to consider themselves as professional outlaws. The white men persevered less in attempting to find a vein in which to inject heroin, and so were more likely to inject the drug directly under the skin—this meant that they were more likely to suffer from skin abscesses. The white men generated most of their income from panhandling (begging for money), while the African American men generated most of their income through petty crime and/or through offering services such as washing car windows at gas stations.
What Do These Findings Mean?
Among street-based heroin users, there are important differences between white men and African American men in the type of drugs used, the method of drug use, their social backgrounds, the way in which they identify themselves, and the health risks that they take. By understanding these differences, health professionals should be better placed to provide tailored and appropriate care when these men present to clinics and emergency rooms. As the researchers say, “understanding of different ethnic populations of drug injectors may reduce difficult clinical interactions and resultant physician frustration while improving patient access and adherence to care.” One limitation of this study is that the researchers studied one specific community in one particular area of the US—so we should not assume that their findings would apply to street-based heroin users elsewhere.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030452.
The US Centers for Disease Control (CDC) has a web page on HIV prevention among injection drug users
The World Health Organization has collected documents on reducing the risk of HIV in injection drug users and on harm reduction approaches
The International Harm Reduction Association has information relevant to a global audience on reducing drug-related harm among individuals and communities
US-focused information on harm reduction is available via the websites of the Harm Reduction Coalition and the Chicago Recovery Alliance
Canada-focused information can be found at the Street Works Web site
The Harm Reduction Journal publishes open-access articles
The CDC has a web page on eliminating racial and ethnic health disparities
The Drug Policy Alliance has a web page on drug policy in the United States
doi:10.1371/journal.pmed.0030452
PMCID: PMC1621100  PMID: 17076569
10.  Eurocan plus report: feasibility study for coordination of national cancer research activities 
Summary
The EUROCAN+PLUS Project, called for by the European Parliament, was launched in October 2005 as a feasibility study for coordination of national cancer research activities in Europe. Over the course of the next two years, the Project process organized over 60 large meetings and countless smaller meetings that gathered in total over a thousand people, the largest Europe–wide consultation ever conducted in the field of cancer research.
Despite a strong tradition in biomedical science in Europe, fragmentation and lack of sustainability remain formidable challenges for implementing innovative cancer research and cancer care improvement. There is an enormous duplication of research effort in the Member States, which wastes time, wastes money and severely limits the total intellectual concentration on the wide cancer problem. There is a striking lack of communication between some of the biggest actors on the European scene, and there are palpable tensions between funders and those researchers seeking funds.
It is essential to include the patients’ voice in the establishment of priority areas in cancer research at the present time. The necessity to have dialogue between funders and scientists to establish the best mechanisms to meet the needs of the entire community is evident. A top priority should be the development of translational research (in its widest form), leading to the development of effective and innovative cancer treatments and preventive strategies. Translational research ranges from bench–to–bedside innovative cancer therapies and extends to include bringing about changes in population behaviours when a risk factor is established.
The EUROCAN+PLUS Project recommends the creation of a small, permanent and independent European Cancer Initiative (ECI). This should be a model structure and was widely supported at both General Assemblies of the project. The ECI should assume responsibility for stimulating innovative cancer research and facilitating processes, becoming the common voice of the cancer research community and serving as an interface between the cancer research community and European citizens, patients’ organizations, European institutions, Member States, industry and small and medium enterprises (SMEs), putting into practice solutions aimed at alleviating barriers to collaboration and coordination of cancer research activities in the European Union, and dealing with legal and regulatory issues. The development of an effective ECI will require time, but this entity should be established immediately. As an initial step, coordination efforts should be directed towards the creation of a platform on translational research that could encompass (1) coordination between basic, clinical and epidemiological research; (2) formal agreements of co–operation between comprehensive cancer centres and basic research laboratories throughout Europe and (3) networking between funding bodies at the European level.
The European Parliament and its instruments have had a major influence in cancer control in Europe, notably in tobacco control and in the implementation of effective population–based screening. To make further progress there is a need for novelty and innovation in cancer research and prevention in Europe, and having a platform such as the ECI, where those involved in all aspects of cancer research can meet, discuss and interact, is a decisive development for Europe.
Executive Summary
Cancer is one of the biggest public health crises facing Europe in the 21st century—one for which Europe is currently neither prepared nor preparing itself. Cancer is a major cause of death in Europe, with two million deaths and three million new cases diagnosed annually, and the situation is set to worsen as the population ages.
These facts led the European Parliament, through the Research Directorate-General of the European Commission, to call for initiatives for better coordination of cancer research efforts in the European Union. The EUROCAN+PLUS Project was launched in October 2005 as a feasibility study for coordination of national cancer research activities. Over the course of the next two years, the Project process organized over 60 large meetings and countless smaller meetings that gathered in total over a thousand people. In this respect, the Project became the largest Europe-wide consultation ever conducted in the field of cancer research, implicating researchers, cancer centres and hospitals, administrators, healthcare professionals, funding agencies, industry, patients’ organizations and patients.
The Project first identified barriers impeding research and collaboration in research in Europe. Despite a strong tradition in biomedical science in Europe, fragmentation and lack of sustainability remain the formidable challenges for implementing innovative cancer research and cancer care improvement. There is an enormous duplication of research effort in the Member States, which wastes time, wastes money and severely limits the total intellectual concentration on the wide cancer problem. There is a striking lack of communication between some of the biggest actors on the European scene, and there are palpable tensions between funders and those researchers seeking funds.
In addition, there is a shortage of leadership, a multiplicity of institutions each focusing on its own agenda, sub–optimal contact with industry, inadequate training, non–existent career paths, low personnel mobility in research especially among clinicians and inefficient funding—all conspiring against efficient collaboration in cancer care and research. European cancer research today does not have a functional translational research continuum, that is the process that exploits biomedical research innovations and converts them into prevention methods, diagnostic tools and therapies. Moreover, epidemiological research is not integrated with other types of cancer research, and the implementation of the European Directives on Clinical Trials 1 and on Personal Data Protection 2 has further slowed the innovation process in Europe. Furthermore, large inequalities in health and research exist between the EU–15 and the New Member States.
The picture is not entirely bleak, however: the European cancer research scene presents several strengths, such as excellent basic and clinical research and innovative etiological research, that should be better exploited.
When considering recommendations, several priority dimensions had to be retained. It is essential that proposals include actions and recommendations that can benefit all Member States of the European Union and not just States with the elite centres. It is also essential to have a broader patient orientation to help provide the knowledge to establish cancer control possibilities when we exhaust what can be achieved by the implementation of current knowledge. It is vital that the actions proposed can contribute to the Lisbon Strategy to make Europe more innovative and competitive in (cancer) research.
The Project participants identified six areas for which consensus solutions should be implemented in order to obtain better coordination of cancer research activities. The required solutions are as follows: (1) the proactive management of innovation, detection, facilitation of collaborations and maintenance of healthy competition within the European cancer research community; (2) the establishment of an exchange portal of information for health professionals, patients and policy makers; (3) the provision of guidance for translational and clinical research, including the establishment of a translational research platform involving comprehensive cancer centres and cancer research centres; (4) the coordination of calls and financial management of cancer research projects; (5) the construction of a ‘one–stop shop’ as a contact interface between the industry, small and medium enterprises, scientists and other stakeholders; and (6) the support of greater involvement of healthcare professionals in translational research and multidisciplinary training.
In the course of the EUROCAN+PLUS consultative process, several key collaborative projects emerged between the various groups and institutes engaged in the consultation. There was a collaboration network established with Europe’s leading Comprehensive Cancer Centres; funding was awarded for a closer collaboration of Owners of Cancer Registries in Europe (EUROCOURSE); there was funding received from FP7 for an extensive network of leading Biological Resource Centres in Europe (BBMRI); a Working Group identified the special needs of Central, Eastern and South–eastern Europe and proposed a remedy (‘Warsaw Declaration’), and the concept of developing a one–stop shop for dealing with academia and industry including the Innovative Medicines Initiative (IMI) was discussed in detail.
Several other dimensions currently lacking were identified. There is an absolute necessity to include the patients’ voice in the establishment of priority areas in cancer research at the present time. It was a salutary lesson when it was recognized that all that is known about the quality of life of the cancer patient comes from the experience of a tiny proportion of cancer patients included in a few clinical trials. The necessity to have dialogue between funders and scientists to establish the best mechanisms to meet the needs of the entire community was evident. A top priority should be the development of translational research (in its widest form) and the development of effective and innovative cancer treatments and preventative strategies in the European Union. Translational research ranges from bench-to-bedside innovative cancer therapies and extends to include bringing about changes in population behaviours when a risk factor is established.
Having taken note of the barriers and the solutions and having examined relevant examples of existing European organizations in the field, it was agreed during the General Assembly of 19 November 2007 that the EUROCAN+PLUS Project had to recommend the creation of a small, permanent and neutral ECI. This should be a model structure and was widely supported at both General Assemblies of the project. The proposal is based on the successful model of the European Molecular Biology Organisation (EMBO), and its principal aims include providing a forum where researchers from all backgrounds and from all countries can meet with members of other specialities including patients, nurses, clinicians, funders and scientific administrators to develop priority programmes to make Europe more competitive in research and more focused on the cancer patient.
The ECI should assume responsibility for: (1) stimulating innovative cancer research and facilitating processes; (2) becoming the common voice of the cancer research community and serving as an interface between the cancer research community and European citizens, patients’ organizations, European institutions, Member States, industry and small and medium enterprises; (3) putting into practice the aforementioned solutions aimed at alleviating barriers and coordinating cancer research activities in the EU; and (4) dealing with legal and regulatory issues.
Solutions implemented through the ECI will lead to better coordination and collaboration throughout Europe, more efficient use of resources, an increase in Europe’s attractiveness to the biomedical industry and better quality of cancer research and education of health professionals.
The Project considered that European legal instruments currently available were inadequate for addressing many aspects of the barriers identified and for the implementation of effective, lasting solutions. The legal environment that could shelter an entity like the ECI therefore remains to be defined, and defining it should be a priority. In this context, the initiative of the European Commission for a new legal entity for research infrastructure might be a step in this direction. The development of an effective ECI will require time, but the entity should be established immediately. As an initial step, coordination efforts should be directed towards the creation of a platform on translational research that could encompass: (1) coordination between basic, clinical and epidemiological research; (2) formal agreements of co-operation between comprehensive cancer centres and basic research laboratories throughout Europe; (3) networking between funding bodies at the European level. Another topic deserving immediate attention is the creation of a European database on cancer research projects and cancer research facilities.
Despite enormous progress in cancer control in Europe during the past two decades, there was an increase of 300,000 in the number of new cases of cancer diagnosed between 2004 and 2006. The European Parliament and its instruments have had a major influence in cancer control, notably in tobacco control and in the implementation of effective population–based screening. To make further progress there is a need for novelty and innovation in cancer research and prevention in Europe, and having a platform such as the ECI, where those involved in all aspects of cancer research can meet, discuss and interact, is a decisive development for Europe.
doi:10.3332/ecancer.2011.84
PMCID: PMC3234055  PMID: 22274749
11.  Conflicts of Interest at Medical Journals: The Influence of Industry-Supported Randomised Trials on Journal Impact Factors and Revenue – Cohort Study 
PLoS Medicine  2010;7(10):e1000354.
Andreas Lundh and colleagues investigated the effect of publication of large industry-supported trials on citations and journal income, through reprint sales, in six general medical journals
Background
Transparency in reporting of conflict of interest is an increasingly important aspect of publication in medical journals. Publication of large industry-supported trials may generate many citations and journal income through reprint sales and thereby be a source of conflicts of interest for journals. We investigated industry-supported trials' influence on journal impact factors and revenue.
Methods and Findings
We sampled six major medical journals (Annals of Internal Medicine, Archives of Internal Medicine, BMJ, JAMA, The Lancet, and New England Journal of Medicine [NEJM]). For each journal, we identified randomised trials published in 1996–1997 and 2005–2006 using PubMed, and categorized the type of financial support. Using Web of Science, we investigated citations of industry-supported trials and their influence on journal impact factors over a ten-year period. We contacted journal editors and retrieved tax information on income from industry sources. The proportion of trials with sole industry support varied between journals, from 7% in BMJ to 32% in NEJM in 2005–2006. Industry-supported trials were more frequently cited than trials with other types of support, and omitting them from the impact factor calculation decreased journal impact factors. The decrease varied considerably between journals, from 1% for BMJ to 15% for NEJM in 2007. For the two journals disclosing data, income from the sales of reprints contributed 3% and 41% of the total income for BMJ and The Lancet, respectively, in 2005–2006.
Conclusions
Publication of industry-supported trials was associated with an increase in journal impact factors. Sales of reprints may provide a substantial income. We suggest that journals disclose financial information in the same way that they require it from their authors, so that readers can assess the potential effect of different types of papers on journals' revenue and impact.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Medical journals publish many different types of papers that inform doctors about the latest research advances and the latest treatments for their patients. They publish articles that describe laboratory-based research into the causes of diseases and the identification of potential new drugs. They publish the results of early clinical trials in which a few patients are given a potential new drug to check its safety. Finally and most importantly, they publish the results of randomized controlled trials (RCTs). RCTs are studies in which large numbers of patients are randomly allocated to different treatments without the patient or the clinician knowing the allocation, and in which the efficacy of the various treatments is compared. RCTs are the best way of determining whether a new drug is effective and have to be completed before a drug can be marketed. Because RCTs are very expensive, they are often supported by drug companies. That is, drug companies provide grants or drugs for the trial or assist with data analysis and/or article preparation.
Why Was This Study Done?
Whenever a medical journal publishes an article, the article's authors have to declare any conflicts of interest such as financial gain from the paper's publication. Conflict of interest statements help readers assess papers—an author who owns the patent for a drug, for example, might put an unduly positive spin on his/her results. The experts who review papers for journals before publication provide similar conflict of interest statements. But what about the journal editors who ultimately decide which papers get published? The International Committee of Medical Journal Editors (ICMJE), which produces medical publishing guidelines, states that: “Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues that they might judge.” However, the publication of industry-supported RCTs might create “indirect” conflicts of interest for journals by boosting the journal's impact factor (a measure of a journal's importance based on how often its articles are cited) and its income through the sale of reprints to drug companies. In this study, the researchers investigate whether the publication of industry-supported RCTs influences the impact factors and finances of six major medical journals.
What Did the Researchers Do and Find?
The researchers determined which RCTs published in the New England Journal of Medicine (NEJM), the British Medical Journal (BMJ), The Lancet, and three other major medical journals in 1996–1997 and 2005–2006 were supported wholly, partly, or not at all by industry. They then used the online academic citation index Web of Science to calculate an approximate impact factor for each journal for 1998 and 2007 and calculated the effect of the published RCTs on the impact factor. The proportion of RCTs with sole industry support varied between journals. Thus, 32% of the RCTs published in the NEJM during both two-year periods had industry support whereas only 7% of the RCTs published in the BMJ in 2005–2006 had industry support. Industry-supported trials were more frequently cited than RCTs with other types of support and omitting industry-supported RCTs from impact factor calculations decreased all the approximate journal impact factors. For example, omitting all RCTs with industry or mixed support decreased the 2007 BMJ and NEJM impact factors by 1% and 15%, respectively. Finally, the researchers asked each journal's editor about their journal's income from industry sources. For the BMJ and The Lancet, the only journals that provided this information, income from reprint sales was 3% and 41%, respectively, of total income in 2005–2006.
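The arithmetic behind this comparison is simple enough to sketch. The Python fragment below uses invented numbers (not figures from the study) to show how removing a heavily cited subset of articles from both the numerator and the denominator lowers an approximate impact factor:

```python
# A minimal sketch with invented numbers (not data from the study) of how an
# approximate journal impact factor changes when a highly cited subset of
# articles, e.g. industry-supported trials, is omitted from the calculation.

def impact_factor(citations: int, articles: int) -> float:
    """Approximate impact factor: citations received / citable articles."""
    return citations / articles

# Hypothetical journal: 200 citable articles drawing 2,000 citations,
# 20 of which are industry-supported trials accounting for 500 citations.
total_if = impact_factor(2000, 200)
without_industry = impact_factor(2000 - 500, 200 - 20)
decrease_pct = 100 * (total_if - without_industry) / total_if

print(f"{total_if:.2f} -> {without_industry:.2f} ({decrease_pct:.0f}% decrease)")
```

Because industry-supported trials attract disproportionately many citations, omitting them cuts the numerator more than the denominator, which is consistent with the finding that NEJM (with more such trials) showed a larger decrease than BMJ.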
What Do These Findings Mean?
These findings show that the publication of industry-supported RCTs was associated with an increase in the approximate impact factors of these six major medical journals. Because these journals publish numerous RCTs, this result may not be generalizable to other journals. These findings also indicate that income from reprint sales can be a substantial proportion of a journal's total income. Importantly, these findings do not imply that the decisions of editors are affected by the possibility that the publication of an industry-supported trial might improve their journal's impact factor or income. Nevertheless, the researchers suggest, journals should live up to the same principles related to conflicts of interest as those that they require from their authors and should routinely disclose information on the source and amount of income that they receive.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000354.
This study is further discussed in a PLoS Medicine Perspective by Harvey Marcovitch
The International Committee of Medical Journal Editors provides information about the publication of medical research, including conflicts of interest
The World Association of Medical Editors also provides information on conflicts of interest in medical journals
Information about impact factors is provided by Thomson Reuters, a provider of intelligent information for businesses and professionals; Thomson Reuters also runs Web of Science
doi:10.1371/journal.pmed.1000354
PMCID: PMC2964336  PMID: 21048986
12.  Promoting synergistic research and education in genomics and bioinformatics 
BMC Genomics  2008;9(Suppl 1):I1.
Bioinformatics and Genomics are closely related disciplines that hold great promise for the advancement of research and development in complex biomedical systems, as well as in public health, drug design, comparative genomics and personalized medicine. Research and development in these two important areas are having a growing impact on science and technology.
High-throughput sequencing and molecular imaging technologies marked the beginning of a new era for modern translational medicine and personalized healthcare. Having the human sequence and personalized digital images in hand has also created tremendous demand for powerful supercomputing, statistical learning and artificial intelligence approaches to handle the massive bioinformatics and personalized healthcare data, which will have a profound effect on how biomedical research is conducted toward the improvement of human health and the prolonging of human life. The International Society of Intelligent Biological Medicine (http://www.isibm.org) and its official journals, the International Journal of Functional Informatics and Personalized Medicine (http://www.inderscience.com/ijfipm) and the International Journal of Computational Biology and Drug Design (http://www.inderscience.com/ijcbdd), in collaboration with the International Conference on Bioinformatics and Computational Biology (Biocomp), reach toward tomorrow's bioinformatics and personalized medicine through today's efforts to promote research, education and awareness of this emerging inter/multidisciplinary field. The 2007 International Conference on Bioinformatics and Computational Biology (BIOCOMP07) was held in Las Vegas, United States of America, on June 25-28, 2007. The conference attracted over 400 papers covering broad research areas in genomics, biomedicine and bioinformatics. Biocomp 2007 provided a common platform for the cross-fertilization of ideas and helped shape knowledge and scientific achievements by bridging these two important disciplines in an interactive and attractive forum. Keeping this objective in mind, Biocomp 2007 aimed to promote interdisciplinary and multidisciplinary education and research.
Twenty-five high-quality peer-reviewed papers were selected from the 400+ submissions for this supplementary issue of BMC Genomics. These papers contribute to a wide range of important research fields, including gene expression data analysis and applications, high-throughput genome mapping, sequence analysis, gene regulation, protein structure prediction, disease prediction by machine learning techniques, systems biology, and database and biological software development. We encourage participants to submit proposals for genomics sessions, special interest research sessions, workshops and tutorials to Professor Hamid R. Arabnia (hra@cs.uga.edu) to ensure that Biocomp continues to play a leadership role in promoting inter/multidisciplinary research and education in these fields. Biocomp received a top conference ranking, with a score of 0.95/1.00. Biocomp is academically co-sponsored by the International Society of Intelligent Biological Medicine and the research laboratories and centers of Harvard University – Massachusetts Institute of Technology, Indiana University – Purdue University, Georgia Tech – Emory University, UIUC, UCLA, Columbia University, the University of Texas at Austin, the University of Iowa and others. Biocomp – Worldcomp brings leading scientists together from across the nation and around the world, and aims to promote synergistic components such as keynote lectures, special interest sessions, workshops and tutorials in response to advances in cutting-edge research.
doi:10.1186/1471-2164-9-S1-I1
PMCID: PMC3226105  PMID: 18366597
13.  United States Private-Sector Physicians and Pharmaceutical Contract Research: A Qualitative Study 
PLoS Medicine  2012;9(7):e1001271.
Jill Fisher and Corey Kalbaugh describe their findings from a qualitative research study evaluating the motivations of private-sector physicians conducting contract research for the pharmaceutical industry.
Background
There have been dramatic increases over the past 20 years in the number of nonacademic, private-sector physicians who serve as principal investigators on US clinical trials sponsored by the pharmaceutical industry. However, there has been little research on the implications of these investigators' role in clinical investigation. Our objective was to study private-sector clinics involved in US pharmaceutical clinical trials to understand the contract research arrangements supporting drug development, and specifically how private-sector physicians engaged in contract research describe their professional identities.
Methods and Findings
We conducted a qualitative study in 2003–2004 combining observation at 25 private-sector research organizations in the southwestern United States and 63 semi-structured interviews with physicians, research staff, and research participants at those clinics. We used grounded theory to analyze and interpret our data. The 11 private-sector physicians who participated in our study reported becoming principal investigators on industry clinical trials primarily because contract research provides an additional revenue stream. The physicians reported that they saw themselves as trial practitioners and as businesspeople rather than as scientists or researchers.
Conclusions
Our findings suggest that in addition to having financial motivation to participate in contract research, these US private-sector physicians have a professional identity aligned with an industry-based approach to research ethics. The generalizability of these findings and whether they have changed in the intervening years should be addressed in future studies.
Please see later in the article for the Editors' Summary.
Editors' Summary
Background
Before a new drug can be used routinely by physicians, it must be investigated in clinical trials—studies that test the drug's safety and effectiveness in people. In the past, clinical trials were usually undertaken in academic medical centers (institutes where physicians provide clinical care, do research, and teach), but increasingly, clinical trials are being conducted in the private sector as part of a growing contract research system. In the US, for example, most clinical trials completed in the 1980s took place in academic medical centers, but nowadays, more than 70% of trials are conducted by nonacademic (community) physicians working under contract to pharmaceutical companies. The number of private-sector nonacademic physicians serving as principal investigators (PIs) for US clinical trials (the PI takes direct responsibility for completion of the trial) increased from 4,000 in 1990 to 20,250 in 2010, and research contracts for clinical trials are now worth more than US$11 billion annually.
Why Was This Study Done?
To date, there has been little research on the implications of this change in the conduct of clinical trials. Academic PIs are often involved in both laboratory and clinical research and are therefore likely to identify closely with the science of trials. By contrast, nonacademic PIs may see clinical trials more as a business opportunity—pharmaceutical contract research is profitable to US physicians because they get paid for every step of the trial process. As a result, pharmaceutical companies may now have more control over clinical trial data and more opportunities to suppress negative data through selective publication of study results than previously. In this qualitative study, the researchers explore the outsourcing of clinical trials to private-sector research clinics through observations of, and in-depth interviews with, physicians and other research staff involved in the US clinical trials industry. A qualitative study collects non-quantitative data such as how physicians feel about doing contract research and about their responsibilities to their patients.
What Did the Researchers Do and Find?
Between October 2003 and September 2004, the researchers observed the interactions between PIs, trial coordinators (individuals who undertake many of the trial activities such as blood collection), and trial participants at 25 US research organizations in the southwestern US and interviewed 63 informants (including 12 PIs) about the trials they were involved in and their reasons for becoming involved. The researchers found that private-sector physicians became PIs on industry-sponsored clinical trials primarily because contract research was financially lucrative. The physicians perceived their roles in terms of business rather than science and claimed that they offered something to the pharmaceutical industry that academics do not—the ability to carry out a diverse range of trials quickly and effectively, regardless of their medical specialty. Finally, the physicians saw their primary ethical responsibility as providing accurate data to the companies that hired them and did not explicitly refer to their ethical responsibility to trial participants. One possible reason for this shift in ethical concerns is the belief among private-sector physicians that pharmaceutical companies must be making scientifically and ethically sound decisions when designing trials because of the amount of money they invest in them.
What Do These Findings Mean?
These findings suggest that private-sector physicians participate as PIs in pharmaceutical clinical trials primarily for financial reasons and see themselves as trial practitioners and businesspeople rather than as scientists. The accuracy of these findings is likely to be limited by the small number of PIs interviewed and by the time that has elapsed since the researchers collected their qualitative data. Moreover, these findings may not be generalizable to other regions of the US or to other countries. Nevertheless, they have potentially troubling implications for drug development. By hiring private-sector physicians who see themselves as involved more with the business than the science of contract research, pharmaceutical companies may be able to exert more control over the conduct of clinical trials and the publication of trial results than previously. Compared to the traditional investigator-initiated system of clinical research, this new system of contract research means that clinical trials now lack the independence that is at the heart of best science practices, a development that casts doubt on the robustness of the knowledge being produced about the safety and effectiveness of new drugs.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001271.
The ClinicalTrials.gov website is a searchable register of federally and privately supported clinical trials in the US; it provides information about all aspects of clinical trials
The US National Institutes of Health provides information about clinical trials, including personal stories about clinical trials from patients and researchers
The UK National Health Service Choices website has information for patients about clinical trials and medical research, including personal stories about participating in clinical trials
The UK Medical Research Council Clinical Trials Unit also provides information for patients about clinical trials and links to information on clinical trials provided by other organizations
MedlinePlus has links to further resources on clinical trials (in English and Spanish)
doi:10.1371/journal.pmed.1001271
PMCID: PMC3404112  PMID: 22911055
14.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 trials with results posted at ClinicalTrials.gov that we randomly sampled, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
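The timing metrics reported above (median and quartile months from trial completion to first public disclosure) can be illustrated with Python's standard statistics module. This is a toy sketch with invented dates, not study data:

```python
from datetime import date
import statistics

# Invented (completion date, first-posting date) pairs for four hypothetical
# trials; the study computed the same metrics from ClinicalTrials.gov records.
trials = [
    (date(2009, 1, 15), date(2010, 8, 1)),
    (date(2009, 6, 1),  date(2010, 9, 15)),
    (date(2010, 2, 1),  date(2011, 4, 1)),
    (date(2008, 11, 1), date(2011, 6, 1)),
]

def months_between(start, end):
    """Whole calendar months elapsed between two dates."""
    return (end.year - start.year) * 12 + (end.month - start.month)

delays = sorted(months_between(c, p) for c, p in trials)
median_delay = statistics.median(delays)
q1, _, q3 = statistics.quantiles(delays, n=4)  # first and third quartiles

print(delays)        # [14, 15, 19, 31]
print(median_delay)  # 17.0
print((q1, q3))      # (14.25, 28.0)
```

The same calculation applied per trial, then summarized, yields the "median 19 mo (Q1 = 14, Q3 = 30)" style of result quoted in the abstract.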
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
15.  Design of a flexible component gathering algorithm for converting cell-based models to graph representations for use in evolutionary search 
BMC Bioinformatics  2014;15:178.
Background
The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate those data into a conceptual framework that can further higher-order understanding. The multidimensional, shape-based observational data of regenerative biology present a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism.
Results
We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph-edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph-edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of the planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB.
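The graph comparison and fitness step described above can be sketched in a few lines. This is an illustrative toy using the networkx library's graph_edit_distance, not the authors' implementation; the region graphs and labels are invented:

```python
import networkx as nx

# Hypothetical region-adjacency graphs: nodes are anatomical regions,
# edges connect adjacent regions (a simplified stand-in for PlanformDB data).
target = nx.Graph()
target.add_edges_from([("head", "trunk"), ("trunk", "tail")])

candidate = nx.Graph()
candidate.add_edges_from([("head", "trunk")])  # model failed to regenerate a tail

# Graph edit distance: minimum number of node/edge insertions, deletions,
# and substitutions needed to turn one graph into the other.
ged = nx.graph_edit_distance(target, candidate)

def fitness(model_graph, target_graph):
    """Invert the distance so a perfect match (distance 0) scores highest."""
    d = nx.graph_edit_distance(model_graph, target_graph)
    return 1.0 / (1.0 + d)

print(ged)                         # 2.0 (delete one node and one edge)
print(fitness(candidate, target))  # ~0.333
```

An evolutionary search would maximize this fitness across a population of candidate models, exactly the role the paper assigns to its graph-edit-distance-based fitness function.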
Conclusion
The work presented here, including our algorithm for converting cell-based models into graphs for comparison with data stored in an external data repository, has made feasible the automated development, training, and validation of computational models using morphology-based data. This work is part of an ongoing project to automate the search process, which will greatly expand our ability to identify, consider, and test biological mechanisms in the field of regenerative biology.
doi:10.1186/1471-2105-15-178
PMCID: PMC4083366  PMID: 24917489
16.  A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging 
EJNMMI Research  2013;3:55.
Background
Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases.
Methods
We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists’ delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework.
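The region-growing segmentation step can be illustrated with a minimal 2D sketch. This is a generic region-growing toy on an invented intensity grid, not the authors' IRG implementation; the seed point and tolerance are arbitrary:

```python
from collections import deque

# Toy region growing: start from a seed voxel and absorb 4-connected
# neighbours whose intensity lies within a tolerance of the seed intensity.
def region_grow(grid, seed, tol):
    rows, cols = len(grid), len(grid[0])
    seed_val = grid[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(grid[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# Invented 3x3 intensity grid: a low-intensity "lung" patch (values ~10)
# next to high-intensity "tissue" (values ~50).
grid = [
    [10, 11, 50],
    [12, 10, 52],
    [51, 53, 55],
]
print(sorted(region_grow(grid, (0, 0), tol=3)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The paper's pipeline additionally applies mathematical morphology operations to the grown region and restricts PET uptake analysis to the resulting lung mask.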
Results
Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to the state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) as an overlap measure and the Hausdorff distance as a shape dissimilarity measure were used. Significant correlations regarding the estimated lesion volumes were obtained both in CT and PET images with respect to the ground truths (R² = 0.8922, p<0.01 and R² = 0.8664, p<0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreements (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes were changing over time by examining serial PET/CT scans. Evaluation of the registration processes was based on precisely defined anatomical landmark points by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm (all within the resolution limits) were found in the rabbit, ferret, and mouse data, respectively. Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentage follow the same trend as the SUV-based evaluation in the longitudinal analysis.
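The two evaluation measures named above have compact definitions: the Dice similarity coefficient is 2|A∩B|/(|A|+|B|), and the Hausdorff distance is the largest distance from any point of one set to its nearest point in the other. A toy sketch on invented voxel sets, not study data:

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two sets of voxel coordinates."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite point sets."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

# Invented 2D masks standing in for expert and automated segmentations.
ground_truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
prediction   = {(0, 1), (1, 0), (1, 1), (2, 1)}

print(dice(ground_truth, prediction))       # 2*3/(4+4) = 0.75
print(hausdorff(ground_truth, prediction))  # 1.0
```

A DSC of 1.0 means perfect overlap, so values such as the 93.4% reported for normal lung CT indicate near-complete agreement with the expert delineations.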
Conclusions
We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate as compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
doi:10.1186/2191-219X-3-55
PMCID: PMC3734217  PMID: 23879987
Quantitative analysis; Pulmonary infections; Small animal models; PET-CT; Image segmentation; H1N1; Tuberculosis
17.  Minimally invasive surgical procedures for the treatment of lumbar disc herniation 
Introduction
In up to 30% of patients undergoing lumbar disc surgery for herniated or protruded discs, outcomes are judged unfavourable. Over the last decades, this problem has stimulated the development of a number of minimally-invasive operative procedures. The aim is to relieve pressure from compromised nerve roots by mechanically removing, dissolving or evaporating disc material while leaving bony structures and surrounding tissues as intact as possible. In Germany, there is hardly any utilisation data for these new procedures – data files from the statutory health insurances demonstrate that about 5% of all lumbar disc surgeries are performed using minimally-invasive techniques. Their real proportion is thought to be much higher because many procedures are offered by private hospitals and surgeries and are paid by private health insurers or patients themselves. So far no comprehensive assessment comparing the efficacy, safety, effectiveness and cost-effectiveness of minimally-invasive lumbar disc surgery to standard procedures (microdiscectomy, open discectomy), which could serve as a basis for coverage decisions, has been published in Germany.
Objective
Against this background the aims of the following assessment are:
To assess, on the basis of published scientific literature, the safety, efficacy and effectiveness of minimally-invasive lumbar disc surgery compared to standard procedures; to identify and critically appraise studies comparing the costs and cost-effectiveness of minimally-invasive procedures to those of standard procedures; and, if necessary, to identify research and evaluation needs and point out regulatory needs within the German health care system. The assessment focusses on procedures that are used in elective lumbar disc surgery as alternative treatment options to microdiscectomy or open discectomy. Chemonucleolysis, percutaneous manual discectomy, automated percutaneous lumbar discectomy, laserdiscectomy and endoscopic procedures accessing the disc by a posterolateral or posterior approach are included.
Methods
In order to assess the safety, efficacy and effectiveness of minimally-invasive procedures, as well as their economic implications, systematic reviews of the literature are performed. A comprehensive search strategy is composed to search 23 electronic databases, among them MEDLINE, EMBASE and the Cochrane Library. The methodological quality of systematic reviews, HTA reports and primary research is assessed using checklists of the German Scientific Working Group for Health Technology Assessment. The quality and transparency of cost analyses are documented using the quality and transparency catalogues of the working group. Study results are summarised in a qualitative manner. Due to the limited number and the low methodological quality of the studies, it is not possible to conduct meta-analyses. In addition to the results of controlled trials, results of recent case series are introduced and discussed.
Results
The evidence-base to assess safety, efficacy and effectiveness of minimally-invasive lumbar disc surgery procedures is rather limited:
Percutaneous manual discectomy: six case series (four after 1998)
Automated percutaneous lumbar discectomy: two RCT (one discontinued), twelve case series (one after 1998)
Chemonucleolysis: five RCT, five non-randomised controlled trials, eleven case series
Percutaneous laserdiscectomy: one non-randomised controlled trial, 13 case series (eight after 1998)
Endoscopic procedures: three RCT, 21 case series (17 after 1998)
Two economic analyses each were retrieved for chemonucleolysis and automated percutaneous discectomy, as well as one cost-minimisation analysis comparing the costs of an endoscopic procedure to those of open discectomy.
Among all minimally-invasive procedures, chemonucleolysis is the only one whose efficacy may be judged on the basis of results from high-quality randomised controlled trials (RCT). Study results suggest that the procedure may be (cost-)effectively used as an intermediate therapeutic option between conservative and operative management of small lumbar disc herniations or protrusions causing sciatica. Two RCT comparing transforaminal endoscopic procedures with microdiscectomy in patients with sciatica and small non-sequestered disc herniations show comparable short- and medium-term overall success rates. Concerning speed of recovery and return to work, a trend towards more favourable results for the endoscopic procedures is noted. It is doubtful, though, whether these results from studies that are now eleven and five years old are still valid for the more advanced procedures used today. The only RCT comparing the results of automated percutaneous lumbar discectomy to those of microdiscectomy showed clearly superior results for microdiscectomy. Furthermore, the success rate of automated percutaneous lumbar discectomy reported in the RCT (29%) differs markedly from the success rates reported in case series (between 56% and 92%).
The literature search retrieves no controlled trials to assess the efficacy and/or effectiveness of laserdiscectomy, percutaneous manual discectomy or endoscopic procedures using a posterior approach in comparison to the standard procedures. Results from recent case series permit no assessment of efficacy, especially not in comparison to standard procedures. Due to highly selected patients, modifications of operative procedures, highly specialised surgical units and poorly standardised outcome assessment, the results of case series are highly variable and their generalisability is low.
The results of the five economic analyses are, due to conceptual and methodological problems, of no value for decision-making in the context of the German health care system.
Discussion
Aside from low methodological study quality, three conceptual problems complicate the interpretation of results.
First, continuous further development of technologies leads to a diversity of procedures in use, which prohibits generalisation of study results. Diversity is noted not only for minimally-invasive procedures but also for the standard techniques against which the new developments are to be compared. The second problem is the heterogeneity of study populations. For most studies, one common inclusion criterion was "persisting sciatica after a course of conservative treatment of variable duration". Differences among study populations are noted concerning the results of imaging studies. Even within every group of minimally-invasive procedures, studies define their own inclusion and exclusion criteria, which differ concerning the degree of dislocation and sequestration of disc material. The third problem is the non-standardised assessment of outcomes, which is performed postoperatively after variable periods of time. Most studies report results in a dichotomous way, as success or failure, while the classification of a result is performed using a variety of different assessment instruments or procedures. Very often the global subjective judgement of results by patients or surgeons is reported. There is no scientific discussion of whether these judgements are generalisable or comparable, especially among studies conducted under differing socio-cultural conditions. Taking into account the weak evidence-base for the efficacy and effectiveness of minimally-invasive procedures, it is not surprising that so far there are no dependable economic analyses.
Conclusions
Conclusions that can be drawn from the results of the present assessment refer in detail to the specified minimally-invasive procedures of lumbar disc surgery but they may also be considered exemplary for other fields where optimisation of results is attempted by technological development and widening of indications (e.g. total hip replacement).
Compared to standard technologies (open discectomy, microdiscectomy) and with the exception of chemonucleolysis, the developmental status of all other minimally-invasive procedures assessed must be termed experimental. To date there is no dependable evidence-base to recommend their use in routine clinical practice. To create such a dependable evidence-base, further research in two directions is needed: a) studies that include adequate patient populations, use realistic controls (e.g. standard operative procedures or continued conservative care) and use standardised measurements of meaningful outcomes after adequate periods of time; and b) studies that are able to report the effectiveness of the procedures under everyday practice conditions and furthermore have the potential to detect rare adverse effects. In Sweden this type of data is yielded by national quality registries, whose data are used on the one hand for quality improvement measures and on the other hand allow comprehensive scientific evaluations. Since 2000, a continuous rise in the utilisation of minimally-invasive lumbar disc surgery has been observed among statutory health insurers. Examples from other areas of innovative surgical technologies (e.g. robot-assisted total hip replacement) indicate that the rise will probably continue, especially because there are no legal barriers to hinder the introduction of innovative treatments into routine hospital care. Upon request by payers or providers, the "Gemeinsamer Bundesausschuss" may assess a treatment's benefit, necessity and cost-effectiveness as a prerequisite for coverage by the statutory health insurance. In the case of minimally-invasive disc surgery, it would be advisable to examine the legal framework for covering procedures only if they are provided under evaluation conditions.
While in Germany coverage under evaluation conditions is established practice in ambulatory health care only ("Modellvorhaben"), examples from other European countries (Great Britain, Switzerland) demonstrate that it is also feasible for hospital-based interventions. In order to assure patient protection and at the same time not hinder the further development of new and promising technologies, provision under evaluation conditions could also be realised in the private health care market, although in this sector coverage is not by law linked to the benefit, necessity and cost-effectiveness of an intervention.
PMCID: PMC3011322  PMID: 21289928
18.  Immune Protection of Nonhuman Primates against Ebola Virus with Single Low-Dose Adenovirus Vectors Encoding Modified GPs 
PLoS Medicine  2006;3(6):e177.
Background
Ebola virus causes a hemorrhagic fever syndrome that is associated with high mortality in humans. In the absence of effective therapies for Ebola virus infection, the development of a vaccine becomes an important strategy to contain outbreaks. Immunization with DNA and/or replication-defective adenoviral vectors (rAd) encoding the Ebola glycoprotein (GP) and nucleoprotein (NP) has been previously shown to confer specific protective immunity in nonhuman primates. GP can exert cytopathic effects on transfected cells in vitro, and multiple GP forms have been identified in nature, raising the question of which would be optimal for a human vaccine.
Methods and Findings
To address this question, we have explored the efficacy of mutant GPs from multiple Ebola virus strains with reduced in vitro cytopathicity and analyzed their protective effects in the primate challenge model, with or without NP. Deletion of the GP transmembrane domain eliminated in vitro cytopathicity but reduced its protective efficacy by at least one order of magnitude. In contrast, a point mutation was identified that abolished this cytopathicity but retained immunogenicity and conferred immune protection in the absence of NP. The minimal effective rAd dose was established at 10^10 particles, two logs lower than that used previously.
Conclusions
Expression of specific GPs alone, vectored by rAd, is sufficient to confer protection against lethal challenge in a relevant nonhuman primate model. Elimination of NP from the vaccine and dose reduction to 10^10 rAd particles do not diminish protection and simplify the vaccine, providing the basis for selection of a human vaccine candidate.
A simplified Ebola vaccine that consists of a modified GP protein (which is well-tolerated by human cells even at high concentrations) in a replication-defective adenoviral vector protects macaques.
Editors' Summary
Background.
Humans who get infected with Ebola virus develop an illness called Ebola hemorrhagic fever (EHF), which is one of the most deadly viral diseases known; 50%–90% of all ill patients die, and there is no available treatment for EHF. Scientists think that the occasional outbreaks of the disease occur because the virus “jumps” from an infected animal to a person (a rare event) and then is transmitted between people by direct contact with infected blood or other body fluids or parts. Several strains or variants of the Ebola virus exist. Most outbreaks have been caused either by the Zaire strain or by the Sudan/Gulu strain (so-called because that is where the particular virus was first isolated). Scientists are working on a vaccine against Ebola that could be given to people before they get infected and then protect them when they come in contact with the virus. A number of candidate vaccines have been developed and tested in animals.
Why Was This Study Done?
The researchers who did this study are working on a vaccine that consists of two particular parts of the virus. One part is called GP (which stands for glycoprotein) and is from the outer coat of the virus; the other, NP (nucleoprotein), is from its inside. Without the rest of the virus, GP and NP cannot cause EHF. However, the hope is that giving these parts of the virus to an individual can educate their immune system to build a response against GP and NP, which would then recognize the virus should the vaccinated person become infected with the whole virus, and destroy it before it can cause disease. To get the GP and NP parts into the body so that they can cause a strong immune response (which is what effective vaccines do), the researchers used a manmade version of another, harmless virus called recombinant adenovirus 5 (or rAd5) to carry the NP and GP. The researchers have shown previously that this strategy for introducing a vaccine works in animals. The vaccine—i.e., the combination of the rAd5 virus and the two Ebola virus parts—can protect animals against subsequent infection with real Ebola virus that would otherwise kill them. However, during these earlier studies, the researchers had noticed that the GP part, when present at high levels, seemed to make human cells sick. They had not seen any similar problems in the experimental animals, but to be on the safe side they decided to see whether they could change the GP part so that it would still be effective as a vaccine but no longer make human cells sick.
What Did the Researchers Do and Find?
They changed the GP part of the vaccine in different ways so that it would no longer make human cells sick and then tested whether the resulting vaccines (combined with the original NP part and the Ad5 virus) could still protect monkeys from EHF after they were infected with Ebola virus. They found that some of the new GP versions made the vaccine less effective, but others did what they had hoped for; namely, they gave the same level of protection as when the original GP part was present. While doing these experiments, the researchers also found that the NP component seemed unnecessary and in some cases even weakened the vaccine's effect.
What Do These Findings Mean?
The researchers have now developed a simplified vaccine against Ebola virus that is effective in monkeys. This vaccine consists of only a modified GP component (which is well tolerated by human cells even at high concentrations) and the rAd5 component. This vaccine is not the only candidate currently being developed against Ebola, but it seems likely that it is one of a few that will be tested in human volunteers in the near future. The initial clinical trials will test whether the vaccine is safe in humans, and whether it can cause the immune system to produce an immune response that is specific for the Ebola virus. Assuming that the outcomes of these trials are positive, the next question is whether the vaccine can protect humans against Ebola disease. Because Ebola is so dangerous and outbreaks are relatively rare, the vaccine will likely be tested only during an actual outbreak. At that time, an experimental vaccine might be given to people at immediate risk of becoming infected, especially health-care workers who, because they take care of infected patients, are themselves at very high risk of becoming infected. In addition to trials in humans, the scientists will also explore whether this vaccine, which was developed based on the GP component of the Zaire strain, can protect monkeys against infections with other strains of the Ebola virus.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030177.
• World Health Organization
• MedlinePlus Medical Encyclopedia
• US Centers for Disease Control and Prevention
• Wikipedia (note: Wikipedia is a free Internet encyclopedia that anyone can edit)
doi:10.1371/journal.pmed.0030177
PMCID: PMC1459482  PMID: 16683867
19.  The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003 
Nucleic Acids Research  2003;31(1):365-370.
The SWISS-PROT protein knowledgebase (http://www.expasy.org/sprot/ and http://www.ebi.ac.uk/swissprot/) connects amino acid sequences with the current knowledge in the Life Sciences. Each protein entry provides an interdisciplinary overview of relevant information by bringing together experimental results, computed features and sometimes even contradictory conclusions. Detailed expertise that goes beyond the scope of SWISS-PROT is made available via direct links to specialised databases. SWISS-PROT provides annotated entries for all species, but concentrates on the annotation of entries from human (the HPI project) and other model organisms to ensure the presence of high quality annotation for representative members of all protein families. Part of the annotation can be transferred to other family members, as is already done for microbes by the High-quality Automated and Manual Annotation of microbial Proteomes (HAMAP) project. Protein families and groups of proteins are regularly reviewed to keep up with current scientific findings. Complementarily, TrEMBL strives to comprise all protein sequences that are not yet represented in SWISS-PROT, by incorporating a perpetually increasing level of mostly automated annotation. Researchers are welcome to contribute their knowledge to the scientific community by submitting relevant findings to SWISS-PROT at swiss-prot@expasy.org.
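The SWISS-PROT flat-file format keys each line of an entry with a two-letter code (ID, AC, DE, SQ, with "//" terminating the entry). As a rough illustration of how such a record can be read, here is a minimal Python sketch; the sample entry, its name and its accession numbers are invented for illustration, not a real database record:

```python
# Minimal parser for the two-letter line codes used by the SWISS-PROT
# flat-file format (ID, AC, DE, ...). The sample record below is
# illustrative, not a real database entry.
SAMPLE_ENTRY = """\
ID   EXMP_HUMAN              Reviewed;         123 AA.
AC   P00001; Q99999;
DE   RecName: Full=Example protein;
SQ   SEQUENCE   123 AA;  13680 MW;  ABCDEF0123456789 CRC64;
//
"""

def parse_entry(text):
    """Collect the payload of each two-letter line code into a dict of lists."""
    fields = {}
    for line in text.splitlines():
        if line.startswith("//"):      # '//' terminates an entry
            break
        code, _, payload = line.partition("   ")
        fields.setdefault(code, []).append(payload.strip())
    return fields

fields = parse_entry(SAMPLE_ENTRY)
accessions = [ac.strip() for ac in " ".join(fields["AC"]).split(";") if ac.strip()]
print(fields["ID"][0].split()[0])   # → EXMP_HUMAN (entry name)
print(accessions)                   # → ['P00001', 'Q99999']
```

In practice one would use an established library rather than a hand-rolled parser, but the sketch shows why the line-code layout makes the format easy to process programmatically.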
PMCID: PMC165542  PMID: 12520024
20.  Biocoder: A programming language for standardizing and automating biology protocols 
Background
Published descriptions of biology protocols are often ambiguous and incomplete, making them difficult to replicate in other laboratories. However, there is increasing benefit to formalizing the descriptions of protocols, as laboratory automation systems (such as microfluidic chips) are becoming increasingly capable of executing them. Our goal in this paper is to improve both the reproducibility and automation of biology experiments by using a programming language to express the precise series of steps taken.
Results
We have developed BioCoder, a C++ library that enables biologists to express the exact steps needed to execute a protocol. In addition to being suitable for automation, BioCoder converts the code into a readable, English-language description for use by biologists. We have implemented over 65 protocols in BioCoder; the most complex of these was successfully executed by a biologist in the laboratory using BioCoder as the only reference. We argue that BioCoder exposes and resolves ambiguities in existing protocols, and could provide the software foundations for future automation platforms. BioCoder is freely available for download at http://research.microsoft.com/en-us/um/india/projects/biocoder/.
Conclusions
BioCoder represents the first practical programming system for standardizing and automating biology protocols. Our vision is to change the way that experimental methods are communicated: rather than publishing a written account of the protocols used, researchers will simply publish the code. Our experience suggests that this practice is tractable and offers many benefits. We invite other researchers to leverage BioCoder to improve the precision and completeness of their protocols, and also to adapt and extend BioCoder to new domains.
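The central idea above - expressing each protocol step as code that is both machine-readable and renderable as an English-language instruction - can be sketched roughly as follows. This toy Python class only mimics the concept; the class and step names are invented for illustration and are not the actual BioCoder C++ API:

```python
# A toy illustration of the protocol-as-code idea: each step is recorded
# both as a machine-readable tuple and as an English instruction.
# The names here are illustrative, not the actual BioCoder C++ API.
class Protocol:
    def __init__(self, name):
        self.name = name
        self.steps = []          # machine-readable record of operations

    def measure(self, volume_ul, reagent, container):
        self.steps.append(("measure", volume_ul, reagent, container))

    def incubate(self, container, temp_c, minutes):
        self.steps.append(("incubate", container, temp_c, minutes))

    def to_english(self):
        """Render the recorded steps as a numbered, human-readable protocol."""
        lines = [f"Protocol: {self.name}"]
        for i, step in enumerate(self.steps, 1):
            if step[0] == "measure":
                _, vol, reagent, container = step
                lines.append(f"{i}. Measure {vol} uL of {reagent} into {container}.")
            else:
                _, container, temp, minutes = step
                lines.append(f"{i}. Incubate {container} at {temp} C for {minutes} min.")
        return "\n".join(lines)

p = Protocol("Cell lysis (toy)")
p.measure(200, "lysis buffer", "tube A")
p.incubate("tube A", 37, 30)
print(p.to_english())
```

Because every step exists as structured data, the same record could in principle drive an automation platform, while the rendered text serves the biologist at the bench - the dual use the abstract describes.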
doi:10.1186/1754-1611-4-13
PMCID: PMC2989930  PMID: 21059251
21.  Publication of Clinical Trials Supporting Successful New Drug Applications: A Literature Analysis 
PLoS Medicine  2008;5(9):e191.
Background
The United States (US) Food and Drug Administration (FDA) approves new drugs based on sponsor-submitted clinical trials. The publication status of these trials in the medical literature and factors associated with publication have not been evaluated. We sought to determine the proportion of trials submitted to the FDA in support of newly approved drugs that are published in biomedical journals that a typical clinician, consumer, or policy maker living in the US would reasonably search.
Methods and Findings
We conducted a cohort study of trials supporting new drugs approved between 1998 and 2000, as described in FDA medical and statistical review documents and the FDA approved drug label. We determined publication status and time from approval to full publication in the medical literature at 2 and 5 y by searching PubMed and other databases through 01 August 2006. We then evaluated trial characteristics associated with publication. We identified 909 trials supporting 90 approved drugs in the FDA reviews, of which 43% (394/909) were published. Among the subset of trials described in the FDA-approved drug label and classified as “pivotal trials” for our analysis, 76% (257/340) were published. In multivariable logistic regression for all trials 5 y postapproval, likelihood of publication correlated with statistically significant results (odds ratio [OR] 3.03, 95% confidence interval [CI] 1.78–5.17); larger sample sizes (OR 1.33 per 2-fold increase in sample size, 95% CI 1.17–1.52); and pivotal status (OR 5.31, 95% CI 3.30–8.55). In multivariable logistic regression for only the pivotal trials 5 y postapproval, likelihood of publication correlated with statistically significant results (OR 2.96, 95% CI 1.24–7.06) and larger sample sizes (OR 1.47 per 2-fold increase in sample size, 95% CI 1.15–1.88). Statistically significant results and larger sample sizes were also predictive of publication at 2 y postapproval and in multivariable Cox proportional models for all trials and the subset of pivotal trials.
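An odds ratio "per 2-fold increase in sample size" is typically obtained by entering log2(sample size) as a covariate in the logistic model, so that exponentiating its coefficient gives the OR per doubling, and exponentiating the coefficient plus or minus 1.96 standard errors gives the 95% CI. A minimal sketch of that back-transformation, using an illustrative coefficient and standard error (not values taken from the study, though chosen to land near the reported OR of 1.33):

```python
import math

# When log2(sample size) enters a logistic model as a covariate, exp(beta)
# is the odds ratio per doubling of sample size. The coefficient and
# standard error below are hypothetical illustration values.
beta, se = 0.285, 0.067   # log-odds per doubling of sample size, and its SE
z = 1.96                  # ~95% normal quantile

or_per_doubling = math.exp(beta)
ci = (math.exp(beta - z * se), math.exp(beta + z * se))
print(round(or_per_doubling, 2))        # → 1.33  (OR per 2-fold increase)
print(tuple(round(x, 2) for x in ci))   # → (1.17, 1.52)  (95% CI)
```

The same exponentiation applies to any coefficient in the model, which is why the binary predictors (significant result, pivotal status) are likewise reported as odds ratios with CIs.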
Conclusions
Over half of all supporting trials for FDA-approved drugs remained unpublished ≥ 5 y after approval. Pivotal trials and trials with statistically significant results and larger sample sizes are more likely to be published. Selective reporting of trial results exists for commonly marketed drugs. Our data provide a baseline for evaluating publication bias as the new FDA Amendments Act comes into force mandating basic results reporting of clinical trials.
Ida Sim and colleagues investigate the publication status and publication bias of trials submitted to the US Food and Drug Administration (FDA) for a wide variety of approved drugs.
Editors' Summary
Background.
Before a new drug becomes available for the treatment of a specific human disease, its benefits and harms are carefully studied, first in the laboratory and in animals, and then in several types of clinical trials. In the most important of these trials—so-called “pivotal” clinical trials—the efficacy and safety of the new drug and of a standard treatment are compared by giving groups of patients the different treatments and measuring several predefined “outcomes.” These outcomes indicate whether the new drug is more effective than the standard treatment and whether it has any other effects on the patients' health and daily life. All this information is then submitted by the sponsor of the new drug (usually a pharmaceutical company) to the government body responsible for drug approval—in the US, this is the Food and Drug Administration (FDA).
Why Was This Study Done?
After a drug receives FDA approval, information about the clinical trials supporting the FDA's decision are included in the FDA “Summary Basis of Approval” and/or on the drug label. In addition, some clinical trials are described in medical journals. Ideally, all the clinical information that leads to a drug's approval should be publicly available to help clinicians make informed decisions about how to treat their patients. A full-length publication in a medical journal is the primary way that clinical trial results are communicated to the scientific community and the public. Unfortunately, drug sponsors sometimes publish the results only of trials where their drug performed well; as a consequence, trials where the drug did no better than the standard treatment or where it had unwanted side effects remain unpublished. Publication bias like this provides an inaccurate picture of a drug's efficacy and safety relative to other therapies and may lead to excessive prescribing of newer, more expensive (but not necessarily more effective) treatments. In this study, the researchers investigate whether selective trial reporting is common by evaluating the publication status of trials submitted to the FDA for a wide variety of approved drugs. They also ask which factors affect a trial's chances of publication.
What Did the Researchers Do and Find?
The researchers identified 90 drugs approved by the FDA between 1998 and 2000 by searching the FDA's Center for Drug Evaluation and Research Web site. From the Summary Basis of Approval for each drug, they identified 909 clinical trials undertaken to support these approvals. They then searched the published medical literature up to mid-2006 to determine if and when the results of each trial were published. Although 76% of the pivotal trials had appeared in medical journals, usually within 3 years of FDA approval, only 43% of all of the submitted trials had been published. Among all the trials, those with statistically significant results were nearly twice as likely to have been published as those without statistically significant results, and pivotal trials were three times more likely to have been published than nonpivotal trials, 5 years postapproval. In addition, a larger sample size increased the likelihood of publication. Having statistically significant results and larger sample sizes also increased the likelihood of publication of the pivotal trials.
What Do These Findings Mean?
Although the search methods used in this study may have missed some publications, these findings suggest that more than half the clinical trials undertaken to support drug approval remain unpublished 5 years or more after FDA approval. They also reveal selective reporting of results. For example, they show that a pivotal trial in which the new drug does no better than an old drug is less likely to be published than one where the new drug is more effective, a publication bias that could establish an inappropriately favorable record for the new drug in the medical literature. Importantly, these findings provide a baseline for monitoring the effects of the FDA Amendments Act 2007, which was introduced to improve the accuracy and completeness of drug trial reporting. Under this Act, all trials supporting FDA-approved drugs must be registered when they start, and the summary results of all the outcomes declared at trial registration as well as specific details about the trial protocol must be publicly posted within a year of drug approval on the US National Institutes of Health clinical trials site.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050191.
PLoS Medicine recently published an editorial discussing the FDA Amendment Act and what it means for medical journals: The PLoS Medicine Editors (2008) Next Stop, Don't Block the Doors: Opening Up Access to Clinical Trials Results. PLoS Med 5(7): e160
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health care professionals; detailed information about the process by which drugs are approved is on the Web site of the FDA Center for Drug Evaluation and Research (in English and Spanish)
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, background information about clinical trials, and a fact sheet detailing the requirements of the FDA Amendments Act 2007 for trial registration
The World Health Organization's International Clinical Trials Registry Platform is working toward international norms and standards for reporting the findings of clinical trials
doi:10.1371/journal.pmed.0050191
PMCID: PMC2553819  PMID: 18816163
22.  The Role of the Toxicologic Pathologist in the Post-Genomic Era 
Journal of Toxicologic Pathology  2013;26(2):105-110.
An era can be defined as a period in time identified by distinctive character, events, or practices. We are now in the genomic era. The pre-genomic era: The pre-genomic era started many years ago with novel and seminal animal experiments, primarily directed at studying cancer. It is marked by the development of the two-year rodent cancer bioassay and the ultimate realization that alternative approaches and short-term animal models were needed to replace this resource-intensive and time-consuming method for predicting human health risk. Many alternative approaches and short-term animal models were proposed and tried but, to date, none have completely replaced our dependence upon the two-year rodent bioassay. However, the alternative approaches and models themselves have made tangible contributions to basic research, clinical medicine and our understanding of cancer, and they remain useful tools to address hypothesis-driven research questions. The pre-genomic era was a time when toxicologic pathologists played a major role in drug development, evaluating the cancer bioassay and the associated dose-setting toxicity studies, and exploring the utility of proposed alternative animal models. It was a time when there was a shortage of qualified toxicologic pathologists. The genomic era: We are in the genomic era. It is a time when the genetic underpinnings of normal biological and pathologic processes are being discovered and documented. It is a time for sequencing entire genomes and deliberately silencing relevant segments of the mouse genome to see what each segment controls and whether that silencing leads to increased susceptibility to disease. What remains to be charted in this genomic era is the complex interaction of genes, gene segments, post-translational modifications of encoded proteins, and environmental factors that affect genomic expression. 
In this current genomic era, the toxicologic pathologist has had to make room for a growing population of molecular biologists. In the present era, newly emerging DVM and MD scientists enter the work arena with a PhD in pathology, often based on some aspect of molecular biology or molecular pathology research. In molecular biology, the almost daily technological advances require one's complete dedication to remain at the cutting edge of the science. Similarly, the practice of toxicologic pathology, like other morphological disciplines, is based largely on experience and requires dedicated daily examination of pathology material to maintain a well-trained eye capable of distilling specific information from stained tissue slides - a dedicated effort that cannot be done well as an intermezzo between other tasks. It is a rare individual who has true expertise in both molecular biology and pathology. In this genomic era, the newly emerging DVM-PhD or MD-PhD pathologist enters a marketplace without many job opportunities, in contrast to the pre-genomic era. Many face an identity crisis, needing to decide whether to become a competent pathologist or, alternatively, a competent molecular biologist. At the same time, more PhD molecular biologists without training in pathology are members of the research teams working in drug development and toxicology. How best can the toxicologic pathologist interact in the contemporary team approach to drug development, toxicology research and safety testing? Based on their biomedical training, toxicologic pathologists are in an ideal position to link data from the emerging technologies with their knowledge of pathobiology and toxicology. To enable this linkage and obtain the synergy it provides, the bench-level, slide-reading expert pathologist will need to have some basic understanding and appreciation of molecular biology methods and tools. 
On the other hand, it is not likely that the typical molecular biologist could competently evaluate and diagnose stained tissue slides from a toxicology study or a cancer bioassay. The post-genomic era: The post-genomic era will likely arrive around 2050, at which time entire genomes from multiple species will exist in massive databases, data from thousands of robotic high-throughput chemical screenings will exist in other databases, and genetic toxicity and chemical structure-activity relationships will reside in yet other databases. All databases will be linked, and relevant information will be extracted and analyzed by appropriate algorithms following input of the latest molecular, submolecular, genetic, experimental, pathology and clinical data. Knowledge gained will permit the genetic components of many diseases to become amenable to therapeutic prevention and/or intervention. Much as computerized algorithms are currently used to forecast weather or to predict political elections, sophisticated computerized algorithms based largely on scientific data mining will categorize new drugs and chemicals relative to their health benefits versus their health risks for defined human populations and subpopulations. However, this form of virtual toxicity study or cancer bioassay will only identify probabilities of adverse consequences from the interaction of particular environmental and/or chemical/drug exposure(s) with specific genomic variables. Proof in many situations will require confirmation in intact in vivo mammalian animal models. The toxicologic pathologist in the post-genomic era will be the scientist best suited to confirm the data mining and its probability predictions for safety or adverse consequences against the actual tissue morphological features in test species that define specific test-agent pathobiology and human health risk.
doi:10.1293/tox.26.105
PMCID: PMC3695332  PMID: 23914052
genomic era; history of toxicologic pathology; molecular biology
23.  Guidelines, Editors, Pharma And The Biological Paradigm Shift 
Mens Sana Monographs  2007;5(1):27-30.
Private investment in biomedical research has increased over the last few decades. At most places it has been welcomed as the next best thing to technology itself. Much of the intellectual talent from academic institutions is getting absorbed in lucrative positions in industry. Applied research finds willing collaborators in venture capital funded industry, so a symbiotic growth is ensured for both.
There are significant costs involved too. As academia interacts with industry, major areas of conflict of interest especially applicable to biomedical research have arisen. They are related to disputes over patents and royalty, hostile encounters between academia and industry, as also between public and private enterprise, legal tangles, research misconduct of various types, antagonistic press and patient-advocate lobbies and a general atmosphere in which commercial interest get precedence over patient welfare.
Pharma's image stinks because of a number of errors of omission and commission. A recent example is the suppression of negative findings about Bayer's Trasylol (Aprotinin) and the marketing maneuvers of Eli Lilly's Xigris (rhAPC). Whenever there is a conflict between patient vulnerability and profit motives, pharma often tends to tilt towards the latter. Moreover, there are documents that bring to light how companies frequently cross the line between patient welfare and profit-seeking behaviour.
A voluntary moratorium over pharma spending to pamper drug prescribers is necessary. A code of conduct adopted recently by OPPI in India to limit pharma company expenses over junkets and trinkets is a welcome step.
Clinical practice guidelines (CPGs) are considered important because they guide the diagnostic/therapeutic regimen of a large number of medical professionals and hospitals and provide recommendations on drugs, their dosages and criteria for selection. Along with clinical trials, they are another area of growing influence by the pharmaceutical industry. For example, a 2002 survey found that about 60% of 192 authors of clinical practice guidelines reported financial connections with the companies whose drugs were under consideration. There is a strong case for basing CPGs not just on effectiveness but on cost-effectiveness. The various ramifications of this need to be spelt out. The work of bodies like the Appraisal of Guidelines Research and Evaluation (AGREE) Collaboration and the Guidelines Advisory Committee (GAC) is also worth a close look.
Even the actions of foundations that work for disease amelioration have come under scrutiny. The process of setting up ‘Best Practices’ guidelines for interactions between the pharmaceutical industry and clinicians has already begun and can have important consequences for patient care. Similarly, Good Publication Practice (GPP) guidelines for pharmaceutical companies have been set up, aimed at improving the behaviour of drug companies when reporting drug trials.
The rapidly increasing trend toward influence and control by industry has become a concern for many. It is of such importance that the Association of American Medical Colleges has issued two relatively new documents - one, in 2001, on how to deal with individual conflicts of interest; and the other, in 2002, on how to deal with institutional conflicts of interest in the conduct of clinical research. Academic Medical Centers (AMCs), as also medical education and research institutions at other places, have to adopt means that minimize their conflicts of interest.
Both medical associations and research journal editors are getting concerned with individual and institutional conflicts of interest in the conduct of clinical research and documents are now available which address these issues. The 2001 ICMJE revision calls for full disclosure of the sponsor's role in research, as well as assurance that the investigators are independent of the sponsor, are fully accountable for the design and conduct of the trial, have independent access to all trial data and control all editorial and publication decisions. However the findings of a 2002 study suggest that academic institutions routinely participate in clinical research that does not adhere to ICMJE standards of accountability, access to data and control of publication.
There is an inevitable slant to produce not necessarily useful but marketable products which ensure the profitability of industry and research grants outflow to academia. Industry supports new, not traditional, therapies, irrespective of what is effective. Whatever traditional therapy is supported is most probably because the company concerned has a product with a big stake there, which has remained a ‘gold standard’ or which that player thinks has still some ‘juice’ left.
Industry sponsorship is mainly for potential medications, not for trying to determine whether there may be non-pharmacological interventions that may be equally good, if not better. In the paradigm shift towards biological psychiatry, the role of industry sponsorship is not overt but probably more pervasive than many have realised, or the right thinking may consider good, for the health of the branch in the long run.
An issue of major concern is protection of the interests of research subjects. Patients agree to become research subjects not only for personal medical benefit but, as an extension, to benefit the rest of the patient population and also advance medical research.
We all accept that industry profits have to be made, and investment in research and development by the pharma industry is massive. However, we must also accept there is a fundamental difference between marketing strategies for other entities and those for drugs.
The ultimate barometer is patient welfare and no drug that compromises it can stand the test of time. So, how does it make even commercial sense in the long term to market substandard products? The greatest mistake long-term players in industry may make is try to adopt the shady techniques of the upstart new entrant. Secrecy of marketing/sales tactics, of the process of manufacture, of other strategies and plans of business expansion, of strategies to tackle competition are fine business tactics. But it is critical that secrecy as a tactic not extend to reporting of research findings, especially those contrary to one's product.
Pharma has no option but to make a quality product, do comprehensive adverse reaction profiles, and market it only if it passes both tests.
Why does pharma adopt questionable tactics? The reasons are essentially two:
1. What with all the constraints, a drug comes to the pharmacy after huge investments. There are crippling overheads and infrastructure costs to be recovered. And there are massive profit margins to be maintained. If these were to be dependent only on genuine drug discoveries, that would be taking too great a risk.
2. Industry players have to strike the right balance between profit making and credibility. In profit making, the marketing champions play their role. In credibility ratings, researchers and paid spokespersons play their role. All is hunky dory till marketing is based on credibility. When there is nothing available to make for credibility, something is projected as one and marketing carried out, in the calculated hope that profits can accrue, since profit making must continue endlessly. That is what makes pharma adopt even questionable means to make profits.
Essentially, there are four types of drugs. First, drugs that work and have minimal side-effects; second, drugs which work but have serious side-effects; third, drugs that do not work and have minimal side-effects; and fourth, drugs which work minimally but have serious side-effects. It is the second and fourth types that create major hassles for industry. Often, industry may try to project the fourth type as the second to escape censure.
The major cat and mouse game being played by conscientious researchers is in exposing the third and fourth for what they are and not allowing industry to palm them off as the first and second type respectively. The other major game is in preventing the second type from being projected as the first. The third type are essentially harmless, so they attract censure all right and some merriment at the antics to market them. But they escape anything more than a light rap on the knuckles, except when they are projected as the first type.
What is necessary for industry captains and long-term players is to realise:

1. Their major propelling force can only be producing the first type.
2. They accept the second type only till they can lay their hands on the first.
3. The third type can occasionally be played around with to shore up profits, but never by projecting them as the first type.
4. The fourth type are the laggards, a real threat to credibility, and therefore deserve no market hype or promotion.
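The four-type taxonomy above is simply a 2×2 grid of efficacy against side-effect severity. As a minimal illustrative sketch (the function name and boolean parameters are my own labels, not terms from the article):

```python
def classify_drug(works: bool, serious_side_effects: bool) -> int:
    """Map a drug onto the article's four types:
    1: works, minimal side-effects         (the goal)
    2: works, serious side-effects         (tolerated until a type 1 exists)
    3: doesn't work, minimal side-effects  (harmless; censured if hyped as type 1)
    4: barely works, serious side-effects  (the real threat to credibility)
    """
    if works and not serious_side_effects:
        return 1
    if works and serious_side_effects:
        return 2
    if not works and not serious_side_effects:
        return 3
    return 4
```

The "cat and mouse game" the article describes is then simply industry projecting a drug's true type as a lower number, and conscientious researchers exposing its real classification.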
In finding out why much of pharma indulges in questionable tactics, we are led to some interesting solutions that prevent such tactics with the least hassle for all concerned, even as both profits and credibility are kept intact.
doi:10.4103/0973-1229.32176
PMCID: PMC3192391  PMID: 22058616
Academia; Pharmaceutical Industry; Clinical Practice Guidelines; Best Practice Guidelines; Academic Medical Centers; Medical Associations; Research Journals; Clinical Research; Public Welfare; Pharma Image; Corporate Welfare; Biological Psychiatry; Law Suits Against Industry
24.  Threats to Validity in the Design and Conduct of Preclinical Efficacy Studies: A Systematic Review of Guidelines for In Vivo Animal Experiments 
PLoS Medicine  2013;10(7):e1001489.
Background
The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address.
Methods and Findings
We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria—most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation.
Conclusions
By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
The development process for new drugs is lengthy and complex. It begins in the laboratory, where scientists investigate the causes of diseases and identify potential new treatments. Next, promising interventions undergo preclinical research in cells and in animals (in vivo animal experiments) to test whether the intervention has the expected effect and to support the generalization (extension) of this treatment–effect relationship to patients. Drugs that pass these tests then enter clinical trials, where their safety and efficacy are tested in selected groups of patients under strictly controlled conditions. Finally, the government bodies responsible for drug approval review the results of the clinical trials, and successful drugs receive a marketing license, usually a decade or more after the initial laboratory work. Notably, only 11% of agents that enter clinical testing (investigational drugs) are ultimately licensed.
Why Was This Study Done?
The frequent failure of investigational drugs during clinical translation is potentially harmful to trial participants. Moreover, the costs of these failures are passed onto healthcare systems in the form of higher drug prices. It would be good, therefore, to reduce the attrition rate of investigational drugs. One possible explanation for the dismal success rate of clinical translation is that preclinical research, the key resource for justifying clinical development, is flawed. To address this possibility, several groups of preclinical researchers have issued guidelines intended to improve the design and execution of in vivo animal studies. In this systematic review (a study that uses predefined criteria to identify all the research on a given topic), the authors identify the experimental practices that are commonly recommended in these guidelines and organize these recommendations according to the type of threat to validity (internal, construct, or external) that they address. Internal threats to validity are factors that confound reliable inferences about treatment–effect relationships in preclinical research. For example, experimenter expectation may bias outcome assessment. Construct threats to validity arise when researchers mischaracterize the relationship between an experimental system and the clinical disease it is intended to represent. For example, researchers may use an animal model for a complex multifaceted clinical disease that only includes one characteristic of the disease. External threats to validity are unseen factors that frustrate the transfer of treatment–effect relationships from animal models to patients.
What Did the Researchers Do and Find?
The researchers identified 26 preclinical guidelines that met their predefined eligibility criteria. Twelve guidelines addressed preclinical research for neurological and cerebrovascular drug development; other disorders covered by guidelines included cardiac and circulatory disorders, sepsis, pain, and arthritis. Together, the guidelines offered 55 different recommendations for the design and execution of preclinical in vivo animal studies. Nineteen recommendations addressed threats to internal validity. The most commonly included recommendations of this type called for the use of power calculations to ensure that sample sizes are large enough to yield statistically meaningful results, random allocation of animals to treatment groups, and “blinding” of researchers who assess outcomes to treatment allocation. Among the 25 recommendations that addressed threats to construct validity, the most commonly included recommendations called for characterization of the properties of the animal model before experimentation and matching of the animal model to the human manifestation of the disease. Finally, six recommendations addressed threats to external validity. The most commonly included of these recommendations suggested that preclinical research should be replicated in different models of the same disease and in different species, and should also be replicated independently.
What Do These Findings Mean?
This systematic review identifies a range of investigational recommendations that preclinical researchers believe address threats to the validity of preclinical efficacy studies. Many of these recommendations are not widely implemented in preclinical research at present. Whether the failure to implement them explains the frequent discordance between the results on drug safety and efficacy obtained in preclinical research and in clinical trials is currently unclear. These findings provide a starting point, however, for the improvement of existing preclinical research guidelines for specific diseases, and for the development of similar guidelines for other diseases. They also provide an evidence-based platform for the analysis of preclinical evidence and for the study and evaluation of preclinical research practice. These findings should, therefore, be considered by investigators, institutional review bodies, journals, and funding agents when designing, evaluating, and sponsoring translational research.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001489.
The US Food and Drug Administration provides information about drug approval in the US for consumers and for health professionals; its Patient Network provides a step-by-step description of the drug development process that includes information on preclinical research
The UK Medicines and Healthcare Products Regulatory Agency (MHRA) provides information about all aspects of the scientific evaluation and approval of new medicines in the UK; its My Medicine: From Laboratory to Pharmacy Shelf web pages describe the drug development process from scientific discovery, through preclinical and clinical research, to licensing and ongoing monitoring
The STREAM website provides ongoing information about policy, ethics, and practices used in clinical translation of new drugs
The CAMARADES collaboration offers a “supporting framework for groups involved in the systematic review of animal studies” in stroke and other neurological diseases
doi:10.1371/journal.pmed.1001489
PMCID: PMC3720257  PMID: 23935460
25.  Something going on in Milan: a review of the 4th International PhD Student Cancer Conference 
ecancermedicalscience  2010;4:198.
The 4th International PhD Student Cancer Conference was held at the IFOM-IEO-Campus in Milan from 19–21 May 2010 http://www.semm.it/events_researchPast.php
The Conference covered many topics related to cancer, from basic biology to clinical aspects of the disease. All attendees presented their research, by either giving a talk or presenting a poster. This conference is an opportunity to introduce PhD students to top cancer research institutes across Europe.
The core participating institutes included:
European School of Molecular Medicine (SEMM)—IFOM-IEO Campus, Milan
Beatson Institute for Cancer Research (BICR), Glasgow
Cambridge Research Institute (CRI), Cambridge, UK
MRC Gray Institute of Radiation Biology (GIROB), Oxford
London Research Institute (LRI), London
Paterson Institute for Cancer Research (PICR), Manchester
The Netherlands Cancer Institute (NKI), Amsterdam
‘You organizers have crushed all my prejudices towards Italians. Congratulations, I enjoyed the conference immensely!’ Even if it might sound rude, this was surely meant as a genuine compliment (at least, that’s how we took it), especially coming from someone who was himself the fusion of two usually antithetical concepts: fashion style and English nationality.
The year 2010 has marked an important event for Italian research in the international scientific panorama: the European School of Molecular Medicine (SEMM) had the honour to host the 4th International PhD Student Cancer Conference, which was held from 19–21 May 2010 at the IFOM-IEO-Campus (http://www.semm.it/events_researchPast.php) in Milan.
The conference was attended by more than one hundred students from a selection of cutting-edge European institutes devoted to cancer research. The rationale behind it is to promote cooperation among young scientists across Europe, giving them a forum to debate science and exchange ideas and experiences. But that is not all: it is also designed to put PhD students in touch with other prestigious research centres and to create connections for future postdoc positions or job opportunities. And last but not least, it is a golden chance for penniless PhD students to spend a couple of extra days visiting a foreign country (a motivation that will, of course, never be voiced to supervisors).
The network of participating institutes has a three-nation core, made up of the Netherlands Cancer Institute, the Italian European School of Molecular Medicine (SEMM) and five UK cancer research institutes (The London Research Institute, The Cambridge Research Institute, The Beatson Institute for Cancer Research in Glasgow, The Paterson Institute for Cancer Research in Manchester and the MRC Gray Institute for Radiation Oncology and Biology in Oxford).
The conference is hosted and organised every year by one of the core institutes: the first was held in Cambridge in 2007, followed by Amsterdam in 2008 and London in 2009; this year it was the turn of Milan.
In addition to the core institutes, PhD students from several other high-profile institutes are invited to attend the conference. This year participants applied from the Spanish National Cancer Centre (CNIO, Madrid), the German Cancer Research Centre (DKFZ, Heidelberg), the European Molecular Biology Laboratory (EMBL, Heidelberg) and the San Raffaele Institute (HSR, Milan). Moreover, four ‘special guests’ from the National Centre for Biological Sciences, Bangalore (India), attended the conference in Milan. This represents a first step in widening the network's horizons beyond Europe towards a worldwide network of talented PhD students in the life sciences.
The conference spread over two and a half days (Wednesday 19 to Friday 21 May) and touched on a broad spectrum of topics: from basic biology to development, from cancer therapies to modelling and top-down, new-generation global approaches. The final selection of presentations was a tough task for us organisers (Chiara Segré, Federica Castellucci, Francesca Milanesi, Gianluca Varetti and Gian Maria Sarra Ferraris), owing to the high scientific level of the abstracts submitted. In the end, 26 top students were chosen to give 15-minute oral presentations in one of eight sessions: Development & Differentiation, Cell Migration, Immunology & Cancer, Modelling & Large Scale Approaches, Genome Instability, Signal Transduction, Cancer Genetics & Drug Resistance, and Stem Cells in Biology and Cancer.
The scientific programme was further enriched by two scientific special sessions, held by Professor Pier Paolo di Fiore and Dr Giuseppe Testa, Principal Investigators at the IFOM-IEO-Campus and by a bioethical round table on human embryonic stem cell research moderated by Silvia Camporesi, a senior PhD student in the SEMM PhD Programme ‘Foundation of Life Science and their Bioethical Consequences’.
On top of everything, we had the pleasure of inviting, as keynote speakers, two leading European scientists in the fields of cancer invasion and stem cell biology, respectively: Dr Peter Friedl from the Nijmegen Centre for Molecular Life Sciences (The Netherlands) and Professor Andreas Trumpp from the Heidelberg Institute for Stem Cell Technology and Experimental Medicine (Germany).
All the student talks distinguished themselves by the impressive quality of their science, encouraging evidence of the high level of research carried out in Europe. It would be beyond the purpose of this report to summarise all 26 talks, which touched on many different and specific topics. For further information, the conference abstract book with all the scientific content is available on the conference website (http://www.semm.it/events_researchPast.php). In what follows, the special sessions and the keynote lectures are discussed in detail.
doi:10.3332/ecancer.2010.198
PMCID: PMC3234021  PMID: 22276043
