1.  A regression model approach to enable cell morphology correction in high-throughput flow cytometry 
Large variations in cell size and shape can undermine traditional gating methods for analyzing flow cytometry data. Correcting for these effects enables analysis of high-throughput data sets, including >5000 yeast samples with diverse cell morphologies.
The regression model approach corrects for the effects of cell morphology on fluorescence as well as an extremely small and restrictive gate does, but without removing any of the cells. In contrast to traditional gating, this approach enables the quantitative analysis of high-throughput flow cytometry experiments, since the regression model can compare biological samples that show little or no overlap in the morphology of their cells. The analysis of a high-throughput yeast flow cytometry data set consisting of >5000 biological samples identified key proteins that affect the timing and intensity of the bifurcation event that follows the carbon-source transition from glucose to fatty acids, in which some yeast cells undergo major structural changes while others do not.
Flow cytometry is a widely used technique that enables the measurement of different optical properties of individual cells within large populations of cells in a fast and automated manner. For example, by targeting cell-specific markers with fluorescent probes, flow cytometry is used to identify (and isolate) cell types within complex mixtures of cells. In addition, fluorescence reporters can be used in conjunction with flow cytometry to measure protein, RNA or DNA concentration within single cells of a population.
One of the biggest advantages of this technique is that it provides information on how each cell behaves instead of just measuring the population average. This can be essential when analyzing complex samples that consist of diverse cell types or when measuring cellular responses to stimuli. For example, there is an important difference between a 50% expression increase in all cells of a population after stimulation and a 100% increase in only half of the cells while the other half remains unresponsive. Another important advantage of flow cytometry is automation, which enables high-throughput studies with thousands of samples and conditions. However, current methods are confounded by populations of cells that are non-uniform in terms of size and granularity. Such variability affects the emitted fluorescence of the cell and adds undesired variability when estimating population fluorescence. This effect also frustrates a sensible comparison between conditions, where not only fluorescence but also cell size and granularity may be affected.
Traditionally, this problem has been addressed by using ‘gates' that restrict the analysis to cells with similar morphological properties (i.e. cell size and cell granularity). Because cells inside the gate are morphologically similar to one another, they will show a smaller variability in their response within the population. Moreover, applying the same gate in all samples assures that observed differences between these samples are not due to differential cell morphologies.
Gating, however, comes with costs. First, since only a subgroup of cells is selected, the final number of cells analyzed can be significantly reduced. This means that in order to have sufficient statistical power, more cells have to be acquired, which, if even possible in the first place, increases the time and cost of the experiment. Second, finding a good gate for all samples and conditions can be challenging if not impossible, especially in cases where cellular morphology changes dramatically between conditions. Finally, gating is a very user-dependent process, where both the size and shape of the gate are determined by the researcher and will affect the outcome, introducing subjectivity in the analysis that complicates reproducibility.
In this paper, we present an alternative method to gating that addresses the issues stated above. The method is based on a regression model containing linear and non-linear terms that estimates and corrects for the effect of cell size and granularity on the observed fluorescence of each cell in a sample. The corrected fluorescence thus becomes ‘free' of the morphological effects.
Because the model uses all cells in the sample, it assures that the corrected fluorescence is an accurate representation of the sample. In addition, the regression model can predict the expected fluorescence of a sample in areas where there are no cells. This makes it possible to compare between samples that have little overlap with good confidence. Furthermore, because the regression model is automated, it is fully reproducible between labs and conditions. Finally, it allows for a rapid analysis of big data sets containing thousands of samples.
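To make the correction concrete, here is a minimal sketch of a morphology-correcting regression, assuming log-transformed forward scatter (FSC, a proxy for size) and side scatter (SSC, a proxy for granularity) as predictors with quadratic terms. The paper's exact model terms are not given in this summary, so the functional form, variable names and synthetic data below are illustrative assumptions only.

```python
# Sketch of per-sample morphology correction (assumed form, not the authors'
# exact model): regress log fluorescence on log FSC and log SSC with linear
# and quadratic terms, then keep the residuals plus the sample mean as the
# "morphology-corrected" fluorescence.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def correct_fluorescence(fsc, ssc, fl):
    """fsc, ssc, fl: 1-D arrays of per-cell forward scatter, side scatter
    and fluorescence for one sample."""
    X = np.column_stack([np.log(fsc), np.log(ssc)])
    X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
    y = np.log(fl)
    model = LinearRegression().fit(X_poly, y)
    # Residuals = fluorescence with the morphology-explained part removed;
    # adding back the mean keeps values on the original scale.
    return np.exp(y - model.predict(X_poly) + y.mean())

# Synthetic demonstration: fluorescence that scales with cell size.
rng = np.random.default_rng(0)
fsc = rng.lognormal(5, 0.3, 10_000)
ssc = rng.lognormal(4, 0.3, 10_000)
fl = 0.5 * fsc * rng.lognormal(0, 0.2, 10_000)
corrected = correct_fluorescence(fsc, ssc, fl)
# Correlation with size drops to ~0 after correction.
print(np.corrcoef(np.log(fsc), np.log(fl))[0, 1],
      np.corrcoef(np.log(fsc), np.log(corrected))[0, 1])
```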
To test the validity of the model, we performed several experiments. We show that the regression model removes morphology-associated variability as effectively as an extremely small and restrictive gate does, but without the caveat of removing cells. We test the method in different organisms (yeast and human) and applications (protein level detection, separation of mixed subpopulations). We then apply this method to unveil new biological insights into the mechanistic processes involved in transcriptional noise.
Gene transcription is a process subject to the randomness intrinsic to any molecular event. Although such randomness may seem undesirable for the cell, since it prevents consistent behavior, there are situations where some degree of randomness is beneficial (e.g. bet hedging). For this reason, each gene is tuned to exhibit different levels of randomness or noise depending on its function. For core and essential genes, the cell has developed mechanisms to lower the level of noise, while for genes involved in the response to stress, the variability is greater.
This gene transcription tuning can be determined at many levels, from the architecture of the transcriptional network, to epigenetic regulation. In our study, we analyze the latter using the response of yeast to the presence of fatty acid in the environment. Fatty acid can be used as energy by yeast, but it requires major structural changes and commitments. We have observed that at the population level, there is a bifurcation event whereby some cells undergo these changes and others do not. We have analyzed this bifurcation event in mutants for all the non-essential epigenetic regulators in yeast and identified key proteins that affect the time and intensity of this bifurcation. Even though fatty acid triggers major morphological changes in the cell, the regression model still makes it possible to analyze the over 5000 flow cytometry samples in this data set in an automated manner, whereas a traditional gating approach would be impossible.
Cells exposed to stimuli exhibit a wide range of responses ensuring phenotypic variability across the population. Such single cell behavior is often examined by flow cytometry; however, gating procedures typically employed to select a small subpopulation of cells with similar morphological characteristics make it difficult, even impossible, to quantitatively compare cells across a large variety of experimental conditions because these conditions can lead to profound morphological variations. To overcome these limitations, we developed a regression approach to correct for variability in fluorescence intensity due to differences in cell size and granularity without discarding any of the cells, which gating ipso facto does. This approach enables quantitative studies of cellular heterogeneity and transcriptional noise in high-throughput experiments involving thousands of samples. We used this approach to analyze a library of yeast knockout strains and reveal genes required for the population to establish a bimodal response to oleic acid induction. We identify a group of epigenetic regulators and nucleoporins that, by maintaining an ‘unresponsive population,' may provide the population with the advantage of diversified bet hedging.
doi:10.1038/msb.2011.64
PMCID: PMC3202802  PMID: 21952134
flow cytometry; high-throughput experiments; statistical regression model; transcriptional noise
2.  Eurocan plus report: feasibility study for coordination of national cancer research activities 
Summary
The EUROCAN+PLUS Project, called for by the European Parliament, was launched in October 2005 as a feasibility study for coordination of national cancer research activities in Europe. Over the course of the next two years, the Project process organized over 60 large meetings and countless smaller meetings that gathered in total over a thousand people, the largest Europe–wide consultation ever conducted in the field of cancer research.
Despite a strong tradition in biomedical science in Europe, fragmentation and lack of sustainability remain formidable challenges for implementing innovative cancer research and cancer care improvement. There is an enormous duplication of research effort in the Member States, which wastes time, wastes money and severely limits the total intellectual concentration on the wide cancer problem. There is a striking lack of communication between some of the biggest actors on the European scene, and there are palpable tensions between funders and those researchers seeking funds.
It is essential to include the patients’ voice in the establishment of priority areas in cancer research at the present time. The necessity to have dialogue between funders and scientists to establish the best mechanisms to meet the needs of the entire community is evident. A top priority should be the development of translational research (in its widest form), leading to the development of effective and innovative cancer treatments and preventive strategies. Translational research ranges from bench–to–bedside innovative cancer therapies and extends to include bringing about changes in population behaviours when a risk factor is established.
The EUROCAN+PLUS Project recommends the creation of a small, permanent and independent European Cancer Initiative (ECI). This should be a model structure and was widely supported at both General Assemblies of the project. The ECI should assume responsibility for stimulating innovative cancer research and facilitating processes, becoming the common voice of the cancer research community and serving as an interface between the cancer research community and European citizens, patients’ organizations, European institutions, Member States, industry and small and medium enterprises (SMEs), putting into practice solutions aimed at alleviating barriers to collaboration and coordination of cancer research activities in the European Union, and dealing with legal and regulatory issues. The development of an effective ECI will require time, but this entity should be established immediately. As an initial step, coordination efforts should be directed towards the creation of a platform on translational research that could encompass (1) coordination between basic, clinical and epidemiological research; (2) formal agreements of co–operation between comprehensive cancer centres and basic research laboratories throughout Europe and (3) networking between funding bodies at the European level.
The European Parliament and its instruments have had a major influence in cancer control in Europe, notably in tobacco control and in the implementation of effective population–based screening. To make further progress there is a need for novelty and innovation in cancer research and prevention in Europe, and having a platform such as the ECI, where those involved in all aspects of cancer research can meet, discuss and interact, is a decisive development for Europe.
Executive Summary
Cancer is one of the biggest public health crises facing Europe in the 21st century—one for which Europe is currently neither prepared nor preparing itself. Cancer is a major cause of death in Europe, with two million deaths and three million new cases diagnosed annually, and the situation is set to worsen as the population ages.
These facts led the European Parliament, through the Research Directorate-General of the European Commission, to call for initiatives for better coordination of cancer research efforts in the European Union. The EUROCAN+PLUS Project was launched in October 2005 as a feasibility study for coordination of national cancer research activities. Over the course of the next two years, the Project process organized over 60 large meetings and countless smaller meetings that gathered in total over a thousand people. In this respect, the Project became the largest Europe-wide consultation ever conducted in the field of cancer research, implicating researchers, cancer centres and hospitals, administrators, healthcare professionals, funding agencies, industry, patients’ organizations and patients.
The Project first identified barriers impeding research and collaboration in research in Europe. Despite a strong tradition in biomedical science in Europe, fragmentation and lack of sustainability remain the formidable challenges for implementing innovative cancer research and cancer care improvement. There is an enormous duplication of research effort in the Member States, which wastes time, wastes money and severely limits the total intellectual concentration on the wide cancer problem. There is a striking lack of communication between some of the biggest actors on the European scene, and there are palpable tensions between funders and those researchers seeking funds.
In addition, there is a shortage of leadership, a multiplicity of institutions each focusing on its own agenda, sub–optimal contact with industry, inadequate training, non–existent career paths, low personnel mobility in research especially among clinicians and inefficient funding—all conspiring against efficient collaboration in cancer care and research. European cancer research today does not have a functional translational research continuum, that is the process that exploits biomedical research innovations and converts them into prevention methods, diagnostic tools and therapies. Moreover, epidemiological research is not integrated with other types of cancer research, and the implementation of the European Directives on Clinical Trials 1 and on Personal Data Protection 2 has further slowed the innovation process in Europe. Furthermore, large inequalities in health and research exist between the EU–15 and the New Member States.
The picture is not entirely bleak, however, as the European cancer research scene presents several strengths, such as excellent basic research and clinical research and innovative etiological research that should be better exploited.
When considering recommendations, several priority dimensions had to be retained. It is essential that proposals include actions and recommendations that can benefit all Member States of the European Union and not just States with the elite centres. It is also essential to have a broader patient orientation to help provide the knowledge to establish cancer control possibilities when we exhaust what can be achieved by the implementation of current knowledge. It is vital that the actions proposed can contribute to the Lisbon Strategy to make Europe more innovative and competitive in (cancer) research.
The Project participants identified six areas for which consensus solutions should be implemented in order to obtain better coordination of cancer research activities. The required solutions are as follows: (1) the proactive management of innovation, detection, facilitation of collaborations and maintenance of healthy competition within the European cancer research community; (2) the establishment of an exchange portal of information for health professionals, patients and policy makers; (3) the provision of guidance for translational and clinical research, including the establishment of a translational research platform involving comprehensive cancer centres and cancer research centres; (4) the coordination of calls and financial management of cancer research projects; (5) the construction of a ‘one–stop shop’ as a contact interface between the industry, small and medium enterprises, scientists and other stakeholders; and (6) the support of greater involvement of healthcare professionals in translational research and multidisciplinary training.
In the course of the EUROCAN+PLUS consultative process, several key collaborative projects emerged between the various groups and institutes engaged in the consultation. There was a collaboration network established with Europe’s leading Comprehensive Cancer Centres; funding was awarded for a closer collaboration of Owners of Cancer Registries in Europe (EUROCOURSE); there was funding received from FP7 for an extensive network of leading Biological Resource Centres in Europe (BBMRI); a Working Group identified the special needs of Central, Eastern and South–eastern Europe and proposed a remedy (‘Warsaw Declaration’), and the concept of developing a one–stop shop for dealing with academia and industry including the Innovative Medicines Initiative (IMI) was discussed in detail.
Several other dimensions currently lacking were identified. There is an absolute necessity to include the patients’ voice in the establishment of priority areas in cancer research at the present time. It was a salutary lesson when it was recognized that all that is known about the quality of life of the cancer patient comes from the experience of a tiny proportion of cancer patients included in a few clinical trials. The necessity to have dialogue between funders and scientists to establish the best mechanisms to meet the needs of the entire community was evident. A top priority should be the development of translational research (in its widest form) and the development of effective and innovative cancer treatments and preventative strategies in the European Union. Translational research ranges from bench-to-bedside innovative cancer therapies and extends to include bringing about changes in population behaviours when a risk factor is established.
Having taken note of the barriers and the solutions and having examined relevant examples of existing European organizations in the field, it was agreed during the General Assembly of 19 November 2007 that the EUROCAN+PLUS Project had to recommend the creation of a small, permanent and neutral ECI. This should be a model structure and was widely supported at both General Assemblies of the project. The proposal is based on the successful model of the European Molecular Biology Organisation (EMBO), and its principal aims include providing a forum where researchers from all backgrounds and from all countries can meet with members of other specialities including patients, nurses, clinicians, funders and scientific administrators to develop priority programmes to make Europe more competitive in research and more focused on the cancer patient.
The ECI should assume responsibility for: stimulating innovative cancer research and facilitating processes; becoming the common voice of the cancer research community and serving as an interface between the cancer research community and European citizens, patients’ organizations, European institutions, Member States, industry and small and medium enterprises; putting into practice the aforementioned solutions aimed at alleviating barriers and coordinating cancer research activities in the EU; and dealing with legal and regulatory issues.
Solutions implemented through the ECI will lead to better coordination and collaboration throughout Europe, more efficient use of resources, an increase in Europe’s attractiveness to the biomedical industry and better quality of cancer research and education of health professionals.
The Project considered that European legal instruments currently available were inadequate for addressing many aspects of the barriers identified and for the implementation of effective, lasting solutions. Therefore, the legal environment that could shelter an idea like the ECI remains to be defined, and defining it should be a priority. In this context, the initiative of the European Commission for a new legal entity for research infrastructure might be a step in this direction. The development of an effective ECI will require time, but this entity should be established immediately. As an initial step, coordination efforts should be directed towards the creation of a platform on translational research that could encompass: (1) coordination between basic, clinical and epidemiological research; (2) formal agreements of co-operation between comprehensive cancer centres and basic research laboratories throughout Europe; (3) networking between funding bodies at the European level. Another topic deserving immediate attention is the creation of a European database on cancer research projects and cancer research facilities.
Despite enormous progress in cancer control in Europe during the past two decades, there was an increase of 300,000 in the number of new cases of cancer diagnosed between 2004 and 2006. The European Parliament and its instruments have had a major influence in cancer control, notably in tobacco control and in the implementation of effective population–based screening. To make further progress there is a need for novelty and innovation in cancer research and prevention in Europe, and having a platform such as the ECI, where those involved in all aspects of cancer research can meet, discuss and interact, is a decisive development for Europe.
doi:10.3332/ecancer.2011.84
PMCID: PMC3234055  PMID: 22274749
3.  Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals 
PLoS Medicine  2013;10(12):e1001566.
Agnes Dechartres and colleagues searched ClinicalTrials.gov for completed drug RCTs with results reported and then searched for corresponding studies in PubMed to evaluate timeliness and completeness of reporting.
Please see later in the article for the Editors' Summary
Background
The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration–approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals.
Methods and Findings
We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms for all randomized patients or for those who received at least one treatment dose).
Of the 600 trials with results posted at ClinicalTrials.gov that we randomly sampled, 50% (n = 297) had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001).
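As an illustration of the kinds of summary statistics reported above, the sketch below computes a median posting delay with quartiles and runs a McNemar test on a paired completeness table; all numbers are synthetic, and the abstract does not state which paired test the authors used, so McNemar is only one plausible choice for this kind of paired yes/no comparison.

```python
# Illustrative calculation only: synthetic numbers, not the study data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Months from primary completion to first public posting (made-up values).
months_to_posting = np.array([12, 14, 19, 22, 30, 35])
q1, med, q3 = np.percentile(months_to_posting, [25, 50, 75])
print(f"median {med:.0f} mo (Q1 {q1:.0f}, Q3 {q3:.0f})")

# Paired completeness for the same trials:
# rows = complete at ClinicalTrials.gov (yes/no), columns = complete in journal.
table = np.array([[80, 60],   # complete at both / registry only
                  [17, 45]])  # journal only / complete in neither
print(mcnemar(table, exact=False))
```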
The main study limitation was that we considered only the publication describing the results for the primary outcomes.
Conclusions
Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
When patients consult a doctor, they expect to be recommended what their doctor believes is the most effective treatment with the fewest adverse effects. To determine which treatment to recommend, clinicians rely on sources that include research studies. Among studies, the best evidence is generally agreed to come from systematic reviews and randomized controlled clinical trials (RCTs), studies that test the efficacy and safety of medical interventions by comparing clinical outcomes in groups of patients randomly chosen to receive different interventions. Decision-making based on the best available evidence is called evidence-based medicine. However, evidence-based medicine can only guide clinicians if trial results are published in a timely and complete manner. Unfortunately, underreporting of trials is common. For example, an RCT in which a new drug performs better than existing drugs is more likely to be published than one in which the new drug performs badly or has unwanted adverse effects (publication bias). There can also be a delay in publishing the results of negative trials (time-lag bias) or a failure to publish complete results for all the prespecified outcomes of a trial (reporting bias). All three types of bias threaten informed medical decision-making and the health of patients.
Why Was This Study Done?
One initiative that aims to prevent these biases was included in the 2007 US Food and Drug Administration Amendments Act (FDAAA). The Food and Drug Administration (FDA) is responsible for approving drugs and devices that are marketed in the US. The FDAAA requires that results from clinical trials of FDA-approved drugs and devices conducted in the United States be made publicly available at ClinicalTrials.gov within one year of trial completion. ClinicalTrials.gov—a web-based registry that includes US and international clinical trials—was established in 2000 in response to the 1997 FDA Modernization Act, which required mandatory registration of trial titles and designs and of the conditions and interventions under study. The FDAAA expanded these mandatory requirements by requiring researchers studying FDA-approved drugs and devices to report additional information such as the baseline characteristics of the participants in each arm of the trial and the results of primary and secondary outcome measures (the effects of the intervention on predefined clinical measurements) and their statistical significance (an indication of whether differences in outcomes might have happened by chance). Researchers of other trials registered in ClinicalTrials.gov are welcome to post trial results as well. Here, the researchers compare the timing and completeness (i.e., whether all relevant information was fully reported) of results of drug trials posted at ClinicalTrials.gov with those published in medical journals.
What Did the Researchers Do and Find?
The researchers searched ClinicalTrials.gov for reports of completed phase III and IV (late-stage) RCTs of drugs with posted results. For a random sample of 600 eligible trials, they searched PubMed (a database of biomedical publications) for corresponding publications. Only 50% of trials with results posted at ClinicalTrials.gov had a matching published article. For 202 trials with both posted and published results, the researchers compared the timing and completeness of the results posted at ClinicalTrials.gov and of results reported in the corresponding journal publication. The median time between the study completion date and the first results being publicly posted at ClinicalTrials.gov was 19 months, whereas the time between completion and publication in a journal was 21 months. The flow of participants through trials was completely reported in 64% of the ClinicalTrials.gov postings but in only 48% of the corresponding publications. Results for the primary outcome measure were completely reported in 79% and 69% of the ClinicalTrials.gov postings and corresponding publications, respectively. Finally, adverse events were completely reported in 73% of the ClinicalTrials.gov postings but in only 45% of the corresponding publications, and serious adverse events were reported in 99% and 63% of the ClinicalTrials.gov postings and corresponding publications, respectively.
What Do These Findings Mean?
These findings suggest that the reporting of trial results is significantly more complete at ClinicalTrials.gov than in published journal articles reporting the main trial results. Certain aspects of this study may affect the accuracy of this conclusion. For example, the researchers compared the results posted at ClinicalTrials.gov only with the results in the publication that described the primary outcome of each trial, even though some trials had multiple publications. Importantly, these findings suggest that, to enable patients and physicians to make informed treatment decisions, experts undertaking assessments of drugs should consider seeking efficacy and safety data posted at ClinicalTrials.gov, both for trials whose results are not published yet and for trials whose results are published. Moreover, they suggest that the use of templates to guide standardized reporting of trial results in journals and broader mandatory posting of results may help to improve the reporting and transparency of clinical trials and, consequently, the evidence available to inform treatment of patients.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001566.
Wikipedia has pages on evidence-based medicine and on publication bias (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The US Food and Drug Administration provides information about drug approval in the US for consumers and health-care professionals, plus detailed information on the 2007 Food and Drug Administration Amendments Act
ClinicalTrials.gov provides information about the US National Institutes of Health clinical trial registry, including background information about clinical trials, and a fact sheet detailing the requirements of the 2007 Food and Drug Administration Amendments Act
PLOS Medicine recently launched a Reporting Guidelines Collection, an open access collection of reporting guidelines, commentary, and related research on guidelines from across PLOS journals that aims to help advance the efficiency, effectiveness, and equitability of the dissemination of biomedical information; a 2008 PLOS Medicine editorial discusses the 2007 Food and Drug Administration Amendments Act
doi:10.1371/journal.pmed.1001566
PMCID: PMC3849189  PMID: 24311990
4.  The Task before Psychiatry Today Redux: STSPIR* 
Mens Sana Monographs  2014;12(1):35-70.
This paper outlines six important tasks for psychiatry today, which can be put in short as:
Spread and scale up services; Talk; Science; Psychotherapy; Integrate; and Research excellence.
As an acronym, STSPIR.
Spread and scale up services: Spreading mental health services to uncovered areas, and increasing facilities in covered areas. Mental disorders are a leading cause of ill health but remain at the bottom of the health agenda; patients face widespread discrimination, human rights violations and a lack of facilities; the brain drain from developing countries needs to be stemmed; at any given point, 10% of the adult population report having some mental or behavioural disorder; in India, serious mental disorders affect nearly 80 million people, i.e. the combined population of the northern top of India, including Punjab, Haryana, Jammu and Kashmir, Uttarakhand and Himachal Pradesh. Combating the imbalance between the burden of demand and the supply of efficient psychiatric services in all countries, especially developing ones like India, is the first task before psychiatry today; if ever a greater role for activism were needed, this is the field. The need is to scale up effective and cost-effective treatments and preventive interventions for mental disorders.

Talk: Speaking to a wider audience about the positive contributions of psychiatry. This means being aware of, understanding, and countering the massive anti-psychiatry propaganda online and elsewhere; giving a firm answer to anti-psychiatry even while understanding its transformation into mental health consumerism and opposition to reckless medicalisation; defining normality and abnormality; bringing about greater precision in diagnosis and care; motivating those helped by psychiatry to speak up; setting up informative websites and organising programmes to reduce stigma and spread mental health awareness; and setting up regular columns in psychiatry journals around the globe, called ‘Patients Speak’ or something similar, wherein those who have been helped get a chance to voice their stories.

Science: Shrugging off ambivalence and disagreement and searching for commonalities in psychiatric phenomena. An idiographic orientation which stresses individuality cannot, and should not, preclude the nomothetic or norm-laying thrust that is the crux of scientific progress. The major contribution of science has been to recognize such commonalities so they can be researched, categorized and used for human welfare. It is a mistake to stress individuality so much that commonalities are obliterated. While the purpose and approach of psychiatry, as of all medicine, has to be humane and caring, therapeutic advancements and aetiologic understandings are going to result only from a scientific methodology. Just caring is not enough if you have not mastered the methods of care, which only science can supply.

Psychotherapy: Psychiatrists continuing to do psychotherapy. Psychotherapy must be clearly defined, its parameters and methods firmly delineated, and its proof of effectiveness convincingly demonstrated by evidence-based and controlled trials. Psychotherapy research suffers from neglect by the mainstream at present because of the ascendancy of biological psychiatry; it suffers resource constraints, as major sponsors such as pharma are not interested; it needs funding from sincere research organisations and altruistic sponsors, as well as professional societies and governments; and it will have to provide enough irrefutable evidence that it works, with replicable studies that prove it across geographical areas. It will not do for psychiatrists to hand over psychotherapy to clinical psychologists and others.

Integrate approaches: Welcoming biological breakthroughs, while supplying psychosocial insights. Experimental breakthroughs, both in aetiology and therapeutics, will come mainly from biology, but the insights and leads can hopefully come from many other fields, especially the psychosocial and philosophical. The biological and the psychological are not exclusive but complementary approaches. Both integration and reductionism are valid: integration is necessary as an attitude, reductionism as an approach. Both the biological and the psychosocial must co-exist in the individual psychiatrist, as much as in the branch itself.

Research excellence: Promoting genuine research alone, and working towards an Indian Nobel Laureate in psychiatry by 2020. This means stopping the promotion of poor-quality research and researchers, and the encouragement of sycophants and ladder climbers; picking up and honing genuine research talent from among faculty and students; developing consistently high-quality environs in departments and having Heads of Units who recognize, hone and nurture talent, and who never give in to pessimism and cynicism; and no longer being satisfied with the money, power and prestige that come by wheeling-dealing, groupism and politicking. Infinite vistas of opportunity wait in the wings to unfold and offer opportunities for unravelling the mysteries of the ‘mind’ to the earnest seeker, provided he is ready to seek the valuable and stops holding on to the artificial and the superfluous.
doi:10.4103/0973-1229.130295
PMCID: PMC4037900  PMID: 24891797
Biological breakthroughs; Commonalities in psychiatry; Indian Nobel Laureate; Integrate; Positive contributions of psychiatry; Psychosocial insights; Psychotherapy; Research excellence; Scale up services; Science; Stigma; Talk
5.  Something going on in Milan: a review of the 4th International PhD Student Cancer Conference 
ecancermedicalscience  2010;4:198.
The 4th International PhD Student Cancer Conference was held at the IFOM-IEO-Campus in Milan from 19–21 May 2010 http://www.semm.it/events_researchPast.php
The Conference covered many topics related to cancer, from basic biology to clinical aspects of the disease. All attendees presented their research, by either giving a talk or presenting a poster. This conference is an opportunity to introduce PhD students to top cancer research institutes across Europe.
The core participating institutes included: European School of Molecular Medicine (SEMM)—IFOM-IEO Campus, Milan; Beatson Institute for Cancer Research (BICR), Glasgow; Cambridge Research Institute (CRI), Cambridge, UK; MRC Gray Institute of Radiation Biology (GIROB), Oxford; London Research Institute (LRI), London; Paterson Institute for Cancer Research (PICR), Manchester; and The Netherlands Cancer Institute (NKI), Amsterdam.
‘You organizers have crushed all my prejudices towards Italians. Congratulations, I enjoyed the conference immensely!’ Even if it might have sounded rude, this was surely meant as a genuine compliment (at least, that’s how we took it), especially considering that it came from a guy who was himself the fusion of two usually antithetical concepts: fashion style and English nationality.
The year 2010 has marked an important event for Italian research in the international scientific panorama: the European School of Molecular Medicine (SEMM) had the honour to host the 4th International PhD Student Cancer Conference, which was held from 19–21 May 2010 at the IFOM-IEO-Campus (http://www.semm.it/events_researchPast.php) in Milan.
The conference was attended by more than one hundred students, coming from a selection of cutting-edge European institutes devoted to cancer research. The rationale behind it is the promotion of cooperation among young scientists across Europe, to debate science and to exchange ideas and experiences. But that is not all: it is also designed for PhD students to get in touch with other prestigious research centres and to create connections for future postdoc or job experiences. And last but not least, it is a golden chance for penniless PhD students to spend a couple of extra days visiting a foreign country (this motivation will of course never be voiced to supervisors).
The network of participating institutes has a three-nation core, made up of the Netherlands Cancer Institute, the Italian European School of Molecular Medicine (SEMM) and five UK cancer research institutes (the London Research Institute, the Cambridge Research Institute, the Beatson Institute for Cancer Research in Glasgow, the Paterson Institute for Cancer Research in Manchester and the MRC Gray Institute for Radiation Oncology and Biology in Oxford).
The conference is hosted and organised every year by one of the core institutes: the first was held in Cambridge in 2007, followed by Amsterdam in 2008 and London in 2009; this year it was the turn of Milan.
In addition to the core institutes, PhD students from several other high-profile institutes are invited to attend the conference. This year participants applied from the Spanish National Cancer Centre (CNIO, Madrid), the German Cancer Research Centre (DKFZ, Heidelberg), the European Molecular Biology Labs (EMBL, Heidelberg) and the San Raffaele Institute (HSR, Milan). Moreover four ‘special guests’ from the National Centre for Biological Sciences of Bangalore (India) attended the conference in Milan. This represents a first step in widening the horizons beyond Europe into a global worldwide network of talented PhD students in life sciences.
The conference spanned two and a half days (Wednesday 19th to Friday 21st May) and touched on a broad spectrum of topics: from basic biology to development, from cancer therapies to modelling and top-down, new-generation global approaches. The final selection of presentations was a tough task for us organisers (Chiara Segré, Federica Castellucci, Francesca Milanesi, Gianluca Varetti and Gian Maria Sarra Ferraris), owing to the high scientific level of the abstracts submitted. In the end, 26 top students were chosen to give a 15-min oral presentation in one of eight sessions: Development & Differentiation, Cell Migration, Immunology & Cancer, Modelling & Large Scale Approaches, Genome Instability, Signal Transduction, Cancer Genetics & Drug Resistance, and Stem Cells in Biology and Cancer.
The scientific programme was further enriched by two scientific special sessions, held by Professor Pier Paolo di Fiore and Dr Giuseppe Testa, Principal Investigators at the IFOM-IEO-Campus and by a bioethical round table on human embryonic stem cell research moderated by Silvia Camporesi, a senior PhD student in the SEMM PhD Programme ‘Foundation of Life Science and their Bioethical Consequences’.
On top of everything, we had the pleasure of inviting, as keynote speakers, two leading European scientists in the fields of cancer invasion and biology of stem cells, respectively: Dr Peter Friedl from The Nijmegen Centre for Molecular Life (The Netherlands) and Professor Andreas Trumpp from The Heidelberg Institute for Stem Cell Technology and Experimental Medicine (Heidelberg).
All the student talks distinguished themselves by the impressive quality of their science, encouraging evidence of the high level of research carried out in Europe. It would be beyond the purpose of this report to summarise all 26 talks, which touched on many different and specific topics. For further information, the Conference Abstract book with all the scientific content is available on the conference Web site (http://www.semm.it/events_researchPast.php). In what follows, the special sessions and the keynote lectures will be discussed in detail.
doi:10.3332/ecancer.2010.198
PMCID: PMC3234021  PMID: 22276043
6.  Getting the word out: neural correlates of enthusiastic message propagation 
What happens in the mind of a person who first hears a potentially exciting idea? We examined the neural precursors of spreading ideas with enthusiasm, and dissected enthusiasm into component processes that can be identified through automated linguistic analysis, gestalt human ratings of combined linguistic and non-verbal cues, and points of convergence/divergence between the two. We combined tools from natural language processing (NLP) with data gathered using fMRI to link the neurocognitive mechanisms that are set in motion during initial exposure to ideas with the subsequent behaviors of these message communicators outside of the scanner. Participants' neural activity was recorded as they reviewed ideas for potential television show pilots. Participants' language from video-taped interviews collected post-scan was transcribed and given to an automated linguistic sentiment analysis (SA) classifier, which returned ratings for evaluative language (evaluative vs. descriptive) and valence (positive vs. negative). Separately, human coders rated the enthusiasm with which participants transmitted each idea. More positive sentiment ratings by the automated classifier were associated with activation in neural regions including the medial prefrontal cortex (MPFC), precuneus/posterior cingulate cortex (PC/PCC), and medial temporal lobe (MTL). More evaluative, positive descriptions were associated exclusively with neural activity in the temporal-parietal junction (TPJ). Finally, human ratings indicative of more enthusiastic sentiment were associated with activation across these regions (MPFC, PC/PCC, DMPFC, TPJ, and MTL) as well as in the ventral striatum (VS), inferior parietal lobule and premotor cortex. Taken together, these data demonstrate novel links between neural activity during initial idea encoding and the enthusiasm with which the ideas are subsequently delivered. This research lays the groundwork for using machine learning and neuroimaging data to study word-of-mouth communication and the spread of ideas in both traditional and new media environments.
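As a rough illustration of automated valence scoring of transcribed speech (not the study's actual SA classifier, whose features, training and evaluative-vs-descriptive dimension are not described here), the sketch below scores invented transcript snippets with NLTK's off-the-shelf VADER analyzer.

```python
# Hedged sketch: approximate the valence (positive vs. negative) dimension of
# sentiment analysis on transcribed interview statements using VADER.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

transcripts = [
    "I loved this pilot, the premise is fresh and really exciting.",
    "The episode describes a detective who moves to a small coastal town.",
]

sia = SentimentIntensityAnalyzer()
for text in transcripts:
    scores = sia.polarity_scores(text)  # keys: neg, neu, pos, compound
    print(f"{scores['compound']:+.2f}  {text}")
```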
doi:10.3389/fnhum.2012.00313
PMCID: PMC3506032  PMID: 23189049
fMRI; sentiment analysis; natural language processing; information diffusion; word-of-mouth
7.  The Chilling Effect: How Do Researchers React to Controversy? 
PLoS Medicine  2008;5(11):e222.
Background
Can political controversy have a “chilling effect” on the production of new science? This is a timely concern, given how often American politicians are accused of undermining science for political purposes. Yet little is known about how scientists react to these kinds of controversies.
Methods and Findings
Drawing on interview (n = 30) and survey data (n = 82), this study examines the reactions of scientists whose National Institutes of Health (NIH)-funded grants were implicated in a highly publicized political controversy. Critics charged that these grants were “a waste of taxpayer money.” The NIH defended each grant and no funding was rescinded. Nevertheless, this study finds that many of the scientists whose grants were criticized now engage in self-censorship. About half of the sample said that they now remove potentially controversial words from their grant and a quarter reported eliminating entire topics from their research agendas. Four researchers reportedly chose to move into more secure positions entirely, either outside academia or in jobs that guaranteed salaries. About 10% of the group reported that this controversy strengthened their commitment to complete their research and disseminate it widely.
Conclusions
These findings provide evidence that political controversies can shape what scientists choose to study. Debates about the politics of science usually focus on the direct suppression, distortion, and manipulation of scientific results. This study suggests that scholars must also examine how scientists may self-censor in response to political events.
Drawing on interview and survey data, Joanna Kempner's study finds that political controversies shape what many scientists choose not to study.
Editors' Summary
Background.
Scientific research is an expensive business and, inevitably, the organizations that fund this research—governments, charities, and industry—play an important role in determining the directions that this research takes. Funding bodies can have both positive and negative effects on the acquisition of scientific knowledge. They can pump money into topical areas such as the human genome project. Alternatively, by withholding funding, they can discourage some types of research. So, for example, US federal funds cannot be used to support many aspects of human stem cell research. “Self-censoring” by scientists can also have a negative effect on scientific progress. That is, some scientists may decide to avoid areas of research in which there are many regulatory requirements, political pressure, or in which there is substantial pressure from advocacy groups. A good example of this last type of self-censoring is the withdrawal of many scientists from research that involves certain animal models, like primates, because of animal rights activists.
Why Was This Study Done?
Some people think that political controversy might also encourage scientists to avoid some areas of scientific inquiry, but no studies have formally investigated this possibility. Could political arguments about the value of certain types of research influence the questions that scientists pursue? An argument of this sort occurred in the US in 2003 when Patrick Toomey, who was then a Republican Congressional Representative, argued that National Institutes of Health (NIH) grants supporting research into certain aspects of sexual behavior were “much less worthy of taxpayer funding” than research on “devastating diseases,” and proposed an amendment to the 2004 NIH appropriations bill (which regulates the research funded by NIH). The Amendment was rejected, but more than 200 NIH-funded grants, most of which examined behaviors that affect the spread of HIV/AIDS, were internally reviewed later that year; NIH defended each grant, so none were curtailed. In this study, Joanna Kempner investigates how the scientists whose US federal grants were targeted in this clash between politics and science responded to the political controversy.
What Did the Researchers Do and Find?
Kempner interviewed 30 of the 162 principal investigators (PIs) whose grants were reviewed. She asked them to describe their research, the grants that were reviewed, and their experience with NIH before, during, and after the controversy. She also asked them whether this experience had changed their research practice. She then used the information from these interviews to design a survey that she sent to all the PIs whose grants had been reviewed; 82 responded. About half of the scientists interviewed and/or surveyed reported that they now remove “red flag” words (for example, “AIDS” and “homosexual”) from the titles and abstracts of their grant applications. About one-fourth of the respondents no longer included controversial topics (for example, “abortion” and “emergency contraception”) in their research agendas, and four researchers had made major career changes as a result of the controversy. Finally, about 10% of respondents said that their experience had strengthened their commitment to see their research completed and its results published although even many of these scientists also engaged in some self-censorship.
What Do These Findings Mean?
These findings show that, even though no funding was withdrawn, self-censoring is now common among the scientists whose grants were targeted during this particular political controversy. Because this study included researchers in only one area of health research, its findings may not be generalizable to other areas of research. Furthermore, because only half of the PIs involved in the controversy responded to the survey, these findings may be affected by selection bias. That is, the scientists most anxious about the effects of political controversy on their research funding (and thus more likely to engage in self-censorship) may not have responded. Nevertheless, these findings suggest that the political environment might have a powerful effect on self-censorship by scientists and might dissuade some scientists from embarking on research projects that they would otherwise have pursued. Further research into what Kempner calls the “chilling effect” of political controversy on scientific research is now needed to ensure that a healthy balance can be struck between political involvement in scientific decision making and scientific progress.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050222.
The Consortium of Social Science Associations, an advocacy organization that provides a bridge between the academic research community and Washington policymakers, has more information about the political controversy initiated by Patrick Toomey
Some of Kempner's previous research on self-censorship by scientists is described in a 2005 National Geographic news article
doi:10.1371/journal.pmed.0050222
PMCID: PMC2586361  PMID: 19018657
8.  Defining the next generation journal: The NLM–Elsevier interactive publications experiment* 
Information services & use  2010;30(1-2):17-30.
Objective
A unique collaborative project to identify interactive enhancements to conventional-print journal articles, and to evaluate their contribution to readers’ learning and satisfaction.
Hypothesis
It was hypothesized that (a) the enhanced article would yield more knowledge acquisition than the original article; (b) the interactivity aspects of the enhanced article would measurably contribute to the acquisition of knowledge; and (c) the enhancements to the original article would increase reader acceptance.
Methods
Fifteen Student National Medical Association (SNMA) medical students, assumed to have a greater generational familiarity and comfort level with interactive electronic media, reviewed 12 articles published in three Elsevier clinical and basic science journals. They used the SNMA’s asynchronous online discussion forum over a four-month period to suggest desired enhancements to improve learning. “Prognostic Factors in Stage T1 Bladder Cancer”, published in the journal Urology, was selected by the investigators as presenting the best opportunity to incorporate many of the students’ suggested interactive and presentational enhancements in the limited timeframe available prior to the established test date. Educational, statistical, and medical consultants assisted in designing a test protocol in which 51 second- to fourth-year medical students were randomly assigned to experimental and control conditions and were administered either the original or the enhanced interactive version of the article on individual computer workstations. Test subjects consisted of 23 participants in the control group (8 males, 15 females) and 28 participants in the experimental group (9 males, 19 females). All subjects completed pre- and post-test instruments that measured their knowledge gain on 30 true-false and multiple-choice questions, along with 7 Likert-type questions measuring acceptance of the articles’ format. Time to completion was recorded, with the experimental group taking 22 min on average compared with 18 min for the controls; pre- and post-test times were 6 and 7 min, respectively. Statistical comparisons were based on change scores using either Student's t-test or two-way analysis of variance or covariance. Significance was set at α = 0.05 or better.
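For readers unfamiliar with change-score comparisons, the sketch below shows the general shape of such an analysis with an independent-samples t-test at α = 0.05; the scores and group sizes are synthetic placeholders, not the study's data, and the study's two-way analysis of variance or covariance is not reproduced here.

```python
# Illustrative change-score comparison between control and experimental groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Pre/post knowledge scores out of 30 (made-up values).
pre_ctrl, post_ctrl = rng.integers(10, 20, 23), rng.integers(12, 24, 23)
pre_exp, post_exp = rng.integers(10, 20, 28), rng.integers(14, 27, 28)

change_ctrl = post_ctrl - pre_ctrl
change_exp = post_exp - pre_exp

t, p = stats.ttest_ind(change_exp, change_ctrl)
print(f"t = {t:.2f}, p = {p:.3f}, significant at alpha 0.05: {p < 0.05}")
```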
Results
On the dependent measure of knowledge acquisition, there was no difference overall on the 30 questions, but the learning gain was statistically significant for the subset of 10 questions that measured gain on content accessible through the user-invoked interactive features of the enhanced article. Further analyses revealed significant interactions by student year and gender. Second-year students (11 in the control group, 8 in the experimental group) were the best performers in terms of knowledge acquisition from both articles. The female medical students received a larger learning gain from the journal enhancements and interactivity components than their male counterparts. Acceptance was also greater in the experimental group, who rated the experience more favorably than the controls.
Conclusions
Failure to consider human factors such as gender and learning style may obscure underlying differences and their impact on the interactive aspects of scientific publications. Preliminary findings suggest the need for further study to include a heavier focus on interactivity apart from presentational enhancements; a more rigorous treatment of time as a specific variable; and an expanded experimental design that evaluates acquisition, understanding, integration and acceptance as dependent measures.
doi:10.3233/ISU-2010-0608
PMCID: PMC3001620  PMID: 21165152
Electronic publishing; multimedia; graphics; computer; learning; information theory; medical students
9.  Design of a flexible component gathering algorithm for converting cell-based models to graph representations for use in evolutionary search 
BMC Bioinformatics  2014;15:178.
Background
The ability of science to produce experimental data has outpaced the ability to effectively visualize and integrate the data into a conceptual framework that can further higher order understanding. Multidimensional and shape-based observational data of regenerative biology presents a particularly daunting challenge in this regard. Large amounts of data are available in regenerative biology, but little progress has been made in understanding how organisms such as planaria robustly achieve and maintain body form. An example of this kind of data can be found in a new repository (PlanformDB) that encodes descriptions of planaria experiments and morphological outcomes using a graph formalism.
Results
We are developing a model discovery framework that uses a cell-based modeling platform combined with evolutionary search to automatically search for and identify plausible mechanisms for the biological behavior described in PlanformDB. To automate the evolutionary search we developed a way to compare the output of the modeling platform to the morphological descriptions stored in PlanformDB. We used a flexible connected component algorithm to create a graph representation of the virtual worm from the robust, cell-based simulation data. These graphs can then be validated and compared with target data from PlanformDB using the well-known graph-edit distance calculation, which provides a quantitative metric of similarity between graphs. The graph edit distance calculation was integrated into a fitness function that was able to guide automated searches for unbiased models of planarian regeneration. We present a cell-based model of planarian that can regenerate anatomical regions following bisection of the organism, and show that the automated model discovery framework is capable of searching for and finding models of planarian regeneration that match experimental data stored in PlanformDB.
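As a hedged sketch of the general approach described above (not the authors' implementation; the toy grid, cell-type labels, and adjacency rule are assumptions made for illustration), a region-adjacency graph can be built from a labelled cell grid with a connected-component pass and then compared to a target graph via graph edit distance:

```python
# Illustrative sketch: connected-component gathering turns a cell-type grid
# into a graph, and graph edit distance scores it against a target graph.
import numpy as np
import networkx as nx
from scipy import ndimage

def grid_to_graph(cell_types: np.ndarray) -> nx.Graph:
    """Nodes are connected regions of one cell type; edges join touching regions."""
    graph = nx.Graph()
    components = {}  # (cell_type, component_id) -> boolean mask
    for cell_type in np.unique(cell_types):
        if cell_type == 0:                      # 0 = empty medium, skip
            continue
        labelled, n = ndimage.label(cell_types == cell_type)
        for comp_id in range(1, n + 1):
            node = (int(cell_type), comp_id)
            graph.add_node(node, cell_type=int(cell_type))
            components[node] = labelled == comp_id
    # Two regions are adjacent if one region, dilated by a voxel, overlaps the other.
    for a in components:
        dilated = ndimage.binary_dilation(components[a])
        for b in components:
            if a < b and (dilated & components[b]).any():
                graph.add_edge(a, b)
    return graph

# Toy "worm": a head region (label 1) attached to a trunk region (label 2).
target = grid_to_graph(np.array([[1, 1, 2, 2, 2],
                                 [1, 1, 2, 2, 2]]))
candidate = grid_to_graph(np.array([[1, 1, 0, 2, 2]]))   # fragmented model output
distance = nx.graph_edit_distance(
    target, candidate,
    node_match=lambda u, v: u["cell_type"] == v["cell_type"])
print(distance)   # smaller = better morphological match
```

In a fitness function of the kind described above, the returned distance can be minimised directly, so candidate models whose simulated morphology better matches the target description score higher.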
Conclusion
The work presented here, including our algorithm for converting cell-based models into graphs for comparison with data stored in an external data repository, has made feasible the automated development, training, and validation of computational models using morphology-based data. This work is part of an ongoing project to automate the search process, which will greatly expand our ability to identify, consider, and test biological mechanisms in the field of regenerative biology.
doi:10.1186/1471-2105-15-178
PMCID: PMC4083366  PMID: 24917489
10.  QuantPrime – a flexible tool for reliable high-throughput primer design for quantitative PCR 
BMC Bioinformatics  2008;9:465.
Background
Medium- to large-scale expression profiling using quantitative polymerase chain reaction (qPCR) assays is becoming increasingly important in genomics research. A major bottleneck in experiment preparation is the design of specific primer pairs, where researchers have to make several informed choices, often outside their area of expertise. Using currently available primer design tools, several interactive decisions have to be made, resulting in lengthy design processes of varying assay quality.
Results
Here we present QuantPrime, an intuitive and user-friendly, fully automated tool for primer pair design in small- to large-scale qPCR analyses. QuantPrime can be used online through the internet or on a local computer after download; it offers design and specificity checking with highly customizable parameters and is ready to use with many publicly available transcriptomes of important higher eukaryotic model organisms and plant crops (currently 295 species in total), while benefiting from exon-intron border and alternative splice variant information in available genome annotations. Experimental results with the model plant Arabidopsis thaliana, the crop Hordeum vulgare and the model green alga Chlamydomonas reinhardtii show success rates of designed primer pairs exceeding 96%.
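As a rough illustration of the kind of automated filtering such a designer performs (this is not QuantPrime's code; the primer sequences and thresholds are invented for the example), a candidate primer can be screened for melting temperature and GC content with Biopython:

```python
# Illustrative only -- not QuantPrime's implementation.
from Bio.SeqUtils import MeltingTemp as mt

def passes_basic_checks(primer: str,
                        tm_range=(58.0, 62.0),
                        gc_range=(0.40, 0.60)) -> bool:
    """Screen a single primer for melting temperature and GC content."""
    tm = mt.Tm_NN(primer)                                     # nearest-neighbour Tm
    gc = (primer.count("G") + primer.count("C")) / len(primer)
    return tm_range[0] <= tm <= tm_range[1] and gc_range[0] <= gc <= gc_range[1]

forward, reverse = "ATGGCTTCCTCCATGCTCTC", "TCAGGCTTGAGCACGAGAAC"   # made-up sequences
print(passes_basic_checks(forward), passes_basic_checks(reverse))
```

A full pipeline such as QuantPrime additionally checks transcriptome-wide specificity and exon-intron structure, which is precisely what makes end-to-end automation valuable.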
Conclusion
QuantPrime constitutes a flexible, fully automated web application for reliable primer design for use in larger qPCR experiments, as proven by experimental data. The flexible framework is also open for simple use in other quantification applications, such as hydrolyzation probe design for qPCR and oligonucleotide probe design for quantitative in situ hybridization. Future suggestions made by users can be easily implemented, thus allowing QuantPrime to be developed into a broad-range platform for the design of RNA expression assays.
doi:10.1186/1471-2105-9-465
PMCID: PMC2612009  PMID: 18976492
11.  A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging 
EJNMMI Research  2013;3:55.
Background
Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases.
Methods
We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists’ delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework.
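A minimal sketch of the region-growing idea with a morphological clean-up step (the seed, tolerance, and structuring choices below are illustrative assumptions, not the parameters of the pipeline described here):

```python
# Hedged sketch of intensity-based region growing followed by morphological closing.
import numpy as np
from collections import deque
from scipy import ndimage

def region_grow(volume: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a region of voxels whose intensity stays within `tol` of the seed value."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_value = float(volume[seed])
    queue = deque([seed])
    while queue:
        voxel = queue.popleft()
        if mask[voxel] or abs(float(volume[voxel]) - seed_value) > tol:
            continue
        mask[voxel] = True
        for axis in range(volume.ndim):            # face-connected neighbours
            for step in (-1, 1):
                nb = list(voxel)
                nb[axis] += step
                if 0 <= nb[axis] < volume.shape[axis]:
                    queue.append(tuple(nb))
    # Morphological closing fills small holes before the mask is used downstream.
    return ndimage.binary_closing(mask, iterations=2)

# Toy 2D "slice": a dark lung-like region (value 100) inside brighter tissue (400).
slice_ = np.full((50, 50), 400.0)
slice_[10:40, 10:40] = 100.0
lung_mask = region_grow(slice_, seed=(25, 25), tol=50.0)
print(lung_mask.sum(), "pixels segmented")
```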
Results
Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape dissimilarity measure. Significant correlations between the estimated lesion volumes and the ground truths were obtained in both CT and PET images (R² = 0.8922, p < 0.01 and R² = 0.8664, p < 0.01, respectively). The segmentation accuracy (DSC, %) was 93.4 ± 4.5% for normal lung CT scans and 86.0 ± 7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from the quantitative analysis of each animal, we also showed qualitatively how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average registration errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data, respectively (all within the resolution limits). Quantitative results obtained from the proposed methodology were visually related to the progression and severity of the pulmonary infections, as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and multi-focal in nature, and we observed similar patterns in the CT images. Consolidation and ground-glass opacity were the main abnormal imaging patterns and appeared consistently in all CT images. We also found that the gross and metabolic lesion volume percentages followed the same trend as the SUV-based evaluation in the longitudinal analysis.
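For readers unfamiliar with the two metrics quoted above, a minimal sketch of how they can be computed from a predicted and a ground-truth binary mask (the masks here are synthetic toy data, not the study's images):

```python
# Dice similarity coefficient and symmetric Hausdorff distance for binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def hausdorff(pred: np.ndarray, truth: np.ndarray) -> float:
    a = np.argwhere(pred)        # voxel coordinates inside each mask
    b = np.argwhere(truth)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

pred  = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[12:42, 12:42] = True
print(f"DSC = {dice(pred, truth):.3f}, Hausdorff = {hausdorff(pred, truth):.1f} voxels")
```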
Conclusions
We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computed-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate as compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
doi:10.1186/2191-219X-3-55
PMCID: PMC3734217  PMID: 23879987
Quantitative analysis; Pulmonary infections; Small animal models; PET-CT; Image segmentation; H1N1; Tuberculosis
12.  Toward an interactive article: integrating journals and biological databases 
BMC Bioinformatics  2011;12:175.
Background
Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. Projects focused on bridging the gap between journals and databases have been on the rise over the last five years and have resulted in the development of automated tools that can recognize entities within a document and link those entities to a relevant database. Unfortunately, automated tools cannot resolve ambiguities that arise from one term being used to signify entities that are quite distinct from one another. Instead, resolving these ambiguities requires some manual oversight. Finding the right balance between the speed and portability of automation and the accuracy and flexibility of manual effort is a crucial goal to making text markup a successful venture.
Results
We have established a journal article mark-up pipeline that links GENETICS journal articles and the model organism database (MOD) WormBase. This pipeline uses a lexicon built with entities from the database as a first step. The entity markup pipeline results in links from over nine classes of objects including genes, proteins, alleles, phenotypes and anatomical terms. New entities and ambiguities are discovered and resolved by a database curator through a manual quality control (QC) step, along with help from authors via a web form that is provided to them by the journal. New entities discovered through this pipeline are immediately sent to an appropriate curator at the database. Ambiguous entities that do not automatically resolve to one link are resolved by hand ensuring an accurate link. This pipeline has been extended to other databases, namely Saccharomyces Genome Database (SGD) and FlyBase, and has been implemented in marking up a paper with links to multiple databases.
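As a hedged, simplified illustration of how a lexicon-driven first pass plus a manual review queue can work (the gene names, identifiers, and URL pattern below are invented for the example and are not the production pipeline):

```python
# Toy lexicon-based entity markup: unambiguous hits become hyperlinks,
# ambiguous terms are queued for a curator's quality-control step.
import re

# Hypothetical lexicon entries: surface form -> candidate database IDs.
lexicon = {
    "daf-16": ["WBGene00000001"],
    "unc-119": ["WBGene00000002"],
    "clk-1": ["WBGene00000003", "WBGene00000004"],   # ambiguous on purpose
}
URL = "https://wormbase.org/db/get?name={};class=Gene"  # illustrative link format

def mark_up(text: str) -> tuple[str, set]:
    """Link unambiguous lexicon hits; report ambiguous terms for manual QC."""
    needs_review = set()
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, lexicon)) + r")\b")
    def link(match):
        name = match.group(1)
        ids = lexicon[name]
        if len(ids) != 1:
            needs_review.add(name)        # leave ambiguous mentions untouched
            return name
        return f'<a href="{URL.format(ids[0])}">{name}</a>'
    return pattern.sub(link, text), needs_review

html, review = mark_up("daf-16 and clk-1 both modify the unc-119 phenotype.")
print(html)
print("for curator review:", review)
```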
Conclusions
Our semi-automated pipeline hyperlinks articles published in GENETICS to model organism databases such as WormBase. Our pipeline results in interactive articles that are data rich with high accuracy. The use of a manual quality control step sets this pipeline apart from other hyperlinking tools and results in benefits to authors, journals, readers and databases.
doi:10.1186/1471-2105-12-175
PMCID: PMC3213741  PMID: 21595960
13.  The Role of the Toxicologic Pathologist in the Post-Genomic Era# 
Journal of Toxicologic Pathology  2013;26(2):105-110.
An era can be defined as a period in time identified by distinctive character, events, or practices. We are now in the genomic era. The pre-genomic era: There was a pre-genomic era. It started many years ago with novel and seminal animal experiments, primarily directed at studying cancer. It is marked by the development of the two-year rodent cancer bioassay and the ultimate realization that alternative approaches and short-term animal models were needed to replace this resource-intensive and time-consuming method for predicting human health risk. Many alternative approaches and short-term animal models were proposed and tried but, to date, none have completely replaced our dependence upon the two-year rodent bioassay. However, the alternative approaches and models themselves have made tangible contributions to basic research, clinical medicine and to our understanding of cancer and they remain useful tools to address hypothesis-driven research questions. The pre-genomic era was a time when toxicologic pathologists played a major role in drug development, evaluating the cancer bioassay and the associated dose-setting toxicity studies, and exploring the utility of proposed alternative animal models. It was a time when there was a shortage of qualified toxicologic pathologists. The genomic era: We are in the genomic era. It is a time when the genetic underpinnings of normal biological and pathologic processes are being discovered and documented. It is a time for sequencing entire genomes and deliberately silencing relevant segments of the mouse genome to see what each segment controls and if that silencing leads to increased susceptibility to disease. What remains to be charted in this genomic era is the complex interaction of genes, gene segments, post-translational modifications of encoded proteins, and environmental factors that affect genomic expression. In this current genomic era, the toxicologic pathologist has had to make room for a growing population of molecular biologists. In this present era newly emerging DVM and MD scientists enter the work arena with a PhD in pathology often based on some aspect of molecular biology or molecular pathology research. In molecular biology, the almost daily technological advances require one’s complete dedication to remain at the cutting edge of the science. Similarly, the practice of toxicologic pathology, like other morphological disciplines, is based largely on experience and requires dedicated daily examination of pathology material to maintain a well-trained eye capable of distilling specific information from stained tissue slides - a dedicated effort that cannot be well done as an intermezzo between other tasks. It is a rare individual that has true expertise in both molecular biology and pathology. In this genomic era, the newly emerging DVM-PhD or MD-PhD pathologist enters a marketplace without many job opportunities in contrast to the pre-genomic era. Many face an identity crisis, needing to decide whether to become a competent pathologist or, alternatively, a competent molecular biologist. At the same time, more PhD molecular biologists without training in pathology are members of the research teams working in drug development and toxicology. How best can the toxicologic pathologist interact in the contemporary team approach in drug development, toxicology research and safety testing?
Based on their biomedical training, toxicologic pathologists are in an ideal position to link data from the emerging technologies with their knowledge of pathobiology and toxicology. To enable this linkage and obtain the synergy it provides, the bench-level, slide-reading expert pathologist will need to have some basic understanding and appreciation of molecular biology methods and tools. On the other hand, it is not likely that the typical molecular biologist could competently evaluate and diagnose stained tissue slides from a toxicology study or a cancer bioassay. The post-genomic era: The post-genomic era will likely arrive around 2050, at which time entire genomes from multiple species will exist in massive databases; data from thousands of robotic high-throughput chemical screenings will exist in other databases; and genetic toxicity and chemical structure-activity relationships will reside in yet other databases. All databases will be linked, and relevant information will be extracted and analyzed by appropriate algorithms following input of the latest molecular, submolecular, genetic, experimental, pathology and clinical data. Knowledge gained will permit the genetic components of many diseases to be amenable to therapeutic prevention and/or intervention. Much like computerized algorithms are currently used to forecast weather or to predict political elections, sophisticated computerized algorithms based largely on scientific data mining will categorize new drugs and chemicals relative to their health benefits versus their health risks for defined human populations and subpopulations. However, this form of a virtual toxicity study or cancer bioassay will only identify probabilities of adverse consequences from interaction of particular environmental and/or chemical/drug exposure(s) with specific genomic variables. Proof in many situations will require confirmation in intact in vivo mammalian animal models. The toxicologic pathologist in the post-genomic era will be the best-suited scientist to confirm the data mining and its probability predictions for safety or adverse consequences with the actual tissue morphological features in test species that define specific test agent pathobiology and human health risk.
doi:10.1293/tox.26.105
PMCID: PMC3695332  PMID: 23914052
genomic era; history of toxicologic pathology; molecular biology
14.  The philosophy of scientific experimentation: a review 
Practicing and studying automated experimentation may benefit from philosophical reflection on experimental science in general. This paper reviews the relevant literature and discusses central issues in the philosophy of scientific experimentation. The first two sections present brief accounts of the rise of experimental science and of its philosophical study. The next sections discuss three central issues of scientific experimentation: the scientific and philosophical significance of intervention and production, the relationship between experimental science and technology, and the interactions between experimental and theoretical work. The concluding section identifies three issues for further research: the role of computing and, more specifically, automation in experimental research; the nature of experimentation in the social and human sciences; and the significance of normative, including ethical, problems in experimental science.
doi:10.1186/1759-4499-1-2
PMCID: PMC2809324  PMID: 20098589
15.  Minimally invasive surgical procedures for the treatment of lumbar disc herniation 
Introduction
In up to 30% of patients undergoing lumbar disc surgery for herniated or protruded discs, outcomes are judged unfavourable. Over the last decades, this problem has stimulated the development of a number of minimally-invasive operative procedures. The aim is to relieve pressure from compromised nerve roots by mechanically removing, dissolving or evaporating disc material while leaving bony structures and surrounding tissues as intact as possible. In Germany, there is hardly any utilisation data for these new procedures – data files from the statutory health insurances demonstrate that about 5% of all lumbar disc surgeries are performed using minimally-invasive techniques. Their real proportion is thought to be much higher because many procedures are offered by private hospitals and surgeries and are paid by private health insurers or patients themselves. So far, no comprehensive assessment comparing the efficacy, safety, effectiveness and cost-effectiveness of minimally-invasive lumbar disc surgery to standard procedures (microdiscectomy, open discectomy), which could serve as a basis for coverage decisions, has been published in Germany.
Objective
Against this background, the aims of the following assessment are:
To assess, on the basis of the published scientific literature, the safety, efficacy and effectiveness of minimally-invasive lumbar disc surgery compared to standard procedures.
To identify and critically appraise studies comparing the costs and cost-effectiveness of minimally-invasive procedures to those of standard procedures.
If necessary, to identify research and evaluation needs and to point out regulatory needs within the German health care system.
The assessment focusses on procedures that are used in elective lumbar disc surgery as alternative treatment options to microdiscectomy or open discectomy. Chemonucleolysis, percutaneous manual discectomy, automated percutaneous lumbar discectomy, laserdiscectomy and endoscopic procedures accessing the disc by a posterolateral or posterior approach are included.
Methods
In order to assess the safety, efficacy and effectiveness of minimally-invasive procedures as well as their economic implications, systematic reviews of the literature are performed. A comprehensive search strategy is composed to search 23 electronic databases, among them MEDLINE, EMBASE and the Cochrane Library. The methodological quality of systematic reviews, HTA reports and primary research is assessed using checklists of the German Scientific Working Group for Health Technology Assessment. The quality and transparency of cost analyses are documented using the quality and transparency catalogues of the working group. Study results are summarised in a qualitative manner. Due to the limited number and the low methodological quality of the studies, it is not possible to conduct meta-analyses. In addition to the results of controlled trials, results of recent case series are introduced and discussed.
Results
The evidence-base to assess safety, efficacy and effectiveness of minimally-invasive lumbar disc surgery procedures is rather limited:
Percutaneous manual discectomy: Six case series (four after 1998)
Automated percutaneous lumbar discectomy: Two RCT (one discontinued), twelve case series (one after 1998)
Chemonucleolysis: Five RCT, five non-randomised controlled trials, eleven case series
Percutaneous laserdiscectomy: One non-randomised controlled trial, 13 case series (eight after 1998)
Endoscopic procedures: Three RCT, 21 case series (17 after 1998)
Two economic analyses each were retrieved for chemonucleolysis and automated percutaneous discectomy, as well as one cost-minimisation analysis comparing the costs of an endoscopic procedure to those of open discectomy.
Among all minimally-invasive procedures, chemonucleolysis is the only one whose efficacy may be judged on the basis of results from high-quality randomised controlled trials (RCT). Study results suggest that the procedure may be (cost-)effectively used as an intermediate therapeutic option between conservative and operative management of small lumbar disc herniations or protrusions causing sciatica. Two RCT comparing transforaminal endoscopic procedures with microdiscectomy in patients with sciatica and small non-sequestered disc herniations show comparable short- and medium-term overall success rates. Concerning speed of recovery and return to work, a trend towards more favourable results for the endoscopic procedures is noted. It is doubtful, though, whether these results from studies that are eleven and five years old are still valid for the more advanced procedures used today. The only RCT comparing the results of automated percutaneous lumbar discectomy to those of microdiscectomy showed clearly superior results for microdiscectomy. Furthermore, the success rate of automated percutaneous lumbar discectomy reported in the RCT (29%) differs markedly from the success rates reported in case series (between 56% and 92%).
The literature search retrieves no controlled trials assessing the efficacy and/or effectiveness of laserdiscectomy, percutaneous manual discectomy or endoscopic procedures using a posterior approach in comparison to the standard procedures. Results from recent case series permit no assessment of efficacy, especially not in comparison to standard procedures. Due to highly selected patients, modifications of operative procedures, highly specialised surgical units and poorly standardised outcome assessment, the results of case series are highly variable and their generalisability is low.
The results of the five economic analyses are, due to conceptual and methodological problems, of no value for decision-making in the context of the German health care system.
Discussion
Aside from low methodological study quality, three conceptual problems complicate the interpretation of results.
The first is the continuous further development of technologies, which leads to a diversity of procedures in use and prohibits generalisation of study results. However, diversity is noted not only for minimally-invasive procedures but also for the standard techniques against which the new developments are to be compared. The second problem is the heterogeneity of study populations. For most studies one common inclusion criterion was "persisting sciatica after a course of conservative treatment of variable duration". Differences among study populations are noted concerning results of imaging studies. Even within each group of minimally-invasive procedures, studies define their own inclusion and exclusion criteria, which differ concerning the degree of dislocation and sequestration of disc material. The third problem is the non-standardised assessment of outcomes, which is performed postoperatively after variable periods of time. Most studies report results in a dichotomous way as success or failure, while the classification of a result is performed using a variety of different assessment instruments or procedures. Very often the global subjective judgement of results by patients or surgeons is reported. There is no scientific discussion of whether these judgements are generalisable or comparable, especially among studies conducted under differing socio-cultural conditions. Taking into account the weak evidence base for the efficacy and effectiveness of minimally-invasive procedures, it is not surprising that so far there are no dependable economic analyses.
Conclusions
Conclusions that can be drawn from the results of the present assessment refer in detail to the specified minimally-invasive procedures of lumbar disc surgery but they may also be considered exemplary for other fields where optimisation of results is attempted by technological development and widening of indications (e.g. total hip replacement).
Compared to standard technologies (open discectomy, microdiscectomy), and with the exception of chemonucleolysis, the developmental status of all other minimally-invasive procedures assessed must be termed experimental. To date there is no dependable evidence base to recommend their use in routine clinical practice. To create such an evidence base, further research in two directions is needed: a) studies that include adequate patient populations, use realistic controls (e.g. standard operative procedures or continued conservative care) and use standardised measurements of meaningful outcomes after adequate periods of time; and b) studies that are able to report the effectiveness of the procedures under everyday practice conditions and, furthermore, have the potential to detect rare adverse effects. In Sweden this type of data is yielded by national quality registries; their data are used for quality improvement measures on the one hand and allow comprehensive scientific evaluations on the other. Since the year 2000 a continuous rise in the utilisation of minimally-invasive lumbar disc surgery has been observed among statutory health insurers. Examples from other areas of innovative surgical technology (e.g. robot-assisted total hip replacement) indicate that the rise will probably continue - especially because there are no legal barriers to hinder the introduction of innovative treatments into routine hospital care. Upon request by payers or providers, the "Gemeinsamer Bundesausschuss" may assess a treatment's benefit, necessity and cost-effectiveness as a prerequisite for coverage by the statutory health insurance. In the case of minimally-invasive disc surgery it would be advisable to examine the legal framework for covering procedures only if they are provided under evaluation conditions. While in Germany coverage under evaluation conditions is established practice in ambulatory health care only ("Modellvorhaben"), examples from other European countries (Great Britain, Switzerland) demonstrate that it is also feasible for hospital-based interventions. In order to assure patient protection and at the same time not hinder the further development of new and promising technologies, provision under evaluation conditions could also be realised in the private health care market - although in this sector coverage is not by law linked to the benefit, necessity and cost-effectiveness of an intervention.
PMCID: PMC3011322  PMID: 21289928
16.  A survey of urban climate change experiments in 100 cities 
Global Environmental Change  2013;23(1):92-102.
Highlights
► A database analysis reveals urban climate change experimentation as a global trend.
► Experimentation is a recent trend not confined to specific world regions or cities.
► Although experimentation is heterogeneous, energy experiments predominate.
► Multiple actors, often through partnership, intervene in urban climate change governance.
► A characteristic trend of experimentation led by private actors emerges in Asia.
Cities are key sites where climate change is being addressed. Previous research has largely overlooked the multiplicity of climate change responses emerging outside formal contexts of decision-making and led by actors other than municipal governments. Moreover, existing research has largely focused on case studies of climate change mitigation in developed economies. The objective of this paper is to uncover the heterogeneous mix of actors, settings, governance arrangements and technologies involved in the governance of climate change in cities in different parts of the world.
The paper focuses on urban climate change governance as a process of experimentation. Climate change experiments are presented here as interventions to try out new ideas and methods in the context of future uncertainties. They serve to understand how interventions work in practice, in new contexts where they are thought of as innovative. To study experimentation, the paper presents evidence from the analysis of a database of 627 urban climate change experiments in a sample of 100 global cities.
The analysis suggests that, since 2005, experimentation is a feature of urban responses to climate change across different world regions and multiple sectors. Although experimentation does not appear to be related to particular kinds of urban economic and social conditions, some of its core features are visible. For example, experimentation tends to focus on energy. Also, both social and technical forms of experimentation are visible, but technical experimentation is more common in urban infrastructure systems. While municipal governments have a critical role in climate change experimentation, they often act alongside other actors and in a variety of forms of partnership. These findings point at experimentation as a key tool to open up new political spaces for governing climate change in the city.
doi:10.1016/j.gloenvcha.2012.07.005
PMCID: PMC3688314  PMID: 23805029
Climate change experiments; Mitigation; Adaptation; Governance; Cities; Infrastructure
17.  An Epidemiological Network Model for Disease Outbreak Detection 
PLoS Medicine  2007;4(6):e210.
Background
Advanced disease-surveillance systems have been deployed worldwide to provide early detection of infectious disease outbreaks and bioterrorist attacks. New methods that improve the overall detection capabilities of these systems can have a broad practical impact. Furthermore, most current generation surveillance systems are vulnerable to dramatic and unpredictable shifts in the health-care data that they monitor. These shifts can occur during major public events, such as the Olympics, as a result of population surges and public closures. Shifts can also occur during epidemics and pandemics as a result of quarantines, the worried-well flooding emergency departments or, conversely, the public staying away from hospitals for fear of nosocomial infection. Most surveillance systems are not robust to such shifts in health-care utilization, either because they do not adjust baselines and alert-thresholds to new utilization levels, or because the utilization shifts themselves may trigger an alarm. As a result, public-health crises and major public events threaten to undermine health-surveillance systems at the very times they are needed most.
Methods and Findings
To address this challenge, we introduce a class of epidemiological network models that monitor the relationships among different health-care data streams instead of monitoring the data streams themselves. By extracting the extra information present in the relationships between the data streams, these models have the potential to improve the detection capabilities of a system. Furthermore, the models' relational nature has the potential to increase a system's robustness to unpredictable baseline shifts. We implemented these models and evaluated their effectiveness using historical emergency department data from five hospitals in a single metropolitan area, recorded over a period of 4.5 y by the Automated Epidemiological Geotemporal Integrated Surveillance real-time public health–surveillance system, developed by the Children's Hospital Informatics Program at the Harvard-MIT Division of Health Sciences and Technology on behalf of the Massachusetts Department of Public Health. We performed experiments with semi-synthetic outbreaks of different magnitudes and simulated baseline shifts of different types and magnitudes. The results show that the network models provide better detection of localized outbreaks, and greater robustness to unpredictable shifts than a reference time-series modeling approach.
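A much-simplified sketch of the relational idea (not the AEGIS system itself; the synthetic counts, streams, and threshold below are assumptions for illustration): instead of alarming on a raw visit count, monitor the ratio between two streams, so that a city-wide surge affecting both streams equally stays quiet while a localized outbreak in one stream still fires.

```python
# Toy relational monitor: alarm when today's respiratory/total-visit ratio is an
# outlier relative to its own history.
import numpy as np

def ratio_alarm(respiratory: np.ndarray, total: np.ndarray,
                today_resp: float, today_total: float,
                z_threshold: float = 3.0) -> bool:
    """Alarm if today's ratio deviates strongly from the historical ratio."""
    history = respiratory / total
    z = (today_resp / today_total - history.mean()) / history.std()
    return z > z_threshold

rng = np.random.default_rng(0)
total = rng.poisson(400, 365).astype(float)                  # synthetic daily ED visits
resp = rng.binomial(total.astype(int), 0.12).astype(float)   # synthetic respiratory visits
# A population surge doubles both streams: the ratio-based monitor stays quiet.
print(ratio_alarm(resp, total, today_resp=96, today_total=800))
# A localized respiratory outbreak raises only the numerator: the alarm fires.
print(ratio_alarm(resp, total, today_resp=140, today_total=420))
```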
Conclusions
The integrated network models of epidemiological data streams and their interrelationships have the potential to improve current surveillance efforts, providing better localized outbreak detection under normal circumstances, as well as more robust performance in the face of shifts in health-care utilization during epidemics and major public events.
Most surveillance systems are not robust to shifts in health care utilization. Ben Reis and colleagues developed network models that detected localized outbreaks better and were more robust to unpredictable shifts.
Editors' Summary
Background.
The main task of public-health officials is to promote health in communities around the world. To do this, they need to monitor human health continually, so that any outbreaks (epidemics) of infectious diseases (particularly global epidemics or pandemics) or any bioterrorist attacks can be detected and dealt with quickly. In recent years, advanced disease-surveillance systems have been introduced that analyze data on hospital visits, purchases of drugs, and the use of laboratory tests to look for tell-tale signs of disease outbreaks. These surveillance systems work by comparing current data on the use of health-care resources with historical data or by identifying sudden increases in the use of these resources. So, for example, more doctors asking for tests for salmonella than in the past might presage an outbreak of food poisoning, and a sudden rise in people buying over-the-counter flu remedies might indicate the start of an influenza pandemic.
Why Was This Study Done?
Existing disease-surveillance systems don't always detect disease outbreaks, particularly in situations where there are shifts in the baseline patterns of health-care use. For example, during an epidemic, people might stay away from hospitals because of the fear of becoming infected, whereas after a suspected bioterrorist attack with an infectious agent, hospitals might be flooded with “worried well” (healthy people who think they have been exposed to the agent). Baseline shifts like these might prevent the detection of increased illness caused by the epidemic or the bioterrorist attack. Localized population surges associated with major public events (for example, the Olympics) are also likely to reduce the ability of existing surveillance systems to detect infectious disease outbreaks. In this study, the researchers developed a new class of surveillance systems called “epidemiological network models.” These systems aim to improve the detection of disease outbreaks by monitoring fluctuations in the relationships between information detailing the use of various health-care resources over time (data streams).
What Did the Researchers Do and Find?
The researchers used data collected over a 3-y period from five Boston hospitals on visits for respiratory (breathing) problems and for gastrointestinal (stomach and gut) problems, and on total visits (15 data streams in total), to construct a network model that included all the possible pair-wise comparisons between the data streams. They tested this model by comparing its ability to detect simulated disease outbreaks implanted into data collected over an additional year with that of a reference model based on individual data streams. The network approach, they report, was better at detecting localized outbreaks of respiratory and gastrointestinal disease than the reference approach. To investigate how well the network model dealt with baseline shifts in the use of health-care resources, the researchers then added in a large population surge. The detection performance of the reference model decreased in this test, but the performance of the complete network model and of models that included relationships between only some of the data streams remained stable. Finally, the researchers tested what would happen in a situation where there were large numbers of “worried well.” Again, the network models detected disease outbreaks consistently better than the reference model.
What Do These Findings Mean?
These findings suggest that epidemiological network systems that monitor the relationships between health-care resource-utilization data streams might detect disease outbreaks better than current systems under normal conditions and might be less affected by unpredictable shifts in the baseline data. However, because the tests of the new class of surveillance system reported here used simulated infectious disease outbreaks and baseline shifts, the network models may behave differently in real-life situations or if built using data from other hospitals. Nevertheless, these findings strongly suggest that public-health officials, provided they have sufficient computer power at their disposal, might improve their ability to detect disease outbreaks by using epidemiological network systems alongside their current disease-surveillance systems.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0040210.
Wikipedia pages on public health (note that Wikipedia is a free online encyclopedia that anyone can edit, and is available in several languages)
A brief description from the World Health Organization of public-health surveillance (in English, French, Spanish, Russian, Arabic, and Chinese)
A detailed report from the US Centers for Disease Control and Prevention called “Framework for Evaluating Public Health Surveillance Systems for the Early Detection of Outbreaks”
The International Society for Disease Surveillance Web site
doi:10.1371/journal.pmed.0040210
PMCID: PMC1896205  PMID: 17593895
18.  The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003 
Nucleic Acids Research  2003;31(1):365-370.
The SWISS-PROT protein knowledgebase (http://www.expasy.org/sprot/ and http://www.ebi.ac.uk/swissprot/) connects amino acid sequences with the current knowledge in the Life Sciences. Each protein entry provides an interdisciplinary overview of relevant information by bringing together experimental results, computed features and sometimes even contradictory conclusions. Detailed expertise that goes beyond the scope of SWISS-PROT is made available via direct links to specialised databases. SWISS-PROT provides annotated entries for all species, but concentrates on the annotation of entries from human (the HPI project) and other model organisms to ensure the presence of high quality annotation for representative members of all protein families. Part of the annotation can be transferred to other family members, as is already done for microbes by the High-quality Automated and Manual Annotation of microbial Proteomes (HAMAP) project. Protein families and groups of proteins are regularly reviewed to keep up with current scientific findings. Complementarily, TrEMBL strives to comprise all protein sequences that are not yet represented in SWISS-PROT, by incorporating a perpetually increasing level of mostly automated annotation. Researchers are welcome to contribute their knowledge to the scientific community by submitting relevant findings to SWISS-PROT at swiss-prot@expasy.org.
PMCID: PMC165542  PMID: 12520024
19.  Biocoder: A programming language for standardizing and automating biology protocols 
Background
Published descriptions of biology protocols are often ambiguous and incomplete, making them difficult to replicate in other laboratories. However, there is increasing benefit to formalizing the descriptions of protocols, as laboratory automation systems (such as microfluidic chips) are becoming increasingly capable of executing them. Our goal in this paper is to improve both the reproducibility and automation of biology experiments by using a programming language to express the precise series of steps taken.
Results
We have developed BioCoder, a C++ library that enables biologists to express the exact steps needed to execute a protocol. In addition to being suitable for automation, BioCoder converts the code into a readable, English-language description for use by biologists. We have implemented over 65 protocols in BioCoder; the most complex of these was successfully executed by a biologist in the laboratory using BioCoder as the only reference. We argue that BioCoder exposes and resolves ambiguities in existing protocols, and could provide the software foundations for future automation platforms. BioCoder is freely available for download at http://research.microsoft.com/en-us/um/india/projects/biocoder/.
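BioCoder itself is a C++ library; the following is only a hedged Python sketch of the underlying idea (the class names and protocol steps are invented for illustration, not BioCoder's API): a protocol written as code can be rendered both as machine-readable steps for automation and as plain English for a biologist.

```python
# Toy protocol-as-code: the same object drives automation (p.steps) and
# produces a human-readable rendering (p.as_text()).
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    details: str

class Protocol:
    def __init__(self, name: str):
        self.name, self.steps = name, []
    def add(self, action: str, details: str):
        self.steps.append(Step(action, details))
        return self
    def as_text(self) -> str:
        lines = [f"Protocol: {self.name}"]
        lines += [f"  {i}. {s.action}: {s.details}" for i, s in enumerate(self.steps, 1)]
        return "\n".join(lines)

p = (Protocol("Plasmid miniprep (illustrative)")
     .add("add", "add 250 µl resuspension buffer to the pellet")
     .add("vortex", "vortex until the pellet is fully resuspended")
     .add("incubate", "incubate 5 min at room temperature"))
print(p.as_text())
```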
Conclusions
BioCoder represents the first practical programming system for standardizing and automating biology protocols. Our vision is to change the way that experimental methods are communicated: rather than publishing a written account of the protocols used, researchers will simply publish the code. Our experience suggests that this practice is tractable and offers many benefits. We invite other researchers to leverage BioCoder to improve the precision and completeness of their protocols, and also to adapt and extend BioCoder to new domains.
doi:10.1186/1754-1611-4-13
PMCID: PMC2989930  PMID: 21059251
20.  Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism 
Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, improved training in proper study design and analysis, and the adoption by reviewers and editors of a more constructively critical approach in the assessment of manuscripts for publication.
doi:10.1038/jcbfm.2010.217
PMCID: PMC3070978  PMID: 21157472
ARRIVE; bias; CONSORT; quality; translation; validity
21.  Journal of Ethnobiology and Ethnomedicine – achievements and perspectives 
Last summer we officially launched the Journal of Ethnobiology and Ethnomedicine, published by BioMedCentral, with the aim of establishing a serious, peer-reviewed, open-access online journal that focuses on the multidisciplinary, interdisciplinary, and transdisciplinary fields of ethnobiology and ethnomedicine, drawing on approaches and methods from both the social and biological sciences. The strong start vindicates the widely held belief that the journal responds to a real need within the research community.
The success of the journal has been most gratifying. The steady influx of submissions of high scientific standards illustrates the strong demand for a dynamic, proactive, and open-minded scientific journal in these research areas. Our aim has been to dedicate JEE to the "scientific communities" worldwide, particularly those in the developing countries.
doi:10.1186/1746-4269-2-10
PMCID: PMC1383503  PMID: 16460576
22.  New paradigm for macromolecular crystallography experiments at SSRL: automated crystal screening and remote data collection 
Through the combination of robust mechanized experimental hardware and a flexible control system with an intuitive user interface, SSRL researchers have screened over 200 000 biological crystals for diffraction quality in an automated fashion. Three quarters of SSRL researchers are using these data-collection tools from remote locations.
Complete automation of the macromolecular crystallography experiment has been achieved at SSRL through the combination of robust mechanized experimental hardware and a flexible control system with an intuitive user interface. These highly reliable systems have enabled crystallography experiments to be carried out from the researchers’ home institutions and other remote locations while retaining complete control over even the most challenging systems. A breakthrough component of the system, the Stanford Auto-Mounter (SAM), has enabled the efficient mounting of cryocooled samples without human intervention. Taking advantage of this automation, researchers have successfully screened more than 200 000 samples to select the crystals with the best diffraction quality for data collection as well as to determine optimal crystallization and cryocooling conditions. These systems, which have been deployed on all SSRL macromolecular crystallography beamlines and several beamlines worldwide, are used by more than 80 research groups in remote locations, establishing a new paradigm for macromolecular crystallography experimentation.
doi:10.1107/S0907444908030564
PMCID: PMC2631117  PMID: 19018097
remote crystallography data collection; robotics
23.  Korean Association of Medical Journal Editors at the Forefront of Improving the Quality and Indexing Chances of its Member Journals 
Journal of Korean Medical Science  2013;28(5):648-650.
The article overviews some achievements and problems of Korean medical journals published in a highly competitive journal environment. Activities of the Korean Association of Medical Journal Editors (KAMJE) are viewed as instrumental in improving the quality of Korean articles, indexing a large number of local journals in prestigious bibliographic databases, and launching new abstract and citation tracking databases or platforms (e.g. KoreaMed, KoreaMed Synapse, the Western Pacific Regional Index Medicus [WPRIM]). KAMJE encourages its member journals to upgrade science editing standards and to legitimately increase citation rates, primarily by publishing more high-quality articles with global influence. Experience gained by KAMJE and problems faced by Korean editors may have global implications.
doi:10.3346/jkms.2013.28.5.648
PMCID: PMC3653074  PMID: 23678253
Periodicals as Topic; Medicine; Learned Associations; Journal Indexing; Science Communication; Korea
24.  The NMR restraints grid at BMRB for 5,266 protein and nucleic acid PDB entries 
Journal of Biomolecular Nmr  2009;45(4):389-396.
Several pilot experiments have indicated that improvements in older NMR structures can be expected by applying modern software and new protocols (Nabuurs et al. in Proteins 55:483–486, 2004; Nederveen et al. in Proteins 59:662–672, 2005; Saccenti and Rosato in J Biomol NMR 40:251–261, 2008). A recent large-scale X-ray study has also shown that modern software can significantly improve the quality of X-ray structures that were deposited more than a few years ago (Joosten et al. in J Appl Crystallogr 42:376–384, 2009; Sanderson in Nature 459:1038–1039, 2009). Recalculation of three-dimensional coordinates requires that the original experimental data are available and complete, and are semantically and syntactically correct, or are at least correct enough to be reconstructed. For multiple reasons, including a lack of standards, the heterogeneity of the experimental data and the many NMR experiment types, it has not been practical to parse a large proportion of the originally deposited NMR experimental data files related to protein NMR structures. This has made impractical the automatic recalculation, and thus improvement, of the three-dimensional coordinates of these structures. Here we describe a large-scale international collaborative effort to make all deposited experimental NMR data semantically and syntactically homogeneous, and thus useful for further research. A total of 4,014 out of 5,266 entries were ‘cleaned’ in this process. For 1,387 entries, human intervention was needed. Continuing efforts to automate the parsing of both old and newly deposited files are steadily decreasing this fraction. The cleaned data files are available from the NMR restraints grid at http://restraintsgrid.bmrb.wisc.edu.
Electronic supplementary material
The online version of this article (doi:10.1007/s10858-009-9378-z) contains supplementary material, which is available to authorized users.
doi:10.1007/s10858-009-9378-z
PMCID: PMC2777234  PMID: 19809795
Biomolecular structure; BMRB; Restraints; Database; Nuclear magnetic resonance; PDB
25.  Assessment of community-submitted ontology annotations from a novel database-journal partnership 
As the scientific literature grows, leading to an increasing volume of published experimental data, so does the need to access and analyze this data using computational tools. The most commonly used method to convert published experimental data on gene function into controlled vocabulary annotations relies on a professional curator, employed by a model organism database or a more general resource such as UniProt, to read published articles and compose annotation statements based on the articles' contents. A more cost-effective and scalable approach capable of capturing gene function data across the whole range of biological research organisms in computable form is urgently needed. We have analyzed a set of ontology annotations generated through collaborations between the Arabidopsis Information Resource and several plant science journals. Analysis of the submissions entered using the online submission tool shows that most community annotations were well supported and the ontology terms chosen were at an appropriate level of specificity. Of the 503 individual annotations that were submitted, 97% were approved and community submissions captured 72% of all possible annotations. This new method for capturing experimental results in a computable form provides a cost-effective way to greatly increase the available body of annotations without sacrificing annotation quality.
Database URL: www.arabidopsis.org
doi:10.1093/database/bas030
PMCID: PMC3410254  PMID: 22859749
