1.  LitMiner and WikiGene: identifying problem-related key players of gene regulation using publication abstracts 
Nucleic Acids Research  2005;33(Web Server issue):W779-W782.
The LitMiner software is a literature data-mining tool that facilitates the identification of major gene-regulation key players related to a user-defined field of interest in PubMed abstracts. The prediction of gene-regulatory relationships is based on co-occurrence analysis of key terms within the abstracts. LitMiner predicts relationships between key terms from the biomedical domain in four categories (genes, chemical compounds, diseases and tissues). Owing to the limitations of the co-occurrence approach (no direction, unverified automatic prediction), the primary data in the LitMiner database represent postulated basic gene–gene relationships. The usefulness of the LitMiner system was demonstrated recently in a study that reconstructed disease-related regulatory networks by promoter modelling, initiated from a LitMiner-generated primary gene list. To overcome these limitations and to verify and improve the data, we developed WikiGene, a Wiki-based curation tool that allows expert users to revise the data over the Internet. LitMiner () and WikiGene () can be used without restriction with any Internet browser.
doi:10.1093/nar/gki417
PMCID: PMC1160178  PMID: 15980584
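The co-occurrence approach described in the abstract can be illustrated with a minimal sketch: count how often pairs of key terms appear together in the same abstract. This is a hypothetical toy illustration of the general technique, not LitMiner's actual implementation; the function name and matching strategy are assumptions.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(abstracts, key_terms):
    """Count how often each pair of key terms appears in the same abstract.

    Returns a Counter keyed by alphabetically ordered term pairs.
    Simple case-insensitive substring matching stands in for real
    named-entity recognition.
    """
    pair_counts = Counter()
    for text in abstracts:
        lowered = text.lower()
        # Terms present in this abstract, sorted so each pair has one canonical key
        present = sorted(t for t in key_terms if t.lower() in lowered)
        for a, b in combinations(present, 2):
            pair_counts[(a, b)] += 1
    return pair_counts

abstracts = [
    "TP53 regulates MDM2 in response to DNA damage.",
    "MDM2 inhibits TP53 activity.",
    "BRCA1 is linked to breast cancer.",
]
counts = cooccurrence_counts(abstracts, ["TP53", "MDM2", "BRCA1"])
```

As the abstract notes, counts like these carry no direction: they postulate that two terms are related, not which regulates which, which is why downstream curation (here, WikiGene) is needed.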
2.  Using phylogenetically-informed annotation (PIA) to search for light-interacting genes in transcriptomes from non-model organisms 
BMC Bioinformatics  2014;15(1):350.
Background
Tools for high throughput sequencing and de novo assembly make the analysis of transcriptomes (i.e. the suite of genes expressed in a tissue) feasible for almost any organism. Yet a challenge for biologists is that it can be difficult to assign identities to gene sequences, especially from non-model organisms. Phylogenetic analyses are one useful method for assigning identities to these sequences, but such methods tend to be time-consuming because of the need to re-calculate trees for every gene of interest and each time a new data set is analyzed. In response, we employed existing tools for phylogenetic analysis to produce a computationally efficient, tree-based approach for annotating transcriptomes or new genomes that we term Phylogenetically-Informed Annotation (PIA), which places uncharacterized genes into pre-calculated phylogenies of gene families.
Results
We generated maximum likelihood trees for 109 genes from a Light Interaction Toolkit (LIT), a collection of genes that underlie the function or development of light-interacting structures in metazoans. To do so, we searched protein sequences predicted from 29 fully-sequenced genomes and built trees using tools for phylogenetic analysis in the Osiris package of Galaxy (an open-source workflow management system). Next, to rapidly annotate transcriptomes from organisms that lack sequenced genomes, we repurposed a maximum likelihood-based Evolutionary Placement Algorithm (implemented in RAxML) to place sequences of potential LIT genes on to our pre-calculated gene trees. Finally, we implemented PIA in Galaxy and used it to search for LIT genes in 28 newly-sequenced transcriptomes from the light-interacting tissues of a range of cephalopod mollusks, arthropods, and cubozoan cnidarians. Our new trees for LIT genes are available on the Bitbucket public repository (http://bitbucket.org/osiris_phylogenetics/pia/) and we demonstrate PIA on a publicly-accessible web server (http://galaxy-dev.cnsi.ucsb.edu/pia/).
Conclusions
Our new trees for LIT genes will be a valuable resource for researchers studying the evolution of eyes or other light-interacting structures. We also introduce PIA, a high throughput method for using phylogenetic relationships to identify LIT genes in transcriptomes from non-model organisms. With simple modifications, our methods may be used to search for different sets of genes or to annotate data sets from taxa outside of Metazoa.
Electronic supplementary material
The online version of this article (doi:10.1186/s12859-014-0350-x) contains supplementary material, which is available to authorized users.
doi:10.1186/s12859-014-0350-x
PMCID: PMC4255452  PMID: 25407802
Bioinformatics; Eyes; Evolution; Galaxy; Next-generation sequence analysis; Orthology; Phototransduction; Transcriptomes; Vision
3.  BioLit: integrating biological literature with databases 
Nucleic Acids Research  2008;36(Web Server issue):W385-W389.
BioLit is a web server that provides metadata describing the semantic content of all open access, peer-reviewed research articles in the major life sciences literature archive, PubMed Central. Specifically, these metadata include database identifiers and ontology terms found within the full text of the article. BioLit delivers these metadata as XML-based article files and through a custom web-based article viewer that provides context-specific functionality for the metadata. This resource aims to integrate the traditional scientific publication directly into existing biological databases, obviating the need for a user to search multiple locations for information relating to a specific item of interest, for example published experimental results associated with a particular biological database entry. As an example of a possible use of BioLit, we also present an instance of the Protein Data Bank fully integrated with BioLit data. We expect that the community of life scientists in general will be the primary end-users of the web-based viewer, while biocurators will make use of the metadata-containing XML files and the BioLit database of article data. BioLit is available at http://biolit.ucsd.edu.
doi:10.1093/nar/gkn317
PMCID: PMC2447735  PMID: 18515836
4.  ClearedLeavesDB: an online database of cleared plant leaf images 
Plant Methods  2014;10:8.
Background
Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers.
Description
The Cleared Leaf Image Database (ClearedLeavesDB) is an online resource that enables a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding metadata, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database.
Conclusions
We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
doi:10.1186/1746-4811-10-8
PMCID: PMC3986656  PMID: 24678985
Database; Images; Leaves; Digital archiving; Digital curation
5.  Comparison of Iranian National Medical Library with digital libraries of selected countries 
Introduction:
The important role of information and communication technologies, and their influence on methods of storing and retrieving information in digital libraries, has not only changed the meaning of classic library activities but has also brought great changes to their services. However, it seems that not all digital libraries provide their users with similar services, and only some of them succeed in fulfilling their role in the digital environment. The Iranian National Medical Library is among those that appear to fall short compared to other digital libraries around the world. By knowing the different services provided by digital libraries worldwide, one can evaluate the services provided by the Iranian National Medical Library. The goal of this study is a comparison between the Iranian National Medical Library and the digital libraries of selected countries.
Materials and Methods:
This is an applied study using a descriptive survey method. The statistical population comprises digital libraries around the world that were actively providing library services between October and December 2011, selected by searching for the keyword “Digital Library” in the Google search engine. The data-gathering tool was direct access to the websites of these digital libraries. The analysis is descriptive, and Excel was used for data analysis and plotting of the charts.
Results:
The findings showed that among the 33 digital libraries investigated worldwide, most provided Browse (87.87%), Search (84.84%), and Electronic information retrieval (57.57%) services. “Help” had the highest frequency among public services (48.48%) and “Interlibrary Loan” among traditional services (27.27%). The Iranian National Medical Library provides more digital services than the other libraries but fewer classic and public services, offering less than half of the possible public services. Apart from the Iranian National Medical Library, among the 33 libraries investigated, the leaders in providing different services were the Library of the University of California in classic services, the Countway Library of Medicine in digital services, and the Library of Finland in public services.
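The repeating digits in the reported frequencies suggest they are simple counts out of the 33 surveyed libraries, truncated to two decimal places. A quick arithmetic check of that reading (the counts below are inferred from the percentages, not stated in the abstract):

```python
# Inferred reading: reported frequencies are counts out of 33 libraries.
def as_percentage(count, total=33):
    """Convert a count of libraries offering a service to a percentage."""
    return count / total * 100

# service: (inferred count, percentage reported in the abstract)
checks = {
    "Browse": (29, 87.87),
    "Search": (28, 84.84),
    "Electronic information retrieval": (19, 57.57),
    "Help": (16, 48.48),
    "Interlibrary Loan": (9, 27.27),
}
for service, (count, reported) in checks.items():
    # Each reported value matches count/33 to within truncation error
    assert abs(as_percentage(count) - reported) < 0.01
```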
Results and Discussion:
The results of this study show that most of the digital libraries investigated provided similar public, digital, and classic services, and that the Iranian National Medical Library has been somewhat successful in providing these services compared to other digital libraries. One can also conclude that the differences in services are at least in part due to differences in environments, information needs, and users.
Conclusion:
The Iranian National Medical Library has been somewhat successful in providing library services in the digital environment. It needs to identify the services that are valuable to its users by identifying users’ needs and the special characteristics of its environment.
doi:10.4103/2277-9531.145897
PMCID: PMC4275611  PMID: 25540782
Digital library; Iranian National Medical Library; services
6.  Novel subtractive transcription-based amplification of mRNA (STAR) method and its application in search of rare and differentially expressed genes in AD brains 
BMC Genomics  2006;7:286.
Background
Alzheimer's disease (AD) is a complex disorder that involves multiple biological processes. Many genes implicated in these processes may be present in low abundance in the human brain. DNA microarray analysis identifies changed genes that are expressed at high or moderate levels. Complementary to this approach, we describe here a novel technology designed specifically to isolate rare and novel genes previously undetectable by other methods. We have used this method to identify differentially expressed genes in brains affected by AD. Our method, termed Subtractive Transcription-based Amplification of mRNA (STAR), is a combination of subtractive RNA/DNA hybridization and RNA amplification, which allows the removal of non-differentially expressed transcripts and the linear amplification of the differentially expressed genes.
Results
Using the STAR technology we have identified over 800 differentially expressed sequences in AD brains, both up- and down- regulated, compared to age-matched controls. Over 55% of the sequences represent genes of unknown function and roughly half of them were novel and rare discoveries in the human brain. The expression changes of nearly 80 unique genes were further confirmed by qRT-PCR and the association of additional genes with AD and/or neurodegeneration was established using an in-house literature mining tool (LitMiner).
Conclusion
The STAR process significantly amplifies unique and rare sequences relative to abundant housekeeping genes and, as a consequence, identifies genes not previously linked to AD. This method also offers new opportunities to study the subtle changes in gene expression that potentially contribute to the development and/or progression of AD.
doi:10.1186/1471-2164-7-286
PMCID: PMC1637111  PMID: 17090317
7.  Integration of open access literature into the RCSB Protein Data Bank using BioLit 
BMC Bioinformatics  2010;11:220.
Background
Biological data have traditionally been stored and made publicly available through a variety of on-line databases, whereas biological knowledge has traditionally been found in the printed literature. With journals now on-line and providing an increasing amount of open access content, often free of copyright restriction, this distinction between database and literature is blurring. To exploit this opportunity we present the integration of open access literature with the RCSB Protein Data Bank (PDB).
Results
BioLit provides an enhanced view of articles with markup of semantic data and links to biological databases, based on the content of the article. For example, words matching existing biological ontologies are highlighted and database identifiers are linked to their database of origin. Among other functions, it identifies PDB IDs that are mentioned in the open access literature, by parsing the full text of all research articles in PubMed Central (PMC) and exposing the results as simple XML Web Services. Here, we integrate BioLit results with the RCSB PDB website by using these services to find PDB IDs that are mentioned in research articles and subsequently retrieving the abstracts, figures, and text excerpts for those articles. A new RCSB PDB literature view permits browsing through the figures and abstracts of the articles that mention a given structure. The BioLit Web Services that provide the underlying data are publicly accessible, and a client library (Java) is provided that supports querying these services.
Conclusions
The integration between literature and websites, as demonstrated here with the RCSB PDB, provides a broader view for how a given structure has been analyzed and used. This approach detects the mention of a PDB structure even if it is not formally cited in the paper. Other structures related through the same literature references can also be identified, possibly providing new scientific insight. To our knowledge this is the first time that database and literature have been integrated in this way and it speaks to the opportunities afforded by open and free access to both database and literature content.
doi:10.1186/1471-2105-11-220
PMCID: PMC2880030  PMID: 20429930
8.  A self-evaluation tool for integrated care services: the Development Model for Integrated Care applied in practice 
Purpose
The purpose of the workshop is to show the application of the Development Model for Integrated Care (DMIC) in practice. This relatively new and validated model can be used by integrated care practices to evaluate their integrated care, to assess their phase of development, and to reveal areas for improvement. In the workshop, the results of the use of the model in three types of integrated care settings in the Netherlands will be presented. Participants are offered practical instruments based on the validated DMIC to use in their own setting and will be introduced to the web-based tool.
Context
To integrate care from multiple providers into a coherent and streamlined client-focused service, a large number of activities and agreements have to be implemented, such as streamlining information flows and ensuring adequate transfers of clients. Within the large range of possible activities, it is often not clear which activities are essential and where to start or continue. Knowledge about how to further develop integrated care services is also needed. The Development Model for Integrated Care (DMIC), based on the PhD research of Mirella Minkman, describes nine clusters containing in total 89 elements that contribute to the integration of care. The clusters are named: ‘client-centeredness’, ‘delivery system’, ‘performance management’, ‘quality of care’, ‘result-focused learning’, ‘interprofessional teamwork’, ‘roles and tasks’, ‘commitment’, and ‘transparent entrepreneurship’ [1–3]. The DMIC also describes four phases of development [4]. The model was empirically validated in practice by assessing the relevance and implementation of the elements and development phases in 84 integrated care services in the Netherlands: stroke, acute myocardial infarction (AMI), and dementia services. The validation studies were recently published [5, 6]. In 2011, other integrated care services also started using the model [7], and Vilans developed a digital web-based self-evaluation tool for integrated care services, containing the 89 elements grouped in the nine clusters. A palliative care network, four diabetes services, a youth care service, and a network for autism used the self-evaluation tool to evaluate the development of their integrated care service. Because of its generic character, the model and tool are believed to be of international interest as well.
Data sources
In the workshop we will present the results of three studies in integrated diabetes, youth and palliative care. The three projects consist of multiple steps, see below. Workshop participants could also work with the DMIC following these steps.
One: Preparation of the digital self-evaluation tool for integrated care services
Although they are very different, the three integrated care services all wanted to gain insight into their development and improvement opportunities. We tailored the digital self-evaluation tool for each specific integrated care service, but in all cases the basis was the DMIC. Personal accounts for the digital DMIC self-evaluation survey were sent to multiple partners working in each integrated care service (4–16 partners).
Two: Use of the online self-evaluation tool
Each partner of the local integrated care setting evaluated the integrated care by filling in the web-based questionnaire. The tool consists of three parts (A–C): general information about the integrated care practice (A); the clusters and elements of the DMIC (B); and the four phases of development (C). The respondents rated the relevance and presence of each element in their integrated care practice. Respondents were also asked to estimate in which phase of development they thought their service was.
Three: Analysing the results
Advisers from Vilans, the centre of excellence for long-term care in the Netherlands, analysed the self-evaluation results in cooperation with the integrated care coordinators. The results show the total number of implemented integrated care elements per cluster in spider graphs, together with the development phase as calculated by the DMIC model. Suggestions for further development of the integrated care services were analysed and reported.
Four: Discussing the implications for further development
In a workshop with the local integrated care partners, the results of the self-evaluation were presented and discussed. We noted remarkable results and highlighted elements for further development. In addition, we gave advice for further development appropriate to the development phase of the integrated care service. Furthermore, the professionals prioritized the elements and decided which ones to start working on. This resulted in a (quality improvement) plan for the further development of the integrated care service.
Five: Reporting results
A report brought together all the results of the survey (including consensus scores) and the workshops. The integrated care coordinators stated that the reports really helped them to assess their improvement strategy. The reports also gave insight into the development phase of their service, which provided tools for further development.
Case description
The three cases presented are a palliative care network, an integrated diabetes service, and an integrated care network for youth in the Netherlands. The palliative care network wanted to reflect on its current development and to build a guiding framework for further development of the network. About sixteen professionals within the network worked with the digital self-evaluation tool and the DMIC, from home care organisations, welfare organisations, hospice centres, health care organisations, and community organisations.
For diabetes care, a Dutch health care insurance company wished to gain insight into the development of the contracted integrated care services in order to stimulate their further development. Professionals from three integrated diabetes care services were invited to fill in the digital self-evaluation tool. From each integrated care service, professionals such as a general practitioner, a diabetes nurse, a medical specialist, a dietician, and a podiatrist were invited. In youth care, a local health organisation wondered whether the DMIC could help visualize the results of integrated youth care services at the process and organisational level. The goal of the project was to define indicators at the process and organisational level for youth care services based on the DMIC. In the future, these indicators might be used to evaluate integrated youth care services and improve the quality of youth care within the Netherlands.
Conclusions and discussion
It is important for the quality of integrated care services that the involved coordinators, managers, and professionals are aware of the development process of the integrated care service and that they focus on elements which can further develop and improve their integrated care. However, we noticed that integrated care services in the Netherlands experience difficulties in developing their integrated care service. It is often not clear which activities are essential to work on and how to further develop the integrated care service. A guiding framework for the development of integrated care was missing. The DMIC model was developed for that reason and offers a useful tool for assessment, self-evaluation, or improvement of integrated care services in practice. The model has been validated for AMI, dementia, and stroke services. The latest studies in diabetes, palliative care, and youth care gave further insight into the generic character of the DMIC. Based on these studies, it can be assumed that the DMIC can be used for multiple types of integrated care services. The model is also assumed to be of interest to an international audience: improving integrated care is a complex topic in a large number of countries, and the DMIC is grounded in the international literature. Dutch integrated care coordinators stated that the DMIC helped them to assess their integrated care development in practice and supported them in obtaining ideas for expanding and improving their integrated care activities.
The web-based self-evaluation tool focuses on the process and organisational level of integrated care. The self-assessed development phase can also be compared to the development phase as calculated by the DMIC tool; the cases showed that this is fruitful input for discussions. When the tool is used, the results can also feed into quality policy reports and improvement plans. The web-based tool is currently being tested in practice, but in San Marino we can present the latest web version and demonstrate with a short video how to use the tool and model. During practical exercises in the workshop, the participants will experience how the application of the DMIC can work for them in practice or in research. For integrated care researchers and policy makers, the DMIC questionnaire and tool are a promising method for further research and policy plans in integrated care.
PMCID: PMC3617779
development model for integrated care; development of integrated care services; implementation and improvement of integrated care; self evaluation
9.  Towards a standardised approach for evaluating guidelines and guidance documents on palliative sedation: study protocol 
BMC Palliative Care  2014;13:34.
Background
Sedation in palliative care has received growing attention in recent years; and so have guidelines, position statements, and related literature that provide recommendations for its practice. Yet little is known collectively about the content, scope and methodological quality of these materials.
According to research, there are large variations in palliative sedation practice, depending on the definition and methodology used. However, a standardised approach to comparing and contrasting related documents, across countries, associations and governmental bodies is lacking. This paper reports on a protocol designed to enable thorough and systematic comparison of guidelines and guidance documents on palliative sedation.
Methods and design
A multidisciplinary and international group of palliative care researchers identified themes and clinical issues on palliative sedation, based on expert consultations and evidence drawn from the EAPC (European Association for Palliative Care) framework for palliative sedation and the AGREE II (Appraisal of Guidelines for Research and Evaluation) instrument for guideline assessment. The most relevant themes were selected and built into a comprehensive checklist. This was tested for user-friendliness and comprehensibility on people working closely with practitioners and patients, and modified where necessary. Next, a systematic search was conducted for guidelines in English, Dutch, Flemish, or Italian. The search was performed in multiple databases (PubMed, CancerLit, CINAHL, Cochrane Library, NHS Evidence and Google Scholar) and via other Internet resources. Hereafter, the final version of the checklist will be used to extract data from the selected literature; the data will then be compiled, entered into SPSS, cleaned, and analysed systematically for publication.
Discussion
We have together developed a comprehensive checklist in a scientifically rigorous manner to allow standardised and systematic comparison. The protocol is applicable to all guidelines on palliative sedation, and the approach will contribute to rigorous and systematic comparison of international guidelines on any challenging topic such as this. Results from the study will provide valuable insights into common core elements and differences between the selected guidelines, and the extent to which recommendations are derived from, or match those in the EAPC framework. The outcomes of the study will be disseminated via peer-reviewed journals and directly to appropriate audiences.
doi:10.1186/1472-684X-13-34
PMCID: PMC4099031  PMID: 25028571
Palliative sedation; Practice guidelines; Content analysis; Comparative research; Study protocol
10.  Clinical Digital Libraries Project: design approach and exploratory assessment of timely use in clinical environments* 
Objective: The paper describes and evaluates the use of Clinical Digital Libraries Project (CDLP) digital library collections in terms of their facilitation of timely clinical information seeking.
Design: A convenience sample of CDLP Web server log activity over a twelve-month period (7/2002 to 6/2003) was analyzed for evidence of timely information seeking after users were referred to digital library clinical topic pages from Web search engines. Sample searches were limited to those originating from medical schools (26% North American and 19% non-North American) and from hospitals or clinics (51% North American and 4% non-North American).
Measurement: Timeliness was determined based on a calculation of the difference between the timestamps of the first and last Web server log “hit” during each search in the sample. The calculated differences were mapped into one of three ranges: less than one minute, one to three minutes, and three to five minutes.
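The measurement described can be sketched as a simple bucketing function over log timestamps. This is an illustrative reconstruction, not the CDLP study's code: the function name is hypothetical, and the exact treatment of boundary values (e.g. a search of exactly one minute) is an assumption, since the abstract does not specify it.

```python
def timeliness_bucket(first_hit_ts, last_hit_ts):
    """Map the elapsed time between the first and last Web server log 'hit'
    of a search (Unix timestamps, in seconds) onto the study's three
    reporting ranges."""
    elapsed = last_hit_ts - first_hit_ts
    if elapsed < 60:
        return "less than one minute"
    if elapsed <= 180:
        return "one to three minutes"
    if elapsed <= 300:
        return "three to five minutes"
    return "over five minutes"  # outside the ranges reported in the sample

# Example: a search whose first and last hits are 45 seconds apart
bucket = timeliness_bucket(1_062_000_000, 1_062_000_045)
```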
Results: Of the 864 searches analyzed, 48% were less than 1 minute, 41% were 1 to 3 minutes, and 11% were 3 to 5 minutes. These results were further analyzed by environment (medical schools versus hospitals or clinics) and by geographic location (North America versus non-North American). Searches reflected a consistent pattern of less than 1 minute in these environments. Though the results were not consistent on a month-by-month basis over the entire time period, data for 8 of 12 months showed that searches shorter than 1 minute predominated and data for 1 month showed an equal number of less than 1 minute and 1 to 3 minute searches.
Conclusions: The CDLP digital library collections provided timely access to high-quality clinical Web resources when used for information seeking in medical education and hospital or clinic environments, in both North American and non–North American locations, and consistently provided access to the sought information within the documented two-minute standard. Given the limitations of Web server log data, this assessment should be considered exploratory. This research also suggests the need for further investigation of timely digital library collection services for clinical environments.
PMCID: PMC1435840  PMID: 16636712
11.  The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications 
Background
The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009.
Results
Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs.
Conclusions
Beyond deriving prototype solutions for each use case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP/WSDL specification across programming-language libraries; and iv) incompatibility between various bioinformatics data formats. Because of these problems, it was still difficult to solve the real-world problems posed to the developers by the biological researchers in attendance, but we note the promise of addressing these issues within a semantic framework.
doi:10.1186/2041-1480-2-4
PMCID: PMC3170566  PMID: 21806842
12.  CIS4/403: Design and Implementation of an Intranet-based system for Real-Time Tele-Consultation in Oncology 
Introduction
This study describes a tele-consultation system (TCS) developed to provide a computing environment over a Wide Area Network (WAN) in northern Italy (Province of Trento) that can be used by two or more physicians to share medical data and to work co-operatively on medical records. A pilot study has been carried out in oncology to assess the effectiveness of the system. The aim of this project is to facilitate the management of oncology patients by improving communication among the specialists of central and district hospitals.
Methods and Results
The TCS is an Intranet-based solution. The Intranet runs on a PC WAN with Windows NT Server, Microsoft SQL Server and Internet Information Server. TCS is composed of native and custom applications developed for the Microsoft Windows (9x and NT) environment. The basic component of the system is the multimedia digital medical record, structured as a collection of HTML and ASP pages. A distributed relational database allows users to store and retrieve medical records, accessed through a dedicated Web browser via the Web server. The medical data to be stored and the presentation architecture of the clinical record were determined in close collaboration with the clinicians involved in the project. TCS allows multi-point tele-consultation (TC) among two or more participants on remote computers, providing synchronized browsing through the clinical report. A set of collaborative and personal tools (a whiteboard with drawing tools, point-to-point digital audio-conferencing, chat, a local notepad and an e-mail service) is integrated into the system to provide a user-friendly environment. TCS has a client-server architecture. The client part of the system is based on the Microsoft Web Browser control and provides the user interface and the tools described above. The server part, running continuously on a dedicated computer, accepts connection requests and manages the connections among the participants in a TC, allowing multiple TCs to run simultaneously. TCS was developed in Visual C++ using the MFC library and COM technology; ActiveX controls written in Visual Basic perform dedicated tasks from inside the HTML clinical report. Before deployment in the hospital departments involved in the project, TCS was tested in our laboratory by the project clinicians to evaluate its usability.
Discussion
TCS has the potential to support a "multi-disciplinary distributed virtual oncological meeting". Specialists from different departments and different hospitals can attend "virtual meetings" and interactively discuss medical data. An expected benefit of the "virtual meeting" is that oncologists can provide expert remote advice to peripheral cancer units in formulating treatment plans, conducting follow-up sessions and supporting clinical research.
doi:10.2196/jmir.1.suppl1.e9
PMCID: PMC1761746
Intranet; Teleconsultation; Oncology
13.  An eUtils toolset and its use for creating a pipeline to link genomics and proteomics analyses to domain-specific biomedical literature 
Background
Numerous biomedical software applications access databases maintained by the US National Center for Biotechnology Information (NCBI). To ease software automation, NCBI provides a powerful but complex Web-service-based programming interface, eUtils. This paper describes a toolset that simplifies eUtils use through a graphical front-end that can be used by non-programmers to construct data-extraction pipelines. The front-end relies on a code library that provides high-level wrappers around eUtils functions, and which is distributed as open-source, allowing customization and enhancement by individuals with programming skills.
Methods
We initially created an application that queried eUtils to retrieve nephrology-specific biomedical literature citations for a user-definable set of genes. We later augmented the application code to create a general-purpose library that accesses eUtils capability as individual functions that could be combined into user-defined pipelines.
Results
The toolset’s use is illustrated with an application that serves as a front-end to the library and can be used by non-programmers to construct user-defined pipelines. The library’s operation is demonstrated through the literature-surveillance application, which serves as a case study. An overview of the library is also provided.
Conclusions
The library simplifies use of the eUtils service by operating at a higher level, and also transparently addresses robustness issues that would need to be individually implemented otherwise, such as error recovery and prevention of overloading of the eUtils service.
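The pipeline idea can be sketched in plain Python. The ESearch endpoint and its `db`/`term`/`retmax` parameters are part of NCBI's public eUtils interface, but the function names below are illustrative, not the toolset's actual API, and the XML response is a canned, abbreviated sample so the sketch runs offline:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def build_esearch_url(db, term, retmax=20):
    """Construct an ESearch request URL for the given database and query term."""
    params = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{EUTILS_BASE}/esearch.fcgi?{params}"

def parse_esearch_ids(xml_text):
    """Extract the list of record IDs from an ESearch XML response."""
    root = ET.fromstring(xml_text)
    return [id_elem.text for id_elem in root.iter("Id")]

# Example: query PubMed for citations on a gene in a nephrology context.
url = build_esearch_url("pubmed", "PKD1 AND nephrology[MeSH Terms]")

# Canned (abbreviated) ESearch response, used so the sketch runs offline.
sample_response = """<eSearchResult>
  <Count>2</Count>
  <IdList><Id>22507626</Id><Id>15980584</Id></IdList>
</eSearchResult>"""
print(parse_esearch_ids(sample_response))  # ['22507626', '15980584']
```

A wrapper library like the one described would chain such calls (ESearch, then EFetch for the matching records) into a pipeline, while adding error recovery and request throttling.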
doi:10.1186/2043-9113-2-9
PMCID: PMC3422171  PMID: 22507626
Entrez Programming Utilities; Proteomics Analysis; Pubmed filters
14.  Automated collection of imaging and phenotypic data to centralized and distributed data repositories 
Accurate data collection at the ground level is vital to the integrity of neuroimaging research. Similarly important is the ability to connect and curate data in order to make it meaningful and sharable with other investigators. Collecting data, especially with several different modalities, can be time consuming and expensive. These issues have driven the development of automated collection of neuroimaging and clinical assessment data within COINS (Collaborative Informatics and Neuroimaging Suite). COINS is an end-to-end data management system. It provides a comprehensive platform for data collection, management, secure storage, and flexible data retrieval (Bockholt et al., 2010; Scott et al., 2011). It was initially developed for the investigators at the Mind Research Network (MRN), but is now available to neuroimaging institutions worldwide. Self Assessment (SA) is an application embedded in the Assessment Manager (ASMT) tool in COINS. It is an innovative tool that allows participants to fill out assessments via the web-based Participant Portal. It eliminates the need for paper collection and data entry by allowing participants to submit their assessments directly to COINS. Instruments (surveys) are created through ASMT and include many unique question types and associated SA features that can be implemented to help the flow of assessment administration. SA provides an instrument queuing system with an easy-to-use drag-and-drop interface for research staff to set up participants' queues. After a queue has been created for the participant, they can access the Participant Portal via the internet to fill out their assessments. This allows them the flexibility to participate from home, a library, on site, etc. The collected data is stored in a PostgreSQL database at MRN. This data is accessible only to users who have been granted explicit permission through their COINS user accounts and who have access to the MRN network.
This allows for high-volume data collection with minimal user access to PHI (protected health information). An added benefit of using COINS is the ability to collect, store and share imaging and assessment data without interacting with outside tools or programs. All study data collected (imaging and assessment) is stored and exported with a participant's unique subject identifier, so there is no need to keep extra spreadsheets or databases to link and track the data. Data is easily exported from COINS via the Query Builder and study portal tools, which allow fine-grained selection of data to be exported in comma-separated value format for easy import into statistical programs. There is a great need for data collection tools that limit human intervention and error while providing users with an intuitive design. COINS aims to be a leader in database solutions for research studies collecting data from several different modalities.
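The export path described above, with every record keyed by a participant's unique subject identifier and written out as comma-separated values, can be illustrated with a minimal sketch. The field names, identifiers and instrument below are hypothetical, not COINS's actual schema:

```python
import csv
import io

# Hypothetical assessment records, each keyed by a participant's unique
# subject identifier so imaging and assessment data can be linked later.
records = [
    {"subject_id": "M871234", "instrument": "PANSS", "score": 58},
    {"subject_id": "M875678", "instrument": "PANSS", "score": 42},
]

def export_csv(rows, fieldnames):
    """Write rows to comma-separated-value text for import into statistical software."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(export_csv(records, ["subject_id", "instrument", "score"]))
```

Because every row carries the subject identifier, no side spreadsheet is needed to join exported assessment data back to imaging data.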
doi:10.3389/fninf.2014.00060
PMCID: PMC4046572  PMID: 24926252
assessment data collection; neuroinformatics; tool suite; database; intuitive; COINS
15.  PMD2HD – A web tool aligning a PubMed search results page with the local German Cancer Research Centre library collection 
Background
Web-based searching is the accepted contemporary mode of retrieving relevant literature, and retrieving as many full-text articles as possible is a typical prerequisite for research success. In most cases only a proportion of references will be directly accessible as digital reprints through displayed links. A large number of references, however, have to be verified in library catalogues and, depending on their availability, are accessible as print holdings or by interlibrary loan request.
Methods
The problem of verifying local print holdings from an initial retrieval set of citations can be solved using Z39.50, an ANSI protocol for interactively querying library information systems. Numerous systems include Z39.50 interfaces and can therefore process interactive Z39.50 requests. However, the query interaction command structure is non-intuitive and inaccessible to the average biomedical researcher. For the typical user, the protocol must be implemented within a tool that hides and handles Z39.50 syntax behind a comfortable user interface.
Results
PMD2HD is a web tool that implements Z39.50 to provide a functional, usable interface integrated into the typical workflow following an initial PubMed literature search. It gives users immediate assistance with the most tedious step in literature retrieval: checking search results against subscription holdings in a local online catalogue.
Conclusion
PMD2HD can facilitate literature access considerably with respect to the time and cost of manual comparisons of search results with local catalogue holdings. The example presented in this article is related to the library system and collections of the German Cancer Research Centre. However, the PMD2HD software architecture and use of common Z39.50 protocol commands allow for transfer to a broad range of scientific libraries using Z39.50-compatible library information systems.
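The core workflow, partitioning a PubMed result set into locally held items and interlibrary-loan candidates, can be sketched as follows. A real implementation would issue Z39.50 search requests to the library information system; here a plain dictionary stands in for the catalogue, and the holdings shown are illustrative:

```python
# Stand-in for a local library catalogue, keyed by journal ISSN.
# In PMD2HD this lookup is performed via Z39.50 queries to the
# library information system rather than an in-memory dict.
local_catalogue = {
    "0305-1048": "Nucleic Acids Research",
    "1471-2105": "BMC Bioinformatics",
}

# Citations from an initial PubMed search (PMIDs taken from this results list).
citations = [
    {"pmid": "19417065", "issn": "0305-1048"},
    {"pmid": "16636712", "issn": "1067-5027"},
]

def check_holdings(cites, catalogue):
    """Partition citations into locally held items and interlibrary-loan candidates."""
    held, ill = [], []
    for c in cites:
        (held if c["issn"] in catalogue else ill).append(c["pmid"])
    return held, ill

held, ill = check_holdings(citations, local_catalogue)
print(held, ill)  # ['19417065'] ['16636712']
```

Automating this comparison is what saves the time and cost of manually checking each search result against the catalogue.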
doi:10.1186/1742-5581-2-4
PMCID: PMC1187866  PMID: 15982415
16.  Bypass materials in vascular surgery 
Introduction
Arteriosclerotic changes can lead to circulatory disturbances in various areas of the human vascular system. In addition to pharmacological therapy and the management of risk factors (e.g. hypertension, diabetes, lipid metabolism disorders, and lifestyle), surgical interventions also play an important role in the treatment of arteriosclerosis. Long-segment arterial occlusions, in particular, can be treated successfully with bypass surgery. A number of different materials are available for this type of operation, such as autologous vein or prosthetic grafts comprised of polytetrafluoroethylene (PTFE) or Dacron®. Prosthetic materials are used especially in the treatment of peripheral artery disease, such as in aortoiliac or femoropopliteal bypass surgery. The present report will thus focus on this area in order to examine the effectiveness of different bypass materials.
Among the efforts being made to refine the newly introduced DRG system in Germany, analysing the different bypass materials used in vascular surgery is particularly important. Indeed, in its current version the German DRG system does not distinguish between bypass materials in terms of reimbursement rates. Differences in cost structures are thus of particular interest to hospitals in their budget calculations, whereas both private and statutory health insurance funds are primarily interested in long-term results and their costs.
Objectives
The goal of this HTA is to compare the different bypass materials used in vascular surgery in terms of their medical efficiency and cost-effectiveness, as well as with regard to their ethical, social and legal implications. In addition, this report aims to point out the areas in which further medical, epidemiological and health economic research is still needed.
Methods
Relevant publications were identified by means of a structured search of databases accessed through the German Institute of Medical Documentation and Information (DIMDI), as well as by a manual search. The former included the following electronic resources: SOMED (SM78), Cochrane Library - Central (CCTR93), MEDLINE Alert (ME0A), MEDLINE (ME95), CATFILEplus (CATLINE) (CA66), ETHMED (ED93), GeroLit (GE79), HECLINET (HN69), AMED (CB85), CAB Abstracts (CV72), GLOBAL Health (AZ72), IPA (IA70), Elsevier BIOBASE (EB94), BIOSIS Previews (BA93), EMBASE (EM95), EMBASE Alert (EA08), SciSearch (IS90), Cochrane Library - CDSR (CDSR93), NHS-CRD-DARE (CDAR94), NHS-CRD-HTA (INAHTA), and NHS-EED (NHSEED).
The present report included German and English literature published between the years 1999 and 2004. A list of the search parameters can be found in the appendix. No limits were placed on the target population, and the methodological quality of the included studies was determined using standardised checklists.
Results
The studies included in this health technology assessment compared the following bypass materials: autologous vein, human umbilical vein (HUV) and synthetic materials such as PTFE or Dacron®. Both the systematic reviews and the randomised controlled trials comparing autologous vein grafts to other bypass materials come to the conclusion that autologous vein is superior to all other materials. From a medical viewpoint, there are no clear differences between the various synthetic materials.
To date, the subject of bypass materials in vascular surgery has not been addressed comprehensively from an economic point of view. Indeed, we were able to identify only one publication that compared the cost of various bypass materials. The remaining health economic studies did not compare costs, cost effectiveness, or quality of life associated with the use of various bypass materials.
Discussion
When deciding which bypass material to use, vascular surgeons take a number of medical considerations into account, including the bypass area, the availability of autologous vein, the amount of operation time available, and the health status of the patient. The studies included in this health technology assessment demonstrate that autologous vein is usually the preferred material for bypass grafts. In contrast, comparisons of various synthetic materials did not show any specific differences. It remains to be seen whether studies on newly developed synthetic materials will show these to have any particular advantages.
The randomised controlled trials included in the present report were limited by a number of methodological weaknesses, such as different methods for determining patency rates, sample size and power problems, the interpretation of non-significant results, and a lack of consideration of additional factors.
From an economic point of view, there is still great need for further research, and we have attempted to describe a number of pressing questions for health economic studies in the present report.
PMCID: PMC3011348  PMID: 21289957
17.  Humanization policy in primary health care: a systematic review 
Revista de Saúde Pública  2013;47(6):1186-1200.
OBJECTIVE
To analyze humanization practices in primary health care in the Brazilian Unified Health System according to the principles of the National Humanization Policy.
METHODS
A systematic review of the literature was carried out, followed by a meta-synthesis, using the following databases: BDENF (nursing database), BDTD (Brazilian digital library of theses and dissertations), CINAHL (Cumulative Index to nursing and allied health literature), LILACS (Latin American and Caribbean health care sciences literature), MEDLINE (International health care sciences literature), PAHO (Pan-American Health Care Organization Library) and SciELO (Scientific Electronic Library Online). The following descriptors were used: Humanization; Humanizing Health Care; Reception; Humanized care; Humanization in health care; Bonding; Family Health Care Program; Primary Care; Public Health and Sistema Único de Saúde (the Brazilian public health care system). Research articles, case studies, reports of experiences, dissertations, theses and chapters of books written in Portuguese, English or Spanish, published between 2003 and 2011, were included in the analysis.
RESULTS
Among the 4,127 publications found on the topic, 40 studies were evaluated and included in the analysis, producing three main categories. The first, referring to the infrastructure and organization of the primary care service, revealed dissatisfaction with the physical structure and equipment of the services and with the flow of attendance, which can facilitate or hinder access. The second, referring to the health work process, raised issues of insufficient numbers of professionals, fragmentation of the work processes, and professional profile and responsibility. The third category, referring to relational technologies, addressed reception, bonding, listening, respect and dialogue with service users.
CONCLUSIONS
Although many practices were cited as humanizing, they do not produce changes in the health services because of the lack of a more profound analysis of the work processes and of ongoing education in the health care services.
doi:10.1590/S0034-8910.2013047004581
PMCID: PMC4206092  PMID: 24626556
Humanization of Assistance; Delivery of Health Care; Primary Health Care; Public Health; Unified Health System; Qualitative Research; Review
18.  Conducting a user-centered information needs assessment: the Via Christi Libraries' experience* 
Purpose: The research sought to provide evidence to support the development of a long-term strategy for the Via Christi Regional Medical Center Libraries.
Methods: An information needs assessment was conducted in a large medical center serving approximately 5,900 physicians, clinicians, and nonclinical staff in 4 sites in 1 Midwestern city. Quantitative and qualitative data from 1,295 self-reporting surveys, 75 telephone interviews, and 2 focus groups were collected and analyzed to address 2 questions: how could the libraries best serve their patrons, given realistic limitations on time, resources, and personnel, and how could the libraries best help their institution improve patient care and outcomes?
Results: Clinicians emphasized the need for “just in time” information accessible at the point of care. Library nonusers emphasized the need to market library services and resources. Both clinical and nonclinical respondents emphasized the need for information services customized to their professional information needs, preferences, and patterns of use. Specific information needs in the organization were identified.
Discussion/Conclusions: The results of this three-part, user-centered information needs assessment were used to develop an evidence-based strategic plan. The findings confirmed the importance of promoting library services in the organization and suggested expanded, collaborative roles for hospital librarians.
doi:10.3163/1536-5050.95.2.173
PMCID: PMC1852625  PMID: 17443250
19.  LitInspector: literature and signal transduction pathway mining in PubMed abstracts 
Nucleic Acids Research  2009;37(Web Server issue):W135-W140.
LitInspector is a literature search tool providing gene and signal transduction pathway mining within NCBI's PubMed database. The automatic gene recognition and color coding increases the readability of abstracts and significantly speeds up literature research. A main challenge in gene recognition is the resolution of homonyms and rejection of identical abbreviations used in a ‘non-gene’ context. LitInspector uses automatically generated and manually refined filtering lists for this purpose. The quality of the LitInspector results was assessed with a published dataset of 181 PubMed sentences. LitInspector achieved a precision of 96.8%, a recall of 86.6% and an F-measure of 91.4%. To further demonstrate its homonym-resolution capabilities, LitInspector was compared to three other literature search tools using some challenging examples. The homonym MIZ-1 (gene IDs 7709 and 9063) was correctly resolved in 87% of the abstracts by LitInspector, whereas the other tools achieved recognition rates between 35% and 67%. The LitInspector signal transduction pathway mining is based on a manually curated database of pathway names (e.g. wingless type), pathway components (e.g. WNT1, FZD1), and general pathway keywords (e.g. signaling cascade). The performance was checked for 10 randomly selected genes. Eighty-two per cent of the 38 predicted pathway associations were correct. LitInspector is freely available at http://www.litinspector.org/.
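The reported F-measure is the standard harmonic mean of precision and recall, which can be verified from the figures above in a couple of lines:

```python
def f_measure(precision, recall):
    """Balanced F-measure (F1): harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Recompute from the reported precision (96.8%) and recall (86.6%).
print(round(f_measure(0.968, 0.866), 3))  # 0.914, i.e. the reported 91.4%
```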
doi:10.1093/nar/gkp303
PMCID: PMC2703902  PMID: 19417065
20.  The cost-effectiveness of preventing mother-to-child transmission of HIV in low- and middle-income countries: systematic review 
Background
Although highly effective prevention interventions exist, the epidemic of paediatric HIV continues to challenge control efforts in resource-limited settings. We reviewed the cost-effectiveness of interventions to prevent mother-to-child transmission (MTCT) of HIV in low- and middle-income countries (LMICs). This article presents syntheses of evidence on the costs, effects and cost-effectiveness of HIV MTCT strategies for LMICs from the published literature and evaluates their implications for policy and future research.
Methods
Candidate studies were identified through a comprehensive database search including PubMed, Embase, Cochrane Library, and EconLit restricted by language (English or French), date (January 1st, 1994 to January 17th, 2011) and article type (original research). Articles reporting full economic evaluations of interventions to prevent or reduce HIV MTCT were eligible for inclusion. We searched article bibliographies to identify additional studies. Two authors independently assessed eligibility and extracted data from studies retained for review. Study quality was appraised using a modified BMJ checklist for economic evaluations. Data were synthesised in narrative form.
Results
We identified 19 articles published in 9 journals from 1996 to 2010, 16 concerning sub-Saharan Africa. Collectively, the articles suggest that interventions to prevent paediatric infections are cost-effective in a variety of LMIC settings as measured against accepted international benchmarks. In concentrated epidemics where HIV prevalence in the general population is very low, MTCT strategies based on universal testing of pregnant women may not compare well against cost-effectiveness benchmarks, or may satisfy formal criteria for cost-effectiveness but offer a low relative value as compared to competing interventions to improve population health.
Conclusions and Recommendations
Interventions to prevent HIV MTCT are compelling on economic grounds in many resource-limited settings and should remain at the forefront of global HIV prevention efforts. Future cost-effectiveness analyses can help to ensure that pMTCT interventions for LMICs reach their full potential by focussing on unanswered questions in four areas: local assessment of rapidly evolving HIV MTCT options; strategies to improve coverage and reach underserved populations; evaluation of a more comprehensive set of MTCT approaches including primary HIV prevention and reproductive counselling; integration of HIV MTCT and other sexual and reproductive health services.
doi:10.1186/1478-7547-9-3
PMCID: PMC3045936  PMID: 21306625
21.  PhysiomeSpace: digital library service for biomedical data 
Every research laboratory has a wealth of biomedical data locked up, which, if shared with other experts, could dramatically improve biomedical and healthcare research. With the PhysiomeSpace service, it is now possible to share biomedical data with selected users in an easy, controlled and safe way with a few clicks. The digital library service is managed using a client–server approach. The client application is used to import, fuse and enrich the data information according to the PhysiomeSpace resource ontology and upload/download the data to the library. The server services are hosted on the Biomed Town community portal, where through a web interface, the user can complete the metadata curation and share and/or publish the data resources. A search service capitalizes on the domain ontology and on the enrichment of metadata for each resource, providing a powerful discovery environment. Once the users have found the data resources they are interested in, they can add them to their basket, following a metaphor popular in e-commerce web sites. When all the necessary resources have been selected, the user can download the basket contents into the client application. The digital library service is now in beta and open to the biomedical research community.
doi:10.1098/rsta.2010.0023
PMCID: PMC3263791  PMID: 20478910
physiome; virtual physiological human; living human project; digital library; data sharing
22.  CAMbase – A XML-based bibliographical database on Complementary and Alternative Medicine (CAM) 
The term "Complementary and Alternative Medicine (CAM)" covers a variety of approaches to medical theory and practice, which are not commonly accepted by representatives of conventional medicine. In the past two decades, these approaches have been studied in various areas of medicine. Although there appears to be a growing number of scientific publications on CAM, the complete spectrum of complementary therapies still requires more information about published evidence. A majority of these research publications are still not listed in electronic bibliographical databases such as MEDLINE. However, with a growing demand by patients for such therapies, physicians increasingly need an overview of scientific publications on CAM. Bearing this in mind, CAMbase, a bibliographical database on CAM, was launched in order to close this gap. It can be accessed online free of charge.
The user can peruse more than 80,000 records from over 30 journals and periodicals on CAM, which are stored in CAMbase. A special search engine performing syntactic and semantic analysis of textual phrases allows the user to find relevant bibliographical information on CAM quickly. Between August 2003 and July 2006, 43,299 search queries, an average of 38 search queries per day, were registered, focussing on CAM topics such as acupuncture, cancer or general safety aspects. Analysis of the requests led to the conclusion that CAMbase is used not only by scientists and researchers but also by physicians and patients who want to find out more about CAM.
Closely related to this effort is our aim to establish a modern library center on Complementary Medicine which offers the complete spectrum of a modern digital library including a document delivery-service for physicians, therapists, scientists and researchers.
doi:10.1186/1742-5581-4-2
PMCID: PMC1853104  PMID: 17407592
23.  Digital pathology: Attitudes and practices in the Canadian pathology community 
Digital pathology is a rapidly evolving niche in the world of pathology and is likely to increase in popularity as technology improves. We administered a questionnaire to pathologists and pathology residents across Canada in order to determine their current experiences with and attitudes towards digital pathology, which modalities digital pathology is best suited for, and the need for training in digital pathology amongst pathology residents and staff. An online survey consisting of 24 yes/no, multiple-choice and free-text questions regarding digital pathology was sent out via e-mail to all members of the Canadian Association of Pathologists and pathology residents across Canada. Survey results showed that telepathology (TP) is used in approximately 43% of institutions, primarily for teaching purposes (65%), followed by operating room consults (46%). Seventy-one percent of respondents believe there is a need for TP in their practice; 85% use digital images in their practice. The top two favored applications for digital pathology are teaching and consultation services, with the main advantage being easier access to cases. The main limitations of using digital pathology are cost and image/diagnostic quality. Sixty-two percent of respondents would attend training courses in pathology informatics and 91% think informatics should be part of residency training. The results of the survey indicate that pathologists and residents across Canada do see a need for TP and the use of digital images in their daily practice. Integration of an informatics component into resident training programs and courses for staff pathologists would be welcomed.
doi:10.4103/2153-3539.108540
PMCID: PMC3624704  PMID: 23599903
Digital pathology; informatics; pathology; telepathology; virtual slides
24.  Tobacco Company Efforts to Influence the Food and Drug Administration-Commissioned Institute of Medicine Report Clearing the Smoke: An Analysis of Documents Released through Litigation 
PLoS Medicine  2013;10(5):e1001450.
Stanton Glantz and colleagues investigate efforts by tobacco companies to influence Clearing the Smoke, a 2001 Institute of Medicine report on harm reduction tobacco products.
Please see later in the article for the Editors' Summary
Background
Spurred by the creation of potential modified risk tobacco products, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to assess the science base for tobacco “harm reduction,” leading to the 2001 IOM report Clearing the Smoke. The objective of this study was to determine how the tobacco industry organized to try to influence the IOM committee that prepared the report.
Methods and Findings
We analyzed previously secret tobacco industry documents in the University of California, San Francisco Legacy Tobacco Documents Library, and IOM public access files. (A limitation of this method is that the tobacco companies have withheld some possibly relevant documents.) Tobacco companies considered the IOM report to have high-stakes regulatory implications. They developed and implemented strategies with consulting and legal firms to access the IOM proceedings. When the IOM study staff invited the companies to provide information on exposure and disease markers, clinical trial design for safety and efficacy, and implications for initiation and cessation, tobacco company lawyers, consultants, and in-house regulatory staff shaped presentations from company scientists. Although the available evidence does not permit drawing cause-and-effect conclusions, and the IOM may have come to the same conclusions without the influence of the tobacco industry, the companies were pleased with the final report, particularly the recommendations for a tiered claims system (with separate tiers for exposure and risk, which they believed would ease the process of qualifying for a claim) and license to sell products comparable to existing conventional cigarettes (“substantial equivalence”) without prior regulatory approval. Some principles from the IOM report, including elements of the substantial equivalence recommendation, appear in the 2009 Family Smoking Prevention and Tobacco Control Act.
Conclusions
Tobacco companies strategically interacted with the IOM to win several favored scientific and regulatory recommendations.
Editors' Summary
Background
Up to half of tobacco users will die of cancer, lung disease, heart disease, stroke, or another tobacco-related disease. Cigarettes and other tobacco products cause disease because they expose their users to nicotine and numerous other toxic chemicals. Tobacco companies have been working to develop a “safe” cigarette for more than half a century. Initially, their attention focused on cigarettes that produced lower tar and nicotine yields in machine-smoking tests. These products were perceived as “safer” products by the public and scientists for many years, but it is now known that the use of low-yield cigarettes can actually expose smokers to higher levels of toxins than standard cigarettes. More recently, the tobacco companies have developed other products (for example, products that heat aerosols of nicotine, rather than burning the tobacco) that claim to reduce harm and the risk of tobacco-related disease, but they can only market these modified risk tobacco products in the US after obtaining Food and Drug Administration (FDA) approval. In 1999, the FDA commissioned the US Institute of Medicine (IOM, an influential source of independent expert advice on medical issues) to assess the science base for tobacco “harm reduction.” In 2001, the IOM published its report Clearing the Smoke: Assessing the Science Base for Tobacco Harm Reduction, which, although controversial, set the tone for the development and regulation of tobacco products in the US, particularly those claiming to be less dangerous, in subsequent years.
Why Was This Study Done?
Tobacco companies have a long history of working to shape scientific discussions and agendas. For example, they have produced research results designed to “create controversy” about the dangers of smoking and secondhand smoke. In this study, the researchers investigate how tobacco companies organized to try to influence the IOM committee that prepared the Clearing the Smoke report on modified risk tobacco products by analyzing tobacco industry and IOM documents.
What Did the Researchers Do and Find?
The researchers searched the Legacy Tobacco Documents Library (a collection of internal tobacco industry documents released as a result of US litigation cases) for documents outlining how tobacco companies tried to influence the IOM Committee to Assess the Science Base for Tobacco Harm Reduction and created a timeline of events from the 1,000 or so documents they retrieved. They confirmed and supplemented this timeline using information in 80 files that detailed written interactions between the tobacco companies and the IOM committee, which they obtained through a public records access request. Analysis of these documents indicates that the tobacco companies considered the IOM report to have important regulatory implications, that they developed and implemented strategies with consulting and legal firms to access the IOM proceedings, and that tobacco company lawyers, consultants, and regulatory staff shaped presentations to the IOM committee by company scientists on various aspects of tobacco harm reduction products. The analysis also shows that tobacco companies were pleased with the final report, particularly two of its recommendations: first, that tobacco products can be marketed with exposure or risk reduction claims, provided the products substantially reduce exposure and the behavioral and health consequences of these products are determined in post-marketing surveillance and epidemiological studies (“tiered testing”); and second, that new products comparable to existing conventional cigarettes (“substantial equivalence”) can be marketed without prior regulatory approval, provided no claim of reduced exposure or risk is made.
What Do These Findings Mean?
These findings suggest that tobacco companies used their legal and regulatory staff to access the IOM committee that advised the FDA on modified risk tobacco products and that they used this access to deliver specific, carefully formulated messages designed to serve their business interests. Although these findings provide no evidence that the efforts of tobacco companies influenced the IOM committee in any way, they show that the companies were satisfied with the final IOM report and its recommendations, some of which have policy implications that continue to reverberate today. The researchers therefore call for the FDA and other regulatory bodies to remember that they are dealing with companies with a long history of intentionally misleading the public when assessing the information presented by tobacco companies as part of the regulatory process and to actively protect their public-health policies from the commercial interests of the tobacco industry.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001450.
This study is further discussed in a PLOS Medicine Perspective by Thomas Novotny
The World Health Organization provides information about the dangers of tobacco (in several languages); for information about the tobacco industry's influence on policy, see the 2009 World Health Organization report Tobacco industry interference with tobacco control
A PLOS Medicine Research Article by Heide Weishaar and colleagues describes tobacco company efforts to undermine the Framework Convention on Tobacco Control, an international instrument for tobacco control
Wikipedia has a page on tobacco harm reduction (note: Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The IOM report Clearing the Smoke: Assessing the Science Base for Tobacco Harm Reduction is available to read online
The Legacy Tobacco Documents Library is a public, searchable database of tobacco company internal documents detailing their advertising, manufacturing, marketing, sales, and scientific activities
The University of California, San Francisco Center for Tobacco Control Research and Education is the focal point for University of California, San Francisco (UCSF) scientists in disciplines ranging from the molecular biology of nicotine addiction to political science, who combine their efforts to eradicate tobacco use and tobacco-induced cancer and other diseases worldwide
SmokeFree, a website provided by the UK National Health Service, offers advice on quitting smoking and includes personal stories from people who have stopped smoking
Smokefree.gov, from the US National Cancer Institute, offers online tools and resources to help people quit smoking
doi:10.1371/journal.pmed.1001450
PMCID: PMC3665841  PMID: 23723740
25.  pubmed2ensembl: A Resource for Mining the Biological Literature on Genes 
PLoS ONE  2011;6(9):e24716.
Background
The last two decades have witnessed a dramatic acceleration in the production of genomic sequence information and publication of biomedical articles. Despite the fact that genome sequence data and publications are two of the most heavily relied-upon sources of information for many biologists, very little effort has been made to systematically integrate data from genomic sequences directly with the biological literature. For a limited number of model organisms, dedicated teams manually curate publications about genes; however, for species without such dedicated staff, many thousands of articles are never mapped to genes or genomic regions.
Methodology/Principal Findings
To overcome the lack of integration between genomic data and biological literature, we have developed pubmed2ensembl (http://www.pubmed2ensembl.org), an extension to the BioMart system that links over 2,000,000 articles in PubMed to nearly 150,000 genes in Ensembl from 50 species. We use several sources of curated (e.g., Entrez Gene) and automatically generated (e.g., gene names extracted through text-mining on MEDLINE records) sources of gene-publication links, allowing users to filter and combine different data sources to suit their individual needs for information extraction and biological discovery. In addition to extending the Ensembl BioMart database to include published information on genes, we also implemented a scripting language for automated BioMart construction and a novel BioMart interface that allows text-based queries to be performed against PubMed and PubMed Central documents in conjunction with constraints on genomic features. Finally, we illustrate the potential of pubmed2ensembl through typical use cases that involve integrated queries across the biomedical literature and genomic data.
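The combined literature-plus-genomic-feature queries described above follow the general BioMart XML query protocol, in which a client builds an XML document naming a dataset, filters, and attributes, and posts it to a mart service. As a minimal sketch only (the dataset name "hsapiens_gene_ensembl", the attribute name "pubmed_id", and the filter shown are illustrative assumptions, not identifiers taken from the article), such a query could be constructed like this:

```python
# Sketch: build a BioMart-style XML query asking for Ensembl gene IDs
# together with linked PubMed IDs for genes on one chromosome.
# Dataset, filter, and attribute names are hypothetical placeholders;
# the actual pubmed2ensembl mart may use different identifiers.
import xml.etree.ElementTree as ET

def build_biomart_query(dataset, attributes, filters=None):
    """Return a BioMart XML query string for the given dataset."""
    query = ET.Element("Query", virtualSchemaName="default",
                       formatter="TSV", header="0", uniqueRows="1")
    ds = ET.SubElement(query, "Dataset", name=dataset, interface="default")
    for name, value in (filters or {}).items():
        ET.SubElement(ds, "Filter", name=name, value=value)
    for attr in attributes:
        ET.SubElement(ds, "Attribute", name=attr)
    return ET.tostring(query, encoding="unicode")

xml_query = build_biomart_query(
    "hsapiens_gene_ensembl",
    ["ensembl_gene_id", "external_gene_name", "pubmed_id"],
    filters={"chromosome_name": "7"},
)
print(xml_query)
```

In a typical BioMart workflow, the resulting XML string would then be sent as the `query` parameter of an HTTP POST to the mart's service endpoint, which streams back the matching rows in the requested tabular format.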
Conclusion/Significance
By allowing biologists to find the relevant literature on specific genomic regions or sets of functionally related genes more easily, pubmed2ensembl offers a much-needed genome informatics inspired solution to accessing the ever-increasing biomedical literature.
doi:10.1371/journal.pone.0024716
PMCID: PMC3183000  PMID: 21980353
