Results 1-25 of 786,140

1.  Assessment of Collaboration and Interoperability in an Information Management System to Support Bioscience Research 
Biomedical researchers often have to work on massive, detailed, and heterogeneous datasets that raise new challenges of information management. This study reports an investigation into the nature of the problems faced by the researchers in two bioscience test laboratories when dealing with their data management applications. Data were collected using ethnographic observations, questionnaires, and semi-structured interviews. The major problems identified in working with these systems were related to data organization, publications, and collaboration. The interoperability standards were analyzed using a C4I framework at the level of connection, communication, consolidation, and collaboration. Such an analysis was found to be useful in judging the capabilities of data management systems at different levels of technological competency. While collaboration and system interoperability are the “must have” attributes of these biomedical scientific laboratory information management applications, usability and human interoperability are the other design concerns that must also be addressed for easy use and implementation.
PMCID: PMC2815423  PMID: 20351900
2.  Semantic interoperability – Role and operationalization of the International Classification of Functioning, Disability and Health (ICF) 
Introduction
Globalization and the advances in modern information and communication technologies (ICT) are changing the practice of health care and policy making. In the globalized economies of the 21st century, health systems will have to respond to the needs of increasingly mobile citizens, patients and providers. At the same time, the increased use of ICT is enabling health systems to systematize, process and integrate multiple data silos from different settings and at various levels. To meet these challenges effectively, the creation of an interoperable, global e-Health information infrastructure is critical. Data interoperability within and across heterogeneous health systems, however, is often hampered by terminological inconsistencies and the lack of a common language, particularly when multiple communities of practice from different countries are involved.
Aim
Discuss the functionality and ontological requirements for ICF in achieving semantic interoperability of e-Health information systems.
Results
Most solution attempts for interoperability to date have only focused on technical exchange of data in common formats. Automated health information exchange and aggregation is a very complex task which depends on many crucial prerequisites. The overall architecture of the health information system has to be defined clearly at macro and micro levels in terms of its building blocks and their characteristics. The taxonomic and conceptual features of the ICF make it an important architectural element in the overall design of e-Health information systems. To use the ICF in a digital environment the classification needs to be formalized and modeled using ontological principles and description logic. Ontological modeling is also required for linking assessment instruments and clinical terminologies (e.g. SNOMED) to the ICF.
Conclusions
To achieve semantic interoperability of e-Health systems, a carefully elaborated overall health information system architecture has to be established. As a content standard, the ICF can play a pivotal role in the meaningful and automated compilation and exchange of health information across sectors and levels. In order to fulfill this role, an ICF ontology needs to be developed.
PMCID: PMC2707550
semantic interoperability; health and disability classification; ontology development
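As a loose illustration of the ontological modeling called for in item 2, the following minimal sketch encodes a single ICF-like category as an RDFS class hierarchy using Python's rdflib. The namespace, URIs, and the choice of rdflib are assumptions made purely for illustration; the paper does not prescribe a concrete formalization.

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

# Hypothetical namespace; the paper does not define a canonical ICF URI scheme.
ICF = Namespace("http://example.org/icf#")

g = Graph()
g.bind("icf", ICF)

# A tiny slice of an ICF-like class hierarchy (the code shown is illustrative).
g.add((ICF.BodyFunction, RDF.type, RDFS.Class))
g.add((ICF.b280, RDF.type, RDFS.Class))
g.add((ICF.b280, RDFS.subClassOf, ICF.BodyFunction))
g.add((ICF.b280, RDFS.label, Literal("Sensation of pain")))

# Subsumption relationships of this kind are what a description-logic
# formalization would make machine-checkable.
for parent in g.objects(ICF.b280, RDFS.subClassOf):
    print(parent)
```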
3.  Argo: an integrative, interactive, text mining-based workbench supporting curation 
Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well-established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues, we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphical user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser, which saves the user from often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks.
Database URL: http://www.nactem.ac.uk/Argo
doi:10.1093/database/bas010
PMCID: PMC3308166  PMID: 22434844
4.  Definition of Information Technology Architectures for Continuous Data Management and Medical Device Integration in Diabetes 
The growing availability of continuous data from medical devices in diabetes management makes it crucial to define novel information technology architectures for efficient data storage, data transmission, and data visualization. The new paradigm of care demands the sharing of information in interoperable systems as the only way to support patient care in a continuum of care scenario. The technological platforms should support all the services required by the actors involved in the care process, located in different scenarios and managing diverse information for different purposes. This article presents basic criteria for defining flexible and adaptive architectures that are capable of interoperating with external systems, and integrating medical devices and decision support tools to extract all the relevant knowledge to support diabetes care.
PMCID: PMC2769800  PMID: 19885276
diabetes; continuous data management; device interoperability; software architecture
5.  A Federated Design for a Neurobiological Simulation Engine: The CBI Federated Software Architecture 
PLoS ONE  2012;7(1):e28956.
Simulator interoperability and extensibility have become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitate communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsolete components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components.
doi:10.1371/journal.pone.0028956
PMCID: PMC3252298  PMID: 22242154
6.  An open source infrastructure for managing knowledge and finding potential collaborators in a domain-specific subset of PubMed, with an example from human genome epidemiology 
BMC Bioinformatics  2007;8:436.
Background
Identifying relevant research in an ever-growing body of published literature is becoming increasingly difficult. Establishing domain-specific knowledge bases may be a more effective and efficient way to manage and query information within specific biomedical fields. Adopting a controlled vocabulary is a critical step toward data integration and interoperability in any information system. We present an open source infrastructure that provides a powerful capacity for managing and mining data within a domain-specific knowledge base. As a practical application of our infrastructure, we present two applications – Literature Finder and Investigator Browser – as well as a tool set for automating the data curation process for the human genome published literature database. The design of this infrastructure makes the system potentially extensible to other data sources.
Results
Information retrieval and usability tests demonstrated that the system had high rates of recall and precision, 90% and 93%, respectively. The system was easy to learn, easy to use, reasonably speedy, and effective.
Conclusion
The open source system infrastructure presented in this paper provides a novel approach to managing and querying information and knowledge from domain-specific PubMed data. Using the controlled vocabulary UMLS enhanced data integration and interoperability and the extensibility of the system. In addition, by using MVC-based design and Java as a platform-independent programming language, this system provides a potential infrastructure for any domain-specific knowledge base in the biomedical field.
doi:10.1186/1471-2105-8-436
PMCID: PMC2248211  PMID: 17996092
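For readers unfamiliar with the retrieval metrics reported in item 6, the following minimal sketch shows how precision and recall are conventionally computed. The counts are hypothetical, chosen only so the resulting values land near the reported 93% precision and 90% recall.

```python
def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Standard IR definitions: precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts, chosen only to illustrate values close to those
# reported in the abstract (93% precision, roughly 90% recall).
p, r = precision_recall(true_positives=93, false_positives=7, false_negatives=10)
print(f"precision = {p:.2%}, recall = {r:.2%}")
```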
7.  Advancing Personalized Health Care through Health Information Technology: An Update from the American Health Information Community's Personalized Health Care Workgroup 
The Personalized Health Care Workgroup of the American Health Information Community was formed to determine what is needed to promote standard reporting and incorporation of medical genetic/genomic tests and family health history data in electronic health records. The Workgroup has examined and clarified a range of issues related to this information, including interoperability standards and requirements for confidentiality, privacy, and security, in the course of developing recommendations to facilitate its capture, storage, transmission, and use in clinical decision support. The Workgroup is one of several appointed by the American Health Information Community to study high-priority issues related to the implementation of interoperable electronic health records in the United States. It is also a component of the U.S. Department of Health and Human Services' Personalized Health Care Initiative, which is designed to create a foundation upon which information technology that supports personalized, predictive, and pre-emptive health care can be built.
doi:10.1197/jamia.M2718
PMCID: PMC2442266  PMID: 18436899
8.  Interoperability of Neuroscience Modeling Software 
Neuroinformatics  2007;5(2):127-138.
Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible, including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However, more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20, 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance.
PMCID: PMC2658651  PMID: 17873374
Neural simulation software; Simulation language; Standards; XML; Model publication
9.  Open Data, Open Source and Open Standards in chemistry: The Blue Obelisk five years on 
Background
The Blue Obelisk movement was established in 2005 as a response to the lack of Open Data, Open Standards and Open Source (ODOSOS) in chemistry. It aims to make it easier to carry out chemistry research by promoting interoperability between chemistry software, encouraging cooperation between Open Source developers, and developing community resources and Open Standards.
Results
This contribution looks back on the work carried out by the Blue Obelisk in the past 5 years and surveys progress and remaining challenges in the areas of Open Data, Open Standards, and Open Source in chemistry.
Conclusions
We show that the Blue Obelisk has been very successful in bringing together researchers and developers with common interests in ODOSOS, leading to development of many useful resources freely available to the chemistry community.
doi:10.1186/1758-2946-3-37
PMCID: PMC3205042  PMID: 21999342
10.  An Integrated Framework to Achieve Interoperability in Person-Centric Health Management 
The need for high-quality out-of-hospital healthcare is a known socioeconomic problem. Exploiting ICT's evolution, ad-hoc telemedicine solutions have been proposed in the past. Integrating such ad-hoc solutions in order to cost-effectively support the entire healthcare cycle is still a research challenge. In order to handle the heterogeneity of relevant information and to overcome the fragmentation of out-of-hospital instrumentation in person-centric healthcare systems, a shared and open source interoperability component can be adopted, which is ontology driven and based on the semantic web data model. The feasibility and the advantages of the proposed approach are demonstrated by presenting the use case of real-time monitoring of patients' health and their environmental context.
doi:10.1155/2011/549282
PMCID: PMC3146997  PMID: 21811499
11.  OntoTrader: An Ontological Web Trading Agent Approach for Environmental Information Retrieval 
The Scientific World Journal  2014;2014:560296.
Modern Web-based Information Systems (WIS) are becoming increasingly necessary to provide support for users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. Design of these systems requires the use of standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent and the behavioral framework for the OntoTrader agent in SOLERES, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes the formalization, an experimental testing environment in three scenarios, and a tool that allows our proposal to be evaluated and validated.
doi:10.1155/2014/560296
PMCID: PMC3995108  PMID: 24977211
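To make the “Query-Searching/Recovering-Response” retrieval model of item 11 more concrete, the sketch below shows one way a client might submit a SPARQL query to a trader's query endpoint using Python's SPARQLWrapper. The endpoint URL, prefixes, and properties are hypothetical; the actual OntoTrader agents communicate over the JADE platform.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and vocabulary; this only illustrates the SPARQL
# side of a query/response exchange with a trading service.
endpoint = SPARQLWrapper("http://example.org/soleres/trader/sparql")
endpoint.setQuery("""
    PREFIX emis: <http://example.org/emis#>
    SELECT ?layer ?source
    WHERE {
        ?layer a emis:EnvironmentalLayer ;
               emis:theme "land-use" ;
               emis:providedBy ?source .
    }
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["layer"]["value"], "<-", row["source"]["value"])
```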
12.  Review of Semantically Interoperable Electronic Health Records for Ubiquitous Healthcare 
In order to provide more effective and personalized healthcare services to patients and healthcare professionals, intelligent active knowledge management and reasoning systems with semantic interoperability are needed. Technological developments have changed ubiquitous healthcare, making it more semantically interoperable and individual patient-based; however, there are also limitations to these methodologies. Based upon an extensive review of international literature, this paper describes two technological approaches to semantically interoperable electronic health records for ubiquitous healthcare data management: the ontology-based model and the information model based on openEHR archetypes, together with their links to standard terminologies such as SNOMED-CT.
doi:10.4258/hir.2010.16.1.1
PMCID: PMC3089838  PMID: 21818417
Ubiquitous Healthcare; Electronic Health Record; Ontology; OpenEHR Archetype; SNOMED-CT
13.  A Framework for evaluating the costs, effort, and value of nationwide health information exchange 
Objective
The nationwide health information network (NHIN) has been proposed to securely link community and state health information exchange (HIE) entities to create a national, interoperable network for sharing healthcare data in the USA. This paper describes a framework for evaluating the costs, effort, and value of nationwide data exchange as the NHIN moves toward a production state. The paper further presents the results of an initial assessment of the framework by those engaged in HIE activities.
Design
Using a literature review and knowledge gained from active NHIN technology and policy development, the authors constructed a framework for evaluating the costs, effort, and value of data exchange between an HIE entity and the NHIN.
Measurement
An online survey was used to assess the perceived usefulness of the metrics in the framework among HIE professionals and researchers.
Results
The framework is organized into five broad categories: implementation; technology; policy; data; and value. Each category enumerates a variety of measures and measure types. Survey respondents generally indicated the framework contained useful measures for current and future use in HIE and NHIN evaluation. Answers varied slightly based on a respondent's participation in active development of NHIN components.
Conclusion
The proposed framework supports efforts to measure the costs, effort, and value associated with nationwide data exchange. Collecting longitudinal data along the NHIN's path to production should help with the development of an evidence base that will drive adoption, create value, and stimulate further investment in nationwide data exchange.
doi:10.1136/jamia.2009.000570
PMCID: PMC2995720  PMID: 20442147
Computer communication networks; evaluation studies as topic; medical informatics; United States
14.  Improving newborn screening laboratory test ordering and result reporting using health information exchange 
Capture, coding and communication of newborn screening (NBS) information represent a challenge for public health laboratories, health departments, hospitals, and ambulatory care practices. An increasing number of conditions targeted for screening and the complexity of interpretation contribute to a growing need for integrated information-management strategies. This makes NBS an important test of tools and architecture for electronic health information exchange (HIE) in this convergence of individual patient care and population health activities. For this reason, the American Health Information Community undertook three tasks described in this paper. First, a newborn screening use case was established to facilitate standards harmonization for common terminology and interoperability specifications guiding HIE. Second, newborn screening coding and terminology were developed for integration into electronic HIE activities. Finally, clarification of privacy, security, and clinical laboratory regulatory requirements governing information exchange was provided, serving as a framework to establish pathways for improving screening program timeliness, effectiveness, and efficiency of quality patient care services.
doi:10.1197/jamia.M3295
PMCID: PMC2995628  PMID: 20064796
Newborn screening; health information exchange; electronic health record; privacy and security
15.  The next-generation electronic health record: perspectives of key leaders from the US Department of Veterans Affairs 
The rapid change in healthcare has focused attention on the necessary development of a next-generation electronic health record (EHR) to support system transformation and more effective patient-centered care. The Department of Veterans Affairs (VA) is developing plans for the next-generation EHR to support improved care delivery for veterans. To understand the needs for a next-generation EHR, we interviewed 14 VA operational, clinical and informatics leaders for their vision about system needs. Leaders consistently identified priorities for development in the areas of cognitive support, information synthesis, teamwork and communication, interoperability, data availability, usability, customization, and information management. The need to reconcile different EHR initiatives currently underway in the VA, as well as opportunities for data sharing, will be critical for continued progress. These findings may support the VA's effort for evolutionary change to its information system and draw attention to necessary research and development for a next-generation information system and EHR nationally.
doi:10.1136/amiajnl-2013-001748
PMCID: PMC3715365  PMID: 23599227
Electronic health records; Clinical decision support systems; Cognitive support; Interoperability; Usability
16.  Pains and Palliation in Distributed Research Networks: Lessons from the Field  
Large-scale comparative effectiveness research studies require detailed clinical data collected across disparate clinical practice settings and institutions. Distributed research networks (DRNs) have been promoted as one approach to wide-scale data sharing that enables data sharing organizations to retain local data ownership and access control. Despite significant investments in distributed data sharing technologies, clinical research networks using distributed methods remain difficult to implement due to a broad range of organizational and technical barriers. The panelists represent four different research networks that are in different stages of implementation maturity and are leveraging different informatics technologies. Challenges common to all DRNs include governance, semantic interoperability, and identity management. This panel will describe some of the critical challenges and experimental solutions to implementing, expanding, and sustaining DRNs. Each panelist will focus on a specific challenge that requires new informatics tools to reduce barriers to participation and data sharing.
PMCID: PMC3845789  PMID: 24303246
17.  Computational toxicology using the OpenTox application programming interface and Bioclipse 
BMC Research Notes  2011;4:487.
Background
Toxicity is a complex phenomenon involving potential adverse effects on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. Integrating biological and chemical information sources, however, requires a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications.
Findings
This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources.
Conclusions
A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, taking advantage of the Bioclipse workbench handling the technical layers.
doi:10.1186/1756-0500-4-487
PMCID: PMC3264531  PMID: 22075173
18.  Integration and visualization of systems biology data in context of the genome 
BMC Bioinformatics  2010;11:382.
Background
High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of this data is critical to understanding the regulatory logic encoded in the genome by which the cell dynamically affects its physiology and interacts with its environment.
Results
The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data.
A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to its coordinates on the genome.
Conclusions
Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment.
doi:10.1186/1471-2105-11-382
PMCID: PMC2912892  PMID: 20642854
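The central idea of item 18, that genomic coordinates act as a common key for joining heterogeneous data, can be illustrated with a small in-process SQL database. The table layout and values below are hypothetical; the Gaggle Genome Browser itself uses its own embedded database together with SQL and R.

```python
import sqlite3

# In-memory database standing in for an in-process store of genome tracks.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE expression (chrom TEXT, start INT, end_ INT, log_ratio REAL);
    CREATE TABLE chip_peaks (chrom TEXT, start INT, end_ INT, tf TEXT);
    INSERT INTO expression VALUES ('chr1', 1000, 1500, 2.1), ('chr1', 5000, 5400, -0.7);
    INSERT INTO chip_peaks  VALUES ('chr1', 1200, 1300, 'TF_A'), ('chr2', 800, 900, 'TF_B');
""")

# Genomic coordinates act as the common key: join tracks by chromosomal overlap.
rows = con.execute("""
    SELECT e.chrom, e.start, e.end_, e.log_ratio, p.tf
    FROM expression e
    JOIN chip_peaks p
      ON p.chrom = e.chrom
     AND p.start < e.end_
     AND p.end_  > e.start
""").fetchall()
print(rows)  # [('chr1', 1000, 1500, 2.1, 'TF_A')]
```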
19.  Leveraging Standards to Support Patient-Centric Interdisciplinary Plans of Care 
As health care systems and providers move towards meaningful use of electronic health records, the once distant vision of collaborative patient-centric, interdisciplinary plans of care, generated and updated across organizations and levels of care, may soon become a reality. Effective care planning is included in the proposed Stages 2–3 Meaningful Use quality measures. To facilitate interoperability, standardization of plan of care messaging, content, information and terminology models are needed. This degree of standardization requires local and national coordination. The purpose of this paper is to review some existing standards that may be leveraged to support development of interdisciplinary patient-centric plans of care. Standards are then applied to a use case to demonstrate one method for achieving patient-centric and interoperable interdisciplinary plan of care documentation. Our pilot work suggests that existing standards provide a foundation for adoption and implementation of patient-centric plans of care that are consistent with federal requirements.
PMCID: PMC3243254  PMID: 22195088
plan of care; electronic health record; meaningful use; terminology standards
20.  The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science 
The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making.
doi:10.1155/2008/369601
PMCID: PMC2375972  PMID: 18483570
21.  Finding and sharing: new approaches to registries of databases and services for the biomedical sciences 
The recent explosion of biological data and the concomitant proliferation of distributed databases make it challenging for biologists and bioinformaticians to discover the best data resources for their needs, and the most efficient way to access and use them. Despite a rapid acceleration in uptake of syntactic and semantic standards for interoperability, it is still difficult for users to find which databases support the standards and interfaces that they need. To solve these problems, several groups are developing registries of databases that capture key metadata describing the biological scope, utility, accessibility, ease-of-use and existence of web services allowing interoperability between resources. Here, we describe some of these initiatives, including a novel formalism, the Database Description Framework, for describing database operations and functionality and encouraging good database practice. We expect such approaches will result in improved discovery, uptake and utilization of data resources.
Database URL: http://www.casimir.org.uk/casimir_ddf
doi:10.1093/database/baq014
PMCID: PMC2911849  PMID: 20627863
22.  Incorporating collaboratory concepts into informatics in support of translational interdisciplinary biomedical research 
Due to its complex nature, modern biomedical research has become increasingly interdisciplinary and collaborative. Although a necessity, interdisciplinary biomedical collaboration is difficult. There is, however, a growing body of literature on the study and fostering of collaboration in fields such as computer supported cooperative work (CSCW) and information science (IS). These studies of collaboration provide insight into how to potentially alleviate the difficulties of interdisciplinary collaborative research. We therefore undertook a cross-cutting study of science and engineering collaboratories to identify emergent themes. We review many relevant collaboratory concepts: (a) general collaboratory concepts across many domains: communication, common workspace and coordination, and data sharing and management; (b) specific collaboratory concepts of particular biomedical relevance: data integration and analysis, security structure, metadata and data provenance, and interoperability and data standards; (c) critical environmental factors that support collaboratories: administrative and management structure, technical support, and available funding; and (d) future considerations for biomedical collaboration: appropriate training and long-term planning. In our opinion, the collaboratory concepts we discuss can guide planning and design of future collaborative infrastructure by biomedical informatics researchers to alleviate some of the difficulties of interdisciplinary biomedical collaboration.
doi:10.1016/j.ijmedinf.2008.06.011
PMCID: PMC2606933  PMID: 18706852
Collaboration; Biomedical informatics; Computer supported collaborative work; Collaboratories; Social and technical issues; Bioinformatics
23.  Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases 
BMC Bioinformatics  2013;14:126.
Background
In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers.
Results
We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases.
Conclusions
BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
doi:10.1186/1471-2105-14-126
PMCID: PMC3680174  PMID: 23586394
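To illustrate the kind of query item 23 describes being generated automatically, the sketch below runs a SPARQL SELECT over a tiny in-memory RDF graph standing in for an annotated RDF view. The graph, predicates, and gene identifier are invented for illustration; BioSemantic derives its queries from RDF views of actual relational schemas and exposes them through Semantic Web Services.

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/biosemantic#")

# Tiny in-memory graph standing in for an RDF view generated over a
# relational schema (predicates and data are illustrative only).
g = Graph()
gene = EX.gene_Os01g0100100
g.add((gene, RDF.type, EX.Gene))
g.add((gene, EX.locatedOn, Literal("chromosome 1")))
g.add((gene, EX.associatedTrait, Literal("drought tolerance")))

# The kind of SELECT query that BioSemantic-style tooling might generate
# and wrap in a Semantic Web Service.
query = """
    PREFIX ex: <http://example.org/biosemantic#>
    SELECT ?gene ?trait
    WHERE {
        ?gene a ex:Gene ;
              ex:locatedOn "chromosome 1" ;
              ex:associatedTrait ?trait .
    }
"""
for row in g.query(query):
    print(row.gene, row.trait)
```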
24.  Ontologies and Standards in Bioscience Research: For Machine or for Human 
Ontologies and standards are very important parts of today's bioscience research. With the rapid increase of biological knowledge, they provide mechanisms to better store and represent data in a controlled and structured way, so that scientists can share the data, and utilize a wide variety of software and tools to manage and analyze the data. Most of these standards are initially designed for computers to access large amounts of data that are difficult for human biologists to handle, and it is important to keep in mind that ultimately biologists are going to produce and interpret the data. While ontologies and standards must follow strict semantic rules that may not be familiar to biologists, effort must be spent to lower the learning barrier by involving biologists in the process of development, and by providing software and tool support. A standard will not succeed without support from the wider bioscience research community. Thus, it is crucial that these standards be designed not only for machines to read, but also to be scientifically accurate and intuitive to human biologists.
doi:10.3389/fphys.2011.00005
PMCID: PMC3081276  PMID: 21519400
ontology; standard; systems biology
25.  S&I Public Health Reporting Initiative: Improving Standardization of Surveillance 
Objective
The objective of this panel is to inform the ISDS community of the progress made in the Standards & Interoperability (S&I) Framework Public Health Reporting Initiative (PHRI). It will also provide some context on how the initiative is likely to affect biosurveillance reporting in Meaningful Use Stage 3 and future harmonization of data standards requirements for public health reporting.
Introduction
The S&I Framework is an Office of the National Coordinator (ONC) initiative designed to support individual working groups that focus on a specific interoperability challenge. One of these working groups within the S&I Framework is the PHRI, which is using the S&I Framework as a platform for a community-led project focused on simplifying public health reporting and ensuring EHR interoperability with public health information systems. PHRI hopes to create a new public health reporting objective for Meaningful Use Stage 3 that is broader than the current program-specific objectives and will lay the groundwork for all public health reporting in the future. To date, the initiative has received over 30 descriptions of different types of public health reporting that were then grouped into five domain categories. Each domain category was decomposed into component elements and commonalities were identified. The PHRI is now working to reconstruct a single model of public health reporting through a consensus process that will soon lead to a pilot demonstration of the most ready reporting types. This panel will outline progress, challenges, and next steps of the initiative as well as describe how the initiative may affect a standard language for biosurveillance reporting.
Methods
Michael Coletta will provide an introduction and background of the S&I PHRI. He will describe how the PHRI intends to impact reporting in a way that is universal and helpful to both HIT vendors and public health programs.
Nikolay Lipskiy will provide an understanding of the ground breaking nature of collaboration and harmonization that is occurring across public health programs. He will describe the data harmonization process, outcomes, and hopes for the future of this work.
David Birnbaum has been a very active member of PHRI and has consistently advocated for the inclusion of Healthcare Associated Infections (HAI) reporting in Meaningful Use as a model. David has been representing one of the largest user communities among those farthest along toward automated uploading of data to public health agencies. He will describe the opportunities and challenges of this initiative from the perspective of a participant representing an already highly evolved reporting system (CDC’s National Healthcare Safety Network system).
John Abellera has been the steward of the communicable disease reporting user story for the PHRI. He will describe the current challenges to reporting and how the PHRI proposed changes could improve communicable disease reporting efforts.
This will be followed by an open discussion with the audience, intended to elicit reactions regarding an eventual consolidation from individual report-specific specification documents to one core report specification across public health reporting programs, supplemented with both program-specific specifications and a limited number of implementation-specific specifications.
Results
Plan to engage audience: Have a prepared list of questions to pose to the audience for reactions and discussion (to be supplied if participation is low).
PMCID: PMC3692744
Standards; Interoperability; Meaningful Use; Reporting; Stage 3
