Biomedical researchers often work with massive, detailed, and heterogeneous datasets that raise new challenges of information management. This study reports an investigation into the nature of the problems faced by researchers in two bioscience test laboratories when dealing with their data management applications. Data were collected using ethnographic observations, questionnaires, and semi-structured interviews. The major problems identified in working with these systems related to data organization, publications, and collaboration. The interoperability standards were analyzed using a C4I framework at the levels of connection, communication, consolidation, and collaboration. Such an analysis proved useful in judging the capabilities of data management systems at different levels of technological competency. While collaboration and system interoperability are the “must have” attributes of these biomedical scientific laboratory information management applications, usability and human interoperability are further design concerns that must also be addressed for easy use and implementation.
Globalization and advances in modern information and communication technologies (ICT) are changing the practice of health care and policy making. In the globalized economies of the 21st century, health systems will have to respond to the needs of increasingly mobile citizens, patients, and providers. At the same time, the increased use of ICT is enabling health systems to systematize, process, and integrate multiple data silos from different settings and at various levels. To meet these challenges effectively, the creation of an interoperable, global e-Health information infrastructure is critical. Data interoperability within and across heterogeneous health systems, however, is often hampered by terminological inconsistencies and the lack of a common language, particularly when multiple communities of practice from different countries are involved.
To discuss the functionality and ontological requirements for the International Classification of Functioning, Disability and Health (ICF) in achieving semantic interoperability of e-Health information systems.
Most attempts at interoperability to date have focused only on the technical exchange of data in common formats. Automated health information exchange and aggregation is a very complex task that depends on many crucial prerequisites. The overall architecture of the health information system has to be defined clearly at macro and micro levels in terms of its building blocks and their characteristics. The taxonomic and conceptual features of the ICF make it an important architectural element in the overall design of e-Health information systems. To use the ICF in a digital environment, the classification needs to be formalized and modeled using ontological principles and description logic. Ontological modeling is also required for linking assessment instruments and clinical terminologies (e.g. SNOMED) to the ICF.
To achieve semantic interoperability of e-Health systems, a carefully elaborated overall health information system architecture has to be established. As a content standard, the ICF can play a pivotal role in the meaningful and automated compilation and exchange of health information across sectors and levels. To fulfill this role, an ICF ontology needs to be developed.
semantic interoperability; health and disability classification; ontology development
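As a hedged illustration of the formalization step described above, the following Python sketch (using the rdflib library) expresses a single ICF category as an OWL class and links it to a clinical terminology concept. The namespaces, the class hierarchy, and the SNOMED CT mapping shown are assumptions made for illustration only, not part of any proposed ICF ontology.

    # Illustrative sketch only: one ICF category modeled with ontological
    # principles and linked to a clinical terminology. The ICF code b280
    # ("Sensation of pain") is real; all URIs and the SNOMED CT mapping
    # are assumptions for the purpose of the example.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS, SKOS

    ICF = Namespace("http://example.org/icf#")    # hypothetical namespace
    SCT = Namespace("http://snomed.info/id/")

    g = Graph()
    g.bind("icf", ICF)
    g.bind("skos", SKOS)

    # The ICF category as an OWL class within its chapter
    g.add((ICF.b280, RDF.type, OWL.Class))
    g.add((ICF.b280, RDFS.label, Literal("Sensation of pain", lang="en")))
    g.add((ICF.b280, RDFS.subClassOf, ICF.SensoryFunctionsAndPain))

    # Cross-terminology link to a SNOMED CT concept (illustrative code)
    g.add((ICF.b280, SKOS.closeMatch, SCT["22253000"]))

    print(g.serialize(format="turtle"))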
Curation of biomedical literature is often supported by the automatic analysis of textual content, which generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has focused on specific databases and tasks rather than on an environment that integrates TM tools into the curation pipeline and caters for a variety of tasks, types of information, and applications. Processing components usually come from different sources and often lack interoperability. The well-established Unstructured Information Management Architecture (UIMA) is a framework that addresses interoperability by defining common data structures and interfaces. However, most such efforts are targeted at software developers and are not suitable for curators, or are otherwise inconvenient to use at a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphical user interface that eases the development of processing workflows and boosts productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser, which saves users from often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks.
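The pipeline idea described above can be pictured with a minimal, hypothetical sketch: components that read and write a shared annotation structure can be chained freely, so an interactive curation step slots into the same interface as fully automatic ones. None of the class or function names below correspond to Argo's or UIMA's actual APIs.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Annotation:
        begin: int
        end: int
        label: str

    @dataclass
    class Document:
        """Shared data structure every component reads from and writes to."""
        text: str
        annotations: List[Annotation] = field(default_factory=list)

    class Component:
        """Common interface for automatic and interactive processing steps."""
        def process(self, doc: Document) -> Document:
            raise NotImplementedError

    class GeneMentionTagger(Component):
        """Toy automatic annotator: tags a single hard-coded gene name."""
        def process(self, doc):
            target = "BRCA1"
            start = doc.text.find(target)
            if start != -1:
                doc.annotations.append(Annotation(start, start + len(target), "Gene"))
            return doc

    class ManualReviewStep(Component):
        """Stand-in for an interactive editor where a curator accepts or edits
        the automatically produced annotations."""
        def process(self, doc):
            doc.annotations = [a for a in doc.annotations if a.end > a.begin]
            return doc

    def run_pipeline(doc: Document, components: List[Component]) -> Document:
        for component in components:
            doc = component.process(doc)
        return doc

    result = run_pipeline(Document("BRCA1 is associated with breast cancer."),
                          [GeneMentionTagger(), ManualReviewStep()])
    print(result.annotations)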
Identifying relevant research in an ever-growing body of published literature is becoming increasingly difficult. Establishing domain-specific knowledge bases may be a more effective and efficient way to manage and query information within specific biomedical fields. Adopting a controlled vocabulary is a critical step toward data integration and interoperability in any information system. We present an open source infrastructure that provides a powerful capacity for managing and mining data within a domain-specific knowledge base. As a practical application of this infrastructure, we present two applications – Literature Finder and Investigator Browser – as well as a tool set for automating the data curation process for the human genome published literature database. The design of this infrastructure makes the system potentially extensible to other data sources.
Information retrieval and usability tests demonstrated that the system had high rates of recall and precision, 90% and 93% respectively. The system was easy to learn, easy to use, reasonably speedy and effective.
The open source system infrastructure presented in this paper provides a novel approach to managing and querying information and knowledge from domain-specific PubMed data. Using the UMLS controlled vocabulary enhanced data integration, interoperability, and the extensibility of the system. In addition, by using an MVC-based design and Java as a platform-independent programming language, this system provides a potential infrastructure for any domain-specific knowledge base in the biomedical field.
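The role a controlled vocabulary such as UMLS plays in this kind of integration can be sketched with a small, hypothetical example: different surface forms of the same concept are normalized to one concept identifier, so a single query retrieves every record indexed with that concept. The synonym table, concept identifiers, and records below are invented for illustration and are not taken from the actual system.

    # Hypothetical synonym table mapping surface forms to one concept identifier
    CONCEPTS = {
        "breast cancer": "C0006142",
        "breast carcinoma": "C0006142",
        "malignant neoplasm of breast": "C0006142",
    }

    # Hypothetical records already indexed with controlled-vocabulary concepts
    RECORDS = [
        {"pmid": "111111", "indexed_concepts": {"C0006142"}},
        {"pmid": "222222", "indexed_concepts": {"C0999999"}},
    ]

    def normalize(term: str) -> str:
        """Map a free-text term to its concept identifier, if known."""
        return CONCEPTS.get(term.lower().strip(), term)

    def search(term: str):
        """Retrieve records by concept rather than by literal string match."""
        cui = normalize(term)
        return [r["pmid"] for r in RECORDS if cui in r["indexed_concepts"]]

    print(search("Breast Carcinoma"))   # -> ['111111'], found via the shared concept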
The Blue Obelisk movement was established in 2005 as a response to the lack of Open Data, Open Standards and Open Source (ODOSOS) in chemistry. It aims to make it easier to carry out chemistry research by promoting interoperability between chemistry software, encouraging cooperation between Open Source developers, and developing community resources and Open Standards.
This contribution looks back on the work carried out by the Blue Obelisk in the past 5 years and surveys progress and remaining challenges in the areas of Open Data, Open Standards, and Open Source in chemistry.
We show that the Blue Obelisk has been very successful in bringing together researchers and developers with common interests in ODOSOS, leading to development of many useful resources freely available to the chemistry community.
The need for high-quality out-of-hospital healthcare is a known socioeconomic problem. Exploiting the evolution of ICT, ad hoc telemedicine solutions have been proposed in the past. Integrating such ad hoc solutions in order to cost-effectively support the entire healthcare cycle is still a research challenge. To handle the heterogeneity of relevant information and to overcome the fragmentation of out-of-hospital instrumentation in person-centric healthcare systems, a shared, open source interoperability component can be adopted that is ontology driven and based on the semantic web data model. The feasibility and advantages of the proposed approach are demonstrated by presenting the use case of real-time monitoring of patients' health and their environmental context.
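A minimal sketch, assuming a purely hypothetical vocabulary, of how the semantic web data model supports this kind of integration: a physiological reading and an environmental reading are both expressed as RDF and retrieved with one query, regardless of which instrument produced them (using the rdflib library).

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/monitoring#")   # illustrative vocabulary

    g = Graph()
    g.bind("ex", EX)

    # A heart-rate reading from a wearable device
    g.add((EX.obs1, RDF.type, EX.Observation))
    g.add((EX.obs1, EX.observedProperty, EX.HeartRate))
    g.add((EX.obs1, EX.value, Literal(72, datatype=XSD.integer)))
    g.add((EX.obs1, EX.patient, EX.patient42))

    # A room-temperature reading from an environmental sensor, same model
    g.add((EX.obs2, RDF.type, EX.Observation))
    g.add((EX.obs2, EX.observedProperty, EX.RoomTemperature))
    g.add((EX.obs2, EX.value, Literal(27.5, datatype=XSD.decimal)))
    g.add((EX.obs2, EX.patient, EX.patient42))

    # One query retrieves every observation about the patient, whatever its source
    results = g.query("""
        PREFIX ex: <http://example.org/monitoring#>
        SELECT ?prop ?val WHERE {
            ?obs a ex:Observation ;
                 ex:patient ex:patient42 ;
                 ex:observedProperty ?prop ;
                 ex:value ?val .
        }
    """)
    for prop, val in results:
        print(prop, val)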
The growing availability of continuous data from medical devices in diabetes management makes it crucial to define novel information technology architectures for efficient data storage, data transmission, and data visualization. The new paradigm of care demands the sharing of information in interoperable systems as the only way to support patient care in a continuum of care scenario. The technological platforms should support all the services required by the actors involved in the care process, located in different scenarios and managing diverse information for different purposes. This article presents basic criteria for defining flexible and adaptive architectures that are capable of interoperating with external systems, and integrating medical devices and decision support tools to extract all the relevant knowledge to support diabetes care.
diabetes; continuous data management; device interoperability; software architecture
The nationwide health information network (NHIN) has been proposed to securely link community and state health information exchange (HIE) entities to create a national, interoperable network for sharing healthcare data in the USA. This paper describes a framework for evaluating the costs, effort, and value of nationwide data exchange as the NHIN moves toward a production state. The paper further presents the results of an initial assessment of the framework by those engaged in HIE activities.
Using a literature review and knowledge gained from active NHIN technology and policy development, the authors constructed a framework for evaluating the costs, effort, and value of data exchange between an HIE entity and the NHIN.
An online survey was used to assess the perceived usefulness of the metrics in the framework among HIE professionals and researchers.
The framework is organized into five broad categories: implementation; technology; policy; data; and value. Each category enumerates a variety of measures and measure types. Survey respondents generally indicated the framework contained useful measures for current and future use in HIE and NHIN evaluation. Answers varied slightly based on a respondent's participation in active development of NHIN components.
The proposed framework supports efforts to measure the costs, effort, and value associated with nationwide data exchange. Collecting longitudinal data along the NHIN's path to production should help with the development of an evidence base that will drive adoption, create value, and stimulate further investment in nationwide data exchange.
Computer communication networks; evaluation studies as topic; medical informatics; United States
Simulator interoperability and extensibility have become growing requirements in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent, disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on the development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitate communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) documentation of individual components in terms of their inputs and outputs, (3) easy removal or replacement of unnecessary or obsolete components, (4) stand-alone testing of components, and (5) clear delineation of the development scope of new components.
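The layering described above can be sketched in miniature: a high-level module expresses a biological concept and delegates all numerics to a low-level module behind a defined interface, so either side can be replaced independently. All class names and the toy model below are assumptions for illustration, not components of the actual architecture.

    from abc import ABC, abstractmethod
    from typing import Callable, List

    class NumericalSolver(ABC):
        """Low-level layer: advances state variables; knows nothing of biology."""
        @abstractmethod
        def step(self, state: List[float], dt: float) -> List[float]:
            ...

    class EulerSolver(NumericalSolver):
        def __init__(self, derivative: Callable[[List[float]], List[float]]):
            self.derivative = derivative

        def step(self, state, dt):
            return [x + dt * dx for x, dx in zip(state, self.derivative(state))]

    class NeuronModel:
        """High-level layer: a biological concept (a membrane potential relaxing
        to rest) expressed independently of the numerics that integrate it."""
        def __init__(self, solver_factory, v0: float = -50.0, tau: float = 10.0):
            self.tau = tau
            self.state = [v0]
            self.solver = solver_factory(self._dvdt)

        def _dvdt(self, state):
            return [-(state[0] + 65.0) / self.tau]   # relax toward -65 mV

        def advance(self, dt: float) -> float:
            self.state = self.solver.step(self.state, dt)
            return self.state[0]

    # Swapping in a different NumericalSolver implementation requires no change
    # to NeuronModel -- only a different factory is passed in.
    model = NeuronModel(EulerSolver)
    print([round(model.advance(0.1), 3) for _ in range(3)])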
One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™).
Semantic Interoperability; Model Driven Architecture; Metadata; Controlled Terminology; ISO 11179
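The distinction drawn above between syntactic and semantic interoperability can be illustrated with a toy example: a record serialized in a common format is parsed (syntactic), and its coded field is resolved against a shared terminology table standing in for a vocabulary service (semantic). The codes, field names, and lookup table are illustrative only and do not reflect caCORE's actual interfaces.

    import json

    # Shared terminology table standing in for an Enterprise Vocabulary Services
    # lookup; the concept codes and names are illustrative only.
    TERMINOLOGY = {
        "C3224": "Melanoma",
        "C2991": "Disease or Disorder",
    }

    # Syntactic interoperability: the record travels in an agreed common format
    wire_message = json.dumps({"specimen_id": "S-001", "diagnosis_code": "C3224"})

    # Any consumer can parse the message ...
    record = json.loads(wire_message)
    # ... and, with the shared terminology, agree on what the code means
    meaning = TERMINOLOGY[record["diagnosis_code"]]
    print(record["specimen_id"], "->", meaning)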
The Generation Challenge Programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public, platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computing (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/open-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making.
Ontologies and standards are very important parts of today's bioscience research. With the rapid increase of biological knowledge, they provide mechanisms to better store and represent data in a controlled and structured way, so that scientists can share the data, and utilize a wide variety of software and tools to manage and analyze the data. Most of these standards are initially designed for computers to access large amounts of data that are difficult for human biologists to handle, and it is important to keep in mind that ultimately biologists are going to produce and interpret the data. While ontologies and standards must follow strict semantic rules that may not be familiar to biologists, effort must be spent to lower the learning barrier by involving biologists in the process of development, and by providing software and tool support. A standard will not succeed without support from the wider bioscience research community. Thus, it is crucial that these standards be designed not only for machines to read, but also to be scientifically accurate and intuitive to human biologists.
ontology; standard; systems biology
Due to its complex nature, modern biomedical research has become increasingly interdisciplinary and collaborative. Although a necessity, interdisciplinary biomedical collaboration is difficult. There is, however, a growing body of literature on the study and fostering of collaboration in fields such as computer supported cooperative work (CSCW) and information science (IS). These studies of collaboration provide insight into how to potentially alleviate the difficulties of interdisciplinary collaborative research. We therefore undertook a cross-cutting study of science and engineering collaboratories to identify emergent themes. We review many relevant collaboratory concepts: (a) general collaboratory concepts that span many domains: communication, common workspace and coordination, and data sharing and management; (b) specific collaboratory concepts of particular biomedical relevance: data integration and analysis, security structure, metadata and data provenance, and interoperability and data standards; (c) critical environmental factors that support collaboratories: administrative and management structure, technical support, and available funding; and (d) future considerations for biomedical collaboration: appropriate training and long-term planning. In our opinion, the collaboratory concepts we discuss can guide the planning and design of future collaborative infrastructure by biomedical informatics researchers, alleviating some of the difficulties of interdisciplinary biomedical collaboration.
Collaboration; Biomedical informatics; Computer supported collaborative work; Collaboratories; Social and technical issues; Bioinformatics
The Personalized Health Care Workgroup of the American Health Information Community was formed to determine what is needed to promote standard reporting and incorporation of medical genetic/genomic tests and family health history data in electronic health records. The Workgroup has examined and clarified a range of issues related to this information, including interoperability standards and requirements for confidentiality, privacy, and security, in the course of developing recommendations to facilitate its capture, storage, transmission, and use in clinical decision support. The Workgroup is one of several appointed by the American Health Information Community to study high-priority issues related to the implementation of interoperable electronic health records in the United States. It is also a component of the U.S. Department of Health and Human Services' Personalized Health Care Initiative, which is designed to create a foundation upon which information technology that supports personalized, predictive, and pre-emptive health care can be built.
Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is invested in the development of software tools and technologies for numerical simulations and for the creation and publication of models. The diversity of related tools leads to the duplication of effort and hinders model reuse. Development practices and technologies that support interoperability between software systems therefore play an important role in making the modeling process more efficient and in ensuring that published models can be reliably and easily reused. Various forms of interoperability are possible including the development of portable model description standards, the adoption of common simulation languages or the use of standardized middleware. Each of these approaches finds applications within the broad range of current modeling activity. However more effort is required in many areas to enable new scientific questions to be addressed. Here we present the conclusions of the “Neuro-IT Interoperability of Simulators” workshop, held at the 11th computational neuroscience meeting in Edinburgh (July 19-20 2006; http://www.cnsorg.org). We assess the current state of interoperability of neural simulation software and explore the future directions that will enable the field to advance.
Neural simulation software; Simulation language; Standards; XML; Model publication
Public health laboratories (PHLs) are critical components of the nation's health-care system, serving as stewards of valuable specimens, delivering important diagnostic results to support clinical and public health programs, supporting public health policy, and conducting research. This article discusses the need for and challenges of creating standards-based data-sharing networks across the PHL community, which led to the development of the PHL Interoperability Project (PHLIP). Launched by the Association of Public Health Laboratories and the Centers for Disease Control and Prevention in September 2006, PHLIP has leveraged a unique community-based collaborative process, catalyzing national capabilities to more effectively share electronic laboratory-generated diagnostic information and bolster the nation's health security. PHLIP is emerging as a model of accelerated innovation for the fields of laboratory science, technology, and public health.
In order to provide more effective and personalized healthcare services to patients and healthcare professionals, intelligent active knowledge management and reasoning systems with semantic interoperability are needed. Technological developments have made ubiquitous healthcare more semantically interoperable and more individual patient-based; however, these methodologies also have limitations. Based on an extensive review of the international literature, this paper describes two technological approaches to semantically interoperable electronic health records for ubiquitous healthcare data management: the ontology-based model and the information model based on openEHR archetypes, together with their links to standard terminologies such as SNOMED-CT.
Ubiquitous Healthcare; Electronic Health Record; Ontology; OpenEHR Archetype; SNOMED-CT
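A simplified sketch of the archetype approach mentioned above: a generic reference-model entry is constrained by an archetype-like definition and bound to a standard terminology code before being exchanged. The structures, constraint values, and the SNOMED CT code shown are assumptions for illustration, not an actual openEHR archetype.

    from dataclasses import dataclass

    @dataclass
    class Quantity:
        value: float
        units: str

    @dataclass
    class Element:
        name: str
        code: str          # terminology binding, e.g. a SNOMED CT concept id
        data: Quantity

    # Archetype-like constraint definition for one clinical concept
    BLOOD_PRESSURE_ARCHETYPE = {
        "systolic": {"units": "mm[Hg]", "min": 0.0, "max": 1000.0},
    }

    def validate(element: Element, archetype: dict) -> bool:
        """Check an entry against the archetype's constraints before exchange."""
        rule = archetype[element.name]
        return (element.data.units == rule["units"]
                and rule["min"] <= element.data.value <= rule["max"])

    # The terminology code below is meant to denote systolic blood pressure;
    # treat it as illustrative rather than authoritative.
    entry = Element("systolic", code="271649006", data=Quantity(120.0, "mm[Hg]"))
    print(validate(entry, BLOOD_PRESSURE_ARCHETYPE))   # True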
caGrid is a middleware system that combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high performance computing communities.
Capture, coding and communication of newborn screening (NBS) information represent a challenge for public health laboratories, health departments, hospitals, and ambulatory care practices. An increasing number of conditions targeted for screening and the complexity of interpretation contribute to a growing need for integrated information-management strategies. This makes NBS an important test of tools and architecture for electronic health information exchange (HIE) in this convergence of individual patient care and population health activities. For this reason, the American Health Information Community undertook three tasks described in this paper. First, a newborn screening use case was established to facilitate standards harmonization for common terminology and interoperability specifications guiding HIE. Second, newborn screening coding and terminology were developed for integration into electronic HIE activities. Finally, clarification of privacy, security, and clinical laboratory regulatory requirements governing information exchange was provided, serving as a framework to establish pathways for improving screening program timeliness, effectiveness, and efficiency of quality patient care services.
Newborn screening; health information exchange; electronic health record; privacy and security
The key idea underlying many Ambient Intelligence (AmI) projects and applications is context awareness, which is based mainly on their capacity to identify users and their locations. The actual computing capacity should remain in the background, in the periphery of our awareness, and should only move to the center if and when necessary. Computing thus becomes ‘invisible’, as it is embedded in the environment and everyday objects. The research project described herein aims to realize an Ambient Intelligence-based environment able to improve users' quality of life by learning their habits and anticipating their needs. This environment is part of an adaptive, context-aware framework designed to make today's incompatible, heterogeneous domotic systems fully interoperable, not only for connecting sensors and actuators, but also for providing comprehensive connections of devices to users. The solution is a middleware architecture based on open and widely recognized standards, capable of abstracting the peculiarities of the underlying heterogeneous technologies and enabling them to co-exist and interwork without, however, eliminating their differences. At the highest level of this infrastructure, the Ambient Intelligence framework, integrated with the domotic sensors, enables the system to recognize any unusual or dangerous situations and to anticipate health problems or special user needs in a technological living environment such as a house or a public space.
Ambient Intelligence; domotics; home environment; interoperability; DomoNet; web services; XML; data mining; machine learning; association rules
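The abstraction middleware described above is essentially an adapter pattern, sketched below with hypothetical device classes: each domotic technology keeps its native interface, and a thin adapter exposes it through one common API that the Ambient Intelligence layer programs against. Nothing here reflects the project's actual classes or protocols.

    from abc import ABC, abstractmethod

    class KnxLight:
        """Stand-in for a device on one proprietary bus technology."""
        def write_group_address(self, value: int):
            print(f"KNX telegram sent: {value}")

    class ZigbeeLamp:
        """Stand-in for a device on a different wireless technology."""
        def send_command(self, on: bool):
            print(f"Zigbee command sent: {'ON' if on else 'OFF'}")

    class Switchable(ABC):
        """Common interface the Ambient Intelligence layer programs against."""
        @abstractmethod
        def turn_on(self): ...

        @abstractmethod
        def turn_off(self): ...

    class KnxAdapter(Switchable):
        def __init__(self, device: KnxLight):
            self.device = device

        def turn_on(self):
            self.device.write_group_address(1)

        def turn_off(self):
            self.device.write_group_address(0)

    class ZigbeeAdapter(Switchable):
        def __init__(self, device: ZigbeeLamp):
            self.device = device

        def turn_on(self):
            self.device.send_command(True)

        def turn_off(self):
            self.device.send_command(False)

    # Higher layers can now drive any actuator without knowing its technology.
    for light in (KnxAdapter(KnxLight()), ZigbeeAdapter(ZigbeeLamp())):
        light.turn_on()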
OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals.
The OpenTox Framework includes APIs and services for compounds, datasets, features, algorithms, models, ontologies, tasks, validation, and reporting which may be combined into multiple applications satisfying a variety of different user needs. OpenTox applications are based on a set of distributed, interoperable OpenTox API-compliant REST web services. The OpenTox approach to ontology allows for efficient mapping of complementary data coming from different datasets into a unifying structure having a shared terminology and representation.
Two initial OpenTox applications are presented as an illustration of the potential impact of OpenTox for high-quality and consistent structure-activity relationship modelling of REACH-relevant endpoints: ToxPredict which predicts and reports on toxicities for endpoints for an input chemical structure, and ToxCreate which builds and validates a predictive toxicity model based on an input toxicology dataset. Because of the extensible nature of the standardised Framework design, barriers of interoperability between applications and content are removed, as the user may combine data, models and validation from multiple sources in a dependable and time-effective way.
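A hedged sketch of how a client might interact with OpenTox-style REST services for a ToxPredict-like task: look up a compound resource for a query structure, then apply a model addressed by URI. The base URL, resource paths, and parameter names are assumptions for illustration and are not copied from any particular OpenTox deployment.

    import requests

    BASE = "https://opentox.example.org"     # hypothetical service root

    # 1. Look up a compound resource for a query structure (caffeine, as SMILES)
    smiles = "CN1C=NC2=C1C(=O)N(C(=O)N2C)C"
    resp = requests.get(f"{BASE}/compound", params={"search": smiles},
                        headers={"Accept": "text/uri-list"}, timeout=30)
    compound_uri = resp.text.strip().splitlines()[0]

    # 2. Apply an existing toxicity model to that compound; every resource
    #    (compound, dataset, model) is addressed by its own URI.
    model_uri = f"{BASE}/model/1"            # hypothetical model identifier
    pred = requests.post(model_uri, data={"compound_uri": compound_uri},
                         headers={"Accept": "text/uri-list"}, timeout=30)

    # 3. The service answers with the URI of a result dataset to fetch and report
    print("result dataset:", pred.text.strip())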
Microbial ecology has been enhanced greatly by the ongoing ‘omics revolution, bringing half the world's biomass and most of its biodiversity into analytical view for the first time; indeed, it feels almost like the invention of the microscope and the discovery of the New World at the same time. With major microbial ecology research efforts accumulating prodigious quantities of sequence, protein, and metabolite data, we are now poised to address environmental microbial research at macro scales, and to begin to characterize and understand the dimensions of microbial biodiversity on the planet. What is currently impeding progress is the need for a framework within which the research community can develop, exchange and discuss predictive ecosystem models that describe the biodiversity and functional interactions. Such a framework must encompass data and metadata transparency and interoperation; data and results validation, curation, and search; application programming interfaces for modeling and analysis tools; and the human and technical processes and services necessary to ensure broad adoption. Here we discuss the need for focused community interaction to augment and deepen established community efforts, beginning with the Genomic Standards Consortium (GSC), to create a science-driven strategic plan for a Genomic Software Institute (GSI).
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that do not require transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project, and researchers of emerging areas where a standard exchange data format is not well established, to an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and the Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
This paper describes an approach that provides Internet-based support for a genome center to map human chromosome 12, as a collaboration between laboratories at the Albert Einstein College of Medicine in Bronx, New York, and the Yale University School of Medicine in New Haven, Connecticut. Informatics is well established as an important enabling technology within the genome mapping community. The goal of this paper is to use the chromosome 12 project as a case study to introduce a medical informatics audience to certain issues involved in genome informatics and in the Internet-based support of collaborative bioscience research. Central to the approach described is a shared database (DB/12) with Macintosh clients in the participating laboratories running the 4th Dimension database program as a user-friendly front end, and a Sun SPARCstation-2 server running Sybase. The central component of the database stores information about yeast artificial chromosomes (YACs), each containing a segment of human DNA from chromosome 12 to which genome markers have been mapped, such that an overlapping set of YACs (called a "contig") can be identified, along with an ordering of the markers. The approach also includes 1) a map assembly tool developed to help biologists interpret their data, proposing a ranked set of candidate maps, 2) the integration of DB/12 with external databases and tools, and 3) the dissemination of the results. This paper discusses several of the lessons learned that apply to many other areas of bioscience, and the potential role for the field of medical informatics in helping to provide such support.
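The core inference behind contig identification described above can be sketched as follows: YAC clones that share mapped markers are linked, and each connected group of clones forms a candidate contig. The clone and marker names are invented, and real map assembly must additionally order the markers and rank alternative candidate maps, which this toy example omits.

    from collections import defaultdict

    # Which markers were detected in which YAC clones (hypothetical data)
    yac_markers = {
        "yA": {"m1", "m2"},
        "yB": {"m2", "m3"},
        "yC": {"m3", "m4"},
        "yD": {"m7"},          # shares no marker: forms its own contig
    }

    def build_contigs(yacs):
        """Group YACs into contigs: connected components of the overlap graph."""
        edges = defaultdict(set)
        names = list(yacs)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if yacs[a] & yacs[b]:      # shared marker => overlapping clones
                    edges[a].add(b)
                    edges[b].add(a)

        contigs, seen = [], set()
        for start in names:
            if start in seen:
                continue
            stack, group = [start], set()
            while stack:                   # depth-first traversal
                clone = stack.pop()
                if clone in group:
                    continue
                group.add(clone)
                stack.extend(edges[clone] - group)
            seen |= group
            contigs.append(sorted(group))
        return contigs

    print(build_contigs(yac_markers))      # [['yA', 'yB', 'yC'], ['yD']]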
Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications.
This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources.
A novel computational toxicity assessment platform was generated by integrating two open science platforms related to toxicology: Bioclipse, which combines a rich scriptable and graphical workbench environment for the integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets through the use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases and algorithm and model resources, taking advantage of the Bioclipse workbench for handling the technical layers.