Identification of non-coding RNAs (ncRNAs) has improved significantly over the past decade. However, semantic annotation of ncRNA data faces critical challenges owing to the lack of a comprehensive ontology to serve as common data elements and data-exchange standards in the field. We developed the Non-Coding RNA Ontology (NCRO) to address this need. By providing a formally defined ncRNA controlled vocabulary, the NCRO aims to fill a specific and much-needed niche in the semantic annotation of large amounts of ncRNA biological and clinical data.
non-coding RNA; bio-ontologies; domain ontology; reference ontology; ontology development; semantic data annotation
The Protein Ontology (PRO; http://purl.obolibrary.org/obo/pr) formally defines and describes taxon-specific and taxon-neutral protein-related entities in three major areas: proteins related by evolution; proteins produced from a given gene; and protein-containing complexes. PRO thus serves as a tool for referencing protein entities at any level of specificity. To enhance this ability, and to facilitate the comparison of such entities described in different resources, we developed a standardized representation of proteoforms using UniProtKB as a sequence reference and PSI-MOD as a post-translational modification reference. We illustrate its use in facilitating an alignment between PRO and Reactome protein entities. We also address issues of scalability, describing our first steps into the use of text mining to identify protein-related entities, the large-scale import of proteoform information from expert-curated resources, and our ability to dynamically generate PRO terms. Web views for individual terms are now more informative about closely related terms, including, for example, an interactive multiple sequence alignment. Finally, we describe recent improvements in semantic utility, with PRO now represented in OWL and as a SPARQL endpoint. These developments will further support the anticipated growth of PRO and facilitate the discoverability and aggregation of data relating to protein entities.
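A SPARQL endpoint of the kind described above is typically queried over HTTP with a URL-encoded query string. The sketch below, using only the Python standard library, builds such a request for the label of the PRO root term; the endpoint URL is a placeholder (the abstract does not give the real address), and the request is constructed but not sent.

```python
# Sketch of building a SPARQL-over-HTTP request against an endpoint such
# as the one PRO now exposes. The endpoint URL below is a placeholder,
# not the actual PRO endpoint; consult the PRO web resource for that.
from urllib.parse import urlencode

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE {
  <http://purl.obolibrary.org/obo/PR_000000001> rdfs:label ?label .
}
"""

endpoint = "https://example.org/sparql"  # placeholder endpoint
request_url = endpoint + "?" + urlencode({"query": query, "format": "json"})
print(request_url.startswith(endpoint))  # True
```

Sending `request_url` with `urllib.request.urlopen` would return the query results; it is omitted here so the sketch has no network dependency.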
Human allogeneic islet transplantation for treatment of type 1 diabetes provides numerous clinical benefits, such as fewer episodes of hypoglycemic unawareness and tighter control of blood glucose levels. Availability of human pancreas for clinical and research use, however, is severely limited. Porcine pancreas offers an abundant source of tissue for optimization of islet isolation methodology and future clinical transplantation, thereby increasing patient access to this potentially lifesaving procedure.
Porcine islet isolations were performed using varying amounts of collagenase (7.5, 3.75, or 2.5 Wunsch units per gram tissue) and neutral protease activity (12 000, 6000, or 4000 neutral protease units per gram tissue) and perfusion volumes (1.7 or 0.85 mL/g tissue) to assess their effects on isolation outcomes. Retention of dissociative enzymes within the pancreas during perfusion and digestion was evaluated, along with distribution of the perfusion solution within the tissue.
Reducing enzyme usage by as much as 67% and perfusion volume by 50% led to equally successful islet isolation outcomes when compared with the control group (48 ± 7% of tissue digested and 1088 ± 299 islet equivalents per gram of pancreas vs 47 ± 11% and 1080 ± 512, respectively). Using margin-marking dye in the perfusion solution to visualize enzyme distribution demonstrated that increasing perfusion volume did not improve tissue infiltration.
Current protocols for porcine islet isolation consume excessive amounts of dissociative enzymes, elevating cost and limiting research and development. These data demonstrate that islet isolation protocols can be optimized to significantly reduce enzyme usage while maintaining yield and function, thus accelerating progress toward clinical application.
Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem.
The terms in OBCS, including ‘data collection’, ‘data transformation in statistics’, ‘data visualization’, ‘statistical data analysis’, and ‘drawing a conclusion based on data’, cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology for Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprises 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS-specific classes, and 122 classes imported from ten other OBO ontologies.
We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed.
The OBCS source code and documentation are available at: https://github.com/obcs/obcs.
The Ontology of Biological and Clinical Statistics (OBCS) is a community-based open source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.
OBCS; Biological statistics; Clinical outcomes statistics; Standardization; Statistical analysis; Data integration
Agarose macrobeads containing mouse renal adenocarcinoma cells (RMBs) release factors that suppress the growth of cancer cells and prolong survival in animals with spontaneous or induced tumors, mediated in part by increased levels of myocyte-enhancing factor (MEF2D) via EGFR- and AKT-signaling pathways. The primary objective of this study was to determine the safety of RMBs in advanced, treatment-resistant metastatic cancers; efficacy (survival) was the secondary objective.
Thirty-one patients underwent up to four intraperitoneal implantations of RMBs (8 or 16 macrobeads/kg) via laparoscopy in this single-arm trial (FDA BB-IND 10091; NCT 00283075). Serial physical examinations, laboratory testing, and PET-CT imaging were performed before and three months after each implant.
RMBs were well tolerated at both dose levels (mean 660.9 per implant). Adverse events (AEs) were Grade 1/2, with no treatment-related serious adverse events (SAEs).
The data support the safety of RMB therapy in advanced-malignancy patients, and the preliminary evidence for their potential efficacy is encouraging. A Phase 2 efficacy trial is ongoing.
solid tumors; cell-based therapeutics; cancer systems biology; gastrointestinal cancer; cancer clinical trial; colorectal cancer treatment
As health systems around the world turn towards highly distributed, specialized and cooperative structures to increase quality and safety of care as well as efficiency and efficacy of delivery processes, there is a growing need for supporting communication and collaboration of all parties involved with advanced ICT solutions. The Electronic Health Record (EHR) provides the information platform which is maturing towards the eHealth core application. To meet the requirements for sustainable, semantically interoperable, and trustworthy EHR solutions, different standards and different national strategies have been established. The workshop summarizes the requirements for such advanced EHR systems and their underlying architecture, presents different strategies and solutions advocated by corresponding protagonists, discusses pros and cons as well as harmonization and migration strategies for those approaches. It particularly highlights a turn towards ontology-driven architectures. The workshop is a joint activity of the EFMI Working Groups “Electronic Health Records” and “Security, Safety and Ethics”.
electronic health records; interoperability; ontology; EHR architecture; eHealth platform
Semantic interoperability is one of the priority themes of the ARGOS Trans-Atlantic Observatory. This topic represents a globally recognised challenge that must be addressed if electronic health records are to be shared among heterogeneous systems, and the information in them exploited to the maximum benefit of patients, professionals, health services, research, and industry. Progress in this multi-faceted challenge has been piecemeal, and valuable lessons have been learned, and approaches discovered, in Europe and in the US that can be shared and combined.
Experts from both continents have met at three ARGOS workshops during 2010 and 2011 to share understanding of these issues and how they might be tackled collectively from both sides of the Atlantic. This policy brief summarises the problems and the reasons why they are important to tackle, and also why they are so difficult. It outlines the major areas of semantic innovation that exist and that are available to help address this challenge. It proposes a series of next steps that need to be championed on both sides of the Atlantic if further progress is to be made in sharing and analysing electronic health records meaningfully.
Semantic interoperability requires the use of standards, not only for EHR data to be transferred and structurally mapped into a receiving repository, but also for the clinical content of the EHR to be interpreted in conformity with the original meanings intended by its authors. Wide-scale engagement with professional bodies, globally, is needed to develop these clinical information standards. Accurate and complete clinical documentation, faithful to the patient’s situation, and interoperability between systems, require widespread and dependable access to published and maintained collections of coherent and quality-assured semantic resources, including models such as archetypes and templates that would (1) provide clinical context, (2) be mapped to interoperability standards for EHR data, (3) be linked to well specified multi-lingual terminology value sets, and (4) be derived from high quality ontologies.
There is a need to gain greater experience in how semantic resources should be defined, validated, and disseminated, and in how users (who increasingly will include patients) should be educated to improve the quality and consistency of EHR documentation and to make full use of it. There are urgent needs to scale up the authorship, acceptance, and adoption of clinical information standards, to leverage and harmonise the islands of standardisation optimally, to assure the quality of the artefacts produced, and to organise end-to-end governance of the development and adoption of solutions.
Electronic health records; interoperability; knowledge representation
As a special class of non-coding RNAs (ncRNAs), microRNAs (miRNAs) perform important roles in numerous biological and pathological processes. The realization of miRNA functions depends largely on how miRNAs regulate specific target genes. It is therefore critical to identify, analyze, and cross-reference miRNA-target interactions to better explore and delineate miRNA functions. Semantic technologies can help in this regard. We previously developed a miRNA domain-specific application ontology, the Ontology for MIcroRNA Target (OMIT), whose goal was to serve as a foundation for semantic annotation, data integration, and semantic search in the miRNA field. In this paper we describe our continuing effort to develop the OMIT, and demonstrate its use within a semantic search system, OmniSearch, designed to facilitate knowledge capture of miRNA-target interaction data. Important changes in the current version of OMIT are summarized as follows: (1) adoption of a modularized ontology design (with 2559 terms imported from the NCRO ontology); (2) encoding of all 1884 human miRNAs (vs. 300 in previous versions); and (3) establishment of a GitHub project site along with an issue tracker for more effective community collaboration on the ontology development. The OMIT ontology is free and open to all users, accessible at: http://purl.obolibrary.org/obo/omit.owl. The OmniSearch system is also free and open to all users, accessible at: http://omnisearch.soc.southalabama.edu/index.php/Software.
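The semantic search that OmniSearch performs can be thought of as retrieval over subject-predicate-object statements linking miRNAs to their targets. The pure-Python sketch below illustrates the idea on a toy in-memory triple store; the predicate and term names are hypothetical illustrations, not the actual OMIT schema.

```python
# Minimal sketch of semantic search over miRNA-target interaction data,
# as a system like OmniSearch might perform it. The predicate names
# ("targets", "is_a") are hypothetical, not the actual OMIT vocabulary.

def find_targets(triples, mirna):
    """Return, sorted, the genes linked to a miRNA via a 'targets' predicate."""
    return sorted(obj for subj, pred, obj in triples
                  if subj == mirna and pred == "targets")

# Toy in-memory triple store: (subject, predicate, object).
triples = [
    ("hsa-miR-21", "targets", "PTEN"),    # illustrative interaction
    ("hsa-miR-21", "targets", "PDCD4"),
    ("hsa-miR-155", "targets", "SOCS1"),
    ("hsa-miR-21", "is_a", "microRNA"),
]

print(find_targets(triples, "hsa-miR-21"))  # ['PDCD4', 'PTEN']
```

In a real deployment the triples would live in an RDF store and the filter would be expressed as a SPARQL query rather than a Python comprehension.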
microRNA; Non-coding RNA; Target gene; Biomedical ontology; Ontology development; Data annotation; Data integration; Semantic search; SPARQL query
In recent years, sequencing technologies have enabled the identification of a wide range of non-coding RNAs (ncRNAs). Unfortunately, annotation and integration of ncRNA data have lagged behind their identification. Given the large quantity of information being obtained in this area, there emerges an urgent need to integrate what is being discovered by a broad range of relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a systematically structured and precisely defined controlled vocabulary for the domain of ncRNAs, thereby facilitating the discovery, curation, analysis, and exchange of data about the structures of ncRNAs, their molecular and cellular functions, and their impacts upon phenotypes, as well as reasoning over such data. The goal of NCRO is to serve as a common resource for annotations of diverse research in a way that will significantly enhance integrative and comparative analysis of the myriad resources currently housed in disparate sources. It is our belief that the NCRO ontology can perform an important role in the comprehensive unification of ncRNA biology and, indeed, fill a critical gap in both the Open Biological and Biomedical Ontologies (OBO) Library and the National Center for Biomedical Ontology (NCBO) BioPortal. Our initial focus is on the ontological representation of small regulatory ncRNAs, which we see as the first step in providing a resource for the annotation of data about all forms of ncRNAs. The NCRO ontology is free and open to all users, accessible at: http://purl.obolibrary.org/obo/ncro.owl.
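An OWL file such as ncro.owl is, at the serialization level, RDF/XML, so its class IRIs and labels can be extracted with ordinary XML tooling. The sketch below uses only the Python standard library on a made-up snippet that mimics the general OWL class structure; the class IRI and label shown are illustrative, not actual NCRO content.

```python
# Sketch: extracting class labels from an OWL (RDF/XML) document such as
# ncro.owl, using only the standard library. The snippet below is a
# made-up illustration of OWL class structure, not actual NCRO content.
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
OWL = "{http://www.w3.org/2002/07/owl#}"
RDFS = "{http://www.w3.org/2000/01/rdf-schema#}"

snippet = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:owl="http://www.w3.org/2002/07/owl#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <owl:Class rdf:about="http://example.org/NCRO_0000001">
    <rdfs:label>small regulatory ncRNA</rdfs:label>
  </owl:Class>
</rdf:RDF>"""

root = ET.fromstring(snippet)
labels = {cls.get(RDF + "about"): cls.findtext(RDFS + "label")
          for cls in root.iter(OWL + "Class")}
print(labels)
```

For serious work one would use a dedicated RDF/OWL library (e.g., rdflib or owlready2) rather than raw XML parsing, since OWL admits several serializations.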
Non-coding RNA; Biomedical ontology; Domain ontology; Reference ontology; Ontology development; Data annotation
The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. 
The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed in association with OBI. The current release of OBI is available at http://purl.obolibrary.org/obo/obi.owl.
The original Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms have been demonstrated to be reliable. However, the Validation Project determined that the validity of the RDC/TMD Axis I algorithms fell below the target sensitivity of ≥ 0.70 and target specificity of ≥ 0.95.
Consequently, these empirical results supported the development of revised RDC/TMD Axis I diagnostic algorithms that were subsequently demonstrated to be valid for the most common pain-related TMD and for one temporomandibular joint (TMJ) intra-articular disorder. The original RDC/TMD Axis II instruments were shown to be both reliable and valid. Working from these findings and revisions, two international consensus workshops were convened, from which recommendations were obtained for the finalization of new Axis I diagnostic algorithms and new Axis II instruments.
Through a series of workshops and symposia, a panel of clinical and basic science pain experts modified the revised RDC/TMD Axis I algorithms by using comprehensive searches of published TMD diagnostic literature followed by review and consensus via a formal structured process. The panel's recommendations for further revision of the Axis I diagnostic algorithms were assessed for validity by using the Validation Project's data set, and for reliability by using newly collected data from the ongoing TMJ Impact Project—the follow-up study to the Validation Project. New Axis II instruments were identified through a comprehensive search of the literature providing valid instruments that, relative to the RDC/TMD, are shorter in length, are available in the public domain, and currently are being used in medical settings.
The newly recommended Diagnostic Criteria for TMD (DC/TMD) Axis I protocol includes both a valid screener for detecting any pain-related TMD and valid diagnostic criteria for differentiating the most common pain-related TMD (sensitivity ≥ 0.86, specificity ≥ 0.98) and for one intra-articular disorder (sensitivity of 0.80 and specificity of 0.97). Diagnostic criteria for other common intra-articular disorders lack adequate validity for clinical diagnoses but can be used for screening purposes. Inter-examiner reliability for the clinical assessment associated with the validated DC/TMD criteria for pain-related TMD is excellent (kappa ≥ 0.85). Finally, a comprehensive classification system that includes both the common and less common TMD is also presented. The Axis II protocol retains selected original RDC/TMD screening instruments augmented with new instruments to assess jaw function as well as behavioral and additional psychosocial factors. The Axis II protocol is divided into screening and comprehensive self-report instrument sets. The screening instruments’ 41 questions assess pain intensity, pain-related disability, psychological distress, jaw functional limitations, and parafunctional behaviors, and a pain drawing is used to assess locations of pain. The comprehensive instruments, composed of 81 questions, assess in further detail jaw functional limitations and psychological distress as well as additional constructs of anxiety and presence of comorbid pain conditions.
The recommended evidence-based new DC/TMD protocol is appropriate for use in both clinical and research settings. More comprehensive instruments augment short and simple screening instruments for Axis I and Axis II. These validated instruments allow for identification of patients with a range of simple to complex TMD presentations.
The Protein Ontology (PRO) provides terms for and supports annotation of species-specific protein complexes in an ontology framework that relates them both to their components and to species-independent families of complexes. Comprehensive curation of experimentally known forms and annotations thereof is expected to expose discrepancies, differences, and gaps in our knowledge. We have annotated the early events of innate immune signaling mediated by Toll-Like Receptor 3 and 4 complexes in human, mouse, and chicken. The resulting ontology and annotation data set has allowed us to identify species-specific gaps in experimental data and possible functional differences between species, and to employ inferred structural and functional relationships to suggest plausible resolutions of these discrepancies and gaps.
Medical terminology collects and organizes the many different kinds of terms employed in the biomedical domain both by practitioners and also in the course of biomedical research. In addition to serving as labels for biomedical classes, these names reflect the organizational principles of biomedical vocabularies and ontologies. Some names represent invariant features (classes, universals) of biomedical reality (i.e., they are a matter for ontology). Other names, however, convey also how this reality is perceived, measured, and understood by health professionals (i.e., they belong to the domain of epistemology). We analyze terms from several biomedical vocabularies in order to throw light on the interactions between ontological and epistemological components of these terminologies. We identify four cases: 1) terms containing classification criteria, 2) terms reflecting detectability, modality, uncertainty, and vagueness, 3) terms created in order to obtain a complete partition of a given domain, and 4) terms reflecting mere fiat boundaries. We show that epistemology-loaded terms are pervasive in biomedical vocabularies, that the “classes” they name often do not comply with sound classification principles, and that they are therefore likely to cause problems in the evolution and alignment of terminologies and associated ontologies.
Imagine if we could compute across phenotype data as easily as genomic data; this article calls for efforts to realize this vision and discusses the potential benefits.
Despite a large and multifaceted effort to understand the vast landscape of phenotypic data, their current form inhibits productive data analysis. The lack of a community-wide, consensus-based, human- and machine-interpretable language for describing phenotypes and their genomic and environmental contexts is perhaps the most pressing scientific bottleneck to integration across many key fields in biology, including genomics, systems biology, development, medicine, evolution, ecology, and systematics. Here we survey the current phenomics landscape, including data resources and handling, and the progress that has been made to accurately capture relevant data descriptions for phenotypes. We present an example of the kind of integration across domains that computable phenotypes would enable, and we call upon the broader biology community, publishers, and relevant funding agencies to support efforts to surmount today's data barriers and facilitate analytical reproducibility.
Ontology is one strategy for promoting interoperability of heterogeneous data through consistent tagging. An ontology is a controlled structured vocabulary consisting of general terms (such as “cell” or “image” or “tissue” or “microscope”) that form the basis for such tagging. These terms are designed to represent the types of entities in the domain of reality that the ontology has been devised to capture; the terms are provided with logical definitions thereby also supporting reasoning over the tagged data.
This paper provides a survey of the biomedical imaging ontologies that have been developed thus far. It outlines the challenges faced in particular by ontologies in the fields of histopathological imaging and image analysis, and suggests a strategy for addressing these challenges in the example domain of quantitative histopathology imaging.
The ultimate goal is to support the multiscale understanding of disease that comes from using interoperable ontologies to integrate imaging data with clinical and genomics data.
Histopathology imaging; interoperability; ontology; quantitative histopathology image ontology
The 2013 Rostock Symposium on Systems Biology and Bioinformatics in Aging Research was again dedicated to dissecting the aging process using in silico means. A particular focus was on ontologies, because these are a key technology to systematically integrate heterogeneous information about the aging process. Related topics were databases and data integration. Other talks tackled modeling issues and applications, the latter including talks focused on marker development and cellular stress as well as on diseases, in particular on diseases of kidney and skin.
A medical intervention is a medical procedure or application intended to relieve or prevent illness or injury. Examples of medical interventions include vaccination and drug administration. After a medical intervention, adverse events (AEs) may occur which lie outside the intended consequences of the intervention. The representation and analysis of AEs are critical to the improvement of public health.
The Ontology of Adverse Events (OAE), previously named Adverse Event Ontology (AEO), is a community-driven ontology developed to standardize and integrate data relating to AEs arising subsequent to medical interventions, as well as to support computer-assisted reasoning. OAE has over 3,000 terms with unique identifiers, including terms imported from existing ontologies and more than 1,800 OAE-specific terms. In OAE, the term ‘adverse event’ denotes a pathological bodily process in a patient that occurs after a medical intervention. Causal adverse events are defined by OAE as those events that are causal consequences of a medical intervention. OAE represents various adverse events based on patient anatomic regions and clinical outcomes, including symptoms, signs, and abnormal processes. OAE has been used in the analysis of several different sorts of vaccine and drug adverse event data. For example, using the data extracted from the Vaccine Adverse Event Reporting System (VAERS), OAE was used to analyse vaccine adverse events associated with the administrations of different types of influenza vaccines. OAE has also been used to represent and classify the vaccine adverse events cited in package inserts of FDA-licensed human vaccines in the USA.
OAE is a biomedical ontology that logically defines and classifies various adverse events occurring after medical interventions. OAE has successfully been applied in several adverse event studies. The OAE ontological framework provides a platform for systematic representation and analysis of adverse events and of the factors (e.g., vaccinee age) important for determining their clinical outcomes.
Ontology of Adverse Events; OAE; Adverse event; Ontology; Vaccine; Drug; Vaccine adverse event; VAERS; Drug adverse event; Design pattern
We have previously described the use of a double-coated agarose-agarose porcine islet macrobead for the treatment of type I diabetes mellitus. In the current study, the long-term viral safety of macrobead implantation into pancreatectomized diabetic dogs treated with pravastatin (n = 3) was assessed, while 2 dogs served as nonimplanted controls. A more gradual return to preimplant insulin requirements occurred after a second implant procedure (days 148, 189, and >652) than after the first macrobead implantation (days 9, 21, and 21) in all macrobead-implanted animals. In all three implanted dogs, porcine C-peptide was detected in the blood for at least 10 days following the first implant and for at least 26 days following the second implant. C-peptide was also present in the peritoneal fluid of all three implanted dogs at 6 months after the second implant and in 2 of 3 dogs at necropsy. Prescreening of islet macrobeads and culture media prior to transplantation was negative for 13 viruses. No evidence of porcine endogenous retrovirus (PERV) or other viral transmission was found throughout the study. This study demonstrates that the long-term (2.4 years) implantation of agarose-agarose-encapsulated porcine islets is a safe procedure in a large animal model of type I diabetes mellitus.
The encapsulation of porcine islets is an attractive methodology for the treatment of Type I diabetes. In the current study, the use of pravastatin as a mild anti-inflammatory agent was investigated in pancreatectomized diabetic canines transplanted with porcine islets encapsulated in agarose-agarose macrobeads and given 80 mg/day of pravastatin (n = 3) while control animals did not receive pravastatin (n = 3). Control animals reached preimplant insulin requirements on days 18, 19, and 32. Pravastatin-treated animals reached preimplant insulin requirements on days 22, 27, and 50. Two animals from each group received a second macrobead implant: control animals remained insulin-free for 15 and 21 days (AUC = 3003 and 5078 mg/dL/24 hr days 1 to 15) and reached preimplant insulin requirements on days 62 and 131. Pravastatin treated animals remained insulin-free for 21 and 34 days (AUC = 1559 and 1903 mg/dL/24 hr days 1 to 15) and reached preimplant insulin requirements on days 38 and 192. Total incidence (83.3% versus 64.3%) and total severity (22.7 versus 18.3) of inflammation on tissue surfaces were higher in the control group at necropsy. These findings support pravastatin therapy in conjunction with the transplantation of encapsulated xenogeneic islets for the treatment of diabetes mellitus.
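The AUC values above summarize serial blood-glucose measurements as an area under the curve over days 1 to 15, conventionally computed with the trapezoid rule. The sketch below shows that computation on invented readings; the glucose values are not data from the study.

```python
# Sketch of computing an area under the curve (AUC) from serial
# measurements via the trapezoid rule, as is conventional for summaries
# like the mg/dL/24 hr figures above. Values below are invented.

def trapezoid_auc(times, values):
    """Area under the piecewise-linear curve through (time, value) points."""
    return sum((values[i] + values[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

days = [1, 5, 10, 15]
glucose = [90, 110, 130, 120]  # hypothetical mg/dL readings

print(trapezoid_auc(days, glucose))  # 1625.0
```

Because the trapezoid rule integrates a piecewise-linear interpolation, denser sampling between days 1 and 15 gives a more faithful AUC.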
The study of biodiversity spans many disciplines and includes data pertaining to species distributions and abundances, genetic sequences, trait measurements, and ecological niches, complemented by information on collection and measurement protocols. A review of the current landscape of metadata standards and ontologies in biodiversity science suggests that existing standards such as the Darwin Core terminology are inadequate for describing biodiversity data in a semantically meaningful and computationally useful way. Existing ontologies, such as the Gene Ontology and others in the Open Biological and Biomedical Ontologies (OBO) Foundry library, provide a semantic structure but lack many of the necessary terms to describe biodiversity data in all its dimensions. In this paper, we describe the motivation for and ongoing development of a new Biological Collections Ontology, the Environment Ontology, and the Population and Community Ontology. These ontologies share the aim of improving data aggregation and integration across the biodiversity domain and can be used to describe physical samples and sampling processes (for example, collection, extraction, and preservation techniques), as well as biodiversity observations that involve no physical sampling. Together they encompass studies of: 1) individual organisms, including voucher specimens from ecological studies and museum specimens, 2) bulk or environmental samples (e.g., gut contents, soil, water) that include DNA, other molecules, and potentially many organisms, especially microbes, and 3) survey-based ecological observations. We discuss how these ontologies can be applied to biodiversity use cases that span genetic, organismal, and ecosystem levels of organization. We argue that if adopted as a standard and rigorously applied and enriched by the biodiversity community, these ontologies would significantly reduce barriers to data discovery, integration, and exchange among biodiversity resources and researchers.
We discuss recent progress in the development of cognitive ontologies and summarize three challenges in the coordinated development and application of these resources. Challenge 1 is to adopt a standardized definition for cognitive processes. We describe three possibilities and recommend one that is consistent with the standard view in cognitive and biomedical sciences. Challenge 2 is harmonization. Gaps and conflicts in representation must be resolved so that these resources can be combined for mark-up and interpretation of multi-modal data. Finally, Challenge 3 is to test the utility of these resources for large-scale annotation of data, search and query, and knowledge discovery and integration. As term definitions are tested and revised, harmonization should enable coordinated updates across ontologies. However, the true test of these definitions will be in their community-wide adoption which will test whether they support valid inferences about psychological and neuroscientific data.
ontology; cognition; mental functioning; neuroscience; annotation; integration; big data; brain science
As biological and biomedical research increasingly references the environmental context of the biological entities under study, the need for formalisation and standardisation of environment descriptors is growing. The Environment Ontology (ENVO; http://www.environmentontology.org) is a community-led, open project which seeks to provide an ontology for specifying a wide range of environments relevant to multiple life science disciplines and, through an open participation model, to accommodate the terminological requirements of all those needing to annotate data using ontology classes. This paper summarises ENVO’s motivation, content, structure, adoption, and governance approach. The ontology is available from http://purl.obolibrary.org/obo/envo.owl; an OBO-format version is also available by switching the file suffix to “obo”.
Environment; Ecosystem; Biome; Ontology