BrainSnail is an application software package that allows for easy extraction of information from scientific literature
and the display of interactions evident from the processed text. The display application is based on a paradigm
presented earlier in OntoSlug [3]. In brief, there are five
classes of objects: regions, substances, receptors, transporters and genes. They are displayed in three different layers (anatomy,
pharmacology and reference-centered), each offering a different point of view on the given topic of interest.
The anatomy layer is based on the nested relationships typical of the anatomical view of brain regionalism, in that
brain regions contain specific substances, receptors, transporters and genes documented in the literature. The pharmacology
layer takes a chemical-molecular approach, in that interactions between substances, receptors, transporters and genes are the main
focus; regionalism is omitted in this layer. The reference layer is a tool for displaying elements that stand
in relationships to one another or are associated with specific references in the literature. In this layer, all objects
are treated without a hierarchy or a specific set of interactions in mind.
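The five object classes and three layer views described above can be sketched as follows. The class names come from the text, but the data structures, field names and interaction kinds are illustrative assumptions, not BrainSnail's actual internal model.

```python
from dataclasses import dataclass

# The five object classes named in the text.
OBJECT_CLASSES = {"region", "substance", "receptor", "transporter", "gene"}

@dataclass
class Element:
    name: str
    object_class: str  # one of OBJECT_CLASSES

    def __post_init__(self):
        if self.object_class not in OBJECT_CLASSES:
            raise ValueError(f"unknown object class: {self.object_class}")

@dataclass
class Interaction:
    source: Element
    target: Element
    kind: str       # hypothetical examples: "expresses", "binds", "transports"
    reference: str  # citation key for the supporting literature

def anatomy_view(interactions):
    """Anatomy layer: nested view mapping each brain region to the
    elements documented for it."""
    view = {}
    for i in interactions:
        if i.source.object_class == "region":
            view.setdefault(i.source.name, []).append(i.target.name)
    return view

def pharmacology_view(interactions):
    """Pharmacology layer: chemical-molecular interactions only,
    with regionalism omitted."""
    return [i for i in interactions
            if i.source.object_class != "region"
            and i.target.object_class != "region"]

def reference_view(interactions):
    """Reference layer: elements grouped flatly by supporting reference,
    with no hierarchy imposed."""
    view = {}
    for i in interactions:
        view.setdefault(i.reference, []).extend([i.source.name, i.target.name])
    return view
```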
Control of the objects in these three layers takes place through a dynamic graphical user interface, which has been
augmented with scripting capacity for quick execution of multi-object manipulation and data extraction. The graphical user interface is
designed to facilitate manipulation of the nodes and the underlying information they represent, and to display the relationships
among them intuitively. However, each of the three layers supports a different range of interactions, reflecting the perspective of that layer.
Data input takes place in a separate input application that has been optimized for rapid text extraction and note-taking. Text may
be combined with a picture source, which allows figures and tables pertinent to the text note to be added. The input process is
programmed so that the text is scanned for words referring to receptors, regions, substances, genes and transporters, as well as keywords
that allow interactions between them to be deduced; a basic natural language processing engine extracts these standardized
interactions from the entered text. The suggested standardized interactions between elements can then be accepted as
suggested, corrected or discarded. This is followed by integration of the data into individual XML files that make up the data core of the display application. This
method of linking individual elements to text passages, and text passages to reference information, allows individual
segments of the data collection to be shared or removed with ease.
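The keyword-driven suggestion step and the per-interaction XML storage might look roughly like the sketch below. The vocabulary, the interaction keyword list and the XML schema are hypothetical stand-ins; BrainSnail's actual word lists and file format are not specified here.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical element vocabulary (term -> object class) and interaction
# keywords; illustrative only, not BrainSnail's actual lists.
VOCABULARY = {
    "dopamine": "substance",
    "D2": "receptor",
    "DAT": "transporter",
    "striatum": "region",
}
INTERACTION_KEYWORDS = {"binds", "activates", "inhibits", "expresses"}

def suggest_interactions(text):
    """Scan a text note for known elements and interaction keywords,
    returning standardized (element, keyword, element) suggestions that
    the user can accept, correct or discard."""
    tokens = re.findall(r"\w+", text)
    suggestions = []
    for i, tok in enumerate(tokens):
        if tok in INTERACTION_KEYWORDS:
            left = [t for t in tokens[:i] if t in VOCABULARY]
            right = [t for t in tokens[i + 1:] if t in VOCABULARY]
            if left and right:
                suggestions.append((left[-1], tok, right[0]))
    return suggestions

def to_xml(suggestion, passage, reference):
    """Store an accepted interaction as a small XML fragment that links
    the element pair to its text passage and reference, so individual
    segments can later be shared or removed."""
    src, kind, tgt = suggestion
    node = ET.Element("interaction", kind=kind)
    ET.SubElement(node, "source", type=VOCABULARY[src]).text = src
    ET.SubElement(node, "target", type=VOCABULARY[tgt]).text = tgt
    ET.SubElement(node, "passage").text = passage
    ET.SubElement(node, "reference").text = reference
    return ET.tostring(node, encoding="unicode")
```

A keyword scan of this kind is only a first pass; the accept/correct/discard review step described above is what keeps the data core accurate.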
Output of searches, or of information pertaining to a single element, takes place in HTML with embedded links that allow quick
navigation to the original reference, provided the reference is available on the internet. The HTML format was chosen over a
custom display system because most users are accustomed to, and comfortable navigating in, this environment. Combining offline and online
data offers a high degree of consistency in terms of accuracy of references. Information that is not available through printed media, such as
personal communications, can be integrated as well, but lacks in-print verification. This allows for a great degree of flexibility in data
handling, while maintaining accountability in terms of information accuracy and verification.
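A minimal sketch of the HTML output step, assuming a simple record format with an optional URL field; the field names and markup are illustrative, not BrainSnail's actual output format.

```python
import html

def render_results(element, records):
    """Render search results for one element as HTML, embedding a link to
    the original reference whenever a URL is available (online data) and
    falling back to plain text otherwise, e.g. for personal communications
    that lack in-print verification."""
    rows = []
    for rec in records:
        ref = html.escape(rec["reference"])
        if rec.get("url"):
            ref = f'<a href="{html.escape(rec["url"])}">{ref}</a>'
        rows.append(f'<li>{html.escape(rec["passage"])} ({ref})</li>')
    return (f"<h2>{html.escape(element)}</h2>\n<ul>\n"
            + "\n".join(rows) + "\n</ul>")
```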