With massive amounts of data being generated in electronic format, there is a need in basic science laboratories to adopt new methods for tracking and analyzing data. An electronic laboratory notebook (ELN) is not just a replacement for a paper lab notebook; it is a new method of storing and organizing data while maintaining the data entry flexibility and legal recording functions of paper notebooks. Paper notebooks are regarded as highly flexible since the user can configure them to store almost anything that can be written or physically pasted onto the pages. However, data retrieval and data sharing from paper notebooks are labor-intensive processes, and notebooks can be misplaced, a single point of failure that loses all entries in the volume. Additional features provided by electronic notebooks include searchable indices, data sharing, automatic archiving for security against loss, and ease of data duplication. Furthermore, ELNs can be tasked with additional functions not commonly found in paper notebooks, such as inventory control. While ELNs have been on the market for some time now, adoption in academic basic science laboratories has lagged. Issues that have restrained development and adoption of ELNs in research laboratories include the sheer variety and frequency of changes in protocols, combined with the need for users to control notebook configuration outside the framework of professional IT staff support. In this commentary, we look at some of the issues and experiences in academic laboratories that have proved challenging in implementing an electronic lab notebook.
eCAT is an electronic lab notebook (ELN) developed by Axiope Limited. It is the first online ELN, the first ELN to be developed in close collaboration with lab scientists, and the first ELN to be targeted at researchers in non-commercial institutions. eCAT was developed in response to feedback from users of a predecessor product. By late 2006 the basic concept had been clarified: a highly scalable web-based collaboration tool that possessed the basic capabilities of commercial ELNs, i.e. a permissions system, controlled sharing, an audit trail, electronic signature and search, and a front end that looked like the electronic counterpart to a paper notebook.
During the development of the beta version feedback was incorporated from many groups including the FDA's Center for Biologics Evaluation & Research, Uppsala University, Children's Hospital Boston, Alex Swarbrick's lab at the Garvan Institute in Sydney and Martin Spitaler at Imperial College. More than 100 individuals and groups worldwide then participated in the beta testing between September 2008 and June 2009. The generally positive response is reflected in the following quote about how one lab is making use of eCAT: "Everyone uses it as an electronic notebook, so they can compile the diverse collections of data that we generate as biologists, such as images and spreadsheets. We use it to take minutes of meetings. We also use it to manage our common stocks of antibodies, plasmids and so on. Finally, perhaps the most important feature for us is the ability to link records, reagents and experiments."
By developing eCAT in close collaboration with lab scientists, Axiope has come up with a practical and easy-to-use product that meets the need of scientists to manage, store and share data online. eCAT is already being perceived as a product that labs can continue to use as their data management and sharing grows in scale and complexity.
Policymakers advocate universal electronic medical records (EMRs) and propose incentives for “meaningful use” of EMRs. Though emergency departments (EDs) are particularly sensitive to the benefits and unintended consequences of EMR adoption, surveillance has been limited. We analyze data from a nationally representative sample of US EDs to ascertain the adoption of various EMR functionalities.
We analyzed data from the National Hospital Ambulatory Medical Care Survey, after pooling data from 2005 and 2006, reporting proportions with 95% confidence intervals (95% CI). In addition to reporting adoption of various EMR functionalities, we used logistic regression to ascertain patient and hospital characteristics predicting “meaningful use,” defined as a “basic” system (managing demographic information, computerized provider order entry, and lab and imaging results). We found that 46% (95% CI 39–53%) of US EDs reported having adopted EMRs. Computerized provider order entry was present in 21% (95% CI 16–27%), and only 15% (95% CI 10–20%) had warnings for drug interactions or contraindications. The “basic” definition of “meaningful use” was met by 17% (95% CI 13–21%) of EDs. Rural EDs were substantially less likely to have a “basic” EMR system than urban EDs (odds ratio 0.19, 95% CI 0.06–0.57, p = 0.003), and Midwestern (odds ratio 0.37, 95% CI 0.16–0.84, p = 0.018) and Southern (odds ratio 0.47, 95% CI 0.26–0.84, p = 0.011) EDs were substantially less likely than Northeastern EDs to have a “basic” system.
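The adjusted odds ratios reported above (e.g. 0.19 for rural vs. urban EDs) come from a logistic regression model. As a minimal sketch of the underlying arithmetic, the following computes an unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table; the counts are hypothetical and illustrative, not taken from the NHAMCS data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = exposed with/without outcome, c/d = unexposed with/without.
    SE of log(OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: rural EDs with/without a "basic" EMR system (5/45)
# versus urban EDs with/without one (60/140).
or_, lo, hi = odds_ratio_ci(5, 45, 60, 140)
# An OR well below 1 with a CI excluding 1 would indicate rural EDs are
# significantly less likely to have a "basic" system.
```

Note that the paper's estimates are regression-adjusted for other covariates, so they will generally differ from a raw 2x2 calculation like this one.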
EMRs are becoming more prevalent in US EDs, though only a minority use EMRs in a “meaningful” way, no matter how “meaningful” is defined. Rural EDs are less likely to have an EMR than metropolitan EDs, and Midwestern and Southern EDs are less likely to have an EMR than Northeastern EDs. We discuss the nuances of how to define “meaningful use,” and the importance of considering not only adoption, but also full implementation and consequences.
A new generation of DNA sequencing technologies has enabled a variety of novel genome-scale experimental techniques. What is perhaps most unique about this recent data explosion is that it is distributed: relatively inexpensive instruments allow any lab or institution to produce enormous amounts of data. Yet the infrastructure upstream and downstream of sequencing instruments is largely undeveloped. In addition to the instrument cost, labs, core facilities and sequencing service providers are forced to spend thousands on commercial LIMS systems and sequence analysis packages, which are in turn based on tools from the public domain. Galaxy provides a robust open-source alternative. Its lightweight sample tracking system is aimed at helping small labs and core facilities manage requests for sequencing runs. It allows one to track the entire "life-cycle" of a sequencing request, from the initial sample to the resulting dataset. Once the run is complete the user can apply a variety of NGS tools including format converters, mappers, and ChIP-seq and transcriptome utilities. Results of these analyses can be visualized, shared, and published. In this presentation we will demonstrate sample tracking functionality from the moment of sample submission to the sequencing facility, through the sequencing run, until the sample becomes a dataset and can be analyzed with a variety of NGS tools.
In modern life science research, efficient management of high-throughput primary lab data is essential. To realise such management, four main aspects have to be handled: (I) long-term storage, (II) security, (III) upload and (IV) retrieval.
In this paper we define central requirements for primary lab data management and discuss aspects of best practice for realising these requirements. As a proof of concept, we introduce a pipeline that has been implemented to manage primary lab data at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK). It comprises: (I) a data storage implementation including a Hierarchical Storage Management system, a relational Oracle Database Management System and a BFiler package to store primary lab data and their meta-information, (II) a Virtual Private Database (VPD) implementation for the realisation of data security, and the LIMS Light application to (III) upload and (IV) retrieve stored primary lab data.
With the LIMS Light system we have developed a primary data management system which provides efficient storage via a Hierarchical Storage Management System and an Oracle relational database. With our VPD access control method we can guarantee the security of the stored primary data. Furthermore, the system provides high-performance upload and download and efficient retrieval of data.
In recent years, the genome biology community has expended considerable effort to confront the challenges of managing heterogeneous data in a structured and organized way, and has developed laboratory information management systems (LIMS) for both raw and processed data. In parallel, electronic notebooks were developed to record and manage scientific data and facilitate data sharing. Software which enables both management of large datasets and digital recording of laboratory procedures would serve a real need in laboratories using medium- and high-throughput techniques.
We have developed iLAP (Laboratory data management, Analysis, and Protocol development), a workflow-driven information management system specifically designed to create and manage experimental protocols, and to analyze and share laboratory data. The system combines experimental protocol development, wizard-based data acquisition, and high-throughput data analysis into a single, integrated system. We demonstrate the power and the flexibility of the platform using a microscopy case study based on a combinatorial multiple fluorescence in situ hybridization (m-FISH) protocol and 3D-image reconstruction. iLAP is freely available under the open source license AGPL from http://genome.tugraz.at/iLAP/.
iLAP is a flexible and versatile information management system, which has the potential to close the gap between electronic notebooks and LIMS and can therefore be of great value for a broad scientific community.
OmicsHub Proteomics integrates all the steps of a mass spectrometry experiment in one platform, reducing time and data management complexity. The data automation and data management/analysis provided by OmicsHub Proteomics address the typical problems lab members encounter on a daily basis and simplify tasks such as multiple search engine support, pathway integration and custom report generation for external customers. OmicsHub has been designed as a central data management system to collect, analyze and annotate proteomics experimental data, enabling users to automate tasks. OmicsHub Proteomics helps laboratories easily meet proteomics standards such as PRIDE or FuGE and works with controlled-vocabulary experiment annotation. The software enables lab members to take greater advantage of the unique capabilities of the Mascot and Phenyx search engines for protein identification. Multiple searches can be launched at once, allowing peak list data from several spots or chromatograms to be sent concurrently to Mascot/Phenyx. OmicsHub Proteomics works for both LC and gel workflows. The system allows users to store and compare proteomics data generated from different mass spectrometry instruments in a single platform, instead of requiring separate software for each of them. It is a web application which installs on a single server and needs only a web browser for access. All experimental actions are user-stamped and date-stamped, allowing audit tracking of every action performed in OmicsHub. Among the main features of OmicsHub Proteomics are protein identification, biological annotation, report customization, the PRIDE standard, pathway integration, grouping of protein results to remove redundancy, peak filtering and FDR cutoffs for decoy databases. OmicsHub Proteomics is flexible enough for parsers for new file formats to be added easily, and its perpetual license is competitively priced.
At the University of California, San Francisco, an automated cytopathology system has been developed to meet two main objectives: the information processing needs of the cytopathology department, and the integration of the cytopathology system with both the surgical pathology system and the hospital information system. The cytopathology system has been in operation since March 1, 1982. Benefits to the department include automatic SNOMED coding of diagnoses, online retrieval of diagnoses, automatic billing, faster turnaround between accession and signout, improved management, and reduced paper flow. Current interactions with the hospital information system include access to the centralized patient demographic file and access to medical data from other systems such as the clinical lab, medical records, radiology, and surgical pathology. Planned extensions include online signout of cases and transmittal of cytology diagnoses to other clinical systems.
The operation of a bioinformatics core facility is constantly challenged by increasing data volume, emerging technologies, and limited budgets. We discuss a Collaborative Life Cycle (CLC) process as a business management model for this challenging environment. Unlike the traditional involvement at the last stage of data analysis, the CLC process engages the bioinformatics core facility throughout the project with the PIs and the wet lab core facilities. The tasks of the bioinformatics core during the project's life cycle include: 1) Planning: study design and statistical power analysis with the PIs; 2) Experiment: core wet lab sample tracking and data management; 3) Data analysis: data quality control and interpretation in the biological context. Collaboration throughout the project life cycle is critical because the multiple-phase process involves a number of professional disciplines with many skills, tools, and procedures. The CLC process helps the bioinformatics core facility align multiple goals, adjust expectations, mitigate potential risks, and improve the end results.
After the first report by Kalloo et al on transgastric peritoneoscopy in pigs, it rapidly became apparent that there was no room for an under-evaluated concept and blind adoption of an appealing (r)evolution in minimal access surgery. Systematic experimental work became mandatory before any translation to the clinical setting. Choice and management of the access site, and techniques of dissection, exposure, retraction and tissue approximation-sealing were the basics that needed to be evaluated before considering any surgical procedure or studying the relevance of natural orifice transluminal endoscopic surgery (NOTES). After several years of testing in experimental labs, the revolutionary concept of NOTES is now progressively being evaluated in clinical settings. In this paper the authors analyse the challenges, limitations and solutions to assess how to move from the lab to clinical implementation of transgastric endoscopic cholecystectomy.
Flexible surgery; Cholecystectomy; Natural orifice transluminal endoscopic surgery; Minimally invasive surgery; Endoscopic surgery
The spread of whole slide imaging or digital slide systems in pathology as an innovative technique seems unstoppable. Successful introduction of digital slides in education has played a crucial role in reaching this level of acceptance. Practically speaking, there is no university institute where digital materials are not built into pathology education. At the 1st Department of Pathology and Experimental Cancer Research, Semmelweis University, optical microscopes have been replaced, and for four years only digital slides have been used in education. The aim of this paper is to summarize our experiences with the installation of a fully digitized histology lab for graduate education.
We have installed a digital histology lab with 40 PCs and two slide servers: one for internal use and one with external internet access. We have digitized hundreds of slides and, after 4 years, use a set of 126 slides during the pathology course. A student satisfaction questionnaire and a tutor satisfaction questionnaire were designed, both to be completed voluntarily, to gather feedback from the users. The page load statistics of the external slide server were evaluated.
The digital histology lab served ~900 students and ~1600 hours of histology practice. The questionnaires revealed high satisfaction with digital slides. The results also emphasize the importance of the tutors' attitude towards digital microscopy as a factor influencing the students' satisfaction. The constantly growing number of page downloads from the external server confirms this satisfaction and the acceptance of digital slides.
We are confident, and have also shown, that digital slides have numerous advantages over optical slides and are more suitable for education.
As proteomic data sets increase in size and complexity, the necessity grows for database-centric software systems able to organize, compare, and visualize all the proteomic experiments in a lab. We recently developed an integrated platform called the high-throughput autonomous proteomic pipeline (HTAPP) for the automated acquisition and processing of quantitative proteomic data, and integration of proteomic results with existing external protein information resources within a lab-based relational database called PeptideDepot. Here, we introduce the peptide validation software component of this system, which combines relational database-integrated electronic manual spectral annotation in Java with a new software tool in the R programming language for the generation of logistic regression spectral models from user-supplied validated data sets and flexible application of these user-generated models in automated proteomic workflows. This logistic regression spectral model uses both variables computed directly from SEQUEST output and deterministic variables based on expert manual validation criteria of spectral quality. In the case of linear quadrupole ion trap (LTQ) or LTQ-FTICR LC/MS data, our logistic spectral model outperformed both XCorr (242% more peptides identified on average) and the X!Tandem E-value (87% more peptides identified on average) at a 1% false discovery rate estimated by a decoy database approach.
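The decoy database approach mentioned above estimates the false discovery rate by searching spectra against reversed or shuffled "decoy" sequences: the number of decoy hits passing a score threshold approximates the number of false target hits. A minimal sketch (toy scores, not SEQUEST output; function names are illustrative):

```python
def fdr_at_threshold(target_scores, decoy_scores, t):
    """Target-decoy FDR estimate at score cutoff t:
    FDR ~ (decoy hits with score >= t) / (target hits with score >= t)."""
    n_target = sum(s >= t for s in target_scores)
    n_decoy = sum(s >= t for s in decoy_scores)
    return (n_decoy / n_target) if n_target else 0.0

def threshold_for_fdr(target_scores, decoy_scores, alpha=0.01):
    """Smallest score cutoff whose estimated FDR is <= alpha (e.g. 1%)."""
    for t in sorted(set(target_scores)):
        if fdr_at_threshold(target_scores, decoy_scores, t) <= alpha:
            return t
    return None
```

In practice the score being thresholded would be the logistic model's predicted probability rather than a raw XCorr or E-value, which is precisely how a better-discriminating model yields more identifications at the same 1% FDR.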
Decoy database; Logistic regression model; SEQUEST; Software; Spectral validation
Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases.
Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and the Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources.
We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/
Semantic Web; neuroscience; description logic; ontology mapping; Web Ontology Language; integration
Consumer-friendly Personal Health Records (PHRs) have the potential to provide patients with the basis for taking an active role in their healthcare. However, few studies have focused on the features that make health records comprehensible to lay audiences. This paper presents a survey of patients' experiences with reviewing their health records, conducted to identify barriers to optimal record use. The data are analyzed via descriptive statistics and thematic analysis. The results point to providers' notes, laboratory test results and radiology reports as the most difficult record sections for lay reviewers. Professional medical terminology, lack of explanations of complex concepts (e.g., lab test ranges) and suboptimal data ordering emerge as the most common comprehension barriers. While most patients today access their records in paper format, electronic PHRs present many more opportunities for providing comprehension support.
The Research Specimen Banking (RSB) system is a component of the translational investigations infrastructure at Moffitt Cancer Center & Research Institute. It was implemented to provide specimen management functions to support basic science cancer research taking place in conjunction with cancer clinical trials. RSB handles the receipt and distribution of clinical specimens to the research labs, with identifiers that both mask personal identity and enable linkage of clinical data to correlative research lab data collected by the study system. RSB was integrated with existing clinical, research lab, and administrative workflows. This poster summarizes the system's features.
The means we use to record the process of carrying out research remains tied to the concept of a paginated paper notebook, despite the advances over the past decade in web-based communication and publication tools. The development of these tools offers an opportunity to re-imagine what the laboratory record would look like if it were re-built in a web-native form. In this paper I describe a distributed approach to the laboratory record which uses the most appropriate tool available to house and publish each specific object created during the research process, whether it be a physical sample, a digital data object, or the record of how one was created from another. I propose that the web-native laboratory record would act as a feed of relationships between these items. This approach can be seen as complementary to, rather than competitive with, integrative approaches that aim to aggregate relevant objects together to describe knowledge. The potential for the recent announcement of the Google Wave protocol to have a significant impact on realizing this vision is discussed, along with the issues of security and provenance that are raised by such an approach.
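The "feed of relationships" idea above can be sketched concretely: each research object lives at its own URL, and the laboratory record is a stream of typed links between them. The structure and relation names below are hypothetical illustrations of the concept, not a format proposed in the paper.

```python
# A toy "feed of relationships" between research objects. Every object
# (sample, procedure, dataset) is identified by the URL where it is hosted;
# the record itself is just the list of links. URLs and relation names
# are invented for illustration.
feed = [
    {"subject": "https://lab.example/sample/42",          # physical sample
     "relation": "wasInputTo",
     "object": "https://lab.example/procedure/pcr-07"},   # lab procedure
    {"subject": "https://lab.example/procedure/pcr-07",
     "relation": "produced",
     "object": "https://data.example/gel-image-13"},      # digital data object
]

def objects_derived_from(feed, url):
    """Walk the feed forward from one object to list everything
    downstream of it -- a provenance query over the record."""
    out, frontier = [], [url]
    while frontier:
        current = frontier.pop()
        for entry in feed:
            if entry["subject"] == current:
                out.append(entry["object"])
                frontier.append(entry["object"])
    return out
```

Because each entry only points at objects by URL, the feed composes naturally with whatever external service actually hosts each object, which is the complementarity with aggregation approaches that the paper argues for.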
Inquiry-based labs have been shown to greatly increase student participation and learning within the biological sciences. One challenge is to develop effective lab exercises within the constraints of large introductory labs. We have designed a lab for first-year biology majors to address two primary goals: to provide effective learning of the unique aspects of the plant life cycle and to gain a practical knowledge of experimental design. An additional goal was to engage students regardless of their biology background. In our experience, plant biology, and the plant life cycle in particular, presents a pedagogical challenge because of negative student attitudes and lack of experience with this topic. This lab uses the fern Ceratopteris richardii (C-Fern), a model system for teaching and research that is particularly useful for illustrating alternation of generations. This lab does not simply present the stages of the life cycle; it also uses knowledge of alternation of generations as a starting point for characterizing the her1 mutation that affects gametophyte sexual development. Students develop hypotheses, arrive at an appropriate experimental design, and carry out a guided inquiry on the mechanism underlying the her1 mutation. Quantitative assessment of student learning and attitudes demonstrates that this lab achieves the desired goals.
Laboratories that produce protein reagents for research and development face the challenge of deciding whether to track batch-related data using simple file-based storage mechanisms (e.g. spreadsheets and notebooks), or to commit the time and effort to install, configure and maintain a more complex laboratory information management system (LIMS). Managing reagent data stored in files is challenging because files are often copied, moved, and reformatted. Furthermore, there is no simple way to query the data if and when questions arise. Commercial LIMS often include additional modules that may be paid for but not actually used, and they often require software expertise to truly customize them for a given environment.
This web application allows small-to-medium-sized protein production groups to track data related to plasmid DNA, conditioned media samples (supes), cell lines used for expression, and purified protein information, including method of purification and quality control results. In addition, a request system was added that includes a means of prioritizing requests to help manage the high demand on protein production resources at most organizations. ProteinTracker makes extensive use of existing open-source libraries and is designed to track essential data related to the production and purification of proteins.
ProteinTracker is an open-source web-based application that provides organizations with the ability to track key data involved in the production and purification of proteins and may be modified to meet the specific needs of an organization. The source code and database setup script can be downloaded from http://sourceforge.net/projects/proteintracker. This site also contains installation instructions and a user guide. A demonstration version of the application can be viewed at http://www.proteintracker.org.
Protein; Production; Purification; Reagent; Tracking; Prioritization; Web; Application
Because cell biology has rapidly increased in breadth and depth, instructors are challenged not only to provide undergraduate science students with a strong, up-to-date foundation of knowledge, but also to engage them in the scientific process. To these ends, revision of the Cell Biology Lab course at the University of Wisconsin–La Crosse was undertaken to allow student involvement in experimental design, emphasize data collection and analysis, make connections to the “big picture,” and increase student interest in the field. Multiweek laboratory modules were developed as a method to establish an inquiry-based learning environment. Each module utilizes relevant techniques to investigate one or more questions within the context of a fictional story, and there is a progression during the semester from more instructor-guided to more open-ended student investigation. An assessment tool was developed to evaluate student attitudes regarding their lab experience. Analysis of five semesters of data strongly supports the module format as a successful model for inquiry education by increasing student interest and improving attitude toward learning. In addition, student performance on inquiry-based assignments improved over the course of each semester, suggesting an improvement in inquiry-related skills.
inquiry; undergraduate; laboratory; cell biology; multiweek
As gene expression profile data from DNA microarrays accumulate rapidly, there is a natural need to compare data across labs and platforms. Comparisons of microarray data can be quite challenging due to data complexity and variability. Different labs may adopt different technology platforms. One may ask about the degree of agreement we can expect from different labs and different platforms. To address this question, we conducted a study of inter-lab and inter-platform agreement of microarray data across three platforms and three labs. The statistical measures of consistency and agreement used in this paper are the Pearson correlation, intraclass correlation, kappa coefficients, and a measure of intra-transcript correlation. The three platforms used in the present paper were Affymetrix GeneChip, custom cDNA arrays, and custom oligo arrays. Using the within-platform variability as a benchmark, we found that these technology platforms exhibited an acceptable level of agreement, but the agreement between two technologies within the same lab was greater than that between two labs using the same technology. The consistency of replicates in each experiment varies from lab to lab. When there is high consistency among replicates, different technologies show good agreement within and across labs using the same RNA samples. On the other hand, the lab effect, especially when confounded with the RNA sample effect, plays a bigger role than the platform effect on data agreement.
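The agreement measures named above are standard statistics. As a minimal, self-contained sketch (toy inputs, not the study's microarray data), the following computes the Pearson correlation for continuous expression values and Cohen's kappa for categorical calls (e.g. up/down regulation) from two labs:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical calls:
    kappa = (observed agreement - expected agreement) / (1 - expected)."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    categories = set(r1) | set(r2)
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa is the natural choice when comparing discretized calls across labs because raw percent agreement can look high purely by chance when one category dominates.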
Health care has taken advantage of computers to streamline many clinical and administrative processes. However, the potential of health care information technology as a source of data for clinical and administrative decision support has not been fully explored. This paper describes the process of developing on-line analytical processing (OLAP) capacity from data generated in an on-line transaction processing (OLTP) system (the electronic patient record). We discuss the steps used to evaluate the EPR system, retrieve the data, and create an analytical data warehouse accessible for analysis. We also summarize studies based on the data (lab re-engineering, practice variation in diagnostic decision-making and evaluation of a clinical alert). Besides producing a useful data warehouse, the process also increased understanding of organizational and cost considerations in purchasing OLAP tools. We discuss the limitations of our approach and ways in which these limitations can be addressed.
Image acquisition, processing, and quantification of objects (morphometry) require the integration of data inputs and outputs originating from heterogeneous sources. Managing the data exchange along this workflow in a systematic manner poses several challenges, notably the description of the heterogeneous meta-data and interoperability between the software tools used. The use of integrated software solutions for morphometry and management of imaging data, in combination with ontologies, can reduce meta-data loss and greatly facilitate subsequent data analysis. This paper presents an integrated information system called LabIS. The system has two objectives: (i) to automate the storage, annotation, and querying of image measurements, and (ii) to provide means for data sharing with third-party applications that consume measurement data, using open standard communication protocols. LabIS implements a 3-tier architecture with a relational database back-end and an application-logic middle tier realizing a web-based user interface for reporting and annotation and a web-service communication layer. The image processing and morphometry functionality is backed by interoperability with ImageJ, a public domain image processing program, via integrated clients. Instrumental to the latter was the construction of a data ontology representing the common measurement data model. LabIS supports user profiling and can store arbitrary types of measurements, regions of interest, calibrations, and ImageJ settings. Interpretation of the stored measurements is facilitated by atlas mapping and ontology-based markup. The system can be used as an experimental workflow management tool allowing for description and reporting of the performed experiments. LabIS can also be used as a measurement repository that can be transparently accessed by computational environments such as Matlab. Finally, the system can be used as a data sharing tool.
web-service; ontology; morphometry
The objective of this paper is to understand which characteristics and features of clinical data most influence physicians' decisions about ordering laboratory tests or prescribing medications. We conduct our analysis on data and decisions extracted from the electronic health records of 4486 post-surgical cardiac patients. Summary statistics for 335 different lab order decisions and 407 medication decisions are reported. We show that in many cases, physicians' lab-order and medication decisions are predicted well by simple patterns such as the last value of a single test result, the time since a certain lab test was ordered, or the time since a certain procedure was performed.
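The "simple patterns" described above amount to two features per test: the last observed value and the elapsed time since the last order. A minimal sketch of such a feature-based decision rule follows; the test name, reference range, and time threshold are invented for illustration and are not taken from the study.

```python
def features(history, now_h):
    """Extract the two simple predictors from one patient's history of a
    single lab test. history: time-sorted list of (time_in_hours, value)."""
    t_last, v_last = history[-1]
    return {"last_value": v_last, "hours_since_order": now_h - t_last}

def reorder_potassium(history, now_h, normal=(3.5, 5.0), max_gap_h=24):
    """Toy rule in the spirit of the paper's patterns: reorder the test if
    the last value was out of the (hypothetical) normal range, or if the
    test has not been ordered within the last max_gap_h hours."""
    f = features(history, now_h)
    out_of_range = not (normal[0] <= f["last_value"] <= normal[1])
    return out_of_range or f["hours_since_order"] > max_gap_h
```

A learned model would fit the thresholds from data rather than hard-coding them, but the point of the paper is that even rules of this simple shape track physician behavior well for many decisions.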
Data Interpretation, Statistical [E05.318.740.300]; Decision Support Systems, Clinical [L01.700.508.300.190]; Decision Support Techniques [E05.245]; Evidence-Based Medicine [H02.249.750]
Long-term sample storage, tracing of data flow and data export for subsequent analyses are of great importance in genetics studies. Molecular labs therefore need a proper information system to handle an increasing amount of data from different projects.
We have developed a molecular lab information management system (MolabIS). It was implemented as a web-based system allowing users to capture original data at each step of their workflow. MolabIS provides essential functionality for managing information on individuals, tracking samples and storage locations, capturing raw files, importing final data from external files, searching results, and accessing and modifying data. Further important features include options to generate ready-to-print reports and to convert sequence and microsatellite data into various data formats, which can be used as input files in subsequent analyses. Moreover, MolabIS provides a tool for data migration.
MolabIS is designed for small-to-medium-sized labs conducting Sanger sequencing and microsatellite genotyping to store and efficiently handle a relatively large amount of data. MolabIS not only helps avoid time-consuming tasks but also ensures the availability of data for further analyses. The software is packaged as a virtual appliance which can run on different platforms (e.g. Linux, Windows). MolabIS can be distributed to a wide range of molecular genetics labs since it was developed according to a general data model. Released under the GPL, MolabIS is freely available at http://www.molabis.org.
Research laboratories studying the genetics of companion animals have no database tools specifically designed to aid in the management of the many kinds of data that are generated, stored and analyzed. We have developed a relational database, "DOG-SPOT," to provide such a tool. Implemented in MS-Access, the database is easy to extend or customize to suit a lab's particular needs. With DOG-SPOT a lab can manage data relating to dogs, breeds, samples, biomaterials, phenotypes, owners, communications, amplicons, sequences, markers, genotypes and personnel. Such an integrated data structure helps ensure high quality data entry and makes it easy to track physical stocks of biomaterials and oligonucleotides.