Results 1-13 (13)
1.  Design preferences and cognitive styles: experimentation by automated website synthesis 
Background
This article demonstrates the computational synthesis of Web-based experiments for studying the relationships among participants' design preferences, rationale, and cognitive test performance. The example experiments were computationally synthesised, including the websites as materials, the experiment protocols as methods, and the cognitive tests as protocol modules. This work also illustrates the use of a website synthesiser as an essential instrument that enabled the participants to explore different possible designs, generated on the fly, before selecting their preferred ones.
Methods
The participants were given interactive tree and table generators so that they could explore different ways of presenting causality information in tables and trees as the visualisation formats. The participants gave their preference ratings for the available designs, as well as their rationale (criteria) for their design decisions. The participants were also asked to take four cognitive tests, which focus on the aspects of visualisation and analogy-making. The relationships among preference ratings, rationale, and the results of the cognitive tests were analysed by conservative non-parametric statistics, including the Wilcoxon test, the Kruskal-Wallis test, and Kendall correlation.
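As a rough illustration only, the three non-parametric analyses named in this abstract can be sketched with SciPy's standard implementations; the rating and score data below are hypothetical, not the study's data:

```python
# Sketch of the non-parametric analyses named above, using SciPy.
# All data here are made up for illustration.
from scipy.stats import wilcoxon, kruskal, kendalltau

# Paired preference ratings (tree vs. table) from the same participants.
tree_ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3]
table_ratings = [3, 4, 2, 4, 3, 3, 3, 4, 2, 3]

# Wilcoxon signed-rank test: do the paired ratings differ systematically?
w_stat, w_p = wilcoxon(tree_ratings, table_ratings)

# Kruskal-Wallis test: do ratings differ across more than two independent
# groups (e.g. participants binned by cognitive-test score)?
low, mid, high = [2, 3, 3], [3, 4, 3, 4], [4, 5, 4]
k_stat, k_p = kruskal(low, mid, high)

# Kendall correlation: monotonic association between preference ratings
# and cognitive-test scores.
scores = [55, 62, 47, 70, 58, 65, 40, 72, 60, 50]
tau, t_p = kendalltau(tree_ratings, scores)

print(w_p, k_p, tau)
```

These tests make no normality assumptions, which is why the abstract describes them as conservative.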
Results
In the test, 41 of the 64 participants preferred graphical (tree-form) to tabular presentation. Despite this popular preference for graphical presentation, the given tabular presentation was generally rated as easier to interpret than the graphical presentation, especially by those who scored lower in the visualisation and analogy-making tests.
Conclusions
This evidence helps generate the hypothesis that design preferences are related to specific cognitive abilities. Without computational synthesis, the experimental setup and scientific results would have been impractical to obtain.
doi:10.1186/1759-4499-4-2
PMCID: PMC3386886  PMID: 22748000
2.  P2P proteomics -- data sharing for enhanced protein identification 
Background
In order to tackle the important and challenging problem in proteomics of identifying known and new protein sequences using high-throughput methods, we propose a data-sharing platform that uses fully distributed P2P technologies to share specifications of peer-interaction protocols and service components. By using such a platform, the information to be searched is no longer centralised in a few repositories but is instead gathered from experiments in peer proteomics laboratories, and can subsequently be searched by fellow researchers.
Methods
The system runs, in a distributed fashion, a data-sharing protocol specified in the Lightweight Communication Calculus that underlies the system, through which researchers interact via message passing. For this, researchers interact with the system through particular components that link to database querying systems based on BLAST and/or OMSSA and to GUI-based visualisation environments. We have tested the proposed platform with data drawn from pre-existing MS/MS data reservoirs for the 2006 ABRF (Association of Biomolecular Resource Facilities) test sample, which was extensively tested during the ABRF Proteomics Standards Research Group 2006 worldwide survey. In particular, we have taken the data available from a subset of proteomics laboratories of Spain's National Institute for Proteomics, ProteoRed, a network for the coordination, integration and development of the Spanish proteomics facilities.
Results and Discussion
We performed queries against nine databases, including seven ProteoRed proteomics laboratories, the NCBI Swiss-Prot database and the local database of the CSIC/UAB Proteomics Laboratory. A detailed analysis of the results indicated the presence of a protein that was supported by other NCBI matches and by highly scored matches in several proteomics labs. The analysis clearly indicated that the protein was a relatively highly concentrated contaminant that could be present in the ABRF sample. This fact is evident from the information that could be derived from the proposed P2P proteomics system; however, it is not straightforward to arrive at the same conclusion by conventional means, as it is difficult to rule out organic contamination of samples. The actual presence of this contaminant was only confirmed after the ABRF study of all the identifications reported by the laboratories.
doi:10.1186/1759-4499-4-1
PMCID: PMC3298698  PMID: 22293032
3.  OpenKnowledge for peer-to-peer experimentation in protein identification by MS/MS 
Background
Traditional scientific workflow platforms usually run individual experiments with little of the evaluation and analysis of performance required by automated experimentation, in which scientists are allowed to access numerous applicable workflows rather than being committed to a single one. Experimental protocols and data in a peer-to-peer environment could potentially be shared freely, without any single point of authority to dictate how experiments should be run. In such an environment it is necessary to have mechanisms by which each individual scientist (peer) can assess, locally, how he or she wants to be involved with others in experiments. This study aims to implement and demonstrate simple peer ranking under the OpenKnowledge peer-to-peer infrastructure by both simulated and real-world bioinformatics experiments involving multi-agent interactions.
Methods
A simulated experiment environment with a peer ranking capability was specified in the Lightweight Coordination Calculus (LCC) and automatically executed under the OpenKnowledge infrastructure. Peers such as MS/MS protein identification services (including web-enabled and independent programs) were made accessible as OpenKnowledge Components (OKCs) for automated execution as peers in the experiments. The performance of the peers in these automated experiments was monitored and evaluated by simple peer ranking algorithms.
Results
Peer ranking experiments with simulated peers exhibited characteristic behaviours, e.g., a power-law effect (a few peers dominate) similar to that observed in the traditional Web. Real-world experiments were run using an interaction model in LCC involving two different types of MS/MS protein identification peers, viz., peptide fragment fingerprinting (PFF) and de novo sequencing, with another peer ranking algorithm based simply on counting successful and failed runs. This study demonstrated a novel integration and useful evaluation of specific proteomic peers and found MASCOT to be a dominant peer as judged by peer ranking.
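The abstract does not give the ranking algorithm itself; a minimal sketch of ranking peers by counting successful and failed runs, with illustrative names and data (not taken from the OpenKnowledge implementation), might look like:

```python
# Minimal sketch of peer ranking by counting successful and failed runs.
# Peer names and the run log are illustrative only.
from collections import defaultdict

def rank_peers(run_log):
    """Rank peers best-first by success rate, breaking ties by run count.

    run_log: iterable of (peer_name, succeeded) tuples.
    """
    wins = defaultdict(int)
    total = defaultdict(int)
    for peer, succeeded in run_log:
        total[peer] += 1
        if succeeded:
            wins[peer] += 1
    return sorted(
        total,
        key=lambda p: (wins[p] / total[p], total[p]),
        reverse=True,
    )

# Example: a PFF peer succeeds more often than a de novo sequencing peer,
# so it is ranked first.
log = [
    ("mascot", True), ("mascot", True), ("mascot", False),
    ("denovo", True), ("denovo", False), ("denovo", False),
]
print(rank_peers(log))
```

Even a counting scheme this simple is enough to surface a dominant peer, which matches the behaviour reported above.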
Conclusion
The simulated and real-world experiments in the present study demonstrated that the OpenKnowledge infrastructure with peer ranking capability can serve as an evaluative environment for automated experimentation.
doi:10.1186/1759-4499-3-3
PMCID: PMC3377912  PMID: 22192521
4.  The benefits of integrated systems for managing both samples and experimental data: An opportunity for labs in universities and government research institutions to lead the way 
Currently most biomedical labs in universities and government funded research institutions use paper lab notebooks for recording experimental data and spreadsheets for managing sample data. One consequence is that sample management and documenting experiments are viewed as separate and distinct activities, notwithstanding that samples and aliquots are an integral part of a majority of the experiments carried out by these labs.
Various drivers are pushing labs towards integrated management of sample data and experimental data. These include the ever increasing amounts of both kinds of data, the increasing adoption of online collaborative tools, changing expectations about online communication, and the increasing affordability of electronic lab notebooks and sample management software. There is now an opportunity for smaller labs, which have been slow to move from paper to electronic record keeping, to leapfrog better resourced commercial labs and lead the way in adopting the new generation of tools which permit integrated management of samples and experimental data and bring a range of tangible benefits to conducting research, including:
1. Fewer lost and mislabelled samples
2. Clearer visualization of relationships between samples and experiments
3. Reduction of experimental error
4. More effective search
5. Productivity gains
6. More efficient use of freezers, leading to cost reduction and enhanced sustainability
7. Improved archiving and enhanced memory at the lab and institutional levels
doi:10.1186/1759-4499-3-2
PMCID: PMC3146905  PMID: 21707999
5.  Automated experimentation in ecological networks 
Background
In ecological networks, natural communities are studied from a complex systems perspective by representing the interactions among species within them in the form of a graph, which is in turn analysed using mathematical tools. Topological features encountered in complex networks have been shown to provide the systems they represent with interesting attributes such as robustness and stability, which in ecological systems translate into the ability of communities to resist perturbations of different kinds. A focus of research in community ecology is on understanding the mechanisms by which these complex networks of interactions among species in a community arise. We employ an agent-based approach to model ecological processes operating at the species-interaction level in order to study the emergence of organisation in ecological networks.
Results
We have designed protocols of interaction among agents in a multi-agent system based on ecological processes occurring at the interaction level between species in plant-animal mutualistic communities. The interaction models for agent coordination engineered in this way facilitate the emergence, in our artificial societies of agents, of network features such as those found in ecological networks of interacting species.
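The abstract does not specify the authors' model; as a hedged toy sketch of one behavioural mechanism often discussed for mutualistic networks, agents representing animal species below attach to plant species with probability proportional to the plants' current degree (preferential attachment), which yields the skewed degree distributions typical of such networks:

```python
# Toy bipartite network growth by preferential attachment.
# This is an illustration, not the model from the article.
import random

def grow_mutualistic_network(n_plants, n_animals, links_per_animal, seed=0):
    """Grow animal-plant links; well-connected plants attract more links."""
    rng = random.Random(seed)
    degree = {p: 1 for p in range(n_plants)}  # start at 1 so all are reachable
    edges = set()
    for animal in range(n_animals):
        for _ in range(links_per_animal):
            plants, weights = zip(*degree.items())
            plant = rng.choices(plants, weights=weights)[0]
            if (animal, plant) not in edges:
                edges.add((animal, plant))
                degree[plant] += 1
    return edges, degree

edges, degree = grow_mutualistic_network(10, 30, 2)
# A few "hub" plants accumulate many more links than the rest.
print(sorted(degree.values(), reverse=True))
```

The point of the sketch is only that simple local interaction rules, iterated by many agents, can produce global network structure.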
Conclusions
Agent-based models developed in this way facilitate the automation of the design and execution of simulation experiments, allowing for the exploration of the diverse behavioural mechanisms believed to be responsible for community organisation in ecological communities. This automated way of conducting experiments empowers the study of ecological networks by exploiting the expressive power of interaction model specification in agent systems.
doi:10.1186/1759-4499-3-1
PMCID: PMC3117761  PMID: 21554669
6.  Combining ontologies and workflows to design formal protocols for biological laboratories 
Background
Laboratory protocols in the life sciences tend to be written in natural language, with negative consequences for the repeatability, distribution and automation of scientific experiments. Formalization of knowledge is becoming popular in science. In the case of laboratory protocols, two levels of formalization are needed: one for the entities and individual operations involved in protocols, and another for the procedures, which can be manually or automatically executed. This study aims to combine ontologies and workflows for protocol formalization.
Results
A laboratory domain specific ontology and the COW (Combining Ontologies with Workflows) software tool were developed to formalize workflows built on ontologies. A method was specifically set up to support the design of structured protocols for biological laboratory experiments. The workflows were enhanced with ontological concepts taken from the developed domain specific ontology.
The experimental protocols represented as workflows are saved in two linked files using two standard interchange languages (i.e. XPDL for workflows and OWL for ontologies). A distribution package of COW, including an installation procedure and ontology and workflow examples, is freely available from http://www.bmr-genomics.it/farm/cow.
Conclusions
Using COW, a laboratory protocol may be directly defined by wet-lab scientists without writing code, which will keep the resulting protocol's specifications clear and easy to read and maintain.
doi:10.1186/1759-4499-2-3
PMCID: PMC2873243  PMID: 20416048
7.  Construction and analysis of protein–protein interaction networks 
Protein–protein interactions form the basis for a vast majority of cellular events, including signal transduction and transcriptional regulation. It is now understood that the study of interactions between cellular macromolecules is fundamental to the understanding of biological systems. Interactions between proteins have been studied through a number of high-throughput experiments and have also been predicted through an array of computational methods that leverage the vast amount of sequence data generated in the last decade. In this review, I discuss some of the important computational methods for the prediction of functional linkages between proteins. I then give a brief overview of some of the databases and tools that are useful for a study of protein–protein interactions. I also present an introduction to network theory, followed by a discussion of the parameters commonly used in analysing networks, important network topologies, as well as methods to identify important network components, based on perturbations.
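The network parameters this review discusses can be illustrated on a tiny hypothetical protein-protein interaction graph; the protein names below are made up, and a real analysis would use a dedicated graph library:

```python
# Toy protein-protein interaction graph as an adjacency map.
# Degree and shortest-path length are two of the basic topology
# parameters commonly used when analysing such networks.
from collections import deque

ppi = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A", "E"},
    "E": {"D"},
}

# Degree: number of interaction partners per protein; "A" is the hub.
degree = {protein: len(partners) for protein, partners in ppi.items()}

def avg_path_length(graph, source):
    """Mean shortest-path length from `source` to all reachable proteins (BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    others = [d for n, d in dist.items() if n != source]
    return sum(others) / len(others)

print(degree["A"], avg_path_length(ppi, "A"))
```

Perturbation analysis of the kind mentioned above would, for instance, remove the hub "A" and observe that the graph falls apart, identifying it as an important network component.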
doi:10.1186/1759-4499-2-2
PMCID: PMC2834675  PMID: 20334628
8.  Towards Robot Scientists for autonomous scientific discovery 
We review the main components of autonomous scientific discovery, and how they lead to the concept of a Robot Scientist. This is a system which uses techniques from artificial intelligence to automate all aspects of the scientific discovery process: it generates hypotheses from a computer model of the domain, designs experiments to test these hypotheses, runs the physical experiments using robotic systems, analyses and interprets the resulting data, and repeats the cycle. We describe our two prototype Robot Scientists: Adam and Eve. Adam has recently proven the potential of such systems by identifying twelve genes responsible for catalysing specific reactions in the metabolic pathways of the yeast Saccharomyces cerevisiae. This work has been formally recorded in great detail using logic. We argue that the reporting of science needs to become fully formalised and that Robot Scientists can help achieve this. This will make scientific information more reproducible and reusable, and promote the integration of computers in scientific reasoning. We believe the greater automation of both the physical and intellectual aspects of scientific investigations to be essential to the future of science. Greater automation improves the accuracy and reliability of experiments, increases the pace of discovery and, in common with conventional laboratory automation, removes tedious and repetitive tasks from the human scientist.
doi:10.1186/1759-4499-2-1
PMCID: PMC2813846  PMID: 20119518
9.  Make it better but don't change anything 
With massive amounts of data being generated in electronic format, there is a need in basic science laboratories to adopt new methods for tracking and analyzing data. An electronic laboratory notebook (ELN) is not just a replacement for a paper lab notebook; it is a new method of storing and organizing data while maintaining the data entry flexibility and legal recording functions of paper notebooks. Paper notebooks are regarded as highly flexible since the user can configure them to store almost anything that can be written or physically pasted onto the pages. However, data retrieval and data sharing from paper notebooks are labor-intensive processes, and notebooks can be misplaced: a single point of failure that loses all entries in the volume. Additional features provided by electronic notebooks include searchable indices, data sharing, automatic archiving for security against loss, and ease of data duplication. Furthermore, ELNs can be tasked with additional functions not commonly found in paper notebooks, such as inventory control. While ELNs have been on the market for some time now, adoption of an ELN in academic basic science laboratories has been lagging. Issues that have restrained the development and adoption of ELNs in research laboratories are the sheer variety and frequency of changes in protocols, combined with the need for the user to control notebook configuration outside the framework of professional IT staff support. In this commentary, we will look at some of the issues and experiences in academic laboratories that have proved challenging in implementing an electronic lab notebook.
doi:10.1186/1759-4499-1-5
PMCID: PMC2810290  PMID: 20098591
10.  Welcome to Automated Experimentation: a new open access journal 
Modern experimental science provides opportunities for ever larger series of experiments. Demand for experimental results has also become more diverse, requiring results that have direct connections to systems outside the laboratory. With this has come an ability to automate many areas of experimental science, not only the experiments themselves but also the larger processes that contribute to experimentation and analysis more broadly. As automated experimentation becomes more widely used and understood, we launch this journal to provide a proper publication channel for this new breed of interdisciplinary research, as well as a bridge to all significant groundwork research that could facilitate automated experimentation. With this in mind, we are interested in publishing all kinds of research into scientific experimentation, including research where the potential for automation is at the proof-of-concept or early deployment stage.
doi:10.1186/1759-4499-1-1
PMCID: PMC2809325  PMID: 20098588
11.  The philosophy of scientific experimentation: a review 
Practicing and studying automated experimentation may benefit from philosophical reflection on experimental science in general. This paper reviews the relevant literature and discusses central issues in the philosophy of scientific experimentation. The first two sections present brief accounts of the rise of experimental science and of its philosophical study. The next sections discuss three central issues of scientific experimentation: the scientific and philosophical significance of intervention and production, the relationship between experimental science and technology, and the interactions between experimental and theoretical work. The concluding section identifies three issues for further research: the role of computing and, more specifically, automation in experimental research; the nature of experimentation in the social and human sciences; and the significance of normative, including ethical, problems in experimental science.
doi:10.1186/1759-4499-1-2
PMCID: PMC2809324  PMID: 20098589
12.  Head in the clouds: Re-imagining the experimental laboratory record for the web-based networked world 
The means we use to record the process of carrying out research remains tied to the concept of a paginated paper notebook, despite the advances over the past decade in web-based communication and publication tools. The development of these tools offers an opportunity to re-imagine what the laboratory record would look like if it were re-built in a web-native form. In this paper I describe a distributed approach to the laboratory record which uses the most appropriate tool available to house and publish each specific object created during the research process, whether it be a physical sample, a digital data object, or the record of how one was created from another. I propose that the web-native laboratory record would act as a feed of relationships between these items. This approach can be seen as complementary to, rather than competitive with, integrative approaches that aim to aggregate relevant objects together to describe knowledge. The potential for the recent announcement of the Google Wave protocol to have a significant impact on realizing this vision is discussed, along with the issues of security and provenance that are raised by such an approach.
doi:10.1186/1759-4499-1-3
PMCID: PMC2809323  PMID: 20098590
13.  eCAT: Online electronic lab notebook for scientific research 
Background
eCAT is an electronic lab notebook (ELN) developed by Axiope Limited. It is the first online ELN, the first ELN to be developed in close collaboration with lab scientists, and the first ELN to be targeted at researchers in non-commercial institutions. eCAT was developed in response to feedback from users of a predecessor product. By late 2006 the basic concept had been clarified: a highly scalable web-based collaboration tool that possessed the basic capabilities of commercial ELNs, i.e. a permissions system, controlled sharing, an audit trail, electronic signature and search, and a front end that looked like the electronic counterpart to a paper notebook.
Results
During the development of the beta version, feedback was incorporated from many groups including the FDA's Center for Biologics Evaluation & Research, Uppsala University, Children's Hospital Boston, Alex Swarbrick's lab at the Garvan Institute in Sydney and Martin Spitaler at Imperial College. More than 100 individuals and groups worldwide then participated in the beta testing between September 2008 and June 2009. The generally positive response is reflected in the following quote about how one lab is making use of eCAT: "Everyone uses it as an electronic notebook, so they can compile the diverse collections of data that we generate as biologists, such as images and spreadsheets. We use it to take minutes of meetings. We also use it to manage our common stocks of antibodies, plasmids and so on. Finally, perhaps the most important feature for us is the ability to link records, reagents and experiments."
Conclusion
By developing eCAT in close collaboration with lab scientists, Axiope has come up with a practical and easy-to-use product that meets the need of scientists to manage, store and share data online. eCAT is already being perceived as a product that labs can continue to use as their data management and sharing grows in scale and complexity.
doi:10.1186/1759-4499-1-4
PMCID: PMC2809322  PMID: 20334629