1.  Representation of research hypotheses 
Journal of Biomedical Semantics  2011;2(Suppl 2):S9.
Background
Hypotheses are now being automatically produced on an industrial scale by computers in biology: for example, the annotation of a genome is essentially a large set of hypotheses generated by sequence-similarity programs, and robot scientists enable the full automation of a scientific investigation, including the generation and testing of research hypotheses.
Results
This paper proposes a logically defined way of recording automatically generated hypotheses in a machine-amenable form. The proposed formalism allows the description of complete hypothesis sets as specified input and output for scientific investigations. The formalism supports the decomposition of research hypotheses into more specialised hypotheses where an application requires it. Hypotheses are represented in an operational way: it is possible to design an experiment to test them. The explicit formal description of research hypotheses promotes the explicit formal description of the results and conclusions of an investigation. The paper also proposes a framework for automated hypothesis generation. We demonstrate how the key components of the proposed framework are implemented in the Robot Scientist “Adam”.
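A minimal sketch of what such a machine-amenable, decomposable hypothesis record might look like, assuming a simple object model (the class names and the example statement are illustrative, not the paper's actual formalism):

    # Toy model of an operational, decomposable research hypothesis.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Hypothesis:
        statement: str                                   # testable proposition
        sub_hypotheses: List["Hypothesis"] = field(default_factory=list)

        def decompose(self, *statements: str) -> None:
            # Split this hypothesis into more specialised hypotheses.
            self.sub_hypotheses.extend(Hypothesis(s) for s in statements)

    # A hypothesis set as the specified input of an investigation.
    h = Hypothesis("Gene G encodes the enzyme catalysing reaction R")
    h.decompose("Deleting G abolishes activity for reaction R",
                "Expressing G in the deletant restores growth")
    investigation_input = [h]

Because each statement is operational, an experiment (here, a deletion or complementation assay) can be designed to test it.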
Conclusions
A formal representation of automatically generated research hypotheses can help to improve the way humans produce, record, and validate research hypotheses.
Availability
http://www.aber.ac.uk/en/cs/research/cb/projects/robotscientist/results/
doi:10.1186/2041-1480-2-S2-S9
PMCID: PMC3102898  PMID: 21624164
2.  RES6/466: Toward a Discovery Support System Based on Medical and Health Unifying Principles to Formulate Recombinant Hypotheses through Internet Online Databases 
Introduction
Since the 17th century, scientists have sought the logical scientific principles of medicine and informatics, among other disciplines, encouraged by the example of Newtonian physics. In the 20th century, the main principles of informatics were identified, making possible the development of present-day computers and the Internet. However, very little research has been done seeking medical and health scientific principles that would, among other functions, assist in forming scientific hypotheses from empirical data. One important effort in hypothesis formulation has been the Arrowsmith system of software and database search strategies at http://kiwi.uchicago.edu (Swanson & Smalheiser, 1997), which generates hypotheses using the relational structure of the NCBI PubMed Internet online database (1966-). Nevertheless, although it uses a powerful logical-mathematical method, it does not include any logical scientific principle from experimental or clinical medicine or the public health sciences. The aim of this paper is to outline the design and rationale of an international collaborative research effort, complementary to the Arrowsmith system, whose outcome would be the logical content basis for a more rational discovery support system.
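As a schematic illustration of the Arrowsmith-style discovery the authors build on, Swanson's "ABC" model links two disjoint literatures through shared intermediate terms; the sketch below uses Swanson's classic fish-oil/Raynaud's example with toy term sets:

    # Swanson-style ABC linking: an A-C pair becomes a candidate
    # recombinant hypothesis when the A and C literatures share B-terms.
    a_literature = {"fish oil": {"blood viscosity", "platelet aggregation"}}
    c_literature = {"Raynaud's disease": {"blood viscosity", "vasospasm"}}

    def candidate_links(a_terms, c_terms):
        # B-terms shared by the A and C literatures.
        return {(a, c): ta & tc
                for a, ta in a_terms.items()
                for c, tc in c_terms.items()
                if ta & tc}

    print(candidate_links(a_literature, c_literature))
    # {('fish oil', "Raynaud's disease"): {'blood viscosity'}}

The unifying principles proposed here would act as additional, medically grounded filters on such candidate links.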
Methods
Crucial fragmented information from multiple specialties and cognitive levels will be synthesised by an international, cross-disciplinary team or teams of experts through a complex inductive method using Internet research facilities.
Expected Results
Medical and health unifying principles that would refine the Arrowsmith target search strategies, or other formal computer-assisted discovery systems, to formulate recombinant hypotheses using the PubMed online database and even, in the future, the NCBI E-Biomed Internet online database proposed at http://www.nih.gov/welcome/director/ebiomed/ebiomed.htm (Varmus, Lipman & Brown, 1999). The perfected system would then fulfil the premises for benefiting from Artificial Intelligence concepts and tools for its continued improvement.
doi:10.2196/jmir.1.suppl1.e81
PMCID: PMC1761764
Unifying Principles; Inductive Method; Hypothesis Formulation; Internet; Discovery Support System
3.  The Chilling Effect: How Do Researchers React to Controversy? 
PLoS Medicine  2008;5(11):e222.
Background
Can political controversy have a “chilling effect” on the production of new science? This is a timely concern, given how often American politicians are accused of undermining science for political purposes. Yet little is known about how scientists react to these kinds of controversies.
Methods and Findings
Drawing on interview (n = 30) and survey data (n = 82), this study examines the reactions of scientists whose National Institutes of Health (NIH)-funded grants were implicated in a highly publicized political controversy. Critics charged that these grants were “a waste of taxpayer money.” The NIH defended each grant and no funding was rescinded. Nevertheless, this study finds that many of the scientists whose grants were criticized now engage in self-censorship. About half of the sample said that they now remove potentially controversial words from their grant and a quarter reported eliminating entire topics from their research agendas. Four researchers reportedly chose to move into more secure positions entirely, either outside academia or in jobs that guaranteed salaries. About 10% of the group reported that this controversy strengthened their commitment to complete their research and disseminate it widely.
Conclusions
These findings provide evidence that political controversies can shape what scientists choose to study. Debates about the politics of science usually focus on the direct suppression, distortion, and manipulation of scientific results. This study suggests that scholars must also examine how scientists may self-censor in response to political events.
Drawing on interview and survey data, Joanna Kempner's study finds that political controversies shape what many scientists choose not to study.
Editors' Summary
Background.
Scientific research is an expensive business and, inevitably, the organizations that fund this research—governments, charities, and industry—play an important role in determining the directions that this research takes. Funding bodies can have both positive and negative effects on the acquisition of scientific knowledge. They can pump money into topical areas such as the human genome project. Alternatively, by withholding funding, they can discourage some types of research. So, for example, US federal funds cannot be used to support many aspects of human stem cell research. “Self-censoring” by scientists can also have a negative effect on scientific progress. That is, some scientists may decide to avoid areas of research that involve many regulatory requirements, political pressure, or substantial pressure from advocacy groups. A good example of this last type of self-censoring is the withdrawal of many scientists from research that involves certain animal models, such as primates, because of animal rights activists.
Why Was This Study Done?
Some people think that political controversy might also encourage scientists to avoid some areas of scientific inquiry, but no studies have formally investigated this possibility. Could political arguments about the value of certain types of research influence the questions that scientists pursue? An argument of this sort occurred in the US in 2003 when Patrick Toomey, who was then a Republican Congressional Representative, argued that National Institutes of Health (NIH) grants supporting research into certain aspects of sexual behavior were “much less worthy of taxpayer funding” than research on “devastating diseases,” and proposed an amendment to the 2004 NIH appropriations bill (which regulates the research funded by NIH). The Amendment was rejected, but more than 200 NIH-funded grants, most of which examined behaviors that affect the spread of HIV/AIDS, were internally reviewed later that year; NIH defended each grant, so none were curtailed. In this study, Joanna Kempner investigates how the scientists whose US federal grants were targeted in this clash between politics and science responded to the political controversy.
What Did the Researchers Do and Find?
Kempner interviewed 30 of the 162 principal investigators (PIs) whose grants were reviewed. She asked them to describe their research, the grants that were reviewed, and their experience with NIH before, during, and after the controversy. She also asked them whether this experience had changed their research practice. She then used the information from these interviews to design a survey that she sent to all the PIs whose grants had been reviewed; 82 responded. About half of the scientists interviewed and/or surveyed reported that they now remove “red flag” words (for example, “AIDS” and “homosexual”) from the titles and abstracts of their grant applications. About one-fourth of the respondents no longer included controversial topics (for example, “abortion” and “emergency contraception”) in their research agendas, and four researchers had made major career changes as a result of the controversy. Finally, about 10% of respondents said that their experience had strengthened their commitment to see their research completed and its results published although even many of these scientists also engaged in some self-censorship.
What Do These Findings Mean?
These findings show that, even though no funding was withdrawn, self-censoring is now common among the scientists whose grants were targeted during this particular political controversy. Because this study included researchers in only one area of health research, its findings may not be generalizable to other areas of research. Furthermore, because only half of the PIs involved in the controversy responded to the survey, these findings may be affected by selection bias. That is, the scientists most anxious about the effects of political controversy on their research funding (and thus more likely to engage in self-censorship) may not have responded. Nevertheless, these findings suggest that the political environment might have a powerful effect on self-censorship by scientists and might dissuade some scientists from embarking on research projects that they would otherwise have pursued. Further research into what Kempner calls the “chilling effect” of political controversy on scientific research is now needed to ensure that a healthy balance can be struck between political involvement in scientific decision making and scientific progress.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050222.
The Consortium of Social Science Associations, an advocacy organization that provides a bridge between the academic research community and Washington policymakers, has more information about the political controversy initiated by Patrick Toomey
Some of Kempner's previous research on self-censorship by scientists is described in a 2005 National Geographic news article
doi:10.1371/journal.pmed.0050222
PMCID: PMC2586361  PMID: 19018657
4.  On the formalization and reuse of scientific research 
The reuse of scientific knowledge obtained from one investigation in another investigation is basic to the advance of science. Scientific investigations should therefore be recorded in ways that promote the reuse of the knowledge they generate. The use of logical formalisms to describe scientific knowledge has potential advantages in facilitating such reuse. Here, we propose a formal framework for using logical formalisms to promote reuse. We demonstrate the utility of this framework by using it in a worked example from biology: demonstrating cycles of investigation formalization [F] and reuse [R] to generate new knowledge. We first used logic to formally describe a Robot scientist investigation into yeast (Saccharomyces cerevisiae) functional genomics [f1]. With Robot scientists, unlike human scientists, the production of comprehensive metadata about their investigations is a natural by-product of the way they work. We then demonstrated how this formalism enabled the reuse of the research in investigating yeast phenotypes [r1 = R(f1)]. This investigation found that the removal of non-essential enzymes generally resulted in enhanced growth. The phenotype investigation was then formally described using the same logical formalism as the functional genomics investigation [f2 = F(r1)]. We then demonstrated how this formalism enabled the reuse of the phenotype investigation to investigate yeast systems-biology modelling [r2 = R(f2)]. This investigation found that yeast flux-balance analysis models fail to predict the observed changes in growth. Finally, the systems biology investigation was formalized for reuse in future investigations [f3 = F(r2)]. These cycles of reuse are a model for the general reuse of scientific knowledge.
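Read as pseudocode, the cycle described above alternates two operations; a toy sketch with illustrative names only, not the paper's logical formalism:

    # F formalises an investigation into a reusable record;
    # R starts a new investigation from an existing record.
    def F(investigation):
        return {"record_of": investigation["name"],
                "metadata": investigation.get("metadata", {})}

    def R(record, new_question):
        return {"name": new_question,
                "prior": record["record_of"],
                "metadata": {}}

    f1 = F({"name": "yeast functional genomics"})
    r1 = R(f1, "yeast deletion phenotypes")        # r1 = R(f1)
    f2 = F(r1)                                     # f2 = F(r1)
    r2 = R(f2, "flux-balance model testing")       # r2 = R(f2)
    f3 = F(r2)                                     # f3 = F(r2)

Each pass through F preserves enough metadata for the next R to build a new investigation on the previous one's results.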
doi:10.1098/rsif.2011.0029
PMCID: PMC3163424  PMID: 21490004
semantic web; logic; Saccharomyces cerevisiae; ontology
5.  The GENIUS Grid Portal and robot certificates: a new tool for e-Science 
BMC Bioinformatics  2009;10(Suppl 6):S21.
Background
Grid technology is the computing model that allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the strict security policies required to access distributed computing resources have been a significant limiting factor in broadening the use of Grids to a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates, and the procedure to obtain and manage those certificates is unfortunately not straightforward. A first step towards making Grids more appealing for new users has recently been achieved with the adoption of robot certificates.
Methods
Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful, for instance, for automating grid service monitoring, data processing in production, and distributed data collection systems. Basically, these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment through a user-friendly graphical interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended to support the new authentication based on the adoption of these robot certificates.
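A minimal sketch of the kind of check a portal might perform before acting under a robot certificate; this is illustrative only, not GENIUS/EnginFrame code, and the "robot" common-name convention and file path are assumptions:

    # Inspect a PEM-encoded robot certificate before dispatching jobs.
    from datetime import datetime
    from cryptography import x509

    with open("robot-cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    subject = cert.subject.rfc4514_string()
    if "robot" not in subject.lower():          # assumed naming convention
        raise ValueError(f"not a robot certificate: {subject}")
    if cert.not_valid_after < datetime.utcnow():
        raise ValueError("robot certificate has expired")
    print(f"portal may submit jobs on behalf of: {subject}")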
Results
The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset for raising Grid awareness among a wide number of potential users.
Conclusion
The adoption of Grid portals extended with robot certificates can genuinely contribute to creating transparent access to the computational resources of Grid infrastructures, enhancing the spread of this new paradigm in researchers' working lives as they address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities.
doi:10.1186/1471-2105-10-S6-S21
PMCID: PMC2697645  PMID: 19534747
6.  Jumping into the 20th century before it is too late: is laboratory robotics still in its infancy? 
Successful management of laboratory robotic automation programmes in the environment of research and drug discovery within the pharmaceutical industry may perhaps be best compared to a chef preparing the perfect hollandaise sauce. All the ingredients must be available at the same time and be of highest quality for the right price. However, if components are not added in the right quantities and in the proper order, no amount of whipping together by the product champion will create the best product. In the past, managerial scepticism surrounding useful implementation of cost-effective, high-throughput robotic systems often placed these ‘modern toys’ at low priorities for research development laboratories. Management now recognizes the unique contributions of robotics in the research environment. Although the scientific director must still play the role of product champion, new questions are being proposed and new commitments are being made to bring the potential of robotic automation to every laboratory where repetitive functions can benefit from new applications. Research laboratory directors have become both the key ingredient, as well as the rate-limiting determinant in the development of new applications. Having fulfilled the promise of robotic automation to release talented personnel, the challenge now is for the ‘end users’, the bench scientists, to be provided with opportunities to invest the time and effort required for future applications and new career functions.
doi:10.1155/S1463924692000142
PMCID: PMC2547949  PMID: 18924929
7.  PLAGIARISM IN SCIENTIFIC PUBLISHING 
Acta Informatica Medica  2012;20(4):208-213.
Scientific publishing is the ultimate product of a scientist's work. The number of publications and their citation are measures of a scientist's success, while unpublished research is invisible to the scientific community and, as such, nonexistent. Researchers rely on the work of their predecessors, and the extent to which one scientist's work is used as a source by other authors verifies its contribution to the growth of human knowledge. An author who has published an article in a scientific journal cannot publish it in any other journal with only a few minor adjustments, or without quoting the parts of the first article that are reused in the second. Copyright infringement occurs when the author of a new article, with or without mentioning the source, uses substantial portions of previously published articles, including tables and figures. Scientific institutions and universities should, in accordance with the principles of Good Scientific Practice (GSP) and Good Laboratory Practice (GLP), have a centre for the monitoring, security, promotion and development of research quality. Establishing rules of good scientific practice, and ensuring compliance with them, are obligations of every research institution, university and individual researcher, regardless of the area of science investigated. In this way, internal quality control ensures that a research institution such as a university assumes responsibility for creating an environment that promotes standards of excellence, intellectual honesty and legality. Although truth should be the aim of scientific research, it is not the guiding principle for all scientists. The best way to reach the truth and avoid methodological and ethical mistakes is to apply scientific methods and ethical standards in research consistently. Although variously defined, plagiarism is basically intended to deceive the reader about one's own scientific contribution. There is no general regulation for controlling the scientific research and intellectual honesty of researchers that would be applicable in all situations and in all research institutions. A special form of plagiarism is self-plagiarism. Scientists need to take this form into consideration as well, although the prevailing attitude for now is that authors may reuse their own words to some extent without any question of plagiarism arising. If authors repeat statements from their own previously published research, these should be put in quotes and the source in which they were published cited. Science should not be exempt from disclosing and sanctioning plagiarism. Ethics education in science has a significant place in the fight against intellectual dishonesty. A general understanding of ethics at all stages of scientific research should be acquired during undergraduate study and continue to deepen thereafter. The ethical aspect of the publishing industry is also important, especially in small and developing economies, because the publisher has an educational role in the development of the scientific community it serves. In this paper, the author describes his experiences in discovering plagiarism as Editor-in-Chief of three indexed medical journals, presenting several examples of plagiarism recorded in countries of Southeastern Europe.
doi:10.5455/aim.2012.20.208-213
PMCID: PMC3558294  PMID: 23378684
ethical dilemmas; scientific publishing; medical journals; plagiarism; self-plagiarism.
8.  In vivo robotics: the automation of neuroscience and other intact-system biological fields 
Annals of the New York Academy of Sciences  2013;1305(1):10.1111/nyas.12171.
Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to impact neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience, and present a concrete example with our recent automation of in vivo whole cell patch clamp electrophysiology of neurons in the living mouse brain.
doi:10.1111/nyas.12171
PMCID: PMC3797229  PMID: 23841584
robotics; neuroscience; patch clamping
9.  SLAM algorithm applied to robotics assistance for navigation in unknown environments 
Background
The combination of robotic tools with assistive technology defines a scarcely explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI).
Methods
In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn left, turn right, stop, start and exit. A kinematic controller for the mobile robot was implemented, together with a low-level behaviour strategy to avoid the robot's collisions with the environment and moving agents.
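For concreteness, one predict/update step of a feature-based EKF-SLAM loop of the kind described might look like this (simplified linear models, not the authors' implementation):

    import numpy as np

    def ekf_predict(x, P, u, F, Q):
        # Propagate pose/map state x and covariance P with motion input u.
        x = F @ x + u
        P = F @ P @ F.T + Q              # process noise inflates uncertainty
        return x, P

    def ekf_update(x, P, z, H, R):
        # Correct the state with one line/corner feature observation z.
        y = z - H @ x                            # innovation
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

In the system described, the five MCI commands steer the robot while this estimation loop builds the global metric map in the background.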
Results
The entire system was tested with seven volunteers: three elderly subjects, two below-elbow amputees and two young, normally limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface.
Conclusions
The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control by the user, whose function could be reduced to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair, an advantage that can be exploited for autonomous wheelchair navigation.
doi:10.1186/1743-0003-7-10
PMCID: PMC2842281  PMID: 20163735
10.  Blackawton bees 
Biology Letters  2010;7(2):168-172.
Background
Real science has the potential not only to amaze, but also to transform the way one thinks of the world and oneself. This is because the process of science is little different from the deeply resonant, natural processes of play. Play enables humans (and other mammals) to discover (and create) relationships and patterns. When one adds rules to play, a game is created. This is science: the process of playing with rules that enables one to reveal previously unseen patterns of relationships that extend our collective understanding of nature and human nature. When thought of in this way, science education becomes a more enlightened and intuitive process of asking questions and devising games to address those questions. But, because the outcome of all game-playing is unpredictable, supporting this ‘messiness’, which is the engine of science, is critical to good science education (and indeed creative education generally). Indeed, we have learned that doing ‘real’ science in public spaces can stimulate tremendous interest in children and adults in understanding the processes by which we make sense of the world. The present study (on the vision of bumble-bees) goes even further, since it was not only performed outside my laboratory (in a Norman church in the southwest of England), but the ‘games’ were themselves devised in collaboration with 25 8- to 10-year-old children. They asked the questions, hypothesized the answers, designed the games (in other words, the experiments) to test these hypotheses and analysed the data. They also drew the figures (in coloured pencil) and wrote the paper. Their headteacher (Dave Strudwick) and I devised the educational programme (we call it ‘i,scientist’), and I trained the bees and transcribed the children's words into text (which was done with smaller groups of children at the school's local village pub). So what follows is a novel study (scientifically and conceptually) in ‘kids speak’ without references to past literature, which is a challenge. Although the historical context of any study is of course important, including references in this instance would be disingenuous for two reasons. First, given the way scientific data are naturally reported, the relevant information is simply inaccessible to the literate ability of 8- to 10-year-old children, and second, the true motivation for any scientific study (at least one of integrity) is one's own curiosity, which for the children was not inspired by the scientific literature, but by their own observations of the world. This lack of historical, scientific context does not diminish the resulting data, scientific methodology or merit of the discovery for the scientific and ‘non-scientific’ audience. On the contrary, it reveals science in its truest (most naive) form, and in this way makes explicit the commonality between science, art and indeed all creative activities.
Principal finding
‘We discovered that bumble-bees can use a combination of colour and spatial relationships in deciding which colour of flower to forage from. We also discovered that science is cool and fun because you get to do stuff that no one has ever done before. (Children from Blackawton)’.
doi:10.1098/rsbl.2010.1056
PMCID: PMC3061190  PMID: 21177694
Bombus terrestris; buff-tailed bumble-bee; visual perception; colour vision; behaviour
11.  Facebook for Scientists: Requirements and Services for Optimizing How Scientific Collaborations Are Established 
Background
As biomedical research projects become increasingly interdisciplinary and complex, collaboration with appropriate individuals, teams, and institutions becomes ever more crucial to project success. While social networks are extremely important in determining how scientific collaborations are formed, social networking technologies have not yet been studied as a tool to help form scientific collaborations. Many currently emerging expertise locating systems include social networking technologies, but it is unclear whether they make the process of finding collaborators more efficient and effective.
Objective
This study was conducted to answer the following questions: (1) Which requirements should systems for finding collaborators in biomedical science fulfill? and (2) Which information technology services can address these requirements?
Methods
The background research phase encompassed a thorough review of the literature, affinity diagramming, contextual inquiry, and semistructured interviews. This phase yielded five themes suggestive of requirements for systems to support the formation of collaborations. In the next phase, the generative phase, we brainstormed and selected design ideas for formal concept validation with end users. Then, three related, well-validated ideas were selected for implementation and evaluation in a prototype.
Results
Five main themes of systems requirements emerged: (1) beyond expertise, successful collaborations require compatibility with respect to personality, work style, productivity, and many other factors (compatibility); (2) finding appropriate collaborators requires the ability to effectively search in domains other than your own using information that is comprehensive and descriptive (communication); (3) social networks are important for finding potential collaborators, assessing their suitability and compatibility, and establishing contact with them (intermediation); (4) information profiles must be complete, correct, up-to-date, and comprehensive and allow fine-grained control over access to information by different audiences (information quality and access); (5) keeping online profiles up-to-date should require little or no effort and be integrated into the scientist’s existing workflow (motivation). Based on the requirements, 16 design ideas underwent formal validation with end users. Of those, three were chosen to be implemented and evaluated in a system prototype, “Digital|Vita”: maintaining, formatting, and semi-automated updating of biographical information; searching for experts; and building and maintaining the social network and managing document flow.
Conclusions
In addition to quantitative and factual information about potential collaborators, social connectedness, personal and professional compatibility, and power differentials also influence whether collaborations are formed. Current systems only partially model these requirements. Services in Digital|Vita combine an existing workflow, maintaining and formatting biographical information, with collaboration-searching functions in a novel way. Several barriers to the adoption of systems such as Digital|Vita exist, such as potential adoption asymmetries between junior and senior researchers and the tension between public and private information. Developers and researchers may consider one or more of the services described in this paper for implementation in their own expertise locating systems.
doi:10.2196/jmir.1047
PMCID: PMC2553246  PMID: 18701421
Expertise locating systems; computer supported collaborative work; information systems; collaborators; research; social networks; translational research
12.  An Absolute Index (Ab-index) to Measure a Researcher’s Useful Contributions and Productivity 
PLoS ONE  2013;8(12):e84334.
Bibliographic analysis has been a very powerful tool for evaluating the effective contributions of a researcher and determining his or her future research potential. The lack of an absolute quantification of an author's scientific contributions in the existing measurement systems hampers decision-making. In this paper, a new metric, the Absolute index (Ab-index), is proposed that allows a more objective comparison of the contributions of a researcher. The Ab-index takes into account the impact of research findings while keeping in mind the physical and intellectual contributions of the author(s) in accomplishing the task. The Ab-index and h-index were calculated for 10 highly cited geneticists and molecular biologists and 10 young researchers in the biological sciences and compared for their relationship to the researcher's input as a primary author. This is the first report of a measuring method that clarifies the contributions of the first author, corresponding author and other co-authors, sharing credit in a logical ratio. A Java application has been developed for easy calculation of the Ab-index. It can be used as a yardstick for comparing the credibility of different scientists competing for the same resources, while the Productivity index (Pr-index), the rate of change in the Ab-index per year, can be used for comparing scientists of different age groups. The Ab-index has a clear advantage over other popular metric systems in comparing the scientific credibility of young scientists. The sum of the Ab-indices earned by the individual researchers of an institute per year can be referred to as the Pr-index of the institute.
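The abstract defines the Pr-index as the yearly rate of change of the Ab-index; a minimal sketch of that relation follows, with hypothetical Ab-index values (the Ab-index weighting scheme itself is not reproduced here):

    def pr_index(ab_by_year):
        # Average yearly change in a researcher's cumulative Ab-index.
        years = sorted(ab_by_year)
        span = years[-1] - years[0]
        return (ab_by_year[years[-1]] - ab_by_year[years[0]]) / span

    print(pr_index({2010: 3.2, 2013: 9.8}))   # 2.2 Ab-index points per year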
doi:10.1371/journal.pone.0084334
PMCID: PMC3877305  PMID: 24391941
13.  Regenerative patterning in Swarm Robots: mutual benefits of research in robotics and stem cell biology 
This paper presents a novel perspective on Robotic Stem Cells (RSCs), defined as basic non-biological elements with stem-cell-like properties that can self-reorganize to repair damage to their swarming organization. “Self” here means that the elements can autonomously decide and execute their actions without requiring any preset triggers, commands, or help from external sources. We develop this concept for two purposes. One is to develop a new theory for the self-organization and self-assembly of multi-robot systems that can detect and recover from unforeseen errors or attacks; this self-healing and self-regeneration is used to minimize the compromise of overall function for the robot team. The other is to decipher the basic algorithms of regenerative behaviours in multi-cellular animal models, so that we can understand the fundamental principles used in the regeneration of biological systems. RSCs are envisioned to be basic building elements for future systems that are capable of self-organization, self-assembly, self-healing and self-regeneration. We first discuss the essential features of biological stem cells for such a purpose, and then propose the functional requirements of robotic stem cells, with properties equivalent to a gene controller, program selector and executor. We show that RSCs are a novel robotic model for scalable self-organization and self-healing in computer simulations and physical implementation. As our understanding of stem cells advances, we expect that future robots will be more versatile, resilient and complex, and such new robotic systems may also demand and inspire new knowledge from stem cell biology and related fields, such as artificial intelligence and tissue engineering.
doi:10.1387/ijdb.092937mr
PMCID: PMC2874133  PMID: 19557691
robot team; self-reconfiguration; regeneration; tissue engineering; self-organization; self-healing; morphallaxis; wound healing; morphogenesis; pattern formation; multi-agent systems
14.  Commercializing medical technology 
Cytotechnology  2007;53(1-3):107-112.
As medicine moves into the 21st century, life-saving therapies will move from inception into medical products faster if there is better synergy between science and business. Medicine appears to have 50-year innovative cycles of education and scientific discovery. In the 1880s, the chemical industry in Germany was faced with the dilemma of modernization to exploit the new scientific discoveries. The solution was the spawning of novel technical colleges for training in these new chemical industries. The impact of those new employees and their groundbreaking compounds had a profound influence on medicine and medical education in Germany between 1880 and 1930. Germany dominated international science during this period and was a training center for scientists worldwide. This model of synergy between education and business was envied and admired in Europe, Asia and America. British science then evolved to dominate the field during the prewar and postwar decades (1930s–1970s), partly because German scientists fled Hitler's government. These expatriated scientists had a profound influence on the teaching and training of British scientists, which led to advances in medicine such as antibiotics. After the Second World War, the US government wisely funded the development of the medical infrastructure that we see today. British and German scientists in medicine moved to America because of this bountiful funding for their research. These expatriated scientists helped drive these medical advances into commercialized products by the 1980s. America has been the center of medical education and advances in biotechnology, but will it continue to be? International scientists trained in America have started to return to Europe and Asia. These American-trained scientists and their governments are very aware of the commercial potential of biotechnology, and those governments are now more prepared to play an active role in this new science. Germany, Ireland, Britain, Singapore, Taiwan and Israel are examples of such government support for biotechnology in the 21st century. Will the US continue to maintain its domination of biotechnology in this century? Will the US education system adjust to the new dynamic of synergistic relationships between the education system, industry and government? This article will try to address these questions and will also help the reader understand who will emerge by 2015 as the leader in science and education.
doi:10.1007/s10616-007-9056-5
PMCID: PMC2267620  PMID: 19003196
Biotechnology industry; Pharmaceutical industry; Entrepreneurs; Commercialization; Medical technology; Investment community; Business incubators
15.  Conflicting Biomedical Assumptions for Mathematical Modeling: The Case of Cancer Metastasis 
PLoS Computational Biology  2011;7(10):e1002132.
Computational models in biomedicine rely on biological and clinical assumptions. The selection of these assumptions contributes substantially to modeling success or failure. Assumptions used by experts at the cutting edge of research, however, are rarely explicitly described in scientific publications. One can directly collect and assess some of these assumptions through interviews and surveys. Here we investigate diversity in expert views about a complex biological phenomenon, the process of cancer metastasis. We harvested individual viewpoints from 28 experts in clinical and molecular aspects of cancer metastasis and summarized them computationally. While experts predominantly agreed on the definition of individual steps involved in metastasis, no two expert scenarios for metastasis were identical. We computed the probability that any two experts would disagree on k or fewer metastatic stages and found that any two randomly selected experts are likely to disagree about several assumptions. Considering the probability that two or more of these experts review an article or a proposal about metastatic cascades, the probability that they will disagree with elements of a proposed model approaches 1. This diversity of conceptions has clear consequences for advance and deadlock in the field. We suggest that strong, incompatible views are common in biomedicine but largely invisible to biomedical experts themselves. We built a formal Markov model of metastasis to encapsulate expert convergence and divergence regarding the entire sequence of metastatic stages. This model revealed stages of greatest disagreement, including the points at which cancer enters and leaves the bloodstream. The model provides a formal probabilistic hypothesis against which researchers can evaluate data on the process of metastasis. This would enable subsequent improvement of the model through Bayesian probabilistic update. Practically, we propose that model assumptions and hunches be harvested systematically and made available for modelers and scientists.
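As an illustration of the pairwise comparison described above, each expert scenario can be encoded as a set of stages and the disagreement probability estimated over all expert pairs (the stage labels and scenarios below are toy data, not the study's):

    from itertools import combinations

    scenarios = [
        {"growth", "intravasation", "circulation", "extravasation", "colonisation"},
        {"growth", "invasion", "intravasation", "circulation", "colonisation"},
        {"growth", "invasion", "circulation", "extravasation", "colonisation"},
    ]

    def disagreements(a, b):
        # Stages asserted by one expert but not the other.
        return len(a ^ b)

    def p_disagree_more_than(scenarios, k):
        pairs = list(combinations(scenarios, 2))
        return sum(disagreements(a, b) > k for a, b in pairs) / len(pairs)

    print(p_disagree_more_than(scenarios, 0))   # 1.0: every pair differs

With many experts and stages, this probability approaches 1 even for moderate k, which is the paper's point about reviewer disagreement.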
Author Summary
Mathematical models and scientific theories fail not only from internal inconsistency, but also from the poor selection of basic assumptions. Assumptions in computational models of biomedicine are typically provided by scientists who interact directly with empirical data. If we seek to model the dynamics of cancer metastasis and ask experts regarding valid assumptions, how widely will they agree and on which assumptions? To answer this question, we queried 28 faculty-level experts about the progression of metastasis. We demonstrate an unexpected diversity of assumptions across experts leading to a striking lack of agreement over the basic stages and sequence of metastasis. We suggest a formal model and framework that builds on this diversity and enables researchers to evaluate divergent hypotheses about metastasis with experimental data. We conclude that modeling biomedical processes could be substantially improved by harvesting scientific assumptions and exposing them for formalization and experiment.
doi:10.1371/journal.pcbi.1002132
PMCID: PMC3188482  PMID: 21998558
16.  Towards mainstreaming of biodiversity data publishing: recommendations of the GBIF Data Publishing Framework Task Group 
BMC Bioinformatics  2011;12(Suppl 15):S1.
Background
Data are the evidentiary basis for scientific hypotheses, analyses and publication, for policy formation and for decision-making. They are essential to the evaluation and testing of results by peer scientists both present and future. There is broad consensus in the scientific and conservation communities that data should be freely, openly available in a sustained, persistent and secure way, and thus standards for 'free' and 'open' access to data have become well developed in recent years. The question of effective access to data remains highly problematic.
Discussion
Specifically with respect to scientific publishing, the ability to critically evaluate a published scientific hypothesis or scientific report is contingent on the examination, analysis, evaluation - and if feasible - on the re-generation of data on which conclusions are based. It is not coincidental that in the recent 'climategate' controversies, the quality and integrity of data and their analytical treatment were central to the debate. There is recent evidence that even when scientific data are requested for evaluation they may not be available. The history of dissemination of scientific results has been marked by paradigm shifts driven by the emergence of new technologies. In recent decades, the advance of computer-based technology linked to global communications networks has created the potential for broader and more consistent dissemination of scientific information and data. Yet, in this digital era, scientists and conservationists, organizations and institutions have often been slow to make data available. Community studies suggest that the withholding of data can be attributed to a lack of awareness, to a lack of technical capacity, to concerns that data should be withheld for reasons of perceived personal or organizational self interest, or to lack of adequate mechanisms for attribution.
Conclusions
There is a clear need for the institutionalization of a 'data publishing framework' that can address sociocultural, technical-infrastructural, policy, political and legal constraints, as well as issues of sustainability and financial support. To address these aspects of a data publishing framework - a systematic, standard approach to the formal definition and public disclosure of data - in the context of biodiversity data, the Global Biodiversity Information Facility (GBIF, the single inter-governmental body most clearly mandated to undertake such an effort) convened a Data Publishing Framework Task Group. We conceive this data publishing framework as an environment conducive to ensuring free and open access to the world's biodiversity data. Here, we present the recommendations of that Task Group, which are intended to encourage free and open access to the world's biodiversity data.
doi:10.1186/1471-2105-12-S15-S1
PMCID: PMC3287444  PMID: 22373150
17.  The epistemology of Deep Brain Stimulation and neuronal pathophysiology 
Deep Brain Stimulation (DBS) is a remarkable therapy, succeeding where all manner of pharmacological manipulations and brain transplants fail. The success of DBS has resurrected the relevance of electrophysiology and of dynamics on the order of milliseconds. Despite the remarkable effects of DBS, its mechanisms of action are largely unknown. There has been an expanding catalogue of neuronal and neural responses to DBS or DBS-like stimulation, but no clear, encompassing explanatory scheme has emerged despite the technological prowess and intellectual sophistication of the scientists involved. Something is amiss. If the scientific observations are sound, then why has there not been more progress? The alternative is that the hypotheses that frame the questions may be at fault, as well as the methods of inference (logic) used to validate those hypotheses. This paper analyses past and current notions of the mechanisms of action of DBS in order to identify the presuppositions (premises) and logical fallacies that may be at fault. The hope is that these problems will be avoided in the future so that DBS can realize its full potential quickly. In this regard, the discussion of the methods of inference and the presuppositions that underlie many current notions is no different from a critique of experimental methods common in scientific discussions; consequently, examinations of the epistemology and logic are appropriate. This analysis is in keeping with the growing appreciation among scientists and philosophers of science that scientific observations (data) do not “speak for themselves”, that the scientific method is not self-evidently true, and that consideration of the underlying inferential methods is necessary.
doi:10.3389/fnint.2012.00078
PMCID: PMC3447188  PMID: 23024631
Deep Brain Stimulation; epistemology; pathophysiology; mechanisms of action
18.  The Robot in the Crib: A Developmental Analysis of Imitation Skills in Infants and Robots 
Infant and child development  2008;17(1):43-53.
Interesting systems, whether biological or artificial, develop. Starting from some initial conditions, they respond to environmental changes, and continuously improve their capabilities. Developmental psychologists have dedicated significant effort to studying the developmental progression of infant imitation skills, because imitation underlies the infant’s ability to understand and learn from his or her social environment. In a converging intellectual endeavour, roboticists have been equipping robots with the ability to observe and imitate human actions because such abilities can lead to rapid teaching of robots to perform tasks. We provide here a comparative analysis between studies of infants imitating and learning from human demonstrators, and computational experiments aimed at equipping a robot with such abilities. We will compare the research across the following two dimensions: (a) initial conditions—what is innate in infants, and what functionality is initially given to robots, and (b) developmental mechanisms—how does the performance of infants improve over time, and what mechanisms are given to robots to achieve equivalent behaviour. Both developmental science and robotics are critically concerned with: (a) how their systems can and do go ‘beyond the stimulus’ given during the demonstration, and (b) how the internal models used in this process are acquired during the lifetime of the system.
doi:10.1002/icd.543
PMCID: PMC2367332  PMID: 18458795
imitation; robot learning; developmental robotics; ‘like me’ hypothesis; active intermodal matching
19.  The Role of the Toxicologic Pathologist in the Post-Genomic Era# 
Journal of Toxicologic Pathology  2013;26(2):105-110.
An era can be defined as a period in time identified by distinctive character, events, or practices. We are now in the genomic era. The pre-genomic era: There was a pre-genomic era. It started many years ago with novel and seminal animal experiments, primarily directed at studying cancer. It is marked by the development of the two-year rodent cancer bioassay and the ultimate realization that alternative approaches and short-term animal models were needed to replace this resource-intensive and time-consuming method for predicting human health risk. Many alternative approaches and short-term animal models were proposed and tried but, to date, none have completely replaced our dependence upon the two-year rodent bioassay. However, the alternative approaches and models themselves have made tangible contributions to basic research, clinical medicine and our understanding of cancer, and they remain useful tools to address hypothesis-driven research questions. The pre-genomic era was a time when toxicologic pathologists played a major role in drug development, evaluating the cancer bioassay and the associated dose-setting toxicity studies, and exploring the utility of proposed alternative animal models. It was a time when there was a shortage of qualified toxicologic pathologists. The genomic era: We are in the genomic era. It is a time when the genetic underpinnings of normal biological and pathologic processes are being discovered and documented. It is a time for sequencing entire genomes and deliberately silencing relevant segments of the mouse genome to see what each segment controls and whether that silencing leads to increased susceptibility to disease. What remains to be charted in this genomic era is the complex interaction of genes, gene segments, post-translational modifications of encoded proteins, and environmental factors that affect genomic expression. In this current genomic era, the toxicologic pathologist has had to make room for a growing population of molecular biologists. In the present era, newly emerging DVM and MD scientists enter the work arena with a PhD in pathology often based on some aspect of molecular biology or molecular pathology research. In molecular biology, the almost daily technological advances require one's complete dedication to remain at the cutting edge of the science. Similarly, the practice of toxicologic pathology, like other morphological disciplines, is based largely on experience and requires dedicated daily examination of pathology material to maintain a well-trained eye capable of distilling specific information from stained tissue slides - a dedicated effort that cannot be done well as an intermezzo between other tasks. It is a rare individual who has true expertise in both molecular biology and pathology. In this genomic era, the newly emerging DVM-PhD or MD-PhD pathologist enters a marketplace without many job opportunities, in contrast to the pre-genomic era. Many face an identity crisis, needing to decide whether to become a competent pathologist or, alternatively, a competent molecular biologist. At the same time, more PhD molecular biologists without training in pathology are members of the research teams working in drug development and toxicology. How best can the toxicologic pathologist interact in the contemporary team approach to drug development, toxicology research and safety testing?
Based on their biomedical training, toxicologic pathologists are in an ideal position to link data from the emerging technologies with their knowledge of pathobiology and toxicology. To enable this linkage and obtain the synergy it provides, the bench-level, slide-reading expert pathologist will need some basic understanding and appreciation of molecular biology methods and tools. On the other hand, it is not likely that the typical molecular biologist could competently evaluate and diagnose stained tissue slides from a toxicology study or a cancer bioassay. The post-genomic era: The post-genomic era will likely arrive around 2050, at which time entire genomes from multiple species will exist in massive databases, data from thousands of robotic high-throughput chemical screenings will exist in other databases, and genetic toxicity and chemical structure-activity relationships will reside in yet other databases. All databases will be linked, and relevant information will be extracted and analyzed by appropriate algorithms following input of the latest molecular, submolecular, genetic, experimental, pathology and clinical data. The knowledge gained will permit the genetic components of many diseases to become amenable to therapeutic prevention and/or intervention. Much as computerized algorithms are currently used to forecast weather or to predict political elections, sophisticated computerized algorithms based largely on scientific data mining will categorize new drugs and chemicals relative to their health benefits versus their health risks for defined human populations and subpopulations. However, this form of virtual toxicity study or cancer bioassay will only identify probabilities of adverse consequences from the interaction of particular environmental and/or chemical/drug exposure(s) with specific genomic variables. Proof in many situations will require confirmation in intact in vivo mammalian animal models. The toxicologic pathologist in the post-genomic era will be the scientist best suited to confirm the data mining and its probability predictions of safety or adverse consequences against the actual tissue morphological features in test species that define the pathobiology of a specific test agent and human health risk.
doi:10.1293/tox.26.105
PMCID: PMC3695332  PMID: 23914052
genomic era; history of toxicologic pathology; molecular biology
20.  Biomimetic vibrissal sensing for robots 
Active vibrissal touch can be used to replace or to supplement sensory systems such as computer vision and, therefore, improve the sensory capacity of mobile robots. This paper describes how arrays of whisker-like touch sensors have been incorporated onto mobile robot platforms taking inspiration from biology for their morphology and control. There were two motivations for this work: first, to build a physical platform on which to model, and therefore test, recent neuroethological hypotheses about vibrissal touch; second, to exploit the control strategies and morphology observed in the biological analogue to maximize the quality and quantity of tactile sensory information derived from the artificial whisker array. We describe the design of a new whiskered robot, Shrewbot, endowed with a biomimetic array of individually controlled whiskers and a neuroethologically inspired whisking pattern generation mechanism. We then present results showing how the morphology of the whisker array shapes the sensory surface surrounding the robot's head, and demonstrate the impact of active touch control on the sensory information that can be acquired by the robot. We show that adopting bio-inspired, low latency motor control of the rhythmic motion of the whiskers in response to contact-induced stimuli usefully constrains the sensory range, while also maximizing the number of whisker contacts. The robot experiments also demonstrate that the sensory consequences of active touch control can be usefully investigated in biomimetic robots.
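A toy sketch of contact-modulated whisking of the kind described: a rhythmic pattern generator drives the whisker, and protraction amplitude is reduced with low latency once contact is sensed (the parameters are illustrative, not Shrewbot's actual values):

    import math

    def whisk_angle(t, freq_hz=5.0, amplitude=1.0, contact=False):
        # Commanded whisker angle (radians) at time t (seconds).
        if contact:
            amplitude *= 0.3      # cut protraction quickly after contact
        return amplitude * math.sin(2 * math.pi * freq_hz * t)

    trajectory = [whisk_angle(t / 100.0, contact=(t > 50)) for t in range(100)]

Reducing amplitude on contact keeps the whiskers lightly touching surfaces (constraining the sensory range) while the unchanged rhythm keeps contact counts high.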
doi:10.1098/rstb.2011.0164
PMCID: PMC3172604  PMID: 21969690
vibrissa; whisker; active touch; robot; biomimetic; whisking pattern generator
21.  Comparison of Laparoscopic Pyeloplasty With and Without Robotic Assistance 
Objectives:
The benefits of laparoscopic surgery with robotic assistance (da Vinci Robotic Surgical System, Intuitive Surgical, Sunnyvale, CA) include elimination of tremor, motion scaling, 3D laparoscopic vision, and instruments with 7 degrees of freedom. The benefit of robotic assistance could be most pronounced in reconstructive procedures, such as pyeloplasty. We aimed to compare laparoscopic pyeloplasty, with and without robotic assistance, during a surgeon's initial experience to determine whether robotic assistance has distinct advantages over the pure laparoscopic technique.
Methods:
We retrospectively compared the first 7 laparoscopic pyeloplasties with the first 7 robotic pyeloplasties performed by a single surgeon. All patients were preoperatively evaluated with computed tomographic angiography with 3D reconstruction to image crossing vessels at the ureteropelvic junction. All patients were followed up by lasix renograms and routine clinic visits.
Results:
The two groups were similar with respect to mean age (34 years in the laparoscopic pyeloplasty group vs 32 years in the robotic pyeloplasty group), operative time (5.2 hours vs 5.4 hours), estimated blood loss (40 mL vs 60 mL), and hospital stay (3 days vs 2.5 days). Two patients in the laparoscopic pyeloplasty group had small anastomotic leaks managed conservatively, and one patient in the robotic pyeloplasty group had a febrile urinary tract infection necessitating treatment with intravenous antibiotics. Another patient in the robotic pyeloplasty group was readmitted with hematuria that was treated conservatively without transfusion. No recurrences were detected in either group.
Conclusions:
Operating times and outcomes during the learning curve for laparoscopic pyeloplasty were similar to those for robotic pyeloplasty. Long-term data and greater experience are needed to draw definitive conclusions about the superiority of either technique and to justify the expense of robotic pyeloplasty.
PMCID: PMC3015612  PMID: 16121867
Laparoscopy; Robotics; Ureteral obstruction; Ureteropelvic junction obstruction
22.  Solving Navigational Uncertainty Using Grid Cells on Robots 
PLoS Computational Biology  2010;6(11):e1000995.
To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. We anticipate our study to be a starting point for animal experiments that test navigation in perceptually ambiguous environments.
Author Summary
Navigating robots face challenges similar to those of wild rodents in creating useable maps of their environments. Both must learn about their environments through experience, and in doing so face similar problems in dealing with ambiguous and noisy information from their sensory inputs. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Neural recordings from navigating rats have revealed cells with grid-like spatial firing properties in the entorhinal cortex region of the rodent brain. Here we show how a robot equipped with conjunctive grid-cell-like cells can maintain multiple estimates of pose and solve a navigation task in an environment with no uniquely identifying cues. We propose that grid cells in the entorhinal cortex provide a similar ability for rodents. Robotics has learned much from biological systems; in a complementary way, this study enhances our understanding of neural systems through insights from engineered solutions to a problem shared by mobile robots and navigating animals. A toy multi-hypothesis filter illustrating the core idea follows this entry's citation details.
doi:10.1371/journal.pcbi.1000995
PMCID: PMC2978698  PMID: 21085643
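The following toy Python filter illustrates the paper's core idea: several competing pose hypotheses are kept alive while landmarks are ambiguous, and a later, more distinctive observation resolves which one is correct. This is a generic multi-hypothesis sketch under invented assumptions (a 1-D corridor with two identical doors and one window), not the RatSLAM model itself.

# Hypothetical sketch: multiple pose hypotheses under perceptual ambiguity.
# The world layout, noise levels, and landmark labels are invented.
import math
import random

LANDMARKS = {10.0: "door", 30.0: "door", 45.0: "window"}  # two identical doors

def likelihood(pose, seen, sigma=2.0):
    """How well a hypothesised pose explains seeing a given landmark type."""
    return max(
        math.exp(-((pose - x) ** 2) / (2 * sigma ** 2))
        for x, label in LANDMARKS.items()
        if label == seen
    )

# The robot saw a "door" at the start, so two hypotheses are equally plausible.
hypotheses = [{"pose": 10.0, "w": 0.5}, {"pose": 30.0, "w": 0.5}]

for move, seen in [(5.0, None), (10.0, "window")]:
    for h in hypotheses:
        h["pose"] += move + random.gauss(0.0, 0.3)   # noisy path integration
        if seen is not None:
            h["w"] *= likelihood(h["pose"], seen)     # landmark calibration
    total = sum(h["w"] for h in hypotheses) or 1.0
    for h in hypotheses:
        h["w"] /= total                               # renormalise weights

print(hypotheses)  # the hypothesis that started near 30.0 should now dominate

Only the hypothesis that began near the second door ends up close to the window, so its weight approaches 1 once the unique cue appears, mirroring how the correct pose estimate is resolved over time without uniquely identifying cues along the way.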
23.  “Positive” Results Increase Down the Hierarchy of the Sciences 
PLoS ONE  2010;5(4):e10068.
The hypothesis of a Hierarchy of the Sciences with physical sciences at the top, social sciences at the bottom, and biological sciences in between is nearly 200 years old. This order is intuitive and reflected in many features of academic life, but whether it reflects the "hardness" of scientific research (i.e., the extent to which research questions and results are determined by data and theories as opposed to non-cognitive factors) is controversial. This study analysed 2434 papers, drawn from all disciplines, that declared to have tested a hypothesis. It was determined how many papers reported "positive" (full or partial) or "negative" support for the tested hypothesis. If the hierarchy hypothesis is correct, then researchers in "softer" sciences should face fewer constraints on their conscious and unconscious biases, and therefore report more positive outcomes. Results confirmed the predictions at all levels considered: discipline, domain and methodology broadly defined. Controlling for observed differences between pure and applied disciplines, and between papers testing one or several hypotheses, the odds of reporting a positive result were around 5 times higher among papers in the disciplines of Psychology and Psychiatry and Economics and Business compared to Space Science, 2.3 times higher in the domain of the social sciences compared to the physical sciences, and 3.4 times higher in studies applying behavioural and social methodologies on people compared to physical and chemical studies on non-biological material. In all comparisons, biological studies had intermediate values. These results suggest that the nature of hypotheses tested and the logical and methodological rigour employed to test them vary systematically across disciplines and fields, depending on the complexity of the subject matter and possibly other factors (e.g., a field's level of historical and/or intellectual development). On the other hand, these results support the scientific status of the social sciences against claims that they are completely subjective, by showing that, when they adopt a scientific approach to discovery, they differ from the natural sciences only by a matter of degree. A worked example of the odds-ratio arithmetic behind such comparisons follows this entry's citation details.
doi:10.1371/journal.pone.0010068
PMCID: PMC2850928  PMID: 20383332
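For readers unfamiliar with the odds-ratio figures quoted above, this small Python example shows the underlying arithmetic on an invented 2x2 table of positive and negative results. The counts are illustrative only, not the study's data.

# Hypothetical worked example of an odds ratio for "positive" outcomes
# between two disciplines, computed from invented counts.
def odds_ratio(pos_a, neg_a, pos_b, neg_b):
    """Odds of a positive result in discipline A relative to discipline B."""
    return (pos_a / neg_a) / (pos_b / neg_b)

# Invented counts: 90 positive / 10 negative vs 70 positive / 30 negative.
print(odds_ratio(90, 10, 70, 30))  # ~3.86: A reports positives at ~4x the odds of B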
24.  A knowledgebase system to enhance scientific discovery: Telemakus 
Background
With the rapid expansion of scientific research, the ability to effectively find or integrate new domain knowledge in the sciences is proving increasingly difficult. Efforts to improve and speed up scientific discovery are being explored on a number of fronts. However, much of this work is based on traditional search and retrieval approaches, and the bibliographic citation presentation format remains unchanged.
Methods
Case study.
Results
The Telemakus KnowledgeBase System provides flexible new tools for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining document surrogates with interactive concept maps of linked relationships across groups of research reports.
Conclusion
Based on how scientists conduct research and read the literature, the Telemakus KnowledgeBase System brings together three innovations in analyzing, displaying and summarizing research reports across a domain: (1) research report schema, a document surrogate of extracted research methods and findings presented in a consistent and structured schema format which mimics the research process itself and provides a high-level surrogate to facilitate searching and rapid review of retrieved documents; (2) research findings, used to index the documents, allowing searchers to request, for example, research studies which have studied the relationship between neoplasms and vitamin E; and (3) visual exploration interface of linked relationships for interactive querying of research findings across the knowledgebase and graphical displays of what is known as well as, through gaps in the map, what is yet to be tested. The rationale and system architecture are described and plans for the future are discussed. A toy sketch of such a findings-indexed schema follows this entry's citation details.
doi:10.1186/1742-5581-1-2
PMCID: PMC524025  PMID: 15507158
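The sketch below illustrates, in Python, the flavour of a findings-indexed document surrogate like the one described above: structured records extracted from reports, queried by the concept pairs they relate, with empty results exposing gaps in the map. The field names and records are invented for illustration and do not reproduce the actual Telemakus schema.

# Hypothetical sketch of a "research finding" surrogate: structured records
# indexed by the concept pairs they relate. Fields and data are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    report_id: str   # source research report
    concept_a: str   # first indexed concept
    concept_b: str   # second indexed concept
    method: str      # extracted research method

findings = [
    Finding("PMID:0000001", "neoplasms", "vitamin E", "cohort study"),
    Finding("PMID:0000002", "neoplasms", "caloric restriction", "rodent bioassay"),
]

def studies_linking(a, b):
    """Return reports whose extracted findings relate concepts a and b."""
    return [f.report_id for f in findings
            if {f.concept_a, f.concept_b} == {a, b}]

print(studies_linking("neoplasms", "vitamin E"))  # -> ['PMID:0000001']
print(studies_linking("neoplasms", "selenium"))   # -> []  (a gap in the map)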
25.  Life Science Research and Drug Discovery at the Turn of the 21st Century: The Experience of SwissBioGrid 
Background
It is often said that the life sciences are transforming into an information science. As laboratory experiments are starting to yield ever increasing amounts of data and the capacity to deal with those data is catching up, an increasing share of scientific activity is seen to be taking place outside the laboratories, sifting through the data and modelling “in-silico” the processes observed “in-vitro.” The transformation of the life sciences and similar developments in other disciplines have inspired a variety of initiatives around the world to create technical infrastructure to support the new scientific practices that are emerging. The e-Science programme in the United Kingdom and the NSF Office for Cyberinfrastructure are examples of these. In Switzerland there have been no such national initiatives. Yet, this has not prevented scientists from exploring the development of similar types of computing infrastructures. In 2004, a group of researchers in Switzerland established a project, SwissBioGrid, to explore whether Grid computing technologies could be successfully deployed within the life sciences. This paper presents their experiences as a case study of how the life sciences are currently operating as an information science and presents the lessons learned about how existing institutional and technical arrangements facilitate or impede this operation.
Results
SwissBioGrid was established to provide computational support to two pilot projects: one for proteomics data analysis, and the other for high-throughput molecular docking ("virtual screening") to find new drugs for neglected diseases (specifically, for dengue fever). The proteomics project was an example of a large-scale data management problem, applying many different analysis algorithms to Terabyte-sized datasets from mass spectrometry and involving comparisons with many different reference databases; the virtual screening project was a more purely computational problem, modelling the interactions of millions of small molecules with a limited number of dengue virus protein targets. Both offer interesting lessons about how scientific practices change when researchers tackle large-scale data analysis and data management by creating a novel technical infrastructure. A toy sketch of the virtual screening project's parallel task structure follows this entry's citation details.
Conclusions
In the experience of SwissBioGrid, data intensive discovery has a lot to gain from close collaboration with industry and harnessing distributed computing power. Yet the diversity in life science research implies only a limited role for generic infrastructure; and the transience of support means that researchers need to integrate their efforts with others if they want to sustain the benefits of their success, which are otherwise lost.
PMCID: PMC2850249  PMID: 19521952
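The virtual screening pilot described above is an embarrassingly parallel workload; the Python sketch below mimics its shape by farming independent target-ligand scoring tasks out to a local process pool. The target names, ligand library, and scoring function are stand-ins, and a real grid deployment would dispatch actual docking jobs across distributed SwissBioGrid nodes rather than local processes.

# Hypothetical sketch of the parallel structure of virtual screening:
# score many candidate molecules against a few protein targets by running
# independent tasks in worker processes. The scoring function is a
# placeholder, not a real docking engine.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

TARGETS = ["dengue_NS3_protease", "dengue_NS5_polymerase"]  # invented labels
LIGANDS = [f"ligand_{i}" for i in range(1000)]              # stand-in library

def dock(task):
    """Placeholder docking score; scores here are arbitrary, for shape only."""
    target, ligand = task
    return target, ligand, (hash((target, ligand)) % 1000) / 1000.0

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(dock, product(TARGETS, LIGANDS), chunksize=100))
    best = sorted(results, key=lambda r: r[2])[:3]
    print(best)  # three best-scoring (lowest) target-ligand pairs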
