Results 1-25 (1343021)

1.  Make it better but don't change anything 
With massive amounts of data being generated in electronic format, basic science laboratories need new methods for tracking and analyzing data. An electronic laboratory notebook (ELN) is not just a replacement for a paper lab notebook; it is a new method of storing and organizing data while maintaining the data-entry flexibility and legal recording functions of paper notebooks. Paper notebooks are regarded as highly flexible, since the user can configure them to store almost anything that can be written or physically pasted onto the pages. However, data retrieval and data sharing from paper notebooks are labor-intensive processes, and a notebook can be misplaced: a single point of failure that loses all entries in the volume. Additional features provided by electronic notebooks include searchable indices, data sharing, automatic archiving for security against loss, and easy data duplication. Furthermore, ELNs can take on functions not commonly found in paper notebooks, such as inventory control. While ELNs have been on the market for some time, their adoption in academic basic science laboratories has lagged. Development and adoption of ELNs in research laboratories have been restrained by the sheer variety and frequency of changes in protocols, together with the need for users to control notebook configuration without relying on professional IT staff. In this commentary, we look at some of the issues and experiences in academic laboratories that have proved challenging in implementing an electronic lab notebook.
doi:10.1186/1759-4499-1-5
PMCID: PMC2810290  PMID: 20098591
2.  Mediators between Theoretical and Practical Medieval Knowledge: Medical Notebooks from the Cairo Genizah and their Significance 
Medical History  2013;57(4):487-515.
This article presents a plethora of fragments from the medical notebooks found in the Cairo Genizah that comprise a unique source of historical data for scholarly study and for a better understanding of the ways in which medieval medical knowledge in Egypt was transferred from theory to practice and vice versa. These documents provide the most direct evidence we have for preferred practical medical recipes because they record the choices of medical practitioners in medieval Cairo. Since the language most commonly used in them was Judaeo-Arabic, they were evidently written by Jews. The medical genre in the notebooks was primarily pharmacopoeic, consisting of apparently original recipes for the treatment of various diseases. There are also a few notebooks on materia medica. The subject matter of the Genizah medical notebooks shows that they were mostly of an eclectic nature, i.e. the writers had probably learnt about these treatments and recipes from their teachers, applied them at the hospitals where they worked or copied them from the books they read. Foremost among the subjects dealt with were eye diseases, followed by skin diseases, coughs and colds, dentistry and oral hygiene, and gynaecological conditions. The writers of the Genizah notebooks apparently recorded the practical medical knowledge they wished to preserve for their future use as amateur physicians, students, traditional healers or professional practitioners.
doi:10.1017/mdh.2013.56
PMCID: PMC3865955  PMID: 24069914
Cairo Genizah; History of Medicine; Jewish; Medieval Middle East; Middle Ages; Notebook
3.  From documents to datasets: A MediaWiki-based method of annotating and extracting species observations in century-old field notebooks 
ZooKeys  2012;209:235-253.
Part diary, part scientific record, biological field notebooks often contain details necessary to understanding the location and environmental conditions existent during collecting events. Despite their clear value for (and recent use in) global change studies, the text-mining outputs from field notebooks have been idiosyncratic to specific research projects, and impossible to discover or re-use. Best practices and workflows for digitization, transcription, extraction, and integration with other sources are nascent or non-existent. In this paper, we demonstrate a workflow to generate structured outputs while also maintaining links to the original texts. The first step in this workflow was to place already digitized and transcribed field notebooks from the University of Colorado Museum of Natural History founder, Junius Henderson, on Wikisource, an open text transcription platform. Next, we created Wikisource templates to document places, dates, and taxa to facilitate annotation and wiki-linking. We then requested help from the public, through social media tools, to take advantage of volunteer efforts and energy. After three notebooks were fully annotated, content was converted into XML and annotations were extracted and cross-walked into Darwin Core compliant record sets. Finally, these recordsets were vetted, to provide valid taxon names, via a process we call “taxonomic referencing.” The result is identification and mobilization of 1,068 observations from three of Henderson’s thirteen notebooks and a publishable Darwin Core record set for use in other analyses. Although challenges remain, this work demonstrates a feasible approach to unlock observations from field notebooks that enhances their discovery and interoperability without losing the narrative context from which those observations are drawn.
“Compose your notes as if you were writing a letter to someone a century in the future.”
Perrine and Patton (2011)
doi:10.3897/zookeys.209.3247
PMCID: PMC3406479  PMID: 22859891
Field notes; notebooks; crowd sourcing; digitization; biodiversity; transcription; text-mining; Darwin Core; Junius Henderson; annotation; taxonomic referencing; natural history; Wikisource; Colorado; species occurrence records
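The annotation-to-Darwin-Core step described in this entry can be sketched as a simple crosswalk: a transcribed annotation (place, date, taxon) is mapped onto standard Darwin Core terms. The field names and the `crosswalk_to_darwin_core` helper below are illustrative assumptions, not the authors' actual template schema.

```python
def crosswalk_to_darwin_core(annotation):
    """Map one transcribed field-notebook annotation onto Darwin Core terms."""
    return {
        "dwc:eventDate": annotation["date"],            # date of the collecting event
        "dwc:locality": annotation["place"],            # place name from the notebook
        "dwc:scientificName": annotation["taxon"],      # name after taxonomic referencing
        "dwc:occurrenceRemarks": annotation.get("note", ""),
        "dwc:recordedBy": "Junius Henderson",           # author of these notebooks
    }

# Hypothetical example observation from a notebook page
record = crosswalk_to_darwin_core(
    {"place": "Boulder, Colorado", "date": "1905-06-14", "taxon": "Sciurus aberti"}
)
```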
4.  A novel collaborative e-learning platform for medical students - ALERT STUDENT 
BMC Medical Education  2014;14:143.
Background
The increasing complexity of medical curricula would benefit from adaptive computer supported collaborative learning systems that support study management using instructional design and learning object principles. However, to our knowledge, there are few reports of applications that meet this goal and encompass the complete medical curriculum. The aim of this study was to develop and assess the usability of an adaptive computer supported collaborative learning system that helps medical students manage study sessions.
Results
A study platform named ALERT STUDENT was built as a free web application. Content chunks are represented as Flashcards that hold knowledge and open-ended questions. These can be created in a collaborative fashion. Multiple Flashcards can be combined into custom stacks called Notebooks that can be accessed in study Groups that belong to the user's institution. The system provides a Study Mode that features text markers, text notes, timers, and color-coded content prioritization based on self-assessment of open-ended questions presented in a Quiz Mode. Time spent studying and perception of knowledge are displayed for each student and their peers using charts. Computer supported collaborative learning is achieved by allowing simultaneous creation of Notebooks and self-assessment questions by many users in a pre-defined Group. Past personal performance data are retrieved when studying new Notebooks containing previously studied Flashcards. Self-report surveys showed that students strongly agreed that the system was useful and were willing to use it as a reference tool.
Conclusions
The platform employs various instructional design and learning object principles in a computer supported collaborative learning platform for medical students that allows for study management. The application broadens student insight over learning results and supports informed decisions based on past learning performance. It serves as a potential educational model for the medical education setting that has gathered strong positive feedback from students at our school.
This platform provides a case study on how effective blending of instructional design and learning object principles can be brought together to manage study, and takes an important step towards bringing information management tools to support study decisions and improving learning outcomes.
doi:10.1186/1472-6920-14-143
PMCID: PMC4131539  PMID: 25017028
Medical education; Computer supported collaborative learning; E-learning; Information management; Memory retention; Computer-assisted instruction; Tailored learning; Student-centered learning; Spaced repetition
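The color-coded content prioritization this entry describes can be sketched as below. The `Flashcard` and `Notebook` classes, the score range, and the color thresholds are assumptions for illustration, not ALERT STUDENT's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Flashcard:
    front: str                  # the content chunk
    question: str               # open-ended self-assessment question
    self_score: int = 0         # 0 (unknown) .. 5 (mastered), set in Quiz Mode

    @property
    def color(self) -> str:
        # red = study first, yellow = review, green = known (assumed thresholds)
        if self.self_score <= 1:
            return "red"
        if self.self_score <= 3:
            return "yellow"
        return "green"

@dataclass
class Notebook:
    title: str
    cards: list = field(default_factory=list)

    def study_order(self):
        # least-known content first, mimicking color-coded prioritization
        return sorted(self.cards, key=lambda c: c.self_score)

# Hypothetical Notebook built collaboratively by a study Group
nb = Notebook("Cardiology", [
    Flashcard("Frank-Starling mechanism", "State the law.", self_score=4),
    Flashcard("QT interval", "What prolongs it?", self_score=1),
])
```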
5.  Mobile education in autopsy conferences of pathology: presentation of complex cases 
Diagnostic Pathology  2006;1:42.
Background
MeduMobile was a project to develop and evaluate learning scenarios for medical students and teachers using video communication and notebooks. Its core consisted of various medical routines, conferences, and meetings, such as doctor-patient bedside conversations. These were filmed by video teams and broadcast live over the WLAN of the Charité campus to participating students. One of the learning arrangements was the autopsy conference, run as an on-call scenario.
Materials and methods
The MeduMobile project consisted of two main components: the regular seminar event, which took place every week or month, and the on-call event. For an on-call event, the students were informed two hours before the lesson's start. A mobile video team organised the video conference via a dedicated MeduMobile seminar system; this software allowed the students to log in. The MeduMobile seminar system is based on the Windows operating system and provides extended video communication via WLAN. Thirteen access points were installed at the Charité Campus Virchow Klinikum and Campus Mitte. A questionnaire was developed to investigate the response to, and the learning effect of, the mobile seminar system.
Results
During the MeduMobile project, 42 video conferences with a cumulative 145 participating students took place. Four autopsy conferences were organised as on-call scenarios within this project. A prospective, non-randomised follow-up study included 25 students of the 1st–6th clinical semester. According to the answers, the professional reasoning, professional performance, sustainability, and complexity of the conferences were broadly accepted by the students.
Discussion
In principle, MeduMobile realised interdisciplinary case presentations using video conferencing and web pages. The evaluation indicates high acceptance of such complex case presentations in multidisciplinary settings. The use of notebooks in mobile learning enables interconnected training and promotes complex learning.
doi:10.1186/1746-1596-1-42
PMCID: PMC1654189  PMID: 17094805
6.  In Vivo Spinal Posture during Upright and Reclined Sitting in an Office Chair 
BioMed Research International  2013;2013:916045.
Increasing numbers of people spend the majority of their working lives seated in an office chair. Musculoskeletal disorders, in particular low back pain, resulting from prolonged static sitting are ubiquitous, but regularly changing sitting position throughout the day is thought to reduce back problems. Nearly all currently available office chairs offer the possibility to alter the backrest reclination angles, but the influence of changing seating positions on the spinal column remains unknown. In an attempt to better understand the potential to adjust or correct spine posture using adjustable seating, five healthy subjects were analysed in an upright and reclined sitting position conducted in an open, upright MRI scanner. The shape of the spine, as described using the vertebral bodies' coordinates, wedge angles, and curvature angles, showed high inter-subject variability between the two seating positions. The mean lumbar, thoracic, and cervical curvature angles were 29 ± 15°, −29 ± 4°, and 13 ± 8° for the upright and 33 ± 12°, −31 ± 7°, and 7 ± 7° for the reclined sitting positions. Thus, a wide range of seating adaptation is possible through modification of chair posture, and dynamic seating options may therefore provide a key feature in reducing or even preventing back pain caused by prolonged static sitting.
doi:10.1155/2013/916045
PMCID: PMC3794512  PMID: 24175307
7.  Telemonitoring of home infusion technology 
Background
The specialized registered nurses on our organization's technological homecare team are highly qualified in technical nursing.
One component of their job is the intravenous administration of medication to patients in their own homes using an infusion pump.
In a hospital setting a nurse can ask a colleague to check the installation of the pump and the dose of medication; in the patient's home situation this is not possible.
The Inspection for Healthcare in the Netherlands noted this problem in a report on home infusion technology, since the absence of a double check means a higher risk of mistakes.
This motivated us to look for a safe solution to the problem using telemonitoring.
Method
To realise this method, we found an enthusiastic technical installation company (Focus Cura) willing to develop a portable telemonitoring device that can film and record. The device allows a colleague at another location to view the recorded images at the same time.
After drawing up a list of requirements together with the team of specialized nurses, Focus Cura built the first prototype: a portable suitcase with all the equipment. Four different methods of receiving the images were examined.
Result
The result is a portable suitcase with a camera that makes high quality video images, which are sent by a safe and protected connection to the notebook of a colleague at another location in the region. We have developed a protocol which describes the use of telemonitoring to aid home infusion technology.
Conclusion
Thus, specialized nurses working across an area of about 100 kilometres (62 miles) can reach each other to perform a safe double check: a simple method that improves the safety of both patient and professional.
PMCID: PMC3031816
telemonitoring; infusion technology; home
8.  Informing Comprehensive HIV Prevention: A Situational Analysis of the HIV Prevention and Care Context, North West Province South Africa 
PLoS ONE  2014;9(7):e102904.
Objective
Building a successful combination prevention program requires understanding the community’s local epidemiological profile, the social community norms that shape vulnerability to HIV and access to care, and the available community resources. We carried out a situational analysis in order to shape a comprehensive HIV prevention program that addresses local barriers to care at multiple contextual levels in the North West Province of South Africa.
Method
The situational analysis was conducted in two sub-districts in 2012 and guided by an adaptation of WHO’s Strategic Approach, a predominantly qualitative method, including observation of service delivery points and in-depth interviews and focus groups with local leaders, providers, and community members, in order to recommend context-specific HIV prevention strategies. Analysis began during fieldwork with nightly discussions of findings and continued with coding original textual data from the fieldwork notebooks and a select number of recorded interviews.
Results
We conducted over 200 individual and group interviews and identified four principal social barriers to HIV prevention and care: HIV fatalism, traditional gender norms, HIV-related stigma, and challenges with communication around HIV, all of which fuel the HIV epidemic. Across the different levels of response needed to stem the epidemic, we found evidence of national policies and programs that mitigate these social risk factors, but few community-based responses that address them.
Conclusions
Understanding social and structural barriers to care helped shape our comprehensive HIV prevention program, which addresses the four identified themes in each of its components. Activities are underway to engage communities, offer community-based testing in high-transmission areas, reduce stigma at the community level, and run a positive health, dignity and prevention program that reduces stigma and improves communication skills. The situational analysis process successfully shaped key programmatic decisions and cultivated a deeper collaboration with local stakeholders to support program implementation.
doi:10.1371/journal.pone.0102904
PMCID: PMC4100930  PMID: 25028976
9.  Collecting Knowledge for the Family: Recipes, Gender and Practical Knowledge in the Early Modern English Household 
When Mary Cholmeley married Henry Fairfax in 1627, she carried to her new home in Yorkshire a leather-bound notebook filled with medical recipes. Over the next few decades, Mary and Henry, their children and various members of the Fairfax and Cholmeley families continually entered new medical and culinary information into this ‘treasury for health.’ Consequently, as it stands now, the manuscript can be read both as a repository of household medical knowledge and as a family archive. Focusing on two Fairfax ‘family books,’ this essay traces the process through which early modern recipe books were created. In particular, it explores the role of the family collective in compiling books of knowledge. In contrast to past studies where household recipe books have largely been described as the products of exclusively female endeavors, I argue that the majority of early modern recipe collections were created by family collectives and that the members of these collectives worked in collaboration across spatial, geographical and temporal boundaries. This new reading of recipe books as testaments of the interests and needs of particular families encourages renewed examination of the role played by gender in the transmission and production of knowledge in early modern households.
doi:10.1111/1600-0498.12019
PMCID: PMC3709121  PMID: 23926360
Early modern medicine; gender history; household; informal science
10.  LabTrove: A Lightweight, Web Based, Laboratory “Blog” as a Route towards a Marked Up Record of Work in a Bioscience Research Laboratory 
PLoS ONE  2013;8(7):e67460.
Background
The electronic laboratory notebook (ELN) has the potential to replace the paper notebook with a marked-up digital record that can be searched and shared. However, it is a challenge to achieve these benefits without losing the usability and flexibility of traditional paper notebooks. We investigate a blog-based platform that addresses the issues associated with the development of a flexible system for recording scientific research.
Methodology/Principal Findings
We chose a blog-based approach with the journal characteristics of traditional notebooks in mind, recognizing the potential for linking together procedures, materials, samples, observations, data, and analysis reports. We implemented the LabTrove blog system as a server process written in PHP, using a MySQL database to persist posts and other research objects. We incorporated a metadata framework that is both extensible and flexible while promoting consistency and structure where appropriate. Our experience thus far is that LabTrove is capable of providing a successful electronic laboratory recording system.
Conclusions/Significance
LabTrove implements a one-item one-post system, which enables us to uniquely identify each element of the research record, such as data, samples, and protocols. This unique association between a post and a research element affords advantages for monitoring the use of materials and samples and for inspecting research processes. The combination of the one-item one-post system, consistent metadata, and full-text search provides us with a much more effective record than a paper notebook. The LabTrove approach provides a route towards reconciling the tensions and challenges that lie ahead in working towards the long-term goals for ELNs. LabTrove, an electronic laboratory notebook (ELN) system from the Smart Research Framework, based on a blog-type framework with full access control, facilitates the scientific experimental recording requirements for reproducibility, reuse, repurposing, and redeployment.
doi:10.1371/journal.pone.0067460
PMCID: PMC3720848  PMID: 23935832
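The one-item one-post model this entry describes can be sketched as below. The `Post` class and `search` helper are hypothetical Python stand-ins for LabTrove's actual PHP/MySQL implementation, shown only to make the idea of uniquely identified, metadata-tagged research objects concrete.

```python
import itertools

_ids = itertools.count(1)

class Post:
    """One research object (datum, sample, protocol) = one uniquely identified post."""
    def __init__(self, kind, title, body, **metadata):
        self.post_id = next(_ids)   # unique identifier for each research element
        self.kind = kind            # e.g. "sample", "data", "protocol"
        self.title = title
        self.body = body
        self.metadata = metadata    # extensible, flexible key-value metadata

def search(posts, term):
    """Naive full-text search over post titles and bodies."""
    term = term.lower()
    return [p for p in posts if term in p.title.lower() or term in p.body.lower()]

# Hypothetical research record: each element gets its own post
trove = [
    Post("sample", "Sample 42", "HeLa cell lysate", storage="-80C"),
    Post("protocol", "Lysis protocol", "Lyse cells in RIPA buffer on ice"),
]
```

Because every element is a separate post with its own identifier and metadata, the use of a given sample or material can be traced by querying posts rather than re-reading pages.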
11.  The Memory Support System for Mild Cognitive Impairment: Randomized trial of a cognitive rehabilitation intervention 
Objective
Individuals with amnestic Mild Cognitive Impairment (MCI) have few empirically-based treatment options for combating their memory loss. This study sought to examine the efficacy of a calendar/notebook rehabilitation intervention, the Memory Support System (MSS), for individuals with amnestic MCI.
Methods
Forty individuals with single domain amnestic MCI and their program partners were randomized to receive the MSS, either with training or without (controls). Measures of adherence, activities of daily living, and emotional impact were completed at the first and last intervention session and again at 8 weeks and 6 months post-intervention.
Results
Training in the use of the notebook/calendar system significantly improved adherence compared with those who received the calendars but no training. Functional ability and memory self-efficacy improved significantly for those who received MSS training. The change in functional ability remained significantly better in the intervention group than in the control group out to the 8-week follow-up. Care partners in the intervention group showed improved mood at the 8-week and 6-month follow-ups, while control care partners reported greater caregiver burden by the 6-month follow-up.
Conclusions
MSS training resulted in improvement in ADLs and sense of memory self-efficacy for individuals with MCI. While ADL benefits were maintained out to 8 weeks post-intervention, future inclusion of booster sessions may help extend the therapeutic effect further. The improved mood of care partners of trained individuals, and the worsening sense of caregiver burden over time for partners of untrained individuals, further support the efficacy of the MSS for MCI.
doi:10.1002/gps.3838
PMCID: PMC3766962  PMID: 22678947
Mild Cognitive Impairment; Rehabilitation; Behavioral Intervention; Activities of Daily Living; Quality of Life; Caregivers
12.  Coordination of Hand Shape 
The neural control of hand movement involves coordination of the sensory, motor and memory systems. Recent studies have documented the motor coordinates for hand shape, but less is known about the corresponding patterns of somatosensory activity. To initiate this line of investigation, the present study characterized the sense of hand shape by evaluating the influence of differences in the amount of grasping or twisting force, and differences in forearm orientation. Human subjects were asked to use the left hand to report the perceived shape of the right hand. In Experiment 1, six commonly grasped items were arranged on the table in front of the subject: bottle, doorknob, egg, notebook, carton, pan. With eyes closed, subjects used the right hand to lightly touch, forcefully support or imagine holding each object, while 15 joint angles were measured in each hand with a pair of wired gloves. The forces introduced by supporting or twisting did not influence the perceptual report of hand shape, but for most objects, the report was distorted in a consistent manner by differences in forearm orientation. Subjects appeared to adjust the intrinsic joint angles of the left hand, as well as the left wrist posture, so as to maintain the imagined object in its proper spatial orientation. In a second experiment, this result was largely replicated with unfamiliar objects. Thus somatosensory and motor information appear to be coordinated in an object-based, spatial coordinate system, sensitive to orientation relative to gravitational forces, but invariant to grasp forcefulness.
doi:10.1523/JNEUROSCI.5158-10.2011
PMCID: PMC3066006  PMID: 21389230
13.  iLAP: a workflow-driven software for experimental protocol development, data acquisition and analysis 
BMC Bioinformatics  2009;10:390.
Background
In recent years, the genome biology community has expended considerable effort to confront the challenges of managing heterogeneous data in a structured and organized way and developed laboratory information management systems (LIMS) for both raw and processed data. On the other hand, electronic notebooks were developed to record and manage scientific data and facilitate data sharing. Software that enables both management of large datasets and digital recording of laboratory procedures would serve a real need in laboratories using medium- and high-throughput techniques.
Results
We have developed iLAP (Laboratory data management, Analysis, and Protocol development), a workflow-driven information management system specifically designed to create and manage experimental protocols, and to analyze and share laboratory data. The system combines experimental protocol development, wizard-based data acquisition, and high-throughput data analysis into a single, integrated system. We demonstrate the power and the flexibility of the platform using a microscopy case study based on a combinatorial multiple fluorescence in situ hybridization (m-FISH) protocol and 3D-image reconstruction. iLAP is freely available under the open source license AGPL from http://genome.tugraz.at/iLAP/.
Conclusion
iLAP is a flexible and versatile information management system, which has the potential to close the gap between electronic notebooks and LIMS and can therefore be of great value for a broad scientific community.
doi:10.1186/1471-2105-10-390
PMCID: PMC2789074  PMID: 19941647
14.  Reflex control of the spine and posture: a review of the literature from a chiropractic perspective 
Objective
This review details the anatomy and interactions of the postural and somatosensory reflexes. We attempt to identify the important role the nervous system plays in maintaining reflex control of the spine and posture. We also review, illustrate, and discuss how the human vertebral column develops, functions, and adapts to Earth's gravity in an upright position. We identify functional characteristics of the postural reflexes by reporting previous observations of subjects during periods of microgravity or weightlessness.
Background
Historically, chiropractic has centered around the concept that the nervous system controls and regulates all other bodily systems; and that disruption to normal nervous system function can contribute to a wide variety of common ailments. Surprisingly, the chiropractic literature has paid relatively little attention to the importance of neurological regulation of static upright human posture. With so much information available on how posture may affect health and function, we felt it important to review the neuroanatomical structures and pathways responsible for maintaining the spine and posture. Maintenance of static upright posture is regulated by the nervous system through the various postural reflexes. Hence, from a chiropractic standpoint, it is clinically beneficial to understand how the individual postural reflexes work, as it may explain some of the clinical presentations seen in chiropractic practice.
Method
We performed a manual search for available relevant textbooks, and a computer search of the MEDLINE, MANTIS, and Index to Chiropractic Literature databases from 1970 to present, using the following key words and phrases: "posture," "ocular," "vestibular," "cervical facet joint," "afferent," "vestibulocollic," "cervicocollic," "postural reflexes," "spaceflight," "microgravity," "weightlessness," "gravity," "posture," and "postural." Studies were selected if they specifically tested any or all of the postural reflexes either in Earth's gravity or in microgravitational environments. Studies testing the function of each postural component, as well as those discussing postural reflex interactions, were also included in this review.
Discussion
It is quite apparent from the indexed literature we searched that posture is largely maintained by reflexive, involuntary control. While reflexive components for postural control are found in skin and joint receptors, somatic graviceptors, and baroreceptors throughout the body, much of the reflexive postural control mechanisms are housed, or occur, within the head and neck region primarily. We suggest that the postural reflexes may function in a hierarchical fashion. This hierarchy may well be based on the gravity-dependent or gravity-independent nature of each postural reflex. Some or all of these postural reflexes may contribute to the development of a postural body scheme, a conceptual internal representation of the external environment under normal gravity. This model may be the framework through which the postural reflexes anticipate and adapt to new gravitational environments.
Conclusion
Visual and vestibular input, as well as joint and soft tissue mechanoreceptors, are major players in the regulation of static upright posture. Each of these input sources detects and responds to specific types of postural stimulus and perturbations, and each region has specific pathways by which it communicates with other postural reflexes, as well as higher central nervous system structures. This review of the postural reflex structures and mechanisms adds to the growing body of posture rehabilitation literature relating specifically to chiropractic treatment. Chiropractic interest in these reflexes may enhance the ability of chiropractic physicians to treat and correct global spine and posture disorders. With the knowledge and understanding of these postural reflexes, chiropractors can evaluate spinal configurations not only from a segmental perspective, but can also determine how spinal dysfunction may be the ultimate consequence of maintaining an upright posture in the presence of other postural deficits. These perspectives need to be explored in more detail.
doi:10.1186/1746-1340-13-16
PMCID: PMC1198239  PMID: 16091134
Cervical spine; Posture; Reflex
15.  Building a Virtual Network in a Community Health Research Training Program 
Objective: To describe the experiences, lessons, and implications of building a virtual network as part of a two-year community health research training program in a Canadian province.
Design: An action research field study in which 25 health professionals from 17 health regions participated in a seven-week training course on health policy, management, economics, research methods, data analysis, and computer technology. The participants then returned to their regions to apply the knowledge in different community health research projects. Ongoing faculty consultations and support were provided as needed. Each participant was given a notebook computer with the necessary software, Internet access, and technical support for two years, to access information resources, engage in group problem solving, share ideas and knowledge, and collaborate on projects.
Measurements: Data collected over two years consisted of program documents, records of interviews with participants and staff, meeting notes, computer usage statistics, automated online surveys, computer conference postings, program Web site, and course feedback. The analysis consisted of detailed review and comparison of the data from different sources. NUD*IST was then used to validate earlier study findings.
Results: The ten key lessons are that role clarity, technology vision, implementation staging, protected time, just-in-time training, ongoing facilitation, work integration, participatory design, relationship building, and the demonstration of results are essential ingredients for building a successful network.
Conclusion: This study provides a descriptive model of the processes involved in developing, in the community health setting, virtual networks that can be used as the basis for future research and as a practical guide for managers.
PMCID: PMC61441  PMID: 10887165
16.  MultiSeq: unifying sequence and structure data for evolutionary analysis 
BMC Bioinformatics  2006;7:382.
Background
Since the publication of the first draft of the human genome in 2000, bioinformatic data have been accumulating at an overwhelming pace. Currently, more than 3 million sequences and 35 thousand structures of proteins and nucleic acids are available in public databases. Finding correlations in and between these data to answer critical research questions is extremely challenging. This problem needs to be approached from several directions: information science to organize and search the data; information visualization to assist in recognizing correlations; mathematics to formulate statistical inferences; and biology to analyze chemical and physical properties in terms of sequence and structure changes.
Results
Here we present MultiSeq, a unified bioinformatics analysis environment that allows one to organize, display, align and analyze both sequence and structure data for proteins and nucleic acids. While special emphasis is placed on analyzing the data within the framework of evolutionary biology, the environment is also flexible enough to accommodate other usage patterns. The evolutionary approach is supported by the use of predefined metadata, adherence to standard ontological mappings, and the ability for the user to adjust these classifications using an electronic notebook. MultiSeq contains a new algorithm to generate complete evolutionary profiles that represent the topology of the molecular phylogenetic tree of a homologous group of distantly related proteins. The method, based on the multidimensional QR factorization of multiple sequence and structure alignments, removes redundancy from the alignments and orders the protein sequences by increasing linear dependence, resulting in the identification of a minimal basis set of sequences that spans the evolutionary space of the homologous group of proteins.
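The redundancy-removal idea behind the evolutionary profiles can be illustrated with a small sketch. This is not MultiSeq's implementation (which is part of VMD and works on real multiple sequence and structure alignments); it is a minimal greedy pivoted Gram-Schmidt pass over one-hot-encoded sequences, with invented example data, showing how sequences can be ordered by decreasing linear independence so that the first entries form a non-redundant basis set:

```python
# Hedged sketch: order aligned sequences by linear independence with a greedy
# pivoted Gram-Schmidt, in the spirit of (not identical to) MultiSeq's
# multidimensional QR factorization. Alignment data here is invented.
import numpy as np

def one_hot(alignment, alphabet="ACGT-"):
    """Encode each aligned sequence as a flat one-hot vector."""
    index = {c: i for i, c in enumerate(alphabet)}
    rows = []
    for seq in alignment:
        v = np.zeros(len(seq) * len(alphabet))
        for pos, c in enumerate(seq):
            v[pos * len(alphabet) + index[c]] = 1.0
        rows.append(v)
    return np.array(rows)

def order_by_independence(alignment):
    """Return sequence indices ordered by decreasing linear independence;
    redundant (linearly dependent) sequences sort to the end."""
    X = one_hot(alignment)
    remaining = list(range(len(alignment)))
    basis, order = [], []
    while remaining:
        def residual(i):
            # component of sequence i orthogonal to the basis chosen so far
            r = X[i].copy()
            for b in basis:
                r -= np.dot(r, b) * b
            return r
        best = max(remaining, key=lambda i: np.linalg.norm(residual(i)))
        r = residual(best)
        norm = np.linalg.norm(r)
        if norm > 1e-9:
            basis.append(r / norm)
        order.append(best)
        remaining.remove(best)
    return order

# The two identical sequences are redundant: one of them is ordered last.
alignment = ["ACGT", "ACGT", "TTTT"]
print(order_by_independence(alignment))  # -> [0, 2, 1]
```

A production version would operate on weighted alignment columns rather than raw one-hot vectors, but the ordering principle is the same.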
Conclusion
MultiSeq is a major extension of the Multiple Alignment tool that is provided as part of VMD, a structural visualization program for analyzing molecular dynamics simulations. Both are freely distributed by the NIH Resource for Macromolecular Modeling and Bioinformatics and MultiSeq is included with VMD starting with version 1.8.5. The MultiSeq website has details on how to download and use the software:
doi:10.1186/1471-2105-7-382
PMCID: PMC1586216  PMID: 16914055
17.  A highly efficient multi-core algorithm for clustering extremely large datasets 
BMC Bioinformatics  2010;11:169.
Background
In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches to parallelizing algorithms rely largely on network communication protocols and require multiple computers. One answer to this problem is to utilize the intrinsic capabilities of current multi-core hardware to distribute the tasks among the different cores of one computer.
Results
We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms, based on the design principles of transactional memory, for clustering gene expression microarray-type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. The computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets, while preserving computational accuracy, compared with single-core implementations and a recently published network-based parallelization.
Conclusions
Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer.
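The parallelization pattern can be sketched minimally: the costly assignment step (finding each point's nearest centroid) is distributed across cores, while the cheap update step stays serial. The paper's implementation is in Java with a transactional-memory-inspired shared-memory design; this Python version with a process pool is illustrative only, and the example data is invented:

```python
# Hedged sketch of multi-core k-means: parallel assignment, serial update.
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def nearest(centroids, point):
    """Index of the centroid closest to the point (squared Euclidean)."""
    return min(range(len(centroids)),
               key=lambda k: sum((p - c) ** 2
                                 for p, c in zip(point, centroids[k])))

def assign_parallel(points, centroids, workers=2):
    """Distribute the assignment step across cores in fixed-size chunks."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(partial(nearest, centroids), points, chunksize=64))

def update(points, labels, k):
    """Serial update step: recompute each centroid as its cluster mean."""
    dim = len(points[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for p, label in zip(points, labels):
        counts[label] += 1
        for d in range(dim):
            sums[label][d] += p[d]
    return [[s / counts[i] for s in sums[i]] if counts[i] else list(points[0])
            for i in range(k)]

if __name__ == "__main__":
    points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
    centroids = [(0.0, 0.0), (5.0, 5.0)]
    for _ in range(3):
        labels = assign_parallel(points, centroids)
        centroids = update(points, labels, k=2)
    print(labels)  # -> [0, 0, 1, 1]
```

Chunked mapping keeps inter-process overhead low; in a real workload the speedup comes from the assignment step dominating the runtime.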
doi:10.1186/1471-2105-11-169
PMCID: PMC2865495  PMID: 20370922
18.  Intraoperative Evaluation of Laparoscopic Insufflation Technique for Quality Control in the OR 
Objective:
With increasing technology and computerized systems in the OR, the physician's responsibility is growing. For intraoperative evaluation of insufflation techniques, a data acquisition model for quality control study of potential insufflation problems is necessary.
Methods:
A computer-based, online data acquisition model was designed with a Pentium notebook, PCMCIA data acquisition board PCI-460-P1 and a Visual Designer 3.0 measurement program (both Intelligent Instrumentation, Inc., Tucson, AZ), temperature meters Therm 2280-1 and 2283-2 (Ahlborn, Holzkirchen, Germany) and temperature probes 401 AC and 402 AC (YSI, Inc., Yellow Springs, OH) and T-430-2R (Ahlborn, Holzkirchen, Germany). Gas flow was measured with laminar flow element LFE 1 and flow meters Digima premo 720 (both Special Instruments, Noerdlingen, Germany). During 73 standard laparoscopic procedures, gas flow (L/min) in the insufflation hose, pressure (mm Hg) in the hose and abdomen, as well as temperature (°C) in the hose, abdomen and rectum, were measured continuously at a sampling rate of 3 Hz.
Results:
The measured values show a wide range and are often not identical with the insufflator presets. Pressure in the abdomen is usually less than hose pressure. Intraabdominal pressure peaks (≤50 mm Hg) occurred during insufficient anesthesia, while leaning on the abdomen, during trocar insertion and during other manipulation. Blood and irrigation fluids found in the hose (n=3/73) can lead to bacterial contamination. Negative pressure (−50 mm Hg) was measured during Endobag removal. Negative flow (≤15 L/min) was caused by pressure on the abdomen, insufflator regulation and an empty CO2 gas tank. Gas temperature in the hose equals room temperature but can decrease in the abdomen to 27.7°C due to high gas flow, large amounts of gas used and prolonged insufflation. Further insufflation-related problems were documented.
Conclusions:
This computer-based measurement model proved to be useful for quality control studies in the OR. The results demonstrate the need for intraoperative evaluation of insufflation techniques for laparoscopy. Although no obvious complication related to insufflation problems occurred, some findings potentially call patient safety into question.
PMCID: PMC3021324  PMID: 10987394
Laparoscopy; Insufflation technique; Data acquisition; Quality control
19.  eCAT: Online electronic lab notebook for scientific research 
Background
eCAT is an electronic lab notebook (ELN) developed by Axiope Limited. It is the first online ELN, the first ELN to be developed in close collaboration with lab scientists, and the first ELN to be targeted at researchers in non-commercial institutions. eCAT was developed in response to feedback from users of a predecessor product. By late 2006 the basic concept had been clarified: a highly scalable web-based collaboration tool that possessed the basic capabilities of commercial ELNs, i.e. a permissions system, controlled sharing, an audit trail, electronic signature and search, and a front end that looked like the electronic counterpart to a paper notebook.
Results
During the development of the beta version feedback was incorporated from many groups including the FDA's Center for Biologics Evaluation & Research, Uppsala University, Children's Hospital Boston, Alex Swarbrick's lab at the Garvan Institute in Sydney and Martin Spitaler at Imperial College. More than 100 individuals and groups worldwide then participated in the beta testing between September 2008 and June 2009. The generally positive response is reflected in the following quote about how one lab is making use of eCAT: "Everyone uses it as an electronic notebook, so they can compile the diverse collections of data that we generate as biologists, such as images and spreadsheets. We use it to take minutes of meetings. We also use it to manage our common stocks of antibodies, plasmids and so on. Finally, perhaps the most important feature for us is the ability to link records, reagents and experiments."
Conclusion
By developing eCAT in close collaboration with lab scientists, Axiope has come up with a practical and easy-to-use product that meets the need of scientists to manage, store and share data online. eCAT is already being perceived as a product that labs can continue to use as their data management and sharing grows in scale and complexity.
doi:10.1186/1759-4499-1-4
PMCID: PMC2809322  PMID: 20334629
20.  An automated and reproducible workflow for running and analyzing neural simulations using Lancet and IPython Notebook 
Lancet is a new, simulator-independent Python utility for succinctly specifying, launching, and collating results from large batches of interrelated computationally demanding program runs. This paper demonstrates how to combine Lancet with IPython Notebook to provide a flexible, lightweight, and agile workflow for fully reproducible scientific research. This informal and pragmatic approach uses IPython Notebook to capture the steps in a scientific computation as it is gradually automated and made ready for publication, without mandating the use of any separate application that can constrain scientific exploration and innovation. The resulting notebook concisely records each step involved in even very complex computational processes that led to a particular figure or numerical result, allowing the complete chain of events to be replicated automatically. Lancet was originally designed to help solve problems in computational neuroscience, such as analyzing the sensitivity of a complex simulation to various parameters, or collecting the results from multiple runs with different random starting points. However, because it is never possible to know in advance what tools might be required in future tasks, Lancet has been designed to be completely general, supporting any type of program as long as it can be launched as a process and can return output in the form of files. For instance, Lancet is also heavily used by one of the authors in a separate research group for launching batches of microprocessor simulations. This general design will allow Lancet to continue supporting a given research project even as the underlying approaches and tools change.
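The batch pattern that Lancet automates (a cross-product of parameter values, one process launched per run, results collated from output files) can be sketched as follows. This illustrates the workflow only and is not Lancet's actual API; the parameter names and the toy "simulation" are invented:

```python
# Hedged sketch of launch-and-collate batch runs in the style Lancet automates.
import itertools
import json
import os
import subprocess
import sys
import tempfile

def param_grid(**space):
    """Cartesian product of parameter values, as a list of dicts."""
    names = sorted(space)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(space[n] for n in names))]

def launch_batch(cmd_template, runs, outdir):
    """Launch one process per parameter set; each writes a JSON result file."""
    outputs = []
    for i, params in enumerate(runs):
        out = os.path.join(outdir, f"run_{i}.json")
        code = cmd_template.format(**params, out=out)
        subprocess.run([sys.executable, "-c", code], check=True)
        outputs.append(out)
    return outputs

# Toy "simulation": each run writes its seed and seed**2 to its output file.
template = ("import json; json.dump({{'seed': {seed}, 'result': {seed}**2}}, "
            "open({out!r}, 'w'))")
runs = param_grid(seed=[1, 2, 3])
with tempfile.TemporaryDirectory() as d:
    files = launch_batch(template, runs, d)
    results = [json.load(open(f)) for f in files]
print([r["result"] for r in results])  # -> [1, 4, 9]
```

Because every run is an ordinary process that returns files, the same pattern works for neural simulators, microprocessor simulators, or any other batch workload, which is the generality the paper emphasizes.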
doi:10.3389/fninf.2013.00044
PMCID: PMC3874632  PMID: 24416014
IPython; pandas; reproducibility; workflow; simulation; batch computation; provenance; big data
21.  Guidelines for information about therapy experiments: a proposal on best practice for recording experimental data on cancer therapy 
BMC Research Notes  2012;5:10.
Background
Biology, biomedicine and healthcare have become data-driven enterprises, where scientists and clinicians need to generate, access, validate, interpret and integrate different kinds of experimental and patient-related data. Thus, recording and reporting of data in a systematic and unambiguous fashion is crucial to allow aggregation and re-use of data. This paper reviews the benefits of existing biomedical data standards and focuses on key elements to record experiments for therapy development. Specifically, we describe the experiments performed in molecular, cellular, animal and clinical models. We also provide an example set of elements for a therapy tested in a phase I clinical trial.
Findings
We introduce the Guidelines for Information About Therapy Experiments (GIATE), a minimum information checklist creating a consistent framework to transparently report the purpose, methods and results of the therapeutic experiments. A discussion on the scope, design and structure of the guidelines is presented, together with a description of the intended audience. We also present complementary resources such as a classification scheme, and two alternative ways of creating GIATE information: an electronic lab notebook and a simple spreadsheet-based format. Finally, we use GIATE to record the details of the phase I clinical trial of CHT-25 for patients with refractory lymphomas. The benefits of using GIATE for this experiment are discussed.
Conclusions
While data standards are being developed to facilitate data sharing and integration in various aspects of experimental medicine, such as genomics and clinical data, no previous work focused on therapy development. We propose a checklist for therapy experiments and demonstrate its use in the 131Iodine labeled CHT-25 chimeric antibody cancer therapy. As future work, we will expand the set of GIATE tools to continue to encourage its use by cancer researchers, and we will engineer an ontology to annotate GIATE elements and facilitate unambiguous interpretation and data integration.
doi:10.1186/1756-0500-5-10
PMCID: PMC3285520  PMID: 22226027
22.  Innovative data tools: a suite for managing peer outreach to key affected populations in Viet Nam 
Problem
The paper tools used to monitor outreach work in all major cities in Viet Nam required substantial writing for each contact and made confidentiality difficult to maintain.
Action
This paper describes the development of a Unique Identifier Code (UIC), a field data collection notebook (databook) and a computer data entry system in Viet Nam. The databook can document 40 individual clients and has space for commodity distribution, group contacts and needles/syringe collection for each month.
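The abstract does not specify how the UIC is constructed, but the general idea of a confidential, reproducible client code can be sketched as follows. The fields, the salt, and the eight-character format are hypothetical; the point is that the same client always yields the same code without any name being recorded:

```python
# Hedged sketch: deriving a confidential Unique Identifier Code (UIC) from
# stable client attributes via a salted one-way hash. All field choices and
# the code format are hypothetical illustrations, not the Viet Nam design.
import hashlib

def uic(initials, birth_year, sex, province, salt="project-secret"):
    """Deterministic, non-reversible code from stable client attributes."""
    raw = "|".join([initials.upper(), str(birth_year),
                    sex.upper(), province.upper(), salt])
    digest = hashlib.sha256(raw.encode("utf-8")).hexdigest()
    return digest[:8].upper()  # short enough for a pocket databook column

a = uic("nvt", 1985, "m", "Hai Phong")
b = uic("NVT", 1985, "M", "HAI PHONG")
print(a, a == b)  # same client, same code regardless of letter case
```

Normalizing case before hashing means field workers can record attributes inconsistently and still produce matching codes across visits.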
Outcome
Field implementation trials of the UIC and databook have been undertaken by more than 160 peer outreach workers to document their work with people who inject drugs (PWID) and sex workers (SW). Following an expanded trial in Hai Phong province, there have been requests for national circulation of the databook to be used by peer educators documenting outreach to PWID, SW and men who have sex with men. The standardized UIC and databook, in a variety of locally adapted formats, have now been introduced in more than 40 of the 63 provinces in Viet Nam.
Discussion
This development in Viet Nam is, to our knowledge, the first example of the combination of a confidential UIC and an innovative, simple pocket-sized paper instrument with associated customized data-entry software for documenting outreach.
doi:10.5365/WPSAR.2012.3.2.003
PMCID: PMC3731006  PMID: 23908919
23.  Data Management in the Modern Structural Biology and Biomedical Research Environment 
Modern high-throughput structural biology laboratories produce vast amounts of raw experimental data. The traditional method of data reduction is very simple—results are summarized in peer-reviewed publications, which are hopefully published in high-impact journals. By their nature, publications include only the most important results derived from experiments that may have been performed over the course of many years. The main content of the published paper is a concise compilation of these data, an interpretation of the experimental results, and a comparison of these results with those obtained by other scientists.
Due to an avalanche of structural biology manuscripts submitted to scientific journals, in many recent cases descriptions of experimental methodology (and sometimes even experimental results) are pushed to supplementary materials that are only published online and sometimes may not be reviewed as thoroughly as the main body of a manuscript. Trouble may arise when experimental results contradict the results obtained by other scientists, which requires (in the best case) reexamination of the original raw data or independent repetition of the experiment according to the published description. There are reports that a significant fraction of results obtained in academic laboratories cannot be reproduced in an industrial environment (Begley CG & Ellis LM, Nature 483(7391):531–3, 2012). This is not an indication of scientific fraud but rather reflects the inadequate description of experiments performed on different equipment and on biological samples that were produced with disparate methods. For that reason, the goal of a modern data management system is not only the simple replacement of the laboratory notebook by an electronic one but also the creation of a sophisticated, internally consistent, scalable data management system that will combine data obtained from a variety of experiments performed by various individuals on diverse equipment. All data should be stored in a core database that can be used by custom applications to prepare internal reports and statistics and to perform other functions specific to the research pursued in a particular laboratory.
This chapter presents a general overview of the methods of data management and analysis used by structural genomics (SG) programs. In addition to a review of the existing literature on the subject, also presented is experience in the development of two SG data management systems, UniTrack and LabDB. The description is targeted to a general audience, as some technical details have been (or will be) published elsewhere. The focus is on “data management,” meaning the process of gathering, organizing, and storing data, but also briefly discussed is “data mining,” the process of analysis ideally leading to an understanding of the data. In other words, data mining is the conversion of data into information. Clearly, effective data management is a precondition for any useful data mining. If done properly, gathering details on millions of experiments on thousands of proteins and making them publicly available for analysis—even after the projects themselves have ended—may turn out to be one of the most important benefits of SG programs.
doi:10.1007/978-1-4939-0354-2_1
PMCID: PMC4086192  PMID: 24590705
Databases; Data management; Structural biology; LIMS; PSI; CSGID
24.  Quality of Life with Gefitinib in Patients with EGFR-Mutated Non-Small Cell Lung Cancer: Quality of Life Analysis of North East Japan Study Group 002 Trial 
The Oncologist  2012;17(6):863-870.
The quality of life analysis from the North East Japan 002 study is reported. Quality of life was maintained much longer in patients treated with gefitinib than in patients treated with standard chemotherapy.
Background.
For non-small cell lung cancer (NSCLC) patients with epidermal growth factor receptor (EGFR) mutations, first-line gefitinib produced a longer progression-free survival interval than first-line carboplatin plus paclitaxel but did not show any survival advantage in the North East Japan 002 study. This report describes the quality of life (QoL) analysis of that study.
Methods.
Chemotherapy-naïve patients with sensitive EGFR-mutated, advanced NSCLC were randomized to receive gefitinib or chemotherapy (carboplatin and paclitaxel). Patient QoL was assessed weekly using the Care Notebook, and the primary endpoint of the QoL analysis was time to deterioration from baseline on each of the physical, mental, and life well-being QoL scales. Kaplan–Meier probability curves and log-rank tests were employed to clarify differences.
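The time-to-deterioration curves described above rest on the Kaplan-Meier product-limit estimator, which can be sketched with invented data (not the trial's): the survival probability S(t) is the running product of (1 - d_i/n_i) over the event times, where d_i is the number of deteriorations at time t_i and n_i is the number still at risk.

```python
# Hedged sketch of the Kaplan-Meier product-limit estimate with toy data.
# Each subject contributes (time, event); event=False means censored
# (no deterioration observed by that time).
def kaplan_meier(subjects):
    """Return [(time, survival)] at each event time: S(t) = prod(1 - d_i/n_i)."""
    event_times = sorted({t for t, event in subjects if event})
    survival, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for tt, _ in subjects if tt >= t)
        events = sum(1 for tt, e in subjects if tt == t and e)
        survival *= 1.0 - events / at_risk
        curve.append((t, survival))
    return curve

# 4 subjects: deterioration at weeks 2 and 5, censoring at weeks 3 and 6.
data = [(2, True), (3, False), (5, True), (6, False)]
print(kaplan_meier(data))  # -> [(2, 0.75), (5, 0.375)]
```

Censored subjects still count toward the at-risk denominator until their censoring time, which is what distinguishes this estimator from a naive fraction.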
Results.
QoL data from 148 patients (72 in the gefitinib arm and 76 in the carboplatin plus paclitaxel arm) were analyzed. Time to defined deterioration in physical and life well-being significantly favored gefitinib over chemotherapy (hazard ratio [HR] of time to deterioration, 0.34; 95% confidence interval [CI], 0.23–0.50; p < .0001 and HR, 0.43; 95% CI, 0.28–0.65; p < .0001, respectively).
Conclusion.
QoL was maintained much longer in patients treated with gefitinib than in patients treated with standard chemotherapy, indicating that gefitinib should be considered as the standard first-line therapy for advanced EGFR-mutated NSCLC despite the absence of a survival advantage.
doi:10.1634/theoncologist.2011-0426
PMCID: PMC3380886  PMID: 22581822
Lung carcinoma; Epidermal growth factor receptor; EGFR; Tyrosine kinase inhibitor; TKI; Gefitinib; Quality of life; QoL
25.  Molecule database framework: a framework for creating database applications with chemical structure search capability 
Background
Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store this data and search it by chemical structure. There are commercial solutions for common needs such as chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes, no such solutions exist. A further issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls, so software developers do not require extensive knowledge of chemistry or the underlying database cartridge. This decreases application development time.
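The kind of abstraction described, storing and searching structures through plain method calls, can be sketched as follows. A naive in-memory index stands in for PostgreSQL plus the Bingo cartridge here; real substructure search requires a chemistry cartridge, and the class and method names below are invented for illustration:

```python
# Hedged sketch: a repository abstraction over structure storage and search.
# The in-memory dict is a stand-in for a database with a chemistry cartridge;
# exact_search here compares raw SMILES, whereas a cartridge would
# canonicalize structures first.
class MoleculeRepository:
    def __init__(self):
        self._molecules = {}   # id -> SMILES string

    def store(self, mol_id, smiles):
        self._molecules[mol_id] = smiles

    def exact_search(self, smiles):
        """Exact-match lookup by structure."""
        return [i for i, s in self._molecules.items() if s == smiles]

    def paged_search(self, predicate, page=0, page_size=10):
        """Paged results, mirroring how the framework pages structure searches."""
        hits = sorted(i for i, s in self._molecules.items() if predicate(s))
        start = page * page_size
        return hits[start:start + page_size]

repo = MoleculeRepository()
repo.store(1, "CCO")        # ethanol
repo.store(2, "c1ccccc1")   # benzene
repo.store(3, "CCO")
print(repo.exact_search("CCO"))  # -> [1, 3]
```

Callers never touch SQL or cartridge syntax, which is the development-time saving the abstract claims; paging keeps large result sets cheap to return.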
Results
Molecule Database Framework is written in Java; I created it by integrating existing free and open-source tools and frameworks. The core functionality includes:
• Support for multi-component compounds (mixtures)
• Import and export of SD-files
• Optional security (authorization)
For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security.
Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to establish basic performance expectations for chemical structure searches and for the import and export of SD-files.
Conclusions
Using a simple web application, it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export into simple method calls. The framework offers good search performance on a standard laptop without any database tuning, partly because chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on Bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework.
doi:10.1186/1758-2946-5-48
PMCID: PMC3892073  PMID: 24325762
Chemical structure search; Database; Framework; Open-source
