Single molecule tracking (SMT) analysis of fluorescently tagged lipid and protein probes is an attractive alternative to ensemble averaged methods such as fluorescence correlation spectroscopy (FCS) or fluorescence recovery after photobleaching (FRAP) for measuring diffusion in artificial and plasma membranes. The meaningful estimation of diffusion coefficients and their errors is however not straightforward, and is heavily dependent on sample type, acquisition method, and equipment used. Many approaches require advanced computing and programming skills for their implementation.
Here we present TrackArt software, an accessible graphic interface for simulation and complex analysis of multiple particle paths. Imported trajectories can be filtered to eliminate spurious or corrupted tracks, and are then analyzed using several previously described methodologies, to yield single or multiple diffusion coefficients, their population fractions, and estimated errors. We use TrackArt to analyze the single-molecule diffusion behavior of the sphingolipid analog SM-Atto647N in mica-supported DOPC (1,2-dioleoyl-sn-glycero-3-phosphocholine) bilayers. Fitting with a two-component diffusion model confirms the existence of two separate populations of diffusing particles in these bilayers on mica. As a demonstration of the TrackArt workflow, we characterize and discuss the effective activation energies required to increase the diffusion rates of these populations, obtained from Arrhenius plots of temperature-dependent diffusion. Finally, TrackArt provides a simulation module, allowing the user to generate models with multiple particle trajectories, diffusing with different characteristics. Maps of domains, acting as impermeable or permeable obstacles for particles diffusing with given rate constants and diffusion coefficients, can be simulated or imported from an image. Importantly, this allows one to use simulated data with a known diffusion behavior as a comparison for results acquired using particular algorithms on actual, “natural” samples whose diffusion behavior is to be extracted. It can also serve as a tool for demonstrating diffusion principles.
TrackArt is an open source, platform-independent, Matlab-based graphical user interface, and is easy to use even for those unfamiliar with the Matlab programming environment. TrackArt can be used for accurate simulation and analysis of complex diffusion data, such as diffusion in lipid bilayers, providing publication-quality formatted results.
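TrackArt itself is implemented in Matlab; purely as an illustration of the underlying MSD approach (not TrackArt's own code), a minimal Python/NumPy sketch of estimating a single diffusion coefficient from a simulated 2D track might look like this:

```python
import numpy as np

def msd(track, max_lag):
    """Time-averaged mean squared displacement of an (N, 2) trajectory."""
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def fit_diffusion(track, dt, max_lag=4):
    """Estimate D from MSD(t) = 4*D*t (free 2D diffusion), using only short
    lags, where the MSD estimate is most reliable."""
    lags = np.arange(1, max_lag + 1) * dt
    m = msd(track, max_lag)
    # least-squares slope through the origin: D = sum(t*MSD) / (4*sum(t^2))
    return np.sum(lags * m) / (4.0 * np.sum(lags ** 2))

# demo on a simulated 2D Brownian track with known D (arbitrary units)
rng = np.random.default_rng(0)
D_true, dt = 0.5, 0.01
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(5000, 2))
D_est = fit_diffusion(np.cumsum(steps, axis=0), dt)
```

For free 2D diffusion the MSD grows as 4Dt, so the short-lag MSD slope recovers D; the two-component analysis described above would extend this with a mixture model rather than a single slope.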
Fluorescence; Single molecule tracking; Diffusion; Lipid bilayers; Total internal reflection; Microscopy; Mica; MSD
Graphical user interface (GUI) software promotes novelty by allowing users to extend the functionality. SVM Classifier is a cross-platform graphical application that handles very large datasets well. The purpose of this study is to create a GUI application that allows SVM users to perform SVM training, classification and prediction.
The GUI provides user-friendly access to state-of-the-art SVM methods embodied in the LIBSVM implementation of the Support Vector Machine. We implemented the Java interface using the standard Swing libraries.
We used sample data from a breast cancer study to test classification accuracy. We achieved 100% accuracy in classifying the BRCA1–BRCA2 samples with the RBF kernel of the SVM.
We have developed a Java GUI application that allows SVM users to perform SVM training, classification and prediction. We have demonstrated that support vector machines can accurately classify genes into functional categories based upon expression data from DNA microarray hybridization experiments. Among the different kernel functions that we examined, the SVM that uses a radial basis kernel function provides the best performance.
The SVM Classifier is available at .
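For readers who want to see the RBF-kernel workflow in code, the following hedged sketch uses Python's scikit-learn (whose SVC class also wraps LIBSVM) on synthetic two-class data; the BRCA1/BRCA2 microarray data and the authors' Java application are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# toy two-class data standing in for expression profiles
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 20)),    # class 0
               rng.normal(1.5, 1.0, (40, 20))])   # class 1, shifted means
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")     # RBF kernel, as in the study
acc = cross_val_score(clf, X, y, cv=5).mean()     # cross-validated accuracy
```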
R is the leading open source statistics software with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required.
We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) – tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated in runtime from defined GUI tags that are embedded into the R script. User-GUI input is returned to the R code and replaces the XML-tags. RGG files can be developed using any text editor. The current version of RGG is available as a stand-alone software (RGGRunner) and as a plug-in for JGR.
RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract the GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated in any software. The RGG project further includes the development of a web-based repository for RGG-GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely at
Proteolytic 18O-labeling has been widely used in quantitative proteomics since it can uniformly label all peptides from different kinds of proteins. Multiple algorithms and tools have been developed over the last few years to analyze high-resolution proteolytic 16O/18O labeled mass spectra. We have developed a software package, O18Quant, which addresses two major issues in the previously developed algorithms. First, O18Quant uses a robust linear model (RLM) for peptide-to-protein ratio estimation. The RLM minimizes the effect of outliers instead of iteratively removing them, which is a common practice in other approaches. Second, the existing algorithms lack practical, user-friendly implementations. We address this by implementing O18Quant in C# under the Microsoft .NET Framework and in R. O18Quant automatically calculates the peptide/protein relative ratio and provides a friendly graphical user interface (GUI) which allows the user to manually validate the quantification results at the scan, peptide, and protein levels. The intuitive GUI of O18Quant can greatly enhance the user's visualization and understanding of the data analysis. O18Quant can be downloaded for free as part of the software suite ProteomicsTools.
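As a sketch of the robust-estimation idea (not O18Quant's actual C#/R code), a Huber M-estimator computed by iteratively reweighted least squares down-weights outlying peptide ratios rather than discarding them:

```python
import numpy as np

def huber_estimate(x, k=1.345, tol=1e-6, max_iter=100):
    """Robust location estimate via iteratively reweighted least squares with
    Huber weights: outliers are down-weighted instead of removed. The scale is
    fixed at the MAD for simplicity."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))   # MAD, scaled for normal data
    if scale == 0.0:
        scale = 1.0
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))  # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# peptide-level 16O/18O ratios for one protein, with one clear outlier
ratios = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 4.0])
protein_ratio = huber_estimate(ratios)
```

The robust estimate stays near 1.0 while the plain mean (1.5) is pulled up by the outlier.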
This Technical Note describes a novel modular framework for development and interlaboratory distribution and validation of 3D tractography algorithms based on in vivo diffusion tensor imaging (DTI) measurements. The proposed framework allows individual MRI research centers to benefit from new tractography algorithms developed at other independent centers by “plugging” new tractography modules directly into their own custom DTI software tools, such as existing graphical user interfaces (GUI) for visualizing brain white matter pathways. The proposed framework is based on the Java 3D programming platform, which provides an object-oriented programming (OOP) model and independence of computer hardware configuration and operating system. To demonstrate the utility of the proposed approach, a complete GUI for interactive DTI tractography was developed, along with two separate and interchangeable modules that implement two different tractography algorithms. Although the application discussed here relates to DTI tractography, the programming concepts presented here should be of interest to anyone who wishes to develop platform-independent GUI applications for interactive 3D visualization.
Diffusion tensor imaging; white matter; tractography
DockoMatic is a free and open source application that unifies a suite of software programs within a user-friendly Graphical User Interface (GUI) to facilitate molecular docking experiments. Here we describe the release of DockoMatic 2.0; significant software advances include the ability to: (1) conduct high-throughput Inverse Virtual Screening (IVS); (2) construct 3D homology models; and (3) customize the user interface. Users can now efficiently set up, start, and manage IVS experiments through the DockoMatic GUI by specifying receptor(s), ligand(s), grid parameter file(s), and a docking engine (either AutoDock or AutoDock Vina). DockoMatic automatically generates the needed experiment input files and output directories, and allows the user to manage and monitor job progress. Upon job completion, a summary of results is generated by DockoMatic to facilitate interpretation by the user. DockoMatic functionality has also been expanded to facilitate the construction of 3D protein homology models using the Timely Integrated Modeler (TIM) wizard. The TIM wizard provides an interface that accesses the Basic Local Alignment Search Tool (BLAST) and MODELLER programs, and guides the user through the steps necessary to easily and efficiently create 3D homology models of biomacromolecular structures. The DockoMatic GUI can be customized by the user, and the software design makes it relatively easy to integrate additional docking engines, scoring functions, or third-party programs. DockoMatic is a free, comprehensive molecular docking program for all levels of scientists in both research and education.
Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of supporting an integrative representation of transport, transformation as well as biological information processing.
Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible architecture which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development.
Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as a Java software program and is available as open source from .
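Since Narrator can map models to Gillespie's direct method, a minimal illustration of that algorithm may be useful; this sketch handles a single decay reaction A → B, so the channel-selection draw of the full method is trivial, and the rate and population values are arbitrary.

```python
import numpy as np

def gillespie_decay(n0, k, t_end, rng):
    """Gillespie's direct method for the single reaction A -> B with rate
    constant k. With one reaction channel, only the exponential waiting-time
    draw remains; multi-channel models also draw which channel fires."""
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while n > 0 and t < t_end:
        a0 = k * n                       # total propensity
        t += rng.exponential(1.0 / a0)   # time to the next reaction event
        n -= 1                           # the (only) channel fires
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

rng = np.random.default_rng(2)
times, counts = gillespie_decay(n0=1000, k=1.0, t_end=1.0, rng=rng)
# the stochastic trajectory should track the ODE solution n0 * exp(-k*t)
```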
The main objective of the study was to analyze the structure of data contained in the archives exported from a tomotherapy treatment planning system. An additional aim was to create an application equipped with a user-friendly interface to enable automatic reading of files and data analysis, also using external algorithms. Analyses had to include image registration, dose deformation and summation.
Materials and methods
Files from the archive exported from the tomotherapy treatment planning system (TPS) were analyzed. Two programs were used to analyze the information contained in the archive files: XML Viewer by MindFusion Limited and the HxD hex editor by Maël Hörz. To create an application enabling the data to be loaded and analyzed, Matlab by MathWorks, version R2009b, was used.
The archive exported from the TPS is a directory containing several files of three types: .xml, .img and .sin. Tools available in Matlab offer extensive capabilities for analyzing and transforming the loaded information. The proposed application automates the loading of the necessary information and simplifies data handling. Furthermore, the application is equipped with a graphical user interface (GUI). The main application window contains buttons for opening the archives and analyzing the loaded data.
The analysis of data contained in the archive exported from the tomotherapy treatment planning system allowed us to determine how and where the information of interest, such as tomography images, structure sets and dose distributions, is saved. This enabled us to develop and optimize methods for loading and analyzing this information.
Tomotherapy; Non-rigid image registration; Dose distribution; Matlab
Many biological laboratories that deal with genomic samples are facing the problem of sample tracking, both for pure laboratory management and for efficiency. Our laboratory exploits PCR techniques and Next Generation Sequencing (NGS) methods to perform high-throughput integration site monitoring in different clinical trials and scientific projects. Because of the huge number of samples that we process every year, which result in hundreds of millions of sequencing reads, we need to standardize data management and tracking systems, building up a scalable and flexible structure with web-based interfaces, usually called a Laboratory Information Management System (LIMS).
We started by collecting end-users' requirements, comprising the desired functionalities of the system and its Graphical User Interfaces (GUI), and then evaluated available tools that could address them, spanning from pure LIMS to Content Management Systems (CMS) up to enterprise information systems. Our analysis identified ADempiere ERP, an open source Enterprise Resource Planning (ERP) system written in Java J2EE, as the best fit; it also natively implements some highly desirable features, such as the high usability and modularity that grant use-case flexibility and software scalability for custom solutions.
We extended and customized ADempiere ERP to fulfil LIMS requirements and developed adLIMS. It has been validated by our end-users, who verified its functionalities and GUIs through test cases for PCR samples and pre-sequencing data, and it is currently in use in our laboratories. adLIMS implements authorization and authentication policies, allowing management of multiple users and definition of roles that grant specific permissions, operations and data views to each user. For example, adLIMS allows creating sample sheets from stored data using the available export operations. This simplicity and process standardization help avoid manual errors and information backtracking, features that are not granted when records are tracked in files or spreadsheets.
adLIMS aims to combine sample tracking and data reporting features with highly accessible and usable GUIs, saving time on repetitive laboratory tasks and reducing errors compared with manual data collection methods. Moreover, adLIMS implements automated data entry, exploiting sample data multiplexing and parallel/transactional processing. adLIMS is natively extensible to cope with laboratory automation through platform-dependent API interfaces, and could be extended to other genomic facilities thanks to its ERP functionalities.
LIMS; Open Source Software; Information Systems; ADempiere ERP; Sample Tracking
We present MultiElec, an open source MATLAB based application for data analysis of microelectrode array (MEA) recordings. MultiElec displays an extremely user-friendly graphic user interface (GUI) that allows the simultaneous display and analysis of voltage traces for 60 electrodes, and includes functions for activation-time determination and the production of activation-time heat maps with isoline display. Furthermore, local conduction velocities are semi-automatically calculated along with their corresponding vector plots. MultiElec allows ad hoc signal suppression, enabling the user to easily and efficiently handle signal artefacts and for incomplete data sets to be analysed. Voltage traces and heat maps can be simply exported for figure production and presentation. In addition, our platform is able to produce 3D videos of signal progression over all 60 electrodes. Functions are controlled entirely by a single GUI with no need for command line input or any understanding of MATLAB code. MultiElec is open source under the terms of the GNU General Public License as published by the Free Software Foundation, version 3. Both the program and source code are available to download from http://www.cancer.manchester.ac.uk/MultiElec/.
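The conduction-velocity computation can be sketched as follows: treating the activation-time map T(x, y) as a surface, the local speed is 1/|∇T| and the direction is along ∇T. The grid size, electrode pitch and wave speed below are illustrative values, not MultiElec defaults.

```python
import numpy as np

# synthetic activation-time map on an 8x8 electrode grid: a plane wave moving
# along x at 10 mm/s, electrode pitch 0.5 mm (illustrative values)
pitch = 0.5                                   # mm
x = np.arange(8) * pitch                      # electrode x-positions, mm
act_ms = np.tile(x / 10.0 * 1000.0, (8, 1))   # activation time T in ms

# local conduction velocity from the activation-time gradient: v = 1/|grad T|;
# the vector-plot direction would be (gx, gy) / |grad T|
gy, gx = np.gradient(act_ms, pitch)           # ms per mm, along y then x
speed_mm_per_s = 1000.0 / np.hypot(gx, gy)    # mm/s at every electrode
```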
RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high throughput sequencers.
We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity for parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module “miRNA identification” includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module “mRNA identification” includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module “Target screening” provides expression profiling analyses and graphic visualization. The module “Self-testing” offers directory setup, sample management, and a check for third-party package dependencies. Integration of other tools, including Bowtie, miRDeep2, and miRspring, extends the program’s functionality.
eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory.
RNA sequencing; Bioinformatics tool; Graphic user interface; Parallel processing
Biophysicists use single particle tracking (SPT) methods to probe the dynamic behavior of individual proteins and lipids in cell membranes. The mean squared displacement (MSD) has proven to be a powerful tool for analyzing the data and drawing conclusions about membrane organization, including features like lipid rafts, protein islands, and confinement zones defined by cytoskeletal barriers. Here, we implement time series analysis as a new analytic tool to analyze further the motion of membrane proteins. The experimental data track the motion of 40 nm gold particles bound to Class I major histocompatibility complex (MHCI) molecules on the membranes of mouse hepatoma cells.
Our first novel result is that the tracks are significantly autocorrelated. Because of this, we developed linear autoregressive models to elucidate the autocorrelations. Estimates of the signal to noise ratio for the models show that the autocorrelated part of the motion is significant. Next, we fit the probability distributions of jump sizes with four different models. The first model is a general Weibull distribution that shows that the motion is characterized by an excess of short jumps as compared to a normal random walk. We also fit the data with a chi distribution which provides a natural estimate of the dimension d of the space in which a random walk is occurring. For the biological data, the estimates satisfy 1 < d < 2, implying that particle motion is not confined to a line, but also does not occur freely in the plane. The dimension gives a quantitative estimate of the amount of nanometer scale obstruction met by a diffusing molecule. We introduce a new distribution and use the generalized extreme value distribution to show that the biological data also have an excess of long jumps as compared to normal diffusion. These fits provide novel estimates of the microscopic diffusion constant.
Previous MSD analyses of SPT data have provided evidence for nanometer-scale confinement zones that restrict lateral diffusion, supporting the notion that plasma membrane organization is highly structured. Our demonstration that membrane protein motion is autocorrelated and is characterized by an excess of both short and long jumps reinforces the concept that the membrane environment is heterogeneous and dynamic. Autocorrelation analysis and modeling of the jump distributions are powerful new techniques for the analysis of SPT data and the development of more refined models of membrane organization.
The time series analysis also provides several methods of estimating the diffusion constant, in addition to the estimate provided by the mean squared displacement. The mean squared displacement for most of the biological data shows a power-law behavior rather than the linear behavior of Brownian motion. In this case, we introduce the notion of an instantaneous diffusion constant. All of the diffusion constants show strong consistency for most of the biological data.
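Two of the quantities discussed above, the jump autocorrelation and an instantaneous diffusion constant obtained from a power-law MSD fit, can be sketched in NumPy. This is an illustration rather than the authors' analysis code; the demo track is ordinary Brownian motion, for which the autocorrelation should vanish and the exponent should be near 1.

```python
import numpy as np

def jump_autocorrelation(track, lag=1):
    """Normalized autocorrelation of successive displacement vectors; it is
    near zero for an ideal random walk and nonzero for autocorrelated motion."""
    jumps = np.diff(track, axis=0)
    return np.sum(jumps[lag:] * jumps[:-lag]) / np.sum(jumps * jumps)

def instantaneous_D(msd, dt):
    """For MSD(t) ~ 4*Gamma*t^alpha (2D anomalous diffusion), fit alpha and
    Gamma on log-log axes and return D(t) = MSD'(t)/4 = Gamma*alpha*t^(alpha-1)."""
    t = np.arange(1, len(msd) + 1) * dt
    alpha, log4G = np.polyfit(np.log(t), np.log(msd), 1)
    gamma = np.exp(log4G) / 4.0
    return alpha, gamma * alpha * t ** (alpha - 1.0)

rng = np.random.default_rng(3)
track = np.cumsum(rng.normal(0.0, 0.1, (20000, 2)), axis=0)
rho = jump_autocorrelation(track)
msd = np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1))
                for l in range(1, 11)])
alpha, D_t = instantaneous_D(msd, dt=1.0)
```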
Time series analysis; Single particle tracking; Cell membrane; Mean squared displacement
In the aftermath of the London ‘7/7’ attacks in 2005, UK government agencies required the development of a quick-running tool to predict the weapon and injury effects caused by the initiation of a person borne improvised explosive device (PBIED) within crowded metropolitan environments. This prediction tool, termed the HIP (human injury predictor) code, was intended to:
— assist the security services to encourage favourable crowd distributions and densities within scenarios of ‘sensitivity’;
— provide guidance to security engineers concerning the most effective location for protection systems;
— inform rescue services as to where, in the case of such an event, individuals with particular injuries will be located;
— assist in training medical personnel concerning the scope and types of injuries that would be sustained as a consequence of a particular attack;
— assist response planners in determining the types of medical specialists (burns, traumatic amputations, lungs, etc.) required and thus identify the appropriate hospitals to receive the various casualty types.

This document describes the algorithms used in the development of this tool, together with the pertinent underpinning physical processes. From its rudimentary beginnings as a simple spreadsheet, the HIP code now has a graphical user interface (GUI) that allows three-dimensional visualization of results and intuitive scenario set-up. The code is underpinned by algorithms that predict the pressure and momentum outputs produced by PBIEDs within open and confined environments, as well as the trajectories of shrapnel deliberately placed within the device to increase injurious effects. Further logic has been implemented to transpose these weapon effects into forms of human injury depending on where individuals are located relative to the PBIED. Each crowd member is subdivided into representative body parts, each of which is assigned an abbreviated injury score after a particular calculation cycle. The injury levels of each affected body part are then summated and a triage state assigned for each individual crowd member based on the criteria specified within the ‘injury scoring system’.
To attain a comprehensive picture of a particular event, it is important that a number of simulations, using what is substantively the same scenario, are undertaken with natural variation being applied to the crowd distributions and the PBIED output. Accurate mathematical representation of such complex phenomena is challenging, particularly as the code must be quick-running to be of use to the stakeholder community. In addition to discussing the background and motivation for the algorithm and GUI development, this document also discusses the steps taken to validate the tool and the plans for further functionality implementation.
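The score-and-triage logic described above can be caricatured in a few lines. The severity values, the squared-score summation (borrowed from Injury Severity Score conventions) and the triage thresholds below are invented placeholders, not the HIP code's actual 'injury scoring system'.

```python
# invented triage bands: (minimum total score, triage state)
TRIAGE_BANDS = [(50, "T1 immediate"), (15, "T2 urgent"),
                (1, "T3 delayed"), (0, "uninjured")]

def triage(body_part_scores):
    """Sum per-body-part severity contributions (squared AIS-style scores,
    as in ISS-type schemes) and map the total to a triage state."""
    total = sum(s * s for s in body_part_scores.values())
    for threshold, state in TRIAGE_BANDS:
        if total >= threshold:
            return total, state

person = {"head": 3, "thorax": 4, "left_leg": 2}   # AIS-style score per part
score, state = triage(person)
```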
quick-running; prediction; human injury; person borne; improvised explosive device (PBIED); crowded metropolitan environment
Gastrointestinal contractions are controlled by an underlying bioelectrical activity. High-resolution spatiotemporal electrical mapping has become an important advance for investigating gastrointestinal electrical behaviors in health and motility disorders. However, research progress has been constrained by the low efficiency of the data analysis tasks. This work introduces a new efficient software package: GEMS (Gastrointestinal Electrical Mapping Suite), for analyzing and visualizing high-resolution multi-electrode gastrointestinal mapping data in spatiotemporal detail.
GEMS incorporates a number of new and previously validated automated analytical and visualization methods into a coherent framework coupled to an intuitive and user-friendly graphical user interface. GEMS is implemented using MATLAB®, which combines sophisticated mathematical operations and GUI compatibility. Recorded slow wave data can be filtered via a range of inbuilt techniques, efficiently analyzed via automated event-detection and cycle clustering algorithms, and high quality isochronal activation maps, velocity field maps, amplitude maps, frequency (time interval) maps and data animations can be rapidly generated. Normal and dysrhythmic activities can be analyzed, including initiation and conduction abnormalities. The software is distributed free to academics via a community user website and forum (http://sites.google.com/site/gimappingsuite).
This software allows for the rapid analysis and generation of critical results from gastrointestinal high-resolution electrical mapping data, including quantitative analysis and graphical outputs for qualitative analysis. The software is designed to be used by non-experts in data and signal processing, and is intended to be used by clinical researchers as well as physiologists and bioengineers. The use and distribution of this software package will greatly accelerate efforts to improve the understanding of the causes and clinical consequences of gastrointestinal electrical disorders, through high-resolution electrical mapping.
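A toy version of automated event detection with a refractory period, standing in for the validated GEMS detectors (the synthetic signal, threshold and refractory value are all invented for illustration), shows the kind of step the suite automates:

```python
import numpy as np

def detect_events(signal, fs, threshold, refractory_s=2.0):
    """Mark activation times as negative-going threshold crossings of the
    first derivative, with a refractory period to avoid double counting."""
    deriv = np.gradient(signal) * fs                     # units per second
    below = deriv < -threshold
    crossings = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    events, last = [], -np.inf
    for idx in crossings:
        if (idx - last) / fs >= refractory_s:
            events.append(idx)
            last = idx
    return np.array(events) / fs                         # event times, s

# synthetic recording: one sharp downstroke every 20 s (3 cycles per minute)
fs, dur = 30.0, 60.0
t = np.arange(0.0, dur, 1.0 / fs)
signal = -np.exp(-((t % 20.0) - 10.0) ** 2 / 0.1)        # troughs at 10, 30, 50 s
events = detect_events(signal, fs, threshold=1.0)
```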
Slow wave; Spike; Signal processing; Electrophysiology; Motility; Tachygastria
Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named “Pipeline for Analyzing braiN Diffusion imAges” (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
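The per-subject parallelism can be sketched with Python's concurrent.futures; the subject IDs and metric values below are mock placeholders, and real CPU-bound dMRI processing would use a process pool or a cluster scheduler rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def process_subject(subject_id):
    """Stand-in for one subject's dMRI pipeline (preprocessing, tensor fit,
    FA/MD maps); here it simply returns mock metrics."""
    return subject_id, {"FA": 0.45, "MD": 0.0008}   # mock values

def process_all(subject_ids, max_workers=4):
    """Process independent subjects in parallel, PANDA-style. Swap in a
    ProcessPoolExecutor (or a distributed scheduler) for real CPU-bound work."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(process_subject, subject_ids))

results = process_all([f"sub-{i:02d}" for i in range(8)])
```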
PANDA; diffusion MRI; DTI; pipeline; diffusion metrics; structural connectivity; network; connectome
Recent evidence suggests that DNA methylation changes may underlie numerous complex traits and diseases. The advent of commercial, array-based methods to interrogate DNA methylation has led to a profusion of epigenetic studies in the literature. Array-based methods, such as the popular Illumina GoldenGate and Infinium platforms, estimate the proportion of DNA methylated at single-base resolution for thousands of CpG sites across the genome. These arrays generate enormous amounts of data, but few software resources exist for efficient and flexible analysis of these data. We developed a software package called MethLAB (http://genetics.emory.edu/conneely/MethLAB) using R, an open source statistical language that can be edited to suit the needs of the user. MethLAB features a graphical user interface (GUI) with a menu-driven format designed to efficiently read in and manipulate array-based methylation data in a user-friendly manner. MethLAB tests for association between methylation and relevant phenotypes by fitting a separate linear model for each CpG site. These models can incorporate both continuous and categorical phenotypes and covariates, as well as fixed or random batch or chip effects. MethLAB accounts for multiple testing by controlling the false discovery rate (FDR) at a user-specified level. Standard output includes a spreadsheet-ready text file and an array of publication-quality figures. Considering the growing interest in and availability of DNA methylation data, there is a great need for user-friendly open source analytical tools. With MethLAB, we present a timely resource that will allow users with no programming experience to implement flexible and powerful analyses of DNA methylation data.
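The multiple-testing step can be illustrated independently of R: the Benjamini-Hochberg procedure that controls the FDR at a user-specified level takes only a few lines of NumPy. The p-values below are illustrative stand-ins for per-CpG linear-model results, not MethLAB output.

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of tests that are
    significant at false discovery rate q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # q*k/m for rank k
    passed = p[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                        # all ranks up to the largest pass
    return mask

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
sig = bh_fdr(pvals, q=0.05)
```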
DNA methylation; software; genome-wide; microarrays; Infinium 450K array
Public databases such as the NCBI Gene Expression Omnibus contain extensive and exponentially increasing amounts of high-throughput data that can be applied to molecular phenotype characterization. Collectively, these data can be analyzed for such purposes as disease diagnosis or phenotype classification. One family of algorithms that has proven useful for disease classification is based on relative expression analysis and includes the Top-Scoring Pair (TSP), k-Top-Scoring Pairs (k-TSP), Top-Scoring Triplet (TST) and Differential Rank Conservation (DIRAC) algorithms. These relative expression analysis algorithms hold significant advantages for identifying interpretable molecular signatures for disease classification, and have been implemented previously on a variety of computational platforms with varying degrees of usability. To increase the user-base and maximize the utility of these methods, we developed the program AUREA (Adaptive Unified Relative Expression Analyzer)—a cross-platform tool that has a consistent application programming interface (API), an easy-to-use graphical user interface (GUI), fast running times and automated parameter discovery.
Herein, we describe AUREA, an efficient, cohesive, and user-friendly open-source software system that comprises a suite of methods for relative expression analysis. AUREA incorporates existing methods, while extending their capabilities and bringing uniformity to their interfaces. We demonstrate that combining these algorithms and adaptively tuning parameters on the training sets makes these algorithms more consistent in their performance and demonstrate the effectiveness of our adaptive parameter tuner by comparing accuracy across diverse datasets.
We have integrated several relative expression analysis algorithms and provided a unified interface for their implementation while making data acquisition, parameter fixing, data merging, and results analysis ‘point-and-click’ simple. The unified interface and the adaptive parameter tuning of AUREA provide an effective framework in which to investigate the massive amounts of publicly available data by both ‘in silico’ and ‘bench’ scientists. AUREA can be found at http://price.systemsbiology.net/AUREA/.
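For readers unfamiliar with relative expression analysis, the core of the Top-Scoring Pair (TSP) algorithm named above fits in a few lines: find the gene pair whose relative ordering best separates the two classes. This is an illustrative Python reimplementation, not AUREA code, and the function name is hypothetical:

```python
import numpy as np
from itertools import combinations

def top_scoring_pair(expr, labels):
    """Return the gene pair (i, j) whose relative ordering X_i < X_j best
    separates two classes, i.e. maximizes
    |P(X_i < X_j | class 0) - P(X_i < X_j | class 1)|.
    `expr` is (n_genes, n_samples); `labels` is a 0/1 vector."""
    labels = np.asarray(labels)
    best, best_pair = -1.0, None
    for i, j in combinations(range(expr.shape[0]), 2):
        less = expr[i] < expr[j]                 # within-sample rank comparison
        p0 = less[labels == 0].mean()
        p1 = less[labels == 1].mean()
        score = abs(p0 - p1)
        if score > best:
            best, best_pair = score, (i, j)
    return best_pair, best
```

Because the classifier depends only on the within-sample ordering of two genes, the resulting signature is interpretable and robust to monotone normalization differences between platforms, which is a large part of the appeal of this algorithm family.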
DCE@urLAB is a software application for the analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data. The tool incorporates a friendly graphical user interface (GUI) to interactively select and analyze a region of interest (ROI) within the image set, taking into account the tissue concentration of the contrast agent (CA) and its effect on pixel intensity.
Pixel-wise model-based quantitative parameters are estimated by fitting DCE-MRI data to several pharmacokinetic models using the Levenberg-Marquardt algorithm (LMA). DCE@urLAB also includes the semi-quantitative parametric and heuristic analysis approaches commonly used in practice. This software application has been programmed in the Interactive Data Language (IDL) and tested both with publicly available simulated data and preclinical studies from tumor-bearing mouse brains.
A user-friendly solution for applying pharmacokinetic and non-quantitative analysis to DCE-MRI data in preclinical studies has been implemented and tested. The proposed tool has been specially designed for easy selection of multi-pixel ROIs. A public release of DCE@urLAB, together with the open source code and sample datasets, is available at http://www.die.upm.es/im/archives/DCEurLAB/.
DCE-MRI; Imaging; Levenberg-Marquardt; Fitting; Preclinical; Pharmacokinetics; Animal models; High field MR; IDL
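The pharmacokinetic fitting step can be illustrated with a toy example: fitting the standard Tofts model to a simulated tissue concentration curve with a Levenberg-Marquardt optimizer (here SciPy's `curve_fit`, which defaults to LM for unbounded problems; DCE@urLAB itself is written in IDL). The arterial input function and parameter values below are invented purely for the demonstration:

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts(t, ktrans, ve, cp):
    """Standard Tofts model: tissue CA concentration as the convolution of
    the arterial input function cp(t) with Ktrans * exp(-(Ktrans/ve) * t)."""
    dt = t[1] - t[0]
    irf = ktrans * np.exp(-(ktrans / ve) * t)     # impulse response
    return np.convolve(cp, irf)[: len(t)] * dt    # causal discrete convolution

# Simulated acquisition: toy bi-exponential AIF, known ground-truth parameters.
t = np.arange(0, 5, 0.05)                          # time in minutes
cp = 5.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))   # arterial input function
truth = (0.25, 0.4)                                # Ktrans [1/min], ve
ct = tofts(t, *truth, cp)

# Levenberg-Marquardt fit of (Ktrans, ve) to the concentration curve.
popt, _ = curve_fit(lambda t, k, v: tofts(t, k, v, cp), t, ct, p0=(0.2, 0.3))
```

In the pixel-wise mode described in the abstract, a fit like this is repeated for every pixel of the ROI, producing parametric maps of Ktrans and ve.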
Motivation: Scientists and regulators are often faced with complex decisions, where use of scarce resources must be prioritized using collections of diverse information. The Toxicological Prioritization Index (ToxPi™) was developed to enable integration of multiple sources of evidence on exposure and/or safety, transformed into transparent visual rankings to facilitate decision making. The rankings and associated graphical profiles can be used to prioritize resources in various decision contexts, such as testing chemical toxicity or assessing similarity of predicted compound bioactivity profiles. The amount and types of information available to decision makers are increasing exponentially, while the complex decisions must rely on specialized domain knowledge across multiple criteria of varying importance. Thus, the ToxPi bridges a gap, combining rigorous aggregation of evidence with ease of communication to stakeholders.
Results: An interactive ToxPi graphical user interface (GUI) application has been implemented to allow straightforward decision support across a variety of decision-making contexts in environmental health. The GUI allows users to easily import and recombine data, then analyze, visualize, highlight, export and communicate ToxPi results. It also provides a statistical metric of stability for both individual ToxPi scores and relative prioritized ranks.
Availability: The ToxPi GUI application, complete user manual and example data files are freely available from http://comptox.unc.edu/toxpi.php.
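The aggregation underlying a ToxPi score is, at its core, a weighted combination of evidence "slices", each scaled to a common range so chemicals can be ranked. A minimal Python sketch (not the ToxPi GUI's implementation; the function name is hypothetical):

```python
import numpy as np

def toxpi_scores(data, weights):
    """Toy ToxPi-style aggregation: scale each evidence column ("slice")
    to [0, 1] across chemicals, then combine slices as a weighted average
    so chemicals can be ranked. `data` is (n_chemicals, n_slices)."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    scaled = (data - lo) / np.where(hi > lo, hi - lo, 1.0)  # unit-scale slices
    w = np.asarray(weights, float)
    return scaled @ (w / w.sum())                           # weighted average
```

The slice weights encode the relative importance that domain experts assign to each evidence source, which is exactly the kind of specialized knowledge the abstract notes decision makers must bring to bear.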
We introduce SimTB, a MATLAB toolbox designed to simulate functional magnetic resonance imaging (fMRI) datasets under a model of spatiotemporal separability. The toolbox meets the increasing need of the fMRI community to more comprehensively understand the effects of complex processing strategies by providing a ground truth that estimation methods may be compared against. SimTB captures the fundamental structure of real data, but data generation is fully parameterized and fully controlled by the user, allowing for accurate and precise comparisons. The toolbox offers a wealth of options regarding the number and configuration of spatial sources, implementation of experimental paradigms, inclusion of tissue-specific properties, addition of noise and head movement, and much more. A straightforward data generation method and short computation time (3–10 seconds for each dataset) allow a practitioner to simulate and analyze many datasets to potentially understand a problem from many angles. Beginning MATLAB users can use the SimTB graphical user interface (GUI) to design and execute simulations while experienced users can write batch scripts to automate and customize this process. The toolbox is freely available at http://mialab.mrn.org/software together with sample scripts and tutorials.
simulation; fMRI; group analysis
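The spatiotemporal separability model at the heart of SimTB can be stated compactly: each simulated dataset is a sum, over sources, of the outer product of a time course (TC) and a spatial map (SM), plus noise. A minimal Python sketch (SimTB itself is a MATLAB toolbox; all sizes and amplitudes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n_t, n_vox, n_src = 150, 1000, 3            # time points, voxels, sources

# Under spatiotemporal separability, data(t, v) = sum_k TC_k(t) * SM_k(v),
# so the noiseless dataset is a rank-n_src matrix.
tc = rng.normal(size=(n_t, n_src))           # source time courses
sm = rng.normal(size=(n_src, n_vox))         # source spatial maps
data = tc @ sm + 0.1 * rng.normal(size=(n_t, n_vox))  # add scanner-like noise
```

Because the ground-truth time courses and maps are known exactly, decompositions estimated from `data` (e.g., by ICA) can be scored against them, which is the comparison strategy the abstract describes.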
Random-sequence peptide libraries are a commonly used tool to identify novel ligands for binding antibodies, other proteins, and small molecules. It is often of interest to compare the selected peptide sequences to the natural protein binding partners to infer the exact binding site or the importance of particular residues. The ability to search a set of sequences for similarity to a set of peptides may sometimes enable the prediction of an antibody epitope or a novel binding partner. We have developed a software application designed specifically for this task.
GuiTope provides a graphical user interface for aligning peptide sequences to protein sequences. All alignment parameters are accessible to the user, including the ability to specify the amino acid frequencies in the peptide library; these frequencies often differ significantly from those assumed by popular alignment programs. It also includes a novel feature to align di-peptide inversions, which we have found improves the accuracy of antibody epitope prediction from peptide microarray data and shows utility in analyzing phage display datasets. Finally, GuiTope can randomly select peptides from a given library to estimate a null distribution of scores and calculate statistical significance.
GuiTope provides a convenient method for comparing selected peptide sequences to protein sequences, including flexible alignment parameters, novel alignment features, the ability to search a database, and assessment of the statistical significance of results. The software is available as an executable (for PC) at http://www.immunosignature.com/software; ongoing updates and source code will be available at sourceforge.net.
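The significance-estimation step, drawing random peptides with the library's amino-acid frequencies and scoring them against the protein to build a null distribution, can be sketched as follows. This is an illustrative Python toy that uses identity scoring in place of a real substitution matrix; the function names are hypothetical and not GuiTope's:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def best_window_score(peptide, protein):
    """Best ungapped alignment score of `peptide` against `protein`,
    using simple identity counting as a toy stand-in for a scoring matrix."""
    L = len(peptide)
    return max(sum(a == b for a, b in zip(peptide, protein[i:i + L]))
               for i in range(len(protein) - L + 1))

def empirical_pvalue(score, protein, length, freqs, n=2000, seed=0):
    """Estimate significance as described for GuiTope's null model: draw
    random peptides with the library's amino-acid frequencies, score them
    against the protein, and report the fraction scoring >= `score`
    (with a +1 pseudocount so the estimate is never exactly zero)."""
    rng = np.random.default_rng(seed)
    null = [best_window_score("".join(rng.choice(list(AA), length, p=freqs)),
                              protein)
            for _ in range(n)]
    return (np.sum(np.array(null) >= score) + 1) / (n + 1)
```

Supplying library-specific `freqs` rather than assuming uniform or background frequencies is the point the abstract emphasizes: a biased library makes some matches far more likely by chance, and the null distribution must reflect that.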
In inter-subject correlation (ISC) based analysis of functional magnetic resonance imaging (fMRI) data, the extent of shared processing across subjects during the experiment is determined by calculating correlation coefficients between the fMRI time series of the subjects in corresponding brain locations. This implies that ISC can be used to analyze fMRI data without explicitly modeling the stimulus, making it a promising method for analyzing fMRI data acquired under complex naturalistic stimuli. Despite the suitability of the ISC-based approach for analyzing complex fMRI data, no generic software tools have been made available for this purpose, limiting widespread use of ISC-based analysis techniques in the neuroimaging community. In this paper, we present a graphical user interface (GUI) based software package, ISC Toolbox, implemented in Matlab for computing various ISC-based analyses. Many advanced computations, such as comparison of ISCs between different stimuli, time-window ISC, and inter-subject phase synchronization, are supported by the toolbox. The analyses are coupled with re-sampling based statistical inference. ISC-based analyses are data and computation intensive, so the ISC Toolbox is equipped with mechanisms to execute parallel computations in a cluster environment automatically, including automatic detection of the cluster environment in use. Currently, SGE-based (Oracle Grid Engine, Son of Grid Engine, or Open Grid Scheduler) and Slurm environments are supported. We give a detailed account of the methods behind the ISC Toolbox and their implementation, and demonstrate possible uses of the toolbox by summarizing selected example applications. We also report computation time experiments using both a single desktop computer and two grid environments, demonstrating that parallelization effectively reduces computing time. The ISC Toolbox is available at https://code.google.com/p/isc-toolbox/
functional magnetic resonance imaging; naturalistic stimulus; re-sampling test; Matlab; grid-computing; GUI
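The core ISC computation, the mean pairwise Pearson correlation between subjects' time series at a given brain location, can be written in a few lines of Python (the toolbox itself is Matlab; the function name is hypothetical):

```python
import numpy as np
from itertools import combinations

def isc(ts):
    """Inter-subject correlation for one brain location: the mean Pearson
    correlation over all subject pairs. `ts` is (n_subjects, n_timepoints)."""
    pairs = [np.corrcoef(ts[i], ts[j])[0, 1]
             for i, j in combinations(range(ts.shape[0]), 2)]
    return float(np.mean(pairs))
```

Repeating this at every voxel yields an ISC map; because no stimulus model enters the computation, the approach works for naturalistic stimuli (films, spoken narratives) where a design matrix cannot be specified.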
The Windows 95/NT operating systems (Microsoft Corp, Redmond, WA) currently provide the only low-cost, truly preemptive multitasking environment and as such have become an attractive diagnostic workstation platform. The purpose of this project is to test and optimize display station graphical user interface (GUI) actions previously designed on the pseudo-multitasking Macintosh (Apple Computer, Cupertino, CA) platform, and image data transmission, using the time-slicing/dynamic priority assignment capabilities of the new Windows platform. A diagnostic workstation in the clinical environment must process two categories of events: user interaction with the GUI through keyboard/mouse input, and transmission of incoming data files. These processes contend for central processing unit (CPU) time, resulting in GUI “lockout” during image transmission or in transmission being delayed until GUI “quiet time.” WinSockets and the Transmission Control Protocol/Internet Protocol (TCP/IP) communication software (Microsoft) are implemented using dynamic priority time slicing to ensure that GUI delays at the time of Digital Imaging and Communications in Medicine (DICOM) file transfer do not exceed 1/10 second. Assignment of thread priority does not translate into an absolute fixed percentage of CPU time. Therefore, the relationship between dynamic priority assignment by the processor and the GUI and communication application threads will be investigated more fully to optimize CPU resource allocation. These issues will be tested using 10-Mb/s Ethernet and 100-Mb/s Fast Ethernet transmission. Preliminary results with typical clinical files (10 to 30 MB) over Ethernet show no visually perceptible interruption of the GUI, suggesting that the new Windows PC platform may be a viable diagnostic workstation option.
preemptive multitasking; diagnostic workstation; optimization; CPU resources
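The contention problem described above, a bulk-transfer thread starving the GUI thread, can be illustrated with a small, platform-independent sketch in which the sender transmits in chunks and yields the CPU between them so the consumer thread is serviced regularly. This Python toy is a stand-in for the Windows dynamic-priority mechanism, not the workstation's actual code; the chunk size and pause are arbitrary:

```python
import queue
import threading
import time

CHUNK = 64 * 1024          # bytes sent per timeslice-friendly write

def send_file(data, out_queue, pause=0.001):
    """Simulated DICOM transfer thread: the file is sent in small chunks,
    sleeping briefly between chunks so a GUI thread is never locked out
    for long stretches (toy analogue of dynamic priority time slicing)."""
    for i in range(0, len(data), CHUNK):
        out_queue.put(data[i:i + CHUNK])
        time.sleep(pause)              # yield the timeslice to the GUI thread
    out_queue.put(None)                # end-of-transmission marker

q = queue.Queue()
payload = b"\0" * (1 << 20)            # 1 MB stand-in for an image file
t = threading.Thread(target=send_file, args=(payload, q))
t.start()

received = bytearray()
while (chunk := q.get()) is not None:  # "GUI" thread draining chunks
    received.extend(chunk)
t.join()
```

The real mechanism in the paper relies on OS thread priorities rather than explicit sleeps, but the design goal is the same: bound the latency seen by the interactive thread rather than maximize raw transfer throughput.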
Next-generation sequencers (NGSs) have become one of the main tools of modern biology. To obtain useful insights from NGS data, it is essential to identify and remove low-quality portions of the data affected by technical errors, such as air bubbles in the sequencing fluidics.
We developed SUGAR (subtile-based GUI-assisted refiner), software that can handle ultra-high-throughput data through a user-friendly graphical user interface (GUI) with interactive analysis capability. SUGAR generates high-resolution quality heatmaps of the flowcell, enabling users to find possible signatures of technical errors that occurred during sequencing. Sequencing data generated from the error-affected regions of a flowcell can be selectively removed by automated analysis or by GUI-assisted operations implemented in SUGAR. We applied the automated data-cleaning function, which is based on sequence read quality (Phred) scores, to publicly available whole human genome sequencing data and show that overall mapping quality improved.
The detailed data evaluation and cleaning enabled by SUGAR reduce technical problems in sequence read mapping, improving subsequent variant analyses that require high-quality sequence data and mapping results. The software will therefore be especially useful for controlling the quality of variant calls for low-abundance cell populations, e.g., cancer cells, in samples affected by technical errors during sequencing.
Automated analysis; Data cleaning; Illumina HiSeq; MiSeq; NGS
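The Phred-score-based cleaning mentioned above amounts, at its simplest, to filtering reads on mean base quality. The sketch below is an illustrative Python toy, not SUGAR's implementation (which additionally localizes errors to flowcell subtiles); the function names and threshold are hypothetical:

```python
def mean_phred(qual, offset=33):
    """Mean Phred score of a FASTQ quality string (Sanger/Illumina 1.8+
    encoding, ASCII offset 33)."""
    return sum(ord(c) - offset for c in qual) / len(qual)

def clean_reads(records, min_q=30.0):
    """Drop reads whose mean Phred score falls below `min_q`.
    `records` is an iterable of (read_id, sequence, quality_string) tuples."""
    return [r for r in records if mean_phred(r[2]) >= min_q]
```

A mean Phred score of 30 corresponds to an average per-base error probability of 0.1%; reads falling below such a threshold are the ones most likely to cause the mapping and variant-calling problems the abstract describes.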
Charge states of ionizable residues in proteins determine their pH-dependent properties through their pKa values. Thus, various theoretical methods to determine ionization constants of residues in biological systems have been developed. One of the more widely used approaches for predicting pKa values in proteins is the PROPKA program, which provides convenient structural rationalization of the predicted pKa values without any additional calculations.
The PROPKA Graphical User Interface (GUI) is a new tool for studying the pH-dependent properties of proteins such as charge and stabilization energy. It facilitates a quantitative analysis of pKa values of ionizable residues together with their structural determinants by providing a direct link between the pKa data, predicted by the PROPKA calculations, and the structure via the Visual Molecular Dynamics (VMD) program. The GUI also calculates contributions to the pH-dependent unfolding free energy at a given pH for each ionizable group in the protein. Moreover, the PROPKA-computed pKa values or energy contributions of the ionizable residues in question can be displayed interactively. The PROPKA GUI can also be used for comparing pH-dependent properties of more than one structure at the same time.
The GUI considerably extends the analysis and validation possibilities of the PROPKA approach. The PROPKA GUI can conveniently be used to investigate the ionizable groups, and their interactions, of residues with significantly perturbed pKa values or of residues that contribute most to the stabilization energy. Charge-dependent properties can be studied either for a single protein or simultaneously across homologous structures, which makes it a helpful tool, for instance, in protein design studies or structure-based function prediction. The GUI is implemented as a Tcl/Tk plug-in for VMD, and can be obtained online at http://propka.ki.ku.dk/~luca/wiki/index.php/GUI_Web.
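The pH-dependent charge curves that such a GUI displays follow from the Henderson-Hasselbalch relation applied to each ionizable group's pKa. A minimal Python sketch of the underlying arithmetic (not PROPKA code; the PROPKA GUI is a Tcl/Tk plug-in for VMD, and the function names here are hypothetical):

```python
def residue_charge(pka, ph, acidic):
    """Fractional charge of an ionizable group from its pKa via
    Henderson-Hasselbalch: the protonated fraction is 1/(1 + 10^(pH - pKa)),
    so acids run from 0 to -1 and bases from +1 to 0 as pH rises."""
    frac_prot = 1.0 / (1.0 + 10.0 ** (ph - pka))
    return (frac_prot - 1.0) if acidic else frac_prot

def net_charge(groups, ph):
    """Sum the pH-dependent charges of all ionizable groups, as in a
    charge-vs-pH curve. `groups` is a list of (pKa, is_acidic) tuples."""
    return sum(residue_charge(pka, ph, acid) for pka, acid in groups)
```

The value of PROPKA's structure-based predictions is that the pKa entering this relation is the perturbed, environment-specific value rather than the model-compound value, so the resulting charge and unfolding free energy curves reflect the actual protein interior.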