1.  A repository based on a dynamically extensible data model supporting multidisciplinary research in neuroscience 
Background
Robust, extensible and distributed databases integrating clinical, imaging and molecular data represent a substantial challenge for modern neuroscience. It is even more difficult to provide extensible software environments able to effectively target the rapidly changing data requirements and structures of research experiments. There is an increasing demand from the neuroscience community for software tools addressing technical challenges such as: (i) supporting researchers in the medical field to carry out data analysis using integrated bioinformatics services and tools; (ii) handling multimodal/multiscale data and metadata, enabling the injection of several different data types according to structured schemas; (iii) providing high extensibility, in order to address different requirements deriving from a large variety of applications simply through user runtime configuration.
Methods
A dynamically extensible data structure supporting collaborative multidisciplinary research projects in neuroscience has been defined and implemented. We have considered extensibility issues from two different points of view. First, the improvement of data flexibility has been taken into account. This has been done through the development of a methodology for the dynamic creation and use of data types and related metadata, based on the definition of a “meta” data model. This way, users are not constrained to a set of predefined data, and the model can be easily extended and applied to different contexts. Second, users have been enabled to easily customize and extend the experimental procedures in order to track each step of acquisition or analysis. This has been achieved through a process-event data structure, a multipurpose taxonomic schema composed of two generic main objects: events and processes. Then, a repository has been built based on this data model and structure, and deployed on distributed resources thanks to a Grid-based approach. Finally, data integration aspects have been addressed by providing the repository application with an efficient dynamic interface designed to enable the user both to easily query the data according to the defined data types and to view all the data of every patient in an integrated and simple way.
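The abstract does not give the underlying schema, but a minimal sketch of the "meta" data model idea (data types and their attributes defined at runtime, so new types never require a schema rebuild) might look like the following. All class, field and type names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: data types are ordinary objects created at runtime,
# so researchers can add new acquisition types without rebuilding the database.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class AttributeDef:
    name: str
    value_type: type                      # e.g. float, str


@dataclass
class DataTypeDef:
    name: str
    attributes: list[AttributeDef] = field(default_factory=list)


@dataclass
class Record:
    data_type: DataTypeDef
    values: dict[str, Any] = field(default_factory=dict)

    def set(self, attr: str, value: Any) -> None:
        spec = next(a for a in self.data_type.attributes if a.name == attr)
        if not isinstance(value, spec.value_type):
            raise TypeError(f"{attr} expects {spec.value_type.__name__}")
        self.values[attr] = value


# A user defines a new data type at runtime ...
mri_t1 = DataTypeDef("MRI_T1", [AttributeDef("voxel_size_mm", float),
                                AttributeDef("scanner", str)])

# ... and immediately stores records of it, with no schema change.
rec = Record(mri_t1)
rec.set("voxel_size_mm", 1.0)
rec.set("scanner", "3T-SiteA")
```

The process-event structure described above would, in the same spirit, link such records to generic process and event objects rather than to fixed, type-specific tables.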
Results
The results of our work have been twofold. First, a dynamically extensible data model has been implemented and tested based on a “meta” data model enabling users to define their own data types independently from the application context. This data model has allowed users to dynamically include additional data types without the need to rebuild the underlying database. Then a complex process-event data structure has been built, based on this data model, describing patient-centered diagnostic processes and merging information from data and metadata. Second, a repository implementing such a data structure has been deployed on a distributed Data Grid in order to provide scalability both in terms of data input and data storage and to exploit distributed data and computational approaches in order to share resources more efficiently.
Conclusions
Based on this repository, data management has been made possible through a user-friendly web interface. The driving principle of not forcing users into preconfigured data types has been satisfied. It is up to users to dynamically configure the data model for a given experiment or data acquisition program, thus making it potentially suitable for customized applications.
doi:10.1186/1472-6947-12-115
PMCID: PMC3560115  PMID: 23043673
Neuroscience; Data models; Multidisciplinary studies
2.  Internet Patient Records: new techniques 
Background
The ease by which the Internet is able to distribute information to geographically-distant users on a wide variety of computers makes it an obvious candidate for a technological solution for electronic patient record systems. Indeed, second-generation Internet technologies such as the ones described in this article - XML (eXtensible Markup Language), XSL (eXtensible Style Language), DOM (Document Object Model), CSS (Cascading Style Sheet), JavaScript, and JavaBeans - may significantly reduce the complexity of the development of distributed healthcare systems.
Objective
The demonstration of an experimental Electronic Patient Record (EPR) system built from those technologies that can support viewing of medical imaging exams and graphically-rich clinical reporting tools, while conforming to the newly emerging XML standard for digital documents. In particular, we aim to promote rapid prototyping of new reports by clinical specialists.
Methods
We have built a prototype EPR client, InfoDOM, that runs in both popular web browsers. In this second version it receives each EPR as an XML record served via the secure SSL (Secure Socket Layer) protocol. JavaBean software components manipulate the XML to store it and then transform it into a variety of useful clinical views. First a web page summary for the patient is produced. From that web page other JavaBeans can be launched. In particular, we have developed a medical imaging exam Viewer and a clinical Reporter bean, parameterized appropriately for the particular patient and exam in question. Both present particular views of the XML data. The Viewer reads image sequences from a patient-specified network URL on a PACS (Picture Archiving and Communications System) server and presents them in a user-controllable animated sequence, while the Reporter provides a configurable anatomical map of the site of the pathology, from which individual "reportlets" can be launched. The specification of these reportlets is achieved using standard HTML forms and thus may conceivably be authored by clinical specialists. A generic JavaScript library has been written that allows the seamless incorporation of such contributions into the InfoDOM client. In conjunction with another JavaBean, that library renders graphically-enhanced reporting tools that read and write content to and from the XML data structure, ready for resubmission to the EPR server.
Results
We demonstrate the InfoDOM experimental EPR system that is currently being adapted for test-bed use in three hospitals in Cagliari, Italy. For this we are working with specialists in neurology, radiology, and epilepsy.
Conclusions
Early indications are that the rapid prototyping of reports afforded by our EPR system can assist communication between clinical specialists and our system developers. We are now experimenting with new technologies that may provide services to the kind of XML EPR client described here.
doi:10.2196/jmir.3.1.e8
PMCID: PMC1761888  PMID: 11720950
Electronic Medical Record; Medical Information Systems; Internet; Java; JavaScript; XML; XSL; Rapid Prototyping; Elicitation Methods
3.  Computers in imaging and health care: Now and in the future 
Journal of Digital Imaging  2000;13(4):145-156.
Early picture archiving and communication systems (PACS) were characterized by the use of very expensive hardware devices, cumbersome display stations, duplication of database content, lack of interfaces to other clinical information systems, and immaturity in their understanding of the folder manager concepts and workflow reengineering. They were implemented historically at large academic medical centers by biomedical engineers and imaging informaticists. PACS were nonstandard, home-grown projects with mixed clinical acceptance. However, they clearly showed the great potential for PACS and filmless medical imaging. Filmless radiology is a reality today. The advent of efficient softcopy display of images provides a means for dealing with the ever-increasing number of studies and number of images per study. Computer power has increased, and archival storage cost has decreased to the extent that the economics of PACS is justifiable with respect to film. Network bandwidths have increased to allow large studies of many megabytes to arrive at display stations within seconds of examination completion. PACS vendors have recognized the need for efficient workflow and have built systems with intelligence in the management of patient data. Close integration with the hospital information system (HIS)-radiology information system (RIS) is critical for system functionality. Successful implementation of PACS requires integration or interoperation with hospital and radiology information systems. Besides the economic advantages, secure rapid access to all clinical information on patients, including imaging studies, anytime and anywhere, enhances the quality of patient care, although it is difficult to quantify. Medical image management systems are maturing, providing access outside of the radiology department to images and clinical information throughout the hospital or the enterprise via the Internet. Small and medium-sized community hospitals, private practices, and outpatient centers in rural areas will begin realizing the benefits of PACS already realized by the large tertiary care academic medical centers and research institutions. Hand-held devices and the World Wide Web are going to change the way people communicate and do business. The impact on health care, including radiology, will be huge. Computer-aided diagnosis, decision support tools, virtual imaging, and guidance systems will transform our practice as value-added applications utilizing the technologies pushed by PACS development efforts. Outcomes data and the electronic medical record (EMR) will drive our interactions with referring physicians, and we expect the radiologist to become the informaticist, a new version of the medical management consultant.
doi:10.1007/BF03168389
PMCID: PMC3453069  PMID: 11110253
picture archiving and communication systems (PACS); image storage and retrieval; folder manager; workflow manager; radiology information systems; computers; digital radiology
4.  Methods for visual mining of genomic and proteomic data atlases 
BMC Bioinformatics  2012;13:58.
Background
As the volume, complexity and diversity of the information that scientists work with on a daily basis continues to rise, so too does the requirement for new analytic software. The analytic software must resolve the dichotomy between the need to allow for a high level of scientific reasoning and the requirement for an intuitive, easy-to-use tool that does not require specialist, and often arduous, training to use. Information visualization provides a solution to this problem, as it allows for direct manipulation and interaction with diverse and complex data. The challenge facing bioinformatics researchers is how to apply this knowledge to data sets that are continually growing in a field that is rapidly changing.
Results
This paper discusses an approach to the development of visual mining tools capable of supporting the mining of massive data collections used in systems biology research, and also discusses lessons that have been learned providing tools for both local researchers and the wider community. Example tools were developed which are designed to enable the exploration and analyses of both proteomics and genomics based atlases. These atlases represent large repositories of raw and processed experiment data generated to support the identification of biomarkers through mass spectrometry (the PeptideAtlas) and the genomic characterization of cancer (The Cancer Genome Atlas). Specifically the tools are designed to allow for: the visual mining of thousands of mass spectrometry experiments, to assist in designing informed targeted protein assays; and the interactive analysis of hundreds of genomes, to explore the variations across different cancer genomes and cancer types.
Conclusions
The mining of massive repositories of biological data requires the development of new tools and techniques. Visual exploration of the large-scale atlas data sets allows researchers to mine data to find new meaning and make sense at scales from single samples to entire populations. Providing linked, task-specific views that allow a user to start from points of interest (from diseases to single genes) enables targeted exploration of thousands of spectra and genomes. As the composition of the atlases changes, and our understanding of the biology increases, new tasks will continually arise. It is therefore important to provide the means to make the data available in a suitable manner in as short a time as possible. We have done this through the use of common visualization workflows, into which we rapidly deploy visual tools. These visualizations follow common metaphors where possible to assist users in understanding the displayed data. Rapid development of tools and task-specific views allows researchers to mine large-scale data almost as quickly as it is produced. Ultimately these visual tools enable new inferences, new analyses and further refinement of the large-scale data being provided in atlases such as PeptideAtlas and The Cancer Genome Atlas.
doi:10.1186/1471-2105-13-58
PMCID: PMC3352268  PMID: 22524279
5.  Ultra-Structure database design methodology for managing systems biology data and analyses 
BMC Bioinformatics  2009;10:254.
Background
Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping).
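The abstract does not show the actual rule tables, but the core idea (domain knowledge stored as rows that a small, generic procedure interprets, so behaviour changes by editing data rather than code) can be sketched roughly as below. The table layout and rule semantics are hypothetical, not the Ultra-Structure schema.

```python
# Rough sketch of "rules as data": a generic procedure walks rule rows;
# adding a row extends what the system can deduce, with no code change.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE rules (subject TEXT, relation TEXT, object TEXT)")
con.executemany(
    "INSERT INTO rules VALUES (?, ?, ?)",
    [
        ("peptide", "maps_to", "protein"),
        ("protein", "encoded_by", "gene"),
        ("gene", "located_on", "chromosome"),
    ],
)


def deduce(start):
    """Follow rule rows transitively from a starting entity."""
    chain, current = [], start
    while True:
        row = con.execute(
            "SELECT subject, relation, object FROM rules WHERE subject = ?",
            (current,),
        ).fetchone()
        if row is None:
            return chain
        chain.append(row)
        current = row[2]


print(deduce("peptide"))   # peptide -> protein -> gene -> chromosome
```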
Results
We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research.
Conclusion
We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era.
doi:10.1186/1471-2105-10-254
PMCID: PMC2748085  PMID: 19691849
6.  The Use of PROMIS and Assessment Center to Deliver Patient-Reported Outcome Measures in Clinical Research 
Journal of applied measurement  2010;11(3):304-314.
The Patient-Reported Outcomes Measurement Information System (PROMIS) was developed as one of the first projects funded by the NIH Roadmap for Medical Research Initiative to re-engineer the clinical research enterprise. The primary goal of PROMIS is to build item banks and short forms that measure key health outcome domains that are manifested in a variety of chronic diseases which could be used as a “common currency” across research projects. To date, item banks, short forms and computerized adaptive tests (CAT) have been developed for 13 domains with relevance to pediatric and adult subjects. To enable easy delivery of these new instruments, PROMIS built a web-based resource (Assessment Center) for administering CATs and other self-report data, tracking item and instrument development, monitoring accrual, managing data, and storing statistical analysis results. Assessment Center can also be used to deliver custom researcher developed content, and has numerous features that support both simple and complicated accrual designs (branching, multiple arms, multiple time points, etc.). This paper provides an overview of the development of the PROMIS item banks and details Assessment Center functionality.
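As a rough illustration of what a computerized adaptive test does (not the PROMIS/Assessment Center algorithm, which relies on item response theory models calibrated for each bank), the sketch below picks the unanswered item that is most informative at the current ability estimate and then nudges that estimate with the response. Item names, difficulties and the update rule are invented for illustration.

```python
# Toy CAT under a 1-parameter logistic (Rasch-style) model: choose the item
# with the highest Fisher information at the current ability estimate.
import math

item_difficulties = {"item_a": -1.0, "item_b": 0.0, "item_c": 1.2, "item_d": 2.0}


def p_endorse(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))


def information(theta, b):
    p = p_endorse(theta, b)
    return p * (1.0 - p)


def next_item(theta, asked):
    remaining = {k: b for k, b in item_difficulties.items() if k not in asked}
    return max(remaining, key=lambda k: information(theta, remaining[k]))


theta, asked = 0.0, set()
for response in (1, 0, 1):                  # simulated yes/no responses
    item = next_item(theta, asked)
    asked.add(item)
    # Crude update step; a real CAT would use maximum-likelihood or
    # Bayesian (EAP) estimation here.
    theta += 0.5 * (response - p_endorse(theta, item_difficulties[item]))
print(round(theta, 2), sorted(asked))
```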
PMCID: PMC3686485  PMID: 20847477
7.  Managing and querying gene expression data using Curray 
BMC Proceedings  2011;5(Suppl 2):S10.
Background
In principle, gene expression data can be viewed as providing just the three-valued expression profiles of target biological elements relative to an experiment at hand. Although complicated, gathering expression profiles does not pose much of a challenge from a query language standpoint. What is interesting is how these expression profiles are used to tease out information from the vast array of information repositories that ascribe meaning to the expression profiles. Since such annotations are inherently experiment-specific functions, much like queries in databases, developing a querying system for gene expression data appears to be pointless. Instead, developing tools and techniques to support individual assignment has been considered prudent in contemporary research.
Results
We propose a gene expression data management and querying system that is able to support pre-expression, expression and post-expression level analysis and reduce impedance mismatch between analysis systems. To this end, we propose a new, platform-independent and general-purpose query language called Curray, for Custom Microarray query language, to support online expression data analysis using distributed resources. It includes features to design expression analysis pipelines using language constructs at the conceptual level. The ability to include user-defined functions as a first-class language feature facilitates unlimited analysis support and removes language limitations. We show that Curray’s declarative and extensible features allow flexible modeling and room for customization.
Conclusions
The developments proposed in this article allow users to view their expression data from a conceptual standpoint - experiments, probes, expressions, mapping, etc. at multiple levels of representation and independent of the underlying chip technologies. It also allows transparent roll-up and drill-down along representation hierarchies from raw data to standards such as MIAME and MAGE-ML using linguistic constructs. Curray also allows seamless integration with distributed web resources through its LifeDB system of which it is a part.
doi:10.1186/1753-6561-5-S2-S10
PMCID: PMC3090758  PMID: 21554758
8.  WANDA: an end-to-end solution for tele-monitoring of chronic conditions 
Introduction
Approximately two-thirds of the deaths in the world are caused by chronic diseases such as cancer, diabetes, heart disease, and lung diseases. Chronic diseases are increasing in frequency and are costing the world $47 trillion in treatment costs and lost wages. Remote monitoring platforms promise to revolutionize healthcare by reducing healthcare costs and improving quality of care in chronic disease management. WANDA, designed at UCLA and deployed in clinical settings, is a three-tiered, end-to-end remote patient monitoring solution with extensive hardware/software components designed to cover the broad spectrum of the telehealth paradigm. The system tiers consist of a data collection framework, a data storage and management tier, and an analytics engine, all supported by several applications for data visualization, annotation, and social support.
Aims and objectives
Our primary objective was to build the basic elements of WANDA that enable deployment of the system in clinical settings and to test the feasibility and utilization of the system in monitoring chronic conditions. To this end, we have developed user-friendly hardware/software elements for data collection, regulatory-compliant data storage, and highly flexible and adaptable data analytics which facilitate physician data review, provide a built-in ability to identify statistically significant correlations between phenomena, conditions, and medical events for each defined chronic disease, and predict clinical episodes and medical complications such as hospitalizations, heart attacks, asthma attacks, and diabetes complications, so that such events can be mitigated, reducing care costs and improving patient quality of life.
Methods
A data collection framework was designed to acquire data from a set of heterogeneous off-the-shelf sensor nodes including blood pressure monitors, blood glucose monitors, weight scales, pulse oximeters, and accelerometers, and transmit the data, through an Android phone, to a back-end server. Multivariate data imputation techniques were used to estimate any missing values in the continuously collected data. Statistical regression techniques were applied to the medical signals to identify statistically significant correlations between events of interest. A feature selection algorithm was developed to improve the regression methods. The system has been used in five clinical studies including three heart failure studies, a diabetes study, and a weight loss study. The system is currently used in a large clinical study in which a total of 1500 heart failure patients are being recruited from six sites across California. In an earlier, smaller study, we recruited a total of 21 patients (10 men and 11 women; mean age 73.1±9.3, range 58–88) who completed a three-month intervention. A reference group of 21 patients (matched on age, gender, and race) was included in the preliminary analysis to provide a broad comparison group for studying patients. Baseline socio-demographic and clinical characteristics of the two groups were comparable. Both groups showed improvements in perceived health, HRQOL, and depression over time; however, anxiety increased in the comparison group. Patients assigned to the intervention group showed greater improvements in all six psychosocial factors over time.
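The abstract names the analytic steps (multivariate imputation, regression, feature selection) without detailing them; a much-simplified NumPy analogue of that pipeline is sketched below. The synthetic data, the mean-imputation step and the correlation-threshold screening are generic stand-ins, not WANDA's actual methods.

```python
# Simplified analogue of the described analytics: impute missing sensor
# readings, screen features by correlation with the outcome, fit a regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                # e.g. BP, glucose, weight, SpO2, HR, steps
X[rng.random(X.shape) < 0.1] = np.nan        # ~10% missing readings
y = 2.0 * np.nan_to_num(X[:, 0]) - 1.0 * np.nan_to_num(X[:, 2]) + rng.normal(size=200)

# 1. Impute missing values with the column mean.
col_means = np.nanmean(X, axis=0)
X_imp = np.where(np.isnan(X), col_means, X)

# 2. Keep features whose absolute correlation with the outcome passes a threshold.
corrs = np.array([abs(np.corrcoef(X_imp[:, j], y)[0, 1]) for j in range(X_imp.shape[1])])
keep = corrs > 0.2

# 3. Ordinary least-squares regression on the retained features.
A = np.column_stack([np.ones(len(y)), X_imp[:, keep]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("kept features:", np.flatnonzero(keep), "coefficients:", np.round(coef, 2))
```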
Results
Positive patient receptiveness and attitude toward using WANDA, and our preliminary results on improvements made by patients in the psychosocial factors, are promising. Our data imputation approach, applied to predicting missing answers for 12 daily questionnaires, shows an accuracy of 83%. Our feature selection algorithm finds an optimal feature set (20 features) usable for regression analysis while maintaining the same accuracy as the full feature set.
Conclusions
The WANDA is built on a three-tier architecture. The first tier is a sensor tier that measures patients’ vital signals and transmits data to the web server tier. The second tier consists of web servers that receive data from the first tier and maintains data integrity. The third tier is a back-end database server and analytics engine.
PMCID: PMC3571138
remote monitoring; chronic disease; tiered architecture; analytics; clinical study
9.  A semantic proteomics dashboard (SemPoD) for data management in translational research 
BMC Systems Biology  2012;6(Suppl 3):S20.
Background
One of the primary challenges in translational research data management is breaking down the barriers between the multiple data silos and the integration of 'omics data with clinical information to complete the cycle from the bench to the bedside. The role of contextual metadata, also called provenance information, is a key factor in effective data integration, reproducibility of results, correct attribution of original source, and answering research queries involving "What", "Where", "When", "Which", "Who", "How", and "Why" (also known as the W7 model). However, at present there is limited or no effective approach to managing and leveraging provenance information for integrating data across studies or projects. Hence, there is an urgent need for a paradigm shift in creating a "provenance-aware" informatics platform to address this challenge. We introduce an ontology-driven, intuitive Semantic Proteomics Dashboard (SemPoD) that uses provenance together with domain information (semantic provenance) to enable researchers to query, compare, and correlate different types of data across multiple projects, and allow integration with legacy data to support their ongoing research.
Results
The SemPoD platform, currently in use at the Case Center for Proteomics and Bioinformatics (CPB), consists of three components: (a) Ontology-driven Visual Query Composer, (b) Result Explorer, and (c) Query Manager. Currently, SemPoD allows provenance-aware querying of 1153 mass-spectrometry experiments from 20 different projects. SemPoD uses the systems molecular biology provenance ontology (SysPro) to support a dynamic query composition interface, which automatically updates the components of the query interface based on previous user selections and efficiently prunes the result set using a "smart filtering" approach. The SysPro ontology re-uses terms from the PROV ontology (PROV-O) being developed by the World Wide Web Consortium (W3C) provenance working group, the minimum information required for reporting a molecular interaction experiment (MIMIx), and the minimum information about a proteomics experiment (MIAPE) guidelines. SemPoD was evaluated both in terms of user feedback and the scalability of the system.
Conclusions
SemPoD is an intuitive and powerful provenance ontology-driven data access and query platform that uses the MIAPE and MIMIx metadata guidelines to create an integrated view over large-scale systems molecular biology datasets. SemPoD leverages the SysPro ontology to create an intuitive dashboard for biologists to compose queries, explore the results, and use a query manager for storing queries for later use. SemPoD can be deployed over many existing database applications storing 'omics data, including, as illustrated here, the LabKey data-management system. The initial user feedback evaluating the usability and functionality of SemPoD has been very positive, and it is being considered for wider deployment beyond the proteomics domain and in other 'omics centers.
doi:10.1186/1752-0509-6-S3-S20
PMCID: PMC3524316  PMID: 23282161
10.  Keeping Your DNA Sequencing, Genotyping, and Microarray Laboratory Competitive in a New Era of Genomics 
w2-2
Laboratory directors are facing enormous challenges with respect to keeping their laboratories competitive and retaining customers in the face of shrinking budgets and rapidly changing technology. A well-designed Laboratory Information Management System (LIMS) can help meet these challenges and manage costs as the scale and complexity of data collection and related services increase. LIMS can also offer competitive advantages through increased automation and improved customer experiences. Implementing a LIMS strategy that will reduce data collection costs while improving competitiveness is a daunting proposition. LIMS are computerized data and information tracking systems that are highly variable with respect to their purpose, customization capabilities, and overall acquisition (initial purchase) and ownership (maintenance) costs. A simple LIMS can be built from a small number of spreadsheets and track a few specific processes. Sophisticated LIMS rely on databases to manage multiple laboratory processes, capture and analyze different kinds of data, and provide decision support capabilities. In this presentation, I will share 20 years of academic and industrial LIMS experiences and perspectives that have been informed through hundreds of interactions with core, research, and manufacturing laboratories engaged in DNA sequencing, genotyping, and microarrays. We’ll explore the issues that need to be addressed with respect to either building a LIMS or acquiring a LIMS product. A new model that allows laboratories to offer competitive services, utilizing cost-effective laboratory automation strategies and new technologies like next generation sequencing, will be presented. We’ll also compare different IT infrastructures and discuss their advantages and how investments can be made to protect against unexpected costs as new instruments, like the HiSeq 2000™ or SOLiD 4™, third generation sequencing, or other genetic analysis platforms are introduced.
PMCID: PMC2918195
11.  The Medicago truncatula gene expression atlas web server 
BMC Bioinformatics  2009;10:441.
Background
Legumes (Leguminosae or Fabaceae) play a major role in agriculture. Transcriptomics studies in the model legume species, Medicago truncatula, are instrumental in helping to formulate hypotheses about the role of legume genes. With the rapid growth of publicly available Affymetrix Medicago Genome Array GeneChip data from a great range of tissues, cell types, growth conditions, and stress treatments, the legume research community desires an effective bioinformatics system to aid efforts to interpret the Medicago genome through functional genomics. We developed the Medicago truncatula Gene Expression Atlas (MtGEA) web server for this purpose.
Description
The Medicago truncatula Gene Expression Atlas (MtGEA) web server is a centralized platform for analyzing the Medicago transcriptome. Currently, the web server hosts gene expression data from 156 Affymetrix GeneChip® Medicago genome arrays in 64 different experiments, covering a broad range of developmental and environmental conditions. The server enables flexible, multifaceted analyses of transcript data and provides a range of additional information about genes, including different types of annotation and links to the genome sequence, which help users formulate hypotheses about gene function. Transcript data can be accessed using Affymetrix probe identification number, DNA sequence, gene name, functional description in natural language, GO and KEGG annotation terms, and InterPro domain number. Transcripts can also be discovered through co-expression or differential expression analysis. Flexible tools to select a subset of experiments and to visualize and compare expression profiles of multiple genes have been implemented. Data can be downloaded, in part or full, in a tabular form compatible with common analytical and visualization software. The web server will be updated on a regular basis to incorporate new gene expression data and genome annotation, and is accessible at: http://bioinfo.noble.org/gene-atlas/.
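To make the co-expression query concrete: ranking probe sets by the Pearson correlation of their expression profiles across the arrays is one straightforward way such a query can be computed. The snippet below is an illustrative stand-in with made-up probe identifiers and values, not the MtGEA implementation.

```python
# Rank probe sets by correlation of their expression profiles with a query probe.
import numpy as np

expression = {                       # probe set -> expression across 5 arrays
    "Mtr.1001": np.array([2.1, 5.3, 8.0, 3.2, 1.1]),
    "Mtr.2002": np.array([2.0, 5.0, 7.5, 3.0, 1.3]),
    "Mtr.3003": np.array([9.0, 1.2, 0.5, 6.1, 7.4]),
}


def coexpressed(query, top_n=5):
    q = expression[query]
    scores = [
        (probe, float(np.corrcoef(q, profile)[0, 1]))
        for probe, profile in expression.items()
        if probe != query
    ]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]


print(coexpressed("Mtr.1001"))       # the probe with the most similar profile ranks first
```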
Conclusions
The MtGEA web server has a well-managed, rich data set, and offers data retrieval and analysis tools within the web platform. It has proven to be a powerful resource for plant biologists to effectively and efficiently identify Medicago transcripts of interest from a multitude of aspects, formulate hypotheses about gene function, and interpret the Medicago genome from a systematic point of view.
doi:10.1186/1471-2105-10-441
PMCID: PMC2804685  PMID: 20028527
12.  iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources 
PLoS ONE  2008;3(5):e2265.
The advancement of the computational biology field hinges on progress in three fundamental directions: the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources: data, software tools and web services. The iTools design, implementation and resource metadata content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource metadata repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine metadata descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its metadata content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu.
doi:10.1371/journal.pone.0002265
PMCID: PMC2386255  PMID: 18509477
13.  Developing an efficient scheduling template of a chemotherapy treatment unit 
The Australasian Medical Journal  2011;4(10):575-588.
This study was undertaken to improve the performance of a Chemotherapy Treatment Unit by increasing the throughput and reducing the average patient’s waiting time. In order to achieve this objective, a scheduling template was built. The scheduling template is a simple tool that can be used to schedule patients' arrival to the clinic. A simulation model of this system was built, and several scenarios that match the arrival pattern of the patients with resource availability were designed and evaluated. After performing detailed analysis, one scenario provided the best system performance. A scheduling template was developed based on this scenario. After implementing the new scheduling template, 22.5% more patients can be served.
Introduction
CancerCare Manitoba is a provincially mandated cancer care agency. It is dedicated to providing quality care to those who have been diagnosed and are living with cancer. The MacCharles Chemotherapy Unit is specially built to provide chemotherapy treatment to the cancer patients of Winnipeg. In order to maintain an excellent service, it tries to ensure that patients get their treatment in a timely manner. It is challenging to maintain that goal because of the lack of a proper roster, the workload distribution and inefficient resource allotment. In order to maintain the satisfaction of the patients and the healthcare providers, by serving the maximum number of patients in a timely manner, it is necessary to develop an efficient scheduling template that matches the required demand with the availability of resources. This goal can be reached using simulation modelling. Simulation has proven to be an excellent modelling tool. It can be defined as building computer models that represent real world or hypothetical systems, and then experimenting with these models to study system behaviour under different scenarios.1, 2
A study was undertaken at the Children's Hospital of Eastern Ontario to identify the issues behind the long waiting time of an emergency room.3 A 20-day field observation revealed that the availability of the staff physician and interaction affects the patient wait time. Jyväskylä et al.4 used simulation to test different process scenarios, allocate resources and perform activity-based cost analysis in the Emergency Department (ED) at the Central Hospital. The simulation also supported the study of a new operational method, named the "triage-team" method, without interrupting the main system. The proposed triage-team method categorises each patient according to the urgency of seeing the doctor and allows the patient to complete the necessary tests before being seen by the doctor for the first time. The simulation study showed that it would decrease the throughput time of patients, reduce the utilisation of specialists and enable the ordering of all the tests the patient needs right after arrival, thus quickening the referral to treatment.
Santibáñez et al.5 developed a discrete event simulation model of the British Columbia Cancer Agency's ambulatory care unit, which was used to study the impact of scenarios considering different operational factors (delay in starting clinic), appointment schedules (appointment order, appointment adjustment, add-ons to the schedule) and resource allocation. It was found that the best outcomes were obtained when not one but multiple changes were implemented simultaneously. Sepúlveda et al.6 studied the M. D. Anderson Cancer Centre Orlando, which is a cancer treatment facility, and built a simulation model to analyse and improve the flow process and increase capacity in the main facility. Different scenarios were considered, such as transferring laboratory and pharmacy areas, adding an extra blood draw room and applying different patient scheduling techniques. The study showed that increasing the number of short-term (four hours or less) patients in the morning could increase chair utilisation.
Discrete event simulation also helps improve a service where staff are unaware of the behaviour of the system as a whole, which can also be described as a real professional system. Niranjon et al.7 used simulation successfully where they had to face such constraints and a lack of accessible data. Carlos et al.8 used total quality management and simulation-animation to improve the quality of the emergency room. Simulation was used to cover the key points of the emergency room and animation was used to indicate the areas of opportunity required. This study revealed that long waiting times, overloaded personnel and an increasing withdrawal rate of patients are caused by the lack of capacity in the emergency room.
Baesler et al.9 developed a methodology for a cancer treatment facility to stochastically find a global optimum point for the control variables. A simulation model generated the output using a goal programming framework for all the objectives involved in the analysis. Later, a genetic algorithm was responsible for performing the search for an improved solution. The control variables considered in this research were the number of treatment chairs, the number of blood-drawing nurses, laboratory personnel, and pharmacy personnel. Guo et al.10 presented a simulation framework considering demand for appointments, patient flow logic, distribution of resources, and the scheduling rules followed by the scheduler. The objective of the study was to develop a scheduling rule that ensures 95% of all appointment requests are seen within one week after the request is made, in order to increase the level of patient satisfaction and balance the schedule of each doctor to maintain a fine harmony between a "busy clinic" and a "quiet clinic".
Huschka et al.11 studied a healthcare system that was about to change its facility layout. In this case a simulation model study helped them to design a new healthcare practice by evaluating the change in layout before implementation. Historical data, such as the arrival rate of the patients, the number of patients visiting each day, and the patient flow logic, were used to build the current system model. Later, different scenarios were designed which measured the changes in the current layout and performance.
Wijewickrama et al.12 developed a simulation model to evaluate appointment schedules (AS) for second-time consultations and patient appointment sequences (PSEQ) in a multi-facility system. Five different appointment rules (ARULE) were considered: i) Baily; ii) 3Baily; iii) Individual (Ind); iv) two patients at a time (2AtaTime); v) Variable Interval (V-I) rule. PSEQ is based on the type of patient: appointment patients (APs) and new patients (NPs). The different PSEQs studied were: i) first-come first-serve; ii) appointment patients at the beginning of the clinic (APBEG); iii) new patients at the beginning of the clinic (NPBEG); iv) assigning appointed and new patients in an alternating manner (ALTER); v) assigning a new patient after every five appointment patients. Patient no-show (0% and 5%) and patient punctuality (PUNCT) (on-time and 10 minutes early) were also considered. The study found that ALTER-Ind. and ALTER5-Ind. performed best in the 0% NOSHOW, on-time PUNCT and 5% NOSHOW, on-time PUNCT situations in reducing WT and IT per patient. As NOSHOW created slack time for waiting patients, their WT tends to reduce while IT increases due to unexpected cancellations. Earliness increases congestion, which in turn increases waiting time.
Ramis et al.13 conducted a study of a Medical Imaging Center (MIC) to build a simulation model which was used to improve the patient journey through an imaging centre by reducing the wait time and making better use of the resources. The simulation model also used a Graphic User Interface (GUI) to provide the parameters of the centre, such as arrival rates, distances, processing times, resources and schedule. The simulation was used to measure the waiting time of the patients in different case scenarios. The study found that assigning a common function to the resource personnel could improve the waiting time of the patients.
The objective of this study is to develop an efficient scheduling template that maximises the number of served patients and minimises the average patient's waiting time at the given resource availability. To accomplish this objective, we will build a simulation model which mimics the working conditions of the clinic. Then we will suggest different scenarios for matching the arrival pattern of the patients with the availability of the resources. Full experiments will be performed to evaluate these scenarios. Hence, a simple and practical scheduling template will be built based on the identified best scenario. The developed simulation model is described in section 2, which consists of a description of the treatment room, and a description of the types of patients and treatment durations. In section 3, different improvement scenarios are described and their analysis is presented in section 4. Section 5 illustrates a scheduling template based on one of the improvement scenarios. Finally, the conclusion and future direction of our work are presented in section 6.
Simulation Model
A simulation model represents the actual system and assists in visualising and evaluating the performance of the system under different scenarios without interrupting the actual system. Building a proper simulation model of a system consists of the following steps.
Observing the system to understand the flow of the entities, key players, availability of resources and overall generic framework.
Collecting the data on the number and type of entities, time consumed by the entities at each step of their journey, and availability of resources.
After building the simulation model it is necessary to confirm that the model is valid. This can be done by confirming that each entity flows as it is supposed to and the statistical data generated by the simulation model is similar to the collected data.
Figure 1 shows the patient flow process in the treatment room. On the patient's first appointment, the oncologist comes up with the treatment plan. The treatment time varies according to the patient’s condition, which may be 1 hour to 10 hours. Based on the type of the treatment, the physician or the clinical clerk books an available treatment chair for that time period.
On the day of the appointment, the patient will wait until the booked chair is free. When the chair is free a nurse from that station comes to the patient, verifies the name and date of birth and takes the patient to a treatment chair. Afterwards, the nurse flushes the chemotherapy drug line to the patient's body which takes about five minutes and sets up the treatment. Then the nurse leaves to serve another patient. Chemotherapy treatment lengths vary from less than an hour to 10 hour infusions. At the end of the treatment, the nurse returns, removes the line and notifies the patient about the next appointment date and time which also takes about five minutes. Most of the patients visit the clinic to take care of their PICC line (a peripherally inserted central catheter). A PICC is a line that is used to inject the patient with the chemical. This PICC line should be regularly cleaned, flushed to maintain patency and the insertion site checked for signs of infection. It takes approximately 10–15 minutes to take care of a PICC line by a nurse.
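The study itself builds its model in ARENA (described below), but the patient flow just outlined (wait for a chair, a roughly five-minute nurse set-up, an infusion of varying length, then a short tear-down by a nurse) can be sketched as a discrete-event simulation in a few lines of Python with SimPy. The capacities, arrival rate and treatment durations here are illustrative placeholders, not the clinic's calibrated inputs.

```python
# Minimal discrete-event sketch of the chemotherapy patient flow using SimPy.
import random
import simpy

SETUP_MIN, TEARDOWN_MIN = 5, 5
waits = []                                     # minutes waited for a chair + nurse


def patient(env, chairs, nurses, treatment_min):
    arrived = env.now
    with chairs.request() as chair:            # wait for a free treatment chair
        yield chair
        with nurses.request() as nurse:        # nurse hooks up the line (~5 min)
            yield nurse
            waits.append(env.now - arrived)
            yield env.timeout(SETUP_MIN)
        yield env.timeout(treatment_min)       # infusion runs without the nurse
        with nurses.request() as nurse:        # nurse removes the line (~5 min)
            yield nurse
            yield env.timeout(TEARDOWN_MIN)


def arrivals(env, chairs, nurses):
    while True:
        yield env.timeout(random.expovariate(1 / 6.0))     # ~1 arrival per 6 min
        env.process(patient(env, chairs, nurses,
                            random.choice([15, 60, 120, 240])))


random.seed(1)
env = simpy.Environment()
chairs = simpy.Resource(env, capacity=30)
nurses = simpy.Resource(env, capacity=11)
env.process(arrivals(env, chairs, nurses))
env.run(until=12 * 60)                         # one 12-hour clinic day
print(f"patients seated: {len(waits)}, mean wait: {sum(waits) / len(waits):.1f} min")
```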
CancerCare Manitoba provided access to the electronic scheduling system, known as "ARIA", which is a comprehensive information and image management system that aggregates patient data into a fully electronic medical chart, provided by Varian Medical Systems. This system was used to find out how many patients are booked on every clinic day. It also reveals which chair is used for how many hours. It was necessary to search a patient's history to find out how long the patient spends on which chair. Collecting a snapshot of each patient gives the complete picture of a one-day clinic schedule.
The treatment room consists of the following two main limited resources:
Treatment Chairs: Chairs that are used to seat the patients during the treatment.
Nurses: Nurses are required to inject the treatment line into the patient and remove it at the end of the treatment. They also take care of the patients when they feel uncomfortable.
The MacCharles Chemotherapy Unit consists of 11 nurses and 5 stations with the following description:
Station 1: Station 1 has six chairs (numbered 1 to 6) and two nurses. The two nurses work from 8:00 to 16:00.
Station 2: Station 2 has six chairs (7 to 12) and three nurses. Two nurses work from 8:00 to 16:00 and one nurse works from 12:00 to 20:00.
Station 3: Station 3 has six chairs (13 to 18) and two nurses. The two nurses work from 8:00 to 16:00.
Station 4: Station 4 has six chairs (19 to 24) and three nurses. One nurse works from 8:00 to 16:00. Another nurse works from 10:00 to 18:00.
Solarium Station: Solarium Station has six chairs (Solarium Stretcher 1, Solarium Stretcher 2, Isolation, Isolation emergency, Fire Place 1, Fire Place 2). There is only one nurse assigned to this station that works from 12:00 to 20:00. The nurses from other stations can help when need arises.
There is one more nurse known as the "float nurse" who works from 11:00 to 19:00. This nurse can work at any station. Table 1 summarises the working hours of chairs and nurses. All treatment stations start at 8:00 and continue until the assigned nurse for that station completes her shift.
Currently, the clinic uses a scheduling template to assign the patients' appointments. However, due to the high demand for patient appointments, it is no longer followed. We believe that this template can be improved based on the availability of nurses and chairs. Clinic workload data were collected from 21 days of field observation. The current scheduling template has 10 types of appointment time slots: 15-minute, 1-hour, 1.5-hour, 2-hour, 3-hour, 4-hour, 5-hour, 6-hour, 8-hour and 10-hour, and it is designed to serve 95 patients. But when the scheduling template was compared with the 21 days of observations, it was found that the clinic is serving more patients than it is designed for. Therefore, the providers do not usually follow the scheduling template. Indeed, they very often break the time slots to accommodate slots that do not exist in the template. Hence, we find that some of the stations are very busy (mostly station 2) and others are underused. If the scheduling template can be improved, it will be possible to bring more patients to the clinic and reduce their waiting time without adding more resources.
In order to build or develop a simulation model of the existing system, it is necessary to collect the following data:
Types of treatment durations.
Numbers of patients in each treatment type.
Arrival pattern of the patients.
Steps that the patients have to go through in their treatment journey and required time of each step.
Using the observations of 2,155 patients over 21 days of historical data, the types of treatment durations and the number of patients in each type were estimated. This data also assisted in determining the arrival rate and the frequency distribution of the patients. The patients were categorised into six types. The percentages of these types and their associated service time distributions were also determined.
ARENA Rockwell Simulation Software (v13) was used to build the simulation model. Entities of the model were tracked to verify that the patients move as intended. The model was run for 30 replications and statistical data was collected to validate the model. The total number of patients that go through the model was compared with the actual number of served patients during the 21 days of observations.
Improvement Scenarios
After verifying and validating the simulation model, different scenarios were designed and analysed to identify the best scenario that can handle more patients and reduce the average patient's waiting time. Based on the clinic observation and discussion with the healthcare providers, the following constraints have been stated:
The stations are filled up with treatment chairs. Therefore, it is literally impossible to fit any more chairs in the clinic. Moreover, the stakeholders are not interested in adding extra chairs.
The stakeholders and the caregivers are not interested in changing the layout of the treatment room.
Given these constraints the options that can be considered to design alternative scenarios are:
Changing the arrival pattern of the patients so that it fits the nurses' availability.
Changing the nurses' schedule.
Adding one full time nurse at different starting times of the day.
Figure 2 compares the available number of nurses and the number of patient arrivals during different hours of a day. It can be noticed that there is a rapid growth in the arrival of patients (from 13 to 17) between 8:00 and 10:00 even though the clinic has the same number of nurses during this time period. At 12:00 there is a sudden drop in patient arrivals even though there are more available nurses. It is clear that there is an imbalance between the number of available nurses and the number of patient arrivals over different hours of the day. Consequently, balancing the demand (arrival rate of patients) and resources (available number of nurses) will reduce the patients' waiting time and increase the number of served patients. The alternative scenarios that satisfy the above three constraints are listed in Table 2. These scenarios respect the following rules:
Long treatments (between 4 and 11 hours) have to be scheduled early in the morning to avoid working overtime.
Patients of type 1 (15-minute to 1-hour treatment) are the most common. They can be fitted in at any time of the day because their treatment time is short. Hence, it is recommended to bring these patients in during the middle of the day when there are more nurses.
Nurses get tired at the end of the clinic day. Therefore, fewer patients should be scheduled in the late hours of the day.
In Scenario 1, the arrival pattern of the patients was changed so that it fits with the nurse schedule. This arrival pattern is shown in Table 3. Figure 3 shows the new patients' arrival pattern compared with the current arrival pattern. Similar patterns can be developed for the remaining scenarios too.
Analysis of Results
ARENA Rockwell Simulation software (v13) was used to develop the simulation model. There is no warm-up period because the model simulates day-to-day scenarios. The patients of any day are supposed to be served on the same day. The model was run for 30 days (replications) and statistical data was collected to evaluate each scenario. Tables 4 and 5 show the detailed comparison of the system performance between the current scenario and Scenario 1. The results are quite interesting. The average throughput rate of the system has increased from 103 to 125 patients per day. The maximum throughput rate can reach 135 patients. Although the average waiting time has increased, the utilisation of the treatment station has increased by 15.6%. Similar analysis has been performed for the rest of the scenarios. Due to space limitations the detailed results are not given. However, Table 6 exhibits a summary of the results and a comparison between the different scenarios. Scenario 1 was able to significantly increase the throughput of the system (by 21%) while still resulting in an acceptably low average waiting time (13.4 minutes). In addition, it is worth noting that adding a nurse (Scenarios 3, 4, and 5) does not significantly reduce the average wait time or increase the system's throughput. The reason behind this is that when all the chairs are busy, the nurses have to wait until some patients finish their treatment. As a consequence, the other patients have to wait for the commencement of their treatment too. Therefore, hiring a nurse, without adding more chairs, will not reduce the waiting time or increase the throughput of the system. In this case, the only way to increase the throughput of the system is by adjusting the arrival pattern of patients to the nurses' schedule.
Developing a Scheduling Template based on Scenario 1
Scenario 1 provides the best performance. However, a scheduling template is necessary for the care providers to book the patients. Therefore, a brief description is provided below of how the scheduling template is developed based on this scenario.
Table 3 gives the number of patients that arrive hourly, following Scenario 1. The distribution of each type of patient is shown in Table 7. This distribution is based on the percentage of each type of patient in the collected data. For example, between 8:00 and 9:00, 12 patients will arrive, of which 54.85% are of Type 1, 34.55% are of Type 2, 15.163% are of Type 3, 4.32% are of Type 4, 2.58% are of Type 5 and the rest are of Type 6. It is worth noting that we assume that the patients of each type arrive as a group at the beginning of the hourly time slot. For example, all six patients of Type 1 in the 8:00 to 9:00 time slot arrive at 8:00.
The number of patients of each type is distributed in such a way that it respects all the constraints described in Section 1.3. Most of the patients of the clinic are of types 1, 2 and 3, and they require less treatment time than patients of the other types. Therefore, they are distributed over the whole day. Patients of types 4, 5 and 6 require a longer treatment time. Hence, they are scheduled at the beginning of the day to avoid overtime. Because patients of types 4, 5 and 6 come at the beginning of the day, most of the type 1 and 2 patients come at mid-day (12:00 to 16:00). Another reason to make the treatment room more crowded between 12:00 and 16:00 is that the clinic has the maximum number of nurses during this time period. Nurses become tired at the end of the clinic day, which is a reason not to schedule any patient after 19:00.
Based on the patient arrival schedule and nurse availability, a scheduling template is built and shown in Figure 4. To build the template, whenever a nurse is available and patients are waiting for service, a priority list of these patients is developed. Patients are prioritised in descending order of their estimated slack time and, secondarily, by the shortest service time; the secondary rule breaks ties between patients with the same slack. The slack time is calculated using the following equation:
Slack time = Due time - (Arrival time + Treatment time)
Due time is the clinic closing time. To explain how the process works, assume that at 8:00 (between 8:00 and 8:15) seven patients in total are scheduled: two patients in station 1 (one 8-hour and one 15-minute patient), two patients in station 2 (two 12-hour patients), two patients in station 3 (one 2-hour and one 15-minute patient) and one patient in station 4 (one 3-hour patient). According to Figure 2, seven nurses are available at 8:00 and it takes 15 minutes to set up a patient. Therefore, it is not possible to schedule more than seven patients between 8:00 and 8:15, and the current schedule also serves seven patients in this interval. The rest of the template can be justified similarly.
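To make the priority rule concrete, the sketch below builds a priority list for a few hypothetical waiting patients using the slack-time equation above. The closing time and patient data are assumptions for illustration only, and the ordering follows the wording of the text (descending slack, ties broken by shortest service time).

```python
# Illustrative construction of the priority list used when a nurse becomes free.
# Patient data and the closing time are hypothetical; all times are in minutes.
from dataclasses import dataclass

CLINIC_CLOSE = 20 * 60          # assumed "due time" = clinic closing time (20:00)

@dataclass
class Patient:
    name: str
    arrival: int                # arrival time, minutes from midnight
    treatment: int              # treatment duration in minutes

    @property
    def slack(self) -> int:
        # Slack time = Due time - (Arrival time + Treatment time)
        return CLINIC_CLOSE - (self.arrival + self.treatment)

def priority_list(waiting):
    # Primary key: slack time in descending order (as described in the text);
    # secondary key: shortest service time, to break ties.
    return sorted(waiting, key=lambda p: (-p.slack, p.treatment))

waiting = [
    Patient("A", arrival=8 * 60, treatment=8 * 60),    # 8-hour patient
    Patient("B", arrival=8 * 60, treatment=15),        # 15-minute patient
    Patient("C", arrival=8 * 60, treatment=3 * 60),    # 3-hour patient
]
for p in priority_list(waiting):
    print(p.name, "slack =", p.slack)
```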
doi:10.4066/AMJ.2011.837
PMCID: PMC3562880  PMID: 23386870
14.  Computational framework to support integration of biomolecular and clinical data within a translational approach 
BMC Bioinformatics  2013;14:180.
Background
The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is promoting the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that has gained popularity due to its robustness and flexibility as a generic platform for storing biological data; however, it lacks support for representing clinical and socio-demographic information.
Results
We have implemented an extension of Chado – the Clinical Module - to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: data level, to store the data; semantic level, to integrate and standardize the data by the use of ontologies; application level, to manage clinical databases, ontologies and data integration process; and web interface level, to allow interaction between the user and the system. The clinical module was built based on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data as well as biomaterial data were obtained from patients with tumors of head and neck. We implemented the IPTrans tool that is a complete environment for data migration, which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it in the Clinical Module of Chado; the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications.
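For readers unfamiliar with the Entity-Attribute-Value layout on which the Clinical Module is based, the sketch below illustrates the general idea with an in-memory SQLite database. The table and column names are invented for illustration and are not the actual Chado Clinical Module schema.

```python
# Generic Entity-Attribute-Value (EAV) layout: one row per (entity, attribute, value)
# triple, so new clinical attributes can be added without altering the schema.
# Table/column names are illustrative only, not the Chado Clinical Module schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (patient_id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE attribute (attribute_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE patient_attribute (
    patient_id   INTEGER REFERENCES patient(patient_id),
    attribute_id INTEGER REFERENCES attribute(attribute_id),
    value        TEXT
);
""")

conn.execute("INSERT INTO patient VALUES (1, 'case-001')")
conn.executemany("INSERT INTO attribute (name) VALUES (?)",
                 [("tumor_site",), ("smoking_status",)])
conn.executemany(
    "INSERT INTO patient_attribute VALUES "
    "(1, (SELECT attribute_id FROM attribute WHERE name = ?), ?)",
    [("tumor_site", "larynx"), ("smoking_status", "former")])

# Read back all attributes recorded for patient 1.
for name, value in conn.execute("""
    SELECT a.name, pa.value
    FROM patient_attribute pa JOIN attribute a USING (attribute_id)
    WHERE pa.patient_id = 1"""):
    print(name, "=", value)
```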
Conclusions
Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different “omics” technologies with patients’ clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments carried out on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
doi:10.1186/1471-2105-14-180
PMCID: PMC3688149  PMID: 23742129
15.  Factors Influencing the Use of a Web-Based Application for Supporting the Self-Care of Patients with Type 2 Diabetes: A Longitudinal Study 
Background
The take-up of eHealth applications in general is still rather low and user attrition is often high. Only limited information is available about the use of eHealth technologies among specific patient groups.
Objective
The aim of this study was to explore the factors that influence the initial and long-term use of a Web-based application (DiabetesCoach) for supporting the self-care of patients with type 2 diabetes.
Methods
A mixed-methods research design was used for a process analysis of the actual usage of the Web application over a 2-year period and to identify user profiles. Research instruments included log files, interviews, usability tests, and a survey.
Results
The DiabetesCoach was predominantly used for interactive features like online monitoring, personal data, and patient–nurse email contact. It was the continuous, personal feedback that particularly appealed to the patients; they felt more closely monitored by their nurse and encouraged to play a more active role in self-managing their disease. Despite the positive outcomes, usage of the Web application was hindered by low enrollment and nonusage attrition. The main barrier to enrollment had to do with a lack of access to the Internet (146/226, 65%). Although 68% (34/50) of the enrollees were continuous users, of whom 32% (16/50) could be defined as hardcore users (highly active), the remaining 32% (16/50) did not continue using the Web application for the full duration of the study period. Barriers to long-term use were primarily due to poor user-friendliness of the Web application (the absence of “push” factors or reminders) and selection of the “wrong” users; the well-regulated patients were not the ones who could benefit the most from system use because of a ceiling effect. Patients with a greater need for care seemed to be more engaged in long-term use; highly active users were significantly more often medication users than low/inactive users (P = .005) and had a longer diabetes duration (P = .03).
Conclusion
Innovations in health care will diffuse more rapidly when technology is employed that is simple to use and has applicable components for interactivity. This would foresee the patients’ need for continuous and personalized feedback, in particular for patients with a greater need for care. From this study several factors appear to influence increased use of eHealth technologies: (1) avoiding selective enrollment, (2) making use of participatory design methods, and (3) developing push factors for persistence. Further research should focus on the causal relationship between using the system’s features and actual usage, as such a view would provide important evidence on how specific technology features can engage and captivate users.
doi:10.2196/jmir.1603
PMCID: PMC3222177  PMID: 21959968
Internet; technology; eHealth; email; communication; primary care; self-care; diabetes
16.  Easier surveillance of climate-related health vulnerabilities through a Web-based spatial OLAP application 
Background
Climate change has a significant impact on population health. Population vulnerabilities depend on several determinants of different types, including biological, psychological, environmental, social and economic ones. Surveillance of climate-related health vulnerabilities must take into account these different factors, their interdependence, as well as their inherent spatial and temporal aspects on several scales, for informed analyses. Currently used technology includes commercial off-the-shelf Geographic Information Systems (GIS) and Database Management Systems with spatial extensions. It has been widely recognized that such OLTP (On-Line Transaction Processing) systems were not designed to support complex, multi-temporal and multi-scale analysis as required above. On-Line Analytical Processing (OLAP) is central to the field known as BI (Business Intelligence), a key field for such decision-support systems. In the last few years, we have seen a few projects that combine OLAP and GIS to improve spatio-temporal analysis and geographic knowledge discovery. This has given rise to SOLAP (Spatial OLAP) and a new research area. This paper presents how SOLAP and climate-related health vulnerability data were investigated and combined to facilitate surveillance.
Results
Based on recent spatial decision-support technologies, this paper presents a spatio-temporal web-based application that goes beyond GIS applications with regard to speed, ease of use, and interactive analysis capabilities. It supports the multi-scale exploration and analysis of integrated socio-economic, health and environmental geospatial data over several periods. This project was meant to validate the potential of recent technologies to contribute to a better understanding of the interactions between public health and climate change, and to facilitate future decision-making by public health agencies and municipalities in Canada and elsewhere. The project also aimed at integrating an initial collection of geo-referenced multi-scale indicators that were identified by Canadian specialists and end-users as relevant for the surveillance of the public health impacts of climate change. This system was developed in a multidisciplinary context involving researchers, policy makers and practitioners, using BI and web-mapping concepts (more particularly SOLAP technologies), while exploring new solutions for frequent automatic updating of data and for providing contextual warnings for users (to minimize the risk of data misinterpretation). According to the project participants, the final system succeeds in facilitating surveillance activities in a way not achievable with today's GIS. Regarding the experiments on frequent automatic updating and contextual user warnings, the results obtained indicate that these are meaningful and achievable goals but they still require research and development for their successful implementation in the context of surveillance and multiple organizations.
Conclusion
Surveillance of climate-related health vulnerabilities may be supported more efficiently by combining BI and GIS concepts, and more specifically by SOLAP technologies. These facilitate and accelerate multi-scale spatial and temporal analysis to the point where a user can maintain an uninterrupted train of thought, focussing on "what" she/he wants rather than on "how" to get it, and always obtain instant answers, including to the most complex queries (e.g., aggregated, temporal, comparative) that take minutes or hours with OLTP systems. The developed system respects Newell's cognitive band of 10 seconds when performing knowledge discovery (exploring data, looking for hypotheses, validating models). It provides new operators for easily and rapidly exploring multidimensional data at different levels of granularity, for different regions and epochs, and for visualizing the results in synchronized maps, tables and charts. It is naturally adapted to deal with multiscale indicators such as those used in the surveillance community, as confirmed by this project's end-users.
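As a rough, non-spatial analogue of the roll-up operations such a SOLAP client performs interactively, the snippet below aggregates a hypothetical vulnerability indicator across region and period dimensions with pandas. The data, indicator name and geographic levels are invented for illustration; a real SOLAP deployment would work against a spatial data cube rather than an in-memory table.

```python
# Toy example of OLAP-style drill-down/roll-up: aggregating an indicator
# over region and period dimensions. All values are invented.
import pandas as pd

facts = pd.DataFrame({
    "region":          ["Quebec", "Quebec", "Ontario", "Ontario"],
    "district":        ["Montreal", "Laval", "Toronto", "Ottawa"],
    "year":            [2007, 2007, 2007, 2008],
    "heat_alert_days": [12, 9, 15, 7],   # hypothetical vulnerability indicator
})

# Roll-up: aggregate districts to the regional level, per year.
by_region = facts.groupby(["region", "year"], as_index=False)["heat_alert_days"].sum()
print(by_region)

# Pivot for a table/chart-style comparison across periods.
print(by_region.pivot(index="region", columns="year", values="heat_alert_days"))
```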
doi:10.1186/1476-072X-8-18
PMCID: PMC2672060  PMID: 19344512
17.  Web 2.0 systems supporting childhood chronic disease management: A pattern language representation of a general architecture 
Background
Chronic disease management is a global health concern. By the time they reach adolescence, 10–15% of all children live with a chronic disease. The role of educational interventions in facilitating adaptation to chronic disease is receiving growing recognition, and current care policies advocate greater involvement of patients in self-care. Web 2.0 is an umbrella term for new collaborative Internet services characterized by user participation in developing and managing content. Key elements include Really Simple Syndication (RSS) to rapidly disseminate awareness of new information, weblogs (blogs) to describe new trends, wikis to share knowledge, and podcasts to make information available on personal media players. This study addresses the potential to develop Web 2.0 services for young persons with a chronic disease. It is acknowledged that the management of childhood chronic disease is based on an interplay between the initiatives and resources of patients, relatives, and health care professionals, with the balance shifting over time towards the patients and their families.
Methods
Participatory action research was used to stepwise define a design specification in the form of a pattern language. Support for children diagnosed with type 1 diabetes was used as the example area. Each individual design pattern was determined graphically using card-sorting methods, and textually in the form of Title, Context, Problem, Solution, Examples and References. Application references were included at the lowest level of the graphical overview of the pattern language but were not specified in detail in the textual descriptions.
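To make the textual pattern format concrete, one could represent each design pattern as a simple record carrying the fields listed above. The sketch below is a minimal illustration; the example pattern content is invented and not taken from the study.

```python
# Minimal record type mirroring the textual fields used for each design pattern.
# The example content is invented, not a pattern from the study.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignPattern:
    title: str
    context: str
    problem: str
    solution: str
    examples: List[str] = field(default_factory=list)
    references: List[str] = field(default_factory=list)

peer_support = DesignPattern(
    title="Peer-to-peer experience sharing",
    context="Adolescents newly diagnosed with type 1 diabetes",
    problem="Self-learning stalls without contact with peers facing the same condition",
    solution="Provide moderated blogs and wikis where patients exchange self-care strategies",
    examples=["Moderated diabetes blog (hypothetical)"],
    references=["Pattern language overview in the article"],
)
print(peer_support.title)
```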
Results
The design patterns are divided into functional and non-functional design elements, and formulated at the levels of organizational, system, and application design. The design elements specify access to materials for development of the competences needed for chronic disease management in specific community settings, endorsement of self-learning through online peer-to-peer communication, and systematic accreditation and evaluation of materials and processes.
Conclusion
The use of design patterns allows representing the core design elements of a Web 2.0 system upon which an 'ecological' development of content respecting these constraints can be built. Future research should include evaluations of Web 2.0 systems implemented according to the architecture in practice settings.
doi:10.1186/1472-6947-8-54
PMCID: PMC2627839  PMID: 19040738
18.  Main Report 
Genetics in Medicine  2006;8(Suppl 1):12S-252S.
Background:
States vary widely in their use of newborn screening tests, with some mandating screening for as few as three conditions and others mandating as many as 43 conditions, including varying numbers of the 40+ conditions that can be detected by tandem mass spectrometry (MS/MS). There has been no national guidance on the best candidate conditions for newborn screening since the National Academy of Sciences report of 1975 [1] and the United States Congress Office of Technology Assessment report of 1988 [2], despite rapid developments since then in genetics, in screening technologies, and in some treatments.
Objectives:
In 2002, the Maternal and Child Health Bureau (MCHB) of the Health Resources and Services Administration (HRSA) of the United States Department of Health and Human Services (DHHS) commissioned the American College of Medical Genetics (ACMG) to: (1) conduct an analysis of the scientific literature on the effectiveness of newborn screening; (2) gather expert opinion to delineate the best evidence for screening for specified conditions and develop recommendations focused on newborn screening, including but not limited to the development of a uniform condition panel; and (3) consider other components of the newborn screening system that are critical to achieving the expected outcomes in those screened.
Methods:
A group of experts in various areas of subspecialty medicine and primary care, health policy, law, public health, and consumers worked with a steering committee and several expert work groups, using a two-tiered approach to assess and rank conditions. A first step was developing a set of principles to guide the analysis. This was followed by developing criteria by which conditions could be evaluated, and then identifying the conditions to be evaluated. A large and broadly representative group of experts was asked to provide their opinions on the extent to which particular conditions met the selected criteria, relying on supporting evidence and references from the scientific literature. The criteria were distributed among three main categories for each condition: the availability and characteristics of the screening test; the availability and complexity of diagnostic services; and the availability and efficacy of treatments related to the conditions. A survey process utilizing a data collection instrument was used to gather expert opinion on the conditions in the first tier of the assessment. The data collection format and survey provided the opportunity to quantify expert opinion and to obtain the views of a diverse set of interest groups (necessary due to the subjective nature of some of the criteria). Statistical analysis of the data produced a score for each condition, which determined its ranking and initial placement in one of three categories (high scoring, moderately scoring, or low scoring/absence of a newborn screening test). In the second tier of these analyses, the evidence base related to each condition was assessed in depth (e.g., via systematic reviews of reference lists including MedLine, PubMed and others; books; Internet searches; professional guidelines; clinical evidence; and cost/economic evidence and modeling). The fact sheets reflecting these analyses were evaluated by at least two acknowledged experts for each condition. These experts assessed the data and the associated references related to each criterion and provided corrections where appropriate, assigned a value to the level of evidence and the quality of the studies that established the evidence base, and determined whether there were significant variances from the survey data. Survey results were subsequently realigned with the evidence obtained from the scientific literature during the second-tier analysis for all objective criteria, based on input from at least three acknowledged experts in each condition. The information from these two tiers of assessment was then considered with regard to the overriding principles and other technology- or condition-specific recommendations. On the basis of this information, conditions were assigned to one of three categories as described above: Core Panel; Secondary Targets (conditions that are part of the differential diagnosis of a core panel condition); and Not Appropriate for Newborn Screening (either no newborn screening test is available or there is poor performance with regard to multiple other evaluation criteria).
ACMG also considered features of optimal newborn screening programs beyond the tests themselves by assessing the degree to which programs met certain goals (e.g., availability of educational programs, proportions of newborns screened and followed up). Assessments were based on the input of experts serving in various capacities in newborn screening programs and on 2002 data provided by the programs of the National Newborn Screening and Genetics Resource Center (NNSGRC). In addition, a brief cost-effectiveness assessment of newborn screening was conducted.
Results:
Uniform panel
A total of 292 individuals determined to be generally representative of the regional distribution of the United States population and of areas of expertise or involvement in newborn screening provided a total of 3,949 evaluations of 84 conditions. For each condition, the responses of at least three experts in that condition were compared with those of all respondents for that condition and found to be consistent. A score of 1,200 on the data collection instrument provided a logical separation point between high scoring conditions (1,200–1,799 of a possible 2,100) and low scoring (<1,000) conditions. A group of conditions with intermediate scores (1,000–1,199) was identified, all of which were part of the differential diagnosis of a high scoring condition or apparent in the result of the multiplex assay. Some are identified by screening laboratories and others by diagnostic laboratories. This group was designated as a “secondary target” category for which the program must report the diagnostic result.
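A minimal sketch of the score-based first-tier triage described above, using the thresholds quoted in the text; the condition names and scores are invented examples, not results from the report.

```python
# Sketch of the first-tier categorisation by survey score.
# Thresholds follow the text; condition names and scores are invented examples.
def first_tier_category(score: int) -> str:
    if score >= 1200:
        return "high scoring"
    if 1000 <= score <= 1199:
        return "intermediate (candidate secondary target)"
    return "low scoring / no newborn screening test"

for condition, score in [("condition X", 1420), ("condition Y", 1105), ("condition Z", 830)]:
    print(condition, "->", first_tier_category(score))
```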
Using the validated evidence base and expert opinion, each condition that had previously been assigned to a category based on scores gathered through the data collection instrument was reconsidered. Again, the factors taken into consideration were: 1) available scientific evidence; 2) availability of a screening test; 3) presence of an efficacious treatment; 4) adequate understanding of the natural history of the condition; and 5) whether the condition was either part of the differential diagnosis of another condition or whether the screening test results related to a clinically significant condition.
The conditions were then assigned to one of three categories as previously described (core panel, secondary targets, or not appropriate for Newborn Screening).
Among the 29 conditions assigned to the core panel are three hemoglobinopathies associated with an Hb/S allele, six amino acidurias, five disorders of fatty acid oxidation, nine organic acidurias, and six unrelated conditions (congenital hypothyroidism (CH), biotinidase deficiency (BIOT), congenital adrenal hyperplasia (CAH), classical galactosemia (GALT), hearing loss (HEAR) and cystic fibrosis (CF)). Twenty-three of the 29 conditions in the core panel are identified with multiplex technologies such as tandem mass spectrometry (MS/MS) or high-pressure liquid chromatography (HPLC). On the basis of the evidence, six of the 35 conditions initially placed in the core panel were moved into the secondary target category, which expanded to 25 conditions. Test results not associated with potential disease in the infant (e.g., carriers) were also placed in the secondary target category. When newborn screening laboratory results definitively establish carrier status, the result should be made available to the health care professional community and families. Twenty-seven conditions were determined to be inappropriate for newborn screening at this time.
Conditions with limited evidence reported in the scientific literature were more difficult to evaluate, quantify and place in one of the three categories. In addition, many conditions were found to occur in multiple forms distinguished by age of onset, severity, or other features. Further, unless a condition was already included in newborn screening programs, there was a potential for bias in the information related to some criteria. In such circumstances, the quality of the studies underlying the data, such as expert opinion that considered case reports and reasoning from first principles, determined the placement of the conditions into particular categories.
Newborn screening program optimization
Assessment of the activities of newborn screening programs, based on program reports, was done for the six program components: education; screening; follow-up; diagnostic confirmation; management; and program evaluation. Considerable variation was found between programs with regard to whether particular aspects (e.g., prenatal education program availability, tracking of specimen collection and delivery) were included and the degree to which they are provided. Newborn screening program evaluation systems were also assessed to determine their adequacy and uniformity, with the goal of improving inter-program evaluation and comparison so that the expected outcomes of being identified through screening are realized.
Conclusions:
The state of the published evidence in the fast-moving worlds of newborn screening and medical genetics has not kept up with the implementation of new technologies, thus requiring the considerable use of expert opinion to develop recommendations about a core panel of conditions for newborn screening. Twenty-nine conditions were identified as primary targets for screening, for which all components of the newborn screening system should be maximized. An additional 25 conditions were listed that could be identified in the course of screening for core panel conditions. Programs are obligated to establish a diagnosis and communicate the result to the health care provider and family. It is recognized that screening may not have been maximized for the detection of these secondary conditions, but some proportion of such cases may be found among those screened for core panel conditions. With additional screening, greater training of primary care health care professionals and subspecialists will be needed, as will the development of an infrastructure for appropriate follow-up and management throughout the lives of children who have been identified as having one of these rare conditions. Recommended actions to overcome barriers to an optimal newborn screening system include: the establishment of a national role in the scientific evaluation of conditions and the technologies by which they are screened; standardization of case definitions and reporting procedures; enhanced oversight of hospital-based screening activities; long-term data collection and surveillance; and consideration of the financial needs of programs to allow them to deliver the appropriate services to the screened population.
doi:10.1097/01.gim.0000223467.60151.02
PMCID: PMC3109899
19.  Anatomy of the Epidemiological Literature on the 2003 SARS Outbreaks in Hong Kong and Toronto: A Time-Stratified Review 
PLoS Medicine  2010;7(5):e1000272.
Weijia Xing and colleagues reviewed the published epidemiological literature on SARS and show that less than a quarter of papers were published during the epidemic itself, suggesting that the research published lagged substantially behind the need for it.
Background
Outbreaks of emerging infectious diseases, especially those of a global nature, require rapid epidemiological analysis and information dissemination. The final products of those activities usually comprise internal memoranda and briefs within public health authorities and original research published in peer-reviewed journals. Using the 2003 severe acute respiratory syndrome (SARS) epidemic as an example, we conducted a comprehensive time-stratified review of the published literature to describe the different types of epidemiological outputs.
Methods and Findings
We identified and analyzed all published articles on the epidemiology of the SARS outbreak in Hong Kong or Toronto. The analysis was stratified by study design, research domain, data collection, and analytical technique. We compared the SARS-case and matched-control non-SARS articles published according to the timeline of submission, acceptance, and publication. The impact factors of the publishing journals were examined according to the time of publication of SARS articles, and the numbers of citations received by SARS-case and matched-control articles submitted during and after the epidemic were compared. Descriptive, analytical, theoretical, and experimental epidemiology concerned, respectively, 54%, 30%, 11%, and 6% of the studies. Only 22% of the studies were submitted, 8% accepted, and 7% published during the epidemic. The submission-to-acceptance and acceptance-to-publication intervals of the SARS articles submitted during the epidemic period were significantly shorter than the corresponding intervals of matched-control non-SARS articles published in the same journal issues (p<0.001 and p<0.01, respectively). The differences of median submission-to-acceptance intervals and median acceptance-to-publication intervals between SARS articles and their corresponding control articles were 106.5 d (95% confidence interval [CI] 55.0–140.1) and 63.5 d (95% CI 18.0–94.1), respectively. The median numbers of citations of the SARS articles submitted during the epidemic and over the 2 y thereafter were 17 (interquartile range [IQR] 8.0–52.0) and 8 (IQR 3.2–21.8), respectively, significantly higher than the median numbers of control article citations (15, IQR 8.5–16.5, p<0.05, and 7, IQR 3.0–12.0, p<0.01, respectively).
Conclusions
A majority of the epidemiological articles on SARS were submitted after the epidemic had ended, although the corresponding studies had relevance to public health authorities during the epidemic. To minimize the lag between research and the exigency of public health practice in the future, researchers should consider adopting common, predefined protocols and ready-to-use instruments to improve timeliness, and thus, relevance, in addition to standardizing comparability across studies. To facilitate information dissemination, journal managers should reengineer their fast-track channels, which should be adapted to the purpose of an emerging outbreak, taking into account the requirement of high standards of quality for scientific journals and competition with other online resources.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
Every now and then, a new infectious disease appears in a human population or an old disease becomes much more common or more geographically widespread. Recently, several such “emerging infectious diseases” have become major public health problems. For example, HIV/AIDS, hepatitis C, and severe acute respiratory syndrome (SARS) have all emerged in the past three decades and spread rapidly round the world. When an outbreak (epidemic) of an emerging infectious disease occurs, epidemiologists (scientists who study the causes, distribution, and control of diseases in populations) swing into action, collecting and analyzing data on the new threat to human health. Epidemiological studies are rapidly launched to identify the causative agent of the new disease, to investigate how the disease spreads, to define diagnostic criteria for the disease, to evaluate potential treatments, and to devise ways to control the disease's spread. Public health officials then use the results of these studies to bring the epidemic under control.
Why Was This Study Done?
Clearly, epidemics of emerging infectious diseases can only be controlled rapidly and effectively if the results of epidemiological studies are made widely available in a timely manner. Public health bulletins (for example, the Morbidity and Mortality Weekly Report from the US Centers for Disease Control and Prevention) are an important way of disseminating information, as is the publication of original research in peer-reviewed academic journals. But how timely is this second dissemination route? Submission, peer review, revision, re-review, acceptance, and publication of a piece of academic research can be a long process, the speed of which is affected by the responses of both authors and journals. In this study, the researchers analyze how the results of academic epidemiological research were submitted and published in journals during and after an emerging infectious disease epidemic, using the 2003 SARS epidemic as an example. The first case of SARS was identified in Asia in February 2003, and the disease rapidly spread around the world. 8,098 people became ill with SARS and 774 died before the epidemic was halted in July 2003.
What Did the Researchers Do and Find?
The researchers identified more than 300 journal articles covering epidemiological research into the SARS outbreak in Hong Kong, China, and Toronto, Canada (two cities strongly affected by the epidemic) that were published online or in print between January 1, 2003 and July 31, 2007. The researchers' analysis of these articles shows that more than half of them were descriptive epidemiological studies, investigations that focused on describing the distribution of SARS; a third were analytical epidemiological studies that tried to discover the cause of SARS. Overall, 22% of the journal articles were submitted for publication during the epidemic. Only 8% of the articles were accepted for publication and only 7% were actually published during the epidemic. The median (average) submission-to-acceptance and acceptance-to-publication intervals for SARS articles submitted during the epidemic were 55 and 77.5 days, respectively, much shorter intervals than those for non-SARS articles published in the same journal issues. After the epidemic was over, the submission-to-acceptance and acceptance-to-publication intervals for SARS articles were similar to those of non-SARS articles.
What Do These Findings Mean?
These findings show that, although the academic response to the SARS epidemic was rapid, most articles on the epidemiology of SARS were published after the epidemic was over even though SARS was a major threat to public health. Possible reasons for this publication delay include the time taken by authors to prepare and undertake their studies, to write and submit their papers, and, possibly, their tendency to first submit their results to high profile journals. The time then taken by journals to review the studies, make decisions about publication, and complete the publication process might also have delayed matters. To minimize future delays in the publication of epidemiological research on emerging infectious diseases, epidemiologists could adopt common, predefined protocols and ready-to-use instruments, which would improve timeliness and ensure comparability across studies, suggest the researchers. Journals, in turn, could improve their fast-track procedures and could consider setting up online sections that could be activated when an emerging infectious disease outbreak occurred. Finally, journals could consider altering their review system to speed up the publication process provided the quality of the final published articles was not compromised.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000272.
The US National Institute of Allergy and Infectious Diseases provides information on emerging infectious diseases
The US Centers for Disease Control and Prevention also provides information about emerging infectious diseases, including links to other resources, and information on SARS
Wikipedia has a page on epidemiology (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
The World Health Organization has information on SARS (in several languages)
doi:10.1371/journal.pmed.1000272
PMCID: PMC2864302  PMID: 20454570
20.  e-Health, m-Health and healthier social media reform: the big scale view 
Introduction
In the upcoming decade, digital platforms will be the backbone of a strategic revolution in the way medical services are provided, affecting both healthcare providers and patients. Digital, patient-centered healthcare services allow patients to participate actively in managing their own care, in times of health as well as illness, using personally tailored interactive tools. Such empowerment is expected to increase patients’ willingness to adopt actions and lifestyles that promote health, as well as to improve follow-up and compliance with treatment in cases of chronic illness. Clalit Health Services (CHS) is the largest HMO in Israel and the second largest worldwide. Through its 14 hospitals, 1,300 primary and specialized clinics, and 650 pharmacies, CHS provides comprehensive medical care to the majority of Israel’s population (above 4 million members). The CHS e-Health wing focuses on deepening patient involvement in managing health through personalized digital interactive tools. Currently, the CHS e-Health wing provides e-health services to 1.56 million unique patients monthly, with 2.4 million interactions every month (August 2011). Successful implementation of e-Health solutions is not simply a sum of technology, innovation and health; rather, it requires the expertise to tailor knowledge and leadership capabilities across multidisciplinary areas: clinical, ethical, psychological, legal, comprehension of patient and medical team engagement, etc. The Google Health case demonstrates this point well. On the other hand, our success with CHS demonstrates that e-Health can be rolled out effectively and quickly, with huge benefits for both patients and medical teams and with a robust business model.
CHS e-Health core components
They include:
1. The personal health record layer (what the patient can see) presents patients with their own medical history as well as the medical history of their preadult children, including diagnoses, allergies, vaccinations, laboratory results with interpretations in layman’s terms, medications with clear, straightforward explanations regarding dosing instructions, important side effects, contraindications, such as lactation etc., and other important medical information. All personal e-Health services require identification and authorization.
2. The personal knowledge layer (what the patient should know) presents patients with personally tailored recommendations for preventative medicine and health promotion. For example, diabetic patients receive push notifications regarding their yearly eye exam. The various health recommendations include occult blood testing, mammography, lipid profile, etc. Each recommendation contains textual, visual and interactive content components in order to promote engagement and motivate patients to actually change their health behaviour.
3. The personal health services layer (what the patient can do) enables patients to schedule clinic visits, order chronic prescriptions, e-consult their physician via secured e-mail, set SMS medication reminders, e-consult a pharmacist regarding personal medications. Consultants’ answers are sent securely to the patients’ personal mobile device.
In December 2009, CHS launched secured, web-based, synchronous medical consultation via video conference. Currently 11,780 e-visits are performed monthly (May 2011). The medical encounter includes e-prescription and referral capabilities, which are biometrically signed by the physician. In December 2010, CHS launched a unique mobile health platform, which is one of the most comprehensive personal m-Health applications worldwide. An essential advantage of mobile devices is their potential to bridge the digital divide. Currently, the CHS m-Health platform is used by more than 45,000 unique users, with 75,000 laboratory result views/month, 1,100 m-consultations/month and 9,000 physician visits scheduled per month.
4. The Bio-Sensing layer (what physiological data the patient can populate) includes diagnostic means that allow remote physical examination, bio-sensors that broadcast various physiological measurements, and smart homecare devices, such as e-Pill boxes, that give seniors, patients and their caregivers the ability to stay at home and live life to its fullest. Monitored data are automatically transmitted to the patient’s Personal Health Record and to relevant medical personnel.
The monitoring layer is embedded in the chronic disease management platform and in the interactive health promotion and wellness platform. It includes tailoring of consumer-oriented medical devices and services provided by various professional personnel: physicians, nurses, pharmacists, dieticians and more.
5. The Social layer (what the patient can share). Social media networks triggered an essential change at the humanity ‘genome’ level, yet to be further defined in the upcoming years. Social media has huge potential in promoting health as it combines fun, simple yet extraordinary user experience, and bio-social-feedback. There are two major challenges in leveraging health care through social networks:
a. Our personal health information is the cornerstone for personalizing healthier lifestyle, disease management and preventative medicine. We naturally see our personal health data as a super-private territory. So, how do we bring the power of our private health information, currently locked within our Personal Health Record, into social media networks without offending basic privacy issues?
b. Disease management and preventive medicine are currently not considered ‘cool’, ‘fun’ or ‘potentially highly viral’ activities; yet, health is a major issue in everybody’s life. It seems we are missing a crucial element with huge potential for health behavioural change: the Fun Theory. Social media platforms comprise user-experience tools that could potentially break the current misconception and engage people in the daily task of taking better care of themselves.
The CHS e-Health innovation team has characterized several breakthrough applications in this unexplored territory within social media networks, fusing personal health and social media platforms without offending privacy. One of the most crucial issues regarding adoption of e-health and m-health platforms is change management. Being a ‘hot’ innovative ‘gadget’ is far from sufficient for changing health behaviours at the individual and population levels.
CHS health behaviour change management methodology includes 4 core elements:
1. Engaging two completely different populations: patients and medical teams. e-Health applications must present true added value for both medical teams and patients, engaging them through understanding and assimilating “what’s really in it for me”. Medical teams are further subdivided into physicians, nurses, pharmacists and administrative personnel—each with their own driving incentive. Resistance to change is an obstacle in many fields but it is particularly true in the conservative health industry. To successfully manage a large scale persuasive process, we treat intra-organizational human resources as “Change Agents”. Harnessing the persuasive power of ~40,000 employees requires engaging them as the primary target group. Successful recruitment has the potential of converting each patient-medical team interaction into an exposure opportunity to the new era of participatory medicine via e-health and m-health channels.
2. Implementation waves: every group of digital health products released at the same time is seen as one project. Each implementation wave focuses the organization and target populations on a defined time span. There are three major and three minor implementation waves a year.
3. Change-Support Arrow: a structured infrastructure for every implementation wave. The sub-stages in this strategy include:
Cross organizational mapping and identification of early adopters and stakeholders relevant to the implementation wave
Mapping positive or negative perceptions and designing specific marketing approaches for the distinct target groups
Intra and extra organizational marketing
Conducting intensive training and presentation sessions for groups of implementers
Running conflict-prevention activities, such as advanced tackling of potential union resistance
Training change-agents with resistance-management behavioural techniques, focused intervention for specific incidents and for key opinion leaders
Extensive presence in the clinics during the launch period, etc.
The entire process is monitored and managed continuously by a review team.
4. Closing Phase: each wave is analyzed and a “lessons-learned” session concludes the changes required in the modus operandi of the e-health project team.
PMCID: PMC3571141
e-Health; mobile health; personal health record; online visit; patient empowerment; knowledge prescription
21.  VisANT: an online visualization and analysis tool for biological interaction data 
BMC Bioinformatics  2004;5:17.
Background
New techniques for determining relationships between biomolecules of all types – genes, proteins, noncoding DNA, metabolites and small molecules – are now making a substantial contribution to the widely discussed explosion of facts about the cell. The data generated by these techniques promote a picture of the cell as an interconnected information network, with molecular components linked with one another in topologies that can encode and represent many features of cellular function. This networked view of biology brings the potential for systematic understanding of living molecular systems.
Results
We present VisANT, an application for integrating biomolecular interaction data into a cohesive, graphical interface. This software features a multi-tiered architecture for data flexibility, separating back-end modules for data retrieval from a front-end visualization and analysis package. VisANT is a freely available, open-source tool for researchers, and offers an online interface for a large range of published data sets on biomolecular interactions, including those entered by users. This system is integrated with standard databases for organized annotation, including GenBank, KEGG and SwissProt. VisANT is a Java-based, platform-independent tool suitable for a wide range of biological applications, including studies of pathways, gene regulation and systems biology.
Conclusion
VisANT has been developed to provide interactive visual mining of biological interaction data sets. The new software provides a general tool for mining and visualizing such data in the context of sequence, pathway, structure, and associated annotations. Interaction and predicted association data can be combined, overlaid, manipulated and analyzed using a variety of built-in functions. VisANT is freely available online.
doi:10.1186/1471-2105-5-17
PMCID: PMC368431  PMID: 15028117
22.  Implementation of an Enterprise Information Portal (EIP) in the Loyola University Health System 
Loyola University Chicago Stritch School of Medicine and Loyola University Medical Center have long histories in the development of applications to support the institutions' missions of education, research and clinical care. In late 1998, the institutions' application development group undertook an ambitious program to re-architect more than 10 years of legacy application development (30+ core applications) into a unified World Wide Web (WWW) environment. The primary project objectives were to construct an environment that would support the rapid development of n-tier, web-based applications while providing standard methods for user authentication/validation, security/access control and definition of a user's organizational context. The project's efforts resulted in Loyola's Enterprise Information Portal (EIP), which meets the aforementioned objectives. This environment: 1) allows access to other vertical Intranet portals (e.g., electronic medical record, patient satisfaction information and faculty effort); 2) supports end-user desktop customization; and 3) provides a means for a standardized application “look and feel.” The portal was constructed using readily available hardware and software. Server hardware consists of multiprocessor (Intel Pentium 500 MHz) Compaq 6500 servers with one gigabyte of random access memory and 75 gigabytes of hard disk storage. Microsoft SQL Server was selected to house the portal's internal (security) data structures. Netscape Enterprise Server was selected for the web server component of the environment, and Allaire's ColdFusion was chosen for the access and application tiers. Total costs for the portal environment were less than $40,000. User data storage is accomplished through two Microsoft SQL Servers and an existing Sun Microsystems enterprise server with eight processors and 750 gigabytes of disk storage running the Sybase relational database manager. Total storage capacity across all systems exceeds one terabyte. In the past 12 months, the EIP has supported the development of more than 88 applications and is used by more than 2,200 users.
PMCID: PMC2243648
23.  Future vision for the quality assurance of oncology clinical trials 
The National Cancer Institute clinical cooperative groups have been instrumental over the past 50 years in developing clinical trials and evidence-based process improvements for clinical oncology patient care. The cooperative groups are undergoing a transformation process as we further integrate molecular biology into personalized patient care and move to incorporate international partners in clinical trials. To support this vision, data acquisition and data management informatics tools must become both nimble and robust to support transformational research at an enterprise level. Information, including imaging, pathology, molecular biology, radiation oncology, surgery, systemic therapy, and patient outcome data needs to be integrated into the clinical trial charter using adaptive clinical trial mechanisms for design of the trial. This information needs to be made available to investigators using digital processes for real-time data analysis. Future clinical trials will need to be designed and completed in a timely manner facilitated by nimble informatics processes for data management. This paper discusses both past experience and future vision for clinical trials as we move to develop data management and quality assurance processes to meet the needs of the modern trial.
doi:10.3389/fonc.2013.00031
PMCID: PMC3598226  PMID: 23508883
radiation oncology; quality assurance; oncology clinical trials; National Cancer Institute; Clinical Trials Cooperative Group Program
24.  A novel collaborative e-learning platform for medical students - ALERT STUDENT 
BMC Medical Education  2014;14:143.
Background
The increasing complexity of medical curricula would benefit from adaptive computer supported collaborative learning systems that support study management using instructional design and learning object principles. However, to our knowledge, reports are scarce regarding applications developed to meet this goal that encompass the complete medical curriculum. The aim of this study was to develop and assess the usability of an adaptive computer supported collaborative learning system for medical students to manage study sessions.
Results
A study platform named ALERT STUDENT was built as a free web application. Content chunks are represented as Flashcards that hold knowledge and open ended questions. These can be created in a collaborative fashion. Multiple Flashcards can be combined into custom stacks called Notebooks that can be accessed in study Groups that belong to the user institution. The system provides a Study Mode that features text markers, text notes, timers and color-coded content prioritization based on self-assessment of open ended questions presented in a Quiz Mode. Time spent studying and Perception of knowledge are displayed for each student and peers using charts. Computer supported collaborative learning is achieved by allowing for simultaneous creation of Notebooks and self-assessment questions by many users in a pre-defined Group. Past personal performance data is retrieved when studying new Notebooks containing previously studied Flashcards. Self-report surveys showed that students highly agreed that the system was useful and were willing to use it as a reference tool.
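The content hierarchy described above (Flashcards grouped into Notebooks, shared within institutional Groups) can be pictured with a few plain record types. The sketch below is illustrative only; the names and fields are assumptions and do not reflect the platform's actual data model.

```python
# Toy model of the content hierarchy described in the abstract:
# Flashcards -> Notebooks -> Groups. Names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Flashcard:
    knowledge: str                                             # content chunk shown in Study Mode
    open_questions: List[str] = field(default_factory=list)    # self-assessment items for Quiz Mode

@dataclass
class Notebook:
    title: str
    flashcards: List[Flashcard] = field(default_factory=list)

@dataclass
class Group:
    institution: str
    notebooks: List[Notebook] = field(default_factory=list)

cardio = Notebook("Cardiology basics", [
    Flashcard("The cardiac cycle has systolic and diastolic phases.",
              ["Name the phases of the cardiac cycle."]),
])
group = Group(institution="Example Medical School", notebooks=[cardio])
print(group.notebooks[0].flashcards[0].open_questions[0])
```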
Conclusions
The platform employs various instructional design and learning object principles in a computer supported collaborative learning platform for medical students that allows for study management. The application broadens student insight over learning results and supports informed decisions based on past learning performance. It serves as a potential educational model for the medical education setting that has gathered strong positive feedback from students at our school.
This platform provides a case study on how effective blending of instructional design and learning object principles can be brought together to manage study, and takes an important step towards bringing information management tools to support study decisions and improving learning outcomes.
doi:10.1186/1472-6920-14-143
PMCID: PMC4131539  PMID: 25017028
Medical education; Computer supported collaborative learning; E-learning; Information management; Memory retention; Computer-assisted instruction; Tailored learning; Student-centered learning; Spaced repetition
25.  ‘Ultrasound is an invaluable third eye, but it can’t see everything’: a qualitative study with obstetricians in Australia 
Background
Obstetric ultrasound has come to play a significant role in obstetrics since its introduction in clinical care. Today, most pregnant women in the developed world are exposed to obstetric ultrasound examinations, and there is no doubt that the advantages of obstetric ultrasound technique have led to improvements in pregnancy outcomes. However, at the same time, the increasing use has also raised many ethical challenges. This study aimed to explore obstetricians’ experiences of the significance of obstetric ultrasound for clinical management of complicated pregnancy and their perceptions of expectant parents’ experiences.
Methods
A qualitative study was undertaken in November 2012 as part of the CROss-Country Ultrasound Study (CROCUS). Semi-structured individual interviews were held with 14 obstetricians working at two large hospitals in Victoria, Australia. Transcribed data underwent qualitative content analysis.
Results
An overall theme emerged during the analyses, ‘Obstetric ultrasound - a third eye’, reflecting the significance and meaning of ultrasound in pregnancy, and the importance of the additional information that ultrasound offers clinicians managing the surveillance of a pregnant woman and her fetus. This theme was built on four categories: I: ‘Everyday tool’ for pregnancy surveillance; II: Significance for managing complicated pregnancy; III: Differing perspectives on obstetric ultrasound; and IV: Counselling as a balancing act. In summary, the obstetricians viewed obstetric ultrasound as an invaluable tool in their everyday practice. More importantly, however, the findings emphasise some of the clinical dilemmas that arise from its use: the obstetricians’ and expectant parents’ differing perspectives and expectations of obstetric ultrasound examinations, the challenges of uncertain ultrasound findings, and how this information is conveyed and balanced by obstetricians when counselling expectant parents.
Conclusions
This study highlights a range of previously rarely acknowledged clinical dilemmas that obstetricians face in relation to the use of obstetric ultrasound. Despite being a tool of considerable significance in the surveillance of pregnancy, there are limitations and uncertainties that arise with its use that make counselling expectant parents challenging. Research is needed which further investigates the effects and experiences of the continuing worldwide rapid technical advances in surveillance of pregnancies.
doi:10.1186/1471-2393-14-363
PMCID: PMC4287579  PMID: 25336335
Australia; Clinical experiences; Clinical management; Counselling; Obstetric ultrasound; Obstetricians; Obstetrics; Perspectives; Pregnancy complications; Qualitative study
