1.  Promoting coherent minimum reporting guidelines for biological and biomedical investigations: the MIBBI project 
Nature biotechnology  2008;26(8):889-896.
The Minimum Information for Biological and Biomedical Investigations (MIBBI) project provides a resource for those exploring the range of extant minimum information checklists and fosters coordinated development of such checklists.
doi:10.1038/nbt.1411
PMCID: PMC2771753  PMID: 18688244
2.  Minimum Information about a Genotyping Experiment (MIGEN) 
Standards in Genomic Sciences  2011;5(2):224-229.
Genotyping experiments are widely used in clinical and basic research laboratories to identify associations between genetic variations and normal/abnormal phenotypes. Genotyping assay techniques vary from single genomic regions that are interrogated using PCR reactions to high throughput assays examining genome-wide sequence and structural variation. The resulting genotype data may include millions of markers for thousands of individuals, requiring various statistical, modeling or other data analysis methodologies to interpret the results. To date, there are no standards for reporting genotyping experiments. Here we present the Minimum Information about a Genotyping Experiment (MIGen) standard, defining the minimum information required for reporting genotyping experiments. The MIGen standard covers experimental design, subject description, genotyping procedure, quality control and data analysis. MIGen is a registered project under MIBBI (Minimum Information for Biological and Biomedical Investigations) and is being developed by an interdisciplinary group of experts in basic biomedical science, clinical science, biostatistics and bioinformatics. To accommodate the wide variety of techniques and methodologies applied in current and future genotyping experiments, MIGen leverages foundational concepts from the Ontology for Biomedical Investigations (OBI) for the description of the various types of planned processes and implements a hierarchical document structure. The adoption of MIGen by the research community will facilitate consistent genotyping data interpretation and independent data validation. MIGen can also serve as a framework for the development of data models for capturing and storing genotyping results and experiment metadata in a structured way, to facilitate the exchange of metadata.
doi:10.4056/sigs.1994602
PMCID: PMC3235517  PMID: 22180825
3.  Getting teams to talk: development and pilot implementation of a checklist to promote interprofessional communication in the OR 
Quality & safety in health care  2005;14(5):340-346.
Background: Pilot studies of complex interventions such as a team checklist are an essential precursor to evaluating how these interventions affect quality and safety of care. We conducted a pilot implementation of a preoperative team communication checklist. The objectives of the study were to assess the feasibility of the checklist (that is, team members' willingness and ability to incorporate it into their work processes); to describe how the checklist tool was used by operating room (OR) teams; and to describe perceived functions of the checklist discussions.
Methods: A checklist prototype was developed and OR team members were asked to implement it before 18 surgical procedures. A research assistant was present to prompt the participants, if necessary, to initiate each checklist discussion. Trained observers recorded ethnographic field notes and 11 brief feedback interviews were conducted. Observation and interview data were analyzed for trends.
Results: The checklist was implemented by the OR team in all 18 study cases. The rate of team participation was 100% (33 vascular surgery team members). The checklist discussions lasted 1–6 minutes (mean 3.5) and most commonly took place in the OR before the patient's arrival. Perceived functions of the checklist discussions included provision of detailed case related information, confirmation of details, articulation of concerns or ambiguities, team building, education, and decision making. Participants consistently valued the checklist discussions. The most significant barrier to undertaking the team checklist was variability in team members' preoperative workflow patterns, which sometimes presented a challenge to bringing the entire team together.
Conclusions: The preoperative team checklist shows promise as a feasible and efficient tool that promotes information exchange and team cohesion. Further research is needed to determine the sustainability and generalizability of the checklist intervention, to fully integrate the checklist routine into workflow patterns, and to measure its impact on patient safety.
doi:10.1136/qshc.2004.012377
PMCID: PMC1744073  PMID: 16195567
4.  Standardized Metadata for Human Pathogen/Vector Genomic Sequences 
PLoS ONE  2014;9(6):e99979.
High throughput sequencing has accelerated the determination of genome sequences for thousands of human infectious disease pathogens and dozens of their vectors. The scale and scope of these data are enabling genotype-phenotype association studies to identify genetic determinants of pathogen virulence and drug/insecticide resistance, and phylogenetic studies to track the origin and spread of disease outbreaks. To maximize the utility of genomic sequences for these purposes, it is essential that metadata about the pathogen/vector isolate characteristics be collected and made available in organized, clear, and consistent formats. Here we report the development of the GSCID/BRC Project and Sample Application Standard, developed by representatives of the Genome Sequencing Centers for Infectious Diseases (GSCIDs), the Bioinformatics Resource Centers (BRCs) for Infectious Diseases, and the U.S. National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health (NIH), informed by interactions with numerous collaborating scientists. It includes mapping to terms from other data standards initiatives, including the Genomic Standards Consortium’s minimal information (MIxS) and NCBI’s BioSample/BioProjects checklists and the Ontology for Biomedical Investigations (OBI). The standard includes data fields about characteristics of the organism or environmental source of the specimen, spatial-temporal information about the specimen isolation event, phenotypic characteristics of the pathogen/vector isolated, and project leadership and support. By modeling metadata fields into an ontology-based semantic framework and reusing existing ontologies and minimum information checklists, the application standard can be extended to support additional project-specific data fields and integrated with other data represented with comparable standards. 
The use of this metadata standard by all ongoing and future GSCID sequencing projects will provide a consistent representation of these data in the BRC resources and other repositories that leverage these data, allowing investigators to identify relevant genomic sequences and perform comparative genomics analyses that are both statistically meaningful and biologically relevant.
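The abstract above describes the standard's field groups (organism/environmental source, spatial-temporal isolation information, isolate phenotype, project leadership) without listing the actual field names. As an illustration only, a repository ingest step could check a sample record against a required-field list; the field names below are hypothetical placeholders, not the terms defined by the GSCID/BRC standard itself.

```python
# Hypothetical field names chosen to mirror the four field groups the
# abstract describes; the real GSCID/BRC standard defines its own terms.
REQUIRED_FIELDS = {
    "organism",             # organism or environmental source of the specimen
    "collection_date",      # spatial-temporal isolation-event information
    "collection_location",
    "phenotype",            # phenotypic characteristics of the isolate
    "project_contact",      # project leadership and support
}

def missing_fields(record: dict) -> set:
    """Return the required fields that are absent or empty in a record."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

sample = {
    "organism": "Vibrio cholerae",
    "collection_date": "2010-11-03",
    "collection_location": "Haiti",
    "phenotype": "",          # present but empty, so flagged
    "project_contact": "GSCID sequencing center",
}
print(missing_fields(sample))
```

A check of this shape is what lets the BRC resources mentioned above assume a consistent representation across submissions.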
doi:10.1371/journal.pone.0099979
PMCID: PMC4061050  PMID: 24936976
5.  Home Telehealth for Patients With Chronic Obstructive Pulmonary Disease (COPD) 
Executive Summary
In July 2010, the Medical Advisory Secretariat (MAS) began work on a Chronic Obstructive Pulmonary Disease (COPD) evidentiary framework, an evidence-based review of the literature surrounding treatment strategies for patients with COPD. This project emerged from a request by the Health System Strategy Division of the Ministry of Health and Long-Term Care that MAS provide them with an evidentiary platform on the effectiveness and cost-effectiveness of COPD interventions.
After an initial review of health technology assessments and systematic reviews of COPD literature, and consultation with experts, MAS identified the following topics for analysis: vaccinations (influenza and pneumococcal), smoking cessation, multidisciplinary care, pulmonary rehabilitation, long-term oxygen therapy, noninvasive positive pressure ventilation for acute and chronic respiratory failure, hospital-at-home for acute exacerbations of COPD, and telehealth (including telemonitoring and telephone support). Evidence-based analyses were prepared for each of these topics. For each technology, an economic analysis was also completed where appropriate. In addition, a review of the qualitative literature on patient, caregiver, and provider perspectives on living and dying with COPD was conducted, as were reviews of the qualitative literature on each of the technologies included in these analyses.
The Chronic Obstructive Pulmonary Disease Mega-Analysis series is made up of the following reports, which can be publicly accessed at the MAS website at: http://www.hqontario.ca/en/mas/mas_ohtas_mn.html.
Chronic Obstructive Pulmonary Disease (COPD) Evidentiary Framework
Influenza and Pneumococcal Vaccinations for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Smoking Cessation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Community-Based Multidisciplinary Care for Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Pulmonary Rehabilitation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Long-term Oxygen Therapy for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Acute Respiratory Failure Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Noninvasive Positive Pressure Ventilation for Chronic Respiratory Failure Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Hospital-at-Home Programs for Patients With Acute Exacerbations of Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Home Telehealth for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis
Cost-Effectiveness of Interventions for Chronic Obstructive Pulmonary Disease Using an Ontario Policy Model
Experiences of Living and Dying With COPD: A Systematic Review and Synthesis of the Qualitative Empirical Literature
For more information on the qualitative review, please contact Mita Giacomini at: http://fhs.mcmaster.ca/ceb/faculty_member_giacomini.htm.
For more information on the economic analysis, please visit the PATH website: http://www.path-hta.ca/About-Us/Contact-Us.aspx.
The Toronto Health Economics and Technology Assessment (THETA) collaborative has produced an associated report on patient preference for mechanical ventilation. For more information, please visit the THETA website: http://theta.utoronto.ca/static/contact.
Objective
The objective of this analysis was to conduct an evidence-based assessment of home telehealth technologies for patients with chronic obstructive pulmonary disease (COPD) in order to inform recommendations regarding the access and provision of these services in Ontario. This analysis was one of several analyses undertaken to evaluate interventions for COPD. The perspective of this assessment was that of the Ontario Ministry of Health and Long-Term Care, a provincial payer of medically necessary health care services.
Clinical Need: Condition and Target Population
Canada is facing an increase in chronic respiratory diseases due in part to its aging demographic. The projected increase in COPD will put a strain on health care payers and providers. There is therefore an increasing demand for telehealth services that improve access to health care services while maintaining or improving quality and equality of care. Many telehealth technologies however are in the early stages of development or diffusion and thus require study to define their application and potential harms or benefits. The Medical Advisory Secretariat (MAS) therefore sought to evaluate telehealth technologies for COPD.
Technology
Telemedicine (or telehealth) refers to using advanced information and communication technologies and electronic medical devices to support the delivery of clinical care, professional education, and health-related administrative services.
Generally there are 4 broad functions of home telehealth interventions for COPD:
to monitor vital signs or biological health data (e.g., oxygen saturation),
to monitor symptoms, medication, or other non-biologic endpoints (e.g., exercise adherence),
to provide information (education) and/or other support services (such as reminders to exercise or positive reinforcement), and
to establish a communication link between patient and provider.
These functions often require distinct technologies, although some devices can perform a number of these diverse functions. For the purposes of this review, MAS focused on home telemonitoring and telephone only support technologies.
Telemonitoring (or remote monitoring) refers to the use of medical devices to remotely collect a patient’s vital signs and/or other biologic health data and the transmission of those data to a monitoring station for interpretation by a health care provider.
Telephone only support refers to disease/disorder management support provided by a health care provider to a patient who is at home via telephone or videoconferencing technology in the absence of transmission of patient biologic data.
Research Questions
What is the effectiveness, cost-effectiveness, and safety of home telemonitoring compared with usual care for patients with COPD?
What is the effectiveness, cost-effectiveness, and safety of telephone only support programs compared with usual care for patients with COPD?
Research Methods
Literature Search
Search Strategy
A literature search was performed on November 3, 2010 using OVID MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, the Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Cochrane Library, and the International Network of Agencies for Health Technology Assessment (INAHTA) for studies published from January 1, 2000 until November 3, 2010. Abstracts were reviewed by a single reviewer and, for those studies meeting the eligibility criteria, full-text articles were obtained. Reference lists were also examined for any additional relevant studies not identified through the search. Articles with unknown eligibility were reviewed with a second clinical epidemiologist, and then a group of epidemiologists until consensus was established. The quality of evidence was assessed as high, moderate, low, or very low according to GRADE methodology.
Inclusion Criteria – Question #1
frequent transmission of a patient’s physiological data collected at home and without a health care professional physically present to health care professionals for routine monitoring through the use of a communication technology;
monitoring combined with a coordinated management and feedback system based on transmitted data;
telemonitoring as a key component of the intervention (subjective determination);
usual care as provided by the usual care provider for the control group;
randomized controlled trials (RCTs), controlled clinical trials (CCTs), systematic reviews, and/or meta-analyses;
published between January 1, 2000 and November 3, 2010.
Inclusion Criteria – Question #2
scheduled or frequent contact between patient and a health care professional via telephone or videoconferencing technology in the absence of transmission of patient physiological data;
monitoring combined with a coordinated management and feedback system based on transmitted data;
telephone support as a key component of the intervention (subjective determination);
usual care as provided by the usual care provider for the control group;
RCTs, CCTs, systematic reviews, and/or meta-analyses;
published between January 1, 2000 and November 3, 2010.
Exclusion Criteria
published in a language other than English;
intervention group (and not control) receiving some form of home visits by a medical professional, typically a nurse (i.e., telenursing) beyond initial technology set-up and education, to collect physiological data, or to somehow manage or treat the patient;
not recording patient or health system outcomes (e.g., technical reports testing accuracy, reliability or other development-related outcomes of a device, acceptability/feasibility studies, etc.);
not using an independent control group that received usual care (e.g., studies employing historical or periodic controls).
Outcomes of Interest
hospitalizations (primary outcome)
mortality
emergency department visits
length of stay
quality of life
other […]
Subgroup Analyses (a priori)
length of intervention (primary)
severity of COPD (primary)
Quality of Evidence
The quality of evidence assigned to individual studies was determined using a modified CONSORT Statement Checklist for Randomized Controlled Trials. (1) The CONSORT Statement was adapted to include 3 additional quality measures: the adequacy of control group description, significant differential loss to follow-up between groups, and greater than or equal to 30% study attrition. Individual study quality was defined based on total scores according to the CONSORT Statement checklist: very low (0 to < 40%), low (≥ 40 to < 60%), moderate (≥ 60 to < 80%), and high (≥ 80 to 100%).
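The banding rule above maps a study's total CONSORT checklist percentage score to a quality level. A direct transcription of those stated thresholds:

```python
def consort_quality(score_pct: float) -> str:
    """Map a CONSORT checklist percentage score to the study-quality
    level defined in this review: very low (0 to < 40%), low (>= 40 to
    < 60%), moderate (>= 60 to < 80%), high (>= 80 to 100%)."""
    if score_pct < 40:
        return "very low"
    if score_pct < 60:
        return "low"
    if score_pct < 80:
        return "moderate"
    return "high"

print(consort_quality(65.0))  # a score of 65% falls in the moderate band
```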
The quality of the body of evidence was assessed as high, moderate, low, or very low according to the GRADE Working Group criteria. The following definitions of quality were used in grading the quality of the evidence:
Summary of Findings
Six publications, representing 5 independent trials, met the eligibility criteria for Research Question #1. Three trials were RCTs reported across 4 publications, whereby patients were randomized to home telemonitoring or usual care, and 2 trials were CCTs, whereby patients or health care centers were nonrandomly assigned to intervention or usual care.
A total of 310 participants were studied across the 5 included trials. The mean age of study participants in the included trials ranged from 61.2 to 74.5 years for the intervention group and 61.1 to 74.5 years for the usual care group. The percentage of men ranged from 40% to 64% in the intervention group and 46% to 72% in the control group.
All 5 trials were performed in a moderate to severe COPD patient population. Three trials initiated the intervention following discharge from hospital. One trial initiated the intervention following a pulmonary rehabilitation program. The final trial initiated the intervention during management of patients at an outpatient clinic.
Four of the 5 trials included oxygen saturation (i.e., pulse oximetry) as one of the biological patient parameters being monitored. Additional parameters monitored included forced expiratory volume in one second, peak expiratory flow, and temperature.
There was considerable clinical heterogeneity between trials in study design, methods, and intervention/control. In relation to the telemonitoring intervention, 3 of the 5 included studies used an electronic health hub that performed multiple functions beyond the monitoring of biological parameters. One study used a pulse oximeter device alone, with modem capabilities. Finally, in 1 study, patients measured and then forwarded biological data to a nurse during a televideo consultation. Usual care varied considerably between studies.
Only one trial met the eligibility criteria for Research Question #2. The included trial was an RCT that randomized 60 patients to nurse telephone follow-up or usual care (no telephone follow-up). Participants were recruited from the medical department of an acute-care hospital in Hong Kong and began receiving follow-up after discharge from the hospital with a diagnosis of COPD (no severity restriction). The intervention itself consisted of only two 10- to 20-minute telephone calls, once between days 3 to 7 and once between days 14 to 20, involving a structured, individualized educational and supportive programme led by a nurse that focused on 3 components: assessment, management options, and evaluation.
Regarding Research Question #1:
Low to very low quality evidence (according to GRADE) finds non-significant effects or conflicting effects (of significant or non-significant benefit) for all outcomes examined when comparing home telemonitoring to usual care.
There is a trend toward a significant increase in time free of hospitalization and use of other health care services with home telemonitoring, but these findings need to be confirmed in further high-quality randomized trials.
There is severe clinical heterogeneity between studies that limits summary conclusions.
The economic impact of home telemonitoring is uncertain and requires further study.
Home telemonitoring is largely dependent on local information technologies, infrastructure, and personnel, and thus the generalizability of external findings may be low. Jurisdictions wishing to replicate home telemonitoring interventions should likely test those interventions within their jurisdictional framework before adoption, or should focus on home-grown interventions that are subjected to appropriate evaluation and proven effective.
Regarding Research Question #2:
Low quality evidence finds significant benefit in favour of telephone-only support for self-efficacy and emergency department visits when compared to usual care, but non-significant results for hospitalizations and hospital length of stay.
There are very serious issues with the generalizability of the evidence and thus additional research is required.
PMCID: PMC3384362  PMID: 23074421
6.  Maximising harm reduction in early specialty training for general practice: validation of a safety checklist 
BMC Family Practice  2012;13:62.
Background
Making health care safer is a key policy priority worldwide. In specialty training, medical educators may unintentionally impact on patient safety e.g. through failures of supervision; providing limited feedback on performance; and letting poorly developed behaviours continue unchecked. Doctors-in-training are also known to be susceptible to medical error. Ensuring that all essential educational issues are addressed during training is problematic given the scale of the tasks to be undertaken. Human error and the reliability of local systems may increase the risk of safety-critical topics being inadequately covered. However adherence to a checklist reminder may improve the reliability of task delivery and maximise harm reduction. We aimed to prioritise the most safety-critical issues to be addressed in the first 12-weeks of specialty training in the general practice environment and validate a related checklist reminder.
Methods
We used mixed methods with different groups of GP educators (n = 127) and specialty trainees (n = 9) in two Scottish regions to prioritise, develop and validate checklist content. Generation and refinement of checklist themes and items were undertaken on an iterative basis using a range of methods including small group work in dedicated workshops; a modified-Delphi process; and telephone interviews. The relevance of potential checklist items was rated using a 4-point scale content validity index to inform final inclusion.
Results
14 themes (e.g. prescribing safely; dealing with medical emergency; implications of poor record keeping; and effective & safe communication) and 47 related items (e.g. how to safety-net face-to-face or over the telephone; knowledge of practice systems for results handling; recognition of harm in children) were judged to be essential safety-critical educational issues to be covered. The mean content validity index ratio was 0.98.
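The content validity index reported above is conventionally computed, per item, as the proportion of raters judging the item relevant (a rating of 3 or 4 on the 4-point scale); the paper does not spell out its exact formula, so the sketch below assumes this standard method.

```python
def item_cvi(ratings: list[int]) -> float:
    """Item-level content validity index: the proportion of raters who
    rate the item 3 or 4 on a 4-point relevance scale. This is the
    conventional CVI method, assumed here rather than quoted from the
    paper."""
    if not ratings:
        raise ValueError("need at least one rating")
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Eight of eight raters judge an item relevant -> CVI of 1.0;
# six of eight -> 0.75, which would fall below a typical 0.78 cutoff.
print(item_cvi([4, 4, 3, 4, 3, 4, 4, 3]))
print(item_cvi([4, 4, 3, 2, 3, 4, 1, 3]))
```

Averaging item-level CVIs over the 47 retained items is one way a mean value such as the 0.98 reported above could be obtained.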
Conclusion
A checklist was developed and validated for educational supervisors to assist in the reliable delivery of safety-critical educational issues in the opening 12-week period of training, and aligned with national curriculum competencies. The tool can also be adapted for use as a self-assessment instrument by trainees to guide patient safety-related learning needs. Dissemination and implementation of the checklist and self-rating scale are proceeding on a national, voluntary basis with plans to evaluate its feasibility and educational impact.
doi:10.1186/1471-2296-13-62
PMCID: PMC3418214  PMID: 22721273
7.  CONSORT for Reporting Randomized Controlled Trials in Journal and Conference Abstracts: Explanation and Elaboration 
PLoS Medicine  2008;5(1):e20.
Background
Clear, transparent, and sufficiently detailed abstracts of conferences and journal articles related to randomized controlled trials (RCTs) are important, because readers often base their assessment of a trial solely on information in the abstract. Here, we extend the CONSORT (Consolidated Standards of Reporting Trials) Statement to develop a minimum list of essential items, which authors should consider when reporting the results of a RCT in any journal or conference abstract.
Methods and Findings
We generated a list of items from existing quality assessment tools and empirical evidence. A three-round, modified-Delphi process was used to select items. In all, 109 participants were invited to participate in an electronic survey; the response rate was 61%. Survey results were presented at a meeting of the CONSORT Group in Montebello, Canada, January 2007, involving 26 participants, including clinical trialists, statisticians, epidemiologists, and biomedical editors. Checklist items were discussed for eligibility into the final checklist. The checklist was then revised to ensure that it reflected discussions held during and subsequent to the meeting. CONSORT for Abstracts recommends that abstracts relating to RCTs have a structured format. Items should include details of trial objectives; trial design (e.g., method of allocation, blinding/masking); trial participants (i.e., description, numbers randomized, and number analyzed); interventions intended for each randomized group and their impact on primary efficacy outcomes and harms; trial conclusions; trial registration name and number; and source of funding. We recommend the checklist be used in conjunction with this explanatory document, which includes examples of good reporting, rationale, and evidence, when available, for the inclusion of each item.
Conclusions
CONSORT for Abstracts aims to improve reporting of abstracts of RCTs published in journal articles and conference proceedings. It will help authors of abstracts of these trials provide the detail and clarity needed by readers wishing to assess a trial's validity and the applicability of its results.
The authors extend the CONSORT Statement to develop a minimum list of essential items to consider when reporting the results of a randomized trial in any journal or conference abstract.
doi:10.1371/journal.pmed.0050020
PMCID: PMC2211558  PMID: 18215107
8.  RCN4GSC Workshop Report: Managing Data at the Interface of Biodiversity and (Meta)Genomics, March 2011 
Standards in Genomic Sciences  2012;7(1):159-165.
Building on the planning efforts of the RCN4GSC project, a workshop was convened in San Diego to bring together experts from genomics and metagenomics, biodiversity, ecology, and bioinformatics with the charge to identify potential for positive interactions and progress, especially building on successes at establishing data standards by the GSC and by the biodiversity and ecological communities. Until recently, the contribution of microbial life to the biomass and biodiversity of the biosphere was largely overlooked (because it was resistant to systematic study). Now, emerging genomic and metagenomic tools are making investigation possible. Initial research findings suggest that major advances are in the offing. Although different research communities share some overlapping concepts and traditions, they differ significantly in sampling approaches, vocabularies and workflows. Likewise, their definitions of ‘fitness for use’ for data differ significantly, as this concept stems from the specific research questions of most importance in the different fields. Nevertheless, there is little doubt that there is much to be gained from greater coordination and integration. As a first step toward interoperability of the information systems used by the different communities, participants agreed to conduct a case study on two of the leading data standards from the two formerly disparate fields: (a) GSC’s standard checklists for genomics and metagenomics and (b) TDWG’s Darwin Core standard, used primarily in taxonomy and systematic biology.
doi:10.4056/sigs.3156511
PMCID: PMC3570804  PMID: 23451294
9.  A checklist for identifying determinants of practice: A systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice 
Background
Determinants of practice are factors that might prevent or enable improvements. Several checklists, frameworks, taxonomies, and classifications of determinants of healthcare professional practice have been published. In this paper, we describe the development of a comprehensive, integrated checklist of determinants of practice (the TICD checklist).
Methods
We performed a systematic review of frameworks of determinants of practice followed by a consensus process. We searched electronic databases and screened the reference lists of key background documents. Two authors independently assessed titles and abstracts, and potentially relevant full text articles. We compiled a list of attributes that a checklist should have: comprehensiveness, relevance, applicability, simplicity, logic, clarity, usability, suitability, and usefulness. We assessed included articles using these criteria and collected information about the theory, model, or logic underlying how the factors (determinants) were selected, described, and grouped, the strengths and weaknesses of the checklist, and the determinants and the domains in each checklist. We drafted a preliminary checklist based on an aggregated list of determinants from the included checklists, and finalized the checklist by a consensus process among implementation researchers.
Results
We screened 5,778 titles and abstracts and retrieved 87 potentially relevant papers in full text. Several of these papers had references to papers that we also retrieved in full text. We also checked potentially relevant papers we had on file that were not retrieved by the searches. We included 12 checklists. None of these were completely comprehensive when compared to the aggregated list of determinants and domains. We developed a checklist with 57 potential determinants of practice grouped in seven domains: guideline factors, individual health professional factors, patient factors, professional interactions, incentives and resources, capacity for organisational change, and social, political, and legal factors. We also developed five worksheets to facilitate the use of the checklist.
Conclusions
Based on a systematic review and a consensus process we developed a checklist that aims to be comprehensive and to build on the strengths of each of the 12 included checklists. The checklist is accompanied with five worksheets to facilitate its use in implementation research and quality improvement projects.
doi:10.1186/1748-5908-8-35
PMCID: PMC3617095  PMID: 23522377
10.  MOPED enables discoveries through consistently processed proteomics data 
Journal of proteome research  2013;13(1):107-113.
The Model Organism Protein Expression Database (MOPED, http://moped.proteinspire.org) is an expanding proteomics resource to enable biological and biomedical discoveries. MOPED aggregates simple, standardized and consistently processed summaries of protein expression and metadata from proteomics (mass spectrometry) experiments from human and model organisms (mouse, worm and yeast). The latest version of MOPED adds new estimates of protein abundance and concentration, as well as relative (differential) expression data. MOPED provides a new updated query interface that allows users to explore information by organism, tissue, localization, condition, experiment, or keyword. MOPED supports the Human Proteome Project’s efforts to generate chromosome- and disease-specific proteomes by providing links from proteins to chromosome and disease information, as well as many complementary resources. MOPED supports a new omics metadata checklist in order to harmonize data integration, analysis and use. MOPED’s development is driven by the user community, which spans 90 countries, guiding future development that will transform MOPED into a multi-omics resource. MOPED encourages users to submit data in a simple format. They can use the metadata checklist to generate a data publication for this submission. As a result, MOPED will provide even greater insights into complex biological processes and systems and enable deeper and more comprehensive biological and biomedical discoveries.
doi:10.1021/pr400884c
PMCID: PMC4039175  PMID: 24350770
11.  Computational disease modeling – fact or fiction? 
BMC Systems Biology  2009;3:56.
Background
Biomedical research is changing due to the rapid accumulation of experimental data at an unprecedented scale, revealing increasing degrees of complexity of biological processes. The life sciences are facing a transition from a descriptive to a mechanistic approach that reveals principles of cells, cellular networks, organs, and their interactions across several spatial and temporal scales. There are two conceptual traditions in biological computational modeling. The bottom-up approach emphasizes complex intracellular molecular models and is well represented within the systems biology community. In contrast, the physics-inspired top-down modeling strategy identifies and selects features of (presumably) essential relevance to the phenomena of interest and combines available data in models of modest complexity.
Results
The workshop, "ESF Exploratory Workshop on Computational disease Modeling", examined the challenges that computational modeling faces in contributing to the understanding and treatment of complex multi-factorial diseases. Participants at the meeting agreed on two general conclusions. First, we identified the critical importance of developing analytical tools for dealing with model and parameter uncertainty. Second, the development of predictive hierarchical models spanning several scales beyond intracellular molecular networks was identified as a major objective. This contrasts with the current focus within the systems biology community on complex molecular modeling.
Conclusion
During the workshop it became obvious that diverse scientific modeling cultures (from computational neuroscience, theory, data-driven machine-learning approaches, agent-based modeling, network modeling and stochastic-molecular simulations) would benefit from intense cross-talk on shared theoretical issues in order to make progress on clinically relevant problems.
doi:10.1186/1752-0509-3-56
PMCID: PMC2697138  PMID: 19497118
12.  Overview of the BioCreative III Workshop 
BMC Bioinformatics  2011;12(Suppl 8):S1.
Background
The overall goal of the BioCreative Workshops is to promote the development of text mining and text processing tools that are useful to communities of researchers and database curators in the biological sciences. To this end, BioCreative I was held in 2004, BioCreative II in 2007, and BioCreative II.5 in 2009. Each of these workshops involved human-annotated test data for several basic tasks in text mining applied to the biomedical literature. Participants in the workshops were invited to compete in the tasks by constructing software systems to perform the tasks automatically and were given scores based on their performance. The results of these workshops have benefited the community in several ways. They have 1) provided evidence for the most effective methods currently available to solve specific problems; 2) revealed the current state of the art for performance on those problems; and 3) provided gold standard data and results on that data by which future advances can be gauged. This special issue contains overview papers for the three tasks of BioCreative III.
Results
The BioCreative III Workshop was held in September of 2010 and continued the tradition of a challenge evaluation on several tasks judged basic to effective text mining in biology, including a gene normalization (GN) task and two protein-protein interaction (PPI) tasks. In total the Workshop involved the work of twenty-three teams. Thirteen teams participated in the GN task which required the assignment of EntrezGene IDs to all named genes in full text papers without any species information being provided to a system. Ten teams participated in the PPI article classification task (ACT) requiring a system to classify and rank a PubMed® record as belonging to an article either having or not having “PPI relevant” information. Eight teams participated in the PPI interaction method task (IMT) where systems were given full text documents and were required to extract the experimental methods used to establish PPIs and a text segment supporting each such method. Gold standard data was compiled for each of these tasks and participants competed in developing systems to perform the tasks automatically.
BioCreative III also introduced a new interactive task (IAT), run as a demonstration task. The goal was to develop an interactive system to facilitate a user’s annotation of the unique database identifiers for all the genes appearing in an article. This task included ranking genes by importance (based preferably on the amount of described experimental information regarding genes). There was also an optional task to assist the user in finding the most relevant articles about a given gene. For BioCreative III, a user advisory group (UAG) was assembled and played an important role 1) in producing some of the gold standard annotations for the GN task, 2) in critiquing IAT systems, and 3) in providing guidance for a future more rigorous evaluation of IAT systems. Six teams participated in the IAT demonstration task and received feedback on their systems from the UAG group. Besides innovations in the GN and PPI tasks making them more realistic and practical and the introduction of the IAT task, discussions were begun on community data standards to promote interoperability and on user requirements and evaluation metrics to address utility and usability of systems.
Conclusions
In this paper we give a brief history of the BioCreative Workshops and how they relate to other text mining competitions in biology. This is followed by a synopsis of the three tasks GN, PPI, and IAT in BioCreative III, with figures for best participant performance on the GN and PPI tasks. These results are discussed and compared with results from previous BioCreative Workshops, and we conclude that the best-performing systems for GN, PPI-ACT and PPI-IMT in realistic settings are not sufficient for fully automatic use. This provides evidence for the importance of interactive systems, and in the remainder of the paper we present our vision of how best to construct an interactive system for a GN- or PPI-like task.
doi:10.1186/1471-2105-12-S8-S1
PMCID: PMC3269932  PMID: 22151647
13.  The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for Reporting Observational Studies 
PLoS Medicine  2007;4(10):e296.
Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalisability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September 2004 with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. Eighteen items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available on the Web sites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.
This paper describes the recommendations of The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative on what should be included in an accurate and complete report of an observational study.
doi:10.1371/journal.pmed.0040296
PMCID: PMC2020495  PMID: 17941714
14.  Developing content for a process-of-care checklist for use in intensive care units: a dual-method approach to establishing construct validity 
Background
In the intensive care unit (ICU), checklists can be used to support the delivery of quality and consistent clinical care. While studies have reported important benefits for clinical checklists in this context, lack of formal validity testing in the literature prompted the study aim; to develop relevant ‘process-of-care’ checklist statements, using rigorously applied and reported methods that were clear, concise and reflective of the current evidence base. These statements will be sufficiently instructive for use by physicians during ICU clinical rounds.
Methods
A dual-method approach was utilized; semi-structured interviews with local clinicians; and rounds of surveys to an expert Delphi panel. The interviews helped determine checklist item inclusion/exclusion prior to the first round Delphi survey. The panel for the modified-Delphi technique consisted of local intensivists and a state-wide ICU quality committee. Minimum standards for consensus agreement were set prior to the distribution of questionnaires, and rounds of surveys continued until consensus was achieved.
Results
A number of important issues such as overlap with other initiatives were identified in interviews with clinicians and integrated into the Delphi questionnaire, but no additional checklist items were suggested, demonstrating adequate checklist coverage sourced from the literature. These items were verified by local clinicians as being relevant to ICU and important elements of care that required checking during ward rounds. Two rounds of Delphi surveys were required to reach consensus on nine checklist statements: nutrition, pain management, sedation, deep vein thrombosis and stress ulcer prevention, head-of-bed elevation, blood glucose levels, readiness to extubate, and medications.
Conclusions
Statements were developed as the most clear, concise, evidence-informed and instructive statements for use during clinical rounds in an ICU. Initial evidence in support of the checklist’s construct validity was established prior to further prospective evaluation in the same ICU.
doi:10.1186/1472-6963-13-380
PMCID: PMC3852734  PMID: 24088360
Checklists; Construct validity; Delphi technique; Healthcare quality improvement; Patient safety; Critical care
15.  The tissue microarray data exchange specification: A community-based, open source tool for sharing tissue microarray data 
Background
Tissue Microarrays (TMAs) allow researchers to examine hundreds of small tissue samples on a single glass slide. The information held in a single TMA slide may easily involve Gigabytes of data. To benefit from TMA technology, the scientific community needs an open source TMA data exchange specification that will convey all of the data in a TMA experiment in a format that is understandable to both humans and computers. A data exchange specification for TMAs allows researchers to submit their data to journals and to public data repositories and to share or merge data from different laboratories. In May 2001, the Association of Pathology Informatics (API) hosted the first in a series of four workshops, co-sponsored by the National Cancer Institute, to develop an open, community-supported TMA data exchange specification.
Methods
A draft tissue microarray data exchange specification was developed through workshop meetings. The first workshop confirmed community support for the effort and urged the creation of an open XML-based specification. This was to evolve in steps with approval for each step coming from the stakeholders in the user community during open workshops. By the fourth workshop, held October, 2002, a set of Common Data Elements (CDEs) was established as well as a basic strategy for organizing TMA data in self-describing XML documents.
Results
The TMA data exchange specification is a well-formed XML document with four required sections: 1) Header, containing the specification's Dublin Core identifiers; 2) Block, describing the paraffin-embedded array of tissues; 3) Slide, describing the glass slides produced from the Block; and 4) Core, containing all data related to the individual tissue samples contained in the array. Eighty CDEs, conforming to the ISO 11179 specification for data elements, constitute the XML tags used in the TMA data exchange specification. A set of six simple semantic rules describes the complete data exchange specification. Anyone using the data exchange specification can validate their TMA files using a software implementation written in Perl and distributed as a supplemental file with this publication.
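The four-section layout described above can be sketched as a skeletal document. This is only an illustration of the structure; the lower-level element names and values below are assumptions for the sketch, not the official Common Data Element tags.

```python
import xml.etree.ElementTree as ET

# Build a skeletal TMA-style document with the four required sections
# named in the specification: Header, Block, Slide, Core. Element names
# below those sections, and all values, are illustrative assumptions.
root = ET.Element("tma_document")

header = ET.SubElement(root, "header")
# The Header carries Dublin Core identifiers for the submission.
ET.SubElement(header, "dc_title").text = "Example TMA experiment"
ET.SubElement(header, "dc_creator").text = "Example Lab"

block = ET.SubElement(root, "block")  # the paraffin-embedded tissue array
ET.SubElement(block, "block_identifier").text = "BLOCK-1"

slide = ET.SubElement(root, "slide")  # a glass slide cut from the block
ET.SubElement(slide, "slide_identifier").text = "SLIDE-1"

core = ET.SubElement(root, "core")    # one tissue sample in the array
ET.SubElement(core, "core_identifier").text = "CORE-1-A"
ET.SubElement(core, "histology").text = "adenocarcinoma"

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

A validator along the lines of the Perl implementation mentioned above would then check such a file for the required sections and CDE-conformant tags.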
Conclusion
The TMA data exchange specification is now available in draft form with community-approved Common Data Elements and a community-approved general file format and data structure. The specification can be freely used by the scientific community. Efforts sponsored by the Association for Pathology Informatics to refine the draft TMA data exchange specification are expected to continue for at least two more years. The interested public is invited to participate in these open efforts. Information on future workshops will be posted on the API web site.
doi:10.1186/1472-6947-3-5
PMCID: PMC165444  PMID: 12769826
16.  Accelerating the Development of 21st-Century Toxicology: Outcome of a Human Toxicology Project Consortium Workshop 
Toxicological Sciences  2011;125(2):327-334.
The U.S. National Research Council (NRC) report on “Toxicity Testing in the 21st Century” calls for a fundamental shift in the way that chemicals are tested for human health effects and evaluated in risk assessments. The new approach would move toward in vitro methods, typically using human cells in a high-throughput context. The in vitro methods would be designed to detect significant perturbations to “toxicity pathways,” i.e., key biological pathways that, when sufficiently perturbed, lead to adverse health outcomes. To explore progress on the report’s implementation, the Human Toxicology Project Consortium hosted a workshop on 9–10 November 2010 in Washington, DC. The Consortium is a coalition of several corporations, a research institute, and a non-governmental organization dedicated to accelerating the implementation of 21st-century toxicology as aligned with the NRC vision. The goal of the workshop was to identify practical and scientific ways to accelerate implementation of the NRC vision. The workshop format consisted of plenary presentations, breakout group discussions, and concluding commentaries. The program faculty was drawn from industry, academia, government, and public interest organizations. Most presentations summarized ongoing efforts to modernize toxicology testing and approaches, each with some overlap with the NRC vision. In light of these efforts, the workshop identified recommendations for accelerating implementation of the NRC vision, including greater strategic coordination and planning across projects (facilitated by a steering group), the development of projects that test the proof of concept for implementation of the NRC vision, and greater outreach and communication across stakeholder communities.
doi:10.1093/toxsci/kfr248
PMCID: PMC3262850  PMID: 21948868
toxicity testing in the 21st century; safety assessment; in vitro alternatives; National Research Council
17.  Guidelines for information about therapy experiments: a proposal on best practice for recording experimental data on cancer therapy 
BMC Research Notes  2012;5:10.
Background
Biology, biomedicine and healthcare have become data-driven enterprises, where scientists and clinicians need to generate, access, validate, interpret and integrate different kinds of experimental and patient-related data. Thus, recording and reporting of data in a systematic and unambiguous fashion is crucial to allow aggregation and re-use of data. This paper reviews the benefits of existing biomedical data standards and focuses on key elements to record experiments for therapy development. Specifically, we describe the experiments performed in molecular, cellular, animal and clinical models. We also provide an example set of elements for a therapy tested in a phase I clinical trial.
Findings
We introduce the Guidelines for Information About Therapy Experiments (GIATE), a minimum information checklist creating a consistent framework to transparently report the purpose, methods and results of the therapeutic experiments. A discussion on the scope, design and structure of the guidelines is presented, together with a description of the intended audience. We also present complementary resources such as a classification scheme, and two alternative ways of creating GIATE information: an electronic lab notebook and a simple spreadsheet-based format. Finally, we use GIATE to record the details of the phase I clinical trial of CHT-25 for patients with refractory lymphomas. The benefits of using GIATE for this experiment are discussed.
Conclusions
While data standards are being developed to facilitate data sharing and integration in various aspects of experimental medicine, such as genomics and clinical data, no previous work focused on therapy development. We propose a checklist for therapy experiments and demonstrate its use in the 131Iodine labeled CHT-25 chimeric antibody cancer therapy. As future work, we will expand the set of GIATE tools to continue to encourage its use by cancer researchers, and we will engineer an ontology to annotate GIATE elements and facilitate unambiguous interpretation and data integration.
doi:10.1186/1756-0500-5-10
PMCID: PMC3285520  PMID: 22226027
18.  Managing personal health information in distributed research network environments 
Background
Studying rare outcomes, new interventions and diverse populations often requires collaborations across multiple health research partners. However, transferring healthcare research data from one institution to another can increase the risk of data privacy and security breaches.
Methods
A working group of multi-site research programmers evaluated the need for tools to support data security and data privacy. The group determined that data privacy support tools should: 1) allow for a range of allowable Protected Health Information (PHI); 2) clearly identify what type of data should be protected under the Health Insurance Portability and Accountability Act (HIPAA); and 3) help analysts identify which protected health information data elements are allowable in a given project and how they should be protected during data transfer. Based on these requirements we developed two performance support tools to support data programmers and site analysts in exchanging research data.
Results
The first tool, a workplan template, guides the lead programmer through effectively communicating the details of multi-site programming, including how to run the program, what output the program will create, and whether the output is expected to contain protected health information. The second performance support tool is a checklist that site analysts can use to ensure that multi-site program output conforms to expectations and does not contain protected health information beyond what is allowed under the multi-site research agreements.
Conclusions
Together the two tools create a formal multi-site programming workflow designed to reduce the chance of accidental PHI disclosure.
doi:10.1186/1472-6947-13-116
PMCID: PMC3851487  PMID: 24099117
HIPAA; Protected health information; Distributed research; Privacy; Security
19.  Consolidating newborn screening efforts in the Asia Pacific region 
Journal of Community Genetics  2012;3(1):35-45.
Many of the countries in the Asia Pacific Region, particularly those with depressed and developing economies, are just initiating newborn screening programs for selected metabolic and other congenital disorders. The cultural, geographic, language, and economic differences that exist throughout the region add to the challenges of developing sustainable newborn screening systems. There are currently more developing programs than developed programs within the region. Newborn screening activities in the Asia Pacific Region are particularly important since births there account for approximately half of the world’s births. To date, there have been two workshops to facilitate formation of the Asia Pacific Newborn Screening Collaboratives. The 1st Workshop on Consolidating Newborn Screening Efforts in the Asia Pacific Region occurred in Cebu, Philippines, on March 30–April 1, 2008, as a satellite meeting to the 7th Asia Pacific Conference on Human Genetics. The second workshop was held on June 4–5, 2010, in Manila, Philippines. Workshop participants included key policy-makers, service providers, researchers, and consumer advocates from 11 countries with 50% or less newborn screening coverage. Expert lectures included experiences in the United States and the Netherlands, international quality assurance activities and ongoing and potential research activities. Additional meeting support was provided by the U.S. National Institutes of Health, the Centers for Disease Control and Prevention, the U.S. National Newborn Screening and Genetics Resource Center, the International Society for Neonatal Screening, and the March of Dimes. As part of both meeting activities, participants shared individual experiences in program implementation with formal updates of screening information for each country. This report reviews the activities and country reports from two Workshops on Consolidating Newborn Screening Efforts in the Asia Pacific Region with emphasis on the second workshop. 
It also updates the literature on screening activities and implementation/expansion challenges in the participating countries.
doi:10.1007/s12687-011-0076-7
PMCID: PMC3266966  PMID: 22271560
Newborn screening; Asia Pacific; Cebu Declaration; Manila Declaration
20.  So many filters, so little time: the development of a search filter appraisal checklist 
Objectives:
The authors developed a tool to assess the quality of search filters designed to retrieve records for studies with specific research designs (e.g., diagnostic studies).
Methods:
The UK InterTASC Information Specialists' Sub-Group (ISSG), a group of experienced health care information specialists, reviewed the literature to evaluate existing search filter appraisal tools and determined that existing tools were inadequate for their needs. The group held consensus meetings to develop a new filter appraisal tool consisting of a search filter appraisal checklist and a structured abstract. ISSG members tested the final checklist using three published search filters.
Results:
The detailed ISSG Search Filter Appraisal Checklist captures relevance criteria and methods used to develop and test search filters. The checklist includes categorical and descriptive responses and is accompanied by a structured abstract that provides a summary of key quality features of a filter.
Discussion:
The checklist is a comprehensive appraisal tool that can assist health sciences librarians and others in choosing search filters. The checklist reports filter design methods and search performance measures, such as sensitivity and precision. The checklist can also aid filter developers by indicating information on core methods that should be reported to help assess filter suitability. The generalizability of the checklist for non-methods filters remains to be explored.
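The performance measures the checklist asks filter developers to report have standard definitions. A minimal illustration, using a hypothetical gold standard of relevant records:

```python
# Standard retrieval measures reported for search filters:
#   sensitivity (recall) = relevant records retrieved / all relevant records
#   precision            = relevant records retrieved / all records retrieved
def sensitivity_precision(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    return hits / len(relevant), hits / len(retrieved)

# Hypothetical example: 4 relevant records exist; the filter retrieves 5,
# of which 3 are relevant.
sens, prec = sensitivity_precision(
    retrieved={"r1", "r2", "r3", "x1", "x2"},
    relevant={"r1", "r2", "r3", "r4"},
)
print(round(sens, 2), round(prec, 2))  # 0.75 0.6
```

A highly sensitive filter misses few relevant records at the cost of retrieving more irrelevant ones; the checklist's structured abstract lets a librarian weigh that trade-off before choosing a filter.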
doi:10.3163/1536-5050.96.4.011
PMCID: PMC2568852  PMID: 18974813
21.  Meeting Report from the Genomic Standards Consortium (GSC) Workshops 6 and 7 
Standards in Genomic Sciences  2009;1(1):68-71.
This report summarizes the proceedings of the 6th and 7th workshops of the Genomic Standards Consortium (GSC), held back-to-back in 2008. GSC 6 focused on furthering the activities of GSC working groups, while GSC 7 focused on outreach to the wider community. GSC 6 was held October 10-14, 2008 at the European Bioinformatics Institute, Cambridge, United Kingdom and included a two-day workshop focused on the refinement of the Genomic Contextual Data Markup Language (GCDML). GSC 7 was held as the opening day of the International Congress on Metagenomics 2008 in San Diego, California. Major achievements of these combined meetings included an agreement from the International Nucleotide Sequence Database Consortium (INSDC) to create a “MIGS” keyword for capturing “Minimum Information about a Genome Sequence”-compliant information within INSDC (DDBJ/EMBL/GenBank) records, the launch of GCDML 1.0, MIGS compliance of the first set of “Genomic Encyclopedia of Bacteria and Archaea” project genomes, approval of a proposal to extend MIGS to 16S rRNA sequences within a “Minimum Information about an Environmental Sequence”, finalization of plans for the GSC eJournal, “Standards in Genomic Sciences” (SIGS), and the formation of a GSC Board. Subsequently, the GSC has been awarded a Research Co-ordination Network (RCN4GSC) grant from the National Science Foundation, held the first SIGS workshop and launched the journal. The GSC will also host outreach workshops at both ISMB 2009 and PSB 2010 focused on “Metagenomics, Metadata and MetaAnalysis” (M3). Further information about the GSC and its range of activities, including videos of all the presentations at GSC 7, can be found at http://gensc.org.
doi:10.4056/sigs.25165
PMCID: PMC3035212  PMID: 21304639
22.  The development and deployment of Common Data Elements for tissue banks for translational research in cancer – An emerging standard based approach for the Mesothelioma Virtual Tissue Bank 
BMC Cancer  2008;8:91.
Background
Recent advances in genomics, proteomics, and the increasing demands for biomarker validation studies have catalyzed changes in the landscape of cancer research, fueling the development of tissue banks for translational research. A result of this transformation is the need for sufficient quantities of clinically annotated and well-characterized biospecimens to support the growing needs of the cancer research community. Clinical annotation allows samples to be better matched to the research question at hand and ensures that experimental results are better understood and can be verified. To facilitate and standardize such annotation in bio-repositories, we have combined three accepted and complementary sets of data standards: the College of American Pathologists (CAP) Cancer Checklists, the protocols recommended by the Association of Directors of Anatomic and Surgical Pathology (ADASP) for pathology data, and the North American Association of Central Cancer Registry (NAACCR) elements for epidemiology, therapy and follow-up data. Combining these approaches creates a set of International Standards Organization (ISO) – compliant Common Data Elements (CDEs) for the mesothelioma tissue banking initiative supported by the National Institute for Occupational Safety and Health (NIOSH) of the Center for Disease Control and Prevention (CDC).
Methods
The purpose of the project is to develop a core set of data elements for annotating mesothelioma specimens, following standards established by the CAP checklist, ADASP cancer protocols, and the NAACCR elements. We have associated these elements with modeling architecture to enhance both syntactic and semantic interoperability. The system has a Java-based multi-tiered architecture based on Unified Modeling Language (UML).
Results
Common Data Elements were developed using controlled vocabulary, ontology and semantic modeling methodology. The CDEs for each case are of different types: demographic, epidemiologic data, clinical history, pathology data including block level annotation, and follow-up data including treatment, recurrence and vital status. The end result of such an effort would eventually provide an increased sample set to the researchers, and makes the system interoperable between institutions.
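The grouping of CDEs by type described above can be pictured as a simple structured record per case. The field names below are hypothetical stand-ins for the registered CAP/ADASP/NAACCR elements, not the actual CDE identifiers.

```python
from dataclasses import dataclass, field

# Illustrative grouping of annotation elements by the categories named
# in the text: demographic, epidemiologic, clinical history, pathology
# (including block-level annotation), and follow-up. All field names
# and values are hypothetical.
@dataclass
class MesotheliomaCase:
    demographic: dict = field(default_factory=dict)
    epidemiologic: dict = field(default_factory=dict)
    clinical_history: dict = field(default_factory=dict)
    pathology: dict = field(default_factory=dict)  # includes block-level data
    follow_up: dict = field(default_factory=dict)  # treatment, recurrence, vital status

case = MesotheliomaCase(
    demographic={"age_at_diagnosis": 62, "sex": "M"},
    epidemiologic={"asbestos_exposure": "occupational"},
    pathology={"histologic_type": "epithelioid", "block_count": 12},
    follow_up={"vital_status": "alive"},
)
print(case.pathology["histologic_type"])
```

Because each category maps to data already collected in a medical center's normal workflow, records of this shape can be populated without extra data entry, which is the interoperability argument the authors make.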
Conclusion
The CAP, ADASP and the NAACCR elements represent widely established data elements that are utilized in many cancer centers. Herein, we have shown these representations can be combined and formalized to create a core set of annotations for banked mesothelioma specimens. Because these data elements are collected as part of the normal workflow of a medical center, data sets developed on the basis of these elements can be easily implemented and maintained.
doi:10.1186/1471-2407-8-91
PMCID: PMC2329649  PMID: 18397527
23.  Text mining for the biocuration workflow 
Molecular biology has become heavily dependent on biological knowledge encoded in expert curated biological databases. As the volume of biological literature increases, biocurators need help in keeping up with the literature; (semi-) automated aids for biocuration would seem to be an ideal application for natural language processing and text mining. However, to date, there have been few documented successes for improving biocuration throughput using text mining. Our initial investigations took place for the workshop on ‘Text Mining for the BioCuration Workflow’ at the third International Biocuration Conference (Berlin, 2009). We interviewed biocurators to obtain workflows from eight biological databases. This initial study revealed high-level commonalities, including (i) selection of documents for curation; (ii) indexing of documents with biologically relevant entities (e.g. genes); and (iii) detailed curation of specific relations (e.g. interactions); however, the detailed workflows also showed many variabilities. Following the workshop, we conducted a survey of biocurators. The survey identified biocurator priorities, including the handling of full text indexed with biological entities and support for the identification and prioritization of documents for curation. It also indicated that two-thirds of the biocuration teams had experimented with text mining and almost half were using text mining at that time. Analysis of our interviews and survey provide a set of requirements for the integration of text mining into the biocuration workflow. These can guide the identification of common needs across curated databases and encourage joint experimentation involving biocurators, text mining developers and the larger biomedical research community.
doi:10.1093/database/bas020
PMCID: PMC3328793  PMID: 22513129
24.  Setting research priorities to reduce malaria burden in a post graduate training programme: lessons learnt from the Nigeria field epidemiology and laboratory training programme scientific workshop 
Although several research groups within institutions in Nigeria have been involved in extensive malaria research, the link between the research community and policy formulation has not been optimal. The workshop aimed to assist post graduate students to identify knowledge gaps and to develop relevant malaria-related research proposals in line with identified research priorities. A training needs assessment questionnaire was completed by 22 students two weeks prior to the workshop, and a one-page concept letter was received from 40 residents. Thirty students were selected based on the following six criteria: answerability and ethics; efficacy and impact; deliverability and affordability; scalability and sustainability; health systems, partnership and community involvement; and equity in achieved disease burden reduction. The workshop was held over a three-day period. The participants were 30 Nigeria Field Epidemiology and Laboratory Training Programme (NFELTP) residents from cohorts 4 and 5. Ten technical papers were presented by experts from academia, the National Malaria Elimination Programme (NMEP), NFELTP Faculty and implementing partners including CDC/PMI. Draft proposals were developed and presented by the residents. The “strongest need” for training was on malaria prevention, followed by malaria diagnosis. Forty-seven new research questions were generated, while the 19 developed by the NMEP were shared. Evaluation revealed that all (100%) students agreed that the workshop objectives were met. Full proposals were developed by some of the residents. A debriefing meeting was held with the NMEP coordinator to discuss funding of the projects. Future collaborative partnership has developed, as the residents have supported NMEP to develop a research protocol for a national evaluation.
Research prioritization workshops are required in most training programmes to ensure that students embark on studies that address the research needs of their country and foster collaborative linkages.
doi:10.11604/pamj.2014.18.226.4800
PMCID: PMC4239453  PMID: 25422701
Setting malaria research priorities; Postgraduate training programme; Malaria workshop; Malaria research; Postgraduate student research
25.  Agile parallel bioinformatics workflow management using Pwrake 
BMC Research Notes  2011;4:331.
Background
In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to meet the demand for workflow management. However, such systems tend to be too heavyweight for actual bioinformatics practice. We observe that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment, are often prioritized in scientific workflow management. These priorities align well with the agile software development method, which proceeds through iterative development phases of trial and error.
Here, we show the application of the scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool, Rake, whose flexibility has been demonstrated in the astronomy domain. We therefore hypothesized that Pwrake also offers advantages for actual bioinformatics workflows.
Findings
We implemented Pwrake workflows to process next-generation sequencing data using the Genome Analysis Toolkit (GATK) and Dindel. The GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that, in practice, scientific workflow development iterates over two phases: the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate the modularity of the GATK and Dindel workflows.
Conclusions
Pwrake enables agile management of scientific workflows in the bioinformatics domain. Its internal domain-specific language, built on Ruby, gives rakefiles the flexibility needed for writing scientific workflows. Furthermore, the readability and maintainability of rakefiles may facilitate sharing workflows within the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows.
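The separation the abstract describes, keeping tunable parameters apart from the workflow definition so that the "parameter adjustment phase" touches only one place, can be sketched with plain Rake in Ruby. This is a minimal illustration, not Pwrake or the authors' actual GATK/Dindel rakefiles; the task names, the PARAMS hash, and the recorded steps are hypothetical placeholders standing in for real pipeline commands.

```ruby
require 'rake'
extend Rake::DSL  # make the task() DSL available in a plain script

# Hypothetical parameters, kept separate from the workflow definition
# so only this hash changes during parameter adjustment.
PARAMS = { sample: 'sampleA', min_quality: 20 }

steps = []  # records execution order, standing in for real commands

task :align do
  steps << "align #{PARAMS[:sample]}"
end

# Rake's dependency syntax: :call_variants runs only after :align.
task :call_variants => :align do
  steps << "call variants (min quality #{PARAMS[:min_quality]})"
end

Rake::Task[:call_variants].invoke
puts steps.inspect
```

Invoking the final task resolves its prerequisites first, so the alignment step runs before variant calling; in a real Pwrake setup, independent prerequisites of this kind could be dispatched in parallel.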
doi:10.1186/1756-0500-4-331
PMCID: PMC3180464  PMID: 21899774
