1.  eCAT: Online electronic lab notebook for scientific research 
Background
eCAT is an electronic lab notebook (ELN) developed by Axiope Limited. It is the first online ELN, the first ELN to be developed in close collaboration with lab scientists, and the first ELN to be targeted at researchers in non-commercial institutions. eCAT was developed in response to feedback from users of a predecessor product. By late 2006 the basic concept had been clarified: a highly scalable web-based collaboration tool that possessed the basic capabilities of commercial ELNs, i.e. a permissions system, controlled sharing, an audit trail, electronic signature and search, and a front end that looked like the electronic counterpart to a paper notebook.
Results
During the development of the beta version, feedback was incorporated from many groups including the FDA's Center for Biologics Evaluation & Research, Uppsala University, Children's Hospital Boston, Alex Swarbrick's lab at the Garvan Institute in Sydney and Martin Spitaler at Imperial College. More than 100 individuals and groups worldwide then participated in the beta testing between September 2008 and June 2009. The generally positive response is reflected in the following quote about how one lab is making use of eCAT: "Everyone uses it as an electronic notebook, so they can compile the diverse collections of data that we generate as biologists, such as images and spreadsheets. We use it to take minutes of meetings. We also use it to manage our common stocks of antibodies, plasmids and so on. Finally, perhaps the most important feature for us is the ability to link records, reagents and experiments."
Conclusion
By developing eCAT in close collaboration with lab scientists, Axiope has come up with a practical and easy-to-use product that meets the need of scientists to manage, store and share data online. eCAT is already being perceived as a product that labs can continue to use as their data management and sharing grows in scale and complexity.
doi:10.1186/1759-4499-1-4
PMCID: PMC2809322  PMID: 20334629
2.  LabTrove: A Lightweight, Web Based, Laboratory “Blog” as a Route towards a Marked Up Record of Work in a Bioscience Research Laboratory 
PLoS ONE  2013;8(7):e67460.
Background
The electronic laboratory notebook (ELN) has the potential to replace the paper notebook with a marked-up digital record that can be searched and shared. However, it is a challenge to achieve these benefits without losing the usability and flexibility of traditional paper notebooks. We investigate a blog-based platform that addresses the issues associated with the development of a flexible system for recording scientific research.
Methodology/Principal Findings
We chose a blog-based approach with the journal characteristics of traditional notebooks in mind, recognizing the potential for linking together procedures, materials, samples, observations, data, and analysis reports. We implemented the LabTrove blog system as a server process written in PHP, using a MySQL database to persist posts and other research objects. We incorporated a metadata framework that is both extensible and flexible while promoting consistency and structure where appropriate. Our experience thus far is that LabTrove is capable of providing a successful electronic laboratory recording system.
Conclusions/Significance
LabTrove implements a one-item one-post system, which enables us to uniquely identify each element of the research record, such as data, samples, and protocols. This unique association between a post and a research element affords advantages for monitoring the use of materials and samples and for inspecting research processes. The combination of the one-item one-post system, consistent metadata, and full-text search provides us with a much more effective record than a paper notebook. The LabTrove approach provides a route towards reconciling the tensions and challenges that lie ahead in working towards the long-term goals for ELNs. LabTrove, an electronic laboratory notebook (ELN) system from the Smart Research Framework, is based on a blog-type framework with full access control and supports the scientific record-keeping requirements of reproducibility, reuse, repurposing, and redeployment.
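The one-item one-post model maps naturally onto a simple relational schema: one row per research object, plus link and metadata tables. The sketch below (Python with SQLite) illustrates the idea only; LabTrove itself is a PHP application backed by MySQL, and these table and column names are invented for the example.

```python
import sqlite3

# Minimal sketch of a one-item one-post store: every research object
# (data file, sample, protocol) is exactly one post with a unique ID,
# so it can be cited, linked, and tracked individually.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (
    post_id INTEGER PRIMARY KEY,   -- the unique identifier of the object
    kind    TEXT NOT NULL,         -- 'data' | 'sample' | 'protocol' ...
    title   TEXT NOT NULL,
    body    TEXT
);
CREATE TABLE post_metadata (       -- flexible key/value metadata per post
    post_id INTEGER REFERENCES posts(post_id),
    key     TEXT NOT NULL,
    value   TEXT NOT NULL
);
CREATE TABLE post_links (          -- one post citing another, e.g. a
    from_post INTEGER REFERENCES posts(post_id),
    to_post   INTEGER REFERENCES posts(post_id)
);
""")

# A sample, a protocol, and a measurement that links to both.
conn.executemany("INSERT INTO posts (post_id, kind, title) VALUES (?, ?, ?)", [
    (1, "sample",   "Extract A"),
    (2, "protocol", "HPLC run, v2"),
    (3, "data",     "HPLC trace 2013-07-01"),
])
conn.executemany("INSERT INTO post_links VALUES (3, ?)", [(1,), (2,)])

# Monitoring use of materials: which posts did the measurement draw on?
for (title,) in conn.execute(
        "SELECT p.title FROM post_links l "
        "JOIN posts p ON p.post_id = l.to_post WHERE l.from_post = 3"):
    print(title)   # Extract A / HPLC run, v2
```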
doi:10.1371/journal.pone.0067460
PMCID: PMC3720848  PMID: 23935832
3.  A Molecular Host Response Assay to Discriminate Between Sepsis and Infection-Negative Systemic Inflammation in Critically Ill Patients: Discovery and Validation in Independent Cohorts 
PLoS Medicine  2015;12(12):e1001916.
Background
Systemic inflammation is a whole body reaction having an infection-positive (i.e., sepsis) or infection-negative origin. It is important to distinguish between these two etiologies early and accurately because this has significant therapeutic implications for critically ill patients. We hypothesized that a molecular classifier based on peripheral blood RNAs could be discovered that would (1) determine which patients with systemic inflammation had sepsis, (2) be robust across independent patient cohorts, (3) be insensitive to disease severity, and (4) provide diagnostic utility. The goal of this study was to identify and validate such a molecular classifier.
Methods and Findings
We conducted an observational, non-interventional study of adult patients recruited from tertiary intensive care units (ICUs). Biomarker discovery utilized an Australian cohort (n = 105) consisting of 74 cases (sepsis patients) and 31 controls (post-surgical patients with infection-negative systemic inflammation) recruited at five tertiary care settings in Brisbane, Australia, from June 3, 2008, to December 22, 2011. A four-gene classifier combining CEACAM4, LAMP1, PLA2G7, and PLAC8 RNA biomarkers was identified. This classifier, designated SeptiCyte Lab, was validated using reverse transcription quantitative PCR and receiver operating characteristic (ROC) curve analysis in five cohorts (n = 345) from the Netherlands. Patients for validation were selected from the Molecular Diagnosis and Risk Stratification of Sepsis study (ClinicalTrials.gov, NCT01905033), which recruited ICU patients from the Academic Medical Center in Amsterdam and the University Medical Center Utrecht. Patients recruited from November 30, 2012, to August 5, 2013, were eligible for inclusion in the present study. Validation cohort 1 (n = 59) consisted entirely of unambiguous cases and controls; SeptiCyte Lab gave an area under curve (AUC) of 0.95 (95% CI 0.91–1.00) in this cohort. ROC curve analysis of an independent, more heterogeneous group of patients (validation cohorts 2–5; 249 patients after excluding 37 patients with an infection likelihood of “possible”) gave an AUC of 0.89 (95% CI 0.85–0.93). Disease severity, as measured by Sequential Organ Failure Assessment (SOFA) score or Acute Physiology and Chronic Health Evaluation (APACHE) IV score, was not a significant confounding variable. The diagnostic utility of SeptiCyte Lab was evaluated by comparison to various clinical and laboratory parameters available to a clinician within 24 h of ICU admission. SeptiCyte Lab was significantly better at differentiating cases from controls than all tested parameters, both singly and in various logistic combinations, and more than halved the diagnostic error rate compared to procalcitonin in all tested cohorts and cohort combinations. Limitations of this study relate to (1) cohort compositions that do not perfectly reflect the composition of the intended use population, (2) potential biases that could be introduced as a result of the current lack of a gold standard for diagnosing sepsis, and (3) lack of a complete, unbiased comparison to C-reactive protein.
Conclusions
SeptiCyte Lab is a rapid molecular assay that may be clinically useful in managing ICU patients with systemic inflammation. Further study in population-based cohorts is needed to validate this assay for clinical use.
Thomas Yager and colleagues develop and validate a molecular host assay to discriminate between sepsis and infection-negative systemic inflammation in critically ill patients.
Editors' Summary
Background
Our immune system protects us from disease by recognizing and killing bacteria and other infectious organisms (pathogens). An important part of the immune response is inflammation, a process that is triggered by infection, tissue injury, and other stimuli. Localized inflammation, which is characterized by swelling, redness, heat, pain, and loss of function, restricts the tissue damage caused by these stimuli to the affected site. Unfortunately, the immune system occasionally initiates a series of reactions that lead to widespread (systemic) inflammation. Systemic inflammation can damage vital organs and can be life-threatening. For example, it has been estimated that 30%–50% of people who develop severe infection-positive systemic inflammation (sepsis) die. Clinical management of systemic inflammation depends on whether the condition is caused by an infection or by another stimulus: many patients with systemic inflammation need to be given corticosteroids and other anti-inflammatory agents, but only patients with infection-positive systemic inflammation need to be given antibiotics.
Why Was This Study Done?
Clinicians need to be able to distinguish between sepsis and infection-negative systemic inflammation quickly and accurately when treating critically ill patients. Patients with sepsis need to be given antibiotics as soon as possible to clear the infection, but giving antibiotics to someone with infection-negative systemic inflammation may do more harm than good. Current diagnostic approaches for identifying patients with sepsis rely on isolating and identifying the causative pathogen, but it can take more than 24 hours to obtain a result and pathogens are only isolated from about 30% of patients with clinically confirmed sepsis. Analysis of the host immune response might provide a quicker, more accurate way to differentiate between sepsis and infection-negative systemic inflammation. Thus, measurement of blood levels of procalcitonin (a pro-inflammatory biomarker) or of C-reactive protein (which the liver releases in response to inflammation) can sometimes differentiate between the two conditions. However, because the immune response is complex, measurement of a single biomarker in the blood is unlikely to be diagnostically helpful in every patient. Here, the researchers identify and validate a set of RNA molecules present in the blood capable of discriminating between sepsis and infection-negative systemic inflammation in critically ill patients (a “molecular classifier”).
What Did the Researchers Do and Find?
Using microarray analysis (a technique that measures the RNA levels of thousands of different genes) to compare the RNA molecules present in the blood in a discovery cohort of 74 patients with sepsis and 31 post-surgical patients with infection-negative systemic inflammation, the researchers identified a molecular classifier (SeptiCyte Lab) consisting of four RNA biomarkers. They validated this classifier using five additional patient cohorts, RT-qPCR (a technique that measures the amounts of specific RNAs in biological samples), and receiver operating characteristic (ROC) curve analysis (a graphical method for determining diagnostic test performance). The AUC (area under a ROC curve) quantifies the ability of a test to discriminate between individuals with and without a disease: a perfect test that yields no false positives or false negatives has an AUC of 1.00, whereas a test no better at identifying true positives than flipping a coin has an AUC of 0.5. The overall AUC for the test in the five validation cohorts was 0.88. Importantly, disease severity did not affect the performance of SeptiCyte Lab, and the assay was better at discriminating sepsis from infection-negative systemic inflammation than all tested clinical and laboratory parameters (singly and in combination) that can currently be obtained within 24 hours of admission to an intensive care unit.
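For readers who want to see the AUC made concrete, the toy calculation below implements the standard rank (Mann-Whitney) formulation in Python: the AUC equals the probability that a randomly chosen case scores higher than a randomly chosen control. The scores are invented for illustration and are not SeptiCyte Lab data.

```python
# Illustrative only: how an AUC such as the 0.88 reported for the
# validation cohorts is computed from classifier scores.

def auc(case_scores, control_scores):
    """Probability that a randomly chosen case scores higher than a
    randomly chosen control (ties count half) -- the Mann-Whitney
    formulation of the area under the ROC curve."""
    pairs = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                pairs += 1.0
            elif c == k:
                pairs += 0.5
    return pairs / (len(case_scores) * len(control_scores))

cases    = [2.1, 1.4, 3.0, 2.6, 0.9]   # hypothetical sepsis scores
controls = [0.7, 1.1, 1.6, 0.4, 1.2]   # hypothetical SIRS scores

print(f"AUC = {auc(cases, controls):.2f}")
# 1.00 = perfect separation; 0.50 = no better than a coin flip.
```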
What Do These Findings Mean?
These findings suggest that SeptiCyte Lab might be useful in the management of critically ill patients with systemic inflammation. In the validation cohorts tested, this rapid diagnostic test (SeptiCyte Lab currently takes 4–6 hours to perform but could be developed to provide a result in about 90 minutes) was able to correctly identify 90% of patients with sepsis, although it also mistakenly identified some patients as having sepsis when they had infection-negative systemic inflammation. Further clinical studies are needed before SeptiCyte Lab can be used clinically because of limitations in the current study. For example, the patients in the validation cohorts imperfectly reflect the composition of real-world patients with systemic inflammation. However, the researchers suggest that, in combination with other clinical parameters and clinical judgment, SeptiCyte Lab might help physicians make appropriate therapeutic decisions for patients with systemic inflammation.
Additional Information
This list of resources contains links that can be accessed when viewing the PDF on a device or via the online version of the article at http://dx.doi.org/10.1371/journal.pmed.1001916.
The UK National Health Service Choices website provides information on sepsis
The US National Institute of General Medical Sciences provides a fact sheet on sepsis
The UK Sepsis Trust is a not-for-profit organization that provides information and personal stories about sepsis
The World Sepsis Day website also provides information and personal stories about sepsis
MedlinePlus provides links to further information about sepsis (in English and Spanish)
Wikipedia has pages on sepsis, systemic inflammatory response syndrome, and ROC curve analysis (note that Wikipedia is a free online encyclopedia that anyone can edit; available in several languages)
doi:10.1371/journal.pmed.1001916
PMCID: PMC4672921  PMID: 26645559
4.  Data Management in the Modern Structural Biology and Biomedical Research Environment 
Modern high-throughput structural biology laboratories produce vast amounts of raw experimental data. The traditional method of data reduction is very simple—results are summarized in peer-reviewed publications, which are hopefully published in high-impact journals. By their nature, publications include only the most important results derived from experiments that may have been performed over the course of many years. The main content of the published paper is a concise compilation of these data, an interpretation of the experimental results, and a comparison of these results with those obtained by other scientists.
Due to an avalanche of structural biology manuscripts submitted to scientific journals, in many recent cases descriptions of experimental methodology (and sometimes even experimental results) are pushed to supplementary materials that are only published online and sometimes may not be reviewed as thoroughly as the main body of a manuscript. Trouble may arise when experimental results contradict those obtained by other scientists, which requires (in the best case) the reexamination of the original raw data or independent repetition of the experiment according to the published description of the experiment. There are reports that a significant fraction of results obtained in academic laboratories cannot be reproduced in an industrial environment (Begley CG & Ellis LM, Nature 483(7391):531–3, 2012). This is not an indication of scientific fraud but rather reflects the inadequate description of experiments performed on different equipment and on biological samples that were produced with disparate methods. For that reason the goal of a modern data management system is not only the simple replacement of the laboratory notebook by an electronic one but also the creation of a sophisticated, internally consistent, scalable data management system that will combine data obtained by a variety of experiments performed by various individuals on diverse equipment. All data should be stored in a core database that can be used by custom applications to prepare internal reports, statistics, and perform other functions specific to the research pursued in a particular laboratory.
This chapter presents a general overview of the methods of data management and analysis used by structural genomics (SG) programs. In addition to a review of the existing literature on the subject, we also present our experience in developing two SG data management systems, UniTrack and LabDB. The description is targeted to a general audience, as some technical details have been (or will be) published elsewhere. The focus is on “data management,” meaning the process of gathering, organizing, and storing data, but we also briefly discuss “data mining,” the process of analysis that ideally leads to an understanding of the data. In other words, data mining is the conversion of data into information. Clearly, effective data management is a precondition for any useful data mining. If done properly, gathering details on millions of experiments on thousands of proteins and making them publicly available for analysis—even after the projects themselves have ended—may turn out to be one of the most important benefits of SG programs.
doi:10.1007/978-1-4939-0354-2_1
PMCID: PMC4086192  PMID: 24590705
Databases; Data management; Structural biology; LIMS; PSI; CSGID
5.  LabKey Server: An open source platform for scientific data integration, analysis and collaboration 
BMC Bioinformatics  2011;12:71.
Background
Broad-based collaborations are becoming increasingly common among disease researchers. For example, the Global HIV Enterprise has united cross-disciplinary consortia to speed progress towards HIV vaccines through coordinated research across the boundaries of institutions, continents and specialties. New, end-to-end software tools for data and specimen management are necessary to achieve the ambitious goals of such alliances. These tools must enable researchers to organize and integrate heterogeneous data early in the discovery process, standardize processes, gain new insights into pooled data and collaborate securely.
Results
To meet these needs, we enhanced the LabKey Server platform, formerly known as CPAS. This freely available, open source software is maintained by professional engineers who use commercially proven practices for software development and maintenance. Recent enhancements support: (i) submitting specimen requests across collaborating organizations; (ii) graphically defining new experimental data types, metadata and wizards for data collection; (iii) transitioning experimental results from a multiplicity of spreadsheets to custom tables in a shared database; (iv) securely organizing, integrating, analyzing, visualizing and sharing diverse data types, from clinical records to specimens to complex assays; (v) interacting dynamically with external data sources; (vi) tracking study participants and cohorts over time; (vii) developing custom interfaces using client libraries; and (viii) authoring custom visualizations in a built-in R scripting environment.
Diverse research organizations have adopted and adapted LabKey Server, including consortia within the Global HIV Enterprise. Atlas is an installation of LabKey Server that has been tailored to serve these consortia. It is in production use and demonstrates the core capabilities of LabKey Server. Atlas now has over 2,800 active user accounts originating from approximately 36 countries and 350 organizations. It tracks roughly 27,000 assay runs, 860,000 specimen vials and 1,300,000 vial transfers.
Conclusions
Sharing data, analysis tools and infrastructure can speed the efforts of large research consortia by enhancing efficiency and enabling new insights. The Atlas installation of LabKey Server demonstrates the utility of the LabKey platform for collaborative research. Stable, supported builds of LabKey Server are freely available for download at http://www.labkey.org. Documentation and source code are available under the Apache License 2.0.
doi:10.1186/1471-2105-12-71
PMCID: PMC3062597  PMID: 21385461
6.  A Semantic Problem Solving Environment for Integrative Parasite Research: Identification of Intervention Targets for Trypanosoma cruzi 
Background
Research on the biology of parasites requires a sophisticated and integrated computational platform to query and analyze large volumes of data, representing both unpublished (internal) and public (external) data sources. Effective analysis of an integrated data resource using knowledge discovery tools would significantly aid biologists in conducting their research, for example, through identifying various intervention targets in parasites and in deciding the future direction of ongoing as well as planned projects. A key challenge in achieving this objective is the heterogeneity between the internal lab data, usually stored as flat files, Excel spreadsheets or custom-built databases, and the external databases. Reconciling the different forms of heterogeneity and effectively integrating data from disparate sources is a nontrivial task for biologists and requires a dedicated informatics infrastructure. Thus, we developed an integrated environment using Semantic Web technologies that may provide biologists the tools for managing and analyzing their data, without the need for acquiring in-depth computer science knowledge.
Methodology/Principal Findings
We developed a semantic problem-solving environment (SPSE) that uses ontologies to integrate internal lab data with external resources in a Parasite Knowledge Base (PKB), which has the ability to query across these resources in a unified manner. The SPSE includes Web Ontology Language (OWL)-based ontologies, experimental data with its provenance information represented using the Resource Description Format (RDF), and a visual querying tool, Cuebee, that features integrated use of Web services. We demonstrate the use and benefit of SPSE using example queries for identifying gene knockout targets of Trypanosoma cruzi for vaccine development. Answers to these queries involve looking up multiple sources of data, linking them together and presenting the results.
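As a concrete illustration of the querying pattern described above, the sketch below builds a few RDF triples and runs a SPARQL query over them with Python's rdflib. The vocabulary (the example.org namespace, the predicate names, and the gene identifier) is invented for this sketch and is not the actual PKB ontology or the Cuebee interface.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Toy RDF graph standing in for lab data integrated into a knowledge base.
# The EX vocabulary and the gene ID are invented for this illustration.
EX = Namespace("http://example.org/parasite#")

g = Graph()
gene = EX.gene_0001                       # a hypothetical T. cruzi gene
g.add((gene, RDF.type, EX.Gene))
g.add((gene, EX.expressedInStage, Literal("amastigote")))
g.add((gene, EX.humanOrtholog, Literal("none")))

# The kind of unified query the paper demonstrates: candidate knockout
# targets expressed in the relevant life-cycle stage with no human ortholog.
results = g.query("""
    PREFIX ex: <http://example.org/parasite#>
    SELECT ?gene WHERE {
        ?gene a ex:Gene ;
              ex:expressedInStage "amastigote" ;
              ex:humanOrtholog "none" .
    }
""")
for row in results:
    print(row.gene)   # -> http://example.org/parasite#gene_0001
```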
Conclusion/Significance
The SPSE facilitates parasitologists in leveraging the growing, but disparate, parasite data resources by offering an integrative platform that utilizes Semantic Web techniques, while keeping their workload increase minimal.
Author Summary
Effective research in parasite biology requires analyzing experimental lab data in the context of constantly expanding public data resources. Integrating lab data with public resources is particularly difficult for biologists who may not possess significant computational skills to acquire and process heterogeneous data stored at different locations. Therefore, we developed a semantic problem-solving environment (SPSE) that allows parasitologists to query their lab data integrated with public resources using ontologies. An ontology specifies a common vocabulary and formal relationships among the terms that describe, in this case, an organism and its experimental data and processes. SPSE supports capturing and querying provenance information, which is metadata on the experimental processes and data recorded for reproducibility, and includes a visual query-processing tool to formulate complex queries without learning the query language syntax. We demonstrate the significance of SPSE in identifying gene knockout targets for T. cruzi. The overall goal of SPSE is to help researchers discover new or existing knowledge that is implicitly present in the data but not always easily detected. Results demonstrate improved usefulness of SPSE over existing lab systems and approaches, and support for complex query design that is otherwise difficult to achieve without knowledge of query language syntax.
doi:10.1371/journal.pntd.0001458
PMCID: PMC3260319  PMID: 22272365
7.  Guidelines for information about therapy experiments: a proposal on best practice for recording experimental data on cancer therapy 
BMC Research Notes  2012;5:10.
Background
Biology, biomedicine and healthcare have become data-driven enterprises, where scientists and clinicians need to generate, access, validate, interpret and integrate different kinds of experimental and patient-related data. Thus, recording and reporting of data in a systematic and unambiguous fashion is crucial to allow aggregation and re-use of data. This paper reviews the benefits of existing biomedical data standards and focuses on key elements to record experiments for therapy development. Specifically, we describe the experiments performed in molecular, cellular, animal and clinical models. We also provide an example set of elements for a therapy tested in a phase I clinical trial.
Findings
We introduce the Guidelines for Information About Therapy Experiments (GIATE), a minimum information checklist that creates a consistent framework for transparently reporting the purpose, methods and results of therapeutic experiments. A discussion on the scope, design and structure of the guidelines is presented, together with a description of the intended audience. We also present complementary resources such as a classification scheme, and two alternative ways of creating GIATE information: an electronic lab notebook and a simple spreadsheet-based format. Finally, we use GIATE to record the details of the phase I clinical trial of CHT-25 for patients with refractory lymphomas. The benefits of using GIATE for this experiment are discussed.
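To make the idea of a minimum information checklist concrete, here is a small Python sketch of a GIATE-style record with a completeness check, in the spirit of the spreadsheet-based format mentioned above. The field names are assumptions chosen for the example, not the published GIATE element set.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the fields below are assumptions for this
# example, not the actual GIATE minimum information elements.
@dataclass
class TherapyExperimentRecord:
    experiment_id: str
    therapy: str                 # e.g. a labeled antibody therapy
    model: str                   # molecular | cellular | animal | clinical
    purpose: str
    methods: str
    results: str
    missing: list = field(default_factory=list)

    def validate(self):
        """Flag empty mandatory elements -- the basic check that a
        minimum information checklist implies."""
        for name in ("purpose", "methods", "results"):
            if not getattr(self, name).strip():
                self.missing.append(name)
        return not self.missing

rec = TherapyExperimentRecord(
    experiment_id="GIATE-0001",
    therapy="radiolabeled chimeric antibody (hypothetical entry)",
    model="clinical",
    purpose="Phase I dose escalation in refractory lymphoma",
    methods="",            # deliberately left blank
    results="Pending",
)
print(rec.validate(), rec.missing)   # -> False ['methods']
```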
Conclusions
While data standards are being developed to facilitate data sharing and integration in various aspects of experimental medicine, such as genomics and clinical data, no previous work has focused on therapy development. We propose a checklist for therapy experiments and demonstrate its use in the 131Iodine-labeled CHT-25 chimeric antibody cancer therapy. As future work, we will expand the set of GIATE tools to continue to encourage its use by cancer researchers, and we will engineer an ontology to annotate GIATE elements and facilitate unambiguous interpretation and data integration.
doi:10.1186/1756-0500-5-10
PMCID: PMC3285520  PMID: 22226027
8.  Make it better but don't change anything 
With massive amounts of data being generated in electronic format, there is a need in basic science laboratories to adopt new methods for tracking and analyzing data. An electronic laboratory notebook (ELN) is not just a replacement for a paper lab notebook; it is a new method of storing and organizing data while maintaining the data entry flexibility and legal recording functions of paper notebooks. Paper notebooks are regarded as highly flexible since the user can configure them to store almost anything that can be written or physically pasted onto the pages. However, data retrieval and data sharing from paper notebooks are labor-intensive processes, and notebooks can be misplaced, a single point of failure that loses all entries in the volume. Additional features provided by electronic notebooks include searchable indices, data sharing, automatic archiving for security against loss and ease of data duplication. Furthermore, ELNs can be tasked with additional functions not commonly found in paper notebooks, such as inventory control. While ELNs have been on the market for some time now, adoption of an ELN in academic basic science laboratories has been lagging. Development and adoption of ELNs in research laboratories have been restrained by the sheer variety and frequency of changes in protocols, and by the need for users to control notebook configuration outside the framework of professional IT staff support. In this commentary, we will look at some of the issues and experiences in academic laboratories that have proved challenging in implementing an electronic lab notebook.
doi:10.1186/1759-4499-1-5
PMCID: PMC2810290  PMID: 20098591
9.  The Clinical and Economic Impact of Point-of-Care CD4 Testing in Mozambique and Other Resource-Limited Settings: A Cost-Effectiveness Analysis 
PLoS Medicine  2014;11(9):e1001725.
Emily Hyle and colleagues conduct a cost-effectiveness analysis to estimate the clinical and economic impact of point-of-care CD4 testing compared to laboratory-based tests in Mozambique.
Background
Point-of-care CD4 tests at HIV diagnosis could improve linkage to care in resource-limited settings. Our objective is to evaluate the clinical and economic impact of point-of-care CD4 tests compared to laboratory-based tests in Mozambique.
Methods and Findings
We use a validated model of HIV testing, linkage, and treatment (CEPAC-International) to examine two strategies of immunological staging in Mozambique: (1) laboratory-based CD4 testing (LAB-CD4) and (2) point-of-care CD4 testing (POC-CD4). Model outcomes include 5-y survival, life expectancy, lifetime costs, and incremental cost-effectiveness ratios (ICERs). Input parameters include linkage to care (LAB-CD4, 34%; POC-CD4, 61%), probability of correctly detecting antiretroviral therapy (ART) eligibility (sensitivity: LAB-CD4, 100%; POC-CD4, 90%) or ART ineligibility (specificity: LAB-CD4, 100%; POC-CD4, 85%), and test cost (LAB-CD4, US$10; POC-CD4, US$24). In sensitivity analyses, we vary POC-CD4-specific parameters, as well as cohort and setting parameters to reflect a range of scenarios in sub-Saharan Africa. We consider ICERs less than three times the per capita gross domestic product in Mozambique (US$570) to be cost-effective, and ICERs less than one times the per capita gross domestic product in Mozambique to be very cost-effective. Projected 5-y survival in HIV-infected persons with LAB-CD4 is 60.9% (95% CI, 60.9%–61.0%), increasing to 65.0% (95% CI, 64.9%–65.1%) with POC-CD4. Discounted life expectancy and per person lifetime costs with LAB-CD4 are 9.6 y (95% CI, 9.6–9.6 y) and US$2,440 (95% CI, US$2,440–US$2,450) and increase with POC-CD4 to 10.3 y (95% CI, 10.3–10.3 y) and US$2,800 (95% CI, US$2,790–US$2,800); the ICER of POC-CD4 compared to LAB-CD4 is US$500/year of life saved (YLS) (95% CI, US$480–US$520/YLS). POC-CD4 improves clinical outcomes and remains near the very cost-effective threshold in sensitivity analyses, even if point-of-care CD4 tests have lower sensitivity/specificity and higher cost than published values. In other resource-limited settings with fewer opportunities to access care, POC-CD4 has a greater impact on clinical outcomes and remains cost-effective compared to LAB-CD4. Limitations of the analysis include the uncertainty around input parameters, which is examined in sensitivity analyses. The potential added benefits due to decreased transmission are excluded; their inclusion would likely further increase the value of POC-CD4 compared to LAB-CD4.
Conclusions
POC-CD4 at the time of HIV diagnosis could improve survival and be cost-effective compared to LAB-CD4 in Mozambique, if it improves linkage to care. POC-CD4 could have the greatest impact on mortality in settings where resources for HIV testing and linkage are most limited.
Editors' Summary
Background
AIDS has already killed about 36 million people, and a similar number of people (mostly living in low- and middle-income countries) are currently infected with HIV, the virus that causes AIDS. HIV destroys immune system cells (including CD4 cells, a type of lymphocyte), leaving infected individuals susceptible to other infections. Early in the AIDS epidemic, HIV-infected individuals usually died within ten years of infection. After effective antiretroviral therapy (ART) became available in 1996, HIV infection became a chronic condition for people living in high-income countries, but because ART was expensive, HIV/AIDS remained a fatal disease in low- and middle-income countries. In 2003, the international community began to work towards achieving universal ART coverage, and by the end of 2012, 61% of HIV-positive people (nearly 10 million individuals) living in low- and middle-income countries who were eligible for treatment—because their CD4 cell count had fallen below 350 cells/mm3 of blood or they had developed an AIDS-defining condition—were receiving treatment.
Why Was This Study Done?
In sub-Saharan Africa nearly 50% of HIV-infected people eligible for treatment remain untreated, in part because of poor linkage between HIV diagnosis and clinical care. After patients receive a diagnosis of HIV infection, their eligibility for ART initiation is determined by sending a blood sample away to a laboratory for a CD4 cell count (the current threshold for treatment is a CD4 count below 500/mm3, although low- and middle-income countries have yet to update their national guidelines from the threshold CD4 count below 350/mm3). Patients have to return to the clinic to receive their test results and to initiate ART if they are eligible for treatment. Unfortunately, many patients are “lost” during this multistep process in resource-limited settings. Point-of-care CD4 tests at HIV diagnosis—tests that are done on the spot and provide results the same day—might help to improve linkage to care in such settings. Here, the researchers use a mathematical model to assess the clinical outcomes and cost-effectiveness of point-of-care CD4 testing at the time of HIV diagnosis compared to laboratory-based testing in Mozambique, where about 1.5 million HIV-positive individuals live.
What Did the Researchers Do and Find?
The researchers used a validated model of HIV testing, linkage, and treatment called the Cost-Effectiveness of Preventing AIDS Complications–International (CEPAC-I) model to compare the clinical impact, costs, and cost-effectiveness of point-of-care and laboratory CD4 testing in newly diagnosed HIV-infected patients in Mozambique. They used published data to estimate realistic values for various model input parameters, including the probability of linkage to care following the use of each test, the accuracy of the tests, and the cost of each test. At a CD4 threshold for treatment of 250/mm3, the model predicted that 60.9% of newly diagnosed HIV-infected people would survive five years if their immunological status was assessed using the laboratory-based CD4 test, whereas 65% would survive five years if the point-of-care test was used. Predicted life expectancies were 9.6 and 10.3 years with the laboratory-based and point-of-care tests, respectively, and the per person lifetime costs (which mainly reflect treatment costs) associated with the two tests were US$2,440 and US$2,800, respectively. Finally, the incremental cost-effectiveness ratio—calculated as the incremental costs of one therapeutic intervention compared to another divided by the incremental benefits—was US$500 per year of life saved, when comparing use of the point-of-care test with a laboratory-based test.
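The incremental cost-effectiveness arithmetic can be reproduced directly from the point estimates quoted above; the short Python check below does so (the small difference from the published US$500/YLS reflects rounding of the published inputs).

```python
# ICER = (incremental lifetime cost) / (incremental life expectancy),
# using the point estimates reported above.
cost_lab, cost_poc = 2440.0, 2800.0   # per-person lifetime costs, US$
le_lab, le_poc = 9.6, 10.3            # discounted life expectancy, years

icer = (cost_poc - cost_lab) / (le_poc - le_lab)
print(f"ICER = US${icer:.0f} per year of life saved")   # ~US$514/YLS

# The study's benchmark: below 1x Mozambique's per capita GDP (US$570)
# counts as very cost-effective; below 3x as cost-effective.
gdp = 570.0
print("very cost-effective" if icer < gdp else
      "cost-effective" if icer < 3 * gdp else "not cost-effective")
```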
What Do These Findings Mean?
These findings suggest that, compared to laboratory-based CD4 testing, point-of-care testing at HIV diagnosis could improve survival for HIV-infected individuals in Mozambique. Because the per capita gross domestic product in Mozambique is US$570, these findings also indicate that point-of-care testing would be very cost-effective compared to laboratory-based testing (an incremental cost-effectiveness ratio less than one times the per capita gross domestic product is regarded as very cost-effective). As with all modeling studies, the accuracy of these findings depends on the assumptions built into the model and on the accuracy of the input parameters. However, the point-of-care strategy averted deaths and was estimated to be cost-effective compared to the laboratory-based test over a wide range of input parameter values reflecting Mozambique and several other resource-limited settings that the researchers modeled. Importantly, these “sensitivity analyses” suggest that point-of-care CD4 testing is likely to have the greatest impact on HIV-related deaths and be economically efficient in settings in sub-Saharan Africa with the most limited health care resources, provided point-of-care CD4 testing improves the linkage to care for HIV-infected people.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001725.
The World Health Organization provides information on all aspects of HIV/AIDS (in several languages); its “Consolidated Guidelines on the Use of Antiretroviral Drugs for Treating and Preventing HIV Infections: Recommendations for a Public Health Approach”, which highlights the potential of point-of-care tests to improve the linkage of newly diagnosed HIV-infected patients to care, is available
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS, and summaries of recent research findings on HIV care and treatment; it has a fact sheet on CD4 testing
Information is available from Avert, an international AIDS charity, on many aspects of HIV/AIDS, including information on starting, monitoring, and switching treatment and on HIV and AIDS in sub-Saharan Africa (in English and Spanish)
The “UNAIDS Report on the Global AIDS Epidemic 2013” provides up-to-date information about the AIDS epidemic and efforts to halt it
Personal stories about living with HIV/AIDS are available through Avert, Nam/aidsmap, and Healthtalkonline
doi:10.1371/journal.pmed.1001725
PMCID: PMC4165752  PMID: 25225800
10.  Respiratory medicine and research at McGill University: A historical perspective 
The history of respiratory medicine and research at McGill University (Montreal, Quebec) is tightly linked with the growth of academic medicine within its teaching hospitals. Dr Jonathan Meakins, a McGill medical graduate, was recruited to the Royal Victoria Hospital in 1924 as McGill's first full-time clinical professor and Physician-in-Chief. His focus on respiratory medicine led to the publication of his first book, Respiratory Function in Disease, in 1925. Meakins moved clinical laboratories from the Department of Pathology and placed them within the hospital; in doing so, he was responsible for the development of hospital-based research.
Dr Ronald Christie was recruited as a postdoctoral fellow by Meakins in the early 1930s. After his fellowship, he returned to Britain but came back to McGill from St Bartholomew’s Hospital (London, United Kingdom) to become Chair of the Department of Medicine in 1955; he occupied the post for 10 years. He published extensively on the mechanical properties of the lung in common diseases such as emphysema and heart failure.
Dr David Bates was among Dr Christie's notable recruits; Bates, in turn, recruited Drs Maurice McGregor, Margaret Becklake, William Thurlbeck, Joseph Milic-Emili, Nicholas Anthonisen, Charles Bryan and Peter Macklem. Bates published extensively in the area of respiratory physiology and, with Macklem and Christie, coauthored the book Respiratory Function in Disease, which integrated physiology into the analysis of disease.
Dr JA Peter Paré joined the attending staff of the Royal Victoria Hospital and the Royal Edward Laurentian Hospital in 1949. A consummate clinician and teacher, he worked closely with Dr Robert Fraser, the Chair of Radiology, to write the reference text Diagnosis of Diseases of the Chest. This was a sentinel contribution in its focus on radiographic findings as the foundation for a systematic approach to diagnosis, and the correlation of these findings with pathological and clinical observations.
Dr Margaret Becklake immigrated to Montreal from South Africa in 1957. Her research focused on occupational lung disease. She established the respiratory epidemiology research unit at McGill. She was renowned for her insistence on the importance of a clearly stated, relevant research question and for her clarity and insight.
Dr William Thurlbeck, another South African, had developed an interest in emphysema and chronic bronchitis and applied a structure-function approach in collaboration with Peter Macklem and other respirologists. As chief of the Royal Victoria autopsy service, he used pathological specimens to develop a semiquantitative grading system of gross emphysema severity. He promoted the use of morphometry to quantify structural abnormalities.
Dr James Hogg studied the functional consequences of pathological processes for lung function during his PhD studies under the joint supervision of Drs Macklem and Thurlbeck. His contributions to understanding the structural basis for chronic obstructive pulmonary disease (COPD) are numerous, reflecting his transdisciplinary knowledge of respiratory pathology and physiology. He trained other outstanding investigators such as Peter Paré Jr, with whom he founded the Pulmonary Research Laboratory in St Paul’s Hospital in Vancouver (British Columbia) in 1977.
A signal event in the evolution of respiratory research at McGill was the construction of the Meakins-Christie Laboratories in 1972. These laboratories were directed by Dr Peter Macklem, a trainee of Dr Becklake’s. The research within the laboratory initially focused on respiratory mechanics, gas distribution within the lung and the contribution of airways of different sizes to the overall mechanical behaviour of the lungs. The effects of cigarette smoking on lung dysfunction, mechanisms of possible loss of lung elastic recoil in asthma and control of bronchomotor tone were all additional areas of active investigation. Dr Macklem pioneered the study of the physiological consequences of small airway pathology.
Dr Joseph Milic-Emili succeeded Dr Macklem as director of the Meakins-Christie Laboratories in 1979. Milic-Emili was renowned for his work on ventilation distribution and the assessment of pleural pressure. He led the development of convenient tools for the assessment of respiratory drive. He clarified the physiological basis for carbon dioxide retention in patients with COPD placed on high inspired oxygen concentrations.
Another area that captured many investigators’ attention in the 1980s was the notion of respiratory failure as a consequence of respiratory muscle fatigue. Dr Charalambos (‘Charis’) Roussos made seminal contributions in this field. These studies triggered a long-lasting interest in respiratory muscle training, in rehabilitation, and in noninvasive mechanical ventilation for acute and chronic respiratory failure.
Dr Ludwig Engel obtained his PhD under the supervision of Peter Macklem and established himself in the area of ventilation distribution in health and in bronchoconstriction and the mechanics of breathing in asthma; he trained many investigators including one of the authors, Dr Jim Martin, who succeeded Milic-Emili as director of the Meakins-Christie Laboratories from 1993 to 2008. Dr Martin developed small animal models of allergic asthma, and adopted a recruitment strategy that diversified the research programs at the Meakins-Christie Laboratories.
Dr Manuel Cosio built on earlier work with Macklem and Hogg in his development of key structure-function studies of COPD. He was instrumental in recruiting a new generation of young investigators with interests in sleep medicine and neuromuscular diseases.
The 1970s and 1980s also witnessed the emergence of a top-notch respiratory division at the Montreal General Hospital, in large part reflecting the leadership of Dr Neil Colman, later a lead author of the revised Fraser and Paré textbook. At the Montreal General, areas of particular clinical strength and investigation included asthma and occupational and immunological lung diseases.
In 1989, the Meakins-Christie Laboratories relocated to their current site on Rue St Urbain, adjacent to the Montreal Chest Institute. Dr Qutayba Hamid, then on faculty at the Brompton Hospital, joined the Meakins-Christie Labs in 1994. In addition to an outstanding career in the area of the immunopathology of human asthma, he broadened the array of techniques routinely applied at the labs and has ably led the Meakins-Christie Labs from 2008 to the present.
The Meakins-Christie Laboratories have had a remarkable track record that continues to this day. The basis for this enduring success is not immediately clear, but it has almost certainly been linked to the balance of MD and PhD scientists who brought perspective and rigour. The diverse disciplines and research programs also facilitated adaptation to changing external research priorities.
The late 1990s and the early 21st century also saw the flourishing of the Respiratory Epidemiology Unit, under the leadership of Drs Pierre Ernst, Dick Menzies and Jean Bourbeau. It moved from McGill University to the Montreal Chest Institute in 2004. This paved the way for expanded clinical and translational research programs in COPD, tuberculosis, asthma, respiratory sleep disorders and other pulmonary diseases. The faculty now comprises respiratory clinician-researchers and PhD scientists with expertise in epidemiological methods and biostatistics.
Respiratory physiology and medicine at McGill benefitted from a strong start through the influence of Meakins and Christie, and a tight linkage between clinical observation and physiological research. The subsequent recruitment of talented and creative faculty members with absolute dedication to academic medicine continued the legacy. No matter how significant the scientific contributions of the individuals themselves, their most important impact resulted from the training of a large cohort of other gifted physicians and graduate students. Some of these are further described in the accompanying full-length online article.
PMCID: PMC4324519  PMID: 25664457
11.  Quality control, analysis and secure sharing of Luminex® immunoassay data using the open source LabKey Server platform 
BMC Bioinformatics  2013;14:145.
Background
Immunoassays that employ multiplexed bead arrays produce high information content per sample. Such assays are now frequently used to evaluate humoral responses in clinical trials. Integrated software is needed for the analysis, quality control, and secure sharing of the high volume of data produced by such multiplexed assays. Software that facilitates data exchange and provides flexibility to perform customized analyses (including multiple curve fits and visualizations of assay performance over time) could increase scientists’ capacity to use these immunoassays to evaluate human clinical trials.
Results
The HIV Vaccine Trials Network and the Statistical Center for HIV/AIDS Research and Prevention collaborated with LabKey Software to enhance the open source LabKey Server platform to facilitate workflows for multiplexed bead assays. This system now supports the management, analysis, quality control, and secure sharing of data from multiplexed immunoassays that leverage Luminex xMAP® technology. These assays may be custom or kit-based. Newly added features enable labs to: (i) import run data from spreadsheets output by Bio-Plex Manager™ software; (ii) customize data processing, curve fits, and algorithms through scripts written in common languages, such as R; (iii) select script-defined calculation options through a graphical user interface; (iv) collect custom metadata for each titration, analyte, run and batch of runs; (v) calculate dose–response curves for titrations; (vi) interpolate unknown concentrations from curves for titrated standards; (vii) flag run data for exclusion from analysis; (viii) track quality control metrics across runs using Levey-Jennings plots; and (ix) automatically flag outliers based on expected values. Existing system features allow researchers to analyze, integrate, visualize, export and securely share their data, as well as to construct custom user interfaces and workflows.
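Two of the listed features, calculating dose-response curves for titrations (v) and interpolating unknown concentrations from curves for titrated standards (vi), follow the standard four-parameter logistic (4PL) workflow. The Python sketch below shows that workflow with synthetic numbers; it illustrates the method generically and is not LabKey Server's R-based implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the dose-response workflow: fit a four-parameter logistic
# (4PL) to a titrated standard, then interpolate unknown concentrations
# from their readings. The data below are synthetic, not Luminex output.

def four_pl(x, a, d, c, b):
    """4PL: a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)  # pg/mL
std_mfi  = np.array([20, 45, 150, 520, 1600, 2900, 3500], dtype=float)

popt, _ = curve_fit(four_pl, std_conc, std_mfi,
                    p0=[20.0, 3700.0, 100.0, 1.0], maxfev=10000)
a, d, c, b = popt

def interpolate(mfi):
    """Invert the fitted 4PL to recover concentration from a reading."""
    return c * (((a - d) / (mfi - d)) - 1.0) ** (1.0 / b)

print(f"unknown at MFI 800 ~ {interpolate(800.0):.0f} pg/mL")
```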
Conclusions
Unlike other tools tailored for Luminex immunoassays, LabKey Server allows labs to customize their Luminex analyses using scripting while still presenting users with a single, graphical interface for processing and analyzing data. The LabKey Server system also stands out among Luminex tools for enabling smooth, secure transfer of data, quality control information, and analyses between collaborators. LabKey Server and its Luminex features are freely available as open source software at http://www.labkey.com under the Apache 2.0 license.
doi:10.1186/1471-2105-14-145
PMCID: PMC3671158  PMID: 23631706
12.  The Safety and Efficacy of Bariatric Surgery: The Longitudinal Assessment of Bariatric Surgery (LABS) 
Background
Obesity is a leading health concern in the United States. Since traditional treatment approaches for weight loss are generally unsuccessful long-term, bariatric surgical procedures are increasingly being performed to treat extreme obesity. To facilitate research in this field, the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) established the Longitudinal Assessment of Bariatric Surgery (LABS) consortium.
Methods
A competitive NIDDK grant process resulted in the creation of a group of investigators with expertise in bariatric surgery, internal medicine, endocrinology, behavioral science, outcomes research, epidemiology, biostatistics and other relevant fields who have worked closely to plan, develop, and conduct the LABS study. The LABS consortium protocol is a prospective, multi-center observational cohort study of consecutive patients undergoing bariatric surgery at six clinical centers. LABS includes an extensive database of information systematically collected pre-operatively, at surgery, peri-operatively during the 30-day post-operative period, and longer term.
Results
The LABS study is organized into three phases. LABS-1 includes all patients at least 18 years of age who undergo bariatric surgery by LABS-certified surgeons with the goal to evaluate the short-term safety of bariatric surgery. LABS-2, a subset of approximately 2400 LABS-1 patients, evaluates the relationship of patient and surgical characteristics to longer-term safety and efficacy of bariatric surgery. LABS-3 involves a subset of LABS-2 subjects who undergo detailed studies of mechanisms involved in weight change. The rationale, goals, and approach to study bariatric surgery are detailed in this report along with a description of the outcomes, measures, and hypotheses utilized in LABS-1 and -2.
Conclusions
The goal of the LABS consortium is to accelerate clinical research and understanding of extreme obesity and its complications by evaluating the risks and benefits of bariatric surgery. LABS investigators use standardized definitions, high fidelity data collection, and validated instruments to enhance the ability of clinicians to provide meaningful evidence-based recommendations for patient evaluation, selection for surgery, and follow-up care.
doi:10.1016/j.soard.2007.01.006
PMCID: PMC3805365  PMID: 17386392
Bariatric Surgery; Obesity
13.  Representation and Misrepresentation of Scientific Evidence in Contemporary Tobacco Regulation: A Review of Tobacco Industry Submissions to the UK Government Consultation on Standardised Packaging 
PLoS Medicine  2014;11(3):e1001629.
Selda Ulucanlar and colleagues analyze submissions by two tobacco companies to the UK government consultation on standardized packaging.
Background
Standardised packaging (SP) of tobacco products is an innovative tobacco control measure opposed by transnational tobacco companies (TTCs) whose responses to the UK government's public consultation on SP argued that evidence was inadequate to support implementing the measure. The government's initial decision, announced 11 months after the consultation closed, was to wait for ‘more evidence’, but four months later a second ‘independent review’ was launched. In view of the centrality of evidence to debates over SP and TTCs' history of denying harms and manufacturing uncertainty about scientific evidence, we analysed their submissions to examine how they used evidence to oppose SP.
Methods and Findings
We purposively selected and analysed two TTC submissions using a verification-oriented cross-documentary method to ascertain how published studies were used and interpretive analysis with a constructivist grounded theory approach to examine the conceptual significance of TTC critiques. The companies' overall argument was that the SP evidence base was seriously flawed and did not warrant the introduction of SP. However, this argument was underpinned by three complementary techniques that misrepresented the evidence base. First, published studies were repeatedly misquoted, distorting the main messages. Second, ‘mimicked scientific critique’ was used to undermine evidence; this form of critique insisted on methodological perfection, rejected methodological pluralism, adopted a litigation (not scientific) model, and was not rigorous. Third, TTCs engaged in ‘evidential landscaping’, promoting a parallel evidence base to deflect attention from SP and excluding company-held evidence relevant to SP. The study's sample was limited to sub-sections of two out of four submissions, but leaked industry documents suggest at least one other company used a similar approach.
Conclusions
The TTCs' claim that SP will not lead to public health benefits is largely without foundation. The tools of Better Regulation, particularly stakeholder consultation, provide an opportunity for highly resourced corporations to slow, weaken, or prevent public health policies.
Editors' Summary
Background
Every year, about 6 million people die from tobacco-related diseases and, if current trends continue, annual tobacco-related deaths will increase to more than 8 million by 2030. To reduce this loss of life, national and international bodies have drawn up various conventions and directives designed to implement tobacco control measures such as the adoption of taxation policies aimed at reducing tobacco consumption and bans on tobacco advertising, promotion, and sponsorship. One innovative but largely unused tobacco control measure is standardised packaging of tobacco products. Standardised packaging aims to prevent the use of packaging as a marketing tool by removing all brand imagery and text (other than name) and by introducing packs of a standard shape and colour that include prominent pictorial health warnings. Standardised packaging was first suggested as a tobacco control measure in 1986 but has been consistently opposed by the tobacco industry.
Why Was This Study Done?
The UK is currently considering standardised packaging of tobacco products. In the UK, Better Regulation guidance obliges officials to seek the views of stakeholders, including corporations, on the government's cost and benefit estimates of regulatory measures such as standardised packaging and on the evidence underlying these estimates. In response to a public consultation about standardised packaging in July 2013, which considered submissions from several transnational tobacco companies (TTCs), the UK government announced that it would wait for the results of the standardised packaging legislation that Australia adopted in December 2012 before making its final decision about this tobacco control measure. Parliamentary debates and media statements have suggested that doubt over the adequacy of the evidence was the main reason for this ‘wait and see’ decision. Notably, TTCs have a history of manufacturing uncertainty about the scientific evidence related to the harms of tobacco. Given the centrality of evidence to the debate about standardised packaging, in this study, the researchers analyse submissions made by two TTCs, British American Tobacco (BAT) and Japan Tobacco International (JTI), to the first UK consultation on standardised packaging (a second review is currently underway and will report shortly) to examine how TTCs used evidence to oppose standardised packaging.
What Did the Researchers Do and Find?
The researchers analysed sub-sections of two of the four TTC submissions (those submitted by BAT and JTI) made to the public consultation using verification-oriented cross-documentary analysis, which compared references made to published sources with the original sources to ascertain how these sources had been used, and interpretative analysis to examine the conceptual significance of TTC critiques of the evidence on standardised packaging. The researchers report that the companies' overall argument was that the evidence base in support of standardised packaging was seriously flawed and did not warrant the introduction of such packaging. The researchers identified three ways in which the TTC reports misrepresented the evidence base. First, the TTCs misquoted published studies, thereby distorting the main messages of these studies. For example, the TTCs sometimes omitted important qualifying information when quoting from published studies. Second, the TTCs undermined evidence by employing experts to review published studies for methodological rigor and value in ways that did not conform to normal scientific critique approaches (‘mimicked scientific critique’). So, for example, the experts considered each piece of evidence in isolation for its ability to support standardised packaging rather than considering the cumulative weight of the evidence. Finally, the TTCs engaged in ‘evidential landscaping’. That is, they promoted research that deflected attention from standardised packaging (for example, research into social explanations of smoking behaviour) and omitted internal industry research on the role of packaging in marketing.
What Do These Findings Mean?
These findings suggest that the TTC critique of the evidence in favour of standardised packaging that was presented to the UK public consultation on this tobacco control measure is highly misleading. However, because the researchers' analysis only considered subsections of the submissions from two TTCs, these findings may not be applicable to the other submissions or to other TTCs. Moreover, their analysis only considered the efforts made by TTCs to influence public health policy and not the effectiveness of these efforts. Nevertheless, these findings suggest that the claim of TTCs that standardised packaging will not lead to public health benefits is largely without foundation. More generally, these findings highlight the possibility that the tools of Better Regulation, particularly stakeholder consultation, provide an opportunity for wealthy corporations to slow, weaken, or prevent the implementation of public health policies.
Additional Information
Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001629.
The World Health Organization provides information about the dangers of tobacco (in several languages) and an article about first experiences with Australia's tobacco plain packaging law; for information about the tobacco industry's influence on policy, see the 2009 World Health Organization report ‘Tobacco industry interference with tobacco control’
A UK parliamentary briefing on standardised packaging of tobacco products, a press release about the consultation, and a summary report of the consultation are available; the ideas behind the UK's Better Regulation guidance are described in a leaflet produced by the Better Regulation Task Force
Cancer Research UK (CRUK) has a web page with information on standardised packaging that includes videos
Wikipedia has a page on standardised packaging of tobacco products (note: Wikipedia is a free online encyclopaedia that anyone can edit; available in several languages)
The UK Centre for Tobacco Control Studies is a network of UK universities that undertakes original research, policy development, advocacy, and teaching and training in the field of tobacco control
TobaccoTactics.org, an online resource managed by the University of Bath, provides up-to-date information on the tobacco industry and the tactics it uses to influence tobacco regulation
SmokeFree, a website provided by the UK National Health Service, offers advice on quitting smoking and includes personal stories from people who have stopped smoking
Smokefree.gov, from the US National Cancer Institute, offers online tools and resources to help people quit smoking
doi:10.1371/journal.pmed.1001629
PMCID: PMC3965396  PMID: 24667150
14.  Engaging teenagers in improving their health behaviours and increasing their interest in science (Evaluation of LifeLab Southampton): study protocol for a cluster randomized controlled trial 
Trials  2015;16:372.
Background
Lifestyle and health behaviours are strongly linked to non-communicable disease risk, but modifying them is challenging. There is an increasing recognition that adolescence is an important time for lifestyle and health behaviours to become embedded. Improving these behaviours in adolescents is important not only for their own health but also for that of their future children. LifeLab Southampton has been developed as a purpose-built classroom and laboratory in University Hospital Southampton. Secondary school students visit LifeLab to learn how childhood, adolescent and parental nutrition influences health, to understand the impact of their lifestyle on their cardiovascular and metabolic health, and to be inspired by the excitement of research and future career possibilities in science. The LifeLab visit is part of a programme of work linked to the English National Curriculum. Pilot work has indicated that attitudes towards health can be changed by such LifeLab sessions.
Methods/Design
A cluster randomised controlled trial is being conducted to evaluate the effectiveness of the LifeLab intervention, the primary outcome being the change in nutrition, health and lifestyle literacy from before to after the LifeLab intervention.
The LifeLab intervention comprises professional development for the teachers involved; preparatory lessons for the school students, delivered in school; a hands-on practical day at LifeLab, including a ‘Meet the Scientist’ session; post-visit lessons delivered in school; and the opportunity to participate in the annual LifeLab Schools’ Conference. This study aims to recruit approximately 2,500 secondary school students aged 13 to 14 years from 32 schools (the clusters) from Southampton and neighbouring areas. Participating schools will be randomised to control or intervention groups. The intervention will be run over two academic school years, with baseline questionnaire data collected from students at participating schools at the start of the academic year and follow-up questionnaire data collected approximately 12 months later.
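As an illustration of the cluster design described above, the following minimal Python sketch randomises 32 school clusters, rather than individual students, to the two arms. This is not the trial's actual allocation procedure; the school names and random seed are invented.

    import random

    # The 32 participating schools are the units of randomisation (clusters);
    # every student in a school receives whichever condition the school draws.
    schools = [f"School {i:02d}" for i in range(1, 33)]
    random.seed(2015)   # fixed seed so this illustrative allocation is reproducible
    random.shuffle(schools)
    intervention, control = schools[:16], schools[16:]
    print(len(intervention), "intervention clusters;", len(control), "control clusters")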
Trial registration
Evaluation of LifeLab is a cluster randomised controlled trial (ISRCTN71951436, registered 25 March 2015), funded by the British Heart Foundation (PG/14/33/30827).
Electronic supplementary material
The online version of this article (doi:10.1186/s13063-015-0890-z) contains supplementary material, which is available to authorized users.
doi:10.1186/s13063-015-0890-z
PMCID: PMC4546100  PMID: 26292675
science education; health literacy; science literacy; nutrition literacy; cluster randomised trial; adolescent health; health behaviours; LifeLab
15.  Standardization of cytokine flow cytometry assays 
BMC Immunology  2005;6:13.
Background
Cytokine flow cytometry (CFC) or intracellular cytokine staining (ICS) can quantitate antigen-specific T cell responses in settings such as experimental vaccination. Standardization of ICS among laboratories performing vaccine studies would provide a common platform by which to compare the immunogenicity of different vaccine candidates across multiple international organizations conducting clinical trials. As such, a study was carried out among several laboratories involved in HIV clinical trials to define the inter-lab precision of ICS using various sample types and a common protocol for each experiment (see additional files online).
Results
Three sample types (activated, fixed, and frozen whole blood; fresh whole blood; and cryopreserved PBMC) were shipped to various sites, where ICS assays using cytomegalovirus (CMV) pp65 peptide mix or control antigens were performed in parallel in 96-well plates. For one experiment, antigens and antibody cocktails were lyophilised into 96-well plates to simplify and standardize the assay setup. Results (CD4+cytokine+ cells and CD8+cytokine+ cells) were determined by each site. Raw data were also sent to a central site for batch analysis with a dynamic gating template.
Mean inter-laboratory coefficient of variation (C.V.) ranged from 17% to 44%, depending upon the sample type and analysis method. Cryopreserved peripheral blood mononuclear cells (PBMC) yielded lower inter-lab C.V.s than whole blood. Centralized analysis (using a dynamic gating template) reduced the inter-lab C.V. by 5–20%, depending upon the experiment. The inter-lab C.V. was lowest (18–24%) for samples with a mean of >0.5% IFNγ+ T cells, and highest (57–82%) for samples with a mean of <0.1% IFNγ+ cells.
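For readers unfamiliar with the statistic, the minimal Python sketch below (standard library only; the values are invented, not study data) shows how an inter-laboratory C.V. of this kind is computed: the standard deviation of a measured response across laboratories divided by its mean.

    import statistics

    def inter_lab_cv(values):
        """Percent C.V. = 100 * (SD across labs) / (mean across labs)."""
        return 100 * statistics.stdev(values) / statistics.mean(values)

    # Hypothetical %IFNγ+ CD8+ T cell frequencies reported by four labs
    # for the same cryopreserved PBMC sample:
    responses = [0.62, 0.55, 0.71, 0.48]
    print(f"inter-lab C.V. = {inter_lab_cv(responses):.1f}%")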
Conclusion
ICS assays can be performed by multiple laboratories using a common protocol with good inter-laboratory precision, which improves as the frequency of responding cells increases. Cryopreserved PBMC may yield slightly more consistent results than shipped whole blood. Analysis, particularly gating, is a significant source of variability, and can be reduced by centralized analysis and/or use of a standardized dynamic gating template. Use of pre-aliquoted lyophilized reagents for stimulation and staining can provide further standardization to these assays.
doi:10.1186/1471-2172-6-13
PMCID: PMC1184077  PMID: 15978127
16.  Data Management, Analysis, Standardization and Reproducibility in a ProteoRed Multicentric Quantitative Proteomics Study with the OmicsHub Proteomics Software Tool 
RP-47
The ProteoRed multicentric experiment was designed to test each laboratory's ability to perform quantitative proteomic analysis, to compare methodologies and inter-lab reproducibility for relative quantitative analysis of proteomes, and to evaluate data reporting and data sharing tools (MIAPE documents, standard formats, public repositories). The experiment consists of analyzing two different samples (A and B), which contain an identical matrix of E. coli proteins plus four standard proteins (CYC_HORSE, MYG_HORSE, ALDOA_RABIT, BSA_BOVIN) spiked in different amounts. The samples are designed primarily to be analyzed by LC-MS, although DIGE analysis is also possible. Each lab can choose its preferred method for quantitative comparison of the two samples. However, to achieve as much standardization and reproducibility as possible in terms of data analysis, data sharing, protocol information, results and reporting, we propose the OmicsHub Proteomics server as the single platform to integrate the protein identification steps of the MS multicentric experiment and to serve as a web repository. After the "in-lab" analysis is performed with the laboratory's own tools, every lab can load its experiment (protocols, parameters, instruments, etc.) and import its spectrum data via the web into the OmicsHub Proteomics analysis and management server. Every experiment in OmicsHub is automatically stored following the PRIDE standard format. The OmicsHub Proteomics software tool performs the workflow tasks of protein identification (using the search engines Mascot and Phenyx), protein annotation, protein grouping, FDR filtering (allowing the use of a local decoy database designed ad hoc for this experiment) and reporting of the protein identification results in a systematic and centralized manner. OmicsHub Proteomics allows the researchers of the ProteoRed consortium to perform their multicentric study with full reproducibility, standardization and experiment comparison, reducing time and data management complexity prior to the final quantification analysis.
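The decoy-based FDR filtering mentioned above follows the standard target-decoy idea: peptide-spectrum matches are scored against a concatenated target+decoy database, and the FDR at a score threshold is estimated as the ratio of decoy to target hits above that threshold. A minimal Python sketch (scores invented; this is not OmicsHub code):

    def fdr_at_threshold(psms, threshold):
        """Estimate FDR as (# decoy hits) / (# target hits) at or above a score threshold.

        psms: iterable of (score, is_decoy) pairs from a search against a
        concatenated target+decoy database.
        """
        targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
        decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
        return decoys / targets if targets else 0.0

    psms = [(72.1, False), (65.3, False), (58.9, False), (44.0, True), (40.2, True)]
    print(f"estimated FDR at score 40: {fdr_at_threshold(psms, 40):.2f}")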
PMCID: PMC2918095
17.  A study of inter-lab and inter-platform agreement of DNA microarray data 
BMC Genomics  2005;6:71.
As gene expression profile data from DNA microarrays accumulate rapidly, there is a natural need to compare data across labs and platforms. Comparisons of microarray data can be quite challenging due to data complexity and variability, and different labs may adopt different technology platforms, raising the question of how much agreement we can expect between labs and between platforms. To address this question, we conducted a study of inter-lab and inter-platform agreement of microarray data across three platforms and three labs. The statistical measures of consistency and agreement used in this paper are the Pearson correlation, intraclass correlation, kappa coefficients, and a measure of intra-transcript correlation. The three platforms examined were Affymetrix GeneChip, custom cDNA arrays, and custom oligo arrays. Using the within-platform variability as a benchmark, we found that these technology platforms exhibited an acceptable level of agreement, but the agreement between two technologies within the same lab was greater than that between two labs using the same technology. The consistency of replicates in each experiment varies from lab to lab. When there is high consistency among replicates, different technologies show good agreement within and across labs using the same RNA samples. On the other hand, the lab effect, especially when confounded with the RNA sample effect, plays a bigger role than the platform effect on data agreement.
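To illustrate two of the agreement measures named above, here is a minimal Python sketch with invented values, assuming NumPy, SciPy and scikit-learn are available; it is not the authors' analysis code.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical log2 expression values for the same five genes in two labs:
    lab_a = np.array([8.1, 5.4, 9.8, 6.7, 7.2])
    lab_b = np.array([7.9, 5.9, 9.5, 6.2, 7.6])

    r, _ = pearsonr(lab_a, lab_b)   # Pearson correlation between labs

    # Cohen's kappa on dichotomised present/absent calls (cutoff arbitrary):
    kappa = cohen_kappa_score(lab_a > 7, lab_b > 7)
    print(f"Pearson r = {r:.2f}, kappa = {kappa:.2f}")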
doi:10.1186/1471-2164-6-71
PMCID: PMC1142313  PMID: 15888200
18.  Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience 
Summary
Objective
Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases in the SenseLab suite of neuroscience databases.
Methods
Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and the Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, enabling their integration into a growing collection of biomedical information resources.
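A minimal sketch of this style of cross-resource linking, using Python's rdflib (an assumption; the namespace, class name, and UniProt accession below are illustrative, not the actual SenseLab identifiers):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    SENSELAB = Namespace("http://example.org/senselab#")   # illustrative namespace
    UNIPROT = Namespace("http://purl.uniprot.org/uniprot/")

    g = Graph()
    receptor = SENSELAB["NMDAReceptorSubunit1"]            # hypothetical class
    g.add((receptor, RDF.type, OWL.Class))
    g.add((receptor, RDFS.label, Literal("NMDA receptor subunit 1")))
    g.add((receptor, RDFS.seeAlso, UNIPROT["Q05586"]))     # link out to an external resource
    print(g.serialize(format="turtle"))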
Conclusion
We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/
doi:10.1016/j.artmed.2009.11.003
PMCID: PMC3131218  PMID: 20006477
Semantic Web; neuroscience; description logic; ontology mapping; Web Ontology Language; integration
19.  Machines first, humans second: on the importance of algorithmic interpretation of open chemistry data 
The current rise in the use of open lab notebook techniques means that there are an increasing number of scientists who make chemical information freely and openly available to the entire community as a series of micropublications that are released shortly after the conclusion of each experiment. We propose that this trend be accompanied by a thorough examination of data sharing priorities. We argue that the most significant immediate beneficiary of open data is in fact chemical algorithms, which are capable of absorbing vast quantities of data, and using it to present concise insights to working chemists, on a scale that could not be achieved by traditional publication methods. Making this goal practically achievable will require a paradigm shift in the way individual scientists translate their data into digital form, since most contemporary methods of data entry are designed for presentation to humans rather than consumption by machine learning algorithms. We discuss some of the complex issues involved in fixing current methods, as well as some of the immediate benefits that can be gained when open data is published correctly using unambiguous machine readable formats.
Graphical Abstract: Lab notebook entries must target both visualisation by scientists and use by machine learning algorithms
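To make the contrast concrete, a small Python sketch (assuming the RDKit toolkit is installed; the compound is chosen arbitrarily) of the same fact recorded for humans versus for machines:

    from rdkit import Chem

    # Human-oriented record: free text a person can read but an algorithm
    # cannot parse reliably.
    human_readable = "Dissolved caffeine in water; recovered a white solid, m.p. 235-238 °C."

    # Machine-oriented record: a SMILES string any cheminformatics tool can consume.
    mol = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")   # caffeine
    print(Chem.MolToSmiles(mol))    # canonical form, comparable across datasets
    print(mol.GetNumHeavyAtoms())   # properties become immediately computable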
doi:10.1186/s13321-015-0057-7
PMCID: PMC4369291  PMID: 25798198
Cheminformatics; File formats; Open lab notebooks; Public data; Machine learning
21.  LabKey Server NAb: A tool for analyzing, visualizing and sharing results from neutralizing antibody assays 
BMC Immunology  2011;12:33.
Background
Multiple types of assays allow sensitive detection of virus-specific neutralizing antibodies. For example, the extent of antibody neutralization of HIV-1, SIV and SHIV can be measured in the TZM-bl cell line through the degree of luciferase reporter gene expression after infection. In the past, neutralization curves and titers for this standard assay have been calculated using an Excel macro. Updating all instances of such a macro with new techniques can be unwieldy and introduce non-uniformity across multi-lab teams. Using Excel also poses challenges in centrally storing, sharing and associating raw data files and results.
Results
We present LabKey Server's NAb tool for organizing, analyzing and securely sharing data, files and results for neutralizing antibody (NAb) assays, including the luciferase-based TZM-bl NAb assay. The customizable tool supports high-throughput experiments and includes a graphical plate template designer, allowing researchers to quickly adapt calculations to new plate layouts. The tool calculates the percent neutralization for each serum dilution based on luminescence measurements, fits a range of neutralization curves to titration results and uses these curves to estimate the neutralizing antibody titers for benchmark dilutions. Results, curve visualizations and raw data files are stored in a database and shared through a secure, web-based interface. NAb results can be integrated with other data sources based on sample identifiers. It is simple to make results public after publication by updating folder security settings.
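The two calculations described, percent neutralization from luminescence and curve fitting to the titration results, follow a standard pattern. Below is a minimal Python sketch with invented numbers, using a four-parameter logistic curve as a stand-in for the range of curve fits the tool supports; it is not LabKey code.

    import numpy as np
    from scipy.optimize import curve_fit

    def pct_neutralization(sample, virus_ctrl, cell_ctrl):
        """Percent reduction in luciferase signal relative to the virus-only control."""
        return 100 * (virus_ctrl - sample) / (virus_ctrl - cell_ctrl)

    def logistic4(x, bottom, top, ec50, slope):
        return bottom + (top - bottom) / (1 + (x / ec50) ** slope)

    dilutions = np.array([20.0, 60.0, 180.0, 540.0, 1620.0])   # serum dilution factors
    neut = np.array([95.0, 82.0, 55.0, 24.0, 8.0])             # invented % neutralization

    params, _ = curve_fit(logistic4, dilutions, neut, p0=[0, 100, 200, 1])
    print(f"estimated 50% neutralization titer ~ 1:{params[2]:.0f}")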
Conclusions
Standardized tools for analyzing, archiving and sharing assay results can improve the reproducibility, comparability and reliability of results obtained across many labs. LabKey Server and its NAb tool are freely available as open source software at http://www.labkey.com under the Apache 2.0 license. Many members of the HIV research community can also access the LabKey Server NAb tool without installing the software by using the Atlas Science Portal (https://atlas.scharp.org). Atlas is an installation of LabKey Server.
doi:10.1186/1471-2172-12-33
PMCID: PMC3115917  PMID: 21619655
22.  Toward modular biological models: defining analog modules based on referent physiological mechanisms 
BMC Systems Biology  2014;8:95.
Background
Currently, most biomedical models exist in isolation. It is often difficult to reuse or integrate models or their components, in part because they are not modular. Modular components allow the modeler to think more deeply about the role of the model and to more completely address a modeling project’s requirements. In particular, modularity facilitates component reuse and model integration for models with different use cases, including the ability to exchange modules during or between simulations. The heterogeneous nature of biology and the vast range of wet-lab experimental platforms call for modular models designed to satisfy a variety of use cases. We argue that software analogs of biological mechanisms are reasonable candidates for modularization. Biomimetic software mechanisms composed of physiomimetic mechanism modules offer benefits that are unique or especially important to multi-scale, biomedical modeling and simulation.
Results
We present a general, scientific method of modularizing mechanisms into reusable software components that we call physiomimetic mechanism modules (PMMs). PMMs utilize parametric containers that partition and expose state information into physiologically meaningful groupings. To demonstrate, we modularize four pharmacodynamic response mechanisms adapted from an in silico liver (ISL). We verified the modularization process by showing that drug clearance results from in silico experiments are identical before and after modularization. The modularized ISL achieves validation targets drawn from propranolol outflow profile data. In addition, an in silico hepatocyte culture (ISHC) was created; it uses the same PMMs and required no refactoring. The ISHC achieves validation targets drawn from propranolol intrinsic clearance data exhibiting considerable between-lab variability. The data used as validation targets for PMMs originate from both in vitro and in vivo experiments exhibiting large fold differences in time scale.
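The modular pattern described, with state exposed through parametric containers so a module can be reused unchanged across models, might look like the following Python sketch. All names are illustrative; this is not code from the ISL or ISHC.

    from dataclasses import dataclass

    @dataclass
    class ParameterContainer:
        """A physiologically meaningful grouping of exposed module state."""
        binding_rate: float    # drug-target binding rate (1/min), illustrative
        response_gain: float   # magnitude of the downstream response, illustrative

    class ResponseModule:
        """A swappable pharmacodynamic response mechanism."""
        def __init__(self, params: ParameterContainer):
            self.params = params
            self.level = 0.0

        def step(self, drug_concentration: float, dt: float) -> float:
            """Advance the response one time step; relaxes toward the driven level."""
            drive = self.params.binding_rate * drug_concentration
            self.level += dt * (self.params.response_gain * drive - self.level)
            return self.level

    # The same module, reparameterised, could serve both a liver model and a
    # hepatocyte-culture model without refactoring:
    module = ResponseModule(ParameterContainer(binding_rate=0.3, response_gain=2.0))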
Conclusions
This report demonstrates the feasibility of PMMs and their usefulness across multiple model use cases. The pharmacodynamic response module developed here is robust to changes in model context and flexible in its ability to achieve validation targets in the face of considerable experimental uncertainty. Adopting the modularization methods presented here is expected to facilitate model reuse and integration, thereby accelerating the pace of biomedical research.
doi:10.1186/s12918-014-0095-1
PMCID: PMC4236728  PMID: 25123169
Modularity; Model reuse; Modeling & simulation; Mechanism; Agent-based model
23.  User Experience of the U.S. Department of Defense (DoD) Respiratory Disease Dashboard 
Objective
Evaluate the user experience of a novel electronic disease reporting and analysis system deployed across the DoD global laboratory surveillance network.
Introduction
Lessons learned from the 2009 influenza pandemic have driven many changes in the standards and practices of respiratory disease surveillance worldwide. In response to the need for timely information sharing of emerging respiratory pathogens (1), the DoD Armed Forces Health Surveillance Center (AFHSC) collaborated with the Johns Hopkins University Applied Physics Laboratory (JHU/APL) to develop an Internet-based data management system known as the Respiratory Disease Dashboard (RDD). The goal of the RDD is to provide the AFHSC global respiratory disease surveillance network with a centralized system for the monitoring and tracking of lab-confirmed respiratory pathogens, thereby streamlining the data reporting process and enhancing the timeliness of detection of potential pandemic threats. This system consists of a password-protected internet portal that allows users to directly input respiratory specimen data and visualize data on an interactive, global map. Currently, eight DoD partner laboratories are actively entering respiratory pathogen data into the RDD, encompassing specimens from sentinel sites in eleven countries: Cambodia, Colombia, Kenya, Ecuador, Egypt, Honduras, Nicaragua, Paraguay, Peru, Uganda, and the United States. A user satisfaction survey was conducted to guide further development of the RDD and to support other disease surveillance efforts at the AFHSC.
Methods
User training was provided to partner laboratories during a transition of data submission from Excel spreadsheets to RDD electronic data entry between November 2011 and May 2012. A user experience survey was distributed to the participating laboratories in August 2012 and was based on the experience of 139 data entries. The survey adopted elements of the SWOT (Strengths-Weaknesses-Opportunities-Threats) analysis to determine the system’s strengths and weaknesses as well as to solicit users’ perspectives on the efficiency of the system in assisting with disease surveillance data entry and visualization. Questionnaires in an open-ended (free-text response) format were distributed to all eight participating laboratories. Common themes were identified based on the solicited responses.
Results
Although only four of the eight participating laboratory partners replied to the survey (50% response rate), all surveys were completed without any omission of questions (100% completion rate). Two of the 25 total responses (8%) were neutral comments and were therefore omitted from the thematic analysis (Table 1). In general, there was a distinct dichotomy in opinion between overseas laboratories and domestic laboratories with regard to the usefulness of the RDD, with overseas laboratories viewing the RDD as more useful than domestic laboratories. A review of the comparison between weekly specimens submitted to the AFHSC via Excel spreadsheet and data entered directly into the RDD revealed misunderstandings about the meaning of the data entry labels in the RDD interface. Four laboratories noted that a “Quick Start” user manual would be useful to clarify the definitions of some data labels.
Conclusions
Overall, this user experience evaluation identified the need for additional training on RDD data entry procedures and for a “Quick Start” user manual to support the standardization of surveillance definitions. In general, users appreciated the visualization of the global DoD laboratory network data. This evaluation demonstrated the importance of active participation from data contributors, and of organizational support, in the development of the RDD as an electronic disease reporting and analysis system.
PMCID: PMC3692836
Outbreak Detection; Disease Surveillance; User Experience Evaluation; Data Management
24.  A research education program model to prepare a highly qualified workforce in biomedical and health-related research and increase diversity 
BMC Medical Education  2014;14:202.
Background
The National Institutes of Health has recognized a compelling need to train highly qualified individuals and promote diversity in the biomedical/clinical sciences research workforce. In response, we have developed a research-training program known as REPID (Research Education Program to Increase Diversity among Health Researchers) to prepare students/learners to pursue research careers in these fields and address the lack of diversity and health disparities. By inclusion of students/learners from minority and diverse backgrounds, the REPID program aims to provide a research training and enrichment experience through team mentoring to inspire students/learners to pursue research careers in biomedical and health-related fields.
Methods
Students/learners are recruited from the University campus from a diverse population of undergraduates, graduates, health professionals, and lifelong learners. Our recruits first enroll in an innovative on-line introductory course in Basics and Methods in Biomedical Research that uses a laboratory Tool-Kit (a lab in a box called the My Dr. ET Lab Tool-Kit) to deliver the standard basics of research education, e.g., research skills and lab techniques. The students/learners also learn about the responsible conduct of research, research concept/design, data recording/analysis, and scientific writing/presentation. The course is followed by a 12-week hands-on research experience during the summer. The students/learners also attend workshops and seminars/conferences. The students/learners receive a scholarship to cover stipends and research-related expenses and to attend a scientific conference.
Results
The scholarship allows the students/learners to gain knowledge and seize opportunities in biomedical and health-related careers. This is an ongoing program; during its first three years, fifty-one (51) students/learners have been recruited. Thirty-six (36) have completed their research training, and eighty percent (80%) of them have continued their research experiences beyond the program. The combination of carefully delivered research education basics and team mentoring has been instrumental in training these students/learners and in their success in finding biomedical/health-related jobs and/or pursuing graduate/medical studies. Feedback on the program has been consistently positive.
Conclusions
This approach has the potential to train a highly qualified workforce, change lives, enhance biomedical research, and by extension, improve national health-care.
doi:10.1186/1472-6920-14-202
PMCID: PMC4197236  PMID: 25248498
Innovative on-line biomedical research training; Mobile biomedical lab; Innovative lab tool-kit; Microscopy training; Health-care research training program; Team mentoring; On-line medical research education
25.  The EnzymeTracker: an open-source laboratory information management system for sample tracking 
BMC Bioinformatics  2012;13:15.
Background
In many laboratories, researchers store experimental data on their own workstations using spreadsheets. However, this approach poses a number of problems, ranging from sharing issues to inefficient data mining. Standard spreadsheets are also error-prone, as data do not undergo any validation process. To overcome spreadsheets' inherent limitations, a number of proprietary systems have been developed, for which laboratories must pay expensive license fees. Those costs are usually prohibitive for most laboratories and prevent scientists from benefiting from more sophisticated data management systems.
Results
In this paper, we propose the EnzymeTracker, a web-based laboratory information management system for sample tracking, as an open-source and flexible alternative that aims at facilitating the entry, mining and sharing of experimental biological data. The EnzymeTracker features online spreadsheets and tools for monitoring numerous experiments conducted by several collaborators to identify and characterize samples. It also provides libraries of shared data such as protocols, and administration tools for data access control using OpenID and user/team management. Our system relies on a database management system for efficient data indexing and a user-friendly AJAX interface that can be accessed over the Internet. The EnzymeTracker facilitates data entry by dynamically suggesting entries and provides smart data-mining tools to effectively retrieve data. Our system features a number of tools to visualize and annotate experimental data, and to export highly customizable reports. It also supports QR matrix barcoding to facilitate sample tracking.
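As an example of the QR-based sample tracking mentioned above, here is a short sketch using the third-party Python qrcode package (an assumption; the EnzymeTracker's own implementation is server-side, and the sample identifier and URL below are invented):

    import qrcode

    sample_id = "ENZ-2012-00042"    # hypothetical sample accession
    # Encode a link back to the sample's record so scanning a tube label
    # opens the corresponding entry:
    img = qrcode.make(f"https://example.org/enzymedb/sample/{sample_id}")
    img.save(f"{sample_id}.png")    # printable label image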
Conclusions
The EnzymeTracker was designed to be easy to use and offers many benefits over spreadsheets, thus presenting the characteristics required to facilitate acceptance by the scientific community. It has been successfully used for 20 months on a daily basis by over 50 scientists. The EnzymeTracker is freely available online at http://cubique.fungalgenomics.ca/enzymedb/index.html under the GNU GPLv3 license.
doi:10.1186/1471-2105-13-15
PMCID: PMC3353834  PMID: 22280360
