J Biomol Tech. 2009 February; 20(1): 1–30.
PMCID: PMC2670551

Oral Session Abstracts

J Biomol Tech. 2009 February; 20(1): 1.

The Changing Landscape of Genomics

Abstract

For more than two decades, Genomics has grown as a discipline separate from Genetics. During this period the HGP was completed, other species were analyzed, and the technology advanced dramatically. Methods for nucleic acid analysis now include the NexGen sequencing technologies for analysis of whole genomes and new DNA capture methods that allow sequencing of every human exon in a single sample. As a consequence, we are beginning to accrue the raw sequence of the whole genomes of multiple individuals and cell types. These methods are also being directed at finding specific disease alleles, characterizing the full spectrum of variation in human populations, and studying cancer. In combination, these technologies provide an unprecedented opportunity to study fundamental questions in biology. At long last the two disciplines of Genetics and Genomics are coming back together.

J Biomol Tech. 2009 February; 20(1): 1.

Technologies for Comprehensive Proteome Quantitation

Abstract

Proteomics and Signal Transduction; Max-Planck Institute for Biochemistry, Martinsried, Germany

Mass spectrometry (MS)-based proteomics has become a very powerful technology during the last decade. Despite tremendous and ongoing technological advances, identification and quantitation of entire proteomes had remained elusive and was even thought to be impossible with current approaches. Here we show that a combination of efficient sample preparation, high-resolution LC-MS/MS instrumentation, and novel computational proteomics techniques can now characterize essentially complete proteomes. We chose the yeast system because its proteome has been independently characterized using tagging methods. The measured yeast proteome has a dynamic range of protein abundance of four to five orders of magnitude, with no bias against membrane or low-level regulatory proteins. In each abundance class, mass spectrometry identified as many or more proteins than the tagging methods. Quantitation of haploid vs. diploid yeast using SILAC revealed widespread, but not complete, regulation of the pheromone signaling pathway. Comparison of proteome changes in this and in the Drosophila system revealed that the proteome is largely co-regulated with the transcriptome for long-term changes that are significant at the message level. We are now trying to achieve comprehensive proteome coverage in mammalian systems and report promising interim results. Important parameters are increased peptide sequencing speed as well as improved sample preparation. We further discuss extension of the SILAC method to in vivo situations in mice and in humans. The talk will also encompass practical issues of sample preparation using efficient proteome solubilization, OFFGEL peptide fractionation, StageTip clean-up, computational proteomics analysis with the free MaxQuant family of algorithms, as well as downstream bioinformatics.
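
The SILAC quantitation step described above can be illustrated with a minimal sketch: protein-level heavy/light ratios are commonly summarized from peptide-level intensity ratios, often by taking the median. The Python sketch below uses invented peptide intensities and is a generic illustration, not the MaxQuant implementation referred to in the abstract.

from statistics import median

# Toy peptide-level SILAC measurements: (protein, heavy_intensity, light_intensity).
# Values are invented for illustration only.
peptides = [
    ("STE2", 4.2e6, 1.1e6),
    ("STE2", 3.9e6, 1.0e6),
    ("FUS3", 2.5e6, 2.4e6),
    ("FUS3", 2.8e6, 2.6e6),
    ("ACT1", 9.0e6, 8.8e6),
]

# Group peptide H/L ratios by protein.
ratios_by_protein = {}
for protein, heavy, light in peptides:
    ratios_by_protein.setdefault(protein, []).append(heavy / light)

# Summarize each protein as the median of its peptide ratios,
# a common robust choice for SILAC-style quantitation.
for protein, ratios in sorted(ratios_by_protein.items()):
    print(f"{protein}: H/L = {median(ratios):.2f} (n={len(ratios)} peptides)")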

J Biomol Tech. 2009 February; 20(1): 1–2.

Genomics of Extinct and Endangered Species

Abstract

Only 200 years ago the concept of static species was challenged by the first fledgling theories of evolution. Since then, ever-growing collections of fossils have allowed mankind to gain insight into the constant changes that have shaped fauna and flora for eons. While these studies were initially of a strictly anatomical nature, it was discovered only in the last 25 years that, in addition to petrified structural information, biomolecules have also survived the demise of individuals and species. With the successful sequencing of DNA retrieved from the Quagga, an extinct species of zebra, the field of ancient DNA was founded in 1984. The last three years have seen a rapid succession of improvements in sequencing of ancient DNA, driven by the onset of next-generation sequencing. This has now resulted in a draft version of the mammoth’s nuclear genome, together with an extensive set of complete mitochondrial sequences of this extinct group of Proboscideans. The availability of a large set of genetic information from an extinct species allows for the assessment of the genetic diversity of animals that ceased to exist tens of thousands of years before our time, and thereby allows investigation of the contributions made by genetic factors to the extinction process. For these analyses, mitochondrial markers have historically been used, as the survival of nuclear DNA on a larger scale had not been documented in fossils until recently. Studies on mammoth populations have revealed a surprisingly small genetic diversity, as well as the existence of two previously undetected groups of animals. While this in itself does not explain the extinction process, it raises the question of a contributing factor in addition to population size. We have followed up on this lead from mammoth populations in an analysis of the recently extinct Tasmanian Tiger, whose genetic analysis has been attempted unsuccessfully for the last decade. Our findings have led us to believe that the observations made for the two extinct species also come into play in endangered species that are on the brink of extinction today. We are therefore sequencing the nuclear genome of the Tasmanian Devil, the largest remaining marsupial carnivore. This species is currently dramatically endangered by a form of infectious cancer. We will show how the lack of biological diversity in this species is relevant to the animal’s failing immune response at the onset of the disease.

The lessons learned from past extinctions, documented through the efforts of today’s paleogenomics, may therefore help not only to assess the endangerment status of a species, but also to keep it from going extinct by directing breeding programs that are already underway.

J Biomol Tech. 2009 February; 20(1): 2.

Overview of ABRF Progress for 2008 and a Look Toward the Future

Abstract

A summary will be given of the accomplishments of the ABRF Executive Board and other parts of the organization during 2008. A substantial amount of effort has been directed toward infrastructure and administrative matters, such as working with the new business and meeting management offices, optimization of our main web site, and the creation of our own annual meeting web site (which is now operational and integrated with our main web site). A report will also be given on the ABRF Executive Board’s goals and plans for 2009 and beyond.

J Biomol Tech. 2009 February; 20(1): 2.

Financial State of the ABRF

Abstract

The ABRF remains a financially viable organization but faces the challenge of a stable membership and income combined with escalating costs. The Corporate Relations Committee (CRC) has done an exemplary job of recruiting corporate sponsors and newly added academic sponsors. Substantial savings in expenditures have been realized by having an electronic JBT, but these have been offset by a loss in advertising revenue. Substantially less revenue is being generated by annual meetings due to static attendance but rising costs. Stock market losses have affected ABRF investments somewhat, but a conservative portfolio has minimized our losses compared to the general market, and more than two years of operating expenses are in reserve. 2009 will bring some “belt-tightening” measures to efficiently utilize reduced income.

J Biomol Tech. 2009 February; 20(1): 2.

The ABRF Education Committee Update

Abstract

In 2008, the Education Committee spearheaded the development of five new Satellite Educational Workshops. We sent out an e-mail blast requesting topics and developed a survey in which a number of topics were rated by the membership (136 respondents). Based on the survey results, we received approval for implementation of the following workshops: Next Generation DNA Sequencing, Proteomics Instrumentation, Recombinant Protein Laboratory, HPLC Theory and Practice, and Proteome Informatics. We have worked with the meeting management company and the Executive Board to begin to develop an SOP and timeline for facilitating future workshop planning. Additionally, we have established objective rubrics for judging posters and secured funding for poster awards, a very important aspect of the Annual Meeting. We have represented the ABRF at the Annual Biomedical Research Conference for Minority Students and are investigating the potential of offering summer internships to undergraduate minority students. In the future, the EdComm hopes to help develop web-based educational resources to support ABRF members and core users.

J Biomol Tech. 2009 February; 20(1): 2.

ABRF Membership Committee Activity 2008

Abstract

The ABRF membership represents a large international community of life sciences core facilities and biotechnology research laboratories that cover a wide range of research, technology platforms, and life sciences applications. The organization’s membership trends reflect both the growth of biotechnology fields over time and the relevance of the organization to these fields. The ABRF Membership Committee tracks these trends with tools such as survey questionnaires, analyzes their significance, and recommends policies to help the ABRF retain and add members and to help the ABRF remain relevant to the evolving interests of its members. We will present both the past trends and current focus of the ABRF membership, including technology interests, institutional affiliation, and geographic distribution. We will also explore the goals and challenges of maintaining and possibly expanding the membership of the ABRF both now and in the future.

J Biomol Tech. 2009 February; 20(1): 3.

(s1) Personalized Medicine

Abstract

The session will focus on the impact of microarray and NextGen sequencing technologies on the development of personalized medicine applications. Recent progress and future trajectories will be discussed, as well as regulatory issues and other challenges that must be met to successfully implement these technologies in a clinical setting. The session will feature talks from Dr. Mary Relling (St. Jude) on linking genome wide genotype data with global gene expression data in patient samples, Dr. Ulrich Broeckel (Medical College of Wisconsin) on implementing microarray tests for copy number and pharmacogenomics applications in a CLIA/CAP environment, and Dr. Ed Highsmith (Mayo Clinic) on the development of NextGen Sequencing as a tool for clinical testing.

J Biomol Tech. 2009 February; 20(1): 3.

s1-a Moving Genomics from Research to Clinical Care in Childhood Leukemia

Abstract

Safety and efficacy of drug therapy are paramount in children with leukemia: individualization of therapy may increase the chance of cure of a life-threatening disease and decrease the chance of serious adverse effects of what can be toxic drug therapy. We and others have used candidate genotyping techniques, whole-genome interrogation of leukemic blasts and normal leukocytes, whole-genome SNP arrays, and whole-genome comparative genomic hybridization array techniques (for leukemic blasts vs germline) to discover genomic variation associated with interindividual differences in drug response in children with leukemia. Use of candidate gene genotyping is incorporated into the treatment of childhood leukemia at our institution to optimize use of a few medications. Additional genotyping is conducted on a research basis. At St. Jude Children’s Research Hospital, drug therapy individualization is facilitated by a fully penetrant institutional mission that combines treatment and research, the effective maintenance of multidisciplinary disease-oriented research teams, and substantial financial support for clinical care, for generating and curating research data, and for shared research resources. The integration of clinical and research pharmacokinetics in a single department also facilitates utilization of pediatric biological specimens and provision of clinical pharmacogenetic consults. Examples of drug individualization in pediatric acute lymphoblastic leukemia, and their integration with pharmacokinetic, pharmacodynamic, and pharmacogenetic research, will be presented.

J Biomol Tech. 2009 February; 20(1): 3.

(s2) Towards a Biofuels Economy: Technologies and Infrastructure Associated with Developing New Bioenergy Crops

Abstract

The combined issues of a dwindling oil supply, the impact of fossil fuel on global climate, and concerns over national energy security are collectively driving an unprecedented search for alternative forms of energy, in particular transportation fuels. Public and private investment in biofuels and bioenergy crops research has escalated to the point where there is a growing demand for “biofuels” analytical facilities. This workshop will highlight some core technologies and techniques that are in growing demand for the profiling of plant biomass, and the short- to mid-term future of biofuels research will be discussed.

J Biomol Tech. 2009 February; 20(1): 3.

s2-a Instrumentation and Methods for the High Throughput Analysis of Plant Materials as a Resource for Biofuels

Abstract

One bottleneck in generating economically sustainable biofuels from plant biomass is the recalcitrance of the biomass to degradation by chemical or enzymatic treatments. Plant biomass consists mainly of cell wall-based lignocelluloses, composite materials consisting of polysaccharides such as cellulose and hemicelluloses but also polyphenols such as lignin. Various strategies can be employed to enhance degradation of wall materials, a process also known as saccharification. The feedstock, the plant biomass, can be selected through the identification of specialized energy crops that generate high biomass. In addition, lignocelluloses can be altered through, e.g., breeding programs or genetic engineering, resulting in plants with more easily accessible wall polymers. Moreover, chemical treatments can be optimized to loosen wall architecture, and enzymes can be engineered to more efficiently degrade lignocelluloses. Here we present a high-throughput robotic platform that can evaluate and test all of the above-mentioned parameters: it assesses plant lignocellulosic structure by mass spectrometry and measures saccharification output while varying feedstock input, chemical pre-treatment steps, and enzymes, including mixtures thereof.

J Biomol Tech. 2009 February; 20(1): 4.

s2-b From Single Molecules Spectroscopy to Process Batch Fermentation: Methods and Instrumentation Implemented in the Biofuel Research Laboratory at Cornell University

Abstract

“The human command of inanimate energy grew from 0.9 megawatt-hour per year per person in 1860 to nearly 19 megawatt-hours per year per person in 1990.” This increase has been achieved through the development of fossil fuels and advancements in energy technology, which are now reaching their limits. Biomass, especially dedicated energy crops, can be part of the renewable energy portfolio. However, strong research capacity and strategic partnerships in plant science, molecular biology, genetics, material science, nanobiotechnology, computational biology, and process engineering need to be integrated to explore biological phenomena and paradigms that are important for addressing environmental, biological, physical, and chemical barriers to sustainable bioenergy production.

To help advance technologies that convert biomass to bioenergy and bioproducts, Cornell University has used a $10 million grant from the Empire State Development Corporation (ESDC) to develop a Biofuels Research Laboratory. These facilities have been designed to enhance our ability to carry out research to overcome the physical, chemical, and biological barriers to liberating sugars from energy crops such as switchgrass, cold-tolerant sorghum, and woody biomass, and to biologically convert these sugars into biofuels and bioproducts such as ethanol, butanol, hydrogen, and methane. This facility houses laboratories for the following activities: (1) biomass pretreatment, (2) material size reduction and handling, (3) solid-state fermentation, (4) biochemical conversion, (5) submerged fermentation, and (6) state-of-the-art analytical systems.

To address the multiple fundamental research angles and integrate them in an engineering framework, we are developing and exploring methods such as TIRF microscopy for single-molecule analysis of saccharification processes; integrated FT-NIR and chemometric modeling for biomass characterization and process analysis; integrated high-throughput systems (UV-vis, MIR, HPLC, RT-PCR) to carry out microorganism screening, studies of enzymatic synergism, and metabolic activities in a systems biology framework; and bio-conversion studies in heterogeneous frameworks.
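
The chemometric modeling mentioned above usually means multivariate calibration of spectra against reference measurements, most commonly by partial least squares (PLS) regression. The following minimal Python sketch, using scikit-learn on synthetic spectra and an invented “glucan content” response, illustrates the general approach and is not the laboratory’s actual model.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for FT-NIR spectra: 120 biomass samples x 200 wavelengths.
# The "glucan content" response is a hidden linear combination of a few bands plus noise.
n_samples, n_wavelengths = 120, 200
spectra = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)  # smooth-ish curves
true_coef = np.zeros(n_wavelengths)
true_coef[[40, 90, 150]] = [0.8, -0.5, 0.3]
glucan = spectra @ true_coef + rng.normal(scale=0.5, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(spectra, glucan, random_state=0)

# Calibrate a PLS model with a handful of latent variables, as is typical in chemometrics.
pls = PLSRegression(n_components=5)
pls.fit(X_train, y_train)
print("R^2 on held-out samples:", round(pls.score(X_test, y_test), 3))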

J Biomol Tech. 2009 February; 20(1): 4.

s2-c Identifying Molecular Structural Features of Biomass Recalcitrance Using Non-Destructive Microscopy and Spectroscopy

Abstract

Lignocellulosic biomass is a renewable material that can be converted to simple sugars and then fermented to transportation fuels. The most costly step of current technology used for biofuels production is the liberation of fermentable sugars from recalcitrant biomass, primarily the cell walls of higher plants, where cellulose forms the core of the relatively rigid microfibrils and other polymers are possibly randomly assembled or cross-linked with each other to form so-called “liquid polymer crystal” nano-composites. The detailed molecular structure of cell walls, however, remains essentially unknown today, due to the limitations of traditional microscopy, which does not allow direct observation of these biological entities at the nanometer scale. Our goal is to develop and apply non-destructive imaging tools that are suitable for characterizing biomass conversion processes at high spatial and chemical resolution. In this paper, I will summarize new developments in scanning probe and non-linear optical microscopy, and their applications in imaging biomass and its conversion processes.

J Biomol Tech. 2009 February; 20(1): 4.

(s3) Optical Imaging

Abstract

Applying technical advances in optical imaging to intravital microscopy in laboratory animals is challenging, yet highly rewarding. Fundamental hurdles include optical absorbance and scattering in dense, thick, pigmented tissues; working distances limited to relatively superficial anatomy; and monitoring and maintenance of complex physiological parameters. In addition, detection and measurement of highly resolved information in the context of macroscopic, living systems requires enormous amounts of data collection, all while the subject appears to be sleeping calmly, yet at the microscopic level behaves as a sustained, controlled explosion. In this session we will provide several examples of micro-imaging in live rodents, including point-scanning confocal microscopy to assess real-time visualization of tumor destruction by CD8+ T cells; high-speed, line-scanning confocal microscopy to assess blood flow in transplanted pancreatic islets and muscle; and multiphoton-excited fluorescence microscopy to evaluate renal blood flow, clot formation, hematopoietic stem cell movement in bone marrow, biliary secretion in the liver, and tumor imaging. Remarkably, these applications have been implemented in heavily-used core resources.

J Biomol Tech. 2009 February; 20(1): 4–5.

s3-a Developments in Microscopic Imaging of Intravital Dynamics

Abstract

A large array of materials and equipment is available for imaging and measuring micro-anatomy and molecular dynamics in the context of isolated cells and tissues. However, the detailed roles and behaviors of biological molecules and cells are ideally studied in their natural environment, exposed to a host of highly evolved, complex feedback and support systems. Consequently, significant challenges and exciting scientific opportunities are emerging as investigators learn how to integrate the great advances in microscopy and probe development with techniques for imaging in live animals. The goal of this presentation is to provide a very brief overview of fundamental hurdles and current approaches to intravital microscopy of live mice using examples from point- and line-scanning confocal microscopy experiments conducted in a core resource environment. Following a simple definition of the imaging goal, essential experimental details are considered, including selection of probes, imaging platform, objective lenses, animal surgery, anesthesia, temperature control, monitoring vitals, and dealing with motion artifacts. These issues are briefly described in three examples: 1) real-time, direct visualization of tumor destruction by CD8+ T cells conducted over a period of weeks, 2) rapid detection of blood flow patterns in transplanted pancreatic islets, and 3) insulin-regulated blood flow in leg muscle. The integration of materials, methods, equipment, and expertise results in significant team efforts to reveal new information regarding the microscopic behavior of biological cells and molecules in their natural environment.

J Biomol Tech. 2009 February; 20(1): 5.

s3-b Intravital Multiphoton Microscopy in a Microscopy Core Resource

Abstract

Although intravital microscopy has been conducted for nearly 200 years, its capabilities have recently been extended with the development of multiphoton microscopy, which provides the capability to collect images with sub-micron resolution hundreds of microns into living tissues. Parallel developments in the field of molecular biology and fluorescent probe chemistry now make it possible for biomedical researchers to evaluate the functions of individual proteins in individual cells in living animals. Once the domain of specialized laboratories, multiphoton microscopy has been commercially developed to the point that it is becoming a standard tool of microscopy core resources. This development has been critical to the application of multiphoton microscopy to a broad range of new biomedical questions, and the realization of the potential of multiphoton microscopy as a research tool. Here we present a review of the diverse applications of multiphoton microscopy in biomedical research along with a summary of our experience in implementing multiphoton microscopy in a core resource for academic and industry research.

J Biomol Tech. 2009 February; 20(1): 5.

(s4) Next Generation Nucleic Acid Quantitation

Abstract

As with DNA sequencing, next-generation instrumentation for nucleic acid quantification has emerged. These next-generation technologies feature the ability to conduct nucleic acid quantification studies at the nanoscale. This scientific session encompasses three presentations that feature the use of these emerging technologies to quantify levels of nucleic acids.

J Biomol Tech. 2009 February; 20(1): 5.

s4-a Profiling Circulating Tumor Cells as Prognostic Biomarkers in Patients with Metastatic Prostate Cancer

Abstract

Tumor-specific markers developed to select targeted therapies and to assess outcome in clinical trials and clinical practice remain significant unmet medical needs. Currently, assessment of disease progression in patients with metastatic prostate cancer relies on changes in PSA, which are only modestly associated with actual disease prognosis, and on imaging of distant metastases, which has its own limitations. Profiling of prostate tumor specimens demonstrated that patients with progression despite castrate levels of testosterone harbor changes in the androgen receptor (AR), such as gene amplification, rendering tumor cells more sensitive to lower levels of androgen and promoting tumor proliferation. Antibody-based enumeration of circulating tumor cells (CTCs), identified by immunocytochemistry as cells staining positive for epithelial cellular adhesion molecule (EpCAM) and nuclear DAPI and negative for the CD45 white blood cell marker, has been used to predict prognosis of patients with castrate metastatic prostate cancer. In patients with progressive castrate metastatic prostate cancer, a large proportion of patients (57%) had 5 or more circulating tumor cells, while only 21% of patients had 1 or fewer CTCs per blood sample. CTC counts were higher in patients with bone involvement compared to soft tissue-only metastasis, while only moderate correlations with BSI and PSA were noted. CTCs isolated from small volumes of peripheral blood are viable for molecular profiling at the genomic and transcriptional levels. FISH analysis of AR in CTCs showed gene amplification or gains in copy number more frequently in patients with higher CTC counts. RT-PCR has been used to further characterize AR-dependent gene expression in enriched CTCs. These results suggest that the shedding of cells into the circulation and the molecular profiles of CTCs are intrinsic properties of the tumor, which can significantly impact patient management and clinical trial design. The profiles identified are being prospectively evaluated in clinical trials currently evaluating novel agents acting on the AR pathway.

J Biomol Tech. 2009 February; 20(1): 5–6.

(s5) Lessons in Optimizing Specialty Facilities: Forensics, Systematics and Clinical Diagnostics

Abstract

Some core DNA facilities need a great deal of flexibility to support a variety of constituents. The human genome race and the concentration of effort into a small number of dedicated centers showed us how much we can learn from the optimizations of core facilities that specialize in structured, repeated, and sometimes regulated experiments. This session will introduce the audience to some applications that they may not ordinarily hear about and present three examples of core lab specialization so we can learn from their optimizations. First, you will hear from a leading forensic biology facility that deals in human identification and criminalistics using a combination of extraction and typing techniques. Next, a commercial lab at the forefront of clinical decision making will present on the prediction of antiretroviral drug susceptibility from genotypic data in chronic infectious diseases like HIV and HCV. Finally, you will hear from the leader of the Smithsonian Institution’s DNA Barcoding project, contributing to an international effort to develop a DNA tag for every species on earth, beginning with every bird found in North America.

J Biomol Tech. 2009 February; 20(1): 6.

s5-a Assay Development: From Proof-of-Concept to CLIA-Validation of Assays for Patient Testing

Abstract

Developing assays suitable for diagnostic applications requires robust validation and in several respects parallels the clinical development of drug candidates. Resourcing increases exponentially during the assay development process, and selection criteria at each step gate which assays are carried forward. Attrition rates are greatest early in the process, and few assays that have been analytically validated undergo transitional development and validation for use in a regulated clinical setting. Our facility has generalized an assay development framework that allows us to systematically move from research and development to final validated assays. The process includes: (1) exploration of alternative assay formats, reagents, and methodologies; (2) optimization, scale-up, and automation of component processes (e.g., flow issues such as timing, workflow, hand-offs, layout, and ergonomics); (3) characterization of assay performance (e.g., guard-banding studies to determine dynamic ranges of various assay parameters and reagents); (4) validation for research use only (RUO), including other pre-validation experiments (e.g., development of test reagents, positive and negative controls, and the definition of acceptance criteria for accuracy, precision, reproducibility, linearity, sensitivity, and specificity to be used in assay validation); (5) design and execution of rigorous validation studies; (6) documentation, reagent qualification, and proficiency testing of certified Clinical Laboratory Scientists; and (7) final assay validation. Final assay validation is done in accordance with regulations specified by the Clinical Laboratory Improvement Amendments (CLIA, 1988); IT systems are validated in compliance with the Health Insurance Portability and Accountability Act (HIPAA). As an interim step (4b), assays may be developed and validated to the research-use-only (RUO) stage until such time as an assay for patient testing is required. These principles will be illustrated using examples from our commercially available HIV-1 resistance tests.
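
To make the acceptance-criteria step concrete, the short Python sketch below computes two of the listed metrics, accuracy (as percent recovery) and precision (as percent coefficient of variation), from replicate measurements of a control. The values and pass/fail thresholds are hypothetical and do not represent the facility’s actual criteria.

import statistics

# Replicate measurements of a positive control with a known (nominal) value.
# Values and acceptance thresholds are hypothetical, for illustration only.
nominal = 100.0
replicates = [96.2, 101.5, 98.7, 103.1, 99.4, 97.8]

mean = statistics.mean(replicates)
accuracy_pct = 100.0 * mean / nominal                 # accuracy as percent recovery
cv_pct = 100.0 * statistics.stdev(replicates) / mean  # precision as percent CV

print(f"Accuracy: {accuracy_pct:.1f}% recovery")
print(f"Precision: {cv_pct:.1f}% CV")

# Example acceptance criteria of the kind defined before validation begins.
passes = 90.0 <= accuracy_pct <= 110.0 and cv_pct <= 15.0
print("Meets acceptance criteria:", passes)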

J Biomol Tech. 2009 February; 20(1): 6.

s5-b Biodiversity Documentation via DNA Barcoding at the Smithsonian Institution’s L.A.B.

Abstract

Post genome technologies have enabled massive increases in throughput and reduced costs, but most of these advances have not been applied efficiently to global biodiversity documentation due to other constraints. I’ll use a couple of examples including the DNA Barcoding/Barcode of Life program and the BioCode Moorea project to illustrate what we’ve done, where we are, and where (we think) we need to go with our technology development and IT infrastructure in order to facilitate “the 21st Century Museum.” Birdstrikes, stream water quality assessments, seafood substitution, invasive species identifications, and rapid biotic inventories all rely on, or benefit from, a library of known organisms with genetic data aiding identifications. The building of that library is non-trivial, and meta-data information flow is a major impediment. The DNA Barcoding effort is standardizing the types of information being captured, and increasing the quality of and confidence in the data and results by linking chromatograms and voucher specimens to sequence records. The Consortium for the Barcode of Life (CBOL) has established a Leading Labs Network (LLN) to gather and disseminate “best-practice” and how-to information on all steps in the process—from permitting and collecting, through identification and tissue sampling, and the basics of the lab work-flow, including data quality control and analysis—and is working on ways to efficiently distribute that information. I’ll report on progress on our species identification projects, our all-taxon biotic inventory projects, our comparison of technology studies, and our efforts to Wikify the sharing of information on all our processes.

J Biomol Tech. 2009 February; 20(1): 6–7.

s5-c Existing and Emerging Biotechnologies for Forensic DNA Applications

Abstract

From the very first application of RFLP methods to identify perpetrators of crime back in the mid-1980s, the forensics community has continued to make use of the biotechnologies that have been developed since. By far the most widespread and influential of these has been PCR, which, of course, has had an enormous impact on all molecular studies. PCR techniques are used daily by the thousands of forensic DNA facilities across the country, and around the world, primarily to determine STR profiles from 10–15 highly discriminating loci. This has resulted in the development of huge convicted-offender databases (e.g., more than 6.4 million profiles in the U.S. database alone), and has allowed for the rapid processing of hundreds of thousands of sexual assault cases in just the last 5–10 years. From its roots, PCR has branched off into areas such as SNP genotyping, real-time probing of molecular disease, and now into the next-generation DNA sequencing technologies that have revolutionized modern, 21st century approaches to genomics and have paved the way for the era of the genomicist. With these recent, rapid, and expansive developments in DNA sequencing technology, it is now possible to consider how widespread DNA sequencing approaches can be used in forensic cases. Forensic investigations are oftentimes quite difficult to predict, and forensic evidence comes in all sorts of sizes, shapes, and sources. Therefore, while the instrumentation used by forensic DNA crime laboratories is highly advanced (robotics, real-time PCR, capillary array electrophoresis), the nature of forensic evidence has made it difficult to establish high-volume, core facility environments. The core facility concept, past and present, and in relation to new generation sequencing approaches, will be discussed.

J Biomol Tech. 2009 February; 20(1): 7.

(s6) Reproducibility in Quantitative Proteomics

Abstract

As quantitative proteomics methods become more extensively used by multiple laboratories, inter-laboratory reproducibility has become more of an issue. Using a common sample, comparable results ought to be achieved irrespective of the methodological approach or institution. Despite open studies from the ABRF Proteomics Research Group (PRG) over the last couple of years, and a closed study carried out by HUPO among others, cross-laboratory and cross-approach studies are still few and far between. In this session we discuss the importance of experimental design, knowledge of the points of maximum variance within an experimental protocol, and appropriate data analysis within quantitative proteomics, and their impact on reproducibility. We go on to introduce two cross-laboratory studies in which 2D gels and MRM analyses, respectively, are carried out on the same samples across multiple institutions.

J Biomol Tech. 2009 February; 20(1): 7.

s6-a Reproducibility of 2D Gel-based Proteomics Experiments

Abstract

Large-scale, in-depth proteomic studies of cells, tissues, or body fluids, involving multiple laboratories and employing a variety of proteomics technologies, have been executed in the past. Although these studies were not focused on achieving maximal reproducibility between the participating labs, but rather on exploring the deliverables of the individual technologies, it is apparent that different labs using different technologies on the same samples obtain rather different results. For any comparative study, i.e., analysis of two or more different but related samples, such as healthy versus diseased or different drug treatments, such an experimental design poses an even greater challenge.

The availability of internal or external standards in conjunction with comparative proteomics experiments is thought to be a major step forward in obtaining data sets of such a quality that bioinformatics approaches can be applied productively. In order to take the next step forward, it was deemed important by the HUPO-IAB (Industrial Advisory Board of the Human Proteome Organization) to focus particular attention on analysis of the reproducibility of the various proteomics technologies.

A first study was initiated and was supported by five labs in a worldwide setting. The goal of this study was to demonstrate that cross-lab experiments can be done, which would provide a very necessary leap forward in the ability to collect reliable data in proteomics. The study used two different but related samples of Haemophilus influenzae, one of which was treated with actinonin. The study was a cross-lab two-dimensional polyacrylamide gel electrophoresis analysis of the samples, as it was believed by the organizers that with the current technologies available and strict adherence to protocols, this would be possible, although a major challenge.

Ready-to-use samples were provided to the labs. Each participating lab ran the two different but related samples on 2D gels according to the supplied protocol, and then performed an analysis between the two samples using the software provided to identify what were, in their opinion, the top 200 significantly changing spots. This of course includes newly appearing spots. The full analysis was uploaded to Nonlinear Dynamics (a HUPO-IAB member) for the comparison between labs and further analysis.

The presentation will provide an overview of the results achieved and demonstrate that 2D gel experiments can indeed be reproduced in different labs. This is a major step forward for proteomics as a discipline.

J Biomol Tech. 2009 February; 20(1): 8.

s6-b Reproducibility of Protein MRM-Based Assays: Towards Verification of Candidate Biomarkers in Human Plasma

Abstract

Verification of candidate protein biomarkers requires high-throughput analytical methodology that is quantitative, sensitive, and specific. In addition, robust standard operating procedures (SOPs) for sample preparation, data acquisition, and analysis must be strictly followed in order to implement these types of studies across multiple sites with minimum variation in the results. Currently, validation of biomarkers is accomplished by employing immunoassays. However, development of immunoassays for protein biomarkers depends on suitable antibodies that, if not available, can require considerable time and expense to produce. Recently, the utility of multiple reaction monitoring (MRM) coupled with stable-isotope dilution mass spectrometry (SID-MS) for quantification of proteins in human plasma and serum has been demonstrated. However, the reproducibility and transferability of protein-based MRM assays across multiple laboratories has yet to be demonstrated. Toward this goal, we describe a study that was implemented to assess intra- and inter-laboratory performance of an MRM/SID-MS assay for quantitating seven target proteins that were spiked into human plasma.
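
The quantitation principle behind MRM with stable-isotope dilution is a simple ratio calculation: the endogenous (light) peptide concentration equals the light-to-heavy peak-area ratio multiplied by the known amount of spiked heavy-labeled standard. The Python sketch below illustrates this arithmetic with invented peptide names and peak areas; it is not the study’s data or SOP.

# Invented peptide names and MRM peak areas for a plasma digest, for illustration only.
# Each entry: (peptide, light_area, heavy_area, spiked_heavy_fmol_per_uL).
measurements = [
    ("peptide_A", 1.8e5, 3.6e5, 50.0),
    ("peptide_B", 9.0e4, 3.0e5, 25.0),
    ("peptide_C", 2.4e5, 1.2e5, 10.0),
]

for peptide, light, heavy, spiked in measurements:
    # Endogenous concentration = (light/heavy area ratio) x known spiked standard amount.
    conc = (light / heavy) * spiked
    print(f"{peptide}: {conc:.1f} fmol/uL")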

J Biomol Tech. 2009 February; 20(1): 8.

(s7) Computational Approaches for Metagenomic and Supragenomic Data Sets

Abstract

Next-generation DNA sequencing technologies, when applied to microbial communities, provide vast amounts of data. Whether these data are produced by deep 16S sequencing of a microbiome, result from metagenomic and metabolomic analyses of complex ecosystems, or come from intraspecies supragenomic analyses, they require user-friendly, robust computational analysis methods. This session is designed to provide participants familiarity with state-of-the-art computational aids for the design, processing, and analysis of large and complex microbial DNA sequence data sets.

J Biomol Tech. 2009 February; 20(1): 8.

s7-a Development and Application of a Global Comparative Genomics Pipeline for Performing Inter-isolate Analyses within Microbial Species

Abstract

We have developed a comparative genomics pipeline for performing genome-level comparisons among any number of isolates for any microbial species. This pipeline was developed to test the Distributed Genome Hypothesis, which posits that chronic bacterial pathogens utilize a strategy of polyclonal infection and continual reassortment of genic characters to ensure persistence. The pipeline begins by performing gene clustering according to an empirically derived single-linkage algorithm. Genes are binned as either core or distributed. The core genome is composed of those gene clusters that are shared among all strains, and the supragenome is composed of all core and distributed gene clusters. The system then plots the core and supragenome sizes as a function of the number of strains sequenced, and performs an exhaustive pair-wise genic comparison among all strains to determine the degree of gene sharing for each individual strain pair, as well as providing the average level of gene sharing/differences for the species as a whole. The percentage of distributed genes per genome and the rate of increase in the number of unique genes per genome sequenced are then used by the finite supragenome model to predict the number of genomes that need to be sequenced to obtain any desired level of coverage of the species supragenome. The core genome data serve as an excellent marker for determining whether an assembled group of strains all belong within a species, or whether current taxonomy has lumped together multiple independent species; they can also be used to determine whether two or more current species are actually a single species. We have used this pipeline to analyze >20 species and have determined that between 10% and 40% of the genome of each species is composed of distributed gene clusters. Comparisons within a species revealed average genic differences of between 100 and >1000 clusters.
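
Once each strain is reduced to a set of gene-cluster identifiers, the core/distributed binning and the exhaustive pair-wise comparison described above amount to straightforward set operations. The Python sketch below, using made-up cluster labels for three hypothetical strains, illustrates this bookkeeping; it does not reproduce the pipeline’s clustering algorithm or the finite supragenome model.

from itertools import combinations

# Each strain is represented by the set of gene clusters it carries (labels invented).
strains = {
    "strain_A": {"c1", "c2", "c3", "c4", "c7"},
    "strain_B": {"c1", "c2", "c3", "c5", "c8"},
    "strain_C": {"c1", "c2", "c3", "c6", "c7", "c9"},
}

# Core genome: clusters present in every strain.
core = set.intersection(*strains.values())
# Supragenome: all clusters seen in any strain; distributed = supragenome minus core.
supragenome = set.union(*strains.values())
distributed = supragenome - core

print("core:", sorted(core))
print("distributed:", sorted(distributed))
print(f"distributed fraction of supragenome: {len(distributed) / len(supragenome):.0%}")

# Exhaustive pair-wise comparison: shared clusters and genic differences per strain pair.
for (a, genes_a), (b, genes_b) in combinations(sorted(strains.items()), 2):
    shared = genes_a & genes_b
    differing = genes_a ^ genes_b
    print(f"{a} vs {b}: {len(shared)} shared, {len(differing)} differing clusters")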

J Biomol Tech. 2009 February; 20(1): 8–9.

s7-b Massively Parallel Barcoded Pyrosequencing Reveals Unexpected Diversity in the Human Microbiome

Abstract

Massively parallel pyrosequencing, such as that provided by the 454 GS FLX instrument, has revolutionized our view of the microbial world in general and the human microbiome in particular. The use of error-correcting bar-codes allows hundreds of microbial samples to be sequenced in parallel with extremely low rates of sample misassignment, and powerful tools for community comparison such as UniFrac, which compares samples in terms of the evolutionary history they share, can allow us to understand the relationships among hundreds or thousands of communities simultaneously. There are two main uses of this technique: sequencing of the 16S rRNA gene provides a detailed view of which taxa are present in a sample, whereas metagenomic shotgun sequencing can provide a total view of the community. In this talk, I describe our studies of various human body habitats, focusing on the gut and the hand. The overlap at the species level between individual microbiomes is extremely low, but radically different species assemblages converge on the same profile of gene functions at the whole-microbiome level. Thus, microbial ecosystems mirror macroscale ecosystems, in which e.g. grasslands and forests on different continents reach strikingly similar states while composed of different species: these analogies among ecosystems at different scales will be critical for understanding how changes in the human microbiome affect health and disease.
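
Error-correcting barcodes work because every barcode in the design differs from every other at several positions, so a read can be assigned to the nearest barcode as long as the number of mismatches stays small and the best match is unambiguous. The Python sketch below illustrates that idea with invented 8-nt barcodes and simple Hamming-distance decoding; real designs use dedicated codebooks and decoders.

# Minimal demultiplexing sketch: assign each read to the sample whose barcode is
# closest in Hamming distance, rejecting ambiguous or too-distant matches.
# Barcodes and reads are invented; real error-correcting designs use dedicated codebooks.

BARCODES = {
    "ACGTACGT": "gut_subject_1",
    "TGCATGCA": "gut_subject_2",
    "GGAACCTT": "hand_subject_1",
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def assign(read, max_mismatches=1):
    tag = read[:8]  # the barcode occupies the first 8 bases of the read
    distances = sorted((hamming(tag, bc), sample) for bc, sample in BARCODES.items())
    best_dist, best_sample = distances[0]
    # Reject if too many mismatches or if two barcodes are equally close (ambiguous).
    if best_dist > max_mismatches or distances[1][0] == best_dist:
        return None
    return best_sample

for read in ["ACGTACGTTTAGGCAT", "ACGTACTTGGACCATT", "NNNNNNNNAGGTTCCA"]:
    print(read, "->", assign(read))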

J Biomol Tech. 2009 February; 20(1): 9.

s7-c MG-RAST: a Web-based Tool for the Analysis of Metagenomic Data Sets

Abstract

Sequencing of random community shotgun data sets started just a few years ago. Today there are more than 50 gigabases of metagenomic data and more than 200 data sets available for comparison, and it is clear that the field has left its infancy. The Metagenomics RAST server (http://metagenomics.nmpdr.org) provides a number of critical services to the metagenomics community. It acts as the de facto repository and provides comprehensive analysis capabilities. Using data structures from the SEED (http://www.theseed.org) environment, the MG-RAST server provides reconstructions of both community composition and community metabolism. A user-friendly web interface allows navigation of the data sets and supports various approaches for data mining and downloading subsets of relevant data.

J Biomol Tech. 2009 February; 20(1): 9.

(s8) Label Free Detection

Abstract

This session will highlight selected technologies for label-free detection. Biochemically, label-free methods can be used to directly measure and characterize unperturbed intermolecular interactions. In cellular systems, they can be used to measure functional responses in primary and untransformed cells. Applications in screening and characterization will be discussed.

J Biomol Tech. 2009 February; 20(1): 9.

s8-a Label-Free Characterization of Protein-Peptide and Protein-Drug Interactions

Abstract

A wide variety of technologies are available for the measurement of intermolecular interactions through direct or indirect binding assays. Label-free methods can measure direct intermolecular associations and do not depend on many of the implicit assumptions inherent to labeled reporter systems. The methods vary greatly in terms of sample requirement, sensitivity, and adaptability to high-throughput formats, and several new technologies have recently become available to industrial and academic labs. This presentation will describe our work in developing small molecule inhibitors of a BCL6/corepressor protein-protein interaction interface.

J Biomol Tech. 2009 February; 20(1): 9.

s8-b Application of Differential Static Light Scattering and Isothermal Denaturation to Investigate Thermostability and Binding Specificity of Protein Families

Abstract

Structural genomics efforts have led to the expression of thousands of proteins, many of which have not been purified or characterized previously. Identification of small molecules that bind to and stabilize these proteins can promote their crystallization as well as provide valuable functional information. We have employed differential static light scattering (DSLS) and isothermal denaturation (ITD) to investigate the thermostability and ligand binding specificity of different families of human proteins. The screening results facilitated the comparison of substrate specificities and also identified compounds which appeared to be general inhibitors for some of these protein families. Moreover, other compounds were discovered that only bind to a subset of proteins in each family of proteins and thus appear to discriminate among different members of the family. Our preliminary results show that DSLS can also be used to screen and characterize membrane proteins.
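
DSLS experiments typically report an aggregation temperature (Tagg) obtained by fitting a sigmoid to the scattering-versus-temperature trace and taking its midpoint; ligand-induced stabilization appears as a shift of Tagg to higher temperature. The Python sketch below fits such a curve to synthetic data with SciPy and is a generic illustration, not the analysis used in the study.

import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, bottom, top, t_agg, slope):
    """Sigmoidal scattering response; t_agg is the curve midpoint (aggregation temperature)."""
    return bottom + (top - bottom) / (1.0 + np.exp((t_agg - t) / slope))

# Synthetic DSLS trace: light scattering rises sharply as the protein aggregates near 52 C.
rng = np.random.default_rng(1)
temps = np.linspace(25, 75, 60)
signal = boltzmann(temps, 100, 900, 52.0, 1.5) + rng.normal(scale=15, size=temps.size)

# Fit the sigmoid and report its midpoint; comparing apo vs. ligand-bound traces would
# reveal ligand-induced stabilization as a shift of t_agg to higher temperature.
params, _ = curve_fit(boltzmann, temps, signal, p0=[100, 900, 50, 2])
print(f"Estimated Tagg: {params[2]:.1f} C")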

J Biomol Tech. 2009 February; 20(1): 9–10.

s8-c Biochemical and Cell-based Detection by Epic System

Abstract

Label-free detection of molecular interactions and cellular signaling events is of great interest for the isolation of bioactive molecules. We have applied the Epic system to study a variety of biochemical and cellular processes, including kinase-mediated biomolecular interactions, virus infection, and receptor-mediated signaling events. The results and utilities of these studies will be discussed.

J Biomol Tech. 2009 February; 20(1): 10.

(s9) Stem Cell Applications

Abstract

Stem cells and their derivatives offer great hope for regenerative medicine. The potential of stem cells to ameliorate tissue injuries and acquired and heritable diseases is enormous. There are now many sources of stem cells available including embryonic stem cells (ES cells), induced pluripotent cells (iPS cells), umbilical cord stem cells, and adult stem cells from multiple tissues. To aid in the isolation, banking, and dissemination of the different stem/progenitor cell types, academic universities across the country have developed Stem Cell Cores. In addition to providing both cells and consultation, Stem Cell Cores often provide specialized training and reagents to investigators with interest in applying stem cell biology to their own research. The Stem Cell Applications session will discuss a variety of Stem Cell Core issues including: isolation, banking and quality control (standardization) of stem cells; student and investigator training; regulatory and safety issues; and moving stem cell technologies into the clinic. The featured speakers include: Jeffrey L. Spees, Ph.D., University of Vermont, Burlington; Karl Willert, Ph.D., University of California, San Diego; and Carol Ware, Ph.D., University of Washington, Seattle.

J Biomol Tech. 2009 February; 20(1): 10.

s9-a Adult Stem Cell Cores and Regenerative Medicine

Abstract

The Spees laboratory is broadly interested in the process of tissue repair after injury. We are currently developing therapeutics for stroke and myocardial infarction based on different subpopulations of adult human bone marrow progenitor cells and their secreted reparative factors. We are determining their abilities to rescue existing tissue and also to promote repair through the augmentation of resident tissue-specific stem/progenitor cells. In separate studies we are exploring the possible use of isolated human atrial cardiac progenitor cells for cell replacement strategies after myocardial infarction. As the UVM Stem Cell Core, we provide adult murine neural stem/progenitor cells, adult rodent bone marrow-derived stem/progenitor cells, and human umbilical cord stem/progenitor cells to investigators at UVM and nationally and internationally. We train students and investigators in how to isolate adult stem cells from their tissue of interest, how to expand them, and how to differentiate them.

J Biomol Tech. 2009 February; 20(1): 10.

s9-b Probing Human Embryonic Stem Cell Proliferation with Arrayed Cellular Microenvironment Technology

Abstract

We have developed a multi-factorial cellular microarray technology platform to rapidly screen and optimize culture conditions for any cell type of interest. Application of this technology to define culture conditions for human embryonic stem cells (hESCs) illustrates many of the challenges confronted by scientists who study and utilize hESCs in their laboratories. Previous research has primarily focused on developing defined media formulations for long-term culture of hESCs, with little attention to the establishment of defined substrates for hESC proliferation and self-renewal. Using the cellular microarrays, we identified fully defined and optimized culture conditions for the proliferation of hESCs. Through the systematic screening of extracellular matrix proteins and signaling molecules, we developed and characterized a completely defined culture system for the long-term self-renewal of hESCs. The efficacy of these conditions was confirmed over long-term culture using three independent hESC lines. In the future, the novel array platform and analysis procedure presented here will be applied towards the directed differentiation of hESCs and the maintenance of other stem and progenitor cell populations.

J Biomol Tech. 2009 February; 20(1): 10–11.

s9-c The Ellison Stem Cell Core

Abstract

The stem cell core at the University of Washington is housed in a new building, within a purpose-designed 3500-ft², 18-hood facility in the Institute for Stem Cell and Regenerative Medicine (ISCRM). The core was equipped through private donations, while investigator hood rent recaptures some of the running expense and a program project grant from NIGMS supports the participating investigators. We focus on human embryonic stem cells (hESCs), have 14 of the federally approved lines, and are actively generating new hESC lines from donated embryos. The core trains in hESC and mESC culture and provides expertise and practical support for the 14+ laboratories that utilize the core. This represents a broad range of interests in regenerative medicine that fosters collaborations and exchange of information encompassing the greater Seattle research community. The core extends resources on a collaborative basis to laboratories for which it is impractical to take on hESC culture. We assist with induced pluripotent stem (iPS) cell production for ISCRM investigators. The core research focus is on maintenance of pluripotency, manipulation of the gradations of developmental stages associated with pluripotency, and characterization and cryopreservation of hESC and iPS cells. The core has an abiding interest in comparative ESC studies, including mouse, non-human primate, dog, and mink ESCs. The future promises to include a focus on cancer stem cells and assistance in devising high-throughput screening assays.

J Biomol Tech. 2009 February; 20(1): 11.

(s10) Next Generation Sequencing

Abstract

Genome technology has become a central driving force for new discoveries and inventions in biomedical science and medicine. Recent discoveries from the international genome research consortium have reshaped our understanding of how the human genome functions. These findings indicate that the human genome of 3 billion base-pairs is a complex interwoven network and contains very little unused sequence. These new discoveries will drive the continued development of accurate, cost-effective and high-throughput DNA sequencing technologies to decipher the functions of the complex genome for applications in clinical medicine and healthcare. The new DNA sequencing technologies will also serve as fundamental discovery tools for investigations in comparative genomics, the functions of the nervous system and the Cancer Genome Project. This session will cover the development of new DNA sequencing technologies and harnessing the power of the new technologies to solve unique problems in biology.

J Biomol Tech. 2009 February; 20(1): 11.

s10-a Toward the $1000 Genome: Molecular Engineering Approaches for DNA Sequencing by Synthesis

Abstract

DNA sequencing by synthesis (SBS) on a solid surface during the polymerase reaction offers a new paradigm for deciphering DNA sequences. We are pursuing the research and development of this novel DNA sequencing system using molecular engineering approaches. In one approach, four nucleotides (A, C, G, T) are modified as reversible terminators by attaching a cleavable fluorophore to the base and capping the 3′-OH group with a small reversible moiety so that they are still recognized by DNA polymerase as substrates. DNA templates consisting of homopolymer regions were accurately sequenced by using these new molecules on a DNA chip and a four-color fluorescent scanner. This general strategy of rationally designing cleavable fluorescent nucleotide reversible terminators for DNA sequencing by synthesis is the basis for a newly developed, next-generation DNA sequencer that has already found wide applications in genome biology. In another approach, we have solved the homopolymer sequencing problem in pyrosequencing by using non-fluorescent nucleotide reversible terminators (NRTs). We have also developed a new SBS approach using these NRTs in combination with four cleavable fluorophore-labeled dideoxynucleotides. DNA sequences are determined by the unique fluorescence emission of each fluorophore on the ddNTPs. Upon removing the 3′-OH capping group from the dNTPs and the fluorophore from the ddNTPs, the polymerase reaction reinitiates and the DNA sequence continues to be determined. Various DNA templates, including those with homopolymer regions, were accurately sequenced by using this hybrid SBS method on a chip and with a four-color fluorescent scanner. Adapting our general strategy of rationally designed cleavable fluorescent nucleotide reversible terminators for DNA sequencing by synthesis to a single-molecule detection platform will have the potential to achieve the $1000 genome paradigm for personalized medicine.
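
The cyclic reversible-terminator logic can be mimicked with a toy simulation: because each cycle incorporates exactly one 3′-blocked, labeled nucleotide, a homopolymer run is read one base per cycle rather than producing a single merged signal. The Python sketch below is a schematic illustration of that cycle, not a model of the actual chemistry.

def sequence_by_synthesis(template, cycles):
    """Toy cyclic reversible-terminator SBS: one base read per cycle.

    Each cycle: (1) a 3'-blocked, fluorescently labeled nucleotide complementary to the
    next template base is incorporated, (2) its color is imaged, and (3) the fluorophore
    and the 3'-OH block are cleaved so the next cycle can proceed.
    """
    complement = {"A": "T", "C": "G", "G": "C", "T": "A"}
    read = []
    for cycle in range(min(cycles, len(template))):
        incorporated = complement[template[cycle]]  # step 1: incorporation
        read.append(incorporated)                   # step 2: image the label
        # step 3: cleavage of label and blocking group (implicit: the loop continues)
    return "".join(read)

# A homopolymer stretch (AAAA) is resolved base by base rather than as one burst of signal.
template = "TTTTGCAAAAC"
print(sequence_by_synthesis(template, cycles=11))  # -> AAAACGTTTTG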

J Biomol Tech. 2009 February; 20(1): 11–12.

s10-b Nearly Complete Genomic Profiling of Individual Identified Neurons: SOLiD Approach

Abstract

What makes a neuron a neuron? What are the genomic bases of unique neuronal phenotypes? How different is the transcriptional profile of one neuron from another? Here, we attempt to identify and quantify nearly all RNA species present in a given neuron, thereby providing the first unbiased view of the operation of an entire genome from a single characterized neuron. First, we developed protocols for digital expression profiling of identified Aplysia neurons representing interneuron, sensory, and motor neuronal classes. The generated single-neuron cDNA libraries accommodate emPCR for two complementary massively parallel sequencing technologies (from pyrosequencing to sequencing-by-ligation, SOLiD) and allow assembly of the shorter sequence reads. In summary, >9,000,000,000 bases were obtained from just three identified neurons (~80 million sequences from each neuron). It is estimated that such coverage represents >99% of all RNA species in a single neuron. Using absolute real-time PCR, we demonstrated that our method is fully quantitative, with a dynamic range covering the entire neuronal transcriptome (from the rarest transcripts with only a few copies per cell to the most abundant RNAs with many thousands of copies). Quantitative real-time PCR and in situ hybridization further validated this method of digital profiling. As a result of our initial analysis we propose that >70% of the genome is expressed in a single neuron, with a significant fraction being non-coding RNA, including antisense RNAs. Many TxFrags (fragments of transcripts defined as genomic regions) are also found to be specific to neuron subtypes, strongly suggesting a role in the generation of neuronal identity. Furthermore, the numerous classes of non-coding RNAs revealed here likely represent both enormous complexity and crucial contributions of epigenetic RNA signaling mechanisms to the regulation of multiple neural functions, including establishing and maintaining diverse neuronal phenotypes in neural circuits. Support contributed by: NIH, NSF, the McKnight Brain Research Foundation, and ABI.
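
Digital expression profiling ultimately reduces to counting reads per transcript and normalizing by library size so that neurons sequenced to different depths can be compared. The Python sketch below computes a simple reads-per-million normalization for a toy count table; transcript names and counts are invented for illustration.

# Toy read counts per transcript for two single-neuron libraries (values invented).
counts = {
    "sensory_neuron": {"actin": 120000, "sensorin": 45000, "lncRNA_17": 300, "FMRFamide": 12},
    "motor_neuron":   {"actin": 110000, "sensorin": 40,    "lncRNA_17": 2500, "FMRFamide": 9000},
}

def reads_per_million(library):
    """Normalize raw counts by library size to make neurons sequenced to different depths comparable."""
    total = sum(library.values())
    return {tx: 1e6 * n / total for tx, n in library.items()}

for neuron, library in counts.items():
    rpm = reads_per_million(library)
    ranked = sorted(rpm.items(), key=lambda kv: kv[1], reverse=True)
    print(neuron, [(tx, round(v)) for tx, v in ranked])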

J Biomol Tech. 2009 February; 20(1): 12.

s10-c Using the SOLiD System for Genomic Solutions

Abstract

Since its introduction just over one year ago, the SOLiD™ System has rapidly developed longer read lengths, higher throughput, and a wide range of applications. The first portion of my presentation will focus on the latest technical advances that are part of the SOLiD 3 System. This system permits the generation of more than 400 million mappable 50-bp sequence reads with each run. This number of sequence tags is ideally suited for many genomic, transcriptomic, and epigenomic applications. The second portion of my presentation will focus on the wide range of end-to-end application solutions that have been developed for SOLiD users. Examples of these are miRNA discovery, whole transcriptome analysis, and ChIP-Seq studies. For the first time, the combination of molecular barcodes with the ultra-high throughput of the system makes these applications much more cost effective for use in a core environment and allows core lab managers to provide services to a broader range of clients.

J Biomol Tech. 2009 February; 20(1): 12.

(s11) Phosphoproteomics

Abstract

New developments in the analysis of phosphoproteomes have drawn great attention all over the world. In this session we will present new enrichment techniques such as ERLIC and improved methods for phosphopeptide fragmentation, in combination with different quantification techniques (label-free quantification, iTRAQ). Different model organisms for systems biology will be discussed. Furthermore, we will discuss relevant bioinformatics questions (e.g., phospho-databases and software).

J Biomol Tech. 2009 February; 20(1): 12.

s11-a A New Acid Mix Enhances Phosphopeptide Enrichment on Titanium- and Zirconium Dioxide for Mapping of Phosphorylation Sites on Protein Complexes

Abstract

Reversible protein phosphorylation plays a major role in regulating many complex biological processes such as cellular growth, division, and signaling. Phosphorylation and the regulation of cell signaling are also thought to play important roles in human diseases, particularly cancer. The high importance of protein phosphorylation makes the proper identification of phosphorylation sites an important task. Compared to other proteins in a biological system, phosphorylated peptides are present only in low abundance, and this presents a problem for their detection and identification. The selective enrichment of phosphorylated peptides prior to reversed-phase separation and mass spectrometric detection significantly improves the analytical results in terms of a higher number of detected phosphorylation sites and spectra of higher quality. Recently, there have been several publications describing offline and online chromatographic approaches for selective enrichment of phosphorylated peptides using metal oxide chromatography (MOC) with titanium dioxide and zirconium dioxide media. We have established a rapid method based on titanium dioxide chromatography, suitable for the enrichment of phosphopeptides from digested purified protein complexes, that is compatible with subsequent analysis by nano-reversed-phase HPLC-MS/MS. In the present work we have tested the effect of a modified loading solvent and new wash conditions on the efficiency of the enrichment of phosphopeptides from a test mixture of synthetic and BSA-derived peptides and from digested protein complexes and have shown this to improve the performance of the previously published method. For the test mixture, the new method showed improved selectivity for phosphopeptides, whilst retaining a high recovery rate. Application of the new enrichment method to the digested protein complexes resulted in the identification of a significantly higher number of phosphopeptides as compared to the previous method. Additionally, we have compared the performance of TiO2 and ZrO2 columns for the isolation and identification of phosphopeptides from purified protein complexes and found that for our test set, both media performed comparably well for the selective enrichment of phosphopeptides. In summary, our improved method is highly effective for the enrichment of phosphopeptides from purified protein complexes prior to mass spectrometry, and is suitable for large-scale phosphoproteomic projects that aim to elucidate phosphorylation-dependent cellular processes.

J Biomol Tech. 2009 February; 20(1): 12–13.

s11-b Current Methods in Phospho-proteomics

Abstract

Substantial progress has been made over the last few years regarding the analysis of phosphorylation in mass spectrometry-based proteomics. New methodologies have been introduced not only on the MS level but also for the enrichment of phosphorylated species and for the data interpretation and correct annotation of phosphopeptide MS/MS spectra. Thus, revealing the phosphoproteome seems to be an accomplishable task, and, even more relevant for biomedical research, uncovering dynamic changes in phosphorylation patterns due to specific stimuli is possible when applying quantitative mass spectrometry. The latter, however, still poses a major challenge, since generating reliable results rather than just loads of data strongly depends on the highest reproducibility in sample preparation/processing, phosphopeptide enrichment, and LC-MS analysis. In this context, there is an exceptional need for automation in order to minimize sample-to-sample variations caused by slightly different handling, which can result in misinterpretation of cellular signaling.

J Biomol Tech. 2009 February; 20(1): 13.

s11-c Analysis of the Yeast Protein Kinase—Substrate Networks by Quantitative Phosphoproteomics

Abstract

Protein phosphorylation plays a pivotal role in the regulation of cells and is controlled by kinases and phosphatases. So far it has been difficult to identify substrates of a particular kinase in vivo, and it was impossible to understand kinase-substrate relationships on a systemwide scale. This is because kinases, phosphatases and their substrates form complex networks that hitherto could not be comprehensively analyzed. Here we describe a novel method based on quantitative phosphoproteomics allowing such analyses and present a nearly complete network of kinase-substrate relationships in S. cerevisiae. The method consists of five steps: (1) phosphopeptides are reproducibly isolated from digests of the S. cerevisiae wild-type and kinase gene deletion mutant proteomes; (2) LC-MS(/MS) maps are generated for each isolate; (3) these maps are analyzed using the algorithms SuperHirn and Corra to determine regulated phosphopeptide features; (4) these features are identified using targeted LC-MS/MS; and (5) the kinase-substrate networks are generated and further analyzed. To validate our approach we first studied the TOR kinase. We identified both known targets and novel candidates as regulated. Several candidates have already been confirmed to be direct targets of, or dependent on, TOR. As these results underlined the reliability of our approach, we applied it to all viable yeast kinase and phosphatase gene deletion mutants. So far over 2000 phosphoproteins have been confidently identified and quantified in our kinase-substrate networks, allowing for a systemwide analysis. Besides identifying many known in vitro, in vivo, and novel targets of a given kinase in the generated networks, we were also able to show that we can, first, determine the biological function of a particular kinase and, second, derive target phosphorylation motifs. From our data it can be concluded that the presented approach allowed the first systemwide analysis of most kinase- and phosphatase-substrate networks in yeast.
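
Step (3) above hinges on deciding which phosphopeptide features change between the wild-type and mutant maps. The sketch below is an illustrative stand-in for that step, not SuperHirn or Corra themselves: it simply flags features whose intensity shifts beyond a chosen log2 fold-change cutoff, with all feature names, intensities, and the cutoff invented.

# Illustrative stand-in for step (3), not SuperHirn/Corra: flag phosphopeptide
# features whose intensity changes between wild type and a kinase deletion
# mutant beyond a chosen fold-change cutoff.
import math

def regulated_features(wt, mutant, min_log2_fc=1.0):
    """wt, mutant: dicts of feature ID -> LC-MS intensity (same keys)."""
    hits = {}
    for fid in wt:
        log2_fc = math.log2((mutant[fid] + 1.0) / (wt[fid] + 1.0))
        if abs(log2_fc) >= min_log2_fc:
            hits[fid] = round(log2_fc, 2)
    return hits

wt = {"pep_S123": 2.0e6, "pep_T45": 8.0e5, "pep_S9": 1.1e6}
ko = {"pep_S123": 4.5e5, "pep_T45": 7.6e5, "pep_S9": 2.4e6}
print(regulated_features(wt, ko))  # e.g., loss of pep_S123 signal in the mutant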

J Biomol Tech. 2009 February; 20(1): 13.

microRNA Profiling in Medical Practice

Abstract

MicroRNA alterations are involved in the initiation, progression and metastases of human cancer. The main molecular alterations are represented by variations in gene expression, usually mild and with consequences for a vast number of target protein coding genes. The causes of the widespread differential expression of miRNA genes in malignant compared with normal cells can be explained by the location of these genes in cancer-associated genomic regions, by epigenetic mechanisms and by alterations in the microRNA processing machinery. MicroRNA expression profiling of human tumors has identified signatures associated with diagnosis, staging, progression, prognosis and response to treatment. In addition, profiling has been exploited to identify microRNAs genes that may represent downstream targets of activated oncogenic pathways or that are targeting protein coding genes involved in cancer. Recent studies proved that miRNAs are main candidates for the elusive class of cancer predisposing genes and that other types of non-coding RNAs participate in the genetic puzzle giving rise to the malignant phenotype. These discoveries could be exploited for the development of useful markers for diagnosis and prognosis, as well as for the development of new RNA-based cancer therapies.

J Biomol Tech. 2009 February; 20(1): 13–14.

s12-a Real-time PCR Expression Profiling of microRNA

Abstract

MicroRNAs (miRNAs) are short noncoding regulatory RNAs. miRNAs are initially transcribed as long miRNA precursors (pri-miRNAs), which are subsequently processed into the precursor miRNA (pre-miRNA) and then the active mature miRNA. Altered miRNA expression has been implicated in a number of human diseases including cancer. miRNAs are challenging molecules to amplify and quantify by PCR because the miRNA precursors exist as a stable hairpin and the mature miRNA is roughly the size of a standard PCR primer. Despite these challenges, successful approaches have been developed to amplify and quantify both the precursor and mature miRNA by qRT-PCR. In a systematic evaluation of several hundred precursor and mature miRNAs by real-time qRT-PCR, we demonstrated that many of the miRNAs are initially transcribed as precursors but are not processed into mature miRNA. This observation occurs more frequently in human cancer cell lines and in liver tumors as compared to normal tissues and pancreatic tumors. The expression of miR-31 was of particular interest in that it was processed to mature miRNA in some cell lines but not in others. In the cell lines that did not process mature miR-31, the precursor was shown by in situ RT-PCR to be localized to the nucleolus. Numerous mature miRNAs including miR-21, miR-221, miR-301 and miR-212 were increased in pancreatic cancer tissues compared to normal and adjacent benign pancreas. By quantifying the pri-, pre-, and mature miRNA by qRT-PCR, we were able to demonstrate that the increase in mature miRNA resulted from an increase in processing of the pri-miRNA and was not due to new transcription. In summary, real-time PCR may be used to identify examples of post-transcriptional regulation of miRNA biogenesis.
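
As one concrete example of how such qRT-PCR measurements are commonly turned into relative levels, the sketch below implements the comparative-Ct (2^-ΔΔCt) calculation; the abstract does not state which quantification model was actually used, and all Ct values here are hypothetical.

# A minimal comparative-Ct (2^-ddCt) calculation, a common way to express
# relative qRT-PCR levels of pri-, pre-, and mature miRNA. Ct values below
# are hypothetical, and ~100% amplification efficiency is assumed.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target in a sample vs. a control sample, normalized
    to a reference gene measured in both."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# hypothetical mature miR-21 in tumor vs. normal pancreas, normalized to a reference RNA
print(relative_expression(ct_target=24.1, ct_ref=18.0,
                          ct_target_ctrl=27.3, ct_ref_ctrl=18.2))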

J Biomol Tech. 2009 February; 20(1): 14.

s12-a MicroRNAs: Regulation, Development, and Disease

Abstract

MicroRNAs are short non-coding RNA transcripts that post-transcriptionally regulate gene expression. Several thousand microRNA genes have been identified in a multitude of organisms including Caenorhabditis elegans, Drosophila, fish, plants, mammals and viruses. MicroRNAs have been linked to developmental processes in C. elegans, fish, humans, and plants, and to cancer, cell growth and apoptosis in humans, mice, C. elegans and Drosophila. A major impediment in the study of microRNA function was the lack of quantitative expression profiling methods. To close this technological gap, we developed custom microarrays that monitor expression levels for over 900 microRNAs encompassing human, mouse, rat, and fish. Here we show how these arrays have revealed distinct patterns of expression in mammalian development and how these expression patterns are altered in cancer.

J Biomol Tech. 2009 February; 20(1): 14.

s13-a Techniques for Cytosine Methylation Patterns

Abstract

Methylation of DNA is believed to be involved in various biological phenomena, causing gene silencing, stabilizing chromosomal structure and suppressing the mobility of retrotransposons. However, very few published studies have examined methylation patterns genome-wide, despite recent advances in techniques based on microarrays or massively parallel sequencing assays. We developed a technique, the HpaII-tiny fragment Enrichment by Ligation-mediated PCR (HELP) assay, that is based on microarray hybridization following restriction enzyme digestion and ligation-mediated PCR. Using this method, we can analyze methylation patterns at the genome-wide level in many cell or tissue types. The assay allows us to gain insight into cytosine methylation differences between normal tissues and cancer, or stem cells and differentiated cells. The assay reveals that methylation in promoter regions and also in gene bodies is correlated with gene expression status. In addition, we can use the data generated for HELP assays to analyze copy number variability simultaneously. We developed the assay further to use massively parallel sequencing (HELP-seq). We find the two techniques are comparable for most genomic regions, but HELP-seq can analyze short fragments that are not detectable in microarray-based HELP, allowing greater insights into CG-rich regions. In this presentation, we will discuss the application of the HELP assay for simultaneous copy number analysis, HELP-seq and the supporting bioinformatic analyses.
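
A hedged sketch of the downstream scoring logic, assuming the commonly described HELP design in which a methylation-sensitive HpaII representation is compared against a methylation-insensitive MspI reference (the abstract itself does not spell this comparison out): loci are classified from the log2 ratio of the two signals, with all locus names, signal values, and the cutoff invented.

# Hedged sketch (assuming the common HpaII-vs-MspI HELP comparison): classify
# loci as hypo- or hypermethylated from the log2 ratio of methylation-sensitive
# (HpaII) to methylation-insensitive (MspI) signal.
import math

def classify_locus(hpaii_signal, mspi_signal, cutoff=0.0):
    ratio = math.log2((hpaii_signal + 1.0) / (mspi_signal + 1.0))
    # high HpaII/MspI ratio -> site was cut by HpaII -> largely unmethylated
    status = "hypomethylated" if ratio > cutoff else "hypermethylated"
    return status, round(ratio, 2)

loci = {"GeneA_promoter": (5200, 4800), "GeneB_promoter": (310, 4100)}
for name, (hp, ms) in loci.items():
    print(name, classify_locus(hp, ms))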

J Biomol Tech. 2009 February; 20(1): 14.

s13-b Deciphering the Methylome

Abstract

Sequencing the human genome was a huge accomplishment, but it still does not answer most of our questions regarding health and disease. DNA mutations can cause disease, but so can changes in gene activity that are independent of mutations. Addition of methyl groups to cytosines within the regulatory regions of genes, an epigenetic process known as DNA methylation, typically alters the gene's activity. Similarly, the methylation, acetylation or ubiquitination of certain amino acids on histone proteins can change the way DNA is wrapped around the histone cores and can affect activation of genes in conjunction with DNA methylation. Many cancers and other diseases show aberrant DNA methylation at particular sites in the genome, and methylation changes are key to biological events such as embryonic development, ageing and cellular regulation. Researchers have access to a collection of tools to decipher the epigenome, which could lead to new ways to treat diseases. These tools mainly involve a set of preparative techniques including modification of DNA using sodium bisulfite, which converts unmethylated cytosine into uracil. The sample can then be analyzed in several ways, including DNA sequencing to identify sites of C to T transitions following PCR. Other methods rely on restriction enzymes that are sensitive or insensitive to cytosine methylation, or on precipitation with antibodies or methyl-binding proteins that recognize 5-methylcytosine. Detection systems can include PCR-based methods, microarray hybridizations, mass spectrometry, and direct high-throughput DNA sequencing. The next five years will see a boom in epigenomic research thanks to advances in these and other technologies. One major issue in these genome-scale studies is the bioinformatic data analysis. Several groups are taking advantage of these developments to characterize the entire human epigenome in both healthy and disease states. The NIH has recognized the importance of epigenomics by making this one of their Roadmap Initiatives.
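
A minimal sketch of the C-to-T logic described above: after bisulfite conversion and PCR, an unmethylated cytosine reads as T while a methylated cytosine remains C, so per-site methylation can be estimated as the fraction of reads retaining C. The read bases below are invented.

# Minimal sketch of bisulfite-based methylation calling at a single CpG site:
# unmethylated C is converted and reads as T, methylated C stays C, so the
# methylation level is estimated as C / (C + T). Observed bases are invented.
def methylation_fraction(bases_at_site):
    """bases_at_site: iterable of read bases observed at a CpG cytosine."""
    c = sum(1 for b in bases_at_site if b.upper() == "C")
    t = sum(1 for b in bases_at_site if b.upper() == "T")
    return c / (c + t) if (c + t) else None

print(methylation_fraction("CCCTCCCCTC"))  # 0.8 -> mostly methylated site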

J Biomol Tech. 2009 February; 20(1): 14.

s13-c High-Throughput Chromatin Immunoprecipitations, Integrative Epigenomics, and Personalized Medicine

Abstract

In recent years, technical advances allowing the combination of chromatin immunoprecipitation (ChIP) with genomic microarray and deep sequencing technology have contributed fundamental observations concerning the nature of gene regulation in many different biological contexts. The different variations of ChIP allow the location of specific transcription factors, chromatin modifying proteins, and histones to be determined throughout the genome. The most frequently applied versions of ChIP involve a chemical cross-linking step in order to stabilize the interaction of proteins with nearby DNA, followed by one or more affinity purification steps to isolate specific complexes of interest. DNA fragments enriched through this procedure can then be amplified using protocols such as LMPCR, whole genome amplification, or linear amplification, each with their pros and cons, and then hybridized to genomic microarrays or directly prepared for analysis using deep sequencing platforms. Enrichment of specific DNA fragments should be indicative of binding by the proteins of interest. However, interpretation of ChIP-on-chip and ChIP-seq can be complicated by challenges related to assay reproducibility, particular design features of the many different types of microarrays or sequencing devices, and the inherent difficulty in ascertaining the biological relevance of the data. High-throughput ChIP methods have been used to map the location of previously unidentified transcription start sites, to determine the epigenetic regulatory state of different cell types and alterations in epigenomic programming typical of disease states such as cancer, and to identify target genes of transcription factors. In spite of the technical challenges, the reliability of the assay is such that it can be readily applied to low abundance clinical specimens and used to identify robust chromatin regulatory signatures. Integrative analysis combining ChIP-on-chip with other high-throughput methods allows biological differences between patient biopsies to be captured with far greater depth than single-platform analysis.
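
One simple way to formalize the statement that enrichment of specific DNA fragments should indicate binding (this is not the speakers' pipeline) is to compare the ChIP read count in a genomic window against a library-size-scaled input control using a Poisson tail probability; all read counts below are hypothetical.

# Illustrative ChIP-seq window enrichment test: compare ChIP reads in a window
# to the expectation from a scaled input control under a Poisson model.
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return max(0.0, 1.0 - cdf)

def window_enrichment(chip_reads, input_reads, chip_total, input_total):
    expected = input_reads * (chip_total / input_total)  # library-size scaling
    return poisson_sf(chip_reads, max(expected, 1e-9))

# 85 ChIP reads vs. 12 input reads in the same window (hypothetical counts)
print(window_enrichment(85, 12, chip_total=2.0e7, input_total=2.2e7))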

J Biomol Tech. 2009 February; 20(1): 15.

Practical Targeted Proteomics

Abstract

Targeted proteomics is a relatively new field that shows great potential for detecting and quantifying low copy number proteins in complex matrices. There is a great deal of interest among individual laboratories and core facilities in implementing this new technology. Unfortunately, because the field is so new, there is little practical advice on the best ways to design, implement and execute a targeted proteomics project. This session will cover the practical aspects of targeted proteomics, including preparing samples, designing experiments, analyzing data, and using triple quadrupole mass spectrometers. This session will provide practical real-world advice from a wide range of expert speakers who perform these experiments in their laboratories.

J Biomol Tech. 2009 February; 20(1): 15.

s14-a Establishing SRM as a Robust and Reliable Technique for Targeted Proteomics

Abstract

Selected reaction monitoring (SRM) is a quantitative technique widely employed for the quantification of targeted compounds in complex mixtures. The ability to select and monitor specific precursor/fragment ion pairs confers high selectivity and multiplexing capacity on the technique, making it particularly promising for hypothesis-driven proteomics. The concept of monitoring specific peptides from proteins of interest is becoming well established. These methods have high specificity within a complex mixture and, thus, can be performed in a fraction of the instrument time relative to discovery-based methods. Ultimately, targeted studies are intended to complement discovery-based analysis and facilitate hypothesis-driven proteomic experiments.

One of the challenges in the application of SRM for protein analysis and assay development is the streamlining of method development. This process involves the (i) selection of a proteotypic set of peptides that best represent the targeted proteins, (ii) prediction of their most sensitive and selective SRM transitions, and (iii) selection of a narrow time window in which to measure the respective transitions during the chromatographic gradient. In the case where no protein/peptide standards are available, these steps become particularly challenging.

In this presentation, we will discuss the techniques and informatics tools developed in our laboratory to simplify and streamline the development and application of SRM methods. In particular, we will present the use of discovery based MS/MS spectral libraries to improve the development of SRM methods and the interpretation of the results. Furthermore, an approach will be presented to establish scheduled SRM methods for peptides that have never been detected previously.
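
To make steps (i) through (iii) concrete, the sketch below shows one plausible way to represent an SRM method in code: each transition as a precursor/fragment m/z pair for a peptide, with "scheduling" reduced to assigning a retention-time window around each peptide's expected elution. This is an illustration only, not the presenters' tools, and the peptide sequences, m/z values, and window width are invented.

# Illustrative data structure for a scheduled SRM method (not the presenters'
# software): each transition pairs a precursor and fragment m/z, and scheduling
# restricts its acquisition to a window around the expected retention time.
from dataclasses import dataclass

@dataclass
class Transition:
    peptide: str
    precursor_mz: float
    fragment_mz: float
    expected_rt_min: float

def scheduled_windows(transitions, half_window_min=2.5):
    """Return {peptide: (start, stop)} acquisition windows in minutes."""
    return {t.peptide: (t.expected_rt_min - half_window_min,
                        t.expected_rt_min + half_window_min)
            for t in transitions}

method = [Transition("VLDALDSIK", 480.3, 632.4, 21.4),   # invented values
          Transition("AGLIVAEGK", 429.3, 559.3, 34.0)]
print(scheduled_windows(method))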

J Biomol Tech. 2009 February; 20(1): 15–16.

s14-b Targeted Proteomics as a Translational Tool in Drug Discovery

Abstract

Proteins as biomarkers are increasingly being used in pharmaceutical research. Proteomics discoveries in areas such as target engagement, pharmacodynamics, toxicity and disease progression could play important roles in crucial decision making at every stage of drug development. A sustainable biomarker workflow normally begins with discovery-based proteomics, where candidate markers are discovered with unbiased measurements of thousands of proteins. Immediately after the discovery phase is the biomarker verification stage, where potential protein biomarkers are tested in a greater number of subjects than in the discovery phase using hypothesis-based methods. LC-MS-based targeted proteomics with a triple quadrupole (QQQ) mass spectrometer enables this translational strategy with prompt method development, high sample throughput and multiplexing capacity. In situations where antibody-based assays are not available, targeted proteomics is the only path forward to protein biomarker verification. We will present two case studies in which targeted proteomics played a translational role in verifying biomarkers. In the first example, we will describe the development of a translational QQQ method that quantifies a panel of 20 Alzheimer's disease (AD) progression markers in cerebrospinal fluid (CSF). Specifically, we will discuss how we transformed a rather complicated biochemical workflow in the discovery phase to a relatively simple, fit-for-purpose one for the targeted QQQ method. The simplified workflow not only reduced the variation introduced by biochemical sample processing but also increased the throughput significantly—two requirements of the biomarker verification phase. Selection and evolution of internal standards as well as comparison of analytical characteristics between the two platforms will also be discussed. The second case study is an example of QQQ as a translational strategy for genomics studies when antibody-based assays are not readily available. We will describe the development of a QQQ method that quantifies a low-abundance protein in plasma as a drug efficacy marker. The challenges were to develop a targeted QQQ method without any mass spectrometry data from the discovery phase and to handle the complexity of the plasma proteome. We will present how the method evolved to a stage of practical utility.

J Biomol Tech. 2009 February; 20(1): 16.

s14-c Practical Aspects of Quantitation with Triple-quadrupole Mass Spectrometers

Abstract

Targeted proteomics is a developing field in need of standardized methodologies to properly implement quantitative analysis. The field can incorporate strategies developed for small molecule quantitation and adapt them for use in determination of low level proteins from complex matrices. Triple-quadrupole mass spectrometers play a large role in small molecule quantitation in both academic and industrial settings. The use of multiple reaction monitoring (MRM) allows for the combination of fast scan rates, specificity and sensitivity required to analyze low abundance compounds. Standardized method validation strategies are necessary to produce high quality reproducible data. This presentation will detail small molecule method validation strategies and possible applications in targeted proteomics. Real world examples of validated small molecule quantitative methods will be presented with an overview on the determination of limits of detection, limits of quantitation, assay linearity, precision, accuracy, robustness and matrix effects. Practical aspects of triple-quadrupole mass spectrometer use in quantitative analysis will be discussed including the design of methods that incorporate the collection of MRM spectra and use of stable isotope standards.
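
A common (ICH-style) convention for estimating limits of detection and quantitation from a calibration curve is LOD of about 3.3 times sigma over the slope and LOQ of about 10 times sigma over the slope, with sigma taken from blank or low-level responses; the talk does not specify which convention the speaker uses, and the concentrations and responses below are invented.

# Hedged sketch of LOD/LOQ estimation from a calibration curve and blank
# replicates; the quantities below are invented example data.
import statistics

def fit_slope(x, y):
    """Ordinary least-squares slope of response vs. concentration."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def lod_loq(calib_conc, calib_resp, blank_responses):
    slope = fit_slope(calib_conc, calib_resp)
    sigma = statistics.stdev(blank_responses)
    return 3.3 * sigma / slope, 10.0 * sigma / slope

conc = [1, 2, 5, 10, 20]                 # e.g., fmol on column
resp = [1040, 2110, 5080, 10150, 20300]  # peak areas
blanks = [45, 52, 39, 60, 48]
print(lod_loq(conc, resp, blanks))       # (LOD, LOQ) in the same units as conc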

J Biomol Tech. 2009 February; 20(1): 16.

(s15) Bioinformatics

Abstract

The Bioinformatics session will highlight the emerging need for bioinformatics tools for handling massive amounts of data from high-throughput DNA sequencers and metagenomics/metaproteomics projects. Speakers will discuss database resources and analysis tools needed for the large datasets from such projects.

J Biomol Tech. 2009 February; 20(1): 16.

s15-a Informatic Challenges in Metaproteomics

Abstract

Microbial communities play key roles in the earth's bio-geochemical cycles. Our knowledge of the structure and function of these communities is limited because analyses of microbial physiology and genetics have been largely confined to isolates grown in laboratories. Recent acquisition of genomic data directly from natural samples has begun to reveal the genetic potential of communities (Tyson, Nature 2004) and environments (Venter, Science 2004) and spawned the field of metagenomics. The ability to obtain whole or partial genome sequences from microbial community samples has opened up the door for microbial community proteomics. We have developed and applied a combined proteogenomic approach using genomics and mass spectrometry-based proteomic methods (Ram, Science 2005). The key to this technology is the combination of robust multidimensional nano liquid chromatography with rapid scanning tandem mass spectrometry and a variety of informatic tools. Concurrent with developments in analytical technologies, such as liquid chromatography and mass spectrometry, are advances in proteome informatics. From the start of metaproteomics of microbial communities, it was clear that challenges existed that are not present in simple microbial isolate measurements. We developed and implemented many changes to our informatics pipeline to address these needs, but it is a continually evolving process. Prime examples include the need to clearly classify peptides as unique, semi-unique, or non-unique in complex microbial mixtures across many strains of microbes. The issue of false positives and the need for high mass accuracy were recognized from the start. Quantitation methodologies are seriously challenged, with differences not only in protein concentrations, but also in species concentrations. Very often in metaproteomics there is not a one-to-one correlation between the metagenome and the metaproteome being measured. It has been widely recognized that metaproteomics desperately needs advanced de novo/sequence-tagging approaches that can be used in high-throughput mode. These are only a few of the many informatic challenges associated with the developing field of microbial metaproteomics.
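
As an illustration of the unique/semi-unique/non-unique bookkeeping mentioned above, the sketch below classifies a peptide by which strain databases contain it; here "semi-unique" is taken to mean a peptide shared by several strains of one species group but absent from all others, which is one plausible reading, and the strain databases and peptides are invented.

# Hedged sketch of peptide-uniqueness classification across strain databases.
def classify_peptide(peptide, strain_dbs, strain_to_group):
    """strain_dbs: {strain: set of peptides}; strain_to_group: {strain: group}."""
    hits = [s for s, db in strain_dbs.items() if peptide in db]
    groups = {strain_to_group[s] for s in hits}
    if not hits:
        return "unmatched"
    if len(hits) == 1:
        return "unique"
    if len(groups) == 1:
        return "semi-unique"     # several strains, one species group (assumed meaning)
    return "non-unique"

dbs = {"Lepto_II_UBA": {"AVLDK", "GGYMK"}, "Lepto_II_5way": {"AVLDK"},
       "Ferroplasma_I": {"GGYMK", "TTLPR"}}
groups = {"Lepto_II_UBA": "Lepto_II", "Lepto_II_5way": "Lepto_II",
          "Ferroplasma_I": "Ferroplasma"}
for pep in ("TTLPR", "AVLDK", "GGYMK"):
    print(pep, classify_peptide(pep, dbs, groups))  # unique, semi-unique, non-unique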

J Biomol Tech. 2009 February; 20(1): 17.

s15-b An Aqua-Silico Algorithm for Genome Assembly Validation and DNA Biomarker Discovery, and Its Potential as a Mass-Market Application for Microfluidics Platforms

Abstract

Personal genomics typically promises affordable re-sequencing, not de novo sequencing. This is one of the largest “blue ocean” markets on the horizon. However, every genome assembly is a hypothesis, and the supporting evidence for that hypothesis should be commensurate with its implications. Thus, an equally large blue ocean market awaits the developers of affordable genome-scale assembly validation procedures. Given the potentially enormous implications of a personal genome sequence assembly, it is unreasonable to expect regulatory agencies to refrain from requiring such a procedure in the personal genomics era. I describe the computational implementation of a scalable, combinatorial aqua-silico algorithm for genome assembly validation (and its corollary, DNA biomarker discovery). The full implementation of an aqua-silico algorithm combines computation and a related wet-lab protocol, and though I also fully describe this protocol, it has not been attempted yet. Microfluidics platforms offer the potential for an affordable wet-lab implementation of this aqua-silico algorithm, which in turn has the potential to offer micro-fluidics platform developers one of their first mass-market applications.

J Biomol Tech. 2009 February; 20(1): 17.

s15-c Bioinformatics for Metagenomic Taxonomic Classification

Abstract

Metagenomics is an emerging field of genomic analysis applied to entire communities of microbes, eliminating the need to isolate and culture individual microbial species. The field opens up study of the 99% of species that cannot be cultured in the lab, and now the challenge lies in classifying their placement in the evolutionary tree of life. Current taxonomic annotation relies on extracting 16S sequences, which exploit highly conserved and hypervariable regions to estimate evolutionary distance. I survey the advantages and pitfalls of using 16S-only techniques and propose machine learning classifiers that can use any genome fragment. Indiscriminate annotation on a population of fragments will enable us to answer not only "Who is there?" but also "How much is there?". Also, high-throughput sequencing technologies are enabling deep sampling of a community's population, but at the price of short read lengths, which limit the resolution of annotation. We benchmark the performance of homology-based BLAST against a composition-based Bayesian classifier for taxonomic classification. Our study demonstrates that a naive Bayesian classifier has reasonable performance, similar to BLAST, for fine-resolution (strain- and species-level) classification, and I will discuss the potential of machine learning to conquer the "annotation" problem.
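
A toy version of such a composition-based naive Bayes classifier, scoring a fragment's k-mer counts against per-taxon k-mer frequency models; the reference "genomes", the choice of k, and the pseudocount below are invented, and a real implementation would train on full genomes and handle ambiguous bases.

# Toy composition-based naive Bayes classifier over k-mer counts.
import math
from collections import Counter
from itertools import product

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def kmer_counts(seq):
    return Counter(seq[i:i + K] for i in range(len(seq) - K + 1))

def train(reference):
    """reference: {taxon: genome string} -> per-taxon log k-mer probabilities."""
    models = {}
    for taxon, genome in reference.items():
        counts = kmer_counts(genome)
        total = sum(counts.values()) + len(KMERS)  # +1 pseudocount per k-mer
        models[taxon] = {k: math.log((counts[k] + 1) / total) for k in KMERS}
    return models

def classify(fragment, models):
    frag = kmer_counts(fragment)
    scores = {t: sum(n * logp[k] for k, n in frag.items())
              for t, logp in models.items()}
    return max(scores, key=scores.get)

ref = {"taxonA": "ATGCGCGCTAGCGCGCTATGCGC" * 20,
       "taxonB": "ATATTTAAATTTATATAAATATTT" * 20}
print(classify("GCGCTAGCGCGCTA", train(ref)))  # GC-rich fragment -> taxonA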

J Biomol Tech. 2009 February; 20(1): 17.

New Integrated Analytical Systems Based on Miniaturized Optics and Fluidics

Abstract

Techniques of micro and nanofabrication allow the formation of a variety of optical devices and structures that can be employed in ultra-sensitive chemical and biochemical analysis. Optical methods provide highly sensitive detection of molecular binding, reaching to single molecule sensitivity. Combined with chemically specific reactions, these approaches can lead to highly sensitive immunoassays, spectroscopic chemical analysis and high-throughput DNA sequencing. The technologies that create the optical devices can also be integrated with electronics and fluid handling structures to form integrated systems. The fabrication approaches also allow creating highly parallel systems for drastically increasing the analytical throughput by carrying out many simultaneous analyses. These approaches that resemble the integration and parallelism of electronic integrated circuits are producing increasingly functional analytical systems and labs-on-a-chip. This session will consider some of the advancing technologies and breakthrough applications in life science technologies.

J Biomol Tech. 2009 February; 20(1): 17.

s16-a Optical Biosensors: Future Trends and Perspectives

Abstract

Optical biosensors originated as large, cumbersome laboratory curiosities with limited capabilities for sample analysis. Since that time, these systems have developed into analytical instruments with much broader applications and capabilities. However, these instruments have largely not yet made the leap from the lab to on-site testing. Sensitivity and selectivity are perhaps the most important factors affecting commercial success of lab instruments. However, other critical issues required for transition for on-site use include: ease of use, short response time, reduced size and cost (per instrument, per sample), stability/robustness (instrument, biological components), overall versatility (different sample types, different targets). At the Naval Research Laboratory (NRL), we are addressing many of these issues through research in nanotechnology, microfluidics, biological recognition, and systems integration. Several optical biosensor systems—in various stages of development at NRL—will be highlighted. Also described will be research on new chemical and biological materials for target recognition and signal generation/transduction.

J Biomol Tech. 2009 February; 20(1): 18.

s16-b Lithographically Printed Optical, Fluidic, and Electronic Systems

Abstract

We have integrated electronic, optical, magnetic, thermal and fluidic devices into systems to construct useful analysis tools. Over the past several years, we have developed soft lithography approaches to define microfluidic systems in which picoliter volumes can be manipulated. These fluidic delivery systems have more recently been integrated with optical and electronic devices. We have also developed thermal control systems with fast (>50°C/s) cooling and heating ramp speeds and excellent accuracy. Moreover, the sizes of microfabricated fluidic elements now match those of electronic, optical and magnetic measurement devices, and lithographically assembled systems can be constructed. These integrated chips can be applied to address medical analysis needs, and to construct compact and efficient immunoassay chips, cell and bacterial sorters, cell culturing chips, and hand-held reverse-transcriptase polymerase chain reactors (RT-PCR) for pathogen identification.

By integrating fluidic and photonic systems, picoliter sample delivery and spectroscopic measurements can be realized. In general, such device miniaturization creates the opportunity to reduce the original analyte volumes and leads to new opportunities, such as single-cell analysis. Systems based on silicon-on-insulator technology, photonic crystal sensors, the introduction of gain into spectroscopic systems, and magnetic observation of nanoliter samples through nuclear magnetic resonance are expected to ultimately result in realistic "laboratory on chip" capabilities, rather than "chips in laboratories," with many biological and medical applications.

Here, we show our latest results and capabilities, as well as microlithography techniques adapted from the microelectronics industry for integrating optics with fluidics and magnetics to build integrated micro-chips. We show the opportunities of silicon photonics to generate inexpensive optical systems for data communications, as well as optofluidic and electromagnetic systems for antibody analysis chips.

J Biomol Tech. 2009 February; 20(1): 18.

s16-c Real-Time DNA Sequencing from Single Polymerase Molecules

Abstract

SMRT (single molecule real time) DNA sequencing is a high-throughput method for eavesdropping on template-directed synthesis by DNA polymerase in real time. Pacific Biosciences has developed two critical technology components which enable this process: The first is phospho-linked nucleotides where, in contrast to other sequencing approaches, the fluorescent label is attached to the terminal phosphate rather than the base. The enzyme cleaves away the fluorophore as part of the incorporation process, leaving behind completely natural double-stranded DNA. The second critical component is zero-mode waveguide (ZMW) confinement technology that allows single-molecule detection at concentrations of labeled nucleotides relevant to the enzyme. Through the combination of these innovations, our technology allows sequencing at speeds of multiple bases per second with a read length distribution competitive with Sanger sequencing on average and extending out to thousands of bases in length. We apply this technology to shotgun sequencing using a fast and simple sample preparation concept that facilitates whole-genome sequencing directly from genomic DNA. Using a strand-displacing DNA polymerase, we will also demonstrate sequencing multiple times around a circular strand of DNA, allowing consensus sequencing on an individual molecule. This enables high confidence detection of low frequency variants even in heterogeneous samples, enabling new research and diagnostic applications.
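
The circular consensus idea described above, reduced to its simplest computational form: combine multiple passes over the same molecule and vote per position so that random per-pass errors cancel out. The sketch assumes the passes have already been aligned to equal length (real single-molecule data require alignment to handle insertions and deletions), and the example reads are invented.

# Simplified stand-in for circular consensus: majority vote per position across
# multiple passes of the same molecule, assuming the passes are pre-aligned.
from collections import Counter

def circular_consensus(aligned_passes):
    """aligned_passes: list of equal-length strings, one per pass."""
    consensus = []
    for column in zip(*aligned_passes):
        base, _ = Counter(column).most_common(1)[0]
        consensus.append(base)
    return "".join(consensus)

passes = ["ACGTTAGC", "ACGTCAGC", "ACGTTAGC", "ACCTTAGC"]
print(circular_consensus(passes))  # "ACGTTAGC": isolated per-pass errors vote out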

J Biomol Tech. 2009 February; 20(1): 19.

Evaluating the State of the Art in Quantitative Proteomics

Abstract

Comparative proteomics and absolute protein quantitation continue to be difficult challenges that face proteomics researchers and core facilities from both an analytic and an informatics perspective. For ABRF 2009, the three Proteomics Research Groups have prepared studies tackling a broad range of quantitative proteomics techniques: relative quantitation, absolute quantitation, and difference testing. The main Proteomics Research Group (PRG) study deals with assessing the relative quantities of known protein targets in a complex mixture. This targeted approach is extended in the Proteomics Standards Research Group (sPRG) study, which uses isotopic labeled peptides in known concentrations to enable researchers to assess the absolute quantities of proteins. Finally, the Proteome Informatics Research Group (iPRG) study considers the shotgun informatics challenge of accurately identifying a large number of changes between samples despite sampling variation across multiple technical replicates. By combining participant results and survey answers, the PRG studies attempt to assess the state of the art concerning these three aspects of quantitative proteomics.

J Biomol Tech. 2009 February; 20(1): 19.

r1-a PRG 2009 Study: Relative Protein Quantification in a Clinical Matrix

Abstract

The Proteomics Research Group (PRG) of the ABRF developed the 2009 study to assess approaches that individual laboratories would use to determine the relative abundance of target proteins in a complex mixture. An increasingly common request for proteomics laboratories is the detection of a specific target protein of interest in a complex mixture. Most of these requests also ask for the abundance of the target protein relative to that in a control sample. While this type of analysis has traditionally been addressed using Western blots or other immunoaffinity assays, recent advances in targeted mass spectrometry-based analyses are beginning to be reported in the literature as an alternative.

For this year’s study, four different proteins were spiked into a plasma background matrix at three different levels. Two of these proteins are commonly measured plasma protein biomarkers, and the remaining two had identical primary structure and differed by only a single phosphorylation site. The participants were shipped six samples in total (three samples in blinded duplicate) and asked to report the relative abundances of the four target proteins in the six samples. Results from analysis of the samples and survey responses will be used to assess the different approaches that are used by the proteomics community to determine the relative abundance of a target protein of interest.

J Biomol Tech. 2009 February; 20(1): 19.

r1-b sPRG 2009 Study: Challenges Along the Way to a Quantitative Proteomics Standard

Abstract

A standard for quantitation of proteins with stable isotope labeled peptides would be a valuable tool for core laboratories to assess techniques and instrumentation. Designing and testing such a standard has been an educational tool for the members of the Proteomics Standards Research Group (sPRG). This presentation will detail how the challenges of design, synthesis, purification, quantification and analysis were overcome and should be valuable to others who want to design a protocol for quantitation of proteins.

J Biomol Tech. 2009 February; 20(1): 20.

r1-c iPRG 2009 Study: Testing for Qualitative Differences Between Samples in MS/MS Proteomics Datasets

Abstract

Determining significant differences between mass spectrometry datasets from biological samples is one of the major challenges for proteome informatics. Accurate and reproducible protein quantitation in complex samples in the face of biological and technical variability has long been a desired goal for proteomics. The ability to apply qualitative difference testing is a first step towards that goal, and is routinely used in tasks such as biomarker discovery. In this work the Proteome Informatics Research Group (iPRG) of the ABRF presents the results of a collaborative study focusing on the determination of significantly different proteins between two complex samples. In this study, datasets representing five technical replicates of each sample were provided to volunteer participants and their ability to evaluate reproducible differences was tested. A survey was used to determine the relative merits of spectrum counting versus MS intensity-based differentiation, whether sophisticated statistical methods are necessary, and if computer software must be augmented by scientific expertise and intuition. Results and survey responses were used to assess the present status of the field and to provide a benchmark for qualitative difference testing on a realistically complex dataset.
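
One simple formalization of per-protein difference testing on spectral counts (the study itself prescribes no particular method): Welch's t-test across the five technical replicates of each sample, flagging proteins below a significance cutoff. The protein accessions and counts below are invented, and real analyses would typically also correct for multiple testing.

# Hedged sketch of spectral-count difference testing between two samples with
# five technical replicates each, using Welch's t-test per protein.
from scipy import stats

def differential_proteins(counts_a, counts_b, alpha=0.05):
    """counts_a/b: {protein: list of spectral counts per replicate}."""
    hits = {}
    for protein in counts_a:
        t, p = stats.ttest_ind(counts_a[protein], counts_b[protein], equal_var=False)
        if p < alpha:
            hits[protein] = (round(t, 2), round(p, 4))
    return hits

a = {"P12345": [34, 31, 36, 33, 35], "Q67890": [8, 10, 7, 9, 9]}
b = {"P12345": [12, 14, 11, 15, 13], "Q67890": [9, 8, 10, 9, 8]}
print(differential_proteins(a, b))  # P12345 is flagged; Q67890 is not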

J Biomol Tech. 2009 February; 20(1): 20.

NARG 2008–2009 Study: A Comparison of Different Priming Strategies for cDNA Synthesis by Reverse Transcriptase, as Evaluated by Real-Time qPCR

Abstract

Real-Time Reverse Transcriptase Quantitative PCR (RT-qPCR) has become the method of choice to quantify transcript levels. Efficient priming, a highly processive enzyme, and quality RNA are key elements of a successful reverse transcription reaction to produce cDNA for qPCR. The Nucleic Acid Research Group (NARG) designed a study to evaluate RT priming strategies and enzymes. The NARG 2008–09 study was an extension of the 2007–08 study in which we evaluated the effect of reverse transcription priming strategies on RT-qPCR results. The previous study suggested a relationship between the assay sensitivity using cDNA generated with oligo(dT)20 primers and qPCR assay placement relative to the 3′ end of the transcript. This year's study was designed specifically to compare oligo(dT)20 and random priming strategies as the assay target site varied. Because the previous study identified random hexamers and nonamers as the most efficient of those tested, the comparison included oligo(dT)20, random 6-mers and 9-mers, gene-specific primers, and combinations thereof. Four reverse transcriptases (Superscript II, Superscript III, Transcriptor and MultiScribe) were employed to determine the effect of the enzyme. In addition, the qPCR assays looked at three genes of varying abundance, β-actin (high copy), β-glucuronidase (medium copy) and TATA-binding protein (low copy), as well as varying distances from the 3′ end of each transcript. An unexpected challenge to the current study was the ability to successfully transport and store the RNA. As a result, RNA handling workflows, reagents, consumables, quantification techniques, integrity analysis with the Bioanalyzer 2100, and preservation techniques were examined. Protocols for routine RNA isolation, isolation of RNA from FACS-sorted and laser-capture micro-dissected cells, and working with partially degraded RNA will also be addressed.

J Biomol Tech. 2009 February; 20(1): 21.

r3-a The DNA Sequencing Research Group general survey, 2009: Second Generation Sequencing Instruments and Services in Core Facilities

Abstract

The ABRF DNA Sequencing Research Group (DSRG) has conducted a general survey to collect data on the current state of second generation sequencing instrumentation (often termed “massively parallel” or “next generation” sequencers) and services offered by core facilities. The DSRG has monitored trends in sequencing platforms in core facilities by conducting surveys in years 2000, 2003, and 2006. This survey was the first to focus on second generation sequencers since their introduction. The information gathered this year provided data about the widespread availability of this equipment and these services in core facilities. The future acquisition and expectations for such instruments were also assessed. For comparison, the survey gathered information on Sanger (first generation) sequencing operations to determine the impact of the second generation technologies on conventional sequencing. The importance of this survey lies in the fact that it serves as an initial “snapshot” of the status of second generation sequencing services in core facilities while they are in their infancy, and therefore is a baseline for surveys in years to come. The results from this survey will be presented; some information may be presented in the “Implementing Next Generation Sequencing Technologies” concurrent technical workshop.

J Biomol Tech. 2009 February; 20(1): 21–22.

r3-b 2008–09 Joint Research Group Project: Detection of Human microRNAs Across miRNA Array and Next Generation DNA Sequencing Platforms

Abstract

MicroRNA (miRNAs) are non-coding RNA molecules between 19 and 30 nucleotides in length that are believed to regulate approximately 30 percent of all human genes. They act as negative regulators of their gene targets in many biological processes. Recent developments in microarray options and the introduction of high throughput DNA sequencing (HT DNA Seq) technologies now make it possible to use these advanced platforms for miRNA-expression profiling. To determine the effectiveness of each of these platforms in measuring miRNA expression and to compare the accuracy of the microarray and HT DNA Seq profiles with quantitative RT-PCR analysis, the Microarray Research Group (MARG) and the DNA Sequencing Group (DSRG) developed a joint research project. The goal of the MARG component of the research project was to evaluate miRNA platforms for their ability to detect miRNAs from complex total RNA samples. In addition to three DNA microarray platforms, two PCR-based platforms were included in the study for performance comparison. Each of the five miRNA platforms was tested with total RNA to evaluate the ability of each platform to detect and measure individual miRNAs in a complex biological sample. Aliquots from single pools of two different tissue RNAs (Ambion First Choice Human Total RNA) were analyzed at separate test sites for each of the following miRNA platforms: Agilent miRNA microarray, Illumina miRNA expression panel, Exiqon miRCURY LNA arrays, Applied Biosystems TaqMan miRNA assay, and FlexmiR by Luminex. The second component of the study was conducted in collaboration with the DSRG. miRNA expression profiles of the total RNAs used in the microarray phase of this study were determined using the “Next Generation” HT DNA sequencer, Illumina Genome Analyzer (Solexa). The miRNA profiles produced by the HT DNA sequencer were compared to the miRNA array platform results to determine correspondence of the two technologies, with a common reference to RT-PCR data. The results of miRNA profiling from both components of this study will be presented. This abstract does not necessarily reflect EPA policy.

J Biomol Tech. 2009 February; 20(1): 22.

r3-c GVRG 2009 Study

Abstract

Sequencing of Jim Watson's genome revealed 3.3 million single nucleotide polymorphisms (SNPs), of which approximately 10,500 caused amino-acid substitutions with the potential to alter protein function. Also reported were over 200,000 other small genomic variations. Just how common or unique these genomic differences are should become clearer with the completion of the ongoing 1000 Genomes Project. While these genomic differences are small in number compared to the 6-gigabase human genome, they do present a real challenge for determining whether a base call is real or artifact when using next-generation sequencing platforms. Sequencing methods used by the current commercial next-generation sequencing platforms produce inherently lower quality base calls than Sanger sequencing. This can make discrimination of genotypic variation difficult when dealing with heterozygous sites in a diploid organism. The Genomic Variation Research Group (GVRG) 2009 study investigated the current base-calling accuracy of two commercial next-generation sequencing platforms and compared the base calls collected to those from other high- and low-throughput genotyping platforms. A diploid strain of yeast (Saccharomyces cerevisiae) sequenced by the Stowers Institute (Cell, 2008) was the subject of our study. GVRG collected genotyping data using two next-gen sequencers, the ABI SOLiD and Illumina Solexa GA II. The Sequenom MassARRAY system represents our high-throughput genotyping platform and was used to assay 900+ previously identified SNPs discovered in the Stowers yeast strain. The low-throughput platforms (ABI 3730 and 3730XL DNA Analyzers and the ABI 7900HT) interrogated regions in or near repetitive sequence that appear to contain a SNP. We will report on coverage levels achieved with the next-generation sequencing platforms, as well as the number of false positives and negatives generated by these systems. We will review the general procedures, time, and cost of running this sample on all the platforms used.
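
An illustrative tally of the false positives and negatives described above: next-gen variant calls are compared against an orthogonal reference set (standing in here for the Sequenom genotypes). Site names and calls are invented, and a real comparison would work with full genotypes rather than simple variant/no-variant flags.

# Hedged sketch of comparing next-gen SNP calls to a reference genotyping set.
def confusion(reference_variants, nextgen_calls, all_sites):
    """reference_variants: sites that truly vary; nextgen_calls: sites called
    variant by the sequencer; all_sites: every assayed site."""
    fp = sorted(nextgen_calls - reference_variants)
    fn = sorted(reference_variants - nextgen_calls)
    tp = sorted(reference_variants & nextgen_calls)
    tn = len(all_sites) - len(fp) - len(fn) - len(tp)
    return {"true_pos": len(tp), "false_pos": fp, "false_neg": fn, "true_neg": tn}

sites = {f"site{i}" for i in range(1, 11)}
ref = {"site1", "site3", "site7"}
called = {"site1", "site3", "site5"}
print(confusion(ref, called, sites))  # one false positive, one false negative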

J Biomol Tech. 2009 February; 20(1): 22.

(r4-a) ESRG Study 2009: Comparison of Edman and Mass Spectrometry Techniques for N-terminal Sequencing

Abstract

For decades, automated Edman sequencing has been the method of choice for determining the N-terminal amino acid sequence of proteins. However, the advantages of mass spectrometric techniques have in recent years driven investigators to look beyond Edman chemistry to find alternative technologies to obtain N-terminal sequence. Several mass spectrometric methodologies have been published, primarily for proteomics analyses, which may be quicker, less costly and more sensitive than Edman sequencing. Because such techniques involve a range of biochemical and instrumental methodologies having different advantages and limitations, the ESRG has created a study to ascertain how reliably they can produce N-terminal amino acid sequence information and to compare those results to those obtained by automated Edman sequencing.

The ESRG 2009 oral presentation will cover current methodologies used to obtain terminal sequence information from a protein. The study was designed to allow the participants freedom to use their analytical technique of choice to obtain as much N-terminal amino acid sequence information as possible from two test proteins. Results of the multiple techniques and experiments produced by participating laboratories will be shown, and the success of the methods utilized will be evaluated. Finally, the future directions of terminal sequence analysis, by Edman or other means, will be discussed.

J Biomol Tech. 2009 February; 20(1): 22–23.

Performance Testing and Standard Good Operating Practice in Light Microscopy

Abstract

Light microscopes have had a seminal influence on science for more than 300 years. The past three decades have seen a dramatic resurgence in the use of the light microscope, as well as very substantial technical advances in the field of light microscopy. This has in turn led to an increase in the use of the light microscope as a research tool. The most important advance has been the development of the confocal microscope, which combines the detection efficiency of fluorescence with the high resolution of the light microscope. Improvements in the design of optical components include, for example, aberration-corrected objectives (correction of both chromatic and spherical aberrations), more efficient filters (glass and AOBS), and improved detection (PMTs, CCD cameras and single photon avalanche diodes). As a result of these improvements, as well as improved performance and functionality of the systems, there has been a dramatic increase in the cost of these types of instruments. The increase in cost, coupled with decreasing grant support for research, has resulted in many of these new instruments being placed in multi-user facilities, i.e., imaging "cores." Establishment of imaging cores has led to a shift in responsibility for instrument acquisition, maintenance and training from an individual PI to the director of the core and core personnel. Among the myriad functions of the core is performance testing of the instrumentation. Users need to be confident that data collected will be uniform over time and between specimens. There is a need to develop standard Good Operating Practice (GOP) procedures for the imaging instrumentation.

J Biomol Tech. 2009 February; 20(1): 24.

w1 Core Facility Management Models: Development and Culture

Abstract

Over time, academic, medical, and independent research communities have become increasingly reliant upon core facility operations for cost-effective access to current technologies. As such, successful core operations are essential partners in the conduct of basic research, are critical to sustaining research competitiveness, and serve as effective faculty recruitment and retention tools. In a perfect world, alignment of institutional objectives with research faculty needs should drive the development, management and operation of core facilities. Influenced largely by institutional culture, vision, and objectives, the organizational structures and management approaches of these facilities differ from institution to institution. Nonetheless, common challenges exist for all cores that transcend the boundaries of technical specialty.

This session will review findings from a recent assessment of existing core facility management models which considered the various architectures across research sectors. Current crosscutting core issues including the translation of institutional vision into operational practices and decision making through core leadership will be explored. Panelists will share different institutional approaches, experiences, actions and outcomes relative to hot operational topics as well as facility evaluation and accountability.

J Biomol Tech. 2009 February; 20(1): 24.

w1-c Enhancing Core Research Facility Management, Strategy, and Investment

Abstract

Research institutions—including research universities, academic medical centers, and independent research institutes—are increasingly realizing the important role that core research facilities play in their ability to conduct cutting-edge research and their competitiveness for recruiting and retaining strong faculty members and for securing external research funding, especially in biological science and engineering (S&E) disciplines.

With this realization comes an understanding that more attention needs to be placed on effective management and strategy of these important components of institutions’ overall research enterprises. For example, how should core facility investment decisions be made, how should facilities be governed and evaluated, how can sharing and usage be enhanced, and how can related compliance risks be best managed?

Huron Consulting Group’s Higher Education practice was engaged by a major research university to review the state of its core research facilities in the biological S&E area, with the goal of enhancing the institution’s investments in these facilities via recommendations related to core research facility organizational structures and business models. As part of the process, we interviewed approximately 40 individuals at the university (including unit leadership, facility managers and administrators, senior and junior faculty members, and university administrators), reviewed facilities data, conducted interviews with peer institutions to discuss their related opportunities and challenges, and reviewed the most relevant literature related to regulatory compliance. Our recommendations focus on the creation of enhanced organizational structures and business models “tuned” to the history and culture of the university and units involved. During this presentation, generalized findings from our work will be presented. Observed challenges and opportunities will be presented and selected recommendations related to enhancing core research facility management, strategy, and investment will be shared.

J Biomol Tech. 2009 February; 20(1): 24–25.

w2 Women in Science and Core Laboratories

Abstract

This workshop will focus on issues pertaining to women making the transition to a career as a core laboratory director. Approximately 50% of the membership in the Association of Biomolecular Resource Facilities (ABRF) includes scientists working in core facilities. In an ABRF survey study published in Nature Biotechnology in 2000, across all core facility sectors, the percentage of male employees holding MDs or PhDs was significantly greater than the percentage of female employees (24% to 9%, respectively). This discrepancy raises the important question as to whether women with PhDs are represented in the job applicant pool in the expected ratio and whether women are selected for core facility director positions in numbers that are reflective of their overall numbers within the field. What role can the ABRF play in helping to sustain women in core facility careers? During a panel discussion, Kathryn Lilley, Director of the Cambridge Center for Proteomics, and Nancy Denslow, Associate Professor in the Department of Biochemistry and Molecular Biology at the University of Florida, will focus on practical tips that helped them successfully make the transition to a core director position, obstacles they encountered along their career paths and how they overcame the obstacles, as well as their ideas on how the ABRF can help women desiring to advance into core director positions. Also during the panel discussion, Joan Goldberg, the Executive Director of the American Society for Cell Biology (ASCB), will be describing the ASCB’s successful programs aimed at promoting women in their society and field, and perhaps ignite interest in similar initiatives for the ABRF. Following the panel discussion, workshop participants will be encouraged to share their own experiences, ask the speakers questions, and submit ideas for a survey to address issues pertaining to women in core laboratories.

J Biomol Tech. 2009 February; 20(1): 25.

w3 Educational Outreach

Abstract

Core facilities are charged with bringing education about their services and technology to their users. Networking through outreach endeavors with other higher-education institutions can lead to collaborations and an increase in the number of users in a facility. Outreach to K-12 institutions can be very rewarding and can help to make connections with the entire community. There are many different ways that outreach can be delivered to primary and secondary educational institutions. The focus of this panel is to discuss different types of outreach resources and the potential for outreach through core facilities. We will address questions and concerns in developing such activities, including but not limited to: 1. How do you implement outreach within your core facility? 2. What are the risks and rewards of doing these activities? 3. What are examples of outreach activities? 4. What are the costs of such activities? 5. How do you fund such activities? 6. What are some of the obstacles? We encourage an active discussion, with audience members participating and offering individual insight and perspective.

J Biomol Tech. 2009 February; 20(1): 25.

w4 Implementing Next Generation Sequencing Technologies—Hit the Ground Running with Next Gen!

Abstract

The new generation of sequencing technologies, such as the AB SOLiD, Illumina Genome Analyzer, and Roche 454 FLX, brings to Core labs the ability to generate unparalleled amounts of data for a myriad of research applications. But such technology requires a level of sample preparation, ancillary equipment, data handling, and fixed cost structure that calls for careful planning before implementation.

In this session we will discuss the most important aspects of human and financial resources, technical skills, physical space and IT infrastructure necessary to implement and manage successful next generation sequencing services by institutional Core labs.

We will also present survey data on labs that have already adopted those technologies. We will attempt to address issues such as choice of instrument, types of services, costs and fee structures, trends in demand and plans for the future.

J Biomol Tech. 2009 February; 20(1): 25.

w4-a Implementing and Running the Illumina GA and Roche 454 at a DNA Sequencing Core Lab

Abstract

This talk will address the implementation process of the Illumina GA and Roche 454 in the DNA Sequencing Core of The University of Michigan and will attempt to answer some of the following questions: Why were those instruments chosen? What preparations were necessary and what are the necessary resources? Can the instruments pay for themselves? How is the upfront sample preparation set up between the Core lab and the users? What are the pitfalls to avoid (the things they didn’t tell you)? What QC checks need to be in place?

J Biomol Tech. 2009 February; 20(1): 25.

w4-b Implementing and Running SOLiD services at a Microarray Core Facility

Abstract

This talk will address some of the same questions as the first talk in this session, but from a Microarray Core Lab point of view, using the Applied Biosystems SOLiD platform. Additionally, issues involving data handling and the downstream analysis pipeline will be addressed: How much bioinformatics support should the Core provide? What resources are needed for data management and IT infrastructure?

J Biomol Tech. 2009 February; 20(1): 26.

w04-c Platform Cross-Comparison and Core Facility Next-Gen Survey Data

Abstract

This talk will present the results from an online and phone survey on next-gen services offered by Core labs. The survey data will attempt to provide the audience with a broad picture of how current Core labs are handling next-gen technology, how they have structured their services, the costs behind those services, and what the demand for services is now, with some forecasting for the future. Finally, a comparison summary will be presented of the specifications of the Applied Biosystems, Illumina, and Roche next-generation sequencing instruments, and the planning necessary for successful implementation in an institutional Core facility.

J Biomol Tech. 2009 February; 20(1): 26.

w5 Implementing Mass Spectrometry Technologies—Challenges of Implementing New Mass Spectrometry Technologies in Shared Resource Facilities

Abstract

The aim of the workshop is to enable mass spectrometry and proteomics core directors who have experience with implementing both conventional and new mass spectrometry technologies to share and discuss their practical experiences in using these platforms for multiple user applications. The range of topics that will be discussed includes the challenges of securing resources and funding for new instrumentation and maintenance of existing equipment, deciding what types and sizes of projects to support, how conventional and new mass spectrometry technologies factor into the decision of which types of projects to support in a shared resource, and training personnel to use and apply conventional and new MS technologies. The experience of Directors of different-sized academic core facilities, from small to very large, will be featured in the workshop.

J Biomol Tech. 2009 February; 20(1): 26.

w5-a Application of Advanced Mass Spectrometry Technologies at the University of Tennessee Health Science Center

Abstract

This presentation will discuss selected aspects related to the activities of the Mass Spectrometry Core Facility at the University of Tennessee Health Science Center (UTHSC). Topics to be discussed include: how the facility developed over the past years in reaction to the expansion of mass spectrometry and the introduction of improved technologies; how the shared resources are maintained and upgraded; and what strategies are being used to meet the evolving needs of the scientific community at UTHSC. The UTHSC mass spectrometry center is an established laboratory that conducts independent and collaborative research, and provides mass spectrometry services to investigators at multiple colleges at UTHSC and at neighboring institutions. The facility is headed by a Director and two Associate Directors; additional personnel include a Research Associate responsible for day-to-day management of the facility. The instruments available at the facility encompass matrix-assisted laser desorption/ionization (MALDI) and electrospray (ESI) ionization modes, in combination with different types of analyzers (ion trap; time-of-flight; and quadrupole-time-of-flight). Three nanoflow LC systems are interfaced with the mass spectrometers. Instrument access (trained-user, or facility-personnel-only) depends on the instrument type and on the nature of the analysis. The facility currently supports research projects of 14 principal investigators. The majority of applications are in the area of peptide and protein analysis, in particular identification of proteins by MS/MS-based approaches. Recent additions to the services of the facility include characterization of phosphopeptides and phosphoproteins, and accurate mass measurement (in support of medicinal chemistry projects). Facility personnel also conduct instrument demonstrations and training of new users (faculty, students, postdoctoral fellows). Directors of the facility spearhead funding applications for new equipment, direct graduate courses with emphasis on mass spectrometry, and promote mass spectrometry through discussions and seminars.

J Biomol Tech. 2009 February; 20(1): 26–27.

w5-b Center for Mass Spectrometry and Proteomics at the University of Minnesota: Organization and Policies

Abstract

The Center for Mass Spectrometry and Proteomics (CMSP) grew from a long-standing mass spectrometry facility under Thomas Krick. Major expansion has occurred since 1998. Numerous funding sources have included 5 federal instrumentation grants and local opportunities that ranged from the cigarette settlement, to support from individual colleges and units, to funds made available when earmark items were dropped from the federal budget. CMSP operates 15 mass spectrometers for proteomic and metabolomic research, including the following (dates of purchase): Orbitrap with ETD (2008), ESI-TOF (2008), Leco GC/GC-TOF (2008), Agilent single quad with mass-triggered sample collection (2008), ABI 4800 (2007), ABI QTrap 4000 (2006) and 2000 (2004), an LTQ (2005), and ABI Pulsar and XL QStars, plus four older instruments. The University recently pledged 5-year continuation of funding sufficient to cover personnel in the facility (three PhD-level, one MS-level, and one BS-level laboratory scientists, a computer scientist, and half-time appointments for a postdoc, a research associate, and a BS-level technician). Properties of the facility include at least the following: (A) A history of providing mass spectrometry to the biological sciences. (B) Strong technical expertise to maintain instruments in operating condition. (C) Active participation of a faculty director with access to University officials. (D) Active recruitment of science collaborators in areas such as the Medical School. (E) Extensive support from the Minnesota Supercomputing Institute for software and hardware purchase and maintenance. (F) A strong educational mission with open-door policies that educate and train users of the facility, including bimonthly 3-day workshops. (G) Employment of CMSP personnel as available collaborators for biological scientists. (H) Recruitment of new faculty with an interest in biological mass spectrometry who require expansion of a central core facility as part of their start-up package. Overall, the CMSP is seen as a strong recruitment tool for new faculty in the biological sciences.

J Biomol Tech. 2009 February; 20(1): 27.

w5-c The NIH National Center for Research Resources Mass Spectrometry at Washington University

Abstract

Washington University in St. Louis has been the host of an NIH NCRR Mass Spectrometry Facility for over 30 years. In 1994, M.L. Gross became PI and J.W. Turk, co-PI. In 2003, Reid Townsend joined as a second co-PI to help expand the foci of the resource in proteomics. The resource exists in three different laboratories, two in the School of Medicine and one in the Department of Chemistry. The laboratory in Chemistry has over ten mass spectrometers, ranging from ion-trap/FT-ICR and ion-trap/orbitrap, to MALDI TOF/TOF and MALDI FT-ICR, a QToF, and a number of ion traps. Soon to be installed are a 12-Tesla FT-ICR for top-down proteomics and a high-performance QToF (Bruker MaXis). One of the medical school labs (J. Turk, director) has lipidomics as its theme and utilizes ion traps, a QToF, and triple quadrupoles for this research. Isotope ratio measurements are also available in this lab. The second medical school lab (R. Townsend, director) specializes in proteomics and makes use of TOF/TOF, orbitrap, and FT-ICR technologies. The three labs collaborate on projects of mutual interest, share instrumentation, sponsor dissemination activities from local seminars to regional workshops, and train graduate students and medical scientists in mass spectrometry.

J Biomol Tech. 2009 February; 20(1): 27.

w6-a Implementing Optical Imaging Technologies

Abstract

In the post-genomic era of biomedical research, understanding the functionality of molecules at the cellular and subcellular level in living systems will become predominant. In this era we must move beyond static “snapshots” of the cellular state to an understanding of the biology of cells over time and in three-dimensional space. Within the cellular environment it is expected that we will be able to study the expression, the functional role(s), and the interactions of multiple unique molecules concurrently. Furthermore, it will be desirable to determine the effects of these molecules on cell development, organization, and fate over extended periods of time. To perform these types of studies it is necessary to develop new methodologies that will allow multiparametric analysis of cells while maintaining their functional viability. In the past this goal would have been extraordinarily difficult to achieve. However, developments in optical and computational technology and the development of spectrally discrete, extremely efficient fluorescent dyes have empowered modern microscopists to undertake these previously forbidding tasks. Performing these tasks, however, is both highly technical and extremely expensive. Thus the high-end, multicapability imaging resource has taken center stage, allowing researchers without specialized imaging training access to these extremely powerful and multiplexed imaging technologies. This workshop will consist of three sections and for the most part will be a group forum presentation with extensive questions and answers. The three sections will be:

  1. “What can I do with optical imaging today?” which will discuss the limits of the current technologies.
  2. “How and what do I need to organize a high end optical facility?”
  3. “How do I get money to pay for all this stuff?”

The session organizer, Dr. Frolich, and Dr. Henderson all direct very large, well-equipped imaging centers and have extensive experience attracting, maintaining, and improving the resources for these centers. Therefore, following a brief introduction to each topic, the panel will work as a round table to help attendees plan and implement strategies for building or expanding pre-existing imaging centers.

J Biomol Tech. 2009 February; 20(1): 28.

w7-a Structure, Function and Funding of Shared Research Resources

Abstract

Shared Research Resources, or “cores” as they are commonly called, began appearing in a very rudimentary form in the late 1960s. Initially these were focal points of faculty expertise and technology/instrumentation housed within an investigator’s own laboratory and developed primarily for the faculty member’s own use and that of selected colleagues. As biomedical research became more dependent on sophisticated instrumentation, and the demand by investigators for access to such technology and expertise became widespread, a more formalized structure and operation for such cores began to be developed. Currently, a variety of models are observed for shared research resource cores, with equally variable mechanisms for operation and funding, most of which are somewhat specifically tailored to the host institution. In this presentation I will discuss the range of core models as well as the various operational and funding mechanisms utilized by most cores. The expectation is that one will see that one size often does not fit all, and that flexibility and dynamic positioning of cores within an institution are critical for both fiscal and scientific success. Factors that influence such success will also be discussed.

J Biomol Tech. 2009 February; 20(1): 28.

w7-c Instrumentation for Core Facilities: the NIH SIG and HEI Programs

Abstract

For many years the NIH has provided expensive state-of-the-art equipment to the biomedical community through the NCRR Shared Instrumentation Grant (SIG) and, more recently, the High-End Instrumentation (HEI) programs. In many cases, these instruments are located in institutional core facilities, which provide access and the high-level technical expertise in cutting-edge technologies and complex analytical procedures required for both basic and translational studies. Although the SIG and HEI programs fund instrumentation that spans the technology spectrum, some instruments are placed in DNA sequencing, microarray, and mass spectrometry cores managed by facility directors who are members of the ABRF. This talk will summarize the funding levels and trends in equipment for both the SIG and HEI programs.

J Biomol Tech. 2009 February; 20(1): 28.

w8 Bio-Information Technology (Bio-IT)—Custom Software Development in Support of Core Facilities

Abstract

Core facilities are faced with numerous information management issues, ranging from daily operations support to complex year-end queries needed for budgeting and grant renewals. Commercial off-the-shelf (COTS) software that can effectively address these issues is rare and, even when available, tends to create more issues than it solves. Often COTS solutions address only a small portion of the information management needs of a particular core, leading to an amalgamation of several different tools addressing different areas. These tools are then often “glued together” by an ad-hoc, tangled web of custom scripting and manual processes. Each core at an institution ends up with its own combination of homegrown solutions, leading to tremendous information integration and consistency issues at the institutional level. This session will examine the need for software development groups “in context” with the core facilities themselves and their ability to develop consistent, integrated solutions to core facility information management issues. It will also highlight the solutions being developed at the panelists’ institutions.

J Biomol Tech. 2009 February; 20(1): 28–29.

w8-a Removing Data Silos with ISIS

Abstract

This paper discusses the ongoing process at The Jackson Laboratory (JAX) for the integration of scientific and administrative data under a unified data management plan called the Integrated Services Information System, or ISIS. This effort is in line with, albeit on a smaller scale, the goals of larger national and international efforts such as caBIG. Indeed, caBIG is an increasingly important resource for ISIS. The drivers for implementing ISIS have scientific, technological, and economic roots. The Scientific Services can be considered JAX’s biggest scientific data generator. But until recently, the services have generally treated data products as individual research results generated by a single service at the specific request of one or more investigators. Each data product (e.g., a sequencing run result) was generated and delivered to research clients with little or no consideration of its integrated scientific value with other scientific data generated in other services, or even other data generated within the same service. As the number of requests has grown and the complexity of the assays has increased, this thinking has begun to change. Data generated for one investigator may be of interest to another investigator or experiment. This data-silo approach is wasteful and, from an IT perspective, costly. To overcome the problems of developing expensive LIMS and disparate databases at JAX, we are standardizing controlled vocabularies, creating common underlying data models, building upon standard system architectures, and providing open application programming interfaces. As more services utilize relational database management systems to track data and workflow, the potential for adding value to data products through integration grows rapidly. In addition, as services expand and contract and new services are added, it is increasingly important yet more difficult to gather productivity and use metrics. ISIS is a significant step toward solving that problem.
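As a rough illustration of the integration pattern described above (a shared controlled vocabulary, a common data model, and an open programming interface), the sketch below shows how two services could register their data products in one place so that cross-service metrics become a single query. The table layout and the register_product helper are hypothetical assumptions for illustration, not the actual ISIS schema or API.

    # Minimal sketch (hypothetical schema and helper names, not the ISIS design)
    # of a shared controlled vocabulary and common data model that several core
    # services write to through one open interface.
    import sqlite3

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS vocabulary (
        term_id INTEGER PRIMARY KEY,
        term    TEXT UNIQUE NOT NULL          -- e.g. 'sequencing_run'
    );
    CREATE TABLE IF NOT EXISTS service (
        service_id INTEGER PRIMARY KEY,
        name       TEXT UNIQUE NOT NULL       -- e.g. 'DNA Sequencing'
    );
    CREATE TABLE IF NOT EXISTS data_product (
        product_id   INTEGER PRIMARY KEY,
        service_id   INTEGER NOT NULL REFERENCES service(service_id),
        term_id      INTEGER NOT NULL REFERENCES vocabulary(term_id),
        investigator TEXT NOT NULL,
        created_on   TEXT DEFAULT CURRENT_TIMESTAMP
    );
    """

    def register_product(conn, service, term, investigator):
        """Record a data product against the shared vocabulary so any service can query it."""
        cur = conn.cursor()
        cur.execute("INSERT OR IGNORE INTO service(name) VALUES (?)", (service,))
        cur.execute("INSERT OR IGNORE INTO vocabulary(term) VALUES (?)", (term,))
        cur.execute(
            "INSERT INTO data_product(service_id, term_id, investigator) "
            "SELECT s.service_id, v.term_id, ? FROM service s, vocabulary v "
            "WHERE s.name = ? AND v.term = ?",
            (investigator, service, term),
        )
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    register_product(conn, "DNA Sequencing", "sequencing_run", "Investigator A")
    register_product(conn, "Gene Expression", "microarray_hybridization", "Investigator B")
    # A productivity metric across all services now comes from a single query:
    print(conn.execute("SELECT COUNT(*) FROM data_product").fetchone()[0])

With every service writing through the same interface and vocabulary, the productivity and use metrics mentioned above reduce to ordinary queries over one model.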

J Biomol Tech. 2009 February; 20(1): 29.

w8-b Internal Software Development and Integration Experiences at the Cornell University Life Sciences Core Laboratories Center

Abstract

The Core Laboratories Center at Cornell University provides a wide variety of technologies, platforms and expertise to internal and external investigators. We have a long history of supporting the operations of our laboratories through internal development efforts, while integrating vendor software and commercial LIMS offerings. Rapidly changing technologies and the accelerated adoption of new platforms present unique challenges to an internal development effort. A creative combination of software development, software acquisition, deployment and integration along with planning and communication with laboratory managers is necessary to successfully support laboratory operations. I will offer a historical perspective of our software development and deployment from our earliest efforts in supporting slab gel DNA Sequencing to invoicing, equipment scheduling, complex proteomics workflow and massively parallel DNA Sequencing while describing our successes as well as unresolved problems and challenges.

J Biomol Tech. 2009 February; 20(1): 29.

w8-c SRM 2.0: Building the Next Generation of Core Facility Management Systems

Abstract

St. Jude’s Shared Resource Management (SRM) system is a laboratory management system designed to support core facility activities. It was originally designed to support the laboratories in the Hartwell Center for Bioinformatics and Biotechnology at St. Jude, including DNA synthesis, Peptide synthesis, DNA sequencing, Functional Genomics (spotted microarray), and Affymetrix laboratories. An original goal for SRM was for it to be sufficiently modular and scalable to support additional laboratories and potentially support all core facilities within an institution, providing a single portal for investigators to requisition services, retrieve data and invoices for services, and generate reports. The original implementation of SRM allowed new facilities to be deployed within 3–6 months, assuming two to three developers concurrently dedicated to developing modules for a new facility. Little did we know that the rate of desired SRM adoption and introduction of new facilities to St. Jude would far outstrip our ability to deliver the required modules. Clearly a different approach was necessary. Work is now underway to deliver SRM 2.0. SRM 2.0 will largely be driven by domain-specific metadata specified at runtime, allowing new facilities to be deployed without writing any additional code modules. According to our estimates, this will allow us to deliver at least 75–80% of the required functionality to support any one facility. The remaining functionality will be delivered by utilizing an innovative plug-in system, whereby various pre-defined extension points throughout the system can be enhanced by developing domain specific plug-in modules that will be deployed independently of the core system. We believe these two fundamental concepts will allow us to shorten deployment time to 3–6 weeks or possibly less. It is our vision that SRM 2.0 will represent the next generation of core facility management systems and be the best available software in the world for managing core facilities.
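To make the metadata-driven and plug-in ideas described above concrete, the sketch below shows one way a facility could be defined entirely by runtime metadata, with a pre-defined extension point that a domain-specific plug-in fills in. All names here (FACILITY_METADATA, register_plugin, validate_request) are illustrative assumptions, not SRM 2.0's actual interfaces.

    # Minimal sketch (illustrative names only, not SRM 2.0 code) of a facility
    # defined purely by runtime metadata, plus a plug-in extension point for the
    # behavior that cannot be expressed as metadata.
    FACILITY_METADATA = {
        "name": "Peptide Synthesis",
        "request_fields": [
            {"id": "sequence", "label": "Peptide sequence", "type": "str", "required": True},
            {"id": "quantity_mg", "label": "Quantity (mg)", "type": "float", "required": True},
        ],
    }

    # Registry of plug-ins keyed by (facility, extension point), deployed independently.
    PLUGINS = {}

    def register_plugin(facility, hook):
        def decorator(func):
            PLUGINS[(facility, hook)] = func
            return func
        return decorator

    def submit_request(metadata, values):
        """Generic request handler driven entirely by the facility's metadata."""
        casts = {"str": str, "float": float}
        request = {}
        for field in metadata["request_fields"]:
            if field["id"] not in values:
                if field["required"]:
                    raise ValueError(f"missing required field: {field['id']}")
                continue
            request[field["id"]] = casts[field["type"]](values[field["id"]])
        # Extension point: domain-specific validation supplied by a plug-in, if any.
        hook = PLUGINS.get((metadata["name"], "validate_request"))
        if hook:
            hook(request)
        return request

    @register_plugin("Peptide Synthesis", "validate_request")
    def check_residues(request):
        if any(aa not in "ACDEFGHIKLMNPQRSTVWY" for aa in request["sequence"].upper()):
            raise ValueError("sequence contains a non-standard residue")

    print(submit_request(FACILITY_METADATA, {"sequence": "ACDK", "quantity_mg": "5"}))

In this style, adding a new facility means writing a metadata description rather than new code modules; only behavior that cannot be expressed as metadata requires a plug-in deployed at an extension point.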

J Biomol Tech. 2009 February; 20(1): 29.

Searching and Sorting: Preparing Protein Identification Data for Publication

Abstract

The identification of proteins in complex mixtures, as well as the characterization of the myriad post-translational modifications that they undergo, remains a major part of proteomic research. The vast majority of such data are generated by mass spectrometry, and analyzing, interpreting, and refining the output of experiments utilizing this technology in preparation for publication is of great significance, as these manipulations directly affect the reliability of the reported results. There are many search engines and other programs for refining protein identification data, including some that assess the confidence in the findings, and the variability inherent in these programs poses problems for the editors and reviewers of journals. In this workshop, several general aspects of this issue will be discussed and illustrated using three major software packages: Protein Prospector, Mascot, and Scaffold.

J Biomol Tech. 2009 February; 20(1): 29–30.

w11-a From Results to Publication: Journal Guidelines and Protein Prospector

Abstract

As the amounts of proteomic data acquired in a given experiment have exploded over the last few years, analysis of mass spectrometry data by software has moved from a semi-supervised process to an essentially complete reliance on automated search engine results. This has led to more pressure on search engines, from both researchers and journals, to report results of measurable reliability. In this presentation, the changes that have been made to Protein Prospector to adjust to these demands will be presented, including the use of expectation values, target-decoy database searching strategies, and the reporting of quantitation statistics. Strengths and weaknesses of these approaches will be highlighted.
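For readers unfamiliar with the target-decoy strategy mentioned above, the false discovery rate (FDR) at a score threshold is conventionally estimated as the number of decoy hits divided by the number of target hits at or above that threshold. The sketch below is a generic illustration of that calculation, assuming a search against a concatenated target plus reversed-sequence (decoy) database; it is not Protein Prospector's code, and the example scores are invented.

    # Generic sketch of target-decoy FDR estimation (not Protein Prospector's
    # implementation); each hit is (score, is_decoy) from a concatenated
    # target + decoy database search.
    def fdr_at_threshold(hits, threshold):
        """Estimate FDR among hits at or above the threshold as decoys / targets."""
        targets = sum(1 for score, is_decoy in hits if score >= threshold and not is_decoy)
        decoys = sum(1 for score, is_decoy in hits if score >= threshold and is_decoy)
        return decoys / targets if targets else 0.0

    def threshold_for_fdr(hits, max_fdr=0.01):
        """Find the most permissive score threshold keeping the estimated FDR below max_fdr."""
        for score, _ in sorted(hits, key=lambda h: h[0]):   # scan thresholds from low to high
            if fdr_at_threshold(hits, score) <= max_fdr:
                return score
        return float("inf")

    # Invented example scores for illustration only.
    hits = [(45.0, False), (42.0, False), (40.0, True), (38.0, False), (30.0, True), (25.0, False)]
    cutoff = threshold_for_fdr(hits, max_fdr=0.35)
    print(cutoff, fdr_at_threshold(hits, cutoff))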

J Biomol Tech. 2009 February; 20(1): 30.

w11-b Providing Mascot Search Results in a Format Suitable for Submission as Supplementary Data

Abstract

The different journals’ publication guidelines for the analysis and documentation of peptide and protein identifications have proved to be a challenge to many authors when submitting manuscripts for publication. In this talk, three requirements and potential solutions for the Mascot search engine will be described:

  1. “For large scale experiments, provide the results of any additional statistical analyses that indicate or establish a measure of identification certainty, or allow a determination of the false-positive rate.” While this requirement has certainly improved the quality of the data being provided, generating these data has caused difficulties for laboratories with limited bioinformatic resources.
  2. For all experiments, provide reports with information about proteins identified, for example sequence coverage, “score(s) and any associated statistical information obtained for searches conducted.” Standard Mascot reports don’t always provide all the required information in a format suitable for the different journals. It is also often unclear what statistical data should be provided.
  3. For “one hit wonders,” the additional requirement of “annotated spectra with masses observed as well as fragment assignments.” When the MCP guidelines were developed, it was probably assumed that this would entail producing at most a handful of spectra. What about cases where hundreds of such spectra are required?

The talk will conclude with a description of the Analysis XML format developed by the HUPO Proteomics Standards Initiative. As further tools are developed, this format will greatly ease the burden of producing supplementary data for publication and enable reviewers to more readily make judgements about the reliability of submitted data.

J Biomol Tech. 2009 February; 20(1): 30.

w11-c Organizing MS/MS Proteomic Data for Publication

Abstract

Large-scale proteomics studies generate enormous amounts of data that are impossible to curate by hand. As a result, all of the major proteomics journals have adopted guidelines to ensure high quality standards for publication. However, meeting these guidelines can be difficult when studies span years, multiple instruments, and tens or hundreds of result files. Scaffold was designed to help researchers organize and manipulate these data as well as automatically collect the necessary search parameters for publication. Scaffold stays instrument-, methodology-, and search engine-agnostic by reinterpreting all search engine results on an even playing field using generic statistical techniques. These techniques let scientists and journal reviewers filter data in expected ways to separate trusted results from the portion of data that requires hand curation. Finally, Scaffold implicitly acknowledges that the assumptions it and other software make about MS/MS data can be wrong and gives scientists and journal reviewers the tools to independently audit the accuracy of the analysis.
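As one generic example of the kind of statistical reinterpretation and filtering described above (a common textbook approach, not Scaffold's actual algorithm), peptide-level probabilities from any search engine can be combined into a protein-level probability and then filtered against user-chosen thresholds; the protein names and probability values below are invented for illustration.

    # Minimal sketch (not Scaffold's algorithm) of combining peptide probabilities
    # into a protein probability and filtering on probability and peptide count.
    def protein_probability(peptide_probs):
        """Probability that at least one supporting peptide identification is correct."""
        p_all_wrong = 1.0
        for p in peptide_probs:
            p_all_wrong *= (1.0 - p)
        return 1.0 - p_all_wrong

    def filter_proteins(proteins, min_prob=0.95, min_peptides=2):
        """Keep proteins meeting both a probability and a peptide-count threshold."""
        kept = {}
        for name, peptide_probs in proteins.items():
            if len(peptide_probs) >= min_peptides and protein_probability(peptide_probs) >= min_prob:
                kept[name] = protein_probability(peptide_probs)
        return kept

    proteins = {
        "ALBU_HUMAN": [0.99, 0.95, 0.80],   # three confident peptides
        "K2C1_HUMAN": [0.60],               # a single weak "one hit wonder"
    }
    print(filter_proteins(proteins))        # only ALBU_HUMAN passes both thresholds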

J Biomol Tech. 2009 February; 20(1): 30.

w12-a Technical Workshop on Real-Time Biophysical Technologies Used for Characterization of Biomolecular Interactions

Abstract

There are many existing techniques, such as ELISA, analytical ultracentrifugation, ITC, and NMR, for studying biomolecular interactions. New technologies based on Surface Plasmon Resonance (SPR), Quartz Crystal Microbalance (QCM), and Fluorescence Fluctuation Spectroscopy (FFS) are emerging to complement the existing ones and offer label-free, real-time biomolecular interaction analysis. Recently a wide spectrum of biosensor platforms has come on the market from several manufacturers. These biosensors offer a comprehensive analysis of binding kinetics, affinity, specificity, and concentration measurements for a wide range of biomolecules, including small molecules, antibodies, lipids, nucleic acids, and proteins. Comprehensive characterization of biomolecular interactions is central to understanding molecular mechanisms and structure-function relationships of biomolecules for research and development, antibody characterization, drug discovery, biotherapeutic development, and proteomics. The technology workshop will focus on the new generation of instruments based on SPR and QCM technologies developed by GE Health Sciences, Fortebio, Inc., Bio-Rad, Corning, and Attana and their applications to quantify biomolecular interactions. A 10–12 minute presentation by each speaker (Eric Rhous from GE Health Sciences, Sriram Kumaraswamy from Fortebio, Inc., Yasmina N. Abdiche of BBC Rinat Laboratories-Pfizer, Inc., on behalf of Bio-Rad, Anthony G. Frutos from Corning, and Theres Jägerbrink from Attana) will be made, followed by a roundtable discussion.
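For context, the binding kinetics and affinity measurements these label-free platforms report are most often interpreted with a simple 1:1 interaction model, in which the association rate constant ka, the dissociation rate constant kd, and the equilibrium dissociation constant KD = kd/ka summarize the interaction. The sketch below simulates such a sensorgram with invented parameter values; it is a minimal illustration of the textbook model, not any vendor's analysis software.

    # Minimal sketch (not vendor software) of the 1:1 Langmuir binding model
    # commonly used to interpret SPR/QCM sensorgrams:
    #   dR/dt = ka * C * (Rmax - R) - kd * R   during analyte injection
    #   dR/dt = -kd * R                        during buffer wash (dissociation)
    def simulate_sensorgram(ka, kd, conc, rmax, t_assoc=120.0, t_dissoc=180.0, dt=0.1):
        """Return (time, response) lists for an association phase followed by dissociation."""
        times, responses, r, t = [], [], 0.0, 0.0
        while t <= t_assoc + t_dissoc:
            times.append(t)
            responses.append(r)
            dr = (ka * conc * (rmax - r) - kd * r) if t <= t_assoc else (-kd * r)
            r += dr * dt
            t += dt
        return times, responses

    ka, kd = 1.0e5, 1.0e-3            # M^-1 s^-1 and s^-1 (invented, illustrative values)
    print("KD =", kd / ka, "M")       # equilibrium dissociation constant
    times, responses = simulate_sensorgram(ka, kd, conc=100e-9, rmax=200.0)
    print("response at end of association:", round(responses[1200], 1), "RU")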

