Results 1-25 (37)
1.  MOPED enables discoveries through consistently processed proteomics data 
Journal of Proteome Research  2013;13(1):107-113.
The Model Organism Protein Expression Database (MOPED, http://moped.proteinspire.org) is an expanding proteomics resource to enable biological and biomedical discoveries. MOPED aggregates simple, standardized and consistently processed summaries of protein expression and metadata from proteomics (mass spectrometry) experiments from human and model organisms (mouse, worm and yeast). The latest version of MOPED adds new estimates of protein abundance and concentration, as well as relative (differential) expression data. MOPED provides a new updated query interface that allows users to explore information by organism, tissue, localization, condition, experiment, or keyword. MOPED supports the Human Proteome Project’s efforts to generate chromosome- and disease-specific proteomes by providing links from proteins to chromosome and disease information, as well as many complementary resources. MOPED supports a new omics metadata checklist in order to harmonize data integration, analysis and use. MOPED’s development is driven by the user community, which spans 90 countries and guides future development that will transform MOPED into a multi-omics resource. MOPED encourages users to submit data in a simple format; they can use the metadata checklist to generate a data publication for each submission. As a result, MOPED will provide even greater insights into complex biological processes and systems and enable deeper and more comprehensive biological and biomedical discoveries.
doi:10.1021/pr400884c
PMCID: PMC4039175  PMID: 24350770
2.  Toward More Transparent and Reproducible Omics Studies Through a Common Metadata Checklist and Data Publications 
Abstract
Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. The omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement.
doi:10.1089/omi.2013.0149
PMCID: PMC3903324  PMID: 24456465
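To make the proposal above concrete, here is one hypothetical shape a checklist-conformant, stand-alone data publication record could take. Every field name below is invented for illustration; the actual checklist items build on community ontologies and are specified in the paper.

```python
# Hypothetical sketch of a stand-alone omics data publication record
# conforming to a common metadata checklist. Field names are invented
# for demonstration, not taken from the paper's checklist.
import json

data_publication = {
    "study_id": "EXAMPLE-0001",                       # invented identifier
    "title": "Proteomic profiling of S. cerevisiae under heat stress",
    "organism": {"name": "Saccharomyces cerevisiae", "taxon_id": 4932},
    "technology": "LC-MS/MS",
    "instrument": "LTQ-Orbitrap",
    "sample_conditions": ["30C control", "42C heat shock"],
    "data_location": "https://example.org/raw/EXAMPLE-0001",  # placeholder URL
    "processing": {"search_engine": "X!Tandem", "fdr_threshold": 0.01},
    "contact": "pi@example.org",
    "license": "CC0",
}
print(json.dumps(data_publication, indent=2))
```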
4.  Optimizing high performance computing workflow for protein functional annotation 
Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. Based on the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST), the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.
doi:10.1002/cpe.3264
PMCID: PMC4194055  PMID: 25313296
science gateways; petascale; data-enabled life sciences; sequence similarity; computational bioinformatics; protein annotation; protein sequence universe; COG; BLAST; PSI-BLAST; HSPp-BLAST; XSEDE; PS
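The fan-out step of such a workflow can be sketched briefly: partition the input FASTA into chunks and align each chunk in parallel. This is a minimal single-node sketch, not the paper's HSPp-BLAST/XSEDE pipeline; it assumes NCBI BLAST+'s psiblast is on the PATH and that a preformatted database named cog_profiles exists, and the file names and parameters are illustrative.

```python
# Minimal sketch: chunk a FASTA file and run PSI-BLAST on each chunk in
# parallel. Paths, database name, and parameters are assumptions.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def split_fasta(path, chunk_size=1000):
    """Yield lists of FASTA records, chunk_size sequences per chunk."""
    records, current = [], []
    for line in Path(path).read_text().splitlines():
        if line.startswith(">") and current:
            records.append(current)
            current = []
        current.append(line)
    if current:
        records.append(current)
    for i in range(0, len(records), chunk_size):
        yield records[i:i + chunk_size]

def run_psiblast(chunk_path: Path) -> Path:
    """Align one chunk against an assumed preformatted database."""
    out = chunk_path.with_suffix(".tsv")
    subprocess.run(
        ["psiblast", "-query", str(chunk_path), "-db", "cog_profiles",
         "-num_iterations", "3", "-evalue", "1e-5",
         "-outfmt", "6", "-out", str(out)],
        check=True,
    )
    return out

if __name__ == "__main__":
    chunk_paths = []
    for i, records in enumerate(split_fasta("proteins.faa")):
        p = Path(f"chunk_{i:05d}.faa")
        p.write_text("\n".join(l for rec in records for l in rec) + "\n")
        chunk_paths.append(p)
    with ProcessPoolExecutor() as pool:   # one worker per core
        results = list(pool.map(run_psiblast, chunk_paths))
```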
5.  Modeling sequence and function similarity between proteins for protein functional annotation 
A common task in biological research is to predict function for proteins by comparing sequences between proteins of known and unknown function. This is often done using pair-wise sequence alignment algorithms (e.g. BLAST). A problem with this approach is the assumption of a simple equivalence between a minimum sequence similarity threshold and the function similarity between proteins. This assumption is based on the binary concept of homology, in that proteins either are or are not homologous. The relationship between sequence and function, however, is more complex, and it is pertinent for predicting protein function, e.g. for evaluating BLAST alignments or developing training sets for profile models based on functional rather than homologous groupings. Our motivation for this study was to model sequence and function similarity between proteins to gain insights into the sequence-function similarity relationship between proteins for predicting function. Using our model we found that function similarity generally increases with sequence similarity, but with a high degree of variability. This result has implications for pair-wise approaches in that it appears sequence similarity must be very high to ensure high function similarity. Profile models, which enable higher sensitivity, are a potential solution. However, multiple sequence alignments (a necessary prerequisite) are a problem in that current algorithms have difficulty aligning sequences with very low sequence similarity, which is common in our data set, or are intractable for high numbers of sequences. Given the importance of predicting protein function and the need for multiple sequence alignments, algorithms for accomplishing this task should be further refined and developed.
doi:10.1145/1851476.1851548
PMCID: PMC4120521  PMID: 25101328
Experimentation; Biostatistics; Bioinformatics; Multiple Sequence Alignment
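As a toy illustration of the sequence-function comparison (not the authors' model), one can score a protein pair both by crude sequence similarity and by the Jaccard overlap of their GO term sets; the sequences and GO assignments below are invented.

```python
# Toy pairing of sequence similarity with function similarity.
# Real studies would use BLAST alignments and curated GO annotations.
from difflib import SequenceMatcher

def seq_identity(a: str, b: str) -> float:
    """Crude global similarity ratio (stand-in for an alignment score)."""
    return SequenceMatcher(None, a, b).ratio()

def go_jaccard(terms_a: set, terms_b: set) -> float:
    """Function similarity as overlap of GO term annotations."""
    return len(terms_a & terms_b) / len(terms_a | terms_b)

protA = ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", {"GO:0003677", "GO:0006355"})
protB = ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVK", {"GO:0003677", "GO:0016301"})

print(f"sequence similarity: {seq_identity(protA[0], protB[0]):.2f}")
print(f"function similarity: {go_jaccard(protA[1], protB[1]):.2f}")
```

Run over many pairs, this kind of scoring is what produces the scatter the abstract describes: function similarity rising with sequence similarity, but with high variance.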
6.  Designing a post-genomics knowledge ecosystem to translate pharmacogenomics into public health action 
Genome Medicine  2012;4(11):91.
Translation of pharmacogenomics to public health action is at the epicenter of the life sciences agenda. Post-genomics knowledge is simultaneously co-produced at multiple scales and locales by scientists, crowd-sourcing and biological citizens. The latter are entrepreneurial citizens who are autonomous, self-governing and increasingly conceptualizing themselves in biological terms, ostensibly taking responsibility for their own health, and engaging in patient advocacy and health activism. By studying these heterogeneous 'scientific cultures', we can locate innovative parameters of collective action to move pharmacogenomics to practice (personalized therapeutics). To this end, we reconceptualize knowledge-based innovation as a complex ecosystem comprising 'actors' and 'narrators'. For robust knowledge translation, we require a nested post-genomics technology governance system composed of first-order narrators (for example, social scientists, philosophers, bioethicists) situated at arm's length from innovation actors (for example, pharmacogenomics scientists). Yet, second-order narrators (for example, an independent and possibly crowd-funded think-tank of citizen scholars, marginalized groups and knowledge end-users) are crucial to prevent first-order narrators from gaining excessive power that can be misused in the course of steering innovations. To operate such 'self-calibrating' and nested innovation ecosystems, we introduce the concept of 'wiki-governance' to enable mutual and iterative learning among innovation actors and first- and second-order narrators.
'[A] scientific expert is someone who knows more and more about less and less, until finally knowing (almost) everything about (almost) nothing.' [1]
'Ubuntu: I am because you are.' [2]
doi:10.1186/gm392
PMCID: PMC3580424  PMID: 23194449
7.  Integrative Analysis of Longitudinal Metabolomics Data from a Personal Multi-Omics Profile  
Metabolites  2013;3(3):741-760.
The integrative personal omics profile (iPOP) is a pioneering study that combines genomics, transcriptomics, proteomics, metabolomics and autoantibody profiles from a single individual over a 14-month period. The observation period includes two episodes of viral infection: a human rhinovirus and a respiratory syncytial virus. Such profiling gives an informative snapshot of the biological functioning of an organism. We hypothesize that pathway expression levels are associated with disease status. To test this hypothesis, we use biological pathways to integrate metabolomics and proteomics iPOP data. The approach computes the pathways’ differential expression levels at each time point, while taking into account the pathway structure and the longitudinal design. The resulting pathway levels show strong association with the disease status. Further, we identify temporal patterns in metabolite expression levels. The changes in metabolite expression levels also appear to be consistent with the disease status. The results of the integrative analysis suggest that changes in biological pathways may be used to predict and monitor the disease. The iPOP experimental design, data acquisition and analysis issues are discussed within the broader context of personal profiling.
doi:10.3390/metabo3030741
PMCID: PMC3901289  PMID: 24958148
metabolomics; integrative pathway analysis; DEAP; dendrogram sharpening; DELSA; iPOP; longitudinal design; multi-omics data; single linkage
8.  Correction: Differential Expression Analysis for Pathways 
PLoS Computational Biology  2013;9(4):10.1371/annotation/58cf4d21-f9b0-4292-94dd-3177f393a284.
doi:10.1371/annotation/58cf4d21-f9b0-4292-94dd-3177f393a284
PMCID: PMC3648644
9.  Differential Expression Analysis for Pathways 
PLoS Computational Biology  2013;9(3):e1002967.
Life science technologies generate a deluge of data that hold the keys to unlocking the secrets of important biological functions and disease mechanisms. We present DEAP, Differential Expression Analysis for Pathways, which capitalizes on information about biological pathways to identify important regulatory patterns from differential expression data. DEAP makes significant improvements over existing approaches by including information about pathway structure and discovering the most differentially expressed portion of the pathway. On simulated data, DEAP significantly outperformed traditional methods: with high differential expression, DEAP increased power by two orders of magnitude; with very low differential expression, DEAP doubled the power. DEAP performance was illustrated on two different gene and protein expression studies. DEAP discovered fourteen important pathways related to chronic obstructive pulmonary disease and interferon treatment that existing approaches omitted. On the interferon study, DEAP guided focus towards a four-protein path within the 26-protein Notch signalling pathway.
Author Summary
The data deluge represents a growing challenge for life sciences. Within this sea of data surely lie many secrets to understanding important biological and medical systems. To quantify important patterns in this data, we present DEAP (Differential Expression Analysis for Pathways). DEAP amalgamates information about biological pathway structure and differential expression to identify important patterns of regulation. On both simulated and biological data, we show that DEAP is able to identify key mechanisms while making significant improvements over existing methodologies. For example, on the interferon study, DEAP uniquely identified both the interferon gamma signalling pathway and the JAK STAT signalling pathway.
doi:10.1371/journal.pcbi.1002967
PMCID: PMC3597535  PMID: 23516350
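The central idea, finding the most differentially expressed portion of a pathway, can be sketched as a search over directed paths in a small pathway graph. This toy version omits DEAP's actual scoring details and permutation-based significance testing.

```python
# Toy sketch: score every directed path through a pathway graph by the sum
# of its members' differential expression values; report the best path.
edges = {  # invented pathway topology (node -> downstream nodes)
    "A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []
}
diff_expr = {"A": 0.2, "B": 1.8, "C": -0.3, "D": 2.1}  # toy log fold changes

def best_path(node, score=0.0, path=()):
    path, score = path + (node,), score + diff_expr[node]
    candidates = [best_path(n, score, path) for n in edges[node]]
    candidates.append((score, path))
    return max(candidates)   # tuples compare by score first

score, path = max(best_path(start) for start in edges)
print(f"most differentially expressed path: {' -> '.join(path)} "
      f"(score {score:.1f})")   # A -> B -> D (score 4.1)
```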
10.  Opportunities and Challenges for the Life Sciences Community 
Abstract
Twenty-first century life sciences have transformed into data-enabled (also called data-intensive, data-driven, or big data) sciences. They principally depend on data-, computation-, and instrumentation-intensive approaches to seek comprehensive understanding of complex biological processes and systems (e.g., ecosystems, complex diseases, environmental, and health challenges). Federal agencies including the National Science Foundation (NSF) have played and continue to play an exceptional leadership role by innovatively addressing the challenges of data-enabled life sciences. Yet even more is required not only to keep up with the current developments, but also to pro-actively enable future research needs. Straightforward access to data, computing, and analysis resources will enable true democratization of research competitions; thus investigators will compete based on the merits and broader impact of their ideas and approaches rather than on the scale of their institutional resources. This is the Final Report for Data-Intensive Science Workshops DISW1 and DISW2. The first NSF-funded Data-Intensive Science Workshop (DISW1, Seattle, WA, September 19–20, 2010) surveyed the status of the data-enabled life sciences and identified their challenges and opportunities. This served as a baseline for the second NSF-funded DIS workshop (DISW2, Washington, DC, May 16–17, 2011). Based on the findings of DISW2, the following overarching recommendation to the NSF was proposed: establish a community alliance to be the voice and framework of the data-enabled life sciences. After this Final Report was finished, the Data-Enabled Life Sciences Alliance (DELSA, www.delsall.org) was formed to become a Digital Commons for the life sciences community.
doi:10.1089/omi.2011.0152
PMCID: PMC3300061  PMID: 22401659
12.  Design and Initial Characterization of the SC-200 Proteomics Standard Mixture 
Abstract
High-throughput (HTP) proteomics studies generate large amounts of data. Interpretation of these data requires effective approaches to distinguish noise from biological signal, particularly as instrument and computational capacity increase and studies become more complex. Resolving this issue requires validated and reproducible methods and models, which in turn require complex experimental and computational standards. The absence of appropriate standards and data sets for validating experimental and computational workflows hinders the development of HTP proteomics methods. Most protein standards are simple mixtures of proteins or peptides, or undercharacterized reference standards in which the identity and concentration of the constituent proteins are unknown. The proposed Seattle Children's 200 (SC-200) proteomics standard mixture is the next step toward developing realistic, fully characterized HTP proteomics standards. The SC-200 exhibits a unique modular design to extend its functionality, and consists of 200 proteins of known identities and molar concentrations from 6 microbial genomes, distributed into 10 molar concentration tiers spanning a 1,000-fold range. We describe the SC-200's design, potential uses, and initial characterization. We identified 84% of SC-200 proteins with an LTQ-Orbitrap and 65% with an LTQ-Velos (false discovery rate = 1% for both). There were obvious trends in success rate, sequence coverage, and spectral counts with protein concentration; however, protein identification, sequence coverage, and spectral counts varied greatly within concentration levels.
doi:10.1089/omi.2010.0118
PMCID: PMC3110723  PMID: 21250827
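A quick back-of-the-envelope check of the tier design: if the 10 tiers spanning a 1,000-fold range are log-spaced (an assumption; the paper defines the actual values), adjacent tiers differ by a constant factor of 1000^(1/9), roughly 2.15.

```python
# Back-of-the-envelope check of the SC-200 tier design, assuming
# log-spaced tiers (the actual tier values are defined in the paper).
ratio = 1000 ** (1 / 9)
tiers = [ratio ** i for i in range(10)]   # relative molar concentrations
print(f"adjacent-tier ratio: {ratio:.3f}")                     # 2.154
print("relative tiers:", ", ".join(f"{t:.1f}" for t in tiers)) # 1.0 ... 1000.0
```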
13.  The necessity of adjusting tests of protein category enrichment in discovery proteomics 
Bioinformatics  2010;26(24):3007-3011.
Motivation: Enrichment tests are used in high-throughput experimentation to measure the association between gene or protein expression and membership in groups or pathways; Fisher's exact test is commonly used. We specifically examined the associations produced by the Fisher test between protein identification by mass spectrometry discovery proteomics and Gene Ontology (GO) term assignments in a large yeast dataset. We found that direct application of the Fisher test is misleading in proteomics due to the bias of mass spectrometry to preferentially identify proteins based on their biochemical properties. False inferences about associations can be made if this bias is not corrected. Our method adjusts Fisher tests for these biases and produces associations more directly attributable to protein expression rather than experimental bias.
Results: Using logistic regression, we modeled the association between protein identification and GO term assignments while adjusting for identification bias in mass spectrometry. The model accounts for five biochemical properties of peptides: (i) hydrophobicity, (ii) molecular weight, (iii) transfer energy, (iv) beta turn frequency and (v) isoelectric point. The model was fit on 181,060 peptides from 2,678 proteins identified in 24 yeast proteomics datasets with a 1% false discovery rate. In analyzing the association between protein identification and GO term assignments, we found that 25% (134 out of 544) of Fisher tests that showed significant association (q-value ≤0.05) were non-significant after adjustment using our model. Simulations generating yeast protein sets enriched for identification propensity showed that unadjusted enrichment tests were biased while our approach worked well.
Contact: eugene.kolker@seattlechildrens.org
Supplementary information: Supplementary data are available at Bioinformatics online.
doi:10.1093/bioinformatics/btq541
PMCID: PMC2995116  PMID: 21068002
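The adjustment strategy can be sketched with synthetic data: when a GO category happens to be enriched for, say, hydrophobic peptides, and identification is driven by hydrophobicity, a logistic regression that includes the biochemical covariates attributes the association to the bias rather than to the category. The simulation below is a minimal sketch, not the paper's peptide-level model; all coefficients and column names are invented.

```python
# Sketch of a bias-adjusted enrichment test via logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
in_go = rng.integers(0, 2, n).astype(float)   # GO-term membership
hydro = rng.normal(0.8 * in_go, 1.0)          # category skews hydrophobic
other = rng.normal(size=(n, 4))               # toy stand-ins for MW, transfer
                                              # energy, beta-turn freq, pI
# Identification depends on hydrophobicity only -- the MS bias.
p_ident = 1 / (1 + np.exp(-(-1.0 + 0.8 * hydro)))
y = (rng.random(n) < p_ident).astype(float)

X = sm.add_constant(np.column_stack([in_go, hydro, other]))
fit = sm.Logit(y, X).fit(disp=0)
names = ["const", "in_go_term", "hydrophobicity",
         "mol_weight", "transfer_energy", "beta_turn", "isoelectric_pt"]
for name, coef, p in zip(names, fit.params, fit.pvalues):
    print(f"{name:>15s}  coef={coef:+.2f}  p={p:.3f}")
# After adjustment, in_go_term should be non-significant even though an
# unadjusted Fisher test on the same data would flag the GO category.
```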
14.  MOPED: Model Organism Protein Expression Database 
Nucleic Acids Research  2011;40(Database issue):D1093-D1099.
Large numbers of mass spectrometry proteomics studies are being conducted to understand all types of biological processes. The size and complexity of proteomics data hinder efforts to easily share, integrate, query and compare the studies. The Model Organism Protein Expression Database (MOPED, http://moped.proteinspire.org) is a new and expanding proteomics resource that enables rapid browsing of protein expression information from publicly available studies on humans and model organisms. MOPED is designed to simplify the comparison and sharing of proteomics data for the greater research community. MOPED uniquely provides protein-level expression data, meta-analysis capabilities and quantitative data from standardized analysis. Data can be queried for specific proteins, browsed by organism, tissue, localization and condition, and sorted by false discovery rate and expression. MOPED empowers users to visualize their own expression data and compare it with existing studies. Further, MOPED links to various protein and pathway databases, including GeneCards, Entrez, UniProt, KEGG and Reactome. The current version of MOPED contains over 43,000 proteins with at least one spectral match and more than 11 million high-certainty spectra.
doi:10.1093/nar/gkr1177
PMCID: PMC3245040  PMID: 22139914
15.  In-silico human genomics with GeneCards 
Human Genomics  2011;5(6):709-717.
Since 1998, the bioinformatics, systems biology, genomics and medical communities have enjoyed a synergistic relationship with the GeneCards database of human genes (http://www.genecards.org). This human gene compendium was created to help introduce order into the increasing chaos of information flow. As a consequence of viewing details and deep links related to specific genes, users have often requested enhanced capabilities, such that, over time, GeneCards has blossomed into a suite of tools (including GeneDecks, GeneALaCart, GeneLoc, GeneNote and GeneAnnot) for a variety of analyses of both single human genes and sets thereof. In this paper, we focus on in-house and external research activities which have been enabled, enhanced, complemented and, in some cases, motivated by GeneCards. In turn, such interactions have often inspired and propelled improvements in GeneCards. We describe here the evolution and architecture of this project, including examples of synergistic applications in diverse areas such as synthetic lethality in cancer, the annotation of genetic variations in disease, omics integration in a systems biology approach to kidney disease, and bioinformatics tools.
doi:10.1186/1479-7364-5-6-709
PMCID: PMC3525253  PMID: 22155609
GeneCards; GeneDecks; Partner Hunter; Set Distiller; omics; genomics; human genes; database; synthetic lethality; genetic variations
18.  Meta-analysis for Protein Identification: A Case Study on Yeast Data 
Abstract
Large amounts of mass spectrometry (MS) proteomics data are now publicly available; however, little attention has been given to how to best combine these data and assess the error rates for protein identification. The objective of this article is to show how variation in the type and amount of data included with each study impacts coverage of the yeast proteome and estimation of the false discovery rate (FDR). Our analysis of a subset of the publicly available yeast data showed that failure to reevaluate the FDR when combining protein IDs from different experiments resulted in an underestimation of the FDR by approximately threefold. A worst-case approximation of the FDR was only slightly larger than estimating the FDR by randomized database matches. The use of a weighted model to emphasize the most informative experimental data provided an increase in the number of IDs at a 1% FDR when compared to other meta-analysis approaches. Also, using an FDR higher than 1% results in a very high rate of false discoveries for IDs above the 1% threshold. Ideally, raw MS data will be made publicly available for complete and consistent reanalysis. In the circumstance that raw data is not available, determining a combined FDR on the basis of the worst-case estimation provides a reasonable approximation of the FDR. When combining experimental results, adding additional experiments results in diminishing and in some cases negative returns on protein identifications. It may be beneficial to include only those experiments generating the most unique identifications due to solid experimental design and sensitive instrumentation.
doi:10.1089/omi.2010.0034
PMCID: PMC3133781  PMID: 20569183
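The worst-case approximation can be illustrated with a small calculation: assume each experiment's expected false identifications are distinct proteins while true identifications overlap, so false discoveries accumulate across the merged list. All counts below are invented.

```python
# Worst-case combined-FDR sketch for merged protein ID lists.
experiments = [
    {"ids": 3000, "fdr": 0.01},   # invented per-study ID counts and FDRs
    {"ids": 2500, "fdr": 0.01},
    {"ids": 4000, "fdr": 0.01},
]
union_ids = 6000   # unique proteins after merging the lists (assumed)

# Worst case: no overlap among false IDs, so expected false IDs add up.
expected_false = sum(e["ids"] * e["fdr"] for e in experiments)
naive_fdr = 0.01                  # what you'd report without recomputing
worst_case_fdr = expected_false / union_ids

print(f"naive FDR:      {naive_fdr:.3%}")
print(f"worst-case FDR: {worst_case_fdr:.3%}")  # ~1.6%: understated if unadjusted
```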
19.  Interplay of heritage and habitat in the distribution of bacterial signal transduction systems 
Molecular bioSystems  2010;6(4):721-728.
Comparative analysis of the complete genome sequences from a variety of poorly studied organisms aims at predicting the ecological and behavioral properties of these organisms and helps characterize their habitats. This task requires finding appropriate descriptors that could be correlated with the core traits of each system and would allow meaningful comparisons. Using relatively simple bacterial models, first attempts have been made to introduce suitable metrics to describe the complexity of an organism’s signaling machinery, including the introduction of the “bacterial IQ” score. Here, we use an updated census of prokaryotic signal transduction systems to improve this parameter and evaluate its consistency within selected bacterial phyla. We also introduce a more elaborate descriptor: a set of profiles of the relative abundance of members of each family of signal transduction proteins encoded in each genome. We show that these family profiles are well conserved within each genus and are often consistent within families of bacteria. Thus, they reflect evolutionary relationships between organisms as well as individual adaptations of each organism to its specific ecological niche.
doi:10.1039/b908047c
PMCID: PMC3071642  PMID: 20237650
comparative genomics; evolution; protein phosphorylation; receptor; Mycobacterium; Shewanella
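The family-profile descriptor reduces, in sketch form, to normalizing per-family counts of signal transduction proteins within each genome and comparing genomes by profile correlation. The two genomes and counts below are invented; a real census would be drawn from a resource such as the MiST database.

```python
# Sketch: relative-abundance profiles of signal transduction protein
# families, compared between two hypothetical genomes.
import numpy as np

families = ["HK", "RR", "MCP", "GGDEF", "EAL", "STYK"]
counts = {
    "genome_A": np.array([30, 34, 26, 18, 9, 4], dtype=float),  # invented
    "genome_B": np.array([12, 15, 40, 30, 14, 2], dtype=float), # invented
}

profiles = {g: c / c.sum() for g, c in counts.items()}  # relative abundances
r = np.corrcoef(profiles["genome_A"], profiles["genome_B"])[0, 1]
print(f"profile correlation between genomes: {r:.2f}")
```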
20.  Meeting Report: The Terabase Metagenomics Workshop and the Vision of an Earth Microbiome Project 
Standards in Genomic Sciences  2010;3(3):243-248.
Between July 18th and 24th 2010, 26 leading microbial ecology, computation, bioinformatics and statistics researchers came together in Snowbird, Utah (USA) to discuss the challenge of how to best characterize the microbial world using next-generation sequencing technologies. The meeting was entitled “Terabase Metagenomics” and was sponsored by the Institute for Computing in Science (ICiS) summer 2010 workshop program. The aim of the workshop was to explore the fundamental questions relating to microbial ecology that could be addressed using advances in sequencing potential. Technological advances in next-generation sequencing platforms such as the Illumina HiSeq 2000 can generate in excess of 250 billion base pairs of genetic information in 8 days. Thus, the generation of a trillion base pairs of genetic information is becoming a routine matter. The main outcome from this meeting was the birth of a concept and practical approach to exploring microbial life on earth, the Earth Microbiome Project (EMP). Here we briefly describe the highlights of this meeting and provide an overview of the EMP concept and how it can be applied to exploration of the microbiome of each ecosystem on this planet.
doi:10.4056/sigs.1433550
PMCID: PMC3035311  PMID: 21304727
21.  Meeting Report from the Genomic Standards Consortium (GSC) Workshop 9 
Standards in Genomic Sciences  2010;3(3):216-224.
This report summarizes the proceedings of the 9th workshop of the Genomic Standards Consortium (GSC), held at the J. Craig Venter Institute, Rockville, MD, USA. It was the first GSC workshop to have open registration and attracted over 90 participants. This workshop featured sessions that provided overviews of the full range of ongoing GSC projects. It included sessions on Standards in Genomic Sciences, the open access journal of the GSC, building standards for genome annotation, the M5 platform for next-generation collaborative computational infrastructures, building ties with the biodiversity research community and two discussion panels with government and industry participants. Progress was made on all fronts, and major outcomes included the completion of the MIENS specification for publication and the formation of the Biodiversity working group.
doi:10.4056/sigs.1353455
PMCID: PMC3035308  PMID: 21304722
22.  The United States of America and Scientific Research 
PLoS ONE  2010;5(8):e12203.
To gauge the current commitment to scientific research in the United States of America (US), we compared federal research funding (FRF) with the US gross domestic product (GDP) and industry research spending during the past six decades. In order to address the recent globalization of scientific research, we also focused on four key indicators of research activities: research and development (R&D) funding, total science and engineering doctoral degrees, patents, and scientific publications. We compared these indicators across three major population and economic regions: the US, the European Union (EU) and the People's Republic of China (China) over the past decade. We discovered a number of interesting trends with direct relevance for science policy. The level of US FRF has varied between 0.2% and 0.6% of the GDP during the last six decades. Since the 1960s, the US FRF contribution has fallen from twice that of industrial research funding to roughly equal. Also, in the last two decades, the portion of the US government R&D spending devoted to research has increased. Although well below the US and the EU in overall funding, the current growth rate for R&D funding in China greatly exceeds that of both. Finally, the EU currently produces more science and engineering doctoral graduates and scientific publications than the US in absolute terms, but not per capita. This study's aim is to facilitate a serious discussion of key questions by the research community and federal policy makers. In particular, our results raise two questions with respect to: a) the increasing globalization of science: “What role is the US playing now, and what role will it play in the future of international science?”; and b) the ability to produce beneficial innovations for society: “How will the US continue to foster its strengths?”
doi:10.1371/journal.pone.0012203
PMCID: PMC2922381  PMID: 20808949
23.  Meeting Report: “Metagenomics, Metadata and Meta-analysis” (M3) Workshop at the Pacific Symposium on Biocomputing 2010 
Standards in Genomic Sciences  2010;2(3):357-360.
This report summarizes the M3 Workshop held at the January 2010 Pacific Symposium on Biocomputing. The workshop, organized by Genomic Standards Consortium members, included five contributed talks, a series of short presentations from stakeholders in the genomics standards community, a poster session, and, in the evening, an open discussion session to review current projects and examine future directions for the GSC and its stakeholders.
doi:10.4056/sigs.802738
PMCID: PMC3035291  PMID: 21304719
24.  Quantifying Protein Function Specificity in the Gene Ontology 
Standards in Genomic Sciences  2010;2(2):238-244.
Quantitative or numerical metrics of protein function specificity made possible by the Gene Ontology are useful in that they enable development of distance or similarity measures between protein functions. Here we describe how to calculate four measures of function specificity for GO terms: 1) number of ancestor terms; 2) number of offspring terms; 3) proportion of terms; and 4) Information Content (IC). We discuss the relationship between the metrics and the strengths and weaknesses of each.
doi:10.4056/sigs.561626
PMCID: PMC3035283  PMID: 21304708
protein annotation; protein function; function specificity
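The four measures can be computed directly on the ontology graph. In this sketch, the proportion of terms at or below a term stands in for p(t) when computing Information Content, IC(t) = -log2 p(t); annotation-frequency-based IC is a common alternative. The five-term ontology is invented.

```python
# Sketch of the four GO specificity measures on a toy five-term DAG.
import math

parents = {"root": [], "A": ["root"], "B": ["root"], "A1": ["A"], "A2": ["A"]}

def ancestors(term):
    out = set()
    for p in parents[term]:
        out |= {p} | ancestors(p)
    return out

def offspring(term):
    return {t for t in parents if term in ancestors(t)}

total = len(parents)
for t in parents:
    p = (len(offspring(t)) + 1) / total   # the term plus its offspring
    print(f"{t:>4s}: ancestors={len(ancestors(t))} "
          f"offspring={len(offspring(t))} "
          f"proportion={p:.2f} IC={-math.log2(p):.2f}")
```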
25.  Risk Assessment and Communication Tools for Genotype Associations with Multifactorial Phenotypes: The Concept of “Edge Effect” and Cultivating an Ethical Bridge between Omics Innovations and Society 
Abstract
Applications of omics technologies in the postgenomics era swiftly expanded from rare monogenic disorders to multifactorial common complex diseases, pharmacogenomics, and personalized medicine. Already, there are signposts indicative of further omics technology investment in nutritional sciences (nutrigenomics), environmental health/ecology (ecogenomics), and agriculture (agrigenomics). Genotype–phenotype association studies are a centerpiece of translational research in omics science. Yet scientific and ethical standards and ways to assess and communicate risk information obtained from association studies have been neglected to date. This is a significant gap because association studies decisively influence which genetic loci become genetic tests in the clinic or products in the genetic test marketplace. A growing challenge concerns the interpretation of the large overlap typically observed in the distributions of quantitative traits in genetic association studies of polygenic/multifactorial phenotypes. To remedy the shortage of risk assessment and communication tools for association studies, this paper presents the concept of the edge effect: the shift in the population edges of a multifactorial quantitative phenotype is a more sensitive measure than the population average for gauging the population-level impact and, by extension, the policy significance of an omics marker. Empirical application of the edge effect concept is illustrated using an original analysis of warfarin pharmacogenomics and the VKORC1 genetic variation in a Brazilian population sample. These edge effect analyses are examined in relation to regulatory guidance development for association studies. We explain that omics science transcends the conventional laboratory bench space and includes a highly heterogeneous cast of stakeholders in society who have a plurality of interests that are often in conflict. Hence, communication of risk information in diagnostic medicine also demands attention to processes involved in production of knowledge and human values embedded in scientific practice, for example, why, how, by whom, and to what ends association studies are conducted, and standards are developed (or not). To ensure sustainability of omics innovations and forecast their trajectory, we need interventions to bridge the gap between omics laboratory and society. Appreciation of scholarship in the history of omics science is one remedy to responsibly learn from the past to ensure a sustainable future in omics fields, both emerging (nutrigenomics, ecogenomics) and more established (pharmacogenomics). Another measure to build public trust and sustainability of omics fields could be legislative initiatives to create a multidisciplinary oversight body, at arm's length from conflicts of interest, to carry out independent, impartial, and transparent innovation analyses and prospective technology assessment.
doi:10.1089/omi.2009.0011
PMCID: PMC2727354  PMID: 19290811
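A numerical sketch of the edge effect: two genotype groups whose mean dose requirements differ modestly can differ much more at the clinically risky edge of the distribution. All numbers below are synthetic and are not the paper's VKORC1/warfarin estimates.

```python
# Edge effect illustration: modest mean shift, much larger tail shift.
import numpy as np

rng = np.random.default_rng(1)
dose_wt  = rng.normal(5.0, 1.5, 100_000)   # mg/day, reference genotype (toy)
dose_var = rng.normal(4.4, 1.5, 100_000)   # mg/day, variant genotype (toy)

threshold = 2.0                            # low-dose edge: overdose-risk zone
mean_shift = dose_wt.mean() - dose_var.mean()
edge_wt  = (dose_wt  < threshold).mean()
edge_var = (dose_var < threshold).mean()

print(f"mean shift: {mean_shift:.2f} mg/day (~12% of the reference mean)")
print(f"P(dose < {threshold} mg/day): {edge_wt:.3f} vs {edge_var:.3f} "
      f"(~{edge_var/edge_wt:.1f}x at the edge)")
```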
