Results 1-25 of 41
 

1.  Combining an Expert-Based Medical Entity Recognizer to a Machine-Learning System: Methods and a Case Study 
Biomedical Informatics Insights  2013;6(Suppl 1):51-62.
Medical entity recognition is currently generally performed by data-driven methods based on supervised machine learning. Expert-based systems, where linguistic and domain expertise are directly provided to the system, are often combined with data-driven systems. We present here a case study where an existing expert-based medical entity recognition system, Ogmios, is combined with a data-driven system, Caramba, based on a linear-chain Conditional Random Field (CRF) classifier. Our case study specifically highlights the risk of overfitting incurred by an expert-based system. We observe that this overfitting prevents the combination of the two systems from improving precision, recall, or F-measure, and we analyze the underlying mechanisms through a post-hoc feature-level analysis. Wrapping the output of the expert-based system as attributes for a CRF classifier does boost its F-measure from 0.603 to 0.710, bringing it on par with the data-driven system. The generalization of this method remains to be further investigated.
doi:10.4137/BII.S11770
PMCID: PMC3776026  PMID: 24052691
natural language processing; information extraction; medical records; machine learning; hybrid methods; overfitting
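The feature-wrapping step mentioned in the abstract above, offering an expert system's output to a CRF as just another attribute, can be pictured with a minimal sketch. It assumes the third-party package sklearn-crfsuite; the toy sentences, expert tags, and gold labels are invented stand-ins, not the Ogmios/Caramba data.

```python
import sklearn_crfsuite

# Toy training data: (tokens, expert_tags, gold_labels) triples.
train = [
    (["Patient", "denies", "chest", "pain"],
     ["O", "O", "B-problem", "I-problem"],
     ["O", "O", "B-problem", "I-problem"]),
    (["Started", "on", "aspirin", "daily"],
     ["O", "O", "O", "O"],              # the expert system missed the drug here
     ["O", "O", "B-treatment", "O"]),
]

def token_features(tokens, expert_tags, i):
    return {
        "word.lower": tokens[i].lower(),
        "word.suffix3": tokens[i][-3:],
        "expert.tag": expert_tags[i],   # expert-system prediction as one more attribute
    }

X = [[token_features(toks, exp, i) for i in range(len(toks))]
     for toks, exp, _ in train]
y = [gold for _, _, gold in train]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X[:1]))
```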
2.  Text Categorization of Heart, Lung, and Blood Studies in the Database of Genotypes and Phenotypes (dbGaP) Utilizing n-grams and Metadata Features 
The database of Genotypes and Phenotypes (dbGaP) allows researchers to understand phenotypic contribution to genetic conditions, generate new hypotheses, confirm previous study results, and identify control populations. However, effective use of the database is hindered by suboptimal study retrieval. Our objective is to evaluate text classification techniques to improve study retrieval in the context of the dbGaP database. We utilized standard machine learning algorithms (naive Bayes, support vector machines, and the C4.5 decision tree) trained on dbGaP study text and incorporated n-gram features and study metadata to identify heart, lung, and blood studies. We used the χ2 feature selection algorithm to identify features that contributed most to classification performance and experimented with dbGaP associated PubMed papers as a proxy for topicality. Classifier performance was favorable in comparison to keyword-based search results. It was determined that text categorization is a useful complement to document retrieval techniques in the dbGaP.
doi:10.4137/BII.S11987
PMCID: PMC3728208  PMID: 23926434
text classification; text categorization; database; genome-wide association studies; GWAS; natural language processing
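A minimal sketch of the pipeline described in the abstract above: n-gram features, χ2 feature selection, and a naive Bayes classifier. scikit-learn is assumed, and the toy study descriptions and labels are invented for illustration; the actual dbGaP study text and metadata features are far richer.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB

# Invented stand-ins for dbGaP study descriptions and topical labels.
docs = [
    "longitudinal cohort of asthma and lung function in children",
    "genome-wide association study of coronary heart disease risk",
    "sickle cell disease blood transfusion outcomes registry",
    "population-based survey of dietary habits and obesity",
]
labels = ["lung", "heart", "blood", "other"]

clf = Pipeline([
    ("ngrams", CountVectorizer(ngram_range=(1, 2))),   # unigram + bigram features
    ("chi2", SelectKBest(chi2, k=10)),                  # keep the most informative n-grams
    ("nb", MultinomialNB()),
])
clf.fit(docs, labels)
print(clf.predict(["a cohort study of chronic lung disease"]))
```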
3.  Using Conversation Topics for Predicting Therapy Outcomes in Schizophrenia 
Biomedical Informatics Insights  2013;6(Suppl 1):39-50.
Previous research shows that aspects of doctor-patient communication in therapy can predict patient symptoms, satisfaction and future adherence to treatment (a significant problem with conditions such as schizophrenia). However, automatic prediction has so far shown success only when based on low-level lexical features, and it is unclear how well these can generalize to new data, or whether their effectiveness is due to their capturing aspects of style, structure or content. Here, we examine the use of topic as a higher-level measure of content, more likely to generalize and to have more explanatory power. Investigations show that while topics predict some important factors such as patient satisfaction and ratings of therapy quality, they lack the full predictive power of lower-level features. For some factors, unsupervised methods produce models comparable to manual annotation.
doi:10.4137/BII.S11661
PMCID: PMC3740209  PMID: 23943658
topic modelling; LDA; doctor-patient communication
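The idea of using topic proportions as higher-level predictors can be illustrated with a short sketch: fit LDA to session text, then regress an outcome on each document's topic mixture. scikit-learn is assumed; the toy transcripts and satisfaction ratings are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression

transcripts = [
    "we discussed the new medication and its side effects",
    "talked about family visits and feeling supported at home",
    "medication dose was adjusted after sleep problems",
    "patient described arguments with family and stress at home",
]
satisfaction = [3.0, 4.5, 3.5, 2.5]   # hypothetical patient ratings

counts = CountVectorizer(stop_words="english").fit_transform(transcripts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_props = lda.fit_transform(counts)        # per-document topic proportions

model = LinearRegression().fit(topic_props, satisfaction)
print(model.predict(topic_props[:1]))
```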
4.  Towards Converting Clinical Phrases into SNOMED CT Expressions 
Biomedical Informatics Insights  2013;6(Suppl 1):29-37.
Converting information contained in natural language clinical text into computer-amenable structured representations can automate many clinical applications. As a step towards that goal, we present a method that could help convert novel clinical phrases into new expressions in SNOMED CT, a standard clinical terminology. Since expressions in SNOMED CT are written in terms of their relations with other SNOMED CT concepts, we formulate the important task of identifying relations between clinical phrases and SNOMED CT concepts. We present a machine learning approach for this task and, using the dataset of existing SNOMED CT relations, show that it performs well.
doi:10.4137/BII.S11645
PMCID: PMC3702194  PMID: 23847425
SNOMED CT; clinical phrases; relation identification; natural language processing
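One much-simplified way to frame the relation-identification task above is as text-pair classification: given a clinical phrase and a candidate SNOMED CT concept, predict the relation type. The sketch below assumes scikit-learn; the phrase/concept pairs and relation labels are toy examples, not the paper's dataset or model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pairs = [
    ("fracture of left femur", "femur"),
    ("fracture of left femur", "fracture"),
    ("bacterial pneumonia", "lung structure"),
    ("bacterial pneumonia", "bacteria"),
]
relations = ["finding site", "associated morphology",
             "finding site", "causative agent"]

# Represent each pair as a single string; a richer model would keep the
# phrase and the concept as separate feature groups.
texts = [f"{phrase} [SEP] {concept}" for phrase, concept in pairs]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=200))
clf.fit(texts, relations)
print(clf.predict(["fracture of wrist [SEP] wrist"]))
```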
5.  Using Empirically Constructed Lexical Resources for Named Entity Recognition 
Biomedical Informatics Insights  2013;6(Suppl 1):17-27.
Because of privacy concerns and the expense involved in creating an annotated corpus, existing small annotated corpora might not contain sufficient examples for learning to statistically extract all named entities precisely. In this work, we evaluate the value of automatically generated features based on distributional semantics in machine-learning named entity recognition (NER). The features we generated and experimented with include n-nearest words, support vector machine (SVM) regions, and term clustering, all of which are considered distributional semantic features. Adding the n-nearest words feature produced a greater increase in F-score over a baseline system than adding a manually constructed lexicon did. Although the need for relatively small annotated corpora for retraining is not obviated, lexicons empirically derived from unannotated text can not only supplement manually created lexicons but also replace them. This phenomenon is observed in extracting concepts from both biomedical literature and clinical notes.
doi:10.4137/BII.S11664
PMCID: PMC3702195  PMID: 23847424
natural language processing; distributional semantics; concept extraction; named entity recognition; empirical lexical resources
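Of the distributional semantic features listed above, the n-nearest-words feature is the easiest to sketch: learn word vectors from unannotated text and hand each token's nearest neighbours to the NER learner as extra attributes. The sketch assumes gensim (version 4 or later); the tiny unannotated corpus is illustrative only.

```python
from gensim.models import Word2Vec

unannotated = [
    ["patient", "reports", "severe", "chest", "pain"],
    ["patient", "denies", "abdominal", "pain", "or", "nausea"],
    ["chest", "x-ray", "shows", "no", "acute", "findings"],
]
w2v = Word2Vec(unannotated, vector_size=25, window=3, min_count=1, seed=0)

def nearest_word_features(token, n=3):
    """Return the token's n nearest neighbours as feature strings."""
    if token not in w2v.wv:
        return {}
    return {f"near_{i}": w for i, (w, _) in
            enumerate(w2v.wv.most_similar(token, topn=n))}

# These dicts would be merged into the per-token feature set of an NER model.
print(nearest_word_features("pain"))
```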
6.  Computational Semantics in Clinical Text 
Biomedical Informatics Insights  2013;6(Suppl 1):3-5.
doi:10.4137/BII.S11847
PMCID: PMC3702196  PMID: 23847422
7.  Analysis of Cross-Institutional Medication Description Patterns in Clinical Narratives 
Biomedical Informatics Insights  2013;6(Suppl 1):7-16.
A large amount of medication information resides in the unstructured text found in electronic medical records, which requires advanced techniques to be properly mined. In clinical notes, medication information follows certain semantic patterns (eg, medication, dosage, frequency, and mode). Some medication descriptions contain additional word(s) between medication attributes. Therefore, it is essential to understand the semantic patterns as well as the patterns of the context interspersed among them (ie, context patterns) to effectively extract comprehensive medication information. In this paper we examined both semantic and context patterns, and compared those found in Mayo Clinic and i2b2 challenge data. We found that some variations exist between the institutions but the dominant patterns are common.
doi:10.4137/BII.S11634
PMCID: PMC3702197  PMID: 23847423
medication extraction; electronic medical record; natural language processing
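A minimal illustration of the kind of semantic-plus-context pattern discussed above (medication, dosage, mode, frequency, with a few interspersed context words) using a single regular expression. The pattern and the example sentence are invented for illustration and are far simpler than the patterns analyzed in the paper.

```python
import re

PATTERN = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+"
    r"(?P<dose>\d+(?:\.\d+)?\s*(?:mg|mcg|g|units))"
    r"(?:\s+\w+){0,3}?\s+"                             # up to 3 intervening context words
    r"(?P<mode>orally|by mouth|po|iv|subcutaneously)\s+"
    r"(?P<freq>twice daily|daily|bid|tid|qid)",
    re.IGNORECASE,
)

note = "Continue lisinopril 10 mg by mouth twice daily for blood pressure."
m = PATTERN.search(note)
if m:
    print(m.groupdict())   # {'drug': 'lisinopril', 'dose': '10 mg', 'mode': 'by mouth', 'freq': 'twice daily'}
```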
8.  Computational Semantics in Clinical Text Supplement 
Biomedical Informatics Insights  2013;6(Suppl 1):1-2.
doi:10.4137/BII.S11868
PMCID: PMC3702198  PMID: 23847421
9.  Using n-Grams for Syndromic Surveillance in a Turkish Emergency Department Without English Translation: A Feasibility Study 
Introduction
Syndromic surveillance is designed for the early detection of disease outbreaks. An important data source for syndromic surveillance is free-text chief complaints (CCs), which are generally recorded in the local language. For automated syndromic surveillance, CCs must be classified into predefined syndromic categories. An n-gram classifier is created by using text fragments to measure associations between CCs and a syndromic grouping of ICD codes.
Objectives
The objective was to create a Turkish n-gram CC classifier for the respiratory syndrome and then compare daily volumes between the n-gram CC classifier and a respiratory ICD-10 code grouping on a test set of data.
Methods
The design was a feasibility study based on retrospective cohort data. The setting was a university hospital emergency department (ED) in Turkey. Included were all ED visits in the 2002 database of this hospital. Two of the authors created by consensus a respiratory grouping of International Classification of Diseases, 10th Revision (ICD-10-CM) codes, chosen to be similar to a standard respiratory (RESP) grouping of ICD codes created by the Electronic Surveillance System for Early Notification of Community-based Epidemics (ESSENCE), a project of the Centers for Disease Control and Prevention. An n-gram method adapted from AT&T Labs’ technologies was applied to the first 10 months of data as a training set to create a Turkish CC RESP classifier. The classifier was then tested on the subsequent 2 months of visits to generate a time series graph and to determine the correlation between daily volumes measured by the CC classifier and by the RESP ICD-10 grouping.
Results
The Turkish ED database contained 30,157 visits. The correlation (R²) of n-gram versus ICD-10 for the test set was 0.78.
Conclusion
The n-gram method automatically created a CC RESP classifier of the Turkish CCs that performed similarly to the ICD-10 RESP grouping. The n-gram technique has the advantage of systematic, consistent, and rapid deployment as well as language independence.
doi:10.4137/BII.S11334
PMCID: PMC3653813  PMID: 23700370
disease outbreaks; epidemiology; public health; surveillance; n-gram
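A rough sketch of an n-gram chief-complaint classifier in the spirit of the study above: character n-gram fragments as language-independent features, with membership of the visit's ICD code in a respiratory grouping as the training label. scikit-learn is assumed; the Turkish chief complaints and ICD codes are invented examples, not records from the study, and the AT&T n-gram method itself is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

RESP_ICD_PREFIXES = ("J",)   # toy stand-in for the consensus RESP grouping

visits = [
    ("oksuruk ve nefes darligi", "J20"),   # cough and shortness of breath
    ("ates ve bogaz agrisi", "J02"),       # fever and sore throat
    ("karin agrisi", "K59"),               # abdominal pain
    ("bas agrisi ve bulanti", "R51"),      # headache and nausea
]
texts = [cc for cc, _ in visits]
labels = [icd.startswith(RESP_ICD_PREFIXES) for _, icd in visits]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # language-independent text fragments
    LogisticRegression(max_iter=200),
)
clf.fit(texts, labels)
print(clf.predict(["nefes darligi ve oksuruk"]))   # likely [ True] for this toy setup
```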
10.  Recognizing Scientific Artifacts in Biomedical Literature 
Today’s search engines and digital libraries offer little or no support for discovering the scientific artifacts (hypotheses, supporting/contradicting statements, or findings) that form the core of scientific written communication. Consequently, we currently have no means of identifying central themes within a domain or of detecting gaps between accepted knowledge and newly emerging knowledge as a way of tracking the evolution of hypotheses from incipient phases to maturity or decline. We present a hybrid machine learning approach using an ensemble of four classifiers for recognizing scientific artifacts (ie, hypotheses, background, motivation, objectives, and findings) within biomedical research publications, as a precursory step to the general goal of automatically creating argumentative discourse networks that span multiple publications. The performance achieved by the classifiers ranges from 15.30% to 78.39%, depending on the target class. The set of features used for classification has led to promising results. Furthermore, using these features strictly in a local, publication-level scope, ie, without aggregating corpus-wide statistics, increases the versatility of the ensemble of classifiers and enables its direct applicability without the need for re-training.
doi:10.4137/BII.S11572
PMCID: PMC3623603  PMID: 23645987
scientific artifacts; conceptualization zones; information extraction
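The ensemble idea can be sketched as a voting classifier over sentence text. scikit-learn is assumed, and the four base learners below are illustrative choices rather than the exact classifiers used in the paper; the sentences and zone labels are invented.

```python
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

sentences = [
    "we hypothesize that gene X regulates inflammation",
    "previous studies have described the role of gene X",
    "our aim is to characterise gene X expression in vivo",
    "gene X expression was significantly elevated in cases",
]
zones = ["hypothesis", "background", "objective", "finding"]

ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier([                       # four base learners voting on the zone label
        ("nb", MultinomialNB()),
        ("lr", LogisticRegression(max_iter=200)),
        ("svm", LinearSVC()),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ], voting="hard"),
)
ensemble.fit(sentences, zones)
print(ensemble.predict(["we hypothesize that gene Y drives fibrosis"]))
```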
11.  Decomposing Phenotype Descriptions for the Human Skeletal Phenome 
Over the course of the last few years, a significant amount of research has been performed on ontology-based formalization of phenotype descriptions. The intrinsic value and knowledge captured within such descriptions can only be expressed by taking advantage of their inner structure, which implicitly combines qualities and anatomical entities. We present a meta-model (the Phenotype Fragment Ontology) and a processing pipeline that together enable the automatic decomposition and conceptualization of phenotype descriptions for the human skeletal phenome. We use this approach to showcase the usefulness of the generic concept of phenotype decomposition by performing an experimental study on all skeletal phenotype concepts defined in the Human Phenotype Ontology.
doi:10.4137/BII.S10729
PMCID: PMC3572876  PMID: 23440304
human skeletal phenome; phenotype decomposition; phenotype segmentation; ontologies
12.  What’s In a Note: Construction of a Suicide Note Corpus 
This paper reports on the results of an initiative to create and annotate a corpus of suicide notes that can be used for machine learning. Ultimately, the corpus included 1,278 notes that were written by someone who died by suicide. Each note was reviewed by at least three annotators who mapped words or sentences to a schema of emotions. This corpus has already been used for extensive scientific research.
doi:10.4137/BII.S10213
PMCID: PMC3500150  PMID: 23170067
natural language processing; computational linguistics; corpus; suicide
13.  Sentiment Analysis of Suicide Notes: A Shared Task 
Biomedical Informatics Insights  2012;5(Suppl 1):3-16.
This paper reports on a shared task involving the assignment of emotions to suicide notes. Two features distinguished this task from previous shared tasks in the biomedical domain. One is that it resulted in a corpus of fully anonymized clinical text in the form of annotated suicide notes. This resource is permanently available and will (we hope) facilitate future research. The other key feature of the task is that it required categorization with respect to a large set of labels. The number of participants was larger than in any previous biomedical challenge task. We describe the data production process and the evaluation measures, and give a preliminary analysis of the results. Many systems performed at levels approaching inter-coder agreement, suggesting that human-like performance on this task is within the reach of currently available technologies.
PMCID: PMC3299408  PMID: 22419877
Sentiment analysis; suicide; suicide notes; natural language processing; computational linguistics; shared task; challenge 2011
14.  Comparison of LDA and SPRT on Clinical Dataset Classifications 
In this work, we investigate the well-known classification algorithm LDA as well as its close relative, SPRT. SPRT affords several theoretical advantages over LDA: it allows specification of desired classification error rates α and β and is expected to be faster in predicting the class label of a new instance. However, SPRT is not as widely used as LDA in the pattern recognition and machine learning community. For this reason, we investigate LDA, SPRT, and a modified SPRT (MSPRT) empirically using clinical datasets from Parkinson’s disease, colon cancer, and breast cancer. We adopt the same normality assumption as LDA and propose variants of the two SPRT algorithms based on the order in which the components of an instance are sampled. Leave-one-out cross-validation is used to assess and compare the performance of the methods. The results indicate that two variants, SPRT-ordered and MSPRT-ordered, are superior to LDA in terms of prediction accuracy. Moreover, on average SPRT-ordered and MSPRT-ordered examine fewer components than LDA before arriving at a decision. These advantages imply that SPRT-ordered and MSPRT-ordered are preferable to LDA when the normality assumption can be justified for a dataset.
doi:10.4137/BII.S6935
PMCID: PMC3178328  PMID: 21949476
clinical data classification; linear discriminant analysis; sequential probability ratio test; supervised learning
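A minimal sketch of Wald's sequential probability ratio test applied to the components of an instance under per-class univariate normal models, mirroring the normality assumption mentioned above. The stopping thresholds follow the classic (1 − β)/α and β/(1 − α) formulation; the toy means, variances, and instance are invented, and this is not the paper's MSPRT variant.

```python
import math

def sprt_classify(x, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Return (label, n_components_examined) for a single instance x."""
    upper = math.log((1 - beta) / alpha)    # cross this: accept class 1
    lower = math.log(beta / (1 - alpha))    # cross this: accept class 0
    llr = 0.0
    for i, xi in enumerate(x, start=1):
        # Log-likelihood ratio of one component: N(mu1, sigma) vs N(mu0, sigma).
        llr += ((xi - mu0[i - 1]) ** 2 - (xi - mu1[i - 1]) ** 2) / (2 * sigma[i - 1] ** 2)
        if llr >= upper:
            return 1, i
        if llr <= lower:
            return 0, i
    return (1 if llr > 0 else 0), len(x)    # undecided: fall back to the sign of the LLR

# Toy 4-dimensional instance with class-conditional means and shared stds.
x = [2.1, 1.9, 2.3, 0.5]
mu0, mu1 = [0.0] * 4, [2.0] * 4
sigma = [1.0] * 4
print(sprt_classify(x, mu0, mu1, sigma))    # -> (1, 2): decided after examining 2 components
```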
15.  Suicide Note Classification Using Natural Language Processing: A Content Analysis 
Biomedical Informatics Insights  2010;2010(3):19-28.
Suicide is the second leading cause of death among 25–34 year olds and the third leading cause of death among 15–25 year olds in the United States. In the Emergency Department, where suicidal patients often present, estimating the risk of repeated attempts is generally left to clinical judgment. This paper presents our second attempt to determine the role of computational algorithms in understanding a suicidal patient’s thoughts, as represented by suicide notes. We focus on developing methods of natural language processing that distinguish between genuine and elicited suicide notes. We hypothesize that machine learning algorithms can categorize suicide notes as well as mental health professionals and psychiatric physician trainees do. The data comprise suicide notes from 33 suicide completers, matched to 33 elicited notes from healthy control group members. Eleven mental health professionals and 31 psychiatric trainees were asked to decide whether a note was genuine or elicited, and their decisions were compared with those of nine different machine-learning algorithms. The results indicate that trainees accurately classified notes 49% of the time, mental health professionals 63% of the time, and the best machine learning algorithm 78% of the time. This is an important step in developing an evidence-based predictor of repeated suicide attempts because it shows that natural language processing can aid in distinguishing between classes of suicide notes.
PMCID: PMC3107011  PMID: 21643548
suicide; suicide prediction; suicide notes; machine learning
16.  A Method to Detect Differential Gene expression in Cross-Species Hybridization Experiments at Gene and Probe Level 
Biomedical Informatics Insights  2010;2010(3):1-10.
Motivation
Whole genome microarrays are increasingly becoming the method of choice for studying responses in model organisms to disease, stressors, or other stimuli. However, whole genome sequences are available for only some model organisms, and there are still many species whose genome sequences are not yet available. Cross-species studies, where arrays developed for one species are used to study gene expression in a closely related species, have been used to address this gap, with some promising results. Current analytical methods have included filtering out probes or genes that show low hybridization activity, but consensus filtration schemes are still not available.
Results
A novel masking procedure is proposed, based on currently available target-species sequences, to filter out probes; a cross-species data set is then studied using this masking procedure together with gene-set analysis. Gene-set analysis evaluates the association of a priori defined gene groups with a phenotype of interest. Two methods, Gene Set Enrichment Analysis (GSEA) and Test of Test Statistics (ToTS), were investigated. The results showed that the masking procedure together with the ToTS method worked well on our data set. Results from an alternative way to study cross-species hybridization experiments, without masking, are also presented. We hypothesize that the multi-probe structure of Affymetrix microarrays makes it possible to aggregate the effects of both well-hybridized and poorly hybridized probes to study a group of genes. The principles of gene-set analysis were applied to the probe-level data instead of the gene-level data. The results showed that ToTS can give valuable information and thus can be used as a powerful technique for analyzing cross-species hybridization experiments.
Availability
Software in the form of R code is available at http://anson.ucdavis.edu/~ychen/cross-species.html
PMCID: PMC2928260  PMID: 20798791
gene expression; cross-species; probes; genes; hybridization
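Conceptually, the masking procedure amounts to dropping probes whose sequences are not supported by available target-species sequence before any gene-level or gene-set analysis. Below is a minimal sketch assuming pandas; the probe table, including the matches_target flag (which in practice would come from a sequence-alignment step), is invented for illustration.

```python
import pandas as pd

probes = pd.DataFrame({
    "probe_id":       ["p1", "p2", "p3", "p4", "p5"],
    "gene":           ["G1", "G1", "G1", "G2", "G2"],
    "intensity":      [8.2, 3.1, 7.9, 5.5, 5.8],
    "matches_target": [True, False, True, True, True],   # from sequence alignment (assumed)
})

masked = probes[probes["matches_target"]]                # the masking step: drop unsupported probes
gene_level = masked.groupby("gene")["intensity"].mean()  # simple per-gene summary of the kept probes
print(gene_level)
```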
17.  Current Challenge in Consumer Health Informatics: Bridging the Gap between Access to Information and Information Understanding 
The number of health-related websites has proliferated over the past few years. Health information consumers confront a myriad of health-related resources on the internet that have varying levels of quality and are not always easy to comprehend. There is thus a need to help health information consumers bridge the gap between access to information and information understanding, that is, to help consumers understand health-related web-based resources so that they can act upon them. At the same time, health information consumers are becoming not only more involved in their own health care but also more information-technology minded. One way to address this issue is to provide consumers with tailored information that is contextualized and personalized, that is, directly relevant and easily comprehensible to the person's own health situation. This paper presents a current trend in Consumer Health Informatics which focuses on theory-based design and development of contextualized and personalized tools that allow the evolving consumer, with varying backgrounds and interests, to use online health information efficiently. The proposed approach uses a theoretical framework of communication in order to support the consumer's capacity to understand health-related web-based resources.
PMCID: PMC2858407  PMID: 20419038
consumer health information; internet; contextualization of information; personalization
18.  Avoiding Pitfalls in the Statistical Analysis of Heterogeneous Tumors 
Information about tumors is usually obtained from a single assessment of a tumor sample, performed at some point in the course of the development and progression of the tumor, with patient characteristics being surrogates for natural history context. Differences between cells within individual tumors (intratumor heterogeneity) and between tumors of different patients (intertumor heterogeneity) may mean that a small sample is not representative of the tumor as a whole, particularly for solid tumors which are the focus of this paper. This issue is of increasing importance as high-throughput technologies generate large multi-feature data sets in the areas of genomics, proteomics, and image analysis. Three potential pitfalls in statistical analysis are discussed (sampling, cut-points, and validation) and suggestions are made about how to avoid these pitfalls.
PMCID: PMC2828739  PMID: 20191105
cancer; statistics; biomarkers; prognosis; heterogeneity
19.  Personalizing Drug Selection Using Advanced Clinical Decision Support 
This article describes the process of developing an advanced pharmacogenetics clinical decision support at one of the United States’ leading pediatric academic medical centers. This system, called CHRISTINE, combines clinical and genetic data to identify the optimal drug therapy when treating patients with epilepsy or Attention Deficit Hyperactivity Disorder. In the discussion a description of clinical decision support systems is provided, along with an overview of neurocognitive computing and how it is applied in this setting.
PMCID: PMC2773552  PMID: 19898682
20.  [No title available] 
PMCID: PMC3299408  PMID: 22419877
21.  [No title available] 
PMCID: PMC3409250  PMID: 22879756
22.  [No title available] 
PMCID: PMC3409473  PMID: 22879763
23.  [No title available] 
PMCID: PMC3409474  PMID: 22879774
24.  [No title available] 
PMCID: PMC3409475  PMID: 22879775
25.  [No title available] 
PMCID: PMC3409476  PMID: 22879776
