1.  Quantification of Normal Cell Fraction and Copy Number Neutral LOH in Clinical Lung Cancer Samples Using SNP Array Data 
PLoS ONE  2009;4(6):e6057.
Background
Technologies based on DNA microarrays have the potential to provide detailed information on genomic aberrations in tumor cells. In practice, a major obstacle to quantitative detection of aberrations is the heterogeneity of clinical tumor tissue. Because tumor tissue invariably contains genetically normal stromal cells, aberrations present in the tumor cells may go undetected.
Principal Findings
Using SNP array data from 44 non-small cell lung cancer samples we have developed a bioinformatic algorithm that accurately models the fractions of normal and tumor cells in clinical tumor samples. The proportion of normal cells in combination with SNP array data can be used to detect and quantify copy number neutral loss-of-heterozygosity (CNNLOH) in the tumor cells both in crude tumor tissue and in samples enriched for tumor cells by laser capture microdissection.
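The link between normal-cell admixture and CNNLOH can be made concrete. In a CNNLOH region the tumor carries two copies of one allele and none of the other, so a germline-heterozygous SNP shows a B-allele frequency (BAF) shifted from 0.5 toward 0 or 1 by an amount set by the normal-cell fraction. A minimal sketch of that relationship (an illustrative helper, not the published CNNLOH Quantifier implementation):

```python
def normal_fraction_from_cnnloh_baf(baf_values):
    """Estimate the normal-cell fraction from BAFs of germline-heterozygous
    SNPs inside a suspected CNNLOH region (hypothetical helper; assumes
    the region is pure CNNLOH and BAFs are noise-free on average).

    With normal fraction f_n, the lost allele contributes f_n copies out
    of a constant total of 2 per cell, so its observed BAF is f_n / 2.
    """
    # SNPs where the B allele was the one lost in the tumor cluster below 0.5
    lower_band = [b for b in baf_values if b < 0.5]
    mean_baf = sum(lower_band) / len(lower_band)
    return 2.0 * mean_baf  # invert BAF = f_n / 2
```

For example, heterozygous SNPs clustering near BAF 0.15 and 0.85 would imply roughly 30% normal cells under these assumptions.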
Conclusion
Genome-wide quantitative analysis of CNNLOH using the CNNLOH Quantifier method can help to identify recurrent aberrations contributing to tumor development in clinical tumor samples. In addition, SNP-array based analysis of CNNLOH may become important for detection of aberrations that can be used for diagnostic and prognostic purposes.
doi:10.1371/journal.pone.0006057
PMCID: PMC2699026  PMID: 19557126
2.  Segmentation-based detection of allelic imbalance and loss-of-heterozygosity in cancer cells using whole genome SNP arrays 
Genome Biology  2008;9(9):R136.
We present a strategy for detection of loss-of-heterozygosity and allelic imbalance in cancer cells from whole genome single nucleotide polymorphism genotyping data. Using a dilution series of a tumor cell line mixed with its paired normal cell line, together with data generated on Affymetrix and Illumina platforms, including paired tumor-normal samples and tumors characterized by fluorescent in situ hybridization, we demonstrate high sensitivity and specificity of the strategy for detecting both minute and gross allelic imbalances in heterogeneous tumor samples.
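The core signal behind this kind of analysis is the mirrored BAF, |BAF - 0.5|, of heterozygous SNPs: it sits near zero in balanced regions and rises in regions of allelic imbalance. A minimal sketch of flagging imbalanced runs by simple thresholding (illustrative only; the paper's method uses proper segmentation of the genotyping data, not this ad hoc rule):

```python
def allelic_imbalance_segments(mirrored_baf, threshold=0.1, min_len=5):
    """Return (start, end) index pairs of runs of heterozygous SNPs whose
    mirrored BAF |BAF - 0.5| stays above `threshold` for at least
    `min_len` consecutive SNPs. Hypothetical thresholding sketch, not
    the published segmentation algorithm."""
    segments, start = [], None
    for i, m in enumerate(mirrored_baf):
        if m > threshold:
            if start is None:
                start = i  # open a candidate imbalanced run
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(mirrored_baf) - start >= min_len:
        segments.append((start, len(mirrored_baf)))  # run reaches the end
    return segments
```

In practice a segmentation method also has to handle noise and normal-cell dilution, which is exactly why sensitivity in heterogeneous samples is the benchmark here.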
doi:10.1186/gb-2008-9-9-r136
PMCID: PMC2592714  PMID: 18796136
3.  Improved variance estimation of classification performance via reduction of bias caused by small sample size 
BMC Bioinformatics  2006;7:127.
Background
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, methods for small sample performance estimation, such as the recently proposed Repeated Random Sampling (RRS) procedure, are also expected to yield heavily biased estimates, which in turn translate into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT).
Results
Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set.
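The source of the bias can be sketched directly: an error rate estimated on n test samples carries binomial sampling noise with variance roughly e(1-e)/n, and this noise inflates the empirical variance observed across design sets. A minimal, simplified correction in that spirit (an illustrative sketch, not the RIDT algorithm itself):

```python
def bias_reduced_design_variance(error_rates, n_test):
    """Subtract the expected binomial test-set noise e(1-e)/n_test from
    the empirical variance of error rates across design sets.
    Simplified illustration of the bias-modeling idea; the paper's RIDT
    procedure is more refined."""
    k = len(error_rates)
    mean_e = sum(error_rates) / k
    # sample variance across design sets (inflated by test-set noise)
    raw_var = sum((e - mean_e) ** 2 for e in error_rates) / (k - 1)
    # expected contribution of finite test-set sampling noise
    test_noise = mean_e * (1.0 - mean_e) / n_test
    return raw_var - test_noise
```

With only a few test samples the noise term dominates, which is why the raw between-design variance is so heavily biased upward for small test sets.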
Conclusion
We show that, via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be applied directly to small data sets.
doi:10.1186/1471-2105-7-127
PMCID: PMC1435937  PMID: 16533392