1.  Selective voting in convex-hull ensembles improves classification accuracy 
Objective
Classification algorithms can be used to predict risks and responses of patients based on genomic and other high-dimensional data. While there is optimism for using these algorithms to improve the treatment of diseases, they have yet to demonstrate sufficient predictive ability for routine clinical practice. They generally classify all patients according to the same criteria, under an implicit assumption of population homogeneity. The objective here is to allow for population heterogeneity, possibly unrecognized, in order to increase classification accuracy and further the goal of tailoring therapies on an individualized basis.
Methods and materials
A new selective-voting algorithm is developed in the context of a classifier ensemble of two-dimensional convex hulls of positive and negative training samples. Individual classifiers in the ensemble are allowed to vote on test samples only if those samples are located within or behind pruned convex hulls of training samples that define the classifiers.
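A minimal sketch of this kind of procedure, assuming toy data and a simplified voting rule (a member votes only when a test sample falls inside the convex hull of one training class; the hull pruning and "behind the hull" refinements of the published algorithm are omitted):

```python
import numpy as np
from scipy.spatial import ConvexHull
from matplotlib.path import Path

rng = np.random.default_rng(0)

def build_member(X, y, gene_pair):
    # Convex hull of each training class in one randomly chosen 2D gene plane
    hulls = {}
    for label in (0, 1):
        pts = X[y == label][:, gene_pair]
        hull = ConvexHull(pts)
        hulls[label] = Path(pts[hull.vertices])  # polygon used for point-in-hull tests
    return gene_pair, hulls

def ensemble_predict(members, X_test):
    votes = np.zeros((len(X_test), 2))
    for gene_pair, hulls in members:
        pts = X_test[:, gene_pair]
        for label, path in hulls.items():
            votes[path.contains_points(pts), label] += 1  # member votes only inside a hull
    return (votes[:, 1] > votes[:, 0]).astype(int)        # abstentions default to class 0

# toy data: 100 samples x 500 "genes", ensemble of 50 random gene pairs
X = rng.normal(size=(100, 500))
y = rng.integers(0, 2, size=100)
members = [build_member(X, y, rng.choice(500, size=2, replace=False)) for _ in range(50)]
print(ensemble_predict(members, X[:5]))
```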
Results
Validation of the new algorithm’s increased accuracy is carried out using two publicly available datasets having cancer as the outcome variable and expression levels of thousands of genes as predictors. Selective voting leads to statistically significant increases in accuracy from 86.0% to 89.8% (p < 0.001) and 63.2% to 67.8% (p < 0.003) compared to the original algorithm.
Conclusion
Selective voting by members of convex-hull classifier ensembles significantly increases classification accuracy compared to one-size-fits-all approaches.
doi:10.1016/j.artmed.2011.10.003
PMCID: PMC3666100  PMID: 22064044
Cross-validation; Genomic prediction; Cancer screening; Individualized therapy
2.  Statistical Analysis of Survival Data From Radiation Countermeasure Experiments 
Radiation Research  2012;177(5):546-554.
We present an introduction to, and examples of, Cox proportional hazards regression in the context of animal lethality studies of potential radioprotective agents. This established method is seldom used to analyze survival data collected in such studies, but is appropriate in many instances. Presenting a hypothetical radiation study that examines the efficacy of a potential radioprotectant both in the absence and presence of a potential modifier, we detail how to implement and interpret results from a Cox proportional hazards regression analysis used to analyze the survival data, and we provide relevant SAS® code. Cox proportional hazards regression analysis of survival data from lethal radiation experiments (1) considers the whole distribution of survival times rather than simply the commonly used proportions of animals that survived, (2) provides a unified analysis when multiple factors are present, and (3) can increase statistical power by combining information across different levels of a factor. Cox proportional hazards regression should be considered as a potential statistical method in the toolbox of radiation researchers.
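The paper provides SAS® code; purely for illustration, an analogous Cox proportional hazards analysis of a hypothetical radioprotectant study with a potential modifier can be sketched with the Python lifelines package:

```python
import pandas as pd
from lifelines import CoxPHFitter

# hypothetical data: survival time (days), death indicator, and two binary factors
df = pd.DataFrame({
    "time":     [12, 30, 30, 8, 25, 30, 14, 30, 9, 30],
    "death":    [1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
    "drug":     [0, 1, 1, 0, 0, 1, 0, 1, 0, 1],   # radioprotectant given?
    "modifier": [0, 0, 1, 1, 0, 1, 1, 0, 0, 1],   # potential modifier present?
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death",
        formula="drug * modifier")   # main effects plus interaction
cph.print_summary()                  # hazard ratios, confidence intervals, p-values
```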
PMCID: PMC3387733  PMID: 22401302
3.  Risk-Assessment Implications of Mechanistic Model's Prediction of Low-Dose Nonlinearity of Liver Tumor Risk for Mice Fed Fumonisin B1 
A two-stage, clonal-expansion model of liver tumor risk in mice was developed by Kodell et al. (Food Addit Contam 18:237–253, 2001) based on the hypothesis that fumonisin B1, a naturally occurring mycotoxin in corn, is not genotoxic, but rather causes cancer through the disruption of sphingolipid metabolism. This disruption is assumed to cause an increase in apoptosis, in response to which cells proliferate to compensate for reduced tissue mass. The resulting differential increase in the number of pre-neoplastic cells at risk of mutation during cell division is assumed to lead to an increase in the incidence of tumors. Two-year liver tumor incidences predicted by the model using data on organ weight, cell proliferation, and sphingolipid metabolism provided a reasonable match to the actual 2-year observed incidences in a study conducted at the National Center for Toxicological Research. The predictions indicated no risk at low doses (even a possible hormetic effect) and high risk at high doses in females, as well as a complete absence of a dose response (or perhaps, a hormetic effect) in males. This paper provides a commentary on the risk-assessment implications of the modeling results, pointing out that the model’s low-dose predictions provide scientific support and justification for the U.S. Food and Drug Administration’s low-ppm guidance levels in corn products. These guidance levels are significantly higher than would be obtained using linear extrapolation, the method most often used for genotoxic carcinogens and other carcinogens for which low-dose linearity cannot be ruled out.
doi:10.1080/15401420490426981
PMCID: PMC2647820  PMID: 19330107
apoptosis; FDA guidance; hormesis; mycotoxin; nongenotoxic
4.  Estimating misclassification error: a closer look at cross-validation based methods 
BMC Research Notes  2012;5:656.
Background
To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error.
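For orientation, a generic sketch of the two estimators using scikit-learn on simulated low-dimensional data; this is not the authors' exact BCV procedure, only a simple bootstrap-then-CV variant:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

X, y = make_classification(n_samples=80, n_features=5, random_state=0)
clf = LogisticRegression(max_iter=1000)

# standard 10-fold CV error on the original training set
kfold_err = 1 - cross_val_score(clf, X, y, cv=10).mean()

# bootstrap-then-CV: repeat the CV exercise on bootstrap resamples of the training set
bcv_errs = []
for b in range(50):
    Xb, yb = resample(X, y, random_state=b)   # n samples drawn with replacement
    bcv_errs.append(1 - cross_val_score(clf, Xb, yb, cv=10).mean())

print(f"10-fold CV error estimate:    {kfold_err:.3f}")
print(f"bootstrap CV estimate (mean): {np.mean(bcv_errs):.3f}")
```

Because bootstrap resampling duplicates observations, the same sample can end up in both the training and validation folds of a resample, which is the mechanism behind the optimistic (negative) bias reported in the Findings.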
Findings
For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias.
Conclusions
We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
doi:10.1186/1756-0500-5-656
PMCID: PMC3556102  PMID: 23190936
Cross-validation; Bootstrap cross-validation; Classification error estimation; Mean squared error
5.  Assessment of performance of survival prediction models for cancer prognosis 
Background
Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments.
Methods
We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models.
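One of the methods mentioned, a randomization test of rank concordance between predicted risk scores and survival times, can be sketched as follows (data and model are hypothetical; Somers' D is taken as 2(c − 0.5) from the concordance index):

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 60
time = rng.exponential(24, n)            # hypothetical survival times (months)
event = rng.integers(0, 2, n)            # 1 = death observed, 0 = censored
risk = -time + rng.normal(0, 10, n)      # hypothetical model risk scores

# higher risk should mean shorter survival, so -risk is passed as the prediction
c_obs = concordance_index(time, -risk, event)

# randomization null: permute the risk scores and recompute the c-index
perm = np.array([concordance_index(time, -rng.permutation(risk), event)
                 for _ in range(1000)])
p_value = np.mean(perm >= c_obs)
print(f"c-index = {c_obs:.3f}, Somers' D = {2 * (c_obs - 0.5):.3f}, p = {p_value:.3f}")
```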
Results
A public breast cancer dataset was used to compare several performance metrics using five prediction models. 1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. 2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. 3) The performance of the binary classifiers depended strongly on how the risk groups were defined; a slight change of the survival threshold used to assign classes led to very different prediction results.
Conclusions
1) Different performance metrics used to evaluate a survival prediction model may lead to different conclusions about its discriminatory ability. 2) Evaluation using a high-risk versus low-risk group comparison depends on the selected risk-score threshold; a plot of p-values from all possible thresholds can show the sensitivity of the threshold selection. 3) A randomization test of the significance of Somers’ rank correlation can be used for further evaluation of performance of a prediction model. 4) The cross-validated power of survival prediction models decreases as the training and test sets become less balanced.
doi:10.1186/1471-2288-12-102
PMCID: PMC3410808  PMID: 22824262
6.  A Selective Voting Convex-Hull Ensemble Procedure for Personalized Medicine 
Genes work in concert as a system, rather than as independent entities, to mediate disease states. There has been considerable interest in understanding variations in molecular signatures between normal and disease states. However, a majority of techniques implicitly assume homogeneity between samples within a given group and use a fixed set of genes in discerning the groups. The proposed study overcomes these caveats by using a selective-voting convex-hull ensemble procedure that accommodates molecular heterogeneity within and between groups. The significance of the study is its potential to selectively retrieve sample-specific ensemble sets and investigate variations in the networks corresponding to the ensemble set across these samples. These characteristics fit well within the scope of personalized medicine and comparative effectiveness research, which emphasize patient-tailored interventions. While the results are demonstrated on colon cancer gene expression profiles, the approach is generic and can be readily extended to other settings.
PMCID: PMC3392048  PMID: 22779058
7.  Determination of Sample Sizes for Demonstrating Efficacy of Radiation Countermeasures 
Biometrics  2009;66(1):239-248.
SUMMARY
In response to the ever-increasing threat of radiological and nuclear terrorism, active development of non-toxic new drugs and other countermeasures to protect against and/or mitigate adverse health effects of radiation is ongoing. Although the classical LD50 study used for many decades as a first step in preclinical toxicity testing of new drugs has been largely replaced by experiments that use fewer animals, the need to evaluate the radioprotective efficacy of new drugs necessitates the conduct of traditional LD50 comparative studies (FDA, 2002). There is, however, no readily available method to determine the number of animals needed for establishing efficacy in these comparative potency studies. This paper presents a sample-size formula based on Student’s t for comparative potency testing. It is motivated by FDA’s requirements for robust efficacy data in the testing of response modifiers in total body irradiation experiments where human studies are not ethical or feasible. Monte Carlo simulation demonstrated the formula’s performance for Student’s t, Wald, and Likelihood Ratio tests in both logistic and probit models. Importantly, the results showed clear potential for justifying the use of substantially fewer animals than are customarily used in these studies. The present paper may thus initiate a dialogue among researchers who use animals for radioprotection survival studies, institutional animal care and use committees, and drug regulatory bodies to reach a consensus on the number of animals needed to achieve statistically robust results for demonstrating efficacy of radioprotective drugs.
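The paper's t-based formula for comparative potency studies is not reproduced here; for context only, a textbook normal-approximation sample-size calculation for a two-group comparison looks like this (all numbers are hypothetical):

```python
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided two-sample comparison."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# e.g. detect a 0.10 shift in log10(LD50) when sigma = 0.15 (hypothetical values)
print(round(n_per_group(delta=0.10, sigma=0.15)))   # approximate animals needed per group
```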
doi:10.1111/j.1541-0420.2009.01236.x
PMCID: PMC3036987  PMID: 19432769
Dose reduction factor; Logit; Power; Probit; Quantal assay; Radiation countermeasures; Radiation protection; Relative potency; Terrorism
8.  Classification methods for the development of genomic signatures from high-dimensional data 
Genome Biology  2006;7(12):R121.
Several classification algorithms for class prediction using high-dimensional biomedical data are presented and applied to data from leukaemia and breast cancer patients
Personalized medicine is defined by the use of genomic signatures of patients to assign effective therapies. We present Classification by Ensembles from Random Partitions (CERP) for class prediction and apply CERP to genomic data on leukemia patients and to genomic data with several clinical variables on breast cancer patients. CERP performs consistently well compared to the other classification algorithms. The predictive accuracy can be improved by adding some relevant clinical/histopathological measurements to the genomic data.
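A hedged sketch of the core CERP idea as stated in the abstract: genes are randomly partitioned into mutually exclusive subsets, one classifier is trained per subset, and the ensemble predicts by majority vote. The number of partitions, subset sizes, and tree settings below are assumptions, not the published configuration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_cerp(X, y, n_partitions=5, n_subsets=20, seed=0):
    rng = np.random.RandomState(seed)
    members = []
    for _ in range(n_partitions):
        genes = rng.permutation(X.shape[1])               # one random partition of the genes
        for subset in np.array_split(genes, n_subsets):   # mutually exclusive gene subsets
            members.append((subset, DecisionTreeClassifier().fit(X[:, subset], y)))
    return members

def predict_cerp(members, X):
    votes = np.mean([tree.predict(X[:, subset]) for subset, tree in members], axis=0)
    return (votes >= 0.5).astype(int)                     # majority vote across all members
```

With a held-out test set, predict_cerp(members, X_test) returns the majority-vote class labels.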
doi:10.1186/gb-2006-7-12-r121
PMCID: PMC1794434  PMID: 17181863
