YC, DD, YD, ZJ, KR, ZW, YZ conceived the study. YC, DD, YD, ZJ, ZW, YZ cleaned the datasets. DD established and implemented the prediction algorithm. YC incorporated the algorithm into the super learner for challenge 1a. YC, DD, YD compiled the code for the final submission. YC, DD, YD, ZJ, KR, ZW, YZ wrote the manuscript.
In this paper, we present our winning method for survival time prediction in the 2015 Prostate Cancer DREAM Challenge, a recent crowdsourced competition focused on risk and survival time predictions for patients with metastatic castration-resistant prostate cancer (mCRPC). We are interested in using a patient's covariates to predict his or her time until death after initiating standard therapy. We propose an iterative algorithm to multiply impute right-censored survival times and use ensemble learning methods to characterize the dependence of these imputed survival times on possibly many covariates. We show that by iterating over imputation and ensemble learning steps, we guide imputation with patient covariates and, subsequently, optimize the accuracy of survival time prediction. This method is generally applicable to time-to-event prediction problems in the presence of right-censoring. We demonstrate the proposed method's performance with training and validation results from the DREAM Challenge and compare its accuracy with existing methods.
Predicting overall survival for cancer patients remains central to studying new treatment options. Given a patient’s covariates and preferences, doctors can anticipate prognosis and likely treatment effects and make clinical recommendations accordingly. For example, docetaxel is a standard treatment for patients with metastatic prostate cancer who have developed resistance to conventional androgen deprivation therapy. Using data from the docetaxel arm of four recent phase III trials of experimental interventions, the 2015 Prostate Cancer DREAM Challenge 1 aims to amass community-based efforts to develop, apply, and validate prognostic models for overall patient survival under this standard treatment.
A frequently encountered problem in survival analysis is data censoring, in which exact survival times are not observed for all patients. The most common type of censoring is right censoring, in which the survival time is only observed up to a certain censoring time; event times are not observed for individuals after censoring occurs. Many state-of-the-art statistical and machine learning tools cannot be directly applied to censored data, while most standard methodologies that do allow for censoring assume independence between censoring and survival time; this assumption is frequently inappropriate.
Among survival analysis methods that accommodate censoring, many approaches focus on maximizing the partial likelihood, which depends only on the order of events rather than the time at which they occur. One of the most widely used methods, the proportional hazards model (also known as the Cox regression model) parameterizes this partial likelihood through a baseline hazard function and a multiplicative scaling term that depends on covariates 2, 3. Other methods in this class often seek different formulations of the hazard function. For instance, proportional hazard models based on artificial neural networks 4, 5 and the gradient boosting proportional hazard model 6 have been developed to model more complex forms of the non-linear hazard function.
Alternate objective functions have also been developed for survival analysis with censored data. Support vector regression techniques can be adapted to survival time prediction by considering censored outcomes as interval targets and forming a new maximum margin loss function directly with log-transformed survival time 7. In random survival forests (RSF) 8, 9, a tree-based ensemble model that relies on bagging, each survival tree split is determined by maximizing the survival difference 10 between child nodes. More recently, a gradient boosting-based model with direct optimization of Harrell’s concordance index has been developed 11, 12.
As an alternative to the above methods that directly accommodate right-censored survival data, multiple imputation 13 methods treat the censored observations as missing data. To overcome the obstacle posed by censoring, these methods randomly generate missing survival outcomes many times in order to permit complete-data inferences. Taylor et al. (2002) 14 propose two nonparametric imputation methods that enable estimation of the survival distribution for right-censored survival data without covariates. One approach, risk set imputation (RSI), replaces an individual’s censored time with a random draw of observed event times among those at risk (beyond the particular censoring time), starting from the smallest and proceeding toward the largest censored time. With an infinite number of imputations, RSI survival point estimates are equivalent to the Kaplan-Meier estimator, where the expectation is taken with respect to the distribution of all possible random imputations. This imputation technique does not use the covariate data which, if modeled jointly with survival times, can improve the accuracy of survival time predictions.
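For concreteness, the RSI step can be sketched as follows (a simplified illustration of the procedure described above, not the authors' code; variable names and the treatment of imputed values as subsequent events are ours):

```python
import numpy as np

def rsi_impute(times, events, seed=None):
    """Risk set imputation sketch: replace each censored time, from
    smallest to largest, with a random draw from the event times of
    subjects still at risk beyond that censoring time."""
    rng = np.random.default_rng(seed)
    times = np.asarray(times, dtype=float).copy()
    events = np.asarray(events, dtype=bool).copy()
    for i in np.argsort(times, kind="stable"):
        if events[i]:
            continue                      # observed event: keep as-is
        at_risk = events & (times > times[i])
        if at_risk.any():
            times[i] = rng.choice(times[at_risk])
            events[i] = True              # imputed times act as events later
        # if no one remains at risk, the censored time is left unchanged
    return times
```

A single call produces one imputed dataset; repeating the call with different seeds yields the multiple imputations over which complete-data analyses are averaged.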
Conditional survival estimates are more informative for individual survival time predictions. Unbiased conditional survival estimation ensures unbiased population-averaged survival curve estimation, while the reverse does not hold. Even given a covariate-specific estimate of the survival distribution at all times t > 0, it remains an open question how to predict an individual's exact survival time T_i. Our method approaches this problem from another perspective by directly modeling survival times.
In this paper, we propose a new method for exact survival time prediction that relies on strategically imputing censored times and, then, building an ensemble prediction model based on the “complete” dataset. In so doing, we are able to exploit the predictive power of many state-of-the-art regression technologies. This imputation algorithm first multiply imputes censored survival times in order to construct a complete dataset without using covariates. Then, the algorithm iterates between 1) predicting the completed survival times using covariates and 2) adjusting the imputed values.
In the following, we first describe the data for training, testing, and validating our proposed survival time prediction model and, then, summarize the statistical methods that we used to construct the ensemble model. We conclude by discussing potential directions for future research and further improvements.
Data from the control arm of four phase III clinical trials of experimental therapies for mCRPC were made available to participants in the Prostate Cancer DREAM Challenge. The trials are ASCENT-2 (conducted by Memorial Sloan Kettering Cancer Center) 15, VENICE (Sanofi) 16, MAINSAIL (Celgene) 17, and ENTHUSE-33 (AstraZeneca) 18. Training data include survival outcomes (time of death or censored survival time) and 131 clinical covariates from the ASCENT-2, MAINSAIL, and VENICE trials. Only covariate data were available for the ENTHUSE-33 trial; survival outcomes were blinded for scoring. Clinical covariates included patient demographics, vital signs, lab results, medical history, medication use, and tumor measurements.
2.1.1 Data consolidation: A primary dataset, referred to here as the “CoreTable”, was provided by the DREAM Challenge organizers and summarized many relevant baseline covariates at the patient level. An additional five raw datasets containing more detailed baseline and follow-up data were also provided. We summarized additional baseline information from these secondary tables to augment the CoreTable. For example, medications were grouped according to drug type or use, including opioid analgesics, anti-depressants, and vitamin supplements. Tumor data were also summarized across disease sites, including the number, average size, and maximum size of lesions. Continuous lab values were log-transformed; non-transformed values were also kept in the data. Covariate data from secondary tables that duplicated or were highly correlated with existing variables in the CoreTable were excluded from the analysis.
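A summarization of this kind can be sketched with hypothetical column names (the actual raw-table schemas in the Challenge data differ):

```python
import pandas as pd

# Hypothetical raw lesion table; column names are illustrative only.
lesions = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "lesion_size": [1.2, 3.4, 2.0],
})

# Summarize tumor burden per patient: lesion count, mean size, max size.
tumor_summary = lesions.groupby("patient_id")["lesion_size"].agg(
    n_lesions="count", mean_size="mean", max_size="max"
).reset_index()
```

The resulting one-row-per-patient summary can then be merged onto the CoreTable by patient identifier.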
The resulting dataset had 2070 observations and 256 covariates, among which 78 covariates were continuous variables and 177 were categorical variables.
2.1.2 Splitting data for 10-fold cross-validation: In order to maintain consistent groupings for cross validation, we evenly split the training data into ten groups by randomly generating a uniform 10-fold index for each observation. As a result, we were able to maintain the same hold-out datasets as we employed different prediction methods. When generating the random 10-fold index, we set a random number generation seed for reproducibility (DOI: 10.7303/syn4732982).
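This splitting scheme amounts to the following sketch (the seed value here is illustrative, not the one archived with the code):

```python
import numpy as np

def make_fold_index(n, n_folds=10, seed=0):
    """Assign each of n observations to one of n_folds groups at random,
    with a fixed seed so the same hold-out sets are reused across methods."""
    rng = np.random.default_rng(seed)
    # near-even fold sizes, randomly permuted across observations
    folds = np.arange(n) % n_folds
    return rng.permutation(folds)
```

Because the fold index is generated once and stored, every candidate prediction method is evaluated on identical hold-out sets.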
2.1.3 Multiply imputing covariates: Missingness was common in the combined dataset. Figure 1 shows the missingness patterns for covariates (columns) within each study (row block). As suggested by the heat map, the missingness is largely study-dependent and likely due to the differences in study protocol and data collection procedure.
The ten continuous covariates with the most missingness are listed in Table 1. Since a considerable proportion of categorical variables were created by categorizing continuous variables (e.g., labeling lab values as low, normal, or high), these categorical variables have a rate of missingness similar to their continuous counterparts. Other categorical covariates with a large proportion of missingness include a categorization of baseline weight and height (77.6% and 77.3% missing, respectively) and an indicator for a history of smoking (77.1% missing).
|Variable name||Percent missing|
|Free prostate specific antigen||78.4|
|Brain natriuretic peptide||78.3|
|Mean corpuscular volume||77.6|
|Mean corpuscular hemoglobin||77.5|
|Urine protein creatinine ratio||76.1|
Missing covariate data for the combined dataset was then imputed using multiple imputation 13 via the fastpmm function in the R package mice (R 3.2.1). Multiple imputation was performed using covariate data from both training (ASCENT-2, VENICE, MAINSAIL) and validation (ENTHUSE-33) studies and was repeated to obtain five completed datasets.
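The idea behind predictive-mean-matching imputation can be sketched as follows (a simplified single-column, single-draw illustration of the general pmm idea, not the mice algorithm, which also draws the regression coefficients and cycles over columns):

```python
import numpy as np

def pmm_impute_column(y, X, k=5, seed=None):
    """One predictive-mean-matching draw for a column y with missing
    values, given complete predictors X (a conceptual sketch only)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float).copy()
    X = np.atleast_2d(np.asarray(X, float))
    if X.shape[0] != len(y):
        X = X.T
    obs = ~np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])
    # least-squares fit on the observed rows
    beta, *_ = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)
    pred = Xd @ beta
    for i in np.where(~obs)[0]:
        # donors: the k observed rows whose fitted values are closest
        donors = np.argsort(np.abs(pred[obs] - pred[i]))[:k]
        y[i] = rng.choice(y[obs][donors])
    return y
```

Drawing from observed donor values (rather than from the regression prediction itself) keeps imputations within the support of the observed data.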
2.1.4 Covariate standardization: We standardized continuous data by applying the Box-Cox transformation (with power parameter 0.2) to all continuous covariates, followed by mean-variance standardization.
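This two-step transformation can be written compactly (a sketch; the fixed power parameter 0.2 follows the text, and strictly positive inputs are assumed):

```python
import numpy as np

def boxcox_standardize(x, lam=0.2):
    """Box-Cox transform with a fixed power parameter, followed by
    mean-variance standardization (x must be strictly positive)."""
    x = np.asarray(x, float)
    z = np.log(x) if lam == 0 else (x ** lam - 1.0) / lam
    return (z - z.mean()) / z.std()
```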
2.1.5 Survival summaries: Figure [X of the main paper] shows the Kaplan-Meier estimates of survival curves along with the 95% confidence band for each of the three studies in the DREAM Challenge training data. The three studies have similar survival curves up to 17 months from baseline.
In this section, we describe an iterative imputation procedure that can be used in tandem with ensemble learning methods to predict survival times given possibly many covariates. This method constitutes our winning algorithm for the Prostate Cancer DREAM Challenge’s sub-challenge 1b for predicting exact survival times. Throughout our presentation, we use integrated area under the curve (iAUC) to evaluate predictive accuracy and select optimal values for tuning parameters 19.
Let (Y_i, Δ_i) be the pair of the observed or censored survival time and the censoring indicator for patient i = 1, …, N, where Δ_i = 1 if Y_i is the observed survival time and Δ_i = 0 if it is censored. Let X_i be the vector of covariates. We describe our prediction algorithm below in three steps.
|Model name||Tuning parameter||Tuned value at IIa||Tuned value at III||R package|
|Regularized random forest||mtry, coefReg||2, 0.9||2, 0.9||RRF|
|SVM-RBF kernel||σ, C||0.001, 0.1||0.002, 0.3||e1071|
|Quantile random forest||mtry||6||6||quantregForest|
|SVM-Polynomial kernel||degree, scale, C||3, 0.0005, 0.15||5, 0.0005, 0.3||e1071|
|Partial least square||ncomp||2||2||pls|
This algorithm is summarized in Figure 2.
Figure 3 displays Kaplan-Meier (KM) estimates for observed survival data and several stages of survival time prediction. The black curve shows the KM estimate for the observed survival data assuming independent censoring. The density function for censoring is given by the dashed black line and indicates that most censoring occurred between six and 20 months.
The red, green, and blue curves show the KM estimates for survival predictions after initial random imputation (step I) and k = 1 and k = 2 iterations of the covariate-based, adjusted survival time predictions (step II), respectively. All imputed survival time curves closely track the observed survival curve until 16 months follow-up, at which time survival decreases more rapidly than expected under the assumption of independent censoring.
We note that it is possible for survival estimates after initial imputation (red curve) to lie above or below the observed KM curve (black) depending on larger or smaller choices of α, respectively. Here, we see that cross-validation favors larger values of α suggesting that censored individuals likely experience shorter-than-average survival after censoring. Survival estimates of model-based predictions (green and blue curves) also suggest that patients censored earlier are expected to have an event around 13–23 months. The green and the blue curves are very similar, indicating that the imputation algorithm converges very quickly.
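The KM estimates discussed here can be computed with the standard product-limit formula; a minimal sketch:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; returns the distinct event times
    and the estimated survival probability just after each one."""
    times = np.asarray(times, float)
    events = np.asarray(events, bool)
    t_event = np.unique(times[events])
    surv, s = [], 1.0
    for t in t_event:
        at_risk = np.sum(times >= t)          # subjects still at risk at t
        d = np.sum((times == t) & events)     # deaths at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return t_event, np.array(surv)
```

Applying this estimator to a completed (imputed) dataset, where every subject has Δ_i = 1, yields the colored curves of Figure 3.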
The left hand panel of Figure 4 shows a plot of the observed times against the out-of-sample predicted times made in the first predictive iteration (k = 1) in step IIa, prior to adjusting the predictions in step IIb. Under our imputation algorithm, we keep a patient's survival time as Y_i,new = Y_i if an event was observed (Δ_i = 1), and for patients with censored survival times (Δ_i = 0) we impute Y_i,new = Y_i + E_i whenever Y_i,adj < Y_i. The right hand plot shows that, after multiple iterations of this algorithm (k = 3), the final imputed values show greater risk stratification for censored patients (blue circles). Because we use observed event times instead of predicted event times for uncensored patients (red diamonds), these observations lie directly on the line of equality (black dashed line).
The left panel in Figure 4 also indicates regression to the mean, i.e., the initial imputations tend to overestimate earlier survival times (Y_i < 16 months) and underestimate later survival times (Y_i > 16 months), resulting in a horizontal cloud of points. Our imputation algorithm deals with the underestimation at later survival times by forcing the imputed times to be larger than the observed censoring time, i.e., through the Y_i,new = Y_i + E_i step. On the other hand, overestimation at earlier survival times is controlled by tuning the rate parameters of the exponential distributions (α, α*) in steps I and IIb. The right panel of Figure 4 shows that patients with earlier censoring times (circles toward the lower left) have larger differences between the imputed survival time and the observed censoring time (y − x) in comparison to patients who survive longer (circles toward the upper right).
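Combining the pieces described above, the imputation loop can be sketched as follows. This is a loose sketch of steps I-IIb only: the exact adjustment in step IIb, including the role of the second rate parameter α*, is simplified, and `fit_predict` stands in for the ensemble learner:

```python
import numpy as np

def iterative_impute(Y, delta, X, fit_predict, alpha=0.5, n_iter=3, seed=None):
    """Iterative imputation sketch.  fit_predict(X, y) is any regression
    learner that returns predictions for all rows of X."""
    rng = np.random.default_rng(seed)
    Y = np.asarray(Y, float)
    cens = ~np.asarray(delta, bool)
    Y_new = Y.copy()
    # step I: initial covariate-free imputation with an exponential tail
    Y_new[cens] = Y[cens] + rng.exponential(1.0 / alpha, cens.sum())
    for _ in range(n_iter):
        Y_adj = fit_predict(X, Y_new)      # step IIa: predict completed times
        low = cens & (Y_adj < Y)           # prediction below the censoring time
        Y_new[low] = Y[low] + rng.exponential(1.0 / alpha, low.sum())
        ok = cens & ~low
        Y_new[ok] = Y_adj[ok]              # otherwise accept the prediction
        Y_new[~cens] = Y[~cens]            # observed events are never changed
    return Y_new
```

Note the two invariants visible in Figure 4: observed event times are never altered, and imputed times for censored patients always exceed the censoring time.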
Although iAUC was used for evaluating the prediction performance in the training stage to make better use of the censored data, root mean squared error (RMSE) based on uncensored observations is used as the scoring metric for survival time prediction accuracy. Based on the training set, the 10-fold cross-validated RMSE of our ensemble predictive model is 246.5. (In the following section, we compare our method with other benchmarks with respect to the cross-validated RMSE using the same data-splitting index.)
In the final scoring round of the DREAM Challenge, our model was trained on the entire training set and then tested on the validation dataset from an independent clinical trial (ENTHUSE-33). Our final ensemble predictive model yielded a RMSE of 198.1 and was one of the top performing algorithms. Our predictions ranked sixth overall in accuracy and were not significantly different from the most accurate survival time predictions (Bayes factor > 3) [placeholder for main challenge paper].
We also compared the RMSE of the proposed method to that of an off-the-shelf method: survival random forest (SRF). Ishwaran et al. (2008) 8 propose a popular SRF method that outputs ensemble cumulative hazard function predictions, enabling one to specify the survival function for subject i = 1, …, n at time t. We predicted the exact survival time using the q% quantile of the estimated survival curve, with q common to all subjects and selected by 10-fold cross-validation to minimize RMSE. Survival random forest is distinguished from the usual random forest methods by the criterion for choosing and splitting a node. In our implementation, we used a log-rank splitting rule that splits nodes by maximizing the log-rank test statistic 10, 20. We increased the speed of training using a randomized log-rank splitting rule: at each splitting step of growing a tree, we consider a limited number of random split points for each candidate covariate and choose the covariate and split point pair that maximizes the log-rank statistic. This randomized scheme is also recommended to avoid overly favoring splits on continuous covariates when both continuous and categorical variables are present.
We generated 1,000 bootstrap samples from the original training data (compiled and completed as detailed in section 2). We grew one survival tree for each bootstrap sample. The survival random forest produces the final ensemble survival function prediction by averaging over predictions obtained from these trees. To split a node in each tree, we tried a maximum of 10 random splits to determine which variable to split and where to split. Averaged over the five imputed datasets, we obtained a 10-fold cross-validated RMSE of 344.8 with q = 37%. Thus, our proposed algorithm performed considerably better (RMSE = 246.5).
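Converting an estimated survival curve into an exact time prediction at the common quantile q can be sketched as:

```python
import numpy as np

def survival_quantile(t_grid, surv, q):
    """Exact-time prediction from an estimated survival curve: the first
    grid time at which S(t) has dropped to (100 - q)/100 or below."""
    surv = np.asarray(surv, float)
    below = np.nonzero(surv <= 1.0 - q / 100.0)[0]
    return t_grid[below[0]] if below.size else t_grid[-1]
```

Here q is tuned by cross-validation to minimize RMSE, as described above; if the curve never drops to the target level on the grid, the last grid time is returned as a conservative fallback.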
Via ensemble prediction modeling, we also identified the most salient predictors of survival time in this population. The strongest predictors of survival time included lab values indicating overall health and cancer activity, along with other measures of overall health. For example, alkaline phosphatase (ALP), the most predictive covariate, is typically elevated in individuals with metastatic disease. ALP was included as a covariate in the Halabi et al. (2014) benchmark model 21. Other lab measurements in the benchmark model, namely lactate dehydrogenase (LDH), hemoglobin (HB), prostate specific antigen (PSA), and albumin (ALB), were also among the most predictive covariates in our model. The Eastern Cooperative Oncology Group (ECOG) performance status (a standard measure of daily living abilities) and use of opiate medication were also included in the Halabi et al. (2014) nomogram and were found to be highly predictive of survival in our approach. Disease site, the remaining predictor in the benchmark model, was not among the strongest predictors of survival in our model.
In this paper, we have introduced a survival time prediction method based on multiple imputation and ensemble learning. It is designed for right-censored survival data with many covariates. The proposed method operates by iterating through two stages: iterative imputation of right-censored outcomes and building an ensemble predictive model of survival time. Compared to the existing methods for survival time prediction, the second phase of this algorithm is particularly effective in leveraging covariates to guide imputation of the censored survival times. By imputation, we have transformed the difficult problem of time-to-event prediction with censoring to a standard predictive regression problem. The results of the Prostate Cancer DREAM Challenge 1b have empirically validated the predictive performance of our algorithm. Further research is needed to explore theoretical characteristics of the proposed algorithm. Conceptually, the iterative imputation algorithm achieves strong predictive performance by first generating model-based imputations (which makes use of the covariate information) and, then, correcting survival time predictions based on observed outcomes.
For future work, we will compare our method with other methods such as risk set imputation (RSI) 14 and recursively imputed survival trees (RIST) 22 using more extensive simulation studies. We will also seek to establish the MSE optimality behind this algorithm and further improve its imputation and prediction performance. In particular, we will further study the impact of the initialization strategy in step I on the final predictive accuracy to explore whether using model-based initialization (such as RIST) performs better than the current cross-validation-based random initialization. Finally, obtaining reliable confidence intervals around predicted survival time is also crucial for this method to be more clinically useful.
The Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc

Challenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary, can be found at: https://www.synapse.org/ProstateCancerChallenge

The code and documentation underlying the method presented in this paper can be found at: http://dx.doi.org/10.7303/syn4732982
This publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication.
We thank Sage Bionetworks, the DREAM organization, and Project Data Sphere for developing and supplying data for the Challenge.
We thank John Muschelli for helpful discussions on super learner methodology. We also thank Scott Zeger and the Patrick C. Walsh Cancer Research Fund for supporting our team’s work.
[version 1; referees: 2 approved, 1 approved with reservations]
The author(s) declared that no grants were involved in supporting this work.
|Review date||Reviewer name(s)||Version reviewed||Review status|
|2017 February 20||C Jason Liang and Wenjuan Gu||Version 1||Approved|
|2017 January 9||Ruoqing Zhu||Version 1||Approved|
|2016 December 7||Devin C. Koestler||Version 1||Approved with Reservations|
This article provides a clear summary of how the team's prognostic model was created. Biomedical prognostic models are frequently built with survival data, but in practice often do not fully address or utilize the complexity of the data (e.g. dichotomizing the time-to-event outcome out of convenience rather than scientific motivation), so it was encouraging to read about a method thoughtfully developed for survival data, and a competition that embraces performance measures tailored for survival data. We have no major comments but will provide some minor comments for the authors' consideration.
The authors' method involves an initial iterative imputation method that is attractive in that it opens up a richer suite of continuous outcome models for use with the completed data. However, as the authors mention in their discussion, the theoretical aspects of this imputation procedure are unclear. To that end, we are curious if the authors considered existing methods that try to formally account for censoring while retaining loss functions that reduce to "typical" loss functions when censoring is absent (e.g. mean-squared error). For example, Steingrimsson et al (2016) 1 and Molinaro et al (2004) 2 study random forest models with loss functions that 1) can accommodate censored outcomes; and 2) reduce to squared error loss when censoring is absent.
The authors provided a useful graphic summarizing the covariate missingness. Given that there was a nontrivial amount of missingness, a sensitivity analysis might be helpful to ensure that the results are not qualitatively different when perturbing certain aspects of the imputation procedure. Alternatively, for each of the most salient predictors, examining what proportion of observations is missing for that variable may also be useful.
Requests for clarification
Choice of iAUC and other performance measures by the DREAM challenge organizers
We commend the competition organizers for embracing prognostic performance measures that are specifically tailored for survival data, such as the concordance index and cumulative AUC. However, we are puzzled by the decision to use iAUC as the primary performance measure.
iAUC is not a standard performance measure and, to our knowledge, is not documented in the literature. While this would not necessarily preclude iAUC from being used as the primary performance measure, it would be helpful to understand the justification for its choice. There does not appear to be an immediately obvious interpretation for iAUC. According to the DREAM website, iAUC is the average of the different cumulative AUC values over all times t. While the cumulative AUC for a single t is easily interpretable, it is unclear what the interpretive value of iAUC is.
Note that Heagerty and Zheng (2005; Section 2.2.1) 3 state: "[Cumulative AUC is] most appropriate when a specific time t' (or a small collection of times t'_1, t'_2, ..., t'_m) is important and scientific interest lies in discriminating between subjects who die prior to a given time t' and those that survive beyond t'."
If one were unable to choose a specific time or small collection of times, the concordance index offers a reasonable "global in time" alternative. Incidentally, Heagerty and Zheng (2005; Section 2.4) 3 note that when the incident/dynamic AUC (related to but different than the cumulative AUC) is averaged over time (and subject to a specific time-weighting), the result is the concordance index.
We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
The paper is nicely written, and the method is clearly described.
I have only one comment regarding the initial imputation step: why was an exponential distribution chosen? Does that choice affect the results? Can the authors provide a brief discussion on this point? In the literature, both RSI and RIST use a model-based imputation value.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
In the manuscript entitled, “Predicting survival time for metastatic castration resistant prostate cancer: an iterative imputation approach” Deng and colleagues describe a generalizable algorithm for iteratively imputing event-times for censored observations and apply their methodology to data collected as part of the Prostate Cancer DREAM Challenge. The approach itself is very interesting, and its application within an ensemble-based framework as a means toward informing survival predictions is quite creative. The Introduction provides a nice appraisal of existing methodologies and their limitations, and in the opinion of this reviewer, adequately motivates the methodology being proposed. Overall, the manuscript is well written and likely to be of interest to the prediction and machine learning communities. Some suggestions for improvement are given in the space that follows:
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.