
Results 1-10 (10)

1.  A note on confidence bounds after fixed-sequence multiple tests 
We are concerned with the problem of estimating the treatment effects at the effective doses in a dose-finding study. Under monotone dose-response, the effective doses can be identified through the estimation of the minimum effective dose, for which there is an extensive set of statistical tools. In particular, when a fixed-sequence multiple testing procedure is used to estimate the minimum effective dose, Hsu and Berger (1999) show that the confidence lower bounds for the treatment effects can be constructed without the need to adjust for multiplicity. Their method, called the dose-response method, is simple to use, but does not account for the magnitude of the observed treatment effects. As a result, the dose-response method will estimate the treatment effects at effective doses with confidence bounds invariably identical to the hypothesized value. In this paper, we propose an error-splitting method as a variant of the dose-response method to construct confidence bounds at the identified effective doses after a fixed-sequence multiple testing procedure. Our proposed method has the virtue of simplicity as in the dose-response method, preserves the nominal coverage probability, and provides sharper bounds than the dose-response method in most cases.
PMCID: PMC3432991  PMID: 22962518
Dose-response; Familywise error rate; Minimum effective dose; Monotonicity; Multiple comparisons
2.  Stochastic approximation with virtual observations for dose-finding on discrete levels 
Biometrika  2009;97(1):109-121.
Phase I clinical studies are experiments in which a new drug is administered to humans to determine the maximum dose that causes toxicity with a target probability. Phase I dose-finding is often formulated as a quantile estimation problem. For studies with a biological endpoint, it is common to define toxicity by dichotomizing the continuous biomarker expression. In this article, we propose a novel variant of the Robbins–Monro stochastic approximation that utilizes the continuous measurements for quantile estimation. The Robbins–Monro method has seldom seen clinical applications, because it does not perform well for quantile estimation with binary data and it works with a continuum of doses that are generally not available in practice. To address these issues, we formulate the dose-finding problem as root-finding for the mean of a continuous variable, for which the stochastic approximation procedure is efficient. To accommodate the use of discrete doses, we introduce the idea of virtual observation that is defined on a continuous dosage range. Our proposed method inherits the convergence properties of the stochastic approximation algorithm and its computational simplicity. Simulations based on real trial data show that our proposed method improves accuracy compared with the continual re-assessment method and produces results robust to model misspecification.
PMCID: PMC3412600  PMID: 23049118
Continual re-assessment method; Dichotomized data; Discrete barrier; Heteroscedasticity; Robust estimation; Semiparametric mean-variance relationship
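The Robbins–Monro recursion underlying the abstract above can be illustrated with a minimal sketch. This is not the authors' virtual-observation method; it is the classic stochastic approximation step x_{n+1} = x_n - (a/n)(y_n - target) for finding the dose at which a continuous mean response hits a target value. The linear mean function, step constant, and noise level are illustrative assumptions.

```python
import random

def robbins_monro(mean_fn, target, x0, n_steps, a=3.0, noise_sd=0.1, seed=0):
    """Classic Robbins-Monro recursion for finding x with E[Y(x)] = target.

    mean_fn : the true mean response at x (unknown in practice; used here
              only to simulate noisy observations).
    """
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_steps + 1):
        y = mean_fn(x) + rng.gauss(0, noise_sd)   # noisy observation at x
        x = x - (a / n) * (y - target)            # step toward the root
    return x

# Toy example: linear mean response E[Y(x)] = 0.5 * x, so the root of
# E[Y(x)] = 1.0 is x = 2.0.
est = robbins_monro(lambda x: 0.5 * x, target=1.0, x0=0.0, n_steps=5000)
```

The recursion works with a continuum of x values; the "discrete barrier" discussed in the abstract arises precisely because real trials restrict x to a small set of prespecified doses, which the virtual-observation device is designed to overcome.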
3.  Sample size formulae for the Bayesian continual reassessment method 
Clinical trials (London, England)  2013;10(6):10.1177/1740774513497294.
In the planning of a dose finding study, a primary design objective is to maintain high accuracy in terms of the probability of selecting the maximum tolerated dose. While numerous dose finding methods have been proposed in the literature, concrete guidance on sample size determination is lacking.
With a motivation to provide quick and easy calculations during trial planning, we present closed form formulae for sample size determination associated with the use of the Bayesian continual reassessment method.
We examine the sampling distribution of a nonparametric optimal design, and exploit it as a proxy to empirically derive an accuracy index of the continual reassessment method using linear regression.
We apply the formulae to determine the sample size of a phase I trial of PTEN-long in pancreatic cancer patients, and demonstrate that the formulae give very similar results to simulation. The formulae are implemented in the R function ‘getn’ in the package ‘dfcrm’.
The results are developed for the Bayesian continual reassessment method, and should be validated by simulation when used for other dose finding methods.
The analytical formulae we propose give quick and accurate approximation of the required sample size for the continual reassessment method. The approach used to derive the formulae can be applied to obtain sample size formulae for other dose finding methods.
PMCID: PMC3843987  PMID: 23965547
4.  Simple Benchmark for Complex Dose Finding Studies 
Biometrics  2014;70(2):389-397.
While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method’s simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O’Quigley, Paoletti, and Maccario (2002), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to those of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
PMCID: PMC4061271  PMID: 24571185
Combination therapy; Efficacy-toxicity tradeoff; Multiple toxicities; Phase I trials; Phase I/II trials; Utility scores
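The benchmark idea can be sketched for the single-toxicity case. In the spirit of O’Quigley, Paoletti, and Maccario (2002), each simulated patient receives a latent uniform "toxicity tolerance" that determines his or her outcome at every dose simultaneously; with this complete (unobservable in practice) information, the dose whose empirical toxicity rate is closest to the target is selected. The dose scenario and sample size below are hypothetical.

```python
import random

def benchmark_accuracy(true_tox, target, n_patients, n_sims=2000, seed=1):
    """Complete-information benchmark: each patient's latent uniform
    tolerance u determines a toxicity outcome 1{u <= p_k} at EVERY dose k,
    exploiting monotonicity of the true toxicity curve."""
    rng = random.Random(seed)
    true_mtd = min(range(len(true_tox)), key=lambda k: abs(true_tox[k] - target))
    correct = 0
    for _ in range(n_sims):
        tolerances = [rng.random() for _ in range(n_patients)]
        # Empirical toxicity rate at each dose from the complete data
        phat = [sum(u <= p for u in tolerances) / n_patients for p in true_tox]
        selected = min(range(len(true_tox)), key=lambda k: abs(phat[k] - target))
        correct += (selected == true_mtd)
    return correct / n_sims

# Hypothetical scenario: five doses, target toxicity rate 0.25, 20 patients.
acc = benchmark_accuracy([0.05, 0.12, 0.25, 0.40, 0.55], target=0.25, n_patients=20)
```

Because no realistic design observes each patient's outcome at every dose, the resulting accuracy serves as an upper bound against which a model-based design's simulated performance can be judged.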
5.  On the efficiency of nonparametric variance estimation in sequential dose-finding 
Dose-finding in clinical studies is typically formulated as a quantile estimation problem, for which a correct specification of the variance function of the outcomes is important. This is especially true for sequential studies, where the variance assumption is directly involved in the generation of the design points, and hence sensitivity analysis cannot be performed after the data are collected. In this light, there is a strong reason for avoiding parametric assumptions on the variance function, although this may incur efficiency loss. In this article, we investigate how much information one may retrieve by making additional parametric assumptions on the variance in the context of a sequential least squares recursion. By asymptotic comparison, we demonstrate that assuming homoscedasticity achieves only a modest efficiency gain over nonparametric variance estimation: when homoscedasticity in truth holds, the latter is at worst 88% as efficient as the former in the limiting case, and often achieves well over 90% efficiency in most practical situations. Extensive simulation studies concur with this observation under a wide range of scenarios.
PMCID: PMC3544527  PMID: 23329867
Homoscedasticity; Least squares estimate; Phase I trials; Quantile estimation; Stochastic approximation
6.  Calibration of prior variance in the Bayesian Continual Reassessment Method 
Statistics in medicine  2011;30(17):2081-2089.
The continual reassessment method (CRM) is an adaptive model-based design used to estimate the maximum tolerated dose in phase I clinical trials. Asymptotically, the method has been shown to select the correct dose given that certain conditions are satisfied. When sample size is small, specifying a reasonable model is important. While an algorithm has been proposed for the calibration of the initial guesses of the probabilities of toxicity, the calibration of the prior distribution of the parameter for the Bayesian CRM has not been addressed. In this paper, we introduce the concept of least informative prior variance for a normal prior distribution. We also propose two systematic approaches to jointly calibrate the prior variance and the initial guesses of the probability of toxicity at each dose. The proposed calibration approaches are compared with existing approaches in the context of two examples via simulations. The new approaches and the previously proposed methods yield very similar results since the latter used appropriate vague priors. However, the new approaches yield a smaller interval of toxicity probabilities in which a neighboring dose may be selected.
PMCID: PMC3129459  PMID: 21413054
Dose finding; indifference interval; least informative prior; phase I clinical trials
7.  Continual reassessment method with multiple toxicity constraints 
Biostatistics (Oxford, England)  2010;12(2):386-398.
This paper addresses the dose-finding problem in cancer trials in which we are concerned with the gradation of severe toxicities that are considered dose limiting. In order to differentiate the tolerance for different toxicity types and grades, we propose a novel extension of the continual reassessment method that explicitly accounts for multiple toxicity constraints. We apply the proposed methods to redesign a bortezomib trial in lymphoma patients and compare their performance with that of the existing methods. Based on simulations, our proposed methods achieve comparable accuracy in identifying the maximum tolerated dose but have better control of the erroneous allocation and recommendation of an overdose.
PMCID: PMC3062152  PMID: 20876664
Design calibration; Dose-finding cancer trials; Toxicity grades and types
8.  Stochastic Approximation and Modern Model-based Designs for Dose-Finding Clinical Trials 
In 1951 Robbins and Monro published the seminal paper on stochastic approximation and made a specific reference to its application to the “estimation of a quantal using response, non-response data”. Since the 1990s, statistical methodology for dose-finding studies has grown into an active area of research. The dose-finding problem is at its core a percentile estimation problem and is in line with what the Robbins-Monro method sets out to solve. In this light, it is quite surprising that the dose-finding literature has developed rather independently of the older stochastic approximation literature. The fact that stochastic approximation has seldom been used in actual clinical studies stands in stark contrast with its constant application in engineering and finance. In this article, I explore similarities and differences between the dose-finding and the stochastic approximation literatures. This review also sheds light on the present and future relevance of stochastic approximation to dose-finding clinical trials. Such connections will in turn steer dose-finding methodology on a rigorous course and extend its ability to handle increasingly complex clinical situations.
PMCID: PMC3010381  PMID: 21197369
Coherence; Dichotomized data; Discrete barrier; Ethics; Indifference interval; Maximum likelihood recursion; Unbiasedness; Virtual observations
9.  Selecting Promising Treatments in Randomized Phase II Cancer Trials with an Active Control 
The primary objective of phase II cancer trials is to evaluate the potential efficacy of a new regimen in terms of its antitumor activity in a given type of cancer. Due to advances in oncology therapeutics and heterogeneity in the patient population, such evaluation can be interpreted objectively only in the presence of a prospective control group of an active standard treatment. This paper deals with the design problem of phase II selection trials in which several experimental regimens are compared to an active control, with an objective to identify an experimental arm that is more effective than the control, or to declare futility if no such treatment exists. Conducting a multi-arm randomized selection trial is a useful strategy to prioritize experimental treatments for further testing when many candidates are available, but the sample size required in such a trial with an active control could raise feasibility concerns. In this paper, we extend the sequential probability ratio test for normal observations to the multi-arm selection setting. The proposed methods, allowing frequent interim monitoring, offer a high likelihood of early trial termination, and as such enhance enrollment feasibility. The termination and selection criteria have closed form solutions, and are easy to compute with respect to any given set of error constraints. The proposed methods are applied to design a selection trial in which combinations of sorafenib and erlotinib are compared to a control group in patients with non-small-cell lung cancer using a continuous endpoint of change in tumor size. The operating characteristics of the proposed methods are compared to those of a single-stage design via simulations: the sample size requirement is reduced substantially and is feasible at an early stage of drug development.
PMCID: PMC2896482  PMID: 19384691
Noninferiority test; Probability of correct selection; Sample size re-estimation; Sequential elimination; Sequential probability ratio test; Symmetric boundaries; Type I error
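The building block extended in the abstract above, Wald's sequential probability ratio test for normal observations, can be sketched in a two-arm form. This is not the authors' multi-arm procedure; it is the classic SPRT on a stream of (experimental minus control) differences, with boundaries from the standard Wald approximations A = (1-β)/α and B = β/(1-α). The effect size, variance, and error rates below are illustrative assumptions.

```python
import math
import random

def sprt_normal(stream, delta, sigma, alpha=0.05, beta=0.20, max_n=1000):
    """Wald's SPRT on differences d_i ~ N(mean, sigma^2), testing
    H0: mean = 0 vs H1: mean = delta.
    Returns ('accept H1' / 'accept H0' / 'inconclusive', n observed)."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0
    llr, n = 0.0, 0
    for d in stream:
        n += 1
        # Log likelihood-ratio increment for one normal observation
        llr += (delta / sigma ** 2) * (d - delta / 2)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
        if n >= max_n:
            break
    return "inconclusive", n

rng = random.Random(2)
# Data stream generated under H1 (true mean difference = 1.0).
data = (rng.gauss(1.0, 1.0) for _ in range(1000))
decision, n_used = sprt_normal(data, delta=1.0, sigma=1.0)
```

Because the log likelihood ratio is monitored after every observation, the test typically terminates after only a handful of patients when the true effect is large, which is the source of the enrollment-feasibility gain the abstract describes.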
10.  Model Calibration in the Continual Reassessment Method 
The continual reassessment method (CRM) is an adaptive model-based design used to estimate the maximum tolerated dose in dose finding clinical trials. One way to evaluate the sensitivity of a given CRM model, including the functional form of the dose-toxicity curve, the prior distribution on the model parameter, and the initial guesses of toxicity probability at each dose, is to use indifference intervals. While the indifference interval technique provides a succinct summary of model sensitivity, there are infinitely many possible ways to specify the initial guesses of toxicity probability. In practice, these are generally specified by trial and error through extensive simulations.
By using indifference intervals, the initial guesses used in the CRM can be selected by specifying a range of acceptable toxicity probabilities in addition to the target probability of toxicity. An algorithm is proposed for obtaining the indifference interval that maximizes the average percentage of correct selection across a set of scenarios of true probabilities of toxicity, providing a systematic approach for selecting initial guesses that is much less time-consuming than the trial-and-error method. The methods are compared in the context of two real CRM trials.
For both trials, the initial guesses selected by the proposed algorithm had operating characteristics similar to those of the initial guesses used during the conduct of the trials, which had been obtained by trial and error through a time-consuming calibration process. Operating characteristics were measured by the percentage of correct selection, the average absolute difference between the true probability of the dose selected and the target probability of toxicity, the percentage treated at each dose, and the overall percentage of toxicity. The average percentages of correct selection for the scenarios considered were 61.5% and 62.0% in the lymphoma trial, and 62.9% and 64.0% in the stroke trial, for the trial-and-error method versus the proposed approach.
We present detailed results only for the empiric dose-toxicity curve, although the proposed methods are applicable to other dose-toxicity models such as the logistic.
The proposed method provides a fast and systematic approach for selecting initial guesses of toxicity probabilities for the CRM that are competitive with those obtained by trial and error through a time-consuming process, thus simplifying the model calibration process for the CRM.
PMCID: PMC2884971  PMID: 19528132
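The role of the initial guesses (the "skeleton") in the CRM can be illustrated with a minimal model-update step. This sketch uses the empiric (power) model mentioned in the abstract, p_k(β) = skeleton[k]^exp(β), with a normal prior on β and simple grid integration for the posterior; the skeleton values, prior standard deviation, and trial data are hypothetical, and the calibration algorithm itself is not reproduced here.

```python
import math

def crm_recommend(skeleton, target, doses_given, tox, prior_sd=1.34):
    """One Bayesian CRM update under the empiric (power) model
    p_k(beta) = skeleton[k] ** exp(beta), beta ~ N(0, prior_sd^2).
    Posterior summaries via grid integration; illustrative sketch only."""
    grid = [i / 100.0 for i in range(-400, 401)]   # beta grid on [-4, 4]

    def log_post(b):
        lp = -b * b / (2 * prior_sd ** 2)          # normal prior kernel
        for k, y in zip(doses_given, tox):
            p = skeleton[k] ** math.exp(b)
            lp += math.log(p) if y else math.log(1 - p)
        return lp

    w = [math.exp(log_post(b)) for b in grid]
    total = sum(w)
    # Posterior-mean toxicity probability at each dose
    post_p = [sum(wi * (s ** math.exp(b)) for wi, b in zip(w, grid)) / total
              for s in skeleton]
    return min(range(len(skeleton)), key=lambda k: abs(post_p[k] - target))

# Hypothetical skeleton (initial guesses) and data: three patients treated at
# dose index 2, one toxicity observed.
skeleton = [0.05, 0.12, 0.25, 0.40, 0.55]
rec = crm_recommend(skeleton, target=0.25, doses_given=[2, 2, 2], tox=[0, 0, 1])
```

Different skeletons plugged into a recursion like this one can yield visibly different dose recommendations from the same data, which is why the calibration of the initial guesses, via indifference intervals or otherwise, matters in practice.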