BMC Med Res Methodol. 2010; 10: 48.

Published online 2010 June 5. doi: 10.1186/1471-2288-10-48

PMCID: PMC2911470

Xuemin Gu: xuegu@mdanderson.org; J Jack Lee: jjlee@mdanderson.org

Received 2008 November 4; Accepted 2010 June 5.

Copyright ©2010 Gu and Lee, licensee BioMed Central Ltd.

This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Response-adaptive randomization can assign more patients in a comparative clinical trial to the tentatively better treatment. However, because of the adaptation in patient allocation, the samples to be compared are no longer independent. At large sample sizes, many asymptotic properties of test statistics derived for independent-sample comparisons still apply in adaptive randomization, provided that the patient allocation ratio converges asymptotically to an appropriate target. However, the small sample properties of commonly used test statistics in response-adaptive randomization have not been fully studied.

Simulations are systematically conducted to characterize the statistical properties of eight test statistics in six response-adaptive randomization methods at six allocation targets, with sample sizes ranging from 20 to 200. Since adaptive randomization is usually not recommended for sample sizes of less than 30, the present paper focuses on the case of a sample size of 30 to give general recommendations regarding test statistics for contingency tables in response-adaptive randomization at small sample sizes.

Among all asymptotic test statistics, Cook's correction to the chi-square test (*T*_{MC}) is the best at attaining the nominal size of the hypothesis test. Williams' correction to the log-likelihood-ratio test (*T*_{ML}) gives a slightly inflated type I error and higher power compared with *T*_{MC}, but it is more robust against imbalance in patient allocation. *T*_{MC} and *T*_{ML} are usually the two test statistics with the highest power across the simulation scenarios. When the focus is on *T*_{MC} and *T*_{ML}, the generalized drop-the-loser urn (GDL) and the sequential estimation-adjusted urn (SEU) have the best ability to attain the correct size of the hypothesis test, respectively. Among all sequential methods that can target different allocation ratios, GDL has the lowest variation and the highest overall power at all allocation ratios. The performance of the different adaptive randomization methods and test statistics also depends on the allocation target. At the limiting allocation ratio of the drop-the-loser (DL) and randomized play-the-winner (RPW) urns, DL outperforms all other methods, including GDL. When the power of a test statistic is compared within the same randomization method but across allocation targets, the powers of the log-likelihood-ratio, log-relative-risk, log-odds-ratio, Wald-type Z, and chi-square test statistics are maximized at their corresponding optimal allocation ratios for power. Except for the optimal allocation target for log-relative-risk, the other four optimal targets could assign more patients to the worse arm in some simulation scenarios. Another optimal allocation target, *R*_{RSIHR}, proposed by Rosenberger and Sriram (*Journal of Statistical Planning and Inference*, 1997), is aimed at minimizing the number of failures at fixed power using the Wald-type Z test statistic. Among allocation ratios that always assign more patients to the better treatment, *R*_{RSIHR} usually has less variation in patient allocation, and this variation is consistent across all simulation scenarios. Additionally, the patient allocation at *R*_{RSIHR} is not too extreme. Therefore, *R*_{RSIHR} provides a good balance between assigning more patients to the better treatment and maintaining the overall power.

Cook's correction to the chi-square test and Williams' correction to the log-likelihood-ratio test are generally recommended for hypothesis testing in response-adaptive randomization, especially when sample sizes are small. The generalized drop-the-loser urn design is recommended for its good overall properties, as is the use of the *R*_{RSIHR} allocation target.

Response-adaptive randomization (RAR) in clinical trials is a class of flexible methods for sequentially assigning treatments to new patients based on the data available. RAR adjusts the allocation probabilities to reflect the interim results of the trial, thereby allowing patients to benefit from the knowledge as it accumulates during the trial. In practice, unequal allocation probabilities are generated based on the current assessment of treatment efficacy, which results in more patients being assigned to the putatively superior treatment.

Many RAR designs have been proposed over the years [1-13]. The two key issues extensively investigated are the evaluation of parameter estimation and hypothesis testing. Because new patients are assigned based on the data observed up to that time, conventional estimates of treatment effect are often biased; therefore, efforts have been made to quantify and correct estimation bias [14,15]. Recent theoretical work has focused on solving problems encountered in practice, including delayed responses, implementation in multi-arm trials, and the incorporation of covariates [1,3,11,16-18]. Many recent theoretical developments are summarized in [19]. Additionally, in order to compare treatment efficacies through hypothesis testing, studies have been conducted on power comparisons and sample size calculations under the framework of adaptive randomization [20-24]. However, most of this work is based on large sample sizes and focuses on asymptotic properties [4,12,22,25,26]; these properties have not been fully studied at small sample sizes. The mathematical challenge imposed by correlated data makes it extremely difficult to derive exact solutions for finite samples. To date, only limited results on exact solutions are available [15,27], and computer simulation must be relied upon when the sample size is small [23,24], which is often the case in early phase II trials.

Each RAR design has its own objective, with both advantages and disadvantages associated with that objective. It is not our purpose to give a comprehensive assessment of different designs by weighing their advantages and disadvantages. Instead, the primary objective of the present study is to characterize the small sample properties of RAR from a frequentist viewpoint. In particular, we focus on comparing the performance of commonly used test statistics in RAR for two-arm comparative trials with a binary outcome. Because of the departure from normality caused by data correlation and the discrete nature of a binary outcome, hypothesis tests usually cannot be controlled exactly at a given nominal significance level. Thus, to make our simulation comparison more relevant, our assessment of hypothesis testing methods and RAR procedures is based on both statistical power and the deviation from the nominal type I error rate. Several RAR methods studied in our simulations can assign patients according to a given allocation target, which may be optimal in terms of maximizing the power or minimizing the expected number of treatment failures. Therefore, we also compare the properties of the test statistics at different optimal allocation targets.

The remainder of this paper is organized into four sections. In the Methods Section, we introduce the adaptive randomization procedures, the optimal allocation targets, and the test statistics used in the simulation. In the Results Section, we present the simulation results. We provide a discussion and final recommendations regarding the RAR methods and hypothesis tests in the Discussion and Conclusions Sections.

In the present section, we briefly describe the randomization methods, asymptotic hypothesis test statistics, and optimal patient allocation targets that are relevant to our simulations. More detailed information can be found in the corresponding references.

The RAR procedures investigated in the present study are the randomized play-the-winner (RPW) [8,10], drop-the-loser (DL) [28], sequential maximum likelihood estimation (SMLE) [12], doubly-adaptive biased coin [2,3], sequential estimation-adjusted urn (SEU) [13], and generalized drop-the-loser (GDL) [11] designs. RPW, DL, SEU, and GDL are all urn models in the sense that the treatment assignment for each patient can be obtained by sampling balls from an urn. In the usual clinical trial setting, an urn model consists of one urn with different types of balls representing the different treatments under study. Patients are assigned to treatments by randomly selecting balls from the urn. Initially, the urn contains an equal number of balls for each of the treatments offered in the trial. As the trial progresses, rules are applied to update the contents of the urn in a way that favors the selection of balls corresponding to the better treatment. For example, under the RPW design, the observation of a successful treatment response leads to the addition of *a* (>0) balls of the same type to the urn; a lack of success leads to the addition of *b* (>0) balls of the other type (*a* = *b* = 1 in our simulation). The limiting allocation rate of patients on treatment 1 is *q*_{2}/(*q*_{1} + *q*_{2}), where *q*_{1} = 1-*p*_{1} and *q*_{2} = 1-*p*_{2} are the failure rates, and *p*_{1} and *p*_{2} are the success rates (or response rates) for treatments 1 and 2. In the DL model, patients are assigned to a treatment based on the type of ball that is drawn; however, a treatment failure results in the removal of a ball of that treatment type from the urn, and treatment successes are ignored. Because the urn can become empty with positive probability, immigration balls are added to the urn. If an immigration ball is drawn, one additional ball of each treatment type is added, and the sampling process is repeated until a treatment ball is drawn.
The DL urn design has the same limiting allocation as the RPW urn, but less variability in patient allocation. Both SEU and GDL are urn models that allow fractional numbers of balls and can target any allocation ratio. For the SEU method [13], if the limiting allocation of the RPW urn is the target in a two-arm trial, then *q̂*_{1,i}/(*q̂*_{1,i} + *q̂*_{2,i}) balls of type 2 and *q̂*_{2,i}/(*q̂*_{1,i} + *q̂*_{2,i}) balls of type 1 are added to the urn following the allocation of the *i*th patient, where *q̂*_{1,i} and *q̂*_{2,i} are the current estimates of the failure rates. Consequently, the response status of the *i*th patient affects the contents of the SEU urn only through the calculation of *q̂*_{1,i} and *q̂*_{2,i}. For a two-arm GDL urn model [11], when a treatment ball is drawn, a new patient is assigned accordingly, but the ball is not returned to the urn. Depending on the response of the patient, the conditional average numbers of balls added back to the urn are *b*_{1} and *b*_{2} for treatments 1 and 2, respectively. Therefore, the conditional average numbers of type 1 and type 2 balls taken out of the urn can be defined as *d*_{1} and *d*_{2}, where *d*_{1} = 1-*b*_{1} and *d*_{2} = 1-*b*_{2}. Immigration balls are also present in a GDL urn. Whenever an immigration ball is drawn, *a*_{1} and *a*_{2} balls are added for treatments 1 and 2, respectively. Zhang et al [11] have shown that the limiting allocation rate of patients on treatment 1 is

*ρ*_{1} = (*a*_{1}/*d*_{1})/(*a*_{1}/*d*_{1} + *a*_{2}/*d*_{2})     (1)

The GDL urn becomes a DL urn when *a*_{1} = 1, *a*_{2} = 1, *b*_{1} = *p*_{1}, and *b*_{2} = *p*_{2}. Although GDL is a general method with different possible implementations, a convenient approach is taken in our simulation. When a treatment ball is drawn, the ball is not returned, and no ball is added regardless of the response of the patient. When an immigration ball is drawn, *Cρ*_{1} and *Cρ*_{2} balls of type 1 and type 2 are added, where *C* is a constant, and *ρ*_{1} and *ρ*_{2} are the allocation targets for treatments 1 and 2, which are estimated sequentially using maximum likelihood estimates (MLE) [11].
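To make the urn mechanics concrete, here is a minimal simulation sketch of the RPW urn and of the simplified GDL urn described above (with *a* = *b* = 1, one immigration ball, and *C* treatment balls added on average per immigration draw). The function names, the add-0.5 smoothing of the sequential failure-rate estimates, and the guard against negative fractional ball counts are our own implementation choices, not prescriptions from the paper.

```python
import random

def rpw_trial(p, n, a=1.0, b=1.0, seed=0):
    """Randomized play-the-winner urn: a success on the assigned arm adds
    `a` balls of that type; a failure adds `b` balls of the other type.
    Returns the number of patients assigned to each of the two arms."""
    rng = random.Random(seed)
    urn = [1.0, 1.0]                      # one ball per treatment initially
    counts = [0, 0]
    for _ in range(n):
        arm = 0 if rng.random() < urn[0] / (urn[0] + urn[1]) else 1
        counts[arm] += 1
        if rng.random() < p[arm]:         # success: reinforce the same arm
            urn[arm] += a
        else:                             # failure: reinforce the other arm
            urn[1 - arm] += b
    return counts

def gdl_trial(p, n, C=2.0, seed=0):
    """Simplified generalized drop-the-loser urn: a drawn treatment ball is
    not returned; when the immigration ball is drawn, C*rho1_hat and
    C*(1 - rho1_hat) balls of types 1 and 2 are added, where rho1_hat is the
    sequentially estimated RPW target q2/(q1 + q2)."""
    rng = random.Random(seed)
    balls = [1.0, 1.0]                    # fractional treatment balls
    imm = 1.0                             # one immigration ball
    assigned, fails = [0, 0], [0, 0]
    while sum(assigned) < n:
        u = rng.random() * (balls[0] + balls[1] + imm)
        if u < imm:                       # immigration draw: replenish urn
            q1 = (fails[0] + 0.5) / (assigned[0] + 1.0)  # smoothed estimates
            q2 = (fails[1] + 0.5) / (assigned[1] + 1.0)
            rho1 = q2 / (q1 + q2)
            balls[0] += C * rho1
            balls[1] += C * (1.0 - rho1)
            continue
        arm = 0 if u < imm + balls[0] else 1
        balls[arm] = max(balls[arm] - 1.0, 0.0)   # ball is not returned
        assigned[arm] += 1
        if rng.random() >= p[arm]:        # treatment failure
            fails[arm] += 1
    return assigned
```

With *p* = (0.9, 0.1), both rules allocate the majority of patients to arm 1 on average, approaching the limiting ratio *q*_{2}/(*q*_{1} + *q*_{2}) = 0.9 as the trial grows.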

The SMLE and doubly-adaptive biased coin design (DBCD) methods can also target any allocation ratio, and SMLE can be implemented as a special case of the DBCD method. In the DBCD method, the probability of the (*i*+1)th patient being assigned to treatment 1 is calculated by

*g*(*r*_{1}, *ρ*(*i*))     (2)

where *r*_{1} = *n*_{1}(*i*)/*i* and *ρ*(*i*) are the current allocation rate and the estimated target allocation rate for treatment 1 [2,3]. The properties of the DBCD depend largely on the selection of *g*, which can be considered a function measuring the deviation from the allocation target. In the present study, we use the following function suggested by Hu and Zhang [3]:

*g*(*x*, *ρ*) = *ρ*(*ρ*/*x*)^{*α*} / [*ρ*(*ρ*/*x*)^{*α*} + (1-*ρ*)((1-*ρ*)/(1-*x*))^{*α*}]     (3)

where *α* is a tuning parameter. When *α* approaches infinity, the DBCD becomes deterministic and patients are assigned to the putatively better treatment with probability 1. When *α* equals 0, *g* returns the current estimate of the allocation target *ρ*, and the DBCD method is essentially the same as the SMLE design proposed by Melfi et al [12].
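As a concrete sketch, the Hu-Zhang allocation function can be written directly; the function name and the boundary guards for *x* = 0 or 1 are our own additions.

```python
def hu_zhang_g(x, rho, alpha):
    """Hu-Zhang allocation function g(x, rho): probability of assigning the
    next patient to treatment 1, given the current allocation proportion x
    and the estimated target rho.  alpha = 0 recovers the SMLE rule (assign
    with probability rho); large alpha pushes the allocation toward rho."""
    if x <= 0.0:            # boundary guard: no patients yet on arm 1
        return 1.0
    if x >= 1.0:            # boundary guard: all patients so far on arm 1
        return 0.0
    num = rho * (rho / x) ** alpha
    den = num + (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** alpha
    return num / den
```

Note that when the current proportion *x* is below the target *ρ*, the function returns a probability above *ρ*, nudging the allocation back toward the target; the strength of the nudge grows with *α*.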

In two-arm comparative trials, the results of a binary outcome variable can be summarized in a 2 × 2 contingency table (Table 1). The following hypothesis test is often conducted to compare treatment efficacy:

*H*_{0}: *p*_{1} = *p*_{2} versus *H*_{1}: *p*_{1} ≠ *p*_{2}     (4)

Nine test statistics for the hypothesis test in (4) are given in Table 2. When the relative risk (*q*_{1}/*q*_{2}) and the odds ratio (*p*_{1}*q*_{2}/*q*_{1}*p*_{2}) are used to quantify the difference between the two treatment arms, the test statistics are the log-relative-risk and log-odds-ratio statistics, *T*_{Risk} and *T*_{Odds}, which are asymptotically distributed as chi-square with one degree of freedom (χ²_{1}). When the simple difference is used to measure the treatment effect, the applicable test statistics are the Wald-type test statistic *T*_{Wald} and the score-type test statistic *T*_{Chisq}, in which the variance of the simple difference in response rates is evaluated under *H*_{1} or *H*_{0}, respectively. Additionally, a test statistic based on the logarithm of the likelihood ratio (*T*_{LLR}) can also be constructed. Besides the five commonly used test statistics mentioned above, four modified test statistics are also included in Table 2. *T*_{MO} is a modified log-odds-ratio test proposed by Gart using the approximation of discrete distributions by their continuous analogues [29]. As shown in Table 2, *T*_{MO} is essentially a modification of *T*_{Odds} obtained by adding 0.5 to each cell of the 2 × 2 table. Similarly, Agresti and Caffo proposed a modification of *T*_{Wald} that adds 1 to each cell of the contingency table [30], which results in the test statistic *T*_{MW} in Table 2. *T*_{MC} is Cook's continuity correction to the chi-square test statistic *T*_{Chisq}. Williams provided a modification to the log-likelihood-ratio test *T*_{LLR} [31]: the original test statistic is adjusted by a scale factor such that the null distribution of the new test statistic *T*_{ML} more closely matches the moments of the chi-square distribution.
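Since Table 2 is not reproduced in this excerpt, the following sketch shows two of the statistics in a hedged form: the uncorrected score-type statistic *T*_{Chisq} and a Williams-type correction to the log-likelihood-ratio statistic. The scale factor *q* below uses the standard Williams (1976) form for a 2 × 2 table, which we assume matches the *T*_{ML} of Table 2; the function names are ours.

```python
import math

def chisq_2x2(r1, f1, r2, f2):
    """Pearson chi-square statistic T_Chisq for a 2x2 table with
    (successes, failures) = (r1, f1) on arm 1 and (r2, f2) on arm 2."""
    n1, n2 = r1 + f1, r2 + f2
    r, f = r1 + r2, f1 + f2
    n = n1 + n2
    den = n1 * n2 * r * f
    return n * (r1 * f2 - r2 * f1) ** 2 / den if den else 0.0

def williams_g2(r1, f1, r2, f2):
    """Log-likelihood-ratio statistic G^2 divided by Williams' scale factor
    q = 1 + (n/n1 + n/n2 - 1)(n/r + n/f - 1)/(6n) -- our assumed form of
    T_ML for a 2x2 table."""
    n1, n2 = r1 + f1, r2 + f2
    r, f = r1 + r2, f1 + f2
    n = n1 + n2
    if n1 == 0 or n2 == 0 or r == 0 or f == 0:
        return 0.0                       # degenerate margin: no evidence
    observed = [r1, f1, r2, f2]
    expected = [n1 * r / n, n1 * f / n, n2 * r / n, n2 * f / n]
    g2 = 2.0 * sum(o * math.log(o / e)
                   for o, e in zip(observed, expected) if o > 0)
    q = 1.0 + (n / n1 + n / n2 - 1.0) * (n / r + n / f - 1.0) / (6.0 * n)
    return g2 / q
```

Because *q* > 1, the corrected statistic is always smaller than the raw G², which is what pulls the inflated small-sample type I error back toward the nominal level.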

Since all test statistics in Table 2 are referred to the asymptotic χ²_{1} distribution, they are asymptotically equivalent, and any one of them can be used for large sample sizes. At small sample sizes, in contrast, an exact test can be conducted if a model is specified for the data given in Table 1. For example, depending on the number of fixed margins predetermined for the design, one of the following three models can be applied [32]:

*P*(*r*_{1}) = *h*(*r*_{1}|*n*, *n*_{1}, *r*)     (5)

*P*(*r*_{1}, *r*) = *h*(*r*_{1}|*n*, *n*_{1}, *r*) *b*(*r*|*n*, *p*)     (6)

and

*P*(*r*_{1}, *r*, *n*_{1}) = *h*(*r*_{1}|*n*, *n*_{1}, *r*) *b*(*r*|*n*, *p*) *b*(*n*_{1}|*n*, *ρ*)     (7)

where *h*(*r*_{1}|*n*, *n*_{1}, *r*) represents the hypergeometric distribution of *r*_{1}, *b*(*r*|*n*, *p*) gives the binomial distribution of *r* under the null hypothesis of equal response rates (*H*_{0}: *p*_{1} = *p*_{2} = *p*), and *b*(*n*_{1}|*n*, *ρ*) denotes the binomial distribution of the number of patients on arm 1 with an allocation ratio of *ρ* (*ρ* = 0.5 for equal randomization). The p-value of the exact test can be calculated by maximizing the probability in (5), (6), or (7) over the two nuisance parameters, *p* and *ρ*. However, because of the data dependency, none of the above three models is directly applicable in adaptive randomization. For example, the allocation ratio *ρ* in adaptive randomization is a random variable with an unknown distribution, so the binomial distribution of *n*_{1} assumed in model (7) is not valid even when the null hypothesis is true. Therefore, in adaptive randomization, unconditional exact tests are not available, and asymptotic test statistics such as the ones in Table 2 are required for testing the hypothesis in (4).
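For contrast with the adaptive setting, an unconditional exact test for the model with both arm sizes fixed can be sketched as follows. Tables are ordered by the Pearson chi-square statistic, and the nuisance success rate *p* is maximized over a simple grid; the grid resolution and the choice of ordering statistic are implementation choices of ours, not prescriptions from the paper.

```python
from math import comb

def exact_pvalue(r1, n1, r2, n2, grid=99):
    """Unconditional exact p-value for H0: p1 = p2 with both arm sizes
    fixed, maximizing over the nuisance success rate p on a grid.  All
    tables at least as extreme as the observed one (by Pearson chi-square)
    are accumulated under two independent binomials with common p."""
    def chisq(a, b):
        r = a + b
        n = n1 + n2
        f = n - r
        if r == 0 or f == 0:
            return 0.0
        return n * (a * (n2 - b) - b * (n1 - a)) ** 2 / (n1 * n2 * r * f)

    t_obs = chisq(r1, r2)
    pmax = 0.0
    for k in range(1, grid + 1):
        p = k / (grid + 1)
        total = 0.0
        for a in range(n1 + 1):
            pa = comb(n1, a) * p ** a * (1 - p) ** (n1 - a)
            for b in range(n2 + 1):
                if chisq(a, b) >= t_obs - 1e-12:   # at least as extreme
                    total += pa * comb(n2, b) * p ** b * (1 - p) ** (n2 - b)
        pmax = max(pmax, total)
    return pmax
```

The point of the surrounding paragraph is precisely that this construction breaks down under adaptive randomization: once *n*_{1} is random with an unknown distribution, the binomial factor for *n*_{1} cannot be written down, so no analogous unconditional exact test is available.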

The SMLE, DBCD, SEU, and GDL methods can be used to allocate patients according to different allocation targets. The allocation targets simulated in the present study are summarized in Table 3, where *R*_{Risk}, *R*_{Odds}, *R*_{Wald}, *R*_{Chisq}, and *R*_{LLR} are the optimal allocation ratios maximizing the power of *T*_{Risk}, *T*_{Odds}, *T*_{Wald}, *T*_{Chisq}, and *T*_{LLR}, respectively, at a fixed sample size. The derivations of these allocation ratios can be found in [33,34]; each is obtained by minimizing the variance of the corresponding test statistic at a fixed total sample size, and consequently the power of that test statistic is maximized. *R*_{RSIHR} is a more recently proposed allocation target that minimizes the expected total number of failures among all trials with the same power [15,33]. The general theoretical framework and the practical implementation of optimal allocation in *k*-arm trials with binary outcomes are discussed and demonstrated by Tymofyeyev et al [35], where the optimization can be conducted over different goals; in practice, the performance of the methodology depends on the chosen RAR procedure. The present simulation study only focuses on two-arm trials, with the goal of maximizing the power or minimizing the total number of failures.
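For reference, a few of the closed-form targets can be sketched. Since Table 3 is not reproduced in this excerpt, the exact expressions below (the RPW/DL limit, the Neyman allocation for the Wald test, and the RSIHR target) are taken from the wider RAR literature and should be treated as assumptions about the table's contents.

```python
from math import sqrt

def allocation_targets(p1, p2):
    """Selected allocation targets rho_1 (share of patients on arm 1):
    the RPW/DL limiting allocation q2/(q1 + q2), the Neyman allocation
    (maximizing Wald-test power), and the RSIHR target
    sqrt(p1)/(sqrt(p1) + sqrt(p2)) (minimizing expected failures at fixed
    Wald-test power)."""
    q1, q2 = 1.0 - p1, 1.0 - p2
    return {
        "R_RPW": q2 / (q1 + q2),
        "R_Wald": sqrt(p1 * q1) / (sqrt(p1 * q1) + sqrt(p2 * q2)),
        "R_RSIHR": sqrt(p1) / (sqrt(p1) + sqrt(p2)),
    }
```

For *p*_{1} = 0.8 and *p*_{2} = 0.2, this gives *R*_{RPW} = 0.8 and *R*_{RSIHR} = 2/3, illustrating the point made later in the paper that the RPW target is the more extreme of the two.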

Simulations are conducted at total numbers of patients ranging from 20 to 200. To simplify the presentation, the results for trials with 30 patients are shown here; adaptive randomization is generally not recommended when there are fewer than 30 patients, and for sample sizes of 100 or larger all methods yield similar properties in general. For all of the urn models, one ball for each treatment is consistently used as the initial content of the urn. The number of immigration balls is 1 for both the DL and GDL urns. The tuning parameter of the DBCD, *α*, is fixed at 0 or 2; when *α* is 0, the design reduces to the SMLE method. The value of the constant *C* in GDL is 2, which is equivalent to adding 2 treatment balls on average when an immigration ball is drawn. All simulation results are based on 10,000 replicates.

For the purpose of comparison, the true allocation rates are shown in Table 4, and the simulated allocation rates on arm 1 are shown in Table 5. Among all RAR methods, DBCD has the best ability to attain the true allocation target. The comparison between SMLE and DBCD shows that the allocation becomes more unbalanced and the variation of DBCD decreases as the tuning exponent *α* increases. On the other hand, the patient allocation of SEU results in a more balanced mean allocation between the two arms, with a much larger variation than the other RAR methods. GDL has the lowest variation among the four sequential RAR methods. When *R*_{RPW} (the same as *R*_{DL}) is the allocation target, the DL urn method has the lowest variation in patient allocation, which is consistent with the fact that the lower bound of the estimate of Var(*R*_{RPW}) is attained by the DL urn [4]. The comparison among allocation targets shows that *R*_{LLR} has the lowest variation in patient allocation, and the highest variation is usually found at *R*_{RPW} or *R*_{Risk}. However, *R*_{RPW} and *R*_{Risk} are usually the top two allocation targets for assigning more patients to the better treatment. *R*_{Wald}, *R*_{Odds}, and *R*_{LLR} assign more patients to the worse arm in some simulation cases. Among the three allocation targets that always assign more patients to the better treatment (*R*_{RSIHR}, *R*_{Risk}, and *R*_{RPW}), *R*_{RSIHR} has a stable and often the lowest variation in patient allocation.

The simulation results are obtained for five null cases and ten alternative cases, and Table 6 gives the summary obtained by averaging the results over the five null cases and the ten alternative cases for a given RAR method at a given allocation target. Detailed simulation results for each test statistic are shown in Tables 7, 8, 9, 10, 11, and 12, with one table for each of the six allocation targets. To simplify the presentation, results are shown only for the four modified test statistics *T*_{MW}, *T*_{MO}, *T*_{MC}, and *T*_{ML}, and for the log-relative-risk test statistic *T*_{Risk}, because they tend to perform better than the corresponding unmodified tests. Qualitative comparisons among test statistics, RAR methods, and allocation targets can be made based on the results in Table 6.

As shown in Table 6 (also see Tables 7, 8, 9, 10, 11, and 12), the worst performance is found for *T*_{MO} and *T*_{Risk}, which are often conservative, with type I error rates below the nominal level. *T*_{MW} is always slightly conservative across all simulation cases. Overall, *T*_{MC} is the best at attaining the correct type I error rate. *T*_{ML} is slightly inflated compared with the chi-square test *T*_{MC}. However, simulation results not shown here indicate that *T*_{ML} is very robust against imbalance in patient allocation, even when the sample size is 20. The comparison between RAR methods shows that the mean type I errors of GDL and SEU usually match the correct size of the test better than the other methods when *T*_{MC} and *T*_{ML} are used, respectively. The type I error of DBCD is usually the largest, except at *R*_{Odds}. The overall type I error of SEU is comparable with that of GDL.

The power comparison of the test statistics indicates that *T*_{Risk} has the highest power at *R*_{Risk}, but with a substantially inflated type I error. Except at *R*_{Risk}, *T*_{MC} or *T*_{ML} has the highest power. Usually, GDL has the highest power and SEU has the lowest power among all RAR methods. DBCD and SMLE have similar power, but DBCD is more powerful in most cases. At target *R*_{RPW}, the DL urn has the best statistical properties. On average, the target with the lowest power achieved by the test statistics is *R*_{Risk}. The highest overall power is usually achieved at *R*_{RSIHR} and *R*_{LLR}, but *R*_{LLR} has the disadvantage of assigning more patients to the worse treatment in some cases.

In response-adaptive randomization, the assignment of a new patient depends on the treatment outcomes of patients previously enrolled in the trial. Delayed responses are often encountered in practice. Recently, the problem of delayed response in the multi-arm generalized drop-the-loser urn and the generalized Friedman's urn design has been studied for both continuous and discontinuous outcomes [11,16,17,36]. It has been shown that, under reasonable assumptions about the delay, the asymptotic properties of an adaptive design are not affected by the delay. In the present study, the primary focus is the comparison of commonly used test statistics for 2 × 2 tables, and our simulations assume that the response status of each patient already in the trial is available before a new patient is allocated. Based on results not shown here, a less extreme allocation with higher variation would be expected if a random delay were assumed.

The RAR methods simulated in the present study are aimed at assigning patients to the better treatment with probabilities higher than would be allowed by equal randomization. The price paid is that the sample sizes of the two arms being compared are no longer fixed, and the adaptation in patient allocation can complicate the statistical inference at the end of the trial. The properties of the test statistics change as the patient allocation ratio changes during adaptive randomization. The power of the test statistics shown in the present simulation study is obtained by averaging over trials with an unknown distribution of allocation ratios. As shown in our simulation results, a large deviation from the nominal significance level of the hypothesis test can be found even under the null hypothesis. Therefore, the practice of comparing asymptotic hypothesis testing methods based solely on statistical power under the alternative hypothesis is not recommended. It is important to compare adaptive randomization methods based on both the type I error rate and the statistical power, especially when the sample size is small.

The general recommendations given in the Results Section are based on the aggregated results across different settings. Because the performances of the test statistics, RAR methods, and allocation targets are closely related to each other, recommendations for a specific scenario can be derived from the detailed simulation results in Tables 7, 8, 9, 10, 11, and 12.

Based on the simulation results, Cook's correction to the chi-square test statistic (*T*_{MC}) and Williams' correction to the log-likelihood-ratio test (*T*_{ML}) are recommended for hypothesis testing at the end of adaptive randomization. *T*_{MC} has a good ability to attain the correct significance level and is relatively robust against changes in the RAR method or allocation target. *T*_{ML} performs more robustly than *T*_{MC} and has higher power, but its type I error is slightly inflated compared with *T*_{MC}; on the other hand, *T*_{ML} attains a more accurate type I error than *T*_{MC} when the sample size is very small. The original Wald-type Z test statistic *T*_{Wald}, which is very sensitive to patient allocation and has an inflated type I error, should be avoided at small sample sizes. Conversely, *T*_{MW}, Agresti's correction to *T*_{Wald}, and the modified log-odds-ratio test *T*_{MO} are too conservative and underpowered at small sample sizes.

The primary objective of the current study is to compare test statistics. Since the recommended test statistics are *T*_{MC} and *T*_{ML}, the comparison between RAR methods and allocation targets is mainly based on these two selected test statistics. Among the SMLE, DBCD, SEU, and GDL methods, GDL appears to be the best because of its ability to attain the correct size of the hypothesis test and its comparatively higher overall power at most allocation targets. Therefore, GDL is the recommended RAR method. The sequential estimation-adjusted urn (SEU) method is comparable with GDL in controlling the type I error; however, SEU is often underpowered, and its high variation in patient allocation makes it less useful in practice. The DBCD method with tuning exponent *α* equal to 2 is the best at targeting the true allocation ratio. When *T*_{MC} is the test statistic, DBCD has a slightly inflated type I error and slightly lower power compared with GDL. Among the values of *α* considered, the best balance among controlling the type I error, obtaining higher power, and targeting a given allocation ratio is reached when *α* equals 2. The simulation comparison of statistical power for the different RAR methods also indicates that the DL urn has the best statistical properties at *R*_{RPW}, mainly because of its low variation in patient allocation.

The statistical characteristics of the hypothesis tests and RAR methods also depend on the allocation targets. At the *R*_{Wald}, *R*_{Odds}, and *R*_{LLR} targets, more patients could be assigned to the inferior treatment in certain parameter spaces. In contrast, *R*_{Risk}, *R*_{RPW}, and *R*_{RSIHR} always assign more patients to the better treatment. However, because of the more extreme allocations of *R*_{Risk} and *R*_{RPW}, both their power and type I error suffer compared with *R*_{RSIHR}. On the other hand, the variation of patient allocation at *R*_{RSIHR} is relatively small, with a stable value across all simulation scenarios. Additionally, among all designs with similar power using the Wald-type test statistic, the *R*_{RSIHR} allocation ratio can achieve fewer failures in the whole trial. Therefore, *R*_{RSIHR} is recommended among all the allocation targets in the present study.

In addition to the frequentist developments in response-adaptive randomization, Bayesian decision-theoretic methods have also been proposed in the context of the bandit problem. The concept of a "patient horizon" was introduced to include future patients to whom the current study results might be applied. The goal is to maximize the total number of successes among patients enrolled in the study, with or without including the patient horizon. A more detailed exposition of Bayesian methods for response-adaptive randomization is beyond the scope of this paper, and interested readers should consult the original work on this topic [37-40].

Cook's correction to the chi-square test and Williams' correction to the log-likelihood-ratio test are recommended for hypothesis testing in RAR at small sample sizes. Among all the RAR methods compared, the GDL method has the best statistical properties for controlling the type I error while maintaining high statistical power. The RSIHR allocation target provides a good balance between assigning more patients to the better treatment and maintaining a high overall power.

RAR: Response-adaptive randomization; RPW: Randomized play-the-winner; DL: Drop-the-loser; DBCD: Doubly-adaptive biased coin design; SMLE: Sequential maximum likelihood estimation design; SEU: Sequential estimation-adjusted urn; GDL: Generalized drop-the-loser urn; RSIHR: Optimal allocation target minimizing the total number of failures for the Wald-type test statistic at fixed power; MLE: Maximum likelihood estimate.

The authors declare that they have no competing interests.

XMG conducted the simulation part of the study. Both XMG and JJL participated in designing the study and writing the manuscript. All authors read and approved the final manuscript.

The pre-publication history for this paper can be accessed here:

This work was supported in part by grants CA16672 from the National Cancer Institute and W81XWH-06-1-0303 and W81XWH-07-1-0306 from the Department of Defense. The authors thank Dr. Lunagomez for helpful discussions. The authors also thank Ms. Lee Ann Chastain for her help, which greatly improved the presentation of our study.

- Andersen J, Faries D, Tamura R. A randomized play-the-winner design for multi-arm clinical trials. Communications in Statistics-Theory and Methods. 1994;23:309–323. doi: 10.1080/03610929408831257. [Cross Ref]
- Eisele JR. The doubly adaptive biased coin design for sequential clinical trials. Journal of Statistical Planning and Inference. 1994;38:249–262. doi: 10.1016/0378-3758(94)90038-8. [Cross Ref]
- Hu FF, Zhang LX. Asymptotic properties of doubly adaptive biased coin designs for multi-treatment clinical trials. Annals of Statistics. 2004;32(1):268–301.
- Ivanova A, Rosenberger WF, Durham SD, Flournoy N. A birth and death urn for randomized clinical trials: asymptotic methods. Sankhya: The Indian Journal of Statistics. 2000;62(B):104–118.
- Li W, Durham SD, Flournoy N. Randomized Pólya urn. Proceedings of the Biopharmaceutical Section of the American Statistical Association. Alexandria: American Statistical Association; 1997. pp. 166–170.
- Rosenberger WF, Stallard N, Ivanova A, Harper CN, Ricks ML. Optimal adaptive designs for binary response trials. Biometrics. 2001;57:909–913. doi: 10.1111/j.0006-341X.2001.00909.x. [PubMed] [Cross Ref]
- Wei LJ. The generalized Polya's urn design for sequential medical trials. Annals of Statistics. 1979;7:291–296. doi: 10.1214/aos/1176344614. [Cross Ref]
- Wei LJ, Durham SD. The randomized play-the-winner rule in medical trials. Journal of the American Statistical Association. 1978;85:156–162. doi: 10.2307/2289538. [Cross Ref]
- Yang Y, Zhu D. Randomized allocation with nonparametric estimation for a multi-armed bandit problem with covariates. Annals of Statistics. 2002;30:100–121. doi: 10.1214/aos/1015362186. [Cross Ref]
- Zelen M. Play the winner rule and the controlled clinical trial. Journal of the American Statistical Association. 1969;64:131–146. doi: 10.2307/2283724. [Cross Ref]
- Zhang LX, Chan WS, Cheung SH, Hu FF. A generalized drop-the-loser urn for clinical trials with delayed responses. Statistica Sinica. 2007;17(1):387–409.
- Melfi VF, Page C, Geraldes M. An adaptive randomized design with application to estimation. Canadian Journal of Statistics. 2001;29(1):107–116. doi: 10.2307/3316054. [Cross Ref]
- Zhang LX, Hu FF, Cheung SH. Asymptotic theorems of sequential estimation-adjusted urn models. Annals of Applied Probability. 2006;16(1):340–369. doi: 10.1214/105051605000000746. [Cross Ref]
- Coad DS, Ivanova A. Bias calculations for adaptive urn designs. Sequential Analysis. 2001;20(3):91–116. doi: 10.1081/SQA-100106051. [Cross Ref]
- Rosenberger WF, Sriram TN. Estimation for an adaptive allocation design. Journal of Statistical Planning and Inference. 1997;59:309–319. doi: 10.1016/S0378-3758(96)00109-7. [Cross Ref]
- Bai ZD, Hu FF, Rosenberger WF. Asymptotic properties of adaptive designs for clinical trials with delayed response. Annals of Statistics. 2002;30(1):122–139. doi: 10.1214/aos/1015362187. [Cross Ref]
- Hu FF, Zhang LX. Asymptotic normality of urn models for clinical trials with delayed response. Bernoulli. 2004;10:447–463. doi: 10.3150/bj/1089206406. [Cross Ref]
- Rosenberger WF, Vidyashankar AN, Agarwal DK. Covariate-adjusted response-adaptive designs for binary response. Journal of Biopharmaceutical Statistics. 2001;11:227–236. [PubMed]
- Hu FF, Rosenberger WF. The Theory of Response-Adaptive Randomization in Clinical Trials. Hoboken, New Jersey: John Wiley & Sons, Inc.; 2006.
- Hu FF, Rosenberger WF. Optimality, variability, power: evaluating response-adaptive randomization procedures for treatment comparisons. Journal of the American Statistical Association. 2003;98(463):671–678. doi: 10.1198/016214503000000576. [Cross Ref]
- Zhang LJ, Rosenberger WF. Response-adaptive randomization for clinical trials with continuous outcomes. Biometrics. 2006;62(2):562–569. doi: 10.1111/j.1541-0420.2005.00496.x. [PubMed] [Cross Ref]
- Hu FF, Rosenberger WF, Zhang LX. Asymptotically best response-adaptive randomization procedures. Journal of Statistical Planning and Inference. 2006;136(6):1911–1922. doi: 10.1016/j.jspi.2005.08.011. [Cross Ref]
- Morgan CC, Coad DS. A comparison of adaptive allocation rules for group-sequential binary response clinical trials. Statistics in Medicine. 2007;26(9):1937–1954. doi: 10.1002/sim.2693. [PubMed] [Cross Ref]
- Guimaraes P, Palesch Y. Power and sample size simulations for Randomized Play-the-Winner rules. Contemporary Clinical Trials. 2007;28(4):487–499. doi: 10.1016/j.cct.2007.01.006. [PubMed] [Cross Ref]
- Matthews PC, Rosenberger WF. Variance in randomized play-the-winner clinical trials. Statistics & Probability Letters. 1997;35:233–240. doi: 10.1016/S0167-7152(97)00018-7. [Cross Ref]
- Bai ZD, Hu FF. Asymptotics in randomized urn models. Annals of Applied Probability. 2005;15(1B):914–940. doi: 10.1214/105051604000000774. [Cross Ref]
- Ivanova A. A play-the-winner-type urn design with reduced variability. Metrika. 2003;58:1–13.
- Gart JJ. Alternative analyses of contingency tables. Journal of the Royal Statistical Society, Series B. 1966;28:164–179.
- Agresti A, Caffo B. Simple and effective confidence intervals for proportions and differences of proportions result from adding two successes and two failures. The American Statistician. 2000;54(4):280–288. doi: 10.2307/2685779. [Cross Ref]
- Williams DA. Improved likelihood ratio tests for complete contingency tables. Biometrika. 1976;63:33–37. doi: 10.1093/biomet/63.1.33. [Cross Ref]
- Upton GJG. A comparison of alternative tests for the 2 × 2 table comparative trial. Journal of the Royal Statistical Society, Series A. 1982;145:86–105. doi: 10.2307/2981423. [Cross Ref]
- Rosenberger WF, Lachin JM. Randomization in Clinical Trials: Theory and Practice. New York: Wiley; 2002.
- Jennison C, Turnbull BW. Group Sequential Methods with Applications to Clinical Trials. Boca Raton: Chapman & Hall/CRC; 2000.
- Tymofyeyev Y, Rosenberger WF, Hu FF. Implementing optimal allocation in sequential binary response experiments. Journal of the American Statistical Association. 2007;102(477):224–234. doi: 10.1198/016214506000000906. [Cross Ref]
- Sun RB, Cheung SH, Zhang LX. A generalized drop-the-loser rule for multi-treatment clinical trials. Journal of Statistical Planning and Inference. 2007;137(6):2011–2023. doi: 10.1016/j.jspi.2006.06.039. [Cross Ref]
- Berry DA, Fristedt B. Bandit Problems. New York: Chapman and Hall; 1985.
- Thompson WR. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika. 1933;25:275–294.
- Berry DA, Eick SG. Adaptive assignment versus balanced randomization in clinical trials: a decision analysis. Statistics in Medicine. 1995;14:231–246. doi: 10.1002/sim.4780140302. [PubMed] [Cross Ref]
- Cheng Y, Berry DA. Optimal adaptive randomized designs for clinical trials. Biometrika. 2007;94(4):673–689. doi: 10.1093/biomet/asm049. [Cross Ref]

Articles from BMC Medical Research Methodology are provided here courtesy of **BioMed Central**
