The traditional and accelerated titration (AT) designs are two frequently utilized Phase I clinical trial designs. Although each design has theoretical advantages and disadvantages, a summary of the practical application of these theories has not been reported. We report our center's experience in evaluating novel agents using both types of Phase I trial designs over a 13-year period. Results from nine Phase I clinical trials of multiple cytotoxic agents conducted at Wayne State University/Karmanos Cancer Institute in Detroit, MI, and published from 1995–2008 were analyzed for this report. Parameters analyzed included the number of patients, the number of dose levels, the total time to completion of the study, and adverse events. The mean number of patients treated on four Phase I trials using the traditional Phase I trial design was 34 compared to a mean of 23.8 patients treated on five Phase I trials using the AT schema. The mean number of dose levels in patients treated using the traditional Phase I trial design was 8.8 (range 7–11) compared to a mean of 10.6 (range 7–15) dose levels using the AT design. The mean length of study time (25–26 months) was similar in both trial designs. The theoretical advantages and disadvantages of both Phase I trial designs did not readily emerge in their actual application in clinical trials conducted at our institution.
There are currently many Phase I clinical trial designs available for oncologic drug development. Each trial design offers important theoretical advantages and disadvantages. Common objectives in most Phase I clinical trials include identification of the maximum tolerated dose (MTD) of the drug, identification of dose limiting toxicity (DLT), and identification of drug dose appropriate for future Phase II clinical trials (Ahn, 1998). There are at least ten validated Phase I clinical trial designs in drug development, including the traditional Phase I design, Storer's two-stage modified design, Korn's two-stage design, the one-stage modified continual reassessment method (CRM) design, the two-stage modified CRM design, the TITE-CRM (time to event CRM) design, and the accelerated titration (AT) design (Ahn, 1998; Cheung and Chappell, 2000; Simon et al., 1997).
The traditional Phase I design is one that is frequently utilized and often considered a standard trial design in oncologic drug development. The AT design has recently emerged as a popular alternative design. Both Phase I clinical trial designs offer theoretical advantages and disadvantages regarding pivotal issues such as the number of patients treated at suboptimal doses, the time to trial completion, and trial-associated financial costs. In addition, the emergence of newer generation cytotoxic chemotherapies as well as biologically targeted, and often cytostatic, therapies has forced investigators to consider different statistical requirements when designing clinical trials for these novel agents (Edler, 1990; Gelmon et al., 1999; Korn, 2004; Parulekar and Eisenhauer, 2004).
Understandably, given the increased biostatistical knowledge base of early clinical drug development, the issues are complex regarding selection of the appropriate Phase I clinical trial design to optimally and rationally develop these new agents. Although each design has theoretical advantages and disadvantages, a summary of the practical application of these theories has not been reported. In this manuscript, we report the practical experience gained from conducting several Phase I clinical trials employing both the traditional design as well as the AT design in the development of several novel, cytotoxic, anti-cancer agents at Wayne State University/Karmanos Cancer Institute (WSU/KCI) located in Detroit, MI.
The traditional Phase I clinical trial design has been referred to in multiple publications, including the clinical trials discussed in this manuscript, as the Fibonacci design or the modified Fibonacci design. This sequence (or variations thereof) has been utilized as an empirical basis for the current dose escalation schema in Phase I clinical trial designs in oncology (Schneiderman, 1967). The initial rationale for using this approach in Phase I dose escalation studies was most likely sound, but due to the strict constraints of the rule, additional modifications became necessary (Collins et al., 1986; Goldsmith et al., 1975).
Modifications to the dose escalation schema to accommodate clinical and pharmacokinetic variability allowed for increased flexibility in the design. However, in practical application, the modified Fibonacci design is itself quite variable. It may use preset multiples of the starting dose as well as percent increases above the preceding dose level to dictate the dose escalation schema. Because this design does not mandate a specific escalation sequence and/or percentage of increase, Phase I clinical trials employing the modified Fibonacci design may appear to be designed differently, when in fact they are all using the same basic approach. In this manuscript, we recognize the variability of the application of the Fibonacci or modified Fibonacci design and will more appropriately refer to those clinical trials as ones using traditional Phase I clinical trial designs.
The traditional Phase I clinical trial design utilizes clinical information obtained from three to six patients per cohort per dose level to help guide decision-making for safe dose escalation (Lin and Shih, 2001). The starting dose level is often selected to be one-tenth the LD10 in mice (the dose for which 10% of mice experienced drug-induced deaths), a dose believed to not cause significant toxicity in humans (Penta et al., 1992). Dose escalation is conducted with specified increments (i.e., 100%, 67%, 50%, 40%, and all subsequent doses at 33%). At each dose level, at least three patients are evaluated to see if at least one patient experiences a DLT. If one of three patients does experience a DLT, the dose level is further expanded to include treatment of three additional patients, for a total of six patients, to better understand the clinical relevance of the DLT. If only one of the six patients experiences a DLT, then the dose level is deemed “safe,” and dose escalation continues to the next level.
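As an illustration, the escalation increments above can be used to generate a dose ladder. The starting dose, number of levels, and function name below are hypothetical and not taken from any of the trials discussed:

```python
# Illustrative sketch only: build a dose ladder from the escalation
# increments described above (100%, 67%, 50%, 40%, then 33% for every
# subsequent step). Doses are rounded to one decimal for readability.

def dose_ladder(start_dose, n_levels):
    """Return a list of dose levels using the stated escalation increments."""
    increments = [1.00, 0.67, 0.50, 0.40]  # first four escalation steps
    doses = [start_dose]
    for step in range(n_levels - 1):
        pct = increments[step] if step < len(increments) else 0.33
        doses.append(round(doses[-1] * (1 + pct), 1))
    return doses

# Example with a hypothetical 10 mg/m^2 starting dose over 7 levels:
print(dose_ladder(10.0, 7))
```

Note how the relative step size shrinks as the trial approaches potentially toxic doses, which is the empirical rationale behind the modified Fibonacci approach.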
If at least two of six patients experience a DLT, then the maximum administered dose (MAD) level is achieved. The MAD level is also achieved if two of three patients at any dose level experience a DLT. The recommended Phase II dose is one dose level below the MAD dose (sometimes it is required that six patients be treated at that dose with no more than one DLT). The maximum tolerated dose (MTD) is the highest dose level at which six patients have been treated with at most one patient experiencing a DLT. Although there are slight variations to this general concept, such as the use of intra-patient dose escalation for cycles following the first, the traditional Phase I clinical trials discussed in this manuscript utilized the methodology described.
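The decision rule described in the preceding two paragraphs can be sketched as a small function. The function name and return labels are our own shorthand for the text's rules, not part of any trial protocol:

```python
# A minimal sketch of the 3+3 decision rule described above:
# 0/3 DLTs -> escalate; 1/3 -> expand the cohort to six patients;
# <=1/6 -> dose deemed safe, escalate; >=2 DLTs (of 3 or of 6) ->
# maximum administered dose (MAD) reached.

def three_plus_three(dlts, n_treated):
    """Decide the next action at a dose level given DLT count and cohort size."""
    if n_treated == 3:
        if dlts == 0:
            return "escalate"
        if dlts == 1:
            return "expand to 6"
        return "MAD reached"      # 2 or 3 of 3 patients with DLT
    if n_treated == 6:
        if dlts <= 1:
            return "escalate"     # dose deemed safe
        return "MAD reached"      # >=2 of 6 patients with DLT
    raise ValueError("cohort size must be 3 or 6")
```

Under this rule, the recommended Phase II dose would then be one level below the dose at which "MAD reached" is returned.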
The AT design for Phase I clinical trials was developed in order to address perceived limitations of the traditional design, in hopes of reducing the number of patients treated at lower, potentially subtherapeutic doses (Simon et al., 1997). The AT design is primarily used for first-in-human clinical trials in which the starting dose for the trial is obtained from preclinical toxicology data.
The AT Phase I designs (2B, 3B, 4B) were tested by fitting a stochastic model to data from twenty Phase I trials evaluating nine different agents (Simon et al., 1997). The AT designs 2B and 3B escalate using one-patient cohorts until one patient exhibits a DLT, or two patients exhibit grade 2 toxicity, during the first course of treatment, at which time the design reverts to a traditional Phase I dose escalation schema. (AT design 4B proceeds in similar fashion, but considers DLT in any course.) At that point, two additional patients are enrolled at the current dose and dose escalation proceeds in three- to six-patient cohorts. Designs 3B and 4B differ from 2B with respect to the rapidity of dose escalation; designs 3B and 4B allow for 100% increments between dose levels, compared to 40% increments for design 2B. All three designs allow for intra-patient dose escalation (variants without intra-patient dose escalation are designated 2A, 3A, and 4A). The trials discussed in this manuscript utilize both the 2B and 4B designs but not the 3B design.
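A minimal sketch of the accelerated phase of design 2B as described above, assuming that grade 3 or higher first-course toxicity counts as a DLT (a simplification; actual protocols define DLT per trial) and using hypothetical toxicity inputs:

```python
# Simplified sketch of the accelerated phase of AT design 2B: one-patient
# cohorts with 40% dose increments, switching to a traditional 3-6 patient
# schema on the first first-course DLT or on the second patient showing
# first-course grade 2 toxicity. Design 3B/4B would use 100% increments.

def accelerated_phase(start_dose, first_course_tox, increment=0.40):
    """Walk one-patient cohorts; return (dose at switch, patients used).

    first_course_tox: worst first-course toxicity grade for each
    successive single-patient cohort (hypothetical input).
    """
    dose = start_dose
    grade2_count = 0
    for n, grade in enumerate(first_course_tox, start=1):
        if grade >= 3:            # treated as DLT for this sketch
            return dose, n        # revert to traditional escalation here
        if grade == 2:
            grade2_count += 1
            if grade2_count == 2:
                return dose, n
        dose = round(dose * (1 + increment), 1)
    return dose, len(first_course_tox)

# Example: first-course grades 0, 1, 2, 2 trigger the switch at patient 4.
```

The sketch makes the design's trade-off visible: each early cohort spends only one patient per dose level, so fewer patients are treated at the lowest, potentially subtherapeutic doses.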
Results from nine Phase I clinical trials of multiple cytotoxic agents conducted at WSU/KCI and published from 1995–2008 were analyzed for this report. Four clinical trials (NSC366140, SR233377, NSC374551, CI-994) were conducted utilizing the traditional design (Jasti et al., 2001; LoRusso et al., 1995, 2000; Prakash et al., 2001). Two of the novel cytotoxic agents (NSC366140 and SR233377) were administered intravenously, and the remaining two agents (NSC374551 and CI-994) were administered orally.
Five clinical trials (5-fluoro-pyrimidinone, XK-469, SR271425, BMS247550, KRN5500) were conducted utilizing the AT design (Alousi et al., 2007; Gadgeel et al., 2003, 2005; Goncalves et al., 2008; LoRusso et al., 2002). Four of five agents (XK-469, SR271425, BMS247550, KRN5500) were administered intravenously using the 2B design, and the remaining agent (5-fluoro-pyrimidinone) was administered orally using the 4B design.
The starting dose level as well as the clinical trial design for the nine clinical trials were determined based on preclinical toxicities, specific study constraints such as eligibility criteria, and concerns for clinical toxicity. Although there were other Phase I trials conducted at our institution during the 13-year period, the nine Phase I trials in this manuscript were selected for discussion because they were primarily investigator-initiated trials and/or trials that were within our design control. One Phase I trial evaluating SR271425 was conducted as a multi-institutional trial with WSU/KCI as the lead clinical site (Goncalves et al., 2008). The number of patients, number of dose levels, total time to completion of the study, and adverse events were obtained from published manuscripts, meeting abstracts, and when necessary, from the original study database. Results discussed from simultaneously conducted Phase I studies of KRN5500 and BMS247550 completed elsewhere were obtained from published manuscripts (Mani et al., 2004; Spriggs et al., 2001; Supko et al., 2003).
Toxicity was graded according to NCI Common Toxicity Criteria version 2.0.
Demographics of the patients treated on the nine Phase I clinical trials are given in Table 1. The patients' median age was 51–60.5 years across the nine trials, and 52–79% of the patients were male. For most of the trials, the patients were heavily pretreated with chemotherapy only (median % pretreated was 84%). The four trials that followed a traditional design were published during 1995–2001, before the AT design (Simon et al., 1997) was widely used. The five trials that followed an AT design were all planned post-1997 and published during 2002–2008.
Table 2 presents several parameters within the nine Phase I clinical trials, including route of drug administration, total number of patients at trial completion, total number of dose levels at trial completion, and total length of time to study completion. The mean number of patients treated on four Phase I trials using the traditional design was 34 (median 29.5), compared to a mean of 23.8 (median 21) patients treated on five Phase I trials using the AT design. The difference is primarily accounted for by high patient accrual on the CI-994 clinical trial (Prakash et al., 2001). Of the 53 patients enrolled on the CI-994 trial, the MTD in the acute dosing phase of the trial (dose defined as CI-994 orally daily for 2 weeks) was reached in 11 patients vs. 29 patients in the chronic dosing phase of the trial (dose defined as CI-994 orally daily for 8 weeks) vs. 13 patients in the food effect dosing phase of the trial (dose defined as CI-994 orally daily administered with food). If one considers the acute and the chronic dosing phases of the trial as two separate trials, then the adjusted mean number of patients decreases to 24.6 (median 28) patients. Dose level cohort expansion at the MTD allows investigators to further evaluate the effect of food on dosing, obtain biomarker data, or gain a better understanding of the drug in a more homogeneous, disease-specific patient population.
The mean number of dose levels in patients treated on four Phase I trials using the traditional Phase I trial design was 8.8 (range 7–11), compared to a mean of 10.6 (median 11; range 7–15) dose levels for patients treated with the AT design. The mean time to completion of study was 25.8 (median 26.5) months using the traditional design compared to 25.2 (median 23.0) months using the AT design.
In the five trials utilizing the AT design, the “switch” to the traditional Phase I trial design occurred at different times during dose escalation. Using the AT design, 69 (58%) of 120 patients were treated at doses below the final MTD, compared to 96 (71%) of 136 patients treated at doses below the final MTD using the traditional design. The number of patients treated at the MTD in the five trials using the AT design was the minimum of six patients, except for the trial evaluating SR271425 (Goncalves et al., 2008). In this clinical trial, 3 patients were enrolled at the maximum administered dose (MAD) of 1320mg/m2, with 2 patients experiencing grade 3 toxicity. However, the study was terminated by the sponsor for administrative reasons and additional patients were never enrolled. For purposes of this analysis, the MAD for SR271425 was considered the MTD. The mean number of patients treated below the final MTD was 13.8 using the AT design and 24 using the traditional design.
The results of additional studies conducted elsewhere evaluating KRN5500 and BMS247550 in various dosing schedules were also analyzed and compared (Mani et al., 2004; Spriggs et al., 2001; Supko et al., 2003). The same dosing schedule of KRN5500 administered intravenously daily times 5 days every 3 weeks was evaluated in a traditional Phase I trial design by Supko et al. (2003). The total number of patients needed to complete their study was similar to ours: 26 vs. 24, respectively. The starting dose was the same at 0.8mg/m2/day. Our AT design required eight dose levels to complete the trial compared to five dose levels using their traditional design. There were no objective responses in either trial. The recommended Phase II dose was higher from our AT design (4.3mg/m2/day) than the 2.9mg/m2/day from their traditional design. As KRN5500 was ultimately not pursued for any further drug development in a follow-up Phase II trial, the “true” recommended Phase II dose remains unknown.
BMS247550 was evaluated in three additional Phase I trials utilizing the AT design (4B and 2B) (Gadgeel et al., 2005; Mani et al., 2004; Spriggs et al., 2001). The dosing schedule, the number of dose levels, and the starting dose were all relatively similar for the three trials. The mean number of patients was 24 (range 17–31). The recommended Phase II dose was the same at 40mg/m2 intravenously every 3 weeks in two of the three trials (Gadgeel et al., 2005; Mani et al., 2004). Interestingly, the higher recommended Phase II dose (50mg/m2 intravenously every 3 weeks) was obtained from a Phase I trial that also utilized the AT 2B design, as we did (Spriggs et al., 2001). When comparing those two additional trials that utilized the 2B design, the number of patients was quite different (17 vs. 31) (Gadgeel et al., 2005; Spriggs et al., 2001). However, the difference in patient numbers is accounted for by the enrollment of 10 additional patients at the MTD for a total of 31 patients (Spriggs et al., 2001). Otherwise, the number of patients in the two trials utilizing the 2B design was fairly comparable (17 vs. 21). The dose selected for further development in Phase II clinical trials was 40mg/m2 intravenously every 3 weeks.
Except for one partial response on the CI-994 trial, there were no objective responses observed on any of the nine Phase I trials reviewed here. Some of the trials did report minor or minimal responses: two on Fenretinide, and one on 5-FP.
The importance of utilizing the appropriate Phase I clinical trial design cannot be overemphasized, recognizing both the theoretical and practical advantages and disadvantages in the ever-changing landscape of drug development in oncology. There are several assumptions to be considered in developing new cytotoxic oncologic drugs, including increasing “clinical benefit” with increasing dose, increasing toxicities with increasing dose, and applicability of the MTD in larger Phase II and III clinical trials. These assumptions are rapidly evolving in Phase I clinical trial designs for biologically targeted drugs and certainly in Phase I clinical trials that evaluate cytotoxic drug combinations. Intuitively, the emergence of biologically targeted agents has led to an increase in individualistic trial designs to accommodate a more boutique-driven approach to rational drug development. As we better understand the mechanisms of action of multiple novel agents, it has become more important to design Phase I trials appropriately from the beginning. The specific mechanism of action of new agents will change over time, but it is imperative that the foundation of dose escalation stay consistent and true to specific design principles.
One major concern with the traditional Phase I clinical trial design is the treatment of patients with advanced cancers at sub-optimal drug doses, defined as doses too low to elicit a positive response against their disease. Other concerns include treating patients with advanced cancers with overly toxic doses of drug, the length of time to completion of trial, and the potential inadequacy of the design to determine the “correct” dose of drug for future Phase II and III trials. The common thread that binds these concerns together is ethics. Neither patient nor physician would condone treatment that is perceived as ethically unsound. In an era of an increasing number of new drugs available for clinical testing and a small number of patients willing to enroll in clinical trials, investigators must be careful to select a Phase I trial design that optimizes and encourages the process toward the “right answer” while maintaining patient integrity and safety with minimal toxicities (Koyfman et al., 2007).
The AT Phase I clinical trial designs were created to address, in part, some of these valid concerns. Theoretical advantages associated with a particular clinical trial design when applied to individuals who have unpredictable and variable cancers may not be fully realized. Therefore, information gained from practical application of AT designs in comparison to the traditional design is highly desirable.
At first glance, the theoretical advantages of the AT design are not dramatically apparent when actually applied to the development of several cytotoxic oncologic agents. Compared to the traditional design, the clinical trials conducted at our institution using the AT design did not substantially reduce the total number of patients or the length of study but did increase the mean number of dose levels required to reach the MTD. However, the patients on the AT trials experienced grade 3 and 4 toxicities at much higher dose levels than did patients on the traditional trials, which may or may not be a consequence of the agents themselves. The length of study was similar in both trial designs; the AT trials moved at a faster pace, which allowed more patients to be treated at higher dose levels. Indeed, it is possible that the initial dose level in our AT trials was defined too conservatively, ultimately leading to an increased number of dose levels prior to establishing the MTD. A selection bias toward a “lower than the usual LD10 starting dose” with the AT design may be present due to concerns regarding the limitations of one-patient cohorts. Had the initial dose level been set higher, the theoretical advantages of the AT design may have been readily observed in our WSU/KCI experience.
Regardless of the different number of dose levels, the absolute number of patients treated at a dose without toxicities up to the MTD may be a more representative comparison of the two trial designs. This absolute number of patients may potentially be considered as the total number of patients who were suboptimally treated because they were far from the MTD and experiencing no tumor response (although with minimal drug toxicity). Across the trials reported here, the mean and median number of patients treated below the MTD was less using the AT design compared to the traditional design (mean of 13.8 vs. 24 patients, median of 10 vs. 20.5 patients). This advantage to the AT design may have been even more evident with the use of starting doses that were more comparable to those used in association with the traditional design.
Using disease response as a measure of treatment success with either the traditional or AT design is difficult because the disease response rate with any treatment in this heterogeneous, heavily pretreated, and advanced disease patient population is low. The concern about lack of disease response in all Phase I trials regardless of their design has been an important topic of discussion amongst physicians and patients. Horstmann et al. (2005) have reported the results of 460 Phase I clinical trials conducted at the National Cancer Institute from 1991–2002. The overall response rate of all trials was 10.6 percent with an additional 34.1 percent of patients experiencing stable or “less-than-partial” disease response. Taken together, 44.7 percent of patients (from a total of 11,935 patients) achieved a “clinical benefit.” Although the term “clinical benefit” has yet to be further explored and defined, it is important to recognize that the perception that Phase I trials carry a low benefit-risk ratio for patients with advanced cancer may not be truly justified.
In our attempts to further minimize any additional comparative bias in the two trial designs, the eventual recommended Phase II dose (RP2D) from the Phase I trials was evaluated. The RP2D of the same cytotoxic agent was variable when comparing the results of trials conducted at our institution to those conducted elsewhere. The RP2D of all three BMS247550 AT design Phase I trials was relatively similar at a dose of 40mg/m2 intravenously every 3 weeks (with one trial recommending a dose of 50mg/m2 intravenously every 3 weeks). However, the two KRN5500 Phase I trials using the two different designs concluded with two different RP2Ds (4.3mg/m2/day vs. 2.9mg/m2/day intravenously times 5 days). Although not conclusive, this observation does highlight the variability of Phase I trials potentially attributed to clinical trial design, patient population, or different institutions.
Clinical trial design evolves over time and gradually reflects newly published methodology, e.g., the AT design approach was first published in 1997 (Simon et al., 1997). Our traditional and AT designed trials were published in two distinct eras, so it is possible that patient demographics and/or pretreatment history could have also changed over time. However, the patient demographics and prior chemotherapy data given in Table 1 are similar in the two eras, suggesting that those are not likely to be confounding factors of any differences observed in the results from our traditional design vs. AT design Phase I trials.
We recognize that the comparison of different Phase I clinical trials evaluating various oncologic agents using different trial designs has its limitations. However, we believe it is important to describe our center's experience in evaluating novel agents using both types of Phase I trial designs over a 13-year period. Practical application of new theoretical clinical trial designs in the hopes of overcoming perceived trial disadvantages is critical in maximizing patient benefit while safely advancing rational drug development.
Supported in part by NIH grants UO1 CA-062487 and Cancer Center Support Grant CA-22453.