
Br J Cancer. 2008 December 9; 99(12): 2001–2005.

Published online 2008 November 18. doi: 10.1038/sj.bjc.6604792

PMCID: PMC2607226

P Qu,^{1,}^{*} H Chu,^{2} J G Ibrahim,^{2} J Peacock,^{3} X J Shen,^{3} J Tepper,^{4} R S Sandler,^{3} and T O Keku^{3}

Received 2008 July 3; Revised 2008 October 21; Accepted 2008 October 24.

Copyright 2008, Cancer Research UK

The evaluation of tumour molecular markers may be beneficial for prognosis and for predicting response to therapy. We developed a stopping rule approach to assist in the efficient use of the resources and samples involved in such evaluations. The approach can determine whether a specific molecular marker has sufficient variability to yield meaningful results after the marker has been evaluated in the first *n* patients of a study of sample size *N* (*n*<*N*). We evaluated colorectal tumours for mutations (microsatellite instability, K-ras, B-raf, PI3 kinase, and TGF*β*R-II) by PCR and for protein markers (Bcl2, cyclin D1, E-cadherin, hMLH1, Ki67, MDM2, and P53) by immunohistochemistry. Using this method, we identified and abandoned potentially uninformative molecular markers in favour of more promising candidates. This approach conserves tissue resources, time, and money, and may be applicable to other studies.

Research on the molecular biology of colorectal cancer has increased our expectation that a better understanding of molecular changes in colorectal tumours may improve our knowledge of aetiology and treatment. Recently, investigators have recognised that molecular characteristics of colorectal cancers are associated with prognosis and therapeutic response. Studies suggest that some of the major genetic players in colorectal neoplasia, such as p53 mutations, are associated with poorer prognosis (Hardingham *et al*, 1998). Other studies report correlations between K-ras mutations, tumour stage, and survival (Andreyev *et al*, 1998; Samowitz *et al*, 2000). In a population-based study of 607 colorectal cancer patients, Gryfe *et al* (2000) observed that high-frequency microsatellite instability (MSI) conferred significant survival advantage independent of other prognostic factors including tumour stage.

Molecular studies in colorectal cancer may help us better understand how genetic alterations could alter prognosis or affect response to cytotoxic agents. However, there are limitations in the analysis of molecular markers in studies of colorectal cancer prognosis. Often, studies have a limited number of tissue samples or have samples from a small number of subjects. Furthermore, variation in the expression of markers in tumour samples might be too small to detect differences in prognosis, limiting the utility of some markers. Therefore, there is a need for strategies to use resources efficiently in studies of molecular markers of prognosis.

In the conduct of a population-based study to determine prognostic and predictive molecular factors for colorectal cancer, we used data from more than 100 patients to develop a strategy to determine whether specific molecular markers possess sufficient variability to yield meaningful results in a study of sample size 1000. Using this method, molecular markers that were unlikely to be informative were abandoned in an early stage of the study in favour of mutations or protein markers showing more promise. This method allowed us to conserve time and resources, and may be applicable to other molecular studies.

We are conducting a population-based study of colorectal cancer in 33 county areas of North Carolina. This study, Cancer Care Outcomes Research and Surveillance (CanCORS), is a multicentre population-based study, funded by the National Cancer Institute, to evaluate patient, physician, and treatment factors that influence colorectal cancer outcomes. As part of the CanCORS study at the University of North Carolina, we collected tumour tissue from consenting subjects and constructed tissue microarrays (Kononen *et al*, 1998) to be used for immunohistochemistry and mutational analysis as part of the UNC GI Specialized Programme in Research Excellence (SPORE) grant. We enrolled 1000 patients (*N*=1000) into the study, and the study was approved by the institutional review board (IRB) of the UNC School of Medicine. From more than 100 patients, we evaluated genetic mutations in p53 (Angelopoulou and Diamandis, 1998; Curtin *et al*, 2004), K-ras, B-raf, TGF*β*R-II, and MSI (Boland *et al*, 1998), and examined protein expression of MDM2, Bcl2, cyclin D1, Ki67, P53, hMLH1, and E-cadherin by immunohistochemistry using commercial antibodies.

In our study of *N*=1000 patients, our objective was to develop a stopping rule that might be applied after the first *n* patients were evaluated (*n*<*N*) to improve efficiency and lower cost. A binary mutation marker variable takes a value of 0 or 1 to represent the absence or presence of a mutation, respectively. To assess the effectiveness of a marker, one typically employs a regression model to correlate the marker variable with the outcome. A problem arises in the early stages of a study, when time-to-event outcomes are not yet available because of short follow-up, hindering the evaluation of marker effectiveness in terms of survival. However, one can still make informative decisions about marker effectiveness by evaluating marker variability. If, among the first *n* (*n*<*N*) patients, nearly all either carry or lack the mutation, the marker has little variability and is likely to have little impact on prognosis.

To evaluate marker effectiveness through marker variability without survival outcome data, we find it appropriate to use the relation between power and sample size. Let *α* denote the significance level and *Z*_{1−α} the (1−*α*) × 100% percentile of the standard normal distribution. Assuming the Cox proportional hazards model, Schoenfeld (1983) derived a sample size and power relation for two-sample comparisons (eg, mutations *vs* non-mutations) in which the proportion of the mutation group *p* satisfies:

*p*(1−*p*) ≥ (*Z*_{1−α}+*Z*_{1−β})^{2}/[*D*(log *Δ*)^{2}],  (1)

where *Δ* is the hazard ratio between the two samples, *D* is the total number of deaths among the *N* patients (which can also be written as *D*=*N* × *d*, where *d* is the overall death rate), and 1−*β* is the power. This formula shows the relationship between the hazard ratio (or effect size) *Δ*, the variability *p*(1−*p*), and statistical power, all other parameters being fixed. Clearly, to detect a specific effect *Δ* between mutations and non-mutations, the power can be too low when there is little variability in a marker. This suggests that we can compute a lower and an upper bound for the mutation rate from (1), such that there is sufficient power (80%) to detect a specific effect *Δ* if the mutation rate falls between the bounds. When marker data from *n* patients (*n*<*N*) are available, but survival data are not, we can construct a 95% confidence interval for the mutation rate and compare it with the bounds. If the 95% confidence interval falls completely below the lower bound (or completely above the upper bound), it suggests that the marker might have too little variability to be effective in predicting survival, even if marker data were collected from all *N* patients. In such circumstances, investigators can make informed decisions about whether to continue data collection on a marker that is unlikely to be effective, or to direct resources to other markers showing more promise.
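For illustration, the bounds in Table 1 can be reproduced from formula (1) by solving *p*(1−*p*) ≥ *v* as a quadratic in *p*. The sketch below is ours, not from the paper (which does not describe its software); the quantile constants correspond to *α*=0.05 and 80% power.

```python
import math

# Sketch reproducing the Table 1 bounds from formula (1) (Schoenfeld, 1983).
# Function name and implementation are illustrative, not from the paper.
Z_ALPHA = 1.6449   # Z_{1-alpha} for alpha = 0.05
Z_BETA = 0.8416    # Z_{1-beta} for 80% power

def mutation_rate_bounds(N, d, hazard_ratio, z_alpha=Z_ALPHA, z_beta=Z_BETA):
    """Return (pL, pU), the mutation rates between which 80% power is attained."""
    D = N * d  # expected total deaths
    # Formula (1) requires p(1-p) >= v; solve the quadratic p^2 - p + v = 0
    v = (z_alpha + z_beta) ** 2 / (D * math.log(hazard_ratio) ** 2)
    if v > 0.25:  # p(1-p) never exceeds 0.25: no mutation rate gives enough power
        return None
    root = math.sqrt(1 - 4 * v)
    return (1 - root) / 2, (1 + root) / 2

pL, pU = mutation_rate_bounds(N=1000, d=0.6, hazard_ratio=1.5)
print(round(pL, 3), round(pU, 3))  # 0.067 0.933, matching Table 1
```

Returning `None` when *v* exceeds 0.25 mirrors the situation described for Table 1, where low death rates and small effect sizes admit no solution.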

To decide how big *n* should be, we provide the following formula:

*n* = *Z*_{1−α/2}^{2}*p*(1−*p*)/*L*^{2},  (2)

where *p* can be taken as 0.5 and *L* is a prespecified precision, defined as the half-width of a 95% confidence interval. By using a small *L*, we can expect an accurate estimate of the mutation rate based on only *n* (*n*<*N*) data points. It is important to note that, in addition to variability, effect size plays an important role in (1). When calculating the lower and upper bound (or simply the variance bound) at 80% power, we have to supply a value for the effect size. Unfortunately, the true effect size of a marker is unknown and cannot be estimated in the absence of survival data. Under such circumstances, supplying a value lower than the true effect size results in a higher variance bound, making it easier to reject a marker; supplying a value larger than the true effect size only makes it harder to reject a marker. We therefore recommend supplying an upper bound for the effect size, to minimise the chance of discarding important markers that may have very low variability but huge effects on survival.
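Formula (2) itself is a one-line computation; a sketch, assuming *L* is the half-width of the 95% confidence interval (consistent with the *n*=97 worked example in the text):

```python
import math

# Sketch of formula (2): patients needed to estimate a binary mutation rate
# to within half-width L of a 95% CI. p = 0.5 maximises p(1-p).
def binary_marker_n(L, p=0.5, z=1.96):
    return math.ceil(z ** 2 * p * (1 - p) / L ** 2)

print(binary_marker_n(L=0.1))  # 97, the value used in the study
```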

The protein markers under investigation were assessed by immunohistochemistry. The scoring system was based on the proportion of cells stained and the intensity of staining (Hoos and Cordon-Cardo, 2001). The final score took continuous values between 0 and 5. As with the mutation data, our goal was to develop a stopping rule for protein marker data that might be applied after the first *n* (*n*<*N*) patients have been observed. If the variance of a protein marker is very small, the marker will likely have little prognostic value. The method illustrated here is again useful in the early stages of a study, when survival outcomes are not yet available. Assuming the Cox proportional hazards model, Hsieh and Lavori (2000) derived a sample size formula in which the variance of a continuous variable *σ*^{2} satisfies

*σ*^{2} ≥ (*Z*_{1−α}+*Z*_{1−β})^{2}/[*D*(log *Δ*)^{2}],  (3)

where *Δ* is the hazard ratio associated with one unit of increase in marker values. As in the binary marker case, a lower bound for the marker variance can be computed by solving (3) for *σ*^{2}, such that there is at least 80% power to detect a specific survival effect *Δ*, given the overall death rate *d* (*D*=*N* × *d*). Unlike the binary case, there is no upper bound for the marker variance in the continuous case. Again, effect size plays an important role in (3), in addition to variability. We do not want to underestimate the true effect size of a marker when calculating the variance lower bound; overestimating the true effect size only makes the method conservative.
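The variance lower bound follows directly from solving (3) for *σ*^{2}; a sketch (function name ours, quantiles for *α*=0.05 and 80% power):

```python
import math

# Sketch: minimum marker variance for 80% power, from formula (3) solved
# for sigma^2 (Hsieh and Lavori, 2000). Function name is illustrative.
def variance_lower_bound(N, d, hazard_ratio, z_alpha=1.6449, z_beta=0.8416):
    D = N * d  # expected total deaths
    return (z_alpha + z_beta) ** 2 / (D * math.log(hazard_ratio) ** 2)

print(round(variance_lower_bound(N=1000, d=0.6, hazard_ratio=1.5), 3))  # 0.063
```

The printed value matches the 0.063 minimum variance reported from Table 3 for *d*=0.6 and a hazard ratio of 1.5.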

If only continuous markers were evaluated in a study, one could use the following formula to compute the required sample size *n* to satisfy a certain precision *L*:

*n* = *Z*_{1−α/2}^{2}*s*^{2}/*L*^{2},  (4)

where *s*^{2} is an estimate of *σ*^{2}, according to a pilot study. However, when both binary and continuous markers are evaluated in a study, there is no need to compute *n* twice. In that case, one can compute *n* based on formula (2) because of its simplicity.
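A sketch of this calculation, under the assumption that formula (4) takes the standard precision form *n* = *Z*^{2}*s*^{2}/*L*^{2}; the pilot variance *s*^{2}=0.5 below is illustrative, not a study value:

```python
import math

# Hypothetical sketch: sample size for a continuous marker, assuming formula (4)
# has the standard form n = Z^2 s^2 / L^2; s2 = 0.5 is an illustrative pilot value.
def continuous_marker_n(s2, L, z=1.96):
    return math.ceil(z ** 2 * s2 / L ** 2)

print(continuous_marker_n(s2=0.5, L=0.1))  # 193
```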

We need at most *n*=97 (=1.96^{2} × 0.25/0.1^{2}) patients to satisfy a precision of 0.1 in evaluating a binary mutation marker (in formula (2), let *p*=0.5, as it gives the highest possible value for the right-hand side). Table 1 displays the lower and upper bounds of the mutation rate, denoted *p*_{L} and *p*_{U}, for a range of overall death rates and effect sizes, with power fixed at 80%, *N*=1000, and *α*=0.05. The bounds add up to 1 for each combination of overall death rate *d* and effect size *Δ* because of the symmetry in the left side of formula (1). When the overall death rate is lower than 20% and the effect size is also low, there are no solutions for *p*_{L} and *p*_{U}, because 80% power cannot be achieved regardless of the mutation rate. Figure 1 displays *p*_{L} and *p*_{U} when power is fixed at 80%, *α*=0.05, *d*=0.6, and *N*=1000. If the 95% confidence interval for the mutation rate of a genetic marker falls completely in the grey area, it suggests that the marker has too little variability to be effective. For our study, we evaluated mutation markers such as PI3 kinase, K-ras, B-raf, TGF*β*R-II, and MSI (Table 2). At the time this method was developed, we had collected data for more than 97 patients; Table 2 displays the results based on all the data available at that time. We thought it reasonable to expect a 0.6 overall death rate among the *N*=1000 registered patients, and a hazard ratio of no more than 1.5 (i.e., 1.5 was an upper bound for the effect size between the mutation and non-mutation groups). As shown in Table 1, the lower and upper bounds of the mutation rate are 0.067 and 0.933, respectively (for a 0.6 overall death rate and a 1.5 hazard ratio). Among the markers, only TGF*β*R-II had a 95% confidence interval, (0.019, 0.057), falling completely below the 0.067 lower bound. This indicates that fewer than 6% of the population had TGF*β*R-II mutations, a rate too low to provide sufficient power (80%) to predict prognosis, even if we gathered TGF*β*R-II mutation data from all *N*=1000 patients. We therefore stopped further genetic analysis of TGF*β*R-II and focused attention on the other markers.
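The stopping decision for a binary marker then reduces to comparing the marker's 95% confidence interval with the Table 1 bounds; a minimal sketch (function name ours) using the TGF*β*R-II interval reported above:

```python
# Minimal sketch of the stopping rule for a binary marker: abandon the marker
# if its 95% CI for the mutation rate lies entirely outside [pL, pU].
def stop_binary_marker(ci_low, ci_high, pL, pU):
    return ci_high < pL or ci_low > pU

# TGFbetaR-II: 95% CI (0.019, 0.057) vs bounds (0.067, 0.933) from Table 1
print(stop_binary_marker(0.019, 0.057, 0.067, 0.933))  # True -> stop the marker
```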

Figure 1 Lower and upper bounds calculated at 80% power, *α*=0.05, overall death rate *d*=0.6, and total sample size *N*=1000, for comparison with the mutation rate estimated from *n* (*n*<*N*) patients. The shaded area represents the rejection region.

Table 3 presents the minimum variance required to detect a specific hazard ratio for a range of overall death rates, with power fixed at 80%, *N*=1000, and *α*=0.05. In our study, protein expression was measured for Bcl2, cyclin D1, E-cadherin, hMLH1, Ki67, MDM2, and P53, and we computed a 95% confidence interval for the variance of each marker (Table 4). Again, the overall death rate was expected to be 0.6, and the true effect size was assumed to be no more than 1.5 per unit increase in protein marker values. According to Table 3, the minimum variance required for each marker is 0.063. As the lower confidence limits for all markers exceeded 0.063, none of the markers met the stopping criterion at this early stage of the study. Figure 2 displays the rejection region for the variance when power is fixed at 80%, *α*=0.05, *d*=0.6, and *N*=1000, which is the case for our study.
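A variance confidence interval of this kind can be computed with the classical chi-square interval for a normal variance. The sketch below approximates the chi-square quantile with the Wilson-Hilferty formula to stay dependency-free; the *n* and *s*^{2} values are illustrative, not study data:

```python
import math

# Sketch: 95% CI for a marker's variance via the classical chi-square interval
# (n-1)s^2 / chi2. chi2_ppf is a Wilson-Hilferty approximation covering only
# the two quantiles needed here; n = 100 and s2 = 0.5 are illustrative values.
def chi2_ppf(p, k):
    z = {0.975: 1.95996, 0.025: -1.95996}[p]
    return k * (1 - 2 / (9 * k) + z * math.sqrt(2 / (9 * k))) ** 3

def variance_ci(s2, n):
    lo = (n - 1) * s2 / chi2_ppf(0.975, n - 1)
    hi = (n - 1) * s2 / chi2_ppf(0.025, n - 1)
    return lo, hi

lo, hi = variance_ci(s2=0.5, n=100)
# Keep the marker as long as the lower limit exceeds the 0.063 minimum (Table 3)
print(lo > 0.063)  # True -> do not stop
```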

Figure 2 Minimum variance calculated at 80% power, *α*=0.05, overall death rate *d*=0.6, and total sample size *N*=1000, for comparison with the marker variance estimated from *n* (*n*<*N*) patients. The shaded area represents the rejection region.

The prospect that we might use the molecular characteristics of tumours to determine patient prognosis and predict response to chemotherapy is compelling. Studies to date have shown promising results, and there is every expectation that continued research will further improve our prognostic and predictive abilities. Although it is tempting to perform molecular analyses on an entire study sample, depending on the size of the sample and the variability in a marker, the analysis might not be informative. In this study, we have illustrated one potential approach to evaluating marker effectiveness in the early stage of a study, when survival data are not yet available and the number of markers under consideration is limited. We recommend supplying an upper bound for the true effect size when calculating the marker variance bounds; in doing so, we minimise the chance of discarding important markers that may have very low variability but huge effects on survival. The method is conservative, in that we do not abandon markers early unless they show extremely low variability. However, if any markers are identified as ineffective, the savings in money, time, and resources may be significant.

Institutional review boards and funding agencies generally demand power calculations (Friedman *et al*, 1999) as a prerequisite for study approval. The stakes are lower for molecular studies on existing samples, but the ethical impetus remains to make efficient use of resources and of precious, often irreplaceable, patient samples. Our approach helps identify uninformative markers in the early stage of a large molecular study to conserve time and resources. To fully assess this approach, future researchers should consider evaluating the real gains and losses of applying it to a large, completed study.

This research was supported, in part, by grants from the National Institutes of Health (U01 CA93326 and P50 CA106991).

- Andreyev HJ, Norman AR, Cunningham D, Oates JR, Clarke PA. Kirsten ras mutations in patients with colorectal cancer: the multicenter ‘RASCAL' study. J Natl Cancer Inst. 1998;90:675–684.
- Angelopoulou K, Diamandis EP. Identification of deletions and insertions in the p53 gene using multiplex PCR and high-resolution fragment analysis: application to breast and ovarian tumors. J Clin Lab Anal. 1998;12:250–256.
- Boland CR, Sato J, Saito K, Carethers JM, Marra G, Laghi L, Chauhan DP. Genetic instability and chromosomal aberrations in colorectal cancer: a review of the current models. Cancer Detect Prev. 1998;22:377–382.
- Curtin K, Slattery ML, Holubkov R, Edwards S, Holden JA, Samowitz WS. p53 alterations in colon tumors: a comparison of SSCP/sequencing and immunohistochemistry. Appl Immunohistochem Mol Morphol. 2004;12:380–386.
- Friedman LM, Furberg CD, DeMets DL. Fundamentals of Clinical Trials. Springer-Verlag: New York; 1999, pp 94–129.
- Gryfe R, Kim H, Hsieh ET, Aronson MD, Holowaty EJ, Bull SB, Redston M, Gallinger S. Tumor microsatellite instability and clinical outcome in young patients with colorectal cancer. N Engl J Med. 2000;342:69–77.
- Hardingham JE, Butler WJ, Roder D, Dobrovic A, Dymock RB, Sage RE, Roberts-Thomson IC. Somatic mutations, acetylator status, and prognosis in colorectal cancer. Gut. 1998;42:669–672.
- Hoos A, Cordon-Cardo C. Tissue microarray profiling of cancer specimens and cell lines: opportunities and limitations. Lab Invest. 2001;81:1331–1338.
- Hsieh FY, Lavori PW. Sample-size calculations for the Cox proportional hazards regression model with nonbinary covariates. Control Clin Trials. 2000;21:552–560.
- Kononen J, Bubendorf L, Kallioniemi A, Barlund M, Schraml P, Leighton S, Torhorst J, Mihatsch MJ, Sauter G, Kallioniemi OP. Tissue microarrays for high-throughput molecular profiling of tumor specimens. Nat Med. 1998;4:844–847.
- Samowitz WS, Curtin K, Schaffer D, Robertson M, Leppert M, Slattery ML. Relationship of Ki-ras mutations in colon cancers to tumor location, stage, and survival: a population-based study. Cancer Epidemiol Biomarkers Prev. 2000;9:1193–1197.
- Schoenfeld DA. Sample-size formula for the proportional-hazards regression model. Biometrics. 1983;39:499–503.
