
Biometrics. Author manuscript; available in PMC 2010 December 1.

Published in final edited form in *Biometrics*.

PMCID: PMC2794984. NIHMSID: NIHMS88165.

Fred Hutchinson Cancer Research Center, Public Health Sciences, 1100 Fairview Avenue N., Seattle, WA 98109-1024


**Summary**

The predictiveness curve shows the population distribution of risk endowed by a marker or risk prediction model. It provides a means for assessing the model’s capacity for stratifying the population according to risk. Methods for making inference about the predictiveness curve have been developed using cross-sectional or cohort data. Here we consider inference based on case-control studies, which are far more common in practice. We investigate the relationship between the ROC curve and the predictiveness curve. Insights about their relationship provide alternative ROC interpretations for the predictiveness curve and for a previously proposed summary index of it. Next, the relationship motivates ROC based methods for estimating the predictiveness curve. An important advantage of these methods over previously proposed methods is that they are rank invariant. In addition they provide a way of combining information across populations that have similar ROC curves but varying prevalence of the outcome. We apply the methods to PSA, a marker for predicting risk of prostate cancer.

**1. Background**

The importance of biomarkers in disease screening, diagnosis, and risk prediction has been generally recognized. A well-established criterion for biomarker selection is classification accuracy, commonly characterized by the Receiver Operating Characteristic (ROC) curve and its summary measures. However, classification is not always the major focus. Oftentimes we use biomarkers to calculate the risk of an outcome. Recently, there has been increasing awareness that the ROC curve is not the most relevant tool for assessing a biomarker whose purpose is risk prediction (Gail and Pfeiffer, 2005; Cook, 2007; Huang et al., 2007; Pepe et al., 2008a; Pencina et al., 2008). On the one hand, the ROC curve does not display risk, which is of primary interest to patients and clinicians. On the other hand, criteria relating to classification oftentimes can be too stringent for evaluation of a risk prediction marker. To characterize the predictive capacity of a continuous marker or risk model, a new graphical tool, the predictiveness curve, has been proposed to display the population distribution of disease risk predicted by the particular marker or risk model (Bura and Gastwirth, 2001; Huang et al., 2007; Pepe et al., 2008a).

Let *D* denote a binary outcome that we term disease here, *D* = 1 for diseased and *D* = 0 for non-diseased. Let *Y* denote a marker of interest and define *Risk*(*Y*) = *P* (*D* = 1|*Y*), the risk calculated on the basis of *Y*. We use the term “risk” in a broad sense, to include presence of disease at the time *Y* is measured, and to include future onset of disease after *Y* is measured, depending on the application. Assuming *Risk*(*Y*) is monotone increasing in *Y*, the predictiveness curve for *Y* is the curve *R*(*v*) vs *v* for *v* ∈ (0, 1), where *R*(*v*) is the risk corresponding to the *v*th percentile of *Y*; that is, *R*(*v*) = *P*{*D* = 1|*Y* = *F*^{−1}(*v*)}, where *F* denotes the cumulative distribution function of *Y*.

An example of biomarker evaluation is included in Figure 1, where the ROC and predictiveness curves for weight and FEV1 are displayed as classification or risk prediction markers for pulmonary exacerbation in children with cystic fibrosis. The empirical ROC curves shown in Figure 1(a) suggest that FEV1 has better classification accuracy than weight. The corresponding predictiveness curves estimated in Huang et al. (2007) are shown in Figure 1(b). We see, for example, that at the 90th percentile the risk is 0.76 for FEV1 but only 0.58 for weight, suggesting FEV1 is a better marker of high risk than weight. FEV1 is also a better marker of low risk, as seen by comparing the two curves at the 10th percentile.

Figure 1. ROC curves and predictiveness curves for two markers of pulmonary exacerbation in children with cystic fibrosis.

Semi-parametric estimators for making inference about the curve and for making pointwise comparisons between two curves from a cohort study have been developed by Huang et al. (2007). Since case-control studies are often performed in the early phases of biomarker development (Pepe et al., 2001), it is important to estimate the predictiveness of continuous biomarkers in studies that use this type of design as well. When the disease of interest is rare in the population, using a case-control design to oversample cases can be more efficient than simple random sampling from the population. In this paper, we consider estimation of the predictiveness curve from a case-control study, assuming the disease prevalence is known a priori or that it can be estimated either from an independent cohort or from a parent cohort study within which the case-control marker study is nested (Baker et al., 2002; Pepe et al., 2008b). The methodology is based on modeling a parametric ROC curve and exploits the one-to-one relationship between the predictiveness curve and the ROC curve.

**2. Relationship between the Predictiveness Curve and the ROC Curve**

Here we focus on the scenario of a single continuous marker. Note the marker may be a predefined combination of predictors. For example, the Framingham risk score is a combination of age, total cholesterol, HDL cholesterol, systolic blood pressure, treatment for hypertension, and cigarette smoking. Denote by *Y*, *Y_{D}*, and *Y_{D̄}* the marker measured on subjects from the general population, the cases, and the controls, respectively, with corresponding cumulative distribution functions *F*, *F_{D}*, and *F_{D̄}*.

Throughout this manuscript, we assume the risk of disease *P* (*D* = 1|*Y*) is monotone increasing in *Y*. Under this monotone increasing risk assumption, we have that the *v*th risk percentile is *R*(*v*) = *P*{*D* = 1|*Y* = *F*^{−1}(*v*)}, the risk at the *v*th percentile of the marker.

**Theorem 1.** Suppose *Y_{D}* and *Y_{D̄}* share a common support. Then the predictiveness curve, *R*(*v*) vs *v*, can be represented as the plot of

$$\frac{\rho \,\text{RO}{\text{C}}^{\prime}(t)}{\rho \,\text{RO}{\text{C}}^{\prime}(t)+(1-\rho )}\quad \text{vs}\quad 1-(1-\rho )t-\rho \,\text{ROC}(t),\qquad t\in (0,1),$$

(1)

where the ROC curve at false positive rate *t* is ROC(*t*), and ROC′(*t*) is its derivative with respect to *t*.

*Proof.* For *v* ∈ (0, 1), let *y* = *F*^{−1}(*v*). Suppose *F_{D̄}*(*y*) = 1 − *t*, i.e. *y* = *F_{D̄}*^{−1}(1 − *t*). Then

$$\begin{array}{l}v=F(y)=(1-\rho ){F}_{\overline{D}}(y)+\rho {F}_{D}(y)=(1-\rho )(1-t)+\rho {F}_{D}({F}_{\overline{D}}^{-1}(1-t))\\ =(1-\rho )(1-t)+\rho \{1-\text{ROC}(t)\}=1-(1-\rho )t-\rho \text{ROC}(t).\end{array}$$

Moreover, writing LR(*y*) for the likelihood ratio *P*(*Y* = *y*|*D* = 1)*/P*(*Y* = *y*|*D* = 0),

$$R(v)=P(D=1\mid Y=y)=\frac{\rho \,\text{LR}(y)}{\rho \,\text{LR}(y)+(1-\rho )}=\frac{\rho \,\text{LR}\{{F}_{\overline{D}}^{-1}(1-t)\}}{\rho \,\text{LR}\{{F}_{\overline{D}}^{-1}(1-t)\}+(1-\rho )}=\frac{\rho \,\text{RO}{\text{C}}^{\prime}(t)}{\rho \,\text{RO}{\text{C}}^{\prime}(t)+(1-\rho )}.$$

The last equality holds since $\text{LR}\{{F}_{\overline{D}}^{-1}(1-t)\}=\text{RO}{\text{C}}^{\prime}(t)$ (Green and Swets, 1966). Note the result can be generalized to the scenario where the upper bound of the support of *Y_{D}* is larger than the upper bound of the support of *Y_{D̄}*.
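To make the mapping in equation (1) concrete, the following sketch traces out the predictiveness curve implied by Theorem 1 for a hypothetical concave ROC curve, ROC(*t*) = *t*^{0.3}, and prevalence *ρ* = 0.2; the shape parameter and prevalence are illustrative choices, not values from the paper.

```python
import numpy as np

# Hypothetical concave ROC curve: ROC(t) = t**gamma with 0 < gamma < 1,
# so ROC'(t) = gamma * t**(gamma - 1) is decreasing in t.
gamma, rho = 0.3, 0.2

t = np.linspace(1e-6, 1 - 1e-6, 10000)          # false positive rates
roc = t ** gamma
roc_deriv = gamma * t ** (gamma - 1)

# Equation (1): each t gives one point (v, R(v)) on the predictiveness curve.
v = 1 - (1 - rho) * t - rho * roc               # marker percentile
risk = rho * roc_deriv / (rho * roc_deriv + (1 - rho))  # R(v)

# Sanity check: the area under R(v) vs v should be (approximately) rho.
order = np.argsort(v)
print(np.trapz(risk[order], v[order]))          # ~0.2
```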

Theorem 1 shows that the predictiveness curve can be constructed from the ROC curve given the disease prevalence. Conversely, the ROC curve can also be constructed from the predictiveness curve. Specifically, as pointed out in Pepe et al. (2008a), corresponding to a percentile value *v*, the false positive fraction (FPF) is $t=P\{\mathit{Risk}(Y)>R(v)\mid D=0\}={\int}_{v}^{1}\{1-R(s)\}\,ds/(1-\rho )$, and the true positive fraction (TPF) is $\text{ROC}(t)=P\{\mathit{Risk}(Y)>R(v)\mid D=1\}={\int}_{v}^{1}R(s)\,ds/\rho $.

The one-to-one relationship between the ROC curve and the predictiveness curve suggests that there may be a simple relationship between standard summary measures for the curves. One interesting summary measure of the predictiveness curve previously proposed by Bura and Gastwirth (2001) is the total gain (TG), $\text{TG}={\int}_{0}^{1}|R(v)-\rho |\,dv$, the area sandwiched between the predictiveness curve and the horizontal line at *ρ*, the prevalence. The latter is the predictiveness curve for a completely uninformative marker. Since a better risk prediction marker has a steeper predictiveness curve, Bura and Gastwirth (2001) argued for gauging predictiveness with the size of TG: better markers have larger values. In Theorem 2, we show that TG is equivalent to the Kolmogorov-Smirnov measure of distance between two distributions (case vs control), a traditional ROC summary statistic (Pepe, 2003, pg. 80). Proof of Theorem 2 is given in the Appendix.

**Theorem 2.** Under the assumption that *P* (*D* = 1|*Y*) is monotone increasing in *Y*, we have TG = 2*ρ*(1 − *ρ*) sup_{*t*}{ROC(*t*) − *t*}.
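Theorem 2 can be checked numerically; the sketch below compares the two sides for the same hypothetical ROC curve used above (ROC(*t*) = *t*^{0.3}, *ρ* = 0.2).

```python
import numpy as np

gamma, rho = 0.3, 0.2
t = np.linspace(1e-6, 1 - 1e-6, 100000)
roc, roc_deriv = t ** gamma, gamma * t ** (gamma - 1)

v = 1 - (1 - rho) * t - rho * roc
risk = rho * roc_deriv / (rho * roc_deriv + (1 - rho))

order = np.argsort(v)
tg = np.trapz(np.abs(risk[order] - rho), v[order])  # TG = int |R(v) - rho| dv
ks = 2 * rho * (1 - rho) * np.max(roc - t)          # 2*rho*(1-rho)*sup{ROC(t)-t}
print(tg, ks)                                       # agree up to grid error
```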

Youden’s index (Youden, 1950), which has a long history as a summary performance measure for binary tests, is the sum of sensitivity and specificity minus 1, and is appealing when the costs associated with false positive and false negative errors are equal. The point on the ROC curve that maximizes Youden’s index is often called the ‘optimal point’. It is interesting to see that TG is related to this optimal point on the ROC curve. Refer to Fluss et al. (2005) for various methods to estimate Youden’s index.

**3. Estimation Using Parametric ROC Models**

Theorem 1 also has implications for estimating the predictiveness curve from a case-control sample. We can estimate a smooth ROC curve first and then derive the corresponding predictiveness curve based on (1), where we plug in a known or estimated value for disease prevalence. This is an appealing procedure for the following reasons: (i) When researchers are interested in evaluating a biomarker from multiple angles, they may choose to display both the ROC curve and the predictiveness curve. It is important in this situation that the assumptions behind estimates of the two curves are compatible with each other. Deriving the predictiveness curve from the ROC curve will guarantee this; (ii) The fact that the ROC curve can be estimated from ranked data alone implies that methods for deriving the predictiveness curve from a rank-based ROC curve estimate also depend only on ranks. This contrasts with previously proposed methods (Huang et al., 2007); (iii) Estimation of the ROC curve is a well studied problem, and a wide variety of methods are available; (iv) It is natural to estimate the ROC curve from a case-control study since sensitivity and specificity are defined conditional on disease status, so case-control data are accommodated by these methods in contrast to our previous approach (Huang et al., 2007); (v) A fundamental property of the predictiveness curve is that the area under the curve is equal to *ρ* since ${\int}_{0}^{1}R(v)\,dv=E(\mathit{Risk})=E(D)=\rho $. The area under an estimated predictiveness curve $\widehat{R}(v)$ vs *v*, on the other hand, is not necessarily equal to the prevalence estimate; it depends on the procedure employed for estimation. However, if a predictiveness curve is calculated using a prevalence estimate $\widehat{\rho}$ and an estimated ROC curve $\widehat{\text{ROC}}(t)$ which is differentiable almost everywhere, then the area under the estimated predictiveness curve is always equal to $\widehat{\rho}$. In particular

$$\begin{array}{l}{\int}_{0}^{1}\widehat{R}(v)\,dv={\int}_{t=1}^{t=0}\frac{\widehat{\rho}\,{\widehat{\text{ROC}}}^{\prime}(t)}{\widehat{\rho}\,{\widehat{\text{ROC}}}^{\prime}(t)+(1-\widehat{\rho})}\,d\left\{1-(1-\widehat{\rho})t-\widehat{\rho}\,\widehat{\text{ROC}}(t)\right\}\\ =-{\int}_{t=1}^{t=0}\frac{\widehat{\rho}\,{\widehat{\text{ROC}}}^{\prime}(t)}{\widehat{\rho}\,{\widehat{\text{ROC}}}^{\prime}(t)+(1-\widehat{\rho})}\left\{(1-\widehat{\rho})+\widehat{\rho}\,{\widehat{\text{ROC}}}^{\prime}(t)\right\}dt=\widehat{\rho}{\int}_{t=0}^{t=1}{\widehat{\text{ROC}}}^{\prime}(t)\,dt\\ =\widehat{\rho}\left\{\widehat{\text{ROC}}(1)-\widehat{\text{ROC}}(0)\right\}=\widehat{\rho}.\end{array}$$

This is a desirable property for a predictiveness curve estimate. When we compare two markers with respect to the steepness of their estimated predictiveness curves, it facilitates visual comparisons when the estimated curves have the same area under the curve; (vi) Finally, a unique property of the ROC curve based method compared to alternative methods is its ability to borrow information across populations when a marker’s discriminatory performance (ROC curve) is the same across different populations. When this happens, a common ROC curve can be estimated using samples from different populations (Janes and Pepe, 2008a,b,c; Huang et al., 2008). The estimator can be combined with disease prevalences to estimate predictiveness curves for individual populations. In contrast, one cannot take advantage of the constant ROC curve assumption in other frameworks, for example, when risk models and marker distributions are estimated separately (Huang et al., 2007).

Before we start exploring a specific ROC based method for estimation, we note that the assumption that *P* (*D* = 1|*Y* = *y*) is increasing in *y* implies that the likelihood ratio LR(*y*) is increasing in *y*, which in turn implies that ROC′(*t*) is decreasing in *t*. That is, a monotone increasing risk function assumption implies concavity of the corresponding ROC curve. Therefore, we prefer methods that lead to concave estimates of the ROC curve. Note that concavity is always a desirable property for an ROC curve because it guarantees that the ROC curve will never cross the 45° “guessing line” (Dorfman et al., 1996) and because it is a property of the optimal ROC curve for decision rules based on *Y* (Egan, 1975).

Approaches to estimating an ROC curve vary along a spectrum regarding the assumptions made. At one extreme, we can model the marker distributions within cases and controls parametrically and calculate the corresponding ROC curve. For example, Wieand et al. (1989) modeled *Y_{D}* and *Y_{D̄}* as normally distributed. At the other extreme, the ROC curve can be estimated empirically without distributional assumptions. In between lie semi-parametric methods that assume a parametric form for the ROC curve itself while leaving the marker distributions otherwise unspecified (Hsieh and Turnbull, 1996; Metz et al., 1998; Pepe, 2000; Zou and Hall, 2000; Alonzo and Pepe, 2002; Cai and Moskowitz, 2004).

The most widely used parametric ROC model is the binormal ROC curve. It assumes that there exists a common unspecified monotone transformation *h* which transforms the marker distributions in both cases and controls to normality. Suppose $h({Y}_{\overline{D}})\sim N({\mu}_{\overline{D}},{\sigma}_{\overline{D}}^{2})$ and $h({Y}_{D})\sim N({\mu}_{D},{\sigma}_{D}^{2})$; the corresponding ROC curve is ROC(*t*) = Φ{*a* + *b*Φ^{−1}(*t*)}, where *a* = (*μ_{D}* − *μ_{D̄}*)*/σ_{D}* and *b* = *σ_{D̄}/σ_{D}*.

Many algorithms, including the semi-parametric methods listed above, have been proposed to fit the binormal ROC curve. Moreover, the binormal assumption is thought to fit many real datasets (Hanley, 1988). However, a problem with using the binormal ROC model is that it is not concave on all of (0, 1) unless *b* = 1 (i.e. the normal distributions for cases and controls have the same variance). This can be seen from the derivative of ROC(*t*), whose logarithm is quadratic in Φ^{−1}(*t*):

$$\text{RO}{\text{C}}^{\prime}(t)=\frac{b\,\varphi \{a+b{\mathrm{\Phi}}^{-1}(t)\}}{\varphi \{{\mathrm{\Phi}}^{-1}(t)\}}={C}^{+}\mathrm{exp}\left[-\frac{1}{2}({b}^{2}-1){\left\{{\mathrm{\Phi}}^{-1}(t)+\frac{ab}{{b}^{2}-1}\right\}}^{2}\right],$$

(2)

where *φ* denotes the standard normal density and *C*^{+} is some positive constant.

The lack of concavity of the binormal model has been shown to have only a minor impact on estimation of the ROC curve itself. However, it can have a large impact on the derivative of the ROC curve, which in turn can cause problems in estimating the predictiveness curve. To get a flavor for this, consider the example in Figure 2, where 50 cases and 50 controls are simulated from normal distributions with equal variances. The estimated binormal ROC curve (Figure 2a) has a tiny squiggle in the left-hand tail which is almost unnoticeable, whereas the corresponding predictiveness curve estimate has a large non-monotone tail at the right end (Figure 2b).
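The artifact is easy to reproduce. The sketch below uses hypothetical binormal parameters *a* = 1, *b* = 1.15 (a slope slightly above 1, as might be estimated from a small sample even when the true variances are equal) and shows that the implied risk curve is not monotone near *v* = 1.

```python
import numpy as np
from scipy.stats import norm

a, b, rho = 1.0, 1.15, 0.2        # hypothetical fitted binormal parameters

t = np.linspace(1e-4, 1 - 1e-4, 2000)
z = norm.ppf(t)
roc = norm.cdf(a + b * z)
roc_deriv = b * norm.pdf(a + b * z) / norm.pdf(z)   # ROC'(t)

v = 1 - (1 - rho) * t - rho * roc
risk = rho * roc_deriv / (rho * roc_deriv + (1 - rho))

# With b > 1, ROC'(t) -> 0 as t -> 0, so the estimated risk turns back
# toward zero at the right end of the predictiveness curve (v near 1).
order = np.argsort(v)
print(bool(np.all(np.diff(risk[order]) >= 0)))      # False: non-monotone
```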

To avoid this problem, one solution is to employ other parametric models that yield concave ROC curves. Two parametric models for concave ROC curves are the bigamma and bilomax models. The bigamma ROC curve (Dorfman et al., 1996) assumes there exists a common monotone transformation that transforms the distributions of *Y_{D}* and *Y_{D̄}* to gamma distributions; the bilomax ROC curve (Campbell and Ratnaparkhi, 1993) instead assumes the transformed distributions are Lomax (Lomax, 1954).

In the rest of this paper, we focus on an alternative strategy. Since the binormal ROC curve is widely used, we develop techniques for predictiveness curve estimation based on this model. However, instead of fitting the predictiveness curve and the ROC curve over the entire domain (0,1), we only model a portion of it that is of interest. In this approach we can ensure concavity of the ROC curve and consequently monotonicity of the predictiveness curve within a restricted range of interest.

In addition to addressing the concavity issue, there are several other reasons to consider estimating the predictiveness curve over a subinterval of (0,1). First, by enforcing model assumptions on only a portion of the curve, we increase robustness of the estimate. Second, risks only within a particular range may be of primary interest. Third, a parametric ROC model creates lack of flexibility in the estimated predictiveness curve at the boundary. As shown in Table 1, when a binormal ROC model is enforced over the whole (0,1) domain, the boundary of the predictiveness curve has value either 0 or 1. However, in applications the risk function is often not 0 or 1 at extreme values of the marker. Therefore, the binormal predictiveness curve will not fit at the boundary in these settings.

Researchers in the field of diagnostic test evaluation have long been interested in the partial ROC curve. For example, in screening studies, it is important to minimize the unnecessary cost due to false positive test results, hence the region of the ROC curve corresponding to low FPF is most relevant. If the purpose of the study is disease diagnosis, it is critical not to miss detecting subjects with disease, and hence the part of the ROC curve corresponding to high TPF is of primary interest. Modeling a partial ROC curve and the area under it has been proposed and studied (McClish, 1989; Thompson and Zucchini, 1989; Jiang et al., 1996; Dodd and Pepe, 2003; Pepe and Cai, 2004).

Interestingly, when concavity is required only over a certain portion of the ROC curve, parametric ROC models which are not concave over the whole range may be employed. Consider the classic binormal ROC curve, ROC(*t*) = Φ{*a* + *b*Φ^{−1}(*t*)}, whose derivative is shown in (2). Consider the following two scenarios: (i) If 0 < *b* < 1, ROC′(*t*) increases as {Φ^{−1}(*t*) + *ab*/(*b*^{2} − 1)}^{2} increases. Thus for ROC′(*t*) to be monotone decreasing, we need Φ^{−1}(*t*) < −*ab*/(*b*^{2} − 1), i.e. *t* < Φ{−*ab*/(*b*^{2} − 1)}, which is equivalent to *a* > (1 − *b*^{2})Φ^{−1}(*t*)/*b*. That is, by imposing this restriction on (*a*, *b*) over the FPF range of interest, the corresponding portion of the ROC curve is guaranteed to be concave. (ii) If *b* > 1, ROC′(*t*) increases as {Φ^{−1}(*t*) + *ab*/(*b*^{2} − 1)}^{2} decreases. Thus for ROC′(*t*) to be monotone decreasing, we need Φ^{−1}(*t*) > −*ab*/(*b*^{2} − 1), i.e. *t* > Φ{−*ab*/(*b*^{2} − 1)}, which is again equivalent to *a* > (1 − *b*^{2})Φ^{−1}(*t*)/*b*. We can impose these restrictions during estimation to guarantee concavity of the partial binormal ROC curve over the range of interest. Suppose concavity is required for *t* ∈ (*t*_{0}, *t*_{1}); we fit the ROC model with the restrictions on the parameter estimates (*a*, *b*): *a* > (1 − *b*^{2})Φ^{−1}(*t*_{0})/*b* and *a* > (1 − *b*^{2})Φ^{−1}(*t*_{1})/*b*.
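The two endpoint restrictions can be packaged into a simple feasibility check; the following is a sketch of the constraint exactly as stated above, with `binormal_concave_on` a hypothetical helper name.

```python
from scipy.stats import norm

def binormal_concave_on(a, b, t0, t1):
    """True if the binormal ROC curve Phi{a + b*Phi^{-1}(t)} satisfies the
    restriction a > (1 - b**2) * Phi^{-1}(t) / b at both endpoints of
    (t0, t1), which guarantees concavity on that interval."""
    if b == 1:                  # equal variances: concave on all of (0, 1)
        return True
    return (a > (1 - b ** 2) * norm.ppf(t0) / b and
            a > (1 - b ** 2) * norm.ppf(t1) / b)

# Example: a curve with b > 1 that is nevertheless concave on (0.05, 0.95)
print(binormal_concave_on(1.0, 1.15, 0.05, 0.95))   # True
```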

Denote by ROC_{*θ*}(*t*) a parametric ROC curve indexed by parameter *θ*. Motivated by Theorem 1, define *H*(*θ*, *t*, *ρ*) = 1 − (1 − *ρ*)*t* − *ρ*ROC_{*θ*}(*t*) and *G*(*θ*, *t*, *ρ*) = *ρ*ROC′_{*θ*}(*t*)/{*ρ*ROC′_{*θ*}(*t*) + (1 − *ρ*)}, so that the predictiveness curve is the plot of *G* vs *H*. Given estimates $\widehat{\theta}$ and $\widehat{\rho}$, we estimate

$$\widehat{R}(v)=G\left\{\widehat{\theta},{H}^{-1}(\widehat{\theta},v,\widehat{\rho}),\widehat{\rho}\right\}\ \text{for}\ v\in (0,1),\qquad {\widehat{R}}^{-1}(p)=H\left\{\widehat{\theta},{G}^{-1}(\widehat{\theta},p,\widehat{\rho}),\widehat{\rho}\right\}\ \text{for}\ p\in \{R(v):v\in (0,1)\}.$$

In practice, oftentimes there does not exist a closed form for *H*^{−1}(·) or *G*^{−1}(·), and numerical methods need to be implemented to calculate $\widehat{R}(v)$ and ${\widehat{R}}^{-1}(p)$.
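For the binormal model, both inversions are one-dimensional root-finding problems. The following sketch, with hypothetical parameter values (*a*, *b*, *ρ*) = (1.0, 0.9, 0.2), uses `scipy.optimize.brentq`.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

a, b, rho = 1.0, 0.9, 0.2        # hypothetical binormal parameters

def roc(t):
    return norm.cdf(a + b * norm.ppf(t))

def roc_deriv(t):
    z = norm.ppf(t)
    return b * norm.pdf(a + b * z) / norm.pdf(z)

def H(t):   # v as a function of t
    return 1 - (1 - rho) * t - rho * roc(t)

def G(t):   # risk as a function of t
    lr = roc_deriv(t)
    return rho * lr / (rho * lr + 1 - rho)

def R_hat(v):
    """R(v): invert H numerically, then evaluate G."""
    t = brentq(lambda s: H(s) - v, 1e-8, 1 - 1e-8)
    return G(t)

def R_inv_hat(p):
    """R^{-1}(p): invert G numerically, then evaluate H."""
    t = brentq(lambda s: G(s) - p, 1e-8, 1 - 1e-8)
    return H(t)

print(R_hat(0.9), R_inv_hat(0.3))
```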

The prevalence estimate may not have sampling variability in some applications. For example, if *ρ* derives from a population disease registry such as SEER (http://seer.cancer.gov), its value is essentially known due to the large sample size involved. Moreover, in some settings investigators will want to substitute fixed values of *ρ* into the predictiveness estimate in order to determine prevalences that render the marker useful as indicated by the corresponding predictiveness curves. This sort of “what-if” exercise could lead to consideration of specific populations for application of the marker. In other applications the prevalence will be estimated with sampling variability from data. Such data may be independent of the case-control data, for example when it is based on a report from the literature. Alternatively it may be derived from a cohort within which the case-control study is nested. The nested case-control design has been proposed as the preferred design for pivotal evaluation of biomarkers for classification (Pepe et al., 2001, 2008b; Baker et al., 2002) and involves measuring the outcome *D* for all subjects in a random sample from the population of interest but the marker *Y* only for a subsample of cases and a subsample of controls.

Suppose $\widehat{R}(v)$ and ${\widehat{R}}^{-1}(p)$ are estimators of *R*(*v*) and *R*^{−1}(*p*). We assume the following conditions hold.

- $\sqrt{n}\left(\begin{array}{c}\widehat{\theta}-\theta \\ \widehat{\rho}-\rho \end{array}\right)\stackrel{d}{\to}N\left\{\left(\begin{array}{c}0\\ 0\end{array}\right),\left(\begin{array}{cc}{\mathrm{\Sigma}}_{\theta \theta}& {\mathrm{\Sigma}}_{\theta \rho}\\ {\mathrm{\Sigma}}_{\theta \rho}& {\mathrm{\Sigma}}_{\rho \rho}\end{array}\right)\right\}$, where *n* = *n_{D}* + *n_{D̄}*. Note that for the special case where *ρ* is assumed known, we have Σ_{θρ} = Σ_{ρρ} = 0.
- ROC_{*θ*}(*t*) is differentiable with respect to *θ* and *t*.
- *G*(*θ*, *t*, *ρ*) is differentiable with respect to *θ*, *t*, and *ρ*, with derivatives *g*_{1}, *g*_{2}, and *g*_{3}.
- *H*^{−1}(*θ*, *v*, *ρ*) is differentiable with respect to *θ*.
- *G*^{−1}(*θ*, *p*, *ρ*) is differentiable with respect to *θ*.

**Theorem 3.** We have

$$\sqrt{n}\left\{\widehat{R}(v)-R(v)\right\}\stackrel{d}{\to}N\{0,{\sigma}^{2}(v)\},$$

where ${\sigma}^{2}(v)={\sigma}_{11}^{2}(v)+{\sigma}_{12}^{2}(v)+{\sigma}_{22}^{2}(v)$, with ${\sigma}_{11}^{2}(v)={A}^{T}{\mathrm{\sum}}_{\theta \theta}A,{\sigma}_{12}^{2}(v)={A}^{T}{\mathrm{\sum}}_{\theta \rho}B,{\sigma}_{22}^{2}(v)={B}^{2}{\mathrm{\sum}}_{\rho \rho}$, and

$$A={g}_{1}\{\theta ,{H}^{-1}(\theta ,v,\rho ),\rho \}+{g}_{2}\{\theta ,{H}^{-1}(\theta ,v,\rho ),\rho \}\,\frac{\partial {H}^{-1}(\theta ,v,\rho )}{\partial \theta},\qquad B={g}_{3}\{\theta ,{H}^{-1}(\theta ,v,\rho ),\rho \}+{g}_{2}\{\theta ,{H}^{-1}(\theta ,v,\rho ),\rho \}\,\frac{\partial {H}^{-1}(\theta ,v,\rho )}{\partial \rho}.$$

**Theorem 4.** We have

$$\sqrt{n}\left\{{\widehat{R}}^{-1}(p)-{R}^{-1}(p)\right\}\stackrel{d}{\to}N\{0,{\tau}^{2}(p)\},$$

where ${\tau}^{2}(p)={\tau}_{11}^{2}(p)+{\tau}_{12}^{2}(p)+{\tau}_{22}^{2}(p)$, with

$$\begin{array}{l}{\tau}_{11}^{2}(p)={\left[\frac{\partial H\{\theta ,{G}^{-1}(\theta ,p,\rho ),\rho \}}{\partial \theta}\right]}^{T}{\mathrm{\Sigma}}_{\theta \theta}\left[\frac{\partial H\{\theta ,{G}^{-1}(\theta ,p,\rho ),\rho \}}{\partial \theta}\right],\\ {\tau}_{12}^{2}(p)={\left[\frac{\partial H\{\theta ,{G}^{-1}(\theta ,p,\rho ),\rho \}}{\partial \theta}\right]}^{T}{\mathrm{\Sigma}}_{\theta \rho}\{t-{\text{ROC}}_{\theta}(t)\},\qquad {\tau}_{22}^{2}(p)={\{t-{\text{ROC}}_{\theta}(t)\}}^{2}{\mathrm{\Sigma}}_{\rho \rho},\end{array}$$

where *t* = *G*^{−1}(*θ*, *p*, *ρ*),

and $\partial H\{\theta ,{G}^{-1}(\theta ,p,\rho ),\rho \}/\partial \theta =-\rho \,\partial {\text{ROC}}_{\theta}\{{G}^{-1}(\theta ,p,\rho )\}/\partial \theta -[1-\rho +\rho \,{\text{ROC}}_{\theta}^{\prime}\{{G}^{-1}(\theta ,p,\rho )\}]\,\partial {G}^{-1}(\theta ,p,\rho )/\partial \theta $.

Observe that the variances of $\widehat{R}(v)$ and ${\widehat{R}}^{-1}(p)$ are related by the equation ${\sigma}^{2}(v)=\{\partial R(v)/\partial v\}^{2}\,{\tau}^{2}(p)=[{g}_{2}\{\theta ,{H}^{-1}(\theta ,v,\rho ),\rho \}\,\partial {H}^{-1}(\theta ,v,\rho )/\partial v]^{2}\,{\tau}^{2}(p)$, where *p* = *R*(*v*). Theorems 3 and 4 follow directly from the continuous mapping theorem and the chain rule. Typically the terms ${\sigma}_{12}^{2}(v)$ in Theorem 3 and ${\tau}_{12}^{2}(p)$ in Theorem 4 are zero because Σ_{θρ} = 0, even in a nested case-control study. This follows because $\widehat{\rho}$ is computed from the disease indicators alone, while $\widehat{\theta}$ is computed from the marker observations conditional on case-control status, so the two estimators are asymptotically uncorrelated.

Due to the lack of closed forms for *H*^{−1}(·) and *G*^{−1}(·), numerical differentiation would be needed to calculate the derivatives entering the asymptotic variances of $\widehat{R}(v)$ and ${\widehat{R}}^{-1}(p)$. We instead use bootstrap resampling for variance estimation, with resampling reflecting the study design: cases and controls are resampled separately, and *D* is resampled for the entire cohort when the prevalence is estimated.
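A sketch of the resampling scheme just described follows; `estimate_fn` stands in for whatever routine refits the (partial) ROC model and returns the quantity of interest, and is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_replicate(y_cases, y_controls, d_cohort, estimate_fn):
    """One replicate for the nested case-control design: cases and controls
    are resampled separately, and the cohort's disease indicators are
    resampled to reflect variability in the prevalence estimate."""
    yd = rng.choice(y_cases, size=len(y_cases), replace=True)
    ydbar = rng.choice(y_controls, size=len(y_controls), replace=True)
    rho_hat = rng.choice(d_cohort, size=len(d_cohort), replace=True).mean()
    return estimate_fn(yd, ydbar, rho_hat)

# Percentile CIs come from the 2.5 and 97.5 percentiles of many replicates.
```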

**4. Simulation**

We simulate a nested case-control study to evaluate the finite sample performance of the predictiveness curve estimates. To generate data we use disease prevalence *ρ* = 0.2 and the binormal model for the risk marker *Y*: *h*(*Y*)|*D* = 0 ~ *N*(0, 1) and *h*(*Y*)|*D* = 1 ~ *N*(*μ*, 1), where *h*(*y*) = log{exp(*y*) − 3.5}. A cohort of size *n* = 5,000 comprises the parent study, with *D* generated as Bernoulli with probability *ρ* = 0.2. The risk marker is then generated for equal numbers of cases and controls nested within the cohort. Suppose a low risk threshold of 0.1 and a high risk threshold of 0.3 are of major interest. The measures studied in the simulation are *R*^{−1}(*p*) for *p* = 0.1, 0.3 and the corresponding *R*(*v*) for *v* = *R*^{−1}(*p*). The effect of the FPF range chosen for fitting the partial ROC curve is evaluated as follows: (i) a binormal ROC curve with *t* ∈ (0.01, 0.99) is fitted for estimating both *R*^{−1}(0.1) and *R*^{−1}(0.3) and the corresponding *R*(*v*) for *v* = *R*^{−1}(*p*); (ii) a binormal ROC curve with *t* ∈ (0.5, 0.99) is fitted for estimating *R*^{−1}(0.1) and the corresponding *R*(*v*), while *t* ∈ (0.01, 0.50) is employed when estimating *R*^{−1}(0.3) and the corresponding *R*(*v*). The pseudolikelihood procedure (Pepe and Cai, 2004) is used for fitting the ROC model.
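A sketch of this data-generating mechanism is given below. Since *h*(*y*) = log{exp(*y*) − 3.5}, the marker is recovered as *Y* = log{exp(*h*) + 3.5}; for simplicity the nested markers are drawn directly from the case and control distributions, which is equivalent in distribution to subsampling the cohort.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, mu = 0.2, 0.8                 # prevalence; mu = 0.8 is the moderate marker
n_cohort, n_cc = 5000, 500         # parent cohort and case-control sample sizes

d_cohort = rng.binomial(1, rho, size=n_cohort)   # phase-one disease status
rho_hat = d_cohort.mean()                        # prevalence estimate

def h_inv(z):
    # h(y) = log(exp(y) - 3.5)  =>  y = log(exp(z) + 3.5)
    return np.log(np.exp(z) + 3.5)

y_cases = h_inv(rng.normal(mu, 1.0, size=n_cc // 2))      # h(Y)|D=1 ~ N(mu,1)
y_controls = h_inv(rng.normal(0.0, 1.0, size=n_cc // 2))  # h(Y)|D=0 ~ N(0,1)
```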

For comparison purposes, we note that an alternative approach for estimating the predictiveness curve is to separately estimate the risk model and the marker distribution. The method proposed for cohort studies by Huang et al. (2007) can be naturally extended to case-control studies. Assume the population risk model is logit *P*(*D* = 1|*Y*) = *β*_{0} + *β*_{1}*Y*. Let *Sampled* indicate being sampled into the case-control study. According to Bayes’ theorem,

$$\frac{P(D=1\mid Y,\mathit{Sampled})}{P(D=0\mid Y,\mathit{Sampled})}=\frac{P(D=1\mid Y)}{P(D=0\mid Y)}\times \frac{P(D=0)}{P(D=1)}\times \frac{P(D=1\mid \mathit{Sampled})}{P(D=0\mid \mathit{Sampled})}.$$

Therefore to estimate the risk model, we apply an ordinary linear logistic regression model to the case-control data and correct the intercept by subtracting the term −logit($\widehat{\rho}$) + log(*n_{D}*/*n_{D̄}*), where *n_{D}* and *n_{D̄}* are the numbers of cases and controls.
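A sketch of this intercept correction, with `case_control_risk_model` a hypothetical helper built on `statsmodels`:

```python
import numpy as np
import statsmodels.api as sm

def case_control_risk_model(y, d, rho_hat):
    """Fit logit P(D=1|Y) = b0 + b1*Y from case-control data (d is 0/1) and
    correct the intercept by subtracting -logit(rho_hat) + log(nD/nDbar)."""
    fit = sm.Logit(d, sm.add_constant(y)).fit(disp=0)
    b0, b1 = fit.params
    n_d, n_dbar = d.sum(), len(d) - d.sum()
    logit = lambda p: np.log(p / (1 - p))
    b0_pop = b0 - (np.log(n_d / n_dbar) - logit(rho_hat))
    return b0_pop, b1
```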

We evaluate three models with *μ* = 0.5, 0.8, 1.2, which correspond to markers with relatively weak, moderate, and strong predictive capacity. For each model, we study the performance of the predictiveness estimators for case-control sample sizes varying from 500 to 1000. For each sample size, 1000 Monte-Carlo simulations are performed. The bootstrap is conducted for variance estimation, resampling marker data within cases and controls separately. Disease status is also resampled for the parent cohort in order to incorporate variability in $\widehat{\rho}$. For each simulation, 250 bootstrap samples are generated. The 95% confidence intervals are constructed from the 2.5 and 97.5 percentiles of the bootstrap distributions.

Results for bias, mean squared error (MSE), and empirical coverage probability of confidence intervals are shown in Table 2. We see that the rank-invariant estimators based on estimating the partial ROC curve have good performance. Bias is minimal for *R*(*v*) and for *R*^{−1}(*p*). All 95% bootstrap confidence intervals have reliable coverage. On the other hand, the estimator based on logistic regression is biased since this method is not rank-invariant and in this example an incorrect scale for the marker is employed in the risk model. It has larger MSE than the partial ROC curve based estimator and very poor coverage in this simulation setting.

Table 2. Performance of the predictiveness curve estimators, where MSE is mean squared error and ECP is the empirical coverage probability of the bootstrap confidence interval.

Finally, Table 3 presents the average width of percentile bootstrap confidence intervals based on partial binormal ROC curve fitting, as a function of FPF range, predictive strength of the marker, and sample size. Observe that in general estimation of *R*^{−1}(*p*) is more challenging than estimation of *R*(*v*): the former has much wider confidence intervals. Precision of an estimator for *R*^{−1}(*p*) depends on sample size and FPF range. With limited resources (sample size), widening the range of FPF will increase precision of the corresponding predictiveness curve estimate. However, flexibility of the ROC curve estimate is compromised. This issue of bias versus variance trade-off is typical in statistical modeling. Also note that the precision of the *R*^{−1}(*p*) estimator tends to be higher for a marker with better predictive accuracy, when the predictiveness curve is steep. For a marker with very weak predictive strength, such that the predictiveness curve is very flat at risk threshold *p*, it is not essential to estimate *R*^{−1}(*p*) with high precision since the marker is not going to be helpful for prediction.

In practice, it is up to the investigator to make a choice between precision and robustness given available resources, or to increase the sample size. For example, for a marker with moderate predictive strength in our simulation models, the average CI length for *R*^{−1}(*p*) at *p* = 0.30 is around 10% for sample size 500 when the FPF range is (0.01, 0.99). This is fairly informative. We use a sample size of 500 in our illustration of PSA as a predictive marker for prostate cancer, described next.

**5. Markers for Prostate Cancer**

We illustrate the methodology using data from the placebo arm of the Prostate Cancer Prevention Trial (Thompson et al., 2006). Almost all subjects in the cohort (5,519) underwent prostate biopsy at the end of the study and thus had prostate cancer disease status available; 21.9% of the men were found to have prostate cancer. The marker, PSA value prior to biopsy, is available for every subject. Our goal is to evaluate PSA as a risk prediction marker for diagnosis of prostate cancer from the biopsy. To illustrate application of our methodology to a nested case-control sample, we randomly sampled 250 cases and 250 controls from the cohort and pretend that PSA is measured only for these 500 subjects and not for the rest of the cohort.

Given a low risk threshold of 10% and a high risk threshold of 30%, our main interest is to estimate *R*^{−1}(0.10) and 1 − *R*^{−1}(0.30), the proportions of subjects falling into the low and high risk ranges. We considered two ways to fit the partial binormal ROC curve, prioritizing precision and flexibility respectively. First, we fit one concave partial ROC curve to provide estimates for both *R*^{−1}(0.10) and 1 − *R*^{−1}(0.30). To include most of the available data in the interval while avoiding problems due to sparse data at the boundary, we choose the FPF range *t* ∈ (0.05, 0.95). This guarantees at least 12 controls with data beyond the corresponding risk threshold estimate. The estimated partial ROC curve is displayed in Figure 3(a). Also displayed is the empirical ROC curve (Obuchowski, 2003). The two generally agree well with each other.

Figure 3. (a) (Partial) ROC curves for PSA (I: *t* ∈ (0.05, 0.95); II: range of *t* determined by *R*^{−1}(*p*) ± 0.5) and (b), (c) partial predictiveness curves for PSA estimated from the nested case-control study, with the corresponding 95% confidence intervals.

We then estimate the corresponding partial predictiveness curve for PSA, plugging in $\widehat{\rho}$ estimated from the parent cohort. The partial predictiveness curve with its 95% confidence intervals (taking variability in $\widehat{\rho}$ into account) is displayed in Figure 3(b). Also displayed is the nonparametric predictiveness estimate, $\widehat{R}(v)$ vs *v*, under the monotone increasing risk assumption. This curve is generated by estimating *P*(*D* = 1|*Y*) using isotonic regression as described below, estimating *F_{D}* and *F_{D̄}* empirically from the cases and controls, and combining these with $\widehat{\rho}$ to estimate the marker distribution *F*.

Table 4 presents the corresponding estimates of *R*^{−1}(0.1) and 1 − *R*^{−1}(0.3). First, based on $\widehat{\rho}$ estimated from the phase-one cohort ($\widehat{\rho}$ = 21.9%), 20.5% of subjects in the population are classified as low risk and 26.4% as high risk. Confidence intervals are constructed either treating disease prevalence as fixed or taking variability in $\widehat{\rho}$ into account. In this example, the parent cohort is much larger than the case-control sample (around 10 times larger), so variability in $\widehat{\rho}$ has a very small impact on the width of these CIs. Observe that the widths of the confidence intervals are around 18% for estimating *R*^{−1}(0.10) and 15% for estimating 1 − *R*^{−1}(0.30), reasonably tight from a clinical point of view.

Table 4. Estimates of the predictiveness of PSA. Estimates are based on partial binormal ROC models. Confidence intervals are constructed using percentiles of the bootstrap distribution.

When flexibility of the curve is of major concern, a second strategy can be employed for estimating the predictiveness of PSA at the low and high risk thresholds. Two concave binormal partial predictiveness curves are fitted separately at the low and high ends of the domain of *v*. The corresponding FPF ranges for the partial ROC curves are (0.38, 0.95) and (0.10, 0.32). Our strategy for choosing these ranges is as follows.

For the high risk threshold, *p_{H}*, and analogously for the low risk threshold, *p_{L}*, we obtain a preliminary estimate of the percentile *R*^{−1}(*p*) and choose the FPF range to correspond to marker percentiles within *R*^{−1}(*p*) ± 0.5, trimmed to avoid sparse data at the boundaries (curve II in Figure 3(a)).

The estimated partial ROC curves are displayed in Figure 3(a). They agree fairly well with the empirical ROC curve and with the partial ROC curve fitted over *t* ∈ (0.05, 0.95). The corresponding partial predictiveness curves for PSA, plugging in $\widehat{\rho}$ estimated from the parent cohort, with their 95% percentile bootstrap confidence intervals (variability in $\widehat{\rho}$ taken into consideration), are displayed in Figure 3(c). Compared to Figure 3(b), confidence intervals are wider with the reduced range of FPF. The widths of the CIs for *R*^{−1}(0.10) and 1 − *R*^{−1}(0.30) are around 22% (Table 4). Comparing the results for the two partial ROC curve fitting strategies in Table 4, estimates of *R*^{−1}(0.10) and 1 − *R*^{−1}(0.30) shift upward by 3% and 6% respectively when the more flexible strategy is employed. Still, they fall within the 95% CIs based on the curve with the wider FPF range. In practice, if precision is of major concern, a wide FPF range is favored. To further increase precision while maintaining flexibility requires a larger sample size.

Note that in practice, when we do not have a large cohort available for precise estimation of the disease prevalence, sensitivity analysis with varying *ρ* becomes important. Using our data as an example, the impact of perturbations of *ρ* is examined (Table 3). First we plug in *ρ* = 20.87% and 23.05%, which correspond to the lower and upper bounds of the 95% confidence interval for disease prevalence based on the cohort. When a bigger *ρ* is entered, we get a larger estimate of 1 − *R*^{−1}(0.30) and a smaller estimate of *R*^{−1}(0.10), although the changes in these estimates are small here (around 3% for both partial ROC curve fitting strategies). Next we increase the perturbation in *ρ* and plug in *ρ* = 18.53% and 25.78%, which correspond to the lower and upper bounds of the 95% confidence interval for disease prevalence if a cohort with 500 subjects is used for estimating prevalence. This time the predictiveness estimates change quite a lot (around 5–10%). This demonstrates the value of a two-phase design for biomarker evaluation in which *ρ* is estimated precisely from a large cohort while the novel marker *Y* is measured only for a case-control subset. We also see the value of a sensitivity analysis when an accurate estimate of prevalence is not available.

Finally we describe the algorithm used to fit the non-parametric risk model that incorporates the monotone increasing risk assumption. It involves two steps: (a) we compute *P*(*D* = 1|*Y*, *Sampled*) from the case-control sample using isotonic regression with the pool-adjacent-violators algorithm (Barlow et al., 1972), applied to the unique marker values *y*_{1}, …, *y_{U}* in the sample; (b) we then convert *P*(*D* = 1|*Y*, *Sampled*) into *P*(*D* = 1|*Y*) using Bayes’ theorem as in Section 4.
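A sketch of the two steps using scikit-learn's pool-adjacent-violators implementation; the odds conversion inverts the Bayes identity displayed in Section 4. The helper name and the clipping constant are illustrative.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def nonparametric_risk(y, d, rho_hat):
    """(a) isotonic regression of D on Y (pool-adjacent-violators), then
    (b) convert P(D=1|Y,Sampled) to P(D=1|Y) via Bayes' theorem."""
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    p_s = iso.fit_transform(y, d)                     # step (a)
    p_s = np.clip(p_s, 1e-6, 1 - 1e-6)                # avoid odds of 0 or inf
    n_d, n_dbar = d.sum(), len(d) - d.sum()
    odds = (p_s / (1 - p_s)) * (rho_hat / (1 - rho_hat)) * (n_dbar / n_d)
    return odds / (1 + odds)                          # step (b)
```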

**6. Discussion**

Classification accuracy is usually considered to be an intrinsic property of a marker because it does not depend on the population-specific disease prevalence. Predictiveness, on the other hand, integrates classification accuracy and disease prevalence (Pepe et al., 2008a) and characterizes the risk prediction capacity of the marker in a particular population. In this paper, we show the one-to-one relationship between the ROC curve and the predictiveness curve when disease prevalence is fixed. The latter has been proposed as a graphical tool for evaluating a continuous risk prediction marker. We developed methodology for estimating the predictiveness curve based on a parametric ROC model using a case-control study design. The idea of estimating an ROC curve first seems very natural in the retrospective setting considering that criteria for classification accuracy are defined conditional on disease status. The availability of a wide variety of methods for estimating a parametric ROC curve makes this approach even more appealing. We note that the parametric ROC curve methodology can be applied to a cohort study as well by plugging in the sample prevalence.

The main limitation of assuming a parametric ROC model, however, is lack of flexibility in the predictiveness curve estimator, especially at the boundary. Estimating a partial predictiveness curve from a partial ROC curve holds promise for resolving this issue. At the same time, it allows use of the most popular parametric ROC model, the binormal ROC model, which may not be concave over the whole range of FPF but can be restricted to be concave in certain regions. As we have pointed out, an alternative semi-parametric method for predictiveness curve estimation is to fit a risk model using logistic regression and to estimate the marker distribution within cases and controls separately (Huang and Pepe, 2008b, c). This is also an appealing approach since logistic regression is widely used in epidemiological studies. On the other hand, the ROC curve based method has some desirable properties that the logistic regression method lacks. First, the ROC curve estimation method is rank invariant, so it does not matter on what scale the marker is measured, whereas fitting of a parametric risk model depends on the scale of the marker. Second, the ROC curve based method is natural when considering data from heterogeneous or multiple populations. When a marker’s capacity to distinguish cases from controls is invariant across populations, existing methods can be applied to estimate the common ROC curve using data from different populations (Janes and Pepe, 2008b, c; Huang and Pepe, 2008a; Huang et al., 2008). This, together with population-specific prevalences, leads to population-specific predictiveness curve estimates that are more efficient than those estimated from individual populations. Finally, since the ROC curve has been the gold standard for biomarker evaluation for decades, researchers may want to plot it together with the predictiveness curve when they are interested in examining a marker from multiple points of view. When a parametric ROC model is fitted, which is fairly common in practice, deriving the predictiveness curve from it ensures the two curves are estimated under compatible model assumptions.

Some extensions of our methods should be considered. The methods can easily be extended to compare points on predictiveness curves for different markers. When predictiveness curves in subpopulations are of interest, we can estimate the covariate-specific ROC curve using existing ROC regression methods (Alonzo and Pepe, 2002; Pepe and Cai, 2004; Cai and Moskowitz, 2004) and derive the corresponding covariate-specific predictiveness curve by plugging in the disease prevalence for the subpopulation. Here we have focused on disease status at a fixed point in time. When subjects are observed over time, the time dimension makes things more challenging, especially if there is censoring. Methods that incorporate time dependence in an event-time setting require further investigation.

Pepe et al. (2008a) suggested that to evaluate a risk prediction marker, besides estimating its predictiveness, it is also helpful to provide information about the fractions of cases and controls identified as high (low) risk in order to assess the effect of correct and incorrect treatment decisions based on the risk marker. For a given risk threshold *p*, the true and false positive fractions are defined as TPF(*p*) = *P*{*Risk*(*Y*) > *p*|*D* = 1} and FPF(*p*) = *P*{*Risk*(*Y*) > *p*|*D* = 0}. Based on arguments similar to those in Theorem 1, it can be shown that the curve TPF(*p*) vs *p* can be represented as a plot of ROC(*t*) vs *ρ*ROC′(*t*)*/*{*ρ*ROC′(*t*) + (1 − *ρ*)}, while the curve FPF(*p*) vs *p* can be represented as a plot of *t* vs *ρ*ROC′(*t*)*/*{*ρ*ROC′(*t*) + (1 − *ρ*)}. So the rank-invariant parametric ROC curve based strategy developed in this work can readily be applied to estimate these classification performance curves as well, for a full evaluation of a marker.
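Under the binormal model these two curves are traced directly from the fitted (*a*, *b*) and *ρ*; a brief sketch with hypothetical parameter values:

```python
import numpy as np
from scipy.stats import norm

a, b, rho = 1.0, 0.9, 0.2                    # hypothetical fitted values
t = np.linspace(1e-4, 1 - 1e-4, 500)         # FPF grid
z = norm.ppf(t)
roc = norm.cdf(a + b * z)
roc_deriv = b * norm.pdf(a + b * z) / norm.pdf(z)

p = rho * roc_deriv / (rho * roc_deriv + 1 - rho)   # risk threshold at each t
tpf, fpf = roc, t                            # plot tpf vs p and fpf vs p
```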

**Acknowledgments**

The authors are grateful to the reviewers for constructive comments that led to substantial improvement of the paper. The authors thank Dr Holly Janes, Dr Yingye Zheng, and Dr Ziding Feng for helpful discussions and comments. Support for this research was provided by NIH grant GM-54438 and NCI grant CA86368.

**Appendix: Proof of Theorem 2**

Let *v*^{*} be the x-coordinate of the point where the predictiveness curve crosses the horizontal line at the prevalence, i.e. *R*(*v*^{*}) = *ρ*. Let *t*^{*} be the FPF value corresponding to *v*^{*}, i.e. 1 − (1 − *ρ*)*t*^{*} − *ρ*ROC(*t*^{*}) = *v*^{*}. Then

$$\begin{array}{l}\text{TG}={\int}_{0}^{{v}^{*}}\{\rho -R(v)\}\,dv-{\int}_{{v}^{*}}^{1}\{\rho -R(v)\}\,dv=2\left\{\rho {v}^{*}-{\int}_{0}^{{v}^{*}}R(v)\,dv\right\}\\ =2\rho \left\{{v}^{*}-1+\text{ROC}({t}^{*})\right\}=2\rho (1-\rho )\left\{\text{ROC}({t}^{*})-{t}^{*}\right\},\end{array}$$

where the second equality uses ${\int}_{0}^{1}R(v)\,dv=\rho $, the third uses ${\int}_{0}^{{v}^{*}}R(v)\,dv=\rho \{1-\text{ROC}({t}^{*})\}$ (obtained from the substitution $v=1-(1-\rho )t-\rho \,\text{ROC}(t)$), and the last uses the definition of ${t}^{*}$.

Also we have

$$\frac{\rho \,\text{RO}{\text{C}}^{\prime}({t}^{*})}{\rho \,\text{RO}{\text{C}}^{\prime}({t}^{*})+(1-\rho )}=R({v}^{*})=\rho ,\qquad \text{so that}\qquad \text{RO}{\text{C}}^{\prime}({t}^{*})=1.$$

Now consider the function ROC(*t*) − *t*; its derivative is ROC′(*t*) − 1. For a concave ROC curve, ROC′(*t*) is monotone decreasing, so that ROC′(*t*) − 1 > 0 for *t* < *t*^{*} and ROC′(*t*) − 1 < 0 for *t* > *t*^{*}. So the maximum of ROC(*t*) − *t* is achieved at *t*^{*}, which completes the proof.

**References**

- Alonzo TA, Pepe MS. Distribution-free ROC analysis using binary regression techniques. Biostatistics. 2002;3(3):421–432. [PubMed]
- Baker SG, Kramer BS, Srivastava S. Markers for early detection of cancer: Statistical guidelines for nested case-control studies. BMC Medical Research Methodology. 2002;2:4. [PMC free article] [PubMed]
- Barlow RE, Bartholomew DJ, Bremner JM, Brunk HD. Statistical Inference Under Order Restrictions: The Theory and Application of Isotonic Regression. Wiley; London: 1972.
- Bura E, Gastwirth JL. The binary regression quantile plot: assessing the importance of predictors in binary regression visually. Biometrical Journal. 2001;43 (1):5–21.
- Cai T, Moskowitz CS. Semi-parametric estimation of the binormal ROC curve for a continuous diagnostic test. Biostatistics. 2004;5(4):573–586. [PubMed]
- Campbell G, Ratnaparkhi MV. An application of Lomax distributions in receiver operating characteristic (ROC) curve analysis. Communications in Statistics. 1993;22:1681–1697.
- Cook NR. Use and misuse of the receiver operating characteristic curve in risk prediction. Circulation. 2007;115:928–935. [PubMed]
- Dodd L, Pepe MS. Semiparametric regression for the area under the receiver operating characteristic curve. JASA. 2003;98(462):409–417.
- Dorfman DD, Alf E. Maximum likelihood estimation of parameters of signal detection theory and determination of confidence intervals-rating method data. Journal of Mathematical Psychology. 1969;6:487–496.
- Dorfman DD, Berbaum KS, Metz CE, Lenth RV, Hanley JA, Dagga HA. Proper receiver operating characteristic analysis: The bigamma model. Academic Radiology. 1996;4:138–149. [PubMed]
- Egan JP. Signal detection theory and ROC analysis. Academic Press; New York: 1975.
- Faraggi D, Reiser B. Estimation of the area under the ROC curve. Statistics in Medicine. 2002;21(20):3093–3106. [PubMed]
- Fluss R, Faraggi D, Reiser B. Estimation of the Youden index and its associated cutoff point. Biometrical Journal. 2005;47(4):458–472. [PubMed]
- Gail MH, Pfeiffer RM. On criteria for evaluating models of absolute risk. Biostatistics. 2005;6(2):227–239. [PubMed]
- Greenhouse SW, Mantel N. The evaluation of diagnostic tests. Biometrics. 1950;6:399–412. [PubMed]
- Green DM, Swets JA. Signal detection theory and psychophysics. Wiley; New York: 1966.
- Hanley JA. The robustness of the ‘binormal’ assumptions used in fitting ROC curves. Medical Decision Making. 1988;143:29–36. [PubMed]
- Hsieh F, Turnbull BW. Nonparametric and semiparametric estimation of the receiver operating characteristic curve. Annals of Statistics. 1996;24:25–40.
- Huang Y. Evaluating the predictiveness of continuous biomarkers. UW thesis 2007
- Huang Y, Pepe MS, Feng Z. Evaluating the predictiveness of a continuous marker. Biometrics. 2007;63(4):1181–1188. [PMC free article] [PubMed]
- Huang Y, Feng Z, Fong Y. UW Biostatistics Working Paper Series. 2008. Borrowing Information across Populations in Estimating Positive and Negative Predictive Values. Working Paper 337. [PMC free article] [PubMed]
- Huang Y, Pepe MS. Biomarker Evaluation Using the Controls as a Reference Population. Biostatistics. 2008a In Press. [PMC free article] [PubMed]
- Huang Y, Pepe MS. UW Biostatistics Working Paper Series. 2008b. Semiparametric and nonparametric methods for evaluating risk prediction markers in case-control studies. Working Paper 333. [PMC free article] [PubMed]
- Huang Y, Pepe MS. UW Biostatistics Working Paper Series. 2008c. Semiparametric methods for evaluating the covariate-specific predictiveness of continuous markers in matched case-control studies. Working Paper 329. [PMC free article] [PubMed]
- Janes H, Pepe MS. Matching in studies of classification accuracy: Implications for bias, efficiency, and assessment of incremental value. Biometrics. 2008a;64(1):1–9. [PubMed]
- Janes H, Pepe MS. Adjusting for covariates in studies of diagnostic, screening, or prognostic markers: an old concept in a new setting. American Journal of Epidemiology. 2008b;168(1):89–97. [PubMed]
- Janes H, Pepe MS. Adjusting for covariate effects on classification accuracy using the covariate-adjusted receiver operating characteristic curve. Biometrika. 2008c In Press. [PMC free article] [PubMed]
- Jiang Y, Metz CE, Nishikawa RM. A receiver operating characteristic partial area index for highly sensitive diagnostic tests. Radiology. 1996;201:745–750. [PubMed]
- Lomax HS. Business failure; Another example of the analysis of failure data. JASA. 1954;49:847–852.
- McClish DK. Analyzing a portion of the ROC curve. Medical Decision Making. 1989;9:190–195. [PubMed]
- Metz C, Herman BA, Shen JH. Maximum likelihood estimation of receiver operating characteristic (ROC) curves from continuously distributed data. Statistics in Medicine. 1998;17:1033–1053. [PubMed]
- Metz CE, Pan X. “Proper” binormal ROC curves: Theory and maximum likelihood estimation. Journal of Mathematical Psychology. 1999;43:1–33. [PubMed]
- Obuchowski NA. Receiver operating characteristic curves and their use in radiology. Radiology. 2003;229:3–8. [PubMed]
- Pencina MJ, D’Agostino RB Sr, D’Agostino RB Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Statistics in Medicine. 2008;27(2):157–172. [PubMed]
- Pepe MS. An interpretation for the ROC curve and inference using GLM procedures. Biometrics. 2000;56:352–359. [PubMed]
- Pepe MS. The Statistical Evaluation of Medical Tests for Classification and Prediction. New York: Oxford University Press; 2003.
- Pepe MS, Cai T. The analysis of placement values for evaluating discriminatory measures. Biometrics. 2004;60:528–535. [PubMed]
- Pepe MS, Etzioni R, Feng Z, Potter JD, Thompson ML, Thornquist M, Winget M, Yasui Y. Phases of biomarker development for early detection of cancer. Journal of the National Cancer Institute. 2001;93(14):1054–1061. [PubMed]
- Pepe MS, Feng Z, Huang Y, Longton GM, Prentice R, Thompson IM, Zheng Y. Integrating the predictiveness of a marker with its performance as a classifier. American Journal of Epidemiology. 2008a;167(3):362–368. [PMC free article] [PubMed]
- Pepe MS, Feng Z, Janes H, Bossuyt PM, Potter JD. Pivotal evaluation of the accuracy of a biomarker used for classification or prediction: standards for study design. Journal of the National Cancer Institute. 2008b In Press. [PMC free article] [PubMed]
- Thompson ML, Zucchini W. On the statistical analysis of ROC curves. Statistics in Medicine. 1989;8:1277–1290. [PubMed]
- Thompson IM, Pauler Ankerst D, Chi C. Assessing prostate cancer risk: results from the prostate cancer prevention trial. Journal of the National Cancer Institute. 2006;98:529–534. [PubMed]
- Tosteson ANA, Begg CB. A general regression methodology for ROC curve estimation. Medical Decision Making. 1988;3:204–215. [PubMed]
- Wieand S, Gail MH, James BR, James KL. A family of nonparametric statistics for comparing diagnostic markers with paired or unpaired data. Biometrika. 1989;76:585–592.
- Youden WJ. Index for rating diagnostic tests. Cancer. 1950;3(1):32–35. [PubMed]
- Zou KH, Hall WJ. Two transformation models for estimating an ROC curve derived from continuous data. Journal of Applied Statistics. 2000;27:621–631.
