
Acad Radiol. Author manuscript; available in PMC 2010 March 24.


PMCID: PMC2844793

NIHMSID: NIHMS13467

College of Optical Sciences, The University of Arizona, 1630 East University Blvd., Tucson, Arizona 85721, (TEL) 520-626-7280, (FAX) 520-626-2892, (email) clarkson@radiology.arizona.edu

The publisher's final edited version of this article is available at Acad Radiol


Current approaches to ROC analysis use the MRMC (multiple-reader, multiple-case) paradigm in which several readers read each case and their ratings (or scores) are used to construct an estimate of the area under the ROC curve or some other ROC-related parameter. Standard practice is to decompose the parameter of interest according to a linear model into terms that depend in various ways on the readers, cases and modalities. Though the methodological aspects of MRMC analysis have been studied in detail, the literature on the probabilistic basis of the individual terms is sparse. In particular, few papers state what probability law applies to each term and what underlying assumptions are needed for the assumed independence. When probability distributions are specified for these terms, these distributions are assumed to be Gaussians.

This paper approaches the MRMC problem from a mechanistic perspective. For a single modality, three sources of randomness are included: the images, the reader skill and the reader uncertainty. The probability law on the reader scores is written in terms of three nested conditional probabilities, and random variables associated with this probability are referred to as triply stochastic.

In this paper, we present the probabilistic MRMC model and apply this model to the Wilcoxon statistic. The result is a seven-term expansion for the variance of the figure of merit. We relate the terms in this expansion to those in the standard, linear MRMC model. Finally, we use the probabilistic model to derive constraints on the coefficients in the seven-term expansion.

The multiple-reader, multiple-case paradigm is often used to assess the performance of a new medical-imaging system or to compare the performances of two or more such systems. In this paradigm, we first select a random sample of abnormal and normal cases. Each of these cases is read individually by each member of a sample of readers. Each reader produces a test statistic for each image that measures his or her confidence that an abnormality is present. This array of test statistics is used to generate a figure of merit. An important issue is the variance of this figure of merit as a function of the number of readers and cases. This is the issue addressed by standard, linear MRMC models [1–3] and by the probabilistic model presented here.
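The nested structure of an MRMC study (readers, cases, and per-reading internal noise) can be sketched with a toy generative model. The Gaussian case, reader, and noise distributions below are illustrative assumptions, not the paper's model:

```python
import random

def simulate_scores(n_readers, n0, n1, seed=0):
    """Toy MRMC generative model: each case has a latent value
    (signal-present cases are shifted up), each reader has a skill
    offset, and every reading adds fresh internal noise."""
    rng = random.Random(seed)
    cases = ([rng.gauss(0.0, 1.0) for _ in range(n0)] +      # signal-absent
             [rng.gauss(1.5, 1.0) for _ in range(n1)])       # signal-present
    readers = [rng.gauss(0.0, 0.3) for _ in range(n_readers)]
    # T is N_R x (N0 + N1): one row of test statistics per reader.
    return [[c + r + rng.gauss(0.0, 0.5) for c in cases] for r in readers]

T = simulate_scores(n_readers=3, n0=4, n1=5)
```

Each row of `T` is one reader's test statistics for the full case sample; this is the array from which the figure of merit is computed.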

The linear model presupposes that the figure of merit can be decomposed as a sum of statistically uncorrelated terms. For a single modality there are five terms. The first term is the mean value of the figure of merit and is a constant. The remaining four terms, the reader term, the case term, the reader-case term, and the internal-noise term, are random variables. The reader term is a function of the reader sample only. The case term is a function of the case sample only. The reader-case term is a function of both samples. Finally, the internal-noise term captures all sources of variability not accounted for by the previous three terms.

The conventional assumption for the linear model is that the random terms in the linear decomposition are mutually independent and normally distributed [1]. As with any model-based decomposition, this assumption cannot be verified directly. In particular, a normality assumption cannot be valid if the figure of merit is the area under the ROC curve since this quantity must be between 0 and 1.

In this paper, we present a probabilistic formulation of the MRMC problem. We account for case variability, reader variability, and reader uncertainty. We then use the methods and concepts of doubly- and triply-stochastic variables to derive an exact seven-term decomposition of the variance of the Wilcoxon statistic [4, 5] as a function of the numbers of readers and cases. Our results extend the work of others who have studied the statistical properties of the Wilcoxon or Mann-Whitney statistics [6–10]. This paper expands upon results first presented in [11]. The probabilistic model introduced in that paper has already been used by B. Gallas [12] to develop a “one-shot” estimate of the components of variance for the Wilcoxon statistic with multiple readers and multiple cases. Here, we provide details of the theoretical foundations and subsequent derivations for the components of variance of the Wilcoxon statistic. We also derive constraints on the MRMC coefficients that result from the theoretical model.

In the probabilistic development [11], there is no need to define intermediate and unobservable random variables. The probabilistic assumptions that go into our model are derived from the physics and intuition of the problem, as opposed to the independence assumptions imposed on the conventional linear model to make it tractable. The probabilistic approach also allows us to derive constraints on the coefficients in the seven-term expansion of the variance, constraints which cannot be derived from a linear model. Indeed, the normality assumption used in the conventional linear model is inconsistent with the statistical properties used to derive these constraints.

Nevertheless, we show that we may rigorously define a decomposition of the figure of merit in terms of uncorrelated, but not necessarily independent or normal, random variables that correspond to the terms in the standard linear model. The variances of these random variables can be identified with terms, or combinations of terms, in the seven-term expansion. Finally, we show that the seven-term expansion turns into a ten-term expansion when replication of the entire study is considered.

MRMC methodology accounts for multiple readers each reading multiple cases. In general, we will assume that a reader analyzes an image (case) and produces a test statistic that signifies the reader’s confidence that the image is abnormal. We do not assume that a given reader will produce the same value for the test statistic on multiple readings of the same image. This is due to the internal noise or reader jitter inherent in the diagnostic process. Thus, the fundamental random quantities in the MRMC problem are the case sample, the reader sample, and the resulting array of test statistics.

The image matrix **G** (the cases) is composed of column vectors, each representing an image. We subdivide this matrix into submatrices of signal-absent cases (i.e., normal cases) and signal-present cases (i.e., abnormal cases),

$$\mathit{G}=\left[\begin{array}{ll}{\mathit{G}}_{0}\hfill & {\mathit{G}}_{1}\hfill \end{array}\right].$$

(1)

The matrix *G*_{0} is *M* × *N*_{0} and *G*_{1} is *M* × *N*_{1}, where *M* is the number of pixels in an image, *N*_{0} is the number of signal-absent cases, and *N*_{1} is the number of signal-present cases. These submatrices are composed of the individual image vectors:

$${\mathit{G}}_{0}=\left[\begin{array}{llll}{\mathit{g}}_{01}\hfill & {\mathit{g}}_{02}\hfill & \cdots \hfill & {\mathit{g}}_{0{N}_{0}}\hfill \end{array}\right]$$

(2)

$${\mathit{G}}_{1}=\left[\begin{array}{llll}{\mathit{g}}_{11}\hfill & {\mathit{g}}_{12}\hfill & \cdots \hfill & {\mathit{g}}_{1{N}_{1}}\hfill \end{array}\right].$$

(3)

The *g*_{0i} and *g*_{1j} are column vectors representing the individual signal-absent and signal-present images, respectively.

The reader parameters are also formed into column vectors *γ _{r}*, one for each of the *N _{R}* readers in the sample. These vectors are collected into the reader matrix

$$\mathbf{\Gamma}=\left[\begin{array}{llll}{\mathit{\gamma}}_{1}\hfill & {\mathit{\gamma}}_{2}\hfill & \cdots \hfill & {\mathit{\gamma}}_{{N}_{R}}\hfill \end{array}\right].$$

(4)

This is a *K* × *N _{R}* matrix, where *K* is the number of parameters needed to characterize a reader.

A reader produces a test statistic for each image. For a given case and reader, this test statistic is a random variable due to internal noise. The test statistics for all of the readers and cases are collected into a matrix **T**. This matrix is subdivided into submatrices corresponding to signal-absent and signal-present cases,

$$\mathit{T}=\left[\begin{array}{ll}{\mathit{T}}_{0}\hfill & {\mathit{T}}_{1}\hfill \end{array}\right].$$

(5)

This is an *N _{R}* × (*N*_{0} + *N*_{1}) matrix. The submatrices may be written in terms of row vectors, one for each reader:

$${\mathit{T}}_{0}=\left[\begin{array}{c}{\mathit{t}}_{01}\\ {\mathit{t}}_{02}\\ \vdots \\ {\mathit{t}}_{0{N}_{R}}\end{array}\right]\phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.16667em}{0ex}}{\mathit{T}}_{1}=\left[\begin{array}{c}{\mathit{t}}_{11}\\ {\mathit{t}}_{12}\\ \vdots \\ {\mathit{t}}_{1{N}_{R}}\end{array}\right].$$

(6)

We can also concatenate these row vectors to make a vector of all test statistics for a given reader:

$${\mathit{t}}_{r}=\left[\begin{array}{ll}{\mathit{t}}_{0r}\hfill & {\mathit{t}}_{1r}\hfill \end{array}\right].$$

(7)

We make some statistical assumptions at this point. The cases are assumed to be drawn independently from signal-absent and signal-present distributions. The reader parameter vectors are assumed to be drawn independently from a distribution of such vectors. The readers are also assumed to be independent of the cases. Finally, the joint conditional density for the noisy test statistics is a product of conditional densities for the individual reader test statistics. Furthermore, this latter distribution depends only on the given reader and the cases. These assumptions can be summarized as follows:

$$pr(\mathbf{\Gamma},\mathit{G})=p{r}_{\mathrm{\Gamma}}(\mathbf{\Gamma})p{r}_{G}(\mathit{G})$$

(8)

$$p{r}_{\mathrm{\Gamma}}(\mathbf{\Gamma})=\prod _{r=1}^{{N}_{R}}p{r}_{\gamma}({\mathit{\gamma}}_{r})$$

(9)

$$p{r}_{G}(\mathit{G})=\prod _{i=1}^{{N}_{0}}p{r}_{0}({\mathit{g}}_{0i})\prod _{j=1}^{{N}_{1}}p{r}_{1}({\mathit{g}}_{1j})$$

(10)

$$pr(\mathit{T}\mid \mathbf{\Gamma},\mathit{G})=\prod _{r=1}^{{N}_{R}}p{r}_{t}({\mathit{t}}_{r}\mid {\mathit{\gamma}}_{r},\mathit{G}).$$

(11)

The fact that the readers are independent of the cases does not imply that there is no reader-case interaction. In fact, the reader-case interaction is embodied in the distribution *pr _{t}*(*t _{r}* | *γ _{r}*, *G*), which depends jointly on the reader parameters and the case sample.

If *x* is a random variable with conditional PDF *pr _{x}*(*x* | *y*, *z*), and *z* is a random variable with conditional PDF *pr _{z}*(*z* | *y*), then the notation

$${\langle f(x)\rangle}_{x,z\mid y}=\int \int f(x)p{r}_{x}(x\mid y,z)p{r}_{z}(z\mid y)\mathit{dxdz}$$

(12)

stands for the conditional expectation of *f*(*x*) conditioned on *y*. In this expression we are averaging over the distribution of *x* given (*z*, *y*), and then averaging over the distribution of *z* given *y*. To perform this operation we need the conditional densities *pr _{x}*(*x* | *y*, *z*) and *pr _{z}*(*z* | *y*). Equivalently, this expectation may be written as

$${\langle f(x)\rangle}_{x,z\mid y}=\int f(x)p{r}_{x}(x\mid y)dx.$$

(13)

However, from an operational point of view, we may not know the density *pr _{x}*(*x* | *y*) that appears in Eqn. 13. If *z* is independent of *y*, then Eqn. 12 becomes

$${\langle f(x)\rangle}_{x,z\mid y}=\int \int f(x)p{r}_{x}(x\mid y,z)p{r}_{z}(z)\mathit{dxdz}.$$

(14)

Note that Eqn. 12 includes the case where *x* is a deterministic function *x* (*y*, *z*) of *y* and *z*, in which case

$${\langle f(x)\rangle}_{z\mid y}=\int f(x(y,z))p{r}_{z}(z\mid y)dz.$$

(15)
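The iterated average in Eqn. 12 can be illustrated numerically. In the sketch below there is no conditioning variable *y*; the toy choices *z* ~ N(0, 1) and *x* | *z* ~ N(*z*, 1) are assumptions made only for illustration. For *f*(*x*) = *x*², the doubly stochastic mean is Var(*z*) + Var(*x* | *z*) = 2.

```python
import random

def nested_expectation(f, n_outer=2000, n_inner=50, seed=1):
    """Monte Carlo version of the iterated average in Eqn. 12:
    average f(x) over x given z, then average over z.
    Toy model (an assumption): z ~ N(0,1), x | z ~ N(z,1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        z = rng.gauss(0.0, 1.0)
        inner = sum(f(rng.gauss(z, 1.0)) for _ in range(n_inner)) / n_inner
        total += inner
    return total / n_outer

# For f(x) = x**2 the exact answer is Var(z) + Var(x | z) = 2.
est = nested_expectation(lambda x: x * x)
```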

Initially we will assume that the figure of merit has the following form

$$\widehat{A}(\mathit{T})=\frac{1}{{N}_{R}}\sum _{r=1}^{{N}_{R}}\widehat{a}({\mathit{t}}_{r})$$

(16)

where *â* (** t**) is some figure of merit for an individual reader. Later we will be more specific about this function.

As an example of the probabilistic method and the notation introduced above, we compute the mean and variance of the figure of merit shown in Eqn. 16. From the independence assumptions on the readers the mean of the figure of merit can be written as

$$\langle \widehat{A}\rangle ={\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}.$$

(17)

The inner angle bracket averages over internal noise with the reader and case sample fixed. The outer angle bracket is then the average of this quantity over readers and case samples.

For the expectation of the square of *Â* we have a double sum, which we decompose into a single sum where the indices match, and a double sum where the indices do not match (see Appendix). The end result is

$$\langle {\widehat{A}}^{2}\rangle =\frac{1}{{N}_{R}}{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}+\left(1-\frac{1}{{N}_{R}}\right){\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}.$$

(18)

Putting the results we have so far together we get an expression for the variance of *Â* in terms of moments of *â* (** t**):

$$\begin{array}{l}\text{Var}\left(\widehat{A}\right)={\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}^{2}\\ +{\scriptstyle \frac{1}{{N}_{R}}}\left[{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}\right].\end{array}$$

(19)

The three moments we need to calculate in order to proceed further are

$$\#1:{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}^{2}$$

(20)

$$\#2:{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}$$

(21)

$$\#3:{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}.$$

(22)

Equation 19 is an exact expression for the variance of the overall figure of merit in terms of expectations of the single-reader figure of merit. In order to compute these moments we need to specify our single-reader figure of merit *â* (** t**). In the next section we compute these three moments when *â* (** t**) is the Wilcoxon statistic.

Suppose reader ** γ** produces test statistics

$$\mathit{t}=\left[\begin{array}{ll}{\mathit{t}}_{0}\hfill & {\mathit{t}}_{1}\hfill \end{array}\right]$$

(23)

with

$${\mathit{t}}_{0}=\left[\begin{array}{llll}{t}_{01}\hfill & {t}_{02}\hfill & \dots \hfill & {t}_{0{N}_{0}}\hfill \end{array}\right]$$

(24)

$${\mathit{t}}_{1}=\left[\begin{array}{llll}{t}_{11}\hfill & {t}_{12}\hfill & \dots \hfill & {t}_{1{N}_{1}}\hfill \end{array}\right].$$

(25)

The Wilcoxon statistic *â* (** t**) as a function of these test statistics is

$$\widehat{a}(\mathit{t})=\frac{1}{{N}_{0}{N}_{1}}\sum _{i=1}^{{N}_{0}}\sum _{j=1}^{{N}_{1}}s({t}_{1j}-{t}_{0i}).$$

(26)

In this equation *s* (*t*) is the step function, although that fact will not play a role in most of the calculations.
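A direct transcription of Eqn. 26 for a single reader is straightforward. The tie-handling convention below (scoring *s*(0) as 1/2) is a common choice that the paper does not specify:

```python
def wilcoxon_statistic(t0, t1):
    """Single-reader Wilcoxon statistic (Eqn. 26).  s is the unit step;
    ties are scored as 1/2, a common convention assumed here."""
    def s(d):
        return 1.0 if d > 0 else (0.5 if d == 0 else 0.0)
    return sum(s(b - a) for a in t0 for b in t1) / (len(t0) * len(t1))

# Four case pairs: s(2) = s(1) = 1 and s(-0.5) = s(-1.5) = 0, so a_hat = 0.5.
a_hat = wilcoxon_statistic([1.0, 2.0], [3.0, 0.5])
```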

We will also use one more statistical assumption:

$$p{r}_{t}(\mathit{t}\mid \mathit{\gamma},\mathit{G})=\prod _{i=1}^{{N}_{0}}p{r}_{t}({t}_{0i}\mid \mathit{\gamma},{\mathit{g}}_{0i})\prod _{j=1}^{{N}_{1}}p{r}_{t}({t}_{1j}\mid \mathit{\gamma},{\mathit{g}}_{1j})$$

(27)

This equation tells us that, conditional on the reader and cases, the components of ** t** are independent. It also tells us that the conditional distribution for the internal noise on an individual test statistic only depends on the reader parameter vector and the corresponding case. If, for example, the internal noise is Gaussian, then the mean and variance of the test statistic for a given reader will depend only on the case at hand and the reader parameter vector.

We will show that the statistical assumptions provided above imply that the variance of the Wilcoxon statistic can be expanded as

$$\text{Var}\left[\widehat{A}\right]=\frac{{\alpha}_{1}}{{N}_{0}}+\frac{{\alpha}_{2}}{{N}_{1}}+\frac{{\alpha}_{3}}{{N}_{0}{N}_{1}}+\frac{{\alpha}_{4}}{{N}_{R}}+\frac{{\alpha}_{5}}{{N}_{R}{N}_{0}}+\frac{{\alpha}_{6}}{{N}_{R}{N}_{1}}+\frac{{\alpha}_{7}}{{N}_{R}{N}_{0}{N}_{1}}.$$

(28)

We will call this the seven-term expansion for the variance of *Â* and find explicit expressions for the coefficients *α _{n}*. These expressions will, in turn, lead to constraints on these coefficients. For any given set of values for *N _{R}*, *N*_{0}, and *N*_{1}, these constraints yield an upper bound on the variance of *Â*.
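Once values for the coefficients are available, evaluating Eqn. 28 is trivial. A minimal sketch, with hypothetical coefficient values chosen only for illustration:

```python
def mrmc_variance(alpha, n_r, n0, n1):
    """Evaluate the seven-term expansion (Eqn. 28) for Var[A-hat],
    given coefficients alpha = (alpha_1, ..., alpha_7)."""
    a1, a2, a3, a4, a5, a6, a7 = alpha
    return (a1 / n0 + a2 / n1 + a3 / (n0 * n1)
            + a4 / n_r + a5 / (n_r * n0) + a6 / (n_r * n1)
            + a7 / (n_r * n0 * n1))

# Hypothetical coefficients: doubling the readers shrinks only the
# last four terms, so the variance does not drop all the way to half.
v1 = mrmc_variance([0.01] * 7, n_r=5, n0=50, n1=50)
v2 = mrmc_variance([0.01] * 7, n_r=10, n0=50, n1=50)
```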

The three moments shown in Eqns. 20–22 are all that we need to derive Eqn. 28.

For the first moment (Eqn. 20) we have

$${\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}=\frac{1}{{N}_{0}{N}_{1}}\sum _{i=1}^{{N}_{0}}\sum _{j=1}^{{N}_{1}}{\langle s({t}_{1j}-{t}_{0i})\rangle}_{{t}_{0i},{t}_{1j}\mid \mathit{\gamma},{\mathit{g}}_{0i},{\mathit{g}}_{1j}}$$

(29)

$$=\frac{1}{{N}_{0}{N}_{1}}\sum _{i=1}^{{N}_{0}}\sum _{j=1}^{{N}_{1}}\overline{s}(\mathit{\gamma},{\mathit{g}}_{0i},{\mathit{g}}_{1j}).$$

(30)

The last equality introduces $\overline{s}(\mathit{\gamma},{\mathit{g}}_{0i},{\mathit{g}}_{1j})$, the expectation of *s*(*t*_{1j} − *t*_{0i}) over the internal noise for a fixed reader and pair of cases. Averaging over the reader and case samples, and then squaring, gives the first moment:

$${\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}^{2}={\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}={\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}={\mu}^{2}.$$

(31)

The penultimate equality here introduces $\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})$, which is $\overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})$ averaged over the reader distribution. The final equality defines *μ*, the mean of the figure of merit.

For the second of the three moments we average over cases after squaring. This gives

$${\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}=\frac{1}{{N}_{0}^{2}{N}_{1}^{2}}\sum _{i=1}^{{N}_{0}}\sum _{j=1}^{{N}_{1}}\sum _{k=1}^{{N}_{0}}\sum _{l=1}^{{N}_{1}}{\langle \overline{\overline{s}}({\mathit{g}}_{0i},{\mathit{g}}_{1j})\overline{\overline{s}}({\mathit{g}}_{0k},{\mathit{g}}_{1l})\rangle}_{\mathit{G}}.$$

(32)

This sum involves averaging over observers before multiplying, and then averaging over cases. By separating the sum into the cases where both indices match, one index matches, and no indices match, we get four terms:

$$\begin{array}{l}{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}=\frac{1}{{N}_{0}{N}_{1}}{\langle {\overline{\overline{s}}}^{2}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}\\ +\frac{1}{{N}_{0}}\left(1-\frac{1}{{N}_{1}}\right){\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1}^{\prime})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\\ +\frac{1}{{N}_{1}}\left(1-\frac{1}{{N}_{0}}\right){\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}\\ +\left(1-\frac{1}{{N}_{0}}-\frac{1}{{N}_{1}}+\frac{1}{{N}_{0}{N}_{1}}\right){\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}\end{array}$$

(33)

We are now in a position to compute the first part of the overall variance (Eqn. 19), which is the variance of the noise-and-reader-averaged figure of merit with respect to the case randomness. The result is three terms

$${\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}^{2}=\frac{{\alpha}_{1}}{{N}_{0}}+\frac{{\alpha}_{2}}{{N}_{1}}+\frac{{\alpha}_{3}}{{N}_{0}{N}_{1}}$$

(34)

with the coefficients given by

$${\alpha}_{1}={\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1}^{\prime})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}-{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}$$

(35)

$${\alpha}_{2}={\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}-{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}$$

(36)

and

$$\begin{array}{l}{\alpha}_{3}={\langle {\overline{\overline{s}}}^{2}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}-{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1}^{\prime})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\\ -{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}+{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}.\end{array}$$

(37)

These equations are very similar to those in Hoeffding [6] and Lehmann [10]. By using independence of cases, we may simplify these expressions. The results are,

$${\alpha}_{1}=\text{Var}\left[{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{1}\mid {\mathit{g}}_{0}}\right]$$

(38)

$${\alpha}_{2}=\text{Var}\left[{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0}\mid {\mathit{g}}_{1}}\right]$$

(39)

$${\alpha}_{3}=\text{Var}[\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})]-{\alpha}_{1}-{\alpha}_{2}.$$

(40)

In the *α*_{1} expression the quantity inside the square brackets is a random variable since *g*_{1} has been averaged over but *g*_{0} has not. The coefficient *α*_{1} is then the variance of this random variable. Similar remarks apply to *α*_{2}.
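As a numerical illustration of Eqn. 38, consider a toy model with no reader or internal-noise randomness, so that $\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})$ reduces to *s*(*g*_{1} − *g*_{0}) with scalar cases *g*_{0} ~ N(0, 1) and *g*_{1} ~ N(*μ*_{1}, 1); the inner average over *g*_{1} given *g*_{0} is then the normal CDF evaluated at *μ*_{1} − *g*_{0}. All of these distributional choices are assumptions made for illustration:

```python
import math, random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def estimate_alpha1(mu1=1.0, n=20000, seed=2):
    """Monte Carlo sketch of Eqn. 38 for the toy model described above:
    the inner average over g1 given g0 is phi(mu1 - g0), and alpha_1
    is the variance of that quantity over g0 ~ N(0,1)."""
    rng = random.Random(seed)
    vals = [phi(mu1 - rng.gauss(0.0, 1.0)) for _ in range(n)]
    m = sum(vals) / n
    return sum((v - m) ** 2 for v in vals) / n

alpha1 = estimate_alpha1()   # must satisfy 0 <= alpha1 <= 1/4 (Eqns. 64, 67)
```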

For the third moment in our list we square before doing any averaging. This leads to a fourfold sum

$${\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}=\frac{1}{{N}_{0}^{2}{N}_{1}^{2}}\sum _{i=1}^{{N}_{0}}\sum _{j=1}^{{N}_{1}}\sum _{k=1}^{{N}_{0}}\sum _{l=1}^{{N}_{1}}{\langle {\langle s({t}_{1j}-{t}_{0i})s({t}_{1l}-{t}_{0k})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}.$$

(41)

As before we can break this down into four sums depending on which indices match, and use our independence assumptions to reduce this expectation to four terms:

$$\begin{array}{l}{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}=\frac{1}{{N}_{0}{N}_{1}}{\langle {\langle {s}^{2}({t}_{1}-{t}_{0})\rangle}_{{t}_{0},{t}_{1}\mid \mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}\\ +\frac{1}{{N}_{0}}\left(1-\frac{1}{{N}_{1}}\right){\langle {\langle s({t}_{1}-{t}_{0})s({t}_{1}^{\prime}-{t}_{0})\rangle}_{{t}_{0},{t}_{1},{t}_{1}^{\prime}\mid \mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\\ +\frac{1}{{N}_{1}}\left(1-\frac{1}{{N}_{0}}\right){\langle {\langle s({t}_{1}-{t}_{0})s({t}_{1}-{t}_{0}^{\prime})\rangle}_{{t}_{0},{t}_{0}^{\prime},{t}_{1}\mid \mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}\\ +\left(1-\frac{1}{{N}_{0}}-\frac{1}{{N}_{1}}+\frac{1}{{N}_{0}{N}_{1}}\right){\langle {\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}.\end{array}$$

(42)

The last term may require some explanation, which is provided in the Appendix. If we use the fact that *s*^{2} (*t*) = *s* (*t*), then the first term reduces to

$${\langle {\langle {s}^{2}({t}_{1}-{t}_{0})\rangle}_{{t}_{0},{t}_{1}\mid \mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}={\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}={\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}=\mu .$$

(43)

We are now ready to compute the second part of the overall variance (Eqn. 19). Combining the expressions we just derived with earlier ones we have

$$\frac{1}{{N}_{R}}\left[{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}\right]=\frac{{\alpha}_{4}}{{N}_{R}}+\frac{{\alpha}_{5}}{{N}_{R}{N}_{0}}+\frac{{\alpha}_{6}}{{N}_{R}{N}_{1}}+\frac{{\alpha}_{7}}{{N}_{R}{N}_{0}{N}_{1}},$$

(44)

with

$${\alpha}_{4}={\langle {\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}-{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}$$

(45)

$$\begin{array}{l}{\alpha}_{5}={\langle {\langle s({t}_{1}-{t}_{0})s({t}_{1}^{\prime}-{t}_{0})\rangle}_{{t}_{0},{t}_{1},{t}_{1}^{\prime}\mid \mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\\ -{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1}^{\prime})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}-{\alpha}_{4}\end{array}$$

(46)

$$\begin{array}{l}{\alpha}_{6}={\langle {\langle s({t}_{1}-{t}_{0})s({t}_{1}-{t}_{0}^{\prime})\rangle}_{{t}_{0},{t}_{0}^{\prime},{t}_{1}\mid \mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}\\ -{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}-{\alpha}_{4}\end{array}$$

(47)

and

$${\alpha}_{7}={\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}-{\langle {\overline{\overline{s}}}^{2}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}-{\alpha}_{4}-{\alpha}_{5}-{\alpha}_{6}.$$

(48)

The first two terms in the expression for *α*_{5} together form the average of a conditional variance of a random variable. A similar simplification is possible for *α*_{6} and *α*_{7}. The end results are alternate expressions for these coefficients (see Appendix):

$${\alpha}_{4}=\text{Var}\left[{\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid \mathit{\gamma}}\right]$$

(49)

$${\alpha}_{5}={\langle {\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{1},{\mathit{g}}_{1}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}^{2}\rangle}_{{t}_{0},\mathit{\gamma}\mid {\mathit{g}}_{0}}-{\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{1},{\mathit{g}}_{1}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}\rangle}_{{t}_{0},\mathit{\gamma}\mid {\mathit{g}}_{0}}^{2}\rangle}_{{\mathit{g}}_{0}}-{\alpha}_{4}$$

(50)

$${\alpha}_{6}={\langle {\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{0},{\mathit{g}}_{0}\mid {t}_{1},\mathit{\gamma},{\mathit{g}}_{1}}^{2}\rangle}_{{t}_{1},\mathit{\gamma}\mid {\mathit{g}}_{1}}-{\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{0},{\mathit{g}}_{0}\mid {t}_{1},\mathit{\gamma},{\mathit{g}}_{1}}\rangle}_{{t}_{1},\mathit{\gamma}\mid {\mathit{g}}_{1}}^{2}\rangle}_{{\mathit{g}}_{1}}-{\alpha}_{4}$$

(51)

$${\alpha}_{7}={\langle {\langle {s}^{2}({t}_{1}-{t}_{0})\rangle}_{{t}_{0},{t}_{1},\mathit{\gamma}\mid {\mathit{g}}_{0},{\mathit{g}}_{1}}-{\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{0},{t}_{1},\mathit{\gamma}\mid {\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}-{\alpha}_{4}-{\alpha}_{5}-{\alpha}_{6}.$$

(52)

The quantity in the outer angle brackets in Eqn. 50 is the variance of the step function after it has been averaged over internal noise and cases for the signal-present class. The random variables involved in computing this variance are the reader parameters and the internal noise for a signal-absent case. This variance is then averaged over signal-absent cases. Similar descriptions apply to the bracketed terms in *α*_{6} and *α*_{7}.

To gain more insight into the significance of *α*_{1}, *α*_{2}, and *α*_{3}, we expand
$\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})$ as

$$\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})=\mu +{s}_{0}({\mathit{g}}_{0})+{s}_{1}({\mathit{g}}_{1})+\epsilon ({\mathit{g}}_{0},{\mathit{g}}_{1})$$

(53)

where

$${s}_{0}({\mathit{g}}_{0})={\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{1}\mid {\mathit{g}}_{0}}-\mu $$

(54)

$${s}_{1}({\mathit{g}}_{1})={\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0}\mid {\mathit{g}}_{1}}-\mu .$$

(55)

Thus, *s*_{0} (*g*_{0}) is *s*(*t*_{1} − *t*_{0}) averaged over internal noise, readers and signal-present cases when the signal-absent case is *g*_{0}. Similarly, *s*_{1} (*g*_{1}) is *s*(*t*_{1} − *t*_{0}) averaged over internal noise, readers and signal-absent cases when the signal-present case is *g*_{1}. The random variable *ε* (*g*_{0}, *g*_{1}) is defined by Eqn. 53. It is straightforward to verify that the following expectations and conditional expectations vanish

$${\langle {s}_{0}({\mathit{g}}_{0})\rangle}_{{\mathit{g}}_{0}}=0$$

(56)

$${\langle {s}_{1}({\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{1}}=0$$

(57)

$${\langle \epsilon ({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0}\mid {\mathit{g}}_{1}}=0$$

(58)

$${\langle \epsilon ({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{1}\mid {\mathit{g}}_{0}}=0.$$

(59)

These equations, combined with the fact that *g*_{0} and *g*_{1} are independent, imply that *s*_{0} (*g*_{0}), *s*_{1} (*g*_{1}) and *ε* (*g*_{0}, *g*_{1}) are uncorrelated random variables. This then gives us the expansion

$$\text{Var}[\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})]=\text{Var}[{s}_{0}({\mathit{g}}_{0})]+\text{Var}[{s}_{1}({\mathit{g}}_{1})]+\text{Var}[\epsilon ({\mathit{g}}_{0},{\mathit{g}}_{1})].$$

(60)

From this expansion, Eqns. 38–40, and the definitions above, we can identify the coefficients *α*_{1}, *α*_{2} and *α*_{3}:

$${\alpha}_{\text{1}}=\text{Var}[{s}_{0}({\mathit{g}}_{0})]$$

(61)

$${\alpha}_{2}=\text{Var}[{s}_{1}({\mathit{g}}_{1})]$$

(62)

$${\alpha}_{3}=\text{Var}[\epsilon ({\mathit{g}}_{0},{\mathit{g}}_{1})].$$

(63)
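The additivity in Eqn. 60, and hence the identification of *α*_{1}, *α*_{2}, and *α*_{3}, can be checked exactly on a small discrete model in which *g*_{0} and *g*_{1} each take a few equally likely values and a table of numbers stands in for $\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})$ (the table entries are arbitrary illustrative values):

```python
# Toy discrete model: g0 takes 2 and g1 takes 3 equally likely values,
# and sbar2[i][j] stands in for s-double-bar(g0_i, g1_j).
sbar2 = [[0.2, 0.5, 0.7],
         [0.4, 0.6, 0.9]]
n0v, n1v = len(sbar2), len(sbar2[0])

mu = sum(map(sum, sbar2)) / (n0v * n1v)
s0 = [sum(row) / n1v - mu for row in sbar2]                    # Eqn. 54
s1 = [sum(sbar2[i][j] for i in range(n0v)) / n0v - mu          # Eqn. 55
      for j in range(n1v)]
eps = [[sbar2[i][j] - mu - s0[i] - s1[j] for j in range(n1v)]  # Eqn. 53
       for i in range(n0v)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

total = var([v for row in sbar2 for v in row])
parts = var(s0) + var(s1) + var([e for row in eps for e in row])
# Eqn. 60: the three pieces are uncorrelated, so their variances add.
```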

A random variable that is constrained to be between 0 and 1 has a maximum variance of 1/4. This fact and Eqns. 60–63 above lead to the constraints

$${\alpha}_{1}\ge 0$$

(64)

$${\alpha}_{2}\ge 0$$

(65)

$${\alpha}_{3}\ge 0$$

(66)

$${\alpha}_{1}+{\alpha}_{2}+{\alpha}_{3}\le \frac{1}{4}.$$

(67)

These constraints define a bounded region in the space of points (*α*_{1}, *α*_{2}, *α*_{3}) and thus allow us to compute, for any given values of *N*_{0} and *N*_{1}, the maximum possible contribution to the variance of *Â* from the first three terms in the seven-term expansion:

$$\frac{{\alpha}_{1}}{{N}_{0}}+\frac{{\alpha}_{2}}{{N}_{1}}+\frac{{\alpha}_{3}}{{N}_{0}{N}_{1}}\le \frac{1}{4\min \{{N}_{0},{N}_{1}\}}.$$

(68)

This bound represents a worst-case scenario. In practice we could expect this sum to be significantly smaller than the upper bound.

Equations 49–52 lead to the following constraints

$$0\le {\alpha}_{4}\le {\scriptstyle \frac{1}{4}}$$

(69)

$$0\le {\alpha}_{4}+{\alpha}_{5}\le {\scriptstyle \frac{1}{4}}$$

(70)

$$0\le {\alpha}_{4}+{\alpha}_{6}\le {\scriptstyle \frac{1}{4}}$$

(71)

$$0\le {\alpha}_{4}+{\alpha}_{5}+{\alpha}_{6}+{\alpha}_{7}\le {\scriptstyle \frac{1}{4}}.$$

(72)

These constraints define a bounded region in the space of points (*α*_{4}, *α*_{5}, *α*_{6}, *α*_{7}) and thus allow us to compute, for any given *N*_{R}, the maximum possible contribution to the variance of *Â* from the last four terms in the seven-term expansion.

$$\frac{{\alpha}_{4}}{{N}_{R}}+\frac{{\alpha}_{5}}{{N}_{R}{N}_{0}}+\frac{{\alpha}_{6}}{{N}_{R}{N}_{1}}+\frac{{\alpha}_{7}}{{N}_{R}{N}_{0}{N}_{1}}\le \frac{1}{4{N}_{R}}.$$

(73)

Again we could expect this sum to be significantly smaller in practice. However, we can now write an upper bound for the variance of *Â,*

$$\text{Var}\left[\widehat{A}\right]\le \frac{1}{4\min \{{N}_{0},{N}_{1}\}}+\frac{1}{4{N}_{R}}.$$

(74)

This could be useful in simulations where the numbers of cases and readers are easy to change and the computations of the *α _{n}* would be tedious.
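As a sketch of that use, the following Monte Carlo toy model (entirely ours, not the paper's generative model) draws reader skills, cases and internal noise, forms the reader-averaged Wilcoxon statistic, and confirms that its empirical variance sits well below the bound of Eqn. 74.

```python
import numpy as np

def wilcoxon_auc(t0, t1):
    """Two-sample Wilcoxon (Mann-Whitney) estimate of the AUC."""
    return (t1[:, None] > t0[None, :]).mean()

def simulate_trial(rng, n_reader=5, n0=30, n1=30):
    """One triply stochastic trial: random cases, reader skills, and
    internal noise, returning the reader-averaged Wilcoxon statistic."""
    gammas = rng.normal(1.0, 0.2, size=n_reader)       # reader skills
    g0 = rng.normal(0.0, 1.0, size=n0)                 # signal-absent cases
    g1 = rng.normal(1.0, 1.0, size=n1)                 # signal-present cases
    aucs = []
    for gamma in gammas:
        t0 = gamma * g0 + rng.normal(0, 0.3, size=n0)  # internal noise
        t1 = gamma * g1 + rng.normal(0, 0.3, size=n1)
        aucs.append(wilcoxon_auc(t0, t1))
    return np.mean(aucs)                               # A_hat for this trial

rng = np.random.default_rng(1)
a_hats = np.array([simulate_trial(rng) for _ in range(400)])
bound = 1.0 / (4 * min(30, 30)) + 1.0 / (4 * 5)        # Eqn. 74
assert a_hats.var() < bound
```

The gap between the empirical variance and the bound is typically large, consistent with the remark above that the bound is a worst case.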

To compute the *α*_{n} in the full expansion for the variance of *Â* we need seven moments: four at the level of the doubly case-averaged score,

$${\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}$$

(75)

$${\langle {\overline{\overline{s}}}^{2}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}}$$

(76)

$${\langle {\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{1}\mid {\mathit{g}}_{0}}^{2}\rangle}_{{\mathit{g}}_{0}}$$

(77)

$${\langle {\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0}\mid {\mathit{g}}_{1}}^{2}\rangle}_{{\mathit{g}}_{1}},$$

(78)

one at the case-averaged level,

$${\langle {\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}$$

(79)

and two at the test statistic level

$${\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{1},{\mathit{g}}_{1}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}^{2}\rangle}_{{t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}$$

(80)

$${\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{0},{\mathit{g}}_{0}\mid {t}_{1},\mathit{\gamma},{\mathit{g}}_{1}}^{2}\rangle}_{{t}_{1},\mathit{\gamma},{\mathit{g}}_{1}}.$$

(81)

The *α _{n}* are then linear combinations of these moments.

We now wish to see how the expansion given above for the variance of *Â*(**T**) compares to the more standard approach to MRMC that uses an expansion into uncorrelated components [1,2]. For this purpose we set

$$\widehat{A}(\mathit{T})=\mu +r+c+rc+\epsilon $$

(82)

and define each term in this expansion in terms of averages. The first term *μ* is the overall mean

$$\mu =\langle \widehat{A}\rangle ={\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}\rangle}_{\mathbf{\Gamma},\mathit{G}}={\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}.$$

(83)

The second term is the reader term

$$r=r(\mathbf{\Gamma})={\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}\rangle}_{\mathit{G}}-\mu .$$

(84)

This random variable is a function of the reader sample **Γ**. The third term is the case term

$$c=c(\mathit{G})={\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}\rangle}_{\mathbf{\Gamma}}-\mu .$$

(85)

This random variable is a function of the case sample **G**. The fourth term is the reader/case term

$$rc=rc(\mathbf{\Gamma},\mathit{G})={\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}-r-c-\mu .$$

(86)

This random variable is a function of **Γ** and **G**. The last term is the only one that depends on the internal noise of the readers via the matrix of test statistics

$$\epsilon =\widehat{A}(\mathit{T})-r-c-rc-\mu .$$

(87)

We will call this the noise term. It is straightforward to show that

$${\langle \epsilon \rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}=0$$

(88)

$${\langle rc\rangle}_{\mathbf{\Gamma}\mid \mathit{G}}=0$$

(89)

$${\langle rc\rangle}_{\mathit{G}\mid \mathbf{\Gamma}}=0$$

(90)

$${\langle r\rangle}_{\mathbf{\Gamma}}=0$$

(91)

$${\langle c\rangle}_{\mathit{G}}=0.$$

(92)

These equations, together with the independence of *r* and *c*, can then be used to show that *r*, *c*, *rc* and *ε* are statistically uncorrelated. This fact gives us the following expansion for the variance of the figure of merit

$$\text{Var}\left[\widehat{A}\right]=\text{Var}[r]+\text{Var}[c]+\text{Var}[rc]+\text{Var}[\epsilon ]$$

(93)
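A compact numerical illustration of Eqn. 93: on a toy model where the conditional mean ⟨*Â* | **Γ**, **G**⟩ is a finite table over reader samples (rows) and case samples (columns), and the internal noise contributes a per-cell variance, the four variances add exactly. The tables are arbitrary stand-ins, not derived from any real study.

```python
import numpy as np

rng = np.random.default_rng(2)
m = rng.uniform(0.6, 0.9, size=(5, 7))   # <A_hat | Gamma, G>
v = rng.uniform(0.0, 0.01, size=(5, 7))  # Var[A_hat | Gamma, G]: noise term

mu = m.mean()                            # overall mean, Eqn. 83
r = m.mean(axis=1) - mu                  # reader term, Eqn. 84
c = m.mean(axis=0) - mu                  # case term, Eqn. 85
rc = m - mu - r[:, None] - c[None, :]    # reader/case term, Eqn. 86

# Total variance by the law of total variance over (Gamma, G) ...
var_total = ((m - mu) ** 2).mean() + v.mean()
# ... equals the sum of the four component variances (Eqn. 93).
var_sum = (r ** 2).mean() + (c ** 2).mean() + (rc ** 2).mean() + v.mean()
assert np.isclose(var_total, var_sum)
```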

We will now examine each term in this expansion.

The reader term may be written as follows

$$r(\mathbf{\Gamma})=\frac{1}{{N}_{R}}\sum _{r=1}^{{N}_{R}}{\langle \overline{s}({\mathit{\gamma}}_{r},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid {\mathit{\gamma}}_{r}}-\mu .$$

(94)

Since this random variable has zero mean, its second moment is also its variance. Via the now familiar manipulations of the square of a sum, we have

$$\text{Var}[r]={\scriptstyle \frac{1}{{N}_{R}}}{\langle {\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{G}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}+\left(1-{\scriptstyle \frac{1}{{N}_{R}}}\right){\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}^{2}-{\mu}^{2}$$

(95)

$$={\scriptstyle \frac{1}{{N}_{R}}}\left[{\langle {\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}-{\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}^{2}\right].$$

(96)

The first equality follows from the independence of the readers, the second from the definition of *μ*, and the third from the definition of *s̄*(**γ**, *g*_{0}, *g*_{1}). Comparing with the seven-term expansion we find

$$\text{Var}[r]=\frac{{\alpha}_{4}}{{N}_{R}}.$$

(97)

Thus the variance of the reader term can be identified with the fourth term in the seven-term expansion for Var [*Â*].

For the case term we can write

$$c(\mathit{G})=\frac{1}{{N}_{R}}\sum _{r=1}^{{N}_{R}}{\langle {\langle \widehat{a}({\mathit{t}}_{r})\rangle}_{{\mathit{t}}_{r}\mid {\mathit{\gamma}}_{r},\mathit{G}}\rangle}_{{\mathit{\gamma}}_{r}\mid \mathit{G}}-\mu $$

(98)

$$={\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma}\mid \mathit{G}}-\mu $$

(99)

$$=\frac{1}{{N}_{0}{N}_{1}}\sum _{i=1}^{{N}_{0}}\sum _{j=1}^{{N}_{1}}\overline{\overline{s}}({\mathit{g}}_{0i},{\mathit{g}}_{1j})-\mu .$$

(100)

The variance is given by

$$\text{Var}[c]={\langle {\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}-{\mu}^{2}$$

(101)

$$=\frac{{\alpha}_{1}}{{N}_{0}}+\frac{{\alpha}_{2}}{{N}_{1}}+\frac{{\alpha}_{3}}{{N}_{0}{N}_{1}}.$$

(102)

In other words, the first three terms in the seven-term expansion for Var [*Â*] comprise the variance of the case term.

It should be noted that if *N*_{1} = *φN*_{total} and *N*_{0} = (1 − *φ*)*N*_{total}, where *φ* is the prevalence, then the variance of the case term is given by,

$$\text{Var}[c]=\frac{{\alpha}_{1}/(1-\phi )+{\alpha}_{2}/\phi}{{N}_{\text{total}}}+\frac{{\alpha}_{3}/(\phi (1-\phi ))}{{N}_{\text{total}}^{2}}.$$

(103)

The first term in Eqn. 103 agrees with standard MRMC models [2]. The second term can contribute substantially when *N*_{total} is small and will become negligible for *N*_{total} sufficiently large.
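To see the two case-term contributions separately, a small helper (ours; the coefficient values are placeholders chosen only to illustrate the scaling, not estimates from data) evaluates Eqn. 102 at fixed prevalence:

```python
def var_case_term(alpha1, alpha2, alpha3, phi, n_total):
    """Split Eqn. 102 into its O(1/N_total) and O(1/N_total^2) parts,
    with N1 = phi * N_total signal-present cases and
    N0 = (1 - phi) * N_total signal-absent cases."""
    n1 = phi * n_total
    n0 = (1.0 - phi) * n_total
    return alpha1 / n0 + alpha2 / n1, alpha3 / (n0 * n1)

# Hypothetical coefficients at 50% prevalence.
a1, a2, a3, phi = 0.02, 0.02, 0.01, 0.5
small = var_case_term(a1, a2, a3, phi, 20)
large = var_case_term(a1, a2, a3, phi, 2000)

# The relative weight of the second-order term shrinks as N_total grows.
assert small[1] / small[0] > large[1] / large[0]
```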

The reader/case term can be written as

$$rc(\mathbf{\Gamma},\mathit{G})=\frac{1}{{N}_{R}}\sum _{r=1}^{{N}_{R}}{\langle \widehat{a}({\mathit{t}}_{r})\rangle}_{{\mathit{t}}_{r}\mid {\mathit{\gamma}}_{r},\mathit{G}}-r(\mathbf{\Gamma})-c(\mathit{G})-\mu $$

(104)

For the variance we use the fact that *r*, *c* and *rc* are uncorrelated and have zero mean values to get

$$\text{Var}[rc]=\frac{1}{{N}_{R}}{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}^{2}\rangle}_{\mathit{\gamma},\mathit{G}}+\left(1-\frac{1}{{N}_{R}}\right){\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}-\langle {r}^{2}\rangle -\langle {c}^{2}\rangle -{\mu}^{2}.$$

(105)

This equation then gives us

$$\text{Var}[rc]=\frac{1}{{N}_{R}}\left[{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}^{2}\rangle}_{\mathit{\gamma},\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{G}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}+{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}^{2}\right].$$

(106)

A new moment appears here that does not appear in the computation of the seven-term expansion for Var [*Â*], i.e., the first term in the square brackets. This moment is discussed further in the Appendix.

The noise term is explicitly given by

$$\epsilon (\mathit{T})=\widehat{A}(\mathit{T})-rc(\mathbf{\Gamma},\mathit{G})-r(\mathbf{\Gamma})-c(\mathit{G})-\mu .$$

(107)

By rearranging the variance expansion for *Â* we have

$$\langle {\epsilon}^{2}\rangle ={\langle {\langle {\widehat{A}}^{2}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}\rangle}_{\mathbf{\Gamma},\mathit{G}}-\langle {(rc)}^{2}\rangle -\langle {c}^{2}\rangle -\langle {r}^{2}\rangle -{\mu}^{2}.$$

(108)

This then gives us

$$\text{Var}[\epsilon ]=\text{Var}\left[\widehat{\text{A}}\right]-\text{Var}[rc]-\text{Var}[c]-\text{Var}[r]$$

(109)

$$=\frac{{\alpha}_{5}}{{N}_{R}{N}_{0}}+\frac{{\alpha}_{6}}{{N}_{R}{N}_{1}}+\frac{{\alpha}_{7}}{{N}_{R}{N}_{0}{N}_{1}}-\text{Var}[rc].$$

(110)

Note that it is *rc* + *ε* that accounts for the last three terms in the seven-term expansion. It appears that the separation of *rc*+*ε* into *rc* and *ε* is not a very useful concept at this point. Moments appear in the individual variances of *rc* and *ε* that cancel out, and therefore do not appear in the expressions for the *α _{n}*. It would therefore be somewhat wasteful to compute their variances separately. This situation changes when we consider replication.

Now we replicate the trial *K* times, with the same cases and readers, and assume that the internal reader noise is independent and identically distributed from one trial to the next (the readers are not learning anything). Then we have an average figure of merit for the *K* trials

$${\widehat{A}}_{K}(\mathit{T})=\frac{1}{K}\sum _{k=1}^{K}\widehat{A}({\mathit{T}}_{k}).$$

(111)

The mean value of *Â _{K}* is given by

$$\langle {\widehat{A}}_{K}(\mathit{T})\rangle ={\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}\rangle}_{\mathbf{\Gamma},\mathit{G}}.$$

(112)

For the variance we need the second moment, which can be expanded as

$$\langle {\widehat{A}}_{K}^{2}(\mathit{T})\rangle =\frac{1}{K}{\langle {\langle {\widehat{A}}^{2}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}\rangle}_{\mathbf{\Gamma},\mathit{G}}+\left(1-\frac{1}{K}\right){\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}^{2}\rangle}_{\mathbf{\Gamma},\mathit{G}}.$$

(113)

This expansion follows from the usual independence arguments. We can now write for the variance

$$\begin{array}{l}\text{Var}\left[{\widehat{A}}_{K}\right]={\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}^{2}\rangle}_{\mathbf{\Gamma},\mathit{G}}-{\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}\rangle}_{\mathbf{\Gamma},\mathit{G}}^{2}+\\ \frac{1}{K}\left[{\langle {\langle {\widehat{A}}^{2}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}\rangle}_{\mathbf{\Gamma},\mathit{G}}-{\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}^{2}\rangle}_{\mathbf{\Gamma},\mathit{G}}\right].\end{array}$$

(114)

The new moment we need is

$${\langle {\langle \widehat{A}(\mathit{T})\rangle}_{\mathit{T}\mid \mathbf{\Gamma},\mathit{G}}^{2}\rangle}_{\mathbf{\Gamma},\mathit{G}}=\frac{1}{{N}_{R}}{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}^{2}\rangle}_{\mathit{\gamma},\mathit{G}}+\left(1-\frac{1}{{N}_{R}}\right){\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}.$$

(115)

This expansion follows from the conditional independence of the internal noise and the independence of the readers. Now we may write

$$\begin{array}{l}\text{Var}\left({\widehat{A}}_{K}\right)={\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}^{2}+\\ {\scriptstyle \frac{1}{{N}_{R}}}\left[{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}^{2}\rangle}_{\mathit{\gamma},\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}\right]+\\ {\scriptstyle \frac{1}{K{N}_{R}}}\left[{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}^{2}\rangle}_{\mathit{\gamma},\mathit{G}}\right].\end{array}$$

(116)

The moments involved here have all been worked out above or in the Appendix. The result is a ten-term expansion which we will describe below.
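The replication structure can be checked exactly on a toy model in which ⟨*Â* | **Γ**, **G**⟩ is a finite table and the internal noise has a per-cell variance (both tables below are arbitrary illustrations): holding readers and cases fixed across trials and averaging *K* iid internal-noise draws leaves the reader/case variance floor untouched and scales only the noise term by 1/*K*, as in Eqns. 116 and 120.

```python
import numpy as np

rng = np.random.default_rng(3)
m = rng.uniform(0.6, 0.9, size=(5, 7))   # <A_hat | Gamma, G>
v = rng.uniform(0.0, 0.02, size=(5, 7))  # internal-noise variance per cell

def var_a_k(k):
    """Exact Var[A_hat_K] for the toy model: the variance over (Gamma, G)
    of the conditional mean, plus the K-fold-averaged noise variance."""
    return ((m - m.mean()) ** 2).mean() + v.mean() / k

floor = ((m - m.mean()) ** 2).mean()     # reader/case contribution

# The excess over the floor is exactly proportional to 1/K.
assert np.isclose((var_a_k(1) - floor) * 1.0, v.mean())
assert np.isclose((var_a_k(4) - floor) * 4.0, v.mean())
```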

We may also expand into uncorrelated components as before

$${\widehat{A}}_{K}={\mu}_{K}+{c}_{K}+{r}_{K}+r{c}_{K}+{\epsilon}_{K}$$

(117)

$$=\mu +c+r+rc+{\epsilon}_{K}.$$

(118)

The second line here follows from the conditional independence between trials. Now we have

$$\text{Var}({\widehat{A}}_{K})=\text{Var}(c)+\text{Var}(r)+\text{Var}(rc)+\text{Var}({\epsilon}_{K})$$

(119)

where the first three variances are given above, and the last variance is given by

$$\text{Var}({\epsilon}_{K})=\frac{1}{K{N}_{R}}\left[{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}-{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}^{2}\rangle}_{\mathit{\gamma},\mathit{G}}\right].$$

(120)

The dependencies on numbers of cases, readers and trials are given by

$$\text{Var}(c)=\frac{{\alpha}_{1}}{{N}_{0}}+\frac{{\alpha}_{2}}{{N}_{1}}+\frac{{\alpha}_{3}}{{N}_{0}{N}_{1}}$$

(121)

$$\text{Var}(r)=\frac{{\alpha}_{4}}{{N}_{R}}$$

(122)

$$\text{Var}(rc)=\frac{1}{{N}_{R}}\left[\frac{{\beta}_{1}}{{N}_{0}}+\frac{{\beta}_{2}}{{N}_{1}}+\frac{{\beta}_{3}}{{N}_{0}{N}_{1}}\right]$$

(123)

$$\text{Var}({\epsilon}_{K})=\frac{1}{K{N}_{R}}\left[\frac{{\delta}_{1}}{{N}_{0}}+\frac{{\delta}_{2}}{{N}_{1}}+\frac{{\delta}_{3}}{{N}_{0}{N}_{1}}\right].$$

(124)

Explicit expressions for the *β*_{n} and the *δ*_{n} follow from the moments computed above and in the Appendix.
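Putting Eqns. 119 and 121–124 together, a small calculator (ours, with placeholder coefficient values purely for illustration) shows how the total variance responds to the numbers of cases, readers and replicated trials:

```python
def var_auc_replicated(n0, n1, n_r, k, alpha, beta, delta):
    """Total Var[A_hat_K] assembled from Eqns. 119 and 121-124.
    alpha = (a1, a2, a3, a4); beta = (b1, b2, b3); delta = (d1, d2, d3)."""
    a1, a2, a3, a4 = alpha
    b1, b2, b3 = beta
    d1, d2, d3 = delta
    var_c = a1 / n0 + a2 / n1 + a3 / (n0 * n1)                  # Eqn. 121
    var_r = a4 / n_r                                            # Eqn. 122
    var_rc = (b1 / n0 + b2 / n1 + b3 / (n0 * n1)) / n_r         # Eqn. 123
    var_eps = (d1 / n0 + d2 / n1 + d3 / (n0 * n1)) / (k * n_r)  # Eqn. 124
    return var_c + var_r + var_rc + var_eps

alpha = (0.02, 0.02, 0.01, 0.005)   # placeholder coefficients
beta = (0.01, 0.01, 0.005)
delta = (0.02, 0.02, 0.01)

# Replication beats down only the noise term, so the gain saturates.
v1 = var_auc_replicated(50, 50, 5, 1, alpha, beta, delta)
v4 = var_auc_replicated(50, 50, 5, 4, alpha, beta, delta)
assert v4 < v1
```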

We have developed a probabilistic framework for analyzing MRMC problems. We have applied this framework to the Wilcoxon statistic and derived an exact seven-term expansion for the variance of the figure of merit as a function of the numbers of readers and cases. We have used the probabilistic model to derive constraints on the coefficients in this expansion. These constraints, in turn, provide an upper bound on the variance of the Wilcoxon statistic. We introduced a linear decomposition of the figure of merit into uncorrelated random variables that are defined in terms of conditional expectations over the readers, cases, and test statistics. This linear decomposition has the same structure as the conventional MRMC decomposition. We have shown that the variances of the individual terms in the linear decomposition can be related to the terms in the seven-term expansion. Finally, we have shown that replication of the MRMC experiment results in a ten-term expansion.

In the future, we plan to validate this seven-term expansion of the variance of the Wilcoxon statistic in simulation. We will also apply this methodology to real data. We are especially interested in computing the variance of the Wilcoxon statistic for ideal, Bayesian observers which we calculate using Markov chain Monte Carlo techniques. Finally, we are working on the extension of the probabilistic model to account for multiple modalities as well as multiple readers and multiple cases.

We thank Drs. Charles Metz, Brandon Gallas and Robert Wagner for their many helpful discussions about this topic. This work was supported by NIH/NCI grant K01 CA87017 and by NIH/NIBIB grants R01 EB002146, R37 EB000803, P41 EB002035.

What follows is a derivation of Eqn. 18.

$$\langle {\widehat{A}}^{2}\rangle $$

(125)

$$=\frac{1}{{N}_{R}^{2}}\left[\sum _{r=1}^{{N}_{R}}{\langle {\langle {\widehat{a}}^{2}({\mathit{t}}_{r})\rangle}_{{\mathit{t}}_{r}\mid {\mathit{\gamma}}_{r},\mathit{G}}\rangle}_{{\mathit{\gamma}}_{r},\mathit{G}}+\sum _{\begin{array}{c}r,s=1\\ r\ne s\end{array}}^{{N}_{R}}{\langle {\langle \widehat{a}({\mathit{t}}_{r})\widehat{a}({\mathit{t}}_{s})\rangle}_{{\mathit{t}}_{r},{\mathit{t}}_{s}\mid {\mathit{\gamma}}_{r},{\mathit{\gamma}}_{s},\mathit{G}}\rangle}_{{\mathit{\gamma}}_{r},{\mathit{\gamma}}_{s},\mathit{G}}\right]$$

(126)

$$=\frac{1}{{N}_{R}^{2}}\left[\sum _{r=1}^{{N}_{R}}{\langle {\langle {\widehat{a}}^{2}({\mathit{t}}_{r})\rangle}_{{\mathit{t}}_{r}\mid {\mathit{\gamma}}_{r},\mathit{G}}\rangle}_{{\mathit{\gamma}}_{r},\mathit{G}}+\sum _{\begin{array}{c}r,s=1\\ r\ne s\end{array}}^{{N}_{R}}{\langle {\langle \widehat{a}({\mathit{t}}_{r})\rangle}_{{\mathit{t}}_{r}\mid {\mathit{\gamma}}_{r},\mathit{G}}{\langle \widehat{a}({\mathit{t}}_{s})\rangle}_{{\mathit{t}}_{s}\mid {\mathit{\gamma}}_{s},\mathit{G}}\rangle}_{{\mathit{\gamma}}_{r},{\mathit{\gamma}}_{s},\mathit{G}}\right]$$

(127)

$$=\frac{1}{{N}_{R}}{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}+\left(1-\frac{1}{{N}_{R}}\right){\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}{\langle \widehat{a}({\mathit{t}}^{\prime})\rangle}_{{\mathit{t}}^{\prime},{\mathit{\gamma}}^{\prime}\mid \mathit{G}}\rangle}_{\mathit{G}}$$

(128)

$$=\frac{1}{{N}_{R}}{\langle {\langle {\widehat{a}}^{2}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}+\left(1-\frac{1}{{N}_{R}}\right){\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t},\mathit{\gamma}\mid \mathit{G}}^{2}\rangle}_{\mathit{G}}.$$

(129)

The second equality follows from the independence of the test statistics when the readers and cases are fixed. The third equality follows from the independence of the reader parameters, and the fact that they are identically distributed.
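The identity used repeatedly in these manipulations (e.g. in Eqns. 113, 115 and 129), ⟨(mean of *N* iid variables)²⟩ = (1/*N*)⟨*x*²⟩ + (1 − 1/*N*)⟨*x*⟩², can be confirmed by exact enumeration over a small discrete distribution (the distribution here is arbitrary):

```python
from itertools import product

vals = [0.0, 0.5, 1.0]          # support of a toy distribution
probs = [0.2, 0.5, 0.3]
n = 3                           # number of iid draws ("readers")

# Left-hand side: <(sample mean)^2>, by enumerating all n-tuples.
lhs = 0.0
for combo in product(range(len(vals)), repeat=n):
    p = 1.0
    for idx in combo:
        p *= probs[idx]
    mean = sum(vals[idx] for idx in combo) / n
    lhs += p * mean ** 2

# Right-hand side: (1/n)<x^2> + (1 - 1/n)<x>^2.
ex = sum(p * x for p, x in zip(probs, vals))
ex2 = sum(p * x * x for p, x in zip(probs, vals))
rhs = ex2 / n + (1 - 1 / n) * ex ** 2
assert abs(lhs - rhs) < 1e-12
```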

We start with the sum over all four indices with no matched indices in Eqn. 41,

$$\begin{array}{l}\sum _{i\ne k}\sum _{j\ne l}{\langle {\langle s({t}_{1j}-{t}_{0i})s({t}_{1l}-{t}_{0k})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}\rangle}_{\mathit{\gamma},\mathit{G}}\\ =\sum _{i\ne k}\sum _{j\ne l}{\langle {\langle s({t}_{1j}-{t}_{0i})\rangle}_{{t}_{0i},{t}_{1j}\mid \mathit{\gamma},{\mathit{g}}_{0i},{\mathit{g}}_{1j}}{\langle s({t}_{1l}-{t}_{0k})\rangle}_{{t}_{0k},{t}_{1l}\mid \mathit{\gamma},{\mathit{g}}_{0k},{\mathit{g}}_{1l}}\rangle}_{\mathit{\gamma},\mathit{G}}\end{array}$$

(130)

$$=\sum _{i\ne k}\sum _{j\ne l}{\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0i},{\mathit{g}}_{1j})\overline{s}(\mathit{\gamma},{\mathit{g}}_{0k},{\mathit{g}}_{1l})\rangle}_{\mathit{\gamma},\mathit{G}}$$

(131)

$$=\sum _{i\ne k}\sum _{j\ne l}{\langle {\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0i},{\mathit{g}}_{1j})\rangle}_{{\mathit{g}}_{0i},{\mathit{g}}_{1j}\mid \mathit{\gamma}}{\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0k},{\mathit{g}}_{1l})\rangle}_{{\mathit{g}}_{0k},{\mathit{g}}_{1l}\mid \mathit{\gamma}}\rangle}_{\mathit{\gamma}}$$

(132)

$$=\sum _{i\ne k}\sum _{j\ne l}{\langle {\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}$$

(133)

$$=({N}_{0}^{2}{N}_{1}^{2}-{N}_{0}{N}_{1}^{2}-{N}_{0}^{2}{N}_{1}+{N}_{0}{N}_{1}){\langle {\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}.$$

(134)

The first equality follows from independence of the internal noise when readers and cases are fixed. The fourth equality follows from independence of cases.
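The combinatorial prefactor in Eqn. 134 is just the number of index quadruples with *i* ≠ *k* and *j* ≠ *l*. A brute-force count (a throwaway check, not part of the derivation) confirms the expansion and its factorization:

```python
def count_unmatched(n0, n1):
    """Count quadruples (i, k, j, l) with i != k and j != l."""
    return sum(1 for i in range(n0) for k in range(n0) if i != k
                 for j in range(n1) for l in range(n1) if j != l)

n0, n1 = 4, 5
expanded = n0**2 * n1**2 - n0 * n1**2 - n0**2 * n1 + n0 * n1
# The count equals both the expanded form of Eqn. 134 and its
# factored form N0(N0 - 1) N1(N1 - 1).
assert count_unmatched(n0, n1) == expanded == n0 * (n0 - 1) * n1 * (n1 - 1)
```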

The first step to derive Eqn. 50 is to rewrite the first term in Eqn. 46 as

$$\begin{array}{l}{\langle {\langle s({t}_{1}-{t}_{0})s({t}_{1}^{\prime}-{t}_{0})\rangle}_{{t}_{0},{t}_{1},{t}_{1}^{\prime}\mid \mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\\ ={\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{1}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}{\langle s({t}_{1}^{\prime}-{t}_{0})\rangle}_{{t}_{1}^{\prime}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}^{\prime}}\rangle}_{{t}_{0},\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\end{array}$$

(135)

$$={\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{1},{\mathit{g}}_{1}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}{\langle s({t}_{1}^{\prime}-{t}_{0})\rangle}_{{t}_{1}^{\prime},{\mathit{g}}_{1}^{\prime}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}\rangle}_{{t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}$$

(136)

$$={\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{1},{\mathit{g}}_{1}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}^{2}\rangle}_{{t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}$$

(137)

where the first equality follows from conditional independence of the internal noise and the second equality from independence of the cases. The second step is to rewrite the second term in Eqn. 46 as

$$\begin{array}{l}{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1}^{\prime})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}\\ ={\langle {\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{1}\mid {\mathit{g}}_{0}}{\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1}^{\prime})\rangle}_{{\mathit{g}}_{1}^{\prime}\mid {\mathit{g}}_{0}}\rangle}_{{\mathit{g}}_{0}}\end{array}$$

(138)

$$={\langle {\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{1}\mid {\mathit{g}}_{0}}^{2}\rangle}_{{\mathit{g}}_{0}}$$

(139)

where again independence of cases is used. Now we use the fact that

$${\langle {\langle s({t}_{1}-{t}_{0})\rangle}_{{t}_{1},{\mathit{g}}_{1}\mid {t}_{0},\mathit{\gamma},{\mathit{g}}_{0}}\rangle}_{{t}_{0},\mathit{\gamma}}={\langle \overline{\overline{s}}({\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{1}\mid {\mathit{g}}_{0}}$$

(140)

to get the result in Eqn. 50.

The first moment in Eqn. 106 can be expanded as

$$\begin{array}{l}{\langle {\langle \widehat{a}(\mathit{t})\rangle}_{\mathit{t}\mid \mathit{\gamma},\mathit{G}}^{2}\rangle}_{\mathit{\gamma},\mathit{G}}={\scriptstyle \frac{1}{{N}_{0}{N}_{1}}}{\langle {\overline{s}}^{2}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}}+\\ {\scriptstyle \frac{1}{{N}_{0}}}\left(1-{\scriptstyle \frac{1}{{N}_{1}}}\right){\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1}^{\prime})\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1},{\mathit{g}}_{1}^{\prime}}+\\ {\scriptstyle \frac{1}{{N}_{1}}}\left(1-{\scriptstyle \frac{1}{{N}_{0}}}\right){\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\overline{s}(\mathit{\gamma},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1})\rangle}_{\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{0}^{\prime},{\mathit{g}}_{1}}+\\ \left(1-{\scriptstyle \frac{1}{{N}_{0}}}-{\scriptstyle \frac{1}{{N}_{1}}}+{\scriptstyle \frac{1}{{N}_{0}{N}_{1}}}\right){\langle {\langle \overline{s}(\mathit{\gamma},{\mathit{g}}_{0},{\mathit{g}}_{1})\rangle}_{{\mathit{g}}_{0},{\mathit{g}}_{1}\mid \mathit{\gamma}}^{2}\rangle}_{\mathit{\gamma}}.\end{array}$$

(141)

Note that Var [*rc*] has no term that varies as ${N}_{R}^{-1}$ alone. This variance only has terms that vary as ${({N}_{R}{N}_{0})}^{-1}$, ${({N}_{R}{N}_{1})}^{-1}$ and ${({N}_{R}{N}_{0}{N}_{1})}^{-1}$, as in Eqn. 123.


1. Dorfman DD, Berbaum KS, Metz CE. Receiver operating characteristic rating analysis. generalization to the population of readers and patients with the jackknife method. Investigative Radiology. 1992;27:723–731. [PubMed]

2. Beiden SV, Wagner RF, Campbell G. Components-of-variance models and multiple-bootstrap experiments: An alternative method for random-effects, receiver operating characteristic analysis. Academic Radiology. 2000;7:342–349. [PubMed]

3. Roe CA, Metz CE. Variance-component modeling in the analysis of receiver operating characteristic index estimates. Academic Radiology. 1997;4(8):587–600. [PubMed]

4. Wilcoxon F. Individual comparison of ranking methods. Biometrics. 1945;1:80–93.

5. Mann HB, Whitney DR. On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics. 1947;18:50–60.

6. Hoeffding W. A class of statistics with asymptotically normal distribution. Annals of Mathematical Statistics. 1948;19:293–325.

7. Bamber D. The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology. 1975;12:387–415.

8. Noether GE. Elements of Nonparametric Statistics. New York: Wiley; 1967.

9. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach. Biometrics. 1988;44:837–845. [PubMed]

10. Lehmann EL. Consistency and unbiasedness of certain nonparametric tests. Annals of Mathematical Statistics. 1951;22:165–179.

11. Barrett HH, Kupinski MA, Clarkson E. Probabilistic foundations of the MRMC method. In: Medical Imaging 2005: Image Perception, Observer Performance, and Technology Assessment. SPIE; 2005. pp. 21–31.

12. Gallas BD. One-shot estimate of MRMC variance: AUC. Academic Radiology. 2006;13:353–362. [PubMed]
