Commun Stat Theory Methods. Author manuscript; available in PMC 2010 October 22.
Published in final edited form as:
Commun Stat Theory Methods. 2008 January; 37(12): 1855–1866.
PMCID: PMC2962418
NIHMSID: NIHMS184242

# Expected Power for the False Discovery Rate with Independence

## Abstract

The Benjamini–Hochberg procedure is widely used in multiple comparisons. Previous power results for this procedure have been based on simulations. This article produces theoretical expressions for expected power. To derive them, we make assumptions about the number of hypotheses being tested, which null hypotheses are true, which are false, and the distributions of the test statistics under each null and alternative. We use these assumptions to derive bounds for multidimensional rejection regions. With these bounds and a permanent-based representation of the joint density function of the largest p-values, we use the law of total probability to derive the distribution of the total number of rejections. We derive the joint distribution of the total number of rejections and the number of rejections when the null hypothesis is true. We give an analytic expression for the expected power for a false discovery rate procedure that assumes the hypotheses are independent.

Keywords: Benjamini–Hochberg procedure, Distribution of number of rejections, Multiple comparisons, Rejection region bounds

## 1. Introduction

Our goal is to provide analytical formulas for the expected power of the Benjamini and Hochberg (1995) false discovery rate procedure. Benjamini and Hochberg (1995) controlled the False Discovery Rate, the expected ratio of the number of rejections of hypotheses where the null is true (false rejections) to the total number of rejections. With a single hypothesis, power is the probability of rejecting the null hypothesis. With multiple hypotheses, we study average, or expected power (Benjamini and Liu, 1999). Expected power is the expectation of the ratio of the number of correct rejections to the number of hypotheses for which the alternative holds.

Many articles describe extensions of the theory for false discovery rate, or related statistics (see, e.g., Benjamini and Liu, 1999; Benjamini and Yekutieli, 2001; Curran-Everett, 2000; Efron et al., 2001; Finner and Roters, 2002; Genovese and Wasserman, 2002, 2004; Sarkar, 2002, 2004, 2006; Storey, 2002, 2003). Power has been studied almost entirely via simulation (Benjamini and Liu, 1999; Keselman et al., 2002; Lee and Whitmore, 2002; Storey, 2002).

In analytic power analysis, it is usual to make assumptions about the underlying state of nature. We assume that we know how many hypotheses we are testing and what statistics we will use to test them. We even assume that we know whether each hypothesis is true or false, and the total number of true and false hypotheses. Thus, we know the true distribution of each test statistic. We also know the distribution of each p-value: uniform if the null hypothesis is true, or a different distribution, calculated under the alternative, if it is false.

Using these assumptions, the expected power can be calculated by using the law of total probability. The steps in the calculation are as follows.

1. Find the cumulative distribution function and the probability density function of the individual p-values when the null hypothesis is true or false.
2. Show that the probability of rejection depends only on the largest p-values.
3. Use a computational form for the joint density function of the largest order statistics (Balakrishnan, 2007; Vaughan and Venables, 1972).
4. Enumerate and delimit the rejection regions implicitly defined by the Benjamini and Hochberg (1995) procedure in terms of p-value space.
5. Give an explicit recursive algorithm for generating the rejection regions.
6. Use the law of total probability to calculate the probability distribution function of the total number of rejections.
7. Derive the joint distribution of the total rejections and false rejections.
8. Finally, derive formulas for the expected power from this joint distribution.

In Sec. 2, we define the notation and adapt some previous known results to our problem. In Sec. 3, we prove lemmas on ordered lists. In Sec. 4, we derive the distributions of the number of total rejections, and false rejections, and give our main results on expected power. Technical proofs are in the Appendix.

## 2. Notation and Known Results

Suppose one plans to use the Benjamini and Hochberg (1995) false discovery rate procedure for m hypotheses and m decisions. We take a frequentist, parametric view, and envision each decision as being between a null and an alternative hypothesis, which may differ for each decision. Let n ∈ {0, … , m} be the number of decisions for which a null hypothesis holds in the population, while an alternative hypothesis holds for the remaining m − n decisions. For i ∈ {1, … , m}, index the hypotheses by Hi, with associated absolutely continuous, independent, but not necessarily identically distributed real valued test statistic Ti (with realization ti) and random p-value, Pi.

In frequentist statistics, the null hypothesis is rejected when the p-value is smaller than a specified bound. In a multiple comparisons situation, there are many p-values. The Benjamini and Hochberg procedure uses the sorted p-values to decide which null hypotheses are rejected, and which are not. Let {P(1) ≤ ⋯ ≤ P(m)} be the set of order statistics for the p-values, with {p(1) ≤ ⋯ ≤ p(m)} a realization. Let α* ∈ [0, 1] and bi = iα*/m. This set of numbers, b1, b2, … , bm, is a set of bounds. We compare the smallest p-value, p(1), to b1, the next largest p-value, p(2), to b2, and so on, until we find the largest i for which p(i) ≤ iα*/m. Call that subscript k. Thus, k is the largest number so that p(k) ≤ kα*/m, but p(k+1) > [(k + 1)α*]/m. The Benjamini and Hochberg (1995) testing procedure rejects the k hypotheses that correspond to the k smallest p-values. The event $R_k$, the rejection of k hypotheses, is:

$R_k = \begin{cases} \bigcap_{i=1}^{m} \{p_{(i)} \geq b_i\} & k = 0 \\ \bigcap_{i=k+1}^{m} \{p_{(i)} \geq b_i\} \cap \{p_{(k)} \leq b_k\} & 1 \leq k \leq m-1 \\ \{p_{(m)} \leq b_m\} & k = m. \end{cases}$
(1)
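The step-up rule just described can be sketched in a few lines. This is an illustrative implementation, with names of our own choosing; it finds the largest k with p(k) ≤ kα*/m and rejects the k hypotheses with the smallest p-values.

```python
# A minimal sketch of the Benjamini-Hochberg step-up procedure: find the
# largest index k such that p_(k) <= b_k = k * alpha_star / m.

def bh_rejections(pvals, alpha_star):
    """Return k, the number of hypotheses rejected by Benjamini-Hochberg."""
    m = len(pvals)
    k = 0
    for i, p in enumerate(sorted(pvals), start=1):
        if p <= i * alpha_star / m:  # compare p_(i) to its bound b_i
            k = i                    # step-up: keep the largest qualifying i
    return k
```

For example, with p-values (0.001, 0.02, 0.4, 0.9) and α* = 0.05, the bounds are (0.0125, 0.025, 0.0375, 0.05), so the two smallest p-values fall below their bounds and k = 2.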

Now consider repeating the experiment over and over again. Each time, because the data are stochastic, the number of rejections would vary. Thus, the number of rejections is a random variable. Let K ∈ {0, … , m} be the random variable denoting the number of rejections of null hypotheses in an experiment, and k its realization. Rejecting a null hypothesis is a decision which may be correct, if the alternative holds, or incorrect, if the null actually holds. Rejections of hypotheses where the null is true are Type I errors. Let J be the number of rejections for which the null does hold, and j its realization. In general, j ∈ {max[0, k − (m − n)], … , min(n, k)}. Table 1 illustrates the relationships.

Table 1. Types of decisions for a particular realization of an experiment.

To find the distribution of K and J, we need to understand the probability density of the largest p-values. This depends on the distribution of the test statistics. Typically, the distribution of Ti differs under the null and alternative hypotheses. Let h = 0 if the null hypothesis holds, and h = 1 otherwise. Let θi(0) be the vector of parameters for the distribution function under the null, and θi(1) be the vector of parameters for the distribution function under the alternative. The number of rows of θi(0) may differ from the number of rows of θi(1). For example, the null may be a central chi-square with one parameter, the degrees of freedom, while the alternative may be a noncentral chi-square, with two parameters, the degrees of freedom and a noncentrality parameter. With Ni the sample size for Ti, the cumulative distribution and probability density functions are FTi[ti; Ni, θi(0)] and fTi[ti; Ni, θi(0)] under the null, and FTi[ti; Ni, θi(1)] and fTi[ti; Ni, θi(1)] under the alternative.

Using this notation, we can define the distribution and density function for each p-value. Since the test statistic Ti is absolutely continuous under the null and the alternative, the distribution function is smooth and monotone increasing, and the inverse distribution function exists. Since pi = 1 − FTi[ti; Ni, θi(h)] ∈ [0, 1] for h ∈ {0, 1}, inverting the distribution function yields

$t_i = F_{T_i}^{-1}[1 - p_i; N_i, \theta_i(h)].$
(2)

Let $q_i = F_{T_i}^{-1}[1 - p_i; N_i, \theta_i(0)]$. For a one-tailed test for which larger values lead to rejection, the distribution and density functions for Pi are:

$F_{P_i}[p_i; N_i, \theta_i(h)] = 1 - F_{T_i}[q_i; N_i, \theta_i(h)],$
(3)

$f_{P_i}[p_i; N_i, \theta_i(h)] = \frac{f_{T_i}[q_i; N_i, \theta_i(h)]}{f_{T_i}[q_i; N_i, \theta_i(0)]}.$
(4)

Let $q_{Ui} = F_{T_i}^{-1}[1 - p_i/2; N_i, \theta_i(0)]$ and $q_{Li} = F_{T_i}^{-1}[p_i/2; N_i, \theta_i(0)]$. For a two-tailed test with equal tail probabilities, the distribution for Pi is

$F_{P_i}[p_i; N_i, \theta_i(h)] = 1 - F_{T_i}[q_{Ui}; N_i, \theta_i(h)] + F_{T_i}[q_{Li}; N_i, \theta_i(h)].$
(5)

Using Theorem 12.4.4 of Leithold (1968, p. 410), the density function is:

$f_{P_i}[p_i; N_i, \theta_i(h)] = \frac{1}{2} \cdot \frac{f_{T_i}[q_{Ui}; N_i, \theta_i(h)]}{f_{T_i}[q_{Ui}; N_i, \theta_i(0)]} + \frac{1}{2} \cdot \frac{f_{T_i}[q_{Li}; N_i, \theta_i(h)]}{f_{T_i}[q_{Li}; N_i, \theta_i(0)]}.$
(6)
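Equations (2)–(4) can be made concrete with a hypothetical one-tailed z-test: Ti ∼ N(0, 1) under the null and N(θ, 1) under the alternative. This choice of test statistic is ours, for illustration only; the standard normal quantile is computed by bisection so the sketch stays self-contained.

```python
# Sketch of Eqs. (3)-(4) for a one-tailed z-test (assumed setup, not fixed by
# the paper): null N(0,1), alternative N(theta,1).
import math

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def phi_inv(u, lo=-10.0, hi=10.0):
    """Standard normal quantile by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def pvalue_cdf(p, theta):
    q = phi_inv(1.0 - p)               # q_i = F_T^{-1}(1 - p) under the null
    return 1.0 - phi_cdf(q - theta)    # Eq. (3)

def pvalue_pdf(p, theta):
    q = phi_inv(1.0 - p)
    return phi_pdf(q - theta) / phi_pdf(q)  # Eq. (4): ratio of densities
```

With θ = 0 the p-value is uniform, so `pvalue_cdf(p, 0)` returns p; with θ > 0 the p-value is stochastically smaller, and the CDF at any p exceeds p.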

For clarity, abbreviate the distribution function and density for Pi by FPi(pi) and fPi(pi). Notice that the p-values are absolutely continuous, and independent, but not necessarily identically distributed. By assumption, each decision has a potentially different null and alternative hypothesis. This can lead to different distributions for each p-value.

We need to look at the joint density of the p-values to understand the power for the Benjamini and Hochberg procedure. But which p-values? Suppose m = 10 and k = 4, so 4 hypotheses are rejected by the Benjamini and Hochberg procedure. The idea here is that the four smallest p-values are less than their bounds. We need to look also at the 5th smallest p-value, to make sure that it is bigger than its bound. If it were smaller than its bound, we would have rejected five hypotheses, not four. We also need to look at the 6th smallest one, and the 7th, 8th, 9th, and 10th. In fact, we need the joint density of the largest (m − k) + 1 of the m p-values to figure out the probability of rejecting k hypotheses.

A convenient form of the joint density of independent, but not necessarily identically distributed random variables is in terms of a permanent (Balakrishnan, 2007; Vaughan and Venables, 1972). The permanent of a square matrix is defined like the determinant, except that all signs are positive (Aitken, 1999, p. 30). If A is a square matrix, we denote its permanent by per[A]. For k ∈ {1, … , m}, the marginal density of p(k) ≤ ⋯ ≤ p(m) is

$f_{P_{(k)},\ldots,P_{(m)}}(p_{(k)},\ldots,p_{(m)}) = [(k-1)!]^{-1} \times \operatorname{per}\begin{bmatrix} F_{P_1}(p_{(k)}) & F_{P_2}(p_{(k)}) & \cdots & F_{P_m}(p_{(k)}) \\ \vdots & \vdots & & \vdots \\ F_{P_1}(p_{(k)}) & F_{P_2}(p_{(k)}) & \cdots & F_{P_m}(p_{(k)}) \\ f_{P_1}(p_{(k)}) & f_{P_2}(p_{(k)}) & \cdots & f_{P_m}(p_{(k)}) \\ \vdots & \vdots & & \vdots \\ f_{P_1}(p_{(m)}) & f_{P_2}(p_{(m)}) & \cdots & f_{P_m}(p_{(m)}) \end{bmatrix},$
(7)

where the first block contains k − 1 rows, and the second block contains (mk) + 1 rows.

When the test statistics are identically distributed, the result reduces (David, 1981, p. 10) to

$f_{P_{(k)},\ldots,P_{(m)}}(p_{(k)},\ldots,p_{(m)}) = \frac{m!\,[F(p_{(k)})]^{k-1}}{(k-1)!} \prod_{i=k}^{m} f(p_{(i)}).$
(8)
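The reduction from Eq. (7) to Eq. (8) can be checked by brute force for small m: evaluate the permanent by summing over all permutations and compare against the closed form in the i.i.d. uniform case. The function names are ours; permutation-sum permanents are only feasible for tiny matrices.

```python
# Brute-force check that the permanent representation, Eq. (7), reduces to
# Eq. (8) for i.i.d. p-values (here uniform: F(p) = p, f(p) = 1).
import itertools
import math

def permanent(a):
    """Permanent of a square matrix, by summing over all permutations."""
    n = len(a)
    return sum(
        math.prod(a[r][s[r]] for r in range(n))
        for s in itertools.permutations(range(n))
    )

def density_via_permanent(F_list, f_list, p_order, k):
    """Joint density of p_(k),...,p_(m) from Eq. (7).

    F_list, f_list: per-hypothesis CDFs and densities (length m);
    p_order: realized values p_(k) <= ... <= p_(m), length m - k + 1."""
    rows = []
    for _ in range(k - 1):                    # k-1 identical rows of CDFs at p_(k)
        rows.append([F(p_order[0]) for F in F_list])
    for p in p_order:                         # m-k+1 rows of densities
        rows.append([f(p) for f in f_list])
    return permanent(rows) / math.factorial(k - 1)

def density_iid_uniform(p_order, k, m):
    """Eq. (8) specialized to uniform p-values."""
    return math.factorial(m) * p_order[0] ** (k - 1) / math.factorial(k - 1)
```

For m = 3 and k = 2, both expressions give 3! · p(2), so the two agree at any realization.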

## 3. Bounds for Integration

In Eq. (1), we described the rejection regions corresponding to the Benjamini and Hochberg (1995) procedure. Although these are exact definitions of the rejection regions, they do not describe all the ways the p-values could be arranged with respect to the bounds. If we understand all the ways that the p-values can be arranged with respect to the bounds, we can define a set of rejection regions in p-value space. If we integrate the joint density of the p-values over this rejection region, we will get the probability distribution of the number of rejections.

There are many ways the p-values can be arranged with respect to the bounds and still cause a rejection. First, let us consider a small example, and demonstrate how we can proceed from a pictorial representation, to a set of ordered lists, to a set of integration bounds. Suppose that we are considering two hypotheses, so m = 2. How many ways can we reject both hypotheses? Let b1 = α*/2 and b2 = α*. The rejection region is {p(2) ≤ b2}. Notice that the rejection region can be satisfied in three different ways, shown as number lines in Fig. 1.

Figure 1. Testing and rejecting two hypotheses.

We can represent the number lines shown in Fig. 1 abstractly as ordered lists. The ordered lists corresponding to Fig. 1 are shown in Fig. 2. We still need to define the bounds of the rejection region. For this small example, the bounds are shown in Fig. 3.

Figure 2. Ordered lists for testing and rejecting two hypotheses.
Figure 3. Integration bounds for rejecting two hypotheses.

For the general case, we need to formally define ordered lists. Suppose q is a finite, positive integer, and a1, … , aq are real numbers, with a1 ≤ ⋯ ≤ aq. $A$ is an ordered list if $A={a1,…,aq}$. If $A$ and $B$ are ordered lists with aj ≤ bj′ for all j and j′, then their concatenation is

$C=A&B,$
(9)

with $C$ an ordered list whose elements are the entries in $A$ followed by those in $B$.

Now we will give two lemmas that describe how to generate and use ordered lists for more general cases. We will use these ordered lists to define the rejection regions for Benjamini and Hochberg (1995). The proofs are given in the Appendix. The first lemma shows how to generate these ordered lists for a specific number of hypothesis tests.

The second lemma shows that these ordered lists correspond to ordered sets of ordered pairs that are bounds for integration. Integrating the marginal density of the largest (m − k) + 1 order statistics of the p-values over these bounds will give the probabilities of rejection.

### Lemma 3.1

Suppose k ∈ {0, … , m} indexes the number of rejections. Let

$c_k = [2(m-k)]! \,/\, [(m-k)!\,(m-k)!\,(m-k+1)].$
(10)

Let p ∈ {1, … , ck}, and let q ∈ {1, … , 2(m − k) + 2} be index variables. Let ek,p,q ∈ {p(k), … , p(m); bk, … , bm} be the entries of an ordered list, so that

$0 \leq e_{k,p,1} \leq \cdots \leq e_{k,p,2(m-k)+2} \leq 1.$
(11)

Notate the list itself by letting

$L_{k,p} = \{0, e_{k,p,1}, \ldots, e_{k,p,2(m-k)+2}, 1\}.$
(12)

Let $Lk$ be the set of such ordered lists so that

$Lk={Lk,1,Lk,2,…,Lk,ck}.$
(13)

Then the set $Lk−1$ can be generated from the set $Lk$. The number of entries in $Lk$ is ck, a Catalan number.
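Lemma 3.1 identifies ck with a Catalan number. A quick numerical check of that identity, with helper names of our own: Eq. (10) equals C(m − k), where C(n) = (2n)!/[n!(n + 1)!] is the n-th Catalan number.

```python
# Check that c_k of Eq. (10) is the Catalan number C(m - k).
import math

def c_k(m, k):
    """Eq. (10): [2(m-k)]! / [(m-k)! (m-k)! (m-k+1)]."""
    n = m - k
    return math.factorial(2 * n) // (
        math.factorial(n) * math.factorial(n) * (n + 1)
    )

def catalan(n):
    """n-th Catalan number, C(n) = binom(2n, n) / (n + 1)."""
    return math.comb(2 * n, n) // (n + 1)
```

For m = 4, the counts (c0, … , c4) are (14, 5, 2, 1, 1): the fewer rejections, the more ways the remaining p-values can interleave with their bounds.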

### Lemma 3.2

Let k ∈ {0, … , m} index the number of rejections and p ∈ {1, … , ck} be an index variable for the number of ordered lists in the set $Lk$. Let o ∈ {1, … , k + 1} index the variable of integration. Suppose lk,p,o is the lower bound of the integral with respect to dp(o), and uk,p,o the corresponding upper bound. For each ordered list $Lk,p$, define the ordered set of ordered pairs

$B_{k,p} = \begin{cases} \{(0, u_{m,p,1}), (l_{m,p,2}, u_{m,p,2}), \ldots, (l_{m,p,k}, u_{m,p,k})\} & k = 0 \\ \{(0, u_{k,p,1}), (l_{k,p,2}, u_{k,p,2}), \ldots, (l_{k,p,k}, u_{k,p,k}), (b_{k+1}, 1)\} & 1 \leq k \leq m-1 \\ \{(0, b_m)\} & k = m. \end{cases}$
(14)

Let $Bk$ be the set of such ordered lists so that

$Bk={Bk,1,…,Bk,ck}.$
(15)

Then:

1. $Bk,p$ can be formed from the ordered lists $Lk,p$ in Lemma 3.1 by construction.
2. $Bk$ can be formed from the set $Lk$.
3. The number of entries in $Bk$ is ck.
4. For 0 ≤ km − 1, the k + 1 ordered pairs in $Bk,p$ are bounds of integration for a k + 1 dimensional integral. For k = m, the ordered pair in $Bm,p$ is the bounds of integration for a one dimensional integral.
5. The ordered pairs in $Bk,p$ delineate regions in $ℜ^{k+1}$, and the ordered pair in $Bm,p$ delineates a region in $ℜ$. These are the rejection regions from Eq. (1).

## 4. Exact Distributions and Expected Power

We now have expressions for the joint density of the order statistics of the p-values (Eq. (7)), and an expression for the bounds of the rejection regions (Lemma 3.1). Using the law of total probability, in this section we derive the probability distribution for the total number of rejections, and the joint distribution of the total number of rejections and the number of false rejections. We use these results to find expected power.

### Result 4.1

The distribution of the number of rejections for independent, but not necessarily identically distributed random variables, and corresponding p-values is given by:

$\Pr\{K=k\} = \begin{cases} \sum_{p=1}^{c_0} \int_{B_{0,p}} m! \prod_{i=1}^{m} f_{P_i}(p_{(i)}) \, dp_{(1)} \cdots dp_{(m)} & k = 0 \\ \sum_{p=1}^{c_k} \int_{B_{k,p}} f_{P_{(k)},\ldots,P_{(m)}}(p_{(k)},\ldots,p_{(m)}) \, dp_{(k)} \cdots dp_{(m)} & 1 \leq k \leq m-1 \\ \prod_{i=1}^{m} F_{P_i}(b_m) & k = m. \end{cases}$
(16)

### Result 4.2

For independent and identically distributed random variables, and corresponding p-values

$\Pr\{K=k\} = \begin{cases} \sum_{p=1}^{c_0} \int_{B_{0,p}} m! \prod_{i=1}^{m} f(p_{(i)}) \, dp_{(1)} \cdots dp_{(m)} & k = 0 \\ \sum_{p=1}^{c_k} \int_{B_{k,p}} \frac{m!\,[F(p_{(k)})]^{k-1}}{(k-1)!} \prod_{i=k}^{m} f(p_{(i)}) \, dp_{(k)} \cdots dp_{(m)} & 1 \leq k \leq m-1 \\ [F(b_m)]^m & k = m. \end{cases}$
(17)

### Result 4.3

For independent and identically and uniformly distributed random variables,

$\Pr\{K=k\} = \begin{cases} 1 - \alpha^{*} & k = 0 \\ \sum_{p=1}^{c_k} \int_{B_{k,p}} \frac{m!\,(p_{(k)})^{k-1}}{(k-1)!} \, dp_{(k)} \cdots dp_{(m)} & 1 \leq k \leq m-1 \\ (b_m)^m & k = m. \end{cases}$
(18)

#### Proof

Results 4.1–4.3 follow from the law of total probability. The probability of rejection is given by integrating the appropriate density over the multidimensional region. Result 4.3 occurs when the null holds for every hypothesis, and the p-values are then uniformly and identically distributed.
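The k = 0 case of Result 4.3 can be checked by Monte Carlo: under the complete null with independent uniform p-values, Pr{K = 0} = 1 − α*. This simulation sketch (seed, sizes, and names are ours) draws uniform p-values, applies the step-up rule, and counts the experiments with no rejections.

```python
# Monte Carlo check of Result 4.3 at k = 0: under the complete null,
# Pr{K = 0} = 1 - alpha_star. Seeded; agreement is up to simulation error.
import random

def bh_k(pvals, alpha_star):
    """Number of BH rejections: largest i with p_(i) <= i*alpha_star/m."""
    m = len(pvals)
    k = 0
    for i, p in enumerate(sorted(pvals), start=1):
        if p <= i * alpha_star / m:
            k = i
    return k

random.seed(7)
m, alpha_star, reps = 6, 0.25, 100_000
no_rejection = sum(
    bh_k([random.random() for _ in range(m)], alpha_star) == 0
    for _ in range(reps)
)
print(no_rejection / reps)  # should be close to 1 - alpha_star = 0.75
```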

Now, we need to derive the joint probability distribution of the total number of rejections and the number of false rejections. Without loss of generality, we can label the hypotheses Hi so that for i ∈ {1, … , n}, the null hypothesis holds, and for i ∈ {n + 1, … , m}, the alternative hypothesis holds. If n = m, all of the null hypotheses hold, while if n = 0, then all alternative hypotheses hold.

Index the m! terms in the expansion of the permanent in Eq. (7) by v ∈ {1, … , m!}, and denote term v by av. Each term is a product of m factors. All entries in the last (m − k) + 1 rows of the permanent in Eq. (7) are of the form fPd(p(e)). Describe any such entry with d ∈ {1, … , n} and e ∈ {k + 1, … , m} as satisfying the (d, e) condition. These entries correspond to a null hypothesis underlying one of the m − k largest p-values, or a null hypothesis that is not rejected. Let $C0$ be the set of terms which have no factors which fulfill the (d, e) condition. If n is constrained to be zero, then $C0$ is all the terms in the density, and all other sets $Cn−j$ are empty. If n ≠ 0, but (n − j) = 0, then $C0$ is the set of all terms in the density that have 0 factors that fulfill the (d, e) condition. In general, let $Cn−j$ be the set of terms that have exactly n − j factors that fulfill the (d, e) condition. We can then rewrite the density of the largest (m − k) + 1 p-values as a sum over these sets.

$f_{P_{(k)},\ldots,P_{(m)}}(p_{(k)},\ldots,p_{(m)}) = \sum_{j} \sum_{v: a_v \in C_{n-j}} a_v.$
(19)

### Result 4.4

If the number of hypotheses for which the alternative holds is zero, then n = m. Then the number of rejections of null hypotheses is equal to the number of total rejections and k = j. Then Pr{J = j, K = k} = Pr{J = j} = Pr{K = k}, the distribution in Result 4.1.

We also give some special cases that may be useful, depending on the testing situation. They can be derived from Results 4.2 and 4.3.

### Result 4.5

For independent, but not necessarily identically distributed random variables, and with j ∈ {max[0, k − (m − n)], … , min(n, k)},

$\Pr\{J=j, K=k\} = \begin{cases} \sum_{p=1}^{c_0} \int_{B_{0,p}} m! \prod_{i=1}^{m} f_{P_i}(p_{(i)}) \, dp_{(1)} \cdots dp_{(m)} & k = 0,\ j = 0 \\ \sum_{p=1}^{c_k} \int_{B_{k,p}} \sum_{v: a_v \in C_{n-j}} a_v \, dp_{(k)} \cdots dp_{(m)} & 1 \leq k \leq m-1 \\ \prod_{i=1}^{m} F_{P_i}(b_m) & k = m,\ j = n. \end{cases}$
(20)

### Result 4.6

With aCb = a!/[b!(ab)!] and for independent and identically distributed random variables, the joint probability distribution function of the number of total and false rejections is given by:

$Pr{J=j,K=k}=nCj⋅(m−n)C(k−j)⋅Pr{K=k}∕mCk.$
(21)

#### Proof

Results 4.4–4.6 follow from direct applications of the law of total probability.
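The combinatorial factor in Result 4.6 says that, given K = k in the i.i.d. case, J is hypergeometric: the false rejections behave like draws of the n nulls without replacement. As a sanity check (with our own helper name), the conditional probabilities nCj · (m−n)C(k−j) / mCk must sum to one over the valid range of j from Result 4.5.

```python
# The hypergeometric factor in Result 4.6: conditional on K = k, the
# probabilities over j in {max[0, k-(m-n)], ..., min(n, k)} sum to one.
import math

def joint_given_k(n, m, k):
    """Conditional probabilities Pr{J = j | K = k} for the i.i.d. case."""
    lo, hi = max(0, k - (m - n)), min(n, k)
    return [
        math.comb(n, j) * math.comb(m - n, k - j) / math.comb(m, k)
        for j in range(lo, hi + 1)
    ]
```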

We now have the joint distribution of the total number of rejections, K, and the number of false rejections, J. Power for one realization of the experiment is (kj)/(mn). The numerator is the total number of rejections minus the number of nulls which are rejected. Thus, it is the number of rejected hypotheses for which the alternative is true. The denominator is the number of hypotheses for which the alternative is true. This quantity is also known as sensitivity. Benjamini and Liu (1999) called the expected value of this quantity expected power, and used it in simulations as a measure of the success of an experiment that tested multiple hypotheses. We can now give an explicit, analytic formula for expected power.

### Result 4.7

For mn, i.e., when the null is not true for every hypothesis, the expected power is given by:

$E[(K-J)/(m-n)] = \sum_{k=0}^{m} \sum_{j} [(k-j)/(m-n)] \Pr\{K=k, J=j\}.$
(22)

#### Proof

A formula of Lindgren (1976, p. 116) gives the expected value of a function of two random variables in terms of the joint distribution. The result then follows from the definitions of the various estimators.
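Expected power, Eq. (22), can also be estimated by Monte Carlo, which is useful as a check on the analytic formulas. The setup below is a hypothetical one of our own: m one-sided z-tests, n true nulls with uniform p-values, and m − n alternatives with a common mean shift delta.

```python
# Monte Carlo estimate of expected power E[(K - J)/(m - n)], Eq. (22), for an
# assumed setup: one-sided z-tests, n uniform nulls, m - n shifted alternatives.
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bh_reject_set(pvals, alpha_star):
    """Indices rejected by the BH step-up rule."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for i, idx in enumerate(order, start=1):
        if pvals[idx] <= i * alpha_star / m:
            k = i
    return set(order[:k])

def expected_power_mc(m, n, delta, alpha_star, reps, seed=11):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        pvals = [rng.random() for _ in range(n)]          # true nulls
        pvals += [1.0 - norm_cdf(rng.gauss(delta, 1.0))   # alternatives
                  for _ in range(m - n)]
        rejected = bh_reject_set(pvals, alpha_star)
        # (K - J)/(m - n): indices n..m-1 are the true alternatives
        total += sum(1 for i in rejected if i >= n) / (m - n)
    return total / reps
```

With larger shifts the alternatives produce smaller p-values, so the estimated expected power increases in delta, as the analytic formula implies.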

The expected power is given in Eq. (22). How does the power depend on the sample size? The distribution of each test statistic, Ti, includes the sample size for the hypothesis Hi, Ni, as a parameter. Thus, the distribution for each p-value, Pi, also includes the sample size. The joint density of the ordered p-values includes the distribution and density functions for the p-values, and is thus a function of the set of sample sizes for each hypothesis, {N1, … , Nm}. The probability distribution function of the total number of rejections and the number of false rejections is expressed as an integral of the joint probability density function, and thus is also a function of the set of sample sizes. Finally, the expected power is a function of the set of sample sizes, since it involves an expectation over the joint distribution of total and false rejections. Thus, one can calculate the power for a fixed set of sample sizes.

## Acknowledgments

The authors thank Gary Grunwald for his close reading and numerous suggestions, which greatly improved the manuscript.

Glueck was supported by NCI K07CA88811. Muller was supported by NCI P01 CA47 982-04, NCI R01 CA095749-01A1, and NIAID 9P30 AI 50410. Hunter was supported by NLM 5R01LM008111-03 and NCI 5 P30 CA46934-15.

## 5. Appendix

#### Proof of Lemma 3.1

By construction.

1. With k = m, let
$Lm=Lm,1={0,p(m),bm,1}.$
(23)
2. If k = 0, then do the following:
1. In $Lk,p$, for all p ∈ {1, … , ck}, delete p(0) and b0.
2. Stop generating lists.
3. Otherwise, repeat the following steps for p ∈ {1, … , ck}. Consider $Lk,p$.
1. Insert the ordered list {p(k−1), bk−1} after 0 in each ordered list that corresponds to k rejections.
2. Remove p(k) from the list.
3. From the resulting list, remove the prefix, which is the ordered list {0, … , bk}.
4. From the resulting list, remove the suffix, which is the ordered list {p(k+1), 1}. If k = m, remove the suffix, which is the ordered list {1}.
5. What remains is the core. If the core has nothing in it, insert p(k). Otherwise, insert p(k) sequentially before and after every entry of the core. The resulting ordered lists are the new cores.
6. For each core, insert the prefix at the beginning, and add the suffix onto the end. The resulting set of ordered lists are the elements of the set $Lk−1$, the set of ordered lists that correspond to k − 1 rejections.
4. Let k = k − 1, and go to Step 2.

#### Proof of Lemma 3.2

There are five separate statements in Lemma 3.2. We present the proofs of the five statements in order.

1. By construction. $Bm,p={(0,bm)}$ by definition. For k ∈ {0, … , m − 1}, $Bk,p$ can be formed from $Lk,p$ using the following algorithm.
1. Write the first three elements of $Lk,p$ as an ordered list. Call it the trio. The fourth through last elements of $Lk,p$ call the remainder.
1. If the middle element is of the form p(o), define lk,p,1 to be the first element of the trio, and define uk,p,1 to be the third element of the trio, and then do the following. Otherwise, proceed to Step ii.
1. Add the ordered pair (lk,p,1, uk,p,1) to the ordered list $Bk,p$.
2. Form a new trio with the same first element. The middle element of the new trio is the third element of the original trio. The third element of the new trio is the first element of the remainder.
3. Remove the first element of the remainder. The new remainder is the old remainder with the first element removed.
2. If the middle element of the trio is of the form bj, then do the following.
1. Delete the first element of the trio.
2. Form a new trio. Let the middle element be the new first element, the new second element be the old third element, and let the new third element be the first item in the remainder.
3. Remove the first element of the remainder. The new remainder is the old remainder with the first element removed.
2. Using the new trio and the new remainder defined above, repeat Steps i and ii until all the elements of $Lk,p$ are exhausted.
2. $Bk$ can be formed from the set $Lk$ in the following manner. First, form all the $Bk,p$ from $Lk,p$ in $Lk$ as shown above. Second, $Bk$ is simply the set of all the $Bk,p$.
3. This holds since the number of elements in $Lk$ is ck and there are the same number of elements in $Lk$ and $Bk$.
4. By inspection.
5. They correspond to the rejection regions by construction.

## Footnotes

Mathematics Subject Classification Primary 62H15; Secondary 65F03.

## References

• Aitken AC. Determinants and Matrices. Oliver and Boyd; Edinburgh: 1999.
• Balakrishnan N. Permanents, order statistics, outliers and robustness. Revista Matematica Complutense. 2007;20:7–107.
• Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. Roy. Statist. Soc. Ser. B Statist. Methodol. 1995;57:289–300.
• Benjamini Y, Liu W. A step-down multiple hypotheses testing procedure that controls the false discovery rate under independence. J. Statist. Plann. Infer. 1999;82:163–170.
• Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Ann. Statist. 2001;29:1165–1188.
• Curran-Everett D. Multiple comparisons: philosophies and illustrations. Am. J. Physiol. Regul. Integr. Comp. Physiol. 2000;279:R1–R8.
• David HA. Order Statistics. 2nd ed. Wiley; New York: 1981.
• Efron B, Storey J, Tibshirani R. Microarrays, empirical Bayes methods, and false discovery rates. J. Amer. Statist. Assoc. 2001;96:1151–1160.
• Finner H, Roters M. Multiple hypothesis testing and expected number of Type 1 errors. Ann. Statist. 2002;30(1):220–238.
• Genovese CR, Wasserman L. Operating characteristics and extensions of the false discovery rate procedure. J. Roy. Stat. Soc. Ser. B Statist. Methodol. 2002;64:499–517.
• Genovese CR, Wasserman L. A stochastic process approach to false discovery control. Ann. Statist. 2004;32(3):1035–1061.
• Keselman HJ, Cribbie R, Holland B. Controlling the rate of Type 1 error over a large set of statistical tests. Brit. J. Math. Statist. Psych. 2002;55:27–39.
• Lee M, Whitmore G. Power and sample size for DNA microarray studies. Statist. Med. 2002;21:3543–3570.
• Leithold L. The Calculus with Analytic Geometry. Harper and Row; New York: 1968.
• Lindgren BW. Statistical Theory. 3rd ed. Macmillan Publishing; New York: 1976.
• Sarkar SK. Some results on false discovery rate in stepwise multiple testing procedures. Ann. Statist. 2002;30(1):239–257.
• Sarkar SK. FDR-controlling stepwise procedures and their false negatives rates. J. Statist. Plann. Infer. 2004;125:119–137.
• Sarkar SK. False discovery and false nondiscovery rates in single-Step multiple testing procedures. Ann. Statist. 2006;34(1):394–415.
• Storey J. A direct approach to false discovery rates. J. Roy. Statist. Soc. Ser. B Statist. Methodol. 2002;64:479–498.
• Storey J. The positive false discovery rate: a Bayesian interpretation and the q-value. Ann. Statist. 2003;31(6):2013–2035.
• Vaughan RJ, Venables WN. Comments and queries: permanent expressions for order statistic densities. J. Roy. Statist. Soc. Ser. B Statist. Methodol. 1972;34:308–310.