
J R Stat Soc Series B Stat Methodol. Author manuscript; available in PMC 2010 November 1.

Published in final edited form as:

J R Stat Soc Series B Stat Methodol. 2009 November 1; 71(5): 905–925.

doi: 10.1111/j.1467-9868.2009.00714.x. PMCID: PMC2914327

NIHMSID: NIHMS217541

Michele Guindani, University of New Mexico, Albuquerque, NM 87111, U.S.A.


We discuss a Bayesian discovery procedure for multiple comparison problems. We show that under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule based on a threshold of the posterior probability of the alternative. Under a semi-parametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure (ODP), recently introduced by Storey (2007a). Improving the approximation leads us to a Bayesian discovery procedure (BDP), which exploits the multiple shrinkage in clusters implied by the assumed nonparametric model. We compare the BDP and the ODP estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumor samples. We extend the setting of the ODP by discussing modifications of the loss function that lead to different single thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.

A number of different approaches have been introduced in the recent literature to address the multiple comparison problem. Most focus on controlling some error rate. For example, the control of the familywise error rate (FWER) guarantees a bound on the probability of a false rejection among all tests. Benjamini and Hochberg (1995) developed a simple procedure, based on the ordered *p*-values, that controls the false discovery rate (FDR), defined as the expected proportion of false rejections among all rejected null hypotheses. A decision-theoretic approach to the multiple comparison problem requires the explicit statement of a loss function, which weights the relative importance of the different outcomes according to the preferences and inferential focus of the investigators. Cohen and Sackrowitz (2007) prove the inadmissibility of the Benjamini and Hochberg procedure under any loss that is a linear combination of false discoveries and false acceptances and under several sampling models, including the general one-parameter exponential family. Müller et al. (2004, 2007) undertake a decision theoretic approach to multiple testing and discuss several loss functions that lead to the use of FDR-based rules. More recently, Bogdan et al. (2008) compared the Benjamini-Hochberg procedure with several Bayesian rules for multiple testing. They show that whenever the proportion of true nulls is small, the misclassification error of the Benjamini-Hochberg procedure is close to optimal, in the sense of matching a Bayesian oracle. This property is shown to be shared by some of the Bayesian procedures they consider. In addition, through simulations, they show that Bayes rules generally perform better for moderate or large proportions of true nulls.

The general multiple comparison problem is stated as follows. Assume we observe data sets **x**_{1},…,**x**_{m}, where **x**_{i} = {*x*_{1i},…,*x*_{n_{i}i}}, and for each data set we wish to test a hypothesis about a parameter *μ_{i}* indexing its sampling model.

Specifically, we use a prior model that includes a random effects distribution *G* for the *μ_{i}*. Instead of a parametric model for *G*, we use a nonparametric Dirichlet process (DP) prior, leaving the shape of the random effects distribution unconstrained.

For sufficiently large *m* and a small total mass parameter of the DP prior, the posterior random probability measure (RPM) can be approximated by the empirical distribution of the maximum likelihood estimates ${\widehat{\mu}}_{i}$. The result is an approximate Bayes rule that is closely related to Storey’s optimal discovery procedure (ODP, Storey, 2007a).

The ODP is based on a thresholding statistic,

$${S}_{\text{ODP}}({z}_{i})=\frac{{\displaystyle {\sum}_{{\mu}_{j}\notin A}f({z}_{i};{\mu}_{j})}}{{\displaystyle {\sum}_{{\mu}_{j}\in A}f({z}_{i};{\mu}_{j})}}.$$

(1)

The null hypothesis *H*_{0i} is rejected if *S*_{ODP}(*z_{i}*) ≥ λ, for some 0 ≤ λ < ∞. For a point null, *A* = {0}, the statistic is estimated by

$${\widehat{S}}_{\text{ODP}}(z)=\frac{{\displaystyle {\sum}_{j=1}^{m}f(z;{\widehat{\mu}}_{j})}}{f(z;0)},$$

(2)

where ${\widehat{\mu}}_{j}$ is a point estimate of *μ_{j}*, for example the maximum likelihood estimate based on the *j*-th data set.

For general *A* the ODP proceeds with the thresholding function

$${\widehat{S}}_{\text{ODP}}(z)=\frac{{\displaystyle {\sum}_{j=1}^{m}f(z;{\widehat{\mu}}_{j})}}{{\displaystyle {\sum}_{j=1}^{m}{w}_{j}f(z;{\widehat{\mu}}_{j})}},$$

(3)

where *w_{j}* are suitable weights, chosen to estimate the true status of each hypothesis. For example, *w_{j}* = 1 if *H*_{0j} is estimated to be true, and *w_{j}* = 0 otherwise.
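
As an illustration, the estimated statistic (3) is straightforward to evaluate once the kernel *f* and the weights are fixed. The following sketch assumes a *N*(*μ*, 1) kernel with plug-in estimates ${\widehat{\mu}}_{j}={z}_{j}$; the weight rule `w = I(|z| < 1)` is a purely illustrative stand-in for an estimate of the true null status, not a recommendation from the paper.

```python
import numpy as np
from scipy.stats import norm

def s_odp(z, mu_hat, w):
    """Estimated ODP statistic (3): summed densities over all estimates
    divided by the weighted (estimated-null) densities, at each z_i.
    Assumes a N(mu, 1) kernel; w_j = 1 flags H_0j as estimated true."""
    # f(z_i; mu_hat_j) for every pair (i, j)
    f = norm.pdf(z[:, None], loc=mu_hat[None, :], scale=1.0)
    return f.sum(axis=1) / (f * w[None, :]).sum(axis=1)

z = np.array([0.1, 2.5, -3.0, 0.4])
mu_hat = z.copy()                    # m.l.e. of mu_i under a N(mu, 1) model
w = (np.abs(z) < 1.0).astype(float)  # crude, illustrative null-status estimate
S = s_odp(z, mu_hat, w)
reject = S >= 5.0                    # reject H_0i when S_ODP(z_i) >= lambda
```

Because the numerator sums over all terms and the denominator over the weighted subset, the statistic is bounded below by 1 and grows with the evidence against the null.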

We show that the ODP statistic can be recognized as an approximate Bayes rule under the proposed nonparametric prior model for *μ_{i}*. This result is in accordance with Ferguson’s (1973) observation that nonparametric Bayesian inference often yields results that are comparable to corresponding classical inference. The expectation in Storey’s optimality statement for the ODP then finds a natural interpretation as a posterior expectation under the nonparametric model.

Once we have established the utility function and the probability model leading to the ODP we can proceed with generalizations in two directions. First, we will consider variations of the ODP to improve the approximation. We show that the resulting rules lead to small improvement in inference. More importantly, once we recognize the ODP as a Bayes rule we can modify the procedure to adapt to variations in the loss function. We provide specific examples, including a study of exceedance regions of a spatial process and the detection of neurodegenerative patterns in MRI scans.

DP priors in the context of multiple hypotheses testing have been considered before by Gopalan and Berry (1993). More recently, Dahl and Newton (2007) have proposed a DP mixture model (BEMMA) for testing correlated hypotheses and showed that the induced clustering information leads to an improved testing power. Unrelated to a discussion of Storey’s ODP, Bogdan et al. (2008) proposed the same semiparametric model we introduce in section 3 and used it for the comparison of misclassification rates among several competing models. The distinction between the previous approaches and ours is that here the DP prior is part of a decision theoretic setup. Besides, both Gopalan and Berry (1993) and Dahl and Newton (2007) restrict inference to hypotheses concerning the configuration of ties among the parameters of interest. A Bayesian implementation of the ODP procedure has been recently considered also by Cao et al. (2009). They develop a parametric Bayes hierarchical mixture model that achieves shrinkage of the gene-specific sampling variances. The resulting posterior means are then plugged in the ODP statistic and this is shown to outperform the original ODP and many widely used competing procedures. The discussion in Cao et al. (2009) is purely empirical, without reference to a decision theoretic setup.

The format of the paper is as follows. In section 2, we introduce the decision problem and the resulting Bayes rule. In section 3 we introduce the nonparametric (NP) probability model and we detail the algorithm for posterior inference. In section 4 we finally discuss the interpretation of the ODP as an approximate Bayes rule and introduce the corresponding BDP statistics. In section 5 we compare the behavior of the ODP and the BDP with simulated data and a microarray dataset. We show that the BDP provides at least some improvement over the frequentist optimal procedure. In section 6 we discuss some extensions of the previous settings for multigroup experiments, different loss functions and different inferential purposes. In particular, we consider a spatially dependent NP model and apply the Bayesian multicomparison decision rule to a simplified MRI dataset.

For a formal definition of the multiple comparison problem we need some notation and minimal assumptions on the sampling model. Assume that the data are *z_{i}* | *μ_{i}* ~ *f*(*z_{i}*; *μ_{i}*), independently across *i* = 1,…,*m*. For each comparison we wish to test

$${H}_{0i}:{\mu}_{i}\in A\phantom{\rule{thinmathspace}{0ex}}\text{vs}.\phantom{\rule{thinmathspace}{0ex}}{H}_{1i}:{\mu}_{i}\notin A,$$

using, for example, *A* = (−ε, +ε) or *A* = {0}. Let *G* denote the distribution of *μ_{i}*, obtained by marginalizing over the two competing hypotheses, and let π_{0} = *G*(*A*) denote the prior probability of the null. We write

$$p({\mu}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}G)={\pi}_{0}G({\mu}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}A)+(1-{\pi}_{0})G({\mu}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{A}^{c}),$$

where *A^{c}* denotes the complement of *A*, and *G*(· | *A*), *G*(· | *A^{c}*) denote *G* conditional on *μ_{i}* ∈ *A* and *μ_{i}* ∈ *A^{c}*, respectively. Equivalently, introducing a latent indicator *r_{i}* for the *i*-th alternative,

$$p({\mu}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{r}_{i})=\{\begin{array}{cc}G({\mu}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}A)\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}{r}_{i}=0\hfill \\ G({\mu}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{A}^{c})\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}{r}_{i}=1,\hfill \end{array}\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}\mathit{\text{Pr}}({r}_{i}=0)={\pi}_{0}.$$

(4)

We will use *z* = (*z*_{1},…,*z_{m}*) and θ = (*μ*_{1},…,*μ_{m}*) to denote the vectors of data and parameters.

In addition to a probability model, a Bayesian decision theoretic approach is characterized by a set of actions (decisions) and a loss function corresponding to all possible outcomes of the experiment. Let *d_{i}* ∈ {0, 1} denote the decision for the *i*-th comparison, with *d_{i}* = 1 indicating rejection of *H*_{0i}, and let *d* = (*d*_{1},…,*d_{m}*). The optimal rule minimizes the loss in expectation under the posterior distribution,

$${d}^{*}=\underset{d}{\text{arg}\phantom{\rule{thinmathspace}{0ex}}\text{min}}{\displaystyle \int L(d,\theta ,z)\phantom{\rule{thinmathspace}{0ex}}p(\theta \phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}z)}\phantom{\rule{thinmathspace}{0ex}}\mathrm{d}\theta .$$

We use a loss function that combines the true positive count, TP = ∑ *d_{i} r_{i}*, and the false positive count, FP = ∑ *d_{i}*(1 − *r_{i}*):

$$L(d,\theta ,z)=-{\displaystyle \sum {d}_{i}{r}_{i}+\mathrm{\lambda}{\displaystyle \sum {d}_{i}(1-{r}_{i})=-\text{TP}}+\mathrm{\lambda}\text{FP}}$$

(5)

The loss (5) can be interpreted as the Lagrangian for maximizing TP subject to a given bound on FP. It is a multiple comparison equivalent of the loss function underlying the Neyman-Pearson paradigm for a simple test.

Let *v_{i}* = *P*(*r_{i}* = 1 | *z*) denote the marginal posterior probability of the *i*-th alternative. The rule that minimizes the posterior expected loss under (5) is a threshold on *v_{i}*, with *t* = λ/(1 + λ):

$${d}_{i}^{*}=I({v}_{i}>t)=I\left(\frac{{v}_{i}}{1-{v}_{i}}>{t}_{2}\right).$$

(6)

Alternatively, the threshold on *v_{i}* can be written as a threshold on the posterior odds *v_{i}*/(1 − *v_{i}*), using *t*_{2} = *t*/(1 − *t*), as in the second expression in (6).
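
The thresholds in (6) follow from a one-line minimization of the posterior expected loss of (5). Taking the expectation of (5) given *z*,

$$E\{L(d,\theta ,z)\mid z\}={\sum}_{i}{d}_{i}\{\mathrm{\lambda}(1-{v}_{i})-{v}_{i}\},$$

which is minimized term by term by setting *d_{i}* = 1 exactly when λ(1 − *v_{i}*) < *v_{i}*, i.e., *v_{i}* > λ/(1 + λ) = *t*; dividing through by 1 − *v_{i}* gives the odds form with *t*_{2} = *t*/(1 − *t*) = λ.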

**Proposition 1.** *Consider m hypothesis tests H*_{0i} : *μ_{i}* ∈ *A versus H*_{1i} : *μ_{i}* ∉ *A, with* ${d}_{i}^{*}=I({v}_{i}>t)$ *as in (6) and D* = ∑ ${d}_{i}^{*}$ *the number of rejections. Then, whenever D* > 0,

$$\mathit{\text{pFDR}}=E\phantom{\rule{thinmathspace}{0ex}}\left(\frac{\mathit{\text{FP}}}{D}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}D>0\right)<1-t.$$

See appendix.

The only substantial assumption in Proposition 1 is that ${d}_{i}^{*}$ is a threshold on the gene-specific posterior probabilities of differential expression. Bogdan et al. (2008) proved a similar result for loss functions that are a linear combination of FN and FP counts.

Note that rule (6) is different from rules based on local FDR. Local FDR is defined as the posterior probability of the null given that we have observed a certain value *z_{i}*, and given assumed known sampling models under the null and alternative hypotheses (Efron et al., 2001). In contrast, *v_{i}* is the marginal posterior probability of the alternative under the full hierarchical model, conditioning on the entire data vector *z* and averaging over the uncertainty about the unknown distribution *G*.

We complete the sampling model (4) with a prior model for *G*. Prior probability models for unknown distributions, *G* in this case, are traditionally known as non-parametric (NP) Bayesian models. One of the most commonly used NP Bayesian priors is the DP model. We write *G* ~ DP (*G**, α) to indicate a DP prior for a random probability measure *G*. See Ferguson (1973, 1974) for a definition and important properties of the DP model. The model requires the specification of two parameters, the base measure *G** and the total mass parameter α. The base measure *G** is the prior mean, *E*(*G*) = *G**. The total mass parameter determines, among other important properties, the variation of the random measure around the prior mean. Small values of α imply high uncertainty. In the following discussion we exploit two key properties of the DP. First, a random measure *G* with DP prior is a.s. discrete. This allows us to write *G* as a mixture of point masses, *G* = ∑ *w_{h}* δ_{*m_{h}*}. Second, the DP prior is conjugate under random sampling: if *μ_{i}* | *G* ~ *G*, i.i.d., *i* = 1,…,*m*, with *G* ~ DP(*G**, α), then the posterior of *G* is again a DP with total mass α + *m* and base measure proportional to α*G** + ∑ δ_{μ_{i}}.

We use a DP prior on *G* to complete model (4)

$${\mu}_{i}\mid G\stackrel{\mathit{\text{iid}}}{\sim}G,\qquad G\sim \text{DP}({G}^{*},\alpha ).$$

(7)

Model (7) implies that the prior for the null hypothesis *p*_{0} = *G*(*A*) is Beta, *p*_{0} ~ *Be* (α*G** (*A*), α[1 − *G** (*A*)]).

Many approaches have been proposed in the literature to implement posterior Monte Carlo simulation for DP mixture models. See, for example, Neal (2000) for a review. We outline how these methods can be adapted to compute the posterior probabilities *v _{i}*.

Let *z_{i}*, *i* = 1,…,*m*, denote the observed summary statistics for the *m* comparisons. The sampling model is

$${z}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{i}\stackrel{\mathit{\text{ind}}}{~}\phantom{\rule{thinmathspace}{0ex}}f({z}_{i};{\mu}_{i}),\phantom{\rule{thinmathspace}{0ex}}i=1,\cdots ,m.$$

(8)

For example, *f*(*z_{i}*; *μ_{i}*) could be a normal kernel, *z_{i}* ~ *N*(*μ_{i}*, σ²). We complete the model with the hierarchical prior

$$\begin{array}{c}{\mu}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}G\phantom{\rule{thinmathspace}{0ex}}\stackrel{\mathit{\text{iid}}}{~}\phantom{\rule{thinmathspace}{0ex}}G\hfill \\ \text{}G~\text{DP}({G}^{*},\alpha )\phantom{\rule{thinmathspace}{0ex}}\text{with}\phantom{\rule{thinmathspace}{0ex}}{G}^{*}(\xb7)\phantom{\rule{thinmathspace}{0ex}}~\phantom{\rule{thinmathspace}{0ex}}{\pi}_{0}{h}_{A}(\xb7)+(1-{\pi}_{0}){h}_{{A}^{c}}(\xb7),\hfill \end{array}$$

(9)

where *h_{A}*(·) and *h_{A^{c}}*(·) are distributions with support on *A* and *A^{c}*, respectively, so that the base measure *G** reflects the mixture of the two hypotheses.

Algorithms for posterior Monte Carlo simulation in DP mixture models can easily be modified to adapt to the mixture in *G**. We will focus on the case *A* = {0} and outline the necessary changes for general *A*. We set *h_{A}*(·) = δ_{0}(·), a point mass at *μ* = 0, and write *h*(·) for *h_{A^{c}}*(·). Let *s* = (*s*_{1},…,*s_{m}*) denote cluster membership indicators, with *s_{i}* = 1 reserved for the null cluster {*μ_{i}* = 0}, let *L* denote the number of currently occupied clusters, and let *m*_{−i,s} denote the number of observations *z_{h}*, *h* ≠ *i*, in cluster *s*. The complete conditional posterior probabilities for *s_{i}* are

$$P({s}_{i}=s\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{s}_{-i},z)\propto \{\begin{array}{cc}\frac{{m}_{-i,1}+{\pi}_{0}\alpha}{m-1+\alpha}f({z}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{{s}_{i}}^{*}=0)\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}s=1\hfill \\ \frac{{m}_{-i,s}}{m-1+\alpha}{\displaystyle \int \phantom{\rule{thinmathspace}{0ex}}f({z}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{s}^{*})\phantom{\rule{thinmathspace}{0ex}}h\phantom{\rule{thinmathspace}{0ex}}({\mu}_{s}^{*}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{-i,s})\phantom{\rule{thinmathspace}{0ex}}d{\mu}_{s}^{*},}\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}2\le s\le L\hfill \\ \frac{(1-{\pi}_{0})\alpha}{m-1+\alpha}{\displaystyle \int \phantom{\rule{thinmathspace}{0ex}}f({z}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{s}^{*})\phantom{\rule{thinmathspace}{0ex}}h\phantom{\rule{thinmathspace}{0ex}}({\mu}_{s}^{*})\phantom{\rule{thinmathspace}{0ex}}d{\mu}_{s}^{*},}\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}s=L+1\hfill \end{array}$$

Here, $h\phantom{\rule{thinmathspace}{0ex}}({\mu}_{s}^{*}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{-i,s})$ denotes the posterior distribution of ${\mu}_{s}^{*}$ based on the prior *h*(·) and all observations *z_{h}*, *h* ≠ *i*, with *s_{h}* = *s*.
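
To make the sampler concrete, the following sketch implements the transition probabilities above for *A* = {0}, assuming a *N*(*μ*, 1) kernel and *h*(·) = *N*(0, τ²); all hyperparameter values are illustrative, and label 0 plays the role of the null cluster *s* = 1 in the notation above. The common factor *m* − 1 + α cancels after normalization.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def gibbs_dp_null(z, alpha=1.0, pi0=0.5, tau2=4.0, n_iter=100, burn=50):
    """Polya-urn Gibbs sampler for the DP model with a point-mass null.
    Sketch only: kernel N(z; mu, 1), base measure pi0*delta_0 +
    (1 - pi0)*N(0, tau2); hyperparameter values are illustrative.
    Label 0 is the null cluster (mu* = 0).  Returns v_bar, the fraction
    of saved sweeps in which each z_i sits in a non-null cluster."""
    m = len(z)
    s = np.zeros(m, dtype=int)       # start all observations in the null
    nonnull = np.zeros(m)
    kept = 0
    for it in range(n_iter):
        for i in range(m):
            s[i] = -1                # remove observation i from its cluster
            labels = [l for l in np.unique(s) if l > 0]
            probs = [(np.sum(s == 0) + pi0 * alpha) * norm.pdf(z[i], 0.0, 1.0)]
            cands = [0]
            for l in labels:         # existing non-null clusters
                zs = z[s == l]
                V = 1.0 / (len(zs) + 1.0 / tau2)   # posterior variance of mu*_l
                probs.append(len(zs) * norm.pdf(z[i], V * zs.sum(), np.sqrt(1.0 + V)))
                cands.append(l)
            # open a new non-null cluster: predictive N(0, 1 + tau2)
            probs.append((1.0 - pi0) * alpha * norm.pdf(z[i], 0.0, np.sqrt(1.0 + tau2)))
            cands.append(max(labels, default=0) + 1)
            p = np.asarray(probs)
            s[i] = cands[rng.choice(len(cands), p=p / p.sum())]
        if it >= burn:
            nonnull += (s > 0)
            kept += 1
    return nonnull / kept

# toy data: 40 nulls and 10 signals centered at mu = 3
z = np.concatenate([rng.normal(0.0, 1.0, 40), rng.normal(3.0, 1.0, 10)])
v_bar = gibbs_dp_null(z)
```

The returned `v_bar` are the Monte Carlo estimates of *v_{i}* used in (10).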

Once we have a posterior Monte Carlo sample of the model parameters it is easy to evaluate the decision rule. Assume we have *B* posterior Monte Carlo samples of random partitions, ${s}^{(b)}=({s}_{1}^{(b)},\dots ,{s}_{m}^{(b)})$, *b* = 1,…,*B*. We evaluate *v_{i}* = *P*(*r_{i}* = 1 | *z*) by the ergodic average ${\overline{v}}_{i}=\frac{1}{B}{\sum}_{b=1}^{B}I({s}_{i}^{(b)}\ne 1)$, the fraction of posterior samples in which *μ_{i}* is assigned to a non-null cluster. The decision rule becomes

$${S}_{\mathit{\text{NP}}}=I\phantom{\rule{thinmathspace}{0ex}}(\frac{{\overline{v}}_{i}}{1-{\overline{v}}_{i}}>t).$$

(10)

In the next section we show that *S _{NP}* can be approximated by a single threshold statistic similar to (1).

We show that, under a DP prior model, *d** ≈ *d*^{ODP}, that is the ODP is an approximate Bayes Rule. We start by writing the marginal posterior probability as expectation of conditional posterior probabilities,

$${v}_{i}=E\phantom{\rule{thinmathspace}{0ex}}[p({r}_{i}=1\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}G,z)\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}z]=E\phantom{\rule{thinmathspace}{0ex}}\left[\frac{{\displaystyle {\int}_{{A}^{c}}\phantom{\rule{thinmathspace}{0ex}}f({z}_{i};\phantom{\rule{thinmathspace}{0ex}}\mu )\phantom{\rule{thinmathspace}{0ex}}\mathit{\text{dG}}(\mu )}}{{\displaystyle {\int}_{A\cup {A}^{c}}\phantom{\rule{thinmathspace}{0ex}}f({z}_{i};\phantom{\rule{thinmathspace}{0ex}}\mu )\phantom{\rule{thinmathspace}{0ex}}\mathit{\text{dG}}(\mu )}}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}z\right],$$

and proceed with an approximation of the conditional posterior distribution *p*(*G* | *z*). The conjugate nature of the DP prior under random sampling implies $E(G\mid \mu ,z)={G}_{1}^{*}\propto \alpha {G}^{*}+\sum {\delta}_{{\mu}_{i}}$. Let ${F}_{m}=\frac{1}{m}\sum {\delta}_{{\widehat{\mu}}_{i}}$ denote the empirical distribution of the m.l.e.’s ${\widehat{\mu}}_{i}$. For sufficiently large *m* and small total mass α, substituting ${\widehat{\mu}}_{i}$ for *μ_{i}*, we can approximate ${G}_{1}^{*}\approx {F}_{m}$, leading to

$${v}_{i}\approx \frac{{\int}_{{A}^{c}}f(z;\mu )\,{\mathit{\text{dF}}}_{m}(\mu )}{\int f(z;\mu )\,{\mathit{\text{dF}}}_{m}(\mu )}=\frac{{\sum}_{{\widehat{\mu}}_{j}\in {A}^{c}}f({z}_{i};{\widehat{\mu}}_{j})}{{\sum}_{j=1}^{m}f({z}_{i};{\widehat{\mu}}_{j})}.$$

(11)

The connection with the ODP rule is apparent by computing the posterior odds, ${v}_{i}/(1-{v}_{i})\approx {\sum}_{{\widehat{\mu}}_{j}\in {A}^{c}}f({z}_{i};{\widehat{\mu}}_{j})/{\sum}_{{\widehat{\mu}}_{j}\in A}f({z}_{i};{\widehat{\mu}}_{j})$. Finally, thresholding *v_{i}*/(1 − *v_{i}*) is equivalent to thresholding

$$\frac{{v}_{i}}{1-{v}_{i}}+1\approx \frac{{\displaystyle {\sum}_{j=1}^{m}f({z}_{i};\phantom{\rule{thinmathspace}{0ex}}{\widehat{\mu}}_{j})}}{{\displaystyle {\sum}_{{\widehat{\mu}}_{j\in A}}f({z}_{i};\phantom{\rule{thinmathspace}{0ex}}{\widehat{\mu}}_{j})}}.$$

This is (3) with *w_{j}* = *I*(${\widehat{\mu}}_{j}\in A$). In other words, the estimated ODP statistic (3) is an approximate Bayes rule under the DP model.

Recognizing the ODP as an approximate Bayes rule opens two important directions of generalization. First, we can sharpen the approximation to define a slightly improved procedure, at no cost beyond moderate computational effort. We will do this in sections 4.2 and 4.3. Second, we can improve the ODP by making it more relevant to the decision problem at hand by modifying features of the underlying loss function; we will do this in Section 6.

We can improve the approximation in (11) by conditioning on a cluster configuration *s* and using cluster specific point estimates ${\widehat{\mu}}_{j}^{*}$. Given model (8)–(9), we can approximate the posterior probability *v _{i}* by

$${S}_{\text{BDP}}({z}_{i})=\frac{{\displaystyle {\sum}_{j=1}^{m}f({z}_{i};{\widehat{\mu}}_{j})}}{{\displaystyle {\sum}_{{\widehat{\mu}}_{j}\in A}f({z}_{i};{\widehat{\mu}}_{j})}}.$$

(12)

The ${\widehat{\mu}}_{j}$ are point estimates based on (9); for example, one could use the posterior means *E*(*μ_{j}* | *z*). Conditional on a partition *s*, with cluster sizes *m_{j}* and cluster-specific estimates ${\widehat{\mu}}_{j}^{*}$, the approximate posterior probability becomes

$${\widehat{v}}_{i}\equiv {\displaystyle \sum _{{\widehat{\mu}}_{j}^{*}\in {A}^{c}}{m}_{j}/m\phantom{\rule{thinmathspace}{0ex}}f({z}_{i};{\widehat{\mu}}_{j}^{*}})/{\displaystyle \sum _{j}{m}_{j}/m\phantom{\rule{thinmathspace}{0ex}}f({z}_{i};{\widehat{\mu}}_{j}^{*}}).$$

(13)

By the earlier argument, ${\widehat{v}}_{i}\approx {v}_{i}$, and thresholding the corresponding posterior odds ${\widehat{v}}_{i}/(1-{\widehat{v}}_{i})$ defines the BDP rule.
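
For a fixed partition *s*, (13) is a simple weighted mixture ratio. A sketch, again assuming a *N*(*μ*, 1) kernel and *A* = {0}, with the cluster-specific m.l.e. taken to be the within-cluster sample mean:

```python
import numpy as np
from scipy.stats import norm

def bdp_vhat(z, s, null_label=0):
    """Approximate posterior probability (13) for one cluster
    configuration s, assuming a N(mu, 1) kernel and A = {0}.
    Cluster `null_label` is pinned at mu* = 0; every other cluster
    uses its within-cluster m.l.e. (the sample mean) as mu*_j."""
    m = len(z)
    num, den = np.zeros(m), np.zeros(m)
    for label in np.unique(s):
        members = z[s == label]
        mu_star = 0.0 if label == null_label else members.mean()
        term = (len(members) / m) * norm.pdf(z, mu_star, 1.0)
        den += term
        if label != null_label:
            num += term                 # clusters with mu*_j in A^c
    return num / den

z = np.array([-0.2, 0.1, 3.1, 2.9, 0.3])
s = np.array([0, 0, 1, 1, 0])           # null cluster and one signal cluster
v_hat = bdp_vhat(z, s)
reject = v_hat / (1.0 - v_hat) > 1.0    # threshold the posterior odds
```

In this toy configuration the two observations near 3 are pulled into a common cluster and jointly rejected, illustrating the multiple shrinkage exploited by the BDP.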

The nature of the approximation (12) is further clarified and formalized by the following result, which justifies the replacement of *μ_{i}* by the cluster-specific m.l.e.’s ${\widehat{\mu}}_{j}^{*}$. Proving asymptotic results of this kind for the DP is generally not easy: one has to establish that the posterior mass for the random distribution *G* concentrates in a suitable sense as *m* → ∞. We therefore state the result for a finite mixture approximation of the DP prior. Consider

$${G}_{k}(\cdot)={\sum}_{j=1}^{k}{p}_{j}\,{\delta}_{{\mu}_{j}^{o}}(\cdot),$$

(14)

with (*p*_{1},…,*p_{k}*) ~ Dirichlet(α/*k*,…,α/*k*) and ${\mu}_{j}^{o}$, *j* = 1,…,*k*, i.i.d. draws from *G**; for large *k* this finite dimensional prior approximates the DP(*G**, α).

**Proposition 2.** *Assume* ${z}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{i}\stackrel{\mathit{\text{ind}}}{~}f({z}_{i};{\mu}_{i})$, *i* = 1,…,*m, and a random effects distribution p*(*μ_{i}* | *G_{k}*) *as in (4), with the prior on G_{k} defined by (14). Then*

$$\underset{m\to +\infty}{\text{lim}}p({r}_{i}=1\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{1},\dots ,{z}_{m})=\frac{E\phantom{\rule{thinmathspace}{0ex}}({\displaystyle {\sum}_{j:{\widehat{\mu}}_{j}^{*}\in {A}^{c}}\frac{{m}_{j}}{m}f({z}_{i};\phantom{\rule{thinmathspace}{0ex}}{\widehat{\mu}}_{j}^{*})}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}z)}{E\phantom{\rule{thinmathspace}{0ex}}({\displaystyle {\sum}_{j}\frac{{m}_{j}}{m}f({z}_{i};\phantom{\rule{thinmathspace}{0ex}}{\widehat{\mu}}_{j}^{*})}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}z)}.$$

*The expectation is with respect to the posterior distribution over all possible partitions of* {1,…,*m*} *with at most k clusters and* ${\widehat{\mu}}_{j}^{*}$ *is the cluster-specific m.l.e.*

See appendix.

The definition of the BDP can easily be extended to the general *k*-sample comparison problem. We assume that data for experimental units *i*, *i* = 1,…,*m*, are arranged by *k* distinct groups, and we wish to test if the *k* groups share a common sampling model. Let *x_{i}* = {${x}_{i}^{1},\dots ,{x}_{i}^{k}$} denote the data for unit *i*. For ease of exposition consider *k* = 2 groups, with group-specific summary statistics ${z}_{i}^{l}$ and parameters ${\mu}_{i}^{l}$. Assuming

$$\begin{array}{c}\hfill {z}_{i}^{l}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{i}^{l}\phantom{\rule{thinmathspace}{0ex}}\stackrel{\mathit{\text{ind}}}{~}f({z}_{i}^{l};{\mu}_{i}^{l}),\text{}l=1,2,\phantom{\rule{thinmathspace}{0ex}}i=1,\dots ,m\\ {\mu}_{i}=\{{\mu}_{i}^{1},{\mu}_{i}^{2}\}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}G\phantom{\rule{thinmathspace}{0ex}}\stackrel{i.i.d}{~}\phantom{\rule{thinmathspace}{0ex}}G,\phantom{\rule{thinmathspace}{0ex}}\text{with}\phantom{\rule{thinmathspace}{0ex}}G\phantom{\rule{thinmathspace}{0ex}}~\phantom{\rule{thinmathspace}{0ex}}\mathit{\text{DP}}(\alpha ,{G}_{0})\hfill \end{array}$$

we can proceed as in section 2.3 and approximate the posterior odds for the *i*-th comparison by

$$\frac{{v}_{i}}{1-{v}_{i}}\approx \frac{{\int}_{{A}^{c}}f(z;\mu )\,{\mathit{\text{dF}}}_{m}(\mu )}{{\int}_{A}f(z;\mu )\,{\mathit{\text{dF}}}_{m}(\mu )}=\frac{{\sum}_{j:({\widehat{\mu}}_{j}^{1},{\widehat{\mu}}_{j}^{2})\in {A}^{c}}f({z}_{i}^{1}\mid {\widehat{\mu}}_{j}^{1})\,f({z}_{i}^{2}\mid {\widehat{\mu}}_{j}^{2})}{{\sum}_{j:({\widehat{\mu}}_{j}^{1},{\widehat{\mu}}_{j}^{2})\in A}f({z}_{i};{\widehat{\mu}}_{j})},$$

(15)

where ${\widehat{\mu}}_{j}^{1},{\widehat{\mu}}_{j}^{2}$ and ${\widehat{\mu}}_{j}$ are appropriate estimates of the relevant parameters within and across conditions. Expression (15) is an estimated ODP statistic for the multicomparison problem, as discussed in Storey et al. (2007b). As before, substituting cluster-specific estimates ${\widehat{\mu}}_{{s}_{i}}^{*}$ for a selected partition *s* yields the corresponding BDP statistic.
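
A sketch of how the right-hand side of (15) might be evaluated for *k* = 2, assuming *N*(*μ*, 1) kernels, with the pooled average (*z*¹ + *z*²)/2 as the common-mean estimate under the null and the group-specific statistics as estimates under the alternative; the split into null and alternative comparisons (`null_idx`) is taken as given here:

```python
import numpy as np
from scipy.stats import norm

def two_group_odds(z1, z2, null_idx):
    """Evaluate the right-hand side of (15) for k = 2 groups, assuming
    N(mu, 1) kernels.  Comparisons flagged in null_idx contribute to
    the denominator through a pooled mean estimate (z1_j + z2_j)/2;
    the rest contribute to the numerator via group-specific estimates."""
    pooled = 0.5 * (z1 + z2)
    num = np.zeros_like(z1, dtype=float)
    den = np.zeros_like(z1, dtype=float)
    for j in range(len(z1)):
        if null_idx[j]:
            den += norm.pdf(z1, pooled[j]) * norm.pdf(z2, pooled[j])
        else:
            num += norm.pdf(z1, z1[j]) * norm.pdf(z2, z2[j])
    return num / den

z1 = np.array([0.1, 2.0, -0.3])          # group-1 statistics
z2 = np.array([0.2, -1.5, 0.0])          # group-2 statistics
null_idx = np.array([True, False, True])
odds = two_group_odds(z1, z2, null_idx)  # approximate v_i / (1 - v_i)
```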

We conduct a simulation study to compare the ODP with the NP Bayesian approximation outlined in the previous sections. We assume *m* = 2000 tests of *H*_{0} : *μ_{i}* = 0 versus *H*_{1} : *μ_{i}* ≠ 0, with the non-null means *μ_{i}* drawn from the discrete distribution

| *μ_{i}* | −4 | −3 | −2 | −1.5 | 1.2 | 2 | 3 | 4.5 | 5.8 |
|---|---|---|---|---|---|---|---|---|---|
| *p_{i}* | 0.02 | 0.08 | 0.01 | 0.01 | 0.27 | 0.02 | 0.08 | 0.005 | 0.005 |

The distribution mimics the observed distribution of the *t*-scores in the microarray data example considered in section 5.2, as shown in Figure 1. We use *A* = {0}, with a normal base measure *h_{A^{c}}*(·) centered at zero under the alternative.
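
The simulation truth can be reproduced as follows; placing the remaining probability mass (0.50) at *μ_{i}* = 0 and using a *N*(*μ_{i}*, 1) sampling model are our reading of the truncated description above, not exact specifications from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Non-null means and probabilities from the table above; the remaining
# mass (0.50) is placed at mu = 0, the null (an assumption from context).
mu_vals = np.array([-4.0, -3.0, -2.0, -1.5, 1.2, 2.0, 3.0, 4.5, 5.8, 0.0])
p_vals = np.array([0.02, 0.08, 0.01, 0.01, 0.27, 0.02, 0.08, 0.005, 0.005, 0.50])

m = 2000
mu = rng.choice(mu_vals, size=m, p=p_vals)  # latent means mu_i
z = rng.normal(loc=mu, scale=1.0)           # z_i | mu_i ~ N(mu_i, 1)
is_null = mu == 0.0                         # true status: r_i = 0
```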

The scores *z*_{i} for the simulation example in 5.1 (left panel) and the data *z*_{i} for the BRCA mutation microarray data in section 5.2 (right panel).

We also considered a similar comparison using (12) with posterior means substituted for ${\widehat{\mu}}_{i}$, and using alternative cluster configurations *s*. The conclusions remain substantially unchanged.

We first compare the *Ŝ*_{ODP} versus the *S*_{BDP} test by analyzing a microarray dataset obtained from breast cancer tumor tissues. The data have been analyzed, among others, in Hedenfalk et al. (2001), Storey and Tibshirani (2003) and Storey et al. (2007b) and can be downloaded from http://research.nhgri.nih.gov/microarray/NEJM_Supplement/. The data consist of 3,226 gene expression measurements on *n*_{1} = 7 BRCA1 arrays and *n*_{2} = 8 BRCA2 arrays (a third “sporadic” group was not used for this analysis). Following Storey and Tibshirani (2003), genes with one or more measurements exceeding 20 were eliminated from the data set, leaving *m* = 3,169 genes.

Let *x_{ij}* be the expression measurement for gene *i* on array *j*. For each gene we reduce the data to a two-sample *t*-statistic *z_{i}* comparing BRCA1 versus BRCA2 arrays.

We assess the relative performance of the estimated *Ŝ*_{ODP} versus approximation (12) of the NP Bayes rule. For a fair comparison, we evaluate the *S*_{BDP} as a frequentist rule with rejection region {*S*_{BDP} ≥ *c*}. The power of the test is evaluated in terms of the FDR and the *q*-value. See Storey (2002) and Storey et al. (2007b) for more discussion of the q-value and its evaluation.

The evaluation of *S*_{BDP} is based on 2000 iterations of a posterior MCMC sampler (1000 burn-in). The number of significant genes detected at each *q*-value is shown in Figure 3. We report the *S*_{BDP} as computed on the basis of the cluster configuration of a single iteration of the MCMC sampling scheme. We use the partition from a random iteration and from the MAP (maximum a posteriori) partition. Other choices are possible; however, our experience does not suggest significantly different conclusions under alternative choices. In Figure 3 we see that in both cases the *S*_{BDP} flags a larger number of significant genes at the same *q*-value than the *S*_{ODP}. The result leads us to recommend the *S*_{BDP} as a useful tool in multicomparison problems where a natural clustering of the tests is expected. In Table 1, we report the percentage of genes that are flagged by both tests for some choices of *q*-values. For most *q*-values of practical interest the *S*_{BDP} procedure identifies all genes that were flagged by the *S*_{ODP}, plus additional discoveries. For example, at *q* = 0.05, the BDP reveals 98 significant genes, against 87 revealed by the ODP and 47 by the standard FDR procedure devised by Benjamini and Hochberg (1995). Of the 11 additional genes, 7 had been previously reported in the literature as significant in distinguishing BRCA1 and BRCA2 mutations (see Hedenfalk et al., 2001). The additionally identified genes come at no extra cost beyond moderate computational effort: no additional experimental effort is required, and no trade-off in error rates is involved.

A comparison of *S*_{BDP} and *S*_{ODP} for identifying differentially expressed genes (see section 5.2). The *BDP*(*h*) curve is based on the MAP configuration; *BDP*(*r*) refers to *S*_{BDP} computed under the (random) last configuration imputed in the MCMC simulation.

For an alternative comparison we consider the “Golden Spike” data of Choe et al. (2005). They describe an Affymetrix GeneChip experiment, where 1309 individual cRNAs have been “spiked in” at known relative concentrations between the two samples (spike-in and control). We implemented the BDP and ODP as described above. For reference, we also include comparisons using the SAM procedure (Tusher et al., 2001; Storey, 2002) and independent two-sample t-tests. We used the reduced *Golden Spike* dataset provided in the R package *st*. The dataset reports 11,475 genes with 3 replicates per group, including 1,331 genes with known differential expression. Figure 4 shows the results of a comparison of BDP, ODP, SAM and independent t-tests. To compute the SAM statistic we used the samr package in R and alternatively the SAM software from http://www-stat.stanford.edu/~tibs/SAM/. Both implementations give practically identical results.

Both comparisons, in Figures 3 and 5, are for the ODP and BDP test based on data with a single gene-specific summary statistic *z_{i}* per gene. The restriction to a single summary statistic in the definition of ODP and BDP was purely for presentation, and can easily be relaxed. Let *z_{i}* = (*z*_{i1}, *z*_{i2}) denote a bivariate gene-specific summary, for example a sample mean difference and a sample variance, and consider testing

$${H}_{0i}:{\mu}_{i1}=0\phantom{\rule{thinmathspace}{0ex}}\text{vs}\phantom{\rule{thinmathspace}{0ex}}{H}_{1i}:{\mu}_{i1}\ne 0,\text{}i=1,\dots ,m.$$

Similar to before we define a random effects distribution *G* for ${\theta}_{i}=({\mu}_{i1},{\sigma}_{i}^{2})$ and assume a NP Bayesian prior on *G*,

$${\theta}_{i}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}G~G\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}G~\mathit{\text{DP}}(\alpha ,{G}_{0}),\text{}i=1,\dots ,m.$$

(16)

For the DP base measure we use a conjugate normal-inverse χ^{2} distribution ${G}_{0}=\phantom{\rule{thinmathspace}{0ex}}\left[{\pi}_{0}\phantom{\rule{thinmathspace}{0ex}}{\delta}_{0}+(1-{\pi}_{0})\phantom{\rule{thinmathspace}{0ex}}N(0,\frac{{\sigma}_{i}^{2}}{{k}_{0}})\right]\phantom{\rule{thinmathspace}{0ex}}\times \phantom{\rule{thinmathspace}{0ex}}\text{Inv}-{\chi}^{2}({\sigma}_{i}^{2};{\nu}_{0},{\sigma}_{0}^{2*})$. We can then proceed as in section 4.3 to define the ODP and BDP statistic, respectively. Figure 6 summarizes the results.

Once the optimality criterion and the probability model that lead to the ODP are identified, it is easy to modify the procedure to better accommodate preferences and losses other than (5). Often some discoveries might be considered more important than others. For example, if *A* = {*μ_{i}* = 0} one might be more interested in large deviations from 0. In this case the loss function should include an explicit reward for detecting true signals, as a (monotone) function of the distance of *μ_{i}* from the null. We replace (5) by

$$L(d,\theta ,z)=-{\displaystyle \sum {d}_{i}\phantom{\rule{thinmathspace}{0ex}}t({\mu}_{i})+\mathrm{\lambda}\phantom{\rule{thinmathspace}{0ex}}{\displaystyle \sum {d}_{i}(1-{r}_{i})},}$$

(17)

where *t*(*μ_{i}*) is a known function of the mean, e.g. *t*(*μ_{i}*) = |*μ_{i}*|. Writing $\overline{t({\mu}_{i})}=E\{t({\mu}_{i})\mid z\}$ for its posterior mean, the optimal rule under (17) becomes

$${d}_{i}^{m*}=I(\mathrm{\lambda}{v}_{i}+\overline{t({\mu}_{i})}>t),$$

(18)

for some threshold *t*. Although the nature of the rule has changed, we can still proceed as before and define a modified *S*_{ODP} statistic that approximates the Bayes rule. Let $\overline{t({\mu}_{i})}\approx {\sum}_{j=1}^{m}t({\widehat{\mu}}_{j})\,f({z}_{i};{\widehat{\mu}}_{j})/{\sum}_{j=1}^{m}f({z}_{i};{\widehat{\mu}}_{j})$ be an empirical Bayes estimate, justified similarly to the approximation in (11). As before, the point estimates ${\widehat{\mu}}_{i}$ are cluster-specific m.l.e.’s ${\widehat{\mu}}_{{s}_{i}}^{*}$ for some partition *s*. Substituting in (18), we obtain the single thresholding statistic

$${S}_{\text{BDP}}^{m}({z}_{i})=\frac{\mathrm{\lambda}\phantom{\rule{thinmathspace}{0ex}}{\displaystyle {\sum}_{{\widehat{\mu}}_{j}\notin A}f({z}_{i};{\widehat{\mu}}_{j})+{\displaystyle {\sum}_{j}t({\widehat{\mu}}_{j})f({z}_{i};{\widehat{\mu}}_{j})}}}{{\displaystyle {\sum}_{j}f({z}_{i};{\widehat{\mu}}_{j})}}.$$

(19)

We use ${S}_{\text{BDP}}^{m}({z}_{i})$ as a single thresholding function for the multiple comparison test in lieu of the *S*_{ODP}.

Any loss function that is written as a sum of comparison-specific terms leads to similar approximations and modifications of the ODP. For example, consider a loss function that involves a stylized description of a follow-up experiment. The loss function is motivated by the following scenario. Many microarray group comparison experiments are carried out as screening experiments. Genes that are flagged in the microarray experiment are chosen for a follow-up experiment to verify the possible discovery with an alternative experimental platform. For example, Abruzzo et al. (2005) describe a setup where RT-PCR (reverse transcription polymerase chain reaction) experiments are carried out to confirm discoveries proposed by an initial microarray group comparison. Abruzzo et al. (2005) report specific values for correlations across the platforms, error variances, etc. On the basis of this setup, Müller et al. (2007) consider a loss function that formalizes the consequences of this follow-up experiment. The loss function includes a sampling cost for the RT-PCR experiment and a reward that is realized if the RT-PCR experiment concludes with a significant outcome. The sample size is determined by a traditional power argument for a two-sample comparison, assuming a simple *z*-test for the difference of two population means. The probability of a significant outcome is the posterior predictive probability of the test statistic in the follow-up experiment falling in the rejection region. Let ${\overline{\mu}}_{i}$ and *s_{i}* denote the posterior mean and standard deviation of the difference in mean expression for gene *i*. For a test with significance level α and power β against an alternative ${\mu}_{i}^{A}$, the required sample size for the follow-up experiment is

$${n}_{i}({z}_{i})=2{[({q}_{\alpha}+{q}_{\beta})/{\mu}_{i}^{A}]}^{2}.$$

Let Φ(*z*) denote the standard normal c.d.f. The posterior predictive probability for a significant outcome of the follow-up experiment is approximately

$${\pi}_{i}({z}_{i})=(1-{p}_{\rho})\alpha +{p}_{\rho}\mathrm{\Phi}\phantom{\rule{thinmathspace}{0ex}}\left[\frac{{\rho}^{*}{\overline{\mu}}_{i}\sqrt{{n}_{i}/2}-{q}_{\alpha}}{\sqrt{1+{n}_{i}/2{\rho}^{*2}{s}_{i}^{2}}}\right].$$

We formalize the goal of maximizing the probability of success for the follow-up experiment while controlling the sampling cost by the loss function

$$L(d,\theta ,z)={\displaystyle \sum _{{d}_{i}=1}[-{c}_{1}{\pi}_{i}({z}_{i})+{n}_{i}({z}_{i})]}+{c}_{2}\phantom{\rule{thinmathspace}{0ex}}{\displaystyle \sum {d}_{i}}.$$

Here *c*_{2} is a fixed cost per gene for setting up a follow-up experiment, *c*_{1} is the (large) reward for a significant outcome in the follow-up experiment, and the sampling cost per gene and experiment is set to 1. The Bayes rule is ${d}_{i}^{p*}=I({c}_{1}{\pi}_{i}-{n}_{i}\ge {c}_{2})$. As before we can use ${\overline{\mu}}_{i}\approx {\sum}_{j=1}^{m}{\widehat{\mu}}_{j}f({z}_{i};{\widehat{\mu}}_{j})/{\sum}_{j=1}^{m}f({z}_{i};{\widehat{\mu}}_{j})$ and ${s}_{i}^{2}\approx {\sum}_{j=1}^{m}{({\overline{\mu}}_{i}-{\widehat{\mu}}_{j})}^{2}f({z}_{i};{\widehat{\mu}}_{j})/{\sum}_{j=1}^{m}f({z}_{i};{\widehat{\mu}}_{j})$ and approximate the Bayes rule by a modified ODP-style statistic. Let ${\widehat{\pi}}_{i}$ and ${\widehat{n}}_{i}$ denote the resulting plug-in estimates of ${\pi}_{i}$ and *n*_{i}, and define

$${S}_{\text{BDP}}^{p}({z}_{i})={c}_{1}{\widehat{\pi}}_{i}({z}_{i})-{\widehat{n}}_{i}.$$

Figure 6 compares the exact Bayes rules (18) and ${d}_{i}^{p*}=I({c}_{1}{\pi}_{i}-{n}_{i}\ge {c}_{2})$ with the tests based on the approximate ODP statistic ${S}_{\text{BDP}}^{m}$ and ${S}_{\text{BDP}}^{p}$, respectively.
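To make the follow-up-loss rule concrete, the sketch below strings together the plug-in moments, the sample-size formula ${n}_{i}({z}_{i})$, the predictive success probability ${\pi}_{i}({z}_{i})$, and the rule ${d}_{i}^{p*}=I({c}_{1}{\pi}_{i}-{n}_{i}\ge {c}_{2})$. This is an illustration under assumptions, not the authors' implementation: standard normal kernels, and names such as `rho_star` and `p_rho` mirroring the quantities ρ* and *p*_{ρ} above; all numeric defaults are hypothetical.

```python
import math

def phi(x):
    # standard normal c.d.f. via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_pdf(z, mu):
    return math.exp(-0.5 * (z - mu) ** 2) / math.sqrt(2 * math.pi)

def plugin_moments(z, mu_hat):
    """mu_bar_i and s_i^2 as kernel-weighted averages over cluster m.l.e.'s."""
    w = [normal_pdf(z, mj) for mj in mu_hat]
    tot = sum(w)
    mu_bar = sum(mj * wj for mj, wj in zip(mu_hat, w)) / tot
    s2 = sum((mu_bar - mj) ** 2 * wj for mj, wj in zip(mu_hat, w)) / tot
    return mu_bar, s2

def sample_size(q_alpha, q_beta, mu_alt):
    """n_i(z_i) = 2[(q_alpha + q_beta)/mu_i^A]^2 from the power argument."""
    return 2.0 * ((q_alpha + q_beta) / mu_alt) ** 2

def success_prob(mu_bar, s2, n, rho_star, p_rho, q_alpha, alpha):
    """pi_i(z_i): posterior predictive probability of a significant follow-up."""
    num = rho_star * mu_bar * math.sqrt(n / 2.0) - q_alpha
    den = math.sqrt(1.0 + (n / 2.0) * rho_star ** 2 * s2)
    return (1.0 - p_rho) * alpha + p_rho * phi(num / den)

def follow_up_decision(z, mu_hat, mu_alt, c1, c2,
                       q_alpha=1.645, q_beta=0.84,
                       rho_star=0.8, p_rho=0.9, alpha=0.05):
    # Bayes rule d_i = I(c1*pi_i - n_i >= c2), with plug-in moments
    mu_bar, s2 = plugin_moments(z, mu_hat)
    n = sample_size(q_alpha, q_beta, mu_alt)
    pi = success_prob(mu_bar, s2, n, rho_star, p_rho, q_alpha, alpha)
    return c1 * pi - n >= c2
```

With these (illustrative) settings, a gene whose statistic sits near a large alternative cluster is flagged for follow-up, while one near the null cluster is not.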

The nature of the ODP as an approximate Bayes rule was based on the semi-parametric model (7). However, the Bayes rules (6) or (18) remain meaningful under any probability model, as long as *v*_{i} and $\overline{t\phantom{\rule{thinmathspace}{0ex}}({\mu}_{i})}$ have meaningful interpretations. For example, in geostatistical applications we may be interested in isolating the exceedance regions of a spatial process, i.e. the regions where the process takes values above a given threshold (Zhang et al., 2008). Similarly, in the analysis of fMRI data, we aim to detect region-specific activations of the brain. See Pacifico et al. (2004) and Flandin and Penny (2007) for two recent Bayesian solutions to the problem. In particular, Friston and Penny (2003) propose an empirical Bayes approach to build posterior probability maps of site-specific signals. These approaches do not make use of an explicitly defined optimality criterion to support the proposed decision rules.

We consider a variation of the ODP that is suitable for spatial inference problems, using a specific spatial probability model as an example. We use the spatial model proposed by Gelfand et al. (2005). Let {*Y*(*s*), *s* ∈ *D*} be a random field, where *D* ⊂ *R*^{d}, *d* ≥ 2. Let *Y*_{i} = {*Y*_{i}(*s*_{1}),…,*Y*_{i}(*s*_{n})} denote the *i*-th replicate of the process, observed at sites *s*_{1},…,*s*_{n}. We assume

$${Y}_{i}\mid {\theta}_{i}\stackrel{\text{ind}}{\sim}f({y}_{i}\mid \mu +{\theta}_{i}),\quad i=1,\dots ,m,$$

(20)

where *f* is some multivariate distribution, μ is a (not necessarily constant across *s*) regression term, and θ_{i} = {θ_{i}(*s*_{1}),…,θ_{i}(*s*_{n})} is a vector of spatial random effects with

$${\theta}_{i}\mid G\stackrel{\text{iid}}{\sim}G,\quad i=1,\dots ,m,\quad \text{with}\quad G\sim \mathit{DP}(\alpha ,{G}_{0}),$$

(21)

for some base measure *G*_{0}. See Gelfand et al. (2005) for details. The assumption of the DP prior in the model is unrelated to the DP that we used to justify the nature of the ODP as approximate Bayes rule. In this setting, the inferential problem might be quite general, as it may involve subsets of sites and replicates.

For simplicity, we consider a null hypothesis specific to each location *s* and replicate *i*: *H*_{0si}: θ_{i}(*s*) ∈ *A*_{si}, where, for example, *A*_{si} = {θ_{i}(*s*) ≤ *b*} for a fixed threshold *b*.

The loss function (5) is usually not an adequate representation of the investigator’s preferences in a spatial setting. Posterior probability maps may show very irregular patterns, leading, for example, to flagging scattered sets of pixels for exceedance θ_{i}(*s*) > *b*. We may explicitly include in the loss function a penalty for such irregularities, i.e.

$$L(d,\mathit{\theta},\mathit{y})=-{\displaystyle \sum {d}_{j}\phantom{\rule{thinmathspace}{0ex}}{r}_{j}+\mathrm{\lambda}\phantom{\rule{thinmathspace}{0ex}}{\displaystyle \sum {d}_{j}\phantom{\rule{thinmathspace}{0ex}}(1-{r}_{j})}+\mathrm{\gamma}\phantom{\rule{thinmathspace}{0ex}}\mathit{\text{PI}},}$$

(22)

where *PI* is a penalization for irregularities. For example, *PI* could be the number of connected regions; see the example below. The decision rule corresponding to (22) is

$${d}^{*}(D)=\underset{\{d(s),s\in D\}}{\text{arg}\phantom{\rule{thinmathspace}{0ex}}\text{min}}L(d(s),\theta (s),y(s)).$$

Finding *d** requires numerical optimization.
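A minimal sketch of such an optimization, under illustrative assumptions: the penalty *PI* counts 4-connected regions of flagged pixels, the marginal posterior probabilities *v*(*s*) are given on a grid, and the optimizer is a greedy coordinate-descent scheme of our choosing (not the authors' algorithm), started from the pixel-wise rule *I*(*v* > λ/(1+λ)).

```python
def n_components(flag):
    """Count 4-connected regions of flagged pixels (the PI penalty)."""
    rows, cols = len(flag), len(flag[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = 0
    for r in range(rows):
        for c in range(cols):
            if flag[r][c] and not seen[r][c]:
                comps += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:  # depth-first flood fill
                    i, j = stack.pop()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        a, b = i + di, j + dj
                        if 0 <= a < rows and 0 <= b < cols and flag[a][b] and not seen[a][b]:
                            seen[a][b] = True
                            stack.append((a, b))
    return comps

def exp_loss(flag, v, lam, gam):
    """Posterior expected loss under (22): -sum d*v + lam*sum d*(1-v) + gam*PI."""
    loss = gam * n_components(flag)
    for fr, vr in zip(flag, v):
        for d, vj in zip(fr, vr):
            if d:
                loss += -vj + lam * (1.0 - vj)
    return loss

def greedy_rule(v, lam, gam, sweeps=20):
    """Greedy coordinate descent: flip pixels one at a time while the loss drops."""
    t = lam / (1.0 + lam)  # marginal rule as the starting point
    flag = [[vj > t for vj in row] for row in v]
    best = exp_loss(flag, v, lam, gam)
    for _ in range(sweeps):
        improved = False
        for r in range(len(v)):
            for c in range(len(v[0])):
                flag[r][c] = not flag[r][c]
                cand = exp_loss(flag, v, lam, gam)
                if cand < best:
                    best, improved = cand, True
                else:
                    flag[r][c] = not flag[r][c]  # revert
        if not improved:
            break
    return flag
```

With γ > 0 the penalty makes it worthwhile to drop isolated flagged pixels, which is exactly the behaviour sought in (22); with γ = 0 the rule reduces to pixel-wise thresholding.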

We illustrate the approach with a dataset of 18 individuals who underwent an MRI scan to detect signals of neurodegenerative patterns typical of Alzheimer’s disease (Ashburner et al., 2003). The data have been provided by the Laboratory of Epidemiology and Neuroimaging, IRCCS, Centro San Giovanni di Dio, Brescia, Italy and have been previously normalized with the freely available SPM5 software (http://www.fil.ion.ucl.ac.uk/spm/, see Friston et al. (1995) and Worsley and Friston (1995)). For simplicity, the dataset is restricted to gray matter density values collected on a regular two-dimensional grid of 14 × 19 pixels encompassing the hippocampus; the values are treated as continuous. The data have been previously analyzed in Petrone et al. (2009), although with a different inferential aim.

We assume the random effects model (20)–(21), where *f* is a multivariate Gaussian with mean μ + θ and covariance matrix τ^{2}*I*_{n}. The base measure *G*_{0} is a mean-zero stationary Gaussian process (see Gelfand et al., 2005, for details).

Figure 7 shows the resulting optimal rule for one individual in the MRI dataset. We are interested in detecting regions of low gray matter intensity in the MRI scans. Hence we consider *A* = {θ(*s*) < *b*}, where *b* is a fixed constant, corresponding to the first decile of the dataset. The activation threshold for the posterior probability is *t* = 0.8. Figure 7 shows the activation map for an individual with recognizable signs of hippocampal atrophy for γ = 0 (panel (a)) and for $\mathrm{\gamma}=\frac{1}{2}\mathrm{\lambda}$ (panel (b)). The additional penalty term provides a principled and coherent means of removing the singleton clusters that would otherwise be reported.

Starting from a decision theoretic framework, we provided an interpretation of the ODP as an approximate Bayes rule and introduced two directions of generalization. First, we improved the rule by sharpening the approximation. In a simulation example and a data analysis example we showed improved performance of the resulting BDP rule, even in terms of the frequentist operating characteristics that are optimized by the oracle version of the ODP. Second, we considered generalizations of the ODP by replacing the original generic loss function with loss functions that better reflect the goals of the specific analysis. For loss functions with an additive structure similar to that of the original loss function, the resulting rule can still be approximated by a single thresholding statistic similar to the ODP.

The use of a decision theoretic framework provides a convenient assurance of coherent inference for the proposed approach. However, it also inherits the limitations of any decision theoretic procedure. The optimality is always with respect to an assumed probability model and loss function. The stated loss function is usually a stylized version of the actual inference goals. Often that is sufficient to obtain a reasonable rule. Still, we caution the reader to critically validate and, if necessary, revise the inference in the light of evaluations such as frequentist operating characteristics. Also, the proposed methods are more computationally intensive than the original ODP procedure. In the simplest case we require some additional simulation to find a clustering of comparisons and to compute cluster-specific m.l.e.’s.

The main strengths of the proposed approach are the generality and the assurance of coherent inference. The approach is general in the sense that the proposed methods are meaningful for any underlying probability model, and in principle for arbitrary loss functions. The approach is coherent in the sense that it derives from minimizing expected loss under a well defined probability model. From a data analysis perspective, an important strength of the proposed approach is the need and the opportunity to explicitly think about the inference goals and formalize them in the loss function. A practical strength is the opportunity for improved inference at no additional experimental cost, and only moderate additional computational cost.

The work of the third author is partly supported by the NIH CTSA Grant UL1 RR024982.

The proof follows closely the proof of Theorem 1 in Storey and Tibshirani (2003). First, rewrite

$$\text{pFDR}=E\left(\frac{\mathit{FP}}{D}\mid D>0\right)=E\left(\frac{{\sum}_{i=1}^{m}{d}_{i}(1-{r}_{i})}{{\sum}_{i=1}^{m}{d}_{i}}\mid D>0\right).$$

(23)

The expectation is with respect to the distribution of (*z*_{1},…,*z _{m}*), conditionally on the event that some of the comparisons are significant. Hence,

$$\text{pFDR}={E}_{{z}_{1},\dots ,{z}_{m}\mid D>0}\left[E\left(\frac{{\sum}_{i=1}^{m}{d}_{i}(1-{r}_{i})}{{\sum}_{i=1}^{m}{d}_{i}}\mid {z}_{1},\dots ,{z}_{m}\right)\right].$$

Since *d*_{i} is a function of the sample *z*_{1},…,*z*_{m} only,

$$E\left(\frac{{\sum}_{i=1}^{m}{d}_{i}(1-{r}_{i})}{{\sum}_{i=1}^{m}{d}_{i}}\mid {z}_{1},\dots ,{z}_{m}\right)=\frac{{\sum}_{i=1}^{m}{d}_{i}(1-{v}_{i})}{{\sum}_{i=1}^{m}{d}_{i}},$$

and since ${d}_{i}^{*}=I({v}_{i}>t)$,

$$\text{pFDR}<{E}_{{z}_{1},\dots ,{z}_{m}\mid D>0}\left(\frac{{\sum}_{i=1}^{m}{d}_{i}(1-t)}{{\sum}_{i=1}^{m}{d}_{i}}\right)=1-t.$$
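As an aside, the bound is easy to check by simulation. The sketch below (an illustration, not part of the proof) draws hypothetical posterior probabilities *v*_{i}, simulates the true status *r*_{i} ~ Bernoulli(*v*_{i}), and estimates the pFDR of the rule *d*_{i} = *I*(*v*_{i} > *t*).

```python
import random

def estimate_pfdr(t, m=50, n_sim=2000, seed=1):
    """Monte Carlo estimate of pFDR for d_i = I(v_i > t), with v_i ~ U(0,1)
    playing the role of posterior probabilities and r_i ~ Bernoulli(v_i)
    the true status of comparison i (illustrative generating model)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_sim):
        v = [rng.random() for _ in range(m)]
        r = [rng.random() < vi for vi in v]   # true alternatives
        d = [vi > t for vi in v]              # rejections
        D = sum(d)
        if D > 0:  # condition on at least one discovery
            fp = sum(di and not ri for di, ri in zip(d, r))
            ratios.append(fp / D)
    return sum(ratios) / len(ratios)
```

Under this generating model the flagged comparisons have *v*_{i} > *t*, so the estimated pFDR falls well below 1 − *t*, in agreement with the bound.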

Because of the exchangeability of samples from a Pólya urn, without loss of generality we may consider ${v}_{m}=p({r}_{m}=1\mid {z}_{1},\dots ,{z}_{m})$. Then

$$\begin{array}{cc}{v}_{m}\hfill & ={\displaystyle \int p({r}_{m}=1\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{G}_{k},{z}_{1},\dots ,{z}_{m})p(d{G}_{k}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{1},\dots ,{z}_{m})}={\displaystyle \int {G}_{k}({A}^{c})p(d{G}_{k}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{1},\dots ,{z}_{m})}\hfill \\ \hfill & =\frac{{\displaystyle \int {\displaystyle {\int}_{{A}^{c}}p({z}_{m}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{m})}{G}_{k}(d{\mu}_{m})p(d{G}_{k}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{1},\dots ,{z}_{m-1})}}{p({z}_{m}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{1},\dots ,{z}_{m-1})}\hfill \end{array}$$

Both numerator and denominator take the form of

$$E\phantom{\rule{thinmathspace}{0ex}}\left({\displaystyle {\int}_{B}p({z}_{m}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{m}){G}_{k}(d{\mu}_{m})\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{1},\dots ,{z}_{m}}\right)={\displaystyle {\int}_{\mathcal{P}}{\displaystyle {\int}_{B}p({z}_{m}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{\mu}_{m}){G}_{k}(d{\mu}_{m})p(d{G}_{k}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}{z}_{1},\dots ,{z}_{m})},}$$

for a Borel set *B*. Let *s*^{o(m)} be a vector of cluster indicators, that is, ${s}_{i}^{o}=j$ if and only if ${\mu}_{i}={\mu}_{j}^{o}$, *i* = 1,…,*m*, *j* = 1,…,*k*.

For any fixed *m*, let {*s*^{o(m)}} denote the set of all partitions of {1,…,*m*} into at most *k* clusters. From the discussions in Ferguson (1983), Lo (1984), Ishwaran and James (2003), it follows that

$$E\left({\int}_{B}p({z}_{m}\mid {\mu}_{m})\,{G}_{k}(d{\mu}_{m})\mid {z}_{1},\dots ,{z}_{m-1}\right)=\frac{m}{\alpha +m}\sum _{{s}^{o(m-1)}}p({s}^{o(m-1)}\mid {z}_{1},\dots ,{z}_{m-1})\sum _{j=1}^{k}\frac{\alpha /k+{m}_{j}^{o}}{m}{\int}_{B}p({z}_{m}\mid {\mu}_{j}^{o})\,p(d{\mu}_{j}^{o}\mid {z}_{i}:{s}_{i}^{o}=j,\phantom{\rule{thinmathspace}{0ex}}i=1,\dots ,m-1),$$

(24)

where ${m}_{j}^{o}=\mathit{\text{card}}\{i:{s}_{i}^{o}=j,\phantom{\rule{thinmathspace}{0ex}}i=1,\dots ,m-1\}$ is the number of elements in cluster *j*. If ${m}_{j}^{o}=0$, then $p(d{\mu}_{j}^{o}\mid {z}_{i}:{s}_{i}^{o}=j)\equiv {G}^{*}(d{\mu}_{j}^{o})$. Expression (24) highlights that any partition of {1,…,*m*} can be obtained from a corresponding partition of {1,…,*m*−1} by adding the *m*-th observation to any of the previous clusters or by forming a new one.
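The cluster-reallocation structure of (24) can be mimicked numerically. The following sketch makes several illustrative assumptions: standard normal kernels, a plug-in at the cluster m.l.e.'s in place of the full posterior integral, and a point mass at a base-measure mean `mu0` as a crude stand-in for integrating over *G**. It computes the weights $(\alpha /k+{m}_{j}^{o})/(\alpha +m)$ and the resulting plug-in predictive density for the *m*-th observation.

```python
import math

def normal_pdf(z, mu):
    return math.exp(-0.5 * (z - mu) ** 2) / math.sqrt(2 * math.pi)

def predictive_weights(cluster_sizes, alpha, k):
    """Weights (alpha/k + m_j)/(alpha + m) for occupied clusters under a
    finite-k Dirichlet prior; the remaining mass goes to empty clusters."""
    m = sum(cluster_sizes)
    w = [(alpha / k + mj) / (alpha + m) for mj in cluster_sizes]
    new = (k - len(cluster_sizes)) * (alpha / k) / (alpha + m)
    return w, new

def predictive_density(z_new, mu_hat, cluster_sizes, alpha, k, mu0=0.0):
    """Plug-in predictive for the m-th observation: mix cluster kernels at
    the cluster m.l.e.'s mu_hat, plus a new-cluster term at mu0 (assumed)."""
    w, new = predictive_weights(cluster_sizes, alpha, k)
    dens = sum(wj * normal_pdf(z_new, mj) for wj, mj in zip(w, mu_hat))
    return dens + new * normal_pdf(z_new, mu0)
```

By construction the occupied-cluster weights and the new-cluster mass sum to one, mirroring the "join an existing cluster or open a new one" decomposition of (24).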

Now, note that for each *j* = 1,…,*k*, either ${m}_{j}^{o}/m=o(1)$ or ${m}_{j}^{o}/m=O(1)$. If ${m}_{j}^{o}/m=o(1)$, then $(\alpha /k+{m}_{j}^{o})/m\to 0$; if ${m}_{j}^{o}/m=O(1)$, we can use Laplace approximation arguments (see Schervish, 1995, Section 7.4.3, or Ghosh et al., 2006, p. 115) to obtain

$${\int}_{B}p({z}_{m}\mid {\mu}_{j}^{o})p(d{\mu}_{j}^{o}\mid {z}_{i}:{s}_{i}^{o}=j)\approx p({z}_{m}\mid {\widehat{\mu}}_{j}^{o})\phantom{\rule{thinmathspace}{0ex}}\mathrm{\Phi}({\widehat{\mu}}_{j}^{o}\in B)\left[1+O\left({({m}_{j}^{o})}^{-2}\right)\right],$$

where ${\widehat{\mu}}_{j}^{o}$ is the m.l.e. computed in cluster *j*, *j* = 1,…,*k*, obtained by solving $\frac{\partial}{\partial \mu}{\sum}_{i:{s}_{i}=j}f({z}_{i};\mu )+\frac{\partial}{\partial \mu}f({z}_{m};\mu )=0$, and Φ(·) denotes a Gaussian probability distribution.

Next we relabel the non-empty clusters by identifying the set $\{{\mu}_{j}^{o};{m}_{j}^{o}>0\}$ as the set of unique values $\{{\mu}_{j}^{*},j=1,\dots ,L\}$.

The proof is completed after noting that, since ${m}_{j}^{o}/m=O(1)$, the asymptotic consistency of posterior distributions implies that, as *m* → ∞, $\mathrm{\Phi}({\widehat{\mu}}_{j}^{*}\in B)\to {I}_{B}({\widehat{\mu}}_{j}^{*})$.

Michele Guindani, University of New Mexico, Albuquerque, NM 87111, U.S.A.

Peter Müller, University of Texas M.D. Anderson Cancer Center, Houston, TX 77030, U.S.A.

Song Zhang, University of Texas Southwestern Medical Center, Dallas, TX 75390, U.S.A.

- Arratia R, Tavaré S, Barbour A. Logarithmic Combinatorial Structures: A Probabilistic Approach. EMS Monographs in Mathematics; 2003.
- Ashburner J, Csernansky J, Davatzikos C, Fox N, Frisoni G, Thompson P. Computer-assisted imaging to assess brain structure in healthy and diseased brains. The Lancet Neurology. 2003;2:79–88.
- Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Statist. Soc. B. 1995;57:289–300.
- Bogdan M, Ghosh J, Tokdar S. A comparison of the Benjamini-Hochberg procedure with some Bayesian rules for multiple testing. In: Balakrishnan N, Peña E, Silvapulle M, editors. Beyond Parametrics in Interdisciplinary Research: Festschrift in Honor of Professor Pranab K. Sen. Beachwood, Ohio: Institute of Mathematical Statistics; 2008. pp. 211–230. IMS Collections.
- Cao J, Jie X, Zhang S, Whitehurst A, White M. Bayesian optimal discovery procedure for simultaneous significance testing. BMC Bioinformatics. 2009;10:5.
- Choe S, Boutros M, Michelson A, Church G, Halfon M. Preferred analysis methods for Affymetrix GeneChips revealed by a wholly defined control dataset. Genome Biology. 2005;6:R16.
- Cohen A, Sackrowitz H. More on the inadmissibility of step-up. Journal of Multivariate Analysis. 2007;98:481–492.
- Coram M, Lalley S. Consistency of Bayes estimators of a binary regression function. Ann. Statist. 2006;34:1233–1269.
- Dahl D, Newton M. Multiple hypothesis testing by clustering treatment effects. J. Am. Statist. Assoc. 2007;102:517–526.
- Efron B, Tibshirani R, Storey J, Tusher V. Empirical Bayes analysis of a microarray experiment. J. Am. Statist. Assoc. 2001;96:1151–1160.
- Ferguson T. A Bayesian analysis of some nonparametric problems. Ann. Statist. 1973;1:209–230.
- Ferguson T. Prior distributions on spaces of probability measures. Ann. Statist. 1974;2:615–629.
- Ferguson TS. Bayesian density estimation by mixtures of normal distributions. In: Rizvi H, Rustagi J, editors. Recent Advances in Statistics. New York: Academic Press; 1983. pp. 287–302.
- Flandin G, Penny W. Bayesian fMRI data analysis with sparse spatial basis function priors. NeuroImage. 2007;34:1108–1125.
- Friston KJ, Ashburner J, Poline J-B, Frith C, Frackowiak R. Spatial registration and normalization of images. Human Brain Mapping. 1995;2:165–189.
- Friston KJ, Penny W. Posterior probability maps and SPMs. NeuroImage. 2003;19:1240–1249.
- Gelfand A, Kottas A, MacEachern S. Bayesian nonparametric spatial modeling with Dirichlet process mixing. J. Am. Statist. Assoc. 2005;100:1021–1035.
- Ghosh J, Delampady M, Samanta T. An Introduction to Bayesian Analysis: Theory and Methods. Springer; 2006.
- Gopalan R, Berry D. Bayesian multiple comparisons using Dirichlet process priors. J. Am. Statist. Assoc. 1998;93:1130–1139.
- Hedenfalk I, Duggan D, Chen Y, Radmacher M, Bittner M, Simon R, Meltzer P, Gusterson B, Esteller M, Raffeld M, Yakhini Z, Ben-Dor A, Dougherty E, Kononen J, Bubendorf L, Fehrle W, Pittaluga S, Gruvberger S, Loman N, Johannsson O, Olsson H, Wilfond B, Sauter G, Kallioniemi O-P, Borg A, Trent J. Gene-expression profiles in hereditary breast cancer. New England Journal of Medicine. 2001;344:539–549.
- Ishwaran H, James L. Some further developments for stick-breaking priors: finite and infinite clustering and classification. Sankhyā Series A. 2003;65:577–592.
- Lo A. On a class of Bayesian nonparametric estimates: I. Density estimates. Ann. Statist. 1984;12:351–357.
- Müller P, Parmigiani G, Rice K. FDR and Bayesian multiple comparisons rules. In: Bernardo J, Bayarri M, Berger J, Dawid A, Heckerman D, Smith A, West M, editors. Bayesian Statistics 8. Oxford: Oxford University Press; 2007.
- Müller P, Parmigiani G, Robert CP, Rousseau J. Optimal sample size for multiple testing: the case of gene expression microarrays. J. Am. Statist. Assoc. 2004;99:990–1001.
- Neal RM. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics. 2000;9:249–265.
- Pacifico MP, Genovese C, Verdinelli I, Wasserman L. False discovery control for random fields. J. Am. Statist. Assoc. 2004;99:1002–1014.
- Petrone S, Guindani M, Gelfand A. Hybrid Dirichlet processes for functional data. J. R. Statist. Soc. B. 2009. To appear.
- Rodriguez A, Dunson D, Gelfand A. The nested Dirichlet process. J. Am. Statist. Assoc. 2009. To appear.
- Schervish MJ. Theory of Statistics. Springer; 1995.
- Scott J, Berger J. An exploration of aspects of Bayesian multiple testing. Journal of Statistical Planning and Inference. 2006;136:2144–2162.
- Storey J. A direct approach to false discovery rates. J. R. Statist. Soc. B. 2002;64:479–498.
- Storey J. The optimal discovery procedure: a new approach to simultaneous significance testing. J. R. Statist. Soc. B. 2007a;69:347–368.
- Storey J, Dai J, Leek J. The optimal discovery procedure for large-scale significance testing, with applications to comparative microarray experiments. Biostatistics. 2007b;8:414–432.
- Storey J, Tibshirani R. Statistical significance for genomewide studies. Proc. Natn. Acad. Sci. USA. 2003;100:9440–9445.
- Tusher V, Tibshirani R, Chu G. Significance analysis of microarrays applied to the ionizing radiation response. Proc. Natn. Acad. Sci. USA. 2001;98:5116–5121.
- Worsley KJ, Friston KJ. Analysis of fMRI time-series revisited - again. NeuroImage. 1995;2:173–181.
- Zhang J, Craigmile P, Cressie N. Loss function approaches to predict a spatial quantile and its exceedance region. Technometrics. 2008;50:216–227.
