


J Comput Graph Stat. Author manuscript; available in PMC 2010 September 25.

Published in final edited form as:

J Comput Graph Stat. 2010 September 1; 19(3): 645–665.

doi: 10.1198/jcgs.2010.09014

PMCID: PMC2945396

NIHMSID: NIHMS205488

Hua Zhou, Post-Doctoral Fellow, Department of Human Genetics, University of California, Los Angeles, CA 90095-7088 (Email: huazhou@ucla.edu).


Abstract

The MM (minorization–maximization) principle is a versatile tool for constructing optimization algorithms. Every EM algorithm is an MM algorithm but not vice versa. This article derives MM algorithms for maximum likelihood estimation with discrete multivariate distributions such as the Dirichlet-multinomial and Connor–Mosimann distributions, the Neerchal–Morel distribution, the negative-multinomial distribution, certain distributions on partitions, and zero-truncated and zero-inflated distributions. These MM algorithms increase the likelihood at each iteration and reliably converge to the maximum from well-chosen initial values. Because they involve no matrix inversion, the algorithms are especially pertinent to high-dimensional problems. To illustrate the performance of the MM algorithms, we compare them to Newton’s method on data used to classify handwritten digits.

1. INTRODUCTION

The MM algorithm generalizes the celebrated EM algorithm (Dempster, Laird, and Rubin 1977). In this article we apply the MM (minorization–maximization) principle to devise new algorithms for maximum likelihood estimation with several discrete multivariate distributions. A series of research papers and review articles (Groenen 1993; de Leeuw 1994; Heiser 1995; Hunter and Lange 2004; Lange 2004; Wu and Lange 2010) have argued that the MM principle can lead to simpler derivations of known EM algorithms. More importantly, the MM principle also generates many new algorithms of considerable utility. Some statisticians encountering the MM principle for the first time react against its abstraction, unfamiliarity, and dependence on the mathematical theory of inequalities. This is unfortunate because real progress can be made applying a few basic ideas in a unified framework. The current article relies on just three well-known inequalities. For most of our examples, the derivation of a corresponding EM algorithm appears much harder, the main hindrance being the difficulty of choosing an appropriate missing data structure.

Discrete multivariate distributions are seeing wider use throughout statistics. Modern data mining employs such distributions in image reconstruction, pattern recognition, document clustering, movie rating, network analysis, and random graphs. High-dimensional data demand high-dimensional models with tens to hundreds of thousands of parameters. Newton’s method and Fisher scoring are capable of finding the maximum likelihood estimates of these distributions via the parameter updates

$${\theta}^{(n+1)}={\theta}^{(n)}+M{({\theta}^{(n)})}^{-1}\nabla L({\theta}^{(n)}),$$

where ∇*L*(θ) is the score function and *M*(θ) is the observed or the expected information matrix, respectively. Several complications can compromise the performance of these traditional algorithms: (a) the information matrix *M*(θ) may be expensive to compute, (b) it may fail to be positive definite in Newton’s method, (c) in high dimensions it is expensive to solve the linear system *M*(θ^{(n)})*x* = ∇*L*(θ^{(n)}), and (d) if parameter constraints and parameter bounds intrude, then the update itself requires modification. Although mathematical scientists have devised numerous remedies and safeguards, these all come at a cost of greater implementation complexity. The MM principle offers a versatile weapon for attacking optimization problems of this sort. Although MM algorithms have at best a linear rate of convergence, their updates are often very simple. This can tip the computational balance in their favor. In addition, MM algorithms are typically easy to code, numerically stable, and amenable to acceleration. For the discrete distributions considered here, there is one further simplification often missed in the literature. These distributions involve gamma functions. To avoid the complications of evaluating the gamma function and its derivatives, we fall back on a device suggested by Haldane (1941) that replaces ratios of gamma functions by rising polynomials.

Rather than tire the skeptical reader with more preliminaries, it is perhaps best to move on to our examples without delay. The next section defines the MM principle, discusses our three driving inequalities, and reviews two simple acceleration methods. Section 3 derives MM algorithms for some standard multivariate discrete distributions, namely the Dirichlet-multinomial and Connor–Mosimann distributions, the Neerchal–Morel distribution, the negative-multinomial distribution, certain distributions on partitions, and zero-truncated and zero-inflated distributions. Section 4 describes a numerical experiment comparing the performance of the MM algorithms, accelerated MM algorithms, and Newton’s method on model fitting of handwritten digit data. Our discussion concludes by mentioning directions for further research and by frankly acknowledging the limitations of the MM principle.

2. OVERVIEW OF THE MM ALGORITHM

As we have already emphasized, the MM algorithm is a principle for creating algorithms rather than a single algorithm. There are two versions of the MM principle, one for iterative minimization and another for iterative maximization. Here we deal only with the maximization version. Let *f* (θ) be the objective function we seek to maximize. An MM algorithm involves minorizing *f* (θ) by a surrogate function *g*(θ|θ^{n}) anchored at the current iterate θ^{n} of a search. Minorization is defined by the two properties

$$f({\theta}^{n})=g({\theta}^{n}|{\theta}^{n}),$$

(2.1)

$$f(\theta )\ge g(\theta |{\theta}^{n}),\text{}\theta \ne {\theta}^{n}.$$

(2.2)

In other words, the surface θ ↦ *g*(θ|θ^{n}) lies below the surface θ ↦ *f* (θ) and is tangent to it at the point θ = θ^{n}. Construction of the surrogate function *g*(θ|θ^{n}) constitutes the first M of the MM algorithm.

In the second M of the algorithm, we maximize the surrogate *g*(θ|θ^{n}) rather than *f* (θ). If θ^{n+1} denotes the maximum point of *g*(θ|θ^{n}), then this action forces the ascent property *f* (θ^{n+1}) ≥ *f* (θ^{n}). The straightforward proof

$$f({\theta}^{n+1})\ge g({\theta}^{n+1}|{\theta}^{n})\ge g({\theta}^{n}|{\theta}^{n})=f({\theta}^{n})$$

reflects definitions (2.1) and (2.2) and the choice of θ^{n+1}. The ascent property is the source of the MM algorithm’s numerical stability. Strictly speaking, it depends only on increasing *g*(θ|θ^{n}), not on maximizing *g*(θ|θ^{n}).

The art in devising an MM algorithm revolves around intelligent choices of minorizing functions. This brings us to the first of our three basic minorizations

$$\text{ln}\left({\displaystyle \sum _{i=1}^{m}{\alpha}_{i}}\right)\ge {\displaystyle \sum _{i=1}^{m}\frac{{\alpha}_{i}^{n}}{{\displaystyle {\sum}_{j=1}^{m}{\alpha}_{j}^{n}}}}\text{ln}\left(\frac{{\displaystyle {\sum}_{j=1}^{m}{\alpha}_{j}^{n}}}{{\alpha}_{i}^{n}}{\alpha}_{i}\right),$$

(2.3)

invoking the chord below the graph property of the concave function ln *x*. Note here that all parameter values are positive and that equality obtains whenever
${\alpha}_{i}={\alpha}_{i}^{n}$ for all *i*. Our second basic minorization

$$-\text{ln}(c+\alpha )\ge -\text{ln}(c+{\alpha}^{n})-\frac{1}{c+{\alpha}^{n}}(\alpha -{\alpha}^{n})$$

(2.4)

restates the supporting hyperplane property of the convex function −ln(*c* + *x*). Our final basic minorization

$$-\text{ln}(1-\alpha )\ge -\text{ln}(1-{\alpha}^{n})+\frac{{\alpha}^{n}}{1-{\alpha}^{n}}\text{ln}\left(\frac{\alpha}{{\alpha}^{n}}\right)$$

(2.5)

is just a rearrangement of the two-point information inequality

$${\alpha}^{n}\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\alpha}^{n}+(1-{\alpha}^{n})\text{ln}(1-{\alpha}^{n})\ge {\alpha}^{n}\text{ln}\phantom{\rule{thinmathspace}{0ex}}\alpha +(1-{\alpha}^{n})\text{ln}(1-\alpha ).$$

Here α and α^{n} must lie in (0, 1). Any standard text on inequalities, for example, the book by Steele (2004), proves these three inequalities. Because piecemeal minorization works well, our derivations apply the basic minorizations only to strategic parts of the overall objective function, leaving other parts untouched.
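Because these three inequalities carry the entire article, a quick numerical spot-check is reassuring. The following sketch (not part of the original derivation; the test points are arbitrary positive values of our choosing) evaluates both sides of (2.3), (2.4), and (2.5):

```python
import numpy as np

rng = np.random.default_rng(0)

# (2.3): ln(sum_i a_i) >= sum_i w_i ln((sum_j a_j^n / a_i^n) a_i),
#        with weights w_i = a_i^n / sum_j a_j^n
a = rng.uniform(0.1, 5.0, size=4)      # arbitrary positive test point
an = rng.uniform(0.1, 5.0, size=4)     # arbitrary positive anchor a^n
w = an / an.sum()
lhs3 = np.log(a.sum())
rhs3 = np.sum(w * np.log(an.sum() / an * a))

# (2.4): -ln(c + a) >= -ln(c + a^n) - (a - a^n)/(c + a^n)
c, x, xn = 2.0, 1.3, 0.7
lhs4 = -np.log(c + x)
rhs4 = -np.log(c + xn) - (x - xn) / (c + xn)

# (2.5): -ln(1 - a) >= -ln(1 - a^n) + a^n/(1 - a^n) ln(a/a^n)
p, pn = 0.4, 0.6
lhs5 = -np.log(1 - p)
rhs5 = -np.log(1 - pn) + pn / (1 - pn) * np.log(p / pn)

assert lhs3 >= rhs3 and lhs4 >= rhs4 and lhs5 >= rhs5
```

Equality is attained in each case when the trial point coincides with the anchor, as required by the tangency condition (2.1).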

The convergence theory of MM algorithms is well known (Lange 2004). Convergence to a stationary point is guaranteed provided five properties of the objective function *f* (θ) and the MM algorithm map *M*(θ) hold: (a) *f* (θ) is coercive on its open domain; (b) *f* (θ) has only isolated stationary points; (c) *M*(θ) is continuous; (d) θ* is a fixed point of *M*(θ) if and only if it is a stationary point of *f* (θ); (e) *f* [*M*(θ)] ≥ *f* (θ), with equality if and only if θ is a fixed point of *M*(θ). Most of these conditions are easy to verify for our examples, so the details will be omitted.

A common criticism of EM and MM algorithms is their slow convergence. Fortunately, MM algorithms can be easily accelerated (Jamshidian and Jennrich 1995; Lange 1995a; Jamshidian and Jennrich 1997; Varadhan and Roland 2008). We will employ two versions of the recent squared iterative methods (SQUAREM) developed by Varadhan and Roland (2008). These simple vector extrapolation techniques require computation of two MM updates at each iteration. Denote the two updates by *M*(θ^{n}) and *M* ∘ *M*(θ^{n}), where *M*(θ) is the MM algorithm map. These updates in turn define two vectors

$$u=M({\theta}^{n})-{\theta}^{n},\qquad \upsilon =M\circ M({\theta}^{n})-M({\theta}^{n})-u.$$

The versions diverge in how they compute the steplength constant *s*. SqMPE1 (minimal polynomial extrapolation) takes $s=\frac{{u}^{t}u}{{u}^{t}\upsilon}$, while SqRRE1 (reduced rank extrapolation) takes $s=\frac{{u}^{t}\upsilon}{{\upsilon}^{t}\upsilon}$. Once *s* is specified, we define the next accelerated iterate by θ^{n+1} = θ^{n} − 2*su* + *s*^{2}υ. Readers should consult the original article for motivation of SQUAREM. Whenever θ^{n+1} decreases the log-likelihood *L*(θ), we revert to the MM update θ^{n+1} = *M* ∘ *M*(θ^{n}). Finally, we declare convergence when

$$\frac{|L({\theta}^{n})-L({\theta}^{n-1})|}{|L({\theta}^{n-1})|+1}<\epsilon .$$

(2.6)

In the numerical examples that follow, we use the stringent criterion ε = 10^{−9}. More sophisticated stopping criteria based on the gradient of the objective function and the norm of the parameter increment lead to similar results.
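To make the acceleration recipe concrete, here is a minimal Python sketch of one SqMPE1 step. The map `M`, the log-likelihood `L`, and the linear contraction used for illustration are stand-ins of our own choosing, not code from the original article:

```python
import numpy as np

def squarem_step(M, L, theta):
    """One SqMPE1-accelerated step built from two MM updates, with the
    safeguard of reverting to M(M(theta)) if the objective decreases."""
    m1 = M(theta)
    m2 = M(m1)
    u = m1 - theta
    v = m2 - m1 - u
    s = (u @ u) / (u @ v)               # SqMPE1 steplength
    cand = theta - 2.0 * s * u + s**2 * v
    if L(cand) < L(theta):              # ascent safeguard
        cand = m2
    return cand

# toy illustration: a linear contraction whose fixed point is theta* = 3
M = lambda th: 0.9 * th + 0.3
L = lambda th: -np.sum((th - 3.0) ** 2)
theta = np.array([0.0])
theta = squarem_step(M, L, theta)
```

For this linear toy map the extrapolation lands exactly on the fixed point in a single step, which illustrates why SQUAREM shines on the slowly contracting maps typical of MM iterations.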

3. APPLICATIONS

When count data exhibit overdispersion, the Dirichlet-multinomial distribution is often substituted for the multinomial distribution. The multinomial distribution is characterized by a vector **p** = (*p*_{1},…, *p _{d}*) of cell probabilities and a total number of trials *m*. If **p** is not fixed but follows a Dirichlet distribution with parameter vector α = (α_{1},…, α_{d}), then the counts **X** follow the admixed density

$$\begin{array}{cc}h(\mathbf{x}|\alpha )\hfill & =\frac{\mathrm{\Gamma}(|\alpha |)}{\mathrm{\Gamma}({\alpha}_{1})\cdots \mathrm{\Gamma}({\alpha}_{d})}{\displaystyle {\int}_{{\mathrm{\Delta}}_{d}}\left(\begin{array}{c}\hfill m\hfill \\ \hfill \mathbf{x}\hfill \end{array}\right){\displaystyle \prod _{i=1}^{d}{p}_{i}^{{x}_{i}+{\alpha}_{i}-1}{\mathit{\text{dp}}}_{1}\cdots {\mathit{\text{dp}}}_{d}}}\hfill \\ \hfill & =\left(\begin{array}{c}\hfill m\hfill \\ \hfill \mathbf{x}\hfill \end{array}\right)\frac{\mathrm{\Gamma}({\alpha}_{1}+{x}_{1})\cdots \mathrm{\Gamma}({\alpha}_{d}+{x}_{d})}{\mathrm{\Gamma}(|\alpha |+m)}\frac{\mathrm{\Gamma}(|\alpha |)}{\mathrm{\Gamma}({\alpha}_{1})\cdots \mathrm{\Gamma}({\alpha}_{d})},\hfill \end{array}$$

(3.1)

where $\left|\alpha \right|={\displaystyle {\sum}_{i=1}^{d}{\alpha}_{i}}$, Δ_{d} is the unit simplex in ℝ^{d}, and **x** = (*x*_{1},…, *x _{d}*) is the vector of cell counts. Note that the count total $\left|x\right|={\displaystyle {\sum}_{i=1}^{d}{x}_{i}}$ is fixed at *m*. Straightforward calculations give the moments

$$\begin{array}{cc}\hfill \mathbf{E}({\mathbf{X}}_{i})& =m\frac{{\alpha}_{i}}{\left|\alpha \right|},\hfill \\ \hfill \mathbf{\text{Var}}({\mathbf{X}}_{i})& =m\frac{{\alpha}_{i}}{\left|\alpha \right|}\left(1-\frac{{\alpha}_{i}}{\left|\alpha \right|}\right)\frac{\left|\alpha \right|+m}{\left|\alpha \right|+1},\hfill \\ \hfill \mathbf{\text{Cov}}({\mathbf{X}}_{i},{\mathbf{X}}_{j})& =-m\frac{{\alpha}_{i}}{\left|\alpha \right|}\frac{{\alpha}_{j}}{\left|\alpha \right|}\frac{\left|\alpha \right|+m}{\left|\alpha \right|+1},\text{}i\ne j.\hfill \end{array}$$

If the fractions $\frac{{\alpha}_{i}}{\left|\alpha \right|}$ tend to constants *p _{i}* as |α| tends to ∞, then these moments collapse to the corresponding moments of the multinomial distribution with proportions *p*_{1},…, *p _{d}*.

One of the most unappealing features of the density function *h*(**x**|α) is the occurrence of the gamma function. Fortunately, very early on Haldane (1941) noted the alternative representation

$$h(\mathbf{x}|\alpha )=\left(\begin{array}{c}\hfill m\hfill \\ \hfill \mathbf{x}\hfill \end{array}\right)\frac{{\displaystyle {\prod}_{j=1}^{d}{\alpha}_{j}({\alpha}_{j}+1)\cdots ({\alpha}_{j}+{x}_{j}-1)}}{\left|\alpha \right|(|\alpha |+1)\cdots (|\alpha |+m-1)}.$$

(3.2)
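Haldane's representation (3.2) is straightforward to code. The following sketch (function names ours) evaluates the density via rising polynomials only, with no gamma function in sight:

```python
import math

def rising(a, k):
    """Rising polynomial a(a+1)...(a+k-1); the empty product (k = 0) is 1."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def dirmult_pmf(x, alpha):
    """Dirichlet-multinomial pmf via Haldane's representation (3.2)."""
    m = sum(x)
    coef = math.factorial(m)                 # multinomial coefficient
    for xi in x:
        coef //= math.factorial(xi)
    num = 1.0
    for xj, aj in zip(x, alpha):
        num *= rising(aj, xj)
    return coef * num / rising(sum(alpha), m)
```

As a sanity check, with *d* = 2 and α = (1, 1) the distribution is discrete uniform over the *m* + 1 possible counts, so `dirmult_pmf((2, 1), (1.0, 1.0))` returns 1/4.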

The replacement of gamma functions by rising polynomials is a considerable gain in simplicity. Bailey (1957) later suggested the reparameterization

$${\pi}_{j}=\frac{{\alpha}_{j}}{\left|\alpha \right|},\text{}j=1,\dots ,d,\text{}\theta =\frac{1}{\left|\alpha \right|}$$

in terms of the proportion vector π = (π_{1},…, π_{d}) and the overdispersion parameter θ. In this setting, the discrete density function becomes

$$h(\mathbf{x}|\pi ,\theta )=\left(\begin{array}{c}\hfill m\hfill \\ \hfill \mathbf{x}\hfill \end{array}\right)\frac{{\displaystyle {\prod}_{j=1}^{d}{\pi}_{j}({\pi}_{j}+\theta )\cdots [{\pi}_{j}+({x}_{j}-1)}\theta ]}{(1+\theta )\cdots [1+(m-1)\theta ]}.$$

(3.3)

This version of the density function is used to good effect by Griffiths (1973) in implementing Newton’s method for maximum likelihood estimation with the beta-binomial distribution.

In maximum likelihood estimation, we pass to log-likelihoods. This introduces logarithms and turns factors into sums. To construct an MM algorithm under the parameterization (3.2), we need to minorize terms such as ln(α_{j} + *k*) and −ln(|α| + *k*). The basic inequalities (2.3) and (2.4) are directly relevant. Suppose we draw *t* independent samples **x**_{1},…, **x**_{t} from the Dirichlet-multinomial distribution with *m _{i}* trials for sample *i*. Up to an additive constant, the log-likelihood of the data reduces to

$$\begin{array}{c}\hfill L(\alpha )=-{\displaystyle \sum _{k}{r}_{k}\text{ln}(\left|\alpha \right|+k)+{\displaystyle \sum _{j}{\displaystyle \sum _{k}{s}_{\mathit{\text{jk}}}\text{ln}({\alpha}_{j}+k),}}}\hfill \\ \hfill {r}_{k}={\displaystyle \sum _{i}{1}_{\{{m}_{i}\ge k+1\}},\text{}{s}_{\mathit{\text{jk}}}=}{\displaystyle \sum _{i}{1}_{\{{x}_{\mathit{\text{ij}}}\ge k+1\}}.}\hfill \end{array}$$

(3.4)

The index *k* in these formulas ranges from 0 to max_{i} *m _{i}* −1.

Applying our two basic minorizations to *L*(α) yields the surrogate function

$$g(\alpha |{\alpha}^{n})=-{\displaystyle \sum _{k}{r}_{k}}\frac{1}{\left|{\alpha}^{n}\right|+k}\left|\alpha \right|+{\displaystyle \sum _{j}{\displaystyle \sum _{k}{s}_{\mathit{\text{jk}}}\frac{{\alpha}_{j}^{n}}{{\alpha}_{j}^{n}+k}\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\alpha}_{j}}}$$

up to an irrelevant additive constant. Equating the partial derivative of the surrogate with respect to α_{j} to 0 produces the simple MM update

$${\alpha}_{j}^{n+1}=\left({\displaystyle \sum _{k}\frac{{s}_{\mathit{\text{jk}}}{\alpha}_{j}^{n}}{{\alpha}_{j}^{n}+k}}\right)/\left({\displaystyle \sum _{k}\frac{{r}_{k}}{\left|{\alpha}^{n}\right|+k}}\right).$$

(3.5)

Minka (2003) derived these updates from a different perspective.
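A minimal implementation of the iteration is sketched below (the toy data and helper names are ours). Each pass through the loop applies (3.5) to the statistics defined in (3.4) and, by the ascent property, can only increase the log-likelihood:

```python
import numpy as np

def mm_update_alpha(alpha, s, r):
    """One MM update (3.5); s[j, k] and r[k] are the statistics in (3.4)."""
    k = np.arange(len(r))
    num = (s * alpha[:, None] / (alpha[:, None] + k)).sum(axis=1)
    den = (r / (alpha.sum() + k)).sum()
    return num / den

def loglik(alpha, s, r):
    """Log-likelihood (3.4), up to an additive constant."""
    k = np.arange(len(r))
    return (s * np.log(alpha[:, None] + k)).sum() \
        - (r * np.log(alpha.sum() + k)).sum()

# toy data: t = 4 observations, d = 2 categories, m_i = 4 trials each
X = np.array([[3, 1], [2, 2], [4, 0], [1, 3]])
m = X.sum(axis=1)
K = m.max()
r = np.array([(m >= k + 1).sum() for k in range(K)])
s = np.array([[(X[:, j] >= k + 1).sum() for k in range(K)] for j in range(2)])

alpha = np.ones(2)
for _ in range(200):
    new = mm_update_alpha(alpha, s, r)
    assert loglik(new, s, r) >= loglik(alpha, s, r) - 1e-10  # ascent property
    alpha = new
```

Note that the update preserves positivity of the α_{j} automatically, so no projection or step-halving safeguard is needed.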

Under the parameterization (3.3), matters are slightly more complicated. Now we minorize the terms −ln(1 + *k*θ) and ln(π_{j} + *k*θ) via

$$-\text{log}(1+k\theta )\ge -\text{log}(1+k{\theta}^{n})-\frac{1}{1+k{\theta}^{n}}(k\theta -k{\theta}^{n})$$

and

$$\text{log}({\pi}_{j}+k\theta )\ge \frac{{\pi}_{j}^{n}}{{\pi}_{j}^{n}+k{\theta}^{n}}\text{log}\left(\frac{{\pi}_{j}^{n}+k{\theta}^{n}}{{\pi}_{j}^{n}}{\pi}_{j}\right)+\frac{k{\theta}^{n}}{{\pi}_{j}^{n}+k{\theta}^{n}}\text{log}\left(\frac{{\pi}_{j}^{n}+k{\theta}^{n}}{k{\theta}^{n}}k\theta \right).$$

These minorizations lead to the surrogate function

$$-{\displaystyle \sum _{k}{r}_{k}}\frac{k}{1+k{\theta}^{n}}\theta +{\displaystyle \sum _{j}{\displaystyle \sum _{k}{s}_{\mathit{\text{jk}}}}}\left\{\frac{{\pi}_{j}^{n}}{{\pi}_{j}^{n}+k{\theta}^{n}}\text{log}\phantom{\rule{thinmathspace}{0ex}}{\pi}_{j}+\frac{k{\theta}^{n}}{{\pi}_{j}^{n}+k{\theta}^{n}}\text{log}\phantom{\rule{thinmathspace}{0ex}}\theta \right\}$$

up to an irrelevant constant. Setting the partial derivative with respect to θ equal to 0 yields the MM update

$${\theta}^{n+1}=\left({\displaystyle \sum _{j}{\displaystyle \sum _{k}\frac{{s}_{\mathit{\text{jk}}}k{\theta}^{n}}{{\pi}_{j}^{n}+k{\theta}^{n}}}}\right)/\left({\displaystyle \sum _{k}\frac{{r}_{k}k}{1+k{\theta}^{n}}}\right).$$

(3.6)

The update of the proportion vector π must be treated as a Lagrange multiplier problem owing to the constraint Σ_{j} π_{j} = 1. Familiar arguments produce the MM update

$${\pi}_{j}^{n+1}=\left({\displaystyle \sum _{k}\frac{{s}_{\mathit{\text{jk}}}{\pi}_{j}^{n}}{{\pi}_{j}^{n}+k{\theta}^{n}}}\right)/\left({\displaystyle \sum _{l}{\displaystyle \sum _{k}\frac{{s}_{\mathit{\text{lk}}}{\pi}_{l}^{n}}{{\pi}_{l}^{n}+k{\theta}^{n}}}}\right).$$

(3.7)
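Updates (3.6) and (3.7) can be combined into a single step, as in the following sketch (the toy statistics and function names are ours):

```python
import numpy as np

def mm_update_bailey(pi, theta, s, r):
    """One MM step under Bailey's parameterization:
    theta via (3.6), then pi via (3.7)."""
    k = np.arange(len(r))
    denom = pi[:, None] + k * theta                     # pi_j + k*theta
    theta_new = (s * k * theta / denom).sum() \
        / (r * k / (1 + k * theta)).sum()
    w = (s * pi[:, None] / denom).sum(axis=1)
    pi_new = w / w.sum()                                # Lagrange normalization
    return pi_new, theta_new

# toy statistics r_k and s_jk (d = 2, k = 0..3), as defined in (3.4)
r = np.array([4, 4, 4, 4])
s = np.array([[4, 3, 2, 1],
              [3, 2, 1, 0]])

pi, theta = np.array([0.5, 0.5]), 0.5
for _ in range(200):
    pi, theta = mm_update_bailey(pi, theta, s, r)
```

The normalization in the last line of the update enforces the constraint Σ_{j} π_{j} = 1 at every iteration.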

The two algorithms summarized by update (3.5) and by updates (3.6) and (3.7) enjoy several desirable properties. First, parameter constraints are built in. Second, stationary points of the log-likelihood are fixed points of the updates. Virtually all MM algorithms share these properties. The update (3.7) also reduces to the maximum likelihood estimate

$${\widehat{\pi}}_{j}=\frac{{\displaystyle {\sum}_{k}{s}_{\mathit{\text{jk}}}}}{{\displaystyle {\sum}_{l}{\displaystyle {\sum}_{k}{s}_{\mathit{\text{lk}}}}}}=\frac{{\displaystyle {\sum}_{i}{x}_{\mathit{\text{ij}}}}}{{\displaystyle {\sum}_{i}{m}_{i}}}$$

(3.8)

of the corresponding multinomial proportion when θ^{n} = 0.

The estimate (3.8) furnishes a natural initial value ${\pi}_{j}^{0}$. To derive an initial value for the overdispersion parameter θ, consider the first two moments

$$\mathbf{E}({P}_{j})=\frac{{\alpha}_{j}}{\left|\alpha \right|},\text{}\mathbf{E}({P}_{j}^{2})=\frac{{\alpha}_{j}({\alpha}_{j}+1)}{\left|\alpha \right|(\left|\alpha \right|+1)}$$

of a Dirichlet distribution with parameter vector α. These identities imply that

$$\sum _{j=1}^{d}\frac{\mathbf{E}({P}_{j}^{2})}{\mathbf{E}({P}_{j})}=\frac{\left|\alpha \right|+d}{\left|\alpha \right|+1}=\rho ,$$

which can be solved for θ = 1/|α| in terms of ρ as θ = (ρ − 1)/(*d* − ρ). Substituting the estimate

$$\widehat{\rho}={\displaystyle \sum _{j}\frac{{\displaystyle {\sum}_{i}{({x}_{\mathit{\text{ij}}}/{m}_{i})}^{2}}}{{\displaystyle {\sum}_{i}({x}_{\mathit{\text{ij}}}/{m}_{i})}}}$$

for ρ gives a sensible initial value θ^{0}.
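In code, the two starting values amount to a few lines (a sketch; the toy counts and function name are ours):

```python
import numpy as np

def init_dirmult(X, m):
    """Moment-based starting values: pi0 from (3.8), theta0 via rho-hat."""
    d = X.shape[1]
    pi0 = X.sum(axis=0) / m.sum()
    p = X / m[:, None]                       # observed proportions x_ij / m_i
    rho = ((p ** 2).sum(axis=0) / p.sum(axis=0)).sum()
    theta0 = (rho - 1) / (d - rho)
    return pi0, theta0

X = np.array([[3, 1], [2, 2], [4, 0], [1, 3]])   # toy counts
m = X.sum(axis=1)
pi0, theta0 = init_dirmult(X, m)
```

For these toy counts the recipe gives π^{0} = (0.625, 0.375) and θ^{0} = 0.5, reflecting the overdispersion visible in the raw proportions.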

To test our two MM algorithms, we now turn to the beta-binomial data of Haseman and Soares (1976) on male mice exposed to various mutagens. The two outcome categories are (a) dead implants and (b) survived implants. In their first dataset, there are *t* = 524 observations with between *m* = 1 and *m* = 20 trials per observation. Table 1 presents the final log-likelihood, number of iterations, and running time (in seconds) of the two MM algorithms and their SQUAREM accelerations on these data. All MM algorithms converge to the maximum point previously found by the scoring method (Paul, Balasooriya, and Banerjee 2005). For the choice ε = 10^{−9} in stopping criterion (2.6), the MM algorithm (3.5) takes 700 iterations and 0.1580 sec to converge on a laptop computer. The alternative MM algorithm given in the updates (3.6) and (3.7) takes 339 iterations and 0.1626 sec. Figure 1 depicts the progress of the MM iterates on a contour plot of the log-likelihood. The conventional MM algorithm crawls slowly along the ridge in the contour plot; the accelerated versions SqMPE1 and SqRRE1 significantly reduce both the number of iterations and the running time until convergence.

Figure 1. MM ascent of the Dirichlet-multinomial log-likelihood surface. A color version of this figure is available in the electronic version of this article.

The Dirichlet-multinomial distribution suffers from two restrictions that limit its applicability, namely the negative correlation of coordinates and the determination of variances by means. It is possible to overcome these restrictions by choosing a more flexible mixing distribution as a prior for the multinomial. Connor and Mosimann (1969) suggested a generalization of the Dirichlet distribution that meets this challenge. The resulting admixed distribution, called the generalized Dirichlet-multinomial distribution, has proved its worth in machine learning problems such as the modeling and clustering of images, handwritten digits, and text documents (Bouguila 2008). It is therefore helpful to derive an MM algorithm for maximum likelihood estimation with this distribution that avoids the complications of gamma/digamma/trigamma functions arising with Newton’s method (Bouguila 2008). The Connor–Mosimann distribution is constructed inductively by the mechanism of stick breaking. Imagine breaking the interval [0, 1] into *d* subintervals of lengths *P*_{1},…, *P _{d}* by choosing independent beta random variables *Z _{j}* ~ Beta(α_{j}, β_{j}) and setting *P*_{1} = *Z*_{1}, *P _{j}* = *Z _{j}*(1 − *Z*_{1})⋯(1 − *Z*_{j−1}) for 2 ≤ *j* ≤ *d* − 1, and *P _{d}* = (1 − *Z*_{1})⋯(1 − *Z*_{d−1}). The density of the resulting proportions is

$$g(\mathbf{p}|\alpha ,\beta )={\displaystyle \prod _{j=1}^{d-1}\frac{\mathrm{\Gamma}({\alpha}_{j}+{\beta}_{j})}{\mathrm{\Gamma}({\alpha}_{j})\mathrm{\Gamma}({\beta}_{j})}{p}_{j}^{{\alpha}_{j}-1}{\left(1-{\displaystyle \sum _{i=1}^{j}{p}_{i}}\right)}^{{\gamma}_{j}},\text{}\mathbf{p}\in {\mathrm{\Delta}}_{d},}$$

where γ_{j} = β_{j} − α_{j+1} − β_{j+1} for *j* = 1,…, *d* − 2 and γ_{d−1} = β_{d−1} − 1. The univariate case (*d* = 2) corresponds to the beta distribution. The Dirichlet distribution is recovered by taking β_{j} = α_{j+1} + ⋯ + α_{d}. With *d* − 2 more parameters than the Dirichlet distribution, the Connor–Mosimann distribution is naturally more versatile.

The Connor–Mosimann distribution is again conjugate to the multinomial distribution, and the marginal density of a count vector **X** over *m* trials is easily shown to be

$$\begin{array}{cc}\text{Pr}(\mathbf{X}=\mathbf{x})\hfill & ={\displaystyle {\int}_{{\mathrm{\Delta}}_{d}}\left(\begin{array}{c}\hfill m\hfill \\ \hfill \mathbf{x}\hfill \end{array}\right)}\phantom{\rule{thinmathspace}{0ex}}{\displaystyle \prod _{j=1}^{d-1}{p}_{j}^{{x}_{j}}g(\mathbf{p}|\alpha ,\beta )d\mathbf{p}}\hfill \\ \hfill & =\left(\begin{array}{c}\hfill m\hfill \\ \hfill \mathbf{x}\hfill \end{array}\right){\displaystyle \prod _{j=1}^{d-1}\frac{\mathrm{\Gamma}({\alpha}_{j}+{x}_{j})}{\mathrm{\Gamma}({\alpha}_{j})}\frac{\mathrm{\Gamma}({\beta}_{j}+{y}_{j+1})}{\mathrm{\Gamma}({\beta}_{j})}\frac{\mathrm{\Gamma}({\alpha}_{j}+{\beta}_{j})}{\mathrm{\Gamma}({\alpha}_{j}+{\beta}_{j}+{y}_{j})},}\hfill \end{array}$$

where ${y}_{j}={\displaystyle {\sum}_{k=j}^{d}{x}_{k}}$. If we adopt the reparameterization

$${\theta}_{j}=\frac{1}{{\alpha}_{j}+{\beta}_{j}},\text{}{\pi}_{j}=\frac{{\alpha}_{j}}{{\alpha}_{j}+{\beta}_{j}},\text{}j=1,\dots ,d-1,$$

and use the fact that *x _{j}* + *y*_{j+1} = *y _{j}*, then the density of **X** reduces to

$$\left(\begin{array}{c}\hfill m\hfill \\ \hfill \mathbf{x}\hfill \end{array}\right)\phantom{\rule{thinmathspace}{0ex}}{\displaystyle \prod _{j=1}^{d-1}\frac{{\pi}_{j}\cdots [{\pi}_{j}+({x}_{j}-1){\theta}_{j}]\times (1-{\pi}_{j})\cdots [1-{\pi}_{j}+({y}_{j+1}-1){\theta}_{j}]}{1\cdots [1+({y}_{j}-1){\theta}_{j}]}}.$$

(3.9)

Thus, maximum likelihood estimation of the parameter vectors π = (π_{1},…, π_{d−1}) and θ = (θ_{1},…, θ_{d−1}) by the MM algorithm reduces to the case of *d* − 1 independent beta-binomial problems.

Let **x**_{1},…, **x**_{t} be a random sample from the generalized Dirichlet-multinomial distribution (3.9) with *m _{i}* trials for observation *i*, and set ${y}_{\mathit{\text{ij}}}={\sum}_{k=j}^{d}{x}_{\mathit{\text{ik}}}$. Define the statistics

$${r}_{\mathit{\text{jk}}}={\displaystyle \sum _{i=1}^{t}{1}_{\{{x}_{\mathit{\text{ij}}}\ge k+1\}}},\qquad {s}_{\mathit{\text{jk}}}={\displaystyle \sum _{i=1}^{t}{1}_{\{{y}_{\mathit{\text{ij}}}\ge k+1\}}}$$

for 1 ≤ *j* ≤ *d* − 1; the statistic *s*_{jk} is also needed for *j* = *d*. In this notation, the reader can readily check that the MM updates become

$$\begin{array}{c}{\pi}_{j}^{n+1}=\left({\displaystyle \sum _{k}\frac{{r}_{\mathit{\text{jk}}}{\pi}_{j}^{n}}{{\pi}_{j}^{n}+k{\theta}_{j}^{n}}}\right)/\left({\displaystyle \sum _{k}\left[\frac{{r}_{\mathit{\text{jk}}}{\pi}_{j}^{n}}{{\pi}_{j}^{n}+k{\theta}_{j}^{n}}+\frac{{s}_{j+1,k}(1-{\pi}_{j}^{n})}{1-{\pi}_{j}^{n}+k{\theta}_{j}^{n}}\right]}\right),\hfill \\ {\theta}_{j}^{n+1}=\left({\displaystyle \sum _{k}\left[\frac{{r}_{\mathit{\text{jk}}}k{\theta}_{j}^{n}}{{\pi}_{j}^{n}+k{\theta}_{j}^{n}}+\frac{{s}_{j+1,k}k{\theta}_{j}^{n}}{1-{\pi}_{j}^{n}+k{\theta}_{j}^{n}}\right]}\right)/\left({\displaystyle \sum _{k}\frac{{s}_{\mathit{\text{jk}}}k}{1+k{\theta}_{j}^{n}}}\right).\hfill \end{array}$$

Neerchal and Morel (1998, 2005) proposed an alternative to the Dirichlet-multinomial distribution that accounts for overdispersion by finite admixture. If **x** represents count data over *m* trials and *d* categories, then their discrete density is

$$h(\mathbf{x}|\pi ,\rho )={\displaystyle \sum _{j=1}^{d}{\pi}_{j}\left(\begin{array}{c}\hfill m\hfill \\ \hfill \mathbf{x}\hfill \end{array}\right)\phantom{\rule{thinmathspace}{0ex}}{[(1-\rho ){\pi}_{1}]}^{{x}_{1}}\cdots {[(1-\rho ){\pi}_{j}+\rho ]}^{{x}_{j}}\cdots {[(1-\rho ){\pi}_{d}]}^{{x}_{d}},}$$

(3.10)

where π = (π_{1},…, π_{d}) is a vector of proportions and ρ ∈ [0, 1] is an overdispersion parameter. The Neerchal–Morel distribution collapses to the multinomial distribution when ρ = 0. Straightforward calculations show that the Neerchal–Morel distribution has means, variances, and covariances

$$\begin{array}{cc}\hfill \mathbf{E}({\mathbf{X}}_{i})& =m{\pi}_{i},\hfill \\ \hfill \mathbf{\text{Var}}({\mathbf{X}}_{i})& =m{\pi}_{i}(1-{\pi}_{i})[1-{\rho}^{2}+m{\rho}^{2}],\hfill \\ \hfill \mathbf{\text{Cov}}({\mathbf{X}}_{i},{\mathbf{X}}_{j})& =-m{\pi}_{i}{\pi}_{j}[1-{\rho}^{2}+m{\rho}^{2}],\text{}i\ne j.\hfill \end{array}$$

These are precisely the same as the first- and second-order moments of the Dirichlet-multinomial distribution provided we identify π_{i} = α_{i}/|α| and ρ^{2} = 1/(|α| + 1).

If we draw *t* independent samples **x**_{1},…, **x**_{t} from the Neerchal–Morel distribution with *m _{i}* trials for sample *i*, then the log-likelihood of the data is

$$\sum _{i}\text{ln}\phantom{\rule{thinmathspace}{0ex}}\left\{{\displaystyle \sum _{j}{\pi}_{j}\left(\begin{array}{c}\hfill {m}_{i}\hfill \\ \hfill {\mathbf{x}}_{i}\hfill \end{array}\right)\phantom{\rule{thinmathspace}{0ex}}{[(1-\rho ){\pi}_{1}]}^{{x}_{i1}}\cdots {[(1-\rho ){\pi}_{j}+\rho ]}^{{x}_{\mathit{\text{ij}}}}\cdots {[(1-\rho ){\pi}_{d}]}^{{x}_{\mathit{\text{id}}}}}\right\}}.$$

(3.11)

It is worth bearing in mind that every mixture model yields to the minorization (2.3). This is one of the secrets to the success of the EM algorithm. As a practical matter, explicit minorization via inequality (2.3) is more mechanical and often simpler to implement than performing the E step of the EM algorithm. This is particularly true when several minorizations intervene before we reach the ideal surrogate. Here two successive minorizations are needed.

To state the first minorization, let us abbreviate

$${\mathrm{\Pi}}_{\mathit{\text{ij}}}={\pi}_{j}{[(1-\rho ){\pi}_{1}]}^{{x}_{i1}}\cdots {[(1-\rho ){\pi}_{j}+\rho ]}^{{x}_{\mathit{\text{ij}}}}\cdots {[(1-\rho ){\pi}_{d}]}^{{x}_{\mathit{\text{id}}}}$$

and denote by ${\mathrm{\Pi}}_{\mathit{\text{ij}}}^{n}$ the same quantity evaluated at the *n*th iterate. In this notation it follows that

$$\text{ln}\left({\displaystyle \sum _{j}{\mathrm{\Pi}}_{\mathit{\text{ij}}}}\right)\ge {\displaystyle \sum _{j}{w}_{\mathit{\text{ij}}}^{n}\phantom{\rule{thinmathspace}{0ex}}\text{ln}\left(\frac{{\mathrm{\Pi}}_{\mathit{\text{ij}}}}{{w}_{\mathit{\text{ij}}}^{n}}\right)}={\displaystyle \sum _{j}{w}_{\mathit{\text{ij}}}^{n}\phantom{\rule{thinmathspace}{0ex}}\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\mathrm{\Pi}}_{\mathit{\text{ij}}}-{\displaystyle \sum _{j}{w}_{\mathit{\text{ij}}}^{n}\phantom{\rule{thinmathspace}{0ex}}\text{ln}\phantom{\rule{thinmathspace}{0ex}}{w}_{\mathit{\text{ij}}}^{n}}}$$

with weights ${w}_{\mathit{\text{ij}}}^{n}=\frac{{\mathrm{\Pi}}_{\mathit{\text{ij}}}^{n}}{{\displaystyle {\sum}_{l}{\mathrm{\Pi}}_{\mathit{\text{il}}}^{n}}}$. The logarithm splits ln Π_{ij} into the sum

$$\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\mathrm{\Pi}}_{\mathit{\text{ij}}}={m}_{i}\phantom{\rule{thinmathspace}{0ex}}\text{ln}(1-\rho )+\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\pi}_{j}+{x}_{i1}\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\pi}_{1}+\cdots +{x}_{\mathit{\text{ij}}}\phantom{\rule{thinmathspace}{0ex}}\text{ln}({\pi}_{j}+\theta )+\cdots +{x}_{\mathit{\text{id}}}\phantom{\rule{thinmathspace}{0ex}}\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\pi}_{d}$$

for θ = ρ/(1 − ρ). To separate the parameters π_{j} and θ in the troublesome term ln(π_{j} + θ), we apply the minorization (2.3) again. This produces

$$\text{ln}({\pi}_{j}+\theta )\ge \frac{{\pi}_{j}^{n}}{{\pi}_{j}^{n}+{\theta}^{n}}\text{ln}\left(\frac{{\pi}_{j}^{n}+{\theta}^{n}}{{\pi}_{j}^{n}}{\pi}_{j}\right)+\frac{{\theta}^{n}}{{\pi}_{j}^{n}+{\theta}^{n}}\text{ln}\left(\frac{{\pi}_{j}^{n}+{\theta}^{n}}{{\theta}^{n}}\theta \right),$$

and up to a constant the surrogate function takes the form

$$\sum _{i}{\displaystyle \sum _{j}{w}_{\mathit{\text{ij}}}^{n}\left[{\displaystyle \sum _{k}{x}_{\mathit{\text{ik}}}\phantom{\rule{thinmathspace}{0ex}}\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\pi}_{k}}+\left(1-\frac{{x}_{\mathit{\text{ij}}}{\theta}^{n}}{{\pi}_{j}^{n}+{\theta}^{n}}\right)\phantom{\rule{thinmathspace}{0ex}}\text{ln}\phantom{\rule{thinmathspace}{0ex}}{\pi}_{j}\right]}}+{\displaystyle \sum _{i}{\displaystyle \sum _{j}{w}_{\mathit{\text{ij}}}^{n}\phantom{\rule{thinmathspace}{0ex}}\left[\left({m}_{i}-\frac{{x}_{\mathit{\text{ij}}}{\theta}^{n}}{{\pi}_{j}^{n}+{\theta}^{n}}\right)\phantom{\rule{thinmathspace}{0ex}}\text{ln}(1-\rho )+\frac{{x}_{\mathit{\text{ij}}}{\theta}^{n}}{{\pi}_{j}^{n}+{\theta}^{n}}\text{ln}\phantom{\rule{thinmathspace}{0ex}}\rho \right].}$$

Standard arguments now yield the updates

$$\begin{array}{c}{\pi}_{k}^{n+1}=\left({\displaystyle \sum _{i}{\displaystyle \sum _{j}{w}_{\mathit{\text{ij}}}^{n}{x}_{\mathit{\text{ik}}}}+{\displaystyle \sum _{i}{w}_{\mathit{\text{ik}}}^{n}\left(1-\frac{{x}_{\mathit{\text{ik}}}{\theta}^{n}}{{\pi}_{k}^{n}+{\theta}^{n}}\right)}}\right)/\left({\displaystyle \sum _{l}{\displaystyle \sum _{i}{\displaystyle \sum _{j}{w}_{\mathit{\text{ij}}}^{n}{x}_{\mathit{\text{il}}}}}+{\displaystyle \sum _{l}{\displaystyle \sum _{i}{w}_{\mathit{\text{il}}}^{n}\left(1-\frac{{x}_{\mathit{\text{il}}}{\theta}^{n}}{{\pi}_{l}^{n}+{\theta}^{n}}\right)}}}\right),\hfill \\ \begin{array}{cc}{\rho}^{n+1}=\left({\displaystyle \sum _{i}{\displaystyle \sum _{j}\frac{{w}_{\mathit{\text{ij}}}^{n}{x}_{\mathit{\text{ij}}}{\theta}^{n}}{{\pi}_{j}^{n}+{\theta}^{n}}}}\right)/\left({\displaystyle \sum _{i}{m}_{i}}\right),\hfill & {\theta}^{n+1}=\frac{{\rho}^{n+1}}{1-{\rho}^{n+1}}.\hfill \end{array}\hfill \end{array}$$
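The following sketch (toy data and helper names ours) assembles the weights and both updates into one MM step. It drops the multinomial coefficients, which cancel in the weight ratios, and exploits the fact that the weights sum to 1 over *j*:

```python
import numpy as np

def nm_weights(X, pi, rho):
    """Mixture weights w_ij from the first minorization; multinomial
    coefficients cancel in the ratio and are omitted."""
    base = ((1 - rho) * pi) ** X
    P = base.prod(axis=1, keepdims=True)              # common product over j
    # replace the j-th factor by [(1-rho)pi_j + rho]^x_ij
    bump = (((1 - rho) * pi + rho) / ((1 - rho) * pi)) ** X
    Pi = pi * P * bump                                # Pi_ij up to a constant
    return Pi / Pi.sum(axis=1, keepdims=True)

def nm_update(X, m, pi, rho):
    """One MM step for the Neerchal-Morel parameters (pi, rho)."""
    w = nm_weights(X, pi, rho)
    theta = rho / (1 - rho)
    frac = X * theta / (pi + theta)                   # x_ij * theta/(pi_j+theta)
    # first numerator term simplifies: sum_i sum_j w_ij x_ik = sum_i x_ik
    num = X.sum(axis=0) + (w * (1 - frac)).sum(axis=0)
    pi_new = num / num.sum()
    rho_new = (w * frac).sum() / m.sum()
    return pi_new, rho_new

X = np.array([[3, 1], [2, 2], [4, 0], [1, 3]])        # toy counts
m = X.sum(axis=1)
pi, rho = np.array([0.5, 0.5]), 0.2
for _ in range(100):
    pi, rho = nm_update(X, m, pi, rho)
```

As with the other MM algorithms, the parameter constraints (π on the simplex, ρ in [0, 1]) are maintained automatically by the updates.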

Table 2 lists convergence results for this MM algorithm and its SQUAREM accelerations on the previously discussed Haseman and Soares data.

The motivation for the negative-multinomial distribution comes from multinomial sampling with *d* + 1 categories assigned probabilities π_{1},…, π_{d+1}. Sampling continues until category *d* + 1 accumulates β outcomes. At that moment we count the number of outcomes *x _{i}* falling in category *i* for each 1 ≤ *i* ≤ *d*. The counts then follow the density

$$\begin{aligned}
h(\mathbf{x}\mid\beta,\pi)&=\binom{\beta+|\mathbf{x}|-1}{|\mathbf{x}|}\binom{|\mathbf{x}|}{\mathbf{x}}\prod_{i=1}^{d}\pi_{i}^{x_{i}}\,\pi_{d+1}^{\beta}\\
&=\frac{\beta(\beta+1)\cdots(\beta+|\mathbf{x}|-1)}{x_{1}!\cdots x_{d}!}\prod_{i=1}^{d}\pi_{i}^{x_{i}}\,\pi_{d+1}^{\beta}.
\end{aligned}$$

(3.12)

This formula continues to make sense even if the positive parameter β is not an integer. For arbitrary β > 0, the most straightforward way to construct the negative-multinomial distribution is to run *d* independent Poisson processes with intensities π_{1},…, π_{d}. Wait a gamma distributed length of time with shape parameter β and intensity parameter π_{d+1}. At the expiration of this waiting time, count the number of random events *X _{i}* of each type *i*; the resulting counts again follow the density (3.12).

The Poisson process perspective readily yields the moments

$$\begin{aligned}
\mathbf{E}(X_{i})&=\beta\frac{\pi_{i}}{\pi_{d+1}},\\
\mathbf{Var}(X_{i})&=\beta\frac{\pi_{i}}{\pi_{d+1}}\left(1+\frac{\pi_{i}}{\pi_{d+1}}\right),\\
\mathbf{Cov}(X_{i},X_{j})&=\beta\frac{\pi_{i}}{\pi_{d+1}}\frac{\pi_{j}}{\pi_{d+1}},\qquad i\neq j.
\end{aligned}$$

(3.13)

Compared to a Poisson distributed random variable with the same mean, the component *X _{i}* is overdispersed. Also in contrast to the multinomial and Dirichlet-multinomial distributions, the counts from a negative-multinomial are positively correlated. Negative-multinomial sampling is therefore appealing in many applications.

Let **x**_{1},…, **x**_{t} be a random sample from the negative-multinomial distribution with *m _{i}* = |**x**_{i}|. In maximizing the log-likelihood

$$\begin{aligned}
L(\beta,\pi)&=\sum_{k} r_{k}\ln(\beta+k)+\sum_{j=1}^{d} x_{\cdot j}\ln \pi_{j}+t\beta\ln \pi_{d+1}-\sum_{i}\sum_{j}\ln x_{ij}!,\\
r_{k}&=\sum_{i}1_{\{m_{i}\ge k+1\}},\qquad x_{\cdot j}=\sum_{i}x_{ij},
\end{aligned}$$

we must deal with the terms ln(β + *k*). Fortunately, the minorization (2.4) implies

$$\ln(\beta+k)\ge \frac{\beta^{n}}{\beta^{n}+k}\ln\left(\frac{\beta^{n}+k}{\beta^{n}}\,\beta\right)+\frac{k}{\beta^{n}+k}\ln\left(\frac{\beta^{n}+k}{k}\,k\right),$$

leading to the surrogate function

$$g(\beta,\pi\mid\beta^{n},\pi^{n})=\sum_{k} r_{k}\frac{\beta^{n}}{\beta^{n}+k}\ln\beta+\sum_{j=1}^{d} x_{\cdot j}\ln\pi_{j}+t\beta\ln\pi_{d+1}$$

up to an irrelevant constant. In view of the constraint $\pi_{d+1}=1-\sum_{j=1}^{d}\pi_{j}$, the stationarity conditions for a maximum of the surrogate reduce to

$$0=\frac{1}{\beta}\sum_{k} r_{k}\frac{\beta^{n}}{\beta^{n}+k}+t\ln\pi_{d+1},\qquad 0=\frac{x_{\cdot j}}{\pi_{j}}-\frac{t\beta}{\pi_{d+1}},\quad 1\le j\le d.$$

(3.14)

Unfortunately, it is impossible to solve this system of equations analytically. There are two resolutions to the dilemma. One is block relaxation (de Leeuw 1994) alternating the updates

$$\beta^{n+1}=-\left(\sum_{k} r_{k}\frac{\beta^{n}}{\beta^{n}+k}\right)\Big/\left(t\ln\pi_{d+1}^{n}\right)$$

and

$$\pi_{d+1}^{n+1}=\frac{t\beta^{n+1}}{\sum_{k=1}^{d}x_{\cdot k}+t\beta^{n+1}},\qquad \pi_{j}^{n+1}=\frac{x_{\cdot j}}{\sum_{k=1}^{d}x_{\cdot k}+t\beta^{n+1}},\quad 1\le j\le d.$$

This strategy enjoys the ascent property of all MM algorithms.
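This alternating scheme is simple to implement. The sketch below is our Python translation (the paper's supplementary code is in Matlab); the data matrix, starting values, and function name are illustrative. The log-likelihood is tracked, up to an additive constant, to exhibit the ascent property numerically. The column sums of the count matrix are assumed positive.

```python
import numpy as np

def negmult_block_mm(x, beta=1.0, pi_d1=0.5, n_iter=100):
    """Block-relaxation MM for the negative-multinomial: alternate the
    beta update with the closed-form pi updates.  Returns the final
    parameters and the log-likelihood trace (up to a constant)."""
    x = np.asarray(x, dtype=float)
    t = x.shape[0]
    m = x.sum(axis=1)                        # m_i = |x_i|
    xcol = x.sum(axis=0)                     # column sums x_{.j} (assumed > 0)
    ks = np.arange(int(m.max()))             # k = 0, 1, ..., max(m_i) - 1
    r = np.array([(m >= k + 1).sum() for k in ks])   # r_k
    trace = []
    for _ in range(n_iter):
        c = np.sum(r * beta / (beta + ks))   # sum_k r_k beta^n / (beta^n + k)
        beta = -c / (t * np.log(pi_d1))      # beta update (log pi_{d+1} < 0)
        denom = xcol.sum() + t * beta
        pi = xcol / denom                    # pi_j updates
        pi_d1 = t * beta / denom             # pi_{d+1} update
        trace.append(np.sum(r * np.log(beta + ks))
                     + np.sum(xcol * np.log(pi))
                     + t * beta * np.log(pi_d1))
    return beta, pi, pi_d1, trace
```

On overdispersed data the trace is non-decreasing, as the MM theory guarantees.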

The other possibility is to solve the stationarity equations numerically. It is clear that the system of equations (3.14) reduces to the single equation

$$0=\frac{1}{\beta}\sum_{k} r_{k}\frac{\beta^{n}}{\beta^{n}+k}+t\ln\left(\frac{t\beta}{\sum_{k=1}^{d}x_{\cdot k}+t\beta}\right)$$

for β. Equivalently, if we let

$$\alpha=\beta^{-1},\qquad \overline{m}=\frac{1}{t}\sum_{i}\sum_{j}x_{ij},\qquad c^{n}=\sum_{k} r_{k}\frac{\beta^{n}}{\beta^{n}+k},$$

then we must find a positive root of the function $f(\alpha)=\alpha c^{n}-t\ln(1+\overline{m}\,\alpha)$.
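Under the definitions above, the substitution α = 1/β and $\sum_{k=1}^{d}x_{\cdot k}=t\overline{m}$ turn the stationarity equation into finding the positive root of $f(\alpha)=\alpha c^{n}-t\ln(1+\overline{m}\alpha)$. Since $f(0)=0$, $f'(0)=c^{n}-t\overline{m}<0$ (because $c^{n}<\sum_{k}r_{k}=t\overline{m}$), and $f(\alpha)\to\infty$, a unique positive root exists and plain bisection suffices. A sketch (function and variable names ours):

```python
import math

def solve_alpha(c_n, m_bar, t):
    """Bisection for the positive root of
    f(alpha) = alpha*c_n - t*log(1 + m_bar*alpha).
    Assumes 0 < c_n < t*m_bar, which always holds here."""
    f = lambda a: a * c_n - t * math.log1p(m_bar * a)
    lo, hi = 1e-12, 1.0
    while f(hi) <= 0.0:          # expand the bracket until f changes sign
        hi *= 2.0
    for _ in range(200):         # bisect [lo, hi] down to machine precision
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The next iterate is then recovered as β^{n+1} = 1/α.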

To find initial values, we again resort to the method of moments. Based on the moments (3.13), the mean and variance of $|\mathbf{X}|=\sum_{j}X_{j}$ are

$$\mathbf{E}(|\mathbf{X}|)=\frac{\beta(1-\pi_{d+1})}{\pi_{d+1}},\qquad \mathbf{Var}(|\mathbf{X}|)=\frac{\beta(1-\pi_{d+1})}{\pi_{d+1}^{2}}.$$

These suggest that we take

$$\beta^{0}=\frac{\overline{x}^{2}}{s^{2}-\overline{x}},\qquad \pi_{d+1}^{0}=\frac{\overline{x}}{s^{2}},\qquad \pi_{j}^{0}=\frac{\pi_{d+1}^{0}}{\beta^{0}}\frac{x_{\cdot j}}{t},\quad 1\le j\le d,$$

where

$$\overline{x}=\frac{1}{t}\sum_{i=1}^{t}m_{i},\qquad s^{2}=\frac{1}{t-1}\sum_{i=1}^{t}(m_{i}-\overline{x})^{2}.$$

When the data are underdispersed ($s^{2}<\overline{x}$), our proposed initial values are not meaningful, but a negative-multinomial model is a poor choice anyway.
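These moment-matching starting values are easy to script. The helper below (ours, in Python) returns `None` in the underdispersed case; note that the proposed probabilities automatically satisfy $\sum_{j}\pi_{j}^{0}=1-\pi_{d+1}^{0}$.

```python
import numpy as np

def negmult_init(x):
    """Method-of-moments starting values for the negative-multinomial.
    x is a t-by-d count matrix; returns (beta0, pi0_{d+1}, pi0_j) or
    None when the row totals are underdispersed (s^2 <= xbar)."""
    x = np.asarray(x, dtype=float)
    t = x.shape[0]
    m = x.sum(axis=1)            # row totals m_i = |x_i|
    xbar = m.mean()
    s2 = m.var(ddof=1)           # sample variance of the m_i
    if s2 <= xbar:
        return None              # moment estimates not meaningful
    beta0 = xbar ** 2 / (s2 - xbar)
    pi_d1 = xbar / s2
    pi_j = (pi_d1 / beta0) * x.sum(axis=0) / t
    return beta0, pi_d1, pi_j
```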

A partition of a positive integer *m* into *k* parts is a vector **a** = (*a*_{1},…, *a _{m}*) of non-negative integers such that $\sum_{j=1}^{m}j\,a_{j}=m$ and $|\mathbf{a}|=\sum_{j=1}^{m}a_{j}=k$, where *a _{j}* counts the parts of size *j*. Pitman's (1995) distribution on random partitions

$$\text{Pr}(\mathbf{A}=\mathbf{a}\mid m,\alpha,\theta)=\frac{m!\prod_{i=1}^{|\mathbf{a}|-1}(\theta+i\alpha)}{(\theta+1)\cdots(\theta+m-1)}\times\prod_{j=1}^{m}\left[\frac{(1-\alpha)\cdots(1-\alpha+j-2)}{j!}\right]^{a_{j}}\frac{1}{a_{j}!}$$

involves two parameters 0 ≤ α < 1 and θ > −α. Ewens’s distribution corresponds to the choice α = 0. We will restrict θ to be positive.

To estimate parameters given *u* independent partitions **a**_{1},…, **a**_{u} from Pitman’s distribution, we use the inequalities (2.3) and (2.4) to derive the minorizations

$$\begin{aligned}
\ln(\theta+i\alpha)&\ge \frac{\theta^{n}}{\theta^{n}+i\alpha^{n}}\ln\theta+\frac{i\alpha^{n}}{\theta^{n}+i\alpha^{n}}\ln\alpha+c,\\
\ln(1-\alpha+i)&\ge \frac{1-\alpha^{n}}{1-\alpha^{n}+i}\ln(1-\alpha)+c,\\
-\ln(\theta+i)&\ge -\frac{1}{\theta^{n}+i}\,\theta+c,
\end{aligned}$$

where *c* is a different irrelevant constant in each case. Assuming **a**_{j} is a partition of the integer *m _{j}*, it follows that the log-likelihood is minorized by

$$\sum_{i}\frac{r_{i}\theta^{n}}{\theta^{n}+i\alpha^{n}}\ln\theta+\sum_{i}\frac{r_{i}i\alpha^{n}}{\theta^{n}+i\alpha^{n}}\ln\alpha+\sum_{i}\frac{s_{i}(1-\alpha^{n})}{1-\alpha^{n}+i}\ln(1-\alpha)-\sum_{i}\frac{t_{i}}{\theta^{n}+i}\,\theta+c,$$

where

$$r_{i}=\sum_{j}1_{\{|\mathbf{a}_{j}|\ge i+1\}},\qquad s_{i}=\sum_{j}\sum_{k\ge i+2}a_{jk},\qquad t_{i}=\sum_{j}1_{\{m_{j}\ge i+1\}}.$$

Standard arguments now yield the simple updates

$$\begin{aligned}
\alpha^{n+1}&=\left(\sum_{i}\frac{r_{i}i\alpha^{n}}{\theta^{n}+i\alpha^{n}}\right)\Big/\left(\sum_{i}\frac{r_{i}i\alpha^{n}}{\theta^{n}+i\alpha^{n}}+\sum_{i}\frac{s_{i}(1-\alpha^{n})}{1-\alpha^{n}+i}\right),\\
\theta^{n+1}&=\left(\sum_{i}\frac{r_{i}\theta^{n}}{\theta^{n}+i\alpha^{n}}\right)\Big/\left(\sum_{i}\frac{t_{i}}{\theta^{n}+i}\right).
\end{aligned}$$

If we set α^{0} = 0, then all subsequent iterates satisfy α^{n} = 0, and we recover the MM updates for Ewens’s distribution. Despite the availability of the moments of the parts *A _{i}* (Charalambides 2007), it is not clear how to initialize α and θ. Unfortunately, the alternative suggestion of Nobuaki (2001) does not guarantee that the initial values satisfy the constraints α ∈ [0, 1) and θ > 0.
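The updates for Pitman's distribution translate directly into code. In the sketch below (ours, in Python), each sampled partition is supplied as a list of part sizes, from which the counts *r _{i}*, *s _{i}*, and *t _{i}* are tabulated; the log-likelihood (up to a constant) is tracked to illustrate the ascent property.

```python
import numpy as np

def pitman_mm(partitions, alpha=0.5, theta=1.0, n_iter=200):
    """MM iterations for Pitman's two-parameter distribution.
    Each element of `partitions` lists the part sizes of one sampled
    partition, e.g. [3, 1, 1] partitions m = 5 into |a| = 3 parts."""
    ms = [sum(p) for p in partitions]        # the integers m_j
    sizes = [len(p) for p in partitions]     # |a_j| = number of parts
    max_m = max(ms)
    i_r = np.arange(1, max_m)                # index range for r_i and t_i
    i_s = np.arange(0, max_m - 1)            # index range for s_i
    r = np.array([sum(k >= i + 1 for k in sizes) for i in i_r])
    t_ = np.array([sum(m >= i + 1 for m in ms) for i in i_r])
    s = np.array([sum(sum(1 for c in p if c >= i + 2) for p in partitions)
                  for i in i_s])
    trace = []
    for _ in range(n_iter):
        A = np.sum(r * i_r * alpha / (theta + i_r * alpha))
        B = np.sum(s * (1 - alpha) / (1 - alpha + i_s))
        C = np.sum(r * theta / (theta + i_r * alpha))
        D = np.sum(t_ / (theta + i_r))
        alpha, theta = A / (A + B), C / D    # simultaneous MM updates
        trace.append(np.sum(r * np.log(theta + i_r * alpha))
                     + np.sum(s * np.log(1 - alpha + i_s))
                     - np.sum(t_ * np.log(theta + i_r)))
    return alpha, theta, trace
```

Because the surrogate separates in α and θ, the two parameters can be updated simultaneously while preserving the ascent property.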

In this section we briefly indicate how the MM perspective sheds fresh light on EM algorithms for zero-truncated and zero-inflated data. Once again mastery of a handful of inequalities rather than computation of conditional expectations drives the derivations.

In many discrete probability models, only data with positive counts are observed. Counts that are 0 are missing. If *f* (*x*|θ) represents the density of the complete data, then the density of a random sample *x*_{1},…, *x _{t}* of zero-truncated data amounts to

$$h(x\mid\theta)=\prod_{i=1}^{t}\frac{f(x_{i}\mid\theta)}{1-f(0\mid\theta)}.$$

Inequality (2.5) immediately implies the minorization

$$\ln h(x\mid\theta)\ge\sum_{i=1}^{t}\left[\ln f(x_{i}\mid\theta)+\frac{f(0\mid\theta^{n})}{1-f(0\mid\theta^{n})}\ln f(0\mid\theta)\right]+c,$$

where *c* is an irrelevant constant. In many models, maximization of this surrogate function is straightforward.

For instance, with zero-truncated data from the binomial, Poisson, and negative-binomial distributions, the MM updates reduce to

$$\begin{aligned}
p^{n+1}&=\left(\sum_{i}x_{i}\right)\Big/\left(\sum_{i}\frac{m_{i}}{1-(1-p^{n})^{m_{i}}}\right),\qquad \lambda^{n+1}=\left(\sum_{i}x_{i}\right)\Big/\left(\sum_{i}\frac{1}{1-e^{-\lambda^{n}}}\right),\\
p^{n+1}&=\left(\sum_{i}\frac{m_{i}}{1-(p^{n})^{m_{i}}}\right)\Big/\left(\sum_{i}\left[x_{i}+\frac{m_{i}}{1-(p^{n})^{m_{i}}}\right]\right).
\end{aligned}$$

For observation *i* of the binomial model, there are *x _{i}* successes out of *m _{i}* trials.
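For the zero-truncated Poisson, the update above simplifies to the fixed-point iteration $\lambda^{n+1}=\overline{x}(1-e^{-\lambda^{n}})$, whose limit satisfies the maximum likelihood equation $\lambda/(1-e^{-\lambda})=\overline{x}$. A sketch (ours, in Python); a positive solution requires $\overline{x}>1$, since the truncated Poisson mean always exceeds 1.

```python
import math

def zt_poisson_mle(xs, lam=1.0, n_iter=500):
    """MM iteration for the zero-truncated Poisson mean.
    xs holds the observed positive counts; lam is the starting value.
    Assumes the sample mean of xs exceeds 1."""
    xbar = sum(xs) / len(xs)
    for _ in range(n_iter):
        lam = xbar * (1.0 - math.exp(-lam))   # lambda^{n+1}
    return lam
```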

More complicated models can be handled in similar fashion. The key insight in each case is to augment every ordinary observation *x _{i}* > 0 by a total of $f(0\mid\theta^{n})/[1-f(0\mid\theta^{n})]$ pseudo-observations of 0 at iteration *n*. For instance, with zero-truncated data from the beta-binomial distribution, the MM updates involve the counts

$$\begin{aligned}
s_{1k}&=\sum_{i}1_{\{x_{i1}\ge k+1\}},\qquad s_{2k}=\sum_{i}\left[1_{\{x_{i2}\ge k+1\}}+\frac{f(0\mid\pi^{n},\theta^{n})}{1-f(0\mid\pi^{n},\theta^{n})}\right],\\
r_{k}&=\sum_{i}\left[1+\frac{f(0\mid\pi^{n},\theta^{n})}{1-f(0\mid\pi^{n},\theta^{n})}\right]1_{\{m_{i}\ge k+1\}},
\end{aligned}$$

where

$$f(0\mid\pi^{n},\theta^{n})=\frac{\pi_{2}^{n}(\pi_{2}^{n}+\theta^{n})\cdots[\pi_{2}^{n}+(m_{i}-1)\theta^{n}]}{(1+\theta^{n})\cdots[1+(m_{i}-1)\theta^{n}]}.$$

Here category 1 represents success and category 2 failure. If we start with θ^{0} = 0, then we recover the updates for the zero-truncated binomial distribution.

Zero-inflated data are equally easy to handle. The density function is now

$$h(x\mid\theta,\pi)=\prod_{i=1}^{t}[(1-\pi)+\pi f(0\mid\theta)]^{1_{\{x_{i}=0\}}}[\pi f(x_{i}\mid\theta)]^{1_{\{x_{i}>0\}}}.$$

Inequality (2.3) entails the minorization

$$\begin{aligned}
\ln h(x\mid\theta,\pi)&\ge\sum_{i=1}^{t}1_{\{x_{i}=0\}}\left\{z^{n}\ln(1-\pi)+(1-z^{n})[\ln\pi+\ln f(0\mid\theta)]\right\}+\sum_{i=1}^{t}1_{\{x_{i}>0\}}[\ln\pi+\ln f(x_{i}\mid\theta)],\\
z^{n}&=\frac{1-\pi^{n}}{1-\pi^{n}+\pi^{n}f(0\mid\theta^{n})}.
\end{aligned}$$

The MM update of the inflation-admixture parameter clearly is

$$\pi^{n+1}=\frac{1}{t}\sum_{i=1}^{t}\left[1_{\{x_{i}>0\}}+1_{\{x_{i}=0\}}(1-z^{n})\right].$$

As a typical example, consider estimation with the zero-inflated Poisson (Patil and Shirke 2007). The mean λ of the Poisson component is updated by

$$\lambda^{n+1}=\frac{\sum_{i}x_{i}}{\sum_{i}[(1-z^{n})1_{\{x_{i}=0\}}+1_{\{x_{i}>0\}}]}.$$

In other words, every 0 observation is discounted by the amount *z ^{n}* at iteration *n*.
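The full zero-inflated Poisson fit alternates the *z ^{n}*, π, and λ updates. A sketch (ours, in Python); note that the two closed-form updates enforce the invariant $\pi^{n+1}\lambda^{n+1}=\overline{x}$ at every iteration, a handy sanity check.

```python
import math

def zip_mm(xs, pi=0.5, lam=1.0, n_iter=300):
    """MM/EM iterations for the zero-inflated Poisson.  Here pi is the
    weight of the Poisson component and 1 - pi the extra mass at zero;
    xs are the observed counts (some of them zero)."""
    t = len(xs)
    n0 = sum(1 for x in xs if x == 0)        # number of zeros
    sx = sum(xs)
    for _ in range(n_iter):
        f0 = math.exp(-lam)                  # Poisson probability of 0
        z = (1 - pi) / (1 - pi + pi * f0)    # z^n: posterior weight of inflation
        w = (t - n0) + n0 * (1 - z)          # effective sample size
        pi = w / t                           # pi update
        lam = sx / w                         # lambda update
    return pi, lam
```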

As a numerical experiment, we fit the Dirichlet-multinomial (in two parameterizations) and the Neerchal–Morel distributions to the 3823 training digits of the handwritten digit data from the UCI machine learning repository (Asuncion and Newman 2007). Each normalized 32 × 32 bitmap is divided into 64 blocks of size 4 × 4, and the black pixels are counted in each block. This generates a 64-dimensional count vector for each bitmap. Bouguila (2008) successfully fit mixtures of Connor–Mosimann distributions to the training data and used the estimated models to cluster the test data. Based on the minorization (2.3), it is straightforward to extend our MM algorithms to fit finite mixture models using any of the previously encountered multivariate discrete distributions.
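The bitmap preprocessing just described reduces to array reshaping. A sketch (ours, in Python) of the 4 × 4 block counting:

```python
import numpy as np

def block_counts(bitmap):
    """Collapse a 32 x 32 binary bitmap to the 64-vector of black-pixel
    counts over its 8 x 8 grid of 4 x 4 blocks."""
    # axes: (block row, row within block, block column, column within block)
    b = np.asarray(bitmap).reshape(8, 4, 8, 4)
    return b.sum(axis=(1, 3)).reshape(64)    # each count lies in 0..16
```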

Table 3 lists the final log-likelihoods, numbers of iterations, and running times of the different algorithms tested. The MM and accelerated MM algorithms were coded in plain Matlab script language. Newton’s method was implemented using the fmincon function in the Matlab Optimization Toolbox under the interior-point option with user-supplied analytical gradient and Hessian. All iterations started from the initial points θ^{0} = 1 and ${\pi}^{0}={\alpha}^{0}=(\frac{1}{64},\dots ,\frac{1}{64})$. The stopping criterion for Newton’s method was tuned to achieve precision comparable to the stopping criterion (2.6) for the MM algorithms. Running times in seconds were recorded from a laptop computer.

Numerical experiment. Row 1: MM; Row 2: SqMPE1 MM; Row 3: SqRRE1 MM; Row 4: Newton’s method using the fmincon function in the Matlab Optimization Toolbox.

Inspection of Table 3 demonstrates that the MM algorithms outperform Newton’s method and that acceleration is often very beneficial. The cost of evaluating and inverting the observed information matrices of the Neerchal–Morel model significantly slows Newton’s method even in these problems with only 64 parameters. The observed information matrix of the Dirichlet-multinomial distribution possesses a special structure (diagonal plus rank-1 perturbation) that makes matrix inversion far easier. Table 3 does not show the human effort in devising, programming, and debugging the various algorithms. For Newton’s method, derivation and programming took in excess of one day. Formulas for the score and observed information of the Dirichlet-multinomial and Neerchal–Morel distributions are omitted for the sake of brevity. Fisher’s scoring algorithm was not implemented because it is even more cumbersome than Newton’s method (Neerchal and Morel 2005).

This numerical comparison is merely for illustrative purposes. Numerical analysts have developed quasi-Newton algorithms to mend the defects of Newton’s method. The limited-memory BFGS (LBFGS) algorithm (Nocedal and Wright 2006) is especially pertinent to high-dimensional problems. A systematic comparison of the two methods is worth pursuing.

In designing algorithms for maximum likelihood estimation, Newton’s method and Fisher scoring come immediately to mind. In the last generation, statisticians have added the EM principle. These are good mental reflexes, but the broader MM principle also deserves serious consideration. In many problems, the EM and MM perspectives lead to the same algorithm. In other situations such as image reconstruction in transmission tomography, it is possible to construct different EM and MM algorithms for the same purpose (Lange 2004). One of the most appealing features of the EM perspective is that it provides a statistical interpretation of algorithm intermediates. Although it is a matter of taste and experience whether inequalities or missing data offer an easier path to algorithm development, the fact that there are two routes adds to the possibilities for new algorithms.

One can argue that applications of minorizations (2.3) and (2.5) are just disguised EM algorithms. This objection misses the point in three ways. First, it does not suggest missing data structures explaining the minorization (2.4) and other less well-known minorizations. Second, it fails to weigh the difficulties of invoking simple inequalities versus calculating conditional expectations. When the creation of an appropriate surrogate function requires several minorizations, the corresponding conditional expectations become harder to carry out. For example, although the EM principle dictates adding pseudo-observations for zero-truncated data, it is easy to lose sight of this simple interpretation in complicated examples such as the beta-binomial distribution. The genetic segregation analysis example appearing in chapter 2 of the book by Lange (2002) falls into the same category. Third, it fails to acknowledge the conceptual clarity of the MM principle, which shifts focus away from the probability spaces connected with missing data to the simple act of minorization. For instance, when one undertakes maximum a posteriori estimation, should the E step of the EM algorithm take into account the prior?

Some EM and MM algorithms are notoriously slow to converge. As we noted earlier, slow convergence is partially offset by the simplicity of each iteration. There is a growing body of techniques for accelerating MM algorithms (Jamshidian and Jennrich 1995; Lange 1995a; Jamshidian and Jennrich 1997; Varadhan and Roland 2008). These techniques often lead to a ten-fold or even a hundred-fold reduction in the number of iterations. The various examples appearing in this article are typical in this regard. On problems with boundaries or nondifferentiable objective functions, acceleration may be less helpful.

Our negative-multinomial example highlights two useful tactics for overcoming complications in solving the maximization step of the EM and MM algorithms. It is a mistake to think of the various optimization algorithms in isolation. Often block relaxation (de Leeuw 1994) and Newton’s method can be combined creatively with the MM principle. Systematic application of Newton’s method in solving the maximization step of the MM algorithm is formalized in the MM gradient algorithm (Lange 1995b).

Asymptotic standard errors of parameter estimates are a natural byproduct of Newton’s method and scoring. With a modicum of additional effort, the EM and MM algorithms also deliver asymptotic standard errors (Meng and Rubin 1991; Hunter and Lange 2004). Virtually all optimization algorithms are prone to converge to inferior modes. For this reason, we have emphasized finding reasonable initial values. The overlooked article of Ueda and Nakano (1998) suggested an annealing approach to maximization with mixture models. Here the idea is to flatten the likelihood surface and eliminate all but the dominant mode. As the iterations proceed, the flat surface gradually warps into the true bumpy surface. Our recent work (Zhou and Lange 2010) extends this idea to many other EM and MM algorithms. A similar idea, called graduated non-convexity (GNC), appears in the computer vision and signal processing literature (Blake and Zisserman 1987). In the absence of a good annealing procedure, one can fall back on starting an optimization algorithm from multiple random points, but this inevitably increases the computational load. The reassurance that a log-likelihood is concave is always welcome.

Readers may want to try their hands at devising their own MM algorithms. For instance, the Dirichlet-negative-multinomial distribution, the bivariate Poisson (Johnson, Kotz, and Balakrishnan 1997), and truncated multivariate discrete distributions yield readily to the techniques described. The performance of the MM algorithm on these problems is similar to that in our fully developed examples. Of course, many objective functions are very complicated, and devising a good MM algorithm is a challenge. The greatest payoffs are apt to be on high-dimensional problems. For simplicity of exposition, we have not tackled any extremely high-dimensional problems, but these certainly exist (Sabatti and Lange 2002; Ayers and Lange 2008; Lange and Wu 2008). In any event, most mathematicians and statisticians keep a few tricks up their sleeves. The MM principle belongs there, waiting for the right problems to come along.

The authors thank the editors and referees for their many valuable comments.

**SUPPLEMENTAL MATERIALS**

**Datasets and Matlab codes:** The supplementary material (a single zip package) contains all datasets appearing here and the Matlab codes generating our numerical results and graphs. The readme.txt file describes the contents of each file in the package. (supp_material.zip)

Hua Zhou, Post-Doctoral Fellow, Department of Human Genetics, University of California, Los Angeles, CA 90095-7088 (Email: huazhou@ucla.edu).

Kenneth Lange, Professor, Departments of Biomathematics, Human Genetics, and Statistics, University of California, Los Angeles, CA 90095-7088.

- Asuncion A, Newman DJ. UCI Machine Learning Repository. 2007. Available at http://www.ics.uci.edu/~mlearn/Repository.html.
- Ayers KL, Lange K. Penalized Estimation of Haplotype Frequencies. Bioinformatics. 2008;24:1596–1602. [PubMed]
- Bailey NTJ. The Mathematical Theory of Epidemics. London: Charles Griffin & Company; 1957.
- Blake A, Zisserman A. Visual Reconstruction. Cambridge, MA: MIT Press; 1987.
- Bouguila N. Clustering of Count Data Using Generalized Dirichlet Multinomial Distributions. IEEE Transactions on Knowledge and Data Engineering. 2008;20(4):462–474.
- Charalambides CA. Distributions of Random Partitions and Their Applications. Methodology and Computing in Applied Probability. 2007;9:163–193.
- Connor RJ, Mosimann JE. Concepts of Independence for Proportions With a Generalization of the Dirichlet Distribution. Journal of the American Statistical Association. 1969;64:194–206.
- Dempster AP, Laird NM, Rubin DB. Maximum Likelihood From Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Ser. B. 1977;39(1):1–38. (with discussion)
- de Leeuw J. Block Relaxation Algorithms in Statistics. In: Bock HH, Lenski W, Richter MM, editors. Information Systems and Data Analysis. Berlin: Springer-Verlag; 1994.
- Ewens WJ. Mathematical Population Genetics. 2nd ed. New York: Springer-Verlag; 2004.
- Griffiths DA. Maximum Likelihood Estimation for the Beta-Binomial Distribution and an Application to the Household Distribution of the Total Number of Cases of a Disease. Biometrics. 1973;29:637–648. [PubMed]
- Groenen PJF. The Majorization Approach to Multidimensional Scaling: Some Problems and Extensions. Leiden: The Netherlands: DSWO Press; 1993.
- Haldane JBS. The Fitting of Binomial Distributions. Annals of Eugenics. 1941;11:179–181.
- Haseman JK, Soares ER. The Distribution of Fetal Death in Control Mice and Its Implications on Statistical Tests for Dominant Lethal Effects. Mutation Research/Fundamental and Molecular Mechanisms of Mutagenesis. 1976;41:277–288. [PubMed]
- Heiser WJ. Convergent Computing by Iterative Majorization: Theory and Applications in Multidimensional Data Analysis. In: Krzanowski WJ, editor. Recent Advances in Descriptive Multivariate Analysis. Oxford: Clarendon Press; 1995. pp. 157–189.
- Hunter DR, Lange K. A Tutorial on MM Algorithms. The American Statistician. 2004;58:30–37.
- Jamshidian M, Jennrich RI. Acceleration of the EM Algorithm by Using Quasi-Newton Methods. Journal of the Royal Statistical Society, Ser. B. 1995;59:569–587.
- Jamshidian M, Jennrich RI. Quasi-Newton Acceleration of the EM Algorithm. Journal of the Royal Statistical Society, Ser. B. 1997;59:569–587.
- Johnson NL, Kotz S, Balakrishnan N. Discrete Multivariate Distributions. New York: Wiley; 1997.
- Lange K. A Quasi-Newton Acceleration of the EM Algorithm. Statistica Sinica. 1995a;5:1–18.
- Lange K. A Gradient Algorithm Locally Equivalent to the EM Algorithm. Journal of the Royal Statistical Society, Ser. B. 1995b;57(2):425–437.
- Lange K. Mathematical and Statistical Methods for Genetic Analysis. 2nd ed. New York: Springer-Verlag; 2002.
- Lange K. Optimization. New York: Springer-Verlag; 2004.
- Lange K, Wu TT. An MM Algorithm for Multicategory Vertex Discriminant Analysis. Journal of Computational and Graphical Statistics. 2008;17:527–544.
- Meng XL, Rubin DB. Using EM to Obtain Asymptotic Variance–Covariance Matrices: The SEM Algorithm. Journal of the American Statistical Association. 1991;86:899–909.
- Minka TP. Estimating a Dirichlet Distribution. Technical report, Microsoft. 2003.
- Neerchal NK, Morel JG. Large Cluster Results for Two Parametric Multinomial Extra Variation Models. Journal of the American Statistical Association. 1998;93(443):1078–1087.
- Neerchal NK, Morel JG. An Improved Method for the Computation of Maximum Likelihood Estimates for Multinomial Overdispersion Models. Computational Statistics & Data Analysis. 2005;49(1):33–43.
- Nobuaki H. Applying Pitman’s Sampling Formula to Microdata Disclosure Risk Assessment. Journal of Official Statistics. 2001;17:499–520.
- Nocedal J, Wright S. Numerical Optimization. New York: Springer; 2006.
- Patil MK, Shirke DT. Testing Parameter of the Power Series Distribution of a Zero Inflated Power Series Model. Statistical Methodology. 2007;4:393–406.
- Paul SR, Balasooriya U, Banerjee T. Fisher Information Matrix of the Dirichlet-Multinomial Distribution. Biometrical Journal. 2005;47(2):230–236. [PubMed]
- Pitman J. Exchangeable and Partially Exchangeable Random Partitions. Probability Theory and Related Fields. 1995;102(2):145–158.
- Sabatti C, Lange K. Genomewide Motif Identification Using a Dictionary Model. Proceedings of the IEEE. 2002;90:1803–1810.
- Steele JM. The Cauchy–Schwarz Master Class. MAA Problem Books Series. Washington, DC: Mathematical Association of America; 2004.
- Ueda N, Nakano R. Deterministic Annealing EM Algorithm. Neural Networks. 1998;11:271–282. [PubMed]
- Varadhan R, Roland C. Simple and Globally Convergent Methods for Accelerating the Convergence of Any EM Algorithm. Scandinavian Journal of Statistics. 2008;35:335–353.
- Wu TT, Lange K. The MM Alternative to EM. Statistical Science. 2010 to appear.
- Zhou H, Lange K. On the Bumpy Road to the Dominant Mode. Scandinavian Journal of Statistics. 2010 to appear. [PMC free article] [PubMed]
