


Ann Stat. Author manuscript; available in PMC 2010 September 1.

Published in final edited form as:

Ann Stat. 2010 June 1; 38(3): 1638–1664.

doi: 10.1214/09-AOS743. PMCID: PMC2868388

NIHMSID: NIHMS164691

Richard A. Olshen, Depts. of Health Research and Policy, Electrical Engineering, and Statistics, Stanford, CA 94305-5405, U.S.A.

Richard A. Olshen: olshen@stanford.edu; Bala Rajaratnam: brajarat@stanford.edu


Standard statistical techniques often require transforming data to have mean 0 and standard deviation 1. Typically, this process of “standardization” or “normalization” is applied across subjects when each subject produces a single number. High throughput genomic and financial data often come as rectangular arrays, where each coordinate in one direction concerns subjects, who might have different status (case or control, say); and each coordinate in the other designates “outcome” for a specific feature, for example “gene,” “polymorphic site,” or some aspect of financial profile. It may happen when analyzing data that arrive as a rectangular array that one requires BOTH the subjects and features to be “on the same footing.” Thus, there may be a need to standardize across rows and columns of the rectangular matrix. There arises the question as to how to achieve this double normalization. We propose and investigate the convergence of what seems to us a natural approach to successive normalization, which we learned from colleague Bradley Efron. We also study the implementation of the method on simulated data and on data that arose from scientific experimentation.

This paper is about a method for normalization, or regularization, of large rectangular sets of numbers. In recent years many statistical efforts have been directed towards inference on such rectangular arrays. The exact geometry of the array matters little to the theory that follows. Positive results apply to the situation where there are at least three rows and at least three columns. We explain difficulties that arise when either number is only two. Scenarios to which the methodology studied here applies tend to have many more rows than columns. Data can be from gene expression microarrays, SNP (single nucleotide polymorphism) arrays, protein arrays, or alternatively from large scale problems in imaging. Often there is one column per subject, with rows consisting of real numbers (as in expression) or the numbers 0, 1, 2 (as with SNPs). Subjects from whom data are gathered may be “afflicted” or not, with a condition that, while heritable, is far from Mendelian. A goal is to find rows, or better groups of rows, by which to distinguish afflicted from other subjects. One can be led to testing many statistical hypotheses simultaneously, thereby separating rows into those that are “interesting” for further follow-up and those that seem not to be. Genetic data tend to be analyzed test “gene” (row) by test “gene,” beginning with each being “embedded” in a chip, perhaps a bead. There may follow a subsequent molecule that binds to the embedded “gene”/molecule. A compound that makes use of the binding preferences of nucleotides and to which some sort of “dye” is attached is then “poured.” The strength of binding depends upon the affinity of the “gene” or attached molecule for the compound. Laser light is shone on the object into which the test “gene” has been embedded, and from its bending the amount of bound compound is assessed, from which the amount of the “gene” is inferred. The basic idea is that different afflicted status may lead to different amounts of “gene”.

With the cited formulation and ingenious technology, data may still suffer from problems that have nothing to do with differences between groups of subjects or with differences between “genes” or groups of them. There may be differences in background, by column, or even by row. Perhaps also “primers” (compounds) vary across columns for a given row. For whatever reasons, scales by row or column may vary in ways that do not enable biological understanding. Variability across subjects could be unrelated to afflicted status.

Think now of the common problem of comparing variables that can vary in their affine scales. Because covariances are not scale-free, it can make sense to compare in dimensionless coordinates that are centered at 0, that is, where values of each variable have respective means subtracted off, and are scaled by respective standard deviations. That way, each variable is somehow “on the same footing”.

Standardization, or normalization, studied here is done precisely so that both “subjects” and “genes” are “on the same footing”. We recognize one might require only that “genes” (or some “genes”) be on the same footing, and the same for “subjects.” The successive transformations studied here apply when one lacks *a priori* opinions that might limit goals. Thus, “genes” that result from the standardization we study are transformed to have mean 0 and standard deviation 1 across all subjects, while the same is true for subjects across all “genes”. How to normalize? One approach is to begin with, say, rows, though one could as easily begin with columns. Subtract respective row means and divide by respective standard deviations. Now do the same operation on columns, then on rows, and so on. Remarkably, this process tends to converge, even rapidly in terms of numbers of iterations, and to a set of numbers that have the described good limiting properties in terms of means and standard deviations, by row and by column.

In this paper we show by examples how the process works and demonstrate for them that indeed it converges. We also include rigorous mathematical arguments as to why convergence tends to occur. Readers will see that the process and perhaps especially the mathematics that underlies it are not as simple as we had hoped they would be. This paper is only about convergence, which is demonstrated to be exponentially fast (or faster) for examples. The mathematics here does not apply directly to “rates”. The Hausdorff dimension of the limit set seems easy enough to study. Summaries will be reported elsewhere.

We introduce a motivating example to ground the problem that we address in this paper. Consider a simple 3-by-3 matrix with entries generated from a uniform distribution on [0,1]. We standardize the initial matrix *X*^{(0)} by row and column, first subtracting the row mean from each entry and then dividing each entry in a given row by its row standard deviation. The matrix is then column standardized by subtracting the column mean from each entry and then dividing each entry by the respective column standard deviation. In this section, these four steps of row mean polishing, row standard deviation polishing, column mean polishing and column standard deviation polishing constitute one iteration in the process of attempting to row and column standardize the matrix. After one such iteration, the same process is applied to the resulting matrix *X*^{(1)}, and the process is repeated with the hope that successive renormalization will eventually yield a row and column standardized matrix. Hence these four steps are repeated until “convergence”, which we define as the difference in the Frobenius norm between two consecutive iterations being less than 10^{−8}.
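The four polishing steps and the convergence loop just described can be sketched in a few lines. This is our own minimal illustration, assuming NumPy; the function names are ours, and the standard deviations are population (divide-by-*k*) versions, matching the definitions used in this paper:

```python
import numpy as np

def polish_rows(X):
    """Row mean polish followed by row standard deviation polish."""
    X = X - X.mean(axis=1, keepdims=True)
    return X / X.std(axis=1, keepdims=True)  # population (1/k) standard deviation

def polish_cols(X):
    """Column mean polish followed by column standard deviation polish."""
    X = X - X.mean(axis=0, keepdims=True)
    return X / X.std(axis=0, keepdims=True)

def successive_normalize(X, tol=1e-8, max_iter=100):
    """Alternate row and column standardization until the Frobenius norm
    of the difference between consecutive iterates falls below tol."""
    for iteration in range(1, max_iter + 1):
        X_new = polish_cols(polish_rows(X))
        if np.linalg.norm(X_new - X) < tol:
            return X_new, iteration
        X = X_new
    return X, max_iter
```

On random 3-by-3 starts a loop of this kind typically reports convergence within roughly ten iterations, consistent with the numerical example that follows.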

In order to illustrate this numerically, we start with the following 3-by-3 matrix with independent entries generated from a uniform distribution on [0,1] and repeat the process described above.

$${X}^{(0)}=\left[\begin{array}{ccc}0.1182& 0.7069& 0.4145\\ 0.9884& 0.9995& 0.4648\\ 0.5400& 0.2878& 0.7640\end{array}\right]$$

(1)

The successive normalization algorithm took 9 iterations to converge. The initial matrix, the final solution, and relative (and log relative) difference for the 9 iterations are given below (see also figure 1):

$${X}^{(\mathit{final})}=\left[\begin{array}{ccc}-1.2608& 1.1852& 0.0756\\ 1.1852& 0.0757& -1.2608\\ 0.0756& -1.2608& 1.1852\end{array}\right]$$

(2)

$$\text{Successive}\phantom{\rule{0.16667em}{0ex}}\text{Difference}=\left[\begin{array}{ccc}\text{Iteration}\phantom{\rule{0.16667em}{0ex}}\text{no}.& \text{difference}& log(\text{difference})\\ 1& 8.7908& 2.1737\\ 2& 0.5018& -0.6895\\ 3& 0.0300& -3.5057\\ 4& 0.0019& -6.2862\\ 5& 0.0001& -9.0607\\ 6& 0.0000& -11.8337\\ 7& 0.0000& -14.6064\\ 8& 0.0000& -17.3790\\ 9& 0.0000& -20.1516\end{array}\right]$$

(3)

The whole procedure of 9 iterations takes less than 0.15 seconds on a standard modern laptop computer. We also note that the final solution has effectively 3 distinct entries. When other random starting values are used, we observe that convergence patterns can vary in the sense that convergence may not be monotonic. The plots below (see Figure 2) capture the type of convergence patterns that are observed in our simple 3-by-3 example.

Despite the different convergence patterns that are observed when our successive renormalization is repeated with different starting values, a surprising phenomenon surfaces. The process seems always to converge, and moreover the convergence is very rapid. One is led naturally to ask whether this process will always converge and, if so, under what conditions. These questions lay the foundation for the work in this paper.

We establish the notation that we will use by revisiting a normalization/standardization method that is traditional for multivariate data. If the main goal of a normalization of a rectangular array is achieving zero row and column averages, then a natural approach is to “mean polish” the rows (i.e., subtract the row mean from every entry of the rectangular array), followed by a column “mean polish”. This cycle of successive row and column polishes is repeated until the resulting rectangular array has zero row and column averages. The following theorem proves that this procedure attains a double mean standardized rectangular array in one iteration, where an iteration is defined as one row mean polish followed by one column mean polish.

Given an initial matrix **X**^{(0)}, an iterative procedure to cycle through repetitions of a row mean polish followed by a column mean polish until convergence terminates in one step.

Let *X*^{(0)} be an *n* × *k* matrix and define the following:

$$\begin{array}{l}{\mathit{X}}^{(0)}=\left[{X}_{ij}^{(0)}\right]\\ {\overline{X}}_{i\xb7}^{(0)}=\frac{1}{k}\sum _{j=1}^{k}{X}_{ij}^{(0)}\end{array}$$

Now the first part of the iteration, termed as a “row mean polish” subtracts from each element its respective row mean:

$${\mathit{X}}^{(1)}=\left[{X}_{ij}^{(1)}\right]={X}_{ij}^{(0)}-{\overline{X}}_{i\xb7}^{(0)}$$

The second step of the iteration, termed a “column mean polish” subtracts from each element of the current matrix its respective column mean:

$${\mathit{X}}^{(2)}=\left[{X}_{ij}^{(2)}\right]={X}_{ij}^{(1)}-{\overline{X}}_{\xb7j}^{(1)},$$

where

$${\overline{X}}_{\xb7j}^{(1)}=\frac{1}{n}\sum _{i=1}^{n}{X}_{ij}^{(1)}$$

After the second step of the iteration it is clear that the columns sum to zero; the previous operation enforces this. In order to prove that the iterative procedure terminates at the second part of the iteration it is sufficient to show that the rows of the current iterate sum to zero. Now note that

$$\begin{array}{l}{\mathit{X}}^{(2)}=\left[{X}_{ij}^{(2)}\right]\\ =\left[{X}_{ij}^{(1)}\right]-{\overline{X}}_{\xb7j}^{(1)}\\ =\left({X}_{ij}^{(0)}-{\overline{X}}_{i\xb7}^{(0)}\right)-\left(\frac{1}{n}\sum _{r=1}^{n}{X}_{rj}^{(1)}\right)\\ =\left({X}_{ij}^{(0)}-{\overline{X}}_{i\xb7}^{(0)}\right)-\left(\frac{1}{n}\sum _{r=1}^{n}\left({X}_{rj}^{(0)}-{\overline{X}}_{r\xb7}^{(0)}\right)\right)\end{array}$$

It remains to show that the row sums of the matrix *X*^{(2)}, expressed in terms of the elements of *X*^{(0)}, are zero. So,

$$\begin{array}{l}\sum _{j=1}^{k}{X}_{ij}^{(2)}=\sum _{j=1}^{k}\left({X}_{ij}^{(0)}-{\overline{X}}_{i\xb7}^{(0)}\right)-\sum _{j=1}^{k}\left(\frac{1}{n}\sum _{r=1}^{n}\left({X}_{rj}^{(0)}-{\overline{X}}_{r\xb7}^{(0)}\right)\right)\\ =\left(k{\overline{X}}_{i\xb7}^{(0)}-k{\overline{X}}_{i\xb7}^{(0)}\right)-\frac{1}{n}\sum _{r=1}^{n}\sum _{j=1}^{k}\left({X}_{rj}^{(0)}-{\overline{X}}_{r\xb7}^{(0)}\right)\\ =\left(k{\overline{X}}_{i\xb7}^{(0)}-k{\overline{X}}_{i\xb7}^{(0)}\right)-\frac{1}{n}\sum _{r=1}^{n}\left(k{\overline{X}}_{r\xb7}^{(0)}-k{\overline{X}}_{r\xb7}^{(0)}\right)\\ =0-0\\ =0\end{array}$$
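This one-step termination is easy to confirm numerically. A minimal sketch of our own, assuming NumPy, on an arbitrary rectangular array:

```python
import numpy as np

rng = np.random.default_rng(1)
X0 = rng.uniform(size=(5, 4))             # an arbitrary n-by-k array

X1 = X0 - X0.mean(axis=1, keepdims=True)  # row mean polish
X2 = X1 - X1.mean(axis=0, keepdims=True)  # column mean polish

# One iteration suffices: both row and column means are now (numerically)
# zero, so further mean polishes leave the matrix unchanged.
print(np.allclose(X2.mean(axis=1), 0))  # True
print(np.allclose(X2.mean(axis=0), 0))  # True
```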

Note that the above double standardization is implicit in a 2-way ANOVA, and though not explicitly stated there, it can be deduced from the work of Scheffé [9]. It is nevertheless presented here, first in order to introduce notation, second because it is not available in this form in the ANOVA framework, and third for the intuition it gives, since it is a natural precursor to the subject of the remainder of this paper.

We proceed to illustrate the previous theorem on the motivating example given after the introduction and draw contrasts between the two approaches. As expected, the mean polishing algorithm terminates in one iteration. The initial matrix, the final solution, and the column and row standard deviations of the final matrix are given below:

$${Y}^{(0)}=\left[\begin{array}{ccc}0.1182& 0.7069& 0.4145\\ 0.9884& 0.9995& 0.4648\\ 0.5400& 0.2878& 0.7640\end{array}\right]$$

(4)

$${Y}^{(\mathit{column}-\mathit{polished})}=\left[\begin{array}{ccc}-0.4307& 0.0422& -0.1333\\ 0.4396& 0.3347& -0.0829\\ -0.0089& -0.3769& 0.2162\end{array}\right]$$

(5)

$${Y}^{(\mathit{row}-\mathit{polished})}={Y}^{(\mathit{final})}=\left[\begin{array}{ccc}-0.2568& 0.2161& 0.0407\\ 0.2091& 0.1043& -0.3134\\ 0.0477& -0.3204& 0.2727\end{array}\right]$$

(6)

$$\mathit{Std}(\mathit{columns})=\left[\begin{array}{lll}0.1932\hfill & 0.2311\hfill & 0.2410\hfill \end{array}\right]$$

(7)

$$\mathit{Std}(\mathit{rows})=\left[\begin{array}{c}0.1952\\ 0.2257\\ 0.2445\end{array}\right]$$

(8)

We note that, unlike in the motivating example, and as expected, the row and column means are both 0; but the standard deviations of the rows and the columns are not identical, let alone identically 1. Since mean polishing has already been attained, and we additionally require the row and column standard deviations to be 1, it is rather tempting to row and column standard deviation polish the terminal matrix *Y*^{(}^{final}^{)} above. We conclude this example by observing the simple fact that doing so results in the loss of the zero row and column averages.
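That observation is easily verified. A minimal sketch of our own, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.uniform(size=(3, 3))

# Double mean polish: columns, then rows (terminates in one iteration).
Y = Y - Y.mean(axis=0, keepdims=True)
Y = Y - Y.mean(axis=1, keepdims=True)

# Now standard deviation polish the rows, then the columns.
Z = Y / Y.std(axis=1, keepdims=True)
Z = Z / Z.std(axis=0, keepdims=True)

# Dividing rows by unequal row standard deviations destroys the zero
# column means, and the subsequent column polish does not restore them.
print(Z.mean(axis=0))  # in general no longer zero
```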

We now examine successive row and column mean and standard deviation polishing for a 2 × 2 matrix, and hence illustrate that, for the results in this paper to hold true, the row dimension (*k*) and column dimension (*n*) of the matrix under consideration must both be at least 3, that is, min(*k*, *n*) ≥ 3. Consider a general 2 × 2 matrix:
${\mathit{X}}^{(0)}=\left(\begin{array}{ll}a\hfill & b\hfill \\ c\hfill & d\hfill \end{array}\right)$. If *a* < *b* and *c* < *d*, then after one row normalization,
${\mathit{X}}^{(1)}=\left(\begin{array}{ll}-1\hfill & 1\hfill \\ -1\hfill & 1\hfill \end{array}\right)$; so
${({\mathit{S}}_{j}^{(1)})}^{2}=0$. Therefore, allowing for both inequalities to be reversed, and assuming, for example, that *a*, *b*, *c*, and *d* are *iid* with continuous distribution(s),
$P({({\mathit{S}}_{j}^{(1)})}^{2}=0)=1/2$, in which case the procedure is no longer well-defined.
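A concrete instance of this breakdown, as a minimal NumPy sketch of our own:

```python
import numpy as np

# A 2-by-2 matrix with a < b and c < d.
X = np.array([[1.0, 2.0],
              [3.0, 7.0]])

# One row normalization: every centered-and-scaled pair becomes (-1, +1),
# whatever the underlying values are.
X1 = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
print(X1)              # [[-1.  1.]
                       #  [-1.  1.]]
print(X1.std(axis=0))  # [0. 0.] -- the next column normalization would divide by zero
```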

A moment’s reflection shows that if ** X** is

For a matrix ** X** as defined, take

Because *λ* and *P* are mutually absolutely continuous, if *C* = {the algorithm for successive row and column normalization converges}, then *P*(*C*) = 1 iff *λ*(ℝ^{*n*×*k*}\*C*) = 0, though only one direction is used.

For the remainder, assume that *P* governs **X**. Positive results will be obtained for 3 ≤ min(*n*, *k*).

$$\begin{array}{l}{\overline{\mathit{X}}}_{i\xb7}^{(0)}=\frac{1}{k}\sum _{j=1}^{k}{\mathit{X}}_{ij};\\ {({\mathit{S}}_{i}^{(0)})}^{2}=\frac{1}{k}\sum _{j=1}^{k}{({\mathit{X}}_{ij}-{\overline{\mathit{X}}}_{i\xb7}^{(0)})}^{2}=\left(\frac{1}{k}\sum _{j=1}^{k}{\mathit{X}}_{ij}^{2}\right)-{({\overline{\mathit{X}}}_{i\xb7}^{(0)})}^{2}\end{array}$$

${\mathit{X}}^{(1)}=[{\mathit{X}}_{ij}^{(1)}]$, where
${\mathit{X}}_{ij}^{(1)}=({\mathit{X}}_{ij}-{\overline{\mathit{X}}}_{i\xb7}^{(0)})/{\mathit{S}}_{i}^{(0)}$; a.s.
${({\mathit{S}}_{i}^{(0)})}^{2}>0$ since *k* ≥ 3 > 1.

By analogy, set

$$\begin{array}{l}{\mathit{X}}^{(2)}=[{\mathit{X}}_{ij}^{(2)}],\text{where}\phantom{\rule{0.16667em}{0ex}}{\mathit{X}}_{ij}^{(2)}=({\mathit{X}}_{ij}^{(1)}-{\overline{\mathit{X}}}_{\xb7j}^{(1)})/{\mathit{S}}_{j}^{(1)};\\ {\overline{\mathit{X}}}_{\xb7j}^{(1)}=\frac{1}{n}\sum _{i=1}^{n}{\mathit{X}}_{ij}^{(1)};\\ {({\mathit{S}}_{j}^{(1)})}^{2}=\frac{1}{n}\sum _{i=1}^{n}{({\mathit{X}}_{ij}^{(1)}-{\overline{\mathit{X}}}_{\xb7j}^{(1)})}^{2}.\end{array}$$

Arguments sketched in what follows entail that a.s.
${({\mathit{S}}_{j}^{(1)})}^{2}>0$ since *n* ≥ 3.

For *m* odd,
${\mathit{X}}_{ij}^{(m)}=({\mathit{X}}_{ij}^{(m-1)}-{\overline{\mathit{X}}}_{i\xb7}^{(m-1)})/{\mathit{S}}_{i}^{(m-1)}$, with
${\overline{\mathit{X}}}_{i\xb7}^{(m-1)}$ and
${({\mathit{S}}_{i}^{(m-1)})}^{2}$ defined by analogy to previous definitions.

For *m* even,
${\mathit{X}}_{ij}^{(m)}=({\mathit{X}}_{ij}^{(m-1)}-{\overline{\mathit{X}}}_{\xb7j}^{(m-1)})/{\mathit{S}}_{j}^{(m-1)}$, again with
${\overline{\mathit{X}}}_{\xb7j}^{(m-1)}$ and
${\mathit{S}}_{j}^{(m-1)}$ defined by analogy to earlier definitions.

We first note that because the process we study is a coordinate process, there is no difference between regular conditional probabilities and regular conditional distributions (see Durrett [4], Section 4.1.c, p. 33 and pp. 229–331 for more details). They can be computed as densities with respect to Lebesgue measure on a finite Cartesian product of copies of *Sph*(*q*), where *q* = *k* after a row normalization and *q* = *n* after a column normalization. In a slight abuse of notation, for any positive integer *q* we define *Sph*(*q*) = {*x* ∈ ℝ^{*q*}: ||

Let {*r _{ij}*:

$$P({\cup}_{{r}_{1},\dots ,{r}_{nk}}\sum _{i,j}{r}_{ij}\phantom{\rule{0.16667em}{0ex}}{\mathit{X}}_{ij}=0)=0.$$

An inductive argument involving conditional densities shows that

$$P({\cup}_{m=1}^{\infty}{\cup}_{{r}_{1},\dots ,{r}_{nk}}\sum _{i,j}{r}_{ij}\phantom{\rule{0.16667em}{0ex}}{\mathit{X}}_{ij}^{(m)}=0)=0.$$

(9)

Consequently

$$P(({\cap}_{m=1}^{\infty}{\cap}_{i=1}^{n}{({\mathit{S}}_{i}^{(2m)})}^{2}>0)\cap ({\cap}_{m=1}^{\infty}{\cap}_{j=1}^{k}{({\mathit{S}}_{j}^{(2m-1)})}^{2}>0))=1.$$

Further, a.s. *X*^{(}^{m}^{)} is defined and finite for every *m*.

What we know about the *t* distribution (Efron [5]) and geometric arguments entail that *X*^{(1)} can be viewed as having a probability distribution on ℝ^{*n*×*k*} that is the *n*-fold product of independent uniform distributions on *Sph*(*k*).

As an aside, write *g*_{1}(** X**) =

We turn now to study ${\mathit{X}}_{ij}^{(2m-1)}$ as *m* increases without bound. Note first that for *m* = 1, 2, …, *X*^{(2}^{m}^{−1)} has a joint distribution that is unchanged if two columns of **X** are transposed, therefore if two columns of

Write *π* for a permutation of the integers {1, …, *k*}; let Π be the finite *σ*-field of all subsets of {*π*}. The marginal probability induced on {*π*} from the joint distribution of (**X**, {

Write
${\mathcal{G}}_{2m-1}^{(i)}$ to be the *σ*-field

$$\mathcal{F}([{\mathit{X}}_{ij}^{(q)}:j=1,\dots ,k;\phantom{\rule{0.38889em}{0ex}}q=2m-1,2m+1,\dots ])\times \mathrm{\Pi}.$$

$E\{{({\mathit{X}}_{i\pi (1)}^{(1)})}^{2}\mid {\mathcal{G}}_{2m-1}\}={({\mathit{X}}_{i\pi (1)}^{(2m-1)})}^{2}$ a.s. for *m* = 1, 2, …

Write
${({\mathit{X}}_{i\pi (1)}^{(2m-1)})}^{2}=\sum _{l=1}^{k}{({\mathit{X}}_{il}^{(2m-1)})}^{2}{I}_{[\pi (1)=l]}$, where *I _{A}* is the indicator function of the event A. Obviously,
${({\mathit{X}}_{i\pi (1)}^{(2m-1)})}^{2}$ is
${\mathcal{G}}_{2m-1}^{(i)}$ measurable; {

Note that
${\mathcal{G}}_{2m-1}^{(i)}$ is generated by {*B* × *Q*}, *B* of the cited form and *Q* ∈ Π. In particular, each
$B\times Q\in {\mathcal{G}}_{2m-1}^{(i)}$. Proof of our claim is complete if we show that for *m* = 2, 3, …

$${\int}_{B\times Q}{({\mathit{X}}_{i\pi (1)}^{(2m-1)})}^{2}={\int}_{B\times Q}{({\mathit{X}}_{i\pi (1)}^{(1)})}^{2}.$$

The left hand side of the display can be expressed

$$E\{\sum _{l=1}^{k}{({\mathit{X}}_{il}^{(2m-1)})}^{2}\phantom{\rule{0.16667em}{0ex}}{I}_{[\pi (1)=l]}\phantom{\rule{0.16667em}{0ex}}{I}_{[\pi \in Q]}\phantom{\rule{0.16667em}{0ex}}{I}_{B}\}=E\{{I}_{B}\sum _{l=1}^{k}{({\mathit{X}}_{il}^{(2m-1)})}^{2}\phantom{\rule{0.16667em}{0ex}}{I}_{[\pi (1)=l]}\phantom{\rule{0.16667em}{0ex}}{I}_{[\pi \in Q]}\}.$$

Now, for any *π*, the expression inside the sum is (*k*)(1/*k*) = 1 if *π* ∈ *Q* and 0 if not. That is, the expression constituting the sum is $k\,{I}_{[\pi (1)=l]}\,{I}_{[\pi \in Q]}$. Now the expectation factors into *P*(*B*) *P*(*π*(1) = *l* | *π* ∈ *Q*) *P*(*π* ∈ *Q*) = *P*(*B*) *P*(*Q*). Retracing steps shows clearly that
${({\mathit{X}}_{il}^{(2m-1)})}^{2}$ in the computation just completed could be replaced by
${({\mathit{X}}_{il}^{(1)})}^{2}$ with all equalities remaining true. The claim is now proven.

The backwards martingale convergence theorem (Doob [3]) entails that
${({\mathit{X}}_{i\pi (1)}^{(2m-1)})}^{2}$ converges a.s. as *m* → ∞. So, for each fixed *j* ∈ {1, …, *k*},
${({\mathit{X}}_{i\pi (1)}^{(2m-1)})}^{2}{I}_{[\pi (1)=j]}$ converges a.s. It follows that
${({\mathit{X}}_{ij}^{(2m-1)})}^{2}$ converges a.s. as *m* → ∞.

If previous arguments are perturbed so that *π* denotes a permutation of {1, …, *n*}, with
${({\mathit{X}}_{i\pi (1)}^{(1)})}^{2}$ replaced by
${({\mathit{X}}_{\pi (1)j}^{(2)})}^{2}$, ${\mathcal{G}}_{2m-1}^{(i)}$ by
${\mathcal{G}}_{2m}^{(i)}$, ${({\mathit{X}}_{i\pi (1)}^{(2m-1)})}^{2}$ by
${({\mathit{X}}_{\pi (1)j}^{(2m)})}^{2}$, and
${\sum}_{l=1}^{k}{({\mathit{X}}_{il}^{(2m-1)})}^{2}$ by
${\sum}_{l=1}^{n}{({\mathit{X}}_{lj}^{(2m)})}^{2}$, then one concludes that
${({\mathit{X}}_{ij}^{(2m)})}^{2}$ also converges a.s. as *m* → ∞. Without further argument it is unclear that the a.s. limits along odd, respectively even, indices are the same; and it is crucial to what remains that this is in fact true.

Obviously,
${\cap}_{m=1}^{\infty}{\mathcal{G}}_{2m-1}^{(i)}={\cap}_{m=1}^{\infty}{\mathcal{G}}_{2m}^{(i)}$, so in a certain sense measurability is the same. Obviously, too, randomization of index is by columns in the first case and by rows in the second. But now a path to the required conclusion presents itself. Given success in proving a.s. convergence along odd indices after randomizing columns and along even indices after randomizing rows, and given that a requirement of our approach is that these two limits be identical a.s., perhaps there is a path via simultaneously randomizing both columns and rows? Fortunately, that is the case. Thus, let *π*_{1} be a permutation of {1, …, *n*} and *π*_{2} be a permutation of {1, …, *k*}. With the obvious product formulation of the governing probability mechanism and further obvious formulation of decreasing *σ*-fields, as an example of what can be proved,

$$E({({\mathit{X}}_{{\pi}_{1}(1){\pi}_{2}(1)}^{(1)})}^{2}\mid {\mathcal{G}}_{2})={({\mathit{X}}_{{\pi}_{1}(1){\pi}_{2}(1)}^{(2)})}^{2}\phantom{\rule{0.16667em}{0ex}}\text{a}.\text{s}.$$

From the arguments for this display, there are several paths by which one concludes that a.s., simultaneously for all (*i*, *j*),
${({\mathit{X}}_{ij}^{(m)})}^{2}$ converges. Dominated convergence entails that the limit random matrix has expectation 1 in all coordinates. As a consequence of this convergence, a.s. and simultaneously for all (*i*, *j*),
${({\mathit{X}}_{ij}^{(2m+1)})}^{2}-{({\mathit{X}}_{ij}^{(2m)})}^{2}\to 0$ as *m* → ∞.

We turn now to key ideas in extending our argument that ${({\mathit{X}}_{ij}^{(m)})}^{2}$ converges almost surely, simultaneously for all (*i*, *j*), to the same conclusion with the square removed. To limit notational complexity, we study first only odd indices as *m* grows without bound. Conclusions are identical for even indices, and by extension for indices not constrained to be odd or even.

A first necessary step is to show that for arbitrary *j*,

$$P\{{\overline{lim}}_{m}{({S}_{j}^{(2m+1)})}^{2}>0\}=1$$

To that end, let *A* be the event [
${\overline{lim}}_{m}{({S}_{j}^{(2m-1)})}^{2}=0$]. Obviously,
$A=\{\mathit{x}:{({S}_{j}^{(2m-1)})}^{2}\to 0\}=\{\mathit{x}:{S}_{j}^{(2m-1)}\to 0\}$, regardless of which square roots are taken. We show that *P*{*A*} = 0.

By way of contradiction, suppose that *P*{*A*} > 0. Write

$${({S}_{j}^{(2m-1)})}^{2}=\frac{1}{n}\sum _{l=1}^{n}{({\mathit{X}}_{lj}^{(2m-1)})}^{2}-{({\overline{\mathit{X}}}_{\xb7j}^{(2m-1)})}^{2}$$

We know that the first term tends to 1 a.s. on *A*. Therefore, also the second term tends to 1 a.s. on *A*. Since for *m* > 1,
${\mathit{X}}_{lj}^{(2m-1)}$ is bounded a.s.,
${\overline{lim}}_{m}\,{max}_{l}\,{\mathit{X}}_{lj}^{(2m-1)}(\mathit{x})$ is a finite-valued random variable *C* = *C*(**x**). Simple considerations show that the only possibilities are that for all

$$-1<{\underset{\_}{lim}}_{m}\,{min}_{l}\,{\mathit{X}}_{lj}^{(2{m}_{k}-1)}(\mathit{x})\le {\overline{lim}}_{m}\,{max}_{l}\,{\mathit{X}}_{lj}^{(2{m}_{k}-1)}(\mathit{x})<1$$

It follows that
${lim}_{m}{\mathit{X}}_{lj}^{(2m-1)}$ exists a.s. on *A*, and that the limit of the sequence is +1 or −1 on *A*.

Recall that **X** ~ −**X**, and this equality in distribution is inherited by all joint distributions of

Again, let us fix *j*. Consider a sample path of $\{{\mathit{X}}^{(2{m}_{q}-1)}\}$ along which
${lim}_{{m}_{q}}{({S}_{j}^{(2{m}_{q}-1)})}^{2}=D>0$. Clearly,
$\{i:{\overline{lim}}_{{m}_{q}}\mid {\mathit{X}}_{ij}^{(2{m}_{q}-1)}\mid \phantom{\rule{0.16667em}{0ex}}>0\}\ne \varnothing $. Indeed, let
$E=E(j)=\{i:\text{for}\phantom{\rule{0.16667em}{0ex}}\text{some}\phantom{\rule{0.16667em}{0ex}}\{{m}_{q}\}=\{{m}_{q}(i)\},{\overline{lim}}_{{m}_{q}(i)}\mid {\mathit{X}}_{ij}^{(2{m}_{q}-1)}\mid \phantom{\rule{0.16667em}{0ex}}>0\}$. Row and column exchangeability of *X*^{(}^{m}^{)} entail that necessarily the cardinality of *E* is at least 2.

Let ${i}_{0}\ne {i}_{1}\in E$. Because min(*n*, *k*) ≥ 3, there is a further subsequence of {{*m _{q}* (

$$\begin{array}{l}{lim}_{{m}_{q}}\mid {\mathit{X}}_{{i}_{0}j}^{(2{m}_{q}-1)}\mid \phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}{lim}_{{m}_{q}}\mid {\mathit{X}}_{{i}_{0}j}^{(2{m}_{q})}\mid \phantom{\rule{0.16667em}{0ex}}\text{both}\phantom{\rule{0.16667em}{0ex}}\text{exist};\\ {lim}_{{m}_{q}}\mid {\mathit{X}}_{{i}_{1}j}^{(2{m}_{q}-1)}\mid \phantom{\rule{0.38889em}{0ex}}\text{and}\phantom{\rule{0.38889em}{0ex}}{lim}_{{m}_{q}}\mid {\mathit{X}}_{{i}_{1}j}^{(2{m}_{q})}\mid \phantom{\rule{0.16667em}{0ex}}\text{both}\phantom{\rule{0.16667em}{0ex}}\text{exist};\phantom{\rule{0.16667em}{0ex}}\text{and}\\ {lim}_{{m}_{q}}\mid {\mathit{X}}_{{i}_{0}j}^{(2{m}_{q})}-{\mathit{X}}_{{i}_{1}j}^{(2{m}_{q}-1)}\mid \phantom{\rule{0.16667em}{0ex}}\text{exists}\phantom{\rule{0.16667em}{0ex}}\text{and}\phantom{\rule{0.16667em}{0ex}}\text{is}\phantom{\rule{0.16667em}{0ex}}\text{positive}.\end{array}$$

The first two requirements can always be met off of the set of probability 0 implicit in eqn(9). That the third can be met as well is a consequence of the argument just concluded. In any case, if there were no such subsequence, then our proof would be complete because all
${\mathit{X}}_{ij}^{(m)}$ for *j* fixed tend to the same number. But now, write

$$\begin{array}{c}({\mathit{X}}_{{i}_{0}j}^{(2{m}_{q})}-{\mathit{X}}_{{i}_{0}j}^{(2{m}_{q}-1)})-({\mathit{X}}_{{i}_{1}j}^{(2{m}_{q})}-{\mathit{X}}_{{i}_{1}j}^{(2{m}_{q}-1)})=\\ ({\mathit{X}}_{{i}_{0}j}^{(2{m}_{q}-1)}-{\mathit{X}}_{{i}_{1}j}^{(2{m}_{q}-1)})(1-{S}_{j}^{(2{m}_{q}-1)})/{S}_{j}^{(2{m}_{q}-1)}.\end{array}$$

Since
${({\mathit{X}}_{{i}_{0}j}^{(2{m}_{q})})}^{2}-{({\mathit{X}}_{{i}_{0}j}^{(2{m}_{q}-1)})}^{2}\to 0$ a.s., and likewise with *i*_{0} replaced by *i*_{1}, the first expression of the immediately previous display has limit 0. Thus, so too does the second expression. This is possible only if
${S}_{j}^{(2{m}_{q}-1)}\to 1$ (where we have taken the positive square root). Further,

$${\mathit{X}}_{{i}_{0}j}^{(2{m}_{q})}-{\mathit{X}}_{{i}_{0}j}^{(2{m}_{q}-1)}=\frac{{\mathit{X}}_{{i}_{0}j}^{(2{m}_{q}-1)}(1-{S}_{j}^{(2{m}_{q}-1)})-{\overline{\mathit{X}}}_{\xb7j}^{(2{m}_{q}-1)}}{{S}_{j}^{(2{m}_{q}-1)}}$$

As a corollary to the above, one sees now that
${\overline{\mathit{X}}}_{\xb7j}^{(2{m}_{q}-1)}\to 0$. Since the original {*m _{q}*} could be taken to be an arbitrary subsequence of {

- (i) ${S}_{j}^{(2{m}_{q}-1)}\to 1$ a.s.;
- (ii) ${\overline{\mathit{X}}}_{\xb7j}^{(2{m}_{q}-1)}\to 0$ a.s.; and
- (iii) ${\mathit{X}}_{ij}^{(m)}$ converges a.s.

Now replace arguments for (i) and (ii) on columns by analogous arguments on rows. Deduce that every infinite subsequence of positive integers has a subsequence along which our desired conclusion obtains.

We now comment on theoretical properties of successive normalization. In particular, we elaborate on the generality of the result by showing that the Gaussian assumption is not necessary and serves only as a convenient choice of measure. We also discuss convergence in Lebesgue measure and the domains of attraction of successive normalization.

Write *λ* for Lebesgue measure on ℝ^{*n*×*k*}. Thus,

Now, let *f*_{1}, *f*_{2}, … be a sequence of measurable functions, ℝ^{*n*×*k*} →

In the present paper, the (*r, c*) coordinate of {*f _{m}*} is the set of successive normalizations of the initial real entry multiplied by the indicator of the subset of

Whenever standardization is possible, after one standardization the sum over all *nk* coordinates of squares of respective values is bounded by *nk*. It follows from dominated convergence that, as *m* grows without bound, each term converges not only *P*-almost everywhere but also in *p*^{th} mean for every ∞ > *p* ≥ 1.

Reviewers of research presented here have wondered if we can describe simply what successive and alternating normalization does to rectangular arrays of data, beyond introductory comments about putting rows and columns on an equal footing and the analogy to computing correlation from covariance. We begin our reply here, though details await further research and a subsequent paper. Recall that either row or column normalization (when possible) is invariant to scale multiples of **x** ∈ ℝ^{*n*×*k*}. In other words, results of normalization are constant along rays defined by these multiples, and without loss of generality we can assume that

Because normalization always involves subtraction of a mean and division by a standard deviation, and because each **X**^{(}^{m}^{)} is row and column exchangeable, the limiting process we study here when P applies seems on superficial glance to be analogous to “domains of attraction” as that notion applies to sequences of *iid* random variables. One obvious difference is that here limits are almost sure rather than in distribution. While a.s. limits of **X**^{(}^{m}^{)} are shown to have row and column means 0 and row and column standard deviations 1*, n* × *k* arrays of real numbers with this property are obviously the only fixed points of the alternating process studied here. The Hausdorff dimension of the set of fixed points is not difficult to compute, but we have been unable thus far to give rigorously supported conditions for the domain of attraction (in the sense described) of each fixed point. The simple case for which domains of attraction for limits in distribution were described was a major development in the history of probability (see Feller [6], Gnedenko and Kolmogorov [7], Zolotarev [10]). We report some intuitive results, and next a mathematical question that arose in our study of domains of attraction for which at present we have only a heuristic argument.

Is there a set *E* ⊆ ℝ^{n}^{×}^{k} for which

Given that ${({\mathbf{S}}_{i}^{(m)})}^{2}\to 1$ on a subset of ℝ^{n}^{×}^{k} with complementary P-measure 0, therefore Lebesgue measure 0, almost surely ultimately (meaning for

From computations, after a row normalization the surface area of the sphere in *k*-space orthogonal to the equiangular line — the sphere that corresponds to only one row of ${\mathbf{X}}^{(m)}$ — satisfies $\text{surface area} \approx \sqrt{\frac{2}{e}}\left(\frac{2\pi e}{k-3}\right) < 1$ for *k* ≥ 21. The expression → 0 as *k* → ∞. Even for *k* = 4, the quantity is only about 14.7 (larger than the actual value). Remember that there are at most [*f*(*k*)*k*!] a.s. ultimately non-empty “invariant sets” for row normalization. Thus, one sees that for *k* large the quantity $\left(\frac{\text{surface area of}\phantom{\rule{0.16667em}{0ex}}\mathit{Sph}(k)}{\mid \text{invariant sets of}\phantom{\rule{0.16667em}{0ex}}\mathit{Sph}(k)\mid}\right)^{n}$ is nearly 0.

We include three examples to highlight and illustrate some computational aspects of our iterative procedure. The first two examples are simulation studies, whereas the third is an implementation on a real dataset.

For the simulation study, we consider a 3-by-3 matrix and a 10-by-10 matrix, both with entries generated independently from a uniform distribution on [0,1]. For a given matrix, the algorithm performs the following four steps at each iteration:

- Mean polish the columns
- Standard deviation polish the columns
- Mean polish the rows
- Standard deviation polish the rows

These four steps, which constitute one iteration, are repeated until “convergence,” which we define as the point at which the difference between two consecutive iterates, measured in squared norm (the Frobenius norm in our case), is less than some small prescribed value, which we take to be 10^{−8}.
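The four polishing steps and the convergence check above can be sketched in a few lines; this is a minimal illustration using numpy, not the authors' actual code, and it assumes population (ddof = 0) standard deviations — the function name `successive_normalize` is ours:

```python
import numpy as np

def successive_normalize(x, tol=1e-8, max_iter=100):
    """Alternate column and row mean/standard-deviation polishing until
    the squared Frobenius norm of the change between two consecutive
    iterates falls below `tol`."""
    x = np.asarray(x, dtype=float).copy()
    diffs = []
    for _ in range(max_iter):
        prev = x.copy()
        x = x - x.mean(axis=0)                   # mean polish the columns
        x = x / x.std(axis=0)                    # SD polish the columns
        x = x - x.mean(axis=1, keepdims=True)    # mean polish the rows
        x = x / x.std(axis=1, keepdims=True)     # SD polish the rows
        diffs.append(np.sum((x - prev) ** 2))
        if diffs[-1] < tol:
            break
    return x, diffs

rng = np.random.default_rng(0)
x_final, diffs = successive_normalize(rng.uniform(size=(10, 10)))
# after convergence, row and column means are ~0 and SDs ~1
```

Since the last operation in each iteration is a row polish, the rows of the returned matrix are exactly standardized and the columns approximately so, up to the tolerance.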

We proceed now to illustrate the convergence of the successive row and column mean-standard deviation polishing for the simple 10-by-10 example cited. The algorithm took 15 iterations to converge. The initial matrix, the final solution, and the relative (and log-relative) differences for the 15 iterations are:

$${X}^{0}=\left(\begin{array}{cccccccccc}0.8145& 0.3551& 0.7258& 0.3736& 0.0216& 0.2486& 0.0669& 0.2178& 0.6766& 0.6026\\ 0.7891& 0.9970& 0.3704& 0.0875& 0.9106& 0.4516& 0.9394& 0.1821& 0.9883& 0.7505\\ 0.8523& 0.2242& 0.8416& 0.6401& 0.8006& 0.2277& 0.0182& 0.0418& 0.7668& 0.5835\\ 0.5056& 0.6525& 0.7342& 0.1806& 0.7458& 0.8044& 0.6838& 0.1069& 0.3367& 0.5518\\ 0.6357& 0.6050& 0.5710& 0.0451& 0.8131& 0.9861& 0.7837& 0.6164& 0.6624& 0.5836\\ 0.9509& 0.3872& 0.1769& 0.7232& 0.3833& 0.0300& 0.5341& 0.9397& 0.2442& 0.5118\\ 0.4440& 0.1422& 0.9574& 0.3474& 0.6173& 0.5357& 0.8854& 0.3545& 0.2955& 0.0826\\ 0.0600& 0.0251& 0.2653& 0.6606& 0.5755& 0.0871& 0.8990& 0.4106& 0.6802& 0.7196\\ 0.8667& 0.4211& 0.9246& 0.3839& 0.5301& 0.8021& 0.6259& 0.9843& 0.5278& 0.9962\\ 0.6312& 0.1841& 0.2238& 0.6273& 0.2751& 0.9891& 0.1379& 0.9456& 0.4116& 0.3545\end{array}\right)$$

(10)

$${X}^{\mathit{final}}=\left(\begin{array}{cccccccccc}1.2075& 0.2139& 0.8939& 0.2661& -2.0026& -0.5881& -1.2477& -0.4157& 1.1023& 0.5705\\ -0.0736& 1.7222& -1.2202& -1.0461& 0.6465& -0.8172& 0.5144& -1.1740& 1.3022& 0.1458\\ 0.8858& -0.8659& 0.8816& 0.7930& 0.9515& -0.9498& -1.6621& -1.1469& 1.0831& 0.0298\\ -0.9296& 1.5223& 0.6537& -0.7661& 0.9476& 0.9361& 0.5467& -1.3402& -1.3775& -0.1931\\ -0.8358& 0.8041& -0.7288& -2.0057& 1.0328& 1.4824& 0.5929& 0.0202& 0.2768& -0.6390\\ 1.4926& 0.1374& -1.2120& 1.1351& -0.5035& -1.2741& 0.1766& 1.3125& -1.1642& -0.1005\\ -0.5156& -0.7494& 1.5647& 0.2025& 0.5610& 0.2646& 1.3840& -0.0059& -0.7521& -1.9537\\ -1.8680& -1.1055& -0.6428& 0.9269& 0.3515& -0.8323& 1.2448& 0.1167& 0.8827& 0.9259\\ 0.3596& -1.0158& 0.8070& -0.5547& -1.1339& 0.1669& -0.5895& 1.0581& -1.0805& 1.9828\\ 0.2771& -0.6632& -0.9973& 1.0490& -0.8509& 1.6114& -0.9601& 1.5752& -0.2727& -0.7685\end{array}\right)$$

(11)

$$\text{Successive}\phantom{\rule{0.16667em}{0ex}}\text{Difference}=\left[\begin{array}{ccc}\text{Iteration}\phantom{\rule{0.16667em}{0ex}}\text{no}.& \text{difference}& log(\text{difference})\\ 1& 84.1592& 4.4327\\ 2& 1.2860& 0.2516\\ 3& 0.1013& -2.2897\\ 4& 0.0144& -4.2402\\ 5& 0.0029& -5.8434\\ 6& 0.0007& -7.2915\\ 7& 0.0002& -8.6805\\ 8& 0.0000& -10.0456\\ 9& 0.0000& -11.4000\\ 10& 0.0000& -12.7492\\ 11& 0.0000& -14.0955\\ 12& 0.0000& -15.4403\\ 13& 0.0000& -16.7841\\ 14& 0.0000& -18.1272\\ 15& 0.0000& -19.4699\end{array}\right]$$

(12)

We note once more how the relative differences decrease linearly on the log scale (though empirically), which is once again suggestive of the rate of convergence. As both the figure (see Fig. 3) and the vector of relative differences indicate, there is a substantial jump at iteration 2, after which the curve behaves linearly.

The whole procedure takes about 0.37 seconds on a standard modern laptop computer and terminates after 15 iterations. It might appear that the number of iterations increases with dimension. For instance, the number of iterations goes from 9 to 15 as we go from dimension 3 to 10. We should, however, bear in mind that when we go from dimension 3 to 10 the “tolerance level” is kept constant at 10^{−8}. The number of elements that must be close to their respective limiting values, however, goes from 9 in the 3-dimensional case to 100 in the 10-dimensional case. The rapidity of convergence was explored further, and the process above was repeated over 1000 simulations. The convergence proves to be stable in the sense that the mean and standard deviation of the number of steps until convergence over the 1000 simulations are 14.5230 and 2.0331, respectively. A histogram of the number of steps until convergence is given below (Fig. 4).

A closer look at the vector of successive differences suggests that the “bulk of the convergence” is achieved during the first iteration. This seems reasonable since necessarily the first steps render the resulting columns, then rows, members of their respective unit spheres (and even of the cited subsets of them). Convergence thereafter is only within these spheres. Our numerical results also indicate that the sequence of matrices *X*^{(}^{i}^{)} changes most drastically during this first iteration. This suggests that mean polishing is to a larger extent responsible for the rapidity of convergence, and is reminiscent of the result in Lemma 1, which states that if only row and column mean polishing are performed, then convergence is achieved immediately. We explore this issue further by looking at the distance between *X*^{(1)} and *X*^{(}^{final}^{)}, and comparing it to the distance between *X*^{(0)} and *X*^{(}^{final}^{)}. The ratio of these two distances is defined below:

$$\mathit{Ratio}=\frac{\mathit{dist}({X}^{(1)},{X}^{(\mathit{final})})}{\mathit{dist}({X}^{(0)},{X}^{(\mathit{final})})}$$

For our 10-by-10 example, we simulated 1000 initial starting values and implemented our successive normalization procedure. The average value of the distance from the first iterate to the limit, as a proportion of the total distance to the limit from the starting value, is only 2.78%. One could interpret this heuristically as saying that on average the crucial first step does as much as 97.2% of the work towards convergence. We therefore confirm that the bulk of the convergence is indeed achieved in the first step (termed a “one-step analysis” from now onwards). The distribution of the ratio defined above is graphed in the histogram below (Fig. 5). We also note that none of the 1000 simulations yielded a ratio of over 10%.

Distribution of the distance to the limit after one step, as a proportion of the distance to the limit from the initial value

Yet another illuminating perspective on our successive normalization technique is obtained when we track the number of sign changes in the individual entries of the matrices from one iteration to the next. Please remember that this is related to the “invariant sets” that were described in subsection 5.3. Naturally, one would expect the vast majority of sign changes to occur in the first step, as the bulk of the convergence is achieved during this first step. We record the number of sign changes at each iteration, as a proportion of the total number of sign changes until convergence, over 1000 simulations in our 10-by-10 case. The results are illustrated in the table below^{1} (see Table 1). An empirical study of the occurrence of the sign changes reveals interesting heuristics. We note that on average 95% of sign changes occur during the first step and an additional 3% in the next step. The table also demonstrates that as much as 99% of sign changes occur during the first three iterations. When we examine the infrequent cases where there is a sign change well after the first few iterations, we observe that the corresponding limiting value is close to zero, indicating that a sign change well into the successive normalization technique (i.e., a change from positive to negative or vice versa) amounts to a very small change in the actual magnitude of the corresponding entry of the matrix.
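Counting sign changes between consecutive iterates requires only a comparison of signs at each step; a minimal numpy sketch for a single run (our illustrative `polish_once` helper, not the authors' code) follows. Because the uniform starting entries are all positive, roughly half the entries flip sign at the first iteration and very few flip thereafter:

```python
import numpy as np

def polish_once(x):
    x = x - x.mean(axis=0)                    # column mean polish
    x = x / x.std(axis=0)                     # column SD polish
    x = x - x.mean(axis=1, keepdims=True)     # row mean polish
    return x / x.std(axis=1, keepdims=True)   # row SD polish

rng = np.random.default_rng(2)
x = rng.uniform(size=(10, 10))
sign_changes = []
for _ in range(100):
    nxt = polish_once(x)
    sign_changes.append(int(np.sum(np.sign(nxt) != np.sign(x))))
    converged = np.sum((nxt - x) ** 2) < 1e-8
    x = nxt
    if converged:
        break
# almost all sign changes happen at the first iteration
```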

We conclude this example by investigating more thoroughly whether the dimensions of the matrices have an impact on the rapidity of convergence and/or on the one-step analysis. The following table gives the means and standard deviations of the number of iterations needed for convergence for various values of the dimensions of the matrix, denoted by *p* and *n*, keeping the total number of cells in the matrix constant^{2}. Once more our successive normalization procedure is applied to 1000 uniform random starting values. Results of this exercise are given in the table below (see Table 2).

We find that when *n* and *p* are close, convergence appears to be faster, keeping everything else constant. Interestingly enough, a one-step analysis performed for the different scenarios above tends to suggest that the one-step ratio, defined as the distance from the first iterate to the limit as a proportion of the total distance to the limit from the starting value, seems largely unaffected by the row or column dimension of the problem.

We now proceed to investigate further the successive normalization procedure when one begins with column mean-standard deviation polishing followed by row mean-standard deviation polishing, or vice versa, on a simple 5-by-5 example. The theory developed in the previous sections proves convergence of the successive normalization procedure whether the first normalization performed on the matrix is row polishing or column polishing.

The algorithm took 30 iterations to converge when one begins with column mean-standard deviation polishing, and when one begins with row mean-standard deviation polishing it took 26 iterations to converge. The initial matrix, the final solutions, log relative differences and their respective plots for both approaches are given below (see Fig. 6).

Relative differences at each iteration on the log scale for 5-by-5 dimensional example (a) starting with column polishing (b) starting with row polishing

$${X}^{0}=\left(\begin{array}{ccccc}0.6565& 0.2866& 0.7095& 0.4409& 0.8645\\ 0.3099& 0.3548& 0.9052& 0.8758& 0.0210\\ 0.3316& 0.5358& 0.8658& 0.8650& 0.0768\\ 0.1882& 0.9908& 0.1192& 0.3552& 0.3767\\ 0.1007& 0.0282& 0.9553& 0.6311& 0.1492\end{array}\right)$$

(13)

$${X}_{\text{starting}\phantom{\rule{0.16667em}{0ex}}\text{with}\phantom{\rule{0.16667em}{0ex}}\text{column}\phantom{\rule{0.16667em}{0ex}}\text{polishing}}^{\mathit{final}}=\left(\begin{array}{ccccc}1.6360& -0.4320& -0.7863& -1.0548& 0.6371\\ 0.1093& -1.2446& 0.9477& 1.2170& -1.0295\\ -0.6979& 1.1193& -0.1716& 1.1399& -1.3897\\ 0.2748& 1.2421& -1.3091& -1.0112& 0.8034\\ -1.3223& -0.6848& 1.3192& -0.2907& 0.9786\end{array}\right)$$

(14)

$${X}_{\text{starting}\phantom{\rule{0.16667em}{0ex}}\text{with}\phantom{\rule{0.16667em}{0ex}}\text{row}\phantom{\rule{0.16667em}{0ex}}\text{polishing}}^{\mathit{final}}=\left(\begin{array}{ccccc}1.4956& -0.4243& -0.7386& -1.1620& 0.8293\\ 0.3816& -0.9267& 0.5915& 1.3267& -1.3731\\ -1.2158& 1.1775& 0.1052& 0.9966& -1.0634\\ 0.3478& 1.2181& -1.4096& -0.9138& 0.7573\\ -1.0092& -1.0446& 1.4514& -0.2475& 0.8499\end{array}\right)$$

(15)

$$\text{Successive}\phantom{\rule{0.16667em}{0ex}}\text{Difference}=\left(\begin{array}{ccc}\text{Iteration}\phantom{\rule{0.16667em}{0ex}}\text{no}.& \text{difference}& log(\text{difference})\\ ----& \text{starting}\phantom{\rule{0.16667em}{0ex}}\text{with}& \text{starting}\phantom{\rule{0.16667em}{0ex}}\text{with}\\ ----& \text{column}\phantom{\rule{0.16667em}{0ex}}\text{polishing}& \text{row}\phantom{\rule{0.16667em}{0ex}}\text{polishing}\\ 1& 2.9646& 3.0255\\ 2& -0.5858& 0.2539\\ 3& -1.5082& -0.8731\\ 4& -1.8814& -1.4650\\ \mathrm{..}& \dots & \dots \\ \mathrm{..}& \dots & \dots \\ 24& -15.0375& -17.0730\\ 25& -15.7028& -17.8229\\ 26& -16.3679& -18.5728\\ 27& -17.0331& ---\\ 28& -17.6983& ---\\ 29& -18.3635& ---\\ 30& -19.0287& ---\end{array}\right)$$

(16)

As expected, the final solutions are different. The simulations were repeated with different initial values, and we note that the convergence patterns (as illustrated in Fig. 6) are similar whether the procedure starts with column polishing or row polishing, though the actual number of iterations required to converge can vary.
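The two orderings differ only in which axis is polished first, so a single sketch can cover both; this is an illustrative numpy implementation (our `polish` and `successive_normalize` helpers, not the authors' code) showing that both orders converge, generically to different fixed points:

```python
import numpy as np

def polish(x, axis):
    """Subtract means and divide by SDs along `axis` (0 = columns, 1 = rows)."""
    x = x - x.mean(axis=axis, keepdims=True)
    return x / x.std(axis=axis, keepdims=True)

def successive_normalize(x, first="column", tol=1e-8, max_iter=500):
    """Alternate normalization, starting with either column or row polishing."""
    order = (0, 1) if first == "column" else (1, 0)
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        prev = x.copy()
        for axis in order:
            x = polish(x, axis)
        if np.sum((x - prev) ** 2) < tol:
            break
    return x

rng = np.random.default_rng(3)
x0 = rng.uniform(size=(5, 5))
a = successive_normalize(x0, first="column")
b = successive_normalize(x0, first="row")
# both orders converge, but in general to different fixed points
```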

We now illustrate the convergence of the successive row and column mean-standard deviation polishing for a real-life gene expression example: a 20426-by-63 matrix. This dataset arose originally from a study of human in-stent restenosis by Ashley et al. [2]. The algorithm took considerably longer in terms of time and computer resources but converged in eight iterations. The initial matrix and the final solution are too large to display, but the relative (and log-relative) differences for the eight iterations are given subsequently:

$$\text{Successive}\phantom{\rule{0.16667em}{0ex}}\text{Difference}=1.0e+005\ast \left(\begin{array}{ccc}\text{Iteration}\phantom{\rule{0.16667em}{0ex}}\text{no}.& \text{difference}& log(\text{difference})\\ 1& 1.0465& 11.5583\\ 2& 0.0008& 4.4030\\ 3& 0.0000& -0.2333\\ 4& 0.0000& -4.7582\\ 5& 0.0000& -9.2495\\ 6& 0.0000& -13.7130\\ 7& 0.0000& -18.1526\\ 8& 0.0000& -22.5717\end{array}\right)$$

(17)

Note once more how the relative differences decrease linearly on the log scale (though empirically), which is once again suggestive of the rate of convergence. As both the figure (see Fig. 7) and the vector of relative differences indicate, there is a jump between iterations 1 and 2, after which the curve behaves linearly.

Additionally, the whole procedure takes about 853.2 seconds, or approximately 14.22 minutes, on a desktop computer^{3}, versus 0.4 seconds for the 10-by-10 example. However, the algorithm terminates after only eight iterations. In this example the number of iterations does NOT change with the increase in dimensionality. It may make sense to investigate this behavior more thoroughly, empirically, using simulation for rectangular but not square matrices. It seems that the ratio of the two dimensions, or the minimum of the two dimensions, may play a role. We should also bear in mind that the tolerance level, which is applied to the sum of the individual differences squared, has been kept constant at 10^{−8}.

In this section we attempt to lend perspective to our results and to point the way for future developments. Readers please note that for rectangular *n* × *k* arrays of real numbers with *min*(*k*, *n*) ≥ 3, the technique beginning with rows (alternatively columns) and successively subtracting row (column) means and dividing the resulting differences, respectively, by row (column) standard deviations converges on a subset of Euclidean ℝ^{n}^{×}^{k} whose complement has Lebesgue measure 0. The limit is row and column exchangeable given the Gaussian probability mechanism that applies in our theoretical arguments. We do not offer other information on the nature of the exact set on which successive iterates converge. A single “iteration” of the process we study has four steps, two each respectively for rows and columns. Note that on the set for which the algorithm converges, convergence seems remarkably rapid, exponential or even faster, perhaps because after half an iteration, the rows (alternately columns) lie as

Viewing the squares of the entries as the terms of a backwards martingale entails maximal inequalities for them, and therefore implicitly contains information on “rates of convergence” of the squares; but these easy results appear far from the best one might establish. Our arguments for (almost everywhere) convergence of the original signed entries do not carry information regarding rates of convergence. One argues easily that if successive iterates converge, and no limiting entry is 0, then after finitely many steps (the number depending on the original values and the limiting values) signs are unchanged. In our examples of small dimension, evidence of this can be made explicit. In particular, we observe empirically that the vast majority of sign changes do indeed take place in the first few iterations. Any sign changes observed well after the first few iterations correspond to entries with limiting values close to zero. We also have no information on optimality in any sense of the iterated transformations we study. One reason for our thinking that our topic is inherently difficult is that we were unable to view successive iterates as “contractions” in any sense familiar to us.

If we take any original set of numbers, and multiply each number by the realized value of a positive random variable with arbitrarily heavy tails, then convergence is unchanged. Normalization entails that after half a single iteration the same points on the surface of the relevant unit spheres are attained, no matter the multiple. The message is that what matters to convergence are the distributions induced on the surfaces of spheres after each half iteration, and not otherwise common heaviness of the tails of the probability distributions of individual entries.
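The scale invariance described above — that each half-iteration of normalization is unchanged by positive multiples — can be checked numerically in a few lines. This is a minimal numpy sketch under our own illustrative `polish` helper; the scalar multiple stands in for the realized value of an arbitrarily heavy-tailed positive random variable:

```python
import numpy as np

def polish(x, axis):
    """One half-iteration: subtract means and divide by standard
    deviations along `axis` (0 = columns, 1 = rows)."""
    x = x - x.mean(axis=axis, keepdims=True)
    return x / x.std(axis=axis, keepdims=True)

rng = np.random.default_rng(4)
x = rng.uniform(size=(6, 8))
c = 37.5  # any positive scale multiple

# normalization is constant along rays: polishing c*x reproduces
# polishing x itself, so the heaviness of the tails of the entry
# distribution is irrelevant to convergence
same_cols = np.allclose(polish(c * x, axis=0), polish(x, axis=0))
same_rows = np.allclose(polish(c * x, axis=1), polish(x, axis=1))
```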

The authors gratefully acknowledge Bradley Efron for introducing them to the research question addressed in the paper. They also thank Johann Won and Thomas Quertermous for useful discussions, and Thomas Quertermous for granting us access to and understanding of the data we study and reference; Bonnie Chung and Cindy Kirby are also acknowledged for administrative assistance. Richard Olshen was supported in part by NIH MERIT Award R37EB02784, and Bala Rajaratnam was supported in part by NSF grants DMS 0505303 and DMS 0906392 and NIH grant 5R01 EB001988-11 REV.

^{1}Since the number of iterations to convergence depends on the starting point, the length of the vector of the number of sign changes will vary accordingly. We summarize this vector by averaging over all the 1000 simulations the relative frequency of the number of sign changes for the first nine iterations. The first nine iterations were chosen as each of the 1000 simulations required at least 9 iterations to converge.

^{2}or approximately constant

^{3}2GHz Core 2 Duo processor and 2GB of RAM

Richard A. Olshen, Depts. of Health Research and Policy, Electrical Engineering, and Statistics, Stanford, CA 94305-5405, U.S.A.

Bala Rajaratnam, Department of Statistics, Stanford, CA 94305-4065, U.S.A.

1. Anderson TW. An introduction to multivariate statistical analysis. 3. Wiley; New Jersey: 2003.

2. Ashley EA, Ferrara R, King JY, Vailaya A, Kuchinsky A, He X, Byers B, Gerckens U, Oblin S, Tsalenko A, Soito A, Spin J, Tabibiazar R, Connolly AJ, Simpson JB, Grube E, Quertermous T. Network analysis of human in-stent restenosis. Circulation. 2006;114(24):2644–2654. [PubMed]

3. Doob JL. Regularity properties of certain functions of chance variables. Transactions of the American Mathematical Society. 1940;47(2):455–486.

4. Durrett R. Probability: Theory and Examples. 2. Duxbury Press; Belmont: 1995.

5. Efron B. Student’s t-test under symmetry conditions. Journal of the American Statistical Association. 1969;64(328):1278–1302.

6. Feller W. An Introduction to Probability Theory and Its Applications. 2. Vol. 2. Wiley; New York: 1966.

7. Gnedenko BV, Kolmogorov AN. Limit Distributions for Sums of Independent Random Variables. Addison-Wesley; Boston: 1954.

8. Muirhead RJ. Aspects of Multivariate Statistical Theory. Wiley; New York: 1999.

9. Scheffé H. The Analysis of Variance. Wiley; New York: 1999. (Reprint)

10. Zolotarev VM. One-dimensional Stable Distributions. American Mathematical Society; Providence: 1986.
