Article sections

- 1. Introduction
- 2. Nemirovski's Approach: Deterministic Inequalities for Norms
- 3. The probabilistic approach: type and cotype inequalities
- 4. The Empirical Process Approach: Truncation and Bernstein's Inequality
- 5. Comparisons
- 6. Proofs
- References

Am Math Mon. Author manuscript; available in PMC 2010 July 1.

Published in final edited form as:

Am Math Mon. 2010; 117(2): 138–160.

doi: 10.4169/000298910X476059. PMCID: PMC2834376

NIHMSID: NIHMS177692

Lutz Dümbgen, Institute of Mathematical Statistics and Actuarial Science, University of Bern, Alpeneggstrasse 22, CH-3012 Bern, Switzerland;

Lutz Dümbgen: duembgen@stat.unibe.ch; Sara A. van de Geer: geer@stat.math.ethz.ch; Mark C. Veraar: m.c.veraar@tudelft.nl, mark@profsonline.nl; Jon A. Wellner: jaw@stat.washington.edu


Our starting point is the following well-known theorem from probability: Let *X*_{1}, …, *X _{n}* be independent random variables with finite second moments, and let
${S}_{n}={\sum}_{i=1}^{n}{X}_{i}$. Then

$$\text{Var}({S}_{n})=\sum _{i=1}^{n}\text{Var}({X}_{i}).$$

(1)

If we suppose that each *X _{i}* has mean zero,

$$\mathbb{E}{S}_{n}^{2}=\sum _{i=1}^{n}\mathbb{E}{X}_{i}^{2}.$$

(2)
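As a small numerical sanity check (not part of the original argument), the additivity of variances in (1) can be verified exactly by enumerating the product measure of a few independent discrete variables; the distributions below are illustrative choices of ours.

```python
import itertools

# Sanity check of (1): for independent random variables the variance of the
# sum equals the sum of the variances. The distributions are illustrative.
def var(dist):
    """Variance of a finite discrete distribution given as (value, prob) pairs."""
    mean = sum(v * p for v, p in dist)
    return sum((v - mean) ** 2 * p for v, p in dist)

X = [
    [(-1, 0.5), (1, 0.5)],            # a Rademacher variable
    [(0, 0.25), (2, 0.75)],
    [(-3, 0.1), (0, 0.6), (5, 0.3)],
]

# Distribution of S_n = X_1 + X_2 + X_3 under the product measure.
S = {}
for outcome in itertools.product(*X):
    s = sum(v for v, _ in outcome)
    prob = 1.0
    for _, p in outcome:
        prob *= p
    S[s] = S.get(s, 0.0) + prob

lhs = var(list(S.items()))
rhs = sum(var(d) for d in X)
print(abs(lhs - rhs) < 1e-12)  # True
```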

This equality generalizes easily to vectors in a Hilbert space (ℍ, ⟨·, ·⟩): If the *X*_{i}'s are independent random vectors with values in ℍ and mean zero, then

$$\mathbb{E}{\Vert {S}_{n}\Vert}^{2}=\sum _{i,j=1}^{n}\mathbb{E}\langle {X}_{i},{X}_{j}\rangle =\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}^{2}.$$

(3)

What happens if the *X*_{i}'s take values in a (real) Banach space (𝔹, ‖ · ‖)? In such cases, in particular when the square of the norm ‖ · ‖ is not given by an inner product, we aim at inequalities of the following type: if 𝔼*X*_{i} = 0 for all *i*, then

$$\mathbb{E}{\Vert {S}_{n}\Vert}^{2}\le K\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}^{2}$$

(4)

for some constant *K* depending only on (𝔹, ‖ · ‖).

For statistical applications, the case
$(\mathbb{B},\Vert \cdot \Vert )={\ell}_{r}^{d}\u2254({\mathbb{R}}^{d},\Vert \cdot {\Vert}_{r})$ for some *r* ∈ [1, ∞] is of particular interest. Here the *r*-norm of a vector *x* ∈ ℝ^{d} is defined as

$${\Vert x\Vert}_{r}\u2254\{\begin{array}{ll}{\left(\sum _{j=1}^{d}|{x}_{j}{|}^{r}\right)}^{1/r}& \text{if}\phantom{\rule{0.2em}{0ex}}1\le r<\infty ,\\ \underset{1\le j\le d}{max}|{x}_{j}|& \text{if}\phantom{\rule{0.2em}{0ex}}r=\infty .\end{array}$$

(5)
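For concreteness, the *r*-norm in (5) can be sketched in a few lines of Python; the function name `norm_r` is ours.

```python
import math

def norm_r(x, r):
    # The r-norm from (5); r = math.inf gives the maximum norm.
    if math.isinf(r):
        return max(abs(v) for v in x)
    return sum(abs(v) ** r for v in x) ** (1.0 / r)

x = [3.0, -4.0, 1.0]
print(norm_r(x, 1))         # 8.0
print(norm_r(x, 2))         # sqrt(26), about 5.099
print(norm_r(x, math.inf))  # 4.0
```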

An obvious question is how the exponent *r* and the dimension *d* enter an inequality of type (4). The influence of the dimension *d* is crucial, since current statistical research often involves small or moderate “sample size” *n* (the number of independent units), say on the order of 10^{2} or 10^{4}, while the number *d* of items measured for each independent unit is large, say on the order of 10^{6} or 10^{7}. The following two examples for the random vectors *X*_{i} provide lower bounds for the constant *K* in (4).

Let *b*_{1}, *b*_{2}, …, *b*_{d} denote the standard basis of ℝ^{d}.

Let *X*_{1}, *X*_{2}, *X*_{3}, … be independent random vectors, each uniformly distributed on {−1, 1}^{d}. By the multivariate central limit theorem, *n*^{−1/2}*S*_{n} converges in distribution to a standard Gaussian random vector *Z* = (*Z*_{j})_{j=1}^{d} as *n* → ∞, so that

$$\underset{n\ge 1}{sup}\frac{\mathbb{E}{\Vert {S}_{n}\Vert}_{\infty}^{2}}{{\sum}_{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{\infty}^{2}}=\underset{n\ge 1}{sup}\mathbb{E}{\Vert {n}^{-1/2}{S}_{n}\Vert}_{\infty}^{2}\ge \mathbb{E}{\Vert Z\Vert}_{\infty}^{2}=\mathbb{E}\underset{1\le j\le d}{max}{Z}_{j}^{2}.$$

But it is well known that
${max\phantom{\rule{0.2em}{0ex}}}_{1\le j\le d}|{Z}_{j}|-\sqrt{2logd}{\to}_{p}0$ as *d* → ∞. Thus candidates *K* (*d*) for the constant in (4) have to satisfy

$$\underset{d\to \infty}{liminf}\frac{K(d)}{2logd}\ge 1.$$
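A quick Monte Carlo sketch (illustrative only, not part of the original argument) shows how 𝔼 max_{j≤d} *Z*_{j}² tracks 2 log *d*; the function name and sample sizes are our choices.

```python
import math, random

random.seed(0)

def mc_max_sq_gauss(d, n_sim=2000):
    # Monte Carlo estimate of E max_{1<=j<=d} Z_j^2 for i.i.d. standard
    # Gaussians Z_1, ..., Z_d; accuracy is limited by simulation noise.
    total = 0.0
    for _ in range(n_sim):
        total += max(random.gauss(0.0, 1.0) ** 2 for _ in range(d))
    return total / n_sim

for d in (10, 100, 1000):
    print(d, round(mc_max_sq_gauss(d), 2), round(2 * math.log(d), 2))
```

The two printed columns approach each other (in ratio) as *d* grows, in line with the liminf statement above.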

At least three different methods have been developed to prove inequalities of the form given by (4). The three approaches known to us are:

- (a) deterministic inequalities for norms;
- (b) probabilistic methods for Banach spaces;
- (c) empirical process methods.

Approach (a) was used by Nemirovski [14] to show that in the space
${\ell}_{r}^{d}$ with *d* ≥ 2, inequality (4) holds with *K* = *C* min(*r*, log(*d*)) for some universal (but unspecified) constant *C*. In view of Example 1.2, this constant has the correct order of magnitude if *r* = ∞. For statistical applications see Greenshtein and Ritov [7]. Approach (b) uses special moment inequalities from probability theory on Banach spaces which involve nonrandom vectors in 𝔹 and Rademacher variables as introduced in Example 1.1. Empirical process theory (approach (c)) in general deals with sums of independent random elements in infinite-dimensional Banach spaces. By means of chaining arguments, metric entropies, and approximation arguments, “maximal inequalities” for such random sums are built from basic inequalities for sums of independent random variables or finite-dimensional random vectors, in particular “exponential inequalities”; see, e.g., Dudley [4], van der Vaart and Wellner [26], Pollard [21], de la Peña and Giné [3], or van de Geer [25].

Our main goal in this paper is to compare the inequalities resulting from these different approaches and to refine or improve the constants *K* obtainable by each method. The remainder of this paper is organized as follows: In Section 2 we review several deterministic inequalities for norms and, in particular, key arguments of Nemirovski [14]. Our exposition includes explicit and improved constants. While finishing the present paper we became aware of yet unpublished work of Nemirovski [15] and Juditsky and Nemirovski [12] who also improved some inequalities of [14]. Rio [22] uses similar methods in a different context. In Section 3 we present inequalities of type (4) which follow from type and cotype inequalities developed in probability theory on Banach spaces. In addition, we provide and utilize a new type inequality for the normed space
${\ell}_{\infty}^{d}$. To do so we utilize, among other tools, exponential inequalities of Hoeffding [9] and Pinelis [17]. In Section 4 we follow approach (c) and treat
${\ell}_{\infty}^{d}$ by means of a truncation argument and Bernstein's exponential inequality. Finally, in Section 5 we compare the inequalities resulting from these three approaches. In that section we relax the assumption that 𝔼*X*_{i} = 0 for a more thorough understanding of the differences between the three approaches. Most proofs are deferred to Section 6.

In this section we review and refine inequalities of type (4) based on deterministic inequalities for norms. The considerations for $(\mathbb{B},\Vert \cdot \Vert )={\ell}_{r}^{d}$ follow closely the arguments of [14].

Throughout this subsection let 𝔹 = ℝ^{d}, equipped with one of the norms ‖ · ‖_{r} defined in (5).

Recall that for any *x* ∈ ℝ^{d},

$${\Vert x\Vert}_{r}\le {\Vert x\Vert}_{q}\le {d}^{1/q-1/r}{\Vert x\Vert}_{r}\phantom{\rule{1.0em}{0ex}}\text{for}\phantom{\rule{0.2em}{0ex}}1\le q<r\le \infty .$$

(6)
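The norm comparison (6) is easy to spot-check numerically; the following sketch (our own, illustrative) tests it on random vectors for several pairs (*q*, *r*).

```python
import math, random

random.seed(1)

def norm_r(x, r):
    if math.isinf(r):
        return max(abs(v) for v in x)
    return sum(abs(v) ** r for v in x) ** (1.0 / r)

# Spot-check (6): ||x||_r <= ||x||_q <= d^(1/q - 1/r) ||x||_r for q < r.
d = 20
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(d)]
    for q, r in [(1, 2), (2, 4), (2, math.inf), (1, math.inf)]:
        nr, nq = norm_r(x, r), norm_r(x, q)
        factor = d ** (1.0 / q - 1.0 / r)   # 1/inf == 0.0 in Python floats
        assert nr <= nq + 1e-9 and nq <= factor * nr + 1e-9
print("inequality (6) holds on all sampled vectors")
```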

Moreover, as mentioned before,

$$\mathbb{E}{\Vert {S}_{n}\Vert}_{2}^{2}=\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{2}^{2}.$$

Thus for 1 ≤ *q* < 2,

$$\mathbb{E}{\Vert {S}_{n}\Vert}_{q}^{2}\le {({d}^{1/q-1/2})}^{2}\mathbb{E}{\Vert {S}_{n}\Vert}_{2}^{2}={d}^{2/q-1}\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{2}^{2}\le {d}^{2/q-1}\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{q}^{2},$$

whereas for 2 < *r* ≤ ∞,

$$\mathbb{E}{\Vert {S}_{n}\Vert}_{r}^{2}\le \mathbb{E}{\Vert {S}_{n}\Vert}_{2}^{2}=\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{2}^{2}\le {d}^{1-2/r}\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{r}^{2}.$$

Thus we may conclude that (4) holds with

$$K=\stackrel{\sim}{K}(d,r)\u2254\{\begin{array}{cc}{d}^{2/r-1}& \text{if}\phantom{\rule{0.2em}{0ex}}1\le r\le 2,\\ {d}^{1-2/r}& \text{if}\phantom{\rule{0.2em}{0ex}}2\le r\le \infty .\end{array}$$

Example 1.1 shows that this constant $\stackrel{\sim}{K}(d,r)$ is indeed optimal for 1 ≤ *r* ≤ 2.

In what follows we shall replace $\stackrel{\sim}{K}(d,r)={d}^{1-2/r}$ (for *r* ≥ 2) with substantially smaller constants. The main ingredient is the following result:

For arbitrary fixed *r* ∈ [2, ∞) and *x* ∈ ℝ^{d} \ {0} let

$$h(x)\u22542{\Vert x\Vert}_{r}^{2-r}{(|{x}_{i}{|}^{r-2}{x}_{i})}_{i=1}^{d}$$

while *h*(0) := 0. Then for arbitrary *x*, *y* ∈ ℝ^{d},

$${\Vert x\Vert}_{r}^{2}+h{(x)}^{\text{T}}y\le {\Vert x+y\Vert}_{r}^{2}\le {\Vert x\Vert}_{r}^{2}+h{(x)}^{\text{T}}y+(r-1){\Vert y\Vert}_{r}^{2}.$$
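The two-sided bound of Lemma 2.1 can be sanity-checked numerically; the sketch below (ours, illustrative) draws random *r*, *x*, and *y* and verifies both inequalities.

```python
import random

random.seed(2)

def norm_r(x, r):
    return sum(abs(v) ** r for v in x) ** (1.0 / r)

def h(x, r):
    # The mapping h from Lemma 2.1, with h(0) := 0.
    nx = norm_r(x, r)
    if nx == 0.0:
        return [0.0] * len(x)
    return [2.0 * nx ** (2 - r) * abs(v) ** (r - 2) * v for v in x]

# Randomized check of the two-sided bound of Lemma 2.1.
for _ in range(200):
    r = random.uniform(2.0, 6.0)
    d = random.randint(1, 8)
    x = [random.uniform(-2, 2) for _ in range(d)]
    y = [random.uniform(-2, 2) for _ in range(d)]
    inner = sum(a * b for a, b in zip(h(x, r), y))
    lower = norm_r(x, r) ** 2 + inner
    middle = norm_r([a + b for a, b in zip(x, y)], r) ** 2
    upper = lower + (r - 1) * norm_r(y, r) ** 2
    assert lower <= middle + 1e-9 and middle <= upper + 1e-9
print("Lemma 2.1 verified on random inputs")
```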

[16] and [14] stated Lemma 2.1 with the factor *r* − 1 on the right side replaced with *Cr* for some (absolute) constant *C* > 1. Lemma 2.1, which is a special case of the more general Lemma 2.4 in the next subsection, may be applied to the partial sums *S*_{0} := 0 and
${S}_{k}\u2254{\sum}_{i=1}^{k}{X}_{i}$, 1 ≤ *k* ≤ *n*, to show that for 2 ≤ *r* < ∞,

$$\begin{array}{ll}\mathbb{E}{\Vert {S}_{k}\Vert}_{r}^{2}& \le \mathbb{E}\left({\Vert {S}_{k-1}\Vert}_{r}^{2}+h{({S}_{k-1})}^{\text{T}}{X}_{k}+(r-1){\Vert {X}_{k}\Vert}_{r}^{2}\right)\\ & =\mathbb{E}{\Vert {S}_{k-1}\Vert}_{r}^{2}+\mathbb{E}h{({S}_{k-1})}^{\text{T}}\mathbb{E}{X}_{k}+(r-1)\mathbb{E}{\Vert {X}_{k}\Vert}_{r}^{2}\\ & =\mathbb{E}{\Vert {S}_{k-1}\Vert}_{r}^{2}+(r-1)\mathbb{E}{\Vert {X}_{k}\Vert}_{r}^{2},\end{array}$$

and inductively we obtain a second candidate for *K* in (4):

$$\begin{array}{cc}\mathbb{E}{\Vert {S}_{n}\Vert}_{r}^{2}\le (r-1)\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{r}^{2}& \text{for}\phantom{\rule{0.2em}{0ex}}2\le r<\infty .\end{array}$$

Finally, we apply (6) again: For 2 ≤ *q* ≤ *r* ≤ ∞ with *q* < ∞,

$$\mathbb{E}{\Vert {S}_{n}\Vert}_{r}^{2}\le \mathbb{E}{\Vert {S}_{n}\Vert}_{q}^{2}\le (q-1)\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{q}^{2}\le (q-1){d}^{2/q-2/r}\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{r}^{2}.$$

This inequality entails our first (*q* = 2) and second (*q* = *r* < ∞) preliminary result, and we arrive at the following refinement:

For arbitrary *r* ∈ [2, ∞],

$$\mathbb{E}{\Vert {S}_{n}\Vert}_{r}^{2}\le {K}_{\text{Nem}}(d,r)\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}_{r}^{2}$$

with

$${K}_{\text{Nem}}(d,r)\u2254\underset{q\in [2,r]\cap \mathbb{R}}{inf}(q-1){d}^{2/q-2/r}.$$

This constant *K*_{Nem}(*d, r*) satisfies the (in)equalities

$${K}_{\text{Nem}}(d,r)\phantom{\rule{0.2em}{0ex}}\{\begin{array}{ll}={d}^{1-2/r}\hfill & \text{if}\phantom{\rule{0.2em}{0ex}}d\le 7\hfill \\ \le r-1\hfill & \text{if}\phantom{\rule{0.2em}{0ex}}r<\infty \hfill \\ \le 2elogd-e\hfill & \text{if}\phantom{\rule{0.2em}{0ex}}d\ge 3,\hfill \end{array}$$

and

$${K}_{\text{Nem}}(d,\infty )\ge 2elogd-3e.$$
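The constant *K*_{Nem}(*d*, *r*) and the bounds just stated can be checked numerically by minimizing (*q* − 1)*d*^{2/q−2/r} over a fine grid of *q*; the sketch below is ours and only approximates the infimum from above.

```python
import math

def K_nem(d, r=math.inf):
    # Numerical evaluation of K_Nem(d, r) = inf_{q in [2, r]} (q - 1) d^(2/q - 2/r)
    # on a fine grid (an approximation from above, adequate for tables).
    two_over_r = 0.0 if math.isinf(r) else 2.0 / r
    hi = min(r, 4.0 * math.log(max(d, 3)) + 4.0)  # minimizer lies below ~2 log d
    qs = (2.0 + k * (hi - 2.0) / 100000 for k in range(100001))
    return min((q - 1.0) * d ** (2.0 / q - two_over_r) for q in qs)

for d in (10, 1000, 10 ** 6):
    # Compare K_Nem(d, infinity) with the upper bound 2e log d - e.
    print(d, round(K_nem(d), 3), round(2 * math.e * math.log(d) - math.e, 3))
```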

In the case
$(\mathbb{B},\Vert \cdot \Vert )={\ell}_{\infty}^{d}$ with *d* ≥ 3, inequality (4) holds with constant *K* = 2*e* log *d* − *e*. If the *X*_{i}'s are also identically distributed, then

$$\mathbb{E}{\Vert {n}^{-1/2}{S}_{n}\Vert}_{\infty}^{2}\le (2elogd-e)\mathbb{E}{\Vert {X}_{1}\Vert}_{\infty}^{2}.$$

Note that

$$\underset{d\to \infty}{lim}\frac{{K}_{\text{Nem}}(d,\infty )}{2logd}=\underset{d\to \infty}{lim}\frac{2elogd-e}{2logd}=e.$$

Thus Example 1.2 entails that for large dimension *d*, the constants *K*_{Nem}(*d*, ∞) and 2*e* log *d* − *e* are optimal up to a factor close to *e* ≈ 2.7183.

Lemma 2.1 is a special case of a more general inequality: Let (*T*, Σ, *μ*) be a σ-finite measure space, and for 1 ≤ *r* < ∞ let *L*_{r}(*μ*) denote the space of measurable functions *f* : *T* → ℝ with finite norm

$${\Vert f\Vert}_{r}\u2254{\left(\int |f{|}^{r}d\mu \right)}^{1/r},$$

where two such functions are viewed as equivalent if they coincide almost everywhere with respect to *μ*. In what follows we investigate the functional

$$f\mapsto V(f)\u2254{\Vert f\Vert}_{r}^{2}$$

on *L*_{r}(*μ*) for *r* ≥ 2.

Note that *V*(·) is convex; thus for fixed *f*, *g* ∈ *L*_{r}(*μ*), the function

$$\begin{array}{cc}v(t)\u2254V(f+tg)={\Vert f+tg\Vert}_{r}^{2},& t\in \mathbb{R}\end{array}$$

is convex with derivative

$${v}^{\prime}(t)={v}^{1-r/2}(t)\int 2|f+tg{|}^{r-2}(f+tg)gd\mu .$$

By convexity of *v* it follows that

$$V(f+g)-V(f)=v(1)-v(0)\ge {v}^{\prime}(0)\u2254DV(f,g).$$

This proves the lower bound in the following lemma. We will prove the upper bound in Section 6 by computation of *v*″ and application of Hölder's inequality.

Let *r* ≥ 2. Then for arbitrary *f*, *g* ∈ *L*_{r}(*μ*),

$$DV(f,g)=\int h(f)g\phantom{\rule{0.2em}{0ex}}d\mu \phantom{\rule{0.2em}{0ex}}\mathit{\text{with}}\phantom{\rule{0.2em}{0ex}}h\phantom{\rule{0.2em}{0ex}}(f)\u22542{\Vert f\Vert}_{r}^{2-r}{\left|f\right|}^{r-2}f\in {L}_{q}(\mu ),$$

where *q* := *r*/(*r* − 1). Moreover,

$$V(f)+DV(f,g)\le V(f+g)\le V(f)+DV(f,g)+(r-1)\phantom{\rule{0.2em}{0ex}}V(g).$$

The upper bound for *V*(*f* + *g*) is sharp in the following sense: Suppose that *μ*(*T*) < ∞, and let *f*, *g*_{o} : *T* → ℝ be measurable such that |*f*| ≡ 1 ≡ |*g*_{o}| and ∫ *f* *g*_{o} *dμ* = 0. Then

$$\frac{V(f+t{g}_{o})-V(f)-DV(f,t{g}_{o})}{V(t{g}_{o})}\to r-1\phantom{\rule{0.2em}{0ex}}\text{as}\phantom{\rule{0.2em}{0ex}}t\to \text{0}.$$

If *r* = 2, Lemma 2.4 is well known and easily verified. Here the upper bound for *V*(*f* + *g*) is even an equality, i.e.,

$$V(f+g)=V(f)+DV(f,g)+V(g).$$

Lemma 2.4 improves on an inequality of [16]. After writing this paper we realized that Lemma 2.4 was also proved by Pinelis [18]; see his (2.2) and Proposition 2.1, page 1680.

Lemma 2.4 leads directly to the following result:

In the case 𝔹 = *L*_{r}(*μ*) for some *r* ∈ [2, ∞), inequality (4) holds with *K* = *r* − 1.

For any Banach space (𝔹, ‖ · ‖) and Hilbert space (ℍ, ⟨·, ·⟩, ‖ · ‖), their Banach–Mazur distance *D*(𝔹, ℍ) is defined to be the infimum of

$$\Vert T\Vert \xb7\Vert {T}^{-1}\Vert $$

over all linear isomorphisms *T* : 𝔹 → ℍ, where ‖*T*‖ and ‖*T*^{−1}‖ denote the usual operator norms

$$\begin{array}{c}\Vert T\Vert \u2254sup\left\{\Vert Tx\Vert :x\in \mathbb{B},\phantom{\rule{0.2em}{0ex}}\Vert x\Vert \le 1\right\},\\ \Vert {T}^{-1}\Vert \u2254sup\left\{\Vert {T}^{-1}y\Vert :y\in \mathbb{H},\phantom{\rule{0.2em}{0ex}}\Vert y\Vert \le 1\right\}.\end{array}$$

(If no such bijection exists, one defines *D*(𝔹, ℍ) := ∞.) Given such a bijection *T*,

$$\begin{array}{ll}\mathbb{E}{\Vert {S}_{n}\Vert}^{2}\hfill & \le {\Vert {T}^{-1}\Vert}^{2}\mathbb{E}{\Vert T{S}_{n}\Vert}^{2}\hfill \\ & ={\Vert {T}^{-1}\Vert}^{2}\sum _{i=1}^{n}\mathbb{E}{\Vert T{X}_{i}\Vert}^{2}\hfill \\ & \le {\Vert {T}^{-1}\Vert}^{2}{\Vert T\Vert}^{2}\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}^{2}.\hfill \end{array}$$

This leads to the following observation:

For any Banach space (𝔹, ‖ · ‖) and any Hilbert space (ℍ, ⟨·, ·⟩, ‖ · ‖) with finite Banach–Mazur distance *D*(𝔹, ℍ), inequality (4) is satisfied with *K* = *D*(𝔹, ℍ)^{2}.

A famous result from geometric functional analysis is John's theorem (see [24], [11]) for finite-dimensional normed spaces. It entails that $D\left(\mathbb{B},{\ell}_{2}^{dim\mathbb{B}}\right)\le \sqrt{dim\mathbb{B}}$, which yields the following fact:

For any normed space (𝔹, ‖ · ‖) with finite dimension, inequality (4) is satisfied with *K* = dim(𝔹).

Note that Example 1.1 with *r* = 1 provides an example where the constant *K* = dim(𝔹) is optimal.

Let {*ε*_{i}} denote a sequence of independent Rademacher random variables, i.e., ℙ(*ε*_{i} = 1) = ℙ(*ε*_{i} = −1) = 1/2. Let 1 ≤ *p* ≤ 2. The Banach space 𝔹 is of *(Rademacher) type p* if there is a constant *T*_{p} such that for all finite sequences {*x*_{i}} in 𝔹,

$$\mathbb{E}{\Vert \sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}\Vert}^{p}\le {T}_{p}^{p}\sum _{i=1}^{n}{\Vert {x}_{i}\Vert}^{p}.$$

Similarly, for 1 ≤ *q* < ∞, 𝔹 is of *(Rademacher) cotype q* if there is a constant *C*_{q} such that for all finite sequences {*x*_{i}} in 𝔹,

$$\mathbb{E}{\Vert \sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}\Vert}^{q}\ge {C}_{q}^{-q}\sum _{i=1}^{n}{\Vert {x}_{i}\Vert}^{q}.$$

Ledoux and Talagrand [13, p. 247] note that type and cotype properties appear as dual notions: if a Banach space 𝔹 is of type *p*, its dual space 𝔹′ is of cotype *q* = *p*/(*p* − 1).

One of the basic results concerning Banach spaces with type *p* and cotype *q* is the following proposition:

[13, Proposition 9.11, p. 248]. If 𝔹 is of type *p* ≥ 1 with constant *T*_{p}, then

$$\mathbb{E}{\Vert {S}_{n}\Vert}^{p}\le {(2{T}_{p})}^{p}\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}^{p}.$$

If 𝔹 is of cotype *q* ≥ 1 with constant *C*_{q}, then

$$\mathbb{E}{\Vert {S}_{n}\Vert}^{q}\ge {(2{C}_{q})}^{-q}\sum _{i=1}^{n}\mathbb{E}{\Vert {X}_{i}\Vert}^{q}.$$

As shown in [13, p. 27], the Banach space *L*_{r}(*μ*) with 1 ≤ *r* < ∞ is of type min(*r*, 2) and cotype max(*r*, 2). The following result gives an explicit type 2 constant:

For 2 ≤ *r* < ∞, the space *L*_{r}(*μ*) is of type 2 with constant *T*_{2} = *B*_{r}, where

$${B}_{r}\u2254{2}^{1/2}{\left(\frac{\Gamma ((r+1)/2)}{\sqrt{\pi}}\right)}^{1/r}.$$

For 𝔹 = *L*_{r}(*μ*) with 2 ≤ *r* < ∞, combining the preceding type result with Proposition 3.1 shows that inequality (4) is satisfied with *K* = (2*B*_{r})^{2} = 4*B*_{r}^{2}.

Note that *B*_{2} = 1 and

$$\begin{array}{cc}\frac{{B}_{r}}{\sqrt{r}}\to \frac{1}{\sqrt{e}}& \text{as}\phantom{\rule{0.2em}{0ex}}r\to \infty .\end{array}$$

Thus for large values of *r*, the conclusion of Corollary 3.3 is weaker than that of Corollary 2.8.

The preceding results apply only to *r* < ∞, so the special space
${\ell}_{\infty}^{d}$ requires different arguments. First we deduce a new type inequality based on Hoeffding's [9] exponential inequality: if *ε*_{1}, *ε*_{2}, …, *ε*_{n} are independent Rademacher random variables and *a*_{1}, …, *a*_{n} are real constants with ${v}^{2}\u2254{\sum}_{i=1}^{n}{a}_{i}^{2}$, then

$$\begin{array}{cc}\mathbb{P}\left(\left|\sum _{i=1}^{n}{a}_{i}{\epsilon}_{i}\right|\ge z\right)\le 2exp\left(-\frac{{z}^{2}}{2{v}^{2}}\right),& z\ge 0.\end{array}$$

(7)

At the heart of these tail bounds is the following exponential moment bound:

$$\begin{array}{cc}\mathbb{E}exp\left(t\sum _{i=1}^{n}{a}_{i}{\epsilon}_{i}\right)\le exp({t}^{2}{v}^{2}/2),& t\in \mathbb{R}.\end{array}$$

(8)

From the latter bound we shall deduce the following type inequality in Section 6:

The space ${\ell}_{\infty}^{d}$ is of type 2 with constant $\sqrt{2log(2\mathit{\text{d}})}$.
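The type 2 claim 𝔼‖Σ *ε*_{i}*x*_{i}‖_{∞}² ≤ 2 log(2*d*) Σ ‖*x*_{i}‖_{∞}² can be illustrated by simulation; the vectors below are an arbitrary choice of ours, and the comparison is only a sanity check, not a proof.

```python
import math, random

random.seed(3)

# Monte Carlo check of the type 2 inequality for l_inf^d:
#   E || sum_i eps_i x_i ||_inf^2  <=  2 log(2d) * sum_i ||x_i||_inf^2.
d, n, n_sim = 30, 50, 2000
xs = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
rhs = 2.0 * math.log(2 * d) * sum(max(abs(v) for v in x) ** 2 for x in xs)

total = 0.0
for _ in range(n_sim):
    s = [0.0] * d
    for x in xs:
        eps = random.choice((-1.0, 1.0))
        for j in range(d):
            s[j] += eps * x[j]
    total += max(abs(v) for v in s) ** 2
lhs = total / n_sim  # Monte Carlo estimate of the left-hand side

print(lhs <= rhs)  # True
```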

Using this upper bound together with Proposition 3.1 yields another Nemirovski-type inequality:

For
$(\mathbb{B},\Vert \cdot \Vert )={\ell}_{\infty}^{d}$, inequality (4) is satisfied with *K* = *K*_{Type2}(*d*, ∞) = 8 log(2*d*).

Let ${T}_{2}({\ell}_{\infty}^{d})$ be the optimal type-2 constant for the space ${\ell}_{\infty}^{d}$. So far we know that ${T}_{2}({\ell}_{\infty}^{d})\le \sqrt{2log(2d)}$. Moreover, by a modification of Example 1.2 one can show that

$${T}_{2}({\ell}_{\infty}^{d})\ge {c}_{d}\u2254\sqrt{\mathbb{E}\underset{1\le j\le d}{max}{Z}_{j}^{2}}.$$

(9)

The constants *c*_{d} can be expressed or bounded in terms of the standard Gaussian distribution function Φ: with $W\u2254{max\phantom{\rule{0.2em}{0ex}}}_{1\le j\le d}|{Z}_{j}|$,

$${c}_{d}^{2}=\mathbb{E}({W}^{2})=\mathbb{E}{\int}_{0}^{\infty}2t{1}_{[t\le W]}dt={\int}_{0}^{\infty}2t\mathbb{P}(W\ge t)dt,$$

and for any *t* > 0,

$$\mathbb{P}(W\ge t)\{\begin{array}{l}=1-\mathbb{P}(W<t)=1-\mathbb{P}{(|{Z}_{1}|<t)}^{d}=1-{(2\Phi (t)-1)}^{d},\hfill \\ \le d\mathbb{P}(|{Z}_{1}|\ge t)=2d(1-\Phi (t)).\hfill \end{array}$$

These considerations and various bounds for Φ will allow us to derive explicit bounds for *c _{d}*.
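Indeed, combining the two displays above gives a way to evaluate *c*_{d}² numerically; the following sketch (ours, with an illustrative integration range) does this by quadrature using `math.erf`.

```python
import math

def Phi(t):
    # Standard Gaussian distribution function via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def c_d_squared(d, upper=12.0, steps=24000):
    # Trapezoidal evaluation of c_d^2 = int_0^inf 2 t P(W >= t) dt with
    # P(W >= t) = 1 - (2 Phi(t) - 1)^d; the tail beyond `upper` is
    # negligible for moderate d.
    h = upper / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        f = 2.0 * t * (1.0 - (2.0 * Phi(t) - 1.0) ** d)
        total += f * (0.5 if k in (0, steps) else 1.0)
    return total * h

for d in (1, 10, 100, 1000):
    # c_1^2 = E Z^2 = 1; for large d, c_d^2 is of order 2 log d.
    print(d, round(c_d_squared(d), 4), round(2 * math.log(d), 4) if d > 1 else 0.0)
```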

On the other hand, Hoeffding's inequality (7) has been refined by Pinelis [17, 20] as follows:

$$\begin{array}{cc}\mathbb{P}\left(\left|\sum _{i=1}^{n}{a}_{i}{\epsilon}_{i}\right|\ge z\right)\le 2K(1-\Phi (z/v)),& z>0,\end{array}$$

(10)

where *K* satisfies 3.18 ≤ *K* ≤ 3.22. This will be the main ingredient for refined upper bounds for
${T}_{2}({\ell}_{\infty}^{d})$. The next lemma summarizes our findings:

The constants *c _{d}* and
${T}_{2}({\ell}_{\infty}^{d})$ satisfy the following inequalities:

$$\sqrt{2logd+{h}_{1}(d)}\le {c}_{d}\le \{\begin{array}{ll}{T}_{2}({\ell}_{\infty}^{d})\le \sqrt{2logd+{h}_{2}(d)},\hfill & d\ge 1\hfill \\ \sqrt{2logd},\hfill & d\ge 3\hfill \\ \sqrt{2logd+{h}_{3}(d)},\hfill & d\ge 1\hfill \end{array}$$

(11)

where *h*_{2}(*d*) ≤ 3, *h*_{2}(*d*) becomes negative for *d* > 4.13795 × 10^{10}, *h*_{3}(*d*) becomes negative for *d* ≥ 14, and the functions *h*_{j} are specified explicitly in the proof.

In particular, one could replace *K*_{Type2}(*d*, ∞) in Corollary 3.5 with 8 log *d* + 4*h*_{2}(*d*).

An alternative to Hoeffding's exponential tail inequality (7) is a classical exponential bound due to Bernstein (see, e.g., [2]): Let *Y*_{1}, *Y*_{2}, …, *Y*_{n} be independent random variables with mean zero such that |*Y*_{i}| ≤ *κ* for some constant *κ* > 0, and set ${v}^{2}\u2254{\sum}_{i=1}^{n}\text{Var}({Y}_{i})$. Then

$$\begin{array}{cc}\mathbb{P}\left(\left|\sum _{i=1}^{n}{Y}_{i}\right|\ge x\right)\le 2exp\left(-\frac{{x}^{2}}{2({v}^{2}+\kappa x/3)}\right),& x>0.\end{array}$$

(12)

We will not use this inequality itself but rather an exponential moment inequality underlying its proof:

For *L* > 0 define

$$e(L)\u2254exp(1/L)-1-1/L.$$

Let *Y* be a random variable with mean zero and variance σ^{2} such that |*Y*| < *κ*. Then for any *L* > 0,

$$\mathbb{E}exp\left(\frac{Y}{\kappa L}\right)\le 1+\frac{{\sigma}^{2}e(L)}{{\kappa}^{2}}\le exp\left(\frac{{\sigma}^{2}e(L)}{{\kappa}^{2}}\right).$$
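For a bounded two-point variable, this exponential moment bound can be verified exactly; the numbers below are illustrative choices of ours.

```python
import math

def e_func(L):
    # e(L) := exp(1/L) - 1 - 1/L from the lemma.
    return math.exp(1.0 / L) - 1.0 - 1.0 / L

# Exact check of the exponential moment bound for a two-point, mean-zero
# variable Y with |Y| <= kappa.
a, b, p = 0.8, -0.2, 0.2        # Y = a w.p. p, Y = b w.p. 1 - p
assert abs(p * a + (1 - p) * b) < 1e-12   # mean zero
kappa = 1.0
sigma2 = p * a * a + (1 - p) * b * b      # variance of Y

for L in (0.5, 1.0, 2.0, 5.0):
    lhs = p * math.exp(a / (kappa * L)) + (1 - p) * math.exp(b / (kappa * L))
    mid = 1.0 + sigma2 * e_func(L) / kappa ** 2
    assert lhs <= mid + 1e-12 <= math.exp(sigma2 * e_func(L) / kappa ** 2) + 1e-12
print("exponential moment bound verified")
```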

With the latter exponential moment bound we can prove a moment inequality for random vectors in ℝ^{d} with bounded components:

Suppose that
${X}_{i}={({X}_{i,j})}_{j=1}^{d}$ satisfies 𝔼*X*_{i} = 0 and ‖*X*_{i}‖_{∞} ≤ *κ* for all *i*, and set $\Gamma \u2254{max}_{1\le j\le d}{\sum}_{i=1}^{n}\mathbb{E}{X}_{i,j}^{2}$. Then for arbitrary *L* > 0,

$$\sqrt{\mathbb{E}\Vert {S}_{n}{\Vert}_{\infty}^{2}}\le \kappa Llog(2d)+\frac{\Gamma L\text{e}(L)}{\kappa}\cdot $$

Now we return to our general random vectors *X*_{i} with 𝔼*X*_{i} = 0 and decompose them as ${X}_{i}={X}_{i}^{(a)}+{X}_{i}^{(b)}$, where

$${X}_{i}^{(a)}\u2254{1}_{\left[\Vert {X}_{i}{\Vert}_{\infty}\le {\kappa}_{o}\right]}{X}_{i}\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}{X}_{i}^{(b)}\u2254{1}_{\left[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}\right]}{X}_{i}$$

for some constant *κ _{o}* > 0 to be specified later. Then we write

$${A}_{n}\u2254\sum _{i=1}^{n}\left({X}_{i}^{(a)}-\mathbb{E}{X}_{i}^{(a)}\right)\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}{B}_{n}\u2254\sum _{i=1}^{n}\left({X}_{i}^{(b)}-\mathbb{E}{X}_{i}^{(b)}\right)\cdot $$

The sum *A*_{n} involves centered random vectors with values in [−2*κ*_{o}, 2*κ*_{o}]^{d}, so the preceding moment inequality applies to it, while *B*_{n} can be bounded by cruder arguments. Choosing *κ*_{o} suitably leads to the following result:

In the case
$(\mathbb{B},\Vert \cdot \Vert )={\ell}_{\infty}^{d}$ for some *d* ≥ 1, inequality (4) holds with

$$K={K}_{\text{TrBern}}(d,\infty )\u2254{(1+3.46\sqrt{log(2d)})}^{2}\cdot $$

If each of the random vectors *X _{i}* is symmetrically distributed around 0, one may even set

$$K={K}_{\text{TrBern}}^{\text{(symm)}}(d,\infty )={\left(1+2.9\sqrt{log(2d)}\right)}^{2}\cdot $$

In this section we compare the three approaches just described for the space
${\ell}_{\infty}^{d}$. As to the random vectors *X _{i}*, we broaden our point of view and consider three different cases:

- **General case:** The random vectors *X*_{i} are independent with $\mathbb{E}\Vert {X}_{i}{\Vert}_{\infty}^{2}<\infty $ for all *i*.
- **Centered case:** In addition, 𝔼*X*_{i} = 0 for all *i*.
- **Symmetric case:** In addition, *X*_{i} is symmetrically distributed around 0 for all *i*.

In view of the general case, we reformulate inequality (4) as follows:

$$\mathbb{E}\Vert {S}_{n}-\mathbb{E}{S}_{n}{\Vert}_{\infty}^{2}\phantom{\rule{0.2em}{0ex}}\le K\sum _{i=1}^{n}\mathbb{E}\Vert {X}_{i}{\Vert}_{\infty}^{2}\cdot $$

(13)

One reason for this extension is that in some applications, particularly in connection with empirical processes, it is easier and more natural to work with uncentered summands *X _{i}*. Let us discuss briefly the consequences of this extension in the three frameworks:

Between the centered and symmetric cases there is no difference. If (4) holds in the centered case for some *K*, then in the general case

$$\mathbb{E}\Vert {S}_{n}-\mathbb{E}{S}_{n}{\Vert}_{\infty}^{2}\phantom{\rule{0.2em}{0ex}}\le K\sum _{i=1}^{n}\mathbb{E}\Vert {X}_{i}-\mathbb{E}{X}_{i}{\Vert}_{\infty}^{2}\phantom{\rule{0.2em}{0ex}}\le 4K\sum _{i=1}^{n}\mathbb{E}\Vert {X}_{i}{\Vert}_{\infty}^{2}\cdot $$

The latter inequality follows from the general fact that

$$\mathbb{E}\Vert Y-\mathbb{E}Y{\Vert}^{2}\le \mathbb{E}\left({(\Vert Y\Vert +\Vert \mathbb{E}Y\Vert )}^{2}\right)\le 2\mathbb{E}\Vert Y{\Vert}^{2}+2\Vert \mathbb{E}Y{\Vert}^{2}\phantom{\rule{0.2em}{0ex}}\le 4\mathbb{E}\Vert Y{\Vert}^{2}\cdot $$

This looks rather crude at first glance, but in the case of the maximum norm and high dimension *d*, the factor 4 cannot be reduced. For let *Y* ∈ ℝ^{d} have independent components *Y*_{1}, …, *Y*_{d} with ℙ(*Y*_{j} = 1) = *p* = 1 − ℙ(*Y*_{j} = −1) for some *p* ∈ (0, 1). Then ‖*Y*‖_{∞} ≡ 1, whereas

$$\Vert \mathit{\text{Y}}-\mathbb{E}Y{\Vert}_{\infty}=\{\begin{array}{ll}2(1-p)& \text{if}\phantom{\rule{0.2em}{0ex}}{Y}_{1}=\cdots ={Y}_{d}=1,\\ 2p& \text{otherwise}.\end{array}$$

Hence

$$\frac{\mathbb{E}\Vert Y-\mathbb{E}Y{\Vert}_{\infty}^{2}}{\mathbb{E}\Vert Y{\Vert}_{\infty}^{2}}=4\left({(1-p)}^{2}{p}^{d}+{p}^{2}(1-{p}^{d})\right)\cdot $$

If we set *p* = 1 − *d*^{−1/2} for *d* ≥ 4, then this ratio converges to 4 as *d* → ∞.
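This limit is easy to confirm numerically; the sketch below (ours) evaluates the displayed ratio at *p* = 1 − *d*^{−1/2} for growing *d*.

```python
def ratio(d):
    # E||Y - EY||_inf^2 / E||Y||_inf^2 for the two-point example above,
    # evaluated at p = 1 - d^(-1/2); note E||Y||_inf^2 = 1 here.
    p = 1.0 - d ** -0.5
    return 4.0 * ((1.0 - p) ** 2 * p ** d + p ** 2 * (1.0 - p ** d))

for d in (4, 100, 10 ** 4, 10 ** 6):
    print(d, round(ratio(d), 4))  # the ratio climbs toward 4
```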

The first part of Proposition 3.1, involving the Rademacher type constant *T*_{p}, remains valid if we drop the assumption that 𝔼*X*_{i} = 0.

Our proof for the centered case does not utilize that 𝔼*X*_{i} = 0, so again there is no difference between the centered and general cases. However, in the symmetric case, the truncated random vectors 1{‖*X*_{i}‖_{∞} ≤ *κ*_{o}}*X*_{i} are still symmetrically distributed around 0 and hence centered, which yields the smaller constant ${K}_{\text{TrBern}}^{\text{(symm)}}$.

Table 1 summarizes the constants *K* = *K*(*d*, ∞) we have found so far by the three different methods and for the three different cases. Table 2 contains the corresponding limits

$$K\ast \u2254\underset{d\to \infty}{lim}\frac{K(d,\infty )}{logd}\cdot $$

Interestingly, there is no global winner among the three methods. But for the centered case, Nemirovski's approach yields asymptotically the smallest constants. In particular,

$$\begin{array}{ll}\underset{d\to \infty}{lim}\frac{{K}_{\text{TrBern}}(d,\infty )}{{K}_{\text{Nem}}(d,\infty )}\hfill & =\frac{{3.46}^{2}}{2e}\doteq 2.20205,\hfill \\ \underset{d\to \infty}{lim}\frac{{K}_{\text{Type2}}(d,\infty )}{{K}_{\text{Nem}}(d,\infty )}\hfill & =\frac{4}{e}\doteq 1.47152,\hfill \\ \underset{d\to \infty}{lim}\frac{{K}_{\text{TrBern}}(d,\infty )}{{K}_{\text{Type2}}(d,\infty )}\hfill & =\frac{{3.46}^{2}}{8}\doteq \mathrm{1.49645.}\hfill \end{array}$$

The conclusion at this point seems to be that Nemirovski's approach and the type 2 inequalities yield better constants than Bernstein's inequality and truncation. Figure 1 shows the constants *K*(*d*, ∞) for the centered case over a certain range of dimensions *d*.

In the case *r* = ∞, the asserted inequalities read

$$\begin{array}{cc}{\Vert x\Vert}_{\infty}\le {\Vert x\Vert}_{q}\le {d}^{1/q}{\Vert x\Vert}_{\infty}& \text{for}\phantom{\rule{0.2em}{0ex}}1\le q\le \infty \end{array}$$

and are rather obvious. For 1 ≤ *q* < *r* < ∞, (6) is an easy consequence of Hölder's inequality.

In the case *r* = 2, *V*(*f* + *g*) is equal to *V*(*f*) + *DV*(*f*, *g*) + *V*(*g*). If *r* ≥ 2 and ‖*f*‖_{r} = 0, both asserted inequalities are trivial. Hence we may assume that *r* > 2 and ‖*f*‖_{r} > 0.

Note first that the mapping

$$\mathbb{R}\ni t\mapsto {h}_{t}\u2254|f+tg{|}^{r}$$

is pointwise twice continuously differentiable with derivatives

$$\begin{array}{l}{\dot{h}}_{t}=r|f+tg{|}^{r-1}\text{sign}(f+tg)g=r|f+tg{|}^{r-2}(f+tg)g,\hfill \\ {\ddot{h}}_{t}=r(r-1)|f+tg{|}^{r-2}{g}^{2}.\hfill \end{array}$$

By means of the inequality |*x* + *y*|^{b} ≤ 2^{b−1}(|*x*|^{b} + |*y*|^{b}) for real *x*, *y* and *b* ≥ 1, one obtains the envelope bounds

$$\begin{array}{l}\underset{|t|\le {t}_{0}}{max}|{\dot{h}}_{t}|\le r{2}^{r-2}\left(|f{|}^{r-1}|g|+{t}_{0}^{r-1}|g{|}^{r}\right),\hfill \\ \underset{|t|\le {t}_{0}}{max}|{\ddot{h}}_{t}|\le r(r-1){2}^{r-3}\left(|f{|}^{r-2}|g{|}^{2}+{t}_{0}^{r-2}|g{|}^{r}\right).\hfill \end{array}$$

The latter two envelope functions belong to *L*_{1}(*μ*). This follows from Hölder's inequality which we rephrase for our purposes in the form

$$\begin{array}{cc}\int |f{|}^{(1-\lambda )r}|g{|}^{\lambda r}d\mu \le {\Vert f\Vert}_{r}^{(1-\lambda )r}{\Vert g\Vert}_{r}^{\lambda r}& \text{for}\phantom{\rule{0.2em}{0ex}}0\le \lambda \le 1\end{array}.$$

(14)

Hence we may conclude via dominated convergence that

$$t\mapsto \stackrel{\sim}{v}(t)\u2254{\Vert f+tg\Vert}_{r}^{r}$$

is twice continuously differentiable with derivatives

$$\begin{array}{l}{\stackrel{\sim}{v}}^{\prime}(t)=r\int |f+tg{|}^{r-2}(f+tg)gd\mu ,\hfill \\ {\stackrel{\sim}{v}}^{\u2033}(t)=r(r-1)\int |f+tg{|}^{r-2}{g}^{2}d\mu .\hfill \end{array}$$

This entails that

$$t\mapsto v(t)\u2254V(f+tg)=\stackrel{\sim}{v}{(t)}^{2/r}$$

is continuously differentiable with derivative

$${v}^{\prime}(t)=(2/r)\stackrel{\sim}{v}{(t)}^{2/r-1}{\stackrel{\sim}{v}}^{\prime}(t)={\stackrel{\sim}{v}}^{2/r-1}(t)\int h(f+tg)gd\mu .$$

For *t* = 0 this entails the asserted expression for *DV*(*f*, *g*). Moreover, *v*(*t*) is twice continuously differentiable on the set {*t* : ‖*f* + *tg*‖_{r} > 0}, which equals either ℝ or ℝ \ {*t*_{o}} for some number *t*_{o}, with

$$\begin{array}{ll}{v}^{\u2033}(t)& =(2/r)\stackrel{\sim}{v}{(t)}^{2/r-1}{\stackrel{\sim}{v}}^{\u2033}(t)+(2/r)(2/r-1)\stackrel{\sim}{v}{(t)}^{2/r-2}{\stackrel{\sim}{v}}^{\prime}{(t)}^{2}\\ & =2(r-1)\int \frac{|f+tg{|}^{r-2}}{{\Vert f+tg\Vert}_{r}^{r-2}}{g}^{2}d\mu -2(r-2){\left(\int \frac{|f+tg{|}^{r-2}(f+tg)}{{\Vert f+tg\Vert}_{r}^{r-1}}gd\mu \right)}^{2}\\ & \le 2(r-1)\int {\left|\frac{f+tg}{{\Vert f+tg\Vert}_{r}}\right|}^{r-2}|g{|}^{2}d\mu \\ & \le 2(r-1){\Vert g\Vert}_{r}^{2}=2(r-1)V(g)\end{array}$$

by virtue of Hölder's inequality (14) with λ = *2*/*r*. Consequently, by using

$${v}^{\prime}(t)-{v}^{\prime}(0)={\int}_{0}^{t}{v}^{\u2033}(s)ds\le 2(r-1)V(g)t,$$

we find that

$$\begin{array}{l}V(f+g)-V(f)-DV(f,g)\hfill \\ \phantom{\rule{1.5em}{0ex}}=v(1)-v(0)-{v}^{\prime}(0)={\int}_{0}^{1}({v}^{\prime}(t)-{v}^{\prime}(0))dt\hfill \\ \phantom{\rule{1.5em}{0ex}}\le 2(r-1)V(g){\int}_{0}^{1}t\phantom{\rule{0.2em}{0ex}}dt=(r-1)V(g).\hfill \end{array}$$

The first part is an immediate consequence of the considerations preceding the theorem. It remains to prove the (in)equalities for *K*_{Nem}(*d*, *r*). Note that *K*_{Nem}(*d*, *r*) is the infimum of *h*(*q*)*d*^{−2/r} over all real *q* ∈ [2, *r*], where *h*(*q*) := (*q* − 1)*d*^{2/q}. The derivative of *h* equals

$${h}^{\prime}(q)=\frac{{d}^{2/q}}{{q}^{2}}\left({(q-logd)}^{2}-(logd-2)logd\right).$$

Since 7 < *e ^{2}* < 8, this shows that

$$\begin{array}{cc}{K}_{\text{Nem}}(d,r)=h(2){d}^{-2/r}={d}^{1-2/r}& \text{if}\phantom{\rule{0.2em}{0ex}}d\le 7.\end{array}$$

For *d* ≥ 8, one can easily show that log
$d-\sqrt{(logd-2)logd}<2$, so that *h* is strictly decreasing on [2, *r*_{d}] and strictly increasing on [*r*_{d}, ∞), where

$${r}_{d}\u2254logd+\sqrt{(logd-2)logd}\{\begin{array}{l}<2logd,\hfill \\ >2logd-2.\hfill \end{array}$$

Thus for *d* ≥ 8,

$${K}_{\text{Nem}}(d,r)=\{\begin{array}{ll}h(r){d}^{-2/r}=r-1<2logd-1\hfill & \text{if}\phantom{\rule{0.2em}{0ex}}r\le {r}_{d},\hfill \\ h({r}_{d}){d}^{-2/r}\le h(2logd)=2elogd-e\hfill & \text{if}\phantom{\rule{0.2em}{0ex}}r\ge {r}_{d}.\hfill \end{array}$$

Moreover, one can verify numerically that *K*_{Nem}(*d, r*) ≤ *d* ≤ 2*e* log *d* – *e* for 3 ≤ *d* ≤ 7.

Finally, for *d* ≥ 8, the inequalities
${r}_{d}^{\text{'}}\u22542logd-2<{r}_{d}<{{r}^{\u2033}}_{d}\u22542logd$ yield

$${K}_{\text{Nem}}(d,\infty )=h({r}_{d})\ge ({r}_{d}^{\text{'}}-1){d}^{2/{{r}^{\u2033}}_{d}}=2elogd-3e,$$

and for 1 ≤ *d* ≤ 7, the inequality *K*_{Nem}(*d*, ∞) = *d* ≥ 2*e* log *d* − 3*e* can be verified directly.

The following proof is standard; see, e.g., [1, p. 160], [13, p. 247]. Let *x*_{1}, …, *x*_{n} be fixed functions in *L*_{r}(*μ*). By Khintchine's inequality with constant *B*_{r}, for every *t* ∈ *T*,

$${\left\{\mathbb{E}{\left|\sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}(t)\right|}^{r}\right\}}^{1/r}\le {B}_{r}{\left(\sum _{i=1}^{n}|{x}_{i}(t){|}^{2}\right)}^{1/2}.$$

(15)

To use inequality (15) for finding an upper bound for the type constant for *L _{r}*, rewrite it as

$$\mathbb{E}{\left|\sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}(t)\right|}^{r}\le {B}_{r}^{r}{\left(\sum _{i=1}^{n}|{x}_{i}(t){|}^{2}\right)}^{r/2}.$$

It follows from Fubini's theorem and the previous inequality that

$$\begin{array}{cc}\mathbb{E}{\Vert \sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}\Vert}_{r}^{r}\hfill & =\mathbb{E}\int {\left|\sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}(t)\right|}^{r}d\mu (t)\hfill \\ & =\int \mathbb{E}{\left|\sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}(t)\right|}^{r}d\mu (t)\hfill \\ & \le {B}_{r}^{r}\int {\left(\sum _{i=1}^{n}|{x}_{i}(t){|}^{2}\right)}^{r/2}d\mu (t).\hfill \end{array}$$

Using the triangle inequality (or Minkowski's inequality), we obtain

$$\begin{array}{cc}{\left\{\mathbb{E}{\Vert \sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}\Vert}_{r}^{r}\right\}}^{2/r}\hfill & \le {B}_{r}^{2}{\left\{\int {\left(\sum _{i=1}^{n}{\left|{x}_{i}(t)\right|}^{2}\right)}^{r/2}d\mu (t)\right\}}^{2/r}\hfill \\ & \le {B}_{r}^{2}{\sum _{i=1}^{n}\left(\int |{x}_{i}(t){|}^{r}d\mu (t)\right)}^{2/r}\hfill \\ & ={B}_{r}^{2}\sum _{i=1}^{n}\Vert {x}_{i}{\Vert}_{r}^{2}\cdot \hfill \end{array}$$

Furthermore, since *g*(*v*) = *v*^{2/r} is a concave function of *v* ≥ 0, it follows from Jensen's inequality that

$$\mathbb{E}{\Vert \sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}\Vert}_{r}^{2}\le {\left\{\mathbb{E}{\Vert \sum _{i=1}^{n}{\epsilon}_{i}{x}_{i}\Vert}_{r}^{r}\right\}}^{2/r}\le {B}_{r}^{2}\sum _{i=1}^{n}\Vert {x}_{i}{\Vert}_{r}^{2}.$$

For 1 ≤ *i* ≤ *n* let
${x}_{i}={({x}_{im})}_{m=1}^{d}$ be an arbitrary fixed vector in ℝ^{d}, and set
$S\u2254{\sum}_{i=1}^{n}{\epsilon}_{i}{x}_{i}$. Further let ${v}^{2}\u2254{\sum}_{i=1}^{n}\Vert {x}_{i}{\Vert}_{\infty}^{2}$. We will show that

$$\mathbb{E}\Vert S{\Vert}_{\infty}^{2}\le 2log(2d){v}^{2}.$$

To this end note first that *h* : [0, ∞) → [1, ∞) with

$$h(t)≔\text{cosh}({t}^{1/2})=\sum _{k=0}^{\infty}\frac{{t}^{k}}{(2k)!}$$

is bijective, increasing, and convex. Hence its inverse function *h*^{−1} : [1, ∞) → [0, ∞) is increasing and concave, and one easily verifies that

$${h}^{-1}(s)={\left(log(s+{({s}^{2}-1)}^{1/2})\right)}^{2}\le {(log(2s))}^{2}.$$
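Since *h*^{−1}(*s*) = arccosh(*s*)^{2}, the displayed bound amounts to *s* + (*s*^{2} − 1)^{1/2} ≤ 2*s*, which is obvious; a quick numerical spot check (our own sketch) confirms it on a grid:

```python
import math

# h^{-1}(s) = (log(s + sqrt(s^2 - 1)))^2 = acosh(s)^2 for s >= 1
def h_inv(s):
    return math.acosh(s) ** 2

# smallest slack of the bound h^{-1}(s) <= (log(2s))^2 over a grid of s-values
gap = min(math.log(2.0 * s) ** 2 - h_inv(s) for s in (1.0, 1.01, 2.0, 10.0, 1e3, 1e8))
```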

Thus it follows from Jensen's inequality that for arbitrary *t* > 0,

$$\begin{array}{cc}\mathbb{E}\Vert S{\Vert}_{\infty}^{2}\hfill & ={t}^{-2}\mathbb{E}{h}^{-1}\left(\text{cosh}(\Vert tS{\Vert}_{\infty})\right)\le {t}^{-2}{h}^{-1}\left(\mathbb{E}\text{cosh}(\Vert tS{\Vert}_{\infty})\right)\hfill \\ & \le {t}^{-2}{\left(log\left(2\mathbb{E}\text{cosh}(\Vert tS{\Vert}_{\infty})\right)\right)}^{2}.\hfill \end{array}$$

Moreover,

$$\mathbb{E}\text{cosh}(\Vert tS{\Vert}_{\infty})=\mathbb{E}\underset{1\le m\le d}{\mathrm{max}}\text{cosh}(t{S}_{m})\le \sum _{m=1}^{d}\mathbb{E}\text{cosh}(t{S}_{m})\le dexp({t}^{2}{v}^{2}/2),$$

according to (8), whence

$$\mathbb{E}\Vert S{\Vert}_{\infty}^{2}\le {t}^{-2}{\left(log\left(2dexp({t}^{2}{v}^{2}/2)\right)\right)}^{2}={\left(log(2d)/t+t{v}^{2}/2\right)}^{2}.$$

Now the assertion follows if we set $t=\sqrt{2log(2d)/{v}^{2}}$.
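The conclusion 𝔼‖*S*‖_{∞}^{2} ≤ 2 log(2*d*) Σ_{*i*}‖*x _{i}*‖_{∞}^{2} can be verified exactly for small *n* by averaging over all 2^{*n*} sign patterns. The vectors below are arbitrary illustrative choices of ours:

```python
import itertools, math

# arbitrary fixed vectors x_1, ..., x_n in R^d (here d = 3, n = 8)
xs = [(0.9, -0.2, 0.4), (-0.5, 0.7, 0.1), (0.3, 0.3, -0.8), (0.2, -0.6, 0.5),
      (-0.7, 0.1, 0.2), (0.4, 0.8, -0.3), (0.1, -0.4, 0.6), (-0.3, 0.5, 0.7)]
n, d = len(xs), len(xs[0])
v2 = sum(max(abs(c) for c in x) ** 2 for x in xs)    # sum_i ||x_i||_inf^2

# exact E ||sum_i eps_i x_i||_inf^2 over all 2^n Rademacher sign patterns
lhs = sum(
    max(abs(sum(e * x[j] for e, x in zip(signs, xs))) for j in range(d)) ** 2
    for signs in itertools.product((-1.0, 1.0), repeat=n)
) / 2 ** n
rhs = 2.0 * math.log(2 * d) * v2
```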

We may replace the random sequence {*X _{i}*} in Example 1.2 with the random sequence {*ε _{i}X _{i}*}. Then

$$\underset{n\ge 1}{\text{sup}}\frac{\mathbb{E}\Vert {\sum}_{i=1}^{n}{\epsilon}_{i}{X}_{i}{\Vert}_{\infty}^{2}}{{\sum}_{i=1}^{n}\Vert {X}_{i}{\Vert}_{\infty}^{2}}\ge \underset{n\ge 1}{\text{sup}}\mathbb{E}{\Vert {n}^{-1/2}\sum _{i=1}^{n}{\epsilon}_{i}{X}_{i}\Vert}_{\infty}^{2}\ge {c}_{d}^{2}.$$

The subsequent results will rely on (10) and several inequalities for 1 − Φ(*z*). The first of these is:

$$\begin{array}{cc}1-\Phi (z)\le {z}^{-1}\phi (z),& \phantom{\rule{1em}{0ex}}z>0,\end{array}$$

(16)

which is known as *Mills*' *ratio*; see [6] and [19] for related results. The proof of this upper bound is easy: since *ϕ′* (*z*) = −*zϕ*(*z*) it follows that

$$1-\Phi (z)={\int}_{z}^{\infty}\phi (t)dt\le {\int}_{z}^{\infty}\frac{t}{z}\phi (t)dt=\frac{-1}{z}{\int}_{z}^{\infty}{\phi}^{\prime}(t)dt=\frac{\phi (z)}{z}.$$

(17)
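The bound (16) is easy to check numerically; the following sketch evaluates 1 − Φ(*z*) through the complementary error function:

```python
import math

def normal_tail(z):          # 1 - Phi(z), via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def phi(z):                  # standard normal density
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

# Mills' ratio bound: 1 - Phi(z) <= phi(z)/z for z > 0, smallest slack on a grid
slack = min(phi(z) / z - normal_tail(z) for z in (0.1, 0.5, 1.0, 2.0, 4.0, 8.0))
```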

A very useful pair of upper and lower bounds for 1 − Φ(*z*) is as follows:

$$\begin{array}{cc}\frac{2}{z+\sqrt{{z}^{2}+4}}\phi (z)\le 1-\Phi (z)\le \frac{4}{3z+\sqrt{{z}^{2}+8}}\phi (z),& \phantom{\rule{0.5em}{0ex}}z>-1;\end{array}$$

(18)

the inequality on the left is due to Komatsu (see, e.g., [10, p. 17]), while the inequality on the right is an improvement of an earlier result of Komatsu due to Szarek and Werner [23].
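Both sides of (18) can likewise be spot-checked numerically, including slightly negative *z*:

```python
import math

def normal_tail(z):
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def phi(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def komatsu_lower(z):        # left-hand side of (18)
    return 2.0 / (z + math.sqrt(z * z + 4.0)) * phi(z)

def szarek_werner_upper(z):  # right-hand side of (18)
    return 4.0 / (3.0 * z + math.sqrt(z * z + 8.0)) * phi(z)

grid = (-0.9, -0.5, 0.0, 0.5, 1.0, 2.0, 4.0, 8.0)
ok = all(komatsu_lower(z) <= normal_tail(z) <= szarek_werner_upper(z) for z in grid)
```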

To prove the upper bound for
${T}_{2}({\ell}_{\infty}^{d})$, let (*ε _{i}*), (*x _{i}*), *S*, and *v* be as in the proof of Lemma 3.4. For arbitrary *δ* > 0 we have

$$\begin{array}{ll}\mathbb{E}\Vert S{\Vert}_{\infty}^{2}\hfill & ={\int}_{0}^{\infty}2t\mathbb{P}\phantom{\rule{0.2em}{0ex}}\left(\underset{1\le m\le d}{\text{sup}}|{S}_{m}|>t\right)dt\hfill \\ & \le {\delta}^{2}+{\int}_{\delta}^{\infty}2t\mathbb{P}\phantom{\rule{0.2em}{0ex}}\left(\underset{1\le m\le d}{\text{sup}}|{S}_{m}|>t\right)dt\hfill \\ & \le {\delta}^{2}+\sum _{m=1}^{d}{\int}_{\delta}^{\infty}2t\mathbb{P}\phantom{\rule{0.2em}{0ex}}\left(|{S}_{m}|>t\right)dt.\hfill \end{array}$$

Now by (10) with *v*^{2} and
${v}_{m}^{2}$ as in the proof of Lemma 3.4, followed by Mills' ratio (16),

$$\begin{array}{cc}{\int}_{\delta}^{\infty}2t\mathbb{P}\phantom{\rule{0.2em}{0ex}}\left(|{S}_{m}|>t\right)dt\hfill & \le {\int}_{\delta}^{\infty}\frac{4K{v}_{m}}{\sqrt{2\pi}\phantom{\rule{0.1em}{0ex}}t}t{e}^{-{t}^{2}/(2{v}_{m}^{2})}dt\hfill \\ & =\frac{4K{v}_{m}}{\sqrt{2\pi}}{\int}_{\delta}^{\infty}{e}^{-{t}^{2}/(2{v}_{m}^{2})}dt=4K{v}_{m}^{2}{\int}_{\delta}^{\infty}\frac{{e}^{-{t}^{2}/(2{v}_{m}^{2})}}{\sqrt{2\pi}{v}_{m}}dt\hfill \\ & =4K{v}_{m}^{2}(1-\Phi (\delta /{v}_{m}))\le 4K{v}^{2}(1-\Phi (\delta /v)).\hfill \end{array}$$

(19)

Now instead of the Mills' ratio bound (16) for the tail of the normal distribution, we use the upper bound part of (18). This yields

$${\int}_{\delta}^{\infty}2t\mathbb{P}\phantom{\rule{0.2em}{0ex}}(|{S}_{m}|>t)dt\le 4K{v}^{2}(1-\Phi (\delta /v))\le \frac{4c{v}^{2}}{3\delta /v+\sqrt{{\delta}^{2}/{v}^{2}+8}}{e}^{-{\delta}^{2}/(2{v}^{2})},$$

where we have defined $c≔4K/\sqrt{2\pi}=12.88/\sqrt{2\pi}$, and hence

$$\mathbb{E}\Vert S{\Vert}_{\infty}^{2}\le {\delta}^{2}+\frac{4cd{v}^{2}}{3\delta /v+\sqrt{{\delta}^{2}/{v}^{2}+8}}{e}^{-{\delta}^{2}/(2{v}^{2})}.$$

Taking

$${\delta}^{2}={v}^{2}2log\left(\frac{cd/2}{\sqrt{2log(cd/2)}}\right)$$

gives

$$\begin{array}{cc}\mathbb{E}\Vert S{\Vert}_{\infty}^{2}\hfill & \le {v}^{2}\{2logd+2log(c/2)-log(2log(cd/2))\hfill \\ & \phantom{\rule{4.5em}{0ex}}+\frac{8\sqrt{2log(cd/2)}}{3\sqrt{2log\left(\frac{cd}{2\sqrt{2log(cd/2)}}\right)}+\sqrt{2log\left(\frac{cd}{2\sqrt{2log(cd/2)}}\right)+8}}\}\hfill \\ & ≕{v}^{2}\left\{2logd+{h}_{2}(d)\right\}\hfill \end{array}$$

where it is easily checked that *h*_{2}(*d*) ≤ 3 for all *d* ≥ 1. Moreover *h*_{2}(*d*) is negative for *d* > 4.13795 × 10^{10}. This completes the proof of the upper bound in (11).
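Both numerical claims about *h*_{2} can be spot-checked by transcribing it from the display above (this sketch is ours; it samples *h*_{2} on a grid rather than proving the bound for all *d*):

```python
import math

c = 12.88 / math.sqrt(2.0 * math.pi)   # the constant c := 4K/sqrt(2*pi)

def h2(d):
    a = 2.0 * math.log(c * d / 2.0)                    # 2 log(cd/2)
    ap = 2.0 * math.log(c * d / (2.0 * math.sqrt(a)))  # 2 log(cd/(2 sqrt(a)))
    return (2.0 * math.log(c / 2.0) - math.log(a)
            + 8.0 * math.sqrt(a) / (3.0 * math.sqrt(ap) + math.sqrt(ap + 8.0)))

# h_2(d) <= 3 on a wide grid; h_2 changes sign near d = 4.138e10
vals = [h2(float(d)) for d in (1, 2, 10, 10**3, 10**6, 10**9, 10**12)]
```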

To prove the lower bound for *c _{d}* in (11), we use the lower bound of [13, Lemma 6.9, p. 157] (which is, in this form, due to Giné and Zinn [5]). This yields

$${c}_{d}^{2}\ge \frac{\lambda}{1+\lambda}{t}_{o}^{2}+\frac{1}{1+\lambda}d{\int}_{{t}_{o}}^{\infty}4t(1-\Phi (t))dt$$

(20)

for any *t _{o}* > 0, where *λ* ≔ 2*d*(1 − Φ(*t _{o}*)). By the lower bound in (18),

$$\begin{array}{cc}{\int}_{{t}_{o}}^{\infty}t(1-\Phi (t))dt\hfill & \ge {\int}_{{t}_{o}}^{\infty}\frac{2t}{t+\sqrt{{t}^{2}+4}}\phi (t)dt\hfill \\ & \ge \frac{2{t}_{o}}{{t}_{o}+\sqrt{{t}_{o}^{2}+4}}{\int}_{{t}_{o}}^{\infty}\phi (t)dt\hfill \\ & =\frac{2}{1+\sqrt{1+4/{t}_{o}^{2}}}(1-\Phi ({t}_{o})).\hfill \end{array}$$

Using this lower bound in (20) yields

$$\begin{array}{cc}{c}_{d}^{2}\hfill & \ge \frac{\lambda}{1+\lambda}{t}_{o}^{2}+\frac{1}{1+\lambda}d\frac{8}{1+\sqrt{1+4/{t}_{o}^{2}}}(1-\Phi ({t}_{o}))\\ & =\frac{2d(1-\Phi ({t}_{o}))}{1+2d(1-\Phi ({t}_{o}))}\left\{{t}_{o}^{2}+\frac{4}{1+\sqrt{1+4/{t}_{o}^{2}}}\right\}\hfill \\ & \ge \frac{\frac{4d}{{t}_{o}+\sqrt{{t}_{o}^{2}+4}}\phi ({t}_{o})}{1+\frac{4d}{{t}_{o}+\sqrt{{t}_{o}^{2}+4}}\phi ({t}_{o})}\left\{{t}_{o}^{2}+\frac{4}{1+\sqrt{1+4/{t}_{o}^{2}}}\right\}.\hfill \end{array}$$

(21)

Now we let
$c≔\sqrt{2/\pi}$, fix *δ* > 0, and choose

$${t}_{o}^{2}=2log\left(\frac{cd}{{(2log(cd))}^{(1+\delta )/2}}\right).$$

For this choice we see that *t _{o}* → ∞ as *d* → ∞. Moreover,

$$4d\phi ({t}_{o})=\frac{4d}{\sqrt{2\pi}}\cdot \frac{{(2log(cd))}^{(1+\delta )/2}}{cd}=2{(2log(cd))}^{(1+\delta )/2},$$

and

$$\frac{4d\phi ({t}_{o})}{{t}_{o}}=\frac{2{(2log(cd))}^{(1+\delta )/2}}{{\{2log(cd/{(2log(cd))}^{(1+\delta )/2})\}}^{1/2}}\to \infty $$

as *d* → ∞, so the first factor on the right-hand side of (21) converges to 1 as *d* → ∞. Writing *A _{d}* for this factor, the right-hand side can be rewritten as

$$\begin{array}{c}{A}_{d}\left\{{t}_{o}^{2}+\frac{4}{1+\sqrt{1+4/{t}_{o}^{2}}}\right\}\hfill \\ \phantom{\rule{1.5em}{0ex}}={A}_{d}\left\{2log\left(\frac{cd}{{(2log(cd))}^{(1+\delta )/2}}\right)+\frac{4}{1+\sqrt{1+4/{t}_{o}^{2}}}\right\}\hfill \\ \phantom{\rule{2em}{0ex}}\sim 1\cdot \{2logd+2logc-(1+\delta )log(2log(cd))+2\}.\hfill \end{array}$$
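The algebra behind 4*d*φ(*t _{o}*) = 2(2 log(*cd*))^{(1+δ)/2} can be confirmed numerically: with *c* = (2/π)^{1/2} one has *c*(2π)^{1/2} = 2, so the factors collapse. A quick check (the values of *d* and *δ* are arbitrary):

```python
import math

c = math.sqrt(2.0 / math.pi)
delta, d = 0.1, 10 ** 6
p = (1.0 + delta) / 2.0

# the stated choice of t_o
t_o = math.sqrt(2.0 * math.log(c * d / (2.0 * math.log(c * d)) ** p))
phi_t = math.exp(-t_o * t_o / 2.0) / math.sqrt(2.0 * math.pi)

lhs = 4.0 * d * phi_t
rhs = 2.0 * (2.0 * math.log(c * d)) ** p
```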

To prove the upper bounds for *c _{d}*, we will use the upper bound of [13, Lemma 6.9, p. 157] (which is, in this form, due to Giné and Zinn [5]). For every *t _{o}* > 0,

$$\begin{array}{cc}{c}_{d}^{2}\equiv \mathbb{E}\underset{1\le j\le d}{\mathrm{max}}|{Z}_{j}{|}^{2}\hfill & \le {t}_{o}^{2}+d{\int}_{{t}_{o}}^{\infty}2t\phantom{\rule{0.2em}{0ex}}P(|{Z}_{1}|>t)dt\hfill \\ & ={t}_{o}^{2}+4d{\int}_{{t}_{o}}^{\infty}t(1-\Phi (t))dt\hfill \\ & \le {t}_{o}^{2}+4d{\int}_{{t}_{o}}^{\infty}\phi (t)dt\phantom{\rule{1em}{0ex}}(\text{by Mills}\text{'}\text{ratio})\hfill \\ & ={t}_{o}^{2}+4d(1-\Phi ({t}_{o})).\hfill \end{array}$$

Evaluating this bound at ${t}_{o}=\sqrt{2log(d/\sqrt{2\pi})}$ and then using Mills' ratio again yields

$$\begin{array}{cc}{c}_{d}^{2}\hfill & \le 2log(d/\sqrt{2\pi})+4d\left(1-\Phi \left(\sqrt{2log(d/\sqrt{2\pi})}\right)\right)\hfill \\ & \le 2logd-log(2\pi )+4d\frac{\phi \left(\sqrt{2log(d/\sqrt{2\pi})}\right)}{\sqrt{2log(d/\sqrt{2\pi})}}\hfill \\ & =2logd-log(2\pi )+\frac{2\sqrt{2}}{\sqrt{log(d/\sqrt{2\pi})}}\hfill \\ & \le 2logd,\hfill \end{array}$$

(22)

where the last inequality holds if

$$\frac{2\sqrt{2}}{\sqrt{log(d/\sqrt{2\pi})}}\le log(2\pi ),$$

or equivalently if

$$logd\ge \frac{8}{{(log(2\pi ))}^{2}}+\frac{log(2\pi )}{2}=3.28735\dots ,$$

and hence if *d* ≥ 27 > *e*^{3.28735…} ≈ 26.77. The claimed inequality is easily verified numerically for *d* = 3, …, 26. (It fails for *d* = 2.) As can be seen from (22), 2 log *d* − log(2*π*) gives a reasonable approximation to
$\mathbb{E}{\mathrm{max}}_{1\le j\le d}{Z}_{j}^{2}$ for large *d*. Using the upper bound in (18) instead of the second application of Mills' ratio and choosing
${t}_{o}^{2}=2log(cd/\sqrt{2log(cd)})$ with
$c\u2254\sqrt{2/\pi}$ yields the third bound for *c _{d}* in (11) with

$$\begin{array}{cc}{h}_{3}(d)=\hfill & -log(\pi )-log(log(cd))\hfill \\ & +\frac{8}{3\sqrt{1-\frac{log(2log(cd))}{2log(cd)}}+\sqrt{1-\frac{log(2log(cd))}{2log(cd)}+\frac{4}{log(cd)}}}.\hfill \end{array}$$
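The numerical verification of *c _{d}*^{2} ≤ 2 log *d* for *d* = 3, …, 26 (and its failure at *d* = 2) can be reproduced by quadrature; the integral representation below is our own sketch, using 𝔼 max_{*j*≤*d*} *Z _{j}*^{2} = ∫_{0}^{∞}(1 − *P*(*Z*^{2} ≤ *t*)^{*d*}) *dt* with *P*(*Z*^{2} ≤ *t*) = erf((*t*/2)^{1/2}):

```python
import math

def emax_sq(d, upper=60.0, steps=40000):
    """E max_{1<=j<=d} Z_j^2 for iid N(0,1) via midpoint-rule quadrature of
    integral_0^upper (1 - erf(sqrt(t/2))**d) dt; the tail beyond 60 is negligible."""
    h = upper / steps
    return sum(
        (1.0 - math.erf(math.sqrt((k + 0.5) * h / 2.0)) ** d) * h
        for k in range(steps)
    )

holds = all(emax_sq(d) <= 2.0 * math.log(d) for d in range(3, 27))
fails_at_2 = emax_sq(2) > 2.0 * math.log(2.0)
```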

It follows from 𝔼*Y* = 0, the Taylor expansion of the exponential function, and the inequality |*Y*|* ^{m}* ≤ *κ*^{*m*−2}*Y*^{2} for *m* ≥ 2 (recall that |*Y*| ≤ *κ*) that

$$\begin{array}{cc}\mathbb{E}exp\left(\frac{Y}{\kappa L}\right)\hfill & =1+\mathbb{E}\left\{exp\left(\frac{Y}{\kappa L}\right)-1-\frac{Y}{\kappa L}\right\}\hfill \\ & \le 1+\sum _{m=2}^{\infty}\frac{1}{m!}\frac{\mathbb{E}|Y{|}^{m}}{{(\kappa L)}^{m}}\le 1+\frac{{\sigma}^{2}}{{\kappa}^{2}}\sum _{m=2}^{\infty}\frac{1}{m!}\frac{1}{{L}^{m}}=1+\frac{{\sigma}^{2}\text{e}(L)}{{\kappa}^{2}}.\hfill \end{array}$$
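This moment-generating-function bound can be sanity-checked exactly for a two-point variable of our choosing (*Y* = 2 with probability 1/3, *Y* = −1 with probability 2/3, so 𝔼*Y* = 0, *κ* = 2, *σ*^{2} = 2), using the closed form e(*L*) = Σ_{*m*≥2} 1/(*m*! *L ^{m}*) = exp(1/*L*) − 1 − 1/*L*:

```python
import math

def e_func(L):               # e(L) = sum_{m>=2} 1/(m! L^m) = exp(1/L) - 1 - 1/L
    return math.exp(1.0 / L) - 1.0 - 1.0 / L

# two-point test variable: P(Y = 2) = 1/3, P(Y = -1) = 2/3
kappa, sigma2 = 2.0, 2.0     # |Y| <= kappa, sigma^2 = E Y^2

def mgf(sign, L):            # E exp(sign * Y / (kappa * L)), computed exactly
    return (math.exp(sign * 2.0 / (kappa * L)) / 3.0
            + 2.0 * math.exp(-sign * 1.0 / (kappa * L)) / 3.0)

ok = all(
    mgf(s, L) <= 1.0 + sigma2 * e_func(L) / kappa ** 2
    for L in (0.25, 0.407, 0.5, 1.0, 2.0)
    for s in (1.0, -1.0)
)
```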

Applying Lemma 4.1 to the *j*th components *X _{i,j}* of the random vectors *X _{i}*, and using 1 + *x* ≤ exp(*x*), we obtain

$$\mathbb{E}exp\left(\frac{\pm {S}_{n,j}}{\kappa L}\right)=\prod _{i=1}^{n}\mathbb{E}exp\left(\frac{\pm {X}_{i,j}}{\kappa L}\right)\le \prod _{i=1}^{n}exp\left(\frac{\text{Var}({X}_{i,j})\text{e}(L)}{{\kappa}^{2}}\right)\le exp\left(\frac{\Gamma \text{e}(L)}{{\kappa}^{2}}\right).$$

Hence

$$\mathbb{E}\text{cosh}\phantom{\rule{0.2em}{0ex}}\left(\frac{\Vert {S}_{n}{\Vert}_{\infty}}{\kappa L}\right)=\mathbb{E}\underset{1\le j\le d}{\mathrm{max}}\text{cosh}\phantom{\rule{0.2em}{0ex}}\left(\frac{{S}_{n,j}}{\kappa L}\right)\le \sum _{j=1}^{d}\mathbb{E}\text{cosh}\phantom{\rule{0.2em}{0ex}}\left(\frac{{S}_{n,j}}{\kappa L}\right)\le dexp\left(\frac{\Gamma \text{e}(L)}{{\kappa}^{2}}\right).$$

As in the proof of Lemma 3.4 we conclude that

$$\begin{array}{cc}\mathbb{E}\Vert {S}_{n}{\Vert}_{\infty}^{2}\hfill & \le {(\kappa L)}^{2}{\left(log\phantom{\rule{0.2em}{0ex}}\left(2\mathbb{E}\text{cosh}\phantom{\rule{0.2em}{0ex}}\left(\frac{\Vert {S}_{n}{\Vert}_{\infty}}{\kappa L}\right)\right)\right)}^{2}\hfill \\ & \le {(\kappa L)}^{2}{\left(log\phantom{\rule{0.2em}{0ex}}(2d)+\frac{\Gamma \text{e}(L)}{{\kappa}^{2}}\right)}^{2}\hfill \\ & ={\left(\kappa Llog\phantom{\rule{0.2em}{0ex}}(2d)+\frac{\Gamma L\phantom{\rule{0.2em}{0ex}}\text{e}(L)}{\kappa}\right)}^{2},\hfill \end{array}$$

which is equivalent to the inequality stated in the lemma.

For fixed *κ _{o}* > 0 we split *S _{n}* = *A _{n}* + *B _{n}*, where
${A}_{n}≔{\sum}_{i=1}^{n}({X}_{i}^{(a)}-\mathbb{E}{X}_{i}^{(a)})$ and
${B}_{n}≔{\sum}_{i=1}^{n}({X}_{i}^{(b)}-\mathbb{E}{X}_{i}^{(b)})$ with
${X}_{i}^{(a)}≔{1}_{[\Vert {X}_{i}{\Vert}_{\infty}\le {\kappa}_{o}]}{X}_{i}$ and
${X}_{i}^{(b)}≔{1}_{[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}]}{X}_{i}$. By the triangle inequality,

$$\begin{array}{ll}\Vert {B}_{n}{\Vert}_{\infty}\hfill & \le \sum _{i=1}^{n}\left\{{1}_{[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}]}\Vert {X}_{i}{\Vert}_{\infty}+\mathbb{E}({1}_{[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}]}\Vert {X}_{i}{\Vert}_{\infty})\right\}\hfill \\ & =\sum _{i=1}^{n}\left\{{1}_{[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}]}\Vert {X}_{i}{\Vert}_{\infty}-\mathbb{E}({1}_{[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}]}\Vert {X}_{i}{\Vert}_{\infty})\right\}\hfill \\ & \phantom{\rule{1em}{0ex}}+2\sum _{i=1}^{n}\mathbb{E}({1}_{[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}]}\Vert {X}_{i}{\Vert}_{\infty})\hfill \\ & ≕{B}_{n1}+{B}_{n2}.\hfill \end{array}$$

Therefore, since *B _{n1}* has mean zero while *B _{n2}* is deterministic,

$$\begin{array}{ll}\mathbb{E}\Vert {B}_{n}{\Vert}_{\infty}^{2}\hfill & \le \mathbb{E}{({B}_{n1}+{B}_{n2})}^{2}=\mathbb{E}{B}_{n1}^{2}+{B}_{n2}^{2}\hfill \\ & =\sum _{i=1}^{n}\text{Var}\left({1}_{[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}]}\Vert {X}_{i}{\Vert}_{\infty}\right)+4{\left(\sum _{i=1}^{n}\mathbb{E}(\Vert {X}_{i}{\Vert}_{\infty}{1}_{[\Vert {X}_{i}{\Vert}_{\infty}>{\kappa}_{o}]})\right)}^{2}\hfill \\ & \le \sum _{i=1}^{n}\mathbb{E}\Vert {X}_{i}{\Vert}_{\infty}^{2}+4{\left(\sum _{i=1}^{n}\frac{\mathbb{E}\Vert {X}_{i}{\Vert}_{\infty}^{2}}{{\kappa}_{o}}\right)}^{2}\hfill \\ & =\Gamma +4\frac{{\Gamma}^{2}}{{\kappa}_{o}^{2}},\hfill \end{array}$$

where we define $\Gamma ≔{\sum}_{i=1}^{n}\mathbb{E}\Vert {X}_{i}{\Vert}_{\infty}^{2}$.

The first sum, *A _{n}*, may be bounded by means of Lemma 4.2 with *κ* = 2*κ _{o}*, because ‖*X _{i}*^{(a)} − 𝔼*X _{i}*^{(a)}‖_{∞} ≤ 2*κ _{o}* and

$$\text{Var}({X}_{i,j}^{(a)})=\text{Var}\left({1}_{[\Vert {X}_{i}{\Vert}_{\infty}\le {\kappa}_{o}]}{X}_{i,j}\right)\le \mathbb{E}\left({1}_{[\Vert {X}_{i}{\Vert}_{\infty}\le {\kappa}_{o}]}{X}_{i,j}^{2}\right)\le \mathbb{E}\Vert {X}_{i}{\Vert}_{\infty}^{2}.$$

Thus

$$\mathbb{E}\Vert {A}_{n}{\Vert}_{\infty}^{2}\le {\left(2{\kappa}_{o}Llog(2d)+\frac{\Gamma L\phantom{\rule{0.2em}{0ex}}\text{e}(L)}{2{\kappa}_{o}}\right)}^{2}.$$

Combining the bounds we find that

$$\begin{array}{cc}\sqrt{\mathbb{E}\Vert {S}_{n}{\Vert}_{\infty}^{2}}\hfill & \le \sqrt{\mathbb{E}\Vert {A}_{n}{\Vert}_{\infty}^{2}}+\sqrt{\mathbb{E}\Vert {B}_{n}{\Vert}_{\infty}^{2}}\hfill \\ & \le 2{\kappa}_{o}Llog(2d)+\frac{\Gamma L\text{e}(L)}{2{\kappa}_{o}}+\sqrt{\Gamma}+2\frac{\Gamma}{{\kappa}_{o}}\hfill \\ & =\alpha {\kappa}_{o}+\frac{\beta}{{\kappa}_{o}}+\sqrt{\Gamma},\hfill \end{array}$$

where *α* := 2*L* log(2*d*) and *β* := Γ (*L* e(*L*) + 4)/2. This bound is minimized if
${\kappa}_{o}=\sqrt{\beta /\alpha}$ with minimum value

$$2\sqrt{\alpha \beta}+\sqrt{\Gamma}=\left(1+2\sqrt{{L}^{2}\text{e}(L)+4L}\sqrt{log(2d)}\right)\sqrt{\Gamma},$$

and for *L* = 0.407 the latter bound is not greater than

$$\left(1+3.46\sqrt{log(2d)}\right)\sqrt{\Gamma}.$$

In the special case of symmetrically distributed random vectors *X _{i}*, our treatment of the sum *S _{n}* simplifies: the truncated vectors *X _{i}*^{(a)} are themselves symmetric, hence centered, so Lemma 4.2 applies with *κ* = *κ _{o}*, and we obtain

$$\begin{array}{cc}\sqrt{\mathbb{E}\Vert {S}_{n}{\Vert}_{\infty}^{2}}\hfill & \le {\kappa}_{o}Llog(2d)+\frac{\Gamma L\text{e}(L)}{{\kappa}_{o}}+\sqrt{\Gamma}+2\frac{\Gamma}{{\kappa}_{o}}\hfill \\ & =\alpha \prime {\kappa}_{o}+\frac{\beta \prime}{{\kappa}_{o}}+\sqrt{\Gamma}\phantom{\rule{1.5em}{0ex}}\left(\text{with}\phantom{\rule{0.2em}{0ex}}\alpha \prime ≔Llog(2d),\phantom{\rule{0.2em}{0ex}}\beta \prime ≔\Gamma (L\phantom{\rule{0.2em}{0ex}}\text{e}(L)+2)\right)\hfill \\ & =\left(1+2\sqrt{{L}^{2}\text{e}(L)+2L}\sqrt{log(2d)}\right)\sqrt{\Gamma}\phantom{\rule{1.5em}{0ex}}\left(\text{if}\phantom{\rule{0.2em}{0ex}}{\kappa}_{o}=\sqrt{\beta \prime /\alpha \prime}\right).\hfill \end{array}$$

For *L* = 0.5 the latter bound is not greater than

$$\left(1+2.9\sqrt{log(2d)}\right)\sqrt{\Gamma}.$$
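Both numerical constants, 3.46 for the general case (*L* = 0.407) and 2.9 for the symmetric case (*L* = 0.5), can be reproduced from the closed form e(*L*) = Σ_{*m*≥2} 1/(*m*! *L ^{m}*) = exp(1/*L*) − 1 − 1/*L*:

```python
import math

def e_func(L):               # e(L) = exp(1/L) - 1 - 1/L
    return math.exp(1.0 / L) - 1.0 - 1.0 / L

# general case, L = 0.407: coefficient of sqrt(log(2d)) is 2*sqrt(L^2 e(L) + 4L)
gen = 2.0 * math.sqrt(0.407 ** 2 * e_func(0.407) + 4.0 * 0.407)
# symmetric case, L = 0.5: coefficient is 2*sqrt(L^2 e(L) + 2L)
sym = 2.0 * math.sqrt(0.5 ** 2 * e_func(0.5) + 2.0 * 0.5)
```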

The authors owe thanks to the referees for a number of suggestions which resulted in a considerable improvement in the article. The authors are also grateful to Ilya Molchanov for drawing their attention to Banach-Mazur distances, and to Stanislaw Kwapien and Vladimir Koltchinskii for pointers concerning type and cotype proofs and constants. This research was initiated during the opening week of the program on “Statistical Theory and Methods for Complex, High-Dimensional Data” held at the *Isaac Newton Institute for Mathematical Sciences* from 7 January to 27 June, 2008, and was made possible in part by the support of the Isaac Newton Institute for visits of various periods by Dümbgen, van de Geer, and Wellner. The research of Wellner was also supported in part by NSF grants DMS-0503822 and DMS-0804587. The research of Dümbgen and van de Geer was supported in part by the Swiss National Science Foundation.

- **LUTZ DÜMBGEN** received his Ph.D. from Heidelberg University in 1990. From 1990-1992 he was a Miller research fellow at the University of California at Berkeley. Thereafter he worked at the Universities of Bielefeld, Heidelberg, and Lübeck. Since 2002 he has been professor of statistics at the University of Bern. His research interests are nonparametric, multivariate, and computational statistics.

- **SARA A. VAN DE GEER** obtained her Ph.D. at Leiden University in 1987. She worked at the Center for Mathematics and Computer Science in Amsterdam, at the Universities of Bristol, Utrecht, Leiden, and Toulouse, and at the Eidgenössische Technische Hochschule in Zürich (2005-present). Her research areas are empirical processes, statistical learning, and statistical theory for high-dimensional data.

- **MARK C. VERAAR** received his Ph.D. from Delft University of Technology in 2006. In the year 2007 he was a postdoctoral researcher in the European RTN project “Phenomena in High Dimensions” at the IMPAN institute in Warsaw (Poland). In 2008 he spent one year as an Alexander von Humboldt fellow at the University of Karlsruhe (Germany). Since 2009 he has been Assistant Professor at Delft University of Technology (the Netherlands). His main research areas are probability theory, partial differential equations, and functional analysis.

- **JON A. WELLNER** received his B.S. from the University of Idaho in 1968 and his Ph.D. from the University of Washington in 1975. He got hooked on research in probability and statistics during graduate school at the UW in the early 1970s, and has enjoyed both teaching and research at the University of Rochester (1975–1983) and the University of Washington (1983-present). If not for probability theory and statistics, he might be a ski bum.

Lutz Dümbgen, Institute of Mathematical Statistics and Actuarial Science, University of Bern, Alpeneggstrasse 22, CH-3012 Bern, Switzerland.

Sara A. van de Geer, Seminar for Statistics, ETH Zurich, CH-8092 Zurich, Switzerland.

Mark C. Veraar, Delft Institute of Applied Mathematics, Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands.

Jon A. Wellner, Department of Statistics, Box 354322, University of Washington, Seattle, WA 98195-4322.

1. Araujo A, Giné E. Wiley Series in Probability and Mathematical Statistics. John Wiley; New York: 1980. The Central Limit Theorem for Real and Banach Valued Random Variables.

2. Bennett G. Probability inequalities for the sum of independent random variables. J Amer Statist Assoc. 1962;57:33–45. doi: 10.2307/2282438.

3. de la Peña VH, Giné E. Probability and its Applications. Springer-Verlag; New York: 1999. Decoupling: From Dependence to Independence.

4. Dudley RM. Cambridge Studies in Advanced Mathematics. Vol. 63. Cambridge University Press; Cambridge: 1999. Uniform Central Limit Theorems.

5. Giné E, Zinn J. Central limit theorems and weak laws of large numbers in certain Banach spaces. Z Wahrsch Verw Gebiete. 1983;62:323–354. doi: 10.1007/BF00535258.

6. Gordon RD. Values of Mills' ratio of area to bounding ordinate and of the normal probability integral for large values of the argument. Ann Math Statistics. 1941;12:364–366. doi: 10.1214/aoms/1177731721.

7. Greenshtein E, Ritov Y. Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli. 2004;10:971–988. doi: 10.3150/bj/1106314846.

8. Haagerup U. The best constants in the Khintchine inequality. Studia Math. 1981;70:231–283.

9. Hoeffding W. Probability inequalities for sums of bounded random variables. J Amer Statist Assoc. 1963;58:13–30. doi: 10.2307/2282952.

10. Itô K, McKean HP, Jr. Classics in Mathematics. Springer-Verlag; Berlin: 1974. Diffusion Processes and their Sample Paths.

11. Johnson WB, Lindenstrauss J. Handbook of the Geometry of Banach Spaces. I. North-Holland; Amsterdam: 2001. Basic concepts in the geometry of Banach spaces; pp. 1–84.

12. Juditsky A, Nemirovski AS. Tech report. Georgia Institute of Technology; Atlanta, GA: 2008. Large deviations of vector-valued martingales in 2-smooth normed spaces.

13. Ledoux M, Talagrand M. Ergebnisse der Mathematik und ihrer Grenzgebiete 3. Folge / A Series of Modern Surveys in Mathematics. Vol. 23. Springer-Verlag; Berlin: 1991. Probability in Banach Spaces: Isoperimetry and Processes.

14. Nemirovski AS. Lectures on Probability Theory and Statistics (Saint-Flour, 1998), Lecture Notes in Mathematics. Vol. 1738. Springer; Berlin: 2000. Topics in non-parametric statistics; pp. 85–277.

15. Nemirovski AS. Regular Banach spaces and large deviations of random sums. 2004. working paper.

16. Nemirovski AS, Yudin DB. Problem Complexity and Method Efficiency in Optimization. John Wiley; Chichester, UK: 1983.

17. Pinelis I. Extremal probabilistic problems and Hotelling's *T*^{2} test under a symmetry condition. Ann Statist. 1994;22:357–368. doi: 10.1214/aos/1176325373.

18. Pinelis I. Optimum bounds for the distributions of martingales in Banach spaces. Ann Probab. 1994;22:1679–1706. doi: 10.1214/aop/1176988477.

19. Pinelis I. Monotonicity properties of the relative error of a Padé approximation for Mills' ratio. J Inequal Pure Appl Math. 2002;3(2).

20. Pinelis I. Toward the best constant factor for the Rademacher-Gaussian tail comparison. ESAIM Probab Stat. 2007;11:412–426. doi: 10.1051/ps:2007027.

21. Pollard D. NSF-CBMS Regional Conference Series in Probability and Statistics. Vol. 2. Institute of Mathematical Statistics; Hayward, CA: 1990. Empirical Processes: Theory and Applications.

22. Rio E. Moment inequalities for sums of dependent random variables under projective conditions. J Theoret Probab. 2009;22:146–163. doi: 10.1007/s10959-008-0155-9.

23. Szarek SJ, Werner E. A nonsymmetric correlation inequality for Gaussian measure. J Multivariate Anal. 1999;68:193–211. doi: 10.1006/jmva.1998.1784.

24. Tomczak-Jaegermann N. Pitman Monographs and Surveys in Pure and Applied Mathematics. Vol. 38. Longman Scientific & Technical; Harlow, UK: 1989. Banach-Mazur Distances and Finite-Dimensional Operator Ideals.

25. van de Geer SA. Cambridge Series in Statistical and Probabilistic Mathematics. Vol. 6. Cambridge University Press; Cambridge: 2000. Applications of Empirical Process Theory.

26. van der Vaart AW, Wellner JA. Springer Series in Statistics. Springer-Verlag; New York: 1996. Weak Convergence and Empirical Processes: With Applications to Statistics.