
Am Math Mon. Author manuscript; available in PMC 2010 July 1.
Published in final edited form as:
Am Math Mon. 2010; 117(2): 138–160.
PMCID: PMC2834376
NIHMSID: NIHMS177692

# Nemirovski's Inequalities Revisited

## 1. Introduction

Our starting point is the following well-known theorem from probability: Let $X_1, \ldots, X_n$ be independent random variables with finite second moments, and let $S_n = \sum_{i=1}^n X_i$. Then

$\operatorname{Var}(S_n) = \sum_{i=1}^n \operatorname{Var}(X_i).$
(1)

If we suppose that each $X_i$ has mean zero, $\mathbb{E}X_i = 0$, then (1) becomes

$\mathbb{E}S_n^2 = \sum_{i=1}^n \mathbb{E}X_i^2.$
(2)

This equality generalizes easily to vectors in a Hilbert space $\mathbb{H}$ with inner product $\langle\cdot,\cdot\rangle$: If the $X_i$'s are independent with values in $\mathbb{H}$ such that $\mathbb{E}X_i = 0$ and $\mathbb{E}\|X_i\|^2 < \infty$, then $\|S_n\|^2 = \langle S_n, S_n\rangle = \sum_{i,j=1}^n \langle X_i, X_j\rangle$, and since $\mathbb{E}\langle X_i, X_j\rangle = 0$ for $i \ne j$ by independence,

$\mathbb{E}\|S_n\|^2 = \sum_{i,j=1}^n \mathbb{E}\langle X_i, X_j\rangle = \sum_{i=1}^n \mathbb{E}\|X_i\|^2.$
(3)

What happens if the $X_i$'s take values in a (real) Banach space $(\mathbb{B}, \|\cdot\|)$? In such cases, in particular when the square of the norm $\|\cdot\|$ is not given by an inner product, we are aiming at inequalities of the following type: Let $X_1, X_2, \ldots, X_n$ be independent random vectors with values in $(\mathbb{B}, \|\cdot\|)$ with $\mathbb{E}X_i = 0$ and $\mathbb{E}\|X_i\|^2 < \infty$. With $S_n := \sum_{i=1}^n X_i$ we want to show that

$\mathbb{E}\|S_n\|^2 \le K \sum_{i=1}^n \mathbb{E}\|X_i\|^2$
(4)

for some constant K depending only on $(\mathbb{B}, \|\cdot\|)$.

For statistical applications, the case $(\mathbb{B}, \|\cdot\|) = \ell_r^d := (\mathbb{R}^d, \|\cdot\|_r)$ for some $r \in [1, \infty]$ is of particular interest. Here the $r$-norm of a vector $x \in \mathbb{R}^d$ is defined as

$\|x\|_r := \begin{cases} \bigl(\sum_{j=1}^d |x_j|^r\bigr)^{1/r} & \text{if } 1 \le r < \infty,\\ \max_{1\le j\le d}|x_j| & \text{if } r = \infty.\end{cases}$
(5)

An obvious question is how the exponent r and the dimension d enter an inequality of type (4). The influence of the dimension d is crucial, since current statistical research often involves small or moderate "sample size" n (the number of independent units), say on the order of $10^2$ or $10^4$, while the number d of items measured for each independent unit is large, say on the order of $10^6$ or $10^7$. The following two examples for the random vectors $X_i$ provide lower bounds for the constant K in (4):

### Example 1.1 (A lower bound in $ℓrd$)

Let $b_1, b_2, \ldots, b_d$ denote the standard basis of $\mathbb{R}^d$, and let $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_d$ be independent Rademacher variables, i.e., random variables taking the values +1 and −1 each with probability 1/2. Define $X_i := \varepsilon_i b_i$ for $1 \le i \le n := d$. Then $\mathbb{E}X_i = 0$, $\|X_i\|_r^2 = 1$, and $\|S_n\|_r^2 = d^{2/r} = d^{2/r-1}\sum_{i=1}^n \|X_i\|_r^2$. Thus any candidate for K in (4) has to satisfy $K \ge d^{2/r-1}$.
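This lower bound is easy to verify numerically. The following sketch (our illustration, not part of the original argument; all variable names are ours) checks that the ratio equals $d^{2/r-1}$ for an arbitrary draw of the signs, since every component of $S_n$ is $\pm 1$.

```python
import numpy as np

# Numerical check of Example 1.1 (illustration only; variable names are ours).
# With X_i = eps_i * b_i, every component of S_n = sum_i X_i is +1 or -1,
# so ||S_n||_r^2 = d^(2/r) for ANY realization of the signs.
rng = np.random.default_rng(0)
d = 16
r = 4.0
eps = rng.choice([-1.0, 1.0], size=d)          # Rademacher signs eps_1..eps_d
S = eps                                         # S_n = sum_i eps_i b_i
norm_S_sq = np.sum(np.abs(S) ** r) ** (2 / r)   # ||S_n||_r^2 = d^(2/r)
ratio = norm_S_sq / d                           # sum_i E||X_i||_r^2 = n = d
print(ratio, d ** (2 / r - 1))                  # both equal d^(2/r - 1)
```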

### Example 1.2 (A lower bound in $ℓ∞d$)

Let $X_1, X_2, X_3, \ldots$ be independent random vectors, each uniformly distributed on $\{-1, 1\}^d$. Then $\mathbb{E}X_i = 0$ and $\|X_i\|_\infty = 1$. On the other hand, according to the Central Limit Theorem, $n^{-1/2}S_n$ converges in distribution as $n \to \infty$ to a random vector $Z = (Z_j)_{j=1}^d$ with independent, standard Gaussian components, $Z_j \sim N(0, 1)$. Hence

$\sup_{n\ge1}\frac{\mathbb{E}\|S_n\|_\infty^2}{\sum_{i=1}^n \mathbb{E}\|X_i\|_\infty^2} = \sup_{n\ge1}\mathbb{E}\|n^{-1/2}S_n\|_\infty^2 \ge \mathbb{E}\|Z\|_\infty^2 = \mathbb{E}\max_{1\le j\le d}Z_j^2.$

But it is well known that $\max_{1\le j\le d}|Z_j| - \sqrt{2\log d} \to_p 0$ as $d \to \infty$. Thus candidates $K(d)$ for the constant in (4) have to satisfy

$\liminf_{d\to\infty}\frac{K(d)}{2\log d} \ge 1.$
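A quick Monte Carlo experiment (ours, not from the paper; sample sizes are arbitrary choices) illustrates the Gaussian maximum behaving as claimed: $\mathbb{E}\max_j Z_j^2$ sits just below $2\log d$ and approaches it slowly.

```python
import numpy as np

# Monte Carlo illustration of Example 1.2 (sketch, ours): for Z with d
# independent N(0,1) components, E max_j Z_j^2 is close to, and for d >= 3
# provably below, 2 log d.
rng = np.random.default_rng(1)
d, n_sim = 10_000, 2_000
Z = rng.standard_normal((n_sim, d))
W = np.abs(Z).max(axis=1)            # W = max_j |Z_j| per simulation
est = float((W ** 2).mean())         # estimate of E max_j Z_j^2
print(est / (2 * np.log(d)))         # ratio is below 1, approaching 1 slowly
```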

At least three different methods have been developed to prove inequalities of the form given by (4). The three approaches known to us are:

1. deterministic inequalities for norms;
2. probabilistic methods for Banach spaces;
3. empirical process methods.

Approach (a) was used by Nemirovski [14] to show that in the space $\ell_r^d$ with d ≥ 2, inequality (4) holds with $K = C\min(r, \log d)$ for some universal (but unspecified) constant C. In view of Example 1.2, this constant has the correct order of magnitude if r = ∞. For statistical applications see Greenshtein and Ritov [7]. Approach (b) uses special moment inequalities from probability theory on Banach spaces which involve nonrandom vectors in $\mathbb{B}$ and Rademacher variables as introduced in Example 1.1. Empirical process theory (approach (c)) in general deals with sums of independent random elements in infinite-dimensional Banach spaces. By means of chaining arguments, metric entropies, and approximation arguments, "maximal inequalities" for such random sums are built from basic inequalities for sums of independent random variables or finite-dimensional random vectors, in particular "exponential inequalities"; see, e.g., Dudley [4], van der Vaart and Wellner [26], Pollard [21], de la Peña and Giné [3], or van de Geer [25].

Our main goal in this paper is to compare the inequalities resulting from these different approaches and to refine or improve the constants K obtainable by each method. The remainder of this paper is organized as follows: In Section 2 we review several deterministic inequalities for norms and, in particular, key arguments of Nemirovski [14]. Our exposition includes explicit and improved constants. While finishing the present paper we became aware of then-unpublished work of Nemirovski [15] and of Juditsky and Nemirovski [12], who also improved some inequalities of [14]. Rio [22] uses similar methods in a different context. In Section 3 we present inequalities of type (4) which follow from type and cotype inequalities developed in probability theory on Banach spaces. In addition, we provide and utilize a new type inequality for the normed space $\ell_\infty^d$. To do so we utilize, among other tools, exponential inequalities of Hoeffding [9] and Pinelis [17]. In Section 4 we follow approach (c) and treat $\ell_\infty^d$ by means of a truncation argument and Bernstein's exponential inequality. Finally, in Section 5 we compare the inequalities resulting from these three approaches. In that section we relax the assumption that $\mathbb{E}X_i = 0$ for a more thorough understanding of the differences between the three approaches. Most proofs are deferred to Section 6.

## 2. Nemirovski's Approach: Deterministic Inequalities for Norms

In this section we review and refine inequalities of type (4) based on deterministic inequalities for norms. The considerations for $(\mathbb{B}, \|\cdot\|) = \ell_r^d$ follow closely the arguments of [14].

### 2.1. Some Inequalities for $\mathbb{R}^d$ and the Norms $\|\cdot\|_r$

Throughout this subsection let $\mathbb{B} = \mathbb{R}^d$, equipped with one of the norms $\|\cdot\|_r$ defined in (5). For $x \in \mathbb{R}^d$ we think of x as a column vector and write $x^\top$ for the corresponding row vector. Thus $xx^\top$ is a $d \times d$ matrix with entries $x_ix_j$ for $i, j \in \{1, \ldots, d\}$.

#### A first solution

Recall that for any x d,

$\|x\|_r \le \|x\|_q \le d^{1/q-1/r}\|x\|_r \quad\text{for } 1 \le q \le r \le \infty.$
(6)

Moreover, as mentioned before,

$\mathbb{E}\|S_n\|_2^2 = \sum_{i=1}^n \mathbb{E}\|X_i\|_2^2.$

Thus for 1 ≤ q < 2,

$\mathbb{E}\|S_n\|_q^2 \le \bigl(d^{1/q-1/2}\bigr)^2\,\mathbb{E}\|S_n\|_2^2 = d^{2/q-1}\sum_{i=1}^n \mathbb{E}\|X_i\|_2^2 \le d^{2/q-1}\sum_{i=1}^n \mathbb{E}\|X_i\|_q^2,$

whereas for 2 < r ≤ ∞,

$\mathbb{E}\|S_n\|_r^2 \le \mathbb{E}\|S_n\|_2^2 = \sum_{i=1}^n \mathbb{E}\|X_i\|_2^2 \le d^{1-2/r}\sum_{i=1}^n \mathbb{E}\|X_i\|_r^2.$

Thus we may conclude that (4) holds with

$K = \tilde K(d,r) := \begin{cases} d^{2/r-1} & \text{if } 1 \le r \le 2,\\ d^{1-2/r} & \text{if } 2 \le r \le \infty.\end{cases}$

Example 1.1 shows that this constant $\tilde K(d, r)$ is indeed optimal for 1 ≤ r ≤ 2.

#### A refinement for r > 2

In what follows we shall replace $\tilde K(d, r) = d^{1-2/r}$ with substantially smaller constants. The main ingredient is the following result:

##### Lemma 2.1

For arbitrary fixed $r \in [2, \infty)$ and $x \in \mathbb{R}^d \setminus \{0\}$ let

$h(x) := 2\|x\|_r^{2-r}\bigl(|x_i|^{r-2}x_i\bigr)_{i=1}^d,$

while $h(0) := 0$. Then for arbitrary $x, y \in \mathbb{R}^d$,

$\|x\|_r^2 + h(x)^\top y \le \|x+y\|_r^2 \le \|x\|_r^2 + h(x)^\top y + (r-1)\|y\|_r^2.$
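As a sanity check, the two-sided bound of Lemma 2.1 can be tested on random vectors. This sketch (ours; the choice r = 3, d = 5 is arbitrary) evaluates both inequalities on many random pairs (x, y).

```python
import numpy as np

# Numerical sanity check of Lemma 2.1 for r = 3 (illustration, ours).
rng = np.random.default_rng(2)
d, r = 5, 3.0

def norm_r(x):
    return np.sum(np.abs(x) ** r) ** (1.0 / r)

def h(x):
    # h(x) = 2 ||x||_r^(2-r) * (|x_i|^(r-2) x_i)_i, with h(0) = 0
    nx = norm_r(x)
    if nx == 0.0:
        return np.zeros_like(x)
    return 2.0 * nx ** (2.0 - r) * np.abs(x) ** (r - 2.0) * x

violations = 0
for _ in range(1000):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    lower = norm_r(x) ** 2 + h(x) @ y
    mid = norm_r(x + y) ** 2
    upper = lower + (r - 1.0) * norm_r(y) ** 2
    if not (lower <= mid + 1e-9 and mid <= upper + 1e-9):
        violations += 1
print(violations)  # 0: both inequalities hold on every trial
```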

[16] and [14] stated Lemma 2.1 with the factor r − 1 on the right side replaced with Cr for some (absolute) constant C > 1. Lemma 2.1, which is a special case of the more general Lemma 2.4 in the next subsection, may be applied to the partial sums $S_0 := 0$ and $S_k := \sum_{i=1}^k X_i$, $1 \le k \le n$, to show that for $2 \le r < \infty$,

$\mathbb{E}\|S_k\|_r^2 \le \mathbb{E}\bigl(\|S_{k-1}\|_r^2 + h(S_{k-1})^\top X_k + (r-1)\|X_k\|_r^2\bigr) = \mathbb{E}\|S_{k-1}\|_r^2 + \mathbb{E}h(S_{k-1})^\top\mathbb{E}X_k + (r-1)\mathbb{E}\|X_k\|_r^2 = \mathbb{E}\|S_{k-1}\|_r^2 + (r-1)\mathbb{E}\|X_k\|_r^2,$

and inductively we obtain a second candidate for K in (4):

$\mathbb{E}\|S_n\|_r^2 \le (r-1)\sum_{i=1}^n \mathbb{E}\|X_i\|_r^2 \quad\text{for } 2 \le r < \infty.$

Finally, we apply (6) again: For 2 ≤ qr ≤ ∞ with q < ∞,

$\mathbb{E}\|S_n\|_r^2 \le \mathbb{E}\|S_n\|_q^2 \le (q-1)\sum_{i=1}^n \mathbb{E}\|X_i\|_q^2 \le (q-1)\,d^{2/q-2/r}\sum_{i=1}^n \mathbb{E}\|X_i\|_r^2.$

This inequality entails our first (q = 2) and second (q = r < ∞) preliminary result, and we arrive at the following refinement:

##### Theorem 2.2

For arbitrary $r \in [2, \infty]$,

$\mathbb{E}\|S_n\|_r^2 \le K_{\mathrm{Nem}}(d,r)\sum_{i=1}^n \mathbb{E}\|X_i\|_r^2$

with

$K_{\mathrm{Nem}}(d,r) := \inf_{q\in[2,r]\cap\mathbb{R}}\,(q-1)\,d^{2/q-2/r}.$

This constant $K_{\mathrm{Nem}}(d, r)$ satisfies the (in)equalities

$K_{\mathrm{Nem}}(d,r)\ \begin{cases} = d^{1-2/r} & \text{if } d \le 7,\\ \le r-1 & \text{if } r < \infty,\\ \le 2e\log d - e & \text{if } d \ge 3,\end{cases}$

and

$K_{\mathrm{Nem}}(d,\infty) \ge 2e\log d - 3e.$

##### Corollary 2.3

In the case $(\mathbb{B}, \|\cdot\|) = \ell_\infty^d$ with d ≥ 3, inequality (4) holds with constant $K = 2e\log d - e$. If the $X_i$'s are also identically distributed, then

$\mathbb{E}\|n^{-1/2}S_n\|_\infty^2 \le (2e\log d - e)\,\mathbb{E}\|X_1\|_\infty^2.$

Note that

$\lim_{d\to\infty}\frac{K_{\mathrm{Nem}}(d,\infty)}{2\log d} = \lim_{d\to\infty}\frac{2e\log d - e}{2\log d} = e.$

Thus Example 1.2 entails that for large dimension d, the constants $K_{\mathrm{Nem}}(d, \infty)$ and $2e\log d - e$ are optimal up to a factor close to $e \approx 2.7183$.
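The bounds of Theorem 2.2 can be checked numerically. The sketch below (ours; the grid and the test dimensions are arbitrary choices) minimizes $(q-1)d^{2/q}$ over q and compares the result with $2e\log d - 3e$ and $2e\log d - e$.

```python
import numpy as np

# Grid evaluation of K_Nem(d, infinity) (illustration, ours): minimize
# (q - 1) d^(2/q) over q in [2, 1000] and compare with the two bounds
# 2e log d - 3e <= K_Nem(d, inf) <= 2e log d - e from this section (d >= 3).
def k_nem_inf(d):
    qs = np.linspace(2.0, 1000.0, 200_001)
    return float(np.min((qs - 1.0) * d ** (2.0 / qs)))

for d in (10, 1000, 10 ** 6):
    val = k_nem_inf(d)
    lo = 2 * np.e * np.log(d) - 3 * np.e
    hi = 2 * np.e * np.log(d) - np.e
    print(d, round(lo, 2), round(val, 2), round(hi, 2))
```

The minimizer sits near $r_d \approx 2\log d$, so the upper bound $2e\log d - e$ is nearly attained for large d.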

### 2.2. Arbitrary Lr-spaces

Lemma 2.1 is a special case of a more general inequality: Let (T, Σ, μ) be a σ-finite measure space, and for $1 \le r < \infty$ let $L_r(\mu)$ be the set of all measurable functions $f : T \to \mathbb{R}$ with finite norm

$\|f\|_r := \Bigl(\int |f|^r\,d\mu\Bigr)^{1/r},$

where two such functions are viewed as equivalent if they coincide almost everywhere with respect to μ. In what follows we investigate the functional

$f \mapsto V(f) := \|f\|_r^2$

on $L_r(\mu)$. Note that $(\mathbb{R}^d, \|\cdot\|_r)$ corresponds to $(L_r(\mu), \|\cdot\|_r)$ if we take T = {1, 2, …, d} equipped with counting measure μ.

Note that V(·) is convex; thus for fixed $f, g \in L_r(\mu)$, the function

$v(t) := V(f+tg) = \|f+tg\|_r^2, \quad t \in \mathbb{R},$

is convex with derivative

$v'(t) = v(t)^{1-r/2}\int 2\,|f+tg|^{r-2}(f+tg)\,g\,d\mu.$

By convexity of v it follows that

$V(f+g) - V(f) = v(1) - v(0) \ge v'(0) =: DV(f,g).$

This proves the lower bound in the following lemma. We will prove the upper bound in Section 6 by computation of v″ and application of Hölder's inequality.

#### Lemma 2.4

Let r ≥ 2. Then for arbitrary $f, g \in L_r(\mu)$,

$DV(f,g) = \int h(f)\,g\,d\mu \quad\text{with}\quad h(f) := 2\|f\|_r^{2-r}|f|^{r-2}f \in L_q(\mu),$

where q := r/(r − 1). Moreover,

$V(f) + DV(f,g) \le V(f+g) \le V(f) + DV(f,g) + (r-1)V(g).$

#### Remark 2.5

The upper bound for V(f + g) is sharp in the following sense: Suppose that μ(T) < ∞, and let $f, g_o : T \to \mathbb{R}$ be measurable such that $|f| \equiv |g_o| \equiv 1$ and $\int fg_o\,d\mu = 0$. Then our proof of Lemma 2.4 reveals that

$\frac{V(f+tg_o) - V(f) - DV(f,tg_o)}{V(tg_o)} \to r-1 \quad\text{as } t\to 0.$

#### Remark 2.6

If r = 2, Lemma 2.4 is well known and easily verified. Here the upper bound for V(f + g) is even an equality, i.e.,

$V(f+g) = V(f) + DV(f,g) + V(g).$

#### Remark 2.7

Lemma 2.4 improves on an inequality of [16]. After writing this paper we realized Lemma 2.4 is also proved by Pinelis [18]; see his (2.2) and Proposition 2.1, page 1680.

Lemma 2.4 leads directly to the following result:

#### Corollary 2.8

In the case $\mathbb{B} = L_r(\mu)$ for r ≥ 2, inequality (4) is satisfied with K = r − 1.

### 2.3. A Connection to Geometrical Functional Analysis

For any Banach space $(\mathbb{B}, \|\cdot\|)$ and Hilbert space $(\mathbb{H}, \langle\cdot,\cdot\rangle, \|\cdot\|)$, their Banach-Mazur distance $D(\mathbb{B}, \mathbb{H})$ is defined to be the infimum of

$\|T\| \cdot \|T^{-1}\|$

over all linear isomorphisms $T : \mathbb{B} \to \mathbb{H}$, where $\|T\|$ and $\|T^{-1}\|$ denote the usual operator norms

$\|T\| := \sup\{\|Tx\| : x \in \mathbb{B},\ \|x\| \le 1\}, \qquad \|T^{-1}\| := \sup\{\|T^{-1}y\| : y \in \mathbb{H},\ \|y\| \le 1\}.$

(If no such bijection exists, one defines $D(\mathbb{B}, \mathbb{H}) := \infty$.) Given such a bijection T,

$\mathbb{E}\|S_n\|^2 \le \|T^{-1}\|^2\,\mathbb{E}\|TS_n\|^2 = \|T^{-1}\|^2\sum_{i=1}^n\mathbb{E}\|TX_i\|^2 \le \|T^{-1}\|^2\|T\|^2\sum_{i=1}^n\mathbb{E}\|X_i\|^2.$

This leads to the following observation:

#### Corollary 2.9

For any Banach space $(\mathbb{B}, \|\cdot\|)$ and any Hilbert space $(\mathbb{H}, \langle\cdot,\cdot\rangle, \|\cdot\|)$ with finite Banach-Mazur distance $D(\mathbb{B}, \mathbb{H})$, inequality (4) is satisfied with $K = D(\mathbb{B}, \mathbb{H})^2$.

A famous result from geometrical functional analysis is John's theorem (see [24], [11]) for finite-dimensional normed spaces. It entails that $D(\mathbb{B}, \ell_2^{\dim\mathbb{B}}) \le \sqrt{\dim\mathbb{B}}$. This yields the following fact:

#### Corollary 2.10

For any normed space $(\mathbb{B}, \|\cdot\|)$ with finite dimension, inequality (4) is satisfied with $K = \dim(\mathbb{B})$.

Note that Example 1.1 with r = 1 provides an example where the constant $K = \dim(\mathbb{B})$ is optimal.

## 3. The probabilistic approach: type and cotype inequalities

### 3.1. Rademacher Type and Cotype Inequalities

Let $\{\varepsilon_i\}$ denote a sequence of independent Rademacher random variables. Let 1 ≤ p < ∞. A Banach space $\mathbb{B}$ with norm ‖ · ‖ is said to be of (Rademacher) type p if there is a constant $T_p$ such that for all finite sequences $\{x_i\}$ in $\mathbb{B}$,

$\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ix_i\Bigr\|^p \le T_p^p\sum_{i=1}^n\|x_i\|^p.$

Similarly, for 1 ≤ q < ∞, $\mathbb{B}$ is of (Rademacher) cotype q if there is a constant $C_q$ such that for all finite sequences $\{x_i\}$ in $\mathbb{B}$,

$\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ix_i\Bigr\|^q \ge C_q^{-q}\sum_{i=1}^n\|x_i\|^q.$

Ledoux and Talagrand [13, p. 247] note that type and cotype properties appear as dual notions: if a Banach space $\mathbb{B}$ is of type p, its dual space $\mathbb{B}'$ is of cotype q = p/(p − 1).

One of the basic results concerning Banach spaces with type p and cotype q is the following proposition:

#### Proposition 3.1

[13, Proposition 9.11, p. 248]. If $\mathbb{B}$ is of type p ≥ 1 with constant $T_p$, then

$\mathbb{E}\|S_n\|^p \le (2T_p)^p\sum_{i=1}^n\mathbb{E}\|X_i\|^p.$

If $\mathbb{B}$ is of cotype q ≥ 1 with constant $C_q$, then

$\mathbb{E}\|S_n\|^q \ge (2C_q)^{-q}\sum_{i=1}^n\mathbb{E}\|X_i\|^q.$

As shown in [13, p. 27], the Banach space $L_r(\mu)$ with 1 ≤ r < ∞ (cf. Section 2.2) is of type min(r, 2). Similarly, $L_r(\mu)$ is of cotype max(r, 2). If r ≥ 2 = p, explicit values for the constant $T_p$ in Proposition 3.1 can be obtained from the optimal constants in Khintchine's inequalities due to Haagerup [8].

#### Lemma 3.2

For 2 ≤ r < ∞, the space Lr (μ) is of type 2 with constant T2 = Br, where

$B_r := 2^{1/2}\Bigl(\frac{\Gamma((r+1)/2)}{\sqrt{\pi}}\Bigr)^{1/r}.$

#### Corollary 3.3

For $(\mathbb{B}, \|\cdot\|) = L_r(\mu)$, 2 ≤ r < ∞, inequality (4) is satisfied with $K = 4B_r^2$.

Note that $B_2 = 1$ and

$\frac{B_r}{\sqrt{r}} \to \frac{1}{\sqrt{e}} \quad\text{as } r \to \infty.$

Thus for large values of r, the conclusion of Corollary 3.3 is weaker than that of Corollary 2.8.

### 3.2. The Space $ℓ∞d$

The preceding results apply only to r < ∞, so the special space $\ell_\infty^d$ requires different arguments. First we deduce a new type inequality based on Hoeffding's [9] exponential inequality: if $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$ are independent Rademacher random variables, $a_1, a_2, \ldots, a_n$ are real numbers, and $v^2 := \sum_{i=1}^n a_i^2$, then the tail probabilities of the random variable $|\sum_{i=1}^n a_i\varepsilon_i|$ may be bounded as follows:

$\mathbb{P}\Bigl(\Bigl|\sum_{i=1}^n a_i\varepsilon_i\Bigr| \ge z\Bigr) \le 2\exp\Bigl(-\frac{z^2}{2v^2}\Bigr), \quad z \ge 0.$
(7)

At the heart of these tail bounds is the following exponential moment bound:

$\mathbb{E}\exp\Bigl(t\sum_{i=1}^n a_i\varepsilon_i\Bigr) \le \exp(t^2v^2/2), \quad t \in \mathbb{R}.$
(8)
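Hoeffding's bound (7) is easy to probe empirically. In this sketch (ours; the coefficients and sample sizes are arbitrary choices) the empirical tail frequencies of $\sum_i a_i\varepsilon_i$ stay below $2\exp(-z^2/2)$ at thresholds $zv$.

```python
import numpy as np

# Empirical illustration of Hoeffding's tail bound (7) (sketch, ours):
# simulate sum_i a_i eps_i and compare tail frequencies with 2 exp(-z^2/2)
# at thresholds z * v, where v^2 = sum_i a_i^2.
rng = np.random.default_rng(3)
n, n_sim = 50, 100_000
a = rng.standard_normal(n)                   # fixed real coefficients
v = float(np.sqrt(np.sum(a ** 2)))
eps = rng.choice([-1.0, 1.0], size=(n_sim, n))
T = eps @ a                                  # realizations of sum_i a_i eps_i
for z in (1.0, 2.0, 3.0):
    emp = float(np.mean(np.abs(T) >= z * v))
    bound = 2.0 * np.exp(-z ** 2 / 2.0)
    print(z, emp, bound)                     # empirical frequency <= bound
```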

From the latter bound we shall deduce the following type inequality in Section 6:

#### Lemma 3.4

The space $\ell_\infty^d$ is of type 2 with constant $\sqrt{2\log(2d)}$.

Using this upper bound together with Proposition 3.1 yields another Nemirovski-type inequality:

#### Corollary 3.5

For $(\mathbb{B}, \|\cdot\|) = \ell_\infty^d$, inequality (4) is satisfied with $K = K_{\mathrm{Type2}}(d, \infty) = 8\log(2d)$.

#### Refinements

Let $T_2(\ell_\infty^d)$ be the optimal type-2 constant for the space $\ell_\infty^d$. So far we know that $T_2(\ell_\infty^d) \le \sqrt{2\log(2d)}$. Moreover, by a modification of Example 1.2 one can show that

$T_2(\ell_\infty^d) \ge c_d := \Bigl(\mathbb{E}\max_{1\le j\le d}Z_j^2\Bigr)^{1/2}.$
(9)

The constants $c_d$ can be expressed or bounded in terms of the distribution function Φ of N(0, 1), i.e., $\Phi(z) = \int_{-\infty}^z\varphi(x)\,dx$ with $\varphi(x) = \exp(-x^2/2)/\sqrt{2\pi}$. Namely, with $W := \max_{1\le j\le d}|Z_j|$,

$c_d^2 = \mathbb{E}(W^2) = \mathbb{E}\int_0^\infty 2t\,1_{[t\le W]}\,dt = \int_0^\infty 2t\,\mathbb{P}(W\ge t)\,dt,$

and for any t > 0,

$\mathbb{P}(W\ge t)\ \begin{cases} = 1 - \mathbb{P}(W<t) = 1 - \bigl(2\Phi(t)-1\bigr)^d,\\ \le \min\bigl(1,\,2d(1-\Phi(t))\bigr).\end{cases}$

These considerations and various bounds for Φ will allow us to derive explicit bounds for cd.
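Since the $Z_j$ are independent, $\mathbb{P}(W \ge t) = 1 - (2\Phi(t)-1)^d$, so $c_d^2$ can be evaluated numerically from the tail-integral identity above. The following sketch (ours; grid parameters are arbitrary) uses trapezoidal integration.

```python
import math

# Numerical evaluation of c_d^2 = E max_j Z_j^2 (sketch, ours), using
# c_d^2 = integral_0^inf 2 t P(W >= t) dt, P(W >= t) = 1 - (2 Phi(t) - 1)^d.
def Phi(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def c_d_squared(d, t_max=15.0, steps=60_000):
    h = t_max / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        p = 1.0 - (2.0 * Phi(t) - 1.0) ** d
        w = 0.5 if k in (0, steps) else 1.0   # trapezoidal weights
        total += w * 2.0 * t * p * h
    return total

for d in (3, 100, 10_000):
    print(d, round(c_d_squared(d), 4), round(2 * math.log(d), 4))
```

Consistent with Lemma 3.6 below, the computed $c_d^2$ stays below $2\log d$ for each of these dimensions.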

On the other hand, Hoeffding's inequality (7) has been refined by Pinelis [17, 20] as follows:

$\mathbb{P}\Bigl(\Bigl|\sum_{i=1}^n a_i\varepsilon_i\Bigr| \ge z\Bigr) \le 2K\bigl(1-\Phi(z/v)\bigr), \quad z > 0,$
(10)

where K satisfies 3.18 ≤ K ≤ 3.22. This will be the main ingredient for refined upper bounds for $T_2(\ell_\infty^d)$. The next lemma summarizes our findings:

#### Lemma 3.6

The constants cd and $T2(ℓ∞d)$ satisfy the following inequalities:

$2\log d + h_1(d) \le c_d^2 \le \begin{cases} T_2(\ell_\infty^d)^2 \le 2\log d + h_2(d), & d \ge 1,\\ 2\log d, & d \ge 3,\\ 2\log d + h_3(d), & d \ge 1,\end{cases}$
(11)

where $h_2(d) \le 3$ for all d ≥ 1, $h_2(d)$ becomes negative for $d > 4.13795 \times 10^{10}$, $h_3(d)$ becomes negative for d ≥ 14, and $h_j(d) \sim -\log\log d$ as d → ∞ for j = 1, 2, 3.

In particular, one could replace $K_{\mathrm{Type2}}(d, \infty)$ in Corollary 3.5 with $8\log d + 4h_2(d)$.

## 4. The Empirical Process Approach: Truncation and Bernstein's Inequality

An alternative to Hoeffding's exponential tail inequality (7) is a classical exponential bound due to Bernstein (see, e.g., [2]): Let $Y_1, Y_2, \ldots, Y_n$ be independent random variables with mean zero such that $|Y_i| \le \kappa$. Then for $v^2 = \sum_{i=1}^n\operatorname{Var}(Y_i)$,

$\mathbb{P}\Bigl(\Bigl|\sum_{i=1}^nY_i\Bigr| \ge x\Bigr) \le 2\exp\Bigl(-\frac{x^2}{2(v^2+\kappa x/3)}\Bigr), \quad x > 0.$
(12)

We will not use this inequality itself but rather an exponential moment inequality underlying its proof:

### Lemma 4.1

For L > 0 define

$e(L) := \exp(1/L) - 1 - 1/L.$

Let Y be a random variable with mean zero and variance σ² such that $|Y| \le \kappa$. Then for any L > 0,

$\mathbb{E}\exp\Bigl(\frac{Y}{\kappa L}\Bigr) \le 1 + \frac{\sigma^2 e(L)}{\kappa^2} \le \exp\Bigl(\frac{\sigma^2 e(L)}{\kappa^2}\Bigr).$
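For intuition, the bound of Lemma 4.1 can be probed with a concrete bounded variable. This sketch (ours; the uniform distribution and the values of κ and L are arbitrary choices) uses Y uniform on [−κ, κ], for which σ² = κ²/3.

```python
import numpy as np

# Monte Carlo probe of Lemma 4.1 (sketch, ours) with Y ~ Uniform[-kappa, kappa],
# which has mean zero, |Y| <= kappa and variance sigma^2 = kappa^2 / 3.
rng = np.random.default_rng(4)
kappa = 2.0
sigma2 = kappa ** 2 / 3.0
Y = rng.uniform(-kappa, kappa, size=1_000_000)

def e(L):
    return np.exp(1.0 / L) - 1.0 - 1.0 / L

for L in (0.407, 0.5, 1.0):
    lhs = float(np.mean(np.exp(Y / (kappa * L))))
    rhs = 1.0 + sigma2 * e(L) / kappa ** 2
    print(L, round(lhs, 4), round(rhs, 4))    # lhs <= rhs in each case
```

The values L = 0.407 and L = 0.5 are the ones used in the proof of Theorem 4.3.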

With the latter exponential moment bound we can prove a moment inequality for random vectors in $\mathbb{R}^d$ with bounded components:

### Lemma 4.2

Suppose that $X_i = (X_{i,j})_{j=1}^d$ satisfies $\|X_i\|_\infty \le \kappa$, and let Γ be an upper bound for $\max_{1\le j\le d}\sum_{i=1}^n\operatorname{Var}(X_{i,j})$. Then for any L > 0,

$\mathbb{E}\|S_n\|_\infty^2 \le \Bigl(\kappa L\log(2d) + \frac{\Gamma Le(L)}{\kappa}\Bigr)^2.$

Now we return to our general random vectors $X_i \in \mathbb{R}^d$ with mean zero and $\mathbb{E}\|X_i\|_\infty^2 < \infty$. They are split into two random vectors via truncation: $X_i = X_i^{(a)} + X_i^{(b)}$ with

$X_i^{(a)} := 1_{[\|X_i\|_\infty \le \kappa_o]}X_i \quad\text{and}\quad X_i^{(b)} := 1_{[\|X_i\|_\infty > \kappa_o]}X_i$

for some constant $\kappa_o > 0$ to be specified later. Then we write $S_n = A_n + B_n$ with the centered random sums

$A_n := \sum_{i=1}^n\bigl(X_i^{(a)} - \mathbb{E}X_i^{(a)}\bigr) \quad\text{and}\quad B_n := \sum_{i=1}^n\bigl(X_i^{(b)} - \mathbb{E}X_i^{(b)}\bigr).$

The sum $A_n$ involves centered random vectors in $[-2\kappa_o, 2\kappa_o]^d$ and will be treated by means of Lemma 4.2, while $B_n$ will be bounded with elementary methods. Choosing the threshold $\kappa_o$ and the parameter L carefully yields the following theorem.

### Theorem 4.3

In the case $(\mathbb{B}, \|\cdot\|) = \ell_\infty^d$ for some d ≥ 1, inequality (4) holds with

$K = K_{\mathrm{TrBern}}(d,\infty) := \bigl(1 + 3.46\sqrt{\log(2d)}\bigr)^2.$

If each of the random vectors $X_i$ is symmetrically distributed around 0, one may even set

$K = K_{\mathrm{TrBern}}^{(\mathrm{symm})}(d,\infty) = \bigl(1 + 2.9\sqrt{\log(2d)}\bigr)^2.$

## 5. Comparisons

In this section we compare the three approaches just described for the space $\ell_\infty^d$. As to the random vectors $X_i$, we broaden our point of view and consider three different cases:

• General case: The random vectors $X_i$ are independent with $\mathbb{E}\|X_i\|_\infty^2 < \infty$ for all i.
• Centered case: In addition, $\mathbb{E}X_i = 0$ for all i.
• Symmetric case: In addition, $X_i$ is symmetrically distributed around 0 for all i.

In view of the general case, we reformulate inequality (4) as follows:

$\mathbb{E}\|S_n - \mathbb{E}S_n\|_\infty^2 \le K\sum_{i=1}^n\mathbb{E}\|X_i\|_\infty^2.$
(13)

One reason for this extension is that in some applications, particularly in connection with empirical processes, it is easier and more natural to work with uncentered summands Xi. Let us discuss briefly the consequences of this extension in the three frameworks:

### Nemirovski's approach

Between the centered and symmetric cases there is no difference. If (4) holds in the centered case for some K, then in the general case

$\mathbb{E}\|S_n - \mathbb{E}S_n\|_\infty^2 \le K\sum_{i=1}^n\mathbb{E}\|X_i - \mathbb{E}X_i\|_\infty^2 \le 4K\sum_{i=1}^n\mathbb{E}\|X_i\|_\infty^2.$

The latter inequality follows from the general fact that

$\mathbb{E}\|Y - \mathbb{E}Y\|^2 \le \mathbb{E}\bigl((\|Y\| + \|\mathbb{E}Y\|)^2\bigr) \le 2\mathbb{E}\|Y\|^2 + 2\|\mathbb{E}Y\|^2 \le 4\mathbb{E}\|Y\|^2.$

This looks rather crude at first glance, but in the case of the maximum norm and high dimension d, the factor 4 cannot be reduced. For let $Y \in \mathbb{R}^d$ have independent components $Y_1, \ldots, Y_d \in \{-1, 1\}$ with $\mathbb{P}(Y_j = 1) = 1 - \mathbb{P}(Y_j = -1) = p \in [1/2, 1)$. Then $\|Y\|_\infty \equiv 1$, while $\mathbb{E}Y = (2p-1)_{j=1}^d$ and

$\|Y - \mathbb{E}Y\|_\infty = \begin{cases} 2(1-p) & \text{if } Y_1 = \cdots = Y_d = 1,\\ 2p & \text{otherwise.}\end{cases}$

Hence

$\frac{\mathbb{E}\|Y-\mathbb{E}Y\|_\infty^2}{\mathbb{E}\|Y\|_\infty^2} = 4\bigl((1-p)^2p^d + p^2(1-p^d)\bigr).$

If we set $p = 1 - d^{-1/2}$ for d ≥ 4, then this ratio converges to 4 as d → ∞.

### The approach via Rademacher type-2 inequalities

The first part of Proposition 3.1, involving the Rademacher type constant $T_p$, remains valid if we drop the assumption that $\mathbb{E}X_i = 0$ and replace $S_n$ with $S_n - \mathbb{E}S_n$. Thus there is no difference between the general and centered cases. In the symmetric case, however, the factor $2^p$ in Proposition 3.1 becomes superfluous. Thus, if (4) holds with a certain constant K in the general and centered cases, we may replace K with K/4 in the symmetric case.

### The approach via truncation and Bernstein's inequality

Our proof for the centered case does not utilize that $\mathbb{E}X_i = 0$, so again there is no difference between the centered and general cases. However, in the symmetric case, the truncated random vectors $1_{[\|X_i\|_\infty\le\kappa_o]}X_i$ and $1_{[\|X_i\|_\infty>\kappa_o]}X_i$ are centered, too, which leads to the substantially smaller constant K in Theorem 4.3.

### Summaries and comparisons

Table 1 summarizes the constants K = K(d, ∞) we have found so far by the three different methods and for the three different cases. Table 2 contains the corresponding limits

$K^* := \lim_{d\to\infty}\frac{K(d,\infty)}{\log d}.$

Table 1: The different constants K(d, ∞). Table 2: The different limits K*.

Interestingly, there is no global winner among the three methods. But for the centered case, Nemirovski's approach yields asymptotically the smallest constants. In particular,

$\lim_{d\to\infty}\frac{K_{\mathrm{TrBern}}(d,\infty)}{K_{\mathrm{Nem}}(d,\infty)} = \frac{3.46^2}{2e} \doteq 2.20205, \qquad \lim_{d\to\infty}\frac{K_{\mathrm{Type2}}(d,\infty)}{K_{\mathrm{Nem}}(d,\infty)} = \frac{4}{e} \doteq 1.47152, \qquad \lim_{d\to\infty}\frac{K_{\mathrm{TrBern}}(d,\infty)}{K_{\mathrm{Type2}}(d,\infty)} = \frac{3.46^2}{8} \doteq 1.49645.$
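The three centered-case constants can also be tabulated directly for finite d. This sketch (ours; the dimensions and the grid for the Nemirovski infimum are arbitrary choices) reproduces the ordering behind these limits.

```python
import numpy as np

# Evaluate the three centered-case constants for l_infinity^d (sketch, ours):
# K_Nem from Theorem 2.2 (grid minimization of (q-1) d^(2/q)),
# K_Type2 = 8 log(2d) from Corollary 3.5, and
# K_TrBern = (1 + 3.46 sqrt(log 2d))^2 from Theorem 4.3.
def k_nem(d):
    qs = np.linspace(2.0, 1000.0, 100_001)
    return float(np.min((qs - 1.0) * d ** (2.0 / qs)))

def k_type2(d):
    return 8.0 * np.log(2.0 * d)

def k_trbern(d):
    return (1.0 + 3.46 * np.sqrt(np.log(2.0 * d))) ** 2

for d in (10 ** 2, 10 ** 4, 10 ** 6):
    print(d, round(k_nem(d), 1), round(k_type2(d), 1), round(k_trbern(d), 1))
```

For these dimensions the Nemirovski constant is the smallest and the truncation-plus-Bernstein constant the largest, matching the limit ratios above.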

The conclusion at this point seems to be that Nemirovski's approach and the type 2 inequalities yield better constants than Bernstein's inequality and truncation. Figure 1 shows the constants K(d, ∞) for the centered case over a certain range of dimensions d.

Figure 1: Comparison of K(d, ∞) obtained via the three proof methods: medium dashing (bottom) = Nemirovski; small and tiny dashing (middle) = type 2 inequalities; large dashing (top) = truncation and Bernstein inequality.

## 6. Proofs

### 6.1. Proofs for Section 2

#### Proof of (6)

In the case r = ∞, the asserted inequalities read

$\|x\|_\infty \le \|x\|_q \le d^{1/q}\|x\|_\infty \quad\text{for } 1 \le q \le \infty$

and are rather obvious. For 1 ≤ q < r < ∞, (6) is an easy consequence of Hölder's inequality.

#### Proof of Lemma 2.4

In the case r = 2, V(f + g) is equal to V(f) + DV(f, g) + V(g). If r ≥ 2 and $\|f\|_r = 0$, both DV(f, g) and $\int h(f)g\,d\mu$ are equal to zero, and the asserted inequalities reduce to the trivial statement that V(g) ≤ (r − 1)V(g). Thus let us restrict our attention to the case r > 2 and $\|f\|_r > 0$.

Note first that the mapping

$\mathbb{R} \ni t \mapsto h_t := |f+tg|^r$

is pointwise twice continuously differentiable with derivatives

$\dot h_t = r|f+tg|^{r-1}\operatorname{sign}(f+tg)\,g = r|f+tg|^{r-2}(f+tg)\,g, \qquad \ddot h_t = r(r-1)|f+tg|^{r-2}g^2.$

By means of the inequality $|x+y|^b \le 2^{b-1}\bigl(|x|^b + |y|^b\bigr)$ for real numbers x, y and b ≥ 1, a consequence of Jensen's inequality, we can conclude that for any bound $t_o > 0$,

$\max_{|t|\le t_o}|\dot h_t| \le r2^{r-2}\bigl(|f|^{r-1}|g| + t_o^{r-1}|g|^r\bigr), \qquad \max_{|t|\le t_o}|\ddot h_t| \le r(r-1)2^{r-3}\bigl(|f|^{r-2}|g|^2 + t_o^{r-2}|g|^r\bigr).$

The latter two envelope functions belong to L1(μ). This follows from Hölder's inequality which we rephrase for our purposes in the form

$\int|f|^{(1-\lambda)r}|g|^{\lambda r}\,d\mu \le \|f\|_r^{(1-\lambda)r}\|g\|_r^{\lambda r} \quad\text{for } 0 \le \lambda \le 1.$
(14)

Hence we may conclude via dominated convergence that

$t \mapsto \tilde v(t) := \|f+tg\|_r^r$

is twice continuously differentiable with derivatives

$\tilde v'(t) = r\int|f+tg|^{r-2}(f+tg)\,g\,d\mu, \qquad \tilde v''(t) = r(r-1)\int|f+tg|^{r-2}g^2\,d\mu.$

This entails that

$t \mapsto v(t) := V(f+tg) = \tilde v(t)^{2/r}$

is continuously differentiable with derivative

$v'(t) = (2/r)\,\tilde v(t)^{2/r-1}\,\tilde v'(t) = 2\,\tilde v(t)^{2/r-1}\int|f+tg|^{r-2}(f+tg)\,g\,d\mu = \int h(f+tg)\,g\,d\mu.$

For t = 0 this entails the asserted expression for DV(f, g). Moreover, v(t) is twice continuously differentiable on the set $\{t : \|f+tg\|_r > 0\}$, which equals either $\mathbb{R}$ or $\mathbb{R}\setminus\{t_o\}$ for some $t_o \ne 0$. On this set the second derivative equals

$v''(t) = (2/r)\,\tilde v(t)^{2/r-1}\,\tilde v''(t) + (2/r)(2/r-1)\,\tilde v(t)^{2/r-2}\,\tilde v'(t)^2 = 2(r-1)\int\frac{|f+tg|^{r-2}}{\|f+tg\|_r^{r-2}}\,g^2\,d\mu - 2(r-2)\Bigl(\int\frac{|f+tg|^{r-2}(f+tg)}{\|f+tg\|_r^{r-1}}\,g\,d\mu\Bigr)^2 \le 2(r-1)\int\Bigl|\frac{f+tg}{\|f+tg\|_r}\Bigr|^{r-2}|g|^2\,d\mu \le 2(r-1)\|g\|_r^2 = 2(r-1)V(g)$

by virtue of Hölder's inequality (14) with λ = 2/r. Consequently, by using

$v'(t) - v'(0) = \int_0^t v''(s)\,ds \le 2(r-1)V(g)\,t,$

we find that

$V(f+g) - V(f) - DV(f,g) = v(1) - v(0) - v'(0) = \int_0^1\bigl(v'(t)-v'(0)\bigr)\,dt \le 2(r-1)V(g)\int_0^1 t\,dt = (r-1)V(g).$

#### Proof of Theorem 2.2

The first part is an immediate consequence of the considerations preceding the theorem. It remains to prove the (in)equalities and expansion for $K_{\mathrm{Nem}}(d, r)$. Note that $K_{\mathrm{Nem}}(d, r)$ is the infimum of $h(q)\,d^{-2/r}$ over all real $q \in [2, r]$, where $h(q) := (q-1)d^{2/q}$ satisfies the equation

$h'(q) = \frac{d^{2/q}}{q^2}\Bigl((q-\log d)^2 - (\log d - 2)\log d\Bigr).$

Since $7 < e^2 < 8$, this shows that h is strictly increasing on [2, ∞) if d ≤ 7. Hence

$K_{\mathrm{Nem}}(d,r) = h(2)\,d^{-2/r} = d^{1-2/r} \quad\text{if } d \le 7.$

For d ≥ 8, one can easily show that $\log d - \sqrt{(\log d - 2)\log d} < 2$, so that h is strictly decreasing on $[2, r_d]$ and strictly increasing on $[r_d, \infty)$, where

$r_d := \log d + \sqrt{(\log d - 2)\log d}\ \begin{cases} < 2\log d,\\ > 2\log d - 2.\end{cases}$

Thus for d ≥ 8,

$K_{\mathrm{Nem}}(d,r) = \begin{cases} h(r)\,d^{-2/r} = r-1 < 2\log d - 1 & \text{if } r \le r_d,\\ h(r_d)\,d^{-2/r} \le h(2\log d) = 2e\log d - e & \text{if } r \ge r_d.\end{cases}$

Moreover, one can verify numerically that $K_{\mathrm{Nem}}(d, r) \le d \le 2e\log d - e$ for 3 ≤ d ≤ 7.

Finally, for d ≥ 8, the inequalities $r_d' := 2\log d - 2 < r_d < 2\log d =: r_d''$ yield

$K_{\mathrm{Nem}}(d,\infty) = h(r_d) \ge (r_d'-1)\,d^{2/r_d''} = 2e\log d - 3e,$

and for 1 ≤ d ≤ 7, the inequality $d = K_{\mathrm{Nem}}(d, \infty) \ge 2e\log d - 3e$ is easily verified.

### 6.2. Proofs for Section 3

#### Proof of Lemma 3.2

The following proof is standard; see, e.g., [1, p. 160], [13, p. 247]. Let $x_1, \ldots, x_n$ be fixed functions in $L_r(\mu)$. Then by [8], for any $t \in T$,

$\Bigl\{\mathbb{E}\Bigl|\sum_{i=1}^n\varepsilon_ix_i(t)\Bigr|^r\Bigr\}^{1/r} \le B_r\Bigl(\sum_{i=1}^n|x_i(t)|^2\Bigr)^{1/2}.$
(15)

To use inequality (15) for finding an upper bound for the type constant for Lr, rewrite it as

$\mathbb{E}\Bigl|\sum_{i=1}^n\varepsilon_ix_i(t)\Bigr|^r \le B_r^r\Bigl(\sum_{i=1}^n|x_i(t)|^2\Bigr)^{r/2}.$

It follows from Fubini's theorem and the previous inequality that

$\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ix_i\Bigr\|_r^r = \mathbb{E}\int\Bigl|\sum_{i=1}^n\varepsilon_ix_i(t)\Bigr|^r\,d\mu(t) = \int\mathbb{E}\Bigl|\sum_{i=1}^n\varepsilon_ix_i(t)\Bigr|^r\,d\mu(t) \le B_r^r\int\Bigl(\sum_{i=1}^n|x_i(t)|^2\Bigr)^{r/2}\,d\mu(t).$

Using the triangle inequality (or Minkowski's inequality), we obtain

$\Bigl\{\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ix_i\Bigr\|_r^r\Bigr\}^{2/r} \le B_r^2\Bigl\{\int\Bigl(\sum_{i=1}^n|x_i(t)|^2\Bigr)^{r/2}\,d\mu(t)\Bigr\}^{2/r} \le B_r^2\sum_{i=1}^n\Bigl(\int|x_i(t)|^r\,d\mu(t)\Bigr)^{2/r} = B_r^2\sum_{i=1}^n\|x_i\|_r^2.$

Furthermore, since $g(v) = v^{2/r}$ is a concave function of v ≥ 0, the last display implies that

$\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ix_i\Bigr\|_r^2 \le \Bigl\{\mathbb{E}\Bigl\|\sum_{i=1}^n\varepsilon_ix_i\Bigr\|_r^r\Bigr\}^{2/r} \le B_r^2\sum_{i=1}^n\|x_i\|_r^2.$

#### Proof of Lemma 3.4

For 1 ≤ i ≤ n let $x_i = (x_{im})_{m=1}^d$ be an arbitrary fixed vector in $\mathbb{R}^d$, and set $S := \sum_{i=1}^n\varepsilon_ix_i$. Further let $S_m$ be the mth component of S with variance $v_m^2 := \sum_{i=1}^nx_{im}^2$, and define $v^2 := \max_{1\le m\le d}v_m^2$, which is not greater than $\sum_{i=1}^n\|x_i\|_\infty^2$. It suffices to show that

$\mathbb{E}\|S\|_\infty^2 \le 2\log(2d)\,v^2.$

To this end note first that h : [0, ∞) → [1, ∞) with

$h(t) := \cosh(t^{1/2}) = \sum_{k=0}^\infty\frac{t^k}{(2k)!}$

is bijective, increasing, and convex. Hence its inverse function $h^{-1} : [1, \infty) \to [0, \infty)$ is increasing and concave, and one easily verifies that

$h^{-1}(s) = \Bigl(\log\bigl(s + (s^2-1)^{1/2}\bigr)\Bigr)^2 \le \bigl(\log(2s)\bigr)^2.$

Thus it follows from Jensen's inequality that for arbitrary t > 0,

$\mathbb{E}\|S\|_\infty^2 = t^{-2}\,\mathbb{E}\,h^{-1}\bigl(\cosh(\|tS\|_\infty)\bigr) \le t^{-2}\,h^{-1}\bigl(\mathbb{E}\cosh(\|tS\|_\infty)\bigr) \le t^{-2}\Bigl(\log\bigl(2\,\mathbb{E}\cosh(\|tS\|_\infty)\bigr)\Bigr)^2.$

Moreover,

$\mathbb{E}\cosh(\|tS\|_\infty) = \mathbb{E}\max_{1\le m\le d}\cosh(tS_m) \le \sum_{m=1}^d\mathbb{E}\cosh(tS_m) \le d\exp(t^2v^2/2),$

according to (8), whence

$\mathbb{E}\|S\|_\infty^2 \le t^{-2}\bigl(\log\bigl(2d\exp(t^2v^2/2)\bigr)\bigr)^2 = \bigl(\log(2d)/t + tv^2/2\bigr)^2.$

Now the assertion follows if we set $t = \sqrt{2\log(2d)/v^2}$.

#### Proof of (9)

We may replace the random sequence $\{X_i\}$ in Example 1.2 with the random sequence $\{\varepsilon_iX_i\}$, where $\{\varepsilon_i\}$ is a Rademacher sequence independent of $\{X_i\}$. Thereafter we condition on $\{X_i\}$, i.e., we view it as a deterministic sequence such that $n^{-1}\sum_{i=1}^nX_iX_i^\top$ converges to the identity matrix $I_d$ as n → ∞, by the strong law of large numbers. Now Lindeberg's version of the multivariate Central Limit Theorem shows that

$\sup_{n\ge1}\frac{\mathbb{E}\bigl\|\sum_{i=1}^n\varepsilon_iX_i\bigr\|_\infty^2}{\sum_{i=1}^n\|X_i\|_\infty^2} \ge \sup_{n\ge1}\mathbb{E}\Bigl\|n^{-1/2}\sum_{i=1}^n\varepsilon_iX_i\Bigr\|_\infty^2 \ge c_d^2.$

##### Inequalities for Φ

The subsequent results will rely on (10) and several inequalities for 1 − Φ(z). The first of these is:

$1 - \Phi(z) \le z^{-1}\varphi(z), \quad z > 0,$
(16)

which is known as Mills' ratio; see [6] and [19] for related results. The proof of this upper bound is easy: since $\varphi'(z) = -z\varphi(z)$ it follows that

$1-\Phi(z) = \int_z^\infty\varphi(t)\,dt \le \int_z^\infty\frac{t}{z}\,\varphi(t)\,dt = -\frac{1}{z}\int_z^\infty\varphi'(t)\,dt = \frac{\varphi(z)}{z}.$
(17)

A very useful pair of upper and lower bounds for 1 − Φ(z) is as follows:

$\frac{2}{z+\sqrt{z^2+4}}\,\varphi(z) \le 1-\Phi(z) \le \frac{4}{3z+\sqrt{z^2+8}}\,\varphi(z), \quad z > -1;$
(18)

the inequality on the left is due to Komatsu (see, e.g., [10, p. 17]), while the inequality on the right is an improvement of an earlier result of Komatsu due to Szarek and Werner [23].

#### Proof of Lemma 3.6

To prove the upper bound for $T_2(\ell_\infty^d)$, let $(\varepsilon_i)_{i\ge1}$ be a Rademacher sequence. With S and $S_m$ as in the proof of Lemma 3.4, for any δ > 0 we may write

$\mathbb{E}\|S\|_\infty^2 = \int_0^\infty 2t\,\mathbb{P}\Bigl(\sup_{1\le m\le d}|S_m|>t\Bigr)\,dt \le \delta^2 + \int_\delta^\infty 2t\,\mathbb{P}\Bigl(\sup_{1\le m\le d}|S_m|>t\Bigr)\,dt \le \delta^2 + \sum_{m=1}^d\int_\delta^\infty 2t\,\mathbb{P}(|S_m|>t)\,dt.$

Now by (10), with $v^2$ and $v_m^2$ as in the proof of Lemma 3.4, followed by Mills' ratio (16),

$\int_\delta^\infty 2t\,\mathbb{P}(|S_m|>t)\,dt \le \frac{4Kv_m}{\sqrt{2\pi}}\int_\delta^\infty e^{-t^2/(2v_m^2)}\,dt = 4Kv_m^2\int_\delta^\infty\frac{e^{-t^2/(2v_m^2)}}{\sqrt{2\pi}\,v_m}\,dt = 4Kv_m^2\bigl(1-\Phi(\delta/v_m)\bigr) \le 4Kv^2\bigl(1-\Phi(\delta/v)\bigr).$
(19)

Now instead of the Mills' ratio bound (16) for the tail of the normal distribution, we use the upper bound part of (18). This yields

$\int_\delta^\infty 2t\,\mathbb{P}(|S_m|>t)\,dt \le 4Kv^2\bigl(1-\Phi(\delta/v)\bigr) \le \frac{4cv^2}{3\delta/v+\sqrt{\delta^2/v^2+8}}\,e^{-\delta^2/(2v^2)},$

where we have defined $c := 4K/\sqrt{2\pi} = 12.88/\sqrt{2\pi}$, and hence

$\mathbb{E}\|S\|_\infty^2 \le \delta^2 + \frac{4cdv^2}{3\delta/v+\sqrt{\delta^2/v^2+8}}\,e^{-\delta^2/(2v^2)}.$

Taking

$\delta^2 = 2v^2\log\Bigl(\frac{cd}{2\sqrt{2\log(cd/2)}}\Bigr)$

gives

$\mathbb{E}\|S\|_\infty^2 \le v^2\Biggl\{2\log d + 2\log(c/2) - \log\bigl(2\log(cd/2)\bigr) + \frac{8\sqrt{2\log(cd/2)}}{3\sqrt{2\log\bigl(\frac{cd}{2\sqrt{2\log(cd/2)}}\bigr)}+\sqrt{2\log\bigl(\frac{cd}{2\sqrt{2\log(cd/2)}}\bigr)+8}}\Biggr\} =: v^2\bigl\{2\log d + h_2(d)\bigr\},$

where it is easily checked that $h_2(d) \le 3$ for all d ≥ 1. Moreover $h_2(d)$ is negative for $d > 4.13795 \times 10^{10}$. This completes the proof of the upper bound in (11).

To prove the lower bound for cd in (11), we use the lower bound of [13, Lemma 6.9, p. 157] (which is, in this form, due to Giné and Zinn [5]). This yields

$c_d^2 \ge \frac{\lambda}{1+\lambda}\,t_o^2 + \frac{1}{1+\lambda}\,d\int_{t_o}^\infty 4t\bigl(1-\Phi(t)\bigr)\,dt$
(20)

for any $t_o > 0$, where $\lambda = 2d(1-\Phi(t_o))$. By using Komatsu's lower bound (18), we find that

$\int_{t_o}^\infty t\bigl(1-\Phi(t)\bigr)\,dt \ge \int_{t_o}^\infty\frac{2t}{t+\sqrt{t^2+4}}\,\varphi(t)\,dt \ge \frac{2t_o}{t_o+\sqrt{t_o^2+4}}\int_{t_o}^\infty\varphi(t)\,dt = \frac{2}{1+\sqrt{1+4/t_o^2}}\bigl(1-\Phi(t_o)\bigr).$

Using this lower bound in (20) yields

$c_d^2 \ge \frac{\lambda}{1+\lambda}\,t_o^2 + \frac{1}{1+\lambda}\,d\,\frac{8}{1+\sqrt{1+4/t_o^2}}\bigl(1-\Phi(t_o)\bigr) = \frac{2d(1-\Phi(t_o))}{1+2d(1-\Phi(t_o))}\Bigl\{t_o^2 + \frac{4}{1+\sqrt{1+4/t_o^2}}\Bigr\} \ge \frac{\frac{4d}{t_o+\sqrt{t_o^2+4}}\,\varphi(t_o)}{1+\frac{4d}{t_o+\sqrt{t_o^2+4}}\,\varphi(t_o)}\Bigl\{t_o^2 + \frac{4}{1+\sqrt{1+4/t_o^2}}\Bigr\}.$
(21)

Now we let $c \equiv \sqrt{2/\pi}$ and δ > 0 and choose

$t_o^2 = 2\log\Bigl(\frac{cd}{(2\log(cd))^{(1+\delta)/2}}\Bigr).$

For this choice we see that $t_o \to \infty$ as d → ∞,

$4d\varphi(t_o) = \frac{4d}{\sqrt{2\pi}}\cdot\frac{(2\log(cd))^{(1+\delta)/2}}{cd} = 2\,(2\log(cd))^{(1+\delta)/2},$

and

$\frac{4d\varphi(t_o)}{t_o} = \frac{2\,(2\log(cd))^{(1+\delta)/2}}{\bigl\{2\log\bigl(cd/(2\log(cd))^{(1+\delta)/2}\bigr)\bigr\}^{1/2}} \to \infty$

as d → ∞, so the first term on the right-hand side of (21) converges to 1 as d → ∞; writing $A_d$ for this factor, the bound can be rewritten as

$A_d\Bigl\{t_o^2 + \frac{4}{1+\sqrt{1+4/t_o^2}}\Bigr\} = A_d\Bigl\{2\log\Bigl(\frac{cd}{(2\log(cd))^{(1+\delta)/2}}\Bigr) + \frac{4}{1+\sqrt{1+4/t_o^2}}\Bigr\} \sim 1\cdot\Bigl\{2\log d + 2\log c - (1+\delta)\log\bigl(2\log(cd)\bigr) + 2\Bigr\}.$

To prove the upper bounds for $c_d$, we will use the upper bound of [13, Lemma 6.9, p. 157] (which is, in this form, due to Giné and Zinn [5]). For every $t_o > 0$,

$c_d^2 \equiv \mathbb{E}\max_{1\le j\le d}|Z_j|^2 \le t_o^2 + d\int_{t_o}^\infty 2t\,\mathbb{P}(|Z_1|>t)\,dt = t_o^2 + 4d\int_{t_o}^\infty t\bigl(1-\Phi(t)\bigr)\,dt \le t_o^2 + 4d\int_{t_o}^\infty\varphi(t)\,dt\ \text{(by Mills' ratio)} = t_o^2 + 4d\bigl(1-\Phi(t_o)\bigr).$

Evaluating this bound at $t_o = \sqrt{2\log(d/\sqrt{2\pi})}$ and then using Mills' ratio again yields

$c_d^2 \le 2\log(d/\sqrt{2\pi}) + 4d\Bigl(1-\Phi\bigl(\sqrt{2\log(d/\sqrt{2\pi})}\bigr)\Bigr) \le 2\log d - \log(2\pi) + \frac{4d\,\varphi\bigl(\sqrt{2\log(d/\sqrt{2\pi})}\bigr)}{\sqrt{2\log(d/\sqrt{2\pi})}} = 2\log d - \log(2\pi) + \frac{4}{\sqrt{2\log(d/\sqrt{2\pi})}} \le 2\log d,$
(22)

where the last inequality holds if

$\frac{4}{\sqrt{2\log(d/\sqrt{2\pi})}} \le \log(2\pi),$

or equivalently if

$\log d \ge \frac{8}{(\log(2\pi))^2} + \frac{\log(2\pi)}{2} = 3.28735\ldots,$

and hence if $d \ge 27 > e^{3.28735\ldots} \approx 26.77$. The claimed inequality is easily verified numerically for d = 3, …, 26. (It fails for d = 2.) As can be seen from (22), $2\log d - \log(2\pi)$ gives a reasonable approximation to $\mathbb{E}\max_{1\le j\le d}Z_j^2$ for large d. Using the upper bound in (18) instead of the second application of Mills' ratio and choosing $t_o^2 = 2\log\bigl(cd/\sqrt{2\log(cd)}\bigr)$ with $c := \sqrt{2/\pi}$ yields the third bound for $c_d$ in (11) with

$h_3(d) = -\log(\pi) - \log\bigl(\log(cd)\bigr) + \frac{8}{3\sqrt{1-\frac{\log(2\log(cd))}{2\log(cd)}}+\sqrt{1-\frac{\log(2\log(cd))}{2\log(cd)}+\frac{4}{\log(cd)}}}.$

### 6.3. Proofs for Section 4

#### Proof of Lemma 4.1

It follows from $\mathbb{E}Y = 0$, the Taylor expansion of the exponential function, and the inequality $\mathbb{E}|Y|^m \le \sigma^2\kappa^{m-2}$ for m ≥ 2 that

$\mathbb{E}\exp\Bigl(\frac{Y}{\kappa L}\Bigr) = 1 + \mathbb{E}\Bigl\{\exp\Bigl(\frac{Y}{\kappa L}\Bigr) - 1 - \frac{Y}{\kappa L}\Bigr\} \le 1 + \sum_{m=2}^\infty\frac{1}{m!}\,\frac{\mathbb{E}|Y|^m}{(\kappa L)^m} \le 1 + \frac{\sigma^2}{\kappa^2}\sum_{m=2}^\infty\frac{1}{m!}\,\frac{1}{L^m} = 1 + \frac{\sigma^2e(L)}{\kappa^2}.$

#### Proof of Lemma 4.2

Applying Lemma 4.1 to the jth components Xi,j of Xi and Sn, j of Sn yields for all L > 0,

$\mathbb{E}\exp\Bigl(\pm\frac{S_{n,j}}{\kappa L}\Bigr) = \prod_{i=1}^n\mathbb{E}\exp\Bigl(\pm\frac{X_{i,j}}{\kappa L}\Bigr) \le \prod_{i=1}^n\exp\Bigl(\frac{\operatorname{Var}(X_{i,j})\,e(L)}{\kappa^2}\Bigr) \le \exp\Bigl(\frac{\Gamma e(L)}{\kappa^2}\Bigr).$

Hence

$\mathbb{E}\cosh\Bigl(\frac{\|S_n\|_\infty}{\kappa L}\Bigr) = \mathbb{E}\max_{1\le j\le d}\cosh\Bigl(\frac{S_{n,j}}{\kappa L}\Bigr) \le \sum_{j=1}^d\mathbb{E}\cosh\Bigl(\frac{S_{n,j}}{\kappa L}\Bigr) \le d\exp\Bigl(\frac{\Gamma e(L)}{\kappa^2}\Bigr).$

As in the proof of Lemma 3.4 we conclude that

$\mathbb{E}\|S_n\|_\infty^2 \le (\kappa L)^2\Bigl(\log\Bigl(2\,\mathbb{E}\cosh\Bigl(\frac{\|S_n\|_\infty}{\kappa L}\Bigr)\Bigr)\Bigr)^2 \le (\kappa L)^2\Bigl(\log(2d)+\frac{\Gamma e(L)}{\kappa^2}\Bigr)^2 = \Bigl(\kappa L\log(2d)+\frac{\Gamma Le(L)}{\kappa}\Bigr)^2,$

which is equivalent to the inequality stated in the lemma.

#### Proof of Theorem 4.3

For fixed κo > 0 we split Sn into An + Bn as described before. Let us bound the sum Bn first: For this term we have

$\|B_n\|_\infty \le \sum_{i=1}^n\Bigl\{1_{[\|X_i\|_\infty>\kappa_o]}\|X_i\|_\infty + \mathbb{E}\bigl(1_{[\|X_i\|_\infty>\kappa_o]}\|X_i\|_\infty\bigr)\Bigr\} = \sum_{i=1}^n\Bigl\{1_{[\|X_i\|_\infty>\kappa_o]}\|X_i\|_\infty - \mathbb{E}\bigl(1_{[\|X_i\|_\infty>\kappa_o]}\|X_i\|_\infty\bigr)\Bigr\} + 2\sum_{i=1}^n\mathbb{E}\bigl(1_{[\|X_i\|_\infty>\kappa_o]}\|X_i\|_\infty\bigr) =: B_{n1} + B_{n2}.$

Therefore, since $\mathbb{E}B_{n1} = 0$,

$\mathbb{E}\|B_n\|_\infty^2 \le \mathbb{E}(B_{n1}+B_{n2})^2 = \mathbb{E}B_{n1}^2 + B_{n2}^2 = \sum_{i=1}^n\operatorname{Var}\bigl(1_{[\|X_i\|_\infty>\kappa_o]}\|X_i\|_\infty\bigr) + 4\Bigl(\sum_{i=1}^n\mathbb{E}\bigl(\|X_i\|_\infty 1_{[\|X_i\|_\infty>\kappa_o]}\bigr)\Bigr)^2 \le \sum_{i=1}^n\mathbb{E}\|X_i\|_\infty^2 + 4\Bigl(\sum_{i=1}^n\frac{\mathbb{E}\|X_i\|_\infty^2}{\kappa_o}\Bigr)^2 = \Gamma + \frac{4\Gamma^2}{\kappa_o^2},$

where we define $\Gamma := \sum_{i=1}^n\mathbb{E}\|X_i\|_\infty^2$.

The first sum, An, may be bounded by means of Lemma 4.2 with κ = 2κo, utilizing the bound

$\operatorname{Var}\bigl(X_{i,j}^{(a)}\bigr) = \operatorname{Var}\bigl(1_{[\|X_i\|_\infty\le\kappa_o]}X_{i,j}\bigr) \le \mathbb{E}\bigl(1_{[\|X_i\|_\infty\le\kappa_o]}X_{i,j}^2\bigr) \le \mathbb{E}\|X_i\|_\infty^2.$

Thus

$\mathbb{E}\|A_n\|_\infty^2 \le \Bigl(2\kappa_oL\log(2d) + \frac{\Gamma Le(L)}{2\kappa_o}\Bigr)^2.$

Combining the bounds we find that

$\sqrt{\mathbb{E}\|S_n\|_\infty^2} \le \sqrt{\mathbb{E}\|A_n\|_\infty^2} + \sqrt{\mathbb{E}\|B_n\|_\infty^2} \le 2\kappa_oL\log(2d) + \frac{\Gamma Le(L)}{2\kappa_o} + \sqrt{\Gamma} + \frac{2\Gamma}{\kappa_o} = \alpha\kappa_o + \frac{\beta}{\kappa_o} + \sqrt{\Gamma},$

where $\alpha := 2L\log(2d)$ and $\beta := \Gamma(Le(L)+4)/2$. This bound is minimized if $\kappa_o = \sqrt{\beta/\alpha}$, with minimum value

$2\sqrt{\alpha\beta} + \sqrt{\Gamma} = \Bigl(1 + 2\sqrt{L^2e(L)+4L}\,\sqrt{\log(2d)}\Bigr)\sqrt{\Gamma},$

and for L = 0.407 the latter bound is not greater than

$\bigl(1 + 3.46\sqrt{\log(2d)}\bigr)\sqrt{\Gamma}.$

In the special case of symmetrically distributed random vectors $X_i$, our treatment of the sum $B_n$ does not change, but in the bound for $\mathbb{E}\|A_n\|_\infty^2$ one may replace $2\kappa_o$ with $\kappa_o$, because $\mathbb{E}X_i^{(a)} = 0$. Thus

$\sqrt{\mathbb{E}\|S_n\|_\infty^2} \le \kappa_oL\log(2d) + \frac{\Gamma Le(L)}{\kappa_o} + \sqrt{\Gamma} + \frac{2\Gamma}{\kappa_o} = \alpha'\kappa_o + \frac{\beta'}{\kappa_o} + \sqrt{\Gamma} \quad\Bigl(\text{with } \alpha' := L\log(2d),\ \beta' := \Gamma(Le(L)+2)\Bigr),$

and, if $\kappa_o = \sqrt{\beta'/\alpha'}$, the minimized bound equals

$\Bigl(1 + 2\sqrt{L^2e(L)+2L}\,\sqrt{\log(2d)}\Bigr)\sqrt{\Gamma}.$

For L = 0.5 the latter bound is not greater than

$\left(1 + 2.9\sqrt{\log(2d)}\right)\sqrt{\Gamma}.$
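As before, the symmetric-case constant can be checked numerically: $2\sqrt{L^2 e(L) + 2L} \approx 2.896$ at $L = 0.5$. The sketch below (an illustration only, not part of the proof) also runs a small Monte Carlo experiment with symmetric Rademacher vectors $X_i \in \{-1,+1\}^d$, for which $\Gamma = n$, confirming that the empirical value of $E\|S_n\|_\infty^2$ sits well below $(1 + 2.9\sqrt{\log(2d)})^2\,\Gamma$:

```python
import math
import random

def e(L):
    # e(L) = exp(1/L) - 1 - 1/L.
    return math.exp(1.0 / L) - 1.0 - 1.0 / L

# Symmetric-case coefficient at L = 0.5.
L = 0.5
coeff = 2.0 * math.sqrt(L ** 2 * e(L) + 2.0 * L)  # about 2.896
assert coeff <= 2.9

# Monte Carlo: n iid Rademacher vectors in {-1,+1}^d, so ||X_i||_inf = 1
# and Gamma = sum_i E||X_i||_inf^2 = n.
random.seed(1)
n, d, reps = 200, 50, 200
bound = (1.0 + 2.9 * math.sqrt(math.log(2 * d))) ** 2 * n

total = 0.0
for _ in range(reps):
    s = [0] * d
    for _i in range(n):
        for j in range(d):
            s[j] += random.choice((-1, 1))
    total += max(abs(v) for v in s) ** 2

assert total / reps <= bound
```

The bound has considerable slack here (the empirical value grows roughly like $2n\log d$), so the assertion is far from tight; the point is only that the inequality holds in a concrete instance.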

## Acknowledgments

The authors owe thanks to the referees for a number of suggestions which resulted in a considerable improvement in the article. The authors are also grateful to Ilya Molchanov for drawing their attention to Banach-Mazur distances, and to Stanislaw Kwapien and Vladimir Koltchinskii for pointers concerning type and cotype proofs and constants. This research was initiated during the opening week of the program on “Statistical Theory and Methods for Complex, High-Dimensional Data” held at the Isaac Newton Institute for Mathematical Sciences from 7 January to 27 June, 2008, and was made possible in part by the support of the Isaac Newton Institute for visits of various periods by Dümbgen, van de Geer, and Wellner. The research of Wellner was also supported in part by NSF grants DMS-0503822 and DMS-0804587. The research of Dümbgen and van de Geer was supported in part by the Swiss National Science Foundation.

## Biographies

- LUTZ DÜMBGEN received his Ph.D. from Heidelberg University in 1990. From 1990-1992 he was a Miller research fellow at the University of California at Berkeley. Thereafter he worked at the Universities of Bielefeld, Heidelberg, and Lübeck. Since 2002 he has been professor of statistics at the University of Bern. His research interests are nonparametric, multivariate, and computational statistics.

- SARA A. VAN DE GEER obtained her Ph.D. at Leiden University in 1987. She worked at the Center for Mathematics and Computer Science in Amsterdam, at the Universities of Bristol, Utrecht, Leiden, and Toulouse, and at the Eidgenössische Technische Hochschule in Zürich (2005-present). Her research areas are empirical processes, statistical learning, and statistical theory for high-dimensional data.

- MARK C. VERAAR received his Ph.D. from Delft University of Technology in 2006. In the year 2007 he was a postdoctoral researcher in the European RTN project “Phenomena in High Dimensions” at the IMPAN institute in Warsaw (Poland). In 2008 he spent one year as an Alexander von Humboldt fellow at the University of Karlsruhe (Germany). Since 2009 he has been Assistant Professor at Delft University of Technology (the Netherlands). His main research areas are probability theory, partial differential equations, and functional analysis.

- JON A. WELLNER received his B.S. from the University of Idaho in 1968 and his Ph.D. from the University of Washington in 1975. He got hooked on research in probability and statistics during graduate school at the UW in the early 1970s, and has enjoyed both teaching and research at the University of Rochester (1975–1983) and the University of Washington (1983-present). If not for probability theory and statistics, he might be a ski bum.

## Contributor Information

Lutz Dümbgen, Institute of Mathematical Statistics and Actuarial Science, University of Bern, Alpeneggstrasse 22, CH-3012 Bern, Switzerland.

Sara A. van de Geer, Seminar for Statistics, ETH Zurich, CH-8092 Zurich, Switzerland.

Mark C. Veraar, Delft Institute of Applied Mathematics, Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands.

Jon A. Wellner, Department of Statistics, Box 354322, University of Washington, Seattle, WA 98195-4322.

## References

1. Araujo A, Giné E. Wiley Series in Probability and Mathematical Statistics. John Wiley; New York: 1980. The Central Limit Theorem for Real and Banach Valued Random Variables.
2. Bennett G. Probability inequalities for the sum of independent random variables. J Amer Statist Assoc. 1962;57:33–45. doi: 10.2307/2282438.
3. de la Peña VH, Giné E. Probability and its Applications. Springer-Verlag; New York: 1999. Decoupling: From Dependence to Independence.
4. Dudley RM. Cambridge Studies in Advanced Mathematics. Vol. 63. Cambridge University Press; Cambridge: 1999. Uniform Central Limit Theorems.
5. Giné E, Zinn J. Central limit theorems and weak laws of large numbers in certain Banach spaces. Z Wahrsch Verw Gebiete. 1983;62:323–354. doi: 10.1007/BF00535258.
6. Gordon RD. Values of Mills' ratio of area to bounding ordinate and of the normal probability integral for large values of the argument. Ann Math Statistics. 1941;12:364–366. doi: 10.1214/aoms/1177731721.
7. Greenshtein E, Ritov Y. Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli. 2004;10:971–988. doi: 10.3150/bj/1106314846.
8. Haagerup U. The best constants in the Khintchine inequality. Studia Math. 1981;70:231–283.
9. Hoeffding W. Probability inequalities for sums of bounded random variables. J Amer Statist Assoc. 1963;58:13–30. doi: 10.2307/2282952.
10. Itô K, McKean HP., Jr . Classics in Mathematics. Springer-Verlag; Berlin: 1974. Diffusion Processes and their Sample Paths.
11. Johnson WB, Lindenstrauss J. Handbook of the Geometry of Banach Spaces. I. North-Holland; Amsterdam: 2001. Basic concepts in the geometry of Banach spaces; pp. 1–84.
12. Juditsky A, Nemirovski AS. Tech report. Georgia Institute of Technology; Atlanta, GA: 2008. Large deviations of vector-valued martingales in 2-smooth normed spaces.
13. Ledoux M, Talagrand M. Ergebnisse der Mathematik und ihrer Grenzgebiete 3. Folge / A Series of Modern Surveys in Mathematics. Vol. 23. Springer-Verlag; Berlin: 1991. Probability in Banach Spaces: Isoperimetry and Processes.
14. Nemirovski AS. Lectures on Probability Theory and Statistics (Saint-Flour, 1998), Lecture Notes in Mathematics. Vol. 1738. Springer; Berlin: 2000. Topics in non-parametric statistics; pp. 85–277.
15. Nemirovski AS. Regular Banach spaces and large deviations of random sums. 2004. working paper.
16. Nemirovski AS, Yudin DB. Problem Complexity and Method Efficiency in Optimization. John Wiley; Chichester, UK: 1983.
17. Pinelis I. Extremal probabilistic problems and Hotelling's T2 test under a symmetry condition. Ann Statist. 1994;22:357–368. doi: 10.1214/aos/1176325373.
18. Pinelis I. Optimum bounds for the distributions of martingales in Banach spaces. Ann Probab. 1994;22:1679–1706. doi: 10.1214/aop/1176988477.
19. Pinelis I. Monotonicity properties of the relative error of a Padé approximation for Mills' ratio. J Inequal Pure Appl Math. 2002;3/2
20. Pinelis I. Toward the best constant factor for the Rademacher-Gaussian tail comparison. ESAIM Probab Stat. 2007;11:412–426. doi: 10.1051/ps:2007027.
21. Pollard D. NSF-CBMS Regional Conference Series in Probability and Statistics. Vol. 2. Institute of Mathematical Statistics; Hayward, CA: 1990. Empirical Processes: Theory and Applications.
22. Rio E. Moment inequalities for sums of dependent random variables under projective conditions. J Theoret Probab. 2009;22:146–163. doi: 10.1007/s10959-008-0155-9.
23. Szarek SJ, Werner E. A nonsymmetric correlation inequality for Gaussian measure. J Multivariate Anal. 1999;68:193–211. doi: 10.1006/jmva.1998.1784.
24. Tomczak-Jaegermann N. Pitman Monographs and Surveys in Pure and Applied Mathematics. Vol. 38. Longman Scientific & Technical; Harlow, UK: 1989. Banach-Mazur Distances and Finite-Dimensional Operator Ideals.
25. van de Geer SA. Cambridge Series in Statistical and Probabilistic Mathematics. Vol. 6. Cambridge University Press; Cambridge: 2000. Applications of Empirical Process Theory.
26. van der Vaart AW, Wellner JA. Springer Series in Statistics. Springer-Verlag; New York: 1996. Weak Convergence and Empirical Processes: With Applications to Statistics.
