

Ann Stat. Author manuscript; available in PMC 2010 November 30.

Published in final edited form as:

Ann Stat. 2010 August 1; 38(4): 2282–2313.

doi: 10.1214/09-AOS781

PMCID: PMC2994588

NIHMSID: NIHMS251165

Jian Huang, Department of Statistics and Actuarial Science, 241 SH, University of Iowa, Iowa City, Iowa 52242, USA, Email: jian-huang@uiowa.edu


We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method.

Let (*Y _{i}*, *X _{i}*), *i* = 1, …, *n*, be independent and identically distributed copies of (*Y*, *X*), where *X* = (*X*_{1}, …, *X _{p}*)′ is a *p*-dimensional covariate vector. Consider the nonparametric additive model

$${Y}_{i}=\mu +{\displaystyle \sum _{j=1}^{p}{f}_{j}({X}_{\mathit{\text{ij}}})+{\epsilon}_{i}},$$

(1)

where µ is an intercept term, *X _{ij}* is the *j*th component of *X _{i}*, the *f _{j}*’s are unknown functions and ε_{i} is an unobserved random error with mean zero. We are interested in the case where *p* may be larger than *n* but only a small number of the *f _{j}*’s are nonzero, and the statistical problem is to determine which additive components are nonzero.

There has been much work on penalized methods for variable selection and estimation with high-dimensional data. Methods that have been proposed include the bridge estimator [Frank and Friedman (1993), Huang, Horowitz and Ma (2008)]; least absolute shrinkage and selection operator or Lasso [Tibshirani (1996)], the smoothly clipped absolute deviation (SCAD) penalty [Fan and Li (2001), Fan and Peng (2004)], and the minimum concave penalty [Zhang (2010)]. Much progress has been made in understanding the statistical properties of these methods. In particular, many authors have studied the variable selection, estimation and prediction properties of the Lasso in high-dimensional settings. See, for example, Meinshausen and Bühlmann (2006), Zhao and Yu (2006), Zou (2006), Bunea, Tsybakov and Wegkamp (2007), Meinshausen and Yu (2009), Huang, Ma and Zhang (2008), van de Geer (2008) and Zhang and Huang (2008), among others. All these authors assume a linear or other parametric model. In many applications, however, there is little a priori justification for assuming that the effects of covariates take a linear form or belong to any other known, finite-dimensional parametric family. For example, in studies of economic development, the effects of covariates on the growth of gross domestic product can be nonlinear. Similarly, there is evidence of nonlinearity in the gene expression data used in the empirical example in Section 5.

There is a large body of literature on estimation in nonparametric additive models. For example, Stone (1985, 1986) showed that additive spline estimators achieve the same optimal rate of convergence for a general fixed *p* as for *p* = 1. Horowitz and Mammen (2004) and Horowitz, Klemelä and Mammen (2006) showed that if *p* is fixed and mild regularity conditions hold, then oracle-efficient estimates of the *f _{j}*’s can be obtained by a two-step procedure. Here, oracle efficiency means that the estimator of each *f _{j}* has the same asymptotic distribution that it would have if all the other components were known.

Antoniadis and Fan (2001) proposed a group SCAD approach for regularization in wavelets approximation. Zhang et al. (2004) and Lin and Zhang (2006) have investigated the use of penalization methods in smoothing spline ANOVA with a fixed number of covariates. Zhang et al. (2004) used a Lasso-type penalty but did not investigate model-selection consistency. Lin and Zhang (2006) proposed the component selection and smoothing operator (COSSO) method for model selection and estimation in multivariate nonparametric regression models. For fixed *p*, they showed that the COSSO estimator in the additive model converges at the rate *n*^{−d/(2d+1)}, where *d* is the order of smoothness of the components. They also showed that, in the special case of a tensor product design, the COSSO correctly selects the nonzero additive components with high probability. Zhang and Lin (2006) considered the COSSO for nonparametric regression in exponential families.

Meier, van de Geer and Bühlmann (2009) treat variable selection in a nonparametric additive model in which the numbers of zero and nonzero *f _{j}*’s may both be larger than the sample size *n*.

Several other recent papers have also considered variable selection in nonparametric models. For example, Wang, Chen and Li (2007) and Wang and Xia (2008) considered the use of group Lasso and SCAD methods for model selection and estimation in varying coefficient models with a fixed number of coefficients and covariates. Bach (2007) applied what amounts to the group Lasso to a nonparametric additive model with a fixed number of covariates. He established model selection consistency under conditions that are considerably more complicated than the ones we require for a possibly diverging number of covariates.

In this paper, we propose to use the adaptive group Lasso for variable selection in (1) based on a spline approximation to the nonparametric components. With this approximation, each nonparametric component is represented by a linear combination of spline basis functions. Consequently, the problem of component selection becomes that of selecting the groups of coefficients in the linear combinations. It is natural to apply the group Lasso method, since it is desirable to take into account the grouping structure in the approximating model. To achieve model selection consistency, we apply the group Lasso iteratively as follows. First, we use the group Lasso to obtain an initial estimator and reduce the dimension of the problem. Then we use the adaptive group Lasso to select the final set of nonparametric components. The adaptive group Lasso is a simple generalization of the adaptive Lasso [Zou (2006)] to the method of the group Lasso [Yuan and Lin (2006)]. However, here we apply this approach to nonparametric additive modeling.

We assume that the number of nonzero *f _{j}*’s is fixed. This enables us to achieve model selection consistency under simple assumptions that are easy to interpret. We do not have to impose compatibility or irrepresentable conditions, nor do we need to assume conditions on the eigenvalues of certain matrices formed from the spline basis functions. We show that the group Lasso selects a model whose number of components is bounded with probability approaching one by a constant that is independent of the sample size. Then using the group Lasso result as the initial estimator, the adaptive group Lasso selects the correct model with probability approaching 1 and achieves the optimal rate of convergence for nonparametric estimation of an additive model.

The remainder of the paper is organized as follows. Section 2 describes the group Lasso and the adaptive group Lasso for variable selection in nonparametric additive models. Section 3 presents the asymptotic properties of these methods in “large *p*, small *n*” settings. Section 4 presents the results of simulation studies to evaluate the finite-sample performance of these methods. Section 5 provides an illustrative application, and Section 6 includes concluding remarks. Proofs of the results stated in Section 3 are given in the Appendix.

We describe a two-step approach that uses the group Lasso for variable selection based on a spline representation of each component in additive models. In the first step, we use the standard group Lasso to achieve an initial reduction of the dimension in the model and obtain an initial estimator of the nonparametric components. In the second step, we use the adaptive group Lasso to achieve consistent selection.

Suppose that each *X _{j}* takes values in [*a*, *b*], where *a* < *b* are finite numbers. We approximate each additive component by a polynomial spline on [*a*, *b*] with *m _{n}* basis functions, where *m _{n}* is allowed to grow with the sample size.

There exists a normalized B-spline basis {ϕ_{k}, 1 ≤ *k* ≤ *m _{n}*} for the space of such splines, so that each spline approximation *f _{nj}* can be written as

$${f}_{\mathit{\text{nj}}}(x)={\displaystyle \sum _{k=1}^{{m}_{n}}{\beta}_{\mathit{\text{jk}}}{\varphi}_{k}(x),\text{\hspace{1em}\hspace{1em}}1\le j\le p.}$$

(2)

Under suitable smoothness assumptions, each *f _{j}* can be well approximated by a function *f _{nj}* of the form (2).
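To make the series approximation in (2) concrete, the sketch below evaluates a B-spline basis by the Cox–de Boor recursion. The function name, knot layout and degree are illustrative; the paper's particular normalization and knot-number choices are not reproduced here.

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """Evaluate all B-spline basis functions of `degree` at the points x.

    `knots` is a clamped knot vector of length n_basis + degree + 1;
    returns an array of shape (len(x), n_basis)."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    # Degree-0 basis: indicators of the half-open knot intervals.
    B = np.array([(x >= t[i]) & (x < t[i + 1]) for i in range(len(t) - 1)],
                 dtype=float).T
    # Make the basis right-continuous at the last knot.
    last = np.nonzero(np.diff(t) > 0)[0].max()
    B[x == t[-1], last] = 1.0
    # Cox-de Boor recursion up to the requested degree.
    for d in range(1, degree + 1):
        B_new = np.zeros((len(x), len(t) - d - 1))
        for i in range(len(t) - d - 1):
            if t[i + d] > t[i]:
                B_new[:, i] += (x - t[i]) / (t[i + d] - t[i]) * B[:, i]
            if t[i + d + 1] > t[i + 1]:
                B_new[:, i] += ((t[i + d + 1] - x) /
                                (t[i + d + 1] - t[i + 1])) * B[:, i + 1]
        B = B_new
    return B

# Cubic splines on [0, 1] with three interior knots -> m_n = 7 basis functions.
knots = np.r_[[0.0] * 4, [0.25, 0.5, 0.75], [1.0] * 4]
Phi = bspline_basis(np.linspace(0, 1, 201), knots, degree=3)
print(Phi.shape)                    # (201, 7)
```

A quick sanity check on such a basis is that the functions are nonnegative and sum to one at every point of [*a*, *b*] (the partition-of-unity property of clamped B-splines).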

Let ${\Vert \mathbf{a}\Vert}_{2}\equiv {({\displaystyle {\sum}_{j=1}^{m}{|{a}_{j}|}^{2}})}^{1/2}$ denote the ℓ_{2} norm of any vector **a** ∈ ℝ^{m}. Let **β**_{nj} = (β_{j1}, …, β_{jmn})′ and ${\mathit{\beta}}_{n}=({\mathit{\beta}}_{n1}^{\prime},\dots ,{\mathit{\beta}}_{\mathit{\text{np}}}^{\prime})\prime $. Let *w _{n}* = (*w*_{n1}, …, *w*_{np})′ be a given vector of nonnegative weights. Consider the penalized least squares criterion

$${L}_{n}(\mu ,{\mathit{\beta}}_{n})={{\displaystyle \sum _{i=1}^{n}\left[{Y}_{i}-\mu -{\displaystyle \sum _{j=1}^{p}{\displaystyle \sum _{k=1}^{{m}_{n}}{\beta}_{\mathit{\text{jk}}}{\varphi}_{k}({X}_{\mathit{\text{ij}}}})}\right]}}^{2}+{\lambda}_{n}{\displaystyle \sum _{j=1}^{p}{w}_{\mathit{\text{nj}}}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2},}$$

(3)

where λ_{n} is a penalty parameter. We study the estimators that minimize *L _{n}*(µ, **β**_{n}) subject to the centering constraints

$$\sum _{i=1}^{n}{\displaystyle \sum _{k=1}^{{m}_{n}}{\beta}_{\mathit{\text{jk}}}{\varphi}_{k}({X}_{\mathit{\text{ij}}})=0,\text{\hspace{1em}\hspace{1em}}1\le j\le p.}$$

(4)

These centering constraints are sample analogs of the identifying restriction E *f _{j}*(*X _{j}*) = 0, 1 ≤ *j* ≤ *p*. To incorporate them, define the centered basis functions

$${\overline{\varphi}}_{\mathit{\text{jk}}}=\frac{1}{n}{\displaystyle \sum _{i=1}^{n}{\varphi}_{k}({X}_{\mathit{\text{ij}}}),\text{\hspace{1em}\hspace{1em}}{\psi}_{\mathit{\text{jk}}}(x)}={\varphi}_{k}(x)-{\overline{\varphi}}_{\mathit{\text{jk}}}.$$

(5)

For simplicity and without causing confusion, we simply write ψ_{k}(*x*) = ψ_{jk}(*x*). Define

$${Z}_{\mathit{\text{ij}}}=({\psi}_{1}({X}_{\mathit{\text{ij}}}),\dots ,{\psi}_{{m}_{n}}({X}_{\mathit{\text{ij}}}))\prime .$$

So, *Z _{ij}* consists of values of the (centered) basis functions at the *i*th observation of the *j*th covariate. Let **Z**_{j} = (*Z*_{1j}, …, *Z*_{nj})′ denote the *n* × *m _{n}* design matrix for the *j*th covariate, and let **Z** = (**Z**_{1}, …, **Z**_{p}) and **Y** = (*Y*_{1} − Ȳ, …, *Y _{n}* − Ȳ)′. With this notation, (3) can be written as

$${L}_{n}({\mathit{\beta}}_{n};\lambda )={\Vert \mathbf{Y}-\mathbf{Z}{\mathit{\beta}}_{n}\Vert}_{2}^{2}+{\lambda}_{n}{\displaystyle \sum _{j=1}^{p}{w}_{\mathit{\text{nj}}}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}.}$$

(6)

Here, we have dropped µ from the argument of *L _{n}*. With the centering, the estimator of the intercept is the sample mean Ȳ. Minimizing (3) subject to (4) is therefore equivalent to minimizing (6) with respect to **β**_{n}.
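The centering in (5)–(6) is easy to check numerically. The snippet below uses a simple power basis as a stand-in for the B-spline basis (the basis choice and dimensions are only for illustration) and verifies that every column of **Z** has mean zero, so the constraint (4) holds automatically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m_n = 50, 3, 4
X = rng.uniform(size=(n, p))

# Stand-in basis phi_1, ..., phi_{m_n}: powers of x (the paper uses B-splines).
def phi(x, m):
    return np.column_stack([x ** (k + 1) for k in range(m)])

# Centered basis values, as in (5): psi_jk(x) = phi_k(x) - phibar_jk.
blocks = []
for j in range(p):
    Phi_j = phi(X[:, j], m_n)            # n x m_n matrix of phi_k(X_ij)
    Psi_j = Phi_j - Phi_j.mean(axis=0)   # subtract the column means phibar_jk
    blocks.append(Psi_j)
Z = np.hstack(blocks)                    # the n x (p * m_n) design matrix Z

# Every column of Z now has mean zero.
print(np.abs(Z.mean(axis=0)).max())
```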

We now describe the two-step approach to component selection in the nonparametric additive model (1).

*Step* 1. Compute the group Lasso estimator. Let

$${L}_{n1}({\mathit{\beta}}_{n},{\lambda}_{n1})={\Vert \mathbf{Y}-\mathbf{Z}{\mathit{\beta}}_{n}\Vert}_{2}^{2}+{\lambda}_{n1}{\displaystyle \sum _{j=1}^{p}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}.}$$

This objective function is the special case of (6) that is obtained by setting *w _{nj}* = 1, 1 ≤ *j* ≤ *p*. Denote the resulting group Lasso estimator by ${\tilde{\mathit{\beta}}}_{n}\equiv {\tilde{\mathit{\beta}}}_{n}({\lambda}_{n1})$.

*Step* 2. Use the group Lasso estimator ${\tilde{\mathit{\beta}}}_{n}$ to obtain the weights by setting

$${w}_{\mathit{\text{nj}}}=\begin{cases}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}^{-1}, & \text{if}\ {\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}>0,\\ \infty , & \text{if}\ {\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}=0.\end{cases}$$

The adaptive group Lasso objective function is

$${L}_{n2}({\mathit{\beta}}_{n};{\lambda}_{n2})={\Vert \mathbf{Y}-\mathbf{Z}{\mathit{\beta}}_{n}\Vert}_{2}^{2}+{\lambda}_{n2}{\displaystyle \sum _{j=1}^{p}{w}_{\mathit{\text{nj}}}}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}.$$

Here, we define 0 · ∞ = 0. Thus, the components not selected by the group Lasso are not included in Step 2. The adaptive group Lasso estimator is ${\widehat{\mathit{\beta}}}_{n}\equiv {\widehat{\mathit{\beta}}}_{n}({\lambda}_{n2})=\text{arg}\,{\text{min}}_{{\mathit{\beta}}_{n}}{L}_{n2}({\mathit{\beta}}_{n};{\lambda}_{n2})$. Finally, the adaptive group Lasso estimators of µ and *f _{j}* are

$${\widehat{\mu}}_{n}=\overline{Y}\equiv {n}^{-1}{\displaystyle \sum _{i=1}^{n}{Y}_{i},{\widehat{f}}_{\mathit{\text{nj}}}(x)}={\displaystyle \sum _{k=1}^{{m}_{n}}{\widehat{\beta}}_{\mathit{\text{jk}}}{\psi}_{k}(x),\text{\hspace{1em}\hspace{1em}}1\le j\le p.}$$
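The two-step procedure above can be sketched end to end. The solver below is a plain proximal-gradient method with groupwise soft-thresholding, not the blockwise algorithm of Yuan and Lin (2006) that the authors actually use, and the data, dimensions and penalty levels are made up for illustration.

```python
import numpy as np

def group_lasso(Z, Y, groups, lam, weights=None, n_iter=3000):
    """Minimize ||Y - Z b||_2^2 + lam * sum_j w_j ||b_j||_2 by proximal gradient."""
    if weights is None:
        weights = np.ones(len(groups))
    b = np.zeros(Z.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(Z, 2) ** 2)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        g = b - step * 2.0 * Z.T @ (Z @ b - Y)       # gradient step
        for j, idx in enumerate(groups):             # groupwise soft-threshold
            if np.isinf(weights[j]):
                b[idx] = 0.0                         # w_nj = infinity drops the group
                continue
            t, nrm = lam * weights[j] * step, np.linalg.norm(g[idx])
            b[idx] = 0.0 if nrm <= t else (1.0 - t / nrm) * g[idx]
    return b

rng = np.random.default_rng(1)
n, p, m = 100, 10, 5                                 # p components, m_n = 5 each
groups = [np.arange(j * m, (j + 1) * m) for j in range(p)]
Z = rng.normal(size=(n, p * m))
beta = np.zeros(p * m); beta[:m] = 1.0               # only component 1 is nonzero
Y = Z @ beta + 0.1 * rng.normal(size=n)

b_tilde = group_lasso(Z, Y, groups, lam=20.0)        # Step 1: group Lasso
norms = np.array([np.linalg.norm(b_tilde[g]) for g in groups])
w = np.where(norms > 0, 1.0 / np.where(norms > 0, norms, 1.0), np.inf)
b_hat = group_lasso(Z, Y, groups, lam=5.0, weights=w)  # Step 2: adaptive group Lasso
selected = [j for j, g in enumerate(groups) if np.linalg.norm(b_hat[g]) > 0]
print(selected)                                      # indices of selected components
```

Groups zeroed out in Step 1 receive infinite weight and are therefore excluded in Step 2, exactly as the convention 0 · ∞ = 0 above prescribes.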

This section presents our results on the asymptotic properties of the estimators defined in Steps 1 and 2 of Section 2.

Let *k* be a nonnegative integer, and let α ∈ (0, 1] be such that *d* = *k* + α > 0.5. Let ℱ be the class of functions *f* on [*a*, *b*] whose *k*th derivative *f*^{(k)} exists and satisfies a Lipschitz condition of order α:

$$|{f}^{(k)}(s)-{f}^{(k)}(t)|\le C|s-t{|}^{\alpha}\text{\hspace{1em}\hspace{1em}for}s,t\in [a,b].$$

In (1), without loss of generality, suppose that the first *q* components are nonzero, that is, *f _{j}* ≠ 0 for 1 ≤ *j* ≤ *q*, and *f _{j}* ≡ 0 for *q* + 1 ≤ *j* ≤ *p*. Define *A*_{1} = {1, …, *q*} and *A*_{0} = {*q* + 1, …, *p*}.

We make the following assumptions.

(A1) The number of nonzero components *q* is fixed and there is a constant *c _{f}* > 0 such that min{‖*f _{j}*‖_{2} : 1 ≤ *j* ≤ *q*} ≥ *c _{f}*.

(A2) The random variables ε_{1}, …, ε_{n} are independent and identically distributed with Eε_{i} = 0 and Var(ε_{i}) = σ^{2}. Furthermore, their tail probabilities satisfy *P*(|ε_{i}| > *x*) ≤ *K* exp(−*Cx*^{2}), *i* = 1, …, *n*, for all *x* ≥ 0 and for constants *C* and *K*.

(A3) E *f _{j}*(*X _{j}*) = 0 and *f _{j}* ∈ ℱ for *j* = 1, …, *q*.

(A4) The covariate vector *X* has a continuous density and there exist constants *C*_{1} and *C*_{2} such that the density function *g _{j}* of *X _{j}* satisfies 0 < *C*_{1} ≤ *g _{j}*(*x*) ≤ *C*_{2} < ∞ on [*a*, *b*] for every 1 ≤ *j* ≤ *p*.

We note that (A1), (A3) and (A4) are standard conditions for nonparametric additive models. They would be needed to estimate the nonzero additive components at the optimal ℓ_{2} rate of convergence on [*a, b*], even if *q* were fixed and known. Only (A2) strengthens the assumptions needed for nonparametric estimation of an additive model. While condition (A1) is reasonable in most applications, it would be interesting to relax this condition and investigate the case when the number of nonzero components can also increase with the sample size. The only technical reason that we assume this condition is related to Lemma 3 given in the Appendix, which is concerned with the properties of the smallest and largest eigenvalues of the “design matrix” formed from the spline basis functions. If this lemma can be extended to the case of a divergent number of components, then (A1) can be relaxed. However, it is clear that there must be some restriction on the number of nonzero components to ensure model identification.

In this section, we consider the selection and estimation properties of the group Lasso estimator. Define ${\tilde{A}}_{1}=\{j:{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}\ne 0,1\le j\le p\}$. Let |*A*| denote the cardinality of any set *A* ⊆ {1, …, *p*}.

THEOREM 1. *Suppose that* (A1) *to* (A4) *hold and* ${\lambda}_{n1}\ge C\sqrt{n\text{log}({\mathit{\text{pm}}}_{n})}$ *for a sufficiently large constant C*.

- (i) *With probability converging to* 1, |*Ã*_{1}| ≤ *M*_{1}|*A*_{1}| = *M*_{1}*q* *for a finite constant* *M*_{1} > 1.
- (ii) *If* ${m}_{n}^{2}\text{log}({\mathit{\text{pm}}}_{n})/n\to 0$ *and* ${\lambda}_{n1}^{2}{m}_{n}/{n}^{2}\to 0$ *as n* → ∞, *then all the nonzero* **β**_{nj}, 1 ≤ *j* ≤ *q*, *are selected with probability converging to one*.
- (iii) $$\sum _{j=1}^{p}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}={O}_{p}\left(\frac{{m}_{n}^{2}\text{log}({\mathit{\text{pm}}}_{n})}{n}\right)+{O}_{p}\left(\frac{{m}_{n}}{n}\right)+O\left(\frac{1}{{m}_{n}^{2d-1}}\right)+O\left(\frac{4{m}_{n}^{2}{\lambda}_{n1}^{2}}{{n}^{2}}\right).$$

Part (i) of Theorem 1 says that, with probability approaching 1, the group Lasso selects a model whose dimension is a constant multiple of the number of nonzero additive components *f _{j}*, regardless of the number of additive components that are zero. Part (ii) implies that every nonzero coefficient will be selected with high probability. Part (iii) shows that the difference between the coefficients in the spline representation of the nonparametric functions in (1) and their estimators converges to zero in probability. The rate of convergence is determined by four terms: the stochastic error in estimating the nonparametric components (the first term) and the intercept µ (the second term), the spline approximation error (the third term) and the bias due to penalization (the fourth term).

Let ${\tilde{f}}_{\mathit{\text{nj}}}(x)={\displaystyle {\sum}_{k=1}^{{m}_{n}}{\tilde{\beta}}_{\mathit{\text{jk}}}{\psi}_{k}(x)}$, 1 ≤ *j* ≤ *p*. The following theorem is a consequence of Theorem 1.

THEOREM 2. *Suppose that* (A1) *to* (A4) *hold and that* ${\lambda}_{n1}\ge C\sqrt{n\text{log}({\mathit{\text{pm}}}_{n})}$ *for a sufficiently large constant C. Then*:

- (i) *Let* ${\tilde{A}}_{f}=\{j:{\Vert {\tilde{f}}_{\mathit{\text{nj}}}\Vert}_{2}>0,1\le j\le p\}$. *There is a constant* *M*_{1} > 1 *such that, with probability converging to* 1, |*Ã*_{f}| ≤ *M*_{1}*q*.
- (ii) *If* (*m*_{n} log(*pm*_{n}))/*n* → 0 *and* ${\lambda}_{n1}^{2}{m}_{n}/{n}^{2}\to 0$ *as n* → ∞, *then all the nonzero additive components f*_{j}, 1 ≤ *j* ≤ *q*, *are selected with probability converging to one*.
- (iii) $${\Vert {\tilde{f}}_{\mathit{\text{nj}}}-{f}_{j}\Vert}_{2}^{2}={O}_{p}\left(\frac{{m}_{n}\,\text{log}({\mathit{\text{pm}}}_{n})}{n}\right)+{O}_{p}\left(\frac{1}{n}\right)+O\left(\frac{1}{{m}_{n}^{2d}}\right)+O\left(\frac{4{m}_{n}{\lambda}_{n1}^{2}}{{n}^{2}}\right),\text{\hspace{1em}}j\in {\tilde{A}}_{2},$$ *where* *Ã*_{2} = *A*_{1} ∪ *Ã*_{1}.

Thus, under the conditions of Theorem 2, the group Lasso selects all the nonzero additive components with high probability. Part (iii) of the theorem gives the rate of convergence of the group Lasso estimator of the nonparametric components.
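The choice of *m _{n}* used in Corollary 1 below can be motivated by balancing the two dominant terms in part (iii) of Theorem 2. Ignoring the logarithmic factor, the stochastic error grows with *m _{n}* while the spline approximation error shrinks with it:

$$\frac{{m}_{n}}{n}\asymp \frac{1}{{m}_{n}^{2d}}\quad \Longrightarrow \quad {m}_{n}\asymp {n}^{1/(2d+1)},\quad \frac{{m}_{n}}{n}\asymp \frac{1}{{m}_{n}^{2d}}\asymp {n}^{-2d/(2d+1)},$$

which yields the optimal nonparametric rate, up to the log(*pm _{n}*) factor, appearing in Corollary 1.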

For any two sequences {*a _{n}*} and {*b _{n}*} of positive numbers, we write *a _{n}* ≍ *b _{n}* if there are constants 0 < *c*_{1} ≤ *c*_{2} < ∞ such that *c*_{1} ≤ *a _{n}*/*b _{n}* ≤ *c*_{2} for all *n* sufficiently large.

We now state a useful corollary of Theorem 2.

COROLLARY 1. *Suppose that* (A1) *to* (A4) *hold*, ${\lambda}_{n1}\asymp \sqrt{n\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}$ *and* *m _{n}* ≍ *n*^{1/(2d+1)}. *Then*:

*If**n*^{−2d/(2d+1)}log(*p*) → 0*as n*→ ∞,*then with probability converging to one, all the nonzero components f*, 1 ≤_{j}*j*≤*q*,*are selected and the number of selected components is no more than**M*_{1}*q*.- $${\Vert {\tilde{f}}_{\mathit{\text{nj}}}-{f}_{j}\Vert}_{2}^{2}={O}_{p}({n}^{-2d/(2d+1)}\text{log}({\mathit{\text{pm}}}_{n})),\text{\hspace{1em}\hspace{1em}}j\in {\tilde{A}}_{2}.$$

For the λ_{n1} and *m _{n}* given in Corollary 1, the number of zero components can be as large as exp(*o*(*n*^{2d/(2d+1)})), which can be much larger than the sample size *n*.

We now consider the properties of the adaptive group Lasso. We first state a general result concerning the selection consistency of the adaptive group Lasso, assuming an initial consistent estimator is available. We then apply it to the case where the group Lasso is used as the initial estimator. We make the following assumptions.

(B1) The initial estimators ${\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}$ are *r _{n}*-consistent at zero:

$${r}_{n}\phantom{\rule{thinmathspace}{0ex}}\underset{j\in {A}_{0}}{\text{max}}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}={O}_{P}(1),\text{\hspace{1em}\hspace{1em}}{r}_{n}\to \infty ,$$

and there exists a constant *c _{b}* > 0 such that

$$\mathrm{P}\phantom{\rule{thinmathspace}{0ex}}(\underset{j\in {A}_{1}}{\text{min}}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}\ge {c}_{b}{b}_{n1})\to 1,$$

where *b*_{n1} = min_{*j* ∈ *A*_{1}} ‖**β**_{nj}‖_{2}.

(B2) Let *q* be the number of nonzero components and let *s _{n}* denote the number of components retained by the initial estimator. Suppose that *m _{n}*, λ_{n2} and *r _{n}* satisfy:

- $$\frac{{m}_{n}}{{n}^{1/2}}+\frac{{\lambda}_{n2}{m}_{n}^{1/4}}{n}=o(1),$$
- $$\frac{{n}^{1/2}{\text{log}}^{1/2}({s}_{n}{m}_{n})}{{\lambda}_{n2}{r}_{n}}+\frac{n}{{\lambda}_{n2}{r}_{n}{m}_{n}^{(2d+1)/2}}=o(1).$$

We state condition (B1) for a general initial estimator, to highlight the point that the availability of an *r _{n}*-consistent estimator at zero is crucial for the adaptive group Lasso to be selection consistent. In other words, any initial estimator satisfying (B1) will ensure that the adaptive group Lasso (based on this initial estimator) is selection consistent, provided that certain regularity conditions are satisfied. We note that it follows immediately from Theorem 1 that the group Lasso estimator satisfies (B1). We will come back to this point below.

For ${\widehat{\mathit{\beta}}}_{n}\equiv ({\widehat{\mathit{\beta}}}_{n1}^{\prime},\dots ,{\widehat{\mathit{\beta}}}_{\mathit{\text{np}}}^{\prime})\prime$ and ${\mathit{\beta}}_{n}\equiv ({\mathit{\beta}}_{n1}^{\prime},\dots ,{\mathit{\beta}}_{\mathit{\text{np}}}^{\prime})\prime $, we say ${\widehat{\mathit{\beta}}}_{n}{=}_{0}{\mathit{\beta}}_{n}$ if sgn_{0}(‖${\widehat{\mathit{\beta}}}_{\mathit{\text{nj}}}$‖) = sgn_{0}(‖**β**_{nj}‖), 1 ≤ *j* ≤ *p*, where sgn_{0}(|*x*|) = 1 if |*x*| > 0 and sgn_{0}(|*x*|) = 0 if |*x*| = 0.

THEOREM 3. *Suppose that conditions* (B1), (B2) *and* (A1)–(A4) *hold*. *Then*:

- (i) $$\mathrm{P}({\widehat{\mathit{\beta}}}_{n}{=}_{0}{\mathit{\beta}}_{n})\to 1.$$
- (ii) $$\sum _{j=1}^{q}{\Vert {\widehat{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}={O}_{p}\left(\frac{{m}_{n}^{2}}{n}\right)+{O}_{p}\left(\frac{{m}_{n}}{n}\right)+O\left(\frac{1}{{m}_{n}^{2d-1}}\right)+O\left(\frac{4{m}_{n}^{2}{\lambda}_{n2}^{2}}{{n}^{2}}\right).$$

This theorem is concerned with the selection and estimation properties of the adaptive group Lasso in terms of _{n}. The following theorem states the results in terms of the estimators of the nonparametric components.

THEOREM 4. *Suppose that conditions* (B1), (B2) *and* (A1)–(A4) *hold. Then*:

- (i) $$\mathrm{P}({\Vert {\widehat{f}}_{\mathit{\text{nj}}}\Vert}_{2}>0,j\in {A}_{1}\text{ and }{\Vert {\widehat{f}}_{\mathit{\text{nj}}}\Vert}_{2}=0,j\in {A}_{0})\to 1.$$
- (ii) $$\sum _{j=1}^{q}{\Vert {\widehat{f}}_{\mathit{\text{nj}}}-{f}_{j}\Vert}_{2}^{2}={O}_{p}\left(\frac{{m}_{n}}{n}\right)+{O}_{p}\left(\frac{1}{n}\right)+O\left(\frac{1}{{m}_{n}^{2d}}\right)+O\left(\frac{4{m}_{n}{\lambda}_{n2}^{2}}{{n}^{2}}\right).$$

Part (i) of this theorem states that the adaptive group Lasso can consistently distinguish nonzero components from zero components. Part (ii) gives an upper bound on the rate of convergence of the estimator.

We now apply the above results to the procedure proposed in Section 2, in which we first obtain the group Lasso estimator and then use it as the initial estimator in the adaptive group Lasso.

By Theorem 1, if ${\lambda}_{n1}\asymp \sqrt{n\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}$ and *m _{n}* ≍ *n*^{1/(2d+1)}, then the group Lasso estimator satisfies condition (B1). With this initial estimator, condition (B2) is satisfied provided that λ_{n2} satisfies

$$\frac{{\lambda}_{n2}}{{n}^{(8d+3)/(8d+4)}}=o(1)\text{\hspace{1em}and\hspace{1em}}\frac{{n}^{1/(4d+2)}{\text{log}}^{1/2}({\mathit{\text{pm}}}_{n})}{{\lambda}_{n2}}=o(1).$$

(7)

We summarize the above discussion in the following corollary.

COROLLARY 2. *Let the group Lasso estimator* ${\tilde{\mathit{\beta}}}_{n}\equiv {\tilde{\mathit{\beta}}}_{n}({\lambda}_{n1})$ *with* ${\lambda}_{n1}\asymp \sqrt{n\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}$ *and* *m _{n}* ≍ *n*^{1/(2d+1)} *be the initial estimator in the adaptive group Lasso, and suppose that* λ_{n2} *satisfies* (7). *Then the adaptive group Lasso selects the nonzero components correctly with probability converging to one, and*

$$\sum _{j=1}^{q}{\Vert {\widehat{f}}_{\mathit{\text{nj}}}-{f}_{j}\Vert}_{2}^{2}={O}_{p}({n}^{-2d/(2d+1)}).$$

This corollary follows directly from Theorems 1 and 4. The largest λ_{n2} allowed is λ_{n2} = *O*(*n*^{1/2}). With this λ_{n2}, the first condition in (7) is satisfied. Substituting it into the second condition in (7), we obtain *p* = exp(*o*(*n*^{2d/(2d+1)})), which is the largest *p* permitted and can be much larger than *n*. Thus, under the conditions of this corollary, the proposed adaptive group Lasso estimator, using the group Lasso as the initial estimator, is selection consistent and achieves the optimal rate of convergence even when *p* is larger than *n*. Following model selection, oracle-efficient, asymptotically normal estimators of the nonzero components can be obtained by using existing methods.
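As a concrete instance of Corollary 2, suppose each nonzero *f _{j}* has a Lipschitz-continuous first derivative, so *k* = 1, α = 1 and *d* = 2. Then

$${m}_{n}\asymp {n}^{1/5},\quad {\displaystyle \sum _{j=1}^{q}{\Vert {\widehat{f}}_{\mathit{\text{nj}}}-{f}_{j}\Vert}_{2}^{2}}={O}_{p}({n}^{-4/5}),\quad p=\text{exp}(o({n}^{4/5})),$$

so the number of candidate components may grow nearly exponentially in *n*^{4/5} while the nonzero components are still selected consistently and estimated at the usual rate *n*^{−4/5} for twice-smooth functions.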

We use simulation to evaluate the performance of the adaptive group Lasso with regard to variable selection. The generating model is

$${y}_{i}=f({x}_{i})+{\epsilon}_{i}\equiv {\displaystyle \sum _{j=1}^{p}{f}_{j}({x}_{\mathit{\text{ij}}})+{\epsilon}_{i},\text{\hspace{1em}\hspace{1em}}i=1,\dots ,n.}$$

(8)

Since *p* can be larger than *n*, we consider two ways to select the penalty parameter, the BIC [Schwarz (1978)] and the EBIC [Chen and Chen (2008, 2009)]. The BIC is defined as

$$\mathit{\text{BIC}}(\lambda )=\text{log}({\text{RSS}}_{\lambda})+{\mathit{\text{df}}}_{\lambda}\xb7\frac{\text{log}n}{n}.$$

Here, RSS_{λ} is the residual sum of squares for a given λ, and the degrees of freedom are *df*_{λ} = $\widehat{q}_{\lambda}$*m _{n}*, where $\widehat{q}_{\lambda}$ is the number of components selected at that λ. The EBIC is defined as

$$\mathit{\text{EBIC}}(\lambda )=\text{log}({\text{RSS}}_{\lambda})+{\mathit{\text{df}}}_{\lambda}\xb7\frac{\text{log}n}{n}+\nu \xb7{\mathit{\text{df}}}_{\lambda}\xb7\frac{\text{log}p}{n},$$

where 0 ≤ ν ≤ 1 is a constant. We use ν = 0.5.
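Both criteria are straightforward to compute along a solution path. In the sketch below, the candidate (RSS, df) pairs are made-up values for illustration; in practice they come from fitting the adaptive group Lasso at each candidate λ.

```python
import numpy as np

def bic(rss, df, n):
    """BIC(lambda) = log(RSS) + df * log(n) / n, as defined in the text."""
    return np.log(rss) + df * np.log(n) / n

def ebic(rss, df, n, p, nu=0.5):
    """EBIC adds nu * df * log(p) / n; nu = 0.5 as used in the simulations."""
    return bic(rss, df, n) + nu * df * np.log(p) / n

# df_lambda = (number of selected components) * m_n.
n, p, m_n = 100, 1000, 5
# Hypothetical path: lambda -> (RSS_lambda, df_lambda).
candidates = {0.5: (80.0, 4 * m_n), 1.0: (95.0, 2 * m_n), 2.0: (150.0, 1 * m_n)}
best = min(candidates, key=lambda lam: ebic(*candidates[lam], n=n, p=p))
print(best)
```

The extra log *p* term in the EBIC penalizes model size more heavily when the number of candidate components is large, which is why it is attractive in the *p* > *n* setting considered here.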

We have also considered two other possible ways of defining df: (a) using the trace of a linear smoother based on a quadratic approximation; (b) using the number of estimated nonzero components. We have decided to use the definition given above based on the results from our simulations. We note that the df for the group Lasso of Yuan and Lin (2006) requires an initial (least squares) estimator, which is not available when *p* > *n*. Thus, their df is not applicable to our problem.

In our simulation example, we compare the adaptive group Lasso with the group Lasso and ordinary Lasso. Here, the ordinary Lasso estimator is defined as the value that minimizes

$${\Vert \mathbf{Y}-\mathbf{Z}{\mathit{\beta}}_{n}\Vert}_{2}^{2}+{\lambda}_{n}{\displaystyle \sum _{j=1}^{p}{\displaystyle \sum _{k=1}^{{m}_{n}}|{\beta}_{\mathit{\text{jk}}}|.}}$$

This simple application of the Lasso does not take into account the grouping structure in the spline expansions of the components. The group Lasso and the adaptive group Lasso estimates are computed using the algorithm proposed by Yuan and Lin (2006). The ordinary Lasso estimates are computed using the Lars algorithms [Efron et al. (2004)]. The group Lasso is used as the initial estimate for the adaptive group Lasso.
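The difference between the two penalties is easy to see on a small block-sparse coefficient vector: the ℓ₁ penalty charges coefficients individually, while the group penalty charges whole blocks (the vector and group sizes below are arbitrary).

```python
import numpy as np

beta = np.array([1.0, -1.0, 1.0, 0.0, 0.0, 0.0])   # group 1 active, group 2 zero
groups = [np.arange(0, 3), np.arange(3, 6)]

l1_penalty = np.abs(beta).sum()                    # ordinary Lasso penalty: 3.0
group_penalty = sum(np.linalg.norm(beta[g]) for g in groups)  # group penalty: sqrt(3)

# The group penalty is smaller when the nonzeros are concentrated in few groups,
# so it favors selecting entire spline expansions rather than isolated terms.
print(l1_penalty, group_penalty)
```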

We also compare the results from the nonparametric additive modeling with those from the standard linear regression model with Lasso. We note that this is not a fair comparison, because the generating model is highly nonlinear. Our purpose is to illustrate that, in variable selection with high-dimensional data, nonparametric models are needed when the underlying model deviates substantially from linearity, and that model misspecification can lead to poor selection results.

EXAMPLE 1. We generate data from the model

$${y}_{i}=f({x}_{i})+{\epsilon}_{i}\equiv {\displaystyle \sum _{j=1}^{p}{f}_{j}({x}_{\mathit{\text{ij}}})+{\epsilon}_{i},\text{\hspace{1em}\hspace{1em}}i=1,\dots ,n,}$$

where *f*_{1}(*t*) = 5*t*, *f*_{2}(*t*) = 3(2*t* − 1)^{2}, *f*_{3}(*t*) = 4 sin(2π*t*)/(2 − sin(2π*t*)), *f*_{4}(*t*) = 6(0.1 sin(2π*t*) + 0.2 cos(2π*t*) + 0.3 sin(2π*t*)^{2} + 0.4 cos(2π*t*)^{3} + 0.5 sin(2π*t*)^{3}), and *f*_{5}(*t*) = ⋯ = *f _{p}*(*t*) = 0.

The covariates are simulated as follows. First, we generate ${w}_{i1},\dots ,{w}_{\mathit{\text{ip}}},{u}_{i},{u}_{i}^{\prime},{\upsilon}_{i}$ independently from *N*(0, 1) truncated to the interval [0, 1], *i* = 1, …, *n*. The covariates *x*_{i1}, …, *x*_{ip} are then formed by combining each *w*_{ij} with the common variables ${u}_{i},{u}_{i}^{\prime},{\upsilon}_{i}$, so that the covariates are correlated, with the strength of the correlation controlled by a constant; setting that constant to zero yields independent covariates.
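The generating model of Example 1 can be reproduced for the independent-covariates case. The error standard deviation and the rejection sampler for the truncated normal draws are our assumptions for illustration; they are not specified in the text above.

```python
import numpy as np

rng = np.random.default_rng(42)

# The four nonzero components of Example 1 (f_5 = ... = f_p = 0).
def f1(t): return 5 * t
def f2(t): return 3 * (2 * t - 1) ** 2
def f3(t): return 4 * np.sin(2 * np.pi * t) / (2 - np.sin(2 * np.pi * t))
def f4(t):
    s, c = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
    return 6 * (0.1 * s + 0.2 * c + 0.3 * s**2 + 0.4 * c**3 + 0.5 * s**3)

def truncated_normal(size, rng):
    """N(0, 1) truncated to [0, 1], drawn by simple rejection sampling."""
    want = int(np.prod(size))
    out = np.empty(0)
    while out.size < want:
        draw = rng.normal(size=2 * want)
        out = np.concatenate([out, draw[(draw >= 0) & (draw <= 1)]])
    return out[:want].reshape(size)

n, p = 100, 1000
X = truncated_normal((n, p), rng)   # independent-covariates design
eps = rng.normal(size=n)            # unit error s.d. (assumed for illustration)
y = f1(X[:, 0]) + f2(X[:, 1]) + f3(X[:, 2]) + f4(X[:, 3]) + eps
```

Only the first four of the 1000 covariates enter the mean function, which is the sparse, high-dimensional setting the simulations are designed to probe.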

The results of 400 Monte Carlo replications are summarized in Table 1. The columns are the mean number of variables selected (NV), model error (ME), the percentage of replications in which all the correct additive components are included in the selected model (IN), and the percentage of replications in which precisely the correct components are selected (CS). The corresponding standard errors are in parentheses. The model error is computed as the average of ${n}^{-1}{\displaystyle {\sum}_{i=1}^{n}{[\widehat{f}({x}_{i})-f({x}_{i})]}^{2}}$ over the 400 Monte Carlo replications, where *f* is the true conditional mean function.

Table 1. Example 1: simulation results for the adaptive group Lasso, group Lasso, ordinary Lasso, and linear model with Lasso; *n* = 50, 100 or 200, *p* = 1000. NV, average number of variables selected; ME, model error; IN, percentage of replications in which all nonzero components are included; CS, percentage of replications in which exactly the correct components are selected.

Table 1 shows that the adaptive group Lasso selects all the nonzero components (IN) and selects exactly the correct model (CS) more frequently than the other methods do. For example, with the BIC and *n* = 200, the percentage of correct selections (CS) by the adaptive group Lasso ranges from 65.25% to 81%, which is much higher than the ranges 30–57.75% for the group Lasso and 12–15.75% for the ordinary Lasso. The adaptive group Lasso and group Lasso perform better than the ordinary Lasso in all of the experiments, which illustrates the importance of taking account of the group structure of the coefficients of the spline expansion. Correlation among covariates increases the difficulty of component selection, so it is not surprising that all methods perform better with independent covariates than with correlated ones. The percentage of correct selections increases as the sample size increases. The linear model with Lasso never selects the correct model. This illustrates the poor results that can be produced by a linear model when the true conditional mean function is nonlinear.

Table 1 also shows that the model error (ME) of the group Lasso is only slightly larger than that of the adaptive group Lasso. The models selected by the group Lasso nest those selected by the adaptive group Lasso and, therefore, have more estimated coefficients. As a result, the group Lasso estimators of the conditional mean function have a larger variance and larger ME. The differences between the MEs of the two methods are small, however, because, as can be seen from the NV column, the models selected by the group Lasso in our experiments have only slightly more estimated coefficients than the models selected by the adaptive group Lasso.

EXAMPLE 2. We now compare the adaptive group Lasso with the COSSO [Lin and Zhang (2006)]. This comparison was suggested to us by the Associate Editor. Because the COSSO algorithm only works when *p* is smaller than *n*, we use the same set-up as in Example 1 of Lin and Zhang (2006). In this example, the generating model is as in (8) with four nonzero components, and the covariates are constructed by combining independent Uniform(0, 1) variables with a common Uniform(0, 1) variable, so that a constant *t* controls the correlation between covariates: *t* = 0 gives independent predictors and *t* > 0 gives dependent ones.

The COSSO procedure uses either generalized cross-validation or 5-fold cross-validation. Based on the simulation results of Lin and Zhang (2006) and our own simulations, the COSSO with 5-fold cross-validation has better selection performance. Thus, we compare the adaptive group Lasso with BIC or EBIC to the COSSO with 5-fold cross-validation. The results are given in Table 2. For independent predictors, when *n* = 200 and *p* = 10, 20 or 50, the adaptive group Lasso and the COSSO have similar performance in terms of selection accuracy and model error. However, for smaller *n* and larger *p*, the adaptive group Lasso does significantly better. For example, for *n* = 100 and *p* = 50, the percentage of correct selection for the adaptive group Lasso is 81–83%, but it is only 11% for the COSSO. The model error of the adaptive group Lasso is similar to or smaller than that of the COSSO. In several experiments, the model error of the COSSO is 2 to more than 7 times larger than that of the adaptive group Lasso. It is interesting to note that when *n* = 50 and *p* = 20 or 50, the adaptive group Lasso still does a decent job of selecting the correct model, but the COSSO does poorly in these two cases. In particular, for *n* = 50 and *p* = 50, the COSSO did not select exactly the correct model in any of the simulation runs. For dependent predictors, the comparison is even more favorable to the adaptive group Lasso, which performs significantly better than the COSSO in terms of both model error and selection accuracy in all the cases.

We use the data set reported in Scheetz et al. (2006) to illustrate the application of the proposed method in high-dimensional settings. For this data set, 120 twelve-week-old male rats were selected for tissue harvesting from the eyes and for microarray analysis. The microarrays used to analyze the RNA from the eyes of these animals contain over 31,042 different probe sets (Affymetrix GeneChip Rat Genome 230 2.0 Array). The intensity values were normalized using the robust multi-chip averaging method [Irizarry et al. (2003)] to obtain summary expression values for each probe set. Gene expression levels were analyzed on a logarithmic scale.

We are interested in finding the genes that are related to the gene TRIM32. This gene was recently found to cause Bardet–Biedl syndrome [Chiang et al. (2006)], which is a genetically heterogeneous disease of multiple organ systems including the retina. Although over 30,000 probe sets are represented on the Rat Genome 230 2.0 Array, many of them are not expressed in the eye tissue, and initial screening using correlation shows that most probe sets have very low correlation with TRIM32. In addition, we expect only a small number of genes to be related to TRIM32. Therefore, we use in the analysis the 500 probe sets that are expressed in the eye and have the highest marginal correlation with the expression of TRIM32. Thus, the sample size is *n* = 120 (i.e., there are 120 arrays from 120 rats) and *p* = 500. Since only a few genes are expected to be related to TRIM32, this is a sparse, high-dimensional regression problem.
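The marginal-correlation screening step can be sketched as follows; this is a minimal illustration on simulated data (the function name, toy dimensions, and seed are ours, not from the paper):

```python
import numpy as np

def screen_by_correlation(X, y, keep=500):
    """Keep the `keep` columns of X (probe sets) with the largest
    absolute Pearson correlation with the response y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # correlation of each centered column with the centered response
    r = Xc.T @ yc / (np.sqrt((Xc ** 2).sum(axis=0)) * np.linalg.norm(yc))
    return np.sort(np.argsort(-np.abs(r))[:keep])

# toy check: a probe set nearly equal to the response should survive screening
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2000))
y = X[:, 7] + 0.1 * rng.normal(size=120)
kept = screen_by_correlation(X, y, keep=500)
```

In the data example, `y` would be the log expression of TRIM32 and the columns of `X` the log expression values of the candidate probe sets.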

We use the nonparametric additive model to model the relation between the expression of TRIM32 and those of the 500 genes. We estimate model (1) using the ordinary Lasso, the group Lasso, and the adaptive group Lasso for the nonparametric additive model. To compare the results of the nonparametric additive model with those of the linear regression model, we also analyzed the data using the linear regression model with the Lasso. We scale the covariates so that their values are between 0 and 1, and use cubic splines with six evenly distributed knots to estimate the additive components. The penalty parameters in all the methods are chosen using the BIC or EBIC, as in the simulation study. Table 3 lists the probes selected by the group Lasso and the adaptive group Lasso, indicated by check signs. Table 4 shows the number of variables selected and the residual sums of squares obtained with each estimation method. For the ordinary Lasso with the spline expansion, a variable is considered to be selected if any of the estimated coefficients of the spline approximation to its additive component are nonzero. Depending on whether the BIC or EBIC is used, the group Lasso selects 16–17 variables, the adaptive group Lasso selects 15 variables, the ordinary Lasso with the spline expansion selects 94–97 variables, and the linear model with the Lasso selects 8–14 variables. Table 4 shows that the adaptive group Lasso does better than the other methods in terms of the residual sum of squares (RSS). We have also examined the plots (not shown) of the estimated additive components obtained with the group Lasso and the adaptive group Lasso, respectively. Most are highly nonlinear, confirming the need to take nonlinearity into account.
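As a rough sketch of this pipeline (covariates scaled to [0, 1], a centered spline expansion per covariate, and a group penalty over each covariate's block of coefficients), one could write the following. It is an illustration under our own simplifications, not the paper's implementation: `spline_basis` uses a truncated-power cubic basis rather than B-splines, the penalty omits the group-size weights, and a plain proximal-gradient solver stands in for a production group Lasso routine.

```python
import numpy as np

def spline_basis(x, n_knots=6):
    """Centered cubic truncated-power basis on [0, 1] with evenly
    spaced interior knots (a stand-in for the paper's B-splines)."""
    knots = np.linspace(0, 1, n_knots + 2)[1:-1]
    cols = [x, x**2, x**3] + [np.clip(x - k, 0, None)**3 for k in knots]
    B = np.column_stack(cols)
    return B - B.mean(axis=0)          # centered, as in (5)

def group_lasso(Z, y, groups, lam, n_iter=500):
    """Minimize (1/2n)||y - Z b||^2 + lam * sum_g ||b_g||_2 by
    proximal gradient with block soft-thresholding."""
    n, P = Z.shape
    beta = np.zeros(P)
    L = np.linalg.norm(Z, 2) ** 2 / n  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Z.T @ (Z @ beta - y) / n
        b = beta - grad / L
        for g in groups:               # prox of the group penalty
            nrm = np.linalg.norm(b[g])
            b[g] = 0.0 if nrm == 0 else max(0.0, 1 - lam / (L * nrm)) * b[g]
        beta = b
    return beta
```

A fitted group norm of zero means the corresponding additive component is dropped from the model.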

Probe sets selected by the group Lasso and the adaptive group Lasso in the data example using BIC or EBIC for penalty parameter selection. GL, group Lasso; AGL, adaptive group Lasso; Linear, linear model with Lasso

Analysis results for the data example. No. of probes, the number of probe sets selected; RSS, the residual sum of squares of the fitted model

In order to evaluate the performance of the methods, we use cross-validation and compare the prediction mean square errors (PEs). We randomly partition the data into 6 subsets, each consisting of 20 observations. We fit the model using 5 subsets as the training set and compute the PE on the remaining subset, which serves as the test set. We repeat this process 6 times, using each of the 6 subsets as the test set once, and compute the average number of probes selected and the average prediction error over these 6 calculations. We then replicate this whole process 400 times (as suggested by the Associate Editor). Table 5 gives the average values over the 400 replications. The adaptive group Lasso has a smaller average prediction error than the group Lasso, the ordinary Lasso, and the linear regression with the Lasso. The ordinary Lasso selects far more probe sets than the other approaches, but this does not lead to better prediction performance. Therefore, in this example, the adaptive group Lasso provides the investigator with a more targeted list of probe sets, which can serve as a starting point for further study.
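One replication of the cross-validation scheme just described can be sketched generically; `fit` and `predict` are placeholders for whichever estimator is being evaluated (this helper and its argument names are ours):

```python
import numpy as np

def cv_prediction_error(X, y, fit, predict, n_folds=6, seed=0):
    """Randomly split the n observations into `n_folds` subsets, fit on
    the other folds, and return the average test-set prediction MSE."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, n_folds)
    pes = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        model = fit(X[train], y[train])
        pes.append(np.mean((y[test] - predict(model, X[test])) ** 2))
    return np.mean(pes)
```

Repeating this with 400 different seeds and averaging gives the quantities reported in Table 5.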

Comparison of adaptive group Lasso, group Lasso, ordinary Lasso, and linear regression model with Lasso for the data example. ANP, the average number of probe sets selected averaged across 400 replications; PE, the average of prediction mean square errors **...**

It is of interest to compare the selection results from the adaptive group Lasso and the linear regression model with Lasso. The adaptive group Lasso and the linear model with Lasso select different sets of genes. When the penalty parameter is chosen with the BIC, the adaptive group Lasso selects 5 genes that are not selected by the linear model with Lasso. In addition, the linear model with Lasso selects 5 genes that are not selected by the adaptive group Lasso. When the penalty parameter is selected with the EBIC, the adaptive group Lasso selects 10 genes that are not selected by the linear model with Lasso. The estimated effects of many of the genes are nonlinear, and the Monte Carlo results of Section 4 show that the performance of the linear model with Lasso can be very poor in the presence of nonlinearity. Therefore, we interpret the differences between the gene selections of the adaptive group Lasso and the linear model with Lasso as evidence that the selections produced by the linear model are misleading.

In this paper, we propose to use the adaptive group Lasso for variable selection in nonparametric additive models in sparse, high-dimensional settings. A key requirement for the adaptive group Lasso to be selection consistent is that the initial estimator is estimation consistent and selects all the important components with high probability. In low-dimensional settings, finding an initial consistent estimator is relatively easy and can be achieved by many well-established approaches, such as the additive spline estimators. However, in high-dimensional settings, finding an initial consistent estimator is difficult. Under the conditions stated in Theorem 1, the group Lasso is shown to be consistent and to select all the important components. Thus, the group Lasso can be used as the initial estimator in the adaptive group Lasso to achieve selection consistency. Following model selection, oracle-efficient, asymptotically normal estimators of the nonzero components can be obtained by using existing methods. Our simulation results indicate that our procedure works well for variable selection in the models considered. Therefore, the adaptive group Lasso is a useful approach for variable selection and estimation in sparse, high-dimensional nonparametric additive models.
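The two-step construction summarized above can be expressed compactly in code: the group Lasso supplies an initial estimate, and each group's adaptive weight is the reciprocal of its initial norm, with dropped groups receiving an infinite weight so that they cannot re-enter in the second step. A minimal sketch (the function name is ours):

```python
import numpy as np

def adaptive_weights(beta_init, groups):
    """w_j = ||beta_init over group j||_2^{-1} if the initial (group
    Lasso) estimate of group j is nonzero, and w_j = inf otherwise,
    so components dropped in the first step stay out of the second."""
    norms = np.array([np.linalg.norm(beta_init[g]) for g in groups])
    w = np.full(len(groups), np.inf)
    nz = norms > 0
    w[nz] = 1.0 / norms[nz]
    return w
```

The second step then solves a group Lasso problem in which each group penalty is multiplied by its weight, over the groups with finite weight only.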

Our theoretical results are concerned with a fixed sequence of penalty parameters and are not applicable to the case where the penalty parameters are selected by data-driven procedures such as the BIC. This is an important and challenging problem that deserves further investigation, but it is beyond the scope of this paper. We have only considered linear nonparametric additive models. The adaptive group Lasso can be applied to generalized nonparametric additive models, such as the logistic nonparametric additive model, and to other nonparametric models with high-dimensional data. However, more work is needed to understand the properties of this approach in those more complicated models.

The authors wish to thank the Editor, Associate Editor and two anonymous referees for their helpful comments.

We first prove the following lemmas. Denote the centered version of ${\mathcal{S}}_{\mathit{\text{nj}}}$ by

$${\mathcal{S}}_{\mathit{\text{nj}}}^{0}=\{{f}_{\mathit{\text{nj}}}:{f}_{\mathit{\text{nj}}}(x)={\displaystyle \sum _{k=1}^{{m}_{n}}{b}_{\mathit{\text{jk}}}{\psi}_{k}(x),({b}_{j1},\dots ,{b}_{{\mathit{\text{jm}}}_{n}})\in {\mathbb{R}}^{{m}_{n}}}\},\text{}1\le j\le p,$$

where ψ_{k}’s are the centered spline bases defined in (5).
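For concreteness, the centered basis ψ_k, obtained by subtracting from each B-spline basis function its empirical mean over the sample points as in (5), can be computed as follows (a sketch using SciPy's B-spline machinery; the knot placement is our choice):

```python
import numpy as np
from scipy.interpolate import BSpline

def centered_bspline_basis(x, n_basis, degree=3):
    """Evaluate an empirically centered B-spline basis at the points x:
    psi_k(x) = phi_k(x) - mean_i phi_k(x_i)."""
    # open knot vector on [0, 1]: full multiplicity at the boundary
    n_interior = n_basis - degree - 1
    interior = np.linspace(0, 1, n_interior + 2)[1:-1]
    t = np.r_[np.zeros(degree + 1), interior, np.ones(degree + 1)]
    Phi = np.column_stack([
        BSpline.basis_element(t[k:k + degree + 2], extrapolate=False)(x)
        for k in range(n_basis)])
    Phi = np.nan_to_num(Phi)  # basis elements are zero outside their support
    return Phi - Phi.mean(axis=0)
```

Because the uncentered B-splines form a partition of unity, the centered basis functions sum to zero at every sample point, and each column has empirical mean zero.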

LEMMA 1. *Suppose that f satisfies condition* (A4) *and* E *f*(*X _{j}*) = 0. *Then there exists an* ${f}_{n}\in {\mathcal{S}}_{\mathit{\text{nj}}}^{0}$ *such that*

$$\Vert {f}_{n}-f{\Vert}_{2}={O}_{p}({m}_{n}^{-d}+{m}_{n}^{1/2}{n}^{-1/2}).$$

*In particular, if we choose* ${m}_{n}=O({n}^{1/(2d+1)})$, *then*

$$\Vert {f}_{n}-f{\Vert}_{2}={O}_{p}({m}_{n}^{-d})={O}_{p}({n}^{-d/(2d+1)}).$$

PROOF. By (A4), for *f* there is an ${f}_{n}^{*}\in {\mathcal{S}}_{n}$ such that ${\Vert f-{f}_{n}^{*}\Vert}_{2}=O({m}_{n}^{-d})$. Let ${f}_{n}={f}_{n}^{*}-{n}^{-1}{\displaystyle {\sum}_{i=1}^{n}{f}_{n}^{*}}({X}_{\mathit{\text{ij}}})$. Then ${f}_{n}\in {\mathcal{S}}_{\mathit{\text{nj}}}^{0}$ and ${\Vert {f}_{n}-f\Vert}_{2}\le {\Vert {f}_{n}^{*}-f\Vert}_{2}+|{P}_{n}{f}_{n}^{*}|$, where *P _{n}* is the empirical measure of the i.i.d. random variables *X*_{1j}, …, *X _{nj}*. Write

$${P}_{n}{f}_{n}^{*}=({P}_{n}-P){f}_{n}^{*}+P\phantom{\rule{thinmathspace}{0ex}}({f}_{n}^{*}-f).$$

Here, we use the linear functional notation, for example, *Pf* = ∫ *f dP*, where *P* is the probability measure of *X*_{1j}. For any ε > 0, the bracketing number ${N}_{[\xb7]}(\epsilon ,{\mathcal{S}}_{\mathit{\text{nj}}}^{0},{L}_{2}(P))$ of ${\mathcal{S}}_{\mathit{\text{nj}}}^{0}$ satisfies $\text{log}\phantom{\rule{thinmathspace}{0ex}}{N}_{[\xb7]}(\epsilon ,{\mathcal{S}}_{\mathit{\text{nj}}}^{0},{L}_{2}(P))\le {c}_{1}{m}_{n}\phantom{\rule{thinmathspace}{0ex}}\text{log}(1/\epsilon )$ for some constant *c*_{1} > 0 [Shen and Wong (1994), page 597]. Thus, by the maximal inequality [see, e.g., van der Vaart (1998), page 288], $({P}_{n}-P){f}_{n}^{*}={O}_{p}({n}^{-1/2}{m}_{n}^{1/2})$. By (A4), $|P\phantom{\rule{thinmathspace}{0ex}}({f}_{n}^{*}-f)|\le {C}_{2}{\Vert {f}_{n}^{*}-f\Vert}_{2}=O\phantom{\rule{thinmathspace}{0ex}}({m}_{n}^{-d})$ for some constant *C*_{2} > 0. The lemma follows from the triangle inequality.
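A quick numerical illustration of the approximation part of Lemma 1: the least-squares spline fit error of a smooth function shrinks as the number of basis functions grows. We use a truncated-power cubic basis for simplicity (the test function, sizes, and seed are our choices):

```python
import numpy as np

def ls_spline_error(f, m, n=2000, seed=0):
    """Root-mean-square error of the least-squares cubic spline fit of
    f on [0, 1] with m basis functions (intercept, x, x^2, x^3 and
    m - 4 truncated cubes at evenly spaced interior knots)."""
    x = np.sort(np.random.default_rng(seed).uniform(size=n))
    knots = np.linspace(0, 1, m - 2)[1:-1]  # m - 4 interior knots
    B = np.column_stack([np.ones(n), x, x**2, x**3]
                        + [np.clip(x - k, 0, None)**3 for k in knots])
    coef, *_ = np.linalg.lstsq(B, f(x), rcond=None)
    return np.sqrt(np.mean((f(x) - B @ coef) ** 2))
```

Doubling the number of basis functions should reduce the error markedly for a smooth target, in line with the m_n^{-d} rate.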

LEMMA 2. *Suppose that conditions* (A2) *and* (A4) *hold*. *Let*

$${T}_{\mathit{\text{jk}}}={n}^{-1/2}{m}_{n}^{1/2}{\displaystyle \sum _{i=1}^{n}{\psi}_{k}({X}_{\mathit{\text{ij}}}){\epsilon}_{i},}\text{\hspace{1em}\hspace{1em}}1\le j\le p,1\le k\le {m}_{n},$$

*and* ${T}_{n}={\text{max}}_{1\le j\le p,1\le k\le {m}_{n}}|{T}_{\mathit{\text{jk}}}|$. *Then*

$$\mathrm{E}({T}_{n})\le {C}_{1}{n}^{-1/2}{m}_{n}^{1/2}\sqrt{\text{log}({\mathit{\text{pm}}}_{n})}{(\sqrt{2{C}_{2}{m}_{n}^{-1}n\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}+4\phantom{\rule{thinmathspace}{0ex}}\text{log}(2{\mathit{\text{pm}}}_{n})+{C}_{2}{\mathit{\text{nm}}}_{n}^{-1})}^{1/2},$$

*where* *C*_{1} *and* *C*_{2} *are two positive constants. In particular, when* ${m}_{n}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})/n=O(1)$,

$$\mathrm{E}({T}_{n})=O(1)\sqrt{\text{log}({\mathit{\text{pm}}}_{n})}.$$

PROOF. Let ${s}_{\mathit{\text{njk}}}^{2}={\displaystyle {\sum}_{i=1}^{n}{\psi}_{k}^{2}({X}_{\mathit{\text{ij}}})}$ and ${s}_{n}={\text{max}}_{1\le j\le p,1\le k\le {m}_{n}}{s}_{\mathit{\text{njk}}}$. Conditional on the *X _{ij}*’s, the *T _{jk}*’s are sub-Gaussian, so by the maximal inequality,

$$\mathrm{E}(\underset{1\le j\le p,1\le k\le {m}_{n}}{\text{max}}|{T}_{\mathit{\text{jk}}}||\{{X}_{\mathit{\text{ij}}},1\le i\le n,1\le j\le p\})\le {C}_{1}{n}^{-1/2}{m}_{n}^{1/2}{s}_{n}\sqrt{\text{log}({\mathit{\text{pm}}}_{n})}.$$

Therefore,

$$\mathrm{E}(\underset{1\le j\le p,1\le k\le {m}_{n}}{\text{max}}|{T}_{\mathit{\text{jk}}}|)\le {C}_{1}{n}^{-1/2}{m}_{n}^{1/2}\sqrt{\text{log}({\mathit{\text{pm}}}_{n})}\mathrm{E}({s}_{n}),$$

(9)

where *C*_{1} > 0 is a constant. By (A4) and the properties of B-splines,

$$|{\psi}_{k}({X}_{\mathit{\text{ij}}})|\le |{\varphi}_{k}({X}_{\mathit{\text{ij}}})|+|{\overline{\varphi}}_{\mathit{\text{jk}}}|\le 2\text{\hspace{1em}and\hspace{1em}}\mathrm{E}{({\psi}_{k}({X}_{\mathit{\text{ij}}}))}^{2}\le {C}_{2}{m}_{n}^{-1}$$

(10)

for a constant *C*_{2} > 0, for every 1 ≤ *j* ≤ *p* and 1 ≤ *k* ≤ *m _{n}*. By (10),

$$\sum _{i=1}^{n}\mathrm{E}{[{\psi}_{k}^{2}({X}_{\mathit{\text{ij}}})-\mathrm{E}{\psi}_{k}^{2}({X}_{\mathit{\text{ij}}})]}^{2}\le 4{C}_{2}{\mathit{\text{nm}}}_{n}^{-1}$$

(11)

and

$$\underset{1\le j\le p,1\le k\le {m}_{n}}{\text{max}}{\displaystyle \sum _{i=1}^{n}\mathrm{E}{\psi}_{k}^{2}({X}_{\mathit{\text{ij}}})}\le {C}_{2}{\mathit{\text{nm}}}_{n}^{-1}.$$

(12)

By Lemma A.1 of van de Geer (2008), (10) and (11) imply

$$\mathrm{E}\left(\underset{1\le j\le p,1\le k\le {m}_{n}}{\text{max}}\left|{\displaystyle \sum _{i=1}^{n}\{{\psi}_{k}^{2}({X}_{\mathit{\text{ij}}})-\mathrm{E}{\psi}_{k}^{2}({X}_{\mathit{\text{ij}}})\}}\right|\right)\le \sqrt{2{C}_{2}{m}_{n}^{-1}n\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}+4\phantom{\rule{thinmathspace}{0ex}}\text{log}(2{\mathit{\text{pm}}}_{n}).$$

Therefore, by (12) and the triangle inequality,

$$\mathrm{E}{s}_{n}^{2}\le \sqrt{2{C}_{2}{m}_{n}^{-1}n\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}+4\phantom{\rule{thinmathspace}{0ex}}\text{log}(2{\mathit{\text{pm}}}_{n})+{C}_{2}{\mathit{\text{nm}}}_{n}^{-1}.$$

Now since ${\mathrm{E}s}_{n}\le {({\mathrm{E}s}_{n}^{2})}^{1/2}$, we have

$$\mathrm{E}{s}_{n}\le {(\sqrt{2{C}_{2}{m}_{n}^{-1}n\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}+4\phantom{\rule{thinmathspace}{0ex}}\text{log}(2{\mathit{\text{pm}}}_{n})+{C}_{2}{\mathit{\text{nm}}}_{n}^{-1})}^{1/2}.$$

(13)

The lemma follows by combining (9) and (13).

Denote

$${\mathit{\beta}}_{A}=({\mathit{\beta}}_{j}^{\prime},j\in A)\prime \text{\hspace{1em}and\hspace{1em}}{\mathbf{Z}}_{A}=({\mathbf{Z}}_{j},j\in A).$$

Here, **β**_{A} is an |*A*|*m _{n}* × 1 vector and **Z**_{A} is an *n* × |*A*|*m _{n}* matrix.

LEMMA 3. *Let* ${m}_{n}=O({n}^{1/(2d+1)})$ *and* ${h}_{n}={m}_{n}^{-1}$. *Suppose that* |*A*| *is bounded by a fixed constant, and let* ${\mathbf{C}}_{A}={n}^{-1}{\mathbf{Z}}_{A}^{\prime}{\mathbf{Z}}_{A}$. *Then, with probability converging to one*,

$${c}_{1}{h}_{n}\le {\rho}_{\text{min}}({\mathbf{C}}_{A})\le {\rho}_{\text{max}}({\mathbf{C}}_{A})\le {c}_{2}{h}_{n},$$

*where* *c*_{1} *and* *c*_{2} *are two positive constants*.

PROOF. Without loss of generality, suppose *A* = {1, …, *q*}. Then **Z**_{A} = (**Z**_{1}, …, **Z**_{q}). Let $\mathbf{b}=({\mathbf{b}}_{1}^{\prime},\dots ,{\mathbf{b}}_{q}^{\prime})\prime $, where ${\mathbf{b}}_{j}\in {\mathbb{R}}^{{m}_{n}}$. By Lemma 3 of Stone (1985),

$${\Vert {\mathbf{Z}}_{1}{\mathbf{b}}_{1}+\cdots +{\mathbf{Z}}_{q}{\mathbf{b}}_{q}\Vert}_{2}\ge {c}_{3}({\Vert {\mathbf{Z}}_{1}{\mathbf{b}}_{1}\Vert}_{2}+\cdots +{\Vert {\mathbf{Z}}_{q}{\mathbf{b}}_{q}\Vert}_{2})$$

for a certain constant *c*_{3} > 0. By the triangle inequality,

$${\Vert {\mathbf{Z}}_{1}{\mathbf{b}}_{1}+\cdots +{\mathbf{Z}}_{q}{\mathbf{b}}_{q}\Vert}_{2}\le {\Vert {\mathbf{Z}}_{1}{\mathbf{b}}_{1}\Vert}_{2}+\cdots +{\Vert {\mathbf{Z}}_{q}{\mathbf{b}}_{q}\Vert}_{2}.$$

Since ${\mathbf{Z}}_{A}\mathbf{b}={\mathbf{Z}}_{1}{\mathbf{b}}_{1}+\cdots +{\mathbf{Z}}_{q}{\mathbf{b}}_{q}$, the above two inequalities imply that

$${c}_{3}({\Vert {\mathbf{Z}}_{1}{\mathbf{b}}_{1}\Vert}_{2}+\cdots +{\Vert {\mathbf{Z}}_{q}{\mathbf{b}}_{q}\Vert}_{2})\le {\Vert {\mathbf{Z}}_{A}\mathbf{b}\Vert}_{2}\le {\Vert {\mathbf{Z}}_{1}{\mathbf{b}}_{1}\Vert}_{2}+\cdots +{\Vert {\mathbf{Z}}_{q}{\mathbf{b}}_{q}\Vert}_{2}.$$

Therefore,

$${c}_{3}^{2}({\Vert {\mathbf{Z}}_{1}{\mathbf{b}}_{1}\Vert}_{2}^{2}+\cdots +{\Vert {\mathbf{Z}}_{q}{\mathbf{b}}_{q}\Vert}_{2}^{2})\le {\Vert {\mathbf{Z}}_{A}\mathbf{b}\Vert}_{2}^{2}\le 2({\Vert {\mathbf{Z}}_{1}{\mathbf{b}}_{1}\Vert}_{2}^{2}+\cdots +{\Vert {\mathbf{Z}}_{q}{\mathbf{b}}_{q}\Vert}_{2}^{2}).$$

(14)

Let ${\mathbf{C}}_{j}={n}^{-1}{\mathbf{Z}}_{j}^{\prime}{\mathbf{Z}}_{j}$. By Lemma 6.2 of Zhou, Shen and Wolfe (1998),

$${c}_{4}{h}_{n}\le {\rho}_{\text{min}}({\mathbf{C}}_{j})\le {\rho}_{\text{max}}({\mathbf{C}}_{j})\le {c}_{5}{h}_{n},\text{\hspace{1em}\hspace{1em}}j\in A.$$

(15)

Since ${\mathbf{C}}_{A}={n}^{-1}{\mathbf{Z}}_{A}^{\prime}{\mathbf{Z}}_{A}$, it follows from (14) that

$${c}_{3}^{2}({\mathbf{b}}_{1}^{\prime}{\mathbf{C}}_{1}{\mathbf{b}}_{1}+\cdots +{\mathbf{b}}_{q}^{\prime}{\mathbf{C}}_{q}{\mathbf{b}}_{q})\le \mathbf{b}\prime {\mathbf{C}}_{A}\mathbf{b}\le 2({\mathbf{b}}_{1}^{\prime}{\mathbf{C}}_{1}{\mathbf{b}}_{1}+\cdots +{\mathbf{b}}_{q}^{\prime}{\mathbf{C}}_{q}{\mathbf{b}}_{q}).$$

Therefore, by (15),

$$\begin{array}{cc}\frac{{\mathbf{b}}_{1}^{\prime}{\mathbf{C}}_{1}{\mathbf{b}}_{1}}{{\Vert \mathbf{b}\Vert}_{2}^{2}}+\cdots +\frac{{\mathbf{b}}_{q}^{\prime}{\mathbf{C}}_{q}{\mathbf{b}}_{q}}{{\Vert \mathbf{b}\Vert}_{2}^{2}}\hfill & =\frac{{\mathbf{b}}_{1}^{\prime}{\mathbf{C}}_{1}{\mathbf{b}}_{1}}{{\Vert {\mathbf{b}}_{1}\Vert}_{2}^{2}}\frac{{\Vert {\mathbf{b}}_{1}\Vert}_{2}^{2}}{{\Vert \mathbf{b}\Vert}_{2}^{2}}+\cdots +\frac{{\mathbf{b}}_{q}^{\prime}{\mathbf{C}}_{q}{\mathbf{b}}_{q}}{{\Vert {\mathbf{b}}_{q}\Vert}_{2}^{2}}\frac{{\Vert {\mathbf{b}}_{q}\Vert}_{2}^{2}}{{\Vert \mathbf{b}\Vert}_{2}^{2}}\hfill \\ \hfill & \ge {\rho}_{\text{min}}({\mathbf{C}}_{1})\frac{{\Vert {\mathbf{b}}_{1}\Vert}_{2}^{2}}{{\Vert \mathbf{b}\Vert}_{2}^{2}}+\cdots +{\rho}_{\text{min}}({\mathbf{C}}_{q})\frac{{\Vert {\mathbf{b}}_{q}\Vert}_{2}^{2}}{{\Vert \mathbf{b}\Vert}_{2}^{2}}\hfill \\ \hfill & \ge {c}_{4}{h}_{n}.\hfill \end{array}$$

Similarly,

$$\frac{{\mathbf{b}}_{1}^{\prime}{\mathbf{C}}_{1}{\mathbf{b}}_{1}}{{\Vert \mathbf{b}\Vert}_{2}^{2}}+\cdots +\frac{{\mathbf{b}}_{q}^{\prime}{\mathbf{C}}_{q}{\mathbf{b}}_{q}}{{\Vert \mathbf{b}\Vert}_{2}^{2}}\le {c}_{5}{h}_{n}.$$

Thus, we have

$${c}_{3}^{2}{c}_{4}{h}_{n}\le \frac{\mathbf{b}\prime {\mathbf{C}}_{A}\mathbf{b}}{\mathbf{b}\prime \mathbf{b}}\le 2{c}_{5}{h}_{n}.$$

The lemma follows.
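Lemma 3 can also be checked numerically: for a B-spline design matrix built from uniform draws, the extreme eigenvalues of C_A = n^{-1}Z_A′Z_A shrink in proportion to h_n = 1/m_n as the number of basis functions grows. A small sketch (the sample size, seed, and knot placement are our choices):

```python
import numpy as np
from scipy.interpolate import BSpline

def eig_range(n, m, degree=3, seed=0):
    """Smallest and largest eigenvalues of Z'Z / n for a single
    covariate's B-spline design with m basis functions."""
    x = np.random.default_rng(seed).uniform(size=n)
    n_interior = m - degree - 1
    interior = np.linspace(0, 1, n_interior + 2)[1:-1]
    t = np.r_[np.zeros(degree + 1), interior, np.ones(degree + 1)]
    Z = BSpline.design_matrix(x, t, degree).toarray()
    ev = np.linalg.eigvalsh(Z.T @ Z / n)
    return ev[0], ev[-1]
```

Both the smallest and largest eigenvalues should decrease as m grows, consistent with the c_1 h_n and c_2 h_n bounds; the partition-of-unity property also forces the trace, and hence the largest eigenvalue, to be at most 1.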

PROOF OF THEOREM 1. The proof of parts (i) and (ii) essentially follows the proof of Theorem 2.1 of Wei and Huang (2008). The only change that must be made here is that we need to consider the approximation error of the regression functions by splines. Specifically, let **ξ**_{n} = **ε**_{n} + **δ**_{n}, where **δ**_{n} = (δ_{n1}, …, δ_{nn})′ with ${\delta}_{\mathit{\text{ni}}}={\displaystyle {\sum}_{j=1}^{{q}_{n}}({f}_{0j}({X}_{\mathit{\text{ij}}})-{f}_{\mathit{\text{nj}}}({X}_{\mathit{\text{ij}}}))}$. Since ${\Vert {f}_{0j}-{f}_{\mathit{\text{nj}}}\Vert}_{2}=O\phantom{\rule{thinmathspace}{0ex}}({m}_{n}^{-d})=O\phantom{\rule{thinmathspace}{0ex}}({n}^{-d/(2d+1)})$ for ${m}_{n}=O({n}^{1/(2d+1)})$, we have

$${\Vert {\mathit{\delta}}_{n}\Vert}_{2}\le {C}_{1}\sqrt{{\mathit{\text{nqm}}}_{n}^{-2d}}={C}_{1}q{n}^{1/(4d+2)}$$

for some constant *C*_{1} > 0. For any integer *t*, let

$${\chi}_{t}=\underset{|A|=t}{\text{max}}\underset{{\Vert {U}_{{A}_{k}}\Vert}_{2}=1,1\le k\le t}{\text{max}}\frac{|{\mathit{\xi}}_{n}^{\prime}{V}_{A}(\mathbf{s})|}{{\Vert {V}_{A}(\mathbf{s})\Vert}_{2}}\text{\hspace{1em}and\hspace{1em}}{\chi}_{t}^{*}=\underset{|A|=t}{\text{max}}\underset{{\Vert {U}_{{A}_{k}}\Vert}_{2}=1,1\le k\le t}{\text{max}}\frac{|{\mathit{\epsilon}}_{n}^{\prime}{V}_{A}(\mathbf{s})|}{{\Vert {V}_{A}(\mathbf{s})\Vert}_{2}},$$

where ${V}_{A}({S}_{A})={\mathbf{Z}}_{A}{({\mathbf{Z}}_{A}^{\prime}{\mathbf{Z}}_{A})}^{-1}{S}_{A}-(I-{P}_{A})\mathbf{Z}{\mathit{\beta}}_{n}$, with ${P}_{A}$ the projection onto the span of ${\mathbf{Z}}_{A}$, for $|A|=m\ge 0$, ${S}_{A}=({S}_{{A}_{1}}^{\prime},\dots ,{S}_{{A}_{m}}^{\prime})\prime ,{S}_{{A}_{k}}=\lambda \sqrt{{d}_{{A}_{k}}}{U}_{{A}_{k}}$ and ${\Vert {U}_{{A}_{k}}\Vert}_{2}=1$.

For a sufficiently large constant *C*_{2} > 0, define

$${\mathrm{\Omega}}_{{t}_{0}}=\{(\mathbf{Z},{\mathit{\epsilon}}_{n}):{\chi}_{t}\le \sigma {C}_{2}\sqrt{(t\phantom{\rule{thinmathspace}{0ex}}\vee \phantom{\rule{thinmathspace}{0ex}}1){m}_{n}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})},\forall t\ge {t}_{0}\}$$

and

$${\mathrm{\Omega}}_{{t}_{0}}^{*}=\{(\mathbf{Z},{\mathit{\epsilon}}_{n}):{\chi}_{t}^{*}\le \sigma {C}_{2}\sqrt{(t\phantom{\rule{thinmathspace}{0ex}}\vee \phantom{\rule{thinmathspace}{0ex}}1){m}_{n}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})},\forall t\ge {t}_{0}\},$$

where *t*_{0} ≥ 0.

As in the proof of Theorem 2.1 of Wei and Huang (2008),

$$(\mathbf{Z},{\mathit{\epsilon}}_{n})\in {\mathrm{\Omega}}_{q}\text{\hspace{1em}}\Rightarrow \text{\hspace{1em}}|{\tilde{A}}_{1}|\le {M}_{1}q$$

for a constant *M*_{1} > 1. By the triangle and Cauchy–Schwarz inequalities,

$$\frac{|{\mathit{\xi}}_{n}^{\prime}{V}_{A}(\mathbf{s})|}{{\Vert {V}_{A}(\mathbf{s})\Vert}_{2}}=\frac{|{\mathit{\epsilon}}_{n}^{\prime}{V}_{A}(\mathbf{s})+{\mathit{\delta}}_{n}^{\prime}{V}_{A}(\mathbf{s})|}{{\Vert {V}_{A}(\mathbf{s})\Vert}_{2}}\le \frac{|{\mathit{\epsilon}}_{n}^{\prime}{V}_{A}(\mathbf{s})|}{{\Vert {V}_{A}\Vert}_{2}}+\Vert {\mathit{\delta}}_{n}\Vert .$$

(16)

In the proof of Theorem 2.1 of Wei and Huang (2008), it is shown that

$$\mathrm{P}({\mathrm{\Omega}}_{0}^{*})\ge 2-\frac{2}{{p}^{1+{c}_{0}}}-\text{exp}\left(\frac{2p}{{p}^{1+{c}_{0}}}\right)\to 1.$$

(17)

Since

$$\frac{|{\mathit{\delta}}_{n}^{\prime}{V}_{A}(\mathbf{s})|}{{\Vert {V}_{A}(\mathbf{s})\Vert}_{2}}\le {\Vert {\mathit{\delta}}_{n}\Vert}_{2}\le {C}_{1}{\mathit{\text{qn}}}^{1/(2(2d+1))}$$

and ${m}_{n}=O({n}^{1/(2d+1)})$, we have, for all *n* sufficiently large,

$${\Vert {\mathit{\delta}}_{n}\Vert}_{2}\le {C}_{1}{\mathit{\text{qn}}}^{1/(2(2d+1))}\le \sigma {C}_{2}\sqrt{(t\phantom{\rule{thinmathspace}{0ex}}\vee \phantom{\rule{thinmathspace}{0ex}}1){m}_{n}\phantom{\rule{thinmathspace}{0ex}}\text{log}(p)}.$$

(18)

It follows from (16), (17) and (18) that P(Ω_{0}) → 1. This completes the proof of part (i) of Theorem 1.

Before proving part (ii), we first prove part (iii) of Theorem 1. By the definition of ${\tilde{\mathit{\beta}}}_{n}\equiv ({\tilde{\mathit{\beta}}}_{n1}^{\prime},\dots ,{\tilde{\mathit{\beta}}}_{\mathit{\text{np}}}^{\prime})\prime $,

$${\Vert \mathbf{Y}-\mathbf{Z}{\tilde{\mathit{\beta}}}_{n}\Vert}_{2}^{2}+{\lambda}_{n1}{\displaystyle \sum _{j=1}^{p}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}\le {\Vert \mathbf{Y}-\mathbf{Z}{\mathit{\beta}}_{n}\Vert}_{2}^{2}}+{\lambda}_{n1}{\displaystyle \sum _{j=1}^{p}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}.}$$

(19)

Let *A*_{2} = {*j* : ‖**β**_{nj}‖_{2} ≠ 0 or ${\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}\ne 0$} and *d*_{n2} = |*A*_{2}|. By part (i), *d*_{n2} = *O _{p}*(*q*). It follows from (19) that

$${\Vert \mathbf{Y}-{\mathbf{Z}}_{{A}_{2}}{\tilde{\mathit{\beta}}}_{n{A}_{2}}\Vert}_{2}^{2}+{\lambda}_{n1}{\displaystyle \sum _{j\in {A}_{2}}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}\le {\Vert \mathbf{Y}-{\mathbf{Z}}_{{A}_{2}}{\mathit{\beta}}_{n{A}_{2}}\Vert}_{2}^{2}}+{\lambda}_{n1}{\displaystyle \sum _{j\in {A}_{2}}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}.}$$

(20)

Let **η**_{n} = **Y** − **Zβ**_{n}. Write

$$\mathbf{Y}-{\mathbf{Z}}_{{A}_{2}}{\tilde{\mathit{\beta}}}_{n{A}_{2}}=\mathbf{Y}-\mathbf{Z}{\mathit{\beta}}_{n}-{\mathbf{Z}}_{{A}_{2}}({\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}})={\mathit{\eta}}_{n}-{\mathbf{Z}}_{{A}_{2}}({\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}}).$$

We have

$${\Vert \mathbf{Y}-{\mathbf{Z}}_{{A}_{2}}{\tilde{\mathit{\beta}}}_{n{A}_{2}}\Vert}_{2}^{2}={\Vert {\mathbf{Z}}_{{A}_{2}}({\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}})\Vert}_{2}^{2}-2{\mathit{\eta}}_{n}^{\prime}{\mathbf{Z}}_{{A}_{2}}({\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}})+{\mathit{\eta}}_{n}^{\prime}{\mathit{\eta}}_{n}.$$

We can rewrite (20) as

$${\Vert {\mathbf{Z}}_{{A}_{2}}({\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}})\Vert}_{2}^{2}-2{\mathit{\eta}}_{n}^{\prime}{\mathbf{Z}}_{{A}_{2}}({\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}})\le {\lambda}_{n1}{\displaystyle \sum _{j\in {A}_{1}}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}}-{\lambda}_{n1}{\displaystyle \sum _{j\in {A}_{1}}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}}.$$

(21)

Now

$$\left|{\displaystyle \sum _{j\in {A}_{1}}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}}-{\displaystyle \sum _{j\in {A}_{1}}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}}\right|\le \sqrt{|{A}_{1}|}\xb7{\Vert {\tilde{\mathit{\beta}}}_{n{A}_{1}}-{\mathit{\beta}}_{n{A}_{1}}\Vert}_{2}\le \sqrt{|{A}_{1}|}\xb7{\Vert {\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}}\Vert}_{2}.$$

(22)

Let ${\mathit{\nu}}_{n}={\mathbf{Z}}_{{A}_{2}}({\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}})$. Combining (20), (21) and (22), we get

$${\Vert {\mathit{\nu}}_{n}\Vert}_{2}^{2}-2{\mathit{\eta}}_{n}^{\prime}{\mathit{\nu}}_{n}\le {\lambda}_{n1}\sqrt{|{A}_{1}|}\xb7{\Vert {\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}}\Vert}_{2}.$$

(23)

Let ${\mathit{\eta}}_{n}^{*}$ be the projection of **η**_{n} onto the span of **Z**_{A2}, that is, ${\mathit{\eta}}_{n}^{*}={\mathbf{Z}}_{{A}_{2}}{({\mathbf{Z}}_{{A}_{2}}^{\prime}\times \phantom{\rule{thinmathspace}{0ex}}{\mathbf{Z}}_{{A}_{2}})}^{-1}{\mathbf{Z}}_{{A}_{2}}^{\prime}{\mathit{\eta}}_{n}$. By the Cauchy–Schwarz inequality,

$$2|{\mathit{\eta}}_{n}^{\prime}{\mathit{\nu}}_{n}|\le 2{\Vert {\mathit{\eta}}_{n}^{*}\Vert}_{2}\xb7{\Vert {\mathit{\nu}}_{n}\Vert}_{2}\le 2{\Vert {\mathit{\eta}}_{n}^{*}\Vert}_{2}^{2}+\frac{1}{2}{\Vert {\mathit{\nu}}_{n}\Vert}_{2}^{2}.$$

(24)

Combining (23) and (24), we obtain

$${\Vert {\mathit{\nu}}_{n}\Vert}_{2}^{2}\le 4{\Vert {\mathit{\eta}}_{n}^{*}\Vert}_{2}^{2}+2{\lambda}_{n1}\sqrt{|{A}_{1}|}\xb7{\Vert {\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}}\Vert}_{2}.$$

Let *c*_{n*} be the smallest eigenvalue of ${\mathbf{Z}}_{{A}_{2}}^{\prime}{\mathbf{Z}}_{{A}_{2}}/n$. By Lemma 3 and part (i), ${c}_{n*}{\asymp}_{p}{m}_{n}^{-1}$. Since ${\Vert {\mathit{\nu}}_{n}\Vert}_{2}^{2}\ge {\mathit{\text{nc}}}_{n*}{\Vert {\tilde{\mathit{\beta}}}_{{\mathit{\text{nA}}}_{2}}-{\mathit{\beta}}_{{\mathit{\text{nA}}}_{2}}\Vert}_{2}^{2}$ and 2*ab* ≤ *a*^{2} + *b*^{2},

$${\mathit{\text{nc}}}_{n*}{\Vert {\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}}\Vert}_{2}^{2}\le 4{\Vert {\mathit{\eta}}_{n}^{*}\Vert}_{2}^{2}+\frac{{(2{\lambda}_{n1}\sqrt{|{A}_{1}|})}^{2}}{2{\mathit{\text{nc}}}_{n*}}+\frac{1}{2}{\mathit{\text{nc}}}_{n*}{\Vert {\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}}\Vert}_{2}^{2}.$$

It follows that

$${\Vert {\tilde{\mathit{\beta}}}_{n{A}_{2}}-{\mathit{\beta}}_{n{A}_{2}}\Vert}_{2}^{2}\le \frac{8{\Vert {\mathit{\eta}}_{n}^{*}\Vert}_{2}^{2}}{{\mathit{\text{nc}}}_{n*}}+\frac{4{\lambda}_{n1}^{2}|{A}_{1}|}{{n}^{2}{c}_{n\mathrm{*}}^{2}}.$$

(25)

Let ${f}_{0}({\mathbf{X}}_{i})={\displaystyle {\sum}_{j=1}^{p}{f}_{0j}({X}_{\mathit{\text{ij}}})}$ and ${f}_{0A}({\mathbf{X}}_{i})={\sum}_{j\in A}{f}_{0j}({X}_{\mathit{\text{ij}}})$. Write

$${\eta}_{i}={Y}_{i}-\mu -{f}_{0}({\mathbf{X}}_{i})+(\mu -\overline{Y})+{f}_{0}({\mathbf{X}}_{i})-{\displaystyle \sum _{j\in {A}_{2}}{Z}_{\mathit{\text{ij}}}^{\prime}\phantom{\rule{thinmathspace}{0ex}}{\mathit{\beta}}_{\mathit{\text{nj}}}}={\epsilon}_{i}+(\mu -\overline{Y})+{f}_{0{A}_{2}}({\mathbf{X}}_{i})-{f}_{n{A}_{2}}({\mathbf{X}}_{i}).$$

Since ${|\mu -\overline{Y}|}^{2}={O}_{p}({n}^{-1})$ and ${\Vert {f}_{0j}-{f}_{\mathit{\text{nj}}}\Vert}_{2}=O({m}_{n}^{-d})$ for each *j*, we have

$${\Vert {\mathit{\eta}}_{n}^{*}\Vert}_{2}^{2}\le 2{\Vert {\mathit{\epsilon}}_{n}^{*}\Vert}_{2}^{2}+{O}_{p}(1)+O\phantom{\rule{thinmathspace}{0ex}}({\mathit{\text{nd}}}_{n2}{m}_{n}^{-2d}),$$

(26)

where ${\mathit{\epsilon}}_{n}^{*}$ is the projection of **ε**_{n} = (ε_{1}, …, ε_{n})′ to the span of **Z**_{A2}. We have

$${\Vert {\mathit{\epsilon}}_{n}^{*}\Vert}_{2}^{2}={\Vert {({\mathbf{Z}}_{{A}_{2}}^{\prime}{\mathbf{Z}}_{{A}_{2}})}^{-1/2}{\mathbf{Z}}_{{A}_{2}}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}^{2}\le \frac{1}{{\mathit{\text{nc}}}_{n*}}{\Vert {\mathbf{Z}}_{{A}_{2}}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}^{2}.$$

Now

$$\underset{A:|A|\le {d}_{n2}}{\text{max}}{\Vert {\mathbf{Z}}_{A}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}^{2}=\underset{A:|A|\le {d}_{n2}}{\text{max}}{\displaystyle \sum _{j\in A}{\Vert {\mathbf{Z}}_{j}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}^{2}}\le {d}_{n2}{m}_{n}\underset{1\le j\le p,1\le k\le {m}_{n}}{\text{max}}{|{\mathcal{Z}}_{\mathit{\text{jk}}}^{\prime}\mathit{\epsilon}|}^{2},$$

where ${\mathcal{Z}}_{\mathit{\text{jk}}}=({\psi}_{k}({X}_{1j}),\dots ,{\psi}_{k}({X}_{\mathit{\text{nj}}}))\prime $. By Lemma 2,

$$\underset{1\le j\le p,1\le k\le {m}_{n}}{\text{max}}{|{\mathcal{Z}}_{\mathit{\text{jk}}}^{\prime}{\mathit{\epsilon}}_{n}|}^{2}={\mathit{\text{nm}}}_{n}^{-1}\underset{1\le j\le p,1\le k\le {m}_{n}}{\text{max}}{|{({m}_{n}/n)}^{1/2}{\mathcal{Z}}_{\mathit{\text{jk}}}^{\prime}{\mathit{\epsilon}}_{n}|}^{2}={O}_{p}(1){\mathit{\text{nm}}}_{n}^{-1}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n}).$$

It follows that,

$${\Vert {\mathit{\epsilon}}_{n}^{*}\Vert}_{2}^{2}={O}_{p}(1)\frac{{d}_{n2}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}{{c}_{n*}}.$$

(27)

Combining (25), (26) and (27), we get

$${\Vert {\tilde{\mathit{\beta}}}_{{A}_{2}}-{\mathit{\beta}}_{{A}_{2}}\Vert}_{2}^{2}\le {O}_{p}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{{d}_{n2}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}{{\mathit{\text{nc}}}_{n*}^{2}}\right)+{O}_{p}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{1}{{\mathit{\text{nc}}}_{n*}}\right)+O\phantom{\rule{thinmathspace}{0ex}}\left(\frac{{d}_{n2}{m}_{n}^{-2d}}{{c}_{n*}}\right)+\frac{4{\lambda}_{n1}^{2}|{A}_{1}|}{{n}^{2}{c}_{n*}^{2}}.$$

Since ${d}_{n2}={O}_{p}(q)$ with *q* fixed, $|{A}_{1}|=q$ and ${c}_{n*}{\asymp}_{p}{m}_{n}^{-1}$, we have

$${\Vert {\tilde{\mathit{\beta}}}_{{A}_{2}}-{\mathit{\beta}}_{{A}_{2}}\Vert}_{2}^{2}\le {O}_{p}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{{m}_{n}^{2}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}{n}\right)+{O}_{p}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{{m}_{n}}{n}\right)+O\phantom{\rule{thinmathspace}{0ex}}\left(\frac{1}{{m}_{n}^{2d-1}}\right)+O\phantom{\rule{thinmathspace}{0ex}}\left(\frac{4{m}_{n}^{2}{\lambda}_{n1}^{2}}{{n}^{2}}\right).$$

This completes the proof of part (iii).

We now prove part (ii). Since ${\Vert {f}_{j}\Vert}_{2}\ge {c}_{f}>0$ and ${\Vert {f}_{j}-{f}_{\mathit{\text{nj}}}\Vert}_{2}=O\phantom{\rule{thinmathspace}{0ex}}({m}_{n}^{-d})$ for $1\le j\le q$, we have ${\Vert {f}_{\mathit{\text{nj}}}\Vert}_{2}\ge {c}_{f}/2$ for all *n* sufficiently large. By the properties of B-splines,

$${c}_{6}{m}_{n}^{-1}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}\le {\Vert {f}_{\mathit{\text{nj}}}\Vert}_{2}^{2}\le {c}_{7}{m}_{n}^{-1}{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}.$$

It follows that ${\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}\ge {c}_{7}^{-1}{m}_{n}{\Vert {f}_{\mathit{\text{nj}}}\Vert}_{2}^{2}\ge 0.25{c}_{7}^{-1}{c}_{f}^{2}{m}_{n}$. Therefore, if ‖**β**_{nj}‖_{2} ≠ 0 but ${\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}=0$, then

$${\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}\ge 0.25{c}_{7}^{-1}{c}_{f}^{2}{m}_{n}.$$

(28)

However, since ${m}_{n}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})/n\to 0$, part (iii) implies that ${\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}={o}_{p}({m}_{n})$, which contradicts (28). Therefore, with probability converging to one, ${\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}_{2}\ne 0$ for every *j* with ‖**β**_{nj}‖_{2} ≠ 0. This completes the proof of part (ii).

PROOF OF THEOREM 2. By the definition of ${\tilde{f}}_{\mathit{\text{nj}}}$, 1 ≤ *j* ≤ *p*, parts (i) and (ii) follow directly from the corresponding parts of Theorem 1.

Now consider part (iii). By the properties of splines [de Boor (2001)],

$${c}_{6}{m}_{n}^{-1}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}\le {\Vert {\tilde{f}}_{\mathit{\text{nj}}}-{f}_{\mathit{\text{nj}}}\Vert}_{2}^{2}\le {c}_{7}{m}_{n}^{-1}{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}^{2}.$$

Thus,

$${\Vert {\tilde{f}}_{\mathit{\text{nj}}}-{f}_{\mathit{\text{nj}}}\Vert}_{2}^{2}={O}_{p}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{{m}_{n}\phantom{\rule{thinmathspace}{0ex}}\text{log}({\mathit{\text{pm}}}_{n})}{n}\right)+{O}_{p}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{1}{n}\right)+O\phantom{\rule{thinmathspace}{0ex}}\left(\frac{1}{{m}_{n}^{2d}}\right)+O\phantom{\rule{thinmathspace}{0ex}}\left(\frac{4{m}_{n}{\lambda}_{n1}^{2}}{{n}^{2}}\right).$$

(29)

By (A3),

$${\Vert {f}_{j}-{f}_{\mathit{\text{nj}}}\Vert}_{2}^{2}=O\phantom{\rule{thinmathspace}{0ex}}({m}_{n}^{-2d}).$$

(30)

In the proofs below, for any matrix **H**, denote its 2-norm by ‖**H**‖, which equals its largest singular value. This norm satisfies the inequality ‖**Hx**‖ ≤ ‖**H**‖‖**x**‖ for any column vector **x** whose dimension is the same as the number of columns of **H**.

Denote ${\mathit{\beta}}_{{\mathit{\text{nA}}}_{1}}=({\mathit{\beta}}_{\mathit{\text{nj}}}^{\prime},j\in {A}_{1})\prime ,{\widehat{\mathit{\beta}}}_{{\mathit{\text{nA}}}_{1}}=({\widehat{\mathit{\beta}}}_{\mathit{\text{nj}}}^{\prime},j\in {A}_{1})\prime $ and ${\mathbf{Z}}_{{A}_{1}}=({\mathbf{Z}}_{j},j\in {A}_{1})$. Define ${\mathbf{C}}_{{A}_{1}}={n}^{-1}{\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathbf{Z}}_{{A}_{1}}$. Let ρ_{n1} and ρ_{n2} be the smallest and largest eigenvalues of **C**_{A1}, respectively.

PROOF OF THEOREM 3. By the Karush–Kuhn–Tucker conditions, a necessary and sufficient condition for ${\widehat{\mathit{\beta}}}_{n}$ to be the adaptive group Lasso solution is

$$\begin{cases}2\mathbf{Z}_j^{\prime}(\mathbf{Y}-\mathbf{Z}\widehat{\beta}_n)=\lambda_{n2}w_{nj}\dfrac{\widehat{\beta}_{nj}}{\|\widehat{\beta}_{nj}\|_2}, & \|\widehat{\beta}_{nj}\|_2\neq 0,\ j\ge 1,\\ 2\|\mathbf{Z}_j^{\prime}(\mathbf{Y}-\mathbf{Z}\widehat{\beta}_n)\|_2\le \lambda_{n2}w_{nj}, & \|\widehat{\beta}_{nj}\|_2=0,\ j\ge 1.\end{cases}$$

(31)

Let ${\nu}_{n}=({w}_{nj}{\widehat{\beta}}_{nj}^{\prime}/\|{\widehat{\beta}}_{nj}\|_2,\ j\in A_1)^{\prime}$. On the event that the nonzero groups of ${\widehat{\beta}}_{n}$ are exactly those indexed by $A_1$, the first equation in (31) gives

$${\widehat{\mathit{\beta}}}_{n{A}_{1}}={({\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathbf{Z}}_{{A}_{1}})}^{-1}({\mathbf{Z}}_{{A}_{1}}^{\prime}\mathbf{Y}-{\mathrm{\lambda}}_{n2}{\mathit{\nu}}_{n}).$$

(32)
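Equation (32) has a direct matrix form. The following sketch evaluates it on synthetic data, with a placeholder for **ν**_{n} (both hypothetical, introduced only for this illustration), and checks that setting λ_{n2} = 0 recovers the least-squares fit on *A*_{1}.

```python
import numpy as np

# Sketch of the restricted solution (32): beta_hat = (Z'Z)^{-1}(Z'Y - lambda * nu).
# Z_A1, Y and nu_n below are synthetic placeholders, not the paper's data.
rng = np.random.default_rng(1)
n, dim = 60, 4                      # dim plays the role of |A1| * m_n
Z_A1 = rng.normal(size=(n, dim))
Y = Z_A1 @ np.ones(dim) + 0.1 * rng.normal(size=n)
nu_n = rng.normal(size=dim)         # stands in for the weighted direction vector
lam_n2 = 2.0

G = Z_A1.T @ Z_A1
beta_hat = np.linalg.solve(G, Z_A1.T @ Y - lam_n2 * nu_n)

# With lam_n2 = 0, (32) reduces to the ordinary least-squares estimator on A1,
# so beta_hat differs from OLS by exactly lam_n2 * G^{-1} nu_n.
beta_ols = np.linalg.solve(G, Z_A1.T @ Y)
assert np.allclose(beta_hat + lam_n2 * np.linalg.solve(G, nu_n), beta_ols)
```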

If ${\widehat{\beta}}_{nA_1}{=}_{0}{\beta}_{nA_1}$, then the equation in (31) holds for ${\widehat{\beta}}_{n}\equiv ({\widehat{\beta}}_{nA_1}^{\prime},\mathbf{0}^{\prime})^{\prime}$. Thus, since $\mathbf{Z}{\widehat{\beta}}_{n}={\mathbf{Z}}_{A_1}{\widehat{\beta}}_{nA_1}$ for this ${\widehat{\beta}}_{n}$ and $\{{\mathbf{Z}}_{j},j\in A_1\}$ are linearly independent,

$$\widehat{\beta}_n{=}_0\,\beta_n\quad\text{if}\quad\begin{cases}\widehat{\beta}_{nA_1}{=}_0\,\beta_{nA_1}, & \\ \|\mathbf{Z}_j^{\prime}(\mathbf{Y}-\mathbf{Z}_{A_1}\widehat{\beta}_{nA_1})\|_2\le\lambda_{n2}w_{nj}/2, & \forall j\notin A_1.\end{cases}$$

This is true if

$$\widehat{\beta}_n{=}_0\,\beta_n\quad\text{if}\quad\begin{cases}\|\widehat{\beta}_{nj}-\beta_{nj}\|_2<\|\beta_{nj}\|_2, & \forall j\in A_1,\\ \|\mathbf{Z}_j^{\prime}(\mathbf{Y}-\mathbf{Z}_{A_1}\widehat{\beta}_{nA_1})\|_2\le\lambda_{n2}w_{nj}/2, & \forall j\notin A_1.\end{cases}$$

Therefore,

$$\mathrm{P}(\widehat{\beta}_n{\neq}_0\,\beta_n)\le\mathrm{P}(\|\widehat{\beta}_{nj}-\beta_{nj}\|_2\ge\|\beta_{nj}\|_2,\ \exists j\in A_1)+\mathrm{P}(\|\mathbf{Z}_j^{\prime}(\mathbf{Y}-\mathbf{Z}_{A_1}\widehat{\beta}_{nA_1})\|_2>\lambda_{n2}w_{nj}/2,\ \exists j\notin A_1).$$

Let *f*_{0j}(**X**_{j}) = (*f*_{0j}(*X*_{1j}), …, *f*_{0j}(*X _{nj}*))′ and ${\delta}_{n}={\sum}_{j\in A_1}({f}_{0j}(\mathbf{X}_j)-{\mathbf{Z}}_{j}{\beta}_{nj})$ denote the spline approximation error over the active set. Then, by (30),

$${n}^{-1}{\Vert {\mathit{\delta}}_{n}\Vert}^{2}={O}_{p}({\mathit{\text{qm}}}_{n}^{-2d}).$$

(33)

Let ${\mathbf{H}}_{n}={\mathbf{I}}_{n}-{\mathbf{Z}}_{{A}_{1}}{({\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathbf{Z}}_{{A}_{1}})}^{-1}{\mathbf{Z}}_{{A}_{1}}^{\prime}$. By (32),

$${\widehat{\mathit{\beta}}}_{n{A}_{1}}-{\mathit{\beta}}_{n{A}_{1}}={n}^{-1}{\mathbf{C}}_{{A}_{1}}^{-1}({\mathbf{Z}}_{{A}_{1}}^{\prime}({\mathit{\epsilon}}_{n}+{\mathit{\delta}}_{n})-{\lambda}_{n2}{\mathit{\nu}}_{n})$$

(34)

and

$$\mathbf{Y}-{\mathbf{Z}}_{{A}_{1}}{\widehat{\mathit{\beta}}}_{n{A}_{1}}={\mathbf{H}}_{n}{\mathit{\epsilon}}_{n}+{\mathbf{H}}_{n}{\mathit{\delta}}_{n}+{\lambda}_{n2}{\mathbf{Z}}_{{A}_{1}}{\mathbf{C}}_{{A}_{1}}^{-1}{\mathbf{\nu}}_{n}/n.$$

(35)

Based on these two equations, Lemma 5 below shows that

$$\mathrm{P}({\Vert {\widehat{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}\ge {\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2},\exists j\in {A}_{1})\to 0,$$

and Lemma 6 below shows that

$$\mathrm{P}({\Vert {\mathbf{Z}}_{j}^{\prime}\phantom{\rule{thinmathspace}{0ex}}(\mathbf{Y}-{\mathbf{Z}}_{{A}_{1}}{\widehat{\mathit{\beta}}}_{n{A}_{1}})\Vert}_{2}>{\lambda}_{n2}{w}_{\mathit{\text{nj}}}/2,\exists j\notin {A}_{1})\to 0.$$

These two equations lead to part (i) of the theorem.

We now prove part (ii) of Theorem 3. As in (26), for **η**_{n} = **Y** − **Zβ**_{n} and

$${\mathit{\eta}}_{n1}^{*}={\mathbf{Z}}_{{A}_{1}}{({\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathbf{Z}}_{{A}_{1}})}^{-1}{\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathit{\eta}}_{n},$$

we have

$${\Vert {\mathit{\eta}}_{n1}^{*}\Vert}_{2}^{2}\le 2{\Vert {\mathit{\epsilon}}_{n1}^{*}\Vert}_{2}^{2}+{O}_{p}\phantom{\rule{thinmathspace}{0ex}}(1)+O\phantom{\rule{thinmathspace}{0ex}}({\mathit{\text{qnm}}}_{n}^{-2d}),$$

(36)

where ${\mathit{\epsilon}}_{n1}^{*}$ is the projection of **ε**_{n} = (ε_{1}, …, ε_{n})′ to the span of **Z**_{A1}. We have

$${\Vert {\mathit{\epsilon}}_{n1}^{*}\Vert}_{2}^{2}={\Vert {({\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathbf{Z}}_{{A}_{1}})}^{-1/2}{\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}^{2}\le \frac{1}{n{\rho}_{n1}}{\Vert {\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}^{2}={O}_{p}\phantom{\rule{thinmathspace}{0ex}}(1)\phantom{\rule{thinmathspace}{0ex}}\frac{|{A}_{1}|}{{\rho}_{n1}}.$$

(37)

Now similarly to the proof of (25), we can show that

$${\Vert {\widehat{\mathit{\beta}}}_{n{A}_{1}}-{\mathit{\beta}}_{n{A}_{1}}\Vert}_{2}^{2}\le \frac{8{\Vert {\mathit{\eta}}_{n1}^{*}\Vert}_{2}^{2}}{n{\rho}_{n1}}+\frac{4{\lambda}_{n2}^{2}|{A}_{1}|}{{n}^{2}{\rho}_{n1}^{2}}.$$

(38)

Combining (36), (37) and (38), we get

$${\Vert {\widehat{\mathit{\beta}}}_{n{A}_{1}}-{\mathit{\beta}}_{n{A}_{1}}\Vert}_{2}^{2}={O}_{p}\left(\frac{8}{n{\rho}_{n1}^{2}}\right)+{O}_{p}\left(\frac{1}{n{\rho}_{n1}}\right)+O\phantom{\rule{thinmathspace}{0ex}}\left(\frac{1}{{m}_{n}^{2d-1}}\right)+O\phantom{\rule{thinmathspace}{0ex}}\left(\frac{4{\lambda}_{n2}^{2}}{{n}^{2}{\rho}_{n1}^{2}}\right).$$

Since ${\rho}_{n1}{\asymp}_{p}{m}_{n}^{-1}$, the result follows.

The following lemmas are needed in the proof of Theorem 3.

LEMMA 4. *For* ${\nu}_{n}=({w}_{nj}{\widehat{\beta}}_{nj}^{\prime}/\|{\widehat{\beta}}_{nj}\|_2,\ j\in A_1)^{\prime}$ *defined in the proof of Theorem* 3,

$${\Vert {\mathit{\nu}}_{n}\Vert}^{2}={O}_{p}({h}_{n}^{2})={O}_{p}({({b}_{n1}^{2}{c}_{b})}^{-2}{r}_{n}^{-1}+q{b}_{n1}^{-1}).$$

PROOF. Write

$$\|\nu_n\|^2=\sum_{j\in A_1}w_{nj}^2=\sum_{j\in A_1}\|\tilde{\beta}_{nj}\|^{-2}=\sum_{j\in A_1}\frac{\|\beta_{nj}\|^2-\|\tilde{\beta}_{nj}\|^2}{\|\beta_{nj}\|^2\cdot\|\tilde{\beta}_{nj}\|^2}+\sum_{j\in A_1}\|\beta_{nj}\|^{-2}.$$

Under (B2),

$$\sum _{j\in {A}_{1}}\frac{|{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}^{2}-{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}^{2}|}{{\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}^{2}\xb7{\Vert {\tilde{\mathit{\beta}}}_{\mathit{\text{nj}}}\Vert}^{2}}\le M{c}_{b}^{-2}{b}_{n1}^{-4}\Vert {\tilde{\mathit{\beta}}}_{n}-{\mathit{\beta}}_{n}\Vert$$

and $\sum_{j\in A_1}\|\beta_{nj}\|^{-2}\le q b_{n1}^{-2}$. The claim follows.
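The decomposition of ‖ν_{n}‖² above rests on the elementary identity 1/*b*² = (*a*² − *b*²)/(*a*²*b*²) + 1/*a*². A quick numeric check with arbitrary values:

```python
# Check of the algebraic identity behind the decomposition of ||nu_n||^2:
# 1/b^2 = (a^2 - b^2)/(a^2 * b^2) + 1/a^2 for any nonzero a, b.
a, b = 1.7, 0.4   # arbitrary stand-ins for ||beta_nj|| and ||beta-tilde_nj||
lhs = 1.0 / b**2
rhs = (a**2 - b**2) / (a**2 * b**2) + 1.0 / a**2
assert abs(lhs - rhs) < 1e-12
```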

Let ρ_{n3} be the maximum of the largest eigenvalues of ${n}^{-1}{\mathbf{Z}}_{j}^{\prime}{\mathbf{Z}}_{j},j\in {A}_{0}$, that is, ${\rho}_{n3}={\text{max}}_{j\in {A}_{0}}{\Vert {n}^{-1}{\mathbf{Z}}_{j}^{\prime}{\mathbf{Z}}_{j}\Vert}_{2}$. By Lemma 3,

$$b_{n1}\asymp m_n^{1/2},\qquad \rho_{n1}\asymp_p m_n^{-1},\qquad \rho_{n2}\asymp_p m_n^{-1}\qquad\text{and}\qquad \rho_{n3}\asymp_p m_n^{-1}.$$

(39)
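The order *m_n*^{−1} in (39) reflects that each basis function is supported on roughly an *m_n*^{−1} fraction of the design points. A simplified illustration (an assumption made only for this sketch: the B-spline basis is replaced by indicator functions on *m_n* equal bins, for which *n*^{−1}**Z**′**Z** is exactly diagonal) shows the eigenvalues concentrating near 1/*m_n*:

```python
import numpy as np

# Simplified illustration of (39): with indicator ("histogram") basis functions
# on m_n equal bins, n^{-1} Z'Z is diagonal with entries equal to the bin
# proportions, which are close to 1/m_n; hence all eigenvalues are O(m_n^{-1}).
rng = np.random.default_rng(2)
n, m_n = 5000, 10
x = rng.uniform(size=n)
Z = np.zeros((n, m_n))
Z[np.arange(n), np.minimum((x * m_n).astype(int), m_n - 1)] = 1.0

eigvals = np.linalg.eigvalsh(Z.T @ Z / n)
assert 0.5 / m_n < eigvals.min() <= eigvals.max() < 2.0 / m_n
```

The B-spline Gram matrix is banded rather than diagonal, but the same scaling holds, which is the content of Lemma 3 cited above.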

LEMMA 5. *Under conditions* (B1), (B2), (A3) *and* (A4),

$$\mathrm{P}({\Vert {\widehat{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}\ge {\Vert {\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2},\exists j\in {A}_{1})\to 0.$$

(40)

PROOF. Let **T**_{nj} be the *m _{n}* × *qm _{n}* selection matrix

$${\mathbf{T}}_{\mathit{\text{nj}}}=({\mathbf{0}}_{{m}_{n}},\dots \phantom{\rule{thinmathspace}{0ex}},{\mathbf{0}}_{{m}_{n}},{\mathbf{I}}_{{m}_{n}},{\mathbf{0}}_{{m}_{n}},\dots \phantom{\rule{thinmathspace}{0ex}},{\mathbf{0}}_{{m}_{n}}),$$

where **0**_{mn} is the *m _{n}* × *m _{n}* zero matrix and the identity block **I**_{mn} occupies the *j*th position. Then ${\widehat{\beta}}_{nj}-{\beta}_{nj}={\mathbf{T}}_{nj}({\widehat{\beta}}_{nA_1}-{\beta}_{nA_1})$ and, by (34),

$${\Vert {\widehat{\mathit{\beta}}}_{\mathit{\text{nj}}}-{\mathit{\beta}}_{\mathit{\text{nj}}}\Vert}_{2}\le {n}^{-1}{\Vert {\mathbf{T}}_{\mathit{\text{nj}}}{\mathbf{C}}_{{A}_{1}}^{-1}{\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}+{n}^{-1}{\Vert {\mathbf{T}}_{\mathit{\text{nj}}}{\mathbf{C}}_{{A}_{1}}^{-1}{\mathbf{Z}}_{{A}_{1}}^{\prime}{\delta}_{n}\Vert}_{2}+{n}^{-1}{\lambda}_{n2}{\Vert {\mathbf{T}}_{\mathit{\text{nj}}}{\mathbf{C}}_{{A}_{1}}^{-1}{\mathit{\nu}}_{n}\Vert}_{2}.$$

(41)

Let *C* be a generic constant independent of *n*. The first term on the right-hand side of (41) satisfies

$$\begin{array}{cc}\underset{j\in {A}_{1}}{\text{max}}{n}^{-1}{\Vert {\mathbf{T}}_{\mathit{\text{nj}}}{\mathbf{C}}_{{A}_{1}}^{-1}{\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}\hfill & \le {n}^{-1}{\rho}_{n1}^{-1}{\Vert {\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}\hfill \\ \hfill & ={n}^{-1/2}{\rho}_{n1}^{-1}{\Vert {n}^{-1/2}{\mathbf{Z}}_{{A}_{1}}^{\prime}{\mathit{\epsilon}}_{n}\Vert}_{2}\hfill \\ \hfill & ={O}_{p}(1){n}^{-1/2}{\rho}_{n1}^{-1}{m}_{n}^{-1/2}{({\mathit{\text{qm}}}_{n})}^{1/2}.\hfill \end{array}$$

(42)

By (33), the second term on the right-hand side of (41) satisfies

$$\max_{j\in A_1}n^{-1}\|\mathbf{T}_{nj}\mathbf{C}_{A_1}^{-1}\mathbf{Z}_{A_1}^{\prime}\delta_n\|_2\le\|\mathbf{C}_{A_1}^{-1}\|_2\cdot\|n^{-1}\mathbf{Z}_{A_1}^{\prime}\mathbf{Z}_{A_1}\|_2^{1/2}\cdot n^{-1/2}\|\delta_n\|_2=O_p(1)\,\rho_{n1}^{-1}\rho_{n2}^{1/2}q^{1/2}m_n^{-d}.$$

(43)

By Lemma 4, the third term on the right-hand side of (41) satisfies

$$\max_{j\in A_1}n^{-1}\lambda_{n2}\|\mathbf{T}_{nj}\mathbf{C}_{A_1}^{-1}\nu_n\|_2\le n^{-1}\lambda_{n2}\rho_{n1}^{-1}\|\nu_n\|_2=O_p(1)\,\rho_{n1}^{-1}n^{-1}\lambda_{n2}h_n.$$

(44)

Thus, (40) follows from (39), (42)–(44) and condition (B2a).

LEMMA 6. *Under conditions* (B1), (B2), (A3) *and* (A4),

$$\mathrm{P}({\Vert {\mathbf{Z}}_{j}^{\prime}\phantom{\rule{thinmathspace}{0ex}}(\mathbf{Y}-{\mathbf{Z}}_{{A}_{1}}{\widehat{\mathit{\beta}}}_{n{A}_{1}})\Vert}_{2}>{\lambda}_{n2}{w}_{\mathit{\text{nj}}}/2,\exists j\notin {A}_{1})\to 0.$$

(45)

PROOF. By (35), we have

$$\mathbf{Z}_j^{\prime}(\mathbf{Y}-\mathbf{Z}_{A_1}\widehat{\beta}_{nA_1})=\mathbf{Z}_j^{\prime}\mathbf{H}_n\epsilon_n+\mathbf{Z}_j^{\prime}\mathbf{H}_n\delta_n+\lambda_{n2}n^{-1}\mathbf{Z}_j^{\prime}\mathbf{Z}_{A_1}\mathbf{C}_{A_1}^{-1}\nu_n.$$

(46)

Recall *s _{n}* = *p* − *q*. For the first term on the right-hand side of (46), a maximal inequality argument gives

$$\mathrm{E}\phantom{\rule{thinmathspace}{0ex}}\left(\underset{j\notin {A}_{1}}{\text{max}}{\Vert {n}^{-1/2}{\mathbf{Z}}_{j}^{\prime}{\mathbf{H}}_{n}{\mathit{\epsilon}}_{n}\Vert}_{2}\right)\le O\phantom{\rule{thinmathspace}{0ex}}(1){\{\text{log}\phantom{\rule{thinmathspace}{0ex}}({s}_{n}{m}_{n})\}}^{1/2}.$$

(47)

Since $w_{nj}=\|\tilde{\beta}_{nj}\|_2^{-1}$ and, with probability converging to one, *w _{nj}* ≥ *Cr _{n}* for every *j* ∉ *A*_{1}, we have

$$\begin{array}{l}\mathrm{P}(\|\mathbf{Z}_j^{\prime}\mathbf{H}_n\epsilon_n\|_2>\lambda_{n2}w_{nj}/6,\ \exists j\notin A_1)\\ \quad\le \mathrm{P}(\|\mathbf{Z}_j^{\prime}\mathbf{H}_n\epsilon_n\|_2>C\lambda_{n2}r_n,\ \exists j\notin A_1)+o(1)\\ \quad= \mathrm{P}\Big(\max_{j\notin A_1}\|n^{-1/2}\mathbf{Z}_j^{\prime}\mathbf{H}_n\epsilon_n\|_2>Cn^{-1/2}\lambda_{n2}r_n\Big)+o(1)\\ \quad\le O(1)\,\frac{n^{1/2}\{\log(s_nm_n)\}^{1/2}}{C\lambda_{n2}r_n}+o(1).\end{array}$$

(48)

By (33), the second term on the right-hand side of (46) satisfies

$$\max_{j\notin A_1}\|\mathbf{Z}_j^{\prime}\mathbf{H}_n\delta_n\|_2\le n^{1/2}\max_{j\notin A_1}\|n^{-1}\mathbf{Z}_j^{\prime}\mathbf{Z}_j\|_2^{1/2}\cdot\|\mathbf{H}_n\|_2\cdot\|\delta_n\|_2=O_p(1)\,n\rho_{n3}^{1/2}q^{1/2}m_n^{-d}.$$

(49)

By Lemma 4, the third term on the right-hand side of (46) satisfies

$$\max_{j\notin A_1}\lambda_{n2}n^{-1}\|\mathbf{Z}_j^{\prime}\mathbf{Z}_{A_1}\mathbf{C}_{A_1}^{-1}\nu_n\|_2\le\lambda_{n2}\max_{j\notin A_1}\|n^{-1/2}\mathbf{Z}_j\|_2\cdot\|n^{-1/2}\mathbf{Z}_{A_1}\mathbf{C}_{A_1}^{-1/2}\|_2\cdot\|\mathbf{C}_{A_1}^{-1/2}\|_2\cdot\|\nu_n\|_2=\lambda_{n2}\rho_{n3}^{1/2}\rho_{n1}^{-1/2}O_p(qb_{n1}^{-1}).$$

(50)

Therefore, (45) follows from (39), (48), (49), (50) and condition (B2b).

PROOF OF THEOREM 4. The proof is similar to that of Theorem 2 and is omitted.

^{1}Supported in part by NIH Grant CA120988 and NSF Grant DMS-08-05670.

^{2}Supported in part by NSF Grant SES-0817552.

Jian Huang, Department of Statistics and Actuarial Science, 241 SH, University of Iowa, Iowa City, Iowa 52242, USA, Email: jian-huang@uiowa.edu.

Joel L. Horowitz, Department of Economics, Northwestern University, 2001 Sheridan Road, Evanston, Illinois 60208, USA, Email: joel-horowitz@northwestern.edu.

Fengrong Wei, Department of Mathematics, University of West Georgia, Carrollton, Georgia 30118, USA, Email: fwei@westga.edu.

- Antoniadis A, Fan J. Regularization of wavelet approximation (with discussion). J. Amer. Statist. Assoc. 2001;96:939–967. MR1946364.
- Bach FR. Consistency of the group Lasso and multiple kernel learning. J. Mach. Learn. Res. 2007;9:1179–1225. MR2417268.
- Bunea F, Tsybakov A, Wegkamp M. Sparsity oracle inequalities for the Lasso. Electron. J. Stat. 2007;1:169–194. MR2312149.
- Chen J, Chen Z. Extended Bayesian information criteria for model selection with large model space. Biometrika. 2008;95:759–771.
- Chen J, Chen Z. Extended BIC for small-*n*-large-*P* sparse GLM. 2009. Available at http://www.stat.nus.edu.sg/~stachenz/ChenChen.pdf.
- Chiang AP, Beck JS, Yen H-J, Tayeh MK, Scheetz TE, Swiderski R, Nishimura D, Braun TA, Kim K-Y, Huang J, Elbedour K, Carmi R, Slusarski DC, Casavant TL, Stone EM, Sheffield VC. Homozygosity mapping with SNP arrays identifies a novel gene for Bardet–Biedl syndrome (BBS10). Proc. Natl. Acad. Sci. USA. 2006;103:6287–6292. [PubMed]
- de Boor C. A Practical Guide to Splines. revised ed. New York: Springer; 2001. MR1900298.
- Efron B, Hastie T, Johnstone I, Tibshirani R. Least angle regression (with discussion). Ann. Statist. 2004;32:407–499. MR2060166.
- Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 2001;96:1348–1360. MR1946581.
- Fan J, Peng H. Nonconcave penalized likelihood with a diverging number of parameters. Ann. Statist. 2004;32:928–961. MR2065194.
- Frank IE, Friedman JH. A statistical view of some chemometrics regression tools (with discussion) Technometrics. 1993;35:109–148.
- Horowitz JL, Klemelä J, Mammen E. Optimal estimation in additive regression models. Bernoulli. 2006;12:271–298. MR2218556.
- Horowitz JL, Mammen E. Nonparametric estimation of an additive model with a link function. Ann. Statist. 2004;32:2412–2443.
- Huang J, Horowitz JL, Ma SG. Asymptotic properties of bridge estimators in sparse high-dimensional regression models. Ann. Statist. 2008;36:587–613. MR2396808.
- Huang J, Ma S, Zhang C-H. Adaptive Lasso for high-dimensional regression models. Statist. Sinica. 2008;18:1603–1618. MR2469326.
- Irizarry RA, Hobbs B, Collin F, Beazer-Barclay YD, Antonellis KJ, Scherf U, Speed TP. Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics. 2003;4:249–264. [PubMed]
- Lin Y, Zhang H. Component selection and smoothing in multivariate nonparametric regression. Ann. Statist. 2006;34:2272–2297. MR2291500.
- Meier L, van de Geer S, Bühlmann P. High-dimensional additive modeling. Ann. Statist. 2009;37:3779–3821. MR2572443.
- Meinshausen N, Bühlmann P. High dimensional graphs and variable selection with the Lasso. Ann. Statist. 2006;34:1436–1462. MR2278363.
- Meinshausen N, Yu B. Lasso-type recovery of sparse representations for high-dimensional data. Ann. Statist. 2009;37:246–270. MR2488351.
- Ravikumar P, Liu H, Lafferty J, Wasserman L. Sparse additive models. J. Roy. Statist. Soc. Ser. B. 2009;71:1009–1030.
- Scheetz TE, Kim K-YA, Swiderski RE, Philp AR, Braun TA, Knudtson KL, Dorrance AM, DiBona GF, Huang J, Casavant TL, Sheffield VC, Stone EM. Regulation of gene expression in the mammalian eye and its relevance to eye disease. Proc. Natl. Acad. Sci. USA. 2006;103:14429–14434. [PubMed]
- Schwarz G. Estimating the dimension of a model. Ann. Statist. 1978;6:461–464. MR0468014.
- Schumaker L. Spline Functions: Basic Theory. New York: Wiley; 1981. MR0606200.
- Shen X, Wong WH. Convergence rate of sieve estimates. Ann. Statist. 1994;22:580–615.
- Stone CJ. Additive regression and other nonparametric models. Ann. Statist. 1985;13:689–705. MR0790566.
- Stone CJ. The dimensionality reduction principle for generalized additive models. Ann. Statist. 1986;14:590–606. MR0840516.
- Tibshirani R. Regression shrinkage and selection via the Lasso. J. Roy. Statist. Soc. Ser. B. 1996;58:267–288. MR1379242.
- van de Geer S. High-dimensional generalized linear models and the Lasso. Ann. Statist. 2008;36:614–645. MR2396809.
- van der Vaart AW. Asymptotic Statistics. Cambridge: Cambridge Univ. Press; 1998.
- van der Vaart AW, Wellner JA. Weak Convergence and Empirical Processes: With Applications to Statistics. New York: Springer; 1996. MR1385671.
- Wang L, Chen G, Li H. Group SCAD regression analysis for microarray time course gene expression data. Bioinformatics. 2007;23:1486–1494. [PubMed]
- Wang H, Xia Y. Shrinkage estimation of the varying coefficient model. J. Amer. Statist. Assoc. 2009;104:747–757. MR2541592.
- Wei F, Huang J. Technical Report #387. Dept. Statistics and Actuarial Science, Univ. Iowa; 2008. Consistent group selection in high-dimensional linear regression. Available at http://www.stat.uiowa.edu/techrep/tr387.pdf.
- Yuan M, Lin Y. Model selection and estimation in regression with grouped variables. J. R. Stat. Soc. Ser. B Stat. Methodol. 2006;68:49–67. MR2212574.
- Zhang C-H. Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 2010;38:894–942.
- Zhang H, Wahba G, Lin Y, Voelker M, Ferris M, Klein R, Klein B. Variable selection and model building via likelihood basis pursuit. J. Amer. Statist. Assoc. 2004;99:659–672. MR2090901.
- Zhang C-H, Huang J. The sparsity and bias of the Lasso selection in high-dimensional linear regression. Ann. Statist. 2008;36:1567–1594. MR2435448.
- Zhang HH, Lin Y. Component selection and smoothing for nonparametric regression in exponential families. Statist. Sinica. 2006;16:1021–1041. MR2281313.
- Zhao P, Yu B. On model selection consistency of LASSO. J. Mach. Learn. Res. 2006;7:2541–2563. MR2274449.
- Zhou S, Shen X, Wolf DA. Local asymptotics for regression splines and confidence regions. Ann. Statist. 1998;26:1760–1782. MR1673277.
- Zou H. The adaptive Lasso and its oracle properties. J. Amer. Statist. Assoc. 2006;101:1418–1429. MR2279469.
