Journal of Inequalities and Applications
 
J Inequal Appl. 2017; 2017(1): 174.
Published online 2017 August 1. doi: 10.1186/s13660-017-1448-2
PMCID: PMC5539310

A new bound on the block restricted isometry constant in compressed sensing

Abstract

This paper focuses on a sufficient condition for block sparse recovery via the l_2/l_1-minimization. We show that if the measurement matrix satisfies the block restricted isometry property with δ_{2s|ℐ} < 0.6246, then every block s-sparse signal can be exactly recovered via the l_2/l_1-minimization approach in the noiseless case and stably recovered in the noisy measurement case. This result improves the bound on the block restricted isometry constant δ_{2s|ℐ} of Lin and Li (Acta Math. Sin. Engl. Ser. 29(7):1401-1412, 2013).

Keywords: compressed sensing, l_2/l_1-minimization, block sparse recovery, block restricted isometry property, null space property

Introduction

Compressed sensing [2–4] is a scheme which shows that some signals can be reconstructed from far fewer measurements than the classical Nyquist-Shannon sampling method requires. This effective sampling method has a number of potential applications in signal processing, as well as in other areas of science and technology. Its essential model is

$$\min_{x\in\mathbb{R}^N}\|x\|_0 \quad \text{s.t.}\quad y=Ax, \tag{1}$$

where ∥x∥_0 denotes the number of non-zero entries of the vector x; an s-sparse vector x ∈ ℝ^N is one satisfying ∥x∥_0 ≤ s ≪ N. However, the l_0-minimization (1) is a nonconvex, NP-hard optimization problem [5] and is thus computationally infeasible. To overcome this difficulty, the l_1-minimization was proposed [4, 6–9]:

$$\min_{x\in\mathbb{R}^N}\|x\|_1 \quad \text{s.t.}\quad y=Ax, \tag{2}$$

where $\|x\|_1=\sum_{i=1}^N |x_i|$. Candès [10] proved that the solutions to (2) are equivalent to those of (1) provided that the measurement matrix satisfies the restricted isometry property (RIP) [9, 11] with a suitable restricted isometry constant (RIC) δ_s ∈ (0, 1), where δ_s is defined as the smallest constant satisfying

$$(1-\delta_s)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta_s)\|x\|_2^2 \tag{3}$$

for all s-sparse vectors x ∈ ℝ^N.

However, standard compressed sensing only considers the sparsity of the recovered signal and does not take any further structure into account. In many practical applications, for example, DNA microarrays [12], face recognition [13], color imaging [14], image annotation [15], multi-response linear regression [16], etc., the non-zero entries of the sparse signal can be aligned or classified into blocks, which means that they appear in regions in a regular order instead of being arbitrarily spread throughout the vector. Such signals are called block sparse signals and have attracted considerable interest; see [17–23] for more information.

Suppose that x ∈ ℝ^N is split into m blocks, x[1], x[2], …, x[m], of lengths d_1, d_2, …, d_m, respectively, that is,

$$x = \big[\underbrace{x_1,\dots,x_{d_1}}_{x[1]},\ \underbrace{x_{d_1+1},\dots,x_{d_1+d_2}}_{x[2]},\ \dots,\ \underbrace{x_{N-d_m+1},\dots,x_N}_{x[m]}\big]^T, \tag{4}$$

and $N=\sum_{i=1}^m d_i$. A vector x ∈ ℝ^N is called block s-sparse over ℐ = {d_1, d_2, …, d_m} if x[i] is non-zero for at most s indices i [18]. Obviously, if d_i = 1 for each i, block sparsity reduces to the conventional definition of a sparse vector. Let

$$\|x\|_{2,0} = \sum_{i=1}^m I(\|x[i]\|_2),$$

where I(x) is the indicator function that equals 1 if x > 0 and 0 otherwise. A block s-sparse vector x can thus be defined by ∥x∥_{2,0} ≤ s, and $\|x\|_0=\sum_{i=1}^m \|x[i]\|_0$. Also, let Σ_s denote the set of all block s-sparse vectors: Σ_s = {x ∈ ℝ^N : ∥x∥_{2,0} ≤ s}.
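
To make these block norms concrete, here is a minimal NumPy sketch (our illustration, not part of the paper; the example vector and block sizes are arbitrary assumptions):

```python
# Illustrative sketch: block norms over a block structure I = {d_1, ..., d_m}.
import numpy as np

def block_norms(x, block_sizes):
    """Return the l2 norm of each block x[i] of x."""
    ends = np.cumsum(block_sizes)
    starts = ends - np.asarray(block_sizes)
    return np.array([np.linalg.norm(x[a:b]) for a, b in zip(starts, ends)])

def norm_2_0(x, block_sizes):
    """||x||_{2,0}: the number of blocks with non-zero l2 norm."""
    return int(np.count_nonzero(block_norms(x, block_sizes)))

# Example: m = 3 blocks of sizes 2, 3, 2; only the middle block is non-zero,
# so x is block 1-sparse even though ||x||_0 = 3.
x = np.array([0.0, 0.0, 1.0, -2.0, 0.5, 0.0, 0.0])
print(norm_2_0(x, [2, 3, 2]))  # -> 1
```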

To recover a block sparse signal, similarly to the standard l_0-minimization, one seeks the sparsest block sparse vector via the following l_2/l_0-minimization [13, 17, 18]:

$$\min_{x\in\mathbb{R}^N}\|x\|_{2,0} \quad \text{s.t.}\quad y=Ax. \tag{5}$$

But the l_2/l_0-minimization problem is also NP-hard. It is natural to replace it with the l_2/l_1-minimization [13, 17, 18, 24]:

$$\min_{x\in\mathbb{R}^N}\|x\|_{2,1} \quad \text{s.t.}\quad y=Ax, \tag{6}$$

where

$$\|x\|_{2,1} = \sum_{i=1}^m \|x[i]\|_2. \tag{7}$$
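
For illustration, the program (6) is a second-order cone program and can be handed to a generic convex solver. The following CVXPY sketch is our own minimal example, not code from the paper; the function name is an assumption, and the optional tolerance eps also covers the noisy variant introduced later as (9):

```python
# Hedged sketch: min ||x||_{2,1} s.t. y = Ax (eps = 0), or ||y - Ax||_2 <= eps.
import cvxpy as cp
import numpy as np

def l21_minimize(A, y, block_sizes, eps=0.0):
    x = cp.Variable(A.shape[1])
    ends = np.cumsum(block_sizes)
    starts = ends - np.asarray(block_sizes)
    # Objective: the mixed norm ||x||_{2,1} of (7), a sum of block l2 norms.
    objective = sum(cp.norm(x[a:b], 2) for a, b in zip(starts, ends))
    constraints = [A @ x == y] if eps == 0 else [cp.norm(y - A @ x, 2) <= eps]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return x.value
```

With a random Gaussian A with sufficiently many rows, this program typically recovers a block s-sparse signal exactly from y = Ax, consistent with the guarantees discussed below.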

To characterize the performance of this method, Eldar and Mishali [18] proposed the block restricted isometry property (block RIP).

Definition 1

Block RIP

Given a matrix A ∈ ℝ^{n×N}, suppose that for every block s-sparse x ∈ ℝ^N over ℐ = {d_1, d_2, …, d_m} there exists a constant 0 < δ_{s|ℐ} < 1 such that

$$(1-\delta_{s|\mathcal{I}})\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta_{s|\mathcal{I}})\|x\|_2^2, \tag{8}$$

then the matrix A is said to satisfy the s-order block RIP over ℐ, and the smallest constant δ_{s|ℐ} satisfying inequality (8) is called the block RIC of A.
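
Computing the block RIC exactly requires a search over all block supports and is intractable in general, but it can be lower-bounded empirically. The sketch below is our own heuristic diagnostic (not a method from the paper): it samples random block s-sparse vectors and records the worst observed deviation of ∥Ax∥_2^2/∥x∥_2^2 from 1.

```python
# Heuristic sketch: a Monte Carlo lower bound on the block RIC delta_{s|I}.
import numpy as np

def block_ric_lower_bound(A, block_sizes, s, trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    ends = np.cumsum(block_sizes)
    starts = ends - np.asarray(block_sizes)
    m, worst = len(block_sizes), 0.0
    for _ in range(trials):
        x = np.zeros(A.shape[1])
        for i in rng.choice(m, size=s, replace=False):  # random block support
            x[starts[i]:ends[i]] = rng.standard_normal(ends[i] - starts[i])
        ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
        worst = max(worst, abs(ratio - 1.0))  # deviation from an isometry
    return worst  # the true delta_{s|I} is at least this large
```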

Obviously, the block RIP is an extension of the standard RIP, but it is a less stringent requirement compared to the standard RIP [18, 25]. Eldar and Mishali [18] proved that the l_2/l_1-minimization can exactly recover any block s-sparse signal when the measurement matrix A satisfies the block RIP with δ_{2s|ℐ} < 0.414. The bound on the block RIC can be improved: Lin and Li [1] raised it to δ_{2s|ℐ} < 0.4931 and established another sufficient condition, δ_{s|ℐ} < 0.307, for exact recovery. So far, to the best of our knowledge, no paper has focused on further improving the bound on the block RIC. As mentioned in [1, 26, 27], as with the RIC, there are several benefits to improving the bound on δ_{2s|ℐ}. First, it allows more measurement matrices to be used in compressed sensing. Second, for the same matrix A, it allows a block sparse signal with more non-zero entries to be recovered. Furthermore, it gives a better error estimate in the general problem of recovering noisy compressible signals. Therefore, this paper addresses the improvement of the block RIC. We consider the following minimization for the inaccurate measurement y = Ax + e with ∥e∥_2 ≤ ϵ:

$$\min_{x\in\mathbb{R}^N}\|x\|_{2,1} \quad \text{s.t.}\quad \|y-Ax\|_2\le\epsilon. \tag{9}$$

Our main result is stated in the following theorem.

Theorem 1

Suppose that the 2s-order block RIC of the matrix A ∈ ℝ^{n×N} satisfies

$$\delta_{2s|\mathcal{I}} < \frac{4}{\sqrt{41}} \approx 0.6246. \tag{10}$$

If x^* is a solution to (9), then there exist positive constants C_1, D_1 and C_2, D_2 such that

$$\|x - x^*\|_{2,1} \le C_1\,\sigma_s(x)_{2,1} + D_1\sqrt{s}\,\epsilon, \tag{11}$$

$$\|x - x^*\|_2 \le \frac{C_2}{\sqrt{s}}\,\sigma_s(x)_{2,1} + D_2\,\epsilon, \tag{12}$$

where the constants C_1, D_1 and C_2, D_2 depend only on δ_{2s|ℐ} and are given by

$$C_1 = \frac{2\left(4\sqrt{1-\delta_{2s|\mathcal{I}}^2} + 3\delta_{2s|\mathcal{I}}\right)}{4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - 5\delta_{2s|\mathcal{I}}}, \tag{13}$$

$$D_1 = \frac{16\sqrt{1+\delta_{2s|\mathcal{I}}}}{4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - 5\delta_{2s|\mathcal{I}}}, \tag{14}$$

$$C_2 = \frac{2\left(4\sqrt{1-\delta_{2s|\mathcal{I}}^2} + 3\delta_{2s|\mathcal{I}}\right)^2}{\left(4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - \delta_{2s|\mathcal{I}}\right)\left(4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - 5\delta_{2s|\mathcal{I}}\right)}, \tag{15}$$

$$D_2 = \frac{8\sqrt{1+\delta_{2s|\mathcal{I}}}\left(12\sqrt{1-\delta_{2s|\mathcal{I}}^2} + \delta_{2s|\mathcal{I}}\right)}{\left(4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - \delta_{2s|\mathcal{I}}\right)\left(4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - 5\delta_{2s|\mathcal{I}}\right)}, \tag{16}$$

and σ_s(x)_{2,1} denotes the best block s-term approximation error of x ∈ ℝ^N in the l_2/l_1 norm, i.e.,

$$\sigma_s(x)_{2,1} := \inf_{z\in\Sigma_s}\|x-z\|_{2,1}. \tag{17}$$
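
To get a feel for the magnitudes involved, the sketch below (our own transcription of the displayed formulas (13)-(16); the test value delta = 0.5 is an arbitrary assumption) evaluates the constants of Theorem 1:

```python
# Sketch: evaluate the constants (13)-(16) as functions of delta = delta_{2s|I}.
import numpy as np

def theorem1_constants(delta):
    assert 0 < delta < 4 / np.sqrt(41), "Theorem 1 requires delta < 4/sqrt(41)"
    r = np.sqrt(1 - delta ** 2)  # shorthand for sqrt(1 - delta_{2s|I}^2)
    C1 = 2 * (4 * r + 3 * delta) / (4 * r - 5 * delta)
    D1 = 16 * np.sqrt(1 + delta) / (4 * r - 5 * delta)
    C2 = 2 * (4 * r + 3 * delta) ** 2 / ((4 * r - delta) * (4 * r - 5 * delta))
    D2 = (8 * np.sqrt(1 + delta) * (12 * r + delta)
          / ((4 * r - delta) * (4 * r - 5 * delta)))
    return C1, D1, C2, D2

print(theorem1_constants(0.5))  # the constants blow up as delta -> 4/sqrt(41)
```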

Corollary 1

Under the same assumptions as in Theorem 1, if e = 0 and x is block s-sparse, then x can be exactly recovered via the l_2/l_1-minimization (6).

The remainder of the paper is organized as follows. In Section 2, we introduce the l_{2,1} robust block NSP, which characterizes the stability and robustness of the l_2/l_1-minimization with noisy measurement (9). In Section 3, we show that condition (10) implies the l_{2,1} robust block NSP, which completes the proof of our main result. Section 4 contains our conclusions. The last section is an appendix including an important lemma.

Block null space property

The null space property (NSP) is a very important concept in approximation theory [28, 29]: it provides a necessary and sufficient condition for the existence and uniqueness of the solution to the l_1-minimization (2), and it has therefore drawn extensive attention as a tool for characterizing measurement matrices in compressed sensing [30]. It is natural to extend the classic NSP to the block sparse case. For this purpose, we introduce some notation. Suppose that x ∈ ℝ^N is an m-block signal whose structure is like (4). For a set S ⊂ {1, 2, …, m}, we denote by S^C the complement of S with respect to {1, 2, …, m}, i.e., S^C = {1, 2, …, m}∖S. Let x_S denote the vector that equals x on the block index set S and is zero elsewhere; then x = x_S + x_{S^C}. Here, to investigate the solution to the model (9), we introduce the l_{2,1} robust block NSP; for more information on other forms of the block NSP, we refer the reader to [23, 31].

Definition 2

l_{2,1} robust block NSP

Given a matrix A ∈ ℝ^{n×N}, suppose there exist constants 0 < τ < 1 and γ > 0 such that, for any set S ⊂ {1, 2, …, m} with card(S) ≤ s and for all v ∈ ℝ^N,

$$\|v_S\|_2 \le \frac{\tau}{\sqrt{s}}\|v_{S^C}\|_{2,1} + \gamma\|Av\|_2, \tag{18}$$

then the matrix A is said to satisfy the l_{2,1} robust block NSP of order s with constants τ and γ.
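
The NSP of Definition 2 quantifies over all v and all small block supports, so it cannot be certified by testing finitely many vectors; still, a single (v, S) pair can falsify a candidate (τ, γ). The checker below is our own illustration under that caveat; all inputs are assumptions of the example:

```python
# Sketch: test inequality (18) for one vector v and one block index set S.
import numpy as np

def nsp_inequality_holds(A, v, S, block_sizes, tau, gamma):
    ends = np.cumsum(block_sizes)
    starts = ends - np.asarray(block_sizes)
    v_S = np.zeros_like(v)
    for i in S:  # v_S keeps the blocks indexed by S, zero elsewhere
        v_S[starts[i]:ends[i]] = v[starts[i]:ends[i]]
    v_Sc = v - v_S  # the complementary part v_{S^C}
    lhs = np.linalg.norm(v_S)
    norm21_Sc = sum(np.linalg.norm(v_Sc[a:b]) for a, b in zip(starts, ends))
    rhs = tau / np.sqrt(len(S)) * norm21_Sc + gamma * np.linalg.norm(A @ v)
    return lhs <= rhs
```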

Our main result relies heavily on this definition. A natural question concerns the relationship between this robust block NSP and the block RIP. Indeed, in the next section we shall see that the block RIP with condition (10) implies the l_{2,1} robust block NSP; in this sense, the l_{2,1} robust block NSP is a weaker assumption than the block RIP. The first consequence of this definition is the following theorem.

Theorem 2

Suppose that the matrix A ∈ ℝ^{n×N} satisfies the l_{2,1} robust block NSP of order s with constants 0 < τ < 1 and γ > 0. Then, for any set S ⊂ {1, 2, …, m} with card(S) ≤ s and all vectors x, z ∈ ℝ^N,

$$\|x-z\|_{2,1} \le \frac{1+\tau}{1-\tau}\left(\|z\|_{2,1} - \|x\|_{2,1} + 2\|x_{S^C}\|_{2,1}\right) + \frac{2\gamma\sqrt{s}}{1-\tau}\|A(x-z)\|_2. \tag{19}$$

Proof

For x, z ∈ ℝ^N, setting v = x − z, we have

$$\|v_{S^C}\|_{2,1} \le \|x_{S^C}\|_{2,1} + \|z_{S^C}\|_{2,1},$$

$$\|x\|_{2,1} = \|x_{S^C}\|_{2,1} + \|x_S\|_{2,1} \le \|x_{S^C}\|_{2,1} + \|v_S\|_{2,1} + \|z_S\|_{2,1},$$

which yield

$$\|v_{S^C}\|_{2,1} \le 2\|x_{S^C}\|_{2,1} + \|z\|_{2,1} - \|x\|_{2,1} + \|v_S\|_{2,1}. \tag{20}$$

Clearly, for an m-block vector x ∈ ℝ^N of the form (4), the l_2-norm ∥x∥_2 can be rewritten as

$$\|x\|_2 = \|x\|_{2,2} = \left(\sum_{i=1}^m \|x[i]\|_2^2\right)^{1/2}. \tag{21}$$

Thus, we have ∥v_S∥_{2,2} = ∥v_S∥_2 and ∥v_S∥_{2,1} ≤ √s ∥v_S∥_{2,2}. So the l_{2,1} robust block NSP implies

$$\|v_S\|_{2,1} \le \tau\|v_{S^C}\|_{2,1} + \gamma\sqrt{s}\,\|Av\|_2. \tag{22}$$

Combining (20) with (22), we can get

$$\|v_{S^C}\|_{2,1} \le \frac{1}{1-\tau}\left(2\|x_{S^C}\|_{2,1} + \|z\|_{2,1} - \|x\|_{2,1}\right) + \frac{\gamma\sqrt{s}}{1-\tau}\|Av\|_2.$$

Using (22) once again, we derive

$$\|v\|_{2,1} = \|v_{S^C}\|_{2,1} + \|v_S\|_{2,1} \le (1+\tau)\|v_{S^C}\|_{2,1} + \gamma\sqrt{s}\,\|Av\|_2 \le \frac{1+\tau}{1-\tau}\left(2\|x_{S^C}\|_{2,1} + \|z\|_{2,1} - \|x\|_{2,1}\right) + \frac{2\gamma\sqrt{s}}{1-\tau}\|Av\|_2,$$

which is the desired inequality.

The l_{2,1} robust block NSP is vital for characterizing the stability and robustness of the l_2/l_1-minimization with noisy measurement (9), as the following result shows.

Theorem 3

Suppose that the matrix A ∈ ℝ^{n×N} satisfies the l_{2,1} robust block NSP of order s with constants 0 < τ < 1 and γ > 0. If x^* is a solution to the l_2/l_1-minimization (9), where y = Ax + e and ∥e∥_2 ≤ ϵ, then there exist positive constants C_3, D_3 and C_4, D_4 such that

$$\|x-x^*\|_{2,1} \le C_3\,\sigma_s(x)_{2,1} + D_3\sqrt{s}\,\epsilon, \tag{23}$$

$$\|x-x^*\|_2 \le \frac{C_4}{\sqrt{s}}\,\sigma_s(x)_{2,1} + D_4\,\epsilon, \tag{24}$$

where

$$C_3 = \frac{2(1+\tau)}{1-\tau}, \qquad D_3 = \frac{4\gamma}{1-\tau}, \tag{25}$$

$$C_4 = \frac{2(1+\tau)^2}{1-\tau}, \qquad D_4 = \frac{2\gamma(3+\tau)}{1-\tau}. \tag{26}$$

Proof

In Theorem 2, let S denote an index set of the s largest (in l_2-norm) blocks of x, and take z = x^*; since x^* is a minimizer of (9) and x is feasible for it, ∥x^*∥_{2,1} ≤ ∥x∥_{2,1}. Then (23) is a direct corollary of Theorem 2 once we notice that ∥x_{S^C}∥_{2,1} = σ_s(x)_{2,1} and ∥A(x − x^*)∥_2 ≤ 2ϵ. Inequality (24) is the case q = 1 of Theorem 7 in [23].

Proof of the main result

From Theorem 3, we see that the inequalities (23) and (24) coincide with (11) and (12) up to constants. Hence, to prove our main result it suffices to show that condition (10) implies the l_{2,1} robust block NSP.

Theorem 4

Suppose that the 2s-order block RIC of the matrix A ∈ ℝ^{n×N} obeys (10). Then the matrix A satisfies the l_{2,1} robust block NSP of order s with constants 0 < τ < 1 and γ > 0, where

$$\tau = \frac{4\delta_{2s|\mathcal{I}}}{4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - \delta_{2s|\mathcal{I}}}, \qquad \gamma = \frac{4\sqrt{1+\delta_{2s|\mathcal{I}}}}{4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - \delta_{2s|\mathcal{I}}}. \tag{27}$$

Proof

The proof relies on a technique introduced in [30]. Suppose that the matrix A has the block RIP with constant δ_{2s|ℐ}. Let v be divided into m blocks whose structure is like (38). Let S := S_0 be an index set of the s largest (in l_2-norm) blocks of v. We then partition S^C into subsets of size s: S_1 indexes the s largest (in l_2-norm) blocks in S^C, S_2 the next s largest, and so on. Since the vector v_S is block s-sparse, according to the block RIP we can write, for some |t| ≤ δ_{2s|ℐ},

$$\|Av_S\|_2^2 = (1+t)\|v_S\|_2^2. \tag{28}$$

We are going to establish that, for any j ≥ 1,

$$\left|\langle Av_S, Av_{S_j}\rangle\right| \le \sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}\,\|v_S\|_2\|v_{S_j}\|_2. \tag{29}$$

To do so, we normalize the vectors v_S and v_{S_j} by setting u := v_S/∥v_S∥_2 and w := v_{S_j}/∥v_{S_j}∥_2. Then, for α, β > 0, we write

$$2\langle Au, Aw\rangle = \frac{1}{\alpha+\beta}\left[\|A(\alpha u + w)\|_2^2 - \|A(\beta u - w)\|_2^2 - (\alpha^2-\beta^2)\|Au\|_2^2\right]. \tag{30}$$

By the block RIP, on the one hand, we have

$$2\langle Au, Aw\rangle \le \frac{1}{\alpha+\beta}\left[(1+\delta_{2s|\mathcal{I}})\|\alpha u + w\|_2^2 - (1-\delta_{2s|\mathcal{I}})\|\beta u - w\|_2^2\right] - \frac{1}{\alpha+\beta}(\alpha^2-\beta^2)(1+t)\|u\|_2^2 = \frac{1}{\alpha+\beta}\left[\alpha^2(\delta_{2s|\mathcal{I}}-t) + \beta^2(\delta_{2s|\mathcal{I}}+t) + 2\delta_{2s|\mathcal{I}}\right].$$

Here we used ∥u∥_2 = ∥w∥_2 = 1 and the fact that u and w have disjoint block supports, so that ∥αu + w∥_2^2 = α^2 + 1 and ∥βu − w∥_2^2 = β^2 + 1. Making the choice $\alpha = \frac{\delta_{2s|\mathcal{I}}+t}{\sqrt{\delta_{2s|\mathcal{I}}^2-t^2}}$ and $\beta = \frac{\delta_{2s|\mathcal{I}}-t}{\sqrt{\delta_{2s|\mathcal{I}}^2-t^2}}$, we derive

$$\langle Au, Aw\rangle \le \sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}. \tag{31}$$

On the other hand, we also have

$$2\langle Au, Aw\rangle \ge \frac{1}{\alpha+\beta}\left[(1-\delta_{2s|\mathcal{I}})\|\alpha u + w\|_2^2 - (1+\delta_{2s|\mathcal{I}})\|\beta u - w\|_2^2\right] - \frac{1}{\alpha+\beta}(\alpha^2-\beta^2)(1+t)\|u\|_2^2 = -\frac{1}{\alpha+\beta}\left[\alpha^2(\delta_{2s|\mathcal{I}}+t) + \beta^2(\delta_{2s|\mathcal{I}}-t) + 2\delta_{2s|\mathcal{I}}\right].$$

Making the choice $\alpha = \frac{\delta_{2s|\mathcal{I}}-t}{\sqrt{\delta_{2s|\mathcal{I}}^2-t^2}}$ and $\beta = \frac{\delta_{2s|\mathcal{I}}+t}{\sqrt{\delta_{2s|\mathcal{I}}^2-t^2}}$, we get

$$\langle Au, Aw\rangle \ge -\sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}. \tag{32}$$

Combining (31) with (32) yields the desired inequality (29). Next, noticing that Av_S = A(v − ∑_{j≥1} v_{S_j}), we have

$$\|Av_S\|_2^2 = \langle Av_S, Av\rangle - \sum_{j\ge1}\langle Av_S, Av_{S_j}\rangle \le \|Av_S\|_2\|Av\|_2 + \sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}\,\|v_S\|_2\sum_{j\ge1}\|v_{S_j}\|_2 = \|v_S\|_2\left(\sqrt{1+t}\,\|Av\|_2 + \sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}\sum_{j\ge1}\|v_{S_j}\|_2\right). \tag{33}$$

Applying Lemma A.1 to each v_{S_j} (with s in place of m) and using the construction of the sets S_j, we have

$$\sum_{j\ge1}\|v_{S_j}\|_{2,2} \le \sum_{j\ge1}\left[\frac{1}{\sqrt{s}}\|v_{S_j}\|_{2,1} + \frac{\sqrt{s}}{4}\left(\|v_{S_j}[1]\|_2 - \|v_{S_j}[s]\|_2\right)\right] \le \frac{1}{\sqrt{s}}\|v_{S^C}\|_{2,1} + \frac{\sqrt{s}}{4}\|v_{S_1}[1]\|_2 \le \frac{1}{\sqrt{s}}\|v_{S^C}\|_{2,1} + \frac{1}{4}\|v_S\|_2, \tag{34}$$

where v_{S_j}[1] and v_{S_j}[s] denote the blocks of v_{S_j} with the largest and the smallest l_2-norm, respectively; the last step uses √s ∥v_{S_1}[1]∥_2 ≤ ∥v_S∥_2.

Substituting (34) into (33) and recalling (28), we obtain

$$(1+t)\|v_S\|_2 \le \sqrt{1+t}\,\|Av\|_2 + \frac{\sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}}{\sqrt{s}}\|v_{S^C}\|_{2,1} + \frac{\sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}}{4}\|v_S\|_2,$$

that is,

$$\|v_S\|_2 \le \frac{\sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}}{\sqrt{s}\,(1+t)}\|v_{S^C}\|_{2,1} + \frac{\sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}}{4(1+t)}\|v_S\|_2 + \frac{1}{\sqrt{1+t}}\|Av\|_2.$$

Let

$$f(t) = \frac{\sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}}{1+t}, \qquad |t| \le \delta_{2s|\mathcal{I}}, \tag{35}$$

then it is not difficult to verify that f attains its maximum over the closed interval [−δ_{2s|ℐ}, δ_{2s|ℐ}] at the point t = −δ_{2s|ℐ}^2, so for |t| ≤ δ_{2s|ℐ} we have

$$f(t) = \frac{\sqrt{\delta_{2s|\mathcal{I}}^2 - t^2}}{1+t} \le \frac{\delta_{2s|\mathcal{I}}}{\sqrt{1-\delta_{2s|\mathcal{I}}^2}}. \tag{36}$$

Therefore,

$$\|v_S\|_2 \le \frac{\delta_{2s|\mathcal{I}}}{\sqrt{1-\delta_{2s|\mathcal{I}}^2}}\cdot\frac{1}{\sqrt{s}}\|v_{S^C}\|_{2,1} + \frac{\delta_{2s|\mathcal{I}}}{4\sqrt{1-\delta_{2s|\mathcal{I}}^2}}\|v_S\|_2 + \frac{1}{\sqrt{1-\delta_{2s|\mathcal{I}}}}\|Av\|_2,$$

that is,

$$\|v_S\|_2 \le \frac{4\delta_{2s|\mathcal{I}}}{4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - \delta_{2s|\mathcal{I}}}\cdot\frac{1}{\sqrt{s}}\|v_{S^C}\|_{2,1} + \frac{4\sqrt{1+\delta_{2s|\mathcal{I}}}}{4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - \delta_{2s|\mathcal{I}}}\|Av\|_2.$$

Here, we require

$$4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - \delta_{2s|\mathcal{I}} > 0, \qquad \frac{4\delta_{2s|\mathcal{I}}}{4\sqrt{1-\delta_{2s|\mathcal{I}}^2} - \delta_{2s|\mathcal{I}}} < 1, \tag{37}$$

which implies $\delta_{2s|\mathcal{I}}^2 < \frac{16}{41}$, that is, $\delta_{2s|\mathcal{I}} < \frac{4}{\sqrt{41}} \approx 0.6246$.
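
The two elementary facts used in this proof, the location of the maximum of f in (35)-(36) and the equivalence of (37) with δ_{2s|ℐ} < 4/√41, are easy to double-check numerically. The following sketch is our own verification aid, not part of the proof; the test value delta = 0.5 is an assumption:

```python
# Numeric spot-check of two steps in the proof of Theorem 4.
import numpy as np

delta = 0.5  # an arbitrary test value with 0 < delta < 1
t = np.linspace(-delta, delta, 200_001)
f = np.sqrt(delta**2 - t**2) / (1 + t)

# The maximum of f on [-delta, delta] is at t = -delta^2 ...
print(t[np.argmax(f)], -delta**2)        # ~ -0.25 vs -0.25
# ... with value delta / sqrt(1 - delta^2), as claimed in (36).
print(f.max(), delta / np.sqrt(1 - delta**2))

# Condition (37) holds exactly when delta < 4/sqrt(41) ~ 0.6246:
for d in (0.62, 4 / np.sqrt(41) - 1e-9, 0.63):
    print(d, 4 * d / (4 * np.sqrt(1 - d**2) - d) < 1)  # True, True, False
```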

Remark 1

Substituting (27) into (25) and (26), we can obtain the constants in Theorem 1.

Remark 2

Our result improves that of [1]: the bound on the block RIC δ_{2s|ℐ} is improved from 0.4931 to 0.6246.

Conclusions

In this paper, we gave a new bound on the block RIC, δ_{2s|ℐ} < 0.6246, under which every block s-sparse signal can be exactly recovered via the l_2/l_1-minimization approach in the noiseless case and stably recovered in the noisy measurement case. This result improves the bound on the block RIC δ_{2s|ℐ} given in [1].

Acknowledgements

This work was supported by the Science Research Project of Ningxia Higher Education Institutions of China (NGY2015152), National Natural Science Foundation of China (NSFC) (11561055), and Science Research Project of Beifang University of Nationalities (2016SXKY07).

Appendix

Lemma A.1

Suppose that v ∈ ℝ^N is split into m blocks, v[1], v[2], …, v[m], of lengths d_1, d_2, …, d_m, respectively, that is,

$$v = \big[\underbrace{v_1,\dots,v_{d_1}}_{v[1]},\ \underbrace{v_{d_1+1},\dots,v_{d_1+d_2}}_{v[2]},\ \dots,\ \underbrace{v_{N-d_m+1},\dots,v_N}_{v[m]}\big]^T. \tag{38}$$

Suppose further that the m blocks of v are rearranged in nonincreasing order, so that

$$\|v[1]\|_2 \ge \|v[2]\|_2 \ge \cdots \ge \|v[m]\|_2 \ge 0.$$

Then

$$\sqrt{\|v[1]\|_2^2 + \cdots + \|v[m]\|_2^2} \le \frac{\|v[1]\|_2 + \cdots + \|v[m]\|_2}{\sqrt{m}} + \frac{\sqrt{m}}{4}\left(\|v[1]\|_2 - \|v[m]\|_2\right). \tag{39}$$

Proof

See Lemma 6.14 in [30] for the details.
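
As a quick numeric sanity check of (39) (our own experiment, not part of the original argument), one can test the inequality on random nonincreasing sequences standing in for the block norms ∥v[i]∥_2:

```python
# Sketch: spot-check inequality (39) on random nonincreasing sequences.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    m = int(rng.integers(1, 20))
    a = np.sort(rng.random(m))[::-1]  # a[i] plays the role of ||v[i+1]||_2
    lhs = np.sqrt(np.sum(a**2))
    rhs = a.sum() / np.sqrt(m) + np.sqrt(m) / 4 * (a[0] - a[-1])
    assert lhs <= rhs + 1e-12
print("inequality (39) held in all trials")
```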

Footnotes

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The two authors contributed equally to this work, and they read and approved the final manuscript.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Lin J, Li S. Block sparse recovery via mixed l2/l1 minimization. Acta Math. Sin. Engl. Ser. 2013;29(7):1401–1412. doi: 10.1007/s10114-013-1564-y.
2. Donoho D. Compressed sensing. IEEE Trans. Inf. Theory. 2006;52(4):1289–1306. doi: 10.1109/TIT.2006.871582.
3. Candès E, Romberg J, Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 2006;52(2):489–509. doi: 10.1109/TIT.2005.862083.
4. Candès E, Tao T. Near-optimal signal recovery from random projections: universal encoding strategies. IEEE Trans. Inf. Theory. 2006;52(12):5406–5425. doi: 10.1109/TIT.2006.885507.
5. Natarajan BK. Sparse approximate solutions to linear systems. SIAM J. Comput. 1995;24:227–234. doi: 10.1137/S0097539792240406.
6. Donoho D, Huo X. Uncertainty principles and ideal atomic decompositions. IEEE Trans. Inf. Theory. 2001;47(4):2845–2862. doi: 10.1109/18.959265.
7. Elad M, Bruckstein A. A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inf. Theory. 2002;48(9):2558–2567. doi: 10.1109/TIT.2002.801410.
8. Gribonval R, Nielsen M. Sparse representations in unions of bases. IEEE Trans. Inf. Theory. 2003;49(5):3320–3325. doi: 10.1109/TIT.2003.820031.
9. Candès E, Tao T. Decoding by linear programming. IEEE Trans. Inf. Theory. 2005;51(12):4203–4215. doi: 10.1109/TIT.2005.858979.
10. Candès E. The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, Sér. I. 2008;346:589–592. doi: 10.1016/j.crma.2008.03.014.
11. Baraniuk R, Davenport M, DeVore R, Wakin M. A simple proof of the restricted isometry property for random matrices. Constr. Approx. 2008;28:253–263. doi: 10.1007/s00365-007-9003-x.
12. Parvaresh F, Vikalo H, Misra S, Hassibi B. Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2008;2(3):275–285. doi: 10.1109/JSTSP.2008.924384.
13. Elhamifar E, Vidal R. Block-sparse recovery via convex optimization. IEEE Trans. Signal Process. 2012;60(8):4094–4107. doi: 10.1109/TSP.2012.2196694.
14. Majumdar A, Ward R. Compressed sensing of color images. Signal Process. 2010;90:3122–3127. doi: 10.1016/j.sigpro.2010.05.016.
15. Huang J, Huang X, Metaxas D. Learning with dynamic group sparsity. In: IEEE 12th International Conference on Computer Vision; 2009. pp. 64–71.
16. Simila T, Tikka J. Input selection and shrinkage in multiresponse linear regression. Comput. Stat. Data Anal. 2007;52:406–422. doi: 10.1016/j.csda.2007.01.025.
17. Eldar Y, Kuppinger P, Bolcskei H. Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 2010;58(6):3042–3054. doi: 10.1109/TSP.2010.2044837.
18. Eldar Y, Mishali M. Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory. 2009;55(11):5302–5316. doi: 10.1109/TIT.2009.2030471.
19. Fu Y, Li H, Zhang Q, Zou J. Block-sparse recovery via redundant block OMP. Signal Process. 2014;97:162–171. doi: 10.1016/j.sigpro.2013.10.030.
20. Afdideh F, Phlypo R, Jutten C. Recovery guarantees for mixed norm l_{p1,p2} block sparse representations. In: 24th European Signal Processing Conference (EUSIPCO); 2016. pp. 378–382.
21. Wen J, Zhou Z, Liu Z, Lai M-J, Tang X. Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit (2016). arXiv:1605.02894v1.
22. Karanam S, Li Y, Radke RJ. Person re-identification with block sparse recovery. Image Vis. Comput. 2017.
23. Gao Y, Peng J, Yue S. Stability and robustness of the l2/lq-minimization for block sparse recovery. Signal Process. 2017;137:287–297. doi: 10.1016/j.sigpro.2017.02.012.
24. Stojnic M, Parvaresh F, Hassibi B. On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. Signal Process. 2009;57(8):3075–3085. doi: 10.1109/TSP.2009.2020754.
25. Baraniuk R, Cevher V, Duarte M, Hegde C. Model-based compressive sensing. IEEE Trans. Inf. Theory. 2010;56(4):1982–2001. doi: 10.1109/TIT.2010.2040894.
26. Mo Q, Li S. New bounds on the restricted isometry constant δ_{2k}. Appl. Comput. Harmon. Anal. 2011;31:460–468. doi: 10.1016/j.acha.2011.04.005.
27. Lin J, Li S, Shen Y. New bounds for restricted isometry constants with coherent tight frames. IEEE Trans. Signal Process. 2013;61(3):611–621. doi: 10.1109/TSP.2012.2226171.
28. Cohen A, Dahmen W, DeVore R. Compressed sensing and best k-term approximation. J. Am. Math. Soc. 2009;22(1):211–231. doi: 10.1090/S0894-0347-08-00610-3.
29. Pinkus A. On L1-Approximation. Cambridge: Cambridge University Press; 1989.
30. Foucart S, Rauhut H. A Mathematical Introduction to Compressive Sensing. New York: Springer; 2013.
31. Gao Y, Peng J, Yue S, Zhao Y. On the null space property of lq-minimization for 0 < q ≤ 1 in compressed sensing. J. Funct. Spaces. 2015;2015.
