Journal of Inequalities and Applications
 
J Inequal Appl. 2017; 2017(1): 52.
Published online 2017 February 28. doi:  10.1186/s13660-017-1326-y
PMCID: PMC5331114

Strong convergence of an extragradient-type algorithm for the multiple-sets split equality problem

Abstract

This paper introduces a new extragradient-type method for solving the multiple-sets split equality problem (MSSEP). Under suitable conditions, strong convergence of the algorithm is established in infinite-dimensional Hilbert spaces. Moreover, several numerical results are given to show the effectiveness of the algorithm.

Keywords: strong convergence, extragradient-type, multiple-sets split equality problem

Introduction

The split feasibility problem (SFP) was first presented by Censor and Elfving [1]; it is an inverse problem that arises in medical image reconstruction, phase retrieval, radiation therapy treatment planning, signal processing, etc. The SFP can be mathematically characterized as finding a point x that satisfies the property

x ∈ C such that Ax ∈ Q,
1.1

if such a point exists, where C and Q are nonempty closed convex subsets of Hilbert spaces H1 and H2, respectively, and A:H1 → H2 is a bounded and linear operator.

Various algorithms have been proposed to solve the SFP; see [2–4] and the references therein. In particular, Byrne [5, 6] introduced the CQ-algorithm, motivated by iterative schemes from fixed point theory. Moreover, in 2005 Censor et al. [7] extended the SFP by replacing the convex set C with an intersection of a family of closed convex sets, which is the origin of the multiple-sets split feasibility problem (MSSFP).
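To make the CQ-algorithm concrete, the sketch below is a minimal Python illustration (not the authors' code): C and Q are boxes, so both projections are coordinatewise clippings, and the matrix, sets and step size are our own toy choices.

```python
import numpy as np

# Byrne's CQ-iteration: x_{k+1} = P_C( x_k - gamma * A^T (A x_k - P_Q(A x_k)) ),
# with 0 < gamma < 2 / rho(A^T A).  Here C = [0,2]^2 and Q = [0,1]^2 are boxes,
# so both projections are coordinatewise clippings.

def cq_algorithm(A, x0, iters=500):
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size, < 2/rho(A^T A)
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - np.clip(Ax, 0.0, 1.0))   # A^T (A x - P_Q(A x))
        x = np.clip(x - gamma * grad, 0.0, 2.0)     # P_C(...)
    return x

A = np.array([[1.0, 2.0], [0.0, 1.0]])
x = cq_algorithm(A, np.array([5.0, 5.0]))
# x approximately satisfies x in C and A x in Q
```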

Subsequently, an important extension, known as the split equality problem (SEP), was made by Moudafi [8]. It can be mathematically characterized as finding points x ∈ C and y ∈ Q that satisfy the property

Ax = By,
1.2

if such points exist, where C and Q are nonempty closed convex subsets of Hilbert spaces H1 and H2, respectively, H3 is also a Hilbert space, and A:H1 → H3, B:H2 → H3 are two bounded linear operators. When B = I, the SEP reduces to the SFP. For more information about methods for solving the SEP, see [9, 10].

This paper considers the multiple-sets split equality problem (MSSEP) which generalizes the MSSFP and SEP and can be mathematically characterized by finding points x and y that satisfy the property

$$x\in\bigcap_{i=1}^{t}C_i \quad\text{and}\quad y\in\bigcap_{j=1}^{r}Q_j \quad\text{such that } Ax=By,$$
1.3

where r, t are positive integers, {Ci}, i = 1, …, t, and {Qj}, j = 1, …, r, are nonempty closed convex subsets of the Hilbert spaces H1 and H2, respectively, H3 is also a Hilbert space, and A:H1 → H3, B:H2 → H3 are two bounded linear operators. Obviously, if B = I, the MSSEP is exactly the MSSFP; if t = r = 1, the MSSEP reduces to the SEP; and when both B = I and t = r = 1, the MSSEP reduces to the SFP.

One of the most important methods for computing the solution of a variational inequality, with fast convergence, is the extragradient algorithm, first introduced by Korpelevich [11]. This method was later applied to finding a common element of the solution set of a variational inequality and the fixed point set of a nonexpansive mapping; see Nadezhkina and Takahashi [12]. Subsequently, Ceng et al. [13] presented an extragradient method to solve the split feasibility and fixed point problems, and Yao et al. [14] proposed a variant extragradient-type method for monotone variational inequalities. However, all these methods enjoy only weak convergence in Hilbert spaces. On the other hand, the subgradient extragradient methods introduced by Censor et al. [15, 16] possess strong convergence for solving variational inequalities.

Motivated and inspired by the above works, in this paper we introduce an extragradient-type method to solve the MSSEP. Under suitable conditions, strong convergence of the algorithm is established in infinite-dimensional Hilbert spaces. Finally, several numerical results are given to show the feasibility of our algorithm.

Preliminaries

Let H be a real Hilbert space whose inner product and norm are denoted by 〈 ⋅ ,  ⋅ 〉 and  ∥  ⋅  ∥ , respectively. Let I denote the identity operator on H.

Next, we recall several definitions and basic results that will be used later.

Definition 2.1

A mapping T:H → H is said to be

  • (i) nonexpansive if
     ∥Tx − Ty∥ ≤ ∥x − y∥,  ∀x, y ∈ H;
  • (ii) firmly nonexpansive if
     ∥Tx − Ty∥² ≤ 〈x − y, Tx − Ty〉,  ∀x, y ∈ H;
  • (iii) contractive if there exists 0 < α < 1 such that
     ∥Tx − Ty∥ ≤ α∥x − y∥,  ∀x, y ∈ H;
  • (iv) monotone if
    〈Tx − Ty, x − y〉 ≥ 0,  ∀x, y ∈ H;
  • (v) β-inverse strongly monotone if there exists β > 0 such that
    〈Tx − Ty, x − y〉 ≥ β∥Tx − Ty∥²,  ∀x, y ∈ H.

The following properties of an orthogonal projection operator were introduced by Bauschke et al. in [17], and they will be powerful tools in our analysis.

Proposition 2.2

[17]

Let C be a nonempty closed convex subset of H. The mapping P_C: H → C defined by

$$P_C(x)=\arg\min_{y\in C}\|x-y\|,\quad x\in H,$$

is called the orthogonal projection from H onto C. Furthermore, for any x, y ∈ H and z ∈ C,

  • (i) 〈x − P_Cx, z − P_Cx〉 ≤ 0;
  • (ii) ∥P_Cx − P_Cy∥² ≤ 〈P_Cx − P_Cy, x − y〉;
  • (iii) ∥P_Cx − z∥² ≤ ∥x − z∥² − ∥P_Cx − x∥².
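The three projection inequalities can be checked numerically. The sketch below (our own illustration) uses the closed unit ball, whose projection has a simple closed form.

```python
import numpy as np

# Numerical check of Proposition 2.2(i)-(iii) for C the closed unit ball,
# whose projection has the closed form P_C(x) = x / max(1, ||x||).
# (Our own illustration; any closed convex set would do.)

def proj_ball(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
z = proj_ball(rng.normal(size=3))              # an arbitrary point of C
px, py = proj_ball(x), proj_ball(y)

i_holds = np.dot(x - px, z - px) <= 1e-12                                   # (i)
ii_holds = np.linalg.norm(px - py) ** 2 <= np.dot(px - py, x - y) + 1e-12   # (ii)
iii_holds = (np.linalg.norm(px - z) ** 2
             <= np.linalg.norm(x - z) ** 2 - np.linalg.norm(px - x) ** 2 + 1e-12)  # (iii)
```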

The following lemmas will be the main mathematical tools in the sequel.

Lemma 2.3

[18]

Let C be a nonempty closed convex subset of a real Hilbert space H, let T:C → H be α-inverse strongly monotone, and let r > 0 be a constant. Then, for any xy ∈ C,

$$\|(I-rT)x-(I-rT)y\|^2 \le \|x-y\|^2 + r(r-2\alpha)\|T(x)-T(y)\|^2.$$

Moreover, when 0 < r < 2α, I − rT is nonexpansive.
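A quick numerical sanity check of Lemma 2.3 (our own illustration): for T(x) = Mx with M symmetric positive semidefinite, T is α-inverse strongly monotone with α = 1/λ_max(M).

```python
import numpy as np

# Check of Lemma 2.3 for T(x) = Mx with M symmetric positive semidefinite:
# such a T is alpha-inverse strongly monotone with alpha = 1 / lambda_max(M),
# and I - rT is nonexpansive whenever 0 < r < 2*alpha.
M = np.array([[2.0, 1.0], [1.0, 2.0]])      # eigenvalues 1 and 3
alpha = 1.0 / 3.0                           # 1 / lambda_max(M)
r = 0.5                                     # 0 < r < 2*alpha = 2/3

rng = np.random.default_rng(1)
x, y = rng.normal(size=2), rng.normal(size=2)
d = x - y

lhs = np.linalg.norm(d - r * M @ d) ** 2                     # ||(I-rT)x - (I-rT)y||^2
rhs = np.linalg.norm(d) ** 2 + r * (r - 2 * alpha) * np.linalg.norm(M @ d) ** 2
lemma_holds = lhs <= rhs + 1e-12
nonexpansive = np.linalg.norm(d - r * M @ d) <= np.linalg.norm(d) + 1e-12
```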

Lemma 2.4

[19]

Let {xk} and {yk} be bounded sequences in a Hilbert space H, and let {βk} be a sequence in [0, 1] which satisfies the condition 0 < lim infk→∞βk ≤ lim supk→∞βk < 1. Suppose that xk+1 = (1 − βk)yk + βkxk for all k ≥ 0 and lim supk→∞( ∥ yk+1 − yk ∥ − ∥ xk+1 − xk ∥ ) ≤ 0. Then limk→∞ ∥ yk − xk ∥  = 0.

The lemma below will be a powerful tool in our analysis.

Lemma 2.5

[20]

Let {ak} be a sequence of nonnegative real numbers satisfying the condition ak+1 ≤ (1 − mk)ak + mkδk, ∀k ≥ 0, where {mk}, {δk} are sequences of real numbers such that

  • (i) {mk} ⊂ [0, 1] and $\sum_{k=0}^{\infty} m_k = \infty$ or, equivalently,
    $\prod_{k=0}^{\infty}(1-m_k)=\lim_{k\to\infty}\prod_{j=0}^{k}(1-m_j)=0$;
  • (ii) lim supk→∞δk ≤ 0, or
  • (ii)’ $\sum_{k=0}^{\infty} \delta_k m_k$ is convergent.

Then limk→∞ak = 0.
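Lemma 2.5 can be illustrated numerically; with m_k = δ_k = 1/(k + 1) (our own choice, which satisfies (i) and (ii)), running the recursion with equality drives a_k to 0.

```python
# Illustration of Lemma 2.5 with m_k = delta_k = 1/(k+1): condition (i) holds
# because sum m_k diverges, and (ii) holds because delta_k -> 0.  Running the
# recursion with equality gives a_k = H_k / k -> 0 (H_k the harmonic number).
a = 1.0
for k in range(100000):
    m = 1.0 / (k + 1)
    delta = 1.0 / (k + 1)
    a = (1 - m) * a + m * delta
# a is now of order log(k)/k, i.e. close to 0
```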

Main results

In this section, we give a formal statement of our algorithm. Recall the multiple-sets split equality problem (MSSEP): without loss of generality, suppose t > r in (1.3) and define Qr+1 = Qr+2 = ⋯ = Qt = H2. Then MSSEP (1.3) is equivalent to the following problem:

$$\text{find } x\in\bigcap_{i=1}^{t}C_i \text{ and } y\in\bigcap_{j=1}^{t}Q_j \text{ such that } Ax=By.$$
3.1

Moreover, set Si = Ci × Qi ⊂ H = H1 × H2 (i = 1, 2, …, t), $S=\bigcap_{i=1}^{t}S_i$, and G = [A, −B]:H → H3; the adjoint operator of G is denoted by G*. Then the original problem (3.1) reduces to

finding w = (xy) ∈ S such that Gw = 0.
3.2
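The product-space reformulation can be sketched numerically (the matrices below are our own toy choices): stacking w = (x, y) and forming G = [A, −B] turns Ax = By into Gw = 0.

```python
import numpy as np

# Toy illustration of the product-space reformulation (3.2): stack w = (x, y),
# set G = [A, -B], and Ax = By becomes Gw = 0.
A = np.array([[1.0, 0.0], [2.0, 1.0]])
B = np.array([[1.0, 1.0], [0.0, 3.0]])

G = np.hstack([A, -B])        # G : H1 x H2 -> H3
GtG = G.T @ G                 # the operator G*G appearing in iteration (3.3)

x = np.array([1.0, 1.0])
y = np.linalg.solve(B, A @ x) # y with By = Ax (B invertible in this example)
w = np.concatenate([x, y])
# G @ w is numerically the zero vector, i.e. w solves (3.2) for S = H
```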

Theorem 3.1

Let Ω ≠ ∅ be the solution set of MSSEP (3.2). For an arbitrary initial point w0 ∈ S, the iterative sequence {wn} can be given as follows:

$$\begin{cases} v_n = P_S\{(1-\alpha_n)w_n-\gamma_n G^*Gw_n\},\\ w_{n+1} = P_S\{w_n-\mu_n G^*Gv_n+\lambda_n(v_n-w_n)\}, \end{cases}$$
3.3

where $\{\alpha_n\}_{n=0}^{\infty}$ is a sequence in [0, 1] such that $\lim_{n\to\infty}\alpha_n=0$ and $\sum_{n=1}^{\infty}\alpha_n=\infty$, and $\{\gamma_n\}_{n=0}^{\infty}$, $\{\lambda_n\}_{n=0}^{\infty}$, $\{\mu_n\}_{n=0}^{\infty}$ are positive real sequences satisfying the following conditions (here ρ(G*G) denotes the spectral radius of G*G):

$$\begin{cases} \gamma_n\in\bigl(0,\frac{2}{\rho(G^*G)}\bigr), & \lim_{n\to\infty}(\gamma_{n+1}-\gamma_n)=0;\\ \lambda_n\in(0,1), & \lim_{n\to\infty}(\lambda_{n+1}-\lambda_n)=0;\\ \mu_n\le\frac{2}{\rho(G^*G)}\lambda_n, & \lim_{n\to\infty}(\mu_{n+1}-\mu_n)=0;\\ \sum_{n=1}^{\infty}\frac{\lambda_n}{\gamma_n}<\infty. \end{cases}$$
3.4

Then {wn} converges strongly to a solution of MSSEP (3.2).

Proof

In view of the properties of the projection, any solution wˆ ∈ Ω satisfies wˆ = P_S(wˆ − tG*Gwˆ) for every t > 0. Further, condition (3.4) gives μn ≤ (2/ρ(G*G))λn, and it follows that I − (μn/λn)G*G is nonexpansive. Hence,

$$\begin{aligned}
\|w_{n+1}-\hat w\|&=\bigl\|P_S\{w_n-\mu_nG^*Gv_n+\lambda_n(v_n-w_n)\}-P_S\{\hat w-tG^*G\hat w\}\bigr\|\\
&=\Bigl\|P_S\Bigl\{(1-\lambda_n)w_n+\lambda_n\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n\Bigr\}-P_S\Bigl\{(1-\lambda_n)\hat w+\lambda_n\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)\hat w\Bigr\}\Bigr\|\\
&\le(1-\lambda_n)\|w_n-\hat w\|+\lambda_n\Bigl\|\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n-\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)\hat w\Bigr\|\\
&\le(1-\lambda_n)\|w_n-\hat w\|+\lambda_n\|v_n-\hat w\|.
\end{aligned}$$
3.5

Since αn → 0 as n → ∞ and, by condition (3.4), γn ∈ (0, 2/ρ(G*G)), we may assume that αn ≤ 1 − γnρ(G*G)/2 for all n, that is, γn/(1 − αn) ∈ (0, 2/ρ(G*G)). We deduce that

$$\begin{aligned}
\|v_n-\hat w\|&=\bigl\|P_S\{(1-\alpha_n)w_n-\gamma_nG^*Gw_n\}-P_S(\hat w-tG^*G\hat w)\bigr\|\\
&\le\Bigl\|(1-\alpha_n)\Bigl(w_n-\frac{\gamma_n}{1-\alpha_n}G^*Gw_n\Bigr)-\Bigl\{\alpha_n\hat w+(1-\alpha_n)\Bigl(\hat w-\frac{\gamma_n}{1-\alpha_n}G^*G\hat w\Bigr)\Bigr\}\Bigr\|\\
&\le\alpha_n\|\hat w\|+(1-\alpha_n)\Bigl\|w_n-\frac{\gamma_n}{1-\alpha_n}G^*Gw_n-\Bigl(\hat w-\frac{\gamma_n}{1-\alpha_n}G^*G\hat w\Bigr)\Bigr\|,
\end{aligned}$$
3.6

which, by the nonexpansiveness of I − (γn/(1 − αn))G*G, implies

$$\|v_n-\hat w\|\le\alpha_n\|\hat w\|+(1-\alpha_n)\|w_n-\hat w\|.$$
3.7

Substituting (3.7) in (3.5), we obtain

$$\begin{aligned}
\|w_{n+1}-\hat w\|&\le(1-\lambda_n)\|w_n-\hat w\|+\lambda_n\bigl(\alpha_n\|\hat w\|+(1-\alpha_n)\|w_n-\hat w\|\bigr)\\
&=(1-\lambda_n\alpha_n)\|w_n-\hat w\|+\lambda_n\alpha_n\|\hat w\|\\
&\le\max\bigl\{\|w_n-\hat w\|,\|\hat w\|\bigr\}.
\end{aligned}$$

By induction,

$$\|w_n-\hat w\|\le\max\bigl\{\|w_0-\hat w\|,\|\hat w\|\bigr\}.$$

Consequently, {wn} is bounded, and so is {vn}.

Let T = 2PS − I. By Proposition 2.2, the projection operator PS is firmly nonexpansive, hence monotone and nonexpansive, and T = 2PS − I is nonexpansive.

Therefore,

$$\begin{aligned}
w_{n+1}&=\frac{I+T}{2}\Bigl[(1-\lambda_n)w_n+\lambda_n\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n\Bigr]\\
&=\frac{1-\lambda_n}{2}w_n+\frac{\lambda_n}{2}\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n+\frac{T}{2}\Bigl[(1-\lambda_n)w_n+\lambda_n\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n\Bigr],
\end{aligned}$$

that is,

$$w_{n+1}=\frac{1-\lambda_n}{2}w_n+\frac{1+\lambda_n}{2}b_n,$$
3.8

where $b_n=\dfrac{\lambda_n\bigl(I-\frac{\mu_n}{\lambda_n}G^*G\bigr)v_n+T\bigl[(1-\lambda_n)w_n+\lambda_n\bigl(I-\frac{\mu_n}{\lambda_n}G^*G\bigr)v_n\bigr]}{1+\lambda_n}$.

Indeed,

$$\begin{aligned}
\|b_{n+1}-b_n\|&\le\frac{\lambda_{n+1}}{1+\lambda_{n+1}}\Bigl\|\Bigl(I-\frac{\mu_{n+1}}{\lambda_{n+1}}G^*G\Bigr)v_{n+1}-\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n\Bigr\|+\Bigl|\frac{\lambda_{n+1}}{1+\lambda_{n+1}}-\frac{\lambda_n}{1+\lambda_n}\Bigr|\,\Bigl\|\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n\Bigr\|\\
&\quad+\frac{1}{1+\lambda_{n+1}}\Bigl\|T\Bigl[(1-\lambda_{n+1})w_{n+1}+\lambda_{n+1}\Bigl(I-\frac{\mu_{n+1}}{\lambda_{n+1}}G^*G\Bigr)v_{n+1}\Bigr]-T\Bigl[(1-\lambda_n)w_n+\lambda_n\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n\Bigr]\Bigr\|\\
&\quad+\Bigl|\frac{1}{1+\lambda_{n+1}}-\frac{1}{1+\lambda_n}\Bigr|\,\Bigl\|T\Bigl[(1-\lambda_n)w_n+\lambda_n\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n\Bigr]\Bigr\|.
\end{aligned}$$
3.9

For convenience, let $c_n=\bigl(I-\frac{\mu_n}{\lambda_n}G^*G\bigr)v_n$. By Lemma 2.5 of Shi et al. [9], the operator $I-\frac{\mu_n}{\lambda_n}G^*G$ is nonexpansive and averaged. Hence,

$$\begin{aligned}
\|b_{n+1}-b_n\|&\le\frac{\lambda_{n+1}}{1+\lambda_{n+1}}\|c_{n+1}-c_n\|+\Bigl|\frac{\lambda_{n+1}}{1+\lambda_{n+1}}-\frac{\lambda_n}{1+\lambda_n}\Bigr|\,\|c_n\|\\
&\quad+\frac{1}{1+\lambda_{n+1}}\bigl\|(1-\lambda_{n+1})w_{n+1}+\lambda_{n+1}c_{n+1}-\bigl[(1-\lambda_n)w_n+\lambda_nc_n\bigr]\bigr\|\\
&\quad+\Bigl|\frac{1}{1+\lambda_{n+1}}-\frac{1}{1+\lambda_n}\Bigr|\,\bigl\|T\bigl[(1-\lambda_n)w_n+\lambda_nc_n\bigr]\bigr\|\\
&\le\frac{\lambda_{n+1}}{1+\lambda_{n+1}}\|c_{n+1}-c_n\|+\Bigl|\frac{\lambda_{n+1}}{1+\lambda_{n+1}}-\frac{\lambda_n}{1+\lambda_n}\Bigr|\,\|c_n\|+\frac{1-\lambda_{n+1}}{1+\lambda_{n+1}}\|w_{n+1}-w_n\|+\frac{\lambda_{n+1}}{1+\lambda_{n+1}}\|c_{n+1}-c_n\|\\
&\quad+\frac{|\lambda_n-\lambda_{n+1}|}{1+\lambda_{n+1}}\|w_n\|+\frac{|\lambda_{n+1}-\lambda_n|}{1+\lambda_{n+1}}\|c_n\|+\Bigl|\frac{1}{1+\lambda_{n+1}}-\frac{1}{1+\lambda_n}\Bigr|\,\bigl\|T\bigl[(1-\lambda_n)w_n+\lambda_nc_n\bigr]\bigr\|.
\end{aligned}$$
3.10

Moreover,

$$\begin{aligned}
\|c_{n+1}-c_n\|&=\Bigl\|\Bigl(I-\frac{\mu_{n+1}}{\lambda_{n+1}}G^*G\Bigr)v_{n+1}-\Bigl(I-\frac{\mu_n}{\lambda_n}G^*G\Bigr)v_n\Bigr\|\le\|v_{n+1}-v_n\|\\
&=\bigl\|P_S[(1-\alpha_{n+1})w_{n+1}-\gamma_{n+1}G^*Gw_{n+1}]-P_S[(1-\alpha_n)w_n-\gamma_nG^*Gw_n]\bigr\|\\
&\le\|(I-\gamma_{n+1}G^*G)w_{n+1}-(I-\gamma_{n+1}G^*G)w_n\|+|\gamma_n-\gamma_{n+1}|\,\|G^*Gw_n\|+\alpha_{n+1}\|w_{n+1}\|+\alpha_n\|w_n\|\\
&\le\|w_{n+1}-w_n\|+|\gamma_n-\gamma_{n+1}|\,\|G^*Gw_n\|+\alpha_{n+1}\|w_{n+1}\|+\alpha_n\|w_n\|.
\end{aligned}$$
3.11

Substituting (3.11) in (3.10), we infer that

$$\begin{aligned}
\|b_{n+1}-b_n\|&\le\Bigl|\frac{\lambda_{n+1}}{1+\lambda_{n+1}}-\frac{\lambda_n}{1+\lambda_n}\Bigr|\,\|c_n\|+\frac{|\lambda_n-\lambda_{n+1}|}{1+\lambda_{n+1}}\|w_n\|+\frac{|\lambda_{n+1}-\lambda_n|}{1+\lambda_{n+1}}\|c_n\|+\|w_{n+1}-w_n\|\\
&\quad+\Bigl|\frac{1}{1+\lambda_{n+1}}-\frac{1}{1+\lambda_n}\Bigr|\,\bigl\|T\bigl[(1-\lambda_n)w_n+\lambda_nc_n\bigr]\bigr\|+|\gamma_n-\gamma_{n+1}|\,\|G^*Gw_n\|+\alpha_{n+1}\|w_{n+1}\|+\alpha_n\|w_n\|.
\end{aligned}$$
3.12

By virtue of limn→∞(λn+1 − λn) = 0, it follows that $\lim_{n\to\infty}\bigl|\frac{\lambda_{n+1}}{1+\lambda_{n+1}}-\frac{\lambda_n}{1+\lambda_n}\bigr|=0$. Moreover, {wn} and {vn} are bounded, and so is {cn}. Therefore, (3.12) reduces to

$$\limsup_{n\to\infty}\bigl(\|b_{n+1}-b_n\|-\|w_{n+1}-w_n\|\bigr)\le0.$$
3.13

Applying (3.13) and Lemma 2.4, we get

$$\lim_{n\to\infty}\|b_n-w_n\|=0.$$
3.14

Combining (3.14) with (3.8), we obtain

$$\lim_{n\to\infty}\|w_{n+1}-w_n\|=0.$$

Using the convexity of the norm and (3.5), we deduce that

$$\begin{aligned}
\|w_{n+1}-\hat w\|^2&\le(1-\lambda_n)\|w_n-\hat w\|^2+\lambda_n\|v_n-\hat w\|^2\\
&\le(1-\lambda_n)\|w_n-\hat w\|^2+\lambda_n\Bigl\|\alpha_n\hat w+(1-\alpha_n)\Bigl[w_n-\frac{\gamma_n}{1-\alpha_n}G^*Gw_n-\Bigl(\hat w-\frac{\gamma_n}{1-\alpha_n}G^*G\hat w\Bigr)\Bigr]\Bigr\|^2\\
&\le(1-\lambda_n)\|w_n-\hat w\|^2+\lambda_n\alpha_n\|\hat w\|^2+(1-\alpha_n)\lambda_n\Bigl[\|w_n-\hat w\|^2+\frac{\gamma_n}{1-\alpha_n}\Bigl(\frac{\gamma_n}{1-\alpha_n}-\frac{2}{\rho(G^*G)}\Bigr)\|G^*Gw_n-G^*G\hat w\|^2\Bigr]\\
&\le\|w_n-\hat w\|^2+\lambda_n\alpha_n\|\hat w\|^2+\lambda_n\gamma_n\Bigl(\frac{\gamma_n}{1-\alpha_n}-\frac{2}{\rho(G^*G)}\Bigr)\|G^*Gw_n-G^*G\hat w\|^2,
\end{aligned}$$

which implies that

$$\begin{aligned}
\lambda_n\gamma_n\Bigl(\frac{2}{\rho(G^*G)}-\frac{\gamma_n}{1-\alpha_n}\Bigr)\|G^*Gw_n-G^*G\hat w\|^2&\le\|w_n-\hat w\|^2-\|w_{n+1}-\hat w\|^2+\lambda_n\alpha_n\|\hat w\|^2\\
&\le\|w_{n+1}-w_n\|\bigl(\|w_n-\hat w\|+\|w_{n+1}-\hat w\|\bigr)+\lambda_n\alpha_n\|\hat w\|^2.
\end{aligned}$$

Since $\liminf_{n\to\infty}\lambda_n\gamma_n\bigl(\frac{2}{\rho(G^*G)}-\frac{\gamma_n}{1-\alpha_n}\bigr)>0$, limn→∞αn = 0 and limn→∞ ∥ wn+1 − wn ∥  = 0, we infer that

$$\lim_{n\to\infty}\|G^*Gw_n-G^*G\hat w\|=0.$$
3.15

Applying Proposition 2.2 and the property of the projection PS, one can easily show that

$$\begin{aligned}
\|v_n-\hat w\|^2&=\bigl\|P_S[(1-\alpha_n)w_n-\gamma_nG^*Gw_n]-P_S[\hat w-\gamma_nG^*G\hat w]\bigr\|^2\\
&\le\bigl\langle(1-\alpha_n)w_n-\gamma_nG^*Gw_n-(\hat w-\gamma_nG^*G\hat w),\,v_n-\hat w\bigr\rangle\\
&=\tfrac12\bigl\{\bigl\|w_n-\gamma_nG^*Gw_n-(\hat w-\gamma_nG^*G\hat w)-\alpha_nw_n\bigr\|^2+\|v_n-\hat w\|^2\\
&\qquad-\bigl\|w_n-v_n-\gamma_nG^*G(w_n-\hat w)-\alpha_nw_n\bigr\|^2\bigr\}\\
&\le\tfrac12\bigl\{\|w_n-\hat w\|^2+\alpha_nM+\|v_n-\hat w\|^2-\|w_n-v_n\|^2\\
&\qquad+2\gamma_n\|w_n-v_n\|\,\|G^*G(w_n-\hat w)\|+2\alpha_n\|w_n\|\,\|w_n-v_n\|\bigr\}\\
&\le\|w_n-\hat w\|^2+\alpha_nM-\|w_n-v_n\|^2+4\gamma_n\|w_n-v_n\|\,\|G^*G(w_n-\hat w)\|+4\alpha_n\|w_n\|\,\|w_n-v_n\|,
\end{aligned}$$
3.16

where M > 0 satisfies

$$M\ge\sup_n\bigl\{2\|w_n\|\,\bigl\|w_n-\gamma_nG^*Gw_n-(\hat w-\gamma_nG^*G\hat w)-\alpha_nw_n\bigr\|\bigr\}.$$

From (3.5) and (3.16), we get

$$\begin{aligned}
\|w_{n+1}-\hat w\|^2&\le(1-\lambda_n)\|w_n-\hat w\|^2+\lambda_n\|v_n-\hat w\|^2\\
&\le\|w_n-\hat w\|^2-\lambda_n\|w_n-v_n\|^2+\alpha_nM+4\gamma_n\|w_n-v_n\|\,\|G^*G(w_n-\hat w)\|+4\alpha_n\|w_n\|\,\|w_n-v_n\|,
\end{aligned}$$

which means that

$$\lambda_n\|w_n-v_n\|^2\le\|w_{n+1}-w_n\|\bigl(\|w_n-\hat w\|+\|w_{n+1}-\hat w\|\bigr)+\alpha_nM+4\gamma_n\|w_n-v_n\|\,\|G^*G(w_n-\hat w)\|+4\alpha_n\|w_n\|\,\|w_n-v_n\|.$$

Since limn→∞αn = 0, limn→∞ ∥ wn+1 − wn ∥  = 0 and $\lim_{n\to\infty}\|G^*Gw_n-G^*G\hat w\|=0$, we infer that

$$\lim_{n\to\infty}\|w_n-v_n\|=0.$$

Finally, we show that wn → wˆ. Using the property of the projection PS, we derive

$$\begin{aligned}
\|v_n-\hat w\|^2&=\Bigl\|P_S\Bigl[(1-\alpha_n)\Bigl(w_n-\frac{\gamma_n}{1-\alpha_n}G^*Gw_n\Bigr)\Bigr]-P_S\Bigl[\alpha_n\hat w+(1-\alpha_n)\Bigl(\hat w-\frac{\gamma_n}{1-\alpha_n}G^*G\hat w\Bigr)\Bigr]\Bigr\|^2\\
&\le\Bigl\langle(1-\alpha_n)\Bigl(I-\frac{\gamma_n}{1-\alpha_n}G^*G\Bigr)(w_n-\hat w)-\alpha_n\hat w,\,v_n-\hat w\Bigr\rangle\\
&\le(1-\alpha_n)\|w_n-\hat w\|\,\|v_n-\hat w\|+\alpha_n\langle\hat w,\hat w-v_n\rangle\\
&\le\frac{1-\alpha_n}{2}\bigl(\|w_n-\hat w\|^2+\|v_n-\hat w\|^2\bigr)+\alpha_n\langle\hat w,\hat w-v_n\rangle,
\end{aligned}$$

which yields

$$\|v_n-\hat w\|^2\le\frac{1-\alpha_n}{1+\alpha_n}\|w_n-\hat w\|^2+\frac{2\alpha_n}{1-\alpha_n}\langle\hat w,\hat w-v_n\rangle.$$
3.17

It follows from (3.5) and (3.17) that

$$\begin{aligned}
\|w_{n+1}-\hat w\|^2&\le(1-\lambda_n)\|w_n-\hat w\|^2+\lambda_n\|v_n-\hat w\|^2\\
&\le(1-\lambda_n)\|w_n-\hat w\|^2+\lambda_n\Bigl\{\frac{1-\alpha_n}{1+\alpha_n}\|w_n-\hat w\|^2+\frac{2\alpha_n}{1-\alpha_n}\langle\hat w,\hat w-v_n\rangle\Bigr\}\\
&=\Bigl(1-\frac{2\alpha_n\lambda_n}{1+\alpha_n}\Bigr)\|w_n-\hat w\|^2+\frac{2\alpha_n\lambda_n}{1-\alpha_n}\langle\hat w,\hat w-v_n\rangle.
\end{aligned}$$
3.18

Since $\frac{\gamma_n}{1-\alpha_n}\in\bigl(0,\frac{2}{\rho(G^*G)}\bigr)$, we observe that $\alpha_n\in\bigl(0,\,1-\frac{\gamma_n\rho(G^*G)}{2}\bigr)$, and then

$$\frac{2\alpha_n\lambda_n}{1-\alpha_n}\in\Bigl(0,\ \frac{2\lambda_n(2-\gamma_n\rho(G^*G))}{\gamma_n\rho(G^*G)}\Bigr),$$

that is to say,

$$\frac{2\alpha_n\lambda_n}{1-\alpha_n}\langle\hat w,\hat w-v_n\rangle\le\frac{2\lambda_n(2-\gamma_n\rho(G^*G))}{\gamma_n\rho(G^*G)}\langle\hat w,\hat w-v_n\rangle.$$

By virtue of $\sum_{n=1}^{\infty}\frac{\lambda_n}{\gamma_n}<\infty$, $\gamma_n\in\bigl(0,\frac{2}{\rho(G^*G)}\bigr)$ and the boundedness of ⟨wˆ, wˆ − vn⟩, we obtain $\sum_{n=1}^{\infty}\frac{2\lambda_n(2-\gamma_n\rho(G^*G))}{\gamma_n\rho(G^*G)}\langle\hat w,\hat w-v_n\rangle<\infty$, which implies that

$$\sum_{n=1}^{\infty}\frac{2\alpha_n\lambda_n}{1-\alpha_n}\langle\hat w,\hat w-v_n\rangle<\infty.$$

Moreover,

$$\sum_{n=1}^{\infty}\frac{2\alpha_n\lambda_n}{1-\alpha_n}\langle\hat w,\hat w-v_n\rangle=\sum_{n=1}^{\infty}\frac{2\alpha_n\lambda_n}{1+\alpha_n}\cdot\frac{1+\alpha_n}{1-\alpha_n}\langle\hat w,\hat w-v_n\rangle;
3.19

it follows that all the conditions of Lemma 2.5 are satisfied. Combining (3.18), (3.19) and Lemma 2.5, we conclude that wn → wˆ. This completes the proof.
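To make iteration (3.3) concrete, here is a minimal sketch for S a product of boxes (our own illustrative choice; the theorem allows arbitrary nonempty closed convex sets). The parameters follow the spirit of (3.4) with αn → 0, γn < 2/ρ(G*G) and μn ≤ (2/ρ(G*G))λn, although λn is held constant and the summability condition is ignored in this toy run.

```python
import numpy as np

# Iteration (3.3):  v_n = P_S{(1 - a_n) w_n - g_n G*G w_n},
#                   w_{n+1} = P_S{w_n - mu_n G*G v_n + l_n (v_n - w_n)},
# with S = [-box, box]^(n1+n2), so P_S is a coordinatewise clipping.
def extragradient_mssep(A, B, x0, y0, box=5.0, iters=200):
    G = np.hstack([A, -B])
    GtG = G.T @ G
    rho = np.linalg.norm(G, 2) ** 2           # rho(G*G) = ||G||^2
    n1 = A.shape[1]
    w = np.concatenate([x0, y0]).astype(float)
    for n in range(iters):
        alpha, lam = 1.0 / (n + 2), 0.5       # alpha_n -> 0, sum alpha_n = inf
        gamma, mu = 1.0 / rho, lam / rho      # gamma_n < 2/rho, mu_n <= 2*lam/rho
        v = np.clip((1 - alpha) * w - gamma * GtG @ w, -box, box)
        w = np.clip(w - mu * GtG @ v + lam * (v - w), -box, box)
    return w[:n1], w[n1:], G

x, y, G = extragradient_mssep(np.eye(2), np.eye(2),
                              np.array([2.0, 0.5]), np.array([0.5, 2.0]))
# on this toy instance ||Ax - By|| = ||G w|| decays rapidly toward 0
```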

Numerical experiments

In this section, we present several numerical results and compare our algorithm with Tian’s [21] algorithm (3.15)’ and Byrne’s [22] algorithm (1.2) to show its effectiveness. Moreover, the sequence generated by our algorithm converges strongly for the multiple-sets split equality problem. The whole program was written in Wolfram Mathematica (version 9.0). All the numerical results were obtained on a personal Lenovo computer with an Intel(R) Pentium(R) N3540 CPU at 2.16 GHz and 4.00 GB of RAM.

In the numerical results, A = (aij)P×N, B = (bij)P×M, where the entries aij ∈ [0, 1], bij ∈ [0, 1] are generated randomly and P, M, N are positive integers. The initial points are x0 = (1, 1, …, 1) and y0 = (0, 0, …, 0); we take αn = 0.1, λn = 0.1, γn = μn = 0.2/ρ(G*G) in Theorem 3.1, ρ1n = ρ2n = 0.1 in Tian’s (3.15)’, and γn = 0.01 in Byrne’s (1.2). The termination criterion is ∥Ax − By∥ < ϵ. In Tables 1-4, the number of iterative steps and the CPU time are denoted by n and t, respectively.
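As a rough cross-check of this setup (the authors' Mathematica code is not reproduced here), the sketch below runs iteration (3.3) in Python with the stated parameters on a random instance. The constraint sets Ci, Qj used in the experiments are not recorded above, so this sketch takes them all to be the whole space, making every projection the identity; that choice is our assumption.

```python
import numpy as np

rng = np.random.default_rng(2017)
P, M, N = 3, 3, 3
A = rng.uniform(0.0, 1.0, size=(P, N))    # entries a_ij in [0, 1]
B = rng.uniform(0.0, 1.0, size=(P, M))    # entries b_ij in [0, 1]
G = np.hstack([A, -B])
GtG = G.T @ G
rho = np.linalg.norm(G, 2) ** 2           # rho(G*G)

# Fixed parameters as in the experiments: alpha_n = 0.1, lambda_n = 0.1,
# gamma_n = mu_n = 0.2 / rho(G*G); termination when ||Ax - By|| < eps.
alpha, lam = 0.1, 0.1
gamma = mu = 0.2 / rho
eps = 1e-5

w = np.concatenate([np.ones(N), np.zeros(M)])   # x0 = (1,...,1), y0 = (0,...,0)
steps = 0
while np.linalg.norm(G @ w) >= eps and steps < 50000:
    v = (1 - alpha) * w - gamma * GtG @ w       # P_S is the identity here
    w = w - mu * GtG @ v + lam * (v - w)
    steps += 1
residual = np.linalg.norm(G @ w)                # equals ||Ax - By|| at exit
```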

Table 1
ϵ = 10−5, P = 3, M = 3, N = 3

Table 2
ϵ = 10−10, P = 3, M = 3, N = 3

Table 3
ϵ = 10−5, P = 10, M = 10, N = 10

Table 4
ϵ = 10−10, P = 10, M = 10, N = 10

Acknowledgements

This research was supported by NSFC Grants No: 11226125; No: 11301379; No: 11671167.

Footnotes

Ying Zhao and Luoyi Shi contributed equally to this work.

Competing interests

The authors declare that there are no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Contributor Information

Ying Zhao, 365809556@qq.com.

Luoyi Shi, shiluoyi@tjpu.edu.cn.

References

1. Censor Y, Elfving T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms. 1994;8(2-4):221-239. doi:10.1007/BF02142692.
2. Xu HK. A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006;22(6):2021-2034. doi:10.1088/0266-5611/22/6/007.
3. Lopez G, Martin-Marquez V, Wang F, Xu HK. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012;28(8):085004. doi:10.1088/0266-5611/28/8/085004.
4. Yang Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004;20(4):1261-1266. doi:10.1088/0266-5611/20/4/014.
5. Byrne C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002;18(2):441-453. doi:10.1088/0266-5611/18/2/310.
6. Byrne C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004;20(1):103-120. doi:10.1088/0266-5611/20/1/006.
7. Censor Y, Elfving T, Kopf N, Bortfeld T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005;21(6):2071-2084. doi:10.1088/0266-5611/21/6/017.
8. Moudafi A. Alternating CQ algorithm for convex feasibility and split fixed point problem. J. Nonlinear Convex Anal. 2013;15(4):809-818.
9. Shi LY, Chen RD, Wu YJ. Strong convergence of iterative algorithms for solving the split equality problems. J. Inequal. Appl. 2014;2014:478. doi:10.1186/1029-242X-2014-478.
10. Dong QL, He SN, Zhao J. Solving the split equality problem without prior knowledge of operator norms. Optimization. 2015;64(9):1887-1906. doi:10.1080/02331934.2014.895897.
11. Korpelevich GM. An extragradient method for finding saddle points and for other problems. Ekonomika i Matematicheskie Metody. 1976;12(4):747-756.
12. Nadezhkina N, Takahashi W. Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2006;128(1):191-201. doi:10.1007/s10957-005-7564-z.
13. Ceng LC, Ansari QH, Yao JC. An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012;64(4):633-642. doi:10.1016/j.camwa.2011.12.074.
14. Yao Y, Postolache M, Liou YC. Variant extragradient-type method for monotone variational inequalities. Fixed Point Theory Appl. 2013;2013:185. doi:10.1186/1687-1812-2013-185.
15. Censor Y, Gibali A, Reich S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011;148(2):318-335. doi:10.1007/s10957-010-9757-3.
16. Censor Y, Gibali A, Reich S. Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011;26(4-5):827-845. doi:10.1080/10556788.2010.551536.
17. Bauschke HH, Combettes PL. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. London: Springer; 2011.
18. Takahashi W, Toyoda M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 2003;118(2):417-428. doi:10.1023/A:1025407607560.
19. Suzuki T. Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 2005;2005:103. doi:10.1155/FPTA.2005.103.
20. Xu HK. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002;66(1):240-256. doi:10.1112/S0024610702003332.
21. Tian D, Shi L, Chen R. Iterative algorithm for solving the multiple-sets split equality problem with split self-adaptive step size in Hilbert spaces. J. Inequal. Appl. 2016;2016(1):1-9. doi:10.1186/s13660-015-0952-5.
22. Byrne C, Moudafi A. Extensions of the CQ algorithm for the split feasibility and split equality problems. Preprint, hal-00776640, version 1 (2013).
