Journal of Inequalities and Applications
 
J Inequal Appl. 2017; 2017(1): 51.
Published online 2017 February 27. doi:  10.1186/s13660-017-1321-3
PMCID: PMC5329097

A generalized gradient projection method based on a new working set for minimax optimization problems with inequality constraints

Abstract

Combining the techniques of working set identification and generalized gradient projection, we present a new generalized gradient projection algorithm for minimax optimization problems with inequality constraints. In this paper, we propose a new optimal identification function, from which we derive a new working set. At each iteration, the improved search direction is generated by only one generalized gradient projection explicit formula, which is simple and can reduce the computational cost. Under some mild assumptions, the algorithm possesses global and strong convergence. Finally, the numerical results show that the proposed algorithm is promising.

Keywords: minimax optimization problems, inequality constraints, generalized gradient projection method, global and strong convergence

Introduction

The minimax problem is an important class of nonsmooth optimization problems with a broad range of applications. Numerous models in optimal control [1, 2], engineering [3], portfolio optimization [4] and many other areas [5] can be formulated as the following minimax optimization problem with inequality constraints:

$$\min\ F(x),\qquad \text{s.t. } f_j(x) \le 0,\quad j \in J,$$
1

where F(x) = max {fj(x), j ∈ I} with I = {1, 2, …, l}, J = {l + 1, l + 2, …, l + m}, and fj(x): Rn → R (j ∈ I ∪ J) are continuously differentiable functions. The objective function F(x) is not necessarily differentiable, even when all fj(x), j ∈ I ∪ J, are differentiable. Obviously, the non-differentiability of the objective function F(x) is the main challenge in solving the minimax problem, as the classical smooth methods cannot be applied directly. Over the past few decades, the minimax problem has attracted more and more researchers’ attention and many algorithms have been developed, which can in general be grouped into three classes.

The first class of algorithms views the problem (1) as a constrained nonsmooth optimization problem, which can be solved directly by several classical nonsmooth methods, such as subgradient methods, bundle methods, and cutting plane methods; see Refs. [6–9]. The algorithms in [6–8] share the shortcoming that it is difficult to obtain good numerical results. However, in [9] a feasible descent bundle method for solving inequality constrained minimax problems is proposed; by using the subgradients of the functions, the idea of the bundle method and the technique of a partial cutting planes model to generate the new cutting plane and aggregate the subgradients in the bundle, the difficulties of numerical calculation and storage are overcome.

The second class is the entropy function method [10–13], in which the nonsmooth objective function of the minimax problem is approximated by a parametric smooth function. For example, one may use the parametric smooth function with parameter p

$$F_p(x) = \frac{1}{p}\ln\Big\{\sum_{i=1}^{l}\exp\big[p f_i(x)\big]\Big\}.$$

Since $0 \le F_p(x) - F(x) \le \frac{1}{p}\ln l$, we have $F_p(x) \to F(x)$ as $p \to \infty$.
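To make this approximation bound concrete, here is a minimal numerical sketch (not part of the original paper); the component functions, the test point and the values of p are illustrative assumptions.

```python
import numpy as np

# Illustrative component functions f_i (assumptions, not from the paper)
fs = [lambda x: x[0]**2 + x[1]**2,
      lambda x: (x[0] - 1.0)**2 + x[1],
      lambda x: np.sin(x[0]) + x[1]**2]

def F(x):
    """Exact max-type objective F(x) = max_i f_i(x)."""
    return max(f(x) for f in fs)

def F_p(x, p):
    """Entropy smoothing F_p(x) = (1/p) ln(sum_i exp(p f_i(x))), evaluated stably."""
    vals = np.array([p * f(x) for f in fs])
    m = vals.max()
    return (m + np.log(np.exp(vals - m).sum())) / p

x = np.array([0.5, -0.3])
for p in (1.0, 10.0, 100.0, 1000.0):
    # The gap F_p(x) - F(x) should lie in [0, ln(l)/p] and shrink as p grows.
    print(p, F_p(x, p) - F(x), np.log(len(fs)) / p)
```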

In the third approach, due to the particular structure of the objective function, the problem (1) can be transformed into the following equivalent smooth constrained nonlinear program by introducing an artificial variable z:

$$\min_{(x,z)\in\mathbb{R}^{n+1}} z, \qquad \text{s.t. } f_i(x) - z \le 0,\ i \in I;\qquad f_j(x) \le 0,\ j \in J.$$

Then we can solve the above inequality constrained optimization problem by some well-established methods, such as sequential quadratic programming (SQP) methods [14–17], sequential quadratically constrained quadratic programming (SQCQP) methods [18, 19], the trust-region strategy [20, 21] and the interior-point method [22]. In [14], an SQP algorithm is proposed that incorporates the particular structure of minimax problems, and its global and local convergence is ensured. In order to improve the convergence properties and numerical performance, improved SQP methods for unconstrained or constrained minimax problems were developed in [15, 16]: by solving one quadratic programming subproblem an improved direction is obtained, and a second-order correction direction is also at hand via one system of linear equations. Under mild conditions, global and superlinear convergence is ensured. Jian et al. [17] developed a norm-relaxed SQP method based on active set identification and a new line search for constrained minimax problems, in which the master direction and the high-order correction direction are computed by solving a new type of norm-relaxed quadratic programming subproblem and a system of linear equations, respectively. Moreover, the step size is obtained by a new line search which combines the method of strongly sub-feasible directions with the penalty method.
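As a concrete illustration of this reformulation approach (not of the algorithm proposed in this paper), the following sketch hands the epigraph reformulation above to SciPy's general-purpose SLSQP solver; the component functions, constraint and starting point are made-up examples.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective pieces f_i (i in I) and constraints f_j(x) <= 0 (j in J)
f_obj = [lambda x: x[0]**2 + x[1]**2,
         lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2]
f_con = [lambda x: x[0] + x[1] - 3.0]

def objective(v):
    # Variables v = (x, z); we minimize the artificial variable z
    return v[-1]

# SLSQP inequality constraints are of the form c(v) >= 0
cons = ([{'type': 'ineq', 'fun': lambda v, f=f: v[-1] - f(v[:-1])} for f in f_obj] +
        [{'type': 'ineq', 'fun': lambda v, f=f: -f(v[:-1])} for f in f_con])

x0 = np.zeros(2)
v0 = np.append(x0, max(f(x0) for f in f_obj))   # start with z = F(x0), feasible in z
res = minimize(objective, v0, method='SLSQP', constraints=cons)
print("x* =", res.x[:-1], "  F(x*) ≈", res.x[-1])
```

Note that this reformulation enlarges the constraint set and relies on a general nonlinear programming solver at every step, which is precisely the computational burden the gradient projection approach developed in this paper seeks to avoid.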

Recently, several researchers have worked on SQCQP methods for solving unconstrained or constrained minimax problems. The authors of [18] provided an SQCQP method, in which the subproblem is composed of a convex quadratic objective function and convex quadratic inequality constraints, and showed that their algorithm is faster than the SQP algorithm, a well-known algorithm for solving constrained minimization problems. Jian et al. [19] also proposed a simple sequential quadratically constrained quadratic programming algorithm for smooth constrained optimization. Unlike the previous work, at each iteration the main search direction is obtained by solving only one subproblem, which is composed of a convex quadratic objective function and simple quadratic inequality constraints without the second derivatives of the constrained functions.

Although the SQP and SQCQP methods can effectively solve the minimax problem, the above transformation may ignore the particular structure of the minimax problem and increases the number of constraint functions. Moreover, these methods require the solution of one or two QP (QCQP) subproblems at each iteration; in particular, the subproblems may be complex for large-scale problems. In general, there are many cases where the subproblems cannot be solved easily, which greatly increases the amount of computation. Hence, the (generalized) gradient projection method (GGPM), based on the Rosen gradient projection method [23], has been developed for solving inequality constrained optimization problems. The GGPM has the nice property that the search direction is given by only one gradient projection explicit formula, and it enjoys good convergence and numerical results for small- and middle-scale problems. These attractive features have drawn the attention of many researchers [24–26]. In [25], Chapter II, there is a more systematic and detailed study of generalized gradient projection algorithms for inequality constrained smooth optimization.

It is well established in the literature that, when the number of constraints is very large, the active set identification technique can improve the local convergence behavior and decrease the computational cost of algorithms for nonlinear programming and minimax problems. An earlier study of the active set identification technique can be found in [27]. Many satisfactory results for the general nonlinear case have been obtained, e.g., [28, 29]. Facchinei et al. [28] described a technique, based on the algebraic representation of the constraint set, which identifies the active constraints in a neighborhood of a solution. The extension to constrained minimax problems was first presented in [30], without strict complementarity and linear independence. Moreover, the identification technique of active constraints for constrained minimax problems is also suitable for infeasible algorithms, such as the strongly sub-feasible direction method and the penalty function method.

Despite the GGPM’s importance and usefulness, there is as yet no GGPM-type method applied to minimax problems with inequality constraints. The aim of this paper is to propose such an algorithm, analyze its convergence properties, and report its numerical performance. Motivated by [26, 30], in this paper we propose a generalized gradient projection algorithm working directly on Rn with a new working set for the problem (1). The characteristics of our proposed algorithm can be summarized as follows:

  1. We propose a new optimal identification function for the stationary point, from which we provide a new working set.
  2. The search direction is generated by only one generalized gradient projection explicit formula, which is simple and could reduce the computational cost.
  3. Under some mild assumptions, the algorithm possesses the global and strong convergence.

The paper is organized as follows. The next section describes the algorithm. Section 3 discusses the convergence analysis. Section 4 contains numerical results. Finally, some conclusions are drawn in Section 5.

Description of algorithm

In this section, for the sake of simplicity, we introduce the following notation for the problem (1), which is used throughout the paper:

$$X := \{x \in \mathbb{R}^n : f_j(x) \le 0,\ j \in J\},$$
2

$$I(x) := \{i \in I : f_i(x) = F(x)\},\qquad J(x) := \{j \in J : f_j(x) = 0\}.$$
3

According to the analysis of other projection algorithms, we need the following linear independence assumption.

Assumption A1

The functions fj(x) (j ∈ I ∪ J) are all first order continuously differentiable, and there exists an index lx ∈ I(x) for each x ∈ X such that the gradient vectors {∇fi(x)−∇flx(x), i ∈ I(x)∖{lx}; ∇fj(x), j ∈ J(x)} are linearly independent.

Remark 1

We can easily find that Assumption A1 is equivalent to Assumption A1-1:

Assumption A1-1

The vectors {∇fi(x)−∇ft(x), i ∈ I(x)∖{t}; ∇fj(x), j ∈ J(x)} are linearly independent for arbitrary t ∈ I(x).

For a given point xk and the parameter ε ≥ 0, we use the following approximate active set in this paper:

$$\begin{cases} I_k = \{\, i \in I : -\tilde\varrho_k \le f_i(x^k) - F(x^k) \le 0 \,\},\\[2pt] J_k = \{\, j \in J : -\tilde\varrho_k \le f_j(x^k) \le 0 \,\}, \end{cases}$$
4

where $\tilde\varrho_k$ is given by

$$\tilde\varrho_0 = \varepsilon,\qquad \tilde\varrho_k = \min\{\varepsilon, \varrho_{k-1}\},\quad k \ge 1,$$
5

and ϱk−1 ≥ 0 is the identification function value corresponding to the previous iteration point xk−1, which can be calculated by (17). At each iteration only ϱk needs to be computed, since ϱk−1 has already been calculated at the previous iteration, so the working set identification adds essentially no extra computation per iteration.
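The following is a minimal sketch of how the working set (4)-(5) could be formed at xk under the reconstruction above; the helper names and the numerical values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def working_set(x, f_obj, f_con, rho_tilde):
    """Approximate active sets I_k, J_k of (4) at a feasible point x."""
    Fx = max(f(x) for f in f_obj)
    # f_i(x) - F(x) <= 0 holds automatically, so only the lower bound matters
    I_k = [i for i, f in enumerate(f_obj) if -rho_tilde <= f(x) - Fx]
    J_k = [j for j, f in enumerate(f_con) if -rho_tilde <= f(x) <= 0.0]
    return I_k, J_k, Fx

# (5): rho_tilde_0 = eps, and rho_tilde_k = min(eps, rho_{k-1}) thereafter
eps = 7.0        # value used in the numerical experiments below
rho_prev = 0.3   # illustrative value of the identification function at x^{k-1}
rho_tilde_k = min(eps, rho_prev)
```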

For the current iteration point xk, for convenience of statement, we define

$$F_k := F(x^k),\quad f_i^k := f_i(x^k),\quad g_i^k := \nabla f_i(x^k),\ i \in I \cup J,\quad l_k \in I(x^k),\qquad I_k^0 := I_k \setminus \{l_k\},\quad L_k := I_k^0 \cup J_k = (I_k \cup J_k)\setminus\{l_k\},$$

where lk ∈ I(xk). In theory, the index lk in Algorithm A can be any element of the active set I(xk), and all the theoretical analysis remains the same; for convenience we set $l_k \equiv l_{x^k} := \min\{i : i \in I(x^k)\}$. We can easily see that I(xk) ⊆ Ik and J(xk) ⊆ Jk. Therefore, for a feasible point xk of the problem (1), its stationarity (optimality) conditions can be described as

$$\begin{cases} \sum_{i\in I_k}\lambda_i^k g_i^k + \sum_{j\in J_k}\lambda_j^k g_j^k = 0,\qquad \sum_{i\in I_k}\lambda_i^k = 1,\\[2pt] \lambda_i^k \ge 0,\ \lambda_i^k(f_i^k - F_k) = 0,\ i \in I_k;\qquad \lambda_j^k \ge 0,\ \lambda_j^k f_j^k = 0,\ j \in J_k.\end{cases}$$
6

The foregoing conditions can be rewritten in the following equivalent form:

$$\begin{cases} \sum_{i\in I_k^0}\lambda_i^k (g_i^k - g_{l_k}^k) + \sum_{j\in J_k}\lambda_j^k g_j^k = -g_{l_k}^k,\\[2pt] \lambda_i^k \ge 0,\ \lambda_i^k(f_i^k - F_k) = 0,\ i \in I_k^0;\qquad \lambda_j^k \ge 0,\ \lambda_j^k f_j^k = 0,\ j \in J_k,\\[2pt] \lambda_{l_k}^k = 1 - \sum_{i\in I_k^0}\lambda_i^k \ge 0.\end{cases}$$
7

We define a generalized gradient projection matrix to test whether the current iteration point xk satisfies (6)

$$P_k := E_n - N_kQ_k,$$
8

where $E_n$ is the $n$th-order identity matrix,

$$N_k := \big(g_i^k - g_{l_k}^k,\ i \in I_k^0;\ g_j^k,\ j \in J_k\big),\qquad Q_k := (N_k^T N_k + D_k)^{-1} N_k^T,$$
9

$$D_k := \operatorname{diag}\big(D_j^k,\ j \in L_k\big),\qquad D_j^k := \begin{cases}(F_k - f_j^k)^p, & j \in I_k^0;\\ (-f_j^k)^p, & j \in J_k.\end{cases}$$
10
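As a numerical sketch of (8)-(10), the following assembles $N_k$, $D_k$, $Q_k$ and $P_k$ from the gradients indexed by the working set; the function and argument names are illustrative assumptions (the authors' MATLAB code is not available here), and $L_k$ is assumed nonempty.

```python
import numpy as np

def projection_objects(diff_grads_I0, grads_J, D_I0, D_J):
    """diff_grads_I0: columns g_i^k - g_{l_k}^k for i in I_k^0;
       grads_J:       columns g_j^k for j in J_k;
       D_I0, D_J:     the diagonal entries D_j^k defined in (10)."""
    N = np.column_stack(list(diff_grads_I0) + list(grads_J))      # n x |L_k|
    D = np.diag(np.concatenate([D_I0, D_J]))                      # |L_k| x |L_k|
    Q = np.linalg.solve(N.T @ N + D, N.T)                         # (N^T N + D)^{-1} N^T
    P = np.eye(N.shape[0]) - N @ Q                                # E_n - N Q
    return N, D, Q, P
```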

Suppose that $(x^k, \lambda_{L_k}^k)$ is a stationary point pair. Then, from (7), it is not difficult to get

$$N_k^T N_k \lambda_{L_k}^k = -N_k^T g_{l_k}^k,\qquad D_k \lambda_{L_k}^k = 0,\qquad \lambda_{L_k}^k \ge 0.$$

These further imply that

$$(N_k^T N_k + D_k)\lambda_{L_k}^k = -N_k^T g_{l_k}^k,\qquad \lambda_{L_k}^k = -Q_k g_{l_k}^k,$$
11

$$P_k g_{l_k}^k = 0,\qquad \lambda_{l_k}^k \ge 0,\qquad D_j^k \lambda_j^k = 0,\ \lambda_j^k \ge 0,\ j \in L_k.$$
12

Based on the above analysis, we introduce the following optimal identification function for the stationary point:

$$\rho_k := \|P_k g_{l_k}^k\|^2 + \omega_k + \bar\omega_k^2,$$
13

where

$$\omega_k := \sum_{j\in L_k}\max\{-\mu_j^k,\ \mu_j^k D_j^k\},\qquad \bar\omega_k := \max\{-\mu_{l_k}^k,\ 0\},$$
14

$$\mu_{L_k}^k := -Q_k g_{l_k}^k,\qquad \mu_{l_k}^k := 1 - \sum_{i\in I_k^0}\mu_i^k.$$
15
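Continuing the sketch above, the quantities in (13)-(15), as reconstructed here, could be computed as follows; this is a hypothetical helper, with D_diag holding the entries $D_j^k$ ordered with the $I_k^0$ part first.

```python
import numpy as np

def identification(P, Q, D_diag, g_l, n_I0):
    """rho_k, omega_k, bar_omega_k and the multiplier estimate mu of (13)-(15)."""
    mu = -Q @ g_l                                    # mu_{L_k}^k = -Q_k g_{l_k}^k, see (15)
    omega = sum(max(-m, m * d) for m, d in zip(mu, D_diag))   # (14)
    mu_l = 1.0 - mu[:n_I0].sum()                     # mu_{l_k}^k = 1 - sum_{i in I_k^0} mu_i^k
    bar_omega = max(-mu_l, 0.0)                      # (14)
    Pg = P @ g_l
    rho = Pg @ Pg + omega + bar_omega**2             # (13)
    return rho, omega, bar_omega, mu, mu_l
```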

Before describing our algorithm, one has the following lemma.

Lemma 1

Suppose that Assumption  A1 holds. Then

  • (i) the matrix $N_k^TN_k+D_k$ is nonsingular and positive definite; moreover, if $x^k \to x$, $N_k \to N$ and $D_k \to D$, then the matrix $N^TN+D$ is also nonsingular and positive definite;
  • (ii) $N_k^TP_k = D_kQ_k$, $N_k^TQ_k^T = E_{|L_k|} - D_k(N_k^TN_k+D_k)^{-1}$;
  • (iii) $(g_{l_k}^k)^TP_kg_{l_k}^k = \|P_kg_{l_k}^k\|^2 + \sum_{j\in L_k}(\mu_j^k)^2D_j^k$;
  • (iv) ρk = 0 if and only if xk is a stationary point of the problem (1).

Proof

Under Assumption A1, the matrix $N_k^TN_k+D_k$ is shown to be nonsingular by [25], Theorem 1.1.9. Then we can prove that the matrix $N^TN+D$ is also positive definite, similarly to [25], Lemma 2.2.2. The conclusions (ii) and (iii) can be obtained similarly to [25], Theorem 1.1.9. Next, we prove the conclusion (iv) in detail below.

(iv) If ρk = 0, combining the equations (13), (8), and (15), we obtain

$$0 = P_kg_{l_k}^k = g_{l_k}^k - N_kQ_kg_{l_k}^k = g_{l_k}^k + N_k\mu_{L_k}^k.$$

From $\omega_k = 0$ we have $\max\{-\mu_j^k,\ \mu_j^kD_j^k\} = 0$, which implies $\mu_j^k \ge 0$ and $\mu_j^kD_j^k = 0$ for $j \in L_k$; and from $\bar\omega_k = 0$ it is not difficult to get $\mu_{l_k}^k \ge 0$. So we have

$$\begin{cases}\sum_{i\in I_k}\mu_i^kg_i^k+\sum_{j\in J_k}\mu_j^kg_j^k=0,\qquad \sum_{i\in I_k}\mu_i^k=1,\\[2pt] \mu_i^k\ge0,\ \mu_i^k(f_i^k-F_k)=0,\ i\in I_k;\qquad \mu_j^k\ge0,\ \mu_j^kf_j^k=0,\ j\in J_k.\end{cases}$$

Therefore, $x^k$ is a stationary point of the problem (1) with the multiplier $\big(\mu_{I_k\cup J_k}^k,\ 0_{(I\cup J)\setminus(I_k\cup J_k)}\big)$.

Conversely, if $x^k$ is a stationary point of the problem (1) with the multiplier vector $\lambda^k$, then from (6)-(15) one knows that $\mu_{L_k}^k = \lambda_{L_k}^k$ and $\rho_k = 0$.

The above results show that the current iteration point xk is a stationary point of the problem (1) if and only if ρk = 0; that is, ρk is an optimal identification function. In the case ρk > 0, together with Ik and Jk, we compute the search direction, which is motivated by the generalized gradient projection technique in [25], Chapter II, and [26]:

$$d^k = -\rho_k^\xi\big\{P_k g_{l_k}^k + Q_k^T v_{L_k}^k\big\} - \varrho_k Q_k^T e_k,$$
16

where the parameter $\xi > 0$, $e_k = (1,1,\dots,1)^T \in \mathbb{R}^{|L_k|}$,

$$\varrho_k = \frac{\rho_k^{1+\xi}}{1+\|\mu_{L_k}^k\|_1},$$
17

and the vector $v_{L_k}^k = (v_j^k,\ j \in L_k)$ is given by

$$v_j^k = \begin{cases}\bar\omega_k + 1, & \text{if } \mu_j^k < 0,\ j \in I_k^0;\\ -(\bar\omega_k + D_j^k), & \text{if } \mu_j^k \ge 0,\ j \in I_k^0,\end{cases}\qquad v_j^k = \begin{cases}1, & \text{if } \mu_j^k < 0,\ j \in J_k;\\ -D_j^k, & \text{if } \mu_j^k \ge 0,\ j \in J_k.\end{cases}$$
18
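The explicit formula (16)-(18) can be assembled directly from the quantities in the earlier sketches; the following is a sketch under the reconstruction of (17)-(18) given above, with illustrative parameter defaults taken from the experiments reported later.

```python
import numpy as np

def search_direction(P, Q, D_diag, g_l, mu, bar_omega, rho, n_I0, xi=0.2):
    """Explicit direction d^k of (16), with varrho_k of (17) and v of (18)."""
    L = len(mu)
    varrho = rho**(1.0 + xi) / (1.0 + np.abs(mu).sum())          # (17)
    v = np.empty(L)
    for j in range(L):
        if j < n_I0:                                             # index in I_k^0
            v[j] = bar_omega + 1.0 if mu[j] < 0 else -(bar_omega + D_diag[j])
        else:                                                    # index in J_k
            v[j] = 1.0 if mu[j] < 0 else -D_diag[j]
    e = np.ones(L)
    d = -rho**xi * (P @ g_l + Q.T @ v) - varrho * (Q.T @ e)      # (16)
    return d, varrho
```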

The following lemma further describes important properties of the search direction, showing that it is a feasible descent direction.

Lemma 2

Suppose that Assumption  A1 holds. Then

  • (i) $(g_{l_k}^k)^Td^k \le -\rho_k^\xi\bar\omega_k - \varrho_k$;
  • (ii) $(g_j^k)^Td^k \le -\varrho_k$, $j \in (I(x^k)\setminus\{l_k\}) \cup J(x^k)$;
  • (iii) $F'(x^k; d^k) \le -\varrho_k$, where $F'(x^k; d^k)$ is the directional derivative of F(x) at the point xk along the direction dk.

Proof

(i) First, by (16), together with Lemma 1(iii), (15), (18) and (14), we obtain

$$\begin{aligned}(g_{l_k}^k)^Td^k &= -\rho_k^\xi\big\{(g_{l_k}^k)^TP_kg_{l_k}^k + (Q_kg_{l_k}^k)^Tv_{L_k}^k\big\} - \varrho_k(Q_kg_{l_k}^k)^Te_k\\ &= -\rho_k^\xi\Big\{\|P_kg_{l_k}^k\|^2 + \sum_{j\in L_k}(\mu_j^k)^2D_j^k - (\mu_{L_k}^k)^Tv_{L_k}^k\Big\} + \varrho_k(\mu_{L_k}^k)^Te_k\\ &\le -\rho_k^\xi\Big\{\|P_kg_{l_k}^k\|^2 - \sum_{\mu_i^k<0,\,i\in I_k^0}\mu_i^k(\bar\omega_k+1) + \sum_{\mu_i^k\ge0,\,i\in I_k^0}\mu_i^k(\bar\omega_k+D_i^k) - \sum_{\mu_j^k<0,\,j\in J_k}\mu_j^k + \sum_{\mu_j^k\ge0,\,j\in J_k}\mu_j^kD_j^k\Big\} + \varrho_k(\mu_{L_k}^k)^Te_k\\ &\le -\rho_k^\xi\Big\{\|P_kg_{l_k}^k\|^2 - \sum_{\mu_j^k<0,\,j\in L_k}\mu_j^k + \sum_{\mu_j^k\ge0,\,j\in L_k}\mu_j^kD_j^k + \bar\omega_k\sum_{i\in I_k^0}\mu_i^k\Big\} + \varrho_k(\mu_{L_k}^k)^Te_k\\ &= -\rho_k^\xi\Big\{\|P_kg_{l_k}^k\|^2 + \omega_k + \bar\omega_k\sum_{i\in I_k^0}\mu_i^k\Big\} + \varrho_k(\mu_{L_k}^k)^Te_k.\end{aligned}$$

In addition, based on the definition of $\mu_{l_k}^k$, it immediately follows that $\bar\omega_k\sum_{i\in I_k^0}\mu_i^k = \bar\omega_k + \bar\omega_k(-\mu_{l_k}^k)$. If $\mu_{l_k}^k \ge 0$, we have $\bar\omega_k = 0$ and $\bar\omega_k(-\mu_{l_k}^k) = 0 = \bar\omega_k^2$; on the other hand, if $\mu_{l_k}^k < 0$, then $\bar\omega_k = -\mu_{l_k}^k$ and $\bar\omega_k(-\mu_{l_k}^k) = \bar\omega_k^2$. Thus $\bar\omega_k\sum_{i\in I_k^0}\mu_i^k = \bar\omega_k + \bar\omega_k^2$ is always true. We have from (13) and (17)

$$(g_{l_k}^k)^Td^k \le -\rho_k^\xi\big\{\|P_kg_{l_k}^k\|^2 + \omega_k + \bar\omega_k + \bar\omega_k^2\big\} + \varrho_k(\mu_{L_k}^k)^Te_k = -\rho_k^\xi(\rho_k + \bar\omega_k) + \varrho_k(\mu_{L_k}^k)^Te_k \le -\rho_k^\xi\bar\omega_k - \rho_k^{1+\xi} + \varrho_k\|\mu_{L_k}^k\|_1 = -\rho_k^\xi\bar\omega_k - \varrho_k.$$
19

(ii) From Lemma 1(ii) and (16), we obtain

$$N_k^Td^k = -\rho_k^\xi\big\{N_k^TP_kg_{l_k}^k + N_k^TQ_k^Tv_{L_k}^k\big\} - \varrho_kN_k^TQ_k^Te_k = -\rho_k^\xi\big\{D_kQ_kg_{l_k}^k + v_{L_k}^k - D_k(N_k^TN_k+D_k)^{-1}v_{L_k}^k\big\} - \varrho_k\big\{E_{|L_k|} - D_k(N_k^TN_k+D_k)^{-1}\big\}e_k.$$
20

Then we discuss the following two cases, respectively.

Case 1. For $i \in (I(x^k)\setminus\{l_k\}) \subseteq I_k^0$, it follows that $D_i^k = 0$. From (20), we have $(g_i^k - g_{l_k}^k)^Td^k = -\rho_k^\xi v_i^k - \varrho_k$. Then, combining (18) with conclusion (i), we have

$$(g_i^k)^Td^k = (g_{l_k}^k)^Td^k - \rho_k^\xi v_i^k - \varrho_k \le -\varrho_k - \rho_k^\xi\bar\omega_k + \rho_k^\xi\bar\omega_k - \varrho_k = -2\varrho_k \le -\varrho_k.$$

Case 2. For $j \in J(x^k) \subseteq J_k$, $D_j^k = 0$ holds. It follows from (20) and (18) that

$$(g_j^k)^Td^k = -\rho_k^\xi v_j^k - \varrho_k \le -\varrho_k.$$

By summarizing the above discussion, the conclusion (ii) holds.

(iii) Since $F'(x^k; d^k) = \max\{(g_i^k)^Td^k,\ i \in I(x^k)\}$, we have $F'(x^k; d^k) \le -\varrho_k$ from the conclusions (i) and (ii).

Based on the improved direction dk defined by (16) and analysed above, we are now ready to describe the steps of our algorithm as follows.

Algorithm A

Step 0. Choose an initial feasible point x0 ∈ X and parameters α, β ∈ (0, 1), ε > 0, p ≥ 1, ξ > 0. Let k: = 0.

Step 1. For the current iteration point xk, generate the working set Ik, Jk by (4) and (5), calculate the projection matrix Pk, the optimal identification function values ρk and ϱk by (8), (13)-(15) and (17). If ρk = 0, then xk is a stationary point of the problem (1), stop. Otherwise, go to Step 2.

Step 2. Obtain the search direction dk by (16)-(18).

Step 3. Compute the step size tk, which is the maximum t of the sequence {1, β, β2, …} satisfying

$$F(x^k + td^k) \le F_k - \alpha t\varrho_k,$$
21

$$f_j(x^k + td^k) \le 0,\qquad j \in J.$$
22

Step 4. Let $x^{k+1} := x^k + t_kd^k$, k: = k + 1, and go back to Step 1.
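A minimal sketch of the backtracking line search in Step 3, assuming the same callable containers for the objective pieces and constraints used in the earlier sketches (the function name, the failure handling and the default parameter values are illustrative, the latter matching those reported in the experiments):

```python
def line_search(x, d, F_k, varrho, f_obj, f_con, alpha=0.4, beta=0.4, max_backtracks=50):
    """Largest t in {1, beta, beta^2, ...} satisfying (21) and (22)."""
    t = 1.0
    for _ in range(max_backtracks):
        trial = x + t * d
        ok_21 = max(f(trial) for f in f_obj) <= F_k - alpha * t * varrho   # (21)
        ok_22 = all(f(trial) <= 0.0 for f in f_con)                        # (22)
        if ok_21 and ok_22:
            return t, trial
        t *= beta
    raise RuntimeError("line search did not find an acceptable step")
```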

Remark 2

The inequality (21) is equivalent to $f_i(x^k + td^k) \le F_k - \alpha t\varrho_k$, $i \in I$.

Note that, by Lemma 2, we get $F'(x^k; d^k) \le -\varrho_k < 0$ and $(g_j^k)^Td^k \le -\varrho_k$, $j \in J(x^k)$, so it is easy to see that (21) and (22) hold for all t > 0 small enough; hence Algorithm A is well defined.

Convergence analysis

In this section, we will analyze the global and strong convergence of Algorithm A. If Algorithm A stops at xk in a finite number of iterations, then ρk = 0. From Lemma 1(iv), we know that xk is a stationary point of the problem (1). Next, we assume that Algorithm A yields an infinite iteration sequence {xk} of points, and prove that any accumulation point x of {xk} is a stationary point for the problem (1) under some mild conditions including the following assumption.

Assumption A2

The sequence {xk} generated by Algorithm A is bounded.

Lemma 3

Suppose that Assumptions A1 and A2 hold. Then

  • (i)
    there exists a positive number c such that $\|d^k\| \le c\rho_k^\xi$;
  • (ii)
    limk→∞ ∥ xk+1 − xk ∥  = 0.

Proof

(i) Because of Assumption A2 and Lemma 1(i), it is easy to see that the matrix sequences $\{P_k\}_{k\ge1}$ and $\{Q_k\}_{k\ge1}$ are bounded. Furthermore, from (15) and (18), we know that $\{\mu_{L_k}^k\}$ and $\{v_{L_k}^k\}$ are bounded. This implies that $\|d^k\| \le c\rho_k^\xi$.

(ii) Since {F(xk)} is decreasing and bounded, $\lim_{k\to\infty}F(x^k) = F(x)$. Then, combining (21) with the definition (17) of ϱk, one has $\lim_{k\to\infty}t_k\rho_k^{1+\xi} = 0$. Taking into account the parameter ξ > 0, it follows that

$$\lim_{k\to\infty}\|x^{k+1} - x^k\| = \lim_{k\to\infty}t_k\|d^k\| \le \lim_{k\to\infty}ct_k\rho_k^\xi = c\lim_{k\to\infty}\big[(t_k\rho_k^{1+\xi})^\xi\, t_k\big]^{\frac{1}{1+\xi}} = 0.$$

Based on the lemma above, we can present the global convergence of our algorithm.

Theorem 1

Suppose that Assumptions A1 and A2 hold. Then Algorithm A generates an infinite sequence {xk} of points such that each accumulation point x of {xk} is a stationary point of the problem (1).

Proof

Let the infinite iteration index set 𝒦 satisfy $\lim_{k\in\mathcal{K}}x^k = x$; it then follows that $\lim_{k\in\mathcal{K}}x^{k-1} = x$ from Lemma 3(ii). Noting that Ik, Jk and lk all range over finite sets, we may suppose that for k ∈ 𝒦 (choosing a subsequence of 𝒦 when necessary)

$$I_k \equiv I,\quad J_k \equiv J,\quad l_k \equiv l,\quad I_k^0 \equiv I^0 := I\setminus\{l\},\quad L_k \equiv L := I^0 \cup J,$$
23

$$I_{k-1} \equiv \bar I,\quad J_{k-1} \equiv \bar J,\quad l_{k-1} \equiv \bar l,\quad I_{k-1}^0 \equiv \bar I^0 := \bar I\setminus\{\bar l\},\quad L_{k-1} \equiv \bar L := \bar I^0 \cup \bar J.$$
24

Define

$$D_j = \begin{cases}(F(x) - f_j(x))^p, & j \in I\setminus\{l\};\\ (-f_j(x))^p, & j \in J,\end{cases}\qquad D = \operatorname{diag}(D_j,\ j \in L),$$
25

$$N = \big(\nabla f_i(x) - \nabla f_l(x),\ i \in I^0;\ \nabla f_j(x),\ j \in J\big),\qquad P = E_n - NQ,$$
26

$$Q = (N^TN + D)^{-1}N^T,\qquad \mu_L = (\mu_j,\ j \in L) = -Q\nabla f_l(x),$$
27

$$\omega = \sum_{j\in L}\max\{-\mu_j,\ \mu_jD_j\},\qquad \mu_l = 1 - \sum_{i\in I^0}\mu_i,\qquad \bar\omega = \max\{-\mu_l,\ 0\},$$
28

$$\rho = \|P\nabla f_l(x)\|^2 + \omega + \bar\omega^2,\qquad \varrho = \frac{\rho^{1+\xi}}{1+\|\mu_L\|_1}.$$
29

The above quantities are well defined by Lemma 1(i). Similarly, for x, $\bar I$, $\bar J$ and $\bar l$, we define $\bar\rho$ and $\bar\varrho$. In addition, the matrix $N^TN + D$ is positive definite by Lemma 1(i). Therefore, the sequences $\{v_{L_k}^k\}_{\mathcal{K}}$ and $\{d^k\}_{\mathcal{K}}$ are bounded. Now, assume by contradiction that x is not a stationary point of the problem (1); then, similarly to the analysis of Lemma 1(iv), we can easily get ϱ > 0 and $\bar\varrho > 0$. Hence, for k ∈ 𝒦 large enough, we have

$$\varrho_k \ge 0.5\varrho,\qquad \varrho_{k-1} \xrightarrow{\mathcal{K}} \bar\varrho := \frac{\bar\rho^{1+\xi}}{1+\|\bar\mu_{\bar L}\|_1} > 0,\qquad \tilde\varrho_k = \min\{\varepsilon, \varrho_{k-1}\} \xrightarrow{\mathcal{K}} \min\{\varepsilon, \bar\varrho\} > 0.$$

Next, we will prove this theorem in two steps.

A. The step size sequence {tk}k∈𝒦 is bounded away from zero, that is, $\bar t := \inf\{t_k,\ k \in \mathcal{K}\} > 0$. To this end, we only need to show that the inequalities (21) and (22) hold for k ∈ 𝒦 large enough and all t > 0 sufficiently small.

A1. For i ∈ I, there are two cases: i ∉ I(x) and i ∈ I(x).

A11. Case i ∉ I(x), that is fi(x) < F(x). Taking into account the differentiability of fi(x) and the boundedness of {dk}𝒦 and using Taylor expansion, we have

$$f_i(x^k + td^k) - F_k + \alpha t\varrho_k = f_i^k + t(g_i^k)^Td^k + o(t) - F_k + \alpha t\varrho_k = f_i^k - F_k + O(t) \le 0.5\big(f_i(x) - F(x)\big) + O(t) \le 0.$$

A12. Case i ∈ I(x), that is, fi(x) = F(x). Noting that $x^k \xrightarrow{\mathcal{K}} x$ and that, by the analysis above, $\tilde\varrho_k$ stays bounded away from zero for k ∈ 𝒦, it follows from (4) and (5) that I(x) ⊆ Ik always holds for k ∈ 𝒦 large enough. Hence i ∈ Ik. Here we again distinguish two cases, i = l and i ≠ l, and discuss them respectively.

Assume i = l; it follows from Lemma 2(i) that

$$(g_l^k)^Td^k \le -\rho_k^\xi\bar\omega_k - \varrho_k \le -\varrho_k.$$
30

Assume i ≠ l; then $D_i^k = (F_k - f_i^k)^p \xrightarrow{\mathcal{K}} (F(x) - f_i(x))^p = 0$. From Lemma 1(i), we have $(N_k^TN_k + D_k)^{-1} \xrightarrow{\mathcal{K}} (N^TN + D)^{-1}$. Together with (20), we conclude that

$$(g_i^k - g_{l_k}^k)^Td^k = -\rho_k^\xi v_i^k - \varrho_k + O(D_i^k).$$

In view of (18), $v_i^k \ge -\bar\omega_k + O(D_i^k)$ holds. Combining (19) with $\varrho_k \xrightarrow{\mathcal{K}} \varrho > 0$, we have

$$(g_i^k)^Td^k \le \rho_k^\xi\bar\omega_k + (g_{l_k}^k)^Td^k - \varrho_k + O(D_i^k) \le -2\varrho_k + O(D_i^k) \le -\varrho_k.$$
31

Thus, for i ∈ I(x), from (30) and (31), using Taylor expansion, we obtain

$$f_i(x^k + td^k) - F_k + \alpha t\varrho_k \le f_i^k + t(g_i^k)^Td^k + o(t) - f_i^k + \alpha t\varrho_k \le -(1-\alpha)t\varrho_k + o(t) \le -0.5t(1-\alpha)\varrho + o(t) \le 0.$$

According to the analysis in A11 and A12 above, the inequality (21) is always satisfied for k ∈ 𝒦 large enough and t > 0 sufficiently small.

A2. For j ∈ J, there are also two cases: j ∉ J(x) and j ∈ J(x).

A21. Case j ∉ J(x), it follows that fj(x) < 0. Taking into account the boundedness of {dk}𝒦 and using Taylor expansion, we have

$$f_j(x^k + td^k) = f_j^k + O(t) \le 0.5f_j(x) + O(t) \le 0.$$
32

A22. Case j ∈ J(x), that is, fj(x) = 0. That j ∈ Jk follows by an analysis similar to A12, and we have $D_j^k = (-f_j^k)^p \xrightarrow{\mathcal{K}} 0$. By recalling (20), $(g_j^k)^Td^k = -\rho_k^\xi v_j^k - \varrho_k + O(D_j^k)$. Then, using Taylor expansion and (18), we get

$$f_j(x^k + td^k) = f_j^k + t(g_j^k)^Td^k + o(t) \le f_j^k - t\varrho_k + tO(D_j^k) + o(t) \le -0.5t\varrho + o(t) \le 0.$$

Thus, inequality (22) always holds for k ∈ 𝒦 large enough and t > 0 sufficiently small. Hence, $\bar t := \inf\{t_k,\ k \in \mathcal{K}\} > 0$ holds.

B. We now derive a contradiction by using $t_k \ge \bar t > 0$ (k ∈ 𝒦). In view of (21), it is easy to see that the sequence {Fk} is monotone nonincreasing. Since $\lim_{k\in\mathcal{K}}F_k = F(x)$, it follows that $\lim_{k\to\infty}F_k = F(x)$. This, together with (21), $t_k \ge \bar t$ and ϱk ≥ 0.5ϱ, gives

$$F(x) = \lim_{k\in\mathcal{K}}F_{k+1} \le \lim_{k\in\mathcal{K}}(F_k - \alpha t_k\varrho_k) \le \lim_{k\in\mathcal{K}}(F_k - 0.5\alpha\bar t\varrho) = F(x) - 0.5\alpha\bar t\varrho,$$

which contradicts $\bar t > 0$ and ϱ > 0. Therefore, x is a stationary point of the problem (1), and the whole proof is completed.

Subsequently, we further show that Algorithm A has the property of strong convergence.

Theorem 2

Suppose that Assumptions A1 and A2 hold. If the sequence {xk} of points generated by Algorithm A possesses an isolated accumulation point x, then $\lim_{k\to\infty}x^k = x$, that is, Algorithm A is strongly convergent.

Proof

Since the sequence {xk} of points generated by Algorithm A possesses an isolated accumulation point, together with $\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0$ (Lemma 3(ii)) and [25], Corollary 1.1.8, we immediately have $\lim_{k\to\infty}x^k = x$. Finally, by $\lim_{k\to\infty}x^k = x$ and Theorem 1, it follows that x is a stationary point of the problem (1).

Numerical experiments

In this section, in order to validate the efficiency of our proposed Algorithm A, some preliminary numerical experiments have been carried out. The test problems are divided into two groups. The first group consists of 10 problems (P1-P10): the four small-scale problems P1-P4 are taken from [25] (problems 7.3.28, 7.3.32, 7.3.31 and 7.3.29, respectively), and the other six middle-large-scale problems P5-P10 are from [31]; these six problems are composed of the corresponding objective functions and constraint functions in [31]. For the second group, we pick six problems from [31] and compare the results of Algorithm A with the algorithm from [17] (called Algorithm B). In particular, P5 = 2.3 + 4.1(1) (which means that the objective and the constraints of the problem P5 are 2.3 and 4.1(1) in [31], respectively, and the same below), P6 = 2.3 + 4.6(2), P7 = 2.1 + 4.6(2), P8 = 2.1 + 4.1(1), P9 = 2.5 + 4.6(2) (n = 200), P10 = 2.9 + 4.1(1), P11 = 2.1 + 4.6(1), P12 = 2.5 + 4.6(2) (n = 50), P13 = 2.9 + 4.6(2).

Algorithm A is coded in MATLAB R2010b and run on a PC with Windows 7 and an Intel(R) Pentium(R) 2.4 GHz CPU. During the numerical experiments, we set the parameters α = 0.4, β = 0.4, ε = 7, p = 1, ξ = 0.2. Execution is terminated if ρk < 10−5 or NI ≥ 150.

The numerical results are listed in Tables 1 and 2. The following notation is used: ‘Prob.’: the test problem number; ‘n’: the dimension of the variable x; ‘l’: the number of component objective functions; ‘m’: the number of constraint functions; ‘x0’: the initial feasible point; ‘Alg.’: algorithm; ‘A’ and ‘B’: the algorithms of this paper and [17], respectively; ‘NI’: the number of iterations; ‘NF’: the number of evaluations of all component functions in the objective; ‘NC’: the number of constraint evaluations; ‘T’: the CPU time (in seconds); ‘-’: the corresponding datum is not reported; ‘F(x)’: the final objective value.

Table 1
Numerical results of Algorithm  A
Table 2
Numerical comparisons for Algorithms A and B

From Table 1, we see that Algorithm A can generate an approximately optimal solution for all the test problems. The search direction is generated by the generalized gradient projection explicit formula, and a new working set is used, which can reduce the size of the generalized gradient projection; hence the proposed Algorithm A is efficient.

In Table 2, we compare our Algorithm A with Algorithm B, and all the numerical results were obtained on the same PC. The performance of the two algorithms is similar in terms of the approximate optimal objective value at the final iteration point; although the numbers of objective component and constraint function evaluations are smaller for Algorithm B, our proposed algorithm performs much better than Algorithm B in terms of CPU time.

In summary, based on the numerical results in the two tables and the analysis above, we conclude that the proposed Algorithm A is promising for small- and middle-scale minimax problems.

Conclusions

Although the generalized gradient projection algorithms have good theoretical convergence and effectiveness in practice, their applications to minimax problems have not yet been investigated. In this paper, we present a new generalized gradient projection algorithm for minimax optimization problems with inequality constraints.

The main conclusions of this work:

  1. A new approximate working set is presented.
  2. Using the technique of generalized gradient projection, we construct a generalized gradient projection feasible descent direction.
  3. Under mild assumptions, the algorithm is globally and strongly convergent.
  4. Some preliminary numerical results show that the proposed algorithm performs efficiently.

Results and discussion

In this work, a new generalized gradient projection algorithm for minimax optimization problems with inequality constraints is presented. As further work, we think the ideas can be extended to minimax optimization problems with equality and inequality constraints and other optimization problems.

Acknowledgements

Project supported by the Natural Science Foundation of China (No. 11271086), the Natural Science Foundation of Guangxi Province (No. 2015GXNSFBA139001) and the Science Foundation of Guangxi Education Department (No. KY2015YB242) as well as the Doctoral Scientific Research Foundation (No. G20150010).

Footnotes

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

GDM carried out the idea of this paper, the description of Algorithm A and drafted the manuscript. YFZ carried out the convergence analysis of Algorithm A. MXL participated in the numerical experiments and helped to draft the manuscript. All authors read and approved the final manuscript.

Contributor Information

Guodong Ma, mgd2006@163.com.

Yufeng Zhang, 1178452540@qq.com.

Meixing Liu, liumeixing2011@126.com.

References

1. Baums A. Minimax method in optimizing energy consumption in real-time embedded systems. Autom. Control Comput. Sci. 2009;43:57–62. doi: 10.3103/S0146411609020011.
2. Li YP, Huang GH. Inexact minimax regret integer programming for long-term planning of municipal solid waste management - part A, methodology development. Environ. Eng. Sci. 2009;26:209–218. doi: 10.1089/ees.2007.0241.ptA.
3. Michelot C, Plastria F. An extended multifacility minimax location problem revisited. Ann. Oper. Res. 2002;111:167–179. doi: 10.1023/A:1020953703533.
4. Cai XQ, Teo KL, Yang XQ, Zhou XY. Portfolio optimization under a minimax rule. Manag. Sci. 2000;46:957–972. doi: 10.1287/mnsc.46.7.957.12039.
5. Khan MA. Second-order duality for nondifferentiable minimax fractional programming problems with generalized convexity. J. Inequal. Appl. 2013;2013:500. doi: 10.1186/1029-242X-2013-500.
6. Gaudioso M, Monaco MF. A bundle type approach to the unconstrained minimization of convex nonsmooth functions. Math. Program. 1982;23:216–226. doi: 10.1007/BF01583790.
7. Polak E, Mayne DQ, Higgins JE. Superlinearly convergent algorithm for min-max problems. J. Optim. Theory Appl. 1991;69:407–439. doi: 10.1007/BF00940683.
8. Gaudioso M, Giallombardo G, Miglionico G. An incremental method for solving convex finite min-max problems. Math. Oper. Res. 2006;31:173–187. doi: 10.1287/moor.1050.0175.
9. Jian JB, Tang CM, Tang F. A feasible descent bundle method for inequality constrained minimax problems. Sci. Sin., Math. 2015;45:2001–2024.
10. Xu S. Smoothing method for minimax problems. Comput. Optim. Appl. 2001;20:267–279. doi: 10.1023/A:1011211101714.
11. Polak E, Royset JO, Womersley RS. Algorithms with adaptive smoothing for finite minimax problems. J. Optim. Theory Appl. 2003;119:459–484. doi: 10.1023/B:JOTA.0000006685.60019.3e.
12. Polak E, Womersley RS, Yin HX. An algorithm based on active sets and smoothing for discretized semi-infinite minimax problems. J. Optim. Theory Appl. 2008;138:311–328. doi: 10.1007/s10957-008-9355-9.
13. Xiao Y, Yu B. A truncated aggregate smoothing Newton method for minimax problems. Appl. Math. Comput. 2010;216:1868–1879.
14. Zhou JL, Tits AL. An SQP algorithm for finely discretized continuous minimax problems and other minimax problems with many objective functions. SIAM J. Optim. 1996;6:461–487. doi: 10.1137/0806025.
15. Zhu ZB, Cai X, Jian JB. An improved SQP algorithm for solving minimax problems. Appl. Math. Lett. 2009;22:464–469. doi: 10.1016/j.aml.2008.06.017.
16. Jian JB, Zhang XL, Quan R, Ma Q. Generalized monotone line search SQP algorithm for constrained minimax problems. Optimization. 2009;58:101–131. doi: 10.1080/02331930801951140.
17. Jian JB, Hu QJ, Tang CM. Superlinearly convergent norm-relaxed SQP method based on active set identification and new line search for constrained minimax problems. J. Optim. Theory Appl. 2014;163:859–883. doi: 10.1007/s10957-013-0503-5.
18. Jian JB, Chao MT. A sequential quadratically constrained quadratic programming method for unconstrained minimax problems. J. Math. Anal. Appl. 2010;362:34–45. doi: 10.1016/j.jmaa.2009.08.046.
19. Jian JB, Mo XD, Qiu LJ, Yang SM, Wang FS. Simple sequential quadratically constrained quadratic programming feasible algorithm with active identification sets for constrained minimax problems. J. Optim. Theory Appl. 2014;160:158–188. doi: 10.1007/s10957-013-0339-z.
20. Wang FS, Wang CL. An adaptive nonmonotone trust-region method with curvilinear search for minimax problem. Appl. Math. Comput. 2013;219:8033–8041.
21. Ye F, Liu HW, Zhou SS, Liu SY. A smoothing trust-region Newton-CG method for minimax problem. Appl. Math. Comput. 2008;199:581–589.
22. Obasanjo E, Tzallas-Regas G, Rustem B. An interior-point algorithm for nonlinear minimax problems. J. Optim. Theory Appl. 2010;144:291–318. doi: 10.1007/s10957-009-9599-z.
23. Rosen JB. The gradient projection method for nonlinear programming, part I. Linear constraints. J. Soc. Ind. Appl. Math. 1960;8:181–217. doi: 10.1137/0108011.
24. Du DZ, Wu F, Zhang XS. On Rosen’s gradient projection methods. Ann. Oper. Res. 1990;24:9–28. doi: 10.1007/BF02216813.
25. Jian JB. Fast Algorithms for Smooth Constrained Optimization - Theoretical Analysis and Numerical Experiments. Beijing: Science Press; 2010.
26. Ma GD, Jian JB. An ε-generalized gradient projection method for nonlinear minimax problems. Nonlinear Dyn. 2014;75:693–700. doi: 10.1007/s11071-013-1095-1.
27. Burke JV, More JJ. On the identification of active constraints. SIAM J. Numer. Anal. 1988;25:1197–1211. doi: 10.1137/0725068.
28. Facchinei F, Fischer A, Kanzow C. On the accurate identification of active constraints. SIAM J. Optim. 1998;9:14–32. doi: 10.1137/S1052623496305882.
29. Oberlin C, Wright SJ. Active set identification in nonlinear programming. SIAM J. Optim. 2006;17:577–605. doi: 10.1137/050626776.
30. Han DL, Jian JB, Li J. On the accurate identification of active set for constrained minimax problems. Nonlinear Anal. 2011;74:3022–3032. doi: 10.1016/j.na.2011.01.024.
31. Karmitsa N. Test problems for large-scale nonsmooth minimization. Reports of the Department of Mathematical Information Technology, Series B, Scientific Computing, No. B 4 (2007).
