Inverse Probl. Author manuscript; available in PMC 2010 July 6.
Published in final edited form as:
Inverse Probl. 2010 June 1; 26(6): 065008.
doi: 10.1088/0266-5611/26/6/065008
PMCID: PMC2897099
NIHMSID: NIHMS207234

Perturbation Resilience and Superiorization of Iterative Algorithms

Abstract

Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically, for an iterative algorithm of the first kind, a "superiorized version" that retains its computational efficiency but nevertheless goes a long way towards solving an optimization problem. This is possible when the original algorithm is "perturbation resilient," which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a feasible point that is superior, though not necessarily optimal, with respect to the given function. After these intuitive ideas are presented in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.

Keywords: iterative algorithms, convex feasibility problem, superiorization, perturbation resilience, projection methods

1. Introduction

Computational tractability with limited computing resources is a major barrier for the ever increasing problem sizes of constrained optimization models (that seek a minimum of an objective function satisfying a set of constraints). On the other hand, there exist efficient (and computationally much less demanding) iterative methods for finding a feasible solution that only fulfills the constraints. These methods can handle problem sizes beyond which existing optimization algorithms cannot function. To bridge this gap we have been working on the new concept called superiorization, envisioned methodologically as lying between optimization and feasibility seeking. It enables us to use efficient iterative methods to steer the iterates towards a point that is feasible and superior, but not necessarily optimal, with respect to the given objective/merit function.

Using efficient iterative methods to do "superiorization" instead of "full constrained optimization" or only "feasibility" is a new tool for handling mathematical models that include constraints and a merit function. The target improvement of the superiorization methodology is to affect the computational treatment of the mathematical models so that we can reach solutions that are desirable from the point of view of the application at hand at a relatively small computational cost. The key to superiorization is our recent discovery [6, 23] that two principal prototypical algorithmic schemes, string-averaging projections (SAP) and block-iterative projections (BIP), which include many projection methods as special cases, are bounded perturbations resilient. Superiorization uses perturbations proactively to reach superior feasible points.

The work done to date on superiorization has been very specific: superiorization has been applied to certain algorithms for certain tasks. The main contribution of the current paper is the introduction of terminology and theory that allows us to superiorize automatically any iterative algorithm for a problem set from a very general class of problem sets. It is shown that under reasonable conditions such superiorized algorithms are guaranteed to halt. In fact, the method of superiorization introduced in the current paper is slightly different from the specific superiorized algorithms published earlier; in particular, in order to guarantee that those algorithms halt we needed more complex termination conditions (as discussed, for example, below the pseudocode on p. 10 of [28]) than what is needed for the algorithms produced by the method of the current paper. This new method of superiorization is illustrated by applying it to an SAP and to a BIP algorithm. These illustrations are for image reconstruction from projections and demonstrate an important practical aspect of our work: the output of superiorization can be as useful as anything that may be obtained by full optimization, because the value of the objective function for the image provided by superiorization is already smaller than that for the true image that we are trying to reconstruct. All this is explained in greater detail below.

We first motivate and describe our ideas in a not fully general context. Many significant real-world problems are modeled by constraints that force the sought-after solution point to fulfill conditions imposed by the physical nature of the problem. Such a modeling approach often leads to a convex feasibility problem of the form

find $x^* \in C = \bigcap_{i=1}^{I} C_i$,
(1)

where the sets $C_i \subseteq \mathbb{R}^J$ are closed convex subsets of the Euclidean space $\mathbb{R}^J$; see [2, 9, 17] or [16, Chapter 5] for this broad topic. In many real-world problems the underlying system is very large (huge values of I and J) and often very sparse. In these circumstances projection methods have proved to be effective. They are iterative algorithms that use projections onto sets while relying on the general principle that when a family of closed and convex sets is present, then projections onto the individual sets are easier to perform than projections onto other sets, such as their intersection as in (1), that are derived from them.
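To make the "easy building bricks" point concrete, the following is a minimal NumPy sketch (ours, not from the paper) of the orthogonal projection onto a single hyperplane $\{y \mid \langle a, y \rangle = b\}$, the kind of set that arises from each data item in the tomography illustration of Section 3:

import numpy as np

def project_onto_hyperplane(x, a, b):
    # Orthogonal projection of x onto the hyperplane {y : <a, y> = b},
    # given in closed form by x + ((b - <a, x>) / ||a||^2) a.
    a = np.asarray(a, dtype=float)
    return x + ((b - a @ x) / (a @ a)) * a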

Projection methods can have various algorithmic structures (some of which are particularly suitable for parallel computing) and they also possess desirable convergence properties and good initial behavior patterns [2, 16, 18, 19, 20, 27, 32]. The main advantage of projection methods, which makes them successful in real-world applications, is computational. They commonly have the ability to handle huge-size problems of dimensions beyond which more sophisticated methods cease to be efficient or even applicable due to memory requirements. (For a justification of this claim see the various examples provided in [10].) This is so because the building bricks of a projection algorithm (which are the projections onto the given individual sets) are easy to perform, and because the algorithmic structure is either sequential or simultaneous, or in-between, as in the block-iterative projection methods or in the more recently invented string-averaging projection methods. The number of sets used simultaneously in each iteration in block-iterative methods and the number and lengths of strings used in each iteration in string-averaging methods are variable, which provides great flexibility in matching the implementation of the algorithm with the parallel architecture at hand; for block-iterative methods see, e.g., [1, 3, 5, 13, 20, 23, 25, 26, 29, 30, 31] and for string-averaging methods see, e.g., [4, 6, 11, 14, 15, 22, 31, 33].

The key to superiorization is our recent discovery [6, 23, 28] that two principal prototypical algorithmic schemes of projection methods, string-averaging projections (SAP) and block-iterative projections (BIP), which include as special cases a variety of projection methods for the convex feasibility problem, are bounded perturbations resilient in the sense that the convergence of sequences generated by them continues to hold even if the iterates are perturbed in every iteration. We harness this resilience to bounded perturbations to steer the iterates not to just any feasible point but to a superior (in a well-defined sense) feasible point of (1).

Our motivation is the desire to create a new methodology that will significantly improve methods for the solution of inverse problems in image reconstruction from projections, intensity-modulated radiation/proton therapy (IMRT/IMPT) and in other real-world problems such as electron microscopy (EM). Our work [6, 23], as well as the examples given below, indicates that this objective is achievable and shows how algorithms can incorporate perturbations in order to perform superiorization.

The superiorization methodology has in fact broader applicability than what has been discussed until now and its mathematical specification in the next section reflects this. However, all our specific examples will be chosen from the field that we used as our motivation in this introductory section.

2. Specification of the superiorization methodology

The superiorization principle relies on the bounded perturbation resilience of algorithms. Therefore we define this notion next in a general setting within $\mathbb{R}^J$. (We note in passing that there is an immediate generalization of our approach by considering problems that are defined over a closed subset of $\mathbb{R}^J$ rather than necessarily the whole of $\mathbb{R}^J$. We chose not to do this in this paper for reasons that are given in Section 4.)

We introduce the notion of a problem structure $\langle \mathbb{T}, \mathcal{P}r \rangle$, where $\mathbb{T}$ is a nonempty problem set and $\mathcal{P}r$ is a function on $\mathbb{T}$ such that, for all $T \in \mathbb{T}$, $\mathcal{P}r_T : \mathbb{R}^J \to \mathbb{R}_+$, where $\mathbb{R}_+$ is the set of nonnegative real numbers. Intuitively, we think of $\mathcal{P}r_T(x)$ as a measure of how "far" x is from being a solution of T. In fact, we call x a solution of T if $\mathcal{P}r_T(x) = 0$.

For example, for the convex feasibility problem (1)

$\mathbb{T} = \left\{ \{C_1, \ldots, C_I\} \mid I \text{ is a positive integer and, for } 1 \le i \le I,\ C_i \text{ is a closed convex subset of } \mathbb{R}^J \right\}$
(2)

and

$\mathcal{P}r_{\{C_1,\ldots,C_I\}}(x) = \sqrt{\sum_{i=1}^{I} \big(d(x, C_i)\big)^2},$
(3)

where $d(x, C_i)$ is the Euclidean distance of x from the set $C_i$. Clearly, in this case x is a solution of $\{C_1,\ldots,C_I\}$ as defined in the previous paragraph if, and only if, $x \in C$ as defined in (1).
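Under the same hyperplane assumption as in the sketch of Section 1, the proximity function (3) can be evaluated directly from the point-to-hyperplane distances. This is again a sketch of ours, with the square root as reconstructed in (3):

import numpy as np

def proximity(x, hyperplanes):
    # Pr_{C_1,...,C_I}(x) of (3): the square root of the sum of squared
    # Euclidean distances from x to the sets C_i. For a hyperplane
    # C_i = {y : <a_i, y> = b_i}, d(x, C_i) = |<a_i, x> - b_i| / ||a_i||.
    squares = 0.0
    for a, b in hyperplanes:
        a = np.asarray(a, dtype=float)
        d = abs(a @ x - b) / np.linalg.norm(a)
        squares += d * d
    return np.sqrt(squares)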

Definition 1. An algorithm P for $\langle \mathbb{T}, \mathcal{P}r \rangle$ assigns to each $T \in \mathbb{T}$ an algorithmic operator $P_T : \mathbb{R}^J \to \mathbb{R}^J$. P is said to be bounded perturbations resilient if, for all $T \in \mathbb{T}$, the following is the case: if the sequence $((P_T)^k x)_{k=0}^{\infty}$ converges to a solution of T for all $x \in \mathbb{R}^J$, then any sequence $(x^k)_{k=0}^{\infty}$ of points in $\mathbb{R}^J$ also converges to a solution of T provided that, for all k ≥ 0,

$x^{k+1} = P_T(x^k + \beta_k v^k),$
(4)

where the $\beta_k v^k$ are bounded perturbations, meaning that the $\beta_k$ are real nonnegative numbers such that $\sum_{k=0}^{\infty} \beta_k < \infty$ and the sequence $(v^k)_{k=0}^{\infty}$ is bounded.

We give next specific instances of bounded perturbations resilient algorithms for solving the convex feasibility problem as in (2) and (3), from the classes of SAP and BIP methods. We do this by defining $P_{\{C_1,\ldots,C_I\}}$ for an arbitrary but fixed element $\{C_1,\ldots,C_I\}$ of $\mathbb{T}$ of (2) for the different algorithms P. For any nonempty closed convex subset M of $\mathbb{R}^J$ and any $x \in \mathbb{R}^J$, the orthogonal projection of x onto M is the point in M that is nearest (by the Euclidean distance) to x; it is denoted by $P_M x$.

To define $P_{\{C_1,\ldots,C_I\}}$ for the SAP instances, we make use of index vectors, which are nonempty ordered sets $t = (t_1,\ldots,t_N)$, where N is an arbitrary positive integer, whose elements $t_n$ are in the set $\{1,\ldots,I\}$. For an index vector t we define the composite operator

$P[t] = P_{C_{t_N}} \cdots P_{C_{t_1}}.$
(5)

A finite set Ω of index vectors is called fit if, for each $i \in \{1,\ldots,I\}$, there exists $t = (t_1,\ldots,t_N) \in \Omega$ such that $t_n = i$ for some $n \in \{1,\ldots,N\}$. If Ω is a fit set of index vectors, then a function $\omega : \Omega \to \mathbb{R}_{++} = (0,\infty)$ is called a fit weight function if $\sum_{t \in \Omega} \omega(t) = 1$. A pair (Ω, ω) consisting of a fit set of index vectors and a fit weight function defined on it was called an amalgamator in [6]. For each amalgamator (Ω, ω), we define the algorithmic operator $P_{\{C_1,\ldots,C_I\}} : \mathbb{R}^J \to \mathbb{R}^J$ by

$P_{\{C_1,\ldots,C_I\}}\, x = \sum_{t \in \Omega} \omega(t)\, P[t]\, x.$
(6)
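As an illustration (our own sketch, with hypothetical names), one application of the SAP operator (6) can be coded directly from (5) and (6): each index vector t contributes the end-point of its "string" of successive projections, and the amalgamator weights average these end-points.

import numpy as np

def sap_operator(x, projectors, amalgamator):
    # One application of P_{C_1,...,C_I} of (6).
    # projectors[i](x) computes P_{C_{i+1}} x (0-based indexing).
    # amalgamator is a list of (weight, index_vector) pairs whose weights
    # are positive and sum to 1 (a fit set Omega with fit weight function omega).
    result = np.zeros_like(x, dtype=float)
    for weight, t in amalgamator:
        y = x
        for i in t:                 # composite operator P[t] of (5)
            y = projectors[i](y)
        result += weight * y        # weighted average over the strings
    return result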

For this algorithmic operator we have the following bounded perturbations resilience theorem.

Theorem 1. [6, Section II] If C of (1) is nonempty, $(\beta_k)_{k=0}^{\infty}$ is a sequence of nonnegative real numbers such that $\sum_{k=0}^{\infty} \beta_k < \infty$, and $(v^k)_{k=0}^{\infty}$ is a bounded sequence of points in $\mathbb{R}^J$, then for any amalgamator (Ω, ω) and any $x^0 \in \mathbb{R}^J$, the sequence $(x^k)_{k=0}^{\infty}$ generated by

$x^{k+1} = P_{\{C_1,\ldots,C_I\}}(x^k + \beta_k v^k), \quad k \ge 0,$
(7)

converges, and its limit is in C. (The statement of this theorem in [6] is for positive $\beta_k$s, but the proof given there applies to nonnegative $\beta_k$s.)

Corollary 1. For any amalgamator (Ω, ω), the algorithm P defined by the algorithmic operator $P_{\{C_1,\ldots,C_I\}}$ is bounded perturbations resilient.

Proof. Assume that for $T = \{C_1,\ldots,C_I\}$ the sequence $((P_T)^k x)_{k=0}^{\infty}$ converges to a solution of T for all $x \in \mathbb{R}^J$. This implies, in particular, that C of (1) is nonempty. By Definition 1, we need to show that any sequence $(x^k)_{k=0}^{\infty}$ of points in $\mathbb{R}^J$ also converges to a solution of T provided that, for all k ≥ 0, (4) is satisfied when the $\beta_k v^k$ are bounded perturbations. Under our assumptions, this follows from Theorem 1.

Next we look at a member of the family of BIP methods. Considering the convex feasibility problem (1), for 1 ≤ u ≤ U, let $B_u = \{b_{u,1},\ldots,b_{u,|B_u|}\}$ be a set of elements of $\{1,\ldots,I\}$ ($|B_u|$ denotes the cardinality of $B_u$). We call such a $B_u$ a block and define the (composite) algorithmic operator $Q_{\{C_1,\ldots,C_I\}} : \mathbb{R}^J \to \mathbb{R}^J$ by

$Q_{\{C_1,\ldots,C_I\}} = Q_U \cdots Q_1,$
(8)

where, for $x \in \mathbb{R}^J$ and 1 ≤ u ≤ U,

$Q_u x = \frac{1}{R} \sum_{i \in B_u} P_{C_i} x + \frac{R - |B_u|}{R}\, x,$
(9)

and

$R = \max\{|B_u| \mid 1 \le u \le U\}.$
(10)
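In the same hypothetical setting as the SAP sketch above, one application of the BIP operator (8)-(10) looks as follows; note that the weights 1/R (applied to each of the $|B_u|$ projections) and $(R - |B_u|)/R$ in (9) sum to 1, so each $Q_u$ is an averaging operator.

import numpy as np

def bip_operator(x, projectors, blocks):
    # One application of Q_{C_1,...,C_I} of (8): the operators Q_1,...,Q_U
    # are applied in turn, Q_1 first.
    R = max(len(B) for B in blocks)                 # R of (10)
    for B in blocks:                                # B is a block B_u (0-based indices)
        proj_sum = sum(projectors[i](x) for i in B)
        x = proj_sum / R + ((R - len(B)) / R) * x   # Q_u x of (9)
    return x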

The iterative procedure $x^{k+1} = Q_{\{C_1,\ldots,C_I\}} x^k$ is a member of the family of BIP methods. For this algorithmic operator we have the following bounded perturbations resilience theorem.

Theorem 2. [23] If C of (1) is nonempty, $\{1,\ldots,I\} = \bigcup_{u=1}^{U} B_u$, $(\beta_k)_{k=0}^{\infty}$ is a sequence of nonnegative real numbers such that $\sum_{k=0}^{\infty} \beta_k < \infty$, and $(v^k)_{k=0}^{\infty}$ is a bounded sequence of points in $\mathbb{R}^J$, then for any $x^0 \in \mathbb{R}^J$, the sequence $(x^k)_{k=0}^{\infty}$ generated by

$x^{k+1} = Q_{\{C_1,\ldots,C_I\}}(x^k + \beta_k v^k), \quad k \ge 0,$
(11)

converges, and its limit is in C. (This is a special case of Theorem 2 in [23], given here without a relaxation parameter. Also, that theorem is stated for positive $\beta_k$s, but the proof given there applies to nonnegative $\beta_k$s.)

Corollary 2. The algorithm Q defined by the algorithmic operator $Q_{\{C_1,\ldots,C_I\}}$ is bounded perturbations resilient.

Proof. In the proof of Corollary 1, replace P by Q and Theorem 1 by Theorem 2.

Further bounded perturbations resilience theorems are available in a Banach space setting; see [7, 8]. Thus the theory of bounded perturbations resilient algorithms already contains some solid mathematical results. In contrast, the superiorization theory that we present next is at the stage of being a collection of heuristic ideas; a full mathematical theory still needs to be developed. However, there are practical demonstrations of its potential usefulness; see [6, 23, 28] and the illustrations in Section 3 below.

For a problem structure $\langle \mathbb{T}, \mathcal{P}r \rangle$, $T \in \mathbb{T}$, $\varepsilon \in \mathbb{R}_{++}$ and a sequence $S = (x^k)_{k=0}^{\infty}$ of points in $\mathbb{R}^J$, we use $O(T, \varepsilon, S)$ to denote the $x \in \mathbb{R}^J$ that has the following properties: $\mathcal{P}r_T(x) \le \varepsilon$ and there is a nonnegative integer K such that $x^K = x$ and, for all nonnegative integers ℓ < K, $\mathcal{P}r_T(x^\ell) > \varepsilon$. Clearly, if there is such an x, then it is unique. If there is no such x, then we say that $O(T, \varepsilon, S)$ is undefined. The intuition behind this definition is the following: if we think of S as the (infinite) sequence of points that is produced by an algorithm (intended for the problem T) without a termination criterion, then $O(T, \varepsilon, S)$ is the output produced by that algorithm when we add to it instructions that make it terminate as soon as it reaches a point at which the value of $\mathcal{P}r_T$ is not greater than ε. The following result is obvious.

Lemma 1. If $\mathcal{P}r_T$ is continuous and the sequence S converges to a solution of T, then $O(T, \varepsilon, S)$ is defined and $\mathcal{P}r_T(O(T, \varepsilon, S)) \le \varepsilon$.

Given an algorithm P for a problem structure $\langle \mathbb{T}, \mathcal{P}r \rangle$, a $T \in \mathbb{T}$ and an $\bar{x} \in \mathbb{R}^J$, let $R(T, \bar{x}) = ((P_T)^k \bar{x})_{k=0}^{\infty}$. For a function $\phi : \mathbb{R}^J \to \mathbb{R}$, the superiorization methodology should provide us with an algorithm that produces a sequence $S(T, \bar{x}, \phi) = (x^k)_{k=0}^{\infty}$, such that for any $\varepsilon \in \mathbb{R}_{++}$ and $\bar{x} \in \mathbb{R}^J$ for which $\mathcal{P}r_T(\bar{x}) > \varepsilon$ and $O(T, \varepsilon, R(T, \bar{x}))$ is defined, $O(T, \varepsilon, S(T, \bar{x}, \phi))$ is also defined and $\phi(O(T, \varepsilon, S(T, \bar{x}, \phi))) < \phi(O(T, \varepsilon, R(T, \bar{x})))$. This is of course too ambitious in its full generality and so here we analyze only a special case, but one that is still quite general. We now list our assumptions for the special case for which we discuss details of the superiorization methodology.

Assumptions

  1. $\langle \mathbb{T}, \mathcal{P}r \rangle$ is a problem structure such that $\mathcal{P}r_T$ is continuous for all $T \in \mathbb{T}$.
  2. P is a bounded perturbations resilient algorithm for $\langle \mathbb{T}, \mathcal{P}r \rangle$ such that, for all $T \in \mathbb{T}$, $P_T$ is continuous and, if x is not a solution of T, then $\mathcal{P}r_T(P_T x) < \mathcal{P}r_T(x)$.
  3. $\phi : \mathbb{R}^J \to \mathbb{R}$ is an everywhere real-valued convex function, defined on the whole space.

Under these assumptions, we now describe the algorithm that produces the sequence $S(T, \bar{x}, \phi) = (x^k)_{k=0}^{\infty}$, and we present and prove Theorem 3 below.

The algorithm assumes that we have available a summable sequence $(\gamma_\ell)_{\ell=0}^{\infty}$ of positive real numbers. It is easy to generate such sequences; e.g., we can use $\gamma_\ell = a^\ell$, where 0 < a < 1. The algorithm generates, simultaneously with the sequence $(x^k)_{k=0}^{\infty}$, sequences $(v^k)_{k=0}^{\infty}$ and $(\beta_k)_{k=0}^{\infty}$. The latter will be generated as a subsequence of $(\gamma_\ell)_{\ell=0}^{\infty}$. Clearly, the resulting sequence $(\beta_k)_{k=0}^{\infty}$ of positive real numbers will be summable. We first specify the algorithm and then discuss it. The algorithm depends on the specified $\bar{x}$, ϕ, $(\gamma_\ell)_{\ell=0}^{\infty}$, $\mathcal{P}r_T$ and $P_T$. It makes use of a logical variable called continue and also of the concept of a subgradient of the convex function ϕ. ‖·‖ denotes the Euclidean norm.

Superiorized Version of Algorithm P

(i) set k = 0
(ii) set $x^k = \bar{x}$
(iii) set ℓ = 0
(iv) repeat
(v)  set g to a subgradient of ϕ at $x^k$
(vi)  if ‖g‖ > 0
(vii)   then set $v^k = -g / \|g\|$
(viii)   else set $v^k = g$
(ix)  set continue = true
(x)  while continue
(xi)   set $\beta_k = \gamma_\ell$
(xii)   set $y = x^k + \beta_k v^k$
(xiii)   if $\phi(y) \le \phi(x^k)$ and $\mathcal{P}r_T(P_T y) < \mathcal{P}r_T(x^k)$ then
(xiv)    set $x^{k+1} = P_T y$
(xv)    set continue = false
(xvi)   set ℓ = ℓ + 1
(xvii) set k = k + 1
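The pseudocode translates almost line for line into Python. The sketch below is our own rendering, not the authors' code: proj, prox, phi and subgrad are hypothetical callables standing for $P_T$, $\mathcal{P}r_T$, ϕ and a subgradient selector, and we add the stopping test $\mathcal{P}r_T(x) \le \varepsilon$ that defines the output $O(T, \varepsilon, S)$.

import numpy as np

def superiorized_version_of_P(x_bar, proj, prox, phi, subgrad, gamma, eps,
                              max_iter=10**6):
    # gamma(ell) must be a summable sequence of positive reals,
    # e.g. gamma = lambda ell: 0.999**ell as in Section 3.
    x, ell = x_bar, 0                         # Lines (i)-(iii)
    for _ in range(max_iter):                 # Line (iv): repeat
        if prox(x) <= eps:                    # termination of O(T, eps, S)
            return x
        g = subgrad(x)                        # Line (v)
        norm_g = np.linalg.norm(g)
        v = -g / norm_g if norm_g > 0 else g  # Lines (vi)-(viii)
        while True:                           # Lines (ix)-(x)
            beta = gamma(ell)                 # Line (xi)
            y = x + beta * v                  # Line (xii)
            ell += 1                          # Line (xvi)
            z = proj(y)                       # P_T y
            if phi(y) <= phi(x) and prox(z) < prox(x):  # Line (xiii)
                x = z                         # Lines (xiv)-(xv)
                break
    return x

For the illustrations of Section 3, proj would be the SAP or BIP operator sketched earlier, prox the proximity function of (3), and phi the total variation (12) defined below.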

Sometimes it is useful to emphasize the function ϕ for which we are superiorizing, in which case we refer to the algorithm above as the ϕ-superiorized version of algorithm P. It is important to bear in mind that the sequence S produced by the algorithm depends also on the initial point $\bar{x}$, the selection of the subgradient in Line (v) of the algorithm, the summable sequence $(\gamma_\ell)_{\ell=0}^{\infty}$, and the problem T. In addition, the output $O(T, \varepsilon, S)$ of the algorithm depends on the stopping criterion ε.

Theorem 3. Under the Assumptions listed above, the Superiorized Version of Algorithm P will produce a sequence $S(T, \bar{x}, \phi)$ of points in $\mathbb{R}^J$ that either contains a solution of T or is infinite. In the latter case, if the sequence $((P_T)^k x)_{k=0}^{\infty}$ converges to a solution of T for all $x \in \mathbb{R}^J$, then, for any $\varepsilon \in \mathbb{R}_{++}$, $O(T, \varepsilon, S(T, \bar{x}, \phi))$ is defined and $\mathcal{P}r_T(O(T, \varepsilon, S(T, \bar{x}, \phi))) \le \varepsilon$.

Proof. Assume that the sequence $S(T, \bar{x}, \phi)$ produced by the Superiorized Version of Algorithm P does not contain a solution of T. We first show that in this case the algorithm generates an infinite sequence $(x^k)_{k=0}^{\infty}$. This is equivalent to saying that, for any $x^k$ that has been generated already, the condition in Line (xiii) of the algorithm will be satisfied sooner or later (and hence $x^{k+1}$ will be generated). This needs to happen because, as long as the condition is not satisfied, we keep resetting (in Line (xi)) the value of $\beta_k$ to $\gamma_\ell$, with ever increasing values of ℓ. However, $(\gamma_\ell)_{\ell=0}^{\infty}$ is a summable sequence of positive real numbers, and so $\gamma_\ell$ is guaranteed to be arbitrarily small if ℓ is sufficiently large. Since $v^k$ is either a unit vector in the direction of the negative subgradient of the convex function ϕ at $x^k$ or is the zero vector (see Lines (v)-(viii)), $\phi(x^k + \beta_k v^k) \le \phi(x^k)$ must be satisfied if the positive number $\beta_k$ is small enough. Also, since $x^k$ is not a solution of T, Assumption 2 gives $\mathcal{P}r_T(P_T x^k) < \mathcal{P}r_T(x^k)$; since $P_T$ and $\mathcal{P}r_T$ are continuous (Assumptions 2 and 1, respectively), we also have that $\mathcal{P}r_T(P_T(x^k + \beta_k v^k)) < \mathcal{P}r_T(x^k)$ if $\beta_k$ is small enough. This completes the proof that the condition in Line (xiii) of the algorithm will be satisfied and so the algorithm will generate an infinite sequence $S(T, \bar{x}, \phi)$. Observing that we have already demonstrated that the $\beta_k v^k$ are bounded perturbations, and comparing (4) with Lines (xii) and (xiv), we see that (by the bounded perturbation resilience of P) the assumption that the sequence $((P_T)^k x)_{k=0}^{\infty}$ converges to a solution of T for all $x \in \mathbb{R}^J$ implies that $S(T, \bar{x}, \phi)$ also converges to a solution of T. Thus, applying Lemma 1, we obtain the final claim of the theorem.

Unfortunately, this theorem does not go far enough. To demonstrate that a methodology leads to superiorization we should be proving (under some assumptions) a result like $\phi(O(T, \varepsilon, S(T, \bar{x}, \phi))) < \phi(O(T, \varepsilon, R(T, \bar{x})))$ in place of the weaker result at the end of the statement of the theorem. Currently we do not have any such proofs and so we are restricted to providing practical demonstrations that our methodology leads to superiorization in the desired sense. In the next section we provide such demonstrations for the Superiorized Version of Algorithm P, for two different Ps.

3. Illustrations of the superiorization methodology

We illustrate the superiorization methodology on a problem of reconstructing a head cross-section (based on Figure 4.6(a) of [27]) from its projections using both an SAP and a BIP algorithm. (All the computational work reported in this section was done using SNARK09 [24]; the phantom, the data, the reconstructions and displays were all generated within this same framework.) Figure 1(a) shows a 243 × 243 digitization of the head phantom with J = 59,049 pixels. An $x \in \mathbb{R}^J$ is interpreted as a vector of pixel values, whose components represent the average X-ray linear attenuation coefficients (measured per centimeter) within the 59,049 pixels. Each pixel is of size 0.0752 × 0.0752 (measured in centimeters). The pixel values range from 0 to 0.5639. For display purposes, any value below 0.204 is shown as black (gray value 0) and any value above 0.21675 is shown as white (gray value 255), with a linear mapping of the pixel values into gray values in between (the same convention is used in displaying reconstructed images in Figures 1(b)-(e)).

Figure 1
A head phantom (a) and its reconstructions from underdetermined consistent data obtained for 82 views using: (b) a variant of ART, (c) TV-superiorized version of the same variant of ART, (d) a block-iterative projection method, and (e) TV-superiorized ...

Data were collected by calculating line integrals across the digitized image for 82 sets of equally spaced parallel lines, with I = 25,452 lines in total. Each data item determines a hyperplane in $\mathbb{R}^J$. Since the digitized phantom lies in the intersection of all the hyperplanes, we have here an instance of the convex feasibility problem with a nonempty C, satisfying the first condition of the statements of Theorems 1 and 2.

For our illustration, we chose the SAP algorithm $P_{\{C_1,\ldots,C_I\}}$ as determined by (5)-(6) with $\Omega = \{(1,\ldots,I)\}$ and $\omega((1,\ldots,I)) = 1$. This is a classical method that in tomography would be considered a variant of the algebraic reconstruction techniques (ART) [27, Chapter 11]. For the BIP algorithm we chose $Q_{\{C_1,\ldots,C_I\}}$ as determined by (8)-(10) with U = 82 and each block corresponding to one of the 82 sets of parallel lines along which the data are collected.

The function ϕ for which we superiorized is defined so that, for any $x \in \mathbb{R}^J$, ϕ(x) is the total variation (TV) of the corresponding 243 × 243 image. If the pixel values of this image are $q_{g,h}$, then the value of the TV is defined to be

$\sum_{g=1}^{242} \sum_{h=1}^{242} \sqrt{(q_{g+1,h} - q_{g,h})^2 + (q_{g,h+1} - q_{g,h})^2}.$
(12)
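A direct NumPy rendering of (12), as our own sketch, for a two-dimensional array q of pixel values:

import numpy as np

def total_variation(q):
    # TV of (12): the sum over g, h = 1,...,242 of
    # sqrt((q_{g+1,h} - q_{g,h})^2 + (q_{g,h+1} - q_{g,h})^2),
    # written with 0-based slicing for a 243 x 243 array.
    dg = q[1:, :-1] - q[:-1, :-1]   # vertical differences
    dh = q[:-1, 1:] - q[:-1, :-1]   # horizontal differences
    return float(np.sqrt(dg**2 + dh**2).sum())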

For the TV-superiorized versions of the algorithms $P_{\{C_1,\ldots,C_I\}}$ and $Q_{\{C_1,\ldots,C_I\}}$ of the previous paragraph we selected $\bar{x}$ to be the origin (the vector of all zeros) and $\gamma_\ell = 0.999^\ell$. Also, we set ε = 0.01 for the stopping criterion, which is small compared to the value of $\mathcal{P}r_T$ at the initial point ($\mathcal{P}r_T(\bar{x}) = 330.208$).

For each of the four algorithms ($P_{\{C_1,\ldots,C_I\}}$, $Q_{\{C_1,\ldots,C_I\}}$ and their TV-superiorized versions), the sequence S that it produces is such that the output $O(T, \varepsilon, S)$ is defined; see Figures 1(b)-(e) for the images that correspond to these outputs. Clearly, the superiorized reconstructions in Figures 1(c) and 1(e) are visually superior to their non-superiorized counterparts in Figures 1(b) and 1(d), respectively. More importantly from the point of view of our theory, consider Table 1. As stated in the last paragraph of the previous section, we would like to have $\phi(O(T, \varepsilon, S(T, \bar{x}, \phi))) < \phi(O(T, \varepsilon, R(T, \bar{x})))$. While we are not able to prove that this is the case in general, Table 1 clearly shows it to be the case for the two algorithms discussed in this section.

Table 1
Values of TV for the outputs of the various algorithms. The second column is for the superiorized versions and the third column is for the original versions.

A final important point that is illustrated by the experiments in this section is that, from the practical point of view, TV-superiorization is as useful as TV-optimization. This is because a realistic phantom, such as the one in Figure 1(a), is unlikely to be TV-minimizing subject to the constraints provided by the measurements. In fact, the TV value of our phantom is 450.53, which is larger than that of either TV-superiorized reconstruction in the second column of Table 1. While an optimization method should be able to find an image with a lower TV value, there is no practical point in doing so. Since the underlying aim of what we are doing is to estimate the phantom from the data, producing an image whose TV value is further from the TV value of the phantom than that of our superiorized reconstructions is unlikely to help achieve this aim.

4. Discussion and conclusions

Stability of algorithms under perturbations is generally studied in numerical analysis with the aim of proving that an algorithm is stable so that it can “endure” all kinds of imperfections in the data or in the computational performance. Here we have taken a proactive approach designed to extract specific benefits from the kind of stability that we term perturbation resilience. We have been able to do this in a context that includes, but is much more general than, feasibility-optimization for intersections of convex sets.

Our premise has been that (1) there is available a bounded perturbations resilient iterative algorithm that efficiently solves a certain type of problem and (2) we desire to make use of perturbations to find, for these problems, solutions that, according to some criterion, are superior to the ones we would get without employing perturbations. To accomplish this, one must have a way of introducing perturbations that take into account the criterion according to which we wish to "superiorize" the solutions of the problems.

We have set forth the fundamental principle, have given some mathematical formulations and results, and have shown potential benefits (in the field of image reconstruction from projections). However, the superiorization methodology needs to be studied further from the mathematical, algorithmic and computational points of view in order to unveil its general applicability to inverse problems. In particular, we need to investigate the computational efficiency of the superiorized versions of algorithms compared to their original versions and, more importantly, compared to actual optimization algorithms. Such results have been reported for specific algorithms (e.g., Table I of [6] reports on a case in which a superiorized algorithm is eight times faster than the original version and four times faster than an optimization algorithm proposed in [21]). However, further testing of the computational efficiency of the algorithms produced by the new general approach will have to be carried out under a variety of circumstances. As algorithms are developed and tested, this dialog on algorithmic developments must be accompanied by mathematical validation and by applications to simulated and real data from various relevant fields.

Validating the concept means proving precise statements about the behavior of the iterates $(x^k)_{k=0}^{\infty}$ generated by the superiorized versions of algorithms. Under what conditions do they converge? Can their limit points be characterized? How would different choices of the perturbation coefficients $\beta_k$ and the perturbation vectors $v^k$ affect the superiorization process? Can different schemes for generating the $\beta_k$s be developed, implemented and investigated? Enlarging the arsenal of bounded perturbations resilient algorithms means generalizing existing proofs for such algorithms and developing new theories that will bring additional ones into the family of bounded perturbations resilient algorithms. Further developments should include extension to the problem of finding a common fixed point of a family of operators (a direct generalization of the convex feasibility problem; see, e.g., [29]), generalization of the concept of superiorization so that it applies to the split feasibility problem (see [12, 34, 35]), and study of the behavior of superiorization algorithms in inconsistent situations, when the underlying solution set is empty. Thus we view the material in this paper as only an initial step in a promising new field of endeavor for solving inverse problems.

As a final comment we return to the issue, raised at the beginning of Section 2, of generalizing the approach to problems that are defined over a closed subset of $\mathbb{R}^J$ rather than the whole of $\mathbb{R}^J$. This would allow us, for example, to superiorize an entropy-based function ϕ that is defined only over the positive orthant. It seems clear that this is doable, but a rigorous reformulation of all that we said would require numerous changes. For example, in Theorems 1 and 2, we would have to include additional condition(s) to ensure that $x^k + \beta_k v^k$ is in the domain of the algorithmic operator, and then we would have to prove the so-altered theorems. We felt that such extra details would interfere with the clarity of presentation of the main contribution of the paper and decided not to include them. Entropy-superiorizing perturbations for a specific algorithm were investigated in [23], and the results reported in Figure 1 of that paper show it to produce a reconstruction from projections that is inferior to the one produced by TV-superiorizing perturbations.

Acknowledgments

We appreciate the constructive comments of two referees. This work was supported by Award Number R01HL070472 from the National Heart, Lung, and Blood Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Heart, Lung, and Blood Institute or the National Institutes of Health.

Footnotes

AMS classification scheme numbers: 65Y20, 68W25, 90C06, 90C25, 68U10

References

1. Aharoni R, Censor Y. Block-iterative projection methods for parallel computation of solutions to convex feasibility problems. Linear Algebra Appl. 1989;120:165–175.
2. Bauschke HH, Borwein JM. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996;38:367–426.
3. Bauschke HH, Combettes PL, Kruk SG. Extrapolation algorithm for affine-convex feasibility problems. Numer. Algorithms. 2006;41:239–274.
4. Bauschke HH, Matoušková E, Reich S. Projection and proximal point methods: convergence results and counterexamples. Nonlinear Anal. 2004;56:715–738.
5. Butnariu D, Censor Y. On the behavior of a block-iterative projection method for solving convex feasibility problems. Int. J. Comput Math. 1990;34:79–94.
6. Butnariu D, Davidi R, Herman GT, Kazantsev IG. Stable convergence behavior under summable perturbations of a class of projection methods for convex feasibility and optimization problems. IEEE J. Sel. Top. Sign. Process. 2007;1:540–547.
7. Butnariu D, Reich S, Zaslavski AJ. Convergence to fixed points of inexact orbits of Bregman-monotone and nonexpansive operators in Banach spaces. In: Nathansky HF, de Buen BG, Goebel K, Kirk WA, Sims B, editors. Fixed Point Theory and Applications. Yokohama: Yokohama Publishers; 2006. pp. 11–32.
8. Butnariu D, Reich S, Zaslavski AJ. Stable convergence theorems for infinite products and powers of nonexpansive mappings. Numer. Func. Anal. Opt. 2008;29:304–323.
9. Byrne CL. Applied Iterative Methods. AK Peters; 2008.
10. Censor Y, Chen W, Combettes PL, Davidi R, Herman GT. On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Opt. Online. 2009 http://www.optimization-online.org/DB_HTML/2009/12/2500.html.
11. Censor Y, Elfving T, Herman GT. Averaging strings of sequential iterations for convex feasibility problems. In: Butnariu D, Censor Y, Reich S, editors. Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Elsevier Science Publishers; 2001. pp. 101–114.
12. Censor Y, Elfving T, Kopf N, Bortfeld T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems. 2005;21:2071–2084.
13. Censor Y, Gordon D, Gordon R. BICAV: A block-iterative, parallel algorithm for sparse systems with pixel-related weighting. IEEE Trans. Med. Imaging. 2001;20:1050–1060.
14. Censor Y, Segal A. On the string averaging method for sparse common fixed points problems. Int. Trans. Oper. Res. 2009;16:481–494.
15. Censor Y, Tom E. Convergence of string-averaging projection schemes for inconsistent convex feasibility problems. Optim. Methods Softw. 2003;18:543–554.
16. Censor Y, Zenios SA. Parallel Optimization: Theory, Algorithms and Applications. Oxford University Press; 1997.
17. Chinneck JW. Feasibility and Infeasibility in Optimization: Algorithms and Computational Methods. Springer; 2007.
18. Combettes PL. The convex feasibility problem in image recovery. Adv. Imag. Elec. Phys. 1996;95:155–270.
19. Combettes PL. Hilbertian convex feasibility problem: Convergence of projection methods. Appl. Math. Opt. 1997;35:311–330.
20. Combettes PL. Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections. IEEE T. Image Process. 1997;6:493–506.
21. Combettes PL, Luo J. An adaptive level set method for nondifferentiable constrained image recovery. IEEE T. Image Process. 2002;11:1295–1304.
22. Crombez G. Finding common fixed points of strict paracontractions by averaging strings of sequential iterations. J. Nonlinear Convex Anal. 2002;3:345–351.
23. Davidi R, Herman GT, Censor Y. Perturbation-resilient block-iterative projection methods with application to image reconstruction from projections. Int. Trans. Oper. Res. 2009;16:505–524.
24. Davidi R, Herman GT, Klukowska J. SNARK09: A programming system for the reconstruction of 2D images from 1D projections. 2009 http://www.snark09.com/
25. Eggermont PPB, Herman GT, Lent A. Iterative algorithms for large partitioned linear systems, with applications to image reconstruction. Linear Algebra Appl. 1981;40:37–67.
26. González-Castaño FJ, García-Palomares UM, Alba-Castro JL, Pousada-Carballo JM. Fast image recovery using dynamic load balancing in parallel architectures, by means of incomplete projections. IEEE T. Image Process. 2001;10:493–499.
27. Herman GT. Fundamentals of Computerized Tomography: Image Reconstruction from Projections. 2nd ed. Springer; 2009.
28. Herman GT, Davidi R. On image reconstruction from a small number of projections. Inverse Problems. 2008;24:045011.
29. Kiwiel KC, Łopuch B. Surrogate projection methods for finding fixed points of firmly nonexpansive mappings. SIAM J. Optim. 1997;7:1084–1102.
30. Ottavy N. Strong convergence of projection-like methods in Hilbert spaces. J. Optim. Theory Appl. 1988;56:433–461.
31. Penfold SN, Schulte RW, Censor Y, Bashkirov V, McAllister S, Schubert KE, Rosenfeld AB. Block-iterative and string-averaging projection algorithms in proton computed tomography image reconstruction. In: Censor Y, Jiang M, Wang G, editors. Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Medical Physics Publishing; to appear.
32. Pierra G. Decomposition through formalization in a product space. Math. Program. 1984;28:96–115.
33. Rhee H. An application of the string averaging method to one-sided best simultaneous approximation. J. Korea Soc. Math. Educ. Ser. B Pure Appl. Math. 2003;10:49–56.
34. Schöpfer F, Schuster T, Louis AK. An iterative regularization method for the solution of the split feasibility problem in Banach spaces. Inverse Problems. 2008;24:055008.
35. Zhao J, Yang Q. Several solution methods for the split feasibility problem. Inverse Problems. 2005;21:1791–1799.