**Perturbation resilience and superiorization of iterative algorithms**

Inverse Probl. Author manuscript; available in PMC 2010 July 6.

Published in final edited form as:

Inverse Probl. 2010 June 1; 26(6): 065008.

doi: 10.1088/0266-5611/26/6/065008. PMCID: PMC2897099

NIHMSID: NIHMS207234


**Abstract**

Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a “superiorized version” of it that retains its computational efficiency but nevertheless goes a long way towards solving an optimization problem. This is possible to do if the original algorithm is “perturbation resilient,” which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After these intuitive ideas are presented in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.

**1. Introduction**

Computational tractability with limited computing resources is a major barrier for the ever increasing problem sizes of constrained optimization models (that seek a minimum of an objective function satisfying a set of constraints). On the other hand, there exist efficient (and computationally much less demanding) iterative methods for finding a feasible solution that only fulfills the constraints. These methods can handle problem sizes beyond which existing optimization algorithms cannot function. To bridge this gap we have been working on the new concept called *superiorization*, envisioned methodologically as lying between optimization and feasibility seeking. It enables us to use efficient iterative methods to steer the iterates towards a point that is feasible and superior, but not necessarily optimal, with respect to the given objective/merit function.

Using efficient iterative methods to do “superiorization” instead of “full constrained optimization” or only “feasibility” is a new tool for handling mathematical models that include constraints and a merit function. The target improvement of the superiorization methodology is to affect the computational treatment of the mathematical models so that we can reach solutions that are desirable from the point of view of the application at hand at a relatively small computational cost. The key to superiorization is our recent discovery [6, 23] that two principal prototypical algorithmic schemes: string-averaging projections (SAP) and block-iterative projections (BIP), which include many projection methods, are *bounded perturbations resilient*. Superiorization uses perturbations proactively to reach superior feasible points.

The work done to date on superiorization has been very specific: superiorization has been applied to certain algorithms for certain tasks. The main contribution of the current paper is the introduction of terminology and theory that allow us to superiorize automatically any iterative algorithm for a problem set from a very general class of problem sets. It is shown that under reasonable conditions such superiorized algorithms are guaranteed to halt. In fact, the method of superiorization introduced in the current paper is slightly different from the specific superiorized algorithms published earlier; in particular, in order to guarantee that those algorithms halt we needed more complex termination conditions (as discussed, for example, below the pseudocode on p. 10 of [28]) than what is needed for the algorithms produced by the method of the current paper. This new method of superiorization is illustrated by applying it to an SAP and to a BIP algorithm. These illustrations are for image reconstruction from projections and demonstrate an important practical aspect of our work, which is that the output of superiorization can be as useful as anything that may be obtained by full optimization, because the value of the objective function for the image provided by superiorization is already smaller than that for the true image that we are trying to reconstruct. All this is explained in greater detail below.

We first motivate and describe our ideas in a not fully general context. Many significant real-world problems are modeled by constraints that force the sought-after solution point to fulfill conditions imposed by the physical nature of the problem. Such a modeling approach often leads to a *convex feasibility problem* of the form

$$\text{find}\ {x}^{*}\in C=\bigcap_{i=1}^{I}{C}_{i},$$

(1)

where the sets *C*_{i} are closed convex subsets of the Euclidean space ℝ^{J} and *C* is assumed to be nonempty. Problems of this kind are commonly handled by *projection methods*, whose iterative steps use the orthogonal projections onto the individual sets *C*_{i}.
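To make the feasibility-seeking setting concrete, here is a minimal sketch (a toy example of ours, not from the paper) of sequential projections onto the hyperplanes of a small consistent system in ℝ²; the two hyperplanes and the sweep count are assumptions chosen for illustration.

```python
import numpy as np

def project_hyperplane(x, a, b):
    # Orthogonal projection of x onto the hyperplane {y : <a, y> = b}.
    return x - ((a @ x - b) / (a @ a)) * a

# Two hyperplanes in R^2 (x1 = 1 and x1 + x2 = 2) whose intersection
# is the single point (1, 1): a tiny consistent feasibility problem.
hyperplanes = [(np.array([1.0, 0.0]), 1.0),
               (np.array([1.0, 1.0]), 2.0)]

x = np.zeros(2)                    # arbitrary starting point
for _ in range(100):               # sequential (ART/Kaczmarz-style) sweeps
    for a, b in hyperplanes:
        x = project_hyperplane(x, a, b)
```

Each sweep is computationally trivial (one inner product per set), which is the point made above: the building bricks of a projection algorithm are cheap even when the problem is huge.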

Projection methods can have various algorithmic structures (some of which are particularly suitable for parallel computing) and they also possess desirable convergence properties and good initial behavior patterns [2, 16, 18, 19, 20, 27, 32]. The main advantage of projection methods, which makes them successful in real-world applications, is computational. They commonly have the ability to handle huge-size problems of dimensions beyond which more sophisticated methods cease to be efficient or even applicable due to memory requirements. (For a justification of this claim see the various examples provided in [10].) This is so because the building bricks of a projection algorithm (which are the projections onto the given individual sets) are easy to perform, and because the algorithmic structure is either sequential or simultaneous, or in-between, as in the block-iterative projection methods or in the more recently invented string-averaging projection methods. The number of sets used simultaneously in each iteration in block-iterative methods and the number and lengths of strings used in each iteration in string-averaging methods are variable, which provides great flexibility in matching the implementation of the algorithm with the parallel architecture at hand; for block-iterative methods see, e.g., [1, 3, 5, 13, 20, 23, 25, 26, 29, 30, 31] and for string-averaging methods see, e.g., [4, 6, 11, 14, 15, 22, 31, 33].

The key to superiorization is our recent discovery [6, 23, 28] that two principal prototypical algorithmic schemes of projection methods: string-averaging projections (SAP) and block-iterative projections (BIP), which include as special cases a variety of projection methods for the convex feasibility problem, are bounded perturbations resilient in the sense that the convergence of sequences generated by them continues to hold even if the iterates are perturbed in every iteration. We harness this resilience to bounded perturbations to steer the iterates to not just any feasible point but to a superior (in a well-defined sense) feasible point of (1).

Our motivation is the desire to create a new methodology that will significantly improve methods for the solution of inverse problems in image reconstruction from projections, intensity-modulated radiation/proton therapy (IMRT/IMPT) and in other real-world problems such as electron microscopy (EM). Our work [6, 23], as well as the examples given below, indicate that our objective is achievable and show how algorithms can incorporate perturbations in order to perform superiorization.

The superiorization methodology has in fact broader applicability than what has been discussed until now and its mathematical specification in the next section reflects this. However, all our specific examples will be chosen from the field that we used as our motivation in this introductory section.

**2. Specification of the superiorization methodology**

The superiorization principle relies on the bounded perturbation resilience of algorithms. Therefore we define this notion next in a general setting within ℝ^{J}. (We note in passing that there is an immediate generalization of our approach by considering problems that are defined over a closed subset of ℝ^{J} rather than necessarily the whole of ℝ^{J}. We chose not to do this in this paper for reasons that are given at the end in Section 4.)

We introduce the notion of a *problem structure* ⟨𝕋, 𝒫r⟩, where 𝕋 is a nonempty *problem set* and 𝒫r is a function on 𝕋 such that, for all *T* ∈ 𝕋, 𝒫r_{T} : ℝ^{J} → ℝ_{+}. Intuitively, 𝒫r_{T}(**x**) measures how far **x** is from being a solution of the problem *T*; accordingly, we call **x** a *solution* of *T* if 𝒫r_{T}(**x**) = 0.

For example, for the convex feasibility problem (1) we may choose

$$\mathbb{T}=\left\{\{{C}_{1},\dots,{C}_{I}\}\ \middle|\ I\ \text{is a positive integer and, for}\ 1\le i\le I,\ {C}_{i}\ \text{is a closed convex subset of}\ {\mathbb{R}}^{J}\right\}$$

(2)

and

$$\mathcal{P}{r}_{\{{C}_{1},\dots,{C}_{I}\}}(x)=\sqrt{\sum_{i=1}^{I}{\left(d(x,{C}_{i})\right)}^{2}},$$

(3)

where *d*(**x**, *C*_{i}) denotes the Euclidean distance of **x** from the set *C*_{i}. With these choices, 𝒫r_{{C1,…,CI}}(**x**) = 0 if and only if **x** ∈ *C*, so the solutions of the problem {*C*_{1},…,*C*_{I}} are exactly the points of *C*.

**Definition 1**. An *algorithm* **P** for ⟨𝕋, 𝒫r⟩ assigns to each *T* ∈ 𝕋 an algorithmic operator **P**_{T} : ℝ^{J} → ℝ^{J}. **P** is said to be *bounded perturbations resilient* if, for all *T* ∈ 𝕋, the following is the case: if the sequence ${\left({({\mathbf{P}}_{T})}^{k}\mathit{x}\right)}_{k=0}^{\infty}$ converges to a solution of *T* for all **x** ∈ ℝ^{J}, then any sequence ${\left({\mathit{x}}^{k}\right)}_{k=0}^{\infty}$ of points in ℝ^{J} also converges to a solution of *T* provided that, for all *k* ≥ 0,

$${\mathit{x}}^{k+1}={\mathbf{P}}_{T}\phantom{\rule{thinmathspace}{0ex}}({\mathit{x}}^{k}+{\beta}_{k}{\mathit{\upsilon}}^{k}),$$

(4)

where β_{k}**υ**^{k} are bounded perturbations, meaning that β_{k} are real nonnegative numbers such that $\sum _{k=0}^{\infty}{\beta}_{k}<\infty$ and the sequence ${\left({\mathit{\upsilon}}^{k}\right)}_{k=0}^{\infty}$ is bounded.
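Definition 1 can be exercised numerically. The following sketch (a toy example of ours, with assumed hyperplanes and random perturbation directions) runs the perturbed iteration (4) with summable β_k = 0.5^k and bounded unit vectors υ^k; the perturbed iterates still reach the intersection point.

```python
import numpy as np

rng = np.random.default_rng(0)

def P(x):
    # One sweep of sequential projections onto the hyperplanes x1 = 1 and
    # x1 + x2 = 2; their intersection is the single point (1, 1).
    for a, b in [(np.array([1.0, 0.0]), 1.0), (np.array([1.0, 1.0]), 2.0)]:
        x = x - ((a @ x - b) / (a @ a)) * a
    return x

x = np.zeros(2)
for k in range(200):
    v = rng.standard_normal(2)
    v /= np.linalg.norm(v)         # bounded perturbation vectors, ||v^k|| = 1
    beta = 0.5 ** k                # nonnegative and summable: sum beta_k = 2
    x = P(x + beta * v)            # the perturbed iteration (4)
```

Because the β_k are summable, the total effect of the perturbations is finite, which is the intuition behind resilience; superiorization will later choose the υ^k purposefully instead of randomly.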

We give next specific instances of bounded perturbations resilient algorithms for solving the convex feasibility problem as in (2) and (3), from the classes of SAP and BIP methods. We do this by defining **P**_{{C1,…,CI}} for an arbitrary but fixed element {*C*_{1},…,*C*_{I}} of the 𝕋 of (2) for the different algorithms.

To define **P**_{{C1,…,CI}} for the SAP instances, we make use of *index vectors*, which are nonempty ordered sets *t* = (*t*_{1},…,*t*_{N}), where *t*_{n} ∈ {1,…,*I*} for 1 ≤ *n* ≤ *N*. For an index vector *t* we define the composite operator (in which $P_{C_i}$ denotes the orthogonal projection of ℝ^{J} onto *C*_{i})

$$P[t]={P}_{{C}_{{t}_{N}}}\cdots {P}_{{C}_{{t}_{1}}}.$$

(5)

A finite set Ω of index vectors is called *fit* if, for each *i* ∈ {1,…,*I*}, there exists *t* = (*t*_{1},…,*t*_{N}) ∈ Ω such that *t*_{n} = *i* for some *n* ∈ {1,…,*N*}. If Ω is a fit set of index vectors and ω : Ω → ℝ_{++} is such that Σ_{t∈Ω} ω(*t*) = 1, then we call the pair (Ω, ω) an *amalgamator* and define, for **x** ∈ ℝ^{J},

$${\mathbf{P}}_{\{{C}_{1},\dots ,{C}_{I}\}}\mathit{x}={\displaystyle \sum _{t\in \mathrm{\Omega}}\omega (t)P[t]\mathit{x}.}$$

(6)
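The string-averaging operator (6) can be sketched in a few lines (the hyperplanes, the strings, and the weights below are assumptions chosen for illustration, not data from the paper):

```python
import numpy as np

def proj(x, a, b):
    # Orthogonal projection onto the hyperplane {y : <a, y> = b}.
    return x - ((a @ x - b) / (a @ a)) * a

def sap_operator(x, sets, strings, weights):
    # (6): convex combination, over the fit set Omega, of the string
    # products P[t] of (5), each applying its projections in order.
    out = np.zeros_like(x)
    for t, w in zip(strings, weights):
        y = x.copy()
        for i in t:                  # P[t] = P_{C_{t_N}} ... P_{C_{t_1}}
            y = proj(y, *sets[i])
        out += w * y
    return out

# Assumed toy instance: three hyperplanes in R^2 meeting at (1, 1).
sets = [(np.array([1.0, 0.0]), 1.0),   # x1 = 1
        (np.array([0.0, 1.0]), 1.0),   # x2 = 1
        (np.array([1.0, 1.0]), 2.0)]   # x1 + x2 = 2
strings = [(0, 2), (1, 2)]             # a fit set: every index 0, 1, 2 occurs
weights = [0.5, 0.5]                   # omega is positive and sums to 1

x = np.zeros(2)
for _ in range(200):
    x = sap_operator(x, sets, strings, weights)
```

The strings can be processed in parallel and only their endpoints are averaged, which is the flexibility for parallel architectures mentioned in the introduction.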

For this algorithmic operator we have the following bounded perturbations resilience theorem.

**Theorem 1. [6, Section II]** *If C of (1) is nonempty*, ${({\beta}_{k})}_{k=0}^{\infty}$ *is a sequence of nonnegative real numbers such that* $\sum_{k=0}^{\infty}{\beta}_{k}<\infty$, *and* ${({\mathit{\upsilon}}^{k})}_{k=0}^{\infty}$ *is a bounded sequence of points in* ℝ^{J}, *then for any amalgamator* (Ω, ω) *and any* **x**^{0} ∈ ℝ^{J}, *the sequence* ${({\mathit{x}}^{k})}_{k=0}^{\infty}$ *generated by*

$${\mathit{x}}^{k+1}={\mathbf{P}}_{\{{C}_{1},\dots ,{C}_{I}\}}\phantom{\rule{thinmathspace}{0ex}}({\mathit{x}}^{k}+{\beta}_{k}{\mathit{\upsilon}}^{k}),\phantom{\rule{thinmathspace}{0ex}}\forall k\ge 0,$$

(7)

*converges, and its limit is in C*. (The statement of this theorem in [6] is for positive β_{k}s, but the proof given there applies to nonnegative β_{k}s.)

**Corollary 1**. *For any amalgamator* (Ω, ω), *the algorithm* **P** *defined by the algorithmic operator* **P**_{{C1,…,CI}} *is bounded perturbations resilient*.

**Proof**. Assume that for *T* = {*C*_{1},…,*C*_{I}} the sequence ${\left({({\mathbf{P}}_{T})}^{k}\mathit{x}\right)}_{k=0}^{\infty}$ converges to a solution of *T* for all **x** ∈ ℝ^{J}. By (3), a solution of *T* is exactly a point of *C*, and so *C* is nonempty. Theorem 1 now guarantees that any sequence ${({\mathit{x}}^{k})}_{k=0}^{\infty}$ satisfying (4) converges to a point of *C*, that is, to a solution of *T*.

Next we look at a member of the family of BIP methods. Considering the convex feasibility problem (1), for 1 ≤ *u* ≤ *U*, let the *block* *B*_{u} be a nonempty subset of {1,…,*I*}, with the blocks together covering {1,…,*I*}. Define the algorithmic operator

$${\mathbf{Q}}_{\{{C}_{1},\dots ,{C}_{I}\}}={Q}_{U}\cdots {Q}_{1},$$

(8)

where, for **x** ∈ ℝ^{J} and 1 ≤ *u* ≤ *U*,

$${Q}_{u}\mathit{x}=\frac{1}{R}\sum_{i\in {B}_{u}}{P}_{{C}_{i}}\mathit{x}+\frac{R-\left|{B}_{u}\right|}{R}\,\mathit{x},$$

(9)

and

$$R=\max\left\{\left|{B}_{u}\right|\ \middle|\ 1\le u\le U\right\}.$$

(10)
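Equations (8)–(10) can be sketched as follows (the hyperplanes and the block partition are assumptions chosen for illustration):

```python
import numpy as np

def proj(x, a, b):
    # Orthogonal projection onto the hyperplane {y : <a, y> = b}.
    return x - ((a @ x - b) / (a @ a)) * a

def bip_operator(x, sets, blocks):
    # (8)-(10): apply Q_U ... Q_1; each Q_u averages the projections onto
    # the sets of its block B_u, damped by the maximal block size R of (10).
    R = max(len(B) for B in blocks)
    for B in blocks:                           # blocks applied in sequence, as in (8)
        x = sum(proj(x, *sets[i]) for i in B) / R + ((R - len(B)) / R) * x
    return x

# Assumed toy instance: three hyperplanes in R^2 meeting at (1, 1).
sets = [(np.array([1.0, 0.0]), 1.0),   # x1 = 1
        (np.array([0.0, 1.0]), 1.0),   # x2 = 1
        (np.array([1.0, 1.0]), 2.0)]   # x1 + x2 = 2
blocks = [[0, 1], [2]]                 # the blocks cover {0, 1, 2}; R = 2

x = np.zeros(2)
for _ in range(100):
    x = bip_operator(x, sets, blocks)
```

Within a block the projections are independent of one another, so they can be computed simultaneously; only their average is needed before moving to the next block.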

The iterative procedure **x**^{k+1} = **Q**_{{C1,…,CI}}**x**^{k} is a member of the family of BIP methods. For this algorithmic operator we have the following bounded perturbations resilience theorem.

**Theorem 2**. [23] *If C of* (1) *is nonempty*, $\{1,\dots,I\}=\cup_{u=1}^{U}{B}_{u}$, ${({\beta}_{k})}_{k=0}^{\infty}$ *is a sequence of nonnegative real numbers such that* $\sum_{k=0}^{\infty}{\beta}_{k}<\infty$, *and* ${({\mathit{\upsilon}}^{k})}_{k=0}^{\infty}$ *is a bounded sequence of points in* ℝ^{J}, *then for any* **x**^{0} ∈ ℝ^{J}, *the sequence* ${({\mathit{x}}^{k})}_{k=0}^{\infty}$ *generated by*

$${\mathit{x}}^{k+1}={\mathbf{Q}}_{\{{C}_{1},\dots ,{C}_{I}\}}({\mathit{x}}^{k}+{\beta}_{k}{\mathit{\upsilon}}^{k}),\phantom{\rule{thinmathspace}{0ex}}\forall k\ge 0,$$

(11)

*converges, and its limit is in C*. (This is a special case of Theorem 2 in [23] given here without a relaxation parameter. Also, that theorem is stated for positive β_{k}s, but the proof given there applies to nonnegative β_{k}s.)

**Corollary 2.** *The algorithm* **Q** *defined by the algorithmic operator* **Q**_{{C1,…,CI}} *is bounded perturbations resilient*.

**Proof.** Replace in the proof of Corollary 1 **P** by **Q** and Theorem 1 by Theorem 2.

Further bounded perturbations resilience theorems are available in a Banach space setting; see [7, 8]. Thus the theory of bounded perturbations resilient algorithms already contains some solid mathematical results. In contrast, the superiorization theory that we present next is at the stage of being a collection of heuristic ideas; a full mathematical theory still needs to be developed. However, there are practical demonstrations of its potential usefulness; see [6, 23, 28] and the illustrations in Section 3 below.

For a problem structure ⟨𝕋, 𝒫r⟩, a *T* ∈ 𝕋, an ε ∈ ℝ_{++} and a sequence $S={({\mathit{x}}^{k})}_{k=0}^{\infty}$ of points in ℝ^{J}, we use *O*(*T*, ε, *S*) to denote the **x** ∈ ℝ^{J} that has the following properties: 𝒫r_{T}(**x**) ≤ ε and there is a nonnegative integer *K* such that **x**^{K} = **x** and, for all *k* < *K*, 𝒫r_{T}(**x**^{k}) > ε. Clearly, if such an **x** exists, then it is unique; we call it the *output* of *S* for ε. If no such **x** exists, then we say that *O*(*T*, ε, *S*) is undefined.

**Lemma 1**. *If* 𝒫r_{T} *is continuous and the sequence S converges to a solution of T, then O*(*T*, ε, *S*) *is defined for every* ε ∈ ℝ_{++}.
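In code, the output *O*(*T*, ε, *S*) is simply the first term of *S* whose proximity drops to ε or below. A minimal sketch, with an assumed toy proximity function standing in for 𝒫r_{T}:

```python
def output(prox, eps, seq):
    # O(T, eps, S): the first term of S whose proximity prox (standing in
    # for the function r_T) is at most eps; None models "undefined".
    for x in seq:
        if prox(x) <= eps:
            return x
    return None

# Assumed toy sequence with proximity |x|: 1, 1/2, 1/4, ...
S = [1.0 / 2 ** k for k in range(20)]
first_within = output(abs, 0.1, S)   # the first term that is <= 0.1
```

For this sequence the output for ε = 0.1 is 1/16, since 1/8 still exceeds 0.1; for an ε smaller than every term of a finite run, the output is undefined.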

Given an algorithm **P** for a problem structure ⟨𝕋, 𝒫r⟩, a *T* ∈ 𝕋 and an **x̄** ∈ ℝ^{J}, let $R(T,\overline{\mathit{x}})={({({\mathbf{P}}_{T})}^{k}\overline{\mathit{x}})}_{k=0}^{\infty}$. For a function ϕ : ℝ^{J} → ℝ, the *superiorization methodology* should provide us with an algorithm that produces a sequence $S(T,\overline{\mathit{x}},\varphi )={({\mathit{x}}^{k})}_{k=0}^{\infty}$ such that, for any ε ∈ ℝ_{++} for which *O*(*T*, ε, *R*(*T*, **x̄**)) is defined, *O*(*T*, ε, *S*(*T*, **x̄**, ϕ)) is also defined and is superior with respect to ϕ. We make the following assumptions.

**Assumptions**

- ⟨𝕋, 𝒫r⟩ is a problem structure such that 𝒫r_{T} is continuous for all *T* ∈ 𝕋.
- **P** is a bounded perturbations resilient algorithm for ⟨𝕋, 𝒫r⟩ such that, for all *T* ∈ 𝕋, **P**_{T} is continuous and, if **x** is not a solution of *T*, then 𝒫r_{T}(**P**_{T}**x**) < 𝒫r_{T}(**x**).
- ϕ : ℝ^{J} → ℝ is an everywhere real-valued convex function, defined on the whole space.

Under these assumptions, we now describe the algorithm to produce the sequence $S(T,\overline{\mathit{x}},\varphi )={({\mathit{x}}^{k})}_{k=0}^{\infty}$ and present and prove Theorem 3 below.

The algorithm assumes that we have available a summable sequence ${({\gamma}_{\ell})}_{\ell =0}^{\infty}$ of positive real numbers. It is easy to generate such sequences; e.g., we can use γ_{ℓ} = *a*^{ℓ}, where 0 < *a* < 1. The algorithm generates, simultaneously with the sequence ${({\mathit{x}}^{k})}_{k=0}^{\infty}$, sequences ${({\mathit{\upsilon}}^{k})}_{k=0}^{\infty}$ and ${({\beta}_{k})}_{k=0}^{\infty}$. The latter is generated as a subsequence of ${({\gamma}_{\ell})}_{\ell =0}^{\infty}$; clearly, the resulting sequence ${({\beta}_{k})}_{k=0}^{\infty}$ of positive real numbers is summable. We first specify the algorithm and then discuss it. The algorithm depends on the specified **x̄**, ϕ, ${({\gamma}_{\ell})}_{\ell =0}^{\infty}$, 𝒫r_{T} and **P**_{T}.

**Superiorized Version of Algorithm P**

(i) | set k = 0 |

(ii) | set x^{k} = x̄ |

(iii) | set ℓ = 0 |

(iv) | repeat |

(v) | set g to a subgradient of ϕ at x^{k} |

(vi) | if ‖g‖ > 0 |

(vii) | then set υ^{k} = −g / ‖g‖ |

(viii) | else set υ^{k} = g |

(ix) | set continue = true |

(x) | while continue |

(xi) | set β_{k} = γ_{ℓ} |

(xii) | set y = x^{k} + β_{k}υ^{k} |

(xiii) | if ϕ(y) ≤ ϕ(x^{k}) and 𝒫r_{T}(P_{T}y) < 𝒫r_{T}(x^{k}) then |

(xiv) | set x^{k+1} = P_{T}y |

(xv) | set continue = false |

(xvi) | set ℓ = ℓ + 1 |

(xvii) | set k = k + 1 |

Sometimes it is useful to emphasize the function ϕ for which we are superiorizing, in which case we refer to the algorithm above as the ϕ-*superiorized version of algorithm* **P**. It is important to bear in mind that the sequence *S* produced by the algorithm depends also on the initial point **x̄**, the selection of the subgradient in Line (v) of the algorithm, the summable sequence ${({\gamma}_{\ell})}_{\ell =0}^{\infty}$, and the problem *T* ∈ 𝕋. In addition, the output *O*(*T*, ε, *S*) of the algorithm depends on the stopping criterion ε.
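The pseudocode translates almost line for line into Python. The sketch below is ours, not the authors' implementation; the toy problem (a single hyperplane in ℝ², ϕ(x) = ‖x‖²) and the halting rule r_T(x^k) ≤ ε, which stands in for taking the output O(T, ε, S), are assumptions chosen for illustration.

```python
import numpy as np

def superiorize(P_T, r_T, phi, subgrad, x_bar, eps, gamma=0.999):
    # Transcription of the Superiorized Version of Algorithm P, lines (i)-(xvii).
    # Halting when r_T(x) <= eps stands in for taking the output O(T, eps, S).
    x = np.asarray(x_bar, dtype=float)
    ell = 0
    while r_T(x) > eps:
        g = subgrad(x)                         # line (v): subgradient of phi at x^k
        norm = np.linalg.norm(g)
        v = -g / norm if norm > 0 else g       # lines (vi)-(viii)
        while True:                            # lines (x)-(xvi)
            beta = gamma ** ell                # line (xi): beta_k drawn from (gamma^ell)
            y = x + beta * v                   # line (xii)
            ell += 1
            if phi(y) <= phi(x) and r_T(P_T(y)) < r_T(x):
                x = P_T(y)                     # line (xiv): x^{k+1} = P_T y
                break
    return x

# Assumed toy problem: the hyperplane x1 + x2 = 2 in R^2, superiorized
# for phi(x) = ||x||^2 (whose constrained minimizer is (1, 1)).
a, b = np.array([1.0, 1.0]), 2.0
P_T = lambda x: x - ((a @ x - b) / (a @ a)) * a   # projection onto the hyperplane
r_T = lambda x: abs(float(a @ x - b))             # proximity function
phi = lambda x: float(x @ x)
x_out = superiorize(P_T, r_T, phi, subgrad=lambda x: 2 * x,
                    x_bar=np.array([3.0, 0.0]), eps=1e-6)
```

For this toy data the superiorized run halts at (2, 0) with ϕ-value 4, whereas the unperturbed projection of the same starting point lands at (2.5, −0.5) with ϕ-value 6.5: feasible either way, but the perturbed iterate is superior with respect to ϕ.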

**Theorem 3**. *Under the Assumptions listed above, the Superiorized Version of Algorithm* **P** *will produce a sequence S*(*T*, **x̄**, ϕ) *of points in* ℝ^{J} *that either contains a solution of T or is infinite. In the latter case, if the sequence* ${\left({({\mathbf{P}}_{T})}^{k}\mathit{x}\right)}_{k=0}^{\infty}$ *converges to a solution of T for all* **x** ∈ ℝ^{J}, *then S*(*T*, **x̄**, ϕ) *also converges to a solution of T and, consequently, O*(*T*, ε, *S*(*T*, **x̄**, ϕ)) *is defined for all* ε ∈ ℝ_{++}.

**Proof**. Assume that the sequence *S*(*T*, **x̄**, ϕ) produced by the Superiorized Version of Algorithm **P** does not contain a solution of *T*. We first show that in this case the algorithm generates an infinite sequence ${({\mathit{x}}^{k})}_{k=0}^{\infty}$. This is equivalent to saying that, for any **x**^{k} that has been generated already, the condition in Line (xiii) of the algorithm will be satisfied sooner or later (and hence **x**^{k+1} will be generated).

Unfortunately, this theorem does not go far enough. To demonstrate that a methodology leads to superiorization we should be proving (under some assumptions) a result like ϕ(*O*(*T*, ε, *S*(*T*, **x̄**, ϕ))) < ϕ(*O*(*T*, ε, *R*(*T*, **x̄**))) in place of the weaker result at the end of the statement of the theorem. Currently we do not have any such proofs and so we are restricted to providing practical demonstrations that our methodology leads to superiorization in the desired sense. In the next section we provide such demonstrations for the Superiorized Version of Algorithm **P**, for two different **P**s.

**3. Illustrations of the superiorization methodology**

We illustrate the superiorization methodology on a problem of reconstructing a head cross-section (based on Figure 4.6(a) of [27]) from its projections using both an SAP and a BIP algorithm. (All the computational work reported in this section was done using SNARK09 [24]; the phantom, the data, the reconstructions and displays were all generated within this same framework.) Figure 1(a) shows a 243 × 243 digitization of the head phantom with *J* = 59,049 pixels. An **x** ∈ ℝ^{J} is interpreted as a vector of pixel values, whose components represent the average X-ray linear attenuation coefficients (measured per centimeter) within the 59,049 pixels. Each pixel is of size 0.0752 × 0.0752 (measured in centimeters). The pixel values range from 0 to 0.5639. For display purposes, any value below 0.204 is shown as black (gray value 0) and any value above 0.21675 is shown as white (gray value 255), with a linear mapping of the pixel values into gray values in between (the same convention is used in displaying reconstructed images in Figures 1(b)–(e)).

Figure 1. A head phantom (a) and its reconstructions from underdetermined consistent data obtained for 82 views using: (b) a variant of ART, (c) the TV-superiorized version of the same variant of ART, (d) a block-iterative projection method, and (e) the TV-superiorized version of the same block-iterative projection method.

Data were collected by calculating line integrals across the digitized image for 82 sets of equally spaced parallel lines, with *I* = 25,452 lines in total. Each data item determines a hyperplane in ℝ^{J}. Since the digitized phantom lies in the intersection of all the hyperplanes, we have here an instance of the convex feasibility problem with a nonempty *C*, satisfying the first condition of the statements of Theorems 1 and 2.

For our illustration, we chose the SAP algorithm **P**_{{C1,…,CI}} as determined by (5)–(6) with Ω = {(1,…,*I*)} and ω(1,…,*I*) = 1. This is a classical method that in tomography would be considered a variant of the algebraic reconstruction techniques (ART) [27, Chapter 11]. For the BIP algorithm we chose **Q**_{{C1,…,CI}} as determined by (8)–(10) with *U* = 82 and each block corresponding to one of the 82 sets of parallel lines along which the data are collected.

The function ϕ for which we superiorized is defined so that, for any **x** ∈ ℝ^{J}, ϕ(**x**) is the total variation (TV) of the corresponding 243 × 243 image: if *q*_{g,h} denotes the pixel value in row *g* and column *h* of that image, then ϕ(**x**) is

$$\sum_{g=1}^{242}\sum_{h=1}^{242}\sqrt{{({q}_{g+1,h}-{q}_{g,h})}^{2}+{({q}_{g,h+1}-{q}_{g,h})}^{2}}.$$

(12)
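Formula (12) vectorizes directly; a minimal sketch (the 5 × 5 test images are assumptions chosen for illustration):

```python
import numpy as np

def total_variation(q):
    # (12): sum, over all pixels that have both a right and a lower
    # neighbour, of the length of the discrete (forward-difference) gradient.
    dg = q[1:, :-1] - q[:-1, :-1]    # q_{g+1,h} - q_{g,h}
    dh = q[:-1, 1:] - q[:-1, :-1]    # q_{g,h+1} - q_{g,h}
    return float(np.sqrt(dg ** 2 + dh ** 2).sum())

# Assumed test images: a constant image has TV = 0; a unit step edge
# contributes 1 for each interior row that it crosses.
flat = np.zeros((5, 5))
step = np.zeros((5, 5))
step[:, 3:] = 1.0
```

Small TV rewards piecewise-constant images with few, sharp edges, which is why superiorizing for TV suppresses the streak-like artifacts visible in the unsuperiorized reconstructions.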

For the TV-superiorized versions of the algorithms **P**_{{C1,…,CI}} and **Q**_{{C1,…,CI}} of the previous paragraph we selected **x̄** to be the origin (the vector of all zeros) and γ_{ℓ} = 0.999^{ℓ}. Also, we set ε = 0.01 for the stopping criterion, which is small compared to the value of 𝒫r_{T} at the initial point.

For each of the four algorithms (**P**_{{C1,…,CI}}, **Q**_{{C1,…,CI}} and their TV-superiorized versions), the sequence *S* that is produced by it is such that the output *O*(*T*, ε, *S*) is defined; see Figures 1(b)–(e) for the images that correspond to these outputs. Clearly, the superiorized reconstructions in Figures 1(c) and 1(e) are visually superior to their not superiorized versions in Figures 1(b) and 1(d), respectively. More importantly from the point of view of our theory, consider Table 1. As stated in the last paragraph of the previous section, we would like to have that ϕ(*O*(*T*, ε, *S*(*T*, **x̄**, ϕ))) < ϕ(*O*(*T*, ε, *R*(*T*, **x̄**))). While we are not able to prove that this is the case in general, Table 1 clearly shows it to be the case for the two algorithms discussed in this section.

Table 1. Values of TV for the outputs of the various algorithms. The second column is for the superiorized versions and the third column is for the original versions.

A final important point that is illustrated by the experiments in this section is that, from the practical point of view, TV-superiorization is as useful as TV-optimization. This is because a realistic phantom, such as the one in Figure 1(a), is unlikely to be TV-minimizing subject to the constraints provided by the measurements. In fact, the TV value of our phantom is 450.53, which is larger than that for either of the TV-superiorized reconstructions in the second column of Table 1. While an optimization method should be able to find an image with a lower TV value, there is no practical point in doing so. Since the underlying aim of what we are doing is to estimate the phantom from the data, producing an image whose TV value is further from the TV value of the phantom than that of our superiorized reconstructions is unlikely to be helpful towards achieving this aim.

**4. Discussion and conclusions**

Stability of algorithms under perturbations is generally studied in numerical analysis with the aim of proving that an algorithm is stable so that it can “endure” all kinds of imperfections in the data or in the computational performance. Here we have taken a proactive approach designed to extract specific benefits from the kind of stability that we term perturbation resilience. We have been able to do this in a context that includes, but is much more general than, feasibility-optimization for intersections of convex sets.

Our premise has been that (1) there is available a bounded perturbations resilient iterative algorithm that efficiently solves certain types of problems and (2) we desire to make use of perturbations to find, for these problems, solutions that, according to some criterion, are superior to the ones that we would get without employing perturbations. To accomplish this, one must have a way of introducing perturbations that take into account the criterion according to which we wish to “superiorize” the solutions of the problems.

We have set forth the fundamental principle, have given some mathematical formulations and results, and have shown potential benefits (in the field of image reconstruction from projections). However, the superiorization methodology needs to be studied further from the mathematical, algorithmic and computational points of view in order to unveil its general applicability to inverse problems. In particular, we need to investigate the computational efficiency of the superiorized versions of algorithms compared to their original versions and, more importantly, compared to actual optimization algorithms. Such results have been reported for specific algorithms (e.g., Table I of [6] reports on a case in which a superiorized algorithm is eight times faster than the original version and is four times faster than an optimization algorithm proposed in [21]). However, further testing of the computational efficiency of the algorithms produced by the new general approach will have to be carried out under a variety of circumstances. As algorithms are developed and tested, algorithmic development must be accompanied by mathematical validation and by applications to simulated and real data from various relevant fields of application.

Validating the concept means proving precise statements about the behavior of iterates ${({\mathit{x}}^{k})}_{k=0}^{\infty}$ generated by the superiorized versions of algorithms. Under what conditions do they converge? Can their limit points be characterized? How would different choices of the perturbation coefficients β_{k} and the perturbation vectors **υ**^{k} affect the superiorization process? Can different schemes for generating the β_{k}s be developed, implemented, investigated? Enlarging the arsenal of bounded perturbations resilient algorithms means generalizing existing proofs for such algorithms and developing new theories that will bring additional ones into the family of bounded perturbations resilient algorithms. Further developments should include extension to the problem of finding a common fixed point of a family of operators (a direct generalization of the convex feasibility problem, see, e.g., [29]), the possibility to generalize the concept of superiorization so that it will be applicable to the split feasibility problem, see [12, 34, 35], and studying the behavior of superiorization algorithms in inconsistent situations when the underlying solution set is empty. Thus we view the material in this paper as only an initial step in a promising new field of endeavor for solving inverse problems.

As a final comment we return to the issue raised at the beginning of Section 2 of generalizing the approach to problems that are over a closed subset of ℝ^{J} rather than the whole of ℝ^{J}. This would allow us, for example, to superiorize an entropy-based function ϕ that is defined only over the positive orthant. It seems clear that this is doable, but a rigorous reformulation of all that we said requires numerous changes. For example, in Theorems 1 and 2, we would have to include additional condition(s) to ensure that (**x**^{k} + β_{k}**υ**^{k}) remains in the closed subset over which the problem is defined, for all *k* ≥ 0.

We appreciate the constructive comments of two referees. This work was supported by Award Number R01HL070472 from the National Heart, Lung, And Blood Institute. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Heart, Lung, And Blood Institute or the National Institutes of Health.

AMS classification scheme numbers: 65Y20, 68W25, 90C06, 90C25, 68U10

**References**

1. Aharoni R, Censor Y. Block-iterative projection methods for parallel computation of solutions to convex feasibility problems. Linear Algebra Appl. 1989;120:165–175.

2. Bauschke HH, Borwein JM. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996;38:367–426.

3. Bauschke HH, Combettes PL, Kruk SG. Extrapolation algorithm for affine-convex feasibility problems. Numer. Algorithms. 2006;41:239–274.

4. Bauschke HH, Matoušková E, Reich S. Projection and proximal point methods: convergence results and counterexamples. Nonlinear Anal. 2004;56:715–738.

5. Butnariu D, Censor Y. On the behavior of a block-iterative projection method for solving convex feasibility problems. Int. J. Comput Math. 1990;34:79–94.

6. Butnariu D, Davidi R, Herman GT, Kazantsev IG. Stable convergence behavior under summable perturbations of a class of projection methods for convex feasibility and optimization problems. IEEE J. Sel. Top. Sign. Process. 2007;1:540–547.

7. Butnariu D, Reich S, Zaslavski AJ. Convergence to fixed points of inexact orbits of Bregman-monotone and nonexpansive operators in Banach spaces. In: Nathansky HF, de Buen BG, Goebel K, Kirk WA, Sims B, editors. Fixed Point Theory and Applications. Yokohama: Yokohama Publishers; 2006. pp. 11–32.

8. Butnariu D, Reich S, Zaslavski AJ. Stable convergence theorems for infinite products and powers of nonexpansive mappings. Numer. Func. Anal. Opt. 2008;29:304–323.

9. Byrne CL. Applied Iterative Methods. AK Peters; 2008.

10. Censor Y, Chen W, Combettes PL, Davidi R, Herman GT. On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Opt. Online. 2009 http://www.optimization-online.org/DB_HTML/2009/12/2500.html.

11. Censor Y, Elfving T, Herman GT. Averaging strings of sequential iterations for convex feasibility problems. In: Butnariu D, Censor Y, Reich S, editors. Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications. Elsevier Science Publishers; 2001. pp. 101–114.

12. Censor Y, Elfving T, Kopf N, Bortfeld T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Problems. 2005;21:2071–2084.

13. Censor Y, Gordon D, Gordon R. BICAV: A block-iterative, parallel algorithm for sparse systems with pixel-related weighting. IEEE Trans. Med. Imaging. 2001;20:1050–1060.

14. Censor Y, Segal A. On the string averaging method for sparse common fixed points problems. Int. Trans. Oper. Res. 2009;16:481–494.

15. Censor Y, Tom E. Convergence of string-averaging projection schemes for inconsistent convex feasibility problems. Optim. Methods Softw. 2003;18:543–554.

16. Censor Y, Zenios SA. Parallel Optimization: Theory, Algorithms and Applications. Oxford University Press; 1997.

17. Chinneck JW. Feasibility and Infeasibility in Optimization: Algorithms and Computational Methods. Springer; 2007.

18. Combettes PL. The convex feasibility problem in image recovery. Adv. Imag. Elec. Phys. 1996;95:155–270.

19. Combettes PL. Hilbertian convex feasibility problem: Convergence of projection methods. Appl. Math. Opt. 1997;35:311–330.

20. Combettes PL. Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections. IEEE T. Image Process. 1997;6:493–506.

21. Combettes PL, Luo J. An adaptive level set method for nondifferentiable constrained image recovery. IEEE T. Image Process. 2002;11:1295–1304.

22. Crombez G. Finding common fixed points of strict paracontractions by averaging strings of sequential iterations. J. Nonlinear Convex Anal. 2002;3:345–351.

23. Davidi R, Herman GT, Censor Y. Perturbation-resilient block-iterative projection methods with application to image reconstruction from projections. Int. Trans. Oper. Res. 2009;16:505–524.

24. Davidi R, Herman GT, Klukowska J. SNARK09: A programming system for the reconstruction of 2D images from 1D projections. 2009. http://www.snark09.com/

25. Eggermont PPB, Herman GT, Lent A. Iterative algorithms for large partitioned linear systems, with applications to image reconstruction. Linear Algebra Appl. 1981;40:37–67.

26. González-Castaño FJ, García-Palomares UM, Alba-Castro JL, Pousada-Carballo JM. Fast image recovery using dynamic load balancing in parallel architectures, by means of incomplete projections. IEEE T. Image Process. 2001;10:493–499.

27. Herman GT. Fundamentals of Computerized Tomography: Image Reconstruction from Projections. 2nd ed. Springer; 2009.

28. Herman GT, Davidi R. On image reconstruction from a small number of projections. Inverse Problems. 2008;24:045011.

29. Kiwiel KC, Łopuch B. Surrogate projection methods for finding fixed points of firmly nonexpansive mappings. SIAM J. Optim. 1997;7:1084–1102.

30. Ottavy N. Strong convergence of projection-like methods in Hilbert spaces. J. Optim. Theory Appl. 1988;56:433–461.

31. Penfold SN, Schulte RW, Censor Y, Bashkirov V, McAllister S, Schubert KE, Rosenfeld AB. Block-iterative and string-averaging projection algorithms in proton computed tomography image reconstruction. In: Censor Y, Jiang M, Wang G, editors. Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems. Medical Physics Publishing; to appear.

32. Pierra G. Decomposition through formalization in a product space. Math. Program. 1984;28:96–115.

33. Rhee H. An application of the string averaging method to one-sided best simultaneous approximation. J. Korea Soc. Math. Educ. Ser. B Pure Appl. Math. 2003;10:49–56.

34. Schöpfer F, Schuster T, Louis AK. An iterative regularization method for the solution of the split feasibility problem in Banach spaces. Inverse Problems. 2008;24:055008.

35. Zhao J, Yang Q. Several solution methods for the split feasibility problem. Inverse Problems. 2005;21:1791–1799.
