
Int J Biomed Imaging. 2010; 2010: 780262.

Published online 2010 July 4. doi: 10.1155/2010/780262

PMCID: PMC2910490

The Faculty of Electrical Engineering and Communication, Brno University of Technology, 60200 Brno, Czech Republic

*Radim Kolar: Email: kolarr@feec.vutbr.cz

Academic Editor: Jiang Hsieh

Received 2009 November 11; Accepted 2010 May 22.

Copyright © 2010 Libor Kubecka et al.

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A method for correcting nonhomogeneous illumination, based on optimizing the parameters of a B-spline shading model with respect to Shannon's entropy, is presented. The entropy is evaluated via the Parzen windowing method (Mangin, 2000) combined with the spline-based shading model. This allows the derivatives of the entropy criterion to be expressed analytically, which enables efficient use of gradient-based optimization algorithms. Seven different gradient- and nongradient-based optimization algorithms were initially tested on a set of 40 simulated retinal images, generated by a model of the respective image acquisition system. Among the tested optimizers, the gradient-based optimizer with varying step showed the fastest convergence while providing the best precision. The final algorithm proved able to suppress approximately 70% of the artificially introduced nonhomogeneous illumination. To assess the practical utility of the method, it was qualitatively tested on a set of 336 real retinal images; it substantially reduced the illumination inhomogeneity in most cases. The primary application field of this method is the preprocessing of retinal images, in preparation for reliable segmentation or registration.

Improper scene illumination, as well as nonideal acquisition conditions due to, for example, a misadjusted imaging system, can introduce severe distortions into the resulting image. These distortions are usually perceived as smooth intensity variations across the image. Following the terminology commonly used in the processing of magnetic resonance (MR) images, we call these systematic intensity-level inhomogeneities the *bias field*. With such unevenness, subsequent image processing such as registration, segmentation, or pattern recognition may be substantially complicated; therefore, the correction of illumination inhomogeneities is highly desirable. Unfortunately, separating the bias field from the true underlying image is an under-constrained and ill-posed problem with fewer observations than free variables. For this reason, regularization of the problem is necessary.

Most existing bias correction methods assume that the bias field is multiplicative, slowly varying, and tissue independent. Many techniques ignore the noise and apply a log transform to make the bias field additive. The known illumination correction methods can be categorized into the following groups: filtering, segmentation-based, surface-fitting, and other methods.

Illumination inhomogeneities are generated during the acquisition process in systems of different modalities. Here, the proposed illumination correction method will be applied to retinal images from a confocal scanning laser ophthalmoscope (CSLO). This correction is an important preprocessing task in image segmentation and/or multimodal registration [1–4].

The earliest bias correction techniques were based on phantoms [5] with known structure; this approach enables estimation of the bias field, its inversion, and its removal. Provided a calibrated phantom is available, this method can be considered ground truth, but it is often not applicable in real situations.

Linear filtering methods [6, 7] try to estimate and eliminate the additive bias of the image using unsharp masking with a defocus larger than any object in the image. These techniques can be extended, via a nonlinear morphological approach, to multiplicative bias field estimation [6]. Homomorphic unsharp filtering methods [8–10] assume a separation of the low-frequency bias field from the higher frequencies of the image structures.

Some simple methods, such as [11], require expert supervision, which is time-consuming and often too subjective. In this approach, the expert specifies some parameters related to the intensity models of the different tissues and forms a list of intensity values and locations related to the background or to object classes. Then, the bias field over the image may be obtained by least-squares fitting to the intensity values at the preselected points. These methods are closely connected to segmentation-based methods. The authors of [12] combine shading correction with adaptive segmentation. They use a fuzzy C-means algorithm to allow the labeling of a pixel to be influenced by its immediate neighbors. The main benefits of this approach are high robustness to salt-and-pepper noise and the computational efficiency of the algorithm.

The expectation maximization (EM) algorithm is proposed in [13] to iteratively compute the optimal smooth bias field corrupting the data, based on classification into several tissue classes. Their formulation includes the bias distortion in the statistical model of the pixel distribution; that is, the bias field influences the distribution by locally changing its mean value. The algorithm iterates two steps: the E-step calculating the posterior tissue probabilities, and the M-step estimating the bias field. Styner's method [14] is based on a simplified model of the imaging process, a parametric model of tissue class statistics, and a polynomial model of the inhomogeneity field. The estimation of the parametric bias field is formulated as a nonlinear energy minimization problem solved by an evolution strategy.

Segmentation-based approaches raise the problem of selecting the number of classes, which have to be explicitly modeled. Furthermore, these algorithms unfortunately tend to converge to a local, nonoptimal minimum for some bias configurations, especially when more than two tissue classes are modeled [15]. The main problem of segmentation-based techniques is that determining the specific class-conditioned intensities is difficult precisely when the image is improperly illuminated, which is the very situation the illumination correction aims to remedy. Also, these methods usually assume that the intensity distribution of an image is normal and given by the distributions of the individual tissues, which is often invalid. These drawbacks are even more serious when correcting pathological image data. Therefore, Likar et al. [16] exploited the close connection between intensity nonuniformity correction and segmentation and proposed a method that iteratively interleaves them, so that both gradually improve until the final correction and segmentation are reached. The method is based on iterative minimization of the class square-error of intensity distributions caused by nonuniformity and on a nonparametric segmentation method. It makes no assumption on the distribution of tissue intensity and does not require initialization.

To overcome the segmentation problems, bias correction methods not requiring segmentation were designed based on a chosen image quality criterion. In [17], a presumed-histogram matching method is presented. The authors of [18] suggested an iterative optimization method that seeks the smooth multiplicative field maximizing the high-frequency content of the distribution of tissue intensity. It requires a parametric model for the bias field but not a decomposition of the intensities. Quality criteria derived from information theory have also been used in several applications. In [19], a polynomial multiplicative and additive shading model was introduced, employing a cost function based on image entropy and the Powell optimizer. Mangin [20] uses image entropy combined with a measure of the field smoothness as the image quality cost function. The authors of [14] model the bias field using Legendre polynomials and use an energy function based on a multiclass model estimator and an evolutionary search algorithm.

Specific methods of illumination correction were proposed in the framework of retinal image processing and analysis. Simple and fast methods using a large-kernel median filter to obtain low-pass correction coefficients were used for CSLO image preprocessing in [21]. The authors of [22] model the bias field (background image) of a fundus (basic retinal) image as a white Gaussian random field and use the Mahalanobis distance for background pixel classification. Contrast normalization using a high-pass filtered image is used in [2] as one step of a microaneurysm detection procedure. An additive model of nonuniform illumination is used in [3], together with adaptive histogram equalization.

Other approaches exist, for example, in applications including illumination correction for face recognition [23, 24] and restoration of digitized documents [25, 26]. The latter application frequently uses a multiplicative model and specific properties of the processed images (e.g., illumination edges or geometrical distortion).

In this paper, we focus on methods estimating a parametric illumination field using a quality criterion derived from information theory. We use a multiplicative model of nonuniform illumination and a parametric local bias model to formulate the criterion function and its derivatives (Section 2). The results are presented in Section 3 for different optimizers, which were tested on two image sets: artificially illuminated images and real CSLO images. The assessment of the results concludes the paper in Section 4.

For the purpose of the illumination correction process, we need a model of image creation. We assume that each tissue class (vessels, optic disc, retinal surface) has a different mean value *ρ*(**x**) of the property measured by the imaging device. Moreover, every class of tissue has a characteristic texture, which can be modeled by the additive noise *n*_{tiss}(**x**). The ideal output signal *o*(**x**) therefore consists of piecewise constant values plus additive noise. Due to the finite size of the point spread function *h*(**x**) of the imaging device (different from the ideal Dirac impulse), this ideal signal is corrupted by convolution with *h*(**x**) and by the noise generated by the device, *n*(**x**) (thermal or electronic noise and the noise coming from digitization during the acquisition process). The overall equation of the observed image data can be formalized as follows (*b*(**x**) being the illumination), see Figure 1:

$$s\left(\mathbf{x}\right)=\left[\left(\rho\left(\mathbf{x}\right)+n_{\text{tiss}}\left(\mathbf{x}\right)\right)b\left(\mathbf{x}\right)\right]\ast h\left(\mathbf{x}\right)+n\left(\mathbf{x}\right)\approx o\left(\mathbf{x}\right)b\left(\mathbf{x}\right)+n\left(\mathbf{x}\right).$$

(1)

Model of image acquisition. Ideal image is corrupted by noise, multiplicative illumination, and finite resolution of the imaging device (characterized by Gaussian point spread function).

On the right-hand side of (1), we neglected the smoothing due to the imaging properties, as it does not play a substantial role in our problem (this disturbance need not be taken into account in retinal applications).

Thus, under the assumption that the bias involved in image creation process is multiplicative, and that it is the only important disturbance, we can reconstruct the original signal as

$$\widehat{o}\left(\mathbf{x}\right)=\frac{s\left(\mathbf{x}\right)}{\widehat{b}\left(\mathbf{x}\hspace{0.17em}\mid \hspace{0.17em}\mathrm{\Phi}\right)}-\frac{n\left(\mathbf{x}\right)}{\widehat{b}\left(\mathbf{x}\hspace{0.17em}\mid \hspace{0.17em}\mathrm{\Phi}\right)}\approx \frac{s\left(\mathbf{x}\right)}{\widehat{b}\left(\mathbf{x}\hspace{0.17em}\mid \hspace{0.17em}\mathrm{\Phi}\right)}=s\left(\mathbf{x}\right){\widehat{b}}^{-1}\left(\mathbf{x}\hspace{0.17em}\mid \hspace{0.17em}\mathrm{\Phi}\right),$$

(2)

where $\widehat{b}(\mathbf{x})$ is the optimal illumination bias model controlled by the parameters Φ_{i} forming the vector Φ, while ${\widehat{b}}^{-1}(\mathbf{x})$ is its reciprocal value. An example of a line input signal (a measured line profile) can be seen in Figure 2. The choice of a proper bias model *b*(**x**), as well as of an appropriate criterion function evaluating how well the nonhomogeneous illumination of the image is corrected for the current values of the bias model parameters, is crucial.
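In practice, the right-hand approximation of (2) is a pointwise multiplication of the observed image by the reciprocal bias field. A minimal NumPy sketch (the function name and the toy ramp field are illustrative assumptions, not from the paper):

```python
import numpy as np

def correct_illumination(s, b_inv):
    """Apply the reciprocal bias model of (2): o_hat(x) = s(x) * b_inv(x)."""
    return s * b_inv

# Toy example: a flat scene under a linear illumination ramp.
b = np.tile(np.linspace(0.5, 1.5, 64), (64, 1))  # multiplicative bias field
o = np.full((64, 64), 100.0)                     # ideal, evenly lit image
s = o * b                                        # observed image
o_hat = correct_illumination(s, 1.0 / b)
assert np.allclose(o_hat, o)                     # bias removed exactly
```

The whole difficulty therefore lies in estimating the reciprocal field itself, which the following sections address.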

In this section, we extend Likar's algorithm [19] and apply the modified algorithm to the retrospective shading correction of retinal images. We derive an expression for the criterion of quality of illumination compensation based on Shannon's entropy and on Parzen windowing probability estimation. Further, as a novelty, we derive analytical expressions for the derivatives of this criterion with respect to the parameters of the used B-spline multiplicative illumination model. The analytical expressions substantially ease the optimization calculations compared to purely numerical evaluation of the derivatives.

Generally, in accordance with [14, 19], the intensity transformation performed by the reciprocal bias model can be defined as a linear combination of *K* smooth basis functions *r*_{i}(**x**):

$${\widehat{b}}^{-1}\left(\mathbf{x}\right)={\displaystyle \sum}_{i=1}^{K}{\mathrm{\Phi}}_{i}{r}_{i}\left(\mathbf{x}\right),$$

(3)

where Φ_{i} are the parameters of the transform defining the contribution of each basis.

In order to regularize the problem of finding the bias field that would optimally correct the illumination inhomogeneity, we used the bases in the following form:

$${r}_{i}\left(\mathbf{x}\right)=\frac{{q}_{i}\left(\mathbf{x}\right)-{}^{m}c\left(\mathbf{x}\right)}{{}^{n}c\left(\mathbf{x}\right)},$$

(4)

where *q*_{i}(**x**) are smooth polynomial basis functions, ^{m}*c*(**x**) is a mean correction coefficient, and ^{n}*c*(**x**) is a normalization coefficient. The correction coefficients are defined using the constraints presented in [19], where the mean-preserving condition, preventing a global shift of the mean intensity of the resulting corrected image, is formulated as

$$\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}s\left(\mathbf{x}\right)=\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}s\left(\mathbf{x}\right){\widehat{b}}^{-1}\left(\mathbf{x}\right).$$

(5)

Here, Θ is the size of the image domain Ω (or region of interest, ROI) in pixels. Further, as we use the multiplicative bias model, the final intensity is differently sensitive to the values of different parameters, thus producing a nonlinear logarithmic scale (see [19] for a detailed description). This is undesirable in optimization. Therefore, to ensure that every parameter has the same influence on the intensity transform, a normalization constraint is imposed for every basis function kernel *r*_{i}(**x**):

$$\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}\left|s\left(\mathbf{x}\right){r}_{i}\left(\mathbf{x}\right)\right|=1,\hspace{1em}\forall i.$$

(6)

We have found that the illumination distortion of retinal images shows significant spatial variance, while both illumination models mentioned in [14, 19] have a rather global character, influencing the whole image even for distortions that are possibly only locally defined (e.g., when the image is corrupted in its upper left corner only, the polynomial model may significantly influence the opposite corner as well). Therefore, to model such local distortions, high-order models would have to be used, which results in a high number of parameters to optimize, besides a danger of oscillatory behavior of the functions.

Hence, we decided to use the locally defined mean-corrected and normalized reciprocal bias model based on B-splines, formalized as follows:

$${\widehat{b}}^{-1}\left(\mathbf{x}\right)=1+{\displaystyle \sum}_{{\mathbf{x}}_{i}\in P}\mathrm{\Phi}\left({\mathbf{x}}_{i}\right)\left[\frac{{\beta}^{\left(d\right)}\left(\mathrm{\Delta}{\mathbf{x}}_{i}\right)-{}^{m}c\left({\mathbf{x}}_{i}\right)}{{}^{n}c\left({\mathbf{x}}_{i}\right)}\right],\hspace{1em}\text{where}\hspace{0.5em}\mathrm{\Delta}{\mathbf{x}}_{i}=\frac{\mathbf{x}-{\mathbf{x}}_{i}}{\mathbf{h}}.$$

(7)

For the case of two dimensions, *β*^{(d)}(**x**) = *β*^{(d)}(*x*_{1}) · *β*^{(d)}(*x*_{2}) is a separable *d*th-order B-spline kernel, *P* is the set of *K* = *n*_{x1} × *n*_{x2} control points **x**_{i} placed on a rectangular grid covering the image, and **h** is the vector of grid spacings. Substituting (7) into the mean-preserving condition (5) yields

$$\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}s\left(\mathbf{x}\right)=\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}\left\{s\left(\mathbf{x}\right)\left[1+{\displaystyle \sum}_{i=1}^{K}\frac{{\mathrm{\Phi}}_{i}\left({\beta}^{\left(d\right)}\left(\mathrm{\Delta}{\mathbf{x}}_{i}\right)-{}^{m}{c}_{i}\right)}{{}^{n}{c}_{i}}\right]\right\},$$

(8)

which can be configured into

$$\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{i=1}^{K}\left\{\frac{{\mathrm{\Phi}}_{i}}{{}^{n}{c}_{i}}\left[{\displaystyle \sum}_{\mathbf{x}\in \Omega}s\left(\mathbf{x}\right)\left({\beta}^{\left(d\right)}\left(\mathrm{\Delta}{\mathbf{x}}_{i}\right)-{}^{m}{c}_{i}\right)\right]\right\}=0.$$

(9)

The nontrivial solution (Φ_{i} ≠ 0 in general) requires

$${\displaystyle \sum}_{\mathbf{x}\in \Omega}s\left(\mathbf{x}\right)\left({\beta}^{\left(d\right)}\left(\mathrm{\Delta}{\mathbf{x}}_{i}\right)-{}^{m}{c}_{i}\right)=0,\hspace{1em}\forall i,$$

(10)

which provides

$${}^{m}{c}_{i}=\frac{{\sum}_{\mathbf{x}\in \Omega}s\left(\mathbf{x}\right){\beta}^{\left(d\right)}\left(\mathrm{\Delta}{\mathbf{x}}_{i}\right)}{{\sum}_{\mathbf{x}\in \Omega}s\left(\mathbf{x}\right)}$$

(11)

for each mean-preserving constant ^{m}*c*_{i} corresponding to a control point **x**_{i}.

Similarly, we can derive the normalization condition from (6) and (7):

$$\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}\left|\frac{s\left(\mathbf{x}\right)\left({\beta}^{\left(d\right)}\left(\mathrm{\Delta}{\mathbf{x}}_{i}\right)-{}^{m}{c}_{i}\right)}{{}^{n}{c}_{i}}\right|=1,\hspace{1em}\forall i,$$

(12)

from where

$${}^{n}{c}_{i}=\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}\left|s\left(\mathbf{x}\right)\left({\beta}^{\left(d\right)}\left(\mathrm{\Delta}{\mathbf{x}}_{i}\right)-{}^{m}{c}_{i}\right)\right|,$$

(13)

for each normalization constant ^{n}*c*_{i}. Details of the computation of the normalization coefficients in the case of a polynomial basis can be found in [19]; here we modified this approach for the B-spline representation. Finally, the restored image $\widehat{o}(\mathbf{x})$ is obtained via (2) by the intensity transform ${\widehat{b}}^{-1}(\mathbf{x}\mid\mathbf{\Phi})$ with the found parameter vector Φ.
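The constants (11) and (13) can be evaluated directly from the image and the B-spline basis of one control point. A minimal NumPy sketch (the function names, grid position `xi`, and spacing `h` are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def beta3(x):
    """Cubic B-spline kernel beta^(3)."""
    ax = np.abs(x)
    y = np.where(ax < 1.0, 2.0/3.0 - ax**2 + 0.5*ax**3, 0.0)
    return np.where((ax >= 1.0) & (ax < 2.0), (2.0 - ax)**3 / 6.0, y)

def constraint_constants(s, xi, h):
    """Mean-preserving constant (11) and normalization constant (13)
    for the control point at grid position xi with grid spacing h."""
    X1, X2 = np.meshgrid(np.arange(s.shape[1]), np.arange(s.shape[0]))
    basis = beta3((X1 - xi[0]) / h[0]) * beta3((X2 - xi[1]) / h[1])
    mc = np.sum(s * basis) / np.sum(s)      # (11): image-weighted mean
    nc = np.mean(np.abs(s * (basis - mc)))  # (13): normalization constant
    return mc, nc
```

By construction, the image-weighted sum of (basis − ^{m}c_{i}) vanishes, so the corresponding term of (7) leaves the mean intensity of the corrected image unchanged for any Φ_{i}.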

Because this transformation may generally produce out-of-range intensity values, we use an intermediate image representation with an extended intensity range. The resulting image is finally computed using a linear contrast transformation converting the intermediate image back into the original intensity range.
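One plain reading of this back-conversion is a min-max linear mapping; a small sketch (the 8-bit range limits and the function name are assumptions):

```python
import numpy as np

def rescale_to_range(img, lo=0.0, hi=255.0):
    """Linearly map an intermediate, possibly out-of-range image
    back into the original intensity range [lo, hi]."""
    mn, mx = float(img.min()), float(img.max())
    if mx == mn:                      # constant image: map everything to lo
        return np.full_like(img, lo, dtype=float)
    return lo + (hi - lo) * (img - mn) / (mx - mn)
```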

In order to find the parameters of the bias model describing the undesirable illumination, we need to define a criterion with respect to which the parameters will be optimized. In the works of Likar et al. [19] and Mangin [20], image entropy is shown to be a suitable criterion. The idea is that the illumination is additional information added to the information contained in the original signal *o*(**x**); because we would like to remove the illumination bias, the information content of the corrected image should be lower than that of the distorted image *s*(**x**). Therefore, when looking for parameters of the reciprocal bias model eliminating the nonhomogeneous illumination, the Shannon entropy *H*(·) of the resulting image $\widehat{o}(\mathbf{x})$,

$$H=-{\displaystyle \sum}_{k}\hspace{0.17em}P\left(k\right)\mathrm{log}\hspace{0.17em}\left(P\left(k\right)\right),$$

(14)

which is a measure of the information amount, should be minimized. Here, *P*(*k*) is the probability of intensity *k* appearing in any pixel of $\widehat{o}(\mathbf{x})$.

Although the brightness *k* is in reality a discrete variable (represented by 8 bits, i.e., 256 values), it may be well approximated by a continuous variable *κ* with a probability density *p*(*κ*), so that the entropy can be written as

$$H=-{{\displaystyle \int}}_{\kappa}p\left(\kappa \right)\mathrm{log}\hspace{0.17em}\left(p\left(\kappa \right)\right)d\kappa .$$

(15)

However, the probability density *p*(*κ*) is not available explicitly and must be estimated from the image data transformed with the current parameters **Φ**. Using the Parzen windowing (PW) method with a cubic B-spline kernel,

$$p\left(\kappa ;\mathrm{\Phi}\right)=\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}{\beta}^{\left(3\right)}\left(\frac{\widehat{o}\left(\mathbf{x};\mathrm{\Phi}\mathrm{\hspace{0.17em}\hspace{0.17em}}\right)}{z}-\kappa \right)$$

(16)

is the probability density estimate at the intensity value *κ*, obtained using PW; here, *z* is a scaling factor mapping the image intensities into the domain of the Parzen kernel *β*^{(3)}.
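Equations (14)-(16) can be sketched directly: the Parzen density is a sum of cubic B-spline bumps centered at the scaled pixel intensities, and the entropy is its rectangle-rule integral. A minimal NumPy sketch (function names, the scaling *z*, and the *κ* grid are illustrative assumptions):

```python
import numpy as np

def beta3(x):
    """Cubic B-spline Parzen kernel beta^(3)."""
    ax = np.abs(x)
    y = np.where(ax < 1.0, 2.0/3.0 - ax**2 + 0.5*ax**3, 0.0)
    return np.where((ax >= 1.0) & (ax < 2.0), (2.0 - ax)**3 / 6.0, y)

def parzen_entropy(img, z, kappas):
    """Shannon entropy (15) of the Parzen density estimate (16)."""
    v = img.ravel() / z                    # scaled intensities
    dk = kappas[1] - kappas[0]
    # (16): p(kappa) = (1/Theta) * sum_x beta3(o_hat(x)/z - kappa)
    p = np.array([np.mean(beta3(v - k)) for k in kappas])
    p /= np.sum(p) * dk                    # renormalize on the discrete grid
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask])) * dk  # rectangle rule for (15)
```

On a two-class toy image, multiplying in a smooth illumination ramp broadens the density and raises this entropy, which is exactly the effect the minimization exploits.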

Thanks to our formulation of the criterion (15) based on PW, both the probability estimation and the intensity transformation are defined continuously, and therefore we can analytically derive the partial derivatives of the criterion with respect to the components of the parameter vector **Φ**. As *H* is a compound function of the form

$$H\left(\kappa ;\mathrm{\Phi}\right)=f\left(p\left(\beta \left(\widehat{o}\left(\mathrm{\Phi}\right)\right)\right)\right),$$

(17)

its derivative with respect to an element of **Φ** can be expressed as

$$\frac{\partial H\left(\kappa ;\mathrm{\Phi}\right)}{\partial {\mathrm{\Phi}}_{\mathbf{k}}}=\frac{\partial H}{\partial p}\frac{\partial p}{\partial \beta}\frac{\partial \beta}{\partial \widehat{o}}\frac{\partial \widehat{o}}{\partial {\mathrm{\Phi}}_{\mathbf{k}}}.$$

(18)

From here on, the *i*th control point (a linearly decoded control point of the rectangular grid) is characterized by a vector index **k** = [*k*_{1}, *k*_{2}] with appropriate indices along both axes; if *n*_{x1} is the number of control points along the first dimension, then the pair [*k*_{1}, *k*_{2}] decodes to the linear index *i* in the usual row-wise manner (for the 3 × 3 grid used later, e.g., **k** = [0, 2] corresponds to *i* = 2).

By applying the product rule, we obtain from (15)

$$\frac{\partial H\left(\kappa ;\mathrm{\Phi}\right)}{\partial p}=-{{\displaystyle \int}}_{\kappa}\left(\mathrm{log}\hspace{0.17em}\left(p\left(\kappa ;\mathrm{\Phi}\right)\right)+1\right)d\kappa .$$

(19)

Calculation of this integral is approximated using the rectangle rule as

$$\frac{\partial H\left(\kappa ;\mathrm{\Phi}\right)}{\partial p}\approx -{\displaystyle \sum}_{k}\left(\mathrm{log}\hspace{0.17em}\left(p\left({\kappa}_{k};\mathrm{\Phi}\right)\right)+1\right).$$

(20)

Further from (16),

$$\frac{\partial p}{\partial \beta}={\displaystyle \sum}_{\mathbf{x}\in \Omega}\frac{1}{\mathrm{\Theta}}.$$

(21)

The derivative of the B-spline kernel can be expressed using B-spline properties [27] as

$$\frac{\partial {\beta}^{\left(3\right)}\left(\xi \right)}{\partial \xi}={\beta}^{\left(2\right)}\left(\xi +\frac{1}{2}\right)-{\beta}^{\left(2\right)}\left(\xi -\frac{1}{2}\right).$$

(22)
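Identity (22) is easy to check numerically against a finite-difference derivative of the cubic kernel; a small NumPy sketch using the standard centered B-spline kernels:

```python
import numpy as np

def beta3(x):
    """Cubic B-spline kernel beta^(3)."""
    ax = np.abs(x)
    y = np.where(ax < 1.0, 2.0/3.0 - ax**2 + 0.5*ax**3, 0.0)
    return np.where((ax >= 1.0) & (ax < 2.0), (2.0 - ax)**3 / 6.0, y)

def beta2(x):
    """Quadratic B-spline kernel beta^(2)."""
    ax = np.abs(x)
    y = np.where(ax < 0.5, 0.75 - ax**2, 0.0)
    return np.where((ax >= 0.5) & (ax < 1.5), 0.5 * (ax - 1.5)**2, y)

# (22): d beta3(xi)/d xi = beta2(xi + 1/2) - beta2(xi - 1/2)
xi = np.linspace(-2.5, 2.5, 2001)
analytic = beta2(xi + 0.5) - beta2(xi - 0.5)
numeric = np.gradient(beta3(xi), xi)       # central finite differences
assert np.max(np.abs(analytic - numeric)) < 1e-3
```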

The image $\widehat{o}(\mathbf{x};\mathbf{\Phi})$ derived from the observed image *s*(**x**) by application of the intensity transform with parameters Φ can be expressed using (2) and (7) as

$$\widehat{o}\left(\mathbf{x};\mathrm{\Phi}\right)=s\left(\mathbf{x}\right)\left\{1+{\displaystyle \sum}_{{\mathbf{x}}_{i}\in P}\mathrm{\Phi}\left({\mathbf{x}}_{i}\right)\left[\frac{{\beta}^{\left(d\right)}\left(\mathrm{\Delta}{\mathbf{x}}_{i}\right)-{}^{m}c\left({\mathbf{x}}_{i}\right)}{{}^{n}c\left({\mathbf{x}}_{i}\right)}\right]\right\},\hspace{1em}\text{where}\hspace{0.5em}\mathrm{\Delta}{\mathbf{x}}_{i}=\frac{\mathbf{x}-{\mathbf{x}}_{i}}{\mathbf{h}},$$

(23)

and for its partial derivatives we can write (in two dimensions):

$$\begin{array}{cc}\hfill \frac{\partial \widehat{o}\left(\mathbf{x};\mathrm{\Phi}\right)}{\partial {\mathrm{\Phi}}_{\mathbf{k}}}& ={\displaystyle \sum}_{{\mathbf{x}}_{i}\in P}\{s\left(\mathbf{x}\right)\frac{1}{{}_{}{}^{n}{c}_{i}}\left[{\beta}^{\left(3\right)}\left(\frac{{x}_{1}-{x}_{{k}_{1}}}{{h}_{1}}\right)-{}_{}{}^{m}{c}_{i}\right]\hspace{0.17em}\hfill \\ \hfill & \hspace{1em}\hspace{1em}\hspace{1em}\text{\xd7}\frac{1}{{}_{}{}^{n}{c}_{i}}\left[{\beta}^{\left(3\right)}\left(\frac{{x}_{2}-{x}_{{k}_{2}}}{{h}_{2}}\right)-{}_{}{}^{m}{c}_{i}\right]\mathrm{\hspace{0.17em}\hspace{0.17em}}\}.\hfill \end{array}$$

(24)

Finally, we have for the derivatives of the criterion

$$\frac{\partial H\left(\kappa ;\mathrm{\Phi}\right)}{\partial {\mathrm{\Phi}}_{\text{k}}}\approx -{\displaystyle \sum}_{\kappa}\frac{\partial p\left(\kappa ;\mathrm{\Phi}\right)}{\partial {\mathrm{\Phi}}_{\text{k}}}\left(\mathrm{log}\hspace{0.17em}\left(p\left(\kappa ;\mathrm{\Phi}\right)\right)+1\right),$$

(25)

where

$$\begin{array}{cc}\hfill & \frac{\partial p(\kappa ;\mathrm{\Phi})}{\partial {\mathrm{\Phi}}_{\mathbf{k}}}\hfill \\ \hfill & ={\displaystyle \sum}_{\mathbf{x}\in \Omega}\frac{1}{\mathrm{\Theta}}\{{{\displaystyle \sum}_{{\mathbf{x}}_{i}\in P}\frac{\partial {\beta}^{\left(3\right)}\left(\xi \right)}{\partial \xi}|}_{\xi =\left(s\left(\mathbf{x}\right){b}^{-1}\left(\mathbf{x}\right)/z\right)-\kappa}\hfill \\ \hfill & \hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\times s\left(\mathbf{x}\right)\frac{1}{{}^{n}{c}_{i}}\left[{\beta}^{\left(3\right)}\left(\frac{{x}_{1}-{x}_{{k}_{1}}}{{h}_{1}}\right)-{}^{m}{c}_{i}\right]\frac{1}{{}^{n}{c}_{i}}\hfill \\ \hfill & \hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\times \left[{\beta}^{\left(3\right)}\left(\frac{{x}_{2}-{x}_{{k}_{2}}}{{h}_{2}}\right)-{}^{m}{c}_{i}\right]\}.\hfill \end{array}$$

(26)

The grid controlling the compensation field is chosen adequately sparse, as required by the smooth character of the illumination unevenness to be compensated. An example illustrating the behaviour of the criterion function *H*(*κ*, Φ) as a function of one element of the parameter vector, concretely Φ_{[0,2]} at *i* = 2, that is, for **k** = [0, 2] (the top-right image corner on a 3 × 3 grid covering the image), is depicted in Figure 5(a), showing a distinct minimum. Below it, the course of the partial derivative of *H*(*κ*, Φ) is also shown, both as derived analytically by the described approach and as calculated numerically via finite differences. There is good agreement between both versions, namely in the important area around the minimum; the important advantages of the analytical derivatives are faster evaluation and smooth behavior in the parameter space. In Figure 5(b), the influence of this parameter on the illumination correction is illustrated for five particular values of Φ_{[0,2]}, influencing namely the correction in the upper-right corner of the image; obviously, the best-corrected image corresponds to the minimum of the criterion and thus to the zero position of its derivative.

The overall illumination correction algorithm is based on the reciprocal illumination model *b*^{−1} with the parameters minimizing the entropy criterion

$${\mathrm{\Phi}}_{\text{opt}}=\mathrm{arg}\underset{\mathrm{\Phi}}{\mathrm{min}\hspace{0.17em}}\left\{H\left(s\left(\mathbf{x}\right){b}^{-1}\left(\mathbf{x}\right)\right)\right\}.$$

(27)

The block diagram of the iterative algorithm is depicted in Figure 6.
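The iteration of (27) can be sketched as a gradient descent with a variable step, the scheme that later proves best here: the step grows after a criterion-decreasing move and shrinks otherwise. This is a hedged sketch (the growth/shrink factors and stopping rule are illustrative assumptions, and a toy quadratic stands in for the entropy criterion):

```python
import numpy as np

def gd_variable_step(phi0, criterion, gradient, step=0.1,
                     grow=1.2, shrink=0.5, n_iter=200, tol=1e-8):
    """Gradient descent with a variable step: accept a trial point only
    if it decreases the criterion; enlarge the step on success and
    reduce it on failure."""
    phi = np.asarray(phi0, dtype=float)
    best = criterion(phi)
    for _ in range(n_iter):
        trial = phi - step * gradient(phi)
        val = criterion(trial)
        if val < best:
            phi, best = trial, val
            step *= grow
        else:
            step *= shrink
        if step < tol:
            break
    return phi

# Toy usage on a quadratic bowl (stand-in for the entropy criterion):
f = lambda p: np.sum((p - 3.0)**2)
g = lambda p: 2.0 * (p - 3.0)
opt = gd_variable_step(np.zeros(4), f, g)
assert np.allclose(opt, 3.0, atol=1e-3)
```

In the actual algorithm, `criterion` would be the entropy (15) and `gradient` the analytical derivatives (25)-(26).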

Various types of optimization techniques were studied in order to find the optimizer best suited to the particular properties of the optimization criterion used in the problem of nonhomogeneous illumination correction. The tested methods were downhill simplex [28] (Amoeba), Powell's direction set [28], controlled random search (CRS) [29], gradient descent (GD) [28, 30], conjugate gradient (CG) [28], and two versions of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method [31].

The comparison of results of the individual methods can be found in the next paragraph.

The proposed algorithm is intended to deal with nonhomogeneous illumination of retinal image data (384 × 384 pixels, 10 *μ*m/pixel) obtained by means of a CSLO (Heidelberg Retina Tomograph HRT II). In order to check the efficiency of the designed method on images with a known illumination inhomogeneity, a model of the HRT II image signal was designed, using segmentation of real HRT II images into five object classes with the estimated characteristic intensity values assigned to the objects as follows.

0—black background introduced by HRT II; 60—vessels; 120—vessel centers (brighter due to reflections); 90—retinal tissue; 30—optic disc; 250—optic disc cup. Further, according to the image acquisition model (1), random tissue noise *n*_{tiss} (uniformly distributed, with standard deviation *σ* = 28) was added, and the resulting image was convolved with the estimated point spread function of the real imaging system, approximated by a 2D Gaussian kernel with standard deviation *σ* = 1. Finally, an artificial multiplicative illumination field, controlled by a mesh of 3 × 3 nodes on the image area with random parameters, was applied (see Figure 7).

(a) Ideal model image created by manual segmentation of a real image; (b) model image after noise addition; (c) model image after noise addition and convolution with the PSF of the imaging system; (d) final model image corrupted by nonhomogeneous illumination.

A set of 40 simulated images was created via this model. The normalized parameters of the illumination field were uniformly distributed on the interval [−15, 15]. Then, various optimizers were tested and evaluated with respect to the achieved quality of the retrospective compensation of the illumination unevenness. The compensation quality was quantified using the fact that the known, artificially introduced multiplicative illumination field and the final illumination correction field should ideally be mutually inverse. Therefore, we can define the postcorrection illumination error *ξ*_{post} as

$${\xi}_{\text{post}}=\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}\sqrt{{\left(1-b\left(\mathbf{x}\right){\widehat{b}}^{-1}\left(\mathbf{x}\right)\right)}^{2}},$$

(28)

where *b*(**x**) is the introduced illumination field, ${\widehat{b}}^{-1}(\mathbf{x})$ is the found correction of the illumination, and Θ is the size of the image domain Ω. The mean precorrection illumination error defined as

$${\xi}_{\text{pre}}=\frac{1}{\mathrm{\Theta}}{\displaystyle \sum}_{\mathbf{x}\in \Omega}\sqrt{{\left(1-b\left(\mathbf{x}\right)\right)}^{2}}$$

(29)

was set to *ξ*_{pre} = 0.2875; that is, the intensity of the ideal image was corrupted on average by approximately 30%.
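Since the per-pixel square root of a square in (28) and (29) reduces to an absolute value, both errors are mean absolute deviations of a residual field from 1. A small NumPy sketch (the function name is an illustrative assumption):

```python
import numpy as np

def illumination_error(b, b_inv_hat=None):
    """Pre-correction error (29) when only the true field b is given;
    post-correction error (28) when the estimated reciprocal field
    b_inv_hat is supplied as well."""
    residual = b if b_inv_hat is None else b * b_inv_hat
    return np.mean(np.abs(1.0 - residual))

# A perfect correction drives the error to zero:
b = np.tile(np.linspace(0.7, 1.3, 32), (32, 1))
assert illumination_error(b) > 0.1                       # xi_pre
assert np.isclose(illumination_error(b, 1.0 / b), 0.0)   # xi_post
```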

The seven optimization algorithms mentioned in the previous section were tested, and the average results obtained on the simulated image set are summarized in Table 1. The table lists the achieved after-correction total mean errors, together with the computing times per image, the numbers of iterations, and the achieved percentage suppression of the illumination unevenness. Examples of the error fields resulting from these optimizers can be seen in Figure 8. The Amoeba, Powell, and CRS methods only need to evaluate the criterion value; the derivatives of the criterion need not be computed. On the other hand, these three methods converge slowly, and their total computational demands are higher compared to GD, CG, and LBFGS. Moreover, the most serious problem of the Amoeba, CRS, and especially the Powell algorithm was their unreliable convergence: the Powell optimizer was usually unable to find the optimum parameters of the correction illumination field (see Table 1). False convergence to a side extreme is less frequent when the introduced distortions are smaller, but even then the gradient-based algorithms LBFGS and GD outperform the nongradient ones. This can be explained by a smoother description of the shape of the multidimensional cost function when the analytical derivatives are used. Surprisingly, the simplest gradient descent optimizer with variable step proved to be the best with respect to correction precision, suppressing the illumination errors by about 67% on average (from *ξ*_{pre} = 0.2875 to *ξ*_{post} = 0.0926). The LBFGS optimizer was the fastest, but its average suppression of the simulated illumination error was only 51%.
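The step-adaptation idea behind a variable-step gradient descent can be sketched as follows. This is a generic illustration: the growth/shrink factors and stopping rule are our illustrative choices, not the paper's exact settings.

```python
import numpy as np

def gradient_descent_variable_step(f, grad, theta0, step=1.0,
                                   grow=1.2, shrink=0.5,
                                   tol=1e-8, max_iter=500):
    """Minimize f by gradient descent with an adaptive step:
    enlarge the step after a successful (criterion-decreasing) move,
    shrink it and retry when the step overshoots."""
    theta = np.asarray(theta0, dtype=float)
    fval = f(theta)
    for _ in range(max_iter):
        g = grad(theta)
        if np.linalg.norm(g) < tol:
            break
        trial = theta - step * g
        ftrial = f(trial)
        if ftrial < fval:          # accept the move and be more ambitious
            theta, fval = trial, ftrial
            step *= grow
        else:                      # overshoot: retry with a smaller step
            step *= shrink
    return theta

# Minimizing a simple quadratic converges to its minimizer (1, -2).
sol = gradient_descent_variable_step(
    lambda t: (t[0] - 1.0)**2 + (t[1] + 2.0)**2,
    lambda t: np.array([2.0 * (t[0] - 1.0), 2.0 * (t[1] + 2.0)]),
    np.zeros(2))
```

In the paper's setting, `f` would be the Shannon entropy criterion and `grad` its analytical derivatives with respect to the B-spline coefficients.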

(a) Artificially introduced illumination field. Error illumination field (illumination and restoration field multiplied, ideally black) using: (b) Amoeba optimizer, (c) Powell optimizer, (d) CRS, (e) LBFGS optimizer, (f) gradient descent optimizer, (g) **...**

Suitability of the different optimizers for the task of optimizing Shannon's entropy. The best values are highlighted.

As part of the experiments, we also tested the influence of the number of image samples used for the probability approximation. As can be seen in Table 2, even as few as 15% (about 20000) of the available samples are statistically powerful enough to provide a good approximation of the probability density of image intensities, which speeds up the computation of the criterion.
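Estimating the entropy from a random subsample via Parzen windowing can be sketched as follows. The Gaussian window width, grid size, and 15% default are illustrative choices of ours; the paper's actual criterion is evaluated on the shading-corrected intensities.

```python
import numpy as np

def parzen_entropy(intensities, sample_fraction=0.15, sigma=0.02,
                   n_bins=64, rng=None):
    """Estimate the Shannon entropy of an intensity distribution from a
    random subsample, smoothing the density with a Gaussian Parzen window."""
    rng = np.random.default_rng(rng)
    x = np.asarray(intensities, dtype=float).ravel()
    sample = rng.choice(x, size=max(1, int(sample_fraction * x.size)),
                        replace=False)
    # Evaluate the Parzen density estimate on a regular intensity grid.
    grid = np.linspace(x.min(), x.max(), n_bins)
    diffs = grid[:, None] - sample[None, :]
    density = np.exp(-0.5 * (diffs / sigma) ** 2).sum(axis=1)
    density /= density.sum()                 # normalize to a discrete pmf
    nz = density > 0
    return -np.sum(density[nz] * np.log(density[nz]))

rng = np.random.default_rng(0)
# A uniform intensity distribution has higher entropy than a bimodal one,
# which is the property the correction exploits: shading smears the
# histogram, so removing it lowers the entropy.
h_uniform = parzen_entropy(rng.uniform(0, 1, 10000), rng=1)
h_bimodal = parzen_entropy(np.concatenate([np.zeros(5000), np.ones(5000)]),
                           rng=1)
```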

Influence of the number of samples used for entropy evaluation on the algorithm's precision and speed.

The next set of experiments concerned real retinal images with clearly visible, but unknown, illumination inhomogeneities. The algorithm was tested on a set of 336 real (clinically obtained) retinal images. Obviously, the quality of the nonhomogeneous illumination compensation could only be evaluated subjectively, due to the lack of a gold standard (i.e., the ideal illumination in each case). In the majority of cases (about 95%), a substantial improvement was visible; in the remaining cases, the image remained practically unchanged, that is, no image was further distorted by the correction. Figure 9 illustrates the result of the designed algorithm applied to a real retinal HRT II image; the intensity profiles clearly show the successful bias correction.

A method for efficient illumination correction was proposed, implemented, and verified: quantitatively on simulated images with known deterioration, and qualitatively on an extensive set of real retinal images lacking this knowledge. It is based on estimating a B-spline polynomial shading model, the inversion of which provides a correction of the input image that is optimal in the sense of the used information-based criterion, Shannon's entropy. Previously published methods using a similar principle were modified, namely in the type of the correction function used; as a novelty, the computation of the criterion is based on probability distribution estimates obtained by the Parzen windowing method, which enables us to derive analytical expressions for the derivatives of the criterion and consequently substantially speeds up the computations. Together with the efficient B-spline distortion model, this makes the efficient use of gradient-based optimizers possible. We compared the efficiency and precision of three nongradient and four gradient-based optimizers and found the classical gradient descent optimizer with variable step to be the best for the purpose of illumination correction formulated in the suggested way.
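The multiplicative B-spline shading model can be illustrated as follows: a coarse grid of coefficients defines a smooth surface through the separable cubic B-spline basis, and dividing the observed image by this surface implements the correction ${\widehat{b}}^{-1}(\mathbf{x})$. The coefficient-grid mapping below is our simplified sketch, not the paper's exact parametrization:

```python
import numpy as np

def cubic_bspline(t):
    """Cubic B-spline kernel beta^3(t), nonzero on (-2, 2)."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = (2.0 / 3.0) - t[m1] ** 2 + 0.5 * t[m1] ** 3
    out[m2] = ((2.0 - t[m2]) ** 3) / 6.0
    return out

def shading_surface(coeffs, shape):
    """Evaluate a separable cubic B-spline surface, defined by a coarse
    coefficient grid, over an image of the given shape."""
    rows, cols = shape
    cr, cc = coeffs.shape
    # Map pixel coordinates onto the coefficient-grid index space.
    u = np.linspace(0, cr - 1, rows)
    v = np.linspace(0, cc - 1, cols)
    # Basis matrices: Bu[i, k] = beta^3(u_i - k), similarly for Bv.
    Bu = cubic_bspline(u[:, None] - np.arange(cr)[None, :])
    Bv = cubic_bspline(v[:, None] - np.arange(cc)[None, :])
    return Bu @ coeffs @ Bv.T

# With all coefficients equal to 1, the surface is close to 1 away from
# the image borders (partition of unity of the B-spline basis); dividing
# the observed image by such a surface implements the correction.
coeffs = np.ones((4, 4))
field = shading_surface(coeffs, (32, 32))
```

The low-order coefficient grid is what keeps the model smooth and the optimization low-dimensional: only `cr * cc` parameters describe the whole field.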

The quantitative tests were done on an image set artificially created with respect to the characteristics of the imaging device (the method is primarily aimed at improving the quality of retinal images taken by means of the HRT II CSLO); these tests have shown that the designed algorithm is capable of removing up to about 70% of the artificially introduced illumination variability. Finally, the method was successfully qualitatively tested on a set of 336 real, clinically obtained CSLO retinal images.

According to the results of some further tests involving subsequent registration of multiple images, preprocessing the retinal images with the proposed algorithm had a clearly positive influence on the reliability of their registration using the registration method developed by our group [1], when compared to the registration results for the unprocessed images. A similar positive effect can be expected for higher-level image analyses following the presented correction method.

This research has been supported by the National Research Centre DAR (sponsored by the Ministry of Education, Czech Republic) project no. 1M0572 and partly also by the research frame no. MSM 0021630513.

1. Kubecka L, Jan J. Registration of bimodal retinal images—improving modifications. In: Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC ’04); September 2004; San Francisco, Calif, USA. pp. 1695–1698. [PubMed]

2. Fleming AD, Philip S, Goatman KA, Olson JA, Sharp PF. Automated microaneurysm detection using local contrast normalization and local vessel detection. *IEEE Transactions on Medical Imaging*. 2006;25(9):1223–1232. [PubMed]

3. Abdel-Razik Youssif AA-H, Ghalwash AZ, Abdel-Rahman Ghoneim AAS. Optic disc detection from normalized digital fundus images by means of a vessels’ direction matched filter. *IEEE Transactions on Medical Imaging*. 2008;27(1):11–18. [PubMed]

4. Kolar R, Kubecka L, Jan J. Registration and fusion of the autofluorescent and infrared retinal images. *International Journal of Biomedical Imaging*. 2008;2008(1):11 pages. Article ID 513478. [PMC free article] [PubMed]

5. Axel L, Costantini J, Listerud J. Intensity correction in surface-coil MR imaging. *American Journal of Roentgenology*. 1987;148(2):418–420. [PubMed]

6. Young IT. Shading correction: compensation for illumination and sensor inhomogeneities. In: Robinson JP, et al., editors. *Current Protocols in Cytometry*. Vol. 1. New York, NY, USA: John Wiley & Sons; 2000. pp. 2.11.1–2.11.12.

7. Chrastek R, Michelson G, Donath K, Wolf M, Niemann H. Vessel segmentation in retina scans. In: Proceedings of the 15th Biennial International EURASIP Conference Biosignal; 2000; Brno, Czech Republic. pp. 252–254.

8. Brinkmann BH, Manduca A, Robb RA. Optimized homomorphic unsharp masking for MR grayscale inhomogeneity correction. *IEEE Transactions on Medical Imaging*. 1998;17(2):161–171. [PubMed]

9. Lufkin RB, Sharpless T, Flannigan B, Hanafee W. Dynamic-range compression in surface-coil MRI. *American Journal of Roentgenology*. 1986;147(2):379–382. [PubMed]

10. Haselgrove J, Prammer M. An algorithm for compensation of surface-coil images for sensitivity of the surface coil. *Magnetic Resonance Imaging*. 1986;4(6):469–472.

11. Dawant BM, Zijdenbos AP, Margolin RA. Correction of intensity variations in MR images for computer-aided tissue classification. *IEEE Transactions on Medical Imaging*. 1993;12(4):770–781. [PubMed]

12. Ahmed MN, Yamany SM, Farag AA, Moriarty T. Bias field estimation and adaptive segmentation of MRI data using a modified fuzzy c-means algorithm. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’99), vol. 1; June 1999; Fort Collins, Colo, USA. pp. 250–255.

13. Wells WM, III, Grimson WEL, Kikinis R, Jolesz FA. Adaptive segmentation of MRI data. *IEEE Transactions on Medical Imaging*. 1996;15(4):429–442. [PubMed]

14. Styner M. Parametric estimate of intensity inhomogeneities applied to MRI. *IEEE Transactions on Medical Imaging*. 2000;19(3):153–165. [PubMed]

15. Guillemaud R, Brady M. Estimating the bias field of MR images. *IEEE Transactions on Medical Imaging*. 1997;16(3):238–251. [PubMed]

16. Likar B, Derganc J, Pernuš F. Segmentation-based retrospective correction of intensity non-uniformity in multi-spectral MR images. In: Sonka M, Fitzpatrick JM, editors. In: Medical Imaging 2002: Image Processing, vol. 4684; February 2002; San Diego, Calif, USA. pp. 1531–1540. *Proceedings of SPIE*.

17. Wang L, Lai H-M, Barker GJ, Miller DH, Tofts PS. Correction for variations in MRI scanner sensitivity in brain studies with histogram matching. *Magnetic Resonance in Medicine*. 1998;39(2):322–327. [PubMed]

18. Sled JG, Zijdenbos AP, Evans AC. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. *IEEE Transactions on Medical Imaging*. 1998;17(1):87–97. [PubMed]

19. Likar B, Maintz JBA, Viergever MA, Pernuš F. Retrospective shading correction based on entropy minimization. *Journal of Microscopy*. 2000;197(3):285–295. [PubMed]

20. Mangin J-F. Entropy minimization for automatic correction of intensity nonuniformity. In: Proceedings of the IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA ’00); June 2000; Hilton Head Island, SC, USA. IEEE Press; pp. 162–169.

21. Niemann H, Chrastek R, Lausen B, et al. Towards automated diagnostic evaluation of retina images. *Pattern Recognition and Image Analysis*. 2006;16(4):671–676.

22. Foracchia M, Grisan E, Ruggeri A. Luminosity and contrast normalization in retinal images. *Medical Image Analysis*. 2005;9(3):179–190. [PubMed]

23. Chen T, Yin W, Zhou XS, Comaniciu D, Huang TS. Total variation models for variable lighting face recognition. *IEEE Transactions on Pattern Analysis and Machine Intelligence*. 2006;28(9):1519–1524. [PubMed]

24. Zhu J, Liu B, Schwartz SC. General illumination correction and its application to face normalization. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’03), vol. 3; April 2003; pp. 133–136.

25. Brown MS, Sun M, Yang R, Lin Y, Seales WB. Restoring 2D content from distorted documents. *IEEE Transactions on Pattern Analysis and Machine Intelligence*. 2007;29(11):1904–1916. [PubMed]

26. Meng G, Zheng N, Du S, Song Y, Zhang Y. Shading extraction and correction for scanned book images. *IEEE Signal Processing Letters*. 2008;15:849–852.

27. Unser M, Aldroubi A, Eden M. B-spline signal processing. Part I. Theory. *IEEE Transactions on Signal Processing*. 1993;41(2):821–833.

28. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. *Numerical Recipes in C*. Cambridge, UK: Cambridge University Press; 1995.

29. Ali MM, Törn A, Viitanen S. A numerical comparison of some modified controlled random search algorithms. *Journal of Global Optimization*. 1997;11(4):377–385.

30. Mandic DP. A generalized normalized gradient descent algorithm. *IEEE Signal Processing Letters*. 2004;11(2):115–118.

31. Liu DC, Nocedal J. On the limited memory BFGS method for large scale optimization. *Mathematical Programming*. 1989;45(1–3):503–528.

Articles from International Journal of Biomedical Imaging are provided here courtesy of **Hindawi Publishing Corporation**
