
Proc Int Conf Image Proc. Author manuscript; available in PMC 2010 May 19.

Published in final edited form as:

Proc Int Conf Image Proc. 2009 November 7; 2009: 2969–2972.

doi: 10.1109/ICIP.2009.5413577

PMCID: PMC2873042

NIHMSID: NIHMS163217

**Abstract**

Complex diffusion was introduced in the image processing literature as a means of achieving simultaneous denoising and enhancement of scalar-valued images. In this paper, we present a novel geometric framework for achieving complex diffusion on color images expressed as image graphs. Within this framework, we develop a new variational formulation of complex diffusion. This formulation involves a modified harmonic map functional and is quite distinct from the Polyakov action described in earlier work by Sochen et al. Our formulation provides a framework for simultaneous (feature-preserving) denoising and enhancement. We present comparisons between our complex diffusion and the Beltrami flow, both in the image graph framework.

**1. INTRODUCTION**

Image denoising is a quintessential component of most image analysis tasks, and numerous methods for achieving this goal have been reported in the literature. In the past few decades, methods based on partial differential equations (PDEs) have become very popular, and a flurry of activity has matured the field significantly. Some PDE-based methods can be derived from minimization principles, while others cannot. The general mathematical form of a feature-preserving anisotropic diffusion is given by

$$\frac{\partial u}{\partial t}=\mathit{Div}(g\left(\mid \nabla u\mid \right)\nabla u).$$

Here, *u*(*x*, *y*; *t*)|_{t=0} = *I*(*x*, *y*) is the function being smoothed, initialized to the input image. The choice of *g*(|∇*u*|) above leads to various types of diffusion flows.
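As a concrete instance, one explicit time step of this equation with the Perona-Malik edge-stopping function *g*(*s*) = 1/(1 + (*s*/*k*)²) can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the function name and the parameters `dt` and `k` are assumptions:

```python
import numpy as np

def perona_malik_step(u, dt=0.1, k=10.0):
    """One explicit step of du/dt = Div(g(|grad u|) grad u) with the
    edge-stopping function g(s) = 1 / (1 + (s/k)^2).

    Forward differences for the gradient, backward differences for the
    divergence, Neumann-style boundaries via edge replication."""
    up = np.pad(u, 1, mode="edge")
    ux = up[1:-1, 2:] - up[1:-1, 1:-1]       # forward x-difference
    uy = up[2:, 1:-1] - up[1:-1, 1:-1]       # forward y-difference
    g = 1.0 / (1.0 + (ux ** 2 + uy ** 2) / k ** 2)
    fx, fy = g * ux, g * uy                  # flux g(|grad u|) * grad u
    # divergence: backward differences of the fluxes (zero flux at borders)
    fxp = np.pad(fx, ((0, 0), (1, 0)))
    fyp = np.pad(fy, ((1, 0), (0, 0)))
    div = (fxp[:, 1:] - fxp[:, :-1]) + (fyp[1:, :] - fyp[:-1, :])
    return u + dt * div
```

The time step `dt = 0.1` respects the usual explicit stability limit for this four-neighbor stencil.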

Alternatively, one may represent the image as a graph by embedding it as a 2D surface Σ with local coordinates (σ^{1}, σ^{2}) in *R*^{3}; the embedding map *X* is given by *X* : (σ^{1}, σ^{2}) → (*x*, *y*, *I*(*x*, *y*)). This provides a geometric interpretation of the PDEs as flows that modify some geometric property, such as the area, of the 2D manifold representing the image surface. In the case of vector-valued images, the embedding map *X* is given by *X* : (σ^{1}, σ^{2}) → (*x*, *y*, *I*^{i}(*x*, *y*)), where *I*^{i}(*x*, *y*) are the channels of the given vector-valued image. This graph representation also provides a geometric way to handle the interaction between the components (channels) of vector-valued images. Kimmel et al. [1] pioneered the use of this image graph representation to develop algorithms for image smoothing in scalar and vector-valued data sets. They also introduced the Polyakov action [4] to derive various flows such as the Beltrami, mean curvature, and Perona-Malik flows. One benefit of this approach is that the channels in multi-channel (vector-valued) images such as color images can be correlated in a geometric way. However, diffusing the RGB channels of a color image while retaining their correlation is not simple: if we perform isotropic or anisotropic diffusion independently on each channel, the coupling between the channels is ignored.

Alternatively, one may extend traditional diffusion to the complex domain by generalizing the diffusion equation to complex-valued functions. In [5], Gilboa et al. pioneered a general approach to isotropic and anisotropic complex-valued diffusion. In complex diffusion, the imaginary part behaves as a smoothed second derivative, so that image smoothing and edge information are obtained simultaneously. The authors of [5] showed that using the imaginary part for *g*(|∇*u*|) in the anisotropic diffusion equation above yields better denoising results than the Perona-Malik flow. However, they did not apply the complex diffusion model to vector-valued images. In this paper, we present a novel model for simultaneous smoothing and enhancement obtained by mapping the real and complex channels to **C**^{n}, introducing an image-surface metric, and constructing an action functional on the image manifold. In our approach, the correlation between the color channels is introduced via the metric on the image (graph) manifold. Due to lack of space, we present one experimental result on color image denoising and edge enhancement, depicting the performance of our model in comparison with the Beltrami flow [1–3] for color image denoising. The rest of this paper is organized as follows: in section 2, we present a novel metric for the image manifold and a novel norm functional whose minimization yields the desired flow equation; in section 3, we present results of applying our model to color images, along with comparisons; finally, section 4 contains the conclusions.

**2. ACTION FORMALISM FOR COMPLEX DIFFUSION**

The general idea of complex diffusion was investigated in [5]. However, the primary focus there was on gray-level images, with no generalization to vector-valued data sets. Since we deal with multi-channel images here, one of the key problems is how to process the data while capturing the correlation between the channels. In [2], the authors introduced a norm functional called the Polyakov action and an embedding map **X** : Σ → **R**^{n}, where Σ is a 2-*D* manifold, in order to capture the interaction between the multiple channels; minimizing the norm functional yields specific flows that smooth images in different ways. In this paper, we suggest an alternative to the Polyakov action, in which the image manifold Σ is mapped to an n-dimensional complex manifold by **Z** : Σ → **C**^{n}. Denoting the local coordinates on the 2-*D* manifold Σ by (σ^{1}, σ^{2}), the map **Z** is given by [**Z**^{1}(σ^{1}, σ^{2}), **Z**^{2}(σ^{1}, σ^{2}), …, **Z**^{n}(σ^{1}, σ^{2})], where all the **Z**^{i} are complex. For example, a color (RGB) image can be mapped by **Z** as follows:

$$\mathbf{Z}:({\sigma}^{1},{\sigma}^{2})\to [z,\bar{z},{Z}^{l},{\bar{Z}}^{l}],\phantom{\rule{1em}{0ex}}{Z}^{l}={I}^{l}({\sigma}^{1},{\sigma}^{2}),$$

(1)

where $z={\sigma}^{1}+i{\sigma}^{2}$, $\bar{z}$ is the complex conjugate of *z*, ${I}^{l}={I}_{R}^{l}({\sigma}^{1},{\sigma}^{2})+i{I}_{M}^{l}({\sigma}^{1},{\sigma}^{2})$ is a complex-valued channel, ${\bar{Z}}^{l}$ is the complex conjugate of *Z*^{l}, and the index *l* runs over R, G, and B.
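The mapping of Eq. (1) is straightforward to materialize for an array-valued RGB image. The sketch below is a hypothetical helper, not the authors' code; the imaginary channels default to zero, matching a purely real initial image:

```python
import numpy as np

def embed_rgb(real_rgb, imag_rgb=None):
    """Complex embedding of Eq. (1) for an H x W x 3 RGB image.

    Returns z = sigma^1 + i*sigma^2 on the pixel grid and the three
    complex channels Z^l = I_R^l + i*I_M^l (l over R, G, B)."""
    h, w, _ = real_rgb.shape
    s1, s2 = np.meshgrid(np.arange(w), np.arange(h))   # sigma^1, sigma^2
    z = s1 + 1j * s2
    if imag_rgb is None:
        imag_rgb = np.zeros_like(real_rgb)             # purely real start
    channels = [real_rgb[..., l] + 1j * imag_rgb[..., l] for l in range(3)]
    return z, channels
```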

Let *M*, the space-feature manifold, denote the embedding manifold of the image graph. Consider **Z** : Σ → *M*, and let *g*_{μν} be the metric on the image manifold Σ and *h*_{ij} the metric on *M*. Here, *h*_{ij} is defined such that *h*_{ij}*dZ*^{i}*dZ*^{j} gives a length element on *M*; this metric makes *M* equivalent to a Riemannian manifold of (*n*×2)+2 dimensions, where *n* is the number of channels and the additional 2 accounts for the local coordinates. If, for example, a gray-level image is considered, then *h*_{ij} is represented by

$$h=\left(\begin{array}{cccc}\hfill 0\hfill & \hfill \frac{1}{2}\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill \frac{1}{2}\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill & \hfill \frac{1}{2}\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill \frac{1}{2}\hfill & \hfill 0\hfill \end{array}\right)$$

(2)

so that the length element is $dzd\bar{z}+dId\bar{I}={\left(d{\sigma}^{1}\right)}^{2}+{\left(d{\sigma}^{2}\right)}^{2}+d{I}_{R}^{2}+d{I}_{M}^{2}$. The image metric *g*_{μν} is then given explicitly as follows:

$${g}_{\mu \nu}({\sigma}^{1},{\sigma}^{2})={h}_{ij}\left(\mathbf{Z}\right){\partial}_{\mu}{Z}^{i}{\partial}_{\nu}{Z}^{j},$$

(3)

where ${\partial}_{\mu}{Z}^{i}=\partial {Z}^{i}/\partial {\sigma}^{\mu}$. The image metric for the *n*-channel case is given explicitly by

$${g}_{\mu \nu}=\left(\begin{array}{cc}1+\sum _{l=1}^{n}{I}_{x}^{l}{\bar{I}}_{x}^{l} & \frac{1}{2}\sum _{l=1}^{n}({I}_{x}^{l}{\bar{I}}_{y}^{l}+{I}_{y}^{l}{\bar{I}}_{x}^{l})\\ \frac{1}{2}\sum _{l=1}^{n}({I}_{x}^{l}{\bar{I}}_{y}^{l}+{I}_{y}^{l}{\bar{I}}_{x}^{l}) & 1+\sum _{l=1}^{n}{I}_{y}^{l}{\bar{I}}_{y}^{l}\end{array}\right),$$

(4)

where *x* and *y* are local coordinates. We are now ready to present the formulation of the norm functional, i.e., the action formalism.
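The entries of Eq. (4) can be evaluated numerically for a set of complex channels. The sketch below (illustrative function names, unit grid spacing assumed) uses `np.gradient` for the partials and exploits the fact that each entry of Eq. (4) is real, since every sum pairs a term with its conjugate:

```python
import numpy as np

def image_metric(channels):
    """Entries g11, g12, g22 of the induced metric of Eq. (4)
    for a list of complex-valued channel arrays."""
    g11 = np.ones(channels[0].shape)
    g22 = np.ones(channels[0].shape)
    g12 = np.zeros(channels[0].shape)
    for I in channels:
        Iy, Ix = np.gradient(I)              # rows ~ y, columns ~ x
        g11 += (Ix * np.conj(Ix)).real       # 1 + sum_l I_x conj(I_x)
        g22 += (Iy * np.conj(Iy)).real       # 1 + sum_l I_y conj(I_y)
        g12 += 0.5 * (Ix * np.conj(Iy) + Iy * np.conj(Ix)).real
    return g11, g12, g22

def metric_det(g11, g12, g22):
    """Determinant g of the metric, used in Eqs. (5) and (9)."""
    return g11 * g22 - g12 ** 2
```

For a flat image the metric reduces to the identity, so the determinant is 1 and the flow below degenerates to ordinary diffusion.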

Images in computer vision are usually real-valued; it is therefore natural to pose them as a real-valued graph with a real-valued metric. In this paper, however, we seek an action (a norm functional) appropriate for complex-valued functions, distinct from the Polyakov action presented in [2]. We would like the gradient descent (flow) equation of the new action to reduce to the complex diffusion introduced in [5] under a special geometry, and to yield edge-preserving flows on a graph. We propose the following action for *n*-channel images satisfying these conditions:

$$S=\iint F(z,\bar{z},{I}_{x}^{l},{I}_{y}^{l},{\bar{I}}_{x}^{l},{\bar{I}}_{y}^{l})\sqrt{g}\phantom{\rule{0.2em}{0ex}}dxdy$$

(5)

$$F=\frac{1}{2}\sum _{l=1}^{n}({\mid \nabla {I}^{l}\mid}^{2}{e}^{i{\theta}_{l}}+{\mid \nabla {\bar{I}}^{l}\mid}^{2}{e}^{-i{\theta}_{l}})$$

(6)

Here, *x* and *y* are local coordinates, and *g* is the determinant of the image metric *g*_{μν}. In Eq. (6), we can in general assign a different phase θ_{l} to each channel.

We can derive the gradient descent of Eq. (5) by evaluating the Euler-Lagrange equation with respect to the embedding. For this, we fix the *x* and *y* coordinates (or *z* and $\bar{z}$) and vary the action with respect to *I*^{l}. The flow equation for *I*^{l} is then given by:

$$\frac{\partial {I}^{l}}{\partial t}=\frac{1}{{g}^{\beta}}\left[\frac{d}{dx}\left(\frac{{P}^{l}}{\sqrt{g}}\right)+\frac{d}{dy}\left(\frac{{Q}^{l}}{\sqrt{g}}\right)\right],$$

(7)

where, *P*^{l} and *Q*^{l} are given by,

$${P}^{l}=g\frac{\partial F}{\partial {I}_{x}^{l}},\phantom{\rule{1em}{0ex}}{Q}^{l}=g\frac{\partial F}{\partial {I}_{y}^{l}}.$$

(8)

In Eq. (7), we are free to multiply the right-hand side of the equation by a positive function; here, the factor 1/*g*^{β} produces a nonlinear scale-space and keeps the flow geometric, as suggested in [2]. The exponent β is discussed subsequently. Equation (7) can now be rewritten as follows:

$$\frac{\partial {I}^{l}}{\partial t}=\frac{1}{{g}^{(\beta +0.5)}}\left[{P}_{x}^{l}+{Q}_{y}^{l}-\frac{1}{2g}({g}_{x}{P}^{l}+{g}_{y}{Q}^{l})\right].$$

(9)
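A minimal explicit implementation of the update in Eq. (9) might look as follows. It is a sketch under stated assumptions, not the authors' implementation: in particular it assumes ∂*F*/∂*I*_{x}^{l} = *e*^{iθ_{l}}*I*_{x}^{l}, the reading of Eq. (6) under which the flow reduces to *e*^{iθ}Δ*I* when *g*_{μν} is the identity; all spatial derivatives use `np.gradient`, and the step size `dt` is illustrative:

```python
import numpy as np

def complex_flow_step(channels, thetas, beta=5.0 / 6.0, dt=0.05):
    """One explicit step of Eq. (9) for each complex channel I^l."""
    # metric entries of Eq. (4) and determinant g
    g11 = np.ones(channels[0].shape)
    g22 = np.ones(channels[0].shape)
    g12 = np.zeros(channels[0].shape)
    grads = []
    for I in channels:
        Iy, Ix = np.gradient(I)                  # rows ~ y, columns ~ x
        grads.append((Ix, Iy))
        g11 += (Ix * np.conj(Ix)).real
        g22 += (Iy * np.conj(Iy)).real
        g12 += 0.5 * (Ix * np.conj(Iy) + Iy * np.conj(Ix)).real
    g = g11 * g22 - g12 ** 2
    gy, gx = np.gradient(g)
    out = []
    for (Ix, Iy), I, th in zip(grads, channels, thetas):
        # Eq. (8) with the assumed dF/dI_x^l = e^{i theta_l} I_x^l
        P = g * np.exp(1j * th) * Ix
        Q = g * np.exp(1j * th) * Iy
        _, Px = np.gradient(P)
        Qy, _ = np.gradient(Q)
        # Eq. (9)
        It = (Px + Qy - (gx * P + gy * Q) / (2.0 * g)) / g ** (beta + 0.5)
        out.append(I + dt * It)
    return out
```

Note how the channels interact only through the shared determinant *g*, which is exactly how the metric couples the color channels in this formulation.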

As a special case, we can easily recover the isotropic complex diffusion equation of [5] by applying Eq. (9) to a gray-scale image and setting the metric *g*_{μν} to the identity matrix. Then *g* = 1, *I*(*x, y*) = *I*_{R}(*x, y*) + *iI*_{M}(*x, y*), and Eq. (9) reduces to

$$\frac{\partial {I}_{R}}{\partial t}=\mathrm{cos}\left(\theta \right)\mathrm{\Delta}{I}_{R}-\mathrm{sin}\left(\theta \right)\mathrm{\Delta}{I}_{M}$$

(10)

$$\frac{\partial {I}_{M}}{\partial t}=\mathrm{sin}\left(\theta \right)\mathrm{\Delta}{I}_{R}+\mathrm{cos}\left(\theta \right)\mathrm{\Delta}{I}_{M}.$$

(11)

There is no imaginary part initially, since the initial condition is purely real; however, a nonzero θ creates an imaginary part through the iterations.
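This behavior is easy to demonstrate: iterating Eqs. (10)–(11), i.e. *I*_{t} = *e*^{iθ}Δ*I*, from a purely real step image produces a nonzero imaginary part with the sign pattern of a (smoothed) second derivative across the edge. A minimal sketch, with an illustrative function name and Neumann boundaries via edge padding:

```python
import numpy as np

def linear_complex_diffusion(I0, theta=np.pi / 30, dt=0.1, steps=100):
    """Iterate I_t = exp(i*theta) * Laplacian(I), i.e. Eqs. (10)-(11),
    starting from a purely real image."""
    I = I0.astype(complex)
    for _ in range(steps):
        Ip = np.pad(I, 1, mode="edge")       # replicate borders (Neumann)
        lap = (Ip[:-2, 1:-1] + Ip[2:, 1:-1] +
               Ip[1:-1, :-2] + Ip[1:-1, 2:] - 4.0 * I)
        I = I + dt * np.exp(1j * theta) * lap
    return I
```

With θ = 0 the iteration is ordinary linear diffusion and the image stays real; any nonzero θ rotates part of the Laplacian into the imaginary channel at every step.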

**3. DENOISING AND EDGE ENHANCEMENT EXPERIMENTS**

Denoising an image by anisotropic complex diffusion was introduced in [5]. The authors used the imaginary part as a stopping criterion and compared their method with the Perona-Malik flow, showing that anisotropic complex diffusion avoids the staircasing effects produced by the Perona-Malik flow. However, their approach had no geometric interpretation, and they did not show how to improve the (imaginary-part) edge enhancement in a noisy image using a flow on an image graph-based representation. In this paper, we apply our method to noisy color images using an image graph representation. There are two tuning parameters in our model: the exponent β in Eq. (9) and the phase θ in the functional *F* of Eq. (6). In [5], large values of the phase θ made the edges represented by the imaginary part thicken with increasing iterations, and a small θ (less than 5 degrees) was recommended for both isotropic and anisotropic diffusion in order to obtain sharp edges. In contrast, in our work, large phase values increase the magnitude of the imaginary part and slow down the diffusion speed near edges. The exponent β of the nonlinear scale multiplier 1/*g*^{β} likewise controls how strongly the flow slows near edges.

The results of denoising depend on the parameters θ and β, as in the earlier approaches [2, 5], and the optimal choice depends on the amount of noise: larger phase angles θ and larger β lead to diffusions that are more sensitive to edges. We applied the complex RGB flow to color images with added Gaussian noise (var = 0.001) and compared the results with the Beltrami flow. Our test image had additive Gaussian noise (25.3 dB). Figs. 1(a) and 1(b) show the original image and the noisy version, respectively. We used the peak SNR (PSNR) as the stopping criterion for the iterations, stopping when the denoised images reached their maximum PSNR. Fig. 1(c) shows the denoised image using the complex (RGB) flow. The parameter values in our experiments were set to θ = 7π/30 and β = 5/6. All the experiments reported here were implemented in Matlab 2007a on an Intel Core Duo 2.16 GHz CPU. The complex (RGB) flow achieved denoising with a maximum PSNR of 26.6 dB in 38.6 sec. Fig. 1(d) shows the denoised image using the Beltrami flow, with a maximum PSNR of 25.4 dB in 13.8 sec. The result of the complex flow exhibits a higher degree of smoothing than the Beltrami flow. When the noise lies in the image detail, the Beltrami flow tends to mistake the noise for detail, which locally slows down the diffusion velocity. Fig. 1(e) shows the image denoised using the Beltrami flow after 89.8 sec of processing time (500 iterations).
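The PSNR stopping rule described above can be sketched as follows (hypothetical helper names, not the paper's code; `step` stands for one iteration of any of the flows being compared):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def run_until_psnr_peak(clean, noisy, step, max_iters=500):
    """Apply `step` repeatedly and keep the iterate of maximal PSNR,
    stopping as soon as the PSNR against the clean image starts to drop."""
    best, best_psnr, u = noisy, psnr(clean, noisy), noisy
    for _ in range(max_iters):
        u = step(u)
        p = psnr(clean, u)
        if p < best_psnr:
            break                            # PSNR peaked: stop iterating
        best, best_psnr = u, p
    return best, best_psnr
```

Note that this rule needs the clean reference image, so it is an evaluation protocol for controlled experiments rather than a blind stopping criterion.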

It was shown in [5] that the imaginary part of isotropic complex diffusion behaves as a smoothed second derivative of the original (real) image and as a shock filter. We can use this imaginary part as the edge information contained in the given image. To obtain this information from a noisy image, we can apply commonly used anisotropic flows as well as the isotropic complex diffusion [5]. Here, we applied the complex RGB flow to the noisy image in Fig. 1(b) to obtain improved edge enhancement over that reported in [5]. Figs. 2(a) and 2(b) depict the imaginary parts of Figs. 1(a) and 1(b), respectively. The parameter settings are θ = 7π/30 and β = 5/6. The images are rescaled to 8-bit RGB color images. Figs. 2(c) and 2(d) show the denoised imaginary parts from the complex diffusion after 38.6 sec and 72.4 sec of processing time, respectively. The noisy parts of Fig. 2(b) have been smoothed while the edges have been preserved in Figs. 2(c) and 2(d).

**Fig. 2.** (a) and (b) are the imaginary parts of Fig. 1(a) and Fig. 1(b); (c) and (d) are the denoised imaginary parts after 38.6 sec and 72.4 sec of processing time, respectively.

**4. CONCLUSION AND DISCUSSION**

In this paper, we presented a novel formulation of complex diffusion for simultaneous image smoothing and edge enhancement. The formulation involved an image graph representation as an embedded manifold, a novel image metric, and a novel action functional yielding a new complex diffusion. The results showed improved performance over the Beltrami flow reported in the literature; comparisons were carried out on noisy data using PSNR as a quantitative measure. Our future work will involve applying our model to complex-valued MRI data.

This research was funded in part by NIH EB007082.

**5. REFERENCES**

[1] Kimmel R, Sochen N, Malladi R. From high energy physics to low level vision. In: Scale-Space Theory in Computer Vision, Lecture Notes in Computer Science, Vol. 1252. Springer-Verlag; 1997. pp. 236–247.

[2] Sochen N, Kimmel R, Malladi R. A general framework for low level vision. IEEE Transactions on Image Processing, Special Issue on PDE-based Image Processing. 1998;7(3):310–318.

[3] Kimmel R, Malladi R, Sochen N. Images as embedded maps and minimal surfaces: movies, color, texture, and volumetric medical images. International Journal of Computer Vision. 2000;39(2):111–129.

[4] Polyakov AM. Quantum geometry of bosonic strings. Physics Letters B. 1981;103:207–210.

[5] Gilboa G, Sochen N, Zeevi YY. Image enhancement and denoising by complex diffusion processes. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2004;26(8):1020–1036.

[6] Gilboa G, Sochen N, Zeevi YY. Complex diffusion processes for image filtering. In: Scale-Space 2001, Lecture Notes in Computer Science, Vol. 2106. Springer-Verlag; 2001. pp. 299–307.

[7] Gilboa G, Sochen N, Zeevi YY. Regularized shock filters and complex diffusion. In: ECCV 2002, Lecture Notes in Computer Science, Vol. 2350. Springer-Verlag; 2002. pp. 399–413.

[8] Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1990;12(7):629–639.
