

IEEE Trans Image Process. Author manuscript; available in PMC 2010 August 10.


Published online 2010 March 11. doi: 10.1109/TIP.2010.2044962

PMCID: PMC2919295

NIHMSID: NIHMS222531

Peter C. Tay, Member, IEEE, Christopher D. Garson, Student Member, IEEE, Scott T. Acton, Senior Member, IEEE, and John A. Hossack, Senior Member, IEEE

Peter C. Tay, Department of Engineering and Technology, Western Carolina University, Cullowhee, NC 28723 USA (Email: ptay@email.wcu.edu)


Images produced by ultrasound systems are adversely affected by a stochastic process known as speckle. A despeckling method based upon removing outliers is proposed. The method is developed to contrast enhance B-mode ultrasound images. Contrast enhancement here means decreasing pixel variations in homogeneous regions while maintaining or improving differences in the mean values of distinct regions. The proposed despeckling filter is compared with other well-known despeckling filters. The evaluations of despeckling performance are based upon improvements to contrast enhancement, structural similarity, and segmentation results on a Field II simulated image and actual B-mode cardiac ultrasound images captured *in vivo*.

AN accurate anatomical display of organs such as the heart, kidney, prostate, and liver would be a beneficial aid to health practitioners in diagnosing ailments or assessing the health of the organ. Although a wide range of medical imaging modalities could be utilized, ultrasound is advantageous for the following reasons: only safe, nonionizing sound waves are used in the scanning process; the hardware is portable; the cost is low compared to other medical imaging modalities; and real-time or near real-time imaging is attainable.

Although there are many advantages to using ultrasound as an imaging modality, the images produced are hampered by a phenomenon known as speckle. Speckle is visible in all ultrasound images as a dense granular noise^{1} or as “snake-like” objects that appear throughout the image. Unfortunately, the presence of speckle adversely affects image contrast, edge detection, and segmentation.

An explanation of the cause of speckle is offered by Goodman in [1]. Goodman mathematically models speckle as the sum of a large number of complex phasors, a.k.a. a complex random walk. These complex phasors can interfere either constructively or destructively with each other. Thus, bright and dark points can be observed in close proximity. The relationship and relevance of acoustic speckle to laser speckle are given by Abbott and Thurstone in [2]. More relevant and detailed descriptions of ultrasound image formation and the statistics of ultrasound speckle can be found in [3], [4].

The experimentation performed in [3] provides evidence that the intensity, or pre-logarithm-compression envelope-detected amplitude, *J*(*n, m*) can be modeled by the multiplicative model given in (1)

$$J(n,m)=(P(n,m)\ast I(n,m)){\eta}_{\times}(n,m)$$

(1)

where the multiplicative noise *η*×(*n, m*) is samplewise independent and uncorrelated with the ideal image pixel value *I*(*n, m*), and *P*(*n, m*) is the point spread function (PSF) of the ultrasound imaging system. The multiplicative model can be made equivalent to the additive model

$$J(n,m)=P(n,m)\ast I(n,m)+{\eta}_{+}(n,m)$$

(2)

if the additive noise term *η*+(*n, m*) is made dependent upon the ideal image *I*(*n, m*) or other additive noise values.
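This equivalence can be checked numerically: writing *J* = (*P* ∗ *I*)*η*× = *P* ∗ *I* + (*P* ∗ *I*)(*η*× − 1) shows that the additive term *η*+ = (*P* ∗ *I*)(*η*× − 1) depends on the ideal image. The following NumPy sketch illustrates this; the test image, PSF shape, and Gamma noise law are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ideal image I and PSF P (names follow Eqs. (1)-(2)).
I = np.zeros((64, 64))
I[16:48, 16:48] = 5.0
P = np.outer(np.hanning(7), np.hanning(7))
P /= P.sum()

# Blur P * I via FFT-based circular convolution (a simplifying assumption).
PI = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(P, I.shape)))

# Multiplicative speckle model (1): J = (P * I) eta_x, samplewise independent noise.
eta_x = rng.gamma(shape=10.0, scale=0.1, size=I.shape)  # unit-mean multiplicative noise
J = PI * eta_x

# Equivalent additive model (2): J = P * I + eta_plus, where
# eta_plus = (P * I)(eta_x - 1) is dependent on the ideal image.
eta_plus = PI * (eta_x - 1.0)
assert np.allclose(J, PI + eta_plus)
```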

The intuition behind the proposed despeckling method is that speckle, as seen in the intensity image or in a logarithmically compressed intensity image (commonly called a B-mode image), is a stochastic process in which outliers inevitably occur. The proposed despeckling of the B-mode image iteratively removes outliers. The essence of our despeckling method was proposed in [5] to remove highly dense impulse noise. The ultimate goal of this paper is to enhance ultrasound slices to provide optimal contrast enhancement, which will subsequently be used to segment substantial anatomic features via a gradient-based edge detection method.

Smoothing of speckle and preservation of edges are, in a general sense, divergent goals. This problem is directly relevant not only to enhancing ultrasound images, but also to enhancing synthetic aperture radar (SAR), optical laser, and other imaging modalities. A wide variety of methods have been proposed to address speckle removal or reduction for a variety of applications. When addressing speckle as unwanted noise, adaptive filtering methods such as the Nagao [6], Lee [7], Frost [8], and Kuan [9], [10] filters, the adaptive weighted median filter (AWMF) [11], speckle reducing anisotropic diffusion [12], and other [13]-[28] methods have been proposed.

The comparison in this paper is unlike the one published in [29], which was based upon subjective interpretation by expert radiologists. The results reported in [29] found that adaptive local statistics filters like the Lee [7], the Frost *et al.* [8], and the Kuan *et al.* filters [10] were robust in providing a visually enhanced image to make an accurate diagnosis of the amount, composition, and texture of plaque in the carotid artery.

Our application differs from the one in [29] in that we are interested in autonomous or semi-autonomous segmentation. We rely on several performance metrics to quantify improvements to contrast, structural similarity, and segmentation. The general goal is to determine which despeckling method best removes speckle while producing the best contrast and preservation of edges. Best contrast is meant in the sense of decreasing the variance in homogeneous regions while keeping distinct regions well defined. Although the proposed despeckling method has been favorably evaluated in [30], the evaluation offered in Section IV is based upon three objective metrics. A generalization of the Fisher discriminant [31] and the structural similarity (SSIM) index [32] are used to quantify how well various algorithms achieve this general goal. Additionally, an evaluation of a despeckling method's ability to promote robust segmentation of actual B-mode ultrasound cine data of a mouse heart captured *in vivo* is performed, where a shape metric is used to evaluate the results.

If we assume that the complex-valued *IQ* data is modeled as a sum of complex phasors, as Goodman suggests in [1]

$$IQ(n,m)=\sum _{k=1}^{K}{a}_{k}(n,m){e}^{j{\phi}_{k}(n,m)}$$

for some positive integer *K*, then it can be shown that the amplitudes of the *IQ* data |*IQ*(*n, m*)| in a constant reflectivity region are Rayleigh or, more generally, Rician distributed when *K* is sufficiently large, with the added assumptions that the amplitudes *a _{k}*(*n, m*) are independent and identically distributed and that the phases *φ _{k}*(*n, m*) are uniformly distributed over [−*π, π*).
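The Rayleigh behavior of the phasor sum can be checked with a short Monte Carlo experiment. In the sketch below the amplitude law is an illustrative assumption; the uniform phase assumption follows Goodman's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sum of K random phasors with i.i.d. amplitudes a_k and phases phi_k
# uniform on [-pi, pi): the amplitude |IQ| should be near-Rayleigh for large K.
K, trials = 100, 20_000
a = rng.uniform(0.5, 1.5, size=(trials, K))        # illustrative amplitude law
phi = rng.uniform(-np.pi, np.pi, size=(trials, K))
IQ = np.sum(a * np.exp(1j * phi), axis=1)
amp = np.abs(IQ)

# A Rayleigh variable has mean/std = sqrt(pi / (4 - pi)) ~= 1.913;
# the empirical ratio should be close to that value.
ratio = amp.mean() / amp.std()
print(ratio)
```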

Fig. 1(a) shows an eight-bit full-scale contrast-stretched B-mode ultrasound image of a phantom consisting of three disks representing regions with various return echo intensities. This B-mode image was acquired from a Siemens Sequoia system. The linear array transducer, operating at 3 MHz, is located at the top of the image. In Fig. 1(b)-(e) the histograms of the pixel values enclosed in the yellow, green, blue, and red boxes of Fig. 1(a) are shown, respectively. These histograms highlight that the pixel values in a homogeneous region are the result of some stochastic process.

B-mode ultrasound image of a phantom and histograms of pixel values enclosed in the (b) yellow, (c) green, (d) blue, and (e) red boxes of Fig. 1(a). (a) Ultrasound phantom image, (b) yellow box, (c) green box, (d) blue box, and (e) red box.

The following subsections outline some methods that are tested in Section IV. The algorithms outlined are not meant to be a comprehensive list or even a list of the most robust despeckling methods. Rather, each algorithm was chosen because it represents a unique approach to achieving despeckling. Moreover, most recent methods have compared their results with at least one of the following methods.

Suppose the detected image is given as the additive model in (2) where the PSF **P** is known or estimated. Applying the Wiener filter as described in [34] to the detected image **J** results in a low-pass filtered approximation of the ideal image **I**. The Wiener filtered image $\stackrel{~}{\mathbf{J}}$ is determined as

$$\stackrel{~}{\mathbf{J}}\underleftrightarrow{\mathrm{DFT}}\widehat{\stackrel{~}{\mathbf{J}}}=\stackrel{~}{\mathbf{P}}\widehat{\mathbf{J}}=\stackrel{~}{\mathbf{P}}\{\widehat{\mathbf{P}}\widehat{\mathbf{I}}+{\widehat{\eta}}_{+}\}$$

where $\mathbf{P}\underleftrightarrow{\mathrm{DFT}}\widehat{\mathbf{P}}$, likewise for **J**,**I**,*η*+ and $\widehat{\mathbf{J}}$,$\widehat{\mathbf{I}}$,${\widehat{\eta}}_{+}$, respectively,

$$\stackrel{~}{P}(k,l)=\begin{cases}\dfrac{{\widehat{P}}^{\ast}(k,l)}{{\mid \widehat{P}(k,l)\mid}^{2}+{\sigma}_{{\eta}_{+}}^{2}}, & \text{if}\ \widehat{P}(k,l)\ne 0\\ 0, & \text{otherwise}\end{cases}$$

and ${\sigma}_{{\eta}_{+}}^{2}$ is the variance of the noise.
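A minimal frequency-domain implementation of this Wiener filter might look as follows. This is a sketch assuming circular convolution and a known noise variance; the function name is ours.

```python
import numpy as np

def wiener_deconvolve(J, P, noise_var):
    """Frequency-domain Wiener filter for the additive model (2).

    J: observed image, P: PSF array (zero-padded to J's shape),
    noise_var: variance of the additive noise eta_plus.
    """
    P_hat = np.fft.fft2(P, J.shape)      # DFT of the PSF
    J_hat = np.fft.fft2(J)               # DFT of the observed image
    # P_tilde(k,l) = conj(P_hat) / (|P_hat|^2 + sigma^2), and 0 where P_hat = 0
    denom = np.abs(P_hat) ** 2 + noise_var
    P_tilde = np.where(P_hat != 0, np.conj(P_hat) / denom, 0.0)
    return np.real(np.fft.ifft2(P_tilde * J_hat))
```

With a zero noise variance and an invertible PSF this reduces to exact inverse filtering, which is one way to sanity-check an implementation.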

Lee in [7] proposed methods to contrast enhance an image and to restore an image corrupted by noise. Lee's noisy image models in [7] are those given in (1) and (2) with the PSF ignored. Lee's paper proposes an adaptive filter that aggressively smooths, via local averaging, in homogeneous regions while leaving regions that contain significant image features untouched.

Contrast enhancement is promoted at each pixel by a linear rescaling of the local mean summed with the multiplication of a gain applied to the difference of the pixel value with the local mean to determine a new pixel value. Formally, let *J*(*n, m*) be the original noisy pixel value, then the new pixel value $\widehat{J}(n,m)$ is determined as

$$\widehat{J}(n,m)=g\left(\mu \right)+k(J(n,m)-\mu )$$

(3)

where *μ* is the local mean. The function *g*(·) is a linear rescaling of the mean, that is, *g*(*μ*) = *aμ* + *b*, where the parameters $a,b\in \mathbb{R}$ are determined to allow the new pixel value to utilize the full eight-bit dynamic range. As pointed out in [7], if 0 ≤ *k* < 1, then (3) determines a smoothing filter, i.e., a low-pass filter. When the gain *k* is greater than one, (3) attempts to “sharpen” image features, that is, enhance the edges.

The gain *k* is adaptively chosen as a function of the local statistics. The gain *k* is determined as

$$k=\frac{{\sigma}^{2}-{\sigma}_{\eta}^{2}}{{\sigma}^{2}}$$

(4)

where *σ*^{2} is the local variance in the same window about *J*(*n, m*) that determined *μ*, and ${\sigma}_{\eta}^{2}$ is the global noise variance. When ${\sigma}^{2}\gg {\sigma}_{\eta}^{2}\ne 0$, the gain parameter is less than but approximately one, in which case the filter in (3) performs like an identity filter, that is, $\widehat{J}(n,m)\approx J(n,m)$. If the local variance *σ*^{2} is greater than but nearly equal to the global noise variance ${\sigma}_{\eta}^{2}$, then $\widehat{J}(n,m)\approx \mu $ and the filter specified by (3) serves as a low-pass filter. Since the global noise variance can only be greater than or equal to zero, the gain parameter *k* can never exceed one. Thus, in homogeneous regions *k* should be set to zero so that (3) provides local smoothing. When *J*(*n, m*) is determined to be an edge pixel, *k* should be set to one and the pixel is left unaltered.
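The adaptive filter of (3)-(4) can be sketched in NumPy as follows, assuming *g*(*μ*) = *μ* (i.e., no contrast rescaling) and clipping the gain to [0, 1] as discussed above; the window size is an illustrative choice.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lee_filter(J, noise_var, w=7):
    """Adaptive Lee filter for the additive model, Eqs. (3)-(4),
    with g(mu) = mu (a simplifying assumption, no rescaling)."""
    pad = w // 2
    Jp = np.pad(J, pad, mode='reflect')
    win = sliding_window_view(Jp, (w, w))   # all w x w local windows
    mu = win.mean(axis=(-1, -2))            # local mean
    var = win.var(axis=(-1, -2))            # local variance sigma^2
    # gain k = (sigma^2 - sigma_eta^2) / sigma^2, clipped to [0, 1]
    k = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mu + k * (J - mu)                # Eq. (3) with g(mu) = mu
```

In a homogeneous region the local variance approaches the noise variance, *k* → 0, and the output approaches the local mean, so the output variance drops well below that of the input.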

When the image is determined or assumed to be degraded by multiplicative noise as in (1), then Lee in [7] approximates the image as an additive noise model of the form

$$\stackrel{~}{J}(n,m)=AI(n,m)+B\eta (n,m)+C$$

(5)

where $A,B,C\in \mathbb{R}$ are chosen so that the mean squared error between *J*(*n, m*) of the multiplicative model (1) and $\stackrel{~}{J}(n,m)$ of the additive model (5) is minimized. Since the image polluted by multiplicative noise is recast as an image with additive noise, the adaptive filter employed in the additive case, given in (3), can be used, though the gain parameter *k* is redefined. Given an *a priori* determination of the mean and variance of the stationary noise, *μ _{η}* and ${\sigma}_{\eta}^{2}$ respectively, the adaptive gain parameter is determined as

$$k=\frac{{\mu}_{\eta}Q}{{\mu}_{I}^{2}{\sigma}_{\eta}^{2}+{\mu}_{\eta}^{2}Q}$$

where *μ _{I}* is the mean of the ideal image *I*(*n, m*), estimated as

$${\mu}_{I}=\frac{\mu}{{\mu}_{\eta}}$$

within some fixed window and *μ* is the local mean of *J*(*n, m*) within the same window. The variable *Q* is determined as

$$Q=\frac{{\sigma}^{2}+{\mu}^{2}}{{\sigma}_{\eta}^{2}+{\mu}_{\eta}^{2}}-{\mu}_{I}^{2}$$

where *σ*^{2} is the local variance of *J*(*n, m*) within some window.

The speckle reducing anisotropic diffusion (SRAD) method of Yu and Acton in [12] adaptively applies an anisotropic diffusion method. Inspired by the time-dependent heat diffusion equation, Perona and Malik in [35] published a method to perform anisotropic diffusion on noisy images. In [35], the diffusion equation of an image with continuous variables over time, *J*(*x, y, t*), is given as

$$\frac{\partial J(x,y,t)}{\partial t}=\mathrm{div}\{c(x,y,t)\nabla J(x,y,t)\}=c(x,y,t){\nabla}^{2}J(x,y,t)+\nabla c(x,y,t)\cdot \nabla J(x,y,t)$$

(6)

where $(x,y,t)\in {\mathbb{R}}^{3}$, div is the divergence operator, ∇ is the gradient operator, ∇^{2} is the Laplacian operator, **v** · **w** represents the dot product of two vectors, and *J*(*x, y*, 0) = *J*(*x, y*). When *c*(*x, y, t*) is a constant, (6) defines isotropic diffusion. For anisotropic diffusion, the function known as the conduction coefficient (a.k.a. coefficient of variation, diffusion coefficient) *c*(·) is given as

$$c(x,y,t)=\mathrm{exp}\left(-{\left(\frac{\parallel \nabla J(x,y,t)\parallel}{K}\right)}^{2}\right)$$

where *K* is some constant and ‖·‖ is the norm.
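For intuition, one explicit finite-difference step of the Perona-Malik equation (6) with the exponential conduction coefficient can be sketched as follows. The 4-neighbor discretization and step size are common choices for illustration, not taken verbatim from [35].

```python
import numpy as np

def perona_malik_step(J, K=0.1, dt=0.2):
    """One explicit Perona-Malik diffusion step for Eq. (6), using the
    exponential conduction coefficient and a 4-neighbor discretization."""
    # differences toward the four neighbors (periodic boundaries via roll)
    dN = np.roll(J, -1, axis=0) - J
    dS = np.roll(J, 1, axis=0) - J
    dE = np.roll(J, -1, axis=1) - J
    dW = np.roll(J, 1, axis=1) - J
    # conduction coefficient c = exp(-(|grad| / K)^2), per direction
    c = lambda d: np.exp(-(np.abs(d) / K) ** 2)
    # discretized divergence of c * grad J, explicit Euler update
    return J + dt * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
```

Where the local differences are small relative to *K* the coefficients approach one and the step acts like isotropic (heat) diffusion; across strong edges the coefficients shrink and diffusion is inhibited.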

Yu and Acton in [12] proposed an anisotropic diffusion method where the coefficient of variation is determined as a function of the ratio of the local variance to the local squared mean. In other words, SRAD utilizes the instantaneous coefficient of variation (ICOV) as a variable in its conduction coefficient function. The ICOV of SRAD determines whether a pixel should be smoothed or left intact. The SRAD algorithm iteratively processes a nonzero-valued image *I*(*x, y*) := *I*(*x, y*; 0) according to

$$\frac{\partial I(x,y;t)}{\partial t}=\mathrm{div}[c\left(q\right)\nabla I(x,y;t)]$$

(7)

and

$${\left.\frac{\partial I(x,y;t)}{\partial \overrightarrow{n}}\right|}_{B\left(\Omega \right)}=0$$

(8)

where $\overrightarrow{n}$ is the outward normal vector to *B*(Ω), the border of Ω. Equations (7) and (8) are known as the SRAD partial differential equations. The diffusion coefficient *c*(·) is defined either as the quotient

$$c\left(q(x,y;t)\right)=\frac{{q}_{0}^{4}\left(t\right)+{q}_{0}^{2}\left(t\right)}{{q}_{0}^{4}\left(t\right)+{q}^{2}(x,y;t)}$$

(9)

or as the exponential function

$$c\left(q(x,y;t)\right)=\mathrm{exp}\left(\frac{{q}_{0}^{2}\left(t\right)-{q}^{2}(x,y;t)}{{q}_{0}^{4}\left(t\right)+{q}_{0}^{2}\left(t\right)}\right).$$

(10)

In (9) and (10), if *q*(*x, y; t*) ≈ *q*_{0}(*t*), then *c*(*q*(*x, y; t*)) ≈ 1 and smoothing with respect to (7) is enacted. If *q*(*x, y; t*) ≫ *q*_{0}(*t*), then the diffusion coefficient is very small and smoothing in a local region around (*x, y*) is averted. At time *t*, if (*x, y*) resides in a homogeneous region, then smoothing can be promoted by allowing *q*(*x, y; t*) ≈ *q*_{0}(*t*). When (*x, y*) lies on an edge or in the vicinity of an edge, defining *q*(*x, y; t*) ≫ *q*_{0}(*t*) prohibits deterioration of the edge. Yu and Acton defined *q*_{0}(*t*), the coefficient of variation in fully developed speckle, as the ratio of the noise standard deviation to the noise mean

$${q}_{0}\left(t\right)=\frac{{\sigma}_{\eta}\left(t\right)}{{\mu}_{\eta}\left(t\right)}$$

(11)

and the ICOV is defined as

$$q(x,y;t)=\sqrt{\frac{\frac{1}{2}{\left(\frac{\mid \nabla I\mid}{I}\right)}^{2}-\frac{1}{16}{\left(\frac{{\nabla}^{2}I}{I}\right)}^{2}}{{(1+\left(\frac{1}{4}\right)\left(\frac{{\nabla}^{2}I}{I}\right))}^{2}}}.$$

The standard deviation and the mean of the noise in (11) are determined by calculating the standard deviation and mean within a homogeneous region.

Lastly, it should be noted that although the intuitive ideas of SRAD are outlined using continuous variables $(x,y,t)\in {\mathbb{R}}^{3}$, our implementation of SRAD used in the evaluation given in Section IV is the discrete form detailed in [12].
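As a rough illustration of the quantities involved, the ICOV and the diffusion coefficient of (9) can be computed as follows. This is a sketch using central differences and a 4-neighbor Laplacian; the full discrete SRAD scheme of [12] additionally applies the update (7) under the boundary condition (8).

```python
import numpy as np

def icov(I):
    """Instantaneous coefficient of variation q(x,y;t) of SRAD (sketch).

    I must be nonzero-valued, as required by SRAD. Central differences
    approximate the gradient; a 4-neighbor stencil approximates the Laplacian.
    """
    gy, gx = np.gradient(I)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0)
           + np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4.0 * I)
    num = 0.5 * (grad_mag / I) ** 2 - (1.0 / 16.0) * (lap / I) ** 2
    den = (1.0 + 0.25 * (lap / I)) ** 2
    return np.sqrt(np.maximum(num, 0.0) / den)

def srad_coefficient(q, q0):
    """Diffusion coefficient of Eq. (9): c = (q0^4 + q0^2) / (q0^4 + q^2)."""
    return (q0 ** 4 + q0 ** 2) / (q0 ** 4 + q ** 2)
```

Note that *c*(*q*) = 1 exactly when *q* = *q*₀, matching the smoothing condition discussed above, and *c* becomes small as *q* grows past *q*₀.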

Since the publication of the Lee filter [7], numerous despeckling techniques [8]-[13], [16], [19] have been proposed that utilize the same basic adaptive filtering concept. The general concept of these filters is to adaptively determine the degree of local smoothing that should be performed. The differences among the various despeckling methods lie in how they choose to smooth and/or the criteria used to determine the degree of smoothing. The parameter *k* in (3) determines the degree of smoothing and is generally derived from some function of the local statistics. This parameter usually can take any continuous value between zero and one. Ideally, smoothing should occur when the pixel is within a homogeneous region, and the local mean *μ* should be the mean determined from this homogeneous region. If the pixel value lies on an edge due to some significant anatomical feature of interest, then the original pixel value should be preserved.

The proposed despeckling method is aimed at removing outliers. Removal of outliers for denoising images was suggested by Abreu *et al.* in [5] to remove highly dense impulse noise. The general concept of the proposed despeckling method is that outlier pixel values are smoothed aggressively; otherwise, the current pixel value is preserved. Thus, the adaptive parameter *k* of (3) is binary (either *k* = 0 or *k* = 1) in the proposed despeckling method. By iteratively filtering in this manner, the variance in each homogeneous region or class (a collection of similar regions) is reduced while the mean values of differing regions or classes are maintained.

The proposed despeckling method relies on how outliers are defined. Although there are numerous definitions of an outlier that merit investigation, the proposed despeckling method considers the pixel value being scrutinized an outlier if it is a local extremum. As prescribed in [5], the outliers are replaced by a value that does not rely on the outlier value. The impulse removal method in [5] replaces each outlier with the rank order mean, which is determined by ordering the pixel values in some odd-size window (the outlier value is not included in this window) and taking the average of the two middle values of this ordering, i.e., the median value of an even number of windowed samples.

Although the rank order mean is worth considering in future improvements, we prefer to adhere to computational efficiency (no ordering of pixel values is needed) and replace each outlier with the local mean of some surrounding neighborhood $\mathcal{N}$, where the outlier value is not a member of $\mathcal{N}$. The choice of the neighborhood $\mathcal{N}$ is important. A large neighborhood fully contained in a homogeneous region will produce a local mean that is close to the mean of the homogeneous region. Yet a neighborhood that is too large will extend past the homogeneous region and yield a misleading value. If a small neighborhood is used, then more iterations will be needed to achieve the same result as with a larger neighborhood. A small neighborhood would be beneficial in preserving fine structures in an ultrasound image.

Each iteration of the proposed method produces a sequence with locally reduced variance. The local extrema of the new sequence are considered outliers and the process is iterated. The steps of our proposed iterative method applied to a 1-D signal or a 2-D image **J** are described in the following. For the sake of clarity, the steps of the proposed despeckling algorithm are illustrated in 1-D in Fig. 2.

Illustration of one iteration of the proposed despeckling method. (a) Ideal (green) and noisy (black) signals. The red triangles (▲) are local maxima and the blue triangles (▼) are local minima of the noisy signal. (b) The ▲s and ▼s in Fig. 2(a) are **...**

- The algorithm is initialized with the original speckled image
  $$\stackrel{~}{\mathbf{J}}=\mathbf{J}.$$
- White Gaussian noise with a small variance is initially and periodically added to the current image
  $$\stackrel{~}{J}(n,m)=\stackrel{~}{J}(n,m)+\eta (n,m).$$
  This step ensures that the filter acts on a local region of the current signal or image $\stackrel{~}{\mathbf{J}}$.
- An iteration *i* begins by determining the locations of the local maxima and local minima of $\stackrel{~}{\mathbf{J}}$. The locations of these extrema are defined by the set
  $$\begin{array}{c}E=\{\overrightarrow{n}\mid {\stackrel{~}{J}}_{i-1}\left(\overrightarrow{n}\right)\ \text{meets condition 1 or 2}\}\hfill \\ \text{Condition 1:}\ {\stackrel{~}{J}}_{i-1}\left(\overrightarrow{n}\right)>{\stackrel{~}{J}}_{i-1}\left(\overrightarrow{m}\right)\hfill \\ \text{Condition 2:}\ {\stackrel{~}{J}}_{i-1}\left(\overrightarrow{n}\right)<{\stackrel{~}{J}}_{i-1}\left(\overrightarrow{m}\right)\hfill \end{array}$$
  where *m* = *n* ± 1 for 1-D signals and $\overrightarrow{m}=({m}_{1},{m}_{2})=({n}_{1}\pm 1,{n}_{2}\pm 1)$ for 2-D images. An illustration using a signal of fifty samples is shown in Fig. 2. Fig. 2(a) shows the ideal noise-free signal in green and the noisy signal (the ideal signal with Gaussian noise added) in black. The red upward-pointing (▲) and blue downward-pointing (▼) triangles denote the local peaks and valleys of the noisy signal, respectively. These local extrema are considered outliers.
- Without using the local extrema values, the algorithm replaces each extremum with the local mean taken from neighboring samples. For all $\overrightarrow{n}\in E$
  $${\stackrel{~}{J}}_{i}\left(\overrightarrow{n}\right)=\frac{1}{\mid \mathcal{N}\mid}\sum _{\overrightarrow{s}\in \mathcal{N}}{\stackrel{~}{J}}_{i-1}\left(\overrightarrow{s}\right)$$
  where $\mathcal{N}$ is some local neighborhood of $\overrightarrow{n}$, $\mid \mathcal{N}\mid $ is the cardinality of the set $\mathcal{N}$, and $\overrightarrow{n}\notin \mathcal{N}$. In the 1-D illustration of Fig. 2, the local maxima (▲) and local minima (▼) in Fig. 2(a) are replaced with the corresponding local averages, depicted as red or blue circles (○) in Fig. 2(b). This produces the reduced-local-variance red signal shown in Fig. 2(c).
- Convergence is attained when
  $$\sum _{\forall \overrightarrow{n}}\mid {\stackrel{~}{J}}_{i-1}\left(\overrightarrow{n}\right)-{\stackrel{~}{J}}_{i}\left(\overrightarrow{n}\right)\mid <\epsilon $$
  for some predefined *ϵ* > 0. If convergence has not been attained and the total number of iterations has not been reached, then the process is iterated from either step 2 or step 3, depending upon whether more zero-mean, small-variance noise should be added. If the total number of iterations is met, the filtering process is terminated.

By removing outliers at each iteration, this method reduces the local variance of the signal/image. In effect, this method produces a converging sequence of signals or images by squeezing or compressing the stochastically distributed pixel values to some limiting value. Since the proposed filtering method described is able to compress the distribution of pixel values, the proposed method is fittingly named the squeeze box filter (SBF).
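One SBF iteration, corresponding to steps 3 and 4 above, can be sketched in NumPy as follows. Here the neighborhood $\mathcal{N}$ is fixed to the 8-connected neighbors (one possible choice), and boundary handling by edge padding is our assumption.

```python
import numpy as np

def sbf_iteration(J):
    """One squeeze box filter (SBF) iteration: replace each strict local
    extremum with the mean of its 8-neighborhood, the extremum excluded."""
    Jp = np.pad(J, 1, mode='edge')
    # stack the eight neighbors of every pixel
    shifts = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]
    nb = np.stack([Jp[1 + i: Jp.shape[0] - 1 + i,
                      1 + j: Jp.shape[1] - 1 + j] for i, j in shifts])
    is_max = (J > nb).all(axis=0)   # Condition 1: strict local maximum
    is_min = (J < nb).all(axis=0)   # Condition 2: strict local minimum
    out = J.copy()
    outliers = is_max | is_min
    # local mean over N, which excludes the outlier value itself
    out[outliers] = nb.mean(axis=0)[outliers]
    return out
```

Repeated application (optionally re-injecting small-variance Gaussian noise, as in step 2) squeezes the pixel distribution in each homogeneous region toward its mean.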

To evaluate the despeckling performance of various filters against the proposed SBF despeckling method, a template image was created and is denoted as **T**. An eight-bit full-scale contrast-stretched template image is used as the ideal noise-free image and is shown in Fig. 3(a). The template **T** consists of three classes. The background class is shaded in gray, and the pixel values in this class are set to one. In the template **T**, a class consisting of five disks of various diameters vertically aligned on the left is shaded in white, and the pixel values in this class are set to five. The pixel values of the vertically aligned class of five black disks of various diameters on the right of template **T** are set to zero.

Two and three class templates and ultrasound simulations used in the comparison: (a) three-class template, (b) Field II simulation, (c) SSIM map, and (d) Canny edge map of Fig. 3(b).

The Field II simulation program [36], [37], version 3.16, running under The MathWorks MATLAB®, is used to simulate a B-mode ultrasound image of template **T** from a linear array. The Field II simulated B-mode ultrasound image of template **T** is shown in Fig. 3(b). The indices of the images in Fig. 3 represent distances in units of millimeters (mm). The simulation is generated with the linear array transducer at the top of the image. Thus, the row indices indicate axial distance away from the transducer and the column indices indicate lateral distance. The five small bright white dots along the left side at axial distances 40, 50, 60, 70, and 80 mm in Fig. 3(b) were placed to simulate the ultrasound system's PSF at various distances from the transducer. These simulated PSFs are not included in any of the classes. The other relevant parameters of the Field II simulation of an ultrasound system attached to a linear array are listed in Table I.

The image in Fig. 3(c) is the eight-bit full-scale contrast-stretched SSIM index map of the Field II simulated image in Fig. 3(b) against the template in Fig. 3(a). The SSIM index provides a metric based upon local measures of luminance and contrast. The SSIM index compares the local structural similarity of an image against an ideal image, which is free of noise and distortion. The SSIM index varies between −1 and 1, where −1 indicates the worst local similarity to the ideal image and 1 indicates the best local similarity. The image in Fig. 3(c) shows the SSIM index at each pixel of the Field II simulated image in Fig. 3(b) against the template image in Fig. 3(a). In Fig. 3(c), whiter pixels indicate better structural similarity and darker pixels indicate poorer structural similarity. The SSIM index map in Fig. 3(c) shows that there are only three regions in the Field II simulated image that exhibit good structural similarity. These three regions are located within the three larger dark disks in Fig. 3(b).

Fig. 3(d) shows the Canny edge map of the Field II simulated image in Fig. 3(b). This edge map does not exhibit any edge gradients relevant to the dark and white disks. In this respect, the Field II simulated image in Fig. 3(b) accurately portrays most actual ultrasound images.

Evaluation of despeckling methods’ improvements to an ultrasound image requires a meaningful quantifiable measure. This measure should indicate when different homogeneous regions are properly defined. Additionally, this metric should account for differences in pixel values from the mean of a class. A class is taken to be a collection of homogeneous regions. This measure in essence will determine how well an algorithm despeckles the simulated ultrasound image while keeping the distinct classes well separated.

The method we propose to quantify the improvements made to an ultrasound image is as follows:

- A despeckling algorithm is applied to the simulated image **J** in Fig. 3(b) to produce the eight-bit full-scale contrast-stretched processed image, denoted as **I**_{alg}.
- The means and variances in each predefined class of **I**_{alg} are computed.
- The ratio of the average squared differences of the interclass means to the sum of the intraclass variances is calculated, and this quantity is denoted as *Q*(**I**_{alg}). Explicitly, the quantity used to evaluate the ability of a despeckling algorithm to preserve distinct classes and promote smoothing within homogeneous regions of each class is given as
  $$Q\left({\mathbf{I}}_{\mathrm{alg}}\right)\stackrel{\mathrm{def}}{=}\frac{\sum _{k>l}{({\mu}_{{C}_{k}}-{\mu}_{{C}_{l}})}^{2}}{(N-1)\sum _{k=1}^{N}{\sigma}_{{C}_{k}}^{2}}$$
  (12)
  where
  $${\mu}_{{C}_{k}}\stackrel{\mathrm{def}}{=}\frac{1}{\mid {C}_{k}\mid}\sum _{(n,m)\in {C}_{k}}{I}_{\mathrm{alg}}(n,m)$$
  $${\sigma}_{{C}_{k}}^{2}\stackrel{\mathrm{def}}{=}\frac{1}{\mid {C}_{k}\mid}\sum _{(n,m)\in {C}_{k}}{({I}_{\mathrm{alg}}(n,m)-{\mu}_{{C}_{k}})}^{2}$$
  and |*C _{k}*| denotes the number of pixels in class *C _{k}*.
- To avoid sensitivity to resolution, the quantity we use to measure the improvement to the image **J** due to algorithm alg is
  $$\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)\stackrel{\mathrm{def}}{=}\frac{Q\left({\mathbf{I}}_{\mathrm{alg}}\right)}{Q\left(\mathbf{J}\right)}.$$
  (13)
  We refer to this relative performance metric $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ as the ultrasound despeckling assessment index (USDSAI).

If the classes are well separated, then the numerator in (12) will be large. If the segmentation or restoration algorithm produces small intraclass variances, then the denominator of (12) will be small, resulting in a large USDSAI quantity $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$. A large value indicates that alg produces the desirable restoration result. If $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ is greater than one, then the algorithm being tested improved the ultrasound image **J**. Larger $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ values indicate better performance when comparing various algorithms. If $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ is less than one, then we consider the algorithm being tested detrimental to the ultrasound image **J**. If $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ is equal to one, then the contrast of the image provided by the algorithm being tested is equivalent to the contrast of the original unprocessed image.

To elucidate the significance of the USDSAI quantity $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ defined in (13), we examine the three extreme cases. In the first case, the simulated image **J** is not constant and the resulting image of some algorithm **I**_{alg} is constant. Clearly, *Q*(**I**_{alg}) = (0/0), which we define as zero. Thus, the quantity $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ is equal to zero. We consider algorithms that yield small $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ values, less than one, as indicative of poor performance.

In the second case, the simulated image **J** is neither a constant nor a scaled version of the template image. The resulting image of some algorithm is the template image, that is **I**_{alg} = **T**. In this case

$$Q\left({\mathbf{I}}_{\mathrm{alg}}\right)=\frac{K}{0}=\infty \Rightarrow \stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)=\infty $$

when the average of the squared differences of the interclass means, represented as *K*, is not equal to zero. We consider this quantity as indicative of an ideal improvement to the image **J**, i.e., the algorithm being tested performed ideally.

In the third and final case, the simulated image **J** is composed of classes *C*_{1}, *C*_{2}, *C*_{3}, …, *C*_{N} and, in each class *C*_{k}, the pixel values of the resulting image **I**_{alg} form a scaled Kronecker delta whose class mean is μ_{*C*_{k}} = *K*_{k}. The numerator of (12) then satisfies

$$\sum _{k>l}{(\mu {C}_{k}-\mu {C}_{l})}^{2}=\sum _{k>l}{({K}_{k}-{K}_{l})}^{2}\le (N-1)\sum _{k=1}^{N}{K}_{k}^{2}.$$

(14)

The inequality of (14) follows from the safe assumption that *K*_{k} > 0 for all *k*: expanding the squares gives $(N-1)\sum _{k=1}^{N}{K}_{k}^{2}-2\sum _{k>l}{K}_{k}{K}_{l}$, and the cross terms are positive. Under the scaled Kronecker delta assumption, the intraclass variance of each class is

$${\sigma}_{{C}_{k}}^{2}=(\mid {C}_{k}\mid -1){K}_{k}^{2}.$$

The sum of the intraclass variances is

$$\sum _{k=1}^{N}{\sigma}_{{C}_{k}}^{2}=\sum _{k=1}^{N}(\mid {C}_{k}\mid -1){K}_{k}^{2}.$$

Let *C*_{min} = min{|*C*_{k}| : 1 ≤ *k* ≤ *N*}. Then

$$\sum _{k=1}^{N}{\sigma}_{{C}_{k}}^{2}\ge ({C}_{min}-1)\sum _{k=1}^{N}{K}_{k}^{2}.$$

(15)

Using the inequalities from (14) and (15), the quantity *Q*(**I**_{alg}) defined in (12) is bounded above by

$$Q\left({\mathbf{I}}_{\mathrm{alg}}\right)\le \frac{(N-1)\sum _{k=1}^{N}{K}_{k}^{2}}{(N-1)({C}_{\mathrm{min}}-1)\sum _{k=1}^{N}{K}_{k}^{2}}=\frac{1}{{C}_{\mathrm{min}}-1}.$$

(16)

In this case, the algorithm being evaluated produces an image composed of a scaled Kronecker delta function in each class. When *C*_{min} is very large, the original unprocessed image contains large classes of homogeneous regions and the metric *Q*(**I**_{alg}) returns a small number. Thus, the $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{alg}}\right)$ value would be smaller than $\stackrel{~}{Q}\left(\mathbf{J}\right)$.
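The inequalities (14) and (15) can be checked numerically under the stated Kronecker delta assumption. The following sketch is illustrative only; it assumes biased (maximum-likelihood) class variances, which is what makes the identity σ²_{C_k} = (|C_k| − 1)K_k² hold exactly for a class whose only nonzero pixel equals |C_k|·K_k.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N = 4
sizes = rng.integers(20, 50, size=N)   # class sizes |C_k|
K = rng.uniform(0.5, 2.0, size=N)      # positive class means K_k

# each class: one pixel equal to |C_k| * K_k, the rest zero, so the class
# mean is K_k and the biased variance is (|C_k| - 1) * K_k**2
classes = [np.r_[n * k, np.zeros(n - 1)] for n, k in zip(sizes, K)]
means = np.array([c.mean() for c in classes])
variances = np.array([c.var() for c in classes])

assert np.allclose(means, K)
assert np.allclose(variances, (sizes - 1) * K ** 2)

# inequality (14): pairwise squared mean differences vs (N-1) * sum K_k^2
lhs14 = sum((a - b) ** 2 for a, b in combinations(K, 2))
assert lhs14 <= (N - 1) * np.sum(K ** 2)

# inequality (15): total intraclass variance vs (C_min - 1) * sum K_k^2
assert variances.sum() >= (sizes.min() - 1) * np.sum(K ** 2)
```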

To present an objective comparison, we performed a mesh search over the parameters of each algorithm so that the USDSAI value $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{alg}}\right)$ was maximized. It should be noted that the mesh search optimization method is not guaranteed to attain optimal performance, since the search can become stuck at a local optimum and never attain global optimality.^{3}
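The mesh (grid) search described above can be sketched as follows. The despeckling filter and score function here are hypothetical stand-ins (a box mean filter scored by negative output variance), not the paper's filters or the USDSAI itself, and the function names are illustrative.

```python
import numpy as np
from itertools import product

def mesh_search(despeckle, score, img, param_grid):
    """Exhaustive mesh search: evaluate `score` for every parameter
    combination on the mesh and keep the best. Like any mesh search,
    this finds only the best point on the mesh, not a global optimum."""
    best_params, best_score = None, -np.inf
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score(despeckle(img, **params))
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

def box_filter(img, size):
    """Toy stand-in for a despeckling filter: a size x size mean filter."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

rng = np.random.default_rng(2)
noisy = rng.standard_normal((16, 16))
# toy score: smoother output (lower variance) scores higher
params, score_val = mesh_search(box_filter, lambda x: -x.var(), noisy,
                                {"size": [1, 3, 5]})
```

For pure noise, larger smoothing windows win under this toy score; in the paper's setting the score would be $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{alg}}\right)$ and the grid would cover each filter's own parameters.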

The USDSAI values $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{alg}}\right)$, along with the mean SSIM (MSSIM) of the various tested filters (AWMF, wavelet soft thresholding, Lee, SRAD, Wiener, and SBF), are given in Table II. The USDSAI and MSSIM of the unprocessed simulated image $\stackrel{~}{Q}\left(\mathbf{J}\right)$ are listed in Table II for reference. The USDSAI and MSSIM values $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{AWMF}}\right)$ and $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{Wavelet}}\right)$ given in Table II are only slightly greater than the USDSAI of the unprocessed image $\stackrel{~}{Q}\left(\mathbf{J}\right)$, indicating that the contrast improvements and the structural similarity improvements against the ideal template image due to these two filters are modest.

USDSAI Values $\stackrel{~}{Q}\left({\mathbf{I}}_{\mathrm{alg}}\right)$ and MSSIM Index for the Various Algorithms Tested

The USDSAIs $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{Lee}}\right)$, $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{SRAD}}\right)$, and $\stackrel{~}{Q}\left({\mathrm{I}}_{\mathrm{Wiener}}\right)$ indicate that significant contrast enhancement by the Lee, SRAD, and Wiener filters, respectively, is possible. The images despeckled by the Lee, SRAD, and Wiener filters are shown in Fig. 4(a), (b), and (c), respectively. The unprocessed pixel values enclosed in the green box of the SRAD filtered image shown in Fig. 4(b) are used to initialize the ICOV of the SRAD algorithm. The 8-bit full-scale contrast-stretched SSIM index maps of the Lee, SRAD, and Wiener filtered images are shown in Fig. 4(e), (f), and (g), respectively. It should be noted that the MSSIM of the Wiener filter is remarkably decreased when compared to the MSSIM of the unprocessed image. This is due to the poor local SSIM in the class composed of the five white disks and in the background class. The SSIM indices of the Wiener filtered image shown in Fig. 4(g) are displayed as black in these regions, which indicates low local values. The SSIM indices in the dark disks of the Wiener filtered image, however, exhibit excellent structural similarity and are shown as white in the SSIM index map of Fig. 4(g).

Resulting images from Lee, SRAD, Wiener, and SBF filters and SSIM maps. (a) Lee, (b) SRAD, where the green box shows the region used to initialize the ICOV, (c) Wiener, (d) SBF, (e) Lee SSIM map, (f) SRAD SSIM map, (g) Wiener SSIM map, and (h) SBF SSIM **...**

Since the SBF requires periodic addition of noise, it is stochastic in nature. The SBF algorithm was also optimized using a mesh search over a wide range of parameters, including the size of the neighborhood used to compute the local mean and the total number of iterations. The SBF algorithm was evaluated using the optimal parameters: Gaussian distributed noise of standard deviation 0.1 added at the zeroth and 50th iterations, and 100 total iterations. The SBF algorithm was applied ten consecutive times to the Field II simulated image shown in Fig. 3(b). After each application of SBF, the USDSAI and MSSIM index were computed. The SBF USDSAI and MSSIM index listed in Table II are the mean USDSAI and the mean MSSIM index, respectively, from the ten applications of SBF with the optimized parameters. The standard deviation of each index is listed within parentheses in Table II. Since the mean USDSAI and mean MSSIM index are greater than the USDSAI and MSSIM indices from the other despeckling methods, it is possible to achieve better despeckling results with the SBF method than with the other tested methods. Additionally, since the standard deviations are small, the SBF method is capable of providing consistently robust despeckling results.

The ability to capture the left ventricle (LV) of the heart is of great importance to cardiac research and a potentially valuable tool to clinically assess cardiac health. We performed an experiment on short axis cardiac cine ultrasound data of a mouse heart. The data was acquired *in vivo* using a specifically designed small-animal echocardiography ultrasound imaging system manufactured by VisualSonics® Inc. (Toronto, Ontario, Canada). The VisualSonics® ultrasound system used a single-element mechanically swept transducer operating at 30 MHz.^{4} The VisualSonics® system is able to attain a 1000 frames per second capture rate. The mouse heart rate is roughly ten cardiac cycles per second. Thus, the VisualSonics® ultrasound system is able to provide image samples well over the Nyquist rate. Moreover, the system is connected to an electrocardiogram (ECG) system so that image acquisition can begin and finish at End Diastole (where the heart is most relaxed). This system produces roughly 100 frames of cine data over one mouse cardiac cycle. Figs. 5 and 6 show the B-mode short axis slices of the LV of a mouse heart at various times during the cardiac cycle that are produced by the VisualSonics® system. Fig. 5 is the zeroth frame of the cine data and shows the mouse heart near End Diastole. Fig. 6(a) shows frame 30 of the cine data, which is the mouse heart at a midway point between End Diastole and End Systole (where the heart is most compressed). Frame 60 is shown in Fig. 6(b). This image shows the short axis cross section of the mouse heart at End Systole. Frame 80 is shown in Fig. 6(c), which is a cross section of the mouse heart near a midpoint between End Systole and End Diastole. Fig. 6(d) shows the mouse heart at End Diastole again.

Unprocessed *in vivo* B-mode cardiac ultrasound image at End Diastole. The yellow contour is the manually defined contour. The green contour is the initial contour. The red contour is the result of 150 iterations of a BFAC.

Unprocessed *in vivo* B-mode cardiac ultrasound images at various times during the cardiac cycle. The yellow and green contours represent the desired and initial contours, respectively. (a) Frame 30, (b) Frame 60 (End Systole), (c) Frame 80, (d) Frame 100 **...**

The yellow contours in Figs. 5 and 6, and later in the images of Figs. 7 and 8, are the manually defined contours of the endocardial border of the LV. These yellow contours are used as the gold standard to evaluate the segmentation results produced by despeckling the B-mode images and then applying a balloon force active contour (BFAC) [38] segmentation algorithm to segment the endocardial border of the LV in each frame. The green contour is the initial contour. The green initial contour is a circle whose center is the mean of the manually defined contour points of each frame. The radius of the initial circular contour of each frame is one quarter of the average distance of all the manually defined contour points to the center.
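The construction of the circular initial contour described above can be sketched as follows. The function name and the 100-point sampling of the circle are illustrative choices, not from the paper.

```python
import numpy as np

def initial_contour(manual_points, n_points=100):
    """Circular initial contour as described in the text: centered at the
    mean of the manually defined contour points, with radius equal to one
    quarter of their average distance to that center."""
    pts = np.asarray(manual_points, dtype=float)   # shape (M, 2)
    center = pts.mean(axis=0)
    radius = 0.25 * np.linalg.norm(pts - center, axis=1).mean()
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    circle = center + radius * np.c_[np.cos(theta), np.sin(theta)]
    return center, radius, circle

# toy manual contour: a circle of radius 8 about (10, 12); the initial
# contour then has the same center and radius 8/4 = 2
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
manual = np.c_[10 + 8 * np.cos(t), 12 + 8 * np.sin(t)]
center, radius, circle = initial_contour(manual)
```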

Performance of each despeckling algorithm was optimized to give the best final BFAC (red) contour in frame 0. The blue contours are the BFAC contours at every 50th iteration. The yellow contour is the desired manually defined contour. (a) Frame 0: Lee, **...**

BFAC segmentation after despeckling with the various filters. The blue contours are the BFACs at every 50th iteration. The final BFAC contours are shown in red. (a) Frame 30: Lee, (b) Frame 60: Lee, (c) Frame 80: Lee, (d) Frame 100: Lee, (e) Frame 30: **...**

The BFAC is used to autonomously segment the endocardial border after each despeckling filter has been applied. The red chaotic contour in Fig. 5 is the final BFAC segmentation after 150 iterations were applied to the unprocessed frame 0 image. The chaotic contour is due to significant gradients caused by speckle in a region that is occupied by blood. An accurate segmentation of the LV would require the speckle within the LV to be smoothed. Yet, the endocardial border of the LV should remain intact. The BFAC parameters used are fixed throughout this experiment and are listed in Table III.

The average directed Hausdorff distance [39] is used to provide a quantitative assessment of the segmentation results. The directed Hausdorff distance of a point on the red contour to the yellow contour is the minimum Euclidean distance from that point to any point on the yellow manually defined contour. The average of the directed Hausdorff distances over all points on each final BFAC red contour is used to quantify the robustness of the despeckling-segmentation results. A smaller average directed Hausdorff distance equates to a contour that is closer to the manually defined yellow contour of each frame.
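The average directed Hausdorff distance described above can be sketched as a brute-force pairwise-distance computation; the function name is illustrative.

```python
import numpy as np

def avg_directed_hausdorff(contour_a, contour_b):
    """Average directed Hausdorff distance from contour_a to contour_b:
    for each point on A, take the minimum Euclidean distance to any point
    on B, then average those minima over all points of A. Note this is
    directed: swapping A and B generally gives a different value."""
    a = np.asarray(contour_a, dtype=float)[:, None, :]  # (Na, 1, 2)
    b = np.asarray(contour_b, dtype=float)[None, :, :]  # (1, Nb, 2)
    d = np.linalg.norm(a - b, axis=-1)                  # (Na, Nb) distances
    return d.min(axis=1).mean()

# toy check: a unit square shifted by (3, 0) against the original square;
# the per-point minima are 2, 3, 3, 2, so the average is 2.5
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dist = avg_directed_hausdorff(square + [3.0, 0.0], square)
```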

Since the SBF is a stochastic method to despeckle ultrasound images, the SBF despeckling followed by BFAC segmentation is performed ten times on each test frame. The average and standard deviation (SD) of the average directed Hausdorff distances from these ten applications of despeckling with SBF followed by BFAC segmentation are given in the last two rows of Table IV.

Hausdorff Distances From Balloon Force Active Contour Segmentation After Despeckling With Various Algorithms

The Lee, SRAD, Wiener, and SBF filters were tested. Since the performance of each despeckling method is dependent upon the choice of various parameters, the optimal parameters of each despeckling method were determined through trial and error so that the BFAC segmentation of frame 0 was optimal with respect to minimizing the average directed Hausdorff distance of the final red contour to the frame 0 manually defined yellow contour. The results of the trial and error optimization are shown in Fig. 7, where the red contours are the results of the segmentation. The blue contours in Fig. 7 show the evolution of the BFAC, in that each blue contour represents the BFAC at every 50th iteration. That is, the innermost contour is the BFAC at iteration 50, and each blue contour outward represents the BFAC after an additional 50 iterations. The optimal parameters of each despeckling algorithm determined by trial and error on frame 0 were used to despeckle the remaining test frames of the ultrasound cine data.

The experiment was performed on frames 0, 10, 20, …, 100 of the *in vivo* short axis cardiac B-mode cine data. The average directed Hausdorff distance of all tested methods on the tested frames are given in Table IV. The segmentation results of each despeckling method for frames 30, 60, 80, and 100 are shown in Fig. 8.

It should be noted that all tested despeckling algorithms performed well with respect to segmentation of the LV endocardial border. A visual inspection of Fig. 8 shows that the segmentation of the SRAD processed image is the worst. Although SRAD performed well in frame 80, which is shown in Fig. 8(g), the BFAC segmentation results in frames 30, 60, and 100, which are shown in Fig. 8(e), (f), and (h), respectively, poorly reflect the manually segmented yellow contours of each respective frame. The BFAC red contours in these SRAD processed frames contain erroneous loops and extend well beyond the endocardial border. These errors are most evident in the End Systole frame 60 shown in Fig. 8(f), and they are reflected in Table IV, where the Hausdorff distance of the final BFAC contour of the SRAD despeckled frame 60 is 18.6815, the largest Hausdorff distance of all resulting BFACs.

It should also be noticed that all four BFAC segmentations of frame 100 failed to capture the left endocardial border of the LV. This is due to image dropout, i.e., weak contrast of edges that are axially parallel to the transducer. This problem is inherent to all ultrasound imaging systems, and its solution is beyond the scope of this paper. It is suspected that the segmentation errors caused by the image dropout in frame 100 are the cause of the significantly large Hausdorff distances in the second rightmost column of Table IV.

The quantitative results of Table IV show that SBF despeckling provided a better BFAC segmentation in six of the eleven frames. The SBF despeckling method thus promoted a better BFAC segmentation more often than despeckling with the other three methods. The mean average directed Hausdorff distance, listed in the rightmost column of Table IV, is also smaller when despeckling with SBF. Further, except in the frame 20 evaluation, the small standard deviation of the average directed Hausdorff distances from contours produced by ten trials of SBF despeckling followed by BFAC segmentation signifies that SBF despeckling promotes consistently accurate BFAC segmentation. When evaluating the segmentation results using frame 20, the Wiener and Lee filtered images produced contours whose average directed Hausdorff distances are within one standard deviation of the average directed Hausdorff distance produced by the SBF processed image. Thus, it is reasonable to concede that the Lee, Wiener, and SBF despeckling algorithms provided equivalently accurate BFAC segmentation results in frame 20.

The evaluation in this paper provides evidence that despeckling ultrasound images to promote robust contrast enhancement is possible with the adaptive Lee filter, the linear Wiener method, the anisotropic SRAD method, and the stochastically driven SBF method. The evaluation was based upon the USDSAI and SSIM (or rather MSSIM) indices produced by each method's ability to despeckle a Field II simulated image. Although the optimized Wiener filter was able to achieve a USDSAI close to the Lee and SRAD USDSAI performance, its MSSIM index was decreased when compared to the MSSIM index of the unprocessed Field II simulated image. Surprisingly, the SSIM map of the Wiener despeckled simulated image exhibited excellent local similarity in the class composed of the five dark disks, while the remainder of the Wiener processed image exhibited poor local structural similarity. Both SRAD and SBF were able to attain excellent contrast enhancement (high USDSAI) and a high MSSIM index, although the SRAD and SBF SSIM maps showed poor performance in the class consisting of the five dark disks of varying size. An interesting method that is left for future work would be a hybrid of SBF (or SRAD) with Wiener filtering. This method might consist of replacing a speckle polluted image pixel with the corresponding pixel from a Wiener, SBF, or SRAD filtered version of that image.

When testing on actual ultrasound B-mode images, the BFAC was used to determine whether the degree of despeckling provided by the Lee, SRAD, Wiener, and SBF methods was sufficient to provide a robust segmentation of a mouse LV. Although the BFAC parameters given in Table III were fixed throughout the experiment performed in Section IV.B, it may be possible to conjointly tune the parameters of a despeckling method with the parameters of the BFAC (or other segmentation method) to provide better performance than reported in Section IV.

Great efforts were made in optimizing the tested algorithms. It is foreseeable that the optimization of a despeckling algorithm would depend on transducer geometry, operating frequency, focal point(s), the distribution of pixel values due to speckle, the subject being scanned, etc. Automatic selection of a despeckling algorithm's optimal parameters would provide a useful tool for research and clinical applications.

The evaluations provided in this paper are far from comprehensive and all inclusive. More algorithms need to be tested, and on more applications, including contrast enhancement and automatic segmentation. The evaluation should also include metrics that are based upon visual information, information fidelity, perception, and other novel concepts.

The removal or reduction of speckle while preserving or enhancing edge information of an ultrasound image is a challenging task. An evaluation of various despeckling algorithms is presented. In general, previously published despeckling methods scrutinize every pixel value and attempt to determine the amount or degree of local or global smoothing to apply. In contrast, an iterative despeckling method, SBF, that at each iteration smoothes only outlying pixel values is proposed. Outliers are defined as local extrema. Without using the outlier values, the SBF method replaces these outliers with the local mean. Each iteration of the proposed method compresses the image pixel values so that the differences in interclass means are preserved or possibly enhanced while the intraclass variances are decreased. The superior contrast enhancement and structural similarity performance of the SBF method is established by experimentation using a Field II simulated ultrasound image and evaluated with the USDSAI and SSIM (or MSSIM) performance metrics.
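The core SBF idea described above, replacing local-extremum outliers with the local mean computed without the outlier value, can be sketched as a single pass. This is an illustrative sketch only: the 3×3 neighborhood, the strict-extremum test, and the omission of the periodic noise addition are simplifying assumptions, not the authors' exact implementation.

```python
import numpy as np

def sbf_like_pass(img):
    """One SBF-style pass: pixels that are strict local extrema of their
    3x3 neighborhood are treated as outliers and replaced by the mean of
    their eight neighbors, i.e. the local mean computed without the
    outlier value itself. Non-outlier pixels are left untouched."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = img.astype(float).copy()
    for i in range(h):
        for j in range(w):
            block = padded[i:i + 3, j:j + 3]
            neighbors = np.delete(block.ravel(), 4)  # drop the center pixel
            center = img[i, j]
            if center > neighbors.max() or center < neighbors.min():
                out[i, j] = neighbors.mean()
    return out

# toy check: a single spike in a flat region is a local extremum, so it is
# replaced by the mean of its (all-ones) neighbors; flat pixels are untouched
flat = np.ones((5, 5))
flat[2, 2] = 100.0
smoothed = sbf_like_pass(flat)
```

In the full method this pass would be iterated, with noise periodically added to keep the process stochastic, as described in Section IV.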

An evaluation using actual B-mode *in vivo* cardiac cine data was also performed. This evaluation consisted of applying a BFAC segmentation to B-mode images despeckled by the Lee, SRAD, Wiener, and SBF methods. Robust segmentation was determined by the lowest average directed Hausdorff distance. Although all tested despeckling algorithms provided robust segmentation of a mouse LV throughout the cardiac cycle, the quantitative results showed that the SBF method consistently and more often provided contours that better resembled the manually defined contours.

This work was supported in part by NIH NIBIB grant EB001826, in part by U.S. Army CDMRP grant W81XWH-04-1-0240, and in part by NIH NCRR RR022582 grant. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Margaret Cheney.

**Peter C. Tay** (S’01–M’03) received the B.S. and M.A. degrees in mathematics and the Ph.D. degree in electrical and computer engineering, all from the University of Oklahoma, Norman, OK.

He is currently an Assistant Professor in the Department of Engineering and Technology, Western Carolina University, Cullowhee, NC. He was an Oak Ridge Institute for Science and Education Research Fellow at the National Weather Service (NWS) and subsequently a Software Engineer for the NWS in Norman, OK. Prior to his current appointment he was a Postdoctoral Research Associate in the Departments of Biomedical Engineering and ECE at the University of Virginia, Charlottesville, VA. His research interests include biomedical imaging and signal/image/video processing for medical applications.

**Christopher D. Garson** (S'04) received the B.S. degree in biomedical engineering from Duke University, Durham, NC, and the M.S. degree from the University of Virginia, Charlottesville.

He is currently self-employed as an independent software developer; projects include webcam-based gaze estimation and the development of interactive video content for mobile phones in American and Chinese markets.

**Scott T. Acton** (S'89–M'93–SM'99) graduated from Oakton High School, Vienna, Virginia, in 1984. He received the B.S. degree in electrical engineering from Virginia Tech, Blacksburg, in 1988 as a Virginia Scholar, and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Texas at Austin in 1990 and 1993, respectively, where he was a student of Prof. Al Bovik.

He has worked in industry for AT&T, Oakton, VA; the MITRE Corporation, McLean, VA; and Motorola, Inc., Phoenix, AZ; and in academia for Oklahoma State University, Stillwater. Currently, he is Professor of Electrical and Computer Engineering and Biomedical Engineering at the University of Virginia (UVA). At UVA, he was named the Outstanding New Teacher in 2002, Faculty Fellow in 2003, and Walter N. Munster Chair for Intelligence Enhancement in 2003.

Dr. Acton is an active participant in the IEEE, currently serving as an Associate Editor for the IEEE Transactions on Image Processing and, formerly, as an Associate Editor for IEEE Signal Processing Letters. He was the 2004 Technical Program Chair and the 2006 General Chair for the Asilomar Conference on Signals, Systems and Computers. His research interests include anisotropic diffusion, active models, basketball, biomedical segmentation problems, and biomedical tracking problems. From 2007 to 2008, he was on sabbatical in Santa Fe, NM.

**John A. Hossack** (S'90–M'92–SM'02) received the B.Eng. and Ph.D. degrees from the University of Strathclyde, Glasgow, U.K.

He is currently a Professor in the Department of Biomedical Engineering and the Department of Electrical and Computer Engineering, University of Virginia, Charlottesville. Previously, he was a postdoctoral researcher in the E. L. Ginzton Laboratory at Stanford University and then a Design Engineer, and later Technical Fellow, at Acuson Corp., Mountain View, CA. He is currently Vice President for Publications of the IEEE Ultrasonics, Ferroelectrics, and Frequency Control Society and is an Associate Editor for the IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. He was Technical Program Chair of the 2005 IEEE Ultrasonics Symposium. His current research interests relate to ultrasonic transducer design (including intravascular devices), mouse heart ultrasound imaging, cardiac ultrasound image quantification, and the use of ultrasound for image guidance and effecting focal drug/gene delivery in the context of management of arterial disease.

^{1}Some algorithms utilize the features of speckle and do not consider speckle as noise. The use of the term noise in the context of this paper is based upon the visual observation that speckle may cloud visual or automated detection of edges and significant borders of the object being imaged.

^{2}Independence is in the sense that prob(*a _{k}*(

^{3}The best method to obtain global optimality of each algorithm is beyond the scope of this paper.

^{4}The location of the transducer is at the bottom of all the mouse heart images that are shown in this paper.


Christopher D. Garson, Independent software developer and was a student with the Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22904 USA (Email: ude.ainigriv@d6gdc)

Scott T. Acton, Department of Electrical and Computer Engineering and also the Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22904 USA (Email: ude.ainigriv@notca)

John A. Hossack, Department of Electrical and Computer Engineering and also the Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22904 USA (Email: ude.ainigriv@kcassoh)

[1] Goodman JW. Statistical properties of laser speckle patterns. In: Dainty JC, editor. Laser Speckle and Related Phenomena. Springer-Verlag; Berlin, Germany: 1984. pp. 9–75.

[2] Abbott JG, Thurstone FL. Acoustic speckle: Theory and experimental analysis. Ultrason. Imaging. 1979;1:303–324. [PubMed]

[3] Wagner RF, Smith SW, Sandrik JM, Lopez H. Statistics of speckle in ultrasound B-scans. IEEE Trans. Sonics Ultrason. 1983 May;SU–30(3):156–163.

[4] Wagner RF, Insana MF, Smith SW. Fundamental correlation lengths of coherent speckle in medical ultrasonic images. IEEE Trans. Ultrason., Ferroelectr., Freq. Control. 1988 Jan;35(1):34–44. [PubMed]

[5] Abreu E, Lightstone M, Mitra SK, Arakawa K. A new efficient approach for the removal of impulse noise from highly corrupted images. IEEE Trans. Image Process. 1996 Jun;5(6):1012–1025. [PubMed]

[6] Nagao M, Matsuyama T. Edge preserving smoothing. Comput. Graph. Image Process. 1979 Apr.9(4):394–407.

[7] Lee JS. Digital image enhancement and noise filtering by use of local statistics. IEEE Trans. Pattern Anal. Mach. Intell. 1980 Mar.PAMI-2(2):165–168. [PubMed]

[8] Frost VS, Stiles JA, Shanmugan KS, Holtzman JC. A model for radar images and its application to adaptive digital filtering of multiplicative noise. IEEE Trans. Pattern Anal. Mach. Intell. 1982 Mar;PAMI-4(2):157–166. [PubMed]

[9] Kuan DT, Sawchuk AA, Strand TC, Chavel P. Adaptive noise smoothing filter for images with signal-dependent noise. IEEE Trans. Pattern Anal. Mach. Intell. 1985 Mar;PAMI-7(2):165–177. [PubMed]

[10] Kuan DT, Sawchuk AA, Strand TC, Chavel P. Adaptive restoration of images with speckle. IEEE Trans. Acoust., Speech, Signal Process. 1987 Mar;ASSP-35(3):373–383.

[11] Loupas T, McDicken WN, Allan PL. An adaptive weighted median filter for speckle suppression in medical ultrasonic images. IEEE Trans. Circuits Syst. 1989 Jan;36(1):129–135.

[12] Yu Y, Acton ST. Speckle reducing anisotropic diffusion. IEEE Trans. Image Process. 2002 Nov;11(11):1260–1270. [PubMed]

[13] Karaman M, Kutay MA, Bozdagi G. An adaptive speckle suppression filter for medical ultrasonic imaging. IEEE Trans. Med. Imag. 1995 Jun;14(2):283–292. [PubMed]

[14] Hao X, Gao S, Gao X. A novel multiscale nonlinear thresholding method for ultrasonic speckle suppressing. IEEE Trans. Med. Imag. 1999 Sep;18(9):787–794. [PubMed]

[15] Achim A, Bezerianos A, Tsakalides P. Novel Bayesian multiscale method for speckle removal in medical ultrasound images. IEEE Trans. Med. Imag. 2001 Aug;20(8):772–783. [PubMed]

[16] Lopes A, Touzi R, Nezry E. Adaptive speckle filters and scene heterogeneity. IEEE Trans. Geosci. Remote Sens. 1990 Nov;28(6):992–1000.

[17] Michailovich OV, Tannenbaum A. Despeckling of medical ultrasound images. IEEE Trans. Ultrason., Ferroelectr., Freq. Control. 2006 Jan;53(1):64–78. [PMC free article] [PubMed]

[18] Dutt V, Greenleaf J. Adaptive speckle reduction filter for log-compressed B-scan images. IEEE Trans. Med. Imag. 1996 Dec;15(6):802–813. [PubMed]

[19] Tay PC, Acton ST, Hossack JA. A stochastic approach to ultrasound despeckling. Proc. IEEE Int. Symp. Biomedical Imaging; Arlington, VA. Apr. 6–9, 2006. pp. 907–910.

[20] Aysal TC, Barner KE. Rayleigh-maximum-likelihood filtering for speckle reduction of ultrasound images. IEEE Trans. Med. Imag. 2007 May;26(5):712–727. [PubMed]

[21] Gupta N, Swamy MNS, Plotkin E. Despeckling of medical ultrasound images using data and rate adaptive lossy compression. IEEE Trans. Med. Imag. 2005 Jun;24(6):743–754. [PubMed]

[22] Rabbani H, Vafadust M, Abolmaesumi P, Gazor S. Speckle noise reduction of medical ultrasound images in complex wavelet domain using mixture priors. IEEE Trans. Biomed. Eng. 2008 Sep;55(9):2152–2160. [PubMed]

[23] Xie J, Jiang Y, Tsui HT, Heng PA. Boundary enhancement and speckle reduction for ultrasound images via salient structure extraction. IEEE Trans. Biomed. Eng. 2006 Nov;53(11):2300–2309. [PubMed]

[24] Yeoh WS, Zhang C. Constrained least squares filtering algorithm for ultrasound image deconvolution. IEEE Trans. Biomed. Eng. 2006 Oct;53(10):2001–2007. [PubMed]

[25] Munteanu C, Morales FC, Ruiz-Alzola J. Speckle reduction through interactive evolution of a general order statistics filter for clinical ultrasound imaging. IEEE Trans. Biomed. Eng. 2008 Jan;55(1):365–369. [PubMed]

[26] Abd-Elmoniem KZ, Youssef ABM, Kadah YM. Real-time speckle reduction and coherence enhancement in ultrasound imaging via nonlinear anisotropic diffusion. IEEE Trans. Biomed. Eng. 2002 Sep;49(9):997–1014. [PubMed]

[27] Sanchez JR, Oelze ML. An ultrasonic imaging speckle-suppression and contrast-enhancement technique by means of frequency compounding and coded excitation. IEEE Trans. Ultrason., Ferroelectr., Freq. Control. 2009 Jul;56(7):1327–1339. [PubMed]

[28] Jirik R, Taxt T. High-resolution ultrasonic imaging using two-dimensional homomorphic filtering. IEEE Trans. Ultrason., Ferroelectr., Freq. Control. 2006 Aug;53(8):1440–1448. [PubMed]

[29] Loizou CP, Pattichis C, Christodoulou CI, Istepanian RSH, Pantziaris M, Nicolaides A. Comparative evaluation of despeckle filtering in ultrasound imaging of the carotid artery. IEEE Trans. Ultrason., Ferroelectr., Freq. Control. 2005 Oct;52(10):1653–1669. [PubMed]

[30] Sanches JM, Nascimento JC, Marques JS. Medical image noise reduction using the Sylvester-Lyapunov equation. IEEE Trans. Image Process. 2008 Sep;17(9):1522–1539. [PubMed]

[31] Fisher RA. The use of multiple measurements in taxonomic problems. Ann. Eugenics. 1936;7:179–188.

[32] Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004 Apr;13(4):600–612. [PubMed]

[33] Arsenault HH, April G. Properties of speckle integrated with a finite aperture and logarithmically transformed. J. Opt. Soc. Amer. 1976 Nov;66(11):1160–1163.

[34] Kailath T. Equations of Wiener-Hopf type in filtering theory and related applications. In: Masani P, editor. Norbert Wiener: Collected Works vol. III. MIT Press; Cambridge, MA: 1976. pp. 63–94.

[35] Perona P, Malik J. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990 Jul;12(7):629–639.

[36] Jensen JA, Svendsen NB. Calculation of pressure fields from arbitrarily shaped apodized and excited ultrasound transducers. IEEE Trans. Ultrason., Ferroelectr., Freq. Control. 1992 Mar;39(2):262–267. [PubMed]

[37] Jensen JA. Field: A program for simulating ultrasound systems. Med. Biol. Eng. Comput. 1996;34(1):351–353. [PubMed]

[38] Xu C, Prince JL. Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process. 1998 Mar;7(3):359–369. [PubMed]

[39] Huttenlocher DP, Klanderman GA, Rucklidge WJ. Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993 Sep;15(9):850–863.
