J Magn Reson. Author manuscript; available in PMC 2010 July 1.

Published online 2009 March 20. doi: 10.1016/j.jmr.2009.03.005

PMCID: PMC2732005

NIHMSID: NIHMS123794

Section on Tissue Biophysics and Biomimetics, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, MD 20892

Corresponding author: Cheng Guan Koay, PhD, Section on Tissue Biophysics and Biomimetics, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bldg 13, Rm 3W16, 13 South Drive, MSC 5772, Bethesda, MD 20892-5772. E-mail: guankoac@mail.nih.gov


Data analysis in MRI usually entails a series of processing procedures. One of these procedures is noise assessment, which in the context of this work, includes both the identification of noise-only pixels and the estimation of noise variance (standard deviation). Although noise assessment is critical to many MRI processing techniques, the identification of noise-only pixels has received less attention than has the estimation of noise variance. The main objectives of this paper are therefore to demonstrate (a) that the identification of noise-only pixels has an important role to play in the analysis of MRI data, (b) that the identification of noise-only pixels and the estimation of noise variance can be combined into a coherent framework, and (c) that this framework can be made self-consistent. To this end, we propose a novel iterative approach to simultaneously identify noise-only pixels and estimate the noise standard deviation from these identified pixels in a commonly used data structure in MRI. Experimental and simulated data were used to investigate the feasibility, the accuracy and the stability of the proposed technique.

Magnetic resonance imaging (MRI) (1) is a rapidly expanding field and a widely used medical imaging modality possessing many noninvasive and quantitative techniques capable of probing functional activity (2) as well as tissue morphology in the brain (3,4). Data analysis in MRI is sophisticated and can be thought of as a “pipeline” of closely connected processing and modeling steps.

Because noise in MRI data affects all subsequent steps in this pipeline, e.g., from noise reduction (5) and image registration (6) to techniques for breaking the noise floor (7), parametric tensor estimation (8-11) and error propagation (12-17), accurate noise assessment has an important role in MRI studies.

Noise assessment in MRI usually means the estimation of Gaussian noise variance (or standard deviation (SD)) alone (18-23). Previously proposed methods for the estimation of Gaussian noise SD can be separated into two groups. In the first group, the Gaussian noise SD is estimated from a manually selected region-of-interest (ROI), while in the second group, it is estimated from an entire image or a volumetric data set automatically without human intervention.

A major problem faced by the first group of *manual* methods is lack of reproducibility of the results. The second group of *automatic* methods overcame this problem by bringing objectivity into the estimation process, so that the Gaussian noise SD can be estimated without human input and the results can be reproduced. However, the most critical problem facing current automatic methods is the separation of pure noise from noisy signals and other artifacts. The automatic methods of Sijbers et al. (22) and of Chang et al. (21) work by lumping the values of all the pixels from an entire image or an entire volumetric data set into a one-dimensional array and then estimating the Gaussian noise SD from the histogram of this array; complicated criteria and techniques have been developed by both groups to separate pure noise from noisy signals and other artifacts from the histogram alone. In this work, we introduce a simpler paradigm for performing noise assessment in MRI. The proposed method shows improved performance compared to previous methods and may have application in other scientific and technological areas as well.

One of the major aims of this paradigm is to help us get out of the “*one-dimensional”* predicament faced by the automatic methods of Sijbers et al. and Chang et al. so that the separation of pure noise from noisy signals and other artifacts can be done more cleanly and simply. A moment of reflection will indicate that the identification of noise-only pixels should be a part of the paradigm in order to enhance the performance and accuracy of the estimation process, but the identification of noise-only pixels entails some *a priori* knowledge of the Gaussian noise SD. Therefore, any paradigm that attempts to make the identification of noise-only pixels a part of the overall noise assessment protocol will necessarily be iterative. Such a paradigm, if feasible, not only can improve the accuracy of the estimate of the Gaussian noise SD but also can provide spatial distributions of noise for further analysis or for quality control and calibration.

In this work, we will present one such paradigm. We will demonstrate that (a) the identification of noise-only pixels, which has not received much attention in MRI literature, is as important as—if not more important than—the estimation of Gaussian noise SD, (b) the identification of noise-only pixels and the estimation of Gaussian noise SD via the sample median (or the sample mean or other optimal sample quantiles, see Appendix B) can be combined into a single coherent framework of noise assessment, and (c) this framework can be made self-consistent, that is, it can be turned into a fixed-point iterative procedure.

Briefly, we propose a novel approach to simultaneously identify noise-only pixels and estimate the Gaussian noise SD from a commonly used data structure (see Figure 1) in MRI. The data structure as shown in Figure 1 is ubiquitous in functional MRI and diffusion MRI. It is composed of a series of images acquired at the same physical (slice) location but not necessarily acquired under the same experimentally controlled conditions. Hereafter, we shall refer to the proposed technique as PIESNO, which stands for *P*robabilistic *I*dentification and *Es*timation of *No*ise. PIESNO consists of two distinct parts that are connected dynamically in an iterative manner. The first part of PIESNO is the proposed probabilistic technique for identifying noise-only pixels, which is specifically formulated to deal with the data structure mentioned above. Here, it is assumed that the noise variance is uniform both within and across images in this data structure. The second part of PIESNO is the estimation of Gaussian noise SD via the sample median (or the sample mean or other optimal sample quantiles, see Appendix B), which will be outlined below.

The proposed data structure includes volumetric data composed of magnitude MR images that are acquired at the same slice location but not necessarily under identical experimentally controlled conditions.

The proposed probabilistic identification of noise-only pixels is designed to take advantage of this data structure to increase the discriminative power to identify noise-only pixels. Specifically, it identifies noise-only pixels through the distribution of the mean of a collection of measurements, shown as a vertical column of data along the k axis in Figure 1. Consequently, the discriminative power of the identification, which is related to the sharpness of the distribution of the mean, increases as the number of images within the proposed data structure increases. For completeness, we will show that the distribution of the mean used in this work is a Gamma distribution, which is well-known in MRI, e.g., see (24).

Our technique for estimating the Gaussian noise SD is based on the median method, but we also provide other methods of estimation based on the sample mean and optimal sample quantiles. The median method is a simple formula expressing the Gaussian noise SD in terms of the sample median of a collection of noise-only measurements. We chose the sample median for its ease of use. The theoretical reason behind our choice of the sample median over other slightly more optimal methods based on the sample quantile of a specific order is explicated in Appendix B.

Both the identification of noise-only pixels and the estimation of noise SD are integral parts of the proposed framework on noise assessment because they are connected dynamically and iteratively to identify noise and estimate noise SD in a self-consistent manner.

Both experimental and simulated data were used to investigate the feasibility, accuracy, stability and global properties of the proposed framework. Our approach managed to tease apart two noise distributions through a simple global analysis based on a well-known graphical technique in nonlinear dynamics known as *Cobweb* (25).

A comparison between our technique and that of Sijbers et al. (22) (hereafter referred to as the Sijbers method) was performed. Our technique achieved a lower mean squared error in estimating the Gaussian noise SD, and a combined method based on both our technique and the Sijbers method performed best when the number of images within the data structure was above five.

In this section, we will first provide the necessary details about the distribution of the arithmetic mean of *K* independent Gamma random variables, which is also a Gamma distribution, and then establish the connection between the proposed data structure and this well-known distribution by a few simple changes of variables. Throughout this section, we will use the same notation as employed in (26).

It is known that magnitude MR signals, *m* (or *m _{i,j,k}* in Figure 1), reconstructed from the sum-of-squares algorithm through an *N*-channel phased array system follow a noncentral χ distribution with the following probability density function (PDF):

$${p}_{\stackrel{~}{\chi}}(m\mid \eta ,{\sigma}_{g},N)=\frac{{m}^{N}}{{\sigma}_{g}^{2}{\eta}^{N-1}}\mathrm{exp}(-{\scriptstyle \frac{{m}^{2}+{\eta}^{2}}{2{\sigma}_{g}^{2}}}){I}_{N-1}\left({\scriptstyle \frac{m\eta}{{\sigma}_{g}^{2}}}\right),\phantom{\rule{1em}{0ex}}m\ge 0$$

[1]

where the PDF is zero for *m* < 0, η is the underlying (combined) signal intensity, σ_{g} is the Gaussian noise SD, and *I _{k}* is the *k*th-order modified Bessel function of the first kind.

In this work, the PDF of magnitude noise (i.e., η = 0) is more relevant than the PDF of magnitude signal (i.e., η ≠ 0). Therefore, we will derive the PDF of magnitude noise from Eq.[1] by setting the underlying signal to zero, i.e., η = 0. It can be shown that the PDF of magnitude noise is given by:

$${p}_{\stackrel{~}{\chi}}(m\mid 0,{\sigma}_{g},N)=\frac{{m}^{2N-1}}{{2}^{N-1}{\sigma}_{g}^{2N}(N-1)!}\mathrm{exp}(-{\scriptstyle \frac{{m}^{2}}{2{\sigma}_{g}^{2}}}),$$

[2]

after *I _{N–1}* in Eq.[1] is replaced by the first term of its Taylor expansion about η = 0, namely ${I}_{N-1}(m\eta \u2215{\sigma}_{g}^{2})\approx {\scriptstyle \frac{1}{{2}^{N-1}(N-1)!}}{(m\eta \u2215{\sigma}_{g}^{2})}^{N-1}$. Note that when *N* = 1, Eq.[2] reduces to the Rayleigh distribution.

By a change of variables from *m* to $t={m}^{2}\u2215\left(2{\sigma}_{g}^{2}\right)$ (or ${t}_{i,j,k}={m}_{i,j,k}^{2}\u2215\left(2{\sigma}_{g}^{2}\right)$), it can be shown that *t* follows a particular form of the Gamma PDF:

$${p}_{\gamma}(t\mid N)=\frac{1}{(N-1)!}{t}^{N-1}\phantom{\rule{thickmathspace}{0ex}}\mathrm{exp}(-t)\equiv {f}_{\gamma}(t\mid N,1),$$

[3]

where the Gamma PDF, *f*_{γ}, is defined as (31):

$${f}_{\gamma}(x\mid n,\beta )=\frac{1}{\Gamma \left(n\right){\beta}^{n}}{x}^{n-1}\phantom{\rule{thickmathspace}{0ex}}\mathrm{exp}(-x\u2215\beta ),$$

[4]

and Γ is the Gamma function.
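For completeness, the change of variables that takes Eq.[2] to Eq.[3] can be written out explicitly. With $t = m^2/(2\sigma_g^2)$, i.e., $m = \sigma_g\sqrt{2t}$ and $dm/dt = \sigma_g/\sqrt{2t}$:

```latex
p_{\gamma}(t \mid N)
  = p_{\tilde{\chi}}\!\left(\sigma_g\sqrt{2t} \,\middle|\, 0, \sigma_g, N\right)\frac{dm}{dt}
  = \frac{\left(\sigma_g\sqrt{2t}\right)^{2N-1}}{2^{N-1}\sigma_g^{2N}(N-1)!}\,
    \exp(-t)\cdot\frac{\sigma_g}{\sqrt{2t}}
  = \frac{t^{N-1}\exp(-t)}{(N-1)!},
```

where the factors of $\sigma_g$ and $2$ cancel exactly, leaving the unit-scale Gamma PDF of Eq.[3].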

The measurements *m _{i,j,k}* are drawn from the distribution defined in Eq.[2], and any statistic computed from a set of such measurements is itself a random variable with its own probability distribution.

In short, the new random variable *s _{i,j}*, representing the arithmetic mean of the *K* Gamma random variables *t _{i,j,k}* along the *k* axis, follows the Gamma PDF:

$${f}_{\gamma}(s\mid NK,1\u2215K)=\frac{{K}^{NK}}{(NK-1)!}{s}^{NK-1}\phantom{\rule{thickmathspace}{0ex}}\mathrm{exp}(-Ks).$$

[5]

Further, the new random variable *s _{i,j}* is related to the measurements *m _{i,j,k}* through:

$${s}_{i,j}=\frac{1}{K}{\Sigma}_{k=1}^{K}{t}_{i,j,k}=\frac{1}{2{\sigma}_{g}^{2}K}{\Sigma}_{k=1}^{K}{m}_{i,j,k}^{2}.$$

[6]

The derivation in Appendix A uses only a few simple substitutions of variables and takes advantage of the reproductive property of the Gamma distribution (32). Interested readers may wish to compare it with the derivation presented in (24).
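As a concrete illustration of Eq.[6] (with made-up numbers for σ_{g} and the measurements), the statistic *s _{i,j}* reduces to a few arithmetic operations per pixel:

```python
import math

# Hypothetical values for one pixel location (i, j): K = 4 magnitude
# measurements m_{i,j,k} and an assumed Gaussian noise SD sigma_g.
sigma_g = 10.0
m = [12.0, 8.5, 11.0, 9.5]
K = len(m)

# Eq.[3]/[6]: normalize each squared magnitude, then average over k.
t = [mk ** 2 / (2.0 * sigma_g ** 2) for mk in m]   # t_{i,j,k}
s = sum(t) / K                                     # s_{i,j}

# Equivalent one-line form of Eq.[6].
s_direct = sum(mk ** 2 for mk in m) / (2.0 * sigma_g ** 2 * K)
```

Note that the per-pixel sum of squared magnitudes does not depend on σ_{g}, which is why it can be precomputed once and reused across iterations.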

In order to probabilistically identify the noise-only pixels we need to define an upper and a lower threshold on *s* so that a given percentage of all noise-only voxels fall between these values. In order to do so we will need to derive the cumulative distribution function (CDF), and its inverse, of the distribution of *s*.

We will first define the CDF of *s* and its inverse to specify the lower and upper threshold values of *s*. A few CDF's of *s* are shown in Figure 2. We also assume that *N* and *K* are known. Additionally, the initial estimate of σ_{g} is required, which can be obtained automatically by the simple search method outlined below.

(A) PDF's and (B) CDF's of the mean of Gamma random variables with respect to different values of *K* and *N*.

Let the CDF of *s* with probability α be denoted by ${P}_{s}(\lambda \mid N,K)\equiv {\int}_{0}^{\lambda}{f}_{\gamma}(s\mid NK,1\u2215K)ds=\alpha $ and its inverse, the inverse CDF of *s*, be denoted symbolically by $\lambda \equiv {P}_{s}^{-1}(\alpha \mid N,K)$. Note that the CDF of *s* and its inverse are related to each other by $\lambda ={P}_{s}^{-1}({P}_{s}(\lambda \mid N,K)\mid N,K)$ and $\alpha ={P}_{s}({P}_{s}^{-1}(\alpha \mid N,K)\mid N,K)$. Note also that ${P}_{s}^{-1}(\alpha \mid N,K)$ is given by Inverse Gamma Regularized [*NK*,1-α]/*K* in Mathematica (33).

The lower and upper threshold values of *s*, λ_{−} and λ_{+}, with probabilities α/2 and 1 − α/2, respectively, can be expressed in terms of the inverse CDF of *s*. Symbolically, they are given by ${\lambda}_{-}={P}_{s}^{-1}(\alpha \u22152\mid N,K)$ and ${\lambda}_{+}={P}_{s}^{-1}(1-\alpha \u22152\mid N,K)$.
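These thresholds can be computed without special libraries. The sketch below exploits the fact that *NK* is an integer, so the regularized lower incomplete gamma function has a finite-sum form, and inverts it by bisection; it reproduces the λ values quoted later in the text for *N* = 8, *K* = 14, α = 0.10 (the helper names are our own):

```python
import math

def gamma_cdf_int(a, x):
    """Regularized lower incomplete gamma P(a, x) for integer shape a:
    P(a, x) = 1 - exp(-x) * sum_{k=0}^{a-1} x^k / k!."""
    term = math.exp(-x)          # k = 0 term
    total = term
    for k in range(1, a):
        term *= x / k
        total += term
    return 1.0 - total

def inv_gamma_cdf_int(a, p, hi=1e4, tol=1e-10):
    """Invert P(a, .) by bisection; P is monotone increasing in x."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gamma_cdf_int(a, mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Thresholds on s: lambda_{-/+} = P_s^{-1}(. | N, K) = InvGammaCDF(NK, .) / K.
N, K, alpha = 8, 14, 0.10
lam_minus = inv_gamma_cdf_int(N * K, alpha / 2) / K
lam_plus = inv_gamma_cdf_int(N * K, 1 - alpha / 2) / K
```

In Mathematica, the same quantities are given directly by `InverseGammaRegularized[N K, 1 - p]/K`, as noted above.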

Thus, a collection of *K* independent magnitude MR signals, {*m _{i,j,k}*: *k* = 1,⋯,*K*}, is identified as containing only noise if its statistic *s _{i,j}* lies between the lower and upper thresholds, λ_{−} and λ_{+}.

The test proposed here is a two-sided test. We should also mention that a one-sided test may be used instead of the two-sided test, but then the estimation step via the median or the quantile methods will no longer be as simple as the one presented below. In short, the median and the quantile methods will have to be corrected for bias if they are to be used with a one-sided test.

It is well known that the estimation of Gaussian noise SD can be obtained from the method of moments. For convenience, we provide here the method for estimating the Gaussian noise SD from the sample mean, denoted by <*m*>, of a collection of measurements *m*'s (18,23,26):

$${\sigma}_{g}=<m>\u2215{\beta}_{N},$$

[7]

where ${\beta}_{N}=\sqrt{{\scriptstyle \frac{\pi}{2}}}{\scriptstyle \frac{(2N-1)!!}{{2}^{N-1}(N-1)!}}=\sqrt{{\scriptstyle \frac{\pi}{2}}}{\prod}_{k=2}^{N}{\scriptstyle \frac{k-1\u22152}{k-1}}$ and the double factorial function is defined as *n*!! = *n*×(*n*−2)×⋯, with the product terminating at 2 or 1.
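A minimal sketch of Eq.[7], checking the product and double-factorial forms of β_{N} against each other and applying the estimator to simulated noise-only magnitudes (the sample size and the ground-truth σ_{g} = 10 are made up for this illustration):

```python
import math
import random

def beta_n(N):
    """beta_N via the product form sqrt(pi/2) * prod_{k=2}^{N} (k - 1/2)/(k - 1)."""
    prod = 1.0
    for k in range(2, N + 1):
        prod *= (k - 0.5) / (k - 1.0)
    return math.sqrt(math.pi / 2.0) * prod

def beta_n_dfact(N):
    """beta_N via the double-factorial form sqrt(pi/2)*(2N-1)!!/(2^(N-1)(N-1)!)."""
    dfact = 1
    for n in range(2 * N - 1, 0, -2):
        dfact *= n
    return math.sqrt(math.pi / 2.0) * dfact / (2 ** (N - 1) * math.factorial(N - 1))

# Eq.[7] on simulated noise-only magnitudes: N = 2 channels, so each
# magnitude is the root sum of squares of 2N unit-mean-zero Gaussians.
random.seed(7)
N, sigma_true = 2, 10.0
m = [math.sqrt(sum(random.gauss(0.0, sigma_true) ** 2 for _ in range(2 * N)))
     for _ in range(50_000)]
sigma_est = (sum(m) / len(m)) / beta_n(N)   # sample mean divided by beta_N
```

For *N* = 1 the correction factor reduces to the familiar Rayleigh result β_{1} = √(π/2).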

We should point out that the Gaussian noise SD, σ_{g}, can also be estimated from the sample quantile of a unique optimal order α* (Appendix B):

$${\sigma}_{g}={q}_{{\alpha}^{\ast}}\u2215\sqrt{2{P}_{s}^{-1}({\alpha}^{\ast}\mid N,1)},$$

[8]

where *q*_{α*} is the sample quantile of optimal order α*, see Table 1. Interestingly, these optimal quantiles approach the sample median as *N* increases, in the sense described in Appendix B. A well-known but slightly less optimal sample quantile is of course the sample median (i.e., α = 1/2), denoted by μ ≡ *q*_{1/2}. Specifically, σ_{g} can be expressed as a function of both μ and *N* as follows:

$${\sigma}_{g}=\mu \u2215\sqrt{2{P}_{s}^{-1}(1\u22152\mid N,1)}.$$

[9]

Note that both the sample median and the median of the continuous PDF, Eq.[2], are denoted by the same symbol. Note also that the denominator of Eq.[9] can be computed in advance, see Table 2. Note also that when *N* = 1, Eq.[9] can be expressed analytically as ${\sigma}_{g}=\mu \u2215\sqrt{2\phantom{\rule{thickmathspace}{0ex}}\mathrm{ln}\phantom{\rule{thickmathspace}{0ex}}2}$.
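For *N* = 1, Eq.[9] can be sketched in a few lines; the simulated Rayleigh noise and the ground-truth σ_{g} = 10 below are made-up illustration values:

```python
import math
import random
import statistics

# Eq.[9] for N = 1 (single-channel, Rayleigh-distributed noise): the
# inverse-CDF term reduces to ln 2, so sigma_g = median / sqrt(2 ln 2).
random.seed(1)
sigma_true = 10.0
noise = [math.sqrt(random.gauss(0.0, sigma_true) ** 2 +
                   random.gauss(0.0, sigma_true) ** 2)   # Rayleigh magnitudes
         for _ in range(100_000)]

mu = statistics.median(noise)                 # sample median of noise-only data
sigma_est = mu / math.sqrt(2.0 * math.log(2.0))
```

For *N* > 1 the only change is the denominator, which is the tabulated constant $\sqrt{2 P_s^{-1}(1/2 \mid N, 1)}$ of Table 2.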

PIESNO is best described through a step-by-step procedure as shown in Figure 3.

Step 1 is the identification of noise-only pixels. In practice, this step should be carried out on each pixel before moving on to Step 2, because dynamically adding new elements to the one-dimensional array, Ω, of Step 2 is inefficient. Note also that the summation in Step 1 should be computed in advance.

In Step 2, the link between the identification of noise-only pixels and the estimation of the Gaussian noise SD is established. Here, a one-dimensional array, Ω, is created. It is the union (in set-theoretic language) of all the arrays of *K* measurements that have been identified positively in Step 1. Note that the number of positive identifications is defined to be the number of positively identified arrays. Finally, the sample median of Ω, μ, is computed. Note again that the sample mean of Ω or a sample quantile of Ω of a specific order may be used here.

Step 3 is the estimation of noise SD. The sample median, μ, of Ω computed in Step 2 is used to estimate σ_{g} via Eq.[9]. Note again that the sample mean and other optimal quantiles as defined in respectively Eq.[7] and Eq.[8] may be used instead of Eq.[9].

In Step 4, the iteration stops if Ω is empty, otherwise it will move through Steps 1 to 4 sequentially until convergence or the maximum number of iterations is reached. Here, a sequence of estimates of σ_{g} produced by the iterative procedure is considered convergent if the absolute difference between any successive estimates is less than some small number, say 10^{−10}.

Note that the accuracy of the noise estimate depends upon how the elements of Ω are distributed, which, in turn, is determined by the lower and upper threshold values used in the identification step. In short, the two areas excluded by the threshold values in the PDF of *s* should be equal in order to ensure an accurate estimate of the Gaussian noise SD.
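Steps 1-4 can be sketched end-to-end for the single-channel (*N* = 1) case. This is a possible implementation under stated assumptions, not the authors' code: the helper names, the simulated data (2000 noise-only pixels with a made-up σ_{g} = 10 plus 500 high-signal pixels), and the use of the whole-data median as the initial estimate are our own choices:

```python
import math
import random
import statistics

def gamma_cdf_int(a, x):
    """Regularized lower incomplete gamma P(a, x), integer shape a."""
    term = math.exp(-x)
    total = term
    for k in range(1, a):
        term *= x / k
        total += term
    return 1.0 - total

def inv_gamma_cdf_int(a, p, hi=1e4, tol=1e-10):
    """Invert P(a, .) by bisection."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gamma_cdf_int(a, mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def piesno(data, N, K, alpha=0.10, max_iter=100, tol=1e-10):
    """Sketch of Steps 1-4: data is a list of pixels, each a list of K
    magnitudes. Returns (sigma_g estimate, indices of noise-only pixels)."""
    lam_minus = inv_gamma_cdf_int(N * K, alpha / 2) / K
    lam_plus = inv_gamma_cdf_int(N * K, 1 - alpha / 2) / K
    denom = math.sqrt(2.0 * inv_gamma_cdf_int(N, 0.5))    # Eq.[9] denominator
    sumsq = [sum(m * m for m in pix) for pix in data]     # precomputed once
    # Initial estimate: Eq.[9] applied to the whole data set (upper bound M).
    sigma = statistics.median(m for pix in data for m in pix) / denom
    idx = []
    for _ in range(max_iter):
        # Step 1: identify pixels whose s_{i,j} (Eq.[6]) lies in [lam-, lam+].
        idx = [i for i, ss in enumerate(sumsq)
               if lam_minus <= ss / (2.0 * sigma ** 2 * K) <= lam_plus]
        if not idx:                                       # Step 4: Omega empty
            return 0.0, []
        omega = [m for i in idx for m in data[i]]         # Step 2: pool
        sigma_new = statistics.median(omega) / denom      # Step 3: Eq.[9]
        if abs(sigma_new - sigma) < tol:                  # Step 4: converged
            return sigma_new, idx
        sigma = sigma_new
    return sigma, idx

# Simulated test: N = 1, K = 14; noise-only pixels plus signal pixels
# (eta = 150) that should be rejected. All numbers are made up.
random.seed(2)
N, K, sigma_true = 1, 14, 10.0
def magnitude(eta):
    return math.sqrt((eta + random.gauss(0.0, sigma_true)) ** 2 +
                     random.gauss(0.0, sigma_true) ** 2)
data = [[magnitude(0.0) for _ in range(K)] for _ in range(2000)]
data += [[magnitude(150.0) for _ in range(K)] for _ in range(500)]
sigma_hat, noise_idx = piesno(data, N, K)
```

Because the identified set is finite, the iteration terminates exactly once the set of positively identified pixels stops changing, which is why a tight tolerance such as 10^{−10} is practical.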

PIESNO can be made automatic by a systematic search for a good initial estimate of σ_{g}. This systematic search begins by finding an upper bound, *M*, of σ_{g}. Here, *M* is estimated from the whole volumetric data set through Eq.[9] (or Eq.[7] or Eq.[8]), where μ is taken to be the sample median (or the sample mean or the sample quantile of a specific order) of the whole volumetric data set.

Next, the interval from 0 to *M* is subdivided to generate a set of points, Φ = {*M*/*l*, 2*M*/*l*, …, (*l*−1)*M*/*l*, *M*}, where *l* is some positive integer, say 100. Each point in Φ then serves as an initial solution. The best initial solution is the one that produces the highest number of positive identifications.
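The search can be sketched as follows. This is a hypothetical implementation: the threshold values for *N* = 1, *K* = 14, α = 0.10 are those given later in the comparison section, and the simulated Rayleigh data (ground truth σ_{g} = 10) are made up:

```python
import math
import random
import statistics

# Threshold values for N = 1, K = 14, alpha = 0.10, as given in the text.
LAM_MINUS, LAM_PLUS = 0.604, 1.476
K = 14

random.seed(3)
data = [[math.sqrt(random.gauss(0.0, 10.0) ** 2 + random.gauss(0.0, 10.0) ** 2)
         for _ in range(K)] for _ in range(2000)]   # Rayleigh noise pixels
sumsq = [sum(m * m for m in pix) for pix in data]   # computed once, reused

def tally(sigma):
    """T(sigma): number of positively identified pixels at this sigma."""
    return sum(1 for ss in sumsq
               if LAM_MINUS <= ss / (2.0 * sigma ** 2 * K) <= LAM_PLUS)

# Upper bound M via Eq.[9] on the whole data set, then the grid Phi.
M = statistics.median(m for pix in data for m in pix) / math.sqrt(2.0 * math.log(2.0))
l = 50
phi = [(i + 1) * M / l for i in range(l)]
best = max(phi, key=tally)   # initial solution maximizing T
```

Since the data here are pure noise with σ_{g} = 10, the tally peaks at a grid point near 10, which is then handed to the iterative procedure as the initial estimate.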

In the previous section, we adopted the strategy of using the total number of positive identifications as a means to find a good initial estimate. This strategy is not foolproof but it works in many MR applications because it is generally true that there is only one distribution of magnitude noise. In this section, we will present a general strategy based on a well-known graphical technique in nonlinear dynamics known as *Cobweb* for analyzing the global behavior of the proposed iterative procedure (25). This general strategy is able to detect multiple distributions of magnitude noise based on the number of attracting fixed points.

First, let us define a function, Π, based on the first three steps of PIESNO:

$${\sigma}_{n+1}=\Pi \left({\sigma}_{n}\right).$$

[10]

In essence, Π takes in some value of the Gaussian noise SD, σ_{n}, and returns the next value, σ_{n+1}, without going through the full iterative process. Similarly, we can define T as a function of σ_{n} that tallies the positive identifications at σ_{n}. It is important to note that Π(σ_{n}) is defined to be zero if T(σ_{n}) is zero. It should be clear that we can also generate the binary mask for a given σ_{n}. A plot of these two functions, together with the unit-slope line, can provide an intuitive and informative snapshot of the global behavior of PIESNO for a given data set; see Section III.K for the results.

PIESNO was tested with a set of human brain data acquired on a 1.5 Tesla scanner (GE Medical Systems, Milwaukee, WI) based on the sum-of-squares reconstruction (27) with an 8-channel phased array coil, i.e., *N* = 8, using a single-shot spin-echo EPI sequence. The specific experimental parameters were: a field of view (FOV) of 24 cm × 24 cm, 60 slices without gaps and with a slice thickness of 2.5 mm, and an image matrix of 96 × 96. Each diffusion weighted image (DWI) dataset consisted of 2 images without diffusion weighting (b = 0 s/mm^{2}) and 12 images with different gradient directions but with the same diffusion weighting of 1100 s/mm^{2}, i.e., *K* = 14 at each slice location, see Figures 4A and 4B. If we set α to 0.10, we have λ_{−} = 6.798 and λ_{+} = 9.282 for the case when *K* = 14. For this particular slice location, the initial estimate of σ_{g} was found to be 0.0106 through our automatic search method with *l* = 50. The final estimate of 0.0104 was obtained in 13 iterations in less than a second. Regions classified as containing noise-only measurements are shown in white in Figure 4C.

(A) A T2-weighted image. (B) A diffusion-weighted image. (C) A quaternary image on the identification of noise. Positively identified pixels are white. In this case, the proposed method was applied to all 14 images (2 non diffusion-weighted + 12 diffusion-weighted **...**

Note that Figures 4C and 4D are quaternary images. Pixels that are classified as containing noise-only measurements are shown in white. Pixels whose *s _{ij}*'s are greater than the upper threshold are shown in light gray, and pixels whose *s _{ij}*'s are nonzero but smaller than the lower threshold are shown in dark gray.

Simulated data were also used to gain qualitative insight into the numerical behavior of PIESNO. The automatic search method was not used in this test because the stability of PIESNO with respect to different initial estimates of σ_{g} was under investigation. Here, the same set of parameters was used, i.e., *N* = 8, *K* = 14, α = 0.10, λ_{−} = 6.798 and λ_{+} = 9.282, but the ground truth of σ_{g}, for simplicity, was set to 10 arbitrary units. We generated 5000 arrays of random samples, with each array containing *K* = 14 random samples. Each random sample was generated according to the following expression, $\sqrt{{\sum}_{i=1}^{N}({\epsilon}_{i}^{2}+{\delta}_{i}^{2})}$, where the ε_{i}'s and δ_{i}'s are Gaussian distributed with mean zero and standard deviation of 10 arbitrary units. The results of this study are shown in Figure 5. Note that only one of the 5000 arrays was positively identified when 7.805 or 12.75 was used as the initial estimate of σ_{g}; it is interesting that the proposed procedure nevertheless converged to a result close to the ground truth even when it began with these estimates. Additionally, the final percentage of positively identified arrays was about 90.56% and the final estimate of σ_{g} was about 10.015. These results were independent of the initial solutions used.
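This simulation is straightforward to reproduce approximately. In the sketch below, the thresholds are the λ values quoted above, the sample counts follow the text, and the random seed is our own; at the true σ_{g}, the fraction of arrays falling between the thresholds should be close to 1 − α = 0.90:

```python
import math
import random

# Thresholds for N = 8, K = 14, alpha = 0.10, as given in the text.
LAM_MINUS, LAM_PLUS = 6.798, 9.282
N, K, sigma_g = 8, 14, 10.0

random.seed(4)
def magnitude():
    """One sum-of-squares magnitude: sqrt(sum_i (eps_i^2 + delta_i^2))."""
    return math.sqrt(sum(random.gauss(0.0, sigma_g) ** 2 +
                         random.gauss(0.0, sigma_g) ** 2 for _ in range(N)))

# 5000 arrays of K = 14 samples each; count arrays whose s (Eq.[6])
# falls between the lower and upper thresholds at the true sigma_g.
count = 0
for _ in range(5000):
    s = sum(magnitude() ** 2 for _ in range(K)) / (2.0 * sigma_g ** 2 * K)
    if LAM_MINUS <= s <= LAM_PLUS:
        count += 1
fraction = count / 5000
```

The observed fraction fluctuates around 0.90 with a standard error of roughly 0.4%, consistent with the 90.56% reported in the text.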

A synthetic image matrix of 64 × 64 was constructed (Figure 6A) and used to compare the performance of three different methods: (I) PIESNO, (II) the recently proposed method of Sijbers et al. (22), and (III) a method that combines (I) and (II) by using one of the outputs of PIESNO, i.e., Ω, as an input to (II). The performance measure used was the mean squared error in estimating the Gaussian noise SD. Further, the performance of these methods in terms of the average time needed to carry out a single estimation of noise variance was also evaluated.

(A) The ground truth image, a synthetic image with different signal intensities ranging from 0 to 260 (arbitrary units). (B) The mean squared error (in estimating the Gaussian noise SD) as a function of the Gaussian noise SD. Note that the mean squared **...**

Note that the following data structure, {{0, 2160}, {2, 176}, {4, 176}, {6, 176}, {8, 176}, {10, 176}, {12, 176}, {14, 176}, {28, 176}, {48, 176}, {96, 176}, {192, 88}, {260, 88}}, shows all distinct elements of our synthetic image together with their multiplicities, e.g., there are 2160 elements with null signal intensity, 176 elements with signal intensity of 2 (arbitrary unit), and so on.

For convenience, the same set of parameters was used, i.e., *K* = 14 and α = 0.10, except for *N*, which in this case was 1 because the Sijbers method was specifically designed for Rician/Rayleigh-distributed data. Therefore, we had λ_{−} = 0.604 and λ_{+} = 1.476. For simplicity, the 14 images (*K* = 14) were drawn from repeated measurements.

In this comparison, 5000 volumetric data sets, each containing 64 × 64 × 14 random samples, were generated for each value of σ_{g} based on the sum-of-squares algorithm. The values of σ_{g} ranged from 1 to 20 in uniform steps of 1. Each volumetric data set served as an input to both PIESNO and the Sijbers method. The initial number of bins for the Sijbers method was 1200, but it is well known that one of the advantages of the Sijbers method is its ability to find an optimal number of bins. The results are shown in Figure 6B. PIESNO achieved a lower mean squared error in estimating the Gaussian noise SD than did the Sijbers method. In addition, it is clear that PIESNO can improve the performance of the Sijbers method (or other methods of noise variance estimation) when the noise ensemble, Ω, is used as an input to the Sijbers method.

A synthetic volumetric data set of 64 × 64 × 16 was constructed. Each of the 16 images was created with random numbers drawn from two Rayleigh distributions, σ_{g} = 10 and σ_{g} = 20, according to the following rule: if both indices of a pixel location, (*i,j*), are even numbers, then a random number drawn from the Rayleigh distribution with σ_{g} = 10 is placed at this pixel location; otherwise, a random number drawn from the other Rayleigh distribution is placed there. Figures 7A and 7B are representative of the noisy images constructed using this rule. Figure 7C is a histogram of the whole volumetric data set. Note that it is impossible to infer from this histogram alone that there are two (Rayleigh) distributions.

(A-B) Two representative noisy images. (C) Histogram of the noisy volumetric data. (D-E) Binary masks indicating two spatial distributions of noise drawn from two different Rayleigh distributions. (F) Graphical analysis of PIESNO.

Based on the technique outlined in Section II.G, it is possible to detect multiple Rayleigh distributions through PIESNO. Figures 7D and 7E are binary masks computed at two different attracting fixed points shown in Figure 7F. In Figure 7F, it is clear that the function, T, which is the total number of positive identifications, peaks near the attracting fixed points. The nonzero trough of the function, T, seems to coincide with the repelling fixed points. The results showed that the number of attracting fixed points coincides with the number of clusters of noise with different probability distributions.
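Under the stated assumptions, the attracting fixed points can also be located numerically by simply iterating Π from different starting values. The sketch below is our own construction, not the paper's experiment: it uses *N* = 1, *K* = 16, and 1000 pixels per Rayleigh distribution rather than the full 64 × 64 × 16 checkerboard, and it converges to a fixed point near each true σ_{g}:

```python
import math
import random
import statistics

def gamma_cdf_int(a, x):
    """Regularized lower incomplete gamma P(a, x), integer shape a."""
    term = math.exp(-x)
    total = term
    for k in range(1, a):
        term *= x / k
        total += term
    return 1.0 - total

def inv_gamma_cdf_int(a, p, hi=1e4, tol=1e-10):
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gamma_cdf_int(a, mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N, K, alpha = 1, 16, 0.10
LAM_MINUS = inv_gamma_cdf_int(N * K, alpha / 2) / K
LAM_PLUS = inv_gamma_cdf_int(N * K, 1 - alpha / 2) / K
DENOM = math.sqrt(2.0 * math.log(2.0))          # Eq.[9] denominator for N = 1

# Mixture of two Rayleigh noise fields, sigma_g = 10 and sigma_g = 20.
random.seed(5)
def rayleigh(sig):
    return math.sqrt(random.gauss(0.0, sig) ** 2 + random.gauss(0.0, sig) ** 2)
data = ([[rayleigh(10.0) for _ in range(K)] for _ in range(1000)] +
        [[rayleigh(20.0) for _ in range(K)] for _ in range(1000)])
sumsq = [sum(m * m for m in pix) for pix in data]

def next_sigma(sigma):
    """One application of Pi (Steps 1-3): identify, pool, re-estimate.
    Returns 0 if no pixel is identified, as in the text."""
    idx = [i for i, ss in enumerate(sumsq)
           if LAM_MINUS <= ss / (2.0 * sigma ** 2 * K) <= LAM_PLUS]
    if not idx:
        return 0.0
    omega = [m for i in idx for m in data[i]]
    return statistics.median(omega) / DENOM

def iterate(sigma0, n=50):
    s = sigma0
    for _ in range(n):
        s = next_sigma(s)
    return s

fp_low = iterate(9.0)    # basin of the fixed point near sigma_g = 10
fp_high = iterate(22.0)  # basin of the fixed point near sigma_g = 20
```

Starting values below the midpoint fall into the σ_{g} = 10 basin and the rest into the σ_{g} = 20 basin, mirroring the two attracting fixed points visible in the Cobweb plot of Figure 7F.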

Earlier approaches such as those of Sijbers et al. (22) and Chang et al. (21) are general and can be applied to any 3D volumetric data set. These approaches can also be applied to the particular 3D volumetric data considered in this paper.

In a typical functional or diffusion MRI data set, one usually has many images for a particular slice location, and these images are usually acquired under slightly different experimentally controlled conditions. The data structure considered in this work requires nothing more than a simple rearrangement of the data, as does the data structure considered by Sijbers *et al.* and Chang *et al.*, which is a 3D volumetric data set consisting of distinct images across multiple slice locations. There is nothing special about either data structure, because the basic assumption about the Gaussian noise SD is the same: it is assumed to be uniform either across multiple slice locations or across multiple images of the same location, e.g., if the readout bandwidth is maintained at the same level for all the images. Consequently, it would be superfluous to regard the data structure used by Sijbers *et al.* and Chang *et al.* as better or more appropriate for estimating the Gaussian noise SD than the one used in this work.

Our approach is designed for the particular 3D volumetric data structure mentioned above (multiple images acquired under different experimental conditions but at the same slice location) and cannot be applied to arbitrary 3D volumetric data. Our method takes advantage of this data structure to increase the discriminative power of the identification of noise-only pixels. We should point out that our approach may not be optimal when *K* is small (less than 5), but in high angular resolution diffusion MRI or functional MRI, *K* is usually greater than 5.

The initial motivation behind this work was simply to take advantage of the multiplicity of the images within the proposed data structure to increase the discriminative power of the identification of noise-only pixels, and to develop a simple and self-consistent framework of noise assessment that incorporates both the identification of noise-only pixels and the estimation of Gaussian noise SD.

This paper can be thought of as a sequel to but independent of our recent paper on a signal-transformational framework for breaking the noise floor in MRI and other imaging sciences (7). The present method for estimating the Gaussian noise SD and the technique proposed in (7) represent our major attempt to decouple the fixed point formula of SNR (26) into two self-consistent approaches for estimating the underlying signal and the Gaussian noise SD, respectively. The advantage of this decoupling is substantial because the estimation of the Gaussian noise SD can be obtained from a much larger collection of samples. As a consequence, the precision of the Gaussian noise SD estimate will be significantly increased, and in turn, the precision of the underlying signal intensity estimate will also be increased (7).

The proposed identification of noise-only pixels is a simple and highly discriminative method for identifying noise because its identification criterion entails a few simple arithmetic operations and its discriminative power is proportional to the number of images (i.e., *K*) within the proposed data structure. PIESNO can be viewed as a clustering algorithm for finding different noise-only regions. Note that the input parameters of PIESNO can be obtained readily, e.g., see (34) for a routine on the CDF of the Gamma distribution. Note also that its discriminative power decreases as the number of combined channels (i.e., *N*) increases, see Figure 2.

In this work, we use a quaternary image rather than a binary image to distinguish different types of pixels. In essence, a quaternary image can provide useful visual information about the four distinct decision-making regions (from the PDF) to which a particular *s _{ij}* belongs. For example, PIESNO seems to be able to exclude low-level artifacts at the top of the field of view in both Figures 4C and 4D; interestingly, the values of these low-level artifacts are still higher than the upper threshold. Note that some pixels in dark gray are persistently distributed at the interface between the brain and the skull. The values of these pixels are nonzero but less than the lower threshold. This peculiar phenomenon may be due to the use of the Fermi filter in the image reconstruction. Finally, regions excluded by PIESNO that are outside of the tissue region may provide useful information about the spatial distribution of low-level artifacts.

The proposed estimation of the Gaussian noise SD via the median method is efficient and robust: the sample median can be found very efficiently, e.g., see (35), and it is a statistically robust measure, that is, robust against outliers. Furthermore, the sample mean or the slightly more optimal quantiles developed in Appendix B may be used instead of the sample median.
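For *N* = 1, where noise-only magnitudes are Rayleigh-distributed, the median method reduces to a one-line formula, σ_g = median(m)/√(2 ln 2), i.e., Eq. [B4] at quantile order 0.5. The sketch below (an illustration, not the authors' implementation) applies it to synthetic Rayleigh noise of known SD:

```python
import math
import random
import statistics

def sigma_from_median(noise_magnitudes):
    # Median-based estimate of the Gaussian noise SD for N = 1 (Rayleigh case):
    # sigma_g = median(m) / sqrt(2 ln 2), i.e., Eq. [B4] with alpha = 0.5.
    return statistics.median(noise_magnitudes) / math.sqrt(2.0 * math.log(2.0))

# Synthetic Rayleigh noise with known SD: m = sigma * sqrt(-2 ln U), U ~ (0, 1].
random.seed(7)
sigma_true = 3.0
m = [sigma_true * math.sqrt(-2.0 * math.log(1.0 - random.random()))
     for _ in range(20000)]
est = sigma_from_median(m)   # est is close to sigma_true
```

Because the median involves only a selection step, this remains cheap even when the noise-only set Ω contains hundreds of thousands of pixels.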

The proposed framework of noise assessment, PIESNO, shows how the identification of noise-only pixels and the estimation of the Gaussian noise SD can be connected in a dynamic and iterative manner. PIESNO was shown to be beneficial to methods of estimation such as that of Sijbers and coworkers. That is, we showed that PIESNO can be combined with other noise variance estimation methods, e.g., (21,22,36), to achieve better performance in terms of the mean squared error in estimating the Gaussian noise SD. We remark that the proposed method is slightly more general than the maximum likelihood based approach of Sijbers and coworkers (22) because our method is applicable to a larger class of magnitude data than just Rayleigh-distributed data (i.e., *N* = 1).
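The iterative coupling between identification and estimation can be conveyed by a deliberately simplified sketch for *N* = 1 and *K* = 1, in which the Rayleigh quantiles are closed-form and only the upper threshold is used; the name `piesno_like` and these simplifications are ours, and PIESNO proper operates on the *K*-image data structure with both thresholds obtained from the Gamma CDF.

```python
import math
import random
import statistics

def piesno_like(magnitudes, alpha=0.01, iters=50, tol=1e-6):
    # Simplified sketch of the iterative coupling for N = 1, K = 1 (Rayleigh
    # background): alternate between (i) flagging pixels whose magnitude falls
    # below the upper Rayleigh quantile implied by the current sigma estimate
    # and (ii) re-estimating sigma from the flagged pixels via the median.
    c_med = math.sqrt(2.0 * math.log(2.0))        # Rayleigh median for sigma = 1
    upper_q = math.sqrt(-2.0 * math.log(alpha))   # Rayleigh quantile of order 1 - alpha
    sigma = statistics.median(magnitudes) / c_med  # crude initial guess
    for _ in range(iters):
        omega = [m for m in magnitudes if m <= upper_q * sigma]  # candidate noise set
        if not omega:
            break
        new_sigma = statistics.median(omega) / c_med
        if abs(new_sigma - sigma) <= tol * sigma:
            sigma = new_sigma
            break
        sigma = new_sigma
    return sigma

# Background noise with sigma = 3 plus bright "tissue" pixels around 50.
random.seed(1)
noise = [3.0 * math.sqrt(-2.0 * math.log(1.0 - random.random())) for _ in range(10000)]
signal = [50.0 + random.gauss(0.0, 3.0) for _ in range(2500)]
sigma_hat = piesno_like(noise + signal)   # close to 3.0
```

Even though the initial guess is inflated by the signal pixels, the first thresholding step discards them, and the subsequent median-based updates settle near the true noise SD.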

PIESNO is general and can be adapted to other imaging sciences by using a different PDF and CDF of interest. An important application of PIESNO is the assessment of noise in the brain region. Specifically, PIESNO can be used to evaluate the quality of images acquired at high diffusion weighting through one of the high angular resolution diffusion imaging (HARDI) techniques (37-42). HARDI images (acquired at the same slice location and the same diffusion weighting, but with either different or the same gradient directions) can be arranged into the proposed data structure. These HARDI images can then be analyzed via PIESNO to determine whether a region of interest (ROI) has any pixels that contain only noise. It should be clear that this type of noise assessment is also applicable to fMRI images.

Here, we discuss some limitations of our approach. PIESNO cannot be expected to perform well for very small *K* because the identification of noise may be less stable and may not produce a sufficient number of elements in Ω for the median-based estimation of noise to be useful. Nevertheless, one can still estimate the Gaussian noise SD from a region of interest (ROI) in the image background via the median method alone, the optimal quantile method developed in Appendix B, or other classical formulae based on the mean or the standard deviation of the measurements in that ROI (19). Based on our experience with numerous simulation tests on the same simulated data used in Figure 6, PIESNO was found to perform well when *K* was greater than 5. At *K* = 5, the mean squared error of PIESNO in estimating the Gaussian noise SD is slightly higher than that of the Sijbers method, but the combined method remains the best in terms of mean squared error. At present, we do not have a rigorous statement about the convergence of PIESNO, but it has never failed to converge in any case we tested. For some qualitative features of the convergence of PIESNO, see Appendix C.

In conclusion, we have demonstrated that it is useful and logical to combine the identification of noise-only pixels and the estimation of noise variance into a single coherent framework of noise assessment. This is an important paradigm for noise assessment in MRI and in other fields. We have also demonstrated empirically the convergence of the proposed method and presented a simple graphical analysis for understanding the global behavior of our iterative schemes.

C.G. Koay dedicates this work to Felix Koay. He is very grateful to Drs. J. Sijbers and D. Poot for providing the Matlab software that implements (22), and to Dr. D. Poot for help with the software. The authors are grateful to the reviewers for very thoughtful and helpful comments, to Liz Salak for editing the paper, to Dr. Peter J. Basser for critically reading the paper, to Dr. Alan Barnett for his help with the pulse sequence, to Dr. Stefano Marenco for sharing the experimental data, and to Ms. Lindsay Walker and Dr. Lin-Ching Chang for help with image formatting. C.G. Koay was supported in part by the Intramural Research Program of the *Eunice Kennedy Shriver* National Institute of Child Health and Human Development, the National Institute on Drug Abuse, the National Institute of Mental Health, and the National Institute of Neurological Disorders and Stroke as part of the NIH MRI Study of Normal Pediatric Brain Development with supplemental funding from the NIH Neuroscience Blueprint. PIESNO is freely available upon request (http://sites.google.com/site/piesnoformri/).

In a few steps, we will use the characteristic function (43) of *f*_{γ} to derive the PDF of the arithmetic mean of *K* independent Gamma random variables with PDF *f*_{γ}(*t*|*N*,1), Eq.[4]. First, the characteristic function of *f*_{γ} can be shown to be:

$${\phi}_{\gamma}(\omega \mid n,\beta )\equiv \int_{0}^{\infty}{f}_{\gamma}(t\mid n,\beta )\,\mathrm{exp}\left(i\omega t\right)\,dt={(1-i\beta \omega )}^{-n},$$

[A1]

where $i\equiv \sqrt{-1}$. Therefore, the characteristic function of *f*_{γ}(*t*|*N*,1) is given by:

$${\phi}_{\gamma}(\omega \mid N,1)={(1-i\omega )}^{-N}.$$

[A2]

Second, the characteristic function of the PDF of the arithmetic mean of *K* independent Gamma random variables can be expressed as:

$${\left[{\phi}_{\gamma}(\omega /K\mid N,1)\right]}^{K}={(1-i\omega /K)}^{-NK}={\phi}_{\gamma}(\omega \mid NK,1/K).$$

[A3]

Third, the right-hand side of Eq.[A3] is recognized as the characteristic function of a Gamma PDF with shape parameter *NK* and scale parameter 1/*K*:

$${\phi}_{\gamma}(\omega \mid NK,1/K)=\int_{0}^{\infty}\frac{{K}^{NK}}{(NK-1)!}{s}^{NK-1}\,\mathrm{exp}(-Ks)\,\mathrm{exp}\left(i\omega s\right)ds.$$

[A4]

Thus, the PDF of the arithmetic mean of *K* independent Gamma random variables is a Gamma distribution given by:

$${f}_{\gamma}(s\mid NK,1/K)=\frac{{K}^{NK}}{(NK-1)!}{s}^{NK-1}\,\mathrm{exp}(-Ks),$$

[A5]

where *s* (or ${s}_{i,j}=\frac{1}{K}\sum_{k=1}^{K}{t}_{i,j,k}$) is a new random variable denoting the arithmetic mean of *K* independent Gamma random variables, $t={m}^{2}/(2{\sigma}_{g}^{2})$ (or ${t}_{i,j,k}={m}_{i,j,k}^{2}/(2{\sigma}_{g}^{2})$), with PDF *f*_{γ}(*t*|*N*,1). A few PDFs of *s* are shown in Figure 2.
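Eq.[A5] can be checked by simulation. The sketch below (an illustration added here, not part of the original derivation) verifies that the arithmetic mean of *K* draws from Gamma(*N*,1) has the mean *N* and variance *N*/*K* implied by a Gamma(*NK*, 1/*K*) distribution:

```python
import random
import statistics

random.seed(0)
N, K, trials = 4, 8, 20000

# Arithmetic means of K independent Gamma(N, 1) draws.
s = [statistics.fmean(random.gammavariate(N, 1.0) for _ in range(K))
     for _ in range(trials)]

# Under Eq. [A5], s ~ Gamma(NK, 1/K): mean = NK * (1/K) = N, variance = NK / K**2 = N / K.
mean_s = statistics.fmean(s)
var_s = statistics.variance(s)
```

For the values above, `mean_s` should be close to 4 and `var_s` close to 0.5, as Eq.[A5] predicts.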

The connection between the quantile of order α, denoted by *q*_{α}, and σ_{g} of Eq.[2] is given by the definition of the quantile in a continuous PDF:

$$\int_{0}^{{q}_{\alpha}}{p}_{\tilde{\chi}}(m\mid 0,{\sigma}_{g},N)\,dm=\alpha .$$

[B1]

Equation [B1] can also be expressed in terms of the CDF of *s*, *P*_{s}(λ|*N*,1):

$${P}_{s}({q}_{\alpha}^{2}/(2{\sigma}_{g}^{2})\mid N,1)=\alpha .$$

[B2]

Equivalently, Eq.[B2] can be expressed in terms of the inverse CDF of *s*:

$${q}_{\alpha}^{2}/(2{\sigma}_{g}^{2})={P}_{s}^{-1}(\alpha \mid N,1).$$

[B3]

Hence,

$${\sigma}_{g}={q}_{\alpha}/\sqrt{2{P}_{s}^{-1}(\alpha \mid N,1)}.$$

[B4]
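For *N* = 1 the noise-only magnitude is Rayleigh-distributed, *P*_{s}^{−1}(α|1,1) = −ln(1−α), and the Rayleigh quantile is *q*_{α} = σ_{g}√(−2 ln(1−α)), so Eq.[B4] can be verified in closed form. The sketch below (function names are illustrative) confirms that Eq.[B4] recovers σ_{g} exactly, whatever the quantile order:

```python
import math

def rayleigh_quantile(alpha, sigma):
    # Exact quantile of the Rayleigh (N = 1) magnitude distribution.
    return sigma * math.sqrt(-2.0 * math.log(1.0 - alpha))

def sigma_from_quantile(q_alpha, alpha):
    # Eq. [B4] with N = 1, using Ps^{-1}(alpha | 1, 1) = -ln(1 - alpha).
    return q_alpha / math.sqrt(2.0 * -math.log(1.0 - alpha))

sigma = 2.0
recovered = [sigma_from_quantile(rayleigh_quantile(a, sigma), a)
             for a in (0.1, 0.5, 0.9)]
# Each entry equals sigma (up to floating point), independent of alpha.
```

In practice *q*_{α} is replaced by the corresponding sample quantile of the identified noise-only pixels, which is why the choice of α affects only the variance of the estimate, not its target.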

Although a self-contained discussion of the theoretical justifications for using the sample median as an estimator of the Gaussian noise SD is beyond the scope of this paper, it is still useful to point out that the sample quantile of any order α (0<α<1) is a consistent, asymptotically normal estimator with several desirable properties, e.g., page 215 of Dudewicz (44). Further, the Gaussian noise SD can be estimated from quantiles of different orders as given in Eq.[B4]. It can be shown that the sample median, which is the sample quantile of order 0.5, has higher relative efficiency than most quantiles of other orders as *N* increases. We note that the expression for the expected variance of the quantile of order α is given by (44):

$$\frac{\alpha (1-\alpha )}{{p}_{\tilde{\chi}}{({q}_{\alpha}\mid 0,{\sigma}_{g},N)}^{2}}.$$

[B5]

Based on Eqs[B4,B5], it can be shown that the expected SD for estimating the Gaussian noise SD through the quantile of order α can be expressed as:

$$\frac{\sqrt{\alpha (1-\alpha )}}{\sqrt{2{P}_{s}^{-1}(\alpha \mid N,1)}\,{p}_{\tilde{\chi}}(\sqrt{2{\sigma}_{g}^{2}{P}_{s}^{-1}(\alpha \mid N,1)}\mid 0,{\sigma}_{g},N)}.$$

[B6]

Figures 8A, 8B, 8C and 8D show the expected SD for estimating the Gaussian noise SD through the quantile of order α with *N* equal to 1, 8, 64 and 128, respectively, and with different Gaussian noise SDs, σ_{g} = 1, 2, 3, 4 and 5, coded in hues ranging from red to blue.

Expected SD for estimating the Gaussian noise SD using the quantile of order α with (A) *N*=1, (B) *N*=8, (C) *N*=64, and (D) *N*=128, and with different Gaussian noise SDs, i.e., σ_{g} = 1, 2, 3, 4, and 5 (arbitrary units), coded in hues ranging from red to blue.

The optimal quantile order, α*, can be found by locating the minimum of Eq.[B6]. It can be shown that the optimal quantile order is independent of σ_{g}. The numerical values of the optimal quantile order for different *N* are listed in Table A. For completeness, the values of the denominator of Eq.[B4] evaluated at the optimal quantile order are also listed in Table A. The use of the sample median is expedient and reasonable because the optimal quantile order approaches ½ as *N* increases. The sample median used in estimating the Gaussian noise SD may also be replaced by the sample mean.
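The minimization can be done with a simple grid scan. The sketch below (our illustration, under our reading of Eqs.[B4] and [B5]: the SD of the σ_{g} estimate is the SD of the sample quantile divided by the constant in the denominator of Eq.[B4]) carries this out for *N* = 1, where every quantity is closed-form:

```python
import math

def expected_sd_factor(alpha, sigma=1.0):
    # Per-sample SD of the quantile-based estimate of sigma_g for N = 1
    # (Rayleigh), combining Eq. [B4] with Eq. [B5]:
    # SD(sigma_hat) = SD(q_hat) / c, c = sqrt(2 * Ps^{-1}(alpha | 1, 1)),
    # with Ps^{-1}(alpha | 1, 1) = -ln(1 - alpha) for N = 1.
    u = -math.log(1.0 - alpha)
    q = sigma * math.sqrt(2.0 * u)                             # Rayleigh quantile
    p = (q / sigma**2) * math.exp(-q * q / (2.0 * sigma**2))   # Rayleigh PDF at q
    c = math.sqrt(2.0 * u)
    return math.sqrt(alpha * (1.0 - alpha)) / (c * p)

grid = [k / 1000.0 for k in range(1, 1000)]
a_star = min(grid, key=expected_sd_factor)   # ~0.797 for N = 1
```

The scan returns the same α* for any value of σ_{g}, illustrating the σ_{g}-independence noted above; for *N* = 1 the optimum sits near 0.8 rather than ½, which is why the median is only asymptotically (in *N*) the natural choice.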

Although we do not have a rigorous and general statement about the convergence of the proposed framework, it has never failed to converge in any of the cases we tested. In this appendix, we discuss some qualitative features of the convergence of PIESNO.

Let us assume that there is only one attracting fixed point, denoted by σ_{FP}, which is unknown to us. In what follows, we will show that if σ_{n} is an estimate that results in a nonempty noise array Ω, then the next estimate, σ_{n+1}, satisfies the following conditions: (A) if σ_{n}>σ_{FP}, then σ_{n}>σ_{n+1}>σ_{FP}, and (B) if σ_{n}<σ_{FP}, then σ_{n}<σ_{n+1}<σ_{FP}. Based on these two cases, we can conclude that σ_{n} converges to σ_{FP} as *n* increases.

Let us assume that σ_{n}>σ_{FP}; then we can write σ_{n} = βσ_{FP} with β>1. In many simulations we carried out with *K*>5, we found that σ_{n+1} is close to the value $\sqrt{\beta}\,{\sigma}_{FP}$. The same is true when σ_{n}<σ_{FP}, i.e., σ_{n} = βσ_{FP} with β<1. Note that the data structure used in these tests was 128×128×*K* with α = 0.1 and *N* = 1 to *N* = 64. For cases with *K*<5, (A) and (B) remain true, but we did not find ${\sigma}_{n+1}\approx \sqrt{\beta}\,{\sigma}_{FP}$ to hold.
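The empirical observation above, σ_{n+1} ≈ √β σ_{FP} when σ_{n} = βσ_{FP}, is equivalent to the update σ_{n+1} = √(σ_{n} σ_{FP}), whose error contracts geometrically. The following toy iteration (a qualitative model only, not PIESNO itself) makes the monotone behavior of conditions (A) and (B) concrete:

```python
import math

def next_sigma(sigma_n, sigma_fp):
    # Qualitative model of one PIESNO update: with sigma_n = beta * sigma_FP,
    # the observation sigma_{n+1} ≈ sqrt(beta) * sigma_FP is exactly
    # sigma_{n+1} = sqrt(sigma_n * sigma_FP).
    return math.sqrt(sigma_n * sigma_fp)

sigma_fp, sigma = 50.0, 200.0   # beta = 4: start well above the fixed point
history = [sigma]
for _ in range(20):
    sigma = next_sigma(sigma, sigma_fp)
    history.append(sigma)
# Each iterate stays above sigma_FP and decreases toward it, as in condition (A);
# starting below sigma_FP gives the mirror-image behavior of condition (B).
```

Since the exponent of β is halved at every step, the iterates satisfy σ_{n} = β^{2^{−n}} σ_{FP}, so convergence to σ_{FP} is rapid regardless of the starting β.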

For example, let the dimensions of the volumetric data be 128×128×*K* with *K* = 8, 12, 16 and 20. Let us take σ_{FP} = 50 as the ground truth, α = 0.1,


1. Lauterbur PC. Image formation by induced local interactions: Examples employing nuclear magnetic resonance. Nature. 1973;242(5394):190–191. [PubMed]

2. Ogawa S, Lee TM, Kay AR, Tank DW. Brain Magnetic Resonance Imaging with Contrast Dependent on Blood Oxygenation. Proceedings of the National Academy of Sciences. 1990;87(24):9868–9872. [PubMed]

3. Basser PJ, Mattiello J, Le Bihan D. MR diffusion tensor spectroscopy and imaging. Biophys J. 1994;66(1):259–267. [PubMed]

4. Tuch DS, Reese TG, Wiegell MR, Wedeen VJ. Diffusion MRI of complex neural architecture. Neuron. 2003;40(5):885–895. [PubMed]

5. Nowak RD. Wavelet-based Rician noise removal for magnetic resonance imaging. IEEE Transactions on Image Processing. 1999;8(10):1408–1419. [PubMed]

6. Rohde GK, Barnett AS, Basser PJ, Pierpaoli C. Estimating intensity variance due to noise in registered images: Applications to diffusion tensor MRI. NeuroImage. 2005;26(3):673–684. [PubMed]

7. Koay CG, Özarslan E, Basser PJ. A signal transformational framework for breaking the noise floor and its applications in MRI. Journal of Magnetic Resonance. 2009 [PMC free article] [PubMed]

8. Koay CG, Chang L-C, Carew JD, Pierpaoli C, Basser PJ. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging. Journal of Magnetic Resonance. 2006;182(1):115–125. [PubMed]

9. Chang L-C, Jones DK, Pierpaoli C. RESTORE: Robust estimation of tensors by outlier rejection. Magnetic Resonance in Medicine. 2005;53(5):1088–1095. [PubMed]

10. Basser PJ, Mattiello J, Le Bihan D. Estimation of the effective self-diffusion tensor from the NMR spin echo. Journal of Magnetic Resonance, Series B. 1994;103(3):247–254. [PubMed]

11. Jones DK, Basser PJ. Squashing peanuts and smashing pumpkins: How noise distorts diffusion-weighted MR data. Magnetic Resonance in Medicine. 2004;52(5):979–993. [PubMed]

12. Koay CG, Chang LC, Pierpaoli C, Basser PJ. Error propagation framework for diffusion tensor imaging via diffusion tensor representations. IEEE Transactions on Medical Imaging. 2007;26(8):1017–1034. [PubMed]

13. Anderson AW. Theoretical analysis of the effects of noise on diffusion tensor imaging. Magnetic Resonance in Medicine. 2001;46(6):1174–1188. [PubMed]

14. Koay CG, Nevo U, Chang L-C, Pierpaoli C, Basser PJ. The elliptical cone of uncertainty and its normalized measures in diffusion tensor imaging. IEEE Transactions on Medical Imaging. 2008;27(6):834–846. [PMC free article] [PubMed]

15. Jones DK. Determining and visualizing uncertainty in estimates of fiber orientation from diffusion tensor MRI. Magnetic Resonance in Medicine. 2003;49(1):7–12. [PubMed]

16. Chang L-C, Koay CG, Pierpaoli C, Basser PJ. Variance of estimated DTI-derived parameters via first-order perturbation methods. Magnetic Resonance in Medicine. 2007;57(1):141–149. [PubMed]

17. Jeong H-K, Anderson AW. Characterizing fiber directional uncertainty in diffusion tensor MRI. Magnetic Resonance in Medicine. 2008;60(6):1408–1421. [PMC free article] [PubMed]

18. Edelstein WA, Bottomley PA, Pfeifer LM. A signal-to-noise calibration procedure for NMR imaging systems. Med Phys. 1984;11(2):180–185. [PubMed]

19. Henkelman RM. Measurement of signal intensities in the presence of noise in MR images. Med Phys. 1985;12(2):232–233. [PubMed]

20. Bernstein MA, Thomasson DM, Perman WH. Improved detectability in low signal-to-noise ratio magnetic resonance images by means of a phase-corrected real reconstruction. Med Phys. 1989;16(5):813–817. [PubMed]

21. Chang L-C, Rohde GK, Pierpaoli C. An automatic method for estimating noise-induced signal variance in magnitude-reconstructed magnetic resonance images. SPIE Medical Imaging: Image processing. 2005;5747:1136–1142.

22. Sijbers J, Poot D, den Dekker AJ, Pintjens W. Automatic estimation of the noise variance from the histogram of a magnetic resonance image. Physics in Medicine and Biology. 2007;52(5):1335–1348. [PubMed]

23. Constantinides CD, Atalar E, McVeigh ER. Signal-to-noise measurements in magnitude images from NMR phased arrays. Magnetic Resonance in Medicine. 1997;38(5):852–857. [PMC free article] [PubMed]

24. Andersen AH, Kirsch JE. Analysis of noise in phase contrast MR imaging. Med Phys. 1996;23(6):857–869. [PubMed]

25. Strogatz SH. Nonlinear dynamics and chaos. Perseus Publishing; Cambridge: 1994.

26. Koay CG, Basser PJ. Analytically exact correction scheme for signal extraction from noisy magnitude MR signals. Journal of Magnetic Resonance. 2006;179(2):317–322. [PubMed]

27. Roemer PB, Edelstein WA, Hayes CE, Souza SP, Mueller OM. The NMR phased array. Magnetic Resonance in Medicine. 1990;16:192–225. [PubMed]

28. Dietrich O, Raya JG, Reeder SB, Ingrisch M, Reiser MF, Schoenberg SO. Influence of multichannel combination, parallel imaging and other reconstruction techniques on MRI noise characteristics. Magnetic Resonance Imaging. 2008;26(6):754–762. [PubMed]

29. Rice SO. Mathematical analysis of random noise. Bell System Technical Journal. 1944;23 and 1945;24.

30. Gudbjartsson H, Patz S. The Rician distribution of noisy MRI data. Magnetic Resonance in Medicine. 1995;34(6):910–914. [PMC free article] [PubMed]

31. Casella G, Berger RL. Statistical Inference. Duxbury; Pacific Grove: 2002.

32. Rao CR. Linear statistical inference and its applications. Wiley; New York: 1973.

33. Wolfram S. The Mathematica book. Wolfram Media; Champaign: 2003.

34. Press WH, Flannery BP, Teukolsky SA, Vetterling WT. Numerical recipes in C. Cambridge University Press; New York: 1992.

35. Sedgewick R. Algorithms in Java. Addison Wesley; Boston: 2003.

36. Brummer ME, Mersereau RM, Eisner RL, Lewine RRJ. Automatic detection of brain contours in MRI data sets. IEEE Transactions on Medical Imaging. 1993;12(2):153–166. [PubMed]

37. Özarslan E, Shepherd TM, Vemuri BC, Blackband SJ, Mareci TH. Resolution of complex tissue microarchitecture using the diffusion orientation transform (DOT). NeuroImage. 2006;31(3):1086–1103. [PubMed]

38. Tuch DS, Reese TG, Wiegell MR, Makris N, Belliveau JW, Wedeen VJ. High angular resolution diffusion imaging reveals intravoxel white matter fiber heterogeneity. Magnetic Resonance in Medicine. 2002;48(4):577–582. [PubMed]

39. Tuch DS. Q-ball imaging. Magnetic Resonance in Medicine. 2004;52(6):1358–1372. [PubMed]

40. Tuch DS, Weisskoff RM, Belliveau JW, Wedeen VJ. High angular resolution diffusion imaging of the human brain. Proceedings of the 7th Annual Meeting of the ISMRM; Philadelphia: 1999. p. 321.

41. Anderson AW. Measurement of fiber orientation distributions using high angular resolution diffusion imaging. Magnetic Resonance in Medicine. 2005;54(5):1194–1206. [PubMed]

42. Frank LR. Characterization of anisotropy in high angular resolution diffusion-weighted MRI. Magnetic Resonance in Medicine. 2002;47(6):1083–1099. [PubMed]

43. Papoulis A. Probability, random variables, and stochastic processes. McGraw-Hill; New York: 1965.

44. Dudewicz EJ. Introduction to statistics and probability. Holt, Rinehart & Winston; New York: 1976.
