


J Magn Reson. Author manuscript; available in PMC 2010 April 1.

Published online 2008 December 6. doi: 10.1016/j.jmr.2008.11.015

PMCID: PMC2765718

NIHMSID: NIHMS83011

Section on Tissue Biophysics and Biomimetics, Eunice Kennedy Shriver National Institute of Child Health and Human Development National Institutes of Health, Bethesda, MD 20892

Corresponding author: Cheng Guan Koay, PhD, Section on Tissue Biophysics and Biomimetics, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bldg 13, Rm 3W16, 13 South Drive, MSC 5772, Bethesda, MD 20892-5772, E-mail: guankoac@mail.nih.gov


A long-standing problem in Magnetic Resonance Imaging (MRI) is the noise-induced bias in the magnitude signals. This problem is particularly pressing in diffusion MRI at high diffusion-weighting. In this paper, we present a three-stage scheme to solve this problem by transforming noisy nonCentral Chi signals to noisy Gaussian signals. A special case of the nonCentral Chi distribution is the Rician distribution. In general, the Gaussian-distributed signals are of interest rather than the Gaussian-derived (e.g., Rayleigh, Rician, and nonCentral Chi) signals because the Gaussian-distributed signals are generally more amenable to statistical treatment through the principle of least squares. Monte Carlo simulations were used to validate the statistical properties of the proposed framework. This scheme opens up the possibility of investigating the low signal regime (or high diffusion-weighting regime in the case of diffusion MRI) that contains potentially important information about biophysical processes and structures of the brain.

Magnetic resonance imaging (MRI) (1) is a rapidly expanding field and a widely used medical imaging modality—possessing many noninvasive techniques capable of probing functional activities (2) and anatomical structures (3–10) of the brain *in vivo*. In quantitative MRI, important parameters of biophysical relevance are typically estimated from a collection of MR signals that are related to one another through a function of one or more experimentally controlled variables. As ever higher sensitivity and specificity to biophysical processes are achieved in MRI through improved spatial or temporal resolution, the adverse effect of noise on the overall accuracy of MRI-based quantitative findings also increases.

MR signals are complex numbers where the real and imaginary components are independently Gaussian distributed (11). The phase of the complex MRI signal is highly sensitive to many experimental factors, e.g., see (11,12), and as such, the magnitude of the complex MR signal is used instead in most quantitative studies. Although several techniques have been proposed to correct the phase error (12–15), the magnitude of the complex MR signal (hereafter, *magnitude MR signal*) remains the most commonly used measure in MRI. While the magnitude MR signal is not affected by the phase error, it is not an optimal estimate of the underlying signal intensity when the signal-to-noise ratio is low (11) because it follows a nonCentral Chi distribution (16,17) rather than a Gaussian distribution. We should note that the Rician distribution (18,19) is a special case of the nonCentral Chi distribution. It is also well known that a Rician distribution (20) reduces to a Rayleigh distribution when the underlying signal intensity is zero, and the first moment of a Rayleigh distribution is usually known as the “noise floor” (21).

It is increasingly apparent that a resolution of the noise-induced bias in the magnitude MR signals could make it possible to gain further insights into the low signal regime that contains potentially important information about intrinsic functional activity (22) and tissue microstructure (3–9). Although several correction methods have been proposed (11,16,19,23,24) to address this problem, these methods do not produce corrected data that are Gaussian distributed.

Assessing Gaussianity of the corrected data is straightforward when the noisy magnitude signals are drawn from the same distribution, e.g., see Figure 1, because the corrected data can then be tested against a single Gaussian distribution. In practice, this type of data is rare. Rather, we usually have MRI data that are drawn from a family of distributions, all of which are characterized by different location parameters (e.g., the location parameter of a Gaussian distribution is the first moment; that of a nonCentral Chi distribution will be pointed out later). For example, each of the noisy magnitude signals of interest may be acquired under a slightly different experimentally controlled condition, so that each noisy magnitude signal is actually drawn from a slightly different distribution. The proposed scheme is the *first method* capable of producing corrected data that are distributed evenly on both the *positive and negative* sides of zero when the signal-to-noise ratio is very close to zero, which is a simple but important criterion for testing the accuracy, or lack thereof, of a correction scheme. We should point out that none of the previously published methods (11,16,19,23,24) satisfies this criterion because these methods cannot produce corrected data that have negative values.

(A) A schematic diagram of the proposed scheme. (B) A schematic diagram of the two possible approaches that can be used to map nonCentral Chi signals to Gaussian signals. The approach using the samples that are drawn from the same distribution (the bottom-left …

In this work, we present a framework for making the magnitude signals Gaussian-distributed. A simple example illustrates the idea behind the proposed framework: suppose the noisy magnitude signals are drawn from a family of nonCentral Chi distributions all of which are characterized by different location parameters but with the same scale parameter. The proposed framework attempts to transform the noisy magnitude signals such that each noisy transformed signal may be thought of as if it were drawn from a Gaussian distribution with a different mean but the same standard deviation. Note that the location and scale parameters that characterize a nonCentral Chi distribution are exactly the mean and the standard deviation of the Gaussian distribution that characterizes the transformed signal.

Three important considerations will have to be taken into account in order to construct such a framework. First, we need a method that can find an estimate of the first moment of the nonCentral Chi distribution from which each datum is drawn. Second, we need a method that can find an estimate of the first moment of the Gaussian distribution if an estimate of the first moment of a nonCentral Chi distribution is provided. Third, we need a method that can find a noisy Gaussian-distributed signal for each of the magnitude signals, provided the first moment of the nonCentral Chi distribution together with the first moment and the standard deviation of the Gaussian distribution are given. Each consideration above constitutes a separate procedure or stage.

Therefore, it is necessary to have a procedure in the first stage that can find an “average value” for each datum. In other words, the first moment of a nonCentral Chi distribution from which the datum is drawn is estimated in the first stage. Once an estimate of the first moment of a nonCentral Chi distribution is known, a procedure in the second stage must be able to produce the “average value” of the underlying signal intensity, which is an estimate of the first moment of a Gaussian distribution. A procedure in the third stage must be able to use each original noisy datum, which is nonCentral Chi-distributed, to find the corresponding transformed noisy signal that is Gaussian-distributed. The schematic representation of the three stages of the proposed framework is shown in Figure 1A.

Specifically, in the first stage, a data smoothing or fitting method may be used to obtain the average values of the noisy magnitude signals. The data may be fitted with parametric functions (e.g., mono- or bi-exponentially decaying functions) or smoothed with a variety of smoothing methods. Although a comparison of various fitting or smoothing methods is of interest, such a comparison, if thoroughly investigated, would take us too far afield. Here, we use a penalized or smoothing spline model (25,26) to obtain the “average values”. The penalized spline model is chosen for its ease of implementation and use. The degree of smoothness is selected based on the method of generalized cross-validation (GCV) (26,27). Again, other methods may be used to select the degree of smoothness, see e.g., (28).
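As an illustration of this first stage, the sketch below fits a penalized spline with a truncated-polynomial basis and selects the penalty weight by GCV. The function name `pspline_fit`, the knot placement, and the grid of candidate penalties are our own illustrative choices, not prescriptions from (25,26):

```python
import numpy as np

def pspline_fit(x, y, degree=4, knots=(0.25, 0.5, 0.75),
                lambdas=np.logspace(-6, 2, 25)):
    """Penalized ("smoothing") spline with a truncated-polynomial basis;
    penalty weight chosen by generalized cross-validation (GCV).
    Illustrative sketch; names and defaults are not from the paper."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    t = (x - x.min()) / (x.max() - x.min())      # rescale to [0, 1] for conditioning
    # design matrix: polynomial part plus truncated powers at the interior knots
    X = np.column_stack([t**j for j in range(degree + 1)] +
                        [np.clip(t - k, 0.0, None)**degree for k in knots])
    n = len(y)
    # only the truncated-power (knot) coefficients are penalized
    P = np.diag([0.0] * (degree + 1) + [1.0] * len(knots))
    best_gcv, best_fit = np.inf, None
    for lam in lambdas:
        A = X.T @ X + lam * P
        beta = np.linalg.solve(A, X.T @ y)
        yhat = X @ beta
        edf = np.trace(X @ np.linalg.solve(A, X.T))   # effective degrees of freedom
        gcv = n * np.sum((y - yhat)**2) / (n - edf)**2
        if gcv < best_gcv:
            best_gcv, best_fit = gcv, yhat
    return best_fit
```

The returned curve plays the role of the “average values” fed to the second stage.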

In the second stage, we propose an iterative method that takes in an “average value” of a noisy magnitude signal as an input and returns an “average value” of the underlying signal intensity as an output. This iterative method is closely related to but different from our previously proposed fixed point formula of the signal-to-noise ratio (SNR) because it is a fixed point formula of the underlying signal intensity, see Figure 1B. Specifically, the present iterative method treats the estimations of the underlying signal intensity and of the Gaussian noise standard deviation (SD) separately rather than simultaneously. The key advantage of such an approach is that there exist excellent methods for estimating the Gaussian noise SD from a much larger sample (29,30). Consequently, a more precise estimate of the Gaussian noise SD results in a more precise estimate of the underlying signal intensity.

In the third stage, the corresponding noisy Gaussian signal of each of the noisy magnitude signals is found through a composition of the inverse cumulative probability function of a Gaussian random variable and the cumulative probability function of a nonCentral Chi random variable. Both the inverse cumulative probability function of a Gaussian random variable and the cumulative probability function of a nonCentral Chi random variable depend on the “average value” of the underlying signal intensity and the Gaussian noise SD. The third stage is exactly a Gaussian random number generator if the input data are Rician-distributed.

The statistical properties of the proposed framework are investigated using Monte Carlo simulations. Experimental data are also used to illustrate the proposed framework.

Since the first stage of the proposed scheme is readily available (25,26), our focus in this paper will be on the latter stages. For completeness and notational consistency, we have included a brief discussion of one-dimensional penalized splines in Appendix A, and of spherical harmonics splines in Appendix B. These spline models share the same matrix structure, and therefore, the computation of this matrix structure is briefly touched on in Appendix C.

The probability density function (PDF) and the cumulative distribution function (CDF) of a nonCentral Chi random variable, *m*, are needed respectively in the second and third stages of the proposed scheme. It is known that magnitude MR signals obtained from an *N*-receiver-coil MRI system follow a nonCentral Chi, $\tilde{\chi}$, distribution of 2*N* degrees of freedom and the corresponding PDF can be expressed as (16,17):

$$p_{\tilde{\chi}}(m\mid \eta ,\sigma_{g},N)\,dm=\frac{m^{N}}{\sigma_{g}^{2}\,\eta^{N-1}}\exp\!\left(-\frac{m^{2}+\eta^{2}}{2\sigma_{g}^{2}}\right)I_{N-1}\!\left(\frac{m\eta}{\sigma_{g}^{2}}\right)dm,\qquad m\ge 0$$

(1)

where the PDF is zero when *m* < 0, η is the underlying (combined) signal intensity (also known as the location parameter of the nonCentral Chi distribution), $\sigma_{g}$ is the Gaussian noise standard deviation, and $I_{N-1}$ is the modified Bessel function of the first kind of order *N* − 1.
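Eq. (1) can be evaluated numerically as follows; `ncchi_pdf` is an illustrative helper name, and the exponentially scaled Bessel function `scipy.special.ive` is used so the evaluation stays stable in log space:

```python
import numpy as np
from scipy.special import ive
from scipy.stats import rice

def ncchi_pdf(m, eta, sigma_g, N):
    """Eq. (1) for m > 0, evaluated in log space. Since ive(v, z) =
    exp(-|z|) I_v(z), we have log I_{N-1}(z) = log ive(N-1, z) + z for z >= 0.
    Illustrative helper; the name is not from the paper."""
    m = np.asarray(m, float)
    z = m * eta / sigma_g**2
    log_p = (N * np.log(m) - 2.0 * np.log(sigma_g) - (N - 1) * np.log(eta)
             - (m**2 + eta**2) / (2.0 * sigma_g**2)
             + np.log(ive(N - 1, z)) + z)
    return np.exp(log_p)
```

For *N* = 1 this reduces to the Rician PDF, which scipy exposes directly as `scipy.stats.rice`.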

The corresponding CDF can be expressed as:

$$P_{\tilde{\chi}}(\alpha \mid \eta ,\sigma_{g},N)=\int_{0}^{\alpha}p_{\tilde{\chi}}(m\mid \eta ,\sigma_{g},N)\,dm.$$

(2)

In practice, it is more convenient to compute Eq. (2) in terms of series representations of the generalized Marcum-Q function (31), $Q_{N}$. It can be shown that Eq. (2) can be simplified to:

$$P_{\tilde{\chi}}(\alpha \mid \eta ,\sigma_{g},N)=1-\int_{\alpha}^{\infty}p_{\tilde{\chi}}(m\mid \eta ,\sigma_{g},N)\,dm=1-Q_{N}(\eta /\sigma_{g},\,\alpha /\sigma_{g}),$$

(3)

where the definition of the generalized Marcum-Q function is:

$$Q_{N}(\lambda ,\gamma )=\frac{1}{\lambda^{N-1}}\int_{\gamma}^{\infty}s^{N}\exp\!\left(-\frac{\lambda^{2}+s^{2}}{2}\right)I_{N-1}(\lambda s)\,ds.$$

(4)
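In numerical work the integral in Eq. (2) need not be computed directly: if *m* follows the nonCentral Chi distribution of Eq. (1), then $(m/\sigma_g)^2$ is noncentral chi-square with 2*N* degrees of freedom and noncentrality $(\eta/\sigma_g)^2$, so $1 - Q_N$ is available from scipy. A minimal sketch (the helper name is ours):

```python
import numpy as np
from scipy.stats import ncx2, rice

def ncchi_cdf(alpha, eta, sigma_g, N):
    """Eqs. (2)-(3) via the noncentral chi-square CDF: (m/sigma_g)^2 has
    df = 2N and noncentrality (eta/sigma_g)^2. Illustrative helper name."""
    a = np.asarray(alpha, float)
    return ncx2.cdf((a / sigma_g)**2, df=2 * N, nc=(eta / sigma_g)**2)
```

For *N* = 1 this agrees with the Rician CDF of `scipy.stats.rice`.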

When the underlying signal is zero, i.e., η = 0, the PDF and the CDF are given by (30):

$$p_{\tilde{\chi}}(m\mid 0,\sigma_{g},N)\,dm=\frac{m^{2N-1}}{2^{N-1}\sigma_{g}^{2N}(N-1)!}\exp\!\left(-\frac{m^{2}}{2\sigma_{g}^{2}}\right)dm,$$

(5)

and

$$P_{\tilde{\chi}}(\alpha \mid 0,\sigma_{g},N)=1-\frac{1}{(N-1)!}\,\Gamma\!\left(N,\,\alpha^{2}/(2\sigma_{g}^{2})\right),$$

(6)

where the incomplete Gamma function is defined as $\Gamma(N,x)=\int_{x}^{\infty}t^{N-1}\exp(-t)\,dt$. The complete Gamma function is Γ(*N*, 0) and is typically written simply as Γ(*N*).
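As a quick numerical check of the η = 0 special case, Eq. (6) is simply the regularized lower incomplete gamma function, which scipy provides (the helper name is ours):

```python
import numpy as np
from scipy.special import gammainc
from scipy.stats import chi

def central_chi_cdf(alpha, sigma_g, N):
    """Eq. (6): for eta = 0 the CDF equals
    1 - Gamma(N, alpha^2/(2 sigma_g^2)) / Gamma(N), i.e. the regularized
    lower incomplete gamma function. Illustrative helper name."""
    return gammainc(N, np.asarray(alpha, float)**2 / (2.0 * sigma_g**2))
```

For *N* = 1 this reduces to the Rayleigh CDF, $1-\exp(-\alpha^2/(2\sigma_g^2))$, consistent with the noise-floor discussion above.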

The derivation of the fixed point formula of the underlying signal intensity, η, which is needed in this work, is closely related to that of the fixed point formula of the signal-to-noise ratio, θ ≡ η/σ_g, shown in our previous work (16). The main difference is in the separation of the estimation of the underlying signal intensity from that of the Gaussian noise SD, σ_g.

Here, we present the derivation of the fixed point formula of η. We begin with the first two moments of a nonCentral Chi distribution, Eq. (1), and they are given by:

$$\langle m\rangle =\sigma_{g}\,\beta_{N}\;{}_{1}F_{1}\!\left(-\tfrac{1}{2},\,N,\,-\frac{\eta^{2}}{2\sigma_{g}^{2}}\right),$$

(7)

and

$$\langle {m}^{2}\rangle ={\eta}^{2}+2N{\sigma}_{g}^{2},$$

(8)

respectively, where $\beta_{N}=\sqrt{\pi /2}\;\frac{(2N-1)!!}{2^{N-1}(N-1)!}$, the double factorial is defined as $n!!=n(n-2)(n-4)\cdots$, and ${}_{1}F_{1}$ is the confluent hypergeometric function.
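Eq. (7) can be evaluated directly with scipy's confluent hypergeometric and double-factorial routines; `ncchi_mean` is our own label for the helper:

```python
import numpy as np
from scipy.special import hyp1f1, factorial, factorial2

def ncchi_mean(eta, sigma_g, N):
    """First moment of a nonCentral Chi variable, Eq. (7):
    <m> = sigma_g * beta_N * 1F1(-1/2, N, -eta^2/(2 sigma_g^2)).
    Illustrative helper; the name is not from the paper."""
    beta_N = (np.sqrt(np.pi / 2.0) * factorial2(2 * N - 1)
              / (2.0**(N - 1) * factorial(N - 1)))
    return sigma_g * beta_N * hyp1f1(-0.5, N, -eta**2 / (2.0 * sigma_g**2))
```

At η = 0 and *N* = 1 this recovers the Rayleigh mean $\sigma_g\sqrt{\pi/2}$, the "noise floor" mentioned earlier.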

The variance of a nonCentral Chi random variable is defined as:

$$\sigma_{\tilde{\chi}}^{2}\equiv \langle m^{2}\rangle -\langle m\rangle^{2}=\xi (\eta \mid \sigma_{g},N)\,\sigma_{g}^{2},$$

(9)

where the scaling factor, ξ, is given by:

$$\xi (\eta \mid \sigma_{g},N)=2N+\frac{\eta^{2}}{\sigma_{g}^{2}}-\left[\beta_{N}\;{}_{1}F_{1}\!\left(-\tfrac{1}{2},\,N,\,-\frac{\eta^{2}}{2\sigma_{g}^{2}}\right)\right]^{2}.$$

(10)

The fixed point formula of the underlying signal intensity can be obtained by substituting the expression in Eq. (8) into Eq. (9). This leads to the following expressions:

$$\eta =g(\eta \mid \langle m\rangle ,\sigma_{g},N)\equiv \sqrt{\langle m\rangle^{2}+\left[\xi (\eta \mid \sigma_{g},N)-2N\right]\sigma_{g}^{2}}.$$

(11)

Note that the implementation of the fixed point formula of η, which is based on Newton’s method of root finding and is described in Appendix D, has important differences compared to that of the fixed point formula of θ ≡ η/σ_g (16).

To find the fixed point estimate, denoted by $\hat{\eta}$, in Eq. (11), ⟨*m*⟩ and σ_g are replaced by their corresponding estimates, denoted by $\hat{m}$ and $\hat{\sigma}_{g}$.

In short, the fixed point formula maps $\hat{m}$ to $\hat{\eta}$. Fixed point formulae are powerful methods of successive approximation because their convergence can be tested under a very simple and general assumption (34). Specifically, let $\hat{\eta}$ be the fixed point that satisfies Eq. (11), i.e., $g(\hat{\eta}\mid \hat{m},\hat{\sigma}_{g},N)=\hat{\eta}$.
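As a concrete sketch of this second stage: given estimates of ⟨*m*⟩ and σ_g, Eq. (11) can be solved for η. Appendix D does this with Newton's method; below, a bracketed root finder (`scipy.optimize.brentq`) is used as a simple stand-in, so this illustrates the fixed point equation rather than the paper's exact implementation, and the function names are ours:

```python
import numpy as np
from scipy.special import hyp1f1, factorial, factorial2
from scipy.optimize import brentq

def xi(eta, sigma_g, N):
    """Scaling factor of Eq. (10)."""
    beta_N = (np.sqrt(np.pi / 2.0) * factorial2(2 * N - 1)
              / (2.0**(N - 1) * factorial(N - 1)))
    f = beta_N * hyp1f1(-0.5, N, -eta**2 / (2.0 * sigma_g**2))
    return 2.0 * N + eta**2 / sigma_g**2 - f**2

def eta_fixed_point(m_bar, sigma_g, N):
    """Solve eta = g(eta | <m>, sigma_g, N) of Eq. (11) given the estimates
    m_bar and sigma_g. Stand-in root finder, not the Newton scheme of
    Appendix D."""
    def resid(eta):
        g_sq = m_bar**2 + (xi(eta, sigma_g, N) - 2.0 * N) * sigma_g**2
        return np.sqrt(max(g_sq, 0.0)) - eta
    hi = 10.0 * (m_bar + sigma_g)   # upper bracket; the root lies below <m> scale
    return brentq(resid, 0.0, hi)
```

Feeding the exact first moment of Eq. (7) back through this solver recovers the underlying η.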

Mapping a nonCentral Chi random variable, *m*, to a Gaussian random variable, *x*, can be achieved by a composition of the inverse cumulative distribution function of a Gaussian random variable and the cumulative probability function of a nonCentral Chi random variable, i.e.,

$$x=P_{G}^{-1}\!\left(P_{\tilde{\chi}}(m\mid \eta ,\sigma_{g},N)\;\middle|\;\eta ,\sigma_{g}\right),$$

(12)

where the inverse cumulative distribution function of a Gaussian random variable is given by

$$P_{G}^{-1}(y\mid \eta ,\sigma_{g})=\eta +\sigma_{g}\sqrt{2}\,\mathrm{erf}^{-1}(2y-1).$$

(13)

Note that $\mathrm{erf}^{-1}$ is the inverse of the error function. We should mention that, in practice, an outlier-rejection step is recommended in Eq. (12). Specifically, we shall identify *x* in Eq. (12) as an outlier if the following inequalities do not hold: $\alpha/2\le P_{\tilde{\chi}}(m\mid \eta ,\sigma_{g},N)\le 1-\alpha/2$, where α here denotes a preselected significance level rather than the integration limit of Eq. (2).
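The third stage, Eqs. (12)–(13), reduces to a few lines when η and σ_g are treated as known; in the full scheme they come from the first two stages. The function name is ours:

```python
import numpy as np
from scipy.stats import ncx2, norm

def ncchi_to_gaussian(m, eta, sigma_g, N):
    """Eqs. (12)-(13): compose the nonCentral Chi CDF with the inverse
    Gaussian CDF so each magnitude sample maps to a Gaussian sample with
    mean eta and SD sigma_g. Illustrative helper name."""
    u = ncx2.cdf((np.asarray(m, float) / sigma_g)**2,
                 df=2 * N, nc=(eta / sigma_g)**2)
    return norm.ppf(u, loc=eta, scale=sigma_g)

# demonstration with eta and sigma_g treated as known
rng = np.random.default_rng(2)
eta, sigma_g = 25.0, 50.0
e1, e2 = rng.normal(0.0, sigma_g, size=(2, 200000))
m = np.sqrt((eta + e1)**2 + e2**2)        # Rician samples, N = 1
x = ncchi_to_gaussian(m, eta, sigma_g, 1)
```

Note that the transformed samples take negative values, the criterion emphasized earlier that nonnegative correction schemes cannot satisfy.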

The method of mapping an arbitrary distribution to a Gaussian distribution is well known, e.g., (35,36). In general, however, this type of mapping is of limited value without *a priori* knowledge of both η and σ_g, except for mappings from a Gaussian-derived distribution to a Gaussian distribution in which η and σ_g can be estimated from the measured data themselves, as in the proposed framework.

The validity of the proposed scheme is analyzed with several simulation tests.

We will begin with the simplest case—that is, the mapping of noisy nonCentral Chi signals, which are drawn from the same distribution characterized by constant η and σ_g, to noisy Gaussian signals. Without loss of generality, we take the Rician case (*N* = 1) with η = 25 and σ_g = 50.

This type of data in which samples are drawn from the same distribution is rare in practice but is useful for illustrating the basic idea of the mapping between nonCentral Chi and Gaussian distributions. Note that this type of data is not an ordered sequence, and therefore, does not require a smoothing spline to estimate the “average value”—the sample mean of the data is sufficient in this case.

We should also note that the Gaussian noise SD cannot be estimated from this type of data using the noise variance estimation techniques discussed in (11,29,30,32,33) because there is no “background” in this type of data to estimate noise variance. Fortunately, other approaches can estimate both the underlying signal intensity and the Gaussian noise SD. Here, we note two approaches—our previously proposed analytically exact scheme (16), and the maximum likelihood approach as discussed in (37). One of the notable differences between these two approaches is that the former is a 1-D optimization procedure while the latter is a 2-D optimization procedure.

In this example, we will use the analytically exact scheme (16) to estimate both the underlying signal intensity and the Gaussian noise SD. Figure 2A shows the histogram of 20000 random samples that were drawn from a Rician distribution with η = 25 and σ_g = 50 (or η/σ_g = 0.5).

(A) Histogram of 20000 random signals generated from a Rician distribution. (B) Histogram of the transformed signals.

Based on the estimated values of η and σ_g, the noisy Rician samples were then transformed to noisy Gaussian samples through the third stage of the proposed scheme. The histogram of the transformed signals is shown in Figure 2B. The sample mean and the standard deviation of these random transformed samples were 25.72 and 50.01, respectively.

In this and the next examples, we investigate the statistical properties of the proposed scheme with data generated from a simple exponentially decaying model of the form $s_{0}e^{-bD}$, taken from diffusion-weighted MRI.

Data generated from an exponentially decaying model are particularly useful for testing the proposed scheme because each measurement obtained at a different b-value is in fact drawn from a different distribution. Since there is only one measurement at each b-value, using the sample mean as the “average value” at each b-value would be too variable. Therefore, the “average value” at each b-value has to be estimated from a smoothing method such as the penalized spline where a collection of measurements at different b-values is treated as a whole to estimate the “average values” at all b-values.

Here, we generated 50000 sets of 30 measurements (Rician signals) from the expression $\sqrt{(s_{0}\exp(-bD)+\epsilon_{1})^{2}+\epsilon_{2}^{2}}$, with $s_{0} = 1000$, $D = 2.1\times 10^{-3}\ mm^{2}/s$, and the ε’s Gaussian random variables with mean zero and standard deviation 100.

The 30 measurements are sampled uniformly from a b-value of 50 s/mm^{2} to 1993 s/mm^{2} in steps of 67 s/mm^{2}. Figure 3A shows the sample mean and the sample standard deviation of the 50000 measurements at each b-value. The error bar denotes one standard deviation away in both directions from the sample mean. The blue curves in Figures 3A and 3B are the expected value computed from the first moment of the Rician random variables.
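The synthetic data of this example can be generated as follows; this is a sketch matching the stated parameters, and the function name is ours:

```python
import numpy as np

s0, D, sigma_g = 1000.0, 2.1e-3, 100.0       # parameters from the text
b = np.arange(50.0, 1994.0, 67.0)            # 30 b-values: 50, 117, ..., 1993 s/mm^2

def rician_dwi_sets(n_sets, rng):
    """n_sets x 30 noisy magnitude measurements of the decay s0*exp(-b*D);
    each entry is drawn from a different Rician distribution.
    Illustrative helper name."""
    e1 = rng.normal(0.0, sigma_g, size=(n_sets, b.size))
    e2 = rng.normal(0.0, sigma_g, size=(n_sets, b.size))
    return np.sqrt((s0 * np.exp(-b * D) + e1)**2 + e2**2)
```

At high b-values the sample mean of these magnitude signals sits well above the true decay, which is exactly the noise-floor bias the scheme is designed to remove.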

(A) The expected value of the magnitude signal evaluated with a known Gaussian noise SD of 100 units is shown as a blue curve, and the gray box and the error bar at each b-value represent the sample mean and the sample standard deviation that are obtained …

Each set of 30 measurements is analyzed through the proposed scheme using the penalized spline with truncated polynomial basis of degree 4 and with 3 knots at {452, 988, 1457} s/mm^{2}. The results of these 50000 sets for each stage of the proposed scheme are shown in Figures 3B, 3C and 3D. Figures 3B and 3C show the sample mean and the sample standard deviation of the spline estimates and of the fixed point estimates, respectively. The red curves in Figures 3C and 3D are the ground truth, i.e., *s*_{0} exp(−*bD*). Figures 3D and 3E show the sample mean and the sample standard deviation of the transformed signals obtained through the proposed framework and the method of Gudbjartsson and Patz (19), respectively.

In Figure 3D, it is clear that the sample mean at each b-value is close to the ground truth value but the variance (or SD) increases as the SNR decreases. The increase in SD is mainly due to a lack of sufficient samples because the ideal or expected behavior is that the variance should be constant (Figure 1B). As an example, we compare the result from the above simulation to that of another simulation in which the number of sampling points on the b-value axis was increased to 98, see Figure 4. It is clear from Figure 4 that the Gaussian noise SD estimates of the 98-point fit are collectively much closer to the ground truth value of 100 (arbitrary units) than those of the 30-point fit.

The same exponentially decaying model in diffusion-weighted MRI and the same set of parameters, *D* = 2.1×10^{−3} *mm*^{2}/*s* and the Gaussian noise SD of 100, are used in this example. Here, we have only one set of 2476 measurements sampled from 50 s/mm^{2} to 5000 s/mm^{2} with a gap of 2 s/mm^{2}. The penalized spline with a truncated polynomial basis of degree 4 and with 5 knots at {872, 1698, 2524, 3348, 4174} s/mm^{2} was used in this example.

The goal of this example is to show the qualitative features of the noisy Rician signals and of the transformed signals obtained through the proposed framework and the method of Gudbjartsson and Patz (19). We also compare and contrast the results from the parametric fits (mono-exponential and bi-exponential fits) to both the noisy signals and the transformed signals.

Figure 5A shows the noisy Rician signals. Figures 5B and 5C show the transformed signals obtained through the proposed framework and the method of Gudbjartsson and Patz (19), respectively. The results of both a mono-exponential fit and a bi-exponential fit to the noisy Rician signals are shown in Figure 5D. It is interesting to note that a bi-exponential model fits the noisy Rician signals rather well—the bi-exponential model is almost superimposed upon the expected curve. Figure 5E shows the result of a mono-exponential fit to the transformed signals obtained through the proposed framework; the resultant curve is close to the ground truth. The results of both a mono-exponential fit and a bi-exponential fit to the transformed signals obtained through the method of Gudbjartsson and Patz (19) are shown in Figure 5F. The estimates of the parameters, (*s*_{0}, *D*), obtained through a mono-exponential fit of the noisy Rician signals, the transformed signals based on the proposed framework, and the corrected signals based on the method of Gudbjartsson and Patz (19) were found to be (597.4, 7.3 × 10^{−4} mm^{2}/s), (966.4, 2.0 × 10^{−3} mm^{2}/s), and (774.3, 1.3 × 10^{−3} mm^{2}/s), respectively. In the bi-exponential fit of the noisy Rician signals and of the corrected signals based on the method of Gudbjartsson and Patz (19), we found (*ŝ*_{0} = 1036.5, *D̂*_{1} = −1.8×10^{−5} mm^{2}/s, *D̂*_{2} = 2.7×10^{−3} mm^{2}/s, 0.11) and (*ŝ*_{0} = 1040.9, *D̂*_{1} = −3.0×10^{−5} mm^{2}/s, *D̂*_{2} = 2.7×10^{−3} mm^{2}/s, 0.087), respectively. Note that the last item in each of the lists above is the (volume) fraction associated with *D̂*_{1}.

In this example, we will illustrate the proposed scheme with data sampled on a unit sphere. The spherical harmonic spline model will be used to transform the nonCentral Chi signals to Gaussian signals. A brief introduction to the spherical spline is provided in Appendix B.

For simplicity, the noisy Rician signals will be generated from a single tensor model according to the following expression,
$\sqrt{{({s}_{0}exp(-b{\mathbf{g}}^{T}\mathbf{Dg})+{\epsilon}_{1})}^{2}+{\epsilon}_{2}^{2}}$ where *s*_{0} = 1000, **D** is the diffusion tensor, **g** is a unit gradient vector, *T* denotes matrix or vector transposition, and ε’s are the Gaussian random variables with mean zero and SD of 100. Further, the synthetic tensor is given by:

$$\mathbf{D}=\left(\begin{array}{ccc}9.5& 1.1& -1.6\\ 1.1& 6.7& -0.5\\ -1.6& -0.5& 4.8\end{array}\right)\times {10}^{-4}m{m}^{2}/s.$$

For visualization purposes, we first parametrize the unit gradient vector in terms of spherical coordinates, i.e., **g** = [sin(θ)cos(φ), sin(θ)sin(φ), cos(θ)]^{T}. With this parametrization, we can plot the underlying signal intensity and the expected value of the Rician random variables as functions of the spherical coordinates. Figure 6A shows the underlying signal intensity as a function of the spherical coordinates at a b-value of 3000 s/mm^{2}.
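The single tensor signal and its noisy Rician samples can be generated as follows; the function names are ours and the parameters are those stated above:

```python
import numpy as np

s0, sigma_g, b = 1000.0, 100.0, 3000.0
D = np.array([[ 9.5,  1.1, -1.6],
              [ 1.1,  6.7, -0.5],
              [-1.6, -0.5,  4.8]]) * 1e-4     # mm^2/s, the synthetic tensor above

def tensor_signal(theta, phi):
    """Noise-free single-tensor signal s0 * exp(-b g^T D g), with the unit
    gradient g parametrized by spherical coordinates (theta, phi).
    Illustrative helper name."""
    g = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return s0 * np.exp(-b * (g @ D @ g))

def rician_sample(theta, phi, rng):
    """One noisy Rician measurement along the direction (theta, phi)."""
    e1, e2 = rng.normal(0.0, sigma_g, size=2)
    return np.hypot(tensor_signal(theta, phi) + e1, e2)
```

Along the z axis, for example, $\mathbf{g}^T\mathbf{D}\mathbf{g}$ is simply the tensor element $D_{zz} = 4.8\times 10^{-4}\ mm^2/s$.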

(A) The underlying signal intensity from a single tensor model as a function of spherical coordinates evaluated with a constant b-value of 3000 s/mm^{2}. (B) The expected value of the Rician signals (with the known Gaussian noise SD of 100) as a function …

Similar to the one-dimensional case, we chose 30 unit gradient vectors that are uniformly distributed on the sphere (based on the electrostatic repulsion scheme (38)), and the spherical coordinates of each of the gradient vectors are color-coded in Figure 6C. Figure 6D shows the color-coded underlying signal intensity in ascending order and their respective expected values (the first moment of the Rician random variables) with a Gaussian noise SD of 100. There are 50000 sets of 30 measurements and each measurement in the set is a sample on the unit sphere obtained through one of the gradient vectors. The sample mean and the sample SD of the noisy Rician signals of all the spherical coordinates are shown in Figure 6E. Finally, each set of measurements is analyzed through both the proposed scheme using the spherical spline with spherical harmonics of even degree up to *l* = 6 and the method of Gudbjartsson and Patz. The results are shown in Figures 6F and 6G, respectively.

It is clear from the results shown in Figure 6F that the sample means are close to the ground truth values but the variance increases slightly as the SNR decreases. The increase in variance is to be expected since only 30 gradient directions are used. More importantly, we can expect the variance to get closer to a constant value that is independent of the SNR level as the size of the samples becomes larger.

We illustrate the performance of our approach on an excised rat hippocampus data set. The data set contains a series of diffusion-weighted images obtained by varying the diffusion gradient strength. The rat was perfusion-fixed with 4% paraformaldehyde in phosphate buffered-saline (PBS), the hippocampus was dissected and kept in fixative for more than 8 days. Prior to imaging, the sample was washed overnight in PBS. The imaging was performed using a 14.1T narrow-bore spectrometer where a pulsed gradient stimulated echo pulse sequence was employed. The imaging parameters were: TE=12.6ms, TR=1000ms, resolution=(78×78×500)μm^{3}, matrix size=(64×64×3), number of repetitions=4, diffusion gradient pulse duration (δ)=2ms, and diffusion gradient separation (Δ)=24.54ms. The data set contains a total of 33 images with different diffusion gradient strengths increasing from 0 to 2935mT/m in steps of 91.75mT/m. One diffusion weighted image is shown in Figure 7A.

Experimental data. (A) A diffusion-weighted image of a hippocampus with a red square indicating the four different pixel locations where the noisy magnitude signals of each pixel (with different b-values) are analyzed using the proposed method. The results …

Four neighboring pixels indicated with a red square were selected for further analyses. The noisy magnitude signals and the noisy transformed signals of each of the pixels as a function of b-value are shown in Figures 7B–7E as blue and red dots, respectively. The blue curve in each of the panels is obtained through a least squares fit of a bi-exponential function to the noisy magnitude signals. The red curve in each of the panels is obtained through a least squares fit of a bi-exponential function to the noisy transformed signals produced by the proposed framework. Note that the penalized spline with a truncated polynomial basis of degree 4 and with 4 knots was used in this example. The estimated Gaussian noise standard deviation was 0.88. Further, the estimated parameters obtained from a least squares fit of a bi-exponential function to both the noisy magnitude signals and noisy transformed signals are shown below:

**Bi-exponential fit to the noisy magnitude signals**

| | *ŝ*_{0} (a.u.) | *D̂*_{1} (×10^{−5} mm^{2}/s) | *D̂*_{2} (×10^{−4} mm^{2}/s) | Volume fraction associated with *D̂*_{1} |
|---|---|---|---|---|
| Fig. 7B | 62.48 | 0.82 | 5.3 | 0.027 |
| Fig. 7C | 63.10 | 2.0 | 6.2 | 0.037 |
| Fig. 7D | 64.28 | 0.81 | 6.0 | 0.026 |
| Fig. 7E | 64.36 | 1.4 | 5.5 | 0.027 |

**Bi-exponential fit to the noisy transformed signals**

| | *ŝ*_{0} (a.u.) | *D̂*_{1} (×10^{−5} mm^{2}/s) | *D̂*_{2} (×10^{−4} mm^{2}/s) | Volume fraction associated with *D̂*_{1} |
|---|---|---|---|---|
| Fig. 7B | 62.6 | 9.0 | 5.5 | 0.060 |
| Fig. 7C | 63.3 | 10.9 | 6.6 | 0.077 |
| Fig. 7D | 64.4 | 11.3 | 6.2 | 0.056 |
| Fig. 7E | 64.4 | 9.9 | 5.7 | 0.048 |

If both the estimated Gaussian noise SD and each of the red curves are assumed to be the ground truth values, then the expected value (or the first moment) of a Rician distribution as a function of b-value can be computed; it is shown in dark gray. These expected values are in good agreement with the blue curve, which indicates that the red curve is a good approximation of the underlying signal intensities.

In this work, our main objective is to demonstrate that nonCentral Chi signals can be transformed into Gaussian signals and to present, as clearly as possible, the basic ideas as well as the nuts and bolts of the proposed scheme.

This paper can be thought of as a sequel to but independent of our recent paper on the probabilistic and self-consistent approach to the identification and estimation of noise (PIESNO) (30) because the noise estimate on which the proposed framework depends can be estimated through other techniques. The fixed point formula of the underlying signal intensity and the technique proposed in (30) represent our major attempt to decouple the fixed point formula of SNR (16) into two self-consistent approaches for estimating the underlying signal and the Gaussian noise SD.

The advantage of this decoupling is substantial because the estimation of the Gaussian noise SD can be obtained from a much larger collection of samples (30). As a consequence, the precision of the Gaussian noise SD estimate will be significantly increased, and in turn, the precision of the underlying signal intensity estimate will also be increased. As discussed above, the decoupling is more useful and practical than the fixed point formula of SNR because we do not usually have many data that are drawn from the same distribution (Fig. 1B). It is interesting to note that the way in which the present scheme is realized is due in part to this practical constraint.

The combination of these stages presented here is, to the best of our knowledge, unique and novel. Moreover, the formulation of the second stage is conceptually very different from our previous approach (16), see Figure 1B.

The first and third stages of the proposed scheme are well known, but these stages, alone or together, are not sufficient for mapping nonCentral Chi signals to Gaussian signals without the second stage. The three stages used in the proposed scheme in a sense form an irreducible set of steps that is necessary to map noisy nonCentral Chi signals to noisy Gaussian signals. While different fitting or smoothing methods may be used in the first stage, the last two stages are strictly mathematical and fixed, even though there exist several different iteration schemes for finding the fixed point of the underlying signal intensity, see Appendix D.

In the second stage, we should point out that the suggested modification to the fixed point formula of the underlying signal intensity for the special case in which the “average value” of the magnitude signals is below the noise floor can be further improved. Although we have provided a theoretical justification for this modification, we believe further studies are needed to investigate other approaches to find the fixed point estimate for this particular situation.

The examples illustrated above clearly show the feasibility and effectiveness of the proposed scheme in mapping noisy magnitude signals to noisy signal intensities. The proposed scheme can be extended to transforming any Gaussian-derived noisy signals, e.g., Rayleigh, Rician, nonCentral Chi, and nonCentral Chi-squared distributed signals, to noisy Gaussian signals by finding the specific fixed point formula used in the second stage.

The basic idea of our approach is general and can be easily adapted to many MRI and non-MRI applications, e.g., the Laser Interferometric Gravitational Wave Observatory (LIGO) (39,40) and communication systems (31), by selecting an appropriate data smoothing method that is optimal for the application-specific sampling space. For example, the penalized spline model or the wavelet smoothing spline may be useful in the analysis of functional MRI data, while spherical splines are particularly useful for diffusion tensor imaging and high angular resolution diffusion imaging techniques (3–9,21). We should also point out that some algorithms of least squares estimation may need to be modified in order to handle negative values in the transformed data. For example, the nonnegative least squares approach, e.g., (41), may be needed to analyze the transformed signals.
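As a minimal sketch of the nonnegative least squares idea mentioned above: the transformed (Gaussian-noise) signals can dip below zero, and NNLS accepts such negative data values while constraining the estimated component amplitudes to be nonnegative. The exponential dictionary and all numbers below are hypothetical illustrations, not part of the original analysis.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical fixed dictionary of decay rates (arbitrary units).
rng = np.random.default_rng(1)
b = np.linspace(0.0, 8.0, 80)
decay_rates = np.array([0.1, 0.3, 0.6, 1.0])
A = np.exp(-np.outer(b, decay_rates))            # 80 x 4 design matrix

true_amp = np.array([5.0, 0.0, 20.0, 0.0])
y = A @ true_amp + rng.normal(0.0, 0.3, b.size)  # some samples may be negative

# NNLS: minimize ||A x - y|| subject to x >= 0 elementwise.
amp_hat, residual_norm = nnls(A, y)
```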

Spline models are known for their flexibility in capturing unknown trends in the data, but this flexibility comes at the cost of a slightly higher susceptibility to noise, such as spurious oscillatory trends in the spline estimates. Therefore, optimal performance cannot be expected of any spline or regression model when the number of samples is very small, and simulation studies may be needed to get an initial assessment of the number of samples needed for a particular experimental design. In this work, the GCV function was used as a smoothing criterion because it has several desirable properties, the most notable of which is that as the number of samples increases, the spline estimate obtained via the GCV becomes closer to the estimate that is obtained by minimizing the mean square error between the estimate and the unknown ground truth (27). Finally, the spurious trends in the spline estimates mentioned above can be partially removed if the transformed signals are fitted with some parametric function based on an *a priori* physical or mathematical model that is less flexible than the smoothing spline, e.g., mono-, bi- or tri-exponential functions for one-dimensional diffusion data or the diffusion tensor model for three-dimensional diffusion data.

In quantitative MRI, anatomically or physiologically relevant parameters are usually estimated from a least squares model. As noted in the introduction, the Gaussian-distributed noisy signals are of interest here rather than the Gaussian-derived random signals because the Gaussian-distributed noisy signals are generally more amenable to statistical treatment based on the principle of least squares, e.g., (42–44). It is important to point out that one of the basic assumptions in a least squares model is that random errors follow a Gaussian distribution. The principle of least squares is very powerful because of its mathematical tractability not only in parameter estimation but also in hypothesis testing and confidence interval estimation. Further, the least squares and maximum likelihood estimators are equivalent under the assumption of normality of random errors (45).

In this work, we have presented a novel approach for transforming noisy nonCentral Chi signals to noisy Gaussian signals, thus making least squares approaches uniformly applicable for analyzing MRI data. The present approach is a major advance in facilitating and improving all subsequent data analysis and processing steps in a quantitative MRI pipeline.

We are grateful to Liz Salak for reviewing the paper. C. G. Koay was supported in part by the *Eunice Kennedy Shriver* National Institute of Child Health and Human Development, the National Institute on Drug Abuse, the National Institute of Mental Health, and the National Institute of Neurological Disorders and Stroke as part of the NIH MRI Study of Normal Pediatric Brain Development with supplemental funding from the NIH Neuroscience Blueprint. We would like to thank Drs. Timothy M. Shepherd and Stephen J. Blackband for the MRI data set.

A penalized spline function with a truncated polynomial basis (25) of degree *p* and *K* knots at {κ_{1}, …, κ* _{K}*} is given by:

$$f(x)={\beta}_{0}+\sum _{i=1}^{p}{\beta}_{i}{x}^{i}+\sum _{j=1}^{K}{\beta}_{p+j}{(x-{\kappa}_{j})}_{+}^{p},$$

(A1)

where the operation, (*x*)_{+}, returns *x* if *x* > 0, and zero otherwise;
${(x-{\kappa}_{j})}_{+}^{p}$ are the truncated power basis functions, and κ* _{j}* are the knots.

If there are *n* observations, {*y*_{1}, …, *y _{n}*}, sampled at {*x*_{1}, …, *x _{n}*}, then the spline model can be expressed in matrix form as:

$$\mathbf{y}=\mathbf{X}\mathbf{\beta},$$

(A2)

where the design matrix, **X**, is given by:

$$\left(\begin{array}{ccccccc}1& {x}_{1}& \cdots & {x}_{1}^{p}& {({x}_{1}-{\kappa}_{1})}_{+}^{p}& \cdots & {({x}_{1}-{\kappa}_{K})}_{+}^{p}\\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 1& {x}_{n}& \cdots & {x}_{n}^{p}& {({x}_{n}-{\kappa}_{1})}_{+}^{p}& \cdots & {({x}_{n}-{\kappa}_{K})}_{+}^{p}\end{array}\right).$$

(A3)

In practice, we usually normalize the coordinates, {*x*_{1}, …, *x _{n}*}, by the maximum of the absolute value of the elements in {*x*_{1}, …, *x _{n}*}.
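The design matrix of Eq. (A3) can be assembled in a few lines; this is a sketch with assumed function names and knot placements (degree 4 with 4 knots, matching the example in the text, on coordinates taken to be pre-normalized to [0, 1]).

```python
import numpy as np

def truncated_poly_design(x, degree, knots):
    """Design matrix of Eq. (A3): columns 1, x, ..., x^p followed by the
    truncated power functions (x - kappa_j)_+^p, one column per knot."""
    x = np.asarray(x, dtype=float)
    poly = np.vander(x, degree + 1, increasing=True)            # 1, x, ..., x^p
    trunc = np.maximum(x[:, None] - np.asarray(knots, dtype=float), 0.0) ** degree
    return np.hstack([poly, trunc])

# Example: degree 4 with 4 knots, coordinates assumed normalized to [0, 1].
x = np.linspace(0.0, 1.0, 50)
knots = np.linspace(0.2, 0.8, 4)
X = truncated_poly_design(x, 4, knots)   # shape (50, (4 + 1) + 4) = (50, 9)
```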

In the ordinary least squares estimation, the goal is to find **β** that minimizes ||**y** − **Xβ**||^{2} while, in the penalized spline estimation, the goal is to find **β** that minimizes

$${\Vert \mathbf{y}-\mathbf{X}\mathbf{\beta}\Vert}^{2}+\lambda {\mathbf{\beta}}^{T}\mathbf{D}\mathbf{\beta},$$

(A4)

where *T* denotes matrix or vector transposition, **D** is a diagonal matrix whose first *p* + 1 diagonal elements are zero and whose remaining diagonal elements are unity, and λ is the penalty parameter (or the smoothing parameter). The smoothed observation vector, **ŷ**_{λ}, estimated from the penalized spline can be expressed as follows:

$${\widehat{\mathbf{y}}}_{\lambda}={\mathbf{S}}_{\lambda}\mathbf{y},$$

(A5)

where

$${\mathbf{S}}_{\lambda}=\mathbf{X}{({\mathbf{X}}^{T}\mathbf{X}+\lambda \mathbf{D})}^{-1}{\mathbf{X}}^{T}$$

(A6)

is known as the smoother matrix.

The procedure presented thus far does not provide a means to find an optimal λ. Here, we use the GCV function (27) to select an optimal λ, which will be denoted by λ* _{GCV}*; note that λ* _{GCV}* is the minimizer of the GCV function:

$$\mathit{GCV}(\lambda )=\mathit{RSS}(\lambda )/{(1-tr({\mathbf{S}}_{\lambda})/n)}^{2},$$

(A7)

where *RSS*(λ) = ||**y** − **ŷ**_{λ}||^{2} is the residual sum of squares, *tr* denotes the matrix trace operation, and *n* is the number of observations. For a numerically stable implementation of the penalized spline estimation, see Appendix C.
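Putting Eqs. (A4)–(A7) together, a direct (non-optimized) sketch of GCV-based smoothing looks like the following. The data, basis, and knot choices are illustrative assumptions; the numerically stable QR/SVD implementation of Appendix C is deliberately not used here, in favor of a literal transcription of the formulas.

```python
import numpy as np

def smoother_matrix(X, lam, p):
    """S_lam = X (X^T X + lam D)^(-1) X^T, with D as in Eq. (A4): the first
    p + 1 diagonal entries of D are zero and the rest are one."""
    d = np.ones(X.shape[1])
    d[:p + 1] = 0.0
    return X @ np.linalg.solve(X.T @ X + lam * np.diag(d), X.T)

def gcv(y, X, lam, p):
    """GCV(lam) = RSS(lam) / (1 - tr(S_lam)/n)^2, as in Eq. (A7)."""
    S = smoother_matrix(X, lam, p)
    rss = np.sum((y - S @ y) ** 2)
    return rss / (1.0 - np.trace(S) / y.size) ** 2

# Grid search over lambda on a synthetic noisy curve (degree-3 truncated
# polynomial basis; all data below are illustrative).
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 80)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, x.size)
p = 3
knots = np.linspace(0.1, 0.9, 8)
X = np.hstack([np.vander(x, p + 1, increasing=True),
               np.maximum(x[:, None] - knots, 0.0) ** p])
lams = np.logspace(-8.0, 2.0, 41)
lam_gcv = lams[np.argmin([gcv(y, X, lam, p) for lam in lams])]
y_hat = smoother_matrix(X, lam_gcv, p) @ y
```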

According to the expansion theorem of the spherical harmonics (46), any continuous function, *f* (θ, φ), on the unit sphere together with continuous derivatives up to second order can be expanded in terms of the Laplace series of the spherical harmonics:

$$f(\theta ,\phi )=\sum _{l=0}^{\infty}\sum _{m=-l}^{l}{\beta}_{l}^{m}\phantom{\rule{0.38889em}{0ex}}{Y}_{l}^{m}(\theta ,\phi )$$

(B1)

where
${Y}_{l}^{m}(\theta ,\phi )$ is the spherical harmonic of *l*^{th} degree and of *m*^{th} order. The spherical harmonic can be expressed as a real rather than complex function, and this is given by (9,46):

$${Y}_{l}^{m}(\theta ,\phi )=\{\begin{array}{c}-\sqrt{{\scriptstyle \frac{2l+1}{2\pi}}{\scriptstyle \frac{(l+m)!}{(l-m)!}}}sin(m\phantom{\rule{0.16667em}{0ex}}\phi ){P}_{l}^{-m}(cos(\theta )):-l\le m\le -1\\ \sqrt{{\scriptstyle \frac{2l+1}{4\pi}}}{P}_{l}^{m}(cos(\theta )):m=0\\ \sqrt{{\scriptstyle \frac{2l+1}{2\pi}}{\scriptstyle \frac{(l-m)!}{(l+m)!}}}cos(m\phantom{\rule{0.16667em}{0ex}}\phi ){P}_{l}^{m}(cos(\theta )):1\le m\le l\end{array}.$$

Note that
${P}_{l}^{m}$ is the associated Legendre polynomial of *m*^{th} order, and the arguments of the spherical harmonic function are defined within these intervals: 0 ≤ θ < π and 0 ≤ φ < 2π.
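The three-case definition above translates directly into code. The sketch below is an assumed implementation built on SciPy's associated Legendre function (which includes the Condon–Shortley phase); the function name is hypothetical.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv   # associated Legendre function P_l^m

def real_sph_harm(l, m, theta, phi):
    """Real spherical harmonic Y_l^m(theta, phi), following the three-case
    definition above: sin(m*phi) terms for m < 0, cos(m*phi) terms for m > 0."""
    if m < 0:
        norm = np.sqrt((2 * l + 1) / (2 * np.pi)
                       * factorial(l + m) / factorial(l - m))
        return -norm * np.sin(m * phi) * lpmv(-m, l, np.cos(theta))
    if m == 0:
        return np.sqrt((2 * l + 1) / (4 * np.pi)) * lpmv(0, l, np.cos(theta))
    norm = np.sqrt((2 * l + 1) / (2 * np.pi)
                   * factorial(l - m) / factorial(l + m))
    return norm * np.cos(m * phi) * lpmv(m, l, np.cos(theta))
```

As a sanity check, this reproduces the familiar closed forms, e.g., Y_0^0 = 1/(2√π) and Y_1^0 = √(3/4π) cos θ.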

The smoothing spherical spline (9,26) is built on the Laplace series with a finite number of terms as well as on the following linear matrix structure:

$$\mathbf{y}=\mathbf{X}\mathbf{\beta},$$

(B2)

where **y** is an array of measurements sampled at {(θ_{1}, φ_{1}), …, (θ* _{n}*, φ* _{n}*)}, **X** is the design matrix whose columns are the spherical harmonic basis functions evaluated at the sampled points, and **β** is the coefficient vector:

$$\mathbf{\beta}={[{\beta}_{0}^{0},{\beta}_{1}^{-1},{\beta}_{1}^{0},{\beta}_{1}^{1},{\beta}_{2}^{-2},\dots ,{\beta}_{2}^{2},\cdots ,{\beta}_{{l}_{max}}^{{l}_{max}}]}^{T}.$$

(B3)

The goal in the smoothing spherical spline estimation (9,26) is to find **β** that minimizes

$${\Vert \mathbf{y}-\mathbf{X}\mathbf{\beta}\Vert}^{2}+\lambda {\mathbf{\beta}}^{T}\mathbf{D}\mathbf{\beta},$$

(B4)

where **D** is a diagonal matrix in which each diagonal element takes on the value *l*^{2} (*l* + 1)^{2}, where *l* is the degree associated with the corresponding element,
${\beta}_{l}^{m}$, in **β**.

The solution of the above estimation has the same matrix structure as that of the penalized spline estimation in Appendix A. Note that in diffusion MRI, only spherical harmonics of even degree are of interest because of the assumption that the diffusion process has antipodal symmetry. Therefore, Eq. (B3) has to be modified accordingly.
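The even-degree restriction amounts to re-enumerating the coefficient ordering of Eq. (B3). A small sketch of that enumeration (the helper name is an assumption):

```python
def even_degree_indices(l_max):
    """(l, m) index pairs of Eq. (B3) restricted to even degrees l, keeping
    the same ordering convention: m runs from -l to l within each degree."""
    return [(l, m) for l in range(0, l_max + 1, 2) for m in range(-l, l + 1)]
```

For example, with l_max = 4 this yields 1 + 5 + 9 = 15 basis functions instead of the 25 obtained when odd degrees are included.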

The key computational problem in penalized spline estimation is to find an efficient matrix decomposition of the smoother matrix:

$${\mathbf{S}}_{\lambda}=\mathbf{X}{({\mathbf{X}}^{T}\mathbf{X}+\lambda \mathbf{D})}^{-1}{\mathbf{X}}^{T}.$$

(C1)

Our approach in computing the smoother matrix is slightly different from that of (25) in that we use the QR decomposition to factor **X** rather than the Cholesky decomposition to factor **X**^{T}**X**.

Let the QR decomposition of **X** be **Q R** where **Q** is an orthogonal matrix, i.e., **Q**^{T}**Q** = **I**, and **R** is an upper triangular matrix. Note that **I** is the identity matrix. Substituting **Q R** into Eq. (C1), we have:

$$\begin{array}{l}{\mathbf{S}}_{\lambda}=\mathbf{Q}\phantom{\rule{0.16667em}{0ex}}\mathbf{R}{({\mathbf{R}}^{T}\mathbf{R}+\lambda \mathbf{D})}^{-1}{\mathbf{R}}^{T}{\mathbf{Q}}^{T}\\ =\mathbf{Q}{(\mathbf{I}+\lambda {\mathbf{R}}^{-T}\mathbf{D}{\mathbf{R}}^{-1})}^{-1}{\mathbf{Q}}^{T}.\end{array}$$

(C2)

At this stage, the singular value decomposition (SVD) of **Σ** ≡ **R**^{−T}**DR**^{−1} is needed, which will be denoted by **Σ** = **UΔV***^{T}*; since **Σ** is symmetric, **U** = **V**. Note that

$$\begin{array}{l}{\mathbf{S}}_{\lambda}=\mathbf{Q}\phantom{\rule{0.16667em}{0ex}}\mathbf{U}{(\mathbf{I}+\lambda \mathbf{\Delta})}^{-1}{(\mathbf{Q}\phantom{\rule{0.16667em}{0ex}}\mathbf{U})}^{T},\\ =\mathbf{M}\phantom{\rule{0.16667em}{0ex}}\mathbf{W}\phantom{\rule{0.16667em}{0ex}}{\mathbf{M}}^{T}.\end{array}$$

(C3)

where **M** = **Q U** is an orthogonal matrix and **W** is a diagonal matrix whose diagonal elements are defined by
${W}_{ii}={\scriptstyle \frac{1}{1+\lambda {\mathrm{\Delta}}_{ii}}}$. Since **M** is an orthogonal matrix, *tr*(**S**_{λ}) is simply
$tr(\mathbf{W})={\sum}_{i}{\scriptstyle \frac{1}{1+\lambda {\mathrm{\Delta}}_{ii}}}$. In practice, the factor **M** may be precomputed and only the diagonal matrix **W** needs to be updated during the optimization search for λ* _{GCV}*.
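The factorization of Eqs. (C1)–(C3) can be sketched as follows; the function names and the toy design matrix are assumptions for illustration.

```python
import numpy as np

def smoother_factors(X, D):
    """Precompute M and the diagonal of Delta in Eq. (C3): QR-factor X, then
    take the SVD of R^(-T) D R^(-1) (symmetric, so U = V)."""
    Q, R = np.linalg.qr(X)                         # X = Q R, Q^T Q = I
    Rinv = np.linalg.inv(R)
    U, delta, _ = np.linalg.svd(Rinv.T @ D @ Rinv)
    return Q @ U, delta                            # M = Q U, Delta diagonal

def smoother_apply(M, delta, lam, y):
    """S_lam y = M W M^T y with W_ii = 1 / (1 + lam * Delta_ii)."""
    return M @ ((M.T @ y) / (1.0 + lam * delta))

def smoother_trace(delta, lam):
    """tr(S_lam) = sum_i 1 / (1 + lam * Delta_ii), as needed by the GCV."""
    return float(np.sum(1.0 / (1.0 + lam * delta)))

# Toy example: random full-column-rank design matrix with p + 1 = 3.
rng = np.random.default_rng(3)
X_demo = rng.normal(size=(20, 6))
D_demo = np.diag([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
M, delta = smoother_factors(X_demo, D_demo)
```

As the text notes, only the diagonal weights depend on λ, so `M` and `delta` are computed once while the search over λ* _{GCV}* updates only the cheap diagonal factor.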

In this appendix, we provide an implementation of the fixed point formula of the underlying signal intensity, which is based on Newton’s method of root finding. It begins with an iteration scheme of the following form:

$${\eta}_{k+1}\equiv K({\eta}_{k}\mid \widehat{m},{\widehat{\sigma}}_{g},N)={\eta}_{k}-\frac{f({\eta}_{k}\mid \widehat{m},{\widehat{\sigma}}_{g},N)}{{f}^{\prime}({\eta}_{k}\mid \widehat{m},{\widehat{\sigma}}_{g},N)},$$

(D1)

where *f* (η | *m̂*, σ̂* _{g}*, *N*) ≡ *g*(η | *m̂*, σ̂* _{g}*, *N*) − η. Upon simplification, Eq. (D1) can be expressed as:

$$\eta -\frac{g(\eta \mid m,\sigma ,N)\left(g(\eta \mid m,\sigma ,N)-\eta \right)}{\eta \left(1-{\scriptstyle \frac{{\beta}_{N}^{2}}{2N}}{{}_{1}F}_{1}(-{\scriptstyle \frac{1}{2}},N,-{\scriptstyle \frac{{\eta}^{2}}{2{\sigma}^{2}}})\phantom{\rule{0.16667em}{0ex}}{{}_{1}F}_{1}({\scriptstyle \frac{1}{2}},N+1,-{\scriptstyle \frac{{\eta}^{2}}{2{\sigma}^{2}}})\right)-g(\eta \mid m,\sigma ,N)}.$$

(D2)

The basic algorithm of the above iteration is given in Table 1. It is clear from Eq. (D2) that the expression is different from that of (16).

We should note that there are other iteration schemes for finding the fixed point of the underlying signal. Here, we provide another iteration scheme, also based on Newton’s method:

$$\eta +\frac{2N\sigma \left(m-{\beta}_{N}\phantom{\rule{0.16667em}{0ex}}\sigma \phantom{\rule{0.16667em}{0ex}}{{}_{1}F}_{1}(-{\scriptstyle \frac{1}{2}},N,-{\scriptstyle \frac{{\eta}^{2}}{2{\sigma}^{2}}})\right)}{{\beta}_{N}\eta \phantom{\rule{0.16667em}{0ex}}{{}_{1}F}_{1}({\scriptstyle \frac{1}{2}},N+1,-{\scriptstyle \frac{{\eta}^{2}}{2{\sigma}^{2}}})}.$$

(D3)

The expression above is derived directly from Eq. (7). Note that Table 1 can be easily adapted for Eq. (D3) instead of Eq. (D2).
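A sketch of the iteration of Eq. (D3) is given below, with β_N taken as √2 Γ(N + 1/2)/Γ(N) (its standard closed form); the function name, starting value, and stopping rule are implementation assumptions. The below-the-noise-floor modification discussed earlier in the text is not handled here.

```python
import numpy as np
from scipy.special import hyp1f1, gamma

def eta_fixed_point(m, sigma, N=1, tol=1e-10, max_iter=100):
    """Newton iteration of Eq. (D3): solve
        m = beta_N * sigma * 1F1(-1/2, N, -eta^2 / (2 sigma^2))
    for the underlying signal intensity eta.

    Assumes m lies above the noise floor (m > beta_N * sigma); the special
    case below the noise floor discussed in the text is not treated.
    """
    beta_N = np.sqrt(2.0) * gamma(N + 0.5) / gamma(N)
    eta = m                                          # initial guess
    for _ in range(max_iter):
        x = -eta**2 / (2.0 * sigma**2)
        step = (2.0 * N * sigma * (m - beta_N * sigma * hyp1f1(-0.5, N, x))
                / (beta_N * eta * hyp1f1(0.5, N + 1, x)))
        eta += step
        if abs(step) < tol:
            break
    return eta
```

With N = 1 this reduces to the Rician case; inverting the Rician first moment in this way recovers the noise-free intensity from a mean magnitude measurement.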


1. Lauterbur PC. Image formation by induced local interactions: Examples employing nuclear magnetic resonance. Nature. 1973;242(5394):190–191. [PubMed]

2. Ogawa S, Lee TM, Kay AR, Tank DW. Brain Magnetic Resonance Imaging with Contrast Dependent on Blood Oxygenation. Proceedings of the National Academy of Sciences. 1990;87(24):9868–9872. [PubMed]

3. Basser PJ, Mattiello J, Le Bihan D. MR diffusion tensor spectroscopy and imaging. Biophys J. 1994;66(1):259–267. [PubMed]

4. Tuch DS, Reese TG, Wiegell MR, Makris N, Belliveau JW, Wedeen VJ. High angular resolution diffusion imaging reveals intravoxel white matter fiber heterogeneity. Magnetic Resonance in Medicine. 2002;48(4):577–582. [PubMed]

5. Frank LR. Characterization of anisotropy in high angular resolution diffusion-weighted MRI. Magnetic Resonance in Medicine. 2002;47(6):1083–1099. [PubMed]

6. Anderson AW. Measurement of fiber orientation distributions using high angular resolution diffusion imaging. Magnetic Resonance in Medicine. 2005;54(5):1194–1206. [PubMed]

7. Hess CP, Mukherjee P, Han ET, Xu D, Vigneron DB. Q-ball reconstruction of multimodal fiber orientations using the spherical harmonic basis. Magnetic Resonance in Medicine. 2006;56(1):104–117. [PubMed]

8. Özarslan E, Shepherd TM, Vemuri BC, Blackband SJ, Mareci TH. Resolution of complex tissue microarchitecture using the diffusion orientation transform (DOT) NeuroImage. 2006;31(3):1086–1103. [PubMed]

9. Descoteaux M, Angelino E, Fitzgibbons S, Deriche R. Regularized, fast, and robust analytical Q-ball imaging. Magnetic Resonance in Medicine. 2007;58(3):497–510. [PubMed]

10. Wu YC, Alexander AL. Hybrid diffusion imaging. Neuroimage. 2007;36(3):617–629. [PMC free article] [PubMed]

11. Henkelman RM. Measurement of signal intensities in the presence of noise in MR images. Med Phys. 1985;12(2):232–233. [PubMed]

12. Liu J, Koenig JL. An automatic phase correction method in nuclear magnetic resonance imaging. Journal of Magnetic Resonance. 1990;86:593–604.

13. Chen L, Weng Z, Goh L, Garland M. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization. Journal of Magnetic Resonance. 2002;158:164–168.

14. Bretthorst GL. Automatic phasing of MR images. Part I: Linearly varying phase. Journal of Magnetic Resonance. 2008;191:184–192. [PubMed]

15. Bretthorst GL. Automatic phasing of MR images. Part II: Voxel-wise phase estimation. Journal of Magnetic Resonance. 2008;191:193–201. [PubMed]

16. Koay CG, Basser PJ. Analytically exact correction scheme for signal extraction from noisy magnitude MR signals. Journal of Magnetic Resonance. 2006;179(2):317–322. [PubMed]

17. Constantinides CD, Atalar E, McVeigh ER. Signal-to-noise measurements in magnitude images from NMR phased arrays. Magnetic Resonance in Medicine. 1997;38(5):852–857. [PMC free article] [PubMed]

18. Bernstein MA, Thomasson DM, Perman WH. Improved detectability in low signal-to-noise ratio magnetic resonance images by means of a phase-corrected real reconstruction. Med Phys. 1989;16(5):813–817. [PubMed]

19. Gudbjartsson H, Patz S. The Rician distribution of noisy MRI data. Magnetic Resonance in Medicine. 1995;34(6):910–914. [PMC free article] [PubMed]

20. Rice SO. Mathematical analysis of random noise. Bell System Technical Journal. 1944;23:282–332; 1945;24:46–156.

21. Jones DK, Basser PJ. Squashing peanuts and smashing pumpkins: How noise distorts diffusion-weighted MR data. Magnetic Resonance in Medicine. 2004;52(5):979–993. [PubMed]

22. Vincent JL, Patel GH, Fox MD, Snyder AZ, Baker JT, Van Essen DC, Zempel JM, Snyder LH, Corbetta M, Raichle ME. Intrinsic functional architecture in the anaesthetized monkey brain. Nature. 2007;447(7140):83–86. [PubMed]

23. McGibney G, Smith MR. An unbiased signal-to-noise ratio measure for magnetic resonance images. Med Phys. 1993;20(4):1077–1078. [PubMed]

24. Miller AJ, Joseph PM. The use of power images to perform quantitative analysis on low SNR MR images. Magnetic Resonance Imaging. 1993;11(7):1051–1056. [PubMed]

25. Ruppert D, Wand MP, Carroll RJ. Semiparametric regression. Cambridge University Press; 2003.

26. Wahba G. Spline models for observational data. SIAM; 1990.

27. Craven P, Wahba G. Smoothing noisy data with spline functions. Numerische Mathematik. 1978;31(4):377–403.

28. Bertero M, Boccacci P. Introduction to inverse problems in imaging. Philadelphia: Institute of Physics Publishing; 1998.

29. Sijbers J, Poot D, den Dekker AJ, Pintjens W. Automatic estimation of the noise variance from the histogram of a magnetic resonance image. Physics in Medicine and Biology. 2007;52(5):1335–1348. [PubMed]

30. Koay CG, Özarslan E, Pierpaoli C. Probabilistic Identification and Estimation of Noise (PIESNO): A self-consistent approach via the median method and its applications in MRI. Journal of Magnetic Resonance. Submitted. [PMC free article] [PubMed]

31. Proakis J. Digital communications. New York: McGraw-Hill; 2001.

32. Edelstein WA, Bottomley PA, Pfeifer LM. A signal-to-noise calibration procedure for NMR imaging systems. Med Phys. 1984;11(2):180–185. [PubMed]

33. Chang L-C, Rohde GK, Pierpaoli C. An automatic method for estimating noise-induced signal variance in magnitude-reconstructed magnetic resonance images. SPIE Medical Imaging: Image processing. 2005;5747:1136–1142.

34. Courant R, John F. Introduction to Calculus and Analysis I. New York: Springer; 1989.

35. Liu P, Der Kiureghian A. Multivariate distribution models with prescribed marginals and covariances. Prob Eng Mech. 1986;1:105–112.

36. van Albada S, Robinson P. Transformation of arbitrary distributions to the normal distribution with application to EEG test-retest reliability. Journal of Neuroscience Methods. 2007;161:205–211. [PubMed]

37. Sijbers J, Den Dekker AJ. Maximum likelihood estimation of signal amplitude and noise variance from MR data. Magn Reson Med. 2004;51:586–594. [PubMed]

38. Jones DK, Horsfield MA, Simmons A. Optimal strategies for measuring diffusion in anisotropic systems by magnetic resonance imaging. Magnetic Resonance in Medicine. 1999;42(3):515–525. [PubMed]

39. Abramovici A, Althouse WE, Drever RWP, Gursel Y, Kawamura S, Raab FJ, Shoemaker D, Sievers L, Spero RE, Thorne KS, Vogt RE, Weiss R, Whitcomb SE, Zucker ME. LIGO: The Laser Interferometer Gravitational-Wave Observatory. Science. 1992;256(5055):325–333. [PubMed]

40. Cutler C, Flanagan EE. Gravitational waves from merging compact binaries: How accurately can one extract the binary’s parameters from the inspiral waveform? Phys Rev D. 1994;49(6):2658–2697. [PubMed]

41. Lawson CL, Hanson RJ. Solving least squares problems. Philadelphia: SIAM; 1974.

42. Koay CG, Chang L-C, Carew JD, Pierpaoli C, Basser PJ. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging. Journal of Magnetic Resonance. 2006;182(1):115–125. [PubMed]

43. Koay CG, Chang LC, Pierpaoli C, Basser PJ. Error propagation framework for diffusion tensor imaging via diffusion tensor representations. IEEE Transactions on Medical Imaging. 2007;26(8):1017–1034. [PubMed]

44. Koay CG, Nevo U, Chang L-C, Pierpaoli C, Basser PJ. The elliptical cone of uncertainty and its normalized measures in diffusion tensor imaging. IEEE Transactions on Medical Imaging. Accepted. [PMC free article] [PubMed]

45. Seber GAF, Lee AJ. Linear Regression Analysis. New York: Wiley; 2003.

46. Courant R, Hilbert D. Methods of Mathematical Physics. New York: Wiley; 1989.
