IEEE Trans Med Imaging. Author manuscript; available in PMC 2010 April 19.


PMCID: PMC2856353

NIHMSID: NIHMS160353

Amanda K. Funai, Student Member, IEEE, Jeffrey A. Fessler, Fellow, IEEE, Desmond T.B. Yeo, Student Member, IEEE, Valur T. Olafsson, Student Member, IEEE, and Douglas C. Noll, Senior Member, IEEE

The publisher's final edited version of this article is available at IEEE Trans Med Imaging


In fast MR imaging with long readout times, such as echo-planar imaging (EPI) and spiral scans, it is important to correct for the effects of field inhomogeneity to reduce image distortion and blurring. Such corrections require an accurate *field map*, a map of the off-resonance frequency at each voxel. Standard field map estimation methods yield noisy field maps, particularly in image regions with low spin density. This paper describes regularized methods for field map estimation from two or more MR scans having different echo times. These methods exploit the fact that field maps are generally smooth functions. The methods use algorithms that monotonically decrease a regularized least-squares cost function, even though the problem is highly nonlinear. Results show that the proposed regularized methods significantly improve the quality of field map estimates relative to conventional unregularized methods.

MR imaging techniques with long readout times, such as echo-planar imaging (EPI) and spiral scans, suffer from the effects of field inhomogeneity that cause blur and image distortion. To reduce these effects via field-corrected MR image reconstruction, *e.g*., [1]–[5], one must have available an accurate estimate of the field map. A common approach to measuring field maps is to acquire two scans with different echo times, and then to reconstruct the images (without field correction) from those two scans. The conventional method is then to compute their phase difference and divide by the echo time difference Δ_{1}. This approach does not account for noise and yields field maps that are very noisy in voxels with low spin density. Section II first introduces this model and then reviews standard approaches for this problem. A limitation of the standard two-scan approach to field mapping is that selecting the echo-time difference Δ_{1} involves a trade-off: if Δ_{1} is too large, then undesirable phase wrapping will occur, but if Δ_{1} is too small, the variance of the field map is large. One way to reduce the variance while also avoiding phase unwrapping procedures is to acquire more than two scans, *e.g*., one pair with a small echo time difference and a third scan with a larger echo time difference. By using multiple echo readouts, the scan times may remain reasonable, at least for the modest spatial resolutions needed in fMRI. Therefore, we present a general model that accommodates more than two scans and describe a regularized least-squares field map estimation method using those scans. Section III shows the improvements both in the estimated field maps and the reconstructed images using multiple scans. This is shown first with simulated results in Section III-A and then using real MR data in Section III-B.

The usual approach to measuring field maps in MRI is to acquire two scans of the object with slightly different echo times, and then to reconstruct images *y*^{0} and *y*^{1} (without field correction) from those two scans, *e.g*., [6]–[8]. We assume the following model for those undistorted reconstructed images:

$${y}_{j}^{0}={f}_{j}+{\epsilon}_{j}^{0}, \qquad {y}_{j}^{1}={f}_{j}\,{\mathrm{e}}^{\imath{\omega}_{j}{\mathrm{\Delta}}_{1}}+{\epsilon}_{j}^{1},$$

(1)

where Δ_{1} denotes the echo-time difference, *f _{j}* denotes the underlying complex transverse magnetization in the *j*th voxel, and ${\epsilon}_{j}^{0}$ and ${\epsilon}_{j}^{1}$ denote complex Gaussian noise.

Based on (1), the usual field map estimator ${\widehat{\omega}}_{j}$ uses the phase difference of the two images, computed as follows [1], [9]:

$${\widehat{\omega}}_{j}=\angle ({y}_{j}^{0*}{y}_{j}^{1})/{\mathrm{\Delta}}_{1}.$$

(2)

This expression is a method-of-moments approach that would work perfectly in the absence of noise and phase wrapping, within voxels where |*f _{j}*| > 0. However, (2) can be very sensitive to noise in voxels where the image magnitude |*f _{j}*| is small.
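As a concrete illustration, the conventional estimator (2) reduces to a one-line computation. The following numpy sketch (our illustration, with a hypothetical single-voxel example and noise level, not code from the original work) demonstrates it:

```python
import numpy as np

def conventional_fieldmap(y0, y1, delta1):
    """Conventional estimate (2): phase difference divided by the echo-time difference."""
    return np.angle(np.conj(y0) * y1) / delta1

# Hypothetical single-voxel example: 50 Hz off-resonance, 2 ms echo-time difference.
rng = np.random.default_rng(0)
omega_true = 2 * np.pi * 50.0                      # rad/s
delta1 = 2e-3                                      # s
f = 1.0                                            # unit-magnitude magnetization
noise = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y0 = f + noise[0]
y1 = f * np.exp(1j * omega_true * delta1) + noise[1]
omega_hat = conventional_fieldmap(y0, y1, delta1)  # close to omega_true here
```

In a voxel where |*f _{j}*| is large this works well; in a low-magnitude voxel the angle of a nearly zero complex number is essentially random, which is the failure mode the regularized methods below address.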

Although the conventional estimate (2) is most common, other methods for estimating field maps have appeared in the literature.

Different techniques have been proposed that incorporate field map acquisition with image acquisition ([7] for projection reconstruction and [10] for spiral scans). Chen *et al*. in [11] used multiple echos during EPI acquisition and used these distorted scans to create a final corrected undistorted image. Priest *et al*. in [12] used a two-shot EPI technique to obtain a field map for each image; this could prevent changes in the field map due to subject motion from being propagated through an entire fMRI time series.

Stand alone field map acquisition techniques have also been proposed. Windischberger *et al*. [13] used three echos and corrected for phase wrapping by classifying the degree of phase wrapping into seven categories. They then used linear regression to create a field map that is followed by median and Gaussian filtering. Reber *et al*. [14] used ten separate echo times and acquired distorted EPI images. They used a standard phase unwrapping technique of adding multiples of 2π and then spatially smoothed the image with a Gaussian filter. While these techniques both seek to use more echos to increase the accuracy of the field map, they have several disadvantages. Neither are based on a statistical model and, thus, do not consider any noise in developing their estimator. The filtering suggested by both techniques also adds additional blur. Aksit *et al*. [15] used three scans, the first two with a small echo time and no phase unwrapping and the third with a larger echo time. Two techniques were tried: 1) phase unwrapping by using the first two sets of data and 2) taking a Fourier transform to determine the EPI shift experienced. In phantom studies, using three scans yielded half to a third of the error of two scans. Because the estimator uses a linear fit, there is still error in voxels near phase discontinuities and along areas of large susceptibility differences.

An additional technique used to improve the conventional estimate is local (non-linear) fitting, *e.g*. [16], [17]. While this can improve the conventional estimate, we desire a more statistical approach.

Our technique is unique in that it uses a statistical model for multiple scans and operates without the constraint of linearity. By using a penalized-likelihood cost function, we can easily adjust the regularization parameter to control the amount of smoothing without any additional filtering step. By using a field map derived from the first two echos as the initialization for the iterative method (assuming the two echos are close enough together), no phase unwrapping is required. Our model also takes into account ${R}_{2}^{*}$ decay, which was ignored in previous multiple-echo techniques.

We now generalize the conventional model (1) to the case of multiple scans, *i.e*., with more than one echo time difference. The reconstructed images are denoted here by **y**^{0}, …, **y**^{L}, where

$${y}_{j}^{l}={f}_{j}\,{\mathrm{e}}^{\imath{\omega}_{j}{\mathrm{\Delta}}_{l}}\,{\mathrm{e}}^{-{R}_{j}{\mathrm{\Delta}}_{l}}+{\epsilon}_{j}^{l},$$

(3)

for *l* = 0, …, *L*, where Δ_{l} denotes the echo time difference of the *l*th scan relative to the original scan (*i.e*., Δ_{0} = 0), where *j* denotes the voxel number, and where *R _{j}* denotes the value of ${R}_{2}^{*}$ for the *j*th voxel. As in most field map estimation methods, this model assumes implicitly there is no motion between the scans. As in (1), ${\epsilon}_{j}^{l}$ denotes complex Gaussian noise.

The conventional estimate (2) appears to disregard noise effects, so a natural alternative approach is to estimate ** ω** using a maximum likelihood (ML) method based on a statistical model for the measurements. For independent complex Gaussian noise with variance σ², the log-likelihood satisfies

$$\log \mathrm{p}(\mathit{y};\mathit{f},\mathit{\omega}) = \sum _{l=0}^{L} \log \mathrm{p}({\mathit{y}}^{l};\mathit{f},\mathit{\omega}) \equiv \frac{-1}{2{\sigma}^{2}} \sum _{j=1}^{N} \sum _{l=0}^{L} \left| {y}_{j}^{l}-{f}_{j}\,{\mathrm{e}}^{\imath{\omega}_{j}{\mathrm{\Delta}}_{l}}\,{\mathrm{e}}^{-{R}_{j}{\mathrm{\Delta}}_{l}} \right|^{2},$$

(4)

where “≡” denotes equality to within constants independent of *f* and *ω*. Maximizing the likelihood jointly over *f* and *ω* is equivalent to the minimization

$$\underset{\mathit{\omega}\in {\mathbb{R}}^{N},\,\mathit{f}\in {\mathbb{C}}^{N}}{\text{arg min}}\ \sum _{j=1}^{N}\left\Vert \left[\begin{array}{c}{y}_{j}^{0}\\ {y}_{j}^{1}\\ \vdots \\ {y}_{j}^{L}\end{array}\right]-\left[\begin{array}{c}1\\ {\mathrm{e}}^{\imath{\omega}_{j}{\mathrm{\Delta}}_{1}}{\mathrm{e}}^{-{R}_{j}{\mathrm{\Delta}}_{1}}\\ \vdots \\ {\mathrm{e}}^{\imath{\omega}_{j}{\mathrm{\Delta}}_{L}}{\mathrm{e}}^{-{R}_{j}{\mathrm{\Delta}}_{L}}\end{array}\right]\,{f}_{j}\right\Vert ^{2}.$$

(5)

This problem is quadratic in *f _{j}*; minimizing over *f _{j}* yields

$${\widehat{f}}_{j}=\frac{{\sum}_{l=0}^{L}{y}_{j}^{l}\,{\mathrm{e}}^{-\imath{\omega}_{j}{\mathrm{\Delta}}_{l}}\,{\mathrm{e}}^{-{R}_{j}{\mathrm{\Delta}}_{l}}}{{\sum}_{l=0}^{L}{\mathrm{e}}^{-2{R}_{j}{\mathrm{\Delta}}_{l}}}.$$

(6)

Substituting this estimate back into the cost function (5) and simplifying considerably yields the following cost function used for ML estimation of * ω*:

$${\mathrm{\Psi}}_{\text{ML}}(\mathit{\omega})\equiv \sum _{j=1}^{N}\sum _{m=0}^{L}\sum _{n=0}^{L}|{y}_{j}^{m}{y}_{j}^{n}|\cdot {w}_{j}^{m,n}\cdot \left[1-\cos\left(\angle {y}_{j}^{n}-\angle {y}_{j}^{m}-{\omega}_{j}({\mathrm{\Delta}}_{n}-{\mathrm{\Delta}}_{m})\right)\right],$$

(7)

where ${w}_{j}^{m,n}$ is a weighting factor that depends on ${R}_{2}^{*}$ as follows:

$${w}_{j}^{m,n}=\frac{{\mathrm{e}}^{-{R}_{j}({\mathrm{\Delta}}_{m}+{\mathrm{\Delta}}_{n})}}{{\displaystyle {\sum}_{l=0}^{L}{\mathrm{e}}^{-2{R}_{j}{\mathrm{\Delta}}_{l}}}}.$$

(8)

Similar weighting appeared in the weighted phase estimate proposed in [19] for angiography. The ML cost function Ψ_{ML}(* ω*) is periodic, similar to cost functions used in phase unwrapping problems. In practice the *R _{j}* values in (8) are unknown; however, the expected image magnitudes satisfy

$$|\mathrm{E}[{y}_{j}^{m}]|=|{f}_{j}||{\mathrm{e}}^{-{R}_{j}{\mathrm{\Delta}}_{m}}|;$$

therefore, hereafter, we approximate ${w}_{j}^{m,n}$ as follows:

$${w}_{j}^{m,n}\approx \frac{|{y}_{j}^{m}||{y}_{j}^{n}|}{{\displaystyle {\sum}_{l=0}^{L}|{y}_{j}^{l}{|}^{2}}}.$$

(9)

This approximation does not require knowledge of ${R}_{2}^{*}$ values.
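To make (7)–(9) concrete, the cost can be evaluated directly from the images. The sketch below (an illustration under model (3) with ${R}_{2}^{*}$ = 0 and made-up voxel values, not the authors' implementation) verifies that noiseless data drive the cost to zero at the true field map:

```python
import numpy as np

def ml_cost(omega, y, deltas):
    """ML cost (7) using the R2*-free weight approximation (9).
    omega: (N,) field map in rad/s; y: (L+1, N) complex images;
    deltas: (L+1,) echo-time differences in seconds (deltas[0] = 0)."""
    y = np.asarray(y)
    denom = np.sum(np.abs(y) ** 2, axis=0)            # sum_l |y_j^l|^2 in (9)
    cost = 0.0
    for m in range(y.shape[0]):
        for n in range(y.shape[0]):
            w = np.abs(y[m]) * np.abs(y[n]) / denom   # approximate weights (9)
            arg = np.angle(y[n]) - np.angle(y[m]) - omega * (deltas[n] - deltas[m])
            cost += np.sum(np.abs(y[m] * y[n]) * w * (1.0 - np.cos(arg)))
    return cost

# Noiseless three-scan data following model (3) with R2* = 0 (hypothetical values):
deltas = np.array([0.0, 2e-3, 6e-3])
omega_true = np.array([100.0, -50.0, 0.0])            # rad/s
f = np.array([1.0, 0.5, 2.0])
y = np.stack([f * np.exp(1j * omega_true * d) for d in deltas])
```

With noiseless data the cosine arguments vanish at the true *ω*, so the cost is zero there and strictly positive at a perturbed field map.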

There is no analytical solution for the minimizer * ω* in (7), except in the special case *L* = 1.

In the case where *L* = 1, usually Δ_{1} is chosen small enough that we can ignore ${R}_{2}^{*}$ decay (*i.e*., let ${R}_{2}^{*}=0$), and the ML cost function in (7) simplifies to

$${\mathrm{\Psi}}_{\text{ML}}(\mathit{\omega})\equiv \sum _{j=1}^{N}|{y}_{j}^{0}{y}_{j}^{1}|\left[1-\cos\left(\angle {y}_{j}^{1}-\angle {y}_{j}^{0}-{\omega}_{j}{\mathrm{\Delta}}_{1}\right)\right].$$

(10)

The ML estimate is not unique here due to the possibility of phase wrapping. But ignoring that issue, the ML estimate of * ω* is ${\widehat{\omega}}_{j}=(\angle {y}_{j}^{1}-\angle {y}_{j}^{0})/{\mathrm{\Delta}}_{1}$, because 1 − cos(*t*) is minimized at *t* = 0; this is simply the conventional estimate (2).

The ML estimator ignores our *a priori* knowledge that field maps tend to be spatially smooth functions due to the physical nature of main field inhomogeneity and susceptibility effects^{2}. (We note that this assumption does not address the presence of signal from fat). A natural approach to incorporating this characteristic is to add a regularizing roughness penalty to the cost function. Here we regularize only the phase map * ω* and not the magnetization map *f*; the penalized-likelihood estimates are then

$$(\widehat{\mathit{\omega}},\widehat{f})=\underset{\mathit{\omega},\mathit{f}}{\text{arg max}}\ \sum _{l=0}^{L}\log \mathrm{p}({\mathit{y}}^{l};\mathit{f},\mathit{\omega})-\beta \mathrm{R}(\mathit{\omega}),$$

where R(* ω*) is a spatial roughness penalty (or log prior in a Bayesian MAP philosophy). Based on (6) and (7), after solving for $\widehat{f}$, the field map estimate $\widehat{\mathit{\omega}}$ is the minimizer of the penalized-likelihood cost function

$${\mathrm{\Psi}}_{\text{PL}}(\mathit{\omega})\phantom{\rule{thinmathspace}{0ex}}\triangleq \phantom{\rule{thinmathspace}{0ex}}{\mathrm{\Psi}}_{\text{ML}}(\mathit{\omega})+\beta \mathrm{R}(\mathit{\omega}),$$

(11)

where we use the approximation (9) for Ψ_{ML}(* ω*). This cost function automatically gives low weight to any voxels where the magnitude $|{y}_{j}^{m}{y}_{j}^{n}|$ is small. For such voxels, the regularization term will have the effect of smoothing or extrapolating the neighboring values. Thus, this approach avoids the phase “outlier” problem that plagues the usual estimate (2) in voxels with low signal magnitude. If the field map is stored as a 2-D *N*_{1} × *N*_{2} array, a regularizer based on second-order finite differences is

$$\begin{array}{cc}\mathrm{R}(\mathit{\omega})\hfill & ={\displaystyle \sum _{n=1}^{{N}_{1}-1}{\displaystyle \sum _{m=0}^{{N}_{2}-1}\psi (2{\omega}_{n,m}-{\omega}_{n-1,m}-{\omega}_{n+1,m})}}\hfill \\ \hfill & +{\displaystyle \sum _{n=0}^{{N}_{1}-1}{\displaystyle \sum _{m=1}^{{N}_{2}-1}\psi (2{\omega}_{n,m}-{\omega}_{n,m-1}-{\omega}_{n,m+1})}}\hfill \end{array}$$

(12)

where ψ is a convex “potential function.” Here, we use the quadratic potential function, ψ(*t*) = *t*^{2}/2. In this paper, we used second-order differences for all results; we found that second-order finite differences are preferable to first-order differences because the resulting PSF tails decrease more rapidly even when the FWHM values are identical. A quadratic potential function has the advantage of being differentiable and easy to analyze, especially with Gaussian noise. Although quadratic regularization blurs edges, we assume the field map is smooth, so a more complicated potential function, such as Huber, is not considered here.
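With the quadratic choice ψ(*t*) = *t*²/2, the regularizer (12) is a few lines of numpy (a minimal sketch for a 2-D field map array):

```python
import numpy as np

def roughness_penalty(omega):
    """Roughness penalty (12): quadratic potential psi(t) = t^2 / 2 applied to
    second-order finite differences along both axes of a 2-D field map."""
    d1 = 2.0 * omega[1:-1, :] - omega[:-2, :] - omega[2:, :]   # vertical second differences
    d2 = 2.0 * omega[:, 1:-1] - omega[:, :-2] - omega[:, 2:]   # horizontal second differences
    return 0.5 * (np.sum(d1 ** 2) + np.sum(d2 ** 2))
```

Note that a linear ramp has vanishing second differences, so this penalty leaves smooth linear trends in the field map unpenalized, unlike a first-order penalty.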

Usually ψ is differentiable, so we can minimize the cost function Ψ(* ω*) either by conventional gradient descent methods or by optimization transfer methods [21]–[23]. In particular, in the usual case where ψ is quadratic, optimization transfer with a separable quadratic surrogate yields the update

$${\mathit{\omega}}^{(n+1)}={\mathit{\omega}}^{(n)}-\text{diag}\left\{\frac{1}{{d}_{j}+\beta \cdot c}\right\}\nabla \mathrm{\Psi}({\mathit{\omega}}^{(n)}),$$

(13)

where $\nabla \mathrm{\Psi}({\mathit{\omega}}^{(n)})$ is the gradient of the cost function,

$$c\ \triangleq \ \begin{cases}4, & \text{regularization with 1st-order differences}\\ 16, & \text{regularization with 2nd-order differences}\end{cases}$$

(14)

and

$${d}_{j}\ \triangleq \ \sum _{m=0}^{L}\sum _{n=0}^{L}|{y}_{j}^{m}{y}_{j}^{n}|\cdot {w}_{j}^{m,n}\cdot {({\mathrm{\Delta}}_{n}-{\mathrm{\Delta}}_{m})}^{2},$$

(15)

using the approximation for ${w}_{j}^{m,n}$ shown in (9). For the examples in this paper, we used a similar minimization algorithm, described in Appendix A, because of its faster convergence properties.
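For concreteness, the following numpy sketch implements one iteration of (13) using the approximate weights (9), the curvature (15), and, for simplicity, first-order differences (*c* = 4 in (14)). It is our illustration with hypothetical test data, not the Appendix A algorithm:

```python
import numpy as np

def pl_update(omega, y, deltas, beta):
    """One iteration of (13): a preconditioned gradient step on Psi_PL = Psi_ML + beta*R,
    with weights (9), curvatures d_j from (15), and first-order differences (c = 4).
    omega: (N1, N2) field map in rad/s; y: (L+1, N1, N2) complex; deltas: (L+1,) s."""
    absy = np.abs(y)
    denom = np.maximum(np.sum(absy ** 2, axis=0), 1e-30)   # guard empty voxels
    grad = np.zeros_like(omega)
    d = np.zeros_like(omega)
    for m in range(y.shape[0]):
        for n in range(y.shape[0]):
            w = absy[m] * absy[n] / denom                  # approximation (9)
            dmn = deltas[n] - deltas[m]
            arg = np.angle(y[n]) - np.angle(y[m]) - omega * dmn
            coef = absy[m] * absy[n] * w
            grad += -coef * dmn * np.sin(arg)              # gradient of the 1 - cos terms in (7)
            d += coef * dmn ** 2                           # curvature d_j, (15)
    # gradient of the quadratic first-order roughness penalty
    g = np.zeros_like(omega)
    g[1:, :] += omega[1:, :] - omega[:-1, :]
    g[:-1, :] -= omega[1:, :] - omega[:-1, :]
    g[:, 1:] += omega[:, 1:] - omega[:, :-1]
    g[:, :-1] -= omega[:, 1:] - omega[:, :-1]
    return omega - (grad + beta * g) / (d + beta * 4.0)
```

Starting from an initialization within one phase wrap of the truth, repeated application of this update descends toward the true field map on noiseless data.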

To initialize **ω**^{(0)}, we used the regularized ML estimate (11) based on the first two sets of data **y**^{0} and **y**^{1}. We choose the echo times to avoid phase wrapping between these sets of data (this same idea is used in [15] in their three-point method). Therefore, there is no need to apply phase unwrapping algorithms; the algorithm will converge to a local minimizer in the “basin” of the initial estimate [21].

In [24], we considered approximating the 1 − cos term in (11) with its second-order Taylor series to create a penalized weighted least squares (PWLS) cost function. A simplified PWLS approach where the weights were thresholded was also considered. Those models ignore any phase wrap that may occur when evaluating (2). They also have increased error with little computational benefit. Therefore, those simplified methods are not explored further in this paper.

To use the regularized method (11) the user must select the regularization parameter β, which could seem tedious if one used trial-and-error methods. Fortunately, it is particularly simple to analyze the spatial resolution properties for this problem, using the methods in [25] for example. We make the second-order Taylor series approximation for this analysis. The local frequency response of the estimator using second-order finite differences at the *j*th voxel can be shown to be:

$$H({\mathrm{\Omega}}_{X},{\mathrm{\Omega}}_{Y})\approx \frac{1}{1+\frac{\beta}{{d}_{j}}{({\mathrm{\Omega}}_{X}^{2}+{\mathrm{\Omega}}_{Y}^{2})}^{p}},$$

(16)

where Ω_{X} and Ω_{Y} are the Discrete Space Fourier Transform (DSFT) frequency variables, and where *p* = 1 for regularization based on first-order differences and *p* = 2 for second-order finite differences as in (12). (See [26] for related analyses.) From (16) we see that the spatial resolution at each voxel depends on the data through *d _{j}*. In areas with small signal magnitudes, there will be more smoothing, as desired. The spatial resolution (16) also depends on the Δ_{l} values through *d _{j}* in (15).

We used the inverse 2D DSFT of (16) to compute the PSF *h*[*n, m*] and tabulate its FWHM as a function of β, assuming the previous corrections were made and that pixel *j* has *d _{j}* = 1. Fig. 1 shows this FWHM as a function of log_{2} β.
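This FWHM-versus-β tabulation can be reproduced numerically. The sketch below (our own, using an arbitrary 128 × 128 DSFT grid and a crude half-maximum count rather than interpolation) inverts (16) by FFT:

```python
import numpy as np

def psf_fwhm(beta, dj=1.0, p=2, n=128):
    """Approximate FWHM (in pixels) of the PSF obtained by inverting the local
    frequency response (16) on an n x n DSFT grid."""
    w = 2.0 * np.pi * np.fft.fftfreq(n)                 # DSFT frequencies in [-pi, pi)
    wx, wy = np.meshgrid(w, w, indexing='ij')
    H = 1.0 / (1.0 + (beta / dj) * (wx ** 2 + wy ** 2) ** p)
    h = np.real(np.fft.ifft2(H))                        # PSF, peak at [0, 0]
    profile = np.fft.fftshift(h[:, 0])                  # central 1-D profile
    profile = profile / profile.max()
    return int(np.sum(profile >= 0.5))                  # crude half-max width in pixels
```

As the response (16) suggests, β = 0 gives a single-pixel impulse, and the width grows as β increases.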

Fig. 2 shows an example of the data magnitude $|{y}_{j}^{0}|$ and the usual phase estimate (2) (based on *L* = 1), which is very noisy. This is real data taken from a 3T MR scanner with Δ_{1} = 2 ms. The maximum value of |ω_{j} · Δ_{1}| is 1.61 radians in nonzero voxels, making the scan free of any phase wraps. Fig. 2 also shows the penalized-likelihood estimate based on (13) using two different values for β and using 150 iterations. Here, we can see the improvement from using a regularized estimator versus the conventional ML estimator. The effect of β on the smoothness of the estimate is also seen. The improvement seen is analyzed quantitatively in Section III. Fig. 2 also shows the effective FWHM (in pixels) of the regularized algorithm based on (16) for both values of β. Most of the image has a FWHM corresponding to the chosen β based on Fig. 1. Areas of low magnitude have a much higher FWHM (such as the sinuses) and areas of high magnitude have the lowest FWHM.

Using more than two sets of data requires a longer data acquisition and also involves choosing the Δ_{l} values. Analyzing the theoretical improvements that may be attained by using multiple data sets can help determine when the increased acquisition time is warranted and can guide our choice of the Δ_{l} values. Therefore, we calculated the Cramér-Rao bound (CRB) for the model (3). This bound expresses the lowest achievable variance possible for an unbiased estimator based on a given model. Although a biased estimator (the penalized-likelihood estimator) is used in our implementation, the bound quantifies the maximal improvement possible based on the model and allows a comparison of how close our implementation is to the ideal, unbiased case.

Because there are multiple unknown parameters in these models, * θ* = (*ω*, *f*), we use the matrix form of the CRB:

$${\text{Cov}}_{\mathit{\theta}}\{\widehat{\mathit{\theta}}\}\ge {\mathrm{F}}^{-1}(\mathit{\theta}),$$

where $\mathrm{F}(\mathit{\theta})=-\mathrm{E}[{\nabla}_{\mathit{\theta}}^{2}\text{ln}\phantom{\rule{thinmathspace}{0ex}}\mathrm{p}(\mathit{Y};\mathit{\theta})]$ is the Fisher information matrix. Because *f _{j}* is a nuisance parameter, we focus on the CRB for the variance of ${\widehat{\omega}}_{j}$.

Applying the CRB to the *L* echo-time difference model (3) yields, after considerable simplification, the expression:

$${\text{Var}}_{L}\{{\widehat{\omega}}_{j}\}\ge \frac{{\sigma}^{2}}{(L+1){\mathrm{\Delta}}_{1}^{2}|{f}_{j}{|}^{2}{\lambda}_{L}},$$

(17)

where, defining α_{l} = Δ_{l} / Δ_{1}:

$${\lambda}_{L}\phantom{\rule{thinmathspace}{0ex}}\triangleq \phantom{\rule{thinmathspace}{0ex}}\left(\frac{1}{L+1}{\displaystyle \sum _{l=0}^{L}{\alpha}_{l}^{2}}\right)-{\left(\frac{1}{L+1}{\displaystyle \sum _{l=0}^{L}{\alpha}_{l}}\right)}^{2}.$$

The variance decreases, in general, as *L* is increased. The expression for λ_{L} is the “variance” of {α_{0}, α_{1}, …, α_{L}}, measuring the spread of the echo time differences. Increasing the variance (spread) of the Δ_{l} values will decrease the overall variance of the field map estimate.

For the *L* = 1 (2 sets of data) model, λ_{1} = 1/4 and (17) simplifies to:

$${\text{CRB}}_{1}\phantom{\rule{thinmathspace}{0ex}}\triangleq \phantom{\rule{thinmathspace}{0ex}}\frac{2{\sigma}^{2}}{{\mathrm{\Delta}}_{1}^{2}|{f}_{j}{|}^{2}}.$$

As expected, the field map variance decreases when the signal strength |*f _{j}*| or the echo time difference Δ_{1} increases. For the *L* = 2 (3 sets of data) model with ${R}_{2}^{*}=0$, (17) becomes

$${\text{CRB}}_{2}\ \triangleq \ \frac{{\text{CRB}}_{1}}{\frac{4}{3}({\alpha}_{2}^{2}-{\alpha}_{2}+1)}.$$

(18)

Interestingly, simply using three scans with Δ_{2} = Δ_{1} (or α_{2} = 1) would reduce the variance by a factor of only 4/3.
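The CRB expressions (17)–(18) are easy to evaluate. The sketch below (our illustration) computes λ_{L} and the variance-reduction factor relative to the two-scan case:

```python
import numpy as np

def lambda_L(alphas):
    """lambda_L in (17): the "variance" of the normalized echo-time differences
    alphas = [Delta_0/Delta_1, ..., Delta_L/Delta_1] (so alphas[0] = 0, alphas[1] = 1)."""
    a = np.asarray(alphas, dtype=float)
    return np.mean(a ** 2) - np.mean(a) ** 2

def crb_reduction(alphas):
    """Factor by which the CRB (17) shrinks relative to CRB_1 (L = 1, lambda_1 = 1/4)."""
    L = len(alphas) - 1
    return (L + 1) * lambda_L(alphas) / (2 * 0.25)
```

For example, `crb_reduction([0, 1, 1])` returns 4/3, matching the observation above, while `crb_reduction([0, 1, alpha2])` reproduces the denominator (4/3)(α_{2}² − α_{2} + 1) of (18).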

From (18), increasing α_{2} should decrease the variance for an unbiased estimator. Making α_{2} arbitrarily large, however, is not advisable for many reasons. A larger α_{2} creates more phase-wrapping. Eventually, the wrapping will lead to intra-voxel aliasing and the desired improvement would be unattainable. Another problem with large values of α_{l} is the effect on the MR pulse sequence length. A large α_{2} also causes much more ${R}_{2}^{*}$ decay in the signal as shown in (7). Choosing optimal Δ_{l} values requires some knowledge of ${R}_{2}^{*}$ decay. This can be seen more clearly in the CRB bounds for the model (3) with ${R}_{2}^{*}$ decay included. For the *L* = 1 model, one can show:

$${\text{Var}}_{1}\{{\widehat{\omega}}_{j}\}\ge {\text{CRB}}_{1}\xb7\frac{1+{\mathrm{e}}^{2{R}_{j}{\mathrm{\Delta}}_{1}}}{2}.$$

(19)

For the *L* = 2 (3 sets of data) model:

$${\text{Var}}_{2}\{{\widehat{\omega}}_{j}\}\ge \frac{{\sigma}^{2}}{{\mathrm{\Delta}}_{1}^{2}|{f}_{j}{|}^{2}}\frac{1+{\mathrm{e}}^{-2{R}_{j}{\mathrm{\Delta}}_{1}}+{\mathrm{e}}^{-2{R}_{j}{\mathrm{\Delta}}_{1}{\alpha}_{2}}}{b},$$

(20)

where

$$b\phantom{\rule{thinmathspace}{0ex}}\triangleq \phantom{\rule{thinmathspace}{0ex}}{\mathrm{e}}^{-2{R}_{j}{\mathrm{\Delta}}_{1}}+{\alpha}_{2}^{2}{\mathrm{e}}^{-2{R}_{j}{\mathrm{\Delta}}_{1}{\alpha}_{2}}+(1+{\alpha}_{2}^{2}-2{\alpha}_{2}){\mathrm{e}}^{-2{R}_{j}{\mathrm{\Delta}}_{1}(1+{\alpha}_{2})}.$$

Using these expressions, we can optimize the Δ_{l} values, which will be inversely proportional to the value of ${R}_{2}^{*}$. In fact, for *L* = 1, one can show that the optimal choice is ${\mathrm{\Delta}}_{1}^{\text{opt}}=1.11/{R}_{j}$. Therefore, the α_{l} values should be kept small, in proportion to the amount of ${R}_{2}^{*}$ decay expected.

We compared the *L* = 1 and *L* = 2 methods with two examples. First, we used a simulated Gaussian true field map (Fig. 3) with a magnitude map equal to unity at all points. Second, we simulated a brain example. For the magnitude, we used a simulated normal T1-weighted brain image [27], [28]. We generated a simple field map consisting of a 4.8 cm diameter sphere of air (centered around the nasal cavity) embedded in water using simple geometrical equations [29], [30], using a slice slightly above the sphere. Fig. 4 shows the field map and magnitude image |*f _{j}*|. We added complex Gaussian noise at many levels of SNR to the images. For this paper, we used the following definition of SNR:

$$\text{SNR}=20\,{\log}_{10}\frac{\Vert \mathit{f}\Vert}{\Vert {\mathit{y}}^{0}-\mathit{f}\Vert}.$$

(21)

The SNR remains consistent even when varying ${R}_{2}^{*}$, *L*, or α.
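For reference, definition (21) in code (a minimal sketch; the signal and noise vectors are made up):

```python
import numpy as np

def snr_db(f, y0):
    """SNR definition (21): 20 log10 of the ratio of signal norm to noise norm."""
    return 20.0 * np.log10(np.linalg.norm(f) / np.linalg.norm(y0 - f))

# Made-up example: noise scaled to exactly one tenth of the signal norm gives 20 dB.
f = np.ones(100, dtype=complex)
rng = np.random.default_rng(2)
e = rng.standard_normal(100) + 1j * rng.standard_normal(100)
e *= 0.1 * np.linalg.norm(f) / np.linalg.norm(e)
y0 = f + e
```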

Top: “True” field map for Gaussian example in Hz; Noisy (SNR = 10dB) wrapped phase $\angle {y}_{j}^{2}$ with α_{2} = 3, Noisy (SNR = 10dB) wrapped phase with α_{2} = 7. Bottom: Conventional estimate for *L* = 1, PL estimates for *L* = 1, **...**

Top: True field map and magnitude for brain example and mask, (SNR = 8.5dB) wrapped phase for α_{2} = 3 and α_{2} = 5 images. Center and Bottom: Conventional, Conventional convolved with a Gaussian filter, PL with 2 sets (*L* = 1), and PL with **...**

We used Δ_{1} = 2 ms for both cases. For the *L* = 2 case we also varied α_{2} to produce several Δ_{2} values. We used a uniform value of ${R}_{2}^{*}$ = 20 sec^{−1} in generating our simulations.

The field map was reconstructed using the penalized-likelihood method (11), using normalization as described in Section II-H, for both *L* = 1 and *L* = 2. The algorithm (13) was run at each SNR level for the *L* = 1 case and for the *L* = 2 case with varying values of α_{2}, using 5 realizations. We ran 300 iterations of the algorithm, using β = 2^{−3}.

We also applied the conventional estimator to our data. To reduce the noise, we convolved the conventional estimate with Gaussian filters of varying widths (σ = 0.0625, 0.1250, …, 3.125). We chose the “optimal” σ based on the minimum masked RMSE. Choosing the optimal σ using the true field map gives the conventional estimate an advantage in this example unavailable in practice.

The RMS error (in Hz) was computed between the “true” field map and the field map reconstructed using the PL method (11) and the conventional estimate. This RMSE was calculated in a masked region (pixels with magnitudes at least 20% of the maximum true magnitude).

Fig. 3 shows an example of the PL with *L* = 1 estimate compared to the PL with *L* = 2 estimate at α_{2} = 3 and α_{2} = 7 at an SNR of 10dB. Qualitatively, we can see improvements with increases in both *L* and α_{2}. Fig. 4 shows similar results for the brain example.

The largest errors in these field maps occur where the magnitude is smallest. The RMSE is much higher using only the conventional method. We also calculated the RMSE in the sinus region of the brain (the ROI is shown in Fig. 4). We chose this ROI because the low magnitude makes the field map difficult to estimate here, although the field inhomogeneity is also greatest here. The RMSE in this ROI was 61.1 Hz for the conventional estimate, 11.6 Hz for the Gaussian filtered estimate, 3.4 Hz for the *L* = 1 regularized estimate, 1.9 Hz for the *L* = 2, α_{2} = 3 regularized estimate, and 1.7 Hz for the *L* = 2, α_{2} = 5 regularized estimate. Overall, the filtered conventional estimate performed similarly to the PL method with *L* = 1 over the masked region, but had higher error in the ROI. The PL method with *L* = 2 showed decreased error in both the masked region and the ROI. We would expect even greater improvement over any practical Gaussian filtered estimate, because a suboptimal σ would be used. The proposed regularized estimators are more accurate in pixels with low magnitude. Adding additional scans (*L* > 1) makes the PL estimate even more accurate.

Fig. 5 shows the improvement (defined as the RMS error for the PL estimate with *L* = 1 divided by the error for the PL estimate with *L* = 2) gained by using an additional set of data for the Gaussian example. For comparison, we also plotted the predicted improvement, given by the square root of the ratio of the expressions (19) and (20). For some SNR values, the experimental gains are actually higher than the predicted improvements (shown by the dotted lines). Because this is a ratio of RMSEs and the amount of bias can vary between *L* = 1 and *L* = 2, the unbiased CRB provides a benchmark of expected ratios rather than an exact upper limit. Also, recall that (19) and (20) considered ${R}_{2}^{*}$ to be a known value when, in fact, ${R}_{2}^{*}$ is unknown and approximated through (9). The RMSE is low (in voxels with large magnitudes) at high SNRs using either *L* = 1 or *L* = 2. At lower SNRs, however, and in voxels with low magnitudes, using *L* = 2 and higher values of α_{2} greatly reduces RMS error. We repeated these simulations with ${R}_{2}^{*}$ = 0 (results not shown) and the empirical improvement almost exactly matched (18).

Improvement in the RMSE for the Gaussian example by using 3 data sets rather than 2 sets. Expected improvements shown by dotted lines.

Fig. 6 shows the improvement gained by using an additional set of data for the brain image. For a low SNR (for example 10 dB), the improvements are close to expected. The brain image has some areas where the magnitude is very low, making estimation using any method quite challenging. In addition, the field map phase itself is less smooth than in the Gaussian case, making the estimation more difficult. For a higher SNR (for example 20 dB), the 3-set case still outperforms the 2-set case substantially but by less than predicted by (18).

Improvement in the RMSE for the brain example by using 3 data sets rather than 2 sets. Expected improvements shown by dotted lines.

The RMSE comprises both bias and variance components:

$$\text{RMSE}(\mathrm{X})=\sqrt{\text{Var}\{X\}+{\text{bias}}^{2}(X)}.$$

Therefore, we analyzed the bias and the standard deviation at a single representative SNR = 20 dB and at α_{2} = 1, 2, …, 7, using 500 iterations and 100 realizations for each factor. Fig. 7 compares the standard deviation for each α_{2} relative to that at α_{2} = 1, and the empirical improvements were compared to those predicted by the CRB (20) for the Gaussian example. As expected, the improvements in variance are very close to those predicted. Here, the bias is also very low at all levels of SNR, explaining the improvement seen in RMSE in Fig. 5.

Bias and RMSE improvement for Gaussian example. Top: Space-averaged σ and absolute bias for several α_{2} values; Bottom: RMSE improvement, empirical and expected, over α_{2} = 1 for several α_{2} values.

Fig. 8 shows the bias and standard deviation for a single SNR = 20 dB for the brain example. The empirical variances were close to those expected. The bias, however, introduced in part by the regularization, was nearly constant (independent of α_{2}). So for large values of α_{2}, the bias begins to dominate the variance in RMSE calculations, explaining Fig. 6.

Bias and RMSE improvement for brain example. Top: Space-averaged σ and absolute bias for several α_{2} values; Bottom: RMSE improvement, empirical and expected, over α_{2} = 1 for several α_{2} values.

Overall, the variance reductions in both examples due to using three echo times were close to the results predicted by the CRB. For low values of α_{2} (*i.e*., five or less), the expected benefit using *L* > 1 holds even with a moderate value of ${R}_{2}^{*}$. The RMSE reductions are largest at lower SNRs. For phase estimation, the local SNR depends on the spin density of each voxel as seen in (17). Voxels with lower spin density effectively have lower SNR. It is precisely in these voxels where using 3 or more scans has the greatest benefit.

To illustrate how improved field map estimation leads to improved reconstructed images, we used field maps produced by the conventional method (2) and by the PL method with three scans (11) to correct real spiral MR data for field inhomogeneities. We imaged a phantom with large field inhomogeneity. We used a spiral-out trajectory with a TE of 30 ms, a TR of 2 sec, and a flip angle of 90 degrees. We took six slices spaced 5 cm apart over the 15 cm field of view. First, we collected data to create the field maps (using eight interleaves to minimize the effect of the field inhomogeneity) at the original TE of 30 ms, as well as at 32 ms (Δ_{1} = 2 ms), at 34 ms (Δ_{2} = 4 ms), and at 40 ms (Δ_{3} = 10 ms). We took ten realizations for each echo difference. We reconstructed the resulting 64 × 64 pixel images iteratively in a masked region using [31]. Then, we used these images to create (for each slice) a conventional field map (2), a conventional field map blurred with a Gaussian filter, a PL field map with *L* = 1, a PL field map with *L* = 2 and α_{2} = 2, a PL field map with *L* = 2 and α_{2} = 5, and a PL field map with *L* = 3 (11). We used β = 2^{−6} for the regularized iterative algorithm and σ = 0.5 for the Gaussian filter approach, approximately matching the FWHM of the two approaches. Finally, we collected one-shot spiral-out data with TE = 30 ms. This scan is thus much more affected by field inhomogeneity. We collected two realizations and then averaged them in k-space. We first reconstructed this data iteratively without a field map as in [31]. Uncorrected field inhomogeneity causes a blurred image for spiral trajectories. Finally, we iteratively reconstructed this one-shot data with each of the field maps previously created, as in [2].

Fig. 9 shows one representative slice. The regularized field maps are less noisy than the conventional one, especially in areas of low magnitude and along the edges. Fig. 9 also illustrates the blur and distortion in the one-shot image reconstructed without a field map. The images reconstructed with a field map do not have this blur. Nevertheless, a noisy field map can cause errors in the reconstructed image. For example, in Fig. 9, the image reconstructed with the conventional field map shows more artifacts than the eight-shot data or either of the images reconstructed with regularized field maps. Using the eight-shot data as “truth”, we computed the NRMSE of each image; Table I shows the mean and variance over the ten realizations. We include data from two representative slices to show a range of values, although slice three is not shown in the figures. In addition, we calculated the NRMSE of the one-shot reconstructed images over pixels whose magnitude is less than 0.2 times the maximum pixel value of the eight-shot reconstructed image, to see whether the regularized field maps reduce errors in low-magnitude areas of the image. This is also reported in Table I. We use the norm of the eight-shot 30 ms image for normalization. The regularized iterative PL methods have lower NRMSE and much less variability than the other methods. Therefore, these regularized methods (especially with more than one echo time difference) give a very reliable estimate of the field map with little variability.
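The NRMSE computation described above can be sketched as follows. This is a minimal illustration with toy arrays, not the paper's code; the function name and test images are hypothetical, while the 0.2 threshold for the low-magnitude mask matches the text.

```python
import numpy as np

def nrmse(x, ref, mask=None):
    """NRMSE of image x against reference ref, normalized by the norm
    of ref (here playing the role of the eight-shot 30 ms image).
    An optional boolean mask restricts the error to a subset of
    pixels, e.g. the low-magnitude region."""
    if mask is None:
        mask = np.ones(ref.shape, dtype=bool)
    return np.linalg.norm((x - ref)[mask]) / np.linalg.norm(ref)

# Toy 2x2 "images". Low-magnitude region: pixels below 0.2 of the
# reference maximum, as used for the second set of values in Table I.
ref = np.array([[1.0, 0.10], [0.05, 0.80]])
x = ref + 0.01                       # uniformly perturbed test image
low_mask = ref < 0.2 * ref.max()
e_all = nrmse(x, ref)                # NRMSE over the whole image
e_low = nrmse(x, ref, low_mask)      # NRMSE over low-magnitude pixels only
```

Because the masked error is still normalized by the full reference norm, the low-magnitude NRMSE is directly comparable to the whole-image value.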

Fig. 9. First slice - Top: reconstructed 8-shot image, conventional field map, Gaussian-filtered field map, regularized field map *L* = 1, regularized field map *L* = 2, α_{2} = 2, regularized field map *L* = 2, α_{2} = 5, regularized field map *L* = 3. The field maps **...**

We described a regularized method for field map estimation from two or more scans: the penalized-likelihood method (11). This method yields field maps that interpolate smoothly over regions with low spin density, thereby avoiding the phase outliers that plague the conventional estimate (2). The method has been used with *L* = 1 (without a full description) in [3], [32], [33].

Our analysis also shows that the conventional estimate (2) is in fact the ML estimate, a property that has previously gone unnoticed to our knowledge.
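For concreteness, the conventional two-scan estimate (2), i.e., the phase difference divided by the echo time difference, can be sketched as follows. The array names and the toy 10 Hz off-resonance value are illustrative, not from the paper.

```python
import numpy as np

def conventional_fieldmap(y0, y1, delta1):
    """Conventional two-scan field map estimate: the phase difference
    between the two reconstructed images, divided by the echo time
    difference delta1. Returns off-resonance in rad/s; noisy wherever
    the spin density (|y0|, |y1|) is low."""
    return np.angle(y1 * np.conj(y0)) / delta1

# Toy check: a uniform 10 Hz off-resonance is recovered exactly
# in the noiseless case (no phase wrapping since |omega*delta1| < pi).
omega_true = 2 * np.pi * 10.0           # rad/s
delta1 = 2e-3                           # 2 ms echo time difference
y0 = np.ones((4, 4), dtype=complex)     # image at the first echo time
y1 = y0 * np.exp(1j * omega_true * delta1)
omega_hat = conventional_fieldmap(y0, y1, delta1)
```

With noise added to `y0` and `y1`, the estimate degrades most in low-magnitude voxels, which is exactly where the regularized method helps.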

We also analyzed the spatial resolution properties of this method, leading to a practical procedure for choosing the regularization parameter to achieve a given desired spatial resolution.

We studied the CRB on the variance of the estimate for this method and found that our empirical simulation results for the PL method compared favorably, showing a reduction in the RMSE in comparison to using only two scans.

We collected real MR phantom data and created conventional and PL estimates of the field map which were used to reconstruct final images. The PL estimate reduces image artifacts caused by the field inhomogeneity and has a reduced RMSE, especially in areas of very low magnitude where the conventional estimate has many errors. Omitting or using a poor field map estimate for image reconstruction can dramatically affect the final image quality.

As noted in Section II-D, our cost function assumes, as do most other field map estimation methods, that there is no motion between scans. While our analysis indicated that a larger *L* is better in terms of variance, motion could become a problem during the longer acquisition required for *L* echo time differences. In practice, *L* = 1 or *L* = 2 are the most likely choices, and for these motion is less likely to be an issue. If a larger number of echo time differences is desired, then the cost function could be further generalized to jointly estimate the field map and rigid motion parameters.

We have focused here on the case of a single receive coil. It is straightforward to generalize the method for phased array coils, *cf*. [34].

Although we did not estimate ${R}_{2}^{*}$, we used a simple weighting (9) in our algorithm to partially account for ${R}_{2}^{*}$ decay; the improvements over estimation with two scans remain large, especially when using a small value of α_{2}.

While this method assumed that the first two echo time differences were close enough to prevent phase wrapping, it could, with proper initialization, extend to data with larger echo time differences and some phase wrapping. This is especially interesting at higher field strengths, where wrapping occurs even at small echo time differences.

Overall, this method has potential to be a reliable estimator for MR field maps, able to utilize many scans to produce a good estimate. The general penalized-likelihood approach in this work is also applicable to estimating other parametric maps in MRI, such as relaxation maps [35] and sensitivity maps [36]. It may also be useful for phase unwrapping problems with noisy data. In some cases, it may be preferable to use edge-preserving regularization in (12), such as the Huber potential function [37].

Ultimately, this method is a tool that may help answer the main question of field mapping: how to best allocate scan time to achieve the most accurate field map. The preliminary CRB analysis guides choice of echo times given a set number of scans. In future work, we wish to further explore the relationship between number of echoes, signal to noise ratio, and spatial resolution.

Supported in part by NIH grants EB002683 and DA15410 and NSF-GRFP.

To minimize the cost function (11) developed in this paper, we need a method that decreases it monotonically. The simple minimization algorithm shown in (13) is guaranteed to decrease Ψ(*ω*) monotonically; the proof of monotonicity uses the fact that the second derivative of 1 − cos(*t*) is bounded above by one.

However, the simple minimization algorithm shown in (13) is only one possible option for minimizing the cost function (11). In our implementation, we used an optimization transfer approach to refine the iterative algorithm [22], [38]. First, express (11) as

$$\Psi(\boldsymbol{\omega}) \,\triangleq\, \sum_{j=1}^{N} \sum_{m=0}^{L} \sum_{l=0}^{L} \phi_{jml}(\omega_j) + \beta R(\boldsymbol{\omega}),$$

(22)

where we define

$$\phi_{jml}(\omega) \,\triangleq\, |y_j^m y_j^l|\, w_j^{m,l}\, \phi\!\left(\omega(\Delta_l - \Delta_m) + \angle y_j^m - \angle y_j^l\right)$$

with

$$\phi(t) \,\triangleq\, 1 - \cos(t).$$

To minimize this cost function, we adopt an optimization transfer approach, for which we need a surrogate function for φ. In particular, we use the following parabola surrogate for φ:

$$\phi(t) \le q(t; s) \,\triangleq\, \phi(s) + \dot{\phi}(s)(t - s) + \frac{1}{2}\kappa_{\phi}(\{s\}_{2\pi})(t - s)^2,$$

where {*s*}_{2π} denotes the principal value of *s*. Huber stated that parabola surrogate functions (which he called comparison functions) exist for any φ that satisfies Huber’s conditions [39, p. 184-5]: the function must be differentiable, symmetric, and have a curvature κ_{φ}(*s*) that is bounded and monotone non-increasing for *s* > 0. For |*s*| ≤ π, the φ shown above satisfies Huber’s conditions. We note that

$$\dot{\phi}(s) = \sin(s)$$

and

$$\kappa_{\phi}(s) \,\triangleq\, \frac{\dot{\phi}(s)}{s} = \frac{\sin(s)}{s}.$$

Substituting this curvature κ_{φ}(*s*) into the expression for φ_{*jml*}(ω) gives the following curvature for its parabola surrogate:

$$\kappa_{\phi,jml}(s) \,\triangleq\, \frac{\dot{\phi}_{jml}(s)}{s} = |y_j^m y_j^l|\, w_j^{m,l}\, (\Delta_l - \Delta_m)^2\, \frac{\sin(s)}{s},$$

which is bounded as *s* → 0 and decreasing as |*s*| increases. For values of |*s*| > π, we exploit the periodicity of φ and find an integer *k* such that |*s* − 2π*k*| ≤ π, *i.e*., the principal value of the phase *s*. Fig. 10 shows φ and its parabola surrogates for several values of *s*. When *s* is an even multiple of π, the curvature κ_{φ} equals the maximum curvature of φ. When *s* is an odd multiple of π, the curvature κ_{φ} is zero and φ̇ is also zero, so the surrogate function is a constant.
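As a numeric sanity check (not from the paper), the majorization φ(t) ≤ q(t; s) with curvature sin({s}_{2π})/{s}_{2π} can be verified on a grid; the function names are illustrative.

```python
import numpy as np

def wrap(s):
    """Principal value {s}_2pi in [-pi, pi)."""
    return (s + np.pi) % (2 * np.pi) - np.pi

def phi(t):
    """phi(t) = 1 - cos(t)."""
    return 1.0 - np.cos(t)

def q(t, s):
    """Huber-type parabola surrogate for phi at expansion point s:
    it matches phi and its derivative at s, with curvature
    sin({s}_2pi)/{s}_2pi evaluated at the wrapped phase."""
    sw = wrap(s)
    kappa = np.sinc(sw / np.pi)     # sin(sw)/sw, with kappa(0) = 1
    return phi(s) + np.sin(s) * (t - s) + 0.5 * kappa * (t - s) ** 2

# Check q(t; s) >= phi(t) over several periods, for expansion points
# inside and outside [-pi, pi] (periodicity handles |s| > pi).
t = np.linspace(-3 * np.pi, 3 * np.pi, 2001)
ok = all(np.all(q(t, s) >= phi(t) - 1e-9)
         for s in [0.0, 0.5, 1.0, 2.5, -2.0, np.pi / 2, 4.0])
```

Note that at s = 0 the surrogate reduces to t²/2, the maximum-curvature parabola, consistent with the remark above about even multiples of π.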

Aggregating such surrogates leads to the following surrogate function for the cost function Ψ(* ω*):

$$\phi^{(n)}(\boldsymbol{\omega}) \,\triangleq\, \sum_{j=1}^{N} \sum_{m=0}^{L} \sum_{l=0}^{L} q_{jml}^{(n)}(\omega_j) + \beta R(\boldsymbol{\omega}),$$

where

$$q_{jml}^{(n)}(\omega) \,\triangleq\, \phi_{jml}(\omega_j^{(n)}) + \dot{\phi}_{jml}(\omega_j^{(n)})(\omega - \omega_j^{(n)}) + \frac{1}{2}\kappa_{\phi,jml}(s_{jml}^{(n)})(\omega - \omega_j^{(n)})^2,$$

and where

$$s_{jml}^{(n)} \,\triangleq\, \left(\omega_j^{(n)}(\Delta_l - \Delta_m) + \angle y_j^m - \angle y_j^l\right) \bmod 2\pi \;\in [-\pi, \pi].$$

If the roughness penalty R(*ω*) is a quadratic function, which is the natural choice for smooth phase maps, then the surrogate φ^{(n)} is itself a quadratic function of *ω* and is easily minimized.

In our implementation, we used a separable quadratic surrogate algorithm to minimize this cost function [40]. Then, the following iteration, similar to that of (13), is guaranteed to decrease Ψ(* ω*) monotonically:

$$\boldsymbol{\omega}^{(n+1)} = \boldsymbol{\omega}^{(n)} - \operatorname{diag}\!\left\{\frac{1}{\tilde{d}_j^{(n)} + \beta c}\right\} \nabla \Psi(\boldsymbol{\omega}^{(n)}),$$

(23)

where *c* was defined in (14) and where

$$\tilde{d}_j^{(n)} = \sum_{m=0}^{L} \sum_{l=0}^{L} \kappa_{\phi,jml}(s_{jml}^{(n)}).$$

The advantage of (23) over (13) is that ${\tilde{d}}_{j}^{(n)}\le {d}_{j}$ in (15), so (23) converges faster [41].
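A toy sketch of iteration (23) for the two-scan case (*L* = 1) with a first-difference quadratic roughness penalty illustrates the guaranteed monotone decrease of Ψ. All sizes, noise levels, and the value of β here are illustrative choices for the demonstration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D problem: N voxels, two scans (L = 1), echo time difference d1 (s).
# c = 4 bounds the curvature of the first-difference penalty (||D'D|| <= 4).
N, d1, beta, c = 64, 2e-3, 1e-6, 4.0
omega_true = 2 * np.pi * 8 * np.exp(-np.linspace(-2, 2, N) ** 2)  # smooth truth (rad/s)
rho = 0.2 + rng.random(N)                                         # spin density (sets local SNR)
noise = lambda: 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y0 = rho + noise()                                 # image at first echo time
y1 = rho * np.exp(1j * omega_true * d1) + noise()  # image at second echo time

w = np.abs(y0 * y1)                 # data-fit weights |y_j^0 y_j^1|
ang = np.angle(y0) - np.angle(y1)   # phase offset inside the cosine
D = np.diff(np.eye(N), axis=0)      # first-difference roughness operator

def cost(om):
    """Psi(omega): weighted (1 - cos) data fit plus quadratic roughness."""
    return np.sum(w * (1 - np.cos(om * d1 + ang))) + 0.5 * beta * np.sum((D @ om) ** 2)

def grad(om):
    return w * d1 * np.sin(om * d1 + ang) + beta * (D.T @ (D @ om))

def wrap(s):
    """Principal value {s}_2pi in [-pi, pi)."""
    return (s + np.pi) % (2 * np.pi) - np.pi

om = np.angle(y1 * np.conj(y0)) / d1   # conventional estimate as initializer
costs = [cost(om)]
for _ in range(50):
    s = wrap(om * d1 + ang)
    dtil = w * d1 ** 2 * np.sinc(s / np.pi)   # surrogate curvatures sin(s)/s
    om = om - grad(om) / (dtil + beta * c)    # diagonally scaled update (23)
    costs.append(cost(om))

monotone = all(b <= a + 1e-8 for a, b in zip(costs, costs[1:]))
```

The per-voxel curvatures `dtil` adapt to the local phase, so the step sizes are larger than the worst-case constant curvature of (13) would allow, which is exactly why (23) converges faster.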

^{1}Independence in image space is an approximation. In k-space, the data points are independently distributed, but reconstruction may introduce correlations, especially in scans with non-Cartesian k-space trajectories.

^{2}There may be discontinuities at air/water boundaries. Even in this case, sharp boundaries can be problematic if there is motion between scans, further motivating the use of regularization.

1. Sekihara K, Matsui S, Kohno H. NMR imaging for magnets with large nonuniformities. IEEE Trans. Med. Imag. 1985 Dec;vol. 4(no. 4):193–199. [PubMed]

2. Sutton BP, Noll DC, Fessler JA. Fast, iterative image reconstruction for MRI in the presence of field inhomogeneities. IEEE Trans. Med. Imag. 2003 Feb;vol. 22(no. 2):178–188. [PubMed]

3. Noll DC, Fessler JA, Sutton BP. Conjugate phase MRI reconstruction with spatially variant sample density correction. IEEE Trans. Med. Imag. 2005 Mar;vol. 24(no. 3):325–336. [PubMed]

4. Twieg DB. Parsing local signal evolution directly from a single-shot MRI signal: A new approach for fMRI. Mag. Res. Med. 2003 Nov;vol. 50(no. 5):1043–1052. [PubMed]

5. Barnet C, Tsao J, Pruessmann KP. Efficient iterative reconstruction for MRI in strongly inhomogeneous *B*_{0}. Proc. Intl. Soc. Mag. Res. Med. 2004:347.

6. Jezzard P, Balaban RS. Correction for geometric distortion in echo planar images from *B*_{0} field variations. Mag. Res. Med. 1995 Jul;vol. 34(no. 1):65–73. [PubMed]

7. Nayak KS, Nishimura DG. Automatic field map generation and off-resonance correction for projection reconstruction imaging. Mag. Res. Med. 2000 Jan;vol. 43(no. 1):151–154. [PubMed]

8. Cusack R, Brett M, Osswald K. An evaluation of the use of magnetic field-maps to undistort echo-planar images. NeuroImage. 2003 Jan;vol. 18(no. 1):127–142. [PubMed]

9. Glover G. Multipoint Dixon technique for water and fat proton and susceptibility imaging. J. Mag. Res. Im. 1991 Sep;vol. 1(no. 5):521–530. [PubMed]

10. Nayak KS, Tsai C-M, Meyer CH, Nishimura DG. Efficient off-resonance correction for spiral imaging. Mag. Res. Med. 2001 Mar;vol. 45(no. 3):521–524. [PubMed]

11. Chen N, Wyrwicz AM. Correction for EPI distortions using multi-echo gradient-echo imaging. Mag. Res. Med. 1999 Jun;vol. 41(no. 6):1206–1213. [PubMed]

12. Priest AN, Vita ED, Thomas DL, Ordidge RJ. EPI distortion correction from a simultaneously acquired distortion map using TRAIL. J. Mag. Res. Im. 2006 Apr;vol. 23(no. 4):597–603. [PubMed]

13. Windischberger C, Robinson S, Rauscher A, Barth M, Moser E. Robust field map generation using a triple-echo acquisition. J. Mag. Res. Im. 2004 Oct;vol. 20(no. 4):730–734. [PubMed]

14. Reber PJ, Wong EC, Buxton RB, Frank LR. Correction of off resonance-related distortion in echo-planar imaging using EPI-based field maps. Mag. Res. Med. 1998 Feb;vol. 39(no. 2):328–330. [PubMed]

15. Aksit P, Derbyshire JA, Prince JL. Three-point method for fast and robust field mapping for EPI geometric distortion correction. Proc. IEEE Intl. Symp. Biomed. Imag. 2007:141–144.

16. Schneider E, Glover G. Rapid in vivo proton shimming. Mag. Res. Med. 1991 Apr;vol. 18(no. 2):335–347. [PubMed]

17. Irarrazabal P, Meyer CH, Nishimura DG, Macovski A. Inhomogeneity correction using an estimated linear field map. Mag. Res. Med. 1996 Feb;vol. 35(no. 2):278–282. [PubMed]

18. McVeigh ER, Henkelman RM, Bronskill MJ. Noise and filtration in magnetic resonance imaging. Med. Phys. 1985 Sep;vol. 12(no. 5):586–591. [PMC free article] [PubMed]

19. Bernstein MA, Grgic M, Brosnan TJ, Pelc NJ. Reconstructions of phase contrast, phased array multicoil data. Mag. Res. Med. 1994 Sep;vol. 32(no. 3):330–334. [PubMed]

20. Leitao JMN, Figueiredo MAT. Absolute phase image reconstruction: a stochastic nonlinear filtering approach. IEEE Trans. Im. Proc. 1998 Jun;vol. 7(no. 6):868–882. [PubMed]

21. Jacobson MW, Fessler JA. An expanded theoretical treatment of iteration-dependent majorize-minimize algorithms. IEEE Trans. Im. Proc. 2007 Oct;vol. 16(no. 10):2411–2422. [PMC free article] [PubMed]

22. Böhning D, Lindsay BG. Monotonicity of quadratic approximation algorithms. Ann. Inst. Stat. Math. 1988 Dec;vol. 40(no. 4):641–663.

23. Lange K, Hunter DR, Yang I. Optimization transfer using surrogate objective functions. J. Computational and Graphical Stat. 2000 Mar;vol. 9(no. 1):1–20.

24. Fessler JA, Yeo D, Noll DC. Regularized fieldmap estimation in MRI. Proc. IEEE Intl. Symp. Biomed. Imag. 2006:706–709.

25. Fessler JA, Rogers WL. Spatial resolution properties of penalized-likelihood image reconstruction methods: Space-invariant tomographs. IEEE Trans. Im. Proc. 1996 Sep;vol. 5(no. 9):1346–1358. [PubMed]

26. Unser M, Aldroubi A, Eden M. Recursive regularization filters: design, properties, and applications. IEEE Trans. Patt. Anal. Mach. Int. 1991 Mar;vol. 13(no. 3):272–277.

27. Collins DL, Zijdenbos AP, Kollokian V, Sled JG, Kabani NJ, Holmes CJ, Evans AC. Design and construction of a realistic digital brain phantom. IEEE Trans. Med. Imag. 1998 Jun;vol. 17(no. 3):463–468. [PubMed]

28. Kwan RK-S, Evans AC, Pike GB. MRI simulation-based evaluation of image-processing and classification methods. IEEE Trans. Med. Imag. 1999 Nov;vol. 18(no. 11):1085–1097. [PubMed]

29. Haacke EM, Brown RW, Thompson MR, Venkatesan R. Magnetic resonance imaging: Physical principles and sequence design. New York: Wiley; 1999.

30. Schenck JF. The role of magnetic susceptibility in magnetic resonance imaging: MRI magnetic compatibility of the first and second kinds. Med. Phys. 1996 Jun;vol. 23(no. 6):815–850. [PubMed]

31. Fessler JA, Sutton BP. Nonuniform fast Fourier transforms using min-max interpolation. IEEE Trans. Sig. Proc. 2003 Feb;vol. 51(no. 2):560–574.

32. Yip C, Fessler JA, Noll DC. Iterative RF pulse design for multidimensional, small-tip-angle selective excitation. Mag. Res. Med. 2005 Oct;vol. 54(no. 4):908–917. [PubMed]

33. Sutton BP, Noll DC, Fessler JA. Dynamic field map estimation using a spiral-in / spiral-out acquisition. Mag. Res. Med. 2004 Jun;vol. 51(no. 6):1194–1204. [PubMed]

34. Lu K, Liu TT, Bydder M. Optimal phase difference reconstruction: comparison of two methods. Mag. Res. Im. 2007 [PubMed]

35. Garnier SJ, Bilbro GL, Gault JW, Snyder WE. Magnetic resonance image restoration. J. Math. Im. Vision. 1995 Feb;vol. 5(no. 1):7–19.

36. Ying L, Sheng J, Liu B. Joint estimation of image and coil sensitivities in parallel MRI. Proc. IEEE Intl. Symp. Biomed. Imag. 2006:17–20.

37. Yu DF, Fessler JA. Edge-preserving tomographic reconstruction with nonlocal regularization. IEEE Trans. Med. Imag. 2002 Feb;vol. 21(no. 2):159–173. [PubMed]

38. Jacobson MW, Fessler JA. Comm. and Sign. Proc. Lab. Ann Arbor, MI: Dept. of EECS, Univ. of Michigan; 2004. Nov, Properties of MM algorithms on convex feasible sets: extended version. 48109-2122, Tech. Rep. 353.

39. Huber PJ. Robust statistics. New York: Wiley; 1981.

40. Erdoğan H, Fessler JA. Ordered subsets algorithms for transmission tomography. Phys. Med. Biol. 1999 Nov;vol. 44(no. 11):2835–2851. [PubMed]

41. Fessler JA, Clinthorne NH, Rogers WL. On complete data spaces for PET reconstruction algorithms. IEEE Trans. Nuc. Sci. 1993 Aug;vol. 40(no. 4):1055–1061.
