
IEEE Trans Med Imaging. Author manuscript; available in PMC 2010 December 2.


PMCID: PMC2996098

NIHMSID: NIHMS250370

Chunlei Liu, Brain Imaging and Analysis Center, School of Medicine, Duke University, Durham, NC 27705 USA;

Chunlei Liu: chunlei.liu@duke.edu; Jian Zhang: jianz@stanford.edu; Michael E. Moseley: moseley@stanford.edu

The publisher's final edited version of this article is available at IEEE Trans Med Imaging

Image acquisition of magnetic resonance imaging (MRI) can be accelerated by using multiple receiving coils simultaneously. The problem of reconstructing an unaliased image from partially sampled k-space data can be formulated as a large system of sparse linear equations. The k-space sparse matrix (kSPA) algorithm proposes to solve the system of equations by finding a sparse approximate inverse. This algorithm has been shown to accelerate the image reconstruction for a large number of coils. The original kSPA algorithm requires knowledge of coil sensitivities. Here, we propose and demonstrate an auto-calibrated kSPA algorithm that does not require the explicit computation of the coil sensitivity maps. We have also shown that calibration data, in principle, can be acquired at any region of k-space. This property applies to arbitrary sampling trajectories and all reconstruction algorithms based on k-space. In practice, because of its higher SNR, calibration data acquired at the center of k-space performed more favorably. Such auto-calibration can be advantageous in cases where an accurate sensitivity map is difficult to obtain.

By utilizing the spatial variation of the receiving coil’s sensitivity, parallel imaging has become a very useful technique for improving the acquisition speed of magnetic resonance imaging (MRI) [1]–[5]. This fast imaging technique has found great utility in a variety of imaging applications [6]–[13].

The main rationale of parallel MRI is to replace part of the time-consuming phase-encoding process with the spatial encoding provided by the receiving coils. As a consequence, image reconstruction deviates from performing a fast Fourier transform to solving a large system of linear equations. In Fourier transform based MRI, the phase-encoding coefficients can be controlled by varying the magnetic field gradients. Although the gradients may not be perfect, they can be predetermined fairly accurately with the continuing advance of gradient coil design. In parallel imaging, however, the spatial encoding coefficient, that is, the coil's sensitivity, cannot be controlled by the scanner operator and is typically unknown. The image quality of parallel imaging depends, to a large degree, on the accuracy of the coil sensitivity. Therefore, it is crucial to determine coil sensitivities as accurately as possible.

In general, there are two approaches to determine and utilize the coil sensitivity information. One approach is to measure sensitivity maps directly and use them to form a system of linear equations, such as the approach adopted in sensitivity encoding (SENSE) [3], [4]. The other approach is to estimate the reconstruction weights from pre-measured calibration data without determining the sensitivity maps explicitly, such as the approach proposed in auto-calibrated simultaneous acquisition of spatial harmonics (AUTO-SMASH) [14] and the approach adopted in generalized autocalibrating partially parallel acquisitions (GRAPPA) [5]. The general idea of GRAPPA is that sensitivity information is inherently captured by the calibration data; reconstruction weights can therefore be trained on these data. GRAPPA was originally introduced for Cartesian sampling trajectories; however, recent work has extended the technique to non-Cartesian trajectories [15]–[17].

For the purpose of fast reconstruction of time-series images and massive parallel imaging using a large number of coils, Liu *et al*. recently proposed a method for parallel imaging reconstruction by inverting a **k**-space sparse matrix (kSPA) [18]. In the kSPA algorithm, the image reconstruction problem is formulated as a system of sparse linear equations in **k**-space, and the system of equations is solved by computing a sparse approximate inverse [19], [20]. This algorithm shares some characteristics of both SENSE and GRAPPA. Similar to SENSE, kSPA requires an explicit estimation of the coil sensitivity. On the other hand, kSPA reconstructs an image in **k**-space, similar to GRAPPA. In contrast to GRAPPA, however, kSPA reconstructs a single image rather than one image per coil.

In this paper, we propose a method of performing kSPA reconstruction without estimating the coil sensitivity. Rather than calculating the sparse reconstruction matrix based on the coil sensitivity, the reconstruction matrix is computed directly from calibration data. In addition, we show mathematically that the calibration data can be situated in any location of **k**-space and is not restricted to the center of **k**-space. The proposed algorithm demonstrates excellent image quality for both simulated and *in vivo* data. Although the reconstruction speed is generally slower than the original kSPA algorithm, the proposed kSPA variation can be particularly advantageous in cases where an accurate sensitivity map is difficult to obtain.

To aid the presentation, a table is provided that summarizes and defines the notations used (Table I).

Following the Nyquist theorem, the data acquired by the *n*th channel of a multichannel coil at an arbitrary **k**-space location $\kappa_{\mu}$ can be written as [18]

$$\begin{array}{l}{d}_{n}({\mathit{\kappa}}_{\mu})=(m\ast {s}_{n}\ast c)({\mathit{\kappa}}_{\mu})\\ =\sum _{\rho =1}^{{N}^{2}}\left(\sum _{{\rho}^{\prime}=1}^{{N}^{2}}m({\mathbf{k}}_{{\rho}^{\prime}}){s}_{n}({\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}})\right)c({\mathit{\kappa}}_{\mu}-{\mathbf{k}}_{\rho}).\end{array}$$

(1)

To differentiate a Cartesian grid from an arbitrary sampling location, the Latin letter **k** is used to indicate a Cartesian grid point while the Greek letter *κ* is used to indicate an arbitrary location. Here, $m(\mathbf{k})$ denotes the Fourier transform of the image to be reconstructed, $s_n(\mathbf{k})$ the Fourier transform of the $n$th coil's sensitivity, $c(\mathbf{k})$ the gridding (interpolation) kernel, and $N^2$ the number of Cartesian grid points.

Equation (1) can be simplified to

$${d}_{n}({\mathit{\kappa}}_{\mu})=(m\ast {\stackrel{\sim}{s}}_{n})({\mathit{\kappa}}_{\mu}).$$

(2)

Here, $\tilde{s}_n(\kappa_{\mu})$ is the effective sensitivity obtained by convolving the sensitivity spectrum with the gridding kernel

$${\stackrel{\sim}{s}}_{n}({\mathit{\kappa}}_{\mu})=\sum _{\rho =1}^{{N}^{2}}{s}_{n}({\mathit{\kappa}}_{\mu}-{\mathbf{k}}_{\rho})c({\mathbf{k}}_{\rho}).$$

(3)

Notice that both $s_n(\mathbf{k})$ and $\tilde{s}_n(\mathbf{k})$ have compact support in **k**-space because the coil sensitivity varies smoothly in space. Collecting (2) over all sampling locations and all coils, the acquisition can be written in matrix form as

$$\mathbf{d}=\mathbf{Gm}.$$

(4)

Here, **d** is a column vector stacked with the **k**-space data acquired by all coils; **m** is a column vector stacked with the **k**-space values to be estimated; **G** is the coefficient matrix whose entries are formed by the corresponding values of $\tilde{s}_n(\kappa_{\mu}-\mathbf{k}_{\rho})$. Specifically,

$$\mathbf{d}={\left[\begin{array}{llllllll}{d}_{1}({\mathit{\kappa}}_{1})\hfill & \dots \hfill & {d}_{1}({\mathit{\kappa}}_{\mu})\hfill & \dots \hfill & {d}_{1}({\mathit{\kappa}}_{{n}_{k}})\hfill & {d}_{2}({\mathit{\kappa}}_{1})\hfill & \dots \hfill & {d}_{{n}_{c}}({\mathit{\kappa}}_{{n}_{k}})\hfill \end{array}\right]}^{\mathbf{T}}$$

(5)

$$\mathbf{m}={[\begin{array}{ll}m({\mathbf{k}}_{1})\hfill & m({\mathbf{k}}_{2})\dots m({\mathbf{k}}_{{N}^{2}})\hfill \end{array}]}^{\mathbf{T}}$$

(6)

$$\mathbf{G}=\left[\begin{array}{cccc}{\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{1}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{1}-{\mathbf{k}}_{2})& \dots & {\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{1}-{\mathbf{k}}_{{N}^{2}})\\ \vdots & \vdots & \vdots & \vdots \\ {\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{\mu}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{\mu}-{\mathbf{k}}_{2})& \dots & {\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{\mu}-{\mathbf{k}}_{{N}^{2}})\\ \vdots & \vdots & \vdots & \vdots \\ {\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{{n}_{k}}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{{n}_{k}}-{\mathbf{k}}_{2})& \dots & {\stackrel{\sim}{s}}_{1}({\mathit{\kappa}}_{{n}_{k}}-{\mathbf{k}}_{{N}^{2}})\\ {\stackrel{\sim}{s}}_{2}({\mathit{\kappa}}_{1}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{2}({\mathit{\kappa}}_{1}-{\mathbf{k}}_{2})& \dots & {\stackrel{\sim}{s}}_{2}({\mathit{\kappa}}_{1}-{\mathbf{k}}_{{N}^{2}})\\ \vdots & \vdots & \vdots & \vdots \\ {\stackrel{\sim}{s}}_{{n}_{c}}({\mathit{\kappa}}_{{n}_{k}}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{{n}_{c}}({\mathit{\kappa}}_{{n}_{k}}-{\mathbf{k}}_{2})& \dots & {\stackrel{\sim}{s}}_{{n}_{c}}({\mathit{\kappa}}_{{n}_{k}}-{\mathbf{k}}_{{N}^{2}})\end{array}\right].$$

(7)

In kSPA image reconstruction, an image is reconstructed from undersampled **k**-space data by computing the pseudo inverse operator

$${\mathbf{G}}^{\u2020}={({\mathbf{G}}^{\mathbf{H}}\mathbf{G})}^{-\mathbf{1}}{\mathbf{G}}^{\mathbf{H}}.$$

(8)

For typical MRI exams, **G** is prohibitively large and its inverse cannot be practically computed. To realistically compute the inverse operator, the kSPA algorithm makes two basic approximations: the approximation of **G** as a sparse matrix, and the approximation of $(\mathbf{G}^{\mathbf{H}}\mathbf{G})^{-1}$, denoted as $\mathbf{M}^{-1}$, as a sparse matrix.

The first approximation is valid because coil sensitivity contains only low spatial frequency components. Let *w _{s}* be the cutoff bandwidth of the sensitivity, then

$$\tilde{s}_{n}(\kappa_{\mu}-\mathbf{k}_{\rho})=0,\quad\text{if}\ \|\kappa_{\mu}-\mathbf{k}_{\rho}\|>w_{s}.$$

(9)

An empirical choice of $w_s$ is such that, beyond the cutoff frequency, the spectral energy of the sensitivity is negligible. The second approximation concerns $\mathbf{M}=\mathbf{G}^{\mathbf{H}}\mathbf{G}$, whose entries are

$$M_{\rho\rho^{\prime}}=\sum_{n}\sum_{\mu}\tilde{s}_{n}^{\ast}(\mathbf{k}_{\rho}-\kappa_{\mu})\,\tilde{s}_{n}(\kappa_{\mu}-\mathbf{k}_{\rho^{\prime}}),\quad\text{for}\ \{\kappa_{\mu}:\|\mathbf{k}_{\rho}-\kappa_{\mu}\|\le w_{s},\ \|\kappa_{\mu}-\mathbf{k}_{\rho^{\prime}}\|\le w_{s}\}.$$

(10)

Note that, since $M_{\rho\rho^{\prime}}$ is nonzero only if there exists at least one sampling location $\kappa_{\mu}$ within a distance $w_s$ of both $\mathbf{k}_{\rho}$ and $\mathbf{k}_{\rho^{\prime}}$, the matrix $\mathbf{M}$ is itself sparse: its entries vanish whenever $\|\mathbf{k}_{\rho}-\mathbf{k}_{\rho^{\prime}}\|>2w_s$.
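This double-banded structure can be illustrated with a small numerical sketch (a 1-D toy example with hypothetical dimensions, not the paper's implementation): when every sample couples only to grid points within $w_s$, as in (9), the product $\mathbf{G}^{\mathbf{H}}\mathbf{G}$ is zero outside a band of width $2w_s$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, ws = 40, 3  # toy grid size and sensitivity cutoff (illustrative values)

# Banded "G": sample mu couples only to grid points within ws of mu,
# mimicking the support condition of Eq. (9).
G = np.zeros((N, N), dtype=complex)
for mu in range(N):
    for rho in range(max(0, mu - ws), min(N, mu + ws + 1)):
        G[mu, rho] = rng.standard_normal() + 1j * rng.standard_normal()

M = G.conj().T @ G  # entries M_{rho rho'} as in Eq. (10)

# Entries of M vanish whenever |rho - rho'| > 2*ws.
idx = np.arange(N)
outside = np.abs(idx[:, None] - idx[None, :]) > 2 * ws
print(np.max(np.abs(M[outside])))  # 0.0
```

Every term of $M_{\rho\rho'}$ contains a factor that is zero when no sample lies within $w_s$ of both grid points, so the off-band entries are exactly zero, not merely small.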

Assuming the inverse matrix $\mathbf{M}^{\dagger}$ can be approximated as a sparse matrix [19], [20], we can find each row of $\mathbf{M}^{\dagger}$ by solving a small set of linear equations, which is formed by varying $\mathbf{k}_{\sigma}$ following the condition set by (9)

$$\sum_{\rho^{\prime}}M_{\rho\rho^{\prime}}^{\dagger}\sum_{n}\sum_{\mu}\tilde{s}_{n}^{\ast}(\mathbf{k}_{\rho^{\prime}}-\kappa_{\mu})\,\tilde{s}_{n}(\kappa_{\mu}-\mathbf{k}_{\sigma})=\delta(\mathbf{k}_{\rho}-\mathbf{k}_{\sigma}),\quad\text{for}\ \|\mathbf{k}_{\rho}-\mathbf{k}_{\rho^{\prime}}\|\le w$$

(11)

and

$$M_{\rho\rho^{\prime}}^{\dagger}=0,\quad\text{for}\ \|\mathbf{k}_{\rho}-\mathbf{k}_{\rho^{\prime}}\|>w.$$

(12)

Here, *w* is the width of the reconstruction kernel that defines the sparsity of the inverse matrix. After **M**^{†} is computed, the Fourier transform of the image can be simply estimated as

$$\widehat{\mathbf{m}}={\mathbf{M}}^{\u2020}{\mathbf{G}}^{\mathbf{H}}\mathbf{d}.$$

(13)

To summarize, kSPA reconstruction consists of two steps: the first step is multiplying the raw data with a gridding operator $\mathbf{G}^{\mathbf{H}}$, and the second step is multiplying the gridded data with an unmixing operator $\mathbf{M}^{\dagger}$.
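As a minimal numerical sketch (dense toy matrices with arbitrary dimensions, not the sparse production implementation), the two steps reproduce the least-squares solution of (4):

```python
import numpy as np

rng = np.random.default_rng(1)
nk, N = 30, 12  # toy numbers of samples and grid points
G = rng.standard_normal((nk, N)) + 1j * rng.standard_normal((nk, N))
d = rng.standard_normal(nk) + 1j * rng.standard_normal(nk)

# Step 1: gridding -- multiply the raw data with G^H.
g = G.conj().T @ d

# Step 2: unmixing -- apply M^dagger = (G^H G)^{-1} to the gridded data.
M = G.conj().T @ G
m_hat = np.linalg.solve(M, g)

# Identical (up to round-off) to the least-squares solution of d = G m.
m_ls = np.linalg.lstsq(G, d, rcond=None)[0]
print(np.allclose(m_hat, m_ls))  # True
```

In kSPA both operators are sparse, so each output point touches only a small neighborhood of the gridded data; the dense toy above only illustrates the algebra.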

The original implementation of kSPA requires the estimation of coil sensitivity [18]. The estimation of coil sensitivity typically involves the division of each individual coil's image by the sum-of-squares image, followed by postprocessing procedures such as low-pass filtering and surface fitting [3]. Because image quality is sensitive to the accuracy of the measured coil sensitivity, the requirement of accurate coil sensitivity is a critical source of reconstruction error. An implementation of kSPA that does not require the explicit estimation of coil sensitivity is therefore highly desirable. Such an implementation would compute the inversion operator $\mathbf{G}^{\dagger}$ directly from some form of calibration data in acquired **k**-space. Specifically, the auto-calibration method presented here computes $\mathbf{G}^{\dagger}$ directly from measured **k**-space data rather than from an estimated coil sensitivity.

One option for realizing such auto-calibration is to acquire a fully sampled data set on the designed trajectory and use this data set to estimate the weights for synthesizing missing data points in an undersampled trajectory. However, such a calibration scheme is expensive, and recalibration is inconvenient. Here, we propose an alternative calibration method that relies on a small fully sampled area centered at any location of **k**-space to estimate the inversion operator $\mathbf{G}^{\dagger}$.

We first show that the value of $M_{\rho\rho^{\prime}}^{\dagger}$ is determined only by the sampling pattern surrounding $\mathbf{k}_{\rho}$ and is independent of the exact locations of $\mathbf{k}_{\rho}$ and $\mathbf{k}_{\rho^{\prime}}$. Translating every **k**-space location in (11) by an arbitrary offset $\Delta\mathbf{k}$, so that $\mathbf{k}_{\rho}$ and $\mathbf{k}_{\rho^{\prime}}$ move to $\mathbf{k}_{\rho_0}$ and $\mathbf{k}_{\rho_0^{\prime}}$, gives

The weights for synthesizing data on a Cartesian grid depend only on the surrounding sampling pattern. The shaded area indicates the calibration region where **k**-space is fully sampled. The circle indicates the region whose sampling pattern is used to construct …

$$\sum_{\rho_{0}^{\prime}}M_{\rho_{0}\rho_{0}^{\prime}}^{\dagger}\sum_{n}\sum_{\mu}\tilde{s}_{n}^{\ast}\bigl((\mathbf{k}_{\rho^{\prime}}+\Delta\mathbf{k})-(\kappa_{\mu}+\Delta\mathbf{k})\bigr)\,\tilde{s}_{n}\bigl((\kappa_{\mu}+\Delta\mathbf{k})-(\mathbf{k}_{\sigma}+\Delta\mathbf{k})\bigr)=\delta\bigl((\mathbf{k}_{\rho}+\Delta\mathbf{k})-(\mathbf{k}_{\sigma}+\Delta\mathbf{k})\bigr)$$

(14)

which can be simplified to

$$\sum _{{\rho}_{0}^{\prime}}{M}_{{\rho}_{0}{\rho}_{0}^{\prime}}^{\u2020}\sum _{n=1}\sum _{\mu}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{k}}_{{\rho}^{\prime}}-{\mathit{\kappa}}_{\mu}){\stackrel{\sim}{s}}_{n}({\mathit{\kappa}}_{\mu}-{\mathbf{k}}_{\sigma})=\delta ({\mathbf{k}}_{\rho}-{\mathbf{k}}_{\sigma}).$$

(15)

Comparing (15) and (11), we conclude that ${M}_{{\rho}_{0}{\rho}_{0}^{\prime}}^{\u2020}={M}_{\rho {\rho}^{\prime}}^{\u2020}$.

That is, the value of $M_{\rho\rho^{\prime}}^{\dagger}$ is independent of the absolute locations of $\mathbf{k}_{\rho}$ and $\mathbf{k}_{\rho^{\prime}}$; it depends only on their relative position and on the surrounding sampling pattern.

Because of this shift-invariant property of $M_{\rho\rho^{\prime}}^{\dagger}$, we only need to acquire a small region of **k**-space for the purpose of auto-calibration. This calibration region can be centered at any point $\mathbf{k}_{\rho_0}$ in **k**-space. To calculate the reconstruction weights for the grid point $\mathbf{k}_{\rho}$, i.e., the corresponding row of $\mathbf{M}^{\dagger}$, we multiply both sides of (11) by the calibration data $d(\mathbf{k}_{\sigma_0})$ and sum over $\sigma_0$ within the calibration region, which gives

$$\sum _{{\rho}_{0}^{\prime}}{M}_{{\rho}_{0}{\rho}_{0}^{\prime}}^{\u2020}\sum _{n=1}\sum _{{\mu}_{0}}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{k}}_{{\rho}_{0}^{\prime}}-{\mathit{\kappa}}_{{\mu}_{0}})\sum _{{\sigma}_{0}}{\stackrel{\sim}{s}}_{n}({\mathit{\kappa}}_{{\mu}_{0}}-{\mathbf{k}}_{{\sigma}_{0}})d({\mathbf{k}}_{{\sigma}_{0}})=\sum _{{\sigma}_{0}}\delta ({\mathbf{k}}_{{\rho}_{0}}-{\mathbf{k}}_{{\sigma}_{0}})d({\mathbf{k}}_{{\sigma}_{0}}).$$

(16)

Note that **d**(**k**_{σ0}) is the data that would have been acquired with a single-channel coil of homogeneous sensitivity. By recognizing the convolution summation on both sides, we can simplify (16) to

$$\sum _{{\rho}_{0}^{\prime}}{M}_{{\rho}_{0}{\rho}_{0}^{\prime}}^{\u2020}\sum _{n=1}\sum _{{\mu}_{0}}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{k}}_{{\rho}_{0}^{\prime}}-{\mathit{\kappa}}_{{\mu}_{0}}){d}_{n}({\mathit{\kappa}}_{{\mu}_{0}})=d({\mathbf{k}}_{{\rho}_{0}})$$

(17)

which can be further reduced to

$$\sum _{n}\sum _{{\mu}_{0}}{G}_{{\rho}_{0}{\mu}_{0},n}^{\u2020}{d}_{n}({\mathit{\kappa}}_{{\mu}_{0}})=d({\mathbf{k}}_{{\rho}_{0}}).$$

(18)

Based on (9) and (12), we recognize that

$${G}_{{\rho}_{0}{\mu}_{0},n}^{\u2020}=0,\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\text{for}\phantom{\rule{0.16667em}{0ex}}\left|\right|{\mathbf{k}}_{{\rho}_{0}}-{\mathit{\kappa}}_{{\mu}_{0}}\left|\right|>w+{w}_{s}.$$

(19)

Here,
${G}_{{\rho}_{0}{\mu}_{0},n}^{\u2020}$ is the entry in the *ρ*_{0}th row of **G**^{†} that corresponds to the *μ*_{0}th sampling location of the *n*th coil. It is expressed as

$${G}_{{\rho}_{0}{\mu}_{0},n}^{\u2020}=\sum _{{\rho}_{0}^{\prime}}{M}_{{\rho}_{0}{\rho}_{0}^{\prime}}^{\u2020}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{k}}_{{\rho}_{0}^{\prime}}-{\mathit{\kappa}}_{{\mu}_{0}}).$$

(20)

Because $M_{\rho\rho^{\prime}}^{\dagger}$ is independent of the absolute locations of $\mathbf{k}_{\rho}$ and $\mathbf{k}_{\rho^{\prime}}$, we observe that $G_{\rho_0\mu_0,n}^{\dagger}$ is the same as $G_{\rho\mu,n}^{\dagger}$.

Obviously, computing $G_{\rho\mu,n}^{\dagger}$ requires a set of linearly independent equations such that the total number of equations is larger than the number of unknowns in $G_{\rho\mu,n}^{\dagger}$. Because of the shift-invariant property of $G_{\rho\mu,n}^{\dagger}$, such a set of linear equations can be constructed by translating the sampling points around $\mathbf{k}_{\rho}$ to a set of grid points within the calibration region. For each grid point, an equation can be constructed following (18). Therefore, by shifting all sampling points around $\mathbf{k}_{\rho}$ to a sufficient number of grid points inside the calibration region, enough equations can be obtained to solve for the weights.

The auto-calibration scheme based on (18) requires the knowledge of both $d_n(\kappa_{\mu_0})$ and $d(\mathbf{k}_{\rho_0})$, the data that would have been acquired by a coil with homogeneous sensitivity. The latter is not directly available from a multichannel acquisition. Instead, the data acquired by each individual coil can serve as the synthesis target, which gives

$$\sum _{n}\sum _{{\mu}_{0}}{G}_{{\rho}_{0}{\mu}_{0},{nn}^{\prime}}^{\u2020}{d}_{n}({\mathit{\kappa}}_{{\mu}_{0}})={d}_{{n}^{\prime}}({\mathbf{k}}_{{\rho}_{0}}).$$

(21)

Similarly, $G_{\rho\mu,nn^{\prime}}^{\dagger}$ is sparse and satisfies (19). The indices $nn^{\prime}$ indicate that there is one set of weights for each pair of coils. That is, instead of synthesizing the true **k**-space data, the calibration data from all coils are used to synthesize the **k**-space data of each individual coil. Equation (21) can be verified simply by convolving both sides of (18) with $s_{n^{\prime}}$, the sensitivity spectrum of the $n^{\prime}$th coil.

This auto-calibrated kSPA algorithm can be summarized as follows.

- Step 1) Acquire a small fully sampled region in **k**-space for auto-calibration and compute $d_n(\mathbf{k}_{\rho_0})$ through interpolation. The full width of the calibration region should be larger than 2*w*.
- Step 2) For a given grid point $\mathbf{k}_{\rho}$ in **k**-space, locate all sampling points within a distance of *w* from that grid point.
- Step 3) Translate these sampling points to center at a set of grid points in the middle of the calibration region. At each grid point, compute $d_n(\kappa_{\mu_0})$ through interpolation and construct an equation following (21). Combine all such equations and solve the resulting set of equations for $G_{\rho\mu,nn^{\prime}}^{\dagger}$.
- Step 4) Repeat steps 2) and 3) for all grid points and construct the matrix $\mathbf{G}^{\dagger}$.
- Step 5) Compute **m** for the $n^{\prime}$th coil as $\mathbf{G}^{\dagger}\mathbf{d}$. Filter **m** and FFT it to the image space.
- Step 6) Calculate the sum-of-squares of all coil images.

The purpose of the filter in step 5) is to suppress signal at the edge of **k**-space that is not covered entirely by the sampling trajectory. For example, spiral and radial trajectories have a circular coverage of **k**-space, leaving the corners of **k**-space unsampled. The reconstructed data in those areas amplify noise and, therefore, should be eliminated. The filter we used is a function that is 1 within the region of the **k**-space trajectory and decreases linearly to zero outside the region, with a transition width of five pixels. For spiral trajectories, the function is similar to the filter used by Pruessmann *et al*. [4].
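A sketch of such a filter for a circular trajectory footprint (illustrative, not the authors' exact code; the function name and default radius are ours):

```python
import numpy as np

def edge_filter(n, radius, taper=5):
    """Radial k-space filter: 1 inside `radius`, linear ramp to 0 over
    `taper` pixels, 0 beyond (suppresses unsampled k-space corners)."""
    ky, kx = np.indices((n, n)) - n // 2
    r = np.hypot(kx, ky)
    return np.clip(1.0 - (r - radius) / taper, 0.0, 1.0)

f = edge_filter(256, 100)
# center of k-space -> 1, mid-ramp -> about 0.4, corner -> 0
print(f[128, 128], f[128, 231], f[0, 0])
```

The reconstructed **k**-space data are multiplied by this mask before the FFT in step 5).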

Interpolation steps are involved in the resampling of calibration data onto targeted sampling patterns. Interpolation in **k**-space with a small kernel results in uneven intensity shading in the image space, commonly referred to as apodization [25]. This apodization effect is usually corrected by dividing the reconstructed image by a de-apodization function, typically the inverse Fourier transform of the interpolation kernel. We provide an analysis of and a solution to this effect in kSPA and auto-calibrated kSPA. We show that the apodization effect is completely removed during the estimation of the reconstruction weights, owing to the resampling of the calibration data; therefore, there is no need for a separate apodization correction afterwards.

To evaluate the effect of the small interpolation kernel, we can combine (3) and (7) and write out the entries of **G** explicitly as

$$\begin{array}{l}\tilde{s}_{n}(\kappa_{\mu}-\mathbf{k}_{\rho})=\sum_{\rho^{\prime}=1}^{N^{2}}s_{n}(\kappa_{\mu}-\mathbf{k}_{\rho}-\mathbf{k}_{\rho^{\prime}})\,c(\mathbf{k}_{\rho^{\prime}})\\ =\sum_{\mathbf{k}_{\rho^{\prime\prime}}=\mathbf{k}_{\rho}+\mathbf{k}_{\rho^{\prime}}}s_{n}(\kappa_{\mu}-\mathbf{k}_{\rho^{\prime\prime}})\,c(\mathbf{k}_{\rho^{\prime\prime}}-\mathbf{k}_{\rho}).\end{array}$$

(22)

Following (22), we can separate the coil sensitivity and the interpolation kernel in the inverse gridding operator **G** such that

$$\mathbf{G}={\mathbf{G}}_{s}\mathbf{C}.$$

(23)

Here, $\mathbf{G}_s$ is a matrix of size $(n_c n_k)\times N^2$ whose entries are the corresponding values of $s_n(\kappa_{\mu}-\mathbf{k}_{\rho^{\prime\prime}})$, and $\mathbf{C}$ is an $N^2\times N^2$ matrix whose entries are the interpolation weights $c(\mathbf{k}_{\rho^{\prime\prime}}-\mathbf{k}_{\rho})$. Substituting (23) into the pseudoinverse reconstruction gives

$$\begin{array}{l}\widehat{\mathbf{m}}={({\mathbf{C}}^{\mathbf{H}}{\mathbf{G}}_{s}^{\mathbf{H}}{\mathbf{G}}_{s}\mathbf{C})}^{-1}{\mathbf{C}}^{\mathbf{H}}{\mathbf{G}}_{s}^{\mathbf{H}}\mathbf{d}\\ ={\mathbf{C}}^{-1}{({\mathbf{G}}_{s}^{\mathbf{H}}{\mathbf{G}}_{s})}^{-1}{\mathbf{G}}_{s}^{\mathbf{H}}\mathbf{d}={\mathbf{C}}^{-1}\mathbf{m}.\end{array}$$

(24)
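Identity (24) can be checked numerically with small random matrices (a sketch with arbitrary toy dimensions, assuming $\mathbf{C}$ is invertible and $\mathbf{G}_s$ has full column rank):

```python
import numpy as np

rng = np.random.default_rng(3)
N, nk = 6, 10
Gs = rng.standard_normal((nk, N)) + 1j * rng.standard_normal((nk, N))
C = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
G = Gs @ C                      # G = G_s C, as in Eq. (23)
d = rng.standard_normal(nk) + 1j * rng.standard_normal(nk)

# Left-hand side: pseudoinverse reconstruction with the full G.
m_hat = np.linalg.solve(G.conj().T @ G, G.conj().T @ d)

# Right-hand side: C^{-1} applied to the G_s-only reconstruction m.
m = np.linalg.solve(Gs.conj().T @ Gs, Gs.conj().T @ d)
print(np.allclose(m_hat, np.linalg.solve(C, m)))  # True
```

The interpolation matrix cancels out of the normal equations except for the single leading factor $\mathbf{C}^{-1}$, which is exactly the apodization discussed next.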

Equation (24) shows that the reconstructed image is the true image multiplied by the inverse interpolation matrix. To remove this apodization effect, we can modify the unmixing operator **M**^{†}. Instead of computing the unmixing operator as a direct inverse of **M**, we need to find an unmixing operator **M**^{†} such that

$${\mathbf{M}}^{\u2020}\mathbf{M}=\mathbf{C}.$$

(25)

Specifically, we need to change the delta function on the right-hand side of (11) to the corresponding interpolation weights. As a result, we have

$$\sum _{{\rho}^{\prime}}{M}_{\rho {\rho}^{\prime}}^{\u2020}\sum _{n=1}\sum _{\mu}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{k}}_{{\rho}^{\prime}}-{\mathit{\kappa}}_{\mu}){\stackrel{\sim}{s}}_{n}({\mathit{\kappa}}_{\mu}-{\mathbf{k}}_{\sigma})=c({\mathbf{k}}_{\rho}-{\mathbf{k}}_{\sigma})$$

(26)

Accordingly, in the case of auto-calibrated kSPA, (16) is changed to

$$\sum _{{\rho}_{0}^{\prime}}{M}_{{\rho}_{0}{\rho}_{0}^{\prime}}^{\u2020}\sum _{n=1}\sum _{{\mu}_{0}}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{k}}_{{\rho}_{0}^{\prime}}-{\mathit{\kappa}}_{{\mu}_{0}})\times \sum _{{\sigma}_{0}}{\stackrel{\sim}{s}}_{n}({\mathit{\kappa}}_{{\mu}_{0}}-{\mathbf{k}}_{{\sigma}_{0}})d({\mathbf{k}}_{{\sigma}_{0}})=\sum _{{\sigma}_{0}}c({\mathbf{k}}_{{\rho}_{0}}-{\mathbf{k}}_{{\sigma}_{0}})d({\mathbf{k}}_{{\sigma}_{0}}).$$

(27)

That is, the calibration data need to be convolved with the interpolation kernel. When the calibration data are acquired on non-Cartesian trajectories, the data are first interpolated onto a Cartesian grid (see step 1 of the auto-calibrated kSPA algorithm); the apodization correction is therefore realized automatically. When the calibration data are acquired on a Cartesian grid, (27) indicates that the Cartesian data must also be convolved with the interpolation kernel in order to remove the apodization effect.

The computation of auto-calibrated kSPA mainly lies in solving the system of linear equations defined by (21). This system needs to be solved not only for each Cartesian grid location but also for each coil. Since, at a given grid location, the system matrix is the same for all coils, ideally we would invert it once and apply the inverse to all coils, speeding up the computation by a factor comparable to the total number of coils. Unfortunately, such a strategy is not practical because the system is typically ill-posed as a result of noise contamination and gridding errors in the calibration data. As a result, an accurate inversion matrix may not exist.

Alternatively, the equations can be solved iteratively without computing a matrix inversion directly. There are a number of iterative algorithms that might be suitable for this task. In this paper, we demonstrate an implementation with the least squares using orthogonal and right triangular decomposition (LSQR) algorithm [26]. The LSQR algorithm has been shown to be more robust than the conjugate gradient algorithm in the case of ill-posed problems [26]. In this study, we use the implementation of LSQR in Matlab (Version 7, Release 14, The MathWorks Inc.) with the default stopping criteria.
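A minimal sketch of this approach using SciPy's `lsqr` (the paper used Matlab's `lsqr`; this Python analogue, with made-up dimensions and noise levels, is only for illustration):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(4)
# An ill-conditioned least-squares problem standing in for the
# per-grid-point calibration equations of Eq. (21).
A = rng.standard_normal((40, 8))
A[:, -1] = A[:, 0] + 1e-6 * rng.standard_normal(40)  # nearly dependent columns
b = A @ np.ones(8) + 1e-3 * rng.standard_normal(40)  # noisy right-hand side

x = lsqr(A, b)[0]        # iterative solution; no explicit inverse is formed
residual = np.linalg.norm(A @ x - b)
print(residual < np.linalg.norm(b))  # the fit reduces the residual
```

Because LSQR never forms $(\mathbf{A}^{\mathbf{H}}\mathbf{A})^{-1}$ explicitly, its early-stopping behavior provides some regularization on ill-posed systems, which is consistent with the robustness observed in the results below.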

There are two main sources of reconstruction error: the sparse approximation error of matrix $\mathbf{G}$, denoted as $\mathbf{G}_e$, and the sparse approximation error of the inverse matrix $\mathbf{M}^{-1}$, denoted as $\mathbf{M}_e^{-1}$. Considering first the error in $\mathbf{G}$ alone, the reconstructed image is

$$\widehat{\mathbf{m}}={({(\mathbf{G}-{\mathbf{G}}_{\mathbf{e}})}^{\mathbf{H}}(\mathbf{G}-{\mathbf{G}}_{\mathbf{e}}))}^{-1}{(\mathbf{G}-{\mathbf{G}}_{\mathbf{e}})}^{\mathbf{H}}\mathbf{d}.$$

(28)

With a first-order Taylor expansion, (28) can be reduced to

$$\widehat{\mathbf{m}}=(\mathbf{I}+{\mathbf{M}}^{-1}({\mathbf{G}}^{\mathbf{H}}{\mathbf{G}}_{\mathbf{e}}+{\mathbf{G}}_{\mathbf{e}}^{\mathbf{H}}\mathbf{G})){\mathbf{M}}^{-1}({\mathbf{G}}^{\mathbf{H}}-{\mathbf{G}}_{\mathbf{e}}^{\mathbf{H}})\mathbf{d}.$$

(29)

Taking into consideration the sparse approximation error of **M**^{−1}, one gets

$$\widehat{\mathbf{m}}=(\mathbf{I}+({\mathbf{M}}^{-1}-{\mathbf{M}}_{\mathbf{e}}^{-1})({\mathbf{G}}^{\mathbf{H}}{\mathbf{G}}_{\mathbf{e}}+{\mathbf{G}}_{\mathbf{e}}^{\mathbf{H}}\mathbf{G}))\times ({\mathbf{M}}^{-1}-{\mathbf{M}}_{\mathbf{e}}^{-1})({\mathbf{G}}^{\mathbf{H}}-{\mathbf{G}}_{\mathbf{e}}^{\mathbf{H}})\mathbf{d}.$$

(30)

Finally, keeping terms up to the first order, one gets

$$\widehat{\mathbf{m}}=\mathbf{m}-{\mathbf{M}}^{-1}{\mathbf{G}}_{\mathbf{e}}^{\mathbf{H}}\mathbf{d}+{\mathbf{M}}^{-1}({\mathbf{G}}^{\mathbf{H}}{\mathbf{G}}_{\mathbf{e}}+{\mathbf{G}}_{\mathbf{e}}^{\mathbf{H}}\mathbf{G})\mathbf{m}-{\mathbf{M}}_{\mathbf{e}}^{-1}{\mathbf{G}}^{\mathbf{H}}\mathbf{d}$$

(31)

where **m** is the true image computed with the exact matrices **G** and **M**^{−1} as

$$\mathbf{m}={\mathbf{M}}^{-1}{\mathbf{G}}^{\mathbf{H}}\mathbf{d}$$

(32)

The first two error terms in (31) result from the sparse approximation of matrix $\mathbf{G}$, which is primarily determined by the choice of $w_s$ and the associated interpolation process. The last error term in (31) results from the error in the inverse matrix $\mathbf{M}^{-1}$, which is determined by the choice of the kernel width $w$.
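The first-order expansion can be verified numerically: with a small perturbation $\mathbf{G}_e$ (and $\mathbf{M}_e^{-1}=0$ for simplicity), the first-order expression from (31) tracks the exact reconstruction of (28) much more closely than the unperturbed image does (toy real-valued dense matrices, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)
nk, N, eps = 20, 6, 1e-3
G = rng.standard_normal((nk, N))
Ge = eps * rng.standard_normal((nk, N))   # sparse-approximation error of G
d = rng.standard_normal(nk)

Minv = np.linalg.inv(G.T @ G)
m = Minv @ G.T @ d                        # "true" image, Eq. (32)

# Exact reconstruction with the perturbed matrix, Eq. (28).
Gp = G - Ge
m_exact = np.linalg.solve(Gp.T @ Gp, Gp.T @ d)

# First-order expression from Eq. (31) with M_e^{-1} = 0.
m_first = m - Minv @ Ge.T @ d + Minv @ (G.T @ Ge + Ge.T @ G) @ m

err0 = np.linalg.norm(m_exact - m)        # O(eps)
err1 = np.linalg.norm(m_exact - m_first)  # O(eps^2)
print(err1 < 0.1 * err0)                  # True
```

The residual of the first-order expression shrinks quadratically with the perturbation size, confirming that the neglected terms in (31) are second order.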

The auto-calibrated kSPA algorithm was applied to a spiral **k**-space trajectory. The trajectory consists of two segments: a short single-shot spiral-in navigator and an interleaved spiral-out imaging trajectory. The spiral-in navigator corresponds to an image matrix size of 32 × 32 (Fig. 2); the spiral-out trajectory corresponds to an image matrix size of 256 × 256. The spiral trajectory was designed based on the analytic method by Glover [27]. The spiral-out trajectory consists of 32 interleaves. Each interleaf has 1744 sampling points.

Spiral-in navigator. (a) Readout gradient waveform consisting of a short spiral-in navigator followed by a regular spiral trajectory. (b) **k**-Space trajectory of the spiral-in navigator. (c) Image reconstructed from the navigator for one coil.

A Shepp–Logan phantom and an 8-channel receiving coil were used to simulate the **k**-space data *via* inverse gridding. The coil sensitivity maps were acquired using an 8-channel head coil on a large water phantom (MRI Devices Corporation, Pewaukee, WI) and fitted with a thin-plate spline function [18]. The image seen by each coil was simulated by multiplying the phantom image with the corresponding coil sensitivity map. To reduce the effect of **k**-space circular convolution resulting from this image-domain multiplication and subsequent discrete Fourier transform, both the phantom image and coil sensitivity maps were upsampled by a factor of 2 prior to the multiplication of the phantom image with coil sensitivity maps. The upsampling was accomplished by zero-padding in the Fourier domain. To improve the accuracy of inverse gridding, each coil image was further zero-padded by a factor of 2 before performing the Fourier transform. In addition, the edges of the coil sensitivity maps that were outside the imaging object were tapered with a Fermi window [18], [28] in order to further suppress the energy leakage caused by Gibbs ringing. The radius of the circular Fermi window was equal to the half-width of the imaging matrix; the transition width of the window was five pixels. This procedure was necessary because the coil sensitivity fitted by the thin-plate-spline function covers the whole imaging matrix and has an abrupt cutoff due to the limited field-of-view (FOV).
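The Fourier-domain zero-padding used for upsampling can be sketched as follows (a generic 2× routine; the function name is ours, not from the paper):

```python
import numpy as np

def upsample2(img):
    """Upsample a square 2-D image by 2x via zero-padding its spectrum."""
    n = img.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(img))
    pad = np.zeros((2 * n, 2 * n), dtype=complex)
    pad[n // 2:n // 2 + n, n // 2:n // 2 + n] = spec  # centered embedding
    # Factor 4 = (2x)^2 keeps the intensity scale of the original image.
    return np.fft.ifft2(np.fft.ifftshift(pad)) * 4

up = upsample2(np.ones((8, 8)))
print(np.allclose(up, 1.0))  # True: a constant image stays constant
```

Zero-padding in the Fourier domain is equivalent to sinc interpolation in image space, so band-limited content is interpolated without introducing new frequency components.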

To study the effect of noise on the performance of the algorithm and to demonstrate the advantage of LSQR, Gaussian noise was added to the image such that the signal-to-noise ratio (SNR) was 20. The image was reconstructed with and without using the LSQR algorithm for a comparison.

*In vivo* brain images of a healthy volunteer were acquired using the same navigated spiral readout trajectory (as described in Section III-A) on a 1.5 T whole-body system (GE Signa, GE Healthcare, Waukesha, WI) equipped with a maximum gradient of 50 mT/m and a slew rate of 150 mT/m/s. An eight-channel head coil (MRI Devices Corporation, Pewaukee, WI) was used for image acquisition. The scan parameters were: FOV = 24 cm, time of repetition (TR) = 300 ms, time of echo (TE) = 15 ms, bandwidth (BW) = 125 kHz, and matrix size = 256 × 256. Undersampling in **k**-space was achieved by skipping a certain number of interleaves. The calibration data were acquired using the spiral-in navigator, which samples **k**-space on a 32 × 32 grid and precedes each spiral-out interleaf.

All image reconstructions were performed using Matlab (Version 7, Release 14, The MathWorks), running on a LINUX PC equipped with a 3.20 GHz Intel Xeon CPU and 5 GB RAM. Images were reconstructed using reduction factors ranging from 1 to 3. The auto-kSPA reconstruction parameters included $w_s = 6$.

Fig. 3 shows a set of typical individual coil images reconstructed with the auto-calibrated kSPA algorithm and the corresponding sum-of-squares image. In both cases, the calibration data were used only for calibration and were not included in the final image. However, the algorithm does not preclude adding the calibration data to the final image. Our goal is to evaluate the ability of the algorithm to remove aliasing artifacts. Although adding navigator data would certainly improve the SNR of the final image, it may require additional consideration when the navigator has a different image contrast than the imaging data [29]. As an illustration of the improved image quality, images reconstructed with the simple gridding algorithm are also shown. Excellent image quality is achieved with auto-calibrated kSPA for all tested reduction factors. Fig. 4 shows a similar set of images reconstructed from the *in vivo* data. As with the simulated data, the reconstructed *in vivo* images have excellent quality and no obvious artifacts.

Demonstration of auto-calibrated kSPA with simulated data. (a) *R* = 2; (b) *R* = 3. In each row, images from left to right are: image reconstructed with standard gridding, two typical coil images, image reconstructed with auto-calibrated kSPA, difference …

Fig. 4. Demonstration of auto-calibrated kSPA with *in vivo* data. (a) *R* = 2; (b) *R* = 3. In each row, images from left to right are: image reconstructed with standard gridding, two typical coil images, image reconstructed with auto-calibrated kSPA, difference with …

To illustrate the location independence of the calibration data, Fig. 5 compares images reconstructed using calibration data centered at the **k**-space origin with those reconstructed using calibration data centered at (kx = 16, ky = 16) and at (kx = 64, ky = 64). For the simulated data, image quality does not differ across the calibration regions. For the *in vivo* data, however, some residual aliasing artifacts remain when the calibration data are off-center [Fig. 5(c)]. These residual artifacts are caused by the increased noise level in the off-center calibration data.

Fig. 5. Images reconstructed with calibration data centered at different locations in **k**-space, with *R* = 2. The centers of the calibration data are (a) (0, 0); (b) (16, 16); and (c) (64, 64), respectively. The first column is simulated data without noise, the second …

Fig. 6 compares the image quality obtained when the reconstruction weights are computed by direct matrix inversion with that obtained when they are computed by LSQR. For the simulated data without noise, both approaches yield good image quality with no significant difference. However, for the simulated data with noise and for the *in vivo* data, direct matrix inversion produces a good reconstruction only at *R* = 2 and fails at higher reduction factors. With LSQR, on the other hand, auto-calibrated kSPA achieves successful reconstruction at all reduction factors up to *R* = 4.

Fig. 6. Representative coil images reconstructed using weights calculated by (a) the LSQR algorithm and (b) matrix pseudo-inversion. In both cases, the reduction factor is *R* = 3. The first row is simulated data without noise; the second row is simulated data …

The image reconstruction quality is also affected by the size of the calibration data and the width of the reconstruction kernel, that is, the number of nonzero elements in each row of **G**^{†}. Fig. 7(a) plots the root mean square error (RMSE) as a function of *w _{s}* and

Auto-calibration in the image reconstruction of partially-acquired **k**-space data has shown some favorable properties in previous studies of parallel imaging methods [5], [14], [16], [17], [30], [31]. We have demonstrated that auto-calibration schemes can also be incorporated into the kSPA algorithm. In addition, we have shown that the calibration data can be acquired, in principle, at any region in **k**-space.

The proposed auto-calibrated kSPA shares with AUTO-SMASH [14], [30] and GRAPPA the characteristic that calibration is performed directly on extra acquired **k**-space calibration data. Although the popular GRAPPA algorithm was originally proposed for Cartesian trajectories, a number of recent works have extended it to non-Cartesian trajectories such as spirals and radial acquisitions. For example, Heidemann *et al.* presented a method for constant angular-velocity spiral trajectories [17], in which samples on such a trajectory are reordered and placed onto a hybrid Cartesian grid where the original GRAPPA algorithm is applicable. More recently, Heberlein and Hu introduced a method that interpolates missing spiral interleaves with GRAPPA interpolation kernels by dividing **k**-space into angular sectors and assuming that the kernel within each sector is radially invariant [16]. Seiberlich *et al.* used GRAPPA operator gridding (GROG) to reconstruct non-Cartesian data [15], [32]. In addition, Samsonov established a connection between the technique of parallel magnetic resonance imaging with adaptive radius in **k**-space (PARS) [33] and GRAPPA using the concept of the *in vivo* coil sensitivity, i.e., the true sensitivity modulated by the object [34]; such a connection should also allow auto-calibration for PARS. While a GRAPPA-based kernel provides the weighting factors to directly synthesize missing data in **k**-space, auto-calibrated kSPA uses the calibration data to estimate the sparse matrix inverse. Moreover, auto-kSPA does not attempt to fill in missing data points in **k**-space; instead, it computes the whole **k**-space from the measured samples. Auto-calibrated kSPA is a general reconstruction algorithm that applies to all sampling patterns.
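For intuition on what an auto-calibration fit looks like, the GRAPPA-style idea of learning weights that synthesize a missing sample from acquired neighbours across coils can be sketched as a least-squares fit on fully sampled calibration data. This is a hedged 1-D Cartesian toy, not the kSPA estimation itself; all sizes and names are illustrative:

```python
import numpy as np

# Hedged sketch of GRAPPA-type auto-calibration (not kSPA's estimation):
# fit weights on fully sampled calibration k-space so that acquired
# neighbours (spacing 2, i.e. R = 2) predict the in-between sample.
rng = np.random.default_rng(0)
n_coils, n_cal = 4, 64

# Fully sampled 1-D calibration k-space for each coil (synthetic).
calib = rng.standard_normal((n_coils, n_cal)) + 1j * rng.standard_normal((n_coils, n_cal))

src, tgt = [], []
for k in range(1, n_cal - 1, 2):            # "missing" positions under R = 2
    src.append(np.concatenate([calib[:, k - 1], calib[:, k + 1]]))
    tgt.append(calib[:, k])
src = np.array(src)                          # (n_fits, 2 * n_coils)
tgt = np.array(tgt)                          # (n_fits, n_coils)

# Least-squares weights mapping acquired neighbours to the missing sample.
weights, *_ = np.linalg.lstsq(src, tgt, rcond=None)
print(weights.shape)                         # (8, 4) = (2*n_coils, n_coils)
```

In contrast, auto-calibrated kSPA uses the same kind of calibration data to estimate rows of the sparse approximate inverse rather than fill-in weights.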

The proposed algorithm has been demonstrated with both simulated and *in vivo* data, yielding good image quality: the average RMSE is 2.1% for the simulated data and 4.0% for the *in vivo* data, as shown in Figs. 3 and 4. In the simulated data, the residual artifacts originate mainly from the edges of the Shepp-Logan phantom, where high spatial frequency components exist. Such artifacts are consistent with the interpolation nature of the kSPA algorithm and the non-Cartesian sampling in **k**-space. On the other hand, there is noticeable noise amplification in the center of the *in vivo* images. This noise enhancement is explained by the increased g-factor and is consistent with that commonly observed in SENSE reconstruction [3], [4].
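An RMSE figure of merit of this kind can be computed, for example, as below; the normalisation (relative to the reference image's RMS intensity) is one common choice and may differ from the paper's exact definition:

```python
import numpy as np

def rmse_percent(recon, reference):
    """RMSE of a reconstruction, as a percentage of the reference
    image's RMS intensity (an assumed normalisation, for illustration)."""
    err = np.sqrt(np.mean(np.abs(recon - reference) ** 2))
    ref = np.sqrt(np.mean(np.abs(reference) ** 2))
    return 100.0 * err / ref

# Sanity check: a uniform 2% error yields RMSE = 2.0%.
ref = np.ones((8, 8))
rec = ref + 0.02
print(round(rmse_percent(rec, ref), 1))   # 2.0
```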

We show, in theory, that calibration data can be acquired at any location in **k**-space. This result applies to all sampling trajectories and is consistent with the well-known shift-invariance of the GRAPPA kernel for Cartesian trajectories [5] and with the technique of parallel magnetic resonance imaging with adaptive radius in **k**-space (PARS) [33]. It implies that the reconstruction matrix, or the interpolation weights in **k**-space, are determined solely by the sampling pattern and the coil configuration, independent of the absolute location in **k**-space. This observation is also verified with both simulated and *in vivo* data, as shown in Fig. 5. While there are no observable artifacts in the simulated data without noise, the quality of the simulated images with noise and of the *in vivo* images deteriorates as the calibration region shifts toward outer **k**-space. This disparity demonstrates that although the location of the calibration region does not affect the algorithm in the noiseless situation, image quality may degrade because of the higher noise level at outer **k**-space.

Computing the reconstruction matrix requires solving ill-conditioned systems of linear equations. The system of linear equations shown in (12) is generally poorly conditioned as a result of both the irregular sampling pattern and the noisy **k**-space data that appear on both sides of (21). Because the data are sampled on a non-Cartesian trajectory, the values on both sides of the equation must be computed by interpolation. Errors resulting from imperfect sampling density correction or from the limited width of the interpolation kernel further worsen the conditioning of (18). Solving the system by matrix pseudo-inversion works well for the simulated data without noise, but not for the simulated data with noise or for the *in vivo* data, especially at relatively high reduction factors, as shown in Fig. 6. This issue is resolved effectively by the LSQR algorithm, which has been shown to be more robust for ill-posed linear problems.
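The advantage of LSQR over explicit pseudo-inversion on ill-conditioned, noisy systems can be illustrated on a small synthetic system (using `scipy.sparse.linalg.lsqr`; the matrix below is a generic ill-conditioned example, not a kSPA system, and the damping value is an assumption):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Build a synthetic ill-conditioned system A x = b with noisy data.
rng = np.random.default_rng(1)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -10, n)                 # rapidly decaying singular values
A = U @ np.diag(s) @ V.T                   # condition number ~ 1e10

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy right-hand side

# Explicit pseudo-inverse amplifies noise along tiny singular values;
# damped LSQR regularises the solve.
x_pinv = np.linalg.pinv(A) @ b
x_lsqr = lsqr(A, b, damp=1e-4)[0]

err_pinv = np.linalg.norm(x_pinv - x_true)
err_lsqr = np.linalg.norm(x_lsqr - x_true)
print(err_lsqr < err_pinv)
```

The damping parameter plays the role of regularisation; in the noiseless case both approaches give comparable answers, mirroring the behaviour reported in Fig. 6.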

There are two parameters to be optimized for the auto-calibrated kSPA algorithm: the cutoff bandwidth of the sensitivity, *w _{s}*, and the half-width of the reconstruction kernel

In comparison to the original kSPA algorithm, auto-calibrated kSPA reconstructs one image per coil rather than a single final image without sensitivity weighting. As a result, the computation for auto-kSPA is slowed by a factor approximately equal to the number of coils. For massively parallel imaging, therefore, kSPA would still be the method of choice. For the small coil counts used by the vast majority of current scanners, auto-kSPA offers robustness and immunity to coil sensitivity estimation errors. A further improvement, currently under investigation, would be to reconstruct a single final image rather than one image per coil. Furthermore, auto-calibrated kSPA provides a solution for cases where an accurate sensitivity map is difficult or impossible to obtain, for example when the calibration data are too noisy or off-center, or when the image is so sparse that it does not support a sensitivity map over the whole field-of-view. In the specific example shown in Fig. 5(a), the original kSPA algorithm is certainly capable of reconstructing images of similar quality, because the calibration data are located at the center of **k**-space and have high SNR; however, it is not applicable to the two cases in Fig. 5(b) and (c).

We have proposed and implemented an auto-calibrated kSPA algorithm that does not require the explicit computation of the coil sensitivity maps. We have also shown that calibration data, in principle, can be acquired at any region of **k**-space. This property applies to arbitrary sampling trajectories and all reconstruction algorithms based on **k**-space. In practice, because of its higher SNR, calibration data acquired at the center of **k**-space performed more favorably.

The authors would like to thank M. Saunders, Ph.D., of Stanford University for discussions on LSQR. The authors would also like to thank T. Harshbarger, Ph.D. for editorial assistance.

This work was supported in part by the National Institutes of Health under Grant 1K99EB007182 and Grant 4R00EB007182, in part by the Lucas Foundation, and in part by the National Center for Research Resources under Grant P41RR09784.

Chunlei Liu, Brain Imaging and Analysis Center, School of Medicine, Duke University, Durham, NC 27705 USA.

Jian Zhang, Lucas Center for MR Imaging, Radiology Department, and Electrical Engineering Department, Stanford University, Stanford, CA 94305 USA.

Michael E. Moseley, Lucas Center for MR Imaging, Radiology Department, Stanford University, Stanford, CA 94305 USA.

1. Sodickson DK, Manning WJ. Simultaneous acquisition of spatial harmonics (SMASH): Fast imaging with radiofrequency coil arrays. Magn Reson Med. 1997;38:591–603. [PubMed]

2. Hutchinson M, Raff U. Fast MRI data acquisition using multiple detectors. Magn Reson Med. 1988;6:87–91. [PubMed]

3. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: Sensitivity encoding for fast MRI. Magn Reson Med. 1999;42:952–62. [PubMed]

4. Pruessmann KP, Weiger M, Bornert P, Boesiger P. Advances in sensitivity encoding with arbitrary **k**-space trajectories. Magn Reson Med. 2001;46:638–51. [PubMed]

5. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med. 2002;47:1202–10. [PubMed]

6. Dydak U, Weiger M, Pruessmann KP, Meier D, Boesiger P. Sensitivity-encoded spectroscopic imaging. Magn Reson Med. 2001;46:713–22. [PubMed]

7. Bammer R, Keeling SL, Augustin M, Pruessmann KP, Wolf R, Stollberger R, Hartung HP, Fazekas F. Improved diffusion-weighted single-shot echo-planar imaging (EPI) in stroke using sensitivity encoding (SENSE). Magn Reson Med. 2001;46:548–54. [PubMed]

8. Golay X, de Zwart JA, Ho YC, Sitoh YY. Parallel imaging techniques in functional MRI. Top Magn Reson Imaging. 2004;15:255–65. [PubMed]

9. Golay X, Pruessmann KP, Weiger M, Crelier GR, Folkers PJ, Kollias SS, Boesiger P. PRESTO-SENSE: An ultrafast whole-brain fMRI technique. Magn Reson Med. 2000;43:779–86. [PubMed]

10. Heidemann RM, Griswold MA, Kiefer B, Nittka M, Wang J, Jellus V, Jakob PM. Resolution enhancement in lung 1H imaging using parallel imaging methods. Magn Reson Med. 2003;49:391–4. [PubMed]

11. Kostler H, Sandstede JJ, Lipke C, Landschutz W, Beer M, Hahn D. Auto-SENSE perfusion imaging of the whole human heart. J Magn Reson Imaging. 2003;18:702–8. [PubMed]

12. Wintersperger BJ, Nikolaou K, Dietrich O, Rieber J, Nittka M, Reiser MF, Schoenberg SO. Single breath-hold real-time cine MR imaging: Improved temporal resolution using generalized autocalibrating partially parallel acquisition (GRAPPA) algorithm. Eur Radiol. 2003;13:1931–6. [PubMed]

13. Liu C, Moseley ME, Bammer R. Simultaneous phase correction and sense reconstruction for navigated multi-shot dwi with non-cartesian **k**-space sampling. Magn Reson Med. 2005;54:1412–22. [PubMed]

14. Jakob PM, Griswold MA, Edelman RR, Sodickson DK. AUTO-SMASH: A self-calibrating technique for SMASH imaging. Simultaneous acquisition of spatial harmonics. MAGMA. 1998;7:42–54. [PubMed]

15. Seiberlich N, Breuer F, Heidemann R, Blaimer M, Griswold M, Jakob P. Reconstruction of undersampled non-Cartesian data sets using pseudo-Cartesian GRAPPA in conjunction with GROG. Magn Reson Med. 2008;59:1127–37. [PubMed]

16. Heberlein K, Hu X. Auto-calibrated parallel spiral imaging. Magn Reson Med. 2006;55:619–25. [PubMed]

17. Heidemann RM, Griswold MA, Seiberlich N, Kruger G, Kannengiesser SA, Kiefer B, Wiggins G, Wald LL, Jakob PM. Direct parallel image reconstructions for spiral trajectories using GRAPPA. Magn Reson Med. 2006 [PubMed]

18. Liu C, Bammer R, Moseley ME. Parallel imaging reconstruction for arbitrary trajectories using **k**-space sparse matrices (kSPA). Magn Reson Med. 2007;58:1171–81. [PubMed]

19. Benson M. Iterative solution of large scale linear systems. Master's thesis. Thunder Bay, Canada: Lakehead University; 1973.

20. Benson M, Frederickson P. Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems. Utilitas Math. 1982;22:127–140.

21. Demko S, Moss WF, Smith PW. Decay rates for inverses of band matrices. Math Comput. 1984;43:491–499.

22. de Boor C. Dichotomies for band matrices. SIAM J Numer Anal. 1980;17:894–907.

23. Demko S. Inverses of band matrices and local convergence of spline projections. SIAM J Numer Anal. 1977;14:616–619.

24. Beatty PJ, Nishimura DG, Pauly JM. Rapid gridding reconstruction with a minimal oversampling ratio. IEEE Trans Med Imaging. 2005;24:799–808. [PubMed]

25. Jackson JI, Meyer CH, Nishimura DG, Macovski A. Selection of a convolution function for Fourier inversion using gridding. IEEE Trans Med Imaging. 1991;10:473–478. [PubMed]

26. Saunders M. Solution of sparse rectangular systems using LSQR and CRAIG. BIT Numerical Mathematics. 1995;35:588–604.

27. Glover GH. Simple analytic spiral **k**-space algorithm. Magn Reson Med. 1999;42:412–5. [PubMed]

28. Lowe MJ, Sorenson JA. Spatially filtering functional magnetic resonance imaging data. Magn Reson Med. 1997;37:723–9. [PubMed]

29. Glover GH, Law CS. Spiral-in/out BOLD fMRI for increased SNR and reduced susceptibility artifacts. Magn Reson Med. 2001;46:515–22. [PubMed]

30. Heidemann RM, Griswold MA, Haase A, Jakob PM. VD-AUTO-SMASH imaging. Magn Reson Med. 2001;45:1066–74. [PubMed]

31. Lustig M, Pauly JM. Iterative GRAPPA: A general solution for the GRAPPA reconstruction from arbitrary **k**-space sampling. Proceedings of 15th Annual Meeting of ISMRM; Berlin. 2007. pp. 333–333.

32. Seiberlich N, Breuer FA, Blaimer M, Barkauskas K, Jakob PM, Griswold MA. Non-Cartesian data reconstruction using GRAPPA operator gridding (GROG). Magn Reson Med. 2007;58:1257–65. [PubMed]

33. Yeh EN, McKenzie CA, Ohliger MA, Sodickson DK. Parallel magnetic resonance imaging with adaptive radius in **k**-space (PARS): Constrained image reconstruction using **k**-space locality in radiofrequency coil encoded data. Magn Reson Med. 2005;53:1383–92. [PubMed]

34. Samsonov AA, Block WF, Arunachalam A, Field AS. Advances in locally constrained **k**-space-based parallel MRI. Magn Reson Med. 2006;55:431–8. [PubMed]
