


Magn Reson Med. Author manuscript; available in PMC 2014 April 14.


PMCID: PMC3985775

NIHMSID: NIHMS388081

Lucas Center for MR Spectroscopy and Imaging, Department of Radiology, Stanford University, Stanford, California, USA



Although the concept of receiving MR signal using multiple coils simultaneously has been known for over two decades, the technique has only recently become clinically available as a result of the development of several effective parallel imaging reconstruction algorithms. Despite the success of these algorithms, it remains a challenge in many applications to rapidly and reliably reconstruct an image from partially-acquired general non-Cartesian *k*-space data. Such applications include, for example, three-dimensional (3D) imaging, functional MRI (fMRI), perfusion-weighted imaging, and diffusion tensor imaging (DTI), in which a large number of images have to be reconstructed. In this work, a systematic *k*-space–based reconstruction algorithm based on *k*-space sparse matrices (kSPA) is introduced. This algorithm formulates the image reconstruction problem as a system of sparse linear equations in *k*-space. The inversion of this system of equations is achieved by computing a sparse approximate inverse matrix. The algorithm is demonstrated using both simulated and in vivo data, and the resulting image quality is comparable to that of the iterative sensitivity encoding (SENSE) algorithm. The kSPA algorithm is noniterative and the computed sparse approximate inverse can be applied repetitively to reconstruct all subsequent images. This algorithm, therefore, is particularly suitable for the aforementioned applications.

Parallel magnetic resonance imaging (MRI) utilizes multiple coils to simultaneously receive radio frequency (RF) signals emitted from a scan subject (1–4). Aided by the spatial distribution of the RF coils’ reception sensitivity, parallel MRI has been widely used to improve imaging speed or reduce artifacts. A key problem in developing parallel MRI has been the computational difficulty of forming an image from data acquired by multiple coils. Over the past several years, two feasible classes of algorithms have been introduced for parallel imaging reconstruction: image-domain algorithms and *k*-space algorithms. Image-domain algorithms directly compute the image from the *k*-space data acquired by each coil, whereas *k*-space algorithms compute the spectrum of the image.

The most successful image-domain algorithm has been the iterative sensitivity encoding (SENSE) algorithm for arbitrary trajectories by Pruessmann et al. (5,6). Another image-based algorithm, named “sensitivity profiles from an array of coils for encoding and reconstruction in parallel” (SPACE RIP), has been introduced by Kyriakos et al. (7) to reconstruct an image column-by-column along the phase encoding direction. *k*-Space algorithms include, for example, simultaneous acquisition of spatial harmonics (SMASH) by Sodickson and Manning (8), generalized SMASH by Bydder et al. (9), generalized autocalibrating partially parallel acquisitions (GRAPPA) by Griswold et al. (10,11), and parallel imaging with adaptive radius in *k*-space (PARS) by Yeh et al. (12), among other methods (13,14). The iterative SENSE algorithm expresses the *k*-space data as a linear combination of the spatially-encoded magnetization, in which the multiplicative spatial encoding function is the product of the coil sensitivity and the Fourier encoding function. The resulting system of linear equations is solved with the conjugate gradient (CG) method or a related iterative method, since the sheer size of the design matrix precludes a direct solution of the inverse problem (15). To improve the reconstruction speed, the matrix-vector multiplication encountered in each iteration is replaced by a gridding and inverse gridding procedure, an important innovation that has made iterative SENSE feasible for arbitrary sampling trajectories at a reasonable reconstruction speed (6). With perfect knowledge of the coil sensitivity, the iterative SENSE algorithm has been shown to provide an accurate estimation of the true underlying image.

GRAPPA was first introduced to reconstruct an image that is partially acquired on a Cartesian grid (10). The essential idea is to estimate each missing *k*-space data point by linearly combining the acquired neighboring points. The weights used in this estimation are first trained on calibration lines that are typically acquired near the center of *k*-space. In addition, there have been recent efforts to extend this method beyond Cartesian trajectories (16,17). The PARS algorithm estimates the *k*-space data on a grid from neighboring data points sampled on arbitrary trajectories, in a manner similar to the gridding procedure. In contrast to GRAPPA, the combination weights of PARS are calculated using coil sensitivity maps rather than calibration lines. Both GRAPPA and PARS compute each individual coil image first and then combine the resulting coil images through a sum-of-squares reconstruction.

Despite the successes of these existing algorithms, it remains a challenge to rapidly and reliably reconstruct an image from undersampled *k*-space data. One major issue is the difficulty in processing the large amount of data generated by higher spatial resolution, more coil elements, and 3D imaging. In addition, functional studies, such as functional MRI (fMRI) (18–20), perfusion imaging (21,22), and diffusion tensor imaging (DTI) (23–27), pose an especially difficult challenge, because thousands of images have to be reconstructed for a single study. For example, a typical whole-brain DTI study generates on the order of 1000 images. Without a parallel computing implementation, the iterative SENSE algorithm typically takes minutes to reconstruct a 256 × 256 image acquired with an eight-channel receiving coil. Even at a speed of one minute per image, reconstructing 1000 images takes over 16 hours. Such a long computation time makes parallel imaging essentially impractical for routine clinical functional studies. For dynamic studies or DTI studies, in principle, the design matrix only needs to be inverted once and can then be applied repetitively to reconstruct all subsequent images. However, iterative reconstruction precludes such an approach. Alternative methods that allow the complex weights needed for parallel imaging reconstruction to be precomputed are, therefore, highly desirable for these types of applications.

In this work, a systematic parallel imaging reconstruction algorithm in *k*-space, termed *k*-space sparse matrices (kSPA), is proposed that is particularly suited to this kind of repetitive image reconstruction. The image reconstruction problem is formulated in *k*-space as a linear algebra problem. The kSPA algorithm then solves the problem by taking advantage of the sparsity of the resulting matrices. With this new algorithm, a sparse approximate reconstruction matrix (i.e., the inverse of the design matrix) can be computed directly. Multiple images can then be reconstructed in a fraction of the time needed for iterative SENSE by repetitively applying the reconstruction matrix through a matrix-vector multiplication. The algorithm is demonstrated with both simulated and *in vivo* data.

Because of the discrete nature of MR image acquisition, all mathematical treatment of the reconstruction algorithm involves only discrete samples of continuous functions.

Assuming that the *n*-th coil has a receiving sensitivity with *k*-space spectrum *s _{n}*(**k**_{ρ}), the spectrum of the image seen by this coil, *m _{n}*(**k**_{ρ}), is the convolution of the spectrum of the magnetization with the sensitivity spectrum over the *N* × *N* Cartesian grid:

$${m}_{n}({\mathbf{k}}_{\rho})=m({\mathbf{k}}_{\rho})\ast {s}_{n}({\mathbf{k}}_{\rho})=\sum _{{\rho}^{\prime}=1}^{{N}^{2}}m({\mathbf{k}}_{{\rho}^{\prime}}){s}_{n}({\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}}).$$

[1]

Here, *m*(**k**_{ρ}) represents the *k*-space spectrum of the true magnetization to be imaged, and the symbol “∗” denotes the two-dimensional (2D) convolution.
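The relation in Eq. [1] is the discrete convolution theorem. As an illustrative sketch (not the authors’ code, and using a toy image and a toy Gaussian-shaped “sensitivity”), one can verify numerically that weighting by a sensitivity in the image domain equals a circular convolution of the spectra in *k*-space:

```python
import numpy as np

# With numpy's unnormalized FFT convention, FFT(m * s) = (1/N^2) * circ_conv(M, S),
# which is the content of Eq. [1] up to DFT normalization.
N = 32
rng = np.random.default_rng(0)
m = rng.standard_normal((N, N))                      # toy magnetization image
yy, xx = np.mgrid[0:N, 0:N]
s = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 200)   # smooth toy "sensitivity"

M, S = np.fft.fft2(m), np.fft.fft2(s)

# circular convolution of the two spectra, itself computed via the FFT
conv_MS = np.fft.ifft2(np.fft.fft2(M) * np.fft.fft2(S))

lhs = np.fft.fft2(m * s)    # spectrum of the sensitivity-weighted image
rhs = conv_MS / N**2        # Eq. [1], up to the DFT normalization factor
print(np.allclose(lhs, rhs))
```

The smoother the sensitivity, the more compact its spectrum *S*, which is the property the kSPA algorithm exploits later.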

Because the field of view (FOV) of an imaging experiment is always limited, the MR data at any *k*-space location can be calculated through interpolation, following the Nyquist theorem. Specifically, at an arbitrary location **κ**_{μ} in *k*-space, the data acquired by the *n*-th coil can be written as,

$${d}_{n}({\mathbf{\kappa}}_{\mu})={m}_{n}({\mathbf{\kappa}}_{\mu})\ast c({\mathbf{\kappa}}_{\mu})=\sum _{\rho =1}^{{N}^{2}}{m}_{n}({\mathbf{k}}_{\rho})c({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho}),$$

[2]

where *c*(**k**) is the interpolation kernel. Ideally, *c*(**k**) should be a sinc function of infinite extent. In practice, *c*(**k**) is usually implemented as a finite-width kernel, for example, a Kaiser-Bessel window (28). To differentiate a Cartesian grid point from an arbitrary sampling location, the Latin letter **k** is used to indicate a Cartesian grid point, while the Greek letter **κ** is used to indicate an arbitrary location.
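A one-dimensional sketch (illustrative, not the authors’ code) makes Eq. [2] concrete: because the object has finite extent, its *k*-space signal can be recovered exactly at an off-grid location from the Cartesian samples using the ideal periodic (Dirichlet) sinc kernel; finite-width kernels such as the Kaiser-Bessel window are practical truncations of this ideal kernel.

```python
import numpy as np

N = 16
rng = np.random.default_rng(1)
m_img = rng.standard_normal(N)                       # toy 1D object

def dtft(kappa):
    """Exact k-space signal at an arbitrary (possibly off-grid) location."""
    x = np.arange(N)
    return np.sum(m_img * np.exp(-2j * np.pi * kappa * x / N))

def dirichlet(u):
    """Ideal periodic interpolation kernel c(u) for an N-point grid."""
    x = np.arange(N)
    return np.mean(np.exp(-2j * np.pi * u * x / N))

grid = np.fft.fft(m_img)                             # samples on the Cartesian grid
kappa = 5.3                                          # arbitrary off-grid location
interp = sum(grid[k] * dirichlet(kappa - k) for k in range(N))   # Eq. [2]
print(abs(interp - dtft(kappa)))                     # agreement to machine precision
```

Replacing `dirichlet` by a truncated window trades a small interpolation error for a much shorter summation, which is exactly the role of the finite-width kernel *c*(**k**).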

Combining Eqs. [1] and [2], the data on an arbitrary *k*-space location can be written as,

$${d}_{n}({\mathbf{\kappa}}_{\mu})=m({\mathbf{\kappa}}_{\mu})\ast {\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu})=\sum _{{\rho}^{\prime}=1}^{{N}^{2}}m({\mathbf{k}}_{{\rho}^{\prime}})\sum _{\rho =1}^{{N}^{2}}{s}_{n}({\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}})c({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho}),$$

[3]

where ${\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{{\rho}^{\prime}})$ is the modified sensitivity spectrum, i.e., the sensitivity spectrum convolved with the interpolation kernel:

$${\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{{\rho}^{\prime}})=\sum _{\rho =1}^{{N}^{2}}{s}_{n}({\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}})c({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho}).$$

[4]

With multiple receiving coils and a number of sampling locations, Eq. [3] forms a system of linear equations that can be denoted as,

$$\mathbf{d}=\mathbf{Gm}.$$

[5]

Here, **d** is a column vector stacked with the *k*-space data acquired by all coils; **m** is a column vector of the *k*-space values to be estimated; and **G** is the coefficient matrix. Specifically,

$${\mathbf{d}}_{n}={[\begin{array}{ccc}{d}_{n}({\mathbf{\kappa}}_{1})& \cdots & {d}_{n}({\mathbf{\kappa}}_{n\kappa})\end{array}]}^{\text{T}},$$

[6]

$$\mathbf{d}={[\begin{array}{ccc}{\mathbf{d}}_{1}& \cdots & {\mathbf{d}}_{nc}\end{array}]}^{\text{T}},$$

[7]

$$\mathbf{m}={[\begin{array}{cccc}m({\mathbf{k}}_{1})& m({\mathbf{k}}_{2})& \cdots & m({\mathbf{k}}_{{N}^{2}})\end{array}]}^{\text{T}},$$

[8]

$$\mathbf{G}=\left[\begin{array}{cccc}{\stackrel{\sim}{s}}_{1}({\mathbf{\kappa}}_{1}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{1}({\mathbf{\kappa}}_{1}-{\mathbf{k}}_{2})& \cdots & {\stackrel{\sim}{s}}_{1}({\mathbf{\kappa}}_{1}-{\mathbf{k}}_{{N}^{2}})\\ \vdots & \vdots & \vdots & \vdots \\ {\stackrel{\sim}{s}}_{1}({\mathbf{\kappa}}_{n\kappa}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{1}({\mathbf{\kappa}}_{n\kappa}-{\mathbf{k}}_{2})& \cdots & {\stackrel{\sim}{s}}_{1}({\mathbf{\kappa}}_{n\kappa}-{\mathbf{k}}_{{N}^{2}})\\ {\stackrel{\sim}{s}}_{2}({\mathbf{\kappa}}_{1}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{2}({\mathbf{\kappa}}_{1}-{\mathbf{k}}_{2})& \cdots & {\stackrel{\sim}{s}}_{2}({\mathbf{\kappa}}_{1}-{\mathbf{k}}_{{N}^{2}})\\ \vdots & \vdots & \cdots & \vdots \\ {\stackrel{\sim}{s}}_{nc}({\mathbf{\kappa}}_{n\kappa}-{\mathbf{k}}_{1})& {\stackrel{\sim}{s}}_{nc}({\mathbf{\kappa}}_{n\kappa}-{\mathbf{k}}_{2})& \cdots & {\stackrel{\sim}{s}}_{nc}({\mathbf{\kappa}}_{n\kappa}-{\mathbf{k}}_{{N}^{2}})\end{array}\right].$$

[9]

Here, *n*κ denotes the total number of sampling locations in *k*-space and *nc* denotes the total number of receiving coils.

In principle, Eq. [5] can be solved with various techniques for solving systems of linear equations, for example, by computing the pseudoinverse of the matrix **G**. However, the typical size of **G** is prohibitively large and precludes such a direct matrix operation on a typical personal computer. For example, with an eight-channel receiving coil, a reduction factor of 2, and an image size of 256 × 256, the size of the **G** matrix is around 260,000 × 65,536. Storing this matrix requires approximately 128 GB of memory at double floating-point precision.
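A quick back-of-envelope check of these figures (the per-row nonzero count below is an illustrative assumption, not a number from the paper):

```python
# Dense storage of the ~260,000 x 65,536 design matrix at 8 bytes per
# double-precision entry:
rows, cols = 260_000, 65_536
dense_gib = rows * cols * 8 / 2**30
print(round(dense_gib))          # ~127 GiB, matching the ~128 GB figure above

# If, by contrast, the sensitivity spectrum is compact so that each row holds
# only a few hundred nonzeros (e.g., a kernel radius of about 12 grid points,
# pi * 12^2 ~ 450), sparse storage with a 16-byte complex value plus a 4-byte
# column index per nonzero becomes manageable:
nnz_per_row = 450                # assumed, for illustration
sparse_gib = rows * nnz_per_row * (16 + 4) / 2**30
print(round(sparse_gib, 1))      # on the order of a couple of GiB
```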

To reduce the storage requirement, we examine the sparsity of the matrix **G** by exploiting the physical properties of the coil sensitivity. According to the Biot-Savart law, the coil sensitivity decays roughly quadratically with the distance from the coil. Although the actual rate of decay depends on the size and shape of the coil, in general, when the object is sufficiently far away from the coil, the sensitivity is a smooth function containing only low spatial frequency components. In other words, the convolution kernel defined by the coil sensitivity is compact, leading to a sparse matrix **G** (Fig. 1a).
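This compactness is easy to demonstrate on a toy model (a Gaussian-shaped “sensitivity” is assumed here purely for illustration; real coil profiles differ in detail but share the smoothness):

```python
import numpy as np

# A spatially smooth sensitivity concentrates essentially all of its spectral
# energy within a small k-space radius, which is what makes each row of G
# have only a small number of significant entries.
N = 64
y, x = np.mgrid[0:N, 0:N] - N // 2
s = np.exp(-((x + 10) ** 2 + (y + 10) ** 2) / (2 * 10.0**2))  # smooth falloff
S = np.fft.fftshift(np.fft.fft2(s))
r = np.hypot(x, y)
frac = np.sum(np.abs(S[r <= 8]) ** 2) / np.sum(np.abs(S) ** 2)
print(frac)    # almost all spectral energy lies within a radius-8 disk
```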

Illustration of sparse matrices: (**a**) an example of the sparse design matrix **G** resulting from an echo-planar imaging experiment with an image size of 32 × 32, an eight-channel coil, and a reduction factor of 4; (**b**) a schematic illustration of the **...**

Let *w _{s}* be the cutoff bandwidth (BW) of the sensitivity, i.e.,

$${\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho})=0,\phantom{\rule{0.38889em}{0ex}}\mathit{when}\phantom{\rule{0.16667em}{0ex}}{\Vert {\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho}\Vert}_{2}>{w}_{s}.$$

Here, ||·||_{2} denotes the vector Euclidean norm. In this work, *w _{s}* was chosen such that at the cutoff frequency

Although LSQR or any other suitable iterative method can provide a very accurate solution to Eq. [5], iterative methods are time consuming for repetitive image reconstruction, such as fMRI or DTI, since multiple iterations are generally required for each image. Ideally, one would like to find a generalized inverse matrix **G ^{+}** applicable to all images such that

$${\mathbf{G}}^{+}\mathbf{G}=\mathbf{I},$$

[10]

where **I** is the identity matrix. Note that, since **G** is usually overdetermined, it does not have a right inverse **G ^{+}** such that

$${\mathbf{GG}}^{+}=\mathbf{I}.$$

[11]

A least-squares solution to Eq. [10] is given by

$${\mathbf{G}}^{+}={({\mathbf{G}}^{\mathbf{H}}\mathbf{G})}^{+}{\mathbf{G}}^{\mathbf{H}}.$$

[12]

Here, **G ^{H}** refers to the complex conjugate transpose of **G**.
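A small numerical check (illustrative only) of the left-inverse property in Eqs. [10]–[12], in the common case where **G** has full column rank so that the pseudoinverse in Eq. [12] reduces to an ordinary inverse:

```python
import numpy as np

# For an overdetermined complex G with full column rank, the least-squares
# generalized inverse G+ = (G^H G)^{-1} G^H is a left inverse (G+ G = I)
# but not a right inverse (G G+ is only an orthogonal projection).
rng = np.random.default_rng(2)
G = rng.standard_normal((40, 10)) + 1j * rng.standard_normal((40, 10))

Gp = np.linalg.inv(G.conj().T @ G) @ G.conj().T    # Eq. [12], full-rank case

print(np.allclose(Gp @ G, np.eye(10)))             # True: left inverse
print(np.allclose(G @ Gp, np.eye(40)))             # False: no right inverse
print(np.allclose(Gp, np.linalg.pinv(G)))          # agrees with the SVD pinv
```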

Our method is to approximate **G ^{+}** with a sparse matrix. The concept of a sparse approximate inverse was first introduced by Benson and further developed by other authors as a preconditioner for the CG method (30–32). In Benson’s original work (32), the matrix to be inverted is assumed to be real and square. In our case, the matrix **G** is complex and, in general, rectangular; we therefore work with the square normal matrix **M** = **G ^{H}G**, whose entries are given by

$${M}_{{\rho \rho}^{\prime}}=\sum _{n=1}^{nc}\sum _{\mu}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho}){\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{{\rho}^{\prime}}),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\mathit{for}\phantom{\rule{0.16667em}{0ex}}{\mathbf{\kappa}}_{\mu}:{\Vert {\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho}\Vert}_{2}\le {w}_{s}.$$

[13]

Note that **M** is a sparse matrix, since *M*_{ρρ′} is nonzero only when there exists at least one sampling location **κ**_{μ} such that both ${\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho})$ and ${\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{{\rho}^{\prime}})$ are nonzero.

A sparse approximate inverse **M ^{+}** can be found by solving the following minimization problem:

$$\underset{{\mathbf{M}}^{+}}{min}{\Vert {\mathbf{M}}^{+}\mathbf{M}-\mathbf{I}\Vert}_{F},$$

[14]

subject to constraints on the sparsity pattern of **M ^{+}**, i.e., on the number and positions of the nonzero entries of **M ^{+}**.

To solve Eq. [14], a sparsity pattern needs to be first prescribed for **M ^{+}**. The sparsity pattern can be determined by studying the analytical expression of the inverse of a square matrix. Following the Cayley-Hamilton theorem (33), a full-rank square matrix satisfies its own characteristic polynomial, and therefore its inverse can be written as a matrix polynomial of the matrix itself. Because **M** is banded, with its nonzero entries concentrated near the diagonal, powers of **M** also concentrate near the diagonal, and the dominant entries of **M ^{+}** are likewise expected to lie within a limited distance *w* of the diagonal. Accordingly, ${M}_{{\rho \rho}^{\prime}}^{+}$ is constrained to be nonzero only for ${\Vert {\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}}\Vert}_{2}\le w$, that is,

$$\sum _{{\rho}^{\prime}}{M}_{{\rho \rho}^{\prime}}^{+}\sum _{n=1}^{nc}\sum _{\mu}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{{\rho}^{\prime}}){\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\sigma})=\delta ({\mathbf{k}}_{\rho}-{\mathbf{k}}_{\sigma}),\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\mathit{for}{\Vert {\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}}\Vert}_{2}\le w.$$

[15]

and,

$${M}_{{\rho \rho}^{\prime}}^{+}=0,\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\phantom{\rule{0.38889em}{0ex}}\mathit{for}{\Vert {\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}}\Vert}_{2}>w.$$

This equation can be solved for each row of **M ^{+}**. For the ρ-th row, ρ′ indexes the set of Cartesian grid points **k**_{ρ′} that satisfy ||**k**_{ρ} − **k**_{ρ′}||_{2} ≤ *w*.

Equation [15] can be further modified to include the apodization correction needed to address the effect of the convolution kernel:

$$\sum _{{\rho}^{\prime}}{M}_{{\rho \rho}^{\prime}}^{+}\sum _{n=1}^{nc}\sum _{\mu}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{{\rho}^{\prime}}){\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\sigma})=c({\mathbf{k}}_{\rho}-{\mathbf{k}}_{\sigma}).$$

[16]

An additional advantage of using the convolution kernel instead of the delta function on the right-hand side of the equation is that it reduces the ringing artifact caused by the sharp transition of the delta function. Equation [16] forms a relatively small system of linear equations with (*w* + 1)^{2} unknowns and (*w*_{σ} + 1)^{2} equations, where *w*_{σ} defines the maximum distance between **k**_{ρ} and **k**_{σ}. In matrix form, this set of equations can be written as

$${\mathbf{M}}_{\rho}{\mathbf{m}}_{\rho}^{+}={\mathbf{c}}_{\rho}.$$

[17]

The column vector ${\mathbf{m}}_{\rho}^{+}$ is the stack of the unknowns ${M}_{{\rho \rho}^{\prime}}^{+}$; **M**_{ρ} is the coefficient matrix, which is the submatrix of **M** corresponding to ${\mathbf{m}}_{\rho}^{+}$; and **c**_{ρ} is the column vector formed by *c*(**k**_{ρ} − **k**_{σ}). For the above equation to be overdetermined, it is required that *w*_{σ} > *w*. In this work, *w*_{σ} is chosen to be *w* + *w _{s}*.

The set of equations defined by Eq. [17] is solved for each row of **M ^{+}** by computing the pseudoinverse of **M**_{ρ} via singular value decomposition (SVD):

$${\mathbf{m}}_{\rho}^{+}={({\mathbf{M}}_{\rho})}^{+}{\mathbf{c}}_{\rho}.$$

[18]

The threshold of the singular values was chosen to be the maximum singular value multiplied by the size of the Cartesian grid (i.e., *N*) and the floating-point precision. This threshold is relatively conservative, representing only the level of quantization error. A larger threshold may be chosen to further suppress noise and accelerate the computation. For each ρ, ${\mathbf{m}}_{\rho}^{+}$ yields the ρ-th row of **M ^{+}**. After all rows of **M ^{+}** are computed, the corresponding entries of the sparse approximate inverse **G ^{+}** follow from Eq. [12] as

$${G}_{\rho (\mu ,n)}^{+}=\sum _{{\rho}^{\prime}}{M}_{{\rho \rho}^{\prime}}^{+}({\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}}){\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{{\rho}^{\prime}}).$$

[19]

The summation is over those ρ′ such that ||**k**_{ρ} − **k**_{ρ′}||_{2} ≤ *w*.
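The singular-value threshold described above maps directly onto the relative cutoff used by standard pseudoinverse routines; the tiny example below (an illustration, not the paper’s data) shows how a singular value far below the threshold is discarded rather than amplified:

```python
import numpy as np

# Threshold = (largest singular value) x N x (machine epsilon), expressed via
# numpy's relative `rcond` parameter to pinv.
N_grid = 256
rcond = N_grid * np.finfo(np.float64).eps    # conservative, quantization-level

# A nearly rank-deficient 3x3 example with one negligible singular value.
U, _ = np.linalg.qr(np.random.default_rng(3).standard_normal((3, 3)))
V, _ = np.linalg.qr(np.random.default_rng(4).standard_normal((3, 3)))
A = U @ np.diag([1.0, 0.5, 1e-18]) @ V.T

P = np.linalg.pinv(A, rcond=rcond)           # truncated-SVD pseudoinverse
print(np.linalg.norm(P, 2))                  # moderate, not ~1e18: tiny mode dropped
```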

The above proposed algorithm for *k*-space parallel imaging reconstruction can be summarized as:

- For each coil, compute its sensitivity in *k*-space on a *w*_{s} × *w*_{s} Cartesian grid.
- For the ρ-th row of **M**^{+}, find all **k**_{ρ′} such that ||**k**_{ρ} − **k**_{ρ′}||_{2} ≤ *w* and all **k**_{σ} such that ||**k**_{ρ} − **k**_{σ}||_{2} ≤ *w*_{σ} with *w*_{σ} = *w* + *w*_{s}. Solve Eq. [16] for ${M}_{{\rho \rho}^{\prime}}^{+}$ with SVD.
- For the ρ-th row of **G**^{+}, find all **κ**_{μ} such that ||**κ**_{μ} − **k**_{ρ}||_{2} ≤ *w*_{s} + *w*. Compute ${G}_{\rho (\mu ,n)}^{+}$ with Eq. [19].
- Repeat step 2 and step 3 until all rows of **G**^{+} are computed.
- Compute the *k*-space data points on the Cartesian grid with **m** = **G**^{+}**d**.
- Multiply **m** with a window (e.g., a Fermi window) and Fourier transform **m** to obtain the final image.

Note that, if there is sufficient memory, the modified sensitivities ${\stackrel{\sim}{s}}_{n}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{\rho})$ can be computed once and stored for reuse, further reducing the computation time.

The main computational burden of kSPA lies in the computation of **M**^{+}. For an arbitrary sampling pattern, because the summation over μ differs, the system of equations defined by Eq. [16] is different for each row of **M**^{+}. As a result, Eq. [16] needs to be solved independently for every row of **M**^{+}. However, if the sampling pattern has a certain shift-invariant property, the amount of computation can be reduced dramatically. For example, for an undersampled Cartesian trajectory in which only every *R*-th phase encoding line is acquired, the sampling pattern is periodic with a period of *R* grid points. Because of this self-resembling sampling pattern, there are only *R* unique systems of linear equations, excluding some edge points (discussed at the end of this section). As a result, Eq. [16] only needs to be solved for *R* consecutive sampling points on a representative line along the phase encoding direction. These *R* sets of weights can then be selectively reused by the other grid points based on their relative locations. By solving only *R* sets of equations, the amount of computation is reduced by a factor of *N*^{2}/*R*. Of course, for Cartesian trajectories, aliasing can also simply be unfolded in the image domain pixel by pixel.
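The shift-invariance argument can be checked with a few lines (an illustrative sketch with assumed dimensions, considering only the phase-encode direction):

```python
import numpy as np

# For a Cartesian acquisition that keeps every R-th phase-encode line, the
# set of sampled-neighbor offsets seen by a grid line depends only on its
# position modulo R, so only R distinct systems of equations need solving
# for interior grid lines.
R, N, w = 4, 32, 6
sampled = set(range(0, N, R))                 # acquired phase-encode lines

patterns = []
for k in range(N):
    offsets = tuple(s - k for s in sorted(sampled) if abs(s - k) <= w)
    patterns.append(offsets)

interior = patterns[w:N - w]                  # away from the matrix edges
print(len(set(interior)))                     # exactly R unique patterns
```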

For an arbitrary non-Cartesian trajectory, the sampling pattern is not periodic; therefore, the above implementation based on the periodicity of the sampling pattern is no longer feasible. For arbitrary trajectories, we use a block-by-block implementation of kSPA that significantly reduces the total amount of computation. We first observe that, although the matrix **M**_{ρ} defined in Eqs. [16] and [17] differs between *k*-space grid points, it does not differ significantly for two adjacent grid points. Specifically, given two neighboring points **k**_{ρ1} and **k**_{ρ2}, the set of **k**_{ρ′} satisfying ||**k**_{ρ1} − **k**_{ρ′}||_{2} ≤ *w* and the set of **k**_{ρ′} satisfying ||**k**_{ρ2} − **k**_{ρ′}||_{2} ≤ *w* consist mostly of the same grid points, except for a few points near the edge of the circle. Based on this observation, we can divide the *k*-space into a number of blocks (Fig. 2). Each block has a half-width of *w _{b}*; within each block, **M**_{ρ} is constructed only once and shared by all grid points in the block.

An illustration of the block-wise implementation of kSPA. The solid straight lines indicate the Cartesian grid, which is divided into small blocks shown in dotted squares. Each block has a half-width of *w*_{b}. For each block, **M**_{ρ} is constructed once **...**

It should be pointed out that, for grid points within a distance of *w* from the edge of the matrix, ${\mathbf{m}}_{\rho}^{+}$ cannot be computed accurately using the block-wise implementation. This limitation results from the fact that there are no sampling points outside the matrix. For example, a grid point on the left edge of the matrix has sampling points only on its right-hand side, while a grid point in the middle of the matrix has sampling points on both sides; the resulting equations defined by Eq. [16] are vastly different. Therefore, in the block-wise implementation, the edge points need to be computed separately. Alternatively, the block-wise implementation can be applied to all grid points, and the synthesized data towards the edge can then be filtered out with a low-pass filter, for example a Fermi filter. For simplicity, the latter approach is implemented in this work.

The sensitivity information required for kSPA reconstruction can be obtained with any established sensitivity estimation method (5,34–36). For example, the sensitivity can be obtained with an additional body coil image that is assumed to have homogeneous sensitivity. In the absence of a body coil image, the sum of squares image can also be used for sensitivity estimation (Fig. 3a). Another common method is to use a low resolution image that is either acquired with an extra navigator or with a self-navigated trajectory, for example a variable density spiral (37). The self-navigation approach avoids the need for a separate calibration scan and errors from misregistration.

The estimation of coil sensitivity map using 2D thin-plate spline fitting. (a) An example of raw coil sensitivity map calculated by dividing the coil image with the sum-of-squares image. This raw sensitivity map is typically noisy. (b) The raw sensitivity **...**

Because the kSPA algorithm utilizes only the low-frequency components of the coil sensitivity, it is important to preprocess the raw coil sensitivity to remove high-frequency noise and residual tissue contrast (Fig. 3a). Any signal void or noise in the sensitivity map will contribute errors to the estimation of the low-frequency components of the coil sensitivity. Here, we use the thin-plate spline function to smooth the coil sensitivity because it not only provides a sensitivity estimate in regions of signal void inside the object, but also extrapolates the sensitivity outside the object. Given a set of *n* control points (*x*_{i}, *y*_{i}), the thin-plate spline function takes the form

$$f(x,y)=\sum _{i=1}^{n}{a}_{i}{r}_{i}^{2}log{r}_{i}^{2}+{a}_{n+1}x+{a}_{n+2}y+{a}_{n+3},$$

[20]

$${r}_{i}^{2}={(x-{x}_{i})}^{2}+{(y-{y}_{i})}^{2}.$$

[21]

Given a set of measured points on the sensitivity map, the smoothed real and imaginary part of the full sensitivity map can be estimated independently by solving the following minimization problem:

$$\underset{f}{min}\phantom{\rule{0.16667em}{0ex}}p\sum _{i=1}^{n}{(s({x}_{i},{y}_{i})-f({x}_{i},{y}_{i}))}^{2}+(1-p)\int \int \left[{\left(\frac{{\partial}^{2}f}{\partial {x}^{2}}\right)}^{2}+2{\left(\frac{{\partial}^{2}f}{\partial x\partial y}\right)}^{2}+{\left(\frac{{\partial}^{2}f}{\partial {y}^{2}}\right)}^{2}\right]\mathit{dx}\phantom{\rule{0.16667em}{0ex}}\mathit{dy}.$$

[22]

Here *s*(*x*_{i}, *y*_{i}) is the measured raw sensitivity at the *i*-th control point, and *p* ∈ [0, 1] is a smoothing parameter that balances the fidelity to the measured points against the smoothness of the fitted surface.

To remove any potential fast-oscillating phase in the raw coil sensitivity that is common to all coils, the phase of the sensitivity map of a randomly chosen coil is subtracted from all coils. Each raw coil sensitivity map is masked based on a threshold set on the order of the noise level of the sum-of-squares image (Fig. 3b). A small set of support points is then randomly selected within the masked region and used to solve Eq. [22] (Fig. 3c). This set of control points comprises only 2.5% of the total points inside the masked region. As can be seen, the thin-plate spline function not only fits the support points within the object well, but also provides a reasonable extrapolation beyond the boundary of the object (Fig. 3d–f).
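The smoothing step can be sketched with SciPy’s thin-plate-spline interpolator (a sketch only: the surface, noise level, number of control points, and smoothing value below are assumptions, not the paper’s parameters):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Fit a smoothing thin-plate spline through a small random subset of noisy
# "sensitivity" samples and evaluate it on the full grid.
rng = np.random.default_rng(5)
N = 32
g = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(g, g, indexing="ij")
true = np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / 0.2)   # smooth toy surface
noisy = true + 0.05 * rng.standard_normal((N, N))          # raw, noisy map

# a small set of control points, a few percent of the pixels
idx = rng.choice(N * N, size=100, replace=False)
pts = np.column_stack([X.ravel()[idx], Y.ravel()[idx]])
vals = noisy.ravel()[idx]

tps = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=1e-3)
fit = tps(np.column_stack([X.ravel(), Y.ravel()])).reshape(N, N)

rmse = np.sqrt(np.mean((fit - true) ** 2))
print(rmse)    # the fitted surface stays close to the underlying smooth one
```

For complex sensitivities, the real and imaginary parts would be fitted independently, as described above.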

The kSPA algorithm was applied to various *k*-space trajectories, including a Cartesian trajectory, a spiral trajectory, and a random trajectory. All sampling trajectories correspond to an image matrix size of 128 × 128. The spiral trajectory was designed based on the analytic method by Glover (39). It contains eight interleaves, each with 1861 sampling points. A Shepp-Logan phantom and an eight-channel receiving coil were used to simulate the *k*-space data via inverse gridding (6,27,40). The image seen by each coil was simulated by multiplying the phantom image with the corresponding coil sensitivity map. To reduce the effect of *k*-space circular convolution resulting from this image-domain multiplication and subsequent discrete Fourier transform, both the phantom image and coil sensitivity maps were upsampled by a factor of 2 prior to the multiplication. To improve the accuracy of inverse gridding, each coil image was further zero-padded by a factor of 2 before performing the Fourier transform. In addition, the edges of the coil sensitivity maps that were outside the imaging object were tapered with a Fermi filter to further suppress the energy leakage caused by Gibbs ringing. Images were reconstructed using reduction factors ranging from 1 to 4.

In vivo brain images of a healthy volunteer were acquired using a spiral readout trajectory on a 1.5T whole-body system (GE Signa; GE Healthcare, Waukesha, WI, USA) equipped with a maximum gradient of 50 mT/m and a slew rate of 150 mT/m/s. An eight-channel head coil (MRI Devices Corporation, Pewaukee, WI, USA) was used for image acquisition. The scan parameters were: FOV = 24 cm, TR = 4 s, TE = 90 ms, BW = 125 kHz, and matrix size = 256 × 256. The spiral readout trajectory consists of 32 interleaves. Each interleaf has 1744 samples. Under-sampling in *k*-space was achieved by skipping a certain number of interleaves. Coil sensitivities were measured using a spiral-in navigator that samples the *k*-space on a 64 × 64 grid and precedes each spiral-out interleaf.

All image reconstructions were performed using Matlab (Version 7, Release 14; The MathWorks Inc., USA) running on a Linux PC equipped with a 3.20-GHz Intel Xeon CPU and 5 GB of random access memory (RAM). For the simulation, the kSPA reconstruction parameters were: *w _{s}* = 12,

Figure 4 compares images reconstructed with gridding and kSPA for all three types of trajectories. Reduction factors range from 1 to 4 with one meaning no reduction. For each trajectory, the first row shows images reconstructed with a gridding algorithm (28) and the second row shows images reconstructed with kSPA. The kSPA parameters were *w _{s}* = 12,

kSPA reconstruction for three sampling trajectories with undersampling factors up to 4: (**a**) Cartesian, (**b**) spiral, and (**c**) random trajectory. The first row of each group shows images reconstructed with a simple gridding procedure. With *k*-space under-sampling **...**

To further assess the dependence of the kSPA reconstruction on the three window sizes, *w _{s}*,

Figure 7 shows the in vivo results. The first row shows a typical coil image reconstructed with gridding for each reduction factor, while the second row shows the kSPA images. The kSPA parameters were *w _{s}* = 8,

We have shown that kSPA is a *k*-space-based parallel imaging reconstruction algorithm that can be applied to arbitrary *k*-space sampling trajectories. While the iterative SENSE algorithm relates the image-domain data to the acquired *k*-space data through a sensitivity-encoded Fourier transform, kSPA expresses the acquired *k*-space data as a convolution between the spectrum of the coil sensitivity and the spectrum of the image. The kSPA reconstruction, therefore, is a deconvolution based on undersampled data, accomplished by approximating the inverse of the design matrix with a sparse matrix. Its feasibility and accuracy are verified in the simulation study using various undersampling ratios and various trajectories, including a Cartesian, a spiral, and a random trajectory. In vivo studies also showed that the image quality of kSPA is comparable to that of iterative SENSE, with a slightly increased computation time for a single image. For repetitive image reconstruction, an important strength of kSPA is its significant speed advantage over the iterative SENSE approach, which becomes increasingly important for time-series data, DTI, and spectroscopic imaging.

The image reconstruction process of kSPA offers an analogy to the traditional gridding procedure (28). Based on Eq. [19], the data at a *k*-space grid point can be computed as

$$m({\mathbf{k}}_{\rho})=\sum _{{\rho}^{\prime}}{M}_{{\rho \rho}^{\prime}}^{+}({\mathbf{k}}_{\rho}-{\mathbf{k}}_{{\rho}^{\prime}})\sum _{n=1}^{nc}\sum _{\mu}{\stackrel{\sim}{s}}_{n}^{\ast}({\mathbf{\kappa}}_{\mu}-{\mathbf{k}}_{{\rho}^{\prime}}){m}_{n}({\mathbf{\kappa}}_{\mu}).$$

[23]

This equation can be interpreted as a double convolution. Each coil’s data are first gridded onto a Cartesian grid with a gridding kernel equal to the complex conjugate of that coil’s modified sensitivity spectrum, i.e., ${\stackrel{\sim}{s}}_{n}^{\ast}(\mathbf{k})$. The gridded data from all coils are then summed together. Finally, the sum of all gridded data is convolved with each row of **M**^{+} to estimate each point *m*(**k**_{ρ}). In general, **M**^{+} provides a spatially varying convolution kernel. This gridding interpretation is illustrated in Fig. 8.

Fig. 8. A schematic illustration of the kSPA algorithm showing the gridding analogy. From left to right, *k*-space data sampled on arbitrary trajectories (small black circles) are first gridded onto Cartesian grids using a convolution kernel equal to the **...**

In the case of a fully sampled single-coil acquisition, the gridding kernel becomes the conventional gridding kernel *c*(**k**) instead of
${\stackrel{\sim}{s}}_{n}^{\ast}(\mathbf{k})$, and **M**^{+} is simply the postsampling density compensation. More specifically, assuming the single-channel coil has homogeneous reception sensitivity, the Fourier transform of the sensitivity is a delta function. As a result,
${\stackrel{\sim}{s}}_{n}^{\ast}(\mathbf{k})$ equals *c*(**k**) following Eq. [4]. If the coil has inhomogeneous reception sensitivity, then in order to correct for the sensitivity modulation, the gridding kernel has to be ${\stackrel{\sim}{s}}^{\ast}(\mathbf{k})$, that is, the spectrum of the coil sensitivity. When multiple coils are used, such a gridding procedure can be carried out independently for each coil. However, if the *k*-space is undersampled, the image resulting from this gridding step is still aliased. The aliasing artifacts have to be removed by multiplying the gridded *k*-space data by the unaliasing matrix **M**^{+}.

The quality of kSPA reconstruction depends on a number of parameters, namely the cutoff BW of the coil sensitivity *w _{s}*, the reconstruction kernel width *w*, and the block size *w _{b}*.

Given a sufficiently wide cutoff BW, the kSPA image quality is mainly determined by the reconstruction kernel width *w*. The reconstruction kernel width *w* limits the total number of nonzero entries in each row of the sparse approximate inverse **M**^{+}, or, in other words, it decides how many neighboring points are used to synthesize each *k*-space data point on the Cartesian grid.

The block size *w _{b}*, together with the reconstruction kernel width *w*, determines the size of the small inverse subproblems that are solved to compute the sparse approximate inverse, and hence the time required to compute **M**^{+}.

In summary, the three parameters *w _{s}*, *w*, and *w _{b}* should be chosen to balance reconstruction accuracy against computation time.
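The role of the kernel width *w* in bounding the sparsity of **M**^{+} can be illustrated with a small sparse-approximate-inverse sketch. Everything in this hypothetical 1D example (the banded stand-in for the design matrix, the sizes, the least-squares formulation per row) is an assumption for illustration, not the paper's algorithm: each row of **M**^{+} is obtained from a small least-squares subproblem whose support, and hence number of nonzeros, is limited to grid points within *w*.

```python
import numpy as np
from scipy.sparse import lil_matrix

# Illustrative 1D sketch: kernel width w bounds the nonzeros per row of M+.
# The banded "design matrix" below is invented for this toy example.
rng = np.random.default_rng(1)
G = 24
w = 3                                    # reconstruction kernel (half-)width

M = np.eye(G)                            # near-identity banded design matrix
for d in (1, 2):
    M += 0.1 * np.diag(rng.standard_normal(G - d), d)
    M += 0.1 * np.diag(rng.standard_normal(G - d), -d)

M_plus = lil_matrix((G, G))
for rho in range(G):
    # Only grid points within w of k_rho may receive nonzero weights.
    support = np.arange(max(0, rho - w), min(G, rho + w + 1))
    e = np.zeros(G)
    e[rho] = 1.0
    # Small least-squares subproblem for row rho of M+:
    # minimize ||x^T M - e_rho^T|| with x supported near rho.
    x, *_ = np.linalg.lstsq(M[support, :].T, e, rcond=None)
    M_plus[rho, support] = x

# Each row has at most 2w+1 nonzeros; M+ M should be close to the identity.
residual = np.linalg.norm(M_plus.tocsr() @ M - np.eye(G))
```

Widening *w* enlarges each subproblem and lowers the residual, at the cost of a denser **M**^{+} and a longer precomputation, which mirrors the accuracy/time trade-off discussed above.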

The kSPA algorithm synthesizes *k*-space data directly. Like other *k*-space–based algorithms, this process is noniterative, which is an advantage in applications that require repetitive image reconstruction.

Similar to GRAPPA and PARS, kSPA synthesizes *k*-space data using limited data support in *k*-space. In particular, kSPA shares some unique characteristics with PARS in that both compute the reconstruction weights purely from the coil sensitivity maps rather than from the measured raw data. In addition, by recognizing the compact structure of the coil sensitivity's spectrum, PARS and kSPA enable the computation of reconstruction weights by solving a set of small inverse subproblems and permit the storage and repetitive application of the reconstruction weights. However, kSPA differs fundamentally from PARS in the way the weights are computed and in the meaning of the weights. In PARS, the weights are the weighting factors that combine all coil sensitivity maps to synthesize a given coil sensitivity map. Accordingly, PARS reconstructs each coil image one-by-one. In kSPA, on the other hand, the weights are the inverse matrix of the autocorrelation function of the sampled spectrum of the coil sensitivity. kSPA reconstructs the image itself instead of the coil images. Because the weights only need to be computed once in kSPA, whereas a different set of weights has to be computed for each coil in PARS, kSPA offers a significant gain in speed, especially when there is a large number of coils.

Although kSPA is slightly slower than the iterative SENSE algorithm for a single image reconstruction in the example shown in Table 1, the result of kSPA reconstruction can be applied directly to reconstruct subsequent images through a simple matrix-vector multiplication, which can be accomplished in one second for this particular example. As a result, kSPA is particularly useful, for example, for fMRI and perfusion imaging, in which a large number of images are acquired in the same study. It is estimated that, for 1000 images, the image reconstruction time can be reduced by a factor of 100.
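The repetitive-reconstruction step amounts to reusing the stored sparse matrix across a time series. A minimal sketch with `scipy.sparse` (the grid size, sparsity density, image count, and the random stand-in for **M**^{+} are all assumptions for illustration):

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Once M+ is computed and stored as a sparse matrix, reconstructing each new
# image reduces to one sparse matrix-vector product (no per-image iteration).
# Sizes and density here are illustrative, not from the paper.
rng = np.random.default_rng(2)
n_grid = 4096                       # Cartesian k-space grid points
M_plus = sparse_random(n_grid, n_grid, density=0.002,
                       random_state=rng, format="csr")

n_images = 50                       # e.g. an fMRI time series
gridded = rng.standard_normal((n_images, n_grid))   # coil-combined, gridded data

# Apply the precomputed weights repetitively.
recon = np.stack([M_plus @ g for g in gridded])
```

Because each application is a single sparse matrix-vector product, the per-image cost scales with the number of nonzeros in **M**^{+} rather than with an iteration count, which is the source of the speed advantage for time-series data.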

A general parallel imaging reconstruction algorithm termed kSPA has been introduced that is *k*-space–based and applies to arbitrary *k*-space sampling trajectories. This algorithm performs the reconstruction in *k*-space by approximating the inverse of the design matrix with a sparse matrix. The computation is noniterative, and the results can be applied to reconstruct all subsequent images with the same prescription (e.g., slice location, FOV, resolution, and coil placement) acquired in the same imaging session. Both simulated and in vivo studies have shown that kSPA provides excellent image quality that is comparable to that of iterative SENSE. It is expected that kSPA will find great utility in various applications including fMRI, DTI, perfusion-weighted imaging, as well as spectroscopic imaging.

Grant sponsor: National Institutes of Health (NIH); Grant numbers: 1K99NS057943-01; 1R01NS35959; 1R01EB002771; Grant sponsor: Center of Advanced MR Technology of Stanford; Grant number: NCRR P41 RR 09784; Grant sponsor: Lucas Foundation.

We thank Michael Saunders, Ph.D., for discussions on LSQR, Murat Aksoy, M.S., for assisting the image acquisition, Huanzhou Yu, Ph.D., for discussions on sparse matrix, and Karen Chen, M.S., for her editorial assistance.

1. Carlson JW. An algorithm for NMR imaging reconstruction based on multiple RF receiver coils. J Magn Reson. 1987;74:376–380.

2. Hutchinson M, Raff U. Fast MRI data acquisition using multiple detectors. Magn Reson Med. 1988;6:87–91. [PubMed]

3. Roemer PB, Edelstein WA, Hayes CE, Souza SP, Mueller OM. The NMR phased array. Magn Reson Med. 1990;16:192–225. [PubMed]

4. Carlson JW, Minemura T. Imaging time reduction through multiple receiver coil data acquisition and image reconstruction. Magn Reson Med. 1993;29:681–687. [PubMed]

5. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn Reson Med. 1999;42:952–962. [PubMed]

6. Pruessmann KP, Weiger M, Bornert P, Boesiger P. Advances in sensitivity encoding with arbitrary *k*-space trajectories. Magn Reson Med. 2001;46:638–651. [PubMed]

7. Kyriakos WE, Panych LP, Kacher DF, Westin CF, Bao SM, Mulkern RV, Jolesz FA. Sensitivity profiles from an array of coils for encoding and reconstruction in parallel (SPACE RIP). Magn Reson Med. 2000;44:301–308. [PubMed]

8. Sodickson DK, Manning WJ. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays. Magn Reson Med. 1997;38:591–603. [PubMed]

9. Bydder M, Larkman DJ, Hajnal JV. Generalized SMASH imaging. Magn Reson Med. 2002;47:160–170. [PubMed]

10. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med. 2002;47:1202–1210. [PubMed]

11. Griswold MA, Blaimer M, Breuer F, Heidemann RM, Mueller M, Jakob PM. Parallel magnetic resonance imaging using the GRAPPA operator formalism. Magn Reson Med. 2005;54:1553–1556. [PubMed]

12. Yeh EN, McKenzie CA, Ohliger MA, Sodickson DK. Parallel magnetic resonance imaging with adaptive radius in *k*-space (PARS): constrained image reconstruction using *k*-space locality in radiofrequency coil encoded data. Magn Reson Med. 2005;53:1383–1392. [PubMed]

13. Griswold MA, Jakob PM, Nittka M, Goldfarb JW, Haase A. Partially parallel imaging with localized sensitivities (PILS). Magn Reson Med. 2000;44:602–609. [PubMed]

14. Samsonov AA, Block WF, Arunachalam A, Field AS. Advances in locally constrained *k*-space-based parallel MRI. Magn Reson Med. 2006;55:431–438. [PubMed]

15. Hestenes MR, Stiefel EL. Method of conjugate gradients for solving linear systems. J Res Natl Bur Standards. 1952;49:409–435.

16. Heberlein K, Hu X. Auto-calibrated parallel spiral imaging. Magn Reson Med. 2006;55:619–625. [PubMed]

17. Heidemann RM, Griswold MA, Seiberlich N, Kruger G, Kannengiesser SA, Kiefer B, Wiggins G, Wald LL, Jakob PM. Direct parallel image reconstructions for spiral trajectories using GRAPPA. Magn Reson Med. 2006 [PubMed]

18. Ogawa S, Menon RS, Tank DW, Kim SG, Merkle H, Ellermann JM, Ugurbil K. Functional brain mapping by blood oxygenation level-dependent contrast magnetic resonance imaging. A comparison of signal characteristics with a biophysical model. Biophys J. 1993;64:803–812. [PubMed]

19. Ogawa S, Lee TM, Kay AR, Tank DW. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proc Natl Acad Sci USA. 1990;87:9868–9872. [PubMed]

20. Golay X, Pruessmann KP, Weiger M, Crelier GR, Folkers PJ, Kollias SS, Boesiger P. PRESTO-SENSE: an ultrafast whole-brain fMRI technique. Magn Reson Med. 2000;43:779–786. [PubMed]

21. Warach S, Levin JM, Schomer DL, Holman BL, Edelman RR. Hyperperfusion of ictal seizure focus demonstrated by MR perfusion imaging. AJNR Am J Neuroradiol. 1994;15:965–968. [PubMed]

22. Warach S, Dashe JF, Edelman RR. Clinical outcome in ischemic stroke predicted by early diffusion-weighted and perfusion magnetic resonance imaging: a preliminary analysis. J Cereb Blood Flow Metab. 1996;16:53–59. [PubMed]

23. Bammer R, Auer M, Keeling SL, Augustin M, Stables LA, Prokesch RW, Stollberger R, Moseley ME, Fazekas F. Diffusion tensor imaging using single-shot SENSE-EPI. Magn Reson Med. 2002;48:128–136. [PubMed]

24. Basser PJ, Mattiello J, LeBihan D. MR diffusion tensor spectroscopy and imaging. Biophys J. 1994;66:259–267. [PubMed]

25. Jaermann T, Crelier G, Pruessmann KP, Golay X, Netsch T, van Muiswinkel AM, Mori S, van Zijl PC, Valavanis A, Kollias S, Boesiger P. SENSE-DTI at 3 T. Magn Reson Med. 2004;51:230–236. [PubMed]

26. Liu C, Bammer R, Kim DH, Moseley ME. Self-navigated interleaved spiral (SNAILS): application to high-resolution diffusion tensor imaging. Magn Reson Med. 2004;52:1388–1396. [PubMed]

27. Liu C, Moseley ME, Bammer R. Simultaneous phase correction and SENSE reconstruction for navigated multi-shot DWI with non-Cartesian *k*-space sampling. Magn Reson Med. 2005;54:1412–1422. [PubMed]

28. Jackson JI, Meyer CH, Nishimura DG, Macovski A. Selection of a convolution function for Fourier inversion using gridding. IEEE Trans Med Imaging. 1991;10:473–478. [PubMed]

29. Paige CC, Saunders MA. LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Trans Math Software. 1982;8(1):43–71.

30. Benzi M, Meyer C, Tuma M. A sparse approximate inverse preconditioner for the conjugate gradient method. SIAM J Sci Comput. 1996;17:1135.

31. Benson M, Frederickson P. Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems. Utilitas Math. 1982;22:127–140.

32. Benson M. Master’s Thesis. Thunder Bay, Canada: Lakehead University; 1973. Iterative solution of large scale linear systems.

33. Golub GH, Van Loan CF. Matrix computations. Baltimore, MD: Johns Hopkins; 1996. p. 590.

34. Bydder M, Larkman DJ, Hajnal JV. Combination of signals from array coils using image-based estimation of coil sensitivity profiles. Magn Reson Med. 2002;47:539–548. [PubMed]

35. Griswold MA, Breuer F, Blaimer M, Kannengiesser S, Heidemann RM, Mueller M, Nittka M, Jellus V, Kiefer B, Jakob PM. Autocalibrated coil sensitivity estimation for parallel imaging. NMR Biomed. 2006;19:316–324. [PubMed]

36. Lin FH, Chen YJ, Belliveau JW, Wald LL. A wavelet-based approximation of surface coil sensitivity profiles for correction of image intensity inhomogeneity and parallel imaging reconstruction. Hum Brain Mapp. 2003;19:96–111. [PubMed]

37. Kim DH, Adalsteinsson E, Spielman DM. Simple analytic variable density spiral design. Magn Reson Med. 2003;50:214–219. [PubMed]

38. Reinsch C. Smoothing by spline functions. Numer Math. 1967;10:177–183.

39. Glover GH. Simple analytic spiral *k*-space algorithm. Magn Reson Med. 1999;42:412–415. [PubMed]

40. Rasche V, Proksa R, Sinkus R, Bornert P, Eggers H. Resampling of data between arbitrary grids using convolution interpolation. IEEE Trans Med Imaging. 1999;18:385–392. [PubMed]
