IEEE Trans Med Imaging. Author manuscript; available in PMC Dec 2, 2010.
PMCID: PMC2996098
NIHMSID: NIHMS250370
Auto-Calibrated Parallel Imaging Reconstruction for Arbitrary Trajectories Using k-Space Sparse Matrices (kSPA)
Chunlei Liu, Jian Zhang, and Michael E. Moseley
Chunlei Liu, Brain Imaging and Analysis Center, School of Medicine, Duke University, Durham, NC 27705 USA;
Corresponding author: Chunlei Liu (chunlei.liu@duke.edu).
Jian Zhang: jianz@stanford.edu; Michael E. Moseley: moseley@stanford.edu.
Image acquisition of magnetic resonance imaging (MRI) can be accelerated by using multiple receiving coils simultaneously. The problem of reconstructing an unaliased image from partially sampled k-space data can be formulated as a large system of sparse linear equations. The k-space sparse matrix (kSPA) algorithm proposes to solve the system of equations by finding a sparse approximate inverse. This algorithm has been shown to accelerate the image reconstruction for a large number of coils. The original kSPA algorithm requires knowledge of coil sensitivities. Here, we propose and demonstrate an auto-calibrated kSPA algorithm that does not require the explicit computation of the coil sensitivity maps. We have also shown that calibration data, in principle, can be acquired at any region of k-space. This property applies to arbitrary sampling trajectories and all reconstruction algorithms based on k-space. In practice, because of its higher SNR, calibration data acquired at the center of k-space performed more favorably. Such auto-calibration can be advantageous in cases where an accurate sensitivity map is difficult to obtain.
Index Terms: k-space sparse matrix (kSPA), magnetic resonance imaging (MRI), parallel imaging, sparse approximate inverse, sparse matrix
By utilizing the spatial variation of the receiving coil’s sensitivity, parallel imaging has become a very useful technique for improving the acquisition speed of magnetic resonance imaging (MRI) [1]–[5]. This fast imaging technique has found great utility in a variety of imaging applications [6]–[13].
The main rationale of parallel MRI is to reduce the time-consuming phase-encoding process by exploiting the spatial encoding provided by the coil sensitivities. As a result, the image reconstruction procedure shifts from performing a fast Fourier transform to solving a large system of linear equations. In Fourier-transform-based MRI, the phase encoding coefficients can be controlled by varying the magnetic field gradients. Although the gradients may not be perfect, they can be predetermined fairly accurately with the continuing advance of gradient coil design. In parallel imaging, however, the spatial encoding coefficient, that is, the coil’s sensitivity, cannot be controlled by the MRI scanner operator and is typically unknown. The image quality of parallel imaging depends, to a large degree, on the accuracy of the coil sensitivity. Therefore, it is crucial to determine coil sensitivities as accurately as possible.
In general, there are two approaches to determine and utilize the coil sensitivity information. One approach is to measure sensitivity maps directly and use them to form a system of linear equations, such as the approach adopted in sensitivity encoding (SENSE) [3], [4]. The other approach is to estimate the reconstruction weights from pre-measured calibration data without determining the sensitivity maps explicitly, such as the approach proposed in auto-calibrated simultaneous acquisition of spatial harmonics (AUTO-SMASH) [14] and the approach adopted in generalized autocalibrating partially parallel acquisitions (GRAPPA) [5]. The general idea of GRAPPA is that sensitivity information is inherently captured by the calibration data; the reconstruction weights can therefore be trained on this calibration data. GRAPPA was originally introduced for Cartesian sampling trajectories; however, some recent works have attempted to extend this technique to non-Cartesian trajectories [15]–[17].
For the purpose of fast reconstruction of time-series images and massively parallel imaging using a large number of coils, Liu et al. recently proposed a method for parallel imaging reconstruction by inverting a k-space sparse matrix (kSPA) [18]. In the kSPA algorithm, the image reconstruction problem is formulated as a system of sparse linear equations in k-space, and the system of equations is solved by computing a sparse approximate inverse [19], [20]. This algorithm shares some characteristics of both SENSE and GRAPPA. Similar to SENSE, kSPA requires an explicit estimation of the coil sensitivity. On the other hand, kSPA reconstructs an image in k-space, similar to GRAPPA. In contrast to GRAPPA, kSPA reconstructs a single image rather than one image for each coil.
In this paper, we propose a method of performing kSPA reconstruction without estimating the coil sensitivity. Rather than calculating the sparse reconstruction matrix from the coil sensitivity, the reconstruction matrix is computed directly from calibration data. In addition, we show mathematically that the calibration data can be situated anywhere in k-space and are not restricted to the center of k-space. The proposed algorithm demonstrates excellent image quality for both simulated and in vivo data. Although the reconstruction speed is generally slower than that of the original kSPA algorithm, the proposed kSPA variation can be particularly advantageous in cases where an accurate sensitivity map is difficult to obtain.
To aid the presentation, a table is provided that summarizes and defines the notations used (Table I).
TABLE I
List of Symbols
A. kSPA Algorithm
Following the Nyquist theorem, the data acquired by the nth channel of a multichannel coil at an arbitrary k-space location κμ can be written as [18]
$d_n(\kappa_\mu) = \sum_{\rho} m(k_\rho)\,[s_n * c](\kappa_\mu - k_\rho)$   (1)
To differentiate a Cartesian grid from an arbitrary sampling location, the Latin letter k is used to indicate a Cartesian grid point while the Greek letter κ is used to indicate an arbitrary location. Here, m(kρ) represents the Fourier transform of the underlying magnetization distribution as expressed in k-space (ρ = 1 ··· N2 for an N × N grid); sn(kρ) is the Fourier transform of the receiving sensitivity sn(rρ) of the nth coil; c(κμ) is the interpolation kernel; and the sign “*” represents the 2-D convolution. In this study, c(κμ) is approximated by a finite-width Kaiser-Bessel window. It should be noted that although a typical MRI image is not band limited, higher frequency components beyond the coverage of k-space are irrecoverable and, hence, assumed to be zero throughout this paper.
Equation (1) can be simplified to
$d_n(\kappa_\mu) = \sum_{\rho} m(k_\rho)\, s_n(\kappa_\mu - k_\rho)$   (2)
Here sn(κμ) is the spectrum of the coil sensitivity interpolated at an arbitrary k-space location κμ according to
$s_n(\kappa_\mu) = \sum_{\rho} s_n(\kappa_\mu - k_\rho)\, c(k_\rho)$   (3)
Notice that both sn(κμkρ) and c(kρ) are sampled on regular Cartesian grids for a given κμ. With multiple receiving coils and a number of sampling locations, a system of linear equations can be constructed using (2) and it can be written in a matrix form as
$\mathbf{d} = \mathbf{G}\,\mathbf{m}$   (4)
Here, d is a column vector stacked with the k-space data acquired by all coils; m is a column vector stacked with the k-space value to be estimated; G is the coefficient matrix whose entries are formed by corresponding values of sn (κμkρ). In other words, G is an inverse gridding operator that transforms data on a Cartesian grid to arbitrary locations in k-space. More specifically, d, G and m are defined, respectively
$\mathbf{d} = [\, d_1(\kappa_1), \ldots, d_1(\kappa_{n_k}),\ d_2(\kappa_1), \ldots, d_{n_c}(\kappa_{n_k}) \,]^T$   (5)
$[\mathbf{G}]_{(n,\mu),\,\rho} = s_n(\kappa_\mu - k_\rho)$   (6)
$\mathbf{m} = [\, m(k_1), m(k_2), \ldots, m(k_{N^2}) \,]^T$   (7)
In kSPA image reconstruction, an image is reconstructed from undersampled k-space data by computing the pseudo inverse operator
$\hat{\mathbf{m}} = (\mathbf{G}^H\mathbf{G})^{-1}\mathbf{G}^H\,\mathbf{d}$   (8)
For typical MRI exams, G is prohibitively large and its inverse cannot be practically computed. To realistically compute the inverse operator, the kSPA algorithm makes two basic approximations: the approximation of G as a sparse matrix and the approximation of (GHG)−1 as a sparse matrix.
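To make the first approximation concrete, the following Python sketch (NumPy/SciPy; a 1-D k-space toy problem with a synthetic Gaussian-shaped sensitivity spectrum, not the paper's actual implementation) assembles G as a sparse matrix by keeping only the entries sn(κμ − kρ) that fall within the cutoff bandwidth ws.

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_sparse_G(kappa, k_grid, sens_spec, n_coils, ws):
    """Assemble the (n_coils * n_samples) x N inverse-gridding operator G for a
    1-D toy problem, keeping only entries with |kappa_mu - k_rho| <= ws (the
    sensitivity spectrum is assumed negligible beyond the cutoff ws)."""
    n_samples, N = len(kappa), len(k_grid)
    G = lil_matrix((n_coils * n_samples, N), dtype=complex)
    for n in range(n_coils):
        for mu, km in enumerate(kappa):
            # Cartesian grid points within the sensitivity support around kappa_mu
            rho = np.flatnonzero(np.abs(km - k_grid) <= ws)
            G[n * n_samples + mu, rho] = sens_spec(n, km - k_grid[rho])
    return G.tocsr()

# Toy usage: 8 coils, Gaussian-shaped spectra with a coil-dependent phase ramp
# (purely illustrative), a 64-point grid, and randomly placed sample locations.
rng = np.random.default_rng(0)
k_grid = np.arange(64, dtype=float)
kappa = rng.uniform(0, 63, size=128)
sens_spec = lambda n, dk: np.exp(-0.5 * (dk / 2.0) ** 2) * np.exp(1j * 0.1 * n * dk)
G = build_sparse_G(kappa, k_grid, sens_spec, n_coils=8, ws=6)
print(G.shape, G.nnz)   # far fewer stored entries than the full 1024 x 64 matrix
```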
The first approximation is valid because coil sensitivity contains only low spatial frequency components. Let ws be the cutoff bandwidth of the sensitivity, then
$s_n(k) \approx 0 \quad \text{for } \|k\| > w_s$   (9)
An empirical choice of ws is such that at the cutoff frequency sn(kρ) decreases to around 0.36% of its peak value. The second approximation originates from the concept of the sparse approximate inverse [19], [20], which was first introduced as a preconditioner for the conjugate gradient (CG) method. Let M = GHG and M−1 be its pseudo-inverse; M is an N2 × N2 symmetric matrix whose entries are given by
$M_{\rho\rho'} = \sum_{n}\sum_{\mu} s_n^{*}(\kappa_\mu - k_\rho)\, s_n(\kappa_\mu - k_{\rho'})$   (10)
Note that Mρρ′ is nonzero only if there exists at least one sampling location κμ such that both sn(κμ − kρ) and sn(κμ − kρ′) are nonzero. Furthermore, the matrix M is a banded positive semi-definite matrix. It can be shown that for such a matrix the entries of the inverse matrix decay exponentially and are bounded by aγ^{||kρ − kρ′||}, where a > 0 and γ < 1. The values of a and γ are determined by the spectrum of matrix M [21]. Intuitively, when the distance ||kρ − kρ′|| is large, the magnitude of the element (M−1)ρρ′ will be very small; that is, M−1 is approximately sparse [21]–[23].
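The decay of the inverse entries can be checked numerically. The short sketch below (NumPy only, with an arbitrary tridiagonal positive definite test matrix standing in for M) prints the magnitude of the inverse entries as the distance from the diagonal grows.

```python
import numpy as np

# Banded (tridiagonal) symmetric positive definite test matrix.
N = 200
M = (np.diag(np.full(N, 2.1))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1))
Minv = np.linalg.inv(M)

# |Minv[i, i + d]| falls off roughly geometrically with the offset d,
# consistent with a bound of the form a * gamma**d for some a > 0 and gamma < 1.
i = N // 2
for d in (0, 2, 5, 10, 20):
    print(d, abs(Minv[i, i + d]))
```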
Assuming the inverse matrix M−1 can be approximated as a sparse matrix [19], [20], we can find each row of the sparse approximate inverse M̃ by solving a small set of linear equations, formed by varying kσ following the condition set by (9),
$\sum_{\rho'} \tilde{M}_{\rho\rho'}\, M_{\rho'\sigma} = \delta_{\rho\sigma}$   (11)
and
$\tilde{M}_{\rho\rho'} = 0 \quad \text{for } \|k_{\rho'} - k_\rho\| > w$   (12)
Here, w is the width of the reconstruction kernel that defines the sparsity of the inverse matrix. After M̃ is computed, the Fourier transform of the image can be simply estimated as
$\hat{\mathbf{m}} = \tilde{\mathbf{M}}\,\mathbf{G}^H\,\mathbf{d}$   (13)
To summarize, kSPA reconstruction consists of two steps: the first step multiplies the raw data by the gridding operator GH, and the second step multiplies the gridded data by the unmixing operator M̃. The algorithm has been demonstrated with both simulated and in vivo data sets sampled on arbitrary trajectories. The image quality has been shown to be similar to that of iterative SENSE.
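A minimal sketch of these two steps is given below, assuming G has already been assembled as a sparse matrix (for instance with the toy builder above); the local least-squares solve for each row of M̃ and the variable names are illustrative choices, not the authors' code.

```python
import numpy as np
from scipy.sparse import lil_matrix

def kspa_reconstruct(G, d, k_grid, ws, w):
    """Two-step kSPA reconstruction m_hat = Mtilde @ (G^H d) for a 1-D toy problem.

    G : sparse (n_coils * n_samples) x N inverse-gridding operator
    d : stacked coil data of length n_coils * n_samples
    Each row rho of the sparse approximate inverse Mtilde is obtained from a
    small local system: the unknowns are the entries within distance w of k_rho,
    and one equation is written for every grid point within 2 * ws of k_rho.
    """
    M = (G.conj().T @ G).toarray()            # small toy problem: keep M dense
    N = len(k_grid)
    Mtilde = lil_matrix((N, N), dtype=complex)
    for rho in range(N):
        cols = np.flatnonzero(np.abs(k_grid - k_grid[rho]) <= w)       # sparsity of Mtilde
        rows = np.flatnonzero(np.abs(k_grid - k_grid[rho]) <= 2 * ws)  # equations to satisfy
        rhs = (rows == rho).astype(complex)                            # delta on the right side
        # Solve Mtilde[rho, cols] @ M[cols, rows] = rhs in the least-squares sense.
        x, *_ = np.linalg.lstsq(M[np.ix_(cols, rows)].T, rhs, rcond=None)
        Mtilde[rho, cols] = x
    return Mtilde.tocsr() @ (G.conj().T @ d)  # step 1: grid with G^H; step 2: unmix with Mtilde
```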
B. Auto-Calibrated kSPA
The original implementation of kSPA requires the estimation of coil sensitivity [18]. The estimation of coil sensitivity typically involves the division of each individual coil’s image by the sum-of-squares image, followed by postprocessing procedures such as low-pass filtering and surface fitting [3]. Because image quality is sensitive to the accuracy of the measured coil sensitivity, the requirement of accurate coil sensitivity is a critical source of reconstruction error. An implementation of kSPA that does not require the explicit estimation of coil sensitivity is therefore highly desirable. Such an implementation would compute the inversion operator G̃ directly from some form of calibration data acquired in k-space. Specifically, the auto-calibration method presented here computes G̃ directly from measured k-space data rather than from estimated coil sensitivities.
One option for realizing such auto-calibration is to acquire a fully sampled data set on the designed trajectory and use this data set to estimate the weights for synthesizing missing data points in an undersampled trajectory. However, such a calibration scheme is expensive and recalibration is inconvenient. Here, we propose an alternative calibration method that relies on a small, fully sampled area centered at any location in k-space to estimate the inversion operator G̃.
Calibration Region can be Arbitrarily Located
We first show that the value of M̃ρρ′ is determined only by the sampling pattern surrounding kρ and is independent of the exact location of kρ. Let kρ be an arbitrary location in k-space. Given a location kρ0, we can translate the region within a distance of 2ws of kρ to center around kρ0 by adding a displacement of Δk = kρ0 − kρ, as illustrated in Fig. 1. As a result, we can rewrite (11) as
Fig. 1. The weights for synthesizing data on a Cartesian grid depend only on the surrounding sampling pattern. The shaded area indicates the calibration region where k-space is fully sampled. The circle indicates the region whose sampling pattern is used to construct …
$\sum_{\rho'} \tilde{M}_{\rho\rho'} \sum_{n}\sum_{\mu} s_n^{*}\big((\kappa_\mu + \Delta k) - (k_{\rho'} + \Delta k)\big)\, s_n\big((\kappa_\mu + \Delta k) - (k_\sigma + \Delta k)\big) = \delta_{\rho\sigma}$   (14)
which can be simplified to
$\sum_{\rho'} \tilde{M}_{\rho\rho'}\, M_{\rho'_0\sigma_0} = \delta_{\rho_0\sigma_0}, \quad k_{\rho'_0} = k_{\rho'} + \Delta k,\; k_{\sigma_0} = k_\sigma + \Delta k$   (15)
Comparing (15) and (11), we conclude that M̃ρ0ρ′0 = M̃ρρ′.
That is, the value of M̃ρρ′ is independent of the absolute locations of kρ and kρ′ as long as the sampling pattern surrounding kρ does not change.
Because of this shift-invariant property of M̃ρρ′, we only need to acquire a small region of k-space for the purpose of auto-calibration. This calibration region can be centered at any point kρ0 in k-space. To calculate the reconstruction weights for the grid point kρ, i.e., the ρth row of G̃, we simply identify all the sampling points within a distance of 2ws from kρ and translate these points by Δk to center around kρ0. To express the elements of G̃ in terms of the calibration data, we multiply both sides of (15) with d(kσ0) and sum over all σ0
$\sum_{\sigma_0}\sum_{\rho'} \tilde{M}_{\rho\rho'}\, M_{\rho'_0\sigma_0}\, d(k_{\sigma_0}) = \sum_{\sigma_0} \delta_{\rho_0\sigma_0}\, d(k_{\sigma_0})$   (16)
Note that d(kσ0) is the data that would have been acquired with a single-channel coil of homogeneous sensitivity. By recognizing the convolution summation on both sides, we can simplify (16) to
$\sum_{\rho'} \tilde{M}_{\rho\rho'} \sum_{n}\sum_{\mu_0} s_n^{*}(\kappa_{\mu_0} - k_{\rho'_0})\, d_n(\kappa_{\mu_0}) = d(k_{\rho_0})$   (17)
which can be further reduced to
$\sum_{n}\sum_{\mu_0} \tilde{g}^{(n)}_{\rho_0}(\kappa_{\mu_0})\, d_n(\kappa_{\mu_0}) = d(k_{\rho_0})$   (18)
Based on (9) and (12), we recognize that
$\tilde{g}^{(n)}_{\rho_0}(\kappa_{\mu_0}) = 0 \quad \text{for } \|\kappa_{\mu_0} - k_{\rho_0}\| > w_s + w$   (19)
Here, g̃ρ0(n)(κμ0) is the entry in the ρ0th row of G̃ that corresponds to the μ0th sampling location of the nth coil. It is expressed as
$\tilde{g}^{(n)}_{\rho_0}(\kappa_{\mu_0}) = \sum_{\rho'_0} \tilde{M}_{\rho_0\rho'_0}\, s_n^{*}(\kappa_{\mu_0} - k_{\rho'_0})$   (20)
Because M̃ρ0ρ′0 is independent of the absolute locations of kρ0 and kρ′0, we observe that g̃ρ0(n)(κμ0) is the same as g̃ρ(n)(κμ).
Obviously, computing g̃ρ(n)(κμ) requires a set of linearly independent equations such that the total number of equations is larger than the number of unknowns. Because of the shift-invariant property of these weights, such a set of linear equations can be constructed by translating the sampling points around kρ to a set of grid points within the calibration region. For each grid point, an equation can be constructed following (18). Therefore, by shifting all sampling points around kρ to the region centered on kρ0, we are able to compute all reconstruction weights using only the calibration data acquired near kρ0.
Auto-Calibrated kSPA Algorithm
The auto-calibration scheme based on (18) requires the knowledge of both dn(κμ0) and d(kρ0). In our implementation, dn(κμ0) is calculated using a Kaiser-Bessel interpolation kernel with a width of 4 pixels and an oversampling ratio of 1.25 as described by Beatty et al. [24]. Although dn(κμ0) can be readily computed through interpolation of the calibration data, d(kρ0) cannot be computed directly by using the calibration data alone. To avoid this difficulty, we can modify (18) to
$\sum_{n}\sum_{\mu_0} \tilde{g}^{(nn')}_{\rho_0}(\kappa_{\mu_0})\, d_n(\kappa_{\mu_0}) = d_{n'}(k_{\rho_0})$   (21)
Similarly, g̃ρ0(nn′)(κμ0) is sparse and satisfies (19). The indices nn′ indicate that there is one set of weights for each pair of coils. That is, instead of synthesizing the true k-space data, the calibration data from all coils are used to synthesize the k-space data of each individual coil. Equation (21) can be verified simply by convolving (18) with sn′(kρ). The final image can then be approximated with the sum-of-squares image, similar to the approach adopted in GRAPPA.
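For reference, one common form of the Kaiser-Bessel kernel with the quoted width (4 pixels) and oversampling ratio (1.25) can be evaluated as below; the β expression follows Beatty et al. [24], but the peak normalization is an assumption made here for illustration only.

```python
import numpy as np
from scipy.special import i0   # zeroth-order modified Bessel function of the first kind

def kaiser_bessel(dk, width=4.0, oversamp=1.25):
    """Kaiser-Bessel interpolation kernel evaluated at k-space offsets dk (in pixels);
    beta follows the minimal-oversampling expression of Beatty et al."""
    beta = np.pi * np.sqrt((width / oversamp) ** 2 * (oversamp - 0.5) ** 2 - 0.8)
    dk = np.asarray(dk, dtype=float)
    inside = np.abs(dk) <= width / 2
    arg = np.zeros_like(dk)
    arg[inside] = np.sqrt(1.0 - (2.0 * dk[inside] / width) ** 2)
    c = np.where(inside, i0(beta * arg), 0.0)
    return c / c.max()          # peak normalized to 1 (a convention chosen here)

print(kaiser_bessel(np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])))
```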
This auto-calibrated kSPA algorithm can be summarized as follows.
  • Step 1)  
    Acquire a small fully sampled region in k-space for auto-calibration and compute dn(kρ0) through interpolation. The full width of the calibration region should be larger than 2w.
  • Step 2)  
    For a given grid point kρ in k-space, locate all sampling points within a distance of w from that grid point.
  • Step 3)  
    Translate these sampling points to center at a set of grid points in the middle of the calibration region. At each grid point, compute dn(κμ0) through interpolation and construct an equation following (21). Combine all such equations and solve the resulting set of equations for the weights g̃(nn′) (a small sketch of this translation step is given after the list).
  • Step 4)  
    Repeat Steps 2 and 3 for all grid points and construct the matrix G̃.
  • Step 5)  
    Compute m for the nth coil as G̃d. Filter m and apply the FFT to transform it to image space.
  • Step 6)  
    Calculate the sum-of-squares of all coil images.
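The geometric core of Steps 2 and 3, locating the acquired samples near a grid point and translating them into the calibration region, is sketched below in Python (2-D point lists with illustrative names; the interpolation and equation assembly of Step 3 are omitted).

```python
import numpy as np

def translate_to_calibration(kappa, k_rho, k_rho0, w):
    """Steps 2-3: select the acquired sample locations within distance w of the
    grid point k_rho and translate them by delta-k = k_rho0 - k_rho so that the
    same sampling pattern is centered on k_rho0 inside the calibration region."""
    dist = np.linalg.norm(kappa - k_rho, axis=1)
    local = kappa[dist <= w]                 # sampling pattern around k_rho
    return local + (k_rho0 - k_rho)          # identical pattern, shifted to k_rho0

# Toy usage: 500 random sample locations, target grid point (40, 17), and a
# calibration region centered at the k-space origin.
rng = np.random.default_rng(1)
kappa = rng.uniform(-64, 64, size=(500, 2))
shifted = translate_to_calibration(kappa, np.array([40.0, 17.0]),
                                   np.array([0.0, 0.0]), w=3.0)
print(shifted.shape)
```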
The purpose of the filter in Step 5 is to suppress the signal at the edge of k-space that is not covered by the sampling trajectory. For example, spiral and radial trajectories have a circular coverage of k-space, leaving the corners of k-space unsampled. The reconstructed data in those areas amplify noise and, therefore, should be eliminated. The filter we used is a function that is 1 within the region covered by the k-space trajectory and decreases linearly to zero outside the region over a transition width of five pixels. For spiral trajectories, this function is similar to the filter used by Pruessmann et al. [4].
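A sketch of such a filter for a circularly covered k-space is shown below (NumPy; the radius and matrix size are example values, not specific to the acquisitions in this paper).

```python
import numpy as np

def circular_kspace_filter(N, radius, transition=5):
    """N x N filter that is 1 inside a circle of the given radius (in pixels)
    and decreases linearly to 0 over `transition` pixels outside of it."""
    ky, kx = np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2, indexing="ij")
    r = np.sqrt(kx ** 2 + ky ** 2)
    return np.clip(1.0 - (r - radius) / transition, 0.0, 1.0)

filt = circular_kspace_filter(256, radius=128)
print(filt[128, 128], filt[0, 0])   # 1.0 at the k-space center, 0.0 in the corner
```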
Built-In Apodization Correction
Interpolation steps are involved in the resampling of calibration data onto targeted sampling patterns. Interpolation in k-space with a small kernel results in uneven intensity shading in the image space, commonly referred to as apodization [25]. This apodization effect is usually corrected by dividing the reconstructed image by a de-apodization function, typically the inverse Fourier transform of the interpolation kernel. We provide an analysis of and a solution to this effect in kSPA and auto-calibrated kSPA. We show that the apodization effect is completely removed during the estimation of the reconstruction weights, owing to the resampling of the calibration data; therefore, there is no need for a post hoc apodization correction.
To evaluate the effect of the small interpolation kernel, we can combine (3) and (7) and write out the entries of G explicitly as
$[\mathbf{G}]_{(n,\mu),\,\rho} = s_n(\kappa_\mu - k_\rho) = \sum_{\rho''} s_n(\kappa_\mu - k_{\rho''})\, c(k_{\rho''} - k_\rho)$   (22)
Following (22), we can separate the coil sensitivity and the interpolation kernel in the inverse gridding operator G such that
$\mathbf{G} = \mathbf{G}_s\,\mathbf{C}$   (23)
Here, Gs is a matrix of size (ncnk × N2) whose entries are formed by corresponding values of sn(κμkρ″), and C is a matrix of size (N2 × N2) whose entries are formed by the corresponding values of c(kρ″kρ). Here, nc is the total number of coils and nk is the total number of k-space sampling points. Following this reformulation, the reconstructed image becomes
$\hat{\mathbf{m}} = (\mathbf{G}^H\mathbf{G})^{-1}\mathbf{G}^H\,\mathbf{G}_s\,\mathbf{m} = \mathbf{C}^{-1}\,\mathbf{m}$   (24)
Equation (24) shows that the reconstructed image is the true image multiplied by the inverse interpolation matrix. To remove this apodization effect, we can modify the unmixing operator: instead of computing the unmixing operator as a direct inverse of M, we need to find an unmixing operator M̃ such that
$\tilde{\mathbf{M}}\,\mathbf{M} = \mathbf{C}$   (25)
Specifically, we need to change the delta function in the right-hand side of (11) to the corresponding interpolation weights. As a result, we have
$\sum_{\rho'} \tilde{M}_{\rho\rho'}\, M_{\rho'\sigma} = c(k_\sigma - k_\rho)$   (26)
Accordingly, in the case of auto-calibrated kSPA, the calibration equation (21) is changed to
$\sum_{n}\sum_{\mu_0} \tilde{g}^{(nn')}_{\rho_0}(\kappa_{\mu_0})\, d_n(\kappa_{\mu_0}) = \sum_{\sigma_0} d_{n'}(k_{\sigma_0})\, c(k_{\sigma_0} - k_{\rho_0})$   (27)
That is, the calibration data need to be convolved with the interpolation kernel. When the calibration data are acquired on non-Cartesian trajectories, the data are first interpolated onto a Cartesian grid (see Step 1 of the auto-calibrated kSPA algorithm); therefore, the apodization correction is automatically realized. When the calibration data are acquired on a Cartesian grid, (27) suggests that the Cartesian data also need to be convolved with the interpolation kernel in order to remove the apodization effect.
Solving the Ill-Posed Problem With LSQR
The computational cost of auto-calibrated kSPA mainly lies in solving the system of linear equations defined by (21). This equation needs to be solved not only for each Cartesian grid location but also for each coil. Since, at a given grid location, the system matrix is the same for all coils, ideally we would like to invert it once and apply the inverse matrix to all coils. By doing so, we could speed up the computation by a factor comparable to the total number of coils. Unfortunately, such a strategy is not practical because the equation is typically ill-posed as a result of noise contamination and gridding errors in the calibration data. As a result, an accurate inversion matrix may not exist.
Alternatively, the equations can be solved iteratively without computing a matrix inversion directly. There are a number of iterative algorithms that might be suitable for this task. In this paper, we demonstrate an implementation with the least-squares algorithm based on orthogonal and right-triangular (QR) decomposition (LSQR) [26]. The LSQR algorithm has been shown to be more robust than the conjugate gradient algorithm for ill-posed problems [26]. In this study, we use the implementation of LSQR in Matlab (Version 7, Release 14, The MathWorks Inc.) with the default stopping criteria.
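The snippet below illustrates the point on a deliberately ill-conditioned toy system (SciPy's lsqr with its default stopping criteria; the singular-value spectrum and noise level are arbitrary choices, not measurements from this study).

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)

# Ill-conditioned least-squares problem A x = b with a noisy right-hand side,
# standing in for the calibration equations discussed in the text.
U, _ = np.linalg.qr(rng.standard_normal((80, 80)))
V, _ = np.linalg.qr(rng.standard_normal((60, 60)))
s = np.logspace(0, -8, 60)                    # singular values spanning 8 decades
A = U[:, :60] @ np.diag(s) @ V.T
x_true = rng.standard_normal(60)
b = A @ x_true + 1e-4 * rng.standard_normal(80)

x_pinv = np.linalg.pinv(A) @ b                # explicit pseudo-inverse keeps the tiny
                                              # singular values and amplifies the noise
x_lsqr = lsqr(A, b)[0]                        # LSQR terminates early (condition-number
                                              # limit / tolerances) and is better behaved

print("pinv error:", np.linalg.norm(x_pinv - x_true))
print("LSQR error:", np.linalg.norm(x_lsqr - x_true))
```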
Error Analysis
There are two main sources of reconstruction error: the approximation error of matrix G, denoted Ge, and the approximation error of matrix M−1, denoted Me. The error matrix Ge results from the truncation of G, while Me results from both the truncation of M−1 and the interpolation of calibration data [left side of (21)]. In this section, we assume G and M−1 are the exact matrices without sparse approximation and M−1 = (GHG)−1. Following (13), the estimated image can be expressed as
$\hat{\mathbf{m}} = \big[(\mathbf{G}+\mathbf{G}_e)^H(\mathbf{G}+\mathbf{G}_e)\big]^{-1}(\mathbf{G}+\mathbf{G}_e)^H\,\mathbf{d}$   (28)
With a first-order Taylor expansion, this can be reduced to
$\hat{\mathbf{m}} \approx \big[\mathbf{M}^{-1} - \mathbf{M}^{-1}(\mathbf{G}_e^H\mathbf{G} + \mathbf{G}^H\mathbf{G}_e)\mathbf{M}^{-1}\big](\mathbf{G}+\mathbf{G}_e)^H\,\mathbf{d}$   (29)
Taking into consideration the sparse approximation error of M−1, one gets
$\hat{\mathbf{m}} \approx \big[\mathbf{M}^{-1} + \mathbf{M}_e - \mathbf{M}^{-1}(\mathbf{G}_e^H\mathbf{G} + \mathbf{G}^H\mathbf{G}_e)\mathbf{M}^{-1}\big](\mathbf{G}+\mathbf{G}_e)^H\,\mathbf{d}$   (30)
Finally, keeping terms up to the first order, one gets
$\hat{\mathbf{m}} \approx \mathbf{m} + \mathbf{M}^{-1}\mathbf{G}_e^H\,\mathbf{d} - \mathbf{M}^{-1}(\mathbf{G}_e^H\mathbf{G} + \mathbf{G}^H\mathbf{G}_e)\,\mathbf{m} + \mathbf{M}_e\,\mathbf{G}^H\,\mathbf{d}$   (31)
where m is the true image computed with the exact matrices G and M−1 as
$\mathbf{m} = \mathbf{M}^{-1}\mathbf{G}^H\,\mathbf{d}$   (32)
The first two error terms in (31) result from the sparse approximation of matrix G, which is primarily determined by the choice of ws and the associated interpolation process. The last error term in (31) results from the error in the inversion matrix M−1, which is determined by the choices of ws and w and, consequently, the conditioning of (21).
A. Simulations
The auto-calibrated kSPA algorithm is applied to a spiral k-space trajectory. The trajectory consists of two segments: a short single-shot spiral-in navigator and an interleaved spiral-out imaging trajectory. The spiral-in navigator corresponds to an image matrix size of 32 × 32 (Fig. 2); the spiral-out trajectory corresponds to an image matrix size of 256 × 256. The spiral trajectory was designed based on the analytic method by Glover [27]. The spiral-out trajectory consists of 32 interleaves. Each interleaf has 1744 sampling points.
Fig. 2. Spiral-in navigator. (a) Readout gradient waveform consisting of a short spiral-in navigator followed by a regular spiral trajectory. (b) k-Space trajectory of the spiral-in navigator. (c) Image reconstructed from the navigator for one coil.
A Shepp–Logan phantom and an 8-channel receiving coil were used to simulate the k-space data via inverse gridding. The coil sensitivity maps were acquired using an 8-channel head coil on a large water phantom (MRI Devices Corporation, Pewaukee, WI) and fitted with a thin-plate spline function [18]. The image seen by each coil was simulated by multiplying the phantom image with the corresponding coil sensitivity map. To reduce the effect of k-space circular convolution resulting from this image-domain multiplication and the subsequent discrete Fourier transform, both the phantom image and the coil sensitivity maps were upsampled by a factor of 2 prior to the multiplication. The upsampling was accomplished by zero-padding in the Fourier domain. To improve the accuracy of inverse gridding, each coil image was further zero-padded by a factor of 2 before performing the Fourier transform. In addition, the edges of the coil sensitivity maps that were outside the imaging object were tapered with a Fermi window [18], [28] in order to further suppress the energy leakage caused by Gibbs ringing. The radius of the circular Fermi window was equal to the half-width of the imaging matrix; the transition width of the window was five pixels. This procedure was necessary because the coil sensitivity fitted by the thin-plate-spline function covers the whole imaging matrix and has an abrupt cutoff due to the limited field-of-view (FOV).
To study the effect of noise on the performance of the algorithm and to demonstrate the advantage of LSQR, Gaussian noise was added to the image such that the signal-to-noise ratio (SNR) was 20. The image was reconstructed with and without using the LSQR algorithm for a comparison.
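A simplified version of this simulation pipeline is sketched below in Python; the disk phantom and the Gaussian-shaped coil sensitivities are synthetic stand-ins for the Shepp–Logan phantom and the measured, thin-plate-spline-fitted maps used in the paper, and the SNR convention (peak signal over noise standard deviation) is an assumption made for illustration.

```python
import numpy as np

def simulate_coil_kspace(phantom, n_coils=8, snr=20.0, rng=None):
    """Multiply a phantom by smooth synthetic coil sensitivities, add complex
    Gaussian noise at the requested SNR, and return the Cartesian k-space data."""
    rng = np.random.default_rng() if rng is None else rng
    N = phantom.shape[0]
    y, x = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N), indexing="ij")
    ksp = []
    for n in range(n_coils):
        ang = 2 * np.pi * n / n_coils
        cx, cy = 1.2 * np.cos(ang), 1.2 * np.sin(ang)      # coil centers on a ring
        sens = np.exp(-((x - cx) ** 2 + (y - cy) ** 2))    # smooth sensitivity map
        coil_img = phantom * sens
        sigma = np.abs(coil_img).max() / snr               # SNR = peak / noise std
        noise = sigma / np.sqrt(2) * (rng.standard_normal((N, N))
                                      + 1j * rng.standard_normal((N, N)))
        ksp.append(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(coil_img + noise))))
    return np.stack(ksp)

# Toy usage with a simple disk phantom on a 256 x 256 matrix.
N = 256
y, x = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N), indexing="ij")
phantom = ((x ** 2 + y ** 2) < 0.8 ** 2).astype(float)
data = simulate_coil_kspace(phantom, n_coils=8, snr=20.0, rng=np.random.default_rng(3))
print(data.shape)   # (8, 256, 256)
```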
B. Experiments
In vivo brain images of a healthy volunteer were acquired using the same navigated spiral readout trajectory (as described in Section III-A) on a 1.5 T whole-body system (GE Signa, GE Healthcare, Waukesha, WI) equipped with a maximum gradient of 50 mT/m and a slew rate of 150 mT/m/s. An eight-channel head coil (MRI Devices Corporation, Pewaukee, WI) was used for image acquisition. The scan parameters were: FOV = 24 cm, time of repetition (TR) = 300 ms, time of echo (TE) = 15 ms, bandwidth (BW) = 125 kHz, and matrix size = 256 × 256. Undersampling in k-space was achieved by skipping a certain number of interleaves. The calibration data were acquired using the spiral-in navigator that samples k-space on a 32 × 32 grid and precedes each spiral-out interleaf.
All image reconstructions were performed using Matlab (Version 7, Release 14, The MathWorks), running on a Linux PC equipped with a 3.20 GHz Intel Xeon CPU and 5 GB RAM. Images were reconstructed using reduction factors ranging from 1 to 3. The auto-kSPA reconstruction parameters were ws = 6 and w = R, where R is the reduction factor. For example, with R = 2, all sampling points within a distance of 2 pixels of a given Cartesian grid point are used to synthesize the value on that particular grid point. Furthermore, given ws = 6, for each coil there are fewer than 64 linearly independent equations (i.e., (2ws − 2w)^2) that can be used to estimate the synthesizing weights. In our Matlab implementation, it takes approximately 6.8 min to reconstruct one data set acquired with eight coils.
Fig. 3 shows a set of typical individual coil images reconstructed with the auto-calibrated kSPA algorithm and the corresponding sum-of-squares image. In either case, the calibration data were used only for calibration and were not included in the final image. However, the algorithm does not exclude the addition of calibration data to the final image. Our goal is to evaluate the ability of the algorithm to remove aliasing artifacts. Although adding navigator data will certainly improve the SNR of the final image, it may require additional consideration when the navigator has a different image contrast compared to the imaging data [29]. As an illustration of the improved image quality, images reconstructed with the simple gridding algorithm are also shown. Excellent image quality is achieved with auto-calibrated kSPA for all tested reduction factors. Fig. 4 shows a similar set of images reconstructed with the in vivo data. As with the simulated data, the reconstructed in vivo images have excellent image quality and no obvious artifacts.
Fig. 3. Demonstration of auto-calibrated kSPA with simulated data. (a) R = 2; (b) R = 3. In each row, images from left to right are: image reconstructed with standard gridding, two typical coil images, image reconstructed with auto-calibrated kSPA, difference …
Fig. 4. Demonstration of auto-calibrated kSPA with in vivo data. (a) R = 2; (b) R = 3. In each row, images from left to right are: image reconstructed with standard gridding, two typical coil images, image reconstructed with auto-calibrated kSPA, difference with …
To illustrate the location independence of the calibration data, Fig. 5 compares images reconstructed using calibration data centered around the k-space origin to those reconstructed using calibration data centered around (kx = 16, ky = 16) and (kx = 64, ky = 64). For the simulated data, there is no difference in image quality between the different calibration regions. For the in vivo data, however, some residual aliasing artifacts remain when the calibration data are off-center [Fig. 5(c)]. These residual artifacts are caused by the increased noise level in the calibration data.
Fig. 5. Images reconstructed with calibration data centered at different locations in k-space and with R = 2. The centers of the calibration data are (a) (0,0); (b) (16,16); (c) (64,64), respectively. The first column is simulated data without noise, the second …
Fig. 6 compares the image quality when direct matrix inversion is used to compute the reconstruction weights to that when LSQR is used. For the simulated data without noise, both direct matrix inversion and LSQR result in good image quality and there is no significant difference. However, for the simulated data with noise and the in vivo data, although direct matrix inversion results in good image reconstruction for R = 2, no successful reconstruction is obtained for higher reduction factors. On the other hand, with LSQR, auto-calibrated kSPA achieves successful reconstruction for all reduction factors up to R = 4.
Fig. 6. Representative coil images reconstructed using weights calculated by (a) the LSQR algorithm and (b) matrix pseudo-inversion. In both cases, the reduction factor is R = 3. The first row is simulated data without noise; the second row is simulated data …
The image reconstruction quality is also affected by the size of the calibration data and the width of the reconstruction kernel, that is, the number of nonzero elements in each row of G̃. Fig. 7(a) plots the root-mean-square error (RMSE) as a function of ws and w. Fig. 7(b) shows the trade-off between reconstruction accuracy and efficiency for different choices of ws and w. The RMSE is normalized by the ground-truth image and displayed as a percentage error because MRI data are in general subject to arbitrary scaling. In general, increasing ws improves the image quality. On the other hand, increasing w only improves the image quality when ws is sufficiently large. Once ws and w reach a certain value, further improvement in the image quality becomes slow and limited.
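One plausible definition of the percentage RMSE, normalized by the ground-truth image as described, is sketched below; the paper does not spell out its exact formula, so treat this normalization as an assumption.

```python
import numpy as np

def rmse_percent(recon, truth):
    """Root-mean-square error of the magnitude image, normalized by the
    ground-truth image norm and expressed in percent."""
    err = np.abs(recon).ravel() - np.abs(truth).ravel()
    return 100.0 * np.linalg.norm(err) / np.linalg.norm(np.abs(truth))

# Example: a reconstruction that deviates from the truth by a small perturbation.
rng = np.random.default_rng(4)
truth = rng.random((64, 64))
recon = truth + 0.02 * rng.standard_normal((64, 64))
print(f"RMSE = {rmse_percent(recon, truth):.2f}%")
```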
Fig. 7. Effects of the cutoff bandwidth (ws) of the coil sensitivity and the reconstruction kernel width (w) on the image quality. The result is computed using the simulated data without noise for R = 3. In general, the larger the cutoff bandwidth is, the smaller …
Auto-calibration in the image reconstruction of partially-acquired k-space data has shown some favorable properties in previous studies of parallel imaging methods [5], [14], [16], [17], [30], [31]. We have demonstrated that auto-calibration schemes can also be incorporated into the kSPA algorithm. In addition, we have shown that the calibration data can be acquired, in principle, at any region in k-space.
The proposed auto-calibrated kSPA shares the same characteristics as AUTO-SMASH [14], [30] and GRAPPA, in which calibration is performed directly on additionally acquired k-space calibration data. Although the popular GRAPPA algorithm was originally proposed for Cartesian trajectories, a number of recent works have extended GRAPPA to spiral and radial trajectories. For example, Heidemann et al. presented a method for constant angular-velocity spiral trajectories [17]. In that method, samples on a constant angular-velocity spiral trajectory are reordered and placed on a hybrid Cartesian grid where the original GRAPPA algorithm is applicable. More recently, Heberlein and Hu introduced a method to interpolate missing spiral interleaves using GRAPPA interpolation kernels by dividing the k-space into angular sectors and assuming that the kernel within each sector is radially invariant [16]. While a GRAPPA-based kernel provides the weighting factors to directly synthesize missing data in k-space, auto-calibrated kSPA uses the calibration data to estimate the sparse matrix inverse. In addition, auto-kSPA does not attempt to fill in missing data points in k-space; instead, it computes the whole k-space based on measured samples. Auto-calibrated kSPA is a general reconstruction algorithm that applies to all sampling patterns. Some recent studies have also extended GRAPPA auto-calibration to non-Cartesian trajectories. For example, Seiberlich et al. used a procedure with GRAPPA operator gridding (GROG) to reconstruct non-Cartesian data [15], [32]. In addition, Samsonov established a connection between the technique of parallel magnetic resonance imaging with adaptive radius in k-space (PARS) [33] and GRAPPA by using the concept of in vivo coil sensitivity, which is the true sensitivity modulated by the object [34]. Such a connection should also allow auto-calibration for PARS.
The proposed algorithm has been demonstrated with both simulated and in vivo data. The quality of reconstructed images is good. The average RMSE is 2.1% for simulated data and 4.0% for in vivo data as shown in Figs. 3 and 4. In the simulated data, the residual artifacts mainly originate from the edges of the Shepp-Logan phantom where high spatial frequency components exist. Such artifacts are consistent with the interpolation nature of the kSPA algorithm and the non-Cartesian sampling in k-space. On the other hand, there is noticeable noise amplification in the center of the in vivo images. This noise enhancement is explained by the increased g-factor and is consistent with that observed commonly in SENSE reconstruction [3], [4].
We show, in theory, that calibration data can be acquired at any location in k-space. This result applies to all sampling trajectories and is consistent with the well-known shift-invariant feature of the GRAPPA kernel for Cartesian trajectory [5] and the technique of parallel magnetic resonance imaging with adaptive radius in k-space (PARS) [33]. This result implies that the reconstruction matrix or the interpolation weights in k-space are solely determined by the sampling pattern and the coil configurations, and they are independent of the absolute location in k-space. This observation is also verified with both simulated and in vivo data as shown in Fig. 5. While there are no observable artifacts in the simulated data without noise, the quality of the simulated images with noise and the in vivo images appears to deteriorate as the calibration region shifts towards outer k-space. This disparity demonstrates that although the location of the calibration region does not affect the algorithm in the noiseless situation, image quality may decrease due to the higher noise level at outer k-space.
Computing the reconstruction matrix requires solving ill-conditioned systems of linear equations. The system of linear equations shown in (21) is generally poorly conditioned as a result of both the irregular sampling pattern and the noisy k-space data that appear on both sides of (21). Because data are sampled on a non-Cartesian trajectory, data values on both sides of the equation have to be computed with interpolation. Errors resulting from imperfect sampling density correction or a limited width of the interpolation kernel will worsen the conditioning of (18). Finding the solution using matrix pseudo-inversion seems to work well for the simulated data without noise but not for the simulated data with noise and the in vivo data, especially at relatively high reduction factors, as shown in Fig. 6. This issue is resolved effectively by using the LSQR algorithm, which has been shown to be more robust for solving ill-posed linear problems.
There are two parameters that need to be optimized for the auto-calibrated kSPA algorithm: the cutoff bandwidth of the sensitivity ws and the half-width of the reconstruction kernel w. At first glance, it seems that larger ws and larger w should improve image reconstruction quality. Although our limited data seem to support the idea that the RMSE decreases monotonically as the calibration region grows, there does not seem to be a monotonic relationship between the RMSE and the width of the reconstruction kernel w, as shown in Fig. 7. At R = 3, it appears that a w of 3 or 4 performs consistently better than other values. One reason that further increasing w does not reduce the RMSE is that the number of independent equations is limited for a given value of ws. On the other hand, increasing the value of w increases the total number of unknowns. Further evaluation is necessary to investigate whether there is an optimal combination of these two parameters that achieves the best reconstruction quality.
In comparison to the original kSPA algorithm, auto-calibrated kSPA reconstructs one image for each coil instead of one final image without sensitivity weighting. As a result, the computation for auto-kSPA is slowed by a factor approximately equal to the total number of coils. Therefore, for massively parallel imaging, kSPA would still be the method of choice. For a small number of coils, which is what the vast majority of current scanners use, auto-kSPA offers the advantages of robustness and immunity to coil sensitivity estimation error. A further improvement to auto-kSPA would be to reconstruct one final image instead of one image for each coil, which is currently under investigation. Furthermore, auto-calibrated kSPA provides a solution for cases where an accurate sensitivity map is difficult or impossible to obtain. These scenarios include, for example, calibration data that are too noisy, calibration data that are off-center, and images that are so sparse that they do not support a sensitivity map over the whole field-of-view. In the specific example shown in Fig. 5(a), the original kSPA algorithm is certainly capable of reconstructing images of similar quality when the calibration data are located around the center of k-space and are of high SNR. However, it is not applicable to the two cases in Fig. 5(b) and (c).
VI. Conclusion
We have proposed and implemented an auto-calibrated kSPA algorithm that does not require the explicit computation of the coil sensitivity maps. We have also shown that calibration data, in principle, can be acquired at any region of k-space. This property applies to arbitrary sampling trajectories and all reconstruction algorithms based on k-space. In practice, because of its higher SNR, calibration data acquired at the center of k-space performed more favorably.
Acknowledgments
The authors would like to thank M. Saunders, Ph.D., of Stanford University for discussions on LSQR. The authors would also like to thank T. Harshbarger, Ph.D. for editorial assistance.
This work was supported in part by the National Institutes of Health under Grant 1K99EB007182 and Grant 4R00EB007182, in part by the Lucas Foundation, and in part by the National Center for Research Resources under Grant P41RR09784.
Contributor Information
Chunlei Liu, Brain Imaging and Analysis Center, School of Medicine, Duke University, Durham, NC 27705 USA.
Jian Zhang, Lucas Center for MR Imaging, Radiology Department, and Electrical Engineering Department, Stanford University, Stanford, CA 94305 USA.
Michael E. Moseley, Lucas Center for MR Imaging, Radiology Department, Stanford University, Stanford, CA 94305 USA.
1. Sodickson DK, Manning WJ. Simultaneous acquisition of spatial harmonics (SMASH): Fast imaging with radiofrequency coil arrays. Magn Reson Med. 1997;38:591–603. [PubMed]
2. Hutchinson M, Raff U. Fast MRI data acquisition using multiple detectors. Magn Reson Med. 1988;6:87–91. [PubMed]
3. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: Sensitivity encoding for fast MRI. Magn Reson Med. 1999;42:952–62. [PubMed]
4. Pruessmann KP, Weiger M, Bornert P, Boesiger P. Advances in sensitivity encoding with arbitrary k-space trajectories. Magn Reson Med. 2001;46:638–51. [PubMed]
5. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA) Magn Reson Med. 2002;47:1202–10. [PubMed]
6. Dydak U, Weiger M, Pruessmann KP, Meier D, Boesiger P. Sensitivity-encoded spectroscopic imaging. Magn Reson Med. 2001;46:713–22. [PubMed]
7. Bammer R, Keeling SL, Augustin M, Pruessmann KP, Wolf R, Stollberger R, Hartung HP, Fazekas F. Improved diffusion-weighted single-shot echo-planar imaging (EPI) in stroke using sensitivity encoding (SENSE) Magn Reson Med. 2001;46:548–54. [PubMed]
8. Golay X, de Zwart JA, Ho YC, Sitoh YY. Parallel imaging techniques in functional MRI. Top Magn Reson Imaging. 2004;15:255–65. [PubMed]
9. Golay X, Pruessmann KP, Weiger M, Crelier GR, Folkers PJ, Kollias SS, Boesiger P. PRESTO-SENSE: An ultrafast whole-brain fMRI technique. Magn Reson Med. 2000;43:779–86. [PubMed]
10. Heidemann RM, Griswold MA, Kiefer B, Nittka M, Wang J, Jellus V, Jakob PM. Resolution enhancement in lung 1 H imaging using parallel imaging methods. Magn Reson Med. 2003;49:391–4. [PubMed]
11. Kostler H, Sandstede JJ, Lipke C, Landschutz W, Beer M, Hahn D. Auto-SENSE perfusion imaging of the whole human heart. J Magn Reson Imaging. 2003;18:702–8. [PubMed]
12. Wintersperger BJ, Nikolaou K, Dietrich O, Rieber J, Nittka M, Reiser MF, Schoenberg SO. Single breath-hold real-time cine MR imaging: Improved temporal resolution using generalized autocalibrating partially parallel acquisition (GRAPPA) algorithm. Eur Radiol. 2003;13:1931–6. [PubMed]
13. Liu C, Moseley ME, Bammer R. Simultaneous phase correction and sense reconstruction for navigated multi-shot dwi with non-cartesian k-space sampling. Magn Reson Med. 2005;54:1412–22. [PubMed]
14. Jakob PM, Griswold MA, Edelman RR, Sodickson DK. AUTO-SMASH: A self-calibrating technique for SMASH imaging. Simultaneous acquisition of spatial harmonics. Magma. 1998;7:42–54. [PubMed]
15. Seiberlich N, Breuer F, Heidemann R, Blaimer M, Griswold M, Jakob P. Reconstruction of undersampled non-Cartesian data sets using pseudo-Cartesian GRAPPA in conjunction with GROG. Magn Reson Med. 2008;59:1127–37. [PubMed]
16. Heberlein K, Hu X. Auto-calibrated parallel spiral imaging. Magn Reson Med. 2006;55:619–25. [PubMed]
17. Heidemann RM, Griswold MA, Seiberlich N, Kruger G, Kannengiesser SA, Kiefer B, Wiggins G, Wald LL, Jakob PM. Direct parallel image reconstructions for spiral trajectories using GRAPPA. Magn Reson Med. 2006 [PubMed]
18. Liu C, Bammer R, Moseley ME. Parallel imaging reconstruction for arbitrary trajectories using k-space sparse matrices (kSPA) Magn Reson Med. 2007;58:1171–81. [PubMed]
19. Benson M. Iterative solution of large scale linear systems [Master's thesis]. Thunder Bay, Canada: Lakehead University; 1973.
20. Benson M, Frederickson P. Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems. Utilitas Math. 1982;22:127–140.
21. Demko S, Moss WF, Smith PW. Decay rates for inverses of band matrices. Mathematics of Computation. 1984;43:491–499.
22. de Boor C. Dichotomies for band matrices. SIAM J Numer Anal. 1980;17:894–907.
23. Demko S. Inverses of band matrices and local convergence of spline projections. SIAM J Numer Anal. 1977;14:616–619.
24. Beatty PJ, Nishimura DG, Pauly JM. Rapid gridding reconstruction with a minimal oversampling ratio. IEEE Trans Med Imaging. 2005;24:799–808. [PubMed]
25. Jackson JI, Meyer CH, Nishimura DG, Macovski A. Selection of a convolution function for Fourier inversion using gridding. IEEE Transactions on Medical Imaging. 1991;10:473–478. [PubMed]
26. Saunders M. Solution of sparse rectangular systems using LSQR and CRAIG. BIT Numerical Mathematics. 1995;35:588–604.
27. Glover GH. Simple analytic spiral k-space algorithm. Magn Reson Med. 1999;42:412–5. [PubMed]
28. Lowe MJ, Sorenson JA. Spatially filtering functional magnetic resonance imaging data. Magn Reson Med. 1997;37:723–9. [PubMed]
29. Glover GH, Law CS. Spiral-in/out BOLD fMRI for increased SNR and reduced susceptibility artifacts. Magn Reson Med. 2001;46:515–22. [PubMed]
30. Heidemann RM, Griswold MA, Haase A, Jakob PM. VD-AUTO-SMASH imaging. Magn Reson Med. 2001;45:1066–74. [PubMed]
31. Lustig M, Pauly JM. Iterative GRAPPA: A general solution for the GRAPPA reconstruction from arbitrary k-space sampling. Proceedings of 15th Annual Meeting of ISMRM; Berlin. 2007. pp. 333–333.
32. Seiberlich N, Breuer FA, Blaimer M, Barkauskas K, Jakob PM, Griswold MA. Non-Cartesian data reconstruction using GRAPPA operator gridding (GROG) Magn Reson Med. 2007;58:1257–65. [PubMed]
33. Yeh EN, McKenzie CA, Ohliger MA, Sodickson DK. Parallel magnetic resonance imaging with adaptive radius in k-space (PARS): Constrained image reconstruction using k-space locality in radiofrequency coil encoded data. Magn Reson Med. 2005;53:1383–92. [PubMed]
34. Samsonov AA, Block WF, Arunachalam A, Field AS. Advances in locally constrained k-space-based parallel MRI. Magn Reson Med. 2006;55:431–8. [PubMed]