
Magn Reson Med. Author manuscript; available in PMC 2012 November 1.

Published in final edited form as:

Published online 2011 May 20. doi: 10.1002/mrm.22899

PMCID: PMC3162069

NIHMSID: NIHMS271766

The publisher's final edited version of this article is available free at Magn Reson Med


A novel iterative k-space data-driven technique, named Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated by using Singular Value Decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for the missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and *in vivo* studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, parallel imaging at ultra-high acceleration can be performed with decent image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines.

Parallel magnetic resonance imaging (pMRI) techniques have been proposed and widely introduced into clinical applications over the last decade. In pMRI, spatial sensitivity encodings are provided by utilizing an array of multiple receiver surface coils. With this data redundancy, we are able to reconstruct unaliased images from k-space data sampled below the Nyquist rate with fewer k-space phase encodes. This acceleration can be used to save scan time, increase temporal or spatial resolution [1–3], or reduce imaging artifacts [4,5]. Various parallel imaging reconstruction methods have been invented. Among those, the most widely used pMRI techniques are sensitivity encoding (SENSE) [6] and generalized autocalibrating partially parallel acquisition (GRAPPA) [7]. Other Cartesian pMRI reconstruction methods including SMASH [8,9], AUTO-SMASH [10], VD-AUTO-SMASH [11], Generalized SMASH [12], mSENSE [13], PILS [14], and SPACE-RIP [15] have also been demonstrated and implemented in numerous applications. Recently, several non-Cartesian pMRI reconstruction techniques have been investigated as well, such as non-Cartesian SENSE (NC-SENSE) [16] and kSPA [17,18]. All of these reconstruction algorithms can be classified into two categories, based on how the sensitivity encoding is decoded from the multi-channel data, i.e., physically-based reconstruction and data-driven reconstruction [19]. SENSE reconstruction is a typical physically-based algorithm, in which all coil sensitivity maps have to be estimated *explicitly*, because the image needs to be unfolded in the image domain based on these maps. On the other hand, GRAPPA is a data-driven method, in which the sensitivity information is only *implicitly* derived in k-space through data calibration, and the missing samples are fitted in k-space based on this information.

As a k-space data-driven method, GRAPPA has become the most successful pMRI technique in recent years, owing to its convenience, flexibility, and good performance. Compared to SENSE, GRAPPA does not require the cumbersome coil sensitivity measurement. In addition, neither data calibration nor k-space fitting is computationally demanding, and many experimental results have shown that GRAPPA reconstruction yields relatively good image quality for pMRI at low acceleration, for example, with a reduction factor of 2 to 3. However, the performance of GRAPPA reconstruction usually degrades significantly as the reduction factor gets higher [20], unless a very large number of autocalibration signal (ACS) lines are imposed.

In this work, an iterative k-space data-driven pMRI reconstruction algorithm is demonstrated, termed Parallel Reconstruction Using Null Operations (PRUNO). Based on a generalized problem formulation of k-space reconstruction, we show that PRUNO is a more flexible algorithm than GRAPPA. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems, and an iterative conjugate-gradient (CG) algorithm is proposed to solve the reconstruction equation efficiently. Both simulation and *in vivo* results show that PRUNO provides significantly improved image quality compared to GRAPPA while requiring fewer ACS lines, especially under high reduction factors.

In this section, only the case of 2D Cartesian pMRI is studied for simplicity. The same theory can be easily extended to 3D reconstruction. Let us denote the image size by *N _{x}* × *N _{y}*.

- Assumption 1: All sensitivity maps are band-limited, with a full width of *w _{s}* in k-space. Due to the natural smoothness of coil sensitivity maps, *w _{s}* is usually a reasonably small number.
- Assumption 2: The overall sensitivity encodings yield good orthogonality. That is, the pMRI reconstruction can be treated as an overdetermined problem if *N _{c}* > *R* and the k-space sampling is relatively uniform. Here *R* is the reduction factor and *N _{c}* is the number of coils. (Only the cases of *N _{c}* > *R* are investigated in this work.)

These conditions essentially form the basis behind most k-space data-driven pMRI reconstruction methods, such as GRAPPA [7,21] and kSPA [17,18]. To simplify the following discussion, we treat *N _{x}*,

For multi-channel acquisition, k-space data acquired from the *n*-th coil can be approximated as

$${d}_{n}({k}_{x},{k}_{y})=m({k}_{x},{k}_{y})\ast {s}_{n}({k}_{x},{k}_{y})=\sum _{{k}_{x}^{\prime }={k}_{x}-{w}_{s}/2}^{{k}_{x}+{w}_{s}/2-1}\;\sum _{{k}_{y}^{\prime }={k}_{y}-{w}_{s}/2}^{{k}_{y}+{w}_{s}/2-1}m({k}_{x}^{\prime },{k}_{y}^{\prime })\,{s}_{n}({k}_{x}-{k}_{x}^{\prime },{k}_{y}-{k}_{y}^{\prime }).$$

[1]

Here *m*(*k _{x}*, *k _{y}*) denotes the unencoded k-space data (the Fourier transform of the magnetization image), and *s _{n}*(*k _{x}*, *k _{y}*) is the small k-space convolution kernel of the *n*-th coil (the Fourier transform of its sensitivity map), whose support width is *w _{s}*. Stacking these relations over all coils and all k-space locations yields a matrix equation
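As a concrete numerical check of this convolution model, the following NumPy sketch verifies that multiplying an image by a band-limited sensitivity map is equivalent to convolving its k-space data with a small *w _{s}* × *w _{s}* kernel. The grid size, bandwidth, and test location are arbitrary illustrative choices, not values from this work; the check uses the circular (DFT) form of the convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
N, ws = 32, 5          # grid size and sensitivity bandwidth (illustrative)
h = ws // 2

# Band-limited sensitivity: its DFT S is nonzero only for frequency
# offsets -h..h (indices taken modulo N).
S = np.zeros((N, N), dtype=complex)
for u in range(-h, h + 1):
    for v in range(-h, h + 1):
        S[u % N, v % N] = rng.standard_normal() + 1j * rng.standard_normal()
s_img = np.fft.ifft2(S)            # smooth sensitivity map in image domain

m_img = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
M = np.fft.fft2(m_img)             # unencoded k-space m(kx, ky)
D = np.fft.fft2(m_img * s_img)     # encoded k-space d_n(kx, ky) of one coil

# Eq. [1]: d_n equals the convolution of m with the small kernel s_n
# (up to the unnormalized-DFT factor 1/N^2 and circular wrap-around).
kx, ky = 9, 21
acc = sum(M[(kx - u) % N, (ky - v) % N] * S[u % N, v % N]
          for u in range(-h, h + 1) for v in range(-h, h + 1))
assert abs(acc / N**2 - D[kx, ky]) < 1e-9 * (1 + abs(D[kx, ky]))
```

Only the *w _{s}* × *w _{s}* neighborhood of **M** contributes to each encoded sample, which is the locality that the rest of the derivation exploits.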

$$\mathbf{d}=\mathbf{Sm},$$

[2]

where **d** is a vector of concatenated fully-sampled k-space data from all coils (encoded samples) and **m** is a column vector containing the true magnetization in k-space (unencoded samples). **S** is therefore called the sensitivity encoding matrix. The primary goal of a k-space based reconstruction is to compute either **m** or the full representation of **d**, from which the final image is synthesized in the image domain. With data undersampling, all samples in **d** can be classified into two groups, i.e., missing samples and acquired samples. To separate the two groups, masking operators can be applied to Eq. [2]:

$$\mathbf{d}=\mathbf{Id}=({\mathbf{I}}_{\mathbf{m}}+{\mathbf{I}}_{\mathbf{a}})\mathbf{d}=({\mathbf{I}}_{\mathbf{m}}+{\mathbf{I}}_{\mathbf{a}})\mathbf{Sm}.$$

[3]

Here the subscripts “**m**” and “**a**” represent “missing” and “acquired”, respectively. **I** denotes an identity matrix, which can be decomposed into two diagonal masking matrices, **I _{m}** and **I _{a}**. Accordingly, the encoded samples can be split into

$${\mathbf{d}}_{\mathbf{m}}={\mathbf{S}}_{\mathbf{m}}\mathbf{m},$$

[4]

and

$${\mathbf{d}}_{\mathbf{a}}={\mathbf{S}}_{\mathbf{a}}\mathbf{m}.$$

[5]

Now **d _{m}** is the vector including only missing samples, and **d _{a}** is the vector including only acquired samples, with **S _{m}** and **S _{a}** the correspondingly masked encoding matrices.

With the two assumptions stated at the beginning of this section, Eq. [5] should be an overdetermined linear equation with a very sparse system matrix. If the encoding matrix **S _{a}** is known, the image reconstruction can be performed directly by solving **m** from Eq. [5].

To avoid measuring the sensitivity maps, with the attendant complexity and inaccuracy, we can solve for **d _{m}** first and then apply a coil-by-coil reconstruction followed by a sum-of-squares (SOS) synthesis in the image domain, as in GRAPPA. By using Eqs. [4] and [5], this k-space data-driven method can be explicitly formulated as

$${\mathbf{d}}_{\mathbf{m}}={\mathbf{S}}_{\mathbf{m}}\mathbf{m}={\mathbf{S}}_{\mathbf{m}}\left[{\left({\mathbf{S}}_{\mathbf{a}}^{\mathbf{H}}{\mathbf{S}}_{\mathbf{a}}\right)}^{-1}{\mathbf{S}}_{\mathbf{a}}^{\mathbf{H}}{\mathbf{d}}_{\mathbf{a}}\right]={\mathbf{Rd}}_{\mathbf{a}}.$$

[6]

We use **R** to denote the overall reconstruction matrix here, and the superscript **H** denotes the Hermitian (conjugate) transpose. In a more generalized form, Eq. [6] can be rewritten as

$$\left[\begin{array}{cc}\mathbf{I}& -\mathbf{R}\end{array}\right]\left[\begin{array}{c}\hfill {\mathbf{d}}_{\mathbf{m}}\hfill \\ \hfill {\mathbf{d}}_{\mathbf{a}}\hfill \end{array}\right]=0.$$

[7]

And if we re-permute the matrices back according to the original order of all k-space samples, we will end up with

$$\mathbf{Nd}=0.$$

[8]

Here **N** denotes a non-zero system matrix, each row of which nulls k-space samples from all coils through multiplication. This equation carries one of the fundamental forms of k-space data-driven reconstruction. Generally speaking, it implies that k-space samples from all receiver channels are essentially linearly correlated through null operations, which results from the intrinsic nature of sensitivity encoding. Once the matrix **N** can be determined, Eq. [8] can be utilized to solve the pMRI reconstruction problem through a procedure that we term Parallel Reconstruction Using Null Operations (PRUNO).

In this section, we will first find a more tractable form of **N** by directly exploring the linear dependence among local k-space samples. It will be shown that the resulting null operations can be characterized by a group of 2D convolutions with small kernels, so that **N** can be feasibly derived through data calibration. Finally, an iterative conjugate-gradient method will be presented to solve **d _{m}** from Eq. [8] efficiently.

As illustrated in Eq. [1], each encoded sample *d _{n}*(*k _{x}*, *k _{y}*) is a linear combination of the unencoded samples within a small *w _{s}* × *w _{s}* neighborhood. Consequently, all encoded samples within a small k-space window are linear combinations of the unencoded samples within a slightly larger window.

In other words, a local sensitivity encoding relationship can be formulated as

$$\stackrel{~}{\mathbf{d}}=\stackrel{~}{\mathbf{S}}\stackrel{~}{\mathbf{m}}.$$

[9]

To differentiate from Eq. [2], we use tilde symbols to denote the small local subsets $\stackrel{~}{\mathbf{d}}$ and $\stackrel{~}{\mathbf{m}}$ here. $\stackrel{~}{\mathbf{S}}$ is the corresponding local encoding matrix, which is ${N}_{c}{w}_{d}^{2}$ by ${w}_{m}^{2}$, where *w _{d}* is the width of each local data subset and *w _{m}* = *w _{d}* + *w _{s}* − 1 is the width of the corresponding magnetization subset. As the size of the data subsets grows, the encoding matrix $\stackrel{~}{\mathbf{S}}$ tends to become a “taller” matrix, because the total number of encoded samples (${N}_{c}{w}_{d}^{2}$) increases much faster than the number of the corresponding source samples (${w}_{m}^{2}$) (see Appendix A).

According to linear algebra, any “fat” matrix (with fewer rows than columns) has a nontrivial null space. Therefore, ${\stackrel{~}{\mathbf{S}}}^{\mathbf{H}}$, the Hermitian transpose of the local encoding matrix, will have a non-empty null space if we increase the size of the data subsets so that ${N}_{c}{w}_{d}^{2}>{w}_{m}^{2}$. Furthermore, at least *r* independent non-zero nulling kernels exist if the following condition is satisfied (see Appendix A)

$${w}_{d}\ge \frac{({w}_{s}-1)+\sqrt{{N}_{c}{({w}_{s}-1)}^{2}+({N}_{c}-1)r}}{{N}_{c}-1}.$$

[10]

For each of these nulling kernels, **n**_{1}, **n**_{2}, …, **n**_{r}, we have ${\stackrel{~}{\mathbf{S}}}^{\mathbf{H}}{\mathbf{n}}_{i}=0\;(i=1,2,\dots ,r)$. Applying these kernels to both sides of Eq. [9] gives

$${\stackrel{~}{\mathbf{N}}}^{\mathbf{H}}\stackrel{~}{\mathbf{d}}=\left[\begin{array}{c}\hfill {\mathbf{n}}_{1}^{\mathbf{H}}\hfill \\ \hfill {\mathbf{n}}_{2}^{\mathbf{H}}\hfill \\ \hfill \vdots \hfill \\ \hfill {\mathbf{n}}_{r}^{\mathbf{H}}\hfill \end{array}\right]\stackrel{~}{\mathbf{d}}=\left[\begin{array}{c}\hfill {\mathbf{n}}_{1}^{\mathbf{H}}\stackrel{~}{\mathbf{S}}\hfill \\ \hfill {\mathbf{n}}_{2}^{\mathbf{H}}\stackrel{~}{\mathbf{S}}\hfill \\ \hfill \vdots \hfill \\ \hfill {\mathbf{n}}_{r}^{\mathbf{H}}\stackrel{~}{\mathbf{S}}\hfill \end{array}\right]\stackrel{~}{\mathbf{m}}=0.$$

[11]

Here ${\stackrel{~}{\mathbf{N}}}^{\mathbf{H}}$, of size *r* by ${N}_{c}{w}_{d}^{2}$, turns out to be a local nulling system matrix on $\stackrel{~}{\mathbf{d}}$. Since sensitivity encoding is essentially shift-invariant in k-space, both $\stackrel{~}{\mathbf{S}}$ and $\stackrel{~}{\mathbf{N}}$ are independent of the k-space location. Therefore, we can apply the nulling kernels at all k-space locations, so that the full nulling matrix **N** in Eq. [8], which is *rN _{x}N _{y}* by *N _{c}N _{x}N _{y}*, can be constructed.

We call these nulling kernels PRUNO kernels, and *w _{d}* is simply the kernel size used in PRUNO. Based on the first assumption and empirical approximation, *w _{d}* can be kept reasonably small (in our experience, 5 to 7 for typical imaging protocols).

Once **N** has been determined, the image reconstruction can proceed by solving **d _{m}** from Eq. [8]. The derivation is very similar to that of Eq. [6]. By applying the same masking operators to Eq. [8], we get

$${\mathbf{N}}_{\mathbf{m}}{\mathbf{d}}_{\mathbf{m}}=-{\mathbf{N}}_{\mathbf{a}}{\mathbf{d}}_{\mathbf{a}}$$

[12]

Here **N _{m}** and **N _{a}** are the sub-matrices of **N** acting on the missing and acquired samples, respectively. Multiplying both sides of Eq. [12] by ${\mathbf{N}}_{\mathbf{m}}^{\mathbf{H}}$ yields the normal equation

$$\left({\mathbf{N}}_{\mathbf{m}}^{\mathbf{H}}{\mathbf{N}}_{\mathbf{m}}\right){\mathbf{d}}_{\mathbf{m}}=-\left({\mathbf{N}}_{\mathbf{m}}^{\mathbf{H}}{\mathbf{N}}_{\mathbf{a}}\right){\mathbf{d}}_{\mathbf{a}}.$$

[13]

Essentially, this is another representation of Eq. [6].

As stated above, a set of local nulling kernels (columns of $\stackrel{~}{\mathbf{N}}$) need to be found in PRUNO through data calibration. This procedure is similar to GRAPPA calibration. For each kernel, a series of linear equations can be established by sliding the kernel window within a certain k-space calibration region. And the calibration is again a linear algebra problem, which is illustrated as

$$\left[\begin{array}{c}\hfill {\stackrel{~}{\mathbf{d}}}^{\mathbf{H}}({k}_{x1},{k}_{y1})\hfill \\ \hfill {\stackrel{~}{\mathbf{d}}}^{\mathbf{H}}({k}_{x2},{k}_{y2})\hfill \\ \hfill \vdots \hfill \\ \hfill {\stackrel{~}{\mathbf{d}}}^{\mathbf{H}}({k}_{xl},{k}_{yl})\hfill \end{array}\right]{\mathbf{n}}_{i}={\stackrel{~}{\mathbf{D}}}^{\mathbf{H}}{\mathbf{n}}_{i}=0,(i=1,2,\dots ,r).$$

[14]

Here *l* denotes the total number of calibration locations. $\stackrel{~}{\mathbf{D}}$ is a calibration matrix, of which each column is one windowed subset of calibration data, as shown in Figure 2.

To ensure that Eq. [12] is an overdetermined system, **N _{m}** needs to be a tall matrix, i.e., $r\ge \frac{R-1}{R}{N}_{c}$. Although

One simple strategy of kernel selection is inspired by GRAPPA, in which a small number (usually *N _{c}*) of template-based kernels are used [20]. However, the performance of this method is limited by its inflexibility and instability. An optimal kernel selection strategy is to find all generalized nulling kernels directly from Eq. [14], i.e., from the null space of ${\stackrel{~}{\mathbf{D}}}^{\mathbf{H}}$. This can be easily achieved by performing a Singular Value Decomposition (SVD) on $\stackrel{~}{\mathbf{D}}$ and extracting the null space from its eigenspace [22]. One example is shown in Figure 2. In this case, a full kernel width of 5 was used on 8-channel simulated phantom data. Around 1000 k-space subsets (vectors) were collected to form the calibration matrix $\stackrel{~}{\mathbf{D}}$, each with a vector length of 200. SVD was then applied to this matrix. The eigenvalue plot indicates that the column space has a much lower rank than 200 if reasonable thresholding is applied to the eigenvalues. Once a cut-off is determined, the nulling kernels are simply the eigenvectors corresponding to those eigenvalues that are smaller than the threshold. In this example, around 100 kernels were obtained.
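The calibration step can be sketched in a few lines of NumPy. This is a schematic implementation under assumed conventions: the `(Nc, ny, nx)` data layout, the kernel ordering, and the default cut-off (which follows the 0.1% threshold mentioned in the Discussion) are our choices, not prescribed here:

```python
import numpy as np

def pruno_calibrate(acs, wd, thresh=1e-3):
    """Extract PRUNO nulling kernels from fully sampled ACS data via SVD.

    acs:    (Nc, ny, nx) calibration data (assumed layout).
    wd:     nulling-kernel width w_d.
    thresh: relative singular-value cut-off separating the null space.
    Returns an array of kernels with shape (r, Nc, wd, wd).
    """
    Nc, ny, nx = acs.shape
    rows = []
    for y in range(ny - wd + 1):            # slide the wd x wd window
        for x in range(nx - wd + 1):
            rows.append(np.conj(acs[:, y:y + wd, x:x + wd].ravel()))
    D_H = np.array(rows)                    # rows are conjugated windowed subsets
    # The nulling kernels span the numerical null space of D^H: they are the
    # right singular vectors whose singular values fall below the cut-off.
    _, svals, Vh = np.linalg.svd(D_H, full_matrices=True)
    n_signal = int(np.sum(svals > thresh * svals[0]))
    return Vh[n_signal:].conj().reshape(-1, Nc, wd, wd)
```

Applying any returned kernel as a k-space correlation over the coil data should then annihilate consistent data, which is the property the reconstruction exploits.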

Once the nulling matrix has been calculated through data calibration, the remaining objective is to solve Eq. [13]. Considering the huge matrix size, it would usually be computationally infeasible to perform a direct matrix inversion. An indirect solver is thus desired. Using the representation of non-pruned matrices, the system equation can be rewritten as

$${\mathbf{I}}_{\mathbf{m}}\left({\mathbf{N}}^{\mathbf{H}}\mathbf{N}\right){\mathbf{I}}_{\mathbf{m}}\mathbf{d}=-{\mathbf{I}}_{\mathbf{m}}\left({\mathbf{N}}^{\mathbf{H}}\mathbf{N}\right){\mathbf{I}}_{\mathbf{a}}\mathbf{d}.$$

[15]

Each forward matrix-vector multiplication in Eq. [15] is either a simple masking operation or a composition of 2D convolutions and summations, none of which is computationally demanding. A feasible iterative algorithm can therefore be implemented to solve this system. A conjugate-gradient method has been developed and demonstrated in [20], in which the matrix-vector multiplications are performed step by step during the reconstruction, very similar to NC-SENSE reconstruction [16].

According to the construction of the nulling matrix, **N** can be decomposed into *r* × *N _{c}* blocks,

$$\mathbf{N}=\left[\begin{array}{cccc}\hfill {\mathbf{N}}_{11}\hfill & \hfill {\mathbf{N}}_{12}\hfill & \hfill \cdots \hfill & \hfill {\mathbf{N}}_{1{N}_{c}}\hfill \\ \hfill {\mathbf{N}}_{21}\hfill & \hfill {\mathbf{N}}_{22}\hfill & \hfill \cdots \hfill & \hfill {\mathbf{N}}_{2{N}_{c}}\hfill \\ \hfill \vdots \hfill & \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ \hfill {\mathbf{N}}_{r1}\hfill & \hfill {\mathbf{N}}_{r2}\hfill & \hfill \cdots \hfill & \hfill {\mathbf{N}}_{r{N}_{c}}\hfill \end{array}\right]$$

[16]

Here each **N**_{ij} is an *N _{x}N _{y}* × *N _{x}N _{y}* block, which applies the *j*-th coil portion of the *i*-th nulling kernel to the corresponding coil data as a 2D convolution.

Immediately from the decomposition of **N** (Eq. [16]), we have

$${\mathbf{N}}^{\mathbf{H}}\mathbf{N}=\left[\begin{array}{cccc}\hfill \sum _{i=1}^{r}{\mathbf{N}}_{i1}^{\mathbf{H}}{\mathbf{N}}_{i1}\hfill & \hfill \sum _{i=1}^{r}{\mathbf{N}}_{i1}^{\mathbf{H}}{\mathbf{N}}_{i2}\hfill & \hfill \cdots \hfill & \hfill \sum _{i=1}^{r}{\mathbf{N}}_{i1}^{\mathbf{H}}{\mathbf{N}}_{i{N}_{c}}\hfill \\ \hfill \sum _{i=1}^{r}{\mathbf{N}}_{i2}^{\mathbf{H}}{\mathbf{N}}_{i1}\hfill & \hfill \sum _{i=1}^{r}{\mathbf{N}}_{i2}^{\mathbf{H}}{\mathbf{N}}_{i2}\hfill & \hfill \cdots \hfill & \hfill \sum _{i=1}^{r}{\mathbf{N}}_{i2}^{\mathbf{H}}{\mathbf{N}}_{i{N}_{c}}\hfill \\ \hfill \vdots \hfill & \hfill \vdots \hfill & \hfill \ddots \hfill & \hfill \vdots \hfill \\ \hfill \sum _{i=1}^{r}{\mathbf{N}}_{i{N}_{c}}^{\mathbf{H}}{\mathbf{N}}_{i1}\hfill & \hfill \sum _{i=1}^{r}{\mathbf{N}}_{i{N}_{c}}^{\mathbf{H}}{\mathbf{N}}_{i2}\hfill & \hfill \cdots \hfill & \hfill \sum _{i=1}^{r}{\mathbf{N}}_{i{N}_{c}}^{\mathbf{H}}{\mathbf{N}}_{i{N}_{c}}\hfill \end{array}\right].$$

[17]

This equation shows that one “composite” null operation **N**^{**H**}**Nx** is actually equivalent to ${N}_{c}^{2}$ 2D convolutions, independent of the number of kernels (*r*). The composite kernels can be pre-computed as

$${\eta}_{ij}=\sum _{k=1}^{r}{n}_{ki}^{\ast}\ast {n}_{kj}\quad (i,j=1,2,\dots ,{N}_{c}).$$

[18]

Here *n* and *n** denote a local 2D convolution kernel and its conjugate, respectively, and η_{ij} is a composite kernel corresponding to one block of **N**^{**H**}**N**. Because these convolutions are calculated among very small kernels and are performed only once prior to the CG iterations, their computational load is nearly negligible. Although the width of a composite kernel (2*w _{d}* − 1) is larger than that of a nulling kernel (*w _{d}*), the cost of each composite null operation remains independent of the number of kernels.
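Forming the composite kernels can be sketched as below. The `(r, Nc, wd, wd)` kernel layout and the choice of which factor carries the conjugate (and coordinate flip) implied by *n** are our assumptions; any consistent convention yields a Hermitian **N**^{**H**}**N**:

```python
import numpy as np

def conv2_full(a, b):
    """Full 2D linear convolution via zero-padded FFTs (no SciPy required)."""
    sy = a.shape[0] + b.shape[0] - 1
    sx = a.shape[1] + b.shape[1] - 1
    return np.fft.ifft2(np.fft.fft2(a, s=(sy, sx)) * np.fft.fft2(b, s=(sy, sx)))

def composite_kernels(kernels):
    """Combine r nulling kernels into the Nc^2 composite kernels of Eq. [18].

    kernels: (r, Nc, wd, wd) PRUNO kernels (assumed layout).
    Returns eta of shape (Nc, Nc, 2*wd - 1, 2*wd - 1), where eta[i, j] is the
    small convolution kernel of block (i, j) of N^H N.
    """
    r, Nc, wd, _ = kernels.shape
    L = 2 * wd - 1
    eta = np.zeros((Nc, Nc, L, L), dtype=complex)
    for i in range(Nc):
        for j in range(Nc):
            for k in range(r):
                # convolve kernel (k, i) with the conjugate-flipped kernel (k, j)
                flipped = np.conj(kernels[k, j])[::-1, ::-1]
                eta[i, j] += conv2_full(kernels[k, i], flipped)
    return eta
```

Because **N**^{**H**}**N** is Hermitian, the blocks satisfy η_{ji} = conjugate-flip of η_{ij}, which is a cheap sanity check on any implementation.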

The procedure of an optimized PRUNO reconstruction is summarized below:

- Determine the size of the PRUNO kernels (*w _{d}*).
- Construct the calibration matrix $\stackrel{~}{\mathbf{D}}$ from ACS data (Eq. [14]).
- Apply SVD on $\stackrel{~}{\mathbf{D}}$ and choose the number of nulling kernels (*r*) to be used.
- Select the eigenvectors corresponding to the *r* smallest eigenvalues as the PRUNO kernels.
- Compute the ${N}_{c}^{2}$ composite kernels (Eq. [18]).
- Provide an initial guess of the missing k-space data and perform the CG reconstruction (Figure 3).
- Generate coil images and synthesize the final image by using SOS.
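The CG step of the procedure above can be sketched as follows. This is a schematic NumPy implementation under our own assumptions: the `(Nc, ny, nx)` data layout, a boolean sampling mask, composite kernels `eta` of shape `(Nc, Nc, L, L)` with `L = 2*wd - 1`, and a centered circular-convolution treatment of k-space boundaries; none of these details are prescribed by the method description:

```python
import numpy as np

def _centered_fft(kern, ny, nx):
    """Embed a small centered kernel into an (ny, nx) grid and FFT it, so that
    pointwise multiplication realizes a centered circular 2D convolution."""
    L = kern.shape[0]
    pad = np.zeros((ny, nx), dtype=complex)
    pad[:L, :L] = kern
    return np.fft.fft2(np.roll(pad, (-(L // 2), -(L // 2)), axis=(0, 1)))

def apply_NhN(d, eta):
    """Apply the composite operator N^H N to multi-coil k-space data d."""
    Nc, ny, nx = d.shape
    D = np.fft.fft2(d, axes=(-2, -1))
    out = np.empty_like(d)
    for i in range(Nc):
        acc = np.zeros((ny, nx), dtype=complex)
        for j in range(Nc):
            acc += _centered_fft(eta[i, j], ny, nx) * D[j]
        out[i] = np.fft.ifft2(acc)
    return out

def pruno_cg(d0, mask, eta, n_iter=200, tol=1e-8):
    """Solve I_m (N^H N) I_m d = -I_m (N^H N) I_a d (Eq. [15]) by CG.

    d0:   (Nc, ny, nx) k-space with acquired samples filled, zeros elsewhere.
    mask: (ny, nx) boolean array, True where samples were acquired.
    """
    Im = ~mask                                  # missing-sample mask I_m
    A = lambda x: Im * apply_NhN(Im * x, eta)   # left-hand-side operator
    b = -(Im * apply_NhN(mask * d0, eta))       # right-hand side from acquired data
    x = np.zeros_like(d0)
    r = b.copy()
    p = r.copy()
    rs = np.vdot(r, r).real
    bn = np.sqrt(np.vdot(b, b).real) + 1e-30
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol * bn:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return mask * d0 + Im * x                   # acquired samples stay untouched
```

In practice the loop would be wrapped with the stopping criteria described later; coil images are then obtained by inverse FFT and combined by SOS.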

For comparison, PRUNO and GRAPPA reconstructions have been applied to Cartesian pMRI for both computer simulated phantom and *in vivo* studies. A wide range of reduction factors were tested with 8-channel coils. Both data simulation and image reconstruction were implemented by using MATLAB 7.7 (The Mathworks, Inc, Natick, MA, USA) on a Linux PC with a 3.20 GHz Intel Pentium 4 CPU and 4GB RAM.

A 256×256 Shepp-Logan image was used for the simulation. Pre-measured sensitivity maps from an eight-channel receiver coil were used for the sensitivity encoding [6,24–26]. Full k-space data of each coil were generated by applying a fast Fourier transform to the encoded coil image, which was produced by multiplying the original image by the corresponding sensitivity map. Complex white Gaussian noise was added to the phantom image during the simulation, corresponding to a signal-to-noise ratio of 25. Uniform data undersampling was then performed while preserving a small number of ACS lines near the k-space center. The reduction factors were varied from 2 to 7 in these experiments. Two calibration data blocks (*N _{b}*=2), or 2 (

*In vivo* brain images of a healthy volunteer were acquired on a 3T whole-body scanner (GE MR750; GE Healthcare, Waukesha, WI, USA) with an array of 8 receiver coils. A 2D FSE T2-FLAIR sequence was used to obtain axial images with the following scan parameters: FOV=24cm, TE/TR=50ms/2s, rBW (receiver bandwidth)=15.63kHz, slice thickness=5mm, and matrix size=256×256. Full Cartesian k-space samples were acquired from the scanner without undersampling. The data were then uniformly undersampled offline along the phase-encoding direction, using reduction factors ranging from 2 to 6. In each experiment, the same strategy as in the phantom simulation was used to keep the necessary ACS lines.

Numerous parameter configurations were first tested for both reconstruction algorithms, including the kernel width of GRAPPA, the kernel selection of PRUNO (size and number), and the initial guess and stopping criteria of the CG algorithm. The stopping criterion is defined as the condition that either the relative residual RMS error falls below an error bound or a predefined maximum number of iterations has been reached. In both GRAPPA and PRUNO, the ACS blocks are always preserved during the reconstruction. The effective reduction factors (*R _{eff}*) of all experiments were calculated, as shown in Table 1.

For GRAPPA, we have evaluated various kernel sizes, including 2×3, 2×5, 4×3, and 4×5 for each experiment. The kernel size mentioned here is defined on acquired samples only, as shown in Figure 4. In each case, the best GRAPPA image was picked according to its minimum RMS reconstruction error. A similar scenario applies to the PRUNO kernel selection as well. The optimal PRUNO kernel selection strategies varied among different studies. Detailed optimal GRAPPA and PRUNO kernel configurations for each study are listed in Table 1. The LSQR algorithm was used for each data calibration. No other regularization was applied during the following image reconstruction so that we could compare the intrinsic reconstruction power of GRAPPA and PRUNO.

In each experiment, a GRAPPA reconstruction was performed first. The output k-space data of GRAPPA were used as the initial guess for the subsequent PRUNO reconstruction, because a good initial guess usually yields fast algorithm convergence. For all phantom experiments, the CG error bound was set to 0.01% and the maximum number of iterations was 200; for all *in vivo* experiments, the error bound was relaxed to 0.1% and the maximum number of iterations was 300.

Figure 5 and Figure 6 compare PRUNO and GRAPPA reconstructions of phantom simulations and *in vivo* experiments, respectively. Each reconstructed image was also compared with a reference image, which was reconstructed under full k-space sampling, and the image difference (×10) was taken for evaluating the quality of this reconstruction. The actual reconstruction time of each study is listed in Table 1.

Figure 5 shows that the reconstruction error of PRUNO is consistently smaller than that of GRAPPA, which indicates that the CG algorithm, by utilizing the equations established from null operations, derives a solution closer to the actual missing data. GRAPPA performs very similarly to PRUNO at *R* = 2, with good image quality. As the reduction factor increases, the reconstruction quality degrades for both algorithms. However, the performance of GRAPPA drops much more rapidly than that of PRUNO. Obvious residual aliasing artifacts appear in the GRAPPA images once *R* exceeds 3. In contrast, PRUNO is capable of producing much better images with the same data availability: nearly no visible artifacts are observed in the PRUNO images for highly accelerated cases, up to *R* = 5. Even when the reduction factor is pushed to 6, the PRUNO reconstruction still yields reasonably good image quality. As *R* grows even further, however, the image aliasing can hardly be eliminated, because the PRUNO reconstruction becomes a severely ill-posed problem. Furthermore, it can be seen that the reconstruction errors and noise are distributed more uniformly in PRUNO.

Very similar phenomena are observed in the *in vivo* studies, as demonstrated in Figure 6. GRAPPA yields performance comparable to PRUNO at *R* = 2. Small visible aliasing artifacts are present in the GRAPPA image at *R* = 3. At reduction rates higher than 3, the image quality of the GRAPPA reconstruction is not acceptable given the number of ACS lines available. On the other hand, PRUNO works robustly as long as the reduction factor is no greater than 5. At *R* = 6, some high-frequency reconstruction errors can be seen in PRUNO due to the poor system condition in Eq. [12].

We have shown that PRUNO is an iterative k-space data-driven parallel imaging reconstruction algorithm that is more robust than GRAPPA, especially for ultra-highly accelerated MRI. With its generalized formulation and precise mathematical model, PRUNO offers excellent flexibility and accuracy. Our experimental results have shown that PRUNO performs stably under various influencing factors such as high reduction factors and low SNR. In addition, the PRUNO algorithm usually requires a smaller number of ACS lines to achieve acceptable reconstruction quality, which can help further reduce scan time.

The width and content of the kernels have an important impact on the system condition, according to Eq. [12]. A good ensemble of null operations is expected to capture sufficiently many independent linear equations relating missing samples to acquired samples. In general, larger kernels are favorable, because more samples can be correlated in each null operation. However, using larger kernels demands more ACS lines and longer reconstruction time. On the other hand, the PRUNO kernel width cannot be too small either, as addressed in Eq. [10]. Our empirical choice of kernel width is 5 to 7 for typical imaging protocols, which has been tested with our 8-channel coils on site. At small reduction rates such as 2 or 3, a width of 5 should be used to lower the ACS requirement, without clearly degrading the performance of the algorithm. However, larger kernels are usually necessary as *R* gets larger, which ensures that missing samples and acquired samples are coupled with each other over a long distance (along *y*).

There is another tradeoff between the number of kernels (or system condition) and the accuracy of the null operations resulting from these kernels. Fewer PRUNO kernels ensure a stricter null space and thus produce smaller residual errors. Conversely, a larger number of nulling kernels forms more equations in Eq. [12], which improves the condition of the linear system and speeds up the convergence of CG. Choosing a moderate number of kernels can thus optimize the performance of PRUNO reconstruction. In practice, however, the image quality of the final reconstruction is not very sensitive to the number of kernels if it is chosen within a reasonable range. This can be seen from Figure 7. In this *R* = 4 example, we may choose the number of kernels from a wide range without significantly affecting the reconstruction quality, but more kernels are usually preferred to speed up the reconstruction. To obtain a better convergence rate, the number of PRUNO kernels was determined empirically in our experiments, based on the eigenvalues of the calibration data. A threshold roughly corresponding to 0.1% of the maximum eigenvalue was used in each case to separate the null space from the span space. Alternatively, the necessary number of kernels may also be estimated if we have a reasonable estimate of the maximum bandwidth of the sensitivity maps (*w _{s}*) [17], as discussed in Appendix A, such that

$$r\le {N}_{c}{w}_{d}^{2}-{({w}_{d}+{w}_{s}-1)}^{2}.$$

[19]

In phantom experiments, *w _{s}* is approximated to be 6. In this case, Eq. [19] returns
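As a quick numerical check of this bound: with the 8-channel setup, *w _{s}* ≈ 6, and a kernel width of 5 (the configuration of the SVD calibration example in Figure 2), Eq. [19] can be evaluated directly:

```python
def max_kernels(Nc, wd, ws):
    """Upper bound on the number of nulling kernels, Eq. [19]:
    r <= Nc * wd**2 - (wd + ws - 1)**2."""
    return Nc * wd**2 - (wd + ws - 1)**2

# 8 coils, kernel width 5, sensitivity bandwidth 6:
print(max_kernels(Nc=8, wd=5, ws=6))   # -> 100
```

The result agrees with the roughly 100 kernels obtained from the SVD of the calibration matrix in the Figure 2 example.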

In these experiments, a PRUNO reconstruction usually takes tens to hundreds of CG steps to converge under our stopping criteria. The computational time of a single CG loop varies from 1 to 3 seconds in our implementation, depending on the image size, the number of coils, and the width of the PRUNO kernels. Therefore, a typical PRUNO reconstruction takes several minutes. The actual convergence speed of the CG algorithm is determined by several factors, most of which have been addressed already, including the reduction factor, the exact sampling pattern, the number and quality of PRUNO kernels, the initial guess of the missing data, and the image noise level.

There are a few options for generating the initial guess. The simplest strategy is to set all missing samples to zero, which requires no extra effort. Another straightforward strategy is to perform some interpolation at the missing k-space locations. More accurate estimates can be made as well, such as the approach used in our experiments: a GRAPPA reconstruction is performed first, and the fitted k-space data are used as the initial guess of the PRUNO reconstruction. According to our evaluations, the choice of the initial guess does not have a significant impact on the quality of the final image in most cases. However, a good starting guess does help improve the convergence speed of the algorithm, as can be clearly observed from the top two rows of Figure 8. It is thus worthwhile to generate the initial guess through a rough GRAPPA reconstruction, as the computational time of GRAPPA is much shorter than that of PRUNO.

As another k-space coil-by-coil reconstruction algorithm, GRAPPA also yields a formulation of Eq. [6]. In GRAPPA, the reconstruction matrix **R** itself is assumed to be sparse and shift-invariant, so that it can be directly approximated from data calibration. In this case, each missing k-space sample is fitted from multiple neighboring acquired samples, which is equivalent to applying a template-based PRUNO kernel. Therefore, GRAPPA requires that a nulling kernel exist for each particular template. For small reduction rates, this fitting model may be a good approximation, since the target sample and the fitting samples are close to each other and thus strongly linearly correlated. However, as the reduction rate goes higher, the distance between the target sample and the fitting samples increases as well. Consequently, the correlation between these samples becomes weaker, and the template-based nulling kernel may no longer be applicable. To obtain an accurate fitting kernel, many more fitting samples must be involved (larger kernels), which requires a larger number of ACS lines. In addition, the distances between the fitting samples and the target differ significantly from one another under high acceleration rates. Because k-space data decay roughly exponentially with distance from the k-space center, the data calibration in GRAPPA then tends to become a highly ill-posed problem. These factors make GRAPPA unsuitable for ultra-highly accelerated pMRI.

In contrast, PRUNO always uses a local convolution kernel regardless of the reduction factor. With a proper kernel size, each nulling kernel in PRUNO yields better fitting accuracy, owing to the greater number of involved samples and their better locality. Through iterative reconstruction, each missing sample is eventually fitted using all available k-space data. This explains why PRUNO consistently outperforms GRAPPA, especially at high reduction rates.
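The nulling property of such a local kernel can be illustrated in a minimal 1D two-coil toy model: if coil 2's image is coil 1's image modulated by a compact sensitivity, then in k-space coil 2's data equal coil 1's data convolved with a small kernel s, and the local kernel (s, −δ) annihilates the stacked two-coil data everywhere. All values below are synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: coil 2's k-space equals coil 1's k-space convolved with a
# small "sensitivity" kernel s (image-domain modulation = k-space convolution).
d1 = rng.standard_normal(64) + 1j * rng.standard_normal(64)
s = np.array([0.2 - 0.1j, 1.0, 0.3 + 0.2j])   # compact k-space kernel
d2 = np.convolve(d1, s, mode="same")

# The local nulling kernel n = (s, -delta) annihilates the two-coil data:
# conv(s, d1) - d2 = 0 at every k-space location, independent of any
# undersampling pattern -- a null operation in the PRUNO sense.
null_output = np.convolve(d1, s, mode="same") - d2
```

Because the kernel stays compact no matter how the data are undersampled, this fitting relation does not degrade as the reduction factor grows; the iterations, not the kernel, propagate information across k-space.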

Unlike in GRAPPA, however, ACS blocks should always be included in the PRUNO reconstruction. The presence of a continuous ACS band in the CG iterations not only helps increase the SNR of the reconstruction but also significantly improves the conditioning of the entire linear system; otherwise, the original undersampling pattern may easily render Eq. [12] seriously ill-posed. In other words, the ACS samples act as strong and important constraints when solving this linear system in the least-squares sense. This effect can be clearly seen by comparing the second and third rows of Figure 8.
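The least-squares solve itself follows the classical conjugate-gradient recipe of ref. [23], applied to the normal equations. A generic sketch (not the paper's operator-based implementation, which applies **A** through convolutions rather than as an explicit matrix) is:

```python
import numpy as np

def cg_normal_equations(A, b, n_iter=50, tol=1e-10):
    """Conjugate gradients on the normal equations A^H A x = A^H b,
    i.e. the least-squares solution of A x = b (Hestenes & Stiefel, ref. 23)."""
    x = np.zeros(A.shape[1], dtype=complex)
    r = A.conj().T @ (b - A @ x)       # residual of the normal equations
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = A.conj().T @ (A @ p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Stacking additional well-conditioned rows -- the role the ACS band plays in
# PRUNO -- improves the conditioning of A and speeds up CG convergence.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = cg_normal_equations(A.astype(complex), b.astype(complex))
```

In the PRUNO setting, the rows of **A** come from the nulling kernels, and the ACS samples enter as known entries that constrain the otherwise ill-posed system.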

PRUNO does not require any regularization during either data calibration or image reconstruction. However, a proper regularization within the CG loops might further improve the performance of the algorithm, either by improving image quality or by achieving a faster convergence rate. For instance, the noise-like high-frequency reconstruction errors observed in the *R* = 6 case of the *in vivo* study (Figure 6) would be expected to be suppressed if the CG algorithm were combined with an image-domain smoothness constraint, such as total variation (TV) regularization [27]. Hence, an even higher reduction factor might be achieved with regularized PRUNO reconstruction. This is beyond the scope of this work and will be investigated in our future research.
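To make the TV idea concrete, a discrete (anisotropic) total-variation penalty of the kind used in ref. [27] can be sketched as below; the images are synthetic and the anisotropic form is chosen purely for brevity:

```python
import numpy as np

def tv_penalty(img):
    """Anisotropic discrete total variation: sum of absolute differences
    between neighboring pixels along each image axis."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

# A smooth image has a much smaller TV value than the same image with added
# noise-like high-frequency errors, which is why penalizing TV inside the CG
# loop could suppress the residual artifacts seen at high reduction factors.
y, x = np.mgrid[0:64, 0:64]
smooth = np.sin(x / 10.0) * np.cos(y / 10.0)
rng = np.random.default_rng(4)
noisy = smooth + 0.1 * rng.standard_normal(smooth.shape)
```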

Only 1D-PRUNO has been discussed in this work. However, the formulation of PRUNO places no restriction on the undersampling pattern, which suggests that PRUNO reconstruction could also handle any 2D undersampling problem. Although issues such as the system conditioning of 2D-PRUNO and the optimal 2D sampling scheme still need to be investigated, we believe PRUNO will offer many opportunities in ultra-fast 3D parallel imaging as well.

In this work, an iterative parallel imaging reconstruction algorithm termed PRUNO has been demonstrated, based on a generalized formulation of k-space data-driven pMRI reconstruction. Both simulation and in vivo studies have shown that PRUNO offers better image quality than GRAPPA with only a light ACS requirement, especially under high acceleration rates. Data calibration and image reconstruction in PRUNO are both tractable linear algebra problems and hence easy to implement. With its robustness and flexibility, PRUNO is expected to be applied in various clinical applications to offer improved pMRI reconstruction.

The authors thank Dr. Jing Chen for proofreading the manuscript.

**Grant Sponsors:** National Institutes of Health, Grant numbers R00EB007182, R01NS04760705, and NCRR P41 RR09784; Center of Advanced MR Technology of Stanford; Lucas Foundation.

If we use *L _{d}* and *L _{m}* to denote the total number of elements in a multi-coil data kernel and in the composite kernel, respectively, we have

$${L}_{d}={N}_{c}{w}_{d}^{2},$$

and

$${L}_{m}={w}_{m}^{2}={({w}_{d}+{w}_{s}-1)}^{2}.$$

Apparently, if multiple receiver coils are used, i.e., *N _{c}* > 1,

$${L}_{d}\ge {L}_{m}+{r}_{0}.$$

Therefore, existence of *r* independent non-zero nulling kernels in Eq. [11] requires that

$$r\le {r}_{0}\le {L}_{d}-{L}_{m}={N}_{c}{w}_{d}^{2}-{({w}_{d}+{w}_{s}-1)}^{2}.$$

[A.1]

This is exactly Eq. [19]. We can also solve for *w _{d}* from this equation to obtain Eq. [10].
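The bound in Eq. [A.1] is easy to evaluate numerically. The parameter values below (8 coils, a 5×5 data kernel, a 3×3 sensitivity kernel) are illustrative assumptions, not the paper's protocol:

```python
def max_nulling_kernels(n_c, w_d, w_s):
    """Upper bound on the number of independent nulling kernels,
    r0 <= L_d - L_m = Nc*wd^2 - (wd + ws - 1)^2  (Eq. [A.1])."""
    L_d = n_c * w_d ** 2            # elements in the multi-coil data kernel
    L_m = (w_d + w_s - 1) ** 2      # elements in the composite kernel
    return L_d - L_m

# Example: 8 coils, 5x5 data kernel, 3x3 sensitivity kernel.
bound = max_nulling_kernels(8, 5, 3)   # 8*25 - 7^2 = 151
```

Note that with a single coil (*N _{c}* = 1) the same expression is negative, 25 − 49 = −24, confirming that multiple receiver coils are required for nulling kernels to exist.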

1. Tsao J, Hansen MS, Kozerke S. Accelerated parallel imaging by transform coding data compression with k-t SENSE. Conf Proc IEEE Eng Med Biol Soc. 2006;1:372. [PubMed]

2. Bauer JS, Banerjee S, Henning TD, Krug R, Majumdar S, Link TM. Fast high-spatial-resolution MRI of the ankle with parallel imaging using GRAPPA at 3 T. AJR Am J Roentgenol. 2007;189(1):240–245. [PubMed]

3. Quick HH, Vogt FM, Maderwald S, Herborn CU, Bosk S, Gohde S, Debatin JF, Ladd ME. High spatial resolution whole-body MR angiography featuring parallel imaging: initial experience. Rofo. 2004;176(2):163–169. [PubMed]

4. Fautz HP, Honal M, Saueressig U, Schafer O, Kannengiesser SA. Artifact reduction in moving-table acquisitions using parallel imaging and multiple averages. Magn Reson Med. 2007;57(1):226–232. [PubMed]

5. Winkelmann R, Bornert P, Dossel O. Ghost artifact removal using a parallel imaging approach. Magn Reson Med. 2005;54(4):1002–1009. [PubMed]

6. Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: sensitivity encoding for fast MRI. Magn Reson Med. 1999;42(5):952–962. [PubMed]

7. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn Reson Med. 2002;47(6):1202–1210. [PubMed]

8. McKenzie CA, Ohliger MA, Yeh EN, Price MD, Sodickson DK. Coil-by-coil image reconstruction with SMASH. Magn Reson Med. 2001;46(3):619–623. [PubMed]

9. Sodickson DK, Griswold MA, Jakob PM. SMASH imaging. Magn Reson Imaging Clin N Am. 1999;7(2):237–254. vii–viii. [PubMed]

10. Jakob PM, Griswold MA, Edelman RR, Sodickson DK. AUTO-SMASH: a self-calibrating technique for SMASH imaging. SiMultaneous Acquisition of Spatial Harmonics. MAGMA. 1998;7(1):42–54. [PubMed]

11. Heidemann RM, Griswold MA, Haase A, Jakob PM. VD-AUTO-SMASH imaging. Magn Reson Med. 2001;45(6):1066–1074. [PubMed]

12. Bydder M, Larkman DJ, Hajnal JV. Generalized SMASH imaging. Magn Reson Med. 2002;47(1):160–170. [PubMed]

13. Wang J, Klugel K. Parallel acquisition techniques with modified SENSE reconstruction (mSENSE). 1st Workshop on Parallel MRI: Basics and Clinical Applications; Würzburg, Germany. 2001. p. 92.

14. Griswold MA, Jakob PM, Nittka M, Goldfarb JW, Haase A. Partially parallel imaging with localized sensitivities (PILS). Magn Reson Med. 2000;44(4):602–609. [PubMed]

15. Kyriakos WE, Panych LP, Kacher DF, Westin CF, Bao SM, Mulkern RV, Jolesz FA. Sensitivity profiles from an array of coils for encoding and reconstruction in parallel (SPACE RIP). Magn Reson Med. 2000;44(2):301–308. [PubMed]

16. Pruessmann KP, Weiger M, Bornert P, Boesiger P. Advances in sensitivity encoding with arbitrary k-space trajectories. Magn Reson Med. 2001;46(4):638–651. [PubMed]

17. Liu C, Bammer R, Moseley ME. Parallel imaging reconstruction for arbitrary trajectories using k-space sparse matrices (kSPA). Magn Reson Med. 2007;58(6):1171–1181. [PMC free article] [PubMed]

18. Liu C, Zhang J, Moseley ME. Auto-Calibrated Parallel Imaging Reconstruction Using K-Space Sparse Matrices (KSPA). ISMRM 16th Annual Meeting; Toronto, Canada. 2008. p. 5.

19. Brau AC, Beatty PJ, Skare S, Bammer R. Comparison of reconstruction accuracy and efficiency among autocalibrating data-driven parallel imaging methods. Magn Reson Med. 2008;59(2):382–395. [PMC free article] [PubMed]

20. Zhang J, Liu C, Moseley ME. Parallel Reconstruction Using Null Operations (PRUNO). ISMRM 16th Annual Meeting; Toronto, Canada. 2008. p. 9.

21. Griswold MA, Blaimer M, Breuer F, Heidemann RM, Mueller M, Jakob PM. Parallel magnetic resonance imaging using the GRAPPA operator formalism. Magn Reson Med. 2005;54(6):1553–1556. [PubMed]

22. Zhang J, Liu C, Moseley ME. Generalized PRUNO Kernel Selection by Using Singular Value Decomposition (SVD). ISMRM 18th Annual Meeting; Stockholm, Sweden. 2010. p. 4907.

23. Hestenes MR, Stiefel E. Methods of conjugate gradients for solving linear systems. Journal of Research of the National Bureau of Standards. 1952;49(6):409–436.

24. Lin FH, Chen YJ, Belliveau JW, Wald LL. A wavelet-based approximation of surface coil sensitivity profiles for correction of image intensity inhomogeneity and parallel imaging reconstruction. Hum Brain Mapp. 2003;19(2):96–111. [PubMed]

25. Bydder M, Larkman DJ, Hajnal JV. Combination of signals from array coils using image-based estimation of coil sensitivity profiles. Magn Reson Med. 2002;47(3):539–548. [PubMed]

26. Griswold MA, Breuer F, Blaimer M, Kannengiesser S, Heidemann RM, Mueller M, Nittka M, Jellus V, Kiefer B, Jakob PM. Autocalibrated coil sensitivity estimation for parallel imaging. NMR Biomed. 2006;19(3):316–324. [PubMed]

27. Lustig M, Donoho D, Pauly JM. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn Reson Med. 2007;58(6):1182–1195. [PubMed]

28. Blaimer M, Breuer F, Mueller M, Heidemann RM, Griswold MA, Jakob PM. SMASH, SENSE, PILS, GRAPPA: how to choose the optimal method. Top Magn Reson Imaging. 2004;15(4):223–236. [PubMed]

29. Blaimer M, Breuer FA, Mueller M, Seiberlich N, Ebel D, Heidemann RM, Griswold MA, Jakob PM. 2D-GRAPPA-operator for faster 3D parallel MRI. Magn Reson Med. 2006;56(6):1359–1364. [PubMed]

30. Breuer FA, Kannengiesser SA, Blaimer M, Seiberlich N, Jakob PM, Griswold MA. General formulation for quantitative G-factor calculation in GRAPPA reconstructions. Magn Reson Med. 2009;62(3):739–746. [PubMed]

31. Heidemann RM, Griswold MA, Seiberlich N, Kruger G, Kanneng SA, Kiefer B, Wiggins G, Wald LL, Jakob PM. Direct parallel image reconstructions for spiral trajectories using GRAPPA. Magn Reson Med. 2006;56(2):317–326. [PubMed]

32. Heidemann RM, Griswold MA, Seiberlich N, Nittka M, Kannengiesser SA, Kiefer B, Jakob PM. Fast method for 1D non-cartesian parallel imaging using GRAPPA. Magn Reson Med. 2007;57(6):1037–1046. [PubMed]

33. Huo D, Wilson D. Using the Perceptual Difference Model (PDM) to Optimize GRAPPA Reconstruction. Conf Proc IEEE Eng Med Biol Soc. 2005;7:7409–7412. [PubMed]

34. Huo D, Wilson DL. Robust GRAPPA reconstruction and its evaluation with the perceptual difference model. J Magn Reson Imaging. 2008;27(6):1412–1420. [PMC free article] [PubMed]

35. Kreitner KF, Romaneehsen B, Krummenauer F, Oberholzer K, Muller LP, Duber C. Fast magnetic resonance imaging of the knee using a parallel acquisition technique (mSENSE): a prospective performance evaluation. Eur Radiol. 2006;16(8):1659–1666. [PubMed]

36. Lin FH, Kwong KK, Belliveau JW, Wald LL. Parallel imaging reconstruction using automatic regularization. Magn Reson Med. 2004;51(3):559–567. [PubMed]

37. Lustig M, Pauly JM. Iterative GRAPPA: a general solution for the GRAPPA reconstruction from arbitrary k-space sampling. ISMRM 15th Annual Meeting; Berlin, Germany. 2007. p. 333.

38. Zhao T, Hu X. Iterative GRAPPA (iGRAPPA) for improved parallel imaging reconstruction. Magn Reson Med. 2008;59(4):903–907. [PubMed]

39. Sodickson DK. Tailored SMASH image reconstructions for robust in vivo parallel MR imaging. Magn Reson Med. 2000;44(2):243–251. [PubMed]

40. Sodickson DK, McKenzie CA, Ohliger MA, Yeh EN, Price MD. Recent advances in image reconstruction, coil sensitivity calibration, and coil array design for SMASH and generalized parallel MRI. MAGMA. 2002;13(3):158–163. [PubMed]
