A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems based on a generalized system model. An optimal data calibration strategy using Singular Value Decomposition (SVD) is demonstrated, and an iterative conjugate-gradient approach is proposed to efficiently solve for the missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, ultra-high-acceleration parallel imaging can be performed with decent image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines.
Parallel magnetic resonance imaging (pMRI) techniques have been proposed and widely introduced into clinical applications over the last decade. In pMRI, spatial sensitivity encodings are provided by utilizing an array of multiple receiver surface coils. With this data redundancy, we are able to reconstruct unaliased images from k-space data sampled below the Nyquist rate with fewer k-space phase encodes. This acceleration can be used to save scan time, increase temporal or spatial resolution [1–3], or reduce imaging artifacts [4,5]. Various parallel imaging reconstruction methods have been invented. Among these, the most widely used pMRI techniques are sensitivity encoding (SENSE) and generalized autocalibrating partially parallel acquisition (GRAPPA). Other Cartesian pMRI reconstruction methods, including SMASH [8,9], AUTO-SMASH, VD-AUTO-SMASH, Generalized SMASH, mSENSE, PILS, and SPACE-RIP, have also been demonstrated and implemented in numerous applications. Recently, several non-Cartesian pMRI reconstruction techniques have been investigated as well, such as non-Cartesian SENSE (NC-SENSE) and kSPA [17,18]. All of these reconstruction algorithms can be classified into two categories based on how the sensitivity encoding is decoded from the multi-channel data: physically-based reconstruction and data-driven reconstruction. SENSE is a typical physically-based algorithm, in which all coil sensitivity maps have to be estimated explicitly because the image needs to be unfolded in the image domain based on these maps. On the other hand, GRAPPA is a data-driven method, in which the sensitivity information is only implicitly derived in k-space through data calibration, and the missing samples are fitted in k-space based on this information.
As a k-space data-driven method, GRAPPA has become the most successful pMRI technique in recent years due to its convenience, flexibility, and good performance. Compared to SENSE, the cumbersome coil sensitivity measurement is not required in GRAPPA. In addition, neither data calibration nor k-space fitting is computationally demanding, and many experimental results have shown that GRAPPA reconstruction yields relatively good image quality for low-acceleration pMRI, for example, with a reduction factor of 2 to 3. However, the performance of GRAPPA reconstruction usually degrades significantly as the reduction factor gets higher, unless a very large number of autocalibration signal (ACS) lines are acquired.
In this work, an iterative k-space data-driven pMRI reconstruction algorithm, termed Parallel Reconstruction Using Null Operations (PRUNO), is demonstrated. Based on a generalized problem formulation of k-space reconstruction, we show that PRUNO is a more flexible algorithm than GRAPPA. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems, and an iterative conjugate-gradient (CG) algorithm is proposed to solve the reconstruction equation efficiently. Both simulation and in vivo results show that PRUNO provides significantly improved image quality compared to GRAPPA while requiring fewer ACS lines, especially under high reduction factors.
In this section, only the case of 2D Cartesian pMRI is studied, for simplicity. The same theory can easily be extended to 3D reconstruction. Let the image size be Nx × Ny. Without loss of generality, it is assumed that the phase encoding (PE) is along the y direction. Due to the discreteness of data acquisition in MRI, we formulate both acquisition and reconstruction as discrete system problems. Furthermore, two major assumptions are made about the sensitivity encoding:

1. The k-space spectrum of each coil sensitivity map is band-limited, so that it can be treated as a small convolution kernel of width ws.
2. Multiple receiver coils (Nc > 1) are used, so that the multi-channel acquisition provides data redundancy.
These conditions essentially form the basis behind most k-space data-driven pMRI reconstruction methods, such as GRAPPA [7,21] and kSPA [17,18]. To simplify the following discussion, we treat Nx, Ny and ws as even numbers.
For multi-channel acquisition, the k-space data acquired from the n-th coil can be approximated as

dn(kx, ky) = sn(kx, ky) * m(kx, ky)
Here m(kx, ky) stands for the sampled true magnetization to be imaged, in k-space. dn(kx, ky) and sn(kx, ky) represent k-space samples of the acquirable data from the n-th coil and the spectrum of the sensitivity map of that coil, respectively. "*" denotes the two-dimensional convolution operator. Based on our first assumption, sn(kx, ky) is treated as a small kernel.
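As a quick numerical sanity check of this convolution model (our own illustration, not from the original work), the following Python/NumPy snippet verifies that multiplying by a smooth "sensitivity map" in the image domain is identical to circularly convolving the magnetization spectrum with the map's spectrum in k-space. The synthetic Gaussian map and all variable names are illustrative assumptions.

import numpy as np

# Check of the encoding model dn = sn * m: image-domain multiplication
# equals circular 2D convolution of the spectra in k-space.
rng = np.random.default_rng(0)
N = 64
m_img = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
yy, xx = np.mgrid[0:N, 0:N]
s_img = np.exp(-((xx - N/2)**2 + (yy - N/2)**2) / (2 * 12.0**2))  # smooth "coil" map

d_direct = np.fft.fft2(s_img * m_img)            # encode, then sample k-space
S, M = np.fft.fft2(s_img), np.fft.fft2(m_img)    # spectra of map and magnetization
d_conv = np.fft.ifft2(np.fft.fft2(S) * np.fft.fft2(M)) / N**2  # circular convolution S * M

print(np.max(np.abs(d_direct - d_conv)))         # numerically zero (~1e-10)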
Since convolution is a linear operation, the sensitivity encoding can thus be formulated as a system of linear equations,

d = Sm

where d is a vector of concatenated fully-sampled k-space data from all coils (encoded samples) and m is a column vector containing the true magnetization in k-space (unencoded samples). Hence, S is called the sensitivity encoding matrix. The primary goal of a k-space based reconstruction is to compute either m or the full representation of d, from which the final image is synthesized in the image domain. With data undersampling, all samples in d can be classified into two groups according to whether they exist at reconstruction time, i.e., missing samples and acquired samples. To separate the two groups, masking operators can be applied to Eq. :

Id = (Im + Ia)Sm
Here the subscripts "m" and "a" represent "missing" and "acquired", respectively. I denotes an identity matrix, which can be decomposed into two diagonal masking matrices, Im and Ia. Each masking matrix preserves only the samples from one category by setting the entries corresponding to the other category to 0's. We can thus split Eq.  into two separate equations: Imd = (ImS)m and Iad = (IaS)m. If the all-zero rows are eliminated from each equation, they can be formulated more compactly as

dm = Smm,  da = Sam
Now dm is the vector containing only the missing samples and da is the vector containing only the acquired ones. Sm and Sa are the corresponding row-pruned matrices from ImS and IaS, respectively.
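This splitting amounts to simple boolean masking. The toy Python sketch below (our illustration; the sizes and the uniform sampling pattern are arbitrary) builds Im and Ia explicitly and extracts dm and da:

import numpy as np

# Illustrative splitting of a concatenated k-space vector d into acquired
# and missing samples with diagonal masks, as in I = Im + Ia (toy sizes).
Nc, Ny, Nx, R = 2, 8, 8, 2                  # coils, matrix size, reduction factor
d = np.arange(Nc * Ny * Nx, dtype=complex)  # stand-in for the stacked k-space data

acquired_rows = (np.arange(Ny) % R == 0)            # uniform undersampling along ky
mask = np.broadcast_to(acquired_rows[:, None], (Ny, Nx))
mask = np.tile(mask.ravel(), Nc)                    # same pattern in every coil

Ia = np.diag(mask.astype(float))                    # keeps acquired entries
Im = np.eye(mask.size) - Ia                         # keeps missing entries
d_a = d[mask]                                       # row-pruned: acquired samples only
d_m = d[~mask]                                      # row-pruned: missing samples only
assert np.allclose(Ia @ d + Im @ d, d)              # I = Ia + Im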
With the two assumptions stated at the beginning of this section, Eq.  should be an over-determined linear system with a very sparse system matrix. If the encoding matrix Sa is known, image reconstruction can be performed directly by solving this equation for m. However, knowledge of Sa usually requires a complete estimation of all sensitivity maps. This scenario is a k-space "physically-based" reconstruction, which is essentially similar to Generalized SMASH .
To avoid measuring the sensitivity maps, with the attendant complexity and inaccuracy, we can solve for dm first and then apply a coil-by-coil reconstruction followed by a sum-of-squares (SOS) synthesis in the image domain, as in GRAPPA. Using Eq.  and , this k-space data-driven method can be explicitly formulated as

dm = Sm(SaHSa)⁻¹SaHda = Rda
We use R to denote the overall reconstruction matrix here, and the superscript H denotes the Hermitian (conjugate) transpose. In a more generalized form, Eq.  can be rewritten as

dm − Rda = 0
And if we re-permute the matrices back according to the original order of all k-space samples, we end up with

Nd = 0
Here N denotes a non-zero system matrix, each row of which nulls the k-space samples from all coils through multiplication. This equation carries one of the fundamental forms of k-space data-driven reconstruction. Generally speaking, it implies that k-space samples from all receiver channels are linearly correlated through null operations, which results from the intrinsic nature of sensitivity encoding. Once the matrix N is determined, Eq.  can be utilized to solve the pMRI reconstruction problem through a procedure that we term Parallel Reconstruction Using Null Operations (PRUNO).
In this section, we first find a more tractable form of N by directly exploring the linear dependence among local k-space samples. It will be shown that the resulting null operations can be characterized by a group of 2D convolutions with small kernels, so that N can be feasibly derived through data calibration. Finally, an iterative conjugate-gradient method is presented to efficiently solve Eq.  for dm.
As illustrated in Eq. , each encoded sample dn(kx0, ky0) is correlated with m(kx, ky) only locally in k-space, i.e., only for |kx − kx0| ≤ ws/2 and |ky − ky0| ≤ ws/2, in units of Δk, where Δk = 1/FOV and FOV refers to the imaging field of view. Therefore, if we look at a small square-shaped "target subset" of dn(kx, ky) from the n-th coil, the samples of m(kx, ky) that are encoded into these targets also come entirely from a small local "source subset", as shown in Figure 1. Apparently, this correspondence extends to every coil. If we use wd to denote the width of the target subset, the width of the source subset is wm = wd + ws − 1.
In other words, a local sensitivity encoding relationship can be formulated as

d̃ = S̃m̃
To differentiate from Eq. , we use tilde symbols to denote the small local subsets d̃ and m̃ here. S̃ is the corresponding local encoding matrix, which is Ncwd² by wm². As the size of the target subsets grows, the encoding matrix becomes a "taller" matrix, because the total number of target samples (Ncwd²) increases much faster than that of the corresponding source samples (wm²) (see Appendix A).
According to linear algebra, any "fat" matrix (rows < columns) has a nonempty null space. Therefore S̃H, the Hermitian transpose of the encoding matrix, will eventually have a non-empty null space if we increase the size of the target subsets so that Ncwd² > wm². Furthermore, at least r independent non-zero nulling kernels exist if the following inequality is satisfied (see Appendix A):

Ncwd² − (wd + ws − 1)² ≥ r
For each of these nulling kernels n1, n2, …, nr, we have S̃Hni = 0. Applying these kernels to both sides of Eq. , we get

Ñd̃ = ÑS̃m̃ = 0
Here Ñ, which is r by Ncwd², turns out to be a local nulling system matrix acting on d̃. Since sensitivity encoding is essentially shift-invariant in k-space, both S̃ and Ñ are independent of the k-space location. Therefore, we can apply the nulling kernels at all k-space locations, so that the full nulling matrix N in Eq. , which is rNxNy by NcNxNy, can be constructed row by row. On the other hand, if each ni is treated as a weighting kernel, we can conclude that multiplying each ensemble of NxNy rows of N by d is equivalent to a series of 2D filterings (inside each coil) followed by a summation (across all coils) in the 2D k-space. Furthermore, N can be fully characterized by the nulling kernels, which can be feasibly estimated through data calibration.
We call these nulling kernels PRUNO kernels, and wd is simply the kernel width used in PRUNO. Based on the first assumption and empirical approximation, ws is usually estimated to be less than 10 for typical imaging FOVs and practical coil configurations. For example, if we approximate ws as 8 and use 8 receiver channels, a minimum kernel width of 4 is required to obtain 8 nulling kernels according to Eq. . This is a feasible number for both data calibration and reconstruction in terms of computational time and the number of required ACS lines, which will be addressed later.
Once N has been determined, the image reconstruction can proceed by solving Eq.  for dm. The derivation is very similar to that of Eq. . By applying the same masking operators to Eq. , we get Nmdm + Nada = 0, or

Nmdm = −Nada
Here Nm and Na denote the two column-pruned matrices, similar to the previous notations Sm and Sa. In positive (semi-)definite system form, Eq.  becomes

NmHNmdm = −NmHNada
Essentially, this is another representation of Eq. .
As stated above, a set of local nulling kernels (columns of ÑH) needs to be found in PRUNO through data calibration. This procedure is similar to GRAPPA calibration. For each kernel, a series of linear equations can be established by sliding the kernel window within a certain k-space calibration region. The calibration is again a linear algebra problem, which can be illustrated as

C̃Hni = 0,  i = 1, 2, …, r
Here l denotes the total number of calibration locations. C̃ is the calibration matrix, each column of which is one windowed subset of calibration data, as shown in Figure 2.
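A sketch of this calibration-matrix construction in Python follows; the function name calib_matrix and the exact column ordering are our own illustrative choices, not the paper's implementation.

import numpy as np

def calib_matrix(acs: np.ndarray, wd: int) -> np.ndarray:
    """Form the calibration matrix from a fully sampled ACS block.

    acs: (Nc, ny, nx) calibration data.
    Returns C of shape (Nc*wd*wd, l): each column is one windowed
    multi-coil k-space subset (illustrative layout).
    """
    Nc, ny, nx = acs.shape
    cols = []
    for y in range(ny - wd + 1):          # slide the wd x wd window
        for x in range(nx - wd + 1):
            patch = acs[:, y:y + wd, x:x + wd]
            cols.append(patch.reshape(-1))
    return np.stack(cols, axis=1)         # (Nc*wd^2) x l

For the Figure 2 example below (wd = 5, Nc = 8), each column has length 200, matching the vector length quoted in the text.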
To ensure that Eq.  is an overdetermined system, Nm needs to be a tall matrix, i.e., rNxNy must be at least the number of missing samples, roughly NcNxNy(R − 1)/R, which gives r ≥ Nc(R − 1)/R. Although Nc kernels therefore always seem to be sufficient from this perspective, the condition of the final reconstruction system (Eq. ) strongly depends on the number of independent equations in Eq. , i.e., on the number of PRUNO kernels and the orthogonality among them. On the other hand, imposing a larger number of kernels may increase the kernel size and consequently the computational time. We call the procedure of determining the number of kernels and the size of the kernels kernel selection.
One simple kernel selection strategy is inspired by GRAPPA, in which a small number (usually Nc) of template-based kernels are used . However, the performance of this method is limited by its inflexibility and instability. An optimal kernel selection strategy is to find all generalized nulling kernels directly from Eq. , i.e., from the null space of C̃H. This can easily be achieved by performing a Singular Value Decomposition (SVD) on C̃ and extracting the null space from its eigenspace . One example is shown in Figure 2. In this case, a full kernel width of 5 was used on 8-channel simulated phantom data. Around 1000 k-space subsets (vectors) were collected to form the calibration matrix C̃, each with a vector length of 200. SVD was then applied to this matrix. The eigenvalue plot indicates that the column space has a rank much lower than 200 if reasonable thresholding is applied to the eigenvalues. Once a cut-off is determined, the nulling kernels are simply the eigenvectors corresponding to eigenvalues smaller than the threshold. In this example, around 100 kernels were obtained.
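Continuing the sketch above, the null-space extraction can be written as follows. This is again illustrative: pruno_kernels is our name, and the default 0.1% relative threshold echoes the empirical cut-off mentioned later in the Discussion.

import numpy as np

def pruno_kernels(C: np.ndarray, rel_thresh: float = 1e-3) -> np.ndarray:
    """Extract nulling kernels from the null space of C^H.

    C: (Nc*wd^2, l) calibration matrix, assuming l >= Nc*wd^2.
    Left singular vectors of C with near-zero singular values satisfy
    C^H u ~ 0, i.e., they null every windowed calibration subset.
    """
    U, svals, _ = np.linalg.svd(C)
    keep = svals < rel_thresh * svals[0]   # near-null singular directions
    return U[:, keep]                      # (Nc*wd^2) x r, one kernel per column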
Once the nulling matrix has been calculated from data calibration, the remaining objective is to solve Eq. . Considering the huge matrix size, it would usually be computationally infeasible to perform a direct matrix inversion, so an indirect solver is desired. Using the representation of the non-pruned matrices, the system equation can be rewritten as

(ImNHNIm)d = −(ImNHNIa)d
Since each forward matrix-vector multiplication in Eq.  is either simply a masking operation or composed of 2D convolutions and summations, neither of which is computationally demanding, a feasible iterative algorithm can be implemented to solve this system. A conjugate-gradient method, in which the matrix-vector multiplications are performed step by step during reconstruction, has been developed and demonstrated in . This is very similar to NC-SENSE reconstruction .
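The sketch below shows one way to realize this operator and feed it to an off-the-shelf CG solver. It is a minimal illustration under our own conventions, not the paper's implementation: kernels are stored as an (r, Nc, wd, wd) array, the convolutions are implemented circularly via FFTs (the paper's exact boundary handling is not specified), and the acquired/ACS samples flagged in acq_mask are held fixed.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def make_pruno_system(kernels, shape, acq_mask):
    """Set up A x = b for (Im N^H N Im) d = -(Im N^H N Ia) d.

    kernels:  (r, Nc, wd, wd) nulling kernels, one 2D kernel per coil.
    shape:    (Nc, Ny, Nx) of the multi-coil k-space.
    acq_mask: boolean (Ny, Nx), True where samples were acquired.
    """
    r, Nc, wd, _ = kernels.shape
    _, Ny, Nx = shape
    K = np.zeros((r, Nc, Ny, Nx), complex)
    K[:, :, :wd, :wd] = kernels            # zero-pad kernels to the full grid
    K = np.fft.fft2(K)                     # convolution becomes pointwise product

    def NHN(x):                            # x: (Nc, Ny, Nx) k-space data
        X = np.fft.fft2(x)
        y = np.einsum('rcyx,cyx->ryx', K, X)                           # N x
        return np.fft.ifft2(np.einsum('rcyx,ryx->cyx', K.conj(), y))  # N^H(N x)

    def matvec(v):                         # A: restriction of N^H N to missing entries
        x = np.zeros(shape, complex)
        x[:, ~acq_mask] = v.reshape(Nc, -1)
        return NHN(x)[:, ~acq_mask].ravel()

    def rhs(d_acq):                        # b = -(Im N^H N Ia) d
        x = np.zeros(shape, complex)
        x[:, acq_mask] = d_acq.reshape(Nc, -1)
        return -NHN(x)[:, ~acq_mask].ravel()

    n = Nc * int((~acq_mask).sum())
    return LinearOperator((n, n), matvec=matvec, dtype=complex), rhs

# Usage: A, rhs = make_pruno_system(kernels, (Nc, Ny, Nx), acq_mask)
#        d_missing, info = cg(A, rhs(d_acquired), maxiter=300)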
According to the construction of the nulling matrix, N can be decomposed into r × Nc blocks,

N = [Nij],  i = 1, 2, …, r;  j = 1, 2, …, Nc
Here each Nij is an NxNy × NxNy matrix, corresponding to the 2D convolution of the i-th null operation performed on the j-th coil. Eq.  illustrates that rNc convolutions are involved in each ensemble of null operations Nx, where the vector x refers to the k-space data on which the null operations are performed. On the other hand, it is easy to show that each ensemble of conjugate null operations NH(Nx) also requires rNc convolutions. Therefore, a total of 2rNc 2D convolutions are required during each CG loop when applying step-by-step matrix-vector multiplications , and these constitute the major portion of the reconstruction time. In this case, the reconstruction time is nearly proportional to r, which prohibits us from using a large number of kernels in practice, for example, more than 100 kernels.
Immediately from the decomposition of N (Eq. ), we have

(NHN)jk = Σi NijHNik,  j, k = 1, 2, …, Nc
This equation shows that one "composite" null operation NHNx is actually equivalent to Nc² 2D convolutions, independent of the number of kernels (r). We can thus combine the sequential null operations Nx and their conjugate operations NH(Nx) into single-step convolutions, i.e., calculate (NHN)x directly. In this case, the representation of a local composite convolution kernel can be directly derived as:

ηjk = Σi n*ij * nik
Here n and n* denote a local 2D convolution kernel and its conjugate, respectively, and η is a composite kernel corresponding to one block of (NHN). Because these convolutions are computed among very small kernels and are performed only once, prior to the CG iterations, their computational load is nearly negligible. Although the width of a composite kernel (2wd − 1) is larger than that of a nulling kernel, the number of convolutions involved in each CG step is reduced to Nc² if these composite kernels are used. Therefore, it is favorable to use composite kernels in PRUNO reconstruction, because the overall computational time becomes almost independent of the number of nulling kernels. Such a modified reconstruction algorithm is shown in Figure 3. Details of the CG algorithm can be found in  and .
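A hedged sketch of this precomputation follows, using the same illustrative (r, Nc, wd, wd) kernel layout as before; taking the adjoint kernel as the flipped conjugate is one common convention and is our assumption here.

import numpy as np
from scipy.signal import convolve2d

def composite_kernels(kernels: np.ndarray) -> np.ndarray:
    """Precompute the Nc x Nc composite PRUNO kernels.

    kernels: (r, Nc, wd, wd). Returns eta of shape (Nc, Nc, 2*wd-1, 2*wd-1),
    with eta[j, k] = sum_i conj(flip(n_ij)) conv n_ik, so that applying
    (N^H N) needs only Nc^2 convolutions per CG step instead of 2*r*Nc.
    """
    r, Nc, wd, _ = kernels.shape
    eta = np.zeros((Nc, Nc, 2 * wd - 1, 2 * wd - 1), complex)
    for j in range(Nc):
        adj = np.conj(kernels[:, j, ::-1, ::-1])   # adjoint (flipped conjugate) kernels
        for k in range(Nc):
            for i in range(r):
                eta[j, k] += convolve2d(adj[i], kernels[i, k])  # 'full': (2wd-1)^2
    return eta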
The procedure of an optimized PRUNO reconstruction is summarized below:

1. Acquire the undersampled k-space data together with a small number of ACS lines.
2. Form the calibration matrix from the ACS region and extract the nulling kernels from its null space via SVD.
3. Precompute the Nc × Nc composite kernels from the nulling kernels.
4. Generate an initial guess of the missing samples, e.g., from a fast GRAPPA reconstruction.
5. Solve Eq.  with the CG algorithm, applying (NHN) through composite-kernel convolutions, until the stopping criterion is met.
6. Apply an inverse Fourier transform to each coil's completed k-space data and synthesize the final image by sum-of-squares.
For comparison, PRUNO and GRAPPA reconstructions were applied to Cartesian pMRI in both computer-simulated phantom and in vivo studies. A wide range of reduction factors was tested with an 8-channel coil array. Both data simulation and image reconstruction were implemented in MATLAB 7.7 (The MathWorks, Inc., Natick, MA, USA) on a Linux PC with a 3.20 GHz Intel Pentium 4 CPU and 4 GB RAM.
A 256×256 Shepp-Logan image was used for the simulation. Pre-measured sensitivity maps from an eight-channel receiving coil were used for the sensitivity encodings [6,24–26]. Full k-space data for each coil were generated by applying a fast Fourier transform to the encoded coil image, which was produced by multiplying the original image by the corresponding sensitivity map. Complex white Gaussian noise was added to the phantom image during the simulation, corresponding to a signal-to-noise ratio of 25. Uniform data undersampling was then performed while preserving a small number of ACS lines near the k-space center. The reduction factors were varied from 2 to 7 in these experiments. Two calibration data blocks (Nb = 2), i.e., 2(R − 1) ACS lines, were used for reduction factors of 2 to 4, and three blocks were used for R greater than 4. A calibration block refers to one otherwise-unsampled slab in k-space, as shown in Figure 4. The exact number of ACS lines and the effective reduction factor of each study are listed in Table 1. The same k-space data were used for both GRAPPA and PRUNO calibration and reconstruction, with all ACS lines preserved during reconstruction.
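The following Python sketch generates data in the same spirit (our illustration, not the authors' MATLAB code): synthetic Gaussian coil maps stand in for the pre-measured sensitivity maps, and the SNR convention (peak signal over noise standard deviation) is our assumption.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import resize

rng = np.random.default_rng(42)
N, Nc, R, snr = 256, 8, 4, 25.0
img = resize(shepp_logan_phantom(), (N, N)).astype(complex)

yy, xx = np.mgrid[0:N, 0:N]
centers = [(N * (0.5 + 0.45 * np.cos(a)), N * (0.5 + 0.45 * np.sin(a)))
           for a in 2 * np.pi * np.arange(Nc) / Nc]   # ring of 8 virtual coils
sens = np.stack([np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * (0.4 * N)**2))
                 for cy, cx in centers])

coil_imgs = sens * img                                # sensitivity encoding
noise_sd = np.abs(coil_imgs).max() / snr              # assumed SNR definition
coil_imgs = coil_imgs + noise_sd * (rng.standard_normal(coil_imgs.shape)
                                    + 1j * rng.standard_normal(coil_imgs.shape))
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(coil_imgs, axes=(-2, -1))),
                         axes=(-2, -1))

mask = np.zeros(N, bool)
mask[::R] = True                       # uniform undersampling along ky
mask[N//2 - 3: N//2 + 3] = True        # 6 central ACS lines = 2(R-1) for R = 4
undersampled = kspace * mask[None, :, None]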
In vivo brain images of a healthy volunteer were acquired on a 3T whole-body scanner (GE MR750; GE Healthcare, Waukesha, WI, USA) with an array of 8 receiver coils. A 2D FSE T2-FLAIR sequence was used to obtain axial images with the following scan parameters: FOV = 24 cm, TE/TR = 50 ms/2 s, rBW (receiver bandwidth) = 15.63 kHz, slice thickness = 5 mm, and matrix size = 256×256. Full Cartesian k-space samples were acquired from the scanner without undersampling. The data were then uniformly undersampled offline along the phase encoding direction, using reduction factors ranging from 2 to 6. In each experiment, the same strategy as in the phantom simulation was used to keep the necessary ACS lines.
Numerous parameter configurations were first tested for both reconstruction algorithms, including the kernel width of GRAPPA, the kernel selection (size and number) of PRUNO, and the initial guess and stopping criteria of the CG algorithm. The stopping criterion is defined as the condition that either the relative residual RMS error falls below an error bound or a predefined maximum number of iterations is reached. In both GRAPPA and PRUNO, the ACS blocks are always preserved during the reconstruction. The effective reduction factors (Reff) of all experiments were calculated and are shown in Table 1.
For GRAPPA, we evaluated various kernel sizes, including 2×3, 2×5, 4×3, and 4×5, for each experiment. The kernel size mentioned here is defined on acquired samples only, as shown in Figure 4. In each case, the best GRAPPA image was selected according to its minimum RMS reconstruction error. A similar procedure was applied to the PRUNO kernel selection. The optimal PRUNO kernel selection strategies varied among the different studies; detailed optimal GRAPPA and PRUNO kernel configurations for each study are listed in Table 1. The LSQR algorithm was used for each data calibration. No other regularization was applied during the subsequent image reconstruction, so that we could compare the intrinsic reconstruction power of GRAPPA and PRUNO.
In each experiment, a GRAPPA reconstruction was performed first. The output k-space data of GRAPPA were used as the initial guess for the subsequent PRUNO reconstruction, because this usually yields fast algorithm convergence. For all phantom experiments, the CG error bound was set to 0.01% and the maximum number of iterations was 200; for all in vivo experiments, the error bound was relaxed to 0.1% and the maximum number of iterations was 300.
Figure 5 and Figure 6 compare the PRUNO and GRAPPA reconstructions of the phantom simulations and in vivo experiments, respectively. Each reconstructed image was also compared with a reference image reconstructed from full k-space sampling, and the image difference (×10) was used to evaluate reconstruction quality. The actual reconstruction time of each study is listed in Table 1.
Figure 5 shows that the reconstruction error of PRUNO is consistently smaller than that of GRAPPA, which indicates that the CG algorithm, utilizing the equations established from null operations, derives a solution closer to the actual missing data. GRAPPA performs very similarly to PRUNO at R = 2, with good image quality. As the reduction factor increases, the reconstruction quality degrades for both algorithms. However, the performance of GRAPPA drops much more rapidly than that of PRUNO. Obvious residual aliasing artifacts appear in the GRAPPA images as R exceeds 3. On the contrary, PRUNO is capable of producing much better images with the same data availability: nearly no visible artifacts are observed on PRUNO images for high-acceleration cases up to R = 5. Even when the reduction factor is pushed to 6, the PRUNO reconstruction still yields reasonably good image quality. However, as R grows even further, the image aliasing can hardly be eliminated, because the PRUNO reconstruction tends to become a severely ill-posed problem. Furthermore, it can be seen that the reconstruction errors and noise are distributed more uniformly in PRUNO.
Very similar phenomena are observed in the in vivo studies, as demonstrated in Figure 6. GRAPPA yields performance comparable to PRUNO at R = 2. Small visible aliasing artifacts are present in the GRAPPA image at R = 3. At reduction factors higher than 3, the image quality of the GRAPPA reconstruction is not acceptable given the number of ACS lines we have. On the other hand, PRUNO works robustly as long as the reduction factor is no greater than 5. At R = 6, some high-frequency reconstruction errors can be seen in PRUNO, due to the poor condition of the system in Eq. .
We have shown that PRUNO is an iterative k-space data-driven parallel imaging reconstruction algorithm that is more robust than GRAPPA, especially for ultra-high-acceleration MRI. With its generalized formulation and precise mathematical model, PRUNO offers excellent flexibility and accuracy. Our experimental results have shown that PRUNO performs stably under various influencing factors such as high reduction factors and low SNR. In addition, the PRUNO algorithm usually requires fewer ACS lines to achieve acceptable reconstruction quality, which can help further reduce scan time.
The width and content of the kernels have an important impact on the system condition, according to Eq. . A good ensemble of null operations is expected to capture sufficient independent linear equations relating missing samples to acquired samples. In general, larger kernels are always favorable because more samples can be correlated in each null operation. However, using larger kernels demands more ACS lines and a longer reconstruction time. On the other hand, the PRUNO kernel width cannot be too small either, as addressed in Eq. . Our empirical choice of kernel width is 5 to 7 for typical imaging protocols, which has been tested with the 8-channel coils at our site. At small reduction factors such as 2 or 3, a width of 5 should be used to lower the ACS requirement, without clearly degrading the performance of the algorithm. However, larger kernels are usually necessary as R gets larger, which ensures that missing samples and acquired samples are coupled with each other over a long distance (along ky).
There is another tradeoff between the number of kernels (or system condition) and the accuracy of the null operations resulting from these kernels. Fewer PRUNO kernels ensure a stricter null space and thus produce smaller residual errors. On the contrary, a larger number of nulling kernels forms more equations in Eq. , which improves the condition of the linear system and speeds up the convergence of CG. Choosing a moderate number of kernels can thus optimize the performance of PRUNO reconstruction. In practice, however, the quality of the final reconstruction is not very sensitive to the number of kernels if it is chosen within a reasonable range. This can be seen from Figure 7: in this R = 4 example, we may choose the number of kernels from a wide range without significantly affecting the reconstruction quality, but more kernels are usually preferred to speed up the reconstruction. To obtain a better convergence rate, the number of PRUNO kernels was determined empirically in our experiments, based on the eigenvalues of the calibration data. A threshold roughly corresponding to 0.1% of the maximum eigenvalue was used in each case to separate the null space and the span space. Alternatively, the necessary number of kernels may also be estimated if we have a reasonable estimate of the maximum bandwidth of the sensitivity maps (ws), as discussed in Appendix A, such that

r ≤ Ncwd² − (wd + ws − 1)²
In the phantom experiments, ws is approximated to be 6. In this case, Eq.  returns r ≤ 100 if the kernel width is 5, and r ≤ 248 if the kernel width is 7. For the in vivo experiments, a slightly larger ws is assumed to improve the accuracy of data calibration: with ws = 8, Eq.  returns r ≤ 56 when wd = 5 and r ≤ 196 when wd = 7. All these numbers are close to our actual selections, as shown in Table 1.
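These bounds are straightforward to reproduce; a short check in Python:

# Kernel-count bound r <= Nc*wd^2 - (wd + ws - 1)^2, with the values quoted above.
Nc = 8
for ws, wd in [(6, 5), (6, 7), (8, 5), (8, 7)]:
    print(f"ws={ws}, wd={wd}: r <= {Nc * wd**2 - (wd + ws - 1)**2}")
# -> 100, 248, 56, 196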
In these experiments, a PRUNO reconstruction usually takes tens to hundreds of CG steps to converge under our stopping criteria. The computational time of a single CG loop varies from 1 to 3 seconds in the actual implementations, depending on the image size, the number of coils, and the width of the PRUNO kernels. Therefore, a typical PRUNO reconstruction takes several minutes. The actual convergence speed of the CG algorithm is determined by several factors, most of which have been addressed already, including the reduction factor, the exact sampling pattern, the number and quality of the PRUNO kernels, the initial guess of the missing data, and the image noise level.
There are a few options for generating the initial guess. The simplest strategy is to set the values of all missing samples to 0's, which requires no extra effort. Another straightforward strategy is to perform some interpolation at the missing k-space locations. More accurate estimates can be made as well, such as the approach used in our demonstrated experiments: a GRAPPA reconstruction is performed first, and the fitted k-space data are used as the initial guess for the PRUNO reconstruction. According to our evaluations, the choice of the initial guess does not necessarily have a significant impact on the quality of the final image in most cases. However, a good starting guess does help improve the convergence speed of the algorithm, as can be clearly observed from the top two rows of Figure 8. It is thus worthwhile to generate the initial guess through a rough GRAPPA reconstruction, since the computational time of GRAPPA is much shorter than that of PRUNO.
As another k-space coil-by-coil reconstruction algorithm, GRAPPA also yields a formulation of the form of Eq. . In GRAPPA, the reconstruction matrix R itself is assumed to be sparse and to have a shift-invariant pattern, so that it can be directly approximated from data calibration. In this case, each missing k-space sample is fitted from multiple neighboring acquired samples, which is equivalent to applying a template-based PRUNO kernel. Therefore, GRAPPA requires that a nulling kernel exist for each particular template. For small reduction factors, this fitting model may be a good approximation, since the target sample and the fitting samples are close to each other and there is thus strong linear correlation between them. However, as the reduction factor grows, the distance between the target sample and the fitting samples increases as well. Consequently, the correlation between these samples becomes weaker, and the template-based nulling kernel may no longer be applicable. To obtain an accurate fitting kernel, many more fitting samples must be involved (larger kernels), which requires a larger number of ACS lines. In addition, the distances between the fitting samples and the target differ significantly from each other under high acceleration rates. Because k-space data decay exponentially with distance, the data calibration in GRAPPA tends to become a highly ill-posed problem. These factors make GRAPPA ineffective for ultra-high-acceleration pMRI.
On the contrary, PRUNO always uses a local convolution kernel regardless of the reduction factor. With a proper kernel size, each nulling kernel in PRUNO yields better fitting accuracy due to the greater number of involved samples and better locality. With iterative reconstruction, each missing sample will eventually be fitted by using all available k-space data. This explains why PRUNO always performs better than GRAPPA, especially under high reduction rates.
However, unlike in GRAPPA, ACS blocks should always be included in the PRUNO reconstruction. The existence of a continuous ACS band in CG not only helps increase the SNR of the reconstruction but also significantly improves the condition of the entire linear system. Otherwise, the original undersampling pattern may easily turn Eq.  into a seriously ill-posed system. In other words, the ACS samples essentially act as strong and important constraints when solving this linear system with a least-squares approach. This effect can be clearly seen by comparing the second and third rows of Figure 8.
PRUNO does not require any regularization during either data calibration or image reconstruction. However, proper regularization during the CG loops might further improve the performance of the algorithm, by either improving the image quality or achieving a faster convergence rate. For instance, the noise-like high-frequency reconstruction errors observed in the R = 6 case of the in vivo study (Figure 6) would be expected to be suppressed if the CG algorithm were combined with an image-domain smoothness constraint, such as total variation (TV) regularization . Hence, an even higher reduction factor might be achieved with regularized PRUNO reconstruction. This is beyond the scope of this work and will be investigated in our future research.
Only 1D-PRUNO has been discussed in this work. However, there is no restriction on the undersampling pattern in the formulation of PRUNO, which suggests that PRUNO reconstruction could also handle any 2D undersampling problem. Although issues such as the system condition of 2D-PRUNO and the optimal 2D sampling scheme still need to be investigated, we believe PRUNO will offer many opportunities in ultra-fast 3D parallel imaging as well.
In this work, an iterative parallel imaging reconstruction algorithm termed PRUNO has been demonstrated, which is based on a generalized formulation of k-space data-driven pMRI reconstruction. Both simulation and in vivo studies have shown that PRUNO offers better image quality than GRAPPA with a very light ACS requirement, especially under high acceleration rates. Data calibration and image reconstruction in PRUNO are both tractable linear algebra problems and hence easy to implement. With its robustness and flexibility, PRUNO is expected to be applied to various clinical applications to offer improved pMRI reconstruction.
The authors thank Dr. Jing Chen for proofreading the manuscript.
Grant sponsors: National Institutes of Health, grant numbers R00EB007182, R01NS04760705, and NCRR P41 RR09784; Center of Advanced MR Technology of Stanford; Lucas Foundation.
If we use Ld and Lm to denote the vector lengths of d̃ and m̃ in Eq. , respectively, we have

Ld = Ncwd²,  Lm = wm² = (wd + ws − 1)²
Apparently, if multiple receiver coils are used, i.e., Nc > 1, Ld increases more rapidly than Lm as wd gets larger. In other words, we can make Ld > Lm if wd is sufficiently large. In this case, S̃H (Lm × Ld) is a flat matrix and has a non-empty null space. If we denote by r0 the dimension of its null space, then according to the rank-nullity theorem we have

r0 = Ld − rank(S̃H) ≥ Ld − Lm
Therefore, the existence of r independent non-zero nulling kernels in Eq.  requires that

r ≤ Ld − Lm = Ncwd² − (wd + ws − 1)²
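As a numerical illustration (a random matrix with toy dimensions, purely our own construction), the nullity bound can be verified directly:

import numpy as np

# Nullity bound r0 >= Ld - Lm for a random "flat" matrix with the same
# shape as the Hermitian-transposed local encoding matrix (Nc=8, wd=5, ws=6).
rng = np.random.default_rng(1)
Nc, wd, ws = 8, 5, 6
Lm, Ld = (wd + ws - 1)**2, Nc * wd**2            # 100 and 200
SH = rng.standard_normal((Lm, Ld)) + 1j * rng.standard_normal((Lm, Ld))
nullity = Ld - np.linalg.matrix_rank(SH)
print(nullity, ">=", Ld - Lm)                    # 100 >= 100 for a generic matrix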