

Magn Reson Med. Author manuscript; available in PMC 2012 March 20.

Published in final edited form as:

Published online 2010 October 28. doi: 10.1002/mrm.22655

PMCID: PMC3042509

NIHMSID: NIHMS247734

Ali Tabesh,^{a,}^{2} Jens H. Jensen,^{a,}^{b} Babak A. Ardekani,^{c,}^{d} and Joseph A. Helpern^{a,}^{b,}^{c,}^{d}


This paper presents two related advancements to the diffusional kurtosis imaging (DKI) estimation framework to increase its robustness to noise, motion, and imaging artifacts. The first advancement substantially improves the estimation of diffusion and kurtosis tensors parameterizing the DKI model. Rather than utilizing conventional unconstrained least squares (LS) methods, the tensor estimation problem is formulated as linearly constrained linear LS, where the constraints ensure physically and/or biologically plausible tensor estimates. The exact solution to the constrained problem is found via convex quadratic programming methods or, alternatively, an approximate solution is determined through a fast heuristic algorithm. The computationally more demanding quadratic programming-based method is more flexible, allowing for an arbitrary number of diffusion weightings and different gradient sets for each diffusion weighting. The heuristic algorithm is suitable for real-time settings such as on clinical scanners, where run time is crucial. The advantage offered by the proposed constrained algorithms is demonstrated using *in vivo* human brain images. The proposed constrained methods allow for shorter scan times and/or higher spatial resolution for a given fidelity of the DKI parametric maps. The second advancement increases the efficiency and accuracy of the estimation of mean and radial kurtoses by applying exact closed-form formulae.

Diffusion of water molecules in biological tissues is conventionally quantified via diffusion tensor imaging (DTI) (1). DTI is a valuable tool for noninvasive characterization of tissue microstructural properties (2–3). The DTI model uses a Gaussian approximation to the probability distribution governing the random displacement of water molecules. In many biological tissues, however, the displacement probability distribution can deviate considerably from a Gaussian form. Several techniques have been proposed to characterize non-Gaussian diffusion, with diffusion spectrum imaging (DSI) (4) being arguably the most comprehensive. Although DSI is used in human research, its application as a routine clinical protocol has been limited by its extended acquisition time and hardware requirements.

Diffusional kurtosis imaging (DKI) is a clinically feasible extension of DTI which enables the characterization of non-Gaussian diffusion by estimating the kurtosis of the displacement distribution (5–8), in addition to the estimation of the standard DTI-derived parameters. DKI has shown promising results in studies of human brain aging (9), tumor characterization in gliomas (10) and head and neck cancers (11), and rodent brain maturation (12). Moreover, the additional information provided by DKI has been exploited to resolve intravoxel fiber crossings (13), which could help to improve upon DTI-based fiber tracking methods.

The DKI model is parameterized by the diffusion tensor (DT) and kurtosis tensor (KT) from which several rotationally-invariant scalar measures are extracted. The most common DT-derived measures are mean, axial, and radial diffusivity (2,14), as well as fractional anisotropy (FA) (2); and the KT-derived measures are axial, radial, and mean kurtoses (6–7,15). The interpretability of these metrics is influenced by the estimation accuracy of the tensors. Noise, motion, and imaging artifacts can introduce errors into the estimated tensors. Sufficiently large errors can cause the tensor estimates to be physically and/or biologically implausible. For instance, the directional diffusivities may become negative, that is, the DT may become non-positive definite (NPD). A well-known consequence of NPD DT estimates is that FA values, which in theory should range between 0 and 1, may exceed 1, particularly in high FA regions of the brain such as the corpus callosum (16). Inaccuracies in the estimated tensors may also drive the directional kurtoses outside of an acceptable range. Empirical evidence in the brain as well as idealized multi-compartment diffusion models suggest that directional kurtoses should typically be positive and should not exceed a certain level depending on tissue complexity (6). The maximum allowable kurtosis is also influenced by the maximum *b*-value used in image acquisition (8,13).

In our previous work, the DT and KT were estimated using unconstrained nonlinear least squares (UNLS) (7). In a related work, higher-order DTs were estimated using unconstrained linear least squares (ULLS) (17). These unconstrained schemes do not guarantee acceptable tensor estimates. To address this drawback in the context of DTI, the Cholesky decomposition has been utilized to impose the non-negative definiteness constraint on the DT (16,18) using either the ULLS or UNLS algorithms. Moreover, a parameterization has been proposed in a UNLS framework to guarantee a positive diffusivity function in a fourth-order tensor-only model of the diffusion signal (19). While these methods outperform other algorithms at imposing a positive diffusivity function for second-order-only and fourth-order-only diffusion signal models, their generalization to the constraints of the DKI signal model is not straightforward.

We propose that the tensor estimation problem be cast as linear least squares (LS) subject to linear constraints. The constraints ensure that the directional diffusivities and kurtoses along the imaged gradient directions remain within a physically and biologically plausible range. The proposed constrained linear LS (CLLS) formulation yields a convex objective function that permits efficient solutions via convex quadratic programming or a fast heuristic algorithm. The two algorithms proposed for solving the CLLS problem strike different tradeoffs between the speed and exactness of the solution as well as algorithm flexibility. The quadratic programming-based (CLLS-QP) algorithm exactly satisfies the constraints and can also handle an arbitrary number of diffusion weightings and different gradient sets for each diffusion weighting, but it does so at a moderately high computational cost. In contrast, the heuristic (CLLS-H) algorithm produces an approximation to the optimal solution, but it accomplishes this at almost no computational overhead compared to the ULLS. Both constrained algorithms substantially improve the estimation fidelity of the tensors.

Once the tensors are estimated, they are utilized to determine the scalar diffusion and kurtosis measures. In our previous work, mean kurtosis (MK) was approximated as the average of directional kurtoses (5–7). Here, we present an exact analytical formula for the MK, expressing it as an explicit function of the DT and KT. We also present an exact analytical expression for the radial kurtosis, defining it as the diffusional kurtosis averaged over all directions perpendicular to the DT eigenvector with the largest eigenvalue. Our definition is consistent with the definition of radial diffusivity in DTI and the definitions given in (8,20), but is different from that of (15). The analytic formulae for the MK and radial kurtosis were previously quoted in (8), but here for the first time we present their derivations and discuss important practical details of their implementation. The application of these formulae improves both the accuracy and efficiency of DKI parameter estimation.

For direction vector **n** and diffusion weighting *b*, the DKI approximation to diffusion signal intensity *S*(**n**,*b*) is given by (6)

$$\text{ln}[\frac{S(\mathbf{n},b)}{{S}_{0}}]=-b{\displaystyle \sum _{i=1}^{3}{\displaystyle \sum _{j=1}^{3}{n}_{i}{n}_{j}{D}_{ij}+\frac{1}{6}{b}^{2}{\overline{D}}^{2}{\displaystyle \sum _{i=1}^{3}{\displaystyle \sum _{j=1}^{3}{\displaystyle \sum _{k=1}^{3}{\displaystyle \sum _{l=1}^{3}{n}_{i}{n}_{j}{n}_{k}{n}_{l}{W}_{ijkl},}}}}}}$$

[1]

where *S*_{0} is the signal intensity for *b* = 0, *D*_{ij} and *W*_{ijkl} are the elements of the DT (**D**) and KT (**W**), respectively, and $\overline{D}$ is the mean diffusivity. The directional diffusivity and kurtosis along direction **n** are given by

$$D(\mathbf{n})={\displaystyle \sum _{i=1}^{3}{\displaystyle \sum _{j=1}^{3}{n}_{i}{n}_{j}{D}_{ij},}}$$

[2]

and

$$K(\mathbf{n})=\frac{{\overline{D}}^{2}}{D{(\mathbf{n})}^{2}}{\displaystyle \sum _{i=1}^{3}{\displaystyle \sum _{j=1}^{3}{\displaystyle \sum _{k=1}^{3}{\displaystyle \sum _{l=1}^{3}{n}_{i}{n}_{j}{n}_{k}{n}_{l}{W}_{ijkl}.}}}}$$

[3]

Both **D** and **W** are symmetric, that is, they are invariant to permutations of their indices. Consequently, **D** has 6 and **W** has 15 independent parameters.
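Concretely, Eqs. [2] and [3] are direct tensor contractions. The sketch below (Python/NumPy; not part of the original work, and the numerical values are illustrative only) evaluates *D*(**n**) and *K*(**n**) from full 3×3 and 3×3×3×3 symmetric arrays:

```python
import numpy as np

def directional_diffusivity(D, n):
    """D(n) = sum_ij n_i n_j D_ij (Eq. [2])."""
    return float(np.einsum('i,j,ij->', n, n, D))

def directional_kurtosis(D, W, n):
    """K(n) = (Dbar^2 / D(n)^2) sum_ijkl n_i n_j n_k n_l W_ijkl (Eq. [3])."""
    Dbar = np.trace(D) / 3.0                 # mean diffusivity
    Dn = directional_diffusivity(D, n)
    Wn = float(np.einsum('i,j,k,l,ijkl->', n, n, n, n, W))
    return (Dbar ** 2 / Dn ** 2) * Wn

# Example: along the x-axis only D_11 and W_1111 survive the contractions.
D = np.diag([1.7e-3, 0.4e-3, 0.3e-3])        # illustrative diffusivities, mm^2/s
W = np.zeros((3, 3, 3, 3))
W[0, 0, 0, 0] = 1.2                          # illustrative KT element
n = np.array([1.0, 0.0, 0.0])
```

Along a coordinate axis the contractions reduce to single tensor elements, which makes a convenient sanity check.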

To ensure that the diffusivity and kurtosis parameters are physically and biologically plausible, *D*(**n**) and *K*(**n**) must satisfy

$$D(\mathbf{n})\ge 0,$$

[4]

and

$${K}_{\text{max}}(\mathbf{n})\ge K(\mathbf{n})\ge {K}_{\text{min}},$$

[5]

and

$${K}_{\text{max}}(\mathbf{n})=\frac{C}{{b}_{\text{max}}D(\mathbf{n})},\text{\hspace{1em}}3\ge C\ge 0,$$

[6]

where **n** ∈ **N**, with **N** denoting the set of gradient directions used in acquisition. The constraints in Eq. [4] ensure that **D** satisfies the necessary conditions for non-negative definiteness. The constraints in Eq. [5] arise from the fact that biologically relevant tissue geometries impose restrictions on physically acceptable directional kurtoses (6). While the theoretically feasible minimum kurtosis of a probability distribution is −2, multi-compartment diffusion models and empirical evidence in the brain suggest a super-Gaussian displacement distribution; that is, *K*_{min} = 0 (6). Although scenarios are conceivable under which *S*(**n**,*b*) can be an increasing function of *b* (22), this has never been observed in biological tissues. Thus, it is reasonable to require that the estimated *S*(**n**,*b*) be strictly decreasing functions of *b* for all **n** ∈ **N**. The maximum *K*(**n**) in Eq. [6] with *C* = 3 ensures that this condition is satisfied in the range of *b*-values used for data acquisition (13). Smaller *C* values may be used to further constrain the directional kurtoses.
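A per-direction feasibility check corresponding to Eqs. [4]–[6] can be written in a few lines. This sketch (not from the original work; the defaults *K*_{min} = 0 and *C* = 3 follow the typical choices discussed above) tests one (D(**n**), K(**n**)) pair:

```python
import numpy as np

def check_constraints(Dn, Kn, b_max, K_min=0.0, C=3.0):
    """Return True if (D(n), K(n)) satisfies Eqs. [4]-[6]:
    D(n) >= 0 and K_min <= K(n) <= K_max(n) = C / (b_max * D(n))."""
    if Dn < 0.0:
        return False                          # Eq. [4] violated
    K_max = C / (b_max * Dn) if Dn > 0.0 else np.inf
    return K_min <= Kn <= K_max               # Eq. [5] with Eq. [6]
```

For example, with *b*_{max} = 2000 s/mm² and *D*(**n**) = 10⁻³ mm²/s, the upper bound is *K*_{max} = 1.5.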

The objective in the estimation problem is to find **D** and **W** such that the right-hand side of Eq. [1] closely matches its left-hand side, while the constraints in Eqs. [4] and [5] are satisfied. The estimation problem may therefore be formulated as

$$\begin{array}{c}\text{Minimize}\phantom{\rule{thinmathspace}{0ex}}{\Vert \mathbf{A}\mathbf{X}-\mathbf{B}\Vert}^{2}\\ \text{such that}\phantom{\rule{thinmathspace}{0ex}}\mathbf{C}\mathbf{X}\le \mathbf{d},\end{array}$$

[7]

where

$$\mathbf{X}={[{D}_{11}\phantom{\rule{thinmathspace}{0ex}}{D}_{22}\phantom{\rule{thinmathspace}{0ex}}{D}_{33}\phantom{\rule{thinmathspace}{0ex}}{D}_{12}\phantom{\rule{thinmathspace}{0ex}}{D}_{13}\phantom{\rule{thinmathspace}{0ex}}{D}_{23}\phantom{\rule{thinmathspace}{0ex}}{V}_{1111}\phantom{\rule{thinmathspace}{0ex}}\dots \phantom{\rule{thinmathspace}{0ex}}{V}_{1112}\phantom{\rule{thinmathspace}{0ex}}\dots \phantom{\rule{thinmathspace}{0ex}}{V}_{1122}\phantom{\rule{thinmathspace}{0ex}}\dots \phantom{\rule{thinmathspace}{0ex}}{V}_{1233}]}^{T}$$

[8]

is a 21×1 vector of unknowns, and

$${V}_{ijkl}={\overline{D}}^{2}{W}_{ijkl}.$$

[9]

The change of variables in Eq. [9] is made to permit a *linear* LS formulation. Note that **V** is reminiscent of the fourth-order tensor in the formulation of (17).

Matrix **A** is given by

$$\mathbf{A}=\left[\begin{array}{cc}-{b}_{1}{\mathbf{A}}_{D}^{(1)}& \frac{1}{6}{b}_{1}^{2}{\mathbf{A}}_{K}^{(1)}\\ \vdots & \vdots \\ -{b}_{M}{\mathbf{A}}_{D}^{(M)}& \frac{1}{6}{b}_{M}^{2}{\mathbf{A}}_{K}^{(M)}\end{array}\right],$$

[10]

where *M* is the number of nonzero *b*-values, and the *k*th rows of ${\mathbf{A}}_{D}^{(m)}$ and ${\mathbf{A}}_{K}^{(m)}$ are given by

$$\begin{array}{c}{[{\mathbf{A}}_{D}^{(m)}]}_{k}=[{({n}_{k1}^{(m)})}^{2}\phantom{\rule{thinmathspace}{0ex}}{({n}_{k2}^{(m)})}^{2}\phantom{\rule{thinmathspace}{0ex}}{({n}_{k3}^{(m)})}^{2}\phantom{\rule{thinmathspace}{0ex}}2{n}_{k1}^{(m)}{n}_{k2}^{(m)}\phantom{\rule{thinmathspace}{0ex}}2{n}_{k1}^{(m)}{n}_{k3}^{(m)}\phantom{\rule{thinmathspace}{0ex}}2{n}_{k2}^{(m)}{n}_{k3}^{(m)}],\\ {[{\mathbf{A}}_{K}^{(m)}]}_{k}=[{({n}_{k1}^{(m)})}^{4}\phantom{\rule{thinmathspace}{0ex}}\dots \phantom{\rule{thinmathspace}{0ex}}4{({n}_{k1}^{(m)})}^{3}{n}_{k2}^{(m)}\phantom{\rule{thinmathspace}{0ex}}\dots \phantom{\rule{thinmathspace}{0ex}}6{({n}_{k1}^{(m)})}^{2}{({n}_{k2}^{(m)})}^{2}\phantom{\rule{thinmathspace}{0ex}}\dots \phantom{\rule{thinmathspace}{0ex}}12{n}_{k1}^{(m)}{n}_{k2}^{(m)}{({n}_{k3}^{(m)})}^{2}],\end{array}$$

[11]

where ${\mathbf{n}}_{k}^{(m)}={[{n}_{k1}^{(m)}\phantom{\rule{thinmathspace}{0ex}}{n}_{k2}^{(m)}\phantom{\rule{thinmathspace}{0ex}}{n}_{k3}^{(m)}]}^{T}$. Note that each diffusion weighting *b*_{m} may correspond to a distinct gradient set ${\mathbf{N}}^{(m)}=\{{\mathbf{n}}_{1}^{(m)},\dots ,{\mathbf{n}}_{{N}_{m}}^{(m)}\}$. The number of rows of **A** is

$$N={\displaystyle \sum _{m=1}^{M}{N}_{m}.}$$
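The rows of Eq. [11] can be generated programmatically. In the sketch below (a hypothetical implementation, not the DKE code), the ordering of the 15 independent KT elements is an assumption, since Eq. [8] abbreviates it with ellipses; each `row_AK` entry is a quartic monomial times its permutation multiplicity, and the off-diagonal `row_AD` entries carry the factor of 2 needed for **A**_{D}**X**_{D} to equal *D*(**n**):

```python
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

# The 15 independent KT index tuples; this particular ordering is an assumption.
IDX4 = list(combinations_with_replacement(range(3), 4))

def multiplicity(idx):
    """Number of index permutations that map to the same tensor element."""
    m = factorial(4)
    for i in range(3):
        m //= factorial(idx.count(i))
    return m

def row_AD(n):
    """Row of A_D (Eq. [11]); the 2s pair off-diagonal D_ij with single entries of X."""
    return np.array([n[0]**2, n[1]**2, n[2]**2,
                     2*n[0]*n[1], 2*n[0]*n[2], 2*n[1]*n[2]])

def row_AK(n):
    """Row of A_K (Eq. [11]): multiplicity times the quartic monomial."""
    return np.array([multiplicity(idx) * np.prod([n[i] for i in idx])
                     for idx in IDX4])

def build_A(bvals, dir_sets):
    """Stack the [-b_m A_D^(m), (1/6) b_m^2 A_K^(m)] blocks of Eq. [10]."""
    rows = [np.concatenate([-b * row_AD(n), (b**2 / 6.0) * row_AK(n)])
            for b, dirs in zip(bvals, dir_sets) for n in dirs]
    return np.array(rows)
```

A quick self-check: for a unit vector **n**, the entries of `row_AD` sum to (n₁+n₂+n₃)² and those of `row_AK` to (n₁+n₂+n₃)⁴, by the multinomial theorem.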

Vector **B** is given by

$$\mathbf{B}={[\text{ln}({S}_{1}/{S}_{0})\phantom{\rule{thinmathspace}{0ex}}\dots \phantom{\rule{thinmathspace}{0ex}}\text{ln}({S}_{N}/{S}_{0})]}^{T},$$

[12]

where ${S}_{k}=S({\mathbf{n}}_{i}^{(m)},{b}_{m})$ is the diffusion signal intensity for gradient direction ${\mathbf{n}}_{i}^{(m)}$ and diffusion weighting *b _{m}*, and

$$k={\displaystyle \sum _{j=1}^{m-1}{N}_{j}+i.}$$

Matrix **C** representing the linear constraints is given by

$$\mathbf{C}=\left[\begin{array}{cc}-{\mathbf{A}}_{D}& \mathbf{0}\\ \mathbf{0}& -{\mathbf{A}}_{K}\\ -(C/{b}_{\text{max}}){\mathbf{A}}_{D}& {\mathbf{A}}_{K}\end{array}\right],$$

[13]

where

$${\mathbf{A}}_{D}=\left[\begin{array}{c}{\mathbf{A}}_{D}^{(1)}\\ \vdots \\ {\mathbf{A}}_{D}^{(M)}\end{array}\right]$$

[14]

and

$${\mathbf{A}}_{K}=\left[\begin{array}{c}{\mathbf{A}}_{K}^{(1)}\\ \vdots \\ {\mathbf{A}}_{K}^{(M)}\end{array}\right].$$

[15]

Vector **d** is given by

$$\mathbf{d}={\left[\mathbf{0}\phantom{\rule{thinmathspace}{0ex}}-{K}_{\text{min}}D{({\mathbf{n}}_{1}^{(1)})}^{2}\phantom{\rule{thinmathspace}{0ex}}\cdots \phantom{\rule{thinmathspace}{0ex}}-{K}_{\text{min}}D{({\mathbf{n}}_{{N}_{1}}^{(1)})}^{2}\phantom{\rule{thinmathspace}{0ex}}\cdots \phantom{\rule{thinmathspace}{0ex}}-{K}_{\text{min}}D{({\mathbf{n}}_{1}^{(M)})}^{2}\phantom{\rule{thinmathspace}{0ex}}\cdots \phantom{\rule{thinmathspace}{0ex}}-{K}_{\text{min}}D{({\mathbf{n}}_{{N}_{M}}^{(M)})}^{2}\phantom{\rule{thinmathspace}{0ex}}\mathbf{0}\right]}^{T},$$

[16]

where the zero vectors are 1 × *N*-dimensional.
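The blocks of Eqs. [13]–[16] can be assembled as follows (an assumed sketch, not the paper's implementation; `row_AD` and `row_AK` rebuild the rows of Eq. [11] under an assumed ordering of the 15 KT elements). Note the sign convention in the middle block: it enforces *K*(**n**) ≥ *K*_{min}, i.e. −**A**_{K}**X**_{V} ≤ −*K*_{min}*D*(**n**)², so the corresponding entries of **d** carry a minus sign and vanish for the typical choice *K*_{min} = 0:

```python
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

IDX4 = list(combinations_with_replacement(range(3), 4))

def _mult(idx):
    m = factorial(4)
    for i in range(3):
        m //= factorial(idx.count(i))
    return m

def row_AD(n):
    return np.array([n[0]**2, n[1]**2, n[2]**2,
                     2*n[0]*n[1], 2*n[0]*n[2], 2*n[1]*n[2]])

def row_AK(n):
    return np.array([_mult(idx) * np.prod([n[i] for i in idx]) for idx in IDX4])

def build_Cd(dirs, b_max, K_min=0.0, C=3.0, Dn=None):
    """Assemble the constraint matrix C and vector d (Eqs. [13]-[16]) for a
    pooled direction set. Dn holds directional diffusivity estimates, needed
    only when K_min != 0."""
    N = len(dirs)
    AD = np.array([row_AD(n) for n in dirs])
    AK = np.array([row_AK(n) for n in dirs])
    Cmat = np.vstack([np.hstack([-AD, np.zeros((N, 15))]),
                      np.hstack([np.zeros((N, 6)), -AK]),
                      np.hstack([-(C / b_max) * AD, AK])])
    mid = np.zeros(N) if (K_min == 0.0 or Dn is None) else -K_min * np.asarray(Dn)**2
    d = np.concatenate([np.zeros(N), mid, np.zeros(N)])
    return Cmat, d
```

As a check, a positive-definite DT combined with a zero KT satisfies every row of **CX** ≤ **d** when *K*_{min} = 0.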

The following comment regarding the above formulation is in order:

- For the typical choice *K*_{min} = 0, Eq. [7] provides an exact formulation of the estimation problem. If *K*_{min} ≠ 0, the $D({\mathbf{n}}_{n}^{(m)})$ terms in **d** need to be approximated. This may be accomplished by estimating **D** once and plugging that estimate into Eq. [2]. The estimate of **D** may be obtained from the ULLS estimate of **X**, denoted as $\widehat{\mathbf{X}}$ and given by

    $$\widehat{\mathbf{X}}={\mathbf{A}}^{+}\mathbf{B},$$

    [17]

    where **A**^{+} denotes the pseudoinverse of **A**. The $D({\mathbf{n}}_{n}^{(m)})$ terms can be further refined using the CLLS-based estimate of **D**.

The problem defined by Eq. [7] is a convex quadratic programming problem whose solution can be determined via standard algorithms. Two common classes of algorithms for solving this problem are active set and interior-point methods (23). Here we used the active set method, which works as follows. First, the algorithm is initialized with a feasible point, if one exists. A feasible point, defined as a point that satisfies the constraints in Eq. [7], may be found via linear programming. In our implementation, the ULLS solution of Eq. [17] is first tried as a feasible point. If this point is feasible, it is also the solution to the CLLS problem. If it is not, the algorithm uses a feasible point found via linear programming.

In the second phase, the feasible point is iteratively refined until it converges to the optimal solution. At each iteration, an estimate of the active constraints in Eq. [7] is maintained. Active constraints are a subset of the constraints that are actually enforced, that is, they are the ones that become equality constraints. As the algorithm proceeds, the set of active constraints is updated by sequentially adding the active and removing the inactive constraints from the set.
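The essence of one active-set refinement step, restricted to a single constraint for illustration, reduces to an equality-constrained LS solve. The toy sketch below (not the paper's implementation) first checks feasibility of the unconstrained optimum, exactly as described above, and otherwise treats the constraint as active and applies the corresponding KKT correction:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 4))
b = rng.normal(size=30)
c = np.array([1.0, 0.0, 0.0, 0.0])            # one constraint: c^T x <= d0

x_u = np.linalg.lstsq(A, b, rcond=None)[0]    # unconstrained LS optimum
d0 = float(c @ x_u) - 0.5                     # chosen so that x_u is infeasible

if c @ x_u <= d0:
    x = x_u                                   # feasible => also the constrained optimum
else:
    # The constraint must be active at the optimum; solve the equality-
    # constrained LS problem as a KKT correction of the unconstrained solution.
    H_inv = np.linalg.inv(A.T @ A)
    x = x_u - H_inv @ c * float(c @ x_u - d0) / float(c @ H_inv @ c)
```

At the solution the constraint holds with equality and the objective gradient is antiparallel to **c** with a nonnegative multiplier, which is the optimality condition an active-set method maintains for its working set.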

Alternatively, the heuristic procedure described below may be used to estimate **D** and **W** such that the constraints of Eqs. [4] and [5] are approximately satisfied. This algorithm assumes that exactly two nonzero *b*-values are used, and that the gradient directions are the same for both *b*-values; that is, **N**^{(1)} = **N**^{(2)} = {**n**_{1},…,**n**_{N}}. The central idea in this algorithm is to first estimate *D*(**n**_{i}) and *K*(**n**_{i}) along individual gradient directions, and then to threshold the ones that violate the constraints of Eqs. [4] and [5] to within their prescribed limits (steps D1, K1, and K2 below). More specifically, *D*(**n**_{i}) and *K*(**n**_{i}) are first determined from *S*(**n**_{i},*b*_{1}) and *S*(**n**_{i},*b*_{2}) (Eqs. [18] and [19]). Then, *D*(**n**_{i}) are thresholded such that they satisfy the constraints on both *D*(**n**_{i}) and *K*(**n**_{i}) (steps D1-D4). Next, the constrained *D*(**n**_{i}) are used to estimate **D** (Eq. [21]), which in turn is used to re-estimate *D*(**n**_{i}) (Eq. [22]). The re-estimated diffusivities *D*^{(R)} (**n**_{i}) are then used to obtain the re-estimated *K*^{(R)} (**n**_{i}) (Eq. [23]). Finally, *K*^{(R)} (**n**_{i}) are thresholded (steps K1 and K2), and are then used along with *D*^{(R)} (**n**_{i}) to compute **W**.

Along direction **n**_{i}, *D*_{i} = *D*(**n**_{i}) and *K*_{i} = *K*(**n**_{i}) are first estimated as

$${D}_{i}=\frac{{b}_{2}{D}_{i}^{(1)}-{b}_{1}{D}_{i}^{(2)}}{{b}_{2}-{b}_{1}},$$

[18]

and

$${K}_{i}=6\frac{{D}_{i}^{(1)}-{D}_{i}^{(2)}}{({b}_{2}-{b}_{1})D{({\mathbf{n}}_{i})}^{2}},$$

[19]

where

$${D}_{i}^{(1)}=\frac{-\text{ln}(S({\mathbf{n}}_{i},{b}_{1})/{S}_{0})}{{b}_{1}},\text{\hspace{1em}}{D}_{i}^{(2)}=\frac{-\text{ln}(S({\mathbf{n}}_{i},{b}_{2})/{S}_{0})}{{b}_{2}}.$$

[20]

Note that Eqs. [18] and [19] result when exactly two nonzero *b*-values are used.
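In the noise-free case, Eqs. [18]–[20] invert the two-shell DKI signal exactly, which the following sketch verifies (illustrative parameter values only):

```python
import numpy as np

def directional_DK(S0, S1, S2, b1, b2):
    """Estimate D_i and K_i along one direction from two nonzero b-values
    (Eqs. [18]-[20])."""
    D1 = -np.log(S1 / S0) / b1                    # Eq. [20]
    D2 = -np.log(S2 / S0) / b2
    Di = (b2 * D1 - b1 * D2) / (b2 - b1)          # Eq. [18]
    Ki = 6.0 * (D1 - D2) / ((b2 - b1) * Di**2)    # Eq. [19]
    return Di, Ki

# Noise-free signals from the DKI model (Eq. [1]) along a single direction:
# ln(S/S0) = -b D + (1/6) b^2 D^2 K.
D_true, K_true = 1.0e-3, 0.9                      # illustrative values
S0, b1, b2 = 1000.0, 1000.0, 2000.0
S1 = S0 * np.exp(-b1 * D_true + b1**2 * D_true**2 * K_true / 6.0)
S2 = S0 * np.exp(-b2 * D_true + b2**2 * D_true**2 * K_true / 6.0)
```

Substituting the model signals into Eq. [18] cancels the kurtosis term exactly, so both *D* and *K* are recovered without bias in the absence of noise.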

Next, a set of rules is applied to constrain *D _{i}*:

- D1. If *D*_{i} ≤ 0, set *D*_{i} = 0.
- D2. Otherwise, if ${D}_{i}^{(1)}<0$, set *D*_{i} = 0.
- D3. Otherwise, if *D*_{i} > 0 and *K*_{i} < *K*_{min}:
  - If *K*_{min} = 0, set ${D}_{i}={D}_{i}^{(1)}$.
  - Otherwise, set
    $${D}_{i}=\frac{1}{x}\left[\sqrt{1+2x{D}_{i}^{(1)}}-1\right]\phantom{\rule{thinmathspace}{0ex}}\text{where}\phantom{\rule{thinmathspace}{0ex}}x=-\frac{1}{3}{K}_{\text{min}}{b}_{1}.$$
- D4. Otherwise, if *D*_{i} > 0 and *K*_{i} > *K*_{max,i}, where *K*_{max,i} = *K*_{max}(**n**_{i}), set
  $${D}_{i}=\frac{{D}_{i}^{(1)}}{1-C{b}_{1}/(6{b}_{\text{max}})},$$
  where *b*_{max} = max(*b*_{1}, *b*_{2}). This equation results from plugging the expression for *K*_{max}(**n**_{i}) given by Eq. [6] into Eq. [1], setting *b* = *b*_{1}, and solving for *D*_{i}. The diffusion signal for *b* = *b*_{1} is utilized here, as it is less sensitive to noise and is a stronger function of *D*_{i}.

From the constrained set of *D*_{i}, the DT is estimated as

$${\widehat{\mathbf{X}}}_{D}={\mathbf{A}}_{D}^{+}{\mathbf{b}}_{D},$$

[21]

where ${\widehat{\mathbf{X}}}_{D}$ is the ULLS estimate of **X**_{D} = [*D*_{11} *D*_{22} *D*_{33} *D*_{12} *D*_{13} *D*_{23}]^{T}, and **b**_{D} = [*D*_{1} … *D*_{N}]^{T}.

The re-estimated ${\mathbf{b}}_{D}^{(R)}$ is then obtained as

$${\mathbf{b}}_{D}^{(R)}={\mathbf{A}}_{D}{\widehat{\mathbf{X}}}_{D}.$$

[22]

The re-estimation step has a noise removal effect on **b**_{D} by forcing it to be consistent with ${\widehat{\mathbf{X}}}_{D}$. Next, ${D}^{(R)}({\mathbf{n}}_{i})={D}_{i}^{(R)}$ are used to obtain ${K}^{(R)}({\mathbf{n}}_{i})={K}_{i}^{(R)}$ as

$${K}_{i}^{(R)}=\{\begin{array}{cc}6\frac{{D}_{i}^{(R)}-{D}_{i}^{(2)}}{{b}_{2}{({D}_{i}^{(R)})}^{2}}& \text{if}\phantom{\rule{thinmathspace}{0ex}}{D}_{i}^{(R)}>0\\ 0& \text{otherwise}\end{array}.$$

[23]

Next, a set of rules is applied to constrain ${K}_{i}^{(R)}$:

- K1.If ${K}_{i}^{(R)}>{K}_{{\text{max}}_{i}}^{(R)}$, set ${K}_{i}^{(R)}={K}_{{\text{max}}_{i}}^{(R)}$.
- K2.If ${K}_{i}^{(R)}<{K}_{\text{min}}$, set ${K}_{i}^{(R)}={K}_{\text{min}}$.
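Steps [21]–[23] and the K1/K2 thresholds can be sketched as follows (an assumed implementation; the synthetic inputs are generated to be exactly self-consistent, so the projection of Eq. [22] should leave them unchanged and a uniform illustrative kurtosis should be recovered exactly):

```python
import numpy as np

def row_AD(n):
    """Row of A_D pairing with [D11 D22 D33 D12 D13 D23] (cf. Eq. [11])."""
    return np.array([n[0]**2, n[1]**2, n[2]**2,
                     2*n[0]*n[1], 2*n[0]*n[2], 2*n[1]*n[2]])

rng = np.random.default_rng(2)
dirs = rng.normal(size=(30, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
AD = np.array([row_AD(n) for n in dirs])

# Directional diffusivities generated exactly from a positive-definite DT.
X_D = np.array([1.7e-3, 0.4e-3, 0.3e-3, 1.0e-4, 5.0e-5, 2.0e-5])
b_D = AD @ X_D

X_D_hat = np.linalg.pinv(AD) @ b_D        # Eq. [21]
b_D_R = AD @ X_D_hat                      # Eq. [22]: re-estimated diffusivities

# Eq. [23] followed by the K1/K2 thresholds; D_i^(2) is synthesized from an
# illustrative uniform kurtosis K_true so the recovery can be checked.
b2, b_max, K_true, C = 2000.0, 2000.0, 0.7, 3.0
D2 = b_D_R - b2 * b_D_R**2 * K_true / 6.0
K_R = np.where(b_D_R > 0, 6.0 * (b_D_R - D2) / (b2 * b_D_R**2), 0.0)
K_R = np.clip(K_R, 0.0, C / (b_max * b_D_R))   # rules K2 and K1
```

Because **b**_{D} already lies in the range of **A**_{D} here, the projection is the identity; with noisy inputs it would pull the directional diffusivities toward the nearest tensor-consistent set.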

Finally, the kurtosis tensor is estimated as

$${\widehat{\mathbf{X}}}_{K}=\frac{1}{{\overline{D}}^{2}}{\mathbf{A}}_{K}^{+}{\mathbf{b}}_{K}$$

[24]

where ${\widehat{\mathbf{X}}}_{K}$ is the ULLS estimate of the 15 × 1 vector **X**_{K} of independent KT elements, $\overline{D}$ is the mean diffusivity obtained from ${\widehat{\mathbf{X}}}_{D}$, and ${\mathbf{b}}_{K}={[{K}_{1}^{(R)}{({D}_{1}^{(R)})}^{2}\phantom{\rule{thinmathspace}{0ex}}\dots \phantom{\rule{thinmathspace}{0ex}}{K}_{N}^{(R)}{({D}_{N}^{(R)})}^{2}]}^{T}$.

The MK is defined as the diffusional kurtosis averaged over all gradient directions. In a reference frame that diagonalizes **D**, it may be calculated as (see Appendix for the derivation)

$$\overline{K}\phantom{\rule{thinmathspace}{0ex}}={F}_{1}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3}){\tilde{W}}_{1111}+{F}_{1}({\lambda}_{2},{\lambda}_{1},{\lambda}_{3}){\tilde{W}}_{2222}+{F}_{1}({\lambda}_{3},{\lambda}_{2},{\lambda}_{1}){\tilde{W}}_{3333}+\phantom{\rule{thinmathspace}{0ex}}{F}_{2}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3}){\tilde{W}}_{2233}+{F}_{2}({\lambda}_{2},{\lambda}_{1},{\lambda}_{3}){\tilde{W}}_{1133}+{F}_{2}({\lambda}_{3},{\lambda}_{2},{\lambda}_{1}){\tilde{W}}_{1122}$$

[25]

where λ_{1} ≥ λ_{2} ≥ λ_{3} denote the eigenvalues of **D**, and ${\tilde{W}}_{ijkl}$ are the elements of the kurtosis tensor in the rotated frame, given by

$${\tilde{W}}_{ijkl}={\displaystyle \sum _{i\prime ,j\prime ,k\prime ,l\prime =1}^{3}{R}_{ii\prime}{R}_{jj\prime}{R}_{kk\prime}{R}_{ll\prime}{W}_{i\prime j\prime k\prime l\prime},}$$

[26]

with *R*_{ij} being the elements of the rotation matrix formed from the eigenvectors of **D**. The functions *F*_{1} and *F*_{2} are given by

$${F}_{1}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})=\frac{{({\lambda}_{1}+{\lambda}_{2}+{\lambda}_{3})}^{2}}{18({\lambda}_{1}-{\lambda}_{2})({\lambda}_{1}-{\lambda}_{3})}\left[\frac{\sqrt{{\lambda}_{2}{\lambda}_{3}}}{{\lambda}_{1}}{R}_{F}\left(\frac{{\lambda}_{1}}{{\lambda}_{2}},\frac{{\lambda}_{1}}{{\lambda}_{3}},1\right)+\frac{3{\lambda}_{1}^{2}-{\lambda}_{1}{\lambda}_{2}-{\lambda}_{1}{\lambda}_{3}-{\lambda}_{2}{\lambda}_{3}}{3{\lambda}_{1}\sqrt{{\lambda}_{2}{\lambda}_{3}}}{R}_{D}\left(\frac{{\lambda}_{1}}{{\lambda}_{2}},\frac{{\lambda}_{1}}{{\lambda}_{3}},1\right)-1\right]$$

[27]

and

$${F}_{2}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})=\frac{{({\lambda}_{1}+{\lambda}_{2}+{\lambda}_{3})}^{2}}{3{({\lambda}_{2}-{\lambda}_{3})}^{2}}\left[\frac{{\lambda}_{2}+{\lambda}_{3}}{\sqrt{{\lambda}_{2}{\lambda}_{3}}}{R}_{F}\left(\frac{{\lambda}_{1}}{{\lambda}_{2}},\frac{{\lambda}_{1}}{{\lambda}_{3}},1\right)+\frac{2{\lambda}_{1}-{\lambda}_{2}-{\lambda}_{3}}{3\sqrt{{\lambda}_{2}{\lambda}_{3}}}{R}_{D}\left(\frac{{\lambda}_{1}}{{\lambda}_{2}},\frac{{\lambda}_{1}}{{\lambda}_{3}},1\right)-2\right],$$

[28]

where *R*_{F} and *R*_{D} are Carlson's elliptic integrals (24), defined as

$${R}_{F}(x,y,z)=\frac{1}{2}{\displaystyle {\int}_{0}^{\infty}{\left(t+x\right)}^{-1/2}{\left(t+y\right)}^{-1/2}{\left(t+z\right)}^{-1/2}\phantom{\rule{thinmathspace}{0ex}}dt}$$

[29]

and

$${R}_{D}(x,y,z)=\frac{3}{2}{\displaystyle {\int}_{0}^{\infty}{\left(t+x\right)}^{-1/2}{\left(t+y\right)}^{-1/2}{\left(t+z\right)}^{-3/2}\phantom{\rule{thinmathspace}{0ex}}dt.}$$

[30]

As there are highly efficient numerical algorithms for calculating Carlson’s elliptic integrals (24–25), Eqs. [27] and [28] may be used for fast computation of the MK. Note that when two or more of the eigenvalues of **D** coincide, *F*_{1} and *F*_{2} have removable singularities whose treatment is discussed in the Appendix.
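The pieces above can be assembled into a short MK routine. In the sketch below (not the DKE implementation), *R*_{F} and *R*_{D} are evaluated by plain Gauss–Legendre quadrature after the substitution t = (u/(1−u))², which makes the integrands smooth on [0, 1]; the specialized algorithms of (24–25) are faster, and quadrature is used here only for transparency. The eigenvalues and KT elements are illustrative:

```python
import numpy as np

# 120-point Gauss-Legendre rule mapped to [0, 1].
_x, _w = np.polynomial.legendre.leggauss(120)
_u = 0.5 * (_x + 1.0)
_w = 0.5 * _w
# Substitution t = (u/(1-u))^2 maps [0, 1) onto [0, inf); dt = 2u/(1-u)^3 du.
_t = (_u / (1.0 - _u)) ** 2
_jac = 2.0 * _u / (1.0 - _u) ** 3

def R_F(x, y, z):
    """Carlson's R_F (Eq. [29]) by direct quadrature."""
    f = ((_t + x) * (_t + y) * (_t + z)) ** -0.5
    return 0.5 * np.sum(_w * f * _jac)

def R_D(x, y, z):
    """Carlson's R_D (Eq. [30]) by direct quadrature."""
    f = ((_t + x) * (_t + y)) ** -0.5 * (_t + z) ** -1.5
    return 1.5 * np.sum(_w * f * _jac)

def F1(l1, l2, l3):
    """Eq. [27]."""
    s = (l1 + l2 + l3) ** 2 / (18.0 * (l1 - l2) * (l1 - l3))
    rf = R_F(l1 / l2, l1 / l3, 1.0)
    rd = R_D(l1 / l2, l1 / l3, 1.0)
    return s * (np.sqrt(l2 * l3) / l1 * rf
                + (3*l1**2 - l1*l2 - l1*l3 - l2*l3) / (3 * l1 * np.sqrt(l2 * l3)) * rd
                - 1.0)

def F2(l1, l2, l3):
    """Eq. [28]."""
    s = (l1 + l2 + l3) ** 2 / (3.0 * (l2 - l3) ** 2)
    rf = R_F(l1 / l2, l1 / l3, 1.0)
    rd = R_D(l1 / l2, l1 / l3, 1.0)
    return s * ((l2 + l3) / np.sqrt(l2 * l3) * rf
                + (2*l1 - l2 - l3) / (3 * np.sqrt(l2 * l3)) * rd
                - 2.0)

def mean_kurtosis(lam, Wt):
    """Eq. [25] in the frame that diagonalizes D (Wt = W-tilde, 3x3x3x3)."""
    l1, l2, l3 = lam
    return (F1(l1, l2, l3) * Wt[0,0,0,0] + F1(l2, l1, l3) * Wt[1,1,1,1]
            + F1(l3, l2, l1) * Wt[2,2,2,2] + F2(l1, l2, l3) * Wt[1,1,2,2]
            + F2(l2, l1, l3) * Wt[0,0,2,2] + F2(l3, l2, l1) * Wt[0,0,1,1])
```

A useful self-check is R_F(1, 1, 1) = R_D(1, 1, 1) = 1, and, for distinct eigenvalues, agreement of Eq. [25] with a brute-force average of *K*(**n**) over the sphere.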

The axial kurtosis *K*_{║} is the diffusional kurtosis in the direction of the highest diffusion. In a reference frame that diagonalizes **D**, it is simply given by

$${K}_{\parallel}=\frac{{({\lambda}_{1}+{\lambda}_{2}+{\lambda}_{3})}^{2}}{9{\lambda}_{1}^{2}}{\tilde{W}}_{1111}.$$

[31]

The radial kurtosis *K*_{⊥} is defined as the mean diffusional kurtosis perpendicular to the direction of highest diffusion, consistent with the definition of radial diffusivity. In a reference frame that diagonalizes **D**, it is given by (see the Appendix for the derivation)

$${K}_{\perp}={G}_{1}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3}){\tilde{W}}_{2222}+{G}_{1}({\lambda}_{1},{\lambda}_{3},{\lambda}_{2}){\tilde{W}}_{3333}+{G}_{2}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3}){\tilde{W}}_{2233},$$

[32]

where

$${G}_{1}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})=\frac{{({\lambda}_{1}+{\lambda}_{2}+{\lambda}_{3})}^{2}}{18{\lambda}_{2}{({\lambda}_{2}-{\lambda}_{3})}^{2}}\left(2{\lambda}_{2}+\frac{{\lambda}_{3}^{2}-3{\lambda}_{2}{\lambda}_{3}}{\sqrt{{\lambda}_{2}{\lambda}_{3}}}\right)$$

[33]

and

$${G}_{2}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})=\frac{{({\lambda}_{1}+{\lambda}_{2}+{\lambda}_{3})}^{2}}{3{({\lambda}_{2}-{\lambda}_{3})}^{2}}\left(\frac{{\lambda}_{2}+{\lambda}_{3}}{\sqrt{{\lambda}_{2}{\lambda}_{3}}}-2\right).$$

[34]

The treatment of the singularities of *G*_{1} (λ_{1},λ_{2},λ_{3}) and *G*_{2} (λ_{1},λ_{2},λ_{3}) is discussed in the Appendix.
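Equations [31]–[34] are closed-form and need no special functions; the sketch below (not the DKE implementation, with illustrative eigenvalues and KT elements) evaluates them in the diagonalizing frame. For *K*_{⊥}, a direct angular average of *K*(**n**) over the plane perpendicular to the principal eigenvector provides an independent numerical check:

```python
import numpy as np

def G1(l1, l2, l3):
    """Eq. [33]."""
    return ((l1 + l2 + l3) ** 2 / (18.0 * l2 * (l2 - l3) ** 2)
            * (2.0 * l2 + (l3**2 - 3.0 * l2 * l3) / np.sqrt(l2 * l3)))

def G2(l1, l2, l3):
    """Eq. [34]."""
    return ((l1 + l2 + l3) ** 2 / (3.0 * (l2 - l3) ** 2)
            * ((l2 + l3) / np.sqrt(l2 * l3) - 2.0))

def radial_kurtosis(lam, w2222, w3333, w2233):
    """Eq. [32] in the frame that diagonalizes D."""
    l1, l2, l3 = lam
    return (G1(l1, l2, l3) * w2222 + G1(l1, l3, l2) * w3333
            + G2(l1, l2, l3) * w2233)

def axial_kurtosis(lam, w1111):
    """Eq. [31]."""
    l1, l2, l3 = lam
    return (l1 + l2 + l3) ** 2 / (9.0 * l1 ** 2) * w1111
```

The odd elements ${\tilde{W}}_{2223}$ and ${\tilde{W}}_{2333}$ contribute to *K*(**n**) pointwise but average to zero over the perpendicular plane, which is why only three KT elements appear in Eq. [32].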

DKI scans were performed on three healthy volunteers using a 3 T Siemens Tim Trio system with a 12-channel head coil. This study was approved by the New York University School of Medicine Institutional Review Board and written informed consent was obtained from all subjects. Diffusion-weighted images (DWI’s) were acquired along 30 gradient directions with a twice-refocused spin-echo sequence. Scan parameters were TR = 5900 ms, TE = 96 ms, matrix = 82 × 82, FOV = 222 × 222 mm^{2}, 45 slices, slice thickness = 2.7 mm with no gap, partial Fourier encoding = 3/4, NEX = 44 for *b* = 0 (NEX = 45 for Dataset 2), and NEX = 4 for *b* = 500, 1000, 1500, and 2000 s/mm^{2} (NEX = 5 for Dataset 2). The total acquisition time was 53:20 minutes (65:32 minutes for Dataset 2). We emphasize that standard DKI scans (discussed below) require 7:29 minutes.

In the first stage of processing, the DWI’s are optionally smoothed using a Gaussian kernel to reduce the impact of noise and misregistration. Next, **D** and **W** are estimated using the CLLS-QP algorithm, and finally, the scalar measures are obtained from the tensors. The pipeline was implemented in-house in the MATLAB environment (http://www.mathworks.com). The program, designated as Diffusional Kurtosis Estimator (DKE), is available upon request from the corresponding author.

The parameters used with DKE were as follows: the full width at half maximum of the Gaussian kernel was 3.375 mm; *K*_{min} = 0; and *C* = 3. The tensors were also estimated using the UNLS, ULLS and CLLS-H algorithms following smoothing of DWI's. Three processing protocols were considered where different subsets of the acquired DWI's were used to estimate the parametric maps. In the *reference* protocol, the maps were obtained using all of the available DWI's. In the *standard* protocol, the maps were estimated using NEX = 11 for *b* = 0 images, and NEX = 1 for *b* = 1000, 2000 s/mm^{2} DWI's. Finally, in the *fast* protocol, the maps were obtained with NEX = 11 for *b* = 0 images, and NEX = 1 for 15 of the *b* = 1000 s/mm^{2} and all of the *b* = 2000 s/mm^{2} DWI's. We used the maps obtained using the reference protocol and ULLS as the baseline for evaluating the maps estimated with the other protocols. The standard protocol is currently used for clinical research acquisitions, and the fast protocol was proposed as a potential candidate for future clinical research acquisitions. The estimated acquisition time for the fast protocol is 5:54 minutes.

Table 1 shows the processing times for generating the scalar tensor-derived measures using different tensor estimation methods, and summarizes the features of the algorithms. Figure 1, Figure 2, and Figure 3 show the MK, MD, and FA maps obtained using the standard protocol. The regions with the most noticeable change in the estimated MK across all datasets are in the splenium and the genu of the corpus callosum, where the voxels with incorrectly estimated MK values appear in black. The CLLS-H algorithm corrects the MK values for some of these voxels, whereas the CLLS-QP method corrects all of the voxels. Although the constrained methods also improve the estimates of the MD and FA, this improvement is visually less discernible. Figure 4 shows the voxels where tensor estimates obtained via ULLS and the standard and reference protocols violate the constraints in Eqs. [4] or [5]. With the standard protocol, few voxels violate the constraint of Eq. [4]; such voxels are typically located in the corpus callosum. Minimum kurtosis constraint violations also typically occur in the corpus callosum. These voxels are less likely to violate the maximum kurtosis constraints, although such violations are not uncommon. The cerebrospinal fluid (CSF) voxels show a prominent pattern of maximum kurtosis constraint violations.

MK maps obtained using the standard protocol and (a) UNLS; (b) ULLS; (c) CLLS-H; (d) CLLS-QP; and (e) the reference protocol and ULLS.

MD maps obtained using the standard protocol and (a) UNLS; (b) ULLS; (c) CLLS-H; (d) CLLS-QP; and (e) the reference protocol and ULLS.

FA maps obtained using the standard protocol and (a) UNLS; (b) ULLS; (c) CLLS-H; (d) CLLS-QP; and (e) the reference protocol and ULLS.

Voxels (indicated in red) for which the diffusion and kurtosis tensors estimated using standard (rows 1, 3, and 5) and reference (rows 2, 4, and 6) protocols and ULLS violate the constraints on (a) directional diffusivities; (b) minimum directional kurtoses; **...**

Feature comparison between the UNLS, ULLS, CLLS-H, and CLLS-QP algorithms. Run times are for all slices in Dataset 2 obtained with the standard protocol and include the input/output time.

Table 2 shows the root mean-square error (RMSE) between the maps obtained using the standard protocol and the maps estimated using the reference protocol. RMSE values are shown for the whole brain as well as for voxels with constraint violations. The whole-brain RMSE values averaged over all subjects indicate that the constrained methods outperform the unconstrained methods in estimating all three scalar quantities. The RMSE improvement offered by the constrained methods for the voxels with constraint violations is more dramatic, and is more than 35% for the MK, 7% for the MD, and 8% for the FA. Note that a lower threshold of −2, the theoretical minimum kurtosis, was applied to MK values estimated with the unconstrained methods to prevent their RMSE values from becoming excessively large. The difference between the RMSE’s of the two unconstrained methods is fairly small, and so is the difference between the constrained methods.

RMSE between the parametric maps estimated using the standard protocol and different estimation methods, and the maps obtained using the reference protocol and ULLS; (a) MK; (b) MD; and (c) FA. In each cell, two RMSE values are reported: the first RMSE **...**

Figure 5, Figure 6, and Figure 7 show the MK, MD, and FA maps estimated using the fast protocol, while Figure 8 shows the voxels for which the constraints are violated. In this case, the voxels with constraint violations are distributed more widely in the brain parenchyma, although their most consistent location is the corpus callosum. Table 3 shows the RMSE values for the maps obtained using the fast protocol. As with the standard protocol, the RMSE values of the unconstrained methods are comparable. However, with the fast protocol, the whole-brain RMSE gain due to the CLLS-QP method is larger than with the standard protocol. For the voxels with constraint violations, the RMSE improvement due to the constrained methods exceeds 40% for the MK, 10% for the MD, and 19% for the FA.

MK maps obtained using the fast protocol and (a) UNLS; (b) ULLS; (c) CLLS-QP; and (d) the reference protocol and ULLS. Note that due to the different numbers of gradient directions, the CLLS-H algorithm is not applicable.

MD maps obtained using the fast protocol and (a) UNLS; (b) ULLS; (c) CLLS-QP; and (d) the reference protocol and ULLS.

FA maps obtained using the fast protocol and (a) UNLS; (b) ULLS; (c) CLLS-QP; and (d) the reference protocol and ULLS.

Voxels (indicated in red) for which the diffusion and kurtosis tensors estimated using the fast protocol and ULLS violate the constraints on (a) directional diffusivities; (b) minimum directional kurtoses; and (c) maximum directional kurtoses. The shade **...**

RMSE between the parametric maps estimated using the fast protocol and different estimation methods, and the maps obtained using the reference protocol and ULLS; (a) MK; (b) MD; and (c) FA. In each cell, two RMSE values are reported: the first RMSE value **...**

Figure 9 shows the directional diffusivities and kurtoses for two Dataset 1 voxels with constraint violations. In the corpus callosum voxel, the UNLS solution violates 5 out of 90 constraints, and the ULLS solution violates 11 constraints. The CLLS-H solution reduces the total number of violations from 11 to 5, and the estimate based on CLLS-QP satisfies all constraints as expected. In the prefrontal white matter voxel, the unconstrained algorithms both violate one constraint, whereas both constrained algorithms satisfy all constraints.

Voxels in the (a) corpus callosum and (b) prefrontal white matter from Dataset 1 and their corresponding directional diffusivities (top right), kurtoses (bottom left), and the product *b*_{max}*D*(**n**)*K*(**n**) (bottom right), estimated using different unconstrained **...**

Finally, Figure 10 shows the maps estimated using the new expressions for the MK and *K*_{⊥}, along with the map for *K*_{║}.

The voxels in Figure 1 and Figure 5 where CLLS-H and CLLS-QP produce improved MK estimates are those for which ULLS yields an NPD or a *nearly* NPD estimate of **D** and/or the constraints on the minimum and maximum kurtoses are violated. In these voxels, which most consistently occur in highly anisotropic regions such as the corpus callosum, λ_{1} is large and λ_{3} is small. This situation increases the sensitivity of unconstrained methods to noise. One effect of noise is to create the appearance of non-decreasing diffusion signal intensity with increasing *b*. This typically occurs along gradient directions with high diffusivity, but can also happen along directions with very low diffusivity. At high *b*-values, a small disturbance due to noise can artificially increase the signal intensity with increasing *b*-value, yielding falsely large estimates of directional kurtoses. A prominent instance of this effect can be observed in the CSF voxels in Figure 4 and Figure 8.

The other effect of noise is to create the appearance of the diffusion signal being a concave function of *b* along low-diffusivity directions, yielding negative estimates of directional kurtoses. In this case, the measured diffusion-weighted signal intensity may even exceed the signal intensity at *b* = 0, in which case a negative estimate of diffusivity, that is, an NPD DT, arises.
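Both noise effects follow directly from the standard per-direction DKI signal model, ln *S*(*b*) = ln *S*(0) − *bD* + (1/6)*b*²*D*²*K*. The sketch below, with invented *b*-values and signals, shows how an unconstrained quadratic fit of the log-signal returns a negative kurtosis when the samples are concave in *b*:

```python
import numpy as np

def fit_dki_direction(bvals, signals):
    """Fit ln S(b) = ln S0 - b*D + (1/6) b^2 D^2 K along one direction.

    Returns (D, K) from an unconstrained quadratic fit in b; with noisy
    data nothing prevents D or K from coming out negative.
    """
    c2, c1, _ = np.polyfit(bvals, np.log(signals), 2)
    D = -c1                  # apparent diffusivity
    K = 6.0 * c2 / D ** 2    # apparent kurtosis
    return D, K

b = np.array([0.0, 1.0, 2.0])          # b-values in ms/um^2 (invented)
# Log-signal slightly concave in b (e.g., due to noise at high b):
s = np.exp([0.0, -1.0, -2.3])
D, K = fit_dki_direction(b, s)
print(D, K)   # K comes out negative
```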

It is well-known that imaging noise causes λ_{1} to be overestimated and λ_{3} to be underestimated (26). The constraints in Eqs. [4] and [5] force a smaller λ_{1} and larger λ_{3}, yielding more reasonable estimates of **D** and **W**. The top right plot of Figure 9(a) shows an instance of this, where the CLLS-QP estimates of directional diffusivities along directions with high diffusivity are slightly lower than those obtained with ULLS, whereas the opposite is the case along low-diffusivity directions.

Table 2 and Table 3 provide further confirmation of the improvement offered by the constrained algorithms. Note that the whole-brain RMSE improvement is actually due to only the 10–20% of voxels for which there are constraint violations, hence the RMSE improvement within the set of voxels with constraint violations is considerably larger than the improvement over the whole brain. The constrained algorithms achieve comparable RMSE, and both outperform the unconstrained algorithms consistently across all subjects, measures, and protocols. However, the RMSE improvement is more evident with the fast protocol. Note also that the average whole-brain RMSE in estimating the MK and MD with the fast protocol and the constrained methods is smaller than or comparable to the whole-brain RMSE with the standard protocol and unconstrained methods. This means that the constrained methods can achieve a savings of more than 20% in acquisition time. (Note that a total of 56 *b* = 0 images and DWI’s were utilized in the fast protocol, whereas in the standard protocol 71 images were used.) The RMSE improvement offered by the constrained methods is less pronounced in estimating the FA. A possible explanation may be that the FA is more heavily influenced than the MD by the Rician noise corrupting the DWI’s, consistent with previous reports in the literature (27). Therefore, to obtain more reliable FA estimates, constrained methods may need to be combined with schemes that compensate for the Rician noise bias.

The CLLS-H algorithm strikes a balance between complexity and accuracy by correcting the MK in most of the voxels that violate the constraints, doing so at a negligible 2.5% overhead compared to the ULLS (cf. Table 1). Thus, the CLLS-H method can be beneficial in real-time situations where run time is crucial. On the other hand, the CLLS-QP approach eliminates the constraint violations in all such voxels, but it has a much longer run time. It should be noted that while the improvement offered by CLLS-QP comes at the expense of increased algorithm complexity and consequently higher run time, it is straightforward to speed up the computations via parallelization.

A limitation of the formulation in Eq. [7] is that it ensures that constraints in Eqs. [4] and [5] are satisfied along all *imaged* gradient directions, rather than all *possible* directions. Imposing the latter constraints is more challenging and requires a deeper understanding of the kurtosis tensor. A practically viable alternative is to estimate the tensors using either of the constrained algorithms and then check if the solution satisfies the constraints along other directions. The directions of the eigenvectors of **D** are of the most interest as Eq. [25] and Eq. [32] are explicit functions of the kurtosis tensor elements along these directions. If the solution does not satisfy the constraints along these directions, then these constraints can be added to the set of constraints in Eqs. [4] and [5] and the problem re-solved. This procedure may be repeated until the solution satisfies the constraints.
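For readers who wish to experiment, the CLLS problem and the re-solving procedure can be sketched as a generic inequality-constrained least-squares solve. This is our own sketch: the SLSQP solver stands in for the paper's quadratic programming routine, and the toy matrices below are not the DKI design matrices.

```python
import numpy as np
from scipy.optimize import minimize

def clls(A, y, G, h, x0=None):
    """Linearly constrained linear least squares:
    minimize ||A x - y||^2 subject to G x >= h.

    In DKI, x would stack the diffusion and kurtosis tensor elements and
    the rows of G would encode the directional diffusivity/kurtosis
    constraints; none of those problem-specific matrices appear here.
    """
    if x0 is None:
        x0, *_ = np.linalg.lstsq(A, y, rcond=None)  # warm start (ULLS)
    res = minimize(
        lambda x: np.sum((A @ x - y) ** 2),
        x0,
        jac=lambda x: 2 * A.T @ (A @ x - y),
        constraints=[{"type": "ineq",
                      "fun": lambda x: G @ x - h,
                      "jac": lambda x: G}],
        method="SLSQP",
    )
    return res.x

# Toy problem: the unconstrained solution x = (-1, 2) violates x >= 0,
# and the constrained solution clips the negative component.
A = np.eye(2)
y = np.array([-1.0, 2.0])
G = np.eye(2)        # constraints: x1 >= 0, x2 >= 0
h = np.zeros(2)
print(clls(A, y, G, h))   # approximately [0, 2]
```

The re-solving loop described in the text then amounts to evaluating the constraints along additional directions (e.g., the eigenvectors of **D**), appending the violated rows to `G` and `h`, and calling `clls` again until no new violations occur.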

Finally, as the expressions for *K*_{⊥} and MK used for estimating the maps in Figure 10 represent the exact values of these parameters, they more accurately quantify the diffusional kurtosis characteristics of the tissue. The advantage of *K*_{⊥} as defined here and in (8,20) over that of (15) is that ours gives all radial directions equal weight, rather than selecting out two particular radial directions.

We presented two related advancements to the DKI estimation framework. The first advancement significantly improves the fidelity of the tensors parameterizing the DKI model by casting the estimation problem as CLLS. The CLLS formulation ensures that the estimated diffusivities and kurtoses along the imaged gradient directions remain within a physically and biologically plausible range. Two algorithms were provided for minimizing the CLLS cost function: a convex quadratic programming (CLLS-QP) method and a heuristic (CLLS-H) algorithm. The CLLS-H algorithm imposes a negligible computational overhead compared to the ULLS algorithm, making it suitable for real-time applications with two *b*-values and equal numbers of gradients per *b*-value, whereas the computationally more demanding CLLS-QP algorithm permits an arbitrary number of *b*-values and different gradient sets for each *b*-value. *In vivo* image data demonstrated the significant improvement offered by the proposed constrained tensor estimation methods over unconstrained algorithms. The advantage of the constrained methods is particularly evident when fewer DWI’s are available. The RMSE improvement due to constrained methods may be translated into shorter scan times and/or increased spatial resolution for a given fidelity of the parametric maps; our results indicate a potential time savings of more than 20%. The second advancement allows for more accurate and efficient calculation of MK and *K*_{⊥} by utilizing the exact closed-form formulae derived in this paper. The proposed tensor estimation methods along with the exact expressions for the kurtosis measures improve the robustness of the DKI metrics, thus facilitating the clinical application of DKI.

The authors would like to thank Caixia Hu, Els Fieremans, and Vitria Adisetiyo for their technical assistance.

The MK may be calculated from the formula

$$\overline{K}={\displaystyle \sum _{i,j,k,l=1}^{3}{A}_{ijkl}{W}_{ijkl},}$$

[A1]

where

$${A}_{ijkl}\equiv \frac{{\overline{D}}^{2}}{4\pi}{\displaystyle \underset{0}{\overset{\pi}{\int}}d\theta}{\displaystyle \underset{0}{\overset{2\pi}{\int}}d\varphi \frac{{n}_{i}{n}_{j}{n}_{k}{n}_{l}}{{({\displaystyle \sum _{p,q=1}^{3}{n}_{p}{n}_{q}{D}_{pq}})}^{2}}\text{sin}\phantom{\rule{thinmathspace}{0ex}}\theta}$$

[A2]

and *n*_{i} is the *i*th component of the diffusion direction unit vector **n**.

In a reference frame that diagonalizes **D** (see Eq. [26]), we can write

$$\overline{K}={\displaystyle \sum _{i,j,k,l=1}^{3}{A}_{ijkl}{\tilde{W}}_{ijkl},}$$

[A3]

where *A _{ijkl}* are given by

$${A}_{ijkl}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})=\frac{{\overline{D}}^{2}}{4\pi}{\displaystyle \underset{0}{\overset{\pi}{\int}}d\theta}{\displaystyle \underset{0}{\overset{2\pi}{\int}}d\varphi}\frac{{n}_{i}(\theta ,\varphi ){n}_{j}(\theta ,\varphi ){n}_{k}(\theta ,\varphi ){n}_{l}(\theta ,\varphi )\text{sin}\phantom{\rule{thinmathspace}{0ex}}\theta}{{({\lambda}_{1}{\text{sin}}^{2}\theta \phantom{\rule{thinmathspace}{0ex}}{\text{cos}}^{2}\phantom{\rule{thinmathspace}{0ex}}\varphi +{\lambda}_{2}{\text{sin}}^{2}\theta \phantom{\rule{thinmathspace}{0ex}}{\text{sin}}^{2}\varphi +{\lambda}_{3}{\text{cos}}^{2}\theta )}^{2}},$$

[A4]

and

$$\begin{array}{l}{n}_{1}(\theta ,\varphi )=\text{sin}\phantom{\rule{thinmathspace}{0ex}}\theta \phantom{\rule{thinmathspace}{0ex}}\text{cos}\varphi ,\hfill \\ {n}_{2}(\theta ,\varphi )=\text{sin}\phantom{\rule{thinmathspace}{0ex}}\theta \phantom{\rule{thinmathspace}{0ex}}\text{sin}\varphi ,\hfill \\ {n}_{3}(\theta ,\varphi )=\text{cos}\phantom{\rule{thinmathspace}{0ex}}\theta .\hfill \end{array}$$

[A5]

It can be seen from Eq. [A4] that *A _{ijkl}* is invariant with respect to permutations of its indices. Thus, we need consider only 15 independent components, of which the following nine vanish because their integrands are odd under reflection of a coordinate axis:

$${A}_{1112}={A}_{1113}={A}_{1123}={A}_{1222}={A}_{1223}={A}_{1233}={A}_{1333}={A}_{2223}={A}_{2333}=0.$$

[A6]

This leaves only 6 nonzero components to consider. We can further reduce this number by noting the symmetries

$${A}_{1111}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})={A}_{2222}({\lambda}_{2},{\lambda}_{1},{\lambda}_{3})={A}_{3333}({\lambda}_{3},{\lambda}_{2},{\lambda}_{1})$$

[A7]

and

$${A}_{2233}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})={A}_{1133}({\lambda}_{2},{\lambda}_{1},{\lambda}_{3})={A}_{1122}({\lambda}_{3},{\lambda}_{2},{\lambda}_{1}).$$

[A8]

Hence we only need to find analytic expressions for *A*_{1111} and *A*_{2233}. From these, by applying conventional integration techniques and utilizing *D̄* = (λ_{1} + λ_{2} + λ_{3})/3, the MK can be calculated as prescribed by Eq. [25], where *F*_{1}(λ_{1}, λ_{2}, λ_{3}) = *A*_{1111}(λ_{1}, λ_{2}, λ_{3}) and *F*_{2}(λ_{1}, λ_{2}, λ_{3}) = 6*A*_{2233}(λ_{1}, λ_{2}, λ_{3}).
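The closed forms can be cross-checked by evaluating Eq. [A4] with numerical quadrature. The sketch below (our own helper, with arbitrary test eigenvalues) verifies the permutation symmetry of Eq. [A7] and the isotropic values of Eq. [A12]:

```python
import numpy as np
from scipy.integrate import dblquad

def A(i, j, k, l, lam):
    """Numerically evaluate A_ijkl of Eq. [A4] in the frame that
    diagonalizes D, for eigenvalues lam = (l1, l2, l3)."""
    l1, l2, l3 = lam
    Dbar = (l1 + l2 + l3) / 3.0

    def integrand(phi, theta):
        n = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        denom = (l1 * n[0]**2 + l2 * n[1]**2 + l3 * n[2]**2) ** 2
        return n[i] * n[j] * n[k] * n[l] * np.sin(theta) / denom

    val, _ = dblquad(integrand, 0.0, np.pi, 0.0, 2.0 * np.pi)
    return Dbar**2 / (4.0 * np.pi) * val

lam = (1.7, 0.4, 0.2)   # arbitrary test eigenvalues

# Eigenvalue symmetry of Eq. [A7]: A_1111(l1,l2,l3) = A_2222(l2,l1,l3)
print(A(0, 0, 0, 0, lam), A(1, 1, 1, 1, (0.4, 1.7, 0.2)))

# Isotropic limit, Eq. [A12]: A_1111 = 1/5 and A_2233 = 1/15
print(A(0, 0, 0, 0, (1.0, 1.0, 1.0)))   # ~0.2
print(A(1, 1, 2, 2, (1.0, 1.0, 1.0)))   # ~0.0667
```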

When λ_{1} = λ_{2} or λ_{1} = λ_{3}, Eq. [27] has a removable singularity, which can be resolved with

$${A}_{1111}({\lambda}_{1},{\lambda}_{1},{\lambda}_{3})=3{A}_{2233}({\lambda}_{3},{\lambda}_{1},{\lambda}_{1})\text{\hspace{1em}or\hspace{1em}}{A}_{1111}({\lambda}_{1},{\lambda}_{2},{\lambda}_{1})=3{A}_{2233}({\lambda}_{2},{\lambda}_{1},{\lambda}_{1}).$$

[A9]

When λ_{2} = λ_{3}, Eq. [28] has a removable singularity. This can be resolved with

$${A}_{2233}({\lambda}_{1},{\lambda}_{3},{\lambda}_{3})=\frac{{({\lambda}_{1}+2{\lambda}_{3})}^{2}}{144{\lambda}_{3}^{2}{({\lambda}_{1}-{\lambda}_{3})}^{2}}\left[{\lambda}_{3}({\lambda}_{1}+2{\lambda}_{3})+{\lambda}_{1}({\lambda}_{1}-4{\lambda}_{3})\alpha \left(1-\frac{{\lambda}_{1}}{{\lambda}_{3}}\right)\right],$$

[A10]

where

$$\alpha (x)\equiv \{\begin{array}{ll}\frac{1}{\sqrt{x}}\text{arctanh}(\sqrt{x}),\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}x>0,\hfill \\ \frac{1}{\sqrt{-x}}\text{arctan}(\sqrt{-x}),\hfill & \text{if}\phantom{\rule{thinmathspace}{0ex}}x<0.\hfill \end{array}$$

[A11]
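A direct implementation of Eq. [A11] must branch on the sign of *x*; the small-|*x*| series used below is our addition (the *x* → 0 limit of α equals 1), included to avoid loss of precision near *x* = 0:

```python
import math

def alpha(x, eps=1e-8):
    """alpha(x) of Eq. [A11], used in the lambda_2 = lambda_3 limit.

    For |x| below eps, the Taylor series 1 + x/3 + ... is used; the
    x -> 0 limit equals 1 (our addition; Eq. [A11] covers x != 0 only).
    """
    if abs(x) < eps:
        return 1.0 + x / 3.0
    if x > 0:
        return math.atanh(math.sqrt(x)) / math.sqrt(x)
    return math.atan(math.sqrt(-x)) / math.sqrt(-x)

print(alpha(0.25))   # arctanh(0.5)/0.5 = ln(3) ≈ 1.0986
print(alpha(-1.0))   # arctan(1) = pi/4 ≈ 0.7854
```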

Finally, for the special case when λ_{1} = λ_{2} = λ_{3}, we simply have

$${A}_{1111}=\frac{1}{5}\text{\hspace{1em}and\hspace{1em}}{A}_{2233}=\frac{1}{15}.$$

[A12]

The radial kurtosis *K*_{⊥} is the mean diffusional kurtosis perpendicular to the direction of highest diffusion. In a reference frame that diagonalizes **D**, it is given by

$${K}_{\perp}={\displaystyle \sum _{i,j,k,l=1}^{3}{C}_{ijkl}{\tilde{W}}_{ijkl},}$$

[A13]

where

$${C}_{ijkl}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})=\frac{{\overline{D}}^{2}}{2\pi}{\displaystyle \underset{0}{\overset{2\pi}{\int}}d\varphi}\frac{{n}_{i}(\varphi ){n}_{j}(\varphi ){n}_{k}(\varphi ){n}_{l}(\varphi )}{{({\lambda}_{2}{\text{cos}}^{2}\varphi +{\lambda}_{3}{\text{sin}}^{2}\varphi )}^{2}},$$

[A14]

and

$$\begin{array}{l}{n}_{1}(\varphi )=0,\hfill \\ {n}_{2}(\varphi )=\text{cos}\varphi ,\hfill \\ {n}_{3}(\varphi )=\text{sin}\varphi .\hfill \end{array}$$

[A15]

It can be seen from Eq. [A14] that *C _{ijkl}* is invariant with respect to permutations of its indices. Thus, we need consider only 15 independent components, of which the following twelve vanish (every component with an index of 1, since *n*_{1} = 0, together with those whose integrands are odd in φ):

$${C}_{1111}={C}_{1112}={C}_{1113}={C}_{1122}={C}_{1123}={C}_{1133}={C}_{1222}={C}_{1223}={C}_{1233}={C}_{1333}={C}_{2223}={C}_{2333}=0.$$

[A16]

This leaves only 3 nonzero components to consider. We can further reduce this number by noting the symmetry

$${C}_{2222}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})={C}_{3333}({\lambda}_{1},{\lambda}_{3},{\lambda}_{2}).$$

[A17]

Hence we only need to find analytic expressions for *C*_{2222} and *C*_{2233}. From these, *K*_{⊥} can be obtained as prescribed by Eq. [32], where the integrals

$${C}_{2222}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})=\frac{{\overline{D}}^{2}}{2\pi}{\displaystyle \underset{0}{\overset{2\pi}{\int}}d\varphi \frac{{\text{cos}}^{4}\varphi}{{({\lambda}_{2}{\text{cos}}^{2}\varphi +{\lambda}_{3}{\text{sin}}^{2}\varphi )}^{2}}}$$

[A18]

and

$${C}_{2233}({\lambda}_{1},{\lambda}_{2},{\lambda}_{3})=\frac{{\overline{D}}^{2}}{2\pi}{\displaystyle \underset{0}{\overset{2\pi}{\int}}d\varphi \frac{{\text{cos}}^{2}\varphi {\text{sin}}^{2}\varphi}{{({\lambda}_{2}{\text{cos}}^{2}\varphi +{\lambda}_{3}{\text{sin}}^{2}\varphi )}^{2}}}$$

[A19]

are elementary forms which lead to Eqs. [33] and [34], and *G*_{1}(λ_{1},λ_{2},λ_{3}) = *C*_{2222}(λ_{1},λ_{2},λ_{3}) and *G*_{2}(λ_{1},λ_{2},λ_{3}) = 6*C*_{2233}(λ_{1},λ_{2},λ_{3}).

For the special case of λ_{2} = λ_{3}, Eq. [33] and Eq. [34] reduce to

$${C}_{2222}({\lambda}_{1},{\lambda}_{2},{\lambda}_{2})=\frac{{({\lambda}_{1}+2{\lambda}_{2})}^{2}}{24{\lambda}_{2}^{\phantom{\rule{thinmathspace}{0ex}}2}}$$

[A20]

and

$${C}_{2233}({\lambda}_{1},{\lambda}_{2},{\lambda}_{2})=\frac{{({\lambda}_{1}+2{\lambda}_{2})}^{2}}{72{\lambda}_{2}^{\phantom{\rule{thinmathspace}{0ex}}2}}.$$

[A21]
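Eqs. [A20] and [A21] can be confirmed against the defining integrals in Eqs. [A18] and [A19] by numerical quadrature; the helper below is our own sketch:

```python
import numpy as np
from scipy.integrate import quad

def C_numeric(powers, lam):
    """Evaluate the integrals of Eqs. [A18]/[A19] numerically.
    powers = (4, 0) gives C_2222; powers = (2, 2) gives C_2233."""
    l1, l2, l3 = lam
    Dbar = (l1 + l2 + l3) / 3.0
    p, q = powers
    f = lambda phi: (np.cos(phi)**p * np.sin(phi)**q /
                     (l2 * np.cos(phi)**2 + l3 * np.sin(phi)**2)**2)
    val, _ = quad(f, 0.0, 2.0 * np.pi)
    return Dbar**2 / (2.0 * np.pi) * val

# Degenerate radial eigenvalues, Eqs. [A20]/[A21]:
l1, l2 = 2.0, 1.0
closed_2222 = (l1 + 2 * l2)**2 / (24 * l2**2)
closed_2233 = (l1 + 2 * l2)**2 / (72 * l2**2)
print(C_numeric((4, 0), (l1, l2, l2)), closed_2222)   # both 2/3
print(C_numeric((2, 2), (l1, l2, l2)), closed_2233)   # both 2/9
```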

^{1}Grant support: NIH 1R01AG027852, 1R01EB007656, and Litwin Foundation for Alzheimer’s Research.

1. Basser PJ, Mattiello J, LeBihan D. MR diffusion tensor spectroscopy and imaging. Biophysical Journal. 1994;66:259–267.

2. Basser PJ. Inferring microstructural features and the physiological state of tissues from diffusion-weighted images. NMR in Biomedicine. 1995;8:333–344.

3. Basser PJ, Pierpaoli C. Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. Journal of Magnetic Resonance B. 1996;111:209–219.

4. Wedeen VJ, Hagmann P, Tseng W-YI, Reese TG, Weisskoff RM. Mapping complex tissue architecture with diffusion spectrum magnetic resonance imaging. Magnetic Resonance in Medicine. 2005;54(6):1377–1386.

5. Jensen JH, Helpern JA. Quantifying non-Gaussian water diffusion by means of pulsed-field-gradient MRI. Proceedings of the International Society for Magnetic Resonance in Medicine Annual Meeting; Toronto, Canada; 2003. p. 2154.

6. Jensen JH, Helpern JA, Ramani A, Lu H, Kaczynski K. Diffusional kurtosis imaging: The quantification of non-Gaussian water diffusion by means of magnetic resonance imaging. Magnetic Resonance in Medicine. 2005;53:1432–1440.

7. Lu H, Jensen JH, Ramani A, Helpern JA. Three-dimensional characterization of non-gaussian water diffusion in humans using diffusion kurtosis imaging. NMR in Biomedicine. 2006;19:236–247.

8. Jensen JH, Helpern JA. MRI quantification of non-Gaussian water diffusion by kurtosis analysis. NMR in Biomedicine. 2010. Published online: May 19, 2010. DOI: 10.1002/nbm.1518.

9. Falangola MF, Jensen JH, Babb JS, Hu C, Castellanos FX, Di Martino A, Ferris SH, Helpern JA. Age-related non-Gaussian diffusion patterns in the prefrontal brain. Journal of Magnetic Resonance Imaging. 2008;28:1345–1350.

10. Raab P, Hattingen E, Franz K, Zanella FE, Lanfermann H. Cerebral gliomas: Diffusional kurtosis imaging analysis of microstructural differences. Radiology. 2010;254:876–881.

11. Jansen JFA, Stambuk HE, Koutcher JA, Shukla-Dave A. Non-Gaussian analysis of diffusion-weighted MR imaging in head and neck squamous cell carcinoma: A feasibility study. AJNR Am J Neuroradiol. 2009. ajnr.A1919.

12. Cheung MM, Hui ES, Chan KC, Helpern JA, Qi L, Wu EX. Does diffusion kurtosis imaging lead to better neural tissue characterization? A rodent brain maturation study. NeuroImage. 2009;45:386–392.

13. Lazar M, Jensen JH, Xuan L, Helpern JA. Estimation of the orientation distribution function from diffusional kurtosis imaging. Magnetic Resonance in Medicine. 2008;60:774–781.

14. Song S-K, Sun S-W, Ramsbottom MJ, Chang C, Russell J, Cross AH. Dysmyelination revealed through MRI as increased radial (but unchanged axial) diffusion of water. NeuroImage. 2002;17(3):1429–1436.

15. Hui ES, Cheung MM, Qi L, Wu EX. Towards better MR characterization of neural tissues using directional diffusion kurtosis analysis. NeuroImage. 2008;42:122–134.

16. Koay CG, Carew JD, Alexander AL, Basser PJ, Meyerand ME. Investigation of anomalous estimates of tensor-derived quantities in diffusion tensor imaging. Magnetic Resonance in Medicine. 2006;55:930–936.

17. Liu C, Bammer R, Acar B, Moseley ME. Characterizing non-Gaussian diffusion by using generalized diffusion tensors. Magnetic Resonance in Medicine. 2004;51:924–937.

18. Wang Z, Vemuri BC, Chen Y, Mareci TH. A constrained variational principle for direct estimation and smoothing of the diffusion tensor field from complex DWI. IEEE Transactions on Medical Imaging. 2004;23:930–939.

19. Barmpoutis A, Hwang MS, Howland D, Forder JR, Vemuri BC. Regularized positive-definite fourth order tensor field estimation from DW-MRI. NeuroImage. 2009;45:S153–S162.

20. Poot DHJ, den Dekker AJ, Achten E, Verhoye M, Sijbers J. Optimal experimental design for diffusion kurtosis imaging. IEEE Transactions on Medical Imaging. 2010;29:819–829.

21. Stejskal EO, Tanner JE. Spin diffusion measurements: spin echoes in the presence of a time-dependent field gradient. Journal of Chemical Physics. 1965;42:288–292.

22. Callaghan PT, Coy A, MacGowan D, Packer KJ, Zelaya FO. Diffraction-like effects in NMR diffusion studies of fluids in porous solids. Nature. 1991;351:467–469.

23. Nocedal J, Wright SJ. Numerical Optimization. New York: Springer; 1999.

24. Carlson BC. Computing elliptic integrals by duplication. Numerische Mathematik. 1979;33:1–16.

25. Carlson BC, Notis EM. Algorithm 577: Algorithms for incomplete elliptic integrals [S21]. ACM Transactions on Mathematical Software. 1981;7(3):398–403.

26. Anderson AW. Theoretical analysis of the effects of noise on diffusion tensor imaging. Magnetic Resonance in Medicine. 2001;46(6):1174–1188.

27. Basu S, Fletcher T, Whitaker R. Rician noise removal in diffusion tensor MRI. Lecture Notes in Computer Science, Volume 4190. Berlin: Springer; 2006. pp. 117–125.
