IEEE Trans Med Imaging. Author manuscript; available in PMC 2010 April 28.

PMCID: PMC2860879

NIHMSID: NIHMS109406

Frank Dennerlein, Student Member, IEEE,^{*} Frédéric Noo, Member, IEEE, Harald Schöndube, Student Member, IEEE, Günter Lauritsch, and Joachim Hornegger

Frank Dennerlein, Department of Radiology, University of Utah, 729 Arapeen Drive, Salt Lake City, UT 84102 USA;



Abstract

In this paper, we introduce a new algorithm for 3-D image reconstruction from cone-beam (CB) projections acquired along a partial circular scan. Our algorithm is based on a novel, exact factorization of the initial 3-D reconstruction problem into a set of independent 2-D inversion problems, each of which corresponds to finding the object density on one single plane. Each such 2-D inversion problem is solved numerically using a projected steepest descent iteration scheme. We present a numerical evaluation of our factorization algorithm using computer-simulated CB data, with and without noise, of the FORBILD head phantom and of a disk phantom. First, we quantitatively study the impact of the reconstruction parameters on algorithm performance. Next, we present reconstruction results for visual assessment of the achievable image quality and provide, for comparison, results obtained with two other state-of-the-art reconstruction algorithms for the circular short-scan.

I. INTRODUCTION

Over the past few years, cone-beam (CB) X-ray computed tomography (CT) has become a powerful imaging technology in the clinical arena. Among all possible scanning modes, circular motion of the X-ray scanning device relative to the interrogated object remains one of the most attractive. This is because a circular scan is easy to implement, mechanically robust, and allows fast data acquisition.

Unfortunately, with the circular acquisition geometry, Tuy’s data sufficiency condition for exact and stable reconstruction [1], [2] is not satisfied for the points that lie outside the plane of the X-ray source trajectory. This means that reconstruction of a 3-D volume of interest (VOI) is an ill-posed problem almost everywhere [2]. Therefore, a numerically stable recovery of the object density in the entire VOI from noisy data can only be achieved by sacrificing some accuracy in the image results. A reconstruction algorithm can only be useful for practical applications if it performs an efficacious trade-off between numerical stability and accuracy.

For the closed circular trajectory (full-scan), an attractive algorithm is the approach suggested in 1984 by Feldkamp *et al.* (FDK) [3]. However, not every CT device can acquire data along a full-scan. In particular, the mechanical design of C-arm systems typically prevents data acquisition over more than 330°. Moreover, for various physical reasons, including dose and scanning time, most scanning protocols on C-arm systems are restricted to a short-scan of 210°–240°. Hence, the full-scan FDK method cannot be applied to C-arm systems.

Finding an efficacious algorithm for the short-scan geometry turns out to be a challenging task. To give one example, a popular algorithm that is used by many state-of-the-art C-arm systems is a modification of FDK (short-scan FDK) [4]. The short-scan FDK method uses ray-based weighting schemes [5], [6] to approximately equalize redundancies in the acquired CB data. These weighting schemes, however, involve a significant geometric approximation: they neglect the cone angle of the acquired CB data samples. Data points that are in general distinct are therefore treated as redundant, and this approximation typically yields a significant amount of CB artifacts in the reconstruction results.

Within the last few years, various methods have been suggested to yield improvements in image quality when reconstructing from short-scan CB data. Some of them use the reconstruction obtained with short-scan FDK as an intermediate result and subsequently aim at correcting, in postprocessing steps, or using iteration schemes, the CB artifacts occurring in the FDK-based reconstruction; see e.g., [7]–[9]. Other methods are based on original reconstruction formulae [10]–[16] that allow less severe geometric approximations, if any, and a more precise handling of data redundancies, so that the level of CB artifacts is reduced compared to short-scan FDK.

In this paper, we present a novel, efficacious reconstruction algorithm for the short-scan circular trajectory. This algorithm is based on a theoretically-exact factorization of the 3-D short-scan CB reconstruction problem into a set of independent 2-D inversion problems, each of which corresponds to finding the object density on one plane. The algorithm uses an iterative method to solve the 2-D inversion problems on a one-by-one basis and achieves reconstruction of the entire VOI by combining the results of all considered planes. Two important properties of our factorization approach, which distinguish it from the other previously-mentioned methods, are its ability to enforce all reconstructed values to be nonnegative and to perform reconstruction without involving geometric approximations. These properties make the factorization approach similar to 3-D iterative reconstruction approaches. The latter are, however, computationally more demanding; the factorization approach, in contrast, is more efficient, as it involves only 2-D iterations.

This paper is organized as follows. Section II presents the data acquisition geometry and introduces the notation used throughout the rest of this paper. The factorization theory underlying our reconstruction approach is described in Section III. Section IV explains the numerical scheme used for finding solutions to the 2-D problems and describes a constrained iterative inversion algorithm. In Section V, we present a numerical evaluation of the factorization reconstruction approach using computer-simulated CB data and a discussion of the results. The main body of this paper ends with Section VI that provides further, general discussions and conclusions about our research.

II. GEOMETRY AND NOTATION

This paper follows the standard convention of referring to a point in space using the vector **x** = (*x, y, z*)^{T}, where *x, y* and *z* are the coordinates of a Cartesian world coordinate system attached to the patient. Here and in the following, the symbol ^{T} denotes the transpose of a matrix or vector.

Let the function *f*(**x**) denote the spatial distribution of the X-ray linear attenuation coefficient of the object of interest. The region occupied by this object is denoted with Ω_{f}, and thus *f*(**x**) = 0 for all **x** outside Ω_{f}.

While CB data is acquired, the X-ray source-detector assembly rotates around the interrogated object so that the focal spot of the source describes a circular trajectory. This trajectory is represented with the function **a**(λ) = (*R* cos λ, *R* sin λ, 0)^{T}, where *R* denotes the radius of the circular scan. The parameter λ varies in [λ_{min}, λ_{max}] and corresponds to the polar angle of the source. Since this paper focuses on short-scans, we require λ_{max} < λ_{min} + 2π. Moreover, the geometric setting is such that the curve **a**(λ) lies completely outside the convex hull of Ω_{f}. It is here assumed, without restriction of generality, that the rotation axis of the scanning device coincides with the *z*-axis and that the plane of the circular scan (PCS) is at *z* = 0.

We assume that the X-ray detector is flat and at a fixed distance *D* from the source during the entire scan. For a given λ, the detector plane is spanned by the vectors **e**_{u}(λ) = (−sin λ, cos λ, 0)^{T} and **e**_{v} = (0, 0, 1)^{T}, and is orthogonal to the vector **e**_{w}(λ) = (cos λ, sin λ, 0)^{T}, which points from the origin of the world coordinate system toward the source position **a**(λ).

CB data acquisition yields integrals of the object density along rays, with each ray connecting a point on the source trajectory to a point on the detector. Following the introduced notation and using the unit vector **α** to denote ray directions, the CB data function can be written as

$$g(\lambda ,\mathbf{\alpha})={\displaystyle \underset{0}{\overset{\infty}{\int}}\phantom{\rule{thinmathspace}{0ex}}dt\phantom{\rule{thinmathspace}{0ex}}}f(\mathbf{a}(\lambda )+t\mathbf{\alpha})$$

(1)

or equivalently, to better emphasize the detector geometry, as

$${g}_{m}(\lambda ,u,v)=g(\lambda ,\mathbf{\alpha}(\lambda ,u,v)).$$

(2)

In this equation, the function

$$\mathbf{\alpha}(\lambda ,u,v)=\frac{u{\mathbf{e}}_{u}(\lambda )+v{\mathbf{e}}_{v}(\lambda )-D{\mathbf{e}}_{w}(\lambda )}{\sqrt{{u}^{2}+{v}^{2}+{D}^{2}}}$$

(3)

gives the direction of the ray diverging from **a**(λ) and intersecting the detector plane at coordinates (*u*, *v*).
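As a concrete illustration of this geometry, the following sketch evaluates the unit ray direction of (3); the definitions of **e**_{v} and **e**_{w} follow the standard flat-detector convention assumed above (**e**_{v} along the *z* axis, **e**_{w} pointing from the origin toward the source), and the function names are illustrative.

```python
import numpy as np

def detector_frame(lam):
    """Orthonormal detector frame at source angle lam (radians):
    e_u spans the detector rows, e_v the detector columns (parallel to
    the z axis), and e_w points from the origin toward the source."""
    e_u = np.array([-np.sin(lam), np.cos(lam), 0.0])
    e_v = np.array([0.0, 0.0, 1.0])
    e_w = np.array([np.cos(lam), np.sin(lam), 0.0])
    return e_u, e_v, e_w

def ray_direction(lam, u, v, D):
    """Unit vector alpha(lam, u, v) of eq. (3)."""
    e_u, e_v, e_w = detector_frame(lam)
    alpha = u * e_u + v * e_v - D * e_w
    return alpha / np.sqrt(u**2 + v**2 + D**2)
```

At *u* = *v* = 0, the ray points from the source straight toward the origin, i.e., along −**e**_{w}.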

III. RECONSTRUCTION THEORY

The objective of CB reconstruction can now be formally described as recovering the object function *f*(**x**) for all **x** ∈ Ω_{f} from the CB data function *g_{m}*(λ, *u*, *v*) with λ ∈ [λ_{min}, λ_{max}].

We target reconstruction within planes that are parallel to the *z*-axis and intersect the source trajectory at two locations. Let $\widehat{\mathcal{P}}$ be such a plane of interest. We use λ_{1} and λ_{2} to specify the locations where $\widehat{\mathcal{P}}$ hits the source trajectory. By construction, **a**(λ_{1}) and **a**(λ_{2}) belong to $\widehat{\mathcal{P}}$ and λ_{min} ≤ λ_{1} < λ_{2} ≤ λ_{max}.

In the following, several entities will be introduced that are closely related to $\widehat{\mathcal{P}}$. Some of these entities correspond to geometric parameters that further describe $\widehat{\mathcal{P}}$. Others correspond to functions that are defined on $\widehat{\mathcal{P}}$ and take an essential role in the reconstruction algorithm. To keep the notation clear, all these entities will from now on be described with symbols that carry a hat.

A selection of $\widehat{\mathcal{P}}$ implicitly imposes another coordinate system for the image domain. This system is described with the orthonormal system of vectors **ê**_{s} = (cos φ̂, sin φ̂, 0)^{T}, **ê**_{t} = (−sin φ̂, cos φ̂, 0)^{T} and **ê**_{z} = (0, 0, 1)^{T}, where the angle φ̂ gives the direction of the normal of $\widehat{\mathcal{P}}$ within the PCS.

Illustration of geometric entities on the plane of interest $\widehat{\mathcal{P}}$, which is orthogonal to the PCS and intersects the source trajectory at **a**(λ_{1}) and **a**(λ_{2}).

We use *ŝ* to denote the (signed) distance from $\widehat{\mathcal{P}}$ to the origin **x** = (0, 0, 0)^{T}; *ŝ* is measured in the direction of **ê**_{s} and is such that *ŝ* = **ê**_{s} · **a**(λ_{1}). Hence, a point **x** belongs to $\widehat{\mathcal{P}}$ if **x** · **ê**_{s} = *ŝ*.

To specify locations on $\widehat{\mathcal{P}}$, we use Cartesian coordinates *t* and *z*, measured along the directions **ê**_{t} and **ê**_{z}, respectively, so that any point on $\widehat{\mathcal{P}}$ can be written as

$$\widehat{\mathbf{x}}(t,z)=\widehat{s}{\widehat{\mathbf{e}}}_{s}+t{\widehat{\mathbf{e}}}_{t}+z{\widehat{\mathbf{e}}}_{z}.$$

(4)

The object density on $\widehat{\mathcal{P}}$, i.e., the 2-D function

$$\widehat{f}(t,z)=f(\widehat{\mathbf{x}}(t,z))$$

(5)

is now recovered using a two-step scheme. In step one, an intermediate function $\widehat{b}$ is calculated for points on $\widehat{\mathcal{P}}$ as

$$\widehat{b}(t,z)={\displaystyle \underset{{\lambda}_{1}}{\overset{{\lambda}_{2}}{\int}}d\lambda}\frac{1}{\Vert \widehat{\mathbf{x}}(t,z)-\mathbf{a}(\lambda )\Vert}{g}_{F}(\lambda ,{u}^{*},{v}^{*})$$

(6)

where *g _{F}* is a partial derivative of CB data, namely

$${g}_{F}(\lambda ,u,v)=\frac{\partial}{\partial q}g(q,\mathbf{\alpha}(\lambda ,u,v{))|}_{q=\lambda}$$

(7)

and where *u** and *v** denote the detector coordinates of the CB projection of **$\widehat{x}$** (*t, z*) at a given λ.

In words, the 2-D intermediate function $\widehat{b}(t, z)$ is obtained by first differentiating the CB data with respect to the parameter of the source trajectory at fixed ray direction. The differentiation result is then backprojected onto $\widehat{\mathcal{P}}$ while using only the data over λ ∈ [λ_{1}, λ_{2}]. These algorithmic steps resemble those of classical FBP methods. However, an important difference lies in the filtering step, which, according to (7), corresponds to a local operation. Consequently, to accurately obtain $\widehat{b}$ at coordinates (*t, z*)^{T}, it is sufficient to know, for each λ ∈ [λ_{1}, λ_{2}], the function *g* on the rays that pass through a neighborhood of **$\widehat{x}$**(*t, z*). The union of all points (*t, z*)^{T} for which *g* satisfies this sufficiency condition defines a region, which we call $\Omega_{\widehat{b}}$.
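The two algorithmic steps above can be sketched numerically as follows. This is an illustrative sketch, not the authors' implementation: it assumes uniform λ sampling with trapezoidal weights for the integral in (6), precomputed derivative data, and nearest-neighbour detector interpolation; all names and parameters are hypothetical.

```python
import numpy as np

def backproject_to_plane(gF, lam, du, dv, u0, v0, R, D, points):
    """Discrete counterpart of eq. (6): backproject the derivative data
    gF onto a set of points of one plane.

    gF     : array (n_lam, n_v, n_u), samples of g_F(lam, v, u)
    lam    : array of n_lam uniformly spaced source angles in [lam1, lam2]
    du, dv, u0, v0 : detector sampling steps and first-sample coordinates
    points : array (n_pts, 3), world coordinates x_hat(t, z) on the plane
    """
    dlam = lam[1] - lam[0]
    b = np.zeros(len(points))
    w_trap = np.ones(len(lam))
    w_trap[0] = w_trap[-1] = 0.5               # trapezoidal rule weights
    for k, lk in enumerate(lam):
        e_u = np.array([-np.sin(lk), np.cos(lk), 0.0])
        e_w = np.array([np.cos(lk), np.sin(lk), 0.0])
        a = R * e_w                            # source position a(lam)
        denom = R - points @ e_w               # R - x . e_w  (positive inside)
        u_star = D * (points @ e_u) / denom    # cone-beam projection of x
        v_star = D * points[:, 2] / denom      # onto the flat detector
        iu = np.clip(np.rint((u_star - u0) / du).astype(int), 0, gF.shape[2] - 1)
        iv = np.clip(np.rint((v_star - v0) / dv).astype(int), 0, gF.shape[1] - 1)
        dist = np.linalg.norm(points - a, axis=1)   # ||x_hat - a(lam)||
        b += w_trap[k] * dlam * gF[k, iv, iu] / dist
    return b
```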

Actual image reconstruction is achieved in step two, by making use of a fundamental relation linking the intermediate function to the sought object density function. This fundamental relation is derived in its general form in [17], using an integration by parts that avoids the derivative with respect to λ in (7) but brings in boundary terms. Considering our geometric assumptions and using the coordinate system introduced on $\widehat{\mathcal{P}}$, the relation can be expressed as

$$\frac{\widehat{b}(t,z)}{\pi}={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}d\tau}\phantom{\rule{thinmathspace}{0ex}}{h}_{H}(t-\tau )\phantom{\rule{thinmathspace}{0ex}}\left(\widehat{f}(\tau ,{z}_{1}(\tau ))+\widehat{f}(\tau ,{z}_{2}(\tau ))\right)$$

(8)

with *h_{H}*(*t*) = 1/(π*t*) denoting the kernel of the Hilbert transform, and with *z*_{1}(τ) and *z*_{2}(τ) denoting the *z* coordinates, at abscissa τ, of the two lines within $\widehat{\mathcal{P}}$ that connect the point (*t*, *z*)^{T} to **a**(λ_{1}) and **a**(λ_{2}), respectively; see Fig. 2.

Illustration of the functions $\widehat{b}$ and $\widehat{f}$ on $\widehat{\mathcal{P}}$. (Top) An example realization of the 2-D intermediate function $\widehat{b}$ in the region $\Omega_{\widehat{b}}$ (delineated with a dashed line). (Bottom) The **...**

To understand (8), picture its left-hand side (LHS) as the value of $\widehat{b}$ at one fixed point in $\widehat{\mathcal{P}}$. Then, draw the two lines, $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$, that connect this point to the source positions **a**(λ_{1}) and **a**(λ_{2}). These lines lie within $\widehat{\mathcal{P}}$, as shown in Fig. 2. The right-hand side (RHS) of (8) is the sum of the outcomes, at the considered point, of the convolutions of $\widehat{f}$ with the kernel *h_{H}* along these two lines. Altogether, (8) thus corresponds to an integral equation that relates one value of $\widehat{b}$ to many values of $\widehat{f}$. Both $\widehat{b}$ and $\widehat{f}$ are 2-D functions; consequently, image reconstruction on $\widehat{\mathcal{P}}$ corresponds to a 2-D problem, namely to solving (8) for $\widehat{f}$.
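Since **a**(λ_{1}) and **a**(λ_{2}) lie in the PCS, their in-plane coordinates are of the form (*t*_{src}, 0), and the height of each line at abscissa τ follows from similar triangles. A one-line sketch (a hypothetical helper, not taken from the paper):

```python
def z_on_line(tau, t, z, t_src):
    """Height z_i(tau) of the in-plane line through the fixed point
    (t, z) and the source location (t_src, 0), as used in eq. (8)."""
    return z * (tau - t_src) / (t - t_src)
```

By construction, the line passes through height *z* at τ = *t* and through height 0 at τ = *t*_{src}.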

We note that the feasibility of finding this solution depends on the support of $\widehat{f}$, which we denote as $\Omega_{\widehat{f}}$, and on the region $\Omega_{\widehat{b}}$ in which the intermediate function is known.

The previous section described how to recover the object density in one plane of interest. From there, reconstruction in 3-D can be accomplished in a straightforward manner.

Consider all planes on which reconstruction is possible according to Section III-B and carry out the required algorithmic steps to obtain $\widehat{f}$ on each such plane. Combining these planar results using interpolation schemes then yields the object density for any 3-D VOI that lies in the union of the considered planes. Hence, 3-D reconstruction is accomplished by finding solutions to a set of 2-D problems on a one-by-one basis.

From a practical viewpoint, however, the consideration of all possible planes is neither feasible nor attractive, since a lot of redundant information would be obtained. Furthermore, the issue of combining the planar results can be handled more efficiently if the considered planes do not intersect each other in the object region. For these reasons, we focus here on planes that are parallel to each other. We note that this choice may not be optimal in terms of noise, but it highly simplifies the numerical implementation. Since each planar reconstruction is achieved using Cartesian coordinates *t* and *z*, and since the considered planes are parallel to each other, a direct assembling of the planar reconstructions immediately yields the object density function *f* on a 3-D Cartesian grid, as typically desired in 3-D image reconstruction. More specifically, from now on, we focus on a one-parametric family of planes with normal vector **ê**_{s} = (cos φ̂, sin φ̂, 0)^{T}, defined as

$${\mathcal{P}}_{p}=\left\{\mathbf{x}\phantom{\rule{thinmathspace}{0ex}}|\phantom{\rule{thinmathspace}{0ex}}\mathbf{x}\cdot {\widehat{\mathbf{e}}}_{s}={s}_{\text{start}}+p\mathrm{\Delta}s,\phantom{\rule{thinmathspace}{0ex}}p=0,1,\cdots ,{N}_{p}\right\}$$

(9)

where *N_{p}* defines the number of planes, Δ*s* > 0 is the spacing between adjacent planes, and *s*_{start} specifies the signed distance of the first plane from the origin.
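For illustration, the plane offsets of (9) can be generated as follows; the numeric values in the test mirror the evaluation setup later reported in Section V and are otherwise arbitrary.

```python
import numpy as np

def plane_offsets(s_start, delta_s, n_p):
    """Signed plane distances s_start + p * delta_s, p = 0..N_p, of eq. (9)."""
    return s_start + delta_s * np.arange(n_p + 1)
```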

IV. NUMERICAL ALGORITHM

In this section, we suggest a numerical algorithm to achieve image reconstruction on a plane $\widehat{\mathcal{P}}$ according to the theory of Section III-B. The basic principle of this algorithm is to first compute a discrete representation of $\widehat{b}$ and to accurately transfer the initial, continuous 2-D inversion problem (8) into a discrete setting. Next, an iterative scheme is used to compute an estimate $\widehat{f}^{e}$ for the planar object density $\widehat{f}$ in (8).

In the first part of the algorithm, the intermediate function $\widehat{b}$ on $\widehat{\mathcal{P}}$ is computed by following the steps of Section III-B. The required CB data differentiation according to (7) is implemented using the robust method suggested in [19]. This method involves a differentiation parameter ϵ, which controls the trade-off between resolution and noise in the results [19]. Throughout this paper, ϵ is fixed to the value ϵ = 0.01, so that we expect $\widehat{b}$ to be of high spatial resolution. The integration in λ over the backprojection interval in (6) is replaced by a first-order approximation, i.e., a summation of the differentiated data at the known samples in λ according to the trapezoidal rule. For efficiency reasons, the intermediate function is immediately computed for the entire set of planes $\mathcal{P}_{p}$ at once in our implementation, yielding, for each of these planes, an approximation $\widehat{b}^{e}$ of the true function $\widehat{b}$. The accuracy of $\widehat{b}^{e}$ depends on the noise in the acquired CB data and the discretization effects occurring during computation. For the remainder of Section IV, we focus again on just one plane $\widehat{\mathcal{P}}$.

The functions $\widehat{f}$ and $\widehat{b}$ on the plane of interest are both discretized over their respective regions $\Omega_{\widehat{f}}$ and $\Omega_{\widehat{b}}$, using sampling distances Δ*t* and Δ*z* and a grid for $\widehat{b}$ that is staggered by Δ*t*/2 in the *t* direction:

$$\begin{array}{c}{\widehat{f}}_{i,j}=\widehat{f}(i\mathrm{\Delta}t,j\mathrm{\Delta}z)\hfill \\ {\widehat{b}}_{i,j}=\widehat{b}(i\mathrm{\Delta}t+\mathrm{\Delta}t/2,j\mathrm{\Delta}z).\hfill \end{array}$$

(10)

We discretize (8) in the following form:

$$\begin{array}{cc}{\widehat{b}}_{i,j}^{\sigma}=\hfill & \pi \mathrm{\Delta}t{\displaystyle \sum _{k={i}_{\text{min}}}^{{i}_{\text{max}}}{h}_{H}^{\sigma}}\left(\left(i-k+\frac{1}{2}\right)\phantom{\rule{thinmathspace}{0ex}}\mathrm{\Delta}t\right)\hfill \\ \hfill & \text{\hspace{1em}\hspace{1em}\hspace{1em}}\times (\widehat{f}(k\mathrm{\Delta}t,{z}_{1}(k\mathrm{\Delta}t))+\widehat{f}(k\mathrm{\Delta}t,{z}_{2}(k\mathrm{\Delta}t)))\hfill \end{array}$$

(11)

where the superscript σ denotes a free parameter that allows us to modify the smoothness of the applied discretization scheme. The integer *k* covers the interval [*i*_{min}, *i*_{max}] that is bounded by the minimum and maximum value of index *i* among all samples $\widehat{f}_{i,j}$ in $\Omega_{\widehat{f}}$.
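One row of the discrete model (11) can be assembled as sketched below. The linear interpolation in *z* anticipates the discussion around Fig. 4; the row-major flattening of the (k, j) samples and all names are assumptions of this sketch, not the paper's.

```python
import numpy as np

def forward_row(i, t_idx, dt, z1, z2, z_grid, hH):
    """One row of the discrete model (11): maps the samples f_{k,j},
    flattened row-major over (k, j), to the smoothed datum b^sigma_{i,*}.

    t_idx  : array of the k indices covering [i_min, i_max]
    z1, z2 : callables giving z_1(t) and z_2(t) on the plane
    z_grid : uniform grid of the j samples in z
    hH     : convolution kernel, e.g. the band-limited Hilbert kernel
    """
    nz = len(z_grid)
    dz = z_grid[1] - z_grid[0]
    row = np.zeros(len(t_idx) * nz)
    for k in t_idx:
        w = np.pi * dt * hH((i - k + 0.5) * dt)     # kernel weight of (11)
        for zl in (z1(k * dt), z2(k * dt)):
            j = (zl - z_grid[0]) / dz               # fractional z index
            j0 = int(np.clip(np.floor(j), 0, nz - 2))
            frac = np.clip(j - j0, 0.0, 1.0)
            # distribute the weight over the two vertical neighbours
            row[(k - t_idx[0]) * nz + j0] += w * (1.0 - frac)
            row[(k - t_idx[0]) * nz + j0 + 1] += w * frac
    return row
```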

The LHS of (11) corresponds to a discrete 1-D convolution of the known samples of $\widehat{b}$, namely

$${\widehat{b}}_{i,j}^{\sigma}=U(\sigma ){\displaystyle \sum _{c=-2}^{2}\text{exp}}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{-{c}^{2}}{2{\sigma}^{2}}\right)\phantom{\rule{thinmathspace}{0ex}}{\widehat{b}}_{i-c,j}.$$

(12)

Hence, ${\widehat{b}}_{i,j}^{\sigma}$ is a smoothed version of $\widehat{b}_{i,j}$, obtained with a Gaussian kernel of standard deviation σΔ*t*, where the normalization factor *U*(σ) is given by

$$U(\sigma )={\left({\displaystyle \sum _{c=-2}^{2}\text{exp}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{-{c}^{2}}{2{\sigma}^{2}}\right)}\right)}^{-1}.$$

(13)
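The weights of (12) and (13) and the resulting smoothing can be sketched as follows; zero-padding at the borders is an assumption of this sketch, since the boundary handling is not specified here.

```python
import numpy as np

def gauss_weights(sigma):
    """Normalized 5-tap weights U(sigma) * exp(-c^2 / (2 sigma^2)),
    c = -2..2, of eqs. (12) and (13)."""
    c = np.arange(-2, 3)
    w = np.exp(-c.astype(float)**2 / (2.0 * sigma**2))
    return w / w.sum()

def smooth_b(b, sigma):
    """Apply the smoothing of eq. (12) along the i direction of the
    sample array b[i, j]; borders are zero-padded."""
    w = gauss_weights(sigma)
    return np.apply_along_axis(lambda col: np.convolve(col, w, mode="same"), 0, b)
```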

Low-pass filtering of the function $\widehat{b}_{i,j}$ is used to decrease discretization errors in the 2-D intermediate function, and thereby provide improved input data for the planar reconstructions. While reducing discretization errors, this low-pass filtering also reduces resolution. We counteract this resolution loss by including a similar Gaussian filtering in the RHS of (11). More precisely, the convolution kernel ${h}_{H}^{\sigma}$ in (11) is defined as

$${h}_{H}^{\sigma}(t)=U(\sigma )\phantom{\rule{thinmathspace}{0ex}}{\displaystyle \sum _{c=-2}^{2}\text{exp}\phantom{\rule{thinmathspace}{0ex}}\left(\frac{-{c}^{2}}{2{\sigma}^{2}}\right)}\phantom{\rule{thinmathspace}{0ex}}{h}_{H}^{b}(t-c\mathrm{\Delta}t)$$

(14)

with *U*(σ) given by (13). This means that ${h}_{H}^{\sigma}(t)$ is a Gaussian-smoothed version of the band-limited Hilbert kernel ${h}_{H}^{b}(t)$ that is based on rectangular apodization and a cutoff frequency of 1/(2Δ*t*). The expression of ${h}_{H}^{b}(t)$ is

$${h}_{H}^{b}(t)=\frac{1}{\pi t}\left(1-\text{cos}\frac{\pi t}{\mathrm{\Delta}t}\right)$$

(15)

and we note that according to (11) and (14), this function is only evaluated at coordinates for which the cosine term vanishes. This avoids oscillations in the discretized filter kernel and contributes to better resolution and fewer aliasing artifacts in the reconstruction results [20].
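Equations (14) and (15) transcribe directly into code, with the *t* = 0 singularity handled by its limit value 0 (the function names are illustrative):

```python
import numpy as np

def hH_b(t, dt):
    """Band-limited Hilbert kernel of eq. (15), cutoff 1/(2*dt)."""
    if abs(t) < 1e-12 * dt:
        return 0.0                      # limit of (1 - cos(pi t/dt))/(pi t)
    return (1.0 - np.cos(np.pi * t / dt)) / (np.pi * t)

def hH_sigma(t, dt, sigma):
    """Gaussian-smoothed kernel of eq. (14), using the normalized
    5-tap weights of eq. (13)."""
    c = np.arange(-2, 3)
    w = np.exp(-c.astype(float)**2 / (2.0 * sigma**2))
    w /= w.sum()
    return float(sum(wc * hH_b(t - cc * dt, dt) for wc, cc in zip(w, c)))
```

At the half-offset arguments occurring in (11), the cosine term vanishes and hH_b reduces to 1/(π*t*).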

Going back to relation (11), we finally note that the function $\widehat{f}$ on its RHS is in general evaluated at points that are not on the sampling grid for $\widehat{f}$, as illustrated in Fig. 4. In order to obtain, for example, the value $\widehat{f}(k\Delta t, z_{1}(k\Delta t))$, we use a linear interpolation in *z* between the two samples $\widehat{f}_{k,j}$ and $\widehat{f}_{k,j+1}$ that vertically enclose this point.

The applied discretization scheme on $\widehat{\mathcal{P}}$: Δ*z* and Δ*t* define the density of the sampling pattern. The small, dashed circles show all *N* sampling positions considered for $\widehat{f}_{i,j}$, while the indices *i*_{min} **...**

From now on, the discretized 2-D functions $\widehat{f}_{i,j}$ and ${\widehat{b}}_{i,j}^{\sigma}$ will be represented using the vectors $\widehat{\mathbf{f}}$ and ${\widehat{\mathbf{b}}}^{\sigma}$, respectively, each collecting all samples within the respective region in a fixed order, so that (11) can be stated as

$${\widehat{b}}_{r}^{\sigma}={\mathbf{M}}_{r}^{\sigma}\widehat{\mathbf{f}}$$

(16)

where ${\widehat{b}}_{r}^{\sigma}$ denotes element *r* of the vector ${\widehat{\mathbf{b}}}^{\sigma}$ and ${\mathbf{M}}_{r}^{\sigma}$ denotes row *r* of a matrix **M**^{σ} ∈ **IR**^{N × N}. The components of ${\mathbf{M}}_{r}^{\sigma}$ include both the values of the filter kernel ${h}_{H}^{\sigma}(t)$ and the interpolation weights for $\widehat{f}$, as occurring in (11). A finite-dimensional approximation of the relation between all samples of ${\widehat{\mathbf{b}}}^{\sigma}$ and $\widehat{\mathbf{f}}$ on $\widehat{\mathcal{P}}$ is thus given by the linear system of equations

$${\widehat{\mathbf{b}}}^{\sigma}={\mathbf{M}}^{\sigma}\widehat{\mathbf{f}}.$$

(17)

Image reconstruction on $\widehat{\mathcal{P}}$ can now be accomplished by solving (17) for $\widehat{\mathbf{f}}$.

In the rest of this paper, the emphasis on the smoothing parameter σ is dropped to simplify the notation. So, from now on, $\widehat{\mathbf{b}}$ will be used for ${\widehat{\mathbf{b}}}^{\sigma}$ and **M** will be used for **M**^{σ}.

The numerical stability of the reconstruction problem on $\widehat{\mathcal{P}}$ can be investigated by studying the singular values of the system matrix **M** in (17). Using σ = 0 and the acquisition geometry parameters of Table I, we composed **M** for the plane defined by *p* = 0 and *s*_{start} = 0 mm, then computed its singular values; see Fig. 5. Note that we used the same number of samples, namely 7528, for both $\widehat{f}$ and $\widehat{b}$. Hence, **M** was square. Note also that the samples were distributed in a region of shape similar to that of $\Omega_{\widehat{f}}$ in Fig. 4, using Δ*t* = Δ*z* = 2 mm.

Singular values of an example matrix **M** (see text for the geometry parameters), obtained using a discretization of Δ*t* = Δ*z* = 2 mm.
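The flavor of this conditioning study can be reproduced on a toy Toeplitz matrix built from a Hilbert-type kernel sampled at half-integer offsets; this is an illustrative stand-in, not the actual system matrix **M** of the paper.

```python
import numpy as np

# Toy stand-in for the conditioning study: a Toeplitz matrix built from
# the kernel 1/(pi t) evaluated at half-integer offsets.
n = 100
idx = np.arange(n)
M = 1.0 / (np.pi * (idx[:, None] - idx[None, :] + 0.5))
s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
cond = s[0] / s[-1]                      # condition number of the toy matrix
```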

We observe from Fig. 5 that **M** is nonsingular and that its condition number is approximately 33, and therefore unexpectedly small. We attribute this phenomenon to the band-limitation modeled in **M**, which was necessarily introduced when discretizing the continuous problem. The fast decay of the singular values in the last 10% of the graph in Fig. 5, however, is an indication that solving (17) for $\widehat{\mathbf{f}}$ truly corresponds to an ill-conditioned problem. This ill-conditioning is problematic in practical scenarios, where we only have access to an approximation ${\widehat{\mathbf{b}}}^{e}$ of the 2-D intermediate function. To obtain a meaningful reconstruction from contaminated data, we focus on finding a regularized solution to a constrained least-squares formulation of (17). Two constraints are considered.

- **CRAY:** Knowledge about integrals of $\widehat{f}$ along rays that diverge from **a**(λ_{1}) and **a**(λ_{2}) and that are entirely contained in $\widehat{\mathcal{P}}$. These integrals are part of the acquired CB data *g_{m}*.
- **CPOS:** Knowledge that $\widehat{f}$ is a nonnegative function.

Hence, we set up the constrained optimization problem as

$$\begin{array}{c}\text{minimize}\phantom{\rule{thinmathspace}{0ex}}O(\widehat{\mathbf{f}})\phantom{\rule{thinmathspace}{0ex}}\u2254{\Vert \mathbf{M}\widehat{\mathbf{f}}-{\widehat{\mathbf{b}}}^{e}\Vert}_{2}^{2}+{\alpha}^{2}{\Vert \mathbf{C}\widehat{\mathbf{f}}-{\widehat{\mathbf{c}}}^{e}\Vert}_{2}^{2}\hfill \\ \hfill \text{subject}\phantom{\rule{thinmathspace}{0ex}}\text{to}\phantom{\rule{thinmathspace}{0ex}}\widehat{\mathbf{f}}\ge \mathbf{0.}\end{array}$$

(18)

The first term of *O*($\widehat{\mathbf{f}}$) accounts for the congruence between the reconstruction and the given intermediate function, while the second term is a penalty term that incorporates the linear constraint CRAY. Each row of the matrix **C** contains an equation to compute a ray integral from the discrete $\widehat{\mathbf{f}}$, while the vector **ĉ**^{e} contains the corresponding integral values, i.e., samples of the acquired CB data *g_{m}*.

To simplify the notation for the remainder of this section, the equivalent form of the objective function

$$\begin{array}{cc}O(\widehat{\mathbf{f}})& ={\Vert \mathbf{A}\widehat{\mathbf{f}}-{\widehat{\mathbf{d}}}^{e}\Vert}_{2}^{2}\hfill \\ & ={\widehat{\mathbf{f}}}^{\mathrm{T}}{\mathbf{A}}^{\mathrm{T}}\mathbf{A}\widehat{\mathbf{f}}-2{\widehat{\mathbf{f}}}^{\mathrm{T}}{\mathbf{A}}^{\mathrm{T}}{\widehat{\mathbf{d}}}^{e}+{\widehat{\mathbf{d}}}^{e\mathrm{T}}{\widehat{\mathbf{d}}}^{e}\end{array}$$

(19)

is used, where

$$\mathbf{A}=\left[\begin{array}{c}\mathbf{M}\\ \alpha \mathbf{C}\end{array}\right]\phantom{\rule{thinmathspace}{0ex}}\text{and}\phantom{\rule{thinmathspace}{0ex}}{\widehat{\mathbf{d}}}^{e}=\left[\begin{array}{c}{\widehat{\mathbf{b}}}^{e}\\ \alpha {\widehat{\mathbf{c}}}^{e}\end{array}\right]\phantom{\rule{thinmathspace}{0ex}}.$$

(20)
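Assembling the stacked system of (20) is a direct concatenation; dense arrays are used here only for brevity, and a sparse representation would be preferable at realistic problem sizes.

```python
import numpy as np

def stack_system(M, C, b_e, c_e, alpha):
    """Assemble A and d^e of eq. (20) from the model matrix M, the
    ray-constraint matrix C, the data vectors and the weight alpha."""
    A = np.vstack([M, alpha * C])
    d = np.concatenate([b_e, alpha * c_e])
    return A, d
```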

The gradient of *O*($\widehat{\mathbf{f}}$) is given as

$$\nabla O(\widehat{\mathbf{f}})=2{\mathbf{A}}^{\mathrm{T}}(\mathbf{A}\widehat{\mathbf{f}}-{\widehat{\mathbf{d}}}^{e}).$$

(21)

We estimate the solution of (18) by applying a projected steepest descent iteration scheme [21], [22], during which a sequence of intermediate estimates ${\widehat{\mathbf{f}}}^{(q)}$ ∈ **IR**^{N} with integer iteration index *q* ≥ 0 is generated according to

$${\widehat{\mathbf{f}}}^{(q+1)}=P({\widehat{\mathbf{f}}}^{(q)}+{w}^{(q)}{\mathbf{u}}^{(q)})$$

(22)

using the update direction

$${\mathbf{u}}^{(q)}=-\nabla O({\widehat{\mathbf{f}}}^{(q)})$$

(23)

and the update step-width

$${w}^{(q)}=\frac{1}{2}\phantom{\rule{thinmathspace}{0ex}}\frac{{\Vert {\mathbf{u}}^{(q)}\Vert}_{2}^{2}}{{\Vert \mathbf{A}{\mathbf{u}}^{(q)}\Vert}_{2}^{2}}.$$

(24)

To understand the scheme described by (22) to (24), note that in each iteration, ${\widehat{\mathbf{f}}}^{(q)}$ is first updated in the direction of the steepest descent of the quadratic functional *O* at the point ${\widehat{\mathbf{f}}}^{(q)}$, which is a natural selection when pursuing minimization. The update step-width according to (24) guarantees that ${\widehat{\mathbf{f}}}^{(q)}$ + *w*^{(q)}**u**^{(q)} minimizes *O* on the line through ${\widehat{\mathbf{f}}}^{(q)}$ with direction **u**^{(q)} [22]. The operator *P* in (22) describes the orthogonal projection onto the convex set of vectors with nonnegative entries. This projection assures that each intermediate estimate satisfies the nonlinear constraint CPOS.

It can be shown that the sequence ${\widehat{\mathbf{f}}}^{(q)}$ converges towards the solution of (18) for increasing *q* [23]. However, at some point during the iterations, the intermediate estimate may start diverging from the true, desired $\widehat{\mathbf{f}}$ due to data noise. We want to avoid this behavior and therefore suggest stopping the iterations early, when either of the two following conditions is satisfied.

- The iteration index *q* reaches a predefined maximum number γ_{max}.
- The value ‖${\widehat{\mathbf{f}}}^{(q)}$ − ${\widehat{\mathbf{f}}}^{(q-1)}$‖_{∞} first becomes equal to or less than a certain threshold. This threshold is here defined as γ_{thres}‖${\widehat{\mathbf{f}}}^{(q)}$‖_{∞}, i.e., relative to the norm of the current intermediate estimate.

Thus, iteration continues up to the maximum index γ_{max} if at least one element of the intermediate estimate is still updated strongly enough, even if the update in all other elements is already below the introduced threshold. Note that for our steepest descent iteration scheme, there are a number of indications that our early stopping rule asymptotically defines a regularization process [24], as discussed in [25] and [26].
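Putting (21)-(24) and the stopping rule together, a minimal dense-matrix sketch of the constrained iteration reads as follows; the zero initialization and the parameter defaults are assumptions of this sketch, not values prescribed by the text above.

```python
import numpy as np

def projected_steepest_descent(A, d, n_iter=400, thres=0.002):
    """Projected steepest descent of eqs. (22)-(24), stopped early
    either at n_iter iterations or when the largest update falls
    below thres * ||f||_inf."""
    f = np.zeros(A.shape[1])
    for q in range(n_iter):
        u = -2.0 * A.T @ (A @ f - d)          # u = -grad O(f), eqs. (21), (23)
        Au = A @ u
        denom = np.dot(Au, Au)
        if denom == 0.0:                      # gradient vanished: done
            break
        w = 0.5 * np.dot(u, u) / denom        # exact line search, eq. (24)
        f_new = np.maximum(f + w * u, 0.0)    # projection P of eq. (22)
        update = np.max(np.abs(f_new - f))
        f = f_new
        if update <= thres * max(np.max(np.abs(f)), 1e-30):
            break                             # relative-update stopping rule
    return f
```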

Let *N*_{q} denote the iteration index at which the stopping criterion is satisfied. We define

$${\widehat{\mathbf{f}}}^{e}\text{\hspace{0.17em}}\u2254{\widehat{\mathbf{f}}}^{({N}_{q})}$$

(25)

as the nonnegative approximation to the solution of the constrained optimization problem (18) and consider ${\widehat{\mathbf{f}}}^{e}$ as a good estimate for the vector $\widehat{\mathbf{f}}$ that satisfies (17). Reordering the entries of ${\widehat{\mathbf{f}}}^{e}$ according to the inverse of the sample-ordering mapping defined earlier in this section finally yields the estimated object density samples on $\widehat{\mathcal{P}}$.

V. NUMERICAL EVALUATION

This section presents an evaluation of our factorization method based on computer-simulated CB data. We used acquisition geometry parameters that are listed in Table I and are similar to those of real medical C-arm systems. Furthermore, the CB data was not truncated in any of the evaluations presented in this section, i.e., the CB projection of the region Ω_{f} onto the detector plane was always entirely contained in the measured detector area.

Two mathematical phantoms were considered: a high-contrast disk phantom and the FORBILD head phantom (without ears), which is described online.^{1} The head phantom was positioned so that its center is 40 mm above the PCS. The disk phantom consisted of six cylindrical disks embedded in a low-attenuating cylinder of radius 100 mm and density −900 HU. Each disk had a thickness of 10 mm, a radius of 80 mm and the density of water (selected at 0.183 cm^{−1}). The disks were centered on the *z* axis and stacked one above the other with a center-to-center distance of 20 mm between any two adjacent disks. The first disk was centered on PCS and therefore, all other disks were above PCS.

As described in Section IV, the iterative inversion algorithm that we use for each planar reconstruction involves the selection of four parameters: α, σ, γ_{thres}, and γ_{max}. Here, we consider a fixed stopping rule defined with γ_{thres} = 0.002 and γ_{max} = 400 and investigate the impact of α and σ on the achievable reconstruction quality.

We reconstructed the disk phantom on a single plane of interest for a range of values of α while using the discretization Δ*t* = Δ*z* = 0.5 mm. The plane was defined with *s*_{start} = 0 mm, *p* = 0, and we set σ = 0.

The reconstruction results were evaluated quantitatively by computing, as a figure of merit (FOM), the root-mean-squared error (RMSE) between the reconstruction ${\widehat{f}}^{e}$ and the true object density within two different regions in $\widehat{\mathcal{P}}$. These regions, denoted as A and B, were rectangular, with width 100 mm and height 6 mm, and were entirely contained, respectively, in the bottom disk and in the top disk of the phantom (see Fig. 7). The left two graphs in Fig. 6 display the RMSE as a function of α^{2}.
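The FOM used here is a plain masked RMSE, which can be sketched as below; the boolean masks marking the rectangular regions A and B are assumed given.

```python
import numpy as np

def rmse(recon, truth, mask):
    """Root-mean-squared error between a reconstruction and the ground
    truth, restricted to a region of interest given by a boolean mask."""
    diff = recon[mask] - truth[mask]
    return np.sqrt(np.mean(diff**2))
```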

The plane with *s*_{start} = 0 mm and *p* = 0 through the disk phantom. (Top-left) Illustration of the true density values, of the phantom position relative to PCS and of regions A and B. (Top-right) Reconstruction using short-scan FDK. (Bottom) Reconstruction **...**

The right diagram in Fig. 6 presents, for the noisy CB data, a closer investigation of the convergence properties of the RMSE in region B for three particular choices of α^{2}: α^{2} = 10^{−5}, α^{2} = 10^{−2}, and α^{2} = 0.85. We observe that the general development of the RMSE during the iteration process is similar for the various α: the intermediate estimates approach the true densities, reach a minimum distance to them at some iteration index and then start to diverge. Again, the superiority of α^{2} = 10^{−2} becomes obvious, because this choice, compared to the two others, yields 1) better image quality for almost any fixed number of iterations and 2) a minimum FOM that is smaller and reached after fewer iterations. We also note that the stopping criterion we applied for Fig. 6 (left) is satisfied fairly early, before the minimum of either of the three curves is reached. On the other hand, the small improvements expected when continuing the iterations appear not very attractive, considering that up to 10 times more iterations would have to be carried out. The selected values of γ_{max} and γ_{thres} thus achieve a practical trade-off between efficiency and image quality. Note that the three investigated choices of α already yield improved CB artifact behavior, compared to short-scan FDK, after only 10 iterations.

Fig. 7 shows reconstruction results on this plane and illustrates that, when using the factorization method (with α^{2} = 10^{−2}), it is easily possible to distinguish the disks from the gaps at the top of the phantom. This distinction, however, appears impossible in the short-scan FDK reconstruction, which is presented in the top-right image of Fig. 7.

The effect of σ was studied using reconstructions of the FORBILD head phantom within the plane that is given by *p* = 0 and *s*_{start} = 10 mm. The plane was again discretized using Δ*t* = Δ*z* = 0.5 mm and the reconstructions were obtained with fixed α^{2} = 10^{−2}, while using three different values of σ, namely σ = 0, σ = 0.7, and σ = 1.4. The results, displayed in Fig. 8, show that an appropriate choice of the smoothing parameter, such as σ = 0.7, can lower the strength of horizontal, streak-like discretization artifacts. This improvement is achieved without noticeably reducing resolution (see the difference image on the right of Fig. 8). Too large a value of σ should be avoided, however, since it would degrade the high-frequency content of the reconstruction too much.
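The role of σ can be illustrated with a simple 1-D Gaussian smoothing sketch; the kernel truncation at 3σ and the function name are illustrative choices, not the paper's implementation:

```python
import numpy as np

def gaussian_smooth(signal, sigma):
    """Smooth a 1-D signal with a sampled Gaussian kernel of standard
    deviation sigma (in samples); sigma = 0 returns the signal unchanged.
    The kernel is truncated at 3*sigma and normalized to unit sum."""
    if sigma == 0:
        return signal.copy()
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")
```

With σ = 0.7, an impulse keeps most of its energy in the central sample, which matches the observation that moderate smoothing suppresses streaks without a noticeable resolution loss.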

Here, we present additional reconstructions of the head phantom for the purpose of visual assessment of image quality. For comparison, we use results obtained with two other state-of-the-art reconstruction methods for the circular short-scan, namely 1) short-scan FDK, which we applied with sinc-apodization of the ramp convolution kernel, and 2) the algorithm suggested in [12, Section II-C], which we refer to in the following as the virtual PI-line backprojection-filtration (BPF) method. We stopped the iterations of the factorization method according to γ_{max} = 400 and γ_{thres} = 0.002, and we used α^{2} = 10^{−2} and σ = 0.7, which turned out to be efficacious parameter values in the evaluations of Section V-B.

In this section, reconstruction was carried out for the entire 3-D VOI, not only a single plane, by following the concepts of Section III-C and thus considering a set of 220 parallel planes with *s*_{start} = 10 mm, Δ_{s} = 0.5 mm and *p* = 0, 1, …, 219. Fig. 9 displays the slices *z* = 50 mm and *z* = 60 mm through the volume reconstructions, which were obtained both from ideal CB data and from CB data with simulated Poisson noise, based on 300 000 photons per ray. Fig. 10 shows the reconstructions on the plane with index *p* = 136 using noise-free CB data.
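The simulated Poisson noise can be reproduced in spirit with the following sketch, which assumes Beer's-law attenuation with 300 000 incident photons per ray; the clamping of zero counts is our own safeguard, and the function name is hypothetical:

```python
import numpy as np

def add_poisson_noise(projections, n_photons=300_000, rng=None):
    """Simulate Poisson counting noise on line-integral (projection) data.
    Each ray starts with n_photons; attenuation follows Beer's law,
    so the expected detected count is n_photons * exp(-projection)."""
    rng = np.random.default_rng(rng)
    expected_counts = n_photons * np.exp(-projections)
    noisy_counts = rng.poisson(expected_counts)
    # Avoid log(0) for the (rare) rays where no photon is detected.
    noisy_counts = np.maximum(noisy_counts, 1)
    return -np.log(noisy_counts / n_photons)
```

At 300 000 photons per ray, the relative noise per ray is small, so the noisy projections stay close to the ideal line integrals while still exercising the algorithm's noise robustness.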

Reconstructions of the FORBILD head phantom in the grayscale window [0HU, 100HU], obtained with (left) short-scan FDK, (center) the virtual PI-line BPF approach, and (right) the factorization approach. From (top) to (bottom): slice *z* = 50 mm, no noise; **...**

Reconstruction of the FORBILD head phantom within the plane defined by *p* = 136, *s*_{start} = 10 mm and Δ_{s} = 0.5 mm, obtained with (left) short-scan FDK, (center) the virtual PI-line BPF method and (right) the factorization approach. Grayscale window:[0HU, **...**

Visual inspection of these reconstruction results confirms that the factorization approach yields a significant reduction in CB artifacts compared to the other two methods. Compared to short-scan FDK, we can almost completely avoid low-frequency artifacts, such as the dark shadows attached to the bones in the FORBILD head phantom or the drop in reconstructed intensities in the axial direction. We can also avoid almost all directed artifacts, as well as the geometric distortion occurring in the virtual PI-line BPF method. Because the available CB data is insufficient for stable reconstruction, the issue of CB artifacts cannot be solved entirely, but the effect of insufficient CB data is now only visible as occasional spatially compact, streak-like (high-frequency) artifacts that are tangent to sharp discontinuities in the object; see, e.g., below the skullcap in Fig. 10 or around the nose in Figs. 9 and 10.

We suggested an efficacious 3-D CB reconstruction approach for the circular short-scan. This approach involves a novel, theoretically-exact factorization of the reconstruction problem into a set of independent 2-D inversion problems. Each 2-D inversion problem corresponds to finding the object density on one plane, and we introduced a practical, numerical method to solve any such problem using a constrained steepest descent iteration scheme.
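As a generic illustration of such a scheme, the sketch below applies projected steepest descent with an exact line search to a small linear least-squares problem; the matrix `H` merely stands in for the discretized 2-D forward operator of one plane, the nonnegativity projection stands in for the constraint, and all names are hypothetical:

```python
import numpy as np

def projected_steepest_descent(H, b, n_iter=100):
    """Minimize ||H f - b||^2 subject to f >= 0 by steepest descent,
    projecting onto the nonnegative orthant after each update."""
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ f - b)             # gradient of the data term
        denom = np.dot(H @ grad, H @ grad)
        if denom == 0:                        # exact stationary point
            break
        step = np.dot(grad, grad) / denom     # exact line-search step
        f = np.maximum(f - step * grad, 0.0)  # project onto f >= 0
    return f
```

The projection step enforces the physical constraint that object densities are nonnegative, which is one natural choice of constraint set for such a scheme.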

A numerical evaluation of our factorization algorithm was presented using computer-simulated CB data of the FORBILD head phantom and a disk phantom. Our results showed a significantly decreased level of CB artifacts when compared to short-scan FDK and to the virtual PI-line BPF method of [12], and also showed good robustness to noise. Moreover, we saw that a judicious selection of parameters of our algorithm can accelerate convergence and improve image quality.

The factorization method uses, for reconstruction on any given plane, only the CB data of a super-short-scan [11], i.e., of a limited interval along the source trajectory. For this reason, it is most likely not optimal in terms of noise. It would be interesting to find a way to extend the method so that all measured projections can be beneficially used for reconstruction in each plane. Also, we expect the total achievable image quality to depend on the actual selection of the set of planes. These issues are currently under investigation.

A comparison of the factorization algorithm to other reconstruction approaches recently suggested for the circular short-scan, e.g., [10], [11], [13]–[15], is of high interest to us and is the topic of our future investigations. Those comparisons will cover, in more detail, measurements of achievable spatial resolution and also of signal-to-noise ratio in the reconstruction results. They will furthermore have to deal with distinct truncation scenarios to reveal the flexibility of the considered algorithms with respect to limited data.

We finally note that an extension of the factorization reconstruction method to nonplanar scan orbits can easily be achieved. In [27], e.g., we proposed such an extension for the circle-plus-orthogonal-line trajectory by using CB data from the line scan as additional constraints in the 2-D inversion problems. Such an extension overcomes the problem of insufficient CB data [1] while permitting arbitrary (sparse) sampling on the additional linear scan segment.

This work was supported in part by a grant from Siemens AG, Healthcare Sector and in part by the U.S. National Institutes of Health (NIH) under Grant R01 EB000627.

This Appendix presents how to derive (8) in Section III-B from [17, eq. (34)]. Reference [17, eq. (34)] gives the fundamental relation between the intermediate function *b* and the object density function *f*, namely

$$(1/\pi )\phantom{\rule{thinmathspace}{0ex}}b(\mathbf{x})={K}^{*}(\mathit{\omega}({\lambda}_{2},\mathbf{x}),\mathbf{x})-{K}^{*}(\mathit{\omega}({\lambda}_{1},\mathbf{x}),\mathbf{x})$$

(26)

$${K}^{*}(\mathit{\omega},\mathbf{x})=K(l,\mathit{\omega},\mathbf{s}){|}_{l=(x-s)\cdot \mathit{\omega}}$$

(27)

$$\mathit{\omega}(\lambda ,\mathbf{x})=(\mathbf{x}-\mathbf{a}(\lambda ))/\Vert \mathbf{x}-\mathbf{a}(\lambda )\Vert $$

(28)

$$K(l,\mathit{\omega},\mathbf{s})={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}dl\prime \frac{1}{\pi (l-l\prime )}}f(\mathbf{s}+l\prime \mathit{\omega}).$$

(29)

Since Section III-B focuses on the reconstruction of points in the plane only, we may substitute **x** = **$\widehat{x}$**(*t, z*). Considering only the first term *K** := *K** (**ω**(λ_{2}, **$\widehat{x}$**(*t, z*)), **$\widehat{x}$**(*t, z*)) on the RHS of (26) and selecting **s** = **a**(λ_{2}) yields *l* = ‖**$\widehat{x}$**(*t, z*) − **a**(λ_{2})‖ and therefore

$${K}^{*}={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}dl\prime}\frac{f\phantom{\rule{thinmathspace}{0ex}}\left(\mathbf{a}({\lambda}_{2})+l\prime \frac{\widehat{\mathbf{x}}(t,z)-\mathbf{a}({\lambda}_{2})}{\Vert \widehat{\mathbf{x}}(t,z)-\mathbf{a}({\lambda}_{2})\Vert}\right)}{\pi (\Vert \widehat{\mathbf{x}}(t,z)-\mathbf{a}({\lambda}_{2})\Vert -l\prime )}.$$

(30)

For the argument of *f* in (30), we have

$$\left(\mathbf{a}({\lambda}_{2})+l\prime \frac{\widehat{\mathbf{x}}(t,z)-\mathbf{a}({\lambda}_{2})}{\Vert \widehat{\mathbf{x}}(t,z)-\mathbf{a}({\lambda}_{2})\Vert}\right)\cdot {\widehat{\mathbf{e}}}_{s}=\widehat{s}$$

which shows that the function *f* in (30) is only evaluated on the plane. Introducing the coordinates *z*_{λ2} = **a**(λ_{2}) · **ê**_{z} and *t*_{λ2} = **a**(λ_{2}) · **ê**_{t}, we obtain

$$\begin{array}{c}{K}^{*}={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}dl\prime \frac{1}{\pi (\Vert \widehat{\mathbf{x}}(t,z)-\mathbf{a}({\lambda}_{2})\Vert -l\prime )}}\hfill \\ \text{\hspace{1em}}\times \widehat{f}\phantom{\rule{thinmathspace}{0ex}}\left({t}_{{\lambda}_{2}}+l\prime \frac{t-{t}_{{\lambda}_{2}}}{\Vert \widehat{\mathbf{x}}(t,z)-\mathbf{a}({\lambda}_{2})\Vert},{z}_{{\lambda}_{2}}+l\prime \frac{z-{z}_{{\lambda}_{2}}}{\Vert \widehat{\mathbf{x}}(t,z)-\mathbf{a}({\lambda}_{2})\Vert}\right)\hfill \end{array}$$

The change of variable $\tilde{l}$ = *l′* (*t* − *t*_{λ2})/‖**$\widehat{x}$**(*t, z*) − **a**(λ_{2})‖, with Jacobian ‖**$\widehat{x}$**(*t, z*) − **a**(λ_{2})‖/|*t* − *t*_{λ2}|, yields

$$\begin{array}{cc}{K}^{*}\hfill & ={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}d\tilde{l}\frac{\text{sign}(t-{t}_{{\lambda}_{2}})}{\pi (t-{t}_{{\lambda}_{2}}-\tilde{l})}\widehat{f}\phantom{\rule{thinmathspace}{0ex}}\left({t}_{{\lambda}_{2}}+\tilde{l},{z}_{{\lambda}_{2}}+\tilde{l}\frac{z-{z}_{{\lambda}_{2}}}{t-{t}_{{\lambda}_{2}}}\right)}\hfill \\ \hfill & ={\displaystyle \underset{-\infty}{\overset{\infty}{\int}}d\tau \frac{\text{sign}(t-{t}_{{\lambda}_{2}})}{\pi (t-\tau )}\widehat{f}\phantom{\rule{thinmathspace}{0ex}}\left(\tau ,{z}_{{\lambda}_{2}}+(\tau -{t}_{{\lambda}_{2}})\frac{z-{z}_{{\lambda}_{2}}}{t-{t}_{{\lambda}_{2}}}\right)}\hfill \end{array}$$

where τ = *t*_{λ2} + $\tilde{l}$. The calculations for the second term on the RHS of (26) are straightforward. We use that, by definition, *t*_{λ2} < *t* < *t*_{λ1}, and introduce

$${z}_{k}(\tau )={z}_{{\lambda}_{k}}+(\tau -{t}_{{\lambda}_{k}})(z-{z}_{{\lambda}_{k}})/(t-{t}_{{\lambda}_{k}})$$

(31)

with *k* ∈ {1, 2}, which yields (8) from the main text.
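The kernel 1/(π(*t* − τ)) appearing in the derivation above is a Hilbert-transform kernel. Purely as a numeric illustration of this structure (not part of the reconstruction algorithm), a naive discretization on a uniform grid that skips the singular sample can be written as:

```python
import numpy as np

def hilbert_line(fhat, t_index):
    """Crude discrete approximation of the integral of
    fhat(tau) / (pi * (t - tau)) d tau on a uniform grid, skipping
    the singular sample tau = t.  The grid spacing cancels between
    d tau and (t - tau), so it does not appear explicitly."""
    tau = np.arange(len(fhat))
    mask = tau != t_index
    return np.sum(fhat[mask] / (np.pi * (t_index - tau[mask])))
```

For an impulse, the result decays like 1/(π Δ) with the distance Δ from the impulse, and it changes sign across it, as expected for a Hilbert kernel.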

The concepts and information presented in this paper are based on research and are not commercially available.

^{1}http://www.imp.uni-erlangen.de/forbild/english/results/index.htm

Frank Dennerlein, Department of Radiology, University of Utah, 729 Arapeen Drive, Salt Lake City, UT 84102 USA.

Frédéric Noo, Department of Radiology, University of Utah, Salt Lake City, UT 84102 USA.

Harald Schöndube, Department of Radiology, University of Utah, Salt Lake City, UT 84102 USA.

Günter Lauritsch, Siemens AG, Healthcare Sector, 91301 Forchheim, Germany.

Joachim Hornegger, Chair of Computer Science 5 (Pattern Recognition), University of Erlangen-Nuremberg, 91058 Erlangen, Germany.

1. Tuy HK. An inversion formula for cone-beam reconstruction. SIAM J. Appl. Math. 1983;vol. 43(no. 3):546–552.

2. Finch DV. Cone-beam reconstruction with sources on a curve. SIAM J. Appl. Math. 1985;vol. 45(no. 4):665–673.

3. Feldkamp LA, Davis LC, Kress JW. Practical cone-beam algorithm. J. Opt. Soc. Am. A. 1984;vol. 1(no. 6):612–619.

4. Wang G, Liu Y, Lin TH, Cheng PC. Half-scan cone-beam x-ray microtomography formula. Scanning. 1994;vol. 16(no. 4):216–220. [PubMed]

5. Parker DL. Optimal short scan convolution reconstruction for fan-beam CT. Med. Phys. 1982;vol. 9(no. 2):254–257. [PubMed]

6. Silver MD. A method for including redundant data in computed tomography. Med. Phys. 2000;vol. 27:773–774. [PubMed]

7. Hsieh J. A practical cone beam artifact correction algorithm. IEEE Nucl. Sci. Symp. Conf. Rec.; Lyon, France. 2000. pp. 15/71–15/74.

8. Zeng K, Chen Z, Zhang L, Wang G. An error-reduction-based algorithm for cone-beam computed tomography. Med. Phys. 2004;vol. 31(no. 12):3206–3212. [PubMed]

9. Zhu L, Starman J, Fahrig R. An efficient method for reducing the axial intensity drop in circular cone-beam CT. Proc. Fully 3-D; Lindau, Germany. 2007. pp. 265–268. [PMC free article] [PubMed]

10. Nett BE, Zhuang TL, Leng S, Chen GH. Arc-based cone-beam reconstruction algorithm using an equal weighting scheme. J. X-ray Sci. Tech. 2007;vol. 15(no. 1):19–48.

11. Kudo H, Noo F, Defrise M, Clackdoyle R. New super-short-scan algorithm for fan-beam and cone-beam reconstruction. IEEE Nucl. Sci. Symp. Conf. Rec.; Norfolk, VA. 2002. pp. 902–906.

12. Yu L, Zou Y, Sidky EY, Pelizzari CA, Munro P, Pan X. Region of interest reconstruction from truncated data in circular cone-beam CT. IEEE Trans. Med. Imag. 2006;vol. 25(no. 7):869–881. [PubMed]

13. Noo F, Heuscher DJ. Image reconstruction from cone-beam data on a circular short-scan. Proc. SPIE; vol. 4684; San Diego, CA. 2002. pp. 50–59.

14. Hsieh J, Tang X. Conjugate backprojection approach for cone beam artifact reduction. Proc. SPIE; vol. 6142; San Diego, CA. 2006. pp. 758–764.

15. Grass M, Koehler T, Proksa R. 3-D cone-beam CT reconstruction for circular trajectories. Phys. Med. Biol. 2000;vol. 45(no. 2):329–347. [PubMed]

16. Yu H, Wang G. Feldkamp-type VOI reconstruction from super-short-scan cone-beam data. Med. Phys. 2004;vol. 31(no. 6):1357–1362. [PubMed]

17. Pack JD, Noo F, Clackdoyle R. Cone-beam reconstruction using the backprojection of locally filtered projections. IEEE Trans. Med. Imag. 2005;vol. 24(no. 1):70–85. [PubMed]

18. Noo F, Clackdoyle R, Pack JD. A two-step hilbert transform method for 2-D image reconstruction. Phys. Med. Biol. 2004;vol. 49(no. 17):3903–3923. [PubMed]

19. Noo F, Hoppe S, Dennerlein F, Lauritsch G, Hornegger J. A new scheme for view-dependent data differentiation in fan-beam and cone-beam computed tomography. Phys. Med. Biol. 2007;vol. 52(no. 17):5393–5414. [PubMed]

20. Noo F, Pack JD, Heuscher D. Exact helical reconstruction using native cone-beam geometries. Phys. Med. Biol. 2003;vol. 48(no. 23):3787–3818. [PubMed]

21. Nakamura O, Kawata S, Minami S. Optical microscope tomography. II. Nonnegative constraint by a gradient-projection method. J. Opt. Soc. Am. A. 1988;vol. 5(no. 4):554–561.

22. Chong EKP, Zak SH. An Introduction to Optimization. New York: Wiley; 1996.

23. Iusem AN. On the convergence properties of the projected gradient method for convex optimization. Mat. Apl. Comput. 2003;vol. 22(no. 1):37–52.

24. Natterer F, Wübbeling F. Mathematical Methods in Image Reconstruction. Philadelphia, PA: SIAM; 2001.

25. Bakushinsky A, Goncharsky A. Ill-Posed Problems: Theory and Applications (Mathematics and Its Applications) Norwell, MA: Kluwer; 1994.

26. Yao Y, Rosasco L, Caponnetto A. On early stopping in gradient descent learning. Constructive Approx. 2007;vol. 26(no. 2):289–315.

27. Dennerlein F, Noo F, Schoendube H, Hornegger J, Lauritsch G. Cone-beam reconstruction on a circular short-scan using the factorization approach. Proc. Fully3D; Lindau, Germany. 2007. pp. 346–349.
