
Phys Med Biol. Author manuscript; available in PMC 2010 August 7.


Published online 2009 July 10. doi: 10.1088/0031-9155/54/15/001

PMCID: PMC2884190

NIHMSID: NIHMS200414


We present a new image reconstruction algorithm for helical cone-beam computed tomography (CT). This algorithm is designed for data collected at or near maximum pitch, and provides a theoretically exact and stable reconstruction while beneficially using all measured data. The main operations involved are a differentiated backprojection and a finite-support Hilbert transform inversion. These operations are applied along M-lines, and the beneficial use of all measured data is gained by averaging three volumes, each reconstructed with a different choice of M-lines. The technique is overall similar to that presented by one of the authors in a previous publication, but operates volume-wise instead of voxel-wise, which yields a significantly more efficient reconstruction procedure. The algorithm is presented in detail. Preliminary results from computer-simulated data are also provided to demonstrate the numerical stability of the algorithm, the beneficial use of redundant data and the ability to process data collected with an angular flying focal spot.

The ongoing increase in the number of detector rows has created a need for increasingly sophisticated helical cone-beam (CB) reconstruction algorithms in x-ray computed tomography (CT). So far, for computational complexity reasons, preference has been given to analytical reconstruction algorithms. These can be divided into two classes: the approximate and the theoretically exact and stable (TES) algorithms. The approximate algorithms are currently the method of choice for commercial CT scanners; see, e.g., Feldkamp *et al* (1984), Stierstorfer *et al* (2002), Flohr *et al* (2003), Stierstorfer *et al* (2004), Heuscher *et al* (2004), Kudo *et al* (2004), Shechter *et al* (2004), Köhler *et al* (2006), Tang and Hsieh (2007), Zamyatin *et al* (2006), Katsevich *et al* (2007). They are attractive because they are able to use all measured data, but they produce reconstructions distorted by CB artifacts, which can be severe for imaging with 64 detector rows or more.

In this paper, we focus on the class of TES algorithms, which due to their mathematically exact nature do not suffer from CB artifacts. We are particularly interested in the difficult problem of performing a TES reconstruction while beneficially using all measurements. Several TES algorithms have been published; we categorize them into 1-π and *n*-π methods.

The 1-π methods, such as Katsevich’s algorithm (Katsevich 2002) and the π-line two-step Hilbert algorithm (Zou and Pan 2004, Pack *et al* 2005, Ye *et al* 2005, Schöndube *et al* 2007a), essentially use a data angular coverage of 180° for each voxel, as viewed from the location of the voxel. That is, they perform reconstruction by backprojecting only filtered data defined inside the Tam–Danielsson (TD) window (Tam *et al* 1998, Danielsson *et al* 1997). For a standard helical CB data acquisition geometry, this means that even in the best-case scenario only 73% of the measured data are effectively used in terms of noise. The best-case scenario is data acquisition at maximum pitch, i.e., the pitch value at which the TD window just fits within the detector area. Therefore, the 1-π methods waste a significant amount of dose when applied to standard helical CB data. This issue could be solved by adapting the x-ray beam to the TD window to prevent the patient from being exposed to unnecessary dose, but such an adaptation is mechanically challenging and could remove flexibility in pitch selection. A preferable solution is to find a way to effectively include the data measured outside the TD window in a TES algorithm so that no dose is wasted. Including these data then serves to reduce image noise while maintaining resolution.

The *n*-π methods, and in particular the 3-π methods, rely on the theoretical observations made in Proksa *et al* (2000). They extend the data set on which the reconstruction is based to the general case of an *n*-π detector window (Bontus *et al* 2003, Katsevich 2004, Bontus *et al* 2005, Katsevich 2006), where *n* is an odd integer. Thus, they essentially perform reconstruction at any given location using a relative data angular coverage of *n* times 180°. This allows a substantial increase in noise-effective data usage, namely to 86% for *n* = 3 and 89% for *n* = 5 (Proksa *et al* 2000), but does not achieve full data utilization. Also, it requires fixing the pitch to relatively small values; e.g., for *n* = 3, the pitch needs to be set at about one third of the maximum pitch defined above for the 1-π methods. This results in a significant increase in the data acquisition time for a given axial coverage and thus induces a higher risk of motion artifacts, especially for imaging applications where a high pitch is typically required.

Up to now, it has not been clear whether efficient TES reconstruction can be achieved while using *all* data acquired at or near maximum pitch with a standard rectangular (flat or curved) detector. It is also unclear whether a TES algorithm can be built such that it uses most or all data on the detector for arbitrary pitch values. This paper focuses on the first of these questions. We start our investigation from an observation made in Pack *et al* (2005), namely that, in some circumstances, the x-ray linear attenuation coefficient at a given location x̱ can be reconstructed in three different ways that each involve a different amount of data and together yield full data utilization. Moreover, a reduction of image noise can be achieved by averaging the three different reconstructions. Unfortunately, this observation was based on a computationally demanding procedure, as reconstruction at any location required a differentiated backprojection (DBP) and a subsequent inverse Hilbert transform (HT) on three lines that contain x̱.

We present here an efficient algorithm that allows TES reconstruction using all the data acquired at or near maximum pitch with a standard cylindrical detector. This algorithm is a volume-based implementation of the observation made in Pack *et al* (2005), which was found by extending our efficient π-line-based DBP–HT method (Schöndube *et al* 2007a, 2007b) to the general case of M-lines, i.e., lines that connect a source position with a point on the detector (‘M-line’ is short for ‘measured line’; see section 2.2). The extension provides a means to efficiently reconstruct the attenuation function on any desired stack of nutating slices that are made of M-lines and cover the volume of interest (VOI). We apply it to reconstruct the VOI three times, using stacks of nutating slices that require different amounts of data and that involve, altogether, all measurements. Averaging the multiple VOI reconstructions then yields a result that effectively utilizes all measured data. Reconstruction on the different stacks of nutating slices can be performed sequentially, or as a whole, within a single loop over the projections, which offers some advantages in terms of computational effort and data management, and is our preferred approach.

The paper is organized as follows. First, a mathematical description of the data acquisition geometry and of the DBP on M-lines is given in section 2. Then, in section 3, we explain how data outside the TD window can effectively be involved using the DBP, and describe our new algorithm in detail. Last, preliminary reconstruction results are shown in section 4, demonstrating high accuracy, the ability to make significant use of redundant data and the ability to reduce noise through this use. A discussion-and-conclusion section is given after these results.

To describe the data acquisition geometry we use a system of Cartesian coordinates, *x*, *y* and *z*, that is attached to the object under examination. This system is laid out such that the *z*-axis corresponds to the direction in which the object is translated. A vector in this coordinate system is denoted as x̱ = [*x, y, z*]^{T}, with [·]^{T} denoting the transpose of a matrix or vector. The 3D distribution of x-ray attenuation coefficients to be reconstructed is *f*(x̱).

The motion of the vertex point (i.e., the radiation source) relative to the object is described as

$$\underline{\mathrm{a}}(\lambda) = \left[ R_0 \cos(\lambda + \lambda_0),\; R_0 \sin(\lambda + \lambda_0),\; z_0 + h\lambda \right]^T,$$

(1)

where *R*_{0} is the helix radius, and 2π*h* is the helix pitch. The position of the vertex point is therefore defined by a single parameter, λ ∈ [λ_{start}, λ_{end}], where λ_{start} denotes the first and λ_{end} the last position where measurements are taken. The vertex path is adjusted by λ_{0} and *z*_{0} such that at λ = 0 the source is located in the plane *z* = *z*_{0} at a polar angle λ_{0}.
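Equation (1) is straightforward to evaluate. The sketch below uses illustrative values for *R*_{0}, *h*, λ_{0} and *z*_{0} (assumptions for the example, not values from this paper) and checks that one full source turn advances the vertex point by the pitch 2π*h* in *z*.

```python
import numpy as np

def vertex(lam, R0=570.0, h=10.0, lam0=0.0, z0=0.0):
    """Source position a(lambda) on the helix, equation (1).

    R0, h, lam0, z0 are illustrative values, not taken from the paper.
    """
    return np.array([R0 * np.cos(lam + lam0),
                     R0 * np.sin(lam + lam0),
                     z0 + h * lam])

# One full turn returns the source to the same (x, y) position
# and advances it by the pitch 2*pi*h in z.
a_start = vertex(0.0)
a_turn = vertex(2.0 * np.pi)
```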

The algorithm is designed for a third generation CT scanner, where the x-ray data are measured using a curved-area detector; see figure 1. The curvature of the detector is such that the detector sits on the cylindrical surface of radius *D* that is centered on the line parallel to the *z*-axis through the vertex point. Any point *P* on the area detector can be specified using two coordinates, *w* and γ, which are defined relative to a midplane and a midline within this plane, as explained below. The area detector is pixelated into *N*_{rows} rows of *N*_{cols} pixels that are equidistantly spaced in *w* and γ; the sampling steps are denoted as Δ*w* and Δγ, so that each pixel is of size (*D*Δγ) × Δ*w*, with Δγ expressed in radians.

Data acquisition geometry. Our algorithm is designed for the geometry of third generation CT scanners with a multi-row detector.

We call the plane that contains the vertex point and is orthogonal to the *z*-axis the *midplane*, and we call the line within the midplane that diverges from the vertex point and intersects the *z*-axis the *midline*. Coordinate *w* specifies the signed distance between *P* and the midplane; it is measured in the direction of *z*. Coordinate γ is the angle between the midline and the line that connects the vertex point with the orthogonal projection of *P* onto the midplane. This angle is measured positively in the clockwise direction, relative to the *z*-axis. By construction, (γ, *w*) = (0, 0) defines the point where the midline intersects the area detector.

The detector is centered on the origin, (γ, *w*) = (0, 0), except for a quarter-detector-pixel offset in γ, as in current CT scanners. The maximum fan angle is denoted as γ_{max}, and implicitly defines the radius of the field of view (FOV) within which the object is contained. The first detector row is at position *w* = *w*_{min} and the last detector row is at position *w* = *w*_{max}, with *w*_{min} = −*w*_{max} and *w*_{max} = (*N*_{rows} − 1)Δ*w*/2. Note that these definitions will be extensively used later.

To mathematically describe the data measured at a given (γ, *w*) location, we introduce three orthogonal unit vectors:

$$\underline{\mathrm{e}}_u(\lambda) = [-\sin(\lambda + \lambda_0),\; \cos(\lambda + \lambda_0),\; 0],$$

(2)

$$\underline{\mathrm{e}}_v(\lambda) = [-\cos(\lambda + \lambda_0),\; -\sin(\lambda + \lambda_0),\; 0],$$

(3)

$$\underline{\mathrm{e}}_w = [0,\; 0,\; 1].$$

(4)

These vectors are such that e̱_{v}(λ) is parallel to the midline, and e̱_{u}(λ) = e̱_{v}(λ) × e̱_{w}. Thus, e̱_{u}(λ), e̱_{v}(λ) and e̱_{w} form a 3D orthonormal basis rotating with the vertex point.
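The basis of equations (2)–(4) can be checked numerically; the sketch below verifies orthonormality and the relation e̱_{u} = e̱_{v} × e̱_{w} at an arbitrary angle.

```python
import numpy as np

def basis(lam, lam0=0.0):
    """Rotating orthonormal basis e_u, e_v, e_w of equations (2)-(4)."""
    e_u = np.array([-np.sin(lam + lam0), np.cos(lam + lam0), 0.0])
    e_v = np.array([-np.cos(lam + lam0), -np.sin(lam + lam0), 0.0])
    e_w = np.array([0.0, 0.0, 1.0])
    return e_u, e_v, e_w

# The basis rotates with the vertex point but stays orthonormal.
e_u, e_v, e_w = basis(0.7)
```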

The line that connects the vertex point a̱(λ) to the (γ, *w*) location on the area detector is of direction

$$\underline{\alpha}(\lambda, \gamma, w) = \frac{D \sin\gamma\, \underline{\mathrm{e}}_u(\lambda) + D \cos\gamma\, \underline{\mathrm{e}}_v(\lambda) + w\, \underline{\mathrm{e}}_w}{\sqrt{D^2 + w^2}},$$

(5)

and the data measured along this line are

$$g(\lambda, \gamma, w) = \int_0^\infty f(\underline{\mathrm{a}}(\lambda) + t\, \underline{\alpha}(\lambda, \gamma, w))\, \mathrm{d}t.$$

(6)

The TD window plays a significant role in our algorithm. It is defined as the region of the detector that collects the data necessary (Tuy 1983, Danielsson *et al* 1997) and sufficient (Zou and Pan 2004) for TES reconstruction. When using only data inside this window, each voxel is reconstructed from a 180° range of data, as viewed from the voxel. The upper and lower boundaries of the TD window are the CB projections onto the detector of the helix turns above and below the current source position; in our notation, they are given by (Noo *et al* 2003)

$$w_{\mathrm{top}} = \frac{Dh}{R_0}\, \frac{\pi/2 - \gamma}{\cos\gamma}, \qquad w_{\mathrm{bottom}} = -\frac{Dh}{R_0}\, \frac{\pi/2 + \gamma}{\cos\gamma}.$$

(7)
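Equation (7) is easy to probe numerically. The sketch below uses illustrative geometry values (*D*, *h*, *R*_{0} are assumptions, not values from this paper) and checks two qualitative properties: the window is symmetric about the midplane at γ = 0, and its height π*Dh*/(*R*_{0} cos γ) grows toward the detector edges.

```python
import numpy as np

def td_window(gamma, D=1040.0, h=10.0, R0=570.0):
    """Tam-Danielsson window boundaries of equation (7).

    D, h, R0 are illustrative values, not taken from the paper.
    """
    w_top = (D * h / R0) * (np.pi / 2.0 - gamma) / np.cos(gamma)
    w_bottom = -(D * h / R0) * (np.pi / 2.0 + gamma) / np.cos(gamma)
    return w_top, w_bottom

# At gamma = 0 the window is symmetric about the midplane; its height
# pi*D*h/(R0*cos(gamma)) increases with |gamma|.
wt0, wb0 = td_window(0.0)
wt1, wb1 = td_window(0.3)
```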

We call the pitch value such that the TD window just fits within the detector area the *maximum pitch*. This value, denoted by 2π*h*_{max}, corresponds to a pitch factor $p_{\mathrm{max}} = 2\pi h_{\mathrm{max}} \cdot \frac{D}{N_{\mathrm{rows}}\, \Delta w\, R_0}$ that depends on γ_{max} and the number of detector rows, *N*_{rows}, according to the formula

$$p_{\mathrm{max}} = \pi\, \frac{N_{\mathrm{rows}} - 1}{N_{\mathrm{rows}}}\, \frac{\cos\gamma_{\mathrm{max}}}{\pi/2 + \gamma_{\mathrm{max}}}.$$

(8)

For a maximum fan angle of 26°, we have *p*_{max} ≈ 1.4.
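Formula (8) can be evaluated directly; the example below (the choice of 64 rows is illustrative, not prescribed by the paper) confirms the quoted value *p*_{max} ≈ 1.4 for a 26° fan angle.

```python
import numpy as np

def pitch_factor_max(gamma_max, n_rows):
    """Maximum pitch factor of equation (8); gamma_max in radians."""
    return (np.pi * (n_rows - 1) / n_rows
            * np.cos(gamma_max) / (np.pi / 2.0 + gamma_max))

# For a 26 degree fan angle and, e.g., 64 rows, p_max is close to 1.4.
p64 = pitch_factor_max(np.deg2rad(26.0), 64)
```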

Our algorithm is not applicable for arbitrary values of γ_{max} and of the pitch factor, *p*. The following conditions must be met:

$$\gamma_{\mathrm{max}} < \gamma_{\mathrm{crit}}, \qquad p_{\mathrm{min}} < p < p_{\mathrm{max}},$$

(9)

where γ_{crit} is the smallest positive root of the equation (π/2 + δ) tan δ = 1, namely γ_{crit} ≈ 26.24°, and

$$p_{\mathrm{min}} = \pi\, \frac{N_{\mathrm{rows}} - 1}{N_{\mathrm{rows}}}\, \sin\gamma_{\mathrm{max}}.$$

(10)

The condition on γ_{max} is only introduced to guarantee *p*_{min} < *p*_{max}, whereas the constraint on *p* is derived from the algorithm, as will be discussed later.
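The root γ_{crit} of (π/2 + δ) tan δ = 1 has no closed form, but a simple bisection recovers the quoted value of about 26.24°; a minimal sketch:

```python
import math

def gamma_crit(tol=1e-12):
    """Smallest positive root of (pi/2 + d) * tan(d) = 1, by bisection."""
    f = lambda d: (math.pi / 2.0 + d) * math.tan(d) - 1.0
    lo, hi = 0.0, 1.0  # f(0) = -1 < 0 and f(1) > 0, so the root is bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```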

The backprojection part of the algorithm presented in this paper is essentially a generalization of the DBP-0 variant of our DBP–HT algorithm (Schöndube *et al* 2007a) from π-lines to ‘M-lines’. A π-line is a line that connects two source positions separated by less than one helix turn. In contrast, an M-line is any line that connects a source position with a point on the detector. By construction, any point within the helix cylinder belongs to a multitude of M-lines, and any π-line is an M-line, whereas the converse is not true.

In the same way as for the DBP on π-lines, the outcome of the DBP on M-lines is not the function *f*(x̱) itself, but the HT of *f*(x̱) along an M-line. To recover *f*(x̱), an inverse HT has to be applied after the DBP is performed. For this task, we use an implementation of the inversion formula first presented by Söhngen (1937) (see Noo *et al* 2004), as it requires the HT of *f*(x̱) to be known only over the support of *f*(x̱). A detailed description of how we implement Söhngen’s formula can be found in Schöndube *et al* (2007a, equations (12) and (13)).

The link between the DBP and the HT along M-lines can be described as follows. Let (ℋ*f*)(x̱, ω̱) be the HT of *f* at x̱ along the unit vector ω̱, i.e.,

$$(\mathcal{H}f)(\underline{\mathrm{x}}, \underline{\omega}) = -\int_{-\infty}^{\infty} \frac{f(\underline{\mathrm{x}} - t\,\underline{\omega})}{\pi t}\, \mathrm{d}t,$$

(11)

where the integral has to be understood in the sense of a Cauchy principal value. Let ω̱(λ, x̱) denote the unit vector pointing from a̱(λ) toward x̱:

$$\underline{\omega}(\lambda, \underline{\mathrm{x}}) = \frac{\underline{\mathrm{x}} - \underline{\mathrm{a}}(\lambda)}{\| \underline{\mathrm{x}} - \underline{\mathrm{a}}(\lambda) \|}.$$

(12)

Furthermore, let 𝒟ℬ𝒫{λ_{a}, λ_{b}, x̱} be an operator that denotes the DBP of *g*(λ, γ, *w*) over the interval [λ_{a}, λ_{b}] at x̱. Using this notation we can formulate the relation that allows the reconstruction of *f*(x̱) along M-lines using the DBP, as introduced in Pack *et al* (2005), as

$$(\mathcal{H}f)(\underline{\mathrm{x}}, \underline{\omega}(\lambda_M, \underline{\mathrm{x}})) = \mathcal{DBP}\{\lambda_M, \lambda_2(\underline{\mathrm{x}}), \underline{\mathrm{x}}\} - \mathcal{DBP}\{\lambda_1(\underline{\mathrm{x}}), \lambda_M, \underline{\mathrm{x}}\}.$$

(13)

In this relation, λ_{1}(x̱) and λ_{2}(x̱) correspond to the source positions where the projection of x̱ just enters (respectively, leaves) the TD window on the detector, and λ_{M} specifies the direction of the HT, which is along the M-line that connects a̱(λ_{M}) to x̱, i.e., along ω̱(λ_{M}, x̱). Relation (13) is defined for any point x̱ inside the helix cylinder, and can be implemented whenever x̱ is illuminated with no interruptions over the intervals [λ_{M}, λ_{2}(x̱)] and [λ_{1}(x̱), λ_{M}]. A point x̱ is said to be illuminated over the interval [*a, b*] if the line connecting a̱(λ) to x̱ is covered by the measurements for all λ ∈ [*a, b*].

The value at a given point x̱ on the left-hand side of equation (13) depends on λ_{M}, as, for different choices of λ_{M}, the direction ω̱(λ_{M}, x̱) of the HT through x̱ will be different. It is only after performing the inverse HT that the reconstruction result at x̱ becomes mathematically independent of the choice of λ_{M}. It is also important to note that λ_{M} is *not* required to be within the interval [λ_{1}(x̱), λ_{2}(x̱)]. Let λ_{i}(x̱) and λ_{o}(x̱) be the first (respectively, last) source position at which the projection of x̱ just enters (respectively, leaves) *the area detector*. If there is no interrupted illumination over [λ_{i}(x̱), λ_{o}(x̱)], then λ_{M} can be selected anywhere between λ_{i}(x̱) and λ_{o}(x̱) (Pack *et al* 2005). Conditions (9) guarantee that there is no such interrupted illumination for any point within the FOV, as will be explained later. The possibility of choosing λ_{M} freely from a large number of values provides us with a degree of freedom in selecting the set of M-lines used for reconstruction. In the following section, we will discuss how this degree of freedom can be used to take redundant data measured outside the TD window into account.

The DBP can be implemented in various ways; see, e.g., Pack *et al* (2005), Zou and Pan (2004), Schöndube *et al* (2007a, 2007b), Yu *et al* (2007). In Schöndube *et al* (2007a, 2007b) and Yu *et al* (2007), it was shown that using a rebinning step for the DBP significantly improves efficiency and noise properties. Using a rebinning step also provides an easy way of processing flying focal spot (FFS) data, since this feature then affects only the rebinning step, not the actual backprojection. In our implementation, we thus carry out the backprojection in a rebinned pseudo-parallel ‘wedge’ geometry (Flohr *et al* 2005, Heuscher *et al* 2004).

The rebinning is performed according to the equation

$$g_{\mathrm{r}}(\vartheta(\lambda, \gamma), s_{\mathrm{r}}(\lambda, \gamma), w) = g(\lambda, \gamma, w),$$

(14)

with

$$\vartheta(\lambda, \gamma) = \lambda + \frac{\pi}{2} - \gamma, \qquad s_{\mathrm{r}}(\lambda, \gamma) = R_0 \sin\gamma,$$

(15)

and *w* remaining untouched. In this rebinned geometry, the boundaries of the TD window are given by

$$w_{\mathrm{top}} = \frac{Dh}{R_0}\, \frac{\pi/2 - \arcsin(s_{\mathrm{r}}/R_0)}{\sqrt{1 - s_{\mathrm{r}}^2/R_0^2}}, \qquad w_{\mathrm{bottom}} = -\frac{Dh}{R_0}\, \frac{\pi/2 + \arcsin(s_{\mathrm{r}}/R_0)}{\sqrt{1 - s_{\mathrm{r}}^2/R_0^2}},$$

(16)

and the DBP is expressed with an operator, 𝒟ℬ𝒫_{r}{ϑ_{a}, ϑ_{b}, x̱}, which designates the DBP of *g*_{r}(ϑ, *s*_{r}, *w*) over the interval [ϑ_{a}, ϑ_{b}] at x̱.
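The coordinate change of equations (14)–(15) is a pure remapping; a full rebinning would additionally interpolate *g* onto regular (ϑ, *s*_{r}) samples. A minimal sketch of the map itself, with an illustrative *R*_{0} (an assumption, not the paper's value):

```python
import numpy as np

def wedge_coords(lam, gamma, R0=570.0):
    """Fan coordinates (lambda, gamma) -> wedge coordinates (theta, s_r),
    equation (15); the row coordinate w is left untouched.

    R0 is an illustrative value, not taken from the paper.
    """
    theta = lam + np.pi / 2.0 - gamma
    s_r = R0 * np.sin(gamma)
    return theta, s_r

# Rays sharing the same theta are parallel in the (x, y)-plane,
# and |s_r| is bounded by the helix radius R0.
theta, s_r = wedge_coords(0.2, 0.1)
```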

The link between the wedge definition of the DBP and the expression used in equation (13) is

$$\mathcal{DBP}\{\lambda_a, \lambda_b, \underline{\mathrm{x}}\} = \mathcal{DBP}_{\mathrm{r}}\{\vartheta^*(\lambda_a, \underline{\mathrm{x}}), \vartheta^*(\lambda_b, \underline{\mathrm{x}}), \underline{\mathrm{x}}\},$$

(17)

where

$$\vartheta^*(\lambda, \underline{\mathrm{x}}) = \lambda + \frac{\pi}{2} - \gamma^*(\lambda, \underline{\mathrm{x}}),$$

(18)

with γ*(λ, x̱) being the value of γ that corresponds to the line through a̱(λ) and x̱, i.e.,

$$\gamma^*(\lambda, \underline{\mathrm{x}}) = \arctan\left( \frac{y \cos(\lambda + \lambda_0) - x \sin(\lambda + \lambda_0)}{R_0 - x \cos(\lambda + \lambda_0) - y \sin(\lambda + \lambda_0)} \right).$$

(19)
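Equation (19) can be sanity-checked geometrically: the quantity *R*_{0} |sin γ*| should equal the distance from the *z*-axis to the ray through a̱(λ) and x̱, computed independently with a cross product. A sketch with illustrative values (*R*_{0}, λ_{0} and the test point are assumptions):

```python
import numpy as np

def gamma_star(lam, x, R0=570.0, lam0=0.0):
    """Fan angle gamma* of the ray from a(lambda) through x, equation (19).

    R0 and lam0 are illustrative values, not taken from the paper.
    """
    num = x[1] * np.cos(lam + lam0) - x[0] * np.sin(lam + lam0)
    den = R0 - x[0] * np.cos(lam + lam0) - x[1] * np.sin(lam + lam0)
    return np.arctan(num / den)

# Cross-check: R0*|sin(gamma*)| equals the in-plane distance from the
# origin to the line through the source and the point.
lam = 0.3
x = np.array([100.0, 50.0, 0.0])
g = gamma_star(lam, x)
a = 570.0 * np.array([np.cos(lam), np.sin(lam)])
d = x[:2] - a
dist = abs(a[0] * d[1] - a[1] * d[0]) / np.linalg.norm(d)
```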

Therefore, equation (13) can be rewritten in the form

$$(\mathcal{H}f)(\underline{\mathrm{x}}, \underline{\omega}(\lambda_M, \underline{\mathrm{x}})) = \mathcal{DBP}_{\mathrm{r}}\{\vartheta^*(\lambda_M, \underline{\mathrm{x}}), \vartheta_2(\underline{\mathrm{x}}), \underline{\mathrm{x}}\} - \mathcal{DBP}_{\mathrm{r}}\{\vartheta_1(\underline{\mathrm{x}}), \vartheta^*(\lambda_M, \underline{\mathrm{x}}), \underline{\mathrm{x}}\},$$

(20)

where ϑ_{1}(x̱) = ϑ*(λ_{1}(x̱), x̱) and ϑ_{2}(x̱) = ϑ*(λ_{2}(x̱), x̱) denote the values of ϑ at which the projection of x̱ just enters (respectively, leaves) the TD window in the rebinned geometry.

The precise expression for 𝒟ℬ𝒫_{r}{ϑ_{a}, ϑ_{b}, x̱} is

$$\mathcal{DBP}_{\mathrm{r}}\{\vartheta_a, \vartheta_b, \underline{\mathrm{x}}\} = -\frac{1}{2\pi} \int_{\vartheta_a}^{\vartheta_b} g_{\mathrm{d}}(\vartheta, s_{\mathrm{r}}^*(\vartheta, \underline{\mathrm{x}}), w^*(\vartheta, \underline{\mathrm{x}}))\, \mathrm{d}\vartheta,$$

(21)

with

$$g_{\mathrm{d}}(\vartheta, s_{\mathrm{r}}, w) = \frac{D}{\sqrt{D^2 + w^2}}\, \frac{\partial}{\partial s_{\mathrm{r}}} g_{\mathrm{r}}(\vartheta, s_{\mathrm{r}}, w),$$

(22)

$$s_{\mathrm{r}}^*(\vartheta, \underline{\mathrm{x}}) = x \cos(\vartheta + \vartheta_0) + y \sin(\vartheta + \vartheta_0),$$

(23)

$$w^*(\vartheta, \underline{\mathrm{x}}) = \frac{D\,(z - z_0 - h(\vartheta - \pi/2 + \arcsin(s_{\mathrm{r}}^*/R_0)))}{y \cos(\vartheta + \vartheta_0) - x \sin(\vartheta + \vartheta_0) + \sqrt{R_0^2 - s_{\mathrm{r}}^{*2}}},$$

(24)

where ϑ_{0} = λ_{0} + π/2.
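The mapping functions (23) and (24) are simple to transcribe. The sketch below uses illustrative geometry values (*D*, *R*_{0}, *h*, *z*_{0} and the choice λ_{0} = 0, hence ϑ_{0} = π/2, are assumptions, not the paper's values) and checks only an elementary property: for a point on the *z*-axis, *s*_{r}* = 0, and if the point's *z* matches the source plane of the view, *w** = 0.

```python
import numpy as np

def s_r_star(theta, x, theta0=np.pi / 2.0):
    """Parallel-ray coordinate s_r* of the ray through x, equation (23)."""
    return x[0] * np.cos(theta + theta0) + x[1] * np.sin(theta + theta0)

def w_star(theta, x, theta0=np.pi / 2.0, D=1040.0, R0=570.0, h=10.0, z0=0.0):
    """Detector row coordinate w* of the ray through x, equation (24).

    D, R0, h, z0 are illustrative values, not taken from the paper.
    """
    s = s_r_star(theta, x, theta0)
    num = D * (x[2] - z0 - h * (theta - np.pi / 2.0 + np.arcsin(s / R0)))
    den = (x[1] * np.cos(theta + theta0) - x[0] * np.sin(theta + theta0)
           + np.sqrt(R0 ** 2 - s ** 2))
    return num / den

# A point on the z-axis whose z equals z0 + h*(theta - pi/2) lies in the
# source plane of that wedge view, so it maps to (s_r*, w*) = (0, 0).
theta = 1.3
x_axis = np.array([0.0, 0.0, 10.0 * (1.3 - np.pi / 2.0)])
```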

Inserting equation (21) into the right-hand side of equation (20) and combining the resulting terms into a single weighted backprojection integral offers a more practical expression for implementing equation (20). The expression given below is used in our algorithm, as described later:

$$(\mathcal{H}f)(\underline{\mathrm{x}}, \underline{\omega}(\lambda_M, \underline{\mathrm{x}})) = -\frac{1}{2\pi} \int_{\vartheta_{\mathrm{min}}}^{\vartheta_{\mathrm{max}}} \alpha(\vartheta, \underline{\mathrm{x}})\, g_{\mathrm{d}}(\vartheta, s_{\mathrm{r}}^*(\vartheta, \underline{\mathrm{x}}), w^*(\vartheta, \underline{\mathrm{x}}))\, \mathrm{d}\vartheta,$$

(25)

where ϑ_{min} and ϑ_{max} are the minimum and maximum values among the three angles ϑ_{1}(x̱), ϑ_{2}(x̱) and ϑ*(λ_{M}, x̱), and where

$$\alpha(\vartheta, \underline{\mathrm{x}}) = \mathrm{sign}(\vartheta - \vartheta^*(\lambda_M, \underline{\mathrm{x}})) - \frac{1}{2}\,\mathrm{sign}(\vartheta - \vartheta_1(\underline{\mathrm{x}})) - \frac{1}{2}\,\mathrm{sign}(\vartheta - \vartheta_2(\underline{\mathrm{x}})).$$

(26)

Note that α(ϑ, x̱) is zero for ϑ ∉ [ϑ_{min}, ϑ_{max}], so the integration bounds in equation (25) can be extended to the nearest view samples outside [ϑ_{min}, ϑ_{max}] to simplify the implementation.
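The weight (26) is cheap to evaluate and exposes the structure of the combined backprojection. In the sketch below (the angle values are hypothetical), λ_{M} is taken beyond λ_{2}(x̱), so ϑ_{1} < ϑ_{2} < ϑ*: the weight is −1 over the π-interval (ϑ_{1}, ϑ_{2}), −2 over the extended range (ϑ_{2}, ϑ*), and 0 outside [ϑ_{1}, ϑ*].

```python
import numpy as np

def alpha(theta, theta_m, theta_1, theta_2):
    """Backprojection weight of equation (26).

    theta_m stands for theta*(lambda_M, x); theta_1 and theta_2 for
    theta_1(x) and theta_2(x).  All angle values here are hypothetical.
    """
    return (np.sign(theta - theta_m)
            - 0.5 * np.sign(theta - theta_1)
            - 0.5 * np.sign(theta - theta_2))

# lambda_M chosen beyond lambda_2(x): theta_1 < theta_2 < theta_m.
t1, t2, tm = 0.0, 3.0, 5.0
```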

As discussed in the previous section, λ_{M} can be chosen freely over a wide range of values when implementing equation (20) for a given point x̱. By construction, the selection of λ_{M} plays a significant role in the range of projections involved in the DBP. To illustrate this, figure 2 shows three different choices of λ_{M} with the corresponding M-lines. The part of the vertex path marked in bold indicates the source positions that contribute to the backprojection terms on the right-hand side of equation (13). The backprojection range spans the interval [λ_{1}(x̱), λ_{2}(x̱)] for λ_{1}(x̱) ≤ λ_{M} ≤ λ_{2}(x̱), whereas it is extended to [λ_{M}, λ_{2}(x̱)] if λ_{M} < λ_{1}(x̱) and to [λ_{1}(x̱), λ_{M}] if λ_{M} > λ_{2}(x̱). The backprojection interval in the case λ_{M} ∈ [λ_{1}(x̱), λ_{2}(x̱)] is precisely the same as in the 1-π methods for TES reconstruction (Katsevich 2002, Noo *et al* 2003, Zou and Pan 2004, Schöndube *et al* 2007a). This interval is exactly covered by the TD window and thus does not allow using the redundant data. On the other hand, the backprojection range extends beyond the boundaries of the TD window when λ_{M} ∉ [λ_{1}(x̱), λ_{2}(x̱)]; in this case, redundant data are actually used for reconstruction at x̱. This interesting result was first noted in Pack *et al* (2005).

M-lines and backprojection range. Reconstruction at x̱ can be achieved by computing and then inverting the HT on any M-line through x̱, but this procedure requires a different range of projections depending on the position **...**

Unfortunately, when evaluating their method, Pack *et al* (2005) found that the amount of noise in the reconstruction at x̱ increases for a choice of λ_{M} ∉ [λ_{1}(x̱), λ_{2}(x̱)], compared to λ_{M} ∈ [λ_{1}(x̱), λ_{2}(x̱)]. This somewhat surprising result can be understood by considering the simplified 2D case of *h* = 0 and *w* = 0. In this case, the relation *g*_{d}(ϑ, *s*_{r}, 0) = −*g*_{d}(ϑ + π, −*s*_{r}, 0) holds, which shows that, in contrast to the standard filtered backprojection (FBP) case, there is a minus sign between opposite filtered projections. If the differentiated backprojection is extended over more than a π-interval, the contributions from data that are 180° apart begin to cancel each other out, whereas they are averaged in the standard FBP. Because there is no noise correlation between measurements that are 180° apart, noise contributions add up regardless of whether we perform a DBP or an FBP reconstruction. Therefore, in the case of the DBP, extending the backprojection range yields an increase in the noise level. Pack *et al* (2005) concluded that a similar behavior can be assumed for the 3D case. In summary, when using the DBP–HT reconstruction method on M-lines, it is not possible to reduce the noise level in reconstructed images just by naively extending the backprojection range over more than the π-interval.

Nonetheless, Pack *et al* (2005) found that noise can be reduced by averaging the three different reconstructions of *f*(x̱) that correspond to the following choices for λ_{M}: λ_{M} = λ_{i}(x̱), λ_{M} ∈ [λ_{1}(x̱), λ_{2}(x̱)] and λ_{M} = λ_{o}(x̱). Unfortunately, this procedure requires a backprojection onto all points along the intersections of three specific M-lines with the support of *f* to obtain the final outcome at any given point x̱; it is thus inefficient and not suitable for practical use. Specifically, the required number of operations is of order *O*(*N*^{5}): for each of the *O*(*N*^{3}) points of the reconstruction grid, the DBP must be evaluated at *O*(*N*) locations along the associated M-lines, and each of these evaluations involves *O*(*N*) projections.

In the remainder of this paper, the ideas from Pack *et al* (2005) and Schöndube *et al* (2007a, 2007b) are combined to obtain a new TES reconstruction algorithm that effectively uses the redundant data while requiring a number of operations of order *O*(*N*^{4}) instead of *O*(*N*^{5}).

In contrast to the inefficient voxel-based method outlined in section 3.1, our approach performs a volume-based reconstruction similar to our π-line method presented in Schöndube *et al* (2007a). More precisely, we perform three volumetric reconstructions of the desired volume *V*, and then compute their average to obtain a result that beneficially involves the redundant data.

Each volumetric reconstruction is obtained by partitioning *V* into a stack of nutating surfaces of M-lines, performing reconstruction on those surfaces and then interpolating to a Cartesian grid. The partitioning of *V* is based on a single parameter, denoted by *w*_{surf}. This parameter specifies a *w*-coordinate on the detector toward which all M-lines of all surfaces are required to point. The three partitionings of *V* that we consider correspond to the following choices for *w*_{surf}: *w*_{surf} = 0, *w*_{surf} = *w*_{min} and *w*_{surf} = *w*_{max}, with *w*_{min} and *w*_{max} being the *w*-coordinates of the first and the last detector row, respectively, as described in section 2.1. The reconstructions of *V* using these three values of *w*_{surf} are denoted as *V**(0), *V**(*w*_{min}) and *V**(*w*_{max}), and their average as $\overline{V^*} = (V^*(0) + V^*(w_{\text{min}}) + V^*(w_{\text{max}}))/3$. These four volumes differ from *V* only due to data noise and discretization errors.
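The final averaging step is elementary. The toy sketch below uses stand-in noisy volumes in place of the actual *V**(·) reconstructions, just to illustrate the variance reduction one hopes for; in the algorithm the three volumes share measured data, so their noise is only partly independent and the real gain is smaller than the independent-noise factor of 3.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (16, 16, 16)
truth = np.ones(shape)  # stand-in for the true volume V

# Stand-ins for V*(0), V*(w_min), V*(w_max): truth plus synthetic noise.
# In the algorithm these come from the three M-line partitionings.
v_mid = truth + 0.1 * rng.standard_normal(shape)
v_lo = truth + 0.1 * rng.standard_normal(shape)
v_hi = truth + 0.1 * rng.standard_normal(shape)

# Average of the three volumetric reconstructions.
v_bar = (v_mid + v_lo + v_hi) / 3.0
```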

In addition to pointing toward *w* = *w*_{surf}, the M-lines forming each surface in any given partitioning of *V* are chosen to be equidistant and parallel to each other when projected onto the (*x, y*)-plane. There is thus one and only one M-line on each surface that intersects the *z*-axis. We use the angle that specifies the source position of this M-line to index the surfaces; this angle is called λ_{surf}. Figure 3 depicts the surface of index λ_{surf} for the partitioning of *V* based on a given value of *w*_{surf}. We denote this surface as (λ_{surf}; *w*_{surf}), with a semicolon between the two parameters, λ_{surf} and *w*_{surf}, to emphasize that the first parameter is a variable, whereas the second is a constant for a given partitioning of *V*.

Depiction of the M-line surface (λ_{surf}; *w*_{surf}). Each M-line on (λ_{surf}; *w*_{surf}) hits the detector at *w* = *w*_{surf}, i.e., on a fixed row of the virtual detector onto which the data are mapped by the wedge rebinning. By varying **...**

To specify locations on (λ_{surf}; *w*_{surf}), we use two Cartesian coordinates, *s* and τ, that are defined within the (*x, y*)-plane, as shown in figure 3. These coordinates are measured along the following orthonormal axes, which rotate with the direction of the M-lines:

$$\underline{\mathrm{e}}_{s}(\lambda_{\text{surf}}) = [-\sin(\lambda_{\text{surf}}+\lambda_{0}),\ \cos(\lambda_{\text{surf}}+\lambda_{0}),\ 0],$$

(27)

$$\underline{\mathrm{e}}_{\tau}(\lambda_{\text{surf}}) = [-\cos(\lambda_{\text{surf}}+\lambda_{0}),\ -\sin(\lambda_{\text{surf}}+\lambda_{0}),\ 0].$$

(28)

Vector e̱_{τ}(λ_{surf}) is along the projection of each M-line onto the (*x, y*)-plane and e̱_{s}(λ_{surf}) is defined such that e̱_{s}(λ_{surf}) × e̱_{τ} (λ_{surf}) = e̱_{z} for any λ_{surf}. Coordinate *s* identifies the M-lines on the surface, whereas τ is the positioning parameter on the M-line at the location *s*. The meaning of *s* is that of the signed distance from the origin to the projection of any M-line onto the (*x, y*)-plane. Location (*s*, τ) = (0, 0) corresponds to the intersection of (λ_{surf}; *w*_{surf}) with the *z*-axis.
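As a quick sanity check on equations (27) and (28), the following numpy sketch verifies that e̱_{s} and e̱_{τ} are orthonormal and that e̱_{s} × e̱_{τ} = e̱_{z}, as stated above. The value of λ_{0} is an arbitrary placeholder here.

```python
import numpy as np

def e_s(lam_surf, lam0=0.0):
    """Unit vector along the s-axis, equation (27)."""
    a = lam_surf + lam0
    return np.array([-np.sin(a), np.cos(a), 0.0])

def e_tau(lam_surf, lam0=0.0):
    """Unit vector along the tau-axis, equation (28)."""
    a = lam_surf + lam0
    return np.array([-np.cos(a), -np.sin(a), 0.0])

# Orthonormality and right-handedness: e_s x e_tau = e_z for any lam_surf.
lam = 0.7
es, et = e_s(lam), e_tau(lam)
assert abs(np.dot(es, et)) < 1e-12
assert abs(np.linalg.norm(es) - 1.0) < 1e-12
assert np.allclose(np.cross(es, et), [0.0, 0.0, 1.0])
```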

For any given *w*_{surf}, the variables *s*, τ and λ_{surf} parameterize together the three-dimensional space within the helix cylinder. The Cartesian coordinates of the point, (*s*, τ, λ_{surf}; *w*_{surf}), specified by these three variables are

$$x = -s\,\sin(\lambda_{\text{surf}}+\lambda_{0}) - \tau\,\cos(\lambda_{\text{surf}}+\lambda_{0}),$$

(29)

$$y = s\,\cos(\lambda_{\text{surf}}+\lambda_{0}) - \tau\,\sin(\lambda_{\text{surf}}+\lambda_{0}),$$

(30)

$$z = z_{0} + h\,\lambda_{M}^{*}(\lambda_{\text{surf}}, s) + \left(\tau + \sqrt{R_{0}^{2}-s^{2}}\right)\cdot\frac{w_{\text{surf}}}{D},$$

(31)

where $\lambda_{M}^{*}(\lambda_{\text{surf}}, s)$ is the value of λ_{M} that corresponds to the M-line at coordinate *s* on the surface (λ_{surf}; *w*_{surf}), that is

$$\lambda_{M}^{*}(\lambda_{\text{surf}}, s) = \lambda_{\text{surf}} + \arcsin(s/R_{0}).$$

(32)

Note that, by construction, the quantity $\vartheta^{*}\left(\lambda_{M}^{*}(\lambda_{\text{surf}}, s), \underline{\mathrm{x}}(s, \tau, \lambda_{\text{surf}}; w_{\text{surf}})\right)$ based on equation (18) is the same for all (*s*, τ)-locations on (λ_{surf}; *w*_{surf}). It is equal to

$${\vartheta}_{\text{surf}}={\mathrm{\lambda}}_{\text{surf}}+\pi /2.$$

(33)

Note also that the *z*-coordinate of (*s*, τ, λ_{surf}; *w*_{surf}), given by equation (31), is a continuous function of λ_{surf} that tends toward +∞ (respectively, −∞) when λ_{surf} tends to +∞ (respectively, −∞). Hence, any point *Q* inside the FOV can be associated with at least one value of λ_{surf} such that *Q* belongs to (λ_{surf}; *w*_{surf}).
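The coordinate mapping of equations (29)–(32) can be sketched as follows. The geometry values *R*_{0}, *D*, *h* and *z*_{0} below are hypothetical placeholders, not the values used in the paper.

```python
import numpy as np

# Hypothetical geometry values (R0: helix radius, D: distance used for the
# virtual detector, h: pitch parameter, z0: axial offset, lam0: angular offset).
R0, D, h, z0, lam0 = 570.0, 1040.0, 10.0, 0.0, 0.0

def point_on_surface(s, tau, lam_surf, w_surf):
    """Cartesian coordinates of the point (s, tau, lam_surf; w_surf),
    following equations (29)-(32)."""
    a = lam_surf + lam0
    x = -s * np.sin(a) - tau * np.cos(a)              # eq (29)
    y =  s * np.cos(a) - tau * np.sin(a)              # eq (30)
    lam_M = lam_surf + np.arcsin(s / R0)              # eq (32)
    z = z0 + h * lam_M + (tau + np.sqrt(R0**2 - s**2)) * w_surf / D  # eq (31)
    return x, y, z

# As stated in the text, (s, tau) = (0, 0) lies on the z-axis.
x, y, z = point_on_surface(0.0, 0.0, 0.5, 0.0)
assert abs(x) < 1e-12 and abs(y) < 1e-12
```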

To accommodate the geometry of nutating slices, all reconstructions of *V* are performed with equal, equidistant sampling in *x* and *y*; *s* and τ are sampled using the same grid as that used for *x* and *y*. The number of samples in *x, y, s* and τ is denoted as *N*_{lat} and the sampling step as Δ_{lat}, where ‘lat’ stands for lateral. The main steps to compute *V**(*w*_{surf}) on a Cartesian grid of *N*_{lat} × *N*_{lat} × *N*_{z} voxels of size Δ_{lat} × Δ_{lat} × Δ*z* are as follows:

1. Calculate the range of λ_{surf} needed to fully cover *V* by the M-line surfaces corresponding to the fixed value of *w*_{surf}.
2. Create a stack of M-line surfaces, (λ_{surf}; *w*_{surf}), spaced by Δλ_{surf} = Δ*z/h* over the calculated range of λ_{surf}.
3. Rebin the data to the wedge geometry using the method described in Flohr *et al* (2005), which is applicable to data collected both without and with FFS.
4. Differentiate the rebinned data in *s*_{r} using a simple two-point scheme, i.e., for a generic function *k*(*s*_{r}):
   $$k'(s_{r} + \Delta s_{r}/2) \simeq (k(s_{r} + \Delta s_{r}) - k(s_{r}))/\Delta s_{r}.$$
   (34)
5. Weight the data in *w* according to equation (22).
6. Compute the DBP onto the stack of M-line surfaces according to equation (25).
7. Apply a finite HT inversion on each M-line.
8. Linearly interpolate from the M-line surfaces to Cartesian coordinates. We first interpolate from (*s*, τ, λ_{surf}) to an intermediate (*x, y*, λ_{surf}) grid using equations (29) and (30) and then from these intermediate coordinates to (*x, y, z*) using equation (31).
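The two-point differentiation of equation (34) can be sketched in numpy; note that the derivative samples live on a grid shifted by Δ*s*_{r}/2 relative to the input samples.

```python
import numpy as np

def two_point_derivative(k, ds):
    """Two-point derivative scheme of equation (34); the output is defined
    on a grid shifted by ds/2 relative to the input samples k."""
    return (k[1:] - k[:-1]) / ds

# For a linear function the scheme is exact.
ds = 0.1
s = np.arange(0.0, 1.0, ds)
k = 3.0 * s + 2.0
kp = two_point_derivative(k, ds)
assert np.allclose(kp, 3.0)
```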

Step 6 is the most demanding in terms of computational effort and discretization errors. We provide further details on our implementation of this step in section 3.3. Before going into these details, however, two observations are in order.

First, the steps outlined above are only applicable when there is no interrupted illumination for all points within the FOV. We show in the appendix that this condition is satisfied as long as the pitch factor, *p*, and the maximum fan angle, γ_{max}, are selected as in conditions (9). Interestingly, the condition of no interrupted illumination is closely linked to the requirement that the M-line surfaces have no intersection with each other within the FOV when |*w*_{surf}| = *w*_{max}. In fact, under conditions (9), relation (31) with *s* and τ defined from equations (29) and (30) establishes a one-to-one relationship between *z* and λ_{surf} for any (*x, y*) in the FOV and any |*w*_{surf}| ≤ *w*_{max}. This property facilitates the implementation of step 8 and is discussed in the appendix along with the issue of interrupted illumination.

Second, the three targeted reconstructions of *V* involve different data as they are based on different values of *w*_{surf}. On one hand, reconstruction *V**(0) involves no data outside the TD window. On the other hand, reconstructions *V**(*w*_{min}) and *V**(*w*_{max}) together involve all measured data, with *V**(*w*_{min}) using the data below and within the TD window, and *V**(*w*_{max}) using the data within and above the TD window. This is illustrated in figure 4. Following our discussion in section 3.1, the noise level in *V**(*w*_{min}) and in *V**(*w*_{max}) may be expected to be higher than in *V**(0), whereas the average $\overline{{V}^{*}}$ may be expected to be significantly less noisy than any of these three reconstructions of *V*, as is demonstrated in the following section.

Given that the backprojection step involves many arithmetic operations that do not depend on the value of *w*_{surf}, it is more efficient to perform the backprojection for the three volumes, *V**(0), *V**(*w*_{min}) and *V**(*w*_{max}), altogether. We describe the steps for such a joint backprojection below. These steps compute a numerically stable version of equation (25) using the simple rectangular rule for numerical integration. The numerical stabilization amounts to replacing $\alpha(\vartheta, \underline{\mathrm{x}})$ in (25) by

$$\alpha_{a}(\vartheta, \underline{\mathrm{x}}) = \text{sign}_{a}(\vartheta - \vartheta^{*}(\lambda_{M}, \underline{\mathrm{x}})) - \tfrac{1}{2}\,\text{sign}(\vartheta - \vartheta_{1}(\underline{\mathrm{x}})) - \tfrac{1}{2}\,\text{sign}(\vartheta - \vartheta_{2}(\underline{\mathrm{x}})),$$

(35)

where sign_{a}(·) is a smooth approximation of the signum function, namely,

$$\text{sign}_{a}(\vartheta) = \begin{cases} -1 & \text{if}\ \vartheta \le -\Delta\vartheta \\ 2\frac{\vartheta}{\Delta\vartheta} + \left(\frac{\vartheta}{\Delta\vartheta}\right)^{2} & \text{if}\ -\Delta\vartheta < \vartheta < 0 \\ 2\frac{\vartheta}{\Delta\vartheta} - \left(\frac{\vartheta}{\Delta\vartheta}\right)^{2} & \text{if}\ 0 \le \vartheta < \Delta\vartheta \\ 1 & \text{if}\ \vartheta \ge \Delta\vartheta. \end{cases}$$

(36)

Note that only the first appearance of the signum function is smoothed; the other two signum functions are left untouched. These two functions are sufficiently randomly sampled over ϑ, *x, y* and *z* that no smoothing is needed, whereas the first signum function is used on a fairly deterministic basis that requires numerical stabilization.
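A minimal implementation of the smoothed signum of equation (36), with checks that it is continuous at the break points ±Δϑ and matches the exact signum outside the transition zone.

```python
import numpy as np

def sign_a(theta, d_theta):
    """Smoothed signum of equation (36): C1-continuous, with the
    transition confined to [-d_theta, d_theta]."""
    t = theta / d_theta
    return np.where(t <= -1.0, -1.0,
           np.where(t < 0.0, 2.0 * t + t**2,
           np.where(t < 1.0, 2.0 * t - t**2, 1.0)))

dt = 0.05
assert sign_a(-2.0 * dt, dt) == -1.0 and sign_a(2.0 * dt, dt) == 1.0
assert sign_a(0.0, dt) == 0.0
# Continuity at the break points +/- d_theta.
assert abs(sign_a(-dt + 1e-9, dt) - (-1.0)) < 1e-7
assert abs(sign_a(dt - 1e-9, dt) - 1.0) < 1e-7
```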

Before starting the backprojection, we compute a range of λ_{surf} that allows covering *V* for all three values of *w*_{surf}. This range is defined as [λ_{surf,min}, λ_{surf,max}], where λ_{surf,min} is the minimum value of λ_{surf} for which (λ_{surf}; *w*_{max}) just intersects *V*, and λ_{surf,max} is the maximum value of λ_{surf} for which (λ_{surf}; *w*_{min}) just intersects *V*. Since the M-line surfaces are sampled with a step Δλ_{surf} = Δ*z/h*, the number of surfaces for each partition of *V* is then selected as

$$N_{\text{surf}} = \text{ceil}\left(\frac{\lambda_{\text{surf},\text{max}} - \lambda_{\text{surf},\text{min}}}{\Delta\lambda_{\text{surf}}}\right),$$

(37)

where ceil(·) is the operation that rounds its argument up to the nearest integer. By construction, *N*_{surf} differs from *N*_{z} only by a small overhead resulting from the obliqueness of the M-line surfaces relative to the (*x, y*)-plane.
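Equation (37) in code, with hypothetical sampling values chosen only for illustration.

```python
import math

# Hypothetical sampling values: slice spacing dz, pitch parameter h,
# and a covering range of lam_surf.
dz, h = 0.5, 8.0
d_lam_surf = dz / h                           # surface spacing (step 2)
lam_surf_min, lam_surf_max = 0.0, 10.0

# Equation (37): number of M-line surfaces per partitioning of V.
N_surf = math.ceil((lam_surf_max - lam_surf_min) / d_lam_surf)
assert N_surf == 160
```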

The backprojection is performed over the range [λ_{surf,min} + π/2, λ_{surf,max} + π/2] and fills the entries of three arrays of size *N*_{lat} × *N*_{lat} × *N*_{surf}, where each array corresponds to one of the desired values of *w*_{surf}. The steps for any filtered projection of angle ϑ that is within the backprojection range are as follows:

- For every λ_{surf} ∈ [λ_{surf,min}, λ_{surf,max}]:
  - Test if the current projection contributes to samples that are within the FOV on (λ_{surf}; *w*_{surf}) for any *w*_{surf} ∈ {0, *w*_{min}, *w*_{max}}. If not, increment the value of λ_{surf}; otherwise, proceed with the steps below. The test is always (conservatively) positive when $\widehat{\vartheta} = \vartheta - \vartheta_{\text{surf}}$ satisfies the following condition, with ϑ_{surf} = λ_{surf} + π/2:
    $$|\widehat{\vartheta}| < \gamma_{\text{max}} + (N_{\text{rows}}\,\Delta w)\,\frac{R_{0}(1+\sin\gamma_{\text{max}})}{Dh}.$$
    (38)
    This condition can be used to speed up this first step.
  - For every *s* on the grid perform the following:
    - Determine the range of τ which is inside the FOV for the current value of *s*.
    - For every τ within that interval:
      - Compute the value of ${s}_{\mathrm{r}}^{*}$ for the current value of (*s*, τ) according to equation (23).
      - For this value of ${s}_{\mathrm{r}}^{*}$, compute *w*_{top} and *w*_{bottom}, the upper and lower boundaries of the TD window, using linear interpolation through tabulated values of equation (16).
      - For each *w*_{surf} ∈ {0, *w*_{min}, *w*_{max}}:
        - Compute the value of *w*^{*} for the current (*s*, τ, *w*_{surf}) triple according to equations (24) and (31), using again interpolation through tabulated values, this time of the functions arcsin(·) and $\sqrt{\cdot}$.
        - Compute α_{a}(ϑ, x̱) of equation (35) using the formula
          $$\alpha_{a}(\vartheta,\underline{\mathrm{x}}) = \begin{cases} \text{sign}_{a}(\widehat{\vartheta}) - 1 & \text{if}\ w^{*} < w_{\text{bottom}} \\ \text{sign}_{a}(\widehat{\vartheta}) & \text{if}\ w_{\text{bottom}} < w^{*} < w_{\text{top}} \\ \text{sign}_{a}(\widehat{\vartheta}) + 1 & \text{if}\ w^{*} > w_{\text{top}}. \end{cases}$$
          (39)
        - Compute ${\alpha}_{a}(\vartheta,\underline{\mathrm{x}})\cdot{g}_{\mathrm{d}}(\vartheta,{s}_{\mathrm{r}}^{*},{w}^{*})$, with bilinear interpolation, as the backprojection contribution to the location associated with the current values of *w*_{surf}, τ, *s* and λ_{surf}.
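The weight computation of equation (39) can be sketched as follows. Here sign_a is the smoothed signum of equation (36), written in a compact but equivalent form, and all numerical values are illustrative only.

```python
import numpy as np

def sign_a(theta, d_theta):
    """Smoothed signum of equation (36), compact equivalent form:
    for |t| <= 1, sign_a = 2t - sign(t) t^2; clipped to +/-1 outside."""
    t = np.clip(theta / d_theta, -1.0, 1.0)
    return 2.0 * t - np.sign(t) * t**2

def alpha_a(theta_hat, w_star, w_bottom, w_top, d_theta):
    """Stabilized redundancy weight of equation (39): contributions below,
    inside and above the TD window get offsets -1, 0 and +1."""
    base = sign_a(theta_hat, d_theta)
    if w_star < w_bottom:
        return base - 1.0
    if w_star > w_top:
        return base + 1.0
    return base

# Far from the smoothing zone (|theta_hat| >= d_theta), sign_a = +/-1,
# so alpha_a takes values in {-2, -1, 0, 1, 2}.
assert alpha_a(1.0, 0.0, -1.0, 1.0, 0.05) == 1.0    # inside the TD window
assert alpha_a(1.0, 2.0, -1.0, 1.0, 0.05) == 2.0    # above the TD window
assert alpha_a(-1.0, -2.0, -1.0, 1.0, 0.05) == -2.0 # below the TD window
```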

The required number of operations of this algorithm is of order *O*(*N*^{4}), where *N* is the number of voxels desired in *x, y* and *z*; thus, *N*_{lat} = *N* and *N*_{z} = *N*.

We present here preliminary results on the performance of our algorithm. These results were obtained from computer-simulated data defined using parameters that are representative of state-of-the-art commercial CT scanners; see table 1. Two configurations were considered for the data simulation, namely, without and with a lateral FFS. Moreover, the effect of the finite size of the focal spot and of the detector elements was consistently included, using 3 × 3 detector-lets and 2 × 2 source-lets at each source position, with exponential averaging of all line integrals connecting a source-let to a detector-let.

Three experiments were performed. In the first experiment, we used the FORBILD head phantom to visually assess robustness in terms of CB artifacts, discretization errors and effective handling of the lateral FFS feature. Figure 5 displays the outcome of this experiment for the slice *z* = 0 cm, as obtained on a grid of 401 × 401 square pixels of side 0.75 mm. The top row of this figure shows the results of our algorithm without (left) and with (right) lateral FFS, whereas the bottom row shows the ground truth and the difference image between the two images of the first row. These results are complemented by figure 6, where the profile through the ear at *y* = 0 cm is given for the reconstructions without and with FFS, and for the ground truth. It can be observed that the new algorithm yields accurate results, and that the FFS feature is effectively accounted for by the algorithm, in that it yields a clear improvement in resolution.

Reconstructions of the FORBILD head phantom (with right ear only). Upper row: DBP–HT reconstructions from data simulated without (left) and with (right) angular FFS. Bottom row: ground truth (left) and difference image between the two reconstructions **...**

The second experiment aimed at demonstrating the effective use of the data outside the TD window, and at assessing the magnitude of the contribution of these data to the reconstruction. To achieve this goal, we performed two reconstructions using the FORBILD head phantom data of the first experiment. One reconstruction was obtained with a binary weight applied to the data that set all values inside the TD window to zero and left the other values unchanged. The other reconstruction was obtained with all detector values outside the TD window set to zero. Naturally, neither of these two reconstructions is theoretically exact, but their sum is, owing to the linearity of all operations in our algorithm. This data-splitting technique allows singling out the impact of the data outside the TD window. The two reconstructions and their sum are displayed in figure 7. It can be observed that the data outside the TD window significantly impact the whole reconstruction process.
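The data-splitting argument rests only on linearity, which a toy example can illustrate: for any linear reconstruction operator (here an arbitrary matrix standing in for the actual algorithm), the reconstructions from complementary binary weightings of the data sum to the reconstruction from the full data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the reconstruction: any linear operator A applied to
# the data vector d. The actual algorithm is likewise linear in the data.
A = rng.standard_normal((8, 12))
d = rng.standard_normal(12)
reco = lambda data: A @ data

# Complementary binary weights, mimicking "inside" vs "outside" the TD window.
inside = np.zeros(12)
inside[4:8] = 1.0
outside = 1.0 - inside

# Neither partial reconstruction is exact on its own, but their sum equals
# the reconstruction from the full data, by linearity.
assert np.allclose(reco(inside * d) + reco(outside * d), reco(d))
```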

Impact of redundant data in our reconstruction process. The top row shows the reconstruction of the FORBILD head phantom with no ears, as obtained when setting to zero either all data outside the TD window (left) or all data inside the TD window (right). **...**

The third and last experiment aimed at verifying that noise performance behaves as expected. For this experiment, we used the FORBILD thorax phantom with addition of Poisson noise corresponding to an emission of 150 000 photons per ray. Figure 8 displays the reconstructions of slice *z* = 2.18 cm through each of the three sub-volumes, *V**(0), *V**(*w*_{min}) and *V**(*w*_{max}), and also through the combined volume $\overline{{V}^{*}}$. These reconstructions were performed on a grid of 512 × 512 square pixels of side 0.9 mm. We also included the difference image between the reconstructions of *V**(0) and $\overline{{V}^{*}}$ from noise-free data. This image shows that the difference in resolution between *V**(0) and $\overline{{V}^{*}}$ is very small, as the edges in the phantom are hardly visible, whereas the noise level is significantly lower in $\overline{{V}^{*}}$. Quantitative measurements of the standard deviation within seven regions of interest, labeled A through G, are given in table 2 and support the visual assessment.
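The expected noise behavior of the average can be illustrated with a toy model: averaging three reconstructions with fully independent noise reduces the standard deviation by a factor of √3. In the actual algorithm the three volumes share the data within the TD window, so the real gain is smaller than this idealized bound.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 1.0, 200_000

# Three reconstructions of the same volume with independent noise.
v0, v1, v2 = (rng.normal(0.0, sigma, n) for _ in range(3))
v_bar = (v0 + v1 + v2) / 3.0

# For fully independent noise the standard deviation drops by sqrt(3).
ratio = v0.std() / v_bar.std()
assert abs(ratio - np.sqrt(3.0)) < 0.05
```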

We have presented a new algorithm for helical CB CT that allows TES reconstruction while beneficially using all measured data. This algorithm was designed as an extension of the π-line DBP–HT algorithm presented in Schöndube *et al* (2007a) to the more general case of M-lines. The noise improvement that this algorithm offers is achieved using a principle that was first described in Pack *et al* (2005). Specifically, three different reconstructions of the VOI based on data sets that are only partially overlapping are averaged together to obtain a noise-improved image. Each individual reconstruction is obtained using a differentiated backprojection onto a specific type of M-lines. To ensure the usage of all redundant data on the detector, our proposed method uses one set of M-lines directed to the upper end of the detector, one set directed to the lower end of the detector and one set directed to the central detector row. Unlike the technique presented in Pack *et al* (2005), our algorithm performs the reconstruction volume-wise, instead of voxel-wise; this allows a significant decrease in computational effort, which we have shown to be of the order of the cubic root of the number of voxels.

Our algorithm only applies when there is no interrupted illumination for all points within the FOV, and we have proven that conditions (9) guarantee this to be the case. We did not discuss so far the optimality of these conditions. Regarding this issue, we have observed that the M-line surfaces defined with |*w*_{surf}| = *w*_{max} intersect each other non-tangentially within the FOV when *p* is decreased below *p*_{min}. Such intersections appear first near the edge of the FOV, and can be shown to imply the appearance of interrupted illumination. For this reason, we believe that conditions (9) are optimal.

Conditions (9) are not very restrictive for a small FOV, such as that corresponding to a head scan. But they are very stringent for thorax imaging, especially with wide patients. We are actively investigating new methodologies allowing the handling of interrupted illumination, so that lower pitch values may be considered for an arbitrary FOV size.

Preliminary results from computer-simulated data were provided to show effective use of redundant data and also of the angular FFS. Further quantification of gains in image quality, especially for real data, is important. Noise comparison with other approximate and TES algorithms is also of high interest. However, these two topics are beyond the scope of this paper, particularly because a careful resolution analysis needs to be performed at the same time. We will report on such evaluations in the near future.

An important issue when evaluating noise is whether uniform weighting of independent M-line results is optimal or not. Also, reconstruction using more than three sets of M-lines may be preferred to mitigate patient-motion artifacts along with noise. An early investigation of these issues was reported in Köhler *et al* (2008) at the 2008 IEEE Medical Imaging Conference, and yielded very encouraging results regarding patient-motion handling. No algorithm details were provided in Köhler *et al* (2008), but it is likely that a methodology that is very similar to ours was used. A first description of our algorithm was also given at the 2008 IEEE Medical Imaging Conference (Schöndube *et al* 2008).

This work was partially supported by NIH grant R01 EB007236 and by Siemens Healthcare. Its contents are solely the responsibility of the authors.

In this appendix, we discuss how conditions (9) guarantee that there is no interrupted illumination for the points within the FOV. Our proof involves a close examination of intersections between the surfaces of M-lines defined with |*w*_{surf}| = *w*_{max}. More precisely, we demonstrate that: (i) under conditions (9) and the assumption that |*w*_{surf}| ≤ *w*_{max}, the surfaces (λ_{surf}; *w*_{surf}) do not intersect each other within the FOV when varying λ_{surf}, (ii) there is no interrupted illumination when the surfaces (λ_{surf}; *w*_{surf}) do not intersect each other within the FOV for |*w*_{surf}| = *w*_{max}. Taken together, these two statements prove that there is no interrupted illumination under conditions (9).

To best understand our argument, recall that the condition *p* < *p*_{max} is required for any TES reconstruction, and is thus independent from our algorithm. Recall also that the object is always within the FOV, so the CB projection of a point that is within the FOV can only leave the detector through its top and bottom rows. Last, note that the rebinning step of equation (14) does not change the data properties with respect to interrupted illumination: if there is no interrupted illumination in the initial data geometry, there is also none after rebinning, and conversely.

To prove statement (i) above, we rewrite equation (31) into an expression that yields the *z*-coordinate of the point (*x, y, z*) that belongs to (λ_{surf}; *w*_{surf}) at given *x* and *y*:

$$z(x, y, \lambda_{\text{surf}}; w_{\text{surf}}) = z_{0} + h\lambda_{\text{surf}} + h\,\arcsin\left(\frac{s(x, y, \lambda_{\text{surf}})}{R_{0}}\right) + \frac{w_{\text{surf}}}{D}\left(\tau(x, y, \lambda_{\text{surf}}) + \sqrt{R_{0}^{2} - s(x, y, \lambda_{\text{surf}})^{2}}\right)$$

(A.1)

with

$$s(x, y, \lambda_{\text{surf}}) = -x\,\sin(\lambda_{\text{surf}}+\lambda_{0}) + y\,\cos(\lambda_{\text{surf}}+\lambda_{0}),$$

(A.2)

$$\tau(x, y, \lambda_{\text{surf}}) = -x\,\cos(\lambda_{\text{surf}}+\lambda_{0}) - y\,\sin(\lambda_{\text{surf}}+\lambda_{0}).$$

(A.3)
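Equations (A.2) and (A.3) simply invert the in-plane rotation of equations (29) and (30), which can be checked with a round trip (the value of λ_{0} is arbitrary here):

```python
import numpy as np

lam0 = 0.3  # hypothetical angular offset, for illustration only

def to_xy(s, tau, lam_surf):
    """Equations (29) and (30): (s, tau) -> (x, y)."""
    a = lam_surf + lam0
    return (-s * np.sin(a) - tau * np.cos(a),
             s * np.cos(a) - tau * np.sin(a))

def to_s_tau(x, y, lam_surf):
    """Equations (A.2) and (A.3): the inverse in-plane rotation."""
    a = lam_surf + lam0
    return (-x * np.sin(a) + y * np.cos(a),
            -x * np.cos(a) - y * np.sin(a))

# Round trip: (s, tau) -> (x, y) -> (s, tau).
s, tau, lam = 123.4, -56.7, 1.1
x, y = to_xy(s, tau, lam)
s2, tau2 = to_s_tau(x, y, lam)
assert abs(s2 - s) < 1e-9 and abs(tau2 - tau) < 1e-9
```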

Then, we examine the behavior of $\partial z/\partial\lambda_{\text{surf}}$ for any (*x, y*) within the FOV. Straightforward calculations yield

$$\frac{\partial z}{\partial \lambda_{\text{surf}}} = \left(h - \frac{w_{\text{surf}}\, s(x, y, \lambda_{\text{surf}})}{D}\right)\left(1 + \frac{\tau(x, y, \lambda_{\text{surf}})}{\sqrt{R_{0}^{2} - s(x, y, \lambda_{\text{surf}})^{2}}}\right)$$

(A.4)

$$= \left(h + \frac{w_{\text{surf}}}{D}\cdot r\,\sin(\lambda_{\text{surf}} - \phi)\right)\left(1 - \frac{r\,\cos(\lambda_{\text{surf}} - \phi)}{\sqrt{R_{0}^{2} - r^{2} + r^{2}\cos^{2}(\lambda_{\text{surf}} - \phi)}}\right)$$

(A.5)

$$= \left(h + \frac{w_{\text{surf}}}{D}\cdot r\,\sin(\lambda_{\text{surf}} - \phi)\right) k(\cos(\lambda_{\text{surf}} - \phi)),$$

(A.6)

where *r* ≥ 0 and ϕ ∈ [0, 2π) are the polar coordinates of (*x, y*), and where $k(t)=1-rt/\sqrt{R_{0}^{2}-{r}^{2}+{r}^{2}{t}^{2}}$. It turns out that *k*(*t*) > 0 for all *t* when *r* < *R*_{0} because

- $k'(t)=-r(R_{0}^{2}-r^{2})/(R_{0}^{2}-r^{2}+r^{2}t^{2})^{3/2}<0$ for any *t* if *r* < *R*_{0},
- lim_{t→−∞} *k*(*t*) = 2 and lim_{t→+∞} *k*(*t*) = 0.

Therefore, $\partial z/\partial\lambda_{\text{surf}} > 0$ whenever *r* < *R*_{0} and |*w*_{surf}|*r* < *hD*. However, under conditions (9), we have *p*_{min} < *p*, which can easily be rewritten in the form *w*_{max}*R*_{0} sin γ_{max} < *hD* from the definition of *p*, *w*_{max} and *p*_{min}. Given that *R*_{0} sin γ_{max} is the radius of the FOV, we can conclude at this stage that $\partial z/\partial\lambda_{\text{surf}} > 0$ for any (*x, y*) within the FOV. This proves statement (i).
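The closed form (A.4) and the positivity claim can be checked numerically. The geometry values below are hypothetical, and the chosen point satisfies |*w*_{surf}|*r* < *hD*.

```python
import numpy as np

# Hypothetical geometry (R0: helix radius, D: detector distance, h: pitch
# parameter); lam0 and z0 set to zero for simplicity.
R0, D, h, z0, lam0 = 570.0, 1040.0, 10.0, 0.0, 0.0

def z_of(x, y, lam, w_surf):
    """z-coordinate of the surface point above (x, y), equation (A.1)."""
    a = lam + lam0
    s = -x * np.sin(a) + y * np.cos(a)        # (A.2)
    tau = -x * np.cos(a) - y * np.sin(a)      # (A.3)
    return (z0 + h * lam + h * np.arcsin(s / R0)
            + w_surf / D * (tau + np.sqrt(R0**2 - s**2)))

def dz_dlam(x, y, lam, w_surf):
    """Closed-form derivative, equation (A.4)."""
    a = lam + lam0
    s = -x * np.sin(a) + y * np.cos(a)
    tau = -x * np.cos(a) - y * np.sin(a)
    return (h - w_surf * s / D) * (1.0 + tau / np.sqrt(R0**2 - s**2))

# Check (A.4) against a central difference of (A.1), and the positivity
# d z / d lam_surf > 0, since here |w_surf| * r = 35 * 170 < h * D = 10400.
x, y, lam, w = 150.0, -80.0, 0.4, 35.0
eps = 1e-6
fd = (z_of(x, y, lam + eps, w) - z_of(x, y, lam - eps, w)) / (2 * eps)
assert abs(fd - dz_dlam(x, y, lam, w)) < 1e-5
assert dz_dlam(x, y, lam, w) > 0.0
```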

The proof of statement (ii) goes as follows. If there were interrupted illumination, there would exist a point within the FOV that is projected at least twice onto *w* = *w*_{max} (or *w* = *w*_{min}). But any point that is on the M-line surface (λ_{surf}; *w*_{surf}) sits on an M-line that represents the course of a ray projecting it onto the detector row *w* = *w*_{surf}, with the direction ϑ = λ_{surf} + π/2 in the rebinned geometry. Thus, if a point is projected onto the detector row *ŵ* = *w*_{max} (or *ŵ* = *w*_{min}) twice, say under the rebinned projection angles ϑ_{1} and ϑ_{2}, it must also be located on two M-line surfaces, namely (ϑ_{1} − π/2; *ŵ*) and (ϑ_{2} − π/2; *ŵ*). This is only possible if those two M-line surfaces intersect. Hence, there is no interrupted illumination if the surfaces (λ_{surf}; *w*_{surf}) do not intersect each other within the FOV for |*w*_{surf}| = *w*_{max}.

^{3}In this context, [*a, b*] means [*b, a*] when *b* < *a*.

- Bontus C, Köhler T, Proksa R. A quasiexact reconstruction algorithm for helical CT using a 3-pi acquisition. Med. Phys. 2003;30:2493–2502. [PubMed]
- Bontus C, Köhler T, Proksa R. EnPiT: filtered back-projection algorithm for helical CT using an n-pi acquisition. IEEE Trans. Med. Imaging. 2005;24:977–986. [PubMed]
- Danielsson PE, Edholm P, Eriksson J, Magnusson Seger M. Towards exact reconstruction for helical cone-beam scanning of long objects. a new detector arrangement and a new completeness condition. In: Townsend DW, Kinahan PE, editors. Proc. 1997 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine; Pittsburgh, PA. 1997. pp. 141–144.
- Feldkamp LA, Davis LC, Kress JW. Practical cone-beam algorithm. J. Opt. Soc. Am. A. 1984;1:612–619.
- Flohr T, Stierstorfer K, Bruder H, Simon J, Polacin A, Schaller S. Image reconstruction and image quality evaluation for a 16-slice CT scanner. Med. Phys. 2003;30:832–845. [PubMed]
- Flohr TG, Stierstorfer K, Ulzheimer S, Bruder H, Primak AN, McCollough CH. Image reconstruction and image quality evaluation for a 64-slice CT scanner with z-flying focal spot. Med. Phys. 2005;32:2536–2547. [PubMed]
- Heuscher D, Brown K, Noo F. Redundant data and exact helical cone-beam reconstruction. Phys. Med. Biol. 2004;49:2219–2238. [PubMed]
- Katsevich A. On two versions of a 3π algorithm for spiral CT. Phys. Med. Biol. 2004;49:2129–2143. [PubMed]
- Katsevich A. 3PI algorithms for helical computer tomography. Adv. Appl. Math. 2006;36:213–250.
- Katsevich A, Zamyatin AA, Silver MD. Optimized reconstruction algorithm for helical CT with fractional pitch between 1PI and 3PI. In: Beekman F, Kachelrieß M, editors. Proc. 2007 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine; Lindau, Germany. 2007. pp. 333–336.
- Katsevich AI. Analysis of an exact inversion algorithm for spiral cone-beam CT. Phys. Med. Biol. 2002;47:2583–2597. [PubMed]
- Köhler T, Bontus C, Koken P. The radon-split method for helical cone-beam CT and its application to nongated reconstruction. IEEE Trans. Med. Imaging. 2006;25:882–897. [PubMed]
- Köhler T, Bontus C, Proksa R. BPF reconstruction for helical CT using all data; IEEE Nuclear Science Symp. Conf. Record NSS’08; 2008. pp. 4154–4156.
- Kudo H, Rodet T, Noo F, Defrise M. Exact and approximate algorithms for helical cone-beam CT. Phys. Med. Biol. 2004;49:2913–2931. [PubMed]
- Noo F, Clackdoyle R, Pack J. A two-step Hilbert transform method for 2D image reconstruction. Phys. Med. Biol. 2004;49:3903–3923. [PubMed]
- Noo F, Pack J, Heuscher D. Exact helical reconstruction using native cone-beam geometries. Phys. Med. Biol. 2003;48:3787–3818. [PubMed]
- Pack J, Noo F, Clackdoyle R. Cone-beam reconstruction using the backprojection of locally filtered projections. IEEE Trans. Med. Imaging. 2005;24:70–85. [PubMed]
- Proksa R, Köhler T, Grass M, Timmer J. The n-PI-method for helical cone-beam CT. IEEE Trans. Med. Imaging. 2000;19:848–863. [PubMed]
- Schöndube H, Stierstorfer K, Dennerlein F, Noo F. Comparative evaluation of two analytical methods for helical cone-beam tomography; IEEE Nuclear Science Symp. Conf. Record NSS’07; 2007b. pp. 4467–4471.
- Schöndube H, Stierstorfer K, Dennerlein F, White TA, Noo F. Towards an efficient two-step Hilbert algorithm for helical cone-beam CT. In: Beekman F, Kachelrieß M, editors. Proc. 2007 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine; Lindau, Germany. 2007a. pp. 120–123.
- Schöndube H, Stierstorfer K, Noo F. Accurate helical CT reconstruction with redundant data using nutating slices; IEEE Nuclear Science Symp. Conf. Record NSS’08; 2008. pp. 5158–5160.
- Shechter G, Köhler Th, Altman A, Proksa R. The frequency split method for helical cone-beam reconstruction. Med. Phys. 2004;31:2230–2236. [PubMed]
- Söhngen H. Die Lösungen der Integralgleichung $g(x)=\frac{1}{2\pi}{\displaystyle {\int}_{-a}^{a}\frac{f(\xi )}{x-\xi}\mathrm{d}\xi}$ und deren Anwendung in der Tragflügeltheorie. Math. Z. 1937;45:245–264.
- Stierstorfer K, Flohr T, Bruder H. Segmented multiple plane reconstruction: a novel approximate reconstruction scheme for multi-slice spiral CT. Phys. Med. Biol. 2002;47:2571–2581. [PubMed]
- Stierstorfer K, Rauscher A, Boese J, Bruder H, Schaller S, Flohr T. Weighted FBP—a simple approximate 3D FBP algorithm for multislice spiral CT with good dose usage for arbitrary pitch. Phys. Med. Biol. 2004;49:2209–2218. [PubMed]
- Tam KC, Samarasekera S, Sauer F. Exact cone-beam CT with a spiral scan. Phys. Med. Biol. 1998;43:1015–1024. [PubMed]
- Tang X, Hsieh J. Handling data redundancy in helical cone beam reconstruction with a cone-angle-based window function and its asymptotic approximation. Med. Phys. 2007;34:1989–1998. [PubMed]
- Tuy HK. An inversion formula for cone-beam reconstruction. SIAM J. Appl. Math. 1983;43:546–552.
- Ye Y, Zhao S, Yu H, Wang G. A general exact reconstruction for cone-beam CT via backprojection-filtration. IEEE Trans. Med. Imaging. 2005;24:1190–1198. [PubMed]
- Yu L, Xia D, Zou Y, Sidky E, Bian J, Pan X. A rebinned backprojection-filtration algorithm for image reconstruction in helical cone-beam CT. Phys. Med. Biol. 2007;52:5497–5508. [PubMed]
- Zamyatin AA, Katsevich A, Silver MD, Nakanishi S. Helical CT reconstruction with large cone angle; IEEE Nuclear Science Symp. Conf. Record NSS’06; 2006. pp. 2264–2267.
- Zou Y, Pan X. Exact image reconstruction on PI-line from minimum data in helical cone-beam CT. Phys. Med. Biol. 2004;49:941–959. [PubMed]
