Proc IEEE Int Symp Biomed Imaging. Author manuscript; available in PMC 2010 May 24.
Published in final edited form as:
Proc IEEE Int Symp Biomed Imaging. 2009 June 28; 2009: 875–878.
doi:  10.1109/ISBI.2009.5193192
PMCID: PMC2874900
NIHMSID: NIHMS185146

A SHAPE-NAVIGATED IMAGE DEFORMATION MODEL FOR 4D LUNG RESPIRATORY MOTION ESTIMATION

Xiaoxiao Liu, Rohit R. Saboo, and Stephen M. Pizer
Computer Science Department The University of North Carolina at Chapel Hill Chapel Hill, NC

Abstract

Intensity modulated radiation therapy (IMRT) for cancers in the lung remains challenging due to the complicated respiratory dynamics. We propose a shape-navigated dense image deformation model to estimate the patient-specific breathing motion using 4D respiratory correlated CT (RCCT) images. The idea is to use the shape change of the lungs, the major motion feature in the thorax image, as a surrogate to predict the corresponding dense image deformation from training.

To build the statistical model, dense diffeomorphic deformations between the image at end expiration and the images at all other time points are calculated, and the shapes of the lungs are automatically extracted. By correlating the shape variation with the temporally corresponding image deformation variation, a linear mapping function that maps a shape change to its corresponding image deformation is calculated from the training sample. Finally, given a shape extracted from the image at an arbitrary time point, its dense image deformation can be predicted from the pre-computed statistics.

The method is carried out on two patients and evaluated in terms of tumor and lung estimation accuracy. The results show the robustness of the model and suggest its potential for 4D lung radiation treatment planning.

Keywords: respiratory motion model, diffeomorphic image registration, statistical shape analysis

1. INTRODUCTION

Currently, computed tomography is the major imaging modality for comprehensive evaluation of the lung during respiratory motion, owing to significant advances in both its temporal and spatial resolution [1]. Multi-slice CT enables acquisition of a sequence of temporal 3D images over one breathing cycle. Computerized cancer treatment planning techniques such as intensity-modulated radiation therapy (IMRT) require precise and efficient calculation of dose distributions that maximize the dose to the tumor while sparing the surrounding normal tissues. However, the complicated respiratory dynamics limit the application of IMRT to lung cancer treatment and diagnosis [2]. Clinical 4D modeling of the thorax under respiratory motion remains a challenging task.

Various models have been proposed to simulate and predict the complicated free-breathing system. To capture hysteresis and irregular breathing patterns, auxiliary devices such as pencil-beam navigators [3] and spirometers [4] have been used to obtain extra modeling parameters that serve as surrogates for motion estimation. Purely image-based models [5, 6, 7, 8, 9, 10] characterize the respiratory motion over the whole imaging space or within a major organ region, mostly through image registration techniques. In [8] a mean motion model of the lung is built from both intra- and inter-patient image registration, where the tidal volume estimated from the air content of the lung accounts for individual differences; however, lacking a surrogate signal, the model is largely limited to regular, repeated breathing patterns. In [9] a patient-specific model is trained in the space of dense image deformations, with the manually labeled diaphragm position as the surrogate. Since the regions around the diaphragm undergo motions of large magnitude, the diaphragm position is an efficient navigator of the deformation dynamics; nevertheless, a more global surrogate is needed to account for motion in the more superior regions of the lungs. Given the large coverage and high-contrast intensity of the air-filled lung fields in the image, the shape change of the lungs is a promising surrogate signal.

In this paper we propose a purely image-based method for estimating a patient-specific respiratory motion model for lung cancer radiation therapy, using the shape of the lungs as the navigator of the dense image deformation space.

2. METHODOLOGY

Our method begins by calculating dense diffeomorphic deformations from the end-of-expiration (EE) phase image to all other phases (Sec. 2.1). In the same images, the shapes of the anatomical structures are automatically extracted (Sec. 2.2). By correlating the shape changes with the temporally corresponding image deformation fields in their separately reparameterized spaces, we compute a linear mapping function from a temporal shape change to a dense image deformation field (Sec. 2.3). This relatively expensive training process can be done entirely off-line. Given a target image at an arbitrary phase, only the extracted shape is needed to estimate the corresponding image deformation from the trained statistics.

2.1. Motion characterization via diffeomorphic registration

The state-of-the-art symmetric diffeomorphic image registration method developed by Avants et al. [11] is used to obtain the dense deformation fields of all phases. This method provides smooth deformations in a multi-resolution framework, yielding both forward and inverse mappings regardless of which image is fixed and which is moving.

In a respiratory sequence, each image is denoted by I_t with t ∈ [0, 1], where 0 is the starting phase and 1 the end phase. We use the end-of-expiration (EE) phase image I_EE as the base image due to its high reproducibility. For each image I_t of dimension I × J × K, the corresponding dense deformation field is denoted H_t = [u_t(0, 0, 0), …, u_t(i, j, k), …, u_t(I, J, K)], where u_t(i, j, k) = [u_t^x(i, j, k), u_t^y(i, j, k), u_t^z(i, j, k)] is the displacement vector at voxel (i, j, k) resulting from registration with the base image I_EE (H_EE itself is zero everywhere).
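
As an illustration of how a displacement field H_t acts on an image, the following sketch (only the warping step, not the registration method of [11]; a NumPy/SciPy environment is assumed) resamples the image at the displaced coordinates:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, displacement):
    """Warp a 3D image by a dense displacement field.

    image:        (I, J, K) array.
    displacement: (3, I, J, K) array; displacement[:, i, j, k] is the
                  displacement vector u_t(i, j, k) of the paper.
    """
    grid = np.indices(image.shape).astype(float)    # identity coordinates x
    coords = grid + displacement                    # x + u_t(x)
    return map_coordinates(image, coords, order=1)  # trilinear interpolation

# The zero field (H_EE in the paper) leaves the image unchanged.
img = np.random.rand(8, 8, 8)
warped = warp_image(img, np.zeros((3, 8, 8, 8)))
```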

2.2. Binary segmentation and shape representation of the lungs

The respiratory motion can be clearly observed in the shape changes of the lungs (left and right) in a 4D CT, especially near the diaphragm. As the major source of motion in the imaging field, the geometric variation of the lungs is used as the surrogate for the temporally corresponding dense image deformation. Furthermore, the air-filled regions have high-contrast boundaries, which makes automatic segmentation feasible.

Binary lung masks are extracted using an automated segmentation pipeline similar to the approach in [12]. First, a binary threshold is chosen carefully to exclude airway regions and to separate the two lungs. A 3D morphological “ball rolling” operator then fills the holes left by thresholding and removes unnecessary detail in the regions connecting the bronchi, airway and lungs. An example of the resulting binary image is illustrated in Fig. 1a.
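
A minimal sketch of such a threshold-plus-morphology pipeline (not the exact pipeline of [12]; the Hounsfield threshold and ball radius below are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def segment_lungs(ct_hu, air_threshold=-400, ball_radius=2):
    """Rough binary lung extraction: threshold, drop border air, close holes."""
    binary = ct_hu < air_threshold                  # air-filled voxels
    # Remove air components touching the image border (background air).
    labels, _ = ndimage.label(binary)
    border_labels = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    binary &= ~np.isin(labels, border_labels[border_labels > 0])
    # "Ball rolling": closing with a spherical structuring element fills
    # vessel holes and smooths airway/bronchi junctions.
    r = ball_radius
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = zz ** 2 + yy ** 2 + xx ** 2 <= r ** 2
    return ndimage.binary_closing(binary, structure=ball)

# Toy volume: soft tissue (0 HU) with an interior air pocket (-1000 HU).
ct = np.zeros((12, 12, 12))
ct[4:8, 4:8, 4:8] = -1000.0
mask = segment_lungs(ct)
```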

Fig. 1
Segmentation and deformation of the lungs: (a) an axial slice of the binary segmentation, in red; (b) a posterior view of the initial m-rep model: the medial sheet of atoms is shown for the left lung and the surface mesh for the right lung; (c) an ...

The surface point distribution model (PDM) representing the shape of the lungs is denoted by P_t = [P_t^left, P_t^right] = [p_t^left(0), …, p_t^left(N), p_t^right(0), …, p_t^right(N)], where p_t(i) = [p_t^x(i), p_t^y(i), p_t^z(i)] is the ith point on the surface and N is the number of surface points for each lung.

In order to train statistics on the PDMs of all phases, correspondence among the surface points across phases must be established. A direct way of producing such group-wise correspondence is to generate all the models by deforming a common initial model. We start from a semi-manual triangle mesh model of the lungs. A diffusion process deforms the triangle mesh along surface normal directions to closely fit the binary image contours. Correspondence among the mesh vertices is provided by using an underlying medial representation (m-rep) [13] as the initial shape model, as shown in Fig. 1b. The m-rep provides a fixed number of sparse surface points (the spoke ends) with surface normals (unit spoke vectors) for the diffusion process. The vertices of each diffused mesh form a PDM; see Fig. 1c. In our experiments each lung has 450 surface points.

2.3. Shape-navigated image deformation statistics

2.3.1. Reparameterization of the image deformation and shape space

For 3D CT lung images with a typical resolution of 512 × 512 × 100, the image deformation fields are storage-demanding, and the dimension of the 3D shape representation of the lungs is also in the thousands. Multiplied by the time dimension, direct calculation is computationally prohibitive, so dimension reduction is the natural solution. We use principal component analysis (PCA) over all phases to reparameterize the shape and dense image deformation spaces respectively. The reparameterization of the image deformation space is described as

H_t = μ_H + Σ_{i=1}^{k_H} c_i^H e_i^H = μ_H + C_t^H E_H,
(1)

where μ_H is the sample mean, e_i^H is the ith eigenvector of the dense image deformation space, E_H is the matrix of eigenvectors, c_i^H is the corresponding coefficient (the projection score of H_t in the ith eigendirection), and k_H is the number of eigenmodes. Similarly, we have the equation for the point sets forming the surrogate:

P_t = μ_P + Σ_{i=1}^{k_P} c_i^P e_i^P = μ_P + C_t^P E_P.
(2)

In our experiments, with a training sample consisting of 10 phases, k_H = 2 and k_P = 5 cover more than 90% of the total variance in their respective spaces. The remaining variation is discarded as sampling noise.
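
Equations (1)-(2) amount to a PCA of the row-stacked, flattened deformation fields (or surface point sets). A sketch of this reparameterization via SVD, with a toy rank-2 "deformation" standing in for real H_t vectors:

```python
import numpy as np

def pca_reparameterize(samples, variance_kept=0.90):
    """PCA of row-stacked samples: samples ~ mean + C @ E (Eqs. 1-2).

    Returns the mean mu, eigenvector matrix E (one mode per row), and the
    per-sample coefficient (projection score) matrix C.
    """
    mu = samples.mean(axis=0)
    X = samples - mu
    # SVD of the small n x D data matrix avoids forming the huge D x D
    # covariance of a 512 x 512 x 100 x 3 deformation field.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(ratio, variance_kept)) + 1
    E = Vt[:k]          # eigenvectors e_i
    C = X @ E.T         # coefficients c_i
    return mu, E, C

# Toy data with two modes of comparable energy: two eigenmodes suffice.
t = np.arange(10)
modes = np.zeros((2, 300)); modes[0, 0] = 1.0; modes[1, 1] = 1.0
phases = 3.0 * np.outer(np.sin(t), modes[0]) + 2.0 * np.outer(np.cos(t), modes[1])
mu, E, C = pca_reparameterize(phases, variance_kept=0.90)
```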

2.3.2. Correlation between shape variation and image variation

Linear correlation between the surrogate and the image motion has been hypothesized in [3, 9] and shown to be effective in modeling respiratory dynamics. Based on the same hypothesis, a linear correlation between the shape variation space and the image variation space within the breathing cycle is assumed. Given C_t^H and C_t^P, we first center the data: X_t = C_t^P - μ_{C^P} and Y_t = C_t^H - μ_{C^H}. We then form the training data matrices X = [X_0, …, X_t, …, X_1] and Y = [Y_0, …, Y_t, …, Y_1]. The linear mapping matrix M, which gives Y = XM, is calculated via standard multivariate regression as follows:

M = (X^T X)^{-1} X^T Y.
(3)

Given a target image at an arbitrary phase t* with an automatically extracted shape of the lungs, we project the shape onto the shape prior to obtain C_{t*}^P, and the corresponding C_{t*}^H is recovered by C_{t*}^H = (C_{t*}^P - μ_{C^P}) M + μ_{C^H}.
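
The training step (Eq. 3) and the prediction step can be sketched in a few lines. The score dimensions k_P = 5 and k_H = 2 follow the paper, while the toy scores themselves are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
C_P = rng.normal(size=(10, 5))        # shape scores: 10 phases, k_P = 5
M_true = rng.normal(size=(5, 2))      # synthetic ground-truth mapping
C_H = C_P @ M_true                    # deformation scores: k_H = 2

# Center the scores: X_t = C_t^P - mu_P, Y_t = C_t^H - mu_H.
mu_P, mu_H = C_P.mean(axis=0), C_H.mean(axis=0)
X, Y = C_P - mu_P, C_H - mu_H

# Eq. (3): M = (X^T X)^{-1} X^T Y; lstsq is the numerically stable route.
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Prediction for a new phase t*: C^H = (C^P - mu_P) M + mu_H.
c_p_star = rng.normal(size=5)
c_h_star = (c_p_star - mu_P) @ M + mu_H
```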

3. EXPERIMENTAL RESULTS

In our experiments, the respiratory-correlated CT (RCCT) data sets were acquired on a 4-slice scanner (LightSpeed GX/i, GE Medical Systems), which acquires repeat CT images over a complete respiratory cycle at each couch position while recording patient respiration (Real-time Position Management System, Varian Medical Systems). The CT images are retrospectively sorted (GE Advantage 4D) to produce a series of 3D images at 10 respiratory time points. The time resolution is 0.5 s and the CT slice thickness is 2.5 mm. For each time point, tumor contours delineated by experts are used as the clinician truth.

RCCT sequences for two patients (patients A and B in the plots) are tested. We quantified the estimation errors by applying the predicted deformations to the gross tumor volume (GTV) of the EE phase and comparing the results with the clinicians' delineations. Fig. 2 shows an example of GTV delineation and a predicted 3D tumor colored by the surface distance to the clinician truth.

Fig. 3
GTV estimation evaluation on patient A (left) and B (right) on Dice coefficient (top), average surface distance (middle) and 90% quantile surface distance (bottom). Our shape-navigated statistical model results (box 3) are compared with the results without ...

Three measures are used to investigate the results via standard boxplots (median, interquartile range, largest observations and outliers are plotted): the Dice coefficient measures the volume overlap between the prediction and the ground-truth GTV; the average surface distance measures the overall geometric fit; and the 90% quantile surface distance accounts for large discrepancies. For comparison, we plot the estimation errors without deformation estimation (no registration), in which the tumors are assumed static and the EE-phase GTV is used for all time points; this static case illustrates the mobility of the tumors. We also plot the estimation errors when directly applying to the GTV the image deformation obtained from the same symmetric diffeomorphic registration used in training. The results are summarized in Fig. 3. The maximum error of 0.25 cm surface distance is in fact within one voxel (0.1 cm × 0.1 cm × 0.25 cm). In both patients, our shape-navigated statistical model largely reduces the estimation error relative to the static (no registration) method and is more robust than the registration method, as a result of the noise removal during reparameterization.
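
The evaluation measures above can be sketched as follows (a generic implementation, not the authors' evaluation code; voxel spacing is a parameter):

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice volume-overlap coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_distances(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Distances from each predicted surface voxel to the truth surface."""
    def surface(m):
        return m & ~ndimage.binary_erosion(m)
    dt = ndimage.distance_transform_edt(~surface(truth), sampling=spacing)
    return dt[surface(pred)]

# Identical masks: Dice = 1 and all surface distances are zero.
m = np.zeros((12, 12, 12), bool)
m[3:9, 3:9, 3:9] = True
d = surface_distances(m, m)
```

The average and 90% quantile surface distances of the paper then correspond to `d.mean()` and `np.quantile(d, 0.9)`.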

We also evaluated the estimation error on the shapes of the lungs, using the automatic binary segmentation results as the ground truth. Since the results on the left and right lungs are quite consistent with each other, we use the average of the two lungs in the boxplots of Fig. 4. Results without motion modeling (no registration) are not included in this comparison because the motion of the lungs is obvious. Consistent with the GTV evaluation, our method yields good estimates of the lungs, slightly better than the registration method in terms of average surface distance.

Fig. 4
Lung volume estimation evaluation on patient A (left) and B (right) on Dice coefficient (top), average surface distance (middle) and 90% quantile surface distance (bottom). Our shape-navigated statistical model results (box 1) are compared with the results ...

4. CONCLUSION

In the evaluated cases the proposed model estimates the dense image deformation within the breathing sequence robustly. The dimension reduction in both the image and shape deformation spaces not only retains anatomically relevant variation but also reduces noise. The hypothesis of a linear correlation between the shape and image deformation spaces is shown to be effective. Given 4D image sequences at both planning time and treatment time, our next goal is to estimate the respiratory motion at treatment time based on the statistics trained on the planning-time sequence. It is also possible to extend the methodology to inter-patient studies to include more training samples for more robust statistics.

Fig. 2
(a): A tumor contoured in white on axial slices as the clinician truth. (b): The tumor volume estimated by the proposed method (the median case for patient A), shown in two views and colored by surface distance error, with a maximum surface distance of 0.11 ...

5. REFERENCES

[1] Hoffman EA, Clough AV, Christensen GE, Lin C, McLennan G, Reinhardt JM, Simon BA, Sonka M, Tawhai MH, Beek E, Wang G. The comprehensive imaging-based analysis of the lung: a forum for team science. Academic radiology. 2004;vol. 11:1370–80. [PubMed]
[2] Yu CX, Jaffray DA, Wong JW. The effects of intrafraction organ motion on the delivery of dynamic intensity modulation. Physics in medicine and biology. 1998;vol. 43:91–104. [PubMed]
[3] Manke D, Nehrke K, Brnert P. Novel prospective respiratory motion correction approach for free-breathing coronary mr angiography using a patient-adapted affine motion model. Magnetic Resonance in Medicine. 2003;vol. 50:122–131. [PubMed]
[4] Low DA, Parikh PJ, Lu W, Dempsey JF, Wahab SH, Hubenschmidt HP, Nystrom MM, Handoko M, Bradley JD. Novel breathing motion model for radiotherapy. International Journal of Radiation Oncology Biology Physics. 2005;vol. 63:921–929. [PubMed]
[5] El Naqa I, Low DA, Deasy JO, Amini A, Parikh P, Nystrom M. Automated breathing motion tracking for 4D computed tomography. Nuclear Science Symposium Conference Record, 2003 IEEE. vol. 5. 2003. pp. 3219–3222.
[6] Wu Ziji, Rietzel Eike, Boldea Vlad, Sarrut David, Sharp Gregory C. Evaluation of deformable registration of patient lung 4dct with subanatomical region segmentations. Medical Physics. 2008;vol. 35:775–781. [PubMed]
[7] Chandrashekara Raghavendra, Rao Anil, Sanchez-Ortiz Gerardo, Mohiaddin Raad, Rueckert Daniel. Construction of a Statistical Model for Cardiac Motion Analysis Using Non-rigid Image Registration. 2003. pp. 599–610. [PubMed]
[8] Ehrhardt J, Werner R, Schmidt-Richberg A, Schulz B, Handels H. Generation of a mean motion model of the lung using 4d ct image data. Eurographics Workshop on Visual Computing for Biomedicine.2008. pp. 69–76.
[9] Zhang Q, Pevsner A, Hertanto A, Hu Y, Rosenzweig KE, Ling CC, Mageras GS. A patient-specific respiratory model of anatomical motion for radiation treatment planning. Medical Physics. 2007;vol. 34:4772–4781. [PubMed]
[10] Reinhardt Joseph, Christensen Gary, Hoffman Eric, Ding Kai, Cao Kunlin. Registration-Derived Estimates of Local Lung Expansion as Surrogates for Regional Ventilation. 2008. pp. 763–774. [PubMed]
[11] Avants B, Duda JT, Kim J, Zhang H, Pluta J, Gee JC, Whyte J. Multivariate analysis of structural and diffusion imaging in traumatic brain injury. Academic Radiology. 2008;vol. 15:1360–1375. [PubMed]
[12] Heuberger J, Geissbuhler A, Muller H. Lung CT segmentation for image retrieval using the Insight Toolkit (ITK). Medical Imaging and Telemedicine. 2005.
[13] Pizer SM, Joshi S, Thomas PF, Styner M, Tracton G, Chen JZ. Segmentation of single-figure objects by deformable m-reps. In: Niessen WJ, Viergever MA, editors. Medical Image Computing and Computer-Assisted Intervention. vol. 2208. 2001. pp. 862–871.