Iran J Public Health. 2017 December; 46(12): 1679–1689.

PMCID: PMC5734968

Received 2016 December 22; Accepted 2017 April 26.

Copyright © Iranian Public Health Association & Tehran University of Medical Sciences

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Compared to rigid image registration, non-rigid image registration is far more challenging due to its high degrees of freedom and the inherent smoothness requirement on the deformation field. The purpose of this study was to propose an efficient coarse-to-fine non-rigid medical image registration algorithm based on a multi-level deformable model.

In this paper, a robust and efficient coarse-to-fine non-rigid medical image registration algorithm is proposed. It contains three levels of deformation models: the global homography model, the local mesh-level homography model, and the local B-spline FFD (Free-Form Deformation) model. The coarse registration is achieved by the first two models. In the global homography model, a robust algorithm for simultaneous outlier (incorrectly matched feature points) removal and model estimation is applied. In the local mesh-level homography model, a new similarity measure is proposed to improve the robustness and accuracy of local mesh-based registration. In the fine registration, a local B-spline FFD model with a normalized mutual information gradient is employed.

We verified the effectiveness of each stage of the proposed registration algorithm on many non-rigid transformation image pairs, and quantitatively compared it with the HBFFD method, which is based on multi-resolution control points. The experimental results show that our algorithm is more accurate than the hierarchical local B-spline FFD method.

Our algorithm can achieve high-precision registration through a coarse-to-fine process based on a multi-level deformable model, which outperforms the state-of-the-art methods.

Medical image registration is one of the most important and challenging research topics in the modern medical image analysis field. It aims to align two images, captured by different devices or at different times, into the same coordinate system. “It has many potential and important applications in clinical diagnosis, such as fusion of computer tomography (CT) and magnetic resonance imaging (MRI) data to obtain more complete information about the patient, monitoring tumor growth, treatment verification, and comparison of the patient’s data with anatomical atlases” (1).

From the viewpoint of the image transformation, medical image registration can be classified into rigid registration and non-rigid registration. In the past few years, a number of excellent rigid image registration methods were proposed and widely applied, such as the cross-correlation method (2), maximization of mutual information (3) and normalized mutual information (4). These methods have been extended or integrated, and gradually applied to the non-rigid image registration problem (5–7). Compared to the rigid case, non-rigid image registration faces far greater challenges due to its high degrees of freedom and the inherent smoothness requirement on the deformation field. The accuracy, robustness and speed of these algorithms need further improvement for clinical applications (8–9).

The non-rigid medical image registration algorithm naturally depends on the geometric deformation model and the similarity measure criterion. The geometric deformation models can be classified into two main categories: i) physics-based models such as the elastic body models (10–11), the optical flow models (12) and the diffusion models (13); and ii) interpolation-based models such as free-form deformations (14–15). Christensen et al. (10) proposed an approach that tackles large deformations with multiple linear elastic models, each representing a small deformation. Guyader et al. (11) proposed an approach based on nonlinear elasticity that combines segmentation and registration. Lu et al. (12) proposed a registration method based on the optical flow model for non-rigid heart images. This algorithm includes two steps, coarse registration and precise registration, to improve both registration accuracy and convergence speed. Thirion (13) regarded image registration as a diffusion process. This algorithm is an iterative process that alternates between estimation of the pixel displacements and update of the transformation. In each iteration, the movement of each pixel is decided by a matching process based on the Sum of Squared Differences (SSD) criterion. For registration algorithms based on physical models, it is difficult to construct a reasonable physical model that can simulate the complex tissue deformations between the two input images. Rueckert et al. (14) proposed a local deformation model for non-rigid registration of breast MR images. This model was described by the so-called Free-Form Deformation (FFD) based on B-splines, and it employed normalized mutual information (NMI) as the similarity function. Since the degree of freedom of the local deformation model is determined by the number of control points, it is important to decide whether a sparse or dense set of control points should be used. However, both choices have limitations. If a sparse set of control points is used, the movements of the control points cannot represent complicated deformations well. If a dense set of control points is used, the optimization can be computationally inefficient. To tackle these shortcomings, compromise methods have been proposed. For example, Shi et al. (15) proposed a multi-level B-spline model in which only a sparse subset of the control points is active, balancing speed and accuracy.

In addition to the geometric deformation model itself, extraction of robust and precise feature correspondences is also very important. It is an essential step in estimating the geometric deformation model in many registration methods. However, it is often affected by image noise, feature outliers and local deformations. In the past decade, a number of methods were proposed to solve the robust feature matching problem. Among them, a class of fuzzy correspondence methods, such as softassign (16) and relaxation labeling (17), has been developed, in which the binary constraints on the correspondences are relaxed to fuzzy correspondences during the optimization process. Some researchers combined iconic and geometric features for correspondence searching and outlier discarding (18–19). However, most of these approaches have limited capability in handling outliers caused by feature extraction errors or large deformations.

This paper proposes an efficient coarse-to-fine non-rigid medical image registration algorithm based on a multi-level deformable model.

The study was carried out according to the Helsinki Declaration and approved by the ethical committee of the Chinese Academy of Medical Sciences. The need for informed consent was waived because the data sets used in this study were downloaded from publicly available websites.

Fig. 1 shows the flow chart of our algorithm, which contains three steps. The left part of Fig. 1 (a) shows the two input images to be registered, where the top one is called the reference image (which will be fixed during the registration step) and the bottom one is called the float image (which will be transformed by the registration process). The right part of (a) shows the difference of the two input images. (e) shows the difference between the fixed reference image and the registered float image after the proposed coarse-to-fine registration algorithm. Comparing (e) and the right part of (a), it can be seen that most pixels of the two input images are already aligned accurately. (b), (c) and (d) are the intermediate results corresponding to the three steps of the proposed coarse-to-fine registration algorithm, and will be described in detail in the following text.

- [1] Feature correspondence detection based on global homography model. The robust and accurate feature correspondence detection between the reference image and float image plays an important role in image registration (shown in Fig 1 (b)). In this paper, we used the SIFT (Scale-invariant feature transform) (20–21) algorithm to detect sparse feature correspondences in the two images. Although SIFT can be invariant to uniform scaling, orientation, and partially invariant to affine distortion and illumination changes, it inevitably produces outliers due to feature extraction errors or large deformations. To tackle this issue, an improved robust RANSAC algorithm for simultaneous removal of outliers and estimation of the global level transformation model is applied.
- [2] Coarse registration based on local mesh-level homography model. Since the global transformation model cannot precisely simulate the local deformations of non-rigid tissue, a number of local deformable models corresponding to a series of uniform grid meshes are robustly estimated. In this step, the local deformation model is estimated by a local homography with shape-preserving constraints. The local deformation mesh is shown in Fig 1 (c). Then, the coarse registration based on the local mesh-level homography model is performed. It can greatly improve the convergence speed and precision of the following fine registration step.
- [3] Fine registration based on B-spline FFD model. Although the above local homography transformation model can effectively simulate the deformations to some extent, its ability is limited for images with very complex deformations because the model reduces the degrees of freedom by introducing a shape-preserving constraint. To tackle complex deformations, a fine registration step is further applied, in which a B-spline FFD model is constructed by integrating the normalized mutual information (NMI) gradient to acquire more accurate registration results (shown in Fig. 1 (d)).
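As an illustration of the control flow only, the three steps above can be sketched as follows; the function bodies are placeholders for the SIFT/ORSA, mesh-level, and FFD stages described in the following sections, and all names are ours, not the authors' code:

```python
import numpy as np

def detect_correspondences(ref, flt):
    """Step 1 placeholder: SIFT matching plus robust (ORSA-style) outlier
    removal would go here; returns inlier points and a global homography."""
    H_g = np.eye(3)                       # identity stands in for the estimate
    pts = np.zeros((0, 2))
    return pts, pts.copy(), H_g

def coarse_register(flt, pts_ref, pts_flt, H_g):
    """Step 2 placeholder: local mesh-level homography estimation + warp."""
    return flt.copy()                     # F': coarsely registered image

def fine_register(ref, coarse):
    """Step 3 placeholder: B-spline FFD refinement driven by NMI."""
    return coarse.copy()                  # F'': final registered image

def register(ref, flt):
    p, q, H_g = detect_correspondences(ref, flt)
    return fine_register(ref, coarse_register(flt, p, q, H_g))
```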

The global level transformation model is described by a homography *H _{g}*, a 3 × 3 matrix. Suppose {x_{i}, x′_{i}} are pairs of corresponding feature points in the reference and float images; the global model satisfies

$${\text{x}}_{i}^{\prime}={H}_{g}{\text{x}}_{i},\hspace{0.17em}\hspace{0.17em}i=1,\dots ,4$$

[1]

where x_{i} is a homogeneous coordinate, i.e., x_{i} = (*x _{i}*, *y _{i}*, 1)^{T}, and x′_{i} is the corresponding point in the float image.
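Eq.(1) is an ordinary projective mapping in homogeneous coordinates; a minimal NumPy sketch (the helper name is ours, not the authors'):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography via homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # x_i = (x_i, y_i, 1)^T
    mapped = homog @ H.T                               # rows are H_g x_i
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the w term

# a translation by (2, -1) is one simple special case of a homography
H = np.array([[1.0, 0.0,  2.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0,  1.0]])
```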

In order to enhance the robustness and accuracy of the feature correspondences, the robust ORSA (Optimized Random Sample Consensus, i.e., optimized RANSAC) algorithm (23) is applied to estimate the parameters of the global homography model *H _{g}* and eliminate the outliers simultaneously. The average residual error ε of the correspondences is used to distinguish inliers from outliers during the estimation (Fig 2).

$$\epsilon =\frac{1}{2n}\sum _{i=1}^{n}(d({x}_{i},{H}_{g}^{-1}{\text{x}}_{i}^{\prime})+d({\text{x}}_{i}^{\prime},{H}_{g}{x}_{i}))$$

[2]

where *d*(·,·) denotes the Euclidean distance between two points, and *n* is the number of correspondences.

After performing the ORSA algorithm, a global transformation model *H _{g}* and an inlier set (obtained with a relatively large error threshold ε) are available for the subsequent local mesh-level estimation.
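The average residual of Eq.(2) is a symmetric transfer error; a sketch assuming Euclidean *d* and our own function names (the ORSA inlier search itself is not reproduced here):

```python
import numpy as np

def apply_h(H, p):
    """Map one 2-D point through a homography."""
    q = H @ np.append(p, 1.0)
    return q[:2] / q[2]

def avg_residual(H, pts, pts_prime):
    """Average symmetric transfer error of Eq.(2) over n correspondences."""
    Hinv = np.linalg.inv(H)
    total = 0.0
    for x, xp in zip(pts, pts_prime):
        total += np.linalg.norm(x - apply_h(Hinv, xp))   # d(x_i, H_g^-1 x'_i)
        total += np.linalg.norm(xp - apply_h(H, x))      # d(x'_i, H_g x_i)
    return total / (2 * len(pts))
```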

The reference image R is divided into a number of regular square cells (called meshes). Each grid cell has β × β (β = 16 in our experiments) pixels. As shown in Fig 3, the grid cell *C _{j,k}* is enclosed by 4 vertices ${C}_{j,k}=[{v}_{j,k}^{1},{v}_{j,k}^{2},{v}_{j,k}^{3},{v}_{j,k}^{4}]$, and its corresponding cell ${\widehat{C}}_{j,k}$ in the float image is related to it by a local homography *H _{j,k}*:

$${\widehat{C}}_{j,k}={H}_{j,k}{C}_{j,k}$$

[3]

Local deformable model in the corresponding grid cell of the reference image *R* and the float image *F*

Since the local model *H _{j,k}* can be linearly solved from the 4 vertices of the grid cell, the estimation of the local homography model reduces to estimating the 4 vertices of the corresponding cell ${\widehat{C}}_{j,k}$ in the float image.

Suppose {x_{i}, x′_{i}} is the inlier correspondence set obtained in the global estimation step. Each feature point is represented as a combination, with weights ω_{i}, of the four vertices of its enclosing cell, which gives the data term:

$${E}_{d}(\widehat{C})={\sum}_{i}\Vert {\widehat{C}}_{j,k}{\omega}_{i}-{H}_{g}{\text{x}}_{i}^{\prime}\Vert $$

[4]

Although this flexible motion model can well describe the transformation between the local corresponding cells, it is difficult to estimate because there are often insufficient feature correspondences in each cell. To address this challenge, a similarity constraint term is introduced to limit the degrees of freedom of the motion model:

$${E}_{c}(\widehat{C})={\sum}_{\widehat{v}}\Vert \widehat{v}-{v}_{1}-s{R}_{90}({v}_{0}-{v}_{1})\Vert ,\hspace{0.5em}{R}_{90}=\left[\begin{array}{cc}0& 1\\ -1& 0\end{array}\right]$$

[5]

where *ν*, *ν*_{0}, *ν*_{1} are three vertices of one grid cell in the float image, *s* = ‖*ν* − *ν*_{1}‖/‖*ν*_{0} − *ν*_{1}‖ is a scale ratio computed from the corresponding three vertices in the reference image, and *R*_{90} is a rotation matrix used to guarantee that the vectors *ν*_{1}*ν* and *ν*_{1}*ν*_{0} remain perpendicular. This constraint term requires the triangle formed by three neighboring vertices *ν*, *ν*_{0}, *ν*_{1} to undergo only a similarity transformation.
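One summand of Eq.(5) can be checked numerically: for a reference triangle with the right angle at ν₁, the residual vanishes both in the reference configuration and after any pure similarity transform (rotation commutes with *R*₉₀). The helpers below are our own illustration:

```python
import numpy as np

R90 = np.array([[0.0, 1.0],
                [-1.0, 0.0]])

def constraint_residual(v_hat, v0_hat, v1_hat, s):
    """One summand of Eq.(5): zero when the float triangle is a similarity
    transform of the reference triangle."""
    return v_hat - v1_hat - s * R90 @ (v0_hat - v1_hat)

# reference triangle with the right angle at v1, as the constraint assumes
v1 = np.array([0.0, 0.0])
v0 = np.array([1.0, 0.0])
v  = np.array([0.0, -2.0])
s  = np.linalg.norm(v - v1) / np.linalg.norm(v0 - v1)

def similarity(p, theta=0.7, scale=1.5, t=(3.0, -2.0)):
    """Rotation + uniform scale + translation (a similarity transform)."""
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si], [si, c]])
    return scale * (R @ p) + np.asarray(t)
```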

The data term and constraint term are linearly combined to obtain the final energy function:

$$E(\widehat{C})={E}_{d}(\widehat{C})+\lambda {E}_{c}(\widehat{C})$$

[6]

where *λ* is a weight to control the contribution of the two terms. Since the energy function is quadratic, all cell vertices *Ĉ* can be solved by a sparse linear solver. Then the local model *H _{j,k}* is estimated by Eq.(3). Note that the final local homography depends on the weight *λ*, which is determined adaptively as described below.
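Because Eq.(6) is quadratic, its minimizer can be found with a single linear least-squares solve; a generic sketch (we assume squared residuals, which is what a linear solver implies, and the toy matrices are ours, not the mesh system itself):

```python
import numpy as np

def solve_quadratic_energy(A_d, b_d, A_c, b_c, lam):
    """Minimize ||A_d x - b_d||^2 + lam * ||A_c x - b_c||^2, the form of
    Eq.(6), by stacking data and constraint terms into one system."""
    A = np.vstack([A_d, np.sqrt(lam) * A_c])
    b = np.concatenate([b_d, np.sqrt(lam) * b_c])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

In the paper's setting the unknowns x are all mesh vertex coordinates, so A is large but sparse, which is why a sparse solver is used.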

The data term in the above energy function mainly considers the position information of the feature correspondences. To further improve the robustness of the local homography estimation, a new similarity measure with stronger constraints, employing direction and distance information (see Sec. 4.2), is proposed for adaptively determining the weight *λ* of the constraint term in Eq.(6). The weight *λ* is equally discretized into 10 values between 0.3 and 3. Taking the center point of each grid as the center of the support window in the reference image, the corresponding support window in the float image is determined by the estimated local homographies. The similarity is then calculated between the corresponding windows, and the optimal local homographies are selected as those with the maximum similarity measure.

Once the optimal local homographies *Ĥ _{j,k}* are determined, they are used for coarse registration by transforming and interpolating the float image to obtain the coarsely registered image *F′*.

The new similarity measure is a weighted normalized cross-correlation (weighted NCC) aiming to select the optimal local homographies. NCC is a simple but effective similarity measure; however, it only uses gray pixel values to measure the similarity of corresponding local windows. In fact, the direction of the pixel gradient also provides valuable information for similarity measurement. In this paper, a new similarity measure is proposed by assigning a weight to each pixel (except the center point) in a support window. It is defined as:

$$C(\text{x},{\text{x}}^{\prime})=\frac{{\sum}_{i}w(\text{x}+i){w}^{\prime}({\text{x}}^{\prime}+i)(I(\text{x}+i)-\overline{I}(\text{x}))({I}^{\prime}({\text{x}}^{\prime}+i)-{\overline{I}}^{\prime}({\text{x}}^{\prime}))}{\sqrt{{\sum}_{i}{w}^{2}(\text{x}+i{)(I(\text{x}+i)-\overline{I}(\text{x}))}^{2}{\sum}_{i}{{w}^{\prime}}^{2}({\text{x}}^{\prime}+i){({I}^{\prime}({\text{x}}^{\prime}+i)-{\overline{I}}^{\prime}({\text{x}}^{\prime}))}^{2}}}$$

[7]

where *I*(x) and *I′*(x′) are the gray values at points x and x′ in the images *R* and *F* respectively. *Ī*(x) and *Ī′*(x′) are the mean gray values in the given β × β windows centered at x and x′. In Eq.(7), the weights *w*(x+*i*) and *w′*(x′+*i*) are each determined by the product of a direction component and a distance component, i.e., *w*(x+*i*) = *w _{r}*(x+*i*)*w _{s}*(x+*i*).
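A sketch of the weighted NCC of Eq.(7), assuming the window means are unweighted as written above; the function name is ours:

```python
import numpy as np

def weighted_ncc(win, win_p, w, w_p):
    """Weighted normalized cross-correlation of Eq.(7) over two support
    windows with per-pixel weights w and w'."""
    a = win - win.mean()            # I(x+i) - mean over the window
    b = win_p - win_p.mean()        # I'(x'+i) - mean over the window
    num = np.sum(w * w_p * a * b)
    den = np.sqrt(np.sum(w**2 * a**2) * np.sum(w_p**2 * b**2))
    return num / den
```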

The direction component is computed as:

$${w}_{r}(\text{x}+i)=\text{exp}(-\Vert \psi (\text{x}+i)-\psi (\text{x})\Vert /{\delta}_{r})$$

[8]

where ψ(x+*i*) and ψ(x) are the main directions at points x + *i* and x, and ‖ψ(x + *i*)−ψ(x)‖ is the angle between the two main directions. δ* _{r}* is a scale factor. The main direction, such as ψ(x), is determined by the histogram of oriented gradients (HOG) formed by the gradient orientations of all pixels within an 8×8 window region centered at x, where the gradient magnitude and orientation are computed as:

$$\{\begin{array}{l}m(x,y)=\sqrt{{\left(L(x+1,y,\sigma )-L(x-1,y,\sigma )\right)}^{2}+{\left(L(x,y+1,\sigma )-L(x,y-1,\sigma )\right)}^{2}}\hfill \\ \theta (x,y)={\text{tan}}^{-1}\left(\left(L(x,y+1,\sigma )-L(x,y-1,\sigma )\right)/\left(L(x+1,y,\sigma )-L(x-1,y,\sigma )\right)\right)\hfill \\ L(x,y,\sigma )=G(x,y,\sigma )*I(x,y)\hfill \end{array}$$

[9]

where the Gaussian window *G*(*x*,*y*,σ) (σ = 2.5 in our experiments) emphasizes gradients close to the center of the region. The peak of the orientation histogram is taken as the main direction. The angle between two main directions can then be computed from the difference of their histogram bin indices.
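A minimal sketch of the main-direction computation of Eqs.(8)–(9), using central differences and omitting the Gaussian weighting for brevity; the bin count and names are our assumptions:

```python
import numpy as np

def main_direction(img, n_bins=36):
    """Dominant gradient orientation of a window via an orientation
    histogram (Eq.(9)); borders get zero magnitude and do not contribute."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]    # L(x+1,y) - L(x-1,y)
    gy[1:-1, :] = img[2:, :] - img[:-2, :]    # L(x,y+1) - L(x,y-1)
    mag = np.hypot(gx, gy)                    # m(x, y)
    theta = np.arctan2(gy, gx)                # orientation in (-pi, pi]
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    peak = int(np.argmax(hist))               # peak bin = main direction
    return (peak + 0.5) * 2 * np.pi / n_bins - np.pi   # bin centre as angle
```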

Similarly, the distance component is computed as:

$${w}_{s}(\mathbf{x}+i)=\text{exp}(-d(\mathbf{x}+i,\mathbf{x})/{\delta}_{s})$$

[10]

where *d*(**x** + *i*, **x**) is the Euclidean distance between the locations of pixels x + *i* and x, and δ_{s} is a scale factor set to half of the support window size, i.e., β/2. Pixels closer to the center of the region are thus assigned higher weights in Eq.(10).

In summary, the new similarity measure imposes stronger constraints by combining direction and distance information to weight each pixel in the support window. It attains its best score when corresponding pixels simultaneously have similar directions and distances.

In order to better adapt to the local complex deformations of organ tissue, an additional transformation is required. We chose an FFD model based on B-splines, which affects the transformation only in the local neighborhood of each control point. Suppose image *F′* is divided into a number of mesh cells, and let ϕ denote an *n _{s}* × *n _{t}* mesh of control points ϕ_{i,j} with uniform spacing δ. The FFD transformation is defined as:

$$T(x,y)=\sum _{l=0}^{3}\sum _{m=0}^{3}{B}_{l}(u){B}_{m}(v){\phi}_{i+l,j+m}$$

[11]

where (*x*, *y*) is a point in the image *F′*, *i* = ⌊*x*/δ⌋ − 1, *j* = ⌊*y*/δ⌋ − 1, *u* = *x*/δ − ⌊*x*/δ⌋, *v* = *y*/δ − ⌊*y*/δ⌋, and *B _{l}* represents the *l*-th basis function of the cubic B-spline:

$$\{\begin{array}{l}{B}_{0}(u)={\left(1-u\right)}^{3}/6\hfill \\ {B}_{1}(u)=\left(3{u}^{3}-6{u}^{2}+4\right)/6\hfill \\ {B}_{2}(u)=(-3{u}^{3}+3{u}^{2}+3u+1)/6\hfill \\ {B}_{3}(u)={u}^{3}/6\hfill \end{array}$$

[12]
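The basis functions of Eq.(12) and the tensor-product evaluation of Eq.(11) can be sketched directly; the index handling follows the floor/fraction definitions under Eq.(11), and a useful sanity check is that the four basis functions sum to one (so a constant control grid yields a constant transformation):

```python
import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis functions of Eq.(12)."""
    return np.array([(1 - u)**3 / 6,
                     (3*u**3 - 6*u**2 + 4) / 6,
                     (-3*u**3 + 3*u**2 + 3*u + 1) / 6,
                     u**3 / 6])

def ffd(x, y, phi, delta):
    """Evaluate the FFD of Eq.(11) at (x, y); phi is an (ns, nt, 2) grid of
    control points with uniform spacing delta."""
    i = int(x // delta) - 1                  # i = floor(x/d) - 1
    j = int(y // delta) - 1                  # j = floor(y/d) - 1
    u = x / delta - x // delta               # fractional part of x/d
    v = y / delta - y // delta               # fractional part of y/d
    Bu, Bv = bspline_basis(u), bspline_basis(v)
    out = np.zeros(2)
    for l in range(4):
        for m in range(4):
            out += Bu[l] * Bv[m] * phi[i + l, j + m]
    return out
```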

To find the optimal local transformation, a cost function which consists of smoothing constraint term *C*_{B_FFD} and similarity measure term *C*_{NMI} is defined as follows:

$$C(\varphi )=-{C}_{NMI}({N}_{1},T({N}_{2}))+\mu {C}_{B\_FFD}(T)$$

[13]

where μ (μ = 0.01 in our experiments) is a weight to balance the two terms of the cost function. *C*_{B_FFD}(*T*) guarantees the smoothness of the spline-based FFD transformation, and *C*_{NMI}(*N*_{1},*T*(*N*_{2})) measures the similarity by normalized mutual information. Because each point (*x*, *y*) is only affected by its 4 × 4 neighboring control points, a control point ϕ_{i,j} only influences its 4δ × 4δ neighborhood grid. NMI is therefore not calculated on the full image, but only between the neighborhoods of the corresponding control point (i.e., *N*_{1} and *T*(*N*_{2})) before and after transformation, which greatly improves the computational efficiency. The two terms of the energy function defined in Eq.(13) are given as follows:

$$\{\begin{array}{l}{C}_{B\_FFD}(T)=\frac{1}{S}{\int}_{0}^{w}{\int}_{0}^{h}\left[{\left(\frac{{\partial}^{2}T}{\partial {x}^{2}}\right)}^{2}+2{\left(\frac{{\partial}^{2}T}{\partial x\partial y}\right)}^{2}+{\left(\frac{{\partial}^{2}T}{\partial {y}^{2}}\right)}^{2}\right]dxdy\hfill \\ {C}_{NMI}\left({N}_{1},T({N}_{2})\right)=\frac{H({N}_{1})+H(T({N}_{2}))}{H({N}_{1},T({N}_{2}))}\hfill \end{array}$$

[14]

where *S* is the area of the image domain, and *w* and *h* are its width and height. *H*(*N*_{1}) and *H*(*T*(*N*_{2})) denote the marginal entropy at *N*_{1} and *T*(*N*_{2}), and *H*(*N*_{1},*T*(*N*_{2})) denotes their joint entropy.
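The NMI term of Eq.(14) can be sketched from a joint gray-level histogram (the bin count and names are our choices; the paper evaluates it only over control-point neighborhoods rather than whole images):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a normalized distribution, in nats."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def nmi(a, b, bins=16):
    """Normalized mutual information of Eq.(14):
    (H(N1) + H(T(N2))) / H(N1, T(N2)), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    joint = joint / joint.sum()
    pa = joint.sum(axis=1)            # marginal of the first region
    pb = joint.sum(axis=0)            # marginal of the second region
    return (entropy(pa) + entropy(pb)) / entropy(joint.ravel())
```

NMI ranges from 1 (independent) to 2 (identical up to a one-to-one gray-level mapping), which is why Eq.(13) maximizes it via a negative sign.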

The key of the fine registration algorithm is to find the optimal transformation parameters ϕ by minimizing the cost *C*(ϕ). We employ an iterative gradient descent optimization method which steps in the direction of the gradient vector with a certain step size ρ. The procedure of the algorithm is summarized in Algorithm 1.

Algorithm 1. The fine registration algorithm.

- Step 1: Partition the coarse registration image *F′* to initialize the control point set ϕ.
- Step 2: Calculate the gradient of the cost function in Eq.(13), i.e., $\nabla C=\frac{\partial C}{\partial \varphi}$.
- Step 3: If ‖∇*C*‖ ≥ ξ (ξ = 10^{−4} in our experiments), update the control point set $\varphi =\varphi +\rho \frac{\nabla C}{\Vert \nabla C\Vert}$ and return to Step 2.
- Step 4: Compute *T*(*x*, *y*) by Eq.(11), and obtain the fine registration image *F*″ by partial volume interpolation (PVI).
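Algorithm 1 is a normalized-gradient iteration; a sketch on a toy quadratic cost (the function and parameter names are ours, and we step against the gradient since the cost is being minimized):

```python
import numpy as np

def fine_registration_descent(grad, phi0, rho=0.1, xi=1e-4, max_iter=1000):
    """Normalized-gradient iteration of Algorithm 1: repeat Steps 2-3
    until ||grad C|| < xi, stepping rho along the unit gradient."""
    phi = np.asarray(phi0, dtype=float)
    for _ in range(max_iter):
        g = np.asarray(grad(phi))          # Step 2: gradient of Eq.(13)
        gn = np.linalg.norm(g)
        if gn < xi:                        # Step 3: convergence test
            break
        phi = phi - rho * g / gn           # descend with a fixed step rho
    return phi
```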

We first verify the effectiveness of each stage of the proposed registration algorithm with many non-rigid transformation image pairs. Fig. 4 shows the results of three brain data sets after performing the registration algorithm, where (a) is the reference image (downloaded from (25), ARRA project) and (b) is the float image. The difference between (a) and (b) is shown in (c); it reveals large local deformations between the two input images. (d) shows the correspondences after outlier discarding by robustly estimating the global homography transformation model. (e) shows the warped meshes estimated from all inliers. The warped meshes are represented by a series of local homographies which are used to perform the coarse registration. The difference between the float image and the coarsely registered image is shown in (f); it demonstrates that the deformation becomes smaller after the coarse registration. (g) is the transformation mesh determined by the B-spline FFD model. (h) is the deformed image after the fine registration. (i) shows the difference between the reference image (a) and the final registered float image (h). A large number of experiments show that, although there are large local deformations between the two images, a very small difference can be obtained by our coarse-to-fine non-rigid medical image registration algorithm, which means accurate registration is achieved.

At the same time, our method achieves fast convergence in the fine registration stage because the coarse registration step is applied first to compensate for the large displacements.

Fig. 5 shows another set of testing results demonstrating the effectiveness of the proposed algorithm. This dataset has strong noise and large deformations (26). In Fig. 5, (a) and (b) are the reference images and float images, and (c) shows the differences between them. (d) and (e) show the registered float images and their differences from the reference images. (f) shows the differences between the reference images and the float images registered by the hierarchical local B-spline FFD method (the HBFFD method). Obviously, our algorithm is more robust to noise and transformations than the HBFFD method.

We also quantitatively evaluated the proposed registration algorithm and compared it with the HBFFD method, which is based on multi-resolution control points. The similarity measures used to evaluate the accuracy of our algorithm are the sum of squared differences (SSD), sum of absolute differences (SAD), normalized correlation coefficient (NCC), and normalized mutual information (NMI). Table 1 lists the experimental results for the three data sets shown in Fig. 4 and the five data sets shown in Fig. 5. The results show that our method obtains better similarity scores between the registered float images and the reference images than the HBFFD method on every data set.
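The scalar scores used in Table 1 are standard; sketches for SSD, SAD and NCC (the NMI definition appears in Eq.(14); all function names are ours):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences (lower is better)."""
    return float(np.sum((a - b)**2))

def sad(a, b):
    """Sum of absolute differences (lower is better)."""
    return float(np.sum(np.abs(a - b)))

def ncc(a, b):
    """Normalized correlation coefficient (1.0 for identical images)."""
    za, zb = a - a.mean(), b - b.mean()
    return float(np.sum(za * zb) / np.sqrt(np.sum(za**2) * np.sum(zb**2)))
```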


This paper proposes an efficient coarse-to-fine non-rigid medical image registration algorithm based on a multi-level deformable model. Compared to the existing non-rigid medical image registration methods, our algorithm has the following characteristics:

- The multi-level deformable model consists of the global homography model, the local mesh-level homography model and the local B-spline based FFD model. The coarse registration process, which is based on the first two models, effectively improves the convergence speed. It also helps improve the precision of the fine registration process, which employs an iterative optimization model. More reliable registration results can be obtained compared to the hierarchical local B-spline FFD method, which is based on multi-resolution control points.
- In order to improve the robustness of the registration algorithm, on the one hand, a robust algorithm for simultaneous outlier removal and model estimation is applied in the estimation of the global level homography model; on the other hand, a new similarity measure with strong constraints is proposed and applied in the local mesh-level homography model. It combines direction and distance information to weight each pixel in a support window, so as to achieve more accurate comparison of corresponding pixels.

This paper proposes an efficient coarse-to-fine non-rigid medical image registration algorithm based on a multi-level deformable model. The multi-level deformable model consists of the global homography model, the local mesh-level homography model and the local B-spline based FFD model. In the estimation of the global level transformation model, a robust algorithm for simultaneous outlier removal and model estimation is applied. A new similarity measure with strong constraints is proposed to robustly estimate the local mesh-level deformable model; it combines direction and distance information to weight each pixel in the support window. The coarse registration by the first two models greatly improves the convergence speed and helps improve the precision of the fine registration stage. The experimental results show that our algorithm is more accurate than the hierarchical local B-spline FFD method, which is based on multi-resolution control points.

Ethical issues (Including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors.

This work was supported by PUMC Youth Fund (3332016128), the Fundamental Research Funds for the Central Universities (2016ZX330013), and the National Natural Science Foundation of China (No. 71273280).

**Conflict of interest**

The authors declare that they have no conflict of interest.

1. Wyawahare Medha V, Patil Pradeep M, Abhyankar Hemant K. (2009). Image Registration Techniques: An overview. Int J Signal Processing Pattern Recognition, 2: 11–27.

2. Kaneko S, Satoh Y, Igarashi S. (2003). Using selective correlation coefficient for robust image registration. Pattern Recognition, 36: 1165–73.

3. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. (1997). Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging, 16: 187–98. [PubMed]

4. Bardera A, Feixas M, Boada I, Sbert M. (2006). High-dimensional normalized mutual information for image registration using random lines. In: International Workshop on Biomedical Image Registration, 4057: 264–71.

5. Andronache A, Von-Siebenthal MG, Cattin P. (2008). Non-rigid registration of multi-modal images using both mutual information and cross-correlation. Med Image Anal, 12: 3–15. [PubMed]

6. Pluim J, Maintz JA, Viergever M. (2003). Mutual information based registration of medical images: a survey. IEEE Trans Med Imaging, 22: 986–1004. [PubMed]

7. Zhang J, Wang J, Wang X, Feng D. (2015). Multimodal image registration with joint structure tensor and local entropy. Int J Comput Assist Radiol Surg, 10:1765–75. [PubMed]

8. Holden M. (2008). A review of geometric transformations for nonrigid body registration. IEEE Trans Med Imaging, 27: 111–28. [PubMed]

9. Sotiras A, Davatzikos C, Paragios N. (2013). Deformable medical image registration: a survey. IEEE Trans Med Imaging, 32:1153–90. [PMC free article] [PubMed]

10. He J, Christensen GE. (2003). Large deformation inverse consistent elastic image registration. In: Inf Process Med Imaging, 18: 438–49. [PubMed]

11. Le Guyader C, Vese LA. (2011). A combined segmentation and registration framework with a nonlinear elasticity smoother. Computer Vision and Image Understanding, 115: 1689–709.

12. Lu X, Zhao Y, Zhang B, Wu JS, Li N, Jia WT. (2013). A non-rigid cardiac image registration method based on an optical flow model. Int J Light Electron Optics, 124: 4266–73.

13. Thirion JP. (1998). Image matching as a diffusion process: an analogy with Maxwell’s demons. Med Image Anal, 2: 243–60. [PubMed]

14. Rueckert D, Sonoda LI, Hayes C, Hill D, Leach M, Hawkes D. (1999). Non-rigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging, 18: 712–21. [PubMed]

15. Rohde GK, Aldroubi A, Dawant BM. (2003). The adaptive bases algorithm for intensity-based nonrigid image registration. IEEE Trans Med Imaging, 22:1470–9. [PubMed]

16. Chui H, Rangarajan A. (2003). A new point matching algorithm for non-rigid registration. Computer Vision Image Understand, 89: 114–41.

17. Datteri RD, Liu Y, D’Haese PF, Dawant BM. (2015). Validation of a nonrigid registration error detection algorithm using clinical MRI brain data. IEEE Trans Med Imaging, 34:86–96. [PMC free article] [PubMed]

18. Hellier P, Barillot C. (2003). Coupling dense and landmark-based approaches for non-rigid registration. IEEE Trans Med Imaging, 22:217–27. [PubMed]

19. Wu G, Yap PT, Kim M, Shen D. (2010). TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation. NeuroImage, 49: 2225–33. [PMC free article] [PubMed]

20. Lowe DG. (2004). Distinctive Image Features from Scale-Invariant Keypoints. Int J Computer Vision, 60: 91–110.

21. Mikolajczyk K, Schmid C. (2005). Performance evaluation of local descriptors. IEEE Trans Pattern Anal Mach Intell, 27: 1615–30. [PubMed]

22. Liu S, Yuan L, Tan P, Sun J. (2013). Bundled camera paths for video stabilization. Acm Transactions on Graphic, 32:1–10.

23. Moisan L, Moulon P, Monasse P. (2012). Automatic Homographic Registration of a Pair of Images, with A Contrario Elimination of Outliers. Image Processing on Line, 2: 56–73.

24. Yushkevich PA, Wang HZ, Pluta J, Das SR, Craige C, Avants BB, Weiner MW, Mueller S. (2010). Nearly automatic segmentation of hippocampal subfields in in vivo focal T2-weighted MRI. Neuroimage, 53: 1208–24. [PMC free article] [PubMed]

25. Suh JW, Scheinost D, Dione DP, Dobrucki LW, Sinusas AJ, Papademetris X. (2011). A non-rigid registration method for serial lower extremity hybrid SPECT/CT imaging. Med Image Anal, 15:96–111. [PMC free article] [PubMed]

26. Cocosco CA, Kollokian V, Kwan RKS, Evans AC. (1997). Brain Web: Online Interface to a 3D MRI Simulated Brain Database. Neuroimage, 5: 425.
