IEEE Trans Image Process. Author manuscript; available in PMC 2012 February 18.
PMCID: PMC3282047
NIHMSID: NIHMS355533

Nonrigid Registration of 2-D and 3-D Dynamic Cell Nuclei Images for Improved Classification of Subcellular Particle Motion

Abstract

The observed motion of subcellular particles in fluorescence microscopy image sequences of live cells is generally a superposition of the motion and deformation of the cell and the motion of the particles. Decoupling the two types of movement to enable accurate classification of the particle motion requires the application of registration algorithms. We have developed an intensity-based approach for nonrigid registration of multichannel microscopy image sequences of cell nuclei. First, based on 3-D synthetic images we demonstrate that cell nucleus deformations change the observed motion types of particles and that our approach allows the original motion to be recovered. Second, we have successfully applied our approach to register 2-D and 3-D real microscopy image sequences. A quantitative experimental comparison with previous approaches for nonrigid registration of cell microscopy images has also been performed.

Index Terms: Biomedical image processing, image sequence analysis, microscopy, registration

I. Introduction

Time-lapse microscopy enables the observation of dynamic cellular processes. To understand these processes, it is important to analyze and classify the types of motion of subcellular particles (proteins). However, live cells generally change their form and position over time. Therefore, when determining the motion of tagged particles within a cell nucleus, a superposition of the movement of the particles and that of the cell is obtained. To find the real motion of the particles, one has to compensate for the movement and deformation of the cell. As an example, in Fig. 1(a) a cell nucleus from a microscopy image sequence is shown (contrast enhanced), and in Fig. 1(c) subcellular particles can be observed in an additional channel at the same time point (contrast enhanced). In Fig. 1(b), (d) the cell nucleus and the subcellular particles are shown at a later time point. It can be seen that the cell nucleus undergoes significant deformations. Determining the location and motion of the subcellular particles with respect to the nucleus requires compensation of this deformation. This can be achieved by registering dynamic cell microscopy images w.r.t. a reference frame. By computing the particle positions in the reference frame based on the registration transformations, the particle motion can then be decoupled from the cell deformation.

Fig. 1
Example images from a multichannel microscopy image sequence: (a) cell nucleus at time point 0, (b) cell nucleus at time point 80, (c) subcellular particles at time point 0, (d) subcellular particles at time point 80.

In previous work on the registration of microscopy images of cell nuclei, mainly rigid or affine approaches have been used. Rigid registration approaches determine a transformation which consists of either a translation, a rotation, or a combination of translation and rotation. Intensity-based rigid registration approaches for 2-D images of cells and cell nuclei have been used by Wilson and Theriot [1] as well as Würflinger et al. [2]. Goobic et al. [3] used a correlation-based approach to compute 2-D translation for the registration of intravital video microscopy images of rolling leukocytes. Sage et al. [4] described a model-based approach where a least-squares fit of an ellipse to segmented 2-D images of a nucleus has been used to determine a rigid transformation. To compute the translation of cell nuclei from 3-D microscopy images, Cabal et al. [5] determined the centroids of segmented cell nuclei. Baheerathan et al. [6] used the phase correlation method to determine rigid transformations and a landmark-based approach for computing affine transformations of serial sections of mouse liver cell nuclei. Bornfleth et al. [7] used a correlation-based method to compute 3-D translation and 2-D rotation of cells to determine the motion of subchromosomal foci. Rieger et al. [8] employed the center of mass and the inertia tensor to compute 3-D translation and rotation of cell nuclei to register labeled proteins. Dzyubachyk et al. [9] determined 2-D translation and rotation by minimization of an energy functional based on distance functions to register 3-D microscopy images. Point-based approaches to determine 3-D translation and rotation of live cells have been used by Gerlich et al. [10] and Matula et al. [11]. Bacher et al. [12] used an intensity-based approach for rigid and affine registration of cell nuclei.

However, rigid or affine transformations cannot cope with deformations of live cells. Therefore, nonrigid or elastic registration approaches are required (for surveys on general nonrigid registration approaches see, e.g., [13], [14]). Up to now, only a few approaches for nonrigid registration of cell nuclei images have been described. These approaches either use thin-plate splines and semiautomatically extracted point landmarks (e.g., [15]), or an extension of the demons algorithm with symmetric forces using segmented images (e.g., [16]) or original images (e.g., [17]). Mattes et al. [15] employed a thin-plate spline transformation model and semiautomatically extracted point landmarks. In that work, 2-D images of fixed cells are analyzed and the observed deformation is due to staining. A problem with this approach is the proper selection of corresponding point landmarks, which has a strong impact on the transformation. Yang et al. [16] used segmented images and an extension of the demons algorithm to register 3-D static images of different cell nuclei as well as dynamic cell nuclei images. A disadvantage of this approach is the requirement of a segmentation. Kim et al. [17] directly utilized the intensity information based on the demons algorithm to register 2-D and 3-D images from time-lapse microscopy of cell nuclei. For the use of registration approaches in other biological applications, such as the registration of gene expression patterns, we refer to, e.g., Peng et al. [18] and Preibisch et al. [19].

Previous intensity-based approaches for nonrigid registration of dynamic cell nuclei images are based on the demons algorithm. The demons algorithm [20] is based on an optic flow equation and can be interpreted as an iterative optimization approach using the steepest descent algorithm. However, in comparison to other optimization schemes, the steepest descent algorithm is known for a relatively low convergence rate, especially in the neighborhood of the solution. In contrast, the Lucas-Kanade algorithm [21], which has previously been used for optic flow estimation in 2-D video images, can be interpreted as an iterative Gauss-Newton optimization method, which is superior in accuracy and convergence compared to the steepest descent method (e.g., see the performance evaluation study on optic flow estimation [22]).

In this contribution, we propose incorporating the Lucas-Kanade optic flow algorithm [21] into an approach for nonrigid temporal registration of cell nuclei in dynamic microscopy image sequences (for an earlier version of this paper see [23]). We also introduce two extensions of this algorithm. The first extension is based on a symmetric formulation and the second extension is based on a weighting approach. In addition, we propose to use spatial and temporal regularization for estimating the transformation. Our approach determines registration transformations from the nucleus channel of time-lapse microscopy image sequences. The computed transformations are then employed to transform the positions of subcellular particles, which are observed in the second channel, into a reference frame. Afterwards, the motion of the particles is classified into directed, diffusive, and obstructed motion.

Our new nonrigid registration approach is fully automatic and has been successfully applied to 2-D as well as 3-D synthetic and real microscopy image sequences. Based on synthetic images we demonstrate that the motion types of particles are changed under cell deformation and that our approach is able to recover the original motion. The real image sequences depict nuclei of living cells with strong deformations since the cells are going into mitosis (cell division). Note that in previous work only cells in interphase have been studied, which do not undergo cell division. Our approach has been successfully applied to these images and a quantitative comparison with previous schemes has been performed.

This paper is organized as follows: In the following section, we present our intensity-based approach for nonrigid temporal registration of fluorescence microscopy image sequences. Then, we describe experimental results from a simulation study on the motion type of particles under deformation. Afterwards, we present experimental results of our approach for 2-D and 3-D synthetic as well as real microscopy image sequences, and describe a quantitative comparison with other approaches. Finally, we conclude this paper with a summary.

II. Nonrigid Optic Flow-Based Temporal Registration

In this section, we first present our incremental approach for temporal registration of an image gk at time point k to the reference image g1 at the first time point. Our approach is based on an iterative nonrigid registration scheme for subsequent images. We first briefly review previous demons-based approaches for nonrigid registration. Then we present our nonrigid temporal registration approach based on the Lucas-Kanade algorithm, and we introduce two extensions of this algorithm.

A. Temporal Registration of Image Sequences

Our approach for nonrigid registration of temporal cell microscopy image sequences is based on an incremental scheme, which computes the transformation of an image gk at time point k to the reference image g1 (first image of an image sequence). In the first step, we compute a nonrigid registration of the image g2 at time point 2 to the reference image g1, represented by a dense deformation vector field u(g2, g1). Then we compute a transformation u(g3, g2) of the image g3 to the image g2. Concatenation of u(g3, g2) with u(g2, g1) and subsequent regularization of the deformation field with a Gaussian kernel GT then yields u(g3, g1) = GT * (u(g3, g2) ∘ u(g2, g1)). In general, the transformation of an image gk at time point k > 1 is obtained by u(gk, g1) = GT * (u(gk, gk−1) ∘ u(gk−1, g1)), where u(g1, g1) is a zero vector field.
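
For illustration, the following Python/NumPy sketch concatenates pairwise deformation fields and applies the Gaussian kernel GT to each component of the concatenated field (2-D case). The helper register_pair stands for any pairwise registration routine (e.g., the sketch given after (1) below); the composition convention, the use of scipy.ndimage, and all function names are assumptions, not taken from the original implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def concatenate(u_ba, u_cb):
    """Compose two 2-D displacement fields: result(x) = u_ba(x) + u_cb(x + u_ba(x))."""
    ny, nx = u_ba.shape[1:]
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    coords = [yy + u_ba[0], xx + u_ba[1]]
    warped = np.stack([map_coordinates(u_cb[c], coords, order=1, mode='nearest')
                       for c in range(2)])
    return u_ba + warped

def temporal_registration(images, register_pair, sigma_T=2.0):
    """Incrementally register every frame g_k to the reference frame g_1 (Sec. II-A)."""
    u_to_ref = np.zeros((2,) + images[0].shape)            # u(g_1, g_1) is a zero field
    transforms = [u_to_ref]
    for k in range(1, len(images)):
        u_pair = register_pair(images[k], images[k - 1])   # u(g_k, g_{k-1})
        u_cat = concatenate(u_pair, u_to_ref)              # u(g_k, g_{k-1}) o u(g_{k-1}, g_1)
        u_to_ref = np.stack([gaussian_filter(c, sigma_T) for c in u_cat])  # G_T * (...)
        transforms.append(u_to_ref)
    return transforms
```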

The computed transformations from the nucleus channel (first channel) are used to determine the positions of the particles (second channel) with respect to the reference frame. The location pk of a particle at time point k is transformed to the reference frame by pk + u−1(gk, g1, pk), where u−1(gk, g1, pk) denotes the vector of the inverse deformation field of u(gk, g1) at pk. The inverse deformation field is obtained by using a B-spline-based representation of the deformation field which is sampled in the coordinate system of the original source image.

B. Nonrigid Registration of Subsequent Images

In our approach, at each time point k > 1 of an image sequence we compute for subsequent images gk and gk−1 a non-rigid transformation represented by a dense deformation vector field u(gk, gk−1) using an iterative scheme. At each iteration i the deformation vector field is calculated by

ui(gk, gk−1) = GD * ( ui−1(gk, gk−1) + GU * Δui(gk, gk−1) )
(1)

where ui−1(gk, gk−1) is the deformation field from the previous iteration, Δui(gk, gk−1) is an update field, g̃k(x) = gk(x + ui−1(gk, gk−1, x)) is the transformed image at time point k, x denotes the spatial coordinate, and GD and GU are Gaussian kernels which are used for regularization of the deformation and update fields. It has been noted earlier (e.g., [24]) that the regularization of vector fields using Gaussian kernels can be interpreted as an approximation to elastic deformations. The initial deformation field u0(gk, gk−1) is a zero vector field. The final deformation field u(gk, gk−1) is obtained after a certain number of iterations or when the mean squared intensity differences between the images gk−1 and g̃k are below a certain threshold value. In all our experiments we used σD = 2.0 and σU = 2.0 for the standard deviations of the Gaussian kernels used for regularization of the deformation and update fields, respectively. These values were chosen based on initial experiments and proved to be an acceptable compromise between smoothing the deformation field to reduce errors and preserving the overall accuracy.
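
A minimal sketch of the iterative scheme (1) is given below; compute_update stands for any of the update rules in (2), (3), (6), or (9), and the warping and smoothing helpers are assumptions (2-D case, not the authors' implementation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(image, u):
    """Evaluate image at x + u(x) with linear interpolation (transformed image)."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    return map_coordinates(image, [yy + u[0], xx + u[1]], order=1, mode='nearest')

def smooth(field, sigma):
    """Regularize each component of a vector field with a Gaussian kernel."""
    return np.stack([gaussian_filter(c, sigma) for c in field])

def register_pair(g_k, g_km1, compute_update, n_iter=10, sigma_D=2.0, sigma_U=2.0, tol=None):
    """Iterative nonrigid registration of subsequent images, cf. (1)."""
    u = np.zeros((2,) + g_k.shape)                    # u^0(g_k, g_{k-1}) is a zero field
    for i in range(n_iter):
        g_k_warped = warp(g_k, u)                     # transformed image at time point k
        du = compute_update(g_k_warped, g_km1)        # update field, e.g. (2), (6), or (9)
        u = smooth(u + smooth(du, sigma_U), sigma_D)  # G_D * (u^{i-1} + G_U * du)
        if tol is not None and np.mean((warp(g_k, u) - g_km1) ** 2) < tol:
            break                                     # stop when the images are similar enough
    return u
```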

C. Nonrigid Registration Using the Demons Algorithm

For computing the update field Δui(gk, gk−1) in (1), the demons algorithm [20] can be used. Then at each iteration i, for each voxel x, we have

Δui(x) = ( (gk−1(x) − g̃k(x)) ∇gk−1(x) ) / ( ||∇gk−1(x)||² + cD (gk−1(x) − g̃k(x))² )
(2)

where ∇gk−1 is the gradient of the image gk−1, and cD is a normalization constant. Alternatively, an improved variant of the demons algorithm with symmetric forces (e.g., [16]) can be used:

Δui(x) = ( (gk−1(x) − g̃k(x)) (∇gk−1(x) + ∇g̃k(x)) ) / ( ||∇gk−1(x) + ∇g̃k(x)||² + cD (gk−1(x) − g̃k(x))² )
(3)

where ∇g̃k is the gradient of the transformed image g̃k at voxel x. This variant is called symmetric because the gradients of both images g̃k and gk−1 are used.
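
For illustration, a sketch of the per-voxel update fields corresponding to (2) and (3) in NumPy (2-D arrays; the exact normalization of the symmetric variant follows the formulation above and is an assumption):

```python
import numpy as np

def demons_update(g_k_warped, g_km1, c_D=1.0):
    """Demons force (2): update vector per voxel from the gradient of g_{k-1}."""
    diff = g_km1 - g_k_warped
    grad = np.array(np.gradient(g_km1))                      # gradient of g_{k-1}
    denom = np.sum(grad ** 2, axis=0) + c_D * diff ** 2
    return np.where(denom > 1e-9, diff * grad / np.maximum(denom, 1e-9), 0.0)

def symmetric_demons_update(g_k_warped, g_km1, c_D=1.0):
    """Demons with symmetric forces (3): gradients of both images are used."""
    diff = g_km1 - g_k_warped
    grad = np.array(np.gradient(g_km1)) + np.array(np.gradient(g_k_warped))
    denom = np.sum(grad ** 2, axis=0) + c_D * diff ** 2
    return np.where(denom > 1e-9, diff * grad / np.maximum(denom, 1e-9), 0.0)
```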

D. Nonrigid Registration Using the Lucas-Kanade Algorithm

The demons algorithm described above can be interpreted as a steepest-descent optimization scheme. In [25] it has been shown, in the context of estimating the optic flow in 2-D video images, that the Lucas-Kanade algorithm [21], which can be interpreted as an iterative Gauss-Newton optimization method, is superior with respect to convergence rate and accuracy in comparison to the steepest-descent approach (as used in the demons algorithm). Therefore, to improve the convergence and accuracy of nonrigid temporal registration, we here propose using the approach in [21] for computing the dense deformation vector field u(gk, gk−1) between the images gk and gk−1. The approach of Lucas-Kanade minimizes for each position xc the sum of squared intensity differences between the target image gk and the source image gk−1 over a neighborhood region Ω around xc by

min_{U∈ℝ^d} Σ_{x∈Ω(xc)} ( gk(x + U) − gk−1(x) )²
(4)

where U is the vector at voxel position xc of the deformation field u(gk, gk−1), and d is the image dimension. Note that x + U is a local transformation model since (4) is minimized for each voxel xc separately. Note also that the resulting deformation field u(gk, gk−1) is a nonrigid transformation for all types of local transformation models. Using a linear expansion of gk(x + U), (4) can be written as

min_{U∈ℝ^d} Σ_{x∈Ω(xc)} ( gk(x) + ∇gk(x)^T U − gk−1(x) )²
(5)

where gk, ∇gk, and gk−1 are evaluated at the position x. Minimization of (5) yields a system of linear equations:

( Σ_{x∈Ω(xc)} ∇gk(x) ∇gk(x)^T ) U = Σ_{x∈Ω(xc)} ( gk−1(x) − gk(x) ) ∇gk(x)
(6)

from which the vector U can be determined. In our experiments we used the reference implementation by Barron et al. [22] for this approach, where the gradient ∇gk is computed by a four-point central difference scheme, the terms on both sides of (6) are weighted with the gradient norm ||∇gk||, and a Gaussian function is used to emphasize the center of the neighborhood Ω. For the neighborhood region Ω we used in our experiments a size of 5 × 5 pixels and 5 × 5 × 5 voxels for the 2-D and 3-D images, respectively. Using smaller neighborhood regions yielded worse results, whereas using larger neighborhood regions led to an increase in the computation time without a significant improvement of the accuracy.
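
A simplified sketch of the per-voxel solution of (6) is given below (without the gradient-norm weighting and Gaussian windowing of the reference implementation [22]; 2-D case, helper names are assumptions):

```python
import numpy as np

def lucas_kanade_update(g_k_warped, g_km1, half=2):
    """Solve the 2x2 system (6) over a (2*half+1)^2 neighborhood for each pixel."""
    grad = np.array(np.gradient(g_k_warped))              # gradient of the (transformed) image g_k
    diff = g_k_warped - g_km1                              # g_k - g_{k-1}
    ny, nx = g_km1.shape
    u = np.zeros((2, ny, nx))
    for y in range(half, ny - half):
        for x in range(half, nx - half):
            win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
            G = grad[:, win[0], win[1]].reshape(2, -1)     # gradients in the neighborhood
            b = -diff[win].ravel()                         # g_{k-1} - g_k
            A = G @ G.T                                    # sum of grad * grad^T over the window
            if np.linalg.det(A) > 1e-6:                    # skip nearly singular systems
                u[:, y, x] = np.linalg.solve(A, G @ b)
    return u
```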

E. Symmetric Lucas-Kanade Approach

The solution (6) of the Lucas-Kanade approach [21] only exploits the gradient ∇gk of one image. Thus, in image regions where the gradient ∇gk is a zero vector (or close to a zero vector), U cannot be computed (or cannot be computed robustly). We propose to extend the original formulation in (4) such that the gradients of both images are exploited. Our approach is based on the minimization problem

min_{U∈ℝ^d} Σ_{x∈Ω(xc)} ( gk(x + U/2) − gk−1(x − U/2) )²
(7)

A linear expansion of (7) yields

min_{U∈ℝ^d} Σ_{x∈Ω(xc)} ( gk(x) − gk−1(x) + (1/2) (∇gk(x) + ∇gk−1(x))^T U )²
(8)

Minimization of (8) leads to a linear system of equations which includes the gradients of both images gk and gk−1:

( Σ_{x∈Ω(xc)} (1/4) (∇gk(x) + ∇gk−1(x)) (∇gk(x) + ∇gk−1(x))^T ) U = Σ_{x∈Ω(xc)} ( gk−1(x) − gk(x) ) (1/2) (∇gk(x) + ∇gk−1(x))
(9)

This symmetric formulation allows vectors U to be computed in image regions where at least one of the image gradients ∇gk or ∇gk−1 is not a zero vector (or not close to a zero vector). In contrast, if ∇gk in (6) is zero (or close to a zero vector) then U cannot be determined. Note that in previous work a symmetric minimization problem has been used for global parametric image alignment (e.g., [26]); here, however, we propose to use the symmetric formulation for the estimation of local optical flow.

In our nonrigid registration approach we use the symmetric formulation in conjunction with the iterative scheme in (1), where at each iteration (7) is solved with g̃k instead of gk, where g̃k is the transformed target image gk using ui−1(gk, gk−1) from iteration i − 1, and U is the vector at voxel position xc of the update deformation field Δui(gk, gk−1). The image gradients were determined using a Gaussian derivative kernel with a standard deviation of σδ = 1.5. The value for the standard deviation was chosen based on initial experimental results covering a range of values from σδ = 0.5 up to σδ = 2, where the chosen value yielded the best result.
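
A sketch of the symmetric per-voxel solution (9), analogous to the Lucas-Kanade sketch above but using the averaged gradient of both images (the helper names and the simplified window handling are assumptions):

```python
import numpy as np

def symmetric_lk_update(g_k_warped, g_km1, half=2):
    """Solve (9): the gradients of both images enter the per-voxel linear system."""
    grad = 0.5 * (np.array(np.gradient(g_k_warped)) + np.array(np.gradient(g_km1)))
    diff = g_k_warped - g_km1
    ny, nx = g_km1.shape
    u = np.zeros((2, ny, nx))
    for y in range(half, ny - half):
        for x in range(half, nx - half):
            win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
            G = grad[:, win[0], win[1]].reshape(2, -1)   # averaged gradients in the window
            A = G @ G.T
            b = -diff[win].ravel()
            if np.linalg.det(A) > 1e-6:                  # solvable if at least one gradient is nonzero
                u[:, y, x] = np.linalg.solve(A, G @ b)
    return u
```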

F. Weighting Approach

In this section, we describe a second extension of the Lucas-Kanade algorithm [21], which is based on the optimization method of Levenberg-Marquardt (e.g., [27]), but exploits a different similarity measure. We motivate this extension by noting that the linear approximation gk(x + U) ≈ gk(x) + ∇gk(x)^T U used in [21] is, in general, good for relatively small magnitude values of U. However, for large magnitude values of U the linear approximation is not good when the intensity surface of a local image region cannot be described by a linear function (because higher-order terms are neglected in the approximation). Consequently, the solution obtained from (6) is subject to errors. To cope with problems due to approximation errors when iteratively solving linear systems of equations like (6), a standard technique is to introduce a weight matrix Di:

( Σ_{x∈Ω(xc)} ∇gk(x) ∇gk(x)^T + Di ) U = Σ_{x∈Ω(xc)} ( gk−1(x) − gk(x) ) ∇gk(x)
(10)

In the well-known nonlinear optimization method of Levenberg-Marquardt [27], Di is a diagonal matrix with elements (Di)jj = ci Σ_{x∈Ω(xc)} (∂gk(x)/∂xj)², and c0 at iteration 0 is set to a fixed value. Note that during the iterative optimization the Levenberg-Marquardt method switches between the Gauss-Newton method and the steepest-descent method depending on the matrix Di. Note also that the Lucas-Kanade algorithm can be interpreted as a Gauss-Newton method, which is preferable if the current estimate is close to the solution (i.e., the linear approximation used in the Lucas-Kanade approach is good). In contrast, if the current estimate is far from the solution, then the steepest descent method is preferable. Thus, using the Levenberg-Marquardt method within the Lucas-Kanade algorithm generally improves the solution.

In the method of Levenberg-Marquardt [27], at each iteration i a metric Ei for the similarity of the images g̃k and gk−1 within the region Ω is computed by Ei = Σ_{x∈Ω(xc)} ( g̃k(x) − gk−1(x) )² and compared with the previous value Ei−1. If the metric has decreased during iteration i (i.e., if the images have become more similar), then the elements of Di are decreased by a constant factor (e.g., ci = ci−1/C, where C > 1). The system of linear equations in (10) then becomes more similar to the original solution in (6). In contrast, when the metric increases, then the elements of Di are increased by a constant factor (e.g., ci = ci−1C). A disadvantage of the Levenberg-Marquardt method is that the comparison of the metrics Ei and Ei−1 in subsequent iterations is only exploited qualitatively, since the factor ci is only a qualitative measure for the image similarity and does not reflect the quantitative value of the metric. For example, a change of the metric from Ei−1 = 1000 to Ei = 999 results in the same factor ci as a change of the metric from Ei−1 = 1000 to Ei = 100. Instead, we here measure the image similarity by a comparison of the image gradients. This allows the weights to be defined based on a quantitative measure. In our approach, at every iteration i we determine a quantitative value of the image similarity based on the image gradients, which defines the weights. For each voxel x we compute a weight function w(x) by

w(x) = (1/2) | ||∇g̃k(x)|| − ||∇gk−1(x)|| | / max(||∇g̃k(x)||, ||∇gk−1(x)||) + (1/4) ( 1 − ∇g̃k(x)^T ∇gk−1(x) / (||∇g̃k(x)|| ||∇gk−1(x)||) )
(11)

where the first term of the sum evaluates the similarity of the gradients w.r.t. the magnitude and the second term evaluates the similarity w.r.t. the angle. The weight function w(x) is zero when the image gradients are identical, and close to 1 when they are most dissimilar. The weights are then used to compute the entries of the diagonal matrix by (Di)jj = c w(xc) Σ_{x∈Ω(xc)} (∂gk(x)/∂xj)², where c is a constant (in our experiments we used c = 1.0, which in initial experiments yielded good results). Note that another possibility for the weight function would be to use a normalized value of the image metric Ei. However, using the image gradients is generally more robust w.r.t. intensity variations and, in addition, more directly determines the deviation from the linear approximation used in the Lucas-Kanade approach.
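
The following sketch computes a gradient-based weight per voxel and adds the resulting diagonal matrix Di to the normal equations before solving. The exact form of w(x) and of the diagonal entries is an assumption consistent with the description above, not the authors' exact implementation.

```python
import numpy as np

def gradient_weight(grad_a, grad_b, eps=1e-9):
    """Weight w(x) in the spirit of (11): 0 for identical gradients, close to 1
    for very dissimilar gradients (magnitude term plus angle term; assumed form)."""
    na = np.linalg.norm(grad_a, axis=0)
    nb = np.linalg.norm(grad_b, axis=0)
    mag = np.abs(na - nb) / np.maximum(np.maximum(na, nb), eps)
    cos = np.sum(grad_a * grad_b, axis=0) / np.maximum(na * nb, eps)
    return 0.5 * mag + 0.25 * (1.0 - cos)

def solve_weighted(A, rhs, w_c, c=1.0):
    """Add the diagonal weight matrix D^i of (10) (Levenberg-Marquardt-style
    damping scaled by the local weight w_c) and solve for the update vector U."""
    D = c * w_c * np.diag(np.diag(A))        # damp the diagonal of the small local system
    return np.linalg.solve(A + D, rhs)
```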

G. Combined Symmetric Weighting Approach

Our weighting approach described above can also be combined with our symmetric approach in (9) yielding

( Σ_{x∈Ω(xc)} (1/4) (∇gk(x) + ∇gk−1(x)) (∇gk(x) + ∇gk−1(x))^T + Di ) U = Σ_{x∈Ω(xc)} ( gk−1(x) − gk(x) ) (1/2) (∇gk(x) + ∇gk−1(x))
(12)

where Di is the diagonal weight matrix defined in Section II-F. We will refer to this extension as the symmetric weighting approach.

III. Experimental Study Using Simulated Particle Motion

Before applying the above-described nonrigid registration approaches, we first study the effects of the deformation of a cell on the motion types of particles. To this end we have generated 3-D particle motion data with characteristics as observed in real experiments. We distinguish between three different types of particle motion: directed, diffusive, and obstructed motion. We have generated 1000 particle tracks for each motion type, where each track consists of 23 time points.

The motion of particles can be characterized by the mean squared displacement (MSD), which on average is a function of the time interval Δt:

MSD(Δt) = 6D Δt^α
(13)

where D is the diffusion coefficient and α characterizes the motion type. The parameter α is approximately 1 for diffusive motion, larger than 1 for directed motion, and smaller than 1 for obstructed motion [12]. In Fig. 2(a) the average MSD over 1000 particles as a function of Δt is shown for each motion type. Directed motion has been generated by a mixture of a Pearson random walk and directed motion toward a randomly chosen location, with an instantaneous velocity increasing from 0.5 μm/ΔT to 1.0 μm/ΔT, where ΔT is the discrete time interval between two measurements. Diffusive motion was simulated by a Pearson random walk with an instantaneous velocity of 1.0 μm/ΔT. Finally, obstructed motion was simulated by a Pearson random walk within a small confined spherical region (radius of 1.8 μm) with an instantaneous velocity of 0.7 μm/ΔT. The particles are restricted to a larger spherical region (radius of 10 μm) to simulate subcellular particles within a nucleus. The values were chosen in accordance with the motion of particles as observed in real microscopy images.
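
A short sketch of how the MSD curve and the motion type parameter α of (13) can be computed for a single 3-D track (Python/NumPy; function names and the log-log fit are assumptions for illustration):

```python
import numpy as np

def msd_curve(track):
    """Mean squared displacement of one track (N x 3 array) for each time lag,
    averaged over all pairs of time points with that lag."""
    n = len(track)
    return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                     for lag in range(1, n)])

def motion_exponent(msd, dt=1.0):
    """Estimate alpha in MSD(dt) = 6 D dt^alpha by a straight-line fit in log-log space."""
    lags = dt * np.arange(1, len(msd) + 1)
    alpha, _ = np.polyfit(np.log(lags), np.log(np.maximum(msd, 1e-12)), 1)
    return alpha
```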

Fig. 2
Averaged mean squared displacement (MSD) of synthetic particle motion: (a) original data, (b) deforming cell (unregistered), (c) after registration using our nonrigid registration approach.

The MSD can be used to classify the motion type of a population of particles as described above. However, the determination of the motion type of a single particle from a finite set of time points can be error-prone, because diffusive motion is a random process and the MSD describes the average behavior of an ideally infinite number of samples. In our experiments we found, for example, that diffusive motion can be misclassified as directed or obstructed motion, which has also been observed in previous work [28]. Therefore, to improve the classification results, we classify the motion type of an individual particle by the relative shape anisotropy of the gyration tensor [29]. Whereas in previous work shape anisotropy was used to classify 2-D particle tracks (e.g., [28], [30]), we here use shape anisotropy to classify 3-D particle motion. The gyration tensor S of a particle track p(t) for N time points is defined by

Smn = (1/N) Σ_{t=1}^{N} pm(t) pn(t),   m, n ∈ {1, 2, 3}
(14)

where the coordinate system has been chosen such that Σ_{t=1}^{N} p(t) = 0. The eigenvalues λ1, λ2, and λ3 of the tensor can be used to compute the squared radius of gyration s2, the asphericity b, the acylindricity c, and the relative shape anisotropy k2:

s² = λ1 + λ2 + λ3
(15)

b = λ1 − (λ2 + λ3)/2
(16)

c = λ2 − λ3
(17)

k² = (b² + (3/4)c²) / (s²)²
(18)

where λ1 ≥ λ2 ≥ λ3. The relative shape anisotropy k2 is zero for an isotropic shape of the particle track and one for a maximally anisotropic shape. If, for example, the particle track can be described by a linear function (a straight line), the shape of the track is anisotropic and the parameter k2 is one. If the shape of the track exhibits a tetrahedral or higher symmetry, the shape can be considered isotropic and k2 tends to zero [29]. For each single simulated particle track we compute k2 for all subtracks (partial tracks) with a length of 10 time points, and the mean value over all subtracks is used to classify the particle motion. In Fig. 3(a) the histogram of k2 of all particle tracks is presented. The histogram shows that directed particle motion can be well distinguished from diffusive and obstructed particle motion by applying a threshold of k2 = 0.6.
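
For illustration, a sketch computing the relative shape anisotropy of a 3-D track from the gyration tensor, following (14)-(18), together with the averaging over subtracks of 10 time points (function names are assumptions):

```python
import numpy as np

def shape_anisotropy(track):
    """Relative shape anisotropy k^2 of a particle track (N x 3 array)."""
    p = track - track.mean(axis=0)                       # center so that sum_t p(t) = 0
    S = (p.T @ p) / len(p)                               # gyration tensor (14)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(S))[::-1]    # lambda_1 >= lambda_2 >= lambda_3
    s2 = l1 + l2 + l3                                    # squared radius of gyration (15)
    b = l1 - 0.5 * (l2 + l3)                             # asphericity (16)
    c = l2 - l3                                          # acylindricity (17)
    return (b ** 2 + 0.75 * c ** 2) / s2 ** 2            # relative shape anisotropy (18)

def mean_subtrack_anisotropy(track, length=10):
    """Mean k^2 over all subtracks of the given length, used for classification."""
    return np.mean([shape_anisotropy(track[i:i + length])
                    for i in range(len(track) - length + 1)])
```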

Fig. 3
(a) Histogram of relative shape anisotropy for directed, diffusive, and obstructed particle motion data, (b) histogram and cumulative histogram of the maximum distance of the particle position to the position at the first time point for 1000 particles ...

Diffusive and obstructed motion can be distinguished by determining the maximum distance dmax of the particle position at a certain time point to the starting position at the first time point. The classification accuracy depends on dmax and the radius of the confined region for obstructed motion. In Fig. 3(b) the histogram and the cumulative histogram of dmax is shown for 1000 particles with diffusive motion for all 23 time points. It can be seen that if we use dmax = 1.8 μm in accordance with the radius of the confined region as mentioned above, then only about 5% of particles with diffusive motion are misclassified as particles with obstructed motion (i.e., 5% of the particles do not exceed a maximum distance of 1.8 μm). In our case to distinguish between diffusive and obstructed motion we used a threshold of dmax = 1.8 μm + ct, where ct = 0.1 μm is a tolerance term to take into account small numerical errors.
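
Combining the two criteria, a hypothetical classification routine for a single 3-D track could look as follows; it relies on the mean_subtrack_anisotropy sketch given above, with the thresholds stated in the text:

```python
import numpy as np

def classify_motion(track, k2_threshold=0.6, d_max_threshold=1.8 + 0.1, window=10):
    """Classify one track as directed, diffusive, or obstructed motion (Sec. III)."""
    if mean_subtrack_anisotropy(track, window) > k2_threshold:
        return "directed"                                 # strongly anisotropic track shape
    d_max = np.max(np.linalg.norm(track - track[0], axis=1))
    return "diffusive" if d_max > d_max_threshold else "obstructed"
```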

IV. Experimental Results Using Synthetic Image Sequences

In this section, we describe experimental results of applying our approach for nonrigid temporal registration of nuclei images and subsequent particle motion classification based on synthetic data. To simulate a nucleus, we have generated 3-D image data of a spherical cell with an outer radius of 10 μm. For the motion of the particles within the nucleus we use the simulated data as described in the previous section. The motion is restricted to a spherical region with a radius of 10.0 μm. In Fig. 4(a) a visualization of the sphere by three orthogonal slices is shown. The spherical cell consists of an inner sphere (radius of 2.5 μm) with uniform intensity of 175, a region between radii of 2.5 μm and 5.0 μm with uniform intensity of 150, a region between radii of 5.0 μm and 7.5 μm with uniform intensity of 175, and an outer region between radii of 7.5 μm and 10 μm with an intensity level of 200. The voxel size is 0.22 μm × 0.22 μm × 0.22 μm. The 3-D synthetic images of a spherical cell nucleus were generated to serve as image data for which ground truth is known, and based on which the performance of the different approaches can be compared quantitatively. Using different regions with different intensity values, compared to a completely homogeneous cell nucleus, has the advantage that an interior intensity structure is introduced which can be exploited by the intensity-based registration approaches. Our experimental results showed that this relatively coarse model of a cell nucleus is well-suited for a quantitative evaluation of the different approaches.
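
A sketch of how such a synthetic spherical nucleus image can be generated (the helper name and the noise handling are assumptions; intensities, radii, and voxel size as given above):

```python
import numpy as np

def synthetic_nucleus(shape=(100, 100, 100), voxel_size=0.22, sigma_n=1.0):
    """3-D spherical test nucleus with concentric intensity shells (Sec. IV)."""
    zz, yy, xx = np.indices(shape)
    center = (np.array(shape) - 1) / 2.0
    r = voxel_size * np.sqrt((zz - center[0]) ** 2 +
                             (yy - center[1]) ** 2 +
                             (xx - center[2]) ** 2)       # radius in micrometers
    img = np.zeros(shape)
    img[r < 10.0] = 200                                   # outer shell (7.5-10 um)
    img[r < 7.5] = 175
    img[r < 5.0] = 150
    img[r < 2.5] = 175                                    # inner sphere
    return img + np.random.normal(0.0, sigma_n, shape)    # additive Gaussian noise
```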

Fig. 4
(a) Original synthetic image at time point 0, (b) original image at time point 22, (c) registered image at time point 22 using our nonrigid registration approach.

To evaluate the effects of a deformation of a cell nucleus on the determined motion types we have computed deformation fields based on an analytic solution of the Navier equation:

μ ∇²u(x) + (μ + λ) ∇(∇ · u(x)) = 0
(19)

where u(x) denotes the deformation vector u at voxel x, and μ as well as λ are the Lamé coefficients. The analytic solution has been obtained by defining displacements at the border of the sphere, which act as boundary conditions. The deformation was chosen such that the radius of the spherical cell was decreased from 10 μm at the first time point to 5 μm at the last time point.

The computed deformation fields were then used to deform the generated image data of the spherical cell as well as the particle motion data, resulting in a 3-D image sequence and 3-D particle data over time with deformation. The image sequence consists of 23 3-D images of size 100 × 100 × 100 voxels. We have added Gaussian noise to the images with standard deviations σn = 1.0 and σn = 2.0, resulting in two image sequences. A visualization of the deformed sphere at the last time point is shown in Fig. 4(b), and the averaged MSD for the corresponding particle motion data is shown in Fig. 2(b). In comparison to Fig. 2(a), which shows the averaged MSD for the original data, it can be seen that the deformation changes the observed motion types. To recover the true motion types we have applied our nonrigid registration approaches as well as the demons-based approaches. The computed transformations were used to register the particle positions with respect to the reference frame. The results for the classification of the particle motion are shown in Table I for the approaches using temporal regularization for the image sequence with Gaussian noise of σn = 1.0. The first and second rows give the results for the original particle data and for the particle data with deformation, whereas the remaining rows summarize the results after application of the different nonrigid registration approaches. It can be seen that for all approaches the most significant improvements are achieved for obstructed motion. The demons-based approaches resulted in a classification accuracy for obstructed motion of 72.6% (non-symmetric) and 72.5% (symmetric), respectively. The Lucas-Kanade algorithm using the reference implementation of Barron et al. [22] yielded similar results as the demons-based approaches. The best result is obtained by the symmetric approach, which increases the classification accuracy for obstructed motion from 45.2% to 77.7%. The averaged classification accuracy for all motion types for the symmetric approach is 84.67%, which is a significant improvement compared to the unregistered case with 64.93% and compared to the demons-based approaches with 82.30% (non-symmetric) and 82.20% (symmetric), respectively.

TABLE I
Classification Accuracies for the Different Non-Rigid Registration Approaches With Temporal Regularization With σT = 2.0 (Synthetic Images, Gaussian Noise With Standard Deviation σn = 1.0)

The averaged classification accuracies for different levels of Gaussian noise and for the approaches without and with temporal regularization (σT = 2.0) are shown in Table II. The original particle data can be classified with an overall accuracy of 95.80%. A small fraction of particles with diffusive motion is misclassified as directed motion and vice versa, due to the overlap of the histograms of the relative shape anisotropy parameter (see Fig. 3(a)), and some particles with diffusive motion are misclassified as obstructed motion. Under deformation the overall classification accuracy is decreased to 64.93%. Using the demons-based approaches for registration yielded a classification accuracy of 82.90% (non-symmetric) and 82.79% (symmetric) without temporal regularization (averaged over the two noise levels of σn = 1.0 and σn = 2.0), and 82.12% (non-symmetric) and 82.00% (symmetric) with temporal regularization, respectively. The Lucas-Kanade approach (Barron et al. [22]) yielded similar results as the demons-based approaches. Our approaches yielded a significantly higher accuracy, where the best result is obtained by the weighting approach with an accuracy of 84.75% (without temporal regularization) and 84.23% (with temporal regularization).

TABLE II
Classification Accuracies for the Different Non-Rigid Registration Approaches Without and With Temporal Regularization for Different Levels of Gaussian Noise (Synthetic Images)

The averaged MSD for the registered particle motion using the symmetric weighting approach is shown in Fig. 2(c), and a visualization of the registered image at the last time point is shown in Fig. 4(c). In comparison to the unregistered case in Fig. 2(b) and Fig. 4(b) we obtain a significant improvement. We have also quantified the motion type parameter α for the particle data without registration and with registration using our approaches as well as the demons-based approaches, and we have summarized the results for the image data with Gaussian noise (σn = 1.0) in Table III. For the unregistered particle data with deformation significant errors are obtained for the diffusive and obstructed motion. However, after registration the motion type parameter is relatively well recovered using our approaches, and the result is significantly better compared to the demons-based approaches. In addition, we have computed the mean absolute error ēMSD of the MSD curves by averaging the absolute differences between the MSD curves after registration and the original MSD curves (without deformation). The results for all nonrigid registration approaches are shown in Table IV. It can be seen that the best results are obtained for our approaches, in particular, for the symmetric weighting approach.

TABLE III
Results for the Motion Type Parameter α of the MSD Curves for the Different Non-Rigid Registration Approaches (Synthetic Images)
TABLE IV
Results for the Mean Error ēMSD of the MSD Curves for the Different Non-Rigid Registration Approaches (Synthetic Images)

V. Experimental Results Using Real Image Data

We have also applied our approach to four 2-D and two 3-D multichannel time-lapse fluorescence microscopy image sequences of live cells. The image sequences depict in one channel nuclei of living human cells (U2OS cell line) with different chromatin stainings (H2A-mCherry, YFP-SP100 with Hoechst), and in a second channel subcellular particles (CFP-stained PML bodies). Strong deformations occur since the cells are going into mitosis (cell division). The images were acquired by a DeltaVision RT widefield microscope (Applied Precision) at a resolution of 0.21576 μm × 0.21576 μm for the 2-D image sequences, and 0.21576 μm × 0.21576 μm × 0.5 μm as well as 0.21576 μm × 0.21576 μm × 1.5 μm for the 3-D image sequences. The four 2-D image sequences consist of 100 to 200 images per sequence and have a resolution of 384 × 384 to 512 × 512 pixels. The two 3-D image sequences have resolutions of 480 × 480 × 10 and 512 × 512 × 15 voxels with 100 and 149 images per sequence. As an example, in Fig. 5(a), (b) the original images at time points 0 and 70 from a 2-D image sequence are shown. In Fig. 5(c) the difference image between the original images is displayed (contrast-enhanced) and in Fig. 5(d) the difference image of the registered images using the symmetric weighting approach is presented (contrast-enhanced). It can be seen that we obtain a good alignment. In Fig. 5(e) the registration transformation for the image at time point 70 using the symmetric weighting approach is represented as a vector field.

Fig. 5
(a) Original image at time point 0, (b) original image at time point 70, (c) difference of the original images, (d) difference of the registered images, (e) registration transformation visualized as a vector field.

To evaluate the performance of the nonrigid registration approaches, we calculated the root mean squared (RMS) intensity error and the correlation coefficient (CC) between registered images from subsequent time points. In Fig. 6(a), (b) the RMS error and the CC averaged over all images of a 2-D image sequence consisting of 150 images are shown. It can be seen that the best results are obtained using our approaches (three curves at the bottom of the RMS diagram and at the top of the CC diagram). The symmetric approach performs best. In comparison, the demons-based approaches show a much lower convergence rate. We have used 10 iterations for each image pair. The computation time for one iteration using images with 512 × 512 pixels and a single-core implementation is approximately 2.9, 3.3, and 5.3 seconds for the demons, the symmetric demons, and the Lucas-Kanade approach, respectively. Our symmetric, weighting, and symmetric weighting approaches take 6.2, 6.9, and 7 seconds, respectively. However, it should be noted that all algorithms can be highly parallelized.
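
The two evaluation measures can be computed directly from a pair of registered images, for example as in the following sketch (function names are assumptions):

```python
import numpy as np

def rms_error(a, b):
    """Root mean squared intensity difference between two images."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def correlation_coefficient(a, b):
    """Pearson correlation coefficient between the intensities of two images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]
```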

Fig. 6
(a) Root mean squared (RMS) intensity error and (b) correlation coefficient (CC) averaged over a real image sequence.

In addition, we performed a quantitative evaluation using ground truth for all four 2-D image sequences, as well as for the two 3-D image sequences. Ground truth was determined for each image of the image sequences by manually localizing spot-like structures within the nucleus. In each of the 2-D image sequences we determined the locations of 9 structures over 38 to 125 subsequent images, and in each of the 3-D image sequences the positions of 6 structures over 28 to 83 subsequent images were localized. For each structure we computed the error as the mean Euclidean distance to its ground truth location at the first image without registration and after registration. In Fig. 7 the error for an example structure is shown for a 2-D image sequence using no temporal regularization (a) and using temporal regularization (σT = 2.0) (b). Fig. 8 shows the analogous results for a 3-D image sequence. It can be seen that our approaches yield significantly lower errors compared to the unregistered case and compared to the demons-based approaches. The results for the mean error ē averaged over all 9 structures in all considered images of the 2-D image sequences are shown in Table V without temporal regularization and with temporal regularization using different values of the standard deviation (σT = 1.0, 1.5, 2.0). Without registration we have ē = 18.76 pixels. Without temporal regularization, the demons-based approaches yielded ē = 10.53 pixels (non-symmetric) and ē = 6.79 pixels (symmetric), respectively. The Lucas-Kanade algorithm (Barron et al. [22]) yielded ē = 11.20 pixels. In comparison, our approaches resulted in much lower errors of ē = 2.20 pixels, ē = 1.71 pixels, and ē = 2.07 pixels for the symmetric, weighting, and symmetric weighting approach, respectively. With temporal regularization we obtained analogous results, while for all approaches the mean error increased. In Table VI the results for the mean error averaged over all 6 structures in the 3-D image sequences are shown. Without registration we have ē = 23.09 voxels. Without temporal regularization, the demons-based approaches yielded ē = 21.12 voxels (non-symmetric) and ē = 18.08 voxels (symmetric), respectively. The Lucas-Kanade approach (Barron et al. [22]) yielded ē = 18.38 voxels. The best results are obtained by our approaches with averaged mean errors of ē = 5.80 voxels (symmetric), ē = 5.47 voxels (weighting), and ē = 5.00 voxels (symmetric weighting). With temporal regularization we obtained analogous results, while the errors increased. In summary, the registration error has been significantly reduced by our approaches in comparison to the unregistered case and compared to the demons-based approaches.

Fig. 7
Error of the position of a nucleus structure to the ground truth position without and with registration of a 2-D image sequence using the different nonrigid registration approaches: (a) without temporal regularization, (b) with temporal regularization ...
Fig. 8
Error of the position of a nucleus structure to the ground truth position without and with registration of a 3-D image sequence using the different nonrigid registration approaches: (a) without temporal regularization, (b) with temporal regularization ...
TABLE V
Mean Error of the Positions of Nucleus Structures to Ground Truth Positions Without and With Registration Averaged Over 2-D Image Sequences Using the Different Non-Rigid Registration Approaches for Different Strengths of Temporal Regularization (σ ...
TABLE VI
Mean Error of the Positions of Nucleus Structures to Ground Truth Positions Without and With Registration Averaged Over 3-D Image Sequences Using the Different Non-Rigid Registration Approaches for Different Strengths of Temporal Regularization (σ ...

Moreover, we determined the motion types of the particles from the 2-D image sequences (169 particles in total) by computing the MSD and determining α before and after registration. Using the demons-based approaches yielded a change of the motion type for 35.5% (non-symmetric) and 41.4% (symmetric) of the particles, respectively. Our approach corrected the motion type of 56.2%, 57.4%, and 55.6% of the particles for the symmetric, the weighting, and the symmetric weighting approach, respectively. Thus, a significantly larger number of particles was corrected.

VI. Discussion

From the experimental results based on synthetic as well as real images it turned out that our approaches consistently yield better results than the previous demons-based approaches. The demons-based approaches can be interpreted as steepest descent optimization schemes, which are known for a relatively low convergence rate, especially in the neighborhood of the solution. In comparison, our approaches, which are extensions of the Lucas-Kanade algorithm, can be interpreted as Gauss-Newton optimization schemes, which are superior in accuracy and convergence compared to the steepest descent method (see, e.g., [22], [31]). This is probably a main reason for the improved performance of our approaches. For the synthetic images, the best result was obtained by our approaches with a classification accuracy of approximately 84%. Without registration the classification accuracy is about 65%. Thus, the use of registration yields a significant improvement. One reason why the classification accuracy after registration is not higher is probably the classification scheme used for the motion types of single particles. For the original data without deformation the classification accuracy is about 96%. Thus, besides further improvements of the registration approaches, improvements of the classification scheme are expected to further improve the classification accuracy. Based on our experimental results it also turned out that temporal regularization did not improve the registration result. It seems that the spatial regularization used already introduces enough smoothing for estimating the deformation vector field.

VII. Summary

We have presented a novel intensity-based approach for temporal registration of 2-D and 3-D multichannel fluorescence cell microscopy images. Our approach determines for each image of an image sequence a nonrigid transformation to the image at the first time point (reference coordinate system). The transformation can then be used to decouple the movement and deformation of a cell from the movement of nuclear particles. This allows an improved classification of the particle motion. Our approach is based on two extensions of an optic flow algorithm. The first extension is based on a symmetric formulation of the original minimization problem. The second extension is a weighting approach, which is based on the nonlinear optimization method of Levenberg-Marquardt and uses the similarity of the image gradients as a weighting function during iterative optimization. Based on simulations we have shown that the deformation of a cell changes the observed motion type, and that registration is required to obtain an improved classification of particle motion. Using 2-D and 3-D synthetic as well as real image sequences, we have analyzed the performance of our approach and we have performed a quantitative comparison with previous approaches for temporal registration of cell nuclei images. It turned out that our approach yields a significant improvement compared to previous schemes. In future work, we plan to analyze in more detail the classified motion of particles based on a very large number of images and to assess biological hypotheses about cellular processes.

Acknowledgments

This work was supported by the BMBF FORSYS project ViroQuant and the Human Frontier Science Program (HFSP). The work of D. L. Spector was supported by NIH/GM42694 and NIH/EY18244. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Birsen Yazici.

This work benefited from the use of ITK [32].

Biographies

Il-Han Kim received the Diploma degree in electrical engineering from the University of Hannover, Hannover, Germany, in 2003, and is currently pursuing the Ph.D. degree in computer science in the Biomedical Computer Vision Group, University of Heidelberg and the German Cancer Research Center in Heidelberg, Germany.

Until 2005, he was with Panasonic, studying motion estimation algorithms for frame rate conversion methods. His research interests include motion estimation, rigid and nonrigid image registration, as well as spatiotemporal analysis of biomedical image data.

Yi-Chun M. Chen received the B.S. degree in life science from National Tsing Hua University, Hsinchu, Taiwan, in 2000, and the Ph.D. degree from the Department of Biochemistry and Cell Biology, State University of New York at Stony Brook, in 2007.

She is currently a postdoctoral fellow in Dr. Matthew Krummel’s laboratory at the University of California, San Francisco, and focuses on signaling pathways that regulate T cell motility during T cell activation. Her approaches include biochemical purification, live cell imaging, intravital imaging, biosensor labeling, image analysis, and mouse genetics.

David L. Spector received the Ph.D. degree in cell biology from Rutgers University, Piscataway, NJ, in 1980.

He is currently a Professor at Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, and Head of the Gene Expression Program of the Cold Spring Harbor Laboratory Cancer Center. His research centers on understanding the organization and regulation of gene expression in living cells. His laboratory’s work is focused on implementing innovative approaches to elucidate the spatial and temporal aspects of gene expression and in identifying and characterizing the function of nuclear retained long noncoding RNAs.

Dr. Spector has edited numerous microscopy techniques manuals that are used in laboratories throughout the world. He serves on the editorial boards of Journal of Cell Science, Epigenetics and Chromatin, and Current Opinion in Cell Biology. He is a member of the American Society for Cell Biology and currently serves on the ASCB council.

Roland Eils received the Ph.D. degree in mathematics from Heidelberg University, Germany, in 1995.

He is currently a Professor at Heidelberg University and holds a joint appointment as division head at the German Cancer Research Center (DKFZ) in Heidelberg. He is founding director of BioQuant, the Center for Quantitative Analysis of Molecular and Cellular Biosystems at Heidelberg University. He has edited the book Computational Systems Biology (Elsevier). His research focuses on the integration of tools from mathematical modeling, image analysis and informatics into life science research and he is a leading figure of systems biology research. His research is applying various methods on issues related to human health such as viral infection, cellular death pathways and cancer genomics.

Prof. Eils is coordinating the two major German systems biology initiatives, FORSYS and the Helmholtz Alliance on Systems Biology. He is a member of the International Society for Systems Biology (ISSB).

Karl Rohr (A’01) studied electrical engineering at the University of Karlsruhe (TH) and received the Ph.D. degree (1994) as well as the Habilitation degree (1999) in computer science from the University of Hamburg, Germany.

He is an Associate Professor and Head of the Biomedical Computer Vision group (BMCV) at the University of Heidelberg and the DKFZ (German Cancer Research Center). From 2000 to 2004 he was an Associate Professor at the International University in Germany, Bruchsal. In summer 1999 he spent a research stay at the Surgical Planning Laboratory, Harvard Medical School, Boston, MA. Since 2007 he has been a Guest Professor at the International University in Germany, Bruchsal. His research field is biomedical image analysis with focus on elastic registration, vessel segmentation, landmark localization, and tracking. He has written a book on Landmark-Based Image Analysis (Kluwer Academic, 2001) covering landmark localization and elastic registration, and he has published more than 150 peer-reviewed scientific articles.

Dr. Rohr is an Associate Editor of the IEEE Transactions on Biomedical Engineering and has served on the editorial board of the journal Pattern Recognition.

Contributor Information

Il-Han Kim, Biomedical Computer Vision Group, Department of Bioinformatics and Functional Genomics, and Department of Theoretical Bioinformatics, University of Heidelberg, D-69120 Heidelberg, Germany.

Yi-Chun M. Chen, Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724 USA.

David L. Spector, Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724 USA.

Roland Eils, Biomedical Computer Vision Group, Department of Bioinformatics and Functional Genomics, and Department of Theoretical Bioinformatics, University of Heidelberg, D-69120 Heidelberg, Germany.

Karl Rohr, Biomedical Computer Vision Group, Department of Bioinformatics and Functional Genomics, and Department of Theoretical Bioinformatics, University of Heidelberg, D-69120 Heidelberg, Germany.

References

1. Wilson CA, Theriot JA. A correlation-based approach to calculate rotation and translation of moving cells. IEEE Trans Image Process. 2006 Jul;15(7):1939–1951.
2. Würflinger T, Stockhausen J, Meyer-Ebrecht D, Böcking A. Robust automatic coregistration, segmentation, and classification of cell nuclei in multimodal cytopathological microscopic images. Computerized Medical Imaging and Graphics. 2004 Sep;28(28):87–98.
3. Goobic AP, Tang J, Acton ST. Image stabilization and registration for tracking cells in the microvasculature. IEEE Trans Biomed Eng. 2005;52(2):287–299.
4. Sage D, Neumann FR, Hediger F, Gasser SM, Unser M. Automatic tracking of individual fluorescent particles: Application to the study of chromosome dynamics. IEEE Trans Image Process. 2005 Sep;14(9):1372–1383.
5. Cabal GG, Genovesio A, Rodriguez-Navarro S, Zimmer C, Gadal O, Lesne A, Buc H, Feuerbach-Fournier F, Olivio-Marin JC, Hurt EC, Nehrbass U. Saga interacting factors confine sub-diffusion of transcribed genes to the nuclear envelope. Nature. 2006 Jun;441:770–773.
6. Baheerathan S, Albregtsen F, Danielsen H. Registration of serial sections of mouse liver cell nuclei. J Microscopy. 1998;192(1):37–53.
7. Bornfleth H, Edelmann P, Zink D, Cremer T, Cremer C. Quantitative motion analysis of subchromosomal foci in living cells using four-dimensional microscopy. Biophys J. 1999 Nov;77(5):2871–2886.
8. Rieger B, Molenaar C, Dirks R, Vliet IV. Alignment of the cell nucleus from labeled proteins only for 4D in vivo imaging. Microscopy Research and Technique. 2004;64(2):142–150.
9. Dzyubachyk O, van Cappellen WA, Essers J, Niessen W, Meijering E. Energy minimization methods for cell motion correction and intracellular analysis in live-cell fluorescence microscopy. Proc. IEEE Int. Symp. Biomed. Imag.: From Nano to Macro 2009 (ISBI'09); Piscataway, NJ. 2009. pp. 1127–1130.
10. Gerlich D, Beaudouin J, Gebhard M, Ellenberg J, Eils R. Four-dimensional imaging and quantitative reconstruction to analyse complex spatiotemporal processes in live cells. Nature Cell Biology. 2001 Sep;3:852–856.
11. Matula P, Matula P, Kozubek M, Dvorák V. Fast point-based 3-D alignment of live cells. IEEE Trans Image Process. 2006 Aug;15(8):2388–2396.
12. Bacher CP, Reichenzeller M, Athale C, Herrmann H, Eils R. 4-D single particle tracking of synthetic and proteinaceous microspheres reveals preferential movement of nuclear particles along chromatin-poor tracks. BMC Cell Biology. 2004 Nov;5(45).
13. Zitová B, Flusser J. Image registration methods: A survey. Image Vis Comput. 2003 Oct;21(11):977–1000.
14. Holden M. A review of geometric transformations for nonrigid body registration. IEEE Trans Med Imag. 2008 Jan;27(1):111–128.
15. Mattes J, Nawroth J, Boukamp P, Eils R, Greulich-Bode K. Analyzing motion and deformation of the cell nucleus for studying co-localizations of nuclear structures. Proc. IEEE Int. Symp. Biomed. Imag.: From Nano to Macro (ISBI'06); Arlington, VA. Apr. 6–9, 2006; pp. 1044–1047.
16. Yang S, Köhler D, Teller K, Cremer T, Le Baccon P, Heard E, Eils R, Rohr K. Nonrigid registration of 3-D multichannel microscopy images of cell nuclei. IEEE Trans Image Process. 2008 Apr;17(4):493–499.
17. Kim I, Yang S, Le Baccon P, Heard E, Chen Y, Spector D, Kappel C, Eils R, Rohr K. Non-rigid temporal registration of 2-D and 3-D multi-channel microscopy image sequences of human cells. Proc. IEEE Int. Symp. Biomed. Imag.: From Nano to Macro 2007 (ISBI'07); Arlington, VA. Apr. 12–15, 2007; pp. 1328–1331.
18. Peng H, Myers G. Comparing in situ mRNA expression patterns of Drosophila embryos. Proc. 8th Annu. Int. Conf. Research in Computational Molecular Biology (RECOMB 2004); San Diego, CA. Mar. 2004; pp. 157–166.
19. Preibisch S, Ejsmont R, Rohlfing T, Tomancak R. Towards digital representation of Drosophila embryogenesis. Proc. IEEE Int. Symp. Biomed. Imag.: From Nano to Macro 2008 (ISBI'08); Paris, France. May 14–17, 2008; pp. 324–327.
20. Thirion JP. Image matching as a diffusion process: An analogy with Maxwell's demons. Medical Image Analysis. 1998;2(3):243–260.
21. Lucas BD, Kanade T. An iterative image registration technique with an application to stereo vision. Proc. 7th Int. Joint Conf. Artificial Intelligence (IJCAI'81); Vancouver, British Columbia, Canada. Aug. 24–28, 1981; pp. 674–679.
22. Barron JL, Fleet DJ, Beauchemin SS. Performance of optical flow techniques. Int J Comput Vis. 1994 Feb;12(1):43–77.
23. Kim I-H, Eils R, Rohr K. Non-rigid alignment of multi-channel fluorescence microscopy images of live cells for improved classification of subcellular particle motion. Proc. SPIE Medical Imaging 2009: Biomedical Applications in Molecular, Structural, and Functional Imaging; Lake Buena Vista, FL. Feb. 2009; vol. 7262, p. 72620S.
24. Pennec X, Cachier P, Ayache N. Understanding the “Demon’s Algorithm”: 4D non-rigid registration by gradient descent. Proc. 2nd Int. Conf. Medical Image Computing and Computer-Assisted Intervention 1999 (MICCAI'99); Cambridge, U.K. Sep. 19–22, 1999; pp. 597–605.
25. Baker S, Matthews I. Lucas-Kanade 20 years on: A unifying framework. Int J Comput Vis. 2004 Mar;56(3):221–255.
26. Keller Y, Averbuch A. Global parametric image alignment via high-order approximation. Comput Vis Image Understand. 2008 Mar;109(3):244–259.
27. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical Recipes in C: The Art of Scientific Computing. 2nd ed. Cambridge, U.K.: Cambridge Univ. Press; 1992.
28. Saxton MJ. Lateral diffusion in an archipelago: Single-particle diffusion. Biophys J. 1993 Jun;64(6):1766–1780.
29. Theodorou DN, Suter UW. Shape of unperturbed linear polymers: Polypropylene. Macromolecules. 1985 Nov;18(6):1206–1214.
30. Jaqaman K, Loerke D, Mettlen M, Kuwata H, Grinstein S, Schmid SL, Danuser G. Robust single-particle tracking in live-cell time-lapse sequences. Nature Methods. 2008 Jul;5(8):695–702.
31. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical Recipes in C. New York: Cambridge Univ. Press; 1992.
32. Ibáñez L, Schroeder W, Ng L, Cates J. The ITK Software Guide. Clifton Park, NY: Kitware Inc; 2005.