In this work, we present a method for integrating feature and intensity information for non-rigid registration. Our method is based on a free-form deformation model; it uses a normalized mutual information similarity metric to match intensities and the robust point matching framework to estimate feature (point) correspondences. The intensity and feature components of the registration are posed in a single energy functional with associated weights. We compare our method to both point-based and intensity-based registrations. In particular, we evaluate registration accuracy, as measured by point landmark distances and image intensity similarity, on a set of seventeen normal subjects. These results suggest that the integration of intensity and point-based registration is highly effective in yielding more accurate registrations.
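The single energy functional described above can be sketched as a weighted sum of an intensity term and a feature term. The following is a minimal illustration, not the authors' implementation: the histogram bin count, the squared point-distance penalty, and the single weight `alpha` are all simplifying assumptions.

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information between two intensity arrays,
    estimated from a joint histogram (bin count is an assumption)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (hx + hy) / hxy

def combined_energy(fixed, warped, pts_fixed, pts_warped, alpha=0.5):
    """Weighted sum of an intensity term (negated NMI, so lower is better)
    and a feature term (mean squared distance between corresponding points)."""
    intensity_term = -nmi(fixed, warped)
    feature_term = np.mean(np.sum((pts_fixed - pts_warped) ** 2, axis=1))
    return intensity_term + alpha * feature_term
```

Minimizing this functional trades off intensity similarity against point-correspondence fidelity through the weight `alpha`.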
Rationale and Objectives
To develop non-rigid image registration between pre-procedure contrast-enhanced MR images and intra-procedure unenhanced CT images, to improve tumor visualization and localization during CT-guided liver tumor cryoablation procedures.
Materials and Methods
After IRB approval, a non-rigid registration (NRR) technique was evaluated with different pre-processing steps and algorithm parameters and compared to a standard rigid registration (RR) approach. The Dice Similarity Coefficient (DSC), Target Registration Error (TRE), 95% Hausdorff distance (HD) and total registration time (minutes) were compared using a two-sided Student’s t-test. The entire registration method was then applied during five CT-guided liver cryoablation cases with the intra-procedural CT data transmitted directly from the CT scanner, with both accuracy and registration time evaluated.
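The comparison metrics named above (DSC, TRE, 95% HD) can be computed from the standard definitions. This is a generic sketch of those definitions, not the study's own evaluation code:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def tre(landmarks_a, landmarks_b):
    """Target Registration Error: mean Euclidean distance between
    corresponding landmark pairs (rows of the two arrays)."""
    return np.mean(np.linalg.norm(landmarks_a - landmarks_b, axis=1))

def hausdorff_95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets,
    a robust variant that discards the farthest 5% of mismatches."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    fwd = d.min(axis=1)   # each point in A to its nearest point in B
    bwd = d.min(axis=0)   # each point in B to its nearest point in A
    return max(np.percentile(fwd, 95), np.percentile(bwd, 95))
```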
Selected optimal parameters for registration were a section thickness of 5 mm, cropping of the field of view to 66% of its original size, manual segmentation of the liver, a B-spline control grid of 5×5×5, and spatial sampling of 50,000 pixels. A mean 95% HD of 3.3 mm (a 2.5× improvement compared to RR, p<0.05), a mean DSC of 0.97 (a 13% increase), and a mean TRE of 4.1 mm (a 2.7× reduction) were measured. During the cryoablation procedures, registration of the pre-procedure MR to the planning intra-procedure CT took a mean time of 10.6 minutes, MR to the targeting CT took 4 minutes, and MR to the monitoring CT took 4.3 minutes. Mean registration accuracy was under 3.4 mm.
Non-rigid registration allowed improved visualization of the tumor during interventional planning, targeting and evaluation of tumor coverage by the ice ball. Future work is focused on reducing segmentation time to make the method more clinically acceptable.
non-rigid registration; B-Spline registration; liver tumor cryoablation; multimodal registration
Purpose. To develop a technique to automate landmark selection for point-based interpolating transformations for nonlinear medical image registration. Materials and Methods. Interpolating transformations were calculated from homologous point landmarks on the source (the image to be transformed) and the target (the reference image). Point landmarks were placed at regular intervals on contours of anatomical features, and their positions were optimized along the contour surface by a function composed of curvature similarity and the displacements of the homologous landmarks. The method was evaluated in two cases (n = 5 each). In one, MRI was registered to histological sections; in the second, geometric distortions in EPI MRI were corrected. Normalized mutual information and target registration error were calculated to compare the registration accuracy of the automatically and manually generated landmarks. Results. Statistical analyses demonstrated significant improvement (P < 0.05) in registration accuracy from landmark optimization in most data sets, and trends toward improvement (P < 0.1) in the others, as compared to manual landmark selection.
Image-guided radiotherapy (IGRT), adaptive radiotherapy (ART), and online reoptimization rely on accurate mapping of the radiation beam isocenter(s) from planning to treatment space. This mapping involves rigid and/or nonrigid registration of planning (pCT) and intratreatment (tCT) CT images. The purpose of this study was to retrospectively compare a fully automatic approach, including a non-rigid step, against a user-directed rigid method implemented in a clinical IGRT protocol for prostate cancer. Isocenters resulting from automatic and clinical mappings were compared to reference isocenters carefully determined in each tCT. Comparison was based on displacements from the reference isocenters and prostate dose-volume histograms (DVHs). Ten patients with a total of 243 tCTs were investigated. Fully automatic registration was found to be as accurate as the clinical protocol but more precise for all patients. The average of the unsigned x, y, and z offsets and the standard deviations (σ) of the signed offsets computed over all images were (avg. ± σ (mm)): 1.1 ± 1.4, 1.8 ± 2.3, 2.5 ± 3.5 for the clinical protocol and 0.6 ± 0.8, 1.1 ± 1.5 and 1.1 ± 1.4 for the automatic method. No failures or outliers from automatic mapping were observed, while 8 outliers occurred for the clinical protocol.
Non-rigid multi-modal image registration plays an important role in medical image processing and analysis. Existing registration methods based on similarity metrics such as mutual information (MI) and the sum of squared differences (SSD) struggle to achieve both high registration accuracy and high efficiency. To address this problem, we propose a novel two-phase non-rigid multi-modal image registration method that combines Weber local descriptor (WLD) based similarity metrics with normalized mutual information (NMI) under a diffeomorphic free-form deformation (FFD) model. The first phase aims at recovering the large deformation component using the WLD-based non-local SSD (wldNSSD) or weighted structural similarity (wldWSSIM). Building on the output of the first phase, the second phase focuses on obtaining accurate transformation parameters for the small deformation using the NMI. Extensive experiments on T1-, T2- and PD-weighted MR images demonstrate that the proposed wldNSSD-NMI or wldWSSIM-NMI method outperforms registration methods based on the NMI, the conditional mutual information (CMI), the SSD on entropy images (ESSD) and the ESSD-NMI in terms of both registration accuracy and computational efficiency.
non-rigid multi-modal registration; Weber local descriptor; sum of squared differences; structural similarity; mutual information
We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
Aliasing; image registration; image resampling; interpolation artifacts
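The contrast between the conventional sum form of SSD and its integral formulation can be illustrated on 1-D signals. The sketch below uses a simple trapezoidal rule as the integral approximation; the paper's own quadrature and interpolation kernels are more elaborate, so this only shows that the two formulations differ numerically:

```python
import numpy as np

def ssd_sum(f, g):
    """Conventional SSD: a plain sum over samples."""
    return np.sum((f - g) ** 2)

def ssd_integral(f, g, dx=1.0):
    """SSD formulated as an integral of the squared-difference signal,
    approximated here with the trapezoidal rule (an illustrative choice)."""
    d2 = (f - g) ** 2
    return dx * (d2[0] / 2 + d2[1:-1].sum() + d2[-1] / 2)
```

Even for a constant difference the two disagree (n samples vs. n−1 intervals), hinting at how summation approximations can shift cost-function values and optima.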
Ultrasound-guided prostate interventions could benefit from incorporating the radiologic localization of the tumor which can be acquired from multiparametric MRI. To enable this integration, we propose and compare two solutions for registration of T2 weighted MR images with transrectal ultrasound. Firstly, we propose an innovative and practical approach based on deformable registration of binary label maps obtained from manual segmentation of the gland in the two modalities. This resulted in a target registration error of 3.6±1.7 mm. Secondly, we report a novel surface-based registration method that uses a biomechanical model of the tissue and results in registration error of 3.2±1.3 mm. We compare the two methods in terms of accuracy, clinical use and technical limitations.
We have developed an algorithm for the rigid-body registration of a CT volume to a set of C-arm images. The algorithm uses a gradient-based iterative minimization of a least-squares measure of dissimilarity between the C-arm images and projections of the CT volume. To compute projections, we use a novel method for fast integration of the volume along rays. To improve robustness and speed, we take advantage of a coarse-to-fine processing of the volume/image pyramids. To compute the projections of the volume, the gradient of the dissimilarity measure, and the multiresolution data pyramids, we use a continuous image/volume model based on cubic B-splines, which ensures a high interpolation accuracy and a gradient of the dissimilarity measure that is well defined everywhere. We show the performance of our algorithm on a human spine phantom, where the true alignment is determined using a set of fiducial markers.
To determine whether a non-rigid registration (NRR) technique was more accurate than a rigid registration (RR) technique when fusing pre-procedural contrast-enhanced MR images to unenhanced CT images during CT-guided percutaneous cryoablation of renal tumors.
Both RR and NRR were applied retrospectively to 11 CT-guided percutaneous cryoablation procedures performed to treat renal tumors (mean diameter, 23 mm). Pre-procedural contrast-enhanced MR images of the upper abdomen were registered to unenhanced intra-procedural CT images obtained just prior to the ablation. RRs were performed manually, and NRRs were performed using an intensity-based approach with affine and Basis-Spline techniques used for modeling displacement. Registration accuracy for each technique was assessed using the 95% Hausdorff distance (HD), Fiducial Registration Error (FRE) and the Dice Similarity Coefficient (DSC). Statistical differences were analyzed using a two-sided Student’s t-test. Time for each registration technique was recorded.
Mean 95% HD (1.7 mm), FRE (1.7 mm) and DSC (0.96) using the NRR technique were significantly better than mean 95% HD (6.4 mm), FRE (5.0 mm) and DSC (0.88) using the RR technique (P < 0.05 for each analysis). Mean registration times of NRR and RR techniques were 15.2 and 5.7 min, respectively.
The non-rigid registration technique was more accurate than the rigid registration technique when fusing pre-procedural MR images to intra-procedural unenhanced CT images. The non-rigid registration technique can be used to improve visualization of renal tumors during CT-guided cryoablation procedures.
Multi-modality image fusion; Cryoablation; Renal tumors; B-Spline; Non-rigid registration
MRI is the preferred staging modality for rectal carcinoma patients. This work assesses the CT–MRI co-registration accuracy of four commercial rigid-body techniques for external beam radiotherapy treatment planning for patients treated in the prone position without fiducial markers.
Seventeen patients with biopsy-proven rectal carcinoma were scanned with CT and MRI in the prone position without the use of fiducial markers. A reference co-registration was performed by consensus of a radiologist and two physicists. This was compared with two automated and two manual techniques on two separate treatment planning systems. Accuracy and reproducibility were analysed using a measure of target registration error (TRE) that was based on the average distance of the mis-registration between vertices of the clinically relevant gross tumour volume as delineated on the CT image.
An automated technique achieved the greatest accuracy, with a TRE of 2.3 mm. Both automated techniques demonstrated perfect reproducibility and were significantly faster than their manual counterparts. There was a significant difference in TRE between registrations performed on the two planning systems, but there were no significant differences between the manual and automated techniques.
For patients with rectal cancer, MRI acquired in the prone treatment position without fiducial markers can be accurately registered with planning CT. An automated registration technique offered a fast and accurate solution with associated uncertainties within acceptable treatment planning limits.
We present an evaluation of various non-rigid registration algorithms for the purpose of compensating interfractional motion of the target volume and organs at risk when acquiring CBCT image data prior to irradiation. Three different deformable registration (DR) methods were used: the Demons algorithm implemented in the iPlan software (BrainLAB AG, Feldkirchen, Germany) and two custom-developed piecewise methods using either a normalized correlation or a mutual information metric (featureletNC and featureletMI). These methods were tested on data acquired using a novel purpose-built phantom for deformable registration and on clinical CT/CBCT data of prostate and lung cancer patients. The Dice similarity coefficient (DSC) between manually drawn contours and the contours generated by a derived deformation field of the structures in question was compared to the result obtained with rigid registration (RR). For the phantom, the piecewise methods were slightly superior: featureletNC for the intramodality and featureletMI for the intermodality registrations. For the prostate cases, the DSC improved over RR in fewer than 50% of the images studied. Deformable registration methods improved the outcome over rigid registration for the lung cases and in the phantom study, but not significantly for the prostate study. A significantly superior deformation method could not be identified.
Deformable registration; radiotherapy; organ motion
This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model.
A non-rigid registration technique is used to map relevant anatomical regions of rodent CT images from combined PET/CT studies onto the corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit, which allows the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT-to-CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify registration accuracy using established metrics such as the Dice coefficient and the Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake was performed.
The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10 % in most of the organs considered.
The proposed automated quantification technique is reliable, robust and suitable for fast quantification of preclinical PET data in large serial studies.
PET/CT; Small animals; Quantification; Deformable registration; Atlas
To apply an intensity-based nonrigid registration algorithm to MRI-guided prostate brachytherapy clinical data and to assess its accuracy.
Materials and Methods
A nonrigid registration of preoperative MRI to intraoperative MRI images was carried out in 16 cases using a Basis-Spline algorithm in a retrospective manner. The registration was assessed qualitatively by experts’ visual inspection and quantitatively by measuring the Dice similarity coefficient (DSC) for total gland (TG), central gland (CG), and peripheral zone (PZ), the mutual information (MI) metric, and the fiducial registration error (FRE) between corresponding anatomical landmarks for both the nonrigid and a rigid registration method.
All 16 cases were successfully registered in less than 5 min. After the nonrigid registration, the DSC values for TG, CG, and PZ were 0.91, 0.89, and 0.79, respectively; the MI metric was −0.19 ± 0.07; and the FRE was 2.3 ± 1.8 mm. All metrics were significantly better than in the case of rigid registration, as determined by one-sided t-tests.
The intensity-based nonrigid registration method using clinical data was demonstrated to be feasible and showed statistically improved metrics when compared to rigid registration alone. The method is a valuable tool to integrate pre- and intraoperative images for brachytherapy.
prostate brachytherapy; signal intensity-based nonrigid registration; B-Spline
Registration of preoperative and postresection images is often needed to evaluate the effectiveness of treatment. While several non-rigid registration methods exist, most would be unable to accurately align these types of datasets due to the absence of tissue in one image. Here we present a joint registration and segmentation algorithm which handles the missing correspondence problem. An intensity-based prior is used to aid in the segmentation of the resection region from voxels with valid correspondences in the two images. The problem is posed in a maximum a posteriori (MAP) framework and optimized using the expectation-maximization (EM) algorithm. Results on both synthetic and real data show our method improved image alignment compared to a traditional non-rigid registration algorithm as well as a method using a robust error kernel in the registration similarity metric.
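The E-step of such a MAP/EM scheme can be sketched as a per-voxel posterior over "valid correspondence" versus "resected tissue", assuming a zero-mean Gaussian model for intensity differences at matched voxels and a uniform outlier class for the resection region. All parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np

def valid_correspondence_posterior(diff, sigma=10.0, outlier_density=0.01,
                                   prior_valid=0.9):
    """E-step sketch: posterior probability that each voxel has a valid
    correspondence. `diff` holds per-voxel intensity differences between
    the registered images; sigma, outlier_density, and prior_valid are
    hypothetical model parameters."""
    gauss = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    num = prior_valid * gauss                       # valid-class likelihood
    den = num + (1 - prior_valid) * outlier_density  # plus outlier class
    return num / den
```

In the M-step, voxels with a low posterior would be down-weighted (or excluded) in the registration similarity metric, which is how the missing-correspondence region stops corrupting the alignment.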
We demonstrate a technique for estimating the location of the hippocampus in MRI and CT images for use in radiotherapy treatment planning, using both rigid and contour based deformable image registration. The automatically generated contours can be subsequently modified for a given patient.
By mapping the hippocampi from several patients into a template image set, a population-based average hippocampal atlas was generated. Approximate hippocampal contours can be automatically generated in a given image set by mapping this atlas onto it. The performance and accuracy of several atlases generated in different ways were tested on 10 MRI images and 7 CT images.
Results and Conclusions
Auto-contouring based on deformable registration significantly outperformed that based on rigid registration alone, with an average Dice similarity score of 0.62 (range 0.40–0.76) for methods utilizing deformation. Comparable results were achieved in auto-contouring CT images when deformable registration was used, demonstrating that the methodology is robust with respect to imaging modality.
Hippocampus; deformable registration; radiotherapy; WBRT; MRI
For eye diseases such as glaucoma and age-related macular degeneration (ARMD), which involve long-term degenerative processes, longitudinal comparison of retinal images is a common step toward reliable diagnosis.
To provide a retinal image registration approach for longitudinal retinal image alignment and comparison.
Two image registration solutions were proposed to handle the differing image quality of retinal images, making the registration methods more robust and feasible in a clinical application system.
Thirty pairs of longitudinal retinal images were used for the registration test. The experiments showed that both solutions provided accurate and efficient image registration.
We proposed a set of retinal image registration solutions for longitudinal retinal image observation and comparison, targeting a clinical application environment.
Retinal image registration; Glaucoma; ARMD; clinical decision support
We propose and compare different registration approaches to align small-animal PET studies and a procedure to validate the results by means of objective registration consistency measurements.
We have applied a registration algorithm based on information theory, using different approaches to mask the reference image. The registration consistency allows for the detection of incorrect registrations. This methodology has been evaluated on a test dataset (FDG-PET rat brain images).
The results show that a multiresolution two-step registration approach based on the use of the whole image at the low resolution step, while masking the brain at the high resolution step, provides the best robustness (87.5% registration success) and highest accuracy (0.67-mm average).
The major advantages of our approach are minimal user interaction and automatic assessment of the registration error, avoiding visual inspection of the results, thus facilitating the accurate, objective, and rapid analysis of large groups of rodent PET images.
Image registration; Positron emission tomography (PET); Validation; Algorithm; Rats
A biomechanical model-based deformable image registration incorporating specimen-specific changes in material properties is optimized and evaluated for correlating histology of clinical prostatectomy specimens with in vivo MRI. In this methodology, a three-step registration based on biomechanics calculates the transformations between histology and fixed, fixed and fresh, and fresh and in vivo states. A heterogeneous linear elastic material model is constructed based on magnetic resonance elastography (MRE) results. The ex vivo tissue MRE data provide specimen-specific information for the fresh and fixed tissue to account for the changes due to fixation. The accuracy of the algorithm was quantified by calculating the target registration error (TRE) at naturally occurring anatomical points identified within the prostate in each image. TRE improved with the deformable registration algorithm compared to rigid registration alone. The qualitative assessment also showed a good alignment between histology and MRI after the proposed deformable registration.
Biomechanical models; correlative pathology; deformable registration; finite element model; magnetic resonance elastography
An algorithm was developed to estimate the 3D lung tumour position using the projection data forming a cone beam CT sinogram and a template registration method. A pre-existing respiration-correlated CT image was used to generate templates of the target, which were then registered to the individual cone beam CT projections, and estimates of the target position were made for each projection. The registration search region was constrained based on knowledge of the mean tumour position during the cone beam CT scan acquisition. Several template registration algorithms were compared, including correlation coefficient and robust methods such as block correlation, robust correlation coefficient, and robust gradient correlation. Robust registration metrics were found to be less sensitive to occlusions such as overlying tissue and the treatment couch. Mean accuracy of the position estimation was 1.4 mm in phantom with a robust registration algorithm. In two research subjects with peripheral tumours, the mean position and mean target excursion were estimated to within 2.0 mm compared to the results obtained with a ‘4D’ registration of 4D image volumes.
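Template registration with a correlation-coefficient metric over a constrained search region can be sketched as follows. This is a generic illustration; the robust variants discussed above (block correlation, robust correlation coefficient, robust gradient correlation) change the scoring function, not this overall search structure:

```python
import numpy as np

def ncc(template, patch):
    """Correlation coefficient between a template and an image patch."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return (t * p).sum() / denom if denom > 0 else 0.0

def match_template(image, template, search_box):
    """Exhaustively scan a constrained search region (r0, r1, c0, c1) and
    return the (row, col) offset of the best-scoring template position.
    Constraining the box mimics using prior knowledge of the mean target
    position to limit the search."""
    r0, r1, c0, c1 = search_box
    th, tw = template.shape
    best, best_pos = -np.inf, None
    for r in range(r0, r1 - th + 1):
        for c in range(c0, c1 - tw + 1):
            s = ncc(template, image[r:r + th, c:c + tw])
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos, best
```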
In this paper, a method of acquiring intraoperative data using a laser range scanner (LRS) is presented within the context of model-updated image-guided surgery. Registering textured point clouds generated by the LRS to tomographic data is explored using established point-based and surface techniques as well as a novel method that incorporates geometry and intensity information via mutual information (SurfaceMI). Phantom registration studies were performed to examine accuracy and robustness for each framework. In addition, an in vivo registration is performed to demonstrate feasibility of the data acquisition system in the operating room. Results indicate that SurfaceMI performed better in many cases than point-based (PBR) and iterative closest point (ICP) methods for registration of textured point clouds. Mean target registration error (TRE) for simulated deep tissue targets in a phantom were 1.0 ± 0.2, 2.0 ± 0.3, and 1.2 ± 0.3 mm for PBR, ICP, and SurfaceMI, respectively. With regard to in vivo registration, the mean TRE of vessel contour points for each framework was 1.9 ± 1.0, 0.9 ± 0.6, and 1.3 ± 0.5 mm for PBR, ICP, and SurfaceMI, respectively. The methods discussed in this paper in conjunction with the quantitative data provide impetus for using LRS technology within the model-updated image-guided surgery framework.
Cortical surface; image-guided surgery; iterative closest point; laser-range scanner; mutual information; registration
Establishing correspondences across brains for the purposes of comparison and group analysis is almost universally done by registering images to one another either directly or via a template. However, there are many registration algorithms to choose from. A recent evaluation of fully automated nonlinear deformation methods applied to brain image registration was restricted to volume-based methods. The present study is the first that directly compares some of the most accurate of these volume registration methods with surface registration methods, as well as the first study to compare registrations of whole-head and brain-only (de-skulled) images. We used permutation tests to compare the overlap or Hausdorff distance performance for more than 16,000 registrations between 80 manually labeled brain images. We compared every combination of volume-based and surface-based labels, registration, and evaluation. Our primary findings are the following: 1. de-skulling aids volume registration methods; 2. custom-made optimal average templates improve registration over direct pairwise registration; and 3. resampling volume labels on surfaces or converting surface labels to volumes introduces distortions that preclude a fair comparison between the highest ranking volume and surface registration methods using present resampling methods. From the results of this study, we recommend constructing a custom template from a limited sample drawn from the same or a similar representative population, using the same algorithm used for registering brains to the template.
We present a 3D non-rigid registration algorithm for the potential use in combining PET/CT and transrectal ultrasound (TRUS) images for targeted prostate biopsy. Our registration is a hybrid approach that simultaneously optimizes the similarities from point-based registration and volume matching methods. The 3D registration is obtained by minimizing the distances of corresponding points at the surface and within the prostate and by maximizing the overlap ratio of the bladder neck in both images. The hybrid approach captures not only the deformation at the prostate surface and internal landmarks but also the deformation at the bladder neck region. The registration uses a soft assignment and deterministic annealing process: correspondences are iteratively established in a fuzzy-to-deterministic manner. B-splines are used to generate a smooth non-rigid spatial transformation. In this study, we tested our registration with pre- and post-biopsy TRUS images of the same patients. Registration accuracy was evaluated using manually defined anatomic landmarks, i.e., calcifications. The root-mean-square (RMS) error of the difference image between the reference and floating images decreased by 62.6±9.1% after registration. The mean target registration error (TRE) was 0.88±0.16 mm, i.e., less than 3 voxels with a voxel size of 0.38×0.38×0.38 mm3, for all five patients. The experimental results demonstrate the robustness and accuracy of the 3D non-rigid registration algorithm.
Transrectal ultrasound (TRUS); non-rigid registration; PET/CT; image-guided prostate biopsy; molecular imaging; image registration; prostate cancer; targeted biopsy of the prostate
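The fuzzy-to-deterministic correspondence idea can be illustrated with a softassign-style matrix whose temperature parameter controls how hard the assignment is. This is a schematic sketch only; the full method's annealing loop, outlier handling, and alternation with B-spline fitting are not shown:

```python
import numpy as np

def soft_correspondence(pts_a, pts_b, temperature):
    """Fuzzy correspondence matrix: rows index points in A, columns points
    in B; entries are Gaussian-weighted by squared distance and
    row-normalized. As the temperature is lowered toward zero, each row
    approaches a hard (deterministic) assignment to the nearest point."""
    d2 = np.sum((pts_a[:, None, :] - pts_b[None, :, :]) ** 2, axis=2)
    m = np.exp(-d2 / temperature)
    return m / m.sum(axis=1, keepdims=True)
```

An annealing schedule would re-estimate the transformation at each temperature and then cool, so early iterations average over many candidate matches while late iterations commit to specific ones.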
We present the fast Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified demons objective function can be efficiently implemented on the sphere using convolution. Based on the one parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast – registration of two cortical mesh models with more than 100k nodes takes less than 5 minutes, comparable to the fastest surface registration algorithms. Moreover, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different settings: (1) parcellation in a set of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces.
Prostate cancer is a major health threat for men. For over five years, the U.S. National Cancer Institute has performed prostate biopsies with a magnetic resonance imaging (MRI)-guided robotic system.
A retrospective evaluation methodology and analysis of the clinical accuracy of this system is reported.
Using the pre- and post-needle-insertion image volumes, a registration algorithm consisting of a two-step rigid registration followed by a deformable refinement was developed to capture prostate dislocation during the procedure. The method was validated using three-dimensional contour overlays of the segmented prostates, and the registrations were accurate to within 2 mm.
It was found that tissue deformation was less of a factor than organ displacement. Across the 82 biopsies from 21 patients, the mean target displacement, needle placement error, and clinical biopsy error were 5.9 mm, 2.3 mm, and 4 mm, respectively.
The results suggest that motion compensation for organ displacement should be used to improve targeting accuracy.
Subtraction of Ictal SPECT Co-registered to MRI (SISCOM) is an imaging technique used to localize the epileptogenic focus in patients with intractable partial epilepsy. The aim of this study was to determine the accuracy of the registration algorithms involved in SISCOM analysis using FocusDET, a new user-friendly application. To this end, Monte Carlo simulation was employed to generate realistic SPECT studies. Simulated sinograms were reconstructed using the Filtered BackProjection (FBP) algorithm and an Ordered Subsets Expectation Maximization (OSEM) reconstruction method that included compensation for all degradations. Registration errors in SPECT-SPECT and SPECT-MRI registration were evaluated by comparing the theoretical and actual transforms. Patient studies with well-localized epilepsy were also included in the registration assessment. Global registration errors, including SPECT-SPECT and SPECT-MRI registration errors, were less than 1.2 mm on average and in no case exceeded the voxel size (3.32 mm) of the SPECT studies. Although images reconstructed using OSEM led to lower registration errors than images reconstructed with FBP, the differences between OSEM and FBP reconstructions were less than 0.2 mm on average. This indicates that correction for degradations does not play a major role in the SISCOM process, thereby facilitating the application of the methodology in centers where OSEM is not implemented with correction of all degradations. These findings, together with those obtained by clinicians from patients via MRI, interictal and ictal SPECT, and video-EEG, show that FocusDET is a robust application for performing SISCOM analysis in clinical practice.
Epilepsy; SISCOM; Monte Carlo simulation; Reconstruction algorithms; Registration assessment
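The subtraction step of a SISCOM-style pipeline can be sketched as follows, assuming the ictal and interictal volumes are already co-registered. Mean-count normalization and a z-score threshold are illustrative simplifications; FocusDET's actual processing may differ:

```python
import numpy as np

def siscom_focus_map(ictal, interictal, z_thresh=2.0):
    """SISCOM-style subtraction sketch: intensity-normalize the
    co-registered ictal and interictal SPECT volumes, subtract, and keep
    voxels whose difference exceeds a z-score threshold. The surviving
    voxels form the candidate focus map to overlay on MRI."""
    a = ictal / ictal.mean()          # normalize by mean counts (assumption)
    b = interictal / interictal.mean()
    diff = a - b
    z = (diff - diff.mean()) / diff.std()
    return np.where(z > z_thresh, z, 0.0)
```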