Breast cancer is one of the leading causes of cancer death among women worldwide, and mammography is regarded as one of the main tools for its early detection. Computer-aided technology has been introduced to assist in breast cancer detection, and within such systems the detection and segmentation of masses are of central importance. The shape of a mass can be used as one factor in determining whether it is malignant or benign. However, many current methods are semi-automatic. In this paper, we investigate a fully automatic segmentation method.
In this paper, a new mass segmentation algorithm is proposed. A fully automatic marker-controlled watershed transform first segments the mass region roughly, and a level set method then refines the segmentation. To address the over-segmentation caused by the watershed transform, we also investigated several noise-reduction techniques. Images from the Digital Database for Screening Mammography (DDSM) were used in the experiments, and the results show that the new algorithm improves the accuracy of mass segmentation.
The new algorithm combines the advantages of both methods: coupling the watershed-based segmentation with the level set method improves the efficiency of the segmentation, while the noise-reduction step reduces over-segmentation.
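The marker-controlled watershed at the core of the rough segmentation step can be illustrated with a generic priority-flood from labelled seed pixels; this is a minimal sketch of the technique, not the authors' implementation:

```python
import heapq
import numpy as np

def marker_watershed(image, markers):
    """Simplified marker-controlled watershed (priority-flood variant).

    Pixels are absorbed in order of increasing intensity, so region
    boundaries settle on ridges of `image`. `markers` holds positive
    integer labels at seed pixels and 0 elsewhere.
    """
    labels = markers.copy()
    heap = []
    h, w = image.shape
    for y, x in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (image[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                # claim the unlabelled neighbour for this region
                labels[ny, nx] = labels[y, x]
                heapq.heappush(heap, (image[ny, nx], ny, nx))
    return labels
```

On a gradient image, the flood from each marker stops where it meets another region on an intensity ridge, which is why marker placement (and noise reduction beforehand) controls over-segmentation.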
A fully automated, three-dimensional (3D) segmentation method for the identification of the pulmonary parenchyma in thorax X-ray computed tomography (CT) datasets is proposed. It is meant to be used as a pre-processing step in the computer-assisted detection (CAD) system for malignant lung nodule detection being developed by the Medical Applications in a Grid Infrastructure Connection (MAGIC-5) Project. In this new approach, the segmentation of the external airways (trachea and bronchi) is obtained by 3D region growing with wavefront simulation and suitable stop conditions, allowing accurate handling of the hilar region, which is notoriously difficult to segment. Particular attention was also devoted to checking and solving the problem of the apparent ‘fusion’ between the lungs, caused by partial-volume effects, while 3D morphology operations ensure the accurate inclusion of all the nodules (internal, pleural, and vascular) in the segmented volume. The new algorithm was initially developed and tested on a dataset of 130 CT scans from the Italung-CT trial, and was then applied to the ANODE09-competition images (55 scans) and to the LIDC database (84 scans), giving very satisfactory results. In particular, the lung contour was adequately located in 96% of the CT scans, with incorrect segmentation of the external airways in the remaining cases. Segmentation metrics were calculated that quantitatively express the consistency between automatic and manual segmentations: the mean overlap degree of the segmentation masks is 0.96 ± 0.02, and the mean and maximum distances between the mask borders (averaged over the whole dataset) are 0.74 ± 0.05 and 4.5 ± 1.5, respectively, which confirms that the automatic segmentations quite correctly reproduce the borders traced by the radiologist.
Moreover, no tissue containing internal and pleural nodules was removed in the segmentation process, so the method proved fit for use in the framework of a CAD system. Finally, in a comparison with a two-dimensional segmentation procedure, inter-slice smoothness was calculated, showing that the masks created by the 3D algorithm are significantly smoother than those produced by the 2D-only procedure.
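The wavefront-style 3D region growing used for the airway tree can be illustrated with a plain 6-connected breadth-first flood over a CT-like volume; the thresholds, seed, and stop condition below are illustrative only, not the MAGIC-5 parameters:

```python
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, lo, hi):
    """Grow a 6-connected region from `seed`, accepting voxels whose
    value lies in [lo, hi] (e.g. an air-like HU window for airways)."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (lo <= volume[seed] <= hi):
        return mask
    mask[seed] = True
    front = deque([seed])           # the advancing wavefront
    while front:
        z, y, x = front.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not mask[n] and lo <= volume[n] <= hi:
                mask[n] = True
                front.append(n)
    return mask
```

In the paper's setting, additional stop conditions on the wavefront (e.g. sudden front-size growth at the transition into the parenchyma) terminate the flood; those are omitted here.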
CAD; image segmentation; lung nodules; region growing; grid; 3D imaging; biomedical image analysis
Infectious diseases are the second leading cause of death worldwide. To better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterization is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), many engineering improvements have significantly enhanced the resolution and contrast of the images, but there are still too few computational algorithms available for researchers to accurately quantify imaging data from anatomical structures and functional biological processes. Since such tools may help translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline into our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases.
We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists’ delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework.
Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape dissimilarity measure. Significant correlations regarding the estimated lesion volumes were obtained both in CT and PET images with respect to the ground truths (R2=0.8922, p<0.01 and R2=0.8664, p<0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on precisely defined anatomical landmark points by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm (all within the resolution limits) were found in the rabbit, ferret, and mouse data, respectively. Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections, as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground-glass opacity were the main abnormal imaging patterns and appeared consistently in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis.
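The two evaluation measures used above have compact definitions; a minimal NumPy sketch (binary masks for the DSC, boundary point sets for the symmetric Hausdorff distance):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, d) point sets:
    the worst-case nearest-neighbour distance in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The pairwise-distance formulation is quadratic in the number of boundary points; for full 3D masks a distance-transform-based implementation is preferable, but the definition is the same.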
We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
Quantitative analysis; Pulmonary infections; Small animal models; PET-CT; Image segmentation; H1N1; Tuberculosis
The iris as a unique identifier is predicated on the assumption that the iris image does not alter. This does not consider the fact that the iris changes in response to certain external factors including medication, disease, surgery as well as longer term ageing changes. It is also part of a dynamic optical system that alters with light level and focussing distance. A means of distinguishing the features that do not alter over time from those that do is needed. This paper applies iris recognition algorithms to a newly acquired database of 186 iris images from four subjects. These images have greater magnification and detail than iris images in existing databases. Iris segmentation methods are tested on the database. A new technique that enhances segmentation is presented and compared to two existing methods. These are also applied to test the effects of pupil dilation in the identification process.
Segmentation results from all the images showed that the proposed algorithm accurately detected pupil boundaries in 96.2% of the images, an increase of 88.7% over the most commonly used algorithm. For the images collected, the proposed technique also showed significant improvement in detecting the limbal boundary compared with the detection rates of existing methods. With regard to boundary displacement errors, only slight errors were found with the proposed technique, compared with the extreme errors made when existing techniques were applied. As the pupil becomes more dilated, the success of identification becomes increasingly dependent on the decision criterion used.
The enhanced segmentation technique described in this paper performs with greater accuracy than existing methods for the higher quality images collected in this study. Implementation of the proposed segmentation enhancement significantly improves pupil boundary detection and therefore overall iris segmentation. Pupil dilation is an important aspect of iris identification; with increasing dilation, there is a greater risk of identification failure. Choice of decision criterion for identification should be carefully reviewed. It needs to be recognised that differences in the quality of images in different databases may result in variations in the performance of iris recognition algorithms.
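A classical baseline for pupil-boundary detection is a Daugman-style integro-differential search: find the circle radius at which the mean intensity along the circumference changes most sharply. A toy sketch with a fixed centre, purely illustrative and not the segmentation enhancement proposed in the paper:

```python
import numpy as np

def circle_mean(image, cy, cx, r, n=64):
    """Mean intensity sampled along a circle of radius r about (cy, cx)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int),
                 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int),
                 0, image.shape[1] - 1)
    return image[ys, xs].mean()

def pupil_radius(image, cy, cx, radii):
    """Radius at which the circular mean jumps most (dark pupil -> iris)."""
    scores = [circle_mean(image, cy, cx, r) for r in radii]
    jumps = np.diff(scores)
    return radii[int(np.argmax(jumps)) + 1]
```

A full operator would also search over the centre coordinates and smooth the radial derivative; the point here is only the shape of the criterion.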
Correlation of information from multiple-view mammograms (e.g., MLO and CC views, bilateral views, or current and prior mammograms) can improve the performance of breast cancer diagnosis by radiologists or by computer. The nipple is a reliable and stable landmark on mammograms for the registration of multiple mammograms. However, accurate identification of nipple location on mammograms is challenging because of the variations in image quality and in the nipple projections, resulting in some nipples being nearly invisible on the mammograms. In this study, we developed a computerized method to automatically identify the nipple location on digitized mammograms. First, the breast boundary was obtained using a gradient-based boundary tracking algorithm, and then the gray level profiles along the inside and outside of the boundary were identified. A geometric convergence analysis was used to limit the nipple search to a region of the breast boundary. A two-stage nipple detection method was developed to identify the nipple location using the gray level information around the nipple, the geometric characteristics of nipple shapes, and the texture features of glandular tissue or ducts which converge toward the nipple. At the first stage, a rule-based method was designed to identify the nipple location by detecting significant changes of intensity along the gray level profiles inside and outside the breast boundary and the changes in the boundary direction. At the second stage, a texture orientation-field analysis was developed to estimate the nipple location based on the convergence of the texture pattern of glandular tissue or ducts towards the nipple. The nipple location was finally determined from the detected nipple candidates by a rule-based confidence analysis. In this study, 377 and 367 randomly selected digitized mammograms were used for training and testing the nipple detection algorithm, respectively. 
Two experienced radiologists identified the nipple locations, which were used as the gold standard. In the training data set, 301 nipples were positively identified and were referred to as visible nipples; seventy-six nipples could not be positively identified and were referred to as invisible nipples. The radiologists provided their estimates of the nipple locations in the latter group for comparison with the computer estimates. The computerized method detected 89.37% (269/301) of the visible nipples and 69.74% (53/76) of the invisible nipples within 1 cm of the gold standard. In the test data set, 298 and 69 of the nipples were classified as visible and invisible, respectively; 92.28% (275/298) of the visible nipples and 53.62% (37/69) of the invisible nipples were identified within 1 cm of the gold standard. The results demonstrate that nipple locations on digitized mammograms can be accurately detected when visible and reasonably estimated when invisible. Automated nipple detection will be an important step towards multiple-image analysis for CAD.
computer-aided detection; mammography; nipple detection; texture orientation field analysis
Care providers use complementary information from multiple imaging modalities to identify and characterize metastatic tumors in early stages and perform surveillance for cancer recurrence. These tasks require volume quantification of tumor measurements using computed tomography (CT) or magnetic resonance imaging (MRI) and functional characterization through positron emission tomography (PET) imaging. In vivo volume quantification is conducted through image segmentation, which may require both anatomical and functional images available for precise tumor boundary delineation. Although integrating multiple image modalities into the segmentation process may improve the delineation accuracy and efficiency, due to variable visibility on image modalities, complex shape of metastatic lesions, and diverse visual features in functional and anatomical images, a precise and efficient segmentation of metastatic breast cancer remains a challenging goal even for advanced image segmentation methods. In response to these challenges, we present here a computer-assisted volume quantification method for PET/MRI dual modality images using PET-guided MRI co-segmentation. Our aims in this study were (1) to determine anatomical tumor volumes automatically from MRI accurately and efficiently, (2) to evaluate and compare the accuracy of the proposed method with different radiotracers (18F-Z HER2-Affibody and 18F-fluorodeoxyglucose (18F-FDG)), and (3) to confirm the proposed method’s determinations from PET/MRI scans in comparison with PET/CT scans.
After the Institutional Administrative Panel on Laboratory Animal Care approval was obtained, 30 female nude mice were used to construct a small-animal breast cancer model. All mice were injected with human breast cancer cells and HER2-overexpressing MDA-MB-231HER2-Luc cells intravenously. Eight of them were selected for imaging studies, and selected mice were imaged with MRI, CT, and 18F-FDG-PET at weeks 9 and 10 and then imaged with 18F-Z HER2-Affibody-PET 2 days after the scheduled structural imaging (MRI and CT). After CT and MR images were co-registered with corresponding PET images, all images were quantitatively analyzed by the proposed segmentation technique.
Automatically determined anatomical tumor volumes were compared to radiologist-derived reference truths. Observer agreements were presented through Bland-Altman and linear regression analyses. Segmentation evaluations were conducted using true-positive (TP) and false-positive (FP) volume fractions of delineated tissue samples, in line with state-of-the-art evaluation techniques for image segmentation. Moreover, the PET images, obtained using different radiotracers, were examined and compared using the complex wavelet-based structural similarity index (CWSSI).
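The TP/FP volume fractions can be computed directly from binary masks. One common convention normalises both fractions by the reference (ground-truth) volume, as in the sketch below; note that published definitions of the FP fraction vary in the choice of denominator, so this is illustrative rather than the paper's exact formula:

```python
import numpy as np

def tp_fp_volume_fractions(seg, truth):
    """TP/FP volume fractions of a segmentation against a reference mask,
    both normalised by the reference volume."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    ref = truth.sum()
    tpvf = (seg & truth).sum() / ref   # correctly delineated fraction
    fpvf = (seg & ~truth).sum() / ref  # falsely delineated fraction
    return tpvf, fpvf
```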
PET/MR dual modality imaging using the 18F-Z HER2-Affibody imaging agent provided diagnostic image quality in all mice with excellent tumor delineations by the proposed method. The 18F-FDG radiotracer did not show accurate identification of the tumor regions. Structural similarity index (CWSSI) between PET images using 18F-FDG and 18F-Z HER2-Affibody agents was found to be 0.7838. MR showed higher diagnostic image quality when compared to CT because of its better soft tissue contrast. Significant correlations regarding the anatomical tumor volumes were obtained between both PET-guided MRI co-segmentation and reference truth (R2=0.92, p<0.001 for PET/MR, and R2=0.84, p<0.001, for PET/CT). TP and FP volume fractions using the automated co-segmentation method in PET/MR and PET/CT were found to be (TP 97.3%, FP 9.8%) and (TP 92.3%, FP 17.2%), respectively.
The proposed PET-guided MR image co-segmentation algorithm provided an automated and efficient way of assessing anatomical tumor volumes and their spatial extent. We showed that although the 18F-Z HER2-Affibody radiotracer in PET imaging is often used for characterization of tumors rather than detection, sensitivity and specificity of the localized radiotracer in the tumor region were informative enough; therefore, roughly determined tumor regions from PET images guided the delineation process well in the anatomical image domain for extracting accurate tumor volume information. Furthermore, the use of 18F-FDG radiotracer was not as successful as the 18F-Z HER2-Affibody in guiding the delineation process due to false-positive uptake regions in the neighborhood of tumor regions; hence, the accuracy of the fully automated segmentation method changed dramatically. Last, we qualitatively showed that MRI yields superior identification of tumor boundaries when compared to conventional CT imaging.
Image segmentation; Computer quantification; FDG-PET; MRI/PET; Breast cancer; Small-animal models; Co-segmentation; Volume quantification; Random walk
When reading mammograms, radiologists routinely search for and compare suspicious breast lesions identified on the corresponding craniocaudal (CC) and mediolateral oblique (MLO) views. Automatically identifying and matching the same true-positive breast lesions depicted on the two views is an important step in developing successful multiview-based computer-aided detection (CAD) schemes. The authors developed a method to automatically register breast areas and detect matching strips of interest used to identify the matched mass regions depicted on CC and MLO views. The method uses an ellipse-based model to fit the breast boundary contour (skin line) and to set a local Cartesian coordinate system for each view. An intersection point between the major or minor axis and the fitted ellipse perimeter on the breast boundary is selected as the origin, and the major and minor axes of the ellipse are used as the two axes of the Cartesian coordinate system. When a mass is identified on one view, the scheme computes its position in the local coordinate system. The distance is then mapped onto the local coordinates of the other view, and a registered centerline of the matching strip is created at the end of the mapped distance. The authors established an image database that includes 200 test examinations, each depicting one verified mass visible on the two views. They tested whether the registered centerline identified on the other view can be used to locate the matched mass region. The experiments show that the average distance between the mass region centers and the registered centerlines was 8.3 mm, and in 91% of the testing cases the registered centerline actually passed through the matched mass region. A matching strip width of 47 mm was required to achieve 100% sensitivity for the test database.
The results demonstrate the feasibility of the proposed method to automatically identify masses depicted on CC and MLO views, which may improve future development of multiview based CAD schemes.
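The core of the matching scheme is expressing a mass position in an ellipse-aligned local frame on one view and transferring the axis-aligned distances to the other view. A sketch of the frame change only; the ellipse centre and axis angle are assumed to come from a separate boundary-fitting step:

```python
import numpy as np

def to_ellipse_frame(point, centre, angle):
    """Coordinates of `point` in a Cartesian frame whose axes are the
    fitted ellipse's major/minor axes (rotated by `angle`) at `centre`."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, s], [-s, c]])          # rotation by -angle
    return rot @ (np.asarray(point, float) - np.asarray(centre, float))

def from_ellipse_frame(local, centre, angle):
    """Inverse mapping: place axis-aligned distances back into image
    space (used to draw the registered centerline on the other view)."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.asarray(local, float) + np.asarray(centre, float)
```

Mapping a mass to the other view amounts to `to_ellipse_frame` with that view's fitted ellipse followed by `from_ellipse_frame` with the other view's ellipse.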
ellipse-fitting; computer-aided detection; mammography; breast mass; image registration; region matching
The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology.
Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed, and contour evolution was allowed to proceed until the rate of change of the contour area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as the "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing the semi-automatically determined contours with the ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics.
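The stopping rule (evolve until the rate of area change is small) can be illustrated with a crude morphological surrogate for the level-set evolution: grow the contour interior only where a speed function is positive and stop when the relative area change drops below a tolerance. This is a toy stand-in, not the speed function developed in the paper:

```python
import numpy as np

def evolve(speed, mask, tol=1e-3, max_iter=200):
    """Grow `mask` one morphological step at a time wherever `speed` is
    positive; stop when the relative area change falls below `tol`."""
    for _ in range(max_iter):
        area = mask.sum()
        grown = mask.copy()
        # 4-connected dilation, restricted to positive-speed pixels
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        grown &= speed > 0
        if area and (grown.sum() - area) / area < tol:
            return grown
        mask = grown
    return mask
```

A real level-set evolution also shrinks and smooths the front via curvature terms; this sketch only captures the area-change stop condition and the role of a speed function that vanishes at edges.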
Results and discussion
The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate to within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented, but the contour interior rarely included pixels that the human expert judged not to be part of the CL. In localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early.
The hypothesis that level set segmentation can be accurate to within 1–2 mm on average was supported, although there can be some greater deviation. The method was robust to boundary leakage as evidenced by the high specificity. It was concluded that the technique is promising and that a suitable data set of human ovarian images should be obtained to conduct further studies.
The rupture of thin-cap fibroatheroma accounts for most acute coronary events. Optical Coherence Tomography (OCT) allows quantification of fibrous cap (FC) thickness in vivo. Conventional manual analysis, by visually determining the thinnest part of the FC is subject to inter-observer variability and does not capture the 3-D morphology of the FC. We propose and validate a computer-aided method that allows volumetric analysis of FC. The radial FC boundary is semi-automatically segmented using a dynamic programming algorithm. The thickness at every point of the FC boundary, along with 3-D morphology of the FC, can be quantified. The method was validated against three experienced OCT image analysts in 14 lipid-rich lesions. The proposed method may advance our understanding of the mechanisms behind plaque rupture and improve disease management.
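A dynamic-programming boundary search of this kind is typically run on a polar-resampled (angle × radius) cost image, accumulating costs row by row while allowing the radius to shift by at most one sample between adjacent A-lines. A generic sketch of that minimal-path step, not the paper's exact cost function:

```python
import numpy as np

def dp_boundary(cost):
    """Minimal-cost path through an (angles x radii) cost matrix,
    advancing one row at a time with radial jumps of at most one."""
    n, m = cost.shape
    acc = cost.astype(float)          # accumulated cost
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        for j in range(m):
            lo = max(0, j - 1)
            prev = acc[i - 1, lo:j + 2]
            k = int(np.argmin(prev)) + lo
            acc[i, j] += acc[i - 1, k]
            back[i, j] = k
    # backtrack from the cheapest end point
    path = [int(np.argmin(acc[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```

The ±1 radial constraint is what enforces boundary smoothness; a real FC segmentation would additionally wrap the path around 360° and derive the cost from edge strength.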
(100.0100) Image processing; (110.4500) Optical coherence tomography
Most erythrocyte-related diseases can be detected by analyzing hematology images, and the segmentation and detection of blood cells is an inevitable first step in such analysis. In this study, a novel method using a line operator and the watershed algorithm is presented for erythrocyte detection and segmentation in blood smear images; it also reduces the over-segmentation of the watershed algorithm, which is useful for segmenting different types of blood cells with partial overlap. The method uses the gray-scale structure of blood cells obtained by applying the Euclidean distance transform to binary images. Under this transform, the gray intensity of cell images gradually decreases from the center of each cell to its margin. To detect this intensity-variation structure, a line operator measuring gray-level variations along several directional line segments is applied. The line segments with maximum and minimum gray-level variations form a distinctive pattern that is applicable to detecting the central regions of cells. The intersection of these regions with the markers obtained by computing local maxima in the watershed algorithm was used to detect cell centers and to reduce the over-segmentation of the watershed algorithm. The method produced 1300 markers when segmenting the 1274 erythrocytes present in 25 blood smear images. The accuracy and sensitivity of the proposed method are 95.9% and 97.99%, respectively. The results show the proposed method's capability in detecting erythrocytes in blood smear images.
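The first two steps, a Euclidean distance transform of the binary cell mask followed by centre detection from the resulting centre-to-margin intensity structure, can be sketched in plain NumPy; here strict local maxima of the distance map stand in for the line-operator response:

```python
import numpy as np

def distance_map(binary):
    """Euclidean distance of each foreground pixel to the nearest
    background pixel (brute force; fine for small images)."""
    binary = binary.astype(bool)
    bg = np.argwhere(~binary)
    dist = np.zeros(binary.shape)
    for y, x in np.argwhere(binary):
        dist[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1).min())
    return dist

def cell_centres(dist):
    """Strict 8-neighbour local maxima of the distance map: candidate
    cell centres, usable as watershed markers."""
    peaks = []
    h, w = dist.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = dist[y - 1:y + 2, x - 1:x + 2]
            if dist[y, x] > 0 and (win < dist[y, x]).sum() == 8:
                peaks.append((y, x))
    return peaks
```

Partially overlapping cells produce one distance peak per cell, which is why marker counts close to the true cell count (1300 vs. 1274 above) indicate little over-segmentation.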
Blood smear images; line operator; watershed algorithm
Visual inspection for tongue analysis is a diagnostic method in traditional Chinese medicine (TCM). Owing to the variations in tongue features, such as color, texture, coating, and shape, it is difficult to precisely extract the tongue region in images. This study aims to quantitatively evaluate tongue diagnosis via automatic tongue segmentation.
Experiments were conducted using a clinical image dataset provided by the Laboratory of Traditional Medical Syndromes, Shanghai University of TCM. First, a clinical tongue image was refined by a saliency window. Second, we initialized the tongue area as the upper binary part and lower level set matrix. Third, a double geo-vector flow (DGF) was proposed to detect the tongue edge and segment the tongue region in the image, such that the geodesic flow was evaluated in the lower part, and the geo-gradient vector flow was evaluated in the upper part.
The performance of the DGF was evaluated using 100 images. The DGF exhibited better results compared with other representative studies, with its true-positive volume fraction reaching 98.5%, its false-positive volume fraction being 1.51%, and its false-negative volume fraction being 1.42%. The errors between the proposed automatic segmentation results and manual contours were 0.29 and 1.43% in terms of the standard boundary error metrics of Hausdorff distance and mean distance, respectively.
By analyzing the time complexity of the DGF and evaluating its performance via standard boundary and area error metrics, we have shown both efficiency and effectiveness of the DGF for automatic tongue image segmentation.
For some scoliotic patients, spinal instrumentation is inevitable. Among these patients, those with stiff curvature will need thoracoscopic disk resection. Removing the intervertebral disk with only thoracoscopic images is a tedious and challenging task for the surgeon. With computer-aided surgery and 3D visualisation of the intervertebral disk during surgery, surgeons would have access to additional information such as the remaining disk tissue or the distance of surgical tools from critical anatomical structures such as the aorta or the spinal canal. We hypothesized that automatically extracting 3D information on the intervertebral disk from MR images would help surgeons evaluate the remaining disk and would add a safety factor for the patient during thoracoscopic disk resection.
This paper presents a quantitative evaluation of an automatic segmentation method for 3D reconstruction of intervertebral scoliotic disks from MR images. The automatic segmentation method is based on the watershed technique and morphological operators. The 3D Dice Similarity Coefficient (DSC) is the main statistical metric used to validate the automatically detected preoperative disk volumes. The automatic detections of intervertebral disks of real clinical MR images are compared to manual segmentation done by clinicians.
Results show that, depending on the type of MR acquisition sequence, the 3D DSC can be as high as 0.79 (±0.04). These 3D results are also supported by a 2D quantitative evaluation as well as by robustness and variability evaluations. The mean discrepancy (in 2D) between the manual and automatic segmentations for regions around the spinal canal is 1.8 (±0.8) mm. The robustness study shows that, among the five factors evaluated, only the type of MRI acquisition sequence affects the segmentation results. Finally, the variability of the automatic segmentation method is lower than the variability associated with manual segmentation performed by different physicians.
This comprehensive evaluation of the automatic segmentation and 3D reconstruction of intervertebral disks shows that the proposed technique, used with a specific MRI acquisition protocol, can detect the intervertebral disks of scoliotic patients. The newly developed technique is promising for the clinical context and could eventually help surgeons during thoracoscopic intervertebral disk resection.
The extraction of the breast boundary is crucial for further analysis of a mammogram. Methods to extract the breast boundary can be classified into two categories: methods based on image processing techniques and those based on models. The former use image transformation techniques such as thresholding, morphological operations, and region growing. In the second category, the boundary is extracted using more advanced techniques, such as the active contour model. The problem with thresholding methods is that it is hard to automatically find the optimal threshold value using histogram information. Active contour models, on the other hand, require a starting point close to the actual boundary to successfully extract it. In this paper, we propose a probabilistic approach to address these problems. In our approach, we use local binary patterns to describe the texture around each pixel, and the smoothness of the boundary is handled by a new probability model. Experimental results show that the proposed method achieves 38% and 50% improvement over the results obtained by the active contour model and threshold-based methods, respectively, and it increases the stability of the boundary extraction process by up to 86%.
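The texture descriptor used here, the local binary pattern, encodes each pixel by comparing it with its 8 neighbours; a minimal 3×3 version (the basic, non-rotation-invariant variant, for illustration):

```python
import numpy as np

def lbp8(image):
    """Basic 3x3 local binary pattern: each output code records which of
    the 8 neighbours are >= the centre pixel, one bit per neighbour."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(int) << bit
    return out
```

Histograms of these codes over a window around each pixel give the texture feature; flat background and structured breast tissue produce clearly different code distributions, which is what the probabilistic boundary model exploits.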
As a critical technique in the digital pathology laboratory, automatic nuclear detection has been investigated for more than a decade. Conventional methods work directly on the raw images, whose color/intensity homogeneity within tissue/cell areas is undermined by artefacts such as uneven staining, making the subsequent binarization process prone to error. This paper concerns detecting cell nuclei automatically in digital pathology images by enhancing color homogeneity as a pre-processing step.
Unlike previous watershed-based algorithms, which rely on post-processing of the watershed output, we present a new method that incorporates the staining information of pathological slides into the analysis. This pre-processing step strengthens the color homogeneity within both the nuclear areas and the background, while keeping the nuclear edges sharp. A proof of convergence for the proposed algorithm is also provided. After pre-processing, Otsu's threshold is applied to binarize the image, which is further segmented via the watershed transform. To strike a proper compromise between splitting overlapping nuclei and avoiding over-segmentation, a naive Bayes classifier is designed to refine the splits suggested by the watershed segmentation.
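The binarization step relies on Otsu's method, which selects the threshold maximising the between-class variance of the gray-level histogram. A minimal pure-Python sketch of that idea (illustrative only; in the paper it is applied after the stain-guided pre-processing):

```python
def otsu_threshold(gray, levels=256):
    """Otsu's threshold for a flat list of integer pixel values in
    [0, levels - 1]: return the level t maximising the between-class
    variance; binarize with `value > t`."""
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0   # running sum of intensities in the background class
    w_bg = 0       # running count of background pixels
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal histogram the maximiser falls between the two modes, which is why the pre-processing step (which sharpens that bimodality) helps the subsequent binarization.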
The method is validated on 10 sets of 1000 × 1000 pathology images of lymphoma from one digital slide. The mean precision and recall rates are 87% and 91%, corresponding to a mean F-score of 89%. The standard deviations of these performance indicators are 5.1%, 1.6%, and 3.2%, respectively.
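For reference, the reported F-score is the harmonic mean of precision and recall. A small helper computing all three from detection counts (the counts used below are illustrative, not the paper's):

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall, and F-score from true-positive, false-positive,
    and false-negative detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```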
The precision/recall performance obtained indicates that the proposed method outperforms several other alternatives. In particular, for nuclear detection, stain guided mean-shift (SGMS) is more effective than the direct application of mean-shift in pre-processing. Our experiments also show that pre-processing the digital pathology images with SGMS gives better results than conventional watershed algorithms. Nevertheless, as only one type of tissue is tested in this paper, a further study is planned to enhance the robustness of the algorithm so that other types of tissues/stains can also be processed reliably.
Digital pathology; k-means; mean-shift; watershed
This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model.
A non-rigid registration technique is used to bring relevant anatomical regions of rodent CT images from combined PET/CT studies into correspondence with the corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit, which allows the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton, and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT-to-CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique, and automated quantitative analysis of tracer uptake was performed.
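Both overlap metrics used here are standard. A minimal sketch of the Dice coefficient and the symmetric Hausdorff distance on small point sets (brute force, for illustration only; real implementations use efficient spatial queries):

```python
def dice(a, b):
    """Dice coefficient between two segmentations represented as
    collections of voxel coordinates: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets:
    the largest distance from any point in one set to its nearest
    neighbour in the other set (Euclidean, brute force)."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(u, v):
        return max(min(d(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))
```

Dice rewards volume overlap, while the Hausdorff distance penalises the single worst boundary mismatch, so the two are complementary.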
The Dice coefficient and Hausdorff distance show fair to excellent agreement, with a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques, with relative errors below 10% in most of the organs considered.
The proposed automated quantification technique is reliable, robust and suitable for fast quantification of preclinical PET data in large serial studies.
PET/CT; Small animals; Quantification; Deformable registration; Atlas
An automatic framework is proposed to segment the right ventricle on ultrasound images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining the sparse matrix transform (SMT), a training model, and a localized region-based level set. First, the sparse matrix transform extracts the main motion regions of the myocardium as eigenimages by analyzing the statistical information of the images. Second, a training model of the right ventricle is registered to the extracted eigenimages to automatically detect the location of the right ventricle and the corresponding transform relationship between the training model and the SMT-extracted results in the series. Third, the training model is adjusted to serve as an adapted initialization for the segmentation of each image in the series. Finally, based on the adapted initializations, a localized region-based level set algorithm is applied to segment both epicardial and endocardial boundaries of the right ventricle from the whole series. Experimental results from real subject data validated the performance of the proposed framework in segmenting the right ventricle from echocardiography. The mean Dice scores for the epicardial and endocardial boundaries are 89.1%±2.3% and 83.6%±7.3%, respectively. The automatic segmentation method based on the sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
Image segmentation; ultrasound imaging; sparse matrix transform; level set; right ventricle; cardiac imaging; heart; myocardium; genetic algorithm; functional imaging
This paper presents a novel two-step approach that incorporates fuzzy c-means (FCM) clustering and the gradient vector flow (GVF) snake algorithm for lesion contour segmentation on breast magnetic resonance imaging (BMRI). Manual delineation of the lesions by expert MR radiologists was taken as the reference standard in evaluating the computerized segmentation approach. The proposed algorithm was also compared with the FCM-clustering-based method. On a database of 60 mass-like lesions (22 benign and 38 malignant cases), the proposed method demonstrated good segmentation performance. Morphological and texture features were extracted and used to classify the benign and malignant lesions based on the computerized segmentation contours and the radiologists' delineations, respectively. Features extracted by the computerized characterization method differentiated the lesions with an area under the receiver-operating characteristic curve (AUC) of 0.968, compared with an AUC of 0.914 for features extracted from the radiologists' delineations. The proposed method can assist radiologists in delineating and characterizing BMRI lesions, for example by quantifying morphological and texture features, and can improve the objectivity and efficiency of BMRI interpretation.
Rationale and Objectives:
To convert and optimize our previously developed computerized analysis methods for use with images from full-field digital mammography (FFDM) for breast mass classification in order to aid in the diagnosis of breast cancer.
Materials and Methods:
An institutional review board-approved protocol was obtained, with waiver of consent for retrospective use of mammograms and pathology data. Seven hundred thirty-nine full-field digital mammographic images, containing 287 biopsy-proven breast mass lesions (148 malignant and 139 benign), were retrospectively collected. Lesion margins were delineated by an expert breast radiologist and were used as the truth for lesion-segmentation evaluation. Our computerized image analysis method consisted of several steps: 1) lesions were automatically extracted from the parenchymal background using computerized segmentation methods; 2) a set of image characteristics (mathematical descriptors) was automatically extracted from the image data of the lesions and surrounding tissues; and 3) selected features were merged into an estimate of the probability of malignancy using a Bayesian artificial neural network classifier. Performance was evaluated at various stages of the conversion using receiver operating characteristic (ROC) analysis.
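ROC analysis summarises classifier performance by the area under the ROC curve (AUC), which equals the probability that a randomly chosen malignant case receives a higher classifier output than a randomly chosen benign one. A minimal sketch via the Mann-Whitney statistic (illustrative only; not the ROC software used in the study):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (positive, negative) pairs in which the positive case
    scores higher, with ties counted as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 corresponds to chance-level separation and 1.0 to perfect separation of malignant from benign cases.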
An AUC value of 0.81 was obtained in the task of distinguishing between malignant and benign mass lesions in a round-robin (by case) evaluation on the entire FFDM dataset. We failed to show a statistically significant difference (P value = 0.83) compared with the results of our previous study, in which the computerized classification was performed on digitized screen-film mammograms (SFM).
Our computerized analysis methods developed on digitized screen-film mammography can be converted for use with FFDM. Results show that the computerized analysis methods for the diagnosis of breast mass lesions on FFDM are promising, and can potentially be used to aid clinicians in the diagnostic interpretation of FFDM.
Computer-aided diagnosis; Full-field digital mammography; Breast mass classification
For quantitative analysis of histopathological images, such as the lymphoma grading systems, quantification of features is usually carried out on single cells before categorizing them by classification algorithms. To this end, we propose an integrated framework consisting of a novel supervised cell-image segmentation algorithm and a new touching-cell splitting method.
For the segmentation part, we separate the cell regions from the other areas by classifying each image pixel into either the cell or the extra-cellular category. Instead of using pixel color intensities, the color-texture extracted in the local neighborhood of each pixel is used as the input to our classification algorithm. The color-texture at each pixel is extracted by a local Fourier transform (LFT) in a new color space, the most discriminant color space (MDC). The MDC color space is optimized as a linear combination of the original RGB color space so that the LFT texture features extracted in it achieve maximal discrimination in terms of classification (segmentation) performance. To speed up the texture feature extraction process, we develop an efficient LFT extraction algorithm based on image shifting and integral images.
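The speed-up mentioned above rests on integral images (summed-area tables), which reduce any rectangular window sum to four table lookups. A minimal sketch of that idea (illustrative only, not the authors' implementation):

```python
def integral_image(img):
    """Summed-area table for a 2-D list: ii[y][x] holds the sum of img
    over rows < y and columns < x. The extra zero row/column makes every
    window sum a four-term lookup."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def window_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) using the integral image."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
```

Building the table costs one pass over the image; every subsequent local-neighborhood sum, regardless of window size, is then constant time, which is what makes per-pixel texture extraction tractable.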
For the splitting part, given a connected component of the segmentation map, we first determine whether it is a touching-cell clump or a single non-touching cell. This decision is based mainly on the distance between the most likely radial-symmetry center and the geometrical center of the connected component. The boundaries of touching-cell clumps are smoothed with a Fourier shape descriptor before an iterative splitting algorithm based on concave points and radial symmetry is applied.
To test the validity, effectiveness and efficiency of the framework, it is applied to follicular lymphoma pathological images, which exhibit complex background and extracellular texture with non-uniform illumination condition. For comparison purposes, the results of the proposed segmentation algorithm are evaluated against the outputs of Superpixel, Graph-Cut, Mean-shift, and two state-of-the-art pathological image segmentation methods using ground-truth that was established by manual segmentation of cells in the original images. Our segmentation algorithm achieves better results than the other compared methods. The results of splitting are evaluated in terms of under-splitting, over-splitting, and encroachment errors. By summing up the three types of errors, we achieve a total error rate of 5.25% per image.
Histopathological image segmentation; touching-cell splitting; supervised learning; color-texture feature extraction; local Fourier transform; discriminant analysis; radial-symmetry point; follicular lymphoma
We introduce a novel computational framework to enable automated identification of texture and shape features of lesions on 18F-FDG-PET images through a graph-based image segmentation method. The proposed framework predicts future morphological changes of lesions with high accuracy. The presented methodology has several benefits over conventional qualitative and semi-quantitative methods, owing to its fully quantitative nature and high accuracy in each of the (i) detection, (ii) segmentation, and (iii) feature extraction steps. To evaluate the proposed computational framework, thirty patients received two 18F-FDG-PET scans (60 scans in total) at two different time points. Metastatic papillary renal cell carcinoma, cerebellar hemangioblastoma, non-small cell lung cancer, neurofibroma, lymphomatoid granulomatosis, lung neoplasm, neuroendocrine tumor, soft tissue thoracic mass, nonnecrotizing granulomatous inflammation, renal cell carcinoma with papillary and cystic features, diffuse large B-cell lymphoma, metastatic alveolar soft part sarcoma, and small cell lung cancer were included in this analysis. The radiotracer accumulation in the patients' scans was automatically detected and segmented by the proposed segmentation algorithm. The delineated regions were used to extract shape and textural features with the proposed adaptive feature extraction framework, as well as standardized uptake values (SUV) of the uptake regions, to conduct a broad quantitative analysis. Evaluation of the segmentation results indicates that the proposed segmentation algorithm has a mean Dice similarity coefficient of 85.75±1.75%. We found that 28 of 68 extracted imaging features correlated well with SUVmax (p<0.05), and some of the textural features (such as entropy and maximum probability) were superior to a single intensity feature such as SUVmax in predicting morphological changes of radiotracer uptake regions longitudinally.
We also found that integrating textural features with SUV measurements significantly improves the prediction accuracy of morphological changes (Spearman correlation coefficient = 0.8715, p<2e-16).
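Entropy and maximum probability are typically defined on a grey-level co-occurrence matrix (GLCM); a minimal single-offset sketch of the two features (assuming the standard GLCM definitions; the paper's exact feature set and offsets are not reproduced):

```python
import math
from collections import Counter

def glcm_features(img, dy=0, dx=1):
    """Entropy and maximum probability of the grey-level co-occurrence
    matrix built from a single pixel offset (dy, dx) over a 2-D list of
    quantized intensities."""
    h, w = len(img), len(img[0])
    pairs = Counter()
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                pairs[(img[y][x], img[y2][x2])] += 1
    total = sum(pairs.values())
    probs = [n / total for n in pairs.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy, max(probs)
```

Low entropy with a high maximum probability indicates a uniform uptake texture, while high entropy indicates heterogeneous uptake, which is the kind of signal these features capture longitudinally.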
The optic nerve is a sensitive central nervous system structure, which plays a critical role in many devastating pathological conditions. Several methods have been proposed in recent years to segment the optic nerve automatically, but progress toward full automation has been limited. Multi-atlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. Herein we evaluate a framework for robust and fully automated segmentation of the optic nerves, eye globes, and muscles. We employ a robust registration procedure to obtain accurate registrations despite variable voxel resolution and image field-of-view. We demonstrate the efficacy of an optimal combination of SyN registration and a recently proposed label fusion algorithm (Non-local Spatial STAPLE) that accounts for small-scale errors in registration correspondence. On a dataset containing 30 highly varying computed tomography (CT) images of the human brain, the optimal registration and label fusion pipeline resulted in a median Dice similarity coefficient of 0.77, a symmetric mean surface distance error of 0.55 mm, and a symmetric Hausdorff distance error of 3.33 mm for the optic nerves. We further demonstrate the robustness of the optimal algorithm by segmenting the optic nerve structure in 316 CT scans obtained from 182 subjects in a thyroid eye disease (TED) patient population.
Multi-Atlas; Statistical Label Fusion; Optic Nerve; Segmentation; Computed Tomography
The introduction of enhanced depth imaging optical coherence tomography (EDI-OCT) has made in vivo cross-sectional imaging of the choroid, similar to that of the retina, possible with standard commercially available spectral domain (SD) OCT machines. A texture-based algorithm is introduced in this paper for fully automatic segmentation of choroidal images obtained from the EDI system of a Heidelberg 3D OCT Spectralis. Dynamic programming is used to determine the location of the retinal pigment epithelium (RPE). Bruch's membrane (BM), the blood-retina barrier that separates the RPE cells of the retina from the choroid, is then segmented by searching for the pixels with the largest gradient value below the RPE. Furthermore, a novel method is proposed to segment the choroid-sclera interface (CSI), which uses wavelet-based features to construct a Gaussian mixture model (GMM). The model is then used in a graph cut for segmentation of the choroidal boundary. The proposed algorithm is tested on 100 EDI OCTs and compared with manual segmentation. The results show an unsigned error of 2.48 ± 0.32 pixels for BM extraction and 9.79 ± 3.29 pixels for choroid detection, a significant improvement over approaches such as k-means and graph cut methods.
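The dynamic-programming step for boundary detection can be viewed as finding a minimum-cost left-to-right path through a per-pixel cost image (e.g. a negated gradient), with a smoothness constraint on how far the boundary may jump between columns. A minimal illustration of that idea (the paper's actual cost terms and constraints differ):

```python
def dp_boundary(cost, max_jump=1):
    """Pick one row per column of a 2-D cost image so the summed cost is
    minimal, allowing the chosen row to move by at most `max_jump`
    between adjacent columns. Returns the row index per column."""
    rows, cols = len(cost), len(cost[0])
    acc = [[cost[r][0]] + [0.0] * (cols - 1) for r in range(rows)]
    back = [[0] * cols for _ in range(rows)]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows - 1, r + max_jump)
            prev = min(range(lo, hi + 1), key=lambda k: acc[k][c - 1])
            acc[r][c] = cost[r][c] + acc[prev][c - 1]
            back[r][c] = prev
    # backtrack from the cheapest endpoint in the last column
    r = min(range(rows), key=lambda k: acc[k][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

With the cost set to the negated vertical gradient, the recovered path hugs the brightest edge while the jump constraint keeps the layer boundary smooth.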
In previous research, a watershed-based algorithm was shown to be useful for automatic lesion segmentation in dermoscopy images, and was tested on a set of 100 benign and malignant melanoma images with the average of three sets of dermatologist-drawn borders used as the ground truth, resulting in an overall error of 15.98%. In this study, to reduce the border detection errors, a neural network classifier was utilized to improve the first-pass watershed segmentation; a novel “Edge Object Value (EOV) Threshold” method was used to remove large light blobs near the lesion boundary; and a noise removal procedure was applied to reduce the peninsula-shaped false-positive areas. As a result, an overall error of 11.09% was achieved.
Malignant Melanoma; Watershed; Image Processing; Segmentation; Neural Network
The morphological properties of axons, such as their branching patterns and oriented structures, are of great interest to biologists studying the synaptic connectivity of neurons. In these studies, researchers use triple immunofluorescent confocal microscopy to record morphological changes of neuronal processes. Three-dimensional (3D) microscopy image analysis is then required to extract morphological features of the neuronal structures. In this article, we propose a highly automated 3D centerline extraction tool to assist in this task. The most difficult part of this task is that some axons overlap such that the boundaries distinguishing them are barely visible. Our approach combines a 3D dynamic programming (DP) technique with a marker-controlled watershed algorithm to solve this problem. The approach tracks and updates multiple axons simultaneously along their navigation directions. The experimental results show that the proposed method can rapidly and accurately extract multiple axon centerlines and can handle complicated axon structures such as cross-over sections and overlapping objects.
This paper describes a novel method to automatically segment the human brainstem into midbrain and pons, called LABS: Landmark-based Automated Brainstem Segmentation. LABS processes high-resolution structural magnetic resonance images (MRIs) according to a revised landmark-based approach integrated with a thresholding method, without manual interaction.
This method was first tested on morphological T1-weighted MRIs of 30 healthy subjects. Its reliability was further confirmed by including neurological patients (with Alzheimer's disease) from the ADNI repository, in whom volumetric loss within the brainstem had previously been described. Segmentation accuracies were evaluated against expert-drawn manual delineations. To evaluate the quality of LABS segmentation, we used volumetric, spatial-overlap, and distance-based metrics.
The comparison between the quantitative measurements provided by LABS and the manual segmentations revealed excellent results in healthy controls for both the midbrain (DICE measures higher than 0.9; volume ratio around 1; Hausdorff distance around 3) and the pons (DICE measures around 0.93; volume ratio ranging from 1.024 to 1.05; Hausdorff distance around 2). Similar performance was observed for AD patients for segmentation of the pons (DICE measures higher than 0.93; volume ratio ranging from 0.97 to 0.98; Hausdorff distance ranging from 1.07 to 1.33), while LABS performed worse for the midbrain (DICE measures ranging from 0.86 to 0.88; volume ratio around 0.95; Hausdorff distance ranging from 1.71 to 2.15).
Our study represents the first attempt to validate a new fully automated method for in vivo segmentation of two anatomically complex brainstem subregions. We believe that our method may represent a useful tool for future applications in clinical practice.