1.  Automatic segmentation of right ventricular ultrasound images using sparse matrix transform and a level set 
Physics in medicine and biology  2013;58(21):7609-7624.
An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
doi:10.1088/0031-9155/58/21/7609
PMCID: PMC3925785  PMID: 24107618
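Entry 1 reports its accuracy with the Dice coefficient and the Hausdorff distance between manual and automatic boundaries. A minimal sketch of both metrics on binary masks is shown below (NumPy/SciPy); the `pixel_spacing_mm` parameter is a hypothetical conversion factor, not something specified in the abstract.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    """Overall overlap between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def boundary_points(mask):
    """Pixel coordinates of the mask's one-pixel-thick boundary."""
    mask = mask.astype(bool)
    return np.argwhere(mask & ~binary_erosion(mask))

def hausdorff_distance_mm(mask_a, mask_b, pixel_spacing_mm=1.0):
    """Symmetric Hausdorff distance between the two boundaries, in millimetres."""
    pa, pb = boundary_points(mask_a), boundary_points(mask_b)
    d = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
    return d * pixel_spacing_mm
```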
2.  Mass segmentation using a combined method for cancer detection 
BMC Systems Biology  2011;5(Suppl 3):S6.
Background
Breast cancer is one of the leading causes of cancer death for women all over the world, and mammography is regarded as one of the main tools for early detection of breast cancer. To detect breast cancer, computer-aided technology has been introduced. In computer-aided cancer detection, the detection and segmentation of masses are very important. The shape of a mass can be used as one of the factors to determine whether the mass is malignant or benign. However, many of the current methods are semi-automatic. In this paper, we investigate a fully automatic segmentation method.
Results
In this paper, a new mass segmentation algorithm is proposed. In the proposed algorithm, a fully automatic marker-controlled watershed transform is used to segment the mass region roughly, and then a level set is used to refine the segmentation. To address the over-segmentation caused by the watershed transform, we also investigated different noise reduction techniques. Images from DDSM were used in the experiments, and the results show that the new algorithm can improve the accuracy of mass segmentation.
Conclusions
The new algorithm combines the advantages of both methods. The combination of watershed-based segmentation and the level set method can improve the efficiency of the segmentation. In addition, the introduction of noise reduction techniques can reduce over-segmentation.
doi:10.1186/1752-0509-5-S3-S6
PMCID: PMC3287574  PMID: 22784625
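The marker-controlled watershed followed by noise reduction described in entry 2 is not given in code in the abstract; the sketch below is a generic scikit-image illustration of the idea. Gaussian smoothing stands in for the noise-reduction step, and `sigma`/`min_distance` are arbitrary assumptions, not the authors' settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def rough_mass_segmentation(image, sigma=2.0, min_distance=20):
    # Noise reduction to limit watershed over-segmentation.
    smoothed = gaussian(image, sigma=sigma)
    # Foreground mask and internal markers from distance-transform peaks.
    foreground = smoothed > threshold_otsu(smoothed)
    distance = ndi.distance_transform_edt(foreground)
    peaks = peak_local_max(distance, labels=foreground, min_distance=min_distance)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Marker-controlled watershed on the inverted distance map.
    return watershed(-distance, markers, mask=foreground)
```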
3.  Snake model based lymphoma segmentation for sequential CT images 
The measurement of the size of lesions in follow-up CT examinations of cancer patients is important to evaluate the success of treatment. This paper presents an automatic algorithm for identifying and segmenting lymph nodes in CT images across longitudinal time points. First, a two-step image registration method is proposed to locate the lymph nodes, including coarse registration based on body region detection and fine registration based on a double-template matching algorithm. Then, to make the initial segmentation approximate the boundaries of the lymph nodes, the initial image registration result is refined with intensity and edge information. Finally, a snake model is used to evolve the refined initial curve and obtain segmentation results. Our algorithm was tested on 26 lymph nodes at multiple time points from 14 patients. The image at the earlier time point was used as the baseline for evaluating the follow-up image, resulting in 76 total test cases. Of the 76 test cases, detection was successful in all 76 (100%), and the clinical assessment according to the Response Evaluation Criteria in Solid Tumors (RECIST) was correct in 38 of 40 (95%). The quantitative evaluation based on several metrics, such as average Hausdorff distance, indicates that our algorithm produces good results. In addition, the proposed algorithm is fast, with an average computing time of 2.58 s. The proposed segmentation algorithm for lymph nodes is fast and can achieve high segmentation accuracy, which may be useful for automating the tracking and evaluation of cancer therapy.
doi:10.1016/j.cmpb.2013.05.019
PMCID: PMC3752285  PMID: 23787027
lymphoma; image segmentation; snake model; image registration; template matching
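Entry 3 relies on a double-template matching algorithm whose details are not given in the abstract; as a point of reference, plain normalized cross-correlation template matching, on which such a coarse-to-fine scheme could be built, can be sketched with scikit-image:

```python
import numpy as np
from skimage.feature import match_template

def locate_template(search_image, template):
    """Return the (row, col) of the best normalized cross-correlation match.

    With pad_input=True the response map has the same shape as the search
    image and its peak corresponds to the centre of the matched template.
    """
    response = match_template(search_image, template, pad_input=True)
    return np.unravel_index(np.argmax(response), response.shape)
```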
4.  Myofibre segmentation in H&E stained adult skeletal muscle images using coherence-enhancing diffusion filtering 
BMC Medical Imaging  2014;14(1):38.
Background
The correct segmentation of myofibres in histological muscle biopsy images is a critical step in the automatic analysis process. Errors occurring as a result of incorrect segmentations have a compounding effect on later morphometric analysis, and as such it is vital that the fibres are correctly segmented. This paper presents a new automatic approach to myofibre segmentation in H&E stained adult skeletal muscle images that is based on Coherence-Enhancing Diffusion filtering.
Methods
The procedure can be broadly divided into four steps: 1) pre-processing of the images to extract only the eosinophilic structures, 2) Coherence-Enhancing Diffusion filtering to enhance the myofibre boundaries whilst smoothing the interior regions, 3) morphological filtering to connect unconnected boundary regions and remove noise, and 4) a marker-controlled watershed transform to split touching fibres.
Results
The method has been tested on a set of adult cases with a total of 2,832 fibres. Evaluation was done in terms of segmentation accuracy and other clinical metrics.
Conclusions
The results show that the proposed approach achieves a segmentation accuracy of 89% which is a significant improvement over existing methods.
doi:10.1186/1471-2342-14-38
PMCID: PMC4274691  PMID: 25352214
Digital pathology; Muscle biopsy; Image segmentation
5.  A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging 
EJNMMI Research  2013;3:55.
Background
Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases.
Methods
We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists’ delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework.
Results
Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to the state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) as an overlap measure and the Hausdorff distance as a shape dissimilarity measure were used. Significant correlations regarding the estimated lesion volumes were obtained both in CT and PET images with respect to the ground truths (R2 = 0.8922, p < 0.01 and R2 = 0.8664, p < 0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreements (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes were changing over time by examining serial PET/CT scans. Evaluation of the registration processes was based on precisely defined anatomical landmark points by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data (all within the resolution limits), respectively. Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentage follow the same trend as the SUV-based evaluation in the longitudinal analysis.
Conclusions
We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate as compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
doi:10.1186/2191-219X-3-55
PMCID: PMC3734217  PMID: 23879987
Quantitative analysis; Pulmonary infections; Small animal models; PET-CT; Image segmentation; H1N1; Tuberculosis
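Entry 5 uses standardized uptake values (SUV) from PET as a complementary severity metric. The conventional body-weight-normalized SUV formula is sketched below; this is a generic definition, not the paper's specific software, and decay correction of the dose is assumed to have been applied already.

```python
def suv_body_weight(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """SUV = tissue activity concentration / (injected dose / body weight).

    activity_kbq_per_ml : decay-corrected activity concentration in the ROI (kBq/mL)
    injected_dose_mbq   : injected radiotracer dose (MBq)
    body_weight_kg      : subject body weight (kg)
    """
    # 1 MBq = 1000 kBq; 1 kg of tissue is taken as ~1000 mL (unit density).
    dose_kbq = injected_dose_mbq * 1000.0
    weight_ml = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (dose_kbq / weight_ml)
```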
6.  Automatic Segmentation of the Ribs, the Vertebral Column, and the Spinal Canal in Pediatric Computed Tomographic Images 
Journal of Digital Imaging  2009;23(3):301-322.
We propose methods to perform automatic identification of the rib structure, the vertebral column, and the spinal canal in computed tomographic (CT) images of pediatric patients. The segmentation processes for the rib structure and the vertebral column are initiated using multilevel thresholding and the results are refined using morphological image processing techniques with features based on radiological and anatomical prior knowledge. The Hough transform for the detection of circles is applied to a cropped edge map that includes the thoracic vertebral structure. The centers of the detected circles are used to derive the information required for the opening-by-reconstruction algorithm used to segment the spinal canal. The methods were tested on 39 CT exams of 13 patients; the results of segmentation of the vertebral column and the spinal canal were assessed quantitatively and qualitatively by comparing with segmentation performed independently by a radiologist. Using 13 CT exams of six patients, including a total of 458 slices with the vertebra from different sections of the vertebral column, the average Hausdorff distance was determined to be 3.2 mm with a standard deviation (SD) of 2.4 mm; the average mean distance to the closest point (MDCP) was 0.7 mm with SD = 0.6 mm. Quantitative analysis was also performed for the segmented spinal canal with three CT exams of three patients, including 21 slices with the spinal canal from different sections of the vertebral column; the average Hausdorff distance was 1.6 mm with SD = 0.5 mm, and the average MDCP was 0.6 mm with SD = 0.1 mm.
doi:10.1007/s10278-009-9176-x
PMCID: PMC3046651  PMID: 19219504
Computed tomographic (CT) images; vertebral column; rib structure; spinal canal; morphological image processing; opening-by-reconstruction; image segmentation; Hough transform; convex hull
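Entry 6 applies the Hough transform for circles to an edge map that includes the thoracic vertebral structure. A minimal scikit-image sketch of circle detection on a binary edge map is shown below; the radius range and number of peaks are arbitrary assumptions.

```python
import numpy as np
from skimage.transform import hough_circle, hough_circle_peaks

def detect_circle_centres(edge_map, radii=np.arange(8, 30)):
    """Return (row, col) centres of the strongest circles in a binary edge map."""
    accumulator = hough_circle(edge_map, radii)
    # hough_circle_peaks returns (accums, cx, cy, radii); cx is column, cy is row.
    _, cols, rows, _ = hough_circle_peaks(accumulator, radii, total_num_peaks=3)
    return list(zip(rows, cols))
```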
7.  Computerized nipple identification for multiple image analysis in computer-aided diagnosis 
Medical physics  2004;31(10):2871-2882.
Correlation of information from multiple-view mammograms (e.g., MLO and CC views, bilateral views, or current and prior mammograms) can improve the performance of breast cancer diagnosis by radiologists or by computer. The nipple is a reliable and stable landmark on mammograms for the registration of multiple mammograms. However, accurate identification of nipple location on mammograms is challenging because of the variations in image quality and in the nipple projections, resulting in some nipples being nearly invisible on the mammograms. In this study, we developed a computerized method to automatically identify the nipple location on digitized mammograms. First, the breast boundary was obtained using a gradient-based boundary tracking algorithm, and then the gray level profiles along the inside and outside of the boundary were identified. A geometric convergence analysis was used to limit the nipple search to a region of the breast boundary. A two-stage nipple detection method was developed to identify the nipple location using the gray level information around the nipple, the geometric characteristics of nipple shapes, and the texture features of glandular tissue or ducts which converge toward the nipple. At the first stage, a rule-based method was designed to identify the nipple location by detecting significant changes of intensity along the gray level profiles inside and outside the breast boundary and the changes in the boundary direction. At the second stage, a texture orientation-field analysis was developed to estimate the nipple location based on the convergence of the texture pattern of glandular tissue or ducts towards the nipple. The nipple location was finally determined from the detected nipple candidates by a rule-based confidence analysis. In this study, 377 and 367 randomly selected digitized mammograms were used for training and testing the nipple detection algorithm, respectively. Two experienced radiologists identified the nipple locations which were used as the gold standard. In the training data set, 301 nipples were positively identified and were referred to as visible nipples. Seventy six nipples could not be positively identified and were referred to as invisible nipples. The radiologists provided their estimation of the nipple locations in the latter group for comparison with the computer estimates. The computerized method could detect 89.37% (269/301) of the visible nipples and 69.74% (53/76) of the invisible nipples within 1 cm of the gold standard. In the test data set, 298 and 69 of the nipples were classified as visible and invisible, respectively. 92.28% (275/298) of the visible nipples and 53.62% (37/69) of the invisible nipples were identified within 1 cm of the gold standard. The results demonstrate that the nipple locations on digitized mammograms can be accurately detected if they are visible and can be reasonably estimated if they are invisible. Automated nipple detection will be an important step towards multiple image analysis for CAD.
PMCID: PMC2898150  PMID: 15543797
computer-aided detection; mammography; nipple detection; texture orientation field analysis
8.  Automated computer quantification of breast cancer in small-animal models using PET-guided MR image co-segmentation 
EJNMMI Research  2013;3:49.
Background
Care providers use complementary information from multiple imaging modalities to identify and characterize metastatic tumors in early stages and perform surveillance for cancer recurrence. These tasks require volume quantification of tumor measurements using computed tomography (CT) or magnetic resonance imaging (MRI) and functional characterization through positron emission tomography (PET) imaging. In vivo volume quantification is conducted through image segmentation, which may require both anatomical and functional images available for precise tumor boundary delineation. Although integrating multiple image modalities into the segmentation process may improve the delineation accuracy and efficiency, due to variable visibility on image modalities, complex shape of metastatic lesions, and diverse visual features in functional and anatomical images, a precise and efficient segmentation of metastatic breast cancer remains a challenging goal even for advanced image segmentation methods. In response to these challenges, we present here a computer-assisted volume quantification method for PET/MRI dual modality images using PET-guided MRI co-segmentation. Our aims in this study were (1) to determine anatomical tumor volumes automatically from MRI accurately and efficiently, (2) to evaluate and compare the accuracy of the proposed method with different radiotracers (18F-Z HER2-Affibody and 18F-flourodeoxyglucose (18F-FDG)), and (3) to confirm the proposed method’s determinations from PET/MRI scans in comparison with PET/CT scans.
Methods
After the Institutional Administrative Panel on Laboratory Animal Care approval was obtained, 30 female nude mice were used to construct a small-animal breast cancer model. All mice were injected with human breast cancer cells and HER2-overexpressing MDA-MB-231HER2-Luc cells intravenously. Eight of them were selected for imaging studies, and selected mice were imaged with MRI, CT, and 18F-FDG-PET at weeks 9 and 10 and then imaged with 18F-Z HER2-Affibody-PET 2 days after the scheduled structural imaging (MRI and CT). After CT and MR images were co-registered with corresponding PET images, all images were quantitatively analyzed by the proposed segmentation technique.
Automatically determined anatomical tumor volumes were compared to radiologist-derived reference truths. Observer agreements were presented through Bland-Altman and linear regression analyses. Segmentation evaluations were conducted using true-positive (TP) and false-positive (FP) volume fractions of delineated tissue samples, in compliance with state-of-the-art evaluation techniques for image segmentation. Moreover, the PET images, obtained using different radiotracers, were examined and compared using the complex wavelet-based structural similarity index (CWSSI).
Results
PET/MR dual modality imaging using the 18F-Z HER2-Affibody imaging agent provided diagnostic image quality in all mice with excellent tumor delineations by the proposed method. The 18F-FDG radiotracer did not show accurate identification of the tumor regions. Structural similarity index (CWSSI) between PET images using 18F-FDG and 18F-Z HER2-Affibody agents was found to be 0.7838. MR showed higher diagnostic image quality when compared to CT because of its better soft tissue contrast. Significant correlations regarding the anatomical tumor volumes were obtained between both PET-guided MRI co-segmentation and reference truth (R2=0.92, p<0.001 for PET/MR, and R2=0.84, p<0.001, for PET/CT). TP and FP volume fractions using the automated co-segmentation method in PET/MR and PET/CT were found to be (TP 97.3%, FP 9.8%) and (TP 92.3%, FP 17.2%), respectively.
Conclusions
The proposed PET-guided MR image co-segmentation algorithm provided an automated and efficient way of assessing anatomical tumor volumes and their spatial extent. We showed that although the 18F-Z HER2-Affibody radiotracer in PET imaging is often used for characterization of tumors rather than detection, sensitivity and specificity of the localized radiotracer in the tumor region were informative enough; therefore, roughly determined tumor regions from PET images guided the delineation process well in the anatomical image domain for extracting accurate tumor volume information. Furthermore, the use of 18F-FDG radiotracer was not as successful as the 18F-Z HER2-Affibody in guiding the delineation process due to false-positive uptake regions in the neighborhood of tumor regions; hence, the accuracy of the fully automated segmentation method changed dramatically. Last, we qualitatively showed that MRI yields superior identification of tumor boundaries when compared to conventional CT imaging.
doi:10.1186/2191-219X-3-49
PMCID: PMC3708745  PMID: 23829944
Image segmentation; Computer quantification; FDG-PET; MRI/PET; Breast cancer; Small-animal models; Co-segmentation; Volume quantification; Random walk
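Entry 8 evaluates delineations with true-positive (TP) and false-positive (FP) volume fractions. Under one common convention, with both fractions normalized by the reference volume (which may differ from the authors' exact definition), a minimal sketch is:

```python
import numpy as np

def tp_fp_volume_fractions(segmentation, reference):
    """TP and FP volume fractions of a binary segmentation against a reference mask."""
    seg, ref = segmentation.astype(bool), reference.astype(bool)
    ref_volume = ref.sum()
    tp_fraction = np.logical_and(seg, ref).sum() / ref_volume   # overlap / reference
    fp_fraction = np.logical_and(seg, ~ref).sum() / ref_volume  # spill-over / reference
    return tp_fraction, fp_fraction
```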
9.  An ellipse-fitting based method for efficient registration of breast masses on two mammographic views 
Medical physics  2008;35(2):487-494.
When reading mammograms, radiologists routinely search for and compare suspicious breast lesions identified on two corresponding craniocaudal (CC) and mediolateral oblique (MLO) views. Automatically identifying and matching the same true-positive breast lesions depicted on two views is an important step for developing successful multiview-based computer-aided detection (CAD) schemes. The authors developed a method to automatically register breast areas and detect matching strips of interest used to identify the matched mass regions depicted on CC and MLO views. The method uses an ellipse-based model to fit the breast boundary contour (skin line) and set a local Cartesian coordinate system for each view. One intersection point between the major/minor axis and the fitted ellipse perimeter, which passes through the breast boundary, is selected as the origin, and the major and minor axes of the ellipse are used as the two axes of the Cartesian coordinate system. When a mass is identified on one view, the scheme computes its position in the local coordinate system. Then, the distance is mapped onto the local coordinate system of the other view. At the end of the mapped distance, a registered centerline of the matching strip is created. The authors established an image database that includes 200 test examinations, each depicting one verified mass visible on the two views. They tested whether the registered centerline identified on another view can be used to locate the matched mass region. The experiments show that the average distance between the mass region centers and the registered centerlines was ±8.3 mm, and in 91% of testing cases the registered centerline actually passes through the matched mass regions. A matching strip width of 47 mm was required to achieve 100% sensitivity for the test database. The results demonstrate the feasibility of the proposed method to automatically identify masses depicted on CC and MLO views, which may improve future development of multiview-based CAD schemes.
doi:10.1118/1.2828188
PMCID: PMC2288654  PMID: 18383669
ellipse-fitting; computer-aided detection; mammography; breast mass; image registration; region matching
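Entry 9's registration hinges on fitting an ellipse to the breast skin line. A least-squares ellipse fit can be sketched with scikit-image's `EllipseModel`; the boundary points are assumed to come from a separate skin-line detection step not shown here.

```python
import numpy as np
from skimage.measure import EllipseModel

def fit_breast_boundary_ellipse(boundary_xy):
    """Least-squares ellipse fit to an (N, 2) array of boundary (x, y) points."""
    model = EllipseModel()
    if not model.estimate(np.asarray(boundary_xy, dtype=float)):
        raise ValueError("ellipse fit did not converge")
    xc, yc, a, b, theta = model.params  # centre, semi-axes, orientation (radians)
    return xc, yc, a, b, theta
```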
10.  Automatic Lung Segmentation in CT Images with Accurate Handling of the Hilar Region 
Journal of Digital Imaging  2009;24(1):11-27.
A fully automated and three-dimensional (3D) segmentation method for the identification of the pulmonary parenchyma in thorax X-ray computed tomography (CT) datasets is proposed. It is meant to be used as a pre-processing step in the computer-assisted detection (CAD) system for malignant lung nodule detection that is being developed by the Medical Applications in a Grid Infrastructure Connection (MAGIC-5) Project. In this new approach, the segmentation of the external airways (trachea and bronchi) is obtained by 3D region growing with wavefront simulation and suitable stop conditions, thus allowing an accurate handling of the hilar region, which is notoriously difficult to segment. Particular attention was also devoted to checking and solving the problem of the apparent ‘fusion’ between the lungs, caused by partial-volume effects, while 3D morphology operations ensure the accurate inclusion of all the nodules (internal, pleural, and vascular) in the segmented volume. The new algorithm was initially developed and tested on a dataset of 130 CT scans from the Italung-CT trial, and was then applied to the ANODE09-competition images (55 scans) and to the LIDC database (84 scans), giving very satisfactory results. In particular, the lung contour was adequately located in 96% of the CT scans, with incorrect segmentation of the external airways in the remaining cases. Segmentation metrics were calculated that quantitatively express the consistency between automatic and manual segmentations: the mean overlap degree of the segmentation masks is 0.96 ± 0.02, and the mean and the maximum distance between the mask borders (averaged on the whole dataset) are 0.74 ± 0.05 and 4.5 ± 1.5, respectively, which confirms that the automatic segmentations quite correctly reproduce the borders traced by the radiologist. Moreover, no tissue containing internal and pleural nodules was removed in the segmentation process, so that this method proved to be fit for use in the framework of a CAD system. Finally, in the comparison with a two-dimensional segmentation procedure, inter-slice smoothness was calculated, showing that the masks created by the 3D algorithm are significantly smoother than those calculated by the 2D-only procedure.
doi:10.1007/s10278-009-9229-1
PMCID: PMC3046791  PMID: 19826872
CAD; image segmentation; lung nodules; region growing; grid; 3D imaging; biomedical image analysis
11.  A Heuristic Approach to Automated Nipple Detection in Digital Mammograms 
Journal of Digital Imaging  2013;26(5):932-940.
In this paper, a heuristic approach to automated nipple detection in digital mammograms is presented. A multithresholding algorithm is first applied to segment the mammogram and separate the breast region from the background region. Next, the problem is considered separately for craniocaudal (CC) and mediolateral-oblique (MLO) views. In the simplified algorithm, a search is performed on the segmented image along a band around the centroid and in a direction perpendicular to the pectoral muscle edge in the MLO view image. The direction defaults to the horizontal (perpendicular to the thoracic wall) in case of CC view images. The farthest pixel from the base found in this direction can be approximated as the nipple point. Further, an improved version of the simplified algorithm is proposed which can be considered as a subclass of the Branch and Bound algorithms. The mean Euclidean distance between the ground truth and calculated nipple position for 500 mammograms from the Digital Database for Screening Mammography (DDSM) database was found to be 11.03 mm and the average total time taken by the algorithm was 0.79 s. Results of the proposed algorithm demonstrate that even simple heuristics can achieve the desired result in nipple detection thus reducing the time and computational complexity.
doi:10.1007/s10278-013-9575-x
PMCID: PMC3782596  PMID: 23423610
Breast cancer; Nipple detection; Mammography; DDSM database; Multithresholding
12.  Detection and Segmentation of Erythrocytes in Blood Smear Images Using a Line Operator and Watershed Algorithm 
Most erythrocyte-related diseases are detectable by analysis of hematology images. The first step of this analysis requires segmentation and detection of blood cells. In this study, a novel method using a line operator and the watershed algorithm is presented for erythrocyte detection and segmentation in blood smear images; it also reduces the over-segmentation of the watershed algorithm and is useful for segmenting different types of partially overlapping blood cells. The method uses the gray-scale structure of blood cells, obtained by applying the Euclidean distance transform to binary images. Under this transform, the gray intensity of cell images gradually decreases from the center of each cell to its margin. To detect this intensity variation structure, a line operator measuring gray-level variations along several directional line segments is applied. The line segments with maximum and minimum gray-level variations form a characteristic pattern that can be used to detect the central regions of cells. The intersection of these regions with the markers obtained by computing local maxima in the watershed algorithm was used for cell-center detection, as well as for reducing the over-segmentation of the watershed algorithm. The method produced 1,300 markers in the segmentation of 1,274 erythrocytes available in 25 blood smear images. The accuracy and sensitivity of the proposed method are 95.9% and 97.99%, respectively. The results show the proposed method's capability for detecting erythrocytes in blood smear images.
PMCID: PMC3959006  PMID: 24672764
Blood smear images; line operator; watershed algorithm
13.  Automatic Initialization Active Contour Model for the Segmentation of the Chest Wall on Chest CT 
Objectives
Snake or active contours are extensively used in computer vision and medical image processing applications, and particularly to locate object boundaries. Yet problems associated with initialization and the poor convergence to boundary concavities have limited their utility. The new method of external force for active contours, which is called gradient vector flow (GVF), was recently introduced to address the problems.
Methods
This paper presents an automatic initialization value of the snake algorithm for the segmentation of the chest wall. Snake algorithms are required to have manually drawn initial contours, so this needs automatic initialization. In this paper, our proposed algorithm is the mean shape for automatic initialization in the GVF.
Results
The GVF is calculated as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the medical images. Finally, the mean shape coordinates are used to automatically initialize the points of the snake. The proposed algorithm is composed of three phases: the landmark phase, the Procrustes shape distance metric phase, and the shape alignment phase. The experiments showed the good performance of our algorithm in segmenting the chest wall in chest computed tomography.
Conclusions
An error analysis for the active contours results on simulated test medical images is also presented. We showed that GVF has a large capture range and it is able to move a snake into boundary concavities. Therefore, the suggested algorithm is better than the traditional potential forces of image segmentation.
doi:10.4258/hir.2010.16.1.36
PMCID: PMC3089843  PMID: 21818422
Active Contour Model; Automatic Initialization; Mean Shape; Gradient Vector Flow; Computed Tomography
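Entry 13 builds on gradient vector flow (GVF), defined as a diffusion of the gradient vectors of an edge map. A compact NumPy sketch of the standard iterative GVF update is shown below; the regularization weight `mu`, time step, and iteration count are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import laplace

def gradient_vector_flow(edge_map, mu=0.2, n_iter=200, dt=0.5):
    """Iteratively diffuse the edge map's gradient into a GVF field (u, v).

    Solves u_t = mu * Laplacian(u) - |grad f|^2 (u - f_x), and similarly for v.
    """
    fy, fx = np.gradient(edge_map.astype(float))  # axis 0 = rows (y), axis 1 = cols (x)
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2
    for _ in range(n_iter):
        u += dt * (mu * laplace(u) - mag2 * (u - fx))
        v += dt * (mu * laplace(v) - mag2 * (v - fy))
    return u, v
```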
14.  Volumetric quantification of fibrous caps using intravascular optical coherence tomography 
Biomedical Optics Express  2012;3(6):1413-1426.
The rupture of thin-cap fibroatheroma accounts for most acute coronary events. Optical Coherence Tomography (OCT) allows quantification of fibrous cap (FC) thickness in vivo. Conventional manual analysis, by visually determining the thinnest part of the FC, is subject to inter-observer variability and does not capture the 3-D morphology of the FC. We propose and validate a computer-aided method that allows volumetric analysis of the FC. The radial FC boundary is semi-automatically segmented using a dynamic programming algorithm. The thickness at every point of the FC boundary, along with the 3-D morphology of the FC, can be quantified. The method was validated against three experienced OCT image analysts in 14 lipid-rich lesions. The proposed method may advance our understanding of the mechanisms behind plaque rupture and improve disease management.
doi:10.1364/BOE.3.001413
PMCID: PMC3370980  PMID: 22741086
(100.0100) Image processing; (110.4500) Optical coherence tomography
15.  Watershed segmentation of dermoscopy images using a watershed technique 
Background/purpose
Automatic lesion segmentation is an important part of computer-based image analysis of pigmented skin lesions. In this research, a watershed algorithm is developed and investigated for adequacy of skin lesion segmentation in dermoscopy images.
Methods
Hair, black border and vignette removal methods are introduced as preprocessing steps. The flooding variant of the watershed segmentation algorithm was implemented with novel features adapted to this domain. An outer bounding box, determined by a difference function derived from horizontal and vertical projection functions, is added to estimate the lesion area, and the lesion area error is reduced by a linear estimation function. As a post-processing step, a second-order B-Spline smoothing method is introduced to smooth the watershed border.
Results
Using the average of three sets of dermatologist-drawn borders as the ground truth, an overall error of 15.98% was obtained using the watershed technique.
Conclusion
The implementation of the flooding variant of the watershed algorithm presented here allows satisfactory automatic segmentation of pigmented skin lesions.
doi:10.1111/j.1600-0846.2010.00445.x
PMCID: PMC3160671  PMID: 20637008
malignant melanoma; watershed; image processing; segmentation
16.  Hippocampal volumetry for lateralization of temporal lobe epilepsy: automated versus manual methods 
NeuroImage  2010;54S1:S218-S226.
The hippocampus has been the primary region of interest in the preoperative imaging investigations of mesial temporal lobe epilepsy (mTLE). Hippocampal imaging and electroencephalographic features may be sufficient in several cases to declare the epileptogenic focus. In particular, hippocampal atrophy, as appreciated on T1-weighted (T1W) magnetic resonance (MR) images, may suggest a mesial temporal sclerosis. Qualitative visual assessment of hippocampal volume, however, is influenced by head position in the magnet and the amount of atrophy in different parts of the hippocampus. An entropy-based segmentation algorithm for subcortical brain structures (LocalInfo) was developed and supplemented by both a new multiple atlas strategy and a free-form deformation step to capture structural variability. Manually segmented T1-weighted magnetic resonance (MR) images of 10 non-epileptic subjects were used as atlases for the proposed automatic segmentation protocol, which was applied to a cohort of 46 mTLE patients. The segmentation and lateralization accuracies of the proposed technique were compared with those of two other available programs, HAMMER and FreeSurfer, in addition to the manual method. The Dice coefficient for the proposed method was 11% (p < 10⁻⁵) and 14% (p < 10⁻⁴) higher in comparison with HAMMER and FreeSurfer, respectively. Mean and Hausdorff distances in the proposed method were also 14% (p < 0.2) and 26% (p < 10⁻³) lower in comparison with HAMMER and 8% (p < 0.8) and 48% (p < 10⁻⁵) lower in comparison with FreeSurfer, respectively. LocalInfo proved to have higher concordance (87%) with the manual segmentation method than either HAMMER (85%) or FreeSurfer (83%). The accuracy of lateralization by volumetry in this study with LocalInfo was 74% compared to 78% with the manual segmentation method. LocalInfo yields a closer approximation to that of manual segmentation and may therefore prove to be more reliable than currently published automatic segmentation algorithms.
doi:10.1016/j.neuroimage.2010.03.066
PMCID: PMC2978802  PMID: 20353827
17.  Dynamic iris biometry: a technique for enhanced identification 
BMC Research Notes  2010;3:182.
Background
The iris as a unique identifier is predicated on the assumption that the iris image does not alter. This does not consider the fact that the iris changes in response to certain external factors including medication, disease, surgery as well as longer term ageing changes. It is also part of a dynamic optical system that alters with light level and focussing distance. A means of distinguishing the features that do not alter over time from those that do is needed. This paper applies iris recognition algorithms to a newly acquired database of 186 iris images from four subjects. These images have greater magnification and detail than iris images in existing databases. Iris segmentation methods are tested on the database. A new technique that enhances segmentation is presented and compared to two existing methods. These are also applied to test the effects of pupil dilation in the identification process.
Findings
Segmentation results from all the images showed that the proposed algorithm accurately detected pupil boundaries for 96.2% of the images, an increase of 88.7% over the most commonly used algorithm. For the images collected, the proposed technique also showed significant improvement in detection of the limbal boundary compared to the detection rates using existing methods. With regard to boundary displacement errors, only slight errors were found with the proposed technique, compared to extreme errors made when existing techniques were applied. As the pupil becomes more dilated, the success of identification becomes increasingly dependent on the decision criterion used.
Conclusions
The enhanced segmentation technique described in this paper performs with greater accuracy than existing methods for the higher quality images collected in this study. Implementation of the proposed segmentation enhancement significantly improves pupil boundary detection and therefore overall iris segmentation. Pupil dilation is an important aspect of iris identification; with increasing dilation, there is a greater risk of identification failure. Choice of decision criterion for identification should be carefully reviewed. It needs to be recognised that differences in the quality of images in different databases may result in variations in the performance of iris recognition algorithms.
doi:10.1186/1756-0500-3-182
PMCID: PMC2909927  PMID: 20594345
18.  A Probabilistic Approach for Breast Boundary Extraction in Mammograms 
The extraction of the breast boundary is crucial for further analysis of the mammogram. Methods to extract the breast boundary can be classified into two categories: methods based on image processing techniques and those based on models. The former use image transformation techniques such as thresholding, morphological operations, and region growing. In the second category, the boundary is extracted using more advanced techniques, such as the active contour model. The problem with thresholding methods is that it is hard to automatically find the optimal threshold value by using histogram information. On the other hand, active contour models require defining a starting point close to the actual boundary to be able to successfully extract the boundary. In this paper, we propose a probabilistic approach to address the aforementioned problems. In our approach, we use local binary patterns to describe the texture around each pixel. In addition, the smoothness of the boundary is handled by using a new probability model. Experimental results show that the proposed method reaches 38% and 50% improvement with respect to the results obtained by the active contour model and threshold-based methods, respectively, and it increases the stability of the boundary extraction process up to 86%.
doi:10.1155/2013/408595
PMCID: PMC3842063  PMID: 24324523
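Entry 18 describes the texture around each pixel with local binary patterns (LBP). A minimal scikit-image sketch of computing an LBP code map and a per-region histogram is shown below; the radius and number of sampling points are arbitrary assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, mask=None, n_points=8, radius=1):
    """Uniform LBP codes and their normalized histogram over an optional mask."""
    codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    values = codes[mask] if mask is not None else codes.ravel()
    # Uniform LBP yields n_points + 2 distinct code values.
    hist, _ = np.histogram(values, bins=np.arange(n_points + 3), density=True)
    return codes, hist
```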
19.  Semi-automatic algorithm for construction of the left ventricular area variation curve over a complete cardiac cycle 
Background
Two-dimensional echocardiography (2D-echo) allows the evaluation of cardiac structures and their movements. A wide range of clinical diagnoses are based on the performance of the left ventricle. The evaluation of myocardial function is typically performed by manual segmentation of the ventricular cavity in a series of dynamic images. This process is laborious and operator dependent. The automatic segmentation of the left ventricle in 4-chamber long-axis images during diastole is troublesome, because of the opening of the mitral valve.
Methods
This work presents a method for segmentation of the left ventricle in dynamic 2D-echo 4-chamber long-axis images over the complete cardiac cycle. The proposed algorithm is based on classic image processing techniques, including time-averaging and wavelet-based denoising, edge enhancement filtering, morphological operations, homotopy modification, and watershed segmentation. The proposed method is semi-automatic, requiring a single user intervention for identification of the position of the mitral valve in the first temporal frame of the video sequence. Image segmentation is performed on a set of dynamic 2D-echo images collected from an examination covering two consecutive cardiac cycles.
Results
The proposed method is demonstrated and evaluated on twelve healthy volunteers. The results are quantitatively evaluated using four different metrics, in a comparison with contours manually segmented by a specialist, and with four alternative methods from the literature. The method's intra- and inter-operator variabilities are also evaluated.
Conclusions
The proposed method allows the automatic construction of the area variation curve of the left ventricle corresponding to a complete cardiac cycle. This may potentially be used for the identification of several clinical parameters, including the area variation fraction. This parameter could potentially be used for evaluating the global systolic function of the left ventricle.
doi:10.1186/1475-925X-9-5
PMCID: PMC3224979  PMID: 20078864
20.  Segmentation of abdomen MR images using kernel graph cuts with shape priors 
Background
Segmentation of abdominal organs in magnetic resonance (MR) images is an important but challenging task in medical image processing. Especially for abdominal tissues and organs such as the liver and kidney, segmentation is very difficult because MR images are affected by intensity inhomogeneity, weak boundaries, noise, and the presence of similar objects close to each other.
Method
In this study, a novel method for tissue or organ segmentation in abdominal MR imaging is proposed; this method combines kernel graph cuts (KGC) with shape priors. First, a region growing algorithm and morphological operations are used to obtain the initial contour. Second, shape priors are obtained by applying kernel principal component analysis (KPCA) to shape templates collected from different human subjects, after registering all the shape templates to the initial contour. Finally, a new model is constructed by integrating the shape priors into the kernel graph cuts energy function. The entire process aims to obtain an accurate image segmentation.
Results
The proposed segmentation method has been applied to abdominal MR images. The results showed that a satisfactory segmentation, without boundary leakage or incorrect segmentation, can be obtained even in the presence of similar tissues. Quantitative experiments were conducted to compare the proposed segmentation with three other methods: DRLSE, the initial erosion contour, and KGC without shape priors. The comparison is based on two quantitative performance measurements: the probabilistic Rand index (PRI) and the variation of information (VoI). The proposed method has the highest PRI values (0.9912, 0.9983 and 0.9980 for the liver, right kidney and left kidney, respectively) and the lowest VoI values (1.6193, 0.3205 and 0.3217 for the liver, right kidney and left kidney, respectively).
Conclusion
The proposed method can overcome boundary leakage. Moreover, it can segment the liver and kidneys in abdominal MR images without segmentation errors due to the presence of similar tissues. The shape priors based on KPCA were integrated into the fully automatic kernel graph cuts algorithm (KGC) to make the segmentation more robust and accurate. Furthermore, even if part of the target boundary is occluded, the proposed method can still obtain satisfactory segmentation results.
doi:10.1186/1475-925X-12-124
PMCID: PMC4220691  PMID: 24295198
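Entry 20 learns its shape priors by applying kernel principal component analysis (KPCA) to registered shape templates. The sketch below uses scikit-learn's KernelPCA to illustrate the general idea of learning a shape space and projecting a new shape through it; this is not the authors' formulation, and the pre-registered, flattened templates as well as the kernel settings are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def learn_shape_prior(shape_templates, n_components=5, gamma=0.1):
    """Fit a KPCA shape space on flattened, pre-registered binary shape templates."""
    X = np.asarray([t.ravel().astype(float) for t in shape_templates])
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True)
    kpca.fit(X)
    return kpca

def project_shape(kpca, shape, image_shape):
    """Project a shape onto the prior space and map it back to image space."""
    coeffs = kpca.transform(shape.ravel().astype(float)[None, :])
    return kpca.inverse_transform(coeffs).reshape(image_shape)
```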
21.  Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images 
Background
The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology.
Methods
Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics.
Results and discussion
The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but rarely did the contour interior include pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high contrast speckle, contour expansion stopped too early.
Conclusion
The hypothesis that level set segmentation can be accurate to within 1–2 mm on average was supported, although there can be some greater deviation. The method was robust to boundary leakage as evidenced by the high specificity. It was concluded that the technique is promising and that a suitable data set of human ovarian images should be obtained to conduct further studies.
doi:10.1186/1477-7827-6-33
PMCID: PMC2519064  PMID: 18680589
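Entry 21 evolves a level set contour from a manually placed seed under an image-dependent speed function of the authors' own design. As a rough illustration of region-based level set evolution (a related formulation, not the paper's speed function), scikit-image's morphological Chan-Vese can be used; the seed radius and iteration count are arbitrary assumptions.

```python
from skimage.segmentation import morphological_chan_vese, disk_level_set

def segment_with_level_set(image, seed_centre, seed_radius=20, n_iter=200):
    """Evolve a region-based level set from a manually placed circular seed."""
    init = disk_level_set(image.shape, center=seed_centre, radius=seed_radius)
    # smoothing controls how many curvature-smoothing passes run per iteration.
    return morphological_chan_vese(image, n_iter, init_level_set=init, smoothing=2)
```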
22.  Wavelet-Based 3D Reconstruction of Microcalcification Clusters from Two Mammographic Views: New Evidence That Fractal Tumors Are Malignant and Euclidean Tumors Are Benign 
PLoS ONE  2014;9(9):e107580.
The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the “CC-MLO fractal dimension plot”, where a “fractal zone” and “Euclidean zones” (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue.
doi:10.1371/journal.pone.0107580
PMCID: PMC4164655  PMID: 25222610
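Entry 22 separates malignant from benign clusters by whether their geometry is fractal, using the wavelet-transform modulus maxima (WTMM) method. The simpler box-counting estimate of fractal dimension sketched below conveys the underlying idea of a dimension estimated from a log-log slope, but it is not the WTMM analysis used in the paper.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary pattern."""
    counts = []
    for size in box_sizes:
        # Count boxes of side `size` containing at least one foreground pixel.
        h = binary_image.shape[0] // size * size
        w = binary_image.shape[1] // size * size
        blocks = binary_image[:h, :w].reshape(h // size, size, w // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log N(s) versus log(1/s) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```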
23.  Evaluation of Computer-aided Diagnosis on a Large Clinical Full-Field Digital Mammographic Dataset 
Academic radiology  2008;15(11):1437-1445.
Rationale and Objectives:
To convert and optimize our previously developed computerized analysis methods for use with images from full-field digital mammography (FFDM) for breast mass classification in order to aid in the diagnosis of breast cancer.
Materials and Methods:
An institutional review board-approved protocol was obtained, with waiver of consent for retrospective use of mammograms and pathology data. Seven hundred and thirty-nine full-field digital mammographic images, which contained 287 biopsy-proven breast mass lesions, of which 148 lesions were malignant and 139 lesions were benign, were retrospectively collected. Lesion margins were delineated by an expert breast radiologist and were used as the truth for lesion-segmentation evaluation. Our computerized image analysis method consisted of several steps: 1) identified lesions were automatically extracted from the parenchymal background using computerized segmentation methods; 2) a set of image characteristics (mathematical descriptors) were automatically extracted from image data of the lesions and surrounding tissues; and 3) selected features were merged into an estimate of the probability of malignancy using a Bayesian artificial neural network classifier. Performance of the analyses was evaluated at various stages of the conversion using receiver operating characteristic (ROC) analysis.
Results:
An AUC value of 0.81 was obtained in the task of distinguishing between malignant and benign mass lesions in a round-robin by case evaluation on the entire FFDM dataset. We failed to show a statistically significant difference (P value=0.83) as compared with results from our previous study in which the computerized classification was performed on digitized screen-film mammograms (SFMD).
Conclusion:
Our computerized analysis methods developed on digitized screen-film mammography can be converted for use with FFDM. Results show that the computerized analysis methods for the diagnosis of breast mass lesions on FFDM are promising, and can potentially be used to aid clinicians in the diagnostic interpretation of FFDM.
doi:10.1016/j.acra.2008.05.004
PMCID: PMC2597106  PMID: 18995194
Computer-aided diagnosis; Full-field digital mammography; Breast mass classification
24.  A Level Set Based Framework for Quantitative Evaluation of Breast Tissue Density from MRI Data 
PLoS ONE  2014;9(11):e112709.
Breast density is a risk factor associated with the development of breast cancer. Usually, breast density is assessed on two dimensional (2D) mammograms using the American College of Radiology (ACR) classification. Magnetic resonance imaging (MRI) is a non-radiation based examination method, which offers a three dimensional (3D) alternative to classical 2D mammograms. We propose a new framework for automated breast density calculation on MRI data. Our framework consists of three steps. First, a recently developed method for simultaneous intensity inhomogeneity correction and breast tissue and parenchyma segmentation is applied. Second, the obtained breast component is extracted, and the breast-air and breast-body boundaries are refined. Finally, the fibroglandular/parenchymal tissue volume is extracted from the breast volume. The framework was tested on 37 randomly selected MR mammographies. All images were acquired on a 1.5T MR scanner using an axial, T1-weighted time-resolved angiography with stochastic trajectories sequence. The results were compared to manually obtained groundtruth. Dice's Similarity Coefficient (DSC) as well as Bland-Altman plots were used as the main tools for evaluation of similarity between automatic and manual segmentations. The average Dice's Similarity Coefficient values were and for breast and parenchymal volumes, respectively. Bland-Altman plots showed the mean bias () standard deviation equal for breast volumes and for parenchyma volumes. The automated framework produced sufficient results and has the potential to be applied for the analysis of breast volume and breast density of numerous data in clinical and research settings.
doi:10.1371/journal.pone.0112709
PMCID: PMC4244105  PMID: 25422942
25.  An improved approach for the segmentation of starch granules in microscopic images 
BMC Genomics  2010;11(Suppl 2):S13.
Background
Starches are the main storage polysaccharides in plants and are distributed widely throughout plants including seeds, roots, tubers, leaves, stems and so on. Currently, microscopic observation is one of the most important ways to investigate and analyze the structure of starches. The position, shape, and size of the starch granules are the main measurements for quantitative analysis. In order to obtain these measurements, segmentation of starch granules from the background is very important. However, automatic segmentation of starch granules is still a challenging task because of the limitation of imaging condition and the complex scenarios of overlapping granules.
Results
We propose a novel method to segment starch granules in microscopic images. In the proposed method, we first separate starch granules from the background using automatic thresholding and then roughly segment the image using the watershed algorithm. To reduce the over-segmentation of the watershed algorithm, we use the roundness of each segment and analyze the gradient vector field to find the critical points so as to identify oversegments. After oversegments are found, we extract features, such as the position and intensity of the oversegments, and use fuzzy c-means clustering to merge the oversegments into objects with similar features. Experimental results demonstrate that the proposed method can successfully alleviate the over-segmentation of the watershed algorithm.
Conclusions
We present a new scheme for starch granule segmentation. The proposed scheme aims to alleviate the over-segmentation of the watershed algorithm. We use the shape information and critical points of the gradient vector flow (GVF) of starch granules to identify oversegments, and use fuzzy c-means clustering based on prior knowledge to merge these oversegments into objects. Experimental results on twenty microscopic starch images demonstrate the effectiveness of the proposed scheme.
doi:10.1186/1471-2164-11-S2-S13
PMCID: PMC2975413  PMID: 21047380
