1.  Unsupervised segmentation of noisy electron microscopy images using salient watersheds and region merging 
BMC Bioinformatics  2013;14:294.
Background
Segmenting electron microscopy (EM) images of cellular and subcellular processes in the nervous system is a key step in many bioimaging pipelines involving classification and labeling of ultrastructures. However, fully automated techniques to segment images are often susceptible to noise and heterogeneity in EM images (e.g. different histological preparations, different organisms, different brain regions, etc.). Supervised techniques to address this problem are often helpful but require large sets of training data, which are often difficult to obtain in practice, especially across many conditions.
Results
We propose a new, principled unsupervised algorithm to segment EM images using a two-step approach: edge detection via salient watersheds followed by robust region merging. We performed experiments to gather EM neuroimages of two organisms (mouse and fruit fly) using different histological preparations and generated manually curated ground-truth segmentations. We compared our algorithm against several state-of-the-art unsupervised segmentation algorithms and found superior performance using two standard measures of under- and over-segmentation error.
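As an illustration of the general two-step idea only (not the authors' implementation, whose saliency weighting and robust merging statistics are omitted), the following Python sketch over-segments an image with a watershed on the gradient map and then merges adjacent regions of similar mean intensity using scikit-image; the bundled sample image and the merge threshold are arbitrary placeholders.

import numpy as np
from skimage import color, data, filters, graph, segmentation  # skimage >= 0.20; older releases expose the RAG tools under skimage.future.graph

img = data.coins().astype(float) / 255.0             # bundled sample image, standing in for an EM micrograph
gradient = filters.sobel(img)                         # edge-strength map

# Step 1: deliberately over-segment with a compact watershed on the gradient
labels = segmentation.watershed(gradient, markers=400, compactness=0.001)

# Step 2: merge adjacent regions whose mean intensities are close
rag = graph.rag_mean_color(color.gray2rgb(img), labels, mode='distance')
merged = graph.cut_threshold(labels, rag, thresh=0.05)
print(len(np.unique(labels)), "regions before merging,", len(np.unique(merged)), "after")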
Conclusions
Our algorithm is general and may be applicable to other large-scale segmentation problems for bioimages.
doi:10.1186/1471-2105-14-294
PMCID: PMC3852992  PMID: 24090265
Image segmentation; Superpixels; Salient watershed; Region merging; Electron microscopy; Unsupervised learning
2.  Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery 
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea that classification is dependent on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing ‘optimal segmentation’. Our results rather suggest that as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be tolerated, so that a high level of classification accuracy can still be achieved.
doi:10.1016/j.isprsjprs.2014.07.002
PMCID: PMC4183749  PMID: 25284960
Supervised segmentation; Unsupervised segmentation; OBIA; Buildings; Random forest classifier; OpenStreetMap
3.  Contrast-Based Fully Automatic Segmentation of White Matter Hyperintensities: Method and Validation 
PLoS ONE  2012;7(11):e48953.
White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduced WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relied on contrast: nonlinear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH were then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion load. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: FreeSurfer and a thresholding approach as unsupervised methods, and k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87–0.91 for kNN and 0.89–0.94 for SVM; mean SI: 0.63–0.71 for kNN and 0.67–0.72 for SVM), and did not need any training set.
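For readers unfamiliar with the filtering step, the sketch below implements one standard nonlinear (Perona-Malik) diffusion scheme in NumPy; it only illustrates how such filtering smooths within regions while preserving edges, and is not the WHASA pipeline itself (the alternation with watershed, the automatic threshold, and the anatomical selection are omitted, and all parameter values are arbitrary).

import numpy as np

def nonlinear_diffusion(img, n_iter=20, kappa=0.1, gamma=0.2):
    """Perona-Malik diffusion: smooths flat regions while preserving
    edges whose gradient magnitude exceeds roughly kappa."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four axial neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential edge-stopping conductance applied to each direction
        flux = sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
        u += gamma * flux
    return u

smoothed = nonlinear_diffusion(np.random.rand(64, 64))   # toy input in place of a FLAIR slice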
doi:10.1371/journal.pone.0048953
PMCID: PMC3495958  PMID: 23152828
4.  Ensemble Semi-supervised Frame-work for Brain Magnetic Resonance Imaging Tissue Segmentation 
Brain magnetic resonance images (MRIs) tissue segmentation is one of the most important parts of the clinical diagnostic tools. Pixel classification methods have been frequently used in the image segmentation with two supervised and unsupervised approaches up to now. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain. Moreover, they cannot use unlabeled data to train classifiers. On the other hand, unsupervised segmentation methods have no prior knowledge and lead to low level of performance. However, semi-supervised learning which uses a few labeled data together with a large amount of unlabeled data causes higher accuracy with less trouble. In this paper, we propose an ensemble semi-supervised frame-work for segmenting of brain magnetic resonance imaging (MRI) tissues that it has been used results of several semi-supervised classifiers simultaneously. Selecting appropriate classifiers has a significant role in the performance of this frame-work. Hence, in this paper, we present two semi-supervised algorithms expectation filtering maximization and MCo_Training that are improved versions of semi-supervised methods expectation maximization and Co_Training and increase segmentation accuracy. Afterward, we use these improved classifiers together with graph-based semi-supervised classifier as components of the ensemble frame-work. Experimental results show that performance of segmentation in this approach is higher than both supervised methods and the individual semi-supervised classifiers.
PMCID: PMC3788199  PMID: 24098863
Brain magnetic resonance image tissue segmentation; ensemble semi-supervised frame-work; expectation filtering maximization classifier; MCo_Training classifier
5.  A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging 
EJNMMI Research  2013;3:55.
Background
Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases.
Methods
We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists’ delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework.
Results
Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to the state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) as an overlap measure and the Hausdorff distance as a shape dissimilarity measure were used. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R² = 0.8922, p < 0.01 and R² = 0.8664, p < 0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from quantitative analysis of each animal, we also qualitatively showed how metabolic volumes were changing over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data, respectively (all within the resolution limits). Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis.
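For reference, the two evaluation measures named above can be computed on binary masks as in the short NumPy/SciPy sketch below; the toy masks are placeholders and the code is not tied to the authors' framework.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(seg, gt):
    """Dice similarity coefficient between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def hausdorff(seg, gt):
    """Symmetric Hausdorff distance between the two foreground pixel sets."""
    a, b = np.argwhere(seg), np.argwhere(gt)          # (row, col) coordinates
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

seg = np.zeros((64, 64), bool); seg[10:40, 10:40] = True   # toy "automatic" mask
gt = np.zeros((64, 64), bool); gt[12:42, 12:42] = True     # toy "ground truth" mask
print(f"DSC = {dice(seg, gt):.3f}, Hausdorff = {hausdorff(seg, gt):.1f} px")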
Conclusions
We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models are efficient and accurate as compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
doi:10.1186/2191-219X-3-55
PMCID: PMC3734217  PMID: 23879987
Quantitative analysis; Pulmonary infections; Small animal models; PET-CT; Image segmentation; H1N1; Tuberculosis
6.  Peripheral blood smear image analysis: A comprehensive review 
Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps: image segmentation, feature extraction and selection, and pattern classification. The image segmentation step addresses the problem of extracting the object or region of interest from the complicated peripheral blood smear image. Support vector machines (SVM) and artificial neural networks (ANNs) are two common approaches to image segmentation. Feature extraction and selection aim to derive descriptive characteristics of the extracted object that are similar within the same object class and different between different classes. This facilitates the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class, and the learning algorithm can utilize these data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs, based on the training data, without assigning an explicit class to that object. ANN, SVM, decision trees, and K-nearest neighbors are possible approaches to classification. Increased discrimination may be obtained by combining several classifiers.
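A minimal sketch of the three-stage structure described in this review (segmented objects, feature vectors per object, then supervised classification) is given below using scikit-learn; the synthetic feature vectors stand in for measurements extracted from segmented cells and are purely illustrative.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Pretend each row is a feature vector (e.g. area, perimeter, mean intensity, texture)
# extracted from one segmented cell; labels distinguish two hypothetical cell classes.
features = rng.normal(size=(200, 4)) + np.repeat([[0, 0, 0, 0], [1.5, 1.0, 0.5, 0]], 100, axis=0)
labels = np.repeat([0, 1], 100)

clf = SVC(kernel="rbf")                       # supervised classification stage
print("5-fold CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())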
doi:10.4103/2153-3539.129442
PMCID: PMC4023032  PMID: 24843821
Feature extraction; feature selection; microscopic image analysis; peripheral blood smear; segmentation
7.  Ranked retrieval of segmented nuclei for objective assessment of cancer gene repositioning 
BMC Bioinformatics  2012;13:232.
Background
Correct segmentation is critical to many applications within automated microscopy image analysis. Despite the availability of advanced segmentation algorithms, variations in cell morphology, sample preparation, and acquisition settings often lead to segmentation errors. This manuscript introduces a ranked-retrieval approach using logistic regression to automate selection of accurately segmented nuclei from a set of candidate segmentations. The methodology is validated on an application of spatial gene repositioning in breast cancer cell nuclei. Gene repositioning is analyzed in patient tissue sections by labeling sequences with fluorescence in situ hybridization (FISH), followed by measurement of the relative position of each gene from the nuclear center to the nuclear periphery. This technique requires hundreds of well-segmented nuclei per sample to achieve statistical significance. Although the tissue samples in this study contain a surplus of available nuclei, automatic identification of the well-segmented subset remains a challenging task.
Results
Logistic regression was applied to features extracted from candidate segmented nuclei, including nuclear shape, texture, context, and gene copy number, in order to rank objects according to the likelihood of being an accurately segmented nucleus. The method was demonstrated on a tissue microarray dataset of 43 breast cancer patients, comprising approximately 40,000 imaged nuclei in which the HES5 and FRA2 genes were labeled with FISH probes. Three trained reviewers independently classified nuclei into three classes of segmentation accuracy. In man vs. machine studies, the automated method outperformed the inter-observer agreement between reviewers, as measured by area under the receiver operating characteristic (ROC) curve. Robustness of gene position measurements to boundary inaccuracies was demonstrated by comparing 1086 manually and automatically segmented nuclei. Pearson correlation coefficients between the gene position measurements were above 0.9 (p < 0.05). A preliminary experiment was conducted to validate the ranked retrieval in a test to detect cancer. Independent manual measurement of gene positions agreed with automatic results in 21 out of 26 statistical comparisons against a pooled normal (benign) gene position distribution.
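The ranking step can be pictured with the short scikit-learn sketch below: a logistic regression is fit on per-object features, its predicted probability serves as the retrieval score, and AUC summarizes how well the ranking separates accurate from inaccurate segmentations. The features and labels here are synthetic placeholders, not the study's data, and the AUC is computed on the training set purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                  # stand-ins for shape/texture/context/copy-number features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = well segmented

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]          # probability of being an accurately segmented nucleus
ranking = np.argsort(-scores)                  # nuclei ranked best-first for retrieval
print("AUC:", round(roc_auc_score(y, scores), 3), "| top-ranked object index:", ranking[0])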
Conclusions
Accurate segmentation is necessary to automate quantitative image analysis for applications such as gene repositioning. However, due to heterogeneity within images and across different applications, no segmentation algorithm provides a satisfactory solution. Automated assessment of segmentations by ranked retrieval is capable of reducing or even eliminating the need to select segmented objects by hand and represents a significant improvement over binary classification. The method can be extended to other high-throughput applications requiring accurate detection of cells or nuclei across a range of biomedical applications.
doi:10.1186/1471-2105-13-232
PMCID: PMC3484015  PMID: 22971117
8.  Supervised and Unsupervised Self-Testing for HIV in High- and Low-Risk Populations: A Systematic Review 
PLoS Medicine  2013;10(4):e1001414.
By systematically reviewing the literature, Nitika Pant Pai and colleagues assess the evidence base for HIV self tests both with and without supervision.
Background
Stigma, discrimination, lack of privacy, and long waiting times partly explain why six out of ten individuals living with HIV do not access facility-based testing. By circumventing these barriers, self-testing offers potential for more people to know their sero-status. Recent approval of an in-home HIV self test in the US has sparked self-testing initiatives, yet data on acceptability, feasibility, and linkages to care are limited. We systematically reviewed evidence on supervised (self-testing and counselling aided by a health care professional) and unsupervised (performed by self-tester with access to phone/internet counselling) self-testing strategies.
Methods and Findings
Seven databases (Medline [via PubMed], Biosis, PsycINFO, Cinahl, African Medicus, LILACS, and EMBASE) and conference abstracts of six major HIV/sexually transmitted infections conferences were searched from 1st January 2000–30th October 2012. 1,221 citations were identified and 21 studies included for review. Seven studies evaluated an unsupervised strategy and 14 evaluated a supervised strategy. For both strategies, data on acceptability (range: 74%–96%), preference (range: 61%–91%), and partner self-testing (range: 80%–97%) were high. A high specificity (range: 99.8%–100%) was observed for both strategies, while a lower sensitivity was reported in the unsupervised (range: 92.9%–100%; one study) versus supervised (range: 97.4%–97.9%; three studies) strategy. Regarding feasibility of linkage to counselling and care, 96% (n = 102/106) of individuals testing positive for HIV stated they would seek post-test counselling (unsupervised strategy, one study). No extreme adverse events were noted. The majority of data (n = 11,019/12,402 individuals, 89%) were from high-income settings and 71% (n = 15/21) of studies were cross-sectional in design, thus limiting our analysis.
Conclusions
Both supervised and unsupervised testing strategies were highly acceptable, preferred, and more likely to result in partner self-testing. However, no studies evaluated post-test linkage with counselling and treatment outcomes and reporting quality was poor. Thus, controlled trials of high quality from diverse settings are warranted to confirm and extend these findings.
Please see later in the article for the Editors' Summary
Editors' Summary
Background
About 34 million people (most living in resource-limited countries) are currently infected with HIV, the virus that causes AIDS, and about 2.5 million people become infected with HIV every year. HIV is usually transmitted through unprotected sex with an infected partner. HIV infection is usually diagnosed by looking for antibodies to HIV in blood or saliva. Early during infection, the immune system responds to HIV by beginning to make antibodies that recognize the virus and target it for destruction. “Seroconversion”—the presence of detectable amounts of antibody in the blood or saliva—usually takes 6–12 weeks. Rapid antibody-based tests, which do not require laboratory facilities, can provide a preliminary result about an individual's HIV status from a simple oral swab or finger stick sample within 20 minutes. However, preliminary rapid positive results have to be confirmed in a laboratory, which may take a few days or weeks. If the result is positive, HIV infection can be controlled, but not cured, by taking a daily cocktail of powerful antiretroviral drugs throughout life.
Why Was This Study Done?
To reduce the spread of HIV, it is essential that HIV-positive individuals get tested, change their behavior to avoid transmitting the virus to other people (for example, by always using a condom during sex), and, if positive, start treatment, which is available worldwide. Treatment also reduces transmission of the virus to partners and controls the virus in the community. However, only half the people currently living with HIV know their HIV status, a state of affairs that increases the possibility of further HIV transmission to their partners and children. HIV-positive individuals are often diagnosed late, with advanced HIV infection, which is costly for health care services. Although health care facility-based HIV testing has been available for decades, people worry about stigma, visibility, and social discrimination. They also dislike the lack of privacy and do not like having to wait for their test results. Self-testing (i.e., self-test conduct and interpretation) might alleviate some of these barriers to testing by allowing individuals to determine their HIV status in the privacy of their home and could, therefore, increase the number of individuals aware of their HIV status. This could possibly reduce transmission and, through linkages to care, bring HIV under control in communities. In some communities and countries, the stigma of HIV prevents people from taking action about their HIV status. Indeed, an oral (saliva-based) HIV self-test kit is now available in the US. But how acceptable, feasible, and accurate is self-testing by lay people, and will people who self-test positive seek counseling and treatment? In this systematic review (a study that uses pre-defined criteria to identify all the research on a given topic), the researchers examine these issues by analyzing data from studies that have evaluated supervised self-testing (self-testing and counseling aided by a health-care professional) and unsupervised self-testing (self-testing performed without any help but with counseling available by phone or internet).
What Did the Researchers Do and Find?
The researchers identified 21 eligible studies, two-thirds of which evaluated oral self-testing and a third of which evaluated blood-based self-testing. Seven studies evaluated an unsupervised self-testing strategy and 14 evaluated a supervised strategy. Most of the data (89%) came from studies undertaken in high-income settings. The study populations varied from those at high risk of HIV infection to low-risk general populations. Across the studies, acceptability (defined as the number of people who actually self-tested divided by the number who consented to self-test) ranged from 74% to 96%. With both strategies, the specificity of self-testing (the chance that an HIV-negative person receives a negative test result) was high, but the sensitivity of self-testing (the chance that an HIV-positive person receives a positive test result) was higher for supervised than for unsupervised testing. The researchers also found evidence that people preferred self-testing to facility-based testing and oral self-testing to blood-based self-testing and, in one study, 96% of participants who self-tested positive sought post-testing counseling.
What Do These Findings Mean?
These findings provide new but limited information about the feasibility, acceptability, and accuracy of HIV self-testing. They suggest that it is feasible to implement both supervised and unsupervised self-testing, that both strategies are preferred to facility-based testing, but that the accuracy of self-testing is variable. However, most of the evidence considered by the researchers came from high-income countries and from observational studies of varying quality, and data on whether people self-testing positive sought post-testing counseling (linkage to care) were only available from one evaluation of unsupervised self-testing in the US. Consequently, although these findings suggest that self-testing could engage individuals in finding out their HIV status, and could thereby help modify behavior and reduce HIV transmission in the community by increasing the proportion of people living with HIV who know their HIV status, the researchers suggested that more data from diverse settings, preferably from controlled randomized trials, must be collected before any initiatives for global scale-up of self-testing for HIV infection are implemented.
Additional Information
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1001414.
Information is available from the US National Institute of Allergy and Infectious Diseases on HIV infection and AIDS
NAM/aidsmap provides basic information about HIV/AIDS and summaries of recent research findings on HIV care and treatment
Information is available from Avert, an international AIDS charity on many aspects of HIV/AIDS, including information on HIV testing, and on HIV transmission and testing (in English and Spanish)
The UK National Health Service Choices website provides information about all aspects of HIV and AIDS; a “behind the headlines” article provides details about the 2012 US approval for an over-the-counter HIV home-use test
The 2012 World AIDS Day Report provides information about the percentage of people living with HIV who are aware of their HIV status in various African countries, as well as up-to-date information about the AIDS epidemic
Patient stories about living with HIV/AIDS are available through Avert; the nonprofit website Healthtalkonline also provides personal stories about living with HIV, including stories about getting a diagnosis
doi:10.1371/journal.pmed.1001414
PMCID: PMC3614510  PMID: 23565066
9.  IMPST: A New Interactive Self-Training Approach to Segmentation of Suspicious Lesions in Breast MRI
Breast lesion segmentation in magnetic resonance (MR) images is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used in image segmentation, with both supervised and unsupervised approaches. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive, and slow to obtain. On the other hand, unsupervised segmentation methods need no prior knowledge but lead to low performance. However, semi-supervised learning, which uses a few labeled data together with a large amount of unlabeled data, promises higher accuracy with less effort. In this paper, we propose a new interactive semi-supervised approach to segmentation of suspicious lesions in breast MRI. Using a suitable classifier in this approach plays an important role in its performance; in this paper, we present a semi-supervised algorithm, improved self-training (IMPST), which is an improved version of the self-training method and increases segmentation accuracy. Experimental results show that segmentation performance in this approach is higher than that of supervised and unsupervised methods such as K-nearest neighbors, Bayesian, support vector machine, and fuzzy c-means classifiers.
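The general self-training idea (train on the few labels, pseudo-label the most confident unlabeled samples, retrain) can be sketched with scikit-learn's generic SelfTrainingClassifier as below; this is not the IMPST algorithm or its interactive component, and the two-cluster toy data merely stands in for labeled and unlabeled pixels.

import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])  # two pixel-feature clusters
y = np.array([0] * 100 + [1] * 100)

y_train = np.full(200, -1)                 # -1 marks unlabeled samples
y_train[::20] = y[::20]                    # keep only a handful of labels
clf = SelfTrainingClassifier(SVC(probability=True), threshold=0.8).fit(X, y_train)
print("accuracy on all samples:", (clf.predict(X) == y).mean())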
PMCID: PMC3342621  PMID: 22606669
Breast lesions segmentation; magnetic resonance imaging; self-training; semi-supervised learning
10.  A Logarithmic Opinion Pool Based STAPLE Algorithm For The Fusion of Segmentations With Associated Reliability Weights 
IEEE transactions on medical imaging  2014;33(10):1997-2009.
Pelvic floor dysfunction is very common in women after childbirth, and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment of patients. However, because of the complexity of the pelvic floor structures, manual segmentation of the pelvic floor is challenging and suffers from high inter- and intra-rater variability among expert raters. Multiple template fusion algorithms are promising techniques for segmentation of MRI in these types of applications, but these algorithms have been limited by imperfections in the alignment of each template to the target and by template segmentation errors. In this class of segmentation techniques, a collection of templates is aligned to a target, and a new segmentation of the target is inferred. A number of algorithms have sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out decision fusion through local intensity-weighted voting schemes. This class of approach is a form of linear opinion pooling and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image, and we developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. The algorithm is a generalization of the STAPLE algorithm in which a reference segmentation is estimated and used to infer an optimal weighting for fusion of templates. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling. We evaluated our new algorithm against nine state-of-the-art segmentation methods by comparison to a reference standard derived from repeated manual segmentations of each subject image and demonstrate that our algorithm achieves the highest performance. This automated segmentation algorithm is expected to enable widespread evaluation of the female pelvic floor for diagnosis and prognosis in the future.
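The fusion rule itself, as opposed to the paper's EM estimation of the reference standard and of the reliability weights (not reproduced here), amounts to a weighted product of template label probabilities, i.e. a weighted sum in the log domain. A NumPy sketch with toy arrays:

import numpy as np

def log_opinion_pool(probs, weights):
    """Fuse per-template label probabilities with local reliability weights.
    probs:   array (templates, voxels, labels) of template label probabilities
    weights: array (templates, voxels) of local reliability weights
    Returns fused (voxels, labels) probabilities, p proportional to prod_k p_k ** w_k."""
    log_p = np.einsum('tv,tvl->vl', weights, np.log(probs + 1e-12))
    fused = np.exp(log_p - log_p.max(axis=1, keepdims=True))   # stabilize before normalizing
    return fused / fused.sum(axis=1, keepdims=True)

# Toy example: 3 templates, 4 voxels, 2 labels
probs = np.random.dirichlet([1, 1], size=(3, 4))
weights = np.random.rand(3, 4)
print(log_opinion_pool(probs, weights))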
doi:10.1109/TMI.2014.2329603
PMCID: PMC4264575  PMID: 24951681
11.  A quantitative analytic pipeline for evaluating neuronal activities by high throughput synaptic vesicle imaging 
NeuroImage  2012;62(3):2040-2054.
Synaptic vesicle dynamics play an important role in the study of neuronal and synaptic activities in neurodegenerative diseases ranging from the epidemic Alzheimer’s disease to the rare Rett syndrome. A high-throughput assay with a large population of neurons would be useful and efficient to characterize neuronal activity based on the dynamics of synaptic vesicles, whether for the study of mechanisms or to discover drug candidates for neurodegenerative and neurodevelopmental disorders. However, the massive amounts of image data generated via high-throughput screening require enormous manual processing time and effort, restricting the practical use of such an assay. This paper presents an automated analytic system to process and interpret the huge data sets generated by such assays. Our system enables the automated detection, segmentation, quantification, and measurement of neuron activities based on the synaptic vesicle assay. To overcome challenges such as noisy background, inhomogeneity, and tiny object size, we first employ MSVST (Multi-Scale Variance Stabilizing Transform) to obtain a denoised and enhanced map of the original image data. Then, we propose an adaptive thresholding strategy, based on local information, to solve the inhomogeneity issue and to accurately segment synaptic vesicles. We design algorithms to address the issue of overlapping tiny objects of interest. Several post-processing criteria are defined to filter false positives. A total of 152 features are extracted for each detected vesicle. A score is defined for each synaptic vesicle image to quantify the neuron activity. We also compare the unsupervised strategy with the supervised method. Our experiments on hippocampal neuron assays showed that the proposed system can automatically detect vesicles and quantify their dynamics for evaluating neuron activities. The availability of such an automated system will open opportunities for investigation of synaptic neuropathology and identification of candidate therapeutics for neurodegeneration.
doi:10.1016/j.neuroimage.2012.06.020
PMCID: PMC3437259  PMID: 22732566
synaptic vesicle; neuron activity; detection and quantification; neurodegenerative disease; high throughput image screening
12.  Segmentation and classification of medical images using texture-primitive features: Application of BAM-type artificial neural network 
The objective of developing this software is to achieve auto-segmentation and tissue characterization. Therefore, the present algorithm has been designed and developed for the analysis of medical images based on a hybridization of syntactic and statistical approaches, using an artificial neural network (ANN). This algorithm performs segmentation and classification as is done in the human vision system, which recognizes objects, perceives depth, and identifies different textures, curved surfaces, or surface inclination from texture information and brightness. The analysis of a medical image is based on four steps: 1) image filtering, 2) segmentation, 3) feature extraction, and 4) analysis of the extracted features by a pattern recognition system or classifier. In this paper, an attempt has been made to present an approach for soft tissue characterization utilizing texture-primitive features with an ANN as the segmentation and classification tool. The present approach directly combines the second, third, and fourth steps into one algorithm. This is a semi-supervised approach in which supervision is involved only at the level of defining the texture-primitive cell; afterwards, the algorithm itself scans the whole image and performs the segmentation and classification in unsupervised mode. The algorithm was first tested on Markov textures, and the success rate achieved in classification was 100%; further, the algorithm was able to give results on test images impregnated with distorted Markov texture cells. In addition, the output also indicated the level of distortion in the distorted Markov texture cell as compared to the standard Markov texture cell. Finally, the algorithm was applied to selected medical images for segmentation and classification. Results were in agreement with those of manual segmentation and were clinically correlated.
doi:10.4103/0971-6203.42763
PMCID: PMC2772042  PMID: 19893702
Artificial neural network; medical images; segmentation; texture features; texture primitive
13.  Data Mining in Genomics 
Clinics in laboratory medicine  2008;28(1):145-viii.
SYNOPSIS
In this paper we review important emerging statistical concepts, data mining techniques, and applications that have been recently developed and used for genomic data analysis. First, we summarize general background and some critical issues in genomic data mining. We then describe a novel concept of statistical significance, the so-called false discovery rate, the rate of false positives among all positive findings, which has been suggested to control the error rate of numerous false positives in large-scale screening of biological data. In the next section, two recent statistical testing methods, the significance analysis of microarray (SAM) and local pooled error (LPE) tests, are introduced. We next introduce statistical modeling in genomic data analysis, such as ANOVA and heterogeneous error modeling (HEM) approaches, which have been suggested for analyzing microarray data obtained from multiple experimental and/or biological conditions. The following two sections describe data exploration and discovery tools largely termed unsupervised learning and supervised learning. The former approaches include several multivariate statistical methods to investigate co-expression patterns of multiple genes, and the latter are classification methods to discover genomic biomarker signatures for predicting important subclasses of human diseases. The last section briefly summarizes various genomic data mining approaches in biomedical pathway analysis and patient outcome and/or chemotherapeutic response prediction. Many of the software packages introduced in this paper are freely available at Bioconductor, the open-source Bioinformatics software web site (http://www.bioconductor.org/).
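As a concrete illustration of false discovery rate control on a list of p-values (the synopsis introduces the FDR concept in general; the Benjamini-Hochberg step-up procedure shown here is one standard way of controlling it and is used purely as an example, not as the paper's method):

import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of hypotheses rejected at FDR level alpha
    using the Benjamini-Hochberg step-up procedure."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha   # compare ranked p-values to the BH line
    reject = np.zeros(m, dtype=bool)
    idx = np.nonzero(below)[0]
    if idx.size:
        reject[order[:idx.max() + 1]] = True                 # reject everything up to the largest passing rank
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.64, 0.78]
print(benjamini_hochberg(pvals, alpha=0.05))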
doi:10.1016/j.cll.2007.10.010
PMCID: PMC2253491  PMID: 18194724
ANOVA; False discovery rate; Genomic data; Heterogeneous error model (HEM); Hierarchical clustering; Linear discriminant analysis; Local pooled error (LPE) test; Logistic regression discriminant analysis; Microarray GeneChip™ gene expression; Misclassification penalized posterior (MiPP); Significance analysis of microarray (SAM); Supervised learning; Unsupervised learning
14.  Segmentation of Fluorescence Microscopy Cell Images Using Unsupervised Mining 
The accurate measurement of cell and nuclei contours is critical for the sensitive and specific detection of changes in normal cells in several medical informatics disciplines. Within microscopy, this task is facilitated using fluorescence cell stains, and segmentation is often the first step in such approaches. Due to the complex nature of cell tissues and problems inherent to microscopy, unsupervised mining approaches based on clustering can be incorporated into the segmentation of cells. In this study, we have developed and evaluated the performance of multiple unsupervised data mining techniques in cell image segmentation. We adapt four distinctive, yet complementary, methods for unsupervised learning, including those based on k-means clustering, EM, Otsu’s threshold, and GMAC. Validation measures are defined, and the performance of the techniques is evaluated both quantitatively and qualitatively using synthetic and recently published real data. Experimental results demonstrate that k-means, Otsu’s threshold, and GMAC perform similarly and have more precise segmentation results than EM. We report that EM has higher recall values but lower precision, resulting from under-segmentation due to its Gaussian model assumption. We also demonstrate that these methods need spatial information to segment complex real cell images with a high degree of efficacy, as expected in many medical informatics applications.
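A compact illustration of two of the unsupervised approaches compared in this study, intensity-based k-means clustering and Otsu thresholding, is given below with scikit-image and scikit-learn; the bundled sample image stands in for a fluorescence cell image, and the sketch deliberately works on intensities only, ignoring the spatial information the authors note is needed for complex real images.

import numpy as np
from skimage import data, filters
from sklearn.cluster import KMeans

img = data.coins()                                   # stand-in for a fluorescence cell image
pixels = img.reshape(-1, 1).astype(float)

# Otsu: one global threshold that maximizes between-class variance
otsu_mask = img > filters.threshold_otsu(img)

# k-means: cluster pixel intensities into two groups and keep the brighter cluster
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
foreground = km.cluster_centers_.ravel().argmax()
kmeans_mask = (km.labels_ == foreground).reshape(img.shape)

print("pixel-wise agreement between the two masks:",
      round(float((otsu_mask == kmeans_mask).mean()), 3))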
doi:10.2174/1874431101004020041
PMCID: PMC2930152  PMID: 21116323
Fluorescence microscope cell image; segmentation; K-means clustering; EM; threshold; GMAC.
15.  Hippocampal volumetry for lateralization of temporal lobe epilepsy: automated versus manual methods 
NeuroImage  2010;54S1:S218-S226.
The hippocampus has been the primary region of interest in the preoperative imaging investigations of mesial temporal lobe epilepsy (mTLE). Hippocampal imaging and electroencephalographic features may be sufficient in several cases to declare the epileptogenic focus. In particular, hippocampal atrophy, as appreciated on T1-weighted (T1W) magnetic resonance (MR) images, may suggest a mesial temporal sclerosis. Qualitative visual assessment of hippocampal volume, however, is influenced by head position in the magnet and the amount of atrophy in different parts of the hippocampus. An entropy-based segmentation algorithm for subcortical brain structures (LocalInfo) was developed and supplemented by both a new multiple atlas strategy and a free-form deformation step to capture structural variability. Manually segmented T1-weighted MR images of 10 non-epileptic subjects were used as atlases for the proposed automatic segmentation protocol, which was applied to a cohort of 46 mTLE patients. The segmentation and lateralization accuracies of the proposed technique were compared with those of two other available programs, HAMMER and FreeSurfer, in addition to the manual method. The Dice coefficient for the proposed method was 11% (p < 10⁻⁵) and 14% (p < 10⁻⁴) higher in comparison with HAMMER and FreeSurfer, respectively. Mean and Hausdorff distances in the proposed method were also 14% (p < 0.2) and 26% (p < 10⁻³) lower in comparison with HAMMER, and 8% (p < 0.8) and 48% (p < 10⁻⁵) lower in comparison with FreeSurfer, respectively. LocalInfo proved to have higher concordance (87%) with the manual segmentation method than either HAMMER (85%) or FreeSurfer (83%). The accuracy of lateralization by volumetry in this study with LocalInfo was 74%, compared to 78% with the manual segmentation method. LocalInfo yields a closer approximation to manual segmentation and may therefore prove to be more reliable than currently published automatic segmentation algorithms.
doi:10.1016/j.neuroimage.2010.03.066
PMCID: PMC2978802  PMID: 20353827
16.  A Unified Set of Analysis Tools for Uterine Cervix Image Segmentation 
Segmentation is a fundamental component of many medical image processing applications, and it has long been recognized as a challenging problem. In this paper, we report our research and development efforts on analyzing and extracting clinically meaningful regions from uterine cervix images in a large database created for the study of cervical cancer. In addition to proposing new algorithms, we also focus on developing open source tools which are in synchrony with the research objectives. These efforts have resulted in three Web-accessible tools which address three important and interrelated sub-topics in medical image segmentation, respectively: the BMT (Boundary Marking Tool), CST (Cervigram Segmentation Tool), and MOSES (Multi-Observer Segmentation Evaluation System). The BMT is for manual segmentation, typically to collect “ground truth” image regions from medical experts. The CST is for automatic segmentation, and MOSES is for segmentation evaluation. These tools are designed to be a unified set in which data can be conveniently exchanged. They have value not only for improving the reliability and accuracy of algorithms of uterine cervix image segmentation, but also promoting collaboration between biomedical experts and engineers which are crucial to medical image processing applications. Although the CST is designed for the unique characteristics of cervigrams, the BMT and MOSES are very general and extensible, and can be easily adapted to other biomedical image collections.
doi:10.1016/j.compmedimag.2010.04.002
PMCID: PMC2955170  PMID: 20510585
17.  UFFizi: a generic platform for ranking informative features 
BMC Bioinformatics  2010;11:300.
Background
Feature selection is an important pre-processing task in the analysis of complex data. Selecting an appropriate subset of features can improve classification or clustering and lead to better understanding of the data. An important example is that of finding an informative group of genes out of thousands that appear in gene-expression analysis. Numerous supervised methods have been suggested but only a few unsupervised ones exist. Unsupervised Feature Filtering (UFF) is such a method, based on an entropy measure of Singular Value Decomposition (SVD), ranking features and selecting a group of preferred ones.
Results
We analyze the statistical properties of UFF and present an efficient approximation for the calculation of its entropy measure. This allows us to develop a web-tool that implements the UFF algorithm. We propose novel criteria to indicate whether a considered dataset is amenable to feature selection by UFF. Relying on a formalism similar to UFF, we also propose an Unsupervised Detection of Outliers (UDO) method, providing a novel definition of outliers and producing a measure to rank the "outlier-degree" of an instance.
Our methods are demonstrated on gene and microRNA expression datasets, covering viral infection disease and cancer. We apply UFFizi to select genes from these datasets and discuss their biological and medical relevance.
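The core of UFF can be pictured as ranking each feature by how much its removal changes the SVD-based entropy of the data matrix. The NumPy sketch below follows that leave-one-out idea on synthetic data and is only a simplified reading of the method; the web-tool's efficient approximation and the UDO outlier score are not shown.

import numpy as np

def svd_entropy(A):
    """SVD-based entropy of a data matrix (samples x features)."""
    s = np.linalg.svd(A, compute_uv=False)
    v = s ** 2 / (s ** 2).sum()                       # normalized squared singular values
    v = v[v > 1e-12]
    return -(v * np.log(v)).sum() / np.log(len(s))

def uff_scores(A):
    """Leave-one-out contribution of each feature to the SVD entropy;
    high-contribution features are the informative ones to keep."""
    base = svd_entropy(A)
    return np.array([base - svd_entropy(np.delete(A, j, axis=1))
                     for j in range(A.shape[1])])

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 8))
X[:, 0] += np.repeat([0.0, 3.0], 15)                  # make feature 0 carry group structure
print("features ranked by contribution:", np.argsort(-uff_scores(X)))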
Conclusions
Statistical properties extracted from the UFF algorithm can distinguish selected features from others. UFFizi is a framework that is based on the UFF algorithm and it is applicable for a wide range of diseases. The framework is also implemented as a web-tool.
The web-tool is available at: http://adios.tau.ac.il/UFFizi
doi:10.1186/1471-2105-11-300
PMCID: PMC2893168  PMID: 20525252
18.  Binarization of medical images based on the recursive application of mean shift filtering: Another algorithm
Binarization is often recognized to be one of the most important steps in most high-level image analysis systems, particularly for object recognition. Its precise functioning highly determines the performance of the entire system. According to many researchers, segmentation finishes when the observer’s goal is satisfied. Experience has shown that the most effective methods continue to be the iterative ones. However, a problem with these algorithms is the stopping criterion. In this work, entropy is used as the stopping criterion when segmenting an image by recursively applying mean shift filtering. In this way, a new algorithm is introduced for the binarization of medical images, in which the binarization is carried out after the segmented image has been obtained. The good performance of the proposed method, that is, the good quality of the binarization, is illustrated with several experimental results. In this paper, the results obtained with this new algorithm are compared with those of another algorithm previously developed by the author and collaborators, and also with Otsu’s method.
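The stopping idea can be sketched in Python with OpenCV as below: mean shift filtering is applied repeatedly and the recursion stops when the entropy of the filtered image no longer changes appreciably. The synthetic test image, the spatial/range bandwidths, the entropy tolerance, and the final Otsu binarization are all placeholder choices, not the authors' settings or their binarization scheme.

import numpy as np
import cv2

def gray_entropy(gray):
    """Shannon entropy of the 8-bit gray-level histogram."""
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic test image: a bright disk plus noise, standing in for a medical image
img = np.zeros((128, 128, 3), np.uint8)
cv2.circle(img, (64, 64), 30, (200, 200, 200), -1)
img = cv2.add(img, np.random.randint(0, 40, img.shape, dtype=np.uint8))

prev = gray_entropy(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
for _ in range(20):
    img = cv2.pyrMeanShiftFiltering(img, sp=10, sr=20)      # one mean shift filtering pass
    cur = gray_entropy(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
    if abs(prev - cur) < 1e-3:                              # entropy has stabilized: stop recursing
        break
    prev = cur

# Placeholder binarization of the near-piecewise-constant result (Otsu threshold)
_, binary = cv2.threshold(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)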
PMCID: PMC3169934  PMID: 21918602
image segmentation; mean shift; algorithm; entropy; Otsu’s method
19.  Segmentation of the Left Ventricle from Cardiac MR Images Using a Subject-Specific Dynamical Model 
Statistical models have shown considerable promise as a basis for segmenting and interpreting cardiac images. While a variety of statistical models have been proposed to improve the segmentation results, most of them are either static models (SM), which neglect the temporal dynamics of a cardiac sequence, or generic dynamical models (GDM), which are homogeneous in time and neglect the inter-subject variability in cardiac shape and deformation. In this paper, we develop a subject-specific dynamical model (SSDM) that simultaneously handles temporal dynamics (intra-subject variability) and inter-subject variability. We also propose a dynamic prediction algorithm that can progressively identify the specific motion patterns of a new cardiac sequence based on the shapes observed in past frames. The incorporation of this SSDM into the segmentation framework is formulated in a recursive Bayesian framework. It starts with a manual segmentation of the first frame, and then segments each frame according to intensity information from the current frame as well as the prediction from past frames. In addition, to reduce error propagation in sequential segmentation, we take into account the periodic nature of cardiac motion and perform segmentation in both forward and backward directions. We perform a “leave-one-out” test on 32 canine sequences and 22 human sequences, and compare the experimental results with those from SM, GDM, and the Active Appearance Motion Model (AAMM). Quantitative analysis of the experimental results shows that SSDM outperforms SM, GDM, and AAMM by having better global and local consistency with manual segmentation. Moreover, we compare the segmentation results from forward and forward-backward segmentation. Quantitative evaluation shows that forward-backward segmentation suppresses the propagation of segmentation errors.
doi:10.1109/TMI.2009.2031063
PMCID: PMC2832728  PMID: 19789107
Cardiac Segmentation; Statistical Shape Model; Dynamical Model; Bayesian Method
20.  Semi-Automatic segmentation of multiple mouse embryos in MR images 
BMC Bioinformatics  2011;12:237.
Background
The motivation behind this paper is to aid the automatic phenotyping of mouse embryos, wherein multiple embryos embedded within a single tube were scanned using Magnetic Resonance Imaging (MRI).
Results
Our algorithm, a modified version of the simplex deformable model of Delingette, addresses various issues with deformable models including initialization and inability to adapt to boundary concavities. In addition, it proposes a novel technique for automatic collision detection of multiple objects which are being segmented simultaneously, hence avoiding major leaks into adjacent neighbouring structures. We address the initialization problem by introducing balloon forces which expand the initial spherical models close to the true boundaries of the embryos. This results in models which are less sensitive to initial minimum of two fold after each stage of deformation. To determine collision during segmentation, our unique collision detection algorithm finds the intersection between binary masks created from the deformed models after every few iterations of the deformation and modifies the segmentation parameters accordingly hence avoiding collision.
We have segmented six tubes of three-dimensional MR images of multiple mouse embryos using our modified deformable model algorithm. We then validated the results of our semi-automatic segmentation against manual segmentation of the same embryos. Our validation shows that, except for the paws and tails, we were able to segment the mouse embryos with minor error.
Conclusions
This paper describes our novel multiple object segmentation technique with collision detection using a modified deformable model algorithm. Further, it presents the results of segmenting magnetic resonance images of up to 32 mouse embryos stacked in one gel filled test tube and creating 32 individual masks.
doi:10.1186/1471-2105-12-237
PMCID: PMC3224127  PMID: 21679425
21.  Fast automatic quantitative cell replication with fluorescent live cell imaging 
BMC Bioinformatics  2012;13:21.
Background
Live cell imaging is a useful tool to monitor cellular activities in living systems. It is often necessary in cancer research or experimental research to quantify the dividing capabilities of cells or the cell proliferation level when investigating manipulations of the cells or their environment. Manual quantification of fluorescence microscopic images is difficult because humans are neither sensitive to fine differences in color intensity nor effective at counting and averaging fluorescence levels among cells. However, auto-quantification is not a straightforward problem to solve. As the sampling location of the microscopy changes, the number of cells in individual microscopic images varies, which makes simple measurement methods such as the sum of stain intensity values or the total number of positive stains within each image inapplicable. Thus, automated quantification with robust cell segmentation techniques is required.
Results
An automated quantification system with a robust cell segmentation technique is presented. The experimental results in application to monitoring cellular replication activities show that the quantitative score is promising for representing the cell replication level, and scores for images from different cell replication groups are demonstrated to be statistically significantly different using ANOVA, LSD and Tukey HSD tests (p-value < 0.01). In addition, the technique is fast and takes less than 0.5 seconds for high-resolution microscopic images (with image dimensions of 2560 × 1920).
Conclusion
A robust automated quantification method of live cell imaging is built to measure the cell replication level, providing a robust quantitative analysis system in fluorescent live cell imaging. In addition, the presented unsupervised entropy based cell segmentation for live cell images is demonstrated to be also applicable for nuclear segmentation of IHC tissue images.
doi:10.1186/1471-2105-13-21
PMCID: PMC3359210  PMID: 22292799
22.  Curvelet Based Offline Analysis of SEM Images 
PLoS ONE  2014;9(8):e103942.
Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process and requires continuous human intervention and effort. This paper presents an image processing based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm for fractal dimension (FD) calculation, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known watershed segmentation algorithm.
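The quantification step rests on the box-counting estimate of fractal dimension: cover the binary shape with boxes of decreasing size and take the slope of log(count) against log(1/size). The small NumPy sketch below runs on a synthetic disk and is not the SEM data or the authors' code; the indirect perimeter estimate from boundary boxes is omitted.

import numpy as np

def box_count(mask, size):
    """Number of size x size boxes containing at least one foreground pixel."""
    h, w = mask.shape
    hs, ws = h // size * size, w // size * size          # crop to a multiple of the box size
    blocks = mask[:hs, :ws].reshape(hs // size, size, ws // size, size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Slope of log(count) versus log(1/size) estimates the box-counting dimension."""
    counts = [box_count(mask, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.zeros((256, 256), bool)
yy, xx = np.ogrid[:256, :256]
mask[(yy - 128) ** 2 + (xx - 128) ** 2 < 80 ** 2] = True  # a filled disk, FD close to 2
print(round(fractal_dimension(mask), 2))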
doi:10.1371/journal.pone.0103942
PMCID: PMC4121203  PMID: 25089617
23.  Lung Segmentation Refinement based on Optimal Surface Finding Utilizing a Hybrid Desktop/Virtual Reality User Interface 
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible.
This paper presents a new interactive refinement approach for correcting local segmentation errors in automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets that showed local segmentation errors after an automated OSF-based lung segmentation. The experiments exhibited a significant increase in performance in terms of mean absolute surface distance error (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of interaction is one of the most important aspects leading to acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required to reach complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans, as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains.
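The mean absolute surface distance reported above can be computed, for example, as the symmetric mean of nearest-neighbour distances between two surface point sets. The sketch below is not from the paper and assumes the surfaces are given as N × 3 arrays of vertex coordinates in millimetres.

```python
# Illustrative sketch of a mean absolute surface distance metric between two
# surfaces represented as point clouds (N x 3 coordinate arrays, in mm).
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_surface_distance(surf_a, surf_b):
    d_ab, _ = cKDTree(surf_b).query(surf_a)  # each point of A to its nearest point of B
    d_ba, _ = cKDTree(surf_a).query(surf_b)  # each point of B to its nearest point of A
    return 0.5 * (d_ab.mean() + d_ba.mean())  # symmetric average of the two directions
```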
doi:10.1016/j.compmedimag.2013.01.003
PMCID: PMC3852918  PMID: 23415254
Lung segmentation; virtual reality; segmentation refinement; optimal surface finding; computed tomography
24.  Implementing the 2009 Institute of Medicine recommendations on resident physician work hours, supervision, and safety 
Long working hours and sleep deprivation have been a facet of physician training in the US since the advent of the modern residency system. However, the scientific evidence linking fatigue with deficits in human performance, accidents and errors in industries from aeronautics to medicine, nuclear power, and transportation has mounted over the last 40 years. This evidence has also spawned regulations to help ensure public safety across safety-sensitive industries, with the notable exception of medicine.
In late 2007, at the behest of the US Congress, the Institute of Medicine embarked on a year-long examination of the scientific evidence linking resident physician sleep deprivation with clinical performance deficits and medical errors. The Institute of Medicine’s report, entitled “Resident duty hours: Enhancing sleep, supervision and safety”, published in January 2009, recommended new limits on resident physician work hours and workload, increased supervision, a heightened focus on resident physician safety, training in structured handovers and quality improvement, more rigorous external oversight of work hours and other aspects of residency training, and the identification of expanded funding sources necessary to implement the recommended reforms successfully and protect the public and resident physicians themselves from preventable harm.
Given that resident physicians comprise almost a quarter of all physicians who work in hospitals, and that taxpayers, through Medicare and Medicaid, fund graduate medical education, the public has a deep investment in physician training. Patients expect to receive safe, high-quality care in the nation’s teaching hospitals. Because it is their safety that is at issue, their voices should be central in policy decisions affecting patient safety. It is likewise important to integrate the perspectives of resident physicians, policy makers, and other constituencies in designing new policies. However, since its release, discussion of the Institute of Medicine report has been largely confined to the medical education community, led by the Accreditation Council for Graduate Medical Education (ACGME).
To begin gathering these perspectives and developing a plan to implement safer work hours for resident physicians, a conference entitled “Enhancing sleep, supervision and safety: What will it take to implement the Institute of Medicine recommendations?” was held at Harvard Medical School on June 17–18, 2010. This White Paper is a product of a diverse group of 26 representative stakeholders bringing relevant new information and innovative practices to bear on a critical patient safety problem. Given that our conference included experts from across disciplines with diverse perspectives and interests, not every recommendation was endorsed by each invited conference participant. However, every recommendation made here was endorsed by the majority of the group, and many were endorsed unanimously. Conference members participated in the process, reviewed the final product, and provided input before publication. Participants provided their individual perspectives, which do not necessarily represent the formal views of any organization.
In September 2010 the ACGME issued new rules to go into effect on July 1, 2011. Unfortunately, they stop considerably short of the Institute of Medicine’s recommendations and those endorsed by this conference. In particular, the ACGME only applied the limitation of 16 hours to first-year resident physicians. Thus, it is clear that policymakers, hospital administrators, and residency program directors who wish to implement safer health care systems must go far beyond what the ACGME will require. We hope this White Paper will serve as a guide and provide encouragement for that effort.
Resident physician workload and supervision
By the end of training, a resident physician should be able to practice independently. Yet much of resident physicians’ time is dominated by tasks with little educational value. The caseload can be so great that inadequate reflective time is left for learning based on clinical experiences. In addition, supervision is often vaguely defined and discontinuous. Medical malpractice data indicate that resident physicians are frequently named in lawsuits, most often for lack of supervision. The recommendations are: The ACGME should adjust resident physicians’ workload requirements to optimize educational value. Resident physicians as well as faculty should be involved in work redesign that eliminates nonessential and noneducational activity from resident physician duties. Mechanisms should be developed for identifying in real time when a resident physician’s workload is excessive, and processes developed to activate additional providers. Teamwork should be actively encouraged in the delivery of patient care; historically, much of medical training has focused on individual knowledge, skills, and responsibility, and as health care delivery has become more complex, it will be essential to train resident and attending physicians in effective teamwork that emphasizes collective responsibility for patient care and recognizes the signs, both individual and systemic, of a schedule and working conditions that are too demanding to be safe. Hospitals should embrace the opportunities that resident physician training redesign offers, and should recognize and act on the potential benefits of work redesign, e.g., increased efficiency, reduced costs, improved quality of care, and resident and attending physician job satisfaction. Attending physicians should supervise all hospital admissions; resident physicians should directly discuss all admissions with attending physicians, and attending physicians should be both cognizant of and have input into the care patients are to receive upon admission to the hospital. In-house supervision should be required for all critical care services, including emergency rooms, intensive care units, and trauma services; resident physicians should not be left unsupervised to care for critically ill patients, and in settings in which the acuity is high, physicians who have completed residency should provide direct supervision for resident physicians; supervising physicians should always be physically in the hospital for supervision of resident physicians who care for critically ill patients. The ACGME should explicitly define “good” supervision by specialty and by year of training, with explicit requirements for intensity and level of training for supervision of specific clinical scenarios. The Centers for Medicare and Medicaid Services (CMS) should use graduate medical education funding to provide incentives to programs with proven, effective levels of supervision; although this action would require federal legislation, reimbursement rules would help to ensure that hospitals pay attention to the importance of good supervision and require it from their training programs.
Resident physician work hours
Although the IOM “Sleep, supervision and safety” report provides a comprehensive review and discussion of all aspects of graduate medical education training, the report’s focal point is its recommendations regarding the hours that resident physicians are currently required to work. A considerable body of scientific evidence, much of it cited by the Institute of Medicine report, describes deteriorating performance in fatigued humans, as well as specific studies on resident physician fatigue and preventable medical errors.
The question before this conference was what work redesign and cultural changes are needed to reform work hours as recommended by the Institute of Medicine’s evidence-based report. Extensive scientific data demonstrate that shifts exceeding 12–16 hours without sleep are unsafe. Several principles should be followed in efforts to reduce consecutive hours below this level and achieve safer work schedules. The recommendations are: Limit resident physician work hours to 12–16 hour maximum shifts. A minimum of 10 hours off duty should be scheduled between shifts. Resident physician input into work redesign should be actively solicited. Schedules should be designed that adhere to principles of sleep and circadian science; this includes careful consideration of the effects of multiple consecutive night shifts and provision of adequate time off after night work, as specified in the IOM report. Resident physicians should not be scheduled up to the maximum permissible limits; emergencies frequently occur that require resident physicians to stay longer than their scheduled shifts, and this should be anticipated in scheduling resident physicians’ work shifts. Hospitals should anticipate the need for iterative improvement as new schedules are initiated; they should be prepared to learn from the initial phase-in and change the plan as needed. As resident physician work hours are redesigned, attending physicians should also be considered; a potential consequence of resident physician work hour reduction and increased supervisory requirements may be an increase in work for attending physicians, which should be carefully monitored, with adjustments to attending physician work schedules made as needed to prevent unsafe work hours or working conditions for this group. “Home call” should be brought under the overall limits of working hours; workload and hours should be monitored in each residency program to ensure that resident physicians and fellows on home call are getting sufficient sleep. Medicare funding for graduate medical education in each hospital should be linked with adherence to the Institute of Medicine limits on resident physician work hours.
Moonlighting by resident physicians
The Institute of Medicine report recommended including external as well as internal moonlighting in working hour limits. The recommendation is: All moonlighting work hours should be included in the ACGME working hour limits and actively monitored. Hospitals should formalize a moonlighting policy and establish systems for actively monitoring resident physician moonlighting
Safety of resident physicians
The “Sleep, supervision and safety” report also addresses fatigue-related harm to resident physicians themselves. The report focuses on two main sources of physical injury to resident physicians impaired by fatigue, i.e., needle-stick exposure to blood-borne pathogens and motor vehicle crashes. Providing safe transportation home for resident physicians is a logistical and financial challenge for hospitals. Educating physicians at all levels on the dangers of fatigue is clearly required to change driving behavior so that safe hospital-funded transport home is used effectively. Fatigue-related injury prevention (including not driving while drowsy) should be taught in medical school and during residency, and reinforced with attending physicians; hospitals and residency programs must be informed that resident physicians’ ability to judge their own level of impairment is itself impaired when they are sleep deprived, and hence leaving decisions about the capacity to drive to impaired resident physicians is not recommended. Hospitals should provide transportation to all resident physicians who report feeling too tired to drive safely; in addition, although consecutive work should not exceed 16 hours, hospitals should provide transportation for all resident physicians who, because of unforeseen reasons or emergencies, work for longer than 24 consecutive hours; transportation under these circumstances should be provided automatically to house staff and should not rely on self-identification or request.
Training in effective handovers and quality improvement
Handover practice for resident physicians, attendings, and other health care providers has long been identified as a weak link in patient safety throughout health care settings. Policies to improve handovers of care must be tailored to fit the appropriate clinical scenario, recognizing that information overload can also be a problem. At the heart of improving handovers is the organizational effort to improve quality, an effort in which resident physicians have typically been insufficiently engaged. The recommendations are: Hospitals should train attending and resident physicians in effective handovers of care. Hospitals should create uniform processes for handovers that are tailored to each clinical setting; all handovers should be done verbally and face-to-face, but should also utilize written tools. When possible, hospitals should integrate handover tools into their electronic medical record (EMR) systems; these systems should be standardized to the extent possible across residency programs in a hospital, but may be tailored to the needs of specific programs and services; the federal government should help subsidize adoption of electronic medical records by hospitals to improve signout. When feasible, handovers should be a team effort including nurses, patients, and families. Hospitals should include residents in their quality improvement and patient safety efforts; the ACGME should specify in its core competency requirements that resident physicians work on quality improvement projects; likewise, the Joint Commission should require that resident physicians be included in quality improvement and patient safety programs at teaching hospitals; hospital administrators and residency program directors should create opportunities for resident physicians to become involved in ongoing quality improvement projects and root cause analysis teams; feedback on successful quality improvement interventions should be shared with resident physicians and broadly disseminated. Quality improvement and patient safety concepts should be integral to the medical school curriculum; medical school deans should elevate the topics of patient safety, quality improvement, and teamwork; these concepts should be integrated throughout the medical school curriculum and reinforced throughout residency; mastery of these concepts by medical students should be tested on the United States Medical Licensing Examination (USMLE) steps. The federal government should support involvement of resident physicians in quality improvement efforts; initiatives to improve quality by including resident physicians in quality improvement projects should be financially supported by the Department of Health and Human Services.
Monitoring and oversight of the ACGME
While the ACGME is a key stakeholder in residency training, external voices are essential to ensure that public interests are heard in the development and monitoring of standards. Consequently, the Institute of Medicine report recommended external oversight and monitoring through the Joint Commission and the Centers for Medicare and Medicaid Services (CMS). The recommendations are: Make comprehensive fatigue management a Joint Commission National Patient Safety Goal; fatigue is a safety concern not only for resident physicians, but also for nurses, attending physicians, and other health care workers, and the Joint Commission should seek to ensure that all health care workers, not just resident physicians, are working as safely as possible. The federal government, including the Centers for Medicare and Medicaid Services and the Agency for Healthcare Research and Quality, should encourage development of comprehensive fatigue management programs which all health systems would eventually be required to implement. Make ACGME compliance with working hours a “condition of participation” for reimbursement of direct and indirect graduate medical education costs; financial incentives will greatly increase the adoption of and compliance with ACGME standards.
Future financial support for implementation
The Institute of Medicine’s report estimates that $1.7 billion (in 2008 dollars) would be needed to implement its recommendations. Twenty-five percent of that amount ($376 million) will be required just to bring hospitals into compliance with the existing 2003 ACGME rules. Downstream savings to the health care system could potentially result from safer care, but these benefits typically do not accrue to hospitals and residency programs, which have historically been asked to bear the burden of residency reform costs. The recommendations are: The Institute of Medicine should convene a panel of stakeholders, including private and public funders of health care and graduate medical education, to lay out the concrete steps necessary to identify and allocate the resources needed to implement the recommendations contained in the IOM “Resident duty hours: Enhancing sleep, supervision and safety” report; conference participants suggested several approaches to engage public and private support for this initiative. Efforts to find additional funding to implement the Institute of Medicine recommendations should focus more broadly on patient safety and health care delivery reform; policy efforts focused narrowly upon resident physician work hours are less likely to succeed than broad patient safety initiatives that include residency redesign as a key component. Hospitals should view the Institute of Medicine recommendations as an opportunity to begin resident physician work redesign projects as the core of a business model that embraces safety and ultimately saves resources. Both the Secretary of Health and Human Services and the Director of the Centers for Medicare and Medicaid Services should take the Institute of Medicine recommendations into consideration when promulgating rules for innovation grants. The National Health Care Workforce Commission should consider the Institute of Medicine recommendations when analyzing the nation’s physician workforce needs.
Recommendations for future research
Conference participants concurred that convening the stakeholders and agreeing on a research agenda was key. Some observed that certain sectors within the medical education community have been reluctant to act on the data. Several logical funders for future research were identified. Above all, the Centers for Medicare and Medicaid Services is the one stakeholder that funds graduate medical education upstream and will reap savings downstream if preventable medical errors are reduced as a result of reforming resident physician work hours.
doi:10.2147/NSS.S19649
PMCID: PMC3630963  PMID: 23616719
resident; hospital; working hours; safety
25.  Color edges extraction using statistical features and automatic threshold technique: application to the breast cancer cells 
Background
Color image segmentation has so far been applied in many areas; hence, many different techniques have recently been developed and proposed. In the medical imaging area, image segmentation may help doctors follow up on a patient’s disease from processed breast cancer images. The main objective of this work is to rebuild and enhance each cell from the three component images provided by an input image. Indeed, starting from an initial segmentation obtained using statistical features and histogram threshold techniques, the resulting segmentation can accurately represent and enhance incomplete and touching (pasted) cells. This provides real help to doctors, as the cells become clear and easy to count.
Methods
A novel method for color edge extraction based on statistical features and automatic thresholding is presented. The traditional edge detector, based on the first- and second-order neighborhood describing the relationship between the current pixel and its neighbors, is extended to the statistical domain. Hence, color edges in an image are obtained by combining statistical features with an automatic threshold technique. Finally, a combination rule is applied to the color edges obtained for each primitive color in order to integrate the edge results over the three color components.
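As a rough illustration of this kind of approach (not the authors' exact method), the Python sketch below computes a simple per-channel statistical feature (local variance), thresholds it automatically with Otsu's method, and combines the three color components with a logical OR; the window size and the OR combination rule are assumptions made here for demonstration.

```python
# Illustrative per-channel statistical-feature edge map with automatic
# thresholding, combined over the three color components.
import numpy as np
from scipy import ndimage
from skimage import io, filters

def color_edges(path, window=3):
    rgb = io.imread(path).astype(float)
    edge_mask = np.zeros(rgb.shape[:2], dtype=bool)
    for c in range(3):
        chan = rgb[..., c]
        mean = ndimage.uniform_filter(chan, window)
        sq_mean = ndimage.uniform_filter(chan ** 2, window)
        local_var = np.clip(sq_mean - mean ** 2, 0, None)  # second-order local statistic
        thr = filters.threshold_otsu(local_var)             # automatic threshold
        edge_mask |= local_var > thr                         # combine over color components
    return edge_mask
```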
Results
Breast cancer cell images were used to evaluate the performance of the proposed method both quantitatively and qualitatively. Hence, a visual and a numerical assessment based on the probability of correct classification (Pc), the probability of false classification (Pf), and the classification accuracy (Sens(%)) are presented and compared with existing techniques. The proposed method shows its superiority in detecting points that truly belong to the cells, and it also makes it easy to count the number of processed cells.
Conclusions
Computer simulations highlight that the proposed method substantially enhances the segmented image, with smaller error rates than other existing algorithms under the same settings (patterns and parameters). Moreover, it provides high classification accuracy, reaching 97.94%. Additionally, the segmentation method may be extended to other medical imaging modalities with similar properties.
doi:10.1186/1475-925X-13-4
PMCID: PMC3926314  PMID: 24456647
Threshold; Statistical features; First order statistics; Second order statistics; Segmentation; Color edge detection
