
IEEE Trans Biomed Eng. Author manuscript; available in PMC 2013 July 30.

Published online 2012 September 10. doi: 10.1109/TBME.2012.2218107

PMCID: PMC3728286

NIHMSID: NIHMS480943

Hang Chang, Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 USA (Email: hchang@lbl.gov)


Histological tissue sections provide rich information and continue to be the gold standard for the assessment of tissue neoplasm. However, a significant amount of technical and biological variation impedes the analysis of large histological datasets. In this paper, we propose a novel approach for nuclear segmentation in tumor histology sections, which addresses the problem of technical and biological variation by incorporating information from both manually annotated reference patches and the original image. The solution is formulated within a multireference level set framework. This approach has been validated on manually annotated samples and then applied to the TCGA glioblastoma multiforme (GBM) dataset consisting of 440 whole mount tissue sections scanned with either a 20× or 40× objective, in which each tissue section varies in size from 40k × 40k pixels to 100k × 100k pixels. Experimental results show a superior performance of the proposed method in comparison with current state-of-the-art techniques.

Tumor histology provides a detailed insight into cellular morphology, organization, and heterogeneity. For example, tumor histological sections can be used to identify mitotic cells, cellular aneuploidy, and autoimmune responses. More importantly, if tumor morphology and architecture can be quantified on large histological datasets, it will pave the way for constructing histological databases that are prognostic, the same way that genome analysis techniques have identified molecular subtypes and predictive markers. Genome-wide analysis techniques (e.g., microarray analysis) have the advantage of standardized tools for data analysis and pathway enrichment, which enable hypothesis generation for the underlying mechanisms. On the other hand, histological signatures are hard to compute because of the biological and technical variations in stained histological sections; however, they offer insights into tissue composition as well as heterogeneity (e.g., mixed populations) and rare events.

Histological sections are often stained with hematoxylin and eosin (H&E), which label DNA and protein contents, respectively. Traditional histological analysis is performed by a trained pathologist through the characterization of phenotypic content, such as various cell types, cellular organization, cell state and health, and cellular secretion. However, such manual analysis may incur inter- and intraobserver variations [1]. On the other hand, the value of quantitative histological image analysis originates from its capability of capturing detailed morphometric features on a cell-by-cell basis together with the organization of cells. Such a rich description can be linked with genomic information and survival distributions as an improved basis for diagnosis and therapy. Additionally, in the presence of large datasets, quantitative histological signatures can be used to identify intrinsic subtypes of a specific tumor type, which is supplementary to histological tumor grading.

One of the main technical barriers to processing a large collection of histological data is that the color composition is subject to technical (e.g., fixation, staining) and biological (e.g., cell type, cell state) variations across histological tissue sections, especially when these tissue sections are processed and scanned at different laboratories. Here, a histological tissue section refers to an image of a thin slice of tissue applied to a microscope slide and scanned with a light microscope. From an image analysis perspective, color variations can occur both within and across tissue sections. For example, within a tissue section, some nuclei may have low chromatin content (e.g., light blue signals) while others have higher signals (e.g., dark blue); nuclear intensity in one tissue section may be very close to the background intensity (e.g., cytoplasmic, macromolecular components) in another tissue section.

Our approach evolved from insights and experiments indicating that simple color decomposition and thresholding techniques miss or overestimate some of the nuclei in the image, i.e., nuclei with low chromatin content are excluded. The problem is further complicated by the diversity in nuclear size and shape (i.e., the classic scale problem). It became clear that the incorporation of prior knowledge (e.g., manual annotation and validation by the pathologist) would be needed not only for validation, but also for constructing a model that captures the wide variation in nuclear staining, both within and across tissue sections. Accordingly, our proposed approach integrates prior knowledge, which is characterized by Gaussian mixture models (GMM), and the nuclear staining information of the original image, which is extracted by color decomposition, within a level set framework. The net result is a binarized image of blobs (a single nucleus or a clump of nuclei), which are either validated or partitioned further through geometric reasoning.

The rest of this paper is organized as follows. Section II reviews previous research in this area, with a focus on 1) how quantitative representation of *H*&*E* sections can be leveraged for translational medicine, and 2) how nuclear segmentation is performed to address clinical issues; Section III describes the details of our approach; Section IV provides experimental and validation results; and Section V concludes this paper.

The main barriers to correct nuclear segmentation are technical variations (e.g., fixation) and biological heterogeneity (e.g., cell type). Existing techniques have focused on adaptive thresholding followed by morphological operators [2], [3], fuzzy clustering [4], level set methods using gradient information [5], [6], color separation followed by optimum thresholding and learning [7], and hybrid color and texture analysis followed by learning and unsupervised clustering [8]. Some applications combine the aforementioned techniques. For example, in [9], iterative radial voting [10] was used to estimate seeds for the location of the nuclei and, subsequently, to model the interaction between neighboring nuclei with a multiphase level set [11], [12]; in [13], an initial segmentation of the foreground with graph cut was followed by multiscale seed detection, and the combined results were further refined with a second iteration of graph cut. It is also common practice that, through color decomposition, nuclear regions can be segmented using the same techniques that have been developed for fluorescence microscopy [14].

Yet it remains a challenging problem to effectively address the analytical requirements of tumor histological characterization. Thresholding and clustering assume constant chromatin content for the nuclei in the image; in practice, however, there is a wide variation in chromatin content. In addition, there is the issue of overlapping and clumped nuclei, which sometimes, due to the tissue thickness, cannot be segmented. The method proposed in [9] aims to delineate overlapping nuclei through iterative radial voting [10], but seed detection can fail in the presence of wide variation in nuclear size, which leads to fragmentation. The method described in [15] is based on a voting system using multiple classifiers built from different reference images; we will refer to this method as the multi-classifier voting system (MCV) in the rest of this paper. Compared to the aforementioned approaches, MCV provides a better way to handle the variation among different batches. However, due to the lack of a smoothness constraint, the classification results can be noisy and sometimes erroneous, as demonstrated in Fig. 4.

In summary, our goal is to process whole mount tissue sections by addressing the aforementioned issues, construct a large database of morphometric features, and enable subsequent morphometric subtyping and genomic association.

Our strategy leverages several key insights for segmenting nuclear regions: 1) nuclei often respond well to a Laplacian of Gaussian (LoG) filter, 2) nuclear staining information can be captured through color decomposition, 3) color normalization reduces variations in image statistics, and 4) integration of the prior knowledge and nuclear staining information enhances the final segmentation. These concepts are then coupled with a dictionary of manual annotation of nuclei for constructing a model from the TCGA tumor bank. The model constructs representations of the foreground and background of the hand segmented images based on the distribution of 1) the multiscale LoG response in the decomposed nuclear channel and 2) color information in the RGB space. This representation is then condensed and expressed in terms of a GMM, as shown at the top of Fig. 1. Having constructed the model, we will then utilize a level set framework to segment foreground and background content. Finally, delineated blobs are subjected to convexity constraints for partitioning clumps of nuclei. In the rest of this section, we discuss model construction, color normalization, color decomposition, and then proceed with the details of the proposed solution.

Steps in nuclear segmentation. During model construction (offline), for each individual reference image, GMMs are constructed to represent nuclei and background in both RGB space and LoG response space. During classification (online), input test image …

Our target dataset consists of 440 individual tissue sections that have been scanned with either a 20× or 40× objective. From these images, which are on the order of 40k × 40k pixels (or larger), a representative set of 20 reference images of 1k × 1k pixels was selected for model construction. These references are utilized in offline and online processing. During offline processing (i.e., training), each reference image is hand segmented and processed with LoG filters at multiple scales. Statistics of the foreground (nuclei) and background, in both RGB space and LoG response space, are collected. Subsequently, the foreground and background models of each reference are represented as mixtures of Gaussians. During online processing, a test image is normalized against every reference through a color map normalization strategy [15] for the purpose of low-level feature extraction.
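As an illustration of the offline step, the sketch below fits one foreground and one background mixture model to hypothetical RGB samples and converts their densities into the posterior-style foreground probability used later during classification. The sample colors, the component count, and the use of scikit-learn are assumptions for this toy example, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical per-pixel RGB samples from a hand-segmented reference:
# nuclei (hematoxylin) darker and bluer, background lighter.
fg_rgb = rng.normal(loc=[90, 70, 150], scale=15, size=(500, 3))
bg_rgb = rng.normal(loc=[220, 200, 230], scale=15, size=(500, 3))

# One mixture per class; the paper fixes 20 components, but a smaller
# number suffices for this toy sample.
gmm_fg = GaussianMixture(n_components=3, random_state=0).fit(fg_rgb)
gmm_bg = GaussianMixture(n_components=3, random_state=0).fit(bg_rgb)

def p_foreground(x):
    """p_F = GMM_F / (GMM_F + GMM_B), mirroring the normalization used in
    the classification step."""
    lf = np.exp(gmm_fg.score_samples(x.reshape(1, -1)))[0]
    lb = np.exp(gmm_bg.score_samples(x.reshape(1, -1)))[0]
    return lf / (lf + lb)

p_nuc = p_foreground(np.array([90.0, 70.0, 150.0]))    # nucleus-like color
p_bgd = p_foreground(np.array([220.0, 200.0, 230.0]))  # background-like color
```

Because the two classes are well separated in this toy sample, `p_nuc` is close to 1 and `p_bgd` close to 0.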

The purpose of color normalization is to reduce the variation between an input test image and a reference image so that the prior models constructed from the reference image can be utilized. We evaluated a number of color normalization methods and chose the color map normalization described in [15] for its effectiveness in handling histological data. Let

- input image *I* and reference image *Q* have *K*_{I} and *K*_{Q} unique color triplets in terms of (*R, G, B*), respectively;
- ${\mathbb{R}}_{C}^{X}$ be a monotonic function, which maps the color channel intensity *C* ∈ {*R, G, B*} of image *X* ∈ {*I, Q*} to a rank in the range [0, *K*_{X});
- (*r*_{p}, *g*_{p}, *b*_{p}) be the color of pixel *p* in image *I*, and $({\mathbb{R}}_{R}^{I}({r}_{p}),{\mathbb{R}}_{G}^{I}({g}_{p}),{\mathbb{R}}_{B}^{I}({b}_{p}))$ be the ranks of its color channel intensities; and
- the color channel intensity values *r*_{ref}, *g*_{ref}, and *b*_{ref}, from image *Q*, have ranks:$${\mathbb{R}}_{R}^{Q}({r}_{\text{ref}})=\lfloor \frac{{\mathbb{R}}_{R}^{I}({r}_{p})}{{K}_{I}}\times {K}_{Q}+\frac{1}{2}\rfloor \phantom{\rule{0ex}{0ex}}{\mathbb{R}}_{G}^{Q}({g}_{\text{ref}})=\lfloor \frac{{\mathbb{R}}_{G}^{I}({g}_{p})}{{K}_{I}}\times {K}_{Q}+\frac{1}{2}\rfloor \phantom{\rule{0ex}{0ex}}{\mathbb{R}}_{B}^{Q}({b}_{\text{ref}})=\lfloor \frac{{\mathbb{R}}_{B}^{I}({b}_{p})}{{K}_{I}}\times {K}_{Q}+\frac{1}{2}\rfloor .$$

As a result of color map normalization, the color of pixel *p*, (*r*_{p}, *g*_{p}, *b*_{p}), will be normalized as (*r*_{ref}, *g*_{ref}, *b*_{ref}).
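The rank-based mapping can be sketched per channel as follows. Note that [15] is defined over unique (*R, G, B*) triplets; ranking each channel's unique intensities independently, as done here, is a simplification for illustration only.

```python
import numpy as np

def colormap_normalize_channel(src, ref):
    """Per-channel sketch of color map normalization: each source intensity
    is mapped, via its rank among the K_I unique source values, to the
    reference intensity whose rank is floor(rank / K_I * K_Q + 1/2)."""
    src_vals = np.unique(src)               # K_I unique intensities, sorted
    ref_vals = np.unique(ref)               # K_Q unique intensities, sorted
    k_i, k_q = len(src_vals), len(ref_vals)
    ranks = np.searchsorted(src_vals, src)  # rank in [0, K_I) for each pixel
    # floor(rank / K_I * K_Q + 1/2), clipped to a valid reference rank
    ref_ranks = np.minimum((ranks / k_i * k_q + 0.5).astype(int), k_q - 1)
    return ref_vals[ref_ranks]

src = np.array([0, 10, 10, 200, 255])    # toy source channel
ref = np.array([50, 80, 120, 160, 220])  # toy reference channel
out = colormap_normalize_channel(src, ref)
```

Here the source has *K*_{I} = 4 unique values and the reference *K*_{Q} = 5, so the source ranks 0, 1, 2, 3 map to reference ranks 0, 1, 3, 4.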

In order to provide the nuclear staining information, and reduce complexities for integrating LoG responses, the RGB space is decomposed through the method described in [16]. In our case, we simply used the decomposition matrix established in [16] for *H*&*E* staining. Examples are shown in Fig. 4(b). Please refer to [16] for more details.

Our approach integrates both color information and scale information, in which color information is extracted from the normalized *RGB* space, and scale information is extracted by multiscale *LoG* filters on the decomposed nuclear channel. The rationales are that 1) in some cases, color information is insufficient to differentiate nuclear regions from background; 2) the scales of the background structures and nuclear regions are typically different; and 3) the nuclear region responds well to blob detectors, such as the *LoG* filter [13]. As a result, with respect to each reference image, each pixel in the test image is represented by the following two features: 1) {*r, g, b*} in the color space; and 2) {*l*_{σ1}, *l*_{σ2}, …, *l*_{σn}} in the LoG response space, where *l*_{σi} is the *LoG* response at scale σ_{i}.
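A minimal sketch of the per-pixel LoG feature vector, using SciPy's `gaussian_laplace` on a synthetic blob; the image is illustrative, and the σ values mirror those reported in Section IV.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic decomposed nuclear channel: a dark disk (nucleus) of radius ~8
# on a bright background.
yy, xx = np.mgrid[0:64, 0:64]
img = np.ones((64, 64))
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2] = 0.2

# Multiscale LoG responses at sigma in {2.0, 4.0, 6.0}.
scales = [2.0, 4.0, 6.0]
log_stack = np.stack([gaussian_laplace(img, sigma=s) for s in scales])

# Per-pixel feature vector {l_sigma1, l_sigma2, l_sigma3} at the disk center;
# a dark blob yields a positive response at scales matching its radius.
center_feature = log_stack[:, 32, 32]
```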

Let’s assume *N* reference images: *R*_{i}, 1 ≤ *i* ≤ *N*.

An input test image *I* is first normalized with respect to every reference image *R*_{i}, with the normalized image denoted as *NI*_{i}. Let

- *f*^{k}(*p*) be the *k*th feature of pixel *p*;
- ${\mathbf{p}}_{F}^{k}$ and ${\mathbf{p}}_{B}^{k}$ be the probabilities of *f*^{k} being produced by nuclei and background, respectively:$${\mathbf{p}}_{F}^{k}(p)=\frac{\mathrm{G}\mathrm{M}{\mathrm{M}}_{F}^{k}(p)}{\mathrm{G}\mathrm{M}{\mathrm{M}}_{F}^{k}(p)+\mathrm{G}\mathrm{M}{\mathrm{M}}_{B}^{k}(p)},\text{and}\phantom{\rule{0ex}{0ex}}{\mathbf{p}}_{B}^{k}(p)=\frac{\mathrm{G}\mathrm{M}{\mathrm{M}}_{B}^{k}(p)}{\mathrm{G}\mathrm{M}{\mathrm{M}}_{F}^{k}(p)+\mathrm{G}\mathrm{M}{\mathrm{M}}_{B}^{k}(p)};$$
- λ^{k} be the weight for *R*_{k}: λ^{k} = < hist(*R*_{k}), hist(*NI*_{k}) > /(‖hist(*R*_{k})‖ ‖hist(*NI*_{k})‖), where hist(·) is the histogram function, *R*_{k} is the *k*th reference image, and *NI*_{k} is the input image *I* normalized with respect to *R*_{k};
- *DI* be the decomposed nuclear channel; and
- *C* be the curve.
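Reading the denominator of λ^{k} as the product of the histogram norms (i.e., a cosine similarity, an assumption consistent with λ acting as a similarity weight), the computation can be sketched as:

```python
import numpy as np

def reference_weight(ref_img, norm_img, bins=32):
    """Sketch of lambda^k: cosine similarity between intensity histograms.
    Treating the denominator as the product of histogram norms is an
    assumption made for this illustration."""
    h_r = np.histogram(ref_img, bins=bins, range=(0, 256))[0].astype(float)
    h_n = np.histogram(norm_img, bins=bins, range=(0, 256))[0].astype(float)
    return float(np.dot(h_r, h_n) / (np.linalg.norm(h_r) * np.linalg.norm(h_n)))

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(64, 64))           # stand-in reference image
w_same = reference_weight(a, a)                   # identical images -> 1.0
w_diff = reference_weight(a, np.zeros((64, 64)))  # dissimilar -> smaller weight
```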

The corresponding energy function to be minimized is then defined as follows:

$$E=\mu \cdot \text{Length}(C)+v\cdot \text{Area}(\text{inside}(C))+{\lambda}_{F}{\int}_{F}{|DI(p)-{C}_{F}(p)|}^{2}dp+{\lambda}_{B}{\int}_{B}{|DI(p)-{C}_{B}(p)|}^{2}dp-\sum _{k=1}^{N}{\lambda}^{k}{\int}_{F}\text{log}{\mathbf{p}}_{F}^{k}({f}^{k}(p))dp-\sum _{k=1}^{N}{\lambda}^{k}{\int}_{B}\text{log}{\mathbf{p}}_{B}^{k}({f}^{k}(p))dp-\alpha \sum _{k=N+1}^{2N}{\lambda}^{k-N}{\int}_{F}\text{log}{\mathbf{p}}_{F}^{k}({f}^{k}(p))dp-\alpha \sum _{k=N+1}^{2N}{\lambda}^{k-N}{\int}_{B}\text{log}{\mathbf{p}}_{B}^{k}({f}^{k}(p))dp$$

(1)

where µ, *v*, λ_{F}, λ_{B}, and α are fixed coefficients, and *C*_{F} and *C*_{B} are the average intensities of the decomposed nuclear channel *DI* inside and outside the curve *C*, respectively [17].

The separation of the nuclei from the background is achieved by minimizing the energy function defined earlier via the evolution of the level set; subsequently, the regularized Heaviside function *H* [17] is introduced as follows:

$$H(z)=\frac{1}{2}(1+\frac{2}{\pi}\text{arctan}(\frac{z}{\epsilon}))$$

(2)

where ε is the regularization parameter of the Heaviside function, and the Delta function is defined as follows:

$$\delta (z)=\frac{d}{dz}H(z).$$

(3)
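Equations (2) and (3) translate directly into code; the closed form of the Delta function, ε/(π(ε² + *z*²)), follows from differentiating (2). The ε value below is illustrative.

```python
import numpy as np

EPS = 1.0  # regularization parameter epsilon (illustrative value)

def heaviside(z, eps=EPS):
    # Regularized Heaviside of (2): H(z) = 1/2 * (1 + (2/pi) * arctan(z/eps))
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(np.asarray(z, dtype=float) / eps))

def delta(z, eps=EPS):
    # Its derivative, the regularized Delta of (3): eps / (pi * (eps^2 + z^2))
    z = np.asarray(z, dtype=float)
    return eps / (np.pi * (eps ** 2 + z ** 2))

h0 = float(heaviside(0.0))  # 0.5 exactly on the zero level set
```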

The objective energy function can then be rewritten as

$$E=\mu {\int}_{\mathrm{\Omega}}|\nabla H(\varphi (p))|dp+v{\int}_{\mathrm{\Omega}}H(\varphi (p))dp+{\lambda}_{F}{\int}_{\mathrm{\Omega}}|DI(p)-{C}_{F}(p){|}^{2}\cdot H(\varphi (p))dp+{\lambda}_{B}{\int}_{\mathrm{\Omega}}|DI(p)-{C}_{B}(p){|}^{2}\cdot (1-H(\varphi (p)))dp-\sum _{k=1}^{N}{\lambda}^{k}{\int}_{\mathrm{\Omega}}\text{log}{\mathbf{p}}_{F}^{k}({f}^{k}(p))\cdot H(\varphi (p))dp-\sum _{k=1}^{N}{\lambda}^{k}{\int}_{\mathrm{\Omega}}\text{log}{\mathbf{p}}_{B}^{k}({f}^{k}(p))\cdot (1-H(\varphi (p)))dp-\alpha \sum _{k=N+1}^{2N}{\lambda}^{k-N}{\int}_{\mathrm{\Omega}}\text{log}{\mathbf{p}}_{F}^{k}({f}^{k}(p))\cdot H(\varphi (p))dp-\alpha \sum _{k=N+1}^{2N}{\lambda}^{k-N}{\int}_{\mathrm{\Omega}}\text{log}{\mathbf{p}}_{B}^{k}({f}^{k}(p))\cdot (1-H(\varphi (p)))dp.$$

(4)

The minimization of the energy function can be achieved by the gradient descent method, and the corresponding Euler–Lagrange equation for ϕ is

$$\frac{\partial \varphi}{\partial t}=\delta (\varphi )(\mu \cdot \text{div}\frac{\nabla \varphi}{|\nabla \varphi |}-v)+\delta (\varphi )({\lambda}_{B}|DI-{C}_{B}{|}^{2}-{\lambda}_{F}|DI-{C}_{F}{|}^{2})+\delta (\varphi )(\sum _{k=1}^{N}\text{log}\frac{{\mathbf{p}}_{F}^{k}{({f}^{k})}^{{\lambda}^{k}}}{{\mathbf{p}}_{B}^{k}{({f}^{k})}^{{\lambda}^{k}}}+\sum _{k=N+1}^{2N}\text{log}\frac{{\mathbf{p}}_{F}^{k}{({f}^{k})}^{\alpha {\lambda}^{k-N}}}{{\mathbf{p}}_{B}^{k}{({f}^{k})}^{\alpha {\lambda}^{k-N}}}).$$

(5)

Since the multireference level set is a region-based active contour model, it is not sensitive to initialization. In our case, a circle with constant radius (*r* = 100) at the center of each test image was used as the initial zero level set, and it is evolved until the differences in the spatial location between two zero level sets from two consecutive iterations are below an empirical threshold. Based on our experience, the convergence is typically reached within 50 iterations.
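To make the evolution concrete, here is a deliberately stripped-down sketch that keeps only the Chan-Vese data term of (5) (the curvature term and the reference-prior log terms are omitted), run on a synthetic blob; all sizes, coefficients, and iteration counts are illustrative, not the paper's settings.

```python
import numpy as np

# Synthetic "decomposed nuclear channel": one bright blob on a dark
# background, with mild noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
img = np.zeros((64, 64))
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2] = 1.0
img += rng.normal(scale=0.05, size=img.shape)

# Initial zero level set: a circle of constant radius at the image center
# (the paper uses r = 100 on 1k x 1k tiles; r = 8 fits this toy image).
phi = 8.0 - np.sqrt((yy - 32.0) ** 2 + (xx - 32.0) ** 2)

eps, dt, lam = 1.0, 1.0, 1.0
for _ in range(200):
    inside = phi > 0
    c_f = img[inside].mean()   # C_F: mean of DI inside the curve
    c_b = img[~inside].mean()  # C_B: mean of DI outside the curve
    delta_phi = eps / (np.pi * (eps ** 2 + phi ** 2))  # regularized delta
    # Chan-Vese data force from (5); prior and curvature terms omitted here.
    force = lam * ((img - c_b) ** 2 - (img - c_f) ** 2)
    phi = phi + dt * delta_phi * force

seg = phi > 0  # final foreground mask
```

Starting from a circle strictly inside the blob, the front expands outward until it covers the bright region, illustrating the insensitivity of region-based evolution to the exact initialization.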

After the level set evolution, we end up with a binarized image of blobs (a single nucleus or a clump of nuclei). The next step is to partition them into single nuclei, where necessary. A key observation is that nuclear shape is typically convex. Therefore, ambiguities associated with the delineation of overlapping nuclei can be resolved by detecting concavities and partitioning them through geometric reasoning. The process, shown in Fig. 2, consists of the following steps:

1. *Detection of Points of Maximum Curvature*: The contours of the nuclear mask were extracted, and the curvature along each contour was computed as $k=\frac{x'y''-y'x''}{(x'^{2}+y'^{2})^{3/2}}$, where *x* and *y* are the coordinates of the boundary points. The derivatives were computed by convolving the boundary with derivatives of a Gaussian. An example of detected points of maximum curvature is shown in Fig. 2.
2. *Delaunay Triangulation (DT) of Points of Maximum Curvature for Hypothesis Generation and Edge Removal*: DT was applied to all points of maximum curvature to hypothesize all possible groupings. The main advantages of DT are that the edges are nonintersecting and that the Euclidean minimum spanning tree is a subgraph of DT. This hypothesis space was further refined by removing edges based on certain rules, e.g., no background intersection.
3. *Geometric Reasoning*: Properties of both the hypothesis graph (e.g., degree of vertex) and the shape of the object (e.g., convexity) were integrated for edge inference.

Steps in delineating overlapping nuclei. *First step*: detection of points with maximum curvature along contours of the nuclear mask; *Second step*: hypothesis generation through triangulation; *Third step*: edge inference through geometric constraints.

Among all the different parameters of this process, only the scale for curvature detection and the threshold for curvature maximum points were adjusted based on the preferred morphology and scale of nuclei in the dataset at 20×, which were further verified on the manually annotated reference image set.

This method is similar to the one proposed in our previous work [18]; however, a significant performance improvement has been made through triangulation and subsequent geometric reasoning. Refer to [19] for details.

Our target dataset consists of 440 hematoxylin and eosin (H&E) stained glioblastoma multiforme (GBM) tumor sections from 152 patients, which were scanned with either a 20× or 40× objective. Since these samples were collected at different laboratories, fixation and staining protocols lack uniformity. In order to capture the technical variations, we manually selected and annotated 20 samples (at 20×) as reference images from the tumor repository. Each sample is a 1k × 1k block, and a subset is shown in Fig. 3. The segmentation was carried out on decomposed tissue blocks of size 1k × 1k pixels at 20×, and for each tissue block, only the top *M* = 10 reference images with the highest λ were used. Since λ is a similarity measurement between the normalized tissue block and each of the reference images, different tissue blocks may have different subsets of reference images during classification. The number of GMM components was fixed at 20, with the GMM parameters estimated via the EM algorithm, and the other parameter settings were: α = 0.1, λ_{F} = λ_{B} = 0.05, µ = 1.0, *v* = 1.0, and σ ∈ {2.0, 4.0, 6.0}, in which σ was determined based on the preferred dimensions of malignant and normal nuclear size at 20×, and all other parameters were selected to minimize the cross-validation error. Repeated hold-out cross-validation was applied on the reference images, and the classification performance was compared among our approach, random forest [20], EMaGAC [6], and MCV [15], as shown in Table I and Fig. 5, which indicates that:

- by incorporating both prior information and nuclear staining information, our system better characterizes the variation in the data, and is thus more effective and robust;
- by incorporating the multiscale LoG responses as features, we encode prior scale information into the system. As a result, ambiguous background structures are excluded, which leads to an increase in precision. However, there is also a decrease in recall when compared to MRL with only color features, because tiny fragments inside the nuclei, as indicated in Fig. 3, can also be eliminated.

Subset of reference image ROIs, with manual annotations overlaid as green contours, indicating a significant amount of technical variation. Nuclei with white hollow regions inside are pointed out by arrows.

We also provide an intuitive comparison among different approaches, as shown in Fig. 4, which demonstrates the effectiveness of our approach. During the comparison, we noticed that EMaGAC [6] was sensitive to initialization, and the quality of the initialization provided by [6] degraded considerably in the presence of the large variation in our dataset, which led to unfavorable classification results. More results of our approach on classification and segmentation can be found in Fig. 6.

Classification and segmentation results based on our approach. (a) Original images. (b) Nuclear/background classification results via our approach (MRL). (c) Nuclear partition results via geometric reasoning.

The overall computational complexity of our approach is *O*(*M*^{2} + *N* × *M*), in which *M* is the number of pixels in the input image, and *N* is the number of reference images. In our experiments, the final segmentation was achieved with an average computational time of around 60 s per tissue block with a size of 1k × 1k pixels at 20×. The segmentation performance of MRL is indicated in Table II, where correct nuclear segmentation is defined as follows. Let

- MaxSize(*a, b*) be the maximum nuclear size of nuclei *a* and *b*, and
- Overlap(*a, b*) be the amount of overlap between nuclei *a* and *b*.

Comparison of average segmentation performance between our current approach (MRL) and our previous approach [21], in which $\text{PRECISION}=\frac{\#\mathit{\text{correctly\_segmented\_nuclei}}}{\#\mathit{\text{segmented\_nuclei}}}$ and $\text{RECALL}=\frac{\#\mathit{\text{correctly\_segmented\_nuclei}}}{\#\mathit{\text{manually\_segmented\_nuclei}}}$.

Then, for any nucleus *n*_{G} from the ground truth, if there is one and only one nucleus *n*_{S} from the segmentation such that Overlap(*n*_{G}, *n*_{S})/MaxSize(*n*_{G}, *n*_{S}) exceeds a preset threshold, *n*_{S} is counted as a correct segmentation.
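Under the definitions above, the per-nucleus correctness test and the precision/recall of Table II can be sketched as follows; the threshold `tau` and the counts are assumed values for illustration, since the excerpt does not state the paper's cutoff.

```python
def is_correct_match(size_g, size_s, overlap, tau=0.8):
    """Overlap test built from MaxSize and Overlap; tau is an assumed
    threshold, not the paper's stated cutoff."""
    return overlap / max(size_g, size_s) >= tau

def segmentation_scores(n_correct, n_segmented, n_manual):
    """PRECISION and RECALL as defined in the caption of Table II."""
    return n_correct / n_segmented, n_correct / n_manual

prec, rec = segmentation_scores(80, 100, 120)  # hypothetical counts
```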

The reader may question the classification performance, since both the precision and recall are not very high. The reason is that the ground truth (annotation) for the reference images was created at the object (nucleus) level, which means the hollow regions (loss of chromatin content for various reasons) inside the nuclei are marked as part of the nuclear region rather than the background, as indicated in Fig. 3 (pointed out by arrows).

We have developed a multireference level set approach for delineating nuclei from *H*&*E* stained tumor sections, and applied it to the GBM cohort from the TCGA dataset. Our approach addresses the problem of technical and biological variations by utilizing both global information from the manually annotated reference images and local information from the decomposed nuclear channel of the target image. The experimental results and comparisons demonstrate the effectiveness of the proposed approach. Our future work will focus on improving nuclear segmentation by incorporating a nuclear shape model, and on evaluating the proposed method on other tumor types.

This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor the Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or the Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or the Regents of the University of California.

This work was supported by National Institutes of Health (NIH) under Grant U24 CA1437991 at Lawrence Berkeley National Laboratory under Contract DE-AC02-05CH11231.

Ju Han, Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 USA (Email: jhan@lbl.gov)

Paul T. Spellman, Center for Spatial Systems Biomedicine, Oregon Health & Science University, Portland, OR 97239 USA (Email: register.ch@gmail.com)

Bahram Parvin, Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 USA (Email: b_parvin@lbl.gov)

1. Dalton L, Pinder S, Elston C, Ellis I, Page D, Dupont W, Blamey R. Histological gradings of breast cancer: Linkage of patient outcome with level of pathologist agreements. Mod. Pathol. 2000;vol. 13(no. 7):730–735. [PubMed]

2. Phukpattaranont P, Boonyaphiphat P. Color based segmentation of nuclear stained breast cancer cell images. ECTI Trans. Electr. Eng., Commun. 2007;vol. 5(no. 2):158–164.

3. Ballaro B, Florena A, Franco V, Tegolo D, Tripodo C, Valenti C. An automated image analysis methodology for classifying megakaryocytes in chronic myeloproliferative disorders. Med. Image Anal. 2008;vol. 12:703–712. [PubMed]

4. Latson L, Sebek N, Powell K. Automated cell nuclear segmentation in color images of hematoxylin and eosin-stained breast biopsy. Anal. Quant. Cytol. Histol. 2003;vol. 26(no. 6):321–331. [PubMed]

5. Glotsos D, Spyridonos P, Cavouras D, Ravazoula P, Dadioti P, Nikiforidis G. Automated segmentation of routinely hematoxylin-eosin stained microscopic images by combining support vector machine, clustering, and active contour models. Anal. Quant. Cytol. Histol. 2004;vol. 26(no. 6):331–340. [PubMed]

6. Fatakdawala H, Xu J, Basavanhally A, Bhanot G, Ganesan S, Feldman F, Tomaszewski J, Madabhushi A. Expectation-maximization-driven geodesic active contours with overlap resolution (emagacor): Application to lymphocyte segmentation on breast cancer histopathology. IEEE Trans. Biomed. Eng. 2010 Jul;vol. 57(no. 7):1676–1690. [PubMed]

7. Chang H, Defilippis R, Tlsty T, Parvin B. Graphical methods for quantifying macromolecules through bright field imaging. Bioinf. 2009;vol. 25(no. 8):1070–1075. [PMC free article] [PubMed]

8. Datar M, Padfield D, Cline H. Color and texture based segmentation of molecular pathology images using HSOMS. Proc. Int. Symp. Biomed. Imag. 2008:292–295.

9. Bunyak F, Hafiane A, Palanippan K. Histopathology tissue segmentation by combining fuzzy clustering with multiphase vector level set. Adv. Exp. Med. Biol. 2011;vol. 696:413–424. [PubMed]

10. Parvin B, Yang Q, Han J, Chang H, Rydberg B, Barcellos-Hoff MH. Iterative voting for inference of structural saliency and characterization of subcellular events. IEEE Trans. Image Process. 2007 Mar;vol. 16(no. 3):615–623. [PubMed]

11. Nath S, Palaniappan K, Bunyak F. Cell segmentation using coupled level sets and graph-vertex coloring. Med. Image Comput. Comput.-Assist. Intervention (MICCAI). 2006:101–108. [PMC free article] [PubMed]

12. Chang H, Parvin B. Multiphase level set for automated delineation of membrane-bound macromolecules. Proc. Int. Symp. Biomed. Imag. 2010:165–168.

13. Al-Kofahi Y, Lassoued W, Lee W, Roysam B. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Trans. Biomed. Eng. 2010 Apr;vol. 57(no. 4):841–852. [PubMed]

14. Coelho L, Shariff A, Murphy R. Nuclear segmentation in microscope cell images: A hand-segmented dataset and comparison of algorithms. Proc. Int. Symp. Biomed. Imag. 2009:518–521. [PMC free article] [PubMed]

15. Kothari S, Phan JH, Moffitt RA, Stokes TH, Hassberger SE, Chaudry Q, Young AN, Wang MD. Automatic batch-invariant color segmentation of histological cancer images. Proc. Int. Symp. Biomed. Imag. 2011 Mar-Apr;:657–660.

16. Ruifrok A, Johnston D. Quantification of histochemical staining by color deconvolution. Anal. Quant. Cytol. Histol. 2001;vol. 23(no. 4):291–299. [PubMed]

17. Chan TF, Vese LA. Active contours without edges. IEEE Trans. Image Process. 2001 Feb;vol. 10(no. 2):266–277. [PubMed]

18. Raman S, Maxwell C, Barcellos-Hoff M, Parvin B. Geometric approach to segmentation and protein localization in cell cultured assays. J. Microsc. 2007;vol. 225:22–30. [PubMed]

19. Wen Q, Chang H, Parvin B. A Delaunay triangulation approach for segmenting clumps of nuclei. Proc. Int. Symp. Biomed. Imag. 2009:9–12.

20. Breiman L. Random forests. Mach. Learn. 2001;vol. 45(no. 1):5–32.

21. Chang H, Fontenay G, Han J, Cong G, Baehner F, Gray J, Spellman P, Parvin B. Morphometric analysis of TCGA glioblastoma multiforme. BMC Bioinf. 2011;vol. 12(no. 484)
