IEEE Trans Med Imaging. Author manuscript; available in PMC 2013 July 30.
Published in final edited form as:
PMCID: PMC3728287

Invariant Delineation of Nuclear Architecture in Glioblastoma Multiforme for Clinical and Molecular Association

Hang Chang, Member, IEEE, Ju Han, Member, IEEE, Alexander Borowsky, Leandro Loss, Member, IEEE, Joe W. Gray, Paul T. Spellman, and Bahram Parvin, Senior Member, IEEE


Automated analysis of whole mount tissue sections can provide insights into tumor subtypes and the underlying molecular basis of neoplasm. However, since tumor sections are collected from different laboratories, inherent technical and biological variations impede analysis for very large datasets such as The Cancer Genome Atlas (TCGA). Our objective is to characterize tumor histopathology, through the delineation of the nuclear regions, from hematoxylin and eosin (H&E) stained tissue sections. Such a representation can then be mined for intrinsic subtypes across a large dataset for prediction and molecular association. Furthermore, nuclear segmentation is formulated within a multi-reference graph framework with geodesic constraints, which enables computation of multidimensional representations, on a cell-by-cell basis, for functional enrichment and bioinformatics analysis. Here, we present a novel method, Multi-Reference Graph Cut (MRGC), for nuclear segmentation that overcomes technical variations associated with sample preparation by incorporating prior knowledge from manually annotated reference images and local image features. The proposed approach has been validated on manually annotated samples and then applied to a dataset of 377 Glioblastoma Multiforme (GBM) whole slide images from 146 patients. For the GBM cohort, multidimensional representation of the nuclear features and their organization have identified (i) statistically significant subtypes based on several morphometric indices, (ii) whether each subtype can be predictive or not, and (iii) that the molecular correlates of predictive subtypes are consistent with the literature.

Data and intermediaries for a number of tumor types (GBM, low grade glioma, and kidney renal clear cell carcinoma) are available at: for correlation with TCGA molecular data. The website also provides an interface for panning and zooming of whole mount tissue sections, with or without overlaid segmentation results, for quality control.

Index Terms: Nuclear Segmentation, Tumor Histopathology, Subtyping, Molecular Pathology

I. Introduction

Our main motivation for quantifying morphometric composition from histology sections is to gain insight into cellular morphology and organization, and to sample tumor heterogeneity in a large cohort. In tumor sections, robust representation and classification can identify mitotic cells, cellular aneuploidy, and autoimmune responses. More importantly, if tissue morphology and architecture can be quantified on a very large scale dataset, then it will pave the way for constructing databases that are prognostic, the same way that genome-wide array technologies have identified molecular subtypes and predictive markers. Genome-wide molecular characterization (e.g., transcriptome analysis) has the advantage of standardized techniques for data analysis and pathway enrichment, which can enable hypothesis generation for the underlying mechanisms. However, array-based analysis (i) can only provide an average measurement of the tissue biopsy, (ii) can be expensive, (iii) can hide occurrences of rare events, and (iv) lacks the clarity for translating a molecular signature into a phenotypic signature. Though nuclear morphology and context are difficult to compute as a result of intrinsic cellular characteristics and technical variations, histology sections can offer insights into tumor architecture and heterogeneity (e.g., mixed populations), in addition to rare events. Moreover, in the presence of a very large dataset, phenotypic signatures can be used to identify intrinsic subtypes within a specific tumor bank through unsupervised clustering. This facet is orthogonal to histological grading, where tumor sections are classified against known grades. The tissue sections are often visualized with hematoxylin and eosin stains, which label DNA content (e.g., nuclei) and protein contents, respectively, in various shades of color.
Even though there are inter- and intra- observer variations [1], a trained pathologist can characterize the rich content, such as the various cell types, cellular organization, cell state and health, and cellular secretion. If hematoxylin and eosin (H&E) stained tissue sections can be quantified in terms of cell type (e.g., epithelial, stromal), tumor subtype, and histopathological descriptors (e.g., necrotic rate, nuclear size and shape), then a richer description can be linked with genomic information for improved diagnosis and therapy. This is the main benefit of histological imaging since it can capture tumor architecture.

Ultimately, our goal is to mine a large cohort of tumor data in order to identify morphometric indices (e.g., nuclear size) that have prognostic and/or predictive subtypes. The Cancer Genome Atlas (TCGA) offers such a collection; however, the main issue with processing a large cohort is the inherent variation as a result of (i) the sample preparation protocols (e.g., fixation, staining) practiced by different laboratories, and (ii) the intrinsic tumor architecture (e.g., cell type, cell state). For example, with respect to heterogeneity in the tumor architecture, the nuclear color in the RGB space found in one tissue section may be similar to the cytoplasmic color in another tissue section. Simultaneously, the nuclear color intensity (e.g., chromatin content) can vary within a whole slide image. Therefore, image analysis should be tolerant and robust, with respect to variations in sample preparation and tumor architecture, within the entire slide image and across the tumor cohort.

Stained whole mount tissue sections are scanned at either 20X or 40X, which results in large images on the order of 40k-by-40k pixels or more. Each image is partitioned into blocks of 1k-by-1k pixels for processing, and cells at the borders of each block are excluded during the processing. The details of the computational pipeline can be found in our earlier paper [2]. Our approach evolved from our observation that simple color decomposition and thresholding misses or over-estimates some of the nuclei in the image, i.e., nuclei with low chromatin contents are excluded. Further complications ensue as a result of diversity in nuclear size and shape (e.g., the classic scale problem).

The general approach is shown in Figure 1, where the primary novelty is in the image-based modeling of inherent ambiguities that are associated with technical variations and biological heterogeneity. Image-based modeling captures prior knowledge from a diverse set of annotated images (e.g., a dictionary) needed to model the foreground and background representations. Each annotated image is independent of the other images and signifies one facet (e.g., color space, nuclear shape and size) of the diversity within the cohort. Moreover, each image is represented in the feature space as the Gaussian Mixture Model (GMM) of the Laplacian of Gaussian (LoG) and RGB responses. Collectively, the reference dictionary of annotated images provides the means for color normalization and for capturing global statistics for segmenting test images. The computed global statistics can then be coupled, through a graph cut formulation, with the intrinsic local image statistics and spatial continuity for binarization. Having segmented an input test image, each segmented foreground region is subsequently validated for nuclear shape. If needed, it is decomposed through geometric reasoning. A secondary novelty is in the details of the computational pipeline. For example, we introduce the concepts of (i) “color map normalization” for registering a test image against each of the images in the reference library, and (ii) the “blue ratio image” for mapping RGB images into the gray space; thus, LoG responses can be computed efficiently in one channel. All important free parameters are selected through cross-validation. Thus far, close to 1000 whole slide images have been processed, and the data has been made publicly available through our website. In addition, segmentation results, from the whole mount tissue sections, are available for quality control through a web-based zoomable interface.

Fig. 1
Workflow in nuclear segmentation for a cohort of whole mount tissue sections.

Essentially, nuclear segmentation provides the basis for morphometric representation on a cell-by-cell basis. As a result, tumor histology can be represented as a meaningful data matrix, where well-known bioinformatics and statistical tools can be readily applied for hypotheses generation. For example, a large cohort facilitates tumor subtyping based on computed morphometric features. Each subtype can then be (i) tested for its prognostic value, and (ii) utilized for identifying molecular basis of each subtype for hypothesis generation. In the case of GBM, prognostic and/or predictive subtypes have also been posted on our Web site.

The organization of this paper is as follows: Section II reviews previous research with a focus on quantitative representation of H&E sections for translational medicine. Sections III and IV describe the details of the image-based modeling for nuclear segmentation and the experimental validation, respectively. Section V examines one application of nuclear segmentation: morphometric subtyping and molecular association for hypothesis generation. Section VI concludes the paper.

II. Review of Previous Work

Several excellent reviews of the analysis of histology sections can be found in [3], [4]. From our perspective, four distinct works have defined the trends in tissue histology analysis: (i) one group of researchers proposed nuclear segmentation and organization for tumor grading and/or prediction of tumor recurrence [5], [6], [7], [8]; (ii) a second group focused on patch-level analysis (e.g., small regions) [9], [10], [11], using color and texture features, for tumor representation; (iii) a third group focused on block-level analysis to distinguish different states of tissue development using cell-graph representation [12], [13]; and (iv) a fourth group suggested detection and representation of the auto-immune response as a prognostic tool in cancer [14]. In contrast to previous research, our strategy is to process a large cohort of tumors, compute morphometric subtypes, and examine whether the computed subtypes are predictive of outcome. Since tumor histology is characterized in terms of nuclear and cellular features, a more detailed review of nuclear segmentation strategies follows.

The main barriers to nuclear segmentation are technical variations (e.g., fixation) and biological heterogeneity (e.g., cell type), both of which are visible in the TCGA dataset. Present techniques have focused on adaptive thresholding followed by morphological operators [15], [16]; fuzzy clustering [17], [18]; level set methods using gradient information [14], [19]; color separation followed by optimum thresholding and learning [20], [21]; hybrid color and texture analysis followed by learning and unsupervised clustering [6]; and representation of nuclear organization in tissues [22], [23], computed from either interactive segmentation or a combination of feature detectors. Some applications combine the above techniques; several examples are given below. In [24], iterative radial voting [25] was used to estimate seeds for partitioning perceptual boundaries between neighboring nuclei. Subsequently, the seeds were used to segment each nucleus through the application of multiphase level sets [26], [27]. In [28], the input image was initially binarized into foreground and background regions within a graph cut framework; seeds were then selected from the binarized image using a constrained multi-scale LoG filter, and the combined results were refined using a second iteration of the graph cut. Similarly, in [29], the input image was first normalized through histogram equalization, and then binarized based on color-texture features extracted from the most discriminant color space. This was followed by an iterative operation to split touching nuclei based on concave points and radial symmetry. Their experiment used 21 images, 5 of which were annotated; the nuclei in all images had similar sizes and high chromaticity. Recently, a spatially constrained expectation maximization algorithm [30] was demonstrated to be robust to “color nonstandardness” in histological sections, with color represented in the HSV space.
However, our analysis of the GBM cohort indicates that strict incorporation of color and spatial information is not sufficient, as demonstrated in Section IV-B (MRGC vs. MRGC-CF). A more closely related work, described in [31], was based on a voting system that uses multiple classifiers built from different reference images; we refer to this method as MCV for short in the rest of the paper. Compared to the previous approaches, MCV provides a better way to handle the variation among different batches. However, due to the lack of smoothness constraints and local statistical information, the classification results can be noisy and erroneous, as demonstrated in Figure 8. Some of these concepts were also utilized in our earlier paper [2], but the results posted on our website are for the current implementation outlined in this paper.

Fig. 8
A comparison between MCV and MRGC (as shown in (c) and (d), respectively) based on the same reference image, as shown in (a). Even though the test image and the reference image are slightly different in color space, compared with MCV, MRGC still produces ...

In summary, the main limitations of the above techniques are that they are often applied to small datasets that originate from a single laboratory, ignore technical variations that are manifested in both nuclear and background signals, and are insensitive to cellular heterogeneity (e.g., variation in chromatin contents). Our goal is to address these issues by processing whole mount tissue sections from multiple laboratories, constructing a large database of morphometric features, and enabling subtyping and genomic association.

III. Approach

Details of the proposed approach are shown in Figure 2, which leverages several key observations for segmenting nuclear regions: (i) global variations across a large cohort of tissue sections can be captured by a representative set of reference images, (ii) local variations within an image can be captured by local foreground (nuclei) and background samples detected by a LoG filter, and (iii) color normalization, against a reference image, reduces variations in image statistics and batch effects between a test and a reference image. These concepts are integrated within a graph cut framework to delineate nuclei or clumps of nuclei from the background. Having performed foreground and background segmentation, we then partition potential clumps of nuclei through geometric reasoning. In the rest of this section, we summarize (a) the representation of prior models from a diverse set of reference images, (b) the methodology for color normalization, (c) an effective approach to color transformation for dimensionality reduction, (d) the details of feature extraction from each test image, (e) the multi-reference graph cut formalism for nuclei/background separation, and (f) the partitioning of a clump of nuclei into individual nuclei.

Fig. 2
Steps in Nuclear Segmentation.

A. Construction and Representation of Priors

The purpose of this step is to capture the global variations of an entire cohort from a reference library. For bioinformatics analysis, the target dataset consists of 377 individual tissue sections, and a representative set of N (N = 20) reference images of 1k-by-1k pixels at 20X has been selected. Each reference image is selected to be an exemplar of tumor phenotypes based on staining and morphometric properties. Therefore, it is reasonable to suggest that each reference image has its own unique feature space, in terms of RGB and LoG responses, which leads to 2N feature spaces for all reference images:


$$\left\{\, \mathbb{F}_{RGB}^{i},\; \mathbb{F}_{LoG}^{N+i} \;\middle|\; 1 \le i \le N \,\right\}$$

where $\mathbb{F}_{RGB}^{i}$ and $\mathbb{F}_{LoG}^{N+i}$ are the RGB feature space and the LoG feature space for the ith reference image, 1 ≤ i ≤ N. Subsequently, each reference image is hand segmented and processed with a LoG filter (please refer to Section III-C for the details of our LoG integration), at a single scale, followed by the collection of foreground (nuclei) and background statistics in both the RGB space and the LoG response. Our experience indicates that even within a single reference image, there can be distinct modes in terms of RGB color and nuclear size. One way to capture these heterogeneities is to represent the foreground and background distributions with GMMs. Hence, the conditional probability for pixel p, with feature fk(p) in the kth (k ∈ [1, 2N]) feature space, belonging to Nuclei (l = 1) / Background (l = 0) can be expressed as a mixture with D component densities:


$$\bar{p}(C_p \mid l) = \sum_{j=1}^{D} \bar{p}(C_p \mid j)\, P(j)$$

where the mixing parameter P(j) corresponds to the weight of component j and $\sum_{j=1}^{D} P(j) = 1$. Each mixture component is a Gaussian with mean μ and covariance matrix Σ in the corresponding feature space (e.g., 3-by-3 and 1-by-1 matrices in the RGB and single-scale LoG spaces, respectively):


$$\bar{p}(C_p \mid j) = \frac{1}{(2\pi)^{d/2}\, |\Sigma_j|^{1/2}} \exp\!\left(-\frac{1}{2}\,(C_p - \mu_j)^{T} \Sigma_j^{-1} (C_p - \mu_j)\right)$$

where d is the dimensionality of the feature space. P(j) and (μj, Σj) for $\bar{p}(C_p \mid j)$ were estimated by the expectation maximization (EM) algorithm [32].
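To make the EM estimation concrete, the following sketch fits a one-dimensional, two-component GMM with plain EM, as one would for the single-scale LoG feature space. This is a minimal didactic example in numpy, not the authors' implementation; the two synthetic modes merely stand in for foreground/background statistics.

```python
import numpy as np

def fit_gmm_1d(x, n_components=2, n_iter=100):
    """Fit a 1-D Gaussian mixture by plain EM (didactic sketch)."""
    # Initialize means from spread-out quantiles; equal weights.
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_components))
    var = np.full(n_components, x.var())
    w = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i).
        d = x[:, None] - mu[None, :]
        pdf = np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate P(j), mu_j, Sigma_j from responsibilities.
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two synthetic modes, e.g., dark nuclei vs. lighter background intensities.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
w, mu, var = fit_gmm_1d(x)
```

In practice the reference-image statistics are multivariate (3-by-3 covariances in RGB), but the E/M alternation is identical.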

B. Color Normalization

The purpose of color normalization is to close the gap, in color space, between an input test image and a reference image. As a result, the prior models, constructed from each reference image, can be better utilized. We evaluated a number of color normalization methods and chose the color map normalization described in [31] for its effectiveness in handling histological data. Let

  • input image I and reference image Q have KI and KQ unique color triplets in terms of (R, G, B), respectively;
  • C_{I/Q} be a monotonic function, which maps the color channel intensity, C ∈ {R, G, B}, from image I/Q to a rank that is in the range [0, KI)/[0, KQ);
  • (rp, gp, bp) be the color of pixel p, in image I, and (RI(rp),GI(gp),BI(bp)) be the ranks for each color channel intensity; and
  • the color channel intensity values rref, gref and bref, from image Q, have ranks:

As a result of color map normalization, the color of pixel p, (rp, gp, bp), will be normalized to (rref, gref, bref). In contrast to standard quantile normalization, which utilizes all pixels in the image, color map normalization is based on the unique colors in the image, thereby excluding the frequency of any color. Our experience suggests that this method is quite powerful for normalizing histology sections, since color frequencies vary widely as a result of technical variations and tumor heterogeneity. Examples of color map normalization can be found in Figure 2.
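A minimal numpy sketch of this rank-based mapping, as we read the description above (the exact rank scaling of [31] may differ); the function names and the integer rank rescaling are our own illustrative choices:

```python
import numpy as np

def colormap_normalize_channel(src, ref):
    """Map each unique intensity in `src` to the reference intensity
    holding the same relative rank among unique values; color
    frequencies are deliberately ignored, unlike standard quantile
    normalization."""
    u_src = np.unique(src)                      # K_I sorted unique values
    u_ref = np.unique(ref)                      # K_Q sorted unique values
    # Rank of each unique source value, rescaled to [0, K_Q).
    idx = (np.arange(len(u_src)) * len(u_ref)) // len(u_src)
    # Replace every pixel by the reference value at its mapped rank.
    return u_ref[idx[np.searchsorted(u_src, src)]]

def colormap_normalize(img, ref):
    """Channel-wise color map normalization of an RGB image."""
    return np.stack(
        [colormap_normalize_channel(img[..., c], ref[..., c])
         for c in range(3)], axis=-1)
```

For example, a channel with unique values {10, 20, 30} normalized against a reference with unique values {100, 150, 200} maps 10→100, 20→150, 30→200, regardless of how often each value occurs.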

C. Color transformation

In order to reduce the computational complexity of integrating the LoG responses, the RGB space is transformed into a gray level image that accentuates the nuclear dye. While several techniques for color decomposition have been proposed [34], [33], they are either too time-consuming or do not yield favorable outcomes. The color transformation policy needs to enhance the nuclear stain while attenuating the background stain. One way to realize such a transformation is by:

$$BR(x, y) = \frac{100 \times B(x, y)}{1 + R(x, y) + G(x, y)} \times \frac{256}{1 + B(x, y) + R(x, y) + G(x, y)}$$

where B(x, y), R(x, y) and G(x, y) are the blue, red and green intensities at position (x, y). We refer to this transformation as the blue ratio image in the rest of this manuscript. In this formulation, the first and second terms accentuate and attenuate the nuclear and background signals, respectively. Subsequently, the LoG responses are always computed at a single scale from the blue ratio image. Figure 3 demonstrates that the blue ratio image has improved performance compared to an alternative method [33].

Fig. 3
(a) Two diverse pinhole of tumor signatures; (b) Decompositions by [33]; (c) Blue ratio images.
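The blue ratio transformation of Section III-C can be written directly in numpy; the two pixel values below are made-up stand-ins for a dark bluish nucleus and a pink eosin background:

```python
import numpy as np

def blue_ratio(rgb):
    """Blue ratio image: the first factor accentuates the blue
    (hematoxylin/nuclear) signal and the second attenuates overall
    brightness, per BR = 100*B/(1+R+G) * 256/(1+R+G+B)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (100.0 * b / (1.0 + r + g)) * (256.0 / (1.0 + r + g + b))

# A dark bluish "nucleus" pixel vs. a pink "background" pixel (made up).
px = np.array([[[40, 40, 160]], [[230, 160, 200]]], dtype=np.uint8)
br = blue_ratio(px)
```

The nuclear pixel maps to a much brighter gray value than the background pixel, which is what makes a single-channel LoG on this image effective.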

D. Feature Extraction

Our approach integrates both color and scale information, where the scale is encoded by the LoG response.

  1. Normalization of the input test image against every reference image, as described in Section III-B;
  2. Conversion of each normalized image into the blue ratio image, as described in Section III-C;
  3. Application of a LoG filter on each of the blue ratio images, at a single scale; and
  4. Representation of each pixel, from the test image, by its RGB color in each of the normalized images and LoG response from each of the blue ratio images.

As a result, each pixel in the test input image is represented by 2N features, where the first N features are RGB colors from the normalized images, and the last N features are LoG responses computed from the blue ratio of the normalized images. All 2N features are assumed to be independent, per the selection of reference images in Section III-A. The rationale for integrating both color and scale information is that: (i) in some cases, color information is insufficient to differentiate nuclear regions from background; (ii) the scales (e.g., LoG responses) of the background structure and the nuclear region are typically different; and (iii) the nuclear region responds well to blob detectors, such as a LoG filter [28].

E. Multi-Reference Graph Cut Model

In this section, we first present the background material on graph cut formalism, and then proceed to the details of the image-based modeling for incorporating intrinsic and extrinsic variations.

Within the graph cut formulation, an image is represented as a graph G = ⟨V, Ē⟩, where V is the set of all nodes, and Ē is the set of all arcs connecting adjacent nodes. Usually, the nodes and edges correspond to pixels (P) and their adjacency relationship, respectively. Additionally, there are special nodes known as terminals, which correspond to the set of labels that can be assigned to pixels. In the case of a graph with two terminals, the terminals are referred to as the source (S) and the sink (T), which correspond to specific labels. The labeling problem is to assign a unique label xp (0 for background, and 1 for foreground) to each node p ∈ V, and the image cutout is performed by minimizing the Gibbs energy E [35]:

$$E = \sum_{p \in V} E_{\mathrm{fitness}}(x_p) + \beta \sum_{(p,q) \in \bar{E}} E_{\mathrm{smoothness}}(x_p, x_q) \qquad (4)$$


where Efitness(xp) is the likelihood energy, encoding the data fitness cost for assigning xp to p, and Esmoothness(xp, xq) is the prior energy, denoting the cost when the labels of adjacent nodes, p and q, are xp and xq, respectively; β is the weight for Esmoothness.

Optimization algorithms for the max-flow/min-cut problem can be classified into two groups: Goldberg–Tarjan “push-relabel” methods [36] and Ford–Fulkerson “augmenting paths” methods [37]. The details of the two methods can be found in [38].
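To make the augmenting-paths family concrete, here is a compact Edmonds–Karp max-flow on a toy graph. The graph and capacities are hypothetical, and this is far simpler than the grid-graph solver used later [41]; it only illustrates the repeated push-along-shortest-path idea.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly push flow along the shortest augmenting
    path in the residual graph until none remains."""
    # Residual capacities; reverse edges start at 0.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in list(capacity):
        for v in capacity[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for the shortest s-t path with spare residual capacity.
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Walk back to find the bottleneck, then update residuals.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

# Toy graph: max flow from s to t is 5 (2 via a-t, 2 via b-t, 1 via a-b-t).
capacity = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
```

In the segmentation setting, s and t are the foreground/background terminals, the t-link capacities come from the fitness terms, and the n-link capacities from the smoothness term.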

We recognize that the training data set cannot fully capture the intrinsic variations of the nuclear signature. Therefore, the data fitness term is expressed as a combination of an intrinsic local probability map and a learned global probability map. The local probability map has the advantage of capturing local intrinsic image properties in the absence of color map normalization, thus diversifying the data fitness term. Equation 4 is rewritten as

$$E = \sum_{p \in V} \left( E_{\mathrm{gf}}(x_p) + E_{\mathrm{lf}}(x_p) \right) + \beta \sum_{(p,q) \in \bar{E}} E_{\mathrm{smoothness}}(x_p, x_q)$$


where Egf is the global data fitness term encoding the global fitness cost for assigning xp to p, and Elf is the local data fitness term encoding the corresponding local cost. Each term, together with the optimization process, is discussed below.

  1. Global fitness term: The global fitness is established based on manually annotated reference images. Assume N reference images: Qi, i ∈ [1, N]. For each reference image, GMMs are used to represent the nuclei and background in both the RGB space and the LoG response space: GMM_Nuclei^k, GMM_Background^k, in which k ∈ [1, 2N]; the first N GMMs are for the RGB space, and the last N GMMs are for the LoG response space. Details can be found in Section III-A.
    An input test image I is first normalized as Ui with respect to every reference image Qi. Subsequently, the RGB color and LoG responses of Ui are collected to construct 2N features per pixel, where the first N features are from the normalized color (RGB) space, and the second N features are from the LoG response. Let
    • p be a node corresponding to a pixel;
    • fk(p) be the kth feature of p;
    • α be the weight of LoG response;
    • pik be the probability function of fk being Nuclei(l = 1)/Background(l = 0):
    • λi be the weight for Qi:
      where ||·|| is the L2 norm and HC(·) is the histogram function on a single color channel C ∈ {R, G, B} of an image. Intuitively, λ measures the similarity between the two histograms derived from Qi and Ui, each represented with 20 bins. Based on our experiments, the λs become stable when the number of bins reaches 20; histograms with fewer than 20 bins have insufficient resolution. The similarity parameter weighs the fitness of the prior model, constructed from Qi, to the features extracted from the normalized image Ui.
    The global fitness term is now defined as
    where the first and second terms integrate normalized color features and LoG responses, respectively.
  2. Local Fitness Term: While the global fitness term utilizes both color and LoG information in the normalized space, it does not utilize information in the original color space of the input image. As a result, local variation may be lost for a number of reasons, e.g., non-uniformity in the tissue sections, local lesions, etc. The local data fitness of a pixel, p, is computed from foreground and background seeds in a local neighborhood around p that correspond to peaks detected by a LoG filter on the blue ratio image, where positive and negative peaks often, but not always, correspond to the background and foreground (nuclei), respectively. The accuracy can be improved by a cascade of filters as follows:
    1. Seeds detection: This step aims to collect local foreground and background seeds by incorporating local and global image statistics. Typical positive and negative peak responses, associated with the LoG filter, are shown in Figure 4(a). Most of the time, the LoG filter detects foreground and background locations correctly, but there is a potential for errors. The protocol consists of three steps:
      1. Create a blue ratio image (Section III-C): In this transformed space, the peak of the intensity histogram always corresponds to the most frequent background intensity, as shown in Figure 4(b).
      2. Construct distributions of the foreground and background: Apply the LoG filter on the blue ratio image, detect peaks, and construct a distribution of the blue ratio intensity at the peaks corresponding to the negative and positive LoG responses. A small subset of seeds can be mislabeled, but most can be corrected in the following step.
      3. Constrain the seed selection: Seeds (e.g., peaks of the LoG response) are constrained by three criteria: (i) the LoG responses must be above a minimum conservative threshold to remove noisy artifacts; (ii) the intensity associated with the peaks of the negative LoG responses (e.g., foreground peaks) must concur with the background peak specified in step (a); and (iii) within a small neighborhood of w1 × w1, the minimum blue ratio intensity at the locations of negative seeds is set as the threshold for background peaks, as shown in Figure 5.
        Fig. 5
        LoG responses can be either positive (e.g., potential background) or negative (e.g., foreground or part of foreground) in the transformed blue ratio image. In the blue ratio image with the most negative LoG response, the threshold is set at the minimum ...
      Fig. 4
      (a) An example of the LoG response for detection of foreground (green dot) and background (blue dot) signals indicates an excellent performance on the initial estimate; (b) Histogram of the blue ratio intensity derived from image (a) indicates that the ...
    2. Local foreground/background color modeling: For each pixel, p, foreground and background statistics within a local neighborhood, w2 × w2, are represented by two GMMs in the original color space. These GMMs correspond to the nuclei and background models (e.g., GMM_Nuclei^Local and GMM_Background^Local), respectively.
    The local fitness term is defined as:
    where f(p) refers to the RGB feature of node p in the original color space, γ is the weight for local fitness, pl is the probability function of f being Nuclei(l = 1)/Background(l = 0):
  3. Smoothness Term: While both local and global data fitness terms are encoded by t-links (links between nodes and terminals) in the graph, the smoothness term, which ensures the smoothness of labeling between adjacent nodes, is represented by n-links (links between adjacent nodes). Here, we adopt the setup from [39] for n-links, which approximates a continuous Riemannian metric by a discrete weighted graph so that the max-flow/min-cut solution for the graph corresponds to a local geodesic or minimal surface in the continuous case. Consider the weighted graph constructed in Section III-E: G = ⟨V, Ē⟩, where V is the set of image pixels, and Ē is the set of all edges connecting adjacent pixels. Let,
    • {ek | 1 ≤ k ≤ nG} be a set of vectors for the neighborhood system, where nG is the neighborhood order, and the vectors are ordered by their corresponding angle ϕk w.r.t. the +x axis, such that 0 ≤ ϕ1 < ϕ2 < ⋯ < ϕnG < π. For example, when nG = 8, we have e1 = (1, 0), e2 = (1, 1), e3 = (0, 1), e4 = (−1, 1), as shown in Figure 6(a);
      Fig. 6
      (a) Eight-neighborhood system: nG = 8; (b) Contour on eight-neighborhood 2D grid; (c) One family of lines formed by edges of the graph.
    • wk be the weight of the edge between pixels p and q, where p and q belong to the same neighborhood system and q − p = ±ek;
    • L be a line formed by the edges in the graph, as shown in Figure 6(c);
    • C be a contour in the same 2D space where the graph G is embedded, as shown in Figure 6(b);
    • |C|G be the cut metric of C:
      where ĒC is the set of edges intersecting contour C;
    • |C|R be the Riemannian length of contour C; and,
    • D(p) be the metric (tensor), which varies continuously over points p in the 2D Riemannian space;
    Based on Integral Geometry [40], the Crofton-style formula for Riemannian length |C|R of contour C can be written as,
    where uL is the unit vector in the direction of the line L, and nC is a function that specifies how many times line L intersects contour C. Following the approach in [39], the local geodesic can be approximated by the max-flow/min-cut solution (|C|G → |C|R) with the following edge weight setting:
    where δ is the cell size of the grid, Δϕk is the angular difference between the kth and (k + 1)th edge lines, Δϕk = ϕk+1 − ϕk, and
    where u = ∇I/|∇I| is a unit vector in the direction of the image gradient at point p, I is the identity matrix, and g(x) = exp(−x²/(2σ²)).
  4. Optimization: The construction of the graph, with two terminals, source S and sink T, is defined in Table I. This graph is partitioned via the max-flow/min-cut algorithm proposed in [41] to label the input image into foreground and background. The optimization method belongs to a class of algorithms based on augmenting paths, and the details can be found in [41].
    Edge weights for the graph construction, where N is the neighborhood system, and β is the weight for smoothness.
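To make the geodesic edge-weight construction above concrete, it can be sketched in Python. This is a minimal sketch of the weighting scheme of [39]; the function names, the default σ = 4.0, and the restriction to the four 8-neighborhood edge directions in [0, π) are our own illustrative choices, not the authors' implementation:

```python
import math

def metric_tensor(grad, sigma=4.0):
    """D(p) = g(|grad I|) * I + (1 - g(|grad I|)) * u * u^T with
    g(x) = exp(-x^2 / (2 sigma^2)), u the unit vector along the image gradient."""
    gx, gy = grad
    mag = math.hypot(gx, gy)
    if mag == 0.0:                       # no gradient: Euclidean (isotropic) metric
        return [[1.0, 0.0], [0.0, 1.0]]
    g = math.exp(-mag * mag / (2.0 * sigma * sigma))
    ux, uy = gx / mag, gy / mag
    return [[g + (1 - g) * ux * ux, (1 - g) * ux * uy],
            [(1 - g) * ux * uy, g + (1 - g) * uy * uy]]

def edge_weights(D, delta=1.0):
    """w_k = delta^2 |e_k|^2 dphi_k det(D) / (2 ||e_k||_D^3), ||e||_D = (e^T D e)^(1/2),
    for e_1=(1,0), e_2=(1,1), e_3=(0,1), e_4=(-1,1), so dphi_k = pi/4."""
    dirs = [(1, 0), (1, 1), (0, 1), (-1, 1)]
    dphi = math.pi / 4.0
    det_d = D[0][0] * D[1][1] - D[0][1] * D[1][0]
    weights = []
    for ex, ey in dirs:
        e_sq = ex * ex + ey * ey                       # |e_k|^2
        e_d = (ex * (D[0][0] * ex + D[0][1] * ey)
               + ey * (D[1][0] * ex + D[1][1] * ey))   # e^T D e
        weights.append(delta ** 2 * e_sq * dphi * det_d / (2.0 * e_d ** 1.5))
    return weights
```

With a zero gradient, D is the identity and each weight reduces to the Euclidean Cauchy–Crofton value δ²Δϕk/(2|ek|); with a strong gradient, edges parallel to the gradient become cheaper to cut, which pulls the min-cut boundary onto high-gradient pixels.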

F. Nuclear Mask Partitioning

A key observation we made is that the nuclear shape is typically convex. Therefore, ambiguities associated with the delineation of overlapping nuclei could be resolved by detecting concavities and partitioning them through geometric reasoning. The process, shown in Figure 7, consists of the following steps:

  1. Detection of Points of Maximum Curvature: The contours of the nuclear mask were extracted, and the curvature along each contour was computed as κ = (x′y″ − y′x″)/(x′² + y′²)^(3/2), where x and y are the coordinates of the boundary points. The derivatives were computed by convolving the boundary with derivatives of a Gaussian. An example of detected points of maximum curvature is shown in Figure 7.
  2. Delaunay Triangulation (DT) of Points of Maximum Curvature for Hypothesis Generation and Edge Removal: DT was applied to all points of maximum curvature to hypothesize all possible groupings. The main advantage of DT is that the edges are non-intersecting, and the Euclidean minimum spanning tree is a sub-graph of DT. This hypothesis space was further refined by removing edges based on certain rules, e.g., no background intersection.
  3. Geometric reasoning: Properties of both the hypothesis graph (e.g., vertex degree) and the shape of the object (e.g., convexity) were integrated for edge inference.
Fig. 7
Steps in the delineation of overlapping nuclei: (Top row) identifying points of maximum curvature where potential folds are formed, (middle row) formation of partitioning hypotheses through triangulation, (bottom row) stepwise application of geometric ...

This method is similar to the one proposed in our previous work [42]; however, a significant performance improvement has been made through triangulation and subsequent geometric reasoning. Please refer to [43] for details.
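Step 1 of the partitioning process above (curvature via derivatives of a Gaussian along a closed contour) can be sketched in pure Python. This is a minimal sketch under our own assumptions: the helper names, the kernel truncation at 4σ, and the zero-mean correction of the second-derivative kernel are illustrative choices, not the authors' code:

```python
import math

def gaussian_derivative_kernels(sigma, radius=None):
    """Sampled first- and second-derivative-of-Gaussian kernels, ordered t = -r..r."""
    r = int(4 * sigma) if radius is None else radius
    ts = list(range(-r, r + 1))
    g = [math.exp(-t * t / (2.0 * sigma * sigma)) for t in ts]
    s = sum(g)
    g = [v / s for v in g]                                        # unit-sum Gaussian
    d1 = [(-t / sigma ** 2) * v for t, v in zip(ts, g)]           # ~ G'(t)
    d2 = [(t * t / sigma ** 4 - 1.0 / sigma ** 2) * v
          for t, v in zip(ts, g)]                                 # ~ G''(t)
    m = sum(d2) / len(d2)
    d2 = [v - m for v in d2]             # zero-mean: exact zero response to constants
    return d1, d2

def circular_convolve(signal, kernel):
    """Convolution on a closed contour (indices wrap around)."""
    n, r = len(signal), len(kernel) // 2
    return [sum(kernel[j + r] * signal[(i - j) % n] for j in range(-r, r + 1))
            for i in range(n)]

def contour_curvature(xs, ys, sigma=2.0):
    """kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), with derivatives taken by
    convolving the boundary coordinates with derivatives of a Gaussian."""
    d1, d2 = gaussian_derivative_kernels(sigma)
    xp, yp = circular_convolve(xs, d1), circular_convolve(ys, d1)
    xpp, ypp = circular_convolve(xs, d2), circular_convolve(ys, d2)
    return [(a * dd - b * cc) / max((a * a + b * b) ** 1.5, 1e-12)
            for a, b, cc, dd in zip(xp, yp, xpp, ypp)]
```

For a counter-clockwise circle of radius R the estimate is approximately 1/R at every point; concavities between overlapping nuclei show up as local extrema of opposite sign.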

IV. Experimental Results and Discussion

In this section, we (i) discuss parameter setting, and (ii) evaluate performance of the system against previous methods.

A. Experimental design and parameter setting

In order to capture the technical variation, we manually selected and annotated 20 reference images of size 1k-by-1k pixels at 20X; a subset is shown in Figure 9. Nuclear segmentation was also performed at 20X, and only the top M = 10 reference images with the highest weight λ were used. Essentially, this was a trade-off between performance and computational cost (see Figure 13). The number of components for the GMM was set to D = 20, and the GMM parameters were estimated via the EM algorithm. Other parameter settings were: α = 0.1, β = 10.0, γ = 0.1, w1 = 100, w2 = 100, and σ = 4.0 (the scale for both seed detection and LoG feature extraction). Here, σ was determined from the preferred nuclear size at 20X, w1 was selected to minimize the seed detection error on the annotated reference images, and all other parameters were selected to minimize the cross-validation error over the following discretization: D ∈ {5, 10, 15, 20, 25, 30}, α ∈ {0.05, 0.10, …, 0.95, 1.00}, β ∈ {5, 10, …, 95, 100}, γ ∈ {0.05, 0.10, …, 0.95, 1.00}, and w2 ∈ {50, 60, …, 190, 200}. The optimal γ value is relatively small, which can be attributed to the fact that the global statistics from the well-constructed reference images cover most of the heterogeneity in our dataset; the role of the local statistics is simply to assist the global statistics with improved discriminating power.
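The exhaustive search over this discretization can be sketched as follows. This is a schematic sketch only: `grid_search` and the toy error function used in the usage example below are our own illustrative stand-ins for the actual cross-validation pipeline:

```python
import itertools

# Discretization of the parameter search described above (D, alpha, beta, gamma, w2).
grid = {
    "D":     [5, 10, 15, 20, 25, 30],
    "alpha": [round(0.05 * i, 2) for i in range(1, 21)],
    "beta":  list(range(5, 101, 5)),
    "gamma": [round(0.05 * i, 2) for i in range(1, 21)],
    "w2":    list(range(50, 201, 10)),
}

def grid_search(cv_error, grid):
    """Exhaustive search: return the parameter combination minimizing cv_error,
    where cv_error maps a parameter dict to a scalar error."""
    names = list(grid)
    best, best_err = None, float("inf")
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        err = cv_error(params)
        if err < best_err:
            best, best_err = params, err
    return best, best_err
```

For example, minimizing a toy error function `abs(alpha - 0.1) + abs(gamma - 0.1)` over the (α, γ) sub-grid recovers α = 0.1, γ = 0.1; in the paper the error is the cross-validation error of the segmentation, which is far more expensive to evaluate.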

Fig. 9
A subset of reference image ROI, with manual annotation overlaid as green contours, indicating significant amounts of technical variation. Nuclei with white hollow regions inside are pointed out by arrows.
Fig. 13
Top and bottom rows show average classification performance and computational time as a function of the number of reference images used. It is clear that the top M = 10 reference images with the highest λ are a reasonable trade-off between performance ...

B. Evaluation

Two-fold cross validation, with optimized parameter settings, was applied to the reference images, and the average classification performance of our approach was compared with random forest [44] and the most closely related work [31] (referred to here as MCV, for multi-classifier voting), as shown in Table II. Our experiments indicate that

  1. By incorporating both global and local statistics (MRGC vs MRGC-GF), our system better characterizes the variation in the data.
  2. By incorporating the LoG response as a feature (MRGC vs MRGC-CF), we can encode prior scale information into the system. As a result, ambiguous background structures are excluded, which leads to an increase in precision. However, there is also a decrease in recall compared to MRGC-CF, because the tiny fragments inside the nuclei, as indicated in Figure 9, can also be eliminated.
  3. MRGC with multi-scale LoG features (MRGC-MS) has the best performance. We evaluated LoG responses at three scales, σ ∈ {2, 4, 6}, to compensate for the wide variation in nuclear size. The improvement in segmentation is marginal, and it comes with a significant increase in computational cost of about 40%. The LoG filter is simply used for seed detection to represent the underlying image statistics, and as long as a single scale provides sufficient statistics, multiscale LoG is redundant. Moreover, computational throughput is an important factor when processing whole slide images.
Comparison of average classification performance among our approach (MRGC), our previous approach [2], the MCV approach [31], and random forest. For MCV, only color in RGB space is used, which is identical to [31]. For random forest, the same features are ...
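The single-scale LoG response discussed in item 3 above can be illustrated with a small sketch. The kernel formula is the standard Laplacian of Gaussian; the dictionary-based sparse image representation, the function names, and the zero-mean correction are our own simplifications, not the authors' implementation:

```python
import math

def log_kernel(sigma, radius=None):
    """Sampled Laplacian-of-Gaussian kernel. With this sign convention the
    response at the center of a bright blob of matching scale is strongly
    negative, so seeds can be detected at local minima of the response."""
    if radius is None:
        radius = int(3 * sigma)
    k = {}
    for x in range(-radius, radius + 1):
        for y in range(-radius, radius + 1):
            r2 = x * x + y * y
            k[(x, y)] = ((r2 - 2 * sigma ** 2) / sigma ** 4) \
                * math.exp(-r2 / (2 * sigma ** 2))
    mean = sum(k.values()) / len(k)
    return {p: v - mean for p, v in k.items()}   # zero response on flat regions

def log_response(image, cx, cy, kernel):
    """LoG response at (cx, cy); image is a {(x, y): intensity} dict and
    missing pixels count as 0."""
    return sum(w * image.get((cx + dx, cy + dy), 0.0)
               for (dx, dy), w in kernel.items())
```

Running this on a synthetic Gaussian blob of scale 4 with a σ = 4 kernel gives a strong negative response at the blob center and a near-zero response far from it, which is the behavior the seed detector relies on.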

We also provide an intuitive example, shown in Figure 10, demonstrating the effectiveness of the local probability map. It is clear that the local probability map (Figure 10(c)) helps to characterize nuclei with low chromatin content, as shown in the blue bounding boxes. Another example, shown in Figure 11, further demonstrates the effectiveness of our approach on the segmentation of low chromatin nuclei.

Fig. 10
A comparison among our approach, MCV, and random forest. (a) Original image patch; (b) Detected seeds, Green: Nuclei region; Blue: background; (c) Local Nuclei Probability established based on seeds; (d) Classification by our approach; (e) Classification ...
Fig. 11
Segmentation on low chromatin nuclei. (a) Original image patch; (b) Segmentation by our approach.

Finally, a comparison of the segmentation performance between our current approach and our previous approach [2] is indicated in Table III, where the correct nuclear segmentation is defined as follows. Let

  • MaxSize(a, b) be the maximum nuclear size of nuclei a and b, and
  • Overlap(a, b) be the amount of overlap between nuclei a and b.

Subsequently, for any nucleus, nG, from the ground truth, if there is one and only one nucleus, nS, in the segmentation result that satisfies Overlap(nG, nS)/MaxSize(nG, nS) > T, then nS is considered to be a correct segmentation of nG. The threshold was set to T = 0.8.

Comparison of average segmentation performance between our current approach (MRGC) and our previous approach [2], in which precision = #correctly_segmented_nuclei / #segmented_nuclei, and recall = #correctly_segmented_nuclei / #manually_segmented_nuclei.
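The matching criterion and the two scores above can be sketched directly. This is a minimal sketch under our own assumptions: nuclei are represented as sets of pixel coordinates, and the function name is ours:

```python
def evaluate_segmentation(ground_truth, segmented, T=0.8):
    """Count correct segmentations: a segmented nucleus n_S is a correct match
    for ground-truth nucleus n_G iff it is the one and only segmented nucleus
    with Overlap(n_G, n_S) / MaxSize(n_G, n_S) > T. Nuclei are pixel sets."""
    correct = 0
    for gt in ground_truth:
        matches = [s for s in segmented
                   if len(gt & s) / max(len(gt), len(s)) > T]
        if len(matches) == 1:            # one and only one match
            correct += 1
    return {"precision": correct / len(segmented),
            "recall": correct / len(ground_truth)}
```

With two ground-truth nuclei, one exactly recovered and one missed while a spurious region is segmented elsewhere, both precision and recall evaluate to 0.5.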

The reader may question the classification performance since both precision and recall are not very high. The major reason is that the ground truth (annotation) for the reference images is created at the object (nucleus) level, which means the hollow regions (loss of chromatin content for various reasons) inside the nuclei will be marked as the nuclear region rather than the background, as indicated by Figure 9.

V. Analysis of TCGA GBM Cohort

Having evaluated the performance of the system, we applied our method to a cohort of 377 GBM whole slide images, from 146 patients, for bioinformatics analysis. Figure 12 shows a few snapshots of our classification and segmentation results; complete results for all the GBM tissue sections (and a few other tumor types) are available through the NIH web site. Following segmentation, each nucleus is represented by a multidimensional feature vector, which includes over 52 morphometric indices such as nuclear size, cellularity, and cytoplasmic features [2]. The density distribution of each index is then computed per histology section and aggregated per patient.

Fig. 12
Classification and segmentation results indicate tolerance to intrinsic variations: (a) Original images; (b) Nuclear/background classification results via our approach (MRGC); (c) Nuclear partition results via geometric reasoning.

A particular aspect of the bioinformatics analysis relies on subtyping based on a subset of the computed morphometric indices (e.g., cellular density), where subtyping is performed through consensus clustering [45], [46]. In our experiments, we evaluated all morphometric indices and discovered that subtyping based on (i) nuclear size and cellularity, and (ii) nuclear intensity and gradient, is statistically stable, with four and two subtypes inferred, respectively. Figure 14 shows the computed subtypes based on nuclear size and cellularity, where one of the subtypes is predictive of outcome based on the clinical data. The patients in the GBM cohort received one of two types of therapy: (i) an intensive therapy of either concurrent radiation and chemotherapy or four or more cycles of chemotherapy alone, or (ii) a less intensive therapy of either non-concurrent radiation and chemotherapy or fewer than four cycles of chemotherapy alone [47]. Although the sample size for patients receiving the less intensive therapy is small, survival analyses [48] for one of the subtypes in each of the clustering experiments point to a trend of improved survival for patients receiving the more intensive therapy, as shown in Figure 15. In addition, several computed subtypes, based on other morphometric indices, have also been found to be predictive of outcome. We also examined molecular correlates of the predictive subtypes. With respect to the predictive subtype computed from the nuclear size and cellularity indices, we used a moderated t-test [49] to identify a set of differentially regulated transcripts for subtype 2 (i.e., the predictive subtype), as shown in Figure 16.
A total of 10 differentially regulated transcripts were then subjected to subnetwork enrichment analysis using Pathway Logic, which computes and ranks hubs according to their p-values, as shown in Table IV. The top-ranked hubs (e.g., IL1, IL6) impact tumor proliferation and migration in both normal and malignant cells [50], [51], as well as the recruitment of the immune response. The relationships between these hubs and the genes associated with them are shown in Figure 17. Among the common regulators, MAPK1 and FN1, which are involved in proliferation, are highly ranked transcripts in TCGA’s gene tracker for GBM. Furthermore, FN1 is (i) implicated in invasion and angiogenesis, and (ii) validated as a differentially expressed transcript in GBM versus benign tumors [52]. Finally, TGFB1 is well known to be involved in tumor maintenance and progression through suppression of the immune response and is abundantly produced by GBM [53]. These molecular associations suggest that morphometric subtyping can hypothesize relevant transcripts that are potential targets of therapy, consistent with the current literature; one example is FN1 and its role in the induction of angiogenesis. With respect to the predictive subtype computed from the nuclear intensity and gradient indices, subnetwork enrichment analysis revealed a large number of hubs from a set of differentially regulated transcripts. In this case, VEGF was discovered to be at the intersection of all pathways curated through the enrichment analysis. VEGF is well known to be a hallmark of glioblastoma for the induction of microvasculature formation [54] and has been suggested as a therapeutic target in GBM [55].
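The survival analyses cited above [48] rest on the Kaplan–Meier product-limit estimator, S(t) = Π over event times ti ≤ t of (1 − di/ni), with di deaths among ni subjects at risk at ti. A minimal sketch (function name ours; no confidence intervals or log-rank testing, which a real analysis would include):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate: returns [(t, S(t))] at each observed event time.
    events[i] is 1 for an observed death, 0 for a censored observation."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    s, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:  # group ties at time t
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= removed            # censored subjects leave the risk set too
    return curve
```

For example, with deaths at t = 1 and t = 2 and a censored observation at t = 3, the curve steps to 2/3 and then to 1/3, and the censoring leaves the last step unchanged.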

Fig. 14
Morphometric subtyping reveals four subtypes based on cellularity index and nuclear area: (a) visualization of consensus clustering with four clusters; and (b) distribution of cellularity index per subtype.
Fig. 15
Computed subtypes based on cellularity and nuclear size are predictive of improved outcome as a result of more aggressive therapy.
Fig. 16
Heat map representing a subset of differentially regulated transcripts for Subtype 2.
Fig. 17
Subnetwork enrichment analysis, for the predictive subtype in Figure 15(a), reveals inflammatory hubs that promote tumor differentiation and invasiveness in GBM.
Key hubs identified through pathway enrichment analysis.

VI. Conclusion

We have shown that morphometric representation of cellular architecture from a large cohort of histology sections can provide new opportunities for hypothesis generation. The main barriers are the batch effect and tumor heterogeneity, which hinder nuclear segmentation. However, through image-based modeling, technical and tumor variations can be captured for robust nuclear segmentation from whole slide images. Subsequently, the segmented nuclei and the corresponding morphometric representation enable characterization of tumor histopathology. Our approach for nuclear segmentation addresses technical and biological variations by (i) utilizing global information from a diverse set of annotated reference images, (ii) normalizing the test image against the reference images in the color space, and (iii) incorporating local variations in the test image. Segmentation is formulated within a graph cut framework with a geodesic constraint for improved accuracy of the nuclear boundaries. The method has been validated against annotated data and applied to a large GBM tumor cohort to identify subtypes as a function of cellularity and nuclear size. One of these subtypes is shown to have increased survival as a result of a more aggressive therapy, with an underlying molecular signature that is consistent with invasiveness and proliferation.


This work was supported by NIH U24 CA1437991 carried out at Lawrence Berkeley National Laboratory under Contract No. DE-AC02-05CH11231.


Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org

Contributor Information

Hang Chang, Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720 U.S.A.

Ju Han, Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720 U.S.A.

Alexander Borowsky, Center for Comparative Medicine, University of California, Davis, California, 95616 U.S.A.

Leandro Loss, Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720 U.S.A.

Joe W. Gray, Center for Spatial Systems Biomedicine, Oregon Health Sciences University, Portland, Oregon, 97239 U.S.A.

Paul T. Spellman, Center for Spatial Systems Biomedicine, Oregon Health Sciences University, Portland, Oregon, 97239 U.S.A.

Bahram Parvin, Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720 U.S.A.


1. Dalton L, Pinder S, Elston C, Ellis I, Page D, Dupont W, Blamey R. Histological gradings of breast cancer: linkage of patient outcome with level of pathologist agreements. Modern Pathology. 2000;vol. 13(no. 7):730–735. [PubMed]
2. Chang H, Fontenay G, Han J, Cong G, Baehner F, Gray J, Spellman P, Parvin B. Morphometric analysis of TCGA Glioblastoma Multiforme. BMC Bioinformatics. 2011;vol. 12(no. 1)
3. Demir C, Yener B. Technical Report. Rensselaer Polytechnic Institute, Department of Computer Science; 2009. Automated cancer diagnosis based on histopathological images: A systematic survey.
4. Gurcan M, Boucheron L, Can A, Madabhushi A, Rajpoot N, Bulent Y. Histopathological image analysis: a review. IEEE Transactions on Biomedical Engineering. 2009;vol. 2:147–171. [PMC free article] [PubMed]
5. Axelrod D, Miller N, Lickley H, Qian J, Christens-Barry W, Yuan Y, Fu Y, Chapman J. Effect of quantitative nuclear features on recurrence of ductal carcinoma in situ (DCIS) of breast. Cancer Informatics. 2008;vol. 4:99–109. [PMC free article] [PubMed]
6. Datar M, Padfield D, Cline H. Color and texture based segmentation of molecular pathology images using HSOMs. ISBI. 2008:292–295.
7. Basavanhally A, Xu J, Madabhushu A, Ganesan S. Computeraided prognosis of ER+ breast cancer histopathology and correlating survival outcome with oncotype DX assay. ISBI. 2009:851–854.
8. Doyle S, Feldman M, Tomaszewski J, Shih N, Madabhushu A. Cascaded multi-class pairwise classifier (CASCAMPA) for normal, cancerous, and cancer confounder classes in prostate histology. ISBI. 2011:715–718.
9. Bhagavatula R, Fickus M, Kelly W, Guo C, Ozolek J, Castro C, Kovacevic J. Automatic identification and delineation of germ layer components in h&e stained images of teratomas derived from human and nonhuman primate embryonic stem cells. ISBI. 2010:1041–1044. [PMC free article] [PubMed]
10. Kong J, Cooper L, Sharma A, Kurk T, Brat D, Saltz J. Texture based image recognition in microscopy images of diffuse gliomas with multi-class gentle boosting mechanism. ICASSAP. 2010:457–460.
11. Han J, Chang H, Loss L, Zhang K, Baehner F, Gray J, Spellman P, Parvin B. Comparison of sparse coding and kernel methods for histopathological classification of glioblastoma multiforme. ISBI. 2011:711–714. [PMC free article] [PubMed]
12. Acar E, Plopper G, Yener B. Coupled analysis of in vitro and histology samples to quantify structure-function relationships. PLoS One. 2012;vol. 7(no. 3):e32227. [PMC free article] [PubMed]
13. Bilgin C, Ray S, Baydil B, Daley W, Larsen M, Yener B. Multiscale feature analysis of salivary gland branching morphogenesis. PLoS One. 2012;vol. 7(no. 3):e32906. [PMC free article] [PubMed]
14. Fatakdawala H, Xu J, Basavanhally A, Bhanot G, Ganesan S, Feldman F, Tomaszewski J, Madabhushi A. Expectation-maximizationdriven geodesic active contours with overlap resolution (EMaGACOR): Application to lymphocyte segmentation on breast cancer histopathology. IEEE Transactions on Biomedical Engineering. 2010;vol. 57(no. 7):1676–1690. [PubMed]
15. Phukpattaranont P, Boonyaphiphat P. Color based segmentation of nuclear stained breast cancer cell images. ECTI Transactions on Electrical Engineering, and Communication. 2007;vol. 5(no. 2):158–164.
16. Ballaro B, Florena A, Franco V, Tegolo D, Tripodo C, Valenti C. An automated image analysis methodology for classifying megakaryocytes in chronic myeloproliferative disorders. Medical Image Analysis. 2008;vol. 12:703–712. [PubMed]
17. Latson L, Sebek N, Powell K. Automated cell nuclear segmentation in color images of hematoxylin and eosin-stained breast biopsy. Analytical and Quantitative Cytology and Histology. 2003;vol. 26(no. 6):321–331. [PubMed]
18. Land W, McKee D, Zhukov T, Song D, Qian W. A kernelized fuzzy support vector machine CAD system for the diagnosis of lung cancer from tissue. International Journal of Functional Informatics and Personalised Medicine. 2008;vol. 1(no. 1):26–52.
19. Glotsos D, Spyridonos P, Cavouras D, Ravazoula P, Dadioti P, Nikiforidis G. Automated segmentation of routinely hematoxylin-eosin stained microscopic images by combining support vector machine, clustering, and active contour models. Anal Quant Cytol Histol. 2004;vol. 26(no. 6):331–340. [PubMed]
20. Chang H, Defilippis R, Tlsty T, Parvin B. Graphical methods for quantifying macromolecules through bright field imaging. Bioinformatics. 2009;vol. 25(no. 8):1070–1075. [PMC free article] [PubMed]
21. Cosatto E, Miller M, Graf H, Meyer J. Grading nuclear pleomorphism on histological micrographs. International Conference on Pattern Recognition. 2008:1–4.
22. Petushi S, Garcia F, Haber M, Katsinis C, Tozeren A. Large-scale computations on histology images reveal grade-differentiation parameters for breast cancer. BMC Medical Imaging. 2006;vol. 6(no. 14):1070–1075. [PMC free article] [PubMed]
23. Doyle S, Agner S, Madabhushi A, Feldman M, Tomaszewski Automated grading of breast cancer histopathology using spectral clustering with textural and architectural image features. ISBI. 2008:496–499.
24. Bunyak F, Hafiane A, Palanippan K. Histopathology tissue segmentation by combining fuzzy clustering with multiphase vector level set. Adv Exp Med Biol. 2011;vol. 696:413–424. [PubMed]
25. Parvin B, Yang Q, Han J, Chang H, Rydberg B, Barcellos-Hoff Iterative voting for inference of structural saliency and characterization of subcellular events. IEEE Transactions on Image Processing. 2007 Mar;vol. 16(no. 3):615–623. [PubMed]
26. Nath S, Palaniappan K, Bunyak F. Cell segmentation using coupled level sets and graph-vertex. Medical Image Computing and Computed-assisted Intervention-MICCAI. 2006:101–108. [PMC free article] [PubMed]
27. Chang H, Parvin B. Multiphase level set for automated delineation of membrane-bound macromolecules. ISBI. 2010:165–168.
28. Al-Kofahi Y, Lassoued W, Lee W, Roysam B. Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Transactions on Biomedical Engineering. 2010;vol. 57(no. 4):841–852. [PubMed]
29. Kong H, Gurcan M, Belkacem-Boussaid K. Partitioning histopathological images: an integrated framework for supervised color-texture segmentation and cell splitting. IEEE Transactions on Medical Imaging. 2011;vol. 30(no. 9):1661–1677. [PMC free article] [PubMed]
30. Monaco J, Hipp J, Lucas D, Smith S, Balis U, Madabhushi A. Image segmentation with implicit color standardization using spatially constrained expectation maximization: Detection of nuclei. Medical Image Computing and Computed-assisted Intervention-MICCAI. 2012:365–372. [PubMed]
31. Kothari S, Phan JH, Moffitt RA, Stokes TH, Hassberger SE, Chaudry Q, Young AN, Wang MD. ISBI. IEEE; 2011. Automatic batch-invariant color segmentation of histological cancer images; pp. 657–660.
32. Tomasi C. Estimating Gaussian Mixture Densities with EM - A Tutorial. 2004
33. Ruifrok A, Johnston D. Quantification of histochemical staining by color deconvolution. Anal Quant Cytol Histology. 2001;vol. 23(no. 4):291–299. [PubMed]
34. Rabinovich A, Agarwal S, Laris C, Price JH, Belongie S. Unsupervised color decomposition of histologically stained tissue samples. NIPS. 2003:667–674.
35. Geman S, Geman D. Stochastic relaxation, Gibbs distribution and the Bayesian restoration of images. IEEE Transaction on PAMI. 1984;vol. 6(no. 6):721–741. [PubMed]
36. Goldberg AV, Tarjan RE. A New Approach to Maximum-Flow Problem. Journal of the Association for Computing Machinery. 1988;vol. 35(no. 4):921–940.
37. Ford L, Fullkerson D. Flows in Networks. Princeton University Press; 1962.
38. Cook WJ, Cunningham WH, Pulleyblank WR, Schrijver A. Combinatorial Optimization. John Wiley & Sons; 1998.
39. Boykov Y, Kolmogorov V. Computing geodesics and minimal surfaces via graph cuts. Proc. of IEEE ICCV. 2003;vol. 1:26–33.
40. Santalo LA. Integral geometry and geometric probability. Addison-Wesley; 1979.
41. Boykov Y, Kolmogorov V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transaction on PAMI. 2004;vol. 26(no. 9):1124–1137. [PubMed]
42. Raman S, Maxwell C, Barcellos-Hoff M, Parvin B. Geometric approach to segmentation and protein localization in cell culture assays. Journal of Microscopy. 2007:427–436. [PubMed]
43. Wen Q, Chang H, Parvin B. A Delaunay triangulation approach for segmenting clumps of nuclei. ISBI. 2009:9–12.
44. Breiman L. Random forests. Machine Learning. 2001;vol. 45(no. 1):5–32.
45. Monti S, Tamayo P, Mesirov J, Golub T. Consensus clustering – a resampling-based method for class discovery and visualization of gene expression microarray data. Machine Learning (Functional Genomics Special Issue). 2003:91–118.
46. Han J, Chang H, Giricz O, Lee G, Baehner F, Gray J, Bissell M, Kenny P, Parvin B. Molecular predictors of 3D morphogenesis by breast cancer cells in 3D culture. PLoS Computational Biology. 2010;vol. 6(no. 2):e1000684. [PMC free article] [PubMed]
47. Verhaak RG, Hoadley KA, Purdom E, Wang V, Qi Y, Wilkerson MD, Miller CR, Ding L, Golub T, Mesirov JP, Alexe G, Lawrence M, O’Kelly M, Tamayo P, Weir BA, Gabrie S, Winckler W, Gupta S, Jakkula L, Feiler HS, Hodgson JG, James CD, Sarkaria JN, Brennan C, Kahn A, Spellman PT, Wilson RK, Speed TP, W. Gray J, Meyerson M, Getz G, Perou CM, Hayes DN, Network TCGAR. Integrated genomic analysis identifies clinically relevant subtypes of glioblastoma characterized by abnormalities in PDGFRA, IDH1, EGFR, and NF1. Cancer Cell. 2010;vol. 17(no. 1):98–110. [PMC free article] [PubMed]
48. Meier P, Kaplan E. Nonparametric estimation from incomplete observations. Journal of the American Statistical Association. 1958;vol. 53:457–481.
49. Smyth G. Linear models and empirical bayes methods for assessing differential expression in microarray experiments. Statistical Applied Genetics in Molecular Biology. 2004;vol. 3(no. 3) [PubMed]
50. Paugh B, Bryan L, Paugh S, Wilczynska K, Alvarez S, Singh S, Kapitonov D, Rokita H, Wright S, Griswold-Prenner I, Milstien S, Spiegel S, Kordula T. Interleukin-1 regulates the expression of sphingosine kinase 1 in glioblastoma cells. The Journal of Biological Chemistry. 2009;vol. 284(no. 6):3408–3417. [PMC free article] [PubMed]
51. Liu Q, Li R, Shen J, He Q, Deng L, Zhang C, Zhang J. Il-6 promotion of glioblastoma cell invasion and angiogenesis in u251 and t98 cell lines. Journal of Neurooncology. 2010;vol. 100(no. 2):165–176. [PubMed]
52. Colin C, Baeza N, Bartoli C, Fina F, Eudes N, Nanni I, Martin P, Ouafik L, Figarella-Branger D. Identification of genes differentially expressed in glioblastoma versus pilocytic astrocytoma using suppression subtractive hybridization. Oncogenomics. 2006;vol. 25:2818–2826. [PubMed]
53. Barcellos-Hoff M, Newcomb E, Zagzag D, Narayana A. Therapeutic targets in malignant glioblastoma microenvironment. Seminars in Radiation Oncology. 2009;vol. 19:163–170. [PMC free article] [PubMed]
54. Jain R, Di T, Duda D, Loeffler J, Sorensen A, Batchelor T. Angiogenesis in brain tumors. Nature Reviews Neuroscience. 2007;vol. 8(no. 8):610–622. [PubMed]
55. Hormigo A, Ding B, Rafii S. A target for antiangiogenic therapy: Vascular endothelium derived from glioblastoma. Proceedings of the National Academy of Sciences. 2011;vol. 108(no. 11):4271–4272. [PubMed]