PLoS One. 2009; 4(10): e7497.
Published online Oct 22, 2009. doi: 10.1371/journal.pone.0007497
PMCID: PMC2760782
Bright Field Microscopy as an Alternative to Whole Cell Fluorescence in Automated Analysis of Macrophage Images
Jyrki Selinummi,1,2 Pekka Ruusuvuori,1,2 Irina Podolsky,1 Adrian Ozinsky,1 Elizabeth Gold,1 Olli Yli-Harja,2 Alan Aderem,1 and Ilya Shmulevich1,2,3,4*
1Institute for Systems Biology, Seattle, Washington, United States of America
2Department of Signal Processing, Tampere University of Technology, Tampere, Finland
3Department of Bioengineering, University of Washington, Seattle, Washington, United States of America
4Department of Electrical Engineering, University of Washington, Seattle, Washington, United States of America
Teresa Serrano-Gotarredona, Editor
National Microelectronics Center, Spain
* E-mail: ishmulevich/at/
Conceived and designed the experiments: JS AO ESG OYH AA IS. Performed the experiments: JS PR IP ESG. Analyzed the data: JS PR. Contributed reagents/materials/analysis tools: JS IP ESG. Wrote the paper: JS PR IP AO IS.
Received August 28, 2009; Accepted September 28, 2009.
Fluorescence microscopy is the standard tool for detection and analysis of cellular phenomena. This technique, however, has a number of drawbacks such as the limited number of available fluorescent channels in microscopes, overlapping excitation and emission spectra of the stains, and phototoxicity.
We here present and validate a method to automatically detect cell population outlines directly from bright field images. By imaging samples at several focus levels, forming a bright field z-stack, and by measuring the intensity variation of this stack along the z-dimension, we construct a new two-dimensional projection image of increased contrast. With additional information on the location of each cell, such as stained nuclei, this bright field projection image can be used instead of whole cell fluorescence to locate the borders of individual cells, separating touching cells and enabling single cell analysis. Using the popular CellProfiler freeware cell image analysis software, mainly targeted at fluorescence microscopy, we validate our method by automatically segmenting low contrast, complex shaped murine macrophage cells.
The proposed approach frees up a fluorescence channel, which can be used for subcellular studies. It also facilitates cell shape measurement in experiments where whole cell fluorescent staining is either not available or depends on a particular experimental condition. We show that whole cell area detection results using our projected bright field images closely match the standard approach where cell areas are localized using fluorescence, and conclude that the high contrast bright field projection image can directly replace one fluorescent channel in whole cell quantification. Matlab code for calculating the projections can be downloaded from the supplementary site.
The development of highly specific stains and probes, for example the green fluorescent protein and its derivatives, has made fluorescence microscopy the standard tool for visualization and analysis of cellular functions and phenomena. At the same time, automated microscopes and advances in digital image analysis have enabled high-throughput studies, automating the imaging procedure and cell based measurements. In fluorescence microscopy of eukaryotic cells, automated single-cell quantification can be achieved using multiple fluorescent probes and channels in a single experiment. The first fluorescence channel enables detection of stained nuclei, providing markers for cell locations. The second fluorescence channel visualizes the areas occupied by whole cells or the cytoplasm, for example through a cytoskeletal actin stain [1]. Alternatively, a nonspecific subcellular stain can be used for whole cell detection: most fluorescent molecules locate to the compartments the stain targets, but stain residue remains visible in the cytoplasmic area. Regardless of the approach to whole cell staining, cells that are touching or partly overlapping can be automatically separated with the help of the nuclei markers of the first channel [2]. Finally, subcellular phenomena are quantified by measuring different properties of the first and second channels, or by using additional organelle and molecule specific probes and extra fluorescence channels, for example in colocalization measurements [3].
Because of the limited number of fluorescence channels available, and because of the partly overlapping excitation and emission spectra of the probes, studies involving subcellular colocalization are commonly carried out without nuclear or whole cell staining. As a consequence, cell-by-cell measurements are not possible. Single cell measurements are also difficult or even impossible in cells used as negative controls, where the lack of fluorescence is used for the detection of some phenomenon. Furthermore, fluorescence microscopy has other limitations, such as phototoxicity and imaging setup complexity. These problems have motivated the search for alternative methods to replace at least some of the fluorescence channels with standard transmitted light microscopy.
The bright field channel, although readily available in all microscopes, is often neglected in cell population studies. Firstly, the cells are often nearly transparent, making the contrast very poor; even in manual visual analysis it is often impossible to reliably detect the locations of cell borders, especially if the cells are clumped together. Furthermore, since no specific staining is applied, subcellular phenomena cannot be detected, and nuclei are often only faintly visible. Recently, however, a number of studies have shown the usefulness of the bright field channel in cell detection and automated image analysis of cell populations. In Quantitative Phase Microscopy, a phase map of the sample is estimated from bright field images at different focus levels [4] using proprietary software, greatly increasing the contrast. In [5] a similar approach was taken, but the phase map was measured using lowpass digital filtering, followed by a computationally expensive level set based segmentation of individual cells. Texture analysis methods have also been used for bright field cell detection, such as the method presented in [6], where cell contours were extracted after an initial segmentation. For round cells with fairly high contrast borders, such as yeast, multiple algorithms are available [7]–[9]. In cell tracking, bright field cell segmentation is often presented as a preprocessing step followed by the actual tracking algorithm [10]. Utilizing bright field images of fairly high contrast, it has also been shown that different cell types can be classified without fluorescent stains [11]. Finally, special microscopy techniques such as digital holography [12] have been used instead of fluorescent staining.
We introduce and validate z-projection based methods for replacing whole cell fluorescent staining with bright field microscopy. In the presented approaches the cells are imaged at several different focal planes, as in [5] and [4], but instead of solving for the phase map, we measure the intensity variation along the z-dimension of the bright field stack, creating a new 2-D image for analysis. The pixel intensities inside the cells vary as the focus changes, while the background intensity stays relatively constant throughout the stack, resulting in high variation inside the cells but almost none outside. Therefore, in the resulting projections the cells appear as bright objects on an essentially black background, enabling us to replace the fluorescence image of whole cell staining with this bright field projection. In comparison to previous bright field based cell segmentation techniques in the literature, this approach is more straightforward to implement, and the resulting bright field projection image is directly applicable for segmentation using the CellProfiler [2] analysis software designed for fluorescence microscopy. Furthermore, with the exception of a preprocessing step with image filtering, no parameters need to be set when calculating the projection. As validation, we apply the technique to segmentation of mouse bone marrow derived macrophage cells with complex shapes and very low contrast. Phase contrast and differential interference contrast (DIC) microscopy offer increased contrast through special optics, but to the best of our knowledge there is no work in the literature suggesting that standard cell segmentation algorithms for fluorescence microscopy are applicable to phase or DIC images, or that robust segmentation of cells with irregular shapes is possible for large sets of images.
The resulting projections are shown to enable whole cell segmentation if only nuclear staining or another marker, such as a manual mark for each cell, is available, removing the need for an additional fluorescent channel for whole cell detection.
To evaluate the performance of the projection based methods, we acquired test image data by culturing and imaging bone marrow macrophages (BMM). The macrophages, isolated from BL6 mice, were cultured on glass cover slips in RPMI medium supplemented with 10% fetal bovine serum, 100 U/ml penicillin, 100 µg/ml streptomycin, 2 mM GlutaMAX and 50 ng/ml M-CSF (37°C, 5% CO2). The cells were stimulated with 100 ng/ml LPS for 1, 2, 4, 6, 18, and 24 hours, fixed with 3% paraformaldehyde for 20 min, and stained with BODIPY 493/503 (Invitrogen) for lipid bodies and Sytox (Invitrogen) for nuclei. Unstimulated macrophages as well as the stimulated cells at the different time points were imaged with a Leica DMIRB confocal laser scanning microscope.
The image stacks form eight groups with varying cell morphologies: two image sets of unstimulated macrophage cells, and a time series experiment with six groups of macrophage images from different time points during stimulation. For each group there are five image stacks, each consisting of three channels: 1. fluorescent nuclei, 2. a fluorescent subcellular stain for lipid bodies that also visualizes the cytoplasm, and 3. the bright field channel. Each stack for every channel consists of 20 individual z-slices. One stack for each channel of one time point had to be removed because it was erroneously imaged as a single slice instead of a stack. In total, the test data set includes nearly 800 cells.
To enable whole cell segmentation from bright field images, the contrast must be enhanced by increasing the intensity differences between cell and background areas. We achieve this by calculating different measures of variation in the z-direction, projecting the bright field stacks into two-dimensional (2-D) images. That is, each pixel in a resulting 2-D projection corresponds to a measure of intensity variation in the z-direction of the original stack at that specific (x, y) pixel location. Since there is typically less z intensity variation in the background than in the cells, these two classes of pixels can be separated. Specifically, we compute the projections using the standard deviation (STD), interquartile range (IQR), coefficient of variation (CV), and median absolute deviation (MAD) measures.
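As a concrete illustration, the four variation measures can be sketched for a single pixel's z-profile in plain Python. The paper's own implementation is in Matlab; this is an illustrative re-implementation, and details the text does not pin down (population vs. sample standard deviation, the quartile interpolation rule) are assumptions here.

```python
import statistics

def projections(z_profile):
    """Variation measures of one (x, y) pixel's intensities across the
    z-slices; each measure yields one pixel of the corresponding 2-D
    projection image."""
    n = len(z_profile)
    mean = statistics.fmean(z_profile)
    # STD: population standard deviation over the z-slices
    std = (sum((v - mean) ** 2 for v in z_profile) / n) ** 0.5
    # IQR: difference between the 75th and 25th percentiles
    q = statistics.quantiles(z_profile, n=4)
    iqr = q[2] - q[0]
    # CV: standard deviation normalised by the mean intensity
    cv = std / mean
    # MAD: median absolute deviation from the median
    med = statistics.median(z_profile)
    mad = statistics.median(abs(v - med) for v in z_profile)
    return std, iqr, cv, mad
```

Applying this to every (x, y) location of the stack yields the four projection images; background pixels, whose intensity barely changes with focus, come out near zero.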
The STD projection image is constructed by calculating the standard deviation of intensities in the z-direction for each pixel of the original stack:

$$\mathrm{STD}(x,y) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(I_i(x,y) - \bar{I}(x,y)\right)^2}$$

where $I_i(x,y)$ is the pixel intensity of z-slice $i$, $\bar{I}(x,y)$ is the mean of the pixel intensities, and $N$ is the total number of z-slices.
For a more robust measure of variation we calculated the IQR projection, the difference between the 75th and the 25th percentiles of the sample. That is, the lowest 25% and highest 25% of the values are first discarded, and the IQR is the range between the maximum and minimum of the remaining intensities of the z-slices.
In the CV projection, the standard deviation of the z-values is divided by their mean:

$$\mathrm{CV}(x,y) = \frac{\mathrm{STD}(x,y)}{\bar{I}(x,y)}$$
And finally, MAD measures how much "on average" one value deviates from the median of all the values, that is, the median deviation from the median of the intensities of all the z-slices for every (x, y) pixel location:

$$\mathrm{MAD}(x,y) = \operatorname{median}_i\left(\left|I_i(x,y) - \operatorname{median}_j\left(I_j(x,y)\right)\right|\right)$$

where $i, j = 1, \ldots, N$.
To assess the projections' sensitivity to the number of z-slices imaged for each stack, we applied the STD projection to two different types of reduced stacks consisting of only three slices. First, the three slices were selected by hand to span nearly the whole z-range of the original stack (slices 2, 10 and 19), referred to as the 3Slices method. Second, we created five reduced versions of the original stacks by selecting the three slices randomly, referred to as 3SlicesRandom1 to 3SlicesRandom5.
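The two reduced-stack constructions can be sketched as follows; `reduced_stacks` is a hypothetical helper (slice numbers are 1-indexed in the text, 0-indexed in the code):

```python
import random

def reduced_stacks(stack, seed=None):
    """Build the hand-picked 3Slices subset (slices 2, 10 and 19 of a
    20-slice stack) and one random three-slice subset (3SlicesRandom)."""
    hand_picked = [stack[1], stack[9], stack[18]]   # slices 2, 10, 19 (1-indexed)
    rng = random.Random(seed)
    idx = sorted(rng.sample(range(len(stack)), 3))  # three distinct random slices
    random_subset = [stack[i] for i in idx]
    return hand_picked, random_subset
```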
The automated image analysis and cell segmentation for the evaluation of the various projection methods was carried out with the open source CellProfiler software package [2], originally designed for fluorescence microscopy. First, markers for each cell were obtained by detecting fluorescent nuclei with the IdentifyPrimAutomatic analysis module. Second, to smooth out small unwanted details in the projections, a Gaussian lowpass filter was applied with the SmoothOrEnhance module. Third, we used the propagation algorithm [13] in the IdentifySecondaryAutomatic module for detecting the whole cell areas. For ground truth, the whole cell areas were segmented with the same procedure (excluding the lowpass filter) using the fluorescent cytoplasm images, to be compared against cell area detection using the various 2-D projections. To simulate a situation where no fluorescent staining is available, the cytoplasmic areas were also estimated by an annulus of radius 30 pixels around each nucleus, as described, for example, in [14]. This estimation approach is referred to as the Annulus method.
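The Gaussian lowpass preprocessing step can be sketched as a separable filter in plain Python. This is an illustrative stand-in for the SmoothOrEnhance module, not its actual implementation; the sigma-from-radius rule and the clamped borders are assumptions.

```python
import math

def gaussian_kernel(radius, sigma=None):
    """Normalised 1-D Gaussian kernel of length 2*radius + 1.
    sigma defaults to radius / 2 (an assumed convention)."""
    sigma = sigma if sigma is not None else radius / 2.0
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def gaussian_smooth(image, radius):
    """Separable Gaussian lowpass filter of a 2-D image (list of rows);
    borders are handled by clamping to the nearest edge pixel."""
    kernel = gaussian_kernel(radius)

    def conv(row):
        n = len(row)
        return [sum(kernel[k] * row[min(max(x + k - radius, 0), n - 1)]
                    for k in range(len(kernel)))
                for x in range(n)]

    rows = [conv(r) for r in image]             # horizontal pass
    cols = [conv(list(c)) for c in zip(*rows)]  # vertical pass on columns
    return [list(r) for r in zip(*cols)]        # transpose back to rows
```

Because the Gaussian is separable, two 1-D passes give the same result as a full 2-D convolution at a fraction of the cost.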
For further validation, we also enumerated fluorescent spots visible in the second fluorescent channel of the stacks. The spot enumeration was done with a kernel density estimation based algorithm [15] using a Gaussian kernel. Since this spot enumeration module is not included in the standard CellProfiler distribution, we implemented the analysis pipeline in the Developer's Version of CellProfiler, running on Matlab 2008a. The various approaches for whole cell segmentation are summarized in Table 1.
Table 1
Summary of different whole cell segmentation methods and abbreviations.
We did not discard cells touching the image borders, although this is commonly done to minimize bias caused by cells that are only partly visible. These cells allow us to also compare segmentation accuracy at the image borders, where image quality is often compromised due to nonuniform background. The computational complexity of the analysis is relatively low, taking around 4 seconds per method to calculate the projection and segment the image on a 2 GHz PC running Windows Vista.
As described in the previous section, we projected stacks of bright field images into 2-D using various measures of z-variation in the stack, with the aim of replacing whole cell fluorescent staining. This procedure is outlined in Fig. 1, where markers for each cell are detected from fluorescence, or marked by hand, with two alternative methods for whole cell detection: fluorescence and the projections. Fig. 2 illustrates the contrast improvement achieved by one of the projection approaches (STD). Fig. 2A shows one slice of the original bright field image, while the fluorescence staining, the proposed STD projection, and the inverse of the projection are presented in Fig. 2B, 2C and 2D, respectively. The difference in contrast between the projection (2C) and the original bright field data (2A) is easily noticeable; furthermore, since the deviation in background intensities is similar in all the z-slices, the nonuniform background is efficiently removed by the projection. The projections by all the methods for all the stacks are given at the supplementary www-pages.
Figure 1
Flowchart of the cell segmentation procedure.
Figure 2
Contrast enhancement by standard deviation projection of bright field image stack.
To assess the performance of the projection method, we compared automated image segmentation of whole cell areas from fluorescently stained cells to the bright field projections, and to the Annulus method, where the cytoplasm areas were estimated by annuli around the detected nuclei. We were unable to segment the cells of our whole dataset using the best previously published method for segmenting complex cell shapes in bright field images [5], and therefore had to leave it out of this comparison. Fig. 3 illustrates one segmentation comparison, after image analysis by the CellProfiler software. Fig. 3A presents the whole cell segmentation result using fluorescence (Fig. 2B), and in Fig. 3B the whole cell areas were detected from the projected bright field stack (Fig. 2C). Fig. 3C shows the annuli around nuclei resulting from the Annulus method. All the methods use fluorescent nuclei as markers for each cell, around which the whole cell areas are located.
Figure 3
Whole cell segmentation using different input data.
To quantify the segmentation accuracy for all the image stacks of the time series experiment, we measured the precision

$$\mathrm{precision} = \frac{TP}{TP + FP}$$

and recall

$$\mathrm{recall} = \frac{TP}{TP + FN}$$

where $TP$, $FP$, and $FN$ are the numbers of detected true positive, false positive, and false negative pixels, respectively [16]. Perfect precision would indicate that all the pixels detected by the method under testing (the different bright field projections) are also present in the ground truth segmentation result (fluorescence). Perfect recall, on the other hand, would indicate that no pixels of the fluorescence image are missed by using the bright field projection image.
For a more compact representation of the segmentation accuracy we computed the F-score [16]:

$$F = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$$

that is, the harmonic mean of precision and recall. An F-score of 1 corresponds to perfect segmentation accuracy.
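For two segmentation masks flattened to 0/1 sequences, these three scores can be computed directly; a minimal sketch of the pixel-wise comparison (assuming at least one positive pixel on each side, so no division by zero):

```python
def segmentation_scores(predicted, truth):
    """Pixel-wise precision, recall and F-score of a binary segmentation
    (`predicted`) against a binary ground-truth mask (`truth`),
    both given as flat sequences of 0/1 values."""
    tp = sum(1 for p, t in zip(predicted, truth) if p and t)
    fp = sum(1 for p, t in zip(predicted, truth) if p and not t)
    fn = sum(1 for p, t in zip(predicted, truth) if not p and t)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score
```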
Fig. 4A presents the per-cell segmentation F-score medians over all cells for the different projection methods against the fluorescence ground truth. Furthermore, the segmentation results for the STD projection of the 3Slices set, with only three hand-picked z-slices, are given, as well as the F-score for the Annulus method. Fig. 4B gives the segmentation results of the STD projection for 3SlicesRandom1 to 5, assessing the effect of random z-slice selection from the stack.
Figure 4
Pixel-by-pixel comparison of whole cell segmentation using bright field projections against fluorescence ground truth.
With our data set consisting of nearly 800 macrophage cells with highly complex morphologies, the overall performance of the projection methods was close to the ground truth fluorescence staining, with the median F-score fluctuating around 0.8. As expected, the F-score is consistently lower for the Annulus method. More extensive plots, including F-score boxplots for each method, are given at the supplementary site. The supplementary boxplots show a number of outliers in each of the eight groups, for all the projection methods. In comparison to the whole dataset, the number of outliers is limited, and their effect can be reduced, for example, by discarding the corresponding cells from further analysis, just as cells that are clumped too tightly together often need to be removed from automated segmentation results. As seen from the segmentation result images (supplement), the outliers were caused by segmentation errors overestimating the whole cell areas, suggesting that cell area is a suitable feature for discarding these outliers if necessary.
To evaluate whether the outliers and other variations in the cell segmentation results affect the biological conclusions drawn from the data, we compared subcellular spot counts on a single cell level. By utilizing the second fluorescent channel where lipid bodies are emphasized as bright spots, we first detected the spots in the images (spot detection results for all images available in the supplement site). Then, based on the whole cell segmentation by all the projection approaches, we determined the cell to which each spot belongs. Finally, we discarded the spots outside the detected cells. This procedure enables us to estimate the effect of the different whole cell detection methods on the actual biological conclusions (spot counts per cell), since if the whole cell area detection differs dramatically from the fluorescence ground truth cell area, the numbers of spots detected in these erroneously segmented cells also change. If there is no change in spot counts, the whole cell detection is considered to have worked satisfactorily.
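The spot-to-cell assignment described above can be sketched as a lookup into the label image produced by the segmentation (an illustrative helper; the coordinate convention and the use of label 0 for background are assumptions):

```python
def assign_spots(label_image, spots):
    """Count spots per cell. `label_image` is a list of rows where
    background pixels are 0 and pixels of cell k carry the label k;
    `spots` is a list of (x, y) spot centres. Spots falling on the
    background (outside every detected cell) are discarded."""
    counts = {}
    for x, y in spots:
        label = label_image[y][x]
        if label != 0:  # keep only spots inside a detected cell
            counts[label] = counts.get(label, 0) + 1
    return counts
```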
The results for this experiment are given in Fig. 5, where 5A shows the average spot counts per cell in each image for the different projection techniques, and in 5B the spot per cell enumeration is presented for the standard deviation projections of sets 3SlicesRandom1 to 5. With all the projection methods the spot count per cell increases over time, as previously reported in the literature [17].
Figure 5
Spot enumeration, average number of spots per cell.
Since each spot was assigned to a specific cell, we also compared the spot per cell counts for each individual cell for further validation. Fig. 6A shows a scatter plot and a regression curve obtained with linear least squares regression [18] of spot counts per each individual cell, for ground truth fluorescence against the STD projection. Overlapping data points are indicated with different colors. For clarity, in Fig. 6B only the regression lines are given for all the projections, with all the scatter plots available in the supplementary pages. Similarly to the previous plots, Fig. 6C illustrates the regression results for 3SlicesRandom1 to 5 against the ground truth fluorescence, with all the scatter plots again available as a supplement. The results of the spot-per-cell analysis are summarized in Table 2 listing the spot count slopes and biases for the different methods against ground truth. All the regression results except Annulus and the STD projection of 3SlicesRandom3 show a near perfect match between cell-by-cell spot counts by projections and fluorescence segmentation.
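The slope and bias reported in Table 2 come from an ordinary least squares line fit, which for paired per-cell spot counts reduces to a few lines (a minimal sketch of the regression in [18]):

```python
def fit_line(x, y):
    """Ordinary least squares fit y ≈ slope * x + bias for paired
    per-cell spot counts (x: fluorescence ground truth, y: projection)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    bias = my - slope * mx
    return slope, bias
```

A slope near 1 and a bias near 0 indicate that the projection-based segmentation reproduces the per-cell spot counts of the fluorescence ground truth.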
Figure 6
Cell by cell spot enumeration.
Table 2
Slopes and biases of spot per cell counts for all methods.
We have presented and evaluated different z-projection methods for contrast enhancement in bright field image stacks, and shown that the projection approach can replace whole cell fluorescent staining for our set of macrophage images. In single cell detection and segmentation, our method has several advantages over previously presented bright field based techniques. Firstly, the projection images can be used directly for whole cell segmentation in the freeware CellProfiler software or other tools. Secondly, among the different projection methods tested, the standard deviation projection is computationally very light and trivial to implement, requires no parameters to be set, and still offers excellent segmentation performance. Thirdly, we have successfully applied the whole cell detection method to macrophages, a cell type of high morphological complexity with various protrusions and low contrast. Fourthly, the segmentation results with randomly selected z-slices suggest that precise focusing is not critical. And finally, background intensity variations have no effect on the resulting projection images. The drawback of our approach is the need to take three images instead of one, requiring a rather fast stage in live cell imaging to acquire the images without cell movement; in addition, the current segmentation results include outliers resulting from erroneous whole cell detection. Storage requirements, on the other hand, are not increased, since only the projection images must be stored for analysis.
Further studies are needed to assess the generality of the projection approach. We only used images of one cell type, with low contrast all around the cells and without clearly visible cell borders. Halo effects, present in bright field images of many other cell types, for example yeast, might be erroneously emphasized in the projections. Furthermore, it would be interesting to study the segmentation performance with various cell densities and different imaging setups, and to search for optimal conditions for the imaging and subsequent analysis. Many different approaches could also be tested for preprocessing; in this work the standard Gaussian filter was found adequate, but no rigorous parameter optimization or method comparison was performed.
To fully automate bright field cell segmentation, the markers for each cell need to be located without fluorescent nuclei, but to the best of our knowledge there are no robust bright field based methods in the literature. The markers could also be set manually, but especially in high throughput studies a manual approach is not realistic. In certain studies where the cells have a very distinctive shape, such as bacteria or yeast cells, the object separation could be done based on cell shape, removing the need for a nuclear marker and thus for fluorescence altogether.
Bright field images are not the only stacks where the standard deviation or other projections should be studied in more detail. In fluorescence microscopy, the studied phenomenon is often visible as subcellular spots whose intensities vary with the z-level. This suggests that the spots may be more visible in standard deviation projections than with the commonly used mean and maximum projections. The projection approach is also not limited to cellular objects, and any nearly transparent target should benefit from the increased contrast without the need for special optics.
The authors would like to thank Prof. Aimée M. Dudley, and Cecilia Garmendia-Torres, PhD, for discussions and advice during the preparation of this article. We are also grateful to Tarmo Äijö for implementing the fluorescent spot detection algorithm.
Competing Interests: The authors have declared that no competing interests exist.
Funding: This work was supported by the National Institute of Science and Technology (70NANB8H8117), NIH/NIGMS (P50 GM076547), the Academy of Finland (213462), and by the National Technology Agency of Finland. Support by the Tampere University of Technology Graduate School, the Tampere Graduate School in Information Science and Engineering, and the Nokia Science Foundation is acknowledged. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
1. Moffat J, Grueneberg DA, Yang X, Kim SY, Kloepfer AM, et al. A lentiviral RNAi library for human and mouse genes applied to an arrayed viral high-content screen. Cell. 2006;124:1283–1298. [PubMed]
2. Carpenter AE, Jones TR, Lamprecht MR, Clarke C, Kang IH, et al. CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol. 2006;7:R100. [PMC free article] [PubMed]
3. Bolte S, Cordelières FP. A guided tour into subcellular colocalization analysis in light microscopy. J Microsc. 2006;224:213–232. [PubMed]
4. Curl CL, Bellair CJ, Harris T, Allman BE, Harris PJ, et al. Refractive index measurement in viable cells using quantitative phase-amplitude microscopy and confocal microscopy. Cytometry A. 2005;65:88–92. [PubMed]
5. Ali R, Gooding M, Christlieb M, Brady M. Advanced phase-based segmentation of multiple cells from brightfield microscopy images. Proc. 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro ISBI 2008. 2008. pp. 181–184.
6. Korzynska A, Strojny W, Hoppe A, Wertheim D, Hoser P. Segmentation of microscope images of living cells. Pattern Anal Appl. 2007;10:301–319.
7. Niemistö A, Korpelainen T, Saleem R, Yli-Harja O, Aitchison J, et al. A K-means segmentation method for finding 2-D object areas based on 3-D image stacks obtained by confocal microscopy. Proc. 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society EMBS 2007. 2007. pp. 5559–5562. [PubMed]
8. Gordon A, Colman-Lerner A, Chin TE, Benjamin KR, Yu RC, et al. Single-cell quantification of molecules and rates using open-source microscope-based cytometry. Nat Methods. 2007;4:175–181. [PubMed]
9. Kvarnström M, Logg K, Diez A, Bodvard K, Käll M. Image analysis algorithms for cell contour recognition in budding yeast. Opt Express. 2008;16:12943–12957. [PubMed]
10. Zimmer C, Zhang B, Dufour A, Thebaud A, Berlemont S, et al. On the digital trail of mobile cells. IEEE Signal Proc Mag. 2006;23:54–62.
11. Long X, Cleveland WL, Yao YL. Multiclass cell detection in bright field images of cell mixtures with ECOC probability estimation. Image Vision Comput. 2008;26:578–591.
12. Mölder A, Sebesta M, Gustafsson M, Gisselson L, Wingren AG, et al. Non-invasive, label-free cell counting and quantitative analysis of adherent cells using digital holography. J Microsc. 2008;232:240–247. [PubMed]
13. Jones T, Carpenter A, Golland P. Voronoi-based segmentation of cells on image manifolds. Lect Notes in Comput Sc. 2005;3765:535–543.
14. Schlumberger MC, Käppeli R, Wetter M, Müller AJ, Misselwitz B, et al. Two newly identified SipA domains (F1, F2) steer effector protein localization and contribute to Salmonella host cell manipulation. Mol Microbiol. 2007;65:741–760. [PubMed]
15. Chen TB, Lu HH, Lee YS, Lan HJ. Segmentation of cDNA microarray images by kernel density estimation. J Biomed Inform. 2008;41:1021–1027. [PubMed]
16. Fawcett T. An introduction to ROC analysis. Pattern Recogn Lett. 2006;27:861–874.
17. Pacheco P, de Abreu AV, Gomes RN, Barbosa-Lima G, Wermelinger LB, et al. Monocyte chemoattractant protein-1/CC chemokine ligand 2 controls microtubule-driven biogenesis and leukotriene b4-synthesizing function of macrophage lipid bodies elicited by innate immune response. J Immunol. 2007;179:8500–8508. [PubMed]
18. Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York, NY, USA: Springer; 2001.