Lens-free holographic on-chip imaging is an emerging approach that offers both wide field-of-view (FOV) and high spatial resolution in a cost-effective and compact design using source shifting based pixel super-resolution. However, color imaging has remained relatively immature for lens-free on-chip imaging, since a ‘rainbow’ like color artifact appears in reconstructed holographic images. To provide a solution for pixel super-resolved color imaging on a chip, here we introduce and compare the performances of two computational methods based on (1) YUV color space averaging, and (2) Dijkstra’s shortest path, both of which eliminate color artifacts in reconstructed images, without compromising the spatial resolution or the wide FOV of lens-free on-chip microscopes. To demonstrate the potential of this lens-free color microscope we imaged stained Papanicolaou (Pap) smears over a wide FOV of ~14 mm2 with sub-micron spatial resolution.
Optical microscopy has been serving engineers, scientists and medical experts for decades. Its ease of use and real-time imaging capabilities have made the microscope an irreplaceable tool. However, even the optical microscope has its shortcomings, such as limited field-of-view (FOV), bulkiness, and the relatively high cost of quality optical components such as objective lenses. In the meantime, the digital revolution that we have been experiencing over the last decades provides powerful and yet cost-effective resources and components that can be harnessed by computational methods to address some of the shortcomings of conventional microscopy tools [1–29].
Among these emerging computational methods, lens-free imaging has been gaining significant attention since it does not require the use of any lenses or bulky optical components to render an image [6,7,30–45]. Lens-free holographic on-chip microscopes that are based on partially coherent illumination form an interesting subgroup of such lens-free imagers [6,39–41], in which the distance between the sample and the image sensor (Z2, see Fig. 1) is typically less than a millimeter, while the distance between the illumination source and the sample plane (Z1) is relatively large (for example 5-10 cm). This unique imaging geometry gives rise to important properties: (1) a wide FOV that is equal to the active area of the image sensor chip, as this microscope works with unit fringe magnification; (2) the illumination aperture does not need to be sub-micron sized, and can actually be significantly widened (e.g., 50-100 µm). As a result, the smearing effect of the illumination aperture function on spatial resolution is demagnified by Z1/Z2, which remarkably simplifies the microscope design since mechanical fine alignment and focusing of the source onto the aperture are not necessary; and (3) the sensor chip samples an in-line hologram of the object even though the illumination is partially coherent, both spatially and temporally. As a matter of fact, the partial coherence of the illumination can be fine-tuned to significantly reduce speckle noise and multiple reflection interference artifacts while a high numerical aperture (NA) of e.g., 0.8-0.9 can still be maintained, across an object FOV of e.g., >20 mm2 [39,40]. These in-line holograms can then be reconstructed, allowing e.g., digital focusing capability or the ability to localize objects with sub-micron tracking accuracy within large volumes [42–44].
The spatial resolution of such lens-free on-chip microscopes is fundamentally related to the pixel-pitch of the image sensor chip since the holographic fringes are sampled without magnification. However, using pixel super-resolution techniques that are based on source shifting one can considerably improve the spatial resolution of the reconstructed images. Since Z1 >> Z2 for our on-chip imaging geometry (Fig. 1), a small shift in the aperture of a partially coherent source would result in highly demagnified translation of the holographic pattern of the object on the sensor plane. Since the object cross section remains the same, each one of these undersampled lens-free in-line holograms can be merged together to digitally create a single hologram with a much smaller effective pixel size [6,7,39,40,45].
Despite these advances, color imaging using a lens-free holographic microscope is still relatively immature. Color has a paramount role in biomedical imaging; for example, color staining acts as a contrast mechanism to differentiate various cell types. Moreover, color has an essential psychological effect, as the end users of imaging systems such as pathologists and cytotechnologists are accustomed to observing specimens in color. In general, color images in lens-free holographic microscopy can be rendered by acquiring three high-resolution holograms of the same object [47–50], each with a different illumination wavelength, typically Red, Green and Blue (RGB). Combining these holographic images with or without additional processing can create an RGB image of the object [51–55]. However, a ‘rainbow’ like color artifact appears in the resulting RGB image [see for example Fig. 2(a)], which cannot be entirely removed even after performing object-support based phase-recovery [see Fig. 2(b)]. Similar rainbow like color artifacts also exist in lens-based holographic color imaging techniques.
The relative strength of the ‘rainbow’ color artifact in digital holography depends on the image acquisition and reconstruction schemes. By and large, any noise terms (e.g., speckle noise, multiple reflection interference terms) or reconstruction artifacts that vary their spatial patterns/signatures as a function of the illumination wavelength would create ‘rainbow’ like color noise as different color holograms (e.g., red, green and blue) are reconstructed and digitally super-imposed to create a color image. More specific to digital in-line holography [51–53], a significant source of this rainbow artifact is the twin image noise, which exhibits different ripple frequencies at different illumination and reconstruction wavelengths. Since the twin image artifact is nothing but the residue of a defocused (i.e., diffracted) object function, its physical dependency on the wavelength of light is due to free space diffraction of light that is scattered from an object. Similar to how a grating would disperse different colors of light, an object’s twin image artifact or its residue will also exhibit wavelength dependent diffraction patterns. As a result, when three reconstructed holograms, acquired with e.g., red, green and blue illumination wavelengths, are combined to form an RGB image, the superposition of the twin-image noise or its residues would create ‘rainbow’ like color artifacts. In addition to the twin image, the coherence of illumination, both spatial and temporal, might also contribute to the ‘rainbow’ color noise observed in holographic images. For instance, speckle noise and multiple reflection interference (due to partial reflections that occur at e.g., substrate-air interfaces) are also functions of the illumination wavelength, and would therefore create similar ‘rainbow’ like artifacts if different color holographic images are directly merged together [54,55].
Here we introduce and compare two new methods to eliminate ‘rainbow’ like color artifacts in lens-free holographic on-chip microscopy, without compromising the spatial resolution or wide FOV of lens-free reconstructed images [see Figs. 2(c) and 2(d)]. The first method (see Fig. 3) averages only the color components of an image, while preserving the brightness (gray-scale) component. This can be realized by transforming the RGB image to a different color space such as the YUV color space, which separates the brightness component of an image from its color components. In the second method (see Fig. 4), a colorization algorithm that relies on Dijkstra’s distances to propagate colors from automatically generated color patches to the entire FOV is utilized to create a lens-free color image [57–59]. By using either one of these methods together with pixel super-resolution and multi-height phase-recovery approaches [60–64], lens-free on-chip color microscopy can provide wide FOV (~14 mm2) images with sub-micron spatial resolution and accurate color reproduction. In addition to removal of the rainbow color artifacts, with both of these colorization approaches the image acquisition and processing times are improved by a factor of ~3 compared to obtaining high-resolution holograms at each color channel (red, green and blue). To demonstrate the color imaging capability of this on-chip microscopy platform, Papanicolaou (Pap) smears were successfully imaged. The significantly improved color rendering capability of our lens-free pixel super-resolution microscopy platform opens up new avenues for wide-field imaging of stained samples that are commonly used in e.g., diagnostics or biomedical research.
Our experimental set-up is shown in Fig. 1. The partially coherent illumination is provided through a Xenon lamp (Newport, 69911) attached to a monochromator (Newport, 74100), which enables tuning of the illumination wavelength and its bandwidth (~2.7-20 nm). The output of the monochromator is coupled to a multi-mode fiber with 100 μm core diameter (Thorlabs, AFS-105/125Y). The fiber tip is positioned on a micro-controlled X-Y stage (Newport, MFA-PPD), which is laterally translated to perform pixel super-resolution.
A single lower resolution in-line hologram is formed and sampled as follows: the partially coherent illumination light from the fiber tip vertically propagates a distance of ~7 cm (Z1) and impinges on the specimen that is positioned in close proximity (~250-600 μm, Z2 distance) to the color image sensor (Sony, pixel-pitch: 1.12 μm, 16.4 mega-pixels). The transmitted light from the specimen diffracts and interferes with the background illumination. During the hologram acquisition for color channels, the monochromator slits are fully opened, thus resulting in an illumination bandwidth of ~20 nm. In contrast, during the sub-pixel shifted hologram acquisition process (to synthesize the super-resolved brightness channel), the monochromator slits are closed until the illumination bandwidth reduces to ~3 nm, satisfying the temporal coherence requirement that is essential for super-resolution.
Pixel super-resolution is a computational method that mitigates under-sampling related issues, which degrade the resolution and image quality of lens-free holograms [6,45,65–68]. To this end, pixel super-resolution techniques synthesize one high-resolution image from multiple (for example 6x6 or 5x5) low-resolution images of the same object FOV. The success of the pixel super-resolution algorithm relies on the fact that each low-resolution image is slightly shifted compared to the other low-resolution images; therefore each lens-free in-line hologram, after going through under-sampling at the sensor plane, contains different information about the object FOV that can be fused into one high-resolution hologram. In our lens-free imaging set-up, the shifts between the different low-resolution holograms are obtained by laterally shifting the light source using a coarse X-Y stage (Fig. 1 upper-left inset). Because Z1 >> Z2, a coarse and therefore unknown X-Y shift at the illumination aperture plane corresponds to a highly demagnified shift of the hologram at the detector plane. After these shifted low-resolution holograms are acquired, the lateral shifts between the under-sampled holograms are digitally estimated using an iterative gradient method and the in-line holograms are fused together using a non-iterative process that preserves the optimality of the reconstruction in the maximum-likelihood sense. The resulting high-resolution hologram, which we refer to as “pixel super-resolved”, is equivalent to an in-line hologram that was sampled/imaged by a sensor chip with a significantly smaller effective pixel size. The above outlined pixel super-resolution approach synthesizes high-resolution holograms regardless of the image sensor type, i.e., monochrome or color.
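The fusion step described above can be illustrated with a minimal shift-and-add sketch in numpy. This is a simplified stand-in for the maximum-likelihood fusion used in the paper: it assumes the sub-pixel shifts have already been estimated (e.g., by the iterative gradient method) and simply bins each low-resolution frame onto the nearest node of a finer sampling grid.

```python
import numpy as np

def shift_and_add_sr(low_res_stack, shifts, factor):
    """Fuse sub-pixel-shifted low-resolution frames onto a finer grid.

    low_res_stack : list of 2D arrays (the shifted low-resolution holograms)
    shifts        : list of (dy, dx) sub-pixel shifts, in low-res pixel units
    factor        : super-resolution factor (e.g., 6 for a 6x6 source scan)
    """
    h, w = low_res_stack[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(low_res_stack, shifts):
        # Round each frame's sub-pixel shift to the nearest high-res grid offset.
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1  # grid nodes never visited simply keep a zero value
    return acc / cnt
```

In practice the source shifts are coarse and non-uniform, which is why the actual implementation estimates the shifts from the data and uses a statistically optimal fusion rather than this nearest-node binning.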
Here, in order to create super-resolved holograms using a color (e.g., RGB) sensor chip (Sony), minor modifications need to be implemented [40,67]: the green illumination wavelength is used to optimize the reconstruction process, since the Bayer pattern contains two green pixels in each period of the color sensor-array. The lateral shifts were estimated after rotating the green pixels as described in , and the lower resolution holograms from only the green pixels were fused together according to the method reported in .
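As a small illustration of why the green pixels are treated separately, the sketch below extracts the two green samples from each 2x2 Bayer period. The RGGB layout assumed here is illustrative; the actual pattern depends on the sensor, and note that the greens fall on a quincunx (a 45°-rotated grid), which is why the rotation step mentioned above is needed before shift estimation.

```python
import numpy as np

def green_channel(raw):
    """Pull the two green samples out of each 2x2 Bayer period (RGGB assumed).

    In an RGGB mosaic the greens sit at positions (0,1) and (1,0) of every
    2x2 cell, i.e., on a checkerboard rotated by 45 degrees relative to the
    pixel grid.
    """
    g1 = raw[0::2, 1::2]  # green pixels on the red rows
    g2 = raw[1::2, 0::2]  # green pixels on the blue rows
    return g1, g2
```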
The high-resolution (i.e., pixel super-resolved) in-line holograms need to be reconstructed, which can be achieved by multiplying these in-line holograms with a reference wave. This reference can be approximated in our holographic set-up with a plane wave since Z1 is much larger than the width of our FOV. Then these in-line holograms can be digitally focused or back propagated to the object plane by using the angular spectrum approach. To eliminate the twin image noise, which is a common artifact of in-line holography, or equivalently to recover the phase information lost during the hologram recording process, we utilize a multi-height phase-recovery method which is especially suitable for dense and connected specimens such as blood smears or Pap tests [60–64]. In this phase-recovery method, several intensity measurements leading to pixel super-resolved holograms are acquired, each with a different sample to sensor distance, i.e., Z2 (see Fig. 1). To change this Z2 distance, glass cover slips with different thicknesses (e.g., 130-250 μm, Fisher Scientific, 12-548B or 12-540C) are placed between the sample and the image sensor planes. The resulting pixel super-resolved holograms from different heights are then digitally registered to each other, mitigating possible rotations and shifts of the sample with respect to the imager chip, which is followed by an iterative phase-recovery algorithm. In this iterative algorithm, the pixel super-resolved holograms are propagated back and forth between different heights, and at each height the algorithm enforces the measured amplitude of the holographic field while keeping the phase resulting from the previous iteration. After a few (typically ~5-10) iterations, this algorithm converges and the recovered lens-free image contains both amplitude and phase information.
For this recovery process, the different sample to sensor (Z2) distances used in our experiments do not need to be known a priori, as an autofocusing algorithm is used to estimate them digitally.
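The two building blocks described above, angular spectrum propagation and the iterative amplitude-enforcement loop, can be sketched as follows. This is a bare-bones illustration: the function names, starting phase (zero), and iteration count are assumptions, and the registration and autofocusing steps are omitted.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # evanescent components dropped
    H = np.exp(1j * kz * z)                        # transfer function of free space
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_height_phase_recovery(amplitudes, z_list, wavelength, dx, n_iter=10):
    """Ping-pong between heights, enforcing each measured amplitude and
    keeping the phase from the previous propagation step."""
    field = amplitudes[0].astype(complex)  # initial guess: zero phase at height 0
    for _ in range(n_iter):
        for i in range(1, len(z_list)):    # forward pass through the heights
            field = angular_spectrum(field, wavelength, dx, z_list[i] - z_list[i - 1])
            field = amplitudes[i] * np.exp(1j * np.angle(field))
        for i in range(len(z_list) - 2, -1, -1):  # backward pass
            field = angular_spectrum(field, wavelength, dx, z_list[i] - z_list[i + 1])
            field = amplitudes[i] * np.exp(1j * np.angle(field))
    # finally back-propagate from the first sensor height to the object plane
    return angular_spectrum(field, wavelength, dx, -z_list[0])
```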
To mitigate the ‘rainbow’ color artifact in the reconstructed holographic images [see e.g., Figs. 2(a)-2(b)], in our image acquisition scheme first a super-resolved, multi-height phase-recovered holographic image is obtained with only one illumination wavelength (λ = 530 nm) using 6 x 6 = 36 shifts of the source aperture. This super-resolved image provides the high-resolution brightness component (Y) of our lens-free color image [see Fig. 3(a)]. To obtain the color information, three lower resolution holograms (i.e., without pixel super-resolution) at three different illumination wavelengths (λ = 460 nm, 530 nm and 630 nm) are also acquired, reconstructed and merged into a lower resolution RGB image [Fig. 3(b)]. This lower resolution RGB image is then converted to the YUV color space using the Colorspace Transformations package in Matlab. In this YUV color space, the brightness component (Y) is separated from the color or chrominance components (UV), and it is replaced with our pixel super-resolved high-resolution lens-free image. To obtain an artifact-free high-resolution color image, the color components (UV channels) are averaged with a rectangular window (~10 μm edge size), while the Y component remains untouched, containing the super-resolved, phase-recovered image. Finally, this hybrid YUV image is converted back to an RGB image [see Fig. 3(c)].
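Method #1 can be condensed into a few lines of numpy/scipy. The BT.601-style RGB↔YUV matrices and the box-filter window size below are illustrative assumptions; the paper uses the Colorspace Transformations Matlab package and a window of ~10 μm edge size.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Standard BT.601-style RGB <-> YUV matrices (assumed; the paper's exact
# transform comes from the Colorspace Transformations Matlab package).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def colorize_yuv(sr_gray, lr_rgb, window=9):
    """Method #1: super-resolved Y channel + box-averaged U,V channels.

    sr_gray : 2D super-resolved, phase-recovered brightness image
    lr_rgb  : HxWx3 low-resolution RGB reconstruction (same grid as sr_gray)
    window  : side length (in pixels) of the chrominance averaging window
    """
    yuv = lr_rgb @ RGB2YUV.T                  # low-res RGB -> YUV
    u = uniform_filter(yuv[..., 1], window)   # average only the chrominance...
    v = uniform_filter(yuv[..., 2], window)
    hybrid = np.stack([sr_gray, u, v], axis=-1)  # ...and swap in the SR brightness
    return hybrid @ YUV2RGB.T                 # hybrid YUV -> RGB
```

Because the rainbow artifact lives almost entirely in the chrominance channels, low-pass filtering U and V suppresses it while the full-resolution structural detail survives in Y.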
This second colorization method (Fig. 4) is inspired by earlier work in the video and photography colorization literature [58,72], and is adapted to the needs of lens-free holographic microscopy, where artificial leakage of colors outside the physical size of individual cells is digitally prevented.
At the core of this second approach is Dijkstra’s shortest path algorithm, a graph search algorithm that finds the shortest path from a given node (i.e., the initial node) to all the remaining nodes within the graph. This algorithm assumes that the graph is connected with only non-negative edges, where the weight of an edge between two nodes can be considered as the cost of moving from one node to the other. The algorithm starts by assigning tentative distances to all the nodes in the graph, where the tentative distance of a node is the accumulated edge cost along an already explored path from the initial node to that node. The initial node gets a tentative cost of zero, while the rest of the nodes are assigned an initial tentative distance of infinity; as the algorithm proceeds, the tentative costs of these nodes gradually decrease, until the algorithm converges to the shortest path for a given initial node. Except for this initial node, the algorithm marks all the remaining nodes in the graph as ‘unvisited’ and stores them in a wait-list, while setting the initial node to ‘current’. The algorithm then checks the distance between the ‘current’ node and each of its neighbors. If this distance plus the tentative cost of the ‘current’ node is less than the previously assigned tentative distance of the neighboring node, the new total distance replaces the tentative distance that was previously assigned to this neighboring node. After this step, the current node is moved to the ‘visited’ list and will never be scanned again; the new ‘current’ node is selected as the node in the ‘unvisited’ wait-list with the smallest tentative distance. The algorithm continues until all the nodes have been removed from the ‘unvisited’ list, at which point the tentative distances have converged to the shortest paths from the initial node to all the other nodes in the graph.
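The procedure above maps directly onto a standard heap-based implementation. The sketch below uses Python's `heapq` as the binary heap (the paper's implementation is in C/C++); the adjacency-list graph format is an assumption for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source.

    graph : {node: [(neighbor, weight), ...]} with non-negative weights
    """
    dist = {node: float('inf') for node in graph}  # tentative distances
    dist[source] = 0
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)  # unvisited node with smallest tentative cost
        if u in visited:
            continue                # stale heap entry; node already finalized
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:     # relax the edge u -> v
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

With a binary heap, each edge relaxation costs O(log N), giving the O(N log N) behavior quoted below for the sparse (8-neighbor) pixel graph.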
This second colorization method that is based on Dijkstra’s algorithm is composed of five computational steps:
(i) Obtain a high-resolution (i.e., pixel super-resolved) gray scale image and lower resolution color images of the object (see Methods Section 2.1).
(ii) For each discrete color in the image, color patches are initially created by averaging and thresholding the low-resolution color image in the YUV color space. For stained Pap smear samples, we assumed that only three colors are present in the image: red, green, and no color, i.e., background. In the following three steps (iii-v), each discrete color patch is processed separately. The pixels with color values above a preset threshold were further processed using morphological operations such as dilation, erosion and skeletonization to prevent the color patches from leaking out of the cell boundaries.
(iii) The Dijkstra algorithm was utilized to find the shortest path from a collection of unicolor label patches to all the pixels across the image FOV [57–59]. To calculate the shortest path, the high-resolution gray-scale image is conceptually transformed into an undirected graph, where each pixel is a node that is connected by eight positively weighted edges to its neighboring pixels (pixels located on the boundaries of the image have a smaller number of neighbors or edges). The positive weight of each edge is defined as the absolute value of the intensity difference between the two nodes that the edge connects. Moreover, the pixels in the unicolor label patches are all connected by edges with zero weight. Therefore the Dijkstra algorithm finds the shortest path, in terms of edge cost, between each pixel in the image and the unicolor label patches. We implemented the Dijkstra algorithm in C/C++ with a binary heap, which provided a computational complexity of O(N log N), where N is the number of pixels in the image.
(iv) We then apply a spatial constraint to prevent the leakage of colors outside the physical size of an individual cell. For example, a typical red cell with a diameter of ~50 μm would contribute ~20 pixels that serve as an individual red color patch. Therefore, when Dijkstra’s shortest path is calculated, we do not expect to find pixels that are physically far from that color patch (e.g., > 100 μm) and yet have a relatively small Dijkstra distance, since such pixels are located outside the cell’s boundary. Accordingly, the algorithm also tracks the ancestor patch from which each pixel’s distance value originates. If the Euclidean distance between a pixel in the image and its ancestor patch is larger than the physical size of a typical cell, we assign this pixel a large Dijkstra distance (e.g., infinity), thus cutting off the leakage caused by that patch, which avoids color artifacts forming in our lens-free images.
(v) Finally, the reciprocals of the Dijkstra distances are used as weights in order to mix the UV values of all color patches and determine the optimum UV value for each pixel of the reconstructed lens-free image.
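Steps (iii) and (v) can be sketched together in Python. Seeding every patch pixel with distance zero is equivalent to the zero-weight edges described in step (iii), and the ancestor tracking needed for the spatial cutoff of step (iv) is recorded alongside. This is an illustrative re-implementation, not the paper's C/C++ code; the `EPS` regularizer and the simplified 8-neighbor loop are assumptions.

```python
import heapq
import numpy as np

EPS = 1e-6  # avoids division by zero for pixels lying inside a patch

def patch_distances(gray, patch_mask):
    """Multi-source Dijkstra on the 8-connected pixel graph.

    Edge weight = |intensity difference| between neighboring pixels; all
    pixels of the patch act as zero-distance sources. Also records the
    ancestor (seed pixel) each distance originated from, for step (iv).
    """
    h, w = gray.shape
    dist = np.full((h, w), np.inf)
    anc = np.full((h, w, 2), -1)
    heap = []
    for y, x in zip(*np.nonzero(patch_mask)):
        dist[y, x] = 0.0
        anc[y, x] = (y, x)
        heap.append((0.0, y, x, y, x))
    heapq.heapify(heap)
    while heap:
        d, y, x, ay, ax = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + abs(gray[ny, nx] - gray[y, x])
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        anc[ny, nx] = (ay, ax)
                        heapq.heappush(heap, (nd, ny, nx, ay, ax))
    return dist, anc

def mix_uv(dists, patch_uvs):
    """Step (v): blend patch U,V values weighted by reciprocal Dijkstra distance."""
    weights = [1.0 / (d + EPS) for d in dists]
    total = sum(weights)
    return sum(w[..., None] * np.asarray(uv)
               for w, uv in zip(weights, patch_uvs)) / total[..., None]
```

Because edge costs vanish inside regions of uniform intensity, a patch's influence spreads freely within its own cell but is strongly penalized across intensity edges such as cell boundaries, which is what confines each stain's color.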
As briefly discussed in our introduction, a straightforward approach for rendering a lens-free color image involves the acquisition of three high-resolution (i.e., pixel super-resolved) holograms, each with a different illumination wavelength (typically red, green and blue). These three high-resolution holograms can then be reconstructed and combined into one RGB image, such as the one shown in Fig. 2(a). The ‘rainbow’ color artifact is quite apparent in this image, and it cannot be mitigated even after applying object-support based phase-recovery with a tight mask [see e.g., Fig. 2(b)]. On the other hand, this color artifact can be effectively mitigated using colorization method #1 by spatial averaging of the color information, while preserving the brightness information (see Methods Section 2.4). The resulting image, obtained through application of method #1, exhibits significantly improved color rendering, and the ‘rainbow’ artifact is now eliminated as shown in Fig. 2(c). Similar results are also obtained using method #2 (see Methods Section 2.5) through the use of the modified Dijkstra shortest path algorithm, as illustrated in Fig. 2(d).
To quantify the spatial resolution of our lens-free microscope, a 1951 USAF resolution test chart was imaged according to the flowchart in Fig. 3(a). In the acquisition process 36 lower resolution holograms were acquired for each height, and in total three heights were used for the multi-height phase-recovery process (Z2 = 270 μm, 392 μm and 440 μm). As can be seen in Fig. 5, the entire USAF resolution target was clearly resolved, including the smallest grating line in group 9, element 3, with a line width of 0.78 μm.
One important application of this wide FOV and high-resolution computational color microscope could be in cervical cancer pre-screening by imaging Pap smears. The Pap test is a cytology-based screening test used to detect premalignant and/or malignant cells that indicate the development of cervical cancer, which is the second most common cancer type among women worldwide. Pap tests require a wide FOV since typically only one out of thousands of cells is premalignant; furthermore, high-resolution color imaging capability is rather important as different cell types are stained with different colors. Our partially-coherent lens-free color microscope can address all of these requirements, and as a proof of concept Fig. 6 shows the image of a wide FOV Pap smear sample that is reconstructed using our lens-free holographic microscope based on colorization method #1.
In the image acquisition process 36 lower resolution holograms were acquired for each height, and in total three heights (Z2 = 449 μm, 550 μm and 592 μm) were used for multi-height phase-recovery. Our lens-free color images shown in Fig. 6 are in very good agreement with 10× microscope objective (0.25 NA) color images that are provided for comparison purposes.
Colorization is the task of assigning colors to a gray-scale image or a film, and it has traditionally been a labor-intensive task that eventually resulted in various computational approaches expediting the colorization process [58,72]. In this work, we have demonstrated a novel colorization method to mitigate the ‘rainbow’ color artifact in lens-free holographic on-chip microscopy, by averaging only the color information of the reconstructed image while preserving its brightness. Furthermore, inspired by the colorization literature [58,72], and under the assumption that various biomedical objects have only a discrete number of stains or colors, we have also modified and implemented a fully automated colorization method based on Dijkstra’s shortest path algorithm (Fig. 4). This automated colorization algorithm is an alternative to our method #1, i.e., the YUV color space averaging method (Fig. 3). Similar to method #1, in this modified Dijkstra colorization algorithm, the inputs are (1) a high-resolution (pixel super-resolved) gray scale image and (2) a lower resolution RGB color image of the same object. The algorithm automatically creates the color patches or scribbles for three different classes of colors (red, blue and no color, i.e., background). To propagate the colors to the rest of the image FOV, Dijkstra’s shortest path distances are calculated for each color patch and a spatial constraint is applied to prevent excessive color leakage (refer to Methods Section 2.5 for further details).
Note that in the context of Pap smear imaging, for both of these colorization approaches the brightness channel that is pixel super-resolved contains the cell and nucleus boundary information of the specimen, and therefore a sub-micron resolution for the brightness channel is rather important for possible applications of this approach in e.g., point of care settings. On the other hand, the color stains used in creating e.g., a Pap smear serve as visual markers for different cells of interest, and therefore the spatial sharpness of the boundaries of these stains is less significant compared to the super-resolved brightness channel of the same specimen. In fact, the diffusion of the dye molecules during the staining process itself creates some resolution loss and spatial ambiguity in the colorized boundaries of the specimen.
We also compared the performances of these two colorization approaches (Methods 1 and 2) using a confluent region of a Pap smear sample [see Fig. 7(a)] as well as a sparse region of the same sample [see Fig. 7(b)]. Remarkably, these results illustrate that two entirely different methods provide very similar colorization performance under different sample densities. Overall, the YUV color space averaging performed slightly better than the modified Dijkstra approach, since the latter could not fully colorize the cells or specific areas of the sample that are indicated by the yellow arrows in Fig. 7. In Fig. 7(a) the yellow arrow points to an area where the color is a mixture of red and green, while in Fig. 7(b) the yellow arrow points to a cell which has a light red color. One solution to further improve the performance of the Dijkstra approach could be to represent each color of the image with more discrete levels, which could potentially help eliminate some of the relatively faint colors reported with the yellow arrows in Fig. 7.
Lens-free on-chip imaging techniques have shown great promise in addressing diagnostics and biomedical research challenges that require both a large FOV and a high spatial resolution. However, color imaging has remained relatively immature for lens-free holographic on-chip imaging since a ‘rainbow’ color artifact appears in the reconstructed images. Here we have demonstrated and compared two computational methods to mitigate this ‘rainbow’ color noise in lens-free color microscopy. The computational implementation of these two colorization methods is inexpensive and they preserve both the wide FOV and the sub-micron spatial resolution of lens-free on-chip microscopy. The first method that was introduced, YUV color space averaging, separates the color information from the brightness, thus allowing only the colors of the image to be averaged while maintaining the high-resolution gray-scale image. This method is very robust and does not require any prior knowledge about the colors of the object. The second method is based on Dijkstra’s shortest path algorithm, and it requires prior knowledge about the number of dominant colors or stains within the imaged object/sample. The proof of concept of this lens-free color microscope using both of these colorization methods was demonstrated by imaging Pap smear samples over a wide FOV of ~14 mm2 with sub-micron spatial resolution and reliable color reproduction.
Ozcan Research Group gratefully acknowledges the support of the Presidential Early Career Award for Scientists and Engineers (PECASE), ARO Young Investigator Award, NSF CAREER Award, ONR Young Investigator Award, and the NIH Director's New Innovator Award DP2OD006427 from the Office of The Director, NIH.