JALA Charlottesv Va. Author manuscript; available in PMC 2011 August 1.
Published in final edited form as:
PMCID: PMC2917821
NIHMSID: NIHMS190993

Towards A Fully Automated High-Throughput Phototransfection System

Abstract

We have designed and implemented a framework for creating a fully automated high-throughput phototransfection system. Integrated image processing, laser target position calculation, and stage movements show a throughput increase of >23X over the current manual phototransfection method, and the potential for even greater throughput improvements (>110X) is described. A software tool for automated off-line single cell morphological measurements, as well as real-time image segmentation analysis, has also been constructed and shown to be able to quantify changes in the cell before and after the process, successfully characterizing them using metrics such as cell perimeter, area, major and minor axis length, and eccentricity values.

Keywords: robotics, image segmentation, computer vision, phototransfection, cell morphology

INTRODUCTION

Stem cell research has become very prevalent in recent years [1], [2], [3]. Stem cells are naturally produced by the body during embryonic development; they are of great research interest because they contain a blueprint for building every cell type in the body. However, with the possibility of using stem cells in cell replacement therapies for various illnesses, a more readily available and less controversial source of stem cells has been sought. One approach is to create them artificially by using viruses to deliver a set of transcription factor cDNAs into mature cells, which dedifferentiates these cells into induced pluripotent stem (iPS) cells [4]. Signals (instructions) can then be sent to an iPS cell to lead it down a desired developmental pathway to create specified cell types. This procedure is shown schematically in Figure 1(a). Creating therapeutically relevant cells in this manner suffers from the difficulty of programming stem cells to become a particular cell type. An alternative approach is the direct reprogramming of one cell type into another using the Transcriptome Induced Phenotype Remodeling (TIPeR) approach [5], [6], whereby populations of RNA are introduced into a host cell in an effort to reprogram it. TIPeR attempts to wipe out the instruction set currently in place in the host cell and replace it with another. A key feature of the TIPeR procedure is introducing the RNA population into the host cell. One method for performing TIPeR is through the use of transfection [7]–[12] to transiently introduce holes into the host cell through which mRNA populations can diffuse. Once the holes reseal, the introduced mRNA will be translated and produce functional proteins that can modify the host cell phenotype. Transfection in [6] is performed with a titanium sapphire laser, and this combination of cell poration with the extracellular delivery of mRNA is termed “phototransfection”. Phototransfection provides a means for performing functional genomics manipulations on individual cells and is pictured schematically in Figure 1(b), in contrast to the iPS-based approach for changing cell phenotypes. The current, manual phototransfection procedure consists of the following steps:

  1. Locate cells of interest on a marked cover-slip and record their locations on paper with the same markings.
  2. Identify the cytosol and edges of the cell of interest.
  3. Define a region in which to apply the laser to poke a hole in the cell membrane, considering that firing the laser on the dendrite (arm) portion of the cell or on the cytosol region will damage it.
  4. Locally apply mRNA of a donor cell to the target cell using a pipette.
  5. Observe cell changes at various time intervals (hours, days, weeks) and repeat as needed.
Figure 1
Artificial stem cell creation versus phototransfection for changing cell phenotype: (a) Artificial stem cells (iPS) created from a virus followed by instruction (cell signaling) to induce phenotype change. (b) Phototransfection: combination of laser poration ...

This process is very tedious and inefficient. The overall yield rate is 70–80%, and this figure reflects only cells surviving the process, not cells necessarily changing from one type to another. The current morphological measurements are also inadequate. They provide only a cell area metric, for which the user first traces the cell border in one particular program and saves it as a cell boundary image. This is followed by importing this new image into another program to fill in the region inside the cell border. This filled cell image is then imported back into the original program to actually measure the area of the cell. The throughput for the manual phototransfection process is 20 cells/hour. The goal here is to apply flexible automation techniques in order to increase the throughput to about 360 cells/hour. This is important in order to rapidly explore many different amounts and types of donor RNAs, perform various functional tests to see what genes have been expressed, and fill out microarrays for data analysis and fine tuning of the overall procedure. It is also desired to better quantify the cell morphology for comparisons before and after the process, as one measure to verify that the cell is indeed changing from one type to the other.

There is related work on automated systems to improve the efficiency, productivity, quality, and reliability of procedures and processes in the life sciences. Applying microrobotic and flexible automation technologies to micromanipulation tasks such as single cell holding, moving, and injecting/ejecting materials into/out of cells is becoming an active research area [13], [14]. These types of cell manipulation tasks are important for the characterization and manipulation of single embryo cells in applications such as cloning, gene expression analysis, cell replacement therapy [15], intracytoplasmic sperm injection (ICSI), and embryo pronuclei DNA injection. Much work has been done on creating automated systems to increase the survival and success rates of these types of procedures [16]–[20]. There is also recent work on integrating electroporation into a robotic manipulation system for autonomous injections of single cells [21]. Various other types of platforms for laboratory automation have also been presented. A “tower-based configuration” for the automatic execution of various biotechnology (genomics and proteomics) protocols is presented in [22], while Choi et al. [23] present a robotic platform for clinical tests suitable for small or medium sized laboratories using mobile robots. A high-throughput automated genome and chemical analysis system is shown in [24], and an automated microscope platform for biological studies, drug discovery, and medical diagnostics is illustrated in [25]. Studies to identify current and future approaches to the design of highly automated systems for life science processes involving humans in control loops, in applications such as high-throughput compound screening and high-performance analytical chemistry, adherent cell culturing, and the cultivation of primary and stem cells, have been explored in [26] and [27], respectively. Prior work on image segmentation techniques for biological applications is shown in [28] and [29], to identify the presence of tuberculosis in biologically stained images and for auto-focusing of images of blood smears containing red blood cells, respectively. An automated microscope system for monitoring the vitality of neuron cells that relies on identifying fluorescent markers has been presented in [30]. In [31], an integrated system for simultaneous measurement of fluorescence microscopic and integrated sensor-based data is presented as a possible enabling technology for future screening assays. Also taking advantage of fluorescent markers is the work by Neumann et al. [32], in which an automated platform for high-content RNA interference (RNAi) screening uses time-lapse fluorescence microscopy of live HeLa cells expressing histone-GFP to report on chromosome segregation and structure. An automated platform for high-throughput cell phenotype screening, combining human live cell arrays, screening microscopy, and machine-learning-based classification methods based on the identification of the subcellular localization of marker proteins as indicators of the cellular state, is described in [33].

The work presented in this paper describes a framework for fully automating the phototransfection process of single cells (astrocytes and fibroblasts). Two approaches to handle the main automation challenge of processing the cell images in real-time and off-line for morphological comparisons are presented. A software analysis tool for automating cell morphological measurements for quantitative comparison of images of the cells before and after the process is described. This is followed by a detailed description of a proof-of-concept implementation of the framework for automating the current manual phototransfection process, along with estimated process throughput results. Recommendations for further improvements are also provided.

FRAMEWORK FOR PROCESS AUTOMATION

A framework to automate the actual single cell phototransfection process has been developed and is pictured schematically in Figure 2. The first step in automating the phototransfection process is to instrument an optical microscope with a motorized stage for closed-loop positioning of the cover-slips under the microscope field of view (FOV) (Figure 2(i)). Once this is done, a global and local map of each cover-slip can be constructed, as seen in Figure 2(ii). The stage can be indexed and sequential image captures of the FOVs at specific locations on the cover-slip performed. A mosaic of all these images can be used to build a global map. This map of the entire cover-slip can then be stored for comparison and analysis at different time intervals. Local maps for individual FOVs of the cover-slip, where image processing will be performed, can also be created. In the individual FOVs, standard computer vision techniques, such as edge detection, image erosion, dilation, filtering, and filling [34], can be used to segment the cell body and processes (dendrite) area from the background in each image (Figure 2(iii)). Local and global image data can then be compiled, consisting of cell body coordinate locations, sizes, contour profile statistics, and processes (dendrite) section areas and locations. A program can be written to automatically determine suggested laser target firing locations based on image data for each FOV on the cover-slip. These locations will be high curvature regions on the cell body, away from the dendrites and cytosol of the cell, as shown in Figure 2(iv). Once all locations are set, coordinated stage movements followed by laser firing (Figure 2(v)), micromanipulator positioning of the injection pipette, mRNA release (Figure 2(vi)), and stage re-positioning can be executed across the entire global map of the cover-slip, greatly increasing throughput. Once all the FOVs on a particular cover-slip have been phototransfected, the process will be repeated on the next cover-slip in the petri dish.
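To make this control flow concrete, the following Matlab®-style sketch outlines one pass over a cover-slip under the assumptions of this framework. Every helper name here (buildFovGrid, moveStageTo, captureFrame, segmentCell, laserTargets, centerOnTarget, fireLaser, dispenseRNA) is a hypothetical placeholder introduced for illustration, not part of the system described in this paper.

    % Hypothetical one-pass control loop for a single cover-slip,
    % following steps (i)-(vi) of Figure 2. All helpers are illustrative.
    fovGrid = buildFovGrid(coverslipExtents, fovSize); % (ii) FOV centers of global map
    for k = 1:size(fovGrid, 1)
        moveStageTo(fovGrid(k, :));        % (i)  index stage to next FOV
        I = captureFrame();                %      capture local map image
        mask = segmentCell(I, params);     % (iii) segment cell from background
        targets = laserTargets(mask);      % (iv) high-curvature points on cell body
        for t = 1:size(targets, 1)
            centerOnTarget(targets(t, :)); % (v)  translate target to laser focus
            fireLaser();                   %      porate the membrane
            dispenseRNA();                 % (vi) local mRNA release via pipette
        end
    end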

Figure 2
Process flow for automated phototransfection: (i) Instrumented microscope with fixed positioned petri dish with cover-slips of cells to be phototransfected. (ii) Local mapping of individual cover-slips for image processing and data retrieval. (iii) Image ...

The main challenge in automating the phototransfection process is identifying the appropriate features of the cells in the image in order to direct the laser beam to create the pores in the cell membrane through which the mRNA can diffuse into the cell. These features can be identified with image segmentation techniques, and the segmented images can then be used to automatically determine morphological measures of the cells (for comparison before and after the process) as well as the laser target firing locations.

SEGMENTATION AND AUTOMATED CELL MORPHOLOGY MEASUREMENTS

Images of the phototransfected cell are observed and recorded before and after the process, at different time intervals, to assess morphological changes in the cell. Cell characterization with morphological measures is one way that biologists can assess the success of the overall procedure, along with other functional tests. However, this is not an easy task. The problem in comparing two different images of the same cell before and after phototransfection is that the changes in the cell are hard to discern because of changes in illumination, camera viewpoint and background in both images. Image segmentation techniques, borrowed from the computer vision literature, are used here to segment the image of the cell from the background in order to compare both images of the cell before and after the process without ambiguities. From a properly segmented image, the morphology is quantified by computing measures such as cell area, perimeter, major axis length, minor axis length, eccentricity, and equivalent diameter. This segmented image can also allow for robust image feature identification and laser target coordinate firing location calculations.

A schematic of some of the morphological measurements is shown in Figure 3. The image on the left represents the appearance of a cell before phototransfection, while the image on the right represents its morphology after phototransfection. Ideally, the cell will start to resemble and function like the donor cell, and the calculated morphology will be used as one way to quantify this change. The area of the segmented cell region of the image is defined as the actual number of pixels in the region. The perimeter metric is calculated by determining the distance between each adjoining pair of pixels around the border of the contiguous segmented cell region in the image. The major and minor axis lengths are the lengths, in pixels, of the major and minor axes of the ellipse that has the same normalized second central moments as the segmented region. The eccentricity measure is determined from this same ellipse and is the ratio of the distance between the foci of the ellipse and its major axis length. It is between 0 and 1: an ellipse with eccentricity = 0 is actually a circle, and an ellipse with eccentricity = 1 is a line segment. The equivalent diameter measure is a scalar value that specifies the diameter of a circle with the same area as the segmented region. It is computed as $d_{eq} = \sqrt{4\,\mathrm{Area}/\pi}$.
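As a minimal sketch, all of these measures can be obtained from a binary cell mask with a single call to the Image Processing Toolbox function regionprops (the property names below are the toolbox's own); the toy mask stands in for a real segmented cell.

    % Minimal sketch: morphological measures from a binary cell mask.
    BW = false(200); BW(60:140, 50:160) = true;   % toy stand-in for a segmented cell
    stats = regionprops(bwlabel(BW), 'Area', 'Perimeter', 'MajorAxisLength', ...
        'MinorAxisLength', 'Eccentricity', 'EquivDiameter');
    % Equivalent diameter can also be computed directly from the area:
    dEq = sqrt(4 * stats(1).Area / pi);           % equals stats(1).EquivDiameter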

Figure 3
Schematic of a cell before (left) and after (right) phototransfection and associated morphological measures quantifying cell changes.

Initially, images of the cells were segmented using graph-theoretic clustering techniques, using the image pixels as nodes in the graph [35]. Once a connected, weighted graph is constructed from the image of interest, a graph-cutting algorithm can be executed in order to segment the image. Graph-cutting techniques tackle the minimum cut problem: finding a cut in the graph that has the minimum cost among all the cuts. The algorithm of Boykov and Kolmogorov [36], used here, solves this problem by finding the maximum flow from the “Source” nodes to the “Sink” nodes in the graph (Figure 4), that is, the maximum “amount of water” that can be sent from the “Source” to the “Sink” by interpreting graph edges as “pipes” with capacities equal to the edge weights. The output of the algorithm is a label for each node in the graph (pixel in the image), assigned to be either the “Sink” or the “Source”. For this application, the “Sink” corresponds to pixels in the background of the image while the “Source” corresponds to pixels belonging to the cell. Edge weights between the nodes in the graph are computed using a weighted sum of distance (Ad), pixel intensity (Ai), and texture (At) affinity measures for particular nodes. The affinity values between similar nodes are large, while the affinity measures connecting dissimilar nodes are small. The distance affinity measure drops sharply once the distance between the pixels exceeds some threshold. The pixel intensity affinity is large for similar intensities and smaller as the intensity difference increases. Similarly, the texture affinities are large for pixels with similar surrounding textures and smaller as the difference increases. These three affinity measures between two nodes, N1 and N2, are listed in Equations (1)–(3), while the corresponding edge weight, E, is given in Equation (4):

$$A_d(N_1, N_2) = \exp\left(-\frac{\lVert P_{N_1} - P_{N_2}\rVert^2}{2\sigma_d^2}\right) \qquad (1)$$

$$A_i(N_1, N_2) = \exp\left(-\frac{(I_{N_1} - I_{N_2})^2}{2\sigma_i^2}\right) \qquad (2)$$

$$A_t(N_1, N_2) = \exp\left(-\frac{(T_{N_1} - T_{N_2})^2}{2\sigma_t^2}\right) \qquad (3)$$

$$E = w_1 A_d + w_2 A_i + w_3 A_t \qquad (4)$$

where $P_N$ is the position of node $N$, $I_N$ is the pixel intensity value of node $N$, $T_N$ is the average change in pixel intensity between pixels in an image patch surrounding node $N$, and the $\sigma$ parameters are chosen to yield large affinity values for similar pixels while yielding low affinity values for dissimilar pixels. The weights $w_1$, $w_2$, and $w_3$ are user defined; each is $\leq 1$ and they sum to 1.
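A minimal numerical sketch of Equations (1)–(4), assuming the Gaussian affinity forms reconstructed above; the example positions, intensities, texture values, sigma parameters, and weights are all illustrative, not values from the paper.

    % Edge weight between two pixel nodes, following Eqs. (1)-(4).
    P1 = [10 12];  P2 = [11 12];          % example node positions (pixels)
    I1 = 0.55;     I2 = 0.52;             % example pixel intensities
    T1 = 0.03;     T2 = 0.05;             % example local texture measures
    sigmaD = 10; sigmaI = 0.1; sigmaT = 0.1;      % example sigma parameters
    w = [0.4 0.4 0.2];                            % w1 + w2 + w3 = 1, each <= 1
    Ad = exp(-sum((P1 - P2).^2) / (2*sigmaD^2));  % distance affinity,  Eq. (1)
    Ai = exp(-(I1 - I2)^2 / (2*sigmaI^2));        % intensity affinity, Eq. (2)
    At = exp(-(T1 - T2)^2 / (2*sigmaT^2));        % texture affinity,   Eq. (3)
    E  = w(1)*Ad + w(2)*Ai + w(3)*At;             % edge weight,        Eq. (4)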

Figure 4
Connected graph from an image: image pixels = graph nodes

Edge weights between the nodes in the graph and the “Sink” and “Source” nodes also need to be computed to complete the graph; Equations (5)–(11) are used for this. Here AdBkg, AiBkg, and AtBkg are the distance, intensity, and texture affinities associated with the background (“Sink”) section of the image, pre-computed from a set of training images.

[Equations (5)–(11), which define the edge weights between each pixel node and the “Sink” and “Source” terminal nodes in terms of AdBkg, AiBkg, and AtBkg, appear only as inline equation images in the source manuscript and are not reproduced here.]

The raw output from the graph-cut algorithm needs to be filtered in order to produce the final segmented image of the cell against the background. Image erosion and dilation steps are applied in Matlab®, and the largest connected pixel region that is left is used as the segmented cell image, with statistics reported on it (a sketch of this clean-up appears below). Figure 5(a) shows the result of this procedure on images of four fibroblast cells before and after the phototransfection process, with the segmented areas overlaid on the original images. The images in the top row are from before the process, while the bottom row of images are from after the process has been completed. The cell perimeter, area, major and minor axis lengths, and eccentricity (in pixels) are calculated for each set of images, and the corresponding changes in these morphological measures are reported in Table 1. These metrics show substantial changes after the phototransfection process has been performed. This indicates a successful phototransfection, since the fibroblasts are now starting to look like the donor astrocyte cells and there are metrics to support this.
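A minimal sketch of this clean-up step, assuming rawMask is the logical label image returned by the graph-cut stage; the toy mask and structuring element size are illustrative.

    % Post-process raw graph-cut labels: erode, dilate, keep largest region.
    rawMask = false(200);                 % toy stand-in for graph-cut output:
    rawMask(50:150, 60:160) = true;       %   one large cell-like region ...
    rawMask(20:28, 20:28) = true;         %   ... plus a small spurious region
    se = strel('disk', 2);                % example structuring element
    BW = imdilate(imerode(rawMask, se), se);
    [L, n] = bwlabel(BW);                 % label connected pixel regions
    stats = regionprops(L, 'Area');
    [~, idx] = max([stats.Area]);         % pick the largest region
    cellMask = (L == idx);                % final segmented cell image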

Figure 5
Images of four fibroblast cells before and after the phototransfection process, with the segmented areas from the graph-cuts method overlaid (red) on the original images. The images in the top row are before the process while the bottom row of images ...
Table 1
Morphological measurements from graph-cuts method with good segmentation results

However, due to the large changes in cell morphology and inconsistencies in the lighting conditions, cover-slip markings, and textures of the backgrounds and cells in the images, consistent results for one set of system parameters across all data sets are difficult to achieve. Figure 5(b) shows examples of poor image segmentation, in which only a subset region of the actual cell is identified, using the same set of system parameters as those in Figure 5(a). Continuous tuning of the graph parameters can be performed to obtain acceptable results; however, it is desired to keep these details transparent to the end-user, and instructions on how and what to change are not trivial for the expected end-user (a biologist). Therefore, a stand-alone, more user-friendly Matlab®-based software tool has been developed. (Note: the term “acceptable” is used in comparison to the results obtained from manual operation or calculation methods. Acceptable performance is deemed to be within 10% of these manually obtained values.)

This software tool has been specifically designed and implemented for assessing morphological measures in the astrocyte and fibroblast cells before and after the phototransfection process. A screen shot of the AutoPT Cell Morphology (CM) Graphical User Interface (GUI) that operates the program is shown in Figure 6(a). It has been set up for individual image processing as well as the bulk processing of many images. Once the image to be analyzed has been loaded, the user can choose from a number of different processing options in the Manual Processing Tools panel to apply to the image. These include: equalizing the image (i.e., evenly distributing intensity values throughout the range of intensity values in the image), image darkening/brightening, edge detection, image closing, connected pixel filtering (filtering out connected pixel regions smaller than a specified size), and filling image holes. There is a choice of five common edge detection methods to apply, all part of Matlab®’s Image Processing Toolbox. The processing can be done in any order; however, typically the order in which the tools appear in the Manual Processing Tools panel is the order in which they are executed (a sketch of this sequence appears below). Figure 6(b) shows an original image and subsequently processed images after application of the manual processing tools in this order. There is also an option to manually select pixels in the processed image to either connect or disconnect them from the processed image. Once the image is properly segmented, the cell statistics for the largest connected pixel region are calculated and displayed in the CM GUI. These statistics include the perimeter, area, major axis length, minor axis length, eccentricity, equivalent diameter, solidity, and extent. The original image of the cell is then overlaid with the segmented image of the cell in both the main GUI panel and in a separate window. A new image of just the segmented cell is also generated. The Record Statistics button can be used to write this data to a text file and save the original cell image, segmented cell image, and overlay image of the cell in jpg format. The data file written also contains hyperlinks to these saved images. Once suitable manual processing steps and parameters have been determined for a few test images, bulk processing of all the images in the active directory can be performed with these settings. Inside the Automatic Processing Sequencer panel, the process to be performed can be selected and the corresponding sequence number entered. The processing steps will use the parameters set in the Manual Processing Tools panel and execute the processing on all the images in the active directory, write the corresponding statistics to a text file, and record the original, cell, and overlay images, as shown in Figure 7. There are also settings to record just the largest region, the three largest regions, or all the connected pixel regions that are found.
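A minimal sketch of that typical processing order, using Image Processing Toolbox functions; the file name, gamma value, structuring element size, and pixel threshold are illustrative assumptions, not the GUI's defaults.

    % Typical Manual Processing Tools order (illustrative parameters).
    I = imread('cell.png');                 % hypothetical input image
    if size(I, 3) == 3, I = rgb2gray(I); end
    I  = histeq(I);                         % equalize the image
    I  = imadjust(I, [], [], 1.2);          % darken (gamma > 1 maps darker)
    BW = edge(I, 'canny');                  % edge detection (one of five methods)
    BW = imclose(BW, strel('disk', 4));     % image closing
    BW = bwareaopen(BW, 500);               % connected pixel filtering
    BW = imfill(BW, 'holes');               % fill image holes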

Figure 6
Cell Morphology (CM) image processing toolbox Graphical User Interface (GUI) and typical segmentation processing steps: (a) CM GUI Front Panel Display; (b) Sample output from the execution of typical processing steps in the order presented.
Figure 7
Cell morphology bulk processing image output.

Three image sets, each containing 5 pairs of images corresponding to the same cell before and after the phototransfection process, were used to compare the performance of this software tool in acquiring morphological cell measurements against the traditional method. In the traditional method, the user first traces the cell border in one particular program. This is followed by importing this new cell boundary image into another program to fill in the region inside the cell border. This filled cell image is then imported back into the original program to measure the area of the cell. The processing time to analyze each image set using this technique, along with the percentage change in the area metric for each image pair, was recorded and is listed in Table 2 (column 3). The same image sets were analyzed manually using the AutoPT CM GUI (Figure 6(a)), and the processing time for each set along with the percentage area change for each image pair was recorded and is also shown in Table 2 (column 2). For image sets 1 and 3, the processing time using the AutoPT CM GUI tool is 33% and 38% faster than the traditional method, respectively. The processing time for image set 2 was about the same for both methods. The results for the percentage change in the area metric with the CM GUI program are all within 8% of the results produced with the traditional analysis method. This error is small and can be explained by the fact that the same person did not use both methods (one person used the traditional method while the other used the GUI) and some portions of the cell borders are subject to individual interpretation. It is also expected that more time gains will be realized once the user is more experienced with the GUI and identifies the best combination of processing controls to segment particular types of images (this is the reason for the similar processing times for image set 2). The CM GUI program is also more user-friendly and efficient, since all the necessary processing steps are self-contained and there is no need to switch back and forth between different programs to perform the analysis. Further, using the CM GUI provides more than 6X the information of the alternate approach. As stated previously, in addition to the cell area metric, the GUI program yields metrics for the cell perimeter, major axis length, minor axis length, eccentricity, equivalent diameter, and others. This data for the three sets of test images is shown in Table 3. The metrics listed here are all substantially decreased (by an average of 58%) after the phototransfection process. The traditional analysis method cannot provide these extra morphological measurements.

Table 2
Manual processing area metric and processing time comparison
Table 3
Morphological changes - % change from original

CHANGES REQUIRED FOR AUTOMATION

The Bulk Process function in the AutoPT Morphology GUI was also used to automate the processing of the three sets of test images. Using a laptop running Windows XP, with a 1.80 GHz Pentium M processor and 1 GB RAM, and depending on the processing parameters selected, the processing time to analyze a set ranged from 4.5 to over 40 minutes. In each case, data for every connected pixel region greater than 500 pixels was recorded, which, depending on the settings, can result in a lot of extra processing time. Due to the inconsistencies in the images (lighting conditions, focal length, pipette placement, etc.), it was hard to identify one set of image parameters that successfully segments every cell image. This is also the case when processing the cell images in real-time during the phototransfection procedure, when trying to calculate the laser target positions on the cell. In the best cases when using the off-line Bulk Process functionality, a particular processing parameter set was able to segment about 60% of the images in the set within an acceptable tolerance. To process the rest of the images, another set of parameters is selected. This is repeated until all the images in the set have acceptable results, or the remaining images can simply be processed manually (a sketch of this strategy follows). Standardized procedures to determine the image capture settings during the process are required to produce more consistency among all the images in an image set and to increase the efficiency and results of both the bulk processing and real-time processing of the images. Also, optimized code is needed to further increase the processing speeds. Thus, one cannot simply automate a manual process without considering the impact of the manual procedures on the automation task at hand.
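A minimal sketch of the repeated bulk-processing strategy just described; segmentCell, isAcceptable, and writeStats are hypothetical helpers (e.g., an acceptance test within 10% of a manual reference), and the parameter values are illustrative.

    % Try stored parameter sets on each image until one is acceptable.
    paramSets = { ...
        struct('edgeMethod', 'canny', 'closeDiam', 4, 'minPixels', 500), ...
        struct('edgeMethod', 'sobel', 'closeDiam', 6, 'minPixels', 800)};
    files = dir('*.jpg');
    for k = 1:numel(files)
        I = imread(files(k).name);
        for p = 1:numel(paramSets)
            mask = segmentCell(I, paramSets{p});  % hypothetical segmentation
            if isAcceptable(mask)                 % hypothetical acceptance test
                writeStats(files(k).name, mask);  % hypothetical statistics logger
                break;                            % stop at first acceptable result
            end
        end
    end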

PROOF-OF-CONCEPT IMPLEMENTATION FOR AUTOMATED PHOTOTRANSFECTION

A proof-of-concept implementation for automating this phototransfection process has been accomplished using the flexible automation micro/meso-scale manipulation system from [37], [38]. The system setup can be seen in Figure 8(a). Here, an inverted optical microscope (Nikon Eclipse TE2000-U), motorized XY stage (Prior Scientific H107 ProScan II), and CCD camera (Sony XC-77) are the pertinent pieces of hardware being utilized. There is also a 4-axis computer controlled micromanipulator (Siskiyou Design Instruments MX7600R) and associated controller (Siskiyou Design Instruments MC2000) available for use in the test-bed. The computer controlled micromanipulator can be used to position a pipette for dispensing mRNA. Typically, a 40X objective is used to image the cells for this application. The phototransfection process utilizes a titanium sapphire laser to perforate the cell membranes, and the laser can be directed to any region of the microscope FOV to administer the laser beam. There is currently no laser in this implementation; when one is incorporated into the system in the future, it will be focused to fire at the center of the image in the FOV. The control software to operate the system is written in Visual C#.Net, leveraging the Windows .NET framework, enabling easy integration of software modules that can reside on different workstations. The software includes (a) real-time capture of images from the microscope; (b) control of the motorized stages; and (c) a simple GUI (Figure 8(b)) for the operator to specify the type of cell he/she is interested in by entering relevant image processing parameters. The image processing routines are written in Matlab® (version 7.2.0.232, R2006a) using functions from the Image Processing Toolbox.

Figure 8
Proof-of-concept implementation of automated phototransfection (AutoPT) platform: (a) Hardware setup consisting of an inverted optical microscope, CCD camera, motorized XY stage, micromanipulators, and petri dish holding cover-slips with cells for phototransfection. ...

The Laser Target Control Panel found in the lower left corner of the GUI (Figure 8(b)) allows the user to specify the parameters for the image processing code in Matlab®. The tunable parameters include the type of edge detection method to use (Canny, Sobel, Roberts, Prewitt, Laplacian), the diameter for the image closing operation, and the pixel size for the connected pixel filtering procedure. Note that these parameters are set just once. The Get Target button calculates a recommended laser target firing location for the cell of interest in the FOV. This button saves the image from the current image frame along with the specified parameters, then calls Matlab® to perform the necessary calculations to segment the image of the cell from the background and recommend image coordinates at which to fire the laser. In the current implementation, this position is simply determined as the centroid of the cell body; however, more sophisticated metrics to calculate the laser target position can easily be applied here instead. The laser target position information is then sent back to the main control program and drawn on the screen in pink. Once the laser target position has been established, the Position Target button can be used to have the motorized stage automatically translate the cell in the XY plane so that the calculated laser target position is now at the center of the image, where the laser will be parked (laser firing location in Figure 8(b)); a sketch of this logic appears below. The Clear Target button is used to reset the laser target position in the computer memory and move the stage back to its original position. By coupling the Get Target and Position Target functions with the laser firing and mRNA release from a pipette mounted on the motorized manipulator in the system (as planned for the future), the system will be completely automated.
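A minimal sketch of the Get Target / Position Target logic under stated assumptions: the toy cellMask stands in for the segmented binary image, the pixel-to-micrometer calibration is illustrative, moveStageRelative is a hypothetical stage command, and the sign of the offset depends on the actual stage/camera orientation.

    % Get Target: centroid of the segmented cell body (pixel coordinates).
    cellMask = false(480, 640); cellMask(200:280, 300:380) = true;  % toy mask
    stats = regionprops(bwlabel(cellMask), 'Centroid');
    target = stats(1).Centroid;                   % [x y] laser target in pixels
    % Position Target: translate stage so the target lands at the fixed
    % laser firing location in the image center.
    imgCenter = fliplr(size(cellMask)) / 2;       % [x y] image center in pixels
    umPerPixel = 0.25;                            % assumed calibration at 40X
    offsetUm = (target - imgCenter) * umPerPixel; % required XY translation
    moveStageRelative(-offsetUm);                 % hypothetical stage command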

ESTIMATED THROUGHPUT

On a single control computer running Windows XP, with a 2.39 GHz Pentium 4 processor and 1 GB of RAM, it takes 30 seconds to segment the cell, identify a target location, and translate the XY stage to move the cell’s laser target to the center of the image for eventual laser firing and mRNA release. (The laser firing and mRNA release can be done practically simultaneously and are the easiest and fastest part of the phototransfection process, taking about 1–2 seconds to do manually.) The 30 second processing time corresponds to a throughput of about 120 cells/hour, which is a 6X improvement over the current manual procedure (20 cells/hour). By coupling all the software modules more efficiently (eliminating the C# wrappers around the Matlab® software) and by processing all the cells in the field of view (typically 4–6), the throughput is expected to increase to over 500 cells/hour. This is greater than a 25X improvement. Also, using a faster computer would further decrease the cycle time. The system can also be run continuously, needing a human only to replenish a new batch of cells and remove the processed ones. Assuming a 12-hour day at a rate of 500 cells/hour projects to a throughput of 6000 cells per 12-hr day.

As proof-of-concept for the increased time gains from using one integrated program, the C# program functionality was converted to a Matlab® program capable of acquiring images from the CCD camera, processing the image, calculating laser target positions, and moving the XY stage. Running everything in the same program reduced the process time from 30 seconds down to approximately 8 seconds. This corresponds to a throughput of 450 cells/hour, a 23X improvement over the current manual process. Again, assuming that all the cells (typically 4–6) in the FOV can be processed with minimal additional computational overhead, a potential throughput of 2250 cells/hour (a >112X improvement) is estimated. Subsequent segmentation on images with 4–6 cells has indeed shown no marked increase in the overall processing time. However, in practice, this fully integrated program cannot be written in Matlab®, since the images from the confocal microscope that are used for the actual procedure are captured with a photomultiplier tube (PMT). The PMT is not compatible with Matlab®’s Image Acquisition Toolbox, which has been used here to capture images from the CCD camera in the test setup. Therefore, custom software is required to capture the PMT images, perform the appropriate image segmentation, calculate laser target positions, and translate the XY stage in order to achieve these further throughput gains. Another option would be to add an additional optical port to the microscope, or an external optical system, to which a compatible CCD camera could be mounted and hooked into the Matlab® interface. Both of these options are areas for future work. To get the maximum possible throughput out of the entire system, considerations for automatically refilling the micropipette with mRNA should be made, along with investigations of how to move the processed cover-slip out of the way and store it in an organized manner while feeding in the next one to be processed, with as little human interaction as possible.
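For reference, these throughput figures follow directly from the measured cycle times, taking 5 cells/FOV as the midpoint of the typical 4–6:

$$\frac{3600\ \mathrm{s/hr}}{8\ \mathrm{s/cell}} = 450\ \mathrm{cells/hr} \approx 23 \times (20\ \mathrm{cells/hr}); \qquad 450 \times 5 = 2250\ \mathrm{cells/hr} \approx 112 \times (20\ \mathrm{cells/hr}).$$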

SUMMARY

Work towards fully automating the single cell manipulation process of phototransfection is presented in this paper. Phototransfection is presently done manually in a very tedious manner. A framework for fully automating this procedure has been designed and a proof-of-concept implementation achieved. Computer vision techniques are used to identify the cell of interest in the FOV and determine target locations for the laser beam. A control program takes this information and coordinates movements of the computer controlled XY stage, translating the coordinates of the laser target location to a predefined, fixed laser firing location. A 23X throughput improvement is possible with this implementation, and room for improvement to greater than 110X is described. Images of the phototransfected cell have been observed before and after the process, and a software tool has been developed to assess morphological changes in the cell as a way to characterize them and assess the efficacy of the phototransfection process. Image segmentation algorithms were used to segment the cell from the background in order to compare both images of the cell without ambiguities. From the properly segmented image, the morphology is quantified by computing measures such as cell area, perimeter, major and minor axis lengths, and eccentricity. Results show a notable decrease in the metrics after the process has been performed, a throughput increase over manual cell morphology measurements, a 6X gain in the number of measurements made, and a more efficient and user-friendly software tool for cell morphological analysis.

ACKNOWLEDGMENTS

The authors gratefully acknowledge funding from NSF Grant IIS-0413138, Dept. of Education GAANN Grant P200A060275, the Keck Foundation, and the NIH Director’s Pioneer Award Program (DP1-OD-04117) to support this work, and Kitty Wu for discussions on the manual phototransfection procedure.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Contributor Information

David J. Cappelleri, Department of Mechanical Engineering, Stevens Institute of Technology, Hoboken, NJ USA.

Adam Halasz, Dept. of Mathematics, West Virginia University, Morgantown, WV USA.

Jai-Yoon Sul, PENN Genome Frontiers Institute, Dept. of Pharmacology, University of Pennsylvania, Philadelphia, PA, USA.

Tae Kyung Kim, PENN Genome Frontiers Institute, Dept. of Pharmacology, University of Pennsylvania, Philadelphia, PA, USA.

James Eberwine, PENN Genome Frontiers Institute, Dept. of Pharmacology, University of Pennsylvania, Philadelphia, PA, USA.

Vijay Kumar, GRASP Lab, Department of Mechanical Engineering, University of Pennsylvania, Philadelphia, PA USA.

REFERENCES

1. Bianco P, Robey P. Stem cells in tissue engineering. Nature. 2001;414(6859):118–121. [PubMed]
2. Lovell-Badge R. The future for stem cell research. Nature. 2001;414(6859):88–91. [PubMed]
3. Temple S. The development of neural stem cells. Nature. 2001;414(6859):112–117. [PubMed]
4. Takahashi K, Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 2006;126:663–676. [PubMed]
5. Sul J, Wu C, Zeng F, Jochems J, Lee M, Kim T, Peritz T, Buckley P, Cappelleri D, Maronski D, Kim M, Kumar V, Meaney D, Kim J, Eberwine J. Transcriptome transfer produces a predictable cellular phenotype. Proceedings of the National Academy of Sciences. 2009;106(18):7624–7629. Epub 2009 Apr 20. PMCID: PMC2670883. [PubMed]
6. Barrett LE, Sul JY, Takano H, Van Bockstaele EJ, Haydon PG, Eberwine JH. Region-directed phototransfection reveals the functional significance of a dendritically synthesized transcription factor. Nat. Meth. 2006;3:455–460. (doi:10.1039/b503658e) [PubMed]
7. Stevenson D, Gunn-Moore F, Campbell P, Dholakia K. Review: Single cell optical transfection. J. R. Soc. Interface. 2010 published online 11 January 2010. (doi: 10.1098/rsif.2009.0463) [PMC free article] [PubMed]
8. Brown CTA, Stevenson DJ, Tsampoula X, McDougall C, Lagatsky AA, Sibbett W, Gunn-Moore F, Dholakia K. Enhanced operation of femtosecond lasers and applications in cell transfection. J. Biophot. 2008;1:183–199. (doi:10.1002/jbio.200810011) [PubMed]
9. Tsampoula X, Garces-Chavez V, Comrie M, Stevenson DJ, Agate B, Brown CTA, Gunn-Moore F, Dholakia K. Femtosecond cellular transfection using a nondiffracting light beam. Appl. Phys. Lett. 2007;91:053902. (doi:10.1063/1.2766835)
10. Yao C-P, Zhang Z-X, Rahmanzadeh R, Huettmann G. Laser-Based Gene Transfection and Gene Therapy. IEEE Transactions on Nanobioscience. 2008;7(2):111–119. [PubMed]
11. Baghdoyan S, Roupioz Y, Pitaval A, Castel D, Khomyakova E, Papine A, Soussaline F, Gidrol X. Quantitative analysis of highly parallel transfection in cell microarrays. Nucleic Acids Research. 2004;32(9):e77. (doi: 10.1093/nar/gnh074) [PMC free article] [PubMed]
12. Uchugonova A, Konig K, Bueckle R, Isemann A, Tempea G. Targeted transfection of stem cells with sub-20 femtosecond laser pulses. Opt. Express. 2008;16:9357–9364. (doi:10.1364/OE.16.009357) [PubMed]
13. Hamilton S, Russo M. Tutorial/workshop: Introduction to laboratory automation. Proceedings of IEEE Conference on Automation Science and Engineering (CASE) 2007
14. Zhang M, Felder R, Kim E, Nelson B, Pruitt B, Zheng Y, Meldrum D. Editorial: Special issue on life science automation. IEEE Transactions on Automation Science and Engineering. 2006;3(2):137–140.
15. Park J, Jung S, Kim Y. Design and fabrication of an integrated cell processor for single embryo cell manipulation. Lab Chip. 2005;5:91–96. [PubMed]
16. Huang H, Sun D, Mills J, Cheng S. Integrated vision and force control in suspended cell injection system: Towards automatic batch biomanipulation. Proceedings of IEEE International Conference on Robotics and Automation (ICRA) 2008
17. Kimura Y, Yanagimachi R. Intracytoplasmic sperm injection in the mouse. Biol. Reprod. 1995;52:709–720. [PubMed]
18. Wang W, Liu X, Sun Y. Autonomous zebrafish embryo injection using a microrobotic system. Proceedings of IEEE Conference on Automation Science and Engineering (CASE) 2007
19. Wang W, Sun Y, Zhang M, Anderson R, Langille L, Chen W. A microrobotic adherent cell injection system for investigating intracellular behavior of quantum dots. Proceedings of IEEE International Conference on Robotics and Automation (ICRA) 2008
20. Desai J, Pillarisetti A, Brooks A. Engineering Approaches to Biomanipulation. Annual Review of Biomedical Engineering. 2007;9:35–53. [PubMed]
21. Sakaki K, Dechev N, Burke R, Park E. Development of an Autonomous Biological Cell Manipulator With Single-Cell Electroporation and Visual Servoing Capabilities. IEEE Transactions on Biomedical Engineering. 2009;56(8):2064–2074. [PubMed]
22. Najmabadi P, Goldenberg A, Emili A. A scalable robotic-based laboratory automation system for medium-sized biotechnology laboratories. Proceedings of IEEE Conference on Automation Science and Engineering (CASE) 2005
23. Choi B, Jin S, Shin S, Koo J, Ryew S, Kim M, Kim J, Son W, Ahn K, Chung W, Choi H. Development of flexible laboratory automation platform using mobile agents in the clinical laboratory. Proceedings of IEEE Conference on Automation Science and Engineering (CASE) 2008
24. Meldrum DR, Fisher CH, Moore MP, Saini M, Holl MR, Pence WH, Moody SE, Cunningham DL, Wiktor PJ. ACAPELLA-5K, A high-throughput automated genome and chemical analysis system. Proceedings of IEEE International Conference on Intelligent Robots and Systems (IROS) 2003;3:2321–2328.
25. Potsaid B, Finger F, Wen J. Automation of Challenging Spatial-Temporal Biomedical Observations with the Adaptive Scanning Optical Microscope (ASOM). IEEE Transactions on Automation Science and Engineering. 2009;6(3):525–535.
26. Kaber D, Stoll N, Thurow K. Human-automation interaction strategies for life science applications: Implications and future research. Proceedings of IEEE Conference on Automation Science and Engineering (CASE) 2007
27. Kuncov-Kallio J, Kallio P. Lab automation in cultivation of adherent cells. IEEE Transactions on Automation Science and Engineering. 2006;3(2):177–186.
28. Makkapati V, Agrawal R, Acharya R. Segmentation and classification of tuberculosis bacilli from ZN-stained sputum smear images. Proceedings of IEEE Conference on Automation Science and Engineering (CASE) 2009
29. Makkapati V. Improved wavelet-based microscope autofocusing for blood smears by using segmentation. Proceedings of IEEE Conference on Automation Science and Engineering (CASE) 2009
30. Arrasate M, Finkbeiner S. Automated microscope system for determining factors that predict neuronal fate. PNAS. 2005;102(10):3840–3845. (doi: 10.1073/pnas.0409777102) [PubMed]
31. Geisler T, Ressler J, Harz H, Wolf B, Uhl R. Automated Multiparametric Platform for High-Content and High-Throughput Analytical Screening on Living Cells. IEEE Transactions on Automation Science and Engineering. 2006;3(2):169–176.
32. Neumann B, Held M, Liebel U, Erfle H, Rogers P, Pepperkok R, Ellenberg J. High-throughput RNAi screening by time-lapse imaging of live human cells. Nature Methods. 2006;3(5):385–390. (doi:10.1038/NMETH876) [PubMed]
33. Conrad C, Erfle H, Warnat P, Daigle N, Lörch T, Ellenberg J, Pepperkok R, Eils R. Automatic Identification of Subcellular Phenotypes on Human Cell Arrays. Genome Res. 2004;14:1130–1136. (doi:10.1101/gr.2383804) [PubMed]
34. Trucco E, Verri A. Introductory Techniques for 3-D Computer Vision. Upper Saddle River, New Jersey: Prentice Hall; 1998.
35. Forsyth D, Ponce J. Computer Vision: A Modern Approach. Upper Saddle River, NJ: Prentice Hall; 2003.
36. Boykov Y, Kolmogorov V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on PAMI. 2004;26(9):1124–1137. [PubMed]
37. Cappelleri D. Ph.D. Dissertation. Philadelphia, PA: University of Pennsylvania; 2008. Flexible Automation of Micro and Meso-Scale Manipulation Tasks with Applications to Manufacturing & Biotechnology.
38. Cheng P, Cappelleri D, Gavrea B, Kumar V. Planning and control of meso-scale manipulation tasks with uncertainties. Proceedings of Robotics: Science and Systems. 2007