We have designed and implemented a framework for creating a fully automated high-throughput phototransfection system. Integrated image processing, laser target position calculation, and stage movements show a throughput increase of > 23X over the current manual phototransfection method, and the potential for even greater throughput improvements (> 110X) is described. A software tool for automated off-line single cell morphological measurements, as well as real-time image segmentation analysis, has also been constructed and shown to be able to quantify changes in the cell before and after the process, successfully characterizing them using metrics such as cell perimeter, area, major and minor axis length, and eccentricity.
Stem cell research has become very prevalent in recent years. Stem cells are naturally produced by the body during embryonic development; this research is popular because these cells contain a blueprint for building everything in the body. However, with the possibility of using stem cells in cell replacement therapies for various illnesses, a more readily available and less controversial source of stem cells has been sought. One approach is to create them artificially by using viruses to deliver a set of transcription factor cDNAs into mature cells, dedifferentiating these cells into induced pluripotent stem (iPS) cells. Signals (instructions) can then be sent to an iPS cell to lead it down a desired developmental pathway to create specified cell types. This procedure is shown schematically in Figure 1(a). Creating therapeutically relevant cells in this manner suffers from the difficulty of programming stem cells to become a particular cell type. An alternative approach is the direct reprogramming of one cell type into another using the Transcriptome Induced Phenotype Remodeling (TIPeR) approach, whereby populations of RNA are introduced into a host cell in an effort to reprogram it: the current instruction set in place in the host cell is wiped out and replaced with another. A key feature of the TIPeR procedure is introducing the RNA population into the host cell. One method for performing TIPeR is through the use of transfection [7–12] to transiently introduce holes into the host cell through which mRNA populations can diffuse. Once the holes reseal, the introduced mRNA is translated and produces functional proteins that can modify the host cell phenotype. In that work, transfection is performed with a titanium sapphire laser, and this combination of cell poration with the extracellular delivery of mRNA is termed “phototransfection”.
Phototransfection provides a means for performing functional genomics manipulations on individual cells and is pictured schematically in Figure 1(b) in contrast to the iPS-based approach for changing cell phenotypes. The current, manual phototransfection procedure consists of the following steps:
This process is very tedious and inefficient. The overall yield is only 70–80% of cells merely surviving the process, with no guarantee that they are actually changing from one cell type to another. The current morphological measurements are also inadequate. They provide only a cell area metric, for which the user first traces the cell border in one program and saves it as a cell boundary image, imports this image into a second program to fill in the region inside the cell border, and then imports the filled cell image back into the original program to actually measure the area of the cell. The throughput of the manual phototransfection process is 20 cells/hour. The goal here is to apply flexible automation techniques in order to increase the throughput to about 360 cells/hour. This is important in order to rapidly explore many different amounts and types of donor RNAs, perform various functional tests to see which genes have been expressed, and fill out microarrays for data analysis and fine tuning of the overall procedure. It is also desirable to better quantify the cell morphology for before/after comparisons, as one measure to verify that the cell is indeed changing from one type to the other.
There is related work on automated systems to improve the efficiency, productivity, quality, and reliability of procedures and processes in the life sciences. Applying microrobotic and flexible automation technologies to micromanipulation tasks, such as holding and moving single cells and injecting/ejecting materials into/out of cells, is becoming an active research area. These types of cell manipulation tasks are important for the characterization and manipulation of single embryo cells in applications such as cloning, gene expression analysis, cell replacement therapy, intracytoplasmic sperm injection (ICSI), and embryo pronuclei DNA injection. Much work has been done on creating automated systems to increase the survival and success rates of these types of procedures [16–20]. There is also recent work on integrating electroporation into a robotic manipulation system for autonomous injection of single cells. Various other platforms for laboratory automation have also been presented: a “tower-based configuration” for the automatic execution of various biotechnology (genomics and proteomics) protocols; a robotic platform by Choi et al. for clinical tests in small or medium sized laboratories using mobile robots; a high-throughput automated genome and chemical analysis system; and an automated microscope platform for biological studies, drug discovery, and medical diagnostics. Studies to identify current and future approaches to the design of highly automated systems for life science processes involving humans in control loops have also been explored, in applications such as high-throughput compound screening and high-performance analytical chemistry, adherent cell culturing, and the cultivation of primary and stem cells.
There is also prior work on image segmentation techniques for biological applications, to identify the presence of tuberculosis in biologically stained images and to auto-focus images of blood smears containing red blood cells. An automated microscope system for monitoring the vitality of neuron cells that relies on identifying fluorescent markers has been presented, as has an integrated system for simultaneously measuring fluorescence microscopic and integrated sensor-based data, a possible enabling technology for future screening assays. Also taking advantage of fluorescent markers is the work by Neumann et al., an automated platform for high-content RNA interference (RNAi) screening that uses time-lapse fluorescence microscopy of live HeLa cells expressing histone-GFP to report on chromosome segregation and structure. Finally, an automated platform for high-throughput cell phenotype screening has been described that combines human live cell arrays, screening microscopy, and machine-learning-based classification methods, based on identifying the subcellular localization of marker proteins as indicators of the cellular state.
The work presented in this paper describes a framework for fully automating the phototransfection process of single cells (astrocytes and fibroblasts). Two approaches to the main automation challenge, processing the cell images in real-time and off-line for morphological comparisons, are presented. A software analysis tool for automating cell morphological measurements, for quantitative comparison of images of the cells before and after the process, is described. This is followed by a detailed description of a proof-of-concept implementation of the framework for automating the current manual phototransfection process, along with estimated process throughput results. Recommendations for further improvements are also provided.
A framework to automate the actual single cell phototransfection process has been developed and is pictured schematically in Figure 2. The first step in automating the phototransfection process is to instrument an optical microscope with a motorized stage for closed loop positioning of the cover-slips under the microscope field of view (FOV) (Figure 2(i)). Once this is done, a global and local map of each cover-slip can be constructed, as seen in Figure 2(ii). The stage can be indexed and sequential image captures of the FOV’s at specific locations on the cover-slip performed. A mosaic of all these images can be used to build a global map. This map of the entire cover-slip can then be stored for comparison and analysis at different time intervals. Local maps for individual FOV’s of the cover-slip, where image processing will be performed, can also be created. In the individual FOV’s, standard computer vision techniques, such as edge detection, image erosion, dilation, filtering, and filling, can be used to segment the cell body and processes (dendrite) area from the background in each image (Figure 2(iii)). Local and global image data can then be compiled, consisting of cell body coordinate locations, sizes, contour profile statistics, and processes (dendrite) section areas and locations. A program can be written to automatically determine suggested laser target firing locations based on the image data for each FOV on the cover-slip. These locations will be high curvature regions on the cell body, away from the dendrites and cytosol of the cell, as shown in Figure 2(iv). Once all locations are set, coordinated stage movements followed by laser firing (Figure 2(v)), micromanipulator positioning of the injection pipette, mRNA release (Figure 2(vi)), and stage re-positioning can be executed across the entire global map of the cover-slip, greatly increasing throughput.
Once all the FOV’s on a particular cover-slip have been phototransfected, the process will be repeated on the next cover-slip in the petri dish.
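The framework steps (i)–(vi) described above amount to a control loop over fields of view. The Python sketch below is purely illustrative: every hardware-facing function (capture, segment_cells, pick_targets, move_stage, fire_laser, inject_mrna) is a hypothetical stub supplied by the caller, not the authors' software.

```python
# Sketch of the automated phototransfection loop described in the framework.
# All hardware calls are hypothetical stubs passed in by the caller.

def process_coverslip(fov_grid, capture, segment_cells, pick_targets,
                      move_stage, fire_laser, inject_mrna):
    """Visit every field of view (FOV), segment cells, and treat each target."""
    global_map = {}
    for fov_xy in fov_grid:                      # (i) index the motorized stage
        move_stage(fov_xy)
        image = capture()                        # (ii) build local/global maps
        cells = segment_cells(image)             # (iii) segment cell bodies
        targets = pick_targets(cells)            # (iv) laser target locations
        global_map[fov_xy] = {"cells": cells, "targets": targets}
        for tx, ty in targets:                   # (v)-(vi) fire and inject
            move_stage((fov_xy[0] + tx, fov_xy[1] + ty))
            fire_laser()
            inject_mrna()
            move_stage(fov_xy)                   # re-position for next target
    return global_map
```

Once one cover-slip is exhausted, the same loop would be repeated for the next cover-slip in the petri dish.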
The main challenge in automating the phototransfection process is in identifying the appropriate features of the cells in the image in order to direct the laser beam to create the pores in the cell membrane where the mRNA can diffuse into it. These features can be identified with image segmentation techniques and then these segmented images can be used to automatically determine morphological measures of the cells (for comparison before and after the process) as well as the laser target firing locations.
Images of the phototransfected cell are observed and recorded before and after the process, at different time intervals, to assess morphological changes in the cell. Cell characterization with morphological measures is one way that biologists can assess the success of the overall procedure, along with other functional tests. However, this is not an easy task. The problem in comparing two different images of the same cell before and after phototransfection is that the changes in the cell are hard to discern because of changes in illumination, camera viewpoint and background in both images. Image segmentation techniques, borrowed from the computer vision literature, are used here to segment the image of the cell from the background in order to compare both images of the cell before and after the process without ambiguities. From a properly segmented image, the morphology is quantified by computing measures such as cell area, perimeter, major axis length, minor axis length, eccentricity, and equivalent diameter. This segmented image can also allow for robust image feature identification and laser target coordinate firing location calculations.
A schematic of some of the morphological measurements is shown in Figure 3. The image on the left represents the appearance of a cell before phototransfection, while the image on the right represents its morphology after phototransfection. Ideally, the cell will start to resemble and function like the donor cell, and the calculated morphology will be used as one way to quantify this change. The area of the segmented cell region of the image is defined as the actual number of pixels in the region. The perimeter metric is calculated by determining the distance between each adjoining pair of pixels around the border of the contiguous segmented cell region in the image. The major axis and minor axis lengths are the lengths, in pixels, of the major and minor axes of an ellipse fit to the segmented region having the same normalized second central moments as the region. The eccentricity measure is determined from this same ellipse and is the ratio of the distance between the foci of the ellipse and its major axis length. It is between 0 and 1: an ellipse with eccentricity = 0 is a circle, and an ellipse with eccentricity = 1 is a line segment. The equivalent diameter measure is a scalar value that specifies the diameter of a circle with the same area as the segmented region; it is computed as sqrt(4·Area/π).
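These ellipse-based measures can be made concrete. The following Python sketch (an illustration, not the paper's Matlab code) computes area, axis lengths, eccentricity, and equivalent diameter from a binary mask, using the normalized second-central-moment convention the text describes (the 1/12 term accounts for unit pixel extent, as in Matlab's regionprops):

```python
import math

def morphology_metrics(mask):
    """Morphology metrics from a binary mask given as a list of rows of 0/1."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    area = len(pts)
    cx = sum(x for x, _ in pts) / area
    cy = sum(y for _, y in pts) / area
    # Normalized second central moments of the region.
    uxx = sum((x - cx) ** 2 for x, _ in pts) / area + 1 / 12
    uyy = sum((y - cy) ** 2 for _, y in pts) / area + 1 / 12
    uxy = sum((x - cx) * (y - cy) for x, y in pts) / area
    # Eigenvalues of the moment matrix give the axes of the fitted ellipse.
    common = math.sqrt((uxx - uyy) ** 2 + 4 * uxy ** 2)
    major = 4 * math.sqrt((uxx + uyy + common) / 2)
    minor = 4 * math.sqrt((uxx + uyy - common) / 2)
    return {
        "area": area,
        "major_axis_length": major,
        "minor_axis_length": minor,
        "eccentricity": math.sqrt(1 - (minor / major) ** 2),
        "equivalent_diameter": math.sqrt(4 * area / math.pi),
    }
```

For example, a solid square yields eccentricity 0 (the fitted ellipse is a circle), while an elongated rectangle yields an eccentricity approaching 1.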
Initially, images of the cells were segmented using graph-theoretic clustering techniques, with the image pixels as nodes in the graph. Once a connected, weighted graph is constructed from the image of interest, a graph-cutting algorithm can be executed in order to segment the image. Graph-cutting techniques tackle the minimum cut problem: finding a cut in the graph that has the minimum cost among all cuts. The algorithm from Boykov and Kolmogorov that is used here solves this problem by finding the maximum flow from the “Source” nodes to the “Sink” nodes in the graph (Figure 4), that is, the maximum “amount of water” that can be sent from the “Source” to the “Sink” when graph edges are interpreted as “pipes” with capacities equal to the edge weights. The output of the algorithm is a label for each node in the graph (pixel in the image), assigned to be either the “Sink” or the “Source”. For this application, the “Sink” corresponds to pixels in the background of the image, while the “Source” corresponds to pixels belonging to the cell. Edge weights between the nodes in the graph are computed using a weighted sum of distance (Ad), pixel intensity (Ai), and texture (At) affinity measures for particular nodes. The affinity values between similar nodes are large, while the affinity measures connecting different nodes are small. The distance affinity measure drops sharply once the distance between the pixels exceeds some threshold. The pixel intensity affinity is large for similar intensities and becomes smaller as the intensity difference increases. Similarly, the texture affinities are large for pixels with similar surrounding textures and smaller as the difference increases. These three affinity measures between two nodes, N1 and N2, are listed in Equations (1)–(3), while the corresponding edge weight, E, is given in Equation (4):
where PN = position of node N, IN = pixel intensity value of node N, TN = average change in pixel intensity between pixels in an image patch surrounding node N, and the σ parameters are chosen to yield large affinity values for similar pixels and low affinity values for dissimilar pixels. The weights, w1, w2, and w3, are user defined; each is ≤ 1 and their sum = 1.
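A small Python sketch of this affinity scheme is shown below. Since Equations (1)–(4) are not reproduced here, the Gaussian kernel forms are an assumption, a standard choice consistent with the description (affinity 1 for identical nodes, decaying with distance, intensity difference, and texture difference); only the weighted-sum structure of the edge weight E follows directly from the text.

```python
import math

# Illustrative affinities; the Gaussian forms are assumed, not the paper's
# exact Equations (1)-(4).
def distance_affinity(p1, p2, sigma_d):
    d2 = (p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2
    return math.exp(-d2 / (2 * sigma_d ** 2))

def intensity_affinity(i1, i2, sigma_i):
    return math.exp(-((i1 - i2) ** 2) / (2 * sigma_i ** 2))

def texture_affinity(t1, t2, sigma_t):
    return math.exp(-((t1 - t2) ** 2) / (2 * sigma_t ** 2))

def edge_weight(n1, n2, sigmas=(2.0, 10.0, 5.0), w=(0.4, 0.4, 0.2)):
    """Edge weight E = w1*Ad + w2*Ai + w3*At for nodes n = (position,
    intensity, texture); the weights w are user defined and sum to 1."""
    ad = distance_affinity(n1[0], n2[0], sigmas[0])
    ai = intensity_affinity(n1[1], n2[1], sigmas[1])
    at = texture_affinity(n1[2], n2[2], sigmas[2])
    return w[0] * ad + w[1] * ai + w[2] * at
```

Identical pixels get an edge weight of 1, while distant, dissimilar pixels get a weight near 0, which is exactly the behavior the max-flow cut exploits.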
Edge weights between the nodes in the graph and the “Sink” and “Source” nodes also need to be computed to complete the graph. Equations (5)–(11) are used for this. Here AdBkg, AiBkg, and AtBkg are distance, intensity, and texture affinities associated with the background (“Sink”) section of the image that are pre-computed from a set of training images.
The raw output from the graph-cut algorithm needs to be filtered in order to arrive at the final segmentation of the cell from the background. Image erosion and dilation steps are applied in Matlab®, and the largest connected pixel region that remains is used as the segmented cell image, with statistics reported on it. Figure 5(a) shows the result of this procedure on images of four fibroblast cells before and after the phototransfection process, with the segmented areas overlaid on the original images. The images in the top row are before the process, while the images in the bottom row are after the process has been completed. The cell perimeter, area, major and minor axis lengths, and eccentricity (in pixels) are calculated for each set of images, and the corresponding changes in these morphological measures are reported in Table 1. These metrics show substantial changes after the phototransfection process has been performed. This indicates a successful phototransfection, since the fibroblasts are now starting to look like the donor astrocyte cells and there are metrics to support this.
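The final selection step (keep only the largest connected pixel region after morphological filtering) can be sketched in plain Python; this mirrors only the selection logic of the Matlab pipeline, using 4-connectivity as an assumption:

```python
from collections import deque

def largest_connected_region(mask):
    """Return a mask keeping only the largest 4-connected region of 1-pixels,
    mirroring the final selection step of the segmentation pipeline."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill of one connected region.
                region, q = [], deque([(x, y)])
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    region.append((cx, cy))
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < w and 0 <= ny < h \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                if len(region) > len(best):
                    best = region
    out = [[0] * w for _ in range(h)]
    for x, y in best:
        out[y][x] = 1
    return out
```

Small spurious regions left by the graph cut are thereby discarded, and the surviving region is the one the morphology statistics are computed on.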
However, due to the large changes in cell morphology and inconsistencies in the lighting conditions, cover-slip markings, and textures of the backgrounds and cells in the images, consistent results for one set of system parameters across all data sets are difficult to achieve. Figure 5(b) shows examples of poor image segmentation, where only a subset region of the actual cell is identified, using the same set of system parameters as in Figure 5(a). Continuous tuning of the graph parameters can be performed to obtain acceptable results; however, it is desired to keep these details transparent to the end-user, and instructions on how and what to change are not trivial for the expected end-user (a biologist). Therefore, a stand-alone, more user-friendly Matlab®-based software tool has been developed. (Note: the term “acceptable” is used in comparison to the results obtained from manual operation or calculation methods. Acceptable performance is deemed to be within 10% of these manually obtained values.)
This software tool has been specifically designed and implemented for assessing morphological measures in the astrocyte and fibroblast cells before and after the phototransfection process. A screen shot of the AutoPT Cell Morphology (CM) Graphical User Interface (GUI) that operates the program is shown in Figure 6(a). It has been set up for individual image processing as well as for the bulk processing of many images. Once the image to be analyzed has been loaded, the user can choose from a number of different processing options in the Manual Processing Tools panel to apply to the image. These include: equalizing the image (i.e. evenly distributing intensity values throughout the range of intensity values in the image), image darkening/brightening, edge detection, image closing, connected pixel filtering (filtering out connected pixel regions smaller than a specified size), and filling image holes. There is a choice of five common edge detection methods to apply, all part of Matlab®’s Image Processing Toolbox. The processing can be done in any order; however, typically the order in which the tools appear in the Manual Processing Tools panel is the order in which they are executed. Figure 6(b) shows an original image and the subsequently processed images after application of the manual processing tools in this order. There is also an option to manually select pixels in the processed image to either connect or disconnect them from the processed image. Once the image is properly segmented, the cell statistics for the largest connected pixel region are calculated and displayed in the CM GUI. These statistics include the perimeter, area, major axis length, minor axis length, eccentricity, equivalent diameter, solidity, and extent. The original image of the cell is then overlaid with the segmented image of the cell, in both the main GUI panel and a separate window. A new image of just the segmented cell is also generated.
The Record Statistics button can be used to write this data to a text file and save the original cell image, segmented cell image, and overlay image of the cell in jpg format. The data file written also contains hyperlinks to these saved images. Once suitable manual processing steps and parameters have been determined for a few test images, bulk processing of all the images in the active directory can be performed with these settings. Inside the Automatic Processing Sequencer panel, the process to be performed can be selected and the corresponding sequence number entered. The processing steps will use the parameters set in Manual Processing Tools panel and execute the processing on all the images in the active directory, write the corresponding statistics to a text file, and record the original, cell, and overlay images, as shown in Figure 7. There are also settings to record just the largest region, three largest, or all the connected pixel regions that are found.
Three image sets, each containing 5 pairs of images corresponding to the same cell before and after the phototransfection process, were used to compare the performance of this software tool in acquiring morphological cell measurements against the traditional method. In the traditional method, the user first traces the cell border in one particular program. This new cell boundary image is then imported into another program to fill in the region inside the cell border, and the filled cell image is imported back into the original program to measure the area of the cell. The processing time to analyze each image set using this technique, along with the percentage change in the area metric for each image pair, was recorded and is listed in Table 2 (column 3). The same image sets were analyzed manually using the AutoPT CM GUI (Figure 6(a)), and the processing time for each set along with the percentage area change for each image pair was recorded and is also shown in Table 2 (column 2). For image sets 1 and 3, the processing time using the AutoPT CM GUI tool is 33% and 38% faster than the traditional method, respectively. The processing time for image set 2 was about the same for both methods. The results for the percentage change in the area metric with the CM GUI program are all within 8% of the results produced with the traditional analysis method. This error is small and can be explained by the fact that the same person did not use both methods (one person used the traditional method while the other used the GUI) and some portions of the cell borders are subject to individual interpretation. It is also expected that further time gains will be realized once the user is more experienced with the GUI and identifies the best combination of processing controls to segment particular types of images (this is the reason for the similar processing times on image set 2).
The CM GUI program is also more user-friendly and efficient, since all the necessary processing steps are self-contained and there is no need to switch back and forth between different programs to perform the analysis. Further, the CM GUI provides more than 6X the information of the alternate approach. As stated previously, in addition to the cell area metric, the GUI program yields metrics for the cell perimeter, major axis length, minor axis length, eccentricity, equivalent diameter, and others. This data for the three sets of test images is shown in Table 3. The metrics listed here are all substantially decreased (by an average of 58%) after the phototransfection process. The traditional analysis method cannot provide these extra morphological measurements.
The Bulk Process function in the AutoPT Morphology GUI was also used to automate the processing of the three sets of test images. Using a laptop running Windows XP, with a 1.80 GHz Pentium M processor and 1 GB RAM, and depending on the processing parameters selected, the processing time to analyze a set ranged from 4.5 to over 40 minutes. In each case, data for every connected pixel region greater than 500 pixels was recorded, which, depending on the settings, can result in a lot of extra processing time. Due to the inconsistencies in the images (lighting conditions, focal length, pipette placement, etc.), it was difficult to identify one set of image parameters that successfully segments every cell image. This is also the case when processing the cell images in real-time during the phototransfection procedure to calculate the laser target positions on the cell. In the best cases when using the off-line Bulk Process functionality, a particular processing parameter set was able to segment about 60% of the images in the set within an acceptable tolerance. To process the rest of the images, another set of parameters is selected. This is repeated until all the images in the set have acceptable results or the remaining images can simply be processed manually. Standardized procedures to determine the image capture settings during the process are required to produce more consistency among all the images in an image set and increase the efficiency and results of both the bulk processing and real-time processing of the images. Also, optimized code is needed to further increase the processing speeds. Thus, one cannot simply automate a manual process without considering the impact of the manual procedures on the automation task at hand.
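The fallback strategy just described (segment the whole set with one parameter set, then re-process only the failures with the next) can be sketched as follows; `segment` and `acceptable` stand in for caller-supplied, hypothetical functions:

```python
def bulk_segment(images, parameter_sets, segment, acceptable):
    """Try each parameter set in turn, re-processing only the images that
    earlier parameter sets failed to segment acceptably."""
    results, remaining = {}, list(images)
    for params in parameter_sets:
        still_failing = []
        for img in remaining:
            seg = segment(img, params)
            if acceptable(seg):
                results[img] = seg
            else:
                still_failing.append(img)
        remaining = still_failing
        if not remaining:
            break
    # Any images left in `remaining` fall back to manual processing.
    return results, remaining
```

This matches the observed workflow: one parameter set handled about 60% of a set, and subsequent sets (or manual work) handled the rest.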
A proof-of-concept implementation for automating this phototransfection process has been accomplished using the flexible automation micro/meso-scale manipulation system from our previous work. The system setup can be seen in Figure 8(a). Here, an inverted optical microscope (Nikon Eclipse TEU2000-U), motorized XY stage (Prior Scientific H107 ProScan II), and CCD camera (Sony XC-77) are the pertinent pieces of hardware being utilized. There is also a 4-axis computer controlled micromanipulator (Siskiyou Design Instruments MX7600R) and associated controller (Siskiyou Design Instruments MC2000) available for use in the test-bed. The computer controlled micromanipulator can be used to position a pipette for dispensing mRNA. Typically, a 40X objective is used to image the cells for this application. The phototransfection process utilizes a titanium sapphire laser to perforate the cell membranes, and the laser can be directed to any region of the microscope FOV to administer the laser beam. There is currently no laser in this implementation; when one is incorporated into this system in the future, it will be focused to fire at the center of the image in the FOV. The control software to operate the system is written in Visual C#.Net, leveraging the Windows .NET framework and enabling easy integration of software modules that can reside on different workstations. The software includes (a) real-time capture of images from the microscope; (b) control of the motorized stages; and (c) a simple GUI (Figure 8(b)) for the operator to specify the type of cell he/she is interested in by entering relevant image processing parameters. The image processing routines are written in Matlab® (R2006a) using functions from the Image Processing Toolbox.
The Laser Target Control Panel found in the lower left corner of the GUI (Figure 8b) allows the user to specify the parameters for the image processing code in Matlab®. The tunable parameters include the type of edge detection method to use (Canny, Sobel, Roberts, Prewitt, Laplacian), the diameter for the image closing operation and pixel size for a connected pixel filtering procedure. Note that these parameters are set just once. The Get Target button calculates a recommended laser target firing location of the cell of interest in the FOV. This button saves the image from the current image frame along with the specified parameters and then calls Matlab® to perform the necessary calculations to segment the image of the cell from the background and recommends image coordinates to fire the laser. In this current implementation, this position is just determined as the centroid of the cell body. However, more sophisticated metrics to calculate the laser target position can easily be applied here instead. The laser target position information is then sent back to the main control program and drawn on the screen in pink. Once the laser target position has been established, the Position Target button can be utilized to have the motorized stage automatically translate the cell in the XY plane so that the calculated laser target position is now at the center of the image where the laser will be parked (laser firing location in Figure 8b). The Clear Target button is used to reset the laser target position in the computer memory and move the stage back to its original position. By coupling the Get Target and the Position Target function with the laser firing and mRNA release from a pipette mounted on the motorized manipulator in the system (as planned in the future), the system will be completely automated.
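The Get Target/Position Target steps reduce to computing the segmented cell's centroid and the stage translation that brings it to the image center. The Python sketch below illustrates the calculation only; it is not the authors' C#/Matlab code, and the micrometer-per-pixel scale and the sign convention of the stage axes are assumptions that depend on the actual stage/camera orientation:

```python
def laser_target(mask):
    """Centroid of the segmented cell: the laser target metric currently
    used in the implementation (more sophisticated metrics could replace it)."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def stage_offset(target_xy, image_size, um_per_pixel):
    """Stage translation (micrometers) that moves the target to the image
    center, where the laser is parked. Sign convention is assumed."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    return ((cx - target_xy[0]) * um_per_pixel,
            (cy - target_xy[1]) * um_per_pixel)
```

After this translation, the calculated laser target sits at the fixed laser firing location, and the Clear Target step simply applies the negated offset.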
On a single control computer running Windows XP, with a 2.39 GHz Pentium 4 processor and 1 GB of RAM, it takes 30 seconds to segment and identify a target location for the cell and translate the XY stage to move the cell’s laser target to the center of the image for eventual laser firing and mRNA release. (The laser firing and mRNA release can be done practically simultaneously and is the easiest and fastest part of the phototransfection process, taking about 1–2 seconds to do manually.) The 30 second processing time corresponds to a throughput of about 120 cells/hour, which is a 6X improvement over the current manual procedure (20 cells/hour). By coupling all the software modules more efficiently (eliminating the C# wrappers with Matlab® software) and by processing all cells in the field of view (typically 4–6), the throughput is expected to increase to over 500 cells/hour. This is greater than a 25X improvement. Also, using a faster computer would further decrease the cycle time. This system can also be run continuously, only needing a human to be there to replenish a new batch of cells and remove the processed ones. Assuming a 12-hour day at a rate of 500 cells/hour projects to a throughput of 6000 cells/12-hr day.
As proof-of-concept for the increased time gains from using one integrated program, the C# program functionality was converted to a Matlab® program capable of acquiring images from the CCD camera, processing the image, calculating laser target positions, and moving the XY stage. Running everything in the same program reduced the process time from 30 seconds down to approximately 8 seconds. This corresponds to a throughput of 450 cells/hour, a 23X improvement over the current manual process. Again, assuming that all the cells (typically 4–6) in the FOV can be processed with minimal additional computational overhead, a potential throughput of 2250 cells/hour (>113X improvement) is estimated. Subsequent segmentation of images with 4–6 cells has indeed shown no marked increase in overall processing time. However, in practice, this fully integrated program cannot be written in Matlab®, since the images from the confocal microscope used for the actual procedure are captured with a photomultiplier tube (PMT). The PMT is not compatible with Matlab®’s Image Acquisition Toolbox, which has been used here to capture images from the CCD camera in the test setup. Therefore, custom software is required to capture the PMT images, perform the appropriate image segmentation, calculate laser target positions, and translate the XY stage in order to achieve these further throughput gains. Another option would be to add an additional optical port to the microscope, or an external optical system, to which a compatible CCD camera could be mounted and hooked into the Matlab® interface. Consideration of both of these options is an area of future work.
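The throughput figures quoted above follow directly from the cycle times, as the small calculation below shows:

```python
def throughput_per_hour(seconds_per_cycle, cells_per_cycle=1):
    """Cells processed per hour for a given per-cycle time."""
    return 3600.0 / seconds_per_cycle * cells_per_cycle

# 30 s cycle -> 120 cells/hour (6X over the 20 cells/hour manual rate);
# 8 s cycle  -> 450 cells/hour (22.5X, rounded to 23X in the text);
# 8 s cycle with ~5 cells per FOV -> 2250 cells/hour (>110X).
```

Note that the 23X and >113X figures correspond to 450/20 = 22.5 and 2250/20 = 112.5, rounded.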
To get the maximum possible throughput out of the entire system, consideration should be given to automatically refilling the micropipette with mRNA, along with investigating how to move each processed cover-slip out of the way and store it in an organized manner while feeding in the next one to be processed, with as little human interaction as possible.
Work towards fully automating the single cell manipulation process of phototransfection has been presented in this paper. Phototransfection is presently done manually in a very tedious manner. A framework for fully automating this procedure has been designed and a proof-of-concept implementation achieved. Computer vision techniques are used to identify the cell of interest in the FOV and determine target locations for the laser beam. A control program takes this information and coordinates movements of the computer controlled XY stage, translating the coordinates of the laser target location to a predefined, fixed laser firing location. A 23X improvement is possible with this implementation, and a path to an improvement of greater than 110X has been described. Images of the phototransfected cell have been observed before and after the process, and a software tool has been developed to assess morphological changes in the cell as a way to characterize them and assess the efficacy of the phototransfection process. Image segmentation algorithms were used to segment the cell from the background in order to compare both images of the cell without ambiguities. From the properly segmented image, the morphology is quantified by computing measures such as cell area, asymmetry, perimeter, and eccentricity. Results show a notable decrease in these metrics after the process has been performed, a throughput increase over manual cell morphology measurements, a 6X gain in the number of measurements made, and a more efficient and user-friendly software tool for cell morphological analysis.
The authors gratefully acknowledge funding from NSF Grant IIS-0413138, Dept. of Education GAANN Grant P200A060275, the Keck Foundation and the NIH Director’s Pioneer Award Program, DP1-OD-04117 to support this work and Kitty Wu for discussions on the manual phototransfection procedure.
Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
David J. Cappelleri, Department of Mechanical Engineering, Stevens Institute of Technology, Hoboken, NJ USA.
Adam Halasz, Dept. of Mathematics, West Virginia University, Morgantown, WV USA.
Jai-Yoon Sul, PENN Genome Frontiers Institute, Dept. of Pharmacology, University of Pennsylvania, Philadelphia, PA, USA.
Tae Kyung Kim, PENN Genome Frontiers Institute, Dept. of Pharmacology, University of Pennsylvania, Philadelphia, PA, USA.
James Eberwine, PENN Genome Frontiers Institute, Dept. of Pharmacology, University of Pennsylvania, Philadelphia, PA, USA.
Vijay Kumar, GRASP Lab, Department of Mechanical Engineering, University of Pennsylvania, Philadelphia, PA USA.