Then we implemented real-time volume rendering of the continuously acquired data and realized a "live" 4D FD-OCT image at 10 volumes per second. The acquisition line rate was set to 125,000 lines/s in 1024-OCT mode. The acquisition volume size was set to 12,500 A-scans, providing 125(X) × 100(Y) × 512(Z) voxels after the signal processing stage, which takes less than 10 ms and leaves more than 90 ms of each volume interval at the volume rate of 10 volumes/s. As noted above, the 1024-OCT mode has a 10-dB roll-off depth of about 0.8 mm, and the background noise also increases with depth. Therefore the optimum rendering volume in the visualization stage is truncated to half the acquisition volume, 125(X) × 100(Y) × 256(Z) voxels, excluding the DC component and the low-SNR portion of each A-scan. Nevertheless, full-volume rendering is available if a larger imaging range is required. The image plane is set to 512 × 512 pixels, meaning a total of 512² = 262,144 eye rays are accumulated through the whole rendering volume in the ray-casting process according to Eqs. (4) and (5). The actual rendering time recorded during image processing is ~3 ms for the half volume and ~6 ms for the full volume, much shorter than the residual volume interval (>90 ms). Furthermore, to demonstrate a higher-acquisition-speed case, the data transfer-in, the complete FD-OCT processing and the volume rendering of the same frame were repeated 3 times within each volume period, while still maintaining 10 volumes/s real-time rendering. Therefore a minimum effective processing and visualization speed of 375,000 A-scans/s can be expected for 1024-OCT.
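Since Eqs. (4) and (5) are not reproduced in this excerpt, the ray-casting accumulation can be illustrated with a generic front-to-back compositing sketch. The linear transfer function and the orthographic, one-ray-per-voxel-column geometry are simplifying assumptions; the actual system uses perspective projection and a 512 × 512 grid of eye rays.

```python
import numpy as np

def raycast_front_to_back(volume):
    """Front-to-back alpha compositing along the depth (Z) axis.

    A generic sketch of ray-casting accumulation: each eye ray steps
    through the volume, accumulating intensity and opacity sample by
    sample. Rays are taken parallel to Z for simplicity (orthographic
    rather than the paper's perspective camera).
    """
    ny, nx, nz = volume.shape
    color = np.zeros((ny, nx))          # accumulated intensity per ray
    alpha = np.zeros((ny, nx))          # accumulated opacity per ray
    for k in range(nz):
        sample = volume[:, :, k]                 # voxel intensity in [0, 1]
        a = 0.05 * sample                        # simple linear transfer function
        color += (1.0 - alpha) * a * sample      # composite behind what is opaque
        alpha += (1.0 - alpha) * a
    return color, alpha

# 125(X) x 100(Y) x 256(Z) half-volume, matching the truncated rendering size
vol = np.random.default_rng(0).random((100, 125, 256))
img, opacity = raycast_front_to_back(vol)
```

The early-saturation behavior (once `alpha` approaches 1, later samples contribute nothing) is what makes front-to-back traversal efficient on the GPU, where saturated rays can terminate early.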
First we tested the real-time visualization capability by imaging non-biological samples. Here half-volume rendering is applied, and the real volume size is approximately 4 mm × 4 mm × 0.66 mm. The dynamic scenarios were captured with free screen-recording software (BB FlashBack Express).
Figure 9(a) presents the top surface of a piece of sugar-shell-coated chocolate, which is moved up and down in the axial direction with a manual translation stage. Here perspective projection is used for the eye's viewpoint [19], and the rendering volume frame is indicated by the white lines. As played in Media 1, Fig. 9(b) shows the situation when the target surface is truncated by the rendering volume's boundary, the X-Y plane: the sugar shell is virtually "peeled" and the inner structure of the chocolate core is clearly recognizable. Figure 9(c) illustrates a five-layer plastic phantom mimicking the retina, where the layers are easily distinguishable. The volume rendering frame in Fig. 9(c) is configured in an "L" shape, so the tape layers are virtually "cut" to reveal the inner layer structure.
Fig. 9 (a) (Media 1) Dynamic 3D OCT movie of a piece of sugar-shell-coated chocolate; (b) sugar-shell top truncated by the X-Y plane, inner structure visible; (c) a five-layer phantom.
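The virtual "peeling" and "L"-shaped cut described above amount to restricting the rendering volume before compositing. A minimal NumPy sketch of such cropping follows; the array shape, axis convention and cut positions are illustrative only, not the system's actual buffers.

```python
import numpy as np

def l_shaped_crop(volume, x_cut, z_cut):
    """Zero out voxels outside an 'L'-shaped rendering frame.

    Voxels with x >= x_cut AND z < z_cut are removed, so the remaining
    volume exposes the layer structure on the cut faces, analogous to
    virtually 'cutting' the sample with the rendering volume frame.
    """
    out = volume.copy()
    out[:, x_cut:, :z_cut] = 0.0     # carve a rectangular notch
    return out

vol = np.ones((100, 125, 256))       # (Y, X, Z) voxel intensities
cut = l_shaped_crop(vol, x_cut=60, z_cut=128)
```

Because the mask only changes which voxels contribute to the ray accumulation, the cut costs nothing extra per frame and can be repositioned interactively.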
Then we implemented in vivo real-time 3D imaging of a human finger tip. Figure 10(a) shows the skin and fingernail connection; full-volume rendering is applied here, giving a real size of 4 mm × 4 mm × 1.32 mm to accommodate the large topography range of the nail connection region. The major dermatologic structures, such as the epidermis (E), dermis (D), nail fold (NF), nail root (NR) and nail body (N), are clearly distinguishable in Fig. 10(a).
Media 2 captures the dynamic scenario of the finger's vertical vibration due to arterial pulsation when the finger is pressed firmly against the sample stage. The fingerprint is imaged and shown in Fig. 10(b), where epithelial structures such as the sweat ducts (SD) and stratum corneum (SC) can be clearly identified. Figure 10(c) offers a top view of the fingerprint region in Fig. 10(b), where the surface is virtually peeled by the image frame and the inner sweat ducts are clearly visible. The volume size for Figs. 10(b) and 10(c) is 2 mm × 2 mm × 0.66 mm.
Fig. 10 In vivo real-time 3D imaging of a human finger tip. (a) (Media 2) Skin and fingernail connection; (b) (Media 3) fingerprint, side view with "L" volume rendering frame; (c) (Media 4) fingerprint, top view.
Finally, to make full use of the ultrahigh processing speed and the whole 3D data set, we implemented real-time rendering of multiple 2D frames from the same 3D data set with different model view matrices, as shown in Fig. 11, including side-view, top-view and bottom-view frames. Two of the frames use the same model view matrix, but one of them is displayed with the "L" volume rendering frame to give more information about the inside. All frames are rendered within the same volume period and displayed simultaneously, thus giving more comprehensive information about the target. The two vertices marked with large red and green dots indicate the same edge of each rendering volume frame.
Fig. 11 (Media 5) Multiple 2D frames rendered in real time from the same 3D data set with different model view matrices.
The processing bandwidth shown in Section 4 is much higher than the acquisition speed of most current FD-OCT systems, which indicates great potential for improving the image quality and volume rate of real-time 3D FD-OCT by increasing the acquisition bandwidth. The GPU processing speed can be raised even further by implementing a multiple-GPU architecture with several GPUs in parallel. Therefore the bottleneck for 3D FD-OCT imaging now lies in the acquisition speed.
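The headroom argument can be checked with the numbers quoted in this section, treating the measured processing and rendering times as upper bounds:

```python
# Timing budget for 1024-OCT mode at 10 volumes/s (figures from this section).
line_rate = 125_000          # acquired A-scans per second
volume_size = 12_500         # A-scans per volume
volume_period_ms = 1000 * volume_size / line_rate   # ms per volume

processing_ms = 10           # FD-OCT signal processing per volume (< 10 ms)
render_full_ms = 6           # full-volume ray-casting render (~6 ms)
budget_left_ms = volume_period_ms - processing_ms - render_full_ms

# Repeating transfer + processing + rendering 3x per period still fits the
# budget, implying an effective throughput of at least 3 * 125,000 A-scans/s.
effective_rate = 3 * line_rate
print(volume_period_ms, budget_left_ms, effective_rate)
```

Since even the tripled workload consumes well under half the volume period, the acquisition hardware rather than the GPU pipeline limits the overall volume rate.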
For all the experiments described above, the only additional device required to implement real-time high-speed OCT data processing and display is a high-end graphics card, which costs far less than most optical setups and acquisition devices. The graphics card is plug-and-play computer hardware requiring no optical modification, and it is much simpler than adding a prism to build a linear-k spectrometer or developing a linear-k swept laser, both of which are complicated to build and change the overall physical behavior of the OCT system.