We have constructed a near-real-time combined imager suitable for simultaneous ultrasound and near-infrared diffusive light imaging and coregistration. The imager consists of a combined hand-held probe and the associated electronics for data acquisition. A two-dimensional ultrasound array is deployed at the center of the combined probe, and 12 dual-wavelength laser source fibers (780 and 830 nm) and 8 optical detector fibers are deployed at the periphery. We have experimentally evaluated the effects of missing optical sources in the middle of the combined probe on the accuracy of the reconstructed optical absorption coefficient and assessed the improvements of a reconstructed absorption coefficient with the guidance of the coregistered ultrasound. The results have shown that, when the central ultrasound array area is in the neighborhood of 2 cm × 2 cm, which corresponds to the size of most commercial ultrasound transducers, the optical imaging is not affected. The results have also shown that the iterative inversion algorithm converges quickly with the guidance of a priori three-dimensional target distribution, and only one iteration is needed to reconstruct an accurate optical absorption coefficient.
Ultrasound is used extensively for differentiation of cysts from solid lesions in breast examinations, and it is routinely used in conjunction with mammography. Ultrasound can detect breast lesions a few millimeters in size.1 However, its specificity in breast cancer diagnosis is not considered high enough because of the overlapping characteristics of benign and malignant lesions.2,3 Optical imaging based on diffusive near-infrared (NIR) light has great potential to differentiate tumors from normal breast tissue through determination of tissue parameters such as blood volume, blood O2 saturation, tissue light scattering, water concentration, and the concentration and lifetime of exogenous contrast agents.4–12 As a potential diagnostic tool, however, NIR diffusive light imaging suffers from low spatial resolution and lesion location uncertainties because of intense light scattering in tissue.
Most NIR imaging reconstruction algorithms are based on tomographic inversion techniques.13–20 Reconstruction of tissue optical properties in general is underdetermined and ill-posed because the total number of unknown optical properties always exceeds the number of measurements, and the perturbations produced by the heterogeneities are much smaller than the background signals. In addition, the inversion reconstruction algorithms are sensitive to measurement noise and model errors.
Our group and others have introduced a novel hybrid imaging method that combines the complementary features of ultrasound and NIR diffusive light imaging.21–25 The hybrid imaging obtains coregistered ultrasound and NIR diffusive light images through simultaneous deployment of an ultrasound array and NIR source–detector fibers on the same probe.21,22,24 Coregistration permits joint evaluation of the acoustic and optical properties of breast lesions and enables use of the lesion morphology provided by high-resolution ultrasound to improve the estimate of lesion optical properties. With the a priori knowledge of lesion location and shape provided by coregistered ultrasound, NIR imaging reconstruction can be localized within specified three-dimensional (3-D) regions. As a result, the reconstruction is overdetermined because the total number of unknown optical properties is reduced significantly. In addition, the reconstruction is less sensitive to noise because convergence can be achieved within a small number of iterations.
The clinical use of the combined diagnosis relies on the coregistration of both ultrasound and NIR sensors at the probe level. Conventional ultrasound pulse-echo imaging requires that an imaging transducer be located on top of the target, whereas NIR diffusive light imaging is feasible when the optical source and detector fibers are distributed at the periphery of the ultrasound transducer. However, the effects of missing optical sources in the middle of the combined probe on the accuracy of the reconstructed optical properties have to be evaluated. In addition, the improvements of reconstructed optical properties with the guidance of the coregistered ultrasound need to be quantitatively assessed. Furthermore, real-time data acquisition is necessary to avoid errors in coregistration caused by patient motion during the clinical experiments. In this paper we report our experimental results on the optimal probe configuration, and we quantify the improvements on reconstructed optical properties using a combined probe. We also demonstrate simultaneous combined imaging with a near-real-time imager.
We used the Born approximation to relate the scattered field Usc(r, ω) measured at the probe surface to absorption variations in each volume element within the sample. In the Born approximation, the scattered wave originating from a source at rsi and measured at detector position rdi is related to the medium absorption heterogeneity Δμa(rνj) at voxel position rνj by

[Usc(rsi, rdi, ω)]M×1 = [Wij]M×N [Δμa(rνj)]N×1,    (1)

where M is the total number of source–detector pairs, N is the total number of imaging voxels, and Wij is the weight matrix given in Ref. 19, proportional to G(rνj, rdi, ω)Uinc(rνj, rsi, ω)/D̄. Here G(rνj, rdi, ω) and Uinc(rνj, rsi, ω) are the Green's function and the incident wave, respectively, ω is the modulation frequency, and D̄ is the average or background diffusion coefficient, i.e., the average value over the background or whole tissue.
With M measurements obtained from all possible source–detector pairs in the planar array, we can solve N unknowns of μa by inverting the above matrix equation. In general, the perturbation Eq. (1) is underdetermined (M < N) and ill-posed.
NIR imaging by itself generally has poor depth discrimination, whereas ultrasound provides accurate target depth. Once the target depth is available from coregistered ultrasound, we can set Δμa at nontarget depths equal to zero. This implies that all the measured perturbations originate from the particular depth that contains the target. Because the number of unknowns is reduced significantly, the reconstruction converges quickly. In Ref. 23 we reported that, with the a priori target depth provided by ultrasound, the accuracy of the reconstructed μa improved by 15–30% on average and the reconstruction speed improved by an order of magnitude. In this paper we further demonstrate that, with the 3-D target distribution provided by coregistered ultrasound, the accuracy of the reconstructed μa and the reconstruction speed can be improved further.
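The counting argument behind this depth guidance can be made concrete with a small sketch. The sizes below are illustrative assumptions that mirror the probe and voxel grid described later in this paper (12 sources, 8 detectors, a 9 cm × 9 cm × 4 cm volume with 0.4 cm × 0.4 cm × 1 cm voxels), not measured system parameters:

```python
# Measurement versus unknown counts for the full volume and for an
# ultrasound-flagged region (all sizes assumed for illustration).
M = 12 * 8                 # source-detector pairs (measurements)
nx, ny, nz = 22, 22, 4     # voxels along x, y (~9 cm / 0.4 cm) and z (4 cm / 1 cm)
N = nx * ny * nz           # unknown voxels in the full volume
print(M, N, M < N)         # 96 1936 True: the full problem is underdetermined

# Ultrasound localizes a ~2 x 2 x 1 cm lesion region: 5 x 5 x 1 voxels.
region = 5 * 5 * 1
print(M, region, M > region)   # 96 25 True: region-restricted, overdetermined
```

Restricting the unknowns to the ultrasound-identified region turns the same 96 measurements from far too few into more than enough.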
To solve for the unknown optical properties in Eq. (1), we used the total least-squares (TLS) method26,27 to invert Eq. (1) iteratively. The TLS method performs better than ordinary least-squares methods when the measurement data are subject to noise and the linear operator W itself contains errors; we found that it provides more accurate reconstructed optical properties, and we therefore adopted TLS for solving the inverse problem. It has been shown in Ref. 28 that the TLS minimization is equivalent to the following minimization problem:
min over X of ‖WX − Usc‖²/(1 + ‖X‖²),    (2)

where X represents the unknown optical properties. The conjugate gradient technique was employed to solve Eq. (2) iteratively.
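As a sketch of this step, the ratio form of the TLS objective can be minimized with an off-the-shelf conjugate-gradient routine. The weight matrix, measurement vector, and problem sizes below are synthetic stand-ins for the actual imaging system:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic overdetermined problem: 96 measurements, 36 region voxels,
# one absorbing voxel of strength 0.1 (all values assumed for illustration).
rng = np.random.default_rng(1)
M, N = 96, 36
W = rng.standard_normal((M, N))            # stand-in for the weight matrix
x_true = np.zeros(N)
x_true[10] = 0.1                           # one absorbing voxel
b = W @ x_true + 1e-3 * rng.standard_normal(M)   # noisy "measurements"

def tls_objective(x):
    """TLS Rayleigh-quotient objective, the ratio form of Eq. (2)."""
    r = W @ x - b
    return (r @ r) / (1.0 + x @ x)

# Conjugate-gradient minimization, as used for Eq. (2) in the text.
res = minimize(tls_objective, np.zeros(N), method="CG")
print(int(np.argmax(res.x)))               # recovers the absorbing voxel index
```

With low noise and a well-conditioned synthetic W, the minimizer lands close to the ordinary least-squares solution, since the denominator stays near 1 for small ‖X‖.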
There are four basic requirements that guide the design of the combined probe. First, reflection geometry is preferred because a conventional ultrasound scan is performed with this geometry. Second, an ultrasound array needs to occupy the center of the combined probe for coherent imaging. Third, NIR sources and detectors have to be distributed at the periphery. Because the photon propagation distribution exhibits a banana shape, imaging of the tissue volume underneath the probe is feasible even though there are no sources or detectors deployed in the central portion of the probe. Fourth, the minimum source–detector separation should be larger than 1 cm for the diffusion approximation to be valid, and the maximum separation should be ~8–9 cm to effectively probe depths of 3–4 cm.
On the basis of these requirements we deployed 12 dual-wavelength optical source fibers and 8 detector fibers over a 9 cm × 9 cm probe area (see Fig. 1). The minimum and maximum source–detector separations in this configuration are 1.4 and 8 cm, respectively. To study the effect of the central optical hole on the accuracy of the reconstructed optical properties, we compared reconstruction results with and without an extra center source. The configuration without the center source corresponds to a 2 cm × 2 cm hole area. We further moved the 12 noncenter sources and 8 detectors toward the periphery, leaving a 3 cm × 3 cm hole area in the middle. Figure 2 shows a picture of a combined probe with the 3 cm × 3 cm central area occupied by an ultrasound array. The ultrasound array consists of 64 elements made of 1.5-mm-diameter piezoelectric transducers (Valpey Fisher Inc.). The transducers are deployed in a rectangular matrix with 4-mm spacing in both the x and y directions. The center frequency of the transducers is 6 MHz and the bandwidth is 40%. The transducers are made from the same piece of piezoelectric material; therefore the gain difference among transducers is less than 3 dB. The 12 dual-wavelength laser diode sources (780 and 830 nm) and 8 photomultiplier tube (PMT) detectors are coupled to the probe through optical fibers, which are deployed at the periphery of the two-dimensional (2-D) ultrasound array. This hybrid array deployment is a compromise between ultrasound coherent imaging and NIR diffusive light imaging requirements.
The 9 cm × 9 cm × 4 cm image volume underneath the probe is discretized into voxels of size 0.4 cm × 0.4 cm × 1 cm. There is a trade-off between the accurate estimation of the weight matrix W and the voxel size. Because Wij is a discrete approximation of the integral of G(r, rdi, ω)Uinc(r, rsi, ω)/D̄ over the volume of voxel j,
it is more accurate when the voxel size is smaller. However, the total number of reconstructed unknowns increases dramatically as the voxel size decreases. Furthermore, the rank of W does not grow in proportion to the total number of voxels when the voxel size decreases. This suggests that neighboring Wij's become correlated at small voxel sizes, so a further decrease in voxel size does not add independent information to the weight matrix. We found that a 0.4 cm × 0.4 cm × 1 cm voxel size is a good compromise; therefore we used this voxel size in the image reconstructions reported in this paper.
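The rank saturation can be checked numerically. The weight model below, a simplified exp(−k(rs + rd))/(rs rd) kernel on an assumed source–detector layout, is only a stand-in for the true diffusion weight matrix, but it shows the same behavior: halving the voxel size multiplies the number of columns by roughly four while the numerical rank, capped by the 96 measurements, grows far more slowly.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 1.0                                   # assumed effective attenuation, 1/cm
src = rng.uniform(0, 9, (12, 2))          # assumed source positions on the probe
det = rng.uniform(0, 9, (8, 2))           # assumed detector positions
pairs = [(s, d) for s in src for d in det]        # M = 96 source-detector pairs

def weight_matrix(voxel_cm):
    """Simplified exp(-k*r)/r weight kernel on one 2.5-cm-deep voxel layer."""
    xs = np.arange(voxel_cm / 2, 9, voxel_cm)
    vox = np.array([(x, y, 2.5) for x in xs for y in xs])
    W = np.empty((len(pairs), len(vox)))
    for i, (s, d) in enumerate(pairs):
        rs = np.linalg.norm(vox - np.r_[s, 0.0], axis=1)   # voxel-source dist
        rd = np.linalg.norm(vox - np.r_[d, 0.0], axis=1)   # voxel-detector dist
        W[i] = np.exp(-k * (rs + rd)) / (rs * rd)
    return W

ranks = []
for voxel in (0.8, 0.4, 0.2):
    W = weight_matrix(voxel)
    rank = np.linalg.matrix_rank(W, tol=1e-10 * np.linalg.norm(W))
    ranks.append(rank)
    print(voxel, W.shape[1], rank)   # columns grow ~4x per halving; rank does not
```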
We constructed a NIR frequency-domain imaging system. The block diagram of the system is shown in Fig. 3. The system has 12 dual-wavelength source channels and 8 parallel receiving channels. On the transmission side, 12 pairs of dual-wavelength (780 and 830 nm) laser diodes are used as light sources, and their outputs are amplitude modulated at 140.000 MHz. Each of the 12 optical combiners (OZ Optics Inc.) acts as a Y adapter, guiding the emission of two diodes of different wavelengths through the same thin optical fiber (approximately 0.2 mm in diameter). To reduce noise and interference, an individual driving circuit is built for each diode. Because the laser diodes are operated sequentially, a control board that interprets instructions from a PC is used to coordinate the operations of the associated components. When a single transmission channel is selected, the board turns on the corresponding driving circuit so that a dc driving current can be established for the diode. At the same time, a selection signal is sent to an rf switching unit, which routes the rf signal to the correct channel to modulate the optical output. On the reception side, eight PMTs are employed to detect diffusely reflected light from the turbid medium. Each PMT is housed in a sealed aluminum box, which shields against both ambient light and electromagnetic fields, and an optical fiber (3 mm in diameter) couples NIR light from the detection point to the reception window of the PMT. The electrical signal converted from the optical input is generally weak and rather high in frequency, so high-gain amplification and frequency translation are necessary before it can be sampled by an analog-to-digital (A/D) board inside the PC. We built eight parallel heterodyne amplification channels to measure the responses of all detectors simultaneously, which reduces the data-acquisition time.
Each amplification channel consists of an rf amplifier (40 dB), a mixer in which the rf signal (OSC1, 140.000 MHz) is mixed with a local oscillator (OSC2, 140.020 MHz), a bandpass filter centered at 20 kHz, and a 30-dB low-frequency amplifier. The heterodyned two-stage amplification scheme efficiently suppresses wideband noise. We also generated a 20-kHz reference signal by directly mixing OSC1 and OSC2, which is necessary for retrieving phase shifts. The eight detection signals and one reference are sampled, converted, and acquired by the PC simultaneously, where the Hilbert transform is used to compute the amplitude and phase of each channel. The entire data acquisition takes less than 1 min, which is fast enough to acquire data from patients.
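The amplitude and phase retrieval step can be sketched as follows. The sampling rate, record length, and the synthetic 20-kHz signals are assumptions for illustration, not the actual A/D parameters:

```python
import numpy as np
from scipy.signal import hilbert

fs, f_if = 200e3, 20e3                   # assumed sampling rate and 20-kHz IF
t = np.arange(0, 10e-3, 1 / fs)          # 10-ms record (200 full IF periods)

ref = np.cos(2 * np.pi * f_if * t)                 # reference channel
sig = 0.5 * np.cos(2 * np.pi * f_if * t - 0.7)     # detector channel, 0.7-rad lag

a_sig, a_ref = hilbert(sig), hilbert(ref)          # analytic signals
amplitude = np.abs(a_sig).mean()                   # envelope -> ~0.5
phase_shift = np.angle(a_sig * np.conj(a_ref)).mean()   # ~-0.7 rad vs reference

print(round(amplitude, 3), round(phase_shift, 3))
```

Multiplying by the conjugate of the reference's analytic signal cancels the common 20-kHz carrier, leaving each channel's amplitude and phase relative to the reference.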
One of the challenges encountered in the design of a NIR imaging system is the huge dynamic range of signals received at various source–detector distances. For example, for a semi-infinite phantom made of 0.5% Intralipid solution, the amplitude measured 1 cm from a source is approximately 5000 times larger than that at 8-cm separation. In addition, the perturbation due to an embedded heterogeneity with optical properties similar to those of a tumor is normally a few percent of the background signal. As a result, a reflection-mode NIR imaging system should have at least a 120-dB dynamic range to probe a target up to 4 cm in depth. It is hard to build amplifiers that work linearly over such a wide dynamic range. We overcame this difficulty by implementing two-level source outputs. The dc output of a laser diode is controlled by adjusting its feedback loop, while the rf signal is switched simultaneously by a two-step attenuator (no attenuation or 30-dB attenuation). When the source and detector are close to each other, the source is controlled to yield a low-level output; when the separation becomes larger, the 30-dB higher output level is used. With this two-level source scheme, our system achieves good linearity over a wide range of source–detector separations (from 1.5 to 8 cm).
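The 120-dB figure follows from simple arithmetic: the 5000:1 amplitude span alone is about 74 dB, and resolving a perturbation of roughly 1% of the weakest signal (an assumed representative value for "a few percent") adds about another 40 dB:

```python
import math

span_db = 20 * math.log10(5000)          # amplitude span, 1 cm vs 8 cm: ~74 dB
perturb_db = 20 * math.log10(1 / 0.01)   # resolving a ~1% perturbation: 40 dB
print(round(span_db + perturb_db))       # ~114 dB, of the order of the 120 dB quoted
```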
Because the parameters of an individual laser diode or a PMT vary considerably from one to another, we have to calibrate the gain and phase shift for each channel. A set of measurements obtained from all source–detector pairs placed on the boundary of a homogeneous medium is
Aαβ and ϕαβ, α = 1, …, m, β = 1, …, n. Here the amplitude Aαβ and phase ϕαβ are associated with source α and detector β, and m and n are the total numbers of sources and detectors, respectively. From diffusion theory we obtain the following set of equations7:

Aαβ = Is(α)Id(β) exp(−kiραβ)/ραβ²,    ϕαβ = ϕs(α) + ϕd(β) + krραβ,
in which Is(α) and ϕs(α) are the relative gain and phase delay associated with source channel α, Id(β) and ϕd(β) are the corresponding quantities for detector channel β, ραβ is the corresponding separation, and kr + jki is the complex wave number. Taking the logarithm of the amplitude equations yields the following set of linear equations:

log(ραβ²Aαβ) = log Is(α) + log Id(β) − kiραβ.    (3)
Although the optical properties of the calibration medium are known in advance, we leave the wave number as a variable and use the fitted kr and ki to calculate the background scattering and absorption coefficients. We verified our calibration method by comparing the best-fitted k's with the real values; fits using 0.5–0.8% Intralipid solutions consistently yielded scattering and absorption coefficients with good accuracy. With the two unknown wave numbers included, the total number of unknowns is 2(m + n + 2), which is generally far smaller than the number of measurements m × n. Consequently, Eq. (3) is overdetermined. We can solve for all Is(α), Id(β), ϕs(α), and ϕd(β) terms as well as the two unknown wave numbers in a least-squares sense, and then all measurements can be calibrated accordingly. The calibrated amplitude Aαβ = exp(−kiραβ)/ραβ² and phase ϕαβ = krραβ are shown in Fig. 4. As one can see, the calibrated amplitude (log ραβ²Aαβ) and phase from various source–detector pairs change linearly with distance.
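A minimal sketch of the amplitude part of this calibration, with synthetic gains, separations, and a made-up ki: because log(ραβ²Aαβ) is linear in the log gains and in ki, one least-squares solve recovers them (the individual gains only up to a common offset, which cancels in the calibration, while ki itself is fully identified).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 12, 8
log_Is = rng.normal(0, 0.2, m)           # unknown source gains (log scale, assumed)
log_Id = rng.normal(0, 0.2, n)           # unknown detector gains (assumed)
ki_true = 1.2                            # assumed imaginary wave number, 1/cm

rho = rng.uniform(1.5, 8.0, (m, n))      # source-detector separations, cm
# Synthetic measurements of log(rho^2 * A) following Eq. (3):
y = (log_Is[:, None] + log_Id[None, :] - ki_true * rho).ravel()

# Design matrix: indicator columns for each source and detector, -rho for ki.
A = np.zeros((m * n, m + n + 1))
for a in range(m):
    for b in range(n):
        row = a * n + b
        A[row, a] = 1.0                  # source gain term
        A[row, m + b] = 1.0              # detector gain term
        A[row, m + n] = -rho[a, b]       # -ki * rho term

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
ki_fit = coef[m + n]
print(round(ki_fit, 3))                  # recovers the assumed ki
```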
The ultrasound system diagram is shown in Fig. 5; the system consists of 64 parallel transmission and receiving channels. Each transmission circuit can deliver a high-voltage pulse of 200-ns duration (6 MHz) with 125 V peak to peak to the connected transducer. Each receiving circuit has a two-stage amplifier followed by an A/D converter with a 40-MHz sampling frequency. The amplifier gain can be controlled according to the target strength. A group of transmission channels can be addressed simultaneously to transmit pulses from neighboring transducers with specified delays, thereby focusing the transmission beam. The returned signals can be received simultaneously by a group of transducers and summed with specified delays to form a receiving beam.
The data-acquisition procedure is as follows. The first nine-element subarray (dashed rectangle in Fig. 6) of the 64-element transducer array and the corresponding channels are chosen, and the transmission delay profiles are generated in the computer according to the prespecified focal depth. The delay profile data are transferred to the 64-channel delay profile generator, which triggers the 64 high-voltage pulsers as well as the receiving channels. The returned ultrasound signals are amplified by two-stage amplifiers and sampled by A/D converters. The data are buffered in memory and read by the computer after the entire data acquisition is completed. The second subarray (solid rectangle in Fig. 6) is then chosen and the same data-acquisition process is repeated; a total of 64 subarrays is used. After the 64-subarray data acquisition is completed, the data stored in memory are read by the computer for image formation. The entire data acquisition and image display are performed in approximately 5 s, which is fast enough for clinical experiments. To ensure a good signal-to-noise ratio, we implemented all the electronics on printed circuit boards.
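A delay-and-sum sketch of the subarray focusing, with an assumed 3 × 3 subarray at 4-mm pitch, the 40-MHz sampling rate, and a synthetic point target at the focus; this illustrates the principle, not the actual beamformer:

```python
import numpy as np

c = 1.54e5                     # assumed speed of sound in tissue, cm/s
fs = 40e6                      # 40-MHz A/D sampling
pitch = 0.4                    # 4-mm element spacing, cm
elems = np.array([(x * pitch, y * pitch) for x in range(3) for y in range(3)])
focus = np.array([pitch, pitch, 2.5])   # focal point under the subarray center

dist = np.linalg.norm(np.c_[elems, np.zeros(len(elems))] - focus, axis=1)
delays = 2 * dist / c                   # round-trip time per element, s

# Synthesize echoes from a point target at the focus, then sum with the
# per-element delays removed; the nine pulses add coherently.
t = np.arange(0, 40e-6, 1 / fs)
traces = np.array([np.sinc((t - d) * 6e6) for d in delays])   # ~6-MHz pulses
shifts = np.round((delays - delays.min()) * fs).astype(int)
beam = sum(np.roll(tr, -s) for tr, s in zip(traces, shifts))
print(round(beam.max(), 1))             # close to 9x a single pulse's peak
```

The same delay profiles, applied on transmission, focus the outgoing beam at the chosen depth.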
Figure 7 shows the picture of the entire system and the combined probe. Both the NIR system (top) and the ultrasound system (bottom) are mounted on a hospital cart. The combined probe, which houses the ultrasound array and the NIR source–detector fibers, is designed to be hand held to scan patients.
We used 0.5–0.6% Intralipid solutions to mimic normal human breast tissue in all experiments; the corresponding reduced scattering coefficient μs′ ranges from 5 to 6 cm−1. The Intralipid is contained in a large fish tank to approximate a semi-infinite homogeneous phantom. Small semispherical balls (1 cm in diameter), made of acrylamide gel,22 are inserted into the Intralipid to emulate lesions embedded in a breast. The reduced scattering coefficients of the gel phantoms are similar to that of the background medium (μs′ ≈ 6 cm−1), and we varied the absorption coefficients by adding different concentrations of India ink to emulate high-contrast (μa = 0.25 cm−1) and low-contrast (μa = 0.1 cm−1) lesions. Ultrasound scattering particles 200 μm in diameter are added to the gel phantom before the gel sets.
A series of experiments was conducted to estimate the optimal hole size. Three probe configurations were investigated: (a) a no-hole probe, (b) a probe with a 2 cm × 2 cm central hole, and (c) a probe with a 3 cm × 3 cm central hole. The no-hole probe was essentially the same as case (b) except that an additional light source was added in the middle. Figure 8 shows reconstructed NIR images of on-center targets of high (μa = 0.25 cm−1, left column) and low contrast (μa = 0.1 cm−1, right column) located 2.5 cm deep inside the Intralipid. The fitted background μa and μs′ are 0.015 and 5.36 cm−1, respectively. With the target depth provided by ultrasound, we performed the reconstruction in the target layer. The centers of the voxels in this layer were (x, y, 2.5 cm), where x and y are discrete spatial coordinates, and the thickness of the layer was 1 cm. For the high-contrast target, there are no important differences in image quality between the probes [Figs. 8(a) and 8(c)] except for the 3 cm × 3 cm hole probe. The first row of Table 1 gives the measured maximum μa values from the corresponding images. Because of the low spatial resolution of diffusive imaging, the boundaries of the targets are not well defined, so the maximum value is a better estimate of the reconstructed target μa. From no hole to a 2 cm × 2 cm hole, the reconstructed maximum μa decreases slowly, but for the 3 cm × 3 cm hole the maximum μa drops abruptly to 0.104 cm−1, less than half of the original value. Another imaging parameter we measured is the full width at half-maximum (FWHM) of the corresponding images. Because the image lobes were elliptical in general, we measured the widths of the long and short axes and used their geometric mean to estimate the FWHM. The results are shown in Table 1; the FWHM increases with the hole size. We also measured the image artifact level, defined as the ratio of the peak artifact to the maximum strength of the image lobe, given in decibels.
The results are shown in Table 1. No artifacts were observed in the images from the no-hole and 2 cm × 2 cm hole probes, but a peak artifact level of −14.3 dB was measured in the image from the 3 cm × 3 cm hole probe. When the contrast was low, the reconstructed maximum absorption coefficients and measured FWHMs were essentially the same for the no-hole and 2 cm × 2 cm hole probes; however, the reconstructed maximum value dropped to 60% of the true value for the 3 cm × 3 cm probe. The artifact levels measured in the images of the three probe configurations were similar and were worse than in the high-contrast case. The image artifacts are related to the reconstruction algorithm: when the target contrast is weak or the signal-to-noise ratio is low, the inversion algorithm produces artifacts around the edges of the images.
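For concreteness, here is how these two metrics are computed, using made-up lobe widths and amplitudes (the 20 log10 amplitude-ratio convention is our assumption; the inputs are chosen to reproduce a level of the same order as the −14.3 dB quoted above):

```python
import math

w_long, w_short = 2.4, 1.6              # assumed half-maximum widths, cm
fwhm = math.sqrt(w_long * w_short)      # geometric mean of the two axes

peak_artifact, lobe_max = 0.02, 0.104   # assumed image amplitudes
artifact_db = 20 * math.log10(peak_artifact / lobe_max)
print(round(fwhm, 2), round(artifact_db, 1))   # 1.96 cm, -14.3 dB
```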
For shallow targets (here we set the target depth to 1.5 cm) the NIR system performs relatively poorly because fewer source–detector pairs sense the presence of a shallow absorber. As shown in Fig. 9, the image artifacts are clearly worse than in Fig. 8; however, the conclusion about the hole size of the probe remains the same. Table 2 lists all the measured imaging parameters obtained from the three probe configurations. Because a 3 cm × 3 cm hole is too large to yield acceptable results, the optimal hole size is in the neighborhood of 2 cm × 2 cm, which is approximately the size of commercial ultrasound transducers.
In the above studies, we used the iteration number obtained from the no-hole configuration for the other configurations. Ideally, the iteration should stop when the objective function [see Eq. (2)] or the error performance surface reaches the noise floor. However, system noise, particularly coherent noise, was difficult to estimate from the experimental data. In general, we found that the reconstructed values were closest to the true values when the objective function reached approximately 5–15% of its initial value (the total energy in the measurements). Therefore we used this criterion (~10% of the initial value) for the no-hole configuration. Because the signal-to-noise ratio of the data decreased as the hole size increased, we could not find a consistent criterion for both the no-hole and hole data. Therefore we used the iteration number obtained from the no-hole case for the hole configurations as well, so the comparison was based on the same iteration number.
Three-dimensional ultrasound images can provide 3-D distributions of targets. With a priori target depth information, the optical reconstruction can be improved significantly. An example is given in Fig. 10. The target again was a 1-cm-diameter gel ball of low (μa = 0.1 cm−1) optical contrast, embedded at approximately (0, 0, 2.5 cm) inside the Intralipid medium. The fitted background μa and μs′ are 0.02 and 5.08 cm−1, respectively. The combined probe shown in Fig. 2 was used to obtain the ultrasound and NIR data simultaneously. Figure 10(a) shows an A-scan line of the returned ultrasound echo received by one transducer located on top of the target. Because acoustic scatterers were uniformly distributed in the target, signals were reflected from inside the target as well as from its surfaces; the reflections from the front and back surfaces of the gel ball can be clearly identified in the echo signal. On the basis of the target depth, we reconstructed the optical absorption coefficient at the target depth only (1 cm in thickness) by setting the perturbations from the other depths equal to zero. We also performed a 3-D optical-only reconstruction. Figure 10(b) shows the reconstructed absorption image from the 3-D optical-only reconstruction [layer three, voxel coordinates (x, y, 2.5 cm), 1 cm thick], whereas Fig. 10(c) shows the reconstructed image of the corresponding target from the ultrasound-guided reconstruction. In the optical-only reconstruction, the algorithm did not converge to a localized spatial region, and the image contrast was poor. The measured maximum absorption coefficient was 0.088 cm−1, which was close to the true value; however, the measured spatial location of the maximum, (−1.6, −1.2 cm), was far from the true target location. With the a priori target depth, the reconstruction performed at the target layer localized the target to the correct spatial position.
The measured maximum absorption coefficient was 0.12 cm−1 and its location was (0, 0.4 cm), which was very close to the true target location. This example demonstrates that a priori target depth can significantly improve the reconstruction accuracy and target localization.
In addition to using a priori target depth information, we can also use the target spatial distribution provided by ultrasound to guide the reconstruction. We performed a set of experiments with two targets located at 2.5-cm depth inside the Intralipid. Each target was a 1-cm3 gel cube containing ultrasound scatterers. Both targets were either high contrast (μa = 0.25 cm−1) or low contrast (μa = 0.1 cm−1) and had the same reduced scattering coefficient as the background. The fitted background μa and μs′ are 0.017 and 4.90 cm−1, respectively. One target was centered approximately at (−1.0, −1.0, 2.5 cm) and the other at (1.0, 1.0, 2.5 cm); the distance between the centers of the two targets was 2.8 cm.
Figure 11(a) is the ultrasound image of the two high-contrast targets. Because the field of view of the ultrasound system was nearly a 3 cm × 3 cm square, the two targets appear at diagonal corners. The measured peak positions of the two targets were (−0.6, −1.0 cm) and (1.0, 1.0 cm), which differ from the true target locations by only one voxel. The low contrast of the ultrasound image is related to speckle noise; because our ultrasound array is sparse, the image quality is not state of the art (see more discussion in Section 5). The NIR image of these targets was obtained simultaneously and is shown in Fig. 11(b). We performed the reconstruction at the target layer by taking advantage of the target depth information; a total of 123 iterations was used to obtain Fig. 11(b). The measured peak positions of the two targets were (−1.4, −1.0 cm) and (0.6, 0.6 cm), which are one voxel off from the true target locations (−1.0, −1.0 cm) and (1.0, 1.0 cm), respectively. The corresponding reconstructed absorption coefficients were 0.242 and 0.251 cm−1, which are close to the true values. However, the two targets were almost connected to each other, and their spatial localization was poor. For the low-contrast targets, the ultrasound image is shown in Fig. 11(c); the measured peak locations of the two targets were (−1.0, −0.6 cm) and (0.6, 1.0 cm), which differ from the true target locations by only one voxel. The corresponding NIR image is shown in Fig. 11(d); the measured peak locations of the two targets were (−2.2, −1.0 cm) and (0.6, 1.0 cm), so the left target was off the true location by three voxels. The corresponding reconstructed absorption coefficients were 0.063 and 0.1004 cm−1 at 87 iteration steps. As one can see, the target shape and localization are poorer than in the high-contrast case, and an artifact appeared at the edge of the image.
From the coregistered ultrasound images, we obtained the spatial distributions of the two targets and specified the target regions. Figures 12(a) and 12(c) show the −6-dB contour plots of Figs. 11(a) and 11(c). Applying the same reconstruction scheme to these specified regions, we obtained Figs. 12(b) and 12(d) in one iteration. The reconstructed absorption coefficients were 0.2357 and 0.219 cm−1 for the high-contrast targets and 0.123 and 0.131 cm−1 for the low-contrast targets. The improvement is most pronounced in the low-contrast case, as can be seen by comparing Fig. 12(d) with Fig. 11(d). This example demonstrates that, when the targets are visible in ultrasound images, the morphology information provided by ultrasound can be used to guide the optical reconstruction within the specified regions.
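The region specification step can be sketched as a simple threshold at −6 dB of the image peak; the speckle-like image below is synthetic, and in Eq. (1) only the columns of W corresponding to the selected voxels would be kept:

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.uniform(0, 0.05, (22, 22))     # synthetic speckle-like background
img[4:7, 4:7] = 1.0                      # two synthetic target lobes
img[15:18, 15:18] = 0.9

threshold = img.max() * 10 ** (-6 / 20)  # -6 dB of the peak (amplitude ratio)
region = img >= threshold                # boolean target support
print(region.sum())                      # voxels kept as unknowns
```

With the unknowns confined to this support, the region-restricted system is overdetermined, which is what allows the one-iteration convergence reported above.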
The result regarding the iteration step is significant. As we discussed above, there is no known stopping criterion to terminate the iteration because it is difficult to estimate the noise level in the measurements. With the a priori target depth and spatial distribution provided by coregistered ultrasound, we can obtain an accurate optical absorption coefficient in one iteration. Therefore no stopping criterion is needed for the inversion algorithms. However, this result will need to be further evaluated with more samples of different contrasts.
Commercial ultrasound scanners use one-dimensional probes that provide 2-D images of x–z views of the targets, where x and z are the spatial and propagation dimensions, respectively. Such x–z images cannot be coregistered with NIR images, which are obtained from x–y views of the targets. Our current 2-D ultrasound array is capable of providing x–y views, which can be coregistered with NIR images. However, the array is sparse, so the image resolution is not state of the art. Nevertheless, its spatial resolution is comparable to that of NIR imaging and can be used to guide NIR image reconstruction. With 3-D ultrasound guidance, only one iteration is needed to obtain accurate absorption coefficients. This result is significant because no stopping criterion is necessary. More studies with a variety of target contrasts and locations will be performed to verify this result.
We purchased a 2-D state-of-the-art ultrasound array of 1280 transducer elements and we are building a multiplexing unit for our 64-channel electronics. In addition, the new 2-D transducer size is approximately 2 cm × 3 cm, which is in the neighborhood of the optimal hole size we found through this study. With the new 2-D ultrasound transducer, we will be able to obtain high-resolution ultrasound images and delineate the target boundaries with finer details for optical reconstruction.
Ultrasound contrast depends on lesion acoustic properties, and NIR optical contrast is related to lesion optical properties. Both contrasts exist in tumors, but the sensitivities of the two modalities may differ. It is possible that some early-stage cancers have NIR contrast but are not detectable by ultrasound; it would therefore be desirable to establish the sensitivity of optical imaging alone. However, light scattering remains the main obstacle to accurate and reliable lesion localization. It is also possible that some lesions have acoustic contrast but little or no NIR contrast. Currently, ultrasound is routinely used as an adjunct to x-ray mammography; the combined sensitivity of these two modalities in breast cancer detection is more than 90%.29 Recently, ultrasound has also been advocated for screening dense breasts.30 We anticipate that our combined imaging will add specificity to ultrasonically detected lesions.
In the reported phantom studies, we assigned zero perturbations to the regions where no targets were present. In clinical studies, we plan to segment the ultrasound images and specify different tissue types, as well as suspicious regions, in the segmented images. We will then reduce the number of optical unknowns by assigning one set of unknown optical properties to each tissue type and each suspicious region, and reconstruct this reduced set of unknowns. We expect more accurate estimates of the reconstructed optical properties and fast convergence, as reported in this paper. However, it is still too early to judge the clinical performance of the combined method; further clinical studies are needed.
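The unknown-reduction step described above can be sketched numerically. In this illustrative example, the voxel columns of a Born-type weight matrix are collapsed into one unknown per segmented region; the weight matrix, label map, sizes, and perturbation values are all stand-ins chosen for demonstration, not the paper's actual imaging operator.

```python
import numpy as np

# Hedged sketch of region-based unknown reduction: one absorption unknown
# per segmented tissue type or suspicious region.
rng = np.random.default_rng(0)
n_meas, n_vox = 96, 1000
W = rng.normal(size=(n_meas, n_vox))        # stand-in weight matrix
labels = rng.integers(0, 3, size=n_vox)     # stand-in segmentation, 3 regions

# Reduced matrix: column k sums W's columns over voxels labeled k, so a
# single delta_mu_a is reconstructed for each region.
K = int(labels.max()) + 1
W_red = np.stack([W[:, labels == k].sum(axis=1) for k in range(K)], axis=1)

# Forward-model noiseless data from known per-region perturbations, then
# recover them with a least-squares solve on the reduced system.
true_dmua = np.array([0.0, 0.05, 0.12])     # 1/cm, assumed values
y = W_red @ true_dmua
est, *_ = np.linalg.lstsq(W_red, y, rcond=None)
```

With the unknowns reduced from 1000 voxels to 3 regions, the system is heavily overdetermined and the per-region values are recovered in a single solve.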
The probing regions of the banana-shaped diffusive photon paths depend on source–detector separation and measurement geometry. For a semi-infinite geometry, the probing regions extend deeper into the medium as the source–detector separation increases. This is why we deploy multiple source–detector pairs of various separations: to detect targets at depths from 0.5 to 4 cm. Of course, it is difficult to achieve uniform sensitivity over the entire region of interest. For example, a superficial target (~1 cm deep) produces strong perturbations when it is close to a source or a detector but much weaker signals when it is located deeper. Normalizing the scattered photon-density waves with respect to the incident waves enables the reconstruction algorithms to handle the large dynamic range of the signals and to detect targets as deep as 4 cm. This normalization procedure was applied in the reconstruction algorithm used to obtain the reported images.
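The normalization described above can be illustrated with a minimal numeric sketch: each first-order Born perturbation is divided by the incident wave for the same source–detector pair. An infinite-medium DC diffusion Green's function is assumed here for simplicity (the paper's geometry is semi-infinite), and the optical values, target depth, and separations are illustrative assumptions.

```python
import numpy as np

mua, musp = 0.02, 8.0                   # assumed background coefficients, 1/cm
D = 1.0 / (3.0 * (mua + musp))          # diffusion coefficient, cm
k = np.sqrt(mua / D)                    # effective decay constant, 1/cm

def green(r):
    """DC photon-density Green's function of a point source, infinite medium."""
    return np.exp(-k * r) / (4.0 * np.pi * D * r)

dmua, vol = 0.1, 0.1 ** 3               # absorbing voxel: strength and volume
target = np.array([0.0, 0.0, 2.0])      # target 2 cm below the probe plane

ratios = []
for sep in (1.0, 2.0, 3.0, 4.0):        # source-detector separations, cm
    src = np.array([-sep / 2, 0.0, 0.0])
    det = np.array([+sep / 2, 0.0, 0.0])
    u_inc = green(np.linalg.norm(det - src))
    # first Born perturbation from the single absorbing voxel
    u_sc = -(dmua / D) * vol * green(np.linalg.norm(target - src)) \
           * green(np.linalg.norm(det - target))
    ratios.append(u_sc / u_inc)         # normalized datum fed to the inversion
```

The normalized perturbation grows in magnitude with separation for this 2-cm-deep target, consistent with larger separations probing deeper regions, while the raw signals themselves span a far larger dynamic range.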
In this study, the target absorption coefficient was reconstructed from the measurements. Because the target μs′ was similar to the background μs′, the coupling between μa and μs′ in our measurements was negligible. We also performed experiments with a gel phantom made of Intralipid at a concentration similar to that of the background and did not observe perturbations beyond the noise level. Similar reconstruction studies can be performed for scattering coefficients, and simultaneous reconstruction of both absorption and scattering coefficients is also possible. Because the eigenvalues of the absorption and scattering weight matrices differ significantly, good regularization schemes are needed for simultaneous reconstruction. This is one of our topics for further study.
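One common way to handle the eigenvalue disparity just mentioned is block-wise Tikhonov regularization, sketched below under stated assumptions: `W_a` and `W_s` are random stand-in weight matrices whose scales differ deliberately to mimic the disparity, and the sizes and regularization weights are illustrative, not values from the paper.

```python
import numpy as np

# Hedged sketch: joint inversion of absorption and scattering perturbations
# with separate Tikhonov weights for the two parameter blocks.
rng = np.random.default_rng(1)
n_meas, n_vox = 60, 20
W_a = rng.normal(size=(n_meas, n_vox))          # absorption sensitivities
W_s = 1e-2 * rng.normal(size=(n_meas, n_vox))   # far weaker scattering sensitivities
W = np.hstack([W_a, W_s])

x_true = np.concatenate([0.1 * rng.random(n_vox),    # delta_mu_a
                         0.5 * rng.random(n_vox)])   # delta_mu_s'
y = W @ x_true                                       # noiseless synthetic data

# Block-wise regularization weights compensate the scale mismatch between
# the two parameter classes (values illustrative).
lam = np.concatenate([np.full(n_vox, 1e-3), np.full(n_vox, 1e-7)])
x_hat = np.linalg.solve(W.T @ W + np.diag(lam), W.T @ y)
```

Using a much smaller weight for the weakly sensed scattering block keeps that block from being over-damped, while the absorption block tolerates a larger weight; with a single shared weight, one block is inevitably under- or over-regularized.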
We have constructed a near-real-time imager that provides coregistered ultrasound and NIR images simultaneously. This new technique is designed to improve the specificity of breast cancer diagnosis. Because the ultrasound transducer must occupy the central region of the combined probe, a series of experiments was conducted to investigate the effects of missing optical sensors in the middle of the probe on NIR image quality. Our results have shown that, as long as the central ultrasound transducer area is in the neighborhood of 2 cm × 2 cm, reconstruction results essentially similar to those obtained with no missing optical sensors can be achieved. This 2 cm × 2 cm dimension is approximately the size of most commercial ultrasound phased-array transducers. When the central missing-sensor area is increased to 3 cm × 3 cm, however, the reconstructed values are noticeably lower than the true values, and if the number of iterations is increased, artifacts in the reconstructed images soon become dominant.
With the target 3-D distribution provided by coregistered ultrasound, significant improvements in algorithm convergence and reconstruction speed were achieved. In general, a priori target depth information guides the inversion algorithm to reconstruct the heterogeneities at the correct spatial locations and improves the reconstruction speed by an order of magnitude. In addition, a priori knowledge of the target spatial distribution can reduce the reconstruction to a single iteration while still yielding accurate optical absorption coefficients. Because no reliable stopping criterion is available for such iterative algorithms, this result is significant: the inversion can stop after one step. However, this result will need to be evaluated with more samples of different contrasts.
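The single-iteration behavior can be illustrated as follows: when coregistered ultrasound marks which voxels contain the target, the unknowns are restricted to those voxels and one linear solve recovers the absorption perturbation. The weight matrix, region, and values below are stand-ins for demonstration, not the paper's actual data.

```python
import numpy as np

# Illustrative one-step inversion restricted to an a priori target region.
rng = np.random.default_rng(2)
n_meas, n_vox = 80, 500
W = rng.normal(size=(n_meas, n_vox))    # stand-in Born weight matrix

inside = np.zeros(n_vox, dtype=bool)
inside[200:230] = True                  # target voxels delineated by ultrasound

x_true = np.zeros(n_vox)
x_true[inside] = 0.1                    # true delta_mu_a, 1/cm (assumed)
y = W @ x_true                          # noiseless synthetic measurements

# One step: solve only over the ultrasound-delineated region (30 unknowns
# instead of 500), so no further iterations or stopping rule are required.
x_hat = np.zeros(n_vox)
x_hat[inside], *_ = np.linalg.lstsq(W[:, inside], y, rcond=None)
```

Restricting the solve to the delineated region makes the linear system overdetermined and well posed, which is the mechanism behind the one-iteration convergence reported above.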
We acknowledge the following for their funding support: the state of Connecticut (99CT21), U.S. Department of Defense Army Breast Cancer Program (DAMD17-00-1-0217, DAMD17-01-1-0216), and Multiple-Dimensional Technology, Inc.