Figure 1 shows the optoelectronic holographic otoscope (OEHO) system being developed for full field-of-view measurement of nanometer-scale displacements of TMs excited by sound in live human ears. Our developments are based on optoelectronic methodologies that make use of miniaturized components in an otoscope-like configuration. The system consists of a high-speed image-processing computer (IP) with advanced control software, a fiber optic subsystem (FS), and an otoscope head subsystem (OH). Integrated into the otoscope head are a sound source (SS) and a microphone (CM) that generate and measure sound stimuli under control of the IP.
Figure 1 Schematics of the OEHO system: the TM is the sample under investigation; OH is the otoscope head containing an interferometer and imaging optics; FS is the fiber optic subsystem containing a laser, beam splitting optics, and laser-to-fiber components.
The OEHO system is capable of operating in time-averaged and stroboscopic modes. The time-averaged mode is used for rapid, real-time identification of the resonant frequencies of the vibrating samples and their corresponding mode shapes [26]. The stroboscopic mode is used to quantify the magnitude and phase of the motion of the TM's entire surface at a nanometer scale [28]. Both operating modes require the collection of four optically phase-stepped interference images in order to calculate the reconstructed holographic image [30].
A more detailed schematic of the OEHO and acoustic system is shown in Fig. 2. The laser beam is provided by a solid-state laser diode (LD) with an operational wavelength of 473 nm and a power of 15 mW. The output of the laser is directed through an acousto-optic modulator (AOM) and then through a beam splitter (BS), which splits the light into a reference beam (RB) and an object illumination beam (OB) with an 80:20 power ratio. The OB is reflected by a mirror (M) toward its laser-to-fiber coupler assembly (FA). The RB is directed to its laser-to-fiber coupler via a mirror mounted on a piezoelectric modulator (MP), which generates the four steps in optical phase required for holographic reconstruction; the steps are achieved by varying the optical path length of the reference beam, moving the piezoelectric modulator by fractions of the laser wavelength. Both beams are coupled into single-mode fibers using laser-to-fiber coupler assemblies. The frequency generator (FG), operated by the IP, supplies the electrical input to the sound source through a digital-to-analog converter, while an analog-to-digital converter measures the signal produced by a microphone near the surface of the TM; in stroboscopic mode, the FG also provides the timing input to the acousto-optic modulator driver (DD). Activation of the acousto-optic modulator driver by short electrical trigger pulses that are phase-locked to the acoustic stimulus results in short stroboscopic pulses of laser light (1/10 of the acoustic stimulus period) at selected phases of the acoustic stimulus.
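The strobe timing described above can be sketched numerically. This is only an illustration of the arithmetic: the real pulses are generated in hardware by the FG and the AOM driver (DD), and the function name, parameter names, and the choice of eight stimulus phases are hypothetical (the text specifies only the 1/10-period pulse width).

```python
# Illustrative timing for stroboscopic illumination. The real pulse
# generation happens in hardware (FG + DD); all names here are
# hypothetical, and n_phases=8 is an assumed number of stimulus phases.

def strobe_timing(stim_freq_hz, n_phases=8, duty=0.1):
    """Return (pulse_width_s, pulse_delays_s) for one stimulus period.

    duty=0.1 gives the 1/10-of-a-period pulse width quoted in the text;
    each delay phase-locks the laser pulse to a different stimulus phase.
    """
    period = 1.0 / stim_freq_hz
    pulse_width = duty * period
    delays = [k * period / n_phases for k in range(n_phases)]
    return pulse_width, delays

width, delays = strobe_timing(1000.0)   # 1 kHz tone: 100 us pulses
```

For a 1 kHz stimulus this gives 100 µs illumination pulses stepped through the stimulus period, which is the phase-locked sampling the stroboscopic mode relies on.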
Figure 2 Experimental setup of the OEHO system. A detailed sketch of the OH is given in Fig. 4.
The design of the optomechanical system was carried out with commercially available software [31], taking into account the TM anatomy and the results of preliminary holographic measurements of TM displacements in several animal species [32]. The design of the otoscope head (OH) fulfilled several requirements. First, a field of view (FOV) of 10 by 10 mm was needed (the human TM diameter is around 8 to 10 mm), and the image of the TM should cover the entire 800 × 800 pixel region of interest (ROI) of the CCD sensor, whose pixel size is 6.5 µm; this choice of field of view and imaging region of interest defines an optical magnification (Mx) of 0.5X. A working distance (WD) of 90 mm was selected because of constraints on the size of the sound source, microphone, and beam splitter. The focal length was calculated from the WD and Mx to be 30 mm. An aperture was incorporated into the design to increase the depth of field (DOF) to about 5 mm, which was needed to keep the whole TM, with its 3-D shape and orientation in the ear canal, in focus. The imaging performance was evaluated through the modulation transfer function (MTF) using a USAF target [33]. Figure 3(a) shows the MTF obtained experimentally by measuring the contrast in the image of the bar target at various spatial frequencies, and Fig. 3(b) shows the theoretical MTF determined with the OSLO software. The resolution of the imaging optics was also evaluated with the USAF test target, resulting in 57 lp/mm and a fringe visibility of 0.82, obtained with a beam ratio of 1.2 [34].
Figure 3 MTF: (a) measured directly from the USAF target; and (b) evaluated from the OSLO software.
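The design numbers quoted above (ROI, pixel size, FOV, working distance, magnification, focal length) are mutually consistent, as a simple thin-lens check shows. The sketch below idealizes the multi-element imaging system as a single thin lens and treats the 90 mm working distance as the object distance, which is an assumption about the layout rather than the actual optical prescription.

```python
# Thin-lens check of the quoted design numbers. Idealization: the
# imaging system is a single thin lens and the 90 mm working distance
# is the object distance.

roi_pixels = 800          # region of interest, pixels per side
pixel_um = 6.5            # CCD pixel size, micrometers
fov_mm = 10.0             # desired field of view, millimeters

sensor_mm = roi_pixels * pixel_um / 1000.0   # ROI side on sensor: 5.2 mm
mag = sensor_mm / fov_mm                     # 0.52, quoted as 0.5X

wd_mm = 90.0                                 # working (object) distance
m = 0.5
image_mm = m * wd_mm                         # image distance: 45 mm
focal_mm = wd_mm * image_mm / (wd_mm + image_mm)   # Gaussian lens eq.: 30 mm
```

The 5.2 mm ROI side divided by the 10 mm FOV reproduces the quoted 0.5X magnification, and the Gaussian lens equation with a 90 mm object distance reproduces the 30 mm focal length.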
Figure 4(a) shows details of the otoscope head subsystem, OH. The output of the object beam, OB, illuminates the sample of interest while the imaging system (IS) collects the wavefronts scattered by the surface of the sample (the TM). The image collected by the IS is combined with the reference beam, RB, by means of the imaging beam splitter (BSI) and then directed onto the CCD camera detector. The IS includes an achromatic lens and an aperture; the aperture controls both the light entering the imaging system and the depth of field. The object illumination beam is directed at the test object via a speculum. The speculum is coupled to a sound source, SS, for sample excitation and to a compact microphone, CM, to monitor the stimulus sound pressure at the TM. An angled glass window at the back of the speculum isolates the sound stimulus within the speculum, allowing larger stimulus sound pressures at lower stimulus frequencies. The sound source is driven by a computer-controlled frequency generator, and the microphone output is monitored by an analog-to-digital converter controlled by the computer system. Figure 4(b) shows a photograph of a prototype of the otoscope head, with dimensions of 100 × 60 × 120 mm³.
Figure 4 Optical head subsystem: (a) schematic depicting the major components; and (b) the built subsystem.
2.1 Control software and principle of measurement
Specialized software is necessary for image acquisition, control of the optoelectronic devices in the fiber optic subsystem (FS) and otoscope head (OH), processing of the acquired images, and display of the processed images.
The control software interfaces with cameras having a digital dynamic range of 10/12 bits and a spatial resolution of 1.3 megapixels, and is capable of acquiring and processing images at rates as fast as 40 frames per second.
Optical phase measurements based on phase-stepping algorithms are implemented in this software [35]. The CCD camera acquires four successive interference images in which the optical path length (or optical phase) of the reference beam is stepped in quarter-wavelength increments, from zero to three-quarters of a wavelength. If the frame integration period of the camera is much longer than the period of the object vibration, the intensity of the n-th phase-stepped frame may be expressed as

In(x,y) = Io(x,y) + Ir(x,y) + 2√(Io Ir) cos(Δφo−r + nπ/2) J0(4π dz(x,y)/λ),  n = 0, 1, 2, 3,

where (x,y) refer to the spatial coordinates of the pixels making up the camera image, Io is the object beam intensity, Ir is the reference beam intensity, Δφo−r represents the random phase between the reference beam and the object beam produced by interaction of the beam with the measured object, J0 is the zero-order Bessel function, dz(x,y) is the out-of-plane vibration displacement of the object [36], and λ is the wavelength of the laser light.
In the time-averaged mode the four phase-stepped data frames I0 to I3 are used to calculate and display the modulation intensity, IM:

IM(x,y) = √[(I0 − I2)² + (I1 − I3)²] = 4√(Io Ir) |J0(4π dz(x,y)/λ)|.
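The time-averaged calculation can be sketched for a single pixel. The snippet below follows the equations above with synthetic values; the helper names are illustrative and this is not the OEHO control software itself (it evaluates J0 by its power series rather than a library call).

```python
import math

# Single-pixel sketch of the four-step, time-averaged model above.
# Synthetic values; names are illustrative, not the system's code.

def bessel_j0(x, terms=20):
    """Zero-order Bessel function J0 via its power series."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= -((x / 2.0) ** 2) / ((k + 1) ** 2)
    return s

def frame(n, Io, Ir, dphi, dz, lam):
    """Time-averaged intensity In for a reference-phase step of n*pi/2."""
    return (Io + Ir
            + 2.0 * math.sqrt(Io * Ir)
            * math.cos(dphi + n * math.pi / 2.0)
            * bessel_j0(4.0 * math.pi * dz / lam))

def modulation_intensity(I):
    """IM = sqrt((I0 - I2)^2 + (I1 - I3)^2)."""
    return math.sqrt((I[0] - I[2]) ** 2 + (I[1] - I[3]) ** 2)

# At a node (dz = 0), J0 = 1 and IM reaches its maximum 4*sqrt(Io*Ir).
Io, Ir, lam = 1.0, 1.2, 473e-9
I = [frame(n, Io, Ir, dphi=0.7, dz=0.0, lam=lam) for n in range(4)]
IM = modulation_intensity(I)   # -> 4*sqrt(1.2)
```

Because IM is proportional to |J0(4π dz/λ)|, stationary regions appear bright and the dark fringes trace contours of equal vibration amplitude, which is what makes this mode useful for rapid identification of mode shapes.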
The stroboscopic mode consists of acquiring and processing two sets of phase-stepped images, I0 to I3, recorded at acoustic stimulus phase 1, ψ1, and acoustic stimulus phase 2, ψ2, respectively. The resultant modulation intensity at each camera pixel, IM, is calculated as

IM = √[(I0 − I2)² + (I1 − I3)²].

The phase difference, Δφ, in the stroboscopic mode is calculated between acoustic stimulus phases ψ1 and ψ2; it is given as

Δφ = tan⁻¹{[(I3 − I1)ψ2 (I0 − I2)ψ1 − (I0 − I2)ψ2 (I3 − I1)ψ1] / [(I0 − I2)ψ2 (I0 − I2)ψ1 + (I3 − I1)ψ2 (I3 − I1)ψ1]}.

For display, the computed values are scaled to the grayscale range of the camera, from zero to Imax, where Imax is the maximum grayscale value, equivalent to 2ⁿ − 1, and n is the bit depth of the camera. The arguments (x,y) are omitted for simplification.
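The stroboscopic processing can be sketched for a single pixel. The four-step arctangent and the rewrapping below are standard forms consistent with the equations above, and the grayscale mapping via Imax = 2ⁿ − 1 follows the text; the helper names and the exact display mapping are assumptions, not the published implementation.

```python
import math

# Single-pixel sketch of the stroboscopic calculation above. Helper
# names and the display mapping are illustrative assumptions.

def optical_phase(I):
    """Four-step phase: phi = atan2(I3 - I1, I0 - I2)."""
    return math.atan2(I[3] - I[1], I[0] - I[2])

def phase_difference(I_psi1, I_psi2):
    """Wrapped phase difference between the two stimulus phases."""
    d = optical_phase(I_psi2) - optical_phase(I_psi1)
    return math.atan2(math.sin(d), math.cos(d))   # rewrap into (-pi, pi]

def to_grayscale(dphi, bit_depth=12):
    """Map a wrapped phase onto 0..Imax with Imax = 2**bit_depth - 1."""
    imax = 2 ** bit_depth - 1
    return round((dphi + math.pi) / (2.0 * math.pi) * imax)

# Synthetic frame sets with optical phases 0.3 rad (psi1), 1.0 rad (psi2):
I_psi1 = [1.0 + math.cos(0.3 + n * math.pi / 2.0) for n in range(4)]
I_psi2 = [1.0 + math.cos(1.0 + n * math.pi / 2.0) for n in range(4)]
dphi = phase_difference(I_psi1, I_psi2)   # recovers 1.0 - 0.3 = 0.7 rad
```

Computing the two phases separately and subtracting is algebraically equivalent to the single arctangent of products given above; the latter avoids intermediate wrapping artifacts, which is why it is the form usually quoted.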
To acquire a phase-stepped image, the IP uses a data acquisition board to output a varying analog voltage that is synchronized with each frame captured by the camera. The output voltage is amplified to drive the piezoelectric modulator and hence to accurately perform the required phase step.
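Since the phase steps are produced by moving a mirror, a quarter-wavelength step in optical path requires only an eighth-wavelength of mirror travel, because reflection doubles the path change. A back-of-envelope sketch of the required displacements (the names are illustrative, and a real piezo stage is calibrated against measured fringes rather than computed this way):

```python
# Mirror travel for the four quarter-wavelength phase steps. Reflection
# doubles the optical-path change, so a path step of lambda/4 needs only
# lambda/8 of mirror motion. Names are illustrative.

LAMBDA_NM = 473.0   # laser wavelength, nanometers

def mirror_steps_nm(wavelength_nm=LAMBDA_NM):
    """Mirror displacements giving path steps of 0, L/4, L/2, 3L/4."""
    return [n * wavelength_nm / 8.0 for n in range(4)]

steps = mirror_steps_nm()   # [0.0, 59.125, 118.25, 177.375] nm
```

At 473 nm the steps are roughly 59 nm apart, which illustrates why a piezoelectric actuator, with its sub-nanometer resolution, is the natural choice for this modulator.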
To maximize the speed of image acquisition, processing, and display, separate tasks run in individual threads that use shared memory to communicate with one another. The image acquisition thread captures successive images at the maximum rate allowed by the camera and computer, storing them in a circular queue; a record is kept of the index of the most recently captured image. Meanwhile, the display thread uses the four most recently captured images to compose the processed image to be displayed.
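This producer/consumer scheme can be sketched as follows, assuming a simple lock-protected ring buffer; `grab_frame` is a hypothetical stand-in for the camera interface, and the queue length is an arbitrary choice (the text does not specify these details).

```python
import threading

# Sketch of the acquisition/display threading described above: a
# lock-protected circular queue in shared memory plus a record of the
# most recent index. `grab_frame` stands in for the real camera API.

QUEUE_LEN = 16

class FrameStore:
    def __init__(self):
        self.frames = [None] * QUEUE_LEN   # circular queue of images
        self.latest = -1                   # index of most recent capture
        self.lock = threading.Lock()

    def push(self, frame):
        with self.lock:
            self.latest += 1
            self.frames[self.latest % QUEUE_LEN] = frame

    def last_four(self):
        """The four most recently captured frames, newest first."""
        with self.lock:
            if self.latest < 3:
                return None
            return [self.frames[(self.latest - k) % QUEUE_LEN]
                    for k in range(4)]

def grab_frame(n):
    return n   # stand-in: a real camera call would return image data

store = FrameStore()

def acquire(n_frames):
    for n in range(n_frames):
        store.push(grab_frame(n))

t = threading.Thread(target=acquire, args=(100,))
t.start()
t.join()
recent = store.last_four()   # frames 99, 98, 97, 96
```

Keeping the queue circular bounds memory while letting the display thread lag behind acquisition without ever blocking the camera, which is what allows the system to display processed holograms at the full camera frame rate.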