Current methodologies for characterizing tympanic membrane (TM) motion are usually limited to either average acoustic estimates (admittance or reflectance) or single-point mobility measurements, neither of which suffices to characterize the detailed mechanical response of the TM to sound. Furthermore, while acoustic and single-point measurements may aid in diagnosing some middle-ear disorders, they are not always useful. Measurements of the motion of the entire TM surface can provide more information than these other techniques and may be superior for diagnosing pathology. This paper presents advances in our development of a new compact optoelectronic holographic otoscope (OEHO) system for full-field-of-view characterization of nanometer-scale sound-induced displacements of the surface of the TM at video rates. The OEHO system consists of a fiber optic subsystem, a compact otoscope head, and a high-speed image-processing computer with advanced software for recording and processing holographic images, coupled to a computer-controlled sound-stimulation and recording system. A prototype OEHO system is in use in a medical-research environment to address basic-science questions regarding TM function. The prototype provides real-time observation of sound-induced TM displacement patterns over a broad frequency range. Representative time-averaged and stroboscopic holographic interferometry results in animals and cadaveric human samples are shown, and their potential utility is discussed.
The tympanic membrane (TM) is the most peripheral structure in the middle ear and is the first ear structure set in motion by airborne sound. Motion of the TM is transduced by the middle-ear system into sound in the cochlear fluids, which then excites the cochlear partition in the inner ear. While measurements of TM motion are not direct measures of hearing sensitivity (which also depends on the function of the inner ear), a TM altered by trauma or middle-ear disease will reduce the acoustic-mechanical input to the inner ear, thereby causing a ‘conductive’ hearing loss. One approach to determining the degree of such a loss in sound conduction to the inner ear is to measure the deformation of the TM under various acoustic stimuli. Most present-day middle-ear diagnostic procedures are based on acoustic measurements that sense the mobility of the entire TM, e.g. multi- or single-frequency tympanometry [1–2], ear-canal reflectance or power absorption [3–5], and static-pressure-induced variations in sound pressure. Single-point laser vibrometry measurements of the mobility of the umbo in the TM have also been used as diagnostic aids in clinical settings [7–8].
All of these clinical measurements have weaknesses. The acoustic measurements depend on the sound pressure at the TM and represent the average mobility of the entire TM. The single-point laser vibrometry measurements are much more localized, which may lead to superior ability to distinguish ossicular disorders, but are relatively insensitive to TM disorders at locations other than the umbo. Full-field-of-view measurements of TM mobility may be superior to either of the present techniques in that they quantify function along the entire surface of the TM. We are investigating optoelectronic holography in this regard, since it has been successfully used in many applications and environments.
Holographic methodologies have been used in the past to enable full-field-of-view measurements of the vibration patterns of the surface of TMs. Time-averaged holographic interferometry has been used to study TM vibrations in both laboratory animals and human cadavers [11–12]. Løkberg et al. used electronic speckle pattern interferometry (ESPI) to measure vibrations of the human TM in vivo. Another study examined the complicated vibration patterns of the TM in cadaveric guinea pigs at frequencies up to 4 kHz, measured using time-averaged speckle patterns. The shape and displacement patterns of the cat TM, with both a normally mobile malleus and an immobilized malleus, have been measured by Moiré holography. Holography and ESPI studies using endoscopes have also been applied to the diagnosis of TMs [16–17]. Aside from this research work, different models have been proposed for the assessment, mechanical characterization, and dynamic behavior of TMs [18–25]. Though holography has proven to be useful in the investigation of TM mechanics, its application has generally been limited to laboratory studies. In this paper, we describe our advances in the design, fabrication, characterization and use of a compact, stable, flexible OEHO system, with a prototype system currently in use in a medical research environment. The system is capable of providing full-field-of-view measurements of sound-driven displacements of the surface of the TM at video rates.
Figure 1 shows the optoelectronic holographic otoscope (OEHO) system being developed for full-field-of-view measurement of nanometer scale displacements of TMs excited by sound in live human ears. Our developments are based on optoelectronic methodologies that make use of miniaturized components in an otoscope-like configuration. The system consists of a high-speed image-processing computer (IP) with advanced control software, a fiber optic subsystem (FS) and an otoscope head subsystem (OH). Integrated into the otoscope head is a sound source (SS) and a microphone (CM) to generate and measure sound stimuli under control of the IP.
The OEHO system is capable of operating in time-averaged and stroboscopic modes. The time-averaged mode is used for rapid, real-time identification of resonant frequencies of the vibrating samples and their corresponding mode shapes [26–27]. The stroboscopic mode is used to quantify the magnitude and phase of motion across the TM’s entire surface at a nanometer scale [28–29]. Both operating modes require the collection of four optically phase-stepped interference images in order to calculate the reconstructed holographic image.
A more detailed schematic of the OEHO and acoustic system is shown in Figure 2. The laser beam is provided by a solid-state laser diode (LD) with an operational wavelength of 473 nm and a power of 15 mW. The output of the laser is directed through an acousto-optic modulator (AOM) and then through a beam splitter (BS), which splits the light into a reference beam (RB) and an object illumination beam (OB) with an 80:20 power ratio. A mirror (M) directs the OB to its laser-to-fiber coupler assembly (FA). The RB is directed to its laser-to-fiber coupler via a mirror mounted on a piezoelectric modulator (MP), which generates the four steps in optical phase required for holographic reconstruction. The steps in optical phase are achieved by varying the optical path length of the reference beam, moving the piezoelectric modulator by fractions of the laser wavelength. Both beams are coupled into single-mode fibers using laser-to-fiber coupler assemblies. The frequency generator (FG), operated by the IP, supplies the electrical input to the sound source via a digital-to-analog converter, and uses an analog-to-digital converter to measure the signal produced by a microphone near the surface of the TM; in stroboscopic mode, it also provides the timing input to the acousto-optic modulator driver (DD). Activation of the acousto-optic modulator driver by short electrical trigger pulses that are phase-locked to the acoustic stimulus results in short stroboscopic pulses (1/10 of the acoustic stimulus period) of laser light during varied phases of the acoustic stimulus.
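The strobe timing described above can be sketched as a small calculation; the helper below is an illustrative assumption of ours, not part of the OEHO control code, but it reproduces the stated 1/10-of-period pulse width for a pulse centered on a requested stimulus phase.

```python
def strobe_schedule(f_stim_hz, stim_phase_deg, duty=0.10):
    """Onset time (s) and width (s) of a strobe pulse within one
    stimulus period, centered on the requested stimulus phase.
    duty = 0.10 corresponds to pulses 1/10 of the stimulus period."""
    period = 1.0 / f_stim_hz
    width = duty * period
    center = (stim_phase_deg % 360.0) / 360.0 * period
    onset = (center - width / 2.0) % period
    return onset, width

# e.g. a 1-kHz tone strobed at 90 degrees: 100-us pulse starting 200 us
# into the stimulus cycle
onset, width = strobe_schedule(1000.0, 90.0)
```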
The design of the optomechanical system was carried out with commercially available software, taking into account TM anatomy and the results of preliminary holographic measurements of TM displacements in several animal species. The design of the otoscope head (OH) fulfilled several requirements. First, we needed a field of view (FOV) of 10 by 10 mm (the human TM diameter is around 8 to 10 mm), and the image of the TM should cover the entire 800 × 800 pixel region of interest (ROI) of the CCD sensor, which has a pixel size of 6.5 µm; the choice of field of view and imaging region of interest defines the optical magnification (Mx) of 0.5X. We selected a working distance (WD) of 90 mm because of constraints on the size of the sound source, microphone, and beam splitter. The focal length was calculated from the WD and Mx to be 30 mm. An aperture was incorporated into the design to increase the depth of field (DOF) to about 5 mm, which was needed to keep the whole TM, with its 3-D shape and orientation in the ear canal, in focus. The imaging performance was evaluated through the modulation transfer function (MTF) using a USAF target. Figure 3a shows the MTF obtained experimentally by measuring the contrast in the image of the bar target at various spatial frequencies, and Figure 3b shows the theoretical MTF determined by OSLO software. The resolution of the imaging optics was also evaluated with the USAF test target, yielding 57 lp/mm and a fringe visibility of 0.82, obtained with a beam ratio of 1.2.
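The first-order design arithmetic above can be checked with a short sketch. This assumes a simple thin-lens model (image distance = WD × Mx), which is our simplifying assumption rather than the OSLO design itself; with the stated parameters it reproduces the 0.5X magnification and 30 mm focal length.

```python
def magnification(roi_pixels, pixel_size_mm, fov_mm):
    """Optical magnification = sensor ROI extent / field of view."""
    return roi_pixels * pixel_size_mm / fov_mm

def focal_length(working_distance_mm, mag):
    """Thin-lens focal length: with object distance WD and image
    distance WD*mag, 1/f = 1/WD + 1/(WD*mag)."""
    return working_distance_mm * mag / (1.0 + mag)

mx = magnification(800, 0.0065, 10.0)  # 800 px * 6.5 um over a 10 mm FOV
f = focal_length(90.0, 0.5)            # 90 mm WD at 0.5X -> 30 mm
```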
Figure 4a shows details of the otoscope head subsystem, OH. The output of the object beam, OB, illuminates the sample of interest while the imaging system (IS) collects the object-beam wavefronts scattered from the surface of the sample (the TM). The image collected by the IS is combined with the reference beam, RB, by means of the imaging beam splitter (BSI) and then directed onto the CCD camera detector. The IS includes an achromatic lens and an aperture. The aperture controls both the light entering the imaging system and the depth of field. The object illumination beam is directed at the test object via a speculum. The speculum is coupled to a sound source, SS, for sample excitation and a compact microphone, CM, to monitor the stimulus sound pressure at the TM. An angled glass window at the back of the speculum isolates the sound stimulus within the speculum, allowing larger stimulus sound pressures at lower stimulus frequencies. The sound source is driven by a computer-controlled frequency generator, and the microphone output is monitored by an analog-to-digital converter controlled by the computer system. Figure 4b shows a photograph of a prototype of the otoscope head with dimensions of 100 × 60 × 120 mm.
Specialized software is necessary for image acquisition, control of the optoelectronic devices in the fiber optic subsystem (FS) and otoscope head (OH), processing of the acquired images, and display of the processed images.
The controlling software interfaces with cameras having digital dynamic ranges of 10/12 bits and spatial resolutions of 1.3 megapixels, and is capable of acquiring and processing images at rates as fast as 40 frames per second.
Optical phase measurements based on phase-stepping algorithms are implemented in this software. The CCD camera acquires four successive interference images in which the optical path length (and hence optical phase) of the reference beam is stepped by zero to three quarter-wavelengths. If the frame integration period of the camera is much longer than the period of the object vibration, the intensity of the n-th image may be expressed as

In(x,y) = Io + Ir + 2√(IoIr) J0[(4π/λ) dz(x,y)] cos(Δφo−r + nπ/2),  n = 0, 1, 2, 3,  (1)

where x and y refer to the spatial coordinates of the pixels making up the camera image, Io is the object-beam intensity, Ir is the reference-beam intensity, Δφo−r represents the random phase difference between the reference beam and the object beam produced by interaction of the object beam with the measured surface, J0 is the zero-order Bessel function of the first kind, dz(x,y) is the out-of-plane vibration displacement of the object, and λ is the wavelength of the laser light. The quantity 4√(IoIr)|J0[(4π/λ) dz(x,y)]| is the modulation intensity, IM.
In the time-averaged mode, the four phase-stepped data frames I0, I1, I2, I3 are used to calculate and display the modulation intensity, IM:

IM = √[(I0 − I2)² + (I1 − I3)²] = 4√(IoIr) |J0[(4π/λ) dz]| .  (2)
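The time-averaged calculation of IM from the four phase-stepped frames can be sketched in a few lines of NumPy. The synthetic frames below are our own illustration (with an arbitrary modulation term standing in for the Bessel-attenuated interference term), not the OEHO software; they show that the four-step combination recovers the modulation magnitude regardless of the random speckle phase.

```python
import numpy as np

def time_averaged_im(frames):
    """Modulation intensity IM from four quarter-wavelength
    phase-stepped frames (Eq. 2)."""
    i0, i1, i2, i3 = frames
    return np.hypot(i0 - i2, i1 - i3)

# Synthetic frames per Eq. 1: I_n = Io + Ir + m*cos(dphi + n*pi/2),
# where m stands in for the Bessel-attenuated interference term.
rng = np.random.default_rng(0)
shape = (64, 64)
io, ir = 1.0, 1.0
m = 0.8 * np.ones(shape)
dphi = rng.uniform(0, 2 * np.pi, shape)   # random speckle phase
frames = [io + ir + m * np.cos(dphi + n * np.pi / 2) for n in range(4)]
im = time_averaged_im(frames)
# IM equals 2*m at every pixel, independent of the speckle phase dphi
```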
The stroboscopic mode consists of acquiring and processing two sets of phase-stepped images, I0 to I3 and R0 to R3, recorded at acoustic stimulus phase 1, ψ1, and acoustic stimulus phase 2, ψ2, respectively. The resultant modulation intensity at each camera pixel, IM, is calculated as

IM = (1/cMn) √{[(I0 − I2) + (R0 − R2)]² + [(I1 − I3) + (R1 − R3)]²} .  (3)
The phase difference, Δ, in the stroboscopic mode is calculated between acoustic stimulus phases ψ1 and ψ2 as

Δ = arctan{[(I1 − I3)(R0 − R2) − (I0 − I2)(R1 − R3)] / [(I0 − I2)(R0 − R2) + (I1 − I3)(R1 − R3)]} ,  (4)
where cMn is the maximum grayscale value, equal to 2^n − 1 with n the bit depth of the camera. The arguments (x,y) are omitted for simplicity.
To acquire a phase-stepped image, the IP uses a data acquisition board to output a varying analog voltage synchronized with each frame captured from the camera. The output voltage is amplified to drive the piezoelectric modulator that modulates the laser light, and hence to accurately perform the required phase step.
To maximize the speed of image acquisition, processing, and display, separate tasks run in individual threads that communicate through shared memory. The image acquisition thread captures successive images at the maximum rate allowed by the camera and computer, storing them in a circular queue and keeping a record of the index of the most recently captured image. Meanwhile, the display thread uses the four most recently captured images to compose the processed image for display.
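The circular-queue handoff between the acquisition and display threads can be sketched as below. This is a simplified stand-in for the OEHO software (the `FrameRing` class and synthetic frames are our assumptions); it shows a producer thread filling a ring buffer while a consumer reads the four newest frames and applies the Eq. 2 combination.

```python
import threading
import numpy as np

class FrameRing:
    """Fixed-size circular queue of frames; a running count lets the
    display thread locate the four most recent frames."""
    def __init__(self, size, shape):
        self.buf = np.zeros((size,) + shape)
        self.count = 0                      # total frames pushed so far
        self.lock = threading.Lock()

    def push(self, frame):
        with self.lock:
            self.buf[self.count % len(self.buf)] = frame
            self.count += 1

    def last4(self):
        """Most recent four frames, oldest first (None if fewer than 4)."""
        with self.lock:
            if self.count < 4:
                return None
            idx = [(self.count - 4 + k) % len(self.buf) for k in range(4)]
            return [self.buf[i].copy() for i in idx]

ring = FrameRing(size=16, shape=(8, 8))

def acquire(n_frames):
    # Stand-in for the camera grab loop: frame k holds the value k.
    for k in range(n_frames):
        ring.push(np.full((8, 8), float(k)))

t = threading.Thread(target=acquire, args=(10,))
t.start()
t.join()
frames = ring.last4()                                  # frames 6, 7, 8, 9
im = np.hypot(frames[0] - frames[2], frames[1] - frames[3])  # Eq. 2
```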
In order to evaluate and demonstrate the capabilities of the OEHO system, we carried out a modal analysis of a circular aluminum membrane with a thickness of 0.37 mm and a diameter of 10 mm. The membrane was fabricated by removing a cylinder of material from an aluminum block measuring 76.30 by 79.7 by 5.10 cm. The manufactured block with the membrane was mounted on a piezoelectric shaker for excitation.
The relative motion of the membrane was investigated with the OH subsystem set at the designed magnification of 0.5X, a field-of-view of 10 mm, and a region of interest of 800 × 800 pixels. In order to maximize fringe visibility, the beam ratio between reference beam and object beam was chosen to be ~1.2.
With the OEHO system running in the time-averaged mode and the membrane excited in the range of 8–90 kHz, fundamental natural resonant frequencies were identified. Figure 5 shows the first 4 resonant modes of vibration of the aluminum membrane. Analysis of the interferograms indicates fringe visibility of 0.82, which is suitable for quantitative analysis.
The stroboscopic mode is used for quantitative displacement measurements. Optoelectronic holograms are acquired from the sample frozen at any point of its vibration cycle by illuminating the object with short stroboscopic pulses synchronized with the vibration excitation; these images are then processed using the four-step optical phase-stepping technique, which renders a phase map depicting the sample displacement (Eq. 4). Figure 6 shows the resulting phase maps and displacement patterns computed by quantitative analysis of stroboscopic measurements of the same aluminum membrane; the frequency f = 47.8 kHz was selected for demonstration. The two stroboscopic images were gathered at stimulus phases of −45° and 135° to capture negative and positive displacements, respectively, of the aluminum membrane.
We also computed the optical phase between two images gathered without stimulation in order to estimate the noise in the measurement procedure. Figure 7 shows the resulting phase difference; the fluctuation due to noise has a peak-to-peak value of ~π/11 (λ/23).
The OEHO system described above was used to investigate the sound-induced motion of the TM in cadaveric chinchillas and humans, as well as in live chinchillas. Time-averaged holograms measured with sound stimuli from 400 to 22,000 Hz at varied levels are shown in Figure 8. The OEHO system showed high mechanical stability while providing high-quality images. One point of ready comparison is the similarity between the results in live and cadaveric chinchillas: all of the measurement features observed in the live chinchilla ear are also observed in the cadaveric ear. Such similarity between pre- and post-mortem middle-ear function has been observed previously [37–38].
Figure 9 shows the result of stroboscopic-mode measurements of a human TM, demonstrating the ability to describe differences in optical phase between two stimulus phases (ψ1 and ψ2) and to convert those differences to absolute displacements, with nm resolution, over the entire imaged surface. In these data, the two stimulus phases were set to correspond to the maximum and minimum displacements.
The vibration patterns of the TMs of different species could be grouped into three types: simple, complex and ordered. The time-averaged measurements at the lowest frequencies show simple patterns of displacement magnitude, defined by multiple concentric holographic rings in response to sound stimulation. Simple patterns are restricted to frequencies less than 600 Hz in chinchilla, and to frequencies less than 1 kHz in cadaveric humans. For sound frequencies between about 0.8 and 2 kHz in chinchilla, and between 2 and 8 kHz in cadaveric human ears, we see complex patterns with numerous spatial maxima with interdigitating shapes that are separated by regions of small displacements. At high frequencies, above 3 kHz in chinchilla and above 9 kHz in human, there are ordered patterns of displacement maxima, arranged like pearls on strings. The strings are arranged in roughly circular patterns along the surface of the TM, while the pearls appear to be radially arranged. Patterns similar to the simple and complex patterns of displacement have been reproduced by finite-element and other TM models, but none of these models, to our knowledge, has investigated frequencies high enough to predict the ordered patterns of displacement we observe [18–25].
We have presented advances in our development of a compact optoelectronic holographic otoscope (OEHO) system capable of providing qualitative and quantitative descriptions of full-field-of-view data at a nanometer scale at video rates. Opto-mechanical design parameters of the OEHO account for the physiology of different mammalian species and preliminary holographic measurements of TM motion. The prototype OEHO configuration is characterized by the following parameters: field-of-view, FOV, of 10 mm, depth of field, DOF, of 5 mm, magnification, Mx, of 0.5X, fringe visibility of 0.82 obtained with the beam ratio of 1.2, and resolution of 57 lp/mm.
Prior to deployment into a medical research environment, image quality and resolution were evaluated using a typical test sample. Currently, the OEHO system is being used in the study and examination of TMs in live and cadaveric animals and cadaveric humans, under normal conditions and after induced pathology. To illustrate the measurement capabilities of our system, we have shown successful representative results from post-mortem chinchilla and human TMs, as well as from the live chinchilla TM, at frequencies up to 22 kHz. For the first time, we illustrate measurements of the displacement of the TM surface at frequencies above 8 kHz in both animals and humans. The observed vibration patterns show simple, complex and ordered displacement behavior. The significance of these different patterns is a point of further study. The results gathered using the OEHO system improve our understanding of the function of the TM, and may be helpful in the diagnosis of various ossicular disorders, as well as in assessing the reasons for failure in middle-ear reconstructive surgery. The present system will be useful in studies that include: correlating vibration patterns of the TM across animal species with specific structures of the TM, testing the effects of static air-pressure differences across the TM on auditory mechanics, classifying normal TM function in terms of mode numbers and average displacement versus frequency, and generating more precise models of the biomechanics of TMs.
We are continuing our efforts toward further miniaturization of the developed OEHO system, and will present the results in forthcoming publications. We hope in the future to test the further miniaturized system as a tool for discriminating different forms of conductive hearing loss in live patients, much like tympanometry and laser-Doppler vibrometry. While useful in diagnostics, those existing tools are limited. Tympanometry, because it represents the average motion of the TM, is not very sensitive to a wide range of ossicular disorders, especially in circumstances where some part of the TM is abnormally flaccid. Vibrometry, on the other hand, appears insensitive to disorders of parts of the TM that are distant from the measurement point. Holography, with its ability to define the motion of each point of the entire TM surface, will combine the best of both these techniques.
Supported by the NIDCD, WPI, CIO, MEEI, and a generous donor. We thank Chris Scarpino of the MEEI, an alumnus of WPI, for facilitating the interactions of our two research groups. Thanks also go to Christian Wester, Benjamin Dwyer and Marty Maccaferri for their contributions to this work.