J Am Acad Audiol. Author manuscript; available in PMC Apr 22, 2013.
Published in final edited form as:
J Am Acad Audiol. Feb 2011; 22(2): 65–80.
doi: 10.3766/jaaa.22.2.2
PMCID: PMC3632371
NIHMSID: NIHMS446538
EVALUATION OF DIFFERENT SIGNAL PROCESSING OPTIONS IN UNILATERAL AND BILATERAL COCHLEAR FREEDOM IMPLANT RECIPIENTS USING R-SPACE™ BACKGROUND NOISE
Alison M. Brockmeyer and Lisa G. Potts
Washington University School of Medicine
Alison Brockmeyer
660 S. Euclid Ave., Campus Box 8115, St. Louis, MO 63110; phone (314) 362-7245; fax (314) 367-7346; brockmeyeral@wusm.wustl.edu
Background
Difficulty understanding in background noise is a common complaint of cochlear implant (CI) recipients. Programming options are available to improve speech recognition in noise for CI users, including Adaptive Dynamic Range Optimization (ADRO), Autosensitivity Control (ASC), and BEAM. It is unknown, however, which processing option results in the best speech recognition in noise. In addition, laboratory measures of these processing options often show greater degrees of improvement than participants report in everyday listening situations. To address this issue, Compton-Conley and colleagues developed a test system to replicate a restaurant environment. The R-SPACE™ consists of eight loudspeakers positioned in a 360 degree arc and utilizes a recording of background noise made in a restaurant.
Purpose
The present study measured speech recognition in the R-SPACE™ with four processing options: standard dual-port directional (STD), ADRO, ASC, and BEAM.
Research Design
A repeated measures, within-subject design was used to evaluate the four different processing options at two noise levels.
Study Sample
Twenty-seven unilateral and three bilateral adult Nucleus Freedom cochlear implant recipients.
Intervention
The participants’ everyday program (with no additional processing) was used as the STD program. ADRO, ASC, and BEAM were added individually to the STD program to create a total of four programs.
Data Collection and Analysis
Participants repeated HINT sentences presented at a 0 degree azimuth with R-SPACE™ restaurant noise at two noise levels, 60 and 70 dB SPL. The Reception Threshold for Sentences (RTS) was obtained for each processing condition and noise level.
Results
In 60 dB SPL noise, BEAM processing resulted in the best RTS, with a significant improvement over STD and ADRO processing. In 70 dB SPL noise, ASC and BEAM processing had significantly better mean RTSs compared to STD and ADRO processing. Comparison of noise levels showed that STD and BEAM processing resulted in significantly poorer RTSs in 70 dB SPL noise compared to the performance with these processing conditions in 60 dB SPL noise. Bilateral participants demonstrated a bilateral improvement compared to the better monaural condition for both noise levels and all processing conditions, except ASC in 60 dB SPL noise.
Conclusions
The results of this study suggest that the use of processing options that utilize noise reduction, like that available in ASC and BEAM, improve a CI recipient’s ability to understand speech in noise in listening situations similar to those experienced in the real-world. The choice of the best processing option is dependent on the noise level, with BEAM best at moderate noise levels and ASC best at loud noise levels for unilateral CI recipients. Therefore, multiple noise programs or a combination of processing options may be necessary to provide CI users with the best performance in a variety of listening situations.
Keywords: Binaural hearing, cochlear implants, directional microphone, noise reduction, speech perception
The ability of cochlear implants to improve an individual’s speech recognition has been well documented (Tyler and Moore, 1992; Skinner et al, 1997; Fetterman and Domico, 2002; Firszt et al, 2004; Spahr and Dorman, 2004). There has been a dramatic improvement in speech recognition as cochlear implant (CI) equipment and speech processing strategies have advanced over the years (Skinner et al, 1994; Rubinstein et al, 1998). Despite the notable increase in performance with the advancement of CI systems, difficulty understanding in background noise continues to be a common complaint among CI recipients, and research has shown that noise has a pronounced detrimental effect on their speech recognition.
Spahr and Dorman (2004) reported that the average CI user scored 70% on sentence recognition tasks using conversational speech in quiet, which decreased to 42% when the sentences were presented at a +10 dB signal-to-noise ratio (SNR) and to 27% at a +5 dB SNR. Firszt et al (2004) had similar findings, with CI users scoring from 57-73% on sentence recognition tasks at a variety of intensity levels. When the sentences were presented in noise (+8 dB SNR), the average score dropped to 48%. The noise condition represented the most difficult listening condition for the participants.
Cochlear implants have incorporated several speech processing options designed to improve speech recognition in noise while providing listening comfort. Speech processing options available in the Nucleus Freedom processor, and later model processors manufactured by Cochlear Americas, include Adaptive Dynamic Range Optimization (ADRO), Autosensitivity Control (ASC), and BEAM. In addition, a traditional dual-port directional microphone has been integrated into the speech processor for many years (Patrick et al, 2006).
Dual-Port Directional Microphone
In a dual-port directional microphone arrangement, sound from behind reaches the rear port before the front port, creating an external time delay. The external time delay depends on the distance between the two microphone ports, which is seven millimeters in the Nucleus Freedom device. The rear port uses an acoustic damper to create a low-pass filter. Sound entering the rear port is processed through the low-pass filter, producing an internal time delay. If the internal and external time delays are equal, sound from the rear will reach both sides of the microphone diaphragm at the same time, generating no net force and suppressing sounds from the rear direction. The direction of maximum suppression varies with the difference between the internal and external time delays (Dillon, 2001; Thompson, 2002).
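As a rough numerical illustration (ours, not from the cited sources), the sketch below computes the external delay for the 7 mm port spacing and the resulting direction of maximum suppression under the standard first-order gradient model, in which the polar response is proportional to the internal delay plus the external delay times cos(theta). The function name and the assumed speed of sound are illustrative.

```python
import math

SPEED_OF_SOUND_M_S = 343.0   # assumed room-temperature value
PORT_SPACING_M = 0.007       # 7 mm between front and rear ports

# External delay: extra travel time for a rear-arriving sound to reach the front port.
external_delay_s = PORT_SPACING_M / SPEED_OF_SOUND_M_S   # about 20 microseconds

def null_angle_deg(internal_delay_s, external_delay_s):
    """Direction of maximum suppression for a first-order directional microphone,
    assuming a polar response proportional to (internal + external * cos(theta))."""
    ratio = internal_delay_s / external_delay_s
    if ratio > 1.0:
        return None                        # no true null in the polar pattern
    return math.degrees(math.acos(-ratio))

print(f"external delay: {external_delay_s * 1e6:.1f} microseconds")
# Equal internal and external delays place the null directly behind the listener (cardioid).
print(f"null at {null_angle_deg(external_delay_s, external_delay_s):.0f} degrees")
# A shorter internal delay moves the nulls toward the sides (e.g. hypercardioid, ~109 degrees).
print(f"null at {null_angle_deg(external_delay_s / 3.0, external_delay_s):.0f} degrees")
```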
Adaptive Dynamic Range Optimization (ADRO)
ADRO is a preprocessing strategy that alters the gain of the input signal to place the signal in the CI user’s dynamic range. Gain is adjusted individually in each frequency channel according to a specific set of rules, which keeps the output level between a comfort target and an audibility target (James et al, 2002; Dawson et al, 2004). Gain is increased if a sound falls below the audibility target or decreased if a sound rises above the comfort target. When the sound is within the audible and comfortable range, the gain operates in a linear fashion (Blamey, 2005). However, gain cannot exceed a specified maximum amount. This maximum gain rule works to limit the amplification of low-level background noise (James et al, 2002; Dawson et al, 2004).
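A minimal per-channel sketch of these rules is given below. It illustrates only the audibility, comfort, and maximum-gain rules described above; the function name, step size, and target values in the example are assumptions rather than ADRO's actual parameters.

```python
def adro_gain_update(gain_db, input_level_db, audibility_target_db,
                     comfort_target_db, max_gain_db, step_db=0.5):
    """One illustrative per-channel gain update following ADRO-style rules.

    - Comfort rule: lower the gain if the channel output rises above the comfort target.
    - Audibility rule: raise the gain if the output falls below the audibility target.
    - Otherwise the gain is left unchanged (linear operation in the comfortable, audible range).
    - Maximum gain rule: gain may never exceed max_gain_db, which limits
      amplification of low-level background noise.
    """
    output_db = input_level_db + gain_db
    if output_db > comfort_target_db:
        gain_db -= step_db
    elif output_db < audibility_target_db:
        gain_db += step_db
    return min(gain_db, max_gain_db)

# Example: a soft channel input is slowly brought toward audibility,
# but the gain is capped by the maximum-gain rule.
gain = 0.0
for _ in range(100):
    gain = adro_gain_update(gain, input_level_db=30.0, audibility_target_db=55.0,
                            comfort_target_db=80.0, max_gain_db=20.0)
print(gain)   # 20.0, the maximum allowed gain
```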
ADRO was incorporated into the Nucleus CI system in 2002 as an input signal processing option (Patrick et al, 2006). Two studies have examined the functional benefit of ADRO for CI recipients. James and colleagues (2002) presented sentences at 70 dB SPL in the presence of multi-talker babble at +10 and +15 dB SNRs to adult CI recipients using ADRO and a standard speech processing program. ADRO demonstrated significantly better speech recognition scores in quiet for soft and average presentation levels, but there was no significant difference in speech recognition in noise between ADRO and the standard program. Dawson and colleagues (2004) presented sentences at 65 dB SPL in the presence of multi-talker babble to pediatric CI recipients using ADRO and a standard program. The SNRs were selected individually, ranging from 0 to +15 dB, to avoid ceiling effects. ADRO showed a significant improvement in speech recognition in quiet and in noise. From these studies, it appears that the gain adjustments of ADRO lead to improved speech recognition at low and medium presentation levels; however, the ability of ADRO to improve speech recognition in noise is unclear.
Autosensitivity Control (ASC)
The development of the ASC processing option was motivated by CI users’ reports of reducing the manual sensitivity control in noisy environments. Reducing the sensitivity decreases the amplification of low-level background noise by changing the automatic gain control (AGC) kneepoint. The AGC kneepoint is the input level at which compression begins. Below the kneepoint, amplification is typically linear (Dillon, 2001; Agnew, 2002b). When the sensitivity of the speech processor is reduced, the AGC kneepoint increases, and when the sensitivity is increased, the AGC kneepoint decreases. Therefore, higher sensitivity (lower kneepoint) leads to more gain for soft sounds and greater audibility (Patrick et al, 2006).
ASC is an optional processing scheme that automatically adjusts the sensitivity according to the noise floor, or the intensity level of sound during breaks in speech. When the noise floor reaches the autosensitivity breakpoint, sensitivity is automatically decreased (kneepoint increased) to provide less low-level gain. When the noise floor falls below the breakpoint, sensitivity is automatically increased (kneepoint decreased) to provide more gain for soft sounds. At default settings, the autosensitivity breakpoint is 57 dB SPL, and ASC aims to keep the noise floor at least 15 dB below the AGC kneepoint. The breakpoint can be changed in the software to make ASC more or less responsive to background noise. With ASC active, CI users typically perceive a decrease in the loudness of background noise (Patrick et al, 2006).
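The breakpoint/kneepoint logic can be sketched as follows. This is not Cochlear's implementation: the function name and the default kneepoint used in the example are assumptions, while the 57 dB SPL breakpoint and the 15 dB margin are the default values quoted above.

```python
def asc_kneepoint_db(noise_floor_db, default_kneepoint_db,
                     breakpoint_db=57.0, margin_db=15.0):
    """Illustrative Autosensitivity-style rule.

    If the noise floor (level measured during breaks in speech) exceeds the
    autosensitivity breakpoint, sensitivity is reduced, i.e. the AGC kneepoint
    is raised far enough to keep the noise floor at least margin_db below it.
    Otherwise the default (higher-sensitivity) kneepoint is restored so soft
    sounds receive more gain.
    """
    if noise_floor_db > breakpoint_db:
        return max(default_kneepoint_db, noise_floor_db + margin_db)
    return default_kneepoint_db

# Quiet room: a 45 dB SPL noise floor is below the breakpoint, so the
# assumed 65 dB SPL default kneepoint (full gain for soft sounds) is kept.
print(asc_kneepoint_db(45.0, default_kneepoint_db=65.0))   # 65.0
# Loud restaurant: a 70 dB SPL noise floor raises the kneepoint to 85 dB SPL,
# reducing the amplification of the background noise.
print(asc_kneepoint_db(70.0, default_kneepoint_db=65.0))   # 85.0
```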
Wolfe et al (2009) explored the effect of ASC on speech recognition in quiet and in noise with ten Nucleus Freedom users. Sentences were presented from a loudspeaker at 0 degree azimuth and noise from loudspeakers in the four corners of the room. Sentences were presented at 60 dBA in quiet, 65 dBA with a +10 dB SNR, 70 dBA with a +7 dB SNR, and 74 dBA with a +4 dB SNR. Sentence recognition was not significantly different with ASC on or off in the quiet and +10 dB SNR conditions. However, participants performed significantly better in the +7 and +4 dB SNR conditions with ASC on. These results suggest that ASC significantly improves speech recognition in the presence of high noise levels.
BEAM
A new input signal processing scheme, BEAM, was introduced in the Nucleus Freedom speech processor in 2005. BEAM is a two-stage adaptive beamformer. The first stage utilizes spatial preprocessing through a single-channel, adaptive dual-microphone system that combines the front directional microphone and rear omnidirectional microphone to separate speech from noise. The output from the rear omnidirectional microphone is filtered through a fixed finite impulse response (FIR) filter, a type of digital filtering characterized by a linear phase response (Agnew, 2002a). The output of the FIR filter is subtracted from an electronically delayed version of the output from the front directional microphone to create the noise reference (Vanden Berghe and Wouters, 1998; Wouters and Vanden Berghe, 2001; Wouters et al, 2002; Spriet et al, 2007). The filtered signal from the omnidirectional microphone is then added to the delayed signal from the directional microphone to create the speech reference. This spatial preprocessing increases sensitivity to sounds arriving from the front while suppressing sounds that arrive between 90 and 270 degree azimuths. BEAM polar plots adapt between cardioid, hypercardioid, and bidirectional patterns as the noise source moves to adjust the null points for maximum noise suppression (Patrick et al, 2006). The second stage of BEAM utilizes adaptive noise cancellation to reduce the remaining noise in the speech reference. The filter coefficients used in the adaptive noise cancellation can only be adjusted during breaks in speech, which requires a voice activity detector. These coefficients are then used to filter out the remaining noise in the speech reference (Wouters et al, 2002).
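A simplified sketch of this two-stage structure is shown below. It follows the description above (sum and difference of the delayed front signal and the FIR-filtered rear signal, followed by an adaptive canceller whose coefficients are updated only during speech pauses), but the filter length, delay, step size, and plain LMS update are placeholder assumptions rather than the actual BEAM implementation.

```python
import numpy as np

def two_stage_beamformer(front, rear, speech_pause, fir_coeffs,
                         delay_samples=8, num_taps=32, mu=0.001):
    """Simplified two-stage adaptive beamformer in the spirit of the description above.

    Stage 1 (spatial preprocessing): the rear-microphone signal is passed through a
    fixed FIR filter and combined with an electronically delayed front-microphone
    signal; the sum forms the speech reference and the difference the noise reference.

    Stage 2 (adaptive noise cancellation): an LMS filter estimates the noise that
    remains in the speech reference from the noise reference and subtracts it.
    Its coefficients are updated only where speech_pause (a boolean array from a
    voice activity detector) is True.
    """
    front_delayed = np.concatenate([np.zeros(delay_samples), front])[: len(front)]
    rear_filtered = np.convolve(rear, fir_coeffs)[: len(rear)]

    speech_ref = front_delayed + rear_filtered   # emphasizes sound from the front
    noise_ref = front_delayed - rear_filtered    # emphasizes sound from the sides/rear

    weights = np.zeros(num_taps)
    output = np.copy(speech_ref)
    for n in range(num_taps, len(speech_ref)):
        x = noise_ref[n - num_taps:n][::-1]      # most recent noise-reference samples
        output[n] = speech_ref[n] - weights @ x  # cancel estimated residual noise
        if speech_pause[n]:                      # adapt only during breaks in speech
            weights += mu * output[n] * x
    return output
```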
Wouters and Vanden Berghe (2001) investigated the speech recognition of four CI users utilizing a two-stage adaptive beamformer algorithm identical to the one used in BEAM processing. Participants repeated monosyllabic words and numbers presented at 0 degree azimuth at 55, 60, and 65 dB SPL, with 60 dB SPL speech-weighted noise presented at 90 degree azimuth on the implanted side, with the beamformer active and inactive. Word recognition was significantly better for all presentation levels with the beamformer active, showing an average SNR improvement of more than 10 dB. Number recognition was also significantly better with the beamformer active, demonstrating an average SNR improvement of 7.2 dB. The authors concluded that the two-stage adaptive beamformer led to significant improvement in speech recognition in noise for CI users.
Spriet and colleagues (2007) investigated the performance of the BEAM processing strategy in the Nucleus Freedom speech processor with five CI users. Participants repeated sentences in the presence of different types, levels, and locations of background noise using the standard directional microphone and BEAM. Speech-weighted noise and multi-talker babble were presented at constant levels of 55 and 65 dB SPL from either one source located at 90 degree azimuth or from three sources located at 90, 180, and 270 degree azimuths. BEAM improved the average SNR in all conditions when compared to the standard directional microphone. Improvement ranged from 1.5 dB with 55 dB SPL speech-weighted noise presented from three locations to 15.9 dB with 65 dB SPL multi-talker babble presented from one location. Spriet and colleagues (2007), similar to Wouters and Vanden Berghe (2001), concluded that BEAM improves speech recognition in background noise.
Studies by Chung and colleagues also investigated the potential for directional microphones, similar to BEAM, to improve speech recognition in noise for CI recipients. Chung et al (2004) recorded monosyllabic words processed through a hearing aid using the omnidirectional microphone setting, the directional microphone setting, and the directional microphone setting with noise reduction technology active. For the recording, the words were presented at 0 degree azimuth at +3 dB SNR, while speech spectrum noise was presented from seven locations around the recording microphone. The recording was then presented to CI users. Participants performed significantly better with the two directional microphone settings compared to the omnidirectional setting. The directional microphone resulted in an average improvement of 11.7 percentage points.
Chung et al (2009) recorded sentences processed through a hearing aid using the omnidirectional, fixed directional, and adaptive directional settings. These recordings were then presented to CI users through direct audio input. Results showed significantly better speech recognition in noise with the adaptive directional setting.
CI users are not alone in their reports of difficulty understanding in background noise, as hearing aid users also report increased difficulty in noise (Kochkin, 2005). There has been a notable amount of research on hearing aid users’ performance in background noise with different processing strategies, some of which are similar to those found in the Freedom device, including traditional directional microphones and adaptive beamformers. The effectiveness of these strategies has been demonstrated in research studies (Soede et al, 1993; Saunders and Kates, 1997; Ricketts and Mueller, 1999; Wouters et al, 1999; Valente et al, 2000; Pumford et al, 2000; Amlani, 2001; Blamey et al, 2006). However, it has been noted that the improvement measured in the laboratory is often better than what users (both CI and hearing aid) report in their real-world situations. The difficulty of effectively evaluating an individual’s performance in a way that reflects real-world listening is an often recognized concern in hearing research (Walden et al, 1984; Cox and Alexander, 1991; Cox et al, 1991; Revit et al, 2002; Saunders and Forsline, 2006). To address this issue, Compton-Conley and colleagues (2004) developed an eight-loudspeaker test system to replicate a restaurant environment, the R-SPACE™.
R-SPACE™
A study was conducted by the developers to assess the validity of the R-SPACE™ and other typical measures of directionality. Three methods of simulating restaurant noise were employed: noise from a single source behind the individual, noise from a single source above the individual, and the R-SPACE™ with noise from eight loudspeakers surrounding the individual. Participants repeated sentences presented from 0 degree azimuth and a Reception Threshold for Sentences (RTS) was calculated. RTS is the SNR needed to obtain 50% correct on the sentence recognition task. These simulations were then compared to measurements taken at an actual restaurant, referred to as the live condition. When noise was presented behind or above the individual, performance varied significantly from the live condition. Differences in the RTS ranged from 1.6 dB to 2.4 dB when comparing the noise behind condition to the live condition and 0.4 dB to 9.1 dB when comparing the noise above condition to the live condition. Variation in scores was dependent upon the microphone configuration tested. The R-SPACE™ simulation, however, was not significantly different from performance in the live condition, with differences in RTS varying from 0.3 dB to 0.5 dB (Compton-Conley et al, 2004).
In addition to how the sound is processed, another factor that typically contributes to CI recipients’ difficulty in background noise is that they are unilaterally stimulated. It has been shown for many years that binaural hearing improves speech recognition in noise (Levitt and Rabiner, 1967; MacKeith and Coles, 1971; Duquesnoy, 1983; Bronkhorst and Plomp, 1989, 1992; Hawley et al, 2004; Dubno et al, 2008). Binaural benefit is thought to emerge from the combination of the acoustic head-shadow effect, binaural squelch, and binaural redundancy. The head-shadow effect occurs when the head physically blocks some of the noise from reaching the far ear, while binaural squelch and binaural redundancy are central auditory processing phenomena, which allow the listener to effectively separate speech and noise. These binaural advantages are comprehensively discussed elsewhere (Dillon, 2001; Tyler et al, 2002; Tyler et al, 2003; Brown and Balkany, 2007; Ching et al, 2007).
Recent research has focused on measuring the effects of the head-shadow effect, binaural squelch, and binaural redundancy in bilateral CI recipients. Research suggests that CI recipients receive the largest bilateral benefit from the head-shadow effect (Gantz et al, 2002; Muller, 2002; Tyler et al, 2002; van Hoesel et al, 2002; van Hoesel and Tyler, 2003; Litovsky et al, 2006; Basura et al, 2009; Laske et al, 2009). The magnitude of the benefit received from the head-shadow effect varies between studies, but is typically estimated to be between 4 and 7 dB (van Hoesel and Tyler, 2003; Litovsky et al, 2006; Basura et al, 2009). The benefit received from binaural squelch and redundancy is not as clear. Several studies showed about half of the participants demonstrating a significant binaural squelch and/or binaural redundancy (Gantz et al, 2002; Tyler et al, 2002; Litovsky, 2006), while other studies observed no significant effect of binaural squelch or redundancy (van Hoesel and Tyler, 2003; Laske et al, 2009). Other recent studies suggest that the benefit of binaural squelch appears after extended bilateral CI use. Buss et al (2008) showed no significant binaural squelch effect after three months of bilateral CI use, but a squelch effect did emerge between six months and one year after bilateral implantation. Meanwhile, Eapen et al (2009) demonstrated continued growth of binaural squelch for four years after bilateral implantation.
Whether the recipient has unilateral or bilateral CIs, understanding speech in the presence of background noise is one of the most challenging tasks. The goal of the present study was to measure speech recognition of unilateral and bilateral CI recipients in background noise with the R-SPACE™. Four signal processing options (standard directional [STD], ADRO, ASC, and BEAM) were measured at two noise levels: a moderate intensity level of 60 dB SPL and a loud intensity level of 70 dB SPL. This study may help determine the speech processing option that yields the best speech recognition in background noise for CI recipients, which could result in improved programming and increased patient benefit and satisfaction.
Participants
Thirty participants, twenty-seven unilateral and three bilateral CI recipients, took part in this study, with a mean age of 60.0 years (range of 25-82 years). Table 1 reports individual demographic and hearing history information for unilateral subjects. Information was obtained from past audiograms and patient reports. The mean years of hearing loss and years of severe-to-profound hearing loss prior to implantation were 30.7 (range of 1-54 years) and 13.8 (range of 1-45 years), respectively. The mean years of hearing aid use prior to implantation in this sample was 20.3 [range of 0 (no experience) to 47 years]. For the bilateral participants, the data from one ear were randomly selected and included in the unilateral data analysis. All participants were implanted with the Nucleus 24 Contour or Contour Advance internal array and were programmed following a clinical protocol developed at Washington University School of Medicine (Skinner, 2003). Specific programming information is reported in Table 2. The mean years of implant use was 3.4 (range of 0.5-7.9 years). Twenty-seven of the thirty participants used the Advanced Combination Encoder (ACE) strategy. The remaining three participants used Spectral Peak (SPEAK), Continuous Interleaved Sampling (CIS), and MP3000 (a research strategy previously studied at Washington University). All participants had open-set speech recognition. Consonant-nucleus-consonant (CNC) word scores with the CI alone ranged from 17 to 86%, with a mean score of 56.8%. Table 3 reports the programming information and CI use of the bilateral participants. Bilateral participants (participants #2, #8, and #9) had a mean of 3.3 years (range of 2-4.5 years) between the first and second implant and a mean of 2.7 years (range 1.7-3.4 years) of bilateral use at the time of testing.
Approval for this study (#08-1038) was obtained from the Washington University School of Medicine Human Research Protection Office (HRPO) prior to data collection. Participants signed an informed consent document approved by the HRPO committee. Participants were reimbursed for their time and travel.
Equipment/Test Environment
The Nucleus 24 Contour and Contour Advance internal arrays used in this study consist of a receiver/stimulator with 24 electrodes: 22 intracochlear electrodes and two extracochlear electrodes (Parkinson et al, 2002). The Nucleus Freedom processor houses the microphones and the main computer, which processes the incoming sound. Custom Sound version 2.0, developed by Cochlear Americas, was used to program the speech processor. The speech processor was hardwired to a programming interface (Cochlear Ltd. Programming Pod) connected to a Dell personal computer. The speech processing strategies implemented by this system include Spectral Peak (SPEAK), Advanced Combination Encoder (ACE), and Continuous Interleaved Sampling (CIS) (Skinner et al, 2002). All participants were tested using a loaner processor to ensure the equipment was performing optimally.
For speech testing, eight loudspeakers were positioned in a 360 degree arc, spaced in increments of 45 degrees. The participant was seated in the center of the arc, 24 inches from each loudspeaker (see Figure 1). Each loudspeaker was at a height of 44 inches, approximately ear level for a seated adult of average height. All testing was completed in a double-walled sound-treated booth (8′3″ × 8′11″), which met the appropriate standard set forth by the American National Standards Institute (ANSI) for permissible ambient noise levels (S3.1-1999, R 2008).
Figure 1
A schematic diagram of the R-SPACE™ Array showing the eight loudspeakers in a 360 degree arc, 24 inches from the listener. Figure taken from Compton-Conley et al (2004) and used with permission from the author.
An Apple iMac 17-inch personal computer with a 2 GHz Intel Core 2 Duo processor and Mac OS X operating system was used to operate the R-SPACE™. The R-SPACE™ configuration was implemented via professional audio mixing software (MOTU Digital Performer 5) and an audio interface (MOTU 828mkII, 96 kHz FireWire interface). The output of the audio interface was sent to four amplifiers (ART SLA-1, two-channel stereo linear power amp with 100 watts per channel) and then to eight loudspeakers (Boston Acoustic CR67) positioned in a 360 degree array.
For soundfield threshold testing, the participant was seated in a double-walled sound treated booth (8′3″× 8′11″) at 0 degree azimuth, one meter from the loudspeaker (Urei Model 809). A Dell personal computer with a sound card, a power amplifier (Crown, Model D-150), and a custom designed mixing and amplifying network (Tucker-Davis Technologies) was utilized for presenting warble tones.
Test Materials
Frequency-modulated (FM) tones centered at 250, 500, 1000, 2000, 3000, 4000, and 6000 Hz were used to obtain aided soundfield thresholds prior to speech recognition testing. The tones were sinusoidal carriers modulated with a triangular function over the standard bandwidths recommended for soundfield use by Walker et al (1984).
For speech testing, the Hearing in Noise Test (HINT) sentences and R-SPACE™ noise were used. The HINT sentences consist of 25 recorded, phonetically balanced lists of 10 sentences each. The lists were recorded by a male talker of American English and were designed for adaptive measurement of the Reception Threshold for Sentences (RTS) (Nilsson et al, 1994).
The R-SPACE™ noise recording was made inside a busy neighborhood restaurant and consists of uncorrelated noise, including sounds of dishes clanking, people talking, and background music (Compton-Conley et al, 2004). It was recorded using the Knowles Electronic Manikin for Acoustic Research (KEMAR), equipped with a circular, horizontal array of eight interference-tube microphones placed at equal 45 degree increments around its head.
Calibration
For calibration of HINT sentences and the R-SPACE™ noise, a sound level meter (Bruel & Kjaer, Model 2230) was placed with the microphone (Bruel & Kjaer, Model 4155) at 90 degree azimuth to the stimulus in the center of the R-SPACE™ loudspeaker array parallel to the center of the loudspeakers. Measurements were made with 0 dB attenuation using a linear-shaped dB sound pressure level (SPL) scale. For the HINT sentences, the overall SPL of all lists was taken as the average of the peaks on the slow, root-mean-square (RMS), linear scale through the front loudspeaker. The maximum output was recorded as 83.7 dB SPL. For the R-SPACE™ noise, an equivalent continuous SPL measure was obtained for five minutes with the sound level meter set using equivalent continuous noise level (dB Leq). The maximum output was 73.9 dB SPL. The magnitude of attenuation was chosen based on the measured maximum output and the desired intensity level of the signal.
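As a simple illustration of the attenuation arithmetic (the function name is ours), the required attenuation is the measured maximum output minus the desired presentation level:

```python
def attenuation_db(measured_max_db_spl, desired_level_db_spl):
    """Attenuation needed to bring the measured maximum output down to the
    desired presentation level."""
    return measured_max_db_spl - desired_level_db_spl

# R-SPACE noise (73.9 dB SPL maximum output) presented at the two test levels:
print(f"{attenuation_db(73.9, 70.0):.1f} dB")   # 3.9 dB of attenuation
print(f"{attenuation_db(73.9, 60.0):.1f} dB")   # 13.9 dB of attenuation
```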
Test Procedures
Aided Soundfield Thresholds
FM tone soundfield thresholds were obtained at 250, 500, 1000, 2000, 3000, 4000, and 6000 Hz in a modified Hughson-Westlake procedure (Carhart and Jerger, 1959) with a +2 and −4 dB HL step size. Soundfield thresholds were measured in the STD program to verify audibility. Mean soundfield thresholds are shown in Figure 2.
Figure 2
Mean soundfield thresholds (dB HL) and +/− 1 standard deviation for the CI with STD processing at user settings.
Reception Threshold for Sentences
Two lists of ten HINT sentences, or 20 sentences total, were presented from the loudspeaker located at 0 degree azimuth, with the R-SPACE™ noise presented from all eight loudspeakers (0, 45, 90, 135, 180, 225, 270, and 315 degree azimuths). The noise was presented at two different intensity levels, a moderate level of 60 dB SPL and a loud level of 70 dB SPL (Pearsons et al, 1977). A Reception Threshold for Sentences (RTS) was obtained using an adaptive procedure in which the level of sentence presentation was adjusted based on a correct or incorrect response. If a correct response was obtained, the presentation level of the next sentence was decreased; if an incorrect response was obtained, the presentation level of the next sentence was increased. The presentation level for the first four sentences was adjusted in 4 dB steps, and presentation levels for sentences 5 to 20 were adjusted in 2 dB steps. A presentation level for a 21st sentence was calculated depending on whether the 20th sentence was repeated correctly or incorrectly. The RTS was calculated by averaging the presentation levels of sentences 5 to 21 and subtracting the noise level. One practice list was presented to familiarize the participants with the task. The lists were randomly assigned between conditions.
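A sketch of this adaptive track and the RTS calculation is given below. Here present_sentence is a hypothetical callable standing in for presenting a HINT sentence at a given level and scoring the listener's repetition, and the starting level is left to the caller.

```python
def run_rts_track(present_sentence, noise_level_db, start_level_db):
    """Illustrative version of the adaptive RTS procedure described above.

    present_sentence(trial, level_db) is assumed to present sentence `trial`
    at level_db and return True if it was repeated correctly. The first four
    sentences move in 4 dB steps and sentences 5-20 in 2 dB steps; the level
    is lowered after a correct response and raised after an incorrect one.
    A level for a 21st sentence is computed from the response to sentence 20,
    and the RTS is the mean level of sentences 5-21 minus the noise level.
    """
    levels = []
    level_db = start_level_db
    for trial in range(1, 21):                    # 20 sentences are presented
        levels.append(level_db)
        step_db = 4.0 if trial <= 4 else 2.0
        correct = present_sentence(trial, level_db)
        level_db += -step_db if correct else step_db
    levels.append(level_db)                       # calculated 21st level, not presented
    return sum(levels[4:]) / len(levels[4:]) - noise_level_db
```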
The participant’s preferred everyday program with no additional processing was used for the STD condition. Each processing option was added individually to the STD program to create three additional programs. The participant’s everyday volume (range 7-9) and sensitivity settings (range 9-14) were used for all conditions. The non-test ear was plugged when at least one unaided hearing threshold was at 60 dB HL or better. The four processing options and two noise levels were counterbalanced for testing.
For unilateral CI participants, all testing was performed in one session. Bilateral CI participants attended two sessions, one for each ear, with the bilateral condition tested at 60 dB SPL in the first session and 70 dB SPL in the second.
Statistical Analysis
Unpaired t-tests were performed to compare RTSs within processing options and noise levels, and a mixed model repeated measures analysis of variance (ANOVA) was used to analyze RTSs across all combinations of processing options and noise levels. An unstructured covariance structure was designated within the mixed model to account for the completely within-participant crossed study design, with a focus on the noise level x processing option interaction. This interaction tested the hypotheses regarding the equality of changes across noise levels and processing options. Tukey-adjusted p-values within the ANOVA model were used to determine significance (p≤0.05) for pairwise comparisons.
Demographic and audiologic variables were investigated to determine if any impacted the interaction between noise level and processing options. The variables of interest included the implanted ear, participant age at testing, years of hearing loss, years of severe-to-profound hearing loss, and years of hearing aid use prior to implantation. Two variables related to the CI were also analyzed: years of CI experience and the recipient’s most recent CNC word score. The three-way interaction between potential moderating variables, processing options, and noise levels could not be explored due to sample size limitations. As a result, the potential moderating variables were divided into groups. The continuous variables were divided at the median, with ear of implantation, the only non-continuous variable, divided categorically. Unpaired t-tests were used to compare data between potential moderating variable groups within noise levels and processing options, and a mixed model ANOVA was used to explore the noise level x processing interaction within potential moderating variable groups. If no significant interaction was found, the interaction was dropped from the mixed model and the main effects of processing option and noise level were investigated. All data analyses were performed using SAS software, version 9.2 of the SAS System for Linux (SAS Institute Inc., Cary, NC, USA).
Unilateral Participants
Statistical analyses identified both noise level [F(1,29)=29.8; p<0.0001] and processing option [F(3,29)=22.3; p<0.0001] as significant main effects. A significant noise level x processing option interaction [F(3,29)=5.18; p=.006] was also identified, indicating that processing is differentially affected by noise level; the four processing options showed different patterns of change with increasing noise level. Because of the significant interaction, the main effects of noise level and processing option were not meaningful when considered independently.
The results in 60 dB SPL noise for each of the four processing options can be seen in Figure 3. A smaller RTS (shorter bar) indicates better speech recognition in noise. STD processing resulted in a mean RTS of 10.8 dB. The poorest performance was with ADRO processing, with a mean RTS of 12.8 dB. ASC and BEAM processing showed an improvement in RTS relative to STD and ADRO processing, with means of 9.5 and 8.3 dB, respectively. BEAM was the only processing option that resulted in a statistically significant improvement, performing better than STD [t(29)=−3.82; p≤0.05] and ADRO processing [t(29)=5.13; p≤0.05]. The mean RTSs for STD, ADRO, and ASC were not statistically different from each other, although the ASC mean was numerically better than those for STD and ADRO processing. There was also no statistical difference between ASC and BEAM processing.
Figure 3
Mean RTS for unilateral participants in 60 dB SPL noise with STD, ADRO, ASC, and BEAM processing options. Error bars represent +1 standard deviation. The asterisks represent a significant difference between processing options (p≤0.05).
The results in 70 dB SPL noise for each of the four processing options can be seen in Figure 4. STD and ADRO processing showed similar performance, with mean RTSs of 15.6 and 15.0 dB, respectively. ASC processing had significantly better mean RTSs compared to STD [t(29)=−6.87; p≤0.05] and ADRO processing [t(29)=6.36; p≤0.05]. BEAM processing also exhibited significantly better RTSs than STD [t(29)=−5.29; p≤0.05] and ADRO [t(29)=4.87; p≤0.05] processing. No significant differences were observed between STD and ADRO or between ASC and BEAM. ASC processing had the best mean RTS of the four conditions (9.7 dB), followed by BEAM processing with a mean RTS of 11.4 dB.
Figure 4
Mean RTS for unilateral participants in 70 dB SPL noise with STD, ADRO, ASC, and BEAM processing options. Error bars represent +1 standard deviation. The asterisks represent a significant difference between processing options (p≤0.05).
The difference in performance between 60 and 70 dB SPL noise across the four processing options can be seen in Figure 5. The participants’ performance was poorer for all processing conditions at 70 dB SPL, although the size of the decrement varied among the four processing options. The smallest decrement was seen with ASC processing, for which performance was essentially the same at the two levels, with a difference of only 0.2 dB. STD processing had the largest change, with a decrease in performance of 4.8 dB. ADRO exhibited a decrease in performance of 2.2 dB, and BEAM showed a decrease of 3.1 dB with increased noise. STD [t(29)=−3.94; p≤0.05] and BEAM [t(29)=−5.16; p≤0.05] processing resulted in significantly poorer RTSs in 70 dB SPL noise compared to the performance with these processing conditions at 60 dB SPL. There was no statistical difference between noise levels for ADRO and ASC processing.
Figure 5
Mean RTS difference between noise levels (60 and 70 dB SPL) for unilateral participants (RTS at 70 dB SPL – RTS at 60 dB SPL) with STD, ADRO, ASC, and BEAM processing options. Error bars represent +1 standard deviation. The asterisks represent a significant difference between noise levels (p≤0.05).
Large standard deviations were evident throughout the analysis of the results. The standard deviations ranged from 4.87 with STD processing in 70 dB SPL noise to 7.41 with ADRO processing in 60 dB SPL noise. The large standard deviations are most likely due to the substantial differences in speech recognition ability of the participants, who were recruited from a large clinical population. Any level of measurable open-set speech recognition was acceptable for participation in the current study, and CNC scores in quiet ranged from 19 to 86%.
Moderating Variables
Demographic and audiologic variables were investigated to determine if any had an impact on the interaction between noise level and processing options. If no significant interaction was found, the main effects of noise level and processing option were examined. The variables explored were implanted ear, age at testing, years of hearing loss, years of severe-to-profound hearing loss, and years of hearing aid use prior to implantation. Years of hearing loss, years of severe-to-profound hearing loss, and years of hearing aid use were highly correlated; consequently, only years of hearing loss prior to implantation is discussed further. Implanted ear, age at testing, and years of hearing loss were found to be significant moderators of the noise level x processing option interaction. The right ear CI group [F(3,13)=3.82; p=0.04], younger participants [F(3,14)=4.24; p=0.03], and participants with more years of hearing loss [F(3,15)=6.24; p=0.006] exhibited a significant noise level x processing interaction, meaning that the processing options showed different patterns of change when the noise level increased from 60 to 70 dB SPL. For example, the younger participants’ performance decreased with STD, ADRO, and BEAM as the noise level increased, while their performance with ASC improved by 0.4 dB.
The other groups (left ear CI, older participants, and participants with fewer years of hearing loss) revealed significant main effects of noise level and processing option but no significant interaction, meaning that performance varied with processing option and noise level independently of each other. Older subjects, for example, demonstrated a significant main effect for both noise level [F(1,14)=25.4; p=0.0002] and processing condition [F(3,14)=19.9; p<0.0001]. The older subjects performed more poorly at 70 than at 60 dB SPL for all processing options.
Additional variables related to CI history and performance were also analyzed. Years of CI use and CNC speech recognition word scores in quiet were found to be significant moderators for the noise level x processing option interaction. Participants with more years of CI experience [F(3,14)=8.99; p=0.001] and higher CNC scores [F(3,15)=4.11;p=0.03] showed a noise level x processing interaction, indicating that processing conditions were differentially affected by noise level. For example, performance with ASC for these participants either stayed the same or improved as the noise level increased, while performance with STD, ADRO, and BEAM worsened with increasing noise. Also, for these participants, BEAM showed best performance in 60 dB SPL noise and ASC showed best performance in 70 dB SPL noise.
Participants with less CI experience [F(3,14)=10.9; p=0.0006] and lower CNC scores [F(3,13)=7.33; p=0.004] showed a significant main effect of processing condition. Performance for these participants was better with ASC and BEAM than STD and ADRO regardless of the noise level, with ASC showing best performance in both noise levels. In addition, speech recognition in quiet was the only moderating variable predictive of speech recognition in noise. CI participants with higher speech recognition scores in quiet performed better in noise across all processing options and noise levels (p-values range from 0.06 to 0.0003).
Bilateral Participants
Due to the small sample size, no statistical analyses could be performed on the bilateral data, but performance for the three bilateral CI participants (#2, 8, and 9) is described below. See Table 2 for individual ear and Table 3 for bilateral information for these participants. Figure 6 shows the mean RTSs for the right ear, left ear, and bilateral conditions with the four processing options in 60 dB SPL noise. Bilateral improvement was evident for the STD, ADRO, and BEAM processing options. When comparing the bilateral condition to the better monaural ear condition, STD processing revealed a mean improvement of 1.4 dB. ADRO processing had a mean bilateral improvement of 1.3 dB and BEAM processing had a mean improvement of 3.0 dB. ASC processing was the only option in which the bilateral condition did not result in the most favorable RTS. Best performance with ASC processing was seen for the left ear alone condition. This result was influenced by one participant’s very low RTS in the left ear with ASC processing. Overall, the best bilateral performance was with BEAM processing, with a mean RTS of 1.6 dB. Table 4 shows the three bilateral participants’ individual RTSs for the four processing options in 60 dB SPL noise.
Figure 6
Mean RTS of bilateral participants in 60 dB SPL noise with STD, ADRO, ASC, and BEAM processing options. Mean RTSs are shown for unilateral right ear, unilateral left ear, and bilateral conditions.
The mean RTSs for the right ear, left ear, and bilateral conditions in 70 dB SPL noise with the four processing options can be seen in Figure 7. When comparing the bilateral condition to the better unilateral ear condition, STD processing resulted in a mean improvement of 2.5 dB. ADRO processing revealed a mean RTS improvement of 7.2 dB, and ASC processing improved 4.7 dB. BEAM processing had the largest improvement (9.7 dB) between the unilateral and bilateral conditions among the four processing options. As seen in 60 dB SPL noise, BEAM processing also had the best overall bilateral performance in 70 dB SPL noise, with a mean RTS of 0.4 dB. Table 5 shows the three bilateral participants’ individual RTSs for the four processing options in 70 dB SPL noise.
Figure 7
Mean RTS of bilateral participants in 70 dB SPL noise with STD, ADRO, ASC, and BEAM processing options. Mean RTSs are shown for unilateral right ear, unilateral left ear, and bilateral conditions.
Unilateral performance for the bilateral participants typically followed the trend of the other unilateral participants, showing poorer performance when the noise level increased from 60 to 70 dB SPL. This trend did not occur when the participants were tested bilaterally. Three of the processing options (ADRO, ASC, and BEAM) were actually better with 70 dB SPL noise than with 60 dB SPL noise. By comparing the individual data in Tables 4 and 5, it is evident that when these three processing options were active, the bilateral RTSs decreased (improved) for all bilateral participants as the noise level increased. The only exception is participant #9 with BEAM processing. When the decrease in unilateral participants’ performance was combined with the improvement in bilateral participants’ performance from 60 to 70 dB SPL, there was a difference of 5.5 dB for ADRO processing, 3.0 dB for ASC processing, and 4.3 dB for BEAM processing. These are very large differences and suggest a large bilateral benefit, especially as the listening situation becomes more challenging.
The results of this study show that CI recipients can have improved speech recognition in noise with processing options available clinically. ADRO processing demonstrated results similar to STD processing (i.e., no additional processing). This finding agrees with James et al (2002), who found no difference between these processing options in noise for adult CI recipients. Dawson et al (2004), however, did find a difference between ADRO and standard processing in noise with pediatric CI recipients. The difference between these studies may be due to the populations tested, as the Dawson study used pediatric CI recipients and the James study used adult CI recipients. ADRO performance also remained relatively stable when the noise level was increased. This stability across noise levels can most likely be explained by the maximum gain rule of ADRO processing, which does not allow the gain to exceed a specified maximum amount. At the moderate noise level used in this study, the amplification of background noise had already reached the maximum amount of allowable gain, and therefore no additional amplification was provided when the noise level was increased.
This study found that BEAM processing resulted in significantly better performance than STD and ADRO processing at both noise levels. The ability of BEAM to improve speech recognition in noise for CI recipients has been demonstrated in previous research. Wouters and Vanden Berghe (2001) and Spriet et al (2007) found larger improvements in SNRs than the current study. However, those studies used different noise stimuli (speech-weighted noise and multi-talker babble), which were presented from one to three noise sources. The current study used R-SPACE™ (live restaurant) noise presented from a diffuse field. The R-SPACE™ noise has previously been found to result in a poorer RTS than other noise. Valente and colleagues (2006) tested bilateral hearing aid users in the R-SPACE™ and found that the RTS was 1.3 dB poorer for R-SPACE™ noise than for HINT noise, which is filtered to match the average long-term spectrum of the HINT sentences. Therefore, speech recognition tasks may be more difficult when the R-SPACE™ noise is used compared to other continuous noise types.
The difference in the R-SPACE™ configuration may also explain the difference between the current findings and previous research. The R-SPACE™ configuration presents noise from all eight loudspeakers; therefore, the front speaker presents both speech and noise. BEAM utilizes directionality to separate speech from spatially separated noise. When speech and noise are presented together from the front speaker, BEAM must rely on the adaptive noise cancellation stage to reduce the noise. BEAM may be more effective at improving speech recognition in noise when the noise source is spatially separated from the speech signal. Since typical real-world listening situations often include speech and noise arriving from the same direction, previous studies may have overestimated the absolute performance of BEAM, and the current results may better predict the performance of BEAM processing in real-world situations similar to that replicated by the R-SPACE™.
BEAM processing showed a significant decrease in performance with the increase in noise level. This reduction in performance is probably due to the second stage of BEAM, which utilizes adaptive noise cancellation. This may affect the clarity of the speech reference by filtering out portions of the speech signal along with the noise.
ASC processing resulted in the best performance at the loud noise level and was almost as good as BEAM at the moderate noise level. This result agrees with the findings of Wolfe et al (2009), in which ASC improved speech recognition in the presence of loud noise levels. ASC processing also maintained performance across noise levels, with nearly equivalent RTSs at 60 and 70 dB SPL. The benefit of ASC processing at a louder noise level was not necessarily expected in the R-SPACE™, as ASC processing limits background noise by increasing the AGC kneepoint, which reduces the amplification of distant, softer sounds relative to closer, louder sounds. One would postulate that in the diffuse noise environment of the R-SPACE™, where the noise sources and the speech are at the same distance from the listener, ASC processing would not significantly benefit speech recognition. The noise sources were not equidistant from the listener in the Wolfe et al (2009) study: the rear noise sources were farther from the listener than the front noise sources, and the speech signal was closer to the listener than all noise sources. It is possible in the current study that the standard directional microphone increased sensitivity to sounds arriving from the front while ASC processing maximized suppression of the background noise; these two features working in conjunction may be responsible for the performance in a diffuse noise field. Regardless of the mechanisms at work, the findings suggest that ASC processing is a good option for limiting amplification of background noise at moderate and loud levels while maintaining speech intelligibility.
It is also important to note the possible effect of infinite compression on speech recognition in noise. The Nucleus Freedom processor, at default settings, codes inputs from 25 to 65 dB SPL into the electrical dynamic range. The threshold (25 dB SPL) can be adjusted in the programming software, but the upper limit (65 dB SPL) is fixed (Wolfe et al, 2009). Therefore, any signal greater than 65 dB SPL would be exposed to high levels of compression.
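As a simple check of when this ceiling is reached (our own illustration using the default 65 dB SPL upper limit quoted above): with the noise fixed at 60 dB SPL, any RTS above +5 dB places the sentences above 65 dB SPL, and in 70 dB SPL noise any positive RTS does.

```python
def speech_exceeds_idr_ceiling(noise_level_db_spl, rts_db, ceiling_db_spl=65.0):
    """True if the speech level implied by a measured RTS exceeds the fixed
    65 dB SPL upper limit of the default input dynamic range, i.e. the level
    at which infinite compression is engaged."""
    return noise_level_db_spl + rts_db > ceiling_db_spl

print(speech_exceeds_idr_ceiling(60.0, 4.0))    # False: sentences at 64 dB SPL
print(speech_exceeds_idr_ceiling(60.0, 8.3))    # True: sentences at 68.3 dB SPL
print(speech_exceeds_idr_ceiling(70.0, 9.7))    # True: sentences at 79.7 dB SPL
```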
The RTSs obtained in this study resulted in infinite compression being activated for the majority of participants across processing conditions and noise levels. Five participants were not subject to infinite compression in the 60 dB SPL noise condition, as they obtained RTSs below +5 dB across all processing conditions. Seven participants had infinite compression in some conditions and not in others, as they obtained RTSs above and below +5 dB across processing conditions. The remaining 18 participants were subject to infinite compression across all processing conditions in both noise levels. In addition, ASC changes the magnitude of infinite compression, as ASC aims to keep the noise floor at least 15 dB below the AGC kneepoint. Limiting the background noise to below the point where speech is compressed may be the reason ASC performed best at the loud input level.
The three bilateral participants demonstrated a bilateral benefit with almost all processing options at both noise levels. This supports the findings of previous bilateral CI studies that showed improved speech recognition in noise with binaural hearing. Several studies attribute the majority of bilateral benefit to the head-shadow effect (Gantz et al, 2002; Tyler et al, 2003; van Hoesel and Tyler, 2003; Litovsky et al, 2006; Buss et al, 2008; Basura et al, 2009). In the current study, the noise source was diffuse, and the exact SNR at each ear varied as the R-SPACE™ noise changed in real time. Because the R-SPACE™ noise is uncorrelated, the level of noise coming from any one loudspeaker may be higher or lower than that from the other loudspeakers at any moment in time, although the overall SNR at each ear should be similar when averaged over time. It is possible that a rapidly changing head-shadow effect contributed to the observed bilateral improvement.
The current results with the three bilateral participants showed a mean bilateral improvement as high as 9 dB compared to unilateral performance. Previous studies estimated the head-shadow effect to improve the SNR by between 4 and 7 dB (van Hoesel and Tyler, 2003; Litovsky, 2006; Basura et al, 2009). The greater bilateral benefit observed in this study may be attributed to the central phenomena of binaural squelch and redundancy. The noise presented from each speaker is not identical, allowing the brain to use differences in the timing and spectrum of the input signal to separate the speech and noise (Tyler et al, 2002; Tyler et al, 2003; Ching et al, 2007; Brown & Balkany, 2007). Also, the speech presented from the front loudspeaker is perceived by both ears, providing redundant information to the brain. This redundancy should allow the brain to develop a better representation of the message (Dillon, 2001; Ching et al, 2007).
The variation in results between the current study and previous ones could also be ascribed to characteristics of the individual participants. These three participants were experienced listeners with their bilateral CIs (mean bilateral experience of 2.7 years), whereas some studies have measured bilateral benefit shortly after the second implantation (Gantz et al, 2002; Tyler et al, 2002; Litovsky et al, 2006). Recent research has indicated that the effect of binaural squelch increases over time (Buss et al, 2008; Basura et al, 2009; Eapen et al, 2009; Litovsky et al, 2009). Eapen et al (2009) found that the squelch effect significantly increased after the first year of bilateral experience. All three of the participants in this study had over one year of bilateral experience, which may have resulted in increased benefit from binaural squelch.
The bilateral participants demonstrated similar speech understanding in quiet with each ear alone. This equivalent performance between right and left ears may allow better integration of the binaural signal in noise. It is unclear how differences between the ears may impact bilateral performance. Finally, the difference in noise types and arrays may also play a role in the variation. The R-SPACE™ noise may better demonstrate the brain’s ability to analyze the differences and similarities between inputs from the two ears to improve the internal representation of speech and noise. However, the small sample size of the current study makes it difficult to draw conclusions or comparisons to other studies.
In addition to the difference in performance between unilateral and bilateral stimulation for these participants, the effect of the noise level is a noteworthy finding. These participants’ unilateral performance was similar to the mean unilateral performance of the group, with poorer performance at the louder noise level. However, this was not true when they were stimulated bilaterally: their bilateral RTSs were better when the noise was louder. This was true for all of the bilateral participants with three of the processing options (ADRO, ASC, and BEAM). The bilateral improvement found at the higher noise level suggests that bilateral benefit may be greater as the listening situation becomes more challenging. It is possible that many traditional clinical measures do not provide an adequate evaluation of bilateral stimulation, and it has been suggested that the bilateral benefit measured in studies may underestimate the benefit received by bilateral CI recipients; subjective reports of bilateral benefit often exceed the measured benefit (Litovsky et al, 2006; Laske et al, 2009). The large bilateral benefit seen in this study may better estimate CI recipients’ everyday performance. The assessment of bilateral benefit, however, is difficult and may vary between individuals and tasks. Although the bilateral trend seen in this study is interesting, results should be interpreted with caution due to the small number of bilateral participants.
Although different processing options can improve speech recognition in noise for CI recipients, they still perform notably poorer than normal-hearing individuals. In this study, the best speech recognition for the unilateral participants was found with BEAM processing in 60 dB SPL noise, which resulted in a mean RTS of 8.3 dB. This is 11 dB poorer than that reported by Nilsson et al (1994) for normal-hearing individuals using HINT sentences in spectrally matched noise. For the bilateral participants, the best RTS of 0.4 dB was found with BEAM processing in 70 dB SPL noise. The performance of the bilateral participants is on average closer to that of normal-hearing individuals, but is still poorer. Valente et al (2006) evaluated twenty-five bilateral hearing aid users with mild to moderately-severe sensorineural hearing loss using HINT sentences in the R-SPACE™. Average performance of the hearing aid users showed an RTS of 2.0 dB and −0.3 dB with an omnidirectional and a directional microphone, respectively. The unilateral and bilateral CI participants in the current study performed more poorly than the bilateral hearing aid users, although the average bilateral CI performance was only 1.1 dB poorer than that of the bilateral hearing aid users. ASC and BEAM processing improve the ability of CI users to understand speech in background noise, but performance with these strategies is still poorer than that of bilateral hearing aid users and far from that of normal-hearing individuals.
The results of this study suggest that the type of processing and the noise level interact to produce different degrees of speech recognition within the same individual. This has important clinical relevance for programming different processing options and for counseling CI recipients on their use. This finding is consistent with CI recipients’ subjective reports of preferring different processing options in different listening environments. Typically, patients are given a single program to use in noisy listening environments. The present results, however, support providing the patient with two separate noise programs: BEAM for moderate levels of background noise and ASC for loud levels.
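To make the two-program recommendation concrete, the minimal Python sketch below encodes the decision rule suggested by these data: BEAM for moderate noise near 60 dB SPL, ASC for loud noise near 70 dB SPL, and the everyday STD program in relative quiet. All names and the specific cutoff levels are illustrative assumptions for this sketch; it is not part of any Nucleus fitting software and is not clinical guidance.

# Minimal sketch of the program-selection rule suggested by the present results.
# Cutoffs are illustrative assumptions; noise levels are in dB SPL at the listener.

def recommend_program(noise_level_db: float) -> str:
    """Return the noise program expected to give the best speech recognition."""
    if noise_level_db < 50.0:
        return "STD"   # relatively quiet: keep the everyday program
    if noise_level_db < (60.0 + 70.0) / 2:
        return "BEAM"  # moderate noise, near the 60 dB SPL study condition
    return "ASC"       # loud noise, near the 70 dB SPL study condition

if __name__ == "__main__":
    for level in (45.0, 60.0, 70.0):
        print(f"{level:.0f} dB SPL -> {recommend_program(level)}")

Running the sketch prints STD, BEAM, and ASC for 45, 60, and 70 dB SPL, respectively, mirroring the pattern of best-performing programs observed in this study.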
These findings support the use of processing options that utilize noise reduction to improve speech recognition in noise for unilateral and bilateral CI recipients, and these options should be part of the standard programming protocol to increase CI recipient satisfaction and benefit. The choice of the best processing option, however, is dependent on the noise level. This finding may help explain seemingly inconsistent reports by CI recipients who are asked to try different processing options (programs) in everyday listening situations. For example, it is not uncommon for a recipient to report a noticeable difference between the ASC and BEAM programs while out to dinner one week, and then report little difference between the same programs at dinner the following week, which makes it difficult to make appropriate programming decisions. Taken in the context of the current finding, such reports suggest that the noise levels and noise sources in the two restaurants differed, producing a difference in performance. During the programming process, each CI recipient should not only be given different processing options to try, but also be counseled on how to use them in different listening situations to determine which one provides the best speech recognition in each situation. Recipients should be encouraged to keep a diary of listening situations and the programs they found most beneficial during the early months with their CI. This information can help the individual and the clinician learn which program performs best in the recipient’s various listening environments.
Results for the three bilateral CI participants show a bilateral improvement in speech recognition in noise compared to the better ear alone. This benefit can most likely be attributed to binaural squelch and binaural redundancy, as well as a rapidly changing head-shadow effect. The most interesting finding was that the bilateral improvement increased as the noise level increased, suggesting greater bilateral benefit in more challenging listening situations. Clinically, recipients’ subjective reports of bilateral benefit are often much larger than the improvement measured in the clinic. It may be that typical test measures are not challenging enough and do not mimic real-world listening situations, creating a mismatch between subjective and objective reports; the R-SPACE™ appears to be a more valid measure of bilateral benefit. No statistical analyses could be performed on the bilateral data because of the small sample size in this study, and the observed trend cannot be generalized to bilateral CI users until more bilateral CI users are evaluated.
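For reference, the bilateral improvement reported here can be expressed as the simple difference between the better monaural RTS and the bilateral RTS; a positive value indicates benefit, since a lower RTS reflects better performance:

\[
\text{Bilateral benefit (dB)} = \mathrm{RTS}_{\text{better ear alone}} - \mathrm{RTS}_{\text{bilateral}}
\]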
Continued research is needed with both unilateral and bilateral CIs utilizing different test procedures at a variety of input levels. Further research should also investigate the performance of these processing options in CI recipients who use a hearing aid in the non-implanted ear, which would provide insight into how differences in hearing ability between the ears relate to binaural processing. This study’s findings also suggest the need for more challenging tests to measure bilateral benefit. Lastly, the Nucleus system now allows multiple options to be programmed together (e.g., ASC+ADRO, ASC+ADRO+BEAM). Additional research is needed to evaluate how these processing options interact and which option or combination performs best in background noise at a variety of input levels.
Acknowledgments
We would like to thank Laura Holden, Tim Holden, Karen Steger-May, Sally McCoomb, and the Washington University Institute of Clinical and Translational Sciences for their guidance and help with this study. Appreciation is also expressed to the thirty participants who gave their time and effort to this study.
This publication was made possible by Grant Number UL1 RR024992 from the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH) and NIH Roadmap for Medical Research. Its contents are solely the responsibility of the authors and do not necessarily represent the official view of NCRR or NIH.
Abbreviations
ADRO: adaptive dynamic range optimization
AGC: automatic gain control
ASC: autosensitivity control
BEAM: two-stage adaptive beamforming algorithm
CI: cochlear implant
HINT: Hearing in Noise Test
RTS: reception threshold for sentences
SNR: signal-to-noise ratio
STD: standard program

Footnotes
Portions of this article were presented as a poster at AudiologyNOW! 2009, April 2-4, Dallas and at the Conference on Implantable Auditory Prostheses, July 13-17, 2009, Lake Tahoe.
  • Agnew J. Amplifier and circuit algorithms for contemporary hearing aids. In: Valente M, editor. Hearing Aids: Standards, Options, and Limitations. 2nd ed Thieme Medical Publishers; New York: 2002a. pp. 101–142.
  • Agnew J. Hearing aid adjustments through potentiometer and switch options. In: Valente M, editor. Hearing Aids: Standards, Options, and Limitations. 2nd ed Thieme Medical Publishers; New York: 2002b. pp. 143–177.
  • American National Standards Institute. Maximum permissible ambient noise levels for audiometric test rooms (ANSI S3.1-1999, R 2008). Accredited Standards Committee S3, Bioacoustics; Washington, DC: 1999.
  • Amlani AM. Efficacy of directional microphone hearing aids: A meta-analytic perspective. J Am Acad Audiol. 2001;12:202–214. [PubMed]
  • Basura GJ, Eapen R, Buchman CA. Bilateral cochlear implantation: current concepts, indications, and results. Laryngoscope. 2009;119:2395–2401. [PubMed]
  • Blamey P. Adaptive Dynamic Range Optimization (ADRO): a digital amplification strategy for hearing aids and cochlear implants. Trends Amplif. 2005;9:77–98. [PubMed]
  • Blamey PJ, Fiket HJ, Steele BR. Improving speech intelligibility in background noise with an adaptive directional microphone. J Am Acad Audiol. 2006;17:519–530. [PubMed]
  • Bronkhorst AW, Plomp R. Binaural speech intelligibility in noise for hearing-impaired listeners. J Acoust Soc Am. 1989;86:1374–1383. [PubMed]
  • Bronkhorst AW, Plomp R. Effect of speechlike maskers on binaural speech recognition in normal and impaired hearing. J Acoust Soc Am. 1992;92:3132–3139. [PubMed]
  • Brown KD, Balkany TJ. Benefits of bilateral cochlear implantation: a review. Curr Opin Otolaryngol Head Neck Surg. 2007;15:315–318. [PubMed]
  • Buss E, Pillsbury HC, Buchman CA, Pillsbury CH, Clark MS, Haynes DS, Labadie RF, Amberg S, Roland PS, Kruger P, Novak MA, Wirth JA, Black JM, Peters R, Lake J, Wackym PA, Firszt JB, Wilson BS, Lawson DT, Schatzer R, D’Haese PSC, Barco AL. Multicenter U.S. bilateral MED-EL cochlear implantation study: speech perception over the first year of use. Ear Hear. 2008;19:20–32. [PubMed]
  • Carhart R, Jerger JF. Preferred method for clinical determination of pure-tone thresholds. J Speech Hear Disord. 1959;24:330–345.
  • Chan JCY, Freed DJ, Vermiglio AJ, Soli SD. Evaluation of binaural functions in bilateral cochlear implant users. Int J Audiol. 2008;46:296–310. [PubMed]
  • Ching TYC, van Wanrooy E, Dillon H. Binaural-bimodal fitting or bilateral implantation for managing severe to profound deafness: a review. Trends Amplif. 2007;11:161–192. [PubMed]
  • Chung K, Zeng F. Using hearing aid adaptive directional microphones to enhance cochlear implant performance. Hear Res. 2009;250:27–37. [PubMed]
  • Chung K, Zeng F, Waltzman S. Utilizing advanced hearing aid technologies as pre-processors to enhance cochlear implant performance. Cochlear Implants Int. 2004;5:192–195. [PubMed]
  • Compton-Conley CL, Neuman AC, Killion MC, Levitt H. Performance of directional microphones for hearing aids: real-world versus simulation. J Am Acad Audiol. 2004;15:440–455. [PubMed]
  • Cox RM, Alexander GC. Hearing aid benefit in everyday environments. Ear Hear. 1991;12:127–139. [PubMed]
  • Cox RM, Alexander GC, Rivera IM. Accuracy of audiometric test room simulations of three real-world listening environments. J Acoust Soc Am. 1991;90:764–772.
  • Dawson PW, Decker JA, Psarros CE. Optimizing dynamic range in children using the Nucleus cochlear implant. Ear Hear. 2004;25:230–241. [PubMed]
  • Dillon H. Hearing aids. Thieme Medical Publishers; New York: 2001.
  • Dubno JR, Ahlstrom JB, Horwitz AR. Binaural advantages for younger and older adults with normal hearing. J Sp Lang Hear Res. 2008;51:539–556. [PubMed]
  • Duquesnoy AJ. The intelligibility of sentences in quiet and in noise in aged listeners. J Acoust Soc Am. 1983;74:1136–1144. [PubMed]
  • Eapen RJ, Buss E, Adunka MC, Pillsbury HC, Buchman CA. Hearing-in-noise benefits after bilateral simultaneous cochlear implantation continue to improve 4 years after implantation. Otol Neurotol. 2009;30:153–159. [PMC free article] [PubMed]
  • Feddersen WE, Sandel TT, Teas DC, Jeffress LA. Localization of high-frequency tones. J Acoust Soc Am. 1957;29:988–991.
  • Fetterman BL, Domico EH. Speech recognition in background noise of cochlear implant patients. Otolaryngol Head Neck Surg. 2002;126:257–263. [PubMed]
  • Firszt JB, Holden LK, Skinner MW, Tobey EA, Peterson A, Gaggl W, Runge-Samuelson CL, Wackym PA. Recognition of speech presented at soft to loud levels by adult cochlear implant recipients of three cochlear implant systems. Ear Hear. 2004;25:375–387. [PubMed]
  • Friesen LM, Shannon RV, Baskent D, Wang X. Speech recognition in noise as a function of the number of spectral channels: comparison of acoustic hearing and cochlear implants. J Acoust Soc Am. 2001;110:1150–1163. [PubMed]
  • Fu Q, Shannon RV, Wang X. Effects of noise and spectral resolution on vowel and consonant recognition: acoustic and electric hearing. J Acoust Soc Am. 1998;104:3586–3596. [PubMed]
  • Gantz BJ, Tyler RS, Rubinstein JT, Wolaver A, Lowder M, Abbas P, Brown C, Hughes M, Preece J. Binaural cochlear implants placed during the same operation. Otol Neurotol. 2002;23:169–180. [PubMed]
  • Hawley ML, Litovsky RY, Culling JF. The benefit of binaural hearing in a cocktail party: effect of location and type of interferer. J Acoust Soc Am. 2004;115:833–843. [PubMed]
  • James CJ, Blamey PJ, Martin L, Swanson B, Just Y, Macfarlane D. Adaptive dynamic range optimization for cochlear implants: a preliminary study. Ear Hear. 2002;23:49S–58S. [PubMed]
  • Kochkin S. MarkeTrak VII: Customer satisfaction with hearing instruments in the digital age. Hear J. 2005;58:30–43.
  • Laske RD, Veraguth D, Dillier N, Binkert A, Holzmann D, Huber AM. Subjective and objective results after bilateral cochlear implantation in adults. Otol Neurotol. 2009;30:313–318. [PubMed]
  • Levitt H, Rabiner LR. Predicting binaural gain in intelligibility and release from masking for speech. J Acoust Soc Am. 1967;42:820–829. [PubMed]
  • Litovsky RY, Parkinson A, Arcaroli J. Spatial hearing and speech intelligibility in bilateral cochlear implant users. Ear Hear. 2009;30:419–431. [PMC free article] [PubMed]
  • Litovsky R, Parkinson A, Arcaroli J, Sammeth C. Simultaneous bilateral cochlear implantation in adults: a multicenter clinical study. Ear Hear. 2006;27:714–731. [PMC free article] [PubMed]
  • MacKeith NW, Coles RRA. Binaural advantages in hearing of speech. J Laryngol Otol. 1971;85:213–232. [PubMed]
  • Muller J, Schon F, Helms J. Speech understanding in quiet and noise in bilateral users of the MED-EL COMBI 40/40+ cochlear implant system. Ear Hear. 2002;23:198–206. [PubMed]
  • Nilsson M, Gelnett D, Sullivan J, Soli S. Norms for the hearing in noise test: the influence of spatial separation, hearing loss, and English language experience on speech reception thresholds (A) J Acoust Soc Am. 1992;92:2385.
  • Nilsson M, Soli SD, Sullivan JA. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994;95:1085–1099. [PubMed]
  • Parkinson AJ, Arcaroli J, Staller SJ, Arndt PL, Cosgriff A, Ebinger K. The Nucleus 24 Contour cochlear implant system: adult clinical trial results. Ear Hear. 2002;21:31–48. [PubMed]
  • Patrick JF, Busby PA, Gibson PJ. The development of the Nucleus Freedom cochlear implant system. Trends Amplif. 2006;10:175–200. [PubMed]
  • Pearson KS, Bennett RL, Fidell S. Speech Levels in Various Noise Environments. Environmental Health Effects Research Series, U.S. Environmental Protection Agency; Washington, DC: 1977.
  • Pumford JM, Seewald RC, Scollie S, Jenstad LM. Speech recognition with in-the-ear and behind-the-ear dual-microphone hearing instruments. J Am Acad Audiol. 2000;11:23–35. [PubMed]
  • Revit LJ, Schulein RB, Julstrom SD. Toward accurate assessment of real-world hearing aid benefit. Hear Rev. 2002;9:34–38.
  • Ricketts TA, Dittberner AB. Directional amplification for improved signal-to-noise ratio: strategies, measurements, and limitations. In: Valente M, editor. Hearing Aids: Standards, Options, and Limitations. 2nd ed Thieme Medical Publishers; New York: 2002. pp. 274–346.
  • Ricketts T, Mueller HG. Making sense of directional hearing aids. Am J Audiol. 1999;8:117–127. [PubMed]
  • Rubinstein JT, Parkinson WS, Lowder MW, Gantz BJ, Nadol JB, Jr, Tyler RS. Single-channel to multichannel conversions in adult cochlear implant subjects. Am J Otol. 1998;19:461–466. [PubMed]
  • Saunders GH, Forsline A. The Performance-Perceptual Test (PPT) and its relationship to aided reported handicap and hearing aid satisfaction. Ear Hear. 2006;27:229–242. [PubMed]
  • Saunders GH, Kates JM. Speech intelligibility enhancement using hearing-aid array processing. J Acoust Soc Am. 1997;102:1827–1837. [PubMed]
  • Skinner MW. Optimizing cochlear implant speech performance. Ann Otol Rhinol Laryngol. 2003;112:4–13. [PubMed]
  • Skinner MW, Arndt PL, Staller SJ. Nucleus(R) 24 advanced encoder conversion study: performance versus preference. Ear Hear. 2002;23:2S–17S. [PubMed]
  • Skinner MW, Clark GM, Whitford LA, Seligman PM, Staller SJ, Shipp DB, Shallop JK, Everingham C, Menapace CM, Arndt PL, Antogenelli T, Brimacombe JA, Pijl S, Daniels P, George CR, McDermott HJ, Beiter AL. Evaluation of a new spectral peak coding strategy for the Nucleus 22 channel cochlear implant system. Am J Otol. 1994;15(Suppl 2):15–27. [PubMed]
  • Skinner MW, Holden LK, Holden TA, Demorest ME, Fourakis MS. Speech recognition at simulated soft, conversational, and raised-to-loud vocal efforts by adults with cochlear implants. J Acoust Soc Am. 1997;101:3766–3782. [PubMed]
  • Soede W, Bilsen FA, Berkhout AJ. Assessment of a directional microphone array for hearing-impaired listeners. J Acoust Soc Am. 1993;94:799–808. [PubMed]
  • Spahr AJ, Dorman MF. Performance of subjects fit with the Advanced Bionics CII and Nucleus 3G cochlear implant devices. Arch Otolaryngol Head Neck Surg. 2004;130:624–628. [PubMed]
  • Spriet A, Van Deun L, Eftaxiadis K, Laneau J, Moonen M, van Dijk B, van Wieringen A, Wouters J. Speech understanding in background noise with the two-microphone adaptive beamformer BEAM in the Nucleus Freedom cochlear implant system. Ear Hear. 2007;28:62–72. [PubMed]
  • Thompson SC. Microphone, telecoil, and receiver options: past, present, and future. In: Valente M, editor. Hearing Aids: Standards, Options, and Limitations. 2nd ed Thieme Medical Publishers; New York: 2002. pp. 64–100.
  • Tyler RS, Dunn CC, Witt SA, Preece JP. Update on bilateral cochlear implantation. Curr Opin Otolaryngol Head Neck Surg. 2003;11:388–393. [PubMed]
  • Tyler RS, Gantz BJ, Rubinstein JT, Wilson BS, Parkinson AJ, Wolaver A, Preece JP, Witt S, Lowder MW. Three-month results with bilateral cochlear implants. Ear Hear. 2002;23:80S–89S. [PubMed]
  • Tyler RS, Moore BCJ. Consonant recognition by some of the better cochlear-implant patients. J Acoust Soc Am. 1992;92:3068–3077. [PubMed]
  • Valente M, Mispagel KM, Tchorz J, Fabry D. Effect of type of noise and loudspeaker array on the performance of omnidirectional and directional microphones. J Am Acad Audiol. 2006;17:398–412. [PubMed]
  • Valente M, Schuchman G, Potts LG, Beck LB. Performance of dual microphone in-the-ear hearing aids. J Am Acad Audiol. 2000;11:181–189. [PubMed]
  • van Hoesel R, Ramsden R, O’Driscoll M. Sound-direction identification, interaural time delay discrimination, and speech intelligibility advantages in noise for a bilateral cochlear implant user. Ear Hear. 2002;23:137–149. [PubMed]
  • van Hoesel RJM, Tyler RS. Speech perception, localization, and lateralization with bilateral cochlear implants. J Acoust Soc Am. 2003;113:1617–1630. [PubMed]
  • Vanden Berghe J, Wouters J. An adaptive noise canceller for hearing aids using two nearby microphones. J Acoust Soc Am. 1998;103:3621–3626. [PubMed]
  • Walden BE, Demorest ME, Hepler EL. Self-report approach to assessing benefit derived from amplification. J Sp Hear Res. 1984;27:49–56. [PubMed]
  • Walker G, Dillon H, Byrne D. Soundfield audiometry: recommended stimuli and procedures. Ear Hear. 1984;5:13–21. [PubMed]
  • Wolfe J, Schafer EC, Heldner B, Mulder H, Ward E, Vincent B. Evaluation of speech recognition in noise with cochlear implants and dynamic FM. J Am Acad Audiol. 2009;20:409–421. [PubMed]
  • Wouters J, Litiere L, van Wieringen A. Speech intelligibility in noisy environments with one and two-microphone hearing aids. Audiol. 1999;38:91–98. [PubMed]
  • Wouters J, Vanden Berghe J. Speech recognition in noise for cochlear implantees with a two-microphone monaural adaptive noise reduction system. Ear Hear. 2001;22:420–430. [PubMed]
  • Wouters J, Vanden Berghe J, Maj J. Adaptive noise suppression for a dual microphone hearing aid. Int J Audiol. 2002;41:401–407. [PubMed]