Otol Neurotol. Author manuscript; available in PMC 2017 August 1.
PMCID: PMC5302051; NIHMSID: NIHMS794525

Participant-generated Cochlear Implant Programs: Speech Recognition, Sound Quality, and Satisfaction

Abstract

Objective

To determine whether patient-derived programming of one’s cochlear implant (CI) stimulation levels may affect performance outcomes.

Background

Increases in patient population, device complexity, outcome expectations, and clinician responsibility have demonstrated the necessity for improved clinical efficiency.

Methods

Eighteen postlingually deafened adult CI recipients (mean=53 years; range, 24–83 years) participated in a repeated-measures, within-participant study designed to compare their baseline listening program to an experimental program they created.

Results

No significant group differences in aided sound-field thresholds, monosyllabic word recognition, speech understanding in quiet, speech understanding in noise, or spectral modulation detection (SMD) were observed (p>0.05). Four ears (17%) improved with the experimental program for speech presented at 45 dB SPL and two ears (9%) performed worse. Six ears (27.3%) improved significantly with the self-fit program at +10 dB signal-to-noise ratio (SNR) and four ears (26.6%) improved in speech understanding at +5 dB SNR. No individual scored significantly worse when speech was presented in quiet at 60 dB SPL or in any of the noise conditions tested. All but one participant opted to keep at least one of the self-fit programs at the completion of this study. Participants viewed the process of creating their program more favorably than the standard clinical fitting (t=2.11, p=0.012) and thought creating the program was easier than the traditional fitting methodology (t=2.12, p=0.003). Average time to create the self-fit program was 10 minutes, 10 seconds (mean=9:22; range, 4:46–24:40).

Conclusions

Allowing experienced adult CI recipients to set their own stimulation levels without clinical guidance is not detrimental to success.

Keywords: Cochlear implants, Outcomes, Self-fitting, Self-programming

In 2012, up to 73% of adult cochlear implant (CI) recipients had transitioned from a hearing aid (HA). Driven by an aging population, increased awareness, and changes in device access and pursuit, this demographic is expected to grow at a rate of greater than 11% per year through 2020. Meanwhile, given the maturation of newborn screening programs and the relatively stable incidence of congenital hearing losses meeting CI candidacy, the number of recipients under 30 years old is expected to grow at a slower rate of 2.5% per year through 2020 (1). These increases in patient population, as well as in device complexity, outcome expectations, and clinician responsibility, have demonstrated the necessity for improved clinical efficiency.

Morgan Stanley Research (2) reported a clinic capacity model suggesting that United States CI clinics were utilizing as much as 80% of their capacity to see patients. Thus, a limiting factor on the growth of CI programs is the burden that new patients, and current patients who pursue bilateral cochlear implantation, place on existing audiological resources. To combat this, some centers have capped the resources available for audiological services by reducing appointment times. The new clinical environment at some centers is to fit more, fit faster, and, of course, keep the same high standard of care. This is not sustainable.

If the aforementioned were not cause for concern, the current clinical environment suggests that the forecast of new implant recipients may actually be underestimated for the faster-growing older demographic. That analysis does not consider the expanded indications for cochlear implantation seen today, which include an expanding base of patients with significant low-frequency residual hearing but poor, CI-qualifying speech understanding. These are individuals who, by current FDA criteria, may not meet traditional CI indications for severity and/or configuration of hearing loss, but who are increasingly being implanted (3–8). Other trends include cochlear implantation of individuals with single-sided deafness for bilateral hearing and/or tinnitus suppression (9–11).

Additionally, the partnering of retail-based audiology services with surgical centers will increase awareness and expand the number of patients seen in CI centers in the years to come. Laaman and Rutledge (1) estimated that 150,000 adults in the developed world transitioned to a severe-to-profound sensory hearing loss in 2012. Of these individuals, only about 25,500 pursued cochlear implantation, leaving 127,500 without sufficient audiologic intervention. At present, there is an increasing focus on identifying CI candidates being seen in retail-based HA centers.

Basic programming of CI systems constitutes a significant portion of total clinical effort. With financial pressures on centers and the need to see more patients in less time, reducing the clinical effort dedicated to setting baseline stimulation levels and loudness balancing could, in theory, free the audiologist to focus on higher-order problems, including complex troubleshooting, programming of difficult cases, aural rehabilitation, assessment, and counseling. Doing so would increase the opportunity to provide higher quality care, elevating the audiologist's scope of practice to the "top of the license," while potentially reducing visit duration and increasing clinical efficiency.

Allowing HA users to program their HA for frequency-specific gain is not a novel idea. Keidser and Dillon (12) pooled findings from five studies (189 subjects) that reported preferred gain deviation from NAL-NL1 targets. They found it much more common for adult HA users to decrease gain from prescribed NAL-NL1 targets, reporting a 3.2 dB decrease in overall gain for average-level inputs. The reverse has been seen in children and is one of the premises for the development of the DSL hearing aid prescription, which was developed to more closely match children's preferred listening levels, which are higher than those of adults. Scollie et al. (13) found DSL v4.1 targets to prescribe gain 9 to 11 dB higher than adult preferred listening levels.

Mueller et al. (14) used trainable HAs to examine “real-world” preferred gain in a group of HA users. Trainable HAs automatically adjust the startup gain based on previous gain settings. Twenty-two participants were fitted with two different “startup” gain prescriptions: 6 dB above NAL-NL1 targets and 6 dB below NAL-NL1 targets. During the trial, users were allowed to “train” or set overall gain up or down to their preferred gain as they encountered different real-world listening environments. It was observed that the startup gain had a significant influence on the gain to which the aids were trained. Unfortunately, no performance measures were used to evaluate whether or not the gain to which the aids were trained affected performance, but participants in this study rated loudness perception and satisfaction best in the program with the lower startup gain.

Hornsby and Mueller (15) allowed users to set HAs to their preferred gain levels and evaluated the effects on speech understanding in 16 bilateral HA users. Final gain settings were recorded and compared with NAL-NL1 targets. On average, individuals set gain within 1 to 2 dB of NAL-NL1 targets and, although some users preferred more or less gain than NAL-NL1 targets, speech testing resulted in no significant difference between prescribed and user-set gain settings.

Despite the success of self-fitting studies of adult HA users, there is considerable concern that CI patient-derived stimulation levels could negatively impact speech understanding and sound quality. Thus, the purpose of the present study was to evaluate performance, sound quality, and satisfaction of self-fit CI programs as compared with the standard clinical programs.

MATERIALS AND METHODS

Participants

Eighteen adult CI recipients (23 ears) implanted with the Advanced Bionics (Valencia, CA, U.S.A.) CI system participated in this study, which was conducted in accordance with institutional review board approval. Demographic information is displayed in Table 1. Participants had at least 3 months' experience with the implant (M=67.2; range, 3–174 mo.) and scored at least 28 on the mini mental state examination, indicating no significant risk of cognitive impairment (16). A repeated-measures, within-participant design was used to compare the participant's current listening ("baseline") program to the self-fit program. Unless otherwise mentioned, each ear was tested individually, and ears with any level of residual acoustic hearing were occluded with a foam earplug. The following measures were evaluated at the first testing session and at a subsequent session 2 to 4 weeks later.

TABLE 1
Participant demographic information: n=18 (8 bilateral, 10 unilateral right, 3 unilateral left). The participants’ ages ranged from 24 to 83 with a mean age of 53.1 years

Sound-field Thresholds and Speech Recognition

Stimuli were presented from a single loudspeaker at 0 degrees azimuth. Sound-field thresholds were measured at octave and inter-octave frequencies from 250 to 6000 Hz. Speech recognition in quiet was assessed using one 50-word consonant-nucleus-consonant (17) list and one 20-sentence AzBio list (18) in accordance with the minimum speech test battery protocol (19). In addition to presenting AzBio materials at 60 dBA, participants were tested at 45 dBA to ensure that higher level processing (i.e., speech understanding) was not adversely affected for low-level inputs. Participants who scored greater than 30% correct in quiet were also tested at a +10 dB signal-to-noise ratio (SNR) using multi-talker babble. Individuals who scored greater than 30% correct in this condition were further tested at +5 dB SNR.
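For illustration, this progression through quiet and noise conditions can be summarized as a simple decision rule. The Python sketch below is a hypothetical encoding of that rule; the function name and the assumption that noise conditions were presented at 60 dBA are ours, not part of the published protocol.

```python
# Minimal sketch of the adaptive test progression described above.
QUIET_LEVELS_DBA = [60, 45]   # AzBio presentation levels in quiet
NOISE_SNRS_DB = [10, 5]       # progressively harder SNRs (multi-talker babble)
CRITERION = 30                # percent-correct cutoff for advancing to the next noise condition

def run_azbio_session(present_list):
    """present_list(level_dba, snr_db) -> percent correct; snr_db=None means quiet."""
    results = {}
    for level in QUIET_LEVELS_DBA:
        results[("quiet", level)] = present_list(level, None)
    # Noise conditions are attempted only if the listener exceeds the criterion
    # in the preceding condition; 60 dBA is an assumed presentation level.
    score = results[("quiet", 60)]
    for snr in NOISE_SNRS_DB:
        if score <= CRITERION:
            break
        score = present_list(60, snr)
        results[("noise", snr)] = score
    return results
```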

Quick Spectral Modulation Detection (QSMD)

Spectral envelope perception as measured by spectral modulation detection (SMD) serves as a psychoacoustic estimate of spectral resolution that is highly correlated with speech understanding (20–22). For this study, we used the quick SMD (QSMD) task (22), which uses a three-interval, forced-choice procedure based on a modified method of constant stimuli in which the listener is asked to differentiate a spectrally modulated band of noise from a flat-spectrum noise. There are a fixed number of trials at each modulation depth and frequency. Each trial is scored as correct or incorrect, and spectral resolution is described as the overall percent correct score for the task (chance=33%). The stimulus was created by applying a logarithmically spaced, sinusoidal modulation to the broadband carrier stimulus with a bandwidth of 125 to 5600 Hz. A total of 60 trials at five modulation depths (10, 11, 13, 14, and 16 dB) and two modulation frequencies (0.5 and 1.0 cycles/octave) were evaluated. All stimuli were calibrated and presented in the free field at 60 dBA from a single loudspeaker.
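As a rough illustration of this stimulus construction, the sketch below generates a noise carrier and imposes a sinusoidal ripple (in dB) along a log-frequency axis. The sampling rate, duration, phase handling, and normalization are illustrative assumptions and are not taken from the QSMD implementation.

```python
import numpy as np

def spectrally_modulated_noise(fs=44100, dur=0.5, f_lo=125.0, f_hi=5600.0,
                               depth_db=13.0, mod_freq=0.5, phase=0.0, rng=None):
    """Broadband noise whose spectrum is sinusoidally modulated (in dB) on a
    log-frequency axis; parameter defaults are illustrative, not the study's values."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(fs * dur)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.zeros_like(freqs)
    octaves[band] = np.log2(freqs[band] / f_lo)
    gain_db = np.zeros_like(freqs)
    # depth_db peak-to-valley ripple at mod_freq cycles per octave
    gain_db[band] = (depth_db / 2.0) * np.sin(2 * np.pi * mod_freq * octaves[band] + phase)
    spec[band] *= 10 ** (gain_db[band] / 20.0)
    spec[~band] = 0.0                       # restrict energy to the passband
    stim = np.fft.irfft(spec, n)
    return stim / np.max(np.abs(stim))      # simple peak normalization

# A flat-spectrum reference interval is the same call with depth_db=0.
```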

Consonant Recognition Task

During this closed-set task, the participant was presented with one of 16 male-voiced consonants in an /aCa/ format and asked to identify which consonant was heard. The stimuli were originally taken from the Iowa laser videodisc (23); three repetitions of the 16 consonants, for a total of 48 presentations, were used to arrive at a total percent correct.

Speech Level, Speech Clarity, and Noise Level Ratings

Stimuli from the Phonak Target™ software were presented in the free field. The noise-level rating task employed the use of 12 environmental sounds, ranging from very quiet sounds (e.g., silence) to very loud (e.g., chorus of car horns in rush hour traffic). Participants rated each stimulus three times (i.e., 36 total presentations). Ratings were made on a 7-point scale ranging from “inaudible” to “unacceptably loud” and were compared between programs and to the ratings collected from 10 normal hearing (NH) adults.

During the speech level task, listeners rated the level of the speech of male and female talkers presented in quiet and in noise. Ratings were made on a 7-point scale ranging from “inaudible” to “unacceptably loud.” Nine different stimuli were presented three times. During the clarity task, listeners were asked to rate the clarity of the same male and female talkers from the speech level task. Ratings were made on a 5-point scale ranging from “completely unclear” to “completely clear.”

Subjective Questionnaires

Subjective surveys of benefit/perception were used to evaluate differences between the programs. Using a 10-point scale, participants were also probed regarding how satisfied they were with the programs they created, the process by which programs were created and their overall program preferences.

The abbreviated profile of hearing aid benefit (APHAB) (24) is a 24-item self-assessment inventory in which patients report the amount of communication difficulty, expressed as the percentage of time an individual has difficulty, across a variety of listening environments. A lower number represents fewer problems. Four subscales are used to assess benefit: ease of communication (EC), reverberation (RV), background noise (BN), and aversiveness (AV) (24). Benefit was calculated by comparing the reported difficulty in the baseline program to that of the self-fit program.
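A minimal sketch of that benefit calculation, assuming each program yields one percent-difficulty score per subscale, is shown below; the helper name and example values are hypothetical.

```python
# Hypothetical helper for per-subscale APHAB benefit.
SUBSCALES = ("EC", "RV", "BN", "AV")

def aphab_benefit(baseline_pct, selffit_pct):
    """baseline_pct / selffit_pct: dicts of subscale -> percent-of-time-with-difficulty.
    Positive benefit means fewer reported problems with the self-fit program."""
    return {s: baseline_pct[s] - selffit_pct[s] for s in SUBSCALES}

# Example: 40% EC difficulty at baseline vs 25% with the self-fit program = 15-point EC benefit.
print(aphab_benefit({"EC": 40, "RV": 30, "BN": 55, "AV": 20},
                    {"EC": 25, "RV": 28, "BN": 50, "AV": 22}))
```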

The SSQ12 (25) was used to evaluate participants’ perception of auditory performance in real-world listening situations including: speech understanding, segregation of sounds and ability to attend to simultaneous speech sources, spatial hearing (direction, distance and movement), and sound quality (clarity, music, and listening effort). A higher score represents better speech, spatial, and sound qualities of hearing.

The Judgments of Sound Quality scale (26,27) was used to evaluate the sound quality of the programs in the following domains: clarity, fullness, brightness, softness, spaciousness, nearness, loudness, and total impression.

Experimental Program Creation Process

Participants entered the study having experience with a clinician-generated program. This clinical program was used in the baseline assessment of performance. At each of the two visits, participants used a custom programming interface to generate a new listening program. The new programs preserved the parameters of the clinical program [e.g., stimulation strategy, IDR, active microphone(s)]. Upper stimulation levels were reset and then established through the interface, which presented acoustic signals (i.e., speech in quiet, speech in noise, and narrow-band noises) to the participant via a single loudspeaker placed at 0 degrees azimuth at a distance of 1 m. During presentation of these acoustic signals, the participant provided a series of subjective loudness ratings. Electrode stimulation levels were adjusted globally or in small groups based on the response. Lower stimulation levels were set to 10% of upper stimulation levels. The change in upper stimulation levels and the affected electrode(s) were controlled by an underlying algorithm designed to allow the participant to explore a range of input levels from soft to loud. The objectives of this algorithm were to arrive at a stimulation contour that provided loudness percepts similar to those experienced by normal hearing listeners, supported a similar level of performance as the clinical program, and was acceptable for daily use in terms of overall loudness and sound quality.
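The underlying algorithm is not specified in detail here; the following sketch only illustrates the general shape of a loudness-driven adjustment loop of this kind. The rating scale, target rating, step size, and convergence rule are assumptions for illustration, not the study's actual parameters.

```python
# Toy sketch of a loudness-driven M-level adjustment loop (illustrative assumptions only).
TARGET_RATING = 4   # e.g., "comfortable" on a hypothetical 1-7 loudness scale
STEP_CU = 5         # clinical units per adjustment (assumed)

def adjust_m_levels(m_levels, groups, get_loudness_rating, max_iters=20):
    """m_levels: list of upper stimulation levels (CU), one per active electrode.
    groups: list of electrode-index lists adjusted together (or one global group).
    get_loudness_rating(group): plays the acoustic probe and returns the listener's rating."""
    for _ in range(max_iters):
        converged = True
        for group in groups:
            rating = get_loudness_rating(group)
            if rating < TARGET_RATING:        # too soft -> raise the group
                for e in group:
                    m_levels[e] += STEP_CU
                converged = False
            elif rating > TARGET_RATING:      # too loud -> lower the group
                for e in group:
                    m_levels[e] -= STEP_CU
                converged = False
        if converged:
            break
    t_levels = [round(0.10 * m) for m in m_levels]   # lower levels set to 10% of M levels
    return m_levels, t_levels
```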

No practice or clinical guidance was given during the program creation process. If needed, an additional global adjustment up or down was made in "live speech mode," as would be completed in a standard clinical visit, before the program was written to the participant's processor. Participants were asked to use the program exclusively for 2 to 4 weeks before returning to evaluate the self-fit program. Upon returning for evaluation, participants created a second program, which was used to evaluate how consistent they were in setting stimulation levels.

RESULTS

Upper Stimulation Levels (M Levels)

Unless otherwise stated, all statistical analyses were completed using a paired two-sample t test. Average participant-set upper stimulation levels, or M levels, were not significantly different from those of the baseline clinical program (p>0.05). On average, participants set M levels nine clinical units (CU) lower (SD ≈ 26 CU) than the baseline program after a global volume adjustment during the first visit. User-set M levels at the second visit did not differ significantly from those set at the first visit (p>0.05). Averaged across active electrodes, M levels set at visit 2 were 6 CU higher (i.e., closer to baseline stimulation levels) than those set at visit 1 (SD = 19 CU).
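For readers who wish to reproduce this style of comparison, a minimal paired t-test sketch is shown below; the per-ear M-level arrays are placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Placeholder per-ear mean M levels in clinical units (not study data).
baseline_m = np.array([210., 198., 225., 240., 205.])
selffit_m  = np.array([200., 195., 230., 228., 199.])

t, p = stats.ttest_rel(baseline_m, selffit_m)   # paired two-sample t test
print(f"mean difference = {np.mean(selffit_m - baseline_m):+.1f} CU, t = {t:.2f}, p = {p:.3f}")
```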

Sound-field Thresholds and Speech Recognition

No significant difference in aided CI sound-field thresholds between programs was observed (p>0.05). A binomial distribution model for monosyllabic word recognition with 50 words (28) revealed no significant differences in word recognition between the two programs for any individual. Figure 1 shows the results from speech recognition testing using AzBio sentence materials. One unilateral participant performed significantly better with the experimental program when speech was presented in quiet at 60 dB SPL (49% versus 67%). No significant differences in speech understanding were observed for the remaining ears tested (n=22). Three ears (13%) scored significantly better with the experimental program for AzBio sentence materials presented in quiet at 45 dB SPL; however, caution should be used in interpreting two of these data points as real differences because the function is compressed at the extremes due to floor and ceiling effects (18). Four ears (17%) fell outside the 95% confidence interval, scoring worse for sentence materials presented in quiet at 45 dB SPL. Six ears (27.3%) performed significantly better with the experimental program at +10 dB SNR and four ears (26.6%) performed significantly better on speech understanding at +5 dB SNR. No individual scored significantly worse in any of the noise conditions tested. Six individuals fell outside the confidence intervals on more than one measure of speech understanding. Of these, three performed better with the self-fit program in both noise conditions; one performed better in noise and in the soft speech condition; and two others, while they performed better in noise, performed worse in the soft speech condition (45 dB SPL). An examination of lower stimulation levels did not provide additional insight. While significant individual differences were observed, no significant group differences were observed for speech understanding in quiet or in noise between the two programs (p>0.05).
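To illustrate how an individual score change can be flagged as significant under a binomial model, the sketch below uses an exact binomial interval as a simplified stand-in for the published critical differences of Thornton and Raffin (28); the example scores are hypothetical.

```python
# Simplified sketch of flagging an individual word-score change as significant.
from scipy import stats

def outside_binomial_ci(baseline_correct, selffit_correct, n_words=50, conf=0.95):
    """Return True if the self-fit score falls outside the binomial interval
    implied by the baseline score on an n_words-item list."""
    p_hat = baseline_correct / n_words
    lo, hi = stats.binom.interval(conf, n_words, p_hat)   # interval in word counts
    return not (lo <= selffit_correct <= hi)

# Example: 35/50 words at baseline vs 41/50 with the self-fit program.
print(outside_binomial_ci(35, 41))
```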

FIG. 1
Individual speech recognition scores for the baseline clinical (x-axis) and experimental self-fit (y-axis) programs. The dashed lines outline the 95% confidence interval for AzBio sentence materials (18). Data points falling outside this region are considered ...

Average consonant recognition score in the baseline and self-fit program was 61% and 62%, respectively. Average percent correct on the QSMD task in the baseline and self-fit program was 56% and 57%, respectively. These differences were not significant (p>0.05).

Sound Quality

One subject forgot to fill out and return the SSQ. Statistical analysis of SSQ and JSQ results revealed no significant differences between programs (p>0.05) (see Supplemental Digital Content, http://links.lww.com/MAO/A421). APHAB results are shown in Figure 2. The solid black diagonal represents no difference between the baseline program and the self-fit program. Anything above this diagonal indicates more problems with the self-fit program, while anything below it indicates fewer problems with the self-fit program. The 95% critical differences for the EC (solid gray line), RV and BN (dotted line), and AV (dashed line) scales are shown (24). Data points outside these regions represent a significant difference in subjective hearing difficulty between the two programs. Two participants experienced fewer problems on the EC scale, one participant experienced fewer problems on the RV scale, and one reported fewer problems on the AV scale. No individuals reported fewer problems on more than one scale. No significant group differences were observed between programs for any subscale (p>0.05).

FIG. 2
Individual APHAB scores are shown for the baseline clinical (x-axis) and experimental self-fit (y-axis) programs. The 95% critical differences for EC (solid gray line), RV and BN (dotted line), and AV (dashed line) scales are shown (24). Data points outside ...

Speech Level, Clarity, and Noise Level

No differences between the baseline and self-fit programs were observed for subjective estimates of speech level, clarity, and noise level (p>0.05). Figure 3 shows the average rating participants assigned to stimuli during the noise level task compared with the ratings of 12 NH adults. No difference in loudness rating between programs was observed, suggesting both had similar loudness growth (p>0.05). Figure 3 also shows that, on average, lower- and higher-level stimuli were rated as "quieter" by the CI recipients. This is consistent with the effects of the automatic gain control, which applies compression to higher-level input stimuli. The perception of lower-level stimuli as "quieter" was not investigated, as this study did not modify lower stimulation levels.
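As a toy illustration of why higher-level inputs can be rated as quieter, the sketch below applies a static compression curve above a knee-point. The knee-point and compression ratio are arbitrary illustrative values, not the processor's actual AGC settings.

```python
import numpy as np

def agc_output_db(input_db, kneepoint_db=63.0, ratio=12.0):
    """Illustrative static compression: inputs above the knee-point grow more
    slowly in output level, so intense sounds are perceived as less loud."""
    input_db = np.asarray(input_db, dtype=float)
    above = np.maximum(input_db - kneepoint_db, 0.0)
    return np.where(input_db <= kneepoint_db, input_db, kneepoint_db + above / ratio)

print(agc_output_db([50, 63, 80, 100]))   # e.g., a 100 dB input maps to roughly 66 dB
```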

FIG. 3
Mean loudness ratings are displayed for the NH listeners (black bars) as well as the study participants for the baseline clinical program (light gray bars) and experimental self-fit program (darker gray bars).

Satisfaction

Figure 4a displays overall satisfaction, satisfaction for listening to low-level or soft sounds, and satisfaction for listening in noise with the baseline and self-fit programs. These differences were not significant (p>0.05). Figure 4b displays the participants' ratings of how easy it was to create their program and how favorably (or unfavorably) they viewed the process of setting stimulation levels. Statistical analysis revealed that participants thought that the creation of the self-fit program was easier (M=8.05, SD=2.55) than a traditional programming visit in the clinic (M=5.92, SD=2.98). These differences were significant (p<0.05, t=2.12). Participants viewed the process of creating their listening program (M=8.69, SD=2.36) more favorably than the standard clinical visit (M=6.58, SD=3.06), a difference that was also significant (p<0.05, t=2.11). Figure 5 shows the degree to which participants expressed a preference for the baseline or self-fit program, with the majority expressing a preference for the self-fit program. One participant was not asked his preference but continues to use the self-fit program, reporting that he heard better on the telephone.

FIG. 4
Satisfaction. A: Mean listening satisfaction is displayed for overall listening, sounds perceived as “soft” as well as speech in noise for the baseline clinical (black bars) and experimental self-fit (gray bars) programs, respectively. ...
FIG. 5
Rank-ordered degree of preference for the baseline or self-fit program.

DISCUSSION

In this study we allowed experienced CI recipients to set their own stimulation levels without clinical guidance and compared speech understanding and subjective ratings of this program to those of their baseline program. Though researchers have investigated self-fitting of hearing aids for some time, self-fitting tools for individuals with CIs remain a relatively novel idea. A literature review found no other studies that assessed recipient-set stimulation levels and their impact on performance outcome measures. The only other non-traditional method of CI fitting identified in the literature was the "Fitting to Outcomes eXpert" (FOX), employed by The Eargroup (Antwerp, Belgium) and evaluated by Battmer et al. (29). FOX was designed to optimize and automate CI programming. It uses the results from specific outcome measures to modify recipient programs, and its recommendations for optimizing a recipient's program were based on an analysis of clinical programs and outcomes from over 600 CI users at the Eargroup's center (30).

Unlike the Eargroup approach, in our study the audiologist's recommended program parameters were kept intact and recipient stimulation levels were set based on behavioral responses to speech stimuli presented in quiet and in noise. While FOX was designed to be a more systematic and outcome-driven approach, here we evaluated a more patient-directed approach similar to how many programming audiologists fit their patients. We also compared the self-fit program to the baseline program to determine whether allowing experienced patients to set stimulation levels would be detrimental to success. While our approach and the FOX system were both able to reduce the time spent fitting patients, theoretically allowing the audiologist to focus on higher-order problems, it is not known whether the Eargroup's FOX was detrimental to patient performance because the authors did not directly compare programs created with FOX to programs created with patient behavioral responses (30).

Battmer et al. (29) investigated speech understanding and fitting times in a group of patients activated with FOX versus a standard clinical activation. Unfortunately, the groups were poorly matched for duration of deafness, so a direct comparison of speech outcomes could not be completed. Nevertheless, they reported that FOX reduced the time spent fitting during the patient's first 2 weeks of use. They also found that use of FOX reduced the variability in fitting times between subjects, between centers, or both.

Here, we examined whether allowing CI recipients to set their own stimulation levels would affect patient performance. On average, performance was not significantly different from that obtained with the baseline clinical program. Interestingly, even with limited data to support the superiority of one program over the other, the majority of the participants expressed a preference for the self-fit program, and all but one participant opted to keep the self-fit program on his or her processor at the completion of this study. This observation might be explained by the participants' preference for the process of creating the self-fit program or simply by their playing a more active role in the programming process. From the comments we received, most participants wanted to keep the self-fit program because they made it. When participants were asked about the process of creating their own listening program, they rated the process more favorably than the standard clinical visit and thought that creating the self-fit program was easier than the traditional fitting methodology, which varies not only across programming centers but within centers as well.

CONCLUSION

Basic programming of CI systems constitutes a significant portion of total clinical effort. Here, we have shown that 1) CI recipients are consistent in setting upper stimulation levels to a comfortable level; 2) allowing the user to set stimulation levels is not detrimental to outcomes; and 3) allowing CI recipients to take a more active role in the process could be a means of creating efficiencies in the clinic. The average time to complete the workflow was 10 minutes, 10 seconds (M=9:22; range, 4:46–24:40). In our evaluation of performance, sound quality, and satisfaction with self-fit programs, we found no adverse changes in performance with the self-fit program and process. Despite very limited differences between the baseline and self-fit programs, all but one participant has kept at least one of the self-fit programs, and all participants viewed the process positively. It is important to revisit the fact that these data were collected from established users; this process would be most advisable for the experienced user, one not requiring counseling and education on device use. Additional work is needed to investigate self-fitting for less experienced users and with a larger sample across multiple centers. Nevertheless, the results of this study suggest that self-fitting programs for CI users may hold a place in clinical practice to help increase efficiency and patient satisfaction, allowing CI recipients to play a more active role in the ongoing audiologic management of the implanted device.

Supplementary Material


Acknowledgments

The authors would like to acknowledge Linsey Sunderhaus, Au.D., for her assistance with data collection.

Study data were collected and managed using REDCap (Research Electronic Data Capture) data management tools. The use of REDCap software is made possible by support from the Vanderbilt Institute for Clinical and Translational Research grant (UL1 TR000445 from NCATS/NIH).

Footnotes

At the time of manuscript submission, René H. Gifford, Ph.D., was a member of the audiology advisory board for Advanced Bionics, Cochlear Americas, and MED-EL. These data were presented at the CI 2014 Pediatric Symposium on Cochlear Implants.

The authors disclose no other conflicts of interest.

Supplemental digital content is available in the text.

References

1. Laaman S, Rutledge J. Cochlear. Morgan Stanley Research; November 28, 2012.
2. Morgan Stanley Research. Cochlear: Asia Insight: Clinic Capacity to Constrain Mid-term US Growth. 2015. Available at: http://bg.panlv.net/file2/2012/02/16/0f7ed09da5f991a3.pdf. Accessed December 10, 2015.
3. Amoodi HA, Mick PT, Shipp DB, et al. Results with cochlear implantation in adults with speech recognition scores exceeding current criteria. Otol Neurotol. 2012;33:6–12. [PubMed]
4. Gifford RH, Dorman MF, Shallop JK, Sydlowski SA. Evidence for the expansion of adult cochlear implant candidacy. Ear Hear. 2010;31:186–94. [PMC free article] [PubMed]
5. Gifford RH, Driscoll CL, Davis TJ, Fiebig P, Micco A, Dorman MF. A within-subject comparison of bimodal hearing, bilateral cochlear implantation, and bilateral cochlear implantation with bilateral hearing preservation: high-performing patients. Otol Neurotol. 2015;36:1331–7. [PMC free article] [PubMed]
6. Cadieux JH, Firszt JB, Reeder RM. Cochlear implantation in nontraditional candidates: preliminary results in adolescents with asymmetric hearing loss. Otol Neurotol. 2013;34:408–15. [PMC free article] [PubMed]
7. Carlson ML, Sladen DP, Haynes DS, et al. Evidence for the expansion of pediatric cochlear implant candidacy. Otol Neurotol. 2015;36:43–50. [PubMed]
8. Roland JT, Waltzman SB. Expanded pediatric cochlear implant candidacy. Otolaryngol Head Neck Surg. 2015;152:592–3. [PubMed]
9. Arndt S, Aschendorff A, Laszig R, et al. Comparison of pseudo-binaural hearing to real binaural hearing rehabilitation after cochlear implantation in patients with unilateral deafness and tinnitus. Otol Neurotol. 2011;32:39–47. [PubMed]
10. Arts R, George E, Stokroos R, Vermeire K. Review: cochlear implants as a treatment of tinnitus in single-sided deafness. Curr Opin Otolaryngol Head Neck Surg. 2012;20:398–403. [PubMed]
11. Hassepass F, Aschendorff A, Wesarg T, et al. Unilateral deafness in children: audiologic and subjective assessment of hearing ability after cochlear implantation. Otol Neurotol. 2013;34:53–60. [PubMed]
12. Keidser G, Dillon H. What’s new in prescriptive fittings down under? In: Palmer C, Seewald R, editors. Hearing Care for Adults. Staefa, Switzerland: Phonak AG; 2006. pp. 133–42.
13. Scollie S, Seewald R, Cornelisse L, et al. The desired sensation level multistage input/output algorithm. Trends Amplif. 2005;9:159–97. [PMC free article] [PubMed]
14. Mueller GH, Hornsby BWY, Weber JE. Using trainable hearing aids to examine real-world preferred gain. J Am Acad Audiol. 2008;19:758–73. [PubMed]
15. Hornsby BWY, Mueller GH. User preference and reliability of bilateral hearing aid gain adjustments. J Am Acad Audiol. 2008;19:158–70. [PubMed]
16. Folstein MF, Folstein SE, McHugh PR. Mini-mental state. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12:189–98. [PubMed]
17. Peterson GE, Lehiste I. Revised CNC lists for auditory tests. J Speech Hear Dis. 1962;27:62–70. [PubMed]
18. Spahr AJ, Dorman MF, Litvak LM, et al. Development and validation of the AzBio sentence lists. Ear Hear. 2012;33:112–7. [PMC free article] [PubMed]
19. MSTB: The New Minimum Speech Test Battery for Adult Cochlear Implant Users. Available at: http://auditorypotential.com/MSTB.html. Accessed December 10, 2015.
20. Litvak LM, Spahr AJ, Saoji AA, Fridman GY. Relationship between perception of spectral ripple and speech recognition in cochlear implant and vocoder listeners. J Acoust Soc Am. 2007;122:982–91. [PubMed]
21. Saoji AA, Litvak LM, Spahr AJ, Eddins DA. Spectral modulation detection and vowel and consonant identifications in cochlear implant listeners. J Acoust Soc Am. 2009;126:955–8. [PubMed]
22. Gifford RH, Hedley-Williams A, Spahr AJ. Clinical assessment of spectral modulation detection for cochlear implant recipients: a non-language based measure of performance outcomes. Int J Audiol. 2014;53(3):159–64. [PMC free article] [PubMed]
23. Tyler R, Preece L, Tye-Murray N. The Iowa Phoneme and Sentence Tests. Iowa City: The University of Iowa; 1986.
24. Cox RM, Alexander GC. The abbreviated profile of hearing aid benefit (APHAB) Ear Hear. 1995;16:176–86. [PubMed]
25. Noble W, Jensen NS, Naylor G, Bhullar N, Akeroyd MA. A short form of the speech, spatial and qualities of hearing scale suitable for clinical use: the SSQ12. Int J Audiol. 2013;52:409–12. [PMC free article] [PubMed]
26. Gabrielsson A, Sjögren H. Perceived sound quality of hearing aids. Scand Audiol. 1979;8:159–69. [PubMed]
27. Gabrielsson A, Schenkman BN, Hagerman B. The effects of different frequency responses on sound quality judgments and speech intelligibility. J Speech Hear Res. 1988;31:166–77. [PubMed]
28. Thornton AR, Raffin MJ. Speech-discrimination scores modeled as a binomial variable. J Speech Hear Res. 1978;21:507–18. [PubMed]
29. Battmer R, Borel S, Brendel M, et al. Assessment of “Fitting to Outcomes Expert” FOX™ with new cochlear implant users in a multi-centre study. Cochlear Implant Int. 2015;16:100–9. [PubMed]
30. Govaerts PJ, Vaerenberg B, De Ceulaer G, Daemers K, De Beukelaer C, Schauwers K. Development of a software tool using deterministic logic for the optimization of cochlear implant processor programming. Otol Neurotol. 2010;31:908–18. [PubMed]