Ear Hear. Author manuscript; available in PMC 2010 August 1.
PMCID: PMC2820302
NIHMSID: NIHMS169909

The benefits of hearing aids and closed captioning for television viewing by older adults with hearing loss

Sandra Gordon-Salant, Ph.D.
University of Maryland, College Park, MD; sgordon@hesp.umd.edu

Abstract

Objectives

Although watching television is a common leisure activity of older adults, the ability to understand televised speech may be compromised by age-related hearing loss. Two potential assistive devices for improving television viewing are hearing aids and closed captioning, but the extent to which older adults with hearing loss use and benefit from them is unknown. The primary purpose of this initial investigation was to determine whether older hearing-impaired adults show improvements in understanding televised speech with the use of these two assistive devices (hearing aids and closed captioning) compared to conditions without them. A secondary purpose was to examine the frequency of hearing aid use and closed captioning use among a sample of older hearing aid wearers.

Design

The investigation entailed a randomized, repeated-measures design with 15 older adults (59–82 years) with bilateral sensorineural hearing losses who wore hearing aids. Participants viewed three types of televised programs (news, drama, game show), each edited into lists of speech segments, and provided an identification response after each segment. Each participant was tested in four conditions: baseline (no hearing aids or closed captioning), hearing aids only, closed captioning only, and hearing aids + closed captioning. Pilot testing with young normal-hearing listeners was also conducted to establish list equivalence and stimulus intelligibility with a control group. All testing was conducted in a quiet room to simulate a living room, using a 20-in flat-screen television. Questionnaires were also administered to participants to determine frequency of hearing aid use and closed captioning use while watching television.

Results

A significant effect of viewing condition was observed for all programs. Participants exhibited significantly better speech recognition scores in conditions with closed captioning than in conditions without closed captioning (p<.01). Use of personal hearing aids did not significantly improve recognition of televised speech compared to the unaided condition. The condition effect was similar across the three different programs. Most of the participants (73%) regularly wore their hearing aids while watching television; very few (13%) had ever used closed captioning.

Conclusions

On average, use of closed captioning while watching television dramatically improved speech understanding by a sample of older hearing-impaired adults compared to conditions without closed captioning, including when hearing aids were worn.

INTRODUCTION

Approximately 92% of people between 65 and 74 years, and 95% of people 75 years and older, watch television every day (Hanley, 2002). Adults over 65 years watch more television than younger age groups, with an average of 3.5–5.25 hours of television per day (Hanley, 2002; Mares & Woodard, 2006). They rely on television for national and world news and for entertainment (Goodman, 1990). There are many factors, however, that may limit television viewing by older people. One potential factor is hearing loss, which affects at least one-third of adults over 65 years and one-half of those over 80 years (Moscicki et al., 1985; Cruickshanks et al., 1998; Desai et al., 2001). Age-related hearing loss not only attenuates sound, but also affects the clarity with which a spoken message is received. This limitation may be mitigated somewhat by the availability of speechreading cues, which are highly beneficial for speech understanding when combined with auditory information (Grant et al., 1998). Although recent reports suggest that older individuals do not lipread visual-only signals as well as younger adults (Tye-Murray et al., 2007), older people appear to derive significant benefit from the integration of auditory and visual cues (Walden et al., 1993).

Accurate perception of the televised message also might be compromised in older adults because of a decline in the ability to process auditory signals presented at a rapid rate (Gordon-Salant and Fitzgibbons, 1993; Fitzgibbons et al., 2007). This age-related decline in processing rapid signals has been attributed to a cognitive decline in speed of information processing (Salthouse, 1996) as well as to a more central deficit in neural synchrony for coding of rapid signal onsets, as suggested by animal models of aging (e.g., Hellstrom and Schmiedt, 1990; Boettcher et al., 1996). Such age-related deficits for processing rapid speech have been observed in older listeners with normal hearing (Gordon-Salant and Fitzgibbons, 1993). Older people with hearing loss have considerably more difficulty accurately perceiving rapid speech than either younger listeners with hearing loss or older listeners with normal hearing, because of the combined effects of hearing loss and decline in speed of signal processing (Gordon-Salant and Fitzgibbons, 1993). It is also noteworthy that older listeners' difficulty in recognizing fast speech is exacerbated when limited semantic or syntactic contextual cues are available (Wingfield et al., 1985). Some television broadcasts present speech at a faster-than-normal rate by applying time-compression technology either to commercials (MacLachlan and Siegel, 1980) or to the televised program itself (Uglova and Shevchenko, 2005), thereby compounding the difficulty that older viewers with hearing loss already have in receiving the message.

Other factors that may act to reduce the older person's ability to understand the televised message are degradations in the listening/viewing environment and reduced inhibitory mechanisms in older people. Older listeners generally have more difficulty understanding speech when it is presented in a background of noise (e.g., Dubno et al. 1984; Stuart and Phillips, 1996) or in a reverberant environment (Nabelek and Robinson, 1982), either of which may be present during television viewing. Part of the problem may be related to reduced audibility of speech information, as predicted by the Speech Intelligibility Index (ANSI, 1997). Evidence also suggests that as people age, there is a reduction in the ability to inhibit irrelevant information (Hasher and Zacks, 1988). This attribute may partially explain older listeners' difficulty understanding speech in noisy environments. It also suggests that for television viewing, interruptions in the program (such as with advertisements, changes in the scene, etc.) may distract older people, thereby diminishing their ability to follow the primary content of the program message.

A variety of assistive devices are available to improve understanding of television for older people with hearing impairment. The principal assistive listening device is a hearing aid. Although hearing aids are beneficial for improving signal audibility and speech understanding in quiet, one-on-one communication, approximately 25% of older hearing aid users do not report satisfaction with their hearing aids for television viewing (Kochkin, 2005). Empirical assessment of older hearing aid users' accuracy in understanding the spoken message of actual television programs has not been reported to date. A second assistive technology that is potentially beneficial for understanding televised speech is closed captioning. Captions are written text strings that appear on the television screen and completely or closely mimic the audio content of a television program. Since 1993, all televisions with screens 13 in or larger have been equipped with closed-captioning decoders, as mandated by the Television Decoder Circuitry Act of 1990. As a result, essentially all televisions now incorporate closed-captioning decoders that can be activated by the user, and all of the major broadcasting networks make 100% of their televised programming available with closed captions. The extent to which older people with age-related hearing loss use and benefit from closed captioning is virtually unknown. At least one older demographic survey reported that a relatively small proportion of people who used closed captioning were over 45 years of age (Jensema, 1987).

The purpose of the current study was to evaluate the benefit of hearing aids and closed captioning for television viewing by a sample of older adults with hearing loss who use hearing aids. A secondary objective was to determine the frequency of use of hearing aids and closed captions for television viewing by this same sample. The principal hypothesis was that recognition scores for televised speech would be better with use of either hearing aids or closed captioning compared to a baseline condition (no assistive devices), and that use of both assistive devices would produce better scores than those obtained with either one alone. It was also predicted that recognition scores would vary with programming type (game show, news, drama), with game shows producing better scores than news or drama because of the availability of numerous non-verbal, visual cues that convey the intended message. A final hypothesis was that few older hearing-impaired individuals with hearing aids use closed captioning for viewing television.

METHODS

Participants and Recruitment

Participants were included in the study if they had a bilateral, sensorineural hearing loss, currently used binaural hearing aids, and had worn their hearing aids for at least two months, to increase the likelihood that hearing aid benefit and acclimatization had occurred (Cox, Alexander, Taylor, & Gray, 1996). There were no restrictions on degree of hearing loss, because the goal of this initial study was to sample performance of a random clinical population of older adults who wore hearing aids. Participants were also required to be older adults, 60 years of age or older. This age group was chosen because reports have shown that hearing loss begins to affect both males and females at the higher frequencies around the age of 60 (Pearson et al., 1995). Additionally, participants were required to be native speakers of American English, high school graduates (or higher levels of education), and have vision corrected to 20/20 by contact lenses or glasses.

A total of 22 adults who wore binaural hearing aids were recruited from Audiology and/or Hearing Aid Centers in the Birmingham, AL and Atlanta, GA areas. Although all 22 participants were tested for the study, recognition data from only 15 are reported here. The data from 6 adults were excluded because these individuals had a significant conductive component in their hearing loss, and the data of an additional participant were excluded because this individual was considerably younger than the others. The final sample of 15 older adults was aged 59–82 years (mean age = 74.53, SD = 7.33; 9 males, 6 females). As noted above, each participant was a native speaker of English who had a bilateral, sensorineural hearing loss and was a current user of binaural hearing aids. Participants reported that their vision was corrected to 20/20 by contact lenses or glasses. To ensure that vision was adequate for seeing the closed captions, participants were required to view, read, and write four practice sentences presented on the television with 75% accuracy, following procedures recommended by Jensema (1998). Hearing sensitivity of the final study group (n = 15) ranged from mild to profound. Individual four-frequency (0.5, 1, 2, and 4 kHz) pure-tone averages in each ear and aided monosyllabic speech recognition scores (Northwestern University Test No. 6, Tillman & Carhart, 1966) are shown in Table 1. The average thresholds across frequency showed a moderate-to-severe, gradually sloping sensorineural hearing loss typical of presbycusis. The hearing aids worn by this group varied in style, power, and manufacturer, reflecting the range of hearing aids worn by a clinical population. All participants had owned their hearing aids for at least two months prior to the study onset; the duration of each listener's hearing aid use is reported in Table 1.

Table 1
Age, hearing characteristics, and length of hearing aid use for the 15 Participants

Stimuli

Stimuli included 124 sentences or parts of sentences from three different television programs: ABC World News Tonight (news), Jeopardy (game show), and The West Wing (drama). The programs were originally recorded in fall 2005 and winter 2006. Four lists of 10 sentences each were created for each of the three programs (12 lists in all), yielding 120 scoreable sentences. Four additional practice sentences were recorded for screening purposes.

Sentences contained at least four content words (i.e., nouns, verbs, adjectives, adverbs, prepositions) that could be used for scoring. Each sentence was spoken by a single talker, but several different talkers were represented within each sentence list.

Selected sentences were edited from the original video using Adobe Premiere Pro (v1.5) video editing software. Thirty seconds of silence with a blank (black) screen was inserted between sentences to provide time for listeners to record their response on an answer form. Closed captions were added to the sentences using Adobe Premiere Pro's title effects. A different randomization of the sentence lists was created for each participant, such that the order of presentation of the 12 lists (4 lists × 3 programs) was completely randomized over the participants. After digital editing, a master DVD was burned containing all sentence lists.

Pilot testing with 11 young adult listeners with normal hearing was conducted to verify that the final sentence lists for each program type yielded equivalent scores when sentences were presented without closed captioning (audio signal level of 60 dBA) and with closed captioning (sound off). The young adult listeners also had normal vision (with or without correction), as indicated by self-report. There were 50 scoreable words in each final sentence list. Pilot data for the audio-only condition and the closed-captioning-only condition are shown in Figures 1 and 2, respectively. Analyses of variance were conducted separately for each viewing condition on arcsine transformations of the scores for the four lists. Results showed no significant differences between lists (p>.05) for either viewing condition, confirming list equivalence. The figures demonstrate that average scores across the four sentence lists and three program types in this pilot study ranged from 86–98% correct (mean = 92% correct) without closed captioning and 95–100% correct (mean = 98% correct) with closed captioning.
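For readers who wish to reproduce a list-equivalence check of this kind, the sketch below illustrates one way to arcsine-transform proportion-correct scores and test for list differences with an analysis of variance. It is only an illustration under assumed details: the paper does not specify the exact transform variant or software, the scores here are simulated placeholders rather than the pilot data, and a one-way ANOVA treating lists as independent groups is used for simplicity (a repeated-measures model would match the within-listener design more closely).

```python
# Illustrative sketch of a list-equivalence check (not the authors' code).
# Proportion-correct scores are arcsine-transformed, then a one-way ANOVA
# tests for differences among the four sentence lists.
import numpy as np
from scipy import stats

def arcsine_transform(p):
    """Variance-stabilizing transform for proportions: 2 * arcsin(sqrt(p))."""
    return 2.0 * np.arcsin(np.sqrt(p))

# Simulated placeholder scores for 11 listeners on each of 4 lists
# (values chosen only to resemble the ~86-98% correct range reported).
rng = np.random.default_rng(0)
lists = [np.clip(rng.normal(0.92, 0.04, size=11), 0.0, 1.0) for _ in range(4)]

transformed = [arcsine_transform(scores) for scores in lists]
f_stat, p_value = stats.f_oneway(*transformed)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p > .05 suggests equivalent lists
```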

Figure 1
Percent correct scores of young, normal-hearing listeners (n = 11) for four lists of televised speech stimuli, shown separately for each of three program types, in the viewing condition without closed captioning. Error bars represent 1 s.d.
Figure 2
Percent correct scores of young, normal-hearing listeners (n = 11) for four lists of televised speech stimuli, shown separately for each of three program types, in the viewing condition with closed captioning only (sound turned off). Error bars represent 1 s.d.

Procedure

Preliminary Testing

The participants completed a consent form and a questionnaire that included information about television use, closed-captioning use, and television preferences. Each participant then had a complete audiometric evaluation to verify the presence of a sensorineural hearing loss. Speech recognition performance was also assessed for monosyllabic word stimuli (Northwestern University Test No. 6, Tillman & Carhart, 1966) presented at 50 dB HL in the sound field, while the participants wore their hearing aids adjusted to normal use settings. The average aided score for the participants on this standard monosyllabic word test was 56%, with a range of 6–96%.

The functioning of the participants' hearing aids was verified electroacoustically using a Frye Electronics FP 35 hearing aid analyzer. A listening check of each hearing aid was also conducted to ensure proper functioning. All of these procedures took place at the University of Alabama at Birmingham's (UAB) Spain Rehabilitation Center.

Experimental Testing

All testing was performed in a quiet room at the UAB Spain Rehabilitation Center. During testing, participants were seated in a comfortable chair 80 inches from a Sylvania 20-in flat-screen color television (Model 6420FF) and viewed segments from three types of television programming in four viewing conditions. A DVD player was used to present the video segments and practice segments. The speech signal was presented through the speakers of the television at an average conversational level (60 dBA), which was calibrated daily (Extech sound level meter, Model 407740). Background noise was also measured and never exceeded 40 dBA (typical of quiet rooms) for any participant.

Following the initial screening procedure (described above), the test stimuli were presented in each of four conditions, and participants wrote down each sentence they perceived. The interstimulus interval was 30 sec, during which a blank (black) screen was visible on the television. The four conditions were: (1) Baseline (BSLN), stimuli were presented without closed captioning and the participants did not wear their hearing aids; (2) Hearing aids (HA), participants wore their own hearing aids adjusted to everyday settings; (3) Closed captioning (CC), closed captioning was turned on but participants were unaided; and (4) Hearing aids + Closed captioning (HA+CC), closed captioning was turned on and the participants wore their own hearing aids. The order of the viewing conditions was randomized across listeners, as was the assignment of sentence list to condition. Thus, each listener received a unique assignment of sentence list to condition for each of the three programs. Examination of the lists used for each condition confirmed that the distribution was equal across participants. The entire procedure was completed in one session of 2.5–3 hrs. Participants were given frequent breaks as necessary to minimize fatigue.
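To make the counterbalancing concrete, the following is a minimal sketch, under assumed details, of how a unique per-participant schedule could be generated: a random order of the four viewing conditions plus a random assignment of the four lists to conditions within each program. The function and variable names are hypothetical, and this is not the authors' actual randomization procedure.

```python
# Hypothetical sketch of per-participant randomization: condition order and
# assignment of sentence lists (1-4) to conditions within each program.
import random

CONDITIONS = ["BSLN", "HA", "CC", "HA+CC"]
PROGRAMS = ["news", "game show", "drama"]
LISTS_PER_PROGRAM = 4

def make_schedule(participant_seed):
    rng = random.Random(participant_seed)
    condition_order = CONDITIONS[:]
    rng.shuffle(condition_order)              # order in which conditions are run
    list_assignment = {}
    for program in PROGRAMS:
        list_ids = list(range(1, LISTS_PER_PROGRAM + 1))
        rng.shuffle(list_ids)                 # which list is used in each condition
        list_assignment[program] = dict(zip(condition_order, list_ids))
    return condition_order, list_assignment

order, lists = make_schedule(participant_seed=7)
print(order)          # e.g., a shuffled ordering of the four conditions
print(lists["news"])  # e.g., mapping of condition -> list number for the news program
```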

This project was approved by the Institutional Boards for Human Subjects Research at the University of Maryland, College Park and the University of Alabama at Birmingham.

RESULTS

Word recognition scores obtained from the 15 participants for three types of programs and four listening/viewing conditions are shown in Figure 3. The primary hypothesis was that elderly listeners who use hearing aids would demonstrate different speech recognition scores for televised programming across the different listening/viewing conditions. Prior to data analysis, word recognition scores were arcsine transformed. Results of a repeated-measures analysis of variance (ANOVA) revealed significant main effects of program [F(2,28) = 11, p<.01] and condition [F(1.51, 21.13) = 48.24, p<.01], and a significant interaction between program and condition [F(6,84) = 4.16, p<.01]. Post-hoc simple main effects analyses and multiple comparison tests (Bonferroni) indicated that the scores obtained in the HA+CC and CC conditions were significantly higher than scores obtained in the BSLN and HA conditions, for all programs. There were no differences in scores obtained in the HA+CC vs. CC conditions across programs. Similarly, there were no significant differences between the scores measured in the BSLN and HA conditions for any of the three program types. The source of the interaction effect appeared to be the magnitude of the difference in scores measured in the CC vs. HA conditions. For both drama and news programming, these differences were highly significant at the p<.001 level, whereas for the game show, this difference was significant at the p<.05 level.
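For illustration only, the sketch below shows how a two-way (program × condition) repeated-measures ANOVA on arcsine-transformed scores could be run in Python with statsmodels. The data frame here is simulated around the reported condition means, not the study data, and the sphericity correction implied by the fractional degrees of freedom reported for the condition effect is not applied by AnovaRM.

```python
# Sketch of a program x condition repeated-measures ANOVA on arcsine-
# transformed scores (simulated data, not the study's data; AnovaRM does
# not apply a sphericity correction such as Greenhouse-Geisser).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
condition_means = {"BSLN": 0.23, "HA": 0.37, "CC": 0.75, "HA+CC": 0.81}

rows = []
for subject in range(1, 16):                      # 15 participants
    for program in ["news", "game show", "drama"]:
        for condition, mean_p in condition_means.items():
            p = float(np.clip(mean_p + rng.normal(0, 0.10), 0.01, 0.99))
            rows.append({"subject": subject,
                         "program": program,
                         "condition": condition,
                         "score_t": 2 * np.arcsin(np.sqrt(p))})  # arcsine transform

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="score_t", subject="subject",
                 within=["program", "condition"]).fit()
print(result)  # F tests for program, condition, and their interaction
```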

Figure 3
Percent correct scores of older listeners (n = 15) in four viewing conditions, for three program types (top panel: game, middle panel: drama, lower panel: news). Error bars represent 1 s.d.

A secondary hypothesis was that different program types might produce significantly different word recognition scores. Simple main effects analyses indicated no significant differences between word recognition scores for the different types of programming in each listening/viewing condition [BSLN: F(2,42) = .78, p>.05; HA: F(2,42) = 1.45, p>.05; CC: F(2,42) = .01, p>.05; HA+CC: F(2,42) = .01, p>.05].

The possibility that word recognition scores might be correlated with the amount of time spent watching television was investigated. Bivariate correlation analyses of the amount of television viewed and performance in each condition showed that none of the correlations was significant (p>.05).

Participants were asked to rate how frequently they used their hearing aids and closed captions during everyday television viewing. It was anticipated that these variables might correlate with performance, but the data were too skewed to permit meaningful analyses. Figures 4 and 5 show the frequency of hearing aid and closed caption use, respectively, among the 15 participants. Approximately 73% of the participants reported “always” or “usually” wearing their hearing aids while watching television. However, only 13% reported “always” using the closed captioning when watching television (none reported using the captions “usually” while viewing television). A surprising 87% of the participants reported “never” using the captioning when watching television.

Figure 4
Frequency of participants' (n = 15) self-reported hearing aid use while watching television.
Figure 5
Frequency of participants' (n = 15) self-reported use of closed captioning while watching television.

DISCUSSION

The performance of the elderly hearing-impaired listeners in the baseline condition was quite poor, with an overall average score of 23% correct across the three program types. This level of performance indicates that without any assistive device, this sample of elderly people with hearing impairment understood a very small proportion of a spoken televised message. A striking finding was that the performance of the older adults did not improve significantly with the use of hearing aids compared to performance in the baseline condition, despite the fact that the listeners had audible speech information and speechreading cues. Studies of listener performance with hearing aids usually report that listeners derive significant benefit from amplification for understanding speech presented alone and even greater benefit when amplification is combined with visual speechreading cues (Walden et al., 2001). The lack of congruence between the current findings and those reported previously is likely due to numerous differences in the speech stimuli across studies. For example, previous studies typically have used a single talker who faced the camera with good lighting and pronounced the words in a clear and deliberate style (known as recitation style). Speechreading cues appear to be consistent and readily available in these types of video recordings. The televised stimulus materials of the current study, however, likely did not provide consistent speechreading cues because the talker did not always face the camera while speaking, which may have contributed to the inability of participants to integrate auditory and speechreading cues in the HA condition. The talker also varied from sentence to sentence in the present study, and the male and female talkers represented may have differed in jaw movements, speech rates, and/or dialects. Older adults have difficulty adapting to changes in the talker from one stimulus to the next, reflecting age-related declines in perceptual normalization (Sommers, 1997). Finally, the overall rate of speech in the televised programs was generally faster and more variable than that which characterizes recitation-style speech (Wingfield and Tun, 2001; Uglova and Shevchenko, 2005). As noted earlier, older adults have considerable difficulty in accurately understanding a spoken message presented at a rapid rate (Gordon-Salant and Fitzgibbons, 1993). The participants' scores in the HA condition while watching television (mean score = 37% correct) were lower than their aided word recognition scores in quiet, as measured during the audiological assessment (mean score = 56% correct). Considering that visual cues were available in the televised materials, which should have aided performance, this observation suggests that televised speech is quite degraded for older listeners compared to standardized speech materials presented in audiometric evaluations. Clearly, hearing aids alone cannot compensate for the resulting deficit. It is noteworthy that the young adult listeners with normal hearing in the pilot study obtained excellent word recognition scores for these same televised speech materials presented without closed captioning. Thus, while televised speech is easily understood by younger adults, it is not well recognized by older adults with hearing impairment, even when they use hearing aids.

One possible explanation for the minimal benefit of amplification in this sample is that participants with mild hearing losses or profound hearing losses might not have exhibited performance increments in the HA condition over the BSLN condition, because they either heard the baseline signals well enough without amplification (in the case of mild hearing losses) or could not hear the amplified signals (in the case of profound hearing losses). To examine this possibility, the individual participant data were redrawn for one representative program, news, because news is the type of program watched most frequently by older people (Mares & Woodard, 2006). Figure 6 presents these individual data, together with the mean data. It is apparent that there is considerable variability in performance across all of the listening conditions, particularly in the BSLN and HA conditions. The individuals with severe-to-profound hearing losses were Participants 1 and 7. Both of these individuals showed very poor recognition scores in the baseline and hearing aid conditions, but other participants who exhibited this trend (Participants 9, 19, and 20) had moderate hearing losses. The individual with the mildest hearing loss, Participant 21, did not exhibit ceiling performance in the baseline condition. Thus, examination of the individual data suggests that, at least for this small sample, the wide range of hearing losses does not appear to be a consistent explanation for the lack of significant improvement in the hearing aid condition over the baseline condition.

Figure 6
Individual percent correct scores and the group mean scores in the four viewing conditions for the news program.

The availability of closed captioning provided significant benefit to the elderly hearing-impaired listeners. The mean percent-correct score across the three program types was 75% with closed captioning alone and 81% with closed captioning + hearing aids. Recall that the overall percent-correct scores for the baseline and hearing aid conditions were 23% and 37%, respectively. Although no significant differences in performance were observed between the two closed-captioning conditions, older adults clearly derived considerable advantage for understanding televised speech with closed captioning compared to both the baseline and hearing aid conditions. The visual text signal provided by closed captioning conveys unambiguous information about the spoken message, and is not affected by either the audibility of the speech signal or the availability of clear speechreading cues. One limitation of closed captioning, however, is that it does not convey information cued by the fundamental frequency, such as prosody and emotional content. Nevertheless, closed captioning appears to be a simple, cost-effective, and readily available assistive technology that can improve accurate recognition of televised speech by older adults with hearing loss.

Given the consistency of the closed caption signal and its clear representation of the stimulus, it is curious that performance with closed captioning was not 100% for the older adults. The mean score for the young normal-hearing listeners while viewing the sentence lists with closed captioning and the sound turned off was 98% correct (range 95–100%), verifying that these closed-captioned stimuli are highly intelligible for young people without hearing loss. The older listeners' suboptimal scores with closed captioning may be associated with cognitive changes in aging that could affect performance on this task. For example, the speed with which some of the sentences appeared on the screen may have been too fast for older viewers to process the information. Typical captioning speed is 141 wpm, which is a comfortable caption speed for most young adults (Jensema et al., 1996). Studies have shown that older readers process text, particularly complex text, at a slower rate than younger readers (Smiler et al., 2003; Kemper and Liu, 2007). Additionally, aging is thought to be accompanied by a reduced ability to inhibit irrelevant information (Hasher and Zacks, 1988; McDowd and Birren, 1990; Kemper and McDowd, 2006), resulting in greater difficulty for older adults in focusing on relevant information when irrelevant information is present. While the closed captioning was consistent and unambiguous, the televised picture contained varied and continuously changing visual images, which may have made it more difficult for the older adults to parse relevant information from irrelevant information. Finally, the presence of three inputs (auditory, visual-speechreading, and visual-closed captioning) may have placed a heavy cognitive load on older adults such that these inputs could not be adequately integrated. Despite these possible limitations, closed captioning made a substantial difference in the viewing experience for older adults with hearing loss.

Demographic data from this limited sample of participants showed that most of these hearing aid users wore their hearing aids for television viewing on a daily basis. This is somewhat surprising, given that understanding of the televised message was rather poor among this group, even with amplification. Even more revealing is the finding that 87% of the participants reported never using closed captions. This observation underscores the original premise for this study: that older people with hearing loss rarely use closed captioning, despite its obvious benefit for understanding television. The limited use of closed captioning by this older hearing-impaired sample may reflect a preconceived notion that this technology is primarily intended for “deaf” people.

Some limitations of the current research include the sample size, the auditory characteristics of the participants, the hearing aid fittings, and the type of speech material. The sample size was small, with complete data reported for only 15 participants. The participants were quite heterogeneous in their degree and configuration of hearing loss, as well as in the type of hearing aids they wore, which reflects the varied characteristics of the older, hearing-impaired population. Thus, it is difficult to generalize the current findings to all older listeners with hearing loss who use hearing aids. The hearing aids worn by this sample were not necessarily an ideal fit for each participant; rather, the goal was to evaluate a random sample of hearing aid users who wore their hearing aids on a regular basis, as they were worn in everyday life. Although these initial findings are quite robust, future investigations should assess the effects of hearing aids and closed captioning for older people with different degrees of hearing loss, different types of hearing aid fittings, and verified, well-fit hearing aid configurations. Another useful area of future research is the benefit obtained with other assistive devices for television viewing, such as infrared systems, in comparison to hearing aids and closed captions. New stimuli, derived from actual television programs, were created for this experiment in an effort to achieve face validity. However, the sentences were presented in isolation and out of context in order to obtain immediate identification responses that would minimize memory demands. In real life, people derive meaning from a television program's context over a period of time (about 10 min). Thus, while the stimuli themselves were commonplace, the task was somewhat novel.

CONCLUSIONS

The results of this initial investigation showed that hearing aid use, on average, did not provide significant improvement in understanding televised speech materials compared to a baseline (unaided) condition for a small sample of older adults with hearing loss. Closed captioning, however, resulted in large and significant improvements in word recognition by older adults with varying degrees of hearing impairment. Most of the older adults indicated that they had never used closed captioning technology, despite its potential to improve understanding of television dramatically for older adults. Because the aging population is growing, and the prevalence of age-related hearing loss is high, primary care physicians, geriatricians, and audiologists need to be aware of simple assistive tools that could enhance their patients' quality of life. Closed captioning appears to be an excellent option for a low-cost, high-quality assistive tool for older adults to improve their understanding of television, which is a common leisure activity enjoyed by this population.

SHORT SUMMARY

The effects of closed captioning (CC) and hearing aid (HA) use on recognition of televised speech were examined in older adults who wore hearing aids. Speech segments were recorded from three program types (game show, drama, news). There were four viewing conditions: baseline (no HAs or CC), HAs only, CC only, and HAs + CC. Participants consistently achieved higher recognition scores in the two CC conditions than in the baseline and HAs-only conditions. Very few participants reported using CC while watching television. Closed captioning has the potential to improve television viewing considerably for older adults with hearing loss.

ACKNOWLEDGMENTS

The authors wish to acknowledge Joni Talton and Victor Mark at the University of Alabama at Birmingham for their assistance in enabling data collection at the Spain Rehabilitation Center, Frye Electronics for the loan of the FP 35 hearing aid analyzer, and Josh Walsh for his support in video editing the stimuli. This research was supported, in part, by a grant from the National Institute on Aging.

Research supported in part by a research grant from the National Institute on Aging (#R37AG09191)

REFERENCES

  • American National Standards Institute. Methods for the calculation of the speech intelligibility index (ANSI S3.5-1997, R2007). New York: American National Standards Institute; 1997.
  • Boettcher FA, Mills JH, Swerdloff JL, et al. Auditory evoked potentials in aged gerbils: responses elicited by noises separated by a silent gap. Hear Res. 1996;102:167–178. [PubMed]
  • Cox RM, Alexander GC, Taylor IM, et al. Benefit acclimatization in elderly hearing aid users. J Am Acad Audiol. 1996;7:428–441. [PubMed]
  • Cruickshanks KL, Wiley TL, Tweed TS, et al. Prevalence of hearing loss in older adults in Beaver Dam, Wisconsin: The epidemiology of hearing loss study. Am J Epidemiol. 1998;148:879–886. [PubMed]
  • Desai M, Pratt LA, Lentzner H, et al. Trends in Vision and Hearing Among Older Americans. National Center for Health Statistics; Hyattsville, Maryland: 2001. Aging Trends; No. 2. [PubMed]
  • Dubno JR, Dirks DD, Morgan DE. Effects of age and mild hearing loss on speech recognition in noise. J Acoust Soc Am. 1984;76:87–96. [PubMed]
  • Fitzgibbons PJ, Gordon-Salant S, Barrett J. Age-related differences in discrimination of an interval separating onsets of successive tone bursts as a function of interval duration. J Acoust Soc Am. 2007;122:458–466. [PubMed]
  • Goodman RI. Television news viewing by older adults. Journalism Quarterly. 1990;67:137–141.
  • Gordon-Salant S, Fitzgibbons P. Temporal factors and speech recognition performance in young and elderly listeners. J Speech Hear Res. 1993;36:1276–1286. [PubMed]
  • Grant KW, Walden BE, Seitz PF. Auditory-visual speech recognition by hearing-impaired subjects: Consonant recognition, sentence recognition, and auditory-visual integration. J Acoust Soc Am. 1998;103:2677–2690. [PubMed]
  • Hanley P. The Numbers Game: Older People and the Media. London Independent Television Commission; 2002.
  • Hasher L, Zacks RT. Working memory, comprehension, and aging: A review and a new view. In: Bower GH, editor. The Psychology of Learning and Motivation. Vol. 22. Academic Press; San Diego, CA: 1988. pp. 193–225.
  • Hellstrom LL, Schmiedt RA. Compound action potential input/output functions in young and quiet-aged gerbils. Hear Res. 1990;50:163–174. [PubMed]
  • Jensema C. A demographic profile of the closed-caption television audience. Am Ann Deaf. 1987;132:389–392. [PubMed]
  • Jensema C. Viewer reaction to different television captioning speeds. Am Ann Deaf. 1998;143:318–324. [PubMed]
  • Jensema C, McCann R, Ramsey S. Closed-caption television presentation speed and vocabulary. Am Ann Deaf. 1996;141:284–292. [PubMed]
  • Kemper S, Liu C. Eye movements of young and older adults during reading. Psychol Aging. 2007;22:84–93. [PMC free article] [PubMed]
  • Kemper S, McDowd J. Eye movements of young and older adults while reading with distraction. Psychol Aging. 2006;21:32–39. [PMC free article] [PubMed]
  • Kochkin S. MarkeTrak VII: Customer satisfaction with hearing instruments in the digital age. Hearing J. 2005;58:30–43.
  • MacLachlan J, Siegel MH. Reducing the costs of TV commercials by use of time compressions. J Marketing Res. 1980;17:52–57.
  • Mares M-L, Woodard EH. In search of the older audience: Adult age differences in television viewing. J Broadcast Elect Media. 2006;50:595–614.
  • McDowd J, Birren JE. Aging and attentional processes. In: Birren JE, Schaie KW, editors. Handbook of the Psychology of Aging. 3rd ed. Academic Press; San Diego, CA: 1990. pp. 222–233.
  • Moscicki EK, Elkins EF, Baum HM, et al. Hearing loss in the elderly: An epidemiologic study of the Framingham Heart Study cohort. Ear Hear. 1985;6:184–190. [PubMed]
  • Nabelek AK, Robinson PK. Monaural and binaural speech perception in reverberation for listeners of various ages. J Acoust Soc Am. 1982;71:1242–1248. [PubMed]
  • Pearson JD, Morrell CH, Gordon-Salant S, Brant LJ, Metter EJ, Klein LL, Fozard JL. Gender differences in a longitudinal study of age-associated hearing loss. J Acoust Soc Am. 1995;97:1196–1205. [PubMed]
  • Salthouse TA. The processing-speed theory of adult age differences in cognition. Psych Rev. 1996;103:403–428. [PubMed]
  • Smiler AP, Gagne DD, Stine-Morrow EAL. Aging, memory load, and resource allocation during reading. Psychol Aging. 2003;18:203–209. [PubMed]
  • Sommers MS. Stimulus variability and spoken word recognition II. The effects of age and hearing impairment. J Acoust Soc Am. 1997;104:2278–2288. [PubMed]
  • Stuart A, Phillips DP. Word recognition in continuous and interrupted broadband noise by young normal-hearing, older normal-hearing, and presbyacusic listeners. Ear Hear. 1996;17:478–489. [PubMed]
  • Tillman TW, Carhart RC. Northwestern University Auditory Test No. 6 (SAM-TR-66-55) USAF School of Aerospace Medicine; 1966. An expanded test for speech discrimination utilizing CNC monosyllabic words. [PubMed]
  • Tye-Murray N, Sommers MS, Spehar B. The effects of age and gender on lipreading abilities. J Am Acad Audiol. 2007;18:883–892. [PubMed]
  • Uglova N, Shevchenko T. Not so fast please: Temporal features in TV speech. Paper presented at the meeting of the Acoustical Society of America; 2005.
  • Walden BE, Busacco DA, Montgomery AA. Benefit from visual cues in auditory-visual speech recognition by middle-aged and elderly persons. J Speech Hear Res. 1993;36:431–436. [PubMed]
  • Walden BE, Grant KW, Cord MT. Effects of amplification and speechreading on consonant recognition by persons with impaired hearing. Ear Hear. 2001;22:333–341. [PubMed]
  • Wingfield A, Poon LW, Lombardi L, et al. Speed of processing in normal aging: effects of speech rate, linguistic structure, and processing time. J Gerontol. 1985;40:579–585. [PubMed]
  • Wingfield A, Tun PA. Spoken language comprehension in older adults: Interactions between sensory and cognitive change in normal aging. Seminars in Hearing. 2001;22:287–301.