Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing, in press.
This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.
For many people with hearing loss, visual cues from lip-reading are a valuable resource that has been shown to improve speech recognition across a variety of listening conditions (Sumby & Pollack, 1954; Erber, 1975; Grant et al., 1998). To date, however, it has remained unclear how visual cues, background noise and hearing aid use interact with one another to affect listening effort.
Listening effort is often described as the allocation of additional cognitive resources to the task of understanding speech. If cognitive resources are finite, then two or more simultaneous tasks must compete for them, and a decrement in performance on one task can be interpreted as an allocation of resources away from that task and toward a concurrent one. Therefore, listening effort is often measured with dual-task paradigms, in which listeners respond to speech stimuli while simultaneously performing another task or responding to another kind of stimulus. Allocation of cognitive resources in this way is thought to reflect competition for working memory resources (Baddeley & Hitch, 1974; Baddeley, 2000).
The Ease of Language Understanding (ELU) model states that understanding language involves matching phonological, syntactic, semantic and prosodic information to stored templates in long-term memory. When there is a mismatch between the incoming sensory information and the stored template, additional effort must be exerted to resolve the ambiguity of the message. This additional listening effort taxes working memory and may force the listener to allocate fewer resources to other tasks. Several studies have identified conditions that degrade the speech signal in ways that increase listening effort, including background noise (Murphy et al., 2000; Larsby et al., 2005; Zekveld et al., 2010) and hearing loss (Rabbitt, 1990; McCoy et al., 2005).
Individuals with reduced working memory capacity may be more negatively affected by conditions that degrade a speech signal. Previous reports suggest that working memory capacity is related to speech recognition in noise and to performance with hearing aids in noise (Lunner, 2003; Foo et al., 2007). The speed of retrieval from long-term memory may also affect performance and listening effort in adverse listening conditions (Van Rooij et al., 1989; Lunner, 2003). Because sensory inputs decay rapidly (Cowan, 1984), listeners with slow processing speed may be unable to fully process incoming information and match it to long-term memory stores before it decays; they would therefore have to allocate more effort and resources to the task of matching sensory input to long-term memory templates.
Just as some listener traits might be expected to increase listening effort, other factors might offset adverse listening conditions by providing additional information to support the matching of incoming sensory inputs to long-term memory templates. The use of visual cues is well known to improve speech recognition performance, and some studies indicate that individuals with large working memory capacities are better able to make use of visual cues from lipreading (Picou et al., 2011). Additionally, listeners who are better lipreaders may require fewer cognitive resources to understand speech, allowing them to make better use of visual cues in noisy environments (Hasher & Zacks, 1979; Picou et al., 2011).
The purpose of Picou, Ricketts and Hornsby’s study was to examine how listening effort is affected by hearing aid use, visual cues and background noise. A secondary goal of the study was to determine how specific listener traits such as verbal processing speed, working memory and lipreading ability would affect the measured changes in listening effort.
Twenty-seven hearing-impaired adults participated in the study. All were experienced hearing aid users and had corrected binocular vision of 20/40 or better. Participants were fitted with bilateral behind-the-ear hearing aids with non-occluding, non-custom eartips. Advanced features such as directionality and noise reduction were turned off, though feedback management was left on in order to maximize usable gain. Hearing aids were programmed with linear settings to eliminate any potential effect of amplitude compression on listening effort, a relationship that has not yet been established.
A dual-task paradigm with a primary speech recognition task and a secondary visual reaction time task was used to measure listening effort. The speech recognition task used monosyllabic words spoken by a female talker (Picou, 2011), presented at 65 dB in the presence of multi-talker babble. Prior to the speech recognition task, individual SNRs for the auditory-only (AO) and auditory-visual (AV) conditions were determined at levels that yielded performance between 50 and 70% correct, because scores in this range are most likely to reveal changes in listening effort (Gatehouse & Gordon, 1990).
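The paper does not detail the adjustment procedure used to find those individualized SNRs, but a simple bracketing routine gives the flavor of it. The Python sketch below is only an illustration under assumed step sizes and stopping rules; bracket_snr, run_block and the simulated listener are hypothetical stand-ins, not the authors' method.

```python
def bracket_snr(run_block, snr_db=5.0, step_db=2.0, max_blocks=10):
    """Adjust SNR until one block of trials yields 50-70% words correct.

    run_block(snr_db) is a stand-in callable that presents one block of
    monosyllabic words in babble and returns the proportion correct.
    """
    for _ in range(max_blocks):
        p_correct = run_block(snr_db)
        if 0.50 <= p_correct <= 0.70:
            return snr_db                    # performance in the target range
        # make the task easier if too hard, harder if too easy
        snr_db += step_db if p_correct < 0.50 else -step_db
    return snr_db                            # best estimate after max_blocks

def simulated_block(snr):
    """Stand-in listener whose performance improves smoothly with SNR."""
    return min(1.0, max(0.0, 0.6 + 0.05 * (snr - 4)))

print(bracket_snr(simulated_block))          # -> 5.0 for this simulated listener
```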
The reaction time task required participants to press a button in response to a rectangular visual probe. The probe was presented before each speech token so that it would not distract from the use of visual cues during the speech recognition task, and the two stimuli fell within a narrow enough interval (less than 500 msec) that cognitive resources had to be expended on both tasks at the same time (Hick & Tharpe, 2002).
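One such trial might be sequenced as in the Python sketch below. This is a schematic under the timing constraints just described, not the authors' experimental software; show_probe, play_word and wait_for_button are stand-in stubs.

```python
import random
import time

def show_probe():
    pass                                     # stub: would draw the rectangular probe

def play_word(word):
    pass                                     # stub: would play the recorded token

def wait_for_button():
    time.sleep(random.uniform(0.3, 0.9))     # stub: simulate a button press
    return time.perf_counter()

def run_trial(word, probe_lead_ms=None):
    """One schematic trial: probe first, speech token within 500 msec."""
    if probe_lead_ms is None:
        probe_lead_ms = random.uniform(0, 500)
    probe_onset = time.perf_counter()
    show_probe()                             # probe appears before the word
    time.sleep(probe_lead_ms / 1000.0)
    play_word(word)                          # speech token follows the probe
    rt_ms = (wait_for_button() - probe_onset) * 1000.0
    return rt_ms                             # longer RTs index greater listening effort

print(run_trial("boat"))
```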
Three listener traits were examined with regard to listening effort in quiet and noisy conditions, with and without visual cues. Visual working memory was evaluated with the Automated Operation Span (AOSPAN) test (Unsworth et al., 2005), which requires subjects to solve math equations while memorizing letters. After seeing a math equation and identifying its answer, subjects are shown a letter, which disappears after 800 msec. Following a series of equations, they are asked to identify the letters they saw, in the order in which they appeared. Scores are based on the number of letters recalled correctly.
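A minimal sketch of that scoring rule, assuming partial credit for each letter recalled in its correct serial position (the published AOSPAN also reports an absolute score crediting only perfectly recalled sets, which is not shown here):

```python
def aospan_partial_score(sets):
    """Sum the letters recalled in their correct serial positions.

    sets: list of (presented_letters, recalled_letters) tuples.
    """
    return sum(
        sum(p == r for p, r in zip(presented, recalled))
        for presented, recalled in sets
    )

# Example: one perfectly recalled set and one with a transposition error.
score = aospan_partial_score([
    (["F", "K", "P"], ["F", "K", "P"]),      # 3 letters correct in position
    (["J", "R"],      ["R", "J"]),           # 0 letters correct in position
])
print(score)                                 # -> 3
```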
Verbal processing speed was assessed with a lexical decision task (LDT) in which subjects were presented with a string of letters and asked to indicate, as quickly as possible, whether the letters formed a real word. The test consisted of 50 common monosyllabic English words and 50 monosyllabic nonwords. The task reflects verbal processing speed because it requires the participant to match the stimuli to representations of familiar words stored in long-term memory (Meyer & Schvaneveldt, 1971; Milberg & Blumstein, 1981; Van Rooij et al., 1989). Reaction time to the stimuli was used as the measure of verbal processing speed.
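Scoring such a task is straightforward. The sketch below, with invented trial records, takes mean reaction time on correct trials as the index of verbal processing speed; restricting to correct trials is a common convention assumed here rather than taken from the paper.

```python
from statistics import mean

def ldt_processing_speed(trials):
    """Mean reaction time (msec) on correct word/nonword decisions.

    trials: list of dicts with keys 'is_word', 'response', 'rt_ms'.
    """
    correct_rts = [t["rt_ms"] for t in trials
                   if t["response"] == t["is_word"]]
    return mean(correct_rts)                 # lower = faster verbal processing

trials = [
    {"is_word": True,  "response": True,  "rt_ms": 512},  # real word, correct
    {"is_word": False, "response": False, "rt_ms": 640},  # nonword, correct
    {"is_word": True,  "response": False, "rt_ms": 701},  # error, excluded
]
print(ldt_processing_speed(trials))          # -> 576.0
```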
Finally, lipreading ability was measured with the Revised Shortened Utley Sentence Lipreading Test (ReSULT; Updike, 1989). The test required participants to repeat sentences spoken by a female talker, when the talker’s face was visible but speech was inaudible. Responses were scored based on the number of words repeated correctly in each sentence.
Subjects participated in two test sessions. At the first session, vision and hearing were tested, individual SNR levels were determined for the speech recognition task, and AOSPAN, LDT and ReSULT scores were obtained. At the second session, subjects completed practice sequences with AO and AV stimuli, then the dual speech recognition and visual reaction time tasks were administered in the eight counterbalanced conditions listed below (see the sketch following the list). Due to the number of experimental conditions, only select outcomes of this study will be reviewed.
1. auditory only in quiet, unaided
2. auditory only in noise, unaided
3. auditory-visual in quiet, unaided
4. auditory-visual in noise, unaided
5. auditory only in quiet, aided
6. auditory only in noise, aided
7. auditory-visual in quiet, aided
8. auditory-visual in noise, aided
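These eight conditions form a 2 x 2 x 2 factorial design (modality x background x amplification). The study's exact counterbalancing scheme is not reported, so the Python sketch below simply generates the full factorial and assumes a fixed random permutation per participant for illustration.

```python
import itertools
import random

factors = {
    "modality":      ["auditory only", "auditory-visual"],
    "background":    ["quiet", "noise"],
    "amplification": ["unaided", "aided"],
}

# Cross the factor levels to enumerate all eight conditions.
conditions = [dict(zip(factors, combo))
              for combo in itertools.product(*factors.values())]

def order_for(participant_id):
    """Return a reproducible per-participant presentation order."""
    order = conditions.copy()
    random.Random(participant_id).shuffle(order)
    return order

print(len(conditions))       # -> 8
print(order_for(1)[0])       # first condition for participant 1
```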
The main analysis showed that background noise impaired speech recognition performance in all conditions, while hearing aid use and visual cues improved it. However, there were significant interactions between hearing aid use and visual cues, between hearing aids and background noise, and between visual cues and background noise: the effect of hearing aid use depended on both the test modality (AV or AO) and the presence of background noise, and the effect of visual cues depended on the presence of background noise. Hearing aid benefit was larger in AO conditions than in AV conditions, and larger in quiet than in noise. The effect of noise was greater in the AV conditions than in the AO conditions, but the authors suggest that this could have been related to the individualized SNRs chosen for the test procedure.
On the reaction time task, background noise increased listening effort and hearing aid use reduced it, though variability was high and both effects were small. Additional analysis determined that the individual SNRs chosen for the dual task did not affect the hearing aid benefits that were measured. The availability of visual cues did not change overall reaction times, indicating that visual cues did not affect listening effort as measured by this reaction time task.
With regard to the listening effort benefits derived from hearing aid use, performance in quiet was strongly related to performance in noise: subjects who obtained benefit from hearing aid use in quiet also obtained benefit in noise. In addition, individuals with slower verbal processing speed were more likely to derive benefit from hearing aid use. With regard to visual cues, there were several correlations with listener traits. Subjects who were better lipreaders derived more benefit from visual cues, and those with smaller working memory capacities also showed more benefit from visual cues; these correlations were significant in both quiet and noisy conditions. In quiet conditions there was also a positive correlation between verbal processing speed and benefit from visual cues, with better verbal processors showing more benefit. The effect of background noise on listening effort was not correlated with any of the measured listener traits.
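These trait analyses amount to correlating each listener trait with a benefit score, such as the unaided-minus-aided reduction in probe reaction time. The sketch below uses invented values purely to show the shape of the computation; a positive r would mean slower verbal processors derive more hearing aid benefit, as the study reported (statistics.correlation requires Python 3.10+).

```python
from statistics import correlation  # Python 3.10+

# Invented illustrative values for five listeners: LDT reaction time
# (higher = slower) and hearing aid benefit (unaided minus aided probe RT).
ldt_rt_ms     = [480, 520, 610, 655, 700]
ha_benefit_ms = [10, 18, 25, 31, 42]

# Pearson r between verbal processing speed and hearing aid benefit.
print(round(correlation(ldt_rt_ms, ha_benefit_ms), 2))  # -> 0.98
```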
The overall findings that visual cues and hearing aid use had positive effects, and background noise a negative effect, on speech recognition performance were not surprising. Similarly, the findings that hearing aid benefit was reduced in AV relative to AO conditions and in noisy relative to quiet conditions were consistent with previous reports (Cox & Alexander, 1991; Walden et al., 2001; Duquesnoy & Plomp, 1983). Because hearing aid use improves audibility, visual cues may not have been needed as much as they were in unaided conditions. Likewise, the presence of noise may have counteracted the improved audibility by masking a portion of the speech cues needed for correct understanding, especially with the omnidirectional, linear instruments used in this study.
The ability of hearing aids to decrease listening effort was significant, in keeping with previously published results, but the improvements were smaller than those reported in some previous studies. This could be related to the non-simultaneous timing of the tasks in the dual-task paradigm, but the authors surmise that it could also be related to the way their subjects' hearing aids were programmed. In most previous studies, individuals used their own hearing aids, set to individually prescribed and modified settings. In the current study, all participants used the same hearing aid circuit set to linear, unmodified targets. Advanced features like directionality and noise reduction, which are likely to affect listening effort (Sarampalis et al., 2009), speech discrimination ability and perceived ease of listening in everyday situations, were turned off.
There was a significant relationship between verbal processing speed and hearing aid benefit, in that subjects with slower processing speed were more likely to benefit from hearing aid use. Sensory input decays rapidly and requires additional cognitive effort when it is mismatched with long-term memory stores. Any factor that improves the sensory input may facilitate the matching process. The authors posited that slow verbal processors might benefit more from amplification because hearing aids improved the quality of the sensory input, thereby reducing the cognitive effort and time that would otherwise be required to match the input to long-term memory templates.
On average, the availability of visual cues did not have a significant effect on listening effort. This may be a surprising result given the well-known positive effects of visual cues on speech recognition. However, there was high variability among subjects, and better lipreaders were clearly better able to make use of visual cues, especially in quiet conditions without hearing aids. Working memory capacity was negatively correlated with benefit from visual cues, indicating that subjects with better working memory capacity derived less benefit from visual cues on average. The relationship between these variables is unclear, but the authors suggest that individuals with lower working memory capacities may be more susceptible to changes in listening effort and therefore more likely to benefit from additional sensory information such as visual cues.
Understanding how individual traits affect listening effort and susceptibility to noise is important to audiologists for a number of reasons, partly because we often work with older individuals. Working memory declines as a result of the normal aging process, beginning as early as middle age (Wang et al., 2011). Similarly, the speed of cognitive processing slows and visual impairment becomes more likely with increasing age (Clay et al., 2009). Many patients seeking audiological care may therefore present with deficits in working memory, verbal processing and visual acuity. Though more research is needed to understand how these variables relate to one another, they should be considered in clinical evaluations and hearing aid fittings. Advanced hearing aid features that counteract the degrading effects of noise and reverberation may be particularly important for elderly or visually impaired hearing aid users. As the reviewed study suggests, these patients may benefit significantly from face-to-face conversation, slow speaking rates and reduced environmental distractions. Counseling sessions should include discussion of these issues so that patients and family members understand how they can use strategic listening techniques, in addition to hearing aids, to improve speech recognition and reduce cognitive effort.
References
Clay, O., Edwards, J., Ross, L., Okonkwo, O., Wadley, V., Roth, D. & Ball, K. (2009). Visual function and cognitive speed of processing mediate age-related decline in memory span and fluid intelligence. Journal of Aging and Health 21, 547-566.
Cox, R.M. & Alexander, G.C. (1991). Hearing aid benefit in everyday environments. Ear and Hearing 12, 127-139.
Downs, D.W. (1982). Effects of hearing aid use on speech discrimination and listening effort. Journal of Speech and Hearing Disorders 47, 189-193.
Duquesnoy, A.J. & Plomp, R. (1983). The effect of a hearing aid on the speech reception threshold of hearing impaired listeners in quiet and in noise. Journal of the Acoustical Society of America 73, 2166-2173.
Erber, N.P. (1975). Auditory-visual perception of speech. Journal of Speech and Hearing Disorders 40, 481-492.
Foo, C., Rudner, M. & Ronnberg, J. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology 18, 618-631.
Gatehouse, S., Naylor, G. & Elberling, C. (2003). Benefits from hearing aids in relation to the interaction between the user and the environment. International Journal of Audiology 42 Suppl 1, S77-S85.
Gatehouse, S. & Gordon, J. (1990). Response times to speech stimuli as measures of benefit from amplification. British Journal of Audiology 24, 63-68.
Grant, K.W., Walden, B.E. & Seitz, P.F. (1998). Auditory-visual speech recognition by hearing-impaired subjects: Consonant recognition, sentence recognition and auditory-visual integration. Journal of the Acoustical Society of America 103, 2677-2690.
Hick, C.B. & Tharpe, A.M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech, Language and Hearing Research 45, 573-584.
Hornsby, B.W.Y. (2013). The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear and Hearing, in press.
Meyer, D.E. & Schvaneveldt, R.W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology 90, 227-234.
Milberg, W. & Blumstein, S.E. (1981). Lexical decision and aphasia: Evidence for semantic processing. Brain and Language 14, 371-385.
Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language and Hearing Research 54, 1416-1430.
Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing, in press.
Rudner, M., Foo, C. & Ronnberg, J. (2009). Cognition and aided speech recognition in noise: Specific role for cognitive factors following nine week experience with adjusted compression settings in hearing aids. Scandinavian Journal of Psychology 50, 405-418.
Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language and Hearing Research 52, 1230-1240.
Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.
Unsworth, N., Heitz, R.P. & Schrock, J.C. (2005). An automated version of the operation span task. Behavior Research Methods 37, 498-505.
Van Rooij, J.C., Plomp, R. & Orlebeke, J.F. (1989). Auditive and cognitive factors in speech perception by elderly listeners. I: Development of test battery. Journal of the Acoustical Society of America 86, 1294-1309.
Walden, B.E., Grant, K.W. & Cord, M.T. (2001). Effects of amplification and speechreading on consonant recognition by persons with impaired hearing. Ear and Hearing 22, 333-341.
Wang, M., Gamo, N., Yang, Y., Jin, L., Wang, X., Laubach, M., Mazer, J., Lee, D. & Arnsten, A. (2011). Neuronal basis of age-related working memory decline. Nature 476, 210-213.