Starkey Evidence Blog


Acclimatizing to hearing aids may not mean what you think it means

Dawes, P., Munro, K., Kalluri, S., & Edwards, B. (2014). Acclimatization to hearing aids. Ear and Hearing, Published Ahead-of-Print.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

New patients frequently report that their hearing aids sound tinny, metallic, loud, or unnatural. The clinical audiologist recognizes that these comments decrease in frequency with time, a process often described as acclimatization: because the patient has adjusted to hearing sound filtered by their hearing loss, the increase in audibility and loudness that amplification introduces is unfamiliar and therefore sounds unnatural.

A smooth transition to hearing aid use can be achieved through counseling prior to fitting, preparing the individual for a period of unnatural sound quality. At the fitting, the instruments can be set below prescribed targets, allowing the listener a more comfortable period of adjustment. Most individuals will accept increased gain, approaching prescribed targets over two or three months. Some patients, however, require a much longer acclimatization period of one to two years (Keidser et al., 2008).

In addition to changes in the preferred gain of new hearing aid users, other improvements attributed to acclimatization have been proposed: speech discrimination over time (Bentler et al., 1993a; Gatehouse, 1992), subjective benefit and sound quality over time (Bentler et al., 1993b; Ovegard et al., 1997), and loudness perception and intensity discrimination over time (Olsen et al., 1999; Philibert et al., 2002). Most of these studies reported small but significant acclimatization effects, while others found no significant differences between new and experienced hearing aid users (Smeds et al., 2006a, 2006b).

Ultimately, there is little agreement on the definition of this effect and even less agreement on the methods used to quantify these changes. A high degree of response variability is usually noted, indicating that several factors (degree, etiology, and configuration of hearing loss) may contribute to the adjustment experienced by new hearing aid users.

Dawes and his colleagues outlined a number of goals for their study:  First, they hoped to determine if there is an acclimatization effect for aided speech recognition with current, nonlinear hearing aids and if there is a difference between unilateral and bilateral fittings. Second, they wanted to know if new hearing aid users’ self-reports would indicate a period of acclimatization. Third, they sought to determine if acclimatization could be predicted by the degree of hearing loss, prior hearing aid use or cognitive capacity.

Forty-nine subjects participated in the study, recruited from four audiology clinics. There were 16 new unilateral hearing aid users, 16 new bilateral users and 17 experienced users, including 8 bilateral and 9 unilateral users. Experienced subjects used their own hearing aids and new users were fitted with BTE or CIC instruments with comparable circuit technology.  New instruments were fitted to NAL-NL1 targets and verified with real-ear measurements. Newly-fitted subjects had a few days of hearing aid use prior to commencement of the study and were allowed gain adjustments only if necessary due to discomfort with prescribed gain levels.

To measure speech recognition, a 4-alternative forced-choice procedure was used, in which listeners were asked to select one word from a closed set of four rhyming words, in response to the prompt, “Can you hear the word X clearly?” In addition to the speech recognition test, subjects completed the Speech, Spatial and Qualities of Hearing Scale – Difference version (SSQ-D; Gatehouse & Noble, 2004), as well as two measures of cognitive processing. The SSQ-D was administered after 12 weeks and allowed the subjects to judge their own changes in performance and listening effort with the hearing aids over the course of the study.

Two cognitive tests were administered. The first, a visual reaction time task, required participants to watch digits presented on a computer monitor and press the corresponding numbers on a keypad as quickly as possible. Responses were scored as correct or incorrect and response times were measured in milliseconds. Working memory was also evaluated, using the Digits Backwards subtest from the Wechsler Adult Intelligence Scale – III (WAIS-III; Wechsler, 1997). Subjects listened to lists of digits and were asked to repeat them in reverse order. Lists increased in length as the test progressed, and responses were scored as correct only if all digits were repeated in the correct reverse order.
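The scoring rule for the Digits Backwards task, as described above, is simple enough to express directly. The sketch below only illustrates that rule, using made-up digit lists.

```python
# Minimal sketch of the digits-backwards scoring rule described above: a response
# counts as correct only if every digit is repeated in exact reverse order.

def digits_backwards_correct(presented, response):
    """Return True if the response is the presented list in reverse order."""
    return list(response) == list(reversed(presented))

print(digits_backwards_correct([2, 7, 4], [4, 7, 2]))        # True
print(digits_backwards_correct([2, 7, 4, 9], [9, 4, 2, 7]))  # False
```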

In all test conditions, variability was high and a small improvement was noted over time, likely due to practice effects. The mean SNR required to achieve 50% performance did not differ between new unilateral and new bilateral hearing aid users, but experienced users required significantly more favorable SNRs to achieve this level of performance, compared to new users. This was attributed to the older average age and poorer hearing thresholds of the experienced user group.
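The outcome measure in these comparisons is the SNR at which a listener reaches 50% correct. The sketch below is not the authors' procedure; it simply shows, with hypothetical data, one common way such a point can be estimated: fitting a guessing-corrected psychometric function to 4-alternative forced-choice scores, where chance performance is 25%.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, midpoint, slope, chance=0.25):
    """Guessing-corrected logistic: proportion correct as a function of SNR (dB)."""
    return chance + (1.0 - chance) / (1.0 + np.exp(-slope * (snr - midpoint)))

# Hypothetical data: SNRs tested (dB) and proportion of words identified correctly.
snrs = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])
p_correct = np.array([0.27, 0.35, 0.48, 0.66, 0.83, 0.94])

# Fit midpoint and slope; the guessing rate stays fixed at 0.25.
(midpoint, slope), _ = curve_fit(psychometric, snrs, p_correct, p0=[-4.0, 1.0])

# Solve the fitted function for the SNR giving 50% correct overall.
target = 0.50
snr_50 = midpoint - np.log((1.0 - 0.25) / (target - 0.25) - 1.0) / slope
print(f"Estimated SNR for 50% correct: {snr_50:.1f} dB")
```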

For the new user groups, if acclimatization occurred it was expected that performance would improve in aided conditions over time. Instead, there were small trends of improvement in both unaided and aided conditions. For unilateral users, the trend was noted in the fitted ear, whereas for bilateral users, small improvements were noted for both ears. Of all the variables studied, the only one to have a significant effect on performance was time, which yielded a small, consistent improvement across groups and listening conditions. When place, manner and voicing errors were analyzed, there was no significant difference by type of error, nor was there a significant interaction with the other variables of group, aiding, ear or presentation level.

Because of the high variability in responses, correlations were examined for effects of hearing aid usage, degree of hearing loss, cognitive capacity, and a change in audibility referred to as “stimulus novelty”. For new hearing aid users, there was no significant correlation between the change in speech recognition scores and severity of hearing loss, cognitive test scores, or hearing aid use variables. Older age was correlated only with slower reaction times and more time spent in quiet listening conditions. There were no significant correlations between SSQ-D scores and change in aided performance in any of the listening conditions. Disparate SSQ-D scores did indicate that new hearing aid users perceived improvements over the course of the study, whereas experienced users did not.

Though there were small increases in speech recognition performance over time in all conditions, this was consistent with a practice effect and was not taken as evidence for acclimatization. Self-reports from the SSQ-D showed that new users experienced improvements with amplification that were significantly greater than those reported for experienced users. It is not surprising that SSQ-D scores might still show improvement, as the SSQ-D probes subjective perceptions of performance, including listening effort and sound quality. These elements may well improve with consistent use of new hearing aids even if actual speech recognition has not changed significantly. Improved audibility may allow the listener to function well in everyday environments with significantly less effort, making a positive impression on the listener, more so than small but measurable improvements in word recognition.

Another potential explanation for the lack of agreement between objective and subjective measures in this study could be related to the actual comparison the subjects made when they responded to the SSQ-D items. Because new users probably experienced noticeable benefits from the hearing aids, they may have had trouble comparing their performance immediately post-fitting versus 12 weeks later, and may have inadvertently compared pre-fitting and post-fitting performance, yielding larger SSQ-D scores.

Though the results of this study did not support an acclimatization effect for speech recognition, they do not rule out the existence of acclimatization altogether. Preferred gain, perceived listening effort, and sound quality improvements, among other effects, may well occur for most new hearing aid users, to varying degrees based on degree of hearing loss, duration of prior hearing loss and prior experience with hearing aids.

The subjects in this study were fitted with either BTE or CIC hearing aids, but hearing aid style was not examined with regard to acclimatization. CIC users often experience occlusion and must adjust to the sound of their own voice in the early days of hearing aid use, much more so than BTE users, whose fittings typically produce less occlusion. Whether this could have an impact on speech recognition acclimatization is questionable, but it could affect subjective reports. Similarly, individuals using hearing aid features such as frequency lowering or wireless signal routing may demonstrate other perceptual learning or acclimatization effects.

Perhaps the most important finding of this study was the contrast between measurable outcomes in the domain of subjective spatial perception and traditional measures of speech recognition. Many failed attempts to document acclimatization have focused on speech recognition or loudness perception rather than probing the patient’s perception of their acoustic environment, something achieved with the SSQ-D. The apparent sensitivity of this measure should guide future experimental design in this area. For the practicing clinician, this contrast can inform counseling approaches: speech recognition is unlikely to improve markedly over time, but the complexity or overwhelming nature of the acoustic environment may become more manageable with time.

References

Bentler, R.A., Niebuhr, D.P., Getta, J.P. & Anderson, C.V. (1993a). Longitudinal study of hearing aid effectiveness. I. Objective measures. Journal of Speech and Hearing Research 36, 808-819.

Gatehouse, S. (1992). The time course and magnitude of perceptual acclimatization to frequency responses: Evidence from monaural fitting of hearing aids. Journal of the Acoustical Society of America 92, 1258-1268.

Gatehouse, S. & Noble, W. (2004). The Speech, Spatial and Qualities of Hearing Scale (SSQ). International Journal of Audiology 43, 85-99.

Keidser, G., O’Brien, A., Carter, L., McLelland, M. & Yeend, I. (2008). Variation in preferred gain with experience for hearing-aid users. International Journal of Audiology 47(10), 621-635.

Munro, K.J. & Lutman, M.E. (2003). The effect of speech presentation level on measurement of auditory acclimatization to amplified speech. Journal of the Acoustical Society of America 114, 484-495.

Ovegard, A., Lundberg, G., Hagerman, B., Gabrielsson, A. & Bengtsson, M. (1997). Sound quality judgment during acclimatization of hearing aid. Scandinavian Audiology 26, 43-51.

Palmer, C.V., Nelson, C.T. & Lindley, G.A. (1998). The functionally and physiologically plastic adult auditory system. Journal of the Acoustical Society of America 103, 1705-1721.

Philibert, B., Collet, L., Vesson, J.F. & Veuillet, E. (2002). Intensity-related performances are modified by long-term hearing aid use: A functional plasticity? Hearing Research 165, 142-151.

Philibert, B., Collet, L., Vesson, J.F. & Veuillet, E. (2005). The auditory acclimatization effect in sensorineural hearing-impaired listeners: Evidence for functional plasticity. Hearing Research 205, 131-142.

Ronnberg, J., Rudner, M. & Foo, C. (2008). Cognition counts: A working memory system for ease of language understanding (ELU). International Journal of Audiology 47(Suppl. 2), S99-S105.

Saunders, G.H. & Cienkowski, K. (1997). Acclimatization to hearing aids. Ear and Hearing 18, 129-139.

Smeds, K., Keidser, G., Zakis, J., Dillon, H. & Leijon, A. (2006a). Preferred overall loudness. I. Sound field presentation in the laboratory. International Journal of Audiology 45, 12-25.

Smeds, K., Keidser, G., Zakis, J., Dillon, H. & Leijon, A. (2006b). Preferred overall loudness. II. Listening through hearing aids in field and laboratory tests. International Journal of Audiology 45, 12-25.

Taubman, L., Palmer, C. & Durrant, J. (1999). Accuracy of hearing aid use time as reported by experienced hearing aid wearers. Ear and Hearing 20, 299-305.

Wechsler, D. (1997). Wechsler Adult Intelligence Scale (3rd ed.). Oxford: Pearson Assessment.

Willott, J.F. (1996). Physiological plasticity in the auditory system and its possible relevance to hearing aid use, deprivation effects, and acclimatization. Ear and Hearing 17, 66S-77S.

Yund, E.W., Roup, C.M. & Simon, H.J. (2006). Acclimatization in wide dynamic range multichannel compression and linear amplification hearing aids. Journal of Rehabilitation Research and Development 43, 517-536.

On the Prevalence of Cochlear Dead Regions

Pepler, A., Munro, K., Lewis, K. & Kluk, K. (2014). Prevalence of Cochlear Dead Regions in New Referrals and Existing Adult Hearing Aid Users. Ear and Hearing 20(10), 1-11.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Cochlear dead regions are areas in which, due to inner hair cell and/or nerve damage, responses to acoustic stimuli occur not at the area of peak basilar membrane stimulation but at adjacent regions in the cochlea. Professor Brian Moore defined dead regions as a total loss of inner hair cell function across a limited region of the basilar membrane (Moore et al., 1999b). This hair cell loss does not result in an inability to perceive sound in a given frequency range; rather, the sound is perceived via off-place or off-frequency listening, a spread of excitation to adjacent regions of the cochlea where inner hair cells are still functioning (Moore, 2004). Because the response is spread across a broad tonotopic area, individuals with cochlear dead regions may perceive pure tones as “clicks”, “buzzes” or “whooshes”.

Cochlear dead regions are identified and measured with a variety of masking techniques. The most accurate method is the measurement of psychophysical tuning curves (PTCs), originally developed to measure frequency selectivity (Moore & Alcantara, 2001). A PTC plots the masker level required to just mask a fixed signal as a function of masker frequency. For a normally hearing ear, the tip of the PTC, the point at which the lowest masker level is sufficient to mask the signal, aligns with the signal frequency. In ears with dead regions, the tip of the PTC is shifted away from the signal frequency, indicating that the signal is being detected in an adjacent region. Though PTCs are an effective method of identifying and delineating the edges of cochlear dead regions, they are time consuming and ill-suited to clinical use.
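As an illustration of the logic described above, the following sketch uses hypothetical PTC data for a 2 kHz signal. The tip of the curve is simply taken as the masker frequency requiring the lowest masker level, and a tip well away from the signal frequency is flagged; the shift criterion used here is arbitrary and for illustration only, not a published cutoff.

```python
import numpy as np

# Hypothetical PTC for a fixed 2 kHz signal: masker frequencies (Hz) and the
# masker levels (dB SPL) required to just mask the signal at each frequency.
signal_freq_hz = 2000.0
masker_freqs = np.array([500, 750, 1000, 1250, 1500, 2000, 2500, 3000])
masker_levels = np.array([85, 78, 66, 58, 52, 60, 72, 84])  # tip near 1.5 kHz

# The PTC tip is the masker frequency needing the lowest level.
tip_freq = masker_freqs[np.argmin(masker_levels)]
shift_octaves = abs(np.log2(tip_freq / signal_freq_hz))

print(f"PTC tip at {tip_freq} Hz ({shift_octaves:.2f} octaves from the signal)")
if shift_octaves > 0.2:  # illustrative criterion only
    print("Tip shifted away from the signal frequency: consistent with a dead region")
```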

The test used most frequently for clinical identification of cochlear dead regions is the Threshold Equalizing Noise (TEN) test (Moore et al., 2000; 2004). The TEN test was developed on the premise that tones detected by off-frequency listening, in ears with dead regions, should be easier to mask with broadband noise than they would be in ears without dead regions. With the TEN(HL) test, masked thresholds are measured across the range of 500 Hz to 4000 Hz, allowing the approximate identification of a cochlear dead region.
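The decision rule commonly cited for the TEN(HL) test, roughly that the masked threshold must be at least 10 dB above both the TEN level and the absolute threshold, can be expressed as a simple check. The sketch below is a hedged paraphrase for illustration, not a substitute for the published test instructions, and the example values are hypothetical.

```python
# Minimal sketch of the commonly cited TEN(HL) decision rule: a dead region is
# suspected at a test frequency when the masked threshold in TEN is at least
# 10 dB above the TEN level AND at least 10 dB above the absolute threshold.

def suspect_dead_region(absolute_threshold_db, masked_threshold_db, ten_level_db,
                        criterion_db=10.0):
    """Return True if the paraphrased TEN(HL) criteria for a suspected dead region are met."""
    above_ten = masked_threshold_db - ten_level_db >= criterion_db
    above_quiet = masked_threshold_db - absolute_threshold_db >= criterion_db
    return above_ten and above_quiet

# Hypothetical example at 3000 Hz: 70 dB HL absolute threshold, TEN presented at
# 70 dB/ERB, masked threshold measured at 84 dB HL.
print(suspect_dead_region(absolute_threshold_db=70, masked_threshold_db=84,
                          ten_level_db=70))  # True -> dead region suspected
```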

There are currently no standards for clinical management of cochlear dead regions. Some reports suggest that dead regions affect speech perception, pitch, loudness perception, and general sound quality (Vickers et al., 2001; Baer et al., 2002; Mackersie et al., 2004; Huss et al., 2005a; 2005b). Some researchers have specified amplification characteristics to be used with patients with diagnosed dead regions, but there is no consensus and different studies have arrived at conflicting recommendations. While some recommend limiting amplification to a range extending up to 1.7 times the edge frequency of the dead region (Vickers et al., 2001; Baer et al., 2002), others advise the use of prescribed settings and recommend against limiting high-frequency amplification (Cox et al., 2012). Because of these conflicting recommendations, it remains unclear how clinicians should modify their treatment plans, if at all, for hearing aid patients with dead regions.
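For concreteness, the first of the two conflicting recommendations above (limiting amplification to about 1.7 times the edge frequency of the dead region) amounts to a one-line calculation; the edge frequency in this sketch is hypothetical.

```python
# Worked example of the Vickers/Baer-style recommendation described above:
# limit amplification to roughly 1.7 times the edge frequency of a
# high-frequency dead region. The edge frequency here is hypothetical.

edge_frequency_hz = 2000.0                              # estimated edge of the dead region
upper_amplification_limit_hz = 1.7 * edge_frequency_hz
print(f"Amplify up to about {upper_amplification_limit_hz:.0f} Hz")  # ~3400 Hz
```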

Previous research on the prevalence of dead regions has reported widely varying results, possibly due to differences in test methodology or subject characteristics. In a study of hearing aid candidates, Cox et al. (2011) reported a dead region prevalence of 31%, but their strict inclusion criteria likely excluded individuals with milder hearing losses, so their prevalence estimate may differ from that of hearing aid candidates at large. Vinay and Moore (2007) reported a higher prevalence of 57% in a study that did include individuals with thresholds as low as 15 dB HL at some frequencies, but the median hearing loss of their subjects was greater than that of the Cox et al. subjects, which likely contributed to the higher prevalence estimate in their group.

In the study being reviewed, Pepler and her colleagues aimed to determine how prevalent cochlear dead regions are among a population of individuals who have or are being assessed for hearing aids. Because dead regions become more likely as hearing loss increases, and established hearing aid patients are more likely to have greater degrees of hearing loss, they also investigated whether established hearing aid patients would be more likely to have dead regions than newly referred individuals.  Finally, they studied whether age, gender, hearing thresholds or slope of hearing loss could predict the presence of cochlear dead regions.

The researchers gathered data from a group of 376 patients selected from the database of a hospital audiology clinic in Manchester, UK. Of the original group, 343 individuals met inclusion criteria; 193 were new referrals and 150 were established patients and experienced hearing aid users.  Of the new referrals, 161 individuals were offered and accepted hearing aids, 16 were offered and declined hearing aids and 16 were not offered hearing aids because their losses were of mild degree.  The 161 individuals who were fitted with new hearing aids were referred to as “new” hearing aid users for the purposes of the study. All subjects had normal middle ear function and otoscopic examinations and on average had moderate sensorineural hearing losses.

When reported as a proportion of the total subjects in the study, Pepler and her colleagues found a dead region prevalence of 36%. When reported as the proportion of ears with dead regions, the prevalence was 26%, indicating that some subjects had dead regions in one ear only. Follow-up analysis of 64 patients with unilateral dead regions revealed that the ears with dead regions had significantly poorer audiometric thresholds than the ears without dead regions. Only 3% of study participants had dead regions extending across three or more consecutive test frequencies. Ears with contiguous dead regions had greater hearing loss than those without. Among new hearing aid users, 33% had dead regions, while the prevalence was 43% among experienced hearing aid users. On average, the experienced hearing aid users had poorer audiometric thresholds than the new users.

Pepler and colleagues excluded hearing losses above 85 dB HL because effective TEN masking could not be achieved. Therefore, dead regions were most common in hearing losses from 50 to 85 dB HL, though a few were measured below that range. There were no measurable dead regions for hearing thresholds below 40 dB HL. Ears with steeper audiometric slopes were more likely to have dead regions, but further analysis revealed that only the 4 kHz thresholds made a significant predictive contribution; the slope of high-frequency hearing loss predicted dead regions only because of the increased degree of hearing loss at 4 kHz.

Demographically, more men than women had dead regions in at least one ear, but their audiometric configurations were different: women had poorer low-frequency thresholds whereas men had poorer high-frequency thresholds. It appears that the gender effect was actually due to the difference in audiometric configuration, specifically the men’s poorer high-frequency thresholds. A similar result was reported for the analysis of age effects. Older subjects had a higher prevalence of dead regions but also had significantly poorer hearing thresholds. Though poorer hearing thresholds at 4 kHz did slightly increase the likelihood of dead regions, regression analysis of the variables of age, gender and hearing thresholds found that none of these factors were significant predictors.

Pepler et al.’s prevalence data agree with the 31% reported by Cox et al. (2011), but are lower than the prevalence reported by Vinay and Moore (2007), possibly because the subjects in the latter study had greater average hearing loss than those in the other studies. When Pepler and her colleagues applied inclusion criteria similar to those of the Cox study, however, they found a prevalence of 59%, much higher than the figure reported by Cox and her colleagues and likely attributable to the exclusion of subjects with normal low-frequency hearing in the Cox study. The authors proposed that Cox’s exclusion of subjects with normal low-frequency thresholds could have reduced the overall prevalence by increasing the proportion of subjects with metabolic presbyacusis and eliminating some subjects with sensory presbyacusis; sensory presbyacusis is often associated with steeply sloping hearing loss and involves atrophy of cochlear structures (Schuknecht, 1964).

 In summary:

The study reported here shows that roughly a third of established and newly referred hearing aid patients are likely to have at least one cochlear dead region, in at least one ear. A very low proportion (3% reported here) of individuals are likely to have dead regions spanning multiple octaves. The only factor that predicted the presence of dead regions was hearing threshold at 4 kHz.

On the lack of clinical guidance:

As more information is gained about prevalence and risk factors, what remains missing are clinical guidelines for the management of hearing aid users with diagnosed high-frequency dead regions. Conflicting recommendations have been proposed for either limiting high-frequency amplification or preserving it and working within prescribed targets. The data available today suggest that the prevalence of contiguous, multi-octave dead regions is very low, and that only a further subset of hearing aid users with contiguous dead regions experience any negative effects of high-frequency amplification. In light of these observations, it seems prudent to fit high-frequency gain to prescribed targets for all patients at the initial fitting. Any reduction in high-frequency gain should be made in response to subjective feedback from the patient after a trial period with the hearing aids.

On frequency lowering and dead regions:

Some clarity is required regarding the role of frequency lowering in the treatment of cochlear dead regions. Because acoustic information in speech extends out to 10 kHz, and because most hearing aid frequency responses roll off significantly above 4-5 kHz, conservatively prescribed frequency lowering can benefit many hearing aid users. It must be noted that the benefits of this technology arise largely from the acoustic limitations of the device and not from the presence or absence of a cochlear dead region. There are presently no recommendations for the selection of frequency-lowering parameters in cases of cochlear dead regions. In the absence of such recommendations, the best practice for prescribing frequency lowering is to follow the same guidelines as for any other patient with hearing loss: validation and verification should be performed to document benefit from the algorithm and to identify appropriate algorithm parameters.

On the low-frequency dead region: 

The effects of low-frequency dead regions are not well studied and may have a more significant impact on hearing aid performance. Hornsby (2011) described potential negative effects of low-frequency amplification when it extends into the range of a low-frequency dead region (Vinay & Moore, 2007; Vinay et al., 2008). In some cases performance decrements reached 30%, so the authors recommended limiting low-frequency amplification to frequencies no lower than 0.57 times the edge frequency of the dead region in order to preserve speech recognition ability. Though dead regions are less common in the low frequencies than in the high frequencies, more study on this topic is needed to determine clinical testing and treatment implications.
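As with the high-frequency rule discussed earlier, the low-frequency guideline above reduces to a simple calculation; the edge frequency in this sketch is hypothetical.

```python
# Worked example of the low-frequency guideline described above: with a
# low-frequency dead region, limit amplification to frequencies no lower than
# roughly 0.57 times the edge frequency of that dead region. Values are hypothetical.

low_freq_edge_hz = 750.0                                  # estimated edge of the dead region
lower_amplification_limit_hz = 0.57 * low_freq_edge_hz
print(f"Amplify only above about {lower_amplification_limit_hz:.0f} Hz")  # ~428 Hz
```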

References

Baer, T., Moore, B. C. and Kluk, K. (2002). Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 112(3 Pt 1), 1133-44.

Cox, R., Alexander, G., Johnson, J., Rivera, I. (2011). Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear and Hearing 32(3), 339 – 348.

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012).  Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing 33(5), 573-87.

Hornsby, B. (2011) Dead regions and hearing aid fitting. Ask the Experts, Audiology Online October 3, 2011.

Huss, M. & Moore, B. (2005a). Dead regions and pitch perception. Journal of the Acoustical Society of America 117, 3841-3852.

Huss, M. & Moore, B. (2005b). Dead regions and noisiness of pure tones. International Journal of Audiology 44, 599-611.

Mackersie, C. L., Crocker, T. L. and Davis, R. A. (2004). Limiting high-frequency hearing aid gain in listeners with and without suspected cochlear dead regions. Journal of the American Academy of Audiology 15(7), 498-507.

Moore, B., Huss, M. & Vickers, D. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B. (2004). Dead regions in the cochlea: Conceptual foundations, diagnosis and clinical applications. Ear and Hearing 25, 98-116.

Moore, B. & Alcantara, J. (2001). The use of psychophysical tuning curves to explore dead regions in the cochlea. Ear and Hearing 22, 268-278.

Moore, B.C., Glasberg, B. & Vickers, D.A. (1999b). Further evaluation of a model of loudness perception applied to cochlear hearing loss. Journal of the Acoustical Society of America 106, 898-907.

Pepler, A., Munro, K., Lewis, K. & Kluk, K. (2014). Prevalence of Cochlear Dead Regions in New Referrals and Existing Adult Hearing Aid Users. Ear and Hearing 20(10), 1-11.

Schuknecht, H.F. (1964). Further observations on the pathology of presbycusis. Archives of Otolaryngology 80, 369-382.

Vickers, D., Moore, B. & Baer, , T. (2001). Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 110, 1164-1175.

Vinay and Moore, B. C. (2007). Speech recognition as a function of high-pass filter cutoff frequency for people with and without low-frequency cochlear dead regions. Journal of the Acoustical Society of America 122(1), 542-53.

Vinay, Baer, T. and Moore, B. C. (2008). Speech recognition in noise as a function of high pass-filter cutoff frequency for people with and without low-frequency cochlear dead regions. Journal of the Acoustical Society of America 123(2), 606-9.

Should you prescribe digital noise reduction to children?

Pittman, A. (2011). Age-related benefits of digital noise reduction for short term word learning in children with hearing loss. Journal of Speech, Language and Hearing Research 54, 1448-1463.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

A child’s ability to learn new words has important implications for language acquisition, mental and social development as well as academic achievement.  How easily a child acquires new vocabulary words can be affected by numerous factors, including age, working memory and current vocabulary (Alt, 2010). Hearing loss is known to adversely affect children’s ability to learn new words and the more severe the loss, the more significant the effect on word learning (Pittman, et al., 2005; Blamey et al., 2001). The effect of hearing loss on word learning may be related to a decreased ability to encode the degraded stimuli into working memory. Indeed, in a study with normal-hearing and hearing-impaired children, Pittman found that word stimuli that were modified with narrowed bandwidths were harder for children to learn (Pittman, 2008). Similar results indicating that degraded perception adversely affects children’s phonological processing have been reported elsewhere (Briscoe, et al., 2001).

In many everyday listening situations, speech must be perceived in the presence of noise or other competing sounds. Noise can degrade the speech information, making words more difficult to encode into working memory and identify correctly. Individuals with hearing loss are more adversely affected by the presence of background noise (Kochkin, 2002; McCoy et al., 2005; Picou et al., 2013), which is of particular concern when the effects of noise on word learning are considered. Hearing aids can at least partially mitigate the effects of background noise with advanced signal processing such as directional microphones and digital noise reduction (DNR). However, little evidence exists to support beneficial effects of DNR on word learning. Pittman suggests that there is reason for concern, as DNR could impose negative effects on word learning because of reductions in overall amplification. Additionally, the effect of DNR on connected speech, which offers semantic and syntactic context, may be very different from its effect on isolated word learning, so the everyday experience of hearing aid users could differ from laboratory results based on the perception of isolated words.

This study examined how DNR affects word learning in hearing-impaired children with hearing aids. The author presented the following hypotheses:

1) Word learning would decrease in noise for children with normal hearing and for those with hearing loss.

2) Word learning rates in noise would slow further when DNR was active, due to the reduction in overall amplification imposed by the algorithm.

Forty-one children with normal hearing and 26 children with mild-to-moderate hearing loss participated in the study. Each group comprised two age sub-groups: a younger group aged 8-10 years and a slightly older group aged 11-12 years. The children with hearing loss had been diagnosed at an average age of approximately 3 years and all but one wore personal hearing aids. Participants with hearing loss were fitted with BTE hearing aids programmed to DSL v5.0 targets, verified with real-ear measures and set with two programs. In Program 1, advanced signal processing features such as noise reduction, impulse reduction, wind noise reduction and feedback management were turned off. In Program 2, these features remained disabled except for noise reduction, which was set to maximum.

Word learning was tested using nonsense words, presented in three sets of five words each. All were two-syllable words, and each list contained words with the same vowels in the first and last syllables. Stimuli were presented in the sound field by a female talker at a level of 50 dB SPL and an SNR of 0 dB. Children were seated at a small table about one meter away from the speaker. Nonsense words were presented on a computer screen, along with five pictures of nonsense objects categorized as toys, flowers or aliens. The children were asked to select the appropriate picture to go with the word and were given positive reinforcement for selecting the correct picture. No reinforcement was provided for selecting the wrong picture. Children therefore learned the new words through trial and error.

The first goal of the study was to examine the impact of noise on children’s word learning. Statistical analyses indicated that the normal-hearing participants learned words faster than the participants with hearing loss, older children learned faster than younger children, and learning in quiet was faster than learning in noise. The presence of noise resulted in further decrements in performance for listeners with hearing loss, indicating that noise had a more deleterious effect on word learning for participants with hearing loss than for normally hearing participants.

The second goal of the study was to determine if DNR affected word learning for children with hearing loss. When DNR trials were compared to quiet and noise trials, younger children performed the same in noise whether or not they were using DNR in their hearing aids. Performance for both noise conditions was significantly poorer than performance in quiet. In contrast, the performance of older participants improved with DNR, with DNR performance closely approximating performance in quiet.

When results from the word learning task were examined with reference to Peabody vocabulary scores, the results indicated that participants with hearing loss had lower vocabulary ages than the normally hearing participants. For the experimental tasks, normally hearing participants required fewer trials to reach 70% performance than the participants with hearing loss. Further analysis revealed that age of identification, age of amplification and years of amplification use accounted for 85% of the variance, but follow-up tests revealed significant relationships between word learning and age, not between word learning and hearing history. These results suggest that, despite individual variability, word learning in noise was most related to age and vocabulary.

In sum, the results of this investigation suggest that DNR did not have an effect, positive or negative, on the younger participants. It did improve performance for older children, however, regardless of their hearing history or years of amplification use. The author points out that children’s speech perception in noise is known to improve with age (Elliott, 1979; Scollie, 2008), but the participants in this study demonstrated age effects only when DNR was used. It appears that the combination of DNR and greater vocabulary knowledge allowed the older listeners to demonstrate superior word learning.

There are many factors to consider when prescribing amplification characteristics for children. Word learning is a critical developmental process for children, with important implications for future social and academic accomplishments.  The documented beneficial effects of DNR on word learning in complex listening environments could be a strong motivator for selection in a pediatric hearing aid. In addition to potential word learning benefits, DNR could make amplification more comfortable in noisy conditions, thereby increasing the acceptance of hearing aids and expanding potential opportunities for communication and further word learning.

Some caution should be voiced in the selection of DNR for pediatric use. Many of these algorithms reduce frequency-specific hearing aid gain, which could compromise the audibility of some speech sounds when listening in noise. Prior to consideration of any DNR algorithm in pediatric populations, data should be available to ensure that speech audibility is maintained when that particular DNR algorithm is active and noise is presented at levels typical of the child’s academic setting (see Stelmachowicz et al., 2010).

The outcomes reported here provide general support for the use of DNR in school-age children. It must be clarified that the documented benefits do not suggest improved speech understanding, as this is not a function of the algorithm. Rather, the documented improvements in word learning most likely arise from the fact that noise in the absence of speech was reduced in level, reducing the effort required to listen to the individual words as they were presented.

For additional information on the prescription of hearing aid signal processing features in pediatric populations, please see the 2013 Pediatric Amplification Guidelines, published by the American Academy of Audiology: http://audiology.org/resources/documentlibrary/Documents/PediatricAmplificationGuidelines.pdf

 

References

Alt, M. (2010). Phonological working memory impairments in children with specific language impairment: Where does the problem lie? Journal of Communication Disorders 44, 173-185.

Bentler, R., Wu, Y., Kettel, J. & Hurtig, R. (2008). Digital noise reduction: Outcomes from laboratory and field studies. International Journal of Audiology 47, 447-460.

Blamey, P., Sarant, J., Paatsch, L., Barry, J., Bow, C., Wales, R. & Rattigan, K. (2001). Relationships among speech perception, production, language, hearing loss and age in children with impaired hearing. Journal of Speech, Language and Hearing Research 44, 264-285.

Briscoe, J., Bishop, D. & Norbury, C. (2001). Phonological processing, language and literacy: a comparison of children with mild-to-moderate sensorineural hearing loss and those with specific language impairment. Journal of Child Psychology and Psychiatry and Allied Disciplines 42, 329-340.

Dunn, L. & Dunn, L. (2006). Peabody Picture Vocabulary Test – III. Circle Pines, MN:AGS.

Kochkin, S. (2002). MarkeTrak VI: 10-year customer satisfaction trends in the US hearing instrument market. The Hearing Review 9 (10), 14-25, 46.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A, 58, 22-33.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing 34 (5).

Pittman, A., Lewis, D., Hoover, B. & Stelmachowicz, P. (2005). Rapid word learning in normal-hearing and hearing-impaired children. Effects of age, receptive vocabulary and high-frequency amplification. Ear and Hearing 26, 619-629.

Pittman, A. (2008).  Short-term word learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. Journal of Speech, Language and Hearing Research 51, 785-797.

Pittman, A. (2011). Age-related benefits of digital noise reduction for short term word learning in children with hearing loss. Journal of Speech, Language and Hearing Research 54, 1448-1463.

Ng, E., Rudner, M., Lunner, T., Syskind Pedersen, M. & Rönnberg, J. (2013).  Effects of noise and working memory capacity on memory processing of speech for hearing aid users. International Journal of Audiology, Early Online: 1–9

Ricketts, T. & Hornsby, B. (2005). Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction. Journal of the American Academy of Audiology 16, 270-277.

Sarampalis, A., Kalluri, S. & Edwards, B. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech Language and Hearing Research 52, 1230-1240.

Stelmachowicz, P., Lewis, D., Hoover, B., Nishi, K., McCreery, R. & Woods, W. (2010). Effects of digital noise reduction on speech perception for children with hearing loss. Ear and Hearing 31, 245-355.

The Top 5 Hearing Aid Research Articles from 2013!

1) The Clinical Practice Guidelines in Pediatric Amplification

After a 10-year wait, the guidelines for prescribing hearing aids to children were updated in 2013, making them the most current peer-reviewed amplification guidelines available. There is little doubt that these recommendations will influence future publications and fitting protocols at clinical sites around the world. The guidelines are freely available at the link below.

American Academy of Audiology. (2013). Clinical Practice Guidelines: Pediatric Amplification. Reston, VA: Ching, T., Galster, J., Grimes, A., Johnson, C., Lewis, D., McCreery, R…Yoshinaga-Itano, C.

http://buff.ly/18TNGsz

2) Placebo effects in hearing aid trials are reliable

This article echoes publications from the early 2000s (e.g., Bentler et al., 2003) that reported on blinded comparisons of analog and digital hearing aids. In those early studies, participants showed clear bias when primed to believe that option ‘A’ was a higher level of technology than option ‘B’. That early work was more focused on comparing technologies than this insightful report on placebo effects. Dawes and colleagues share an important reminder that the placebo effect is real and should be accounted for in experimental design whenever possible.

Dawes, P., Hopkins, R., & Munro, K. (2013). Placebo effects in hearing aid trials are reliable. International Journal of Audiology, 52(7), 472-477.

http://buff.ly/JF7DHM

3) Effects of hearing aid use on listening effort and mental fatigue

In the last few years, a number of research audiologists and hearing scientists have worked to document relationships between cognitive capacity, listening effort, and hearing aid use. An undertone of these efforts has been the assumption that a person with hearing loss will be less fatigued when listening with hearing aids. This article is one of the first published attempts to clearly document this effect.

Hornsby, B.W. (2013). Effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear & Hearing, 34(5), 523-534.

http://buff.ly/JF7vrH

4) Characteristics of hearing aid fittings in infants and young children

The recent publication of updated pediatric fitting guidelines leads one to wonder how well fundamental aspects of these recommendations are being followed. This report from McCreery and colleagues is a clear indication that superior pediatric hearing care is uncommon and most often found in large pediatric medical centers. The authors also suggest that consistent care from a single center may result in the most prescriptively appropriate hearing aid fitting.

McCreery, R., Bentler, R., & Roush, P. (2013). Characteristics of hearing aid fittings in infants and young children. Ear & Hearing, 34(6), 701-710.

http://buff.ly/18TNnhp

5) The Style Preference Survey (SPS): a report on psychometric properties and a cross-validation experiment

Closing out the Top 5: this article warrants high regard for rigor in design and quality of reporting. The authors delivered an article that will educate future researchers on the development and validation of questionnaires. Beyond this utility, the results are some of the first to identify the dimensions of preference that underlie the well-established preference for open-canal hearing aids.

Smith, S., Ricketts, T., McArdle, R., Chisolm, T., Alexander, G., & Bratt, G. (2013). Style preference survey: a report on the psychometric properties and a cross-validation experiment. Journal of the American Academy of Audiology, 24(2), 89-104.

http://buff.ly/JF740H

Patients with higher cognitive function may benefit more from hearing aid features

Ng, E.H.N., Rudner, M., Lunner, T., Pedersen, M.S., & Ronnberg, J. (2013). Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. International Journal of Audiology, Early Online, 1-9.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Research reports as well as clinical observations indicate that competing noise increases the cognitive demands of listening, an effect that is especially impactful for individuals with hearing loss (McCoy et al., 2005; Picou et al., 2013; Rudner et al., 2011). Listening effort is a cognitive dimension of listening thought to represent the allocation of cognitive resources needed for speech recognition (Hick & Tharpe, 2002). Working memory is a further dimension of cognition that involves the simultaneous processing and storage of information; its effect on speech processing may vary depending on the listening conditions (Rudner et al., 2011).

The concept of effortful listening can be characterized with the Ease of Language Understanding (ELU) model (Ronnberg, 2003; Ronnberg et al., 2008). In quiet conditions, when speech is audible and clear, the speech input is intact and is automatically and easily matched to stored representations in the lexicon. When speech inputs are weak, distorted or obscured by noise, mismatches may occur and speech inputs may need to be compared to multiple stored representations to arrive at the most likely match. In these conditions, the allocation of additional cognitive resources is required. Efficient cognitive functioning and a large working memory capacity allow more rapid and successful matches between speech inputs and stored representations. Several studies have indicated a relationship between cognitive ability and speech perception: Humes (2007) found that cognitive function was the best predictor of speech understanding in noise, and Lunner (2003) reported that participants with better working memory capacity and verbal processing speed had better speech perception performance.

Following the ELU model, hearing aids may allow listeners to match inputs and stored representations more successfully, with less explicit processing. Noise reduction, as implemented in hearing aids, has been proposed as a technology that may ease effortful listening. In contrast, however, it has been suggested that hearing aid signal processing may introduce unwanted artifacts or alter the speech inputs so that more explicit processing is required to match them to stored images (Lunner et al., 2009). If this is the case, hearing aid users with good working memory may function better with amplification because their expanded working memory capacity allows more resources to be applied to the task of matching speech inputs to long-term memory stores.

Elaine Ng and her colleagues investigated the effect of noise and noise reduction on word recall and identification and examined whether individuals were affected by these variables differently based on their working memory capacity. The authors had several hypotheses:

1. Noise would adversely affect memory, with poorer memory performance for speech in noise than in quiet.

2. Memory performance in noise would be at least partially restored by the use of noise reduction.

3. The effect of noise reduction on memory would be greater for items in late list positions because participants were older and therefore likely to have slower memory encoding speeds.

4. Memory in competing speech would be worse than in stationary noise because of the stronger masking effect of competing speech.

5. Overall memory performance would be better for participants with higher working memory capacity in the presence of noise reduction. This effect should be more apparent for late list items presented with competing speech babble.

Twenty-six native Swedish-speaking individuals with moderate to moderately-severe, high-frequency sensorineural hearing loss participated in the authors’ study. Prior to commencement of the study, participants were tested to ensure that they had age-appropriate cognitive performance. A battery of tests was administered and results were comparable to previously reported performance for their age group (Ronnberg, 1990).

Two tests were administered to study participants. First, a reading span test evaluated working memory capacity.  Participants were presented with a total of 24 three-word sentences and sub-lists of 3, 4 and 5 sentences were presented in ascending order. Participants were asked to judge whether the sentences were sensible or nonsense. At the end of each sub-list of sentences, listeners were prompted to recall either the first or final words of each sentence, in the order in which they were presented. Tests were scored as the total number of items correctly recalled.

The second test was a sentence-final word identification and recall (SWIR) test, consisting of 140 everyday sentences from the Swedish Hearing In Noise Test (HINT; Hallgren et al, 2006). This test involved two different tasks. The first was an identification task in which participants were asked to report the final word of each sentence immediately after listening to it.  The second task was a free recall task; after reporting the final word of the eighth sentence of the list, they were asked to recall all the words that they had previously reported. Three of seven tested conditions included variations of noise reduction algorithms, ranging from one similar to those implemented in modern hearing aids to an ‘ideal’ noise reduction algorithm.

Prior to the main analyses of working memory and recall performance, two sets of groups were created based on reading span scores, using two different grouping methods. In the first set, two groups were created by splitting the sample at the median score, so that 13 individuals were in a high reading span group and the remaining 13 were in a low reading span group. In the second set, participants who scored in the mid-range on the reading span test were excluded from the analysis, creating high and low reading span groups of 10 participants each. There were no significant differences between groups in age, pure tone average or word identification performance in any of the noise conditions. Overall reading span scores for participants in this study were comparable to previously reported results (Lunner, 2003; Foo et al., 2007).
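The two grouping methods are straightforward to express; the sketch below applies them to hypothetical reading span scores, since the participants' actual scores are not reproduced here.

```python
import numpy as np

# Minimal sketch of the two grouping approaches described above, applied to
# hypothetical reading span scores for 26 listeners.
rng = np.random.default_rng(0)
reading_span = rng.integers(8, 25, size=26)   # hypothetical scores

order = np.argsort(reading_span)

# Method 1: median split into low and high reading span groups (13 vs. 13).
low_group, high_group = order[:13], order[13:]

# Method 2: exclude mid-range scorers, keeping the 10 lowest and 10 highest.
low_extreme, high_extreme = order[:10], order[-10:]

print(len(low_group), len(high_group), len(low_extreme), len(high_extreme))  # 13 13 10 10
```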

Also prior to the main analysis, the SWIR results were analyzed to compare noise reduction and ideal noise reduction conditions. There was no significant difference between noise reduction and ideal noise reduction conditions in the identification or free recall tasks, nor was there an interaction of noise reduction condition with reading span score. Therefore, only the noise reduction condition was considered in the subsequent analyses.

The relationship between reading span score (representing working memory capacity) and SWIR recall was examined for all the test conditions. Reading span score correlated with overall recall performance in all conditions but one. When recall was analyzed as a function of list position (beginning or final), reading span scores correlated significantly with beginning (primacy) positions in quiet and most noise conditions. There was no significant correlation between overall reading span scores and items in final (recency) position in any of the noise conditions.
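A rough illustration of the serial-position scoring described above is sketched below with hypothetical recall data; treating the first half of each eight-item list as the primacy portion and the second half as the recency portion is an assumption made for the example, not the authors' exact definition.

```python
import numpy as np

# Hypothetical free-recall data: recalled[i, j] = 1 if the sentence-final word in
# position j of list i was recalled, else 0. Each SWIR list has eight words.
recalled = np.array([
    [1, 1, 0, 0, 0, 0, 1, 1],
    [1, 0, 1, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
])

primacy = recalled[:, :4].mean()   # proportion recalled from early (primacy) positions
recency = recalled[:, 4:].mean()   # proportion recalled from late (recency) positions
print(f"Primacy recall: {primacy:.2f}, recency recall: {recency:.2f}")
```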

There were significant main effects of noise, list position and reading span group; when noise reduction was implemented, the negative effects of noise were lessened. There was a recency effect, in that performance was better for late list positions than for early list positions. Overall, the high reading span groups scored better than the low reading span groups, for both the median-split and mid-range-exclusion groupings. The high reading span groups showed improved recall with noise reduction, whereas the low reading span groups exhibited no change in performance whether or not noise reduction was used. The use of four-talker babble had a negative effect on late list positions but did not affect items in other positions, suggesting that four-talker babble disrupted working memory more than steady-state noise. These analyses supported hypotheses 1, 2, 3 and 5, indicating that noise adversely affects memory performance (1), that noise reduction and list position interact with this effect (2, 3), especially for individuals with high working memory capacity (5).

The results also supported hypothesis 4, which suggested that competing speech babble would affect memory performance more than steady state noise. Recall performance was significantly better in the presence of steady-state noise than it was in 4-talker babble. Though there was no significant effect of noise reduction overall, high reading span participants once again outperformed low reading span participants with noise reduction.

In summary, the results of this study determined that noise had an adverse effect on recall, but that this effect was mildly mitigated by the use of noise reduction. Four-talker babble was more disruptive to recall performance than was steady-state noise. Recall performance was better for individuals with higher working memory capacity. These individuals also demonstrated more of a benefit from noise reduction than did those with lower working memory capacity.

Recall performance is better in quiet conditions than in noise, presumably because fewer cognitive resources are required to encode the speech input (Murphy et al., 2000). Ng and her colleagues suggest that noise reduction helps to perceptually segregate speech from noise, allowing the speech input to be matched to stored lexical representations with less cognitive demand. Noise reduction may therefore at least partially reverse the negative effect of noise on working memory.

Competing speech babble is more likely to be cognitively demanding than steady-state noise (such as an air conditioner) because it contains meaningful information that is more distracting and harder to separate from the speech of interest (Sorqvist & Ronnberg, 2012). Not only is the speech signal of interest degraded by the presence of competing sound and therefore harder to encode, but additional cognitive resources are required to inhibit the unwanted or irrelevant linguistic information (Macken, 2009).  Because competing speech puts more demands on cognitive resources, it is more potentially disruptive than steady-state noise to perception of the speech signal of interest.

Unfortunately, much of the background noise encountered by hearing aid wearers is competing speech. The classic example of the cocktail party illustrates one of the most challenging situations for hearing-impaired individuals, in which they must try to attend to a proximal conversation while ignoring multiple conversations surrounding them. The results of this study suggest that noise reduction may be more useful in these situations for listeners with better working memory capacity; however, noise reduction should still be considered for all hearing aid users, with comprehensive follow-up care to make adjustments for individuals who are not functioning well in noisy conditions. Noise reduction may generally alleviate perceived effort or annoyance, allowing a listener to be more attentive to the speech signal of interest or to remain in a noisy situation that would otherwise be uncomfortable or aggravating.

More research is needed on the effects of noise, noise reduction and advanced signal processing on listening effort and memory in everyday situations. It is likely that performance is affected by numerous variables of the hearing aid, including compression characteristics, directionality, noise reduction, as well as the automatic implementation or adjustment of these features. These variables in turn combine with user-related characteristics such as age, degree of hearing loss, word recognition ability, cognitive capacity and more.

References

Foo, C., Rudner, M., & Ronnberg, J. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology 18, 618-631.

Hallgren, M., Larsby, B. & Arlinger, S. (2006). A Swedish version of the hearing in noise test (HINT) for measurement of speech recognition. International Journal of Audiology 45, 227-237.

Hick, C. B., & Tharpe, A. M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech Language and Hearing Research 45, 573–584.

Humes, L. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology 18, 590-603.

Lunner, T. (2003). Cognitive function in relation to hearing aid use. International Journal of Audiology 42, (Suppl. 1), S49-S58.

Lunner, T., Rudner, M. & Ronnberg, J. (2009). Cognition and hearing aids. Scandinavian Journal of Psychology 50, 395-403.

Macken, W.J., Phelps, F.G. & Jones, D.M. (2009). What causes auditory distraction? Psychonomic Bulletin and Review 16, 139-144.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A, 58, 22-33.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing 34 (5).

Ronnberg, J. (2003). Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: a framework and a model. International Journal of Audiology 42 (Suppl. 1), S68-S76.

Ronnberg, J., Rudner, M. & Foo, C. (2008). Cognition counts: A working memory system for ease of language understanding (ELU). International Journal of Audiology 47 (Suppl. 2), S99-S105.

Rudner, M., Ronnberg, J. & Lunner, T. (2011). Working memory supports listening in noise for persons with hearing impairment. Journal of the American Academy of Audiology 22, 156-167.

Sorqvist, P. & Ronnberg, J. (2012). Episodic long-term memory of spoken discourse masked by speech: What role for working memory capacity? Journal of Speech Language and Hearing Research 55, 210-218.

Do the benefits of tinnitus therapy increase with time?

Parazzini, M., Del Bo, L., Jastreboff, M., Tognola, G. & Ravazzani, P. (2011). Open ear hearing aids in tinnitus therapy: An efficacy comparison with sound generators. International Journal of Audiology, 50(8), 548-553.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Tinnitus management can include a variety of treatment approaches, but the most effective usually combine counseling and sound therapy (Jastreboff, 1990; Jastreboff & Hazell, 2004). For many individuals with hearing loss and tinnitus, hearing aids may be the only tinnitus treatment they participate in. Specific treatment recommendations vary depending on a number of patient characteristics, such as degree of hearing loss and severity of the tinnitus disturbance.

Tinnitus Retraining Therapy (TRT; Jastreboff, 1995; Henry et al., 2002, 2003; Jastreboff & Jastreboff, 2006) is a widely known therapeutic approach that combines counseling and sound therapy. Based on the neurophysiological model of tinnitus, it stresses the importance of helping individuals understand their condition, reducing awareness of and attention to the tinnitus, providing or restoring appropriate auditory input and eventually training the auditory system to habituate to the tinnitus. Jastreboff & Hazell (2004) have proposed a classification system in which patients are assigned to one of five categories: 0 = mild or recent tinnitus, 1 = normal hearing and severe tinnitus, 2 = significant hearing loss, 3 = hyperacusis and 4 = prolonged worsening of tinnitus or hyperacusis following sound exposure. A patient's classification on this scale can guide subsequent treatment recommendations. Counseling educates patients about their hearing loss and tinnitus, helping them cope with the stress and annoyance of tinnitus in their everyday lives. Sound therapy aims to help patients habituate to their tinnitus: ear-level sound generators are employed for individuals without hearing loss (category 1, described above), whereas hearing aids are recommended for tinnitus sufferers with significant hearing loss (category 2).

Individuals who fall into the borderline area between categories 1 and 2 could theoretically be treated with either sound generators or hearing aids. Presently, there is little evidence to suggest that one of these approaches is superior to the other. Therefore, the purpose of Parazzini et al.’s study was to compare the efficacy of sound therapy treatments with sound generators versus open-fit hearing aids for tinnitus patients whose characteristics fall between categories 1 and 2.

Ninety-one participants completed the study. All participants met the requirements for tinnitus categorization between Jastreboff categories 1 and 2, with pure tone thresholds equal to or less than 25dB HL at 2kHz and greater than or equal to 25dB HL at frequencies higher than 2kHz. None of the participants had used hearing aids or been treated with tinnitus retraining therapy prior to the study. Participants were randomly assigned to one of two treatment groups: those fitted with small, ear-level sound generators (SG group) and those fitted with binaural open-fit hearing aids (HA group). All participants used the devices for at least 8 hours per day. Participants completed the Tinnitus Handicap Inventory (THI; Newman et al., 1996) at each of four appointments scheduled at three-month intervals over a year. Structured interviews were completed at each visit. During these interviews the following variables were examined: the effect of tinnitus on life, tinnitus loudness and tinnitus annoyance.
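To make the audiometric inclusion criterion concrete, here is a minimal sketch in Python (illustrative only; the test frequencies and the function name are assumptions, not details from the authors' protocol):

```python
# Minimal sketch (not the authors' code): checks whether an audiogram meets the
# borderline category 1/2 criterion described above -- thresholds of 25dB HL or
# better at 2kHz and below, and 25dB HL or poorer above 2kHz.

def meets_borderline_criterion(audiogram_db_hl: dict[int, float]) -> bool:
    """audiogram_db_hl maps test frequency in Hz to threshold in dB HL."""
    low = [t for f, t in audiogram_db_hl.items() if f <= 2000]
    high = [t for f, t in audiogram_db_hl.items() if f > 2000]
    return all(t <= 25 for t in low) and all(t >= 25 for t in high)

# Hypothetical mild, high-frequency sloping loss:
print(meets_borderline_criterion(
    {500: 15, 1000: 20, 2000: 25, 3000: 35, 4000: 45, 6000: 50}))  # True
```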

Analysis revealed a marked reduction in scores over time, beginning at the first session three months after initiation of therapy and continuing progressively over subsequent measurements every three months up to the last visit at 12 months. Results with ear-level sound generators and those with hearing aids were essentially identical. All three variables decreased by approximately 50% from the initial assessment to the final session at 12 months. The mean THI score decreased 52% from 57.9 to 27.9, the effect of tinnitus on life decreased 51% from 6.5 to 3.2, and tinnitus loudness ratings decreased from 7 to 3.6, a reduction of 48%. The common clinical criterion for significant improvement on the THI is a 20-point reduction (Newman et al., 1998); 62% of the participants in the current study reached this goal by 6 months and 74% reached it by 12 months. Applying a criterion of 40% improvement to reflect a reduction in tinnitus disturbance (as proposed by P.J. Jastreboff), 51% of the subjects achieved the goal by 6 months and 72% reached it by 12 months.
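To make the arithmetic behind these percentages and improvement criteria explicit, here is a short sketch using the group means reported above (the function names are ours, not the authors'):

```python
# Sketch of the improvement arithmetic reported above, using the published group means.

def percent_reduction(baseline: float, final: float) -> float:
    return 100 * (baseline - final) / baseline

print(round(percent_reduction(57.9, 27.9), 1))  # 51.8 -> reported as a 52% drop in mean THI
print(round(percent_reduction(6.5, 3.2), 1))    # 50.8 -> reported as 51% ("effect on life")
print(round(percent_reduction(7.0, 3.6), 1))    # 48.6 -> reported as 48% (loudness rating)

# Two ways an individual improvement can be flagged as clinically significant:
def meets_point_criterion(baseline: float, final: float, points: float = 20) -> bool:
    return (baseline - final) >= points               # 20-point THI criterion (Newman et al., 1998)

def meets_percent_criterion(baseline: float, final: float, pct: float = 40) -> bool:
    return percent_reduction(baseline, final) >= pct  # 40% criterion attributed to Jastreboff
```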

For all recorded variables, the effect of treatment time was always statistically significant, indicating that subjects improved steadily over the course of therapy. There was never a significant difference based on the type of device, indicating that sound generators and open-fit hearing aids were equally successful at alleviating tinnitus symptoms and reactions, at least for subjects in this population, whose characteristics fell between categories 1 and 2 in Jastreboff & Hazell's classification system.

Parazzini and colleagues evaluated tinnitus sufferers with mild high-frequency hearing loss and measured their responses for up to 12 months. Though there was no evidence of a plateau in the data, it remains unknown whether improvements would continue if treatment extended beyond this point. Longer-term studies would be valuable to determine at what point improvements plateau and whether longer measurement periods yield differences between hearing aids and sound generator devices.

The instruments used in Parazzini's study were either sound generators or hearing aids; none of the devices had both features. Many hearing aids available today offer tinnitus masking stimuli along with traditional amplification features. A similar paradigm examining hearing aids as well as combination devices could offer practical insight into tinnitus treatment options with currently available hearing instrument product lines. Because one goal of tinnitus retraining therapy is to restore auditory input and reduce awareness of the tinnitus, hearing aids could have particular benefits over sound generators: they stimulate the auditory system with meaningful environmental sounds, which may be more effective at drawing attention away from the tinnitus, in addition to masking it with amplified sound.

Open-fit, behind-the-ear hearing aids appear to be a good solution for tinnitus patients: the ear canal remains open and unoccluded, thereby reducing the likelihood of increased tinnitus awareness. Another consideration is whether receiver-in-canal (RIC) instruments would be an even better choice. RICs are as effective as traditional open-fit hearing aids at minimizing occlusion and offer the opportunity to provide a broader high-frequency range and more stable high-frequency gain than is available when sound is routed through thin or standard-thickness tubing (Alworth, et al., 2010). Extended high-frequency amplification would be expected to increase auditory input in the frequency range where tinnitus is often perceived. Therefore, RICs may more effectively mask the tinnitus via amplification of environmental sounds, reducing tinnitus awareness and, potentially, tinnitus annoyance and stress.

Parazzini’s study offers strong support for the use of open-fit hearing aids with tinnitus patients. Advances in hearing aid technology, such as feedback management, automatic signal processing, and the availability of tinnitus masking stimuli may make modern hearing aids even better suited for this purpose. As mentioned earlier, many opportunities exist for research in the treatment of tinnitus with hearing aids: effects of hearing aid style, sound therapy parameters, treatment and counseling strategies, and duration of treatment all remain white space for future researchers.

References

Alworth, L.N., Plyler, P.N., Bertges-Reber, M. & Johnstone, P.M. (2010). Microphone, performance and subjective measures with open canal hearing instruments. Journal of the American Academy of Audiology 21(4), 249-266.

Del Bo, L. & Ambrosetti, U. (2007). Hearing aids for the treatment of tinnitus. Progress in Brain Research 166, 341-345.

Henry, J.A., Jastreboff, M.M., Jastreboff, P.J., Schechter, M.A. & Fausti, S.A. (2002). Assessment of patients for treatment with tinnitus retraining therapy. Journal of the American Academy of Audiology 13, 523-544.

Henry, J.A., Jastreboff, M.M., Jastreboff, P.J., Schechter, M.A. & Fausti, S.A. (2003). Guide to conducting tinnitus retraining therapy initial and follow-up interviews. Journal of Rehabilitation Research and Development 40, 157-177.

Jastreboff, P.J. (1990). Phantom auditory perception (tinnitus): Mechanisms of generation and perception. Neuroscience Research 8, 221-254.

Jastreboff, P.J. & Hazell, J.W.P. (2004). Tinnitus Retraining Therapy: Implementing the Neurophysiological Model. Cambridge University Press.

Jastreboff, P.J. & Jastreboff, M.M. (2006). Tinnitus retraining therapy: A different view on tinnitus. Otorhinolaryngology and Related Specialties 68, 23-29.

Newman, C.W., Jacobson, G.P. & Spitzer, J.B. (1996). Development of the Tinnitus Handicap Inventory. Archives of Otolaryngology Head Neck Surgery 122, 143-148.

Newman, C.W., Sandridge, S.A. & Jacobson, G.P. (1998). Psychometric adequacy of the Tinnitus Handicap Inventory (THI) for evaluating treatment outcome. Journal of the American Academy of Audiology 9, 153-160.

Parazzini, M., Del Bo, L., Jastreboff, M., Tognola, G. & Ravazzani, P. (2011). Open ear hearing aids in tinnitus therapy: An efficacy comparison with sound generators. International Journal of Audiology, 50(8), 548-553.

Hearing Aids Alone can be Adjusted to Help with Tinnitus Relief

Shekhawat, G.S., Searchfield, G.D., Kobayashi, K. & Stinear, C. (2013). Prescription of hearing aid output for tinnitus relief. International Journal of Audiology 2013, early online: 1-9.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

The American Tinnitus Association (ATA) reports that approximately 50 million people in the United States experience some degree of tinnitus. About one third of tinnitus sufferers consider it severe enough to seek medical attention. Fortunately, only a small proportion of tinnitus sufferers experience symptoms that are debilitating enough that they feel they cannot function normally. But even when it is not debilitating, tinnitus still causes a number of disruptive effects for many people, such as sleep interference, difficulty concentrating, anxiety, frustration and depression (Tyler & Baker, 1983; Stouffer & Tyler, 1990; Axelsson, 1992; Meikle 1992; Dobie, 2004).

Therapeutic treatments for tinnitus include the use of tinnitus maskers, tinnitus retraining therapy, biofeedback and counseling. Though these methods provide relief for many, the tendency for tinnitus to co-occur with sensorineural hearing loss (Hoffman & Reed, 2004) leads the majority of individuals to attempt management of tinnitus with hearing aids alone (Henry, et al., 2005; Kochkin & Tyler, 2008; Shekhawat et al., 2013). There are a number of benefits that hearing aids may offer for individuals with tinnitus: audiological counseling during the fitting process may provide the individual with a better understanding of hearing loss and tinnitus (Searchfield et al., 2010); hearing aids may reduce the stress related to struggling to hear and understand; and amplification of environmental sound may reduce the perceived loudness of tinnitus (Tyler, 2008).

Prescriptive hearing aid fitting procedures are designed to improve audibility and compensate for hearing loss rather than to address tinnitus concerns. Yet the majority of studies show that hearing aids alone can be useful for tinnitus management (Shekhawat et al., 2013). The Better Hearing Institute reports that approximately 28% of hearing aid users achieve moderate to substantial tinnitus relief with hearing aid use (Tyler, 2008). Approximately 66% of these individuals said their hearing aids offered tinnitus relief most or all of the time and 29% reported that their hearing aids relieved their tinnitus all of the time. However, little is known about how hearing aids should be adjusted to optimize this apparent relief from tinnitus. In a study comparing DSL I/O v4.0 and NAL-NL1, Wise (2003) found that low compression kneepoints in the DSL formula reduced tinnitus awareness for 80% of subjects, but these settings also made environmental sounds more annoying. Conversely, subjects had higher word recognition scores with NAL-NL1 but did not obtain the same degree of tinnitus reduction. The proposed explanation was the increased low-intensity, low-frequency gain of the DSL I/O formula versus the increased high-frequency emphasis of NAL-NL1. Based on these findings, the author suggested the use of separate programs for regular use and for tinnitus relief.

Shekhawat and his colleagues began to address the issue of prescriptive hearing aid fitting for tinnitus by studying how output characteristics should be tailored to meet the needs of hearing aid users with tinnitus.  Specifically, they examined how modifying the high frequency characteristics of the DSL v5 (Scollie et al., 2005) prescription would affect subjects’ short term tinnitus perception.  Speech files with variable high frequency cut-offs and gain settings were designed and presented to subjects in matched pairs to arrive at the most favorable configuration for tinnitus relief.

Twenty-five participants with mild to moderate high-frequency sensorineural hearing loss were recruited. None had used hearing aids before, but all indicated interest in trying hearing aids to alleviate their tinnitus. All subjects had experienced chronic, bothersome tinnitus for at least two years, and the average rating of tinnitus loudness was 62.6 on a scale from 1 to 100, where 1 is very faint and 100 is very loud. Subjects had a mean Tinnitus Functional Index (TFI; Meikle et al., 2012) score of 39.30. Six participants reported unilateral tinnitus localized to the left side, 15 had bilateral tinnitus and 4 reported tinnitus localized to the center of the head, which is likely to be present bilaterally though not necessarily symmetrically. The largest proportion (40%) of the subjects described their tinnitus quality as tonal, whereas 28% described it as noise, 20% as crickets and 12% as a combination of sound qualities. Tinnitus pitch matching was conducted using pairs of tones in which subjects were repeatedly asked to indicate which tone more closely matched the pitch of their tinnitus. The average matched tinnitus pitch was 7.892kHz, with a range from 800Hz to 14.5kHz. When asked to describe the pitch of their tinnitus, most subjects defined it as "very high pitched", some said "high pitched" and some said "medium pitched".

There were 13 speech files, based on sentences spoken by a female talker, with variable high-frequency characteristics: three cut-off frequencies (2, 4 and 6kHz) and four high-frequency gain settings (+6, +3, -3 and -6dB). Stimuli were presented via a master hearing aid with settings programmed to match DSL v5 prescriptive targets for each subject's hearing loss. Pairs of sentences were presented in a round-robin tournament procedure and subjects were asked to choose which one interfered more with their tinnitus and made it less audible. A computer program tabulated the number of "wins" for each sentence and collapsed the information across subjects to determine a "winner", the sentence that was most effective at reducing tinnitus audibility. Real-ear measures were used to compare DSL v5 prescribed settings with the characteristics of the winning sentence, and outputs were recorded from 250Hz to 6000Hz.
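The round-robin tally can be pictured with a short sketch; this is our own illustration of the logic, not the authors' software, and the setting labels and the `choose` callback are hypothetical:

```python
# Assumed logic of a round-robin paired-comparison tally: every candidate setting is
# compared against every other, the listener picks the member of each pair that
# interferes more with the tinnitus, and wins are summed to rank the settings.

from itertools import combinations

def round_robin_winner(stimuli, choose):
    """`stimuli` is a list of setting labels; `choose(a, b)` returns the label the
    listener judged more effective at making the tinnitus less audible."""
    wins = {s: 0 for s in stimuli}
    for a, b in combinations(stimuli, 2):
        wins[choose(a, b)] += 1
    return max(wins, key=wins.get), wins

# Hypothetical run with three of the thirteen settings and a stand-in listener:
labels = ["2kHz_-6dB", "4kHz_-6dB", "2kHz_-3dB"]
always_first = lambda a, b: a   # placeholder for a real forced-choice response
print(round_robin_winner(labels, always_first))
```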

The most preferred output for interfering with tinnitus perception was a 6dB reduction at 2kHz, which was chosen by 26.47% of the participants.  A 6dB reduction at 4kHz was preferred by 14.74% of the subjects, followed by a 3dB reduction at 2kHz, which was preferred by 11.76%.  There were no significant differences between the preferences for any of these settings.

They found that when tinnitus pitch was lower than 4kHz, the preferred setting had lower output than DSL v5 across the frequency range. The difference was small (1-3dB) and became smaller as tinnitus pitch increased. When tinnitus pitch was between 4-8kHz, subjects preferred slightly less output than DSL v5 for high frequencies and slightly more output for low frequencies, though these differences were minimal as well. When tinnitus pitch was higher than 8kHz, participants preferred output that was slightly greater than DSL v5 at three frequencies: 750Hz, 1kHz and 6kHz. From these results a trend emerged: as tinnitus pitch increased, the preferred output moved from below the DSL v5 targets toward matching or slightly exceeding them, though the differences were not statistically significant.

Few studies investigating the use of hearing aids for tinnitus management have considered the perceived pitch of the tinnitus or the prescriptive method of the hearing aids (Shekhawat et al., 2013). The results of this study suggest that DSL v5 could be an effective prescriptive formula for hearing aids used in a tinnitus treatment plan, though the pitch of the individual's tinnitus might affect the optimal output settings. In general, they found that the higher the tinnitus pitch, the more closely the preferred output matched DSL v5 targets. This study agrees with an earlier report by Wise (2003) in which subjects preferred the DSL I/O prescription over NAL-NL1 for interfering with and reducing tinnitus. It is unknown how NAL-NL2 targets would fare in a similar comparison, though the NAL-NL2 formula may provide more tinnitus relief than its predecessor because it tends to prescribe slightly higher gain for low frequencies and lower compression ratios, which could provide more of a masking effect from environmental sounds. The NAL-NL2 formula should be studied as it pertains to tinnitus management, perhaps along with consideration of other factors including degree of loss, gender and prior experience with hearing aids, since these affect the targets prescribed by the updated formula (Keidser & Dillon, 2006; Keidser et al., 2008). The subjects in the present study had similar degrees of loss and all lacked prior experience with amplification; the NAL-NL2 formula takes these factors into consideration, prescribing slightly different gain based on degree of loss or for those who have used hearing aids before.

The authors recommend offering separate hearing aid programs for use when the listener desires tinnitus relief. Most fitting formulae are designed to optimize speech intelligibility and audibility, and based on previous reports, an individual might prefer one formula when speech understanding and communication is their top priority, and may prefer another, used with or without an added noise masker, when their tinnitus is bothering them.

They also propose that tinnitus pitch matching should be considered when programming hearing aids, though there is often considerable variability in results and testing needs to be repeated several times to increase reliability. Still, their study agrees with prior work in suggesting that the pitch of the tinnitus affects how likely hearing aids are to reduce it and whether output adjustments can affect how effective the hearing aids are to this end. Schaette et al. (2010) found that individuals with tinnitus pitch lower than 6kHz showed more reduction of tinnitus with hearing aid use than did subjects whose pitch was higher than 6kHz. This makes sense given the typical bandwidth of hearing aids, in which most gain is delivered below this frequency range. Not surprisingly, another study reported that hearing aids were most effective at reducing tinnitus when the pitch of the tinnitus was within the frequency response range of the hearing aids (McNeil et al., 2012). Though incorporating tinnitus pitch matching into a clinical protocol might seem daunting or time consuming, it is probably possible to use an informal bracketing procedure, similar to one used for MCLs, to get an idea of the individual's tinnitus pitch range. Testing can be repeated at subsequent visits to eventually arrive at a more reliable estimate. If pitch matching measures are not possible, clinicians can ask the patient about their perceived tinnitus pitch range and, with reference to the current study, adjust outputs in the 2kHz to 4kHz range to determine if the individual experiences improved tinnitus relief.
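As an illustration of how an informal bracketing procedure for tinnitus pitch might work, here is a small sketch; it is a conceptual outline under our own assumptions (geometric bisection, a forced-choice response, a fixed number of steps), not a validated clinical protocol:

```python
import math

def bracket_pitch(low_hz: float, high_hz: float, patient_prefers, steps: int = 6) -> float:
    """Present the two bounds, keep the half of the (log-frequency) range around the
    tone the patient judges closer to the tinnitus, and return the final midpoint."""
    for _ in range(steps):
        mid = (low_hz * high_hz) ** 0.5                # geometric midpoint
        if patient_prefers(low_hz, high_hz) == low_hz:
            high_hz = mid
        else:
            low_hz = mid
    # repeating the run at later visits and averaging improves reliability
    return (low_hz * high_hz) ** 0.5

# Hypothetical patient whose tinnitus pitch is near 8kHz (judged on log-frequency distance):
closer_to_8k = lambda a, b: a if abs(math.log(a / 8000)) < abs(math.log(b / 8000)) else b
print(round(bracket_pitch(800, 14500, closer_to_8k)))  # converges toward ~8000 Hz
```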

The authors propose a series of considerations for fitting hearing instruments on tinnitus sufferers and for configuring dedicated tinnitus programs (a hypothetical configuration sketch follows the list):

- noise reduction should be disabled;

- fixed activation of omnidirectional microphones introduces more environmental noise;

- in contrast to the previous point, full-time activation of directional microphones will increase the hearing aid noise floor;

- lower compression knee points increase amplification for softer sounds;

- expansion should be turned off to increase amplification of low-level background sound;

- efforts should be made to  minimize occlusion, which can emphasize the perception of tinnitus;

- ensuring physical comfort of the devices can minimize the user’s general awareness of their ears and the hearing aids, potentially reducing their attention to the tinnitus as well (Sheldrake & Jastreboff, 2004; Searchfield, 2006);

- user controls are important as they allow access to alternate hearing aid programs and sound therapy options.
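Pulled together, these considerations might look something like the following hypothetical "tinnitus relief" program. The parameter names and values are illustrative placeholders, not a real manufacturer's fitting interface:

```python
from dataclasses import dataclass

@dataclass
class TinnitusReliefProgram:
    noise_reduction: str = "off"                 # noise reduction disabled
    microphone_mode: str = "omnidirectional"     # admit more environmental sound
    compression_kneepoint_db_spl: int = 30       # low kneepoint: more gain for soft inputs
    expansion: str = "off"                       # keep low-level background sound audible
    venting: str = "open"                        # minimize occlusion
    masker_stimulus: str = "optional"            # combination devices can add a masking noise
    user_program_switch: bool = True             # let the user reach this program on demand

print(TinnitusReliefProgram())
```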

Dr. Shekhawat and his colleagues also underscore the importance of counseling tinnitus sufferers who choose hearing aids. Clinicians need to ensure that these patients have realistic expectations about the potential benefits of hearing aids and that they know the devices will not cure their tinnitus. Follow-up care is especially important to determine if adjustments or further training is necessary to improve the performance of the aids for all of their intended purposes.

Currently, little is known about how to optimize hearing aid settings for tinnitus relief and there are no prescriptive recommendations targeted specifically at tinnitus sufferers. Shekhawat and his colleagues propose that the DSL v5 formula may be an appropriate starting point for these individuals, whether in their basic program or in an alternate program designated for use when their tinnitus is particularly bothersome. Most important, however, is the observation that intentional manipulation of parameters common to most hearing aid fittings may increase the likelihood of tinnitus relief with hearing aid use. Further investigation into the optimization of these fitting parameters may reveal a prescriptive combination that audiologists can leverage to benefit individuals with hearing loss who also seek relief from the stress and annoyance of tinnitus.

 

References

American Tinnitus Association (ATA) reporting data from the 1999-2004 National Health and Nutrition Examination Survey (NHANES), conducted by the Centers for Disease Control and Prevention (CDC). www.ata.org, retrieved 9-10-13.

Axelsson, A. (1992). Conclusion to Panel Discussion on Evaluation of Tinnitus Treatments. In J.M. Aran & R. Dauman (Eds) Tinnitus 91. Proceedings of the Fourth International Tinnitus Seminar (pp. 453-455). New York, NY: Kugler Publications.

Cornelisse, L.E., Seewald, R.C. & Jamieson, D.G. (1995). The input/output formula: A theoretical approach to the fitting of personal amplification devices. Journal of the Acoustical Society of America 97, 1854-1864.

Dobie, R.A. (2004). Overview: Suffering From Tinnitus. In J.B. Snow (Ed) Tinnitus: Theory and Management (pp.1-7). Lewiston, NY: BC Decker Inc.

Henry, J.A., Dennis, K.C. & Schechter, M.A. (2005). General review of tinnitus: Prevalence, mechanisms, effects and management. Journal of Speech, Language and Hearing Research 48, 1204-1235.

Hoffman, H.J. & Reed, G.W. (2004). Epidemiology of tinnitus. In: J.B. Snow (ed.) Tinnitus: Theory and Management. Hamilton, Ontario: BC Decker.

Keidser, G. & Dillon, H. (2006). What’s new in prescriptive fittings down under? In: Palmer, C.V., Seewald, R. (Eds.), Hearing Care for Adults 2006. Phonak AG, Stafa, Switzerland, pp. 133-142.

Keidser, G., O’Brien, A., Carter, L., McLelland, M. & Yeend, I. (2008). Variation in preferred gain with experience for hearing aid users. International Journal of Audiology 47(10), 621-635.

Kochkin, S. & Tyler, R. (2008). Tinnitus treatment and effectiveness of hearing aids: Hearing care professional perceptions. Hearing Review 15, 14-18.

McNeil, C., Tavora-Vieira, D., Alnafjan, F., Searchfield, G.D. & Welch, D. (2012). Tinnitus pitch, masking and the effectiveness of hearing aids for tinnitus therapy. International Journal of Audiology 51, 914-919.

Meikle, M.B. (1992). Methods for Evaluation of Tinnitus Relief Procedures. In J.M. Aran & R. Dauman (Eds.) Tinnitus 91: Proceedings of the Fourth International Tinnitus Seminar (pp. 555-562). New York, NY: Kugler Publications.

Meikle, M.B., Henry, J.A., Griest, S.E., Stewart, B.J., Abrams, H.B., McArdle, R., Myers, P.J., Newman, C.W., Sandridge, S., Turk, D.C., Folmer, R.L., Frederick, E.J., House, J.W., Jacobson, G.P., Kinney, S.E., Martin, W.H., Nagler, S.M., Reich, G.E., Searchfield, G., Sweetow, R. & Vernon, J.A. (2012). The Tinnitus Functional Index:  Development of a new clinical measure for chronic, intrusive tinnitus. Ear & Hearing 33(2), 153-176.

Moffat, G., Adjout, K., Gallego, S., Thai-Van, H. & Collet, L. (2009). Effects of hearing aid fitting on the perceptual characteristics of tinnitus. Hearing Research 254, 82-91.

Schaette, R., Konig, O., Hornig, D., Gross, M. & Kempter, R. (2010). Acoustic stimulation treatments against tinnitus could be most effective when tinnitus pitch is within the stimulated frequency range. Hearing Research 269, 95-101.

Shekhawat, G.S., Searchfield, G.D., Kobayashi, K. & Stinear, C. (2013). Prescription of hearing aid output for tinnitus relief. International Journal of Audiology 2013, early online: 1-9.

Shekhawat, G.S., Searchfield, G.D. & Stinear, C.M. (2013, in press). Role of hearing aids in tinnitus intervention: A scoping review. Journal of the American Academy of Audiology.

Searchfield, G.D. (2006). Hearing aids and tinnitus. In: R.S. Tyler (ed). Tinnitus Treatment, Clinical Protocols. New York: Thieme Medical Publishers, pp. 161-175.

Searchfield, G.D., Kaur, M. & Martin, W.H. (2010). Hearing aids as an adjunct to counseling: Tinnitus patients who choose amplification do better than those that don’t. International Journal of Audiology 49, 574-579.

Sheldrake, J.B. & Jastreboff, M.M. (2004). Role of hearing aids in management of tinnitus. In: J.B. Snow, Jr. (ed.) Tinnitus: Theory and Management. London: BC Decker Inc, pp. 310-313.

Stouffer, J.L. & Tyler, R. (1990). Characterization of tinnitus by tinnitus patients. Journal of Speech and Hearing Disorders 55, 439-453.

Tyler, R.S. (Ed.). (2008). The Consumer Handbook on Tinnitus. Sedona, AZ: Auricle Ink Publishers.

Tyler, R. & Baker, L.J. (1983). Difficulties experienced by tinnitus sufferers. Journal of Speech and Hearing Disorders 48, 150-154.

Wise, K. (2003). Amplification of sound for tinnitus management: A comparison of DSL i/o and NAL-NL1 prescriptive procedures and the influence of compression threshold on tinnitus audibility. Section of Audiology, Auckland: University of Auckland.

 

Hearing Aid Behavior in the Real World

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Hearing aid signal processing offers proven advantages for many everyday listening situations. Directional microphones improve speech recognition in the presence of competing sounds, and noise reduction decreases the annoyance of surrounding noise while possibly improving ease of listening (Sarampalis et al., 2009). Expansion reduces the annoyance of low-level environmental noise as well as circuit noise from the hearing aid. Modern hearing aids typically offer automatic activation of signal processing features based on information derived from acoustic analysis of the environment. Some signal processing features can also be assigned to independent, manually accessible hearing aid memories. The opportunity to manually activate a hearing aid feature allows patients to make conscious decisions about the acoustic conditions of the environment and to access an appropriately optimized memory configuration (Keidser, 1996; Surr et al., 2002).

However, many hearing aid users who need directionality and noise reduction may be unable to manually adjust their hearing aids, due to physical limitations or an inability to determine the optimal setting for a situation. Other users may be reluctant to make manual adjustments for fear of drawing attention to the hearing aids and therefore to the hearing impairment. Cord et al. (2002) reported that as many as 23% of users with manual controls do not use their additional programs and leave the aids in a default mode at all times. Most hearing aids now offer automatic directionality and noise reduction, taking the responsibility for situational adjustments away from the user. This gives more hearing aid users access to the benefits of advanced signal processing while reducing the need for manual adjustments.

The decision to provide automatic activation of expansion, directionality, and noise reduction is based on their known benefits for particular acoustic conditions, but it is not well understood how these features interact with each other or with changing listening environments in everyday use. This poses a challenge to clinicians when it comes to follow-up fine-tuning, because it is impossible to determine what features were activated at any particular moment. Datalogging offers an opportunity to better interpret a patient's experience outside of the clinic or laboratory. Datalogging reports often include average daily or total hours of use as well as the proportion of time an individual has spent in quiet or noisy environments, but these are general summaries and do not indicate which signal processing features were active or what the acoustic environment was at the time of activation. For example, a clinician may be able to determine that an aid was in a directional mode 20% of the time and that the user spent 26% of their time listening to speech in the presence of noise, but the log does not indicate whether directional processing was active during these exposures to speech in noise. Therefore, the clinician must rely on user reports and observations to determine the appropriate adjustments, which may not reliably represent the array of listening experiences and acoustic environments that were encountered (Wagener et al., 2008).

In the study discussed here, Banerjee investigated the implementation of automatic expansion, directionality and noise management features. She measured environmental sound levels to determine the proportion of time individuals spent in quiet and noisy environments, as well as how these input levels related to activation of automatic features. She also examined bilateral agreement across a pair of independently functioning hearing aids to determine the proportion of time that the aids demonstrated similar processing strategies.

Ten subjects with symmetrical, sensorineural hearing loss were fitted with bilateral, behind-the-ear hearing aids. Ages ranged from 49 to 78 years, with a mean of 62.3 years. All of the subjects were experienced hearing aid users. Some subjects were employed and most participated in regular social activities with family and other groups. The hearing aids were 8-channel WDRC instruments programmed to match targets from the manufacturer's proprietary fitting formula. Activation of the automatic directional microphone required input levels of 60dB or above, with the presence of noise in the environment and speech located in front of the wearer. Automatic noise management resulted in gain reductions in one or more of the 8 channels, based on the presence of noise-like sounds classified as "wind, mechanical sounds or other sounds" according to their spectral and temporal characteristics. No gain reductions were applied for sounds classified as "speech". Expansion was active for inputs below the compression thresholds, which ranged from 54 to 27dB SPL.
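A simplified sketch of these activation rules may help clarify how the features are meant to interact; the thresholds come from the description above, but the overall logic is our own simplification, not the manufacturer's algorithm:

```python
# Simplified feature logic: expansion below the channel's compression threshold,
# directionality for loud inputs with surrounding noise and speech in front,
# and noise-management gain reduction only for non-speech sound classes.

def feature_state(input_db_spl: float, compression_threshold_db: float,
                  noise_present: bool, speech_in_front: bool, sound_class: str) -> dict:
    return {
        "expansion": input_db_spl < compression_threshold_db,
        "directional": input_db_spl >= 60 and noise_present and speech_in_front,
        "noise_management": sound_class in ("wind", "mechanical", "other"),
    }

# A quiet office versus a noisy restaurant with a talker in front:
print(feature_state(45, 50, noise_present=False, speech_in_front=False, sound_class="speech"))
print(feature_state(68, 50, noise_present=True, speech_in_front=True, sound_class="other"))
```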

All participants carried a Personal Digital Assistant (PDA) connected via programming boots to their hearing aids. The PDA logged the environmental broadband input level as well as the status of expansion, directionality, noise management and channel-specific gain reduction. Participants were asked to wear the hearing aids connected to the PDA for as much of the day as possible, and measurements were made at 5-sec intervals to allow time for hearing aid features to update several times between readings. The PDAs were worn with the hearing aids for a period of 4-5 weeks, and at the end of data collection a total of 741 hours of hearing aid use had been logged and analyzed.

Examination of the input level measurements revealed that subjects spent about half of their time in quiet environments with input levels of 50dB SPL or lower. Less than 5% of their time was spent in environments with input levels exceeding 65dB and the maximum recorded input level was 105dB SPL. This concurs with previous studies that reported high proportions of time spent in quiet environments such as living rooms or offices (Walden et al., 2004; Wagener et al., 2008).  The interaural difference in input level was 1dB about 50% of the time and exceeded 5dB only 5% of the time. Interaural differences were attributed to head shadow effects and asymmetrical sound sources as well as occasional accidental physical contact with the hearing aids, such as adjusting eyeglasses or rubbing the pinna.

Expansion was analyzed in terms of the proportion of time it was activated and whether the aids were in bilateral agreement. Expansion thresholds are meant to approximate low-level speech presented at 50dB.  In this study, expansion was active between 42% and 54% of the time, which is consistent with its intended activation, because about half the time the input levels were at or below 50dB SPL.  Bilateral agreement was relatively high at 77-81%.
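One way the per-interval log entries could be summarized is sketched below; the data layout is assumed for illustration and is not the study's analysis code:

```python
# Proportion of logged intervals with a feature active in each ear, plus the
# bilateral agreement across an independently functioning pair of hearing aids.

def summarize(left_active: list[bool], right_active: list[bool]) -> dict:
    n = len(left_active)
    agree = sum(l == r for l, r in zip(left_active, right_active))
    return {
        "left_active_pct": 100 * sum(left_active) / n,
        "right_active_pct": 100 * sum(right_active) / n,
        "bilateral_agreement_pct": 100 * agree / n,
    }

# Tiny hypothetical log of the expansion flag over six 5-second intervals:
left = [True, True, False, True, False, False]
right = [True, False, False, True, False, True]
print(summarize(left, right))
```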

Directional microphone status was measured according to the proportion of time that directionality was active and whether there was bilateral agreement. Again, directional status was consistent with the broadband input level measurements, in that directionality was active only about 10% of the time. The instruments were designed to switch to directional mode only when input levels were higher than 60dBA, and the broadband input measurements showed that participants encountered inputs higher than 65dB only about 5% of the time. Bilateral agreement for directionality was very high at 97%. Interestingly, the hearing aids were in directional mode only about 50% of the time in the louder environments.  This is likely attributable to the requirement for not only high input levels but also speech located in front of the listener in the presence of surrounding noise. A loud environment alone should not trigger directionality without the presence of speech in front of the listener.

Noise reduction was active 21% of the time with bilateral agreement of 95%. Again, this corresponds well with the input level measurements because noise reduction is designed to activate only in levels exceeding 50dB SPL. This does not indicate how often it was activated in the presence of moderate to loud noise, but as input levels rose, gain reductions resulting from noise management steadily increased as well. Gain reduction was 3-5dB greater in channels below 2250Hz than in the high frequency channels, consistent with the idea that environmental noise contains more energy in the low frequencies. Interaural differences in noise management were very small with a median difference in gain reduction of 0dB in all channels and exceeding 1dB only 5% of the time.

Bilateral agreement was generally quite high. Conditions in which there was less bilateral agreement may reflect asymmetric sound sources, accidental physical contact with the hearing instruments or true disagreement based on small differences in input levels arriving at the two ears. There may be everyday situations in which hearing aids might not perform in bilateral agreement, but this is not necessarily a disadvantage to the user. For instance, a driver in a car might experience directionality in the left aid but omnidirectional pickup from the right aid. This may be advantageous for the driver if there is another occupant in the passenger’s seat. Similarly, at a restaurant a hearing aid user might experience disproportionate noise or multi-talker babble from one side, depending on where he is situated relative to other people. Omnidirectional pickup on the quieter side of the listener with directionality on the opposite side might be desirable and more conducive to conversation. Similar arguments could be proposed for asymmetrical activation of noise management and its potential effects on comfort and ease of listening in noisy environments.

Banerjee's investigation is an important step toward understanding how hearing aid signal processing is activated in everyday conditions. Though datalogging helps provide an overall snapshot of usage patterns and listening environments, the coarseness of the reported data limits its utility for fine-tuning hearing aid parameters. This study, and others like it, will provide useful information for clinicians providing follow-up care to hearing aid users.

It is noteworthy that participants spent about 50% of their time in environments with broadband input levels of 50dB SPL or lower. While some participants were employed and others were not, this pattern appears to be an acoustic reality for typical hearing aid wearers. Subsequent studies with targeted samples would help determine how special features apply to the everyday environments of participants who lead more consistently active lifestyles.

Automatic, adaptive signal processing features have potential benefits for many hearing aid users, especially those who are unable to or prefer not to operate manual controls. However, appropriate recommendations and programming adjustments can only be made if clinicians understand how these features are implemented in everyday life. This study provides evidence that some features perform as designed and offers insight that clinicians can leverage when fine-tuning instruments based on real-world hearing aid behavior.

 

References

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

Cord, M., Surr, R., Walden, B. & Olsen, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Keidser, G. (1996). Selecting different amplification for different listening conditions. Journal of the American Academy of Audiology 7, 92-104.

Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research 52, 1230–1240.

Surr, R., Walden, B., Cord, M. & Olsen, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Wagener, K., Hansen, M. & Ludvigsen, C. (2008). Recording and classification of the acoustic environment of hearing aid users. Journal of the American Academy of Audiology 19, 348-370.

You’re getting older. Are your listening demands decreasing?

Wu, Y. & Bentler, R. (2012). Do older adults have social lifestyles that place fewer demands on hearing? Journal of the American Academy of Audiology 23, 697-711.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Activities and lifestyle are important considerations for potential hearing aid users because of the variability in listening environments that they may encounter. Individuals who work or have active social lives may be more likely to benefit from advanced signal processing and features like directionality and noise reduction than individuals with less social lifestyles in which a large proportion of time is spent at home or in quiet conditions.

It is often assumed that older individuals have quieter social lives and therefore fewer listening demands. This has been supported by a number of studies showing that older adults report less exposure to noisy environments and less communication demand in a variety of environments (Garstecki & Erler, 1996; Erdman & Demorest, 1998; Kricos, et al., 2007). Despite the fact that older adults are more likely to experience hearing loss and poorer word recognition ability, they generally report less hearing disability and less social or emotional impact from their hearing loss than younger adults do (Gatehouse, 1990, 1994; Gordon-Salant et al., 1994; Garstecki & Erler, 1996; Uchida et al., 2003). One explanation for this apparent contradiction is that older adults may have less demanding lifestyles than younger adults because they encounter fewer challenging listening situations, perhaps because they participate in fewer social activities and have smaller social networks.

The assumption that older adults are less prone to social interaction could be countered by the suggestion that retirement allows more time for social activities that could present communication challenges.  In fact, following retirement, older adults report having more time to travel, visit with family, and volunteer (Wiley et al., 2000).

The purpose of Wu and Bentler’s investigation was to compare auditory lifestyles of younger and older hearing-impaired adults and to study the relationships among age, auditory lifestyle and social lifestyle. They hypothesized that older adults would have quieter, less demanding lifestyles and that the relationship between age and auditory lifestyle would be affected by how socially active the older individuals were.

Twenty-seven hearing-impaired adults, ranging from 40 to 88 years of age, participated in the study. All subjects had symmetrical, sloping, sensorineural hearing losses, and the majority were experienced hearing aid users. Auditory lifestyle, or the auditory environments encountered in typical daily activities, was measured using portable noise dosimeters worn in a pack over the shoulder for 7 consecutive days. The dosimeters measured overall sound level over time. Though they were not capable of measuring signal-to-noise ratio (SNR) directly, previous work has indicated that high overall sound level is associated with low SNR (Pearsons et al., 1976; Banerjee, 2011). The authors therefore treated the dosimeter readings as an indirect measure of the SNRs encountered in the subjects' daily lives, and thus of their typical daily listening demands.

Participants supplemented the dosimeter measurements with written journals describing the listening situations that they participated in during the week. They recorded their listening activities as well as the listening environments that they encountered. Listening activities were classified according to 6 categories:

1. Conversation in small group (3 or fewer people)

2. Conversation in large group (more than four people)

3. Conversation on the phone

4. Speech listening – live talker

5. Speech listening – media

6. Little or no conversation

There were five environment categories:

1. Outdoors – traffic

2. Outdoors – other than traffic

3. Home – 10 people or fewer

4. Indoors other than home – 10 people or fewer

5. Crowd of people (more than 11 people)

Auditory lifestyle was evaluated with the Auditory Lifestyle and Demand Questionnaire (ALDQ; Gatehouse et al., 1999), which assesses the diversity of listening situations encountered by an individual. It is scaled according to frequency and importance of each situation and higher scores represent more diverse auditory lifestyles.

Social lifestyle was measured with three self-report questionnaires. The Social Network Index (SNI; Cohen et al., 1997) assesses the different social roles or identities held by an individual. For instance, a person could be a spouse, parent, employee or club member. Points are assigned for the various social roles assumed by the individual and higher point values indicate more active social lifestyles.

The second questionnaire, the Welin Activity Scale (WAS; Welin et al., 1992), measures the frequency of 32 activities divided into three categories: home (e.g., reading), outside home (e.g., dining at a restaurant) and social activities (e.g., visiting with friends). Subjects indicate how often they participate in each activity on a 3-point scale. Points are summed for all activities and for activities outside the home, and higher scores indicate more active lifestyles.

The third scale that was used to measure social lifestyle was the Social Convoy Questionnaire (SCQ; Kahn & Antonucci, 1980; Antonucci, 1986; Lang & Carstensen, 1994).  This questionnaire requires respondents to assign social partners to one of three concentric circles. The ratio of  inner circle partners to those in the outer two circles represents the closeness of social partners. Previous research has shown that younger adults have more peripheral partners than older adults, yielding lower SCQ scores than older adults in general (Lang & Carstensen, 1994).
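For orientation only, the scoring ideas described above can be sketched as follows; the published SNI, WAS and SCQ instruments have their own detailed scoring rules, so these two functions simply mirror the one-line descriptions in this summary:

```python
def was_score(activity_ratings: list[int]) -> int:
    """WAS-style total: sum of 3-point frequency ratings across the listed activities."""
    return sum(activity_ratings)

def scq_closeness(inner: int, middle: int, outer: int) -> float:
    """SCQ-style closeness: ratio of inner-circle partners to those in the outer two circles."""
    return inner / (middle + outer)

print(was_score([1, 3, 2, 2, 1]))          # hypothetical ratings for five activities -> 9
print(round(scq_closeness(4, 6, 10), 2))   # 4 close partners vs. 16 more peripheral -> 0.25
```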

Journal entries provided information about the proportion of time that subjects spent in speech-related activities, in quiet and noisy conditions. Participants in both age groups spent the highest proportion of time listening to media at home, followed by small-group conversations at home and small-group conversations away from home. The proportion of time spent in phone conversations or outdoors was relatively small for both groups. There were no significant differences between young and old subject groups for the percentages of time spent in any of the activity categories.

Analysis of the dosimetry measurements was conducted to determine the proportion of time participants spent in noisy conditions and the intensity of the sound they encountered. The sound levels encountered by both groups had a spread of approximately 30dB and, not surprisingly, the highest levels occurred in crowds and traffic while the lowest levels occurred at home. The measured sound levels were higher for younger listeners than for older listeners for most of the frequently encountered listening events, though age-related differences reached significance for only two events: small-group conversation in traffic and media listening in traffic.

ALDQ scores assessed the listeners' auditory lifestyles. Although older subjects had lower scores, suggesting less demanding auditory lifestyles, the difference between the two groups was not significant. Social lifestyle was measured with the SNI, WAS and SCQ scales. The only scale to yield a significant age-related difference was the SNI, on which younger listeners scored higher than older listeners. This difference is in keeping with previous reports and indicates that older listeners in this study had smaller and less diverse social networks than younger subjects.

Prior to any further analysis, journal entries and dosimetry information were examined to come up with an indicator of listening demand, which was labeled LD-65. This score represents the amount of time a subject spent in speech-related conditions in which the sound levels were 65dBA or higher. Listeners had indicated that levels of 65dBA were “somewhat noisy”, so levels above this point were assumed to be “noisy”. Therefore, LD-65 was used as a measure of listening demand because higher LD-65 scores indicate that listeners were participating in more speech-related activities in conditions that were likely to be noisy.
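The LD-65 indicator can be expressed compactly; the sketch below assumes a paired journal-plus-dosimeter log and a nominal interval length, both of which are illustrative rather than details from the paper:

```python
# LD-65 sketch: time spent in speech-related activities at sound levels of 65dBA or more.

SPEECH_ACTIVITIES = {"small_group", "large_group", "phone", "live_talker", "media"}

def ld65_minutes(intervals: list[tuple[str, float]], interval_minutes: float = 5.0) -> float:
    """`intervals` pairs a journal activity code with the dosimeter level (dBA)."""
    demanding = sum(1 for activity, level in intervals
                    if activity in SPEECH_ACTIVITIES and level >= 65)
    return demanding * interval_minutes

log = [("media", 55), ("small_group", 70), ("no_conversation", 72), ("large_group", 66)]
print(ld65_minutes(log))  # 10.0 -> ten demanding minutes in this toy log
```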

Significant correlations were found between age and SNI as well as between age and LD-65, indicating that older subjects had smaller social networks and were also likely to experience fewer listening demands than younger subjects. Additional analyses showed that the effect of age on listening demand was mediated by social lifestyle. In other words, age did not affect listening demand on its own as much as it did when social lifestyle was also considered.

The results of this study indicate that younger and older adults have similar auditory lifestyles, in terms of the proportion of time they spend in speech-related activities, in quiet and noisy conditions. But whether or not older individuals experience fewer listening demands is a more complicated issue.

Depending upon the analysis, the results of this study may suggest little age-related difference between groups, while contrasting analyses suggest younger adults encountered higher sound levels than older adults did in comparable listening situations. This difference may relate to behavioral as well as situational differences. For instance, younger adults might drive faster, listen to louder music, or drive on the highway more often than older adults, which would have the effect of increasing sound level measurements in these conditions. Similarly, if some of the noisy situations encountered by younger adults were in bars or clubs, they would yield higher sound level measurements than moderately noisy restaurants. Although the age difference for the dosimeter measurements was significant, the difference in mean levels was only 2.8dB. The authors question whether this difference is truly noticeable and appropriately point out that there were not strict controls on placement of the dosimeter packs, so variability in placement could have affected the measurements somewhat.

The findings of this study suggest that clinical decisions about a treatment plan should be guided less by assumptions about age than by the patient's social activities and lifestyle. Certainly, individuals of any age with diverse social activities will experience more listening demands than those with quieter lifestyles. Still, the workplace may present more complicated listening demands for reasons other than overall sound levels and duration of exposure. Employed hearing aid users may experience stress related to their communication ability when interacting with co-workers, managers, and supervisors that is not comparable to the listening demand experienced in purely social situations with similar sound levels. Because the selection of hearing aids can be affected by all of these variables, self-report inventories and detailed clinical histories that illuminate each individual's social and auditory lifestyle will help clinicians arrive at decisions appropriate for the patient.

 

References

Antonucci, T. (1986). Hierarchical mapping technique. Generations 10 (4), 10-12.

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22 (1), 34-48.

Cohen, S., Doyle, W., Skoner, D., Rabin, B. & Gwaltney, J. (1997). Social ties and susceptibility to the common cold. Journal of the American Medical Association 277 (24), 1940-1944.

Erdman, S. & Demorest, M. (1998). Adjustment to hearing impairment II: audiological and demographic correlates. Journal of Speech, Language and Hearing Research 41 (1), 123-136.

Garstecki, D. & Erler, S. (1996). Older adult performance on the communication profile for the hearing impaired. Journal of Speech and Hearing Research 39 (1), 28-42.

Gatehouse, S. (1990). The role of non-auditory factors in measured and self-reported disability. Acta Otolaryngologica Supplement 476, 249-256.

Gatehouse, S. (1994). Components and determinants of hearing aid benefit. Ear and Hearing 15 (1), 30-49.

Gatehouse, S., Elberling, C. & Naylor, G. (1999). Aspects of auditory ecology and psychoacoustic function as determinants of benefits from and candidature for non-linear processing hearing aids. In: Rasmussen, A.N., Osterhammel, P.A., Andersen, T., Poulsen, T., eds. Auditory Models and Non-Linear Hearing Instruments. Denmark: The Danavox Jubilee Foundation, 221-233.

Gordon-Salant, S., Lantz, J. & Fitzgibbons, P.J. (1994).  Age effects on measures of hearing disability. Ear and Hearing 15 (3), 262-265.

Kahn, R. & Antonucci, T. (1980). Convoys over the life course: attachment, roles and social support. In: Baltes, P.B., Brim, O.G., eds. Life-span Development and Behavior. San Diego, CA: Academic Press.

Kricos, P., Erdman, S., Bratt, G. & Williams, D. (2007). Psychosocial correlates of hearing aid adjustment. Journal of the American Academy of Audiology 18 (4), 304-322.

Lang, F. & Carstensen, L. (1994). Close emotional relationships in late life: further support for proactive aging in the social domain. Psychology of Aging 9 (2), 315-324.

Pearsons, K., Bennett, R. & Fidell, S. (1976). Speech Levels in Various Environments: Report to the Office of Resources and Development, Environmental Protection Agency. BBN Report #3281. Cambridge: Bolt, Beranek and Newman.

Uchida, Y., Nakashima, T., Ando, F., Niino, N. & Shimokata, H. (2003). Prevalence of self-perceived auditory problems and their relation to audiometric thresholds in a middle-aged to elderly population. Acta Otolaryngologica 123 (5), 618-626.

Welin, L., Larsson, B., Svardsudd, K., Tibblin, B. & Tibblin, G. (1992). Social network and activities in relation to mortality from cardiovascular diseases, cancer and other causes: a 12-year follow up of the study of men born in 1913 and 1923. Journal of Epidemiology and Community Health 46 (2), 127-132.

Wiley, T., Cruickshanks, K., Nondahl, D. & Tweed, S. (2000). Self-reported hearing handicap and audiometric measures in older adults. Journal of the American Academy of Audiology 11 (2), 67-75.

Wu, Y. & Bentler, R. (2012). Do older adults have social lifestyles that place fewer demands on hearing? Journal of the American Academy of Audiology 23, 697-711.

Does lip reading take the effort out of speech understanding?

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing, in press.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

For many people with hearing loss, visual cues from lip-reading provide valuable information that has been shown to improve speech recognition across a variety of listening conditions (Sumby & Pollack, 1954; Erber, 1975; Grant, et al., 1998). To date, however, it has remained unclear how visual cues, background noise, and hearing aid use interact to affect listening effort.

Listening effort is often described as the allocation of additional cognitive resources to the task of understanding speech. If cognitive resources are finite or limited, then two or more simultaneous tasks will be in competition with each other for cognitive resources. Decrements in performance on one task can be interpreted as an allocation of resources away from the task and toward another concurrent task. Therefore, listening effort is often measured with dual-task paradigms, in which listeners respond to speech stimuli while simultaneously performing another task or responding to another kind of stimulus. Allocation of cognitive resources in this way is thought to represent a competition for working memory resources (Baddeley & Hitch, 1974; Baddeley, 2000).
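
To make the dual-task logic concrete, the sketch below (hypothetical data and function names, not taken from the reviewed study) shows the way listening effort is typically quantified in such paradigms: the slowing of secondary-task reaction times under a concurrent listening task, relative to a baseline measured without one.

```python
import statistics

def listening_effort_ms(baseline_rts_ms, dual_task_rts_ms):
    """Quantify listening effort as the slowing of secondary-task
    reaction times under dual-task conditions relative to baseline.
    A larger positive value implies more cognitive resources were
    diverted to the listening task. (Illustrative only; the reviewed
    study's exact analysis may differ.)"""
    baseline = statistics.median(baseline_rts_ms)
    dual = statistics.median(dual_task_rts_ms)
    return dual - baseline

# Hypothetical reaction times (milliseconds) for one listener.
baseline = [412, 398, 430, 405, 421]        # secondary task alone
aided_in_noise = [455, 470, 448, 462, 481]  # secondary task while listening

print(f"Estimated listening effort: {listening_effort_ms(baseline, aided_in_noise):.0f} ms")
```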

The Ease of Language Understanding (ELU) model states that the process of understanding language involves matching phonological, syntactic, semantic and prosodic information to stored templates in long-term memory. When there is a mismatch between the incoming sensory information and the stored template, additional effort must be exerted to resolve the ambiguity of the message. This additional listening effort taxes working memory resources and may force the listener to allocate fewer resources to other tasks. Several studies have shown that conditions which degrade the speech signal, such as background noise (Murphy, et al., 2000; Larsby et al., 2005; Zekveld et al., 2010) and hearing loss (Rabbitt, 1990; McCoy et al., 2005), increase listening effort.

Individuals with reduced working memory capacity may be more negatively affected by conditions that degrade a speech signal. Previous reports have suggested that differences in working memory capacity are related to speech recognition in noise and to aided performance in noise (Lunner, 2003; Foo et al., 2007). The speed of retrieval from long-term memory may also affect performance and listening effort in adverse listening conditions (Van Rooij et al., 1989; Lunner, 2003). Because sensory inputs decay rapidly (Cowan, 1984), listeners with slow processing speed might not be able to fully process incoming information and match it to long-term memory stores before it decays. Therefore, they would have to allocate more effort and resources to the task of matching sensory input to long-term memory templates.

Just as some listener traits might be expected to increase listening effort, some factors might offset adverse listening conditions by providing more information to support the matching of incoming sensory inputs to long-term memory. The use of visual cues is well known to improve speech recognition performance and some studies indicate that individuals with large working memory capacities are better able to make use of visual cues from lipreading (Picou et al., 2011).  Additionally, listeners who are better lipreaders may require fewer cognitive resources to understand speech, allowing them to make better use of visual cues in noisy environments (Hasher & Zacks, 1979; Picou et al., 2011).

The purpose of Picou, Ricketts and Hornsby’s study was to examine how listening effort is affected by hearing aid use, visual cues and background noise. A secondary goal of the study was to determine how specific listener traits such as verbal processing speed, working memory and lipreading ability would affect the measured changes in listening effort.

Twenty-seven hearing-impaired adults participated in the study. All were experienced hearing aid users and had corrected binocular vision of 20/40 or better. Participants were fitted with bilateral behind-the-ear hearing aids with non-occluding, non-custom eartips. Advanced features such as directionality and noise reduction were turned off, though feedback management was left on in order to maximize usable gain. Hearing aids were programmed with linear settings to eliminate any potential effect of amplitude compression on listening effort, a relationship that has not yet been established.

A dual-task paradigm with a primary speech recognition task and a secondary visual reaction time task was used to measure listening effort. The speech recognition task used monosyllabic words spoken by a female talker (Picou, 2011), presented at 65 dB in the presence of multi-talker babble. Prior to the speech recognition task, individual SNRs for the auditory-only (AO) and auditory-visual (AV) conditions were determined at levels that yielded performance between 50 and 70% correct, because scores in this range are most likely to reveal changes in listening effort (Gatehouse & Gordon, 1990).
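
This summary does not specify how the individualized SNRs were tracked, so the following is only a generic illustration: a simple adaptive staircase (assumed function names, step sizes and psychometric function) that adjusts the SNR trial by trial until performance settles near a criterion level.

```python
import math
import random

def simulate_response(snr_db, srt_db=-2.0, slope_db=2.0):
    """Hypothetical listener whose probability of a correct response
    rises with SNR along a logistic psychometric function.
    Illustrative only; not the study's speech material."""
    p_correct = 1.0 / (1.0 + math.exp(-(snr_db - srt_db) / slope_db))
    return random.random() < p_correct

def adaptive_snr(start_snr_db=10.0, step_db=2.0, trials=30):
    """Simple 1-up/1-down staircase: lower the SNR after a correct
    response, raise it after an error. This version converges near
    the 50% point; weighted step sizes would target a higher
    percent-correct range such as 50-70%."""
    snr = start_snr_db
    reversals = []
    last_correct = None
    for _ in range(trials):
        correct = simulate_response(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)
        snr += -step_db if correct else step_db
        last_correct = correct
    tail = reversals[-6:] or [snr]   # guard against too few reversals
    return sum(tail) / len(tail)

print(f"Estimated individualized SNR: {adaptive_snr():.1f} dB")
```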

The reaction time task required participants to press a button in response to a rectangular visual probe. The probe was presented before each speech token so that it would not distract from the use of visual cues during the speech recognition task, but the two stimuli fell within a narrow enough interval (less than 500 msec) that cognitive resources had to be devoted to both tasks at the same time (Hick & Tharpe, 2002).

Three listener traits were examined with regard to listening effort in quiet and noisy conditions, with and without visual cues. Visual working memory was evaluated with the Automated Operation Span (AOSPAN) test (Unsworth et al., 2005). The AOSPAN requires subjects to solve math equations and memorize letters. After seeing a math equation and identifying the answer, subjects are shown a letter which disappears after 800 msec. Following a series of equations they are then asked to identify the letters that they saw, in the order that they appeared. Scores are based on the number of letters that are recalled correctly.
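
As a rough illustration of this kind of scoring, the sketch below (hypothetical letter sets; the published AOSPAN defines its own scoring rules in more detail) counts letters recalled in their correct serial positions, summed across sets.

```python
def aospan_partial_score(presented_sets, recalled_sets):
    """Illustrative partial-credit scoring: count every letter
    recalled in its correct serial position, summed across sets.
    (The published AOSPAN defines several scores; this sketch
    shows only the general idea.)"""
    score = 0
    for presented, recalled in zip(presented_sets, recalled_sets):
        for pos, letter in enumerate(presented):
            if pos < len(recalled) and recalled[pos] == letter:
                score += 1
    return score

# Hypothetical trials: three letter sets and a listener's recall.
presented = [["F", "P", "K"], ["H", "J", "L", "R"], ["S", "T"]]
recalled  = [["F", "P", "K"], ["H", "L", "J", "R"], ["S", "T"]]
print(aospan_partial_score(presented, recalled))  # -> 7
```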

Verbal processing speed was assessed with a lexical decision task (LDT) in which subjects were presented with a string of letters and were asked to indicate, as quickly as possible, if the letters formed a real word.  The test consisted of 50 common monosyllabic English words and 50 monosyllabic nonwords. The task reflects verbal processing speed because it requires the participant to match the stimuli to representations of familiar words stored in long-term memory (Meyer & Schvaneveldt, 1971; Milberg & Blumstein, 1981; Van Rooij et al., 1989). The reaction time to respond to the stimuli was used as a measure of verbal processing speed.
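
A minimal sketch of how such a measure might be summarized, assuming the common convention of taking the median reaction time of correct responses (the study's exact summary statistic is not described here):

```python
import statistics

def lexical_decision_speed(trials):
    """Summarize verbal processing speed as the median reaction time
    (ms) of correct lexical decisions. `trials` is a list of
    (is_word, response_is_word, rt_ms) tuples. Illustrative only."""
    correct_rts = [rt for is_word, resp, rt in trials if is_word == resp]
    return statistics.median(correct_rts)

# Hypothetical trials: real words and nonwords with response times.
trials = [(True, True, 642), (False, False, 715), (True, True, 598),
          (False, True, 820),  # error trial: excluded from the RT summary
          (True, True, 655)]
print(f"{lexical_decision_speed(trials):.0f} ms")
```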

Finally, lipreading ability was measured with the Revised Shortened Utley Sentence Lipreading Test (ReSULT; Updike, 1989). The test required participants to repeat sentences spoken by a female talker, when the talker’s face was visible but speech was inaudible. Responses were scored based on the number of words repeated correctly in each sentence.

Subjects participated in two test sessions. At the first session, vision and hearing were tested, individual SNR levels were determined for the speech recognition task, and AOSPAN, LDT and ReSULT scores were obtained. At the second session, subjects completed practice sequences with AO and AV stimuli, and then the combined speech recognition and visual reaction time tasks were administered in the eight counterbalanced conditions listed below (a simple way to enumerate this factorial design is sketched after the list). Due to the number of experimental conditions, only select outcomes of this study will be reviewed.

1. auditory only in quiet, unaided
2. auditory only in noise, unaided
3. auditory-visual in quiet, unaided
4. auditory-visual in noise, unaided
5. auditory only in quiet, aided
6. auditory only in noise, aided
7. auditory-visual in quiet, aided
8. auditory-visual in noise, aided
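
These eight conditions are simply the full crossing of a 2 x 2 x 2 design (amplification x modality x background). The sketch below, with assumed labels and an illustrative per-participant shuffle rather than the study's actual counterbalancing scheme, enumerates that design.

```python
import itertools
import random

# The 2 x 2 x 2 factorial design behind the eight listed conditions.
amplification = ["unaided", "aided"]
modalities = ["auditory only", "auditory-visual"]
environments = ["quiet", "noise"]

conditions = list(itertools.product(amplification, modalities, environments))

for i, (amp, mod, env) in enumerate(conditions, start=1):
    print(f"{i}. {mod} in {env}, {amp}")

# One simple (illustrative) way to counterbalance across participants:
# give each participant an independent random ordering of the eight
# conditions. A Latin-square scheme would be another option.
def condition_order(participant_id, seed=42):
    rng = random.Random(seed + participant_id)
    order = conditions[:]
    rng.shuffle(order)
    return order

print(condition_order(participant_id=1)[:3])  # first three conditions for subject 1
```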

The main analysis showed that background noise impaired speech recognition performance in all conditions, while hearing aid use and visual cues improved it. However, there were significant interactions between hearing aid use and visual cues, between hearing aid use and background noise, and between visual cues and background noise: the effect of hearing aid use depended on the test modality (AV or AO) and on background noise (present or absent), and the effect of visual cues depended on background noise. Hearing aid benefit was larger in AO conditions than in AV conditions and larger in quiet than in noise. The effect of noise was greater in the AV conditions than in the AO conditions, but the authors suggest that this could have been related to the individualized SNRs chosen for the test procedure.

On the reaction time task, background noise increased listening effort and hearing aid use reduced it, though variability was high and the effects of both variables were small. Additional analysis determined that the individual SNRs chosen for the dual task did not affect the hearing aid benefits that were measured. The availability of visual cues did not change overall reaction times, so visual cues were judged not to affect listening effort on this measure.

With regard to the listening effort benefits derived from hearing aid use, performance in quiet was strongly related to performance in noise; subjects who obtained benefit from hearing aid use in quiet also tended to obtain benefit in noise. In addition, individuals with slower verbal processing speed were more likely to derive benefit from hearing aid use. With regard to visual cues, there were several correlations with listener traits. Subjects who were better lipreaders derived more benefit from visual cues, and those with smaller working memory capacities also showed more benefit from visual cues; these correlations were significant in both quiet and noisy conditions. In quiet conditions there was also a positive correlation between verbal processing speed and benefit from visual cues, with better verbal processors showing more benefit from visual cues. There were no significant correlations between the effect of background noise and any of the measured listener traits.
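
For readers who want to see the shape of such an analysis, the sketch below runs a Pearson correlation on entirely hypothetical trait and benefit scores (none of these numbers come from the study).

```python
from scipy.stats import pearsonr

# Hypothetical per-subject scores (NOT data from the study):
# lipreading ability (ReSULT, % words correct) and visual-cue benefit
# (reduction in secondary-task reaction time, ms) in quiet.
lipreading_score   = [22, 35, 41, 48, 55, 60, 67, 71, 78, 84]
visual_cue_benefit = [-5, 4, 10, 8, 15, 18, 22, 19, 27, 31]

r, p = pearsonr(lipreading_score, visual_cue_benefit)
print(f"r = {r:.2f}, p = {p:.3f}")  # better lipreaders -> larger benefit
```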

The overall findings that visual cues and hearing aid use had positive effects on speech perception performance, and that background noise had a negative effect, were not surprising. Similarly, the findings that hearing aid benefit was reduced in AV conditions relative to AO conditions and in noisy relative to quiet conditions were consistent with previous reports (Cox & Alexander, 1991; Walden et al., 2001; Duquesnoy & Plomp, 1983). Because hearing aid use improves audibility, visual cues might not have been needed as much as they were in unaided conditions. Likewise, the presence of noise may have counteracted the improved audibility by masking a portion of the speech cues needed for correct understanding, especially with the omnidirectional, linear instruments used in this study.

The ability of hearing aids to decrease listening effort was significant, in keeping with previously published results, but the improvements were smaller than those reported in some previous studies. This could be related to the non-simultaneous timing of the tasks in the dual-task paradigm, but the authors surmise that it could also be related to the way their subjects’ hearing aids were programmed. In most previous studies, individuals used their own hearing aids, set to individually prescribed and modified settings. In the current study, all participants used the same hearing aid circuit set to linear, unmodified targets. Advanced features like directionality and noise reduction, which are likely to affect listening effort (Sarampalis et al., 2009), speech discrimination ability and perceived ease of listening in everyday situations, were turned off.

There was a significant relationship between verbal processing speed and hearing aid benefit, in that subjects with slower processing speed were more likely to benefit from hearing aid use.  Sensory input decays rapidly and requires additional cognitive effort when it is mismatched with long-term memory stores. Any factor that improves the sensory input may facilitate the matching process. The authors posited that slow verbal processors might benefit more from amplification because hearing aids improved the quality of the sensory input, thereby reducing the cognitive effort and time that would otherwise be required to match the input to long-term memory templates.

On average, the availability of visual cues did not have a significant effect on listening effort. This may be a surprising result given the well-known positive effects of visual cues for speech recognition. However, there was high variability among subjects and it was apparent that better lipreaders were more able to make use of visual cues, especially in quiet conditions without hearing aids. Working memory capacity was negatively correlated with benefit from visual cues, indicating that subjects with better working memory capacity derived less benefit from visual cues on average. The relationship between these variables is unclear, but the authors suggest that individuals with lower working memory capacities may be more susceptible to changes in listening effort and therefore more likely to benefit from additional sensory information such as visual cues.

Understanding how individual traits affect listening effort and susceptibility to noise is important to audiologists for a number of reasons, partly because we often work with older individuals. Working memory declines as a result of the normal aging process, and that decline may begin in middle age (Wang, et al., 2011). Similarly, the speed of cognitive processing slows and visual impairment becomes more likely with increasing age (Clay, et al., 2009). Many patients seeking audiological care may therefore present with deficits in working memory, verbal processing speed, and visual acuity. Though more research is needed to understand how these variables relate to one another, they should be considered in clinical evaluations and hearing aid fittings. Advanced hearing aid features that counteract the degrading effects of noise and reverberation may be particularly important for elderly or visually impaired hearing aid users. As the reviewed study suggests, these patients are likely to benefit from face-to-face conversation, slower speaking rates and reduced environmental distractions. Counseling sessions should include discussion of these issues so that patients and family members understand how they can use strategic listening techniques, in addition to hearing aids, to improve speech recognition and reduce cognitive effort.

References

Clay, O., Edwards, J., Ross, L., Okonkwo, O., Wadley, V., Roth, D. & Ball, K. (2009). Visual function and cognitive speed of processing mediate age-related decline in memory span and fluid intelligence. Journal of Aging and Health 21(4), 547-566.

Cox, R.M. & Alexander, G.C. (1991).  Hearing aid benefit in everyday environments. Ear and Hearing 12, 127-139.

Downs, D.W. (1982). Effects of hearing aid use on speech discrimination and listening effort. Journal of Speech and Hearing Disorders 47, 189-193.

Duquesnoy, A.J. & Plomp, R. (1983). The effect of a hearing aid on the speech reception threshold of hearing impaired listeners in quiet and in noise. Journal of the Acoustical Society of America 73, 2166-2173.

Erber, N.P. (1975). Auditory-visual perception of speech. Journal of Speech and Hearing Disorders 40, 481-492.

Foo, C., Rudner, M. & Ronnberg, J. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology 18, 618-631.

Gatehouse, S., Naylor, G. & Elberling, C. (2003). Benefits from hearing aids in relation to the interaction between the user and the environment. International Journal of Audiology 42 Suppl 1, S77-S85.

Gatehouse, S. & Gordon, J. (1990). Response times to speech stimuli as measures of benefit from amplification. British Journal of Audiology 24, 63-68.

Grant, K.W., Walden, B.F. & Seitz, P.F. (1998).  Auditory visual speech recognition by hearing impaired subjects. Consonant recognition, sentence recognition and auditory-visual integration. Journal of the Acoustical Society of America 103, 2677-2690.

Hick, C.B. & Tharpe, A.M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech, Language and Hearing Research 45, 573-584.

Hornsby, B.W.Y. (2013).  The Effects of Hearing Aid Use on Listening Effort and Mental Fatigue Associated with Sustained Speech Processing Demands. Ear and Hearing, in press.

Meyer, D.E. & Schvaneveldt, R.W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology 90, 227-234.

Milberg, W. & Blumstein, S.E. (1981). Lexical decision and aphasia: Evidence for semantic processing. Brain and Language 14, 371-385.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language and Hearing Research 54, 1416-1430.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing, in press.

Rudner, M., Foo, C. & Ronnberg, J. (2009). Cognition and aided speech recognition in noise: Specific role for cognitive factors following nine week experience with adjusted compression settings in hearing aids. Scandinavian Journal of Psychology 50, 405-418.

Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009) Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research 52, 1230–1240.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Unsworth, N., Heitz, R.P. & Schrock, J.C. (2005). An automated version of the operation span task. Behavioral Research Methods 37, 498-505.

Van Rooij, J.C., Plomp, R. & Orlebeke, J.F. (1989).  Auditive and cognitive factors in speech perception by elderly listeners. I: Development of test battery. Journal of the Acoustical Society of America 86, 1294-1309.

Walden, B.F., Grant, K.W. & Cord, M.T. (2001). Effects of amplification and speechreading on consonant recognition by persons with impaired hearing. Ear and Hearing 22, 333-341.

Wang, M., Gamo, N., Yang, Y., Jin, L., Wang, X., Laubach, M., Mazer, J., Lee, D. & Arnsten, A. (2011). Neuronal basis of age-related working memory decline. Nature 476, 210-213.