Starkey Research & Clinical Blog

Listening gets more effortful in your forties

DeGeest, S., Keppler, H. & Corthals, P. (2015) The effect of age on listening effort. Journal of Speech, Language and Hearing Research 58(5), 1592-1600.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

The ability to understand conversational speech in everyday situations is affected by many obstacles. A large proportion of our work involves determining the best treatment plan to help hearing-impaired patients overcome these obstacles. Though understanding speech in noise poses difficulty for hearing-impaired individuals of all ages, several studies have indicated that even in the absence of hearing loss, older adults face increased challenges in noisy environments (Pichora-Fuller & Singh, 2006; Duquesnoy, 1983; Dubno et al., 1984; Helfer & Freyman, 2008), and some reports suggest that middle-aged adults have significantly poorer speech recognition in noise than young adults (Helfer & Vargo, 2009).

Competing environmental noise reduces the audibility of acoustic speech information, increasing reliance upon visual, situational and contextual cues, which in turn requires a greater allocation of cognitive resources (Schneider et al., 2002), making listening more effortful. Increases in listening effort in noise could be related to declines in hearing sensitivity or in available cognitive resources, as both are known to decline with advancing age. But the fact that normal-hearing older individuals also experience more difficulty hearing in noise suggests that factors other than hearing loss may be involved, including working memory, processing speed and selective attention (Akeroyd, 2008; Pichora-Fuller et al., 1995).

The work of DeGeest and colleagues examined listening effort and speech recognition in adult subjects from 20 to 77 years of age. All of the subjects were determined to have normal “age-corrected” hearing thresholds from 250 Hz through 8000 Hz, though older subjects had average high-frequency pure-tone thresholds in the mild to moderate range of hearing loss. Subjects over age 60 were screened with the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005), but no specific cognitive performance measures were included in the data analysis. Listening effort was evaluated using a dual-task paradigm in which subjects performed a speech recognition task while simultaneously performing a visual memory task. Speech recognition ability was measured with 10-item sets of two-syllable digits, presented at two SNRs: +2 dB and -10 dB. Performance in the dual-task condition was compared to baseline measures of each task in isolation. Listening effort was defined as the change in performance on the visual memory task from baseline to the dual-task condition. Speech recognition ability was not expected to change from baseline when measured in the dual-task condition.
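
For readers less familiar with dual-task scoring, the sketch below illustrates how a dual-task decrement of this kind can be computed. It is a minimal illustration only; the function name and the percent-correct values are hypothetical and are not taken from DeGeest and colleagues.

```python
# Illustrative sketch: listening effort expressed as the decrement in
# secondary-task (visual memory) performance from baseline to the
# dual-task condition. Values are hypothetical, not data from the study.

def dual_task_cost(baseline_score: float, dual_task_score: float) -> float:
    """Return the proportional drop in secondary-task performance."""
    if baseline_score <= 0:
        raise ValueError("baseline_score must be positive")
    return (baseline_score - dual_task_score) / baseline_score

baseline_visual = 92.0     # visual memory task alone (% correct, hypothetical)
dual_plus2_snr = 81.0      # concurrent with speech recognition at +2 dB SNR
dual_minus10_snr = 68.0    # concurrent with speech recognition at -10 dB SNR

for label, score in [("+2 dB SNR", dual_plus2_snr), ("-10 dB SNR", dual_minus10_snr)]:
    cost = dual_task_cost(baseline_visual, score)
    print(f"Listening effort at {label}: {cost:.1%} decrement from baseline")
```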

The investigators found that listening effort increased in parallel with advancing age. Though subjects were determined to have “age-corrected” normal hearing, which meant that some participants had high-frequency hearing loss, the correlation between listening effort and age was maintained even when pure-tone thresholds and baseline word recognition performance were controlled for. Of note was the observation that listening effort began to increase notably at 40.5 years of age in the +2 dB SNR condition and at 44.1 years in the -10 dB SNR condition. The determination that listening effort begins to increase in the early-to-mid 40s is in agreement with other research reporting cognitive declines beginning around age 45 years (Singh-Manoux et al., 2012). The authors suggest that further investigations of listening effort and word recognition in middle-aged and older adults should examine cognitive ability in more detail, with specific tests of working memory, processing speed and selective attention included in the data analyses.

Although middle-aged adults are less likely than older adults to demonstrate outward effects of cognitive decline, they should not be regarded as immune to changes in cognitive ability and the resulting listening effort. Middle-aged individuals are more likely than their older counterparts to be working full time and may have more active lifestyles. Middle-aged hearing-impaired individuals who work in reverberant or noisy environments may face additional challenges to job performance if they are also experiencing changes in processing speed or memory, or if they struggle with even mild attentional deficits. These are tangible considerations that might affect the entirety of treatment plan development, from the selection of hearing aids and assistive technologies to the communication and counseling strategies chosen for the patient and their family members.

References

Akeroyd, M. (2008). Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. International Journal of Audiology 47 (Suppl 2), S53-S71.

DeGeest, S., Keppler, H. & Corthals, P. (2015) The effect of age on listening effort. Journal of Speech, Language and Hearing Research 58(5), 1592-1600.

Desjardins, J. & Doherty, K. (2014). The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear and Hearing 35 (6), 600-610.

Dubno, J., Dirks, D. & Morgan, D. (1984). Effects of age and mild hearing loss on speech recognition in noise. Journal of the Acoustical Society of America 76, 87-96.

Duquesnoy, J. (1983). The intelligibility of sentences in quiet and noise in aged listeners. Journal of the Acoustical Society of America 74, 1136-1144.

Helfer, K. & Freyman, R. (2008). Aging and speech-on-speech masking. Ear and Hearing 29, 87-98.

Keppler, H., Dhooge, I., Corthals, P., Maes, L., D’haenens, W., Bockstael, A. & Vinck, B. (2010). The effects of aging on evoked otoacoustic emissions and efferent suppression of transient evoked otoacoustic emissions. Clinical Neurophysiology 121, 359-365.

Nasreddine, Z., Phillips, M., Bedirian, V., Charbonneau, S., Whitehead, V., Collin, I. & Chertkow, H. (2005).  The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society 53, 695-699.

Pichora-Fuller, M., Schneider, B. & Daneman, M. (1995).  How young and old adults listen to and remember speech in noise. The Journal of the Acoustical Society of America 97, 593-608.

Pichora-Fuller, M. & Singh, G. (2006). Effects of age on auditory and cognitive processing: implications for hearing aid fitting and audiologic rehabilitation. Trends in Amplification 10, 29-59.

Sarampalis, A., Kalluri, S. & Edwards, B. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language and Hearing Research 52, 1230-1240.

Schneider, B., Daneman, M. & Pichora-Fuller, M. (2002). Listening in aging adults: from discourse comprehension to psychoacoustics. Canadian Journal of Experimental Psychology 56, 139-152.

Can hearing aids reduce listening fatigue?

Hornsby, B.W.Y. (2013). The Effects of Hearing Aid Use on Listening Effort and Mental Fatigue Associated with Sustained Speech Processing Demands. Ear and Hearing, Published Ahead-of-Print.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

A patient recently told me that he wanted to put on his glasses so he could hear me better. He was joking, but he was correct in understanding that visual cues help facilitate speech understanding. When engaged in conversation, a listener uses many sources of information to supplement the auditory stimulus. Visual cues from lip-reading, gestures and expressions, as well as situational cues, conversational context and the listener’s knowledge of grammar, all help limit the possible interpretations of the message. Conditions that degrade the auditory stimulus, such as reverberation, background noise and hearing loss, cause increased reliance on other cues in order for the listener to “fill in the blanks” and understand the spoken message. The use of these additional information sources amounts to an increased allocation of cognitive resources, which has also been referred to as increased “listening effort” (Downs, 1982; Hick & Tharpe, 2002; McCoy et al., 2005).

Research suggests that the increased cognitive effort required for hearing-impaired individuals to understand speech may lead to subjective reports of mental fatigue (Hetu et al., 1988; Ringdahl & Grimby, 2000; Kramer et al., 2006). This may be of particular concern to elderly people and those with cognitive, memory or other sensory deficits. The increased listening effort caused by hearing loss is associated with self-reports of stress, tension and fatigue (Copithorne, 2006; Edwards, 2007). In a study of factory workers, Hetu et al. (1988) reported that individuals with difficulty hearing at work needed increased attention, concentration and effort, leading to increased stress and fatigue. It is reasonable to expect that listening effort, as studied in the laboratory, is linked to the mental fatigue that hearing-impaired individuals report, but the relationship is not clear. Dr. Hornsby points out that laboratory studies typically evaluate short-term changes in resource allocation as listening ease is manipulated in the experimental task. However, perceived mental fatigue is more likely to result from sustained listening demands over a longer period of time, e.g., a work day or a social engagement lasting several hours (Hetu et al., 1988; Kramer et al., 2006).

The purpose of Dr. Hornsby’s study was to determine if hearing aids, with and without advanced features like directionality and noise reduction, reduce listening effort and subsequent susceptibility to mental fatigue. He also investigated the relationship between objective measures of speech discrimination and listening effort in the laboratory with subjective self-reports of mental fatigue.

Sixteen adult subjects participated in the study. All had bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Twelve subjects were employed full time and reported being communicatively active about 65% of the day; the remaining four subjects were not employed but reported being communicatively active about 61% of the day. Twelve subjects were bilateral hearing aid users and four were non-users. Subjects were screened to rule out cognitive dysfunction. All participants were fitted with bilateral behind-the-ear hearing aids with slim tubes and dome ear tips. The hearing aids were programmed in basic and advanced modes. In basic mode, the microphones were omnidirectional and all advanced features except feedback suppression were turned off. In advanced mode, the hearing aids were set to the manufacturer’s defaults, with adaptive directionality, noise reduction, reverberation reduction and wind noise reduction. All subjects wore the study hearing aids for at least 1-2 weeks before the experimental sessions began.

For the objective measurements of listening effort, subjects completed a word recognition in noise task paired with an auditory word recall task and a measure of visual reaction time. Subjects heard random sets of 8 to 12 monosyllabic words preceded by the carrier phrase, “Say the word…” They were asked to repeat the words aloud, and the percentage of correct responses was scored. In addition, subjects were asked to remember the last 5 words of each list. The end of the list was indicated by the word “STOP” on a screen in front of the listener. Subjects were instructed to press a button as quickly as possible when the visual prompt appeared. Because the lists varied from 8 to 12 items, subjects never knew when to expect the visual prompt. To control for variability in motor function, visual reaction time was also measured alone in a separate session, during which subjects were instructed to simply ignore the speech and noise.
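
As a rough illustration of how the separate baseline reaction-time measure can be used, the sketch below expresses dual-task reaction times relative to a single-task (motor-only) baseline. The condition names follow the study, but the millisecond values and the proportional metric are assumptions for illustration, not Hornsby's reported analysis.

```python
# Illustrative sketch: dual-task reaction times expressed relative to a
# single-task (motor-only) baseline, so that slowing attributable to the
# concurrent listening task can be compared across conditions.
# Values are hypothetical, not data from Hornsby (2013).
from statistics import mean

baseline_rt_ms = [412, 405, 398]          # visual reaction time measured alone
dual_task_rt_ms = {
    "unaided":  [598, 610, 645],
    "basic":    [540, 552, 571],
    "advanced": [528, 536, 549],
}

baseline = mean(baseline_rt_ms)
for condition, rts in dual_task_rt_ms.items():
    # Proportional increase over baseline; larger values imply more
    # cognitive resources diverted to the primary listening task.
    slowing = (mean(rts) - baseline) / baseline
    print(f"{condition:>8}: {slowing:.0%} slower than motor-only baseline")
```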

Subjective ratings of listening effort and fatigue were obtained with a five-item scale, administered prior to the experimental sessions. Three questions were adapted from the Speech, Spatial and Qualities of Hearing Questionnaire (SSQ; Gatehouse & Noble, 2004) and the remaining items were formulated specifically for the study. Questions were phrased to elicit responses related to that particular day (“Did you have to put in a lot of effort to hear what was being said in conversation today?”; “How mentally/physically drained are you right now?”). The final two questions were administered before and after the dual-task session and measured changes in attention and fatigue due to participation in the experimental tasks.

The word recognition in noise test yielded significantly better results in both aided conditions than in the unaided condition, though there was no difference between the basic and advanced aided conditions. The differences between unaided and aided scores varied considerably, suggesting that listening effort for individual subjects varied across conditions. Unaided word recall was significantly poorer than basic or advanced aided performance. There was a small, significant difference between the two aided conditions, with advanced settings outperforming basic settings. In follow-up planned comparison tests, however, the aided versus unaided difference was maintained while the difference between the two aided conditions did not reach significance.

The reaction time measurement also assessed listening effort, or the cognitive resources required for the word recognition task. Reaction times were analyzed according to listening condition as well as block, comparing the first three trials (initial block) to the last three trials (final block). Increases in reaction time across blocks represented the effect of task-related fatigue. Analysis by listening condition showed that unaided reaction times were significantly longer than reaction times in the advanced aided condition but not the basic aided condition. In other words, subjects required more time to react to the visual stimulus in the unaided condition than they did in the advanced aided condition. There was no significant difference between the two aided conditions. There was a significant main effect for block; reaction times increased over the duration of the task. There was no interaction between listening condition and block; changes in performance over time were consistent across unaided and aided conditions.

One purpose of the study was to investigate the effect of hearing aid use on mental fatigue. Interestingly, comparison of initial and final blocks indicated that word recognition scores increased about 1-2% over time, but this improvement did not vary across listening conditions. There was no decrease in word recall performance over time, nor did changes in recall over time vary significantly across listening conditions. Reaction time, however, did increase over time in all conditions, indicating a shift in cognitive resources away from the reaction time task and toward the primary word recognition task. Though the effect of hearing aid use was not significant, a trend suggested that fewer listeners showed increased reaction times in the aided conditions.

The questionnaire items administered before the session probed perceived effort and fatigue throughout the day, whereas the final two questions, administered before and after the task, probed focus, attention and mental fatigue related to the test session itself. In all listening conditions there was a significant increase in mental fatigue and difficulty maintaining attention after completion of the test session. A non-significant trend suggested some difference between unaided and aided conditions.

To identify other factors that may have contributed to variability, correlations for age, pure tone average, high frequency pure tone average, unaided word recognition score, SNR during testing, employment status and self-rated percentage of daily communicative activity were calculated with the subjective and objective measurements. None of the correlations were significant, indicating that none of these factors contributed substantially to the variability observed in the study.

Cognitive resource allocation is often studied with dual-task paradigms like the one used in this study. Decrements in performance on the secondary task indicate a shift in cognitive resources to the primary task. Presumably, factors that increase difficulty in the primary task will increase allocation of resources to that task. In these experiments, the primary task was a word recognition test and the secondary tasks were word recall and reaction time measurements. Improved word recall and quicker reaction times in aided conditions indicate that the use of hearing aids made the primary word recognition task easier, allowing listeners to allocate more cognitive resources to the secondary tasks. Furthermore, reaction times tended to increase less over time in aided conditions than in unaided conditions. These findings suggest that decreased listening effort with hearing aid use may have made listeners less susceptible to fatigue as the dual-task session progressed.

Though subjective reports in this study showed a general trend favoring the aided conditions for listening effort and concentration, there was not a significant improvement with hearing aid use. This contrasts with previous work that has shown reductions in subjective listening effort with the use of hearing aids (Humes et al., 1999; Hallgren et al., 2005; Noble & Gatehouse, 2006). The author notes that auditory demands vary widely and that participants were asked to rate their effort and fatigue based on “today”; this approach did not assess perceptions of sustained listening effort over a longer period of time and may not have detected subtle differences among subjects. For instance, working in a quiet office environment may not highlight the benefit of hearing aids or the difference between omnidirectional and directional microphone programs, simply because the acoustic environment did not trigger the advanced features often enough. In contrast, working in a school or restaurant might show a more noticeable difference between unaided listening, basic amplification and advanced signal processing. Though subjects reported being communicatively active for about the same proportion of the day, this inquiry did not account for sustained listening effort over long periods of time or for varying work and social environments. These differences would likely affect overall listening effort and fatigue, as well as the value of advanced hearing aid features.

Clinical observations support the notion that hearing aid use can reduce listening effort and fatigue. Prior to hearing aid use, hearing-impaired patients often report feeling exhausted from trying to keep up with social interactions or workplace demands. After receiving hearing aids, patients commonly report being more engaged, more able to participate in conversation and less drained at the end of the day. Though previous reports have supported the value of amplification for reducing listening effort, Hornsby’s study is the first to provide experimental evidence that hearing aid use may also reduce mental fatigue.

These findings have important implications for all hearing aid users, but may have particular importance for working individuals with hearing loss as well as elderly hearing impaired individuals.  It is important for any working person to maintain a high level of job performance and to establish their value at work. Individuals with hearing loss face additional challenges in this regard and often take pains to prove that their hearing loss is not adversely affecting their work.  Studies in workplace productivity underscore the importance of reducing distractions for maintaining focus, reducing stress and persisting at difficult tasks (Clements-Croome, 2000; Hua et al., 2011). Studies indicating that hearing aids reduce listening effort and fatigue, presumably by improving audibility and reducing the potential distraction of competing sounds, should provide additional encouragement for employed hearing-impaired individuals to pursue hearing aids.


References

Baldwin, C.L. & Ash, I.K. (2011). Impact of sensory acuity on auditory working memory span in young and older adults. Psychology of Aging 26, 85-91.

Bentler, R.A., Wu, Y., Kettel, J. (2008). Digital noise reduction: outcomes from laboratory and field studies. International Journal of Audiology 47, 447-460.

Clements-Croome, D. (2000). Creating the productive workplace. Publisher: London, E & FN Spon.

Copithorne, D. (2006). The fatigue factor: How I learned to love power naps, meditation and other tricks to cope with hearing-loss exhaustion. [Healthy Hearing Website, August 21, 2006].

Downs, M. (1982). Effects of hearing aid use on speech discrimination and listening effort. Journal of Speech and Hearing Disorders 47, 189-193.

Edwards, B. (2007). The future of hearing aid technology. Trends in Amplification 11, 31-45.

Gatehouse, S. & Noble, W. (2004). The Speech, Spatial and Qualities of Hearing Scale (SSQ). International Journal of Audiology 43, 85-99.

Hallgren, M., Larsby, B. & Lyxell, B. (2005). Speech understanding in quiet and noise, with and without hearing aids. International Journal of Audiology 44, 574-583.

Hetu, R., Riverin, L. & Lalande, N. (1988). Qualitative analysis of the handicap associated with occupational hearing loss. British Journal of Audiology 22, 251-264.

Hick, C.B. & Tharpe, A.M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech, Language and Hearing Research 45, 573-584.

Hua, Y., Loftness, V., Heerwagen, J. & Powell, K. (2011). Relationship between workplace spatial settings and occupant-perceived support for collaboration. Environment and Behavior 43, 807-826.

Humes, L.E., Christensen, L. & Thomas, T. (1999). A comparison of the aided performance and benefit provided by a linear and a two-channel wide dynamic range compression hearing aid. Journal of Speech, Language and Hearing Research 42, 65-79.

Kramer, S.E., Kapteyn, T.S. & Houtgast, T. (2006). Occupational performance: comparing normal-hearing and hearing-impaired employees using the Amsterdam Checklist for Hearing and Work. International Journal of Audiology 45, 503-512.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: Downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A 58, 22-33.

Noble, W. & Gatehouse, S. (2006). Effects of bilateral versus unilateral hearing aid fitting on abilities measured by the SSQ. International Journal of Audiology 45, 172-181.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language and Hearing Research 54, 1416-1430.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W. (2013). The effect of individual variability on listening effort in unaided and aided conditions. Ear and Hearing (in press).

Ringdahl, A. & Grimby, A. (2000). Severe-profound hearing impairment and health-related quality of life among post-lingual deafened Swedish adults. Scandinavian Audiology 29, 266-275.

Sarampalis,  A., Kalluri, S. & Edwards, B. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language and Hearing Research 52, 1230-1240.

Valente, M. & Mispagel, K. (2008) Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology 47(6), 329-336.

Considerations for Directional Microphone Use in the Classroom

Directional Benefit in Simulated Classroom Environments

Ricketts, T., Galster, J. and Tharpe, A.M. (2007)

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories, and this editorial does not reflect the opinions of the original authors.

Classroom acoustic environments vary widely and are affected by a number of factors, including reverberation and noise from within the classroom and adjacent areas. Signal-to-noise ratio (SNR) is known to affect speech perception for children with normal hearing and those with hearing loss (Crandell, 1993; Finitzo-Hieber & Tillman, 1978). Because listeners with hearing loss typically require more favorable SNRs to achieve the same performance as normal-hearing listeners, hearing-impaired students are particularly challenged by high levels of classroom noise.

FM systems are often recommended as a method for improving SNR in the classroom. However, they may not effectively convey voices other than the teacher’s, so children may be less able to hear comments or questions from other students. The additional bulk of ear-level FM systems may also make students reluctant to wear them, as wearing the system may be perceived as calling attention to their hearing loss. Because of these and other potential limitations of FM systems, hearing aids with directional microphones offer another opportunity to improve SNR for hearing-impaired children.

The benefits of directional microphones for speech perception in the presence of background noise are well known for adults (Bentler, 2005; Ricketts & Dittberner, 2002; Ricketts, Henry & Gnewikow, 2003). Research has shown that children also benefit from directionality in laboratory conditions (Gravel, Fausel, Liskow & Chobot, 1999; Hawkins, 1984; Kuk et al., 1999), but more information is needed on the effect of directional microphone use in classroom environments. The study summarized in this post evaluated directional microphone use in simulated classroom situations, as well as the subjective reactions of children and parents to omnidirectional and directional modes.

Twenty-six hearing-impaired subjects, ranging in age from 10 to 17 years, participated in the experiment. All but two had prior experience with hearing aids. Subjects were fitted bilaterally with behind-the-ear hearing instruments that were programmed with omnidirectional and directional modes. Digital noise reduction and feedback suppression features were disabled, and all participants were fitted with unvented, vinyl, full-shell earmolds.

This study consisted of three individual experiments. The first investigated directional versus omnidirectional performance in noise in five simulated classroom scenarios:

1) Teacher Front – speech stimuli presented in front of the listener

2) Teacher Back – speech presented behind the listener

3) Desk Work – speech presented in front of the listener, with the listener’s head oriented down toward the desk

4) Discussion – three speech sources at 0-degree and 50-degree azimuths (left and right), simulating a round-table discussion

5) Bench Seating – speech presented at 90-degree azimuths (left and right)

Speech recognition performance was evaluated in each of these scenarios using a modified version of the Hearing in Noise Test for Children (HINT-C; Nilsson, Soli & Sullivan, 1994). Speech stimuli were initially presented at 65 dB SPL for the five test conditions. Noise was presented from four loudspeakers positioned 2 meters from each corner of the room. For conditions 1-3, the noise level was 55 dB; for conditions 4 and 5, noise levels were fixed at 65 dB.

A second experiment examined the performance of omnidirectional versus directional modes in the presence of multiple talkers. Monosyllabic words from the NU-6 lists (Tillman & Carhart, 1966) were randomly presented at 63 dB SPL from loudspeakers positioned 1.5 meters from the listener at three angles: 0 degrees (in front of the listener), 135 degrees (back right) and 225 degrees (back left). Noise was presented at 57 dB SPL, yielding an SNR of +6 dB.
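
Treating the quoted presentation levels as nominal values (recall that the speech level in the first experiment was only the initial level), the signal-to-noise ratios follow directly from subtracting the noise level from the speech level; the short sketch below simply works through that arithmetic.

```python
# Nominal SNRs implied by the presentation levels quoted above
# (SNR in dB = speech level - noise level, assuming a common dB reference).
conditions = {
    "Experiment 1, conditions 1-3": (65, 55),   # speech, noise
    "Experiment 1, conditions 4-5": (65, 65),
    "Experiment 2, all talkers":    (63, 57),
}

for name, (speech_db, noise_db) in conditions.items():
    print(f"{name}: {speech_db - noise_db:+d} dB SNR")
```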

Not surprisingly, the results of the first experiment showed that directional performance was significantly better than omnidirectional performance for Teacher Front, Desk Work and Discussion conditions, but was significantly worse for the Teacher Back condition.  There was no significant difference between omnidirectional and directional modes for the Bench Seating condition.  In the Bench Seating condition, however, subjects were not specifically instructed to look at the speaker. If some subjects did look at the speaker and others did not, individual differences between omnidirectional and directional modes may have been obscured on average.  Improved performance was generally noted as the distance between speaker and listener decreased. This is consistent with previous studies with adult listeners, which showed increased directional benefit with decreasing distance (Ricketts & Hornsby, 2003, 2007).

The second experiment yielded no significant difference in performance between omnidirectional and directional modes when speech was presented from in front of the listener. When speech was presented from behind the listener, omnidirectional mode was significantly better than directional mode in both the back-right and back-left conditions. The authors surmised that directional benefit may have been reduced because subjects were told that all of the talkers were important; since two-thirds of the talkers were behind them, subjects may have been more focused on speech coming from the back.

The current study offers insight into the potential benefit of directional microphones in classroom environments. An FM system remains the primary recommendation for improving the signal-to-noise ratio of a teacher’s voice, but overhearing other students and multiple talkers can be compromised with FM technology. Additionally, because of social, cosmetic or financial concerns, FM use may not be feasible for many students. Therefore, directional hearing instruments will likely continue to be widely recommended for hearing-impaired schoolchildren. This study reported a directional benefit ranging from 2.2 to 3.3 dB, which is consistent with studies of adult listeners (Ricketts, 2001). Directional microphone use in classrooms may indeed be beneficial, then, as long as the teacher or other talker of interest is in front of the listener. However, for round-table or small-group arrangements, directionality could be detrimental, especially when talkers are behind the listener. The authors point out that many school scenarios involve multiple talkers or speech from the sides and back, so directional microphone benefit may be limited overall.
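
For context on how a benefit of “2.2 to 3.3 dB” is typically expressed, the sketch below applies the usual convention of subtracting the directional-mode speech recognition threshold (SRT, in dB SNR) from the omnidirectional SRT. The threshold values are hypothetical, not taken from the study, and the study’s own analysis may have differed.

```python
# Illustrative sketch: directional benefit expressed as the difference in
# speech recognition threshold (SRT, in dB SNR) between omnidirectional
# and directional modes. Positive values favor the directional mode.
# SRT values below are hypothetical, not data from Ricketts et al. (2007).
srt_db_snr = {
    # condition: (omnidirectional SRT, directional SRT)
    "Teacher Front": (-1.0, -3.8),
    "Teacher Back":  (-1.5, 1.0),   # directionality can hurt for talkers behind
}

for condition, (omni, directional) in srt_db_snr.items():
    benefit = omni - directional     # dB of SNR improvement with directional mode
    print(f"{condition}: directional benefit = {benefit:+.1f} dB")
```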

The results of these experiments underscore the importance of counseling for school-age hearing aid users, as well as their parents and teachers. It is common practice to recommend preferential seating close to the teacher in the front of the classroom. Improved performance with decreases in distance from the speech source, in this and other studies, shows that this recommendation is particularly important for hearing aid users, whether or not they are in a directional mode. Furthermore, hearing-impaired students should be instructed to face the teacher so they can benefit from directional processing as well as visual cues. This should also be discussed in detail with teachers so that efforts can be made to arrange classroom seating accordingly.

An incidental finding of the first experiment was that performance in the Desk Work condition was better than in the Teacher Front condition, even though the distance between speaker and listener was comparable. In the Desk Work condition, subjects were instructed to work on an assignment at the desk as they listened, so the listener’s head was angled slightly downward, which may have resulted in more optimal, horizontal positioning of the microphone ports and an increased directional effect. This finding demonstrates the importance of selecting the proper tubing or wire length to position the hearing aid near the top of the pinna and align the microphone ports along the intended plane.

Overall, directional processing improved performance for speech sources in front of the listener and reduced performance for speech sources behind the listener. The instruments in this study were full-time omnidirectional or directional instruments, so it is unknown how automatic, adaptive directional instruments would perform under similar conditions. Because of the prevalence of automatic directionality in current hearing instruments, this is a question with important implications for school-age hearing aid users.  Perhaps automatic directionality could provide better overall access to speech in many classroom environments, but controlled study is needed before specific recommendations can be made.

References

Anderson, K.L. & Smaldino, J.J. (2000). The Children’s Home Inventory of Listening Difficulties. Retrieved from http://www.edaud.org.

Bentler, R.A. (2005). Effectiveness of directional microphones and noise reduction schemes in hearing aids: A systematic review of the evidence. Journal of the American Academy of Audiology, 16, 473-484.

Crandell, C. (1993). Speech recognition in noise by children with minimal degrees of sensorineural hearing loss. Ear and Hearing 14, 210-216.

Finitzo-Hieber, T. & Tillman, T. (1978). Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. Journal of Speech and Hearing Research, 21, 440-458.

Gravel, J., Fausel, N., Liskow, C. & Chobot, J. (1999). Children’s speech recognition in noise using omnidirectional and dual-microphone hearing aid technology. Ear and Hearing, 20, 1-11.

Hawkins, D.B. (1984). Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. Journal of Speech and Hearing Disorders, 49, 409-418.

Kuk, F.K., Kollofski, C., Brown, S., Melum, A. & Rosenthal, A. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology, 10, 535-548.

Nilsson, M., Soli, S.D.  & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. The Journal of the Acoustical Society of America, 95, 1085-1099.

Resnick, S.B., Dubno, J.R., Hoffnung, S. & Levitt, H. (1975). Phoneme errors on a nonsense syllable test. The Journal of the Acoustical Society of America, 58, 114.

Ricketts, T., Lindley, G. & Henry, P. (2001). Impact of compression and hearing aid style on directional hearing aid benefit and performance. Ear and Hearing, 22, 348-361.

Ricketts, T. & Dittberner, A.B. (2002). Directional amplification for improved signal-to-noise ratio: Strategies, measurement and limitations. In M. Valente (Ed.), Hearing aids: Standards, options and limitations (2nd ed., pp. 274-346). New York: Thieme Medical.

Ricketts, T., Galster, J. & Tharpe, A.M. (2007). Directional benefit in simulated classroom environments. American Journal of Audiology, 16, 130-144.

Ricketts, T., Henry, P. & Gnewikow, D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear and Hearing, 24, 424-439.

Ricketts, T. & Hornsby, B. (2003). Distance and reverberation effects on directional benefit. Ear and Hearing, 24, 472-484.

Ricketts, T. & Hornsby, B. (2007). Estimation of directional benefit in real rooms: A clinically viable method. In R.C. Seewald (Ed.), Hearing care for adults: Proceedings of the First International Conference (pp. 195-206). Chicago: Phonak.

Tillman, T. & Carhart, R. (1966). An expanded test for speech discrimination using CNC monosyllables (Northwestern University Auditory Test No. 6) SAM-TB-66-55. Evanston, IL: Northwestern University Press.

Effects of Digital Noise Reduction on Children’s Speech Understanding

Effects of Digital Noise Reduction on Speech Perception for Children with Hearing Loss

Stelmachowicz, P., Lewis, D., Hoover, B., Nishi, K., McCreery, R., and Woods, W. (2010)

Because a great deal of everyday communication takes place in the presence of some level of background noise, hearing aid performance in noise is of interest to researchers, clinicians and hearing aid users. It is well established that directional microphones can improve signal-to-noise ratio (SNR) for adult hearing aid users as well as children (Valente et al., 1995; Gravel et al., 1999). It is generally accepted that digital noise reduction (DNR) will not improve speech recognition in noise (Levitt et al., 1993; Bentler et al., 2008). DNR has, however, resulted in improved overall sound quality judgments and decreased listening effort (Boymans & Dreschler, 2000; Walden et al., 2000; Sarampalis et al., 2009).

In noisy situations, adult listeners use a variety of cues to understand conversational speech, including visual cues, situational cues, and semantic and grammatical context. Young children with limited language skills may not be able to take advantage of this information and may rely more on acoustic cues. Indeed, most studies show that children require better SNRs than adults (Blandy & Lutman, 2005; Jamieson et al., 2004).

For hearing-impaired children, hearing aids are more than a tool for the recognition of speech; they facilitate speech and language acquisition and development. As the authors of the current study pointed out, “amplification must facilitate the development of early auditory skills, laying the foundation for the extraction of regularities in the speech signal and the development of language.” Therefore, improving access to speech is of particular importance for young hearing aid users. At the same time, it must also be determined that DNR or directional processing does not degrade the speech signal.

The purpose of the present study was to determine the effect of DNR on children’s perception of nonsense syllables, words and sentences in the presence of noise. Sixteen children with mild to moderately severe hearing loss participated in the study. Subjects were divided into two groups: 5- to 7-year-olds and 8- to 10-year-olds. The authors chose these age groups to evaluate the effect of development on the perception of speech stimuli with varying levels of context. Subjects were fitted with binaural behind-the-ear hearing aids with DNR and amplitude compression; directional microphones were not activated. Hearing aids were programmed to DSL 5.0 targets and settings were verified with real-ear measurements.

The children were presented with speech stimuli mixed with speech-shaped noise at SNRs of 0 dB, +5 dB and +10 dB. Three levels of context were represented:

  • VCV (vowel-consonant-vowel) nonsense syllables, 15 consonants combined with /a/
  • Monosyllabic words from the Phonetically Balanced Kindergarten List (PBK – Haskins 1949)
  • Meaningful sentences with three key words each (Bench et al. 1979)

Data analysis revealed that noise reduction did not have a significant positive or negative effect on performance. There was no significant main effect for context, but not surprisingly, post hoc testing revealed that scores for both age groups were higher for sentences than for nonsense syllables or monosyllabic words. Also not surprisingly, performance improved with increasing SNR for all types of speech stimuli. There was a significant effect of age, with older subjects demonstrating better overall performance than younger subjects. There was no interaction between age and noise reduction, indicating that the use of noise reduction did not affect the performance of younger and older subjects differently. There was no interaction between age group and context, indicating that both age groups benefitted equivalently from context.

The authors observed a great deal of variability among subjects, especially in the younger group. Though noise reduction did not significantly affect performance overall, the authors found that more than half of the younger subjects demonstrated poorer recognition of words in the DNR-on condition. The most common consonant confusions were /f/ for /t/, /g/ for /d/, and /b/ for /v/, suggesting that voicing information was perceived correctly but that place and manner of articulation were not easily distinguished. This finding is in agreement with previous results reported by Jamieson et al. (1995), who found that DNR processing resulted in either no improvement or a slight decrement in performance and that consonant place of articulation was particularly affected. Granted, there are several cues that affect consonant perception, and slight decrements in the acoustic representation of a consonant may be offset by the availability of other cues. For example, though /f/ and /t/ may be difficult to discriminate, a participant in face-to-face conversation benefits from visual cues to help identify these consonants. Still, the opportunity exists to further study the effect of noise reduction on consonant perception with adult and pediatric subjects.
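
As an aside for readers interested in how confusions such as “/f/ for /t/” are identified, the sketch below tallies stimulus-response pairs into confusion counts; the trial data are invented for illustration and do not come from Stelmachowicz and colleagues.

```python
# Illustrative sketch: tallying consonant confusions from (target, response)
# pairs, the kind of analysis behind statements like "/f/ was heard for /t/".
# The pairs below are invented examples, not data from the study.
from collections import Counter

trials = [
    ("t", "f"), ("t", "t"), ("t", "f"),
    ("d", "g"), ("d", "d"),
    ("v", "b"), ("v", "v"), ("v", "b"),
]

confusions = Counter((target, response) for target, response in trials
                     if target != response)

for (target, response), count in confusions.most_common():
    print(f"/{response}/ reported for /{target}/: {count} time(s)")
```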

Despite the minimal effect of noise reduction on speech recognition, all listeners in Jamieson’s 1995 study reported a strong preference for DNR processing when hearing continuous speech in a variety of listening environments. This leads to an important consideration regarding the use of noise reduction processing in hearing aids for children. Although the current investigation did not address listening preference, previous studies with adults have often shown positive effects of noise reduction processing on listening effort and sound quality. The current authors suggested that if this were also the case for children, it could improve attentiveness and increase “time on task” in difficult listening situations. This is an interesting hypothesis, since attention and focus are essential for understanding speech in noise and many hearing-impaired children may demonstrate attention deficits.

Audiologists working with pediatric patients should consider noise reduction settings carefully. Although there were no statistically significant effects of noise reduction on speech perception in this study, the decrease in word recognition scores for younger children in the DNR-on condition is a concern and warrants further study. The authors point out that a child’s ability to recognize and understand speech requires ongoing, consistent auditory experience. Previous use of amplification, age of identification and consistency of hearing aid use may have influenced the results of this study and may affect success with DNR processing in general. The effect of degree of hearing loss should also be considered, as individuals with severe hearing losses could be adversely affected by even small decrements in speech information resulting from DNR processing.

Clinically, an important highlight of this study is the fact that individual performance among children is highly variable. Digital noise reduction has the potential to ease listening but may compromise the clarity of speech, and directional microphones may improve access to speech from the front but risk compromising audibility for off-axis talkers. These considerations suggest that some advanced features should be reserved for older children and specific environments. Among that older population, there may be some inclination to allow manual adjustment of hearing aid settings; however, Ricketts and Galster (2008) correctly point out that children cannot be expected to adjust manual directionality controls reliably. This ultimately results in a fitting rationale that either avoids some advanced features or allows them to function automatically, with the assumption that they will only be active in appropriate situations and will “do no harm” with regard to speech recognition.

Further study of the perceptual effects of noise reduction and subjective preferences in children is needed. The possibility remains that DNR may offer hearing-impaired children other benefits such as improved attention and comfort in noise, possibly leading to increased satisfaction and compliance from pediatric patients.

References

Bench, J., Kowal, A., & Bamford, J. (1979). The BKB sentence lists for partially-hearing children. British Journal of Audiology 13, 108-112.

Bentler, R., Wu, Y.H., Kettel, J. (2008). Digital noise reduction: Outcomes from laboratory and field studies. International Journal of Audiology 47, 447-460.

Blandy, S. & Lutman, M. (2005). Hearing threshold levels and speech recognition in noise in 7-year-olds. International Journal of Audiology 44, 435-443.

Boymans, M., & Dreschler, W.A. (2000). Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality.  Audiology 39, 260-268.

Gravel, J.S., Fausel, N., Liskow, C. (1999). Children’s speech recognition in noise using omnidirectional and dual microphone hearing aid technology. Ear and Hearing 20, 1-11.

Haskins, H.A. (1949). A phonetically balanced test of speech discrimination for children. Master’s thesis, Northwestern University, Evanston, IL.

Jamieson, D.G., Kranjc, G., Yu, K. (2004). Speech intelligibility of young school-aged children in the presence of real-life classroom noise. Journal of the American Academy of Audiology 15, 508-517.

Levitt, H., Bakke, M., Kates, J. (1993). Signal processing for hearing impairment. Scandinavian Audiology Supplement 38, 7-19.

Ricketts, T.A. & Galster, J. (2008). Head angle and elevation in classroom environments: implications for amplification. Journal of Speech, Language and Hearing Research 51, 516-525.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech Language and Hearing Research, 52, 1230-1240.

Valente, M., Fabry, D., Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T. (2000). Comparison of benefits provided by different hearing aid technologies. Journal of the American Academy of Audiology 11, 540-560.

A comparison of Receiver-In-Canal (RIC) and Receiver-In-The-Aid (RITA) hearing aids

Article of interest:

The Effects of Receiver Placement on Probe Microphone, Performance and Subjective Measures with Open Canal Hearing Instruments

Alworth, L., Plyler, P., Bertges-Reber, M. & Johnstone, P. (2010)

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories, and this editorial does not reflect the opinions of the original authors.

Open-fit behind-the-ear hearing instruments are favored by audiologists and patients alike because of their small size and discreet appearance, as well as their ability to minimize occlusion. The performance of open-fit instruments with Receiver-In-The-Aid (RITA) and Receiver-In-Canal (RIC) configurations has been compared to unaided conditions and to traditional, custom-molded instruments. However, few studies have examined the effect of receiver location on performance by comparing RITA and RIC instruments to each other. In the current paper, Alworth and her associates were interested in the effect of receiver location on:

- occlusion

- maximum gain before feedback

- speech perception in quiet and noise

- subjective performance and listener preferences

Theoretically, RIC instruments should outperform RITA instruments for a number of reasons. Delivery of sound through the thin tube of a RITA instrument can cause peaks in the frequency response, resulting in upward spread of masking (Hoen & Fabry, 2007). Such masking effects are of particular concern for typical open-fit hearing aid users: individuals with high-frequency hearing loss. RIC instruments are also capable of a broader bandwidth than RITA aids (Kuk & Baekgaard, 2008), and the greater distance between microphone and receiver may lower the risk of feedback (Ross & Cirmo, 1980) and increase the maximum gain available before feedback (Hoen & Fabry, 2007; Hallenbeck & Groth, 2008).

Twenty-five subjects with mild to moderate, high-frequency sensorineural hearing loss participated in the study. Fifteen had no prior experience with open-canal hearing instruments, whereas ten had some prior experience. Each subject was fitted bilaterally with RIC and RITA instruments with identical signal processing characteristics, programmed to match NAL-NL1 targets. Directional microphones and digital noise reduction features were deactivated. Subjects used one instrument type (RIC or RITA) for six weeks before testing and then wore the other type for six weeks before being tested again. The order of instrument types was counterbalanced across subjects.

Probe microphone measures were conducted to evaluate occlusion and maximum gain before feedback. Speech perception was evaluated with the Connected Speech Test (CST; Cox et al., 1987), the Hearing in Noise Test (HINT; Nilsson et al., 1994), the High-Frequency Word List (HFWL; Pascoe, 1975) and the Acceptable Noise Level (ANL) test (Nabelek et al., 2004). Subjective responses were evaluated with the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox & Alexander, 1995), overall listener preferences for quiet and noise, and satisfaction ratings for five criteria: sound quality, appearance, retention and comfort, speech clarity, and ease of use and care.

Real-Ear Occluded Response measurements showed minimal occlusion for both types of instruments in this study. Although there was more occlusion overall for RIC instruments, the difference between RIC and RITA hearing instruments was not significant. Overall maximum gain before feedback did not differ between RIC and RITA instruments. However, when analyzed by frequency, the authors found significantly greater maximum gain in the 4000-6000 Hz range for RIC hearing instruments.

On the four speech tests, there were no significant differences between RITA and RIC instruments. Furthermore, there were no significant improvements for aided listening over unaided, except for experienced users with RIC instruments on the Connected Speech Test (CST). It appears that amplification did not significantly improve scores in quiet for either instrument type because of ceiling effects: the high unaided speech scores indicated that the subjects in this study, because of their audiometric configurations, already had sufficient access to high-frequency speech cues without amplification. Aided performance in noise was significantly poorer than unaided performance on the HINT, but no other significant differences were found between aided and unaided conditions. This finding was in agreement with previous studies that also found degraded HINT scores for aided versus unaided conditions (Klemp & Dhar, 2008; Valente & Mispagel, 2008).

APHAB responses indicated better aided performance for both instrument types than for unaided conditions on all APHAB categories except aversiveness, in which aided performance was worse than unaided. There were no significant differences between RIC and RITA instruments. However, satisfaction ratings were significantly higher for RIC hearing instruments. New users reported more satisfaction with the appearance of RIC instruments; experienced users indicated more satisfaction with appearance, retention, comfort and speech clarity. Overall listener preferences were similar, with 80% of experienced users and 74% of new users preferring RIC instruments over RITA instruments.

The findings of Alworth and colleagues provide useful information for clinicians and their open-fit hearing aid candidates. Because they provided significantly more high-frequency gain before feedback than RITA instruments, RIC instruments may be more appropriate for patients with significant high-frequency hearing loss. Indeed, this result may suggest that RIC instruments should be the preferred recommendation for open-fit candidates. The results of this study also underscore the importance of using subjective measures with hearing aid patients: objective speech discrimination testing did not yield significant performance differences between RIC and RITA instruments, but participants showed a significant preference for RIC instruments.

Further information is needed to compare performance in noise with RIC and RITA instruments. In this study and others, some objective scores and subjective ratings were poorer for aided conditions than for unaided conditions. It is important to note that in the current study, all speech and noise was presented at 0° azimuth with directional microphones disabled. In real-life environments, users would likely have directional microphones available and would participate in conversations with various noise sources surrounding them. Previous work has shown significant improvements with directionality in open-fit instruments (Valente & Mispagel, 2008; Klemp & Dhar, 2008). Future work comparing directional RIC and RITA instruments in a variety of listening environments would be helpful for clinical decision making.

Although the performance effects and preference ratings reported here support the recommendation of RIC instruments, clinicians should still consider other factors when discussing options with individual patients. For instance, small ear canals may preclude the use of RIC instruments because of retention, comfort or occlusion concerns. Patients with excessive cerumen may prefer RITA instruments because of easier maintenance and care, while those with cosmetic concerns may prefer the smaller size of RIC instruments. Every patient’s individual characteristics and concerns must be considered, but the potential benefits of RIC instruments warrant further examination and may indicate that this receiver configuration should be recommended over slim-tube fittings.

References

Alworth, L.N., Plyler, P.N., Rebert, M.N., & Johnstone, P.M. (2010). The effects of receiver placement on probe microphone, performance, and subjective measures with open canal hearing instruments. Journal of the American Academy of Audiology, 21, 249-266.

Cox, R.M., & Alexander, G.C. (1995). The Abbreviated Profile of Hearing Aid Benefit. Ear and Hearing, 16, 176-186.

Cox, R.M., Alexander, G.C. & Gilmore, C. (1987). Development of the Connected Speech Test (CST). Ear and Hearing, 8, 119-126.

Hallenbeck, S.A., & Groth, J. (2008). Thin-tube and receiver-in-canal devices: there is positive feedback on both! Hearing Journal, 61(1), 28-34.

Hoen, M. & Fabry, D. (2007). Hearing aids with external receivers: can they offer power and cosmetics? Hearing Journal, 60(1), 28-34.

Klemp, E.J. & Dhar, S. (2008). Speech perception in noise using directional microphones in open-canal hearing aids. Journal of the American Academy of Audiology, 19(7), 571-578.

Kuk, F. & Baekgaard, L. (2008). Hearing aid selection and BTEs: choosing among various “open ear” and “receiver in canal” options. Hearing Review, 15(3), 22-36.

Nabelek, A.K., Tampas, J.W. & Burchfield, S.B. (2004). Comparison of speech perception in background noise with acceptance of background noise in aided and unaided conditions. Journal of Speech and Hearing Research, 47, 1001-1011.

Nilsson, M., Soli, S. & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception threshold in quiet and in noise. Journal of the Acoustical Society of America, 95, 1085-1099.

Pascoe, D. (1975). Frequency responses of hearing aids and their effects on the speech perception of hearing-impaired subjects. Annals of Otology, Rhinology and Laryngology, 84(5, Suppl. 23, Pt. 2).

Valente, M. & Mispagel, K. (2008). Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology, 47, 329-336.