Starkey Research & Clinical Blog

Patients with higher cognitive function may benefit more from hearing aid features

Ng, E.H.N., Rudner, M., Lunner, T., Pedersen, M.S., & Ronnberg, J. (2013). Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. International Journal of Audiology, Early Online, 1-9.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Research reports as well as clinical observations indicate that competing noise increases the cognitive demands of listening, an effect that is especially impactful for individuals with hearing loss (McCoy et al., 2005; Picou et al., 2013; Rudner et al., 2011). Listening effort is a cognitive dimension of listening that is thought to represent the allocation of cognitive resources needed for speech recognition (Hick & Tharpe, 2002). Working memory is a further dimension of cognition that involves the simultaneous processing and storage of information; its effect on speech processing may vary depending on the listening conditions (Rudner et al., 2011).

The concept of effortful listening can be characterized with the Ease of Language Understanding (ELU) model (Ronnberg, 2003; Ronnberg et al., 2008). In quiet conditions when the speech is audible and clear, the speech input is intact and is automatically and easily matched to stored representations in the lexicon. When speech inputs are weak, distorted or obscured by noise, mismatches may occur and speech inputs may need to be compared to multiple stored representations to arrive at the most likely match. In these conditions, allocation of additional cognitive resources is required. Efficient cognitive functioning and large working memory capacity allow more rapid and successful matches between speech inputs and stored representations. Several studies have indicated a relationship between cognitive ability and speech perception: Humes (2007) found that cognitive function was the best predictor of speech understanding in noise, and Lunner (2003) reported that participants with better working memory capacity and verbal processing speed had better speech perception performance.

Following the ELU model, hearing aids may allow listeners to match inputs and stored representations more successfully, with less explicit processing. Noise reduction, as implemented in hearing aids, has been proposed as a technology that may ease effortful listening. In contrast, however, it has been suggested that hearing aid signal processing may introduce unwanted artifacts or alter the speech inputs so that more explicit processing is required to match them to stored images (Lunner et al., 2009). If this is the case, hearing aid users with good working memory may function better with amplification because their expanded working memory capacity allows more resources to be applied to the task of matching speech inputs to long-term memory stores.

Elaine Ng and her colleagues investigated the effect of noise and noise reduction on word recall and identification and examined whether individuals were affected by these variables differently based on their working memory capacity. The authors had several hypotheses:

1. Noise would adversely affect memory, with poorer memory performance for speech in noise than in quiet.

2. Memory performance in noise would be at least partially restored by the use of noise reduction.

3. The effect of noise reduction on memory would be greater for items in late list positions because participants were older and therefore likely to have slower memory encoding speeds.

4. Memory in competing speech would be worse than in stationary noise because of the stronger masking effect of competing speech.

5. Overall memory performance would be better for participants with higher working memory capacity in the presence of noise reduction. This effect should be more apparent for late list items presented with competing speech babble.

Twenty-six native Swedish-speaking individuals with moderate to moderately-severe, high-frequency sensorineural hearing loss participated in the authors’ study. Prior to commencement of the study, participants were tested to ensure that they had age-appropriate cognitive performance. A battery of tests was administered and results were comparable to previously reported performance for their age group (Ronnberg, 1990).

Two tests were administered to study participants. First, a reading span test evaluated working memory capacity. Participants were presented with a total of 24 three-word sentences, in sub-lists of 3, 4 and 5 sentences presented in ascending order. Participants were asked to judge whether the sentences were sensible or nonsense. At the end of each sub-list of sentences, listeners were prompted to recall either the first or final words of each sentence, in the order in which they were presented. Tests were scored as the total number of items correctly recalled.

The second test was a sentence-final word identification and recall (SWIR) test, consisting of 140 everyday sentences from the Swedish Hearing In Noise Test (HINT; Hallgren et al, 2006). This test involved two different tasks. The first was an identification task in which participants were asked to report the final word of each sentence immediately after listening to it.  The second task was a free recall task; after reporting the final word of the eighth sentence of the list, they were asked to recall all the words that they had previously reported. Three of seven tested conditions included variations of noise reduction algorithms, ranging from one similar to those implemented in modern hearing aids to an ‘ideal’ noise reduction algorithm.

Prior to the main analyses of working memory and recall performance, two sets of groups were created based on reading span scores, using two different grouping methods. In the first set, two groups were created by splitting the group at the median score, so that 13 individuals were in a high reading span group and the remaining 13 were in a low reading span group. In the second set, participants who scored in the mid-range on the reading span test were excluded from the analysis, creating high and low reading span groups of 10 participants each. There was no significant difference between groups based on age, pure tone average or word identification performance in any of the noise conditions. Overall reading span scores for participants in this study were comparable to previously reported results (Lunner, 2003; Foo et al., 2007).
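For readers who want a concrete picture of the two grouping methods described above (a median split into 13/13 and a mid-range exclusion leaving 10/10), here is a minimal, hypothetical sketch in Python. The scores, participant labels and function names are invented for illustration and are not taken from Ng and colleagues' data or analysis procedures.

```python
# Hypothetical illustration of the two reading-span grouping methods
# described above; scores and labels are invented for the example.
import random

random.seed(1)
# 26 hypothetical reading span scores (words correctly recalled out of 24)
scores = {f"P{i:02d}": random.randint(6, 20) for i in range(1, 27)}

def median_split(scores):
    """Rank participants by score and split into equal low/high halves (13/13)."""
    ranked = sorted(scores, key=scores.get)
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

def midrange_exclusion(scores, n_per_group=10):
    """Keep only the lowest and highest scorers, excluding the mid-range."""
    ranked = sorted(scores, key=scores.get)
    return ranked[:n_per_group], ranked[-n_per_group:]

low, high = median_split(scores)
low_x, high_x = midrange_exclusion(scores)
print(len(low), len(high))      # 13 13
print(len(low_x), len(high_x))  # 10 10
```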

Also prior to the main analysis, the SWIR results were analyzed to compare noise reduction and ideal noise reduction conditions. There was no significant difference between noise reduction and ideal noise reduction conditions in the identification or free recall tasks, nor was there an interaction of noise reduction condition with reading span score. Therefore, only the noise reduction condition was considered in the subsequent analyses.

The relationship between reading span score (representing working memory capacity) and SWIR recall was examined for all the test conditions. Reading span score correlated with overall recall performance in all conditions but one. When recall was analyzed as a function of list position (beginning or final), reading span scores correlated significantly with beginning (primacy) positions in quiet and most noise conditions. There was no significant correlation between overall reading span scores and items in final (recency) position in any of the noise conditions.

There were significant main effects of noise condition, list position and reading span group: when noise reduction was implemented, the negative effects of noise were lessened. There was a recency effect, in that performance was better for late list positions than for early list positions. Overall, the high reading span groups scored better than the low reading span groups, for both median-split and mid-range exclusion groups. The high reading span groups showed improved recall with noise reduction, whereas the low reading span groups exhibited no change in performance with noise reduction versus quiet. The use of four-talker babble had a negative effect on late list positions but did not affect items in other positions, suggesting that four-talker babble disrupted working memory more than steady-state noise. These analyses supported hypotheses 1, 2, 3 and 5, indicating that noise adversely affects memory performance (1), that noise reduction and list position interact with this effect (2, 3), especially for individuals with high working memory capacity (5).

The results also supported hypothesis 4, which suggested that competing speech babble would affect memory performance more than steady state noise. Recall performance was significantly better in the presence of steady-state noise than it was in 4-talker babble. Though there was no significant effect of noise reduction overall, high reading span participants once again outperformed low reading span participants with noise reduction.

In summary, the results of this study determined that noise had an adverse effect on recall, but that this effect was mildly mitigated by the use of noise reduction. Four-talker babble was more disruptive to recall performance than was steady-state noise. Recall performance was better for individuals with higher working memory capacity. These individuals also demonstrated more of a benefit from noise reduction than did those with lower working memory capacity.

Recall performance is better in quiet conditions than in noise, presumably because fewer cognitive resources are required to encode the speech input (Murphy et al., 2000). Ng and her colleagues suggest that noise reduction helps to perceptually segregate speech from noise, allowing the speech input to be matched to stored lexical representations with less cognitive demand. In this way, noise reduction may at least partially reverse the negative effect of noise on working memory.

Competing speech babble is more likely to be cognitively demanding than steady-state noise (such as an air conditioner) because it contains meaningful information that is more distracting and harder to separate from the speech of interest (Sorqvist & Ronnberg, 2012). Not only is the speech signal of interest degraded by the presence of competing sound and therefore harder to encode, but additional cognitive resources are required to inhibit the unwanted or irrelevant linguistic information (Macken et al., 2009). Because competing speech puts more demands on cognitive resources, it is potentially more disruptive than steady-state noise to perception of the speech signal of interest.

Unfortunately, much of the background noise encountered by hearing aid wearers is competing speech. The classic example of the cocktail party illustrates one of the most challenging situations for hearing-impaired individuals, in which they must try to attend to a proximal conversation while ignoring multiple conversations surrounding them. The results of this study suggest that noise reduction may be more useful in these situations for listeners with better working memory capacity; however, noise reduction should still be considered for all hearing aid users, with comprehensive follow-up care to make adjustments for individuals who are not functioning well in noisy conditions. Noise reduction may generally alleviate perceived effort or annoyance, allowing a listener to be more attentive to the speech signal of interest or to remain in a noisy situation that would otherwise be uncomfortable or aggravating.

More research is needed on the effects of noise, noise reduction and advanced signal processing on listening effort and memory in everyday situations. It is likely that performance is affected by numerous hearing aid variables, including compression characteristics, directionality and noise reduction, as well as the automatic implementation or adjustment of these features. These variables in turn combine with user-related characteristics such as age, degree of hearing loss, word recognition ability, cognitive capacity and more.

References

Foo, C., Rudner, M., & Ronnberg, J. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology 18, 618-631.

Hallgren, M., Larsby, B. & Arlinger, S. (2006). A Swedish version of the hearing in noise test (HINT) for measurement of speech recognition. International Journal of Audiology 45, 227-237.

Hick, C. B., & Tharpe, A. M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech Language and Hearing Research 45, 573–584.

Humes, L. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology 18, 590-603.

Lunner, T. (2003). Cognitive function in relation to hearing aid use. International Journal of Audiology 42, (Suppl. 1), S49-S58.

Lunner, T., Rudner, M. & Ronnberg, J. (2009). Cognition and hearing aids. Scandinavian Journal of Psychology 50, 395-403.

Macken, W.J., Phelps, F.G. & Jones, D.M. (2009). What causes auditory distraction? Psychonomic Bulletin and Review 16, 139-144.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A, 58, 22-33.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing 34 (5).

Ronnberg, J. (2003). Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: a framework and a model. International Journal of Audiology 42 (Suppl. 1), S68-S76.

Ronnberg, J., Rudner, M. & Foo, C. (2008). Cognition counts: A working memory system for ease of language understanding (ELU). International Journal of Audiology 47 (Suppl. 2), S99-S105.

Rudner, M., Ronnberg, J. & Lunner, T. (2011). Working memory supports listening in noise for persons with hearing impairment. Journal of the American Academy of Audiology 22, 156-167.

Sorqvist, P. & Ronnberg, J. (2012). Episodic long-term memory of spoken discourse masked by speech: What role for working memory capacity? Journal of Speech Language and Hearing Research 55, 210-218.

The Speech, Spatial and Qualities of Hearing Scale (SSQ): A Gatehouse Legacy

Gatehouse, S. & Noble, W. (2004). The speech, spatial and qualities of hearing scale (SSQ). International Journal of Audiology, 43, 85-99.

This editorial discusses the clinical implications of an independent research study. This editorial does not represent the opinions of the original authors.

Self-assessment scales provide insight into the everyday experiences and perceptions of hearing-impaired individuals, making them valuable companions to laboratory research and helpful tools for clinicians. A variety of self-assessment indices are available for use with aided or unaided individuals and target a variety of issues, including hearing aid usage patterns, binaural or monaural preference, volume and program preferences, ability to understand speech in quiet and noise, and ability to function in social situations. Gatehouse and Noble point out that most laboratory research and self-assessment scales view speech perception as the primary issue related to hearing handicap, with improved audibility and suppression of competing noise being the primary goals of auditory rehabilitation. But in everyday life, understanding speech may constitute only part of a hearing-impaired individual’s perceived difficulties. For instance, it is important to locate and identify audible events in order to be fully aware of the environment and safely navigate a variety of surroundings. With the development of the SSQ, Gatehouse and Noble hoped to provide a more comprehensive measure of hearing disability, taking into account the perception of both spatial relationships and sound quality. Furthermore, they investigated the relationship between disabilities in these areas and perceived hearing handicap.

Research regarding auditory scene analysis by Bregman (1990) indicates that a listener in a group situation must first parse the complex acoustic environment into sound sources or “streams” so that they can be attended to and monitored individually. In other words, in a noisy situation, a listener must be able to group together the acoustic elements that make up one particular voice before processing the content of the message. Although it would be easier if it did, conversation rarely proceeds in an orderly fashion with one participant speaking at a time. Rather, in groups, one participant might initiate a response while the previous person is still speaking, or two individuals might speak at the same time. This requires a listener to not only attend to specific speech streams, but to monitor other speech sources in order to be ready to switch attention when necessary.  Accomplishing this task involves binaural hearing, localization, attention, cognition, and vision, and successful communication in groups can be affected by all of these variables. Because the SSQ investigates auditory perceptions of movement, location, and distance as well as sound quality perceptions, such as mood and voice identification in addition to issues related to speech communication, it may more realistically address how hearing loss affects an individual’s everyday life.

The goal of Gatehouse and Noble’s study was twofold: to use the SSQ to examine what is disabling about hearing impairment and to determine how those disabilities affect hearing handicap. There were 153 participants in the study: 80 females and 73 males, with an average age of 71 years. The better-ear average (BEA) for octave frequencies from 500 to 4000Hz was 38.8dB; the worse-ear average (WEA) was 52.7dB. In addition to the SSQ, subjects completed a 12-question hearing handicap scale developed in part from the Hearing Disabilities and Handicaps Scale (Hetu et al., 1994) and from an unpublished general health scale (Glasgow Health Status Inventory). The items were scored using a 5-point scale, yielding a global handicap score, with higher scores indicating greater handicap. Lower scores on the SSQ indicate greater disability, so negative correlations between the SSQ and handicap scores were expected.

The SSQ was designed to be administered as an interview rather than as a self-administered scale. The interview format ensures that the subject understands the questions and can request clarification when necessary. The scale is divided into three domains: 14 items on speech hearing, 17 items on spatial hearing, and 18 items on “other” functions and qualities of hearing. The “other” qualities section contains items related to recognition and segregation of sounds, clarity, naturalness, and listening effort.  Items are scored with ratings of 1 to 10, with the most positive response always represented with a higher number, on the right side of the response sheet. For example, the left side of the scale represents a complete absence of a quality, complete inability to perform a task, or complete effort required. The right side of the scale indicates complete presence of a quality, complete ability, or complete absence of effort.  The left to right, negative-to-positive scoring of items was consistently maintained throughout the scale in an effort to minimize confusion.
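As a purely illustrative aid, the sketch below shows how interview ratings of this kind might be summarized by domain. The item counts follow the description above (14 speech, 17 spatial, 18 "other" qualities), but the ratings, data structure and function name are hypothetical and are not part of Gatehouse and Noble's scoring procedure.

```python
# Hypothetical summary of SSQ-style interview ratings by domain.
# Item counts mirror the text; the ratings themselves are invented.
from statistics import mean

def ssq_domain_means(ratings):
    """ratings: dict of domain name -> list of item ratings (1-10 scale).
    Returns the mean rating per domain; higher means less reported difficulty."""
    return {domain: round(mean(items), 2) for domain, items in ratings.items()}

example = {
    "speech hearing":  [6, 5, 4, 7, 3, 5, 6, 4, 5, 6, 7, 4, 5, 6],             # 14 items
    "spatial hearing": [5, 6, 4, 5, 7, 6, 5, 4, 6, 5, 7, 6, 5, 4, 6, 5, 6],     # 17 items
    "other qualities": [7, 6, 5, 8, 6, 7, 5, 6, 7, 6, 5, 7, 6, 8, 7, 6, 5, 6],  # 18 items
}

print(ssq_domain_means(example))
```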

Gatehouse and Noble found that degree of hearing impairment correlated well with disability as measured by the SSQ, and that SSQ scores in turn correlated well with handicap, but that impairment itself did not correlate well with handicap. This result was expected and was in agreement with previous research. Asymmetry of hearing loss was not correlated significantly with items in the speech-hearing domain, but did correlate strongly with spatial-hearing and “other” quality domain items such as ease of listening, clarity, and sound segregation.

Examination of SSQ scores within the speech hearing domain showed that the highest ratings were given for one-to-one conversation in quiet. The lowest ratings were for group conversations and contexts in which attention must be divided among two or more sound sources simultaneously.  In the spatial hearing domain, respondents generally rated their directional hearing ability higher than the ability to judge distance or movement. For the “other” qualities domain, items related to naturalness of one’s own voice, recognition of the mood of others from their voices, and recognition of music had the highest scores and those related to ease of listening had the lowest scores.

Following examination of the SSQ scores themselves, the individual items within each of the three SSQ domains were ranked according to the strength of their correlation with hearing handicap. Within the speech-hearing domain, hearing handicap was most influenced by disability in contexts requiring divided or rapidly shifting attention: conversation in a group of people, following two conversations at once, and missing the start of what the next speaker says.  However, handicap was also influenced by difficulty talking to one person in quiet conditions. It is not surprising that a person who perceives difficulty understanding speech in relatively favorable conditions would experience greater concern about their overall communication ability.  Difficulty understanding conversation in noisy situations can be externalized or blamed on the environmental conditions, whereas difficulty in quiet is likely to be internalized and attributed to one’s own disability.

Interestingly, many items in the spatial-hearing domain were as highly correlated with handicap as those within the speech-hearing domain. Questions related to determining the distance and paths of movement, the distance of vehicles, the direction of a barking dog, and locating a speaker in a group setting all contributed to perceived handicap. This underscores the importance of spatial hearing for environmental awareness as well as successful participation in conversation and suggests that examination of spatial hearing may help clinicians and researchers better understand an individual’s experience with their hearing loss.

Several of the items in the “other” qualities section of the SSQ were strongly correlated with handicap. The ability to identify a speaker, to judge a speaker’s mood, the clarity and naturalness of sound, and the effort needed to engage in conversation were among the items most strongly related to hearing handicap.  The authors explain that these abilities affect an individual’s sense of social competence. Failure to accurately interpret the identity or mood of a speaker or the need for increased effort to participate in conversation may have an isolating effect, causing an individual to avoid social situations or even telephone conversations because they fear they will be unable to participate fully or successfully.

Not surprisingly, Gatehouse and Noble found that hearing thresholds were related to SSQ disability scores and SSQ scores were related to handicap, but impairment itself was not strongly correlated with handicap. This finding was expected and is in agreement with previous reports (Hetu et al., 1994). The relationship between impairment, disability, and handicap is important and is familiar to audiologists, in that we routinely discuss how a patient’s hearing loss affects his or her activities and everyday lifestyle. Though consideration of the audiogram is of course important, the way hearing loss interacts with work-related and social activities – things a person must do or enjoys doing – more likely determines their perceived handicap and therefore their motivation to pursue auditory rehabilitation.

The finding that spatial hearing disability was strongly correlated with handicap may have implications for asymmetric hearing loss as well as the fitting of bilateral hearing aids. Individuals with asymmetric hearing thresholds will have more difficulty localizing sound and therefore may experience more of a handicap related to the discrimination of auditory spatial relationships and movement. For instance, an individual with asymmetric hearing loss might hear conversation easily but might experience stress because they are unable to judge the location or approach of a car that is not visible. Because individuals with a better or normal ear might rate their speech discrimination performance relatively well in quiet and even moderately noisy places, an assessment scale that examines only speech-related hearing disability might underestimate their perceived hearing handicap. Consideration of spatial hearing deficits might therefore provide a more realistic and helpful assessment of an individual’s functional difficulties. Perception of auditory spatial relationships is likely to be improved with the use of bilateral hearing aids for individuals with binaural hearing loss, so the correlation between spatial hearing items on the SSQ and hearing handicap may also be viewed as further support for the recommendation of two hearing aids.

The correlations between the three domains of the SSQ and scores on the handicap scale indicate that the SSQ effectively addresses several variables that contribute to perceived hearing handicap. The impact of speech recognition and discrimination on perceived handicap is well established. The impact of other skills, such as determining distance, movement, voice quality, and mood, is less well understood but may be an equally important component in understanding an individual’s feelings of social competence and confidence as well as their sense of safety and orientation in a variety of environments. The SSQ provides clinicians and researchers with an additional tool to more fully understand the impact of hearing loss on everyday life.

References

Bregman, A. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.

Gatehouse, S. & Noble, W. (2004). The speech, spatial and qualities of hearing scale (SSQ). International Journal of Audiology 43, 85-99.

Hetu, R., Getty, L., Phlibert, L., Desilets, F., Noble, W. (1994). Development of a clinical tool for the measurement of the severity of hearing disabilities and handicaps. Journal of Speech-Language Pathology and Audiology 18, 83-95.

Do hearing aid wearers benefit from visual cues?

Wu, Y-H. & Bentler, R.A. (2010) Impact of visual cues on directional benefit and preference: Part I – laboratory tests. Ear and Hearing 31(1), 22-34.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories, and this editorial does not reflect the opinions of the original authors.

The benefits of directional microphone use have been consistently supported by experimental data in the laboratory (Valente et al. 1995; Ricketts & Hornsby 2006; Gravel et al. 1999; Kuk et al. 1999). Similarly, hearing aid users have indicated a preference for directional microphones over omnidirectional processing in noise in controlled environments (Preves et al. 1999; Walden et al. 2005; Amlani et al. 2006). Despite the robust directional benefit reported in laboratory studies, field studies have yielded less impressive results, with some studies reporting perceived benefit (Preves et al. 1999; Ricketts et al. 2003) while others have not (Walden et al. 2000; Cord et al. 2002, 2004; Palmer et al. 2006).

One factor that could account for reduced directional benefit reported in field studies is the availability of visual cues. It is well established that visual cues, including lip-reading (Sumby & Pollack 1954) as well as eyebrow (Bernstein et al. 1989) and head movements (Munhall et al. 2004), can improve speech recognition ability in the presence of noise. In field studies, the availability of visual cues could result in a decreased directional benefit due to ceiling effects. In other words, the benefit of audio-visual (AV) speech cues might result in omnidirectional performance so close to a listener’s maximum ability that directionality may offer only limited additional improvement.  This could reduce both measured and perceived directional benefits.  It follows that ceiling effects from the availability of AV speech cues could also reduce the ability of auditory-only (AO) laboratory findings to accurately predict real-world performance.

Few studies have investigated the effect of visual cues on hearing aid performance or directional benefit. Wu and Bentler’s goal in the current study was to determine if visual cues could partially account for the discrepancy between laboratory and field studies of directional benefit. They outlined three experimental hypotheses:

1. Listeners would obtain less directional benefit and would prefer directional over omnidirectional microphone modes less frequently in auditory-visual (AV) conditions than in auditory-only (AO) conditions.

2. The AV directional benefit would not be predicted by the AO directional benefit.

3. Listeners with greater lip-reading skills would obtain less AV directional benefit than would listeners with lesser lip-reading skills.

Twenty-four adults with hearing loss participated in the study. Participants were between 20 and 79 years of age, had bilaterally symmetrical, downward-sloping, sensorineural hearing losses and normal or corrected-to-normal vision, and were native English speakers. Participants were fitted with bilateral, digital, in-the-ear hearing instruments with manually accessible omnidirectional and directional microphone modes.

Directional benefit was assessed with two speech recognition measures: the AV version of the Connected Speech Test (CST; Cox et al., 1987) and the Hearing in Noise Test (HINT; Nilsson et al., 1994). For the AV CST, the talker was displayed on a 17” monitor. Participants listened to sets of CST sentences again in a second session to evaluate subjective preference for directional versus omnidirectional microphone modes. Speech stimuli were presented in six signal-to-noise ratio (SNR) conditions ranging from -10dB to +10dB in 4dB steps. Lip-reading ability was assessed with the Utley test (Utley, 1946), an inventory of 31 sentences recited without sound or facial exaggeration.

Analysis of the CST scores yielded significant main effects for SNR, microphone mode and presentation mode (AV vs. AO), as well as significant interactions among the variables. The benefit of visual cues was greater than the benefit afforded by directionality. As the authors expected, for most SNRs the directional benefit was smaller in AV conditions than in AO conditions, with the exception of the poorest SNR condition, -10dB. Scores for all conditions (AV-DIR, AV-OMNI, AO-DIR, AO-OMNI) plateaued at ceiling levels for the most favorable SNRs, meaning that both AV benefit and directional benefit decreased as the SNR improved to +10dB. HINT scores, which were not subject to ceiling effects, yielded a significant mean directional benefit of 3.9dB.

Participants preferred the directional microphone mode in the AO condition, especially at SNRs between -6dB and +2dB. At more favorable SNRs, there was essentially no preference. In the AV condition, participants were less likely to prefer the directional mode, except at the poorest SNR, -10dB. Further analysis revealed that the odds of preferring the directional mode in the AO condition were 1.37 times higher than in the AV condition. In other words, adding visual cues reduced overall preference for the directional microphone mode.

At intermediate and favorable SNRs there was no significant correlation between AV directional benefit and the Utley lip-reading scores. For unfavorable SNRs, the negative correlation between these variables was significant, indicating that in the most difficult listening conditions, listeners with better lip-reading skills obtained less AV directional benefit than those participants who were less adept at lip-reading.

The outcomes of these experiments generally support the authors’ hypotheses. Visual cues significantly improved speech recognition scores in omnidirectional trials to near-ceiling levels, reducing directional benefit and subjective preference for directional microphone modes. Auditory-only (AO) performance, typical of laboratory testing, was not predictive of auditory-visual (AV) performance. This is in agreement with prior indications that AO directional benefit as measured in laboratory conditions does not match real-world directional benefit, and it suggests that the availability of visual cues can at least partially explain the discrepancy. The authors suggested that directional benefit should theoretically allow a listener to rely less on visual cues. However, face-to-face conversation is natural, and hearing-impaired listeners should take advantage of visual cues when they are available.

The results of Wu and Bentler’s study suggest that directional microphones may provide only limited additional benefit when visual cues are available, in all but the most difficult listening environments. In the poorest SNRs, directional microphones may be leveraged for greater benefit. Still, the authors point out that mean speech recognition scores were best when both directionality and visual cues were available. It follows that directional microphones should be recommended for use in the presence of competing noise, especially in high-noise conditions. Even if speech recognition ability is not significantly improved with the use of directional microphones in many typical SNRs, there may be other subjective benefits to directionality, such as reduced listening effort, distraction or annoyance, to which listeners respond favorably.

It is important for clinicians to prepare new users of directional microphones to have realistic expectations. Clients should be advised that directionality can reduce competing noise but not eliminate it. Hearing aid users should be encouraged to consider their positioning relative to competing noise sources and to always face the speech source that they wish to attend to. Although visual cues appear to offer greater benefits to speech recognition than directional microphones alone, the availability of visual speech cues may be compromised by poor lighting, glare, crowded conditions or visual disabilities, making directional microphones all the more important for many everyday situations. Thus, all efforts should be made to maximize directionality and the availability of visual cues in day-to-day situations, as both offer potential real-world benefits.

References

Amlani, A.M., Rakerd, B. & Punch, J.L. (2006). Speech-clarity judgments of hearing aid processed speech in noise: differing polar patterns and acoustic environments. International Journal of Audiology 12, 202-214.

Bernstein, L.E., Eberhardt, S.P. & Demorest, M.E. (1989). Single-channel vibrotactile supplements to visual perception of intonation and stress. Journal of the Acoustical Society of America 85, 397-405.

Cord, M.T., Surr, R.K., Walden, B.E., et al. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Cord, M.T., Surr, R.K., Walden, B.E., et al. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

Cox, R.M., Alexander, G.C. & Gilmore, C. (1987). Development of the Connected Speech Test (CST). Ear and Hearing 8, 119S-126S.

Gravel, J.S., Fausel, N., Liskow, C., et al. (1999). Children’s speech recognition in noise using omnidirectional and dual-microphone hearing aid technology. Ear and Hearing 20, 1-11.

Kuk, F., Kollofski, C., Brown, S., et al. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology 10, 535-548.

Lee L., Lau, C. & Sullivan, D. (1998). The advantage of a low compression threshold in directional microphones. Hearing Review 5, 30-32.

Leeuw, A.R. & Dreschler, W.A. (1991). Advantages of directional hearing aid microphones related to room acoustics. Audiology 30, 330-344.

Munhall, K.G., Jones, J.A., Callan, D.E., et al. (2004). Visual prosody and speech intelligibility: head movement improves auditory speech perception. Psychological Science 15, 133-137.

Nilsson, M., Soli, S. & Sullivan, J.A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Palmer, C., Bentler, R., & Mueller, H.G. (2006). Evaluation of a second order directional microphone hearing aid: Part II – Self-report outcomes. Journal of the American Academy of Audiology 17, 190-201.

Preves, D.A., Sammeth, C.A. & Wynne, M.K. (1999). Field trial evaluations of a switched directional/omnidirectional in-the-ear hearing instrument.  Journal of the American Academy of Audiology 10, 273-284.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T. & Hornsby, B.W. (2003). Distance and reverberation effects on directional benefit. Ear and Hearing 24, 472-484.

Ricketts, T. & Hornsby, B.W. (2006). Directional hearing aid benefit in listeners with severe hearing loss. International Journal of Audiology 45, 190-197.

Ricketts, T., Henry, P. & Gnewikow, D. (2003). Full time directional versus user selectable microphone modes in hearing aids. Ear and Hearing 24, 424-439.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Valente, M., Fabry, D.A. & Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T., et al. (2000). Comparison of benefits provided by different hearing aid technologies. Journal of the American Academy of Audiology 11, 540-560.

Walden, B.E., Surr, R.K . Grant, K.W., et al. (2005). Effect of signal-to-noise ratio on directional microphone benefit and preference. Journal of the American Academy of Audiology 16, 662-676.

Wu, Y-H. & Bentler, R.A. (2010) Impact of visual cues on directional benefit and preference: Part I – laboratory tests. Ear and Hearing 31(1), 22-34.

Differences Between Directional Benefit in the Lab and the Real World

Relationship Between Laboratory Measures of Directional Advantage and Everyday Success with Directional Microphone Hearing Aids

Cord, M., Surr, R., Walden, B. & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories, and this editorial does not reflect the opinions of the original authors.

People with hearing loss require a better signal-to-noise ratio (SNR) than individuals with normal hearing (Dubno et al, 1984; Gelfand et al, 1988; Bronkhorst and Plomp, 1990). Among many technological improvements, a directional microphone is arguably the only effective hearing aid feature for improving SNR and, subsequently, improving speech understanding in noise. A wide range of studies support the benefit of directionality for speech perception in competing noise (Agnew & Block, 1997; Nilsson et al, 1994; Ricketts and Henry, 2002; Valente et al, 1995). Directional benefit is defined as the difference in speech recognition ability between omnidirectional and directional microphone modes. In laboratory conditions, directional benefit averages around 7-8dB but varies considerably, ranging from 2-3dB up to 14-16dB (Valente et al, 1995; Agnew & Block, 1997).
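Because directional benefit is defined here as a simple difference between microphone modes, a brief hypothetical sketch may make the calculation concrete. It assumes speech reception thresholds (SRTs) expressed in dB SNR, where lower values indicate better performance; the function name and values are illustrative and are not drawn from any of the cited studies.

```python
# Hypothetical directional-benefit calculation: the difference in speech
# recognition performance between omnidirectional and directional modes.
# SRTs are in dB SNR; lower SRT = better performance. Values are invented.
def directional_benefit(srt_omni_db, srt_directional_db):
    """Positive values mean the directional mode allowed speech to be
    understood at a poorer (lower) signal-to-noise ratio."""
    return srt_omni_db - srt_directional_db

print(directional_benefit(srt_omni_db=-1.0, srt_directional_db=-4.0))  # 3.0 dB
```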

Perceived directional benefit varies considerably among hearing aid users. Cord et al (2002) interviewed individuals who wore hearing aids with switchable directional microphones, and 23% reported that they did not use the directional feature. Many respondents said they had initially tried the directional mode but did not notice adequate improvement in their ability to understand speech and therefore stopped using it. This discrepancy between measured and perceived benefit has prompted exploration of the variables that affect performance with directional hearing aids. Under laboratory conditions, Ricketts and Mueller (2000) examined the effects of audiometric configuration, degree of high-frequency hearing loss and aided omnidirectional performance on directional benefit, but found no significant interactions among any of these variables.

The current study by Cord and her colleagues examined the relationship between measured directional advantage in the laboratory and success with directional microphones in everyday life. The authors studied a number of demographic and audiological variables, including audiometric configuration, unaided SRT, hours of daily hearing aid use and length of experience with current hearing aids, in an effort to determine their value for predicting everyday success with directional microphones.

Twenty hearing-impaired individuals were selected to participate in one of two subject groups. The “successful” group consisted of individuals who reported regular use of both omnidirectional and directional microphone modes. The “unsuccessful” group reported never using their directional mode and using their omnidirectional mode all the time. Analysis of audiological and demographic information showed that the only significant differences in audiometric threshold between the successful and unsuccessful groups were at 6-8 kHz; otherwise, the two groups had very similar audiometric configurations, on average. There were no significant differences between the two groups for age, unaided SRT, unaided word recognition scores, hours of daily use or length of experience with hearing aids.

Subjects were fitted with a variety of styles – some BTE and some custom – but all had manually accessible omnidirectional and directional settings. The Hearing in Noise Test (HINT; Nilsson et al, 1994) was administered to subjects with their hearing aids in directional and omnidirectional modes. Sentence stimuli were presented in front of the subject, and correlated competing noise was presented through three speakers: directly behind the subject and on each side. Following the HINT, participants completed the Listening Situations Survey (LSS), a questionnaire developed specifically for this study. The LSS was designed to assess how likely participants were to encounter disruptive background noise in everyday situations, to determine whether unsuccessful and successful directional microphone users were equally likely to encounter noisy situations in everyday life. The survey consisted of four questions:

1) On average, how often are you in listening situations in which bothersome background noise is present?

2) How often are you in social situations in which at least 3 other people are present?

3) How often are you in meetings (e.g. community, religious, work, classroom, etc.)?

4) How often are you talking with someone in a restaurant or dining hall setting?

The HINT results indicated an average directional benefit of 3.2dB for successful users and 2.1dB for unsuccessful users. Although directional benefit was slightly greater for the successful users, the difference between the groups was not statistically significant. There was a broad range of directional benefit for both groups: from -0.8 to 6.0dB for successful users and from -3.4 to 10.5dB for unsuccessful users. Interestingly, three of the ten successful users obtained little or no directional benefit, whereas seven of the ten unsuccessful users obtained positive directional benefit.

Analysis of the LSS results showed that successful users of directional microphones were somewhat more likely than unsuccessful users to encounter listening situations with bothersome background noise and to encounter social situations with more than three other people present. However, statistical analysis showed no significant differences between the two groups for any items on the LSS survey, indicating that users who perceived directional benefit and used their directional microphones were not significantly more likely to encounter noisy situations in everyday life.

These observations led the authors to conclude that directional benefit as measured in the laboratory did not predict success with directional microphones in everyday life. Some participants with positive directional advantage scores were unsuccessful directional microphone users and conversely, some successful users showed little or no directional advantage. There are a number of potential explanations for their findings. First, despite the LSS results, it is possible that unsuccessful users did not encounter real-life listening situations in which directional microphones would be likely to help. Directional microphone benefit is dependent on specific characteristics of the listening environment (Cord et al, 2002; Surr et al, 2002; Walden et al, 2004), and is most likely to help when the speech source is in front of and relatively close to the listener, with spatial separation between the speech and noise sources. Individuals who rarely encounter this specific listening situation would have limited opportunity to evaluate directional microphones and may therefore perceive only limited benefit from them.

Unsuccessful directional microphone users may also have had unrealistically high expectations about directional benefits. Directionality can be a subtle but effective way of improving speech understanding in noise. Reduction of sound from the back and sides helps the listener focus attention on the speaker and ignore competing noise. Directional benefit is based on the concept of face-to-face communication; if users expect their hearing aids to reduce all background noise from all angles, they are likely to be disappointed. Similarly, if they expect the aids to completely eliminate background noise, rather than slightly reduce it, they will be unimpressed. It is helpful for hearing aid users, especially those new to directional microphones, to be counseled about realistic expectations as well as proper positioning in noisy environments. If listeners know what to expect and are able to position themselves for maximum directional effect, they are more likely to perceive benefit from their hearing aids in noisy conditions.

To date, it has been difficult to correlate directional benefit under laboratory conditions with perceived directional benefit. It is clear that directionality offers performance benefits in noise, but directional benefit measured in a sound booth does not seem to predict everyday success with directional microphones. Many factors are likely to affect real-life performance with directional microphone hearing aids, including audiometric variables, the frequency response and gain equalization of the directional mode, the venting of the hearing aid and the contribution of visual cues to speech understanding (Ricketts, 2000a; 2000b). Further investigation is still needed to elucidate the impact of these variables on the everyday experiences of hearing aid users.

As is true for all hearing aid features, directional microphones must be prescribed appropriately, and hearing aid users should be counseled about realistic expectations and the circumstances in which directionality is beneficial. Although most modern hearing instruments can adjust automatically to changing environments, manually accessed directional modes offer hearing aid wearers increased flexibility and may increase use by allowing the individual to make decisions regarding their own comfort and performance in noisy places. Routine reinforcement of techniques for proper directional microphone use is encouraged. Hearing aid users should be encouraged to experiment with their directional programs to determine where and when they are most helpful. For the patient, proper identification of and positioning in noisy environments is an essential step toward meeting their specific listening needs and preferences.

References

Agnew, J. & Block, M. (1997). HINT thresholds for a dual-microphone BTE. Hearing Review 4, 26-30.

Bronkhorst, A. & Plomp, R. (1990). A clinical test for the assessment of binaural speech perception in noise. Audiology 29, 275-285.

Cord, M.T., Surr, R.K., Walden, B.E. & Olson, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Cord, M., Surr, R., Walden, B. & Dyrlund, O. (2004). Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. Journal of the American Academy of Audiology 15, 353-364.

Dubno, J.R., Dirks, D.D. & Morgan, D.E. (1984).  Effects of age and mild hearing loss on speech recognition in noise. Journal of the Acoustical Society of America 76, 87-96.

Gelfand, S.A., Ross, L. & Miller, S. (1988). Sentence reception in noise from one versus two sources: effects of aging and hearing loss. Journal of the Acoustical Society of America 83, 248-256.

Kochkin, S. (1993). MarkeTrak III identifies key factors in determining customer satisfaction. Hearing Journal 46, 39-44.

Nilsson, M., Soli, S.D. & Sullivan, J.A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Ricketts, T. (2000a). Directivity quantification in hearing aids: fitting and measurement effects. Ear and Hearing 21, 44-58.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T. (2001). Directional hearing aids. Trends in Amplification 5, 139-175.

Ricketts, T.  & Henry, P. (2002). Evaluation of an adaptive, directional microphone hearing aid. International Journal of Audiology 41, 100-112.

Ricketts, T. & Henry, P. (2003). Low-frequency gain compensation in directional hearing aids. American Journal of Audiology 11, 1-13.

Ricketts, T. & Mueller, H.G. (2000). Predicting directional hearing aid benefit for individual listeners. Journal of the American Academy of Audiology 11, 561-569.

Surr, R.K., Walden, B.E. Cord, M.T. & Olson, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Valente, M., Fabry, D.A. & Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T. & Dyrlund, O. (2004). Predicting microphone preference in everyday living. Journal of the American Academy of Audiology 15, 365-396.

Are you prescribing an appropriate MPO?

Effect of MPO and Noise Reduction on Speech Recognition in Noise

Kuk, F., Peeters, H., Korhonen, P. & Lau, C. (2010). Effect of MPO and noise reduction on speech recognition in noise. Journal of the American Academy of Audiology, submitted November 2010.

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the original authors.

Clinical best practice suggests that clinicians determine a patient’s uncomfortable listening levels in order to prescribe the output limiting characteristics of a hearing aid (Hawkins et al., 1987). The optimal maximum power output (MPO) should be based on two goals: preventing loudness discomfort and avoiding distorted sound quality at high input levels. The upper limit of a prescribed MPO must allow comfortable listening; less consideration is typically given to the consequences that under-prescribing the MPO might have on hearing aid and patient performance.

There are two primary concerns related to the acceptable lower MPO limit: saturation and insufficient loudness. Saturation occurs when the input level of a stimulus plus the gain applied by the hearing aid exceeds the MPO, causing distortion and temporal smearing (Dillon & Storey, 1998). This results in a degradation of speech cues and a perceived lack of clarity, particularly in the presence of competing noise. Similarly, insufficient loudness reduces the availability of speech cues. There are numerous reports of subjective degradation of sound when the MPO is set lower than prescribed levels, particularly in linear hearing instruments (Kuk et al., 2008; Storey et al., 1998; Preminger et al., 2001). There is not yet consensus on whether low MPO levels also cause objective degradation in performance.
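As a simplified, hypothetical illustration of the saturation problem described above, the sketch below limits a single-channel output at the MPO. Real hearing aids apply multi-channel compression and more sophisticated limiting, so the bare minimum operation, function name and level values here are assumptions for illustration only.

```python
# Simplified, single-channel picture of output limiting: when input level
# plus gain would exceed the MPO, the hearing aid saturates and the peaks
# of the signal are limited (with attendant distortion). Values are invented.
def aided_output_db(input_db, gain_db, mpo_db):
    """Return the output level (dB SPL) after limiting at the MPO."""
    desired = input_db + gain_db
    return min(desired, mpo_db)

# A loud speech peak (85 dB SPL) with 30 dB of gain against a 105 dB SPL MPO:
print(aided_output_db(85, 30, 105))  # 105 -> saturated; the top 10 dB of the peak is lost
# A moderate input stays within the linear range:
print(aided_output_db(65, 30, 105))  # 95
```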

The purpose of the study described here was to determine if sub-optimal MPO could affect speech intelligibility in the presence of noise, even in a multi-channel, nonlinear hearing aid. Furthermore, the authors examined if gain reductions from a noise reduction algorithm could mitigate the detrimental effects of the lower MPO. The authors reasoned that a reduction in output at higher input levels, via compression and noise reduction, could reduce saturation and temporal distortion.

Eleven adults with flat, severe hearing losses participated in the reviewed study. Subjects were fitted bilaterally with 15-channel, wide dynamic range compression, behind-the-ear hearing aids. Microphones were set to omnidirectional and, other than noise reduction, no special features were activated during the study. Subjects responded to stimuli from the Hearing in Noise Test (HINT; Nilsson et al., 1994) presented at 0 degrees azimuth in the presence of continuous speech-shaped noise. The HINT stimuli yielded reception threshold for speech (RTS) scores for each test condition.

Test conditions included two MPO prescriptions: the default MPO level (Pascoe, 1989) and 10dB below that level. The lower setting was chosen based on previous work that reported an approximately 18dB acceptable MPO range for listeners with severe hearing loss  (Storey et al., 1998). MPOs set at 10dB below default would therefore be likely to approach the low end of the acceptable range, resulting in perceptual consequences. Speech-shaped noise was presented at two levels: 68dB and 75dB. Testing was done with and without digital noise reduction (DNR).

Analysis of the HINT RTS scores yielded significant main effects of MPO and DNR, as well as significant interactions between MPO and DNR, and DNR and noise level. There was no significant difference between the two noise level conditions. Subjects performed better with the default MPO setting versus the reduced MPO setting. The interaction between the MPO and DNR showed that subjects’ performance in the low-MPO condition was less degraded when DNR was activated. These findings support the authors’ hypotheses that reduced MPO can adversely affect speech discrimination and that noise reduction processing can at least partially mitigate these adverse effects.

Prescriptive formulae have proven to be reasonably good predictors of acceptable MPO levels (Storey et al., 1998; Preminger et al., 2001). In contrast, there is some question as to the value of clinical UCL testing prior to fitting, especially when validation with loudness measures is performed after the fitting (Mackersie, 2006). Improper instruction for the UCL task may yield inappropriately low UCL estimates, resulting in compromised performance and sound quality. The authors of the current paper recommend following prescriptive targets for MPO and conducting verification measures after the fitting, such as real-ear saturation response (RESR) measurement and subjective loudness judgments.

Another scenario, and an ultimately avoidable one, involves individuals who have been fitted with instruments that are inappropriate for their loss, usually because of cosmetic concerns. It is unfortunately not unusual for individuals with severe hearing losses to be fitted with RIC or CIC instruments because of their desirable cosmetic characteristics. Smaller receivers will likely have MPOs that are too low for hearing aid users with severe hearing loss. Many hearing aid users may not realize what they are giving up when they select a CIC or RIC and may view these styles as equally appropriate options for their loss. The hearing aid selection process must therefore be guided by the clinician; clients should be educated about the benefits and limitations of various hearing aid options and counseled about the adverse effects of under-fitting their loss with a more cosmetically appealing option.

The results of the current study are important because they illuminate an issue related to hearing aid output that might not always be taken into clinical consideration. MPO settings are usually thought of as a way to prevent loudness discomfort, so the concern is to avoid setting the MPO too high. Kuk and his colleagues have shown that an MPO that is too low could also have adverse effects and have provided valuable information to help clinicians select appropriate MPO settings. Additionally, their findings show objective benefits and support the use of noise reduction strategies, particularly for individuals with reduced dynamic range due to severe hearing loss or tolerance issues. Of course their findings may not be generalizable to all multi-channel compression instruments, with the wide variety of compression characteristics that are available, but they present important considerations that should be examined in further detail with other instruments.

References

ANSI (1997). ANSI S3.5-1997. American National Standards methods for the calculation of the speech intelligibility index. American National Standards Institute, New York.

Dillon, H. & Storey, L. (1998). The National Acoustic Laboratories’ procedure for selecting the saturation sound pressure level of hearing aids: theoretical derivation. Ear and Hearing 19(4), 255-266.

Hawkins, D., Walden, B., Montgomery, A. & Prosek, R. (1987). Description and validation of an LDL procedure designed to select SSPL90. Ear and Hearing 8, 162-169.

Kuk, F., Korhonen, P., Baekgaard, L. & Jessen, A. (2008). MPO: A forgotten parameter in hearing aid fitting. Hearing Review 15(6), 34-40.

Kuk et al., (2010). Effect of MPO and noise reduction on speech recognition in noise. Journal of the American Academy of Audiology, submitted November 2010, fast track article.

Kuk, F. & Paludan-Muller, C. (2006). Noise management algorithm may improve speech intelligibility in noise. Hearing Journal 59(4), 62-65.

Mackersie, C. (2006). Hearing aid maximum output and loudness discomfort: are unaided loudness measures needed? Journal of the American Academy of Audiology 18 (6), 504-514.

Nilsson, M., Soli, S. & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95(2), 1085-1099.

Pascoe, D. (1989). Clinical measurements of the auditory dynamic range and their relation to formulae for hearing aid gain. In J. Jensen (Ed.), Hearing Aid Fitting: Theoretical and Practical Views. Proceedings of the 13th Danavox Symposium. Copenhagen: Danavox, pp. 129-152.

Preminger, J., Neuman, A. & Cunningham, D. (2001). The selection and validation of output sound pressure level in multichannel hearing aids. Ear and Hearing 22(6), 487-500.

Storey, L., Dillon, H., Yeend, I. & Wigney, D. (1998). The National Acoustic Laboratories’ procedure for selecting the saturation sound pressure level of hearing aids: experimental validation. Ear and Hearing 19(4), 267-279.

Effects of Digital Noise Reduction on Children’s Speech Understanding

Effects of Digital Noise Reduction on Speech Perception for Children with Hearing Loss

Stelmachowicz, P., Lewis, D., Hoover, B., Nishi, K., McCreery, R., and Woods, W. (2010)

Because a great deal of everyday communication takes place in the presence of some level of background noise, hearing aid performance in noise is of interest to researchers, clinicians and hearing aid users. It is well established that directional microphones can improve signal-to-noise ratio (SNR) for adult hearing aid users as well as children (Valente et al. 1995; Gravel et al. 1999). It is generally accepted that Digital Noise Reduction (DNR) will not improve speech recognition in noise (Levitt et al. 1993; Bentler et al. 2008). Digital Noise Reduction has, however, resulted in improved overall sound quality judgments and decreased listening effort (Boymans & Dreschler, 2000; Walden et al., 2000; Sarampalis et al., 2009).

In noisy situations, adult listeners use a variety of cues to understand conversational speech, including visual cues, situational cues, semantic and grammatical context. Young children with limited language skills may not be able to take advantage of this information and may rely more on acoustic cues. Indeed, most studies show that children require better SNRs than adults (Blandy & Lutman, 2005; Jamieson et al. 2004).

For hearing-impaired children, hearing aids are more than a tool for the recognition of speech; they facilitate speech and language acquisition and development. As the authors of the current study pointed out, “amplification must facilitate the development of early auditory skills, laying the foundation for the extraction of regularities in the speech signal and the development of language.” Therefore, improving access to speech is of particular importance for young hearing aid users. At the same time, it must also be determined that DNR or directional processing does not degrade the speech signal.

The purpose of the present study was to determine the effect of DNR on children’s perception of nonsense syllables, words and sentences in the presence of noise. Sixteen children with mild to moderately severe hearing loss participated in the study.  Subjects were divided into two groups: 5-7 year olds and 8-10 year olds. The authors chose these age groups to evaluate the effect of development on the perception of speech stimuli with varying levels of context.  Subjects were fitted with binaural behind-the-ear hearing aids with DNR and amplitude compression. Directional microphones were not activated.  Hearing aids were programmed to DSL 5.0 targets and settings were verified with real-ear measurements.

The children were presented with speech stimuli mixed with speech-shaped noise at SNRs of 0dB, +5dB and +10dB; a brief sketch of how a target SNR is set follows the list below. Three levels of context were represented:

  • VCV (vowel-consonant-vowel) nonsense syllables, 15 consonants combined with /a/
  • Monosyllabic words from the Phonetically Balanced Kindergarten List (PBK – Haskins 1949)
  • Meaningful sentences with three key words each (Bench et al. 1979)
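Setting a target SNR of 0, +5 or +10 dB amounts to scaling the noise so that its RMS level sits the desired number of decibels below the RMS level of the speech. The sketch below shows that arithmetic on generic signal arrays; it is an illustration of the level calculation only, not the laboratory presentation system used in the study.

```python
# Minimal sketch: scale a noise signal so that the speech-to-noise ratio
# equals a target value in dB. Illustrative only.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def mix_at_snr(speech, noise, target_snr_db):
    """Return (speech plus scaled noise, noise gain) at the requested SNR."""
    gain = rms(speech) / (rms(noise) * 10 ** (target_snr_db / 20.0))
    return speech + gain * noise, gain

fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 440 * t)               # stand-in for a speech token
noise = np.random.default_rng(1).normal(0.0, 0.05, fs)   # stand-in for speech-shaped noise

for snr in (0, 5, 10):
    mixed, gain = mix_at_snr(speech, noise, snr)
    achieved = 20 * np.log10(rms(speech) / rms(gain * noise))
    print(f"target {snr:+d} dB SNR -> achieved {achieved:+.1f} dB")
```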

Data analysis revealed that noise reduction did not have a significant positive or negative effect on performance.  There was no significant main effect for context, but not surprisingly, post hoc testing revealed that scores for both age groups were higher for sentences than they were for both nonsense syllables and monosyllables.  Also not surprisingly, performance improved with increases in SNR for all types of speech stimuli. There was a significant effect of age, with older subjects demonstrating better overall performance than younger subjects.  There was no interaction between age and noise reduction, indicating that the use of noise reduction did not affect performance of younger and older subjects differently. There was no interaction between age group and context, indicating that both age groups benefitted from context equivalently.

The authors observed a great deal of variability among subjects, especially in the younger group. Though noise reduction did not significantly affect performance overall, the authors found that more than half of the younger subjects demonstrated poorer recognition of words in the DNR-on condition. The most common consonant confusions were /f/ for /t/, /g/ for /d/, and /b/ for /v/, suggesting that voicing information was perceived correctly but place and manner of articulation were not easily distinguished. This finding is in agreement with previous results reported by Jamieson et al. (1995), who found that DNR processing resulted in either no improvement or a slight decrement in performance and that consonant place of articulation was particularly affected. Granted, several cues affect consonant perception, and slight decrements in the acoustic representation of a consonant may be offset by the availability of other cues. For example, though /f/ and /t/ may be difficult to discriminate, a participant in face-to-face conversation benefits from visual cues to help identify these consonants. Still, the opportunity exists to further study the effect of noise reduction on consonant perception with adult and pediatric subjects.

Despite the minimal effect of noise reduction on speech recognition, all listeners in Jamieson’s 1995 study reported a strong preference for DNR processing when hearing continuous speech in a variety of listening environments. This leads to an important consideration regarding the use of noise reduction processing in hearing aids for children. Although the current investigation did not address listening preference, previous studies with adults have often shown positive effects of noise reduction processing on listening effort and sound quality. The current authors suggested that if this were also the case for children, it could improve attentiveness and increase “time on task” in difficult listening situations. This is an interesting hypothesis, since attention and focus are essential for understanding speech in noise and many hearing-impaired children may demonstrate attention deficits.

Audiologists working with pediatric patients should consider noise reduction settings carefully. Although there were no statistically significant effects of noise reduction on speech perception in this study, the decrease in word recognition scores for younger children in the DNR-on condition is a concern and warrants further study. The authors point out that a child’s ability to recognize and understand speech requires ongoing, consistent auditory experiences. Previous use of amplification, age of identification and consistency of hearing aid use may have influenced the results of this study and may affect success with DNR processing in general. The effect of degree of hearing loss should also be considered, as it is possible that individuals with severe hearing losses could be adversely affected by even small decrements in speech information resulting from DNR processing.

Clinically, an important highlight of this study is that individual performance among children is highly variable. Digital Noise Reduction has the potential to ease listening but may compromise clarity of speech, and directional microphones may improve access to speech but risk compromising audibility for off-axis talkers. These considerations suggest that some advanced features should be reserved for older children and specific environments. Among that older population, there may be some inclination to allow manual adjustment of hearing aid settings. However, Ricketts and Galster (2008) correctly point out that children cannot be expected to adjust manual directionality controls reliably. This argues for a fitting rationale that either omits some advanced features or allows them to function automatically, with the assumption that they will only be active in appropriate situations and will “do no harm” in regard to speech recognition.

Further study of the perceptual effects of noise reduction and subjective preferences in children is needed. The possibility remains that DNR may offer hearing-impaired children other benefits such as improved attention and comfort in noise, possibly leading to increased satisfaction and compliance from pediatric patients.

References

Bench, J., Kowal, A., & Bamford, J. (1979). The BKB sentence lists for partially-hearing children. British Journal of Audiology 13, 108-112.

Bentler, R., Wu, Y.H., Kettel, J. (2008). Digital noise reduction: Outcomes from laboratory and field studies. International Journal of Audiology 47, 447-460.

Blandy, S. & Lutman, M. (2005). Hearing threshold levels and speech recognition in noise in 7-year-olds. International Journal of Audiology 44, 435-443.

Boymans, M., & Dreschler, W.A. (2000). Field trials using a digital hearing aid with active noise reduction and dual-microphone directionality.  Audiology 39, 260-268.

Gravel, J.S., Fausel, N., Liskow, C. (1999). Children’s speech recognition in noise using omnidirectional and dual microphone hearing aid technology. Ear and Hearing 20, 1-11.

Haskins, H.A. (1949). A phonetically balanced test of speech discrimination for children. Master’s thesis, Northwestern University, Evanston, IL.

Jamieson, D.G., Kranjc, G., Yu, K. (2004). Speech intelligibility of young school-aged children in the presence of real-life classroom noise. Journal of the American Academy of Audiology 15, 508-517.

Levitt, H., Bakke, M., Kates, J. (1993). Signal processing for hearing impairment. Scandinavian Audiology Supplement 38, 7-19.

Ricketts, T.A. & Galster, J. (2008). Head angle and elevation in classroom environments: implications for amplification. Journal of Speech, Language and Hearing Research 15, 516-525.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech Language and Hearing Research, 52, 1230-1240.

Valente, M., Fabry, D., Potts, L.G. (1995). Recognition of speech in noise with hearing aids using dual microphones. Journal of the American Academy of Audiology 6, 440-449.

Walden, B.E., Surr, R.K., Cord, M.T. (2000). Comparison of benefits provided by different hearing aid technologies. Journal of the American Academy of Audiology 11, 540-560.

A comparison of Receiver-In-Canal (RIC) and Receiver-In-The-Aid (RITA) hearing aids

Article of interest:

The Effects of Receiver Placement on Probe Microphone, Performance and Subjective Measures with Open Canal Hearing Instruments

Alworth, L., Plyler, P., Bertges-Reber, M. & Johnstone, P. (2010)

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors.

Open-fit behind-the-ear hearing instruments are favored by audiologists and patients alike because of their small size and discreet appearance, as well as their ability to minimize occlusion. The performance of open-fit instruments with Receiver-In-The-Aid (RITA) and Receiver-In-Canal (RIC) designs has been compared to unaided conditions and to traditional, custom-molded instruments. However, few studies have examined the effect of receiver location on performance by comparing RITA and RIC instruments to each other. In the current paper, Alworth and her associates were interested in the effect of receiver location on:

- occlusion

- maximum gain before feedback

- speech perception in quiet and noise

- subjective performance and listener preferences

Theoretically, RIC instruments should outperform RITA instruments for a number of reasons. Delivery of sound through the thin tube of a RITA instrument can cause peaks in the frequency response, resulting in upward spread of masking (Hoen & Fabry, 2007). Such masking effects are of particular concern for typical open-fit hearing aid users: individuals with high-frequency hearing loss. RIC instruments are also capable of a broader bandwidth than RITA aids (Kuk & Baekgaard, 2008), may present a lower feedback risk because of the greater distance between the microphone and receiver (Ross & Cirmo, 1980), and may allow greater maximum gain before feedback (Hoen & Fabry, 2007; Hallenbeck & Groth, 2008).

Twenty-five subjects with mild to moderate, high-frequency, sensorineural hearing loss participated in the study. Fifteen had no prior experience with open-canal hearing instruments, whereas ten had some prior experience. Each subject was fitted bilaterally with RIC and RITA instruments with identical signal processing characteristics, programmed to match NAL-NL1 targets. Directional microphones and digital noise reduction features were deactivated. Subjects used one instrument type (RIC or RITA) for six weeks before testing and then wore the other type for six weeks before being tested again. The order of instrument styles was counterbalanced among the subjects.

Probe microphone measures were conducted to evaluate occlusion and maximum gain before feedback. Speech perception was evaluated with the Connected Speech Test (CST; Cox et al., 1987), the Hearing in Noise Test (HINT; Nilsson et al., 1994), the High Frequency Word List (HFWL; Pascoe, 1975) and the Acceptable Noise Level (ANL) test (Nabelek et al., 2004). Subjective responses were evaluated with the Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox & Alexander, 1995), overall listener preferences for quiet and noise, and satisfaction ratings for five criteria: sound quality, appearance, retention and comfort, speech clarity, and ease of use and care.

Real-Ear Occluded Response measurements showed minimal occlusion for both types of instruments in this study. Although there was more occlusion overall for RIC instruments, the difference between RIC and RITA hearing instruments was not significant. Overall maximum gain before feedback did not differ between RIC and RITA instruments. However, when analyzed by frequency, the authors found significantly greater maximum gain in the 4000-6000Hz range for RIC hearing instruments.
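Because the overall comparison of maximum gain before feedback was not significant while the by-band comparison was, the informative analysis is a paired, within-subject comparison of gain before feedback in each frequency region. The sketch below illustrates that comparison on hypothetical per-subject values; the numbers and the use of a paired t-test are my own assumptions, not the authors’ data or statistical method.

```python
# Hedged sketch: paired comparison of maximum gain before feedback between
# RIC and RITA fittings within one frequency band, using hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects = 25
# Hypothetical maximum gain before feedback (dB) in the 4000-6000 Hz region
rita_gain = rng.normal(loc=25.0, scale=4.0, size=n_subjects)
ric_gain = rita_gain + rng.normal(loc=5.0, scale=3.0, size=n_subjects)  # assumed RIC advantage

t_stat, p_value = stats.ttest_rel(ric_gain, rita_gain)  # within-subject comparison
print(f"mean RIC advantage: {np.mean(ric_gain - rita_gain):.1f} dB, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```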

On the four speech tests, there were no significant differences between RITA and RIC instruments. Furthermore, there were no significant improvements for aided listening over unaided, except for experienced users with RIC instruments on the Connected Speech Test (CST). It appears that amplification did not significantly improve scores in quiet conditions, for either instrument type, because of ceiling effects. The high unaided speech scores indicated that the subjects in this study, because of their audiometric configurations, already had sufficient access to high-frequency speech cues even in the unaided conditions. Aided performance in noise was significantly poorer than unaided performance on the HINT, but no other significant differences were found for aided versus unaided conditions. This finding is in agreement with previous studies that also found degraded HINT scores for aided versus unaided conditions (Klemp & Dhar, 2008; Valente & Mispagel, 2008).

APHAB responses indicated better aided performance for both instrument types than for unaided conditions on all APHAB categories except aversiveness, in which aided performance was worse than unaided. There were no significant differences between RIC and RITA instruments. However, satisfaction ratings were significantly higher for RIC hearing instruments. New users reported more satisfaction with the appearance of RIC instruments; experienced users indicated more satisfaction with appearance, retention, comfort and speech clarity. Overall listener preferences were similar across the two groups, with 80% of experienced users and 74% of new users preferring RIC instruments over RITA instruments.

The findings of Alworth and colleagues provide useful information for clinicians and their open-fit hearing aid candidates. Because they provided significantly more high-frequency gain before feedback than RITA instruments, RIC instruments may be more appropriate for patients with significant high-frequency hearing loss. Indeed, this result may suggest that RIC instruments should be the preferred recommendation for open-fit candidates. The results of this study also underscore the importance of using subjective measures with hearing aid patients. Objective speech testing did not yield significant performance differences between RIC and RITA instruments, but participants showed a significant preference for RIC instruments.

Further information is needed to compare performance in noise with RIC and RITA instruments. In this study and others, some objective scores and subjective ratings were poorer for aided conditions than unaided conditions. It is important to note that in the current study, all noise and speech were presented at a 0° azimuth angle, with directional microphones disabled. In real-life environments, it is likely that users would have directional microphones and would participate in conversations with various noise sources surrounding them. Previous work has shown significant improvements with directionality in open-fit instruments (Valente & Mispagel, 2008; Klemp & Dhar, 2008). Future work comparing directional RIC and RITA instruments, in a variety of listening environments, would be helpful for clinical decision making.

Although the performance effects and preference ratings reported here support recommendation of RIC instruments, clinicians should still consider other factors when discussing options with individual patients. For instance, small ear canals may preclude the use of RIC instruments because of retention, comfort or occlusion concerns. Patients with excessive cerumen may prefer RITA instruments because of easier maintenance and care, while those with cosmetic concerns may prefer the smaller size of RIC instruments. Every patient’s individual characteristics and concerns must be considered, but the potential benefits of RIC instruments warrant further examination and may indicate that this receiver configuration should be recommended over slim-tube fittings.

References

Alworth, L.N., Plyler, P.N., Rebert, M.N., & Johnstone, P.M. (2010). The effects of receiver placement on probe microphone, performance, and subjective measures with open canal hearing instruments. Journal of the American Academy of Audiology, 21, 249-266.

Cox, R.M., & Alexander, G.C. (1995). The Abbreviated Profile of Hearing Aid Benefit. Ear and Hearing, 16, 176-186.

Cox, R.M., Alexander, G.C. & Gilmore, C. (1987). Development of the Connected Speech Test (CST). Ear and Hearing, 8, 119-126.

Hallenbeck, S.A., & Groth, J. (2008). Thin-tube and receiver-in-canal devices: there is positive feedback on both! Hearing Journal, 61(1), 28-34.

Hoen, M. & Fabry, D. (2007). Hearing aids with external receivers: can they offer power and cosmetics? Hearing Journal, 60(1), 28-34.

Klemp, E.J. & Dhar, S. (2008). Speech perception in noise using directional microphones in open-canal hearing aids. Journal of the American Academy of Audiology, 19(7), 571-578.

Kuk, F. & Baekgaard, L. (2008). Hearing aid selection and BTEs: choosing among various “open ear” and “receiver in canal” options. Hearing Review, 15(3), 22-36.

Nabelek, A.K., Tampas, J.W. & Burchfield, S.B. (2004). Comparison of speech perception in background noise with acceptance of background noise in aided and unaided conditions. Journal of Speech and Hearing Research, 47, 1001-1011.

Nilsson, M., Soli, S. & Sullivan, J. (1994). Development of the Hearing in Noise Test for the measurement of speech reception threshold in quiet and in noise. Journal of the Acoustical Society of America, 95, 1085-1099.

Pascoe, D. (1975). Frequency responses of hearing aids and their effects on the speech perception of hearing impaired subjects. Annals of Otology, Rhinology and Laryngology suppl. 23, 84: #5, part 2.

Valente, M. & Mispagel, K. (2008). Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology, 47, 329-336.

Reviewing the benefits of open-fit hearing aids

Article of interest:

Unaided and Aided Performance with a Directional Open-Fit Hearing Aid

Valente, M., and Mispagel, K.M. (2008)

This editorial discusses the clinical implications of an independent research study. The original work was not associated with Starkey Laboratories and does not reflect the opinions of the authors. 

With the continued popularity of directional microphone use in open-fit and receiver-in-canal (RIC) hearing aids, there has been increasing interest in evaluating their performance in noisy environments. A number of studies have investigated the performance of directional, open-fit BTEs in laboratory conditions (Valente et al., 1995; Ricketts, 2000a; Ricketts, 2000b). Some have evaluated directional microphone performance in real-life or simulated real-life noise environments (Ching et al., 2009). In the current study, the authors compared performance in omnidirectional, directional and unaided conditions using RIC instruments in R-Space™ (Revit et al., 2002) recorded restaurant noise. Their goal was to obtain more externally valid results by using real-life noise in a controlled laboratory setting.

The R-Space™ method involved recordings of real restaurant noise made with an 8-microphone circular array. For the test conditions, these recordings were presented through an 8-speaker circular array to simulate the conditions in the busy restaurant. One important factor that distinguishes this study from most others is that subjects listened to speech stimuli in the presence of noise from all directions, including the front. At the time of this study, only a few other investigations had tested directional microphone performance in the presence of multiple noise sources that included a frontal source (Ricketts, 2000a; Ricketts et al., 2001; Bentler et al., 2004).

The authors recruited 26 adults with no prior hearing aid experience for the study. They were fitted with binaural receiver-in-canal (RIC) instruments. The instruments were programmed without noise reduction processing and with independent omnidirectional and directional settings. Subjects were counseled on use and care of the instruments, including proper use of omnidirectional and directional programs. They returned for follow-up adjustments one week after their fitting, then used their instruments for four weeks before returning for testing. Subjects were given the opportunity to either purchase the hearing aids after the study at a 50% discount or receive a $200 payment for participation.

Hearing in Noise Test (HINT; Nilsson et al., 1994) sentence reception thresholds were obtained to evaluate sentence perception in the uncorrelated R-Space noise. The Abbreviated Profile of Hearing Aid Benefit (APHAB; Cox & Alexander, 1995) was also administered to evaluate perceived benefit from the instruments in the study. Four APHAB subscales were evaluated independently (a brief sketch of how subscale benefit is typically computed follows the list):

- Ease of communication (EC)
- Reverberation (RV)
- Background noise (BN)
- Aversiveness to loud sounds (AV)
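APHAB results are usually interpreted as benefit scores: for each subscale, the percentage of problems reported unaided minus the percentage reported aided, so that a positive value means fewer problems with the hearing aids (the aversiveness scale often moves in the opposite direction). The sketch below shows that arithmetic with hypothetical subscale scores; it is not the scoring tool used in the study.

```python
# Hedged sketch: APHAB benefit per subscale, computed as unaided problems
# minus aided problems (percent). All scores below are hypothetical.
unaided = {"EC": 55.0, "RV": 60.0, "BN": 70.0, "AV": 20.0}
aided   = {"EC": 25.0, "RV": 35.0, "BN": 45.0, "AV": 40.0}

for subscale in ("EC", "RV", "BN", "AV"):
    benefit = unaided[subscale] - aided[subscale]   # positive = fewer problems when aided
    print(f"{subscale}: benefit = {benefit:+.0f} percentage points")
```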

The authors found that subjects’ performance in the directional condition was significantly better than both omnidirectional and unaided conditions. The omnidirectional condition was not significantly better than unaided; in fact results were slightly worse than those obtained in the unaided condition.

For the APHAB results, the authors found that on the EC, RV and BN subscales, aided scores were significantly better than unaided scores. Perhaps not surprisingly, the AV score, which evaluates aversiveness to loud sounds, was worse in the aided conditions. The aided results combined omnidirectional and directional conditions, so it is possible that aversion to noise was greater in the omnidirectional condition than in the directional condition. However, this was not specifically evaluated in the current study.

The authors pointed out that their directional benefit, which averaged 1.7dB, was lower than the benefit found in other studies of open-fit or RIC hearing instruments (Ricketts, 2000b; Ricketts et al., 2001; Bentler et al., 2004; Pumford et al., 2000). However, they mention that most of those studies did not use frontal noise sources in their arrays. Frontal noise sources should have obvious detrimental effects on directional microphone performance, so it is likely that the speaker arrangement in the current study affected the measured directional improvement. At the time of this publication, one other study had been conducted using the R-Space™ restaurant noise (Compton-Conley et al., 2004). That study found mean directional benefits of 3.6 to 5.8 dB, but the subjects had normal hearing and the hearing aids used were not an open-fit design and were very different from the ones in the current study.
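Directional benefit is simply the per-subject difference between the omnidirectional and directional reception thresholds (a lower RTS is better, so omnidirectional minus directional gives the improvement in dB), averaged across listeners. The short sketch below shows that calculation with hypothetical scores chosen to land near the study’s reported mean; the individual values themselves are made up.

```python
# Hedged sketch: directional benefit as the per-subject difference between
# omnidirectional and directional HINT reception thresholds (hypothetical data).
import statistics

omni_rts        = [2.5, 1.0, 3.2, 0.5, 2.0]    # dB SNR, lower is better
directional_rts = [0.6, -0.4, 1.5, -1.2, 0.3]

benefit = [o - d for o, d in zip(omni_rts, directional_rts)]
print(f"mean directional benefit: {statistics.mean(benefit):.1f} dB")
```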

Clinicians can gain a number of important insights from Valente and Mispagel’s study. First and foremost, directional microphones are likely to provide significant benefits for users of RIC hearing aids. At the time of publication, the authors noted that directional benefit needed to be demonstrated in order to justify the extra expense of adding directional microphones to an open-fit hearing aid order. However, most of today’s open-fit and RIC instruments already come standard with directional microphones, many of which are automatically adjustable, so there is usually no need to justify the use of directional microphones on a cost basis, as they typically add nothing to the hearing aid purchase price.

This study provided more evidence for directional benefit in noise, but further work is needed to determine performance differences between directional and omnidirectional microphones in quiet conditions. Dispensing clinicians should always order instruments that have omnidirectional and directional modes, whether manually or automatically adjustable. This helps ensure that the instruments will perform optimally in most situations. Even instruments with automatically adjustable directional microphones often have push-buttons that allow us to give patients additional programs. For example, a manually accessible, directional program, perhaps with more aggressive noise reduction, offers the user another option for excessively noisy situations.

The current study obtained slightly reduced directional effects compared to other studies that tested subjects in speaker arrays without frontal noise sources. This underscores the importance of counseling patients about proper positioning when using directional settings. In general, patients should understand that they will be better off when they can put as much noise behind them as possible. But, it is also important to ensure that patients have reasonable expectations about directional microphones. They must understand that the directional microphone will help them focus on conversation in front of them, but will not completely remove competing noise behind them. Patients must also understand that omnidirectional settings are likely to offer no improvement in noise and might even be a detriment to speech perception in some noisy environments.

Subjects in Valente and Mispagel’s study were offered the opportunity to purchase their hearing instruments at a 50% discount after the study’s completion. Only 8 of the 26 subjects opted to do so. Of the remaining subjects, 3 reported that the perceived benefit was not enough to justify the purchase, whereas 15 subjects did not report any significant perceived benefit. This leads to another important point about patient counseling.

The subjects in this study, like most candidates for open-fit or RIC instruments, had normal low-frequency hearing. Therefore, they may have had less of a perceived need for hearing aids in the first place. It is important for audiologists to discuss realistic expectations and likely hearing aid benefits with patients in detail at the hearing aid selection appointment, before hearing aids are ordered. Patients who are unmotivated or do not perceive enough need for hearing assistance will ultimately be less likely to perceive significant benefit from their hearing aids. This is particularly true in everyday clinical situations, in which patients are not typically offered a 50% discount and will have to factor financial constraints into their decisions. For most open-fit or RIC candidates, their motivation and perceived handicap will be related to their lifestyle: their social activities, employment situation, hobbies, etc. Because a patient who has a less than satisfying experience with hearing aids may be reluctant to pursue them again in the future, it is critical for the clinician to help them establish realistic goals early on, before hearing aid options are discussed.

References
Bentler, R., Egge, J., Tubbs, J., Dittberner, A., and Flamme, G. (2004). Quantification of directional benefit across different polar response patterns. Journal of the American Academy of Audiology 15(9), 649-659.

Ching, T.C., O’Brien, A., Dillon, H., Chalupper, J., Hartley, L., Hartley, D., Raicevich, G., and Hain, J. (2009). Journal of Speech, Language and Hearing Research 52, 1241-1254.

Compton-Conley, C., Neuman, A., Killion, M., and Levitt, H. (2004). Performance of directional microphones for hearing aids: real world versus simulation. Journal of the American Academy of Audiology 15, 440-455.

Cox, R.M. and Alexander, G.C. (1995). The abbreviated profile of hearing-aid benefit. Ear and Hearing 16, 176-183.

Nilsson, M., Soli, S. and Sullivan, J. (1994). Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America 95, 1085-1099.

Pumford, J., Seewald, R,. Scollie, S. and Jenstad, L. (2000). Speech recognition with in-the-ear and behind-the-ear dual microphone hearing instruments. Journal of the American Academy of Audiology 11, 23-35.

Revit, L., Schulein, R., and Julstrom, S. (2002). Toward accurate assessment of real-world hearing aid benefit. Hearing Review 9, 34-38, 51.

Ricketts, T. (2000a). The impact of head angle on monaural and bilateral performance with directional and omnidirectional hearing aids. Ear and Hearing 21, 318-329.

Ricketts, T. (2000b). Impact of noise source configuration on directional hearing aid benefit and performance. Ear and Hearing 21, 194-205.

Ricketts, T., Lindley, G., and Henry, P. (2001). Impact of compression and hearing aid style on directional hearing aid benefit and performance. Ear and Hearing 22, 348-360.

Valente, M., Fabry, D., and Potts, L. (1995). Recognition of speech in noise with hearing aids using a dual microphone. Journal of the American Academy of Audiology 6, 440-449.

Valente, M., & Mispagel, K.M. (2008). Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology, 47, 329-336.

The effect of digital noise reduction on listening effort: an article review

This article marks the first in a monthly series for StarkeyEvidence.com.

Each month, scholarly journals publish articles on a wide array of topics. Some of these valuable articles and their useful conclusions never reach professionals in the clinical arena. The aim of these entries is to discuss research findings and their implications for hearing professionals in daily clinical practice. Some of these topics may have general clinical relevance, while others may target specific aspects of hearing aids and their application.

This first discussion revolves around an article by authors Sarampalis, Kalluri, Edwards, and Hafter entitled “Objective measures of listening effort: Effects of background noise and noise reduction”. In this 2009 study, the authors pursue the sometimes elusive benefits of digital noise reduction. A review of past literature suggests that digital noise reduction, as implemented in hearing aids, benefits patients through improved sound quality, ease of listening and a possible perceived improvement in speech understanding. Significant improvements in speech understanding are, however, not a routinely observed benefit of digital noise reduction and some studies have shown significant decreases in speech understanding with active digital noise reduction.

In a 1992 article, authors Hafter and Schlauch suggest that noise reduction may lighten a patient’s cognitive load, essentially freeing resources for other tasks. To better understand the proposed effect, imagine driving a car in an unfamiliar area. It’s common for drivers to turn their stereo down, or off, when driving in a demanding situation. This is beneficial, not because music affects driving ability, but because the additional auditory input is distracting, effectively increasing the driver’s cognitive load. By removing the distraction of the stereo, more cognitive resources are freed and the ability to focus, or pay attention to the complex task of driving is improved.

In order to better understand how digital noise reduction may affect attention and cognitive load, two experiments were completed. In the first experiment, research participants were asked to repeat the last word of sentences presented in a background of noise. After eight sentences the listener attempted to repeat as many of the target words as they could. The sentence material contained both high-context and no-context conditions, for example:

High context: A chimpanzee is an ape

No context: She might have discussed the ape

In the second experiment listeners were asked to judge if a random number between one and eight was even or odd, while at the same time listening to and repeating sentences presented in a background of noise. Both experiments incorporated a dual-task paradigm: the first asked participants to repeat select words presented in noise, while also remembering these words for later recall. The second required participants to repeat an entire sentence, presented in noise, while also completing a complex visual task.
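In a dual-task paradigm, listening effort is inferred from the secondary task: as the primary listening task becomes harder, reaction times on the secondary (parity-judgment) task slow down, and processing that eases listening should recover some of that time. The sketch below shows how secondary-task reaction times might be summarized by SNR and noise-reduction condition; the trial values and grouping are illustrative assumptions, not the authors’ data or analysis.

```python
# Hedged sketch: summarizing secondary-task (parity judgment) reaction times
# by SNR and noise-reduction condition. Values are illustrative only.
from collections import defaultdict
from statistics import mean

# (snr_db, dnr_on, reaction_time_ms) for individual trials
trials = [
    (-6, False, 820), (-6, False, 790), (-6, True, 720), (-6, True, 700),
    (-2, False, 680), (-2, False, 650), (-2, True, 640), (-2, True, 655),
    ( 2, False, 600), ( 2, False, 585), ( 2, True, 595), ( 2, True, 590),
]

grouped = defaultdict(list)
for snr, dnr_on, rt in trials:
    grouped[(snr, dnr_on)].append(rt)

for (snr, dnr_on), rts in sorted(grouped.items()):
    label = "DNR on " if dnr_on else "DNR off"
    print(f"SNR {snr:+d} dB, {label}: mean RT = {mean(rts):.0f} ms")
```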

Highlights from experiment one show:

  • performance in all conditions decreased as the signal-to-noise ratio became less favorable;
  • overall performance in the no-context conditions was lower than in the high-context conditions;
  • a comparison between performance with and without digital noise reduction showed a significant improvement in recall ability with digital noise reduction

Highlights from experiment two show:

  • performance in all conditions decreased as the signal-to-noise ratio became less favorable;
  • reaction times increased with decreased signal-to-noise ratio;
  • at -6 dB SNR, reaction times were significantly improved with digital noise reduction

The findings of this study show that the cognitive demands of non-auditory tasks, such as visual and memory tasks, inhibit the ability of a person to understand speech-in-noise. In other words, secondary tasks make speech understanding more difficult. Additionally, digital noise reduction algorithms can reduce cognitive effort under adverse listening conditions. The authors discuss the value of using cognitive measures in hearing aid research and speculate that directional microphones may provide a cognitive benefit as well.

The clinical implications of this study suggest that patients may experience benefits from wearing hearing aids that go beyond improved speech audibility. Modern signal processing may provide benefits that are only now being understood. For instance, a patient may report that new hearing aids have made listening easier and seem to suppress noise better than the old ones, even though routine evaluation of speech understanding shows no significant difference between the two sets of instruments.

Hearing aid success and benefit have traditionally been defined by the results of speech testing or questionnaires. If advanced technology can ease the task of listening, patients may be receiving benefits from their hearing aids that we are not currently prepared to evaluate in the clinic. Hopefully, work in this area will continue, increasing our understanding of the role that cognition plays in the success of the hearing aid wearer.

References:
Bentler, R., Wu, Y., Kettel, J., & Hurtig, R. (2008). Digital noise reduction: Outcomes from laboratory and field studies. International Journal of Audiology, 47(8), 447-460.

Hafter, E. R., & Schlauch, R. S. (1992). Cognitive factors and selection of auditory listening bands. In A. Dancer, D. Henderson, R. J. Salvi, & R. P. Hammernik ( Eds.), Noise-induced hearing loss (pp. 303–310). Philadelphia: B.C. Decker.

Sarampalis, A., Kalluri, S., Edwards, B., & Hafter, E. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech Language and Hearing Research, 52, 1230-1240.