Starkey Research & Clinical Blog

Patients with higher cognitive function may benefit more from hearing aid features

Ng, E.H.N., Rudner, M., Lunner, T., Pedersen, M.S., & Ronnberg, J. (2013). Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. International Journal of Audiology, Early Online, 1-9.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Research reports as well as clinical observations indicate that competing noise increases the cognitive demands of listening, an effect that is especially impactful for individuals with hearing loss (McCoy et al., 2005; Picou et al., 2013; Rudner et al., 2011). Listening effort is a cognitive dimension of listening that is thought to represent the allocation of cognitive resources needed for speech recognition (Hick & Tharpe, 2002). Working memory is a further dimension of cognition that involves the simultaneous processing and storage of information; its effect on speech processing may vary depending on the listening conditions (Rudner et al., 2011).

The concept of effortful listening can be characterized by the Ease of Language Understanding (ELU) model (Ronnberg, 2003; Ronnberg et al., 2008). In quiet conditions when the speech is audible and clear, the speech input is intact and is automatically and easily matched to stored representations in the lexicon. When speech inputs are weak, distorted or obscured by noise, mismatches may occur and speech inputs may need to be compared to multiple stored representations to arrive at the most likely match. In these conditions, additional cognitive resources must be allocated. Efficient cognitive functioning and large working memory capacity allow more rapid and successful matches between speech inputs and stored representations. Several studies have indicated a relationship between cognitive ability and speech perception: Humes (2007) found that cognitive function was the best predictor of speech understanding in noise, and Lunner (2003) reported that participants with better working memory capacity and verbal processing speed had better speech perception performance.

Following the ELU model, hearing aids may allow listeners to match inputs and stored representations more successfully, with less explicit processing. Noise reduction, as implemented in hearing aids, has been proposed as a technology that may ease effortful listening. In contrast, however, it has been suggested that hearing aid signal processing may introduce unwanted artifacts or alter the speech inputs so that more explicit processing is required to match them to stored representations (Lunner et al., 2009). If this is the case, hearing aid users with good working memory may function better with amplification because their expanded working memory capacity allows more resources to be applied to the task of matching speech inputs to long-term memory stores.

Elaine Ng and her colleagues investigated the effect of noise and noise reduction on word recall and identification and examined whether individuals were affected by these variables differently based on their working memory capacity. The authors had several hypotheses:

1. Noise would adversely affect memory, with poorer memory performance for speech in noise than in quiet.

2. Memory performance in noise would be at least partially restored by the use of noise reduction.

3. The effect of noise reduction on memory would be greater for items in late list positions because participants were older and therefore likely to have slower memory encoding speeds.

4. Memory in competing speech would be worse than in stationary noise because of the stronger masking effect of competing speech.

5. Overall memory performance would be better for participants with higher working memory capacity in the presence of noise reduction. This effect should be more apparent for late list items presented with competing speech babble.

Twenty-six native Swedish-speaking individuals with moderate to moderately-severe, high-frequency sensorineural hearing loss participated in the authors’ study. Prior to commencement of the study, participants were tested to ensure that they had age-appropriate cognitive performance. A battery of tests was administered and results were comparable to previously reported performance for their age group (Ronnberg, 1990).

Two tests were administered to study participants. First, a reading span test evaluated working memory capacity.  Participants were presented with a total of 24 three-word sentences and sub-lists of 3, 4 and 5 sentences were presented in ascending order. Participants were asked to judge whether the sentences were sensible or nonsense. At the end of each sub-list of sentences, listeners were prompted to recall either the first or final words of each sentence, in the order in which they were presented. Tests were scored as the total number of items correctly recalled.

The second test was a sentence-final word identification and recall (SWIR) test, consisting of 140 everyday sentences from the Swedish Hearing In Noise Test (HINT; Hallgren et al., 2006). This test involved two different tasks. The first was an identification task in which participants were asked to report the final word of each sentence immediately after listening to it. The second task was a free recall task; after reporting the final word of the eighth sentence of the list, participants were asked to recall all the words that they had previously reported. Three of the seven tested conditions included variations of noise reduction algorithms, ranging from one similar to those implemented in modern hearing aids to an ‘ideal’ noise reduction algorithm.

Prior to the main analyses of working memory and recall performance, two sets of groups were created based on reading span scores, using two different grouping methods. In the first set, two groups were created by splitting the sample at the median score, placing 13 individuals in a high reading span group and the remaining 13 in a low reading span group. In the second set, participants who scored in the mid-range on the reading span test were excluded from the analysis, creating high and low reading span groups of 10 participants each. There was no significant difference between groups based on age, pure-tone average or word identification performance in any of the noise conditions. Overall reading span scores for participants in this study were comparable to previously reported results (Lunner, 2003; Foo et al., 2007).
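The two grouping methods can be sketched in a few lines. The scores below are hypothetical stand-ins, since the study's raw reading span data are not reproduced here.

```python
# Sketch of the two grouping methods described above, using hypothetical
# reading span scores for 26 participants (not the study's actual data).
import statistics

scores = {f"S{i:02d}": s for i, s in enumerate(
    [12, 14, 9, 17, 11, 20, 8, 15, 13, 18, 10, 16, 12, 19,
     7, 14, 11, 21, 9, 16, 13, 18, 10, 15, 12, 17], start=1)}

# Method 1: median split -- every participant is assigned to a group.
median = statistics.median(scores.values())
high_split = [p for p, s in scores.items() if s > median]
low_split = [p for p, s in scores.items() if s <= median]

# Method 2: mid-range exclusion -- keep only the highest and lowest
# scorers, dropping mid-range participants to sharpen the group contrast.
ranked = sorted(scores, key=scores.get)
low_extreme, high_extreme = ranked[:10], ranked[-10:]

print(len(high_split) + len(low_split))     # 26: median split keeps everyone
print(len(low_extreme), len(high_extreme))  # 10 10: mid-range scorers excluded
```

Note that the median split retains all 26 participants, while the exclusion method trades sample size for a cleaner separation between high and low working memory groups.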

Also prior to the main analysis, the SWIR results were analyzed to compare noise reduction and ideal noise reduction conditions. There was no significant difference between noise reduction and ideal noise reduction conditions in the identification or free recall tasks, nor was there an interaction of noise reduction condition with reading span score. Therefore, only the noise reduction condition was considered in the subsequent analyses.

The relationship between reading span score (representing working memory capacity) and SWIR recall was examined for all the test conditions. Reading span score correlated with overall recall performance in all conditions but one. When recall was analyzed as a function of list position (beginning or final), reading span scores correlated significantly with beginning (primacy) positions in quiet and most noise conditions. There was no significant correlation between overall reading span scores and items in final (recency) position in any of the noise conditions.

There were significant main effects for noise, list position and reading span group. When noise reduction was implemented, the negative effects of noise were lessened. There was a recency effect, in that performance was better for late list positions than for early list positions. Overall, the high reading span groups scored better than the low reading span groups, for both the median-split and mid-range exclusion groupings. The high reading span groups showed improved recall with noise reduction, whereas the low reading span groups exhibited no change in performance with versus without noise reduction. The use of four-talker babble had a negative effect on late list positions but did not affect items in other positions, suggesting that four-talker babble disrupted working memory more than steady-state noise. These analyses supported hypotheses 1, 2, 3 and 5, indicating that noise adversely affects memory performance (1), that noise reduction and list position interact with this effect (2, 3), especially for individuals with high working memory capacity (5).

The results also supported hypothesis 4, which suggested that competing speech babble would affect memory performance more than steady state noise. Recall performance was significantly better in the presence of steady-state noise than it was in 4-talker babble. Though there was no significant effect of noise reduction overall, high reading span participants once again outperformed low reading span participants with noise reduction.

In summary, the results of this study determined that noise had an adverse effect on recall, but that this effect was mildly mitigated by the use of noise reduction. Four-talker babble was more disruptive to recall performance than was steady-state noise. Recall performance was better for individuals with higher working memory capacity. These individuals also demonstrated more of a benefit from noise reduction than did those with lower working memory capacity.

Recall performance is better in quiet conditions than in noise, presumably because fewer cognitive resources are required to encode the speech input (Murphy et al., 2000). Ng and her colleagues suggest that noise reduction helps to perceptually segregate speech from noise, allowing the speech input to be matched to stored lexical representations with less cognitive demand. Noise reduction may therefore at least partially reverse the negative effect of noise on working memory.

Competing speech babble is more likely to be cognitively demanding than steady-state noise (such as an air conditioner) because it contains meaningful information that is more distracting and harder to separate from the speech of interest (Sorqvist & Ronnberg, 2012). Not only is the speech signal of interest degraded by the presence of competing sound and therefore harder to encode, but additional cognitive resources are required to inhibit the unwanted or irrelevant linguistic information (Macken, 2009).  Because competing speech puts more demands on cognitive resources, it is more potentially disruptive than steady-state noise to perception of the speech signal of interest.

Unfortunately, much of the background noise encountered by hearing aid wearers is competing speech. The classic example of the cocktail party illustrates one of the most challenging situations for hearing-impaired individuals, in which they must try to attend to a proximal conversation while ignoring multiple conversations surrounding them. The results of this study suggest that noise reduction may be more useful in these situations for listeners with better working memory capacity; however, noise reduction should still be considered for all hearing aid users, with comprehensive follow-up care to make adjustments for individuals who are not functioning well in noisy conditions. Noise reduction may generally alleviate perceived effort or annoyance, allowing a listener to be more attentive to the speech signal of interest or to remain in a noisy situation that would otherwise be uncomfortable or aggravating.

More research is needed on the effects of noise, noise reduction and advanced signal processing on listening effort and memory in everyday situations. It is likely that performance is affected by numerous variables of the hearing aid, including compression characteristics, directionality, noise reduction, as well as the automatic implementation or adjustment of these features. These variables in turn combine with user-related characteristics such as age, degree of hearing loss, word recognition ability, cognitive capacity and more.


Foo, C., Rudner, M., & Ronnberg, J. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology 18, 618-631.

Hallgren, M., Larsby, B. & Arlinger, S. (2006). A Swedish version of the hearing in noise test (HINT) for measurement of speech recognition. International Journal of Audiology 45, 227-237.

Hick, C. B., & Tharpe, A. M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech Language and Hearing Research 45, 573–584.

Humes, L. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology 18, 590-603.

Lunner, T. (2003). Cognitive function in relation to hearing aid use. International Journal of Audiology 42, (Suppl. 1), S49-S58.

Lunner, T., Rudner, M. & Ronnberg, J. (2009). Cognition and hearing aids. Scandinavian Journal of Psychology 50, 395-403.

Macken, W.J., Phelps, F.G. & Jones, D.M. (2009). What causes auditory distraction? Psychonomic Bulletin and Review 16, 139-144.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A, 58, 22-33.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing 34 (5).

Ronnberg, J. (2003). Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: a framework and a model. International Journal of Audiology 42 (Suppl. 1), S68-S76.

Ronnberg, J., Rudner, M. & Foo, C. (2008). Cognition counts: A working memory system for ease of language understanding (ELU). International Journal of Audiology 47 (Suppl. 2), S99-S105.

Rudner, M., Ronnberg, J. & Lunner, T. (2011). Working memory supports listening in noise for persons with hearing impairment. Journal of the American Academy of Audiology 22, 156-167.

Sorqvist, P. & Ronnberg, J. (2012). Episodic long-term memory of spoken discourse masked by speech: What role for working memory capacity? Journal of Speech Language and Hearing Research 55, 210-218.

Hearing Aid Behavior in the Real World

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Hearing aid signal processing offers proven advantages for many everyday listening situations. Directional microphones improve speech recognition in the presence of competing sounds, and noise reduction decreases the annoyance of surrounding noise while possibly improving ease of listening (Sarampalis et al., 2009). Expansion reduces the annoyance of low-level environmental noise as well as circuit noise from the hearing aid. It is typical for modern hearing aids to offer automatic activation of signal processing features based on information derived through acoustic analysis of the environment. Some signal processing features can also be assigned to independent, manually accessible hearing aid memories. The opportunity to manually activate a hearing aid feature allows patients to make conscious decisions about the acoustic conditions of the environment and access an appropriately optimized memory configuration (Keidser, 1996; Surr et al., 2002).

However, many hearing aid users who need directionality and noise reduction may be unable to manually adjust their hearing aids, due to physical limitations or an inability to determine the optimal setting for a situation. Other users may be reluctant to make manual adjustments for fear of drawing attention to the hearing aids and therefore to the hearing impairment. Cord et al. (2002) reported that as many as 23% of users with manual controls do not use their additional programs and leave the aids in a default mode at all times. Most hearing aids now offer automatic directionality and noise reduction, taking the responsibility for situational adjustments away from the user. This allows more hearing aid users to experience advanced signal processing benefits and reduces the need for manual adjustments.

The decision to provide automatic activation of expansion, directionality and noise reduction is based on their known benefits for particular acoustic conditions, but it is not well understood how these features interact with each other or with changing listening environments in everyday use. This poses a challenge to clinicians during follow-up fine-tuning, because it is impossible to determine what features were activated at any particular moment. Datalogging offers an opportunity to better interpret a patient’s experience outside of the clinic or laboratory. Datalogging reports often include average daily or total hours of use as well as the proportion of time an individual has spent in quiet or noisy environments, but these are general summaries that do not link the activation of signal processing features to the acoustic environment at the moment of activation. For example, a clinician may be able to determine that an aid was in a directional mode 20% of the time and that the user spent 26% of their time listening to speech in the presence of noise, but cannot determine whether directional processing was active during those exposures to speech in noise. Therefore, the clinician must rely on user reports and observations to determine the appropriate adjustments, and these may not reliably represent the array of listening experiences and acoustic environments that were encountered (Wagener et al., 2008).

In the study discussed here, Banerjee investigated the implementation of automatic expansion, directionality and noise management features. She measured environmental sound levels to determine the proportion of time individuals spent in quiet and noisy environments, as well as how these input levels related to activation of automatic features. She also examined bilateral agreement across a pair of independently functioning hearing aids to determine the proportion of time that the aids demonstrated similar processing strategies.

Ten subjects with symmetrical, sensorineural hearing loss were fitted with bilateral, behind-the-ear hearing aids. Ages ranged from 49 to 78 years, with a mean of 62.3 years. All of the subjects were experienced hearing aid users. Some subjects were employed and most participated in regular social activities with family and other groups. The hearing aids were 8-channel WDRC instruments programmed to match targets from the manufacturer’s proprietary fitting formula. Activation of the automatic directional microphone required input levels of 60dB or above, with the presence of noise in the environment and speech located in front of the wearer. Automatic noise management resulted in gain reductions in one or more of the 8 channels, based on the presence of noise-like sounds classified as “wind, mechanical sounds or other sounds” according to their spectral and temporal characteristics. No gain reductions were applied for sounds classified as “speech”. Expansion was active for inputs below the compression thresholds, which ranged from 27 to 54dB SPL.

All participants carried a Personal Digital Assistant (PDA) connected via programming boots to their hearing aids. The PDA logged environmental broadband input level as well as the status of expansion, directionality, noise management and channel-specific gain reduction. Participants were asked to wear the hearing aids connected to the PDA for as much of the day as possible, and measurements were made at 5-sec intervals to allow time for hearing aid features to update several times between readings. The PDAs were worn with the hearing aids for a period of 4-5 weeks, and at the end of data collection a total of 741 hours of hearing aid use had been logged for analysis.
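As a rough illustration of how such 5-second samples might be summarized, the sketch below simulates paired left/right input-level readings and computes the kinds of proportions reported in the study. The field names, thresholds and simulated values are illustrative assumptions, not Banerjee's data or analysis code.

```python
# Simulate paired 5-sec samples from two independently functioning aids
# and summarize them the way a datalogging analysis might.
import random

random.seed(0)
samples = []
for _ in range(1000):
    left = random.gauss(52, 8)               # broadband input, left aid (dB SPL)
    right = left + random.gauss(0, 1.5)      # small interaural difference
    samples.append({
        "left_db": left,
        "right_db": right,
        "exp_left": left < 50,               # assumed expansion threshold
        "exp_right": right < 50,
    })

n = len(samples)
quiet = sum(s["left_db"] <= 50 for s in samples) / n
agree = sum(s["exp_left"] == s["exp_right"] for s in samples) / n
big_iid = sum(abs(s["left_db"] - s["right_db"]) > 5 for s in samples) / n

print(f"time in quiet (<=50 dB SPL): {quiet:.0%}")
print(f"bilateral agreement on expansion: {agree:.0%}")
print(f"interaural difference > 5 dB: {big_iid:.0%}")
```

The same three summaries (time in quiet, bilateral agreement, large interaural differences) correspond to the statistics reported in the paragraphs that follow.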

Examination of the input level measurements revealed that subjects spent about half of their time in quiet environments with input levels of 50dB SPL or lower. Less than 5% of their time was spent in environments with input levels exceeding 65dB and the maximum recorded input level was 105dB SPL. This concurs with previous studies that reported high proportions of time spent in quiet environments such as living rooms or offices (Walden et al., 2004; Wagener et al., 2008).  The interaural difference in input level was 1dB about 50% of the time and exceeded 5dB only 5% of the time. Interaural differences were attributed to head shadow effects and asymmetrical sound sources as well as occasional accidental physical contact with the hearing aids, such as adjusting eyeglasses or rubbing the pinna.

Expansion was analyzed in terms of the proportion of time it was activated and whether the aids were in bilateral agreement. Expansion thresholds are meant to approximate low-level speech presented at 50dB.  In this study, expansion was active between 42% and 54% of the time, which is consistent with its intended activation, because about half the time the input levels were at or below 50dB SPL.  Bilateral agreement was relatively high at 77-81%.

Directional microphone status was measured according to the proportion of time that directionality was active and whether there was bilateral agreement. Again, directional status was consistent with the broadband input level measurements, in that directionality was active only about 10% of the time. The instruments were designed to switch to directional mode only when input levels were higher than 60dBA, and the broadband input measurements showed that participants encountered inputs higher than 65dB only about 5% of the time. Bilateral agreement for directionality was very high at 97%. Interestingly, the hearing aids were in directional mode only about 50% of the time in the louder environments.  This is likely attributable to the requirement for not only high input levels but also speech located in front of the listener in the presence of surrounding noise. A loud environment alone should not trigger directionality without the presence of speech in front of the listener.
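The switching rule described above amounts to a simple conjunction of conditions. The sketch below is an illustrative rendering of that logic with assumed names and a default threshold, not the manufacturer's implementation.

```python
# Illustrative sketch of the automatic directional switching rule:
# directionality engages only when the input is loud enough AND the
# classifier reports speech in front of the listener with noise present.

def directional_active(input_db: float, speech_in_front: bool,
                       noise_present: bool, threshold_db: float = 60.0) -> bool:
    """Return True when automatic directional mode should engage."""
    return input_db >= threshold_db and speech_in_front and noise_present

# A loud environment alone does not trigger directionality:
print(directional_active(72.0, speech_in_front=False, noise_present=True))   # False
print(directional_active(72.0, speech_in_front=True, noise_present=True))    # True
print(directional_active(55.0, speech_in_front=True, noise_present=True))    # False
```

This conjunction explains why the aids were directional only about half the time even in loud environments: high input level is necessary but not sufficient.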

Noise reduction was active 21% of the time with bilateral agreement of 95%. Again, this corresponds well with the input level measurements because noise reduction is designed to activate only in levels exceeding 50dB SPL. This does not indicate how often it was activated in the presence of moderate to loud noise, but as input levels rose, gain reductions resulting from noise management steadily increased as well. Gain reduction was 3-5dB greater in channels below 2250Hz than in the high frequency channels, consistent with the idea that environmental noise contains more energy in the low frequencies. Interaural differences in noise management were very small with a median difference in gain reduction of 0dB in all channels and exceeding 1dB only 5% of the time.

Bilateral agreement was generally quite high. Conditions in which there was less bilateral agreement may reflect asymmetric sound sources, accidental physical contact with the hearing instruments or true disagreement based on small differences in input levels arriving at the two ears. There may be everyday situations in which hearing aids might not perform in bilateral agreement, but this is not necessarily a disadvantage to the user. For instance, a driver in a car might experience directionality in the left aid but omnidirectional pickup from the right aid. This may be advantageous for the driver if there is another occupant in the passenger’s seat. Similarly, at a restaurant a hearing aid user might experience disproportionate noise or multi-talker babble from one side, depending on where he is situated relative to other people. Omnidirectional pickup on the quieter side of the listener with directionality on the opposite side might be desirable and more conducive to conversation. Similar arguments could be proposed for asymmetrical activation of noise management and its potential effects on comfort and ease of listening in noisy environments.

Banerjee’s investigation is an important step toward understanding how hearing aid signal processing is activated in everyday conditions. Though datalogging helps provide an overall snapshot of usage patterns and listening environments, the gross reporting of data limits its utility in the fine-tuning of hearing aid parameters. This study, and others like it, will provide useful information for clinicians providing follow-up care to hearing aid users.

It is noteworthy that participants spent about 50% of their time in environments with broadband input levels of 50dB or lower. While some participants were employed and others were not, this appears to be an acoustic reality for many hearing aid wearers. Subsequent studies with targeted samples would help determine how special features apply to the everyday environments of participants who lead more consistently active lifestyles.

Automatic, adaptive signal processing features have potential benefits for many hearing aid users, especially those who are unable to or prefer not to operate manual controls. However, proper recommendations and programming adjustments can only be made if clinicians understand how these features behave in everyday life. This study provides evidence that some features perform as designed and offers insight that clinicians can leverage when fine-tuning instruments based on real-world hearing aid behavior.



Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

Cord, M., Surr, R., Walden, B. & Olsen, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Keidser, G. (1996). Selecting different amplification for different listening conditions. Journal of the American Academy of Audiology 7, 92-104.

Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research 52, 1230–1240.

Surr, R., Walden, B., Cord, M. & Olsen, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Wagener, K., Hansen, M. & Ludvigsen, C. (2008). Recording and classification of the acoustic environment of hearing aid users. Journal of the American Academy of Audiology 19, 348-370.