Starkey Research & Clinical Blog

Hearing Aid Behavior in the Real World

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Hearing aid signal processing offers proven advantages for many everyday listening situations. Directional microphones improve speech recognition in the presence of competing sounds, and noise reduction decreases the annoyance of surrounding noise while possibly improving ease of listening (Sarampalis et al., 2009). Expansion reduces the annoyance of low-level environmental noise as well as circuit noise from the hearing aid. Modern hearing aids typically offer automatic activation of signal processing features based on information derived from acoustic analysis of the environment. Some of these features can also be assigned to independent, manually accessible hearing aid memories. The opportunity to manually activate a feature allows patients to make conscious decisions about the acoustic conditions of the environment and access an appropriately optimized memory configuration (Keidser, 1996; Surr et al., 2002).

However, many hearing aid users who need directionality and noise reduction may be unable to manually adjust their hearing aids, due to physical limitations or an inability to determine the optimal setting for a situation. Other users may be reluctant to make manual adjustments for fear of drawing attention to the hearing aids and therefore to the hearing impairment. Cord et al. (2002) reported that as many as 23% of users with manual controls do not use their additional programs and leave the aids in a default mode at all times. Most hearing aids now offer automatic directionality and noise reduction, taking the responsibility for situational adjustments away from the user. This allows more hearing aid users to experience advanced signal processing benefits and reduces the need for manual adjustments.

The decision to provide automatic activation of expansion, directionality, and noise reduction is based on their known benefits for particular acoustic conditions, but it is not well understood how these features interact with each other or with changing listening environments in everyday use. This poses a challenge to clinicians when it comes to follow-up fine-tuning, because it is impossible to determine which features were activated at any particular moment. Datalogging offers an opportunity to better interpret a patient’s experience outside of the clinic or laboratory. Datalogging reports often include average daily or total hours of use as well as the proportion of time an individual has spent in quiet or noisy environments, but these are general summaries that do not link the activation of signal processing features to the acoustic environment present at the time of activation. For example, a clinician may be able to determine that an aid was in a directional mode 20% of the time and that the user spent 26% of their time listening to speech in the presence of noise, but the log does not indicate whether directional processing was active during those exposures to speech in noise. Therefore, the clinician must rely on user reports and observations to determine appropriate adjustments, and these may not reliably represent the array of listening experiences and acoustic environments that were encountered (Wagener et al., 2008).

In the study discussed here, Banerjee investigated the implementation of automatic expansion, directionality and noise management features. She measured environmental sound levels to determine the proportion of time individuals spent in quiet and noisy environments, as well as how these input levels related to activation of automatic features. She also examined bilateral agreement across a pair of independently functioning hearing aids to determine the proportion of time that the aids demonstrated similar processing strategies.

Ten subjects with symmetrical, sensorineural hearing loss were fitted with bilateral, behind-the-ear hearing aids. Ages ranged from 49 to 78 years, with a mean of 62.3 years. All of the subjects were experienced hearing aid users. Some subjects were employed, and most participated in regular social activities with family and other groups. The hearing aids were 8-channel WDRC instruments programmed to match targets from the manufacturer’s proprietary fitting formula. Activation of the automatic directional microphone required input levels of 60dB or above, along with the presence of noise in the environment and speech located in front of the wearer. Automatic noise management resulted in gain reductions in one or more of the 8 channels, based on the presence of noise-like sounds classified as “wind, mechanical sounds or other sounds” according to their spectral and temporal characteristics. No gain reductions were applied for sounds classified as “speech”. Expansion was active for inputs below the compression thresholds, which ranged from 27 to 54dB SPL.
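Because the study does not publish the manufacturer’s actual switching algorithm, the decision rules described above can only be illustrated schematically. In the sketch below, the function names, the `Frame` fields, the sound-class labels, and the 6dB gain-reduction figure are hypothetical placeholders; only the 60dB directional criterion, the speech/noise conditions, and the 27-54dB SPL expansion range come from the study description:

```python
from dataclasses import dataclass
from enum import Enum

class SoundClass(Enum):
    SPEECH = "speech"
    WIND = "wind"
    MECHANICAL = "mechanical"
    OTHER = "other"

@dataclass
class Frame:
    """One acoustic analysis frame; all field names are hypothetical."""
    input_level_db: float    # broadband input level, dB SPL
    noise_present: bool      # classifier detects background noise
    speech_in_front: bool    # classifier locates speech ahead of the wearer
    sound_class: SoundClass  # dominant classification of the current input

def expansion_active(frame: Frame, compression_threshold_db: float) -> bool:
    # Expansion engages below the channel's compression threshold
    # (thresholds ranged from 27 to 54 dB SPL in the study aids).
    return frame.input_level_db < compression_threshold_db

def directional_active(frame: Frame) -> bool:
    # Directionality requires an input of 60 dB or above AND environmental
    # noise AND speech in front; a loud environment alone is not enough.
    return (frame.input_level_db >= 60.0
            and frame.noise_present
            and frame.speech_in_front)

def noise_management_gain_reduction_db(frame: Frame) -> float:
    # Gain is reduced only for noise-like classes, never for speech.
    # The 6.0 dB figure is an arbitrary placeholder, not a published value.
    if frame.sound_class is SoundClass.SPEECH:
        return 0.0
    return 6.0
```

Note that `directional_active` captures the design point raised in the results below: a high input level by itself does not switch the aid to directional mode.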

All participants carried a personal digital assistant (PDA) connected via programming boots to their hearing aids. The PDA logged environmental broadband input level as well as the status of expansion, directionality, noise management, and channel-specific gain reduction. Participants were asked to wear the hearing aids connected to the PDA for as much of the day as possible, and measurements were made at 5-sec intervals to allow time for hearing aid features to update several times between readings. The PDAs were worn with the hearing aids for a period of 4-5 weeks, and at the end of data collection a total of 741 hours of hearing aid use had been logged and studied.
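To make the proportions reported below concrete, here is a minimal sketch of how 5-second-interval logs of this kind might be reduced to the two summary statistics the study relies on: the proportion of time a feature was active, and the bilateral agreement between the two independently functioning aids. The record layout and field names are assumptions for illustration; the paper does not describe the PDA’s actual data format:

```python
from typing import List

# Hypothetical log record: one sample per 5-second interval, per ear,
# e.g. {"expansion": True, "directional": False, "noise_mgmt": True}.
# Field names are assumptions; the PDA's actual format is not described.
LogSample = dict

def proportion_active(samples: List[LogSample], feature: str) -> float:
    """Fraction of logged intervals in which a given feature was active."""
    if not samples:
        return 0.0
    return sum(bool(s[feature]) for s in samples) / len(samples)

def bilateral_agreement(left: List[LogSample],
                        right: List[LogSample],
                        feature: str) -> float:
    """Fraction of paired intervals in which both aids agree on a feature."""
    pairs = list(zip(left, right))
    if not pairs:
        return 0.0
    matches = sum(bool(l[feature]) == bool(r[feature]) for l, r in pairs)
    return matches / len(pairs)
```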

Examination of the input level measurements revealed that subjects spent about half of their time in quiet environments with input levels of 50dB SPL or lower. Less than 5% of their time was spent in environments with input levels exceeding 65dB, and the maximum recorded input level was 105dB SPL. This concurs with previous studies that reported high proportions of time spent in quiet environments such as living rooms or offices (Walden et al., 2004; Wagener et al., 2008). The interaural difference in input level was 1dB about 50% of the time and exceeded 5dB only 5% of the time. Interaural differences were attributed to head shadow effects and asymmetrical sound sources, as well as occasional accidental physical contact with the hearing aids, such as adjusting eyeglasses or rubbing the pinna.

Expansion was analyzed in terms of the proportion of time it was activated and whether the aids were in bilateral agreement. Expansion thresholds are meant to approximate the level of low-level speech, around 50dB SPL. In this study, expansion was active between 42% and 54% of the time, which is consistent with its intended activation, because input levels were at or below 50dB SPL about half the time. Bilateral agreement was relatively high at 77-81%.

Directional microphone status was measured according to the proportion of time that directionality was active and whether there was bilateral agreement. Again, directional status was consistent with the broadband input level measurements, in that directionality was active only about 10% of the time. The instruments were designed to switch to directional mode only when input levels were higher than 60dBA, and the broadband input measurements showed that participants encountered inputs higher than 65dB only about 5% of the time. Bilateral agreement for directionality was very high at 97%. Interestingly, the hearing aids were in directional mode only about 50% of the time in the louder environments. This is likely attributable to the requirement for not only high input levels but also speech located in front of the listener in the presence of surrounding noise. A loud environment alone should not trigger directionality without the presence of speech in front of the listener.

Noise reduction was active 21% of the time, with bilateral agreement of 95%. Again, this corresponds well with the input level measurements, because noise reduction is designed to activate only at levels exceeding 50dB SPL. This does not indicate how often it was activated in the presence of moderate to loud noise, but as input levels rose, gain reductions resulting from noise management steadily increased as well. Gain reduction was 3-5dB greater in channels below 2250Hz than in the high-frequency channels, consistent with the idea that environmental noise contains more energy in the low frequencies. Interaural differences in noise management were very small, with a median difference in gain reduction of 0dB in all channels, exceeding 1dB only 5% of the time.
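As a minimal sketch of how the interaural statistics above might be computed from paired per-channel gain-reduction logs (the row/column layout and the function name are assumptions; the study’s actual data format is not published):

```python
import statistics
from typing import List

def interaural_gain_reduction_summary(left_gr: List[List[float]],
                                      right_gr: List[List[float]],
                                      n_channels: int = 8) -> None:
    """Median and 95th-percentile interaural difference per channel.

    left_gr / right_gr hold one row per 5-second interval and one column
    of gain reduction (dB) per channel; this layout is hypothetical.
    """
    for ch in range(n_channels):
        diffs = sorted(abs(l[ch] - r[ch])
                       for l, r in zip(left_gr, right_gr))
        if not diffs:
            continue
        median = statistics.median(diffs)
        # The study reports the median difference was 0 dB in all channels
        # and exceeded 1 dB only about 5% of the time.
        p95 = diffs[int(0.95 * (len(diffs) - 1))]
        print(f"channel {ch}: median {median:.1f} dB, "
              f"95th percentile {p95:.1f} dB")
```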

Bilateral agreement was generally quite high. Conditions in which there was less bilateral agreement may reflect asymmetric sound sources, accidental physical contact with the hearing instruments or true disagreement based on small differences in input levels arriving at the two ears. There may be everyday situations in which hearing aids might not perform in bilateral agreement, but this is not necessarily a disadvantage to the user. For instance, a driver in a car might experience directionality in the left aid but omnidirectional pickup from the right aid. This may be advantageous for the driver if there is another occupant in the passenger’s seat. Similarly, at a restaurant a hearing aid user might experience disproportionate noise or multi-talker babble from one side, depending on where he is situated relative to other people. Omnidirectional pickup on the quieter side of the listener with directionality on the opposite side might be desirable and more conducive to conversation. Similar arguments could be proposed for asymmetrical activation of noise management and its potential effects on comfort and ease of listening in noisy environments.

Banerjee’s investigation is an important step toward understanding how hearing aid signal processing is activated in everyday conditions. Though datalogging helps provide an overall snapshot of usage patterns and listening environments, the gross reporting of data limits its utility for fine-tuning hearing aid parameters. This study, and others like it, will provide useful information for clinicians providing follow-up care to hearing aid users.

It is noteworthy that participants spent about 50% of their time in environments with broadband input levels of 50dB or lower. While some participants were employed and others were not, this appears to be an acoustic reality for many hearing aid wearers. Subsequent studies with targeted samples would help determine how special features apply to the everyday environments of participants who lead a more consistently active lifestyle.

Automatic, adaptive signal processing features have potential benefits for many hearing aid users, especially those who are unable to operate manual controls or prefer not to do so. However, proper recommendations and programming adjustments can only be made if clinicians understand how these features behave in everyday life. This study provides evidence that some features perform as designed and offers insight that clinicians can leverage when fine-tuning instruments based on real-world hearing aid behavior.

 

References

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

Cord, M., Surr, R., Walden, B. & Olsen, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Keidser, G. (1996). Selecting different amplification for different listening conditions. Journal of the American Academy of Audiology 7, 92-104.

Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research 52, 1230–1240.

Surr, R., Walden, B., Cord, M. & Olsen, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Wagener, K., Hansen, M. & Ludvigsen, C. (2008). Recording and classification of the acoustic environment of hearing aid users. Journal of the American Academy of Audiology 19, 348-370.

Walden, B., Surr, R., Cord, M. & Dyrlund, O. (2004). Predicting hearing aid microphone preference in everyday listening. Journal of the American Academy of Audiology 15, 365-396.