Starkey Research & Clinical Blog

Considerations for music processing through hearing aids

Arehart, K., Kates, J. & Anderson, M. (2011). Effects of Noise, Nonlinear Processing and Linear Filtering on Perceived Music Quality. International Journal of Audiology, 50(3), 177-190.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

The primary goal of most hearing aid selections and fittings is to improve communication by optimizing the perceived quality and recognition of speech sounds. While speech is arguably the most important sound that normal-hearing or hearing-impaired listeners encounter on a daily basis, the perception of other sounds should be taken into consideration, including music. Music perception and sound quality are particularly important for hearing aid users who are musicians or music enthusiasts, or those who use music for therapeutic purposes such as stress reduction.

Though some hearing aid users report satisfaction with the performance of their hearing aids for music listening (Kochkin, 2010), the signal processing characteristics that are most appropriate for speech are not ideal for music perception. Speech is produced by variants of one type of “instrument”, whereas music is produced by a range of instruments that create sounds with diverse timing, frequency and intensity characteristics. The perception of speech and music both rely on a broad frequency range, though high-frequency sounds carry particular importance for speech perception and lower-frequency sounds may be more important for music perception and enjoyment (Colucci, 2013). Furthermore, the dynamic range and the temporal and spectral characteristics of music may vary tremendously from one genre, or even one piece, to another. Hearing aid circuitry that is designed, selected and programmed specifically to optimize speech recognition may compromise the perception and enjoyment of music, and the effects may vary across musical genres.

A number of studies have examined the effects of non-linear hearing aid processing on music quality judgments.  Studies comparing compression limiting and peak clipping typically found that listeners preferred compression over peak clipping (Hawkins & Naidoo 1993; Tan et al., 2004). Whereas some studies found that listeners preferred less compression (Tan et al., 2004; Tan & Moore, 2008; Van Buuren et al., 1999) or longer compression release times (Hansen, 2002), others determined that listeners preferred wide-dynamic-range compression (WDRC) over compression limiting and peak clipping (Davies-Venn et al., 2007).

Arehart and her colleagues examined the effect of a variety of signal processing conditions on music quality ratings for normal-hearing and hearing-impaired individuals. They used simulated hearing aid processing to examine the effects of noise and nonlinear processing, linear filtering and combinations of noise, nonlinear processing and linear filtering. Their study had three primary goals:

1. To determine the effects of these processing conditions in isolation and in combination.

2. To examine the effects of nonlinear processing, noise and linear filtering on three different music genres.

3. To examine how these signal processing conditions affect the music quality ratings of normal-hearing and hearing-impaired individuals.

Subjects included a group of 19 normal-hearing adults with a mean age of 40 years (range 18-64 years) and a group of hearing-impaired adults with a mean age of 63 years (range 50 to 82 years). The normal-hearing subjects had audiometric thresholds of 20 dB HL or better from 250 through 8000 Hz and the hearing-impaired subjects had sloping, mild to moderately-severe hearing losses.

Participants listened to music samples from three genres: a jazz trio consisting of piano, acoustic bass and drums; a full orchestra including string, wind and brass instruments performing an excerpt from Haydn’s Symphony No. 82; and a “vocalese” sample consisting of a female jazz vocalist singing nonsense syllables without accompaniment from other instruments. All music samples were 7 seconds in duration. Long-term spectra of the music samples showed that they all had less high-frequency energy than the long-term spectrum of speech, with the vocalese and jazz samples having a steeper downward slope to their spectra than the Haydn sample, which was mildly sloping through almost 5000 Hz.

Music samples were presented in 100 signal processing conditions: 32 separate conditions of noise or nonlinear processing (e.g., speech babble, speech-shaped noise, compression, peak clipping), 32 conditions of linear filtering (e.g., high, low and bandpass filters, various positive and negative spectral tilts) and 36 combination conditions. Additionally, listeners were presented with a reference condition of “clean”, unprocessed music in each genre. Listeners were asked to judge the quality of the music samples on a scale from 1 (bad) to 5 (excellent). They listened to and made judgments on the full stimulus set twice.

The music samples were presented under headphones. Normal-hearing listeners heard stimuli at a level of 72 dB SPL, whereas the hearing-impaired listeners heard stimuli as amplified according to the NAL-R linear prescription, to ensure audibility (Byrne & Dillon, 1986). The NAL-R linear prescription was intentionally selected to avoid confounding effects of wide dynamic range compression, which could further distort the stimuli and mask the effects of the variables under study.

Both subject groups rated the clean, unprocessed music samples highly. Overall, hearing loss did not significantly affect the music quality ratings and general outcomes were similar between the two subject groups. Average music quality ratings were much higher for the linear processing conditions than for the nonlinear processing conditions. Most noise and nonlinear processing conditions were rated as significantly poorer than the clean samples, whereas many linear conditions were rated as similar in quality to the clean samples. Compression, 7-bit quantization, and spectral subtraction plus speech babble were the only nonlinear conditions that did not differ significantly from the clean music samples.

The genre of music was a significant factor in the quality ratings, but the effects were complex, and some processing types affected one music genre more than others. For instance, hearing-impaired listeners judged vocalese samples processed with compression as similar to clean samples, whereas vocalese processed with a negative spectral tilt was judged as having much poorer quality. In contrast, hearing-impaired listeners showed larger mean differences between ratings of clean and compressed samples for Haydn and jazz than for vocalese, indicating that compression had more of a negative effect on the classical and jazz samples than on the vocalese sample.

The outcomes of this study indicate that normal-hearing and hearing-impaired listeners judged the effects of noise, nonlinear and linear processing on the quality of music samples in a similar way and that noise and nonlinear processing had significantly more negative impact on music quality than linear processing did. The effects of the different types of processing on the three music genres were complex, and it was clear that different types of music are affected in different ways. Interestingly, these diverse effects were noted even though the music samples in this study were all acoustic samples, with no electronic or amplified instruments included. The fact that quality judgments of three acoustic genres were affected in different ways by nonlinear signal manipulation implies that the quality of pop, rock and other genres that use amplified and electronic instruments may also be affected in different and unique ways.

Hearing aid manufacturers have begun to offer automatic and manual program options with settings that have been optimized for music listening, though many clinicians may still be faced with the task of customizing programs for their clients who are musicians or music enthusiasts.  To complicate matters, the outcomes of this study demonstrate that the optimal signal processing parameters for one genre might not be best for another. In addition, individual preferences could be affected by hearing thresholds and audiometric slopes, though in this study, the hearing-impaired and normal-hearing listeners demonstrated similar preferences and quality judgments, independent of hearing status.

Clearly, more study is needed in this area, but hearing care professionals can safely draw a few general conclusions about appropriate settings for music listening programs. Music spectra contain more low-frequency energy on average than speech spectra, so a flatter or slightly negatively-sloping frequency response with more low- and mid-frequency emphasis is probably desirable. As such, music programs may require different compression ratios, compression thresholds and release times than would be prescribed for speech listening, and other special signal processing features like noise reduction, frequency lowering and fast-acting compression for impulse sounds may need to be reduced or turned off in a music program. These factors combine to suggest a much different prescriptive rationale for music listening than would be required for daily use; a rough sketch of these adjustments follows.
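As a loose illustration only, and not a validated prescription, the sketch below collects these adjustments into a hypothetical music-program configuration. Every parameter name and value here is invented; actual controls and ranges vary by manufacturer and should be verified in the fitting software.

```python
# Hypothetical music-program adjustments, relative to a speech-optimized
# program. All names and values are illustrative, not manufacturer settings.
music_program_adjustments = {
    "frequency_response": "flatter slope, added low/mid-frequency emphasis",
    "compression_ratios": "lower (more linear) than the speech program",
    "compression_thresholds": "raised, so compression engages less often",
    "release_times": "longer (slow-acting) to preserve musical dynamics",
    "noise_reduction": "off or minimal",
    "frequency_lowering": "off",
    "impulse_sound_suppression": "off or minimal",
}

for parameter, setting in music_program_adjustments.items():
    print(f"{parameter}: {setting}")
```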

 

References

Arehart, K., Kates, J. & Anderson, M. (2011). Effects of Noise, Nonlinear Processing and Linear Filtering on Perceived Music Quality, International Journal of Audiology, 50, 177-190.

Byrne, D. & Dillon, H. (1986). The National Acoustic Laboratories (NAL) new procedure for selecting the gain and frequency response of a hearing aid. Ear and Hearing 7, 257-265.

Colucci, D. (2013). Aided music mapping for musicians: back to basics. The Hearing Journal 66(10), 40.

Davies-Venn, E., Souza, P. & Fabry, D. (2007). Speech and music quality ratings for linear and nonlinear hearing aid circuitry. Journal of the American Academy of Audiology 18, 688-699.

Hansen, M. (2002). Effects of multi-channel compression time constants on subjectively perceived sound quality and speech intelligibility. Ear and Hearing 23, 369-380.

Hawkins, D. & Naidoo, S. (1993). Comparison of sound quality and clarity with asymmetrical peak clipping and output limiting compression. Journal of the American Academy of Audiology 4, 221-8.

Kochkin, S. (2010). MarkeTrak VIII: Customer satisfaction with hearing aids is slowly increasing. Hearing Journal 63, 11-19.

Ricketts, T., Dittberner, A. & Johnson, E. (2008). High-frequency amplification and sound quality in listeners with normal through moderate hearing loss. Journal of Speech, Language and Hearing Research 51, 1328-1340.

Tan, C. & Moore, B. (2008). Perception of nonlinear distortion by hearing impaired people. International Journal of Audiology 47, 246-256.

Van Buuren, R., Festen, J. & Houtgast, T. (1999). Compression and expansion of the temporal envelope: Evaluation of speech intelligibility and sound quality. Journal of the Acoustical Society of America 105, 2903-2913.

Informed Dreaming for a Better Hearing Tomorrow

Part of my day job is to dream – not to daydream but to dream in a disciplined and focused way! I call this informed dreaming, and I believe it is essential for some of the other parts of my job. Because what I do is invent the future. Not the whole future – just a little slice. But this is a very important slice of the future. As Senior Director of Research at Starkey Hearing Technologies, envisioning the future is an essential part of designing the listening technology for tomorrow.

Hearing aids have undergone amazing changes over the last couple of decades. The move to digital ushered in a new age, enabling technologies such as multiband compression, feedback cancellation, noise reduction, speech enhancement, environment classification and a host of other signal processing advances that have significantly extended listening capability.

Wireless was the next major stepping stone, allowing direct communication and control from smart phones, the development of enhanced directional technologies, binaural linking and preservation of spatial cues, and new forms of noise reduction. The well is far from dry.

But what’s the next big step? Good research — research that takes the solutions to the next level and has a time horizon beyond the immediate capabilities of current platforms and technologies. Ten or even five years out, we have to imagine the capabilities of the technological environments in which our new devices will land. This is where the informed dreaming comes in. Predicting the future is a perilous business but an essential component of the sorts of applied research that we do at Starkey.

So what might this future world look like? The Greek philosopher Heraclitus (also known as “The Obscure” or the “weeping philosopher”) wrote that the only constant was change – quoted by Plato as saying that “you could not step twice into the same river.” Heraclitus could have never imagined how fast that river could flow – a torrent, a rapid that sweeps all before it! Today, the landscape, the very course of the river changes before our eyes.

What we do in research now is based on the science and research of millions of scientists across the world. One estimate of the size of today’s scientific knowledge is the number of peer-reviewed articles, which according to the influential scientific journal Nature totalled 1.8 million last year, published across 28,000 scientific journals. More to the point, this number is increasing at a compound growth rate of 9 percent a year, meaning that scientific knowledge is doubling roughly every nine years! It shouldn’t surprise us then that in 10 years’ time, like Dorothy, we might suspect that “Toto, I’ve a feeling we’re not in Kansas anymore.”

Over the next few weeks this blog will explore technology and social changes that are extremely relevant to our mission to transform the lives of millions of people whose hearing is challenged. Beethoven, the musical genius who bridged the Classical and Romantic periods of western music, wrote to his brothers at the onset of his own deafness. For him it was the crippling social impairment, the loss of his ability to communicate with those he cared for and loved, that drove him to contemplate suicide. It wasn’t his inability to hear the notes of the piano that made him most desperate (although he lamented this keenly). The great insult to his life was the social isolation that deafness forced upon him. He could still hear his music in his mind; the rest he could only guess at. Fortunately for us, he chose a more philosophical route. In 1802, he wrote:

“Forced already in my 28th year to become a philosopher, O it is not easy, less easy for the artist than for anyone else – Divine One thou lookest into my inmost soul, thou knowest it, thou knowest that love of man and desire to do good live therein.”

His brothers (Carl and Johann) never received his letter – it was found amongst his papers after his death, but it is a most poignant statement of the catastrophe that hearing impairment visits upon all humankind.

It is critical that we understand the possibilities that the raging river of scientific discovery can provide to remove this veil of isolation, this inability to communicate that forces itself upon otherwise engaged and productive individuals.

Over the next few weeks, this blog will introduce us to the Internet of Things – a near-future state in which the things of the world are not only connected and communicating but also include a huge range of sensors and data-gathering devices that provide a rich and detailed real-time picture of the world. This blog will touch on Big Data, the Semantic Web, Artificial Intelligence and Super Intelligence. We are already immersed in some of this, and the only question is not “if” but “when.” Wearables and hearables, biosensors that touch the skin or dwell beneath it, tattoos that transmit, jewellery that knows the focus of the mind’s eye and much more!

My challenge, and the challenge of my team, is to understand how we can leverage these technologies and this tumultuous torrent of scientific discovery to improve the lives of millions.

 

Considerations for Music Listening

Croghan, N., Arehart, K. & Kates, J.  (2014). Music preferences with hearing aids: effects of signal properties, compression settings and listener characteristics. Ear & Hearing, in press.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

For the hearing aid wearer, speech is arguably the most important sound, but hearing aid satisfaction is also affected by the way in which the devices process other environmental sounds, including music. As hearing aids are adopted by active, technology-savvy users, their ability to process music with optimal sound quality and minimal distortion is more important than ever. Though modern hearing aids do an effective job of processing speech, even in the presence of competing noise, many hearing aid users report that hearing aids either make no difference or make music less enjoyable (Leek et al., 2008).

Music and speech have different spectral and temporal characteristics, with music often being higher in intensity and more dynamic than speech (Chasin, 2003, 2006, 2010). Speech maintains somewhat similar and predictable acoustic characteristics across talkers; in contrast, the spectral and temporal characteristics of music vary widely from one instrument to another and one piece to another (Chasin & Russo, 2004). Not surprisingly, some studies have indicated that the best hearing aid circuit characteristics and settings for speech recognition may not be optimal for music perception (Higgins et al., 2012; van Buuren et al., 1999). For instance, faster compression time constants may be helpful for restoring speech audibility and loudness perception (Moore, 2008), but listeners may prefer longer release times for listening to music (Hansen, 2002; Moore, 2011).

Recorded music heard by hearing-aid users is subject to two stages of compression: compression limiting during the studio recording and wide dynamic range compression (WDRC) in the hearing aid. Processing at both of these stages could impact music sound quality and subsequent enjoyment by the listener. Croghan and colleagues investigated the acoustic and perceptual effects of compression on music processed through hearing aids. They examined the effect of compression limiting prior to hearing aid processing and compared slow versus fast hearing aid compression time constants as well as small versus large numbers of channels. In addition to these compression variables, they examined potential effects of suprathreshold processing and prior musical training.

Eighteen hearing aid users, ranging from 49 to 87 years of age, participated in the study. Subjects were divided into non-musician and musician groups. Two pieces of music, one classical and one rock, were used in the study. The pieces were selected to be relatively unfamiliar to the subjects, to reduce any effect of prior experience or expectations. To simulate studio processing, the music samples were recorded in three compression limiting conditions: no compression, mild compression limiting, and heavy compression limiting.  These compression conditions were applied to the music samples prior to hearing aid processing.

Music was presented over binaural headphones via a simulated hearing aid. Individual WDRC hearing aid simulations were programmed according to NAL-NL1 formulae for each subject. Stimuli were processed with two compression release times and two numbers of processing channels: fast (50 msec) vs. slow (1000 msec) release times and 3 vs. 18 channels. Two linear conditions were also included, using the NAL-R prescription with 3 channels and 18 channels for frequency shaping. The combination of 3 compression limiting conditions with 4 WDRC and 2 linear conditions resulted in 18 processing conditions for each piece of music. Stimuli were presented at 65 dB SPL and subjects made preference judgments in a 2-interval forced-choice paradigm.
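To build intuition for the fast-versus-slow release comparison, here is a minimal single-band WDRC sketch in Python. It is a simplification under stated assumptions: a one-pole envelope follower, an arbitrary dB reference, and invented threshold and ratio values. The study’s actual simulations were 3- and 18-channel systems fitted to NAL-NL1 targets, which this sketch does not reproduce.

```python
# Minimal single-band WDRC sketch; parameter values are illustrative only.
import numpy as np

def wdrc(signal, fs, threshold_db=45.0, ratio=2.0,
         attack_ms=5.0, release_ms=50.0):
    """Compress `signal`; release_ms=50 mimics the study's fast setting,
    release_ms=1000 the slow setting."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))   # attack smoothing
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))  # release smoothing
    env = np.zeros_like(signal)
    level = 1e-6
    for i, x in enumerate(signal):  # one-pole envelope follower
        mag = abs(x)
        coeff = a_att if mag > level else a_rel
        level = coeff * level + (1.0 - coeff) * mag
        env[i] = level
    env_db = 20.0 * np.log10(np.maximum(env, 1e-6)) + 94.0  # crude dB ref
    # Above threshold, output level grows at 1/ratio the input rate
    gain_db = np.where(env_db > threshold_db,
                       (threshold_db - env_db) * (1.0 - 1.0 / ratio), 0.0)
    return signal * 10.0 ** (gain_db / 20.0)

fs = 16000
t = np.arange(fs) / fs
excerpt = np.sin(2 * np.pi * 440 * t) * (1.0 - 0.95 * (t > 0.5))  # level drop
fast = wdrc(excerpt, fs, release_ms=50.0)
slow = wdrc(excerpt, fs, release_ms=1000.0)
```

After the drop in input level at 0.5 s, the fast release removes the attenuation applied to the loud passage within tens of milliseconds, boosting the soft passage and shrinking the output dynamic range; the slow release changes gain over roughly a second, preserving more of the musical level contrast.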

To examine the effect of suprathreshold processing, three psychophysical tests were administered. Loudness perception was measured with the Contour Test of Loudness Perception (Cox et al., 1997), amplitude modulation depth discrimination was measured using speech-shaped noise modulated at 4 Hz, and frequency selectivity was measured with psychophysical tuning curves (Sek et al., 2005; Sek & Moore, 2011). The music stimuli were also analyzed with a modification of the Hearing Aid Speech Quality Index (HASQI; Kates & Arehart, 2010). Roughly stated, the HASQI provides an objective sound quality rating by comparing the time-frequency modulation and long-term spectrum of an unmodified signal to those of a modified one (the modified signal being one with the targeted signal processing applied). Not surprisingly, the lowest HASQI values, indicating the greatest difference between unprocessed and processed stimuli, were observed for fast WDRC combined with heavy compression limiting.
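The published HASQI is built on an auditory model with a fitted combination of envelope and spectral terms; the toy index below merely illustrates the comparison idea described above. Every function name and constant here is invented for the sketch.

```python
# Toy quality index in the spirit of HASQI: correlate temporal envelopes
# (sensitive to compression and noise) and penalize long-term spectral
# differences (sensitive to linear filtering). Not the published HASQI.
import numpy as np
from numpy.fft import rfft

def toy_quality_index(clean, processed, fs, frame_ms=8.0):
    frame = int(fs * frame_ms / 1000.0)
    n = min(len(clean), len(processed)) // frame * frame
    c = clean[:n].reshape(-1, frame)
    p = processed[:n].reshape(-1, frame)

    # Per-frame RMS envelopes and their correlation
    env_c = np.sqrt((c ** 2).mean(axis=1))
    env_p = np.sqrt((p ** 2).mean(axis=1))
    env_corr = np.corrcoef(env_c, env_p)[0, 1]

    # Long-term spectral difference in dB
    spec_c = 20 * np.log10(np.abs(rfft(clean[:n])) + 1e-9)
    spec_p = 20 * np.log10(np.abs(rfft(processed[:n])) + 1e-9)
    spec_rmse = np.sqrt(((spec_c - spec_p) ** 2).mean())

    # 1.0 for identical signals, approaching 0 with heavier degradation
    return max(0.0, env_corr) * np.exp(-spec_rmse / 20.0)
```

Identical inputs score 1.0, and heavier compression or filtering pulls the index toward 0, mirroring the direction (though not the scale or validity) of the HASQI results described above.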

The 18 stimulus conditions were examined for the effect of compression on the overall dynamic range, amplitude by frequency and modulation of the music samples.  Generally any increase in processing – increasing compression limiting, increasing the number of channels, going from linear to slow WDRC or slow to fast WDRC – reduced the dynamic range of the classical and rock music samples. WDRC caused more dynamic range reduction in the high frequencies. Compression limiting affected classical music similarly across frequencies, whereas rock music was affected more in the high and low-frequency regions than in the mid-frequencies. Compression – either WDRC or limiting – reduced the magnitude of modulation, likely making the rhythmic structure of the music less distinguishable.

Listener preference results for compression limiting and WDRC indicated some differences based on the type of music that was presented. For classical music, there was no significant difference between slow WDRC and linear processing, but both were preferred over fast WDRC. Mild or no compression limiting was significantly preferred over heavy compression limiting. There was no effect of the number of channels on classical music preferences. Slightly different results were obtained for rock music: linear processing was preferred over both WDRC conditions, and slow WDRC was significantly preferred over fast WDRC. There was no significant effect of compression limiting, but the 3-channel condition was rated significantly better than the 18-channel condition.

The following listener-related factors were examined for their effects on preference: gender, musician vs. non-musician, PTA, dynamic range, tuning curve bandwidth and modulation depth discrimination threshold. Because PTA and dynamic range were strongly correlated to each other, these factors were excluded from the analysis.  For classical music, the only significant findings were interactions among tuning curve bandwidth, WDRC condition and number of processing channels. Listeners with broader tuning curves showed a slight preference for linear amplification over WDRC and 3-channel over 18-channel processing. In contrast, the group with narrower tuning curves had a slight preference for slow WDRC and 18-channel processing. There were no other significant findings for listener-related factors. The authors posit that listeners with better frequency resolution may have preferred slow WDRC and 18-channel processing because they were able to resolve the harmonics and benefit from greater audibility. Conversely, listeners with poorer frequency resolution may have responded to reduced distortion in the linear and 3-channel conditions, despite potentially reduced audibility.

In this study, Croghan and her colleagues found that for music stimuli, compression limiting and WDRC reduced temporal envelope contrasts. These results are in agreement with previous studies using speech stimuli (Bor et al., 2008; Jenstad & Souza, 2005). They also found that compression limiting was more likely than WDRC to reduce the peaks of the modulation spectrum. This is somewhat in agreement with a previous report by Souza & Gallun (2010) on consonant discrimination, in which hearing aid compression limiting had an adverse effect but multi-channel, fast WDRC was beneficial.  However, the authors point out that hearing aid compression limiting is different from music industry compression limiting in that the former compresses only high-level sounds and does not affect average (RMS) sound level.

The results of this study indicate that music was adversely affected by compression limiting and WDRC and that, in general, listeners preferred listening to music with little or no compression. Listeners with broad psychophysical tuning curves showed a preference for 3-channel processing, whereas those with narrower tuning curves preferred 18-channel processing. This may be related to the ability of those with narrower tuning curves to perceive harmonics, especially in the classical piece, as harmonic structure is related to the perceived quality of stringed instruments (Chasin & Russo, 2004). This result is similar to a report using speech stimuli by Souza et al. (2012), in which listeners with better frequency resolution were more able to benefit from multi-channel compression. More research is needed to illuminate the relationship between suprathreshold processing and music perception. Traditional measurement of psychophysical tuning curves is an unwieldy proposition for clinicians, but hearing aid users with impaired frequency resolution may require a modified treatment approach.

In contrast to previous studies in which musicians outperformed non-musicians on tests of frequency discrimination, speech discrimination and working memory (Parbery-Clark et al., 2009; 2012), Croghan and her colleagues found no significant difference in the psychophysical tests or preference ratings for musicians versus non-musicians. They point out, however, that their study used recorded music samples and preferences for live music cannot be extrapolated from their results. Clinicians should expect musicians to be analytical about the sound quality of their hearing aids and be prepared to offer a separate, manually accessible program for music listening. Similarly, many non-musicians are music aficionados who would also appreciate an alternate program for music. In many hearing instruments, alternate music programs can be added using defaults available in manufacturer software, or can be individually customized. In music listening programs, special features like automatic directionality and noise reduction should be disabled and, based on Croghan’s report, processing should probably be more linear, with less compression, than in primary, everyday listening programs.

Croghan’s study provides insight into the ways in which hearing aid signal processing affects music acoustics and perception. Our current knowledge of music acoustics and hearing aid signal processing may be more meaningfully applied to the technical design of hearing aids than to routine clinical practice. While opportunities remain for meaningful advancement in the processing of music through hearing aids, some clinical advice can be offered:

  • Musicians or individuals with strong musical interests may benefit from a dedicated memory, optimized for music listening.
  • Optimization of a dedicated music listening memory is best attempted following a patient’s initial adaptation period to new hearing aids.
  • Follow-up visits addressing music perception and sound quality should include multiple music samples of the patient’s own selection.
  • Using default settings for music listening, the patient should be prompted to set the playback loudspeakers to a level they find pleasing for a given music sample.

Although the preferences of each patient are different, these suggestions are a solid foundation for providing patients with a high-quality music listening experience.

 

References

Chasin, M. (2003). Music and hearing aids. Hearing Journal 56, 36-41.

Chasin, M. (2006). Hearing aids for musicians. Hearing Review 13, 11-16.

Chasin, M. (2010). Amplification fit for music lovers. Hearing Journal 63, 27-30.

Chasin, M. & Russo, F. (2004). Hearing aids and music. Trends in Amplification 8, 35-47.

Cox, R., Alexander, G. & Taylor, I. (1997). The contour test of loudness perception. Ear and Hearing 18, 388-400.

Croghan, N., Arehart, K. & Kates, J. (2014). Music preferences with hearing aids: effects of signal properties, compression settings and listener characteristics. Ear & Hearing, in press.

Hansen, M. (2002). Effects of multi-channel compression time constants on subjectively perceived sound quality and speech intelligibility. Ear and Hearing 23, 369-380.

Higgins, P., Searchfield, G. & Coad, G. (2012). A comparison between the first-fit settings of two multichannel digital signal-processing strategies: music quality ratings and speech-in-noise scores. American Journal of Audiology 21, 13-21.

Leek, M., Molis, M. & Kubli, L. (2008).  Enjoyment of music by elderly hearing-impaired listeners. Journal of the American Academy of Audiology 19, 519-526.

Moore, B. (2008). The choice of compression speed in hearing aids: Theoretical and practical considerations and the role of individual differences. Trends in Amplification 12, 103-112.

Moore, B., Fullgrabe, C. & Stone, M. (2011).  Determination of preferred parameters for multichannel compression using individually fitted simulated hearing aids and paired comparisons. Ear and Hearing 32, 556-568.

Moore, B. & Glasberg, B. (1997) A model of loudness perception applied to cochlear hearing loss. Auditory Neuroscience 3, 289-311.

Neumann, A., Bakke, M. & Hellman, S. (1995a). Preferred listening levels for linear and slow-acting compression hearing aids. Ear and Hearing 16, 407-416.

Parbery-Clark, A., Skoe, E. & Lam, C. (2009). Musician enhancement for speech-in-noise. Ear and Hearing 30, 653-661.

Parbery-Clark, A., Tierney, A. & Strait, D. (2012). Musicians have fine-tuned neural distinction of speech syllables. Neuroscience 219, 111-119.

Sek, A., Alcantara, J. & Moore, B. (2005). Development of a fast method for determining psychophysical tuning curves. International Journal of Audiology 44, 408-420.

Sek, A. & Moore, B. (2011). Implementation of a fast method for measuring psychophysical tuning curves. International Journal of Audiology 50, 237-242.

van Buuren, R., Festen, J. & Houtgast, T. (1999).  Compression and expansion of the temporal envelope: Evaluation of speech intelligibility and sound quality. Journal of the Acoustical Society of America 105, 2903-2913.

A Pediatric Prescription for Listening in Noise

Crukley, J. & Scollie, S. (2012). Children’s speech recognition and loudness perception with the Desired Sensation Level v5 Quiet and Noise prescriptions. American Journal of Audiology 21, 149-162.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Most hearing aid prescription formulas attempt to balance audibility of sound with perception of loudness, while keeping the amplified sound within a patient’s dynamic range (Dillon, 2001; Gagne et al., 1991a; Gagne et al., 1991b; Seewald et al., 1985). Use of a prescriptively appropriate hearing aid fitting is particularly important for children with hearing loss: to support language development, they require a higher proportion of audible sound and a broader bandwidth than diagnostically similar older children and adults (Pittman & Stelmachowicz, 2000; Stelmachowicz et al., 2000; Stelmachowicz et al., 2001; Stelmachowicz et al., 2004; Stelmachowicz et al., 2007).

Historically, providing access to speech in quiet has been a primary driver in the development of prescription formulas for hearing aids. However, difficulty understanding speech in noise is one of the primary complaints of all hearing aid users, including children. A series of studies comparing NAL-NL1 and DSL v4.1 fittings examined children’s listening needs and preferences (Ching et al., 2010; Scollie et al., 2010) and identified two distinct listening categories: loud, noisy and reverberant environments, and quiet or low-level listening situations. The investigators found that children preferred the DSL fitting in quiet conditions but preferred the NAL fitting for louder sounds and when listening in noisy environments. Examination of the electroacoustic differences between the two fittings showed that the DSL fittings provided more gain overall and approximately 10 dB more low-frequency gain than the NAL-NL1 fittings.

To address the concerns of listening in noisy and reverberant conditions, DSL v5 includes separate prescriptions for quiet and noise. Relative to the formula for quiet conditions, the noise formula prescribes higher compression thresholds, lower overall gain, lower low-frequency gain and more relative gain in the high frequencies. This study by Crukley and Scollie addressed whether use of the DSL v5 Quiet and Noise formulae resulted in differences in consonant recognition in quiet, sentence recognition in noise, and loudness ratings. Because of its lower gain and the potentially reduced audibility that comes with it, the Noise formula was expected to yield lower loudness ratings and lower consonant recognition scores in quiet. No difference was expected for speech recognition in noise, as the noise floor was considered the primary limitation to audibility in noisy conditions.

Eleven children participated in the study; five elementary school children with an average age of 8.85 years and six high school children with an average age of 15.18 years. All subjects were experienced, full-time hearing aid users with congenital, sensorineural hearing losses ranging from moderate to profound. All participants were fitted with behind-the-ear hearing aids programmed with two separate programs: one set to DSL Quiet targets and one set to DSL Noise targets. The Noise targets had, on average, 10 dB lower low-frequency gain and 5 dB lower high-frequency gain relative to the Quiet targets. Testing took place in two classrooms: one at the elementary school and one at the high school.

Consonant recognition in quiet conditions was tested with the University of Western Ontario Distinctive Features Differences Test (UWO-DFD; Cheesman & Jamieson, 1996). Stimuli were presented at 50 dB and 70 dB SPL by a male talker and a female talker. Sentence recognition in noise was assessed with the Bamford-Kowal-Bench Speech in Noise Test (BKB-SIN; Niquette et al., 2003). BKB-SIN results are scored as the SNR at which 50% performance can be achieved (SNR-50).
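As an illustration of the SNR-50 concept only, and not the BKB-SIN’s published scoring rule, the sketch below estimates the 50%-correct point by linear interpolation over a made-up performance-by-SNR sweep.

```python
# Estimate SNR-50 by linear interpolation; all data below are invented.
import numpy as np

def snr_50(snrs_db, percent_correct):
    """Return the SNR at which performance crosses 50% correct."""
    snrs = np.asarray(snrs_db, dtype=float)
    pc = np.asarray(percent_correct, dtype=float)
    order = np.argsort(snrs)                  # sort by ascending SNR
    snrs, pc = snrs[order], pc[order]
    return float(np.interp(50.0, pc, snrs))   # assumes pc rises with SNR

snrs = [-6, -3, 0, 3, 6, 9, 12]               # 3 dB steps
scores = [5, 15, 35, 60, 80, 92, 98]          # percent key words correct
print(f"SNR-50 = {snr_50(snrs, scores):.1f} dB")  # 1.8 dB, between 0 and 3
```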

Loudness testing was conducted with the Contour Test of Loudness Perception (Cox et al., 1997; Cox & Gray, 2001), using BKB sentences presented in ascending then descending steps of 4 dB from 52 to 80 dB SPL. Subjects rated their perceived loudness on an 8-point scale ranging from “didn’t hear it” up to “uncomfortably loud” and indicated their response on a computer screen. Younger children were assisted by a researcher, who presented the loudness ratings on a piece of paper and entered the responses.

The hypotheses outlined above were generally supported by the results of the study. Consonant recognition scores in quiet were better at 70 dB than at 50 dB SPL for both prescriptions and there was no significant difference between the Quiet and Noise fittings. There was, however, a significant interaction between prescription and presentation level: performance with the Quiet fittings was consistent at the two levels but was lower at 50 dB than at 70 dB with the Noise fittings. The change in score from Quiet to Noise at 50 dB was 4.2% on average, indicating that reduced audibility in the Noise fitting may have adversely affected scores at the lower presentation level. On the sentence recognition in noise test, BKB-SIN scores did not differ significantly between the Quiet and Noise prescriptions, with some subjects scoring better in the Quiet program, some scoring better in the Noise program and most not demonstrating any significant difference between the two. Loudness ratings were lower on average for the Noise prescription. When ratings for 52-68 dB SPL and 72-80 dB SPL were analyzed separately, there was no difference between the Quiet and Noise prescriptions at the lower levels, but at 72 dB and above the Noise prescription yielded significantly lower loudness ratings.

Although the average consonant recognition scores for the Noise prescription were only slightly lower than those for the Quiet prescription, it may not be advisable to use the Noise prescription as the primary program for regular daily use, because of the risk of reduced audibility. This is especially true for pediatric hearing aid patients, for whom maximal audibility is essential for speech and language development. Rather, the Noise prescription is better used as an alternate program, to be accessed manually by the patient, teacher or caregiver, or via automatic classification algorithms within the hearing aid. Though the Noise prescription did not improve speech recognition in noise, it did not result in a decrement in performance and it yielded lower loudness ratings, suggesting that in real-world situations it would improve comfort in noise while still maintaining adequate speech intelligibility.

Many audiologists find that patients prefer a primary program set to a prescriptive formula (DSL v5, NAL-NL2 or proprietary targets) for daily use but appreciate a separate, manually accessible noise program with reduced low-frequency gain and increased noise reduction. This is true even for the majority of patients who have automatically switching primary programs, with built-in noise modes. Anecdotal remarks from adult patients using manually accessible noise programs agree with the findings of the present study, in that most people use them for comfort in noisy conditions and find that they are still able to enjoy conversation.

For the pediatric patient, prescription of environment-specific memories should be done on a case-by-case basis. Teenage patients might be capable of managing manual selection of a noise program in appropriate conditions; functionally younger patients will require assistance from a supervising adult. Personalized, written instructions will help adult caregivers understand which listening conditions may be uncomfortable and what actions should be taken to adjust the hearing aids. Most modern hearing aids feature some form of automatic environmental classification, ambient noise level estimation being one of the more robust classifications. Automatic classification and switching may be sufficient to address concerns of discomfort. However, the details of this behavior vary greatly among hearing aids, so it is essential that the prescribing audiologist is aware of any automatic switching behavior and verifies each of the accessible hearing aid memories.

Crukley and Scollie’s study supports the use of separate programs for everyday use and noisy conditions and indicates that children could benefit from this approach. The DSL Quiet and Noise prescriptive targets offer a consistent and verifiable method for this approach with children, while also providing potential guidelines for designing alternate noise programs for use by adults with hearing aids.

 

References

Cheesman, M. & Jamieson, D. (1996). Development, evaluation and scoring of a nonsense word test suitable for use with speakers of Canadian English. Canadian Acoustics 24, 3-11.

Ching, T., Scollie, S., Dillon, H. & Seewald, R. (2010). A crossover, double-blind comparison of the NAL-NL1 and the DSL v4.1 prescriptions for children with mild to moderately severe hearing loss. International Journal of Audiology 49 (Suppl. 1), S4-S15.

Ching, T., Scollie, S., Dillon, H., Seewald, R., Britton, L. & Steinberg, J. (2010). Prescribed real-ear and achieved real life differences in children’s hearing aids adjusted according to the NAL-NL1 and the DSL v4.1 prescriptions. International Journal of Audiology 49 (Suppl. 1), S16-25.

Cox, R., Alexander, G., Taylor, I. & Gray, G. (1997). The contour test of loudness perception. Ear and Hearing 18, 388-400.

Cox, R. & Gray, G. (2001). Verifying loudness perception after hearing aid fitting. American Journal of Audiology 10, 91-98.

Crandell, C. & Smaldino, J. (2000). Classroom acoustics for children with normal hearing and hearing impairment. Language, Speech and Hearing Services in Schools 31, 362-370.

Crukley, J. & Scollie, S. (2012). Children’s speech recognition and loudness perception with the Desired Sensation Level v5 Quiet and Noise prescriptions. American Journal of Audiology 21, 149-162.

Dillon, H. (2001). Prescribing hearing aid performance. Hearing Aids (pp. 234-278). New York, NY: Thieme.

Jenstad, L., Seewald, R., Cornelisse, L. & Shantz, J. (1999). Comparison of linear gain and wide dynamic range compression hearing aid circuits: Aided speech perception measures. Ear and Hearing 20, 117-126.

Niquette, P., Arcaroli, J., Revit, L., Parkinson, A., Staller, S., Skinner, M. & Killion, M. (2003). Development of the BKB-SIN test. Paper presented at the annual meeting of the American Auditory Society, Scottsdale, AZ.

Pittman, A. & Stelmachowicz, P. (2000). Perception of voiceless fricatives by normal hearing and hearing-impaired children and adults. Journal of Speech, Language and Hearing Research 43, 1389-1401.

Scollie, S. (2008). Children’s speech recognition scores: The speech intelligibility index and proficiency factors for age and hearing level. Ear and Hearing 29, 543-556.

Scollie, S., Ching, T., Seewald, R., Dillon, H., Britton, L., Steinberg, J. & Corcoran, J. (2010). Evaluation of the NAL-NL1 and DSL v4.1 prescriptions for children: Preference in real world use. International Journal of Audiology 49 (Suppl. 1), S49-S63.

Scollie, S., Ching, T., Seewald, R., Dillon, H., Britton, L., Steinberg, J. & King, K. (2010). Children’s speech perception and loudness ratings when fitted with hearing aids using the DSL v4.1 and NAL-NL1 prescriptions. International Journal of Audiology 49 (Suppl. 1), S26-S34.

Seewald, R., Ross, M. & Spiro, M. (1985). Selecting amplification characteristics for young hearing-impaired children. Ear and Hearing 6, 48-53.

Stelmachowicz, P., Hoover, B., Lewis, D., Kortekaas, R. & Pittman, A. (2000). The relation between stimulus context, speech audibility and perception for normal hearing and hearing impaired children. Journal of Speech, Language and Hearing Research 43, 902-914.

Stelmachowicz, P., Pittman, A., Hoover, B. & Lewis, D. (2001). Effect of stimulus bandwidth on the perception of /s/ in normal and hearing impaired children and adults. The Journal of the Acoustical Society of America 110, 2183-2190.

Stelmachowicz, P., Pittman, A., Hoover, B. & Lewis, D. (2004). Novel word learning in children with normal hearing and hearing loss. Ear and Hearing 25, 47-56.

Stelmachowicz, P., Pittman, A., Hoover, B., Lewis, D. & Moeller, M. (2004). The importance of high-frequency audibility in the speech and language development of children with hearing loss. Archives of Otolaryngology, Head and Neck Surgery 130, 556-562.

Stelmachowicz, P., Lewis, D., Choi, S. & Hoover, B. (2007).  Effect of stimulus bandwidth on auditory skills in normal hearing and hearing impaired children.  Ear and Hearing 28, 483-494.

On the Prevalence of Cochlear Dead Regions

Pepler, A., Munro, K., Lewis, K. & Kluk, K. (2014). Prevalence of Cochlear Dead Regions in New Referrals and Existing Adult Hearing Aid Users. Ear and Hearing 20(10), 1-11.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Cochlear dead regions are areas in which, due to inner hair cell and/or nerve damage, responses to acoustic stimuli occur not at the area of peak basilar membrane stimulation but at adjacent regions of the cochlea. Professor Brian Moore defined dead regions as a total loss of inner hair cell function across a limited region of the basilar membrane (Moore et al., 1999b). This hair cell loss does not result in an inability to perceive sound in a given frequency range; rather, the sound is perceived via off-place or off-frequency listening, a spread of excitation to adjacent regions of the cochlea where inner hair cells are still functioning (Moore, 2004). Because the response is spread across a broad tonotopic area, individuals with cochlear dead regions may perceive pure tones as “clicks”, “buzzes” or “whooshes”.

Cochlear dead regions are identified and measured by a variety of masking techniques. The most accurate method is the measurement of psychophysical tuning curves (PTCs), originally developed to measure frequency selectivity (Moore & Alcantara, 2001). A PTC plots the masker level required to mask a fixed stimulus as a function of the masker frequency. In a normally hearing ear, the tip of the PTC, the point at which the stimulus can be masked by the lowest-level masker, aligns with the stimulus frequency. In ears with dead regions, the tip of the PTC is shifted away from the signal frequency, indicating that the signal is being detected in an adjacent region. Though PTCs are an effective method of identifying and delineating the edges of cochlear dead regions, they are time consuming and ill-suited to clinical use.

The test used most frequently for clinical identification of cochlear dead regions is the Threshold Equalizing Noise (TEN) test (Moore et al., 2000; 2004). The TEN test was developed with the idea that tones detected by off-frequency listening, in ears with dead regions, should be easier to mask with broadband noise than they would be in ears without dead regions. With the TEN (HL) test, masked thresholds are measured across the range of 500 Hz to 4000 Hz, allowing the approximate identification of a cochlear dead region.
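For illustration, the snippet below encodes the commonly described TEN(HL) decision rule (Moore et al., 2000; 2004): a dead region is suspected at a test frequency when the masked threshold is at least 10 dB above the TEN level and at least 10 dB above the absolute threshold. All threshold values in the example are invented.

```python
# TEN(HL)-style screening rule; threshold values are illustrative only.
def ten_flags(freqs_hz, quiet_db_hl, masked_db_hl, ten_level_db_hl):
    """Return frequencies at which the TEN(HL) dead-region criteria are met."""
    flagged = []
    for f, quiet, masked in zip(freqs_hz, quiet_db_hl, masked_db_hl):
        if masked >= ten_level_db_hl + 10 and masked >= quiet + 10:
            flagged.append(f)
    return flagged

freqs  = [500, 1000, 2000, 3000, 4000]
quiet  = [30, 45, 60, 70, 75]    # audiogram thresholds, dB HL
masked = [72, 74, 75, 85, 88]    # thresholds measured in 70 dB/ERB TEN
print(ten_flags(freqs, quiet, masked, ten_level_db_hl=70))  # [3000, 4000]
```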

There are currently no standards for clinical management of cochlear dead regions. Some reports suggest that dead regions affect speech perception, pitch, loudness perception and general sound quality (Vickers et al., 2001; Baer et al., 2002; Mackersie et al., 2004; Huss & Moore, 2005a; 2005b). Some researchers have specified amplification characteristics to be used with patients with diagnosed dead regions, but there is no consensus and different studies have arrived at conflicting recommendations. While some recommend limiting amplification to a range up to 1.7 times the edge frequency of the dead region (Vickers et al., 2001; Baer et al., 2002), others advise the use of prescribed settings and recommend against limiting high-frequency amplification (Cox et al., 2012). Because of these conflicting recommendations, it remains unclear how clinicians should modify their treatment plans, if at all, for hearing aid patients with dead regions.
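To make the former recommendation concrete with an invented figure: for a high-frequency dead region with an edge frequency of 2000 Hz, amplification would be provided only up to roughly 3400 Hz (1.7 × 2000 Hz).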

Previous research on the prevalence of dead regions has reported widely varying results, possibly due to differences in test methodology or subject characteristics. In a study of hearing aid candidates, Cox et al. (2011) reported a dead region prevalence of 31%, but their strict inclusion criteria likely excluded individuals with milder hearing losses, so their estimate may not generalize to hearing aid candidates at large. Vinay and Moore (2007) reported a higher prevalence of 57% in a study that did include individuals with thresholds down to 15 dB HL at some frequencies; however, the median hearing loss of their subjects was greater than in the Cox et al. study, which likely contributed to the higher prevalence estimate in their subject group.

In the study being reviewed, Pepler and her colleagues aimed to determine how prevalent cochlear dead regions are among a population of individuals who have or are being assessed for hearing aids. Because dead regions become more likely as hearing loss increases, and established hearing aid patients are more likely to have greater degrees of hearing loss, they also investigated whether established hearing aid patients would be more likely to have dead regions than newly referred individuals.  Finally, they studied whether age, gender, hearing thresholds or slope of hearing loss could predict the presence of cochlear dead regions.

The researchers gathered data from a group of 376 patients selected from the database of a hospital audiology clinic in Manchester, UK. Of the original group, 343 individuals met inclusion criteria; 193 were new referrals and 150 were established patients and experienced hearing aid users.  Of the new referrals, 161 individuals were offered and accepted hearing aids, 16 were offered and declined hearing aids and 16 were not offered hearing aids because their losses were of mild degree.  The 161 individuals who were fitted with new hearing aids were referred to as “new” hearing aid users for the purposes of the study. All subjects had normal middle ear function and otoscopic examinations and on average had moderate sensorineural hearing losses.

When reported as a proportion of the total subjects in the study, Pepler and her colleagues found a dead region prevalence of 36%. When reported as the proportion of ears with dead regions, the prevalence was 26%, indicating that some subjects had dead regions in one ear only. Follow-up analysis on 64 patients with unilateral dead regions revealed that the ears with dead regions had significantly poorer audiometric thresholds than the ears without dead regions. Only 3% of study participants had dead regions extending across three or more consecutive test frequencies, and ears with such contiguous dead regions had greater hearing loss than those without. Among new hearing aid users, 33% had dead regions, while the prevalence was 43% among experienced hearing aid users; on average, the experienced hearing aid users had poorer audiometric thresholds than new users.

Pepler and colleagues excluded hearing losses above 85 dB HL because effective TEN masking could not be achieved. Within the measurable range, dead regions were most common in hearing losses from 50 to 85 dB HL, though a few were measured below that range; there were no measurable dead regions for hearing thresholds below 40 dB HL. Ears with steeper audiometric slopes were more likely to have dead regions, but further analysis revealed that only the 4 kHz thresholds made a significant predictive contribution: the slope of high-frequency hearing loss predicted dead regions only because of the increased degree of hearing loss at 4 kHz.

Demographically, more men than women had dead regions in at least one ear, but their audiometric configurations were different: women had poorer low-frequency thresholds whereas men had poorer high-frequency thresholds. It appears that the gender effect was actually due to the difference in audiometric configuration, specifically the men’s poorer high-frequency thresholds. A similar result was reported for the analysis of age effects: older subjects had a higher prevalence of dead regions but also had significantly poorer hearing thresholds. Though poorer hearing thresholds at 4 kHz did slightly increase the likelihood of dead regions, regression analysis found that age and gender were not significant predictors once hearing thresholds were taken into account.

Pepler et al.’s prevalence data agree with the 31% reported by Cox et al. (2011), but are lower than the prevalence reported by Vinay and Moore (2007), possibly because the subjects in the latter study had greater average hearing loss than those in the other studies. However, when Pepler and her colleagues applied inclusion criteria similar to those of the Cox study, they found a prevalence of 59%, much higher than the report by Cox and her colleagues, likely due to the exclusion of subjects with normal low-frequency hearing in the Cox study. The authors proposed that Cox’s exclusion of subjects with normal low-frequency thresholds could have reduced the overall prevalence by increasing the proportion of subjects with metabolic presbyacusis and eliminating some with sensory presbyacusis, which is often associated with steeply sloping hearing loss and involves atrophy of cochlear structures (Schuknecht, 1964).

In summary:

The study reported here shows that roughly a third of established and newly referred hearing aid patients are likely to have at least one cochlear dead region, in at least one ear. A very low proportion (3% reported here) of individuals are likely to have contiguous dead regions spanning three or more test frequencies. The only audiometric factor that predicted the presence of dead regions was the hearing threshold at 4 kHz.

On the lack of clinical guidance:

As more information is gained about prevalence and risk factors, what remains missing are clinical guidelines for the management of hearing aid users with diagnosed high-frequency dead regions. Conflicting recommendations have been proposed, either to limit high-frequency amplification or to preserve it and fit to prescribed targets. The data available today suggest that the prevalence of extensive contiguous dead regions is very low, and that only a further subset of hearing aid users with contiguous dead regions experience any negative effects of high-frequency amplification. With these observations in mind, it seems prudent to adhere to prescribed high-frequency gain targets for all patients at the initial fitting, and to manage any reduction of high-frequency gain in response to subjective feedback from the patient after a trial period with the hearing aids.

On frequency lowering and dead regions:

Some clarity is required regarding the role of frequency lowering in the treatment of cochlear dead regions. Because acoustic information in speech extends out to 10 kHz and most hearing aid frequency responses roll off significantly above 4-5 kHz, mild prescriptions of frequency lowering can benefit many hearing aid users. It must be noted that the benefits of this technology arise largely from the acoustic limitations of the device and not from the presence or absence of a cochlear dead region. There are presently no recommendations for the selection of frequency lowering parameters in cases of cochlear dead regions. In their absence, the best practice for prescribing frequency lowering is the same as for any other patient with hearing loss: verification and validation should be performed to document benefit from the algorithm and to identify appropriate parameter settings.

On the low-frequency dead region: 

The effects of low-frequency dead regions are not well studied but may have a more significant impact on hearing aid performance. Hornsby (2011) summarized potential negative effects of low-frequency amplification when it extends into the range of low-frequency dead regions (Vinay & Moore, 2007; Vinay et al., 2008). In some cases performance decrements reached 30%, so the authors recommended setting the lower limit of amplification at 0.57 times the low-frequency edge frequency of the dead region in order to preserve speech recognition ability. Though dead regions are less common in the low frequencies than in the high frequencies, more study on this topic is needed to determine clinical testing and treatment implications.
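As a worked example with an invented figure: for a low-frequency dead region with an edge frequency of 500 Hz, that rule would place the lower limit of amplification at roughly 285 Hz (0.57 × 500 Hz).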

References

Baer, T., Moore, B. C. and Kluk, K. (2002). Effects of low pass filtering on the intelligibility of speech in noise for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 112(3 Pt 1), 1133-44.

Cox, R., Alexander, G., Johnson, J., Rivera, I. (2011). Cochlear dead regions in typical hearing aid candidates: Prevalence and implications for use of high-frequency speech cues. Ear and Hearing 32(3), 339 – 348.

Cox, R.M., Johnson, J.A. & Alexander, G.C. (2012).  Implications of high-frequency cochlear dead regions for fitting hearing aids to adults with mild to moderately severe hearing loss. Ear and Hearing 33(5), 573-87.

Hornsby, B. (2011). Dead regions and hearing aid fitting. Ask the Experts, Audiology Online, October 3, 2011.

Huss, M. & Moore, B. (2005a). Dead regions and pitch perception. Journal of the Acoustical Society of America 117, 3841-3852.

Huss, M. & Moore, B. (2005b). Dead regions and noisiness of pure tones. International Journal of Audiology 44, 599-611.

Mackersie, C. L., Crocker, T. L. and Davis, R. A. (2004). Limiting high-frequency hearing aid gain in listeners with and without suspected cochlear dead regions. Journal of the American Academy of Audiology 15(7), 498-507.

Moore, B., Huss, M. & Vickers, D. (2000). A test for the diagnosis of dead regions in the cochlea. British Journal of Audiology 34, 205-224.

Moore, B. (2004). Dead regions in the cochlea: Conceptual foundations, diagnosis and clinical applications. Ear and Hearing 25, 98-116.

Moore, B. & Alcantara, J. (2001). The use of psychophysical tuning curves to explore dead regions in the cochlea. Ear and Hearing 22, 268-278.

Moore, B.C., Glasberg, B. & Vickers, D.A. (1999b). Further evaluation of a model of loudness perception applied to cochlear hearing loss. Journal of the Acoustical Society of America 106, 898-907.

Pepler, A., Munro, K., Lewis, K. & Kluk, K. (2014). Prevalence of Cochlear Dead Regions in New Referrals and Existing Adult Hearing Aid Users. Ear and Hearing 20(10), 1-11.

Schuknecht, H.F. (1964). Further observations on the pathology of presbycusis. Archives of Otolaryngology 80, 369-382.

Vickers, D., Moore, B. & Baer, T. (2001). Effects of low-pass filtering on the intelligibility of speech in quiet for people with and without dead regions at high frequencies. Journal of the Acoustical Society of America 110, 1164-1175.

Vinay & Moore, B.C. (2007). Speech recognition as a function of high-pass filter cutoff frequency for people with and without low-frequency cochlear dead regions. Journal of the Acoustical Society of America 122(1), 542-53.

Vinay, Baer, T. & Moore, B.C. (2008). Speech recognition in noise as a function of high-pass filter cutoff frequency for people with and without low-frequency cochlear dead regions. Journal of the Acoustical Society of America 123(2), 606-9.

Patients with higher cognitive function may benefit more from hearing aid features

Ng, E.H.N., Rudner, M., Lunner, T., Pedersen, M.S., & Ronnberg, J. (2013). Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. International Journal of Audiology, Early Online, 1-9.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Research reports as well as clinical observations indicate that competing noise increases the cognitive demands of listening, an effect that is especially impactful for individuals with hearing loss (McCoy et al., 2005; Picou et al., 2013; Rudner et al., 2011). Listening effort is a cognitive dimension of listening that is thought to represent the allocation of cognitive resources needed for speech recognition (Hick & Tharpe, 2002). Working memory is a further dimension of cognition that involves the simultaneous processing and storage of information; its effect on speech processing may vary depending on the listening conditions (Rudner et al., 2011).

The concept of effortful listening can be characterized with the Ease of Language Understanding (ELU) model (Ronnberg, 2003; Ronnberg et al., 2008). In quiet conditions when the speech is audible and clear, the speech input is intact and is automatically and easily matched to stored representations in the lexicon. When speech inputs are weak, distorted or obscured by noise, mismatches may occur and speech inputs may need to be compared to multiple stored representations to arrive at the most likely match. In these conditions, allocation of additional cognitive resources is required. Efficient cognitive functioning and large working memory capacity allow more rapid and successful matches between speech inputs and stored representations. Several studies have indicated a relationship between cognitive ability and speech perception: Humes (2007) found that cognitive function was the best predictor of speech understanding in noise, and Lunner (2003) reported that participants with better working memory capacity and verbal processing speed had better speech perception performance.

Following the ELU model, hearing aids may allow listeners to match inputs and stored representations more successfully, with less explicit processing. Noise reduction, as implemented in hearing aids, has been proposed as a technology that may ease effortful listening. In contrast, however, it has been suggested that hearing aid signal processing may introduce unwanted artifacts or alter the speech inputs so that more explicit processing is required to match them to stored representations (Lunner et al., 2009). If this is the case, hearing aid users with good working memory may function better with amplification because their expanded working memory capacity allows more resources to be applied to the task of matching speech inputs to long-term memory stores.

Elaine Ng and her colleagues investigated the effect of noise and noise reduction on word recall and identification and examined whether individuals were affected by these variables differently based on their working memory capacity. The authors had several hypotheses:

1. Noise would adversely affect memory, with poorer memory performance for speech in noise than in quiet.

2. Memory performance in noise would be at least partially restored by the use of noise reduction.

3. The effect of noise reduction on memory would be greater for items in late list positions because participants were older and therefore likely to have slower memory encoding speeds.

4. Memory in competing speech would be worse than in stationary noise because of the stronger masking effect of competing speech.

5. Overall memory performance would be better for participants with higher working memory capacity in the presence of noise reduction. This effect should be more apparent for late list items presented with competing speech babble.

Twenty-six native Swedish-speaking individuals with moderate to moderately-severe, high-frequency sensorineural hearing loss participated in the authors’ study. Prior to commencement of the study, participants were tested to ensure that they had age-appropriate cognitive performance. A battery of tests was administered and results were comparable to previously reported performance for their age group (Ronnberg, 1990).

Two tests were administered to study participants. First, a reading span test evaluated working memory capacity.  Participants were presented with a total of 24 three-word sentences and sub-lists of 3, 4 and 5 sentences were presented in ascending order. Participants were asked to judge whether the sentences were sensible or nonsense. At the end of each sub-list of sentences, listeners were prompted to recall either the first or final words of each sentence, in the order in which they were presented. Tests were scored as the total number of items correctly recalled.

The second test was a sentence-final word identification and recall (SWIR) test, consisting of 140 everyday sentences from the Swedish Hearing In Noise Test (HINT; Hallgren et al., 2006). This test involved two different tasks. The first was an identification task in which participants were asked to report the final word of each sentence immediately after listening to it. The second was a free recall task; after reporting the final word of the eighth sentence of a list, participants were asked to recall all the words they had previously reported. Three of the seven tested conditions included variations of noise reduction algorithms, ranging from one similar to those implemented in modern hearing aids to an 'ideal' noise reduction algorithm.

Prior to the main analyses of working memory and recall performance, two sets of groups were created based on reading span scores, using two different grouping methods. In the first set, two groups were created by splitting the sample at the median score, so that 13 individuals were in a high reading span group and the remaining 13 were in a low reading span group. In the second set, participants who scored in the mid-range on the reading span test were excluded from the analysis, creating high and low reading span groups of 10 participants each, as illustrated in the sketch below. There was no significant difference between groups based on age, pure tone average or word identification performance in any of the noise conditions. Overall reading span scores for participants in this study were comparable to previously reported results (Lunner, 2003; Foo et al., 2007).
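For readers interested in the mechanics of these grouping methods, the sketch below walks through a median split and a mid-range exclusion on a set of reading span scores. The scores are invented example data, not the study's data; only the group sizes (13 and 10) follow the study's description.

```python
# Hypothetical illustration of the two grouping methods described above:
# (1) a median split and (2) exclusion of mid-range scorers.
# The scores are invented example data; only the group sizes (13 and 10)
# follow the study's description.
import statistics

scores = [8, 9, 10, 11, 11, 12, 13, 13, 14, 15, 15, 16, 17,
          17, 18, 18, 19, 20, 20, 21, 22, 22, 23, 24, 25, 26]
ranked = sorted(scores)

# Method 1: median split into two equal groups of 13
low_group = ranked[:len(ranked) // 2]
high_group = ranked[len(ranked) // 2:]

# Method 2: drop mid-range scorers, keeping the 10 lowest and 10 highest
low_extreme = ranked[:10]
high_extreme = ranked[-10:]

print(statistics.median(scores), len(low_group), len(high_group))
```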

Also prior to the main analysis, the SWIR results were analyzed to compare noise reduction and ideal noise reduction conditions. There was no significant difference between noise reduction and ideal noise reduction conditions in the identification or free recall tasks, nor was there an interaction of noise reduction condition with reading span score. Therefore, only the noise reduction condition was considered in the subsequent analyses.

The relationship between reading span score (representing working memory capacity) and SWIR recall was examined for all the test conditions. Reading span score correlated with overall recall performance in all conditions but one. When recall was analyzed as a function of list position (beginning or final), reading span scores correlated significantly with beginning (primacy) positions in quiet and most noise conditions. There was no significant correlation between overall reading span scores and items in final (recency) position in any of the noise conditions.

There were significant main effects for noise, list position and reading span group. When noise reduction was implemented, the negative effects of noise were lessened. There was a recency effect, in that performance was better for late list positions than for early list positions. Overall, the high reading span groups scored better than the low reading span groups, for both the median-split and mid-range exclusion groupings. The high reading span groups showed improved recall with noise reduction, whereas the low reading span groups exhibited no change in performance with noise reduction versus quiet. The use of four-talker babble had a negative effect on late list positions but did not affect items in other positions, suggesting that four-talker babble disrupted working memory more than steady-state noise. These analyses supported hypotheses 1, 2, 3 and 5, indicating that noise adversely affects memory performance (1), that noise reduction and list position interact with this effect (2, 3), especially for individuals with high working memory capacity (5).

The results also supported hypothesis 4, which suggested that competing speech babble would affect memory performance more than steady state noise. Recall performance was significantly better in the presence of steady-state noise than it was in 4-talker babble. Though there was no significant effect of noise reduction overall, high reading span participants once again outperformed low reading span participants with noise reduction.

In summary, the results of this study determined that noise had an adverse effect on recall, but that this effect was mildly mitigated by the use of noise reduction. Four-talker babble was more disruptive to recall performance than was steady-state noise. Recall performance was better for individuals with higher working memory capacity. These individuals also demonstrated more of a benefit from noise reduction than did those with lower working memory capacity.

Recall performance is better in quiet conditions than in noise, presumably because fewer cognitive resources are required to encode the speech input (Murphy et al., 2000). Ng and her colleagues suggest that noise reduction helps to perceptually segregate speech from noise, allowing the speech input to be matched to stored lexical representations with less cognitive demand. Noise reduction may therefore at least partially reverse the negative effect of noise on working memory.

Competing speech babble is more likely to be cognitively demanding than steady-state noise (such as an air conditioner) because it contains meaningful information that is more distracting and harder to separate from the speech of interest (Sorqvist & Ronnberg, 2012). Not only is the speech signal of interest degraded by the presence of competing sound and therefore harder to encode, but additional cognitive resources are required to inhibit the unwanted or irrelevant linguistic information (Macken et al., 2009). Because competing speech puts more demands on cognitive resources, it is potentially more disruptive than steady-state noise to perception of the speech signal of interest.

Unfortunately, much of the background noise encountered by hearing aid wearers is competing speech. The classic example of the cocktail party illustrates one of the most challenging situations for hearing-impaired individuals, in which they must try to attend to a proximal conversation while ignoring multiple conversations surrounding them. The results of this study suggest that noise reduction may be more useful in these situations for listeners with better working memory capacity; however, noise reduction should still be considered for all hearing aid users, with comprehensive follow-up care to make adjustments for individuals who are not functioning well in noisy conditions. Noise reduction may generally alleviate perceived effort or annoyance, allowing a listener to be more attentive to the speech signal of interest or to remain in a noisy situation that would otherwise be uncomfortable or aggravating.

More research is needed on the effects of noise, noise reduction and advanced signal processing on listening effort and memory in everyday situations. It is likely that performance is affected by numerous variables of the hearing aid, including compression characteristics, directionality, noise reduction, as well as the automatic implementation or adjustment of these features. These variables in turn combine with user-related characteristics such as age, degree of hearing loss, word recognition ability, cognitive capacity and more.

References

Foo, C., Rudner, M., & Ronnberg, J. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology 18, 618-631.

Hallgren, M., Larsby, B. & Arlinger, S. (2006). A Swedish version of the hearing in noise test (HINT) for measurement of speech recognition. International Journal of Audiology 45, 227-237.

Hick, C. B., & Tharpe, A. M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech Language and Hearing Research 45, 573–584.

Humes, L. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology 18, 590-603.

Lunner, T. (2003). Cognitive function in relation to hearing aid use. International Journal of Audiology 42, (Suppl. 1), S49-S58.

Lunner, T., Rudner, M. & Ronnberg, J. (2009). Cognition and hearing aids. Scandinavian Journal of Psychology 50, 395-403.

Macken, W.J., Phelps, F.G. & Jones, D.M. (2009). What causes auditory distraction? Psychonomic Bulletin and Review 16, 139-144.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A, 58, 22-33.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing 34 (5).

Ronnberg, J. (2003). Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: a framework and a model. International Journal of Audiology 42 (Suppl. 1), S68-S76.

Ronnberg, J., Rudner, M. & Foo, C. (2008). Cognition counts: A working memory system for ease of language understanding (ELU). International Journal of Audiology 47 (Suppl. 2), S99-S105.

Rudner, M., Ronnberg, J. & Lunner, T. (2011). Working memory supports listening in noise for persons with hearing impairment. Journal of the American Academy of Audiology 22, 156-167.

Sorqvist, P. & Ronnberg, J. (2012). Episodic long-term memory of spoken discourse masked by speech: What role for working memory capacity? Journal of Speech Language and Hearing Research 55, 210-218.

Hearing Aid Behavior in the Real World

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

Hearing aid signal processing offers proven advantages for many everyday listening situations. Directional microphones improve speech recognition in the presence of competing sounds, and noise reduction decreases the annoyance of surrounding noise while possibly improving ease of listening (Sarampalis et al., 2009). Expansion reduces the annoyance of low-level environmental noise as well as circuit noise from the hearing aid. Modern hearing aids typically offer automatic activation of signal processing features based on information derived through acoustic analysis of the environment. Some features can also be assigned to independent, manually accessible hearing aid memories, allowing patients to make conscious decisions about the acoustic conditions of the environment and access an appropriately optimized memory configuration (Keidser, 1996; Surr et al., 2002).

However, many hearing aid users who need directionality and noise reduction may be unable to manually adjust their hearing aids, due to physical limitations or an inability to determine the optimal setting for a situation. Other users may be reluctant to make manual adjustments for fear of drawing attention to the hearing aids and therefore to the hearing impairment. Cord et al. (2002) reported that as many as 23% of users with manual controls do not use their additional programs and leave the aids in a default mode at all times. Most hearing aids now offer automatic directionality and noise reduction, taking the responsibility for situational adjustments away from the user. This allows more hearing aid users to experience the benefits of advanced signal processing and reduces the need for manual adjustments.

The decision to provide automatic activation of expansion, directionality, and noise reduction is based on their known benefits for particular acoustic conditions, but it is not well understood how these features interact with each other or with changing listening environments in everyday use. This poses a challenge to clinicians at follow-up fine-tuning, because it is impossible to determine which features were activated at any particular moment. Datalogging offers an opportunity to better interpret a patient's experience outside of the clinic or laboratory. Datalogging reports often include average daily or total hours of use as well as the proportion of time an individual has spent in quiet or noisy environments, but these are general summaries that do not link the activation of signal processing features to the acoustic environment present at the time of activation. For example, a clinician may be able to determine that an aid was in a directional mode 20% of the time and that the user spent 26% of their time listening to speech in the presence of noise, but this does not indicate whether directional processing was active during those exposures to speech in noise. The clinician must therefore rely on user reports and observations to determine appropriate adjustments, which may not reliably represent the array of listening experiences and acoustic environments that were encountered (Wagener et al., 2008).

In the study discussed here, Banerjee investigated the implementation of automatic expansion, directionality and noise management features. She measured environmental sound levels to determine the proportion of time individuals spent in quiet and noisy environments, as well as how these input levels related to activation of automatic features. She also examined bilateral agreement across a pair of independently functioning hearing aids to determine the proportion of time that the aids demonstrated similar processing strategies.

Ten subjects with symmetrical, sensorineural hearing loss were fitted with bilateral, behind-the-ear hearing aids. Ages ranged from 49 to 78 years, with a mean of 62.3 years. All of the subjects were experienced hearing aid users. Some subjects were employed and most participated in regular social activities with family and other groups. The hearing aids were 8-channel WDRC instruments programmed to match targets from the manufacturer's proprietary fitting formula. Activation of the automatic directional microphone required input levels of 60dB or above, with noise present in the environment and speech located in front of the wearer. Automatic noise management produced gain reductions in one or more of the 8 channels when noise-like sounds were classified as "wind, mechanical sounds or other sounds" based on their spectral and temporal characteristics. No gain reductions were applied for sounds classified as "speech". Expansion was active for inputs below the compression thresholds, which ranged from 54 to 27dB SPL across channels.
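The activation rules described above amount to a small set of conditional checks. The sketch below paraphrases the published description in code form; it is not the manufacturer's actual classification algorithm, and the function names and sound-class labels are illustrative only.

```python
# Simplified sketch of the automatic feature logic described above.
# This paraphrases the published description and is not the
# manufacturer's algorithm; names and labels are illustrative.

def directional_active(input_level_db: float, noise_present: bool,
                       speech_in_front: bool) -> bool:
    # Directionality required inputs of 60 dB or above, noise in the
    # environment, and speech located in front of the wearer.
    return input_level_db >= 60 and noise_present and speech_in_front

def expansion_active(input_level_db: float,
                     compression_threshold_db: float) -> bool:
    # Expansion applied to inputs below the channel's compression
    # threshold (27 to 54 dB SPL, depending on the channel).
    return input_level_db < compression_threshold_db

def noise_management_reduces_gain(sound_class: str) -> bool:
    # Gain reduction applied to noise-like classes, never to speech.
    return sound_class in {"wind", "mechanical", "other"}
```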

All participants carried a personal digital assistant (PDA) connected to their hearing aids via programming boots. The PDA logged environmental broadband input level as well as the status of expansion, directionality, noise management and channel-specific gain reduction. Participants were asked to wear the hearing aids connected to the PDA for as much of the day as possible, and measurements were made at 5-second intervals to allow time for hearing aid features to update several times between readings. The PDAs were worn with the hearing aids for a period of 4-5 weeks, and at the end of data collection a total of 741 hours of hearing aid use had been logged and studied.

Examination of the input level measurements revealed that subjects spent about half of their time in quiet environments with input levels of 50dB SPL or lower. Less than 5% of their time was spent in environments with input levels exceeding 65dB, and the maximum recorded input level was 105dB SPL. This concurs with previous studies that reported high proportions of time spent in quiet environments such as living rooms or offices (Walden et al., 2004; Wagener et al., 2008). The interaural difference in input level was within 1dB about 50% of the time and exceeded 5dB only 5% of the time. Interaural differences were attributed to head shadow effects and asymmetrical sound sources as well as occasional accidental physical contact with the hearing aids, such as adjusting eyeglasses or rubbing the pinna.

Expansion was analyzed in terms of the proportion of time it was activated and whether the aids were in bilateral agreement. Expansion thresholds are meant to approximate low-level speech presented at 50dB. In this study, expansion was active between 42% and 54% of the time, which is consistent with its intended activation, because input levels were at or below 50dB SPL about half the time. Bilateral agreement was relatively high at 77-81%.
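Bilateral agreement, as used throughout these analyses, is simply the proportion of logged samples in which the two aids report the same feature state. A minimal sketch, assuming paired 5-second samples from the left and right aids; the example data are invented.

```python
# Minimal sketch of a bilateral agreement calculation. It assumes
# paired boolean feature states logged at the same 5-second intervals
# for the left and right aids; the example data are invented.

def bilateral_agreement(left_states, right_states) -> float:
    """Proportion of samples in which both aids report the same state."""
    matches = sum(l == r for l, r in zip(left_states, right_states))
    return matches / len(left_states)

left = [True, True, False, True, False, False]
right = [True, False, False, True, False, True]
print(bilateral_agreement(left, right))  # 4 of 6 samples agree: ~0.67
```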

Directional microphone status was measured according to the proportion of time that directionality was active and whether there was bilateral agreement. Again, directional status was consistent with the broadband input level measurements, in that directionality was active only about 10% of the time. The instruments were designed to switch to directional mode only when input levels were higher than 60dBA, and the broadband input measurements showed that participants encountered inputs higher than 65dB only about 5% of the time. Bilateral agreement for directionality was very high at 97%. Interestingly, the hearing aids were in directional mode only about 50% of the time in the louder environments.  This is likely attributable to the requirement for not only high input levels but also speech located in front of the listener in the presence of surrounding noise. A loud environment alone should not trigger directionality without the presence of speech in front of the listener.

Noise reduction was active 21% of the time, with bilateral agreement of 95%. Again, this corresponds well with the input level measurements, because noise reduction is designed to activate only at levels exceeding 50dB SPL. This does not indicate how often it was activated in the presence of moderate to loud noise, but as input levels rose, gain reductions resulting from noise management steadily increased as well. Gain reduction was 3-5dB greater in channels below 2250Hz than in the high-frequency channels, consistent with the idea that environmental noise contains more energy in the low frequencies. Interaural differences in noise management were very small, with a median difference in gain reduction of 0dB in all channels, exceeding 1dB only 5% of the time.

Bilateral agreement was generally quite high. Conditions in which there was less bilateral agreement may reflect asymmetric sound sources, accidental physical contact with the hearing instruments or true disagreement based on small differences in input levels arriving at the two ears. There may be everyday situations in which hearing aids might not perform in bilateral agreement, but this is not necessarily a disadvantage to the user. For instance, a driver in a car might experience directionality in the left aid but omnidirectional pickup from the right aid. This may be advantageous for the driver if there is another occupant in the passenger’s seat. Similarly, at a restaurant a hearing aid user might experience disproportionate noise or multi-talker babble from one side, depending on where he is situated relative to other people. Omnidirectional pickup on the quieter side of the listener with directionality on the opposite side might be desirable and more conducive to conversation. Similar arguments could be proposed for asymmetrical activation of noise management and its potential effects on comfort and ease of listening in noisy environments.

Banerjee’s investigation is an important step toward understanding how hearing aid signal processing is activated in everyday conditions. Though datalogging helps provide an overall snapshot of usage patterns and listening environments, the gross reporting of data limits its utility for fine-tuning hearing aid parameters. This study, and others like it, will provide useful information for clinicians providing follow-up care to hearing aid users.

It is noteworthy that participants spent about 50% of their time in environments with broadband input levels of 50dB or lower. While some participants were employed and others were not, this distribution likely reflects the acoustic reality of the typical hearing aid wearer. Subsequent studies with targeted samples would help determine how special features apply to the everyday environments of participants who lead more consistently active lifestyles.

Automatic, adaptive signal processing features have potential benefits for many hearing aid users, especially those who are unable to or prefer not to operate manual controls. However, proper recommendations and programming adjustments can only be made if clinicians understand how these features behave in everyday life. This study provides evidence that some features perform as designed and offers insight that clinicians can leverage when fine-tuning instruments based on real-world hearing aid behavior.

References

Banerjee, S. (2011). Hearing aids in the real world: typical automatic behavior of expansion, directionality and noise management. Journal of the American Academy of Audiology 22, 34-48.

Cord, M., Surr, R., Walden, B. & Olsen, L. (2002). Performance of directional microphone hearing aids in everyday life. Journal of the American Academy of Audiology 13, 295-307.

Keidser, G. (1996). Selecting different amplification for different listening conditions. Journal of the American Academy of Audiology 7, 92-104.

Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009). Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research 52, 1230–1240.

Surr, R., Walden, B., Cord, M. & Olsen, L. (2002). Influence of environmental factors on hearing aid microphone preference. Journal of the American Academy of Audiology 13, 308-322.

Wagener, K., Hansen, M. & Ludvigsen, C. (2008). Recording and classification of the acoustic environment of hearing aid users. Journal of the American Academy of Audiology 19, 348-370.

Does lip reading take the effort out of speech understanding?

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing, in press.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

For many people with hearing loss, lip-reading provides valuable visual cues that have been proven to improve speech recognition across a variety of listening conditions (Sumby & Pollack, 1954; Erber, 1975; Grant et al., 1998). To date, it has remained unclear how visual cues, background noise and hearing aid use interact with each other to affect listening effort.

Listening effort is often described as the allocation of additional cognitive resources to the task of understanding speech. If cognitive resources are finite, then two or more simultaneous tasks will compete with each other for those resources, and decrements in performance on one task can be interpreted as an allocation of resources away from that task and toward a concurrent one. Therefore, listening effort is often measured with dual-task paradigms, in which listeners respond to speech stimuli while simultaneously performing another task or responding to another kind of stimulus. Allocation of cognitive resources in this way is thought to represent a competition for working memory resources (Baddeley & Hitch, 1974; Baddeley, 2000).
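In practice, dual-task studies often quantify effort as the slowing of secondary-task responses relative to a single-task baseline. The sketch below shows one common formulation, a proportional dual-task cost; it is a generic convention offered for illustration, not necessarily the metric used in the study reviewed here.

```python
# Generic sketch of a proportional dual-task cost, one common way to
# quantify listening effort from reaction times. This is a textbook
# convention, not necessarily the analysis used in the reviewed study.

def dual_task_cost(rt_dual_ms: float, rt_baseline_ms: float) -> float:
    """Proportional slowing of the secondary task under dual-task load."""
    return (rt_dual_ms - rt_baseline_ms) / rt_baseline_ms

# Example: a 400 ms baseline that slows to 520 ms while also listening
# yields a cost of 0.30, i.e., 30% slowing, interpreted as added effort.
print(dual_task_cost(520, 400))  # 0.3
```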

The Ease of Language Understanding (ELU) model states that the process of understanding language involves matching phonological, syntactic, semantic and prosodic information to stored templates in long-term memory. When there is a mismatch between incoming sensory information and a stored template, additional effort must be exerted to resolve the ambiguity of the message. This additional listening effort taxes working memory resources and may require the listener to allocate fewer resources to other tasks. Several studies have shown that conditions that degrade a speech signal, such as background noise (Murphy et al., 2000; Larsby et al., 2005; Zekveld et al., 2010) and hearing loss (Rabbitt, 1990; McCoy et al., 2005), increase listening effort.

Individuals with reduced working memory capacity may be more negatively affected by conditions that degrade a speech signal. Previous reports have suggested that working memory capacity is related to speech recognition in noise and to performance with hearing aids in noise (Lunner, 2003; Foo et al., 2007). The speed of retrieval from long-term memory may also affect performance and listening effort in adverse listening conditions (Van Rooij et al., 1989; Lunner, 2003). Because sensory inputs decay rapidly (Cowan, 1984), listeners with slow processing speed might not be able to fully process incoming information and match it to long-term memory stores before it decays. Therefore, they would have to allocate more effort and resources to the task of matching sensory input to long-term memory templates.

Just as some listener traits might be expected to increase listening effort, some factors might offset adverse listening conditions by providing more information to support the matching of incoming sensory inputs to long-term memory. The use of visual cues is well known to improve speech recognition performance and some studies indicate that individuals with large working memory capacities are better able to make use of visual cues from lipreading (Picou et al., 2011).  Additionally, listeners who are better lipreaders may require fewer cognitive resources to understand speech, allowing them to make better use of visual cues in noisy environments (Hasher & Zacks, 1979; Picou et al., 2011).

The purpose of Picou, Ricketts and Hornsby’s study was to examine how listening effort is affected by hearing aid use, visual cues and background noise. A secondary goal of the study was to determine how specific listener traits such as verbal processing speed, working memory and lipreading ability would affect the measured changes in listening effort.

Twenty-seven hearing-impaired adults participated in the study. All were experienced hearing aid users and had corrected binocular vision of 20/40 or better. Participants were fitted with bilateral behind-the-ear hearing aids with non-occluding, non-custom eartips. Advanced features such as directionality and noise reduction were turned off, though feedback management was left on in order to maximize usable gain. Hearing aids were programmed with linear settings to eliminate any potential effect of amplitude compression on listening effort, a relationship that has not yet been established.

A dual-task paradigm with a primary speech recognition task and secondary visual reaction time task was used to measure listening effort. The speech recognition task used monosyllabic words spoken by a female talker (Picou, 2011), presented at 65dB in the presence of multi-talker babble. Prior to the speech recognition task, individual SNRs for auditory only (AO) and auditory-visual (AV) conditions were determined at levels that yielded performance between 50-70% correct, because scores in this range are most likely to show changes in listening effort (Gatehouse & Gordon, 1990).

The reaction time task required participants to press a button in response to a rectangular visual probe. The probe was presented prior to the speech tokens so that it would not distract from the use of visual cues during the speech recognition task. The visual and speech stimuli were presented within a narrow enough interval (less than 500 msec) that cognitive resources would have to be expended on both tasks at the same time (Hick & Tharpe, 2002).

Three listener traits were examined with regard to listening effort in quiet and noisy conditions, with and without visual cues. Visual working memory was evaluated with the Automated Operation Span (AOSPAN) test (Unsworth et al., 2005). The AOSPAN requires subjects to solve math equations and memorize letters. After seeing a math equation and identifying the answer, subjects are shown a letter which disappears after 800 msec. Following a series of equations they are then asked to identify the letters that they saw, in the order that they appeared. Scores are based on the number of letters that are recalled correctly.

Verbal processing speed was assessed with a lexical decision task (LDT) in which subjects were presented with a string of letters and were asked to indicate, as quickly as possible, if the letters formed a real word.  The test consisted of 50 common monosyllabic English words and 50 monosyllabic nonwords. The task reflects verbal processing speed because it requires the participant to match the stimuli to representations of familiar words stored in long-term memory (Meyer & Schvaneveldt, 1971; Milberg & Blumstein, 1981; Van Rooij et al., 1989). The reaction time to respond to the stimuli was used as a measure of verbal processing speed.

Finally, lipreading ability was measured with the Revised Shortened Utley Sentence Lipreading Test (ReSULT; Updike, 1989). The test required participants to repeat sentences spoken by a female talker, when the talker’s face was visible but speech was inaudible. Responses were scored based on the number of words repeated correctly in each sentence.

Subjects participated in two test sessions. At the first session, vision and hearing were tested, individual SNR levels were determined for the speech recognition task, and AOSPAN, LDT and ReSULT scores were obtained. At the second session, subjects completed practice sequences with AO and AV stimuli, then the dual speech recognition and visual reaction time tasks were administered in the eight counterbalanced conditions listed below. Due to the number of experimental conditions, only select outcomes of this study will be reviewed.

1. Auditory only in quiet, unaided

2. Auditory only in noise, unaided

3. Auditory-visual in quiet, unaided

4. Auditory-visual in noise, unaided

5. Auditory only in quiet, aided

6. Auditory only in noise, aided

7. Auditory-visual in quiet, aided

8. Auditory-visual in noise, aided

The main analysis showed that background noise impaired performance in all conditions, while hearing aid use and visual cues improved performance. However, there were significant interactions between hearing aid use and visual cues, hearing aids and background noise, and visual cues and background noise, indicating that the effect of hearing aid use depended on test modality (AV or AO) and on background noise (present or absent), and that the effect of visual cues depended on background noise (present or absent). Hearing aid benefit was larger in AO conditions than in AV conditions, and larger in quiet than in noise. The effect of noise was greater in the AV conditions than in the AO conditions, but the authors suggest this could have been related to the individualized SNRs chosen for the test procedure.

On the reaction time task, background noise increased listening effort and hearing aid use reduced listening effort, though there was high variability and the effects of both variables were small. Additional analysis determined that the individual SNRs chosen for the dual task did not affect the hearing aid benefits that were measured. The availability of visual cues did not change overall reaction times, so it was determined that visual cues did not affect listening effort on this reaction time measure.

With regard to listening effort benefits derived from hearing aid use, performance in quiet conditions was strongly related to performance in noise; subjects who obtained benefit from hearing aid use in quiet also obtained benefit in noise. Individuals with slower verbal processing speed were more likely to derive benefit from hearing aid use. With regard to visual cues, there were several correlations with listener traits. Subjects who were better lipreaders derived more benefit from visual cues, and those with smaller working memory capacities also showed more benefit from visual cues. These correlations were significant in quiet and noisy conditions. For quiet conditions, there was a positive correlation between verbal processing speed and benefit from visual cues, with better verbal processors showing more benefit from visual cues. There were no correlations between background noise and any of the measured listener traits.

The overall findings that visual cues and hearing aid use had positive effects and background noise had a negative effect on speech perception performance were not surprising. Similarly, the findings that hearing aid benefit was reduced for AV conditions versus AO conditions and for noisy versus quiet conditions were consistent with previous reports (Cox & Alexander, 1991; Walden et al., 2001; Duquesnoy & Plomp, 1983).  Because hearing aid use improves audibility, visual cues might not have been needed as much as they were in unaided conditions and the presence of noise may have counteracted the improved audibility by masking a portion of the speech cues needed for correct understanding, especially with the omnidirectional, linear instruments used in this study.

The ability of hearing aids to decrease listening effort was significant, in keeping with previously published results, but the improvements were smaller than those reported in some previous studies. This could be related to the non-simultaneous timing of the tasks in the dual-task paradigm, but the authors surmise that it could also be related to the way their subjects’ hearing aids were programmed. In most previous studies, individuals used their own hearing aids, set to individually prescribed and modified settings. In the current study, all participants used the same hearing aid circuit set to linear, unmodified targets. Advanced features like directionality and noise reduction, which are likely to impact listening effort (Sarampalis et al., 2009), speech discrimination ability and perceived ease of listening in everyday situations, were turned off.

There was a significant relationship between verbal processing speed and hearing aid benefit, in that subjects with slower processing speed were more likely to benefit from hearing aid use.  Sensory input decays rapidly and requires additional cognitive effort when it is mismatched with long-term memory stores. Any factor that improves the sensory input may facilitate the matching process. The authors posited that slow verbal processors might benefit more from amplification because hearing aids improved the quality of the sensory input, thereby reducing the cognitive effort and time that would otherwise be required to match the input to long-term memory templates.

On average, the availability of visual cues did not have a significant effect on listening effort. This may be a surprising result given the well-known positive effects of visual cues for speech recognition. However, there was high variability among subjects and it was apparent that better lipreaders were more able to make use of visual cues, especially in quiet conditions without hearing aids. Working memory capacity was negatively correlated with benefit from visual cues, indicating that subjects with better working memory capacity derived less benefit from visual cues on average. The relationship between these variables is unclear, but the authors suggest that individuals with lower working memory capacities may be more susceptible to changes in listening effort and therefore more likely to benefit from additional sensory information such as visual cues.

Understanding how individual traits affect listening effort and susceptibility to noise is important to audiologists for a number of reasons, partly because we often work with older individuals. Working memory declines as a result of the normal aging process, and this decline may begin in middle age (Wang et al., 2011). Similarly, the speed of cognitive processing slows and visual impairment becomes more likely with increasing age (Clay et al., 2009). Many patients seeking audiological care may therefore have deficits in working memory, verbal processing and visual acuity. Though more research is needed to understand how these variables relate to one another, they should be considered in clinical evaluations and hearing aid fittings. Advanced hearing aid features that counteract the degrading effects of noise and reverberation may be particularly important for elderly or visually impaired hearing aid users. The reviewed study suggests that these patients may benefit significantly from face-to-face conversation, slow speaking rates and reduced environmental distractions. Counseling sessions should include discussion of these issues so that patients and family members understand how they can use strategic listening techniques, in addition to hearing aids, to improve speech recognition and reduce cognitive effort.

References

Clay, O., Edwards, J., Ross, L., Okonkwo, O., Wadley, V., Roth, D. & Ball, K. (2009). Visual function and cognitive speed of processing mediate age-related decline in memory span and fluid intelligence. Journal of Aging and Health 21(4), 547-566.

Cox, R.M. & Alexander, G.C. (1991).  Hearing aid benefit in everyday environments. Ear and Hearing 12, 127-139.

Downs, D.W. (1982). Effects of hearing aid use on speech discrimination and listening effort. Journal of Speech and Hearing Disorders 47, 189-193.

Duquesnoy, A.J. & Plomp, R. (1983). The effect of a hearing aid on the speech reception threshold of hearing impaired listeners in quiet and in noise. Journal of the Acoustical Society of America 73, 2166-2173.

Erber, N.P. (1975). Auditory-visual perception of speech. Journal of Speech and Hearing Disorders 40, 481-492.

Foo, C., Rudner, M. & Ronnberg, J. (2007). Recognition of speech in noise with new hearing instrument compression release settings requires explicit cognitive storage and processing capacity. Journal of the American Academy of Audiology 18, 618-631.

Gatehouse, S., Naylor, G. & Elberling, C. (2003). Benefits from hearing aids in relation to the interaction between the user and the environment. International Journal of Audiology 42 Suppl 1, S77-S85.

Gatehouse, S. & Gordon, J. (1990). Response times to speech stimuli as measures of benefit from amplification. British Journal of Audiology 24, 63-68.

Grant, K.W., Walden, B.F. & Seitz, P.F. (1998).  Auditory visual speech recognition by hearing impaired subjects. Consonant recognition, sentence recognition and auditory-visual integration. Journal of the Acoustical Society of America 103, 2677-2690.

Hick, C.B. & Tharpe, A.M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech, Language and Hearing Research 45, 573-584.

Hornsby, B.W.Y. (2013).  The Effects of Hearing Aid Use on Listening Effort and Mental Fatigue Associated with Sustained Speech Processing Demands. Ear and Hearing, in press.

Meyer, D.E. & Schvaneveldt, R.W. (1971). Facilitation in recognizing pairs of words: Evidence of a dependence between retrieval operations. Journal of Experimental Psychology 90, 227-234.

Milberg, W. & Blumstein, S.E. (1981). Lexical decision and aphasia: Evidence for semantic processing. Brain and Language 14, 371-385.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language and Hearing Research 54, 1416-1430.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W.Y. (2013). How hearing aids, background noise and visual cues influence objective listening effort. Ear and Hearing, in press.

Rudner, M., Foo, C. & Ronnberg, J. (2009). Cognition and aided speech recognition in noise: Specific role for cognitive factors following nine week experience with adjusted compression settings in hearing aids. Scandinavian Journal of Psychology 50, 405-418.

Sarampalis, A., Kalluri, S., Edwards, B. & Hafter, E. (2009) Objective measures of listening effort: effects of background noise and noise reduction. Journal of Speech, Language, and Hearing Research 52, 1230–1240.

Sumby, W.H. & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26, 212-215.

Unsworth, N., Heitz, R.P. & Schrock, J.C. (2005). An automated version of the operation span task. Behavior Research Methods 37, 498-505.

Van Rooij, J.C., Plomp, R. & Orlebeke, J.F. (1989).  Auditive and cognitive factors in speech perception by elderly listeners. I: Development of test battery. Journal of the Acoustical Society of America 86, 1294-1309.

Walden, B.F., Grant, K.W. & Cord, M.T. (2001). Effects of amplification and speechreading on consonant recognition by persons with impaired hearing. Ear and Hearing 22, 333-341.

Wang, M., Gamo, N., Yang, Y., Jin, L., Wang, X., Laubach, M., Mazer, J., Lee, D. & Arnsten, A. (2011). Neuronal basis of age-related working memory decline. Nature 476, 210-213.

Can hearing aids reduce listening fatigue?

Hornsby, B.W.Y. (2013). The Effects of Hearing Aid Use on Listening Effort and Mental Fatigue Associated with Sustained Speech Processing Demands. Ear and Hearing, Published Ahead-of-Print.

This editorial discusses the clinical implications of an independent research study and does not represent the opinions of the original authors.

A patient recently told me that he wanted to put on his glasses so he could hear me better.  He was joking, but was correct in understanding that visual cues help facilitate speech understanding. When engaged in conversation, a listener uses many sources of information to supplement the auditory stimulus. Visual cues from lip-reading, gestures and expressions as well as situational cues, conversational context and the listener’s knowledge of grammar all help limit the possible interpretations of the message. Conditions that degrade the auditory stimulus, such as reverberation, background noise and hearing loss cause increased reliance on other cues in order for the listener to “fill in the blanks” and understand the spoken message. The use of these additional information sources amounts to an increased allocation of cognitive resources, which has also been referred to as increased “listening effort” (Downs, 1982; Hick & Tharpe, 2002; McCoy et al., 2005).

Research suggests that the increased cognitive effort required for hearing-impaired individuals to understand speech may lead to subjective reports of mental fatigue (Hetu et al., 1988; Ringdahl & Grimby, 2000; Kramer et al., 2006). This may be of particular concern to elderly people and those with cognitive, memory or other sensory deficits. The increased listening effort caused by hearing loss is associated with self-reports of stress, tension and fatigue (Copithorne 2006; Edwards 2007). In a study of factory workers, Hetu et al. (1988) reported that individuals with difficulty hearing at work needed increased attention, concentration and effort, leading to increased stress and fatigue. It is reasonable to conclude that listening effort as studied in the laboratory should be linked to subjective associations of hearing loss with mental fatigue, but the relationship is not clear. Dr. Hornsby points out that laboratory studies typically evaluate short-term changes in resource allocation as listening ease is manipulated in the experimental task. However, perceived mental fatigue is more likely to result from sustained listening demands over a longer period of time, e.g., a work day or social engagement lasting several hours (Hetu et al., 1988; Kramer et al., 2006).

The purpose of Dr. Hornsby’s study was to determine if hearing aids, with and without advanced features like directionality and noise reduction, reduce listening effort and subsequent susceptibility to mental fatigue. He also investigated the relationship between objective measures of speech discrimination and listening effort in the laboratory with subjective self-reports of mental fatigue.

Sixteen adult subjects participated in the study. All had bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Twelve subjects were employed full-time and reported being communicatively active about 65% of the time during the day. The remaining subjects were not employed but reported being communicatively active about 61% of the day. Twelve subjects were bilateral hearing aid users and four subjects were non-users. Subjects were screened to rule out cognitive dysfunction. All participants were fitted with bilateral behind-the-ear hearing aids with slim tubes and dome ear tips.  Hearing aids were programmed in basic and advanced modes. In basic mode, the microphones were omnidirectional and all advanced features except feedback suppression were turned off. In advanced mode, the hearing aids were set to manufacturer’s defaults with automatically adaptive directionality, noise reduction, reverberation reduction and wind noise reduction. All subjects wore the study hearing aids for at least 1-2 weeks before the experimental sessions began.

For the objective measurements of listening effort, subjects completed a word recognition in noise task paired with an auditory word recall task and a measure of visual reaction time.  Subjects heard random sets of 8 to 12 monosyllabic words preceded by the carrier phrase, “Say the word…” They were asked to repeat the words aloud and the percentage of correct responses was scored. In addition, subjects were asked to remember the last 5 words of each list. The end of the list was indicated by the word “STOP” on a screen in front of the speaker. Subjects were instructed to press a button as quickly as possible when the visual prompt appeared. Because the lists varied from 8 to 12 items, subjects never knew when to expect the visual prompt.  To control for variability in motor function, visual reaction time was measured alone in a separate session, during which subjects were instructed to simply ignore the speech and noise.

Subjective ratings of listening effort and fatigue were obtained with a five-item scale, administered prior to the experimental sessions. Three questions were adapted from the Speech Spatial and Qualities of Hearing Questionnaire (SSQ: Gatehouse & Noble, 2004) and the remaining items were formulated specifically for the study. Questions were phrased to elicit responses related to that particular day (“Did you have to put in a lot of effort to hear what was being said in conversation today?”, “How mentally/physically drained are you right now?”).  The final two questions were administered before and after the dual-task session and measured changes in attention and fatigue due to participation in the experimental tasks.

The word recognition in noise test yielded significantly better results in both aided conditions than in the unaided condition, though there was no difference between the basic and advanced aided conditions. The differences between unaided and aided scores varied considerably, suggesting that listening effort for individual subjects varied across conditions. Unaided word recall was significantly poorer than basic or advanced aided performance. There was a small but significant difference between the two aided conditions, with advanced settings outperforming basic settings, though in follow-up planned comparison tests the aided vs. unaided difference was maintained while the difference between the two aided conditions was no longer significant.

The reaction time measurement also assessed listening effort, or the cognitive resources required for the word recognition test. Reaction times were analyzed according to listening condition as well as block, comparing the first three trials (initial block) to the last three trials (final block); increases in reaction time across blocks represented the effect of task-related fatigue. Analysis by listening condition showed that reaction times were longer in the unaided condition than in the advanced aided condition, but not longer than in the basic aided condition. In other words, subjects required more time to react to the visual stimulus in the unaided condition than they did in the advanced aided condition, and there was no significant difference between the two aided conditions. There was a significant main effect for block: reaction times increased over the duration of the task. There was no interaction between listening condition and block; changes in performance over time were consistent across unaided and aided conditions.
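The block analysis amounts to comparing mean reaction times in the first three and last three trials of a session. A minimal sketch of that comparison, using invented reaction times rather than the study's data:

```python
# Minimal sketch of the initial-vs-final block comparison described
# above, using invented reaction times in milliseconds. A positive
# difference (slower responses in the final block) is read as
# task-related fatigue.
from statistics import mean

trial_rts = [412, 405, 431, 428, 440, 437, 455, 449, 462]

initial_block = trial_rts[:3]   # first three trials
final_block = trial_rts[-3:]    # last three trials

fatigue_effect = mean(final_block) - mean(initial_block)
print(round(fatigue_effect, 1))  # positive value: slower over time
```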

One purpose of the study was to investigate the effect of hearing aid use on mental fatigue. Interestingly, comparison of initial and final blocks indicated that word recognition scores improved by about 1-2% over time, but this improvement did not vary across listening conditions. There was no decrease in word recall performance over time, nor did changes in performance over time vary significantly across listening conditions. Reaction time, however, did increase over time in all conditions, indicating a shift in cognitive resources away from the reaction time task and toward the primary word recognition task. Though the effect of hearing aid use was not significant, a trend suggested that fewer aided listeners showed increased reaction times.

The questionnaires administered before the session probed perceived effort and fatigue throughout the day, whereas the questions administered immediately before and after the task probed focus, attention and mental fatigue related to the test session itself. In all listening conditions there was a significant increase in mental fatigue and difficulty maintaining attention after completion of the test session. A non-significant trend suggested some difference between unaided and aided conditions.

To identify other factors that may have contributed to variability, correlations were calculated between the subjective and objective measures and age, pure tone average, high-frequency pure tone average, unaided word recognition score, SNR during testing, employment status and self-rated percentage of daily communicative activity. None of the correlations were significant, indicating that none of these factors contributed substantially to the variability observed in the study.
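This kind of correlational screen can be illustrated in a few lines of Python. The factors and outcome below are randomly generated stand-ins, not the study’s data; the point is simply that each candidate factor is tested against an outcome measure and judged by its p-value.

```python
# Hypothetical correlational screen; values are randomly generated stand-ins.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 16  # subjects
factors = {
    "age":               rng.uniform(50, 80, n),
    "pure_tone_average": rng.uniform(30, 60, n),
    "unaided_word_rec":  rng.uniform(40, 90, n),
}
outcome = rng.uniform(0, 30, n)  # e.g., an aided-minus-unaided difference score

for name, values in factors.items():
    r, p = pearsonr(values, outcome)  # p > 0.05: factor explains little variance
    print(f"{name:20s} r={r:+.2f}  p={p:.3f}")
```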

Cognitive resource allocation is often studied with dual-task paradigms like the one used in this study. Decrements in performance on the secondary task indicate a shift in cognitive resources to the primary task. Presumably, factors that increase difficulty in the primary task will increase the allocation of resources to it.  In these experiments, the primary task was a word recognition test and the secondary tasks were word recall and reaction time measurements. Improved word recall and quicker reaction times in aided conditions indicate that the use of hearing aids made the primary word recognition task easier, allowing listeners to allocate more cognitive resources to the secondary tasks. Furthermore, reaction times tended to increase less over time in aided conditions than in unaided conditions.  These findings suggest that decreased listening effort with hearing aid use may have made listeners less susceptible to fatigue as the dual-task session progressed.

Though subjective reports in this study showed a general trend toward reduced listening effort and concentration demands in aided conditions, there was not a significant improvement with hearing aid use. This contrasts with previous work showing reductions in subjective listening effort with the use of hearing aids (Humes et al., 1999; Hallgren et al., 2005; Noble & Gatehouse, 2006). The author notes that auditory demands vary widely and that participants were asked to rate their effort and fatigue based on “today”; this approach did not assess perceptions of sustained listening effort over a longer period of time and may not have detected subtle differences among subjects.  For instance, working in a quiet office environment may not highlight the benefit of hearing aids or the difference between an omnidirectional and a directional microphone program, simply because the acoustic environment does not trigger the advanced features often enough. In contrast, working in a school or restaurant might show a more noticeable difference between unaided listening, basic amplification and advanced signal processing. Though subjects reported being communicatively active for about the same proportion of the day, this inquiry did not account for sustained listening effort over long periods of time or for varying work and social environments. These differences would likely affect overall listening effort and fatigue, as well as the value of advanced hearing aid features.

Clinical observations support the notion that hearing aid use can reduce listening effort and fatigue.  Prior to hearing aid use, hearing-impaired patients often report feeling exhausted from trying to keep up with social interactions or workplace demands. After receiving hearing aids, patients commonly report being more engaged, more able to participate in conversation and less drained at the end of the day. Though previous reports have supported the value of amplification for reducing listening effort, Hornsby’s study is the first to provide experimental evidence that hearing aid use may reduce mental fatigue.

These findings have important implications for all hearing aid users, but may have particular importance for working individuals with hearing loss as well as elderly hearing-impaired individuals.  It is important for any working person to maintain a high level of job performance and to establish their value at work. Individuals with hearing loss face additional challenges in this regard and often take pains to prove that their hearing loss is not adversely affecting their work.  Studies of workplace productivity underscore the importance of reducing distractions for maintaining focus, reducing stress and persisting at difficult tasks (Clements-Croome, 2000; Hua et al., 2011). Studies indicating that hearing aids reduce listening effort and fatigue, presumably by improving audibility and reducing the potential distraction of competing sounds, should provide additional encouragement for employed hearing-impaired individuals to pursue hearing aids.

 

References

Baldwin, C.L. & Ash, I.K. (2011). Impact of sensory acuity on auditory working memory span in young and older adults. Psychology and Aging 26, 85-91.

Bentler, R.A., Wu, Y. & Kettel, J. (2008). Digital noise reduction: outcomes from laboratory and field studies. International Journal of Audiology 47, 447-460.

Clements-Croome, D. (2000). Creating the productive workplace. London: E & FN Spon.

Copithorne, D. (2006). The fatigue factor: How I learned to love power naps, meditation and other tricks to cope with hearing-loss exhaustion. [Healthy Hearing Website, August 21, 2006].

Downs, M. (1982). Effects of hearing aid use on speech discrimination and listening effort. Journal of Speech and Hearing Disorders 47, 189-193.

Edwards, B. (2007). The future of hearing aid technology. Trends in Amplification 11, 31-45.

Gatehouse, S. & Noble, W. (2004). The Speech, Spatial and Qualities of Hearing Scale (SSQ). International Journal of Audiology 43, 85-99.

Hallgren, M., Larsby, B. & Lyxell, B. (2005). Speech understanding in quiet and noise, with and without hearing aids. International Journal of Audiology 44, 574-583.

Hetu, R., Riverin, L. & Lalande, N. (1988). Qualitative analysis of the handicap associated with occupational hearing loss. British Journal of Audiology 22, 251-264.

Hick, C.B. & Tharpe, A.M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. Journal of Speech, Language and Hearing Research 45, 573-584.

Hua, Y., Loftness, V., Heerwagen, J. & Powell, K. (2011). Relationship between workplace spatial settings and occupant-perceived support for collaboration. Environment and Behavior 43, 807-826.

Humes, L.E., Christensen, L. & Thomas, T. (1999). A comparison of the aided performance and benefit provided by a linear and a two-channel wide dynamic range compression hearing aid. Journal of Speech, Language and Hearing Research 42, 65-79.

Kramer, S.E., Kapteyn, T.S. & Houtgast, T. (2006). Occupational performance: comparing normal-hearing and hearing-impaired employees using the Amsterdam Checklist for Hearing and Work. International Journal of Audiology 45, 503-512.

McCoy, S.L., Tun, P.A. & Cox, L.C. (2005). Hearing loss and perceptual effort: Downstream effects on older adults’ memory for speech. Quarterly Journal of Experimental Psychology A 58, 22-33.

Noble, W. & Gatehouse, S. (2006). Effects of bilateral versus unilateral hearing aid fitting on abilities measured by the SSQ. International Journal of Audiology 45, 172-181.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W. (2011). Visual cues and listening effort: Individual variability. Journal of Speech, Language and Hearing Research 54, 1416-1430.

Picou, E.M., Ricketts, T.A. & Hornsby, B.W. (2013). The effect of individual variability on listening effort in unaided and aided conditions. Ear and Hearing (in press).

Ringdahl, A. & Grimby, A. (2000). Severe-profound hearing impairment and health related quality of life among post-lingual deafened Swedish adults. Scandinavian Audiology 29, 266-275.

Sarampalis, A., Kalluri, S. & Edwards, B. (2009). Objective measures of listening effort: Effects of background noise and noise reduction. Journal of Speech, Language and Hearing Research 52, 1230-1240.

Valente, M. & Mispagel, K. (2008) Unaided and aided performance with a directional open-fit hearing aid. International Journal of Audiology 47(6), 329-336.

Evidence for the Value of Real-Ear Measurement

Abrams, H.B., Chisolm, T.H., McManus, M., & McArdle, R. (2012). Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit. Journal of the American Academy of Audiology, 23(10), 768-778.

Audiology best practice guidelines state that probe microphone verification measures should be done to ensure that hearing aid gain and output characteristics meet prescribed targets for the individual. In the American Academy of Audiology’s Guidelines for the Audiologic Management of Adult Hearing Impairment, an expert task force recommends that “prescribed gain from a validated prescriptive method should be verified using a probe microphone approach that is referenced to ear canal SPL” (Valente et al., 2006). Similarly, the Academy’s Pediatric Amplification Protocol (AAA, 2003) states that hearing aid output characteristics should be verified with real-ear measures, or with real-ear-to-coupler-difference (RECD) calculations when lengthy real-ear measurements are not possible.

In contrast to these recommendations, the majority of hearing aid providers do not routinely conduct real-ear verification measures. In a survey of audiologists and hearing instrument specialists, Mueller and Picou (2010) found that respondents used real-ear verification only about 40% of the time, and Bamford et al. (2001) reported that only about 20% of individuals fitting pediatric patients used real-ear measures. The reasons most often cited for skipping probe microphone measures involve financial, time, or space constraints.

When probe microphone measures are not conducted, other verification techniques such as aided word recognition may be used, but these are not likely to provide reliable information (Thornton & Raffin, 1978).  Or, verification may not be attempted at all, with fitting parameters chosen based on the manufacturer’s initial-fit specifications. Although most fitting software allows for entry of age, experience and acoustic information such as canal length and venting characteristics, its predictions are based on average data and cannot account for individual ear canal effects.

Numerous studies have shown that initial-fit algorithms often deviate significantly from prescribed targets, usually underestimating required gain, especially in the high frequencies. Hawkins & Cook (2003) found that simulated fittings from one manufacturer’s initial-fit algorithm overestimated the coupler gain and in-situ response by as much as 20dB, especially in the low and high frequencies.  Bentler (2004) compared the 2cc coupler response from six different hearing aids programmed with initial-fit algorithms and found that the responses differed for each manufacturer and deviated from prescriptive targets by as much as 15dB, usually falling below prescribed targets. Similarly, Bretz (2006) studied three manufacturers’ pediatric first-fit algorithms and found that the average output varied by about 20dB and that initial-fit gain values fell below both NAL-NL1 and DSL[i/o] targets. This is of particular concern because pediatric patients may be less able than adults to provide subjective responses to hearing aid settings, rendering objective measures such as real-ear verification even more important.
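As a worked illustration of what “deviating from target” means in practice, the sketch below compares a hypothetical first-fit gain response against prescriptive targets frequency by frequency. The gain values are invented; in a real fitting, targets come from a validated prescriptive method and the measured response from probe microphone equipment.

```python
# Hypothetical comparison of a first-fit response to prescriptive targets (dB).
frequencies_hz = [250, 500, 1000, 2000, 4000]
target_gain    = [12, 18, 25, 32, 35]   # prescribed real-ear insertion gain
first_fit_gain = [10, 15, 21, 24, 18]   # manufacturer initial-fit estimate

for f, target, fitted in zip(frequencies_hz, target_gain, first_fit_gain):
    deviation = fitted - target
    flag = "  <-- well below target" if deviation <= -5 else ""
    print(f"{f:>5} Hz: target {target} dB, first fit {fitted} dB, "
          f"deviation {deviation:+d} dB{flag}")
```

Run on these invented numbers, the shortfall grows toward the high frequencies, which is the pattern the studies above report for initial-fit algorithms.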

These studies and others illuminate the potential difference between first-fit hearing aid settings and those verified by objective measures, but it is not well known how this difference affects the user’s perceived benefit.  Some early reports using linear amplification targets indicated that verification did not predict perceived benefit (Nerbonne et al., 1995; Weinstein et al., 1995), but more recent work indicates that adults fitted to DSL v5.0a targets demonstrated benefit as measured by the Client Oriented Scale of Improvement (COSI; Dillon & Ginis, 1997). A recent survey by Kochkin et al. (2010) found that patients whose fittings were verified with a comprehensive protocol including real-ear verification reported increased hearing aid usage, benefit and satisfaction. Furthermore, these respondents were more likely to recommend their hearing care professional to friends and family than were the respondents who were not fitted with real-ear verification.

The purpose of the study discussed here was to determine if perceived hearing aid benefit differed based on whether the user was fitted with an initial-fit algorithm only or with modified settings based on probe microphone verification. Twenty-two experienced hearing aid users with mild to moderately-severe hearing loss participated in the study. All were fitted with binaural hearing aids, though a variety of hearing aid styles and manufacturers were represented.  Probe microphone measurements were conducted on all subjects, but those in the initial-fit group did not receive adjustments based on the verification measures.

Perceived hearing aid benefit was measured using the Abbreviated Profile of Hearing Aid Benefit (APHAB, Cox & Alexander, 1995). The APHAB consists of 24 items in four subscales: ease of communication (EC), reverberation (RV), background noise (BN) and aversiveness of sounds (AV).  In addition to subscale scores, an average global score can be calculated, as well as a benefit score which represents the difference between unaided and aided responses.
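The scoring logic is straightforward and can be sketched as follows. The subscale scores below are invented, and we assume the common convention that the global score averages the EC, RV and BN subscales; benefit is computed as unaided minus aided, so positive values mean fewer reported problems with the hearing aids.

```python
# Hypothetical APHAB scores (percentage of problems; lower is better).
from statistics import mean

unaided = {"EC": 55.0, "RV": 60.0, "BN": 70.0, "AV": 20.0}
aided   = {"EC": 30.0, "RV": 42.0, "BN": 48.0, "AV": 28.0}

# Benefit: unaided minus aided problems; positive values favor the fitting.
benefit = {scale: unaided[scale] - aided[scale] for scale in unaided}
# Assumed convention: the global score averages EC, RV and BN (AV excluded).
global_benefit = mean(benefit[s] for s in ("EC", "RV", "BN"))

print(benefit)                         # note AV benefit can be negative
print(f"global benefit: {global_benefit:.1f}")
```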

Prior to being fitted with their hearing aids, participants completed the APHAB questionnaire. Because all were experienced hearing aid users, they were asked to base their answers on their experiences without amplification.  Hearing aid fittings and probe microphone verification were then conducted on all subjects, but half of the subjects received adjustments to match prescribed targets and half of the subjects maintained their first-fit settings. Efforts were made to ensure that subjects were not aware of the difference between the initial-fit and verified fitting methods. The only adjustments that subjects in the initial-fit group received were based on issues that could affect their willingness to wear the hearing aids, such as loudness discomfort or feedback.

One month following the first appointment, subjects returned to the clinic and were administered the APHAB again. They were given their initial “unaided” APHAB responses to use as a comparison. After completion of the APHAB, the subjects who had been fitted with the initial-fit algorithms were switched to verified fittings and those who had been fitted to prescribed targets were switched to the manufacturer’s initial-fit settings. All subjects were re-tested with probe microphone measures and those with loudness or feedback complaints received minor adjustments.

One month after the second appointment, subjects returned to complete the APHAB and were again allowed to use their original APHAB responses as a basis for comparison. They were not allowed to view their responses to the APHAB that was administered after the first hearing aid trial. Participants were also asked to indicate which fitting method (Session 1 or Session 2) they preferred and would want permanently programmed into their hearing aids.

Analysis of the probe microphone measurements indicated, not surprisingly, that the verified fittings were more closely matched to prescriptive targets than the fittings based on the first-fit algorithms, even after minor adjustments based on comfort and user preferences.  For three of the APHAB subscales – ease of communication, reverberation and background noise – scores obtained with verified fittings were superior to those obtained with the initial-fit approach and the main effect of fitting approach was found to be statistically significant. There was no interaction between fitting approach and APHAB subscale, indicating that the better outcomes obtained with verified fittings were not related to any specific listening environment.

When asked to indicate their preferred fitting method, 7 of the 22 participants selected the initial-fit approach, whereas more than twice as many subjects, 15 out of 22, selected the verified fitting. For all but 5 subjects, the global difference score on the APHAB predicted their preferred fitting method, and the relationship between global score and final preference was statistically significant.

The findings of this study and of related reports raise some philosophical and practical considerations for audiologists. One of our primary goals is to provide effective rehabilitation for hearing-impaired patients, and this is most often accomplished by fitting and dispensing quality hearing instruments. Clinical and research data repeatedly indicate the importance of probe microphone verification. It serves the best interest of our patients to offer them the most effective fitting approach, so it follows that probe microphone verification measures should be a routine, essential part of our clinical protocol.

The reports that only a minority of hearing aid fittings are verified with real-ear measures indicate that many clinicians are not following recommended best practices. Indeed, Palmer (2009) points out that failure to follow best practice guidelines is a departure from the ethical standards of professional competence. Failure to provide the recommended objective verification for hearing aid fittings runs counter to our clinical goals and, as Palmer suggests, may even be damaging to our “collective reputation” as a profession.

Philosophical arguments notwithstanding, there are also practical reasons to incorporate real-ear measures into the fitting protocol. In the MarkeTrak VIII survey, Kochkin et al. (2010) reported that hearing aid users who received probe microphone verification testing as part of a detailed fitting protocol were more satisfied with their hearing instruments and were more likely to refer their clinician to friends. In the current field of hearing aid service provision, it is important for audiologists to consider ways to meaningfully distinguish themselves from online, mail-order and big-box retail competitors. Hearing aid users are becoming well-informed consumers, and it is clear that establishing a base of satisfied patients who feel they have received comprehensive, competent care is crucial for growing a private practice. Probe microphone verification is a brief yet effective part of ensuring successful hearing aid fittings, and it benefits our patients and our profession to provide this essential service.

References

Abrams, H.B., Chisolm, T.H., McManus, M., & McArdle, R. (2012). Initial-fit approach versus verified prescription: Comparing self-perceived hearing aid benefit. Journal of the American Academy of Audiology, 23(10), 768-778.

American Academy of Audiology (2003). Pediatric Amplification Protocol. www.audiology.org, (accessed 3-3-13).

Bamford, J., Beresford, D. & Mencher, G. (2001). Provision and fitting of new technology hearing aids: implications from a survey of some “good practice services” in UK and USA. In: Seewald, R.C., Gravel, J.S., eds. A Sound Foundation Through Early Amplification: Proceedings of an International Conference. Stafa, Switzerland: Phonak AG, 213–219.

Bentler, R. (2004). Advanced hearing aid features: Do they work? Paper presented at the convention of the American Speech-Language-Hearing Association, Washington, D.C.

Bretz, K. (2006). A comparison of three hearing aid manufacturers’ recommended first fit to two generic prescriptive targets with the pediatric population. Independent Studies and Capstones, Paper 189. Program in Audiology and Communication Sciences, Washington University School of Medicine. http://digitalcommons.wustl.edu/pacs_capstones/189.

Cox, R. & Alexander, G. (1995). The abbreviated profile of hearing aid benefit. Ear and Hearing 16, 176-183.

Dillon, H. & Ginis, J. (1997). Client Oriented Scale of Improvement (COSI) and its relationship to several other measures of benefit and satisfaction provided by hearing aids. Journal of the American Academy of Audiology 8: 27-43.

Hawkins, D. & Cook, J. (2003). Hearing aid software predictive gain values: How accurate are they? Hearing Journal 56, 26-34.

Kochkin, S., Beck, D., & Christensen, L. (2010). MarkeTrak VIII: The impact of the hearing health care professional on hearing aid user success. Hearing Review 17, 12-34.

Mueller, H., & Picou, E. (2010). Survey examines popularity of real-ear probe-microphone measures. Hearing Journal 63, 27-32.

Nerbonne, M., Christman, W. & Fleschner, C. (1995). Comparing objective and subjective measures of hearing aid benefit. Poster presentation at the annual convention of the American Academy of Audiology, Dallas, TX.

Palmer, C.V. (2009). Best practice: it’s a matter of ethics. Audiology Today, Sept-Oct, 31-35.

Thornton, A. & Raffin, M. (1978) Speech-discrimination scores modeled as a binomial variable. Journal of Speech and Hearing Research 21, 507–518.

Valente, M., Abrams, H., Benson, D., Chisolm, T., Citron, D., Hampton, D., Loavenbruck, A., Ricketts, T., Solodar, H. & Sweetow, R. (2006). Guidelines for the Audiologic Management of Adult Hearing Impairment. Audiology Today 18.

Weinstein, B., Newman, C. & Montano, J. (1995). A multidimensional analysis of hearing aid benefit. Paper presented at the 1st Biennial Hearing Aid Research & Development Conference, Bethesda, MD.